Implementación de redes tipo transformer en la selección estratégica de perfiles laborales a nivel empresarial
In this project, a RAG (Retrieval-Augmented Generation) model is developed for application in the context of recruitment and personnel selection (limited to areas related to Electronic Engineering). The starting point is the creation of a document database composed of PDF files, followed by a preprocessing phase based on text cleaning and tokenization, after which the collection is converted into a vectorized database. The data is prepared for model training through chunking and indexing operations, enabling the inclusion of a Large Language Model (LLM) based on a transformer architecture. This model, together with vector search mechanisms and similarity learning, provides language generation and information retrieval, respectively. By integrating each of these components, the RAG model is constructed; the aim is then to find the best parameters for the given conditions, evaluating the performance obtained in each case in search of the best result.
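The pipeline described above (PDF ingestion, text cleaning, chunking, embedding, vector indexing) maps naturally onto the LangChain tooling listed in this record's keywords. Below is a minimal, hypothetical sketch of the ingestion and indexing stage; the file names, chunk sizes, and the MiniLM embedding model are placeholder assumptions, not the thesis's actual configuration. A companion retrieval sketch follows the metadata table at the end of this record.

```python
# Hypothetical sketch of the ingestion/indexing stage, assuming LangChain
# with a FAISS vector store and a MiniLM sentence-embedding model. File
# names, chunk sizes, and the embedding model are placeholder choices.
import re

from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter

def clean_text(text: str) -> str:
    """Preprocessing: strip control characters and collapse whitespace."""
    text = re.sub(r"[\x00-\x1f]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

# 1. Document database: load the PDF files (hypothetical CV file names).
pdf_paths = ["cv_candidate_01.pdf", "cv_candidate_02.pdf"]
pages = []
for path in pdf_paths:
    for page in PyPDFLoader(path).load():
        page.page_content = clean_text(page.page_content)
        pages.append(page)

# 2. Chunking: split the cleaned pages into overlapping chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)
chunks = splitter.split_documents(pages)

# 3. Indexing: embed every chunk and persist the vectorized database.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vector_db = FAISS.from_documents(chunks, embeddings)
vector_db.save_local("profiles_index")
```

The chunk size and overlap shown here are exactly the kind of parameters the abstract says were swept and evaluated.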
- Authors:
- Tovar Sánchez, Juan Sebastián; Castro Castellanos, Cristian Camilo
- Resource type:
- Undergraduate degree project (trabajo de grado de pregrado)
- Publication date:
- 2024
- Institution:
- Universidad Distrital Francisco José de Caldas
- Repository:
- RIUD: repositorio U. Distrital
- Language:
- Spanish (spa)
- OAI Identifier:
- oai:repository.udistrital.edu.co:11349/93707
- Online access:
- http://hdl.handle.net/11349/93707
- Keywords:
- LangChain
RAG
LlamaIndex
NLP
Artificial intelligence (Inteligencia artificial)
Computational intelligence (Inteligencia Computacional)
Natural language processing (Procesamiento de Lenguaje Natural)
Electronic Engineering -- Academic theses and dissertations (Ingeniería Electrónica -- Tesis y disertaciones académicas)
- Rights / License:
- Open access (full text)
id | UDISTRITA2_41a2e13c26e817277fe88bb07a52ef93
---|---
oai_identifier_str | oai:repository.udistrital.edu.co:11349/93707
network_acronym_str | UDISTRITA2
network_name_str | RIUD: repositorio U. Distrital
repository_id_str |
dc.title.none.fl_str_mv | Implementación de redes tipo transformer en la selección estratégica de perfiles laborales a nivel empresarial
dc.title.titleenglish.none.fl_str_mv | Implementation of transformer-type networks in the strategic selection of job profiles at the corporate level
dc.creator.fl_str_mv | Tovar Sánchez, Juan Sebastián; Castro Castellanos, Cristian Camilo
dc.contributor.advisor.none.fl_str_mv | Ferro Escobar, Roberto
dc.contributor.author.none.fl_str_mv | Tovar Sánchez, Juan Sebastián; Castro Castellanos, Cristian Camilo
dc.contributor.orcid.none.fl_str_mv | Ferro Escobar, Roberto [0000-0002-8978-538X]
dc.subject.none.fl_str_mv | LangChain; RAG; LlamaIndex; NLP; Inteligencia artificial
dc.subject.lemb.none.fl_str_mv | Inteligencia Computacional; Procesamiento de Lenguaje Natural; Ingeniería Electrónica -- Tesis y disertaciones académicas
dc.subject.keyword.none.fl_str_mv | LangChain; RAG; LlamaIndex; NLP; Artificial intelligence
description | In this project, a RAG (Retrieval-Augmented Generation) model is developed for application in the context of recruitment and personnel selection (limited to areas related to Electronic Engineering). The starting point is the creation of a document database composed of PDF files, followed by a preprocessing phase based on text cleaning and tokenization, after which the collection is converted into a vectorized database. The data is prepared for model training through chunking and indexing operations, enabling the inclusion of a Large Language Model (LLM) based on a transformer architecture. This model, together with vector search mechanisms and similarity learning, provides language generation and information retrieval, respectively. By integrating each of these components, the RAG model is constructed; the aim is then to find the best parameters for the given conditions, evaluating the performance obtained in each case in search of the best result.
publishDate | 2024
dc.date.created.none.fl_str_mv | 2024-08-13
dc.date.accessioned.none.fl_str_mv | 2025-03-16T20:39:30Z
dc.date.available.none.fl_str_mv | 2025-03-16T20:39:30Z
dc.type.none.fl_str_mv | bachelorThesis
dc.type.degree.none.fl_str_mv | Monografía
dc.type.driver.none.fl_str_mv | info:eu-repo/semantics/bachelorThesis
dc.type.coar.none.fl_str_mv | http://purl.org/coar/resource_type/c_7a1f
dc.identifier.uri.none.fl_str_mv | http://hdl.handle.net/11349/93707
dc.language.iso.none.fl_str_mv | spa
dc.rights.coar.fl_str_mv | http://purl.org/coar/access_right/c_abf2
dc.rights.acceso.none.fl_str_mv | Abierto (Texto Completo) [open access, full text]
dc.format.mimetype.none.fl_str_mv | pdf
dc.publisher.none.fl_str_mv | Universidad Distrital Francisco José de Caldas
bitstream.url.fl_str_mv | https://repository.udistrital.edu.co/bitstreams/dab3aa91-b2e3-422e-b5a9-06506f77af45/download; https://repository.udistrital.edu.co/bitstreams/0cdc239c-3120-4bdd-887b-48180cadf61d/download; https://repository.udistrital.edu.co/bitstreams/2b08b575-9953-4f00-a3de-3919888360f9/download; https://repository.udistrital.edu.co/bitstreams/5ea0a4d1-4e6e-4dca-9627-ad96e3d77d37/download; https://repository.udistrital.edu.co/bitstreams/38e57b55-102e-4227-9065-99ef1c9a1e82/download; https://repository.udistrital.edu.co/bitstreams/13daa428-acc3-4081-95ac-dee93e142ad9/download
bitstream.checksum.fl_str_mv | 997daf6c648c962d566d7b082dac908d; e84e12149471b6c63654ce89f7ed4786; 766949de9f8b508ce4e69d0ec3ba0f5c; 47a588cb90be081791ac8710b63051f1; cc386fb2eb0637fb9e081a9c1a3289c0; f25d61c7a1a5a904c523d1a361e83564
bitstream.checksumAlgorithm.fl_str_mv | MD5 (all six bitstreams)
repository.name.fl_str_mv | Repositorio Universidad Distrital
repository.mail.fl_str_mv | repositorio@udistrital.edu.co
_version_ | 1828165180581740544
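As a companion to the ingestion sketch near the top of this record, the following hedged example illustrates the retrieval and parameter-evaluation steps the abstract describes: a vector similarity search over the persisted index, plus a toy sweep over the retrieval depth k. The index path, the queries, and the labelled (query, expected source) pairs are illustrative assumptions, not data from the thesis.

```python
# Hedged sketch of the retrieval/evaluation stage; continues the earlier
# ingestion example. Index path, queries, and labels are assumptions.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
vector_db = FAISS.load_local(
    "profiles_index", embeddings, allow_dangerous_deserialization=True
)

# Vector search: return the chunks most similar to the recruiter's query.
# FAISS reports L2 distance here, so a lower score means a closer match.
query = "experience with embedded systems and PCB design"
for doc, score in vector_db.similarity_search_with_score(query, k=4):
    print(f"{score:.3f}  {doc.metadata.get('source')}  {doc.page_content[:80]}")

# Toy parameter evaluation: hit rate at several retrieval depths k, using
# hand-labelled (query, expected source document) pairs.
labelled = [("embedded systems experience", "cv_candidate_01.pdf")]
for k in (1, 2, 4, 8):
    hits = sum(
        any(d.metadata.get("source") == source
            for d in vector_db.similarity_search(q, k=k))
        for q, source in labelled
    )
    print(f"k={k}: hit rate {hits / len(labelled):.2f}")
```

In a full RAG loop the retrieved chunks would be packed into the transformer LLM's prompt for generation; hit rate at k is only one of the retrieval metrics such a parameter search might track.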