Control de computadoras con 9 gestos de las manos usando una cámara web estándar
- Authors:
- Barragán, Osimani
- Resource type:
- Journal article
- Publication date:
- 2025
- Institution:
- Universidad de Cundinamarca
- Repository:
- Repositorio UdeC
- Language:
- OAI Identifier:
- oai:repositorio.cun.edu.co:cun/10874
- Online access:
- https://repositorio.cun.edu.co/handle/cun/10874
https://doi.org/10.52143/2346139X.1066
- Keywords:
- Visión artificial
Reconocimiento de gestos de las manos
Redes neuronales artificiales
Nube de puntos
Computer vision
Hand gesture recognition
Artificial neural networks
Point cloud
- Rights:
- openAccess
- License:
- #ashtag - 2025
| Field | Value |
|---|---|
| id | RUCUN2_bd571b96df0eea7da29d5717da0584cb |
| oai_identifier_str | oai:repositorio.cun.edu.co:cun/10874 |
| network_acronym_str | RUCUN2 |
| network_name_str | Repositorio UdeC |
| repository_id_str | |
| dc.title.spa.fl_str_mv | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| title | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| spellingShingle | Control de computadoras con 9 gestos de las manos usando una cámara web estándar Visión artificial Reconocimiento de gestos de las manos Redes neuronales artificiales Nube de puntos Computer vision Hand gesture recognition Artificial neural networks Point cloud |
| title_short | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| title_full | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| title_fullStr | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| title_full_unstemmed | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| title_sort | Control de computadoras con 9 gestos de las manos usando una cámara web estándar |
| dc.creator.fl_str_mv | Barragán, Osimani |
| dc.contributor.author.spa.fl_str_mv | Barragán, Osimani |
| dc.subject.none.fl_str_mv | Visión artificial Reconocimiento de gestos de las manos Redes neuronales artificiales Nube de puntos Computer vision Hand gesture recognition Artificial neural networks Point cloud |
| topic | Visión artificial Reconocimiento de gestos de las manos Redes neuronales artificiales Nube de puntos Computer vision Hand gesture recognition Artificial neural networks Point cloud |
| publishDate | 2025 |
| dc.date.issued.none.fl_str_mv | 2022-12-30 |
| dc.date.accessioned.none.fl_str_mv | 2022-12-30 00:00:00 2025-11-05T14:59:32Z |
| dc.date.available.none.fl_str_mv | 2022-12-30 00:00:00 |
| dc.type.spa.fl_str_mv | Artículo de revista |
| dc.type.coar.fl_str_mv | http://purl.org/coar/resource_type/c_2df8fbb1 |
| dc.type.coar.none.fl_str_mv | http://purl.org/coar/resource_type/c_6501 |
| dc.type.coarversion.none.fl_str_mv | http://purl.org/coar/version/c_970fb48d4fbd8a85 |
| dc.type.content.none.fl_str_mv | Text |
| dc.type.driver.none.fl_str_mv | info:eu-repo/semantics/article |
| dc.type.local.eng.fl_str_mv | Journal article |
| dc.type.version.none.fl_str_mv | info:eu-repo/semantics/publishedVersion |
| format | http://purl.org/coar/resource_type/c_6501 |
| status_str | publishedVersion |
| dc.identifier.uri.none.fl_str_mv | https://repositorio.cun.edu.co/handle/cun/10874 |
| dc.identifier.doi.none.fl_str_mv | 10.52143/2346139X.1066 |
| dc.identifier.eissn.none.fl_str_mv | 2346-139X |
| dc.identifier.url.none.fl_str_mv | https://doi.org/10.52143/2346139X.1066 |
| url | https://repositorio.cun.edu.co/handle/cun/10874 https://doi.org/10.52143/2346139X.1066 |
| identifier_str_mv | 10.52143/2346139X.1066 2346-139X |
| dc.language.iso.none.fl_str_mv | |
| language_invalid_str_mv | |
| dc.relation.bitstream.none.fl_str_mv | https://revistas.cun.edu.co/index.php/hashtag/article/download/1066/770 |
| dc.relation.citationedition.spa.fl_str_mv | Núm. 21, Año 2022: Revista Hashtag 2022B |
| dc.relation.citationissue.spa.fl_str_mv | 21 |
| dc.relation.citationvolume.spa.fl_str_mv | 2 |
| dc.relation.ispartofjournal.spa.fl_str_mv | #ashtag |
| dc.relation.references.none.fl_str_mv | Chhibber, N., Surale, H. B., Matulic, F., & Vogel, D. (2021). Typealike: Near-keyboard hand postures for expanded laptop interaction. ACM on Human-Computer Interaction, 1-20. Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of NAACL-HLT 2019, 4171–4186. Hu, F., He, P., Xu, S., Li, Y., & Zhang, C. (2020). FingerTrak: Continuous 3D hand pose tracking by deep learning hand silhouettes captured by miniature thermal cameras on wrist. ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4, 1-24. Hu, H., Zhao, W., Zhou, W., Wang, Y., & Li, H. (2021). SignBERT: Pre-training of hand-model-aware representation for sign language recognition. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 11067–11076. Rong, Y., Shiratori, T., & Joo, H. (2021). FrankMocap: Fast Monocular 3D Hand and Body Motion Capture by Regression and Integration. ICCV Workshop 2021. Kim, D. U., Kim, K. I., & Baek, S. (2021). End-to-end detection and pose estimation of two interacting hands. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 11169–11178. Kim, Y., An, S.-G., Lee, J., & Bae, S.-H. (2018). Agile 3D sketching with air scaffolding. Conference on Human Factors in Computing Systems CHI'18, 1-12. Lee, J. H., An, S., Kim, Y., & Bae, S. (2018). Projective windows: Bringing windows in space to the fingertip. Conference on Human Factors in Computing Systems CHI'18, 1-8. Liao, J., & Wang, H. (2019). Gestures as intrinsic creativity support: Understanding the usage and function of hand gestures in computer-mediated group brainstorming. ACM on Human-Computer Interaction, 1-16. Liu, D., Zhang, L., & Wu, Y. (2022). LD-ConGR: A large RGB-D video dataset for long-distance continuous gesture recognition. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3294–3302. Cao, Z., Radosavovic, I., Kanazawa, A., & Malik, J. (2021). Reconstructing Hand-Object Interactions in the Wild. International Conference on Computer Vision (ICCV). Matulic, F., Arakawa, R., Vogel, B., & Vogel, D. (2020). PenSight: Enhanced interaction with a pen-top camera. Conference on Human Factors in Computing Systems CHI'20, 1-14. Min, Y., Chai, X., Zhao, L., & Chen, X. (2019). FlickerNet: Adaptive 3D gesture recognition from sparse point clouds. The British Machine Vision Conference (BMVC). Min, Y., Zhang, Y., Chai, X., & Chen, X. (2020). An efficient PointLSTM for point clouds based gesture recognition. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5760–5769. Osimani, C., Piedra Fernández, J. A., & Ojeda Castello, J. J. (2023). Point Cloud Deep Learning Solution for Hand Gesture Recognition. International Journal of Interactive Multimedia and Artificial Intelligence. doi: http://dx.doi.org/10.9781/ijimai.2023.01.001 Pei, S., Chen, A., Lee, J., & Zhang, Y. (2022). Hand interfaces: Using hands to imitate objects in AR/VR for expressive interactions. Conference on Human Factors in Computing Systems CHI '22. Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). PointNet: Deep learning on point sets for 3D classification and segmentation. IEEE Conference on Computer Vision and Pattern Recognition, 652–660. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems. Weng, Y., Yu, C., Shi, Y., Zhao, Y., Yan, Y., & Shi, Y. (2021). FaceSight: Enabling hand-to-face gesture interaction on AR glasses with a downward-facing camera vision. CHI '21 Conference on Human Factors in Computing Systems, 1-14. Zhang, F., Bazarevsky, V., Vakunov, A., Tkachenka, A., Sung, G., Chang, C.-L., & Grundmann, M. (2020). MediaPipe Hands: On-device real-time hand tracking. Fourth Workshop on Computer Vision for AR/VR (CV4ARVR). Zhou, Q., Sykes, S., Fels, S., & Kin, K. (2020). GripMarks: Using hand grips to transform in-hand objects into mixed reality input. CHI '20: Conference on Human Factors in Computing Systems, 1–11. |
| dc.rights.none.fl_str_mv | #ashtag - 2025 |
| dc.rights.uri.none.fl_str_mv | https://creativecommons.org/licenses/by-nc-sa/4.0/ |
| dc.rights.accessrights.none.fl_str_mv | info:eu-repo/semantics/openAccess |
| dc.rights.coar.none.fl_str_mv | http://purl.org/coar/access_right/c_abf2 |
| rights_invalid_str_mv | #ashtag - 2025 https://creativecommons.org/licenses/by-nc-sa/4.0/ http://purl.org/coar/access_right/c_abf2 |
| eu_rights_str_mv | openAccess |
| dc.format.mimetype.none.fl_str_mv | application/vnd.openxmlformats-officedocument.wordprocessingml.document |
| dc.publisher.spa.fl_str_mv | Fondo Editorial CUN |
| dc.source.none.fl_str_mv | https://revistas.cun.edu.co/index.php/hashtag/article/view/1066 |
| institution | Universidad de Cundinamarca |
| bitstream.url.fl_str_mv | https://repositorio.cun.edu.co/bitstreams/fc74574f-5cc6-4077-bb5e-220cb1fb1589/download |
| bitstream.checksum.fl_str_mv | 5cf20c26609474e3da318b55849fbf35 |
| bitstream.checksumAlgorithm.fl_str_mv | MD5 |
| repository.name.fl_str_mv | Repositorio Digital Corporación Unificada Nacional de Educación Superior |
| repository.mail.fl_str_mv | bdigital@metabiblioteca.com |
| _version_ | 1849967514591690752 |
