An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability
Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications related to Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action may vary from one execution to the next because the individual alters their movement, which increases the inter/intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. The obtained results demonstrate how EHECCO represents and discriminates joint probability distributions as a kernel-based evaluation of input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players’ expertise and acted movement on a Tennis-Mocap database (also publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a coded metric from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class).
- Authors:
-
Valencia-Marín, Cristian Kaori
Velásquez-Martínez, Luisa Fernanda
Alvarez-Meza, Andrés Marino
Castellanos-Domínguez, Germán
Pulgarín Giraldo, Juan Diego
- Resource type:
- Research article
- Publication date:
- 2021
- Institution:
- Universidad Autónoma de Occidente
- Repository:
- RED: Repositorio Educativo Digital UAO
- Language:
- eng
- OAI Identifier:
- oai:red.uao.edu.co:10614/15904
- Online access:
- https://hdl.handle.net/10614/15904
https://doi.org/10.3390/s21134443
https://red.uao.edu.co/
- Keywords:
- Hilbert embedding
Joint distribution
Time series
Classification
Mocap data
- Rights
- openAccess
- License
- All rights reserved - MDPI, 2021
id |
REPOUAO2_3fdb2359a75087fa3194b6eaab9c5f10 |
---|---|
oai_identifier_str |
oai:red.uao.edu.co:10614/15904 |
network_acronym_str |
REPOUAO2 |
network_name_str |
RED: Repositorio Educativo Digital UAO |
repository_id_str |
|
dc.title.eng.fl_str_mv |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
title |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
spellingShingle |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability; Hilbert embedding; Joint distribution; Time series; Classification; Mocap data |
title_short |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
title_full |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
title_fullStr |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
title_full_unstemmed |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
title_sort |
An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability |
dc.creator.fl_str_mv |
Valencia-Marín, Cristian Kaori; Velásquez-Martínez, Luisa Fernanda; Alvarez-Meza, Andrés Marino; Castellanos-Domínguez, Germán; Pulgarín Giraldo, Juan Diego |
dc.contributor.author.none.fl_str_mv |
Valencia-Marín, Cristian Kaori; Velásquez-Martínez, Luisa Fernanda; Alvarez-Meza, Andrés Marino; Castellanos-Domínguez, Germán; Pulgarín Giraldo, Juan Diego |
dc.subject.proposal.eng.fl_str_mv |
Hilbert embedding; Joint distribution; Time series; Classification; Mocap data |
topic |
Hilbert embedding; Joint distribution; Time series; Classification; Mocap data |
description |
Motion capture (Mocap) data are widely used as time series to study human movement. Indeed, animation movies, video games, and biomechanical systems for rehabilitation are significant applications related to Mocap data. However, classifying multi-channel time series from Mocap requires coding the intrinsic dependencies (even nonlinear relationships) between human body joints. Furthermore, the same human action may vary from one execution to the next because the individual alters their movement, which increases the inter/intraclass variability. Here, we introduce an enhanced Hilbert embedding-based approach from a cross-covariance operator, termed EHECCO, to map the input Mocap time series to a tensor space built from both 3D skeletal joints and a principal component analysis-based projection. The obtained results demonstrate how EHECCO represents and discriminates joint probability distributions as a kernel-based evaluation of input time series within a tensor reproducing kernel Hilbert space (RKHS). Our approach achieves competitive classification results for style/subject and action recognition tasks on well-known publicly available databases. Moreover, EHECCO favors the interpretation of relevant anthropometric variables correlated with players’ expertise and acted movement on a Tennis-Mocap database (also publicly available with this work). Thereby, our EHECCO-based framework provides a unified representation (through the tensor RKHS) of the Mocap time series to compute linear correlations between a coded metric from joint distributions and player properties, i.e., age, body measurements, and sport movement (action class). |
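As a rough illustration of the kernel-embedding idea summarized in the description above (not the authors' EHECCO implementation), the following NumPy sketch estimates the squared RKHS distance, i.e., a maximum mean discrepancy, between the empirical Hilbert embeddings of two multi-channel Mocap sequences, plus a product-kernel variant that pairs the raw 3D joint coordinates with a PCA-based projection, loosely echoing the tensor RKHS construction the abstract describes. All function names, Gaussian bandwidths, and the choice to fit PCA on the concatenated sequences are illustrative assumptions.

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gaussian (RBF) Gram matrix between the rows of A and the rows of B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def embedding_distance(X, Y, sigma=1.0):
    """Squared distance ||mu_X - mu_Y||^2 between the empirical kernel mean
    embeddings of two sequences X, Y (frames x features): a biased MMD^2
    estimate in the Gaussian-kernel RKHS."""
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())

def joint_embedding_distance(X, Y, n_components=3, sigma_raw=1.0, sigma_pca=1.0):
    """Same distance, but in a tensor-product RKHS built from two views of each
    frame: the raw joint coordinates and a PCA-based projection.  The
    element-wise product of the two Gaussian Gram matrices is the Gram matrix
    of the product (tensor) kernel."""
    Z = np.vstack([X, Y])
    mu = Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z - mu, full_matrices=False)
    P = Vt[:n_components].T                      # d x n_components projection (illustrative)
    Xp, Yp = (X - mu) @ P, (Y - mu) @ P

    def joint_gram(A, Ap, B, Bp):
        return gaussian_gram(A, B, sigma_raw) * gaussian_gram(Ap, Bp, sigma_pca)

    return (joint_gram(X, Xp, X, Xp).mean()
            + joint_gram(Y, Yp, Y, Yp).mean()
            - 2.0 * joint_gram(X, Xp, Y, Yp).mean())

if __name__ == "__main__":
    # Hypothetical data: two sequences of 3D joint coordinates (frames x features).
    rng = np.random.default_rng(0)
    walk = rng.normal(size=(120, 93))            # e.g., 120 frames, 31 joints x 3 coords
    jump = rng.normal(size=(150, 93)) + 0.5
    print(embedding_distance(walk, jump, sigma=5.0))
    print(joint_embedding_distance(walk, jump, n_components=10, sigma_raw=5.0, sigma_pca=2.0))
```

Under these assumptions, the resulting scalar can serve as a dissimilarity between sequences for a distance-based classifier; the published method additionally relies on a cross-covariance operator and a learned metric, which this sketch does not reproduce.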
publishDate |
2021 |
dc.date.issued.none.fl_str_mv |
2021 |
dc.date.accessioned.none.fl_str_mv |
2024-11-15T13:42:54Z |
dc.date.available.none.fl_str_mv |
2024-11-15T13:42:54Z |
dc.type.spa.fl_str_mv |
Artículo de revista |
dc.type.coarversion.fl_str_mv |
http://purl.org/coar/version/c_970fb48d4fbd8a85 |
dc.type.coar.eng.fl_str_mv |
http://purl.org/coar/resource_type/c_2df8fbb1 |
dc.type.content.eng.fl_str_mv |
Text |
dc.type.driver.eng.fl_str_mv |
info:eu-repo/semantics/article |
dc.type.redcol.eng.fl_str_mv |
http://purl.org/redcol/resource_type/ART |
dc.type.version.eng.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
format |
http://purl.org/coar/resource_type/c_2df8fbb1 |
status_str |
publishedVersion |
dc.identifier.citation.spa.fl_str_mv |
Valencia-Marín, C. K., et al. (2021). An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability. Sensors. 21(13). 17 p. https://doi.org/10.3390/s21134443 |
dc.identifier.uri.none.fl_str_mv |
https://hdl.handle.net/10614/15904 |
dc.identifier.doi.spa.fl_str_mv |
https://doi.org/10.3390/s21134443 |
dc.identifier.eissn.spa.fl_str_mv |
14248220 |
dc.identifier.instname.spa.fl_str_mv |
Universidad Autónoma de Occidente |
dc.identifier.reponame.spa.fl_str_mv |
Repositorio Educativo Digital UAO |
dc.identifier.repourl.none.fl_str_mv |
https://red.uao.edu.co/ |
identifier_str_mv |
Valencia-Marín, C. K., et al. (2021). An enhanced joint Hilbert embedding-based metric to support Mocap data classification with preserved interpretability. Sensors. 21(13). 17 p. https://doi.org/10.3390/s21134443 14248220 Universidad Autónoma de Occidente Repositorio Educativo Digital UAO |
url |
https://hdl.handle.net/10614/15904 https://doi.org/10.3390/s21134443 https://red.uao.edu.co/ |
dc.language.iso.eng.fl_str_mv |
eng |
language |
eng |
dc.relation.citationendpage.spa.fl_str_mv |
17 |
dc.relation.citationissue.spa.fl_str_mv |
13 |
dc.relation.citationstartpage.spa.fl_str_mv |
1 |
dc.relation.citationvolume.spa.fl_str_mv |
21 |
dc.relation.ispartofjournal.eng.fl_str_mv |
Sensors |
dc.relation.references.none.fl_str_mv |
1. Kadu, H.; Kuo, C. Automatic human mocap data classification. IEEE Trans. Multimed. 2014, 16, 2191–2202. [CrossRef]
2. Kotsifakos, A. Case study: Model-based vs. distance-based search in time series databases. In Proceedings of the Exploratory Data Analysis (EDA) Workshop in SIAM International Conference on Data Mining (SDM), Philadelphia, PA, USA, 23–26 April 2014.
3. Anantasech, P.; Ratanamahatana, C. Enhanced Weighted Dynamic Time Warping for Time Series Classification. In Proceedings of the Third International Congress on Information and Communication Technology, London, UK, 27–28 February 2019; pp. 655–664.
4. Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [CrossRef]
5. Bicego, M.; Murino, V.; Figueiredo, M. Similarity-based classification of sequences using hidden Markov models. Pattern Recognit. 2004, 37, 2281–2291. [CrossRef]
6. Bicego, M.; Murino, V. Investigating hidden Markov models’ capabilities in 2D shape classification. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 281–286. [CrossRef]
7. Tanisaro, P.; Heidemann, G. Time series classification using time warping invariant echo state networks. In Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA, USA, 18–20 December 2016; pp. 831–836.
8. Nurai, T.; Naqvi, W. A research protocol of an observational study on efficacy of Microsoft Kinect Azure in evaluation of static posture in normal healthy population. Research Square 2021, 1, 1–9.
9. Yu, T.; Jin, H.; Tan, W.T.; Nahrstedt, K. SKEPRID: Pose and illumination change-resistant skeleton-based person re-identification. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2018, 14, 1–24. [CrossRef]
10. Jiang, W.; Xue, H.; Miao, C.; Wang, S.; Lin, S.; Tian, C.; Murali, S.; Hu, H.; Sun, Z.; Su, L. Towards 3D human pose construction using WiFi. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, London, UK, 21–25 September 2020; pp. 1–14.
11. Yang, C.; Wang, X.; Mao, S. RFID-Pose: Vision-Aided Three-Dimensional Human Pose Estimation With Radio-Frequency Identification. IEEE Trans. Reliab. 2020. [CrossRef]
12. Božek, P.; Pivarčiová, E. Registration of holographic images based on the integral transformation. Comput. Informatics 2012, 31, 1369–1383.
13. Jozef, C.; Bozek, P.; Pivarciová, E. A new system for measuring the deflection of the beam with the support of digital holographic interferometry. J. Electr. Eng. 2015, 66, 53–56. [CrossRef]
14. de Souza, C.; Gaidon, A.; Cabon, Y.; Murray, N.; López, A. Generating human action videos by coupling 3D game engines and probabilistic graphical models. Int. J. Comput. Vis. 2019, 128, 1–32. [CrossRef]
15. Alarcón-Aldana, A.; Callejas-Cuervo, M.; Bo, A. Upper Limb Physical Rehabilitation Using Serious Videogames and Motion Capture Systems: A Systematic Review. Sensors 2020, 20, 5989. [CrossRef]
16. Jedlička, P.; Krňoul, Z.; Kanis, J.; Železný, M. Sign Language Motion Capture Dataset for Data-driven Synthesis. In Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives, Marseille, France, 11–16 May 2020; pp. 101–106.
17. Protopapadakis, E.; Voulodimos, A.; Doulamis, A.; Camarinopoulos, S.; Doulamis, N.; Miaoulis, G. Dance pose identification from motion capture data: A comparison of classifiers. Technologies 2018, 6, 31. [CrossRef]
18. Sun, C.; Junejo, I.; Foroosh, H. Motion retrieval using low-rank subspace decomposition of motion volume. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2011; Volume 30, pp. 1953–1962.
19. Sebernegg, A.; Kán, P.; Kaufmann, H. Motion Similarity Modeling–A State of the Art Report. arXiv 2020, arXiv:2008.05872.
20. Vrigkas, M.; Nikou, C.; Kakadiaris, I. A review of human activity recognition methods. Front. Robot. AI 2015, 2, 28. [CrossRef]
21. Gedat, E.; Fechner, P.; Fiebelkorn, R.; Vandenhouten, R. Human action recognition with hidden Markov models and neural network derived poses. In Proceedings of the 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017; pp. 000157–000162.
22. Principe, J. Information Theoretic Learning: Renyi’s Entropy and Kernel Perspectives; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.
23. Pulgarin-Giraldo, J.; Alvarez-Meza, A.; Van Vaerenbergh, S.; Santamaría, I.; Castellanos-Dominguez, G. Analysis and classification of MoCap data by Hilbert space embedding-based distance and multikernel learning. In Proceedings of the 23rd Iberoamerican Congress on Pattern Recognition, Madrid, Spain, 19–22 November 2018; pp. 186–193.
24. Williams, C.; Rasmussen, C. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006; Volume 2.
25. Milios, D.; Camoriano, R.; Michiardi, P.; Rosasco, L.; Filippone, M. Dirichlet-based Gaussian processes for large-scale calibrated classification. arXiv 2018, arXiv:1805.10915.
26. Aristidou, A.; Cohen-Or, D.; Hodgins, J.; Chrysanthou, Y.; Shamir, A. Deep motifs and motion signatures. ACM Trans. Graph. (TOG) 2018, 37, 1–13. [CrossRef]
27. Laraba, S.; Brahimi, M.; Tilmanne, J.; Dutoit, T. 3D skeleton-based action recognition by representing motion capture sequences as 2D-RGB images. Comput. Animat. Virtual Worlds 2017, 28, e1782. [CrossRef]
28. Dridi, N.; Hadzagic, M. Akaike and Bayesian information criteria for hidden Markov models. IEEE Signal Process. Lett. 2018, 26, 302–306. [CrossRef]
29. Singh, A.; Principe, J. Information theoretic learning with adaptive kernels. Signal Process. 2011, 91, 203–213. [CrossRef]
30. Blandon, J.; Valencia, C.; Alvarez, A.; Echeverry, J.; Alvarez, M.; Orozco, A. Shape classification using Hilbert space embeddings and kernel adaptive filtering. In International Conference Image Analysis and Recognition; Springer: Berlin/Heidelberg, Germany, 2018; pp. 245–251.
31. Huang, Z.; Wan, C.; Probst, T.; Van Gool, L. Deep learning on Lie groups for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6099–6108.
32. Kamilaris, A.; Prenafeta-Boldú, F. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [CrossRef]
33. Duin, R.; Pekalska, E. The Dissimilarity Representation for Pattern Recognition: Foundations and Applications; World Scientific: Hackensack, NJ, USA, 2005; Volume 64.
34. García-Vega, S.; Álvarez-Meza, A.; Castellanos-Domínguez, G. MoCap Data Segmentation and Classification Using Kernel Based Multi-channel Analysis. In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications; Springer: Berlin/Heidelberg, Germany, 2013; pp. 495–502.
35. Müller, M. Dynamic time warping. In Information Retrieval for Music and Motion; Springer: Cham, Switzerland, 2007; pp. 69–84.
36. Jeong, Y.; Jeong, M.; Omitaomu, O. Weighted dynamic time warping for time series classification. Pattern Recognit. 2011, 44, 2231–2240. [CrossRef]
37. Liu, X.; Sarker, M.; Milanova, M.; O’Gorman, L. Video-Based Monitoring and Analytics of Human Gait for Companion Robot. In Proceedings of the New Approaches for Multidimensional Signal Processing: Proceedings of International Workshop, NAMSP 2020, Sofia, Bulgaria, 9–11 July 2021; Volume 216, p. 15.
38. Liu, L.; Li, P.; Chu, M.; Cai, H. Stochastic gradient support vector machine with local structural information for pattern recognition. Int. J. Mach. Learn. Cybern. 2021, 1, 1–18.
39. Smola, A.; Gretton, A.; Song, L.; Schölkopf, B. A Hilbert Space Embedding for Distributions. In Algorithmic Learning Theory, Proceedings of the 18th International Conference, ALT 2007, Sendai, Japan, 1–4 October 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 13–31.
40. Huang, Z.; Van Gool, L. A Riemannian network for SPD matrix learning. arXiv 2016, arXiv:1608.04233.
41. Vemulapalli, R.; Arrate, F.; Chellappa, R. Human action recognition by representing 3D skeletons as points in a Lie group. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 588–595.
42. Vemulapalli, R.; Chellapa, R. Rolling rotations for recognizing human actions from 3D skeletal data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4471–4479.
43. Gretton, A.; Bousquet, O.; Smola, A.; Schölkopf, B. Measuring statistical dependence with Hilbert-Schmidt norms. In International Conference on Algorithmic Learning Theory; Springer: Berlin/Heidelberg, Germany, 2005; pp. 63–77.
44. Song, L.; Fukumizu, K.; Gretton, A. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. IEEE Signal Process. Mag. 2013, 30, 98–111. [CrossRef]
45. Zhao, J.; Xie, X.; Xu, X.; Sun, S. Multi-view learning overview: Recent progress and new challenges. Inf. Fusion 2017, 38, 43–54. [CrossRef]
46. Shimizu, T.; Hachiuma, R.; Saito, H.; Yoshikawa, T.; Lee, C. Prediction of future shot direction using pose and position of tennis player. In Proceedings of the 2nd International Workshop on Multimedia Content Analysis in Sports, Nice, France, 21–25 October 2019; pp. 59–66.
47. Muandet, K.; Fukumizu, K.; Sriperumbudur, B.; Schölkopf, B. Kernel mean embedding of distributions: A review and beyond. arXiv 2016, arXiv:1605.09522.
48. Sriperumbudur, B.; Gretton, A.; Fukumizu, K.; Schölkopf, B.; Lanckriet, G. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res. 2010, 11, 1517–1561.
49. Berlinet, A.; Thomas-Agnan, C. Reproducing Kernel Hilbert Spaces in Probability and Statistics; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2011.
50. Carter, T. An introduction to information theory and entropy. Complex Syst. Summer Sch. Santa Fe 2007, 1, 1–139.
51. Smola, A.; Gretton, A.; Song, L.; Schölkopf, B. A Hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory; Springer: Berlin/Heidelberg, Germany, 2007; pp. 13–31.
52. Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A. A Kernel Two-sample Test. J. Mach. Learn. Res. 2012, 13, 723–773.
53. Schölkopf, B.; Smola, A. Learning with Kernels; The MIT Press: Cambridge, MA, USA, 2002.
54. Géron, A. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems; O’Reilly Media: Sebastopol, CA, USA, 2019.
55. Jolliffe, I.; Cadima, J. Principal component analysis: A review and recent developments. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2016, 374, 20150202. [CrossRef]
56. Álvarez-Meza, A.; Cárdenas-Peña, D.; Castellanos-Dominguez, G. Unsupervised kernel function building using maximization of information potential variability. In Iberoamerican Congress on Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2014; pp. 335–342.
57. Müller, M.; Röder, T.; Clausen, M.; Eberhardt, B.; Krüger, B.; Weber, A. Documentation Mocap Database HDM05; University of Bonn: Bonn, Germany, 2007.
58. Müller, M.; Röder, T. Motion templates for automatic classification and retrieval of motion capture data. In Proceedings of the 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Vienna, Austria, 2–4 September 2006; pp. 137–146.
59. Kapadia, M.; Chiang, I.; Thomas, T.; Badler, N.; Kider, J. Efficient motion retrieval in large motion databases. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, Orlando, FL, USA, 21–23 March 2013; pp. 19–28.
60. Arora, S.; Hu, W.; Kothari, P.K. An analysis of the t-SNE algorithm for data visualization. In Proceedings of the 31st Conference On Learning Theory, Stockholm, Sweden, 5–9 July 2018; pp. 1455–1462.
61. Lee, J.A.; Renard, E.; Bernard, G.; Dupont, P.; Verleysen, M. Type 1 and 2 mixtures of Kullback–Leibler divergences as cost functions in dimensionality reduction based on similarity preservation. Neurocomputing 2013, 112, 92–108. [CrossRef]
62. Landlinger, J.; Lindinger, S.; Stöggl, T.; Wagner, H.; Müller, E. Key factors and timing patterns in the tennis forehand of different skill levels. J. Sports Sci. Med. 2010, 9, 643.
63. Delgado-Garcia, G.; Vanrenterghem, J.; Munoz-Garcia, A.; Molina-Molina, A.; Soto-Hermoso, V.M. Does stroke performance in amateur tennis players depend on functional power generating capacity? J. Sport. Med. Phys. Fit. 2019, 59, 760–766. [CrossRef] [PubMed]
64. Fett, J.; Ulbricht, A.; Ferrauti, A. Impact of Physical Performance and Anthropometric Characteristics on Serve Velocity in Elite Junior Tennis Players. J. Strength Cond. Res. 2020, 34, 192–202. [CrossRef] [PubMed]
65. Tsoulfa, K.; Dalamitros, A.; Manou, V.; Stavropoulos, N.; Kellis, S. Can a one-day field testing discriminate between competitive and noncompetitive preteen tennis players? J. Phys. Educ. Sport 2016, 16, 1075–1077. [CrossRef]
66. Coulibaly, S.; Kouassi, F.; Beugré, J.B.; Kouadio, J.; Assi, A.; Sonan, N.; Kouamé, N.; Pineau, J.C. Left and right-hand correspondence of the anthropometrical parameters of the upper and manual lateral limb within professional tennis players. Gazz. Med. Ital. Arch. Per. Sci. Med. 2017, 176, 338–344.
67. García-Murillo, D.G.; Alvarez-Meza, A.; Castellanos-Dominguez, G. Single-Trial Kernel-Based Functional Connectivity for Enhanced Feature Extraction in Motor-Related Tasks. Sensors 2021, 21, 2750. [CrossRef]
68. Pomponi, J.; Scardapane, S.; Uncini, A. Bayesian neural networks with maximum mean discrepancy regularization. Neurocomputing 2021. [CrossRef] |
dc.rights.spa.fl_str_mv |
Derechos reservados - MDPI, 2021 |
dc.rights.coar.fl_str_mv |
http://purl.org/coar/access_right/c_abf2 |
dc.rights.uri.eng.fl_str_mv |
https://creativecommons.org/licenses/by-nc-nd/4.0/ |
dc.rights.accessrights.eng.fl_str_mv |
info:eu-repo/semantics/openAccess |
dc.rights.creativecommons.spa.fl_str_mv |
Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0) |
rights_invalid_str_mv |
Derechos reservados - MDPI, 2021 https://creativecommons.org/licenses/by-nc-nd/4.0/ Atribución-NoComercial-SinDerivadas 4.0 Internacional (CC BY-NC-ND 4.0) http://purl.org/coar/access_right/c_abf2 |
eu_rights_str_mv |
openAccess |
dc.format.extent.spa.fl_str_mv |
17 páginas |
dc.format.mimetype.none.fl_str_mv |
application/pdf |
dc.publisher.eng.fl_str_mv |
MDPI |
dc.publisher.place.eng.fl_str_mv |
Basel, Switzerland |
institution |
Universidad Autónoma de Occidente |
bitstream.url.fl_str_mv |
https://red.uao.edu.co/bitstreams/a77f3c85-44de-4a59-9836-e38e31de09da/download https://red.uao.edu.co/bitstreams/5503f736-3c37-4928-b03a-6d78f187caa0/download https://red.uao.edu.co/bitstreams/96a241ae-2ebb-426e-9d57-f1779ae8b2e2/download https://red.uao.edu.co/bitstreams/d2687e17-377c-47ab-8e80-44ae0e510eaf/download |
bitstream.checksum.fl_str_mv |
919be7ba835983ed0b6f843f62749c5d 6987b791264a2b5525252450f99b10d1 51a45048ddf03c7935aa2441b8ec43ab 5bfb1150976de99a1548df6eb2e8f09c |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositorio Digital Universidad Autónoma de Occidente |
repository.mail.fl_str_mv |
repositorio@uao.edu.co |
_version_ |
1831928695948836864 |