Evaluación del efecto emocional de audios inmersivos con respecto a audios estereofónicos mediante respuestas psicofisiológicas

The main objective of this project was to evaluate the emotional effect of immersive audio relative to its stereo counterparts, using the subjective Self-Assessment Manikin (SAM) test and the electroencephalogram (EEG), with the emotionally standardized stimuli of the IADS (International Affective Digital Sounds). The document also covers the engineering work involved: the selection and repair of the audio material, the audio production required to create the immersive and stereo environments, and the signal and data analysis needed to reach the relevant conclusions.

Authors:
Rubio Lancheros, Elian David
Niño Galarza, Juan Diego
Resource type:
Undergraduate degree project
Publication date:
2023
Institution:
Universidad de San Buenaventura
Repository:
Repositorio USB
Language:
OAI Identifier:
oai:bibliotecadigital.usb.edu.co:10819/24861
Online access:
https://hdl.handle.net/10819/24861
Keywords:
620 - Engineering and allied operations
Immersive audio
Stereophonic audio
Psychophysiological responses
Electroencephalography (EEG)
Psychoacoustics
Self-Assessment Manikin (SAM)
International Affective Digital Sounds (IADS)
Machine Learning
K-Nearest Neighbors (KNN)
Rights
openAccess
License
http://purl.org/coar/access_right/c_abf2
id SANBUENAV2_742366db63facbae72aa2cd0b005ac14
oai_identifier_str oai:bibliotecadigital.usb.edu.co:10819/24861
network_acronym_str SANBUENAV2
network_name_str Repositorio USB
repository_id_str
dc.title.spa.fl_str_mv Evaluación del efecto emocional de audios inmersivos con respecto a audios estereofónicos mediante respuestas psicofisiológicas
dc.creator.fl_str_mv Rubio Lancheros, Elian David
Niño Galarza, Juan Diego
dc.contributor.advisor.none.fl_str_mv Herrera Martínez, Marcelo
dc.contributor.author.none.fl_str_mv Rubio Lancheros, Elian David
Niño Galarza, Juan Diego
dc.subject.ddc.none.fl_str_mv 620 - Ingeniería y operaciones afines
dc.subject.proposal.spa.fl_str_mv Audio inmersivo
audio estereofónico
respuestas psicofisiológicas
electroencefalografía (EEG)
psicoacústica
dc.subject.proposal.eng.fl_str_mv Self-Assessment Manikin (SAM)
International Affective Digital Sounds (IADS)
Machine Learning
K-Nearest Neighbors (KNN)
description El objetivo principal de este proyecto consistió en evaluar el efecto emocional de audios inmersivos con respecto a sus equivalentes en formato estéreo mediante la prueba subjetiva Self-Assessment Manikin (SAM) y el electroencefalograma (EEG); todo esto utilizando los estímulos estandarizados emocionalmente de la IADS (International Affective Digital Sounds). En este documento también se presentan todos los aspectos ingenieriles en cuanto a la selección y reparación de los audios a utilizar, la producción de audio requerida para crear los ambientes inmersivos y estéreos, y los procesos de análisis de señal y datos necesarios para llegar a las conclusiones pertinentes.
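The keyword list pairs EEG-derived features with K-Nearest Neighbors (KNN) classification. As a purely illustrative sketch — this is not the thesis code, and the feature names and values below are invented for the example — a minimal KNN majority vote over toy EEG-derived features could look like this:

```python
# Hypothetical sketch of KNN emotion classification from EEG features.
# Feature vectors and labels are made up; real features would come from
# band-power analysis of the recorded EEG signals.
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Label point x by majority vote among its k nearest training points."""
    dists = sorted((math.dist(p, x), lbl) for p, lbl in zip(train, labels))
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Toy feature vectors: [frontal alpha asymmetry, beta/alpha power ratio]
train = [[0.8, 0.2], [0.9, 0.3], [0.1, 0.9], [0.2, 0.8]]
labels = ["positive", "positive", "negative", "negative"]

print(knn_predict(train, labels, [0.85, 0.25]))  # nearest neighbors vote "positive"
```

A real pipeline would extract such features per electrode and epoch before classification; this sketch only shows the voting step the KNN keyword refers to.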
publishDate 2023
dc.date.issued.none.fl_str_mv 2023
dc.date.accessioned.none.fl_str_mv 2025-05-23T16:34:53Z
dc.date.available.none.fl_str_mv 2025-05-23T16:34:53Z
dc.type.spa.fl_str_mv Trabajo de grado - Pregrado
dc.type.coar.spa.fl_str_mv http://purl.org/coar/resource_type/c_7a1f
dc.type.content.spa.fl_str_mv Text
dc.type.driver.spa.fl_str_mv info:eu-repo/semantics/bachelorThesis
dc.type.redcol.spa.fl_str_mv http://purl.org/redcol/resource_type/TP
dc.type.version.spa.fl_str_mv info:eu-repo/semantics/acceptedVersion
dc.identifier.instname.spa.fl_str_mv instname:Universidad de San Buenaventura
dc.identifier.reponame.spa.fl_str_mv reponame:Repositorio Institucional Universidad de San Buenaventura
dc.identifier.repourl.spa.fl_str_mv repourl:https://bibliotecadigital.usb.edu.co/
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/10819/24861
dc.relation.references.none.fl_str_mv Adorni, R., Brugnera, A., Gatti, A., Tasca, G. A., Sakatani, K., & Compare, A. (2019). Psychophysiological Responses to Stress Related to Anxiety in Healthy Aging: A NearInfrared Spectroscopy (NIRS) Study. Journal of Psychophysiology, 33(3), 188–197. https://doi.org/10.1027/0269-8803/a000221
Allen, J. J. B., Coan, J. A., & Nazarian, M. (2004). Issues and assumptions on the road from raw signals to metrics of frontal EEG asymmetry in emotion. Biological Psychology, 67(1–2), 183–218. https://doi.org/10.1016/j.biopsycho.2004.03.007
Arteaga, D. (2015). Introduction to Ambisonics. En Audio 3D – Grau en Enginyeria de Sistemes Audiovisuals Universitat Pompeu Fabra (Número June). http://www.ironbridgeelt.com/downloads/FrancescaOrtolani-IntroductionToAmbisonics.pdf
Asutay, E., Västfjäll, D., Tajadura-Jiménez, A., Genell, A., Bergman, P., & Kleiner, M. (2012). Emoacoustics: A study of the psychoacoustical and psychological dimensions of emotional sound design. AES: Journal of the Audio Engineering Society, 60(1–2), 21–28.
Ayon, D. (2016). Machine Learning Algorithms : A Review. International Journal of Computer Science and Information Technologies, 7(3), 1174–1179. https://doi.org/10.21275/ART20203995
Bai, J., Luo, K., Peng, J., Shi, J., Wu, Y., Feng, L., Li, J., & Wang, Y. (2017). Music emotions recognition by cognitive classification methodologies. Proceedings of 2017 IEEE 16th International Conference on Cognitive Informatics and Cognitive Computing, ICCI*CC 2017, April 2022, 121–129. https://doi.org/10.1109/ICCI-CC.2017.8109740
Baumgartner, T., Esslen, M., & Jäncke, L. (2006). From emotion perception to emotion experience: Emotions evoked by pictures and classical music. International Journal of Psychophysiology, 60(1), 34–43. https://doi.org/https://doi.org/10.1016/j.ijpsycho.2005.04.007
Berthoz, A., & Viaud-Delmon, I. (1999). Multisensory integration in spatial orientation. Current Opinion in Neurobiology, 9(6), 708–712. https://doi.org/https://doi.org/10.1016/S0959-4388(99)00041-0
Berwick, N., & Lee, H. (2020). Spatial unmasking effect on speech reception threshold in the median plane. Applied Sciences (Switzerland), 10(15). https://doi.org/10.3390/APP10155257
Bonneterre, J.-S., Henney, M., & Khong, Y. (2019). KZ ZST Headphones Review by Headphones Reviews on RTings. https://www.rtings.com/headphones/reviews/kz/zst#page-discussions
Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59. https://doi.org/10.1016/0005-7916(94)90063-9
Bradley, M. M., & Lang, P. J. (2007). The International Affective Digitized Sounds Affective Ratings of Sounds and Instruction Manual. Technical report B-3. University of Florida, Gainesville, Fl., 29–46. http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle:The+International+Affect ive+Digitized+Sounds+Affective+Ratings+of+Sounds+and+Instruction+Manual#1
Butler, S. R., & Glass, A. (1974). Asymmetries in the electroencephalogram associated with cerebral dominance. Electroencephalography and Clinical Neurophysiology, 36, 481–491. https://doi.org/https://doi.org/10.1016/0013-4694(74)90205-3
Calderón, S., Rincón, R., Araujo, A., & Gantiva, C. (2018). Effect of congruence between sound and video on heart rate and self-reported measures of emotion. Europe’s Journal of Psychology, 14(3), 621–631. https://doi.org/10.5964/ejop.v14i3.1593
Camara, N., & Stewart-Rushworth, N. (2019). Surroundscapes: The power of immersive sound.
Chion, M. (1993). La audiovisión: Introducción a un análisis conjunto de la imagen y el sonido. Cifuentes-Avellaneda Á, Rivera-Montero D, Vera-Gil C, Murad-Rivera R, Sánchez S, & Castaño
L. (2020). Informe 3. Ansiedad, depresión y miedo: impulsores de la mala salud mental durante el distanciamiento físico en Colombia. Estudio solidaridad Profamilia [revista en Internet] 2020 [acceso 2 de marzo de 2021]; 2020: 1-13.https://doi.org/10.13140/RG.2.2.32144.64002
Cohen, M. X. (2014). Analyzing Neural Time Series Data: Theory and Practice (J. Grafman (ed.)). The MIT Press.
Costantini, G. (2018). Approaches to Sound Design: Murch and Burtt. The New Soundtrack, 8(2), 169–174. https://doi.org/10.3366/sound.2018.0129
Craddock, M. (2022). Filter EEG data by eegUtils. https://craddm.github.io/eegUtils/reference/eeg_filter.html
Cuadrado, F. J. (2015). Generar emociones a través del diseño de sonido. En III Congreso Internacional de Historia, Arte y Literatura en el Cine. Libro de Actas Tomo II (Tomo II, pp. 272–284).
Cuadrado, F., Lopez-Cobo, I., Mateos-Blanco, T., & Tajadura-Jiménez, A. (2020). Arousing the Sound: A Field Study on the Emotional Impact on Children of Arousing Sound Design and 3D Audio Spatialization in an Audio Story. Frontiers in Psychology, 11(May), 1–19. https://doi.org/10.3389/fpsyg.2020.00737
Cunningham, S., Ridley, H., Weinel, J., & Picking, R. (2019). Audio emotion recognition using machine learning to support sound design. ACM International Conference Proceeding Series, 116–123. https://doi.org/10.1145/3356590.3356609
DANE. (2021). Nota Estadística Salud Mnetal en Colombia: un análisis de los efectos de la pandemia. de Cheveigné, A., & Nelken, I. (2019). Filters: When, Why, and How (Not) to Use Them. Neuron, 102(2), 280–293. https://doi.org/10.1016/j.neuron.2019.02.039
DearReality. (2021). DearVR Micro User Manual.
Díaz, C., & Sánchez, D. (2016). Análisis psicoacústico de la respuesta del estado de relajación del ser humano a rangos de frecuencia. Universidad de San Buenaventura Bogotá.
Dou, J., & Qin, J. (2017). Research on user mental model acquisition based on multidimensional data collaborative analysis in product service system innovation process. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10276 LNAI, 35–44. https://doi.org/10.1007/978-3-319-58475-1_3
Drossos, K., Floros, A., & Giannakoulopoulos, A. (2014). BEADS: A dataset of Binaural Emotionally Annotated Digital Sounds. IISA 2014 - 5th International Conference on Information, Intelligence, Systems and Applications, July, 158–163. https://doi.org/10.1109/IISA.2014.6878749
Drossos, K., Kaliakatsos-Papakostas, M., Floros, A., & Virtanen, T. (2016). On the impact of the semantic content of sound events in emotion elicitation. AES: Journal of the Audio Engineering Society, 64(7–8), 525–532. https://doi.org/10.17743/jaes.2016.0024
Elliott, S., House, C., Cheer, J., & Simon-Galvez, M. (2016). Cross-Talk Cancellation for Headrest Sound Reproduction. Audio Engineering Society Conference: 2016 AES International Conference on Sound Field Control. http://www.aes.org/elib/browse.cfm?elib=18304
Facebook for Business. (2018). Un vistazo al futuro: qué implican la realidad aumentada y la virtual para las marcas y los anunciantes | Facebook IQ | Facebook para empresas. https://www.facebook.com/business/news/insights/3-things-marketers-need-to-know-about-ar-and-vr
Fan, J., Wade, J. W., Key, A. P., Warren, Z. E., & Sarkar, N. (2018). EEG-based affect and workload recognition in a virtual driving environment for ASD intervention. IEEE Transactions on Biomedical Engineering, 65(1), 43–51. https://doi.org/10.1109/TBME.2017.2693157
Firat, R. B. (2019). Opening the “Black Box”: Functions of the Frontal Lobes and Their Implications for Sociology. Frontiers in Sociology, 4(February). https://doi.org/10.3389/fsoc.2019.00003
Fonseca, N. (2020). Sound Particles Reference Manual. En Technology.
Fontana, S., Farina, A., & Grenier, Y. (2007). BINAURAL FOR POPULAR MUSIC : A CASE OF STUDY Ecole Nationale Supérieure des Télécommunications , TSI Paris , France Università di Parma , Parma , Italia. 85–90.
García, A. P. (2021). TÉCNICAS PARAMÉTRICAS DE UPMIXING EN AMBISONICS : EVALUACIÓN PERCEPTUAL. Universidad de San Buenaventura Medellín.
Gelfand, S. A. (2010). Hearing: An introduction to psychological and physiological acoustics, fourth edition. En Hearing: An Introduction to Psychological and Physiological Acoustics, Fourth Edition (5th Editio).
Giraldo, S., & Ramirez, R. (2013). Brain-Activity-Driven Real-Time Music Emotive Control. …3rd International Conference on Music & Emotion ( …, June, 11–15. https://jyx.jyu.fi/dspace/handle/123456789/41625
Grimshaw, M., Lindley, C. A., & Nacke, L. (2008). Sound and immersion in the first-person shooter: Mixed measurement of the player’s sonic experience. Proceedings of the Audio Mostly Conference - A Conference on Interaction with Sound, 9–15.
Guideline 5: Guidelines for standard electrode position nomenclature. (2006). En Journal ofclinical neurophysiology : official publication of the American Electroencephalographic Society (Vol. 23, Número 2, pp. 107–110). https://doi.org/10.1097/00004691-200604000-00006
Handayani, D., Wahab, A., & Yaacob, H. (2015). Recognition of emotions in video clips: The self-assessment manikin validation. Telkomnika (Telecommunication Computing Electronics and Control), 13(4), 1343–1351. https://doi.org/10.12928/telkomnika.v13i4.2735
Hermann, E. (2022). Neural responses to positive and negative valence: How can valence influence frontal alpha asymmetry? Tilburg University.
Hillman, N., & Pauletto, S. (2015). The Craftsman: The use of sound design to elicit emotions. The Soundtrack, 7(1), 5–23. https://doi.org/10.1386/st.7.1.5_1 Honda, S., Ishikawa, Y., Konno, R., Imai, E., Nomiyama, N., Sakurada, K., Koumura, T., Kondo,
H. M., Furukawa, S., Fujii, S., & Nakatani, M. (2020). Proximal Binaural Sound Can Induce Subjective Frisson. Frontiers in Psychology, 11(March, Article 316), 1–10. https://doi.org/10.3389/fpsyg.2020.00316
Hsu, B. W., & Wang, M. J. J. (2013). Evaluating the effectiveness of using electroencephalogram power indices to measure visual fatigue. Perceptual and Motor Skills, 116(1), 235–252. https://doi.org/10.2466/29.15.24.PMS.116.1.235-252
Hughes, S., & Kearney, G. (2015). Fear and Localisation: Emotional Fine-Tuning Utlising Multiple Source Directions. AES: Journal of the Audio Engineering Society, 56th International Conference, London, UK.
IBM. (2022). ¿Qué es el algoritmo de k vecinos más cercanos? https://www.ibm.com/coes/topics/knn Jebelli, H., Hwang, S., & Lee, S. (2017). Feasibility of Field Measurement of Construction Workers’ Valence Using a Wearable EEG Device. Barrett 1998, 99–106.
Jiménez, R. A. (2012). Estudio para determinar un par de frecuencias que geneneren un estado de relajación en el ser humano, mediante la reproducción de sonidos binaurales. Universidad de San Buenaventura Bogotá.
Johansson, G. (2022). Frontal Alpha Asymmetry scores in threatening and non-threatening conditions. Högskolan I Skövde.
Juslin, P. N. (2009). Sound of music : Seven ways in which the brain can evoke emotions from sound. En Sound, mind and emotion (8a ed., pp. 11–41). Sound Environment Centre.
Kahsnitz, M., & RTW. (2021). Worldwide Loudness Delivery Standards. https://www.rtw.com/en/blog/worldwide-loudness-delivery-standards.html
Katsigiannis, S., & Ramzan, N. (2018). DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals from Wireless Low-cost Off-the-Shelf Devices. IEEE Journal of Biomedical and Health Informatics, 22(1), 98–107. https://doi.org/10.1109/JBHI.2017.2688239
Katz, L. (2019). How to setup surround sound home audio - SoundGuys. https://www.soundguys.com/how-to-setup-home-theater-surround-sound-24444/
Kim, J., Kim, W., & Kim, J.-T. (2015). Psycho-physiological responses of drivers to road section types and elapsed driving time on a freeway. Can. J. Civ. Eng., 42, 881–888. https://doi.org/https://doi.org/10.1139/cjce-2014-0392
Koelstra, S., Mülh, C., Soleymani, M., Lee, J.-S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., & Patras, I. (2012). DEAP: A Database for Emotion Analysis using Physiological Signals. IEEE Transactions on Affective Computing.
Lan, Z., Sourina, O., Wang, L., & Liu, Y. (2016). Real-time EEG-based emotion monitoring using stable features. Visual Computer, 32(3), 347–358. ttps://doi.org/10.1007/s00371-015-1183-y
Lepa, S., Weinzierl, S., Maempel, H. J., & Ungeheuer, E. (2014). Emotional impact of different forms of spatialization in everyday mediatized music listening: Placebo or technology effects? 136th Audio Engineering Society Convention 2014, Convention Paper 9024, 141–148.
Levitin, D. (2006). Tu Cerebro y La Música (Titivillus (Ed.)). Lectulandia . Li, T.-H., Liu, W., Zheng, W.-L., & Lu, B.-L. (2019). Classification of Five Emotions from EEG and Eye Movement Signals: Discrimination Ability and Stability over Time. 607–610. https://doi.org/10.1109/NER.2019.8716943
Li, Y., Cai, J., Dong, Q., Wu, L., & Chen, Q. (2020). Psychophysiological responses of young people to soundscapes in actual rural and city environments. AES: Journal of the Audio Engineering Society, 68(12), 910–925. https://doi.org/10.17743/JAES.2020.0060
Liao, D., Shu, L., Liang, G., Li, Y., Zhang, Y., Zhang, W., & Xu, X. (2020). Design and Evaluation of Affective Virtual Reality System Based on Multimodal Physiological Signals and Self-Assessment Manikin. IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, 4(3), 216–224. https://doi.org/10.1109/JERM.2019.2948767
Lorenzi, A., Gentil, A., & Gil-Loyzaga, P. (2019). AUDICIÓN BINAURAL Y MONOAURAL | Cochlea. http://www.cochlea.eu/es/sonido/psicoacustica/localizacion
Macía, A. F. (2017). Design of a protocol for the measurement of physiological and emotional responses to sound stimuli. Universidad de San Buenaventura Medellín.
MATLAB. (2022). Bandpower code documentation (MATLAB R2022b). https://www.mathworks.com/help/signal/ref/bandpower.html
Mosquera, D., & Casas, J. (2017). Generación De Audio Espacializado En Tres Dimensiones. Pontificia Universidad Javeriana
My-MS.org. (2022). Brain Anatomy Part 2: Lobes. My-MS.org: For Information on Multiple Sclerosis. https://my-ms.org/anatomy_brain_part2.htm
Nair, S. (2016). Reverse Engineering Emotions in an Immersive Audio Mix Format. IBC, 1–5.
Narbutt, M., Skoglund, J., Allen, A., Chinen, M., Barry, D., & Hines, A. (2019). AMBIQUAL: Towards a Quality Metric for Headphone Rendered Compressed Ambisonic Spatial Audio.Applied Sciences (Switzerland), 9(13), 1–21. https://doi.org/10.3390/app9132618
Navea, R. F., & Dadios, E. (2015). Beta/Alpha power ratio and alpha asymmetry characterization of EEG signals due to musical tone stimulation. Project Einstein 2015, October.
NeuroSky. (s/f). EEG: The Ultimate Guide. Recuperado el 12 de noviembre de 2021, de http://neurosky.com/biosensors/eeg-sensor/ultimate-guide-to-eeg/
NeuroSky. (2015). MindWave Mobile : User Guide. August, 1–18.
Olive, S. E., & Welti, T. (2012). Defining Immersion: Literature Review and Implications for Research on Immersive Audiovisual Experiences. 1–17.
OMS. (1998). Prevention of noise-induced hearing loss : report of an informal consultation held at the World Health Organization, Geneva, on 28-30 October 1997. World Health Organization.
OpenBCI. (2022a). Gelfree Electrode Cap Guide.
OpenBCI. (2022b). GUI Widget Guide - OpenBCI GUI Documentation.
Otto, N., Amman, S., Eaton, C., & Lake, S. (1999). Guidelines for jury evaluations of automotive sounds. SAE Technical Papers, April. https://doi.org/10.4271/1999-01-1822
Posner, J., Russel, J. A., & Peterson, B. S. (2005). The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. National Institude of Health, 17, 715–734.
Pouyanfar, S., & Sameti, H. (2014). Music emotion recognition using two level classification. 2014 Iranian Conference on Intelligent Systems, ICIS 2014, February 2017. https://doi.org/10.1109/IranianCIS.2014.6802519
Puentes, D. H. (2008). Neuro-estimulador Auditivo Binaural para Tratamiento y Optimización de Estados Cerebrales Inducidos de Vigilia y Concentración. Universidad de San Buenaventura Bogotá.
Restrepo Cabanzo, L. P. (2017). Comparación objetiva y subjetiva de la sonoridad entre una mezcla de sonido envolvente 5.1 y binaural de un cortometraje usando el sistema de reproducción Opsodis. Universidad de San Buenaventura Medellín.
Reuderink, B., Mühl, C., & Poel, M. (2013). Valence, arousal and dominance in the EEG during game play. International Journal of Autonomous and Adaptive Communications Systems, 6(1), 45–62. https://doi.org/10.1504/IJAACS.2013.050691
Reyes, M. F., & Velasco, J. S. (2014). Análisis Psicoacústico a Partir de Estímulos Auditivos Generados por Medio de Pulsos Binaurales en Relación al Rango de Frecuencia en una Composición Sonora. Universidad de San Buenaventura Bogotá.
Roginska, A., & Geluso, P. (2017). Immersive sound: The art and science of binaural and multichannel audio. En Immersive Sound: The Art and Science of Binaural and Multi-Channel Audio. https://doi.org/10.4324/9781315707525
Rudrich, D., Zotter, F., Grill, S., & Hubber, M. (2021). IEM Plug-in Suite Documentation and Plug-in Descriptions. https://plugins.iem.at/docs/plugindescriptions/
Rumsey, F. (2016). Immersive audio: Objects, mixing, and rendering. AES: Journal of the Audio Engineering Society, 64(7–8), 584–588.
Rumsey, F. (2018). Spatial audio Channels, objects, or ambisonics? AES: Journal of the Audio Engineering Society, 66(11), 987–992.
Rumsey, F. (2020). Immersive audio—Defining and evaluating the experience. J. Audio Eng. Soc, 68(5), 388–392. http://www.aes.org/e-lib/browse.cfm?elib=20856
Sarno, R., Munawar, M. N., & Nugraha, B. T. (2016). Real-time electroencephalography-based emotion recognition system. International Review on Computers and Software, 11(5), 456–465. https://doi.org/10.15866/irecos.v11i5.9334
Sikström, E., Nilsson, N. C., Nordahl, R., & Serafin, S. (2013). Preliminary investigation of self reported emotional responses to approaching and receding footstep sounds in a virtual reality context. Proceedings of the AES International Conference, 18–23.
Silverthorn, D. U. (2019). Fisiología humana un enfoque integrado. En Fisiología humana un enfoque integrado (8a ed.). Editorial Médica Panamericana.
Soleymani, M., Lichtenauer, J., Pun, T., & Pantic, M. (2012). A Multi-Modal Affective Database for Affect Recognition and Implicit Tagging. Affective Computing, IEEE Transactions on, 3, 1. https://doi.org/10.1109/T-AFFC.2011.25
Song, T., Zheng, W., Song, P., & Cui, Z. (2018). EEG Emotion Recognition Using Dynamical Graph Convolutional Neural Networks. IEEE Transactions on Affective Computing, PP, 1. https://doi.org/10.1109/TAFFC.2018.2817622
Sotgiu, A. De, Coccoli, M., & Vercelli, G. (2020). Comparing the perception of “sense of presence” between a stereo mix and a binaural mix in immersive music. 148th Audio Engineering Society Convention 2020, Convention e-Brief 588, 1–5.
Soto, S. A. (2021). Suscripciones a servicios de streaming en Colombia pueden costarle $132.500 mensuales. https://www.larepublica.co/internet-economy/suscripciones-aservicios-de-streaming-en-colombia-pueden-costarle-132500-mensuales-3124703
Statista, & Orús, A. (2021). • Música en streaming: usuarios de pago a nivel mundial 2010-2020 | Statista. https://es.statista.com/estadisticas/636319/usuarios-de-pago-de-servicios-demusica-en-streaming-a-nivel-mundial/
Stockburger, A. (2006). THE RENDERED ARENA MODALITIES OF SPACE IN VIDEO AND COMPUTER GAMES. En Narrative. University of the Arts, London.
Subramanian, R., Wache, J., Abadi, M. K., Vieriu, R. L., Winkler, S., & Sebe, N. (2018). ASCERTAIN: Emotion and personality recognition using commercial sensors. IEEE Transactions on Affective Computing, 9(2), 147–160. https://doi.org/10.1109/TAFFC.2016.2625250
Suhaimi, N. S., Mountstephens, J., & Teo, J. (2020). EEG-Based Emotion Recognition: A Stateof-the-Art Review of Current Trends and Opportunities. Computational Intelligence and Neuroscience, 2020. https://doi.org/10.1155/2020/8875426
Trinnov. (2021). Trinnov | What is Immersive Sound? What is Object-based audio? https://www.trinnov.com/en/blog/posts/what-is-immersive-sound/
Vallat, R. (2018). Compute the average bandpower of an EEG signal.
Vargas, A. (2009). Análisis Psicoacústico de Producciones Audiovisuales. Universidad de San Buenaventura Bogotá.
Yang, W., Makita, K., Nakao, T., Kanayama, N., Machizawa, M. G., Sasaoka, T., Sugata, A., Kobayashi, R., Hiramoto, R., Yamawaki, S., Iwanaga, M., & Miyatani, M. (2018). Affective auditory stimulus database: An expanded version of the International Affective Digitized Sounds (IADS-E). Behavior Research Methods, 50(4), 1415–1429. https://doi.org/10.3758/s13428-018-1027-6
Zangeneh Soroush, M., Maghooli, K., Setarehdan, K., & Motie Nasrabadi, A. (2018). Emotion Classification through Nonlinear EEG Analysis Using Machine Learning Methods. International Clinical Neuroscience Journal, 5, 135–149. https://doi.org/10.15171/icnj.2018.26
Zeng, Z., Pantic, M., & Roisman, G. (2009). A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE transactions on pattern analysis and machine intelligence, 31, 39–58. https://doi.org/10.1109/TPAMI.2008.52
Zhu, L., Tian, X., Xu, X., & Shu, L. (2019). Design and Evaluation of the Mental Relaxation VR Scenes Using Forehead EEG Features. IEEE MTT-S 2019 International Microwave Biomedical Conference, IMBioC 2019 - Proceedings, 2019–2022. https://doi.org/10.1109/IMBIOC.2019.8777812
Zor, J. de. (2010). INFORME LAS FRECUENCIAS CEREBRALES O la puerta del espacio. https://www.hispamap.net/ondas.htm
Zotter, F., & Frank, M. (2019). XY, MS, and First-Order Ambisonics. Springer Topics in Signal Processing, 19, 1–22. https://doi.org/10.1007/978-3-030-17207-7_1
dc.rights.accessrights.spa.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.coar.spa.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.license.*.fl_str_mv Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri.*.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.format.extent.none.fl_str_mv 193 páginas
dc.format.mimetype.none.fl_str_mv application/pdf
dc.publisher.spa.fl_str_mv Universidad de San Buenaventura
dc.publisher.branch.spa.fl_str_mv Bogotá
dc.publisher.faculty.spa.fl_str_mv Facultad de Ingeniería
dc.publisher.place.none.fl_str_mv Bogotá
dc.publisher.program.spa.fl_str_mv Ingeniería de Sonido
bitstream.url.fl_str_mv https://bibliotecadigital.usb.edu.co/bitstreams/0f3f825a-9765-4953-a8fb-90ec8bacb0f6/download
https://bibliotecadigital.usb.edu.co/bitstreams/9eaad015-5785-42a3-8b2b-14f7e8d40a0d/download
https://bibliotecadigital.usb.edu.co/bitstreams/fc2f7b7c-dec3-41fc-93a6-ed9878b4e46e/download
https://bibliotecadigital.usb.edu.co/bitstreams/1d404512-f113-4d95-b209-7acb3424ac96/download
https://bibliotecadigital.usb.edu.co/bitstreams/6588aa57-46fd-49db-95f6-1b1b81ff81e7/download
https://bibliotecadigital.usb.edu.co/bitstreams/2c362475-799a-4fb6-b82c-bce88ec4acf1/download
https://bibliotecadigital.usb.edu.co/bitstreams/725fb333-a793-4205-adc1-37c372abdf7d/download
https://bibliotecadigital.usb.edu.co/bitstreams/d867c99b-c733-4e3d-86c5-aaaa7dff4d90/download
repository.name.fl_str_mv Repositorio Institucional Universidad de San Buenaventura Colombia
repository.mail.fl_str_mv bdigital@metabiblioteca.com
spelling Herrera Martínez, MarceloRubio Lancheros, Elian DavidNiño Galarza, Juan Diego2025-05-23T16:34:53Z2025-05-23T16:34:53Z2023El objetivo principal de este proyecto constó en evaluar el efecto emocional de audios inmersivos con respecto a sus equivalentes en formato estéreo mediante la prueba subjetiva Self-Assesment Manikin (SAM) y el electroencefalograma (EEG); todo esto utilizando los estímulos estandarizados emocionalmente de la IADS (International Affective Digital Sounds). En este documento también se presentan todos los aspectos ingenieriles en cuanto a la selección y reparación de los audios a utilizar, la producción de audio requerida para crear los ambientes inmersivos y estéreos, y los procesos de análisis de señal y datos necesarios para llegar a las conclusiones pertinentesThe main objective of this project was to evaluate the emotional impact of immersive audio compared to its stereo counterparts using the subjective Self-Assessment Manikin (SAM) test and the electroencephalogram (EEG); all using the emotionally standardized stimuli of the IADS (International Affective Digital Sounds). This document also presents all the engineering aspects regarding the selection and repair of the audio to be used, the audio production required to create immersive and stereo environments, and the signal and data analysis processes necessary to reach the relevant conclusions.PregradoIngeniero de Sonido193 páginasapplication/pdfinstname:Universidad de San Buenaventurareponame:Repositorio Institucional Universidad de San Buenaventurarepourl:https://bibliotecadigital.usb.edu.co/https://hdl.handle.net/10819/24861Universidad de San BuenaventuraBogotáFacultad de IngenieríaBogotáIngeniería de SonidoAdorni, R., Brugnera, A., Gatti, A., Tasca, G. A., Sakatani, K., & Compare, A. (2019). Psychophysiological Responses to Stress Related to Anxiety in Healthy Aging: A NearInfrared Spectroscopy (NIRS) Study. Journal of Psychophysiology, 33(3), 188–197. 
info:eu-repo/semantics/openAccesshttp://purl.org/coar/access_right/c_abf2Attribution-NonCommercial-NoDerivatives 4.0 Internationalhttp://creativecommons.org/licenses/by-nc-nd/4.0/620 - Ingeniería y operaciones afinesAudio inmersivoaudio estereofónicorespuestas psicofisiológicaselectroencefalografía (EEG)psicoacústicaSelf-Assessment Manikin (SAM)International Affective Digital Sounds (IADS)Machine LearningK-Nearest Neighbors (KNN)Evaluación del efecto emocional de audios inmersivos con respecto a audios estereofónicos mediante respuestas psicofisiológicasTrabajo de grado - Pregradohttp://purl.org/coar/resource_type/c_7a1fTextinfo:eu-repo/semantics/bachelorThesishttp://purl.org/redcol/resource_type/TPinfo:eu-repo/semantics/acceptedVersionComunidad Científica y AcadémicaPublicationORIGINALFormato_Autorización_Publicación_Repositorio_USBColFormato_Autorización_Publicación_Repositorio_USBColapplication/pdf217236https://bibliotecadigital.usb.edu.co/bitstreams/0f3f825a-9765-4953-a8fb-90ec8bacb0f6/download134a5c70e495645fe759b0b82bf3cbf8MD51Evaluación_Emocional_Inmersivo_Rubio_2023.pdfEvaluación_Emocional_Inmersivo_Rubio_2023.pdfapplication/pdf34581379https://bibliotecadigital.usb.edu.co/bitstreams/9eaad015-5785-42a3-8b2b-14f7e8d40a0d/download7ea1075bac452031d91922e767e554bcMD52CC-LICENSElicense_rdflicense_rdfapplication/rdf+xml; charset=utf-8899https://bibliotecadigital.usb.edu.co/bitstreams/fc2f7b7c-dec3-41fc-93a6-ed9878b4e46e/download3b6ce8e9e36c89875e8cf39962fe8920MD53LICENSElicense.txtlicense.txttext/plain; charset=utf-82079https://bibliotecadigital.usb.edu.co/bitstreams/1d404512-f113-4d95-b209-7acb3424ac96/downloadce8fd7f912f132cbeb263b9ddc893467MD54TEXTFormato_Autorización_Publicación_Repositorio_USBCol.txtFormato_Autorización_Publicación_Repositorio_USBCol.txtExtracted 
texttext/plain7224https://bibliotecadigital.usb.edu.co/bitstreams/6588aa57-46fd-49db-95f6-1b1b81ff81e7/download65f2f2839986bd16a5f098318551e6b0MD55Evaluación_Emocional_Inmersivo_Rubio_2023.pdf.txtEvaluación_Emocional_Inmersivo_Rubio_2023.pdf.txtExtracted texttext/plain101613https://bibliotecadigital.usb.edu.co/bitstreams/2c362475-799a-4fb6-b82c-bce88ec4acf1/download0a07d3af0821a37431c8a5288583de91MD57THUMBNAILFormato_Autorización_Publicación_Repositorio_USBCol.jpgFormato_Autorización_Publicación_Repositorio_USBCol.jpgGenerated Thumbnailimage/jpeg16115https://bibliotecadigital.usb.edu.co/bitstreams/725fb333-a793-4205-adc1-37c372abdf7d/download3d3ee690f5fc2e5cb235c0938003ed64MD56Evaluación_Emocional_Inmersivo_Rubio_2023.pdf.jpgEvaluación_Emocional_Inmersivo_Rubio_2023.pdf.jpgGenerated Thumbnailimage/jpeg13852https://bibliotecadigital.usb.edu.co/bitstreams/d867c99b-c733-4e3d-86c5-aaaa7dff4d90/downloadcbb91ea8afb49efce459ef2ffe77a52eMD5810819/24861oai:bibliotecadigital.usb.edu.co:10819/248612025-05-24 04:34:05.631http://creativecommons.org/licenses/by-nc-nd/4.0/Attribution-NonCommercial-NoDerivatives 4.0 Internationalhttps://bibliotecadigital.usb.edu.coRepositorio Institucional Universidad de San Buenaventura 
Colombiabdigital@metabiblioteca.com