Paravertebral muscle segmentation for body composition analysis in CT scans
This undergraduate research thesis applies deep learning, specifically U-Net-based architectures, to automate the segmentation of paraspinal muscles in CT scans, with the goal of improving segmentation accuracy and the precision of body composition analysis (BCA). Several models are evaluated, including U-Net, U-Net++, AttU-Net, TransUNet, UNETR, and SwinUNETR, comparing their performance under a transfer learning approach on the CAVAAT dataset. Performance is measured with the Dice Similarity Coefficient (DSC) and the Hausdorff Distance (HD). In addition, SAM (Segment Anything Model) is tested for its adaptability to medical imaging tasks, providing insight into its accuracy and potential applications in scenarios with limited annotations. Models, data, and code are available in the GitHub repository: https://github.com/Lina-go/PMSegmentation.git
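For context, the two evaluation metrics named in the abstract have standard definitions: the Dice Similarity Coefficient measures overlap between a predicted and a reference mask, and the Hausdorff Distance measures the largest boundary disagreement between the two. The sketch below is an illustrative NumPy/SciPy implementation of both for binary 2-D masks; it is not code from the thesis or its repository, and the function names and toy masks are chosen here only for clarity.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


def hausdorff_distance(pred: np.ndarray, target: np.ndarray) -> float:
    """Symmetric Hausdorff Distance (HD) between the foreground pixel
    sets of two binary masks, in pixel units."""
    pred_pts = np.argwhere(pred.astype(bool))
    target_pts = np.argwhere(target.astype(bool))
    d_forward, _, _ = directed_hausdorff(pred_pts, target_pts)
    d_backward, _, _ = directed_hausdorff(target_pts, pred_pts)
    return max(d_forward, d_backward)


# Toy example: two partially overlapping square masks on a 64x64 grid.
pred = np.zeros((64, 64), dtype=np.uint8)
target = np.zeros((64, 64), dtype=np.uint8)
pred[10:40, 10:40] = 1
target[15:45, 15:45] = 1
print(f"DSC = {dice_coefficient(pred, target):.3f}")   # ~0.694
print(f"HD  = {hausdorff_distance(pred, target):.1f} px")
```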
- Author:
- Gómez Mesa, Lina María
- Resource type:
- Undergraduate thesis (trabajo de grado de pregrado)
- Publication date:
- 2025-01-31
- Institution:
- Universidad de los Andes
- Repository:
- Séneca: Uniandes institutional repository
- Language:
- English
- OAI Identifier:
- oai:repositorio.uniandes.edu.co:1992/75992
- Online access:
- https://hdl.handle.net/1992/75992
- Keywords:
- Body composition analysis
- Image processing
- Paraspinal muscle
- Semantic segmentation
- Ingeniería (Engineering)
- Rights:
- Open access
- License:
- Attribution 4.0 International (CC BY 4.0): http://creativecommons.org/licenses/by/4.0/
- Advisor:
- Reyes Gómez, Juan Pablo
- Extent:
- 20 pages
- Format:
- application/pdf
- Publisher:
- Universidad de los Andes
- Program:
- Ingeniería de Sistemas y Computación
- Faculty:
- Facultad de Ingeniería
- Department:
- Departamento de Ingeniería de Sistemas y Computación