Evaluación de las habilidades lingüísticas mediante modelos de clasificación diagnóstica: un ejemplo de uso de un instrumento lingüístico

The primary purpose of the present study was to inform and illustrate, using examples, the use of Diagnostic Classification Models (DCMs) for the assessment of skills and competencies in cognition and academic achievement. A secondary purpose was to compare and contrast traditional and contemporary psychometrics for the measurement of skills and competencies. DCMs are described along the lines of other psychometric models within the Confirmatory Factor Analysis tradition, such as the bifactor model and the known mixture models that are utilized to classify individuals into subgroups. The inclusion of interaction terms and constraints, along with their confirmatory nature, enables DCMs to accurately assess the possession of skills and competencies. The above is illustrated using an empirical dataset from Saudi Arabia (n = 2,642), in which language skills are evaluated on how they conform to known levels of competency based on the CEFR (Council of Europe, 2001), using the English Proficiency Test (EPT).

Authors:
Sideridis, Georgios D.
Tsaousis, Ioannis
Al-Harbi, Khaleel
Resource type:
Journal article
Publication date:
2022
Institution:
Universidad de San Buenaventura
Repository:
Repositorio USB
Language:
eng
OAI Identifier:
oai:bibliotecadigital.usb.edu.co:10819/28941
Online access:
https://hdl.handle.net/10819/28941
https://doi.org/10.21500/20112084.5657
Keywords:
Diagnostic classification models
cognitive diagnostic models
CEFR
bifactor models
language
modelos de diagnóstico cognitivo
modelos de clasificación diagnóstica
MCER
modelos bifactoriales
lenguaje
Rights
openAccess
License
http://purl.org/coar/access_right/c_abf2
dc.title.spa.fl_str_mv Evaluación de las habilidades lingüísticas mediante modelos de clasificación diagnóstica: un ejemplo de uso de un instrumento lingüístico
dc.title.translated.spa.fl_str_mv Evaluación de las habilidades lingüísticas mediante modelos de clasificación diagnóstica: un ejemplo de uso de un instrumento lingüístico
dc.creator.fl_str_mv Sideridis, Georgios D.
Tsaousis, Ioannis
Al-Harbi, Khaleel
dc.subject.eng.fl_str_mv Diagnostic classification models
cognitive diagnostic models
CEFR
bifactor models
language
dc.subject.spa.fl_str_mv modelos de diagnóstico cognitivo
modelos de clasificación diagnóstica
MCER
modelos bifactoriales
lenguaje
description El propósito principal del presente estudio fue informar e ilustrar, mediante ejemplos, el uso de Modelos de Clasificación Diagnóstica (DCM) para la evaluación de habilidades y competencias en cognición y rendimiento académico. Un propósito secundario fue comparar y contrastar la psicometría tradicional y contemporánea para la medición de habilidades y competencias. Los DCM se describen siguiendo las líneas de otros modelos psicométricos dentro de la tradición del Análisis Factorial Confirmatorio, como el modelo bifactor y los conocidos modelos mixtos que se utilizan para clasificar a los individuos en subgrupos. La inclusión de términos y restricciones de interacción junto con su naturaleza confirmatoria permite a los DCM evaluar con precisión la posesión de habilidades y competencias. Lo anterior se ilustra utilizando un conjunto de datos empíricos de Arabia Saudita (n = 2,642), que evalúan cómo las habilidades lingüísticas se ajustan a los niveles de competencia conocidos, basados en el MCER (Council of Europe, 2001).
dc.date.accessioned.none.fl_str_mv 2022-10-12T15:16:27Z
2025-08-22T16:59:05Z
dc.date.available.none.fl_str_mv 2022-10-12T15:16:27Z
2025-08-22T16:59:05Z
dc.date.issued.none.fl_str_mv 2022-10-12
dc.type.spa.fl_str_mv Artículo de revista
dc.type.coar.fl_str_mv http://purl.org/coar/resource_type/c_2df8fbb1
dc.type.coar.eng.fl_str_mv http://purl.org/coar/resource_type/c_6501
dc.type.coarversion.eng.fl_str_mv http://purl.org/coar/version/c_970fb48d4fbd8a85
dc.type.content.eng.fl_str_mv Text
dc.type.driver.eng.fl_str_mv info:eu-repo/semantics/article
dc.type.local.eng.fl_str_mv Journal article
dc.type.version.eng.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.identifier.doi.none.fl_str_mv 10.21500/20112084.5657
dc.identifier.eissn.none.fl_str_mv 2011-7922
dc.identifier.issn.none.fl_str_mv 2011-2084
dc.identifier.uri.none.fl_str_mv https://hdl.handle.net/10819/28941
dc.identifier.url.none.fl_str_mv https://doi.org/10.21500/20112084.5657
dc.language.iso.eng.fl_str_mv eng
dc.relation.bitstream.none.fl_str_mv https://revistas.usb.edu.co/index.php/IJPR/article/download/5657/4782
dc.relation.citationedition.eng.fl_str_mv Núm. 2, Año 2022: Vol. 15, No. 2
dc.relation.citationendpage.none.fl_str_mv 104
dc.relation.citationissue.eng.fl_str_mv 2
dc.relation.citationstartpage.none.fl_str_mv 94
dc.relation.citationvolume.eng.fl_str_mv 15
dc.relation.ispartofjournal.eng.fl_str_mv International Journal of Psychological Research
dc.relation.references.eng.fl_str_mv Alderson, C. (2007). The CEFR and the need for more research. The Modern Language Journal, 91, 659–663. https://doi.org/10.1111/j.1540-4781.2007.00627_4.x
Alexander, G. E., Satalich, T. A., Shankle, W. R., & Batchelder, W. H. (2016). A cognitive psychometric model for the psychodiagnostic assessment of memory-related deficits. Psychological assessment, 28 (3), 279. https://doi.org/10.1037/pas0000163
Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling, 16, 397–438. https://doi.org/10.1080/10705510903008204
Bonifay, W., & Cai, L. (2017). On the complexity of item response theory models. Multivariate behavioral research, 52 (4), 465–484. https://doi.org/10.1080/00273171.2017.1309262
Bower, J., Runnels, J., Rutson-Griffiths, A., Schmidt, R., Cook, G., Lehde, L., & Kodate, A. (2017). Aligning a Japanese university’s English language curriculum and lesson plans to the CEFR-J. In F. O’Dwyer, M. Hunke, A. Imig, N. Nagai, N. Naganuma, & M. G. Schmidt (Eds.), Critical, Constructive Assessment of CEFR-informed Language Teaching in Japan and Beyond (pp. 176–225). Cambridge University Press.
Bozard, J. L. (2010). Invariance testing in diagnostic classification models (Doctoral dissertation). The University of Georgia. https://getd.libs.uga.edu/pdfs/bozard_jennifer_l_201005_ma.pdf
Bradshaw, L., Izsák, A., Templin, J., & Jacobson, E. (2014). Diagnosing teachers’ understandings of rational numbers: Building a multidimensional test within the diagnostic classification framework. Educational measurement: Issues and practice, 33 (1), 2–14. https://doi.org/10.1080/15305058.2015.1107076
Bradshaw, L. P., & Madison, M. J. (2016). Invariance properties for general diagnostic classification models. International Journal of Testing, 16 (2), 99–118. https://doi.org/10.1080/15305058.2015.1107076
Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of Q-matrix based diagnostic classification models. Journal of the American Statistical Association, 110 (510), 850–866. https://doi.org/10.1080/01621459.2014.934827
Council of Europe. (2001). Common European Framework of Reference for Languages: Learning, teaching, assessment. Cambridge University Press.
Davier, M. V. (2009). Some notes on the reinvention of latent structure models as diagnostic classification models. Measurement: Interdisciplinary Research and Perspectives, 7 (1), 67–74. https://doi.org/10.1080/15366360902799851
DiBello, L. V., Henson, R. A., & Stout, W. F. (2015). A family of generalized diagnostic classification models for multiple choice option-based scoring. Applied Psychological Measurement, 39 (1), 62–79. https://doi.org/10.1177%2F0146621614561315
Emons, W. H., Glas, C. A., Meijer, R. R., & Sijtsma, K. (2003). Person fit in order-restricted latent class models. Applied psychological measurement, 27 (6), 459–478. https://doi.org/10.1177%2F0146621603259270
Gierl, M. J., Alves, C., & Majeau, R. T. (2010). Using the attribute hierarchy method to make diagnostic inferences about examinees’ knowledge and skills in mathematics: An operational implementation of cognitive diagnostic assessment. International Journal of Testing, 10 (4), 318–341. https://doi.org/10.1080/15305058.2010.509554
Gorin, J. S., & Embretson, S. E. (2006). Item difficulty modeling of paragraph comprehension items. Applied Psychological Measurement, 30, 394–411. https://doi.org/10.1177/0146621606288554
Gorsuch, R. (1983). Factor analysis. Lawrence Erlbaum Associates.
Hansen, M., Cai, L., Monroe, S., & Li, Z. (2016). Limited information goodness-of-fit testing of diagnostic classification item response models. British Journal of Mathematical and Statistical Psychology, 69 (3), 225–252. https://doi.org/10.1111/bmsp.12074
Hasselgreen, A. (2013). Adapting the CEFR for the classroom assessment of young learners’ writing. The Canadian Modern Language Review, 69, 415–435. https://doi.org/10.3138/cmlr.1705.415
Henson, R., DiBello, L., & Stout, B. (2018). A Generalized Approach to Defining Item Discrimination for DCMs. Measurement: Interdisciplinary Research and Perspectives, 16 (1), 18–29. https://doi.org/10.1080/15366367.2018.1436855
Huang, H. Y. (2017). Multilevel cognitive diagnosis models for assessing changes in latent attributes. Journal of Educational Measurement, 54 (4), 440–480. https://doi.org/10.1111/jedm.12156
Jang, E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for Fusion Model application to LanguEdge assessment. Language Testing, 26, 31–73. https://doi.org/10.1177%2F0265532208097336
Jurich, D. P., & Bradshaw, L. P. (2014). An illustration of diagnostic classification modeling in student learning outcomes assessment. International Journal of Testing, 14 (1), 49–72. https://doi.org/10.1080/15305058.2013.835728
Kaya, Y., & Leite, W. L. (2017). Assessing change in latent skills across time with longitudinal cognitive diagnosis modeling: An evaluation of model performance. Educational and psychological measurement, 77 (3), 369–388. https://doi.org/10.1177%2F0013164416659314
Köhn, H. F., & Chiu, C. Y. (2018). How to Build a Complete Q-Matrix for a Cognitively Diagnostic Test. Journal of Classification, 35 (2), 273–299. https://doi.org/10.1007/s00357-018-92550
Kunina-Habenicht, O., Rupp, A. A., & Wilhelm, O. (2009). A practical illustration of multidimensional diagnostic skills profiling: Comparing results from confirmatory factor analysis and diagnostic classification models. Studies in Educational Evaluation, 35 (2-3), 64–70. https://doi.org/10.1016/j.stueduc.2009.10.003
Kusseling, F., & Lonsdale, D. (2013). A corpus-based assessment of French CEFR lexical content. The Canadian Modern Language Review, 69, 436–461. https://doi.org/10.3138/cmlr.1726.436
Little, D. (2007). The common European framework of reference for languages: Perspectives on the making of supranational language education policy. The Modern Language Journal, 91, 645–655. https://doi.org/10.1111/j.1540-4781.2007.00627_2.x
Liu, R., Huggins-Manley, A. C., & Bradshaw, L. (2017). The impact of Q-matrix designs on diagnostic classification accuracy in the presence of attribute hierarchies. Educational and psychological measurement, 77 (2), 220–240. https://doi.org/10.1177%2F0013164416645636
Liu, R., Huggins-Manley, A. C., & Bulut, O. (2018). Retrofitting diagnostic classification models to responses from IRT-based assessment forms. Educational and psychological measurement, 78 (3), 357–383. https://doi.org/10.1177%2F0013164416685599
Madison, M. J., & Bradshaw, L. P. (2015). The effects of Q-matrix design on classification accuracy in the log-linear cognitive diagnosis model. Educational and Psychological Measurement, 75 (3), 491–511. https://doi.org/10.1177%2F0013164414539162
McGill, R. J., Styck, K. M., Palomares, R. S., & Hass, M. R. (2016). Critical issues in specific learning disability identification: What we need to know about the PSW model. Learning Disability Quarterly, 39 (3), 159–170. https://doi.org/10.1177%2F0731948715618504
Rupp, A. A., & Templin, J. L. (2008). Unique characteristics of diagnostic classification models: A comprehensive review of the current state-of-the-art. Measurement, 6 (4), 219–262. https://doi.org/10.1080/15366360802490866
Sessoms, J., & Henson, R. A. (2018). Applications of Diagnostic Classification Models: A Literature Review and Critical Commentary. Measurement: Interdisciplinary Research and Perspectives, 16 (1), 1–17. https://doi.org/10.1080/15366367.2018.1435104
Templin, J., & Bradshaw, L. (2013). Measuring the reliability of diagnostic classification model examinee estimates. Journal of Classification, 30 (2), 251–275. https://doi.org/10.1007/s00357-0139129-4
Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification model estimates using Mplus. Educational Measurement: Issues and Practice, 32 (2), 37–50. https://doi.org/10.1111/emip.12010
Tu, D., Gao, X., Wang, D., & Cai, Y. (2017). A new measurement of internet addiction using diagnostic classification models. Frontiers in psychology, 8, 1768. https://doi.org/10.3389%2Ffpsyg.2017.01768
Walker, G. M., Hickok, G., & Fridriksson, J. (2018). A cognitive psychometric model for assessment of picture naming abilities in aphasia. Psychological assessment, 30 (6), 809–826. https://doi.org/10.1037%2Fpas0000529
Wang, C. (2013). Mutual information item selection method in cognitive diagnostic computerized adaptive testing with short test length. Educational and Psychological Measurement, 73 (6), 1017–1035. https://doi.org/10.1177%2F0013164413498256
Xia, Y., & Zheng, Y. (2018). Asymptotically Normally Distributed Person Fit Indices for Detecting Spuriously High Scores on Difficult Items. Applied psychological measurement, 42 (5), 343–358. https://doi.org/10.1177%2F0146621617730391
Xie, Q. (2017). Diagnosing university students’ academic writing in English: Is cognitive diagnostic modeling the way forward? Educational Psychology, 37 (1), 26–47. https://doi.org/10.1080/01443410.2016.1202900
dc.rights.accessrights.eng.fl_str_mv info:eu-repo/semantics/openAccess
dc.rights.coar.eng.fl_str_mv http://purl.org/coar/access_right/c_abf2
dc.rights.uri.eng.fl_str_mv http://creativecommons.org/licenses/by-nc-nd/4.0
dc.format.mimetype.eng.fl_str_mv application/pdf
dc.publisher.eng.fl_str_mv Universidad San Buenaventura - USB (Colombia)
dc.source.eng.fl_str_mv https://revistas.usb.edu.co/index.php/IJPR/article/view/5657
repository.name.fl_str_mv Repositorio Institucional Universidad de San Buenaventura Colombia
repository.mail.fl_str_mv bdigital@metabiblioteca.com