IMProB-It: Automatic feedback model for iterative programming tasks in introductory programming courses


Author:
Leytón, Ginna
Resource type:
Doctoral thesis
Publication date:
2025
Institution:
Universidad del Valle
Repository:
Repositorio Digital Univalle
Language:
eng
OAI Identifier:
oai:bibliotecadigital.univalle.edu.co:10893/36036
Online access:
https://hdl.handle.net/10893/36036
Keywords:
Programming
Loop
Iterative programming
Machine learning
Artificial intelligence
Learning Management System (LMS)
Rights
openAccess
License
https://creativecommons.org/licenses/by-nc-nd/4.0/
Description
Summary: The growing interest in integrating artificial intelligence (AI) in education has spurred the development of tools and platforms that support learning, especially in computer programming. While many of these tools are designed to provide summative assessments and general feedback, only a few are tailored to address programming tasks that involve iteration structures, and an even smaller number incorporate Gries' theory into their design. As a result, few solutions effectively support students' understanding of loops in introductory programming courses. Students often struggle with loop construction, finding it difficult to grasp the scope of a loop, which code segments will repeat, and how many times they will execute. Automated feedback for loop-based programming can play a key role in improving students' comprehension of these concepts; however, its effectiveness depends on factors such as the type of feedback provided and the level of intervention. This thesis addresses these difficulties by proposing an automated feedback model to support students' understanding of iterative programming. Grounded in program-correctness theory and machine learning (ML), the model evaluates student code, identifies common errors, and delivers targeted feedback that explains the mistakes made in loop-based tasks.

This research contributes to the field of automated feedback in programming in several ways. We developed a specialized dataset of programming tasks featuring "while" loops, annotated to capture typical student errors, including issues with loop initialization, termination, and state transformation. Using Gries' loop programming theory, we built a detailed taxonomy to categorize and label these errors. We then trained ML models on this dataset to classify and predict errors in students' "while" loop tasks. Next, we employed prompt engineering with OpenAI's GPT-4 to generate automated feedback aligned with Gries' theory, tailoring it to the errors detected by the ML classifier. Finally, we integrated these models into INGInious, a learning management system (LMS), through an API named IMProB-It, allowing students to receive specific feedback on programming tasks involving "while" loops.

To evaluate IMProB-It, we conducted a quasi-experimental study with first-semester students in systems engineering and related fields. Participants were divided into experimental and control groups, and students in the experimental group received automated feedback on their solutions to "while" loop tasks. The results indicate that students who received specific feedback found it beneficial for understanding loop mechanics, as reflected in survey-measured satisfaction levels. This research demonstrates the potential of ML-driven automated feedback to enhance the learning experience for novice programmers, addressing an essential need in computer science education.
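To make the Gries-theory error categories mentioned in the abstract (loop initialization, termination, and state transformation) concrete, the following is a minimal illustrative sketch, not an example from the thesis dataset: a "while" loop task annotated with the component each line plays in Gries' framework. The task itself (summing 1..n) is a hypothetical choice for illustration.

```python
def sum_to(n: int) -> int:
    """Sum the integers 1..n with a while loop, annotated per Gries' components."""
    # Initialization: establish the invariant
    # "total == sum of 1..(i-1)" before the loop starts.
    total = 0
    i = 1
    # Guard / termination: the loop runs while i <= n; since i strictly
    # increases each iteration, the guard eventually becomes false.
    while i <= n:
        # State transformation: update the state so the invariant still
        # holds while making progress toward termination.
        total += i
        i += 1
    # On exit: the guard is false (i == n + 1), so the invariant gives
    # total == sum of 1..n.
    return total

print(sum_to(5))  # 15
```

Typical student errors the taxonomy targets map directly onto these annotations: starting `i` at the wrong value (initialization), writing a guard that never becomes false (termination), or forgetting to update `i` inside the body (state transformation).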