Contrasting Different Forms of Feedback in Intelligent Tutoring Systems
For my final project in 'Advanced Topics in Personalized Online Learning', I worked on a team that developed a set of cognitive tutors for an experiment contrasting various forms of feedback. Feedback and its implications for learning are among the most discussed topics in the cognitive tutoring literature. The general consensus in the feedback research we reviewed is that, for the most part, feedback is effective and can lead to significant learning gains. To be effective, feedback should be immediate, positive, relate to the correct responses, and help students understand the goals of their practice.
However, there remains debate on which form of feedback is the most effective, across different domains and populations, and in different environments. Many studies have supported the general idea that more holistic feedback is more effective in terms of learning gains.
For our project, we decided to investigate feedback types in self-directed environments in which no instructor is present, the typical setting for personalized and adaptive learning. To do this, we experimentally contrasted three conditions that varied only in the type of feedback delivered. We chose conditions that would have high contrast, be relevant to online tutoring systems, and be easy to describe to other practitioners in the field. We also decided to vary only one variable, the feedback, and to use a novel domain (i.e., experimental design in the field of psychology) for our experiment.
To contrast the effectiveness of the forms of feedback, we created three tutors in CTAT, identical in form and appearance but different in the type of feedback returned. Each tutor returned only one type of feedback, and our experimental design was between-subjects: each participant received only one type of feedback across all the questions they answered. We implemented three questions in each tutor module, each with four sub-questions. The sub-questions should be considered separate questions that reference a common scenario, rather than parts of a single question (i.e., these are not “inner loop” questions, which allow for multiple paths through a problem). Finally, to limit confounding variables that could affect how students interpreted the feedback administered to them, we did not include hints in our tutor design, since hints may be considered another form of feedback to students.
The feedback types were:
- Simple feedback
- Error-specific feedback
- Elaborated feedback
Based on past studies, we hypothesized that elaborated feedback would lead to the greatest learning gains, followed by error-specific feedback, followed by simple feedback.
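As a rough illustration of how the three conditions differ for the same wrong answer, the sketch below shows one way a tutor could select a message per condition. The function name, condition labels, and example messages are ours and purely illustrative; this is not the actual CTAT implementation.

```python
# Hypothetical sketch of the three feedback conditions; the messages are
# illustrative examples from the experimental-design domain, not the
# actual tutor content.

def give_feedback(condition: str, correct: bool) -> str:
    """Return a feedback message for one of the three feedback conditions."""
    if correct:
        return "Correct!"
    if condition == "simple":
        # Verification only: the student learns the answer was wrong.
        return "Incorrect."
    if condition == "error-specific":
        # Flags the particular mistake the student made.
        return ("Incorrect: that is the dependent variable, "
                "not the independent variable.")
    if condition == "elaborated":
        # Flags the mistake and explains the underlying concept.
        return ("Incorrect: that is the dependent variable, not the "
                "independent variable. The independent variable is the one "
                "the experimenter manipulates; the dependent variable is "
                "the one that is measured.")
    raise ValueError(f"unknown condition: {condition}")

print(give_feedback("simple", correct=False))  # → Incorrect.
```

Because the design was between-subjects, a given participant would only ever see messages from one branch of this function.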
The experiment was administered to nine Carnegie Mellon students with varying degrees of knowledge of psychology and experimental design. We first administered a written pre-test to the participants. Next, each student was logged in to the tutor configured with their randomly assigned feedback condition. Finally, we administered a written post-test identical to the pre-test. The purpose of administering a pre- and post-test was to detect a change in knowledge state, normalized by how much knowledge could be gained: (Post − Pre) / (Points Possible − Pre).
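The normalized gain measure above can be computed directly; a minimal sketch (the function name is ours, and the scores in the example are made up for illustration):

```python
# Normalized learning gain: the fraction of the available improvement a
# student actually achieved between pre-test and post-test.

def normalized_gain(pre: float, post: float, points_possible: float) -> float:
    """(post - pre) / (points_possible - pre); undefined if pre is at ceiling."""
    if points_possible == pre:
        raise ValueError("pre-test score is already at ceiling")
    return (post - pre) / (points_possible - pre)

# Illustrative scores: pre = 4, post = 7, out of 10 possible points.
# The student gained 3 of the 6 points still available.
print(normalized_gain(4, 7, 10))  # → 0.5
```

Normalizing by the points still available (rather than raw gain) keeps students who start with high pre-test scores from being penalized for having little room to improve.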