Project
Optimizing Personalized Learning at Scale by Setting Up Failure for Success
Big online data is increasingly valuable in the study of latent cognitive processes. In the field of educational psychology, the heightened popularity of online learning environments (OLEs) in recent decades invites new avenues for studying and promoting learning. OLEs have the advantage of providing large-scale educational data; the platform Prowise Learn, for example, provides responses to math- and language-related items from hundreds of thousands of students across more than a decade. Such rich data provide a unique opportunity to dive into the intricacies of learning processes that are harder to detect in traditional lab studies or with survey data.
Specifically, this project aims to promote success in learning by investigating its counterpart: failure. Failure-related behavior, such as making errors, can inform educational measurement and personalized learning algorithms. We leverage big educational data and apply statistical models to answer questions related to failure in learning. Research questions include:
1. What causes underlie errors? We use the Systematic Error Tracing Algorithm to detect which misconceptions are likely leading to individuals’ error responses.
2. What are the consequences of making errors? We aim to extend and triangulate recent findings that making several errors in a row predicts quitting. We can also relate this to problem-skipping behavior.
3. Is there individual variability in these relationships? We aim to identify student-specific factors that can be targeted to improve each student's learning process. In the context of quitting: do some students persist better after errors than others, and what factors underlie these differences? One study has shown that users tend to slow down after making an error, and that greater slowing was associated with more learning; we therefore incorporate response times as one of several ways of examining individual-specific factors. We also examine the stability of individual differences across datasets, learning domains, and time.
4. Intervention research: We conduct A/B tests to causally determine the impact of our investigated factors on student success. These studies also allow us to inform the algorithm of the system to optimize its design and implementation.
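As a minimal illustration of the kind of behavioral signal described in question 2, the sketch below computes the length of the error streak that immediately precedes the end of a session. All names and the data format are hypothetical; they are not the project's actual analysis pipeline or the schema of any OLE dataset.

```python
def trailing_error_streak(responses):
    """Length of the run of consecutive errors at the end of a session.

    `responses` is a chronological sequence of booleans:
    True = correct answer, False = error. (Illustrative format only.)
    """
    streak = 0
    for correct in reversed(responses):
        if correct:
            break  # streak is broken by the most recent correct answer
        streak += 1
    return streak

# Example session: two correct answers, then three errors, then the
# student quits; the quit is preceded by a streak of 3 errors.
session = [True, True, False, False, False]
print(trailing_error_streak(session))  # 3
```

Aggregating such streak lengths over sessions that end in quitting versus sessions that continue would be one simple way to probe whether consecutive errors predict quitting, as the project proposes to test.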
Ultimately, answering these questions will help us understand cognitive processes and inform educational practice. From a methodological standpoint, the vast amount of available data allows us to utilize contemporary statistical techniques derived from both psychometrics and artificial intelligence research to investigate the proposed questions. Lastly, we aim to use several independent datasets from different OLEs to enable triangulation of findings and to evaluate the validity and reliability of identified predictors.
Supervisors
Dr. Alexander O. Savi
Dr. Abe D. Hofman
Prof. Dr. Han L. J. van der Maas
Financed by
European Research Council
Period
October 2023 – October 2027