Using Eye-Tracking to Determine the Impact of Prior Knowledge on Self-Regulated Learning with an Adaptive Hypermedia-Learning Environment
Recent research on self-regulated learning (SRL) uses multi-channel data, such as eye-tracking, to measure the deployment of key cognitive and metacognitive SRL processes during learning with adaptive hypermedia systems. In this study, we investigated how 147 college students’ proportional learning gains (PLGs), proportion of time spent on areas of interest (AOIs), and frequency of fixations on AOI-pairs differed based on their prior knowledge of the overall science content, and of specific content related to sub-goals, as they learned with MetaTutor. Results indicated that students with low prior sub-goal knowledge had significantly higher PLGs and spent a significantly larger proportion of time fixating on diagrams compared to students with high prior sub-goal knowledge. In addition, students with low prior knowledge had significantly higher frequencies of fixations on some AOI-pairs compared to students with high prior knowledge. The results have implications for using eye-tracking (and other process data) to understand the behavioral patterns associated with underlying cognitive and metacognitive SRL processes and to provide real-time adaptive instruction that ensures effective learning.
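The three measures named above can be sketched in code. The snippet below is a minimal illustration, not the study's actual analysis pipeline: the normalized-gain formula shown is one common definition of PLG, and the fixation data structure is hypothetical (real SMI exports use a different format); the paper's exact operationalizations may differ.

```python
# Illustrative sketch of the three eye-tracking/learning measures.
# All names and data structures here are hypothetical examples.

def proportional_learning_gain(pre, post, max_score=1.0):
    """Normalized gain: fraction of possible improvement achieved
    from pretest to posttest (one common PLG definition)."""
    if max_score == pre:
        return 0.0
    return (post - pre) / (max_score - pre)

def aoi_time_proportions(fixations):
    """fixations: list of (aoi_label, duration_ms) tuples.
    Returns each AOI's share of total fixation time."""
    total = sum(d for _, d in fixations)
    props = {}
    for aoi, d in fixations:
        props[aoi] = props.get(aoi, 0.0) + d
    return {aoi: t / total for aoi, t in props.items()} if total else {}

def aoi_pair_frequencies(fixations):
    """Count transitions between consecutive fixations on
    distinct AOIs (i.e., fixation frequencies on AOI-pairs)."""
    labels = [a for a, _ in fixations]
    pairs = {}
    for a, b in zip(labels, labels[1:]):
        if a != b:
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
    return pairs

fx = [("text", 300), ("diagram", 500), ("text", 200)]
print(proportional_learning_gain(0.4, 0.7))  # 0.5 of the possible gain
print(aoi_time_proportions(fx))              # {'text': 0.5, 'diagram': 0.5}
print(aoi_pair_frequencies(fx))              # one transition each way
```

Normalizing the gain by the room left to improve (rather than using raw gain) is what allows fair comparison between the low and high prior-knowledge groups, since high-prior-knowledge students start closer to ceiling.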
Keywords: Metacognition · Self-regulated learning · Eye tracking · Prior knowledge · Adaptive hypermedia-learning environments · Process data
This study was supported by funding from the National Science Foundation (DRL 1431552). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.