Modeling and Scaffolding Self-Explanation Across Domains and Activities

Part of the Springer International Handbooks of Education book series (SIHE, volume 28)


In this chapter, we describe our research on providing computer-based support for the meta-cognitive skill of self-explanation. The distinguishing element of our work is that the support we provide is student-adaptive, i.e., tailored to the needs and traits of each individual learner, such as relevant domain knowledge and the tendency to self-explain spontaneously. Adapting to these elements requires building models that can assess them in real time during interaction. We illustrate how we built such models for two different intelligent learning environments (ILEs): one that helps college students self-explain worked-out solutions to physics problems, and one that supports self-explanation during interaction with an interactive simulation of mathematical functions.





Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  1. Department of Computer Science, University of British Columbia, Vancouver, Canada
