Comparing Student Models in Different Formalisms by Predicting Their Impact on Help Success

  • Sébastien Lallé
  • Jack Mostow
  • Vanda Luengo
  • Nathalie Guin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7926)

Abstract

We describe a method to evaluate how student models affect ITS decision quality – their raison d’être. Given logs of randomized tutorial decisions and ensuing student performance, we train a classifier to predict tutor decision outcomes (success or failure) based on situation features, such as student and task. We define a decision policy that selects whichever tutor action the trained classifier predicts is likeliest to lead to a successful outcome in the current situation. The ideal but costly way to evaluate such a policy is to implement it in the tutor and collect new data, which may require months of tutor use by hundreds of students. Instead, we use historical data to simulate a policy by extrapolating its effects from the subset of randomized decisions that happened to follow the policy. We then compare policies based on alternative student models by their simulated impact on the success rate of tutorial decisions. We test the method on data logged by Project LISTEN’s Reading Tutor, which chooses randomly which type of help to give on a word. We report the cross-validated accuracy of predictions based on four types of student models, and compare the resulting policies’ expected success and coverage. The method provides a utility-relevant metric to compare student models expressed in different formalisms.
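The pipeline the abstract describes – train an outcome classifier on logged randomized decisions, derive a policy that picks the help type predicted likeliest to succeed, then estimate that policy's expected success and coverage from the logged decisions that happen to agree with it – can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the column names (student_skill, word_len, help_type, success), the toy data, and the choice of a scikit-learn random forest are assumptions, and the paper's cross-validation and student-model features are omitted.

```python
# Minimal sketch of off-policy evaluation of a help-selection policy,
# assuming hypothetical log columns and an arbitrary classifier.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Logged randomized tutorial decisions: situation features, the help type
# the tutor chose at random, and whether the decision succeeded.
logs = pd.DataFrame({
    "student_skill": [0.2, 0.8, 0.5, 0.9, 0.3, 0.6],
    "word_len":      [4,   7,   5,   8,   3,   6],
    "help_type":     [0,   1,   2,   0,   1,   2],   # randomized action
    "success":       [0,   1,   1,   1,   0,   1],
})

features = ["student_skill", "word_len", "help_type"]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(logs[features], logs["success"])

def policy(situation, help_types=(0, 1, 2)):
    """Pick the help type the classifier predicts is likeliest to succeed."""
    candidates = pd.DataFrame(
        [{**situation, "help_type": h} for h in help_types])[features]
    p_success = clf.predict_proba(candidates)[:, list(clf.classes_).index(1)]
    return help_types[int(np.argmax(p_success))]

# Simulate the policy on historical data: restrict to the randomized
# decisions that happened to agree with it, and measure their success rate.
agrees = logs.apply(
    lambda row: policy(row[["student_skill", "word_len"]].to_dict())
                == row["help_type"],
    axis=1)
coverage = agrees.mean()                      # fraction of logged decisions covered
expected_success = logs.loc[agrees, "success"].mean() if agrees.any() else float("nan")
print(f"coverage={coverage:.2f}, expected success={expected_success:.2f}")
```

Policies built on different student models would be compared by repeating this estimate with each model's predictions supplying the situation features.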

Keywords

Student models · Knowledge tracing · Classification · Help policy



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Sébastien Lallé (1, 2, 3)
  • Jack Mostow (3)
  • Vanda Luengo (1)
  • Nathalie Guin (2)
  1. LIG METAH, Joseph Fourier University, Grenoble, France
  2. LIRIS, University of Lyon 1, CNRS, Lyon, France
  3. Carnegie Mellon University, Pittsburgh, United States of America