User Modeling – A Notoriously Black Art

  • Michael Yudelson
  • Philip I. Pavlik Jr.
  • Kenneth R. Koedinger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6787)

Abstract

This paper is intended as guidance for those who are familiar with the user modeling field but less fluent in statistical methods. It addresses potential problems with user model selection and evaluation that are often clear to expert modelers but not obvious to others. These problems are frequently the result of a deceptively straightforward application of statistics to user modeling (e.g., over-reliance on model fit metrics). In such cases, absolute trust in arguably shallow model accuracy measures can lead to selecting models that are hard to interpret, less meaningful, over-fit, and less generalizable. We offer a list of questions to consider in order to avoid these modeling pitfalls. Each question is backed by an illustrative example based on the user modeling approach called Performance Factors Analysis (PFA) [9].
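The PFA model [9] cited in the abstract predicts the probability of a correct response by summing, over the knowledge components (KCs) an item exercises, a difficulty term plus weighted counts of the learner's prior successes and failures on each KC, passed through a logistic link. The sketch below illustrates that prediction rule only; the function name and all parameter values are illustrative assumptions, not fitted values from the paper.

```python
import math

def pfa_probability(kcs, beta, gamma, rho, successes, failures):
    """Predicted probability of a correct response on an item tagged with KCs `kcs`.

    beta[j]  - easiness of KC j; gamma[j] / rho[j] - weights on the learner's
    prior success / failure counts for KC j (successes[j], failures[j]).
    """
    # Log-odds: sum of per-KC contributions, as in the PFA model form.
    m = sum(beta[j] + gamma[j] * successes[j] + rho[j] * failures[j] for j in kcs)
    return 1.0 / (1.0 + math.exp(-m))

# Illustrative single-KC example (hypothetical parameters):
beta = {"kc1": -0.5}
gamma = {"kc1": 0.2}   # each prior success raises the log-odds
rho = {"kc1": 0.1}     # failures also carry (smaller) practice credit
p = pfa_probability(["kc1"], beta, gamma, rho, {"kc1": 3}, {"kc1": 1})
# m = -0.5 + 0.2*3 + 0.1*1 = 0.2, so p = 1 / (1 + e^-0.2) ≈ 0.55
```

In practice the beta, gamma, and rho parameters are estimated from logged student data (e.g., by logistic regression), which is where the paper's model-selection cautions apply.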

Keywords

User modeling · educational data mining · model selection · model complexity · model parsimony


References

  1. Baayen, R.H., Davidson, D.J., Bates, D.M.: Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language 59(4), 390–412 (2008)
  2. Cen, H., Koedinger, K.R., Junker, B.: Comparing Two IRT Models for Conjunctive Skills. In: Woolf, B.P., Aïmeur, E., Nkambou, R., Lajoie, S. (eds.) ITS 2008. LNCS, vol. 5091, pp. 796–798. Springer, Heidelberg (2008)
  3. Clark, H.H.: The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of Verbal Learning and Verbal Behavior 12, 335–359 (1973)
  4. Corbett, A.T., Anderson, J.R.: Locus of feedback control in computer-based tutoring: impact on learning rate, achievement and attitudes. In: Proceedings of CHI 2001, Human Factors in Computing Systems, Seattle, WA, USA, March 31-April 5, pp. 245–252. ACM, New York (2001)
  5. Cummins, D.D., Kintsch, W., Reusser, K., Weimer, R.: The role of understanding in solving algebra word problems. Cognitive Psychology 20, 405–438 (1988)
  6. Koedinger, K.R., Nathan, M.J.: The real story behind story problems: Effects of representation on quantitative reasoning. Journal of the Learning Sciences 13, 129–164 (2004)
  7. van der Linden, W.J., Hambleton, R.K. (eds.): Handbook of Modern Item Response Theory. Springer, New York (1997)
  8. Pavlik, P.I., Cen, H., Koedinger, K.R.: Learning factors transfer analysis: Using learning curve analysis to automatically generate domain models. In: Barnes, T., Desmarais, M., Romero, C., Ventura, S. (eds.) Proceedings of the 2nd International Conference on Educational Data Mining, Cordoba, Spain, pp. 121–130 (2009)
  9. Pavlik Jr., P.I., Cen, H., Koedinger, K.R.: Performance factors analysis – A new alternative to knowledge tracing. In: Dimitrova, V., Mizoguchi, R. (eds.) Proceedings of the 14th International Conference on Artificial Intelligence in Education, Brighton, England (2009)
  10. Pitt, M.A., Myung, I.J., Zhang, S.: Toward a method of selecting among computational models of cognition. Psychological Review 109(3), 472–491 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Michael Yudelson (1)
  • Philip I. Pavlik Jr. (1)
  • Kenneth R. Koedinger (1)
  1. Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, USA
