KT-IDEM: Introducing Item Difficulty to the Knowledge Tracing Model

  • Zachary A. Pardos
  • Neil T. Heffernan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6787)

Abstract

Many models in computer education and assessment take item difficulty into account. However, despite the positive results of models that incorporate difficulty, knowledge tracing (KT) is still used in its basic form because its skill-level diagnostic abilities are very useful to teachers. This leads to the research question we address in this work: can KT be effectively extended to capture item difficulty and improve prediction accuracy? There have been a variety of extensions to KT in recent years. One such extension is Baker's contextual guess and slip model. While this model has shown positive gains over KT in internal validation testing, it has not performed well relative to KT on unseen in-tutor data or post-test data; it has, however, proven a valuable model to use alongside other models. The contextual guess and slip model increases the complexity of KT by adding regression steps and feature generation, and the added complexity of generating features across datasets may have hindered its performance. One of the aims of our work is therefore to make the most minimal modification to the KT model that adds item difficulty, keeping the change limited to the topology of the model. We analyze datasets from two intelligent tutoring systems with KT and a model we call KT-IDEM (Item Difficulty Effect Model) and show that substantial performance gains can be achieved with this minor modification that incorporates item difficulty.
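
To make the topological difference concrete, the sketch below contrasts the two models in Python: classic KT uses a single guess/slip pair for every item of a skill, whereas KT-IDEM conditions the question node on the item identity, giving each item its own guess/slip pair. All parameter values and item names here are hypothetical illustrations, not values fitted in the paper.

```python
# Minimal sketch of Knowledge Tracing (KT) vs. KT-IDEM, based on the abstract's
# description. Parameter values and item names are illustrative assumptions.

def kt_update(p_learned, correct, p_guess, p_slip, p_transit):
    """One Bayesian Knowledge Tracing step: condition the mastery estimate on
    the observed response, then apply the learning (transition) probability."""
    if correct:
        likelihood_known = p_learned * (1.0 - p_slip)
        likelihood_unknown = (1.0 - p_learned) * p_guess
    else:
        likelihood_known = p_learned * p_slip
        likelihood_unknown = (1.0 - p_learned) * (1.0 - p_guess)
    posterior = likelihood_known / (likelihood_known + likelihood_unknown)
    return posterior + (1.0 - posterior) * p_transit

# Classic KT: one guess/slip pair shared by all items of the skill.
KT_PARAMS = {"p_guess": 0.14, "p_slip": 0.09}

# KT-IDEM: the question node is additionally conditioned on the item ID,
# so each item carries its own guess/slip pair (hypothetical values).
KT_IDEM_PARAMS = {
    "item_easy": {"p_guess": 0.30, "p_slip": 0.05},
    "item_hard": {"p_guess": 0.05, "p_slip": 0.20},
}

p_mastery = 0.4  # prior P(L0), illustrative
for item, correct in [("item_easy", True), ("item_hard", False)]:
    params = KT_IDEM_PARAMS[item]          # classic KT would use KT_PARAMS here
    p_mastery = kt_update(p_mastery, correct,
                          params["p_guess"], params["p_slip"], p_transit=0.1)
    print(f"after {item}: P(mastery) = {p_mastery:.3f}")
```

The update rule itself is unchanged; only the source of the guess/slip parameters differs, which is why the modification can be kept to a change in the model's topology.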

Keywords

Knowledge Tracing · Bayesian Networks · Item Difficulty · User Modeling · Data Mining

References

  1. Johns, J., Mahadevan, S., Park Woolf, B.: Estimating Student Proficiency Using an Item Response Theory Model. In: Ikeda, M., Ashley, K., Chan, T.-W. (eds.) ITS 2006. LNCS, vol. 4053, pp. 473–480. Springer, Heidelberg (2006)
  2. Koedinger, K.R., Corbett, A.T.: Cognitive tutors: Technology bringing learning science to the classroom. In: Sawyer, K. (ed.) The Cambridge Handbook of the Learning Sciences, pp. 61–78. Cambridge University Press, New York (2006)
  3. Corbett, A.T., Anderson, J.R.: Knowledge Tracing: Modeling the Acquisition of Procedural Knowledge. User Modeling and User-Adapted Interaction 4, 253–278 (1995)
  4. Baker, R.S.J.d., Corbett, A.T., Aleven, V.: More accurate student modeling through contextual estimation of slip and guess probabilities in Bayesian knowledge tracing. In: Woolf, B.P., Aïmeur, E., Nkambou, R., Lajoie, S. (eds.) ITS 2008. LNCS, vol. 5091, pp. 406–415. Springer, Heidelberg (2008)
  5. Baker, R.S.J.d., Corbett, A.T., Gowda, S.M., Wagner, A.Z., MacLaren, B.A., Kauffman, L.R., Mitchell, A.P., Giguere, S.: Contextual Slip and Prediction of Student Performance after Use of an Intelligent Tutor. In: De Bra, P., Kobsa, A., Chin, D. (eds.) UMAP 2010. LNCS, vol. 6075, pp. 52–63. Springer, Heidelberg (2010)
  6. Pardos, Z.A., Heffernan, N.T.: Modeling Individualization in a Bayesian Networks Implementation of Knowledge Tracing. In: De Bra, P., Kobsa, A., Chin, D. (eds.) UMAP 2010. LNCS, vol. 6075, pp. 255–266. Springer, Heidelberg (2010)
  7. Pardos, Z., Dailey, M., Heffernan, N.: Learning what works in ITS from non-traditional randomized controlled trial data. International Journal of Artificial Intelligence in Education (in press, 2011)
  8. Razzaq, L., Feng, M., Nuzzo-Jones, G., Heffernan, N.T., Koedinger, K.R., Junker, B., et al.: The Assistment project: Blending assessment and assisting. In: Looi, C.K., McCalla, G., Bredeweg, B., Breuker, J. (eds.) Proceedings of the 12th International Conference on Artificial Intelligence in Education, pp. 555–562. IOS Press, Amsterdam (2005)
  9. Corbett, A.T.: Cognitive computer tutors: Solving the two-sigma problem. In: Bauer, M., Gmytrasiewicz, P.J., Vassileva, J. (eds.) UM 2001. LNCS (LNAI), vol. 2109, pp. 137–147. Springer, Heidelberg (2001)
  10. Pardos, Z.A., Heffernan, N.T.: Using HMMs and bagged decision trees to leverage rich features of user and skill from an intelligent tutoring system dataset. Journal of Machine Learning Research Workshop and Conference Proceedings (in press)
  11. Bell, R., Koren, Y.: Lessons from the Netflix Prize Challenge. SIGKDD Explorations 9, 75–79 (2007)
  12. Yu, H.-F., Lo, H.-Y., Hsieh, H.-P., Lou, J.-K., McKenzie, T.G., Chou, J.-W., et al.: Feature Engineering and Classifier Ensemble for KDD Cup 2010. In: Proceedings of the KDD Cup 2010 Workshop, pp. 1–16 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Zachary A. Pardos (1)
  • Neil T. Heffernan (1)
  1. Department of Computer Science, Worcester Polytechnic Institute, Worcester, USA
