
Performance Comparison of Item-to-Item Skills Models with the IRT Single Latent Trait Model

  • Michel C. Desmarais
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6787)

Abstract

Assessing a learner’s mastery of a set of skills is a fundamental issue in intelligent learning environments. We compare the predictive performance of two approaches for training a learner model with domain data. One is based on the principle of building the model solely from observable data items, such as exercises or test items. Skill modelling is not part of the training phase but is instead dealt with at a later stage. The other approach incorporates a single latent skill in the model. We compare the capacity of both approaches to accurately predict item outcome (binary success or failure) from a subset of item outcomes. Three types of item-to-item models based on standard Bayesian modelling algorithms are tested: (1) Naive Bayes, (2) Tree-Augmented Naive Bayes (TAN), and (3) a K2 Bayesian Classifier. Their performance is compared to the widely used IRT-2PL approach, which incorporates a single latent skill. The results show that the item-to-item approaches perform as well as, or better than, the IRT-2PL approach over four widely different data sets, but the differences vary considerably among the data sets. We discuss the implications of these results and the issues relating to the practical use of item-to-item models.
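
To give a concrete feel for the comparison described above, here is a minimal, hypothetical sketch in Python. It is not the paper's implementation: the synthetic response matrix, the train/test split, and the use of an examinee's mean score on the remaining items as an ability proxy (in place of a proper marginal maximum likelihood 2PL fit) are all assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's code): contrast an item-to-item
# Naive Bayes predictor with a rough 2PL-style baseline on synthetic data.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic response matrix: 500 examinees x 20 items, generated from a
# single latent trait so both approaches have signal to exploit.
n_students, n_items = 500, 20
theta = rng.normal(size=n_students)                  # latent abilities
a = rng.uniform(0.5, 2.0, size=n_items)              # item discriminations
b = rng.uniform(-1.5, 1.5, size=n_items)             # item difficulties
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))  # 2PL success probabilities
X = rng.binomial(1, p)                               # observed 0/1 outcomes

train, test = np.arange(0, 400), np.arange(400, 500)

nb_auc, irt_auc = [], []
for j in range(n_items):
    others = np.delete(np.arange(n_items), j)

    # (1) Item-to-item approach: predict item j from the other item outcomes.
    nb = BernoulliNB().fit(X[train][:, others], X[train, j])
    nb_auc.append(roc_auc_score(
        X[test, j], nb.predict_proba(X[test][:, others])[:, 1]))

    # (2) Rough 2PL-style baseline: use the mean score on the other items as
    # an ability proxy and fit a per-item logistic curve on it. (A real
    # IRT-2PL fit would use marginal maximum likelihood; this proxy is only
    # an illustrative stand-in.)
    proxy_tr = X[train][:, others].mean(axis=1, keepdims=True)
    proxy_te = X[test][:, others].mean(axis=1, keepdims=True)
    lr = LogisticRegression().fit(proxy_tr, X[train, j])
    irt_auc.append(roc_auc_score(
        X[test, j], lr.predict_proba(proxy_te)[:, 1]))

print(f"mean AUC, item-to-item Naive Bayes: {np.mean(nb_auc):.3f}")
print(f"mean AUC, 2PL-style proxy:          {np.mean(irt_auc):.3f}")
```

Because the synthetic data are generated from a single latent trait, this toy setup favours the latent-trait baseline by construction; the paper's finding is that on real data sets the item-to-item models match or exceed IRT-2PL, with differences that vary considerably across data sets.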

Keywords

IRT · Bayesian Models · TAN · Learner models

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Michel C. Desmarais, Polytechnique Montréal, Montréal, Canada
