
Behaviormetrika, Volume 45, Issue 2, pp 475–493

Degree of error in Bayesian knowledge tracing estimates from differences in sample sizes

  • Stefan Slater
  • Ryan S. Baker
Original Paper

Abstract

Bayesian knowledge tracing (BKT) is a knowledge inference model that underlies many modern adaptive learning systems. The primary goal of BKT is to predict the point at which a student has reached mastery of a particular skill. In this paper, we examine the degree to which changes in sample size influence the values of the parameters within BKT models, and the effect that these errors have on predictions of student mastery. We generate simulated data sets of student responses from known underlying BKT parameters and the variance they involve, fit new models to these data sets, and compare the error between the fitted parameters and the seed parameters. We discuss the implications of sample size for the trustworthiness of BKT parameters derived in learning settings and make recommendations for the number of data points that should be used in creating BKT models.
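In the standard BKT formulation, four parameters govern each skill: initial knowledge P(L0), learning rate P(T), guess P(G), and slip P(S). As a minimal illustration of the simulate-then-trace setup the abstract describes, the Python sketch below generates synthetic response sequences from a set of seed parameters and runs the standard BKT update to estimate when mastery (conventionally P(Ln) ≥ 0.95) is reached. The parameter values, function names, and data-set sizes here are illustrative assumptions, not the paper's actual seed parameters or fitting procedure.

```python
import random

# Illustrative seed parameters (assumed values, not the paper's):
# p_l0 = P(L0) initial knowledge, p_t = P(T) learning rate,
# p_g = P(G) guess, p_s = P(S) slip.
SEED = {"p_l0": 0.3, "p_t": 0.1, "p_g": 0.2, "p_s": 0.1}

def simulate_student(params, n_items, rng=random):
    """Simulate one student's correct/incorrect responses from seed parameters."""
    known = rng.random() < params["p_l0"]  # hidden knowledge state
    responses = []
    for _ in range(n_items):
        if known:
            correct = rng.random() >= params["p_s"]  # slips with P(S)
        else:
            correct = rng.random() < params["p_g"]   # guesses with P(G)
        responses.append(int(correct))
        if not known:
            known = rng.random() < params["p_t"]     # may learn at each opportunity
    return responses

def trace(params, responses, mastery_threshold=0.95):
    """Standard BKT update over a response sequence; returns P(Ln) after
    each opportunity and the first step at which mastery is reached."""
    p_l = params["p_l0"]
    estimates, mastered_at = [], None
    for step, correct in enumerate(responses, start=1):
        # Bayesian update of P(Ln) given the observed response.
        if correct:
            num = p_l * (1 - params["p_s"])
            denom = num + (1 - p_l) * params["p_g"]
        else:
            num = p_l * params["p_s"]
            denom = num + (1 - p_l) * (1 - params["p_g"])
        p_l_given_obs = num / denom
        # Account for learning between opportunities.
        p_l = p_l_given_obs + (1 - p_l_given_obs) * params["p_t"]
        estimates.append(p_l)
        if mastered_at is None and p_l >= mastery_threshold:
            mastered_at = step
    return estimates, mastered_at

# Generate a small simulated data set and trace one student.
data = [simulate_student(SEED, n_items=10) for _ in range(100)]
estimates, mastered_at = trace(SEED, data[0])
print(mastered_at, [round(p, 3) for p in estimates])
```

In a study of the kind the abstract outlines, one would refit the four parameters to data sets like `data` at varying sample sizes and compare the fitted values against `SEED` to quantify estimation error.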

Keywords

Bayesian knowledge tracing · Knowledge inference · Student model · Simulated data

Notes

Compliance with ethical standards

Conflict of interest

The authors declare no conflicts of interest for this work.


Copyright information

© The Behaviormetric Society 2018

Authors and Affiliations

  1. University of Pennsylvania, Philadelphia, USA
