
Degree of error in Bayesian knowledge tracing estimates from differences in sample sizes

Abstract

Bayesian knowledge tracing (BKT) is a knowledge inference model that underlies many modern adaptive learning systems. The primary goal of BKT is to predict the point at which a student has reached mastery of a particular skill. In this paper, we examine the degree to which changes in sample size influence the values of the parameters within BKT models, and the effect that these errors have on predictions of student mastery. We generate simulated data sets of student responses based on underlying BKT parameters and the variance they involve, fit new models to these data sets, and compare the error between the predicted parameters and the seed parameters. We discuss the implications of sample size for the trustworthiness of BKT parameters derived in learning settings and make recommendations for the number of data points that should be used in creating BKT models.
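The simulation-and-refit procedure described above rests on the standard BKT formulation: four parameters, P(L0) (initial knowledge), P(T) (learning rate), P(G) (guess), and P(S) (slip), with a Bayesian update of the knowledge estimate after each response. A minimal sketch of the two halves of that procedure, generating responses from seed parameters and tracing knowledge with the standard update equations, might look as follows; the parameter values and function names are illustrative, not taken from the paper.

```python
import random

def simulate_responses(p_l0, p_t, p_g, p_s, n_items, rng=None):
    """Simulate one student's correct/incorrect sequence from seed BKT parameters."""
    rng = rng or random.Random(0)
    known = rng.random() < p_l0  # initial knowledge state
    responses = []
    for _ in range(n_items):
        # A student who knows the skill slips with probability p_s;
        # one who does not guesses correctly with probability p_g.
        correct = rng.random() < ((1 - p_s) if known else p_g)
        responses.append(correct)
        # Learning transition after each practice opportunity.
        if not known and rng.random() < p_t:
            known = True
    return responses

def trace_knowledge(responses, p_l0, p_t, p_g, p_s):
    """Standard BKT filter: return P(L_n) after each observed response."""
    p_l = p_l0
    estimates = []
    for correct in responses:
        if correct:
            cond = p_l * (1 - p_s) / (p_l * (1 - p_s) + (1 - p_l) * p_g)
        else:
            cond = p_l * p_s / (p_l * p_s + (1 - p_l) * (1 - p_g))
        p_l = cond + (1 - cond) * p_t  # apply learning transition
        estimates.append(p_l)
    return estimates
```

Under the mastery-learning convention of Corbett and Anderson (1995), practice on a skill would stop once the traced estimate exceeds a threshold such as 0.95; the paper's error analysis concerns how sample size distorts the fitted parameters that drive this estimate.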


Fig. 1 (diagram reproduced from Baker 2018)

References

  • Baker RS (2018) 4.2: Bayesian knowledge tracing. In: Big data and education, 4th edn. University of Pennsylvania, Philadelphia

  • Baker RSJd, Corbett AT, Aleven V (2008) More accurate student modeling through contextual estimation of slip and guess probabilities in Bayesian knowledge tracing. In: Proceedings of the 9th international conference on intelligent tutoring systems, pp 406–415

  • Baker RSJd, Corbett AT, Gowda SM, Wagner AZ, MacLaren BM, Kauffman LR, Mitchell AP, Giguere S (2010) Contextual slip and prediction of student performance after use of an intelligent tutor. In: Proceedings of the 18th annual conference on user modeling, adaptation, and personalization, pp 52–63

  • Baker RSJd, Pardos Z, Gowda S, Nooraei B, Heffernan N (2011) Ensembling predictions of student knowledge within intelligent tutoring systems. In: Proceedings of 19th international conference on user modeling, adaptation, and personalization, pp 13–24

  • Beck J (2014) The field of EDM: where we came from, and where we’re going. In: Presentation at the 7th international conference on educational data mining

  • Beck JE, Gong Y (2013) Wheel-spinning: students who fail to master a skill. In: International conference on artificial intelligence in education, pp 431–440

  • Beck J, Xiong X (2013) Limits to accuracy: how well can we do at student modeling? In: Educational data mining 2013

  • Chen CM, Lee HM, Chen YH (2005) Personalized e-learning system using item response theory. Comput Educ 44(3):237–255


  • Cohen J (1992) Statistical power analysis. Curr Dir Psychol Sci 1(3):98–101


  • Corbett AT, Anderson JR (1995) Knowledge tracing: modeling the acquisition of procedural knowledge. User Model User-Adap Inter 4(4):253–278


  • Desmarais MC, Baker RSJd (2012) A review of recent advances in learner and skill modeling in intelligent learning environments. User Model User-Adap Inter 22(1–2):9–38


  • Fancsali S, Nixon T, Ritter S (2013) Optimal and worst-case performance of mastery learning assessment with bayesian knowledge tracing. In: Educational data mining 2013

  • Gong Y, Beck JE, Heffernan NT (2010) Comparing knowledge tracing and performance factor analysis by using multiple model fitting procedures. International conference on intelligent tutoring systems. Springer, Berlin, pp 35–44


  • Gowda SM, Rowe JP, Baker RSJd, Chi M, Koedinger KR (2011) Improving models of slipping, guessing, and moment-by-moment learning with estimates of skill difficulty. In: Proceedings of the 4th international conference on educational data mining, pp 199–208

  • Hershkovitz A, Baker RSJd, Gobert J, Wixon M, Sao Pedro M (2013) Discovery with models: a case study on carelessness in computer-based science inquiry. Am Behav Sci 57(10):1479–1498


  • Khajah M, Lindsey RV, Mozer MC (2016) How deep is knowledge tracing? In: Proceedings of the 2016 educational data mining conference

  • Koedinger KR, Corbett A (2006) Cognitive tutors: technology bringing learning sciences to the classroom. Cambridge handbook of the learning sciences, pp 61–71

  • Lee JI, Brunskill E (2012) The impact on individualizing student models on necessary practice opportunities. In: International educational data mining society

  • Liu R, Koedinger KR (2015) Variations in learning rate: student classification based on systematic residual error patterns across practice opportunities. In: International educational data mining society

  • Pardos ZA, Heffernan NT (2011) KT-IDEM: introducing item difficulty to the knowledge tracing model. In: International conference on user modeling, adaptation, and personalization, pp 243–254

  • Pelánek R (2016) Applications of the Elo rating system in adaptive educational systems. Comput Educ 98:169–179


  • Reye J (2004) Student modelling based on belief networks. Int J Artif Intell Educ 14(1):63–96


  • Ritter S, Harris TK, Nixon T, Dickison D, Murray RC, Towle B (2009) Reducing the knowledge tracing space. In: Proceedings of the educational data mining conference 2009, pp 151–159

  • Wang Y, Heffernan N (2013) Extending knowledge tracing to allow partial credit: Using continuous versus binary nodes. In: International conference on artificial intelligence in education, pp 181–188

  • Wilson KH, Karklin Y, Han B, Ekanadham C (2016) Back to the basics: Bayesian extensions of IRT outperform neural networks for proficiency estimation. In: Proceedings of the 2016 educational data mining conference, pp 539–544


Author information


Corresponding author

Correspondence to Stefan Slater.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest for this work.

Additional information

Communicated by Ronny Scherer and Marie Wiberg.

About this article


Cite this article

Slater, S., Baker, R.S. Degree of error in Bayesian knowledge tracing estimates from differences in sample sizes. Behaviormetrika 45, 475–493 (2018). https://doi.org/10.1007/s41237-018-0072-x


Keywords

  • Bayesian knowledge tracing
  • Knowledge inference
  • Student model
  • Simulated data