
Adaptive Testing Using a General Diagnostic Model

  • Jill-Jênn Vie
  • Fabrice Popineau
  • Yolaine Bourda
  • Éric Bruillard
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9891)

Abstract

In online learning platforms such as MOOCs, computerized assessment needs to be optimized to prevent boredom and dropout among learners: they should spend as little time as possible on tests while still receiving valuable feedback. Computerized adaptive testing (CAT) can reduce the number of questions asked without sacrificing accuracy by selecting each question according to the examinee's past performance. CAT algorithms fall into two categories: summative CATs, which measure the level of examinees, and formative CATs, which provide feedback at the end of the test by specifying which knowledge components need further work. In this paper, we formalize the problem of test-size reduction as one of predicting student performance and propose GenMA, a new hybrid CAT algorithm based on the general diagnostic model that is both summative and formative. Using real datasets, we compare our model to popular CAT models and show that GenMA achieves better accuracy while asking fewer questions than the existing models.
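As background for the adaptive-selection idea described above, the sketch below shows a plain summative CAT loop under a two-parameter logistic (2PL) IRT model with maximum-Fisher-information item selection. It is only an illustration of the generic CAT principle, not the paper's GenMA algorithm; the item bank and the simulated examinee are hypothetical.

```python
# Illustrative sketch of a summative CAT loop under a 2PL IRT model.
# NOT the paper's GenMA algorithm; item parameters and the simulated
# examinee below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: discrimination a_j and difficulty b_j per item.
a = rng.uniform(0.5, 2.0, size=20)
b = rng.normal(0.0, 1.0, size=20)

def p_correct(theta, a, b):
    """2PL probability of answering correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Item information I_j(theta) = a_j^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def estimate_theta(responses, grid=np.linspace(-4, 4, 161)):
    """Maximum-likelihood ability estimate over a grid of theta values."""
    log_lik = np.zeros_like(grid)
    for j, y in responses:
        p = p_correct(grid, a[j], b[j])
        log_lik += np.where(y, np.log(p), np.log(1.0 - p))
    return grid[np.argmax(log_lik)]

# Simulate one examinee with (hypothetical) true ability 0.8.
true_theta = 0.8
theta_hat = 0.0           # initial ability estimate
responses = []            # list of (item index, 0/1 response)
asked = set()

for _ in range(8):        # ask only 8 of the 20 available questions
    # Select the unasked item with maximum information at the current estimate.
    info = fisher_information(theta_hat, a, b)
    info[list(asked)] = -np.inf
    j = int(np.argmax(info))
    asked.add(j)
    # Observe the (simulated) response, then re-estimate ability.
    y = int(rng.random() < p_correct(true_theta, a[j], b[j]))
    responses.append((j, y))
    theta_hat = estimate_theta(responses)
    print(f"item {j:2d}  response {y}  theta_hat {theta_hat:+.2f}")
```

A summative CAT of this kind only tracks a scalar ability estimate; the formative component the paper adds on top of the general diagnostic model additionally maps performance back to knowledge components.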


Acknowledgements

This work is supported by the Paris-Saclay Institut de la Société Numérique funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Jill-Jênn Vie (1)
  • Fabrice Popineau (1)
  • Yolaine Bourda (1)
  • Éric Bruillard (2)
  1. LRI – Bât. 650 Ada Lovelace, Université Paris-Sud, Orsay, France
  2. ENS Cachan – Bât. Cournot, Cachan, France
