
A Data-Based Simulation Study of Reliability for an Adaptive Assessment Based on Knowledge Space Theory

  • Christopher Doble
  • Jeffrey Matayoshi (corresponding author)
  • Eric Cosyn
  • Hasan Uzun
  • Arash Karami

Abstract

A large-scale simulation study of the assessment effectiveness of a particular instantiation of knowledge space theory is described. In this study, data from more than 700,000 real mathematics assessments administered with the ALEKS (Assessment and LEarning in Knowledge Spaces) software were used to determine response probabilities for an equal number of simulated assessments, with the purpose of examining reliability. Several measures of reliability, borrowed from existing psychometric approaches, were examined with an eye toward developing measures for evaluating the reliability of adaptive assessments. The results are compared to analogous results for assessments whose mathematics content overlaps that of the ALEKS assessment, and some consequences and future directions are discussed.
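As a rough illustration of the kind of psychometric reliability measure the abstract refers to, the following is a minimal Python sketch of a split-half reliability computation on simulated binary response data. It is not the authors' method: the paper derives response probabilities from real ALEKS assessment data within a knowledge space framework, whereas this sketch assumes a simple logistic response model, and all sizes and parameters shown are hypothetical.

```python
# Minimal illustrative sketch (not the authors' code): simulate binary item
# responses and estimate split-half reliability with the Spearman-Brown
# correction. The logistic response model is an assumed stand-in for the
# data-derived response probabilities described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_items = 10_000, 30  # hypothetical assessment size

# Assumed response model: higher-ability students answer more items correctly.
ability = rng.normal(0.0, 1.0, n_students)
difficulty = rng.normal(0.0, 1.0, n_items)
p_correct = 1.0 / (1.0 + np.exp(difficulty[None, :] - ability[:, None]))

# One simulated assessment per student: True = correct response.
responses = rng.random((n_students, n_items)) < p_correct

# Score odd- and even-numbered items separately (a split-half design).
score_a = responses[:, 0::2].sum(axis=1)
score_b = responses[:, 1::2].sum(axis=1)

# Correlate the two half scores, then apply the Spearman-Brown correction
# to estimate the reliability of the full-length assessment.
r = np.corrcoef(score_a, score_b)[0, 1]
reliability = 2.0 * r / (1.0 + r)
print(f"split-half r = {r:.3f}, Spearman-Brown reliability = {reliability:.3f}")
```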

Keywords

Knowledge space theory · Simulation · Reliability · Adaptive assessment

Notes

Acknowledgements

We are grateful to the associate editor and four anonymous reviewers for many helpful comments on a previous draft of this paper.


Copyright information

© International Artificial Intelligence in Education Society 2019

Authors and Affiliations

  • Christopher Doble (1)
  • Jeffrey Matayoshi (1) (corresponding author)
  • Eric Cosyn (1)
  • Hasan Uzun (1)
  • Arash Karami (1)

  1. McGraw-Hill Education/ALEKS Corporation, Irvine, USA
