Psychometrika, Volume 81, Issue 2, pp 461–482

An Upgrading Procedure for Adaptive Assessment of Knowledge

  • Pasquale Anselmi
  • Egidio Robusto
  • Luca Stefanutti
  • Debora de Chiusole

Abstract

In knowledge space theory, existing adaptive assessment procedures can be applied only when suitable estimates of their parameters are available. In this paper, an iterative procedure is proposed that upgrades its parameters as the number of assessments increases. The first assessments are run using parameter values that favor accuracy over efficiency. Subsequent assessments are run using new parameter values estimated from the incomplete response patterns collected in previous assessments. Parameter estimation is carried out through a new probabilistic model for missing-at-random data. Two simulation studies show that, as the number of assessments increases, the performance of the proposed procedure approaches that of gold-standard procedures.
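The upgrading loop described above can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the authors' BLIM-based method: the toy "assessment" asks a random subset of items of a simulated student (with fixed slip and guess rates), and the "upgrade" step is a crude majority-vote disagreement estimator standing in for the paper's probabilistic model for missing-at-random data. Only the overall shape matters: start with conservative parameter values, accumulate incomplete response patterns, and re-estimate between batches of assessments.

```python
import random
from collections import defaultdict

def run_assessment(true_state, items, n_ask, beta_true, eta_true, rng):
    """Toy assessment: ask a random subset of items; the simulated student
    slips with probability beta_true and guesses with eta_true.
    Returns an *incomplete* response pattern (item -> 0/1)."""
    pattern = {}
    for q in rng.sample(items, n_ask):
        if q in true_state:
            pattern[q] = int(rng.random() >= beta_true)  # careless error
        else:
            pattern[q] = int(rng.random() < eta_true)    # lucky guess
    return pattern

def upgrade_parameters(history, floor=0.01):
    """Toy stand-in for the estimation step: score each response against
    the majority vote for its item and use the overall disagreement rate
    as a symmetric (beta, eta) estimate."""
    counts = defaultdict(lambda: [0, 0])
    for pat in history:
        for q, r in pat.items():
            counts[q][r] += 1
    disagree = total = 0
    for pat in history:
        for q, r in pat.items():
            majority = int(counts[q][1] >= counts[q][0])
            disagree += int(r != majority)
            total += 1
    rate = max(floor, disagree / max(total, 1))
    return rate, rate

def upgrading_procedure(n_batches=5, per_batch=50, rng=None):
    """Run batches of assessments; after each batch, upgrade the
    parameters from the incomplete patterns collected so far."""
    rng = rng or random.Random(0)
    items = ["a", "b", "c", "d", "e"]
    # small hypothetical knowledge structure of possible true states
    states = [set(), {"a"}, {"a", "b"}, {"a", "b", "c"}, set(items)]
    beta, eta = 0.0, 0.0  # conservative start: accuracy over efficiency
    history, trajectory = [], [(beta, eta)]
    for _ in range(n_batches):
        for _ in range(per_batch):
            state = rng.choice(states)
            history.append(run_assessment(state, items, 3, 0.05, 0.05, rng))
        beta, eta = upgrade_parameters(history)
        trajectory.append((beta, eta))
    return trajectory
```

In the paper the estimation step is the new probabilistic model for missing-at-random data and the assessment step is a genuine adaptive procedure; both are replaced here by placeholders so the batch-then-upgrade structure is visible in a few lines.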

Keywords

adaptive assessment · continuous procedure · BLIM · missing data · knowledge space theory · knowledge structure


Copyright information

© The Psychometric Society 2016

Authors and Affiliations

  • Pasquale Anselmi (1)
  • Egidio Robusto (1)
  • Luca Stefanutti (1)
  • Debora de Chiusole (1)

  1. Department FISPPA, University of Padua, Padua, Italy
