Cognitive diagnosis models for estimation of misconceptions analyzing multiple-choice data

Abstract

Incorrect options for multiple-choice questions are often intentionally designed so that they will be selected by an examinee who holds a particular misconception. Determining whether an examinee possesses a misconception is therefore useful for educational purposes. In the present paper, two statistical models are developed that estimate examinees’ possession of misconceptions by analyzing multiple-choice data, which are unscored data. The existing Bug-DINO model can estimate examinees’ possession of misconceptions by converting multiple-choice data to binary data, which are scored data (\(1=\) correct, \(0=\) incorrect). However, converting multiple-choice data to binary data causes a loss of information, because which incorrect option an examinee chooses is important information about the examinee’s knowledge state. The three models (the two developed models and the Bug-DINO model) are compared in a simulation study, and the developed models are applied to Reading Skill Test data.
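To make the information loss concrete, the following minimal sketch shows the binary conversion that the Bug-DINO approach requires; the toy responses, answer key, and NumPy usage are illustrative assumptions, not data or code from the paper.

```python
import numpy as np

# Toy multiple-choice data: rows = examinees, columns = items.
# Entries are the chosen option (1-4); the answer key is hypothetical.
responses = np.array([
    [1, 3, 2],   # examinee A
    [2, 1, 4],   # examinee B
    [1, 1, 1],   # examinee C
])
key = np.array([1, 1, 1])  # keyed-correct option for each item

# Scoring collapses the options to binary data (1 = correct, 0 = incorrect).
binary = (responses == key).astype(int)
print(binary)
# [[1 0 0]
#  [0 1 0]
#  [1 1 1]]
# On the third item, examinees A and B are both scored 0, yet A chose
# option 2 and B chose option 4. If each distractor was written to capture
# a different misconception, the binary data cannot tell the two apart.
```

Models fitted to the binary matrix, such as the Bug-DINO model, typically pass it through a DINO-type “or” gate \(\omega_{ij}=1-\prod_{k}(1-\alpha_{ik})^{q_{jk}}\), where \(\alpha_{ik}\) indicates whether examinee \(i\) holds misconception \(k\) and \(q_{jk}\) is the Q-matrix entry; the models developed here instead analyze the option-level responses directly.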


References

  1. Arai NH, Todo N, Arai T, Bunji K, Sugawara S, Inuzuka M, Matsuzaki T, Ozaki K (2017) Reading skill test to diagnose basic language skills in comparison to machines. In: Proceedings of the 39th annual cognitive science society meeting (CogSci 2017), pp 1556–1561

  2. Chen J (2017) A residual-based approach to validate Q-Matrix specifications. Appl Psychol Meas 41(4):277–293

  3. de la Torre J, Douglas J (2004) Higher-order latent trait models for cognitive diagnosis. Psychometrika 69(3):333–353

  4. de la Torre J (2009) A cognitive diagnosis model for cognitively based multiple-choice options. Appl Psychol Meas 33(3):163–183

  5. de la Torre J (2011) The generalized DINA model framework. Psychometrika 76(2):179–199

  6. de la Torre J, Chiu C-Y (2016) A general method of empirical Q-matrix validation. Psychometrika 81(2):253–273

  7. DiBello LV, Henson RA, Stout WF (2015) A family of generalized diagnostic classification models for multiple choice option-based scoring. Appl Psychol Meas 39(1):62–79

  8. Gelman A, Rubin DB (1992) Inference from iterative simulation using multiple sequences. Stat Sci 7(4):457–472

  9. Hartz S (2002) A Bayesian framework for the unified model for assessing cognitive abilities: blending theory with practicality (Doctoral dissertation). University of Illinois, Urbana-Champaign

  10. Hastings WK (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57(1):97–109

  11. Im S, Corter JE (2011) Statistical consequences of attribute misspecification in the rule space method. Educ Psychol Meas 71(4):712–731

  12. Junker BW, Sijtsma K (2001) Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Appl Psychol Meas 25(3):258–272

  13. Köhn HF, Chiu C-Y (2017) A procedure for assessing the completeness of the Q-Matrices of cognitively diagnostic tests. Psychometrika 82(1):112–132

  14. Kuo B-C, Chen C-H, Yang C-W, Mok MMC (2016) Cognitive diagnostic models for tests with multiple-choice and constructed-response items. Educ Psychol 36(6):1115–1133

  15. Kuo B-C, Chen C-H, de la Torre J (2018) A cognitive diagnosis model for identifying coexisting skills and misconceptions. Appl Psychol Meas 42(3):179–191

  16. Maris E (1999) Estimating multiple classification latent class models. Psychometrika 64(2):187–212

  17. Minchen ND, de la Torre J, Liu Y (2017) A cognitive diagnosis model for continuous response. J Educ Behav Stat 42(6):651–677

  18. Ozaki K (2015) DINA models for multiple-choice items with few parameters: considering incorrect answers. Appl Psychol Meas 39(6):431–447

  19. Richards JC, Schmidt R (2002) Dictionary of language teaching and applied linguistics, 3rd edn. Longman, London

  20. Rupp AA, Templin JL (2008) The effects of Q-matrix misspecification on parameter estimates and classification accuracy in the DINA model. Educ Psychol Meas 68(1):78–96

  21. Templin J, Henson R (2006) Measurement of psychological disorders using cognitive diagnosis models. Psychol Methods 11(3):287–305


Acknowledgements

This research was funded by Grant-in-Aid for Scientific Research (C) 18K03057.

Author information

Corresponding author

Correspondence to Koken Ozaki.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Communicated by Russell George Almond.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 200 KB)

About this article

Cite this article

Ozaki, K., Sugawara, S. & Arai, N. Cognitive diagnosis models for estimation of misconceptions analyzing multiple-choice data. Behaviormetrika 47, 19–41 (2020). https://doi.org/10.1007/s41237-019-00100-9

Keywords

  • Multiple-choice item
  • Cognitive diagnosis model
  • Misconception
  • DINO model