Insights from Reparameterized DINA and Beyond

  • Lawrence T. DeCarlo
Chapter
Part of the Methodology of Educational Measurement and Assessment book series (MEMA)

Abstract

The purpose of cognitive diagnosis is to obtain information about the set of skills or attributes that examinees have or do not have. A cognitive diagnostic model (CDM) attempts to extract this information from the pattern of examinees' responses to test items. A number of general CDMs have been proposed, such as the general diagnostic model (GDM; von Davier M, Brit J Math Stat Psychol 61:287–307, 2008), the generalized DINA model (GDINA; de la Torre J, Psychometrika 76:179–199, 2011), and the log-linear cognitive diagnostic model (LCDM; Henson RA, Templin JL, Willse JT, Psychometrika 74:191–210, 2009). These general models can be shown to include well-known models that are often used in cognitive diagnosis, such as the deterministic inputs, noisy "and" gate model (DINA; Junker BW, Sijtsma K, Appl Psychol Meas 25:258–272, 2001), the deterministic inputs, noisy "or" gate model (DINO; Templin JL, Henson RA, Psychol Methods 11:287–305, 2006), the additive cognitive diagnosis model (ACDM; de la Torre J, Psychometrika 76:179–199, 2011), the linear logistic model (LLM; Maris E, Psychometrika 64:187–212, 1999), and the reduced reparameterized unified model (rRUM; Hartz SM, A Bayesian framework for the unified model for assessing cognitive abilities. Unpublished doctoral dissertation, 2002).
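
For orientation, the models named above can be anchored with the standard definitions used throughout this literature (the notation below is conventional, not quoted from the chapter). The DINA model gives the probability of a correct response to item j by examinee i as

\[ P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = g_j^{\,1 - \eta_{ij}} (1 - s_j)^{\,\eta_{ij}}, \qquad \eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}, \]

where \( g_j \) is the guessing parameter, \( s_j \) is the slip parameter, \( \alpha_{ik} \in \{0, 1\} \) indicates mastery of attribute k, and \( q_{jk} \) is the Q-matrix entry linking item j to attribute k. The DINO model replaces the conjunctive kernel \( \eta_{ij} \) with the disjunctive kernel \( \omega_{ij} = 1 - \prod_{k=1}^{K} (1 - \alpha_{ik})^{\,q_{jk}} \), so that mastery of any one required attribute suffices for the higher response probability.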

This chapter starts with a simple reparameterized version of the DINA model and builds up to other models; all of the models are shown to be extensions or variations of the basic model. Working up from a simple form to more general models helps to illustrate basic aspects of the models and associated concepts, such as the meaning of model parameters, issues of estimation, monotonicity, duality, and the relations of the models to one another and to more general forms. In addition, reparameterizing CDMs as latent class models allows one to fit them with standard software for latent class analysis (LCA), which connects CDMs to the broader LCA literature and makes recent advances in LCA directly available.
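
The reparameterization that serves as the chapter's starting point can be sketched as follows (this follows the notation of DeCarlo, 2011, reference 18 below; the sketch is an aid for the reader, not an excerpt from the chapter). Writing the DINA model on the logit scale gives

\[ \operatorname{logit} P(X_{ij} = 1 \mid \eta_{ij}) = f_j + d_j \eta_{ij}, \]

with \( f_j = \operatorname{logit}(g_j) \) acting as a false-alarm parameter and \( d_j = \operatorname{logit}(1 - s_j) - \operatorname{logit}(g_j) \) as a detection parameter, analogous to d' in signal detection theory; monotonicity is simply the requirement \( d_j > 0 \). In this form the model is a restricted latent class model with \( 2^K \) latent classes (one per attribute profile), which is why standard LCA software can fit it. A minimal computational sketch in Python (all function and variable names are hypothetical, for illustration only):

import numpy as np

def rdina_prob(alpha, Q, f, d):
    """Item response probabilities under a reparameterized DINA (RDINA) model.

    alpha : (K,) 0/1 vector of attribute mastery for one latent class
    Q     : (J, K) 0/1 Q-matrix linking items to attributes
    f     : (J,) intercepts, f_j = logit(g_j)
    d     : (J,) detection parameters, d_j = logit(1 - s_j) - logit(g_j)
    """
    # Conjunctive DINA kernel: eta_j = 1 iff every attribute required
    # by item j (q_jk = 1) is mastered (alpha_k = 1).
    eta = np.all(alpha >= Q, axis=1).astype(float)
    # Inverse logit of f_j + d_j * eta_j gives P(X_j = 1 | alpha).
    return 1.0 / (1.0 + np.exp(-(f + d * eta)))

Fitting then amounts to mixing these class-conditional probabilities over the \( 2^K \) attribute profiles, which is exactly the mixture structure that LCA programs such as LEM or Latent GOLD (references 44 and 45) estimate.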

References

  1. Agresti, A. (2002). Categorical data analysis (2nd ed.). Hoboken, NJ: Wiley.
  2. Chen, J., de la Torre, J., & Zhang, Z. (2013). Relative and absolute fit evaluation in cognitive diagnosis modeling. Journal of Educational Measurement, 50, 123–140.
  3. Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of Q-matrix based diagnostic classification models. Journal of the American Statistical Association, 110, 850–866.
  4. Chiu, C.-Y., Douglas, J. A., & Li, X. (2009). Cluster analysis for cognitive diagnosis: Theory and applications. Psychometrika, 74, 633–665.
  5. Chiu, C.-Y., & Köhn, H.-F. (2016). The reduced RUM as a logit model: Parameterization and constraints. Psychometrika, 81(2), 350–370.
  6. Clogg, C. C. (1995). Latent class models. In G. Arminger, C. C. Clogg, & M. E. Sobel (Eds.), Handbook of statistical modeling for the social and behavioral sciences (pp. 211–359). New York, NY: Plenum.
  7. Clogg, C. C., & Eliason, S. R. (1987). Some common problems in log-linear analysis. Sociological Methods and Research, 16, 8–44.
  8. Culpepper, S. A. (2015). Bayesian estimation of the DINA model with Gibbs sampling. Journal of Educational and Behavioral Statistics, 40(5), 454–476.
  9. Dayton, C. M. (1998). Latent class scaling analysis. Thousand Oaks, CA: Sage.
  10. de la Torre, J. (2009). DINA model and parameter estimation: A didactic. Journal of Educational and Behavioral Statistics, 34, 115–130.
  11. de la Torre, J. (2011). The generalized DINA model framework. Psychometrika, 76, 179–199.
  12. de la Torre, J., & Douglas, J. A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69, 333–353.
  13. de la Torre, J., & Lee, Y.-S. (2013). Evaluating the Wald test for item-level comparison of saturated and reduced models in cognitive diagnosis. Journal of Educational Measurement, 50, 355–373.
  14. DeCarlo, L. T. (2002). A latent class extension of signal detection theory, with applications. Multivariate Behavioral Research, 37, 423–451.
  15. DeCarlo, L. T. (2005). A model of rater behavior in essay grading based on signal detection theory. Journal of Educational Measurement, 42, 53–76.
  16. DeCarlo, L. T. (2008). Studies of a latent-class signal-detection model for constructed response scoring (ETS Research Report No. RR-08-63). Princeton, NJ: Educational Testing Service.
  17. DeCarlo, L. T. (2010). Studies of a latent class signal detection model for constructed response scoring II: Incomplete and hierarchical designs (ETS Research Report No. RR-10-08). Princeton, NJ: Educational Testing Service.
  18. DeCarlo, L. T. (2011). On the analysis of fraction subtraction data: The DINA model, classification, latent class sizes, and the Q-matrix. Applied Psychological Measurement, 35, 8–26.
  19. DeCarlo, L. T. (2012). Recognizing uncertainty in the Q-matrix via a Bayesian extension of the DINA model. Applied Psychological Measurement, 36, 447–468.
  20. DeCarlo, L. T., Kim, Y. K., & Johnson, M. S. (2011). A hierarchical rater model for constructed responses with a signal detection rater model. Journal of Educational Measurement, 48, 333–356.
  21. DeCarlo, L. T., & Kinghorn, B. R. C. (2016, April). An exploratory approach to the Q-matrix via Bayesian estimation. Paper presented at the 2016 meeting of the National Council on Measurement in Education, Washington, DC.
  22. George, A. C., Robitzsch, A., Kiefer, T., Gross, J., & Uenlue, A. (2016). The R package CDM for cognitive diagnosis models. Journal of Statistical Software, 74, 1–24.
  23. Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York, NY: Wiley.
  24. Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26, 301–321.
  25. Hartz, S. M. (2002). A Bayesian framework for the unified model for assessing cognitive abilities. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.
  26. Heller, J., & Wickelmaier, F. (2013). Minimum discrepancy estimation in probabilistic knowledge structures. Electronic Notes in Discrete Mathematics, 42, 49–56.
  27. Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a family of cognitive diagnosis models using log-linear models with latent variables. Psychometrika, 74, 191–210.
  28. Huebner, A., & Wang, C. (2011). A note on comparing examinee classification methods for cognitive diagnosis models. Educational and Psychological Measurement, 71, 407–419.
  29. Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25, 258–272.
  30. Kim, Y. K. (2009). Combining constructed response items and multiple choice items using a hierarchical rater model. Doctoral dissertation, Teachers College, Columbia University.
  31. Köhn, H.-F., & Chiu, C.-Y. (2016). A proof of the duality of the DINA model and the DINO model. Journal of Classification, 33, 171–184.
  32. Köhn, H.-F., & Chiu, C.-Y. (2017). A procedure for assessing the completeness of the Q-matrices of cognitively diagnostic tests. Psychometrika, 82, 112–132.
  33. Liu, J., Xu, G., & Ying, Z. (2011). Learning item-attribute relationship in Q-matrix based diagnostic classification models (Report No. arXiv:1106.0721v1). New York, NY: Columbia University, Department of Statistics. Retrieved from https://arxiv.org/abs/1106.0721v1
  34. Ma, W., & de la Torre, J. (2017). GDINA: The generalized DINA model framework (R package version 1.4.2). Retrieved from https://cran.r-project.org/web/packages/GDINA/index.html
  35. Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: A user’s guide (2nd ed.). New York, NY: Cambridge University Press.
  36. Macready, G. B., & Dayton, C. M. (1977). The use of probabilistic models in the assessment of mastery. Journal of Educational Statistics, 2, 99–120.
  37. Maris, E. (1999). Estimating multiple classification latent class models. Psychometrika, 64, 187–212.
  38. Philipp, M., Strobl, C., de la Torre, J., & Zeileis, A. (2018). On the estimation of standard errors in cognitive diagnosis models. Journal of Educational and Behavioral Statistics, 43, 88–115.
  39. Rupp, A. A., Templin, J., & Henson, R. A. (2010). Diagnostic measurement: Theory, methods, and applications. New York, NY: Guilford Press.
  40. Spiegelhalter, D., Thomas, A., Best, N., & Lunn, D. (2014). OpenBUGS version 3.2.3 user manual. Helsinki, Finland. Retrieved from http://www.openbugs.net/w/Manuals
  41. Templin, J., & Bradshaw, L. (2014). Hierarchical diagnostic classification models: A family of models for estimating and testing attribute hierarchies. Psychometrika, 79, 317–339.
  42. Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification and model estimates using Mplus. Educational Measurement: Issues and Practice, 32, 37–50.
  43. Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11, 287–305.
  44. Vermunt, J. K. (1997). LEM: A general program for the analysis of categorical data. Tilburg, The Netherlands: Tilburg University.
  45. Vermunt, J. K., & Magidson, J. (2016). Technical guide for Latent GOLD 5.1: Basic, advanced, and syntax. Belmont, MA: Statistical Innovations Inc.
  46. von Davier, M. (2008). A general diagnostic model applied to language testing data. British Journal of Mathematical and Statistical Psychology, 61, 287–307.
  47. von Davier, M. (2013). The DINA model as a constrained general diagnostic model: Two variants of a model equivalency. British Journal of Mathematical and Statistical Psychology, 67, 49–71.
  48. von Davier, M. (2014). The log-linear cognitive diagnostic model (LCDM) as a special case of the general diagnostic model (GDM) (ETS Research Report No. RR-14-40). Princeton, NJ: Educational Testing Service.
  49. Wickens, T. D. (2002). Elementary signal detection theory. New York, NY: Oxford University Press.
  50. Xu, G., & Zhang, S. (2016). Identifiability of diagnostic classification models. Psychometrika, 81, 625–649.
  51. Zhang, S. S., DeCarlo, L. T., & Ying, Z. (2013). Non-identifiability, equivalence classes, and attribute-specific classification in Q-matrix based cognitive diagnosis models. Retrieved from http://arxiv.org/abs/1303.0426v1

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Human Development, Teachers College, Columbia University, New York, NY, USA
