Diagnostic Measurement

  • Meghan Sullivan
  • Hongling Lao
  • Jonathan Templin


With diagnostic measurement, the aim is to identify the causes or underlying properties of a problem or characteristic in order to make classification-based decisions. These decisions rest on a nuanced profile of attributes or skills inferred from observable characteristics of an individual. In this chapter, we discuss the psychometric methodologies involved in diagnostic measurement. We define basic terms in measurement, describe diagnostic classification models in the context of latent variable models, demonstrate an empirical example, and discuss how diagnostic assessment can be useful in management and related fields.
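As a hypothetical illustration (not drawn from the chapter itself), the classification idea behind diagnostic classification models can be sketched with a small DINA-style model: a Q-matrix links items to the attributes they require, per-item slip and guess parameters govern response probabilities, and an examinee's observed responses yield a posterior probability for each attribute-mastery profile. All item counts, parameter values, and function names below are illustrative assumptions.

```python
import itertools
import numpy as np

# Hypothetical Q-matrix: 3 items, 2 attributes; Q[i, k] = 1 if item i requires attribute k.
Q = np.array([[1, 0],
              [0, 1],
              [1, 1]])
slip = np.array([0.1, 0.1, 0.2])    # P(incorrect | all required attributes mastered)
guess = np.array([0.2, 0.2, 0.1])   # P(correct  | some required attribute missing)

def p_correct(profile):
    """Per-item probability of a correct response for a 0/1 attribute profile."""
    eta = np.all(Q <= profile, axis=1)      # 1 if the profile covers every required attribute
    return np.where(eta, 1 - slip, guess)

def posterior(responses, prior=None):
    """Posterior probability of each attribute profile given a 0/1 response vector."""
    profiles = list(itertools.product([0, 1], repeat=Q.shape[1]))
    prior = prior or [1 / len(profiles)] * len(profiles)
    responses = np.array(responses)
    post = []
    for prof, pr in zip(profiles, prior):
        p = p_correct(np.array(prof))
        likelihood = np.prod(np.where(responses == 1, p, 1 - p))
        post.append(pr * likelihood)
    post = np.array(post) / sum(post)       # normalize to a probability distribution
    return dict(zip(profiles, post))

# Classify a respondent who answered items 1 and 3 correctly but missed item 2:
print(posterior([1, 0, 1]))
```

The output is a profile-by-profile posterior; the classification decision is typically the profile with the highest posterior probability, which is what makes the resulting score profile diagnostic rather than a single continuous trait estimate.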



Copyright information

© The Author(s) 2018

Authors and Affiliations

  • Meghan Sullivan (1)
  • Hongling Lao (1)
  • Jonathan Templin (1)

  1. University of Kansas, Lawrence, USA
