The Reparameterized Unified Model System: A Diagnostic Assessment Modeling Approach

  • William Stout
  • Robert Henson
  • Lou DiBello
  • Benjamin Shear
Chapter
Part of the Methodology of Educational Measurement and Assessment book series (MEMA)

Abstract

This chapter considers the Reparameterized Unified Model (RUM). The RUM is a refinement of the DINA model in which the particular required skills that are lacking influence the probability of a correct response (Hartz, A Bayesian framework for the Unified Model for assessing cognitive abilities: Blending theory with practicality. Dissertation, University of Illinois at Urbana-Champaign, 2001; Roussos, DiBello, Stout, Hartz, Henson, & Templin, The fusion model skills diagnosis system. In: J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications. New York: Cambridge University Press, pp. 275–318, 2007). The RUM is a diagnostic classification model (DCM) that models binary (right/wrong scored) items as the basis for a skills diagnostic classification system for scoring quizzes or tests. Refined DCMs developed from the RUM are discussed in some detail. Specifically, the commonly used "Reduced" RUM and an extension of the RUM to option-scored items, referred to as the Extended RUM model (ERUM; DiBello, Henson, & Stout, Appl Psychol Measur 39:62–79, 2015), are also considered. For the ERUM, the latent skills space is augmented by the inclusion of misconceptions whose possession reduces the probability of a correct response and increases the probability of certain incorrect responses, thus increasing classification accuracy. In addition to a discussion of the foundational identifiability issue that occurs for option-scored DCMs, available software built with the SHINY package in R, including various appropriate "model checking" fit and discrimination indices, is discussed and is available for users.
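The Reduced RUM described above has a simple item response function: an examinee who has mastered all skills the item requires answers correctly with probability π*, and each required-but-lacking skill k multiplies that probability by a penalty parameter r*_k in (0, 1). The following is a minimal illustrative sketch (not the authors' software; parameter names are ours) of that computation:

```python
import numpy as np

def reduced_rum_prob(pi_star, r_star, q, alpha):
    """Reduced RUM probability of a correct item response.

    pi_star : probability of a correct response when all required skills are mastered
    r_star  : penalty in (0, 1) for each required skill that is lacking
    q       : one row of the Q-matrix; q[k] = 1 if the item requires skill k
    alpha   : examinee skill profile; alpha[k] = 1 if skill k is mastered
    """
    r_star, q, alpha = (np.asarray(x, dtype=float) for x in (r_star, q, alpha))
    # Each required skill that is NOT mastered contributes a factor r_star[k];
    # mastered or non-required skills contribute a factor of 1.
    return pi_star * np.prod(r_star ** (q * (1 - alpha)))

# Examinee mastering both required skills: probability is pi_star itself.
p_master = reduced_rum_prob(0.9, [0.5, 0.4], q=[1, 1], alpha=[1, 1])   # 0.9
# Lacking the second required skill multiplies pi_star by r_star[1] = 0.4.
p_partial = reduced_rum_prob(0.9, [0.5, 0.4], q=[1, 1], alpha=[1, 0])  # 0.36
```

This illustrates the key diagnostic feature noted in the abstract: which skills are lacking (not merely how many) determines the response probability, since each skill carries its own penalty r*_k.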

References

  1. Beeley, C. (2013). Web application development with R using SHINY. Birmingham, UK: PACKT Publishing.
  2. Bradshaw, L., & Templin, J. (2014). Combining scaling and classification: A psychometric model for scaling ability and diagnosing misconceptions. Psychometrika, 79, 403–425.
  3. Briggs, D. C., Alonzo, A. C., Schwab, C., & Wilson, M. (2006). Diagnostic assessment with ordered multiple-choice items. Educational Assessment, 11(1), 33–63.
  4. Chen, Y., Culpepper, S., Chen, Y., & Douglas, J. (2018). Bayesian estimation of the DINA Q matrix. Psychometrika, 83, 89–108.
  5. Chiu, C., Kohn, H., & Wu, H. (2016). Fitting the reduced RUM with Mplus: A tutorial. International Journal of Testing, 16(4), 331–351.
  6. Chung, M., & Johnson, M. (2017). An MCMC algorithm for estimating the Reduced RUM. https://arxiv.org/abs/1710.08412
  7. de la Torre, J. (2009). A cognitive diagnosis model for cognitively-based multiple choice options. Applied Psychological Measurement, 33, 163–183.
  8. DiBello, L., Stout, W., & Roussos, L. (1995). Unified cognitive/psychometric diagnostic assessment likelihood-based classification techniques. In P. Nichols, S. Chipman, & R. Brennan (Eds.), Cognitively diagnostic assessment (pp. 361–389). Hillsdale, NJ: Lawrence Erlbaum Associates.
  9. DiBello, L. V., Henson, R., & Stout, W. (2015). A family of generalized diagnostic classification models for multiple choice option-based scoring. Applied Psychological Measurement, 39, 62–79.
  10. DiBello, L. V., & Stout, W. (2008). Arpeggio documentation and analyst manual. Department of Statistics, University of Illinois, Champaign-Urbana, IL (Contact W. Stout).
  11. Feng, Y., Habing, B., & Huebner, A. (2014). Parameter estimation of the reduced RUM using the EM algorithm. Applied Psychological Measurement, 38, 137–150.
  12. Gelman, A., & Rubin, D. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–511.
  13. Gilks, W. R., Richardson, S., & Spiegelhalter, D. J. (1996). Markov chain Monte Carlo. London, UK: Chapman and Hall/CRC.
  14. Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26, 301–321.
  15. Han, Z., & Johnson, M. (this volume). Global model and item-level fit indices. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  16. Hartz, S. M. (2001). A Bayesian framework for the Unified Model for assessing cognitive abilities: Blending theory with practicality. Dissertation. University of Illinois at Urbana-Champaign.
  17. Henson, R., DiBello, L., & Stout, W. (2018). A generalized approach to defining item discrimination for DCMs. Measurement: Interdisciplinary Research and Perspectives, 16(1), 18–29. https://doi.org/10.1080/15366367.2018.1436855
  18. Jang, E. E. (2005). A validity narrative: Effects of reading skills diagnosis on teaching and learning in the context of NG TOEFL. Dissertation. University of Illinois at Urbana-Champaign.
  19. Jang, E. E. (2009). Cognitive diagnostic assessment of L2 reading comprehension ability: Validity arguments for fusion model application to language assessment. Language Testing, 26(1), 31–73. https://doi.org/10.1177/0265532208097336
  20. Jorion, N., Gane, B. D., James, K., Schroeder, L., DiBello, L. V., & Pellegrino, J. W. (2015). An analytic framework for evaluating the validity of concept inventory claims: Framework for evaluating validity of concept inventories. Journal of Engineering Education, 104(4), 454–496. https://doi.org/10.1002/jee.20104
  21. Kim, A.-Y. (2015). Exploring ways to provide diagnostic feedback with an ESL placement test: Cognitive diagnostic assessment of L2 reading ability. Language Testing, 32(2), 227–258. https://doi.org/10.1177/0265532214558457
  22. Kim, Y.-H. (2011). Diagnosing EAP writing ability using the Reduced Reparameterized Unified Model. Language Testing, 28(4), 509–541. https://doi.org/10.1177/0265532211400860
  23. Kunina-Habenicht, O., Rupp, A., & Wilhelm, O. (2009). A practical illustration of multidimensional diagnostic skills profiling: Comparing results from confirmatory factor analysis and diagnostic classification models. Studies in Educational Evaluation, 35, 64–70.
  24. Kuo, B., Chen, C., & de la Torre, J. (2017). A cognitive diagnosis model for identifying coexisting skills and misconceptions. Applied Psychological Measurement, 42(3), 179–191.
  25. Lee, Y.-W., & Sawaki, Y. (2009a). Application of three cognitive diagnosis models to ESL reading and listening assessments. Language Assessment Quarterly, 6(3), 239–263. https://doi.org/10.1080/15434300903079562
  26. Lee, Y.-W., & Sawaki, Y. (2009b). Cognitive diagnosis and Q-matrices in language assessment. Language Assessment Quarterly, 6(3), 169–171. https://doi.org/10.1080/15434300903059598
  27. Li, H., & Suen, H. K. (2013a). Constructing and validating a Q-matrix for cognitive diagnostic analyses of a reading test. Educational Assessment, 18(1), 1–25. https://doi.org/10.1080/10627197.2013.761522
  28. Li, H., & Suen, H. K. (2013b). Detecting native language group differences at the subskills level of reading: A differential skill functioning approach. Language Testing, 30(2), 273–298. https://doi.org/10.1177/0265532212459031
  29. Liu, J., & Johnson, M. (this volume). Estimating CDMs using MCMC. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  30. Liu, X., & Kang, H. (this volume). Q matrix learning via latent variable selection and identifiability. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  31. Ma, W. (this volume). The GDINA R-package. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  32. Masters, J. (2012). Diagnostic geometry assessment project: Validity evidence (Technical Report). Measured Progress Innovation Laboratory.
  33. Plummer, M., Best, N., Cowles, K., & Vines, K. (2006). CODA: Convergence diagnosis and output analysis for MCMC. R News, 6, 7–11.
  34. Ranjbaran, F., & Alavi, M. (2017). Developing a reading comprehension test for cognitive diagnostic assessment: A RUM analysis. Studies in Educational Evaluation, 55, 167–179.
  35. Robitzsch, A., & George, A. (this volume). The R package CDM for diagnostic modeling. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  36. Roussos, L. A., DiBello, L. V., Stout, W., Hartz, S. M., Henson, R. A., & Templin, J. L. (2007). The fusion model skills diagnosis system. In J. P. Leighton & M. J. Gierl (Eds.), Cognitive diagnostic assessment for education: Theory and applications (pp. 275–318). New York, NY: Cambridge University Press.
  37. Santiago-Román, A. I., Streveler, R. A., & DiBello, L. V. (2010a). The development of estimated cognitive attribute profiles for the concept assessment tool for statics. Presented at the 40th ASEE/IEEE Frontiers in Education Conference, Washington, DC.
  38. Santiago-Román, A. I., Streveler, R. A., Steif, P. S., & DiBello, L. V. (2010b). The development of a Q-matrix for the concept assessment tool for statics. Presented at the ERM division of the ASEE annual conference and exposition, Louisville, KY.
  39. Shear, B. R., & Roussos, L. A. (2017). Validating a distractor-driven geometry test using a generalized diagnostic classification model. In B. D. Zumbo & A. M. Hubley (Eds.), Understanding and investigating response processes in validation research (Vol. 69, pp. 277–304). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-56129-5_15
  40. Steif, P. S., & Dantzler, J. A. (2005). A statics concept inventory: Development and psychometric analysis. Journal of Engineering Education, 94(4), 363–371. https://doi.org/10.1002/j.2168-9830.2005.tb00864.x
  41. Sullivan, M., Pace, J., & Templin, J. (this volume). Using Mplus to estimate the Log-Linear Cognitive Diagnosis model. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  42. Templin, J., & Hoffman, L. (2013). Obtaining diagnostic classification model estimates using Mplus. Educational Measurement: Issues and Practice, 32(2), 37–50.
  43. von Davier, M., & Lee, Y.-S. (this volume). Introduction: From latent class analysis to DINA and beyond. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.
  44. Zhang, S., Douglas, J., Wang, S., & Culpepper, S. (this volume). Reduced Reparameterized Unified Model applied to learning system spatial reasoning. In M. von Davier & Y.-S. Lee (Eds.), Handbook of diagnostic classification models. Cham, Switzerland: Springer.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • William Stout (1), Email author
  • Robert Henson (2)
  • Lou DiBello (1)
  • Benjamin Shear (3)

  1. Department of Statistics, University of Illinois at Urbana-Champaign, Urbana, USA
  2. Educational Research Methodology (ERM) Department, The University of North Carolina at Greensboro, Greensboro, USA
  3. Research and Evaluation Methodology, University of Colorado, Boulder, USA
