Psychometrika, Volume 77, Issue 3, pp 495–523

Multidimensional CAT Item Selection Methods for Domain Scores and Composite Scores: Theory and Applications

Abstract

Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability, or reduce test length, compared with unidimensional CAT or paper-and-pencil tests. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation, varying the structure of the item pools, the population distribution of the simulees, the number of items selected, and the content areas. Four existing procedures, Volume (Segall in Psychometrika 61:331–354, 1996), Kullback–Leibler information (Veldkamp & van der Linden in Psychometrika 67:575–588, 2002), minimizing the error variance of a linear combination (van der Linden in J. Educ. Behav. Stat. 24:398–412, 1999), and minimum angle (Reckase in Multidimensional item response theory, Springer, New York, 2009), are compared with a new procedure proposed here for the first time: minimizing the error variance of the composite score with an optimized weight. The intent is to find an item selection procedure that yields high precision for both the domain and composite abilities together with a high percentage of item-pool usage. The comparison examines absolute bias, correlation, test reliability, computing time, and item usage. Three item pools are used, with item parameters estimated from operational CAT data. Results show that Volume and Minimum Angle perform similarly, balancing information across all content areas, whereas the other three procedures perform similarly to one another, yielding high precision for both domain and overall scores when the required number of items is selected from each domain. The new item selection procedure has the highest percentage of item usage. Moreover, for the overall score it produces results similar to, or better than, those of the method that selects items favoring the general dimension under the general model (Segall in Psychometrika 66:79–97, 2001), while the general-dimension method has low precision for the domain scores. In addition to the simulation study, mathematical theory is derived for several of the procedures and is confirmed by the simulation results.
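To make the selection rules concrete, the following is a minimal sketch of two of the compared criteria, the Volume rule (Segall, 1996) and the minimum error-variance rule (van der Linden, 1999), under a multidimensional two-parameter logistic (M2PL) model. It is an illustration under stated assumptions, not the implementation used in the study (BMIRT/simuMCAT): the function names (m2pl_prob, item_info, select_volume, select_min_variance) are hypothetical, and any prior precision is assumed to have been folded into info_so_far.

    import numpy as np

    def m2pl_prob(theta, a, d):
        """Probability of a correct response to an M2PL item:
        P = 1 / (1 + exp(-(a'theta + d)))."""
        return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

    def item_info(theta, a, d):
        """Fisher information matrix of one M2PL item at theta: P(1-P) a a'."""
        p = m2pl_prob(theta, a, d)
        return p * (1.0 - p) * np.outer(a, a)

    def select_volume(theta_hat, info_so_far, pool_a, pool_d, administered):
        """Volume (D-optimality) rule: pick the unadministered item that
        maximizes det(I_accumulated + I_item). With the inverse prior
        covariance folded into info_so_far, this mimics the Bayesian form."""
        best_j, best_det = None, -np.inf
        for j in range(len(pool_d)):
            if j in administered:
                continue
            det = np.linalg.det(info_so_far + item_info(theta_hat, pool_a[j], pool_d[j]))
            if det > best_det:
                best_j, best_det = j, det
        return best_j

    def select_min_variance(theta_hat, info_so_far, pool_a, pool_d, administered, w):
        """Minimum error-variance rule: pick the item that minimizes the
        asymptotic variance of the composite w'theta, i.e., w'(I + I_item)^{-1} w."""
        best_j, best_var = None, np.inf
        for j in range(len(pool_d)):
            if j in administered:
                continue
            total = info_so_far + item_info(theta_hat, pool_a[j], pool_d[j])
            var = w @ np.linalg.solve(total, w)  # w' total^{-1} w without an explicit inverse
            if var < best_var:
                best_j, best_var = j, var
        return best_j

The new procedure studied here differs from select_min_variance in that the weight vector w is itself optimized rather than fixed in advance. One way to read the comparison is that the Volume rule spreads information across all dimensions, while the variance-based rules concentrate it along the composite direction defined by w.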

Key words

BMIRT · CAT · domain scores · Kullback–Leibler · MCAT · multidimensional item response theory · multidimensional information · overall scores

Notes

Acknowledgements

I would like to thank the reviewers and the editor for their valuable input on the earlier version of this manuscript. I would also like to thank Dan Segall for his helpful comments, and Sherlyn Stahr, Louis Roussos, and my daughter Sophie Chen for their editorial assistance. The views expressed are those of the author and not necessarily those of the Department of Defense or the United States government.

References

  1. Chang, H.-H., & Ying, Z. (1996). A global information approach to computerized adaptive testing. Applied Psychological Measurement, 20, 213–229.
  2. Cheng, Y., & Chang, H.-H. (2009). The maximum priority index method for severely constrained item selection in computerized adaptive testing. British Journal of Mathematical and Statistical Psychology, 62, 369–383.
  3. De la Torre, J., & Hong, Y. (2010). Parameter estimation with small sample size: a higher-order IRT approach. Applied Psychological Measurement, 34, 267–285.
  4. Haberman, S.J., & Sinharay, S. (2010). Reporting of subscores using multidimensional item response theory. Psychometrika, 75, 209–227.
  5. Lee, Y.H., Ip, E.H., & Fuh, C.D. (2008). A strategy for controlling item exposure in multidimensional computerized adaptive testing. Educational and Psychological Measurement, 68, 215–232.
  6. Li, Y.H., & Schafer, W.D. (2005). Trait parameter recovery using multidimensional computerized adaptive testing in reading and mathematics. Applied Psychological Measurement, 29, 3–25.
  7. Luecht, R.M. (1996). Multidimensional computerized adaptive testing in a certification or licensure context. Applied Psychological Measurement, 20, 389–404.
  8. Luecht, R.M., & Miller, T.R. (1992). Unidimensional calibrations and interpretations of composite traits for multidimensional tests. Applied Psychological Measurement, 16, 279–293.
  9. Mulder, J., & van der Linden, W.J. (2009). Multidimensional adaptive testing with optimal design criteria for item selection. Psychometrika, 74, 273–296.
  10. Reckase, M.D. (1997). The past and future of multidimensional item response theory. Applied Psychological Measurement, 21, 25–36.
  11. Reckase, M.D. (2009). Multidimensional item response theory. New York: Springer.
  12. Reckase, M.D., & McKinley, R.L. (1991). The discriminating power of items that measure more than one dimension. Applied Psychological Measurement, 15, 361–373.
  13. Segall, D.O. (1996). Multidimensional adaptive testing. Psychometrika, 61, 331–354.
  14. Segall, D.O. (2001). General ability measurement: an application of multidimensional item response theory. Psychometrika, 66, 79–97.
  15. van der Linden, W.J. (1999). Multidimensional adaptive testing with a minimum error-variance criterion. Journal of Educational and Behavioral Statistics, 24, 398–412.
  16. Veldkamp, B.P., & van der Linden, W.J. (2002). Multidimensional adaptive testing with constraints on test content. Psychometrika, 67, 575–588.
  17. Wang, C., Chang, H.-H., & Boughton, K.A. (2011). Kullback–Leibler information and its applications in multi-dimensional adaptive testing. Psychometrika, 76, 13–39.
  18. Yao, L. (2003). BMIRT: Bayesian multivariate item response theory [Computer software]. Monterey: Defense Manpower Data Center.
  19. Yao, L. (2010a). Reporting valid and reliable overall scores and domain scores. Journal of Educational Measurement, 47, 339–360.
  20. Yao, L. (2010b). Multidimensional ability estimation: Bayesian or non-Bayesian. Unpublished manuscript.
  21. Yao, L. (2011). simuMCAT: simulation of multidimensional computer adaptive testing [Computer software]. Monterey: Defense Manpower Data Center.
  22. Yao, L., & Boughton, K.A. (2007). A multidimensional item response modeling approach for improving subscale proficiency estimation and classification. Applied Psychological Measurement, 31, 83–105.
  23. Yao, L., & Boughton, K.A. (2009). Multidimensional linking for tests containing polytomous items. Journal of Educational Measurement, 46, 177–197.
  24. Yao, L., & Schwarz, R.D. (2006). A multidimensional partial credit model with associated item and test statistics: an application to mixed-format tests. Applied Psychological Measurement, 30, 469–492.

Copyright information

© The Psychometric Society 2012

Authors and Affiliations

  1. Defense Manpower Data Center, Monterey Bay, Seaside, USA