Multidimensional CAT Item Selection Methods for Domain Scores and Composite Scores: Theory and Applications

Abstract

Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability, or reduce test length, compared with unidimensional CAT or with paper-and-pencil tests. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation, varying the structure of the item pools, the population distribution of the simulees, the number of items selected, and the content area. Four existing procedures, Volume (Segall in Psychometrika, 61:331–354, 1996), Kullback–Leibler information (Veldkamp & van der Linden in Psychometrika, 67:575–588, 2002), minimizing the error variance of a linear combination (van der Linden in J. Educ. Behav. Stat., 24:398–412, 1999), and Minimum Angle (Reckase in Multidimensional item response theory, Springer, New York, 2009), are compared with a new procedure proposed for the first time in this study: minimizing the error variance of the composite score with an optimized weight. The intent is to find an item selection procedure that yields high precision for both the domain and composite abilities and a high percentage of the item pool being used. The comparison is based on absolute bias, correlation, test reliability, time used, and item usage. Three sets of item pools are used, with item parameters estimated from live CAT data. Results show that Volume and Minimum Angle performed similarly, balancing information across all content areas, while the other three procedures performed similarly, yielding high precision for both domain and overall scores when the required number of items was selected from each domain. The new item selection procedure has the highest percentage of item usage. Moreover, for the overall score it produces results similar to, or even better than, those from the method that selects items favoring the general dimension under the general model (Segall in Psychometrika, 66:79–97, 2001), whereas the general-dimension method has low precision for the domain scores. In addition to the simulation study, the mathematical theory behind certain procedures is derived; the theory is confirmed by the simulation results.
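
To make one of the compared criteria concrete, the sketch below illustrates the Volume criterion (Segall, 1996) for dichotomous items: at each step, the selected item is the one that maximizes the determinant, i.e., the volume, of the accumulated Fisher information matrix at the current ability estimate. This is a minimal sketch, not the procedure as implemented in the study; the compensatory two-parameter logistic MIRT model, the pool dictionary format, and the prob and prior_inv names are illustrative assumptions.

```python
import numpy as np

def prob(theta, item):
    """Probability of a correct response under an assumed compensatory
    two-parameter logistic MIRT model: P = 1 / (1 + exp(-(a . theta + d)))."""
    z = np.dot(item["a"], theta) + item["d"]
    return 1.0 / (1.0 + np.exp(-z))

def item_information(theta, item):
    """Fisher information matrix contributed by one dichotomous item:
    p * (1 - p) * a a^T under the model above."""
    p = prob(theta, item)
    a = np.asarray(item["a"], dtype=float)
    return p * (1.0 - p) * np.outer(a, a)

def select_item_volume(theta, pool, administered, prior_inv=None):
    """Volume criterion: pick the unadministered item whose addition maximizes
    the determinant of the accumulated information matrix at the current
    ability estimate theta."""
    k = len(theta)
    info = np.zeros((k, k)) if prior_inv is None else np.asarray(prior_inv, dtype=float)
    for j in administered:
        info = info + item_information(theta, pool[j])
    best_j, best_det = None, -np.inf
    for j, item in enumerate(pool):
        if j in administered:
            continue
        cand_det = np.linalg.det(info + item_information(theta, item))
        if cand_det > best_det:
            best_j, best_det = j, cand_det
    return best_j

# Toy example: a two-dimensional pool of three items; item 0 already given.
pool = [{"a": [1.2, 0.1], "d": 0.0},
        {"a": [0.2, 1.1], "d": -0.5},
        {"a": [0.8, 0.7], "d": 0.3}]
theta_hat = np.array([0.0, 0.5])
print(select_item_volume(theta_hat, pool, administered={0}))
```

Under a multivariate normal prior with covariance Sigma, passing prior_inv = inv(Sigma) replaces the Fisher information determinant with the determinant of the approximate posterior information matrix, which corresponds to the Bayesian form of the criterion.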

References

  • Chang, H.-H., & Ying, Z. (1996). A global information approach to computerized adaptive testing. Applied Psychological Measurement, 20, 213–229.

  • Cheng, Y., & Chang, H.-H. (2009). The maximum priority index method for severely constrained item selection in computerized adaptive testing. British Journal of Mathematical and Statistical Psychology, 62, 369–383.

  • De la Torre, J., & Hong, Y. (2010). Parameter estimation with small sample size: a higher-order IRT approach. Applied Psychological Measurement, 34, 267–285.

  • Haberman, S.J., & Sinharay, S. (2010). Reporting of subscores using multidimensional item response theory. Psychometrika, 75, 209–227.

  • Lee, Y.H., Ip, E.H., & Fuh, C.D. (2008). A strategy for controlling item exposure in multidimensional computerized adaptive testing. Educational and Psychological Measurement, 68, 215–232.

  • Li, Y.H., & Schafer, W.D. (2005). Trait parameter recovery using multidimensional computerized adaptive testing in reading and mathematics. Applied Psychological Measurement, 29, 3–25.

  • Luecht, R.M. (1996). Multidimensional computerized adaptive testing in a certification or licensure context. Applied Psychological Measurement, 20, 389–404.

  • Luecht, R.M., & Miller, T.R. (1992). Unidimensional calibrations and interpretations of composite traits for multidimensional tests. Applied Psychological Measurement, 16, 279–293.

  • Mulder, J., & van der Linden, W.J. (2009). Multidimensional adaptive testing with optimal design criteria for item selection. Psychometrika, 74, 273–296.

  • Reckase, M.D. (1997). The past and future of multidimensional item response theory. Applied Psychological Measurement, 21, 25–36.

  • Reckase, M.D. (2009). Multidimensional item response theory. New York: Springer.

  • Reckase, M.D., & McKinley, R.L. (1991). The discriminating power of items that measure more than one dimension. Applied Psychological Measurement, 15, 361–373.

  • Segall, D.O. (1996). Multidimensional adaptive testing. Psychometrika, 61, 331–354.

  • Segall, D.O. (2001). General ability measurement: an application of multidimensional item response theory. Psychometrika, 66, 79–97.

  • van der Linden, W.J. (1999). Multidimensional adaptive testing with a minimum error-variance criterion. Journal of Educational and Behavioral Statistics, 24, 398–412.

  • Veldkamp, B.P., & van der Linden, W.J. (2002). Multidimensional adaptive testing with constraints on test content. Psychometrika, 67, 575–588.

  • Wang, C., Chang, H.-H., & Boughton, K.A. (2011). Kullback–Leibler information and its applications in multi-dimensional adaptive testing. Psychometrika, 76, 13–39.

  • Yao, L. (2003). BMIRT: Bayesian multivariate item response theory [Computer software]. Monterey: Defense Manpower Data Center.

  • Yao, L. (2010a). Reporting valid and reliable overall scores and domain scores. Journal of Educational Measurement, 47, 339–360.

  • Yao, L. (2010b). Multidimensional ability estimation: Bayesian or non-Bayesian. Unpublished manuscript.

  • Yao, L. (2011). simuMCAT: simulation of multidimensional computer adaptive testing [Computer software]. Monterey: Defense Manpower Data Center.

  • Yao, L., & Boughton, K.A. (2007). A multidimensional item response modeling approach for improving subscale proficiency estimation and classification. Applied Psychological Measurement, 31, 83–105.

  • Yao, L., & Boughton, K.A. (2009). Multidimensional linking for tests containing polytomous items. Journal of Educational Measurement, 46, 177–197.

  • Yao, L., & Schwarz, R.D. (2006). A multidimensional partial credit model with associated item and test statistics: an application to mixed-format tests. Applied Psychological Measurement, 30, 469–492.

Acknowledgements

I would like to thank the reviewers and the editor for their valuable input on an earlier version of this manuscript. I would also like to thank Dan Segall for his helpful comments, and Sherlyn Stahr, Louis Roussos, and my daughter Sophie Chen for their editorial assistance. The views expressed are those of the author and not necessarily those of the Department of Defense or the United States government.

Author information

Corresponding author

Correspondence to Lihua Yao.

About this article

Cite this article

Yao, L. Multidimensional CAT Item Selection Methods for Domain Scores and Composite Scores: Theory and Applications. Psychometrika 77, 495–523 (2012). https://doi.org/10.1007/s11336-012-9265-5
