Implementing the Graduate Management Admission Test Computerized Adaptive Test

Part of the book series: Statistics for Social and Behavioral Sciences (SSBS)

Abstract

Wise and Kingsbury (2000) argue that the success of an adaptive testing program depends on how well a range of practical issues is addressed. Decisions must be made with regard to test specifications, item-selection algorithms, pool design and rotation, ability estimation, pretesting, item analysis, database design, and data security. The test sponsor is ultimately responsible for each of these decisions and must work closely with the vendor to ensure that the sponsor's interests are met.
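
To make the item-selection and ability-estimation decisions concrete, the sketch below simulates one pass of the basic adaptive loop: maximum-information item selection under the three-parameter logistic (3PL) model (Lord, 1980) with an expected a posteriori (EAP) ability estimate. It is a minimal illustration with invented item parameters, not the operational GMAT algorithm, which layers content constraints (Stocking & Swanson, 1993) and exposure control (Sympson & Hetter, 1985) on top of this loop.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 200-item pool (illustrative values, not GMAT data).
    a = rng.uniform(0.8, 2.0, 200)      # discrimination
    b = rng.uniform(-2.5, 2.5, 200)     # difficulty
    c = rng.uniform(0.10, 0.25, 200)    # lower asymptote (guessing)

    def p3(theta, a, b, c):
        # 3PL probability of a correct response (Lord, 1980).
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    def fisher_info(theta, a, b, c):
        # Fisher information of a 3PL item at ability theta.
        p = p3(theta, a, b, c)
        return a ** 2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

    def eap(items, responses, grid=np.linspace(-4.0, 4.0, 81)):
        # Expected a posteriori ability estimate under a N(0, 1) prior.
        post = np.exp(-0.5 * grid ** 2)          # prior, up to a constant
        for j, u in zip(items, responses):
            p = p3(grid, a[j], b[j], c[j])
            post *= p if u else 1.0 - p          # likelihood of response u
        return float(np.sum(grid * post) / np.sum(post))

    true_theta = 0.5          # simulated examinee's true ability
    theta_hat = 0.0           # start at the prior mean
    items, responses = [], []
    for _ in range(20):       # fixed-length 20-item adaptive test
        crit = fisher_info(theta_hat, a, b, c)
        crit[items] = -np.inf                    # never readminister an item
        j = int(np.argmax(crit))
        u = int(rng.random() < p3(true_theta, a[j], b[j], c[j]))
        items.append(j)
        responses.append(u)
        theta_hat = eap(items, responses)

    print(f"final ability estimate {theta_hat:+.2f} (true {true_theta:+.2f})")

In an operational program, the argmax step is where content balancing and exposure control enter, replacing pure information maximization with a constrained choice among eligible items.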


References

  • Bridgeman, B., Wightman, L. & Anderson, D. (n.d.). GMAT comparability study [Internal Administrative Report]. McLean, VA: GMAC.
  • Georgiadou, E., Triantafillou, E. & Economides, A. A. (2006). Evaluation parameters for computer adaptive testing. British Journal of Educational Technology, 37, 261–278.
  • Green, B., Bock, R. D., Humphreys, L., Linn, R. & Reckase, M. (1984). Technical guidelines for assessing computerized adaptive tests. Journal of Educational Measurement, 21, 347–360.
  • Guo, F., Rudner, L., Owens, K. & Talento-Miller, E. (2006, July). Differential impact as an item bias indicator in CAT. Paper presented at the International Testing Commission 5th International Conference on Psychological and Educational Test Adaptation across Language and Cultures, Brussels, Belgium.
  • Guo, F. & Wang, L. (2005). Evaluating scale stability of a computer adaptive testing system [Research Report RR 05-12]. McLean, VA: GMAC.
  • Kingsbury, G. & Zara, A. (1989). Procedures for selecting items for computerized adaptive tests. Applied Measurement in Education, 2, 359–375.
  • Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Erlbaum.
  • McBride, J. R. & Martin, J. T. (1983). Reliability and validity of adaptive ability tests in a military setting. In D. J. Weiss (Ed.), New horizons in testing (pp. 223–236). New York: Academic Press.
  • Parshall, C. G., Spray, J. A., Kalohn, J. C. & Davey, T. (2002). Practical considerations in computer-based testing. New York: Springer-Verlag.
  • Plake, B. P. (1996). A review of the comparability study design [Internal Administrative Report]. McLean, VA: GMAC.
  • Rosenbaum, P. R. & Rubin, D. B. (1985). Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. American Statistician, 39, 33–38.
  • Rubin, D. B. (1997). Estimating causal effects from large data sets using propensity scores. Annals of Internal Medicine, 127, 757–763.
  • Rudner, L. M. (2005). Examinees retaking the Graduate Management Admission Test [Research Report RR-05-01]. McLean, VA: GMAC.
  • Rudner, L. M. & Peyton, J. (2006). Consider propensity scores to compare treatments. Practical Assessment, Research & Evaluation, 11. (Available online: http://pareonline.net/getvn.asp?v=11&n=9)
  • Sireci, S. (1998). The construct of content validity. Social Indicators Research, 45, 83–117.
  • Sireci, S. & Talento-Miller, E. (2006). Evaluating the predictive validity of Graduate Management Admission Test scores. Educational and Psychological Measurement, 66, 305–317.
  • Stocking, M. L. & Swanson, L. (1993). A method for severely constrained item selection in adaptive testing. Applied Psychological Measurement, 17, 277–292.
  • Swanson, L. & Stocking, M. L. (1993). A model and heuristic for solving very large item selection problems. Applied Psychological Measurement, 17, 151–166.
  • Sympson, J. B. & Hetter, R. D. (1985). Controlling item exposure rates in computerized adaptive testing. In Proceedings of the 27th Annual Meeting of the Military Testing Association. San Diego: Navy Personnel Research and Development Center.
  • Talento-Miller, E. (2008). Generalizability of GMAT validity to programs outside the U.S. International Journal of Testing, 8, 127–142.
  • Talento-Miller, E. & Rudner, L. (2005). GMAT validity study summary report for 1997 to 2004 [Research Report RR-05-06]. McLean, VA: GMAC.
  • Talento-Miller, E. & Rudner, L. (2008). The validity of Graduate Management Admission Test scores: A summary of studies conducted from 1997 to 2004. Educational and Psychological Measurement, 68, 129–138.
  • van der Linden, W. J. (2005). A comparison of item-selection methods for adaptive tests with content constraints. Journal of Educational Measurement, 42, 283–302.
  • van der Linden, W. J. & Reese, L. M. (1998). A model for optimal constrained adaptive testing. Applied Psychological Measurement, 22, 259–270.
  • van der Linden, W. J. & Veldkamp, B. P. (2007). Conditional item-exposure control in adaptive testing using item-ineligibility probabilities. Journal of Educational and Behavioral Statistics, 32, 398–418.
  • Wainer, H. (2000). Computerized adaptive testing: A primer. Mahwah, NJ: Erlbaum.
  • Wainer, H., Kaplan, B. & Lewis, C. (1992). A comparison of the performance of simulated hierarchical and linear testlets. Journal of Educational Measurement, 27, 1–14.
  • Wainer, H. & Kiely, G. (1987). Item clusters and computerized adaptive testing: The case for testlets. Journal of Educational Measurement, 24, 189–205.
  • Weiss, D. J. (1985). Adaptive testing by computer. Journal of Consulting and Clinical Psychology, 53, 774–789.
  • Wise, S. L. & Kingsbury, G. G. (2000). Practical issues in developing and maintaining a computerized adaptive testing program. Psicológica, 21, 135–155.



Copyright information

© 2009 Springer Science+Business Media, LLC

Cite this chapter

Rudner, L.M. (2009). Implementing the Graduate Management Admission Test Computerized Adaptive Test. In: van der Linden, W., Glas, C. (eds) Elements of Adaptive Testing. Statistics for Social and Behavioral Sciences. Springer, New York, NY. https://doi.org/10.1007/978-0-387-85461-8_8
