Computerized Adaptive Tests

  • Cynthia G. Parshall
  • Judith A. Spray
  • John C. Kalohn
  • Tim Davey
Part of the Statistics for Social and Behavioral Sciences book series (SSBS)

Abstract

A traditional computerized adaptive test (CAT) selects items individually for each examinee, based on the examinee’s responses to previous items, to obtain a precise and accurate estimate of that examinee’s latent ability on some underlying scale. The specific items, the number of items, and the order of item presentation are all likely to vary from one examinee to another. Forms are drawn adaptively and scored in real time, and unique tests are constructed for each examinee. Scores are equated through reliance on item response theory (IRT) ability estimates.
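The select-score-reestimate loop described above can be made concrete with a short sketch. The Python below is a minimal illustration only, assuming a two-parameter logistic (2PL) IRT model, maximum-information item selection, and a crude grid-search maximum-likelihood ability estimate; the item pool, parameter values, and answer function are all hypothetical, and operational programs add constraints (content balancing, item-exposure control) that this sketch omits.

    # Minimal sketch of an adaptive test loop under a 2PL IRT model.
    # Pool, parameters, and answer() are hypothetical illustrations,
    # not the authors' implementation.
    import math
    import random

    def prob_correct(theta, a, b):
        """2PL probability of a correct response at ability theta."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
        p = prob_correct(theta, a, b)
        return a * a * p * (1.0 - p)

    def estimate_theta(responses, items, lo=-4.0, hi=4.0, steps=81):
        """Crude grid-search maximum-likelihood ability estimate.
        A production CAT would typically use a Bayesian estimate
        (e.g., EAP) early in the test, before the MLE is stable."""
        best_theta, best_ll = 0.0, -float("inf")
        for i in range(steps):
            theta = lo + (hi - lo) * i / (steps - 1)
            ll = 0.0
            for (a, b), u in zip(items, responses):
                p = prob_correct(theta, a, b)
                ll += math.log(p) if u else math.log(1.0 - p)
            if ll > best_ll:
                best_theta, best_ll = theta, ll
        return best_theta

    def adaptive_test(pool, answer, test_length=20):
        """Repeatedly give the most informative unused item at the
        current ability estimate, score it, and re-estimate ability."""
        theta, administered, responses = 0.0, [], []
        unused = list(pool)
        for _ in range(test_length):
            item = max(unused, key=lambda ab: item_information(theta, *ab))
            unused.remove(item)
            administered.append(item)
            responses.append(answer(item, theta))
            theta = estimate_theta(responses, administered)
        return theta

    # Hypothetical pool of (discrimination, difficulty) pairs and a
    # simulated examinee with true ability 1.2.
    pool = [(random.uniform(0.5, 2.0), random.uniform(-3, 3)) for _ in range(200)]
    true_theta = 1.2
    answer = lambda item, _: random.random() < prob_correct(true_theta, *item)
    print(adaptive_test(pool, answer))

Note that the pure maximum-information rule used here tends to overuse the most discriminating items in the pool, which is one reason operational programs layer exposure-control procedures on top of the selection rule.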

Keywords

Item Response Theory · Item Pool · Computerized Adaptive Test · Item Selection · Adaptive Test

Copyright information

© Springer Science+Business Media New York 2002

Authors and Affiliations

  • Cynthia G. Parshall, University of South Florida, Tampa, USA
  • Judith A. Spray, ACT, Inc., Iowa City, USA
  • John C. Kalohn, ACT, Inc., Iowa City, USA
  • Tim Davey, Educational Testing Service, Princeton, USA
