
Computerized adaptive testing in instructional settings

  • Research
  • Published: 1993

Educational Technology Research and Development

Abstract

Item response theory (IRT) has most often been used in research on computerized adaptive testing (CAT). Depending on the model used, IRT requires between 200 and 1,000 examinees for estimating item parameters. Thus, it is not practical for instructional designers to develop their own CAT based on the IRT model. Frick improved Wald's sequential probability ratio test (SPRT) by combining it with normative expert systems reasoning, referred to as an EXSPRT-based CAT. While previous studies were based on re-enactments from historical test data, the present study is the first to examine how well these adaptive methods function in a real-time testing situation. Results indicate that the EXSPRT-I significantly reduced test lengths and was highly accurate in predicting mastery. EXSPRT is apparently a viable and practical alternative to IRT for assessing mastery of instructional objectives.
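The core of Wald's SPRT, which EXSPRT builds on, is a running likelihood ratio over two competing hypotheses about an examinee (master vs. nonmaster), with the test ending as soon as the ratio crosses a decision bound. A minimal sketch is below; the specific proportions (`p_master = 0.85`, `p_nonmaster = 0.60`) and error rates (`alpha`, `beta`) are illustrative assumptions, not values taken from this study:

```python
import math

def sprt_mastery(responses, p_master=0.85, p_nonmaster=0.60,
                 alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for mastery classification.

    responses: iterable of 1 (correct) / 0 (incorrect) item scores.
    p_master / p_nonmaster: assumed probabilities of a correct response
    under the mastery and nonmastery hypotheses (illustrative values).
    Returns (decision, items_used); decision is 'master', 'nonmaster',
    or None if the item pool ran out before a bound was crossed.
    """
    upper = math.log((1 - beta) / alpha)   # cross this -> decide mastery
    lower = math.log(beta / (1 - alpha))   # cross this -> decide nonmastery
    log_lr = 0.0
    n = 0
    for n, r in enumerate(responses, start=1):
        # Update the log-likelihood ratio after each observed item.
        if r:
            log_lr += math.log(p_master / p_nonmaster)
        else:
            log_lr += math.log((1 - p_master) / (1 - p_nonmaster))
        if log_lr >= upper:
            return ('master', n)
        if log_lr <= lower:
            return ('nonmaster', n)
    return (None, n)
```

With these values, a run of consistent responses terminates the test after only a handful of items, which is the test-length saving the study reports; EXSPRT differs in that the likelihood contributions are weighted per item using expert-systems reasoning rather than being identical across items.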


References

  • Bunderson, V., Inouye, D., & Olson, J. (1989). The four generations of computerized educational measurement. In R. L. Linn (Ed.), Educational measurement. New York: Macmillan.

  • Frick, T. W. (1989). Bayesian adaptation during computer-based tests and computer-guided practice exercises. Journal of Educational Computing Research, 5(1), 89–114.

  • Frick, T. W. (1990). A comparison of three decision models for adapting the length of computer-based mastery tests. Journal of Educational Computing Research, 6(4), 479–513.

  • Frick, T. W. (1991). A comparison of an expert systems approach to computerized adaptive testing and an item response theory model. Paper presented at the annual conference of the Association for Educational Communications and Technology, Orlando, Florida.

  • Frick, T. W. (1992). Computerized adaptive mastery tests as expert systems. Journal of Educational Computing Research, 8(2), 187–213.

  • Hambleton, R., & Cook, L. (1983). Robustness of item response models and effects of test length and sample size on the precision of ability estimates. In D. Weiss (Ed.), New horizons in testing (pp. 31–50). New York: Academic Press.

  • Hambleton, R., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: Sage Publications.

  • Kirk, R. (1982). Experimental design: Procedures for the behavioral sciences (2nd ed., pp. 101–105). Belmont, CA: Brooks/Cole.

  • Lord, F. (1983). Small n justifies Rasch model. In D. Weiss (Ed.), New horizons in testing (pp. 52–62). New York: Academic Press.

  • Luk, H.-K. (1991). An empirical comparison of an expert systems approach and an IRT approach to computer-based adaptive mastery testing. Paper presented at the annual meeting of the American Educational Research Association, Chicago, Illinois.

  • Owen, R. J. (1975). A Bayesian sequential procedure for quantal response in the context of adaptive mental testing. Journal of the American Statistical Association, 70, 351–356.

  • Plew, G. T. (1989). A comparison of major adaptive testing strategies and an expert systems approach. Unpublished doctoral dissertation, Indiana University, Bloomington.

  • Powell, E. (1991). Test anxiety and test performance under computerized adaptive testing methods. Unpublished doctoral dissertation, Indiana University, Bloomington.

  • Wald, A. (1947). Sequential analysis. New York: Wiley.

  • Weiss, D., & Kingsbury, G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361–375.

  • Wright, B. D. (1977). Solving measurement problems with the Rasch model. Journal of Educational Measurement, 14(2), 97–116.


Cite this article

Welch, R.E., Frick, T.W. Computerized adaptive testing in instructional settings. ETR&D 41, 47–62 (1993). https://doi.org/10.1007/BF02297357
