Automated Test Assembly for Online Administration
So far, we have discussed computerized tests that are fixed in length and content and are assembled offline for later administration. When testing volumes are moderately high, several such forms may need to be assigned randomly to examinees for security reasons. Multiple test forms, however, must be psychometrically equivalent to one another. This requires either delaying score reporting so that scores can be equated after administration, or conducting separate studies to establish a passing score for each form before administration. If examinee volumes are fairly high, many equivalent fixed forms may need to be constructed in advance. If delays in score reporting are unacceptable, another way must be found to construct equivalent test forms prior to administration.
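One simple way to make the idea of pre-administration equivalence concrete is a greedy assembly heuristic: sort the item bank by difficulty and deal items out to forms in round-robin order, so that each form receives a similar difficulty distribution. This is only a minimal illustrative sketch, not a method described in this chapter; the item bank, field names, and the use of a single IRT difficulty (b) parameter are all assumptions for the example.

```python
import random

# Hypothetical item bank: each item carries an IRT difficulty (b) parameter.
random.seed(0)
bank = [{"id": i, "b": random.gauss(0.0, 1.0)} for i in range(40)]

def assemble_parallel_forms(bank, n_forms=2):
    """Greedy heuristic: sort items by difficulty, then deal them out
    round-robin so every form samples the whole difficulty range."""
    forms = [[] for _ in range(n_forms)]
    for rank, item in enumerate(sorted(bank, key=lambda it: it["b"])):
        forms[rank % n_forms].append(item)
    return forms

forms = assemble_parallel_forms(bank)
means = [sum(it["b"] for it in f) / len(f) for f in forms]
# The forms should end up with nearly equal mean difficulty.
print([round(m, 2) for m in means])
```

In practice, operational automated test assembly matches forms on much more than mean difficulty (e.g., target information functions and content constraints, often via mixed-integer programming), but the sketch shows why equivalence can be engineered before, rather than equated after, administration.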