Psychometrika, Volume 64, Issue 2, pp. 153–168

A Bayesian random effects model for testlets

  • Eric T. Bradlow
  • Howard Wainer
  • Xiaohui Wang

Abstract

Standard item response theory (IRT) models fit to dichotomous examination responses ignore the fact that sets of items (testlets) often come from a single common stimulus (e.g., a reading comprehension passage). In this setting, the items given to an examinee are unlikely to be conditionally independent (given examinee proficiency). Models that assume conditional independence will overestimate the precision with which examinee proficiency is measured. Overstatement of precision may lead to inaccurate inferences, such as prematurely ending an examination in which the stopping rule is based on the estimated standard error of examinee proficiency (e.g., an adaptive test). To model examinations that may be a mixture of independent items and testlets, we modified one standard IRT model to include an additional random effect for items nested within the same testlet. We use a Bayesian framework to facilitate posterior inference via a Data Augmented Gibbs Sampler (DAGS; Tanner & Wong, 1987). The modified and standard IRT models are both applied to a data set from a disclosed form of the SAT. We also provide simulation results indicating that the degree of precision bias is a function of the variability of the testlet effects, as well as the testlet design.
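To make the testlet random-effects idea and the data-augmentation machinery concrete, here is a minimal sketch, assuming a probit (normal-ogive) link of the form p_ij = Φ(a_j(θ_i − b_j − γ_{i,d(j)})), where γ_{i,d(j)} is the extra random effect shared by examinee i's responses within testlet d(j). The variable names, the choice to hold item parameters and testlet effects fixed, and the fixed testlet-effect standard deviation are all simplifying assumptions for illustration, not the authors' implementation; the full DAGS would update every unknown in turn.

```python
# Minimal sketch (not the authors' code): simulate responses from a
# testlet-augmented normal-ogive model, then run the two core
# data-augmentation Gibbs steps (latent Z, then proficiency theta),
# in the spirit of Albert (1992) and Tanner & Wong (1987).
# Item parameters and testlet effects are held fixed purely to keep
# the example short; a full sampler would update them as well.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
I, J = 500, 20                          # examinees, items
testlet = np.repeat(np.arange(4), 5)    # d(j): 4 testlets of 5 items each
a = rng.uniform(0.8, 1.5, J)            # discriminations (treated as known)
b = rng.normal(0.0, 1.0, J)             # difficulties (treated as known)
sigma_g = 0.8                           # sd of testlet effects (fixed here)

theta = rng.normal(size=I)                        # true proficiencies
gamma = rng.normal(0.0, sigma_g, size=(I, 4))     # testlet effects gamma_{i,d(j)}
eta = a * (theta[:, None] - b - gamma[:, testlet])
y = (rng.normal(size=(I, J)) < eta).astype(int)   # normal-ogive responses

def sample_z(mu, y, rng):
    """Z_ij ~ N(mu_ij, 1), truncated to (0, inf) if y_ij = 1, (-inf, 0) if 0."""
    lo = np.where(y == 1, -mu, -np.inf)   # standardized truncation bounds
    hi = np.where(y == 1, np.inf, -mu)
    return mu + truncnorm.rvs(lo, hi, random_state=rng)

th = np.zeros(I)                          # current draw of proficiency
for _ in range(50):
    mu = a * (th[:, None] - b - gamma[:, testlet])
    Z = sample_z(mu, y, rng)
    # theta_i | Z is normal: an N(0, 1) prior combined with J regressions
    # Z_ij + a_j * (b_j + gamma_{i,d(j)}) = a_j * theta_i + N(0, 1) noise.
    prec = 1.0 + np.sum(a ** 2)
    mean = (Z + a * (b + gamma[:, testlet])) @ a / prec
    th = mean + rng.normal(size=I) / np.sqrt(prec)

print("corr(final theta draw, truth):", np.corrcoef(th, theta)[0, 1])
```

The augmentation draws a latent Z_ij whose sign agrees with y_ij; conditional on Z, the probit likelihood becomes an ordinary normal regression, so θ_i has a conjugate normal full conditional. The shared γ_{i,d(j)} term is what induces the within-testlet dependence: dropping it recovers a standard conditionally independent IRT model and, as the abstract notes, overstates the precision of θ.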

Key words

Gibbs sampler; Data augmentation; Testlets


References

  1. Albert, J. H. (1992). Bayesian estimation of normal ogive response curves using Gibbs sampling. Journal of Educational Statistics, 17, 251–269.
  2. Albert, J. H., & Chib, S. (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88, 669–679.
  3. Bradlow, E. T., & Zaslavsky, A. M. (1997). Case influence analysis in Bayesian inference. Journal of Computational and Graphical Statistics, 6(3), 314–331.
  4. Bradlow, E. T., & Zaslavsky, A. M. (1999). A hierarchical latent variable model for ordinal customer satisfaction survey data with "no answer" responses. Journal of the American Statistical Association, 94(445), 43–52.
  5. Gelfand, A. E., & Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398–409.
  6. Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–511.
  7. Hulin, C. L., Drasgow, F., & Parsons, L. K. (1983). Item response theory. Homewood, IL: Dow Jones-Irwin.
  8. Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
  9. McDonald, R. P. (1981). The dimensionality of tests and items. British Journal of Mathematical and Statistical Psychology, 34, 100–117.
  10. McDonald, R. P. (1982). Linear versus nonlinear models in item response theory. Applied Psychological Measurement, 6, 379–396.
  11. Mislevy, R. J., & Bock, R. D. (1983). BILOG: Item and test scoring with binary logistic models [Computer program]. Mooresville, IN: Scientific Software.
  12. Rosenbaum, P. R. (1988). Item bundles. Psychometrika, 53, 349–359.
  13. Sireci, S. G., Wainer, H., & Thissen, D. (1991). On the reliability of testlet-based tests. Journal of Educational Measurement, 28, 237–247.
  14. Stout, W. F. (1987). A nonparametric approach for assessing latent trait dimensionality. Psychometrika, 52, 589–617.
  15. Stout, W. F. (1990). A new item response theory modeling approach with applications to unidimensional assessment and ability estimation. Psychometrika, 55, 293–326.
  16. Stout, W., Habing, B., Douglas, J., Kim, H. R., Roussos, L., & Zhang, J. (1996). Conditional covariance-based nonparametric multidimensionality assessment. Applied Psychological Measurement, 20, 331–354.
  17. Tanner, M. A., & Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82, 528–540.
  18. Wainer, H. (1995). Precision and differential item functioning on a testlet-based test: The 1991 Law School Admissions Test as an example. Applied Measurement in Education, 8(2), 157–187.
  19. Wainer, H., & Kiely, G. (1987). Item clusters and computerized adaptive testing: A case for testlets. Journal of Educational Measurement, 24, 185–202.
  20. Wainer, H., & Thissen, D. (1996). How is reliability related to the quality of test scores? What is the effect of local dependence on reliability? Educational Measurement: Issues and Practice, 15(1), 22–29.
  21. Yen, W. (1993). Scaling performance assessments: Strategies for managing local item dependence. Journal of Educational Measurement, 30, 187–213.
  22. Zhang, J. (1996). Some fundamental issues in item response theory with applications. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.
  23. Zhang, J., & Stout, W. F. (1999). Conditional covariance structure of generalized compensatory multidimensional items. Psychometrika, 64, 129–152.

Copyright information

© The Psychometric Society 1999

Authors and Affiliations

  • Eric T. Bradlow (1)
  • Howard Wainer (2)
  • Xiaohui Wang (2)

  1. Marketing and Statistics, The Wharton School, The University of Pennsylvania, USA
  2. Educational Testing Service, USA
