Psychometrika, Volume 64, Issue 2, pp 153–168

A Bayesian random effects model for testlets


  • Eric T. Bradlow
    • Marketing and Statistics, The Wharton School, University of Pennsylvania
  • Howard Wainer
    • Educational Testing Service
  • Xiaohui Wang
    • Educational Testing Service

DOI: 10.1007/BF02294533

Cite this article as:
Bradlow, E.T., Wainer, H. & Wang, X. Psychometrika (1999) 64: 153. doi:10.1007/BF02294533


Standard item response theory (IRT) models fit to dichotomous examination responses ignore the fact that sets of items (testlets) often come from a single common stimulus (e.g., a reading comprehension passage). In this setting, all items given to an examinee are unlikely to be conditionally independent (given examinee proficiency). Models that assume conditional independence will overestimate the precision with which examinee proficiency is measured. Overstatement of precision may lead to inaccurate inferences, such as prematurely ending an examination whose stopping rule is based on the estimated standard error of examinee proficiency (e.g., an adaptive test). To model examinations that may be a mixture of independent items and testlets, we modified one standard IRT model to include an additional random effect for items nested within the same testlet. We use a Bayesian framework to facilitate posterior inference via a Data Augmented Gibbs Sampler (DAGS; Tanner & Wong, 1987). The modified and standard IRT models are both applied to a data set from a disclosed form of the SAT. We also provide simulation results indicating that the degree of precision bias is a function of the variability of the testlet effects as well as the testlet design.
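The testlet structure the abstract describes can be illustrated with a small simulation. The sketch below is a hedged illustration, not the paper's actual model or estimator: it assumes a 2PL-style response function with an examinee-by-testlet random effect gamma (the paper's Gibbs sampler and its exact parameterization are not reproduced here), and all parameter values (item counts, testlet sizes, the testlet-effect standard deviation) are invented for demonstration. Because items in the same testlet share gamma, responses within a testlet remain dependent even after conditioning on proficiency theta, which is the source of the precision bias the abstract warns about.

```python
import numpy as np

# Illustrative simulation of the testlet idea (assumed parameterization):
#   P(y_ij = 1) = logistic(a_j * (theta_i - b_j - gamma_{i, d(j)})),
# where d(j) maps item j to its testlet and gamma is an examinee-by-testlet
# random effect. A logistic link is used here for simplicity; the paper's
# data-augmented Gibbs sampler pairs naturally with a probit link.

rng = np.random.default_rng(0)

n_examinees, n_items, n_testlets = 500, 20, 4
testlet_of = np.repeat(np.arange(n_testlets), n_items // n_testlets)  # d(j)

theta = rng.normal(0.0, 1.0, n_examinees)             # proficiency
a = rng.lognormal(0.0, 0.3, n_items)                  # discrimination
b = rng.normal(0.0, 1.0, n_items)                     # difficulty
sigma_gamma = 0.8                                     # testlet-effect s.d. (assumed)
gamma = rng.normal(0.0, sigma_gamma, (n_examinees, n_testlets))

# Items in the same testlet share gamma[:, d(j)], inducing within-testlet
# dependence beyond what theta alone explains.
logit = a * (theta[:, None] - b - gamma[:, testlet_of])
y = rng.uniform(size=logit.shape) < 1.0 / (1.0 + np.exp(-logit))

print(y.shape)  # one row of dichotomous responses per examinee
```

Setting `sigma_gamma = 0` recovers the conditionally independent standard-IRT case, so varying it shows how the size of the testlet variance (and the testlet design, via `testlet_of`) governs the degree of dependence.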

Key words

Gibbs sampler, Data augmentation, Testlets

Copyright information

© The Psychometric Society 1999