Psychometrika, Volume 64, Issue 2, pp 153–168

A Bayesian random effects model for testlets

  • Eric T. Bradlow
  • Howard Wainer
  • Xiaohui Wang
Abstract

Standard item response theory (IRT) models fit to dichotomous examination responses ignore the fact that sets of items (testlets) often come from a single common stimulus (e.g., a reading comprehension passage). In this setting, all items given to an examinee are unlikely to be conditionally independent (given examinee proficiency). Models that assume conditional independence will overestimate the precision with which examinee proficiency is measured. Overstatement of precision may lead to inaccurate inferences, such as prematurely ending an examination in which the stopping rule is based on the estimated standard error of examinee proficiency (e.g., an adaptive test). To model examinations that may be a mixture of independent items and testlets, we modified one standard IRT model to include an additional random effect for items nested within the same testlet. We use a Bayesian framework to facilitate posterior inference via a Data Augmented Gibbs Sampler (DAGS; Tanner & Wong, 1987). The modified and standard IRT models are both applied to a data set from a disclosed form of the SAT. We also provide simulation results indicating that the degree of precision bias is a function of the variability of the testlet effects, as well as the testlet design.
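The conditional-dependence problem the abstract describes can be illustrated with a small simulation. The sketch below assumes a testlet-augmented 2PL probit kernel of the form P(y_ij = 1) = Φ(a_j(θ_i − b_j − γ_{i,d(j)})), with the discriminations, difficulties, and testlet-effect standard deviation fixed at illustrative values (a = 1, b = 0, σ_γ = 1); these settings are assumptions for demonstration, not the paper's estimates. Conditioning on proficiency, the shared testlet effect γ induces residual correlation within a testlet that a standard IRT model would wrongly treat as independent information.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Standard normal CDF (probit link)
Phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))

n_examinees, n_items, testlet_size = 2000, 10, 5
b = np.zeros(n_items)                          # item difficulties (assumed 0)
testlet_of = np.repeat([0, 1], testlet_size)   # two 5-item testlets
gamma = rng.normal(0.0, 1.0, (n_examinees, 2))  # testlet random effects

# Condition on proficiency (theta = 0 for everyone): under a standard IRT
# model the responses would now be independent, but the shared gamma
# induces residual correlation among items in the same testlet.
theta = np.zeros(n_examinees)
eta = theta[:, None] - b[None, :] - gamma[:, testlet_of]
y = (rng.random((n_examinees, n_items)) < Phi(eta)).astype(float)

r = np.corrcoef(y, rowvar=False)
r_same = r[0, 1]   # two items within testlet 0: clearly positive
r_diff = r[0, 5]   # items from different testlets: near zero
print(f"within-testlet r = {r_same:.2f}, between-testlet r = {r_diff:.2f}")
```

The positive within-testlet correlation at fixed θ is exactly the local-dependence violation that inflates the apparent precision of the proficiency estimate; the extra random effect in the modified model absorbs it.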

Key words

Gibbs sampler, Data augmentation, Testlets

Copyright information

© The Psychometric Society 1999

Authors and Affiliations

  • Eric T. Bradlow (1)
  • Howard Wainer (2)
  • Xiaohui Wang (2)
  1. Marketing and Statistics, The Wharton School, The University of Pennsylvania, USA
  2. Educational Testing Service, USA