Seeing the same thing differently
Yeates, P., O’Neill, P., Mann, K. et al. Adv in Health Sci Educ (2013) 18: 325. doi:10.1007/s10459-012-9372-1
Assessors’ scores in performance assessments are known to be highly variable. Attempts to improve them through training or changes to rating format have achieved minimal gains, and the mechanisms that contribute to variability in assessors’ scoring remain unclear. This study investigated these mechanisms. We used a qualitative approach to study assessors’ judgements whilst they observed common simulated videoed performances of junior doctors obtaining clinical histories. Assessors commented concurrently and retrospectively on the performances, provided scores, and took part in follow-up interviews. Data were analysed using principles of grounded theory. We developed three themes that help to explain how variability arises: Differential Salience—assessors paid attention to (or valued) different aspects of the performances to different degrees; Criterion Uncertainty—assessors’ criteria were differently constructed, uncertain, and influenced by recent exemplars; Information Integration—assessors described the valence of their comments in their own unique narrative terms, usually forming global impressions. Our results (whilst not precluding the operation of established biases) describe mechanisms by which assessors’ judgements become meaningfully different or unique. They have theoretical relevance to understanding the formative educational messages that performance assessments provide, and they give insight relevant to assessor training, to assessors’ ability to be observationally “objective”, and to the educational value of narrative comments (in contrast to numerical ratings).