Clinical observed performance evaluation: a prospective study in final year students of surgery
- Markey, G.C., Browne, K., Hunter, K. et al. Adv in Health Sci Educ (2011) 16: 47. doi:10.1007/s10459-010-9240-9
We report a prospective study of clinical observed performance evaluation (COPE) for 197 medical students in the pre-qualification year of clinical education. Psychometric quality was the main endpoint. Students were assessed in groups of 5 in 40-min patient encounters, with each student the focus of evaluation for 8 min. Each student had a series of assessments across a 25-week teaching programme. Over time, several clinicians from a pool of 16 surgical consultants and registrars evaluated each student by direct observation. A structured rating form was used to record assessment data. Variance component analysis (VCA), internal consistency, and inter-rater agreement were used to estimate reliability. The predictive and convergent validity of COPE in relation to the summative OSCE, long case, and overall final examination was estimated. The median number of COPE assessments per student was 7. The generalisability of a mean score over 7 COPE assessments was 0.66, equal to that of an 8-station final OSCE with 7.5-min stations. Internal consistency was 0.88–0.97 and inter-rater agreement 0.82. Significant correlations were observed with OSCE performance (R = 0.55 disattenuated) and the long case (R = 0.47 disattenuated). Convergent validity was 0.81 by VCA. Overall final examination performance was linearly related to mean COPE score with a standard error of 3.7%. COPE permitted efficient serial assessment of a large cohort of final-year students in a real-world setting. Its psychometric quality compared well with conventional assessments and with other direct-observation instruments reported in the literature. Effects on learning, and translation to clinical care, are directions for future research.
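Two standard psychometric calculations underlie figures of this kind: Spearman's correction for attenuation (which yields "disattenuated" correlations from observed correlations and the reliabilities of the two measures) and the Spearman-Brown prophecy formula (which relates the generalisability of a mean over n assessments to the single-assessment coefficient). The sketch below illustrates both; all numeric inputs are assumed values for illustration only, not figures taken from the study.

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the true-score
    correlation between two measures from their observed correlation
    and the reliability of each measure."""
    return r_obs / math.sqrt(rel_x * rel_y)

def spearman_brown(g1, n):
    """Spearman-Brown prophecy: reliability (generalisability) of the
    mean over n parallel assessments, given the single-assessment
    coefficient g1."""
    return n * g1 / (1 + (n - 1) * g1)

# Assumed inputs for illustration (not from the paper): an observed
# correlation of 0.45 between two measures with reliabilities 0.66
# and 0.85 disattenuates to roughly 0.60.
print(disattenuate(0.45, 0.66, 0.85))

# A single-assessment coefficient of about 0.217 would, over 7
# assessments, give a generalisability near the 0.66 reported.
print(spearman_brown(0.217, 7))
```

The prophecy formula also answers the practical design question of how many serial assessments are needed to reach a target generalisability, which is how direct-observation instruments like this are usually benchmarked against station-based examinations.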