Volume 22, Issue 2 Supplement, pp 336-340
Date: 24 Oct 2007

Validating Measures of Third Year Medical Students’ Use of Interpreters by Standardized Patients and Faculty Observers



The increasing prevalence of encounters with patients who have limited English proficiency demands effective use of interpreters. Validated measures of this skill are needed.


We describe the process of creating and validating two new measures for rating student skills for interpreter use.


We studied encounters using standardized patients (SPs) and interpreters within a clinical practice examination (CPX) at one medical school.


Students were assessed by SPs using the interpreter impact rating scale (IIRS) and the physician-patient interaction (PPI) scale. A subset of 23 encounters was assessed by 4 faculty raters using the faculty observer rating scale (FORS). Internal consistency reliability was assessed by Cronbach's coefficient alpha (α). Interrater reliability of the FORS was examined by the intraclass correlation coefficient (ICC). The FORS and IIRS were compared, and each was correlated with the PPI.
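As an illustrative sketch only (not the study's actual analysis code), Cronbach's α for an encounters-by-items score matrix can be computed from the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ); the rating data below are simulated, and the 7-item scale size merely mirrors the IIRS.

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's coefficient alpha for an (encounters x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 10 encounters rated on a 7-item, 1-5 Likert scale
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(10, 1))                       # shared "true" level
ratings = np.clip(base + rng.integers(-1, 2, size=(10, 7)), 1, 5)
print(round(cronbach_alpha(ratings), 2))
```

Values near 0.90 (as reported for the IIRS) indicate that the items move together across encounters; perfectly redundant items yield α = 1.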


Cronbach's α was 0.90 for the 7-item IIRS and 0.88 for the 11-item FORS. The ICC among the 4 faculty observers had a mean of 0.61 and a median of 0.65 (range 0.20 to 0.86). Skill measured by the IIRS did not correlate significantly with the FORS but did correlate with the PPI.


We developed two measures with good internal reliability for use by SPs and faculty observers. More research is needed to clarify the reasons for the lack of concordance between these measures and to determine which may be more valid for summative assessment.