Abstract
Purpose
Class rank and clerkship grades impact a medical student's residency application. The variability and inter-rater reliability of assessment across multiple clinical sites within a single university system are unknown. We aimed to determine whether medical student assessment across medical school campuses is consistent when a standardized scoring rubric is used.
Design/Methods
Attending physicians who participate in assigning neurology clerkship grades at three separate clinical campuses of the same medical school observed 10 identical standardized patient encounters completed by third-year medical students during the 2017–2018 academic year. Scoring was completed using a standardized rubric. Descriptive analysis and inter-rater comparisons were completed. Evaluations for this study were completed in 2018.
Results
Of 50 possible points for the patient encounter, the median score across all medical students and all evaluators was 43 (IQR 40, 45.5). Evaluator 1 gave significantly lower overall scores than evaluators 2 and 3 (p = 0.0001 and p = 0.0006, respectively), who were consistently similar in their overall assessments (p = 0.46). Overall agreement between evaluators was good (ICC = 0.805, 95% CI 0.36–0.95) and consistency was excellent (ICC = 0.91, 95% CI 0.75–0.97).
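The distinction between the two ICC values reported above (agreement vs. consistency) is that agreement penalizes systematic score offsets between raters, while consistency only requires raters to preserve the rank order of students. As an illustrative sketch only, with fabricated numbers rather than the study's data, and assuming the common Shrout–Fleiss two-way single-rater ICC formulation (the abstract does not specify which ICC form was used), the difference can be demonstrated as follows:

```python
import numpy as np

def icc_agreement_consistency(scores):
    """Two-way single-rater ICC estimates (Shrout-Fleiss formulation):
    ICC(2,1) 'absolute agreement' and ICC(3,1) 'consistency'.
    `scores` is an (n_subjects, k_raters) array with no missing cells."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject (student) means
    col_means = x.mean(axis=0)   # per-rater (evaluator) means

    # Mean squares from the two-way ANOVA decomposition.
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    ss_err = (np.sum((x - grand) ** 2)
              - k * np.sum((row_means - grand) ** 2)
              - n * np.sum((col_means - grand) ** 2))
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Agreement penalizes systematic rater offsets; consistency does not.
    agreement = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    consistency = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
    return agreement, consistency

# Hypothetical data: three raters score four encounters identically
# except that raters 2 and 3 each add a constant offset (systematically
# more lenient), mirroring the pattern of one harsher evaluator.
demo = np.array([[40, 41, 42],
                 [42, 43, 44],
                 [44, 45, 46],
                 [46, 47, 48]])
agr, cons = icc_agreement_consistency(demo)
# Consistency is perfect (rank order preserved); agreement is lower
# because of the systematic offsets between raters.
```

With this fabricated data, consistency is exactly 1.0 while agreement drops below 1.0, which parallels the study's finding of one systematically lower-scoring evaluator alongside high consistency.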
Conclusions
Medical student evaluation across multiple clinical campuses, via observation of identical standardized patient encounters scored with a standardized rubric, generally demonstrated good inter-rater agreement and consistency, but the small variation observed may affect overall clerkship scores.
Acknowledgments
The authors would like to thank Julie Mack and the staff in the Neis Clinical Skills Lab at the University of Kansas for their assistance in development and execution of our medical student clinical skills program, and for the technological assistance required to complete this project.
Authorship
SAB was responsible for project conception, design, execution, and initial manuscript drafting. YX, WCR, and JPS completed student evaluations and critically edited the manuscript. SLH completed the statistical analysis and critically edited the manuscript. GSG assisted with the statistical analysis and critically edited the manuscript.
Braksick, S.A., Wang, Y., Hunt, S.L. et al. Evaluator Agreement in Medical Student Assessment Across a Multi-Campus Medical School During a Standardized Patient Encounter. Med.Sci.Educ. 30, 381–386 (2020). https://doi.org/10.1007/s40670-020-00916-1