
Evaluator Agreement in Medical Student Assessment Across a Multi-Campus Medical School During a Standardized Patient Encounter

  • Original research
  • Published in: Medical Science Educator

Abstract

Purpose

Class rank and clerkship grades impact a medical student’s residency application. The variability and inter-rater reliability of assessments across multiple clinical sites within a single university system are unknown. We aimed to determine whether medical student assessment across medical school campuses is consistent when a standardized scoring rubric is used.

Design/Methods

Attending physicians who participate in assigning clerkship grades for neurology at three separate clinical campuses of the same medical school observed 10 identical standardized patient encounters completed by third-year medical students during the 2017–2018 academic year. Scoring was completed using a standardized rubric. Descriptive analyses and inter-rater comparisons were performed. Evaluations for this study were completed in 2018.

Results

Of 50 possible points for the patient encounter, the median score across all medical students and all evaluators was 43 (IQR 40–45.5). Evaluator 1 assigned significantly lower overall scores than evaluators 2 and 3 (p = 0.0001 and p = 0.0006, respectively), whose overall assessments of the medical students were consistently similar (p = 0.46). Overall agreement between evaluators was good (ICC = 0.805, 95% CI 0.36–0.95) and consistency was excellent (ICC = 0.91, 95% CI 0.75–0.97).
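The pairwise evaluator comparisons and intraclass correlation coefficients reported above could in principle be reproduced from the raw rubric scores along the following lines. This is a minimal sketch, not the authors’ analysis code: the file name scores.csv, the column names (student, evaluator, score), the use of the pingouin library for the ICC, and the choice of a Wilcoxon signed-rank test for the evaluator comparisons (the paper does not state which test was used) are all assumptions.

    # Sketch of the reported statistics from hypothetical long-format rubric data
    # (10 students x 3 evaluators, columns: student, evaluator, score).
    import pandas as pd
    import pingouin as pg
    from scipy.stats import wilcoxon

    df = pd.read_csv("scores.csv")  # hypothetical file name

    # Overall median and interquartile range across all students and evaluators
    print(df["score"].median(), df["score"].quantile([0.25, 0.75]).tolist())

    # Pairwise evaluator comparisons on paired student scores
    # (assumes evaluators are coded 1, 2, 3 in the data)
    wide = df.pivot(index="student", columns="evaluator", values="score")
    for a, b in [(1, 2), (1, 3), (2, 3)]:
        stat, p = wilcoxon(wide[a], wide[b])
        print(f"Evaluator {a} vs {b}: p = {p:.4f}")

    # Intraclass correlation coefficients; the output table includes both the
    # absolute-agreement and consistency variants referenced in the abstract
    icc = pg.intraclass_corr(data=df, targets="student", raters="evaluator",
                             ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])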

Conclusions

Medical student evaluation across multiple clinical campuses, using observation of identical standardized patient encounters scored with a standardized rubric, generally demonstrated good inter-rater agreement and consistency; however, even the small variation observed may affect overall clerkship scores.



Acknowledgments

The authors would like to thank Julie Mack and the staff in the Neis Clinical Skills Lab at the University of Kansas for their assistance in development and execution of our medical student clinical skills program, and for the technological assistance required to complete this project.

Authorship

SAB was responsible for project conception, design, execution, and initial manuscript drafting. YX, WCR, and JPS completed student evaluations and critically edited the manuscript.

SLH completed the statistical analysis and critically edited the manuscript.

GSG assisted with the statistical analysis and critically edited the manuscript.

Author information


Corresponding author

Correspondence to Sherri A. Braksick.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Braksick, S.A., Wang, Y., Hunt, S.L. et al. Evaluator Agreement in Medical Student Assessment Across a Multi-Campus Medical School During a Standardized Patient Encounter. Med.Sci.Educ. 30, 381–386 (2020). https://doi.org/10.1007/s40670-020-00916-1
