Using Peers to Assess Handoffs: A Pilot Study
BACKGROUND: Handoffs among post-graduate year 1 (PGY1) trainees occur with high frequency. Peer assessment of handoff competence would add a new perspective on how well handoff information helped the receiving trainee provide optimal patient care.
OBJECTIVE: To test the feasibility of an instrument for peer assessment of handoffs against three criteria: capturing evaluations in real time using mobile technology, exhibiting strong psychometric properties, and achieving high PGY1 satisfaction scores.
DESIGN: An iPad® application was built for a seven-item handoff instrument. Over a two-month period, post-call PGY1s completed assessments of three co-PGY1s from whom they had received handoffs the prior evening.
PARTICIPANTS: Internal Medicine PGY1s at the University of Pennsylvania.
MAIN MEASURES: ANOVA was used to explore interperson score differences (validity). Generalizability analyses provided estimates of score precision (reproducibility). PGY1s completed satisfaction surveys about the process.
KEY RESULTS: Sixty-two PGY1s (100 %) participated in the study, and 59 % of the targeted evaluations were completed. The major limitations were network connectivity and inability to find the post-call trainee. PGY1 scores on the single item of “overall competency” ranged from 4 to 9, with a mean of 7.31 (SD 1.09). Generalizability coefficients approached 0.60 for 10 evaluations per PGY1 within a single rotation and 12 evaluations per PGY1 across multiple rotations. The majority of PGY1s believed that they could adequately assess handoff competence and that the peer assessment process was valuable (70 % and 77 %, respectively).
CONCLUSIONS: The psychometric properties of an instrument for peer assessment of handoffs are encouraging. Obtaining 10 to 12 evaluations per PGY1 allowed for reliable assessment of handoff skills. Peer evaluations of handoffs using mobile technology were feasible and well received by PGY1s.
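For context, the reported dependence of reliability on the number of evaluations follows the standard generalizability-theory relationship. The sketch below uses illustrative variance components, not the study's actual estimates:

```latex
% Generalizability (G) coefficient for n peer evaluations per trainee:
% \sigma^2_p  = true between-trainee score variance
% \sigma^2_\delta = residual (rater and error) variance
\[
  E\rho^2 \;=\; \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta / n}
\]
% Illustrative values only: if \sigma^2_\delta / \sigma^2_p = 7,
% then n = 10 gives E\rho^2 = 10/17 \approx 0.59, near the reported 0.60.
```

Because the error term σ²δ/n shrinks as evaluations accumulate, pooling roughly 10 to 12 observations per trainee is what brings the coefficient toward the 0.60 reported above.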
KEY WORDS: peer evaluations; assessments; handoffs
The authors would like to thank all the housestaff of the internal medicine residency program at the University of Pennsylvania who participated in this study, as well as their Program Leadership. In addition, significant information technology support was provided by the Hospital of the University of Pennsylvania IT Division. The authors would also like to thank the National Board of Medical Examiners.
This study was conducted in part with funds provided by the National Board of Medical Examiners. Dr. Myers was supported in part by the Josiah Macy Foundation.
The abstract of an earlier version of this article was presented at the AAMC Annual Meeting in 2011.
Conflict of Interest
The authors declare that they do not have a conflict of interest.