
Comparing the Ottawa Emergency Department Shift Observation Tool (O-EDShOT) to the traditional daily encounter card: measuring the quality of documented assessments

  • Original Research
  • Canadian Journal of Emergency Medicine

Abstract

Objectives

The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) is a workplace-based assessment designed to evaluate a trainee's performance across an entire shift. It was developed in response to validity concerns with traditional end-of-shift workplace-based assessments, such as the daily encounter card (DEC). The O-EDShOT previously demonstrated strong psychometric characteristics; however, it remains unknown whether the O-EDShOT facilitates measurable improvements in the quality of documented assessments compared to DECs.

Methods

Three randomly selected DECs and three O-EDShOTs completed by each of 24 faculty members were scored by two raters using the Completed Clinical Evaluation Report Rating (CCERR), a previously published 9-item quantitative measure of the quality of a completed workplace-based assessment. Automated CCERR (A-CCERR) scores, which do not require raters, were also calculated. Paired-samples t tests were conducted to compare the quality of assessments between O-EDShOTs and DECs as measured by the CCERR and A-CCERR.
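The comparison described above is a standard paired-samples design. The following is a minimal sketch in Python of that analysis, assuming hypothetical per-faculty mean scores; the simulated data and variable names are ours, not the authors', and are for illustration only:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical per-faculty mean CCERR scores (one pair per faculty member, n = 24),
    # simulated around the means and SDs reported in the Results.
    dec_scores = rng.normal(loc=21.5, scale=3.9, size=24)      # daily encounter cards
    oedshot_scores = rng.normal(loc=25.6, scale=2.6, size=24)  # O-EDShOTs

    # Paired-samples t test: each faculty member contributes one score per tool.
    t_stat, p_value = stats.ttest_rel(oedshot_scores, dec_scores)

    # Cohen's d for paired data: mean difference over the SD of the differences.
    diff = oedshot_scores - dec_scores
    cohens_d = diff.mean() / diff.std(ddof=1)

    print(f"t({len(diff) - 1}) = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")

Note that ttest_rel pairs observations by position, mirroring the within-faculty pairing of DEC and O-EDShOT scores described above.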

Results

CCERR scores were significantly higher for O-EDShOTs (mean (SD) = 25.6 (2.6)) than for DECs (21.5 (3.9); t(23) = 5.2, p < 0.001, d = 1.1). A-CCERR scores were also significantly higher for O-EDShOTs (mean (SD) = 18.5 (1.6)) than for DECs (15.5 (1.2); t(24) = 8.4, p < 0.001). CCERR items 1, 4, and 9 were rated significantly higher for O-EDShOTs than for DECs.
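As an illustrative consistency check (our arithmetic, not a calculation reported by the authors), the reported effect size agrees with the test statistic: for a paired-samples t test, a common approximation is

    d \approx \frac{t}{\sqrt{n}} = \frac{5.2}{\sqrt{24}} \approx 1.06

which rounds to the reported d = 1.1.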

Conclusions

The O-EDShOT yields higher-quality documented assessments than the traditional end-of-shift DEC. Our results provide additional validity evidence for the O-EDShOT as a tool for capturing trainee on-shift performance. Its assessments can serve both as a stimulus for actionable feedback and as a source of high-quality workplace-based assessment data to inform decisions about emergency medicine trainees' progress and promotion.




Acknowledgements

The authors would like to thank Drs. Sebastian Dewhirst and Jeffrey Landreville of the University of Ottawa Department of Emergency Medicine for their contributions as raters in this study, and Katherine Scowcroft, a research assistant at the University of Ottawa Department of Innovation in Medical Education, for her support.

Funding

This research was supported through grants to the authors from the University of Ottawa Department of Emergency Medicine (DEM) Spring Academic Grant as well as the Medical Student Education Research Grant (MSERG) through the Ontario Medical Students Association (OMSA).

Author information

Corresponding author

Correspondence to Warren J. Cheung.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 89 KB)

Supplementary file 2 (PDF 227 KB)


About this article


Cite this article

Endres, K., Dudek, N., McConnell, M. et al. Comparing the Ottawa Emergency Department Shift Observation Tool (O-EDShOT) to the traditional daily encounter card: measuring the quality of documented assessments. Can J Emerg Med 23, 383–389 (2021). https://doi.org/10.1007/s43678-020-00070-y

