There is a lot to the practice of emergency medicine (EM); so, too, is there a lot to the assessment of learners in this milieu. The rapidly changing clinical environment of EM, and the fact that learners call upon competencies covering the full spectrum of CanMEDS physician roles on most shifts, make EM well suited to the development and implementation of advances in medical education. In this issue, Endres et al. [1] draw attention to one of the hallmarks of EM education and assessment: the use of daily evaluation cards (DECs). In showing that the use of an instrument with evidence of validity produces higher quality results, the authors not only emphasize the importance of systematic tool development but also raise important questions about how we engage in workplace-based assessment. We laud the authors for their step-wise contribution to the literature on instrument development and seek here to contextualize their work within some of the broader debates in medical education and the current transition to competency-based medical education (CBME).

One of the key criticisms of the migration to CBME is the risk of deconstructing competence into discrete measurable chunks at the expense of a more holistic frame of reference [3]. With the Canadian transition to CBME, EM programs now use either entrustable professional activities (EPAs) or field notes as a framework to document acquisition of competencies [5]. Many EM programs have eliminated the typical end-of-shift DECs in favour of EPA-focused workplace-based assessments, while others have kept, or sought to improve, their DECs and actively integrate them into their CBME program of assessment. Arguments in favour of replacing DECs with EPA-focused workplace-based assessments usually involve reducing redundancy, assessor workload, and confusion. Understandably, program leaders have sought to minimize the impact of CBME on their frontline faculty and have therefore switched from DECs alone to EPA assessments alone. Arguments in favour of keeping both revolve around the necessity of preserving the holistic view, the need to provide feedback on specific items not necessarily captured in EPAs, and the ‘forced’ design feature wherein a learner cannot pick and choose when to complete an assessment because one is completed for every shift. Most DECs have been specifically designed to capture a more gestalt perspective on learner performance and are thus well suited to complement the more specific and operational data acquired through EPA-focused assessments. The use of multiple sources of data to inform eventual progress and promotion decisions would align well with the views of Schuwirth et al. [4], who opine in a recent review, “these decisions must be made on the basis of meaningful triangulation of information from various sources, longitudinal data collection, meaningful feedback with targeted learning activities and proportional decision making, always requiring a clear and transparent rationale behind each high-stakes decision.”

What have those who chose to eliminate DECs given up? What are those who kept DECs experiencing? Time will tell. The experience of early CBME implementers at Queen’s University certainly indicates that there is something to be lost if we rely solely on targeted EPA assessments [2]. In response to the elimination of DECs, both faculty and residents sounded the alarm that something was missing in the typical end-of-shift engagement between a faculty member and a learner, and that perhaps assessment processes were “missing the forest for the trees”.

Longstanding use of DECs has created familiarity with frequent workplace-based assessment amongst EM faculty members. Both EPAs and DECs are built on real-time, formative feedback models that also provide data points that eventually underpin summative decisions. What is new is the shift towards the concept of entrustment and the use of entrustment anchors for workplace-based assessment scales, which Endres et al. [1] postulate is one of the reasons the O-EDShOT form performs well. This should work well in EM, since the work of learners is closely overseen by staff physicians and/or senior residents at an almost exclusively 1:1 ratio, and faculty therefore rapidly develop intimate knowledge of the strengths and weaknesses of their learners. But if both DECs and EPAs are framed using entrustment constructs, are they not redundant? Ten Cate et al. [6] describe some of the issues related to entrustment as a construct, separating the retrospective assessment of an example of performance from the much more sophisticated mental act of entrusting an individual with some future activity. In CBME, frontline faculty completing workplace-based assessments are intentionally not tasked with decisions about future independence or trust, but rather are asked to reflect on an observation and indicate the perceived level of supervision that was required (not necessarily the amount actually provided). However, as faculty, we cannot help but think prospectively about trust and entrustment, and we are likely therefore considering this when using these assessment tools.

In EM there is often direct supervision as well as direct case review and validation, and competencies beyond Medical Expert, rooted in the Communicator, Collaborator, Health Advocate, and Leader roles, are also directly encountered. Working with a learner in such an intimate way lends itself to global assessment and to early, step-wise entrustment decisions made via serial interactions and observations. Therein lies the potential of the DEC to complement EPA-focused assessment, particularly when framed in an entrustment construct. A faculty member can separate their opinion of a very specific observation from their global evaluation of the learner, the latter informed in part by the specific components embedded in EPAs, but also by the collective observations of, and interactions with, a trainee over the course of a shift. Both the individual observations of specific EPAs and the global perceptions of trainee competence by frontline faculty are valuable, not only to those making progression decisions but also to our learners, and it is likely that we need mechanisms for capturing and communicating both. How best to integrate DECs and EPA-focused workplace-based assessments at the program and competence committee decision-making level remains unclear. It will be interesting to see how the data sets derived from these two contemporaneously deployed assessment modalities compare.