Background

The Royal College of Physicians and Surgeons of Canada (RCPSC) is shifting to a competency-based medical education (CBME) and assessment scheme termed Competence By Design (CBD). A recent review article and accompanying editorial in the Journal provide a comprehensive overview of the rationale underlying this change as well as the potential opportunities and challenges associated with CBD in anesthesiology.1,2 The specialty of anesthesiology will adopt CBD in residency beginning in July 2017. The RCPSC Fundamental Innovations in Residency Education mechanism has allowed two CBD programs to launch in advance of the RCPSC schedule, namely, University of Ottawa3 in July 2015 and Dalhousie University in July 2016.

One of the foundational components of CBME is the frequent and contextualized assessment of trainees.4 Miller describes the attainment of competency via a staged progression from “knows” and “knows how” to “shows how” and “does”.5 Different methods of assessment can be matched to the various stages of competency (Figure). Simulation scenarios allow learners to engage in deliberate practice of crisis management without risk to patient safety or quality of care and, if designed for evaluation, permit assessment at the “shows how” level.6

Figure 1 Miller’s pyramid of competence matched to assessment methods (based on: Miller GE. The assessment of clinical skills/competence/performance. Acad Med 1990; 65(9 Suppl): S63-7).5

In Canada, anesthesiology residents in all 17 programs have access to simulation-based training, but the level of exposure varies by program. In 2010, the RCPSC introduced simulation-assisted oral examinations.7 Resource limitations currently preclude incorporating mannequin-based stations into the final RCPSC examination. As such, an alternative paradigm for integrating simulation into the evolving curriculum and assessment of competence is required.

In 2013, the RCPSC Anesthesiology Specialty Committee assembled a group of simulation educators, representing each of the 17 residency programs, to form the Canadian National Anesthesiology Simulation Curriculum (CanNASC) Task Force (membership listed in the Appendix). The overall goals of the Task Force are to develop, implement, and evaluate a set of consensus-driven, standardized, mannequin-based simulation scenarios that every trainee must complete satisfactorily before finishing anesthesiology residency and obtaining certification. This paper describes the CanNASC development and implementation processes.

Canadian National Anesthesiology Simulation Curriculum development

The Ottawa Health Science Network Research Ethics Board waived the requirement for review of this project. Curriculum development was based on the principles described by Kern.8 Monthly teleconferences, twice yearly face-to-face meetings, and an engaged Task Force facilitated curriculum development and implementation. A description of some of the key elements of this project follows.

Simulation resource survey

A national survey of simulation resources was conducted amongst Task Force members to assess the feasibility of implementation across the country. Survey results confirmed that all programs had sufficient personnel and equipment to implement a standardized mannequin-based simulation curriculum.

Needs assessment for curriculum content

A literature search was first conducted to identify topics taught in existing anesthesiology simulation curricula.9,10 A needs assessment survey was then developed to identify clinical events, the management of which would be crucial to competence as a consultant anesthesiologist in Canada. The survey was subsequently distributed via FluidSurveys™ to every Canadian resident trainee, program director, simulation instructor, and residency program committee member. Three hundred sixty-eight of 958 invitees responded (38.4%), resulting in 64 unique suggested scenario topics.

Task Force members reached consensus on curriculum content using a modified Delphi technique11 focused on two criteria: 1) the topic addressed a gap in the anesthesia training program (defined as a subject that is considered important but is suboptimally taught and/or assessed in the program); and 2) teaching and/or assessing the subject is best done with a resource-intensive full-body mannequin. A four-point scale was used: 1 = definitely should not be included; 2 = probably should not be included; 3 = probably should be included; 4 = definitely should be included. A score of ≤ 2 was considered a lack of endorsement, and topics not endorsed by ≥ 80% of the Task Force were excluded. Seven scenario topics met the cut-off criteria, and the Task Force began with the top five scenarios.
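
To make the endorsement rule concrete, the following minimal Python sketch applies the four-point scale and the ≥ 80% cut-off; the topic names and ratings are hypothetical and are not the actual Task Force data.

# Minimal sketch of the Delphi endorsement cut-off described above.
# Topic names and ratings are hypothetical, not actual Task Force data.

ENDORSEMENT_CUTOFF = 0.80  # proportion of members who must endorse a topic

def is_endorsed(ratings):
    """A rating of 3 or 4 counts as endorsement; a rating of <= 2 does not."""
    endorsing = sum(1 for r in ratings if r >= 3)
    return endorsing / len(ratings) >= ENDORSEMENT_CUTOFF

hypothetical_round = {
    "Topic A (strong support)": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "Topic B (weak support)":   [2, 3, 1, 2, 4, 2, 3, 2, 1, 2],
}

for topic, ratings in hypothetical_round.items():
    print(topic, "->", "retained" if is_endorsed(ratings) else "excluded")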

Scenario development process

It was critical to standardize the delivery of each scenario to ensure that all candidates experienced an identical assessment regardless of their centre. Thus, a standardized scenario template was created for each scenario, specifying the medical background, setup and equipment needs, and a single-page storyboard summary. Standardized pre-brief and debrief documents were also created.
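
As an illustration only, the elements of the standardized template can be represented as a simple structure; the field names below are our own shorthand, not the official CanNASC template headings.

from dataclasses import dataclass
from typing import List

# Illustrative only: field names are assumptions summarizing the template
# elements described above, not the official template headings.
@dataclass
class ScenarioTemplate:
    title: str
    learning_objectives: List[str]      # grounded in the national curriculum
    medical_background: str             # patient and case context
    setup_and_equipment: List[str]      # room, mannequin, and equipment needs
    storyboard_summary: str             # single-page summary of the scenario flow
    prebrief_document: str = "standardized pre-brief document"
    debrief_guide: str = "standardized debriefing guide"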

All scenarios have learning objectives grounded in the National Curriculum for Canadian Anesthesiology Residency.12 Task Force subcommittees designed the scenarios, which were then refined iteratively by the whole committee. Scenarios were piloted, modified as necessary based on analysis of the pilot, and then finally ratified.

Assessment strategy

Concurrently with scenario development, the Task Force discussed the assessment strategy. The goal was a simple, reliable, and valid criterion-based assessment system from which competence could be inferred. Ultimately, it was decided to use the Managing Emergencies in Pediatric Anesthesia Global Rating Scale (MEPA GRS) as the primary tool for assessment of competence (Table 1). The MEPA GRS has robust evidence supporting its validity in a similar context.13 Scenario-specific checklists and the Anesthesia Non-Technical Skills (ANTS) score14 were not used directly in the final assessment; rather, they helped guide the raters’ judgement in deciding on the final MEPA GRS score and aided in debriefing the scenario. Assessment, by at least two raters, could be conducted live or via video review at the discretion of the program.

Table 1 Managing Emergencies in Pediatric Anesthesia Global Rating Scale13

Scenario-specific checklists were created using a modified Delphi technique, with the initial items generated from a literature review that included existing published guidelines. Items endorsed by < 80% of respondents were discarded. Four rounds were required to reach consensus for each scenario, resulting in checklists of 10-16 items. Items had to be observable performance markers that raters could specifically identify from the learners’ actions or verbal statements. Certain items on each checklist were designated mandatory, and omitting any of them resulted in failure of the scenario; an item required 100% consensus by the Task Force to be designated mandatory. Checklist items were scored using a three-point scale (0 = not done; 1 = done improperly/partially, or out of sequence; 2 = done appropriately).
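
A minimal sketch of how a completed checklist might be recorded and summarized follows; the item wording and helper names are hypothetical, and the summary is intended to guide, not replace, the rater’s final MEPA GRS judgement.

from dataclasses import dataclass
from typing import List

# Illustrative sketch of the checklist scoring rules described above. Item
# wording is hypothetical; each item is scored 0 (not done), 1 (done
# improperly/partially, or out of sequence), or 2 (done appropriately), and
# omitting any mandatory item results in failure of the scenario.

@dataclass
class ChecklistItem:
    description: str
    mandatory: bool   # designated mandatory only with 100% Task Force consensus
    score: int        # 0, 1, or 2, as assigned by the rater

def checklist_summary(items: List[ChecklistItem]) -> dict:
    """Summarize a completed checklist to help guide the rater's MEPA GRS score."""
    missed_mandatory = [i.description for i in items if i.mandatory and i.score == 0]
    return {
        "total_score": sum(i.score for i in items),
        "max_score": 2 * len(items),
        "missed_mandatory_items": missed_mandatory,
        "scenario_failed": bool(missed_mandatory),  # omitted mandatory item -> failure
    }

# Hypothetical completed checklist
items = [
    ChecklistItem("Calls for help", mandatory=True, score=2),
    ChecklistItem("Administers first-line drug", mandatory=True, score=0),
    ChecklistItem("Communicates plan to team", mandatory=False, score=1),
]
print(checklist_summary(items))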

Rater training was accomplished by developing a “how-to” rater training guide and two training videos. Two experienced raters then created a commentary in the format of a transcribed rater training session discussing the scoring, thought processes, and final score given by each rater.

Canadian National Anesthesiology Simulation Curriculum implementation

All scenarios, storyboards, pre-brief/debrief documents, assessment rubrics, and data collection surveys were translated into both official languages. Standardized implementation guidelines were developed, comprising the elements described in Table 2. National rollout of the first CanNASC scenario occurred during the 2014-2015 academic year at all 17 programs in Canada. Each program controlled the timing, location, and local resources involved in delivering the scenario. The target cohort was senior residents, corresponding to the Core or Transition to Practice phases of the new CBD curriculum.

Table 2 Standardized elements of the CanNASC implementation guidelines

Data collection and analysis

Each program created a unique participant number, containing no personal identifiers, for each trainee and entered the results into a FluidSurveys form. The key linking identification numbers to trainee names was kept locally and not shared nationally; thus, the master database was anonymized.
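
A minimal sketch of this locally keyed de-identification is shown below, with a CSV export standing in for the FluidSurveys entry; the identifier format, field names, and helper functions are assumptions for illustration only.

import csv
import secrets

# Illustrative sketch: the linking key (participant number -> trainee name)
# stays at the local site; only records keyed by the participant number are
# submitted nationally. Identifier format and field names are hypothetical.

local_key = {}   # kept locally, never shared nationally

def assign_participant_number(trainee_name: str, site_code: str) -> str:
    """Create a participant number containing no personal identifiers."""
    pid = f"{site_code}-{secrets.token_hex(3)}"   # e.g., "S01-a3f9c2" (hypothetical format)
    local_key[pid] = trainee_name
    return pid

def export_deidentified(records, path):
    """Write only de-identified fields for national data collection."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["participant_number", "pgy_level", "grs_score"])
        writer.writeheader()
        for r in records:
            writer.writerow(r)

pid = assign_participant_number("Jane Doe", "S01")
export_deidentified([{"participant_number": pid, "pgy_level": "PGY-5", "grs_score": 5}],
                    "national_upload.csv")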

To ensure standardized delivery of the scenario and to collect feedback for continuing quality improvement, data were collected on participant performance and key processes, including the use of the pre-brief document and debriefing guide, incidence of scenario technical difficulties, rater calibration, and duration of the scenario and debrief. Resident feedback was solicited.

Adherence to implementation procedures

All sites correctly delivered the pre-brief document and used the debriefing guide, and 92% of sessions encountered no technical difficulties. Of the nine instances of technical difficulties, six were due to equipment, two were due to procedure timing, and one was a facility issue (fire alarm). All sites conducted rater calibration, with a minimum of two raters at each site. The mean (SD) scenario and debrief durations were 11.9 (1.7) min and 17.3 (7.8) min, respectively.

Resident performance

One hundred fourteen residents (89 postgraduate year [PGY]-5 and 25 PGY-4) participated. The overall pass rate, defined as a MEPA GRS score ≥ 4, was 79%. The pass rate among PGY-5 residents was significantly higher than among PGY-4 residents (83% vs 64%; P < 0.04), providing evidence that the assessment tools can discriminate between levels of training. Residents were given immediate formative feedback during debriefing. Reasons for failing the scenario were collected and fed back to each program to aid in planning future educational initiatives. Consistent with CBD principles, residents who failed were allowed to remediate, as achievement of competence is the ultimate goal of CBD.
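
The statistical test used is not specified here; as an illustration only, a chi-square test on a 2 × 2 table with cell counts reconstructed from the reported group sizes and pass rates (approximately 74 of 89 PGY-5 and 16 of 25 PGY-4 residents passing) could be computed as follows, and the actual analysis may have differed.

from scipy.stats import chi2_contingency

# Illustration only: cell counts are reconstructed from the reported group
# sizes and pass rates; the actual test used in the study is not specified.
table = [
    [74, 15],   # PGY-5: pass, fail (~83% pass)
    [16, 9],    # PGY-4: pass, fail (64% pass)
]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")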

Resident feedback

One hundred eight residents completed the feedback surveys (95% response rate). They were asked to indicate their level of agreement with a series of statements using a five-point Likert scale (1 = strongly disagree; 5 = strongly agree) (Table 3). While the primary purpose of the sessions was performance evaluation, residents also valued the educational component of the debriefing. Residents considered the simulation team and scenario realistic and the scenario relevant to their clinical practice. Seventy-seven percent of respondents supported the use of simulation as an assessment modality during residency.

Table 3 Resident feedback on CanNASC simulation session, reported as % agreement, defined as a score ≥ 4 on a five-point Likert scale (1 = strongly disagree; 5 = strongly agree)

Future directions

The shift to CBME demands an increased frequency and variety of valid and reliable assessments. This project demonstrates that it is feasible to achieve consensus on the elements of a national simulation curriculum, including assessment, for anesthesiology trainees.

The CanNASC scenarios have been successfully implemented in every residency program in Canada, with a universal commitment to deliver additional scenarios in the coming academic years. Collecting data on this scale from standardized content provides evidence with which to support (or refute) the reliability and validity of the process. Program evaluation and quality improvement frameworks have been built into the project. Additionally, performance data will exist not only for individual trainees but also for training programs, which can help assure universities, regulators, and the public that the quality of training and certification is uniformly high across the country. The RCPSC Specialty Committee considers CanNASC a useful assessment modality for high-risk, low-frequency clinical events and is discussing the inclusion of CanNASC within the assessment blueprint for CBD residency training.

While initially designed for education and assessment during residency, the scenarios have also attracted great interest among continuing education (CE) providers seeking to meet the demand for simulation-based CE programs. Furthermore, the experience accumulated in developing the CanNASC scenarios may help form a basis for the evolution of maintenance-of-certification processes for practicing anesthesiologists.

Conclusions

The RCPSC approach to the future of medical education is clear: “CBD is a multi-year program to implement a CBME approach to residency education and specialty practice in Canada…”.15 This initiative requires major changes to all residency education programs, including an increase in the quantity and diversity of assessment modalities during specialty training. CanNASC has been implemented at every program site and allows nationwide comparisons and performance benchmarking. Our development and implementation processes could be adapted or adopted by any specialty interested in implementing a simulation-based curriculum that incorporates competency-based assessment on a national scale.