Advances in Health Sciences Education

Volume 17, Issue 3, pp 441–451

Didactic CME and practice change: don’t throw that baby out quite yet

Authors

  • Curtis A. Olson
    • Department of Medicine, University of Wisconsin-Madison
    • Office of Continuing Professional Development in Medicine and Public Health, University of Wisconsin-Madison
  • T. R. Tooman
    • Office of Continuing Professional Development in Medicine and Public Health, University of Wisconsin-Madison
Reflections

DOI: 10.1007/s10459-011-9330-3

Cite this article as:
Olson, C.A. & Tooman, T.R. Adv in Health Sci Educ (2012) 17: 441. doi:10.1007/s10459-011-9330-3

Abstract

Skepticism exists regarding the role of continuing medical education (CME) in improving physician performance. The harshest criticism has been reserved for didactic CME. Reviews of the scientific literature on the effectiveness of CME conclude that formal or didactic modes of education have little or no impact on clinical practice. This has led some to argue that didactic CME is a highly questionable use of organizational and financial resources and a cause of lost opportunities for physicians to engage in meaningful learning. The authors’ current program of research has forced them to reconsider the received wisdom regarding the relationship between didactic modes of education and learning, and the role frank dissemination can play in bringing about practice change. The authors argue that the practice of assessing and valuing educational methods based only on their capacity to directly influence practice reflects an impoverished understanding of how change in clinical practice actually occurs. Drawing on their case studies research, they give examples of the functions didactic CME has served in the interest of improved practice and then explore why the contribution of didactic CME is often missed or dismissed. The goal is not to advocate for a return to the status quo ante, in which lecture-based education was the dominant modality, but rather to acknowledge both the limits and the potential of this longstanding approach to delivering continuing education.

Keywords

Medical education · Continuing medical education · Didactic education · Practice change · Physician performance · Assessment · Healthcare outcomes · Evaluation

Much attention has been devoted to the US health system’s failure to consistently provide safe, effective, and high-quality care (Institute of Medicine. Committee on Quality of Health Care in America 2001). A major cause of this failure is that what we know often does not readily translate into better performance or patient health outcomes (Straus et al. 2009). Historically, continuing medical education (CME) has been considered a means by which practitioners could acquire up-to-date knowledge and skills in order to provide the best possible health care for their patients (Moore 1998). However, there is skepticism about the role CME can play in improving physician performance (Davis and Galbraith 2009). The sharpest criticism has been reserved for didactic CME, which, following Davis et al., we define as an educational method or activity comprising “predominately lectures or presentations with minimal audience interaction or discussion” (Davis et al. 1999, p. 868). (We will use “didactic” and “formal” interchangeably.) For example, in their systematic review of randomized controlled trials, Davis and colleagues concluded that “where performance change is the immediate goal of a CME activity, the exclusively didactic modality has little or no role to play” (Davis et al. 1999, p. 873). Similarly, Bloom stated that the “CME tools and techniques most commonly used [didactic presentations and print materials] are the least effective ones in helping physicians adapt to new diagnostic and therapeutic interventions” (Bloom 2005, p. 383). Marinopoulos et al. agreed that “interactive techniques seem to be more effective than non-interactive ones” (Marinopoulos et al. 2007, p. 7). Didactic CME has been called a highly questionable use of organizational and financial resources, as well as a cause of lost opportunities for physicians who could instead be taking part in “interactive, challenging, and sequenced activities that have increased potential for positively affecting their performance and the health of their patients” (Davis et al. 1999, p. 873). By 1999, some were asking why the medical profession would “persist in delivering such a product and accrediting its consumption” (Davis et al. 1999, p. 873).

To the extent that these findings have encouraged CME professionals to critically re-examine our heavy reliance on didactic CME, this has been a positive development. However, our research is forcing us to reconsider the contribution that didactic modes of continuing education can make to practice change. We have come to believe that the prevailing view—that the value of didactic CME should rest on its capacity to directly produce changes in learner competency and clinical practice—reflects an impoverished view of how change in clinical practice actually occurs and of the many important functions didactic approaches to education can serve in the interest of improving practice. Although our focus here is on CME, our observations raise questions relevant to undergraduate and graduate medical education, for they too have seemingly been captivated by the outcomes movement and competency-based evaluation. The CanMEDS roles (Frank 2005) and the ACGME competencies (Accreditation Council for Graduate Medical Education 2007) alike foster the notion that education should be valued only to the extent that it can be shown to directly result in pre-specified performance outcomes.

In this paper, we argue that the impact of didactic educational methods has been systematically under-recognized. We use examples drawn from our case studies of how clinical teams change their practice to show that formal CME can and does play a role—sometimes a critically important role—in improving practice, but this role is often something other than that of primary, proximal cause of change. We cannot say how often this phenomenon occurs based on the three cases we will describe. Instead, employing the logic of counterargument (Dion 2003), we present these cases as “black swans” (Popper 1959), as evidence of exceptions that challenge the sweeping conclusion that didactic CME is ineffective in changing clinical practice. We then suggest several hypotheses to account for why the field has tended to overlook these contributions and discuss implications for theory and research in medical education.

New insights into the impact of didactic CME

Our belief that the impact of didactic CME is under-recognized emerged from a series of naturalistic, retrospective case studies of how clinical teams in three US hospitals made significant improvements in practices aimed at reducing antimicrobial resistance (Olson et al. 2010). This methodology allowed us to pose a fundamentally different question from the one asked in most CME impact studies and systematic reviews. We asked not, “Is continuing education effective at changing performance or improving patient outcomes?” but rather, “When change in clinical practice is observed, what role, if any, did continuing education play?” Our theoretical framework was Soft Knowledge Systems (SKS) (Engel 1997). The primary intent of the SKS framework is to understand how practitioners use scientific and other types of knowledge to bring about innovation. SKS emphasizes how actors collect, produce, evaluate, communicate, and synthesize knowledge and information (K&I) to support the change process.

We were intrigued to find that in each of the cases studied, formal, didactic educational activities provided K&I critical to the success of the change effort. Following are examples drawn from each of the three cases.

Case 1: problem prioritization

In Case 1, an academic medical center in the western US addressed a marked increase in Pseudomonas aeruginosa resistance by implementing a hospital-wide policy restricting the use of selected antimicrobials. A didactic CME session was instrumental in making Pseudomonas resistance a priority. At the Interscience Conference on Antimicrobial Agents and Chemotherapy, the physician champion attended a didactic session given by a leading expert on infectious diseases. The presenter asked for a show of hands from those who had encountered patients in their hospital with P. aeruginosa resistant to colistin, one of the antibiotics of last resort for multidrug-resistant P. aeruginosa (Falagas et al. 2008). Nearly everyone in the room raised a hand. This demonstration had a powerful effect on the physician champion, helping her see that the issue of P. aeruginosa resistance was not simply an emerging concern, but one that demanded immediate attention.

We’re like, okay, this is seriously an issue, not just unique to our hospital. It sort of just compelled us more to start looking at what do we need to do different. [Physician Champion, Case 1]

Her experience in this didactic CME session led her and her colleagues to launch a two-year effort to bring their resistance rates down.

Case 2: providing part of the solution

Case 2 took place in the intensive care unit (ICU) of a medium-sized community hospital in the Midwest. The goal was to eliminate ventilator-associated pneumonia (VAP). The ICU team implemented a version of the Institute for Healthcare Improvement’s VAP bundle, which included peptic ulcer disease prophylaxis, deep vein thrombosis prophylaxis, elevation of the head of the bed to 45 degrees, and a sedation vacation (Resar et al. 2005). The ICU team modified the bundle by reducing the head-of-bed angle to 30 degrees. The result was a significant drop in infection rates; however, the rates plateaued short of the team’s goal, prompting a critical re-examination of all practices in the ICU that might be a source of infection. The team suspected that the endotracheal tube they were using might be the problem. A tube with an innovative design had been considered months earlier, but at that time the ICU medical director had heard there were problems with it and would not approve the change. Later, while attending a critical care conference, the ICU medical director learned that refinements had been made in the design of the tube.

There had been some problems with it when it was first launched a few years back. So a lot of the ICUs were hesitant to implement those, but he had heard about and saw its presentation and was interested in presenting that back to us. [Infection Control Professional, Case 2]

Based on this new information, the medical director approved the change. Shortly afterward, the VAP rate dropped to zero and remained there for the 18 months preceding our study, an accomplishment the project team attributed to the adoption of the new endotracheal tube.

Case 3: removing political barriers

In Case 3, a small community hospital on the Atlantic coast sought to reduce the number of new hospital-acquired methicillin-resistant Staphylococcus aureus (MRSA) infections. To achieve this goal, the infection control practitioner (ICP) and the physician chair of the Infection Control Committee developed a proposal to institute routine surveillance screening of high-risk patients on admission, with isolation of patients known to be infected and of all high-risk patients until their screening results came back negative. The proposal was presented for a vote at a monthly medical staff meeting, and the reaction was strong and unfavorable. Toward the end of a heated discussion, a highly respected senior surgeon spoke up to say that while attending a regional meeting of the American College of Surgeons, he had learned that an operating room in another state had been shut down as a result of MRSA, adding, “We can’t afford to have that happen here.” Even though the surgeon and the ICP had often found themselves on opposite sides of issues, this time he lent her his support.

I really hadn’t supported her in anything else…. I stood up and made a motion. I said, ‘I make a motion that we accept this scenario and we get started.’ [Senior Surgeon, Case 3]

Moments later, the proposal was approved unanimously—an action heavily influenced by information obtained at a traditional CME activity.

The role of didactic CME

The knowledge and information gleaned from didactic CME activities in these cases did not serve as the primary, proximal cause of change. It had no direct, measurable impact on clinical practice that would be detected by the outcome measures typically used in CME impact research and evaluation. Nevertheless, these examples demonstrate that didactic CME can and does contribute to practice improvement and that, at times, the contribution is critical to the success of efforts to produce the change.

Given that the literature consistently concludes that formal CME has little or no value as a tool for practice change, we found it remarkable that it supplied an important piece of the puzzle in each of the three cases. More remarkable yet, we could have presented several additional examples from our cases. These findings present a very different picture of the value of didactic CME than the one offered by the reviews of the research. How can we account for the difference? Why do we miss, or perhaps dismiss, contributions of didactic CME such as those illustrated by our examples?

Why do we miss these contributions to change?

Our hypothesis is that, as a field, we have a blind spot when it comes to recognizing the contribution of formal CME to practice change and that, as a result, formal CME is undervalued as a tool for effecting change. This blind spot is the result of certain beliefs and practices that have a strong hold on the continuing education field. While an extensive analysis is beyond the scope of this paper, we would like to outline why we think the contributions of formal CME are consistently overlooked by considering three interrelated aspects of CME research and practice: (1) the dominant outcomes framework used to assess the effectiveness of educational methods, (2) the prevailing theories of clinical practice change, and (3) the research methods used for effectiveness studies.

The CME evaluation framework

The widely accepted framework for evaluating the impact of CME activities is the one first described by Kirkpatrick (Kirkpatrick 1959). This framework was adapted to the health care field by Moore (Moore 1998) and recently expanded upon (Table 1) (Moore et al. 2009). The logic of Moore’s framework is that “CME evaluation should focus on identifying, measuring, and describing the value provided by CME” and that value is based on the extent to which CME “leads to enhanced physician performance, improved health care quality, and reduced costs” (Moore 2003, p. 251). It is used as a framework for assessing the impact of educational methods (e.g., role plays, computer simulations, live presentations), activities (which may incorporate more than one educational approach, such as a lecture plus small group discussion), and programs (e.g., the portfolio of activities delivered by an academic CME provider over the course of a year).
Table 1

Moore et al.’s expanded outcomes framework (used with permission)

| Original CME framework | Miller’s framework | Expanded CME framework | Description | Source of data |
|---|---|---|---|---|
| Participation | | Participation (Level 1) | The number of physicians and others who participated in the CME activity | Attendance records |
| Satisfaction | | Satisfaction (Level 2) | The degree to which the expectations of the participants about the setting and delivery of the CME activity were met | Questionnaires completed by attendees after a CME activity |
| Learning | Knows | Learning: declarative knowledge (Level 3A) | The degree to which participants state what the CME activity intended them to know | Objective: pre- and posttests of knowledge. Subjective: self-report of knowledge gain |
| | Knows how | Learning: procedural knowledge (Level 3B) | The degree to which participants state how to do what the CME activity intended them to know how to do | Objective: pre- and posttests of knowledge. Subjective: self-report of knowledge gain |
| | Shows how | Competence (Level 4) | The degree to which participants show in an educational setting how to do what the CME activity intended them to be able to do | Objective: observations in educational setting. Subjective: self-report of competence; intention to change |
| Performance | Does | Performance (Level 5) | The degree to which participants do what the CME activity intended them to be able to do in their practices | Objective: observations of performance in patient care setting; patient charts; administrative databases. Subjective: self-report of performance |
| Patient health | | Patient health (Level 6) | The degree to which the health status of patients improves due to changes in the practice behavior of participants | Objective: health status measures recorded in patient charts or administrative databases. Subjective: patient self-report of health status |
| Community health | | Community health (Level 7) | The degree to which the health status of a community of patients changes due to changes in the practice behavior of participants | Objective: epidemiological data and reports. Subjective: community self-report |

Moore’s framework has both a descriptive and a normative aspect. It is descriptive in the sense that, although it is not identified as such, it is effectively a theory of causality (Pawson and Tilley 1997). By that we mean it describes a sequence of outcomes representing the mechanism by which an educational activity is expected to produce change in clinical practice, and ultimately in patient and population health. It is normative in the sense that it provides a hierarchy of values for ascribing worth to CME activities: an activity that produces satisfaction but not learning is not valued as highly as one that produces both satisfaction and learning. It is also normative in the sense that it suggests planners of CME activities should aspire to do better than simply produce learning and that the ultimate goal should be to improve patient and population health through fostering improvements in practice. This framework has, we believe, encouraged the practice of pegging the value of an educational method or activity directly to its impact on practice change.

It is difficult to overstate how widely the field has accepted the assumption that the value of CME at all levels (method, activity, and program) depends on its demonstrated capacity to directly result in practice change. It has become “enshrined” in the accreditation requirements of the Accreditation Council for Continuing Medical Education (Davis and Galbraith 2009). The Institute of Medicine report on the future of CME (citing the Macy Foundation report [Hager et al. 2007] on continuing education in the health professions) states, “An effective CE method is now understood to be one that has enhanced provider performance and thus improved patient outcomes” (Institute of Medicine. Committee on Planning a Continuing Health Care Professional Education Institute 2010, p. 35, italics added).

We certainly agree with those who argue that the ultimate purpose of medical education should be to improve the health of patients or a given population (Kern et al. 2009). However, this model becomes problematic if taken to mean that practice change, as a direct result of education, should be the sole or primary criterion used to evaluate the impact of individual educational activities, much less particular educational methods. Our cases illustrate why.

The underlying theory of practice change

In our view, the theory of causality embedded in Moore’s framework draws our attention away from other possible outcomes that may also contribute to practice change. Why does this framework predispose us to miss important contributions? There are many reasons, but here we want to emphasize four.

  1. It describes only one of several possible pathways to change. There are undoubtedly times when CME activities do function in the stepwise, direct manner described in Moore’s framework. However, as our case examples show, at other times CME activities are but one element in a complex picture, and those activities can serve multiple, important functions.

  2. It focuses on the individual learner. Like most approaches to improving clinical practice through education (Cervero 2003), it makes individuals the focus of change. However, physicians do not practice as solitary actors (Smith and Schmitz 2004). Instead, practice is a social act that takes place in highly complex systems (Heffner 2001). Models focusing on individuals lead to a lack of attention to the social and political aspects of change and the ways in which K&I can be used to address them.

  3. It is largely ahistorical and acontextual. It assumes that the educational activity is the proximal and primary cause of practice change, leaving it unable to adequately account for the temporal, organizational, and social contexts that learners bring with them to the activity. Our case examples show that change is sometimes less an event than an ongoing activity, the product of a host of causal factors or “collection of forces” (Fox and Bennett 1998) aligning for action and impinging at different times.

  4. It gives little attention to the agency of learners. Change is seen as a process driven by educators, not by actors’ decisions about what problems should be given priority. It treats learners as largely reactive recipients of information instead of proactive agents seeking K&I and other resources needed to advance their agendas.

Research methods used for effectiveness studies

The conclusion that formal, didactic CME has little or no effect on physician practice was based on systematic reviews of randomized controlled trials (Davis et al. 1999; Bloom 2005; Marinopoulos et al. 2007; Haynes et al. 1984; Davis et al. 1992; Oxman et al. 1995; Thomson O’Brien et al. 2001). We hypothesize that our methods for assessing the impact of didactic CME further blind us to some of its contributions.

Experiments conducted to study the effectiveness of educational methods typically examine the causal relationship between an intervention and a narrow range of outcomes. The purpose is to estimate the marginal impact of the intervention on the outcome variable of interest. The approach assumes that the endpoint is easily identified, objectified, and measured (Norman 2010). However, our study shows that the important contributions education can make to successful change in practice can be diverse, difficult to anticipate, and separated from the educational intervention by months or even years.

Furthermore, RCTs demand theories of causation that are simplifications of the multidimensional and dynamic theories needed to more fully capture the realities of how practice change actually occurs. RCTs focus on theorized causal linkages that are bivariate and unidirectional (Cook and Payne 2002). RCTs examining the impact of CME on practice are typically “black box” studies, giving little or no attention to the mechanism by which the education is expected to produce the change in practice, much less collecting data to confirm that the supposed mechanism is borne out by the evidence. As such, RCTs are often predicated on theories of causation that are underspecified and highly simplified.

Conclusion

Having an empirically grounded, rich understanding of the contributions that didactic CME can and does make to practice change is fundamental to knowing how to effectively employ didactic CME as an educational method. We have a body of empirical evidence and theory that gives us insight into how individual physicians learn and change their practice (Fox et al. 1989; Slotnick 1999), and this body of work addresses to some degree how didactic CME contributes to the process. However, this work has not systematically explored the contributions of didactic CME, and we are only beginning to understand how clinical teams learn and change their clinical practice.

This state of affairs leads us to conclude that more research is needed on the phenomenology and “natural history” of change. Schön made this point when he advocated that we start by asking not how we can make better use of research-based knowledge, but rather what we can learn from a careful examination of high-performing individuals (Schön 1987). To this we would add high-performing teams.

We believe that as we develop a better understanding of how practice change actually occurs, we will become more acutely aware of the limitations of mechanistic input–output models that construe educational methods as techniques for producing change. Organic metaphors will be needed to more fully capture the complexities involved. One promising example of an organic metaphor is the knowledge ecosystem, a construct used in the field of strategic management that conceptualizes innovation and change as the product of synthesizing existing local knowledge with knowledge drawn from an external reservoir (Nonaka et al. 2008). Complex adaptive systems theory is another conceptual lens with the potential to help us see beyond mechanistic approaches to improving health care through continuing education (Plsek and Greenhalgh 2001).

We also need to critically re-examine the criteria by which didactic CME is evaluated. To say that didactic CME does not directly and immediately lead to change in clinical competency or practice is not the same as saying it has no value. Pegging the value of an educational method directly to practice change and patient outcomes overlooks a broad range of potentially valuable, even critical contributions the method can make to professional development and practice change.

We need to acknowledge the limits of RCTs and adopt strategies to make them more useful. For example, experimenters can open up the “black box” and articulate the mechanism by which the treatment is expected to produce the anticipated outcome and strive, in the words of Cook and Payne, “to improve the explanatory yield of their work by adding more measures about possible intervening processes” (Cook and Payne 2002, p. 154). Mixed methods research, combining qualitative methods with an RCT, is another potential strategy.

Finally, while we believe that formal, didactic educational methods can play an important role in facilitating change in clinical practice, we want to state unequivocally that we are not advocating for a return to the status quo ante, in which didactic CME is the dominant modality. Instead, we believe that as medical educators we must move beyond thinking in terms of isolated educational methods and activities to a perspective that incorporates strategic programs of action, in which a portfolio of methods and activities is deployed, each designed to serve specific purposes as part of a larger plan for enhancing competency, clinical practice, patient outcomes, and population health. Within such a framework, not all components would be expected to function as the direct, proximal cause of change, but rather would be expected to make clearly defined contributions to the overall goal of excellence in patient care.

Acknowledgments

The case studies research described in this article was funded in part by an unrestricted educational grant from Wyeth Pharmaceuticals; this publication was supported by grant 1UL1RR025011 from the Clinical and Translational Science Award (CTSA) program of the National Center for Research Resources, National Institutes of Health.

Ethical approval

The case studies were reviewed and determined to be exempt by the Health Sciences Institutional Review Board of the University of Wisconsin-Madison.

Copyright information

© Springer Science+Business Media B.V. 2011