Didactic CME and practice change: don’t throw that baby out quite yet
Olson, C.A. & Tooman, T.R. Adv in Health Sci Educ (2012) 17: 441. doi:10.1007/s10459-011-9330-3
Skepticism exists regarding the role of continuing medical education (CME) in improving physician performance. The harshest criticism has been reserved for didactic CME. Reviews of the scientific literature on the effectiveness of CME conclude that formal or didactic modes of education have little or no impact on clinical practice. This has led some to argue that didactic CME is a highly questionable use of organizational and financial resources, and a cause of lost opportunities for physicians to engage in meaningful learning. The authors’ current program of research has forced them to reconsider the received wisdom regarding the relationship between didactic modes of education and learning, and the role frank dissemination can play in bringing about practice change. The authors argue that the practice of assessing and valuing educational methods based only on their capacity to directly influence practice reflects an impoverished understanding of how change in clinical practice actually occurs. Drawing on case study research, examples are given of the functions didactic CME served in the interest of improved practice. Reasons are then explored as to why the contribution of didactic CME is often missed or dismissed. The goal is not to advocate for a return to the status quo ante where lecture-based education is the dominant modality, but rather to acknowledge both the limits and potential of this longstanding approach to delivering continuing education.
Keywords: Medical education · Continuing medical education · Didactic education · Practice change · Physician performance · Assessment · Healthcare outcomes · Evaluation
Much attention has been devoted to the US health system’s failure to consistently provide safe, effective, and high quality care (Institute of Medicine. Committee on Quality of Health Care in America 2001). A major cause for this failure is that what we know often does not readily translate into better performance or patient health outcomes (Straus et al. 2009). Historically, continuing medical education (CME) has been considered a means by which practitioners could acquire up-to-date knowledge and skills in order to provide the best possible health care for their patients (Moore 1998). However, there is skepticism about the role CME can play in improving physician performance (Davis and Galbraith 2009). The sharpest criticism has been reserved for didactic CME, which, following Davis et al., we define as an educational method or activity comprising “predominately lectures or presentations with minimal audience interaction or discussion” (Davis et al. 1999, p. 868). (We will use “didactic” and “formal” interchangeably.) For example, in their systematic review of randomized controlled trials, Davis and colleagues concluded that “where performance change is the immediate goal of a CME activity, the exclusively didactic modality has little or no role to play” (Davis et al. 1999, p. 873). Similarly, Bloom stated that the “CME tools and techniques most commonly used [didactic presentations and print materials] are the least effective ones in helping physicians adapt to new diagnostic and therapeutic interventions” (Bloom 2005, p. 383). Marinopoulos et al. agreed that “interactive techniques seem to be more effective than non-interactive ones” (Marinopoulos et al. 2007, p. 7). 
Didactic CME has been called a highly questionable use of organizational and financial resources, as well as a cause of lost opportunities for physicians who could instead be taking part in “interactive, challenging, and sequenced activities that have increased potential for positively affecting their performance and the health of their patients” (Davis et al. 1999, p. 873). By 1999, some were asking why the medical profession would “persist in delivering such a product and accrediting its consumption” (Davis et al. 1999, p. 873).
To the extent that these findings have encouraged CME professionals to critically re-examine our heavy reliance on didactic CME, this has been a positive development. However, our research is forcing us to reconsider the contribution that didactic modes of continuing education can make to practice change. We have come to believe that the prevailing view—that the value of didactic CME should rest on its capacity to directly produce changes in learner competency and clinical practice—reflects an impoverished view of how change in clinical practice actually occurs and of the many important functions didactic approaches to education can serve in the interest of improving practice. Although our focus here is on CME, our observations raise questions relevant to undergraduate and graduate medical education, for they too have seemingly been captivated by the outcomes movement and competency-based evaluation. The CanMEDS roles (Frank 2005) and the ACGME competencies (Accreditation Council for Graduate Medical Education 2007) alike foster the notion that education should be valued only to the extent that it can be shown to directly result in pre-specified performance outcomes.
In this paper, we argue that the impact of didactic educational methods has been systematically under-recognized. We use examples drawn from our case studies of how clinical teams change their practice to show that formal CME can and does play a role—sometimes a critically important role—in improving practice, but this role is often something other than that of primary, proximal cause of change. We cannot say how often this phenomenon occurs based on the three cases we will describe. Instead, employing the logic of counterargument (Dion 2003), we present these cases as “black swans” (Popper 1959), as evidence of exceptions that challenge the sweeping conclusion that didactic CME is ineffective in changing clinical practice. We then suggest several hypotheses to account for why the field has tended to overlook these contributions and discuss implications for theory and research in medical education.
New insights into the impact of didactic CME
Our belief that the impact of didactic CME is under-recognized emerged from a series of naturalistic, retrospective case studies of how clinical teams in three hospitals in the US made significant improvements in practices aimed at reducing antimicrobial resistance (Olson et al. 2010). This methodology allowed us to pose a fundamentally different question than most CME impact studies and systematic reviews. We asked not, “Is continuing education effective at changing performance or improving patient outcomes?” but rather, “When change in clinical practice is observed, what role, if any, did continuing education play?” Our theoretical framework was Soft Knowledge Systems (SKS) (Engel 1997). The primary intent of the SKS framework is to understand the use of scientific and other types of knowledge by practitioners in order to bring about innovation. SKS emphasizes how actors collect, produce, evaluate, communicate, and synthesize knowledge and information (K&I) to support the change process.
We were intrigued to find that in each of the cases studied, formal, didactic educational activities provided K&I critical to the success of the change effort. Following are examples drawn from each of the three cases.
Case 1: problem prioritization
We’re like, okay, this is seriously an issue, not just unique to our hospital. It sort of just compelled us more to start looking at what do we need to do different. [Physician Champion, Case 1]
Her experience in this didactic CME session led her and her colleagues to launch a two-year effort to bring their resistance rates down.
Case 2: providing part of the solution
There had been some problems with it when it was first launched a few years back. So a lot of the ICUs were hesitant to implement those, but he had heard about and saw its presentation and was interested in presenting that back to us. [Infection Control Professional, Case 2]
Based on this new information, the medical director approved the change. Shortly afterward the VAP rate went to zero and remained there for the 18 months prior to the time of our study, an accomplishment the project team attributed to the adoption of the new endotracheal tube.
Case 3: removing political barriers
I really hadn’t supported her in anything else…. I stood up and made a motion. I said, ‘I make a motion that we accept this scenario and we get started.’ [Senior Surgeon, Case 3]
Moments later, the proposal was approved unanimously—an action heavily influenced by information obtained at a traditional CME activity.
The role of didactic CME
The knowledge and information gleaned from didactic CME activities in these cases did not serve as the primary, proximal cause of change. It had no direct, measurable impact on clinical practice that would be detected by the outcomes measures typically used in CME impact research and evaluation. Nevertheless, these examples demonstrate that didactic CME can and does contribute to practice improvement and, at times, the contribution is critical to the success of efforts to produce the change.
Given that the literature consistently concludes that formal CME has little or no value as a tool for practice change, we found it remarkable that it supplied an important piece of the puzzle in each of the three cases. More remarkable yet, we could have presented several additional examples from our cases. These findings present a very different picture of the value of didactic CME than the reviews of the research. How can we account for the difference? Why do we miss, or perhaps dismiss, contributions of didactic CME such as those illustrated by our examples?
Why do we miss these contributions to change?
Our hypothesis is that as a field, we have a blind spot when it comes to recognizing the contribution of formal CME to practice change and as a result, it is undervalued as a tool for effecting change. We have this blind spot as the result of certain beliefs and practices that have a strong hold on the continuing education field. While an extensive analysis is beyond the scope of this paper, we would like to outline why we think the contributions of formal CME are consistently overlooked by considering three interrelated aspects of CME research and practice: (1) the dominant outcomes framework used to assess effectiveness of educational methods, (2) the prevailing theories of clinical practice change, and (3) research methods used for effectiveness studies.
The CME evaluation framework
Moore et al.’s expanded outcomes framework (used with permission)
Level 1 (Participation): The number of physicians and others who participated in the CME activity.
Level 2 (Satisfaction): The degree to which the expectations of the participants about the setting and delivery of the CME activity were met. Source of data: questionnaires completed by attendees after a CME activity.
Level 3A (Learning: declarative knowledge): The degree to which participants state what the CME activity intended them to know. Sources of data: pre- and posttests of knowledge (objective); self-report of knowledge gain (subjective).
Level 3B (Learning: procedural knowledge): The degree to which participants state how to do what the CME activity intended them to know how to do. Sources of data: pre- and posttests of knowledge (objective); self-report of knowledge gain (subjective).
Level 4 (Competence): The degree to which participants show in an educational setting how to do what the CME activity intended them to be able to do. Sources of data: observations in an educational setting (objective); self-report of competence and intention to change (subjective).
Level 5 (Performance): The degree to which participants do what the CME activity intended them to be able to do in their practices. Sources of data: observations of performance in the patient care setting, patient charts, and administrative databases (objective); self-report of performance (subjective).
Level 6 (Patient health): The degree to which the health status of patients improves due to changes in the practice behavior of participants. Sources of data: health status measures recorded in patient charts or administrative databases (objective); patient self-report of health status (subjective).
Level 7 (Community health): The degree to which the health status of a community of patients changes due to changes in the practice behavior of participants. Sources of data: epidemiological data and reports (objective); community self-report (subjective).
Moore’s framework has both a descriptive and a normative aspect. It is descriptive in the sense that, although it is not identified as such, it is effectively a theory of causality (Pawson and Tilley 1997). By that we mean it describes a sequence of outcomes representing the mechanism by which an educational activity is expected to produce change in clinical practice, and ultimately in patient and population health. It is normative in the sense that it provides a hierarchy of values for ascribing worth to CME activities. That is, an activity that produces satisfaction but not learning is not valued as highly as one that produces satisfaction and learning. It is also normative in the sense that it suggests planners of CME activities should aspire to do better than simply produce learning and that the ultimate goal should be to improve patient and population health through fostering improvements in practice. This framework has, we believe, encouraged the practice of pegging the value of an educational method or activity directly to its impact on practice change.
It is difficult to overstate the extent to which the assumption that the value of CME at all levels (method, activity, and program) depends on its demonstrated capacity to directly result in practice change has gained acceptance in the field. It has become “enshrined” in the accreditation requirements of the Accreditation Council for Continuing Medical Education (Davis and Galbraith 2009). The Institute of Medicine report on the future of CME (citing the Macy Foundation report [Hager et al. 2007] on continuing education in the health professions) states “An effective CE method is now understood to be one that has enhanced provider performance and thus improved patient outcomes” (Institute of Medicine. Committee on Planning a Continuing Health Care Professional Education Institute 2010, p. 35, italics added).
We certainly agree with those who argue that the ultimate purpose of medical education should be to improve the health of patients or a given population (Kern et al. 2009). However, this model becomes problematic if taken to mean that practice change, as a direct result of education, should be the sole or primary criterion used to evaluate the impact of individual educational activities, much less particular educational methods. Our cases illustrate why.
The underlying theory of practice change
The theory of change implicit in Moore’s framework describes only one of several possible pathways to change. There are undoubtedly times when CME activities do function in the stepwise, direct manner described in the framework. However, as our case examples show, at other times CME activities are but one element in a complex picture, and those activities can serve multiple, important functions.
It focuses on the individual learner. Like most approaches to improving clinical practice through education (Cervero 2003), it makes individuals the focus of change. However, physicians do not practice as solitary actors (Smith and Schmitz 2004). Instead, practice is a social act that takes place in highly complex systems (Heffner 2001). Models focusing on individuals lead to a lack of attention to the social and political aspects of change and the ways in which K&I can be used to address them.
It is largely ahistorical and acontextual. It assumes that the educational activity is the proximal and primary cause of practice change, leaving it unable to adequately account for the temporal, organizational, and social contexts that learners bring with them to the activity. Our case examples show that change is sometimes less an event than an ongoing activity, the product of a host of causal factors or “collection of forces” (Fox and Bennett 1998) aligning for action and impinging at different times.
It gives little attention to the agency of learners. Change is seen as a process driven by educators, not by actor decisions about what problems should be given priority. It treats learners as largely reactive recipients of information instead of proactive agents seeking K&I and other resources needed to advance their agendas.
Research methods used for effectiveness studies
The conclusion that formal, didactic CME has little or no effect on physician practice was based on systematic reviews of randomized controlled trials (Davis et al. 1999; Bloom 2005; Marinopoulos et al. 2007; Haynes et al. 1984; Davis et al. 1992; Oxman et al. 1995; Thomson O’Brien et al. 2001). We hypothesize that our methods for assessing the impact of didactic CME further blind us to some of its contributions.
Experiments conducted to study the effectiveness of educational methods typically examine the causal relationship between an intervention and a narrow range of outcomes. The purpose is to estimate the marginal impact of the intervention on the outcome variable of interest. The approach assumes that the endpoint is easily identified, objectified, and measured (Norman 2010). However, our study shows that the range of important contributions that education can make to successful change in practice can be diverse, difficult to anticipate, and separated from the educational intervention by months or even years.
Furthermore, RCTs demand theories of causation that are simplifications of the multidimensional and dynamic theories needed to more fully capture the realities of how practice change actually occurs. RCTs focus on theorized causal linkages that are bivariate and unidirectional (Cook and Payne 2002). RCTs examining the impact of CME on practice are typically “black box” studies, giving little or no attention to the mechanism by which the education is expected to produce the change in practice, much less collecting data to confirm that the supposed mechanism is borne out by the evidence. As such, RCTs are often predicated on theories of causation that are underspecified and highly simplified.
Having an empirically grounded, rich understanding of the contributions that didactic CME can and does make to practice change is fundamental to knowing how to effectively employ didactic CME as an educational method. We have a body of empirical evidence and theory that gives us insight into how individual physicians learn and change their practice (Fox et al. 1989; Slotnick 1999), and this body of work addresses to some degree how didactic CME contributes to the process. However, this work has not systematically explored the contributions of didactic CME, and we are only beginning to understand how clinical teams learn and change their clinical practice.
This state of affairs leads us to conclude that more research is needed on the phenomenology and “natural history” of change. Schön made this point, advocating that we start by asking not how we can make better use of research-based knowledge, but rather what we can learn from a careful examination of high-performing individuals (Schön 1987). To this we would add “high-performing teams.”
We believe that as we develop a better understanding of how practice change actually occurs, we will become more acutely aware of the limitations of mechanistic input–output models that construe educational methods as techniques for producing change. Organic metaphors will be needed to more fully capture the complexities involved. One promising example of an organic metaphor is knowledge ecosystem, a construct used in the field of strategic management that conceptualizes innovation and change as the product of the synthesis of existing local knowledge with that drawn from an external knowledge reservoir (Nonaka et al. 2008). Complex adaptive systems is another conceptual lens with potential for helping us see beyond mechanistic approaches to improving health care through continuing education (Plsek and Greenhalgh 2001).
We also need to critically re-examine the criteria by which didactic CME is evaluated. To say that didactic CME does not directly and immediately lead to change in clinical competency or practice is not the same as saying it has no value. Pegging the value of an educational method directly to practice change and patient outcomes overlooks a broad range of potentially valuable, even critical contributions the method can make to professional development and practice change.
We need to acknowledge the limits of RCTs and adopt strategies to make them more useful. For example, experimenters can open up the “black box” and articulate the mechanism by which the treatment is expected to produce the anticipated outcome and strive, in the words of Cook and Payne, “to improve the explanatory yield of their work by adding more measures about possible intervening processes” (Cook and Payne 2002, p. 154). Mixed methods research, combining qualitative methods with an RCT, is another potential strategy.
Finally, while we believe that formal, didactic educational methods can play an important role in facilitating change in clinical practice, we want to state unequivocally that we are not advocating for a return to the status quo ante, in which didactic CME is the dominant modality. Instead, we believe that as medical educators we must move beyond thinking in terms of isolated educational methods and activities to a perspective that incorporates strategic programs of action, in which a portfolio of methods and activities is deployed, each designed to serve specific purposes as part of a larger plan for enhancing competency, clinical practice, patient outcomes, and population health. Within such a framework, not all components would be expected to function as the direct, proximal cause of change, but rather would be expected to make clearly defined contributions to the overall goal of excellence in patient care.
The case studies research described in this article was funded in part by an unrestricted educational grant from Wyeth Pharmaceuticals; this publication was supported by grant 1UL1RR025011 from the Clinical and Translational Science Award (CTSA) program of the National Center for Research Resources, National Institutes of Health.
The case studies were reviewed and determined to be exempt by the Health Sciences Institutional Review Board of the University of Wisconsin-Madison.