
Implementation Science and Comparative Effectiveness Research

  • Ann C. Bonham
  • Mildred Z. Solomon
  • Brian Mittman
  • Alexander K. Ommaya
  • Anne Berlin
Living reference work entry
Part of the Health Services Research book series (HEALTHSR)

Abstract

The resurgence of interest in comparative effectiveness research (CER) in the last decade has trained a bright spotlight on ensuring that clinical findings obtained from CER are disseminated and implemented in routine practice so that all patients and populations benefit. The focus on implementation science as part of CER reflects the collective realization that findings from clinical studies have not uniformly resulted in changes in the practices of health care providers or patients, nor have they always yielded improvements in health outcomes. Implementation science, as defined by the journal that bears its name, is "the scientific study of methods to promote the systematic uptake of proven clinical treatments, practices, organizational and management interventions into routine practice, and hence to improve health." The field has evolved as a multidisciplinary science, drawing principles from the behavioral and social sciences, process engineering, economics, and traditional health services research. Parallel to this evolution, new methodologies and evaluation approaches have emerged to track the processes, organizational contexts, and other elements that contribute to the successful implementation of CER findings. Embedding implementation research into CER starts with strong multidisciplinary teams – from institutional leadership to frontline care providers – to bridge the gap between research and operations; it then depends on organizational receptivity, appropriate infrastructure, and project-specific researcher-clinician partnerships. Governmental agencies around the world are already using forms of implementation science to inform health care; in the United States, the 2010 passage of health care reform legislation offers an unprecedented opportunity to make implementation science part and parcel of clinical practice.

This chapter brings together a brief history of implementation science and CER with a discussion of the current political and economic context, an overview of the major funders in this space, and the myriad evaluation frameworks and other conceptual models for the successful uptake of evidence into practice. Readers will also find a treatment of the ethics associated with research in this field, and a consideration of the state of the research workforce, followed by recommendations for the future.

Keywords

Knowledge Translation · Implementation Research · Comparative Effectiveness Research · Implementation Science · Comparative Effectiveness Research Study

Introduction

Comparative effectiveness research (CER) has, in the last half decade, witnessed a surge of interest on the part of federal policy-makers, health services researchers, clinicians, patient organizations, and the public. The 2012 Supreme Court decision to uphold the Patient Protection and Affordable Care Act, often referred to as the Affordable Care Act (ACA), and with it the authorization of the Patient-Centered Outcomes Research Institute (PCORI), reaffirms the national commitment of policy-makers to CER (Bonham and Solomon 2010). PCORI, as mandated by statute, proposed national priorities for patient-centered CER and identified communication and dissemination as one of the five research priorities (PCORI 2012).

The focus on implementation science as a component of CER reflects the collective sobering realization that findings from clinical studies have not uniformly resulted in changes in the behavior or practices of health care systems, providers, or by extension, patients, nor have they always yielded timely or anticipated improvements in health outcomes (Kohn et al. 2000; IOM 2001; Kalfoglou et al. 2001; Lenfant 2003; McGlynn et al. 2003; Sung et al. 2003; Woolf and Johnson 2005; Zerhouni 2005; Proctor et al. 2007). The sluggish timeline from discovery to implementation is not a new phenomenon; examples are many and are striking in their implications for efforts to improve health and health care.

The Beta-Blocker Heart Attack Trial (BHAT), an NIH-funded, multicenter clinical trial conducted in the early 1980s, is an oft-cited example and continues to cast a long shadow, illustrating what happens without implementation research. The BHAT showed markedly increased survival rates for patients treated with beta-blockers following acute myocardial infarction. The differences between the beta-blocker group and the control arm were evident within 27 months of the study's start, and consequently the trial was stopped early for ethical reasons. The results were widely (and passively) distributed through peer-reviewed journals and through presentations by opinion leaders at national conferences. Yet, by the early 1990s, the use of beta-blockers for such patients was still not routine practice; it took another 10–15 years for that change to happen. It is daunting to imagine the lives that might have been saved or extended had the change taken just 5 years.

Another example of a large national economic investment in a clinical trial, in which benefits that were not fully realized for years could have been accelerated with an evidence-based implementation strategy, comes from the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). ALLHAT was a large study that began in 1994 and lasted 8 years (NHLBI). One part of the study compared the effectiveness of medications to treat individuals with stage 1 or 2 hypertension who had an additional cardiovascular disease risk factor. The ALLHAT results indicated that thiazide-type diuretics were as effective as or more effective than newer drugs (calcium channel blockers, ACE inhibitors, and alpha-adrenergic blockers) in lowering high blood pressure and preventing one or more forms of cardiovascular disease in some patients. However, uptake of this evidence was slow, and the proportion of patients treated for hypertension with thiazide-type diuretics remained constant at a disappointingly low level – roughly 12 % usage 5 years after publication of the results (Stafford et al. 2010). At the conclusion of the ALLHAT trial, a multicomponent, joint dissemination project of ALLHAT and the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC7) was implemented. The ALLHAT/JNC7 Dissemination Project focused on academic detailing, in which physicians approached colleagues regarding blood pressure management, along with educational activities, to effect changes in physician behavior. The results demonstrate both the challenges and the fundamental necessity of investing in implementation research: the dissemination project involved 18,000 participants in 41 states and still was associated with only a small increase in thiazide-type diuretic use.

Recent work by Kaiser Permanente of Northern California (KPNC) provides a positive example of the effectiveness of implementation in translating evidence into practice, demonstrating how a systematic implementation strategy can move the needle in changing practice. As part of a comprehensive program focused on primary, secondary, and tertiary prevention for cardiac care management, KPNC created an implementation program to improve control of hypertension by providing data to providers and implementing system changes. Figure 1 tracks the timeline from when thiazides were first found to be useful in treating high blood pressure, to the JNC report released in 1977, to 1995, when KPNC issued its first practice guideline for treating hypertension to network clinicians. Note the relatively flat slope reflecting the percentage of patients with controlled hypertension between 1960 and 2000. In 2001, Kaiser launched an implementation initiative, which included (1) increasing the number of patients with hypertension in its registry and (2) providing individual and aggregate health outcomes data to clinicians to demonstrate the positive results of adhering to the practice guidelines. By 2012, 87 % of patients within KPNC's hypertension registry had controlled hypertension. This initiative powerfully illustrates what can happen when implementation research is embedded in CER.
Fig. 1

Transformation – the importance of data and systems (source: Kaiser Permanente, Northern California, reused with permission)

These examples set the stage for demonstrating the importance of implementation science in ensuring that valuable clinical advances do not continue to be clouded or stymied by the absence of rigorous methodologies for adoption and dissemination.

Development of the Field

While there is a growing body of literature on the value of implementation research in the context of CER, the field is still coalescing around core conceptual and theoretical frameworks, a common vocabulary, and operational best practices. As it stands, the field is working to engage with the full range of available research approaches and methodologies and with the types of workforce training needed to embed implementation research into CER. Understanding the role of implementation research in CER first requires a step back to look at the development of the field of implementation science as a whole. The evolution of implementation science as a foundation for implementation research has been described eloquently and in detail by Mittman (2012); the following draws from that work to highlight pathways for the continued development of implementation research in the context of CER.

A Brief History of Implementation Science

Implementation science has evolved as a multidisciplinary field of inquiry. As defined in the journal Implementation Science, it is "the scientific study of methods to promote the systematic uptake of proven clinical treatments, practices, organizational and management interventions into routine practice, and hence to improve health." It draws from psychology, the behavioral and social sciences, economics, systems engineering, communications, health services research, and other fields, and it calls on new methodologies and analyses of systems of clinical care.

Historically, interest in how medical knowledge could be implemented in practice to improve patient care focused initially on changing physician behavior. Implementation researchers in the early years – the 1960s, 1970s, and 1980s – studied how to change physician knowledge and hence practice. Early work demonstrated that passively and unsystematically disseminating information through reports and oral presentations did not result in widespread changes in practice or behavior. Researchers therefore began to study the role of professional norms and social influences, using opinion leaders and academic detailing as tools to influence physician behavior and, ultimately, patient outcomes.

Contemporaneously, research documenting variations in quality led to the concept of measuring the quality of care. Interest in developing strategies for minimizing undesirable variations in care led to the introduction of clinical practice guidelines and practice parameters, both intended to serve as benchmarks for quality of care across environments (Tricoci et al. 2009; Lee and Vielemeyer 2011). As interest in the quality of care matured, the field of implementation research shifted its focus to organizations and to the analysis of structures and policies that could lead to institution-wide adoption of clinical guidelines and quality improvement. Theories of individual decision-making and behavior change were supplemented by theories drawn from management research and organizational behavior.

Today, implementation scientists use multiple theories from the behavioral sciences – including psychological and organizational theories – as conceptual frameworks for the practice of implementation research (Rycroft-Malone et al. 2002). From the practical perspective of embedding implementation research into CER, understanding the organizational social context of the institution – which includes the norms and expectations the organization holds for individual care providers and researchers, as well as the attributes and attitudes of the care providers and researchers themselves – is crucial to building capacity and support for implementation research programs. Systems-based thinking and cultural analysis at both the organizational and unit levels have been valuable in identifying those elements within an organization that are subject to the influences of collective action. Through process evaluation, the interrelationships among individuals, among groups of individuals, and between groups and the organization as a whole have come to be understood as critical factors in the success or failure of integrating quality improvement activities into a delivery environment.

Toward a Common Lexicon

As the field of implementation research has evolved, so has the terminology. As noted earlier, the journal Implementation Science describes implementation research in the context of health care as the study of influences on patients, health care professionals, and organizational behavior in either health care or population settings. Work on quality improvement led to the term "quality improvement research," which some use interchangeably with "implementation research." Meanwhile, the US Centers for Disease Control and Prevention distinguishes implementation research, defined as the "systematic study of how a specific set of activities and designed strategies are used to successfully integrate an evidence-based intervention within specific settings (e.g., primary care clinic, community center, school)," from diffusion research, the "systematic study of the factors necessary for successful adoption by stakeholders and the targeted population of an evidence-based intervention that results in widespread use and specifically includes the uptake of new practices or the penetration of broad-scale recommendations through dissemination and implementation efforts, marketing, laws and regulations, systems-research, and policies."

Proctor and colleagues (2007) have called for standardized definitions while providing their own descriptions of the terms dissemination ("The targeted distribution of information and intervention materials to a specific public health or clinical practice audience") and implementation ("the use of strategies to introduce or change evidence-based health interventions within specific settings"). Greenhalgh et al.'s (2004) synthesis of the literature on diffusion of innovations further distinguishes diffusion as passive spread, dissemination as active and planned efforts to persuade target groups to adopt an innovation, and implementation as active and planned efforts to mainstream an innovation within an organization. The term translational research, traditionally applied to bench-to-bedside research, has likewise been applied to implementing and disseminating the results of clinical research into routine practice.

In Canada, the term commonly used is knowledge translation, which is described as, “a dynamic and iterative process that includes synthesis, dissemination, exchange and ethically sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system.” Kitson and Strauss (2010) use the phrase knowledge-to-action cycle, pointing out that it is teeming with gaps that can appear in innumerable ways throughout the process and that identifying and measuring the gaps is a complex and necessary first step in knowledge translation.

Under the most widely used definitions, "implementation research" applies rigorous scientific methodology to build an evidence base for improving the processes that ensure the adoption, fidelity, and dissemination of clinical knowledge, changing practices to effect better outcomes for patients (Lenfant 2003; Woolf 2008; Schillinger 2010). For the purposes of this chapter, the term implementation research is used interchangeably with knowledge translation, and both dissemination and diffusion are treated as part of the implementation chain of events.

Governmental Initiatives in Implementation Research

Government agencies around the world have taken steps to embed implementation research into care delivery. Here are three examples.

The United Kingdom, through the National Health Service (NHS), established a guidance body known as the National Institute for Health and Clinical Excellence (NICE) in 1999 to address concerns over nationwide disparities in health care accessibility and treatment (NICE 2005). NICE sought to organize and disseminate knowledge of best practices in health delivery and care with the aim of bridging these disparities and to encourage public input on proposed guidelines. NICE established a library of disease- and condition-specific guidelines and quality standards based on case studies and research reviews. Its information repository, known as NHS Evidence, forms the basis for the guidelines. NICE does not directly embed its experts into health care settings; rather, it acts as a non-prescriptive advice resource. Recommendations and "do not do" advisories are communicated as statements backed by reference material. NICE's general implementation guide, entitled How to Put NICE Guidance into Practice, includes a series of tools for cost management and evaluating progress, along with instructions on how to contribute to and use the NHS's database. The implementers can range from clinical managers to local NHS commissioners (who purchase health care goods and services for a locality) and clinicians. As a conduit for the implementation of best practices in health care, NICE guidelines have seen successes and failures. For example, a study on brain cancer found that the systematic implementation of NICE guidelines in one setting improved access to care, led to more MRIs, decreased hospital stays, and yielded survival rates comparable to those seen in clinical trials (Colditz et al. 2012). At the same time, a 2010 review of staff practices from four community mental health teams found that NICE guidelines on the psychological treatment of depression were understood and followed inconsistently; some staff reported a lack of available resources and social barriers to implementation.

Canada has integrated efforts to bridge the research-to-practice gap under a discipline known as knowledge translation, described earlier in the chapter. Knowledge translation is fully defined as a "…process that includes synthesis, dissemination, exchange and ethically-sound application of knowledge to improve the health of Canadians, provide more effective health services and products and strengthen the health care system." As a research tool, knowledge translation science is "the scientific study of methods for closing the knowledge-to-practice gap, and of the barriers and facilitators inherent in this process" (CIHR 2008). Significantly, knowledge translation is a key aspect of the legal mandate underpinning the Canadian Institutes of Health Research (CIHR), the body charged with serving as the nation's standard-bearer of health care research and implementation. CIHR works with a group of universities and hospital centers across Canada to systematically disseminate knowledge translation findings through seminars, journal articles, information clearinghouses, and training modules. This systematic knowledge translation has been applied in a variety of forms across disciplines, institutions, and Canadian provinces. For example, in 2004 and 2005, a group of provincial health and medical associations in Alberta redesigned the process of care for hip and knee surgery and launched a pilot project to rigorously compare the new approach to conventional practice. Hospital stays were reduced by an average of a day and a half, waiting times to see a surgeon dropped from 145 working days to 21 on average, and patients and health care providers expressed greater satisfaction with their experiences. The results were used to change policy: Alberta Health and Wellness announced plans to adopt the program province-wide, although the researchers acknowledge that implementing this knowledge translation success story on a larger scale will be more difficult.

A third major national initiative that propelled the field is the Quality Enhancement Research Initiative (QUERI) program established by the US Veterans Health Administration (VHA). In 1998, the VHA launched QUERI as part of a system-wide transformation aimed at improving the quality of health care for veterans by using research evidence to improve clinical practice (VHA 2009). QUERI groups were formed around high-risk or highly prevalent diseases or conditions marked as priorities for improvement; examples include the Diabetes QUERI, the HIV/Hepatitis QUERI, and the Mental Health QUERI. Each QUERI group embeds researchers in clinical settings and fosters collaboration among implementation experts, clinicians, and management staff to arrive at both best practices and best methods for evaluating the success of each project. The successes and failures, amply chronicled in research publications, have provided an invaluable blueprint for implementation science researchers and research funders. QUERI researchers have published a six-step implementation guide.

The achievements of the HIV/Hepatitis QUERI illustrate how these steps operate. In step one, QUERI groups identify high-priority clinical conditions and opportunities for improving care; the HIV/Hepatitis QUERI set out to identify and improve timely testing, diagnosis, and treatment of HIV. In step two, QUERI teams identify effective practices for improving outcomes; the HIV/Hepatitis QUERI conducted meta-analyses of antiretroviral drug trials to identify best practices in care. In step three, QUERI teams identify variations in practices and gaps in quality and performance; here, the HIV/Hepatitis QUERI, recognizing gaps in screening, set out to analyze the cost-effectiveness of creating a screening program. Step four involves testing the implementation of improvement programs; the HIV/Hepatitis QUERI conducted an implementation trial of clinical reminders for HIV patients. In step five, QUERI teams focus on the adoption and spread of best practices by evaluating the feasibility, adoption, and impact of coordinated improvement programs, including their impact on patients. In step six, QUERI teams assess the impact of the improvement programs on patients' health outcomes and quality of life.

US Political and Economic Environment and Implications for Integrating Implementation Research in CER

The current political and economic environment has in many ways ushered the field of implementation research to center stage in the US health research arena – through the passage of the ACA, the increasing complexity and cost of the US health care system, increasing calls for dissemination of the outcomes of clinical research, and increasing engagement of patient organizations in assessing the value of medical research. As a result, there is growing attention to the notion of learning health systems and an attendant growth in funding streams, with particular emphasis on implementation science around CER within delivery systems.

Center for Medicare and Medicaid Innovation (CMMI)

The ACA paved the way for new approaches to care that elevate the importance of CER and implementation research. It also authorized new forms of government support to build capacity for the conduct of important CER and implementation research. For instance, the ACA created the Center for Medicare and Medicaid Innovation (CMMI), which is charged with developing and testing new payment and service delivery approaches to reduce Medicare, Medicaid, and CHIP spending while improving the experience of care and overall population health. One well-known CMMI delivery model innovation is the Accountable Care Organization (ACO), in which provider organizations share risk for a defined population and take responsibility for the quality of care they provide. In this model, savings achieved through better population health management are shared with Medicare, which may help incentivize faster knowledge translation and will certainly require implementation research to track results. Other CMMI-initiated demonstration projects include bundled payments for care improvement, financial incentives for medical home-based primary care teams, and hospital payment reductions for potentially preventable Medicare readmissions.

Formally established in November 2010, the CMMI is working with an estimated $10 billion budget over 10 years to seed these demonstration projects within health systems. Uniquely, the ACA grants the US Secretary of Health and Human Services direct authority to rapidly scale proven interventions to the national level. Because of the compressed time frame and high expectations for a return on investment from CMMI demonstration projects, a dedicated team of intramural and contract researchers has been charged with rapid-cycle evaluation – a complex undertaking that involves determining which interventions work in what contexts while disentangling the outcomes of potentially overlapping interventions in a climate of massive, generalized health delivery system change (Shrank 2013). CMMI expects to launch additional projects each year, including data sharing, implementation assistance, learning and diffusion, and evaluation. CMMI's growing emphasis on providing an evidence base for new models of care delivery may serve as an enduring model for embedding implementation research into clinical effectiveness research.

The ACA also introduced new funding streams for research on public health services and systems to more rigorously evaluate effective prevention practices; for health system delivery improvements and process changes that affect quality, safety, and efficiency; and for dissemination research to make patient outcomes at different clinical settings more transparent to consumers.

PCORI

By establishing PCORI, the ACA also authorized the PCORI trust fund, which is expected to receive a total of $3.5 billion by the end of fiscal year 2018 through a combination of treasury appropriations and contributions from the Medicare Trust Fund and private health insurers. Though intentionally conceived as a nongovernmental organization, making it potentially more nimble and innovative, PCORI collaborates closely with existing federal sponsors of CER, including the National Institutes of Health (NIH), the Agency for Healthcare Research and Quality (AHRQ), and the Veterans Affairs Administration (VA), as well as with industry (Washington and Lipstein 2011). While CMMI has an explicit charge to reduce the cost of care, PCORI is prohibited from expressly considering cost, given the sensitivities around perceptions of care rationing that emerged during the legislative process. PCORI's charge is to support research and generate information that will "assist patients, clinicians, purchasers, and policy-makers in making informed health decisions by advancing the quality and relevance of evidence concerning the manner in which diseases, disorders, and other health conditions can effectively and appropriately be prevented, diagnosed, treated, monitored, and managed through research and evidence synthesis" (Patient Protection and Affordable Care Act, Pub L No. 111–148, 124 Stat 727, §6301). The PCORI mission statement demonstrates an explicit commitment to the engagement of patients, caregivers, and the broader health care community in the organization's agenda setting and research funding process. The research priorities developed through this stakeholder-engaged process demonstrate the promise the new organization holds for significantly advancing the conduct of implementation science on CER. The priorities include comparing the effectiveness and safety of prevention, diagnosis, and treatment options; comparing health system-level approaches to patient-centered care; addressing disparities; accelerating CER methodological research; and, notably, comparing approaches to providing CER evidence, speeding dissemination, and supporting shared decision-making between patients and their providers.

In 2013, PCORI began to focus on large-scale networks to conduct CER, issuing national calls for proposals to use electronic data infrastructure to build two kinds of major research networks and committing $72 million to facilitate the development of national research data networks for CER. The PCORI Clinical Data Research Network (CDRN) has been established to develop the capacity to conduct randomized CER studies using data from clinical practice in a large, defined population. The CDRNs will access data on at least a million continuously eligible lives – data that will include the full spectrum of administrative claims (i.e., inpatient, outpatient, physician, hospital, drugs, and eligibility), biometrics, laboratory results, patient satisfaction, and other survey measures. A similar network pioneered by PCORI, the Patient-Powered Clinical Research Network, funds a subset of such large-scale research initiatives driven by patients who desire an active role in CER on issues important to them. In both networks, the success of the investment will be judged by the effectiveness of implementing and disseminating the results.

These large-scale networks underscore the national trend in the United States toward focusing resources on multisite, coordinated CER studies. The networks provide a potential platform for understanding health and health care in real-world conditions through CER, and they will allow unprecedented speed in developing and testing new hypotheses and predictive models and in implementing the results. The history of slow and modest change in practice in the wake of fairly strong results and guidelines from large-scale clinical trials points to the critical need for implementation science to ensure that results are translated into routine practice.

Agency for Healthcare Research and Quality (AHRQ)

The ACA also engaged AHRQ to broadly disseminate the findings of PCORI-funded research. Previously, the American Recovery and Reinvestment Act of 2009 (ARRA) provided AHRQ with a one-time infusion of funds to boost capacity for research on implementing delivery system change. Both serve to further demonstrate the extent of the US policy shift toward embracing implementation science to optimize the design, uptake, and spread of federally funded comparative effectiveness research. Traditionally, AHRQ has been an important source of funding and expertise for health services research, with an annual budget of approximately $400 million, 80 % of which flows to extramural grants and contracts supporting research on the quality, safety, efficiency, and effectiveness of health care.

NIH Clinical and Translational Science Awards (CTSA)

In 2006, the NIH, noting the lag between research and health outcomes, launched a major new initiative to speed the translation of research into improvements in clinical treatment through the CTSA program (National Research Council 2013). As outlined earlier, translational research in the conventional sense refers to the bench-to-bedside conversion. While the goal of the CTSAs was to transform how resources are utilized and coordinated to identify promising therapeutics and interventions and move them forward rapidly, centralizing investments in the drivers of translational research – core facilities, biomedical informatics expertise, research design, regulatory knowledge, ethics compliance, and patient, community, and clinical engagement resources – makes the CTSAs de facto intellectual and physical hotbeds for innovations in implementation science.

Between 2006 and 2011, the CTSAs were funded through the National Center for Research Resources. With the Supreme Court's upholding of the ACA, the NIH National Center for Advancing Translational Sciences (NCATS) was authorized to oversee and expand the CTSA program. What began as 12 sites has grown to encompass 61 sites. In 2012 alone, the program's budget was $461 million, with individual sites funded through 5-year cooperative agreements of between $5 and $23 million per year, representing a hefty investment in this critical domain. Given the size and import of this new endeavor, in 2012 the NIH contracted with the Institute of Medicine (IOM) to provide an independent appraisal of the program and make recommendations for improvements. Of particular relevance to CER and implementation science, the IOM Review Committee recommended increased NCATS leadership to ensure that the CTSA Program as a whole actively supports the full spectrum of clinical and translational research, including the translation and dissemination of innovative methodologies and research findings.

NIH Health Care Systems (HCS) Collaboratory

Although not directly mandated by the ACA, in late 2012, the NIH announced a new effort titled the HCS Collaboratory to bring health care delivery organizations into the research arena to engage in large-scale studies. The initial announcement was accompanied by a grant for a coordinating center, along with support for seven demonstration projects that are emblematic of the next generation of implementation science in CER, investigating subjects such as the most effective time to take antihypertensive drugs and best practices to provide safe, cost-effective chronic pain management within primary care settings. In 2013, additional funding was announced to support research on the management of patients with multiple chronic conditions, calling for applicants with combined expertise on prevention, diagnosis, and treatment as well as pragmatic trial design, implementation, analysis, and data management. In all, the NIH has dedicated up to $20 million through 2016 to bring attention and capacity to this type of research.

The HCS Collaboratory, along with other new research partnerships, represents the rising recognition that research based on large, complex electronic datasets requires pooling health IT resources and expertise, streamlined review processes, and more flexible research designs (Riley et al. 2013). Such large-scale partnerships enable a shared learning curve on best practices for implementing and evaluating the effectiveness of interventions, and hold great promise as a source of effective methodological approaches for the entire field.

The visibility and importance of implementation research at the NIH are further demonstrated by the formation of a separate study section on Dissemination and Implementation Research in Health within the NIH Center for Scientific Review and by the 2007 introduction of an annual Conference on the Science of Dissemination and Implementation (NIH, Dissemination and Implementation Science). However, implementation science at the NIH is not limited to the study section; various institutes have their own implementation and dissemination research initiatives. As one example, the National Cancer Institute has made implementation science a priority, with specific goals of coordinating and increasing the implementation science initiatives funded by the Institute.

Veterans Health Administration

The Veterans Health Administration has also entered this space with its own approach to enhancing the integration between health systems and research, called Collaborative Research to Enhance and Advance Transformation and Excellence (CREATE). Under the CREATE mechanism, researchers, clinicians, and health system leadership collaborate on projects from beginning to end: from identification of the research questions to implementation of the results. The hope is that this collaborative engagement will lead to enhanced implementation of the results and a speedier implementation process. CREATE's review process focuses not only on the quality of the proposed projects but also on the engagement and depth of the collaborative relationship. So far, the VHA has supported 10 of these collaborative arrangements, to the tune of over $10 million per year for up to 5 years.

Implementation Science in an Era of Rising Health Care Costs

Rounding out the current context for implementation science and comparative effectiveness research is a general climate of fiscal austerity. In the United States, the current political environment is marked by an intense focus on the budget deficit, which currently stands at over $850 billion, and by several legislative controls introduced to reduce it, namely, budget caps and sequestration. As health care demands an increasing share of federal and state budgets, there is also concern over future health care spending growth (Weiss 2007). Health care costs as a percentage of GDP are expected to grow from 17.2 % to 19.3 % by 2023 (CMS 2013). In this context, future funding for research is in jeopardy, and competition for resources in the research arena will become even fiercer. The US Congress is increasingly calling for evidence and research outcomes that are efficient and that are disseminated in ways that show improvements in health across the nation. As Chambers deftly articulated in the foreword to Dissemination and Implementation Research in Health, a comprehensive primer from 2012, "we can see a dramatic shift from an era in which dissemination and implementation processes were considered to be beyond science to the current state, in which diverse scientists, practitioners and policymakers are actively pursuing knowledge on how to accrue the most public health benefit from our scientific discoveries."

Moving from Theoretical to Practical

The “how-to” of implementation has preoccupied implementation researchers from the start, and the discipline has witnessed the development of a variety of frameworks to facilitate both planning and execution.

The Promoting Action on Research Implementation in Health Services (PARiHS) Framework

Among the more studied frameworks is the PARiHS framework, which posits that the ease with which research can be put into practice depends primarily on three components: evidence, context, and facilitation. Each component exists along a continuum: the "higher" each component is, the fewer the barriers. An intervention that is "high" on the evidence continuum is not only scientifically robust but also enjoys acceptance among providers and patients. Facilitation refers to people; "high" facilitation indicates that stakeholders are supportive of the change. Context refers to the arena in which evidence is translated into practice; a "high" context environment is one with effective leadership enabling change and a climate conducive to constant feedback and self-assessment.
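To make the three PARiHS components concrete, the sketch below scores a hypothetical practice change on each continuum and flags the "low" components as likely barriers. The scores, the 0.5 threshold, and the field names are illustrative assumptions added for this example; they are not part of the published framework.

```python
from dataclasses import dataclass

@dataclass
class PARiHSAssessment:
    """Hypothetical 0-1 ratings for the three PARiHS components."""
    evidence: float      # scientific robustness plus provider/patient acceptance
    context: float       # leadership, feedback culture, climate for change
    facilitation: float  # stakeholder support for making the change happen

    def likely_barriers(self, threshold=0.5):
        """Return the components rated toward the 'low' end of their continuum."""
        ratings = {
            "evidence": self.evidence,
            "context": self.context,
            "facilitation": self.facilitation,
        }
        return [name for name, score in ratings.items() if score < threshold]

# Example: strong evidence and context, but weak facilitation.
assessment = PARiHSAssessment(evidence=0.9, context=0.7, facilitation=0.3)
print(assessment.likely_barriers())  # ['facilitation']
```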

Kitson and colleagues reviewed the PARiHS model with an eye toward conceptual challenges, suggesting steps to refine the model and to test its face and construct validity (Kitson et al. 2008). Empirical case studies were then conducted to identify the factors that practitioners assigned the highest priority in enabling implementation of evidence into practice.

Moving from discussions of theory and conceptual frameworks to measurement and evaluation requires identifying the elements within evidence and context that need further development, as well as the action interventions to be introduced into the practice environment. Kitson and colleagues illustrate the interrelationships among frameworks, theories, and models and offer a concept map detailing the three stages of refinement for each element, along with examples of how evidence, context, and facilitation can be evaluated (Kitson et al. 2008).

Subsequently, the same group of researchers, led by Stetler, took the PARiHS framework further toward implementation by developing a guide designed to apply and operationalize the framework within QUERI (Stetler et al. 2011). The result exploits QUERI's action-oriented paradigm. By distinguishing short-term, task-oriented purposes from broader and more substantial organizational purposes, the group revised the PARiHS framework and developed an implementation guide that is sensitive to the need for prospective and comprehensive evidence-based practice (EBP) and for evaluating the effort.

The Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) Framework

From the public health field comes the RE-AIM framework, a planning framework described by Glasgow et al. (1999), which proposes that the impact of a public health initiative can be assessed by measuring the degree of reach, efficacy, adoption, implementation, and maintenance. Reach refers to the people affected by an intervention. Efficacy refers to positive medical outcomes as well as improvements in quality of life and lifestyle. Adoption refers to how viable an intervention is across different settings. Implementation refers to "the extent to which a program is delivered as intended." Finally, maintenance reflects the extent to which a desirable clinical practice is sustained and becomes routine in an organization. Each variable is quantifiable, and the framework can be used in a variety of community and clinical care settings.
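As a purely illustrative sketch of how the RE-AIM dimensions can be quantified, the snippet below computes each dimension as a simple proportion from hypothetical program counts. The counts and variable names are invented for this example and are not drawn from Glasgow et al.

```python
def proportion(numerator, denominator):
    """Simple proportion; returns 0.0 when the denominator is zero."""
    return numerator / denominator if denominator else 0.0

# Hypothetical counts from a practice-change program.
counts = {
    "eligible_patients": 2000, "patients_reached": 900,           # Reach
    "patients_improved": 540,                                      # Effectiveness (among those reached)
    "eligible_clinics": 40, "clinics_adopting": 25,                # Adoption
    "protocol_steps_planned": 10, "protocol_steps_delivered": 8,   # Implementation fidelity
    "clinics_sustaining_at_1_year": 18,                            # Maintenance
}

re_aim = {
    "reach": proportion(counts["patients_reached"], counts["eligible_patients"]),
    "effectiveness": proportion(counts["patients_improved"], counts["patients_reached"]),
    "adoption": proportion(counts["clinics_adopting"], counts["eligible_clinics"]),
    "implementation": proportion(counts["protocol_steps_delivered"], counts["protocol_steps_planned"]),
    "maintenance": proportion(counts["clinics_sustaining_at_1_year"], counts["clinics_adopting"]),
}

for dimension, value in re_aim.items():
    print(f"{dimension}: {value:.0%}")
```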

The Practical, Robust Implementation and Sustainability Model (PRISM)

The PRISM model combines the principles of the RE-AIM framework with key elements from existing models of quality improvement, chronic care management, and diffusion of innovation to highlight a broad range of factors associated with the successful integration of research findings into practice. Designed and tested via retrospective case studies by Feldstein and Glasgow (2008), PRISM provides a model for considering the interactions among an intervention's design; the available infrastructure for implementation and sustainability; the characteristics of both the intervening organization and the recipient population; and other factors from the external environment, and how all of these might affect the intervention's adoption, implementation, maintenance, reach, and effectiveness. For each of these domains, PRISM provides key questions to consider during the development, implementation, and maintenance phases of health care improvement programs.

The Knowledge-to-Action Framework

The Knowledge-to-Action Framework, developed by Graham, Tetroe, and colleagues, provides a simple and practical cycle of steps and milestones to bridge the knowledge-to-action gap. While it demonstrates the cycle for adoption of clinical practice guidelines, the framework has broader applicability. The cycle takes into account local context, the varying stakeholders at each step, and the importance of social relationships for adoption of the evidence or research. The cycle begins with (1) identifying the problem or gap (which may be at the individual care provider level, the organizational level, or the population level); it then proceeds to (2) adapting the proposed guidelines to the local context (by engaging a working committee to assess the quality, acceptability, and applicability of the guidelines with target audiences); (3) identifying barriers and facilitators; (4) implementing the interventions (which may include educational activities directed to all stakeholders, decision support and audit-and-feedback directed to care providers, and interventions targeted to the system, such as a cultural change program); (5) monitoring and evaluating the outcomes; and (6) sustaining the implementation of the innovation over time (by identifying the conditions needed for sustainability and the continued relevance of the guidelines).
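The cycle lends itself to simple project tracking. The minimal sketch below paraphrases the six steps as an ordered checklist and returns the next step not yet completed; the step labels, the status set, and the helper function are hypothetical illustrations rather than part of the framework itself.

```python
# The six Knowledge-to-Action steps, paraphrased from the cycle described above.
KTA_STEPS = [
    "identify the problem or gap",
    "adapt the guidelines to the local context",
    "identify barriers and facilitators",
    "implement the interventions",
    "monitor and evaluate the outcomes",
    "sustain the innovation over time",
]

def next_step(completed):
    """Return the earliest step not yet completed, or None when all are done."""
    for step in KTA_STEPS:
        if step not in completed:
            return step
    return None  # in practice, monitoring and sustaining continue over time

completed = {"identify the problem or gap"}
print(next_step(completed))  # adapt the guidelines to the local context
```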

The Consolidated Framework for Implementation Research

Damschroder and colleagues reviewed implementation theories and frameworks to identify common components that impact implementation across settings (Damschroder et al. 2009). The resulting model – the Consolidated Framework for Implementation Research – is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. The review identified a variety of factors associated with each domain: intervention (e.g., evidence quality), outer setting (patient needs), inner setting (leadership engagement), individual characteristics (knowledge), and process (evaluation).
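One way to operationalize the consolidated framework is as a structured site-assessment template keyed to the five domains, using the example constructs named above. The sketch below is illustrative only: the rating scale and the notion of a "rating sheet" are assumptions added for this example, and a real CFIR-based assessment would cover many more constructs per domain.

```python
# Five CFIR domains with the example constructs mentioned in the text.
CFIR_DOMAINS = {
    "intervention characteristics": ["evidence quality"],
    "outer setting": ["patient needs"],
    "inner setting": ["leadership engagement"],
    "characteristics of individuals": ["knowledge"],
    "process": ["evaluation"],
}

def blank_assessment():
    """Build an empty rating sheet (-2 strong barrier ... +2 strong facilitator)."""
    return {
        domain: {construct: None for construct in constructs}
        for domain, constructs in CFIR_DOMAINS.items()
    }

sheet = blank_assessment()
sheet["inner setting"]["leadership engagement"] = 2  # rated a strong facilitator
print(sheet["inner setting"])
```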

As the experimental contexts for implementation are variable, it is not surprising that there are a number of frameworks available for implementation research. The field will continue to refine the concepts, models, factors, and measures that affect the process of implementation.

Keys to Successful Implementation

Each of the frameworks explored above emphasizes the importance of identifying keys to success. The keys most commonly described are reviewed below.

Strong Confidence in the CER Study Design

To ensure confidence in the research, stakeholder engagement in study design, conduct, oversight, and CER methodology is viewed as a critical element for the implementation of findings. CER provides critical data for health care practice, since the clinical development of products often does not include comparisons with existing therapies; such data allow better identification of the most appropriate population group for a particular treatment. Implementation is also greatly facilitated when clinical research studies include data from diverse populations and practice environments. Scott and Glasziou (2012) estimated that between 30 % and 50 % of randomized trials "are seriously limited by one or more methodological flaws," and many published clinical trials are not directly relevant to clinical practice because of the populations selected for study or the practice environment; they cite estimates that less than 7 % of all reported clinical trials are "valid and highly relevant to clinical practice." Consideration of real-world scenarios in the design of clinical trials will greatly increase the possibility of implementing the results.

Understanding the Belief Systems of the Physicians and Frontline Providers

Physicians sometimes prefer to take cues from their peers or from medical societies. Aarons describes four dimensions of physician attitudes toward evidence and evidence-based practice that could inform solutions to similar barriers in implementing the results of CER studies (Aarons et al. 2010): the intuitive ("does this make sense") appeal of the evidence, the perceived gap between the research-based practice and current clinical practice, openness to learning new practices, and trust in the practice and recommendations of peers over the development of the evidence base. Prescribing styles and psychosocial factors may also determine whether and how physicians adopt clinical guidelines; Gagnon et al. (2006) found that implementation of recommendations based on health technology assessments in Canada differed between two specialties of surgeons based on their belief systems about themselves and their professions.

Infrastructure

A successful implementation climate can exist at all levels of clinical care – large or small, urban or rural, with robust or relatively modest funding – so long as the science of implementation and implementation research are embedded in the system of care delivery. In concrete terms, this may mean training staff to collect and record data on outcomes, having quality improvement professionals work alongside frontline providers to assist in implementation and dissemination efforts, and creating opportunities for health care providers to receive training on implementing a best practice rather than expecting uptake via an instruction sheet or simple command. On-the-job training is widely available in clinical settings across the United States; what is less common is an acute awareness that for a clinical discovery to succeed, putting it in place requires practice, effort, and resources. Access to population data, and the ability to extract and analyze the data relevant to the particular clinical question, is also critical.

Committed Institutional Leadership Support

The culture and structure of a health care organization are key determinants of how easy or difficult implementing new CER results will be. Barriers to implementation can emerge simply because leadership is not committed to change or does not provide the financial and human resources needed to make change feasible. Where leadership is willing to provide crucial support through mentorship, education, and resources for dissemination, the climate for the uptake of evidence-based practice is better. Clinical and institutional leadership can influence the speed and effectiveness of the uptake of a new process (Caldwell et al. 2008). The engagement of leadership at all levels of the organization can thus be critical to implementing change.

Organizational Receptivity and Preparedness

An organization has to be "ready" for change, and part of the implementation process might include an assessment of whether and how an entity is structurally capable of being on the receiving end of implementation. In organizations less receptive to change, different incentives may be needed; the lack of incentives and the lack of a perceived advantage to applying guidelines are often cited as reasons for nonadherence. Close collaboration among stakeholders at all levels is often a predictor of an organization's success in implementing change. A meta-analysis of studies on ways to improve cancer screening and immunization by Stone and colleagues (2002) found that organizational change was the single best predictor of a successful outcome: "When combined with the finding that teamwork and collaboration are powerful intervention features, this result confirms key elements of team-based quality improvement approaches for introducing practice change." In fact, the more intervention components added to an existing set, the better. This collaborative, multi-intervention approach is, as mentioned earlier, also used in the VA QUERI programs. The QUERI model of embedding implementation researchers in an organization and conducting implementation studies and evaluations at the same time as studies of evidence-based practice is believed to be unique.

Effective Use of Information Technology

Implementation science relies on knowledge about what works as its backbone and communication as its most effective tool. Knowledge and communication are both best leveraged with the effective use of information technology. Governments and institutions around the world have increasingly come to recognize the value of electronic health record keeping for this and other reasons. The existence of a central data system for patients and clinicians helps to ensure that everyone is working off the same set of facts. Many implementation studies have successfully incorporated information technology in record keeping, and some have tested different website portals/e-mail protocols to find effective methods of encouraging healthy lifestyle habits, such as smoking cessation and diabetes self-care. At the same time, inefficiencies in IT and data use, such as the existence of databases that do not easily work with other programs, or that fail to allow for recording of key facts, can delay implementation studies and processes. For example, a database on diabetes control may lose its effectiveness if there is no place to record foot and eye problems. Databases where there is no way to link processes and outcomes (e.g., number of rehospitalizations or infections) may make it hard for better processes to come to the attention of clinicians and managers. In the United States, the Veterans Health Administration’s Mental Health QUERI introduced computerized kiosks for patients with schizophrenia to complete self-assessment questionnaires before they met with clinicians. A 1-year assessment found that the kiosk-based self-assessments were helpful in improving the quality of care. The VA eventually linked the self-assessments with its system of computerized patient records. This, in turn, enabled the VA to provide tailored education to veterans. In Toronto, the Sunnybrook Health Sciences Center has introduced an e-health initiative that enables patients to access their records from the internet using a login and password. The objective is to streamline the way in which health information is delivered and exchanged between providers and patients, and to improve the efficiency of clinical workflow.

Researcher-Clinician Partnerships

Leadership support for implementation strategies may go nowhere if implementation experts are not organizationally embedded in day-to-day clinical practice. This can be done in many ways, and different organizations benefit from distinct partnerships. Properly done, researcher-clinician partnerships can have direct effects on clinical outcomes.

The work of Peter Pronovost and colleagues in a study known as the Keystone Michigan ICU Project is an oft-cited example of how implementation research can have a real impact on clinical care (Pronovost et al. 2006). Using a simple, five-point checklist applied over an 18-month period between 2004 and 2005 in a wide variety of hospital settings, mostly across Michigan, the Keystone ICU Project reduced catheter-related infection rates from 4 % to zero. The intervention is estimated to have saved more than 1,500 lives and nearly $200 million, while costing only $500,000 to implement. An examination of the study components shows that the facilitators described above were all present. The CER design was simple and straightforward. The checklist items – handwashing, full barrier precautions during the insertion of central venous catheters, cleaning the skin with chlorhexidine, avoiding the femoral site, and removing unnecessary catheters – were evidence based, developed from research findings. Implementation teams included at least one physician and one nurse working together as project leaders. Time and effort went into ensuring that staff were familiarized with and trained in the plan. Infectious disease specialists at each participating institution were on hand to help carry out the plan and obtain data. Team leaders met with researchers via biweekly conference calls and twice-yearly meetings to learn about the science of safety, how each stage of the implementation process worked, and how to partner with experts in infection control to obtain data on the implementation efforts.

Understanding the success in Michigan is vital to replicating the effort, and its positive outcomes, in other clinical settings. A research team including Dixon-Woods and Pronovost, among others, evaluated the success of the Keystone Project from both a process point of view and a social perspective (Dixon-Woods et al. 2011). Implementation of the checklist process within a clinical care environment can be understood as a complex social intervention in the daily routines of health care professionals. Variation among the hospital sites where the project was introduced meant that an "out-of-the-box" implementation of the protocol would not be feasible, since the institutions in question were preexisting rather than designed to function around the protocol. While any given institution within the project ultimately adapted the protocol to fit its own clinical environment, the support that Keystone hospitals received as part of a network contributed greatly to the success in generating positive outcomes. The intensive communication and training activities mentioned above not only transferred the procedural aspects of the protocol but also allowed clinical staff to participate in creating momentum for an improvement initiative that extended beyond the boundaries of their clinical unit, hospital, or community. A toolkit of materials, including recorded teleconferences made available on CD, helped sites implement the protocol (Dixon-Woods et al. 2011). The evidence base on which the protocol rested gave the project the scientific authority necessary to appeal to clinical personnel; support for the project from hospital leadership conveyed a sense of significance to the endeavor; and membership in the project study community gave individual personnel the safety of an "authorized" project, ensuring that challenges to existing cultural norms did not result in disciplinary action for clinical staff attempting to question and alter ingrained procedures.

Ethical Considerations of CER and Implementation Research

A sine qua non for ethical research with human research participants is that the intended research must be of important social value (Emanuel et al. 2000). If CER yields useful knowledge about the effectiveness of medicines or devices, but that knowledge is not then applied in practice, the intended social value of the CER is not realized; the potential burdens or harms placed on the human research participants who helped to generate that knowledge, and the societal resources expended, are therefore not justified. Given how critical implementation science is to realizing the social value of research, some bioethicists argue that implementation science should itself be seen as an ethical imperative.

As Faden and colleagues (2011) point out, “continuously gathering information about what works best for whom, and what does not, is vital” to establishing a just health care system that fairly and wisely allocates its resources. CER and implementation science help systems fulfill this principle of justice by, for example, enabling care that is equally available and of equal quality to all patients in the system. A just system for delivering health or health care knows, through CER and implementation research, who it is reaching and who it is not, and can shape its care delivery pathways to meet the needs of patients and communities.

There are also ethical considerations regarding the integration of CER, implementation research, and clinical care. Historically, research and clinical care have been conceptualized as separate activities. Research has been seen as contributing to generalizable knowledge that will benefit future patients and persons, while treatment is meant to help individual patients contemporaneously. Ideally, it was argued, there should be firewalls between investigators and the physicians caring for patients, or at least disclosure to the patient if his or her physician is also serving as an investigator. Yet integrating CER and implementation research into clinical practice blurs the traditional research-treatment distinction and has important implications for human subjects protections and oversight. Recent work by bioethicists and clinical researchers (Faden et al. 2011; Kass et al. 2012; Selker et al. 2011; Largent et al. 2011) has called for a thorough rethinking of the research-treatment distinction and for a national framework in which oversight is commensurate with the level of risk. These ethical considerations will help frame the integration of CER and implementation research in learning health care systems.

Scenarios for Integrating Implementation Research into CER

As noted earlier, depending on the research hypothesis or comparison question, a number of scenarios may be used to embed implementation research into CER. Below are four simplified scenarios to help frame how and when implementation research could be embedded in CER. These are not intended to be comprehensive but rather to provide some general guidance on methods. In all these cases, the CER would measure clinical outcomes, and the implementation research would concurrently track the processes that affect successful implementation.

Regardless of the research scenario, a crucial task for implementation researchers, CER researchers, and clinical staff is to identify and reach consensus on where in the process, system, and/or setting opportunities for implementation are best seized. This entails applying rigorous methodology to identify the barriers, facilitators, and other factors that will affect the implementation and then, once those are identified, pulling the most effective evidence-based implementation “lever(s)” to change individual behaviors, organizational behaviors, processes, systems, infrastructure, or other factors. It also means knowing the best time to pull the lever(s): in other words, when is the right moment during the intervention to share the “how-to’s”?
  1. Sequential CER and implementation research studies. In this scenario, the CER study may have been previously conducted and established evidence of an intervention’s effectiveness. Here, the implementation research team might examine the gaps, barriers, and facilitators to the existing adoption of the intervention and establish an implementation study with the appropriate implementation levers to improve the adoption or spread of the intervention. The implementation research team would then compare the uptake and/or dissemination before and after the implementation study and describe the processes that led to success or failure (a simple analytic sketch of such a before-and-after comparison follows this list).

  2. Simultaneous identification of implementation barriers during CER to guide planning the implementation research. Here, the CER study would assess the relative effectiveness of an intervention and, simultaneously during the effectiveness trial, the implementation team would identify barriers, facilitators, and contextual factors that might influence the uptake, adoption, or spread of the CER findings. All stakeholders and participants – including the physicians at the point of care, trainees or residents, researchers, nurses, managers, quality improvement officers, clinical study coordinators, and the study subjects themselves – should be at the table at the beginning and throughout the studies in order to decide what constitutes success of the CER study as well as how to measure and collect data on inputs, outputs, and outcomes. This scenario could be framed as, “don’t waste the opportunities of a CER study to examine all the factors that could benefit or impede implementation, including the perspectives of the human subjects in the study.” Here, pulling the patient engagement “lever” could help plan patient education and outreach efforts to ensure the success of the implementation.

  3. Comparative implementation studies at multiple sites. This approach may be helpful in establishing the effectiveness of implementation strategies in a health system with multiple hospitals or affiliated entities. The CER study might be undertaken at one site, and once the most effective intervention is determined, the research team may choose to conduct different implementation trials at other sites where the context, barriers, facilitators, and other factors may differ. The implementation studies would compare the uptake or adoption at the different sites, again evaluating the levers that affected the success or failure of the implementation strategies.

  4. Staging implementation studies to scale up for wide dissemination. Once the effectiveness of an intervention is established, its adoption might be studied and disseminated in stages, beginning with a pilot and proceeding from small-scale to large-scale implementation studies. A pilot implementation study may be conducted to develop initial evidence regarding the feasibility, acceptability, and potential effectiveness of implementation strategies, and to begin to identify key contextual influences specifically relevant to scaling up. Once the pilot study has been implemented and assessed, small-scale implementation studies may take place under controlled or “idealized” settings, where the strategies are further evaluated. In subsequent implementation phases, larger trials of implementation strategies could take place under “routine” care conditions to evaluate how well the strategies perform in the real world. This staged approach, culminating in widespread dissemination in the real world, might provide levers for influencing policies related to care or processes to improve care.

In all these scenarios, in order to pull the most appropriate implementation lever(s) effectively, teams must fully comprehend the key levers, the context, and the evaluation framework that will be used to gauge success.
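As an illustration of the before-and-after comparison described in scenario 1, the minimal sketch below contrasts adoption of a CER-supported practice before and after an implementation strategy is introduced. The adoption counts and the simple two-proportion z test are hypothetical assumptions chosen for illustration; real evaluations would typically use stronger designs (for example, interrupted time series or stepped-wedge trials) and would pair such numbers with qualitative process data on which levers were pulled, when, and in what context.

```python
import math

def adoption_rate(adopters: int, eligible: int) -> float:
    """Proportion of eligible clinicians or sites using the CER-supported practice."""
    return adopters / eligible

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Normal-approximation z statistic for a change in adoption rate (pre vs. post)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical uptake counts (illustrative only, not data from any cited study).
pre_adopters, pre_eligible = 42, 120    # before the implementation strategy
post_adopters, post_eligible = 88, 118  # after the strategy was introduced

p_pre = adoption_rate(pre_adopters, pre_eligible)
p_post = adoption_rate(post_adopters, post_eligible)
z = two_proportion_z(p_pre, pre_eligible, p_post, post_eligible)
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Adoption before: {p_pre:.1%}  after: {p_post:.1%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```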

Workforce and Training

A number of reports from the United Kingdom, Canada, and the United States have identified current gaps in the workforce for CER and for implementation and dissemination research (Kalfoglou et al. 2001; Kitson and Strauss 2010; Kroenke et al. 2010). Academic centers for the study of implementation research are still few in number. Prospective implementation researchers reasonably question whether there are career pathways, opportunities for engaging with clinicians in CER, and appropriate training programs that can provide them with implementation “know-how.”

Implementation research training draws largely from a variety of behavioral and social science disciplines. An overall goal is to provide a comprehensive program that not only builds on those disciplines but also includes the theoretical frameworks of implementation science, integrates relevant approaches and methods, and focuses on the skills needed to work in multidisciplinary teams. While a number of institutions have developed, or are developing, formal or informal training programs, some of them targeted to implementation and dissemination, there are a few well-established and published training programs specifically targeted to implementation research that suggest curricular offerings and competencies. Two examples follow: one from a program in Canada and another recently published in the United States.

The Canadian Institutes of Health Research funded the development of a national training initiative to build a workforce prepared in the science and practice of knowledge translation. Trainees can be broadly categorized into two groups: (1) those with advanced training and degrees from a wide range of disciplines including health services research, clinical epidemiology, informatics, nursing, medicine, psychology, health policy, business, computer science, and engineering and (2) decision-makers, including clinicians, health care managers, and policymakers, who want to know more about knowledge translation and how to apply it in their own setting. The training of the first group focuses on how to disseminate research results and engage stakeholders (including the public, health care providers, managers, and policy-makers) in collaborative strategies and interventions to ensure that the research is both relevant to and used by the knowledge users. The curriculum for the second group focuses on the core competency of how to implement a knowledge translation project in an organization.

The NIH and the Veterans Health Administration have recently partnered to fund and administer an annual summer Training Institute for Dissemination and Implementation Research in Health (TIDIRH). Structured as a 5-day intensive residential institute, TIDIRH targets established researchers from a range of disciplines. Selecting participants with demonstrated experience in health care delivery research is intended to accelerate the training’s impact on the volume and quality of research (Meissner et al. 2013). Now in its third year, the program rotates among different host universities to engage local faculty and build institutional as well as individual interest and capacity for implementation and dissemination research in health. TIDIRH maintains a core faculty from year to year, while inviting guest lecturers to round out a curriculum that aims to provide a thorough grounding in theory; implementation and evaluation approaches; creating partnerships and multidisciplinary research teams; research designs, methods, and analysis; and conducting research at multiple levels of intervention (clinical, community, policy).

The University of California, San Francisco, Training in Clinical Research Program (TICR) also targets implementation and dissemination science and presents a training framework that embraces multidisciplinary training combined with skill development through a focus on translational disciplines, including social and behavioral sciences, business administration, economics, education, and engineering; clinical disciplines including dentistry, medicine, nursing, pharmacy, and public health; and population sciences including biostatistics, epidemiology, and health policy. The competencies include an emphasis on identifying community, patient, clinician, and organizational factors that serve as barriers and facilitators to translating research results into everyday practice, policy, and public health; applying the basics of process and outcome evaluation; identifying the appropriate qualitative and quantitative measures of effect; and integrating conceptual frameworks for implementation science into the intervention design and/or evaluation of the program.

Below are some representative examples of courses and programs that integrate elements of implementation research (Table 1).
Table 1 Relevant Course Information

Name of course | Educational institution | Program duration | Website

Workshop
Enhancing Implementation Science in VA (EIS) Conference | Department of Veterans Affairs, Quality Enhancement Research Initiative (QUERI) | Cyber seminar delivered over several weeks | http://www.queri.research.va.gov/meetings/eis/
Global Implementation Conference | National Implementation Research Network | Biennial | http://www.implementationconference.org/
Guideline Implementation Master class | Guidelines International Network | 1 day | http://www.g-i-n.net/events/9th-conference
Improvement Science Summit | University of Texas, San Antonio | 2 days | http://acestar.uthscsa.edu/institute/su11.asp#improve
KT Canada Summer Institute on Knowledge Translation | Canadian Institutes of Health Research | 3 days | http://ktclearinghouse.ca/ktcanada/education/summerinstitute
5th Annual NIH Conference on the Science of Dissemination and Implementation: Policy and Practice | NIH | 2 days | http://obssr.od.nih.gov/scientific_areas/translation/dissemination_and_implementation/DI2012/index.html
NIH Training Institute on Dissemination and Implementation Research in Health | NIH | 5 days | http://conferences.thehillgroup.com/OBSSRinstitutes/TIDIRH2012/index.html
Summer Institute on Evidence-Based Quality Improvement | University of Texas, San Antonio | 5 days | http://www.acestar.uthscsa.edu/
Teaching Evidence Assimilation for Collaborative Health (TEACH) Conference/Program | New York Academy of Medicine | 4 days | http://www.nyam.org/fellows-members/ebhc/

Fellowship
Health Services/Comparative Effectiveness Research Training | University of Alabama at Birmingham | 2–3 years | http://www.soph.uab.edu/listerhill/hsortp
Implementation Research Institute (IRI) | Washington University in St. Louis | 2 years | http://cmhsr.wustl.edu/Training/IRI/Pages/ImplementationResearchTraining.aspx
Indiana Children’s Health Services Research Fellowship | Indiana University School of Medicine | 2 years | http://www.ichsr.org/

Certification and degree programs
Health Services Research and Policy – University of Rochester | University of Rochester Medical Center | 5 years | http://www.urmc.rochester.edu/education/graduate/phd/health-services-research/
Master of Health Care Delivery Science – Dartmouth College | Dartmouth Medical School | 1.5 years | http://mhcds.dartmouth.edu/index.html
PhD Program in Translational Science | Ohio State University – Center for Clinical and Translational Science | 3–4 years | http://ccts.osu.edu/education-and-training-programs/degree-programs/phd-in-translational-science-integrated-biomedical-graduate-program
Master of Public Health in Clinical Translational Science | Ohio State University – Center for Clinical and Translational Science | 2–3 years | http://ccts.osu.edu/education-and-training-programs/degree-programs/mph-in-clinical-translational-science
UCSF Certificate Program in Implementation Science | University of California, San Francisco | 1 year (part-time) | http://www.epibiostat.ucsf.edu/courses/implementation_research.html
UCSF PhD Program in Epidemiology and Translational Science | University of California, San Francisco | 3–4 years | http://www.epibiostat.ucsf.edu/courses/doctoral.html
Pre-doctoral and Post-doctoral Traineeships in Health Services Research and Healthcare Quality Improvement | University of Wisconsin, Madison | 2–5 years | http://pophealth.wisc.edu/prospective-students/finance/funding-support/training-grant

Source: Analysis conducted by the Association of American Medical Colleges, 2012

Planning career pathways, and adjusting policy and funding streams accordingly, will be inherent to the success of any training effort aimed at producing a diverse, adequately prepared workforce capable of carrying out implementation research.

Summary

This chapter has highlighted the evolution of the field of implementation science in various settings, the opportunities afforded by the ACA, and the recognition by the research community and policy-makers that a new engagement with implementation research will be required to fulfill the promise of CER. The success of the federal investment in CER, as well as the fulfillment of the implicit social contract, may well hinge on how clinical discoveries are disseminated and implemented across the nation. Indeed, the social contract extends to all members of society: if knowledge is not implemented and disseminated broadly to historically underserved groups, CER will not have addressed one of its major tenets, which is to reduce disparities in health care and its delivery. Biomedical ethicists have argued convincingly that implementation and dissemination are integral parts of the social contract between medical research and the human research subjects who participate in CER studies and endure potential burdens and risks in helping to ensure that innovations are discovered, disseminated, and implemented.

In the end, clinical knowledge, no matter how valuable and lifesaving, does not disseminate itself, and even publication in a well-known journal carries no guarantee of uptake. Closing the gap between knowledge and practice requires more than making a discovery, creating guidelines, publishing findings, or providing financial incentives; it takes the deliberate use of implementation research and a laser-like focus on outcomes.

Today, greater awareness among policy-makers, researchers, and clinical leaders, together with new funding streams, opens up comparable opportunities in the field of implementation research to help ensure that clinical breakthroughs benefit all citizens and communities. In the end, however, the full integration of implementation research into CER, and into the fabric of research in the United States, will require the political will to invest resources in the field and in career pathways for future generations in a resource-constrained environment. Implementation researchers, for their part, will need to simplify the lexicon and improve academic, political, and public understanding of a complex, multidimensional field of inquiry.

References

  1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2010;38(1):4–23.
  2. Bonham AC, Solomon MZ. Moving comparative effectiveness research into practice: implementation science and the role of academic medicine. Health Aff. 2010;29(10):1901–5.
  3. Caldwell DF, Chatman J, O’Reilly 3rd CA, Ormiston M, Lapiz M. Implementing strategic change in a health care system: the importance of leadership and change readiness. Health Care Manage Rev. 2008;33(2):124–33.
  4. Chambers D. Foreword. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health. New York: Oxford University Press; 2012. p. vii.
  5. CIHR. About knowledge translation. Last modified 2014. http://www.cihr-irsc.gc.ca/e/29418.html. Accessed 26 July 2013.
  6. CIHR. Knowledge to action: a knowledge translation casebook. 2008. http://www.cihr-irsc.gc.ca/e/38764.html. Accessed 26 July 2013.
  7. CMS. Center for Medicare and Medicaid Innovation – health care innovation awards. http://innovation.cms.gov/initiatives/Health Care-Innovation-Awards/. Accessed 26 July 2013.
  8. CMS. National healthcare expenditures fact sheet. Last modified 2014. http://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/NationalHealthExpendData/NHE-Fact-Sheet.html. Accessed 26 July 2013.
  9. Colditz GA, Wolin KY, Gehlert S. Applying what we know to accelerate cancer prevention. Sci Transl Med. 2012;4(127):127rv4.
  10. Damschroder L, Aron D, Keith R, Kirsh S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
  11. Dixon-Woods M, Bosk C, Aveling E, Goeschel C, Pronovost P. Explaining Michigan: developing an ex post theory of a quality improvement program. Milbank Q. 2011;89(2):167–205.
  12. Emanuel E, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283:2701–11.
  13. Faden R, Beauchamp TL, Kass NE. Learning health care systems and justice. Hastings Center Report. 2011;July–August:3.
  14. Feldstein AC, Glasgow RE. A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Jt Comm J Qual Patient Saf. 2008;34(4):228–43.
  15. Gagnon MP, Sanchez E, Pons JM. From recommendation to action: psychosocial factors influencing physician intention to use Health Technology Assessment (HTA) recommendations. Implement Sci. 2006;1:8.
  16. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322–7.
  17. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.
  18. Implementation Science. About this journal. http://www.implementationscience.com/about. Accessed 26 July 2013.
  19. Institute of Medicine (IOM). Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
  20. Kalfoglou AL, Boenning DA, Korn D, Institute of Medicine. Exploring the map of clinical research for the coming decade: symposium summary, Clinical Research Roundtable, December 2000. Washington, DC: Board on Health Sciences Policy, Institute of Medicine; 2001.
  21. Kass N, Faden R, Tunis S. Addressing low-risk comparative effectiveness research in proposed changes to U.S. federal regulations governing research. JAMA. 2012;307(15):1589–90.
  22. Kitson A, Rycroft-Malone J, Harvey G, McCormack B, Seers K, Titchen A. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implement Sci. 2008;3:1. doi:10.1186/1748-5908-3-1.
  23. Kitson A, Strauss SE. The knowledge-to-action cycle: identifying the gaps. Can Med Assoc J. 2010;182(2):E73–7.
  24. Kohn LT, Corrigan JM, Donaldson MS, Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.
  25. Kroenke K, Kapoor W, Helfand M, Meltzer DO, McDonald MA, Selker H. Training and career development for comparative effectiveness research workforce development: CTSA Consortium Strategic Goal Committee on comparative effectiveness research workgroup on workforce development. Clin Transl Sci. 2010;3(5):258–62.
  26. Largent E, Joffe S, Miller F. Can research and care be ethically integrated? Hastings Center Rep. 2011;41(4):37–46.
  27. Lee DH, Vielemeyer O. Analysis of overall level of evidence behind Infectious Diseases Society of America practice guidelines. Arch Intern Med. 2011;171(1):18–22.
  28. Lenfant C. Shattuck lecture–clinical research to clinical practice–lost in translation? N Engl J Med. 2003;349(9):868–74.
  29. McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–45.
  30. Meissner HI, Glasgow RE, Vinson CA, Chambers D, Brownson RC, Green LW, Ammerman AS, Weiner BJ, Mittman B. The U.S. training institute for dissemination and implementation research in health. Implement Sci. 2013;8:12. doi:10.1186/1748-5908-8-12.
  31. Mittman BS. Implementation science in health care. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and implementation research in health. New York: Oxford University Press; 2012. p. 400–18.
  32. National Research Council. The CTSA program at NIH: opportunities for advancing clinical and translational research. Washington, DC: The National Academies Press; 2013.
  33. NHLBI. Facts about ALLHAT: new findings about drugs to lower high blood pressure and cholesterol. http://www.nhlbi.nih.gov/health/allhat/facts.htm. Accessed 24 July 2013.
  34. NICE. How to put NICE guidance into practice. London: National Institute for Health and Care Excellence; 2005 (interim).
  35. NIH. Center for Scientific Review. Dissemination and implementation research in health study section. http://public.csr.nih.gov/StudySections/IntegratedReviewGroups/HDMIRG/DIRH/Pages/default.aspx. Accessed 24 July 2013.
  36. NIH. Dissemination and implementation science. The National Library of Medicine. Last modified 2015. http://www.nlm.nih.gov/hsrinfo/implementation_science.html. Accessed 26 July 2013.
  37. Patient Protection and Affordable Care Act. Pub. L. 111–148; 2010. 42 U.S.C. Secs. 1185–9511.
  38. PCORI. National priorities for research and research agenda. Washington, DC: Patient-Centered Outcomes Research Institute; 2012. http://www.pcori.org/assets/PCORI-National-Priorities-and-Research-Agenda-2012-05-21-FINAL1.pdf. Accessed 24 July 2013.
  39. Proctor EK, Knudsen KJ, Fedoravicius N, Hovmand P, Rosen A, Perron B. Implementation of evidence-based practice in community behavioral health: agency director perspectives. Adm Policy Ment Health. 2007;34(5):479–88.
  40. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355(26):2725–32.
  41. Riley WT, Glasgow RE, Etheredge L, Abernethy AP. Rapid, responsive, relevant (R3) research: a call for a rapid learning health research enterprise. Clin Transl Med. 2013;2(1):10.
  42. Rycroft-Malone J, Kitson A, Harvey G, et al. Ingredients for change: revisiting a conceptual framework. Qual Saf Health Care. 2002;11(2):174–80.
  43. Schillinger D. An introduction to effectiveness, dissemination and implementation research. In: Fleisher P, Goldstein E, editors. UCSF Clinical and Translational Science Institute (CTSI) resource manuals and guides to community-engaged research. San Francisco: Clinical Translational Science Institute Community Engagement Program, University of California San Francisco; 2010. http://ctsi.ucsf.edu/files/CE/edi_introguide.pdf. Accessed 26 July 2013.
  44. Scott IA, Glasziou PP. Improving the effectiveness of clinical medicine: the need for better science. Med J Aust. 2012;196(5):304–8.
  45. Selker H, Grossman C, Adams A, Goldmann D, Dezii C, Meyer G, Roger V, Savitz L, Platt R. The common rule and continuous improvement in health care: a learning health system perspective. Unpublished discussion paper posted on the Institute of Medicine website. 2011. http://www.iom.edu/Global/Perspectives/2012/CommonRule.aspx. Accessed 31 July 2013.
  46. Selker H, Leslie L, Wasser J, Plaut A, Wilson I, Griffith J. Tufts CTSI: comparative effectiveness research as a conceptual framework for a focus on impact. Clin Transl Sci. 2011;3(2):56–8.
  47. Shrank W. The Center for Medicare and Medicaid Innovation’s blueprint for rapid-cycle evaluation of new care and payment models. Health Aff. 2013;32(4):807–12.
  48. Stafford RS, Bartholomew LK, Cushman WC, Cutler JA, Davis BR, Dawson G, Einhorn PT, Furberg CD, Piller LB, Pressel SL, Whelton PK, ALLHAT Collaborative Research Group. Impact of the ALLHAT/JNC7 dissemination project on thiazide-type diuretic use. Arch Intern Med. 2010;170(10):851–8.
  49. Stetler C, Damschroder L, Helfrich C, Hagedorn H. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci. 2011;6:99.
  50. Stone EG, Morton SC, Hulscher ME, et al. Interventions that increase use of adult immunization and cancer screening services: a meta-analysis. Ann Intern Med. 2002;136(9):641–51.
  51. Sung NS, Crowley Jr WF, Genel M, et al. Central challenges facing the national clinical research enterprise. JAMA. 2003;289(10):1278–87.
  52. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith Jr SC. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831–41.
  53. VHA. QUERI implementation guide, vol 2012. U.S. Department of Veterans Affairs; 2009.
  54. Washington AE, Lipstein SH. The Patient-Centered Outcomes Research Institute – promoting better information, decisions, and health. N Engl J Med. 2011;365(15):e31.
  55. Weiss AP. Measuring the impact of medical research: moving from outputs to outcomes. Am J Psychiatry. 2007;164(2):206–14.
  56. Woolf SH. The meaning of translational research and why it matters. JAMA. 2008;299:211–3.
  57. Woolf SH, Johnson RE. The break-even point: when medical advances are less important than improving the fidelity with which they are delivered. Ann Fam Med. 2005;3(6):545–52.
  58. Zerhouni EA. Translational and clinical science – time for a new vision. N Engl J Med. 2005;353(15):1621–3.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Ann C. Bonham (email author), Association of American Medical Colleges, Washington, USA
  • Mildred Z. Solomon, The Hastings Center, Garrison, USA
  • Brian Mittman, VA Center for Implementation Practice and Research Support, Department of Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, USA
  • Alexander K. Ommaya, Clinical Effectiveness and Implementation Research, Association of American Medical Colleges, Washington, USA
  • Anne Berlin, Scientific Affairs, Association of American Medical Colleges, Washington, USA
