Prevention Science, Volume 14, Issue 4, pp 319–351

Addressing Core Challenges for the Next Generation of Type 2 Translation Research and Systems: The Translation Science to Population Impact (TSci Impact) Framework

  • Richard Spoth
  • Louise A. Rohrbach
  • Mark Greenberg
  • Philip Leaf
  • C. Hendricks Brown
  • Abigail Fagan
  • Richard F. Catalano
  • Mary Ann Pentz
  • Zili Sloboda
  • J. David Hawkins
  • Society for Prevention Research Type 2 Translational Task Force Members and Contributing Authors
Open Access Article

Abstract

Evidence-based preventive interventions developed over the past two decades represent great potential for enhancing public health and well-being. Research confirming the limited extent to which these interventions have been broadly and effectively implemented, however, indicates much progress is needed to achieve population-level impact. In part, progress requires Type 2 translation research that investigates the complex processes and systems through which evidence-based interventions are adopted, implemented, and sustained on a large scale, with a strong orientation toward devising empirically-driven strategies for increasing their population impact. In this article, we address two core challenges to the advancement of T2 translation research: (1) building infrastructure and capacity to support systems-oriented scaling up of evidence-based interventions, with well-integrated practice-oriented T2 research, and (2) developing an agenda and improving research methods for advancing T2 translation science. We also summarize a heuristic “Translation Science to Population Impact (TSci Impact) Framework.” It articulates key considerations in addressing the core challenges, with three components that represent: (1) four phases of translation functions to be investigated (pre-adoption, adoption, implementation, and sustainability); (2) the multiple contexts in which translation occurs, ranging from community to national levels; and (3) necessary practice and research infrastructure supports. Discussion of the framework addresses the critical roles of practitioner–scientist partnerships and networks, governmental agencies and policies at all levels, plus financing partnerships and structures, all required for both infrastructure development and advances in the science. 
The article concludes with two sets of recommended action steps that could provide impetus for advancing the next generation of T2 translation science and, in turn, potentially enhance the health and well-being of subsequent generations of youth and families.

Keywords

Type 2 translation research · Prevention intervention · Adoption decisions · Dissemination · Implementation · Sustainability · Public health impact · Systems approach

Introduction

Evidence-based prevention and health promotion programs, practices, and policies represent great potential for enhancing public health and well-being. When carefully implemented, such interventions can prevent a wide range of health problems, promote positive development, and achieve economic benefits (National Research Council and Institute of Medicine 2009a; Society for Prevention Research [SPR] 2012; U.S. Department of Health and Human Services, Affordable Care Act 2011). Considerable evaluation data indicate that these types of interventions have had significant and far-reaching effects in reducing: unhealthy eating; physical inactivity; alcohol, tobacco, and other drug abuse; teen pregnancy; school failure; delinquent behavior; violence; and other mental, emotional, behavioral, and physical health problems (National Research Council and Institute of Medicine 2009a; U.S. Department of Health and Human Services, Affordable Care Act 2011). Furthermore, these interventions have shown a cost-beneficial economic impact in the education, criminal justice, social, and health service systems (Aos et al. 2004; Miller and Hendrie 2008; National Research Council and Institute of Medicine 2009a). To achieve broad population-level impact of evidence-based interventions, however, much more progress is necessary.

In this paper, we first highlight the twofold problem of limited translation of evidence-based interventions (EBIs) into practice and the related gaps in Type 2 (T2) translational research, along with the consequences of these limits and gaps. We then introduce two core, grand challenges in solving the twofold problem: (1) building the necessary infrastructure and capacity to support the systems-oriented scaling up of EBIs and practice-oriented translation research, and (2) developing a research agenda and improving the methods to address this agenda, in order to advance translation science. Next, we present a heuristic conceptual framework to articulate key considerations in addressing these challenges. In the third section, we discuss in greater detail specific strategies to address the two core challenges, summarizing relevant findings from previous research and suggesting future directions. Finally, we discuss the role of government and the policy changes required to support both infrastructure development and advances in translation science, including coordinated governmental activity and innovative partnership-based financing structures. Although conclusions presented for addressing challenges and advancing the field are broadly applicable, space constraints led us to focus primarily on literature pertaining to: (1) practice and research efforts in education, public health, and human services, rather than efforts in healthcare settings; and (2) prevention programming with children and adolescents.

The Problem: Limited Translation and Related Research

Failing to Reach Those Who Could Benefit from Evidence-Based Intervention

In the U.S., we are failing to reach those in need of evidence-based prevention and health promotion interventions.1 A number of studies have shown that only a relatively small percentage of interventions implemented by community-based program delivery systems (e.g., public schools, healthcare facilities, social service agencies) is evidence-based (Gottfredson et al. 2000; Hallfors et al. 2000; Hantman and Crosse 2000; Mendel 2000; Ringwalt et al. 2009; Silvia et al. 1997), indicating there is limited public access, along with problematic disparities in access to these services (e.g., Kessler et al. 2005; Merikangas et al. 2011; National Research Council and Institute of Medicine 2009a, 2009b; World Health Organization 2008). Moreover, a high percentage of EBIs are being implemented without quality or fidelity; thus, they are unlikely to achieve the intended outcomes (Fixsen et al. 2005; Ringwalt et al. 2011).

The issues are not unique to preventive interventions; the picture is similar for other healthcare and public health services. For example, only 50 % of U.S. patients receive therapies and services recommended by their healthcare providers, despite the country’s potential to provide excellent healthcare (McGlynn et al. 2003; Woolf 2008). As Glasgow and colleagues note, many of the elements necessary for translating science into widespread public health practice are, simply put, not in place (Glasgow et al. 2003). According to Kerner et al. (2005):

…efforts to move effective preventive strategies into widespread use too often have been unsystematic, uncoordinated, and insufficiently capitalized…little [is] known about the best strategies to facilitate active dissemination and rapid implementation of evidence-based practices…[without] infrastructure for dissemination, it is likely that many evidence-based interventions will remain on the shelves (pp. 443–444).

Several literature reviews emphasize that it takes too long to see widespread public health benefits from newly tested and effective prevention interventions. Estimates vary by type of intervention, but indicate that a period of 17 or more years is common (Balas and Boren 2000; Boren and Balas 1999). As a historical example, penicillin was discovered by Alexander Fleming in 1928 but was not widely used until the mid-1940s, despite the demand created by millions of wounded soldiers in World War II. Time estimates are conservative since they often assume intervention adoption rates of 50 % or less and do not factor in the time required to move from adoption and implementation to achieving population-level impacts (also see Trochim et al. 2011, on issues with tracking time markers in biomedical research). These findings underscore the challenge in achieving the widespread use of many efficacious and effective prevention interventions.

The Role of Type 2 Translation Research in Addressing the Problem

Translation research, or the process of translating the information gained through scientific research into knowledge that will affect practice and ultimately improve public health, often is labeled as Type 1 and Type 2 (hereafter T2) translation (e.g., Sung et al. 2003). Briefly, Type 1 translation research addresses the application of basic research findings to the development of interventions. T2 translation research has been defined and classified in various ways in the health research literature (Bowen et al. 2009; Fixsen et al. 2005; Kerner et al. 2005), reflecting both its relatively recent appearance on the health-related research agenda and the diversity of disciplines contributing to it (Rabin et al. 2008). Broadly speaking, T2 translation research investigates the complex processes and mechanisms through which tested and proven interventions are integrated into practice and policy on a large scale and in a sustainable way, across targeted populations and settings. In our view, it is essential for realizing the population-level health impact of evidence-based preventive interventions. Despite its critically important role in achieving public health outcomes, T2 research has been limited.

Barriers to and Consequences of Limited T2 Research

Limited Funding

A key barrier is limited funding devoted to T2 translation, which reflects missed opportunities for prevention and health promotion in the U.S. As Woolf (2008) emphasizes, program and policy priorities typically are not based on a careful assessment of population needs or consideration of the EBIs most likely to address specific priority needs efficiently and effectively. There are many reasons for the limited investment in evidence-based prevention, ranging from a lack of public awareness to competing resource allocation priorities at federal, state, and local levels, which typically favor treatment over preventive interventions (Catalano et al. 2012). Despite the potential for widespread adoption of effective prevention interventions to reduce costly treatments, only an estimated 2 to 3 % of governmental healthcare spending is directed toward them (Centers for Disease Control and Prevention 1992; Miller et al. 2008; Satcher 2006; Woolf 2008). Further, the majority of NIH funding is devoted to basic “discovery research” or Type 1 translation research.

Policy-related Barriers

As Woolf (2008) states, new “breakthrough” interventions may have less potential for saving lives than existing efficacious interventions implemented effectively at scale. The emphasis on the development of new interventions, rather than on research that would guide broad implementation of interventions shown to be effective, suggests that current policy priorities do not sufficiently consider the potential of such interventions to affect public health. It also might reflect an assumption that market forces will drive the dissemination of such interventions. Unfortunately, this has not been the case (see Kerner et al. 2005). Further, infrastructures and systems to support implementation of effective prevention interventions have not been well-developed, absent supportive policies.

Consequences of Lack of Investment and Policy Barriers

Numerous health, social, and economic consequences result from this failure to prioritize T2 translation practice and research. For example, preventing unhealthy behaviors (e.g., poor diet, physical inactivity, tobacco smoking, alcohol and other drug use, mental health problems, school failure, and delinquency) could reduce a significant proportion of the more than 900,000 annual U.S. deaths associated with these behaviors, along with the associated chronic diseases accounting for 75 % of healthcare costs and $1 trillion in lost productivity (CDC, National Center for Chronic Disease Prevention & Health Promotion 2009). Moreover, the estimated annual cost of mental, emotional, and behavioral disorders ranges from $247 billion to $435 billion (National Research Council and Institute of Medicine 2009a). Despite these economic and health consequences, funding for implementation and evaluation of interventions designed to prevent or reduce unhealthy behaviors and mental, emotional, and behavioral disorders appears to be decreasing (e.g., National Research Council and Institute of Medicine 2009a; Sloboda 2012; also see www.carnevaleassociates.com), although derivation of precise funding estimates is hampered by the current monitoring systems (e.g., imprecise categorizing and reporting of NIH and other potentially relevant grants—see National Research Council and Institute of Medicine 2009a). Compounding the problem is the fact that access to such interventions is not uniform across populations, with more disadvantaged populations having less access (Woolf 2007).

An increased emphasis on research to understand how to widely disseminate and implement EBIs could save the lives of children, youth, and adults; reduce health problems; and improve quality of life. It also could substantially reduce healthcare costs, particularly through the use of economic analyses to guide investment shifts from expensive but low-value preventive interventions to ones that are more cost-effective (Woolf et al. 2009). This paper from the Mapping Advances in Prevention Science Task Force of the Society for Prevention Research on T2 Translation Research2 summarizes an approach to facilitating widespread application of EBIs.

Addressing Core Challenges: An Integrated Conceptual Framework

The Two Core Challenges: Infrastructure Development and Scientific Advances

Our review of the literature highlights two core challenges we must address to achieve population impact through EBIs (Backer and Guerra 2011; Durlak and DuPre 2008; Fixsen et al. 2005; Glasgow et al. 2006; National Research Council and Institute of Medicine 2009a; Proctor et al. 2009; Spoth et al. 2008; Westfall et al. 2007). The first core challenge is to build infrastructures and the capacity for broad translation of evidence-based preventive interventions into community practices through prevention delivery systems. At the heart of this challenge is the need to develop practice infrastructures more capable of enhanced adoption, implementation, and sustainability of EBIs. This effort must include developing the corresponding research infrastructures required to investigate the features of practice or delivery systems (technical, human, and structural) that would optimally support effective, efficient, and sustained implementation of EBIs. Broadly defined, infrastructure consists of the basic supports and systems in practice and research settings that serve to achieve sustained, high-quality implementation of EBIs at scale. The second core challenge is to clarify and conduct the range of necessary scientific advances required for investigation of sustained, high-quality implementation of EBIs at scale. The range of related activities includes developing conceptual frameworks, delineating priority T2 translation research questions, and improving the methods to address them.

Building capacity and related infrastructure for T2 translation—in both science and practice—and developing the next generation of theory, methods, and T2 translation research are grand challenges. These are challenges within our reach due to prior scientific advances and, if addressed, are likely to have substantial public health impact (see National Research Council and Institute of Medicine 2009a and the National Science Foundation’s description of grand challenges, retrieved at www.nsf.gov/sees). These two challenges are closely interrelated and can be addressed coextensively; that is, addressing the first challenge of infrastructural development creates greater capacity for surmounting the second challenge of conducting and advancing translation science which, in turn, can guide further effective and efficient infrastructure development.

An Integrative Conceptual Framework

Defining Terms

We define interventions as programs, policies, and practices encompassing intentional actions (whether a singular action or a constellation of actions), designed for an individual, organization, community, region, or system, which are intended to alter health-related behaviors, address risk or protective factors, and improve health-related outcomes (CDC 2007; Rabin et al. 2008). We subscribe to the definition of evidence-based and the standards of evidence described by the Society for Prevention Research (Flay et al. 2004; 2005—also, see Footnote 1).

As stated above, two types of translation research have been identified (Sung et al. 2003; Rabin et al. 2008). Type 1 translation involves the first three phases in the preventive intervention research cycle model (Greenwald 1990; Mrazek and Haggerty 1994): (1) epidemiology (identification of the problem or disorder and review of information to determine its extent); (2) etiology (identification of risk and protective factors for the problem or disorder as potential targets for preventive intervention); and (3) intervention design to reduce risks, enhance protection, and reduce problems, intervention pilot testing, and efficacy trials.3 T2 translation research can be integrated into or coextensive with this third phase, as well as two additional phases—(4) effectiveness trials with well-defined populations and (5) dissemination and implementation research (Sussman et al. 2006).

Overview of Framework

The ultimate goal of T2 translation research—to enhance public health through widespread use of EBIs—requires an orientation toward broad-spectrum population impact and the systems that support it. This requires coordinated study of effectiveness and efficiency of infrastructures and translation processes or systems, such as methods for preparing individuals and organizations to select and adopt EBIs, quality implementation practices following adoption, and factors that affect sustainability of EBIs across various populations and settings (see Glasgow et al. 2003). To achieve the greatest population impact, we recommend an emphasis on translation during all relevant phases of the research cycle, such that the development and testing of interventions would entail a thorough investigation of context, systems, and other factors that influence pre-adoption processes, adoption, quality implementation, and sustainability of EBIs (Glasgow et al. 2003; Rotheram-Borus and Duan 2003; Sandler et al. 2005), as well as effective collaborations between practitioners and scientists.

In Fig. 1, we present a conceptual framework incorporating key components for advancing T2 translation research—called the Translation Science to Population Impact (TSci Impact) Framework. It characterizes (1) four translation functions, optimally shaped through informational feedback loops across all four phases of translation; (2) the multiple contexts for this work; and (3) necessary infrastructural supports. The framework is grounded in Diffusion of Innovation Theory (Rogers 1995), with its core concept that EBIs diffuse across multiple interrelated stages. The first stage precedes adoption; it entails gaining knowledge about the innovation (among policymakers, practitioners, and the general public), and being persuaded about the innovation’s relative advantages. The second stage involves the decision to adopt the innovation. The next stage is implementation, or putting the innovation into use, and the final stage requires institutionalizing or sustaining its use. Our framework articulates a translation process that occurs across four phases that correspond to many aspects of stages of innovation diffusion, entailing Pre-adoption, Adoption, Implementation, and Sustainability phases. The success of each phase is influenced by wide-ranging factors that warrant systematic investigation (see Table 1). As graphically represented in Fig. 1, an intervention’s research cycle begins with the design and development of preventive interventions. Optimally, wide-ranging interventions targeting all health-compromising behaviors are developed for translation. This starting stage entails application of relevant epidemiology and life course development findings (see National Research Council and Institute of Medicine 2009b, pp. 71–111), along with etiological research that can inform intervention design. Preliminary intervention design is followed by pilot and efficacy testing, including all aspects of Type 1 translation research. 
The identification of efficacious EBIs sets the stage for addressing key functions of T2 translation research, as described in detail below.
Fig. 1

Addressing core challenges for the next generation of Type 2 translation research and systems: The Translation Science to Population Impact (TSci Impact) framework. Notes: EBIs: evidence-based interventions. Research cycle begins with intervention design guided by epidemiological, etiological, and life course development research. It should include T2 translation research integrated into pilot, efficacy, and effectiveness testing; optimally, interventions are developed to cover the full range of health-compromising behaviors across population segments

Table 1

Translation science to population impact framework: translation phases, factors to investigate, and illustrative research questions to address

Pre-adoption

Illustrative factors to investigate:

  • Consumer/provider preferences, marketing
  • Information dissemination factors
  • Packaging of materials and knowledge syntheses

Examples of key research questions:

  • How do various preferences about EBI attributes influence ultimate consumer choices and demand?
  • What are the key channels by which stakeholders obtain EBI information?
  • How do stakeholder networks affect information dissemination?
  • How do stakeholders evaluate the knowledge base or evidence on EBIs?

Adoption

Illustrative factors to investigate:

  • Program/provider decision making
  • Economic benefit analysis
  • Organizational readiness

Examples of key research questions:

  • What are the key market, organizational, and other factors influencing adoption decisions?
  • What are the incentives/disincentives for EBI adoption by various stakeholders (e.g., policymakers, community leaders, service providers, program participants), and how do they affect adoption decisions?
  • How do various types of decision-making tools influence the EBI selection and decision-making process?
  • How does decision-making vary by type of intervention, service system, or community needs?
  • How are cost and other economic data used in the decision-making process?

Implementation

Illustrative factors to investigate:

  • Provider/organization/systems factors
  • Training/technical assistance (TA)
  • Participant factors
  • Fidelity/adaptation

Examples of key research questions:

  • What are the characteristics of EBI stakeholders who are most inclined to implement particular EBIs?
  • What are the most effective delivery systems for specific types of EBIs in different settings?
  • What are the effects of different training and TA methods, including the use of web-based TA technologies, on implementation quality in different populations?
  • How could social media technologies be used to enhance implementation quality?
  • How do the amount, types, and mode of delivery of training and TA affect implementation quality?
  • Do solutions to common implementation problems differ across intervention types?
  • What are the key factors that influence consumers to participate in EBIs, and what are the best strategies for enhancing participation?
  • What are the relative contributions of EBI core components and how do specific adaptations affect outcomes?

Sustainability

Illustrative factors to investigate:

  • Funding/financing strategies and structures
  • Intervention characteristics/costs
  • Organizational/community system factors
  • Supportive policy

Examples of key research questions:

  • In general, what management, motivation, organization, training, and TA factors for organizations and communities lead to greater sustainability?
  • What funding models and financing strategies are most conducive to sustainability?
  • What are effective organizational leadership strategies for nurturing champions for long-term implementation?
  • What national, regional, and state diffusion networks and TA systems can most effectively support sustainability?
  • What policies are most conducive to stable funding streams?

Key Research Areas Defined by Four Translation Functions

Research on the pre-adoption phase focuses on intervention, consumer, provider, and organizational characteristics that could influence the ultimate adoption of EBIs. Optimally, it is conducted early in the intervention development process (see Rotheram-Borus and Duan 2003; Sandler et al. 2005). Pre-adoption research also addresses how information about EBIs is best synthesized and disseminated to policymakers, practitioners, and the general public. For example, pre-adoption factors such as appeal or acceptability of the intervention to prospective consumers, and the feasibility of its implementation (particularly anticipated costs and resource requirements), should be considered. Research also should investigate how to market prevention interventions so that they are aligned with the priorities of collaborating institutions, such as those in the education and health sectors.

Adoption research is the systematic study of factors influencing policymaker, practitioner, and organizational decisions to implement EBIs, including decision-making tools. Adoption factors include prospective financing for new prevention programs; attitudes towards the intervention; incentives for adoption by policymakers, community and organizational leaders, service providers, and participants; loyalty to existing non-EBIs; assessment of economic benefits; and organizational factors (e.g., institutional readiness for change).

Implementation research investigates how specific activities and strategies are used to integrate quality EBI implementation within specific service systems and settings (e.g., community center, school, service agency, primary care clinic—see CDC 2007). Implementation research is distinct from research on intervention efficacy and effectiveness in terms of outcomes, content, and methods (Proctor 2007). Rather than focusing on how an intervention works with defined populations, implementation studies examine a broad range of factors concerning effective implementation. These include: factors in reaching and engaging targeted populations or institutions; factors that influence implementation quality or the extent to which intervention delivery is faithful to the original design and intent of the intervention (e.g., organizational leadership attitudes, staff training and technical assistance (TA) resources, incentives for quality implementation, communication between practitioners and program participants, and organizational climate); the process of incorporating EBIs into existing systems and organizations; and how EBIs are monitored to determine impact on proximal as well as ultimate target behaviors (see Greenberg et al. 2005).

Sustainability research examines how EBIs are maintained or institutionalized over the long term, or expanded within and across specific settings or service delivery systems. These studies examine factors that may contribute to long-term implementation of a single or comprehensive set of EBIs, such as funding availability, organizational capacity and stability, sustainability of community-based implementation teams, and policies that support a functional infrastructure for the intervention (e.g., training, laws, and reimbursements for services—CDC 2007).

Feedback Loops

Our framework includes feedback of information among all phases of research, from the later phases of translation to the more formative phases of intervention development, and back again. For example, refinement of an intervention or its infrastructural supports, including strategies for its adoption and for covering anticipated scale-up costs, can be informed by findings from an effectiveness study. As another example, research on intervention participant preferences concerning modifiable implementation strategies during the sustainability phase (e.g., feedback during this phase about program setting and scheduling choices that might enhance its program appeal) could be used subsequently to inform evaluation of implementation quality. In other words, fostering a flow of information between and across the phases of intervention development, testing, and implementation facilitates evaluation and understanding of the factors associated with sustained, quality implementation (see Fig. 1; see Wilcox et al. 2008 for an illustration).

Multilevel Contexts

The four phases of translation research occur within multiple contexts, ranging from local communities and organizations to national, state, and county governments that ultimately affect the population impact of EBIs (see Fig. 1 arrows indicating interrelationships between contextual influences and translation functions). Our conceptual framework draws upon work by Proctor and colleagues (2009), who posit that EBI adoption and sustained implementation take place within many contexts at a number of levels. At the community level, stakeholder group or organizational preferences and attitudes, along with community coalition functioning, can enhance or impede EBI implementation. In addition, the willingness and capacity to implement EBIs may be influenced by organizational factors, such as financial and other incentives, support from leadership, and organizational climate. At higher organizational levels, governmental budget priorities, policies, and practices (at county, state, and national levels), such as those that mandate or restrict funding to the use of EBIs, also affect EBI adoption, implementation, and sustainability. The effectiveness of translation within and across these contexts at various levels is influenced by characteristics of consumer groups, program providers, organizations, communities, government, and service provider systems.

Later discussion summarizes key context-related factors at multiple levels that should be addressed through T2 research. We recognize that infrastructures are influenced both by the contextual factors represented in the TSci Impact Framework and by each translation function. For this reason, subsequent presentation of research questions will be used to illustrate how to address the complex interrelationships between infrastructures and contextual influences, within and across translation phases (also represented by “looping” arrows in Fig. 1). In addition, we will describe how infrastructure development is necessary to foster positive contextual influences on the T2 translation process.

Systems Orientation in the TSci Impact Framework and Infrastructures

The TSci Impact Framework captures key components of other frameworks that describe T2 translation-related systems. For example, the “Interactive Systems” and other frameworks for dissemination and implementation (Durlak and DuPre 2008; Wandersman et al. 2008) delineate key infrastructures and systems that could serve to support the functions necessary for T2 translation (e.g., training and TA for adoption and implementation; supports at the county, state, and national levels to foster networking for sustained implementation). The Interactive Systems framework also guides specification of actions for fostering quality implementation across all phases of translation (e.g., assessments of capacity for adoption and goodness of fit with stakeholder preferences, development of implementation teams, TA provision, and ongoing process evaluation—see Meyers et al. 2012). Consistent with this approach, TSci Impact considers the extent and quality of infrastructure supports and systems within and between practice and research settings to be critically important. This includes TA supports and practitioner–scientist partnerships for T2 translation, along with financing and related supports from governmental agencies (e.g., braided funding structures).

Another recent framework, produced by a Centers for Disease Control and Prevention Work Group (Wilson et al. 2011), draws upon interactive systems approaches in translating knowledge into action, focusing on the area of chronic disease prevention. It also emphasizes the importance of supporting structures and an orientation toward public health impact, across phases of the translation process. Importantly, this “Knowledge to Action” (K2A) translational process incorporates practitioner perspectives on chronic disease prevention, while also highlighting the need to integrate evaluation research into all phases of translation. The K2A approach fits well with the multiphase T2 scientific enterprise presented here, particularly in its encouragement of embedding research within the practice of translation, especially through practitioner–scientist collaborations.

Future development of the TSci Impact Framework will consider the critical roles of systems in T2 translation that are emerging from recent meetings convened by the Bill & Melinda Gates Foundation on the topic of “Achieving Lasting Impact at Scale.” Many of the innovative ideas emerging to date concern “catalytic” and delivery system mechanisms for translation to impact (Little 2011; Little et al. 2012), with emphasis on the fit between innovations and the systems in which they are intended to be translated for large-scale impact. These and other concepts emerging from the effort (e.g., focus on the need to improve existing health-practice platforms or delivery systems, emphasis on contextual influences, careful attention to consumer needs) also inform our translation science to impact systems framework. This effort focuses on impact at scale globally, as does another useful resource on practical solutions to global translation challenges: a WHO report on systems thinking (De Savigny and Adam 2009).

Next, we summarize the infrastructure supports needed to advance T2 translational research. Figure 2 provides a summary of the two core challenges and related topics, subtopics, and issues that are discussed subsequently. Given the scope of needed infrastructural supports and the numerous related topics, subtopics, and issues to be addressed, this flow chart covers the range of topical content as an initial overview.
Fig. 2

The two core challenges for Type 2 translation research: summary of primary topics, subtopics, and key issues addressed

#1 Core Challenge: Developing Infrastructures and Capacity for T2 Translation

The TSci Impact Framework is intended to focus attention on the need to improve infrastructures that support the uptake and broad implementation of EBIs within multiple contexts. Two interrelated types of infrastructure development are necessary: practice infrastructures to support the delivery of EBIs, and research infrastructure to support and foster inquiry about how to maximize effective EBI delivery.

Practice Supports that Address the First Core Challenge of Infrastructure Development

Needed Practice Supports for Pre-adoption and Adoption Phase

Infrastructure at the pre-adoption and adoption phases includes: (1) systems for consumer and market analyses at the pre-adoption phase; (2) EBI dissemination structures to support adoption decision making; (3) data systems for assessing community or organizational needs; and (4) community-based partnerships engaged in adoption decision making.

Systems for Consumer/Market Analyses

Several researchers have called for prevention scientists to adopt a business systems approach, with development of consumer and market analysis systems that take prospective provider and participant preferences into account from the earliest phases of program development (Kreuter and Bernhardt 2009; Rotheram-Borus and Duan 2003; Sandler et al. 2005). With this type of approach, program developers work closely with prospective consumers and providers to identify the available resources, preferences, needs, and values of the target audience(s); in so doing, they assess prospective adopter capacity to implement EBIs. This approach also entails market analyses and feasibility studies of intervention delivery. Such an approach involves careful consideration of factors that could impede the adoption and successful implementation of the innovation, as well as of the program elements most likely to be redesigned or adapted by users. In addition, it is likely to increase the appeal of EBIs to various types of consumers, thereby increasing the likelihood that EBIs would be adopted and implemented widely (Rotheram-Borus and Duan 2003; Sandler et al. 2005).

Dissemination Information Systems

At the adoption phase, common barriers working against the decision to adopt an EBI include a lack of structures and systems supporting widespread access to valid information about which EBIs work, for whom, and under what conditions. Among other things, addressing these barriers could reduce issues with the perceived fit between a particular EBI and an adopting organization. Detailed information from scientific articles, however, often is difficult for practitioners to access. Information about what works should be published and communicated in a manner that is accessible and easy to understand for specific audiences, yet detailed enough to allow for informed decision making. Progress in addressing these barriers is being made by websites summarizing relevant EBI information, such as the Blueprints for Prevention, the Washington State Institute for Public Policy (benefit–cost information), and the Collaborative for Academic, Social, and Emotional Learning websites, among others.

To date, the application of scientific criteria to identify interventions as “evidence-based” has been inconsistent across the organizations that create registries and lists of specific EBIs. As discussed above, the Society for Prevention Research has proposed that organizations utilize a rigorous set of scientific criteria for identifying EBIs (Flay et al. 2004). Scientists and practitioner communities, however, can have differing views about the appropriate standards to apply in judgments about the degree to which programs or policies are evidence-based. This contributes to information dissemination that is perceived as inconsistent by practitioners and scientists alike. To avoid the perception of inconsistent standards and related confusion, it is critical that the information infrastructure—websites and other channels for dissemination—provide clear information about criteria used in evaluating effectiveness, as well as consistent and valid information about program content, core components, implementation support, appropriate target populations, and human labor and other infrastructure costs associated with program implementation.

In addition to websites focusing on delineation of available EBIs, some agencies and professional organizations have published practice guidelines and recommendations on the selection of EBIs, based on relevant scientific research and considerations of tradeoffs among EBI options. Two examples are the CDC School Health Guidelines to Prevent Unintentional Injuries and Violence (CDC 2001) and the CDC School Health Guidelines to Promote Healthy Eating and Physical Activity (CDC 2011). This type of dissemination approach to increasing EBI adoption also can be helpful and warrants further evaluation.

Prior research has shown that adoption of EBIs is more likely when programs are viewed by key stakeholders as more advantageous than ones currently implemented, compatible with organizational needs and service delivery mechanisms, and relatively easy to deliver (Greenhalgh et al. 2004). It is important to disseminate accurate information about the degree to which EBIs can help to achieve these goals. In providing such information, it will be important to be cognizant of the ways in which different stakeholders (e.g., scientists, practitioners, community leaders, and policymakers) process this information. By further evaluating what types of information influence the decision-making processes, prevention scientists may improve their ability to convey the advantages of using effective prevention strategies (Tseng 2012).

Community Monitoring/Data Systems

Optimally, community decision makers should have access to data systems that use valid and reliable instruments or archival data to generate local epidemiologic data on the risk/protective factors and problem behaviors targeted for intervention. Accurate data can be used to: (1) assess and prioritize prevention needs, pinpointing problems to be addressed; (2) make informed and strategic choices in the selection of EBIs that target elevated risks in the community and strengthen protection where it is weak; and (3) assess program impacts. Objective data of this type ensure that prevention efforts carefully consider the specific needs of the community; they also can increase the support for prevention and the likelihood that EBIs will be adopted and sustained (Fagan et al. 2008; Hawkins et al. 2002).

Community-Based Partnerships for Program Adoption

Development of community-based systems for prevention program adoption has the potential to overcome several of the challenges related to program adoption, including enhancing the “fit” between newly adopted EBIs and local organizations, anticipating local capacity building needs for program implementation, and increasing community support for prevention (Fagan et al. 2011). A key element of a community’s effectiveness in addressing these challenges is the development of effective partnerships or coalitions that can draw upon existing resource systems (e.g., public schools, non-profit agencies, law enforcement, government-linked and private social and health services, religious institutions, and businesses). During the adoption phase, building a strong foundation from which to conduct prevention is dependent on community partners—representing the demographic composition of the community and key community sectors—coming together with a shared vision for desired outcomes and developing a strategic plan for reaching goals that specifies, from the outset, how the EBI will be sustained (e.g., Fagan et al. 2012; Valente et al. 2007). By engaging in these activities, community partners enhance the capacity of collaborating local organizations to conduct prevention activities, actively engage community members and engender widespread support for prevention, enhance information and resource sharing, and minimize duplication of services (Wandersman and Florin 2003). Longitudinal randomized controlled trials (RCTs) have demonstrated that community-based delivery systems and structures can be successful in guiding communities’ selection and adoption of a variety of EBIs that address and “fit” with local prevention needs (Fagan et al. 2009; Spoth, Guyll et al. 2011; Spoth and Greenberg 2011).

Needed Practice Supports for the Implementation Phase

As with program adoption, a range of important infrastructural supports for implementation have been identified (see Fig. 2). Here, we focus primarily on the two interrelated elements of EBI-related training and technical assistance, both of which are considered essential in enhancing implementation fidelity and program outcomes (Durlak and DuPre 2008; Dusenbury et al. 2010; Fixsen et al. 2005). We also address the important issue of engaging prospective community participants in community-based EBIs.

EBI-Related Training and Technical Assistance

EBIs typically require staff trained in the content and skills needed for effective delivery. Initial training sessions for program providers must promote consistent and full implementation of the program as designed and tested. Ongoing implementation training should be offered on a regular basis in order to train new staff as well as provide continuous support to providers who have already started program implementation (Fagan and Mihalic 2003; McHugh and Barlow 2010). This type of ongoing training of community team members and key leaders is important in ensuring successful and sustained community prevention infrastructures (Feinberg et al. 2002; Fagan et al. 2012).

Implementation Technical Assistance Systems

TA typically follows initial training sessions and is necessary for both providers of specific EBIs and members of community-based prevention delivery systems. Although research on the effectiveness of various TA models is largely lacking, a review of available research suggests that TA should be proactive, as prevention program providers may not always self-identify the need for such services, may be reluctant to ask for them, or may lack the resources to pay for them. Both training and TA should focus on helping implementers understand the need to monitor prevention activities and learn how to do so using implementation fidelity monitoring tools and information systems. In community-based models, TA can help team members assess their internal functioning (e.g., turnover in membership, leadership, shared decision making, planning for sustainability) and evaluate their progress in meeting goals or benchmarks (Feinberg et al. 2007; Spoth 2008). To be optimally effective, such services must be continuous and responsive to the level of functioning and developmental stage of the community team, and provided to all individuals involved in the effort (e.g., team leaders, staff, and membership—Mitchell et al. 2002).

TA providers can be program developers or master trainers located internal or external to the implementing site (Fixsen et al. 2009). The “scaling up” of some EBIs has involved the development of “purveyors,” individuals or groups who represent the EBI and provide TA to help local organizations implement the intervention with fidelity (Fixsen et al. 2005). One example of a successful purveyor group is the national office that provides training, TA, and program monitoring services for quality implementation of the Nurse Family Partnership program (Olds 2002). In addition to provision of TA on-site by trainers, other mechanisms that can be effective are off-site TA sources such as written feedback on videotaped sessions or just-in-time assistance via email or phone (see Feinberg, Ridenour, and Greenberg 2008; Mihalic and Irwin 2003; Rohrbach et al. 2010; Spoth et al. 2007a, b).

Supports for Engaging Prospective Community Participants

Failure to engage community participants is a major impediment to the effective implementation and scaling up of EBIs. The public health impact of EBIs cannot be achieved unless a large percentage of the target population for the intervention is engaged (Glasgow et al. 2004). Methods to reach the entire target population and overcome barriers to participation are essential, including engaging populations that are less likely to use preventive services (e.g., rural, inner city, and vulnerable populations; see Hawkins and Salisbury 1983). For example, recruiting universal populations into parent training interventions has been a major challenge impeding the widespread dissemination of family-focused interventions (Frolich and Potvin 2008; Haggerty et al. 2002; Spoth and Redmond 2000, 2002; Spoth 2008). It is paramount to engage subpopulations that have limited awareness of or access to EBIs, in order to reduce preventive health service inequities. It is a priority to determine successful methods for increasing participation of harder-to-reach consumers in EBIs, such as parents of high-risk youth, whether in rural or inner-city neighborhoods. Support for identification of common barriers to regular attendance in prevention program settings, and of innovative and low-cost solutions to these problems, could help increase the reach of EBIs.

Needed Practice Supports for the Sustainability Phase

Sustainability supports and systems are necessary for long-term maintenance of quality implementation of EBIs, across specific settings and service delivery systems. Key components of related practice infrastructures include continuous training and TA—oriented toward a benchmarked developmental process fostering sustainability and incorporating quality improvement systems for implementers—plus funding or financing structures. In designing these components, the first order of business is an improved framework for the multiphased development of sustainability infrastructure.

Multiphased Infrastructure Development

There are phases of development of specific practice infrastructures that support movement toward full sustainability of both individual EBIs and service delivery systems (Chinman et al. 2004; Hawkins et al. 2008; Livit and Wandersman 2004; Spoth and Greenberg 2005; Stevenson and Mitchell 2003; Wandersman et al. 2008). The phases of infrastructure development described in this literature overlap with the implementation phase addressed above, and often include initial implementation, expansion of implementation, sustainability strategic planning, and strategic plan implementation. These phases are part of a transactional, iterative process (Scheirer 2005; Shediac-Rizkallah and Bone 1998). Institutional supports that improve the sustainability of EBIs have not been widely researched, though emerging evidence suggests that some of the same factors that affect adoption and implementation quality also impact sustainability (August et al. 2006; Gruen et al. 2008; Kalafat and Ryerson 1999; Scheirer 2005). Studies of program delivery by community-based prevention teams have identified team functioning, shared resources, and effective planning as related to the sustainability of EBIs, as well as the sustained functioning of the teams themselves (Brown et al. 2010; Feinberg, Bontempo, and Greenberg 2008; Scheirer 2005).

TA Oriented Toward Continuous Quality Improvement and Benchmarking

Recent research has suggested that the provision of TA to service delivery systems at state and local levels can support sustainability (Spoth and Greenberg 2011). The provider support organizations noted in the prior section are designed for sustainability; state systems also can serve this purpose (e.g., Land Grant University Extension outreach systems supporting practitioner–scientist partnerships). Research in Pennsylvania has illustrated how a state-level TA system can support quality implementation and sustainability by using total quality management findings to improve TA in the context of a benchmarking process (Bumbarger and Campbell 2012; Rhoades et al. 2012; Tibbits et al. 2010). Other infrastructure supports and resources that affect sustainability include organizational capacity, data systems, and effective structures for addressing leadership turnover (August et al. 2006; Gruen et al. 2008; Scheirer 2005). As discussed above, data systems not only assist communities in making strategic choices regarding intervention targets but also are critical to monitoring and quality control during program implementation and evaluation of intervention outcomes, as well as to benchmarking progress.

Financing Structures and Strategies

Importantly, effective communication of findings from the previously discussed data systems can aid in generating enthusiasm for EBIs and in securing funds to sustain prevention activities. Notably, financing structures and strategies to support sustainability are critically important and benefit from the policymaking discussed in the last section of this paper.

Practice Supports Through Community-Based Delivery Systems—Longitudinal Evidence

In this context, it is important to note that randomized controlled trials of community-based delivery systems designed to support all of the phases of EBI translation have demonstrated that such systems can result in sustained, quality EBI implementation and multiyear community-level reductions in youth problem behaviors, including substance misuse, delinquency, and violence (Hawkins et al. 2009; Hawkins et al. 2012; Miller and Hendrie 2008; Redmond et al. 2009; Spoth 2007; Spoth et al. 2007c). There also is evidence that the numerous developmentally appropriate EBIs from which communities can select for implementation through these infrastructures are cost-beneficial (Drake et al. 2009).

As an example, the Communities That Care (CTC) model is one that creates its own community-based infrastructure (Hawkins et al. 2009). An evaluation of the CTC model demonstrated significant reductions in the initiation of smoking and drinking, prevalence of past-month tobacco use, and past-year delinquency and violence for students in intervention compared to control communities 5 years following the adoption of the CTC model in intervention sites (Hawkins et al. 2012). The model also has been shown to be cost-effective (Kuklinski et al. 2012).

Another example is PROSPER (PROmoting School–community–university Partnerships to Enhance Resilience), a practitioner–scientist partnership-based state prevention system for effective delivery of EBIs. It is based in states’ existing land grant university outreach systems linked with public school systems. It has demonstrated significant effects in reducing the initiation and prevalence of adolescent conduct problems and substance misuse, including marijuana, methamphetamine, and prescription drug misuse, 6.5 years past baseline (e.g., Spoth et al. 2011; Spoth et al. 2012a, b). PROSPER also has been shown to be cost-efficient (Crowley et al. 2012) and cost-effective (Guyll et al. 2011), sustaining implementation quality over time (Spoth et al. 2011).

Needed Practice Supports for Underserved Populations Across All Phases

Community entities potentially served by preventive interventions include school districts, neighborhoods, towns, and tribal jurisdictions; they can be located in urban, suburban, and rural areas, many of which are underserved, particularly those with lower-income populations. Although each community brings a unique set of prevention needs, infrastructure supports, and challenges to the process of translating EBIs, the challenges of infrastructure development in underserved areas can be substantial. For example, a recent report issued by the Health Resources and Services Administration describes health inequities in rural areas, which reflect gaps in infrastructure development for health services and prevention (U. S. Department of Health and Human Services, Health Resources and Services Administration 2011).

Inner-city neighborhoods also are underserved and provide another case in point. Residents of inner-city neighborhoods often are highly mobile, which impedes the provision of consistent and ongoing prevention services and reduces the availability of human capital to support such services (see www.promiseneighborhoods.org; Komro et al. 2011). In addition, service staff turnover in these neighborhoods can be high, making it more difficult to create and sustain change. Given the population size and density of high-risk urban areas, prevention activities in these areas will likely necessitate considerable resources spread across multiple neighborhoods/schools and substantial commitments (e.g., funding matches) from the larger municipal government or school district, in order to be effective. Support for prevention may be difficult to garner if the immediate concerns of residents and policymakers are not addressed during the EBI pre-adoption and adoption phases. These concerns include economic development, neighborhood safety, and affordable housing, which are not necessarily tied to prevention strategies (e.g., see www.promiseneighborhoods.org).

Considering the above conditions, efforts to build infrastructure for implementation of EBIs in inner-city neighborhoods are likely to be more time-consuming, and should be more focused on extensive partnership building, than might be required in suburban areas and smaller cities or towns. Resources and funding to support these types of efforts include those from the U.S. Department of Housing and Urban Development (for researchers collaborating with community-based organizations that attempt to reduce the challenges experienced in urban environments, see portal.hud.gov/hudportal/HUD?src=/hudprograms/toc) and The Center for Faith-Based and Neighborhood Partnerships in the Department of Labor (for initiatives related to prevention, see partnerships.workforce3one.org/page/resources/1001118233606726797). Good examples of effective partnerships between EBI purveyors or researchers and urban-based practitioners, along with lessons learned from those partnerships, are provided by the Chicago Parenting Program (Gross et al. 2009) and the Good Behavior Game (implemented in a range of urban communities—see Kellam et al. 2011).

Research Supports that Address the First Core Challenge of Infrastructure Development

Basic Research Supports

Parallel to and in coordination with the development of infrastructure for evidence-based practice, successful prevention efforts require the development of infrastructure for practice-oriented research (see Fig. 2 and Table 1). Fundamentally, the infrastructure for prevention-related translation research must entail support for rigorous single-site dissemination, adoption, implementation, and sustainability research projects. Optimally, it also would support multi-site projects, centers, or research networks that combine the interdisciplinary skills of academics across research institutions, and effective data sharing systems. One possibility to be considered is a Multisite Transdisciplinary T2 Research Network supported by the NIH, as was recently presented at a National Institute on Drug Abuse strategic planning meeting (Spoth 2012). Advantages of such a research network have been noted, including: (1) support for T2-related system science grounded in “real world” prevention delivery systems, and also linked with Type 1 translation research; (2) increased capacity to address gaps concerning high-priority research questions; (3) support for innovative study of new social media technologies; (4) support for data sharing projects; and (5) the creation of new scientist training opportunities (also see Greenberg et al. 2009).

Another key need is to develop a workforce of scientists who will focus their research on translation issues. Currently, there are few federally-funded institutional training grants focused on T2 translation issues and very few university-based programs devoted to training in T2 translation research; training often entails application of a “learn as you go” or apprentice model (National Research Council and Institute of Medicine 2009a). Increasing numbers of NIH career development grants have been helpful in this regard (National Research Council and Institute of Medicine 2009a); in addition, CDC-funded Prevention Research Centers are facilitating this work through training in an empowerment evaluation approach designed to integrate evaluation into ongoing service delivery (Cox et al. 2009). Also, some of the NIH Clinical and Translational Science Awards are taking steps to improve T2 translation-related training. To nurture the next generation of T2 translation scientists, training must entail a broad curriculum that includes the basics of prevention research, integrated with community development, health communications, marketing and other business models, and the use of new technologies, placing a strong emphasis on research methods and analytic skill development (both qualitative and quantitative), as described in the Society for Prevention Research document on standards of knowledge for the science of prevention and related training resources (see www.preventionresearch.org/advocacy). Finally, it is worth noting that additional workforce development recommendations across all relevant areas of prevention science and public health can be found in the 2009 National Academies report (National Research Council and Institute of Medicine 2009a).

Needed Supports for Building Practitioner–Scientist Partnerships and Networks

Practitioner–scientist partnerships could serve as primary vehicles for enhanced translation of science into practice. New infrastructures could help to foster practitioner–scientist consortia or partnerships involving broader collaborations among communities, consumers, practitioners, and prevention scientists (Spoth and Greenberg 2005). Common areas of need and interest include how to most effectively create systems-level change, build alliances across institutional settings, and diffuse EBIs with fidelity, while allowing for local input and adaptations that ensure effective, sustained, and institutionalized EBIs (Valentine et al. 2011).

Examples of Research Consortia and Practitioner Partnerships

An example of a new type of infrastructure is the Promise Neighborhoods Research Consortium, funded by NIDA (see Komro et al. 2011; www.promiseneighborhoods.org). Composed of a team of prevention scientists from multiple research institutions, this consortium provides scientific infrastructure and support for initiatives to promote health and well-being within distressed neighborhoods. The consortium supports individual scientists who are collaborating with individual neighborhoods; synthesizes research about what works in prevention; helps neighborhoods to build strong teams for prevention and apply for prevention funding; fosters networks of communities addressing similar problems; and provides a user-friendly guide to data collection, management, and analysis. Other examples include the aforementioned evidence-based PROSPER practitioner–scientist partnership delivery system (Spoth and Greenberg 2011), which is applying web technology to the creation of a network of partnerships, and the CTC coalitions (Fagan et al. 2011; Hawkins et al. 2012).

Steps in Developing Practitioner–Scientist Partnerships

Much has been written about the challenges and benefits of collaborations between practitioners and scientists (e.g., Price and Behrens 2003; Spoth and Greenberg 2005; Wandersman and Florin 2003 in a special issue of the American Journal of Community Psychology devoted to the topic). A key barrier to forging these collaborations is the “natural tension” between the two groups concerning their respective motivations, goals, and methods. For example, developing community-based research efforts introduces additional burdens for practitioners. Potential differences in world view, scientific values, and methodology also have been noted as sources of tension. Such differences can adversely affect practitioner–scientist collaboration and the workforce development required for T2 research. It also should be recognized that this tension is continuing and dynamic, requiring awareness and attention throughout all phases of practitioner–scientist partnerships.

A noteworthy first step in reducing tensions between practitioners (broadly defined here to include all stakeholders in practice settings) and scientists is to identify common ground and goals of interest to both scientists and community practitioners or stakeholders early in the collaborative process (Price and Behrens 2003; Wandersman and Florin 2003). Scientists have made suggestions for reaching common ground, such as addressing differences in methodological orientations through ongoing dialogues between scientists and community practitioners (Kelly 2003); using community leadership development to promote an approach that dynamically integrates community action with theory development (Price and Behrens 2003); and incorporating role flexibility into the research through shared decision making about EBIs (Pentz 1986; Pentz 2007). Furthermore, collaboration may be improved by researcher support of community-initiated adaptations to EBIs that are assessed through careful measurement of change to produce continuous quality improvements. Finally, there is a need to establish equality in partnerships between practitioners and scientists, with both groups valuing the skills and knowledge that each brings to the collaboration (Holder et al. 1997). For example, Mold and Peterson (2005) describe a practice-based research approach whereby shared priorities, such as quality implementation, are emphasized and the process is not assumed to be best led or owned by scientists.

Integrated, Practice-oriented Research Frameworks

To forge practitioner–scientist links, a synthesis of two contrasting approaches to community prevention research frameworks is needed: prevention science and collaborative community action research, often termed community-based participatory research (Coie et al. 1993; Minkler and Wallerstein 2002; Rappaport 1990; Weissberg and Greenberg 1998). On the one hand, clinical trial methodologies advocated by prevention science are needed to provide an evidence base for identifying program effects on targeted outcomes. Clinical trial outcomes often inform researchers about variables to address as they work within community/organizational settings to design ecologically valid, contextually responsive programs. On the other hand, collaborative community action research is likely to provide rich accounts of how culture, context, local decision making, and history influence both model development and implementation of EBIs. A synthesis of these approaches could promote further learning between scientists and practitioners across all phases of T2 translation.

Weissberg and Greenberg (1998) have emphasized the substantial importance of both perspectives and the need to integrate the values and methods of both approaches. It is likely that one approach may have more relevance at one phase and the second model may provide greater value at a different phase (e.g., prevention science during a trial phase vs. community-based participatory research during the pre-adoption phase). In each local partnership, the balance of these approaches may be different, and partnerships need to consider procedures for arbitrating sometimes conflicting perspectives. Studying these various ways of balancing approaches may provide new solutions to addressing the complexities of partnership-related research and the standards about what constitutes “evidence-based,” including addressing past dialogues about the focus on EBIs vs. “best practices” (Green 2001; Nation et al. 2003; Spoth and Molgaard 1999).

Core Challenge #2: Needed Scientific Advances for the Next Generation of Translation Research

A second key challenge to the prevention field is the need to advance the science, including clearer conceptual frameworks and a related research agenda that will address priority T2 questions specific to the pre-adoption, adoption, implementation, and sustainability phases of translation of EBIs in diverse communities and service settings (see Table 1).

Elaborated Research Cycle

Prevention research has tended to proceed in a linear fashion, from epidemiological and etiological studies, to intervention design and efficacy testing, to effectiveness and dissemination trials. However, as noted in the introduction, there are discontinuities on the continuum between epidemiology and intervention dissemination, and not all research proceeds temporally from efficacy to effectiveness to dissemination (Marchand et al. 2011). Thus, we recommend a broader, more cyclical view of the factors that influence the formative processes preceding adoption, adoption itself, quality implementation, and sustainability of prevention interventions across all stages of the intervention cycle, incorporating what is learned through the advanced phases into the earlier ones (see Fig. 1 feedback loops). One example of this approach is a Robert Wood Johnson Foundation study entailing systematic feedback about translation issues from practitioners throughout the process of implementing physical activity interventions (Wilcox et al. 2008).

T2 Research Embedded in Effectiveness Studies

To set the stage for addressing critical T2 research questions, more effectiveness trials are needed to evaluate the range of preventive interventions that: (1) can be integrated into service systems; (2) provide a good fit with the resources, norms, and typical practices of these service systems; and (3) produce effects that can lead to population impacts. It is critically important that effectiveness research, or hybrid efficacy/effectiveness trials, be conducted with a wide range of population subgroups and in various types of settings, to evaluate whether an intervention works with broad populations and conditions. Incorporating T2 research studies into larger efficacy and effectiveness studies can facilitate an understanding of how EBIs are most effectively disseminated, what factors influence decisions about selection and adoption, how EBIs are integrated into a wide range of service settings, what factors are most critical to quality implementation, and how prevention infrastructure and capacity are best developed to foster and sustain prevention efforts. Answers to key T2 translation questions are essential to ensuring that prevention research and practice are integral parts of the public health infrastructure of the U.S.

In the next section, we describe the key research questions within the phases of T2 translation that need to be addressed (see Table 1). Importantly, many of these research questions address specific context-related influences on translating EBIs into practice, as indicated in Fig. 1. Beyond that, in the last section of the paper, we separately address the potential positive “contextual” influences of coordinated governmental actions at the federal and state levels, along with the key role of those actions on infrastructure development.

The T2 Translation Research Agenda

Key Pre-adoption Phase Research Questions

Broadly speaking, T2 research must address questions such as: How do consumer preferences (of individuals and organizations) about various EBI features or attributes influence their ultimate intervention choices and demand for EBIs? (See Table 2 for illustrative literature reviews of empirical studies; also see Cunningham et al. 2009; Spoth and Redmond 1993 for examples of consumer preference studies and market simulations.) Research also can be used to support adoption of EBIs to fit a new ecology. For example, conducting parent preference research would be advised in adopting a parenting program for use with deployed military families. Another question concerns the most important factors that predispose consumers to make decisions to adopt EBIs. As noted earlier, a market orientation for developing and disseminating interventions may be useful in understanding adoption decisions (Kreuter and Bernhardt 2009; Pentz 2004; Sandler et al. 2005). Future research should examine how marketing principles such as segmentation of the target market, product positioning (i.e., branding, pricing, and promotion), identifying competing needs for limited resources, assessments of consumer and provider preferences, and the packaging of key ingredients into simpler, less intensive interventions might be applied to the improved translation of EBIs.
Table 2 Translation science to population impact framework: illustrative critical reviews addressing high priority research questions

- Pre-adoption. Selected high priority research question: How do various preferences about EBI attributes influence ultimate consumer choices and demand? Illustrative critical reviews: Sandler et al. (2005) articulate a business systems approach to the development of prevention programs and services that are carefully integrated with the prevention research cycle. Emphasis is on marketing strategies, including concept development, feasibility analyses, prototype development/testing, and market testing (branding, pricing, promotion), to assure that programs are both effective and widely adopted. Also see the Greenhalgh et al. (2004) review on diffusion of innovations in service organizations, concerning system antecedents and readiness for innovations, and the Rotheram-Borus et al. (2012) review of the model of disruptive innovations and market strategies to promote and diffuse EBI science.

- Adoption. Selected high priority research question: How are various types of evidence used by decision makers in their adoption decision making? Illustrative critical reviews: Tseng (2012) critically reviews literature on the ways policymakers, program administrators, and practitioners define, acquire, interpret, evaluate, and apply different types of evidence, including how cost and other economic data are used in the decision-making process. As an illustration, Tseng summarizes a study of school board members by Asen et al. (2012). This study showed that the school board members used six types of evidence (research studies, firsthand knowledge, testimony, measurable quantitative/qualitative data, case incidents, and law/policy); research was used very infrequently, and it was not used in the ways intended by policymakers or researchers.

- Implementation. Selected high priority research question: Which systems factors are most important in quality implementation of specific EBIs? Illustrative critical reviews: Fixsen et al. (2005) present a comprehensive review of the implementation science literature, focusing on relevant components and conditions of implementation (based on empirical studies, meta-analyses, and prior literature reviews dating back to 1970). Durlak and DuPre (2008) summarize results of meta-analyses and other empirical studies underscoring the conclusion that level of implementation quality affects outcomes, identifying at least 23 key contextual factors that influence it.

- Sustainability. Selected high priority research question: What funding models and financing systems are most conducive to sustainability? Illustrative critical reviews: Scheirer and Dearing (2011) review the literature on sustainability as a basis for constructing a conceptual framework to guide future sustainability research, highlighting the primary sets of sustainability factors affecting the full range of key outcomes of relevance. Also see the Langford et al. (2012) review of financing projects, strategies, and structures; it provides examples of a number of state-level initiatives to strategically improve investments in evidence-based prevention.

- Crosscutting all phases. Selected high priority research question: What policy changes will yield the strongest effects in T2 translation research and practice? Illustrative critical reviews: Biglan and Taylor (2000) critically review literature on the contribution of advocacy and policy change to the translation of science into practice, illustrating how that occurred in the case of the population-level reduction in tobacco use in the U.S. They argue that evidence-based advocacy and policy change also could be effective in other areas, like prevention of violence. Pentz et al. (2004) complement this review with a conceptual model representing multiple perspectives and contexts (e.g., political, public health, education) for shaping public policy.
Studies on effective strategies for disseminating information about EBIs, and on how stakeholders ultimately integrate information into adoption decisions, remain major gaps in translation research. First, there is a need to investigate the channels through which different stakeholder groups receive general information about EBIs and their implementation, and the effects of stakeholder networks on information dissemination. Social network methodologies, in particular, may prove useful in understanding how stakeholder networks affect EBI information dissemination (Greenhalgh et al. 2004; Valente 2012). Second, there is a need for research on the role of evidence in early stages of decision making about prevention programs (Tseng 2012). The widespread use of EBIs requires that key stakeholders value evidence-based approaches over other types of interventions. Also, when stakeholders do value evidence-based approaches, two questions warrant further investigation: what they consider to be "evidence" and how they evaluate the evidence for a given intervention (e.g., Granger 2008).
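To illustrate how social network methodologies might be applied to the dissemination question above, the sketch below computes degree centrality for a small stakeholder network; actors with high centrality are candidate hubs through which EBI information could spread. The actor names and ties are entirely hypothetical, invented for illustration rather than drawn from any study.

```python
# Hypothetical sketch: identifying dissemination "hubs" in a stakeholder
# network via degree centrality. Real studies would use surveyed
# communication or referral ties among stakeholders.
from collections import defaultdict

# Undirected ties: who exchanges EBI information with whom (hypothetical).
ties = [
    ("state_agency", "county_office"),
    ("county_office", "school_district"),
    ("county_office", "community_coalition"),
    ("community_coalition", "service_provider"),
    ("school_district", "service_provider"),
    ("state_agency", "community_coalition"),
]

def degree_centrality(edges):
    """Fraction of the other actors each actor is directly tied to."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)
    return {node: len(nbrs) / (n - 1) for node, nbrs in neighbors.items()}

centrality = degree_centrality(ties)
# Actors with the highest centrality are candidate channels for
# disseminating EBI information to the rest of the network.
hub = max(centrality, key=centrality.get)
```

In practice, researchers would use a dedicated library (e.g., NetworkX) and richer measures such as betweenness or diffusion simulations, but the basic logic of locating well-connected actors is the same.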

Key Adoption Phase Research Questions

Future research is needed to understand the processes stakeholders use in the selection of EBIs, including evaluation of the types of evidence-related information influencing decision-making processes (see Table 2 for illustrative literature reviews of empirical studies). For example, how do decision makers make use of data on risk and protective factors in the selection of an EBI? What is the effectiveness of various types of decision-making tools (e.g., see the Strategic Prevention Framework, www.samhsa.gov/prevention/spf.aspx)? How do groups of stakeholders reach a consensus about adopting an EBI? And how does the adoption decision-making process vary by type of intervention, service system, or community needs?

A key question concerns why some stakeholders—including policymakers, community leaders, organizational leaders, service providers, and consumers—appear more amenable to adopting EBIs, while others are more resistant. A related question concerns what accounts for these differences. For example, what are the incentives for EBI adoption by various stakeholders? Among those organizations that have overcome specific barriers, what strategies have they employed? Adoption of any new practice involves complex organizational behavior changes. Understanding these complexities can aid in designing the necessary tools to support adoption.

Cost is a major adoption consideration for consumers; thus, the availability of funding streams and the methods used to access them are important to investigate. This may begin with an investigation of existing prevention funding in communities and states, as well as models for identifying and accessing additional stable and single-purpose revenue streams. In addition, questions about the economic benefits of many EBIs remain unanswered. More economic analyses (cost-effectiveness, benefit–cost, and cost efficiency) are needed to facilitate decision making about EBIs and to persuade consumers, policymakers, and funders of the overall benefits of investing in prevention programming. Further, a better understanding of where and when economic benefits are realized, including the specific systems to which costs and benefits accrue, is important in order to advance strategies for financing EBIs. For example, school districts may invest in an evidence-based obesity or tobacco prevention program, but the benefits accrue to the individuals involved, taxpayers, and the health care system, and not necessarily to the school districts that made the original investment. How initial investments can be recouped is an important T2 research question (see Woolf et al. 2009).

In cases where economic analyses have not been conducted, data on the actual costs of implementing a given EBI are essential; program developers should routinely measure cost-related variables to assist consumers in making decisions, with or without a benefit–cost analysis. There clearly is a need for cost tracking and analysis tools that facilitate reasonably accurate cost projections and are practically feasible, such as cost savings estimators for policymakers (see www.paxis.org/triplep), along with study of the benefits of their application. User-friendly cost tracking tools would help increase the availability of cost information that is critically important in community and organization administrators' decisions about initiation and continued implementation of EBIs.
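A minimal sketch of the kind of arithmetic such a cost tracking tool might perform is shown below. The budget categories, dollar amounts, and benefit estimate are all hypothetical; a real estimator would draw its benefit figures from validated economic analyses and would typically discount multi-year benefits.

```python
# Hedged sketch of a simple cost-tracking and benefit-cost calculation.
# All figures below are invented for illustration only.

def total_cost(cost_items):
    """Sum implementation costs across budget categories."""
    return sum(cost_items.values())

def benefit_cost_ratio(total_benefit, total):
    """Benefits returned per dollar invested (simple, undiscounted)."""
    return total_benefit / total

# Hypothetical first-year costs for one EBI at a single site.
costs = {
    "curriculum_materials": 4000.0,
    "facilitator_training": 6000.0,
    "staff_time": 12000.0,
    "technical_assistance": 3000.0,
}

total = total_cost(costs)                    # 25000.0
ratio = benefit_cost_ratio(62500.0, total)   # hypothetical benefit estimate
```

Even this simple structure makes the key decision inputs explicit: which categories drive cost, and what benefit estimate would be needed for the investment to break even for the system paying the costs.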

In considering these research questions and issues, it is noteworthy that Rotheram-Borus and colleagues suggest that the prevention field reorient its approach to EBI diffusion and apply the business model of disruptive innovations. These researchers make the case that this model yields simpler, less expensive, and more accessible versions of interventions (comprised of key elements) that would be diffused more broadly and more quickly (Rotheram-Borus et al. 2012). In the future, tests of this approach should examine whether streamlined versions of interventions achieve impacts comparable to those of the more complex versions.

Key Implementation Phase Research Questions

Although previous reviews of the program implementation literature (Dane and Schneider 1998; Domitrovich and Greenberg 2000; Durlak 1998) indicated an absence of significant attention to this issue, implementation has recently become a greater focus of research efforts, and there now is a developing "implementation science" (Fixsen et al. 2005). As a result, there is an increasing knowledge base regarding the wide variety of factors that influence implementation and its quality. The research clearly demonstrates that better quality implementation leads to improved outcomes for children (Durlak et al. 2007). However, many questions about how to improve and maintain quality implementation remain. Three conceptual models that may assist in guiding further research questions include those of the National Implementation Research Network (Fixsen et al. 2005), the school ecological model (Greenberg et al. 2005), and the RE-AIM model (Glasgow et al. 2004). Drawing on these models, specific areas of needed research in this emerging sub-field follow.

Factors Influencing Implementation Quality

As suggested earlier, more research is needed on factors that influence implementation quality, particularly studies on provider and organizational characteristics (e.g., organizational staff and leadership characteristics, agency mission alignment, resources, tolerance for change, practitioner–client communication, organizational climate—see Domitrovich and Greenberg 2000; Glisson et al. 2008). Another important issue for further research is how characteristics of the EBI itself affect implementation quality. Previous studies suggest that EBIs that are well specified in regard to setting, qualifications of program providers, content, and methods are more likely to be implemented with quality than those that are less explicit (Rohrbach et al. 2006). Research is needed on which elements of EBIs enhance implementation quality, and how those elements may be incorporated into intervention design. In addition, research is important on the use of social media technologies (e.g., Facebook or text reminders and cues, smartphone applications) to enhance engagement in EBIs and effective completion of EBI-based homework.

Although providing practitioners, organizations, and communities with effective training and TA has been linked to enhanced implementation quality and targeted outcomes, the amount, types, and mode of delivery of training and TA needed to achieve success warrant further attention (Rohrbach et al. 2006). For example, trials might examine the delivery of training and TA face-to-face relative to alternate delivery modes, such as by phone, email, or web-based applications; whether the amount and kind of TA support differ between start-up and maintenance phases of EBI implementation; and how fidelity and/or outcomes differ with the presence or absence, amount, and timing of TA (e.g., Rohrbach et al. 2010).

Previous research suggests that community organizations are reluctant to access training and TA, but the reasons for this reluctance are unclear. Is it the content, method of delivery, cost, time involved, or timing that is least appealing? Is resistance to training and TA related to market factors, such as the availability of some EBIs as stand-alone products that can be purchased without required training? Identification of such barriers could help providers of TA better construct and direct their services. In this connection, a major question is: How do providers of different types of interventions (e.g., universal vs. indicated programs, or school curricula vs. delinquency prevention programming) address different training, TA, and implementation challenges? In this vein, while it is important to recognize the unique needs at each phase for differing intervention models, an important question is whether there also are common solutions to common implementation problems across intervention types (see Bumbarger and Perkins 2008; Dariotis et al. 2008; Wandersman et al. 2008).

Implementation Quality and Delivery Systems

Further research on prevention delivery systems also is needed to determine effective strategies for integrating EBIs within existing service delivery systems (e.g., the Women, Infants and Children program); creating delivery infrastructures for use in existing systems (e.g., PROSPER—Spoth and Greenberg 2011; Spoth et al. 2004); and developing new systems that build infrastructure for prevention program delivery (e.g., CTC—Fagan et al. 2011; Hawkins et al. 2008; Quinby et al. 2008). Given limited EBI implementation and the potential of EBIs for achieving widespread change, more prevention delivery systems are needed (see Table 2 for illustrative literature reviews of empirical studies). Another key question is: What are the most effective delivery systems for specific types of EBI delivery in different types of settings? Further research is required to evaluate existing EBI dissemination and delivery systems such as those funded by SAMHSA (see Schinke et al. 2002) as well as school-based delivery systems (e.g., Sprague 2007). Prevention trials addressing these delivery systems questions could provide opportunities for examining methods to enhance collaboration between scientists and practitioners and studying the challenges of shared decision making, the characteristics of effective organizational teams, and how collaborative structures share information, delivery tasks, and resources. The use of evidence mapping methodologies to quantify the nature and distribution of EBIs also could be helpful in this regard (Callahan et al. 2011).

Further investigation can help to illuminate optimal strategies for assessing and increasing organizational capacity to deliver various types of EBIs. While research has supported the general relationship between organizational capacity and program implementation (Durlak and DuPre 2008), studies should be conducted to compare different strategies for improving organizational practices related to decision making, training, management of staff turnover, internal and external communication, and staff supervision for prevention programming (Rohrbach et al. 2006). Schools are a primary access point to children; thus, they require careful study. However, there are multiple levels and types of practices in school systems, from curriculum to financing, to training and accountability assessment, and EBI implementation requires buy-in at each of these levels. To date, there has been little research on how to most effectively gain access to schools for prevention evaluations or to facilitate the changes necessary in these systems to ensure high-quality implementation across many schools and classrooms (Greenberg 2010).

Implementation Fidelity and Adaptation

Identifying the core components of EBIs to be delivered and the optimal methods of their delivery is important to inform decisions regarding program adaptation. Currently, there is debate among prevention scientists regarding if, when, and how to adapt interventions during local replications. While there is consensus that components deemed critical to success should not be altered (Valentine et al. 2011), there is a growing concern that without an efficient strategy for updating EBIs and effective methods for adapting them to local circumstances, the population impact of EBIs might be reduced (Rotheram-Borus et al. 2012). For most EBIs, studies of the relative contribution of individual components to the overall EBI effectiveness have not been conducted. Thus, research is needed on how EBIs work in order to identify their key ingredients. It also is important to study adaptations, to understand whether and how they affect outcomes, including the types of changes that enhance or diminish effectiveness. Accordingly, replications that test various program components can both determine "core components" and guide user adaptations. Studies should address questions about how and with whom adaptations occur in natural settings, what parts of programs tend to be adapted most often, reasons for these adaptations, and the effects of these adaptations on expected outcomes. This research should focus on adaptations that may occur at the organizational or community level (e.g., adapting interventions for a specific cultural group, Castro et al. 2004), as well as those that may occur at the level of the individual program provider during program delivery (e.g., O'Brien et al. 2012). Exploration of these questions could be facilitated by better methods for incremental tracking of adaptations as they occur and analysis techniques for determining optimal adaptations.
Longitudinal research is needed to examine how levels of implementation fidelity and adaptation vary over time (Berkel et al. 2011).
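One way the incremental tracking of adaptations described above might be structured is sketched below. The log fields, program name, and example entries are assumptions for illustration, not a published coding scheme; a fielded tool would also capture dates, sites, and fidelity ratings.

```python
# Hedged sketch of an incremental adaptation log: each local change to an
# EBI is recorded with who/what/why details so analysts can later relate
# adaptation patterns to fidelity and outcomes. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class AdaptationLog:
    program: str
    entries: list = field(default_factory=list)

    def record(self, session, component, change_type, reason):
        """Append one adaptation event as it occurs."""
        self.entries.append({
            "session": session,
            "component": component,
            "change_type": change_type,
            "reason": reason,
        })

    def by_component(self):
        """Count adaptations per program component, most-adapted first."""
        counts = {}
        for e in self.entries:
            counts[e["component"]] = counts.get(e["component"], 0) + 1
        return sorted(counts.items(), key=lambda kv: -kv[1])

# Hypothetical usage during delivery of a parenting program.
log = AdaptationLog("hypothetical_parenting_program")
log.record(1, "role_play", "shortened", "session time constraint")
log.record(2, "homework", "omitted", "low literacy in group")
log.record(3, "role_play", "shortened", "session time constraint")
```

Aggregating such logs across sites would show which components are adapted most often and for what reasons, directly supporting the research questions above.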

Finally, some program developers are testing the effectiveness of different versions of their intervention, including variations in program components, delivery methods, and accompanying TA for use with different populations (Haggerty et al. 2007). Examples of this research include tests of new technologies such as web-based versus in-person versus cellular phone-based formats, or practitioner versus peer-led formats. Such trials have the potential to increase knowledge about what populations are most amenable to the various formats and associated uptake of EBIs, and generate data that may improve implementation consistency and quality. A better understanding of the characteristics of individuals, families, schools, communities, and organizations that are most responsive to particular intervention delivery formats can help decision makers target services to those who will benefit from them the most and maximize the use of their resources.

Key Sustainability Phase Research Questions

Perhaps the most significant challenge to effective translation of EBIs is to sustain implementation efforts over time (Bumbarger and Perkins 2008; Spoth and Greenberg 2011). Frequently in the U.S., innovative public health programming is developed and initially implemented with seed grants from governments or foundations. However, the dilemma of time-limited grant funding is that innovations often are implemented without a clear plan for generating resources to sustain the innovation (Adelman and Taylor 2003; Farquhar 1978; Hallfors et al. 2002; Hallfors and Godette 2002; Mancini and Marek 2004). As noted earlier, the field would benefit from a change in approach whereby practitioner–scientist collaborations work with organizations, service systems, and communities to investigate what structures, strategies, policies, and resources are necessary to create sustainable change. In doing so, it will be important to recognize both the common issues shared by interventions as well as the unique nature and needs of specific intervention models. Such investigations, beginning in the pre-adoption phase, could address the need for developing and testing practical tools to assist agencies, schools, and communities in moving EBIs to a sustainable basis. Yet, at this point, sustainability research has not coalesced into a subfield with a clear research agenda (Scheirer and Dearing 2011).

As the study of sustainability is nascent, numerous T2 research questions should be addressed to guide both policy and practice. One critically important question concerns identification of optimal strategies for organizations, agencies, and communities in effectively using implementation data to support sustainability (see Tibbits et al. 2010). In addition, a key question is what management, organization, training, and TA factors for organizations and communities lead to greater sustainability for specific programs, practices, and policies. Scheirer and Dearing (2011) provide a useful framework for articulating sustainability research questions and formulating a research agenda (also see Table 2). They suggest three key sets of sustainability factors to investigate, namely, intervention characteristics (e.g., costs of specific intervention models), organizational supports (e.g., staff buy-in), and community environment (e.g., existence of partnerships). They also emphasize the importance of examining financing mechanisms, along with evaluation of six types of outcomes (continued participant benefits, continued program activities, sustained community-level partnerships, maintained organizational practices, sustained attention to the problem addressed, and program diffusion).

As discussed earlier, the sustainability process has recognizable developmental phases, suggesting key questions that fit within the Scheirer and Dearing framework. For example: how does the nature of ongoing TA change once a community matures in its use of an EBI or service delivery model? What strategies can be used with organizational leadership to nurture champions for long-term prevention implementation, especially in cases where the greatest benefits may not be realized for years (e.g., early childhood interventions)? Concerning sustainability support structures, it is critically important to clarify what types of national, regional, or state diffusion networks and TA systems can most effectively support sustainability for various types of EBIs and service delivery systems. Moreover, it would be helpful to understand how findings on cost–benefit effectiveness can be best used to alter policymaking and influence sustainable funding.

Crosscutting Policy Research

The above discussion of the T2 phases has referenced policy-specific research questions. Relevant policies operate at all levels highlighted in Fig. 1 (national, state, county, and community organizational) and represent potentially powerful contextual effects to investigate. A good illustration of the powerful role of regulatory and other policy change is the case of reduced tobacco use in this country and the related reductions in morbidity and mortality (e.g., clean indoor air policy, taxes on tobacco products, regulation of advertising, and illegal sales contributed to this outcome; see Biglan and Taylor 2000). These types of policy efforts could be applied to other health-related behaviors, and they warrant further research (e.g., Biglan and Taylor 2000; National Research Council and Institute of Medicine 2009a). Because the relevant policy domain is very broad (Briar-Lawson and Drews 2000; Midgley 2000), further specifying policy-related research questions lies beyond the scope of this article. Nonetheless, the systematic study of policies affecting T2 translation is recommended (see Table 2). Research might examine, for example, how community- or county-level policies, such as mandated set-aside funds from local taxes, can be aligned with federal-level policies and funding streams to support the purchase of EBI materials, program delivery, implementer training, and evaluation support on an ongoing basis at the local level (Pentz 2000, 2004, 2007). In addition, research is needed on relevant processes of local prevention policy enactment (Pentz et al. 2004).

In summary, considering the array of questions, a broad research agenda is needed to better understand obstacles and solutions to bringing EBIs into wider use. While the study of implementation processes has received some attention, there has been a paucity of work on pre-adoption, adoption, and sustainability issues and questions, and on related policy. Overall, addressing this T2 research agenda will require collaboration across scientific domains and among researchers, practitioners, and policymakers; a clearer understanding of the role of different types of organizations; a transdisciplinary approach that incorporates theory and methods from disciplines such as marketing and communications; and carefully designed and executed studies of strategies for affecting practice and policy.

To conduct a broad T2 research agenda, multiple methodologies are needed, including naturalistic/descriptive studies of the longer-term outcomes of EBIs; experimental trials that vary important features of system development, policy, management, training, and financing; and adaptive design studies. In the next section, we discuss important design and methods considerations relevant to T2 translation research.

Key Design, Methodology, and Measurement Issues

Design Alternatives

There is scientific debate about the types of study designs that best suit this emerging field of T2 translation research. Clearly, the designs and scientific strategies will depend on the classes of research questions that are being addressed. Some researchers have questioned whether it is appropriate to rely primarily on randomized controlled trials to build a knowledge base for T2 translation research. Often labeled as the “gold standard” in medicine and public health, the traditional randomized controlled trial typically is used for identifying efficacious interventions that demonstrate impact under ideal conditions (National Research Council and Institute of Medicine 2009a). Other researchers have suggested that designs should be more suitable to the study of interventions under real-world conditions (Brown et al. 2009; Glasgow et al. 2004, 2006). Yet others have suggested that randomized trials have very little place in T2 research (Berwick 2008). While the field is sorting out the most useful designs to be used, it is clear that there is a need to address some major limitations in the ability of traditional randomized trials to examine fundamental questions related to implementation (National Research Council and Institute of Medicine 2009a; West et al. 2008).

In this context, it is important to note the distinction between the goal of strengthening clinical or public health evidence and that of improving the processes involved in delivering or better translating health-related interventions (programs, policies, and practices). Strengthening clinical evidence often involves demonstrating efficacy or effectiveness of one program versus another through a randomized effectiveness trial or natural experiment. Another way that programs can be improved is through adaptation, and designs are being explored that accommodate such program changes in the course of conducting a series of randomized trials (Brown et al. 2009). In addition, improving the processes involved in delivering a program often uses quality improvement methodology. This entails paying careful attention to the development of monitoring and feedback systems at the point of service or program provision (e.g., hospital infection control; Chan et al. 2011), the level where training and supervision occur (e.g., improving teachers’ delivery of a prevention program in schools; Poduska et al. 2009), or the level of organizational or community leadership (e.g., school-wide implementation of a prevention program; Bradshaw et al. 2008). As another example, the Veterans Administration’s QUERI system (Demakis et al. 2000; Fortney et al. 2012) uses a systemic quality improvement program that could be adapted for many of the prevention delivery systems described in this paper.

Standard randomized trials often are difficult to conduct when there are complex behavioral- or system-level interventions, and they often provide insufficient information about mechanisms of change and the role of contexts (Berwick 2008). Flay et al. (Adaptive time-series designs for evaluating complex multicomponent interventions in neighborhoods and communities, unpublished) have delineated adaptive designs that can be especially useful in evaluating very complex, multi-component interventions. Importantly, they have recommended a hierarchical decision-making approach to research designs, a process that begins with consideration of randomization but then allows for alternative designs when randomization is not feasible.

Despite the noted limitations in the applicability of randomized trials, we anticipate that the use of randomization in innovative ways will make major contributions, especially in the formative phases of T2 translation science. Indeed, Flay proposed the use of randomized assignment in implementation research a quarter century ago as part of the overall scientific approach for moving prevention science into practice (Flay 1986). Designs need to take into account which phase or phases of translation are being studied, ranging from pre-adoption through sustainability. In each of these phases, there are opportunities to randomly assign both the implementation or dissemination condition and the timing of an implementation or dissemination strategy (Hawkins et al. 2008; Spoth et al. 2007b). Currently, a considerable number of randomized implementation trials are testing alternative strategies for implementing EBIs (Haggerty et al. 2007; Landsverk et al. 2011, 2012). Thus, standard randomized controlled trials with modifications may be particularly useful, both from a scientific point of view and from the perspective of community and institutional partners in implementation or dissemination research (Brown et al. 2012).

One particular concern is that the use of a control condition entails withholding an EBI from a portion of the subjects, which is unacceptable to many organizations and communities. There are alternatives to the traditional “control group,” such as a “roll-out” design, in which groups or communities are randomized to the timing of when they initiate a standard or innovative implementation or dissemination strategy (Brown et al. 2006, 2008, 2009; Chamberlain et al. 2010; Landsverk et al. 2012). In roll-out designs, all communities receive an intervention; it is the timing that is randomly determined. Compared to the traditional two-time-point wait-listed design, the multiple-time-point roll-out (also called the dynamic wait-listed design) has higher statistical power and is often logistically easier to manage (Brown et al. 2006). Assigning communities at random times also decreases the potential for bias relative to multiple baseline designs and improves acceptability to practitioners and community partners (Biglan et al. 2000).
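The timing randomization at the heart of a roll-out (dynamic wait-listed) design can be sketched as follows. This is an illustrative sketch only, not a trial protocol; the community labels and the number of start periods are hypothetical.

```python
import random

def roll_out_schedule(communities, n_periods, seed=None):
    """Randomly assign each community a start period. Every community
    eventually receives the intervention; only the timing differs."""
    rng = random.Random(seed)
    shuffled = communities[:]
    rng.shuffle(shuffled)
    # Split the shuffled list into roughly equal waves, one per period.
    waves = {p: [] for p in range(1, n_periods + 1)}
    for i, community in enumerate(shuffled):
        waves[i % n_periods + 1].append(community)
    return waves

# Six hypothetical communities rolled out over three periods.
schedule = roll_out_schedule(["C1", "C2", "C3", "C4", "C5", "C6"], 3, seed=42)
for period, wave in sorted(schedule.items()):
    print(period, wave)
```

Because all units appear in some wave, no community is permanently denied the EBI; analysis then compares outcomes between communities that have and have not yet started at each time point.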

In addition to roll-out designs, other important RCT-related designs for future T2 translation research include those embedded in multiphase optimization strategies (MOST; Collins et al. 2005) and sequential multiple assignment randomized trials (SMART; Murphy 2003). Drawing upon engineering principles focused on enhancing efficiencies, MOST includes methods for identifying active components of interventions and optimal intervention doses, and then applies fully crossed or fractional factorial RCT designs to assess the relative performance of intervention components or dose levels. SMART is an RCT design that can be used to address the best sequencing of various intervention components and to clarify which types of tailoring are optimal. An illustration of the application of both types of designs to the development and testing of eHealth interventions can be found in Collins et al. (2007).
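To make the factorial machinery behind MOST-style component screening concrete, the following minimal sketch enumerates a full 2^3 factorial over three intervention components and a half-fraction of it. The component names are hypothetical, and the choice of defining relation (I = ABC) is a standard textbook example rather than anything prescribed by MOST itself.

```python
from itertools import product

# Three hypothetical intervention components, each switched off (-1) or on (+1).
components = ["skills_training", "parent_outreach", "booster_session"]

# Full 2^3 factorial: every on/off combination of the three components.
full_design = list(product([-1, 1], repeat=len(components)))

# Half-fraction with defining relation I = ABC: keep only runs where the
# product of the three levels is +1. This halves the number of experimental
# conditions at the cost of aliasing main effects with two-way interactions.
half_fraction = [run for run in full_design if run[0] * run[1] * run[2] == 1]

print(len(full_design), len(half_fraction))  # 8 conditions vs. 4
```

The trade-off shown here is the essence of fractional factorial screening: fewer randomized conditions to staff and deliver, in exchange for deliberately confounded effect estimates.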

Alternative Methods

Mixed, multiple methods may be particularly important for T2 translation; mixed methods that combine qualitative and quantitative approaches have great potential. An example of this mixed methods approach is the CAL-OH implementation study, which evaluates the relative efficiency of two alternative strategies for implementing an evidence-based foster care program in the social service systems of California and Ohio counties. The study uses quantitative methods in the context of a true randomized design, whereby 52 counties were first matched and then randomly assigned to one of two implementation strategies (Chamberlain et al. 2008). Implementation strategies were compared on how quickly counties progressed through stages of implementation, as well as on the quantity of service delivery. This study also examined how social network connections among service system leaders affected adoption. These network connections were evaluated through survey methodology as well as through ethnographic interviews. The combination of these data, when analyzed using mixed methods, revealed a much richer network structure than was available with either method alone (Palinkas et al. 2012).
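The matched-then-randomized allocation described for the CAL-OH study can be sketched generically. Pairing on a single size covariate, the county labels, and the strategy names below are illustrative assumptions for the sketch, not the study's actual matching procedure.

```python
import random

def matched_pair_assign(units, covariate, seed=None):
    """Sort units on a covariate, pair adjacent units, then randomize
    one member of each pair to each implementation condition."""
    rng = random.Random(seed)
    ordered = sorted(units, key=covariate)
    assignment = {}
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]
        first, second = rng.sample([a, b], 2)  # random order within the pair
        assignment[first] = "strategy_A"
        assignment[second] = "strategy_B"
    return assignment

# Hypothetical counties keyed to a size covariate (e.g., annual caseload).
counties = {"Alpha": 120, "Beta": 450, "Gamma": 130, "Delta": 470}
arms = matched_pair_assign(list(counties), counties.get, seed=1)
```

Matching before randomization balances the covariate across conditions by construction, which matters when, as here, the number of randomized units is modest.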

Another important methodological approach, called evaluability assessments for practice settings, facilitates assessment of the appropriateness of conducting more conventional and rigorous evaluations before the fact (Leviton et al. 2010). These assessments can serve several purposes, including rapid feedback on implementation quality, examination of feasibility and acceptability of EBIs, and the evaluation of the appropriateness and feasibility of conventional methods for evaluating intervention impacts.

Finally, methods of “systems science,” such as social network analysis (Valente 2010, 2012), systems engineering (Czaja and Nair 2006; Czaja et al. 2003), process control (Rivera et al. 2007), and ethnographic methods (Palinkas et al. 2011), almost certainly will play important roles in the study of the complex interactions within and across implementation systems, as will a new generation of computationally intensive methods (Landsverk et al. 2012).

Whether the focus is on the application of emerging designs or on alternative applications of research methods, finding innovative ways of integrating T2 translation research into EBI delivery systems as part of the ongoing service delivery process is essential.

Implementation Measures and Methods

Currently, there is limited consensus on the measures or methods (observations, provider self-reports, participant reports) that provide the most reliable and valid assessments of implementation fidelity and adaptation. Further evaluation of current tools and development and testing of new methods is warranted, including clarification of which dimensions of fidelity (e.g., adherence, dosage, and participant responsiveness) are most important to measure for a particular intervention and how these dimensions actually affect program outcomes (see Berkel et al. 2011).

Few studies have evaluated the degree to which or the process by which schools, agencies, community teams, and organizations monitor or measure their prevention activities. In general, some of the tools used in prevention research trials, such as conducting observations of program delivery with trained independent observers or videotaping and later coding program sessions, are not practical, effective, or efficient for use in community settings. Thus, alternative user-friendly instruments and methodologies should be explored in order to increase local monitoring of EBIs (Berkel et al. 2011; Fagan et al. 2011).
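A user-friendly monitoring instrument of the kind called for here might reduce to a few simple subscores. The dimension names below follow the fidelity taxonomy cited above (adherence, dosage, participant responsiveness), but the inputs, scaling, and function itself are purely hypothetical illustrations, not an established instrument.

```python
def fidelity_summary(sessions_delivered, sessions_planned,
                     checklist_hits, checklist_items,
                     mean_responsiveness, responsiveness_max=5):
    """Summarize three fidelity dimensions as simple 0-1 subscores:
    dosage (sessions delivered vs. planned), adherence (checklist items
    covered), and participant responsiveness (mean rating rescaled)."""
    return {
        "dosage": sessions_delivered / sessions_planned,
        "adherence": checklist_hits / checklist_items,
        "responsiveness": mean_responsiveness / responsiveness_max,
    }

# Hypothetical values from one program cycle at a community site.
report = fidelity_summary(sessions_delivered=10, sessions_planned=12,
                          checklist_hits=18, checklist_items=20,
                          mean_responsiveness=4.0)
```

Even a summary this simple could let a local team track its own delivery over time without trained observers, while leaving open the research question of how each subscore relates to outcomes.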

Key Action Steps for Governmental Agencies and Partners

Accomplishing effective EBI implementation in many communities throughout the U.S. will require not only adequate infrastructure and innovative research but also changes in federal and state agency operating procedures, policies, and funding mechanisms. Coordination and collaboration among federal and state agencies are critically important to the creation of effective and sustainable infrastructures for prevention practice and research (National Research Council and Institute of Medicine 2009a). Health-compromising behaviors do not fall neatly into the domain of any single governmental agency; therefore, broad-based prevention efforts that are effective and sustainable will require the braiding of service and research funding across a variety of state and federal agencies. In this section of the paper, we specify two types of action steps for governmental agencies and their partners, focusing on (1) coordination of guiding conceptual frameworks, policy, strategic plans, reporting requirements, and TA, as well as (2) innovative funding mechanisms.

Key Action Steps #1: Coordinate Prevention Translation Efforts

Conceptual Frameworks, Strategic Planning, and Policy Change

As part of a fundamental restructuring of the U.S. healthcare system, the Affordable Care Act created the National Prevention, Health Promotion, and Public Health Council (National Prevention Council), chaired by the U.S. Surgeon General. The National Prevention Council consists of the heads of 17 departments, agencies, and offices from the federal government collaborating to develop a conceptual framework and strategic plans for improving the health of Americans through disease prevention and health promotion. In June 2011, the Council released the nation’s first comprehensive strategy for disease prevention and health promotion, called the National Prevention Strategy (National Prevention, Health Promotion, and Public Health Council 2011). Each of the four strategic directions and seven priority areas articulated by the Strategy states a number of recommended actions in clear alignment with those indicated in the TSci Impact Framework (Fig. 1). Several of the related recommendations in the National Prevention Strategy dovetail with those we have suggested above, including the dissemination of community-based interventions that address health inequities, especially in inner-city neighborhoods and rural areas; the development, testing, and implementation of effective strategies to engage underserved populations; and the organization of representative, multi-sector community partnerships. However, expansion of research and the widespread adoption of research findings will not occur without more effective collaborations between researchers and the advocacy community. These efforts and similar international ones, such as the WHO Healthy Cities project (http://www.euro.who.int/en/what-we-do/health-topics/environment-and-health/urban-health/activities/healthy-cities), demonstrate a growing coordination of governmental, community, and advocacy efforts focusing on prevention with a broad translation mandate.

The federal agency coordination exemplified by the National Prevention Council illustrates one way to move toward a common conceptual framework for prevention across federal agencies. A key aspect of a common framework concerns the risk and protective factors shared by the diverse problems addressed by various agencies. Obesity, inactivity, substance misuse, poor academic outcomes, and delinquent and criminal behavior are predicted by shared factors, as well as distinct risk and protective factors (Catalano et al. 2011). Clear evidence indicates that the same EBIs can positively affect seemingly disparate health-related behaviors and outcomes because these interventions target shared risk and protective factors (National Research Council and Institute of Medicine 2009a). In addition, there is evidence that EBIs designed to prevent mental, emotional, and behavioral disorders among children can have positive impacts on more distal outcomes such as academic achievement (e.g., Durlak et al. 2007).

Thus, there is growing attention to the need for federal and other governmental agencies to move from functioning in funding silos to forming multisectoral, multiagency partnerships that recognize both the agency-specific (particularly NIH) and the cross-agency outcome benefits achieved through collaboration. Agencies must focus more attention on the construction of readily explainable frameworks for the many health outcomes that are important across the federal stakeholders, including an understanding of why the agencies would jointly engage in prevention, how risk and protective factors cluster and cross over their interest areas, and how to measure and increase the translational impact of interventions targeting youth across outcomes of interest.

In this context, the authors recommend that the National Prevention, Health Promotion, and Public Health Council, along with its nationally representative advisory group, facilitate the organization of a White House Office on the Translation of Evidence-Based Prevention. The primary functions of this office would entail coordinating activities concerning the establishment of national goals for advancing translational practice and research (to address the needs articulated in this article, for example), along with monitoring progress toward these goals. Other activities of this office could parallel those summarized for a White House Office-led strategic coordinating group in the 2009 National Research Council and Institute of Medicine report on prevention of mental, emotional, and behavioral disorders (pp. 318–382). In that connection, one possibility would be for the Office to monitor the health and well-being of the nation’s children and youth, using data from this monitoring to construct an index of well-being. This index could show state-by-state results, as well as those of the nation overall, and how children’s and youth’s well-being is changing over time. It would be important that this Office be structured in a way that assured a non-partisan approach, similar to that of the Office of Management and Budget, with its politically broad coalition of support, and that it work closely with bipartisan congressional caucuses that focus on dissemination and prevention issues.

Coordinated Reporting Requirements

One way to support T2 prevention translation across federal, state, and local levels is to better align reporting requirements for grants and other funded programs across these levels. Varied requirements for assessing and reporting intervention outcomes constitute a major impediment to conducting T2 prevention research and pooling data among communities. The expansion of federal interagency workgroups and greater attention by oversight agencies (e.g., the White House Office of National Drug Control Policy) could facilitate efforts to coordinate data collection and reporting practices. The most recent National Drug Control Strategy emphasizes that “Preventing drug use before it begins is a cost-effective, common-sense way to build safe and healthy communities” (Office of National Drug Control Policy 2011). Toward this end, it highlights the need to develop community-focused prevention systems and increase collaboration among federal and state agencies to implement evidence-based prevention initiatives, including alignment of reporting requirements and fostering collaborations between public health and public safety organizations.

Coordinated Technical Assistance for Evaluation

Another consideration for facilitating better T2 translation across federal, state, and local levels concerns adequate attention to TA needs, particularly concerning program outcome research and evaluation. Federal initiatives are raising public awareness that health problems can be prevented and are helping to support implementation of a continuum of programs supported by the best available evidence. However, financial support alone for the purchase of EBIs does not ensure that the interventions will be implemented as intended or that desired outcomes will emerge. Thus, federal initiatives in support of EBIs must include training, TA, and continuous evaluation to determine optimal conditions for achieving anticipated health impacts. Evaluation research should be a part of all large, federal, programmatic initiatives in order to increase our understanding of the factors that ensure successful selection, adoption, implementation, and sustainability of prevention programs (see Office of Management and Budget Memos M-12-13, 14, May 18, 2012; Haskins and Baron 2011). Concomitant with such initiatives, there should be an aligned focus on addressing the types of T2 research questions delineated earlier, particularly questions about continued effectiveness of EBIs once implemented on a wide scale. The Maternal, Infant, and Early Childhood Home Visiting Program under the Affordable Care Act is an example of a collaboration with a comprehensive view to programmatic and evaluation TA needs, whereby the Health Resources and Services Administration leads the programmatic TA efforts and the Administration for Children and Families (ACF) manages the systematic review of evidence (http://homvee.acf.hhs.gov/), the national evaluation, and the set-aside funding for tribal communities, which includes programmatic and evaluation TA.

Key Action Steps #2: Support Innovative Funding Mechanisms

Better Research Funding Models

Some contemporary federal initiatives have placed a priority on supporting science-based approaches through encouraging or mandating the use of EBIs. Of critical importance to advancing the field, funding models also have been devised to allow for the braiding or integration of evaluation research into programmatic initiatives for the purposes of continually monitoring the effects of programs once they are implemented in communities. For example, in 2010 the DHHS Office of Adolescent Health, in collaboration with the Assistant Secretary for Planning and Evaluation and ACF, announced the availability of funds to support the implementation of evidence-based programs to reduce teenage pregnancy. Funded communities are expected to evaluate the implementation fidelity of the strategy and its effectiveness in reducing teen pregnancy rates and associated sexual outcomes.

For braided funding to work optimally, however, many conditions must be met. Not only should multiple federal agencies arrive at a common view about research and service goals of the federal program, but the reviewers of the applications to these funding opportunities need to do so as well. With T2 translation applying cross-disciplinary approaches, it is critically important that reviewers consider the appropriateness of a full range of methodologies that can be applied to such programs, including qualitative, quantitative, and simulation modeling, as described above. Further, projects supported through braided funding may assist with implementation monitoring or outcome evaluation but fall short of the kind of integration of translation research into programmatic initiatives contemplated by the TSci Impact Framework, an approach that is intended to create the opportunity to answer a broad range of translation research questions.

Considering limitations in current braided funding efforts, it is critical to develop additional federal and state funding models that can support coordinated, comprehensive prevention approaches in communities. Given the scale of the work to be accomplished, such funding models will likely require resource contributions from multiple agencies, as well as braiding funding from both service and research-supporting agencies. Funding models should explicitly integrate state-of-the-art methodologies for conducting T2 research, including randomized controlled trials or rigorous adaptive designs. Funding models could include consideration of: (1) braiding grant funding and TA from one federal agency for community-based implementation of EBIs and local evaluation, and contract funding from another federal agency for cross-site research on shared health outcomes; and (2) collaboration among federal and state agencies, in which state agencies fund grants to community-based organizations for program implementation and the federal agencies fund partnership grants to universities for collaboration on community-based participatory T2 research in the same communities. Ideally, these approaches are integrated with private–public partnerships that channel additional funding streams into the mix.

State Prevention Translation Financing Teams

As noted in the section on infrastructural challenges, one of the most difficult barriers to surmount is the limited set of available funding and financing strategies and structures. The Annie E. Casey Foundation (www.aecf.org) is developing a series of documents describing financing projects, strategies, and structures that provide some excellent examples of state-level initiatives to strategically change investment in EBIs (Langford et al. 2012). Langford and colleagues define financing structures as mechanisms for prioritizing, coordinating, and expending dollars on services and programs (e.g., budget structures such as set-asides, or braided funding). Changed investment occurs either through redirection (shifting funds from lower-priority services or programs to higher-priority EBIs) or reinvestment (shifting funding from higher to lower-cost services and then reinvesting the savings, particularly to reach more people in targeted populations with EBIs). At this point, most of these illustrative initiatives focus primarily on a single service system (e.g., juvenile justice).

Following these examples, we recommend that each state organize “Prevention Translation Financing Teams” to support priority prevention goals through state EBI-delivery systems. The purpose of these teams would be to develop a strategic plan for financing population impact-oriented EBIs within each state. The teams’ composition could follow guidelines used by successful initiatives in states such as Maryland, as illustrated by Langford et al. (2012), to assure inclusion of all appropriate stakeholders for broad EBI implementation in state delivery systems. Eventually, these teams could network to form a learning community, possibly including coordination with the Coalition for Evidence-Based Policy (see www.coalition4evidence.org).

The National Prevention Strategy provides a conceptual frame, along with potential objectives and priorities for state strategic plans for financing EBIs. It would be especially helpful to focus on scalable EBIs with existing, proven delivery systems, combining general population or universal EBIs with those that are more targeted, particularly those that address prevalent and costly behavioral health problems. Practical guidelines for developing such a strategic plan will be more readily available through efforts like those of the Annie E. Casey Foundation.

Overall, federal and state governments should take a stronger lead in promoting widespread EBI adoption, effective implementation, and sustainability, particularly through prevention delivery systems that foster collaborations and networking among practitioners and scientists. The Coalition for Evidence-Based Policy is greatly fostering progress on this front by facilitating legislative and policy change that supports broader implementation of highly effective programs, particularly through its tiered initiatives that fund evidence-based social programs (see Haskins and Baron 2011; www.coalition4evidence.org). Importantly, this federal-level effort requires development of an infrastructure for national prevention systems to support widespread implementation of EBIs in communities nationwide, to increase the prevalence of healthy, nurturing environments (Biglan et al. 2012).

Summary

A tremendous investment has been made in the development and evaluation of EBIs over the past several decades. Yet, we are failing to realize returns on that investment: populations that could benefit from these interventions are not receiving them, adequate infrastructure is not in place to support high-quality implementation of interventions, and population-level impact is not a reality. The resulting health, social, and economic tolls are well documented. As Surgeon General Regina Benjamin has emphasized, it is important to develop more effective strategies for implementing evidence-based prevention programs, practices, and policies to enhance population health (National Prevention, Health Promotion, and Public Health Council 2011).

This paper has focused on addressing two grand challenges for accomplishing the Surgeon General’s vision: building well-integrated prevention practice and research infrastructures with increased capacities, along with a clearly articulated T2 prevention translation research agenda driven by methods advances in prevention science. To surmount these challenges and achieve population impact, this paper presents TSci Impact, an integrative, systems-oriented framework concerning key aspects of the four translation functions and phases moving toward such impact (pre-adoption, adoption, implementation, and sustainability), the multiple levels of socio-environmental contexts in which they occur, and the necessary infrastructures to support them. The TSci Impact Framework highlights key components for improving integration of evidence-based practices and practice-oriented translation research, central to which are practically viable ways of linking practitioners and scientists in collaborative efforts. This approach frames critical research questions, specific to each of the four translation functions, that together constitute a research agenda for the future, and suggests enhancements to study design and methodology for T2 prevention research.

Finally, advancing T2 translation would benefit from two sets of action steps by governmental agencies and their partners, steps that would better support the prevention organizations and systems necessary for widespread implementation of EBIs. The recent articulation of the National Prevention Strategy and the Public Health Investment Fund greatly boosts the potential of relevant organizations and systems for effective translation and greater population impacts. The two sets of critical action steps delineated to assure progress in this direction are (1) coordinated conceptual frameworks, strategic planning, policies, reporting requirements, and TA systems for evaluation—both horizontally across agencies at the federal level and vertically across local, state, and federal levels—plus (2) innovative funding mechanisms, including braided funding, private–public partnerships, and state prevention translation financing teams. We suggest that state-based teams could develop the financing structures and strategies that provide the necessary wherewithal for advancing the science conducted within practice settings, as well as for scaling up EBIs. These action steps could provide the impetus for greatly advancing the next generation of T2 translation science and, in turn, enhance the health and well-being of future generations.

Footnotes

  1. In the scientific literature, “health promotion,” which is focused on well-being, is distinguished from “prevention,” which is designed to prevent or reduce diseases and related problems. Consistent with those who have argued for a synthesis of prevention and promotion approaches (e.g., Weissberg and Greenberg 1998), we use the term “prevention” to refer to both types of efforts. Evidence-based interventions are programs, policies, or practices tested in well-designed, methodologically sound prevention studies with health outcome improvements demonstrated to be statistically and practically significant. Prevention science or research refers to the scientific investigation of the etiology and prevention of social, physical, mental health, and academic problems, and the translation of that information to promote health and well-being.

  2. Mapping Advances in Prevention Science task forces are multidisciplinary task forces funded through Society for Prevention Research conference grants from the National Institute on Drug Abuse (grants 5R13DA021047-09 and 5R13DA021047-08SI). They are designed to advance promising ideas and scientific efforts generated through the Society for Prevention Research annual meeting, in order to: (1) foster promising, emerging areas of prevention science; (2) articulate an agenda to move research forward in such emerging areas; and (3) nurture the scientific leadership and capacity required to make the advances.

  3. Although it has recently been suggested that the process of T2 research involves several additional phases (e.g., “Type 3” and “Type 4” translation; Abernethy and Wheeler 2011; Khoury et al. 2007), here we refer to the entire process as T2 translation. It also should be noted that, within NIH, T2 translation research often is referenced as “dissemination and implementation research” (e.g., Rabin et al. 2008).

Notes

Acknowledgments

Preparation of this report was sponsored by the Society for Prevention Research (SPR), with support from the National Institutes of Health. The SPR Board of Directors appointed the Mapping Advances in Prevention Science Type 2 Translational Research Task Force (Spoth and Rohrbach, Co-Chairs) to develop the Type 2 translation research population impact framework outlined in this paper. The intent of this work is to enhance the next generation of T2 translation science so that it yields widespread benefits for the health and well-being of the next generation of our youth and families. The final report was endorsed by the SPR Board of Directors on February 13, 2012. The recommendations and conclusions of the report are those of the authors. The authors are grateful for the valuable feedback and contributions from Elizabeth Robertson, Tamara M. Haegerich, and Aleta Meyer (Society for Prevention Research Type 2 Translational Task Force Members), along with the helpful feedback and editing suggestions from anonymous reviewers for Prevention Science and from Christine Cody.

References

  1. Abernethy, A. P., & Wheeler, J. L. (2011). True translational research: Bridging the three phases of translation through data and behavior. Translational Behavioral Medicine, 1, 26–30.
  2. Adelman, H. S., & Taylor, L. (2003). Creating school and community partnerships for substance abuse prevention programs. Commissioned by SAMHSA Center for Substance Abuse Prevention. Journal of Primary Prevention, 23, 331–369.
  3. Aos, S., Lieb, R., Mayfield, J., Miller, M., & Pennucci, A. (2004). Benefits and costs of prevention and early intervention programs for youth. Olympia: Washington State Institute for Public Policy.
  4. Asen, R., Gurke, D., Conners, P., Solomon, R., & Gumm, E. (2012). Research evidence and school board deliberations: Lessons from three Wisconsin school districts. Educational Policy. E-pub 1-16-2012. doi:10.1177/0895904811429291
  5. August, G. J., Bloomquist, M. L., Lee, S. S., Realmuto, G. M., & Hektner, J. M. (2006). Can evidence-based prevention programs be sustained in community practice settings? The Early Risers’ advanced-stage effectiveness trial. Prevention Science, 7, 151–165.
  6. Backer, T., & Guerra, N. (2011). Mobilizing communities to implement evidence-based practices in youth violence prevention: The state of the art. American Journal of Community Psychology, 48, 31–42.
  7. Balas, E. A., & Boren, S. A. (2000). Managing clinical knowledge for health care improvement. In J. Bemmel & A. T. McCray (Eds.), Yearbook of medical informatics 2000: Patient-centered systems (pp. 65–70). Stuttgart: Schattauer Verlagsgesellschaft mbH.
  8. Berkel, C., Mauricio, A. M., Schoenfelder, E., & Sandler, I. N. (2011). Putting the pieces together: An integrated model of program implementation. Prevention Science, 12, 23–33.
  9. Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299, 1182–1184.
  10. Biglan, A., & Taylor, T. K. (2000). Why have we been more successful in reducing tobacco use than violent crime? American Journal of Community Psychology, 28, 269–302.
  11. Biglan, A., Ary, D., & Wagenaar, A. C. (2000). The value of interrupted time-series experiments for community intervention research. Prevention Science, 1, 31–49.
  12. Biglan, A., Flay, B. R., Embry, D. D., & Sandler, I. (2012). The critical role of nurturing environments for promoting human wellbeing. American Psychologist, 67, 257–271.
  13. Boren, S. A., & Balas, E. A. (1999). Evidence-based quality measurement. The Journal of Ambulatory Care Management, 22, 17–23.
  14. Bowen, S. W., Chawla, N., Collins, S. E., Witkiewitz, K., Hsu, S., Grow, J., & Marlatt, A. (2009). Mindfulness-based relapse prevention for substance use disorders: A pilot efficacy trial. Substance Abuse, 30, 205–305.
  15. Bradshaw, C. P., Koth, C. W., Bevans, K. B., Ialongo, N., & Leaf, P. J. (2008). The impact of school-wide positive behavioral interventions and supports (PBIS) on the organizational health of elementary schools. School Psychology Quarterly, 23, 462–473.
  16. Briar-Lawson, K., & Drews, J. (2000). Child and family welfare policies and services: Current issues and historical antecedents. In J. Midgley, M. B. Tracy, & M. Livermore (Eds.), The handbook of social policy (pp. 157–174). Thousand Oaks: Sage.
  17. Brown, C. H., Wyman, P. A., Guo, J., & Peña, J. (2006). Dynamic wait-listed designs for randomized trials: New designs for prevention of youth suicide. Clinical Trials, 3, 259–271.
  18. Brown, C. H., Ten Have, T. R., Jo, B., Dagne, G., Wyman, P. A., Muthen, B., & Gibbons, R. D. (2009). Adaptive designs for randomized trials in public health. Annual Review of Public Health, 30, 1–25.
  19. Brown, L. D., Feinberg, M. E., & Greenberg, M. T. (2010). Determinants of community coalition ability to support evidence-based programs. Prevention Science, 11, 287–297.
  20. Brown, C. H., Kellam, S. G., Kaupert, S., Muthén, B. O., Wang, W., Muthén, L., & McManus, J. (2012). Partnerships for the design, conduct, and analysis of effectiveness, and implementation research: Experiences of the Prevention Science and Methodology Group. Administration and Policy in Mental Health and Mental Health Services Research, 39, 301–316. doi:10.1007/s10488-011-0387-3
  21. Brown, C. H., Wang, W., Kellam, S. G., Muthén, B. O., Petras, H., Toyinbo, P., & The Prevention Science and Methodology Group. (2008). Methods for testing theory and evaluating impact in randomized field trials: Intent-to-treat analyses for integrating the perspectives of person, place, and time. Drug and Alcohol Dependence, 95, S74–S104.
  22. Bumbarger, B. K., & Campbell, E. M. (2012). A state agency–university partnership for translational research and the dissemination of evidence-based prevention and intervention. Administration and Policy in Mental Health and Mental Health Services Research, 39, 268–277. doi:10.1007/s10488-011-0372-x
  23. Bumbarger, B., & Perkins, D. (2008). After randomized trials: Issues related to dissemination of evidence-based intervention. Journal of Children’s Services, 3, 53–61.
  24. Callahan, P., Liu, P., Purcell, R., Parker, A. G., & Hetrick, S. E. (2011). Evidence map of prevention and treatment interventions for depression in young people. Depression Research and Treatment, 2012, Article ID 820735. doi:10.1155/2012/820735
  25. Castro, F. G., Barrera, M., & Martinez, D. H. (2004). The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science, 5, 41–45.
  26. Catalano, R. F., Haggerty, K., Hawkins, J. D., & Elgin, J. (2011). Prevention of substance use and substance use disorders: The role of risk and protective factors. In Y. Kaminer & K. Winters (Eds.), Clinical manual of adolescent substance abuse treatment (pp. 25–63). Washington, DC: American Psychiatric Publishing.
      CASEL. (2012). Collaborative for Academic, Social, and Emotional Learning website.
  27. Catalano, R. F., Fagan, A. A., Gavin, L. E., Greenberg, M. T., Irwin, C. E., Ross, D. A., & Shek, D. (2012). Worldwide application of the prevention science research base in adolescent health. The Lancet, 29/4/12.
  28. Centers for Disease Control and Prevention. (1992). Effectiveness in disease and injury prevention: Estimated national spending on prevention–United States, 1988. Morbidity and Mortality Weekly Report, 41, 529–531.
  29. Centers for Disease Control and Prevention. (2001). School health guidelines to prevent unintentional injuries and violence. Retrieved at http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5022a1.htm. Accessed 1 Nov 2012.
  30. Centers for Disease Control and Prevention. (2007). Improving public health practice through translation research (R18). Retrieved at http://grants.nih.gov/grants/guide/rfa-files/RFA-CD-07-005.html. Accessed 1 Nov 2012.
  31. Centers for Disease Control and Prevention. (2011). School health guidelines to promote healthy eating and physical activity. Retrieved at http://www.cdc.gov/mmwr/preview/mmwrhtml/rr6005a1.htm?s_cid=rr6005a1_w. Accessed 1 Nov 2012.
  32. Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion. (2009). The power of prevention: Chronic disease, the public health challenge of the 21st century. Retrieved at http://www.cdc.gov/chronicdisease/pdf/2009-Power-of-Prevention.pdf. Accessed 1 Nov 2012.
  33. Chamberlain, P., Brown, C. H., Saldana, L., Reid, J., Wang, W., Marsenich, L., & Bouwman, G. (2008). Engaging and recruiting counties in an experiment on implementing evidence-based practice in California. Administration & Policy in Mental Health, 35, 250–260.
  34. Chamberlain, P., Saldana, L., Brown, C. H., & Leve, L. D. (2010). Implementation of multidimensional treatment foster care in California: A randomized trial of an evidence-based practice. In M. Roberts-DeGennaro & S. Fogel (Eds.), Empirically supported interventions for community and organizational change. Chicago, IL: Lyceum.
  35. Chan, K., Hsu, Y. J., Lubomski, L., & Marsteller, J. (2011). Validity and usefulness of members reports of implementation progress in a quality improvement initiative: Findings from the Team Check-up Tool (TCT). Implementation Science, 6, 115.
  36. Chinman, M., Imm, P., & Wandersman, A. (2004). Getting to outcomes: Promoting accountability through methods and tools for planning, implementation, and evaluation (Technical Report TR-101). Santa Monica: RAND Corporation.
  37. Coie, J. D., Watt, N. F., West, S. G., Hawkins, J. D., Asarnow, J. R., Markman, H. J., & Long, B. (1993). The science of prevention: A conceptual framework and some directions for a national research program. American Psychologist, 48, 1013–1022.
  38. Collins, L. M., Murphy, S. A., Nair, V. N., & Strecher, V. (2005). A strategy for optimizing and evaluating behavioral interventions. Annals of Behavioral Medicine, 30, 65–73.
  39. Collins, L. M., Murphy, S. A., & Strecher, V. (2007). The Multiphase Optimization Strategy (MOST) and the Sequential Multiple Assignment Randomized Trial (SMART): New methods for more potent e-Health interventions. American Journal of Preventive Medicine, 32, S112–S118.
  40. Cox, P. J., Keener, D., Woodard, T., & Wandersman, A. (2009). Evaluation for improvement: A seven step empowerment evaluation approach for violence prevention organizations. Atlanta, GA: Centers for Disease Control and Prevention. Retrieved at http://www.cdc.gov/violenceprevention/pdf/evaluation_improvement-a.pdf. Accessed 1 Nov 2012.
  41. Crowley, D. M., Jones, D. E., Greenberg, M. T., Feinberg, M. E., & Spoth, R. L. (2012). Resource consumption of a dissemination model for prevention programs: The PROSPER delivery system. Journal of Adolescent Health, 50, 256–263.
  42. Cunningham, C. E., Vaillancourt, T., Rimas, H., Deal, K., Cunningham, L., Short, K., & Chen, Y. (2009). Modeling the bullying prevention program preferences of educators: A discrete choice conjoint experiment. Journal of Abnormal Child Psychology, 37, 929–943.
  43. Czaja, S. J., & Nair, S. N. (2006). Human factors engineering and systems design. In G. Salvendy (Ed.), Handbook of human factors and ergonomics (3rd ed., pp. 32–49). Hoboken: Wiley.
  44. Czaja, S. J., Schulz, R., Lee, C. C., & Belle, S. H. (2003). A methodology for describing and decomposing complex psychosocial and behavioral interventions. Psychology and Aging, 18, 385–395.
  45. Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.
  46. Dariotis, J. K., Bumbarger, B. K., Duncan, L. G., & Greenberg, M. T. (2008). How do implementation efforts relate to program adherence? Examining the role of organizational, implementer, and program factors. Journal of Community Psychology, 36, 744–760.
  47. De Savigny, D., & Adam, T. (2009). Systems thinking for health systems strengthening. Alliance for Health Policy and Systems Research, World Health Organization.
  48. Demakis, J., McQueen, L., Kizer, K., & Feussner, J. (2000). Quality Enhancement Research Initiative (QUERI): A collaboration between research and clinical practice. Medical Care, 38, 17–25.
  49. Domitrovich, C. E., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation, 11, 193–221.
  50. Drake, E. K., Aos, S., & Miller, M. G. (2009). Evidence-based public policy options to reduce crime and criminal justice costs: Implications in Washington State. Victims and Offenders, 4, 170–196.
  51. Durlak, J. A. (1998). Why program implementation is important. Journal of Prevention & Intervention in the Community, 17, 5–18.
  52. Durlak, J. A., & DuPre, E. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.
  53. Durlak, J. A., Taylor, R. D., Kawashima, K., Pachan, M. K., DuPre, E. P., Celio, C. L., & Weissberg, R. (2007). Effects of positive youth development programs on school, family, and community systems. American Journal of Community Psychology, 39, 269–286.
  54. Dusenbury, L., Hansen, W. B., Jackson-Newsom, J., Pittman, D. S., Wilson, C. V., Nelson-Smiley, K., & Giles, S. M. (2010). Coaching to enhance quality of implementation in prevention. Health Education, 110, 43–60.
  55. Fagan, A. A., & Mihalic, S. (2003). Strategies for enhancing the adoption of school-based prevention programs: Lessons learned from the Blueprints for Violence Prevention replications of the Life Skills Training Program. Journal of Community Psychology, 31, 235–254.
  56. Fagan, A. A., Hawkins, J. D., & Catalano, R. F. (2008). Using community epidemiologic data to improve social settings: The Communities That Care prevention system. In M. Shinn (Ed.), Toward positive youth development: Transforming schools and community programs (pp. 292–312). New York: Oxford University Press.
  57. Fagan, A. A., Hanson, K., Hawkins, J. D., & Arthur, M. W. (2009). Translation research in action: Implementation of the Communities That Care prevention system in 12 communities. Journal of Community Psychology, 37, 809–829.
  58. Fagan, A. A., Arthur, M. W., Hanson, K., Briney, J. S., & Hawkins, J. D. (2011). Effects of Communities That Care on the adoption and implementation fidelity of evidence-based prevention programs in communities: Results from a randomized controlled trial. Prevention Science, 12, 223–234.
  59. Fagan, A. A., Hanson, K., Briney, J. S., & Hawkins, J. D. (2012). Sustaining the utilization and high quality implementation of tested and effective prevention programs using the Communities That Care prevention system. American Journal of Community Psychology, 49, 365–377.
  60. Farquhar, J. W. (1978). The community-based model of life style intervention trials. American Journal of Epidemiology, 108, 103–111.
  61. Feinberg, M. E., Greenberg, M. T., Osgood, D. W., Anderson, A., & Babinski, L. (2002). The effects of training community leaders in prevention science: Communities That Care in Pennsylvania. Evaluation and Program Planning, 25, 245–259.
  62. Feinberg, M. E., Meyer-Chilenski, S., Greenberg, M. T., Spoth, R. L., & Redmond, C. (2007). Community and team member factors that influence the operations phase of local prevention teams: The PROSPER Project. Prevention Science, 8, 214–226.
  63. Feinberg, M., Bontempo, D., & Greenberg, M. (2008). Predictors and level of sustainability of community prevention coalitions. American Journal of Preventive Medicine, 34, 495–501.
  64. Feinberg, M. E., Ridenour, T. A., & Greenberg, M. T. (2008). The longitudinal effect of technical assistance dosage on the functioning of Communities That Care prevention boards in Pennsylvania. Journal of Primary Prevention, 29, 145–165.
  65. Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Pub. No. 231). Tampa: The National Implementation Research Network, University of South Florida.
  66. Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19, 531–540.
  67. Flay, B. R. (1986). Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine, 15, 451–474.
  68. Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., & Ji, P. (2004). Standards of evidence: Criteria for efficacy, effectiveness, and dissemination. Washington, DC: Society for Prevention Research.
  69. Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6, 151–175.
  70. Fortney, J., Enderle, M., McDougall, S., Clothier, J., Otero, J., Altman, L., & Curran, G. (2012). Implementation outcomes of evidence-based quality improvement for depression in VA community based outpatient clinics. Implementation Science, 7, 30.
  71. Frolich, K. L., & Potvin, L. (2008). The inequality paradox: The population approach and vulnerable populations. American Journal of Public Health, 98, 216–221.
  72. Glasgow, R. E., Lichtenstein, E., & Marcus, A. C. (2003). Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health, 93, 1261–1267.
  73. Glasgow, R. E., Klesges, L. M., Dzewaltowski, D. A., Bull, S. S., & Estabrooks, P. (2004). The future of health behavior change research: What is needed to improve translation of research into health promotion practice? Annals of Behavioral Medicine, 27, 3–12.
  74. Glasgow, R. E., Green, L. W., Klesges, L. M., Abrams, D. B., Fisher, E. B., Goldstein, M. G., & Orleans, C. T. (2006). External validity: We need to do more. Annals of Behavioral Medicine, 31, 105–108.
  75. Glisson, C., Landsverk, J., Schoenwald, S., Kelleher, K., Hoagwood, K., Mayberg, S., & Green, P. (2008). Assessing the organizational social context (OSC) of mental health services: Implications for research and practice. Administration and Policy in Mental Health and Mental Health Services Research, 35, 98–113.
  76. Gottfredson, G. D., Gottfredson, D. C., Czeh, E. R., Cantor, D., Crosse, S., & Hantman, I. (2000). National study of delinquency prevention in schools. Ellicott City, MD: Gottfredson Associates. Retrieved at https://www.ncjrs.gov/pdffiles1/nij/grants/194116.pdf. Accessed 1 Nov 2012.
  77. Granger, R. (2008). After-school programs and academics: Implications for policy, practice, and research. Social Policy Report, 22, 3–19.
  78. Green, L. W. (2001). From research to “best practices” in other settings and populations. American Journal of Health Behavior, 25, 165–178.
  79. Greenberg, M. T. (2010). School-based prevention: Current status and future challenges. Effective Education, 2, 27–52.
  80. Greenberg, M. T., Domitrovich, C. E., Graczyk, P. A., & Zins, J. E. (2005). The study of implementation in school-based preventive interventions: Theory, research, and practice (Vol. 3). Rockville: Center for Mental Health Services, SAMHSA.
  81. Greenberg, M. T., Biglan, A., Cicchetti, D., Fisher, P. A., Lightfoot, M., Neal, M. C., & Wolf, M. E. (2009). Review of the prevention research portfolio, National Institute on Drug Abuse. Retrieved at http://www.drugabuse.gov/sites/default/files/prr_rpt.pdf. Accessed 1 Nov 2012.
  82. Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82, 581–629.
  83. Greenwald, P. G. (1990). Foreword. In Smoking, tobacco, and cancer programs. Rockville: National Cancer Institute, U.S.D.H.H.S.
  84. Gross, D., Garvey, C., Julion, W., Fogg, L., Tucker, S., & Mokros, H. (2009). Efficacy of the Chicago Parent Program with low-income African American and Latino parents of young children. Prevention Science, 10, 54–65.
  85. Gruen, R. L., Elliott, J. H., Nolan, M. L., Lawton, P. D., Parkhill, A., McLaren, C. J., & Lavis, J. N. (2008). Sustainability science: An integrated approach for health-programme planning. Lancet, 372, 1579–1589.
  86. Guyll, M., Spoth, R. L., & Crowley, D. M. (2011). An economic analysis of methamphetamine prevention effects and employer costs. Journal of Studies on Alcohol and Drugs, 72, 577–585.
  87. Haggerty, K. P., Fleming, C. B., Lonczak, H. S., Oxford, M., Harachi, T. W., & Catalano, R. F. (2002). Predictors of participation in parenting workshops. Journal of Primary Prevention, 22, 375–387.
  88. Haggerty, K. P., Skinner, M. L., MacKenzie, E. P., & Catalano, R. F. (2007). A randomized trial of Parents Who Care: Effects on key outcomes at 24-month follow-up. Prevention Science, 8, 249–260.
  89. Hallfors, D., & Godette, D. (2002). Will the ‘Principles of Effectiveness’ improve prevention practice? Early findings from a diffusion study. Health Education Research, 17, 461–470.
  90. Hallfors, D., Sporer, A., Pankratz, M., & Godette, D. (2000). Drug-free schools survey: Report of results. Chapel Hill: University of North Carolina.
  91. Hallfors, D., Cho, D., Livert, D., & Kadushin, C. (2002). Fighting back against substance abuse: Are community coalitions winning? American Journal of Preventive Medicine, 23, 237–245.
  92. Hantman, I., & Crosse, S. (2000). Progress in prevention: Report on the National Study of Local Education Agency Activities under the Safe and Drug Free Schools and Communities Act. Rockville, MD: Department of Education, Office of the Under Secretary, Planning and Evaluation.
  93. Haskins, R., & Baron, J. (2011). Building the connection between policy and evidence: The Obama evidence-based initiatives. Paper commissioned by the U.K. National Endowment for Science, Technology and the Arts (NESTA), Sept 2011. Retrieved at http://coalition4evidence.org/wordpress/wp-content/uploads/Haskins-Baron-paper-on-fed-evid-based-initiatives-2011.pdf. Accessed 1 Nov 2012.
  94. Hawkins, J. D., & Salisbury, B. R. (1983). Delinquency prevention programs for minorities of color. Social Work Research & Abstracts, 19, 5–12.
  95. Hawkins, J. D., Catalano, R. F., & Arthur, M. W. (2002). Promoting science-based prevention in communities. Addictive Behaviors, 27, 951–976.
  96. Hawkins, J. D., Brown, E. C., Oesterle, S., Arthur, M. W., Abbott, R. D., & Catalano, R. F. (2008). Early effects of Communities That Care on targeted risks and initiation of delinquent behavior and substance use. Journal of Adolescent Health, 43, 15–22.
  97. Hawkins, J. D., Oesterle, S., Brown, E. C., Arthur, M. W., Abbott, R. D., Fagan, A. A., & Catalano, R. F. (2009). Results of a Type 2 translation research trial to prevent adolescent drug use and delinquency: A test of Communities That Care. Archives of Pediatrics & Adolescent Medicine, 163, 790–798.
  98. Hawkins, J. D., Oesterle, S., Brown, E. C., Monahan, K. C., Abbott, R. D., Arthur, M. W., & Catalano, R. F. (2012). Sustained decreases in risk exposure and youth problem behaviors after installation of the Communities That Care prevention system in a randomized trial. Archives of Pediatrics & Adolescent Medicine, 166, 141–148. Published online 2011. doi:10.1001/archpediatrics.2011.183
  99. Holder, H. D., Saltz, R. F., Grube, J. W., Treno, A. J., Reynolds, R. I., Voas, R. B., & Gruenewald, P. J. (1997). Summing up: Lessons from a comprehensive community prevention trial. Addiction, 92, S293–S301.
  100. Kalafat, J., & Ryerson, D. M. (1999). The implementation and institutionalization of a school-based youth suicide prevention program. Journal of Primary Prevention, 19, 157–175.
  101. Kellam, S. G., Mackenzie, A. C., Brown, C. H., Poduska, J. M., Wang, W., Petras, H., & Wilcox, H. C. (2011). The Good Behavior Game and the future of prevention and treatment. Addiction Science & Clinical Practice, 6, 73–84.
  102. Kelly, J. G. (2003). Science and community psychology: Social norms for pluralistic inquiry. American Journal of Community Psychology, 31, 213–217.
  103. Kerner, J., Rimer, B., & Emmons, K. (2005). Dissemination research and research dissemination: How can we close the gap? Health Psychology, 24, 443–446.
  104. Kessler, R. C., Demler, O., Frank, R. G., Olfson, M., Pincus, H. A., Walters, E. E., & Zaslavsky, A. M. (2005). US prevalence and treatment of mental disorders: 1990–2003. The New England Journal of Medicine, 352, 2515.
  105. Khoury, M. J., Gwinn, M., & Bowen, M. S. (2007). Genomics and public health research [Letter to the editor]. Journal of the American Medical Association, 297, 2347.
  106. Komro, K. A., Flay, B. R., & Biglan, A. (2011). Creating nurturing environments: A science-based framework for promoting child health and development within high-poverty neighborhoods. Clinical Child and Family Psychology Review, 14, 111–134.
  107. Kreuter, M. W., & Bernhardt, J. M. (2009). Reframing the dissemination challenge: A marketing and distribution perspective. American Journal of Public Health, 99, 2123–2127.
  108. Kuklinski, M. R., Briney, J. S., Hawkins, J. D., & Catalano, R. F. (2012). Cost–benefit analysis of Communities That Care outcomes at eighth grade. Prevention Science, 13, 150–161. doi:10.1007/s11121-011-0259-9
  109. Landsverk, J., Brown, C. H., Rolls Reutz, J., Palinkas, L., & Horwitz, S. M. (2011). Design elements in implementation research: A structured review of child welfare and child mental health studies. Administration and Policy in Mental Health, 38, 54–63. doi:10.1007/s10488-010-0315-y
  110. Landsverk, J., Brown, C. H., Chamberlain, P., Palinkas, L., Rolls Reutz, J., & Horwitz, S. M. (2012). Design and analysis in dissemination and implementation research. London: Oxford University Press.
  111. Langford, B. H., Flynn-Khan, M., English, K., Grimm, G., & Taylor, K. (2012). Evidence2Success, making wise investments in children’s futures: Financing strategies and structures. Baltimore: The Annie E. Casey Foundation.
  112. Leviton, L. C., Kettel Khan, L., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31, 213–233.
  113. Little, M. (2011, November 1–2). Achieving Lasting Impact at Scale, Part One: Behavior Change and the Spread of Family Health Innovations in Low-Income Countries. A convening hosted by the Bill & Melinda Gates Foundation, Seattle.
  114. Little, M., Kaoukji, D., Truesdale, B., & Backer, T. (2012, March 29–30). Achieving Lasting Impact at Scale, Part Two: Assessing System Readiness for Delivery of Family Health Innovations at Scale. A convening hosted by the Bill & Melinda Gates Foundation, La Jolla, California.
  115. Livet, M., & Wandersman, A. (2004). Organizational functioning: Facilitating effective interventions and increasing the odds of program success. In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice (pp. 123–142). New York: Guilford.
  116. Mancini, J. A., & Marek, L. I. (2004). Sustaining community-based programs for families: Conceptualization and measurement. Family Relations, 53, 339–347.
  117. Marchand, E., Stice, E., Rohde, P., & Becker, C. B. (2011). Moving from efficacy to effectiveness trials in prevention research. Behaviour Research and Therapy, 49, 32–41.
  118. McGlynn, E. A., Asch, S. M., & Adams, J. (2003). The quality of health care delivered to adults in the United States. The New England Journal of Medicine, 348, 2635–2645.
  119. McHugh, R. K., & Barlow, D. H. (2010). The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. American Psychologist, 65, 73–84.
  120. Mendel, R. A. (2000). Less hype, more help: Reducing juvenile crime, what works—and what doesn’t. Washington, DC: The American Youth Policy Forum.
  121. Merikangas, K. R., He, J., Burstein, M., Swendsen, J., Avenevoli, S., Case, B., & Olfson, M. (2011). Service utilization for lifetime mental disorders in US adolescents: Results of the National Comorbidity Survey-Adolescent Supplement (NCS-A). Journal of the American Academy of Child and Adolescent Psychiatry, 50, 32–45.
  122. Meyers, D. C., Durlak, J. A., & Wandersman, A. (2012). The quality implementation framework: A synthesis of critical steps in the implementation process. American Journal of Community Psychology, 50, 462–480. doi:10.1007/s10464-012-9522-x
  123. Midgley, J. (2000). The definition of social policy. In J. Midgley, M. B. Tracy, & M. Livermore (Eds.), The handbook of social policy. Thousand Oaks: Sage.
  124. Mihalic, S. F., & Irwin, K. (2003). Blueprints for violence prevention: From research to real-world settings—factors influencing the successful replication of model programs. Youth Violence and Juvenile Justice, 1, 307–329.
  125. Miller, T., & Hendrie, D. (2008). Substance abuse prevention dollars and cents: A cost–benefit analysis (DHHS Pub. No. (SMA) 07-4298). Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration.
  126. Miller, G., Roehrig, C., Hughes-Cromwick, P., & Lake, C. (2008). Quantifying national spending on wellness and prevention. Advances in Health Economics and Health Services Research, 19, 1–24.
  127. Minkler, M., & Wallerstein, N. (2002). Improving health through community organizing and community building. In K. Glanz, F. M. Lewis, & B. K. Rimer (Eds.), Health behavior and health education: Theory, research and practice (3rd ed., pp. 241–269). San Francisco: Jossey-Bass.Google Scholar
  128. Mitchell, R., Florin, P., & Stevenson, J. F. (2002). Supporting community-based prevention and health promotion initiatives: Developing effective technical assistance systems. Health Education & Behavior, 29, 620–639.CrossRefGoogle Scholar
  129. Mold, J. W., & Peterson, K. A. (2005). Primary care practice-based research networks: Working at the interface between research and quality improvement. Annals of Family Medicine, 3, S12–S20.PubMedCrossRefGoogle Scholar
  130. Mrazek, P., & Haggerty, R. (1994). Reducing risks for mental disorders: Frontiers for preventive intervention research. Washington, DC: National Academy Press.Google Scholar
  131. Murphy, S. A. (2003). Optimal dynamics treatment regimes. Journal of the Royal Statistical Society, 65, 331–366.CrossRefGoogle Scholar
  132. Nation, M., Crusto, C., Wandersman, A., Kumpfer, K. L., Seybolt, D., Morrissey-Kane, E., & Davino, K. (2003). What works in prevention: Principles of effective prevention programs. American Psychologist, 58, 449–456.
  133. National Research Council and Institute of Medicine. (2009a). Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities. Committee on the Prevention of Mental Disorders and Substance Abuse Among Children, Youth, and Young Adults: Research Advances and Promising Interventions. Mary Ellen O’Connell, Thomas Boat, and Kenneth E. Warner, Editors. Washington, DC: The National Academies Press.
  134. National Research Council and Institute of Medicine. (2009b). Depression in parents, parenting, and children: Opportunities to improve identification, treatment, and prevention. Committee on Depression, Parenting Practices, and the Healthy Development of Children. Board on Children, Youth, and Families. Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
  135. National Prevention, Health Promotion, and Public Health Council. (2011). National Prevention Strategy. Washington, DC: National Prevention, Health Promotion, and Public Health Council. Retrieved at www.healthcare.gov/center/councils/nphpphc/strategy/report.pdf. Accessed 1 Nov 2012.
  136. O’Brien, R. A., Moritz, P., Luckey, D. W., McClatchey, M. W., Ingoldsby, E. M., & Olds, D. L. (2012). Mixed methods analysis of participant attrition in the nurse–family partnership. Prevention Science, 13, 219–228.
  137. Office of National Drug Control Policy, National Drug Control Strategy. (2011). Retrieved at http://www.whitehouse.gov/ondcp/2011-national-drug-control-strategy. Accessed 1 Nov 2012.
  138. Olds, D. L. (2002). Prenatal and infancy home visiting by nurses: From randomized trials to community replication. Prevention Science, 3, 153–172.
  139. Palinkas, L. A., Aarons, G. A., Horwitz, S. M., Chamberlain, P., Hurlburt, M., & Landsverk, J. (2011). Mixed method designs in implementation research. Administration & Policy in Mental Health and Mental Health Services, 38, 44–53.
  140. Palinkas, L. A., Fuentes, D., Finno, M., Garcia, A. R., Holloway, I. W., & Chamberlain, P. (2012). Inter-organizational collaboration in the implementation of evidence-based practices among public agencies serving abused and neglected youth. Administration & Policy in Mental Health and Mental Health Services. doi:10.1007/s10488-012-0437-5. Epub 8-11-2012.
  141. Pentz, M. A. (1986). Community organization and school liaisons: How to get programs started. Journal of School Health, 56, 382–388. doi:10.1111/j.1746-1561.1986.tb05778.x.
  142. Pentz, M. A. (2000). Institutionalizing community-based prevention through policy change. Journal of Community Psychology, 28, 257–270.
  143. Pentz, M. A. (2004). Form follows function: Designs for prevention effectiveness and diffusion research. Prevention Science, 5, 23–29.
  144. Pentz, M. A. (2007). Disseminating effective approaches to drug use prevention. In M. K. Welch-Ross & L. G. Fasig (Eds.), Handbook on communicating and disseminating behavioral science (pp. 341–364). Thousand Oaks: Sage.
  145. Pentz, M. A., Mares, D., Schinke, S., & Rohrbach, L. A. (2004). Political science, public policy, and drug use prevention. Substance Use & Misuse, 39, 1821–1865.
  146. Poduska, J., Kellam, S. G., Brown, C. H., Ford, C., Windham, A., Keegan, N., & Wang, W. (2009). Study protocol for a group randomized controlled trial of a classroom-based intervention aimed at preventing early risk factors for drug abuse: Integrating effectiveness and implementation research. Implementation Science, 4, 56.
  147. Price, R. H., & Behrens, T. (2003). Working Pasteur’s quadrant: Harnessing science and action for community change. American Journal of Community Psychology, 31, 219–223.
  148. Proctor, E. K. (2007). Implementing evidence-based practice in social work education: Principles, strategies, and partnerships. Research on Social Work Practice, 17, 583–591.
  149. Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health, 36, 24–34.
  150. Quinby, R. K., Fagan, A. A., Hanson, K., & Brooke-Weiss, B. L. (2008). Installing the Communities That Care prevention system: Implementation progress and fidelity in a randomized controlled trial. Journal of Community Psychology, 36, 313–332.
  151. Rabin, B. A., Brownson, R. C., Haire-Joshu, D., Kreuter, M. W., & Weaver, N. L. (2008). A glossary for dissemination and implementation research in health. Journal of Public Health Management and Practice, 14, 117–123.
  152. Rappaport, J. (1990). Research methods and empowerment social agenda. In P. Tolan, C. Keys, F. Chertok, & L. Jason (Eds.), Researching community psychology: Issues of theories and methods (pp. 51–63). Washington, DC: American Psychological Association.
  153. Redmond, C., Spoth, R. L., Shin, C., Schainker, L., Greenberg, M., & Feinberg, M. (2009). Long-term protective factor outcomes of evidence-based interventions implemented by community teams through a community–university partnership. Journal of Primary Prevention, 30, 513–530.
  154. Rhoades, B. L., Bumbarger, B. K., & Moore, J. E. (2012). The role of a state-level prevention support system in promoting high-quality implementation and sustainability of evidence-based programs. American Journal of Community Psychology, 50, 386–401. doi:10.1007/s10464-012-9502-1. Online first 3-23-12.
  155. Ringwalt, C., Vincus, A., Hanley, S., Ennett, S., Bowling, J., & Rohrbach, L. (2009). The prevalence of evidence-based drug use prevention curricula in U.S. middle schools in 2005. Prevention Science, 10, 33–40.
  156. Ringwalt, C., Hanley, S., Ennett, S. T., Vincus, A. A., Bowling, J. M., Haws, S. W., & Rohrbach, L. A. (2011). The effects of no child left behind on the prevalence of evidence-based drug prevention curricula in the nation’s middle schools. Journal of School Health, 81, 265–272. doi:10.1111/j.1746-1561.2011.00587.x.
  157. Rivera, D. E., Pew, M. D., & Collins, L. M. (2007). Using engineering control principles to inform the design of adaptive interventions: A conceptual introduction. Drug and Alcohol Dependence, 88, S31–S40.
  158. Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
  159. Rohrbach, L. A., Grana, R., Sussman, S., & Valente, T. W. (2006). Type 2 translation: Transporting prevention interventions from research to real-world settings. Evaluation & the Health Professions, 29, 302–333.
  160. Rohrbach, L. A., Sun, P., & Sussman, S. (2010). One-year follow-up evaluation of the Project Towards No Drug Abuse (TND) dissemination trial. Preventive Medicine, 51, 313–319.
  161. Rotheram-Borus, M. J., & Duan, N. (2003). Next generation of preventive interventions. Journal of the American Academy of Child and Adolescent Psychiatry, 42, 518–526.
  162. Rotheram-Borus, M. J., Swendeman, D., & Chorpita, B. F. (2012). Disruptive innovations for designing and diffusing evidence-based interventions. American Psychologist, 67, 463–476.
  163. Sandler, I. N., Ostrom, A., Bitner, M. J., Ayers, T. S., Wolchik, S. A., & Daniels, V. S. (2005). Developing effective prevention services for the real world: A prevention service development model. American Journal of Community Psychology, 35, 127–142.
  164. Satcher, D. (2006). The prevention challenge and opportunity. Health Affairs, 25, 1009–1011.
  165. Scheirer, M. A. (2005). Is sustainability possible? A review and commentary on empirical studies of program sustainability. American Journal of Evaluation, 26, 320–347.
  166. Scheirer, M. A., & Dearing, J. W. (2011). An agenda for research on the sustainability of public health programs. American Journal of Public Health, 101, 2059–2067.
  167. Schinke, S., Brounstein, P., & Gardner, S. (2002). Science-based prevention programs and principles, 2002. DHHS Pub. No. (SMA) 03–3764. Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration.
  168. Shediac-Rizkallah, M., & Bone, L. (1998). Planning for the sustainability of community-based health programs: Conceptual frameworks and future directions for research, practice and policy. Health Education Research, 13, 87–108.
  169. Silvia, E. S., Thorne, J., & Tashjian, C. A. (1997). School-based drug prevention programs: A longitudinal study in selected school districts. Triangle Park: Research Triangle Institute.
  170. Sloboda, Z. (2012). The state of support for research on the epidemiology, prevention, and treatment of drug use and drug use disorders in the United States. Unpublished manuscript.
  171. Society for Prevention Research (2012). Advocacy for prevention science. Retrieved at http://www.preventionresearch.org/advocacy. Accessed 1 Dec 2012.
  172. Spoth, R. (2007). Opportunities to meet challenges in rural prevention research: Findings from an evolving community–university partnership model. The Journal of Rural Health, 23, S42–S54.
  173. Spoth, R. (2008). Translating family-focused prevention science into effective practice: Toward a translational impact paradigm. Current Directions in Psychological Science, 17, 415–421.
  174. Spoth, R. (2012). Multisite transdisciplinary T2 translation research network for state prevention systems. Invited presenter and panelist on scientist advisory panel for NIH/NIDA Prevention Science Five-Year Strategic Planning Meeting, Washington, DC.
  175. Spoth, R., Redmond, C., Shin, C., Greenberg, M., Feinberg, M., & Schainker, L. (2012a). PROSPER community–university partnership delivery system effects on substance misuse through 6 1/2 years past baseline from a cluster randomized controlled intervention trial. Preventive Medicine. doi:10.1016/j.ypmed.2012.12.013.
  176. Spoth, R., Trudeau, L., Shin, C., Ralston, E., Redmond, C., Greenberg, M., & Feinberg, M. (2012b). Longitudinal effects of universal preventive intervention on prescription drug misuse: Three RCTs with late adolescents and young adults. American Journal of Public Health. doi:10.2105/AJPH.2012.301209.
  177. Spoth, R. L., & Greenberg, M. T. (2005). Toward a comprehensive strategy for effective practitioner–scientist partnerships and larger-scale community benefits. American Journal of Community Psychology, 35, 107–126.
  178. Spoth, R. L., & Greenberg, M. T. (2011). Impact challenges in community science-with-practice: Lessons from PROSPER on transformative practitioner–scientist partnerships and prevention infrastructure development. American Journal of Community Psychology, 40, 1178–1191.
  179. Spoth, R., & Molgaard, V. (1999). Project Family: A partnership integrating research with the practice of promoting family and youth competencies. In T. R. Chibucos & R. Lerner (Eds.), Serving children and families through community–university partnerships: Success stories (pp. 127–137). Boston: Kluwer Academic.
  180. Spoth, R., & Redmond, C. (1993). Identifying program preferences through conjoint analysis: Illustrative results from a parent sample. American Journal of Health Promotion, 8, 124–133.
  181. Spoth, R., & Redmond, C. (2000). Research on family engagement in preventive interventions: Toward improved use of scientific findings in primary prevention practice. Journal of Primary Prevention, 21, 267–284.
  182. Spoth, R., & Redmond, C. (2002). Project Family prevention trials based in community–university partnerships: Toward scaled up preventive interventions. Prevention Science, 3, 203–221.
  183. Spoth, R., Greenberg, M., Bierman, K., & Redmond, C. (2004). PROSPER community–university partnership model for public education systems: Capacity-building for evidence-based, competence-building prevention. Prevention Science, 5, 31–39.
  184. Spoth, R., Clair, S., Greenberg, M., Redmond, C., & Shin, C. (2007a). Toward dissemination of evidence-based family interventions: Maintenance of community-based partnership recruitment results and associated factors. Journal of Family Psychology, 21, 137–146.
  185. Spoth, R., Guyll, M., Lillehoj, C. J., Redmond, C., & Greenberg, M. (2007b). PROSPER study of evidence-based intervention implementation quality by community–university partnerships. Journal of Community Psychology, 35, 981–999.
  186. Spoth, R., Redmond, C., Shin, C., Clair, S., Greenberg, M. T., & Feinberg, M. E. (2007c). Toward public health benefits from community–university partnerships: PROSPER effectiveness trial results for substance use at 1 1/2 years past baseline. American Journal of Preventive Medicine, 32, 395–402.
  187. Spoth, R., Greenberg, M., & Turrisi, R. (2008). Preventive interventions addressing underage drinking: State of the evidence and steps toward public health impact. Pediatrics, 121, 311–336.
  188. Spoth, R., Redmond, C., Clair, S., Shin, C., Greenberg, M., & Feinberg, M. (2011). Preventing substance misuse through community–university partnerships and evidence-based interventions: PROSPER outcomes 4 1/2 years past baseline. American Journal of Preventive Medicine, 40, 440–447.
  189. Spoth, R. L., Guyll, M., Redmond, C., Greenberg, M. T., & Feinberg, M. E. (2011). Six-year sustainability of evidence-based intervention implementation quality by community–university partnerships: The PROSPER study. American Journal of Community Psychology, 48, 412–425.
  190. Sprague, J. (2007). Creating schoolwide prevention and intervention strategies. Published by Hamilton Fish Institute on School and Community Violence & Northwest Regional Education Laboratory, with support from OJJDP and USDOJ. Retrieved at: http://gwired.gwu.edu/%20hamfish/merlin-cgi/p/downloadFile/d/20707/n/off/other/1/name/prevention.pdf/
  191. Stevenson, J. F., & Mitchell, R. E. (2003). Community level collaboration for substance abuse prevention. American Journal of Community Psychology, 23, 371–404.
  192. Sung, N. S., Crowley, W. F., Genel, M., Salber, P., Sandy, L., Sherwood, L. M., & Rimoin, D. (2003). Central challenges facing the clinical research enterprise. Journal of the American Medical Association, 289, 1278.
  193. Sussman, S., Valente, T. W., Rohrbach, L. A., Skara, S., & Pentz, M. A. (2006). Translation in the health professions: Converting science into action. Evaluation & the Health Professions, 29, 7–32.
  194. Tibbits, M. K., Bumbarger, B. K., Kyler, S. J., & Perkins, D. F. (2010). Sustaining evidence-based interventions under real-world conditions: Results from a large-scale diffusion project. Prevention Science, 11, 252–262.
  195. Trochim, W., Kane, C., Graham, M. J., & Pincus, H. A. (2011). Evaluating translational research: A process marker model. Clinical and Translational Science, 4, 153–162. doi:10.1111/j.1752-8062.2011.00291.x.
  196. Tseng, V. (2012). The uses of research in policy and practice. Social Policy Report, 26, 1–16.
  197. U. S. Department of Health and Human Services, Affordable Care Act (2011). Overview retrieved at http://www.whitehouse.gov/healthreform/healthcare-overview. Accessed 1 Nov 2012.
  198. U. S. Department of Health and Human Services, Health Resources and Services Administration, Maternal and Child Health Bureau. (2011). The health and wellbeing of children in rural areas: A portrait of the nation 2007. Rockville: U.S. DHHS.
  199. Valente, T. W. (2010). Social networks and health: Models, methods, and applications. New York: Oxford.
  200. Valente, T. W. (2012). Network interventions. Science, 337, 49–53. doi:10.1126/science.1217330.
  201. Valente, T. W., Chou, C. P., & Pentz, M. A. (2007). Community coalitions as a system: Effects of network change on adoption of evidence-based substance abuse prevention. American Journal of Public Health, 97, 880–886.
  202. Valentine, J. C., Biglan, A., Boruch, R. F., Castro, F. G., Collins, L. M., Flay, B. R., & Schinke, S. P. (2011). Replication in prevention science. Prevention Science, 12, 103–117.
  203. Wandersman, A., & Florin, P. (2003). Community intervention and effective prevention. American Psychologist, 58, 441–448.
  204. Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., & Saul, J. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41, 171–181.
  205. Weissberg, R. P., & Greenberg, M. T. (1998). Prevention science and collaborative community action research: Combining the best from both perspectives. Journal of Mental Health, 7, 479–492.
  206. West, S. G., Duan, N., Pequegnat, W., Gaist, P., Des Jarlais, D. C., Holtgrave, D., & Mullen, P. D. (2008). Alternatives to the randomized controlled trial. American Journal of Public Health, 98, 1359–1366.
  207. Westfall, J. M., Mold, J., & Fagnan, L. (2007). Practice-based research—“blue highways” on the NIH road map. Journal of the American Medical Association, 297, 403–406.
  208. Wilcox, S., Dowda, M., Leviton, L. C., Bartlett-Prescott, J., Bazzarre, T., Campbell-Voytal, K., & Wegley, S. (2008). Active for life: Final results from the translation of two physical activity programs. American Journal of Preventive Medicine, 35, 340–351.
  209. Wilson, K. M., Brady, T. J., Lesesne, C., & on behalf of the NCCDPHP Work Group on Translation. (2011). An organizing framework for translation in public health: The knowledge to action framework. Preventing Chronic Disease, 8, 1–7.
  210. Woolf, S. H. (2007). Potential health and economic consequences of misplaced priorities. Journal of the American Medical Association, 297, 523–526.
  211. Woolf, S. H. (2008). The meaning of translational research and why it matters. Journal of the American Medical Association, 299, 211–213.
  212. Woolf, S. H., Husten, C. G., Lewin, L. S., Marks, J. S., Fielding, J. E., & Sanchez, E. J. (2009). The economic argument for disease prevention: Distinguishing between value and savings. A Prevention Policy Paper commissioned by Partnership for Prevention. Retrieved at http://www.prevent.org/data/files/initiatives/economicargumentfordiseaseprevention.pdf. Accessed 1 Dec 2012.
  213. World Health Organization. (2008). The global burden of disease: 2004 update. Geneva: WHO Press.

Copyright information

© The Author(s) 2013

Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

  • Richard Spoth (1)
  • Louise A. Rohrbach (2)
  • Mark Greenberg (3)
  • Philip Leaf (4)
  • C. Hendricks Brown (5)
  • Abigail Fagan (6)
  • Richard F. Catalano (7)
  • Mary Ann Pentz (8)
  • Zili Sloboda (9)
  • J. David Hawkins (10)
  • Society for Prevention Research Type 2 Translational Task Force Members and Contributing Authors

  1. Partnerships in Prevention Institute, Iowa State University, Ames, USA
  2. Institute for Prevention Research, University of Southern California, Los Angeles, USA
  3. Prevention Research Center, The Pennsylvania State University, University Park, USA
  4. Johns Hopkins Center for the Prevention of Youth Violence, Johns Hopkins Bloomberg School of Public Health, Baltimore, USA
  5. Prevention Science and Methodology Group, University of Miami Miller School of Medicine, Miami, USA
  6. College of Criminology and Criminal Justice, Florida State University, Tallahassee, USA
  7. Social Development Research Group, University of Washington, Seattle, USA
  8. Institute for Prevention Research, University of Southern California, Los Angeles, USA
  9. Research and Development, JBS International, Inc., North Bethesda, USA
  10. Social Development Research Group, University of Washington, Seattle, USA