The field of prevention science boasts a growing armamentarium of effective preventive interventions, reflecting its increasing promise. However, we have yet to achieve significant scale with the most efficacious interventions, limiting the potential impact of prevention on population and public health. Further, there is considerable evidence that the goals of scale and quality often work in opposition: implementation quality and fidelity suffer as preventive interventions increase in scale, potentially diminishing returns. As prevention science continues to mature, the growing arena of implementation science has accelerated the pace at which these challenges are being addressed. We believe the field has entered an important new era, reflected by the growing number and sophistication of systems, tools, and processes aimed at increasing scalability without sacrificing high-quality implementation. Many researchers and research teams are working in isolation to address this important challenge. It is our hope that the pace of advancement can be increased by promoting a broader discussion across many researchers, practitioners, and policy makers. This special issue of the Journal of Primary Prevention, focusing on measurement and monitoring systems and frameworks for assessing implementation and adaptation of prevention programs, offers empirical examples that demonstrate the growth and development of prevention and implementation science and practice as these fields move us closer to scalable systems and practices in applied contexts. The collection of articles in this special issue reflects the diversity of contexts in which these challenges are being addressed and the important underlying questions that must be answered as preventive interventions seek scale with quality.

In their article Self-Reported Engagement in a Drug Prevention Program: Individual and Classroom Effects on Proximal and Behavioral Outcomes, Hansen, Fleming, and Scheier tackle the role of engagement in youth outcomes in the context of All Stars Core, a school-based drug prevention program (Hansen & Dusenbury, 2004). Given the importance of engagement for implementation outcomes, identifying measurement strategies that enable programs to continuously monitor engagement is warranted. This study provides an example of a brief, easy-to-implement measure that can be used to monitor engagement. Interestingly, within the context of this classroom-based intervention, classroom-level engagement was more predictive of outcomes than individual-level engagement, though individual engagement relative to peers still had an effect. While the authors were unable to tease apart this difference explicitly given the available data, they clearly point to the importance of context for the measurement of implementation variables.

In Reconciling Adaptation and Fidelity: Implications for Scaling Up High Quality Youth Programs, Anyon and colleagues address the tricky issue of assessing fidelity within more principle-driven initiatives, particularly youth voice programs (YVPs). Within this context, the authors suggest that fidelity and adaptation are not mutually exclusive concepts but rather can coexist and can be indicators of high-functioning programs, as long as adaptations are aligned with the core principles of the intervention. (For additional discussion and empirical support of this hypothesis, see also Moore, Bumbarger, & Cooper, 2013.) Further, the authors suggest that traditional fidelity measurement is not always appropriate for YVPs, and they challenge the field to consider the context of fidelity measurement and its impact on how such programs are evaluated.

Mauricio and colleagues introduce us to the Family Check-Up (FCU; Dishion & Stormshak, 2007) readiness assessment in Provider Readiness and Adaptations of Competency Drivers During Scale-Up of the Family Check-Up. Drawing on 112 practitioners across 19 sites, the authors identified four distinct constellations of practitioners based on inner setting variables, and these practitioner groupings predicted fidelity. This type of research has implications for both implementation science and practical, field-based implementation supports. The Mauricio et al. study challenges measurement to move from the consideration of individual variables to constellations of variables that may have unique effects. Practically, these measurement strategies suggest that precision supports may need to be developed to promote fidelity in settings with more limited options for practitioner selection.

Hill, Cooper, and Parker, in Qualitative Comparative Analysis: A Mixed-Method Tool for Complex Implementation Questions, introduce us to a measurement strategy novel to the prevention science field: Qualitative Comparative Analysis (QCA). While QCA has been used in other fields, Hill et al. use the treatment platform of the Strengthening Families Program (Spoth, Redmond, Trudeau, & Shin, 2002) to illustrate its utility within a large-scale, prevention-oriented dissemination initiative. QCA advances the field of prevention science in that the approach is designed to identify causal pathways for implementation, not merely to establish correlational relationships. Importantly, this model helps predict not just implementation but clinical outcomes as well.

In their article Classifying Changes to Preventive Interventions: Applying Adaptation Taxonomies, Roscoe and colleagues retrospectively examine the performance of four different taxonomies for assessing intervention adaptation, using the school-based intervention TOOLBOXTR (Collins, 2015) as the context. While the growth of frameworks and taxonomies in the literature leaves the field with many potential options to guide implementation, their applicability within prevention contexts is understudied. Roscoe and colleagues’ study highlights that no one taxonomy is clearly superior and that each has strengths and weaknesses. Their study provides a critical call for future research on adaptation taxonomies, and their recommendations are likely generalizable to other frameworks and taxonomies.

Berkel and colleagues, in Redesigning Implementation Measurement for Monitoring and Quality Improvement in Community Delivery Settings, provide a “vision for the future,” integrating technology into a monitoring and feedback system. As the field grapples with how to effectively scale evidence-based prevention programs while maintaining the rigor of intervention delivery, this article illustrates how the complexities of human interactions within service delivery settings can be captured within a technology-driven approach.

Collectively, these six articles demonstrate how prevention and implementation science are progressing to address the challenges of scaling up, particularly how evidence-based prevention programs are developing tools and infrastructure, including the monitoring of implementation and adaptation, to maintain quality at scale in non-research contexts. Our goal with this special issue is to present case studies of individual programs’ experiences (and some data) that collectively add to a broader, generalizable discussion about how the field is advancing in this regard. We are grateful to the authors for contributing to this important discussion. We also call the reader’s attention to the three commentaries included with this special issue (Homel, Branch, & Freiberg; DeRosier; and Lewis, Lyon, McBain, & Landes) and thank those authors as well for their insightful comments. The commentary authors had the benefit of reading all six articles and have done an excellent job of drawing out important themes that contribute to the broader discussion. All three commentaries point out the critical importance of context, the potential of technology, and the need for co-creating measurement and monitoring systems with end users, namely the practitioners, providers, and program recipients who must both provide the data on their experiences and utilize the feedback for continuous quality improvement.

Finally, for readers who are particularly interested in the movement to promote the scale-up of effective preventive interventions, we acknowledge several groups doing important work to advance this agenda.

  1. The Society for Prevention Research (www.preventionresearch.org). Early findings from several of the research projects highlighted in this special issue were first presented at the SPR Annual Conference. SPR also has, among its many committees, a Mapping Advances in Prevention Science (MAPS) Task Force on Scaling Effective Prevention Programs. This MAPS Task Force has recently completed a white paper (Fagan et al.) currently under review by SPR’s journal, Prevention Science. The white paper specifically examines the role and context of scaling effective preventive interventions within five public systems (i.e., public health, behavioral health, child welfare, education, and juvenile justice).

  2. The National Prevention Science Coalition to Improve Lives (www.npscoalition.org)—spearheading prevention science advocacy, especially in federal policy, and working to develop stronger partnerships between prevention scientists and policy makers.

  3. The Coalition for the Promotion of Behavioral Health (www.coalitionforbehavioralhealth.org)—an interdisciplinary group of researchers, practitioners, and policymakers guided by a National Academy of Medicine prevention implementation and dissemination framework called Unleashing the Power of Prevention (Hawkins et al., 2015).

These are only three examples of opportunities to join other researchers, practitioners, and policy makers in a broader discussion about how to achieve the promise of prevention to improve public and population health. We are grateful for the opportunity to advance that conversation through this special issue.