The focus of this special issue is on identifying factors that predict a school’s likelihood of implementing social–emotional learning (SEL) interventions with high quality. These factors are, in this case, interpreted as readiness to implement. The editors define implementation readiness as the capacity to implement an evidence-based intervention (EBI) effectively. Though not stated explicitly, readiness in this definition appears to be a characteristic of the implementers (i.e., teachers or schools). The model proposed by the editors, and reflected to varying degrees in each of the seven studies, involves documenting teacher, classroom, and school variables that predict high-quality implementation and using those variables to create readiness profiles and tailor implementation supports.

SEL programs have demonstrated convincing efficacy for improving the social and academic development of children (Durlak et al. 2011). As practitioners and policy makers become more convinced of the fundamental importance of SEL as a foundation for quality education and child development, the challenge of effectively scaling SEL programs and practices becomes more critical and timely. This special issue addresses an important empirical question: Can we identify factors that represent “readiness” of a school (and its teachers and classrooms) to adopt an SEL program and deliver it with sufficient quality and fidelity to reproduce the improvements in social and academic outcomes demonstrated in controlled trials? The seven SEL implementation studies presented in this special issue depict a complex picture of delivering SEL programs in schools and assessing both implementation and impact. As a result, across these seven studies and other similar studies, we arrive at a laundry list of variables that may influence implementation quality, fidelity, and reach (and that may, in turn, affect program effects and sustainment).

So what can be made of the complex readiness model characterized across these seven studies? The collective body of SEL implementation research, exemplified in the articles of this special issue and more broadly, serves both academic ends (i.e., increasing our generalizable knowledge) and utilitarian ends (i.e., practically advancing the scaling of SEL practice in schools). Though these goals often overlap and are advanced simultaneously, for the sake of clarity and space this commentary will primarily address the practical and pragmatic value and lessons of this special issue, with admittedly less attention paid to issues of methodology, analytic techniques, or study design.

Considering the lessons we can draw across these seven SEL implementation studies, we might start with the end in mind: What could we do if we arrived at a clear list of the most important predictors of high- (or low-) quality implementation? It is a herculean task to elucidate the characteristics of social–emotional development and then use that knowledge to craft an intervention intended to promote such development. To further demonstrate, in the context of a rigorous experimental trial, that such an intervention can produce statistically and practically significant improvements relative to a control condition is equally challenging, and is not accomplished without a commitment to sound theory and a solid understanding of the school and classroom context (Flay et al. 2005). So the achievement of each of these programs in demonstrating efficacy must be recognized. Taking such evidence-based SEL programs to scale across enough classrooms and schools to create a population-level impact is another matter altogether and presents an additional list of potential barriers and challenges to both research and practice (Rohrbach et al. 1996, 2006; Bumbarger and Perkins 2008). However lofty that ambition, it is indeed the goal of all who work in this field: to change the way schools operate, teachers teach, and students learn. So it is critically important that we not only develop efficacious SEL programs but also make those programs so robust in their effectiveness, and so finely tuned for adoption, efficiency, and impact, that we can realistically expect them to be widely embraced, delivered with quality, and sustained.

Although there is an established, convincing body of empirical evidence for the power of SEL programs (CASEL 2013; Durlak et al. 2011), there is likewise a considerable body of evidence demonstrating variability in implementation (Kallestad and Olweus 2003; Durlak and DuPre 2008), especially in non-research contexts (Gottfredson and Gottfredson 2002; Moore et al. 2013; Payne et al. 2006), as well as a strong correlation between implementation and impact (Fixsen et al. 2005). Many studies have concurrently examined characteristics of schools, classrooms, and teachers and attempted to associate those characteristics with high or low levels of implementation (Durlak et al. 2011), but the articles in this special issue seek to address a priori hypotheses about predictors of implementation, getting at important questions about the level of readiness that might be required to deliver effective SEL programming successfully. Understanding what constitutes “readiness” advances a number of areas necessary to scale SEL. Perhaps foremost, it addresses questions of feasibility, both of specific SEL programs and of the broader goal of promoting an SEL agenda in public education. For example, if across multiple implementation studies we were to find that “readiness” is embodied by teacher, classroom, and school contexts that are quite atypical, it would be reasonable to conclude that most schools will not be in a position to adopt such programs and, conceivably, that the schools and students most in need of SEL programming are the least likely to be considered ready. Fortunately, the studies in this special issue do not paint such a bleak picture. On the contrary, across these diverse implementation studies, the feasibility of delivering SEL programming, even in quite challenging environments, is clearly established. So perhaps the focus of the readiness question is not the feasibility of delivering an SEL program in a specific school (i.e., is the school “ready”?) but the malleable factors that decrease variability in implementation across different classroom and school contexts.

By better understanding these factors (including their malleability, interconnectedness, and strength of association with program outcomes), we may be able to identify specific, robust levers that can generate improvement in program implementation across multiple contexts and measures. For example, a number of these studies identified teachers’ emotional state, burnout, and stress as predictive of SEL program implementation (Wehby et al. 2011).

Although a few potentially generalizable “themes” emerged across the studies included in this special issue, as described by Wanless and Domitrovich in the Introduction, the overall thesis seems to lie in a motif of variability rather than consistency. As the editors point out, the variability in definitions and conceptualizations of both the dependent and independent variables, as well as in the relationships found, reflects the early stage of this line of SEL implementation research and the need for further conceptual and theoretical development. In that regard, although the knowledge-generation goal of such studies is inarguable, it might be of greater practical value to consider them “implementation epidemiology” investigations as the field continues to refine its conceptual models (e.g., Aarons et al. 2011; Berkel et al. 2011; Damschroder et al. 2009; Domitrovich et al. 2008; Dusenbury et al. 2005). Moving from studying and understanding the epidemiology of a problem to developing and testing interventions based on that knowledge has historically been a slow and incremental process. Constrained by that standard, the goal of developing a global assessment of readiness to implement SEL programs, one that would inform a framework of capacity-building efforts universally applicable across SEL interventions (e.g., Flaspohler et al. 2012; Wandersman et al. 2008), seems a lofty and distant ambition.

As a practical consideration, the idea of using a readiness profile to inform general (i.e., not specific to any EBI) capacity building may be problematic in actual practice. EBIs are typically based on a particular underlying etiologic theory and have specific intervention components based on that theory, and so lend themselves particularly well to identifying and building capacities to utilize and monitor that structure of delivery (Domitrovich et al. 2010; Greenberg et al. 2005; Kazdin and Weisz 2003). However, there are few, if any, schools or provider organizations delivering only evidence-based programs (Gottfredson and Gottfredson 2002). Rather, when an EBI is present in a school’s or provider organization’s portfolio, it coexists with a number of other programs or interventions, some or most of which have not been rigorously evaluated and often have no clearly articulated underlying theory. If a program has no specified theory to serve as a rationale for identifying core components, what would be the target of fidelity monitoring? Although fidelity monitoring is only one example of a general capacity that could be identified through readiness profiling, the underlying challenge is likely widely relevant.

To get beyond the excruciatingly slow, incremental progress we have come to accept, perhaps our entire research–industrial complex for the discovery and development of interventions is due for a paradigm-shifting overhaul. For example, SEL trials have primarily been concerned with proving the efficacy of SEL as a mechanism of behavior change and academic achievement, because that proof had not yet been established. If you are trying to prove the hypothesis that an SEL-focused intervention can change behavior or school success, you do not develop the cheapest, leanest intervention to test. On the contrary, you turn all the dials to 10 and throw in everything but the kitchen sink to maximize the likelihood of achieving a detectable effect, especially in the context of a well-resourced trial. This is a logical approach to generating measurable differences in the intervention condition, but it leaves us with unnecessarily bulky and inefficient intervention designs that are not attractive to the target “customers” and are not easily replicated in non-research settings. Once a mechanism of intervention (such as SEL) has clearly established its general efficacy (e.g., through enough similar efficacy trials to produce a meta-analysis or systematic review), perhaps we should, as a field, shift into a design-optimization phase in which the primary challenge becomes drilling down to the cheapest, simplest, most robust intervention design possible. This might include developing program materials, as well as training and coaching methods, with as much attention to facilitating high-quality implementation (including adherence, dosage, and reach) as to achieving desired student outcomes. Rather than asking What are the minimum conditions under which a given SEL program can be effectively implemented?, the question becomes What are the essential characteristics of an SEL program and its support structure that qualify it as optimized for scale-up? In that regard, the value of studying readiness in the context of SEL implementation research is less about understanding where we can implement effective SEL programs (i.e., which schools are ready) and more about understanding what characteristics of our SEL programs, including their infrastructure (dissemination, training, coaching, etc.), we must improve to arrive at a model fully optimized for delivery at scale. This is perhaps a markedly different lens for considering readiness. Such an intentional, collective optimization phase of program development is not currently promoted by our traditional research mechanisms, academic reward systems, or priorities.

Beyond insufficiently developed intervention designs, we must recognize that we are typically attempting to deliver our interventions within systems that have existed far longer than our recently established science and that achieved a stasis in the absence of both our current knowledge base and our interventions. It is important to acknowledge systems such as public education and human services as well-established institutions, not empty vessels. These systems are not easily molded by external agents, so a model that relies primarily on fundamentally changing the capacities of such systems and their actors faces a heavy lift.

Another indicator that we are looking to intervene in systems that predate our science is the continuing bolt-on nature of our interventions and the unsystematic drivers of their adoption. SEL program adoption by schools is driven by a variety of catalysts but rarely reflects the culmination of a methodical diagnostic needs assessment (e.g., Walker et al. 2015) or the type of careful review that would go into selecting a new core academic curriculum. In our work with hundreds of schools and EBIs (described in Rhoades et al. 2012), we assessed the motivations of schools and service provider agencies for adopting prevention programs and found that frequently reported considerations include research evidence or listing in an EBI registry, program implementation cost, peer experiences or influence (i.e., other schools or providers), and the aesthetics of marketing materials. Careful consideration of the practical feasibility of implementation is uncommon. When feasibility is not considered, there is a good chance that the program will be a poor fit with teacher or school norms or priorities and that teachers will not be engaged and invested in the SEL program. Both are likely to lead to poor implementation and poor outcomes, which in turn deepen the poor fit and lack of engagement in a self-reinforcing negative cycle (Rhoades et al. 2012).

Perhaps rather than marveling at why SEL interventions that demonstrate effectiveness in randomized trials are not widely adopted and delivered with high quality and fidelity, we might more realistically start by acknowledging that this is a somewhat unrealistic and illogical expectation, given the process by which interventions typically evolve. So I offer two recommendations to accelerate the movement of effective SEL into widespread practice: (1) adopt a greater focus on the intervention itself as the primary driver of implementation quality and fidelity and (2) embrace greater use of technology to fundamentally change the way interventions are both designed and delivered.

In considering our challenges with implementation (whether of SEL or other types of EBIs), it might be informative to look to the evolution of the pharmaceutical sciences over the past century. In the early twentieth century, the significant breakthroughs in medicine were drugs to prevent and treat three conditions responsible for significant morbidity (pneumonia, tuberculosis, and diarrhea) and, later, “miracle drugs” such as penicillin. In the last half of the century, however, much of the emphasis shifted to the challenges of drug delivery and patient adherence. One of the most significant challenges to the efficacy and cost-effectiveness of medicines is delivering a sufficient dosage to the targeted part of the body. Historically, oral administration has been by far the most common delivery mechanism, primarily due to convenience, yet drugs taken orally must pass through the stomach and are thus subjected to the often-attenuating effects of its unfavorable pH. Traditionally, drug manufacturers had to compensate for this in the chemistry and manufacture of pills, resulting in non-essential compounds and potentially higher-than-necessary dosages of the “core” chemicals. Recent improvements, made possible by technological advances, are allowing a wide variety of new drug delivery vehicles that not only get drugs more directly to where they are needed, and at the optimal dosage, but in doing so actually drive down the cost of the drugs, because the extra unnecessary “buffering” can be reduced or eliminated. The growing number and variety of these drug delivery and patient adherence innovations is quite impressive: text message reminders, eye drops, patches, time-release capsules, even drugs administered in an inert form that are later activated via Wi-Fi signal once they have reached the target site. Much of this ingenuity is perhaps driven by market forces and the potential to increase profits, but the lesson for SEL and other psychosocial or behavioral interventions is still relevant: we must think creatively, beyond traditional mechanisms, and reinvent both the way our interventions are developed and the way we promote and monitor their delivery. SEL EBIs typically involve direct instruction and student-centered practices that create engaging learning environments in order to promote students’ analytical, communication, and collaboration skills. Perhaps ecological or environmental approaches that rely less on, and compete less with, classroom instruction would be less prone to implementation challenges (Odom et al. 2010).

Extending this medical metaphor, to address implementation challenges we should turn our attention to the pill rather than the patient. While I agree with the premise that this line of SEL implementation inquiry aligns with the goal of prevention science (preventing a problem before it occurs; here, the problem being poor-quality or highly variable implementation), it strikes me that focusing on the context rather than the program model is a more cumbersome, complex, and inefficient approach. Again using the medical metaphor, my doctor does not tailor my course of treatment based on how well he predicts I will adhere to the prescribed therapy. Because he has many patients, all with varying degrees of, and reasons for, adherence, it is more efficient to adjust the delivery mechanism to be universally adherence-prone. In all seven of the studies in this special issue, and in the editors’ cataloging of the independent variables assessed in SEL implementation studies generally, there are no variables related to the intervention itself. On one hand this is logical, because a typical implementation study evaluates only one SEL intervention, offering no comparison condition for intervention characteristics. However, it may be that characteristics of the intervention (rather than the contextual variables of school, classroom, teacher, or students) exert much greater influence on implementation, outcomes, and sustainment (Cooper et al. 2015; Dariotis et al. 2008), and these program characteristics are likely more malleable, given the inertia that naturally exists within large systems (Kaftarian et al. 2004). Given the idiosyncratic nature of the local context, clearly demonstrated by the variability of findings across these seven studies, it would seem much more efficient to optimize the intervention once than to optimize the context, based on a needs assessment, in each new school where a program is adopted. Admittedly, the sweet spot probably lies in some combination of both: continuous refinement of the intervention based on feedback from each new implementation (e.g., Bumbarger 2010, 2014; Hansen et al. 2009; Perez et al. 2011), with careful consideration of the degree to which local factors are or are not generalizable. If this could be promoted collectively (i.e., simultaneously across multiple SEL interventions rather than incongruously by individual programs based on each developer’s ideas and resources), the pace of progress would no doubt be much greater.

Consider the trusty No. 2 pencil. It seems that implementation, effectiveness, and sustainability become moot when the tool is perfectly designed for both task and context. The challenges to implementation experienced by each of these SEL programs, and the subsequent need to address readiness for their adoption and implementation, indicate that our interventions are still blunt instruments requiring significant refinement and redesign. This does not minimize the significance or value of having proved that SEL programs can improve both behavioral and academic outcomes, and do so rather inexpensively. Quite the contrary: knowing we have this efficacious tool in our toolbox creates an ethical imperative to find the best way to take it to scale and positively impact as many students and schools as possible. It remains to be seen whether the most efficient route to solving the readiness dilemma lies in changing the implementation environment or the interventions themselves.