Introduction to the Special Issue

Training plays a critical role within the broader implementation research agenda involving evidence-based treatments (EBTs). EBTs, interventions that have demonstrated client improvement within the context of controlled trials (Kazdin 2008), tend to be complex, multisession treatment packages that hinge largely on the provider’s execution of a set of interventions with a satisfactory level of fidelity (Carroll et al. 2010; Chorpita and Regan 2009; Herschell et al. 2010). Although training alone is not sufficient to guarantee successful implementation, there is evidence that adequate training can reduce variations in provider behavior, improve fidelity, and, ultimately, increase the quality of service delivery (Aarons et al. 2011; Feldstein et al. 2008; Fixsen et al. 2005; Stirman et al. 2004) above and beyond that achieved by therapy as usual (Schoener et al. 2006; Simons et al. 2010). In light of the central role training plays in the implementation of EBTs in mental health service delivery settings (Karlin et al. 2010; McHugh and Barlow 2010), efforts to scientifically examine the impact of training and to establish best practices in training are essential (Herschell et al. 2010).

Our field is fortunate to be at the point at which EBTs have been designed, tested, and refined with much success; however, dissemination and implementation initiatives are still in their relative infancy (Becker et al. 2009). This early stage of evidence-based training development renders the field at a crossroads. One possible path parallels the traditional stage model of EBT development (Onken et al. 1997), with training methods developed and tested in university-based settings under tightly controlled conditions to demonstrate efficacy prior to testing effectiveness. As demonstrated by highly controlled studies of training (Miller et al. 2004; Sholomskas et al. 2005), this approach has merit inasmuch as it can identify necessary and sufficient training components, as well as the optimal dosage, sequencing, and spacing of training (see Rakovshik and McManus 2010 for a review of these factors in the training of cognitive behavioral therapy). However, as the stage model of psychotherapy development has demonstrated, the needs of the field, in terms of understanding effective, efficient treatment strategies that can be deployed in less tightly controlled settings, easily outpace the stages of research (Institute of Medicine 2001). Similarly, given that advances in EBTs have outpaced the development of implementation supports (i.e., evidence-based training) (Fixsen et al. 2005), the “research-to-practice gap” (McHugh and Barlow 2010) is at risk of continuing to widen, leaving a workforce with insufficient training and support in the very treatments that may have the best chance of improving the conditions of patients in need.

An alternative path to promoting the science of training involves fitting the training program into the existing mental health delivery system (Stirman et al. 2010) to examine how features of the setting can inform adaptations and support the training program. This is an approach that parallels innovations in community-based EBT development (e.g., Hoagwood et al. 2002; Weisz et al. 2004) and that is occurring within a few notable large-scale dissemination and implementation initiatives (e.g., Godley et al. 2011; Nakamura et al. 2011a; Chorpita et al. 2002). A dynamic exchange of information within a formative evaluation process may enhance the fit, and ultimately the sustainability and effectiveness, of the training program within the service delivery system (Damschroder et al. 2009; Palinkas et al. 2008).

Relatively few studies have been conducted in the context of larger implementation efforts (Beidas and Kendall 2010), yet such programs provide excellent opportunities to study training. As rising numbers of clinicians are trained through large-scale implementation programs (Karlin et al. 2010; McHugh and Barlow 2010; Nakamura et al. 2011a, b; Rakovshik and McManus 2010) or through smaller public-academic partnerships (Simons et al. 2010; Stirman et al. 2009), these programs have the potential to advance our knowledge about the effectiveness of training methods in the settings where treatments are ultimately delivered. While the scale of these programs, along with constraints in funding and policy, can limit the feasibility of rigorous experimental methodologies in some cases, much can be learned about feasible and effective methods of training as well as the impact of training on a variety of individual-, organization-, and system-level factors that may influence the use of EBPs after training. The papers in this issue represent preliminary efforts to leverage and enhance training and implementation efforts by reviewing and highlighting relevant methodologies, conceptual models, and literatures; identifying trends with regard to measurement of key variables and outcomes; and spotlighting efforts to study training in the context of implementation efforts that range from small- to large-scale. This collection of papers also identifies potential next steps, current gaps in knowledge, and advances in methodologies that are critical to advancing the science of training and implementation.

As a starting point for this issue, Beidas et al. (2011) propose recommendations for the development of evidence-based training approaches that are responsive to the growing demand for EBTs in an era of mental health service delivery characterized by increasing emphasis on efficiency, quality assurance, and improved patient outcomes (Schoenwald et al. 2010b). Their recommendations reflect an extension of the current implementation literature and include the use of a comprehensive model and consistent nomenclature (Proctor et al. 2011), consideration of alternatives to the stage model of psychotherapy development, formative evaluation and iterative review in the development of EBPs (Damschroder et al. 2009), and infusion of technology throughout the stages of training (Cucciare et al. 2008). Their description of the Behavioral Activation Demonstration Project provides a rich illustration of these principles in practice. Although the effectiveness of this integrated training method remains to be evaluated, it heralds an evidence-based training development approach that has the potential to enhance stakeholder involvement and workforce development within the context of an increasingly industrialized mental health service marketplace (Schoenwald et al. 2010b).

Just as our field can advance training development by adopting models and terminology espoused in the larger dissemination and implementation literature (Beidas et al. 2011, this issue; see Damschroder et al. 2009, for a review), so too can the field maximize its training development efforts by capitalizing on training innovations already established in other fields. Lyon et al. (2011, this issue) present a review of evidence-based training strategies (e.g., coaching, self-regulated learning) from other disciplines (e.g., education, medicine) and discuss their application to mental health. Although there are no “magic bullets” (Oxman et al. 1995), it is widely recognized that the single-exposure (i.e., “train and hope;” Stokes and Baer 1977) approach to training is ineffective at enhancing provider skills and implementation (Fixsen et al. 2005). Lyon et al. (2011, this issue) summarize strategies that may each have small effects when used separately but have the potential for greater impact when used in combination, whether simultaneously or sequentially. The nascent science involving the application of these individual strategies to training mental health clinicians within the context of their professional settings holds promise for advancing the field towards the development of best practices in training.

As training models and strategies are being developed and tested, multilevel assessment is critical to understanding their success (Proctor et al. 2011). Decker et al. (2011, this issue) present a review of methods intended to assess short-term (i.e., therapist satisfaction and change in knowledge, attitudes, and skills related to EBPs) and long-term (i.e., therapist behavior change, client treatment progress) training outcomes. Their review, organized within the structure provided by a synthesis of existing frameworks (i.e., Kirkpatrick 1967; Rogers 2003; Hammick et al. 2007), highlights the reliability, validity, and feasibility of various assessment methods as well as variation in the depth of empirical attention paid to each outcome domain in the field thus far. As their review indicates, an important advance will be the assessment of multiple domains of training outcomes. Outcomes in the field are weighted towards knowledge, attitudes, and self-reported adherence, with less information about therapist implementation and client outcomes. While the papers in this special issue also reflect the larger literature in terms of their reliance on self-reports of clinician factors and implementation, for the field to move forward, there is a critical need for creative, reliable, and efficient measurement of both adherence and skill (Schoenwald et al. 2010a), and for further assessment of client-level outcomes (cf. Proctor et al. 2011).

The review papers by Beidas et al., Lyon et al., and Decker et al. highlight the breadth of some of the fundamental issues (e.g., conceptual models, training strategies, and assessment methods) facing our field with regard to developing the science of training. The remaining four papers included in this issue scientifically examine these issues in more depth within the context of existing implementation efforts.

Nakamura et al. (2011, this issue) examine key elements (i.e., knowledge and attitudes) of Rogers’ (2003) innovation diffusion within the context of state-sponsored continuing education workshops for clinicians working with youth in the public sector. Their research provides empirical evidence of the association between low pre-training knowledge of, and negative attitudes towards, EBPs. These findings complement those of Lopez et al.’s (2011, this issue) comparison of providers with prior training in a particular EBT (i.e., behavioral parent training) to providers who received the training for the first time. Lopez et al. (2011, this issue), who sampled clinicians who participated in workshops conducted under a state mental health system mandate, found that clinicians with previous exposure were more likely to report use of behavioral strategies after the workshop. This finding supports a widely held assumption that has received little, if any, previous empirical support. Together, Nakamura et al.’s (2011, this issue) and Lopez et al.’s (2011, this issue) work points to the value of empirically examining strategies to address variables (e.g., knowledge, attitudes) that are thought to be integral to adoption and implementation (Aarons et al. 2011; Rogers 2003), whether by staging EBT exposure and training, capitalizing on previous experience, or strategically using agency leadership to enhance knowledge of and attitudes towards EBTs (Aarons 2006).

Parallel to the weighty issues surrounding EBPs, questions involving who should be trained, to what criterion of skill or competency (see Decker et al. 2011, this issue), in what settings, and under what training and support conditions (see Lyon et al. 2011, this issue) are critical to the development of a workforce capable of successfully delivering EBTs (Schoenwald et al. 2010b). Current research is just beginning to scratch the surface of these issues of workforce development. Hepner et al. (2011, this issue) demonstrated that addiction counselors at a publicly funded addiction treatment agency, who did not have previous mental health training, could be trained to deliver cognitive behavioral therapy (CBT) for depression with sufficient proficiency. Consistent with literature suggesting that blended learning approaches (Cucciare et al. 2008) and follow-up support (Miller et al. 2004; Sholomskas et al. 2005) can facilitate the uptake and sustainability of EBPs, their intensive training method comprised lectures, demonstrations, and experiential exercises, as well as ongoing supervision that included review of client progress, review of audio recordings, and preparation for the next session. The promising findings of and lessons learned from their pilot study have implications for the integration of mental health and addiction services as a strategy for broader implementation of EBTs targeting complex combinations of mental and behavioral health problems.

Within the context of a study involving a university-community partnership training program in cognitive behavioral therapy for depression, Lewis and Simons (2011, this issue) utilized intensive training methods (i.e., didactics, demonstrations, experiential exercises, on-site supervision, and monthly consultation support) similar to those described by Hepner et al. (2011, this issue) and in the literature (e.g., Sholomskas et al. 2005; Cucciare et al. 2008). Unfortunately, following training and eight months of consultation, therapists reported relatively low implementation in their routine practice. Although the authors measured clinician attitudes towards ESTs and readiness for change, only perceived client (e.g., diagnostic complexities, cognitive abilities) and organizational (i.e., lack of supervision and training) barriers to ESTs were related to self-reported implementation. This study highlights the key challenge to training and implementation: despite intensive efforts to train and support clinicians, implementation is not guaranteed. What we learn about barriers can inform refinements to our training approaches (e.g., guidance regarding the appropriateness of EBTs for clients with diagnostic complexities) as well as to the EBTs in which we attempt to train clinicians (Beidas et al. 2011, this issue; Borntrager et al. 2009; Lewis and Simons 2011, this issue).

Taken together, the empirical papers in this issue (i.e., Hepner et al. 2011; Lewis and Simons 2011; Lopez et al. 2011; Nakamura et al. 2011) provide a snapshot of the current state of the burgeoning science of training. By examining factors such as knowledge, attitudes, self-reported implementation, and barriers to implementation within the context of training conducted in community-based mental health service settings, they add to a literature largely composed of empirical research on these training precursors and outcomes. The three review papers provide recommendations to further advance the science of training through the application of implementation models (Beidas et al. 2011, this issue), technology (Beidas et al. 2011, this issue), and evidence-based training strategies from other fields (Lyon et al. 2011, this issue), and through more rigorous assessment of a broader range of training outcomes, particularly those associated with actual changes in clinician behavior and client outcomes (Decker et al. 2011, this issue).

In addition, we would like to suggest several areas of promise. First, because making large-scale training programs sustainable is critical, macrolevel examination of supervision and consultation, cascade models (i.e., training the trainer to provide onsite supervision; Herschell et al. 2010), and technology-based (Beidas et al. 2011, this issue) primary and/or adjunctive training supports in existing mental health delivery systems is needed. Additionally, microlevel investigation of the scope, dosage, sequencing, and other parameters that enhance the effectiveness of these training supports may advance the development of evidence-based training approaches, as has occurred with the exemplary development and refinement of evidence-based supervision procedures (e.g., performance feedback) in multisystemic therapy (Henggeler et al. 2002; Schoenwald et al. 2009).

Second, because the effective implementation of EBTs involves more than just the delivery of the treatment itself (Institute of Medicine 2001; Kazdin 2008), the study and refinement of tools that enhance the clinical decision-making of the individual clinician are of utmost importance to the sustainability of EBTs in service delivery systems. On the one hand, although access to expert supervision or consultation may improve treatment adherence and implementation (e.g., Schoenwald et al. 2009), clinical decisions that are tightly controlled by an expert supervisor, as is usually the case in research trials, may limit the ability of a clinician to competently implement an EBT without those supports. On the other hand, a clinical decision-making heuristic that helps clinicians use client data to inform adaptations to the treatment plan (Chorpita et al. 2008) may enhance training as well as clinical outcomes. Progress monitoring and feedback systems provide a structure for regular evidence-based assessment of client treatment progress or other outcomes of interest (e.g., therapeutic alliance), as well as a mechanism for interpretation of and feedback about these data (Bickman 2008; Chorpita et al. 2008; Higa-McMillan et al. 2011; Seidman et al. 2010). Such systems are virtually absent from training research, yet their study can provide a great deal of information about how best to support quality improvement and the sustainability of an intervention within the clinical context.

The studies in this issue highlight critical issues and point to the promise of conducting research in the context of community-based training and implementation programs. For the field to continue advancing the science of training and the goals of improving the quality of clinical services and the lives of clients in need, methodologically rigorous research will be necessary to further develop scalable, cost-efficient training strategies and to examine their impact at the level of the clinician, the client, and the system. Additionally, qualitative research and mixed-methods strategies (Palinkas et al. 2010) can be used to identify stakeholder preferences, refinements to training strategies, and the interaction between training efforts and service delivery settings. Such research will require substantial time, resources, and close collaboration between researchers, policymakers, service delivery systems, and stakeholders. Exciting large-scale EBT training programs (McHugh and Barlow 2010), ongoing public-academic partnerships, and NIMH’s funding prioritization of research on implementation and fidelity (Proctor et al. 2009) will facilitate further advances in knowledge in the upcoming years. We look forward to advances in research methodologies and training strategies, and to learning about the impact of these practices on both service delivery and the lives of mental health consumers.