INTRODUCTION

Cognitive behavioral therapy (CBT), acceptance and commitment therapy (ACT), and mindfulness-based stress reduction (MBSR) have demonstrated effectiveness for mental health conditions common among US military veterans, including depression and anxiety,1,2,3,4,5,6,7 and have more recently shown effectiveness for improving chronic pain outcomes.8,9 Nonetheless, these evidence-based psychotherapies (EBPs) remain underutilized and underaccessed in clinical settings.10,11,12 The Veterans Health Administration (VHA) has rolled out several national initiatives to implement EBPs for mental health conditions including depression and anxiety,13,14,15,16,17,18 as well as for chronic pain.13,14,18 The goal of these implementation initiatives is to increase uptake of EBPs (i.e., both access to and use of these therapies) for conditions they effectively treat. The results of such efforts to improve the uptake of these EBPs in VHA and similar clinical settings have not been systematically reviewed using implementation science frameworks.19

Implementation science attempts to understand and resolve problems in translating evidence-based therapies into real-world practice so that the greatest number of patients have ready access to gold-standard treatments.20,21,22,23 Strategic approaches to implementing EBPs must be responsive to existing health care contexts in order to successfully increase access to care. Implementation efforts should thus be tailored to disparate clinical settings, situated within differing patient and provider populations, and evaluated using varied and mixed-methods approaches. Synthesizing evidence on the results of implementation strategies is both challenging and necessary to improve access to and use of evidence-based care. Conceptual frameworks from implementation science,23,24,25,26 such as the Expert Recommendations for Implementing Change (ERIC) classification of implementation strategies and the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) classification of outcome domains (Table 1), delineate how interventions interact with health system components and patient factors to affect patient access to treatments. Applying these conceptual frameworks can clarify lessons from past EBP implementation approaches and inform future efforts to improve EBP access and use. For example, Adoption in RE-AIM refers to the proportion and representativeness of settings and staff willing to deliver an intervention, which is a prerequisite for patient access to that intervention. If initial Adoption is low in some settings, an implementation strategy such as a staff training and education program may increase Adoption and thereby facilitate patient access to care.

Table 1 RE-AIM Framework Domains and Definitions

To identify research gaps and the next steps for improving EBP access and use, the VHA Pain/Opioid Consortium of Research (CORE) engaged the VHA Evidence Synthesis Program (ESP) to conduct a systematic review of factors related to the implementation of psychotherapies with evidence of effectiveness for chronic pain and mental health conditions.27 In this paper, we present results focusing on implementation strategies for CBT, ACT, and MBSR to treat chronic pain or mental health conditions. We summarize outcomes reported by studies of various implementation programs, organized by the type(s) of implementation strategies employed. We also discuss evidence gaps and provide recommendations for future research.

METHODS

Scope and Key Questions

In collaboration with our key stakeholder, the VHA Pain/Opioid CORE, and our expert advisory panel, we developed the scope and key questions. Recognizing that there is very limited evidence on the implementation of EBPs specifically for the treatment of chronic pain, we expanded the scope to include implementation studies of EBPs to treat chronic mental health conditions such as depression and anxiety. In this paper, we focus on results for CBT, ACT, and MBSR, three of the most widely used EBPs with the strongest evidence of effectiveness for chronic pain as well as for multiple mental health conditions common among US military veterans.8,9 The full ESP report included other EBPs that have demonstrated effectiveness in the treatment of chronic mental health conditions and are recommended by various treatment guidelines.27,28,29,30

In this paper, we present results on CBT, ACT, and MBSR that addressed the following key questions:

  1. For CBT, ACT, and MBSR used to treat adults with chronic pain, what is the effect of implementation strategies to increase the uptake of these treatments?

  2. For CBT, ACT, and MBSR used in integrated delivery systems to treat adults with chronic mental health conditions, what is the effect of implementation strategies to increase the uptake of these treatments?

Search Strategy

We searched MEDLINE, Embase, PsycINFO, and CINAHL databases from inception through March 2021. Search terms included MeSH and free-text terms for EBPs (e.g., CBT, ACT, and MBSR), chronic pain, integrated delivery systems, and veterans (Appendix A). We sought relevant systematic reviews from the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) and the VHA Evidence Synthesis Program (ESP), and we hand-searched these reviews for potentially eligible studies. Our expert advisory panel also referred potentially relevant studies.

Screening and Selection

Duplicate results were removed and abstracts were screened using DistillerSR (Evidence Partners, Ottawa, Canada). Exclusion of abstracts required the agreement of 2 reviewers. Included abstracts underwent full-text review by 2 individuals, with eligibility decisions requiring consensus. Eligible studies addressed implementation barriers or facilitators, or outcomes of implementation strategies, for EBPs used in the outpatient treatment of adults with chronic pain or chronic mental health conditions. In addition to CBT, ACT, and MBSR, eligible EBPs (for key question 2) in the full report included other therapies effective for chronic mental health conditions common in the VA patient population (e.g., Prolonged Exposure Therapy for posttraumatic stress disorder [PTSD]). Eligible studies were conducted in the United States (US), United Kingdom (UK), Ireland, Canada, or Australia. We expanded the setting beyond the US to these countries because they have integrated health systems with qualities similar to VHA, as well as comparable economic, public health, and cultural contexts, including the predominant use of English. Finally, we limited eligible studies addressing key question 2 to those conducted within large integrated healthcare systems, as these settings are more similar to VHA than are small community clinics. For full eligibility criteria, see Appendix B.

Quality Assessment and Data Abstraction

Two reviewers independently assessed quality using modified versions of the Newcastle–Ottawa Scale31 (for quantitative studies) or the Critical Appraisal Skills Programme (CASP)32 checklist (for qualitative studies). For studies using mixed methods, we used both sets of criteria as applicable. We rated overall quality as high, moderate, or low; consensus was reached through discussion (see Appendix C for detailed quality criteria and ratings for eligible studies).

Eligible studies underwent independent data abstraction by 2 individuals for the following: participant characteristics and setting (e.g., country and VHA or community clinics); EBP; data sources and analytic methods (e.g., semi-structured interviews and framework analysis, or surveys and multivariate logistic regression); and outcomes. We extracted demographic data in categories consistent with the terminology used by the authors. We classified implementation strategies according to the Expert Recommendations for Implementing Change (ERIC) project.23,24 Most studies assessed programs that combined various implementation strategies, and their study designs did not make it possible to determine which outcomes resulted from individual strategies. We grouped studies empirically based on specific combinations of ERIC strategies, both to describe the relative frequencies of different combinations reported in eligible studies and to facilitate outcome comparisons between similar implementation programs.
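As a purely illustrative sketch (the study identifiers and strategy labels below are hypothetical, not data from this review), the grouping step amounts to keying each abstracted study on the exact set of ERIC strategies it employed:

```python
# Illustrative sketch only: group abstracted studies by the exact combination
# of ERIC implementation strategies each one employed (hypothetical data).
from collections import defaultdict

studies = {
    "Study A": {"training/education", "facilitation", "audit/feedback"},
    "Study B": {"training/education", "audit/feedback"},
    "Study C": {"training/education"},
    "Study D": {"access to new funding"},
}

# Studies sharing an identical strategy combination fall into the same group,
# which supports outcome comparisons between similar implementation programs.
groups = defaultdict(list)
for study, strategies in studies.items():
    groups[frozenset(strategies)].append(study)

for combo, members in sorted(groups.items(), key=lambda kv: -len(kv[0])):
    print(sorted(combo), "->", sorted(members))
```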

All quantitative results were abstracted by one reviewer and over-read by a second reviewer. Qualitative results were independently coded by at least 2 reviewers, with final codes reached by consensus. For implementation outcomes that support access to care, a priori codes were generated from the RE-AIM framework (Table 1): Reach (reaching the target population with the intervention), Effectiveness (knowing the intervention is effective in context), Adoption (developing staff and organizational support to deliver the intervention), Implementation (ensuring the intervention is delivered consistently and with fidelity), and Maintenance (supporting intervention delivery over the long term).26 We used a best-fit framework synthesis approach to categorize outcomes within the framework. For example, we categorized provider attitudes and self-efficacy within Adoption; these provider factors are important for understanding why some providers will (or will not) use certain EBPs.

Qualitative Synthesis

Given the heterogeneity in populations, the different EBPs implemented for various health conditions, and the range of study designs, we elected to conduct a qualitative synthesis. We first created tables with detailed results (classified or coded as described above). We then grouped results for studies using similar ERIC strategies (or combinations thereof), reviewed these results within groups and across articles addressing the same EBP, and identified implementation outcomes within the RE-AIM framework. When describing the magnitude of standardized effect sizes (e.g., Cohen’s d or Hedges’ g) reported by studies, we used conventional recommendations for interpretation: small is less than 0.4, medium is 0.4–0.7, and large is greater than 0.7.33 We did not undertake a formal assessment of the overall certainty of evidence.
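For reference, these are the standard definitions for two independent groups (included studies may have used pre-post or other variants with different denominators):

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}, \qquad g \approx d\left(1 - \frac{3}{4(n_1 + n_2) - 9}\right) $$

where $\bar{x}_1, \bar{x}_2$ are group means, $s_1, s_2$ are group standard deviations, and $n_1, n_2$ are group sizes. Under the convention above, for example, d = 0.5 would be described as a medium effect and d = 1.1 as a large effect.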

RESULTS

Overview and Characteristics of Eligible Studies

We screened 7295 unique citations and reviewed the full text of 506 articles (Fig. 1). Among 67 eligible articles, 10 studies (reported by k = 12 articles) addressed implementation strategies for CBT (9 studies) and ACT (1 study) (Table 2). We did not identify any eligible study on implementation strategies for MBSR. Conditions addressed included chronic pain (1 study),14 depression and/or anxiety (7 studies),15,18,34,35,36,37,38,39 insomnia (1 study),16,17 and PTSD (1 study).40 Most eligible articles described studies conducted in the US (k = 11), with the vast majority in VHA settings (k = 10). Among VHA-based articles, half evaluated pre-training to post-training outcomes following national initiatives to implement CBT or ACT (k = 5).14,15,16,17,18 Eleven articles were of high or moderate quality, and most used quantitative (k = 10) or mixed methods (k = 1), with the remaining article reporting only qualitative results. Evaluations examined data for providers (range 5–391) who completed training programs and for patients (range 113–745) treated by these providers (Table 2; see Appendix D for detailed characteristics of eligible studies).

Figure 1 Search and selection of eligible articles. Ten studies were described by 12 eligible articles on the implementation of CBT or ACT (out of a total of k = 67 eligible articles). ACT = Acceptance and Commitment Therapy; CBT = Cognitive Behavioral Therapy

Table 2 Characteristics of Studies Evaluating the Implementation of Cognitive Behavioral Therapy (CBT) and Acceptance and Commitment Therapy (ACT)

Eligible studies fell into 4 distinct groups based on the type or combination of implementation strategies used: (1) training/education, facilitation, and audit/feedback; (2) training/education and audit/feedback; (3) training/education; and (4) access to new funding (see Table 2 for studies within each group). We applied ERIC definitions to guide the classification of individual strategies: training/education included workshops and other provider educational resources; facilitation was interactive support provided by internal or external individuals (e.g., resources and support provided by centralized VHA training initiatives to individual sites); and audit/feedback entailed the collection and summary of clinical performance data (e.g., fidelity measures, recommendations during consultation) given to administrators or clinicians to modify behaviors and enhance fidelity.23,24

We summarize evaluation outcomes reported by studies in each of these 4 groups, with further categorization using RE-AIM (Table 3; detailed results are found in Appendix D).

Table 3 Outcomes for the Implementation of CBT and ACT: Results by Implementation Strategies and RE-AIM Categories

Group 1: Training/Education, Facilitation, and Audit/Feedback

Six studies (k = 8 articles) evaluated VHA training programs for CBT (5 studies, k = 7)14,15,16,17,35,36,37 and ACT (1 study, k = 1),18 using survey data from trainees and providers who completed training, and information about patients treated by providers who were trained by these programs. Four studies in this first group evaluated VHA national initiatives for CBT and ACT (k = 5 articles)14,15,16,17,18 that used training/education, facilitation, and audit/feedback (Table 2). VHA provided facilitation through centralized resources and support, and all initiatives involved structured programs of in-person workshops (2–3 days) followed by 6 months of weekly consultation with experts. For consultation sessions, trainees were required to submit audio recordings of therapy sessions with patients, which were rated for fidelity. The vast majority of providers enrolled in national VHA training programs for CBT and ACT completed the mandated requirements (range 82–93%, 60–334 providers).14,15,16,17,18

This first group also included VHA studies of implementation efforts that were not national in scale but still involved structured training, facilitation, and audit/feedback.35,36,37 One study included 28 mental health providers in regional implementations of CBT for depression (k = 2 articles).36,37 Implementation efforts involved a 1.5-day CBT workshop and biweekly expert consultation group calls over 12 weeks post-workshop. In addition, 12 therapists at 10 sites were randomly assigned to receive external facilitation, consisting of monthly meetings for 6 months.36,37 The second study involved 9 Primary Care Mental Health Integration (PCMHI) providers at 2 VHA sites (4 providers completed all training modules).35 This study evaluated the implementation of brief CBT in primary care for depression and anxiety; online training was followed by feedback from expert clinicians. Project staff also engaged providers and clinic leadership to facilitate implementation.35

Overall, studies in this first group reported outcomes addressing Effectiveness, Adoption, Implementation, and Maintenance; none reported on Reach (Table 3 and Appendix D). For Effectiveness, most implementation programs led to moderate to large effects on patient symptoms and quality of life (Cohen’s d 0.34–2.2; Table 3 and Appendix D). Only the VHA study of regional implementation of CBT examined the effects of external facilitation independently of training/education; external facilitation showed no added benefit for CBT use or for CBT-specific knowledge and skills at 3 months post-workshop.37

Group 2: Training/Education and Audit/Feedback

In the second group, 2 studies evaluated the impact of combined training/education and audit/feedback (Table 2). One study randomized 139 VHA mental health providers to training approaches in CBT skills for PTSD: 3 internet-based training modules only (n = 46); internet modules combined with 6 weekly telephone consultations (n = 42); or no training (n = 51).40 The other study evaluated the implementation of group CBT for depression in US non-VHA community addiction programs; community addiction counselors received 2 days of didactic training and weekly group supervision over 2.5 years, including review of audiotapes and feedback to improve adherence.38

Studies in this group reported outcomes addressing Reach, Adoption, and Implementation; neither reported on Effectiveness or Maintenance (Table 3 and Appendix D).

Group 3: Training/Education

One study evaluated training/education alone; it involved 8 volunteer counselors from 7 VHA Substance Use Disorders (SUD) programs who participated in online modules for CBT for depression.34 This study reported qualitatively on Reach and Implementation (Table 3 and Appendix D).

Group 4: Access to New Funding

One study evaluated the impact of new financial resources on access to mental health treatments in primary care; this new funding was used differently across clinical sites and not associated with structured training programs or facilitation.39 The study reported the experience of 2 primary care demonstration sites for the Improving Access to Psychological Therapies (IAPT) initiative of the UK National Health Service (NHS). We focus on the results for the Newham site, which delivered in-person CBT for depression or anxiety to a majority of referred patients; the other site provided mostly self-guided resources.39 This study reported on Reach and Effectiveness, indicating large reductions in depression and anxiety symptoms (Cohen’s d 1.06–1.26; Table 3 and Appendix D).

No studies in any of the 4 groups reported on the proportion or representativeness of participating patients relative to the eligible patient population (elements of Reach), or on the proportion or representativeness of participating staff and settings relative to all potentially participating staff and settings (elements of Adoption).

DISCUSSION

In this systematic review, we identified 10 eligible studies (reported in 12 articles) evaluating implementation strategies to improve CBT and ACT access and use in large integrated healthcare systems. We found no studies evaluating the implementation of MBSR. Most studies were conducted in VHA and involved national implementation programs comprising training/education, facilitation, and audit/feedback. These VHA studies focused on Effectiveness, Adoption, Implementation, and Maintenance; they did not address Reach. Evaluated programs demonstrated moderate to large symptom reduction and improvements in quality of life for patients treated by trained providers. They also increased provider self-efficacy, improved provider perceptions of CBT and ACT, and improved competency, particularly after expert consultation, suggesting an additional benefit from audit/feedback strategies. These national VHA initiatives also provided centralized facilitation resources including salary support for clinicians, patient-facing EBP materials and tools, and coordination and organizational support for training and problem-solving. There was very limited evidence, however, on whether external facilitation enhanced Adoption beyond the effects of training and audit/feedback. Finally, sustained effects of VHA national initiatives (i.e., after the consultation phase) were modest, with continued barriers to ongoing access and use (e.g., competing professional time demands and patient barriers to attending appointments).

There is a clear need for additional implementation research focusing on MBSR, as well as evaluations of the implementation of CBT and ACT in non-VHA settings. In addition, while some results indicated that audit/feedback may be important for improving provider perceptions and skills, there was minimal evidence evaluating the impact of external facilitation and feedback. It may be especially crucial to understand the value of these additional strategies for healthcare systems that have fewer resources than VHA and thus may lack the capacity for audit/feedback and external facilitation on the same scale as VHA initiatives.

Despite promising findings on effectiveness and implementation outcomes, it remains unclear whether training programs increased key components of Reach and Adoption. Evaluations of VHA national programs did not address Reach (i.e., the proportion and representativeness of appropriate patients who initiated or completed CBT and ACT). Although these outcomes may be challenging to measure, even for large integrated systems such as VHA, it is nevertheless critical to assess how many (and which) patients engage in these treatments. The ultimate metric for evaluating the success of an implementation strategy must be whether it increases the Reach of effective treatments, leading to better outcomes for target patient populations. Evaluation of Reach across a variety of clinical settings should also occur in conjunction with further research into provider- and system-level factors that contribute to differences in referral rates and treatment engagement. Additional work is needed to comprehensively assess Adoption, particularly the proportion and representativeness of clinical settings and staff that use EBPs. It will be important to determine whether such programs lead to improved access through widespread uptake by relevant staff in high-need settings.

While included studies noted logistical barriers to EBP practice for both patients and providers, strategies to address these were minimally evaluated. Previous work has advocated patient-facing resources tailored to patient needs and goals (e.g., educational materials, preparatory groups) or additional delivery formats (beyond in-person meetings) and options outside the workday (e.g., asynchronous and telehealth modalities).41,42,43,44,45 While the COVID-19 pandemic has rapidly expanded telehealth and asynchronous options for mental health care, additional work is needed to establish effectiveness, implementation outcomes, and equitable Reach and Adoption for specific therapies and conditions.41,42,43,44,45,46,47,48 Similarly, modified brief therapy protocols for providing CBT in primary care may increase patient access and use, but need additional evaluations to establish effects.35,49,50,51 It will be important to distinguish the “core” of essential treatment characteristics from the “adaptable periphery” of elements that may be modified without threatening effectiveness.20,21 Given the diversity of resources, needs, and priorities across healthcare settings, it will also be important to perform local needs assessments and match strategies or resources to identified barriers (e.g., strategies to enhance leadership engagement, train local champions, and facilitate communication across primary care and specialty clinics).

Finally, few studies utilized comprehensive theoretical frameworks for examining processes of change in implementation trials and reporting outcomes. Future implementation work should be guided by theoretical models that clearly designate the key domains and conceptual relationships linking access barriers to strategies, thereby allowing systematic examination of processes of change and important outcomes.

Our study has limitations. For greater applicability to VHA settings, we required that implementation studies of CBT, ACT, or MBSR be conducted in large integrated healthcare systems. Our results thus do not address the implementation of these EBPs in smaller community clinics or settings. We also limited eligibility to English-language studies conducted in the US or in a small set of non-US countries with comparable economic, cultural, and public health contexts (Canada, the UK, Ireland, and Australia). Although implementation evidence from excluded countries would likely have been less directly applicable to the VHA setting, it may have provided some relevant information.

In conclusion, this systematic review found that multi-faceted VHA implementation programs for CBT and ACT led to increased provider EBP use during the interventions, but had unclear impacts on access and use by target patient populations and by key providers in high-need settings, and showed variable maintenance of provider adoption. Additional work is needed to evaluate implementation programs’ Reach and Adoption, address maintenance of provider EBP use, and assess the added value of external facilitation beyond education/training and audit/feedback. There is a clear need for implementation evaluation of MBSR for chronic pain and mental health conditions, and for additional research in non-VHA settings. Future studies should apply implementation frameworks to guide evaluations of barriers and facilitators, processes of change, and outcomes in key domains.