Background

Many implementation theories and frameworks in healthcare assert the importance of organizational leadership and organizational implementation climate for achieving high fidelity to newly implemented clinical interventions [1,2,3,4,5,6]; however, the evidentiary basis for these claims is thin. The usual standard for making causal claims in the medical and social sciences is demonstration of effect within a randomized controlled trial [7]; yet, recent reviews indicate no trials have tested whether experimentally induced change in organizational leadership or organizational implementation climate contributes to improved implementation of clinical interventions in healthcare [8, 9]. In the present study, we address this gap using data from the WISDOM (Working to Implement and Sustain Digital Outcome Measures) hybrid type III effectiveness-implementation trial. The WISDOM trial showed that a strategy called Leadership and Organizational Change for Implementation (LOCI) [10, 11], which targets organizational leadership and organizational implementation climate, improved fidelity to measurement-based care (MBC) in outpatient mental health clinics serving youth [12]. In this paper, we test LOCI’s hypothesized mechanisms of change. Specifically, we test whether improvement in clinic leadership contributed to improvement in clinic implementation climate and whether improved implementation climate in turn contributed to improved MBC fidelity.

Measurement-based care in youth mental health

Measurement-based care is an evidence-based practice (EBP) that involves administering standardized symptom rating scales to patients prior to each treatment session and using the results to guide treatment decisions [13]. Meta-analyses of over 30 randomized controlled trials indicate feedback from MBC improves the outcomes of mental health treatment relative to services as usual across patient ages, diagnoses, and intervention modalities [14,15,16,17]. There is also evidence MBC improves mental health medication adherence [18], reduces risk of treatment dropout [14], and is particularly effective for youths and for patients who are most at risk for treatment failure [14,15,16, 18].

Unfortunately, MBC is rarely used in practice. Only 14% of clinicians who deliver mental health services to youth in the USA use any form of MBC [19], and MBC usage rates are similarly low in other countries [20, 21]. When MBC use is mandated, less than half of clinicians view feedback and use it to guide treatment [22,23,24]. Digital MBC systems (i.e., measurement feedback systems) remove many practical barriers to MBC implementation by collecting measures from patients electronically (e.g., via tablet or phone) and instantaneously generating feedback [25]. However, even when these systems are available, clinician fidelity to MBC—defined as administering measures, viewing feedback reports, and using the information to guide treatment—is often substandard [22, 26]. Qualitative and quantitative studies of MBC implementation indicate clinicians’ work environments explain much of the variation in their attitudes toward, and use of, MBC [19, 27, 28], with organizational leadership and supportive organizational culture or organizational implementation climate identified as key determinants [13, 28,29,30].

Mechanisms of the LOCI strategy

LOCI is a multicomponent organizational implementation strategy that engages organizational executives and first-level leaders (i.e., those who administratively supervise clinicians) to build an organizational climate to support the implementation of a focal EBP with fidelity [10, 31]. It includes two overarching components: (1) monthly organizational strategy meetings between executives and LOCI consultants/trainers to develop and embed policies, procedures, and practices that support implementation of a focal EBP and (2) training and coaching for first-level leaders, to develop their skills in leading implementation. Organizational survey data and feedback guide planning, goal specification, and progress monitoring for both components. The aim of these components is to develop an organizational implementation climate [32, 33] in which clinicians perceive that use of a specific EBP with high fidelity is expected, supported, and rewarded [34].

Figure 1 shows LOCI’s theoretical model as applied to MBC in the present study. This model forms the basis for the study hypotheses. The LOCI strategy draws on two leadership theories—full-range leadership [35, 36] and implementation leadership [37]—and on theories of organizational implementation climate [33, 34], to explain variation in implementation success and to identify targets for implementation improvement. As is shown in Fig. 1, LOCI seeks to equip first-level leaders (e.g., clinical program managers) with two types of leadership behaviors believed to influence implementation success. Transformational leadership, drawn from the full-range leadership model, is a general type of leadership that reflects a leader’s ability to inspire and motivate employees to follow an ideal or course of action [38, 39]. Implementation leadership is a type of focused leadership that refers to leader behaviors that facilitate the organization’s specific strategic objective of successfully implementing a focal EBP, such as MBC [37, 40,41,42,43,44]. As is shown in the figure, LOCI aims to increase first-level leaders’ use of these leadership behaviors in order to support the development of an implementation climate that prompts and supports clinicians’ use of the focal EBP with high fidelity.

Fig. 1
figure 1

Study theoretical model. Note: LOCI, Leadership and Organizational Change for Implementation strategy; MBC, measurement-based care. First-level leaders are those who administratively supervise clinicians (e.g., clinical managers). Random assignment of clinics to LOCI (vs. training and technical assistance only) is expected to cause improvement in clinic-level implementation leadership, transformational leadership, and implementation climate for digital MBC (Aim 1). Improvement in implementation leadership and transformational leadership is expected to mediate LOCI’s effect on improved clinic implementation climate (Aim 2). Improvement in clinic implementation climate is expected to mediate LOCI’s effect on improved fidelity to digital MBC as experienced by youth (Aim 3). In this study, the clinic level is synonymous with the organization level; however, this is not always the case in applications of LOCI. The LOCI strategy can be applied to organizations with multiple levels, resulting in theoretical models that describe how LOCI intervenes at multiple organizational levels to influence climate

Correlational studies offer preliminary support for the relationships shown in Fig. 1. In a 5-year study of 30 outpatient mental health clinics serving youth, Williams et al. [45] showed that increases in implementation leadership at the clinic level were associated with increases in clinic EBP implementation climate, which subsequently predicted increases in clinicians’ self-reported use of evidence-based psychotherapy techniques. Other studies have shown that higher levels of organizational implementation climate predict higher observed fidelity to evidence-based mental health interventions in outpatient clinics and schools [46, 47]. In the other fully powered trial of LOCI that is currently published [48], researchers studying mental health care in Norway showed that LOCI improved first-level leaders’ implementation leadership, transformational leadership, and clinic implementation climate for trauma-focused EBPs. However, no studies have tested the two key linkages in LOCI’s hypothesized theory of change, namely: (1) that improvement in clinic implementation leadership and transformational leadership contributes to subsequently improved clinic implementation climate, and (2) that improvement in clinic implementation climate explains LOCI’s effect on EBP fidelity.

Study contributions

This study makes three contributions to the literature. In Aim 1, we test LOCI’s effects on growth in first-level leaders’ use of implementation leadership (Hypothesis 1) and transformational leadership (Hypothesis 2), and on clinic implementation climate for MBC (Hypothesis 3), from baseline to 18-month post-baseline (i.e., 6 months after completion of LOCI). This aim seeks to replicate findings from an earlier trial [48] in which LOCI improved these outcomes in a different treatment setting (mental health clinics within Norwegian health trusts), patient population (adults), and set of EBPs (trauma-focused assessment and psychotherapies). In Aim 2, we test the hypotheses that experimentally induced improvement in first-level leaders’ implementation leadership (Hypothesis 4) and transformational leadership (Hypothesis 5) at T2 (4 months after baseline) will mediate LOCI’s effects on clinic implementation climate at T4 (12 months after baseline). In Aim 3, we test LOCI’s focal mechanism, namely: that improvement in clinic implementation climate from pre- to post-LOCI (i.e., T1 to T4) will mediate LOCI’s effect on MBC fidelity during the same period (Hypothesis 6). We believe this is the first study to test whether experimentally induced improvement in clinic leadership contributes to improved implementation climate and whether improvement in implementation climate improves observed fidelity to an EBP.

Method

Study design and procedure

Project WISDOM was a cluster randomized, controlled, hybrid type III effectiveness-implementation trial designed to test the effects of LOCI versus training and technical assistance only on MBC fidelity in outpatient mental health clinics serving youth. Details of the trial and primary implementation and clinical outcomes are reported elsewhere [12]. The trial enrolled 21 clinics serving youth in Idaho, Oregon, and Nevada, USA. Clinics were eligible if they were not actively implementing a digital MBC system and if they employed three or more clinicians delivering psychotherapy to youth (ages 4–18 years). Using covariate constrained randomization, clinics were randomly assigned to one of two parallel arms: (1) LOCI plus training and technical assistance in MBC or (2) training and technical assistance in MBC only. Clinic-level randomization aligned with the scope of the LOCI strategy and prevented contamination of outcomes at the clinician and patient levels. Clinic leaders could not be naïve to condition; however, clinicians and caregivers of youth were naïve to condition.

Following baseline assessments and randomization of clinics, executives and first-level leaders in the LOCI condition began participating in the LOCI implementation strategy. One month later, clinicians who worked with youths in both conditions received training to implement an evidence-based digital MBC system called the Outcome Questionnaire-Analyst (OQ-A; see below for details; [49, 50]). Following the initial OQ-A training, clinics in both conditions received two booster trainings and ongoing OQ-A technical assistance from the OQ-A purveyor organization until the trial’s conclusion.

To assess LOCI’s effects on its targeted mechanisms of change, clinicians who served youth in participating clinics were asked to complete web-based assessments evaluating their clinic’s leadership and clinic implementation climate for MBC at five time points: baseline (T1; following randomization of clinics but prior to initiation of LOCI or OQ-A training), 4-month post-baseline (T2), 8-month post-baseline (T3), 12-month post-baseline (T4; coinciding with the conclusion of LOCI), and 18-month post-baseline (T5; 6 months after LOCI concluded). Surveys were administered from October 2019 to May 2021. Clinic leaders provided the research team with rosters and emails of all youth-serving clinicians at each time point. Confidential survey links were distributed by the research team directly to clinicians via email. Clinicians received a small financial incentive for completion of each assessment (i.e., gift card to a national retailer) based on an escalating structure (US $30, US $30, US $45, US $50, US $55).

The primary implementation outcome of MBC fidelity was assessed for new youth outpatients who initiated treatment in the 12 months following clinician training in the MBC system. Upon intake to services, parents/caregivers of new youth patients were presented with study information requesting their consent for contact by the research team. Caregivers who agreed were contacted by research staff via telephone to complete screening, informed consent, and baseline measures (if eligible). After study entry, caregivers completed assessments reporting on the youth’s treatment participation (i.e., number of sessions) and symptoms monthly for 6 months following the youth’s baseline. Assessments were completed regardless of the youth’s continued participation in treatment, unless the caregiver formally withdrew (n = 7). Caregivers received a US $15 gift card to a national retailer for completion of each assessment. Enrollment and collection of follow-up data for youth occurred from January 2020 to July 2021. The CONSORT and StaRI guidelines were used to report the results of this mediation analysis within the larger trial [51, 52].

Participants

All licensed clinicians who worked with youth in participating clinics at each time point were eligible to participate in web-based surveys of clinic leadership and climate. This broad inclusion criterion ensured a full picture of clinic leadership and climate at each time point.

Inclusion criteria for youth were intentionally broad to reflect the trial’s pragmatic nature and the applicability of MBC to a wide range of mental health diagnoses. Eligible youth were new patients (i.e., no psychotherapy at the clinic in the prior 12 months), ages 4 to 17 years, who had been diagnosed by clinic staff with an Axis I DSM disorder deemed appropriate for outpatient treatment at the clinic; it was not required that youths be assigned to clinicians who completed surveys. Youths were excluded if they initiated treatment more than 7 days before the informed consent interview. Electronic informed consent was obtained from all participants. The Boise State University Institutional Review Board provided oversight for the trial (protocol no. 041‐SB19‐081) which was prospectively registered at ClinicalTrials.gov (identifier: NCT04096274).

Clinical intervention: digital measurement-based care

The OQ-A is a digital MBC system shown to improve the effectiveness of mental health services in over a dozen clinical trials across four countries [49, 53]. OQ-A measures are sensitive to change upon weekly administration and designed to detect treatment progress regardless of treatment protocol, patient diagnosis, or clinician discipline [53]. In this study, clinicians had access to parent- and youth-report forms of the Youth Outcomes Questionnaire 30.2 [54, 55] and the Treatment Support Measure [56, 57]. Measures were completed by caregivers and/or youth electronically (via tablet or phone). Administration typically took 3–5 min. Measures were automatically scored by the OQ-A system, and feedback reports were generated within seconds. Feedback included a graph of change in the youth’s symptoms, critical items (e.g., feelings of aggression), and a color-coded alert, generated by an empirical algorithm, indicating whether the youth was making expected progress or was at risk of negative treatment outcome.

Clinicians were instructed to administer a youth symptom measure to the caregiver and/or youth at each session, review the feedback within 7 days of the session, and use the feedback to guide clinical decision-making. Clinicians were encouraged to discuss feedback with the caregiver and/or youth when they believed it was clinically appropriate and to administer a Treatment Support Measure if a youth was identified as high risk for negative outcome. Consistent with prior MBC trials, clinicians were not given specific guidance on how to respond to feedback; instead, they were advised to use their clinical skills in partnership with patients and clinical supervisors.

Implementation strategies

OQ-A training and technical assistance

The initial 6-h OQ-A training provided to clinicians in both conditions was conducted in person by the OQ-A purveyor organization. Training focused on the conceptual and psychometric foundations of the measures, the value of clinical feedback, clinical application of measures and feedback with youth and families, and technical usage of the system. Learning activities included didactics, in vivo modeling and behavioral rehearsal, exercises with sample feedback reports, and use of the system in “playground mode.” Two live, virtual, 1-h booster trainings were offered to clinicians 3 and 5 months after the initial training. Professional continuing education hours were offered at no cost for all trainings to encourage participation. After the initial training, all clinics in both conditions received year-round technical assistance from the OQ-A purveyor organization. This included on-demand virtual training sessions, an online library of training videos, and a customer care representative to troubleshoot technical issues.

Leadership and Organizational Change for Implementation (LOCI)

Details of the LOCI implementation strategy are available elsewhere [11, 12]. Briefly, LOCI was implemented in quarterly cycles over 12 months. During each cycle, (1) executives and first-level leaders within LOCI clinics attended monthly organizational strategy meetings to review data and to develop clinic-wide policies, procedures, and practices to support OQ-A implementation, and (2) first-level leaders attended leadership development trainings (5 days total) and participated in brief (~ 15 min) weekly coaching calls, designed to enhance their leadership skills. Once per month, individual coaching calls were replaced by group coaching calls with all other first-level leaders in the LOCI condition.

To support enrollment in the study, clinic leaders in the training and technical assistance only condition were offered access to four professionally produced, web-based general leadership seminars (1 h each). Seminars covered general leadership topics such as giving effective feedback and leading change. The seminars were made available immediately after the OQ-A training.

Measures

MBC fidelity

The primary outcome of MBC fidelity was measured at the youth level using an empirically validated MBC fidelity index [22, 58, 59]. The index was generated using electronic metadata from the OQ-A system combined with monthly caregiver reports of the number of sessions youths attended. Following prior research [22], scores were calculated as the product of two quantities: (a) the youth’s completion rate (i.e., number of measures administered relative to the number of sessions attended within the 6-month observation period) and (b) the youth’s viewing rate (i.e., the number of feedback reports viewed by the clinician relative to the number of measures administered). Note that this product is equivalent to the ratio of viewed feedback reports to total sessions; it represents an events/trials proportion. MBC fidelity index scores summarize the level of MBC fidelity experienced by each youth (range = 0–1) and have been shown to predict clinical improvement of youths receiving MBC [22, 58, 59]. Importantly, this index captures the administration and viewing components of MBC fidelity but does not indicate whether clinicians used the feedback to guide treatment.
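As a worked illustration of the index described above, a minimal sketch of the calculation (the session and measure counts below are hypothetical, not trial data):

```python
def mbc_fidelity(sessions: int, measures_administered: int, reports_viewed: int) -> float:
    """MBC fidelity index: completion rate x viewing rate.

    completion rate = measures administered / sessions attended
    viewing rate    = feedback reports viewed / measures administered
    The product simplifies to reports viewed / sessions (range 0-1).
    """
    if sessions == 0:
        return 0.0
    completion_rate = measures_administered / sessions
    viewing_rate = reports_viewed / measures_administered if measures_administered else 0.0
    return completion_rate * viewing_rate

# Hypothetical youth: 10 sessions attended, 8 measures completed,
# 6 feedback reports viewed by the clinician
score = mbc_fidelity(10, 8, 6)  # equals 6/10 = 0.6
```

Because the two rates share the measures-administered term, the index is the proportion of attended sessions for which the clinician both administered a measure and viewed the resulting feedback.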

Implementation leadership

Clinicians assessed the extent to which their first-level leaders exhibited implementation leadership behaviors with regard to the OQ-A using the 12-item Implementation Leadership Scale (ILS) [40]. The ILS includes four subscales assessing the extent to which the first-level leader is proactive, knowledgeable, supportive, and perseverant about implementation. Responses were made on a 0 (not at all) to 4 (very great extent) scale. Total scores were calculated as the mean of all items. In prior research, scores on the ILS exhibited excellent internal consistency, convergent and discriminant validity [40, 47, 60, 61], and sensitivity to change [45].

Transformational leadership

Clinicians assessed the extent to which their first-level leaders exhibited transformational leadership behaviors using the Multifactor Leadership Questionnaire (MLQ) [62, 63]. The MLQ is a widely used measure that has demonstrated excellent psychometric properties [64] and is associated with implementation climate for EBP as well as clinicians’ attitudes toward, and use of, EBPs [65,66,67,68]. Responses were made on a 5-point scale (“not at all” to “frequently, if not always”). Consistent with prior studies, we used the 20-item transformational leadership total score, calculated as the mean of four subscales: idealized influence, inspirational motivation, intellectual stimulation, and individual consideration.

Clinic implementation climate

Clinicians’ perceptions of their clinics’ implementation climate for OQ-A were measured using the 18-item Implementation Climate Scale (ICS) [34]. The ICS includes six subscales assessing focus, educational support, recognition, rewards, selection, and openness. Responses were made on a 0 (not at all) to 4 (a very great extent) scale with the total score calculated as the mean of all items. Prior research provides evidence for the structural, convergent, and discriminant validity of scores on the ICS [27, 34, 69,70,71,72] as well as sensitivity to change [45].

Data aggregation

Best practice guidelines [73,74,75,76] recommend clinician ratings of first-level leadership and clinic implementation climate be aggregated and analyzed at the clinic level. To justify aggregation, guidelines recommend that researchers test the level of inter-rater agreement among clinicians within each clinic to confirm there is evidence of shared experience. We used the rwg(j) statistic [77] to assess inter-rater agreement among clinicians within each clinic. Across all clinics and all waves, average values of rwg(j) were above the recommended cutoff of 0.7 [78, 79] for implementation leadership (M = 0.82, SD = 0.27), transformational leadership (M = 0.87, SD = 0.24), and clinic implementation climate (M = 0.94, SD = 0.10).
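The rwg(j) statistic compares the variance observed in clinicians' ratings to the variance expected if they responded at random; a minimal sketch under the standard uniform null distribution (rater counts and scores below are hypothetical):

```python
import statistics

def rwg_j(ratings, n_anchors):
    """Multi-item inter-rater agreement r_wg(j) (James, Demaree, & Wolf).

    ratings: one list of item scores per rater (all raters from one clinic).
    n_anchors: number of response options (e.g., 5 for a 0-4 scale).
    The no-agreement null assumes a uniform response distribution,
    with expected variance (A^2 - 1) / 12 for A anchors.
    """
    n_items = len(ratings[0])
    # Mean observed variance across the J items
    item_vars = [statistics.variance([r[j] for r in ratings]) for j in range(n_items)]
    s2 = sum(item_vars) / n_items
    sigma2_e = (n_anchors**2 - 1) / 12
    ratio = s2 / sigma2_e
    return (n_items * (1 - ratio)) / (n_items * (1 - ratio) + ratio)

# Three hypothetical clinicians rating a 3-item scale with 0-4 anchors
raters = [[3, 4, 3], [3, 3, 4], [4, 3, 3]]
agreement = rwg_j(raters, n_anchors=5)  # high agreement, well above 0.7
```

Values near 1 indicate near-identical ratings within a clinic, supporting aggregation to the clinic level; values below the 0.7 cutoff suggest clinicians do not share a common perception.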

Covariates

In order to increase statistical power and to address potential imbalance across clusters, we planned a priori to include covariates of state and clinic size (number of youths served in the prior year) in all analyses. In addition, in the mediation analyses described below, we included baseline values of the hypothesized mediator and outcome (when possible) to increase the plausibility of the no-unmeasured-confounding assumptions within the causal mediation approach [80, 81].

Data analysis

All analyses used an intent-to-treat approach. To test LOCI’s effects on growth in first-level leaders’ implementation leadership (H1), transformational leadership (H2), and clinic implementation climate (H3) for Aim 1, we used three-level linear mixed-effects regression models [82, 83] with random effects addressing the nesting of repeated observations (level 1) within clinicians (level 2) within clinics (level 3). Separate models were estimated for each outcome. At level 1, observations of leadership and climate collected from clinicians at each time point were modeled using a piecewise growth function that captured differences in change from baseline to each time point across conditions [84]. Implementation condition and clinic covariates were entered at level 3. Models were estimated using the mixed command in Stata 17.0 [85] under full maximum likelihood estimation, which accounts for missing data on the outcomes, assuming data are missing at random. Effect sizes were calculated as the standardized mean difference in change (i.e., difference in differences) from baseline to each time point (i.e., Cohen’s d) using formulas by Feingold [86]. Cohen suggested values of d could be interpreted as small (0.2), medium (0.5), and large (0.8) [87].
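The Feingold-style effect size described above reduces to the model-estimated difference in change between arms divided by the baseline standard deviation of the raw scores; a minimal sketch with hypothetical values (not trial estimates):

```python
def feingold_d(change_tx: float, change_ctrl: float, sd_baseline: float) -> float:
    """Standardized mean difference in change (difference in differences):
    model-estimated pre-post change in the treatment arm minus change in
    the control arm, divided by the baseline SD of the raw outcome."""
    return (change_tx - change_ctrl) / sd_baseline

# Hypothetical values on a 0-4 scale: LOCI clinics improve 1.0 points,
# control clinics improve 0.2 points, baseline SD = 1.0
d = feingold_d(1.0, 0.2, 1.0)  # 0.8, "large" by Cohen's benchmarks
```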

Aim 2 tested the hypotheses that experimentally induced improvement in first-level leaders’ implementation leadership (H4) and transformational leadership (H5) by T2 would mediate LOCI’s effect on improvement in clinic implementation climate by T4. These mediation hypotheses were tested using the multilevel causal mediation approach by Imai et al. [81], implemented in the R “mediation” package [88]. To align our analytic approach with our theoretical model, we estimated a 2–2–1 mediation model in which the primary antecedent (LOCI) and mediator (clinic-level aggregate leadership scores) entered the model at level 2 (i.e., the clinic level), and the outcome (clinician ratings of implementation climate) entered at level 1, representing latent clinic means [89]. Separate models were estimated for each type of leadership because Imai’s approach does not accommodate simultaneous mediators [81]. The inclusion of baseline values for the mediator (i.e., leadership) and outcome (i.e., climate) in each model modified the interpretation of the effects so that they represented the effect of LOCI on change in leadership from T1 to T2 and of change in leadership on change in climate from T1 to T4. To stabilize the effect estimates, we set the number of analytic simulations for the direct and indirect effects to 10,000. These analyses produced estimates of LOCI’s indirect and direct effects on T4 implementation climate, as well as the proportion of LOCI’s total effect on implementation climate that was mediated by improvement in leadership (i.e., proportion mediated = pm). Indirect effects indicate the extent to which LOCI influenced T4 implementation climate through its effect on T2 leadership (i.e., mediation). Direct effects indicate the residual (remaining) effect of LOCI on T4 implementation climate that was not explained by change in T2 leadership. The pm statistic is an effect size measure indicating how much of LOCI’s effect on implementation climate was explained by change in leadership.
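The simulation logic behind these indirect-effect estimates can be sketched in simplified, single-level form (the multilevel 2–2–1 models used in the study are more involved, and all coefficients below are hypothetical):

```python
import random
import statistics

def monte_carlo_indirect(a, se_a, b, se_b, n_sims=10_000, seed=0):
    """Quasi-Bayesian sketch of an indirect (mediated) effect.

    Simulates the product of the treatment->mediator path (a) and the
    mediator->outcome path (b) from their approximate sampling
    distributions, loosely following the simulation logic of Imai et
    al.'s causal mediation approach. This single-level
    product-of-coefficients version is a simplification of the
    multilevel models used in the study.
    """
    rng = random.Random(seed)
    draws = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                   for _ in range(n_sims))
    point = statistics.fmean(draws)              # point estimate of a*b
    ci = (draws[int(0.025 * n_sims)],            # 95% simulation interval
          draws[int(0.975 * n_sims)])
    return point, ci

# Hypothetical path coefficients and standard errors (not trial estimates)
est, (lo, hi) = monte_carlo_indirect(a=1.2, se_a=0.2, b=0.4, se_b=0.1)
```

If the simulation interval for the product excludes zero, the mediated path is judged statistically significant; the direct effect is estimated analogously from the treatment coefficient in the outcome model.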

Aim 3 tested the hypothesis that improvement in clinic implementation climate from T1 to T4 would mediate LOCI’s effect on MBC fidelity during the same time period (H6). The nested data structure was accommodated using a 2–2–1 model in which the primary antecedent (LOCI) and mediator (aggregate clinic-level T4 implementation climate scores) occurred at level 2 (i.e., clinic level) and the outcome (MBC fidelity) occurred at level 1 (i.e., youth level). Note that the inclusion of baseline values of clinic implementation climate in this model modified the interpretation of the effects so that they represent the effect of LOCI on change in climate from T1 to T4 and of change in climate on fidelity during the same time period. To address the events/trials nature of the MBC fidelity index, a generalized linear mixed-effects model with random clinic intercepts, a binomial response distribution, and a logit link function was used in the second step of the mediation analysis [82]. Eighteen clinics enrolled a total of 234 youth, all of whom had MBC fidelity data; however, one clinic was missing ratings of T4 implementation climate, resulting in a sample of 17 clinics and 231 youth for this analysis. A sensitivity analysis based on mean imputation of the missing T4 implementation climate value yielded the same inferential conclusions. A priori statistical power analyses conducted with the PowerUp! macro [90, 91] indicated the trial had power of 0.74–0.90 to detect minimally meaningful mediation effect sizes depending on observed intraclass correlation coefficients and variance explained by covariates.
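The events/trials binomial-logit structure of the fidelity outcome can be sketched as a single youth's likelihood contribution (the full model also includes covariates and a random clinic intercept; eta below is a hypothetical linear predictor):

```python
import math

def inv_logit(x: float) -> float:
    """Inverse logit (logistic) function."""
    return 1.0 / (1.0 + math.exp(-x))

def binomial_loglik(viewed: int, sessions: int, eta: float) -> float:
    """Log-likelihood contribution of one youth under the events/trials
    model: reports viewed ~ Binomial(sessions attended, p), where
    p = inv_logit(eta) and eta is the linear predictor (in the full
    model: intercept, covariates, climate effect, random clinic
    intercept)."""
    p = inv_logit(eta)
    log_choose = (math.lgamma(sessions + 1) - math.lgamma(viewed + 1)
                  - math.lgamma(sessions - viewed + 1))
    return log_choose + viewed * math.log(p) + (sessions - viewed) * math.log(1 - p)

# For a youth with 6 of 10 reports viewed, the likelihood peaks where
# inv_logit(eta) equals the observed fidelity proportion 0.6
eta_hat = math.log(0.6 / 0.4)
```

Modeling the index as viewed events out of session trials, rather than as a precomputed proportion, lets youths with more sessions contribute more information to the estimate.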

Results

Figure 2 shows the flow of clinics, clinicians, and youth through the study. As is shown in Table 1, there were no differences by condition on the distribution of any clinic (K = 21), clinician (N = 252), or youth (N = 231) characteristics (all ps > 0.05). In total, 252 clinicians completed assessments for the study across 5 waves (average response rate = 88% across waves). The average number of participating clinicians per clinic was 12 (SD = 6.4). Nearly two-thirds of clinicians (n = 154, 61%) participated in 3 or more waves of data collection, and there were no differences by condition on clinician participation patterns (p = 0.114). A total of 234 youths were enrolled in the study. The average number of youths per clinic was 13.6 (SD = 13.3). The average number of assessments completed per youth was 5.9 (SD = 1.8) out of 7; 64% (n = 148) of youth had complete data, and 90% (n = 208) had 3 or more completed assessments. There were no differences in caregiver response rates for youth data by condition (p = 0.557).

Fig. 2
figure 2

CONSORT diagram showing the flow of clinics, clinicians, and youth through the WISDOM trial. Note: ITT, intent to treat; LOCI, Leadership and Organizational Change for Implementation strategy; T2, 4-month follow-up; T4, 12-month follow-up; WISDOM, Working to Implement and Sustain Digital Outcome Measures trial. aOne clinic participated in LOCI for only 6 months. bOne clinic that enrolled youth did not have T4 climate data

Table 1 Characteristics of participating clinics, clinicians, and youth by condition

Effects of LOCI on growth in clinic leadership and implementation climate

Figure 3 shows the growth in first-level leaders’ implementation leadership, transformational leadership, and clinic implementation climate from baseline to 18 months (6 months after LOCI completed). Compared to clinicians in control clinics, clinicians in LOCI reported significantly greater increases in their first-level leaders’ use of implementation leadership behaviors from baseline to 4 months (b = 1.27, SE = 0.18, p < 0.001), 8 months (b = 1.46, SE = 0.22, p < 0.001), 12 months (b = 1.28, SE = 0.27, p < 0.001), and 18 months (b = 1.07, SE = 0.37, p = 0.003). These results supported Hypothesis 1. As is shown in Table 2, LOCI’s effects on implementation leadership were large at all follow-up points (4-, 8-, 12-, and 18-month post-baseline; range of d = 0.97 to 1.34).

Fig. 3
figure 3

Change in clinic leadership and climate by condition and wave. Note: Means estimated using linear mixed-effects regression models. Error bars represent 95% confidence intervals. All models control for state and clinic size. P-values contrast the difference between conditions on change in the outcome from baseline to the referenced time point. LOCI, Leadership and Organizational Change for Implementation condition. Control, training and technical assistance only condition. T5 occurred 6 months after completion of the LOCI strategy. See Table 2 for effect sizes. aK = 21 clinics, N = 248 clinicians, J = 803 observations. bK = 21 clinics, N = 251 clinicians, and J = 810 observations. cK = 21 clinics, N = 247 clinicians, and J = 809 observations

Table 2 Effects of the Leadership and Organizational Change for Implementation (LOCI) strategy versus control on clinic leadership and clinic implementation climate by time

Hypothesis 2 stated that growth in first-level leaders’ transformational leadership would be superior in LOCI clinics relative to control. This hypothesis was partially supported (see Fig. 3 and Table 2). Clinicians in LOCI reported significantly greater growth in their first-level leaders’ use of transformational leadership behaviors from baseline to 4 months (b = 0.31, SE = 0.13, p = 0.019); however, this difference was no longer statistically significant at 8 months (b = 0.27, SE = 0.14, p = 0.061) and was not evident at 12 months (b = 0.21, SE = 0.16, p = 0.191) or 18 months (b = 0.15, SE = 0.19, p = 0.438).

Hypothesis 3 stated growth in clinic implementation climate would be superior in LOCI clinics relative to control. This hypothesis was supported. Relative to clinicians in control, clinicians in LOCI reported significantly greater increases in their clinics’ implementation climate for MBC at 4 months (b = 0.56, SE = 0.10, p < 0.001), 8 months (b = 0.71, SE = 0.11, p < 0.001), 12 months (b = 0.55, SE = 0.12, p < 0.001), and 18 months (b = 0.43, SE = 0.15, p = 0.005). Table 2 shows that these effects were large during the intervention period and at posttest (i.e., at 4-, 8-, and 12-month post-baseline; d ranged from 0.98 to 1.25) and slightly attenuated at 18-month follow-up (d = 0.76).

Indirect effects of LOCI on T4 implementation climate through T2 clinic leadership

Hypotheses 4 and 5 examined how LOCI improved T4 clinic implementation climate by testing mediation models. Hypothesis 4 stated LOCI would have an indirect effect on T4 implementation climate through improvement in first-level leaders’ use of implementation leadership from T1 to T2. As shown in Table 3, this hypothesis was supported. The LOCI strategy had a significant indirect effect on T4 implementation climate through T2 implementation leadership (indirect effect = 0.51, p = 0.004), whereas LOCI’s direct effect was not statistically significant (direct effect = 0.11, p = 0.582). The proportion-mediated statistic indicated that 82% of LOCI’s total effect on clinic implementation climate at T4 was explained by improvement in implementation leadership from T1 to T2 (pm = 0.82).
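As a quick illustration of how the proportion-mediated statistic relates the estimates above (assuming pm is computed as the indirect effect divided by the total effect, with total = indirect + direct on this scale; this is a sketch, not the trial's exact estimator), the arithmetic can be checked as follows:

```python
# Hypothetical check of the reported proportion-mediated statistic,
# assuming pm = indirect / (indirect + direct) on the same effect scale.
indirect = 0.51  # indirect effect of LOCI via T2 implementation leadership
direct = 0.11    # direct effect of LOCI on T4 implementation climate

total = indirect + direct
pm = indirect / total
print(round(pm, 2))  # → 0.82, consistent with the reported pm
```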

Table 3 Direct and indirect effects of the Leadership and Organizational Change for Implementation (LOCI) strategy on clinic implementation climate and digital MBC fidelity

Hypothesis 5 stated LOCI would have an indirect effect on T4 implementation climate through improvement in first-level leaders’ transformational leadership. This hypothesis was not supported (see Table 3). There was no evidence of an indirect effect of LOCI through transformational leadership (indirect effect = 0.16, p = 0.135), whereas LOCI’s direct effect on T4 implementation climate remained statistically significant (direct effect = 0.38, p = 0.024). This pattern indicates that LOCI improved T4 implementation climate, but not through its effect on transformational leadership.

Indirect effect of LOCI on MBC fidelity through implementation climate

Hypothesis 6 stated that LOCI’s effect on clinic implementation climate from T1 to T4 would mediate LOCI’s effect on MBC fidelity measured at the youth level during the same time period. As shown in Table 3, results of the mediation analysis supported this hypothesis. The LOCI strategy had a statistically significant indirect effect on MBC fidelity through clinic implementation climate, increasing fidelity by 14 percentage points (indirect effect = 0.14, 95% CI = 0.01–0.37, p = 0.033) through this mechanism. The direct effect of LOCI on fidelity after accounting for clinic implementation climate was not statistically significant (direct effect = 0.05, p = 0.482). The proportion-mediated statistic indicated that 71% of LOCI’s effect on MBC fidelity was explained by improvement in clinic implementation climate from T1 to T4 (pm = 0.71, p = 0.045).

Discussion

This study is the first to experimentally test the hypotheses that (a) increases in first-level leaders’ use of implementation leadership and transformational leadership improve clinic implementation climate, and (b) improvement in clinic implementation climate contributes to improved fidelity to an EBP. As such, it represents an important step in advancing recommendations for rigorous tests of mechanisms and causal theory in implementation science [73, 92]. Results support the hypotheses that (a) first-level leaders can help generate clinic implementation climates for a specific EBP through the use of implementation leadership behaviors, and (b) first-level leaders and organization executives can improve fidelity to EBPs by developing focused implementation climates in their organizations.

In this trial, increases in first-level leaders’ use of implementation leadership by 4 months post-baseline explained 82% of LOCI’s effect on improvement in clinic implementation climate by 12 months post-baseline. This finding aligns with qualitative data from another recent trial of MBC implementation in community mental health [93], which also found that leader and clinical supervisor support for MBC were perceived as key implementation mechanisms. The linkage of implementation leadership to improvement in clinic implementation climate suggests implementation leadership behaviors are important targets for implementation success. Accordingly, pre-service educational programs for health leaders, implementation purveyor organizations, and other stakeholders interested in supporting EBP implementation should consider integrating these leadership competencies into core curricula and training.

Contrary to our hypotheses (see Fig. 1), LOCI did not exert lasting effects on transformational leadership, and increases in transformational leadership did not explain LOCI’s effect on improvement in clinic implementation climate. This pattern of results aligns with theoretical models of leadership and climate which suggest that specific types of focused leadership (i.e., implementation leadership [40]) are needed to generate specific types of focused organizational climate and associated outcomes [37, 44]. This finding also suggests the LOCI strategy could be streamlined without loss of efficacy by scaling back (or eliminating) components that address transformational leadership, as has been done in implementation studies of autism interventions [94]. Streamlining the content of LOCI may increase LOCI’s feasibility and allow for greater development of implementation leadership skills.

Improvement in clinic implementation climate explained 71% of LOCI’s effect on youth-level MBC fidelity. The validation of this theoretical linkage within an experimental design lends credence to prior correlational studies and theory suggesting clinic implementation climate can contribute to improved EBP implementation in mental health settings [27, 44,45,46,47]. These results suggest organizational and system leaders can improve the implementation of EBPs by deploying organizational policies, procedures, and practices that send clear signals to clinicians about the importance of EBP implementation relative to competing priorities within practice settings.

Discussions about change in organizational leadership and organizational climate often center on how long these constructs take to change and what resources are required. Results from this trial suggest changes in first-level leadership and clinic implementation climate can occur quickly, within 4 months, and that these changes can last, even 6 months after supports (i.e., LOCI) are removed. A similarly brief timeframe for initial change, and a similarly sustained maintenance of effect, was observed for implementation leadership and climate in the other large trial of LOCI, which was conducted in a different country, with a different patient population and a different EBP [48]. Together, results from these trials indicate implementation leadership and implementation climate are modifiable with a combination of training, weekly coaching calls, data feedback, and goal setting.

This study highlights multiple directions for future research. Future studies should examine moderators of LOCI’s effectiveness with an eye toward the minimally necessary components to make LOCI effective. For example, there is some evidence that intervention complexity moderates the association between implementation climate and fidelity [47]. This is consistent with organizational climate theory, which indicates climate is most strongly related to employee behaviors when service complexity is higher [95] and when there is high interdependence among employees to complete tasks and high intangibility of the service provided [96]. Other research should look beyond the organization level at how systems can be modified in complementary ways to create supportive implementation climates for targeted interventions.

Strengths of this study include the use of an experimental, longitudinal design; enrollment of clinics in diverse policy environments (i.e., three different states); measurement of leadership and climate by third-party informants whose behavior is most salient to implementation success (i.e., clinicians); measurement of MBC fidelity through objective computer-generated data; time ordering of hypothesized antecedents and consequents; and use of rigorous causal mediation models to estimate direct and indirect effects. The study’s primary limitation is generalizability, given that all clinics and caregivers of youth volunteered to participate. In addition, the MBC fidelity measure does not assess whether clinicians used the feedback to inform clinical decisions. Data were not collected on other mechanisms that may explain LOCI’s effects, a gap that may be fruitfully addressed by future qualitative research. Because the LOCI condition included training and technical assistance, it was not possible to isolate LOCI’s independent effects; this is also a fruitful area for future research. Finally, we were unable to fully test LOCI’s hypothesized theory of change due to our use of the causal mediation approach, which precludes testing serial multiple mediator models (e.g., LOCI → leadership → climate → fidelity). Nonetheless, our results confirm the most consequential links in LOCI’s theoretical model and offer important directions for research and practice.

Conclusion

In this mediation analysis of the WISDOM trial, experimentally induced improvement in implementation leadership explained increases in clinic implementation climate, which in turn explained LOCI’s effects on MBC fidelity in youth mental health services. These findings offer strong evidence that fidelity to EBPs can be improved by developing organizational leaders and strong implementation climates.