Implementation strategies: training and consultation

Over the past two decades, studies have repeatedly explored the extent to which initial training must be supplemented with additional implementation supports to effect meaningful changes in professional practice. Many reviews indicate that post-training consultation (i.e., ongoing support following training, provided by an expert with the goal of improving implementation and practice of an EBP) [1] or other supports are necessary, but the strength of the evidence remains mixed [2,3,4,5]. Recent literature points to the limitations of “workshop only” trainings and suggests that they are likely to have a greater impact on the adherence and competence of practitioners when augmented with practice-specific implementation supports and/or ongoing consultation to promote workplace-specific adaptations and skill application [6,7,8]. As such, initial training and post-training consultation are two cornerstone—and complementary—implementation strategies to support the implementation of evidence-based practices [9, 10]. This is especially true in mental health care, where most evidence-based practices are complex psychosocial processes that require significant professional development support to implement effectively [11].

Among the numerous implementation strategies that have been identified to potentially promote adoption, delivery, and sustainment of interventions [12, 13], initial training and post-training consultation have often proven effective for evidence-based mental health practice implementation [4, 9, 10]. Initial training generally refers to a combination of didactic content covering intervention materials or protocols and active or experiential learning such as in vivo demonstration or role plays to apply skills and receive feedback [2]. Ongoing, post-training consultation refers to practice-specific coaching by one or more experts in the intervention strategy [14]. In the context of mental health service delivery, consultation is different from supervision in that the consultant may be internal or external to the organization and does not have direct authority over the implementer [15]. Moreover, supervision can be administrative in nature [16] and consultation specifically refers to case-specific skill application and refinement of an evidence-based clinical practice [17, 18]. Together, initial training and post-training consultation are complementary and foundational implementation strategies.

Training is the starting point for many professional behavior change efforts and is associated with initial, post-training improvements in provider attitudes, knowledge, and skill acquisition [5]. Increasingly, training for mental health clinicians is provided online, something that has only intensified following the emergence of the COVID-19 pandemic [19]. Fortunately, online training is at least as effective as in-person approaches [20,21,22,23], eliminates training-related travel time, reduces costs, and allows for self-paced administration of content, thus improving accessibility and feasibility [21, 24].

Importantly, initial training likely necessitates the addition of post-training consultation or support to effect changes in professional practice [2,3,4,5, 25, 26]. However, the optimal duration or frequency of ongoing supports following initial training is not well known. In fact, the most recent review on this topic found that post-training consultation was variably effective in improving uptake [5]. Inconsistencies in these findings suggest that it is important to better understand the processes through which consultation improves implementation outcomes.

Consultation dose

Although post-training consultation is a promising implementation strategy, very little is known about the optimal frequency and duration (i.e., dose) of post-training consultation to ensure its impact. In fact, a recent review of randomized controlled trials concluded that variations in consultation dose across studies precluded firm conclusions about the effects of consultation practices [5]. Efficiency is important given that post-training consultation can easily increase the cost of training by 50% or more [27]. An uncontrolled study found that higher consultation doses were associated with greater changes in knowledge and attitudes surrounding an evidence-based intervention protocol for youth anxiety [28]. Whereas the moderating effects of consultation dose are typically examined observationally or quasi-experimentally [24], the current study manipulates consultation dose experimentally.

Implementation mechanisms: provider knowledge, skill, and attitudes

Despite increasingly precise identification of implementation strategies [13], there is little information available about how implementation strategies exert their influence. To ensure implementation strategies accurately match barriers and promote parsimonious and efficient change efforts, current research has focused on the identification of implementation mechanisms [29]. Implementation mechanisms are the processes or events through which an implementation strategy operates to affect desired implementation outcomes [30]. Unfortunately, systematic reviews indicate few studies have experimentally evaluated implementation mechanisms of change [30, 31]. Whether in schools or other healthcare delivery sectors, there is growing recognition that implementation strategy optimization will be greatly facilitated by clear identification and testing of the mechanisms through which implementation outcomes are improved [29, 32, 33].

Despite the ubiquity of training and consultation in the implementation of evidence-based mental health services, few studies have articulated or evaluated their mechanisms. Theorized mechanisms for training and consultation include provider changes in (1) knowledge, (2) attitudes, and (3) acquisition of skills. Knowledge and attitudes surrounding new practices are among the most frequently identified candidate mechanisms for training efforts [34]. Indeed, previous studies have shown that training often improves EBP knowledge, and, in turn, more knowledgeable and competent clinicians have been found to exhibit superior training and consultation outcomes [34,35,36]. Positive attitudes are often predictive of provider adoption, engagement in training and consultation, adherence, and skill [37,38,39,40]. Further, training has been found to enhance initial skill acquisition [5, 41] and post-training consultation subsequently reinforces these gains and helps to maintain clinician skills [42]. In one study of community mental health clinicians’ acquisition of evidence-based assessment practices following training in a flexible psychotherapy protocol, attitudes improved following training alone, but skill continued to improve over the course of consultation [43]. Other studies have found similar evidence for the value of consultation for maintaining and promoting these changes [25].

Measurement-based care in education sector mental health

The current study focuses on understanding the impact of training and consultation strategies for measurement-based care (MBC)—an evidence-based practice that involves the ongoing use of progress and outcome data to inform decision making and adjust treatment [44, 45]—on implementation mechanisms and outcomes within mental health services delivered in the education sector. MBC can be used to enhance any underlying treatment and has received extensive empirical support for its ability to improve outcomes in adult services [45, 46], with evidence rapidly accruing for children [47,48,49,50]. MBC involves both standardized (e.g., symptom rating scales with clinical norms/cutoffs) and individualized assessment measures (e.g., client-specific goals tracked quantitatively over time, such as days of school attendance) [51,52,53] as well as a focus on reviewing progress data with the patient to inform collaborative decisions about individualized treatment adjustments [54, 55].

MBC is particularly suited for treatment delivered in schools, where it is perceived by clinicians and students to be feasible, acceptable, and to facilitate progress toward treatment goals [56,57,58]. Research over the past 25 years has consistently found that schools are the most common service setting for the delivery of child and adolescent mental healthcare [59, 60]. Yet, use of evidence-based mental health treatment in school settings—including MBC—has lagged behind services in other sectors [61, 62].

The implementation of MBC in the education sector is consistent with calls to enhance the quality of services among settings, clinicians, and service recipients who would not otherwise have access to evidence-based care [63, 64]. However, while pragmatic implementation approaches are greatly needed to support MBC across service sectors, few established implementation strategies exist [54] and even fewer studies have evaluated strategies for improving MBC use among school clinicians. Existing training and consultation strategies used to implement MBC practices in schools have resulted in improvements in knowledge, attitudes, and use of practices initially and over time [43, 65], pointing to them as potential mechanisms for MBC implementation strategy effects.

Current study

The current study was designed to better understand the impact of a brief online training (BOLT) and post-training consultation (PTC) on putative implementation mechanisms as well as MBC penetration, an indicator of the number of service recipients receiving MBC out of the total number possible [66]. This study focused on increasing providers’ use of standardized assessments, individualized assessments, and assessment-informed treatment modifications (all components of MBC practice) [45]. Moreover, our experimental design facilitated examination of the effects of PTC dosage levels on outcomes. Accordingly, the current study examined three specific aims by testing (1) the impact of BOLT+PTC supports on MBC practices; (2) differential effects of a 2-, 4-, or 8-week dosage of consultation on MBC practices; and (3) the impact on implementation mechanisms such as MBC knowledge, skill, and attitudes (i.e., perceived benefit and perceived harm) that are hypothesized to activate favorable MBC implementation outcomes.

Methods

Participants

Participants included a national sample of N = 75 clinicians who provide mental health interventions to students on their school campus. Participants were Master’s level or above; primarily female (n = 68, 91%) and White (n = 55, 73%); and working in elementary (students approximately 5–12 years old), middle (12–14 years), and high schools (14–19 years). The most common role was mental health counselor, followed by school social worker, school psychologist, school counselor, and other professional roles. Demographic and professional characteristics for participating clinicians are presented in Table 1.

Table 1 Clinician participant demographic, professional, and caseload characteristics

Procedure

Recruitment

Participants were recruited via numerous professional networks, listservs, and social media (e.g., statewide organizations of school-based mental health practitioners in Illinois and North and South Carolina; a national newsletter devoted to school mental health; Twitter). Inclusion criteria were minimal to enhance generalizability: (1) participants routinely provided individual-level mental health interventions or therapy and (2) spent ≥50% of their time providing services in schools. This was done to help ensure the representativeness of the sample relative to the clinicians who would ultimately access the training and consultation supports when later disseminated at scale. The study team conducted informed consent meetings via phone with prospective participants, during which they were provided with a description of the study, including the conditions to which they could potentially be randomized, and its benefits. Participants provided verbal consent consistent with procedures approved by the University of Washington institutional review board. Recruitment lasted approximately 6 weeks to achieve the desired sample size.

Randomization

All participants were randomly assigned to either a BOLT+PTC condition or a service as usual control condition (see below) using the list randomizer function available at Random.org. For clinicians in the BOLT+PTC condition (n = 37), we conducted a scheduling survey to identify preferred times for consultation calls and then randomized the resulting consultation time blocks to our different consultation duration conditions. Using this approach, clinicians were placed into 2 (n = 14), 4 (n = 10), or 8 (n = 13) weeks of consultation. Clinicians in the control condition only completed study measures while continuing to provide services as usual. Blinding was not possible as clinicians were aware of their training and consultation obligations. See Additional File 1 for the study CONSORT diagram.
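The study used Random.org's list randomizer rather than code, but the two-stage assignment logic can be illustrated with a short R sketch (the seed, object names, and block labels below are hypothetical, not study materials):

```r
set.seed(42)  # illustrative seed; the study used Random.org, not R

# Stage 1: assign clinicians to BOLT+PTC vs. services-as-usual control.
clinicians <- sprintf("clin_%02d", 1:75)
arm <- setNames(sample(rep(c("BOLT+PTC", "control"), length.out = 75)), clinicians)

# Stage 2: assign the consultation time blocks identified by the scheduling
# survey to the 2-, 4-, or 8-week dose conditions.
blocks <- c("block_A", "block_B", "block_C", "block_D", "block_E", "block_F")
dose_weeks <- setNames(sample(rep(c(2, 4, 8), length.out = length(blocks))), blocks)
```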

Data collection

All data were collected via online surveys and self-reported by participants. After enrolling in the study, all participants completed pre-training measures of their demographic, professional, and caseload characteristics and their MBC knowledge, skill, attitudes, and use. Measures were subsequently administered over the 32 weeks following study enrollment, on the schedules described for each instrument under Measures. For data collection activities, participants received $300 in the services as usual condition and $500 in the online training and consultation conditions; the difference reflected the greater data collection burden in the latter.

Training and consultation

The online training and post-training consultation strategies were developed via an iterative human-centered design process intended to ensure their efficiency and contextual fit [67, 68].

Online training

After completing pre-training measures, participants assigned to any BOLT+PTC condition were asked to complete the online training within 2 weeks. Training included a series of interactive modules addressing the following content: (1) utility of MBC in school mental health; (2) administration and interpretation of measures; (3) delivery of collaborative feedback; (4) treatment goal-setting and prioritization of monitoring targets; (5) selecting and using standardized assessments; (6) selecting and using individualized assessments; (7) assessment-driven clinical decision-making; and (8) strategies to support continued use. The interactive online training modules were accompanied by a variety of support materials (e.g., tools to help integrate MBC into clinicians’ workflow; job aids for introducing assessments and providing feedback on assessment results; a reference guide to commonly used youth measures) delivered via an online learning management system.

Consultation

The consultation model consisted of (1) 1-h small group (3–5 clinicians) live consultation sessions led by an expert MBC consultant, which occurred every other week during the consultation period (e.g., clinicians in the 2-week condition had a single consultation call), and (2) asynchronous message board discussions (hosted on the learning management system and moderated by the same consultant). Clinicians were asked to post a specified homework assignment on the message board (a) prior to their first call and (b) following each call during their assigned consultation period. Regardless of the consultation dosage, live consultation calls followed a standard sequence: (a) introduction and orientation to the session; (b) brief case presentations including MBC strategies used; (c) group discussion of appropriate next MBC steps for the case, including discussion of alternative therapeutic approaches/strategies if MBC indicates that a change in treatment target or intervention strategy is needed; (d) expert consultant recommendations (as appropriate); and (e) wrap-up, homework assignments, and additional resources. Asynchronous message board discussion provided a central location where clinicians reported on their experiences completing homework. Across the 2-, 4-, and 8-week groups, participating clinicians posted an average of 1.6 (range 0–3), 3.9 (range 0–6), and 6.2 (range 1–16) times, respectively.

Services as usual

Typical education sector mental health services tend to include a diverse array of assessment strategies that may include some inconsistent use of formal assessment and monitoring measures [56, 62]. Clinicians in this condition only completed study assessments.

Measures

Clinician demographic, professional, and caseload characteristics

Clinician demographic, professional, and caseload characteristics were collected using a self-reported questionnaire developed by the study team, informed by those used in prior school-based implementation research (e.g., [69]). Participants completed this questionnaire upon study enrollment.

MBC Knowledge Questionnaire (MBCKQ)

Modeled on the Knowledge of Evidence-Based Services Questionnaire [70], the MBCKQ was designed to assess factual and procedural knowledge about MBC. The 28-item, multiple-choice MBCKQ was iteratively developed based on the key content and learning objectives of the MBC training modules and administered at baseline and 2, 4, 6, 8, 10, 16, 20, 24, 28, and 32 weeks.

MBC skill

Clinicians responded to 10 Likert-style items assessing MBC skills including selection and administration of measures, progress monitoring, treatment integration/modification based on the results, and feedback to clients. Responses range from 1 (“Minimal”) to 5 (“Advanced”). This scale has previously demonstrated good internal consistency (α = .85) when used with school-based clinicians [56]. In the current sample, α = .90. It was also administered at baseline and subsequently at 2, 4, 6, 8, 10, 16, 20, 24, 28, and 32 weeks.

MBC attitudes

The Monitoring and Feedback Attitudes Scale (MFA) [71] was used to assess clinician attitudes toward ongoing assessment of mental health problems and the provision of client feedback (e.g., “negative feedback to clients would decrease motivation/engagement in treatment”). Responses range from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). The MFA has two subscales: (1) Benefit (i.e., facilitating collaboration with clients) and (2) Harm (i.e., harmful for therapeutic alliance, misuse by administrators). In the current sample, the MFA subscales demonstrated strong internal consistency (α = .91 and .88, respectively). The MFA was administered at baseline and 2, 4, 6, 8, 10, 16, 20, 24, 28, and 32 weeks.

MBC practices

Clinician self-reported use of MBC practices was measured by their completion of the Current Assessment Practice Evaluation – Revised (CAPER) [52], a measure of MBC penetration. The CAPER is a 7-item self-report instrument on which clinicians report their use of assessments in their clinical practice during the previous month and previous week. Clinicians indicate the percentage of their caseload with whom they have engaged in each of the seven assessment activities. Response options for each activity are on a 4-point scale (1 = “None [0%],” 2 = “Some [1-39%],” 3 = “Half [40-60%],” 4 = “Most [61-100%]”). The CAPER has three subscales: (1) Standardized assessments (e.g., % of caseload administered a standardized assessment during the last week); (2) Individualized assessments (e.g., % of caseload with systematically tracked individualized outcomes last week); and (3) Treatment modification (e.g., % of clients whose overall treatment plan was altered based on assessment data during the last week). Previous versions have been found to be sensitive to training [43]. The CAPER demonstrated good internal consistency across its three subscales in the current study (α = .85, .94, .87). Clinicians completed the CAPER every week for 32 weeks, including baseline; total CAPER scores for each subscale were used as implementation outcomes.
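For illustration, a minimal scoring sketch in R follows; the item-to-subscale mapping shown is hypothetical (the published measure [52] defines the actual key), as is the assumed data layout:

```r
# Assumed layout: one row per clinician, items caper_1 ... caper_7 coded
# 1-4 (None / Some / Half / Most of caseload).
caper_items <- dat[, paste0("caper_", 1:7)]

# Hypothetical mapping of the seven items onto the three subscales.
key <- list(standardized   = c(1, 2),
            individualized = c(3, 4),
            modification   = c(5, 6, 7))

# Total score per subscale, used as the implementation outcomes.
caper_totals <- sapply(key, function(ix) rowSums(caper_items[, ix, drop = FALSE]))
```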

Analyses

The main goal of the current study was to test whether the BOLT+PTC strategies led to improvements in MBC implementation mechanisms and outcomes relative to a no-training, services as usual condition. The specific MBC practices measured using the CAPER were standardized assessment use, individualized assessment use, and treatment modification informed by collected assessment data. Our second goal was to test whether the dose of PTC was differentially related to implementation outcomes (i.e., MBC practices). Finally, we tested the main effects of training and consultation dose on the hypothesized implementation mechanisms (i.e., MBC knowledge, attitudes, and skill).

We used R [72] for all analyses and tested our main hypotheses using multilevel models (MLMs) with the statistical package ‘nlme’ [73]. MLMs allow for the analysis of clustered data, such as multiple observations of clinicians collected over time, and accommodate missing data at the observation level. MLMs allowed us to simultaneously estimate all key hypothesis tests for each outcome in a single model: intervention effects on the levels of the outcome following training, intervention effects on the rate of change in the outcome over the 32 weeks during and following training, and the effects of PTC dose. These models are more appropriate than traditional ANCOVA models because they integrate multiple time points into the analysis under more flexible assumptions (e.g., modeling both how clinicians changed over time in implementation outcomes and differences across groups at specific time points).
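For concreteness, a minimal nlme sketch of one such model follows; `dat_long` (one row per clinician per assessment week), `caper_std`, `clinician_id`, `bolt` (1 = BOLT+PTC, 0 = control), and `time_c` (constructed in the next sketch) are assumed names, not the authors' code:

```r
library(nlme)

# Growth model for one outcome (standardized assessment use). Random effects
# are limited to intercepts and linear slopes, matching the constraint noted
# below. ML estimation keeps information criteria comparable across fits.
m_std <- lme(
  fixed     = caper_std ~ bolt * time_c,
  random    = ~ time_c | clinician_id,
  data      = dat_long,
  method    = "ML",
  na.action = na.omit   # missingness handled at the observation level
)
summary(m_std)  # bolt: group difference at the intercept (end of consultation);
                # bolt:time_c: group difference in linear change over time
```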

We centered time such that the intercept reflected the end of consultation, regardless of consultation dosage. For instance, for those with 2 weeks of PTC, the intercept was centered at week 4 and for those with 4 weeks of PTC, the intercept was centered at week 6. This means that the main effects of intervention reflect differences across groups in levels of the outcomes at the end of consultation, or after the first 2 weeks of observations for the no-consultation control group.
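A sketch of this centering, using the week numbers stated or implied above (consultation followed the 2-week training window, so it ended at week 4, 6, or 10 depending on dose); the variable and condition names are hypothetical:

```r
# Week at which the intercept is anchored for each condition: week 2 for the
# no-consultation control, and weeks 4, 6, and 10 for the 2-, 4-, and 8-week
# PTC doses, respectively.
end_week <- c(control = 2, ptc2 = 4, ptc4 = 6, ptc8 = 10)

# Center time so that time_c = 0 at the end of consultation.
dat_long$time_c <- dat_long$week - end_week[as.character(dat_long$condition)]
```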

To properly estimate the shape of change over time, we compared polynomial models (quadratic and cubic effects) to a piecewise model, which estimated separate slopes for change during the consultation period and change in the post-consultation period. Models were chosen based on fit; we selected a given model as better fitting only when all indices (AIC, BIC, and the −2 log-likelihood test) agreed, to avoid capitalizing on chance. These model fit indices allow comparison of the relative goodness of fit of different model specifications, balancing fit against parsimony [74]. Because we had a relatively small sample size of clinicians, we only estimated random intercepts and linear slopes; estimating random quadratic slopes (which capture individual differences in quadratic change over time) produced model convergence problems.
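Continuing the sketch above, the competing specifications might be fit and compared as follows (the piecewise variables split time at the end-of-consultation knot; all names remain hypothetical):

```r
# Piecewise coding around the knot at time_c = 0.
dat_long$time_pre  <- pmin(dat_long$time_c, 0)  # negative during consultation, 0 after
dat_long$time_post <- pmax(dat_long$time_c, 0)  # 0 during consultation, weeks since end after

# Curvilinear alternative: add a quadratic fixed effect of time.
m_quad <- update(m_std, fixed. = . ~ . + I(time_c^2))

# Piecewise alternative: separate linear slopes before and after the knot.
m_piece <- lme(caper_std ~ bolt * (time_pre + time_post),
               random = ~ time_pre | clinician_id,
               data = dat_long, method = "ML", na.action = na.omit)

# A specification was preferred only when all fit indices agreed.
AIC(m_std, m_quad, m_piece)
BIC(m_std, m_quad, m_piece)
```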

We used three orthogonal contrast codes to compare the effects of intervention across conditions. The first compared the effects of BOLT+PTC to control. The second compared the effects of receiving PTC for 4 or 8 weeks to receiving PTC for 2 weeks, and the final contrast code compared receiving 4 vs. 8 weeks of PTC.
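One standard set of weights implementing these three comparisons is sketched below; the paper does not report the exact values used, so these are illustrative (they are mutually orthogonal):

```r
codes <- data.frame(
  condition = c("control", "ptc2", "ptc4", "ptc8"),
  c1 = c(-3, 1, 1, 1) / 4,   # BOLT+PTC (any dose) vs. control
  c2 = c(0, -2, 1, 1) / 3,   # 4 or 8 weeks of PTC vs. 2 weeks
  c3 = c(0, 0, -1, 1) / 2    # 8 vs. 4 weeks of PTC
)
dat_long <- merge(dat_long, codes, by = "condition")

# The contrast codes then replace the single bolt indicator:
m_dose <- update(m_std, fixed. = caper_std ~ (c1 + c2 + c3) * time_c)
```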

Power

The sample size balanced feasibility (e.g., the number of clinicians we expected to be able to recruit during the project period) against the sensitivity of the data analytic models to detect the expected effects. We used Monte Carlo simulations in Mplus 6.0 to provide power estimates for the main effects models used in the current study. In a Monte Carlo simulation, data are simulated for a population based on estimated parameter values, multiple samples are drawn from that population, and a model is estimated for each one. Parameter values and standard errors are averaged over the samples, and power is derived from the proportion of replications in which the null hypothesis is correctly rejected. We estimated power across 1000 replications for n = 75 participants measured at 32 time points. In general, our power analyses suggested that the current study had sufficient power (1 − β = .80, α = .05) to detect small (b = .12) effects at Level 1 of the time-varying training and consultation effects across all models. In contrast, power to detect between-group differences, especially among the different consultation groups, was more limited. For example, we were powered (1 − β = .80, α = .05) only to detect large (d = .98) differences between consultation groups and moderate (d = .58) differences between PTC and training-only groups.
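The authors ran these simulations in Mplus 6.0; the same Monte Carlo logic can be sketched in R as follows (the variance components below are placeholders, not the study's estimated parameter values):

```r
library(nlme)

# Simulate one dataset (75 clinicians x 32 occasions), fit the growth model,
# and record whether the Level-1 time effect is detected at alpha = .05.
sim_once <- function(n_clin = 75, n_obs = 32, b_time = 0.12) {
  id   <- rep(seq_len(n_clin), each = n_obs)
  week <- rep(seq_len(n_obs) - 1L, times = n_clin)
  u0   <- rnorm(n_clin, sd = 1.00)[id]   # random intercepts (placeholder SD)
  u1   <- rnorm(n_clin, sd = 0.05)[id]   # random linear slopes (placeholder SD)
  y    <- u0 + (b_time + u1) * week + rnorm(n_clin * n_obs)
  d    <- data.frame(y, week, id = factor(id))
  fit  <- tryCatch(lme(y ~ week, random = ~ week | id, data = d),
                   error = function(e) NULL)  # skip non-converged replications
  if (is.null(fit)) return(NA)
  summary(fit)$tTable["week", "p-value"] < .05
}

set.seed(1)
mean(replicate(1000, sim_once()), na.rm = TRUE)  # rejection rate = estimated power
```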

Results

Descriptive statistics

Table 1 presents overall descriptive statistics for the main sample for the current study. Table 2 presents means and SDs for implementation outcomes and mechanisms across time.

Table 2 Implementation outcomes and mechanisms across time

Main effects on implementation outcomes

First, we examined whether receiving BOLT+PTC produced changes in MBC practices during the post-consultation period (weeks 8–32), relative to the control group and without accounting for change over time. Mean differences were moderate for use of standardized assessment (Cohen’s d = .37, p = .11) and individualized assessment (Cohen’s d = .49, p = .04), but close to zero for treatment modification (Cohen’s d = .05, p = .84). However, only the effect on individualized assessment reached conventional statistical significance (p < .05).

Change over time in outcomes

Next, we modeled change over time in the main outcomes, MBC practices. For standardized assessment, a quadratic term exhibited the best fit to the data, indicating a curvilinear pattern of change (see Fig. 1); a linear-only model best fit the data for individualized assessment and treatment modification.

Fig. 1 Main effects of BOLT on main outcomes over time

Main effects over time

We tested whether receiving BOLT+PTC led to post-training changes in MBC practices over time. Table 3 presents these findings. Findings were relatively similar across outcomes: clinicians who received training and any consultation exhibited better outcomes over time, whereas those in the control group exhibited declines in MBC practices, with those declines generally slowing over time. Figure 1 also illustrates the effects of BOLT+PTC over time on MBC practices. Specifically, for both standardized assessment (b = .02, p < .01) and individualized assessment (b = .04, p < .001), there was a significant intervention effect on change over time, such that participants in the BOLT+PTC condition exhibited less decline relative to those in the control condition. While clinicians in BOLT+PTC continued to report higher levels of standardized and individualized assessment use, control group participants reported declines over time. In contrast, we observed no effects of BOLT+PTC on treatment modification.

Table 3 Main effects of BOLT+PTC on outcomes over time

Effects of PTC dose on main outcomes over time

We next explored whether PTC dose influenced outcomes, by replicating the analyses above with additional contrast codes for different PTC doses. Across models, there was little evidence of a consistent pattern of findings. Although some effects reached the traditional significance level, there was not consistent or strong evidence that any one PTC dose condition had a stronger impact on outcomes than any other. Because of the small sample size of the individual training groups, and the inconsistent effects, we refrain from interpreting them as evidence of a clear dose effect of PTC. See Table 4.

Table 4 Effects of consultation dose on BOLT outcomes over time

Effects on implementation mechanisms

Next, we tested whether BOLT+PTC was associated with change over time in the theorized implementation mechanisms (MBC knowledge, attitudes, and skill). We used a similar approach to the analyses above, with multilevel models and time centered at the end of consultation for each group. At the end of consultation, participants in the BOLT+PTC conditions reported higher levels of MBC knowledge (b = .06, p = .019) than control group participants, but not higher levels of MBC skill (b = .13, p = .33) or more positive MBC attitudes, whether perceived benefit (b = .16, p = .06) or perceived harm (b = −.12, p = .34). Participants in the BOLT+PTC condition exhibited larger increases in MBC skills across the consultation period (b = .03, p < .001), although these increases slowed over time (b = −0.00091, p < .001) and there were ultimately no group differences at the end of consultation. We observed no other effects of BOLT+PTC on rates of change over time.

We also explored whether PTC dose was associated with differential outcomes for theorized mechanisms (skill, attitudes, and knowledge), by replicating the analyses above with additional contrast codes for different PTC doses. Across all outcomes, there was no evidence that PTC dose was associated with differences in mechanisms.

Discussion

This study tested the impact of an efficient, brief online training and post-training consultation program for MBC with school-based mental health clinicians. In addition to examining the effects of BOLT+PTC, it was the first study to experimentally manipulate PTC dose to evaluate the amount of support that may be needed to promote effective implementation. Consistent with current implementation research design guidelines to advance the precision of implementation science results, we examined the impact of BOLT and various dosages of BOLT+PTC on implementation outcomes (e.g., MBC practices) as well as implementation mechanisms (i.e., MBC knowledge, attitudes, and skill).

Beyond examinations of dose, the precision of implementation science is increased by better specification of implementation outcomes. In the current project, effects were apparent over time for the impact of BOLT+PTC on both standardized and individualized assessment use, but not for treatment modification, even though each of these was an explicit focus of the training and consultation. Consistent with this, the developers of the CAPER have reported the lowest mean ratings for the treatment modification subscale [52]. Prior studies also have found less change—and less sustainment of changes—in treatment modifications following training and consultation in MBC practices [43]. It may be the case that changing one’s practice to collect standardized or individualized measures is more straightforward than using those measures to inform treatment modifications. Indeed, determining changes to treatment plans is often more complicated than instrument administration alone. Studies have pointed to the importance of ensuring clinical decision support during MBC implementation [75], which some measurement feedback systems and ongoing clinical consultation can provide. Although MBC has been associated with improved outcomes among a wide variety of presenting concerns and treatment modalities [44, 45], it is possible that using MBC for treatment planning is more ambiguous when implemented without a specific evidence-based intervention. The current sample of clinicians delivered MBC-enhanced “usual care” mental health services in schools. Treatment modification may be easier if MBC is implemented alongside a broader practice change initiative, such as those that include transdiagnostic or common elements of evidence-based practices [76,77,78], or even clear and consistent specification of usual care treatment elements [79].

Notably, these clinicians also delivered usual care services in the education sector, where there is limited research on evidence-based practice implementation [61]. Given the importance of better understanding setting-specific rates of change, deterioration, outcomes, and treatment interventions that may influence MBC implementation and effectiveness [80], additional research examining how MBC practices can more effectively inform treatment modifications in school mental health treatment—and beyond—is greatly needed.

Regarding implementation mechanisms, one of the three theorized mechanisms was impacted for the BOLT+PTC group, relative to controls: MBC knowledge was higher immediately following consultation. In contrast, EBP attitudes have been inconsistently associated with implementation outcomes in prior research across service sectors and interventions [81, 82]. Given that some implementation strategies have been found to successfully shift practitioner attitudes in the education sector (e.g., Beliefs and Attitudes for Successful Implementation in Schools [BASIS] [83, 84]), there may be value in focusing more explicitly on that mechanism to enhance the utility of the BOLT approach.

Research is increasingly investigating “how low can you go” with regard to implementation processes, pursuing pragmatic and cost-effective implementation supports [37, 85, 86]. Relatedly, our findings regarding consultation dose were not as anticipated. Specifically, we observed very small differences among the different PTC groups (2, 4, or 8 weeks), indicating that while post-training consultation is critical, higher doses may not have improved its impact in this sample. However, many questions remain surrounding optimal consultation dosage, and replication with larger samples of clinicians is needed. Aside from the aforementioned between-group power limitations, there are several possible explanations for our findings. First, it may be that MBC practices are simpler than evidence-based treatment protocols, which often rely on training and PTC to support clinicians’ adoption and ongoing implementation with fidelity. Therefore, less consultation may be adequate for MBC relative to manualized, evidence-based interventions. Indeed, it has been suggested that less complex interventions may need fewer implementation supports to be successfully adopted [87]. On the other hand, our finding that standardized and individualized assessment administration changed more over time than treatment modification suggests that treatment modification may be a more complex or challenging practice to change than collecting new measures. Additionally, changes in treatment modification may be less frequently indicated than other changes in MBC practice, as they are often dependent on the results of the assessment (e.g., in cases of nonresponse or deterioration). It is also possible that none of the consultation conditions were sufficient to effect this change or to support clinicians in determining what other intervention changes were indicated when measurement suggested a lack of progress. Such a “ceiling effect” for our specific set of training and PTC strategies could necessitate the incorporation of additional techniques, such as the BASIS strategy noted earlier. In BOLT+PTC, treatment modifications in response to assessment data were discussed generally to enhance usual care, rather than in adherence to a particular set of intervention practices or an expectation to deliver a manualized intervention. Future work should explore whether the brief model could be augmented to focus more explicitly on understanding when or how to adjust care. In this way and others, MBC—and MBC-facilitated treatment more generally—may continue to be a practice that is “simple, but not easy.”

Limitations

Findings of the current study must be interpreted within the context of several limitations. First, we were unable to include observational measures of MBC practices in this initial study. Although self-reported clinical practices are typical in implementation research owing to practicality and resource constraints, particularly in pilot trials, observational measures would have been more robust and are a recommended direction for future work. A review of clinical records was similarly infeasible due to the remote nature of this study and the diverse systems in which participants worked. Second, the frequency with which MBC practices were assessed may have had an impact on clinicians’ MBC behaviors, although this assessment approach was consistent across conditions. Third, it is not entirely clear why some deterioration of MBC practices was observed over time in our sample. It may be that the baseline ratings of MBC practices were inflated and that repeated assessments produced ratings that were more reflective of clinicians’ behaviors. The pattern of results observed from weeks 8 to 32 (Table 2) provides some support for this possibility. Fourth, based on previously described research documenting the limitations of workshop training alone, we opted not to include a training-only condition and thus could not examine the differential effects of training alone compared to training plus consultation. Fifth, collection of student clinical outcomes was outside the scope of this study due to its pilot nature and our explicit focus on the most proximal implementation mechanisms and outcomes linked to the implementation supports provided. Sixth, although we examined main effects on hypothesized implementation mechanisms, we did not conduct formal tests of mediation as they were beyond the scope of the current paper. Finally, as indicated above, the sample sizes of each PTC dosage condition limited the conclusions that could be drawn about the differential effects of a greater number of weeks of consultation.

Conclusion and future directions

Results from this study indicate that efficient training and consultation supports can produce meaningful practice change surrounding the administration of assessments. However, better understanding the optimal dosage of consultation when paired with high-quality, active training requires further investigation, particularly for MBC delivered in schools. This is in line with recent evidence that even a condition without any post-training consultation—but with other potentially low-cost (but high quality) supports like handouts, video tutorials, and access to online supports—was able to yield improvements in some aspects of fidelity, including those that might be most important for patient outcomes [8]. This points to the necessity of continuing to identify and test the mechanisms engaged by various implementation strategies, especially training and consultation processes, to determine the most parsimonious and efficient approaches. Certainly, the incremental impact of consultation relative to alternative or adjunctive post-training support strategies, such as measurement feedback systems [88, 89], should be further explored as research on successful MBC implementation strategies unfolds.