Background

Evidence-based practices (EBPs) continue to proliferate in child and adolescent mental health treatment, many of which are developed under controlled conditions in university clinics and healthcare settings [1]. However, intervention evidence is limited by the client populations and settings where the evidence was originally derived [2], often making it necessary to adapt the intervention to fit a particular setting [3, 4]. In addition, there are numerous barriers to successful EBP implementation in real-world mental health settings where children and their families are likely to receive care. Implementation barriers exist in both the outer setting (e.g., patient needs and resources) and the inner setting (e.g., organizational culture, leadership engagement), as well as in individual characteristics and implementation processes unique to each intervention, intervention level, population, and service setting [3].

Desired implementation outcomes are more likely when implementation strategies are selected for and tailored to 1) specific patient populations, 2) care delivery systems and practices, and 3) local barriers and facilitators, often referred to as “determinants of practice” [5, 6]. Implementation strategies are single- or multiple-component approaches aimed at increasing adoption, implementation, and sustainment of EBPs in routine care [7]. Despite an established taxonomy of 73 implementation strategies, minimal guidance exists for how to select, integrate, and tailor these strategies to specific services and contexts [8, 9]. Proposed methods include concept mapping, group model building, conjoint analysis, and intervention mapping [10]. Yet, each method has limitations, such as requiring advanced methodological consultation, complex modeling that may overwhelm stakeholders, and/or use of proprietary software [10].

There are few examples of how to select strategies prospectively based on implementation science research and stakeholder knowledge of contextual factors [5]. This study replicates one established systematic method (the use of modified Delphi surveys) to select implementation strategies for a given EBP (measurement-based care) in the most common mental health service delivery setting for children and adolescents (schools) [10]. Delphi surveys are a pragmatic approach [10] that can be used when implementation strategy lists are established and thus stakeholders can rate existing strategies, propose new ones, and recommend changes in strategy definitions or applications. Stakeholder ratings of importance and feasibility have been used in numerous studies to assess which strategies are most actionable and applicable for a given implementation initiative to maximize success [11,12,13,14,15,16]. The actual effectiveness of these strategies on implementation, service, and client outcomes is an empirical question to be evaluated once they are applied [7].

Measurement-based care in mental health service delivery

Measurement-based care (MBC) is the routine collection and use of client data throughout treatment, including initial screening and assessment, problem definition and analysis, finalizing treatment objectives and intervention tactics, and monitoring treatment progress collaboratively with the client to inform treatment adjustments [17]. MBC is a critical component of an evidence-based practice orientation to mental health treatment [18]. There is strong evidence supporting MBC in settings other than schools. For instance, systematic reviews show better and faster goal attainment and symptom reduction with MBC as compared to usual care; effect sizes range from 0.28 to 0.70 [19,20,21]. Larger effect sizes of 0.49 to 0.70 are attributable to MBC with feedback, particularly feedback provided to both the patient and providers, or when clinical support tools are provided [21, 22]. Recent Cochrane reviews underscore the importance of including studies where measures are used to adjust the treatment plan [23, 24], indicating that patient outcomes associated with MBC are likely a result of the real-time, client-centered, data-driven adjustments made to interventions provided.

Despite the promise of MBC to improve mental health service quality, use of MBC in practice is minimal. Fewer than 20% of providers report collecting progress measures at least monthly [25, 26]. Barriers to MBC implementation in behavioral health care have been well-documented at the individual patient, provider, organizational, and system levels [27].

School mental health treatment services

Schools are the most common setting for children to receive mental health treatment, particularly for families who face barriers to accessing care in traditional clinic- or hospital-based settings [28,29,30,31,32]. However, the extent to which school mental health treatment services are grounded in EBPs is largely unknown [33, 34]. EBPs implemented in schools have potentially broad reach [35, 36] and school-based EBP implementation allows for adaptation to local culture and contexts that is scalable across communities and states [37, 38].

Implementation considerations in schools

Selecting and tailoring implementation strategies to practice and context has been found to optimize implementation feasibility and, ultimately, effectiveness outcomes [39, 40]. Yet, results are mixed, suggesting that tailoring may need to occur continuously throughout implementation [41]. Schools are also a unique setting for mental health treatment services, so implementation strategies defined for other behavioral healthcare delivery settings are unlikely to fit perfectly for schools without attention to strategic selection and tailoring. Indeed, implementing new practices in educational settings requires careful attention to school organizational factors, such as principal leadership, education policies at state and federal levels, a heterogeneous mental health workforce, requirements and constraints related to professional development and ongoing coaching, and logistics as basic as the school calendar [42]. Other studies point to the importance of flexible treatment delivery and intentional family engagement efforts to facilitate EBP implementation and outcomes [43].

MBC implementation in schools

Barriers to MBC implementation in schools share some similarities with those in more traditional behavioral health care settings, such as providers reporting limited time to administer measures. However, some barriers are more salient in the school context, such as difficulty reaching parents, limited access to measures, and lack of administrative or technical resources for scoring measures [44]. Although scientifically rigorous applications of MBC in schools are new, an individualized approach to monitoring student progress and outcomes has been emphasized and studied in schools for decades [45, 46]. There are some published demonstrations of standardized, patient-reported outcome measures being implemented in school mental health systems [47, 48], as well as examples of psychosocial progress monitoring in schools as part of high-quality, comprehensive school mental health systems [49]. Moreover, MBC is consistent with schools’ emphasis on Response to Intervention, which uses student progress data to prevent and remediate academic and behavioral difficulties [50], and with accountability requests for school-based providers to demonstrate outcomes [51]. Recent studies have highlighted case examples of an MBC approach in schools, from assessment tool selection to measurement processes and the role of feedback to the student and family [51, 52]. Yet, a substantial gap remains in the literature regarding the implementation strategies best suited to MBC when child mental health treatment services are provided on school grounds instead of in a more traditional clinic or hospital setting.

Current study

The current study identifies feasible and important implementation strategies to increase school mental health provider use of MBC. This work builds on an initial list of 70+ implementation strategies that have been codified for general use [9, 53], and a recent extension to identify top strategies relevant to and important for implementing evidence-based practices in school settings [13, 54]. We focused specifically on selecting strategies for MBC in schools using prior Delphi survey methods. We collected importance and feasibility ratings for implementation strategies, as well as operational definitions and recommendations for practical application in schools [9, 13, 53, 54]. Our objective was to identify the implementation strategies for MBC rated as most feasible and important by provider and researcher stakeholders with expertise in school mental health treatment.

Methods

Participants

Study participants (N = 52) were drawn from a national sample of school mental health stakeholders: (1) providers with experience delivering and/or supervising mental health interventions in schools (N = 31); and (2) researchers with experience partnering with schools or districts to implement EBPs (N = 21). Providers were sampled from the National School Mental Health Census and researchers were sampled from two established lists of researchers with relevant expertise (see Procedures for details). All participants were US-based, located in one of 23 states (AZ, AR, CA, CO, CT, FL, GA, IL, IN, LA, MD, MA, MI, MN, NE, NH, NC, OH, OR, PA, TX, VA, and WA). Table 1 shows demographic, professional, and urbanicity characteristics of participants.

Table 1 Demographic and professional characteristics of stakeholder participants, N = 52

Providers identified as school psychologists (N = 6, 19%), school social workers (N = 5, 16%), or school counselors (N = 5, 16%). Other provider roles included school psychology supervisor (N = 2, 7%), director of related services/special education/student support (N = 2, 7%), counselor (community- or hospital-employed; N = 1, 3%), mental health agency administrator (N = 1, 3%), or other positions (N = 9, 29%). School providers were based in 18 states representing all regions of the USA; researchers were based in 14 states and had worked with school partners in 43 states, the District of Columbia, Guam, the US Virgin Islands, and other US territories. Most providers indicated they had current or past experience delivering (N = 30, 97%) and/or supervising (N = 20, 65%) mental health treatment services in schools. Demographic and professional characteristics and urbanicity of the N = 31 participating providers displayed in Table 1 were not significantly different from those of the N = 53 recruited providers who completed the prescreening survey, based on chi-square tests (Awad M, Connors E: Promoting measurement-based care in school mental health through practice-specific supervision, submitted). These details were not available for individuals who completed the School Mental Health Profile generally.

Researchers had experience conducting research about child and adolescent mental health, conducting research in partnership with schools/districts, training school-based personnel, and providing consultation or technical assistance to schools/districts. Most researchers had current or past experience training graduate students about working in or with schools (N = 20, 95%), providing mental healthcare in schools (N = 16, 77%), supervising direct mental healthcare in schools (N = 13, 62%), and serving as an educator (N = 11, 52%). Researchers represented various age groups, fields of training, and urbanicity across the USA. Although gender identity (56% female) and degree (100% PhD) appear similar to those of researchers in our datasets who were not invited to participate, we did not have detailed self-reported characteristics of non-participating researchers to conduct statistical comparisons. Results from study participants are likely generalizable to stakeholders of similar demographics, professional expertise, and geographic location. The retention rate for Survey 2 was 94% (N = 49; N = 30 providers and N = 19 researchers).

Procedures

Systematic sampling procedures that drew on nationally representative databases for school-based providers and researchers were used to identify the study sample. Providers were selected through stratified random sampling from the National School Mental Health Census, a nationally representative survey of school and district mental health teams’ services and data usage. The inclusion criterion, confirmed by self-report on a prescreening survey, was holding a position as a school mental health provider or clinical supervisor with experience delivering or supervising school-based psychotherapy, in which MBC would be used (e.g., school social worker). Census data for individuals meeting this criterion were stratified based on rural-urban continuum codes (metropolitan vs. non-metropolitan) and geographic representation. Prospective participants were randomly selected with replacement until a target sample of at least 30 school mental health providers was achieved. We monitored the sample for approximate US distributions of (1) metropolitan and non-metropolitan/rural urbanicity and (2) geographic location. Using this approach, we oversampled non-metropolitan/rural providers toward the end of recruitment to ensure adequate representation.
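The sampling logic described above can be illustrated with a brief sketch. This is not the authors' code; the data frame and column names (e.g., eligible_provider, metro_status, region) are hypothetical, and the snippet only shows one way a stratified draw from a census-style frame could be implemented.

```python
# Illustrative sketch of a stratified random draw from a census-style frame.
# Column names and the replacement logic are assumptions, not the study's code.
import pandas as pd

def draw_recruitment_wave(census: pd.DataFrame, n_target: int, seed: int = 0) -> pd.DataFrame:
    """Draw one recruitment wave, stratified by urbanicity and region."""
    eligible = census[census["eligible_provider"]]  # prescreen-eligible roles only

    def sample_stratum(stratum: pd.DataFrame) -> pd.DataFrame:
        # Sample each metro-status x region stratum roughly proportionally.
        n = min(len(stratum), max(1, round(n_target * len(stratum) / len(eligible))))
        return stratum.sample(n=n, random_state=seed)

    return (
        eligible.groupby(["metro_status", "region"], group_keys=False)
        .apply(sample_stratum)
    )

# "With replacement" here means non-responders are replaced by additional waves
# (oversampling non-metropolitan strata late in recruitment if they lag national
# rates) until at least 30 providers have consented and completed Survey 1.
```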

We recruited 211 school mental health providers; the prescreening survey response rate was 25% (N = 53), and four respondents were ineligible because they had never been a clinician or clinical supervisor (N = 3) or were a community provider not working in a school (N = 1). Of the N = 49 eligible participants invited to complete Survey 1, a final sample of 31 providers participated. Recruits who did not complete the prescreening survey had nonworking emails (N = 24), did not respond to our recruitment request (N = 106), or declined (N = 28). Providers received up to three reminder emails over the course of 3 weeks to respond to the study invitation to consent and start Survey 1.

Researchers were selected using purposive sampling from two sources: (1) Implementation Research Institute fellows who applied to and were selected for implementation science training through a competitive process and were reviewed for school mental health expertise [55]; and (2) a list of 138 school mental health researchers maintained by the National Center for School Mental Health with active peer-reviewed publications and/or grants on topics pertaining to school mental health and wellbeing. This latter group of researchers was part of an invitation-only annual national meeting and had been pre-reviewed for scholarship and impact on the field, adjusted for career stage, by a planning committee composed of national school mental health scholars. Inclusion criteria were (1) expertise with mental health program or practice development, effectiveness testing, and/or implementation research; (2) experience partnering directly with schools; and (3) rank of Associate Professor or Professor at their institution, which resulted in N = 56 eligible researchers. Next, advanced expertise implementing mental health programs or practices in schools was coded on a 4-point scale (3 = “optimal,” 2 = “good,” 1 = “okay,” and 0 = “unable to assess”) by three senior school mental health researchers with extensive experience in evidence-based practice implementation in schools. Ratings were averaged for each researcher, and recruits were then invited with replacement from the highest ratings downward until a sample size of at least N = 20 was achieved. We recruited 29 research participants, which resulted in a response rate of 72% (N = 21); among recruits, one did not respond to recruitment emails and seven declined.

Measures: Delphi surveys

Participants completed two rounds of feedback using anonymous Delphi surveys. Each survey started with operational definitions of implementation strategies, MBC, school mental health providers, and three vignettes illustrating MBC use in schools (see Supplemental file 1). Vignettes were developed and revised for clarity and accuracy based on feedback from several co-authors and other collaborators. The vignettes focus on MBC clinical practice representing various school mental health professional roles, presenting concerns, student ages, and measures. Due to our focus on identifying implementation strategies for MBC as a clinical practice, the vignettes did not refer to any implementation supports, such as decision support by a measurement feedback system or other digital interface for scoring and viewing progress data. Although clinical decision support tools have been associated with more robust effects of MBC, they are not necessary [56], and using technology to aid measure completion and review may create disparities in MBC access [57]. Availability and feasibility of technology-assisted decision support tools is variable in public schools given the ongoing digital divide in education [58]. Therefore, to ensure MBC was presented in a manner that would not raise resource or equity issues, our vignettes focused on the core components of MBC only, without noting how measures are collected.

The Delphi technique is an established method using a series of surveys or questionnaires to obtain controlled, mixed methods input from a diverse set of expert stakeholders to gain reliable consensus on a health care quality topic [59, 60]. This method was used in the Expert Recommendations for Implementing Change (ERIC) project to identify a complete list of implementation strategies and definitions for selection and application to specific practices and settings [9, 53]. Another research team replicated and extended this research to select and tailor strategies for implementing EBPs in schools [13, 61, 62]. As the prior school study was not practice-specific, we included the 33 implementation strategies rated most important and feasible by the prior study to further refine a list of strategies for MBC in schools [61]. For each strategy, participants indicated whether it is relevant to MBC specifically (“yes,” “yes with changes,” or “no”). For strategies rated as relevant (“yes” or “yes with changes”), participants were then asked to provide (1) importance and feasibility ratings (1 = “not at all important/feasible” to 5 = “extremely important/feasible”) based on the definition provided, (2) possible synonyms or related activities to the strategy, and (3) suggestions about the definition or application of the strategy. To close the survey, participants were also asked to suggest additional implementation strategies not listed. The Round 2 survey included an updated list of strategies and definitions based on Round 1 results. Participants had 4 weeks to complete the Round 1 and 2 surveys. Participants were compensated for their time, and study procedures were approved by the Yale Institutional Review Board.
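To make the per-strategy item structure concrete, the sketch below represents one participant's Round 1 response to a single strategy. It is a minimal illustration assuming simple field names; it is not the study's survey instrument or data model.

```python
# Minimal sketch of one participant's Round 1 response for a single strategy.
# Field names are illustrative assumptions, not the study's actual data model.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StrategyResponse:
    strategy: str                        # e.g., "Identify and prepare champions"
    relevance: str                       # "yes", "yes with changes", or "no"
    importance: Optional[int] = None     # 1-5; collected only if relevance != "no"
    feasibility: Optional[int] = None    # 1-5; collected only if relevance != "no"
    synonyms: List[str] = field(default_factory=list)  # related activities suggested
    definition_suggestions: str = ""     # suggested changes to definition/application
```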

Data analyses

Descriptive statistics of quantitative feasibility and importance ratings were examined for normality. Independent samples t-tests were used to compare ratings between providers and researchers. Mean feasibility and importance ratings were plotted for each strategy on a “go-zone” plot to compare relative feasibility and importance by quadrants [63]. Go-zone plots provide a bivariate display of mean ratings and are often used in concept mapping. The origin represents the grand mean of both variables of interest (in this case, feasibility and importance) and the four resulting quadrants are used to interpret relative distance among items (in this case, strategies). The top right quadrant, Zone 1, is the “go-zone” where strategies of the highest feasibility and importance appear.
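As a concrete illustration of the go-zone construction, the sketch below plots mean feasibility against mean importance for each strategy and assigns zones using the grand means as cut points. It is a hedged example assuming a pandas DataFrame indexed by strategy name with hypothetical importance and feasibility columns; it is not the software used in the study.

```python
# Sketch of a go-zone plot: strategies plotted by mean feasibility (x) and mean
# importance (y), with quadrants split at the grand means. Column names and axis
# orientation are assumptions for illustration only.
import matplotlib.pyplot as plt
import pandas as pd

def assign_zone(importance: float, feasibility: float, imp_mean: float, fea_mean: float) -> int:
    """Zone 1 = high/high ("go-zone"), 2 = high feasibility only,
    3 = low/low, 4 = high importance only."""
    if importance >= imp_mean and feasibility >= fea_mean:
        return 1
    if feasibility >= fea_mean:
        return 2
    if importance < imp_mean:
        return 3
    return 4

def go_zone_plot(ratings: pd.DataFrame) -> None:
    imp_mean = ratings["importance"].mean()    # grand mean across strategies
    fea_mean = ratings["feasibility"].mean()
    fig, ax = plt.subplots()
    ax.scatter(ratings["feasibility"], ratings["importance"])
    ax.axvline(fea_mean, linestyle="--")       # quadrant boundaries at grand means
    ax.axhline(imp_mean, linestyle="--")
    for name, row in ratings.iterrows():       # label each strategy point
        ax.annotate(str(name), (row["feasibility"], row["importance"]), fontsize=7)
    ax.set_xlabel("Mean feasibility")
    ax.set_ylabel("Mean importance")
    ax.set_title("Go-zone plot (upper right = Zone 1)")
    plt.show()
```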

A multimethod approach was used to reduce strategies and refine definitions between Survey 1 and Survey 2. First, a document was developed to display quantitative and qualitative Survey 1 results for each strategy. This included each Survey 1 strategy and definition, go-zone quadrant results (overall, as well as for providers and researchers), quantitative considerations (e.g., percentage of stakeholders who indicated the strategy was not relevant for MBC in schools, significant differences between providers and researchers, any distribution normality concerns with ratings), qualitative synonyms, and qualitative definition change recommendations made by participants. Second, one rater (EC) reviewed each strategy using this document and established decision-making guidance, vetted by study team members, for each zone. She coded an initial decision (e.g., retain with revisions, collapse, or remove) with justification for each, documented any synonyms reported more than three times, and drafted definition changes that were (a) minimal language adjustments, (b) not substantial additions to definition length, and (c) consistent with the overall scope of the strategy. Then, another rater (CS) reviewed coded decisions and documentation, and all discrepancies were resolved through consensus conversations. Final decisions about collapsing strategies were made based on consultation with two implementation researchers.

We also examined additional strategies and associated definitions recommended by N = 10 providers and N = 7 researchers, as well as substantive comments pertaining to additional strategies provided at the end of Survey 1 by N = 16 providers and N = 8 researchers. Using thematic analysis and consensus coding by both coders, these data resulted in four distinct strategies broadly related to incentives, policy change, workload/time, and measure selection, which were added to Survey 2. We discovered that two strategies (“alter and provide individual- and system-level incentives” and “develop local policy that supports implementation”) already existed in the established list of strategies for EBPs in schools, so we added those strategies and definitions from the published literature [27]. Two strategies (“support workflow adjustments” and “offer a clinician-informed menu of free, brief measures”) were new, so we added those strategies and definitions based on stakeholder qualitative feedback.

To analyze Survey 2 results, descriptive statistics, independent samples t-tests, and go-zone plots were used again, as was the multi-step process detailed above.
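For the group comparisons, a minimal sketch of an independent-samples t-test with Cohen's d (pooled standard deviation) is shown below, assuming arrays of provider and researcher ratings for one strategy; it is illustrative only and not the authors' analysis code.

```python
# Hedged sketch: independent-samples t-test plus Cohen's d (pooled SD) for one
# strategy's ratings by stakeholder group. Inputs are illustrative assumptions.
import numpy as np
from scipy import stats

def compare_groups(provider_ratings, researcher_ratings):
    a = np.asarray(provider_ratings, dtype=float)
    b = np.asarray(researcher_ratings, dtype=float)
    t_stat, p_value = stats.ttest_ind(a, b)   # independent-samples t-test
    pooled_sd = np.sqrt(
        ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
        / (len(a) + len(b) - 2)
    )
    cohens_d = (a.mean() - b.mean()) / pooled_sd  # standardized mean difference
    return t_stat, p_value, cohens_d
```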

Results

Survey 1 strategy ratings

In general, strategies were rated as “relevant” or “relevant with changes” by participants, and all 33 strategies in Survey 1 received importance and feasibility ratings. Eight strategies received the highest proportions of “not relevant” ratings for MBC in schools (range = 25–38% of participants), as follows: (1) model and simulate change; (2) change/alter environment; (3) provide practice-specific feedback; (4) identify early adopters; (5) visit other sites; (6) obtain and use student and family feedback; (7) develop academic partnerships; and (8) build partnerships (i.e., coalitions) to support implementation. Because the majority of participants rated these as “relevant” or “relevant with changes,” their importance and feasibility ratings were included in our analyses.

Importance and feasibility ratings were high overall for both survey rounds, with importance ratings higher than feasibility ratings on average. On Survey 1, importance ratings ranged from 3.44 (“develop academic partnerships”) to 4.61 (“make implementation easier by removing burdensome documentation tasks”) and feasibility ratings ranged from 2.89 (“visit other sites”) to 4.10 (“distribute educational materials”). Survey 1 standard deviations varied from 0.68 to 1.18. See Table 2 for importance and feasibility results for the 33 initial implementation strategies. Figures 1 and 2 display these findings on go-zone plots, where the four quadrants or “zones” are divided by the grand mean scores of 4.01 for importance and 3.49 for feasibility. Zone 1 includes strategies rated above the grand mean for importance and feasibility (i.e., high feasibility/high importance), Zone 2 includes strategies rated above the grand mean for feasibility but not importance (i.e., high feasibility/low importance), Zone 3 includes strategies rated below the grand mean for feasibility and importance (i.e., low feasibility/low importance), and Zone 4 includes strategies rated above the grand mean for importance but below the feasibility grand mean (i.e., low feasibility/high importance).

Table 2 Results of 33 initial implementation strategies in Survey 1
Fig. 1 Go-zone plot: Survey 1 importance and feasibility ratings (limited range to focus on origin)

Fig. 2 Go-zone plot: Survey 1 importance and feasibility ratings (full range 1–5)

Survey 2 strategy ratings

Based on the multimethod approach described above, Survey 2 contained a reduced set of 21 strategies with updated definitions (see Fig. 3). From Survey 1 to Survey 2, 14 strategies were retained (with updates to the strategy title and/or definition in most cases), 7 were collapsed into 3, 12 were removed, and 4 were added. Feasibility and importance grand means were similar for Survey 2 (importance grand mean = 4.05; feasibility grand mean = 3.33). On Survey 2, importance ratings ranged from 3.61 (“use train the trainer strategies”) to 4.48 (“develop a usable implementation plan”) and feasibility ratings ranged from 2.55 (“support workflow adjustments”) to 4.06 (“offer a provider-informed menu of free, brief measures”). Survey 2 standard deviations varied from 0.56 to 1.22.

Fig. 3 Go-zone plot: Survey 2 importance and feasibility ratings (limited range to focus on origin)

Survey 2 top-rated strategies

Among the 21 revised implementation strategies included in Survey 2 (see Table 4), six were rated as most important and most feasible (see Zone 1 strategies in Table 3, Fig. 3, and Fig. 4). These top-rated strategies include (1) assess for readiness and identify barriers and facilitators; (2) identify and prepare champions; (3) develop a usable implementation plan; (4) offer a provider-informed menu of free, brief measures; (5) develop and provide access to training materials; and (6) make implementation easier by removing burdensome documentation tasks.

Table 3 Results of 21 implementation strategies in Survey 2
Fig. 4 Go-zone plot: Survey 2 importance and feasibility ratings (full range 1–5)

Several additional strategies were rated within 0.50 of the feasibility grand mean, yet above the mean cutoff for importance (see Table 3, Zone 4 strategies with asterisks). These include “conduct ongoing training,” “provide ongoing clinical consultation/coaching,” “monitor implementation progress and provide feedback,” “monitor fidelity to MBC core components,” and “promote adaptability”.

Stakeholder group comparisons

On Survey 1, provider and researcher ratings were not significantly different, with three exceptions. First, as compared to researchers, providers reported that it is more feasible and important to make implementation easier by removing burdensome paperwork (feasibility: provider M = 4.31 vs. researcher M = 3.35, t(44) = −2.96, p = 0.01, d = 0.88; importance: provider M = 4.85 vs. researcher M = 4.30, t(44) = 2.90, p < 0.01, d = 0.86). Second, as compared to providers, researchers reported it is more important to monitor the implementation effort (provider M = 4.20 vs. researcher M = 4.67; t(44) = −2.51, p = 0.02, d = −0.72). Third, train-the-trainer feasibility ratings were significantly higher among providers (M = 3.81) than researchers (M = 3.30; t(45) = 2.06, p < 0.05, d = 0.61). On Survey 2, provider and researcher ratings were not significantly different with one exception: providers reported it is more important to make implementation easier by removing burdensome paperwork (provider M = 4.50 vs. researcher M = 3.94; t(44) = 2.04, p = 0.048, d = 0.62).

Discussion

We applied an established, stakeholder-informed method to identify important and feasible implementation strategies for measurement-based care (MBC) used in school-based mental health treatment. MBC was selected as an under-implemented yet promising and scalable clinical practice in schools that can be added to any presenting concern or treatment plan to improve care quality for children and adolescents. We identified six top-rated implementation strategies for MBC based on ratings of importance and feasibility in schools. Those strategies were (1) assess for readiness and identify barriers and facilitators; (2) identify and prepare champions; (3) develop a usable implementation plan; (4) offer a provider-informed menu of free, brief measures; (5) develop and provide access to training materials; and (6) make implementation easier by removing burdensome documentation tasks.

These six strategies represent a natural chronology for organizing an implementation approach for clinical providers in schools. For example, several strategies could be put in place before an initial training or provision of training materials occurs (e.g., assess for readiness, develop an implementation plan) and others could follow. These strategies could also be provided as a “bundle” to support MBC implementation in schools.

Several additional strategies were rated as highly important and relatively feasible (within 0.50 of the feasibility grand mean). In general, these strategies reflect those that promote ongoing implementation in clinical practice after initial planning and provider training, which is highly consistent with extant findings about the importance of post-training implementation support strategies [64,65,66]. Because these strategies sit near the “border” of the feasibility and importance grand means, they warrant attention as potentially viable strategies, given the strictly numeric, bivariate cutoff between zones based on grand mean values.

Importance and feasibility ratings were largely not significantly different between providers and researchers, although future replication with a larger sample size is warranted. The few significant differences identified involved moderate to large effect sizes, with providers emphasizing the reduction of burdensome documentation and researchers emphasizing fidelity monitoring to support MBC in schools. These differences have face validity; providers have more experience than researchers with barriers related to documentation and other clinical workflow details, and researchers are more focused on ensuring the implementation is carried out as intended. These differences illustrate the importance of ensuring bidirectional communication, collaboration, and perspective sharing between these two groups of stakeholders, and highlight the importance of sampling various stakeholder perspectives when examining implementation processes.

Also, by focusing specifically on MBC implementation in schools, the current results reveal a narrower and higher range of both importance and feasibility ratings for MBC implementation strategies in schools as compared to general EBP implementation (our importance range = 3.61–4.48 versus 2.62–4.59 in prior work, and our feasibility range = 2.55–4.06 versus 2.08–3.72 in prior work [54]). These differences suggest the value of prioritizing implementation strategies for specific implementation settings and contexts, as was done in this study.

Limitations

This study has several limitations. First, although this sample was nationally representative, it is relatively small, and thus importance and feasibility ratings may not hold for a larger sample. Degrees of freedom were further limited by only requesting feasibility and importance ratings if the participant responded that the strategy was relevant to MBC. Also, school providers were recruited from a national dataset of teams engaged in school mental health quality assessment and improvement efforts, which may represent a more select group of school mental health providers. Future studies should examine importance and feasibility ratings from a wider range of school mental health providers. A larger sample would also allow for more powered analyses of school and provider characteristics (e.g., school size, provider characteristics in Table 1) as moderators of feasibility and importance ratings.

Also, we selected 33 implementation strategies already rated highly in a prior study of EBP implementation in schools, and thus we were unlikely to find mean importance or feasibility ratings in the low to moderate range. Although this may raise questions about potential ceiling effects, the grand means for each construct were not overly high (importance grand mean = 4.05; feasibility grand mean = 3.33), and we used the grand mean as the cut point for the sample (as is conventional for go-zone graphs) to interpret differences among ratings.

Finally, stakeholders’ qualitative feedback about the definition of each strategy was used to develop the final list that appears in Table 4, but recommendations about application of the strategies were not included. This is most pertinent to feasibility, and our team is currently examining these qualitative data to understand how we might optimize the feasibility of individual strategies that were rated highly important but less feasible (Awad M, Connors E: Promoting measurement-based care in school mental health through practice-specific supervision, submitted). Feasibility is a complex construct; many elements contribute to feasibility ratings for a given practice or strategy [67], and when perceptions of feasibility are assessed prospectively, raters must make assumptions about, for example, what resource or training requirements are part of the strategy [7]. It is not uncommon for school stakeholders to rate implementation supports or best practices as more important than feasible due to their experience with resource constraints and structural barriers in schools [16, 68]. Therefore, future research should continue to examine how to operationalize, tailor, and evaluate strategies to promote feasibility in the school context, in order to support schools’ capacity to feasibly implement new initiatives with integrity and sustainability [33, 69].

Table 4 Final list of 21 implementation strategies and definitions for MBC in school mental health

Conclusion and future directions

Methods to select and tailor implementation strategies for a particular practice and setting have been somewhat elusive to date in implementation research and practice [5]. The methods used in this study can be applied to other evidence-based practices, settings, and contexts to solve implementation challenges. In addition, the effectiveness of implementation strategies selected for their potential importance and feasibility needs to be empirically examined. Identification of top-rated strategies for a particular intervention and context is foundational to future strategy testing with practicing providers in real-world care systems. Strategies selected using implementation science methods, such as the current survey methods with go-zone plots, should also be critically examined for the possibility of bundling or combining strategies (for parsimony and/or alignment), as well as for when to apply strategies across implementation stages over time.