Influences of Inner and Outer Settings on Wraparound Implementation Outcomes

Abstract

In recent years, implementation researchers have focused on associations among organizational characteristics (inner settings), policy and funding drivers (outer setting), and implementation outcomes. The current study evaluated the influence of outer setting drivers on implementation of Wraparound care coordination for youth with complex behavioral health needs. Data were drawn from two sources. First, we examined the impact of outer setting drivers on Wraparound implementation by comparing Wraparound implementation in states that used Community Mental Health Centers (CMHCs) versus Care Management Entities (CMEs). Wraparound fidelity data were compiled for a sample of 1174 direct service providers within nine states. Second, we compared workforce development efforts across CMHCs and CMEs within a separate sample of 813 administrators and practitioners. Results of multilevel models indicated that CMEs were associated with higher overall fidelity scores than CMHCs [b = .219, t(5.47) = 3.26, p = .020] even after accounting for state-level covariates and the clustering of individual practitioners within organizations. Furthermore, compared with CMHC staff, those employed by CMEs reported higher competence with the Wraparound model and attended more Wraparound-related trainings. Staff employed by CMHCs were more likely to change practices than their CME counterparts. These findings suggest possible influences of outer settings on Wraparound implementation. The unique policies and procedures associated with CMEs may have helped facilitate practice fidelity among staff, promoted competence, and required fewer practice changes to implement Wraparound. Such findings underscore the importance of policy and fiscal factors in implementation of Wraparound and other evidence-based strategies.

Introduction

Despite the proliferation of evidence-based practices (EBPs), few children and adults with complex behavioral health needs receive evidence-based psychosocial treatment (EBT; Bond, 2018; Bruns et al., 2016; Rieckmann et al., 2011; So et al., 2019). In recent years, state and federal intermediaries, policymakers, and researchers have sought to identify ways to promote the use of EBT (Bruns et al., 2016; Powell et al., 2013). Recent research has shown, however, that the majority of implementation strategies (Powell et al., 2017), measures (Lewis et al., 2015), and funded research studies (Purtle et al., 2016) focus on the individual practitioner level (typically training and workforce development), with very little attention focused on the policy and funding drivers, or the “outer setting” of the implementation ecology. This is problematic, given the growing literature indicating that solely devoting effort to individuals, e.g., via training, is inadequate to promote EBT at scale (Aarons et al., 2014; Bruns et al., 2019; Fixsen et al., 2005; Joyce & Showers, 2002; Locke et al., 2019; Lyon et al., 2019).

The current paper capitalizes on a wealth of data collected across multiple states and children’s behavioral health provider organizations by a national intermediary purveyor organization (IPO; see Proctor et al., 2019) focused on promoting high-quality deployment of intensive care coordination for children and adolescents with serious emotional and behavioral disorders (SEBD) utilizing a research-based Wraparound model (Coldiron et al., 2017). The purpose of this research was to explore the association between Wraparound practice fidelity at the practitioner level and specific systems-level funding and policy factors. The primary aim was to evaluate whether outer setting drivers hypothesized to promote model-adherent implementation of care coordination using a research-based Wraparound model (and promoted by this IPO) were associated with implementation advantages. A secondary aim was to determine whether the outer setting influenced select workforce development outcomes.

Background: Organizational and System Determinants of EBT Adoption

Both theory and research from implementation science provide insights into factors associated with adoption of innovations and EBT uptake. In health care, theoretical frameworks underscore the importance of contextual influences that occur both within (inner setting) and outside (outer setting) individual provider organizations (Aarons et al., 2011; Damschroder et al., 2009). Commonly cited inner setting predictors include, but are not limited to, a skilled workforce, an organizational climate that is open to change, effective leadership, and access to adequate resources (Ehrhart et al., 2014; Fixsen et al., 2005, 2009; Glisson & Williams, 2015). Outer setting variables, such as political support, governmental policies and regulations, financing or reimbursement arrangements, economic policies, eligibility criteria, and systems-level administrative structures, all interact with organization-level factors to influence implementation success (Aarons et al., 2014; Bruns et al., 2019; Damschroder et al., 2009; Damschroder & Hagedorn, 2011; Rieckmann et al., 2011; So et al., 2019).

The relative influence of inner and outer settings has been explicated in recent literature. For example, the Consolidated Framework for Implementation Research (CFIR) assumes an ecological approach whereby inner and outer settings interact with a variety of individual- and intervention-level characteristics and processes to influence the effectiveness of implementation efforts (Damschroder et al., 2009; Damschroder & Hagedorn, 2011). Recent research shows that certain policy levers have demonstrated at least some success in promoting the uptake of particular service models (Rieckmann et al., 2011; So et al., 2019). For example, Bond (2018) proposed a variety of policy strategies designed to promote EBT uptake, most of which center on financial incentives and regulatory mandates to fund the selection, technical support, and implementation of established EBT strategies.

Recent research also provides evidence that general characteristics of outer settings at the state level, including Medicaid expansion, state per capita income, and political party in control of state government, are associated with greater fiscal support for EBT adoption (Bruns et al., 2019). Medicaid provides health coverage for low-income individuals and families in the USA (see https://www.medicaid.gov/), and many states have expanded coverage to increase the number of people who qualify for benefits, which has increased access to mental health services within those states (McMorrow et al., 2017). The relation between income and EBT adoption is noteworthy in light of large income disparities in the USA—even at the state level, median household income varies widely across individual states, with recent 5-year estimates ranging from averages of about $45,000 to $85,000 (United States Census Bureau, 2020). Finally, political party control is significant in that the two dominant political parties in the USA are guided by different priorities in allocating resources for mental health services: Republican policymakers tend to be more fiscally conservative than their Democratic party peers, and thus have been less likely to invest in EBTs, but more likely to consider budget impact and cost-effectiveness of behavioral health research (Bruns et al., 2019; Purtle et al., 2018).

Finally, research using other widely cited frameworks, such as the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework (Aarons et al., 2011), has emphasized the importance of “bridging” entities, such as IPOs and centers of excellence. These structures can translate across individuals working at the various levels of the implementation ecology (Moullin et al., 2019). In the case of Wraparound, such support has come from IPOs such as the National Wraparound Implementation Center (NWIC), which supported the states included in the current investigation.

Despite these recent movements towards increased understanding of the influence of outer, inner, and bridging factors, such research is in its infancy. Watson et al. (2018) have argued that the significant attention afforded the inner setting overshadows that afforded outer settings, despite the fact that external factors are often drivers of and antecedents to implementation readiness at the organization level. The relative lack of research likely stems from outer setting variables being less readily observed, measured, and assigned to experimental conditions. On a pragmatic level, they also tend to be resistant to change (Aarons et al., 2014). Thus, efforts to improve implementation outcomes often focus on malleable inner setting characteristics, and efforts to study initiatives aimed at the seemingly immutable outer setting remain scarce (Bond, 2018; Purtle et al., 2016, 2020; So et al., 2019; Watson et al., 2018; Williams et al., 2018).

The Current Studies

Given the relative lack of focus on outer settings by implementation researchers, there is significant room for growth in our understanding of which types of system-level factors influence implementation outcomes. To help fill this gap in the literature, we conducted two separate but related studies that examined the impact of a select number of outer setting variables on the implementation of Wraparound care coordination for children with SEBD in the USA. In addition, we examined the influence of a single outer setting construct (administrative structure) on outcomes associated with participation in workforce development trainings.

Wraparound is a research-based strategy that provides a unique lens through which to apply concepts from implementation frameworks in that Wraparound is considered as much a systems-level intervention as a child- and family-level intervention (Bruns et al., 2015; Coldiron et al., 2017). As a result, installing high-fidelity Wraparound care coordination as an element of a system or state’s service array requires explicit efforts directed at the system and organizational levels of the implementation ecology, as well as attention to fidelity to the practice model. This requires purveyors such as the NWIC to not only train and coach care coordinators and other Wraparound practitioners, but also to focus on providing technical assistance (TA) to state and organizational leadership to transform systems, policies, financing strategies, and organizational structures, with data collected on all these efforts to guide continuous quality improvement (CQI; Aarons & Sommerfeld, 2012). In this way, Wraparound is similar to frameworks such as school-based Positive Behavioral Interventions and Supports (PBIS; Sugai & Horner, 2002) or integrated care for co-occurring health and mental health conditions (Ratzliff, Unutzer, Katon, & Stephens, 2016), which focus largely on the structures needed to assure effective delivery of specific services or treatments.

In the current studies, we specifically focused on the degree to which administrative, policy, and financing structures in the outer setting influenced conditions at the Wraparound provider organization (inner setting) level, including practitioner perceptions of training relevance and impact, practitioner skill development, and Wraparound fidelity, in nine states between 2013 and 2018. In the rest of this introduction, we provide an overview of Wraparound care coordination, discuss how inner and outer settings are proposed to influence Wraparound implementation, and present the aims of the current research.

The Wraparound Model

Wraparound Practice

Among intensive, evidence-based community strategies for youth, Wraparound is relatively unique in that it is not a treatment per se, but rather a defined individualized team-based process aimed at developing and implementing multi-component plans of care for youth with complex behavioral health needs (Bruns et al., 2010). The process extends from the system of care (SOC) philosophy, which proposes that outcomes for youth with SEBD and their families will be improved via increased access to care and better coordination of the multiple youth- and family-serving systems that serve them, such as mental health, child welfare, education, and juvenile justice (Stroul & Blau, 2008; Stroul & Friedman, 1994). In Wraparound, a young person, their family, and a Wraparound team composed of formal and natural supports are supported by a trained Wraparound care coordinator to create a strengths-based and individualized plan designed to facilitate positive change through a combination of formal evidence-based treatment strategies as well as informal and community supports (Bruns et al., 2010; Coldiron et al., 2017).

Necessary Program and System Supports for Wraparound

Because Wraparound is defined by a high degree of flexibility and involves engaging multiple systems, implementation tends to be complex and requires significant organization- and system-level support. As with other EBTs, at the organization level, staff must be hired selectively, acquire skills, and reach an adequate level of competency to facilitate the Wraparound process (Fixsen et al., 2005, 2009; Lyon et al., 2011). Organizations must also deploy transparent management practices, monitor staffing ratios, examine practice quality, and track youth- and family-level outcomes (Bruns et al., 2006; Coldiron et al., 2017; Miles et al., 2011).

At the system level, Wraparound administrators must coordinate efforts across multiple child- and family-serving agencies to identify the appropriate population and ensure access to an array of services and supports. To do this well, systems must facilitate collaborative relationships across sectors and providers, develop unique fiscal strategies, and focus on continuous quality improvement, among other things (Bruns et al., 2006, 2019; Magnabosco, 2006; Powell et al., 2015).

Wraparound Administrative Structures

In the USA, system-level Wraparound implementation is often attempted via networks of traditional Community Mental Health Centers (CMHCs). CMHCs are designed to provide an array of mental health services (National Council for Behavioral Health, 2013), including outpatient and day treatment, crisis care, consultation, and education. Although these organizations and their structures can encourage community collaboration and integration with other services (Mitchell & Eisendrath, 2002), the traditional fee-for-service approach requires CMHCs to provide a wide array of services with an increased focus on productivity, thereby limiting their ability to function within a coordinated system and provide individualized, flexible, model-adherent care coordination via Wraparound.

To achieve greater flexibility and coordination on behalf of youth with SEBD and their families, some states have established funding and management structures to promote better coordinated systems and more integrated care for children. One innovation has been the introduction of Care Management Entities (CMEs), organizational entities that serve as the hub for coordinating care for youth with complex behavioral health needs by engaging multiple systems (Pires, 2010). States that establish CMEs to implement Wraparound provide these organizations with a more flexible case rate, allowing them to procure formal services and informal supports (e.g., after-school clubs, recreation) for all members of the family (including parents and siblings). Ideally, CMEs develop contracts with these providers and community partners, aiding quality monitoring and accountability. CMEs also employ care coordinators who are trained and supervised to provide this care exclusively (rather than as one service type among many). All the above allow CMEs to engage a range of outer and inner setting drivers that can be mobilized to provide holistic, individualized, and tailored care for the entire family (Anglin et al., 2014; Pires, 2010).

Although research on CME-related outcomes is limited, studies suggest that installing features of CMEs in child-serving systems is feasible (Ireys et al., 2018), and that CMEs may promote positive outcomes among youth on measures of concomitant use of antipsychotic medications, psychiatry-related emergency department (ED) visits, and psychiatric hospitalizations. Despite the potential importance to service systems and implementation science, however, no previous study has examined differences in care coordination implementation outcomes across states that use different “outer context” strategies.

The remainder of this article describes two related studies that sought to address this research gap by investigating whether a state’s decision to use CMEs at the outer setting impacts outcomes related to Wraparound implementation. In the first study, we used multilevel modeling to determine if administrative structure (CME versus CMHC) is associated with Wraparound practice fidelity while accounting for the clustering of Wraparound practitioners within multiple provider organizations that are in turn nested within states. We also controlled for additional state-level variables that have been found to be associated with state investment in EBPs, including Medicaid expansion status, median state household income, and political party in control (Bruns et al., 2019). The second study was more exploratory in nature, with a goal of assessing relations among outer and inner settings by measuring the degree to which administrative structure (CME versus CMHC) is associated with workforce development training outcomes among Wraparound practitioners. Specific research questions were as follows:

Question 1: Compared to states employing traditional CMHC structures, do states using CMEs demonstrate higher levels of Wraparound fidelity?

Hypothesis 1

Fidelity scores as measured by national Wraparound coaches will be higher within CMEs as compared to CMHCs.

Hypothesis 2

CME staff will demonstrate practice fidelity in fewer years than CMHC staff.

Question 2: Compared to staff in states using traditional CMHC structures, do CME staff report inner setting conditions that are more consistent with Wraparound-specific innovations?

Hypothesis 3

Compared to those from CMHCs, workforce members within CME structures will report higher levels of:

  • perceived importance of attending Wraparound-specific trainings;

  • pre-training competence on Wraparound key elements;

  • attendance at Wraparound-related trainings;

  • implementation support from administrators following trainings;

  • impact on practice following Wraparound-specific trainings.

Hypothesis 4

Compared to those from CMHCs, workforce members from CMEs will report fewer barriers to daily practice.

Method

Study 1

Sample

Data related to the first research question were drawn from a sample of 1174 Wraparound care coordinators between 2013 and 2018. Practitioners were from nine of the 11 states that received technical assistance (TA) and workforce trainings from NWIC during the study period. The remaining two states received only partial support and were thus excluded from this set of analyses. Five of the NWIC-supported states in this sample employed a CME structure, and four employed a CMHC structure. In general, initiation of NWIC support for these five states coincided with adoption of CME structures, as states pursued training, coaching, and technical assistance from external experts as part of their systems change efforts. The number of participants per state ranged from 20 to 475, with most states including between 37 and 154 individuals. Because data collection focused on the provision of ongoing professional coaching, demographic data were not collected. Furthermore, since these data were collected for quality improvement efforts related to ongoing coaching provided by NWIC, the project was classified as non-research and deemed exempt from review by the Institutional Review Board (IRB) of the lead author’s institution.

Measures

Wraparound Fidelity

National NWIC coaches observed and rated Wraparound practitioners on their adherence to Wraparound practice elements. Observers used the Coaching Observation Measure for Effective Teams (COMET) to record and score their observations (Estep, Matarese, & Hensley, 2014). The COMET is a 46-item instrument that provides an overall fidelity score and individual scores on the following subscales: (1) Grounded in a strengths perspective (e.g., “ability to reframe the family’s story from a strengths perspective”); (2) Driven by underlying needs (e.g., “ability to elicit and blend all team member perspectives to develop a needs statement”); (3) Supported by an effective team process (e.g., “ability to help the team brainstorm and select effective strategies focused on underlying needs”); and (4) Determined by families (e.g., “ability to support the family in taking a lead in the planning process”).

Based on a combination of practice observation and review of plans of care and other documentation, NWIC coaches scored COMET items 0 for “not demonstrated” and 1 for “demonstrated,” using detailed scoring criteria. A total composite score was calculated by dividing the number of demonstrated items by the total number of demonstrated and not demonstrated items; unobserved/unscored items were excluded from the calculation. When used by trained NWIC coaches, the COMET has been found to demonstrate good inter-rater reliability and discriminant validity using a known-groups method (Estep et al., 2014), and internal consistency was high in the current sample (α = .941). Using the COMET as an indicator of implementation fidelity rather than other widely used adherence measures, such as the Wraparound Fidelity Index (WFI), offered specific advantages. First, since COMET assessments are part of NWIC coaching activities, the scores reflect ongoing progress towards meeting implementation criteria that closely match the skills promoted through ongoing NWIC trainings. As such, the COMET is a formative tool that is tied to ongoing coaching, whereas indices such as the WFI are more summative in nature. COMET assessments also might be more objective than WFI assessments since the former are based on expert observations, whereas the latter are based on self-report. Finally, because these ratings were made by professional coaches, there were virtually no missing COMET data (ratings were unavailable for only one service provider).
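
To make the scoring rule concrete, the sketch below computes a composite under the logic described above. It is illustrative only; the function name and example values are hypothetical, not NWIC’s actual scoring code.

```python
from typing import Optional

def comet_composite(item_scores: list[Optional[int]]) -> Optional[float]:
    """Composite COMET score: demonstrated items / all scored items.

    Each of the 46 items is scored 1 ("demonstrated"), 0 ("not
    demonstrated"), or None (unobserved/unscored, excluded).
    """
    scored = [s for s in item_scores if s is not None]
    if not scored:
        return None  # no scorable items for this practitioner
    return sum(scored) / len(scored)

# Hypothetical example: 3 demonstrated, 2 not demonstrated, 1 unobserved
print(comet_composite([1, 1, 0, None, 1, 0]))  # 3 / 5 = 0.6
```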

Administrative Structure

To confirm each state’s assignment to an administrative structure category (CME or CMHC), the NWIC director and assistant director (third/fourth authors) and evaluation and research director (last author) rated each state on three policy and fiscal characteristics expected of CMEs: (1) whether the organization employing Wraparound care coordinators engages multiple systems to coordinate care for youth with complex behavioral health needs; (2) whether the state uses a per member per month “case rate” rather than a fee-for-service reimbursement mechanism for Wraparound care coordination (to facilitate the procurement of an individualized set of formal services and informal supports for the youth/family); and (3) whether care coordinators’ sole job is Wraparound, rather than serving other functions. Systems meeting all three criteria were coded as CME structures. All three co-authors reached consensus on which states were assigned to each of the two administrative structures. In the current sample, five states employed a CME structure, and the remaining four states employed a CMHC structure.
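
The coding rule amounts to a simple conjunction of the three criteria. A minimal sketch follows, assuming each criterion has been recorded as a boolean per state; the parameter names are illustrative.

```python
def administrative_structure(engages_multiple_systems: bool,
                             uses_case_rate: bool,
                             coordinators_wraparound_only: bool) -> str:
    """Code a state as CME only if it meets all three CME criteria."""
    if engages_multiple_systems and uses_case_rate and coordinators_wraparound_only:
        return "CME"
    return "CMHC"

# Hypothetical state meeting only two of the three criteria
print(administrative_structure(True, True, False))  # "CMHC"
```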

Medicaid Expansion

Medicaid expansion data were gathered from the Kaiser Family Foundation (KFF, 2020). States that had not adopted Medicaid expansion (N = 4) were coded as ‘0’ and those that had adopted expansion (N = 5) were coded as ‘1.’

Median Household Income

Each state’s median household income was gathered from U.S. Census data for the 2014 to 2018 time period (United States Census Bureau, 2020).

State Political Party Control

Data on state political parties were drawn from the National Conference of State Legislatures (NCSL, 2020). Political control was coded as ‘0’ for full Republican control (Governor and legislature), and ‘1’ for divided control (Governor and/or at least one chamber of the state legislature controlled by different political parties). None of the states included in this study were characterized by full Democratic Party control of the executive and legislative branches of state government.

Time

Time was operationalized as the number of years each state had been receiving training and coaching from NWIC at the time of the fidelity assessment. In this study, values ranged from 1 to 10 years.

Analyses

To examine differences in Wraparound fidelity across CME and CMHC structures (Hypothesis 1), we first examined simple mean differences in composite COMET scores across the two structures. Given that the providers assessed in this study were nested within provider organizations, which in turn were nested within the state-level administrative structures, we next tested the fit of multilevel models to assess the influence of state-level predictors of fidelity scores while accounting for clustering that may occur at the provider organization level (Field, 2018; Luke, 2004). As such, we began with a null model with random intercepts for state and provider organization to determine whether we needed to account for the nested structure of the data. Given the relatively large intraclass correlation coefficients (ICCs, reported in the “Results” section), we built a random intercepts model with administrative structure as our primary state-level independent variable, the three state-level covariates (Medicaid expansion, median state income, and political party control), and time (number of years of NWIC support) entered as an additional covariate. Next, we repeated this analysis with a random slopes term for administrative structure to determine whether this improved the model. Finally, once we identified the best-fitting model, we assessed Hypothesis 2 by adding an interaction term between time and administrative structure to determine whether the degree to which fidelity scores changed across years of support varied across CME and CMHC administrative structures. All of these analyses were conducted using SPSS version 27.
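
Although the original analyses were run in SPSS, the model-building sequence can be sketched in Python with statsmodels. The data frame, file name, and variable names below are hypothetical, and the variance-components syntax is one way to approximate organizations nested within states.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical practitioner-level data frame with columns: comet
# (composite fidelity), cme (1 = CME, 0 = CMHC), medicaid, income,
# divided_gov, years (years of NWIC support), org, and state.
df = pd.read_csv("comet_scores.csv")

# Null model: random intercept for state, plus a variance component
# for provider organizations nested within state.
null = smf.mixedlm(
    "comet ~ 1", df, groups="state",
    re_formula="1", vc_formula={"org": "0 + C(org)"},
).fit(reml=False)

# ICCs: share of total variance at the state and organization levels.
var_state = null.cov_re.iloc[0, 0]
var_org = null.vcomp[0]
var_resid = null.scale
total = var_state + var_org + var_resid
icc_state, icc_org = var_state / total, var_org / total

# Random-intercepts model: administrative structure, the three
# state-level covariates, and time entered as fixed effects.
ri = smf.mixedlm(
    "comet ~ cme + medicaid + income + divided_gov + years",
    df, groups="state",
    re_formula="1", vc_formula={"org": "0 + C(org)"},
).fit(reml=False)
print(ri.summary())
```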

Study 2

Data related to the second research question were drawn from a sample of 813 administrators and Wraparound practitioners from eight of the nine states referenced above. Four of these states employed a CME structure, while the other four relied on CMHCs to implement Wraparound. All the administrators and Wraparound staff in Study 2 completed at least one NWIC practitioner training between 2014 and 2018. NWIC trainings focus on topics such as introduction to Wraparound practice, advanced skills-based trainings, and achieving necessary organizational and system supports for Wraparound (see https://www.nwic.org/workforce-development). The majority of participants identified as female (88%) and served as Wraparound care coordinators (73%) or Wraparound supervisors (27%); 59% held a Bachelor’s degree and 37% held a Master’s degree (see Table 1 for a summary of demographics by administrative structure). A higher percentage of staff from CMHCs attended introductory trainings (20.7% CME, 31.1% CMHC), and a higher percentage of CME staff attended intermediate, advanced, or specialized trainings (79.3% CME, 68.9% CMHC). Again, data collection processes were exempt from IRB review and did not require informed consent.

Table 1 Demographic data for Study 2

Measures

Administrative Structure

CME versus CMHC structure was assessed using the same procedures outlined for Study 1. In this sample, data for one of the CME states were not available, thus there were four CME and four CMHC states in these analyses.

Training Variables

Data related to workforce development trainings were gathered via the Impact of Training and Technical Assistance (IOTTA) measure, which is a self-report survey designed to assess satisfaction with trainings, perceptions of training effectiveness, and intended and actual use of training content in practice situations (Coldiron, Walker, & Hensley, 2015; Walker & Bruns, n.d.). Separate versions of the IOTTA were administered at two time points: a post-event IOTTA administered immediately following the conclusion of an NWIC-sponsored training event, and a follow-up IOTTA administered 8 weeks after the event. In addition to a second rating of satisfaction and perceptions of training effectiveness, the follow-up IOTTA also included questions related to actual use of training material in Wraparound practice. The following items and subscales were used in the current analyses:

Perceived importance of training A single item from the post-event IOTTA asked, “In your current role, how important is it for you to master the information, tools, and/or skills described in the training goals?” Response categories ranged from 0 (“not at all important”) to 10 (“utmost importance”).

Existing competence At post-event, participants were asked, “Before today's training, what level of mastery or competence did you have with the information, tools, and/or skills described in the training goals?” Response categories ranged from 0 (“complete beginner”) to 10 (“fully expert”).

Number of trainings NWIC tracks the number of trainings each participant attended and matches initial and follow-up survey responses using unique identifiers on IOTTA surveys.

Degree of follow-up from administrators Two follow-up IOTTA items evaluated the extent to which participants received follow-up support for Wraparound practice, including (1) whether their supervisors met with them for coaching related to training content, and (2) the extent to which they received further training. Response categories ranged from −2 (“strongly disagree”) to 2 (“strongly agree”). Scores on the individual items were averaged to form a single composite score (r = .541).

Organizational barriers to implementation Two follow-up IOTTA items focused on organizational barriers to implementation. One focused on administrative and technological barriers and the other focused on negative opinions of the training content among colleagues. Response categories ranged from − 2 (“strongly disagree”) to 2 (“strongly agree”). Scores on the individual items were averaged to form a single composite score (r = .438).

Training impact Training impact was assessed using 8 items from the follow-up IOTTA. Each item begins with the stem, “Since the training, how have the following aspects of your work changed?” Example items include “How you understand families/problems/needs” and “How you collaborate with other organizations in the community.” Response categories range from −3 (“large negative impact”) to 3 (“large positive impact”). Scores on the individual items were averaged to form a single composite score (α = .943).
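
The composites above are simple item means, and their internal consistency (e.g., α = .943 for training impact) can be checked with a standard Cronbach’s alpha computation. This is a generic sketch with hypothetical scores, not NWIC’s actual analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = items[~np.isnan(items).any(axis=1)]  # listwise deletion
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses to the 8 training-impact items (-3 to 3)
scores = np.array([[2, 3, 2, 2, 1, 2, 3, 2],
                   [0, 1, 0, 1, 1, 0, 0, 1],
                   [3, 3, 2, 3, 3, 2, 3, 3]], dtype=float)
composite = scores.mean(axis=1)  # per-respondent composite score
print(cronbach_alpha(scores), composite)
```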

Missing data were limited for most measures, with only one participant failing to provide data on existing competence and 11 failing to provide data on perceived importance of the training. However, 157 participants failed to provide data on degree of follow-up, and 163 failed to provide data on organizational barriers, resulting in missing data rates of approximately 20% for those two variables.

Analysis Plan

The data available to assess the second research question were largely focused on training outcomes, which resulted in limited demographic data and few contextual variables other than participants’ state. Since organization-level data were not collected, we were unable to follow the same analytic plan that we used for Study 1. Indeed, the lack of demographic and contextual covariates suggested a more exploratory analytic approach for Study 2 as compared to Study 1. Given the availability of data on participants’ home state, we began by calculating ICCs to examine the possibility that clustering of training participants within states influenced their perceptions of training impact. Results indicated very small ICCs for each training outcome, ranging from less than .01 to .078. Thus, for these analyses, we ignored the nested data structure and assessed differences in outcomes across participants from CMEs versus CMHCs using a series of independent samples t-tests. Specifically, we compared perceived importance of training, pre-training competence, number of trainings, organizational support, organizational barriers, and self-reported training impact for participants in CME versus CMHC states.
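
Because the ICCs were negligible, each comparison reduces to a standard independent-samples t-test. A minimal sketch with hypothetical score arrays:

```python
import numpy as np
from scipy import stats

# Hypothetical pre-training competence ratings (0-10) by structure
cme = np.array([6, 5, 7, 4, 6, 5], dtype=float)
cmhc = np.array([5, 4, 5, 6, 4, 4], dtype=float)

# Independent-samples t-test comparing CME and CMHC participants
t_stat, p_value = stats.ttest_ind(cme, cmhc)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```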

Results

Study 1: Impact of Administrative Structure on Fidelity Scores

COMET composite total scores were higher for practitioners working within states that employed CME structures (M = .458, SD = .254) than for those employed by CMHCs (M = .273, SD = .196). Although such findings are consistent with Hypothesis 1, simple comparisons of mean scores failed to account for the clustering of study participants within provider organizations and states. As such, we tested an unconstrained multilevel model with organization and state to assess the potential influence of such clustering. Results indicated that multilevel models were appropriate for the current study, as ICCs were relatively high for both organization (ICC = .131) and state (ICC = .209).

Our first model was a three-level model that included random intercepts for organization and state. Administrative structure, time, and the three state-level covariates were entered as fixed effects, and COMET scores were entered as the dependent variable (see Table 2 for a descriptive summary of mean COMET scores across the independent variables). Results of the multilevel model suggest that only administrative structure was significantly associated with COMET scores [b = .219, t(5.47) = 3.26, p = .020], with CMEs associated with higher fidelity scores than CMHCs. We next tested a model in which we entered an additional random term for administrative structure. However, results suggested that including random slopes for administrative structure did not improve the fit of the model, given the small and non-significant change in the log-likelihood (−2LL) value [χ²change(1) = 0.88, p = .348], thus rendering the random intercepts/fixed slopes model the best fit for these data (see Table 3).
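
The model comparison reported here is a likelihood-ratio test on the change in −2LL between nested fits. Continuing the earlier hypothetical statsmodels sketch (with df and the random-intercepts fit ri already defined), it could be computed as:

```python
import statsmodels.formula.api as smf
from scipy import stats

# Random-slopes model: add a random term for administrative structure
# (sketch only; mirrors the comparison described in the text).
rs = smf.mixedlm(
    "comet ~ cme + medicaid + income + divided_gov + years",
    df, groups="state",
    re_formula="1 + cme", vc_formula={"org": "0 + C(org)"},
).fit(reml=False)

# Chi-square test on the change in -2 log-likelihood (1 df for the
# added term); a non-significant change favors the simpler model.
lr_change = (-2 * ri.llf) - (-2 * rs.llf)
p = stats.chi2.sf(lr_change, df=1)
print(f"chi2_change(1) = {lr_change:.2f}, p = {p:.3f}")
```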

Table 2 Fidelity scores across levels of independent variables by administrative structure
Table 3 Results from random intercept multilevel models assessing main and interaction effects of time and state-level predictors on Wraparound fidelity

Fidelity Scores by Years of Support

Descriptive data in Table 2 suggest that fidelity scores decreased across years of NWIC support for both CME and CMHC states. However, the multilevel models suggest that this trend was not statistically significant [b = −.005, t(384.32) = −1.26, p = .210]. To investigate possible differences in relations between years of support and COMET scores across administrative structures, we ran an additional model that added a time by administrative structure interaction term to the random intercepts model described above. Results suggest that adding the interaction term did not improve model fit [χ²change(1) = 3.55, p = .060], as the interaction between time and structure was not statistically significant [b = .021, t(175.5) = 1.92, p = .056; see Table 3].

Study 2: Impact of Administrative Structure on Training Outcomes

As shown in Table 4, Wraparound staff receiving training from NWIC and working within CME structures reported higher existing competence on training-related topics than those from CMHC structures (CME M = 5.22, CMHC M = 4.79; t = −2.08, p = .038), and they attended more trainings than their CMHC counterparts (CME M = 1.73, CMHC M = 1.57; t = −2.12, p = .035). However, participants from CMHC structures reported greater positive changes to their practices following participation in NWIC trainings than those from CME structures (CME M = 1.35, CMHC M = 1.56; t = 2.81, p = .005). Mean scores on perceived importance of training did not differ for practitioners working within CME versus CMHC systems (CME M = 8.94, CMHC M = 8.90; t = −.275, p = .784). Participants also reported similar scores on degree of follow-up from administrators (CME M = 0.596, CMHC M = 0.687; t = 1.12, p = .265) and organizational barriers associated with implementation (CME M = −.460, CMHC M = −.361; t = .960, p = .337). Together, these findings provide partial support for Hypothesis 3 and fail to support Hypothesis 4.

Table 4 Mean scores on training outcomes across CME and CMHC structures

Discussion

The current findings provide preliminary evidence that outer setting conditions influence Wraparound implementation processes. Consistent with Hypothesis 1, the state-level policy and funding context, as promoted by use of CME structures, was associated with adherence to prescribed Wraparound practice. Furthermore, the relation between administrative structure and fidelity was maintained even after accounting for the nested structure of the data and controlling for several known state-level predictors of EBP support in the multilevel models. These trends were evident immediately, with CME states demonstrating higher practice fidelity than CMHC states within the first year of receiving support from the IPO, and the differences across systems were maintained through multiple years of IPO support. These findings provide partial support for Hypothesis 2: we expected CMEs to reach higher levels of fidelity more quickly than CMHCs, although we expected such discrepancies to emerge over time rather than shortly after the onset of technical support.

Findings related to workforce development trainings were mixed. Consistent with Hypothesis 3, staff affiliated with CMEs attended more trainings than those from CMHCs, and they also reported higher levels of competence on training content. However, there were no differences in perceptions of the importance of trainings, or the amount of follow-up support received after training sessions. Furthermore, staff from CMHCs were more rather than less likely than their CME peers to report changes in their practices following training. Finally, contrary to Hypothesis 4, there were no perceived differences across administrative structures in organizational barriers to implementation. These mixed findings have implications for how we might interpret the reasons behind the observed differences in COMET fidelity scores across CMEs and CMHCs. We turn to this topic below.

Mechanisms Underlying Differences Across Administrative Structures

Workforce Development

Together, the findings of the current set of studies provide evidence that the outer setting has an influence on Wraparound implementation. However, these analyses provide less information about the mechanisms driving the observed differences across CMEs and CMHCs. Consistent with current models of implementation (Damschroder et al., 2009, 2011), we might expect professional development activities, such as trainings, to explain at least some of the differences in implementation outcomes across administrative structures. Indeed, virtually all major implementation frameworks underscore the important role that competent and skillful staff play in implementation (Bond, 2018; Chambers et al., 2013; Fixsen et al., 2005, 2009). However, the structure of our current data files did not allow us to fully explore this hypothesis since the training data and the fidelity data were gathered from two different samples. That said, two sets of findings from the current analyses suggest that while workforce development may play a role in implementation fidelity, other factors are probably more responsible for the observed differences in COMET fidelity scores across CMEs and CMHCs.

First, if workforce development trainings were driving the observed differences across structures, we would expect CMEs and CMHCs to start with similar fidelity scores, but for CMEs to reach acceptable levels of fidelity faster than CMHCs. In the current study, however, Wraparound care coordinators affiliated with CMEs obtained higher COMET scores than those from CMHCs shortly after technical support from NWIC began, and these advantages persisted over time. Second, our findings related to workforce development training were mixed, with few outcomes favoring CME structures. While the two factors favoring CMEs (reported competence and number of trainings attended) might explain some of the variance in fidelity scores, it is likely that other characteristics of CMEs are more responsible for the observed outcomes.

Structural Factors

Looking beyond the effects of workforce development trainings, an alternative explanation for the current findings is that CMEs are more conducive to Wraparound implementation than CMHCs because they are structured in a way that facilitates care coordination. As described by Pires (2010), CMEs benefit children and adolescents with complex behavioral health conditions by reducing the fragmentation of care that is often experienced when multiple service systems are involved. By providing a “centralized vehicle for coordinating the full array of [supports] for children and adolescents with complex behavioral health issues” (Pires, 2010, p. 1), CMEs can promote efficiency in implementation processes, particularly by defining specific roles of the workforce and providing care coordinators with greater flexibility to do their jobs well. These benefits, in turn, may be associated with increased fidelity to the Wraparound model.

Alternate Explanations

Clearly, additional research is necessary to determine the degree to which the above-mentioned characteristics of CMEs are related to fidelity outcomes. Furthermore, future studies could explore a number of alternate explanations for the current findings. As evidenced by the large amount of unexplained variance in the multilevel models, there are additional variables related to the differences in COMET scores across administrative structures. For example, the immediate relative success of CMEs may have resulted from pre-existing competencies among staff. Results from Study 2 hinted at this possibility, as CME participants reported higher levels of existing competence than CMHC participants (see Table 4), and they also were more likely to attend intermediate or advanced trainings (see Table 1). This may have resulted from CMEs having more targeted hiring practices through which they can specifically hire professionals with skills that will help them succeed as care coordinators. Alternatively, these administrative structures may have different levels of internal training quality or organization-level supervision practices. Finally, it is possible that specific pre-existing structures that were not assessed in the current study promoted different outcomes across CMEs and CMHCs.

Limitations

The current study was exploratory and took advantage of training and coaching data that were collected for purposes other than research. As such, it has a number of limitations, which we discuss below.

Representativeness of the Sample

First, only a small number of states were included in the analyses for both Studies 1 and 2, and we were unable to match respondents on the measures of training and the measure of fidelity. Furthermore, whereas the Study 1 sample included only providers, the Study 2 sample included providers and administrators. At the state level, the majority of states included in both studies were led by Republican statehouses and Governors. As noted earlier in this article, Republican policymakers tend to be less likely than their Democratic party peers to invest in EBTs, and more likely to consider budget impact and cost-effectiveness of behavioral health research (Bruns et al., 2019; Purtle et al., 2018). As such, inclusion of more states with Democratic leadership may have resulted in different outcomes.

Mixed Quality of the Measures

Another limitation is that although most of the measures in these studies have been widely used and appear to have sound psychometric properties, several of our measures related to NWIC trainings were based on participants’ responses to single items. These simple measures cannot adequately capture the complexity of constructs such as pre-training competence. Similarly, although the lack of differences in training-related outcomes across CME and CMHC states suggests that workforce development is not a primary driver of differences in implementation outcomes, it is possible that more comprehensive workforce development measures would have revealed different findings.

Endogeneity

Since the data analyzed in this set of studies were gleaned from a practice setting, our ability to control for confounding factors, as well as our ability to identify explanatory variables, was limited. Indeed, as noted throughout the discussion section of this article, the current study provides preliminary evidence that outer setting structures influence implementation fidelity among Wraparound service providers. However, the study is limited by the fact that we were unable to fully explore the mechanisms through which the impact of administrative structure operates. This resulted in an endogeneity problem in our statistical models, as we were unable to include many of the variables that would explain the variance in fidelity scores across CME and CMHC structures. Thus, as noted above, alternate explanations for the differences across structures are possible; for example, CMEs may have been more successful than CMHCs in recruiting experienced staff, or they may have had better systems in place to facilitate staff supervision. While a limitation of the current analyses, this problem can motivate future research based on a more structured design that specifically examines the mechanisms underlying the differences in fidelity outcomes observed across CME and CMHC structures.

Implications and Future Directions

One implication of this study is that states interested in implementing Wraparound with fidelity might be well served to use CMEs and enact the requisite policy and fiscal changes that must accompany them. However, as noted in the introduction of this article, outer settings are typically difficult to change. States that currently rely on CMHCs to deliver intensive care coordination may be challenged to abandon current administrative structures in favor of a CME-based system. Thus, it seems essential to learn more about the mechanisms through which CME structures encourage and support Wraparound implementation, with the goal of promoting such innovations across provider contexts as a way to encourage high-quality, model-adherent implementation.

Given that the same training and coaching efforts yielded inconsistent results across CME and CMHC states, along with the immediate differences in fidelity scores across administrative structures, a second implication is that workforce development efforts alone are unlikely to be sufficient to support EBT uptake. This is not surprising, as recent theoretical models of implementation have proposed numerous inner and outer setting variables that are thought to either promote or inhibit successful implementation efforts (Aarons et al., 2014; Damschroder et al., 2009; Damschroder & Hagedorn, 2011; Fixsen et al., 2005, 2009). As suggested by Bond (2018) and others (Rieckmann et al., 2011), specific policy levers, such as funding incentives and establishment of state and local technical assistance entities, may help achieve and maintain successful implementation. Within organizations, leaders can introduce new management strategies, such as quality monitoring, increased use of data systems, and an intentional focus on continuous quality improvement (Powell et al., 2012).

At the highest level, the current study reinforces the need to not only consider installation of evidence-based practices but also to focus on evidence-based policies. It has now been empirically documented that the research base on the outer context is minuscule compared to studies of organizations and individuals (Lewis et al., 2015; Purtle et al., 2016). Continuation of this trend will only preserve the status quo, in which research-based practice represents a fraction of all services delivered in behavioral healthcare (Bruns et al., 2015). As implementation researchers continue to broaden their focus to examine the unique interactions across inner and outer settings, we will become better positioned to implement comprehensive, coordinated systems capable of supporting youth and families with complex needs.

References

  1. Aarons, G. A., Ehrhart, M. G., Farahnak, L. R., & Sklar, M. (2014). Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annual Review of Public Health, 35, 255–274. https://doi.org/10.1146/annurev-publhealth-032013-182447.

  2. Aarons, G. A., Hurlburt, M., & Horwitz, S. M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 4–23. https://doi.org/10.1007/s10488-010-0327-7.

  3. Aarons, G. A., & Sommerfeld, D. H. (2012). Leadership, innovation climate, and attitudes toward evidence-based practice during a statewide implementation. Journal of the American Academy of Child and Adolescent Psychiatry, 51(4), 423–431. https://doi.org/10.1016/j.jaac.2012.01.018.

  4. Anglin, G., Swinburn, A., Foster, L., Brach, C., & Bergofsky, L. (2014). Designing care management entities for youth with complex behavioral health needs (No. 8b26e5381126442589b3a4e7f0a701a3). Mathematica Policy Research.

  5. Bond, G. R. (2018). Evidence-based policy strategies: A typology. Clinical Psychology: Science and Practice, 25(4), 1–14. https://doi.org/10.1111/cpsp.12267.

  6. Bruns, E. J., Kerns, S. E., Pullmann, M. D., Hensley, S. W., Lutterman, T., & Hoagwood, K. E. (2016). Research, data, and evidence-based treatment use in state behavioral health systems, 2001–2012. Psychiatric Services, 67(5), 496–503. https://doi.org/10.1176/appi.ps.201500014.

  7. Bruns, E. J., Parker, E. M., Hensley, S., Pullmann, M. D., Benjamin, P. H., Lyon, A. R., & Hoagwood, K. E. (2019). The role of the outer setting in implementation: Associations between state demographic, fiscal, and policy factors and use of evidence-based treatments in mental healthcare. Implementation Science, 14(1), 96. https://doi.org/10.1186/s13012-019-0944-9.

  8. Bruns, E. J., Pullmann, M. D., Sather, A., Brinson, R. D., & Ramey, M. (2015). Effectiveness of wraparound versus case management for children and adolescents: Results of a randomized study. Administration and Policy in Mental Health and Mental Health Services Research, 42(3), 309–322. https://doi.org/10.1007/s10488-014-0571-3.

  9. Bruns, E. J., Suter, J. C., & Leverentz-Brady, K. M. (2006). Relations between program and system variables and fidelity to the wraparound process for children and families. Psychiatric Services, 57(11), 1586–1593. https://doi.org/10.1176/ps.2006.57.11.1586.

  10. Bruns, E. J., Walker, J. S., Zabel, M., Matarese, M., Estep, K., Harburger, D., Mosby, M., & Pires, S. A. (2010). Intervening in the lives of youth with complex behavioral health challenges and their families: The role of the wraparound process. American Journal of Community Psychology, 46(3–4), 314–331. https://doi.org/10.1007/s10464-010-9346-5.

  11. Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8, 117. https://doi.org/10.1186/1748-5908-8-117.

  12. Coldiron, J. S., Bruns, E. J., & Quick, H. (2017). A comprehensive review of wraparound care coordination research, 1986–2014. Journal of Child and Family Studies, 26(5), 1245–1265. https://doi.org/10.1007/s10826-016-0639-7.

  13. Coldiron, J. S., Walker, J., & Hensley, S. (2015, March). The revision and application of a training impact survey for Wraparound. Paper presented at the 2015 annual research and policy conference on child, adolescent, and young adult behavioral health, Tampa, FL.

  14. Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4, 50–64. https://doi.org/10.1186/1748-5908-4-50.

  15. Damschroder, L. J., & Hagedorn, H. J. (2011). A guiding framework and approach for implementation research in substance use disorders treatment. Psychology of Addictive Behaviors, 25(2), 194–205. https://doi.org/10.1037/a0022284.

  16. Ehrhart, M. G., Aarons, G. A., & Farahnak, L. R. (2014). Assessing the organizational context for EBP implementation: The development and validity testing of the Implementation Climate Scale (ICS). Implementation Science, 9, 157. https://doi.org/10.1186/s13012-014-0157-1.

  17. Estep, K., Matarese, M., & Hensley, S. (2014). Development, training, and initial statistical summary of a coaching measure for wraparound teams. Presentation at the 27th annual children’s mental health research and policy conference, Tampa, FL.

  18. Field, A. (2017). Discovering statistics using IBM SPSS Statistics: North American edition (5th ed.). Sage Publications, Inc.

  19. Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540. https://doi.org/10.1177/1049731509335549.

  20. Fixsen, D., Naoom, S., Blase, K., Friedman, R., & Wallace, F. (2005). Implementation research: A synthesis of the literature. University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network.

  21. Glisson, C., & Williams, N. J. (2015). Assessing and changing organizational social contexts for effective mental health services. Annual Review of Public Health, 36(1), 507–523. https://doi.org/10.1146/annurev-publhealth-031914-122435.

  22. Ireys, H. T., Brach, C., Anglin, G., Devers, K. J., & Burton, R. (2018). After the demonstration: What states sustained after the end of federal grants to improve children’s health care quality. Maternal and Child Health Journal, 22(2), 195–203. https://doi.org/10.1007/s10995-017-2391-z.

  23. Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Association for Supervision and Curriculum Development.

  24. KFF. (2020). Status of state Medicaid expansion decisions: Interactive map. Retrieved December 8, 2020, from https://www.kff.org/medicaid/issue-brief/status-of-state-medicaid-expansion-decisions-interactive-map/.

  25. Lewis, C. C., Weiner, B. J., Stanick, C., & Fischer, S. M. (2015). Advancing implementation science through measure development and evaluation: A study protocol. Implementation Science, 10(1), 1–10. https://doi.org/10.1186/s13012-015-0287-0.

  26. Locke, J., Lee, K., Cook, C. R., Frederick, L., Vázquez-Colón, C., Ehrhart, M. G., Aarons, G. A., Davis, C., & Lyon, A. R. (2019). Understanding the organizational implementation context of schools: A qualitative study of school district administrators, principals, and teachers. School Mental Health, 11(3), 379–399. https://doi.org/10.1007/s12310-018-9292-1.

  27. Luke, D. A. (2019). Multilevel modeling (Vol. 143). Sage Publications, Inc.

  28. Lyon, A. R., Pullmann, M. D., Whitaker, K., Ludwig, K., Wasse, J. K., & McCauley, E. (2019). A digital feedback system to support implementation of measurement-based care by school-based mental health clinicians. Journal of Clinical Child and Adolescent Psychology, 48(sup1), S168–S179. https://doi.org/10.1080/15374416.2017.1280808.

  29. Lyon, A. R., Stirman, S. W., Kerns, S. E., & Bruns, E. J. (2011). Developing the mental health workforce: Review and application of training approaches from multiple disciplines. Administration and Policy in Mental Health and Mental Health Services Research, 38(4), 238–253. https://doi.org/10.1007/s10488-010-0331-y.

  30. Magnabosco, J. L. (2006). Innovations in mental health services implementation: A report on state-level data from the U.S. Evidence-Based Practices Project. Implementation Science, 1, 13. https://doi.org/10.1186/1748-5908-1-13.

  31. McCarthy, P. T. (2014). The road to scale runs through public systems. Stanford Social Innovation Review, 12(2).

  32. McMorrow, S., Gates, J. A., Long, S. K., & Kenney, G. M. (2017). Medicaid expansion increased coverage, improved affordability, and reduced psychological distress for low-income parents. Health Affairs, 36(5), 808–818. https://doi.org/10.1377/hlthaff.2016.1650.

  33. Miles, P., Brown, N., & The National Wraparound Initiative Implementation Work Group. (2011). The wraparound implementation guide.

  34. Mitchell, T., & Eisendrath, S. J. (2002). Community mental health centers. In L. Breslow (Ed.), Encyclopedia of public health (Vol. 1, pp. 259–260). Macmillan Reference USA. https://link.gale.com/apps/doc/CX3404000209/GVRL?u=wash_main&sid=GVRL&xid=5d60f08b.

  35. Moullin, J. C., Dickson, K. S., Stadnick, N. A., Rabin, B., & Aarons, G. A. (2019). Systematic review of the exploration, preparation, implementation, sustainment (EPIS) framework. Implementation Science, 14(1), 1. https://doi.org/10.1186/s13012-018-0842-6.

  36. National Conference of State Legislatures. (2020). State partisan composition. Retrieved December 8, 2020, from https://www.ncsl.org/research/about-state-legislatures/partisan-composition.aspx#.

  37. National Council for Behavioral Health. (2013). Community Mental Health Act 50th Anniversary Timeline. https://www.thenationalcouncil.org/about/national-mental-health-association/overview/community-mental-health-act/50th-anniversary-timeline_final_for_mag-2/#foobox-1/0/50th-Anniversary-Timeline_final_for_mag.jpg.

  38. Pires, S. A. (2010). How states, tribes and localities are re-defining systems of care. Evaluation and Program Planning, 33(1), 24–27. https://doi.org/10.1016/j.evalprogplan.2009.05.008.

  39. Powell, B. J., Beidas, R. S., Lewis, C. C., Aarons, G. A., McMillen, J. C., Proctor, E. K., & Mandell, D. S. (2017). Methods to improve the selection and tailoring of implementation strategies. The Journal of Behavioral Health Services and Research, 44(2), 177–194. https://doi.org/10.1007/s11414-015-9475-6.

  40. Powell, B. J., Hausmann-Stabile, C., & McMillen, J. C. (2013). Mental health clinicians’ experiences of implementing evidence-based treatments. Journal of Evidence-Based Social Work, 10(5), 396–409. https://doi.org/10.1080/15433714.2012.664062.

  41. Powell, B. J., McMillen, J. C., Proctor, E. K., Carpenter, C. R., Griffey, R. T., Bunger, A. C., Glass, J. E., & York, J. L. (2012). A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review, 69(2), 123–157. https://doi.org/10.1177/1077558711430690.

  42. Powell, B. J., Waltz, T. J., Chinman, M. J., Damschroder, L. J., Smith, J. L., Matthieu, M. M., Proctor, E. K., & Kirchner, J. E. (2015). A refined compilation of implementation strategies: Results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science, 10(1), 1–14. https://doi.org/10.1186/s13012-015-0209-1.

  43. Proctor, E., Hooley, C., Morse, A., McCrary, S., Kim, H., & Kohl, P. L. (2019). Intermediary/purveyor organizations for evidence-based interventions in the US child mental health: Characteristics and implementation strategies. Implementation Science, 14(1), 3. https://doi.org/10.1186/s13012-018-0845-3.

  44. Purtle, J., Dodson, E. A., Nelson, K., Meisel, Z. F., & Brownson, R. C. (2018). Legislators’ sources of behavioral health research and preferences for dissemination: Variations by political party. Psychiatric Services, 69(10), 1105–1108. https://doi.org/10.1176/appi.ps.201800153.

  45. Purtle, J., Nelson, K. L., Counts, N. Z., & Yudell, M. (2020). Population-based approaches to mental health: History, strategies, and evidence. Annual Review of Public Health. https://doi.org/10.1146/annurev-publhealth-040119-094247.

  46. Purtle, J., Peters, R., & Brownson, R. C. (2016). A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007–2014. Implementation Science. https://doi.org/10.1186/s13012-015-0367-1.

  47. Ratzliff, A., Unützer, J., Katon, W., & Stephens, K. A. (2016). Integrated care: Creating effective mental and primary health care teams. Wiley.

  48. Rieckmann, T. R., Kovas, A. E., Cassidy, E. F., & McCarty, D. (2011). Employing policy and purchasing levers to increase the use of evidence-based practices in community-based substance abuse treatment settings: Reports from single state authorities. Evaluation and Program Planning, 34(4), 366–374. https://doi.org/10.1016/j.evalprogplan.2011.02.003.

  49. So, M., McCord, R. F., & Kaminski, J. W. (2019). Policy levers to promote access to and utilization of children’s mental health services: A systematic review. Administration and Policy in Mental Health and Mental Health Services Research. https://doi.org/10.1007/s10488-018-00916-9.

  50. Stroul, B. A., & Blau, G. M. (Eds.). (2008). The system of care handbook: Transforming mental health services for children, youth, and families. Paul H Brookes Publishing.

  51. Stroul, B. A., & Friedman, R. M. (1994). A system of care for children and youth with severe emotional disturbances (Rev. ed.). Georgetown University Child Development Center.

  52. Sugai, G., & Horner, R. (2002). The evolution of discipline practices: School-wide positive behavior supports. Child and Family Behavior Therapy, 24(1–2), 23–50. https://doi.org/10.1300/j019v24n01_03.

  53. United States Census Bureau. (2020). Income. Retrieved December 8, 2020, from https://www.census.gov/programs-surveys/acs/.

  54. Walker, J. S. (2017, June 12). A study of turnover among Wraparound care coordinators and supervisors. National Wraparound Initiative. https://nwi.pdx.edu/a-study-of-turnover-among-wraparound-care-coordinators-and-supervisors/.

  55. Walker, J. S., & Bruns, E. J. (n.d.). The impact of training and technical assistance for wraparound. http://www.nwi.pdx.edu/pdf/IOTTA-results.pdf.

  56. Walker, J. S., & Bruns, E. J. (2006). Building on practice-based evidence: Using expert perspectives to define the wraparound process. Psychiatric Services, 57(11), 1579–1585. https://doi.org/10.1176/ps.2006.57.11.1579.

  57. Watson, D. P., Adams, E. L., Shue, S., Coates, H., McGuire, A., Chesher, J., Jackson, J., & Omenka, O. I. (2018). Defining the external implementation context: An integrative systematic literature review. BMC Health Services Research, 18(1), 209. https://doi.org/10.1186/s12913-018-3046-5.

  58. Williams, N. J., Ehrhart, M. G., Aarons, G. A., Marcus, S. C., & Beidas, R. S. (2018). Linking molar organizational climate and strategic implementation climate to clinicians’ use of evidence-based psychotherapy techniques: Cross-sectional and lagged analyses from a 2-year observational study. Implementation Science, 13(1), 85. https://doi.org/10.1186/s13012-018-0781-2.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Author information

Corresponding author

Correspondence to Jonathan R. Olson.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

About this article

Cite this article

Olson, J.R., Azman, A., Estep, K.M. et al. Influences of Inner and Outer Settings on Wraparound Implementation Outcomes. Glob Implement Res Appl 1, 77–89 (2021). https://doi.org/10.1007/s43477-021-00008-1

Keywords

  • Outer settings
  • Wraparound
  • Implementation
  • Fidelity
  • Behavioral health