Background

Substance use is common among adolescent offenders and relates to delinquency, psychopathology, social problems, risky sex and sexually transmitted infections such as HIV, and other health problems [1, 2]. An estimated 70 % of arrested juveniles have had prior drug involvement [3] and over one-third have substance use disorders [4, 5]. Arrested youth initiate substance use earlier than other adolescents, leading to more problematic substance use and higher recidivism [6–8].

US juvenile courts processed 1,058,500 delinquency cases in 2013, with 31 % of cases adjudicated [9]. Most youth who come into contact with the juvenile justice (JJ) system are supervised in the community [10], and the proportion of youth under community supervision is increasing as states across the country seek alternatives to incarceration/detention [9, 11, 12]. Given the contribution of substance use to recidivism, JJ agencies are uniquely positioned to significantly impact public health through substance use identification and early intervention [13].

Because substance use services are generally provided outside the JJ system [14], cross-system linkage is necessary but often problematic [15–17]. Even when linkages are in place, some community service providers do not consistently offer evidence-based services [18]. Collaboration requires communication across agencies that have historically existed as silos, with distinct cultures and belief systems about the effectiveness and importance of substance use treatment [19–21]. This context offers an ideal opportunity for implementation science, as communities strive to better meet the needs of youth.

The JJ-TRIALS Cooperative

The Juvenile Justice—Translational Research on Interventions for Adolescents in the Legal System (JJ-TRIALS) is a cooperative research initiative funded by the National Institute on Drug Abuse (NIDA). Six research centers (RCs: Columbia University, Emory University, Mississippi State University, Temple University, Texas Christian University, University of Kentucky) and one coordinating center (CC: Chestnut Health Systems) were funded in July 2013. Each RC recruited one or more JJ Partners to participate in all planning and implementation activities from the outset. The JJ-TRIALS steering committee (SC: composed of principal investigators, JJ Partners, and a NIDA project scientist) was charged by NIDA with developing a study protocol that achieved two goals: (1) improving the delivery of evidence-based practices (EBPs) in community-based JJ settings and (2) advancing implementation science.

Collaboration and cooperation among JJ-TRIALS researchers, partners, and NIDA personnel are critical for study protocol development, refinement, adherence, and implementation. Each of these constituencies provides input on feasibility, utility, and scientific rigor. This approach ensures a study design that meets scientific and partner expectations, while also keeping feasibility in focus. JJ partners provide a real-world comprehensive understanding of the JJ system and its processes through study development, thus assuring a meaningful focus and increasing the study’s potential impact.

Developing the study protocol

The Study Design Workgroup focused on five goals during the development of the JJ-TRIALS protocol: (1) conceptualizing how substance use should be addressed through partnerships between JJ and behavioral health (BH) agencies, (2) identifying evidence-based tools for addressing substance use, (3) identifying a conceptual framework to understand the process of implementing changes, (4) using that framework to guide overall study design, and (5) testing two distinct strategies for implementing desired changes. The final study protocol conforms to a hybrid implementation design [22]. It examines organizational-level implementation outcomes and youth outcomes, using a mixed-methods approach [23]. Primary aims are to (1) improve the continuum of substance use services for juvenile offenders under community supervision and (2) test the effectiveness of two implementation strategies for promoting system-wide change.

The guiding evidence-based practices framework

Best practices for substance use treatment involve a logically sequenced continuum ranging from initial screening to placement and retention in appropriate care. The JJ-TRIALS Cooperative sought to specify how screening, assessment, service referral, and treatment services are interconnected in the identification of substance use problems and linkage to care. The design team developed a service cascade framework that captures the receipt of BH services and provides a unifying approach to guide site activities and study outcomes across a diverse set of sites with unique needs and goals.

The JJ-TRIALS Behavioral Health Services Cascade (hereinafter the Cascade) was modeled after the HIV care cascade, a widely used framework for depicting gaps in HIV surveillance and treatment [24–26]. The Cascade provides a data-driven framework for understanding how justice-involved youth move from JJ to community-based BH providers as substance use problems are identified and responses implemented. The Cascade is premised on the idea that the overlap between substance use problems and JJ contact necessitates screening of all youth who come into contact with the justice system [27, 28]. In an ideal system, a positive initial screen would lead to a more in-depth assessment and, if warranted, subsequent linkage to evidence-based care in the community. There are numerous evidence-based screening and assessment instruments [29, 30], various evidence-based treatment and prevention interventions [31], and promising interventions for linking youth to community-based providers [32, 33].

Evidence shows that the service continuum begins to break down at the initial step of screening in most JJ settings. A national survey of juvenile probation agencies revealed that only 47.6 % reported using a standardized tool to screen and/or assess substance use [17]. Furthermore, a typical specialty adolescent substance use treatment program only adopts about half of high-quality substance use care indicators and EBPs [34]. Figure 1 represents hypothetical data for the Cascade as youth transition across service systems, with each column representing the difference between ideal and actual levels of service delivery. Differences between ideal and actual levels represent problems related to identification, transition, and retention in care. Youth with substance use problems can only be engaged in appropriate treatment if their needs are identified.

Fig. 1 Hypothetical retention in the Cascade as youth transition across service systems

Although the Cascade serves as a framework for setting goals around improved evidence-based practice, the study protocol allows sites to choose where on the Cascade they will focus their improvement efforts. This degree of agency-level autonomy recognizes that different EBPs will “fit” better across different agencies (i.e., address the needs of youth, work within constraints of the system). Each agency, informed by data and best practices, sets its own goals for reducing service gaps. The study protocol uses a series of menus of evidence-based screening and assessment tools and treatments to help guide these decisions, but does not dictate that sites focus on a specific point on the Cascade or a particular EBP.

The guiding implementation science framework

The Exploration, Preparation, Implementation, Sustainment (EPIS) framework of Aarons and colleagues guides the design of this study [35]. Consistent with models of quality improvement in healthcare systems [36], EPIS considers the multilevel nature of service systems, the organizations within systems, and client needs during the process of implementing a new intervention. The EPIS model posits four phases of organizational processes during system change. The Exploration Phase involves identification of the problem, appropriate evidence-based solutions, and factors that might impact implementation. Once a proposed solution is identified for adoption, the Preparation Phase begins. This phase involves bringing together stakeholders in a planning process [37], which can be complex, depending on the number of stakeholders and potentially competing priorities and needs [38]. The Implementation Phase begins when initiating change-related activities. Factors affecting implementation include outer context political and funding concerns, inner organizational context issues (e.g., fit with clinician productivity and work demands), and consumer concerns (e.g., applicability of practices for client needs) [39]. When the new practice is routinely used, the Sustainment Phase begins. Sustainment may be facilitated by the degree to which the new services or changes are institutionalized at different levels in the service setting (i.e., system, organizations).

The Cooperative has adapted EPIS to address the complex context within which the JJ-TRIALS study occurs. First, EPIS has typically been applied to the implementation and adoption of one specific EBP [40]. In JJ-TRIALS, sites are asked to select a target goal from the Cascade and implement an EBP that addresses that goal. Thus, each study site could potentially implement a different EBP. Second, while the linear nature of EPIS guides the general design (timing of implementation strategies and measurement), it also implies a dynamic process. In the current study, sites are taught to use data to inform implementation decisions through the application of rapid-cycle testing [41–43]. With each “test,” there are subsequent periods of exploration (e.g., what worked, what went wrong), preparation (e.g., modifications to the original plan), and implementation (e.g., enacting the revised plan). JJ-TRIALS is designed to capture these activities to explore and refine the EPIS model.

Methods/Design

Selecting the implementation interventions

Implementation studies have typically focused on a single evidence-based intervention [44–46], a specific set of best practices [47, 48], generic best practices [49], or a single evidence-based instrument [50]. Few studies have focused on outcomes that cross service system sectors of care [44]. Head-to-head organizational comparative effectiveness trials are rare, in part because the resources needed to execute them often exceed those available in a typical National Institutes of Health (NIH) grant. In JJ-TRIALS, several discrete implementation strategies were combined and manualized to address organizational and system barriers [51]. This effort leverages the resources of the RCs and the practical guidance of JJ partners to field a multisite, direct comparison of implementation strategies in a relatively large sample of sites.

The JJ-TRIALS protocol compares two novel implementation interventions that combine several implementation strategies with demonstrated efficacy. These strategies include a needs assessment [52], ongoing training and education [37, 53], local change teams with facilitation [54, 55], and data-driven decision-making [56, 57]. The basic approach compares a Core set of intervention strategies to an Enhanced set that incorporates all Core components plus active facilitation. Across both study conditions, data-driven decision-making serves as a common thread.

Data-driven decision-making (DDDM)

According to the JJ-TRIALS partners, most JJ departments are encouraged to use data to inform decisions, yet few JJ agencies are adequately skilled and resourced in doing so. A number of recent JJ initiatives such as the MacArthur Foundation’s Models for Change [58] have emphasized the importance of making data-informed policy choices. Focusing on systematic data collection, synthesis, and interpretation can help agencies to transform the ways they address problems and implement future change. In design discussions, JJ partners questioned whether providing tools and training would be sufficient or whether a guided “mentoring” approach would be needed to enact system-wide change using DDDM.

DDDM is the process by which key stakeholders collect, analyze, and interpret data to guide efforts to refine or reform a range of outcomes and practices [59]. In JJ settings, DDDM has been used to guide system-wide reform to reduce recidivism and system costs while improving related outcomes such as public safety and access to evidence-based services [60–62]. In one instance, DDDM was associated with a doubling, over 5 years, of the proportion of youth accessing EBPs while arrest rates fell by almost half [58]. This approach has the potential to address unmet substance use treatment needs for JJ-involved youth.

Implementation intervention components

The two sets of implementation intervention strategies tested in JJ-TRIALS are additive (see Table 1 for a description of Core and Enhanced components). The Core condition includes five interventions implemented at all sites during the 6-month baseline period (see timeline below): (1) JJ-TRIALS orientation meetings, (2) needs assessment/system mapping, (3) behavioral health training, (4) site feedback report, and (5) goal achievement training. Following the baseline period, two additional Core components are delivered to all sites: (6) monthly site check-ins and (7) quarterly reports. As part of goal achievement training, sites receive assistance in using their site feedback report to select goals that meet their local needs. Sites are trained to use data to inform decisions (e.g., selecting a goal, applying plan-do-study-act cycles) and to use DDDM templates and tools (developed as part of the project) to plan and implement proposed changes. While DDDM principles are expected to facilitate change, organizations may need additional support to apply these principles to their improvement efforts during the implementation phase. The Enhanced condition adds continuing support for the use of DDDM tools through research staff facilitation of DDDM over 12 months and formalized local change teams (LCTs) featuring representation from the JJ agency and a local BH provider (with meetings facilitated by research staff). Figure 2 depicts how the selection and timing of specific components were informed by EPIS.

Table 1 Description of Core and Enhanced intervention components
Fig. 2 Selection and timing of Core and Enhanced components

Study design

The design uses a cluster randomized trial with a phased rollout to evaluate the differential effectiveness of the Core and Enhanced conditions in 36 sites (18 matched pairs; see below) in 7 states. The design features randomization to one of two conditions, randomization to one of three cohorts (with start times spaced 2 months apart), the inclusion of a baseline period in both experimental conditions, and data collection at regular intervals (enabling time series analyses; see Fig. 3). In addition to comparing the two implementation conditions, the design allows sites to serve as their own controls through an interrupted time series design in which the baseline period serves as an existing-practice control. This enables three time series comparisons: (1) baseline (“activities as usual”) versus Core, (2) baseline versus Enhanced, and (3) Core versus Enhanced.
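As a rough illustration of the interrupted time series logic, the sketch below fits a segmented regression to a single simulated site's bi-weekly series, estimating a level and slope change once the baseline period ends. The data, variable names, and period counts are illustrative assumptions, not study code.

```python
# Minimal sketch (simulated data): segmented regression treating the baseline
# period as the site's own control, as in an interrupted time series design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_periods = 36                             # bi-weekly observations (hypothetical)
t = np.arange(n_periods)
phase = (t >= 12).astype(int)              # 0 = baseline, 1 = post-baseline
# Simulated outcome: % of screened youth referred onward, with a level shift
rate = 40 + 0.2 * t + 8 * phase + rng.normal(0, 3, n_periods)

df = pd.DataFrame({"t": t, "phase": phase,
                   "t_since": np.maximum(t - 12, 0), "rate": rate})
# phase = level change; t_since = slope change relative to the baseline trend
model = smf.ols("rate ~ t + phase + t_since", data=df).fit()
print(model.params)
```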

Fig. 3 JJ-TRIALS study design

Primary research questions include:

  1. Does the Core and/or Enhanced intervention reduce unmet need by increasing Cascade retention related to screening, assessment, treatment initiation, engagement, and continuing care?

  2. Does the addition of the Enhanced intervention components further increase the percentage of youth retained in the Cascade relative to the Core components?

  3. Does the addition of the Enhanced intervention components improve service quality relative to Core sites?

  4. Do staff perceptions of the value of best practices increase over time, and are increases more pronounced in Enhanced sites?

The study also includes exploratory research questions. Examples include: How do sites progress through EPIS phases with and without facilitation? Are Enhanced sites more successful in implementing their chosen action plans, achieving greater improvement in cross-systems interagency collaboration, and experiencing greater reductions in 1-year recidivism rates? Is one condition more cost-effective than the other? And how do inner and outer context measures (e.g., system, organizational, staff) moderate relationships between experimental conditions and service outcomes?

Sample

The sample includes 36 sites, with each site composed of one JJ agency and one or two BH agencies (at least 72 participating organizations overall). Sites were matched into pairs within state systems (based on local population, number of youth referred to JJ, number of staff, and whether EBPs are used). JJ agencies include probation departments (in six states) or drug courts (in one state); BH providers include substance use treatment providers within a county or service region. JJ inclusion criteria were (a) ability to provide youth service records, (b) service to youth under community supervision, (c) access to treatment provider(s) if treatment is not provided directly, (d) a minimum average case flow of 10 youth per month, (e) a minimum of 10 staff per site, and (f) a senior JJ staff member who agreed to serve as site leader/liaison during the study. Study sites are geographically dispersed and were identified by state JJ agencies (and not selected for particular substance use or related BH service needs).

At the beginning of the project, each site forms an Interagency Workgroup composed of 8–10 representatives from JJ and BH agencies. Recommended composition includes representatives of JJ leadership (e.g., Chief Probation Officer), BH leadership (e.g., Program Director), other JJ and BH agency staff, and other key stakeholders likely to be involved in improvement efforts (e.g., Juvenile Court Administrator, JJ Data Manager).

At least 360 staff members from participating JJ and BH agencies are expected to participate in one or more study activities. Information from at least 120 individual youth case records per site is de-identified and extracted from site data files on a quarterly basis throughout the study period (a minimum sample of 4320 de-identified service records). Interagency workgroup participation, staff survey responses, and youth records are nested within sites.

Recruitment and consenting

Partners facilitated identification and recruitment of JJ agencies. RC staff described study involvement and worked with JJ leadership to identify and recruit the BH partner. JJ agency leadership provided signed letters of commitment and, if required by agency policy or state law, court orders authorizing RC access to confidential juvenile case records. Individual staff recruitment occurs immediately after each leadership and line staff orientation meeting. During orientations, all aspects of the research study are explained and informed consent is obtained from participants, consistent with institutional review board (IRB) protocols at each RC.

Randomization

The design features two stages of randomization: (a) to one of three start times as part of a phased rollout and (b) to the Core or Enhanced condition. The CC was responsible for all randomization procedures. For the first stage, RCs were used as strata, and the six county sites within each RC were matched into pairs on the number of youth (ages 10–19) in the county based on the 2010 census, the number of youth entering community supervision, the number of community supervision staff, and whether standardized screeners/assessments and evidence-based treatments were used. Each RC PI reviewed the matches and ensured comparability prior to randomization. Within each RC, the three resulting pairs were then randomly assigned to one of three start times using a random number generator in Excel. This procedure was used to smooth out the logistical burden of implementation and to control for the influence of other exogenous factors [63, 64].

For the second stage of randomization, one site in each pair was randomly assigned to Core and the other to Enhanced. Given that there were only 18 pairs of sites, “optimal” randomization was used to find the most balanced pattern of assignment across RCs. This approach involved running 10,000 permutations of the possible assignments of sites within each pair to condition. For each permutation, a multivariate Hotelling's T² statistic was computed to assess the degree of balance on cohort and condition both within and across all RCs. The final randomization design was selected from the pool of the top 2 % of permutations balancing across all characteristics.
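The balancing step can be sketched as follows: randomly flip which site in each matched pair receives the Enhanced condition, score each draw's covariate balance with a two-sample Hotelling's T², and retain the most balanced draws. This is a simplified illustration on simulated covariates; it keeps the single best draw rather than sampling from the top 2 %, and all names and numbers are hypothetical.

```python
# Sketch of permutation-based "optimal" randomization (simulated covariates).
import numpy as np

rng = np.random.default_rng(42)
n_pairs, n_covs = 18, 4
# Rows 2i and 2i+1 are the two sites in pair i; columns stand in for covariates
# such as youth population, case flow, staff count, and EBP use.
X = rng.normal(size=(2 * n_pairs, n_covs))

def hotelling_t2(a, b):
    """Two-sample Hotelling's T^2; smaller values indicate better balance."""
    n1, n2 = len(a), len(b)
    diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = ((n1 - 1) * np.cov(a.T) + (n2 - 1) * np.cov(b.T)) / (n1 + n2 - 2)
    return n1 * n2 / (n1 + n2) * diff @ np.linalg.solve(pooled, diff)

best_flips, best_t2 = None, np.inf
for _ in range(10_000):
    flips = rng.integers(0, 2, n_pairs)        # which pair member is Enhanced
    enh = np.array([2 * i + f for i, f in enumerate(flips)])
    core = np.array([2 * i + (1 - f) for i, f in enumerate(flips)])
    t2 = hotelling_t2(X[enh], X[core])
    if t2 < best_t2:
        best_flips, best_t2 = flips, t2

print(f"most balanced draw: T^2 = {best_t2:.3f}")
```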

The study is also double-blinded in that neither RC staff nor county site staff are aware of a site's assignment until after both sites in a pair have completed the Core components. Once the Core components are completed, the condition of both sites is revealed by the CC PI to the RC PI and, subsequently, to the sites. This design aspect is ideal in studies with multiple sites that have initial variability and require intensive researcher-driven activities such as training, monitoring, or coaching.

Power

For service record level hypotheses, 2160 bi-weekly observations are expected on service delivery outcome measures (36 sites × 60 bi-weekly periods). For site-level hypotheses, 72 observations are expected (36 sites × 2 data collection points), and for staff-level hypotheses, a minimum of 1440 observations are expected, with 720 per condition (an average of 10 staff × 36 sites × 4 time points). The effective n for power calculations in repeated measures analysis varies between a lower bound of the number of unique sites (N = 36) and an upper bound of the total number of observations (O = 1440 staff surveys or 2160 bi-weekly youth record periods), as a function of the intraclass correlation coefficient (ICC) associated with the outcome measure (e.g., number of youth entering treatment) over time and the number of repeated measures per site. Assuming that the ICC is low (.2 or less), effect sizes in the small to moderate range (.25 to .35) should be detected with 80 % or more power [65]. Several strategies are employed to further increase power: (a) optimal randomization to distribute the 36 sites as evenly as possible across start-up time and condition, (b) use of standardized measures to reduce measurement error, and (c) modeling site differences as a function of staff and organizational covariates.
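The dependence of the effective n on the ICC can be illustrated with the standard design-effect formula, n_eff = N × m / (1 + (m − 1) × ICC) for m repeated observations in each of N sites. The following is a back-of-envelope sketch with illustrative ICC values, not the study's power analysis.

```python
# Back-of-envelope effective sample size under the design effect
# DEFF = 1 + (m - 1) * ICC; site and period counts follow the text.
def effective_n(n_sites: int, obs_per_site: int, icc: float) -> float:
    deff = 1 + (obs_per_site - 1) * icc
    return n_sites * obs_per_site / deff

for icc in (0.05, 0.10, 0.20):
    # 36 sites x 60 bi-weekly periods = 2160 service-record observations
    print(f"ICC={icc:.2f}: effective n = {effective_n(36, 60, icc):.0f}")
```

As the ICC approaches 1, the effective n shrinks toward the 36 unique sites (the lower bound); as it approaches 0, all 2160 observations contribute (the upper bound).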

Measurement

A multilevel approach to measurement is necessary for our understanding of change processes within complex service systems [36, 66]. Youth interact with JJ agency staff who work within a larger organization; in turn, the organization operates within a system that includes BH providers, oversight agencies, and funders. The proposed measurement plan assesses information from these levels.

The design employs three data collection periods: baseline (6 months; generally corresponding to EPIS' Exploration and Preparation phases), experiment (12 months; corresponding to EPIS' Implementation phase), and post-experiment (6 months; corresponding to EPIS' Sustainment phase). Figure 4 includes a timeline depicting all intervention components (top portion) and data collection (bottom portion) for sites in wave 1. During baseline, RCs initiate collection of de-identified youth records data related to the Cascade dating back to October 1, 2014, administer agency surveys, conduct a local needs assessment (systems mapping exercise and group interview with interagency workgroup members), and administer leadership and line staff surveys at participating agencies. Leadership and line staff complete follow-up surveys during months 2 and 12 of the experiment period and again at month 6 of the post-experiment period. A representative from each site reports progress toward site-selected goals (i.e., implementation activities) during a monthly site check-in phone call. In the Enhanced condition, local change team members complete implementation process surveys during the experiment period. The 6-month post-experiment period consists only of data collection, including youth record extraction, agency and staff surveys, a group interview (to determine whether sites sustain new practices), and monthly site check-in calls. Data collection components are summarized in Table 2.

Fig. 4 Timeline depicting intervention components (top portion) and data collection (bottom portion) for sites in wave 1

Table 2 Data collection components

Fidelity

The JJ-TRIALS Cooperative seeks to manage fidelity by balancing adherence to central elements of the implementation interventions and timely submission of research data with flexibility in addressing diverse site needs. This approach to fidelity aims to address the domains described by Proctor and colleagues [67] with regard to protocol adherence, dose/exposure, and quality. Protocol adherence is fostered by the provision of pre-implementation training activities to key principals (e.g., facilitators) along with the review of critical resources (e.g., detailed instructional manuals, preparation checklists). As implementation ensues, fidelity is further measured by RC-level reporting of the actual date of each study activity relative to its targeted completion date. The Timeline Compliance system tracks key elements of dose, such as the number of attendees at specific trainings [44]. Each implementation intervention has fidelity procedures that provide additional detail regarding adherence, dose, and quality. Procedures include automated reporting (e.g., online BH training sessions), observational ratings (e.g., webinar BH training sessions), facilitator-reported fidelity ratings (e.g., goal achievement training), and participant ratings (e.g., local change team meetings).
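A minimal sketch of the timeline-compliance idea follows, flagging activities whose actual completion date lags the target; the activity names and dates are invented for illustration and do not reflect the project's actual tracking system.

```python
# Hypothetical timeline-compliance check: actual vs. targeted completion dates.
from datetime import date

activities = [
    ("BH training", date(2015, 6, 1), date(2015, 6, 10)),
    ("Site feedback report", date(2015, 7, 15), date(2015, 7, 14)),
    ("Goal achievement training", date(2015, 8, 1), date(2015, 8, 1)),
]
for name, target, actual in activities:
    lag = (actual - target).days
    status = "on time" if lag <= 0 else f"{lag} days late"
    print(f"{name}: {status}")
```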

Hypotheses

Table 3 summarizes the primary hypotheses corresponding to the research questions above. H1 and H2 focus on retention in the Cascade: H1 compares both experimental conditions to their respective baseline periods, whereas H2 compares the differential effectiveness of Core versus Enhanced sites. Table 3 shows the working definition and formula for the rate at each step of the Cascade (see Fig. 1), designed to map onto existing and widely used performance metric systems (the Center for Substance Abuse Treatment adolescent treatment branch and National Outcome Monitoring System; the Network for the Improvement of Addiction Treatment (NIATx); the National Quality Forum; the Office of the National Coordinator of Healthcare Reform; and the Washington Circle Group). Each rate is the number of youth within a site receiving the service at that step divided by the number reaching the previous step, with dashed lines highlighting changes in the denominator.
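To make the changing-denominator logic concrete, the sketch below computes conditional retention rates for a hypothetical set of Cascade counts; the step labels and numbers are illustrative and do not reproduce the Table 3 definitions.

```python
# Each rate conditions on the prior step: n_step / n_previous_step (simulated counts).
steps = [
    ("Screened", 1000),
    ("Identified in need", 520),
    ("Assessed", 360),
    ("Referred to treatment", 240),
    ("Initiated treatment", 150),
    ("Engaged in treatment", 90),
]
for (name, n), (_, n_prev) in zip(steps[1:], steps[:-1]):
    print(f"{name}: {n}/{n_prev} = {n / n_prev:.0%}")
```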

Table 3 JJ-TRIALS research questions and hypotheses

Latent growth curve modeling (LGCM) in MPLUS [68] will be used to test H1 and H2. A significant change in slope between the baseline and experimental periods (H1) or between the Core and Enhanced conditions (H2) would suggest that the intervention affected the growth curve. This analysis will be repeated for each targeted outcome measure in the Cascade. To the extent that there are site differences, data can be analyzed within sites using non-parametric simplified time series (STS) analysis [69, 70]. MPLUS will also allow examination of time-varying covariates to determine whether early implementation activities have significant effects on later time points.

H3a utilizes bi-weekly intake cohorts and tests whether percentages of youth meeting “timing targets” differ significantly between the 18 Enhanced and the 18 Core sites. Records data include dates to allow examination of time between various points in the Cascade (see Table 4). Trends can be examined over time using simplified time series analysis. H3b and H3c are considered exploratory, using data from agency surveys and needs assessment group interviews (measured twice: baseline and end of experiment; see Table 5). Survey content is derived from the JJ-TRIALS National Survey (developed by Chestnut Health Systems and administered in 2014). Group interviews (recorded and transcribed) generate descriptive detail on the entire Cascade, including system capacities, referral processes, the nature and use of screening instruments, the quality of available services, and features in the inner and outer contexts of agencies likely to influence service delivery.

Table 4 Measures from de-identified records corresponding to the Behavioral Health Service Cascade
Table 5 Service cascade: crosswalk of quantitative and quality measures

H4 examines staff perceptions of the value of services along the Cascade. Table 6 describes domains and sample items. Analyses will focus on change in staff responses cross-sectionally over time, using staff nested within agency. Hierarchical linear modeling (HLM) [71] will serve as the basic analysis paradigm in which Enhanced and Core sites are compared. Growth modeling may be appropriate since measures will be collected approximately every 6 months, and it is expected that the groups will be equivalent at baseline. MPLUS can be used to analyze these data using “known class” as an indicator of implementation condition in a multigroup analysis (e.g., linear growth curve modeling). Time-invariant and time-varying covariates that may differentially affect the growth curves of the two implementation conditions will be examined. Should growth model specification not fit the data, multilevel regression modeling will be used.
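As a hedged stand-in for the multilevel comparison described here (a linear mixed model on simulated data, rather than the study's actual HLM or MPLUS models), the sketch below nests staff observations within sites and tests whether the trend in perceptions differs by condition via the wave × condition interaction.

```python
# Sketch: staff survey scores nested within sites; random site intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for site in range(36):
    cond = site % 2                        # 0 = Core, 1 = Enhanced (hypothetical)
    site_eff = rng.normal(0, 0.3)
    for staff in range(10):
        staff_eff = rng.normal(0, 0.2)
        for wave in range(4):              # surveys roughly every 6 months
            score = (3.0 + site_eff + staff_eff
                     + (0.05 + 0.10 * cond) * wave + rng.normal(0, 0.4))
            rows.append({"site": site, "cond": cond, "wave": wave, "score": score})
df = pd.DataFrame(rows)

m = smf.mixedlm("score ~ wave * cond", df, groups="site").fit()
print(m.summary())                         # wave:cond tests differential change
```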

Table 6 Staff survey domains and example items

Trial status

Feasibility testing

Feasibility testing was conducted in Spring 2015 in three sites not participating in the main study. Study protocol components tested included staff orientations, BH and goal achievement training content, data collection procedures for the needs assessment and baseline staff surveys, content and format of the site feedback report and DDDM templates, and elements of the Enhanced intervention (facilitation, LCT meetings). Information gleaned from feasibility sites was gathered in a systematic format and shared weekly with the Study Design Workgroup. As modifications to content and presentation formats were made, revised protocols were tested in other feasibility sites. Recommended modifications were reviewed and approved by the Steering Committee in September 2015. The extensive testing of all materials, trainings, and procedures in multiple sites helped ensure that anticipated variability across the 36 main study sites was accounted for and addressed.

Main trial

Thirty-six sites from seven states were recruited between January and December 2014. RCs began working with their six respective sites to obtain de-identified records in the Fall of 2014. In February 2015, sites corresponding to each RC were paired and randomized to one of three start times. After agency surveys were completed (November 2015), one site from each of the 18 pairs was randomized to the Core (n = 18) or Enhanced (n = 18) study condition. The study began in wave 1 sites in April 2015, with waves 2 and 3 beginning in June and August, respectively.

Discussion

The JJ-TRIALS protocol, developed through a collaborative partnership among NIDA, researchers, and JJ partners, has the potential to impact the field of implementation science as well as JJ and BH service systems in significant ways.

Implementation science innovations

The engagement of JJ partners as collaborators throughout study design, implementation, and interpretation of results has been key to JJ-TRIALS. Active involvement of JJ partners in decisions is essential in designing a study that is both scientifically sound and grounded in the realities confronting the system. For JJ partners, involvement has created a sense of ownership, enhancing the likelihood that interventions are adopted and sustained.

There is great complexity in interactions between the JJ system and community service providers. The problem-solving orientation inherent in EPIS [35] is valuable in understanding the myriad factors that may affect system change: outer context issues, inner context organizational issues, and consumer concerns. These factors become the leverage points for effectively intervening to promote durable system change. EPIS is also fruitful as a framework for developing implementation strategies. The linear phases provide a platform for content and timing of intervention strategies and measurement, yet the dynamic aspect of EPIS suggests recursive movement through those phases as agencies assess and modify implementation efforts. JJ-TRIALS utilizes these strengths of EPIS and builds on current approaches to measuring process improvement [44].

DDDM is another innovative component that is compatible with the needs of researchers who rely on data for evaluating study activities and JJ partners who rely on data to demonstrate accountability to data-driven goals. Participants are trained in applying data-informed strategies using a blended learning approach [72] to facilitate the use of evidence-based practices in identifying and addressing youths’ service needs. Process mapping [73] helps identify addressable gaps in cross-systems service integration. Moreover, reliance on information already captured in sites’ service record data (both electronic and paper formats) allows tracking of the downstream changes resulting from implementation activities.

Finally, JJ-TRIALS efforts (from both quality improvement and evaluation perspectives) are aimed at the entire Cascade, from identification of need (screening and clinical assessment), through linkage to care, to retention in treatment. While the JJ system has made progress in the past two decades in establishing procedures for the identification of BH needs [74], far less attention has been paid to the implementation of sound procedures for addressing those needs [33]. JJ-TRIALS uses a hybrid measurement model [22] that incorporates measurement of these Cascade-related outcomes at multiple levels: systems, agencies, staff, and youth.

Challenges and potential solutions

Several challenges inherent in developing a complex multisite protocol with multiple levels of measures and hypotheses became apparent as the JJ-TRIALS SC prepared to launch this protocol. First, to test H1, and to introduce local site leadership and staff to the basic concepts and components of the study, a baseline period was established in which data on current services and staff/organizational factors could be collected. Engaging sites in orientation and data collection activities while seeking to ensure that sites did not prematurely begin to address gaps in the Cascade presented a practical challenge.

A second challenge relates to the feasibility of implementing the complex protocol, both for the RCs and participating agencies. With six geographically separated sites per RC, simultaneously initiating the study in all sites would have presented a substantial burden that might have resulted in incomplete or poor implementation of study components. Accordingly, the design included a phased rollout (similar to a stepped wedge design) [64, 75], in which one-third of the matched site pairs were randomly assigned to begin the study in each of three waves, 2 months apart.

Another key concern reflects challenges in meeting the needs and expectations of complex, dynamic service systems while maintaining fidelity to the study protocol. Because JJ agencies face a number of competing priorities and resource constraints, RCs must be sensitive to these issues and maintain flexibility in the study timetable to maintain buy-in among stakeholders. Yet, consistent implementation across sites and across RCs is essential for internal validity. Therefore, flexibility was built into the intervention to allow for variability. Extensive fidelity procedures were developed, including pre- and post-implementation checklists for each intervention component, fidelity monitoring of trainings and facilitation, and monthly facilitator learning circle calls. Each emphasizes “fidelity with flexibility”—keeping to the written protocol to the best of the RC’s ability, while being responsive to the specific needs, preferences, and constraints of the site whenever possible.

Data quality has also proven to be a challenge. As anticipated, wide variability exists in the quality of data available to populate the Cascade. Some sites maintain electronic systems and routinely capture most Cascade elements, while others primarily utilize paper records. Even when data are available electronically, validity can be questioned (e.g., missing values could reflect absence of a service or failure to record a service). RCs have worked closely with sites to ensure adequate and appropriate data, including sending research staff to the site to manually extract records or providing assistance to JJ agencies in developing/modifying electronic systems. In this regard, JJ-TRIALS is likely to facilitate improved data collection within participating sites, addressing existing gaps in justice agencies’ ability to track and report youth outcomes [76].

Conclusions

Through a collaborative partnership among researchers, JJ partners, and NIDA, JJ-TRIALS is incorporating several implementation strategies and the EPIS framework to address unmet substance use treatment needs among juveniles under community supervision. Although such a complex implementation study presents challenges, the protocol is expected to provide important insight regarding the efficacy of implementation interventions to improve BH services in a multi-system context, a test of the utility of EPIS for measuring and assessing organizational and systems changes, the use of a new Cascade framework for analyzing youth data on substance use services, and the ability of JJ and BH agencies to use data-driven decision making to achieve system change. Increasing the use of evidence-based practices for identifying, referring, and treating youth with substance use problems will improve both public health and public safety and provide new tools and strategies for JJ agencies and their BH partners to use when addressing other organizational and system improvements.

Ethical approval

IRB approval was granted by all six research institutions and the coordinating center.