This manuscript adheres to the Standards for Reporting Implementation Studies (StaRI) Statement (Additional file 1) [50].
The Adolescent and child Suicide Prevention in Routine clinical Encounters (ASPIRE) trial is a hybrid type III effectiveness-implementation trial [51] with a longitudinal cluster randomized design [52,53,54]. We will answer questions related to implementation strategy effectiveness in 32 pediatric and/or family medicine practices (henceforth referred to as "pediatric practices") nested within two health systems in the Mental Health Research Network (MHRN), a National Institute of Mental Health-funded practice-based research network of 21 health systems. This study will be conducted in Henry Ford Health System (HFHS) and Kaiser Permanente Colorado (KPCO). During the active implementation period, all 32 pediatric practices in the two health systems will receive S.A.F.E. Firearm materials, including brief training and cable locks. Half of the practices (k = 16) will be randomized to receive Nudge; the other half (k = 16) will be randomized to receive Nudge plus 1 year of facilitation to target additional clinician and practice implementation barriers (Nudge+). Trial recruitment will start in 2022.
Regulatory approvals
The ASPIRE trial was registered on ClinicalTrials.gov on April 14, 2021 (NCT04844021). The University of Pennsylvania institutional review board (IRB) serves as the single IRB (sIRB); reliance agreements were completed by both participating health systems. The study was approved on December 2, 2020 (#844327). The study is overseen by a data safety and monitoring board (DSMB) composed of experts in implementation science methods, suicide prevention, and firearm safety promotion. The DSMB had an introductory meeting in February 2021 and will convene annually.
Study team and governance
The study team includes an interdisciplinary group of researchers, clinicians, and health system stakeholders with expertise in implementation science, behavioral economics, firearm safety promotion, suicide prevention, biostatistics, mixed methods, and pediatric clinical care. The following consultants also contribute expertise to the study: the original developer of Safety Check, the developer of the hybrid design approach, and firearm safety experts (i.e., master firearm safety course instructors) who provide perspectives on the broader firearm landscape to ensure ecological validity of the work.
Implementation framework, targets, and mechanisms
Our research is guided by two implementation science frameworks: the Proctor et al. framework and the Consolidated Framework for Implementation Research (CFIR) [40, 55]. The Proctor et al. framework guides the relationship between our implementation strategies and implementation outcomes, shown in Fig. 1. Fidelity, operationalized as parent-reported clinician delivery of the two components of S.A.F.E. Firearm (brief counseling on firearm safe storage and offering cable locks), is the primary study outcome. Secondary outcomes include reach (EHR-documented program delivery) and acceptability (i.e., parent- and clinician-reported acceptability via online survey) of S.A.F.E. Firearm, as well as implementation strategy costs [55]. CFIR guides our understanding of mechanisms related to inner setting factors (i.e., clinician and practice factors) that may mediate and/or moderate the relationship between implementation strategies and fidelity. Our primary mechanism of interest is practice adaptive reserve, a self-report practice-level measure composed of six factors: relationship infrastructure, facilitative leadership, sense-making, teamwork, work environment, and culture of learning.
Study aims and approach
Setting
We will conduct the proposed study in two geographically diverse MHRN systems that serve urban, suburban, and rural communities to maximize the generalizability of our findings. HFHS (Michigan) covers the Detroit metro area and serves over 1.25 million patients per year, 38% of whom are racial or ethnic minorities. This is important given evidence of racial and ethnic disparities in suicide generally [56, 57] and in firearm injury and mortality specifically [57, 58]. HFHS includes seven hospitals and more than 50 ambulatory care practices, 14 of which are pediatric practices. Our second partner, KPCO, serves approximately 600,000 members across Colorado in urban, suburban, and rural communities. It has 27 ambulatory care practices, including 24 pediatric practices (some stand-alone, some within multi-specialty clinics), from which we will purposively select 18 representative practices to participate. Thus, we will include 32 practices across the two sites (see Fig. 2 for the CONSORT diagram). Both health systems use the Epic electronic health record system. Recent estimates indicate that 45% of households in Colorado and 40% of households in Michigan own firearms [59], putting Colorado above the national average of ownership [31].
Participants
Participants will include parents of youth seen in pediatric primary care, pediatric and family medicine clinicians (hereafter referred to as “clinicians”), and health system leaders. Clinicians delivering the S.A.F.E. Firearm program will include physicians (MD, DO) and advanced practice clinicians (nurse practitioner, physician assistant) who regularly conduct well-child visits with children and work in pediatric or family medicine departments.
Parents of youth seen in pediatric primary care
We will include parents and/or legal guardians (hereafter referred to as "parents") at participating pediatric practices who have a child aged 5–17 years attending a well-child visit; at least one parent must attend the visit to be eligible. Our target age range reflects the fact that suicide is the second leading cause of death among youth ages 10 and over [60], and rates are increasing among youth ages 5–12, particularly among Black or African American children [61,62,63]. Our upper limit is based on the age at which most young people transition out of pediatrics in the participating health care systems. To optimize ecological validity, there are no exclusion criteria. We expect approximately 58,866 eligible youth over the course of one year.
Clinicians and health system leaders
There are currently 137 physicians and 14 non-physician clinicians within the two systems who see young people within pediatrics and family medicine. Leaders (n = 20) include practice and department chiefs and health plan directors.
Evidence-based practice/intervention
Safety Check was developed using social cognitive theory [64] and takes a harm reduction approach, meeting parents where they are with regard to their storage behavior [65, 66]. For this study, we will deploy an adapted version of Safety Check that maintains the key components of the original intervention (i.e., counseling and offering a cable lock) [37, 67] but extends its reach and acceptability [19, 38, 41]. Drawing on the ADAPT-ITT framework [42], we collaborated with parents, firearm safety experts, clinicians, and health system leaders [19, 38, 41, 43] to adapt Safety Check to reach a broader age group (i.e., youth < 18) and to serve as a universal suicide prevention strategy in pediatric primary care. Parents were involved in selecting the program's name and logo (see Fig. 3); the program has been renamed S.A.F.E. Firearm. Both firearm-owning and non-firearm-owning parents reported high acceptability of the adapted program [68, 69].
Implementation strategies
Prior to randomization, all 32 practices will receive S.A.F.E. Firearm materials and training. Clinicians will be strongly encouraged by pediatric leadership to complete a brief online training prior to trial launch [70, 71]. The training video will include targeted information on how to counsel parents about firearm safety using motivational interviewing, an evidence-based approach that takes a nonjudgmental stance.
Nudge
All participating practices will receive the Nudge, which will be delivered via the EHR. During the study's preparation phase, we will work with pediatric practice leadership and Epic information technology specialists to refine the design and functioning of the Nudge, prototyping and piloting it to ensure it is consistent with current workflow, effective, and unobtrusive. We have decided to use an EHR SmartList: a pre-defined list of choices that users can select with the mouse or keyboard, which is particularly helpful for documenting values a clinician must enter repeatedly, thus saving time and keystrokes. SmartLists are already used for other types of visit documentation in both health systems, so clinicians are familiar with their functionality. We will add a default SmartList to the standard "Well-Child Visit" documentation template to serve as the Nudge and to allow tracking of S.A.F.E. Firearm implementation. The clinician will be asked to select a value from a drop-down list (e.g., "Discussed safe firearm storage" or "Did not discuss safe firearm storage;" "Offered a cable lock" or "Did not offer a cable lock"). Clinicians will be trained in how to document intervention delivery as part of annual training requirements. The Nudge condition is informed by behavioral economic theory: it enables choice and brings the desired behavior to the clinician's attention [72]. While a hard stop in the EHR requiring a decision, or a default in which the desired behavior is preselected, would likely be more powerful [18], our approach is responsive to health system stakeholder preferences.
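For concreteness, the mapping from SmartList selections to analyzable documentation flags might look like the sketch below. This is a hypothetical illustration only: the option labels mirror the examples above, but the actual Epic build, option wording, and variable names will be finalized with each system's information technology team.

```python
# Hypothetical mapping of S.A.F.E. Firearm SmartList selections to binary
# documentation flags; the actual Epic build will be finalized with each
# health system's IT team.
SMARTLIST_OPTIONS = {
    "counseling": {
        "Discussed safe firearm storage": 1,
        "Did not discuss safe firearm storage": 0,
    },
    "cable_lock": {
        "Offered a cable lock": 1,
        "Did not offer a cable lock": 0,
    },
}

def code_documentation(counseling_choice: str, lock_choice: str) -> dict:
    """Convert a clinician's SmartList selections into analytic flags."""
    return {
        "discussed_storage": SMARTLIST_OPTIONS["counseling"][counseling_choice],
        "offered_lock": SMARTLIST_OPTIONS["cable_lock"][lock_choice],
    }
```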
Nudge+
Practices randomized to this condition will receive the Nudge as described above, as well as 12 months of facilitation [73]. The role of the facilitator is to engage with study practices, to assist each practice in setting change and performance goals around the implementation of S.A.F.E. Firearm, and to troubleshoot implementation barriers.
Our approach to facilitation is informed by established facilitation manuals (i.e., the Veterans Health Administration Quality Enhancement Research Initiative [QUERI] facilitation manual [21, 74] and the Agency for Healthcare Research and Quality [AHRQ] practice facilitation manual [22]) and includes six stages. First, facilitators will engage in an informal pre-implementation readiness assessment with each practice to identify potential implementation barriers and to develop relationships with stakeholders. Second, facilitators will support practices in addressing these barriers and launching the implementation strategy activities. These activities include identifying where in the workflow S.A.F.E. Firearm can be implemented, when S.A.F.E. Firearm will be delivered during the well-child visit, who in the practice will be responsible for storing the cable locks, where the locks will be stored, and other workflow matters. In keeping with behavioral economic principles, we will pay close attention to cable lock storage locations so that locks can serve as visual reminders of the program (e.g., in baskets by documentation stations). Third, in the first 3 months of the active implementation period, facilitators will work with practices to set goals and establish metrics to monitor S.A.F.E. Firearm implementation. During this period, the facilitator will regularly engage with practice leadership and clinicians, and facilitators will begin to develop a sustainment plan in collaboration with stakeholders. Fourth, in months 3–9, facilitators will continue to work with practices to address barriers identified in the pre-implementation phase as well as new barriers that emerge as clinicians and practices begin implementing; this work will draw on established implementation strategies such as Plan-Do-Study-Act cycles [75] and audit and feedback [76]. Fifth, in months 9–12, facilitators will work to maintain gains and begin to enact the sustainment plan in preparation for the end of facilitation. Sixth, in month 12, facilitation activities will end, and practices will transition to the formal sustainment period. Over the course of the active implementation period, facilitators (i.e., members of the study team trained in facilitation, including master's- and doctoral-level colleagues) will offer expert consultation (i.e., webinars and technical assistance via email and phone as needed) and regular peer-to-peer calls where practices can share their experiences. All activities will be tracked via logs [21, 74] so that we can measure which strategies are delivered via facilitation (i.e., implementation fidelity).
Randomization
We will randomize practices to the active implementation conditions (Nudge [k = 16] or Nudge+ [k = 16]) using covariate-constrained randomization [77]. Covariate-constrained randomization enumerates a large number of possible assignments of the strategies to the practices and, for each, quantifies balance across arms with respect to a set of pre-specified covariates. From the subset of candidate assignments that achieve adequate balance, one is then randomly chosen as the final allocation of strategies for the study. We will implement this randomization procedure to achieve balance with respect to three practice-level covariates: health system, practice size, and percent of the patient panel living in a rural (i.e., non-metropolitan) [78] area based on geocoded patient home address.
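A minimal sketch of this procedure is below, assuming a squared standardized-mean-difference balance score and a random sample of candidate allocations; the trial's actual balance metric and software may differ.

```python
import random

import numpy as np

def constrained_randomization(practices, n_candidates=50_000, keep_fraction=0.1, seed=2021):
    """Covariate-constrained randomization sketch.

    `practices` is a list of dicts with keys 'system' (0/1), 'size', and
    'pct_rural'. We score a random sample of the possible 16/16 splits on
    covariate balance and draw the final allocation from the best-balanced
    subset.
    """
    rng = random.Random(seed)
    covars = np.array(
        [[p["system"], p["size"], p["pct_rural"]] for p in practices], dtype=float
    )
    # Standardize so each covariate contributes comparably to the balance score
    z = (covars - covars.mean(axis=0)) / covars.std(axis=0)

    candidates = []
    for _ in range(n_candidates):
        arm = rng.sample(range(len(practices)), k=len(practices) // 2)
        mask = np.zeros(len(practices), dtype=bool)
        mask[arm] = True
        # Balance score: sum of squared standardized mean differences across arms
        score = float(((z[mask].mean(axis=0) - z[~mask].mean(axis=0)) ** 2).sum())
        candidates.append((score, tuple(sorted(arm))))

    candidates.sort(key=lambda c: c[0])
    best = candidates[: max(1, int(keep_fraction * len(candidates)))]
    return rng.choice(best)[1]  # indices of practices assigned to one arm
```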
Study timeline
Year 1 will be devoted to carefully planning and piloting our procedures to optimize our approach, including the collection of our primary outcome. In Year 2, we will begin collecting parent-reported clinician fidelity to capture baseline rates. The trial will launch in Year 2 and run for 12 months. During this period, both systems will deploy the EHR Nudge in all practices; practices randomized to the Nudge+ condition will also receive facilitation. In Years 3 and 4, the Nudge will continue in all practices but facilitation will be discontinued in the Nudge+ practices; we will continue to collect data from all practices to assess sustainment for 1 year. We will collect survey, interview, practice log, and EHR data to answer study questions and test hypotheses. Aim 1 will examine the effects of Nudge+ relative to Nudge on parent-reported clinician fidelity, reach, cable lock distribution, acceptability, and implementation cost [55, 79]. See Fig. 4.
Aim 1: Examine the effects of Nudge vs. Nudge+ on implementation outcomes
Primary outcome
Fidelity is defined as a patient-level outcome indexing whether the patient received S.A.F.E. Firearm as prescribed by the program model; we call this “target S.A.F.E. Firearm.” The achievement of this outcome requires the patient’s clinician to follow both intervention steps (i.e., counseling and offering cable locks). Patients’ receipt of target S.A.F.E. Firearm will be measured via the following yes/no questions on a parent survey: (a) did someone on the healthcare team talk to you about firearm storage during your child’s recent visit? and (b) were free cable firearm locks made available to you during your child’s recent visit? Patients will receive a binary fidelity score indicating whether the clinician completed both (a) and (b) with them. In addition, we will code whether the steps occurred separately for supplemental analyses.
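As a simple illustration, the scoring logic might be implemented as follows (a sketch; variable names are ours, not the trial's):

```python
def code_fidelity(discussed_storage: bool, offered_lock: bool) -> dict:
    """Binary fidelity outcomes from the two parent-survey items.

    'target' requires both S.A.F.E. Firearm steps (counseling and lock offer);
    the component flags support the supplemental analyses described above.
    """
    return {
        "counseled": int(discussed_storage),
        "lock_offered": int(offered_lock),
        "target": int(discussed_storage and offered_lock),
    }

# Example: counseling occurred but no lock was offered -> target fidelity = 0
print(code_fidelity(discussed_storage=True, offered_lock=False))
```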
Secondary outcomes
Reach, or the number of parent-youth dyads who receive the intervention divided by the number of eligible parent-youth dyads [79], will be extracted from EHR data based on clinician responses in the EHR documentation template. EHR data collection offers an exceptional opportunity to understand clinician behavior with all parents of youth, rather than restricting data collection to a subset of clinicians who self-select to provide self-report or to allow observation of their behavior [80]; it also allows us to determine the entire clinical population denominator rather than a sample denominator.
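Expressed as a formula:

$$\text{Reach} = \frac{\text{number of parent-youth dyads receiving S.A.F.E. Firearm (per EHR documentation)}}{\text{number of eligible parent-youth dyads}}$$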
As an additional measure of reach, the number of cable locks distributed in each practice will be recorded by research staff on a monthly basis. Because families will be permitted to take more than one lock, this metric will offer a proxy for the maximum number of firearms that may have been secured due to the intervention.
Acceptability will be measured from the perspective of both parents and clinicians. The parent survey will inquire about the acceptability of each S.A.F.E. Firearm program component separately with a single yes/no item (i.e., I found/would have found it acceptable to talk about firearm storage during my child’s visit; I found/would have found it acceptable to have free cable firearm locks made available to me during my child’s visit). Clinicians will rate the acceptability of each S.A.F.E. Firearm program component and implementation strategy separately via a single item rated on a six-point Likert scale (strongly disagree to strongly agree). This approach is based on our previous work assessing clinician acceptability of firearm safety programming [38].
To collect fidelity and acceptability data, all eligible parents in both health systems will be contacted within 2 weeks of their completed well-child visit via email, mail, patient portal message, text message, or phone call by research specialists employed by their respective health system. The message will invite them to complete a survey via REDCap, a secure web-based application for collecting and managing survey data that can be completed on a computer or mobile device [81]. Follow-up contacts (e.g., phone calls, texts) will be made up to approximately 4 weeks after the well-child visit to enhance response rates; follow-up recruitment strategies will differ by site, informed by best practices at each health system. Participants will be eligible for an incentive via lottery for survey completion (e.g., $100 gift card). We anticipate obtaining responses from approximately 18,665 individuals using these methods.
To collect acceptability data, clinicians (N = 151) will be contacted via email using the Dillman Tailored Design Method [82] to boost response rates. Clinician participants will receive gift cards/gifts each time they complete a survey if allowed by their health system. Alternatively, an altruistic incentive will be used where the study will contribute to a charitable organization for each returned survey.
Cost will be measured using a pragmatic method designed to capture all resources needed to deploy the implementation strategies [83,84,85]. The primary objective of the cost analysis is to estimate the cost of each strategy at the system level, giving other decision makers the information needed to assess the resources required to take this approach to scale within their systems. We will capture these costs prospectively and pragmatically, using spreadsheet-based templates completed monthly, consistent with our previous studies [83, 84, 86]. These templates provide the framework for capturing costs related to each component of the implementation strategy (e.g., Epic build and maintenance; facilitation training and activities).
Hypotheses
We will compare the effects of two active implementation conditions, Nudge (EHR SmartList) vs. Nudge+ (EHR SmartList + facilitation) at the end of the implementation period as well as at the end of a 1-year sustainment period. We will test a total of four related hypotheses:
1) Change in the probability of target fidelity from the pre-implementation period to the active implementation period will be equivalent in Nudge vs. Nudge+.
2) Change in the probability of target fidelity from the pre-implementation period to the active implementation period will be superior in Nudge+ relative to Nudge.
3) Change in the probability of target fidelity from the pre-implementation period to the sustainment period will be equivalent in Nudge vs. Nudge+.
4) Change in the probability of target fidelity from the pre-implementation period to the sustainment period will be superior in Nudge+ relative to Nudge.
These hypotheses will also be tested with regard to the secondary implementation outcomes of reach, acceptability, and cost. Finally, we will descriptively evaluate each arm separately to determine the magnitude of change in the probability of target fidelity and other implementation outcomes over time.
Aim 2: Use mixed methods to identify implementation strategy mechanisms
Our understanding of the mechanisms through which the implementation strategies work is informed by previous research [73, 87] describing practice adaptive reserve, the practice-level mechanism through which facilitation, a practice-level implementation strategy, operates. We hypothesize that facilitation will increase practice adaptive reserve, or the ability to make and sustain change at the practice level, because it allows for problem-solving and tailoring specific to the individual practice. Previous research [73, 87] suggests that facilitation improves practice relationship infrastructure; aligns management functions so that clinical care, practice operations, and financial functions share a consistent vision; facilitates leadership and teamwork; and improves the work environment to create a culture of learning [87]. These are all components of adaptive reserve.
Participants and procedure
Participants will include clinicians and health system leaders (e.g., practice directors, department chairs, and health plan directors) in the two systems. In addition to surveys assessing the hypothesized mechanism at pre-implementation and active implementation as described in Aim 1, we will also conduct qualitative interviews with a subset of clinicians (n = 24) and leaders (n = 14) at the end of the active implementation period.
Primary mediator
We will measure practice-level adaptive reserve using the Practice Adaptive Reserve Scale [87], a self-report practice-level measure that is completed by practice staff and aggregated into an organizational construct composed of six factors that include relationship infrastructure, facilitative leadership, sense-making, teamwork, work environment, and culture of learning. The tool has high internal consistency, has been found to be associated with greater implementation in previous cross-sectional research [88], and is sensitive to change due to facilitation [87].
Moderators
We will measure clinician attitudes towards firearm safety promotion in pediatric healthcare settings using questions from the American Academy of Pediatrics Periodic Survey [89, 90]. We will also examine patient demographic variables (e.g., race, ethnicity, gender identity) as potential moderators.
Qualitative interviews
We will conduct brief interviews with a purposive sample of clinician survey respondents (equally distributed across health system and arm) to obtain more detailed information from those demonstrating high (n = 12 [6 per arm]) and low (n = 12 [6 per arm]) fidelity as measured via EHR documentation. The purpose of these interviews will be to identify additional mechanisms through which the implementation strategies might operate, such as motivation, self-efficacy [91], and psychological safety (i.e., a safe environment for risk taking) [92]. The interview guide will be developed using CFIR [40]. We will oversample clinicians who report firearm ownership on the survey. We will interview all leaders who agree to participate (total N = 20; anticipated n = 14). Participants will receive $25 or an equivalent gift for participation, as allowed by their health system, as noted above.
Aim 3: Examine the effects of the adapted intervention on clinical outcomes
The objective of this exploratory aim is to examine clinical outcomes to assess the public health impact of wide-scale health system implementation.
Participants and procedures
As described in Aim 1, we will survey all eligible parents in the participating practices within 2 weeks of their child's well-child visit.
Exploratory effectiveness outcomes
We will assess parent-reported firearm storage behavior, as well as youth suicide attempts, death, and unintentional firearm injuries, as exploratory outcomes. Firearm storage behavior will be assessed with two questions on the parent survey: (1) whether parents have made firearms less accessible to their children since their child's recent visit and, if so, what changes they have made; and (2) if not, whether they intend to make firearms less accessible to their children. The Theory of Planned Behavior informed the development of these questions [93]. Questions were piloted with parents to ensure sensitivity and appropriateness. Responses to the intention question will be rated on a five-point Likert scale ranging from strongly disagree to strongly agree.
Youth suicide attempts, deaths, and unintentional firearm injury and mortality data will be extracted from administrative data at each health system. Relevant events will be identified via ICD-10 codes, including all codes typically used to identify suicide attempts (including non-firearm suicide attempts), and via official state and federal mortality records that have already been matched to health system patient records.
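A simplified extraction sketch under assumed column names is shown below; the single illustrative code (T14.91, the ICD-10-CM "Suicide attempt" code) stands in for the full study code set, which will be specified with each health system.

```python
import pandas as pd

# Illustrative stand-in for the full study code set (suicide attempts,
# including non-firearm attempts, and unintentional firearm injuries).
EVENT_CODE_PREFIXES = ("T14.91",)  # e.g., ICD-10-CM "Suicide attempt"

def flag_events(dx: pd.DataFrame) -> pd.DataFrame:
    """Flag encounters whose diagnosis codes match the study code set.

    Assumes `dx` has columns 'patient_id', 'encounter_date', and 'icd10'.
    """
    matched = dx[dx["icd10"].str.startswith(EVENT_CODE_PREFIXES)]
    return matched.drop_duplicates(["patient_id", "encounter_date"])
```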
Sample size calculation
Sample sizes differ by aim and approach. For quantitative outcomes, we powered on our primary implementation outcome of fidelity (i.e., parent-reported clinician delivery of the program). After accounting for non-response, we expect to include data from 18,556 parents of youth within 32 practices. Power calculations, implemented in PASS Power Analysis and Sample Size Software (NCSS, LLC, 2019), were based on a GEE test for two proportions in a cluster randomized design. Assuming an average practice size of 730 patients and an ICC of 0.03, we will have at least 89% power to detect a difference of 0.1 in the probability of fidelity between Nudge and Nudge+ in the active implementation period. For qualitative data, we will use purposive sampling until thematic saturation is reached (in the case of clinicians) or until all individuals within the group who agree to participate have been interviewed (in the case of leaders) [94].
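As a rough check on this calculation, a design-effect approximation (a sketch, not the PASS GEE-based computation the trial used) reproduces approximately 90% power under worst-case variance:

```python
from math import sqrt

from scipy.stats import norm

def cluster_power(p1, p2, k_per_arm=16, m=730, icc=0.03, alpha=0.05):
    """Approximate power for comparing two proportions in a cluster
    randomized design via the design effect; a back-of-the-envelope check,
    not the trial's formal PASS/GEE calculation."""
    deff = 1 + (m - 1) * icc            # design effect: ~22.9 here
    n_eff = k_per_arm * m / deff        # effective sample size per arm: ~511
    se = sqrt(p1 * (1 - p1) / n_eff + p2 * (1 - p2) / n_eff)
    z = abs(p2 - p1) / se
    return norm.cdf(z - norm.ppf(1 - alpha / 2))

# Proportions near 0.5 maximize the variance; power is still ~0.90
print(round(cluster_power(0.5, 0.6), 2))
```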
Data analysis
In Aim 1, the primary dependent variable is parent-reported fidelity. For each observation period (pre-implementation, active implementation, sustainment) and each implementation condition (Nudge, Nudge+), we will describe the proportion of parents who reported having received the intervention with fidelity. We will calculate fidelity using three binary outcomes that will be modeled separately: received counseling (yes/no), offered lock (yes/no), and both (yes/no). For each fidelity outcome, we will fit a single model to simultaneously examine differences between the pre-implementation and active implementation periods for both conditions as well as differences between Nudge and Nudge+. For comparing the change in the log-odds of fidelity from pre-implementation to active implementation between Nudge and Nudge+, we will use a three-sided test to simultaneously test for equivalence and superiority (as well as non-inferiority) of Nudge+ relative to Nudge [95]. Based on input from leadership in the two health systems and a review of the literature [96,97,98], we established that for Nudge+ to be considered meaningfully superior to Nudge, the between-arm difference in the change in the probability of fidelity relative to pre-implementation would need to be at least 0.1. All analyses will be repeated using the sustainment period outcomes in place of the active implementation period outcomes. We will also repeat these analyses for parent-reported safe storage and exploratory effectiveness variables, including youth suicide attempts, deaths, and unintentional firearm injury and mortality. Additionally, we will conduct a sensitivity analysis to explore whether intervention effectiveness varies significantly by health system.
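To make the modeling approach concrete, the sketch below fits a logistic GEE with exchangeable within-practice correlation to synthetic data. All column names and the data are ours, not the trial's, and the trial's final model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the analytic dataset: one row per surveyed parent,
# with the practice as the clustering unit.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "practice": rng.integers(0, 32, n),
    "period": rng.choice(["pre", "active"], n),
})
df["condition"] = np.where(df["practice"] < 16, "Nudge", "Nudge+")
df["fidelity"] = rng.binomial(1, 0.3, n)

# Logistic GEE with exchangeable within-practice correlation; the
# period x condition coefficient estimates the between-arm difference in the
# pre-to-active change in the log-odds of fidelity, which feeds the
# equivalence/superiority tests.
result = smf.gee(
    "fidelity ~ period * condition",
    groups="practice",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(result.summary())
```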
Mediation will be tested using the product of coefficients method [99,100,101]. In this approach, the total effect of Nudge+ relative to Nudge is parsed into direct and indirect effects through the mediator, practice adaptive reserve. Models will test (a) the effect of Nudge+ relative to Nudge on practice adaptive reserve and (b) the effect of practice adaptive reserve on the log-odds of fidelity, controlling for Nudge+ versus Nudge. All models will include covariates to address potential mediator-outcome confounders, including baseline values of the mediator and outcome variables. We will also conduct sensitivity analyses to test for an exposure-mediator interaction and will model it if appropriate. An unbiased estimate of the indirect effect will be derived as the product of coefficients from the two models, and confidence intervals for the indirect effect will be generated using Monte Carlo methods [100,101,102,103]. We will test the statistical significance of the indirect effect using the joint significance test [103].
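A minimal sketch of the Monte Carlo confidence interval for the indirect effect, using purely hypothetical coefficient estimates and standard errors:

```python
import numpy as np

def monte_carlo_ci(a, se_a, b, se_b, n_draws=100_000, alpha=0.05, seed=1):
    """Monte Carlo confidence interval for the indirect effect a*b.

    a: estimated effect of Nudge+ (vs. Nudge) on practice adaptive reserve
    b: estimated effect of adaptive reserve on the log-odds of fidelity,
       adjusted for arm (both taken from the two fitted models).
    """
    rng = np.random.default_rng(seed)
    draws = rng.normal(a, se_a, n_draws) * rng.normal(b, se_b, n_draws)
    lo, hi = np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return a * b, (lo, hi)

# Hypothetical estimates purely for illustration
est, (lo, hi) = monte_carlo_ci(a=0.40, se_a=0.15, b=0.50, se_b=0.20)
print(f"indirect effect = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```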
Variables that potentially modify the effect of Nudge+ relative to Nudge will be tested separately by adding terms for each moderator and its interaction with the exposure to the Aim 1 models for the active implementation period. These models will estimate the conditional relationships between Nudge+ (relative to Nudge) and implementation outcomes across different values of the putative moderators.
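Continuing the synthetic GEE sketch above, each putative moderator would enter the active-implementation model as a main effect plus its interaction with the arm indicator (the attitude score below is hypothetical):

```python
# Continuing the synthetic example above: restrict to the active period and
# attach a hypothetical clinician attitude score as a moderator.
df_active = df[df["period"] == "active"].copy()
df_active["firearm_attitudes"] = rng.normal(0, 1, len(df_active))

# The condition x moderator coefficient estimates how the Nudge+ (vs. Nudge)
# effect on the log-odds of fidelity varies across moderator values.
result_mod = smf.gee(
    "fidelity ~ condition * firearm_attitudes",
    groups="practice",
    data=df_active,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(result_mod.summary())
```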
Qualitative analysis and mixed methods
Text answers from open-ended survey questions with parents from Aims 1 and 3, along with digitally recorded and transcribed interviews with clinicians and leaders on the mechanisms of the implementation strategies, will be loaded into NVivo qualitative data analysis software [104]. Analysis will be guided by an integrated approach [105], which outlines a rigorous, systematic method for analyzing qualitative data using an inductive and deductive process of iterative coding to identify recurrent themes, categories, and relationships. The structure of our mixed methods approach is sequential: quantitative data are collected primarily before qualitative data, and quantitative data are weighted more heavily than qualitative (QUAN > qual). The function is "complementarity" (to elaborate upon the quantitative findings to understand the how of implementation), and the process is connecting (having the qualitative data set build upon the quantitative data set) [106]. To integrate the quantitative and qualitative results, we will follow guidelines for best practices in mixed methods [107].