BACKGROUND

Population-based screening programs for colorectal cancer (CRC) can reduce CRC-related mortality by preventing CRC from occurring and by detecting it at earlier, more treatable stages.1 Screening outreach programs that incorporate fecal immunochemical tests (FITs) and follow-up endoscopy are effective and cost-effective ways to screen people.2, 3 A recent evidence review indicates that the most effective clinic-based programs provided FIT kits through the mail (using pre-addressed stamped return envelopes), involved patient reminders, and/or distributed FITs in clinic.4 However, evidence-based interventions can only be effective if they are successfully implemented (i.e., mailings are completed), and little is known about factors that influence the implementation of such programs in clinical practice.5

Implementation of CRC screening programs is challenging in the United States, where the delivery of CRC screening varies by clinic and health system.6, 7 For example, Liss and colleagues (2016) found that a mailed FIT program was an effective way to initiate a screening process (18.4% more patients screened with outreach), but current cost structures limited the ability to implement such programs.8 Health systems have little guidance on how to select and adapt interventions for their particular population and clinical setting.4 How a health center implements a full screening program depends on leadership's prioritization of a given screening target, the expected success of a given intervention program, the resources available for implementing and sustaining the intervention, and whether the program is a good fit for the health center's population.5, 9, 10 These implementation decisions, and the underlying context that drives them, might have significant implications for the success of these programs.

A variety of factors, such as program components, leadership, lab arrangements, and state-based incentives for screening metrics, likely play a role in successful implementation, but organizational factors in particular seem most compelling to examine in busy health center practices. Clinics can face a variety of challenges in implementing a centralized CRC screening program, including staff turnover, competing time pressures, funding resources, challenges with electronic health records (EHRs) and supporting technology, and access to colonoscopy.11,12,13,14 Case study results from a screening demonstration program15 indicated the importance of designing around pre-existing infrastructure and existing service delivery systems, as well as having a multidisciplinary implementation team, collaborating with partners and a medical advisor, and allowing adequate start-up time. A study of in-clinic FIT distribution in eight Iowa physician offices (in which only 45% of 400 FITs were handed out) concluded that implementation required the support of nursing staff for planning and executing the program.16 The complexities of implementing a centralized screening outreach program and the need for staff support in doing so have been echoed in other studies of CRC screening interventions.12, 13, 17

Few studies of mailed FIT program effectiveness18,19,20,21 have analyzed the factors that contributed to implementation success. Many studies described intervention components, and some have outlined barriers to adapting and implementing mailed FIT outreach programs.22, 23 But none of the studies cited above specifically examined contextual factors as they relate to successful implementation. One study reported moderators of evidence-based approaches to CRC screening within 59 FQHCs.24 Of these, eight (13.6%) had implemented mailed FIT programs, but the study presented no organizational-level factors linked to implementation outcomes.24

A growing interest in assessing implementation success has led to the use of new analytic strategies to determine causal pathways to outcomes and to test theoretical frameworks against empirical data.25,26,27 The Strategies and Opportunities to Stop Colon Cancer in Priority Populations (STOP CRC) study provided a unique opportunity to assess factors associated with successful implementation of a mailed FIT outreach program in eight busy health centers. A cluster-randomized, pragmatic trial of a mailed FIT program, STOP CRC aimed to increase CRC screening in eight federally qualified health centers (FQHCs; 26 clinics) in Oregon and California.28,29,30 Health centers varied widely in implementation, specifically in the proportion of eligible patients who were mailed a FIT (21.0-81.7% in lagged data), and screening rates were strongly associated with implementation success.30 Thus, it is critical to understand the specific combination of conditions that explains the variation in implementation success of the mailed outreach across the health centers.

METHODS

STOP CRC aimed to increase CRC screening in health center clinics by providing tools and training to deliver a mailed FIT outreach program. The project created a series of EHR tools that identified patients who were eligible and due for screening. Twenty-six clinics from eight health systems participated (41,193 eligible patients in the first year) (Fig. 1). The health centers were diverse (Table 1). They were located in Oregon and California, were rural and urban, and varied in size (7548-54,850 total patients). All health systems were members of the OCHIN clinical information network and shared an EHR (Epic©).31 The program involved three major steps: clinic staff were trained to use the tools to mail a personalized introductory letter, mail a FIT kit, and then mail a personalized reminder to patients due for CRC screening. The letter and instructions directed the patient to label the completed test, place it in a pre-addressed stamped envelope, and either send it back to a designated lab or deliver it to the clinic. The lab sent results directly to the clinic EHR. The research team conducted surveys and interviews throughout the life of the project, executed improvement cycles, collected EHR data, and held trainings and meetings to support full program implementation.

Figure 1 Implementation Outcome: % of eligible patients mailed a FIT.

Table 1 Health Center Characteristics*

We applied configurational comparative methods (CCMs) to identify the specific combinations of conditions that distinguished health centers with high, medium, or low implementation levels of the mailed outreach to eligible members. CCMs provide a formal mathematical approach to cross-case analysis that draws upon Boolean algebra and set theory to identify a "minimal theory": a crucial set of difference-making combinations that uniquely distinguishes one group of cases from another.32,33,34,35,36 The analytic objective of CCMs is to identify necessary and sufficient conditions, a fundamentally different search target than that of correlation-based methods, and thus CCMs do not require large sample sizes; in fact, an often-cited strength of CCMs is their versatility with small-n studies.34, 37, 38 "CCMs" is an umbrella term for the broader family of configurational approaches, which includes Qualitative Comparative Analysis (QCA) and, more recently, Coincidence Analysis (CNA). Applied in political science and sociology since the 1980s, CCMs have gained traction in health services research and implementation science in recent years. For example, a 2019 Cochrane Review of school-based interventions for asthma self-management prominently used CCMs to identify conditions aligned with successful implementation39; a 2019 review of innovative approaches in mixed-methods research devoted an entire section to CCMs38; a 2020 study applied CCMs to determine the key subset of implementation strategies directly linked to implementation success across a national sample of VA medical centers32; and the comprehensive 2020 Handbook on Implementation Science included a chapter dedicated solely to configurational comparative methods.40 In sum, CCMs offer a mathematical, case-oriented way to analyze data that is distinct from traditional approaches such as logistic regression or qualitative research.34, 35, 41
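For reference, the two fit measures invoked throughout this analysis can be written set-theoretically for a candidate sufficiency claim X → Y. This is the conventional formulation from the QCA/CNA literature, not notation taken from the study itself:

\[
\operatorname{consistency}(X \rightarrow Y) = \frac{|X \cap Y|}{|X|},
\qquad
\operatorname{coverage}(X \rightarrow Y) = \frac{|X \cap Y|}{|Y|}
\]

Here |·| counts cases: 100% consistency means every case exhibiting condition X also exhibits outcome Y, and 100% coverage means every case exhibiting Y is accounted for by X.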

Table 2 Final Model from Configurational Comparative Methods (CCMs) Analysis

We used the Consolidated Framework for Implementation Research (CFIR) model42 as our framework for identifying outreach implementation factors. The CFIR model identifies a comprehensive list of factors that might be associated with effective implementation.42 CFIR constructs include categories of characteristics in the intervention, inner setting, outer setting, and process. Intervention characteristics capture specific elements of the intervention that affect implementation, such as adding reminder calls or mailing reminder letters. Process characteristics capture differences in the implementation process, such as having a clinic champion or dedicated information technology (IT) staff. Inner setting components, such as system growth or a centralized implementation team, capture structural or cultural characteristics of the system that may impact outcomes. Outer setting characteristics, such as interface issues with an external lab, may indicate influences outside of the health system that could impact implementation of the full program. CFIR components were drawn from qualitative and quantitative data: clinic responses to the baseline survey, baseline interview, cost interview, year 1 project lead interview and follow-up survey, Plan-Do-Study-Act (i.e., quality improvement cycle) reports, EHR data, and research team knowledge acquired through the study13, 43 (see Supplemental Table on Health Clinic Characteristics).

Methods for the clinic surveys, interviews, and Plan-Do-Study-Act cycles have been reported previously.12, 13 In brief, clinic leadership/staff involved in executing the intervention completed a survey and in-depth interview prior to the implementation of STOP CRC (baseline survey and interview) and again approximately 12 months following the first year of implementation (year 1 project lead interview and clinic follow-up survey). Survey questions were guided by key domains of CFIR, and interview questions explored similar concepts with greater ability to focus on context-specific barriers and facilitators to CRC screening in general and STOP CRC implementation specifically. Year 1 interviews and surveys explored largely the same questions but also focused on issues of ongoing implementation and maintenance for the clinics. Additionally, during year 1 interviews, we solicited feedback on the motivations for and outcomes from implementing the quality improvement cycle (Plan-Do-Study-Act) and obtained information from each clinic on the amount of time spent on tasks to implement the intervention. All interviews were audio-recorded, transcribed, coded, and content-summarized using standard qualitative analysis techniques,44,45,46 aided by the use of a qualitative software program (Atlas.ti).

All components together represented 50 potential explanatory factors (see Supplemental Table). To reduce the number of factors, we used functions available within the Coincidence Analysis package ("cna")47 in R; R (version 3.5.0) and RStudio (version 1.1.383) were used to support the analysis. The multi-step configurational approach we used for data reduction has been detailed in depth in previous studies32, 48; we present the main details here. Specifically, we identified configurations of conditions with the strongest connection to the implementation outcome of mailed outreach as the initial program step (i.e., high, medium, or low implementation levels). We set r to a maximum of 3, where r stands for the number of conditions selected at the same time from the larger set of n objects (i.e., the 50 potential explanatory factors, each with at least two possible values). In setting r to a maximum of 3, we considered all 1-, 2-, and 3-condition configurations across the 50 factors that were instantiated within the dataset and met the cutoff threshold. We then interpreted this condition-level output on the basis of our research question (i.e., at least one program-related condition had to be present) as well as logic, theory, and prior knowledge to narrow the initial set of 50 potential explanatory factors to a smaller subset of candidate factors to model. As part of our selection criteria, we looked for configurations in which different values for the exact same set of factors could explain the low, medium, and high levels of implementation with 100% consistency (i.e., the solution yielded the outcome every time) and 100% coverage (i.e., every case with the outcome was explained by the solution) with no model ambiguity.49, 50 Final solutions were developed using the modeling functions in the "cna" R package.
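To make the workflow concrete, the sketch below shows the general shape of such an analysis in R with the "cna" package. It is illustrative only: the factor names (CENTRAL, LETTER, IMPL) are hypothetical stand-ins, and the toy data frame encodes just the two factors retained in the final model (reconstructed from counts reported in the Results) rather than the full set of 50 candidate factors.

```r
# Minimal sketch of a Coincidence Analysis run, assuming hypothetical
# factor names; not the study's actual code or dataset.
library(cna)

# One row per health center. CENTRAL = centralized implementation team,
# LETTER = introductory letter mailed separately from the FIT kit,
# IMPL = implementation level (0 = low, 1 = medium, 2 = high).
dat <- data.frame(
  CENTRAL = c(1, 1, 1, 0, 0, 0, 0, 0),
  LETTER  = c(1, 1, 0, 1, 1, 1, 0, 0),
  IMPL    = c(2, 2, 1, 1, 1, 1, 0, 0)
)

# Multi-value CNA: con = 1 and cov = 1 demand 100% consistency and
# coverage; maxstep = c(3, 3, 10) caps candidate configurations at
# three conditions, mirroring the r <= 3 restriction described above.
res <- cna(dat, type = "mv",
           outcome = c("IMPL=0", "IMPL=1", "IMPL=2"),
           con = 1, cov = 1, maxstep = c(3, 3, 10))

msc(res)  # condition-level output: minimally sufficient conditions
asf(res)  # atomic solution formulas, one per outcome value
```

With data of this shape, asf(res) should recover formulas of the form CENTRAL=1*LETTER=1 <-> IMPL=2, the multi-value analogue of the final model reported in the Results.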

Implementation Outcomes

We categorized levels of mailed outreach implementation according to the proportion of eligible patients who were mailed a FIT kit between June 2014 and February 2015. Proportions at the eight centers were 0.21, 0.26, 0.33, 0.37, 0.38, 0.43, 0.59, and 0.81. Given the large, tightly clustered group of proportions between 0.33 and 0.43 and the greater spread of values above and below the cluster, we characterized the eight health centers as having a low (< 30% of eligible patients were sent a FIT kit), medium (30 to 50% were sent a FIT), or high (> 50% were sent a FIT) implementation level. While the difference between implementation rates within the high group was quite large, we felt it important to capture what elements these two centers had in common that set them apart from the other centers, given that both had implementation rates at least 16 percentage points higher than those in the "medium" cluster.
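As a small worked example, this categorization amounts to binning the eight observed proportions at the 0.30 and 0.50 cut points (a sketch of the arithmetic, not the study's code):

```r
# Bin the eight observed proportions into the three implementation levels.
props <- c(0.21, 0.26, 0.33, 0.37, 0.38, 0.43, 0.59, 0.81)
level <- cut(props, breaks = c(0, 0.30, 0.50, 1),
             labels = c("low", "medium", "high"))
table(level)  # low: 2, medium: 4, high: 2
```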

RESULTS

Using the subset of factors identified in data reduction, our CCMs analyses identified a solution with one dichotomous program-related factor and one dichotomous organizational-related factor that in combination perfectly distinguished among high, medium, and low implementation with 100% consistency and 100% coverage. The two factors were Centralized Implementation Team (values: 1 = YES; 0 = NO) and Separate Introductory (Intro) Letter (values: 1 = YES; 0 = NO); there were no missing values for these factors in the dataset. Health centers with high levels of implementation had centralized implementation teams (including FIT program staff) and mailed the introductory letter separate from the FIT kit. Health centers with medium levels of implementation followed one of two solution pathways: the combination of Centralized Implementation Team = NO and Separate Intro Letter = YES, or the combination of Centralized Implementation Team = YES and Separate Intro Letter = NO. Health centers with low levels of implementation had the combination of Centralized Implementation Team = NO and Separate Intro Letter = NO.

Put differently, two dichotomous factors with possible values of 1 (YES) or 0 (NO) yield four possible combinations: 1-1, 1-0, 0-1, and 0-0. For each of the three outcomes (high, medium, or low implementation levels), the corresponding two-condition configurations always yielded that outcome (i.e., 100% consistency) and explained all the cases with that outcome (i.e., 100% coverage) (Table 2).
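This solution structure can be checked mechanically. The hedged sketch below re-creates the configuration table from the counts reported in the text (factor names again hypothetical) and asks the "cna" package to score each reported pathway:

```r
# Hedged check of the reported solution; 'dat' mirrors Table 2 using
# counts from the text, with illustrative (not actual) factor names.
library(cna)
dat <- data.frame(
  CENTRAL = c(1, 1, 1, 0, 0, 0, 0, 0),
  LETTER  = c(1, 1, 0, 1, 1, 1, 0, 0),
  IMPL    = c(2, 2, 1, 1, 1, 1, 0, 0)
)

# Each implication should report con = 1 and cov = 1, i.e., 100%
# consistency and 100% coverage for all three outcome levels.
condition(c("CENTRAL=1*LETTER=1 -> IMPL=2",
            "CENTRAL=0*LETTER=1 + CENTRAL=1*LETTER=0 -> IMPL=1",
            "CENTRAL=0*LETTER=0 -> IMPL=0"),
          dat)
```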

DISCUSSION

Success in implementing the outreach mailing step of a mailed FIT CRC screening program was accounted for by two factors with 100% consistency: a centralized implementation team with dedicated staff for delivering intervention components and the mailing of an introductory letter prior to the FIT kit mailing. These findings suggest that having a dedicated team and maintaining fidelity to the program can lead to successful implementation and increased CRC screening.

A centralized implementation team may have facilitated implementation by providing protected time for staff to implement the program. Health centers without centralized FIT program staff relied on multiple staff in diverse roles to deliver the program, which may have led to more fragmented implementation. A centralized team may also have provided needed infrastructure for successful implementation of the program: all three sites with centralized implementation teams had the combination of champions, IT staff, and program managers. In another example of possible facilitation, only systems with a centralized process issued reminder letters.

There are several reasons that mailing an introductory letter separate from the FIT kit may have been directly linked to an increased proportion of eligible patients being mailed a FIT (implementation success). One possibility is that the inclusion of an introductory letter reflected the health care system's commitment to delivering all components of the program. Another possibility is that the process of mailing the introductory letters reminded staff to complete the full workflow. Successful implementation of the mailed outreach may have been associated with the successful implementation of other STOP CRC intervention steps. All five systems that mailed the introductory letter separately also reported no lab issues, gave patients the option of dropping the kit off at the clinic or returning it by mail, and did not report any major IT challenges; additionally, four of these five also reported significant growth in system size. An important caveat is that these observational findings cannot conclusively determine whether the introductory letter was causally related to the outcome or was a marker of broader program factors; more evidence, such as independent replication of these results in other studies, would be required to establish a causal connection.

Additional conditions consistently linked to low implementation outcomes were IT challenges and lab issues: the two systems with low implementation were the only ones to report both problems. As published previously, in post-implementation qualitative interviews, staff from both systems described challenges with printing and formatting the introductory letter, along with determining a workflow for executing this activity, leading to staff frustration.12 They also described struggling to determine the correct postage for the FIT kit mailing, which in turn created "re-dos" of work and delays in mailing.12

The two systems with low implementation outcomes also lacked champions and IT staff to troubleshoot technology challenges. The lack of a clinical champion aligns with previously reported comments made during pre- and post-implementation interviews regarding concerns that a mailed FIT approach might not be the best match for the patients served by their systems (e.g., patients prefer face-to-face conversations with providers and/or have transitory addresses).12 By contrast, interview participants at the high-implementation clinics viewed the program as an opportunity to learn about a set of tools (population-based mailing) that could eventually be applied to other areas of preventive care.12

Contrary to expectation, staffing changes, clinic growth, and attendance at training did not consistently distinguish levels of implementation. We previously published results of qualitative assessments from interviews with implementation clinic leadership and staff captured at baseline and repeated 6 to 9 months after implementation.12 The most commonly reported barriers were the time required for staff to implement program components, inadequate EHR staffing for resolving issues related to program implementation (such as batch printing), and a lack of easy-to-understand and actionable reports on the mailed FIT outcome. Reported successes included use of the EHR and the opportunity to standardize and operationalize processes for population outreach. One person replied, "Once you get the hang of the process, it was pretty straightforward." Another said the program aligned with clinic goals and culture: "We are moving to a team approach and the transition from the provider being the center to other staff outside of the exam room being able to help patients." These previously reported themes could help explain why the presence of centralized processes fostered success in implementing the program: centralized processes were accompanied by dedicated resources, the ability to solve EHR challenges, and a strong desire to work through challenges to eventually create workflows that made the program simpler to execute.

Centralized programs, such as those implemented by large, organized health care systems and health insurance plans, often have centralized labs and use contracted mailing vendors to reduce the burdens of mailings on clinics.51, 52 Some of the STOP CRC health centers with the most successful implementation used staff and resources to implement mailings.53, 54 Others were supported by additional significant grant funding55 (STOP CRC clinics received reimbursement for research-related activities). STOP CRC was one of the first multi-site studies of mailed FIT outreach to rely on clinic staff to deliver the intervention components.

Our study has limitations. Our analysis assessed health-system-level organizational factors as they related to implementation success of the mailed outreach and did not measure patient-related factors that might also mediate FIT completion. Additionally, details of clinic-level variation (i.e., clinic size, champions, clinic-specific workflows, and processes) may not have appeared in our analysis at the health center level. Moreover, unobserved details may explain why having a centralized implementation team and introductory letter were associated with greater implementation. The center with the highest rate of implementation had a small population (n < 300), which may have made it easier to implement the program broadly among its patients and may account for some of the difference between rates in the two "high implementation" health centers. We might also have missed some key organizational components linked to outcomes, such as staff burnout or financial readiness.

Our study also has strengths: we used innovative methods (CCMs) to understand organizational- and program-level factors that were directly linked to clinic staff's ability to carry out an EHR-enabled CRC screening improvement program in busy practice settings.

CONCLUSION

Using an innovative analytic approach, we identified two key factors that in combination perfectly distinguished among high, medium, and low implementation levels of the mailed FIT outreach of a CRC screening program: a centralized implementation team (an organization-level factor) and successful mailing of an introductory letter (a program-level factor). By identifying the conditions for intervention success, these results can inform future efforts to improve the implementation of evidence-based interventions into clinical practice.