Background

The successful implementation of evidence-based innovations (EBIs) to improve healthcare delivery requires a well-planned strategy to change ‘the way we do things around here’ [1, 2]. This is because evidence from innovation research indicates that innovations do not translate easily into practice, especially when fit with the local context has not been addressed [3, 4]. The literature has shown that implementation outcomes vary by organization: contextual factors such as leadership support, capacity, culture, and resources can greatly affect how readily new EBIs are adopted [5,6,7]. However, contextual factors such as organizational culture and climate are difficult to change and are often beyond the direct control of implementers. It may be more useful to think about how cultural elements can be leveraged, and “it would probably be most effective to focus culture change efforts as narrowly as possible” [8]. Accordingly, researchers have turned their attention to implementation strategies, defined as specific tools or techniques (e.g., educational programs, incentives) used to improve uptake of EBIs [9,10,11]. They are social interventions targeting “various properties of interventions that make them more or less amenable to implementation” [10]. Studies suggest that implementation strategies are most effective at promoting the use of EBIs when they are customized to the unique barriers and facilitators of a setting [12,13,14,15,16,17].

Despite recognition of the value of customizing implementation strategies for the contexts in which they are applied, strategies are too often designed and applied in a standardized manner and fail to adequately reflect the diverse settings with different implementation determinants [18,19,20]. For example, Bosch et al. [21] examined 22 quality improvement studies and found that the selected implementation strategies did not always match the identified implementation barriers. Even when a prospective barrier analysis was conducted, only a few authors explicitly described the methods used to translate its results into tailored strategies. Thus, the literature indicates that in many cases no explicit theoretical or evidential rationale is provided for a chosen strategy [22, 23]. Instead, it appears that in the absence of knowing what works, implementers use many different approaches [24]. Consequently, we know relatively little about how organizations choose different implementation strategies in real-world settings and the factors that may drive these choices [25,26,27].

Another frequently considered aspect of implementation strategies is their number (i.e., single or multifaceted). Historically, the assumption seems to have been that using multiple implementation strategies, i.e., a multifaceted or multicomponent approach to implementation, is more effective in promoting adoption of EBIs. For example, a study of U.S. Department of Veterans Affairs clinics working to implement evidence-based HIV treatment found that implementers used an average of 25 (plus or minus 14) different implementation strategies [28]. Likewise, a review by Bacci et al. found that most studies of community pharmacist patient care services utilized more than one implementation strategy, with a mean of 6.5 strategies (range, 1–36) [29]. Despite these examples and the assumption underlying them, there remains a paucity of empirical evidence on the motivations and factors underlying the decisions that result in more or fewer implementation strategies. The purpose of this mixed methods study is to identify factors that might prompt organizations to choose different numbers and types of implementation strategies. The analysis is based on a formative evaluation of 15 rheumatology clinics throughout the United States that were planning for implementation of an evidence-based shared decision-making aid (DA) for patients with lupus. Insights from the study are important for better understanding how contextual factors shape the number and type of implementation strategies.

Consolidated Framework for Implementation Research (CFIR)

The identification of contextual determinants was guided by the Consolidated Framework for Implementation Research (CFIR), while the implementation strategies were informed by a typology of implementation strategies. The CFIR is a widely used framework designed to assess potential determinants of implementation within local settings [20]. It organizes 39 standardized constructs into five major domains; definitions of these domains and their operationalization in this study are described in Table 1. In this analysis, the 39 constructs and their five parent domains were used to (1) develop a semi-structured interview guide and (2) organize our thematic analysis of the contextual determinants associated with decisions surrounding the choice of implementation strategies.

Table 1 CFIR operationalization

Expert Recommendations for Implementing Change (ERIC)

This study utilizes the taxonomy of implementation strategies developed by Powell et al. [30] and the ERIC project. It classifies 73 individual implementation strategies into six categories: planning (e.g., assessing readiness and identifying barriers, selecting strategies and getting buy-in), educating (e.g., developing effective educational materials, conducting ongoing training), financing (e.g., altering incentive/allowance structures, facilitating financial support), restructuring (e.g., revising professional roles), managing quality (e.g., developing and organizing quality monitoring systems, conducting audit and feedback, using reminders), and attending to the policy context (e.g., creating or changing credentialing and/or licensure requirements) [30, 31]. Implementation strategies can be utilized individually (i.e., discrete) or combined to build a tailored multicomponent implementation strategy [11, 31]. For example, an organization could utilize educational strategies alone (a discrete strategy) or combine training and technical assistance (a multicomponent strategy). Given the range of strategies available and the latitude organizations have to choose different discrete strategies in real-world settings, different combinations of implementation strategies likely exist across organizations.

Another consideration, given varied organizational and implementation contexts, is whether implementation strategies should be deployed in a standardized manner or whether organizational stakeholders should be allowed to customize the strategies, or choose which strategies to use, in their unique situation. This is a common dilemma in implementation research, which seeks to identify effective strategies using rigorous research designs (e.g., randomized controlled trials) that require some degree of standardization yet must recognize the important role of context [32]. Standardized strategies are assumed to be effective independent of the context into which an innovation is being introduced. In contrast, customized approaches allow organizations to adapt a strategy, or choose strategies, to fit their local needs and resources. Customized approaches may be especially valuable where implementation settings vary substantially and/or where the implementation strategies themselves are delivered by the local organization [11, 17].

Methods

Setting

The analysis reported here was part of a larger evaluation of different strategies for implementing the DA [33]. Using a purposive sample, 15 rheumatology clinics were identified through the professional network of the principal investigator and invited to participate based on their geographic distribution throughout the United States and their capacity to meet study criteria (e.g., commitment to use the DA for study duration, ability to recruit minimum number of patients to view the DA).

In this study, we used a combination of standardized and customized implementation strategies to implement the DA, which was designed to educate lupus patients about their treatment options and help them engage in more shared decision-making with their physicians. All clinics used standardized implementation strategies that were provided uniformly by the research team (e.g., training on use of the DA, designation of a clinic champion, and a refresher training course). In addition, each clinic could choose from a ‘menu’ of implementation strategies that could be customized to their clinic. These customized implementation strategies were directed at both clinic personnel and patients. Clinic-targeted strategies focused on integrating the DA into existing work processes, while patient-targeted strategies focused on raising awareness and educating patients about the DA. This approach was utilized to provide clinics the flexibility to choose activities that met their unique needs and capabilities. Table 2 provides a list of both the standardized and customized strategies.

Table 2 List of implementation strategies

Data sources

The analysis reported in this paper relied primarily on two data sources. Qualitative data were collected via semi-structured telephone interviews during the first 6 months of the study, between November 2018 and August 2019. The purpose of these formative interviews was to understand local circumstances surrounding the anticipated implementation of the DA, especially barriers and facilitators to implementation. The interview protocol was constructed using the CFIR technical assistance website (https://cfirguide.org/) and included questions related to all five CFIR domains (Additional file 1_Interview guide). These questions were adapted to fit our study, and additional questions were added based on the subject matter expertise of the study investigators. Three evaluation team members (LH, AH, RK), who were trained in qualitative interviewing, piloted the initial interview protocol with four key informants to assess the clarity of the questions, identify gaps in the interview guide, and improve the flow of the interview. Subsequent interviews were conducted by these same three evaluation team members.

A study coordinator and/or the principal investigator of each clinic selected key informants from a variety of positions within a clinic, including physicians, pharmacists, administrative directors/clinic managers, nurses, medical assistants, patient technicians, front office staff, and study coordinators, to ensure that we captured a wide variety of perspectives on potential challenges to implementing the DA. Eligibility criteria included being an employee of the clinic and being familiar with the clinic’s research and patient care activities. One day prior to each interview, an email was sent to each key informant that included a short video describing the DA, a PowerPoint print-out of the DA, and a summary of topics to be discussed during the interview. Each interview began with verbal informed consent, during which informants were told about the potential benefits and risks of participating in the study and their rights as participants; the IRB approved the use of verbal informed consent given that the interviews were conducted via telephone. During the consent process, clinic staff were informed that participation was voluntary and that they could withdraw from the study at any time. Field notes, written during and after interviews, were included in the project files. Overall, 90 interviews were conducted across all 15 clinics, with an average of six key informants per clinic (range 2 to 9). The average duration of the interviews was 34 minutes (range 15 to 60). Table 3 provides details about informants and clinic groupings by number of strategies. All interviews were recorded and subsequently transcribed verbatim, yielding 658 pages of transcribed text. Data saturation was reached when the evaluation team determined that new interviews were not yielding new information.

Table 3 Professional roles of key informants and number of implementation strategies adopted by clinics

Quantitative data consisted of a count of the number of implementation strategies used by each clinic to implement the DA (range 3 to 11). These data were collected from the 15 clinics as part of the DA implementation process. Prior to initiating DA use at each clinic (6 months following the start of the study), two members of the research team (JS and LH) met virtually with the implementation team at each clinic. The clinic implementation teams included the site principal investigator (a rheumatologist), clinic champion, nurses, and office staff. The purpose of these virtual meetings was to summarize the perceived barriers to implementing the DA identified during the formative interviews described above [34] and to select implementation strategies that the implementation team believed were feasible in their clinic and would be effective at facilitating implementation of the DA. The study team used the ERIC to identify a candidate set of implementation strategies for each clinic based on its specific barriers. As noted earlier (Table 2), all clinics received the standardized strategies and were free to add customized strategies; a higher count therefore reflected the addition of more customized strategies. We recorded the implementation team’s initial selection of strategies. Approximately 3 months after initiating the DA at each clinic, we asked the clinic champion to verify these strategies, adding ones that were subsequently adopted and removing ones that were never used. We used this second, verified list in our quantitative analysis.

Analytic strategy

Qualitative data analysis, consisting of coding, synthesis, and memo writing, utilized a team of four investigators (LH, AK, RK, AH) to mitigate bias and thereby provide greater confidence in the coding and analysis process. Transcribed interviews were coded using NVivo 12 Plus (QSR International). Our coding proceeded through two stages and combined deductive and inductive approaches. As part of the larger evaluation, the four investigators independently coded three transcripts from the same key informants based on the CFIR constructs/subconstructs, which were subsequently mapped to the CFIR domains (e.g., relative advantage mapped to intervention characteristics). Kappa statistics were used to assess inter-rater agreement (IRA), with estimates for all five major domains exceeding 0.830. The investigators subsequently met to discuss the source of any remaining coding differences, and the lead evaluator (LH) regularly conducted peer debriefing with other members of the research team to minimize potential bias and assumptive coding. Once the differences were reconciled, the 15 clinics were divided among the four team members (range 2–5 clinics per team member) for applying the codes (deductive phase). Team members then identified emergent themes within the CFIR domains (inductive phase). These themes were summarized in a written memo for each domain that described key themes and how they were expected to affect implementation of the DA. During kick-off meetings with stakeholders, these memos were shared and discussed to verify accuracy [34].
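As an illustration of the inter-rater agreement statistic reported above, the following Python sketch computes Cohen’s kappa for two coders’ domain-level code assignments. The coder labels and excerpts are hypothetical, for illustration only; they are not data from this study.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical codes."""
    n = len(r1)
    # Observed proportion of excerpts on which the two coders agree
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement expected from each coder's marginal code frequencies
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum((c1[k] / n) * (c2[k] / n) for k in c1)
    return (observed - expected) / (1 - expected)

# Hypothetical CFIR domain codes assigned by two coders to six excerpts
coder_a = ["inner", "inner", "process", "outer", "inner", "process"]
coder_b = ["inner", "inner", "process", "inner", "inner", "process"]
print(round(cohen_kappa(coder_a, coder_b), 3))  # prints 0.7
```

Kappa corrects raw agreement for the agreement expected by chance given each coder’s marginal code frequencies; domain-level estimates above 0.80, as reported here, are conventionally regarded as strong agreement.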

In the next analytic step, clinics were grouped into three categories: low, moderate, and high numbers of implementation strategies. The rationale for this grouping was to explore whether clinics that chose few strategies (essentially only the standard ones) differed from clinics that added many clinic- or patient-targeted strategies. Clinics were assigned to these groups using classical multidimensional scaling (MDS). MDS is an exploratory data reduction technique (similar to factor analysis, but it reduces cases rather than variables) used to obtain quantitative estimates of similarity among groups of items. Scree plots were used to assess how many dimensions were appropriate, which in our case indicated three dimensions. Given that the input data reflected the number of implementation strategies utilized by the clinics, we labeled these groups: (1) high number of implementation strategies group (HNIS; 10–11 total strategies); (2) moderate number of implementation strategies group (MNIS; 7–9 total strategies); and (3) low number of implementation strategies group (LNIS; 4–6 total strategies).
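The classical MDS step can be sketched as follows. This is a minimal illustration under assumed strategy counts (the actual clinic-level counts are not reproduced here), using the standard double-centering of the squared distance matrix followed by eigendecomposition; the three-way cut points are likewise illustrative.

```python
import numpy as np

# Hypothetical strategy counts for 15 clinics (illustration only)
counts = np.array([4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11], float)

# Pairwise squared Euclidean distances between clinics
d2 = (counts[:, None] - counts[None, :]) ** 2

# Classical MDS: double-center the squared distance matrix,
# then eigendecompose to recover coordinates
n = len(counts)
J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
B = -0.5 * J @ d2 @ J
eigvals, eigvecs = np.linalg.eigh(B)           # ascending eigenvalues
order = np.argsort(eigvals)[::-1]              # sort descending
coords = eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

# With a single underlying variable, the first MDS dimension recovers the
# centered counts (up to sign); a three-way cut then yields the groups
groups = np.digitize(counts, bins=[6.5, 9.5])  # 0=LNIS, 1=MNIS, 2=HNIS
```

In the actual analysis, the scree plot of eigenvalues guided the choice of three dimensions; the sketch shows only the mechanics of deriving MDS coordinates and assigning clinics to the three strategy-count groups.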

In the final analytic step, we synthesized the themes within each of these strategy groups, wrote three memos (one for each group), and finally compared and contrasted the themes across the groups. Our objective in the final step was to identify distinguishing contextual factors between the three groups with different numbers of implementation strategies.

Results

Since the purpose of this analysis was to understand what factors might prompt organizations to choose different numbers and types of implementation strategies, the results reported below focus on the key differences between the three groups. Most of the distinguishing factors we identified related to the Inner Setting and Process Domains. While not discussed in detail below, common contextual determinants are listed in Table 4.

Table 4 Overview of CFIR constructs linked to common themes across groups

Inner setting domain

Structural Characteristics & Compatibility

It is notable that most clinics in the MNIS and LNIS groups were members of larger organizations with a more formal, centralized structure and sharply drawn functional and professional delineations, resulting in relatively little interaction between physicians and nurses. Such separation was seen as a challenge to coordinating efforts to identify suitable patients for viewing the DA. Moreover, some members did not strongly identify with the clinic because they were technically employed and managed by the larger organization.

“The nurses that we have are not specific to our clinic. They will see pulmonary patients, renal patients and be rooming those patients at the same time. It is just a function of how their work is set up that we just do not have a ton of interaction with them.” (Physician, MNIS).

“…the physicians have very little say in how the clinic runs. The nurses do not report to us. We do not hire the nurses. We do not hire the support staff. We have no say in the hiring or firing of those individuals.” (Physician, LNIS).

“It takes a while to get up to change… kind of give you a sense of the complexity: the front desk people who check in our patients there, they report directly to one supervisor. Once [patients] checked in, they go back to the hallway to be processed and triaged by our nursing MA; those people report to a different supervisor. So, there’s a lot of complexity.” (Physician, LNIS).

Notably, these differences in Structural Characteristics and Compatibility presented problems for the integration of the DA with existing organizational processes, which ultimately affected how the clinics approached implementing the DA. This issue is described in more detail in the Process Domain (see below).

Networks and communications

Groups also differed in how they communicated with and involved clinic personnel in decisions. Specifically, HNIS clinics were much more inclusive and proactive in communicating with the entire staff about the purpose of, and required actions for, projects.

“Whenever anything new is being implemented, we always have a meeting. We have a meeting every week. So, everything that is going on in the office or something new that is coming up is said in that meeting. If something that is happening right then, our supervisor will come right away and let us know what we need to do. We know the events for the month” (Practice office associate, HNIS).

“I think [physician] is very good with meeting with the research team. He meets with us at least two to three times weekly, we talk about what we’ve been doing during the week, he presents us with policy changes, studies, ideas. So, as far as communication, it’s usually done in person, and it’s usually done by [physician] himself.” (Clinical research coordinator, HNIS).

“We hold a monthly clinic meeting, if it’s something that they need to be aware beforehand, then I send out an email and usually float around clinic and make sure everybody is aware of any changes.” (Clinic manager, MNIS).

In contrast, interviewees from clinics in the LNIS group believed that communication was patchier and more inconsistent across different members of the clinic, especially front-line administrative staff.

“We’re not really involved in any of that stuff. I know my doctor does research, but I have no idea what he does or where he goes.” (Medical assistant, LNIS).

Culture

The groups of clinics also differed with respect to their culture. Clinics in the HNIS group reported having a more personal and dynamic organizational culture, characterized by teamwork and employee collaboration as critical features of the work environment. In contrast, the LNIS clinics had more formal, hierarchical cultures, with different departments/specialists working independently. Clinics in the MNIS group did not have a consistent, dominant culture type.

“…Everyone is very willing to help, even if you perform work that is not what the provider intended for it to be. Everything is presented in a way that is constructive and not demoralizing” (Clinical research coordinator, HNIS).

“We do have rules and things that are in place for safety, but we are a big family and it’s an open-door policy that anybody can come in and ask for anything or give suggestions. So, I would say it’s a little bit of both” (Triage clinic lead, MNIS).

“There is a very regimented instruction as to how the clinic is operationalized for a lot of different reasons, not just because of specialty services, but also how many physicians are seeing patients at any given day, what is the volume of the specialty areas…” (Research director, LNIS).

Available resources

Groups also differed with respect to their available resources. Interviewees from HNIS clinics did not consistently report resource deficiencies as a barrier, while interviewees from the MNIS and LNIS groups felt they lacked resources to implement the DA. Notably, however, the types of resources differed between these two groups. Interviewees from MNIS clinics felt they lacked the human resources to reliably implement the DA, mostly due to turnover.

“There is a motivation to do it. But there are time constraints, and we are also very short staffed right now in terms of physicians. So, that is limiting how much we are able to do.” (Physician, MNIS).

“That would be my biggest thing that would impede it. The other thing would be that because we do not consistently always have the same two people in the clinic, because we have people who float through, that might probably be a little bit of a barrier too.” (Registered nurse, MNIS).

In contrast, interviewees from LNIS clinics also described a lack of human resources, but in this case (as noted above) because those resources were not under the clinic’s control. In addition, clinics in the LNIS group felt they lacked the physical space that would ideally be available to implement the DA.

“I am removed from the clinic in a sort of sense that I’m not actually a clinic employee. I’m an employee of the university. As I’m placed under the University…. I don’t directly answer to the clinic.” (Research director, LNIS).

“The only problem I am concerned about is at any given time there can be three to four rheumatologists and there is a shared space for five clinics all together. So, we don’t have dedicated rooms just for Rheumatology, it is shared. I am concerned if it is up 20 minutes and the staff needs to get in another patient, am I going to be able say “hey can I borrow a room?” and I don’t want a patient to feel rushed, but space is kind of a premium in the clinic.” (Director of the clinic/physician, LNIS).

Process domain

Planning

All groups discussed the importance of engaging and educating clinical staff from the start to build buy-in and plan for DA implementation. However, MNIS and LNIS clinics placed a greater emphasis on the need for planning, especially for opportunities to adapt the implementation process, due to overlapping priorities and a focus on generating income.

“Just to make sure that everybody is aware and trained on it. So, anybody working with [physician], they should understand that we need a time for the patient to fill this out. We all should be on the same page because that’s always a big thing, we might be told something at the front desk that maybe the nurses aren’t aware of. So, just making sure everybody understands the process, each other’s roles, and understands how it’s going to help the patient. That would be the biggest thing for me.” (Dynamic scheduler service rep, MNIS).

“I would like it to be very clear in exactly what needs to be done and how it’s going to be done. I don’t want to be doing it blindly, because we would probably be the one to initiate it with the patient since we are the face they see first. So, if we know what we’re doing, then we’re good.” (Medical assistant, LNIS).

Executing (intersection with inner setting: Structural Characteristics & Compatibility)

Clinics in the LNIS group were more likely to report several inner setting challenges to implementing the DA. In particular, clinics in this group were concerned about the logistics of implementation, including their ability to pre-identify eligible patients and schedule them to view the DA. These concerns were especially acute where clinics had a centralized scheduling system and a corresponding lack of dedicated front office staff, which would result in additional work for other staff and/or more substantial work process redesign.

“So, regarding returning patients, the front desk isn’t even going to know, ‘oh, this is returning lupus patient’, just going through the check-in process. So, I guess that would fall more to the nurse and the doctor as far as like bringing it up on the front end.” (Administrative assistant, LNIS).

“It’s a very complex administrative structure. It takes a while to change …and to give you a sense of the complexity, the front desk people who check in our patients, they report directly to one supervisor. So, there’s a lot of complexity there to get things done.” (Chief of rheumatology, LNIS).

Discussion

The purpose of this study was to empirically assess whether different clinic contextual factors may drive different numbers of implementation strategies when implementing an evidence-based DA for patients with lupus. Using the CFIR, we found that factors pertaining to the Inner Setting and Process Domains were most likely to distinguish between groups that adopted different numbers of implementation strategies. In contrast, Intervention Characteristics (i.e., the DA), Individual Characteristics, and the Outer Setting did not differentiate between the groups. In some ways, this pattern of findings is not surprising. Clinics arguably differed most with respect to their inner settings, with a diverse range of sizes, cultures, and decision-making authority, all of which can play a significant role in successful implementation [35, 36]. All clinics in the LNIS group had structural challenges such as limited resources (i.e., physical space and staff), which created busier clinics with fewer opportunities for choosing a greater number of implementation strategies. In addition, clinics in the LNIS group had more hierarchical, formalized structures, and in contrast to the HNIS group, the LNIS and MNIS clinics were multispecialty clinics and members of academic medical centers. Other research has found that clinic environments with centralized authority relationships, like those found in academic medical centers, can impose barriers to engaging in learning behaviors and change efforts [37].

It is also notable that the number of perceived barriers in a clinic was negatively associated with the number of implementation strategies. For example, all clinics from the LNIS group selected only one clinic-targeted strategy (quarterly newsletters), and only one clinic from this group selected a patient-targeted strategy (a clinic poster about the DA). In contrast, nearly all clinics from the HNIS group selected all available clinic- and patient-targeted strategies. One explanation for this pattern is that the choice of implementation strategies is limited by resource availability. Indeed, our analysis revealed differences between the groups with respect to resources, especially human resources. Given that the tailored implementation strategies were primarily delivered by clinic staff, it is conceivable that such deficiencies limited clinics’ ability or willingness to take on more strategies. To the extent that more (or more customized) strategies are effective at promoting implementation of the DA, such differences may result in patchy, differential implementation patterns across clinics that undermine the DA’s effectiveness at promoting greater shared decision-making among lupus patients. Future research, however, is needed to ascertain whether more (and more customized) implementation strategies are, in fact, more effective at promoting implementation. It is conceivable, for example, that too many strategies create confusion among clinic members or patients and undermine their collective effectiveness. Likewise, some strategies may compete with each other for resources and attention, suggesting that certain combinations of strategies may be most effective at supporting DA implementation.

While not the primary focus of the analysis, our findings with respect to common themes of contextual determinants are supported in the existing literature [38]. In general, all clinics had positive views of the DA, indicating that it would be a useful tool for patients and a good source of information. Another common facilitator was a general consensus among respondents that clinics were supportive of any initiative or program that was perceived to be beneficial for patients. Almost all clinics indicated time as the main challenge to using the DA, and in doing so, highlighted the DA’s potential for disrupting clinic workflow and adding to the workload of the clinic staff. Consequently, nearly all respondents noted the need to get buy-in at all levels and make sure everyone understood the purpose and intended use of the DA. Likewise, because the clinics were characterized as busy settings with multiple competing demands and priorities, it was important to proactively plan for DA use, for example, by pre-identifying eligible patients (e.g., lupus patients scheduled for the day or week) and providing options of delivering the DA before the appointments (e.g., via website at home or iPad in clinic waiting room). These findings highlight some potential implications for practitioners, which are explored below after discussing the study limitations.

Theoretically, one would expect clinics to select strategies through careful planning to ensure that the strategies effectively address implementation barriers and facilitators. At the same time, there is value in incremental implementation with ongoing evaluation. The literature shows that implementation strategies generally change over time, adapting to the context [28, 39]. Given the formative stage of our evaluation and the cross-sectional nature of our analysis, it is imperative to receive feedback from staff and make modifications to the implementation plan.

Furthermore, as noted earlier, the factors that differentiated the groups fell largely within the Inner Setting and Process Domains. These are the CFIR domains that encompass work-environment factors and implementation processes. This finding suggests that strategies targeting the more malleable aspects of an organization, and addressing barriers by improving the fit between the DA and its context, might have a greater impact. For example, clinics may be able to change the implementation climate or how patients flow through the clinic and gain access to the DA. In contrast, barriers in the Outer Setting may be more intractable. Likewise, the DA itself, given how it is constructed and delivered, offers few options for change, and even if clinics could change the DA, doing so could potentially undermine its effectiveness. Thus, clinic leaders might consider creating an implementation climate in which employees regard innovation use as a top priority rather than as a distraction from, or obstacle to, their “real” work.

The findings reported above and the implications discussed next should be interpreted in light of several limitations. First, the study included only 15 clinics, most of which were located within academic medical centers, each representing unique organizational and community contexts and limiting generalization to other clinics. Future work could address this limitation by including more clinics and by addressing questions about how to disseminate the DA to new settings. Second, it is possible that responses to our interview questions were biased, with key informants wanting their clinic to appear in a positive light. This can especially be the case in organizational research like ours, where key informants may feel added pressure to respond positively due to the employment relationship [40]. We attempted to mitigate this issue by capturing a broad range of perspectives from clinical personnel across different roles and contexts. Likewise, there is a possibility of selection bias (e.g., participating clinics may have been more motivated to change and implement the DA due to their contractual relationship for the study duration). To mitigate this issue, we recruited clinics with varied characteristics from a wide range of geographic areas. Nevertheless, future research could build on this work and address this shortcoming by conducting site visits to observe use of the DA in situ. In addition, it is conceivable that our analytic approach introduced biases, given the subjective nature of coding and developing themes. While we cannot eliminate these biases, our evaluation adopted several steps to mitigate them: (1) including multiple respondents from each site; (2) involving multiple investigators in coding and memo writing; and (3) including member checks, whereby a summary of findings was provided to each clinic for feedback and correction.

Conclusion

Findings show that, despite recognition of the value of customizing implementation strategies to the contexts in which they are applied, strategies are too often chosen in a manner that fails to adequately reflect the diverse settings that may present unique implementation determinants. Our findings also highlight the importance of the inner context, both in terms of structural characteristics and existing work processes, as a driver of why organizations select different numbers and types of implementation strategies.