Introduction

The field of implementation science has grown exponentially in recent years, particularly in healthcare and human service delivery settings (Beidas et al., 2022). Implementation scientists have generated and applied numerous theories, models, and frameworks (TMFs) to guide investigations into determinants (e.g., barriers or facilitators) of a practice change, the process of implementation, evaluation of implementation efforts, sustainability of a change, and behavior change principles (Nilsen, 2015). Similarly, implementation scientists have created common nomenclatures and definitions for selecting and reporting implementation strategies (Powell et al., 2015; Proctor et al., 2013; Waltz et al., 2019) and outcomes (Lengnick-Hall et al., 2022; Proctor et al., 2011) to continue to build collective knowledge on the operationalization and effectiveness of implementation efforts, as well as their sustainability. Despite these significant gains in establishing the scientific field of implementation science, use and application of this knowledge among healthcare practitioners seeking to implement a practice change remains fragmented (Westerlund et al., 2019), further perpetuating the 13–17 year gap between when research discoveries are made and when they are used in real-world practice. Accordingly, the emerging disconnection between implementation science and implementation practice has been referred to as a ‘secondary gap’ (Westerlund et al., 2019).

There are several early efforts to address this secondary gap, particularly in healthcare settings (Metz et al., 2021). The secondary gap refers to the “paradoxical gap between scientific knowledge concerning implementation and the actual real-life implementation and use of this knowledge in healthcare practice” (McNett et al., 2019; Westerlund et al., 2019). Efforts to address the secondary gap include examples of how to use TMFs to guide implementation efforts (Moullin et al., 2020; Tucker et al., 2021), development of roles such as implementation facilitators and implementation support practitioners (Albers et al., 2020; Metz et al., 2022), as well as core competencies for these roles (Moore, 2023). In addition, there are education and training programs to teach practitioners how to apply principles of implementation science when instituting a practice change in their local settings (Hunter et al., 2020; McNett et al., 2022; Tucker et al., 2021).

In our experience training and consulting with healthcare teams on the application of implementation science principles to guide local implementation efforts, we consistently receive feedback about challenges in applying implementation science strategy and outcome language to planned practice changes. For example, the implementation strategy definitions offered in common taxonomies tend to be general and abstract (so they remain applicable to a wide variety of settings), which presents a challenge for practitioners who might have difficulty operationalizing and detailing how a strategy will be used in their specific context, using their own terminology and related resources (Bunger et al., 2017).

To address these challenges, our team has generated supplemental tools to bridge the language between implementation science and implementation practice. These tools can be used in conjunction with implementation TMF and/or existing toolkits to provide clarity in terminology and consistency in aligning implementation efforts with knowledge generated from the field of implementation science. The purpose of this paper is to describe the development and intended use of these tools for healthcare practitioners seeking to use a science-informed approach to implementing a practice change. We conclude with a case example of how healthcare practitioners may choose to apply these tools and improve their implementation of evidence-based practices in real-world settings.

Development of Supplemental Tools

When providing guidance to healthcare teams seeking to implement a practice change, our team recommends use of a determinants TMF to provide an early organizational assessment of the setting for implementation, followed by a process or evaluation TMF to establish an implementation and evaluation plan (Tucker et al., 2021). This approach begins with an assessment of the setting for implementation, as implementation culture, climate, and organizational readiness can significantly influence the success of implementation efforts (Jacobs et al., 2014). This can be done using a determinants TMF, such as the Consolidated Framework for Implementation Research (CFIR) (Damschroder et al., 2009; Waltz et al., 2019) or the integrated-Promoting Action on Research Implementation in Health Services (i-PARIHS) framework (Harvey & Kitson, 2016; Hunter et al., 2023), as both have accompanying tools to formally evaluate facilitators and barriers for implementation (Hunter et al., 2023; Waltz et al., 2019). There are a number of additional tools and scales that can be used for this assessment, such as the Implementation Climate Scale (ICS) (Ehrhart et al., 2014), the Organizational Readiness to Change Assessment (ORCA) (Helfrich et al., 2009), the Organizational Culture and Readiness Scale for System-Wide Integration of Evidence-based Practice (Melnyk et al., 2022), and several others (Miake-Lye et al., 2020).

The second step when working with healthcare teams to implement a practice change is to select a process TMF to apply implementation science knowledge and develop a structured and systematic plan for implementation (Tucker et al., 2021). In our experience, this is the stage where practitioners experience difficulty interpreting the scientific language of implementation. Although many process TMFs (e.g., the Fuld Institute Implementation for Sustainability Toolkit, the Iowa EBP Model & Implementation Framework) contain step-by-step toolkits to guide development of a science-informed plan for implementation (Buckwalter et al., 2017; Cullen et al., 2022; McNett et al., 2023), practitioners often struggle with understanding the terminology of implementation strategies, determining which strategies to select, and identifying how to select and measure relevant implementation outcomes. As such, our team developed two tools – the Implementation Strategies Supplemental Tool (ISST) (Online Resource: Table 1) and the Implementation Outcomes Supplemental Tool (IOST) (Table 2) – that can be used during this phase of the implementation planning process to bridge the language between science and practice. The tools can be used as a supplement to multiple process TMFs, particularly those that incorporate a phased approach to implementation. Below, we describe our process of developing the ISST and IOST, pilot testing each tool with practitioner cohorts, and iteratively refining both tools according to cohort input and recommendations for improvement.

Implementation Strategies Supplemental Tool (ISST)

The Implementation Strategies Supplemental Tool (ISST) (Online Resource: Table 1) was originally developed to be used in conjunction with either the Fuld Institute Implementation for Sustainability Toolkit (i.e., the Fuld Toolkit) or the Iowa Implementation for Sustainability Framework (i.e., the Iowa Framework) (Cullen et al., 2022; McNett et al., 2023). Both build on the implementation science literature and incorporate a phased approach to implementation, with suggested implementation strategies listed in each phase that practitioners can select for use in their implementation efforts. The Fuld Toolkit simply lists suggested implementation science strategies (McNett et al., 2023), while the Iowa Framework lists suggested strategies and provides a comprehensive explanation, procedure, and example for selected strategies (Cullen et al., 2022). Despite the relative ease of use of these two tools, it has been our experience that healthcare practitioners still struggle to understand what specific strategies refer to and how to select strategies to address organizational facilitators and barriers when building their implementation plan. As such, the ISST was created by our team using the Expert Recommendations for Implementing Change (ERIC) strategies (Powell et al., 2015), Cochrane Effective Practice and Organisation of Care (EPOC) strategies (Mazza et al., 2013), and Iowa Framework implementation strategies and associated constructs (Cullen et al., 2022).

We developed the ISST initially to augment either the Fuld Toolkit (McNett et al., 2023) or the Iowa Framework (Cullen et al., 2022), and intentionally made the tool general enough to be used in conjunction with any process TMF that uses a phased approach for implementation and the selection and tailoring of specific implementation strategies. The ISST was created to efficiently aid in the selection of implementation strategies based on categories of identified barriers (Li et al., 2018; McNett et al., 2022) and to align those strategies with the phase of the evidence-based change initiative. Given our strong healthcare focus, and specifically hospital care, we adopted a categorization scheme from a systematic review of organizational contextual factors that influence evidence-based practices across healthcare settings (Li et al., 2018). The categories proposed by Li et al. (2018) are inter-related, with leadership influence connecting to all other categories, which include: organizational readiness/culture; human and fiscal resources; mentors/facilitators/champions; communication (collaboration/teamwork) and networks; education and resources; and evaluation, monitoring, and feedback. We then incorporated the ERIC strategies and other applicable published strategies into each category (Garner et al., 2012, 2018; Leeman et al., 2017; Powell et al., 2015; Waltz et al., 2019). This created a pragmatic way to identify categories that may influence an implementation initiative, and then select specific strategies within each category to build into an implementation plan. Lastly, we aligned these strategy terms and definitions with the specific stages of implementation proposed in the Fuld Toolkit (McNett et al., 2023) and Iowa Framework (Cullen et al., 2022).

The ISST was created by two experts in implementation science who routinely provide coaching for clinical teams seeking to implement a practice change. To establish preliminary face validity, two additional experts in implementation science and implementation practice reviewed the tool and established agreement on the alignment of the selected strategies within the suggested categories and phases. The ISST was then piloted by three cohorts (n = 5, n = 17, and n = 35) of healthcare teams being trained in implementation science and practice, which identified areas for improvement and clarification. Members of each cohort included registered nurses, quality improvement specialists, clinical educators, advanced practice providers, and healthcare clinical leaders involved with implementing practice changes in their settings. Each cohort was introduced to the ISST while receiving education and training on selecting and tailoring implementation strategies for a specific practice change they were tasked with implementing. Members of each cohort were asked for feedback on perceptions and usability of the tool when used to aid in their implementation planning.

Overwhelmingly, team members reported the tool as helpful for understanding and interpreting the scientific terminology of implementation strategies. Members of each cohort explained that the categorization of strategies made selection easier, as they could target specific areas identified as barriers in their organizational assessment (e.g., education, leadership) and easily select strategies from that category, rather than scrolling through a long list of options with no categorization. Finally, teams requested guidance on the phase of implementation in which certain strategies should be performed. For example, we paired the ISST with the Fuld Toolkit, which has four phases of implementation, and teams asked whether certain strategies should be performed earlier in the implementation effort (i.e., phase 1) or in the later stages of implementation (i.e., phase 4). As such, we reviewed recommendations from the literature regarding timing of strategies (Aarons et al., 2011; McNett et al., 2022; Powell et al., 2012; Wong et al., 2022) and added a new column to the ISST to indicate recommended timeframes for initiating specific strategies. This column included the specific phase listed in the Fuld Toolkit or Iowa Framework, as both utilize a four-phase approach to implementation. However, generic language on timing was also added (i.e., early, mid, late) so the tool could be paired with other implementation process models that may not have specific phases delineated. This revised version of the ISST is currently being used in ongoing education and training among healthcare teams nationally who are leading and participating in evidence-based practice initiatives (Gallagher-Ford et al., 2020; Tucker et al., 2022).

Implementation Outcomes Supplemental Tool (IOST)

The second resource created for teams seeking to use a science-informed approach to implementation was an outcomes table (Table 2). The outcomes table lists implementation outcomes as defined by Proctor et al. (2011). It has been our experience that healthcare teams are familiar with common patient/clinical and system-level outcomes, but do not consistently consider incorporating implementation outcomes into their evidence-based practice initiatives. Implementation outcomes are defined as the ‘effects of deliberate and purposive actions to implement new treatments, practices, and services’ (Proctor et al., 2011). In essence, these outcomes measure practitioner perceptions about the practice change, and how well and how often practitioners are actually engaging in the practice change. Implementation outcomes are an essential component of any practice change, as they provide important information on the degree to which the change is being implemented and sustained over time by frontline staff. Implementation outcomes include acceptability, adoption, appropriateness, feasibility, fidelity, cost, penetration, and sustainability (Proctor et al., 2011).

While formal definitions of these measures have been proposed and widely used by implementation scientists (Lengnick-Hall et al., 2022; Proctor et al., 2023), they are not routinely integrated by clinical teams seeking to implement a practice change. Rather, healthcare practitioners tend to focus solely on reportable quality indicators, clinical outcomes, and/or health system outcomes (McNett et al., 2020), or on quality of care and processes of care, defined as what a provider does to maintain or improve health (Agency for Healthcare Research and Quality, 2015).

Tool development for the IOST was guided by existing published resources and standard nomenclature in the implementation science literature (Khadjesari et al., 2020; Mettert et al., 2020). Specifically, the IOST was created using implementation outcomes identified by Proctor et al. (2011, 2023), particularly those outcomes commonly used in implementation research within healthcare settings (Khadjesari et al., 2020; McNett et al., 2020; Mettert et al., 2020). While implementation scientists are likely familiar with these outcomes and methods for their evaluation, our experiences with healthcare practitioners indicated many were not familiar with these terms, methods for measurement, or how the outcomes could be used to evaluate success of an implementation initiative over time. As such, our tool included the formal definitions of outcomes (Proctor et al., 2011, 2023), alternative terms that are more familiar to frontline practitioners (Khadjesari et al., 2020), and proposed time points and methods for measurement of these indicators when planning and evaluating an implementation initiative (Opperman et al., 2016; Tucker, 2014; Weiner et al., 2017).

Similar to the strategies tool, the outcomes tool was piloted with the same three cohorts of healthcare teams seeking to implement a practice change (Gallagher-Ford et al., 2020; Tucker et al., 2022). Initial feedback on use of the outcomes tool was positive, as it provided concrete measures to evaluate incremental and ongoing success of the proposed practice change. The original iteration of the tool included a column that referenced underlying theoretical frameworks and terminology associated with measurement of each outcome. However, feedback from healthcare teams across the three cohorts indicated this column was not useful for their implementation efforts, and so it was removed. The resulting IOST is now routinely used by healthcare teams as a supplement when creating a plan for implementation and evaluation of EBP initiatives (Gallagher-Ford et al., 2020; Tucker et al., 2022).

Case Example

While both the ISST and IOST were created to be used in conjunction with either the Fuld Toolkit (McNett et al., 2023) or Iowa Framework (Cullen et al., 2022), we have included an example to demonstrate how the tools can also be used with the Implementation Research Logic Model (Smith et al., 2020) to develop a plan for implementation that applies findings and principles from the field of implementation science. The following case example highlights how implementation practitioners can use the tools to guide their implementation efforts.

A healthcare team composed of physicians, registered nurses, pharmacists, a social worker, and an occupational therapist was interested in implementing an evidence-based falls risk screening tool among hospitalized stroke patients participating in an inpatient rehabilitation program. First, the team used the CFIR framework and its technical assistance resources to identify facilitators and barriers to the proposed practice change (Damschroder et al., 2009). Findings indicated several barriers to the practice change related to the strength of the evidence supporting the change, program costs, leadership engagement, resources needed to implement the change, and making the program a priority. In addition, the team identified barriers related to knowledge and beliefs, lack of local champions, and engagement among frontline staff. These barriers are displayed in the logic model (Fig. 1).

Fig. 1 Implementation research logic model example (Smith et al., 2020)

Second, the team selected a process framework, the Implementation Research Logic Model, to develop a plan for implementation and evaluation of this practice change. Next, the team reviewed the primary barriers identified through the CFIR assessment and used the ISST to determine which categories of barriers were most prominent and which potential implementation strategies would be most beneficial. Based on this assessment, the primary barriers fell within the categories of leadership, human and fiscal resources, and education and training (Table 1). Within each category, the team then reviewed the suggested strategies and identified those that were relevant to this practice change and feasible for the team to implement. These strategies were added to the logic model and aligned with the identified barriers (Fig. 1).

Following this step, each selected category and strategy from the ISST could then be operationalized using the logic model to create the plan for implementation and evaluation. For example, the leadership strategies selected included “recruit, designate, and train for leadership,” which encouraged leaders to advocate for the falls screening tool and to model its use for expected users. The leadership strategy of “provide clinical supervision” was used to work with users and ensure the tool was completed, and the strategy of “provide ongoing consultation” assisted new users and those struggling with its application. Similarly, within the human and fiscal resources category, strategies included “centralize technical assistance” to bring the screening tool into the electronic health record; “stage implementation scale-up” to allow a pilot phase for addressing any unanticipated challenges; and use of an “implementation advisor,” who served as champion for the practice change, to ensure adherence to the implementation process and to assist others as needed. Finally, education and training strategies included “develop an implementation glossary” to define terms; “conduct educational meetings” to educate staff on the evidence supporting the change and influence their knowledge and beliefs; “distribute educational materials,” which included the new procedures and quick reference cards; and “use the train-the-trainer strategy” for new users joining the team.

The next step after selecting relevant implementation strategies was for the team to propose “implementation mechanisms” through which each strategy was hypothesized to be effective. These hypothesized mechanisms were rooted in empirical evidence and/or had theoretical underpinnings that helped justify how the implementation strategies would work (Lewis et al., 2022). For instance, educational strategies (e.g., conduct educational meetings; distribute educational materials) were expected to (a) increase practitioners’ “perceived relevance and motivation” for using screening tools and (b) improve “knowledge and skill acquisition” towards implementing the screening tools as they were originally intended. Moreover, strategies such as “stage implementation scale-up,” “train-the-trainer,” and “provide clinical supervision” were hypothesized to generate a strong implementation climate, thus leading to improved implementation of fall risk screening tools.

Lastly, the team reviewed the list of outcomes in the IOST (Table 2) to determine which would be applicable and feasible for this proposed practice change. From the IOST, the team prioritized the evaluation of penetration, fidelity, and cost. The team focused on penetration because they wanted to evaluate the degree of spread of the practice change among the provider team. By measuring penetration as the number of providers completing the falls screening tool divided by the number of providers who should be completing it, the team could evaluate the spread of the practice change in real time. These data were also used by the leaders and implementation advisor to identify further education needs and to support audit and feedback.
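To make this calculation concrete, a minimal worked example follows; the counts of 40 and 50 providers are hypothetical values chosen for illustration and are not drawn from the case itself:

$$\text{Penetration} = \frac{\text{providers completing the screening tool}}{\text{providers expected to complete it}} = \frac{40}{50} = 0.80$$

A penetration of 0.80 at a given time point would indicate that 80% of eligible providers were using the tool, a figure the team could track over time to monitor spread.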

The team chose fidelity as a second implementation outcome to evaluate the extent to which the team was implementing all components of the fall screening tool as intended. Fidelity was measured by members of the nursing team at weekly intervals through chart audits of tool completion. These nurses were members of the frontline staff and familiar with the routine procedures on the unit for screening and documentation of protocol elements. Thus, they were able to observe and review documentation of the number of screening tool components completed by various staff members relative to the number of components that should have been completed. These weekly reports provided important information on which aspects of the protocol had low fidelity and allowed the team to work with frontline staff in real time to address areas where clarification or additional training was needed to improve fidelity to the screening tool, again circling back to the implementation strategies.
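Expressed as a simple ratio per audited chart (assuming, purely for illustration, a screening tool with five required components of which four were documented):

$$\text{Fidelity} = \frac{\text{components completed as intended}}{\text{components required}} = \frac{4}{5} = 0.80$$

Averaging this ratio across the charts audited each week yields the unit-level fidelity trend the team reviewed.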

In addition, the team decided to evaluate project cost. They calculated a return on investment (ROI) using the procedures outlined by Tucker and by Opperman et al. (Opperman et al., 2016; Tucker, 2014). For this ROI, the team calculated the number of hours required for education and training of the team and multiplied those hours by the average hourly rate of the staff. The team also reviewed the prior 6 months of falls data and determined that 6 falls had occurred, costing the organization approximately $120,000 ($20,000 per fall) (Tucker, 2014). They determined that if they could reduce falls by even 2 during the implementation period, it could result in $40,000 in cost savings within the first 6 months of the project, which would offset the minimal resources required to educate and train the staff on the new tool.
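A minimal sketch of the arithmetic follows; the 100 training hours and $50 average hourly rate are assumed values for illustration, while the per-fall cost and projected savings come from the case above:

$$\text{Training cost} = 100 \text{ hours} \times \$50/\text{hour} = \$5{,}000$$

$$\text{Projected savings} = 2 \text{ falls averted} \times \$20{,}000/\text{fall} = \$40{,}000$$

$$\text{ROI} = \frac{\$40{,}000 - \$5{,}000}{\$5{,}000} = 7.0$$

Under these assumptions, each dollar invested in education and training would return seven dollars in avoided fall costs; the published procedures (Opperman et al., 2016; Tucker, 2014) describe additional cost elements a team may need to include.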

Discussion

Failure to bridge the gap between implementation science and implementation practice threatens to limit the value and contributions of the field, and further delays moving needed evidence into routine practice. To enhance the applicability of the principles and concepts from implementation science, our team developed two tools (the ISST and IOST), which translate established technical definitions of implementation strategies and outcomes into plain language summaries. These tools also offer guidance for selecting and sequencing strategies and for prioritizing the outcomes teams measure to monitor progress.

The ISST, IOST, and other future practical implementation tools have potential to be used by “implementation support practitioners,” an emerging professional role. As defined by Albers et al. (2021), implementation support practitioners work closely with administrators and practitioners to effectively bring clinical, educational, and therapeutic practices to communities, families, and patients. Metz et al. (2021) noted that one key focus of implementation support practitioners is to build organizational capacity to implement evidence into routine care, thereby enhancing the likelihood that capacity building efforts will lead to improved organizational outcomes (Bunger & Lengnick-Hall, 2019; McNett et al., 2021), such as increased adoption or sustainment of evidence-based practices. Accordingly, the resources presented in our current toolkit serve as foundational materials to help implementation support practitioners work with healthcare teams and organizations to build implementation capacity and apply the concepts, principles, and emerging best practices in the field. For example, prior work in the implementation practice field has strongly recommended that healthcare organizations identify data sources for internal, cyclical evaluations to monitor the success, including sustainability, of implementation efforts. Our ISST and IOST prompt practitioners to identify these data sources in preparation for such evaluations, which can quantify the value of their implementation activities (Vroom & Massey, 2022).

Based on our experience, the ISST and IOST might be most useful to practitioners when paired with didactic activities – such as the training we provide to practitioners who use our materials – as such training is essential for building implementation practice capacity (Juckett et al., 2022). As illustrated in the case example, healthcare teams seeking to implement a practice change should follow a science-informed approach to implementation. This includes first performing an organizational assessment using a determinants TMF (Proctor et al., 2011). The ISST can then be used to select strategies based on identified barriers that have been categorized as commonly occurring in healthcare settings (Li et al., 2018). Selected strategies from the ISST can be inserted into a process TMF to create a tailored implementation plan for that specific practice change and setting. Lastly, the IOST can be used to select and measure implementation outcomes specific to that initiative, and to identify time points and methods for evaluation. Notably, however, these materials should be augmented with opportunities for practitioners to consult with implementation experts and engage in practical application activities, as having foundational information as well as expertise to guide the process of implementation will increase the likelihood of implementation success and sustainability over time.

Despite the unique strengths of our tools, they are not without limitations. Although the ISST and IOST were developed through an iterative process and were vetted by healthcare teams and implementation experts, their effectiveness has not yet been empirically tested. In addition, the tools do not explicitly include considerations for equitable reach, particularly in evaluating implementation outcomes. Modifying the implementation outcome definitions to those proposed by Eslava-Schmalbach et al. (2019) could provide a mechanism for practitioners to gauge the degree to which their implementation efforts affect disadvantaged or minority groups within a target population. Future research on use of the tools could integrate these additional definitions and evaluate their feasibility and usability among healthcare teams. Future research should also include investigations into the effectiveness of the tools on implementation success and sustainability over time. Despite these limitations, initial pilot work suggests that these tools have been well-received by healthcare practitioners and are commonly requested by frontline clinicians who are leading implementation initiatives in real-world settings. Further, given that the fields of implementation science and practice are constantly evolving, we recognize that the strategies and outcomes described in the ISST and IOST will likely evolve over time.

Conclusion

Development of supplemental tools to bridge the language between the science and the practice of implementation can be one approach to avoid development of a secondary gap in research translation. The field of implementation science has made substantial contributions towards identifying optimal methods to increase the uptake of evidence into routine practice settings. It is essential that healthcare teams and others seeking to implement evidence-based practice changes into real-world settings apply this information to provide a science-informed approach to implementation and increase likelihood of implementation success and sustainability over time.