Background

Every year, millions of people worldwide experience a stroke [1, 2]. In 2016 alone, there were over 13 million new cases of stroke globally [3]. At elevated risk for stroke are persons who are 65 and older, practice unhealthy behaviors (smoking, poor diet, and physical inactivity), have metabolic risks (high blood pressure, high glucose, decreased kidney function, obesity, and high cholesterol), and belong to lower socioeconomic groups [1, 4, 5]. With the rapid growth of the older adult population, the number of stroke survivors is expected to rise dramatically in the coming years, contributing to an increased global disease burden [6,7,8,9]. Stroke is one of the leading causes of long-term disability worldwide, and stroke survivors often face extensive challenges that result in self-care dependency, mobility impairments, underemployment, and cognitive deficits [1, 10]. Frequently, stroke survivors are admitted to stroke rehabilitation settings, such as outpatient care centers, skilled nursing facilities, and home health agencies. Occupational therapy (OT) practitioners work with stroke survivors in these settings to address their physical, cognitive, and psychosocial challenges [10,11,12,13]. Considered allied health professionals, OT practitioners across the stroke rehabilitation continuum are expected to implement a person-centered care plan using evidence-based assessments and interventions intended to maximize stroke survivors’ independence in daily activities and routines (e.g., dressing, bathing, mobility). Furthermore, healthcare users (e.g., stroke survivors) expect practitioners to deliver evidence-based practice and provide the highest quality occupational therapy services.

The benefits of OT in stroke rehabilitation have been well documented [14]. For instance, evidence-based OT interventions can lead to improved upper extremity movement [15, 16], enhanced cognitive performance [17], and increased safety with mobility [18]. However, as with several allied health professions, OT practitioners can experience complex barriers when implementing evidence-based care in routine practice [19,20,21]. Specific to stroke rehabilitation, Juckett et al. [22] identified several barriers that limited OT practitioners’ use of evidence and categorized these barriers according to the Consolidated Framework for Implementation Research (CFIR) [23]. Notable barriers to evidence use were attributed to challenges adapting evidence-based programs and interventions to meet patients’ needs (e.g., adaptability), a lack of equipment and personnel (e.g., available resources), and insufficient internal communication systems (e.g., networks and communication). Although identifying these barriers is a necessary precursor to optimizing evidence implementation, Juckett et al. [22] also emphasized the urgent need for OT researchers and practitioners to identify implementation strategies that facilitate the use of evidence in stroke rehabilitation. Relatedly, Jones et al. [24] examined the literature regarding implementation strategies used in the rehabilitation professions: occupational therapy, physical therapy, and speech–language pathology. While they reported some encouraging findings, these strategies are difficult to replicate given the heterogeneity in how implementation strategies and outcomes were defined and the inconsistency with which implementation strategy selection was informed by implementation theories, models, and frameworks (TMFs) [24]. Just as it is critical to select implementation strategies based on known implementation barriers, the design of implementation studies should be guided by TMFs to optimize the generalizability of findings toward both implementation and patient outcomes [25].

Implementation strategies are broadly defined as methods to enhance the adoption, use, and sustainment of evidence-based interventions, programs, or innovations [26, 27]. Historically, the terminology and definitions used to describe implementation strategies have been inconsistent and lacked detail [28,29,30]. Over the past decade, however, these strategies have been compiled into taxonomies and frameworks that facilitate researchers’ and practitioners’ ability to conceptualize, apply, test, and describe implementation strategies utilized in research and practice. The Expert Recommendations for Implementing Change (ERIC) project [28] describes a taxonomy of 73 discrete implementation strategies that have been leveraged to optimize the use of evidence in routine care [29, 31]. Additionally, as part of the ERIC project, an expert panel examined the relationships among the discrete implementation strategies to identify themes and categorize strategies into clusters [29]. Table 1 depicts how discrete implementation strategies are organized in the following clusters: use evaluative and iterative strategies, provide interactive assistance, adapt and tailor to the context, develop stakeholder interrelationships, train and educate stakeholders, support clinicians, engage consumers, utilize financial strategies, and change infrastructure.

Table 1 Summary of implementation strategies utilized in terms of ERIC thematic clusters [29]

Discrete and combined implementation strategies may be considered effective if they lead to improvements in implementation outcomes. Proctor et al. [32] defined the following eight outcomes in their Implementation Outcomes Framework (IOF), often perceived as the “gold standard” outcomes in implementation research: acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration (e.g., reach), and sustainability. In other words, implementation outcomes are the effects of purposeful actions (e.g., strategies) designed to implement evidence-based or evidence-informed innovations and practices [32]. The ERIC taxonomy and IOF serve as examples of TMFs that provide a uniform language for characterizing implementation strategies and their associated implementation outcomes. These common nomenclatures help articulate explanations of implementation-related phenomena, leading to an enhanced understanding of the relationship between implementation strategies and implementation outcomes [33]. As such, fields that have recently adopted implementation science principles—such as occupational therapy—should make a concerted effort to frame their research methodologies using established implementation TMFs.

Although implementation research has seen significant progress in recent years, findings specific to the allied health professions (e.g., OT) are only beginning to emerge [24]. Implementation strategies such as educational meetings, audit and feedback techniques, and the use of clinical reminders hold promise for increasing the use of evidence by allied health professionals [24, 34]; however, there is little guidance on how these findings can be operationalized, particularly in stroke rehabilitation. This knowledge gap is particularly concerning given recent changes by the Centers for Medicare & Medicaid Services (CMS) to payment models that reimburse based on the value of services delivered. In other words, rehabilitation settings are reimbursed according to the quality of services implemented (as measured by improvements in patient outcomes) rather than the quantity of services provided. The increased attention to patient outcomes at the policy level (e.g., CMS) underscores the immediate need for OT practitioners to implement the highest quality interventions with patients, such as stroke survivors, to improve patient outcomes and ensure that rehabilitation services are adequately reimbursed [35, 36].

As OT practitioners aim to implement high-quality, evidence-based interventions for stroke survivors, the OT profession must have a clear understanding of the strategies that have been utilized to support the use of evidence and their reported outcomes. To do this, occupational therapy and rehabilitation researchers must articulate explanations of implementation strategies and outcomes using commonly known TMFs, such as the ERIC taxonomy and the IOF. The purpose of this review is to explore the breadth of current implementation research and identify potential gaps in how occupational therapy researchers articulate their implementation strategies and report implementation outcomes for reproducibility in other research and practice contexts. Accordingly, this scoping review addresses the following objectives:

1. Synthesize the types of implementation strategies—using the ERIC taxonomy—utilized in occupational therapy research to support the use of evidence-based interventions and assessments in stroke rehabilitation.

2. Synthesize the types of implementation outcomes—using the IOF—that have been measured to determine the effectiveness of implementation strategies in stroke rehabilitation.

3. Identify additional implementation theories, models, and frameworks that have guided occupational therapy research in stroke rehabilitation.

4. Describe the association between implementation strategies and implementation outcomes.

Methods

The scoping review methodology was guided by Arksey and O’Malley’s scoping review framework [37] and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) reporting recommendations [38]. The review team developed an initial study protocol (unregistered; available upon request) to address the review objectives and identify the breadth of literature examining implementation strategies and outcomes in stroke rehabilitation. The first author conducted preliminary searches to assess the available literature, allowing the team to revise the search strategy and search terms, consistent with the iterative nature of scoping reviews. A detailed description of the search strategy can be found in Table 5 in the Appendix.

Eligibility criteria

Studies were eligible for inclusion in the review if they (a) examined the implementation of interventions or assessments, (b) had a target population of adult (18 years and older) stroke survivors, (c) included occupational therapy practitioners, and (d) took place in a rehabilitation setting. Studies published in English between January 2000 and May 2020 were included: the earlier date reflects the occupational therapy profession’s call for immediate improvements in the use of evidence to inform practice at the turn of the millennium [39], and the latter marks when the authors began the bibliographic database search. The “rehabilitation setting” was defined as acute care hospitals and post-acute care home health agencies, skilled nursing facilities, long-term acute care hospitals, hospice, inpatient rehabilitation facilities and units, and outpatient centers. Studies were excluded if they (a) only reported on intervention effectiveness (not implementation strategy effectiveness), (b) assessed psychometrics, (c) were not available in English, (d) examined pediatric patients, (e) were published as a review or conceptual article, or (f) failed to include occupational therapy practitioners as study participants.

Information source and search strategy

The following five electronic databases were searched to identify relevant studies in the health and mental health fields: PubMed, CINAHL, Scopus, Google Scholar, and PsycINFO. Implementation Science and Implementation Science Communications were also hand searched, as they are the premier peer-reviewed journals in dissemination and implementation research. Given the diverse terminology used to describe implementation strategies in the stroke rehabilitation field, we developed an extensive list of search terms based on previous scoping reviews that have assessed the breadth of implementation research in rehabilitation. The most recent search was conducted in May 2020. Sample search term combinations included (“knowledge translation”[All Fields] OR “implement*”[All Fields]) AND “occupational therap*”[All Fields] AND (“stroke”[MeSH Terms] OR “stroke”) (see Additional file 1 for the complete terminology list and a database search sample). All studies identified through the search strategy were uploaded into Covidence for study selection.
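
Purely as an illustration, the sample Boolean combination above could be run against PubMed’s public E-utilities esearch endpoint. The query string, date parameters, and record cap below are assumptions for demonstration and do not reproduce the full search strategy documented in Additional file 1 and Appendix Table 5.

```python
import requests

# Illustrative Boolean query mirroring the sample term combination above;
# the full search strategy (Additional file 1) contains many more terms.
QUERY = (
    '("knowledge translation"[All Fields] OR "implement*"[All Fields]) '
    'AND "occupational therap*"[All Fields] '
    'AND ("stroke"[MeSH Terms] OR "stroke")'
)

# NCBI E-utilities esearch endpoint (public API).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": QUERY,
    "datetype": "pdat",        # filter on publication date
    "mindate": "2000/01/01",   # review window: January 2000 ...
    "maxdate": "2020/05/31",   # ... through May 2020
    "retmode": "json",
    "retmax": 200,             # arbitrary cap for this sketch
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Records found: {result['count']}")
print("First PMIDs:", result["idlist"][:10])
```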

Selection process

Beginning with the study title/abstract screening phase, the first and third authors (JEM and LAJ) applied the inclusion and exclusion criteria to all studies that were identified in the initial search (agreement probability = 0.893). When authors disagreed during title/abstract screening, the second author (JLP) decided on studies to advance to the full-text review phase. Similar to scoping review screening methods conducted in the implementation science field [40], all authors reviewed a random sample (15%) of the full-text articles in the full-text screening phase to decide on study inclusion and evaluate consistency in how each author applied the inclusion/exclusion criteria. The authors achieved 100% agreement and proceeded with screening each full-text article individually.
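
For context, the reported agreement probability is the proportion of screening decisions on which the two reviewers agreed. The sketch below shows how that proportion, along with a chance-corrected Cohen’s kappa, could be computed; the decision vectors are made up for illustration and are not the review’s screening data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical include (1) / exclude (0) decisions made independently by two
# screeners on the same titles/abstracts; not the review's actual data.
screener_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
screener_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]

# Simple proportion agreement, i.e., the reported "agreement probability".
agreement = sum(a == b for a, b in zip(screener_a, screener_b)) / len(screener_a)

# Chance-corrected agreement for comparison.
kappa = cohen_kappa_score(screener_a, screener_b)

print(f"Proportion agreement: {agreement:.3f}")  # 0.900 for this toy example
print(f"Cohen's kappa:        {kappa:.3f}")
```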

Data charting—extraction process

An adapted version of Arksey and O’Malley’s data charting form was created to extract variables of interest from each included study. In the data extraction phase, all authors extracted data from another random 15% of included studies to pilot test the charting form and confirm the final variables to be extracted. Authors met biweekly to share progress on independent data extraction and compare the details of data extracted across authors. Extracted variables captured study design, population, setting, guiding frameworks, and a description of the intervention or assessment being implemented; however, the review’s primary aim was to extract information related to implementation strategies and their associated implementation outcomes.
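
A minimal sketch of how one charting record might be structured is shown below; the field names are assumptions inferred from the variables listed above and do not reproduce the team’s actual charting form.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChartingRecord:
    """One row of an adapted Arksey and O'Malley charting form (illustrative only)."""
    study_id: str
    study_design: str                    # e.g., "pre-post", "process evaluation"
    population: str                      # e.g., "occupational therapists"
    setting: str                         # e.g., "inpatient rehabilitation facility"
    guiding_tmf: Optional[str]           # e.g., "Knowledge-to-Action Process framework"
    intervention_or_assessment: str      # what was being implemented
    strategies_as_reported: List[str] = field(default_factory=list)
    outcomes_as_reported: List[str] = field(default_factory=list)

# Example entry with placeholder values (not an actual included study).
record = ChartingRecord(
    study_id="Author2016",
    study_design="pre-post",
    population="occupational therapists",
    setting="inpatient rehabilitation facility",
    guiding_tmf=None,
    intervention_or_assessment="Canadian Occupational Performance Measure (COPM)",
    strategies_as_reported=["holding in-services with clinicians"],
    outcomes_as_reported=["adherence"],
)
```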

To do this, the team used a two-step process to extract data on implementation strategies and outcomes. In Step 1, team members charted the specific terminology used to describe strategies or outcomes in each study. In Step 2, the review team used a directed content analysis approach to map this charted terminology to the ERIC taxonomy [28] and the IOF [32]. For instance, an implementation strategy that authors initially described as “holding in-services with clinicians” was “translated” to “conducting educational meetings.” Likewise, implementation outcomes initially described as “adherence” were converted to “fidelity.” This translation process was guided by the descriptions of implementation strategies in the original 2015 ERIC project publication (and its ancillary material) and by the seminal 2011 IOF publication. The extracted and translated data were entered into Excel for Microsoft 365.
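
Conceptually, Step 2 is a lookup from study-specific wording to standardized ERIC and IOF labels. The sketch below illustrates that idea; only the in-services/educational meetings and adherence/fidelity pairs come from the text, and the remaining entries are hypothetical examples rather than the review’s actual crosswalk.

```python
# Illustrative crosswalk from terminology used in primary studies to
# standardized ERIC strategy names and IOF outcome names. Only the first
# entry in each dictionary comes from the examples in the text; the rest
# are assumptions for demonstration.
STRATEGY_CROSSWALK = {
    "holding in-services with clinicians": "conduct educational meetings",
    "chart audits with therapist feedback": "audit and provide feedback",
    "appointing an evidence champion": "identify and prepare champions",
}

OUTCOME_CROSSWALK = {
    "adherence": "fidelity",
    "uptake of the assessment": "adoption",
    "reach of the program": "penetration",
}

def translate(term: str, crosswalk: dict) -> str:
    """Return the standardized label, or flag the term for team discussion."""
    return crosswalk.get(term.lower().strip(), f"UNRESOLVED: {term}")

print(translate("Holding in-services with clinicians", STRATEGY_CROSSWALK))
# -> conduct educational meetings
print(translate("adherence", OUTCOME_CROSSWALK))
# -> fidelity
```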

Synthesis process

The authors followed Levac et al.’s [41] recommendations for advancing scoping methodology to synthesize data. One author (JEM) cleaned the data (e.g., spell check, cell formatting) to ensure that Excel accurately and adequately performed operations, calculations, and analyses (e.g., creating pivot tables, charts). As scoping reviews do not seek to aggregate findings from different studies or weigh evidence [37, 41], only descriptive analyses (e.g., frequencies, percentages) were conducted from the extracted data to report the characteristics of the included studies and thematic clusters. The descriptive data and results of the directed content analysis were organized into tables using themes to articulate the review’s findings that addressed the research objectives.
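
Because the synthesis was limited to frequencies and percentages, the Excel pivot-table step has a straightforward equivalent in code. The pandas sketch below uses fabricated placeholder rows, not the extracted data, to show how such a tabulation could be produced.

```python
import pandas as pd

# Placeholder long-format data: one row per (study, ERIC strategy) pair.
# These rows are fabricated for illustration; they are not the extracted data.
charted = pd.DataFrame(
    {
        "study_id": ["S01", "S01", "S02", "S02", "S03", "S03", "S03"],
        "eric_strategy": [
            "distribute educational materials",
            "audit and provide feedback",
            "distribute educational materials",
            "conduct ongoing training",
            "assess for readiness and identify barriers and facilitators",
            "conduct educational outreach visits",
            "distribute educational materials",
        ],
    }
)

n_studies = charted["study_id"].nunique()

# Frequency and percentage of studies using each strategy
# (equivalent to an Excel pivot table counting distinct studies).
summary = (
    charted.groupby("eric_strategy")["study_id"]
    .nunique()
    .rename("n_studies")
    .to_frame()
)
summary["pct_of_studies"] = (summary["n_studies"] / n_studies * 100).round(1)

print(summary.sort_values("n_studies", ascending=False))
```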

Results

The search yielded 1219 articles. After excluding duplicates, 868 titles and abstracts were reviewed for inclusion. Among those, 49 articles progressed to full-text review, and 26 met the criteria for data extraction, as shown in Fig. 1.

Fig. 1 PRISMA flow diagram [42] outlining the review’s selection process

Study characteristics

Table 2 describes the studies’ characteristics. The studies were published between 2005 and 2020, all within the last 10 years except one [43]. Studies were most commonly set in Australia (27%) and most commonly conducted in an inpatient rehabilitation healthcare setting (65%). While two studies targeted practitioners in any healthcare setting by delivering an education-related implementation strategy (e.g., conduct ongoing training) either at an offsite location [44] or in a nonphysical environment [45], none of the studies was conducted in a long-term acute care hospital (LTACH) or hospice setting. Most studies used a pre–post research design (50%), followed by process evaluation (14%). Studies most frequently used quantitative methods (69%), with similar utilization of qualitative (12%) and mixed-methods (19%) approaches. While the studies primarily implemented stroke-related interventions (92%), these categories were not mutually exclusive, as some studies implemented a combination of an intervention (e.g., TagTrainer), an assessment (e.g., the Canadian Occupational Performance Measure (COPM)), and/or clinical knowledge (e.g., regarding upper limb poststroke impairments).

Table 2 Study characteristics (N = 26)

Implementation strategies

The studies included in this review collectively utilized 48 of the 73 discrete strategies in the ERIC taxonomy. The number of discrete implementation strategies per study ranged from 1 to 21, with a median of four strategies per study. The two most commonly used implementation strategies, each applied in 42% of studies, were distribute educational materials [44, 46,47,48,49,50,51,52,53,54,55] and assess for readiness and identify barriers and facilitators [47,48,49, 52, 56,57,58,59,60,61,62]. The latter strategy implies two separate actions; however, only two studies [48, 49] both assessed readiness and identified barriers and facilitators. Other frequently used discrete implementation strategies included conduct educational outreach visits, conduct ongoing training, audit and provide feedback, and develop educational materials. Of all studies included in this review, 88% used at least one of these six primary strategies.

Thematic clusters of implementation strategies

Waltz et al. [29] identified nine thematic clusters within the ERIC taxonomy (Table 1), which allowed a further dimension of the implementation strategies to be explored. Table 1 summarizes how the implementation strategies were organized in terms of thematic clusters. Twenty-three of the 26 studies [43, 44, 46,47,48,49,50,51,52,53,54,55,56,57,58,59,60, 62,63,64,65,66,67] implemented at least one discrete implementation strategy in the train and educate stakeholders cluster, followed by 17 of the 26 studies, which examined strategies in the use evaluative and iterative strategies cluster [47,48,49,50, 52, 55,56,57,58,59,60,61,62, 65,66,67,68]. The train and educate stakeholders cluster comprises four of the six most used implementation strategies: conduct ongoing training, develop educational materials, conduct educational outreach visits, and distribute educational materials. The other two commonly used implementation strategies, assess for readiness and identify barriers and facilitators and audit and provide feedback, fall within the use evaluative and iterative strategies cluster.

Within the change infrastructure cluster, one study used the implementation strategy mandate change [50], and another used change physical structure and equipment [65]. Within the utilize financial strategies cluster, one study [50] used the following implementation strategies: alter incentive/allowance structures and fund and contract for the clinical innovation. The included studies applied the fewest strategies from this cluster, with only two of the nine possible implementation strategies being used—the lowest percentage (1%) among the thematic clusters.

Implementation outcomes

Table 3 provides a summary of the measurements and implementation outcomes used in each study. The number of implementation outcomes measured per study ranged from 1 to 4, and studies most frequently measured two. Adoption was the most frequently measured outcome, appearing in 81% of studies [43,44,45, 48,49,50,51,52,53,54,55, 57, 59,60,61,62,63,64, 66,67,68]. Fidelity followed, measured in 42% of studies [43, 47, 52, 53, 56,57,58, 60, 63, 65, 68]. Seven of the eight implementation outcomes were measured in at least one study; implementation cost was the only implementation outcome not addressed in any of the studies. Moreover, Moore et al.’s [50] study was the only one to measure penetration and sustainability. The studies used various approaches to measure implementation outcomes, as shown in Table 3. For example, 11 of the 20 studies measuring adoption used administrative data, observations, or qualitative or semi-structured interviews [43, 52, 53, 55, 57, 59, 60, 62,63,64, 66, 68].

Table 3 Summary of data for studies included in the review

Theories, models, and frameworks

Notably, of the 26 included articles, 12 explicitly stated using a TMF to guide the selection and application of implementation strategies (Table 4). The most common supporting TMF employed among the articles (n = 5) was the Knowledge-to-Action Process framework [44, 48,49,50, 61], categorized as a process model. Classic theory, or classic change theory, was the next most commonly applied category of TMFs, including the Behavior Change Wheel [47, 57, 60] (n = 3) and the Theory of Planned Behavior [44] (n = 1). No implementation evaluation frameworks (e.g., Reach, Efficacy, Adoption, Implementation, Maintenance (RE-AIM) or the Implementation Outcomes Framework) were utilized. A small number of studies described the components of their implementation strategies following reporting guidelines. Two studies [47, 64] used the Template for Intervention Description and Replication (TIDieR) checklist. One study [47] used the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines. Moreover, one study [57] followed the Standards for Reporting Implementation Studies (StaRI) checklist but did not explicitly mention an implementation framework to guide study design.

Table 4 Summary of implementation theories, models, and frameworks (TMFs) used in studies

Association between implementation strategies and implementation outcomes

The findings from studies examining the effect of implementation strategies on implementation outcomes were generally mixed. While 42% of studies used strategies that led to improved implementation outcomes, 50% reported inconclusive results. For instance, McEwen et al. [51] developed a multifaceted implementation strategy that involved conducting educational meetings, providing ongoing education, appointing evidence champions, distributing educational materials, and reminding clinicians to implement evidence in practice. These strategies led to increased adoption of their target evidence-based practice (EBP), the Cognitive Orientation to daily Occupational Performance (CO-OP) treatment approach, suggesting this multifaceted strategy may facilitate EBP implementation among OT practitioners. In contrast, Salbach et al. [48] examined the impact of an implementation strategy consisting of educational meetings, evidence champions, educational materials, local funding, and implementation barrier identification on stroke guideline adoption; these strategies led to increased adoption of only two of the 18 recommendations described in the stroke guidelines. Levac et al. [64] also utilized a combination of educational meetings, dynamic training, reminders, and expert consultation to increase the use of virtual reality therapy with stroke survivors, yet found that these combined strategies did not increase virtual reality adoption among practitioners serving stroke survivors.

Discussion

This scoping review is the first to examine implementation strategy use, implementation outcome measurement, and the application of theories, models, and frameworks in stroke rehabilitation and occupational therapy. Given that implementation science is still nascent in occupational therapy, this review’s purpose was to synthesize implementation strategies and outcomes using uniform language—as presented in the ERIC taxonomy and the IOF—to clearly understand the types of strategies being used and the outcomes being measured in the occupational therapy and stroke rehabilitation fields. Importantly, this review also calls attention to the value of applying theories, models, and frameworks to guide implementation strategy selection and implementation outcome measurement.

Operationalizing implementation strategies and outcomes is essential for reproducibility in subsequent research studies and in practice. Without a clear language for defining strategies and reporting outcomes, stroke rehabilitation and occupational therapy researchers risk contributing to what is currently being referred to as the “secondary” research-to-practice gap. This secondary gap is emerging because empirical findings from implementation science have seldom been integrated into clinical practice [70]. For instance, the present review found that the distribution of educational materials was one of the most commonly utilized implementation strategies, yet it has been well established that educational materials alone are typically insufficient for changing clinical practice behaviors [71]. One potential reason implementation science discoveries are rarely integrated into real-world practice is that implementation strategies and outcomes are not consistently named or described, leading to difficulties replicating these strategies in real-world contexts. Using the ERIC taxonomy and IOF to guide the description of strategies and reported outcomes is a logical first step in enhancing the replication of effective strategies for improving implementation outcomes.

Further, replication can be enhanced by describing strategies according to specification guidelines. Four studies in this review described implementation strategies using reporting standards such as the Template for Intervention Description and Replication (TIDieR) checklist, the Standards for Quality Improvement Reporting Excellence (SQUIRE), and the Standards for Reporting Implementation Studies (StaRI). Though the use of these reporting standards is promising for optimizing replication, Proctor et al. [27] also provide recommendations for how to specify implementation strategies designed to improve specific implementation outcomes. These recommendations include clearly naming the implementation strategy, describing it, and specifying it according to the following parameters: actor, action, action target, temporality, dose, outcome affected, and justification. These recommendations have been applied in the health and human services literature [72, 73], but their application remains scarce in the fields of rehabilitation and occupational therapy [74].

One noteworthy finding from this review was the variability in whether and how studies were guided by implementation TMFs. Fewer than half of the studies (n = 12) were informed by TMFs drawn from the implementation literature. The Knowledge-to-Action Process framework was applied in five studies, followed by the Behavior Change Wheel and Normalization Process Theory, represented in three and two studies, respectively. The lack of TMF application may also explain some of the variability in implementation strategy effectiveness. Interestingly, all 12 studies with TMF underpinnings found either mixed or beneficial outcomes as a result of their implementation strategies.

Conversely, the three studies that found no effect of their strategies on implementation outcomes were not informed by any implementation TMF. While this subset of studies is too small to draw definitive conclusions, the importance of using TMFs to guide implementation studies has been well established and endorsed by leading implementation scientists as a means to identify the determinants that may influence implementation, understand relationships between constructs, and inform implementation project evaluations [25, 69, 75]. Despite their recognized importance, TMFs are often applied haphazardly in implementation projects, and the selection of appropriate TMFs is complicated by the proliferation of TMFs in the implementation literature [33]. While tools (e.g., dissemination-implementation.org/content/select.aspx) are available to help researchers with TMF selection, occupational therapy researchers in stroke rehabilitation who are new to the field of implementation science may be unfamiliar with such tools and resources. For instance, Birken et al. developed the Theory, Model, and Framework Comparison and Selection Tool (T-CaST), which assesses the “fit” of different TMFs with implementation projects across four areas: usability, testability, applicability, and acceptability [25]. Similarly, TMF experts have developed a list of 10 recommendations for selecting and applying TMFs and have published specific case examples of how one TMF, the Exploration, Preparation, Implementation, Sustainment framework, has guided several implementation studies and projects [76].

In addition to synthesizing the implementation strategies and outcomes that have been examined in the stroke rehabilitation literature, this review corroborates other reviews in the rehabilitation field that have found mixed effectiveness of implementation strategies. A Cochrane review by Cahill et al. [77] was unable to determine the effect of implementation interventions on healthcare provider adherence to evidence-based practice in stroke rehabilitation due to limited evidence and lower-quality study designs. However, one encouraging finding from the present review, specific to the occupational therapy field, was the frequent use of the implementation strategy assess for readiness and identify barriers and facilitators. The assessment of barriers and facilitators is a central precursor to selecting implementation strategies that effectively facilitate the use of evidence in practice [78]. Implementation strategies that are not responsive to these barriers and facilitators frequently fail to produce sufficient and sustainable practice improvements [78, 79].

Although identifying implementation barriers and facilitators is of paramount importance in implementation studies, the processes researchers use to select relevant implementation strategies based on these barriers and facilitators are often unclear. Vratsistas-Curto et al. [47], for instance, assessed determinants of implementation at the start of their study and mapped those determinants to the Theoretical Domains Framework and Behavior Change Wheel to inform implementation strategy selection. This exemplary use of TMFs can strengthen the rigor of implementation strategy selection and elevate strategy effectiveness. However, not all implementation studies are informed by underlying TMFs, calling into question the rationale for why specific strategies are used in certain contexts. Going forward, as interest in implementation grows within the fields of stroke rehabilitation and occupational therapy, researchers must be transparent when explaining the process of and justification for their implementation strategy selection. Without this transparency, occupational therapy stakeholders and other rehabilitation professionals may continue to use implementation strategies without systematically matching them to identified barriers and facilitators. To facilitate strategy selection, Waltz et al. [78] gathered expert opinion data and developed a tool that matches implementation barriers to implementation strategies. The tool draws language from the Consolidated Framework for Implementation Research (CFIR) [23] and matches identified CFIR barriers to the ERIC taxonomy of implementation strategies. Using the CFIR-ERIC matching tool may be a viable option for occupational therapy and stroke rehabilitation researchers who understand the determinants of evidence implementation but require guidance when selecting relevant implementation strategies.

The other commonly examined implementation strategy identified in this review involved the use of educational meetings and materials. Eleven studies used one or more of these educational techniques to facilitate the implementation of evidence into practice. However, all of the studies examining educational strategies failed to specify those strategies as recommended by reporting guidelines [27]. Perhaps this lack of strategy specification can be attributed to the interdisciplinary divide in implementation nomenclature. Included studies in the present review often examined “knowledge translation interventions” or “knowledge translation strategies” (e.g., [64], [50]), and no studies specifically referenced the ERIC taxonomy or IOF. Across rehabilitation, the term “knowledge translation” is commonly used as a synonym for moving research into practice and has been widely accepted in the field since 2000 [24, 80, 81]. While international rehabilitation leaders have articulated distinctions between “knowledge translation” and “implementation science,” there is still tremendous work to be done in disseminating these distinctions to the broader rehabilitation audience [80, 81].

Additional research is also needed to evaluate the cost of implementing particular interventions in practice. Cost was the only implementation outcome not evaluated in any of the studies included in this review, which points to a major knowledge gap in both the implementation science and stroke rehabilitation fields. Given that a lack of funds to cover implementation costs is a substantial barrier to EBP implementation in stroke rehabilitation [22], we must understand both the costs associated with evidence-based interventions, programs, and assessments and the costs of using implementation strategies in stroke rehabilitation settings. One option for assessing these costs is to conduct economic evaluations. For instance, Howard-Wilsher et al. [82] published a systematic overview of economic evaluations of health-related rehabilitation, including occupational therapy. Economic evaluations compare two or more interventions and examine both the costs and consequences of the alternatives [82, 83]. They most commonly take the form of cost-effectiveness analysis (CEA) but can also consist of cost-utility, cost-benefit, cost-minimization, or cost-identification analyses [84, 85]. Consideration of resource allocation and costs is critically needed to inform clinical and policy decisions about occupational therapy interventions [82] and should be a focus of future implementation work in occupational therapy and rehabilitation.

Limitations

While the present scoping review adds novel contributions to the implementation science, stroke rehabilitation, and occupational therapy fields, it has several limitations. First, scoping review methodologies have been critiqued for not requiring quality and bias assessments of included articles [41, 86]. Given that this review’s focus was to synthesize the breadth of implementation strategies and outcomes measured in a field (e.g., occupational therapy) newer to implementation science, critical appraisals and bias assessments were deemed “not applicable” by the review team, a decision supported by current PRISMA-ScR reporting guidelines. Second, while a comprehensive search was conducted to capture all relevant literature, the review team could have further strengthened the search strategy by consulting an institutional librarian or performing backward/forward citation searching to maximize search sensitivity. Third, the search was restricted to studies that included occupational therapy as the primary service provider of interest; thus, most of the studies utilized implementation strategies at the provider level. The authors recognize that the effective implementation of best practices often requires organizational- and system-level changes; therefore, the findings do not represent strategies and outcomes applicable to entire stroke rehabilitation clinics or the broader healthcare system. Lastly, this scoping review yielded a relatively small sample of studies, so conclusions should be interpreted in light of the available evidence.

Conclusion

This scoping review revealed the occupational therapy profession’s use of implementation strategies and measurement of implementation outcomes in stroke rehabilitation. The fields of occupational therapy and stroke rehabilitation have begun to create a small body of implementation science literature; however, occupational therapy researchers and practitioners must continue to develop and test implementation strategies to move evidence into practice. Moreover, implementation strategies and outcomes should be described using uniform language that allows for comparisons across studies. The application of this uniform language—such as the language in the ERIC and IOF—will streamline the synthesis of knowledge (e.g., systematic reviews, meta-analyses) that will point researchers and practitioners to effective strategies that promote the use of evidence in practice. Without consistent nomenclature, it may continue to prove challenging to understand the key components of implementation strategies that are linked to improved implementation outcomes and ultimately improved care. By applying the ERIC taxonomy and IOF and using TMFs to guide study activities, occupational therapy and stroke rehabilitation researchers can advance both the fields of rehabilitation and implementation science.