Background

The use of research evidence to inform health policy is strongly promoted [1]. This drive has developed with increased pressure on healthcare organisations to deliver the most effective health services in an efficient and equitable manner [2]. Policy and management decisions influence the ability of health services to improve societal outcomes by allocating resources to meet health needs [3]. These decisions are more likely to improve outcomes in a cost-efficient manner when they are based on the best available evidence [4,5,6,7,8].

Evidence-informed decision-making refers to the complex process of considering the best available evidence from a broad range of information when delivering health services [1, 9, 10]. Policy and management decisions can be influenced by economic constraints, community views, organisational priorities, political climate, and ideological factors [11,12,13,14,15,16]. While these elements are all important in the decision-making process, without the support of research evidence they are an insufficient basis for decisions that affect the lives of others [17, 18].

Recently, increased attention has been given to implementation research to reduce the gap between research evidence and healthcare decision-making [19]. This growing but poorly understood field of science aims to improve the uptake of research evidence in healthcare decision-making [20]. Research implementation strategies such as knowledge brokerage and education workshops promote the uptake of research findings into health services. These strategies have the potential to create systematic, structural improvements in healthcare delivery [21]. However, many barriers exist to successful implementation [22, 23]. Individuals and health services face financial disincentives, lack of time or awareness of large evidence resources, limited critical appraisal skills, and difficulties applying evidence in context [24,25,26,27,28,29,30].

It is important to evaluate the effectiveness of implementation strategies and the inter-relating factors perceived to be associated with effective strategies. Previous reviews on health policy and management decisions have focussed on implementing evidence from single sources such as systematic reviews [29, 31]. Strategies involving simple written information on achievable change may be successful in health areas where there is already awareness of evidence supporting practice change [29]. Mitton et al. suggested that re-conceptualisation or improved methodological rigor would produce a richer evidence base for future evaluation; however, only one high-quality randomised controlled trial has been identified since [9, 32, 33]. As such, an updated review of emerging research on this topic is needed to inform the selection of research implementation strategies in health policy and management decisions.

The primary aim of this systematic review was to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. A secondary aim of the review was to describe factors perceived to be associated with effective strategies and the inter-relationship between these factors.

Methods

Identification and selection of studies

This systematic review was registered with PROSPERO (record number: 42016032947) and has been reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Additional file 1). Ovid MEDLINE, Ovid EMBASE, PubMed, CINAHL Plus, Scopus, Web of Science Core Collection, and The Cochrane Library were searched electronically from January 01, 2000, to February 02, 2016, in order to retrieve literature relevant to the current healthcare environment. The search was limited to the English language, and terms relevant to the field, population, and intervention were combined (Additional file 2). Search terms were selected based on their sensitivity, specificity, validity, and ability to discriminate implementation research articles from non-implementation research articles [34,35,36]. Electronic database searches were supplemented by cross-checking the reference lists of included articles and of systematic reviews identified during title and abstract screening. Searches were also supplemented by hand-searching publication lists from prominent authors in the field of implementation science.
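As a rough illustration of how such a strategy is assembled (the term lists below are hypothetical placeholders, not the actual search strings in Additional file 2), synonyms within each concept are joined with OR, and the field, population, and intervention blocks are then intersected with AND:

```python
# Hypothetical term blocks; the review's actual search strings are in
# Additional file 2.
field_terms = ["knowledge translation", "implementation science", "research utilisation"]
population_terms = ["policy-maker*", "health manager*", "decision-maker*"]
intervention_terms = ["knowledge broker*", "workshop*", "policy brief*"]

def or_block(terms):
    # Synonyms within one concept are joined with OR.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Concepts are intersected with AND, so a record must match at least one
# term from each of the three blocks.
query = " AND ".join(or_block(block) for block in
                     (field_terms, population_terms, intervention_terms))
print(query)
```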

Study selection

Type of studies

All study designs were included. Experimental and quasi-experimental study designs were included to address the primary aim. No study design limitations were applied to address the secondary aim.

Population

The population included individuals or bodies who made resource allocation decisions at the managerial, executive, or policy level of healthcare organisations or government institutions. Broadly defined as healthcare policy-makers or managers, this population makes decisions intended to improve population health outcomes by strengthening health systems, rather than by delivering individual therapeutic care. Studies investigating clinicians making decisions about individual clients were excluded, unless these studies also included healthcare policy-makers or managers.

Interventions

Interventions included research implementation strategies aimed at facilitating evidence-informed decision-making by healthcare policy-makers and managers. Implementation strategies may be defined as methods of promoting the systematic uptake of proven evidence into decision-making processes to strengthen health systems [37]. While these interventions have been described differently in various contexts, for the purpose of this review, we refer to them as ‘research implementation strategies’.

Type of outcomes

This review focused on a variety of possible outcomes that measure the use of research evidence. Outcomes were broadly categorised based on the four levels of Kirkpatrick’s Evaluation Model Hierarchy: level 1—reaction (e.g. change in attitude towards evidence), level 2—learning (e.g. improved skills acquiring evidence), level 3—behaviour (e.g. self-reported action taking), and level 4—results (e.g. change in patient or organisational outcomes) [38].
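As a minimal sketch of how this categorisation can be operationalised (the structure below is ours, built only from the examples given above, and is not part of the review protocol):

```python
# The four Kirkpatrick levels with the example outcomes named in the text.
KIRKPATRICK_LEVELS = {
    1: ("reaction", "change in attitude towards evidence"),
    2: ("learning", "improved skills acquiring evidence"),
    3: ("behaviour", "self-reported action taking"),
    4: ("results", "change in patient or organisational outcomes"),
}

def categorise(level: int) -> str:
    # Look up the level name and an example outcome for reporting.
    name, example = KIRKPATRICK_LEVELS[level]
    return f"Level {level} ({name}), e.g. {example}"

print(categorise(3))  # Level 3 (behaviour), e.g. self-reported action taking
```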

Screening

The web-based application Covidence (Covidence, Melbourne, Victoria, Australia) was used to manage references during the review [39]. Titles and abstracts were imported into Covidence and independently screened by the lead investigator (MS) and one of two other reviewers (RH, HL). Duplicates were removed throughout the review process using EndNote (EndNote, Philadelphia, PA, USA) and Covidence, and manually during reference screening. Studies determined to be potentially relevant, or whose eligibility was uncertain, were retrieved and imported into Covidence for full-text review. The lead investigator (MS) and one of two other reviewers (RH, HL) then independently assessed the full-text articles of the remaining studies to ascertain eligibility for inclusion. A fourth reviewer (KAB) independently decided on inclusion or exclusion if there was any disagreement in the screening process. Attempts were made to contact authors of studies whose full-text articles were unable to be retrieved, and those that remained unavailable were excluded.
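The per-record decision rule can be sketched as follows (an illustration of the process described above, not code used in the review):

```python
def screening_decision(reviewer_a: str, reviewer_b: str,
                       arbiter: str | None = None) -> str:
    """Two reviewers screen each record independently; agreement is
    accepted, and any disagreement is resolved by the fourth reviewer."""
    if reviewer_a == reviewer_b:
        return reviewer_a
    if arbiter is None:
        raise ValueError("disagreement requires the fourth reviewer")
    return arbiter

print(screening_decision("include", "include"))                     # include
print(screening_decision("include", "exclude", arbiter="exclude"))  # exclude
```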

Quality assessment

Experimental study designs, including randomised controlled trials and quasi-experimental studies, were independently assessed for risk of bias by the lead investigator (MS) and one of two other reviewers (RH, HL) using the Cochrane Collaboration’s tool for assessing risk of bias [40]. Non-experimental study designs were independently assessed for risk of bias by the lead investigator (MS) and one of two other reviewers (RH, HL) using design-specific critical appraisal tools: (1) the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies from the National Heart, Lung, and Blood Institute (NHLBI) [41] and (2) the Critical Appraisal Skills Program (CASP) Qualitative Checklist for qualitative, case study, and evaluation designs [42].

Data extraction

Data was extracted using a standardised, piloted data extraction form developed by reviewers for the purpose of this study (Additional file 3). The lead investigator (MS) and one of two other reviewers (RH, HL) independently extracted data relating to the study details, design, setting, population, demographics, intervention, and outcomes for all included studies. Quantitative results were also extracted in the same manner from experimental studies that reported quantitative data relating to the effectiveness of research implementation strategies in promoting evidence-informed policy and management decisions in healthcare. Attempts were made to contact authors of studies where data was not reported or clarification was required. Disagreement between investigators was resolved by discussion, and where agreement could not be reached, an independent fourth reviewer (KAB) was consulted.

Data analysis

A formal meta-analysis was not undertaken due to the small number of studies identified and high levels of heterogeneity in study approaches. Instead, a narrative synthesis of experimental studies was performed to evaluate the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare, and a thematic synthesis of non-experimental studies was performed to describe factors perceived to be associated with effective strategies and the inter-relationships between these factors. For the purpose of synthesis, experimental studies were defined as those reporting quantitative results with both an experimental and a comparison group, including quasi-experimental designs that reported quantitative before-and-after results for primary outcomes. Non-experimental studies were defined as those reporting qualitative results, or quantitative results without both an experimental and a control group; this included quasi-experimental studies that did not report quantitative before-and-after results for primary outcomes.
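This routing rule can be expressed compactly (a sketch under our own naming, not part of the review protocol):

```python
from dataclasses import dataclass

@dataclass
class Study:
    quantitative: bool      # reports quantitative results
    comparison_group: bool  # has both an experimental and a comparison group
    quasi: bool             # quasi-experimental design
    before_after: bool      # reports pre/post results for primary outcomes

def synthesis_route(s: Study) -> str:
    experimental = s.quantitative and s.comparison_group
    if s.quasi:
        # Quasi-experimental designs qualify as experimental only if they
        # report quantitative before-and-after results.
        experimental = experimental and s.before_after
    return "narrative synthesis" if experimental else "thematic synthesis"

print(synthesis_route(Study(True, True, False, False)))   # narrative synthesis
print(synthesis_route(Study(True, False, False, False)))  # thematic synthesis
```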

The thematic synthesis was informed by an inductive thematic approach for data referring to the factors perceived to be associated with effective strategies and the inter-relationships between these factors, and was based on the methods described by Thomas and Harden [43]. These methods involve three stages of analysis: (1) line-by-line coding of text, (2) inductive development of descriptive themes similar to those reported in the primary studies, and (3) development of analytical themes representing new interpretive constructs that are undeveloped within individual studies but become apparent across studies once data are synthesised. Data reported in the results sections of included studies were reviewed line-by-line and open coded according to meaning and content by the lead investigator (MS). Codes were developed inductively by the lead investigator (MS) and a second reviewer (TH). Concurrent with data analysis, this entailed constant comparison and the ongoing development and comparison of new codes as each study was coded. Immersing reviewers in the data, reflexive analysis, and peer debriefing techniques were used to ensure methodological rigor throughout the process. Codes and the code structure were considered finalised at the point of theoretical saturation (when no new concepts emerged from a study). A single researcher (MS) conducted the coding in order to embed the interpretation of text within a single immersed individual acting as an instrument of data curation [44, 45]. Axial coding was performed simultaneously by the lead investigator (MS) and a second reviewer (TH) during the original open coding to identify relationships between codes and organise coded data into descriptive themes. Once descriptive themes were developed, the two investigators organised data across studies into analytical themes using a deductive approach, outlining relationships and interactions between codes across studies. To ensure methodological rigor, a third reviewer (JW) was consulted via group discussion to develop final consensus. The lead author (MS) reviewed any disagreements in descriptive and analytical themes by returning to the original open codes. This cyclical process was repeated until themes were considered to sufficiently describe the factors perceived to be associated with effective strategies and the inter-relationships between these factors.
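The three stages can be illustrated with toy data (the codes, lines, and themes below are invented for illustration and are not the review's actual coding frame):

```python
# Stage 1: line-by-line open coding of results text: (study, line, code).
open_codes = [
    ("study_A", "Managers wanted funding for time release", "resourcing"),
    ("study_B", "Lack of time prevented evidence reviews", "resourcing"),
    ("study_B", "Seniority predicted systematic review use", "leadership"),
]

# Stage 2: group related codes into descriptive themes that stay close
# to the primary studies.
descriptive_themes = {}
for study, line, code in open_codes:
    descriptive_themes.setdefault(code, []).append((study, line))

# Stage 3: analytical themes are interpretive constructs that only become
# apparent across studies; here, a theme qualifies if it is supported by
# more than one study.
analytic_themes = [
    theme for theme, lines in descriptive_themes.items()
    if len({study for study, _ in lines}) > 1
]
print(analytic_themes)  # ['resourcing']
```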

Results

Search results

The search strategy identified a total of 7783 articles: 7716 from the electronic search strategy, 56 from reference checking of identified systematic reviews, 8 from reference checking of included articles, and 3 from hand-searching publication lists of prominent authors. Duplicates (3953) were removed using EndNote (n = 3906) and Covidence (n = 47), leaving 3830 articles for screening (Fig. 1).
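The flow arithmetic reconciles as follows (a simple check of the counts reported above):

```python
# Records identified: electronic search plus the three supplementary sources.
identified = 7716 + 56 + 8 + 3
assert identified == 7783

# Duplicates removed in EndNote and Covidence.
duplicates = 3906 + 47
assert duplicates == 3953

# Articles entering title and abstract screening.
assert identified - duplicates == 3830
```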

Fig. 1 PRISMA flow diagram

Of the 3830 articles, 96 were determined to be potentially eligible for inclusion after title and abstract screening (see Additional file 4 for the full list of 96 articles). The full text of these 96 articles was then reviewed, with 19 studies (n = 21 articles) meeting all relevant criteria for inclusion in this review [9, 27, 46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64]. The most common reason for exclusion upon full-text review was that articles did not examine the effect of a research implementation strategy on decision-making by healthcare policy-makers or managers (n = 22).

Characteristics of included studies

The characteristics of included studies are shown in Table 1. Three experimental studies evaluated the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare systems. Sixteen non-experimental studies described factors perceived to be associated with effective research implementation strategies.

Table 1 Characteristics of included studies

Study design

Of the 19 included studies, there were two randomised controlled trials (RCTs) [9, 46], one quasi-experimental study [47], four program evaluations [48,49,50,51], three implementation evaluations [52,53,54], three mixed methods [55,56,57], two case studies [58, 59], one survey evaluation [63], one process evaluation [64], one cohort study [60], and one cross-sectional follow-up survey [61].

Participants and settings

The largest number of studies were performed in Canada (n = 6), followed by the United States of America (USA) (n = 3), the United Kingdom (UK) (n = 2), Australia (n = 2), multi-national settings (n = 2), Burkina Faso (n = 1), the Netherlands (n = 1), Nigeria (n = 1), and Fiji (n = 1). The health topics in which research implementation took place varied in context. Decision-makers included policy-makers, commissioners, chief executive officers (CEOs), program managers, coordinators, directors, administrators, policy analysts, department heads, researchers, change agents, fellows, vice presidents, stakeholders, clinical supervisors, and clinical leaders, drawn from government, academia, and non-government organisations (NGOs), with varying levels of education and experience.

Research implementation strategies

There was considerable variation in the research implementation strategies evaluated (see Table 2 for a summary description). These strategies included knowledge brokering [9, 49, 51, 52, 57], targeted messaging [9, 64], database access [9, 64], policy briefs [46, 54, 63], workshops [47, 54, 56, 60], digital materials [47], fellowship programs [48, 50, 59], literature reviews/rapid reviews [49, 56, 58, 61], a consortium [53], a certificate course [54], multi-stakeholder policy dialogue [54], and multifaceted strategies [55].

Table 2 Implementation strategy summary description

Quality/risk of bias

Experimental studies

The potential risk of bias for included experimental studies according to the Cochrane Collaboration tool for assessing risk of bias is presented in Table 3. None of the included experimental studies reported methods for allocation concealment, blinding of participants and personnel, or blinding of outcome assessment [9, 46, 47]. Other potential sources of bias were identified in each of the included experimental studies, including (1) inadequate reporting of p values for mixed-effects models, results for hypothesis two, and the comparison of health policies and programs (HPP) post-intervention in one study [9]; (2) pooling of data from both intervention and control groups, which limited the ability to evaluate the success of the intervention in one study [47]; and (3) inadequate reporting of analysis and results in another study [46]. Adequate random sequence generation was reported in two studies [9, 46] but not in one [47]. One study reported complete outcome data [9]; however, large losses to follow-up were identified in two studies [46, 47]. It was unclear whether risk of selective reporting bias was present in one study [46], as outcomes were not adequately pre-specified. Risk of selective reporting bias was identified in one study that did not report p values for sub-group analyses [9] and in another that reported only change scores for outcome measures [47].
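These judgments can be tabulated as below (our reading of the text above, with unreported methods recorded as "unclear"; studies are keyed by citation number, and Table 3 remains the authoritative summary):

```python
# Risk-of-bias judgments per Cochrane domain, as described in the text;
# "unclear" marks domains whose methods were not reported.
risk_of_bias = {
    "[9]":  dict(sequence="low", concealment="unclear", blinding="unclear",
                 outcome_data="low", reporting="high", other="high"),
    "[46]": dict(sequence="low", concealment="unclear", blinding="unclear",
                 outcome_data="high", reporting="unclear", other="high"),
    "[47]": dict(sequence="unclear", concealment="unclear", blinding="unclear",
                 outcome_data="high", reporting="high", other="high"),
}

# No included experimental study was at low risk across all domains.
print(any(all(j == "low" for j in study.values())
          for study in risk_of_bias.values()))  # False
```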

Table 3 Risk of bias of included experimental studies using the Cochrane Collaboration tool for assessing risk of bias

Non-experimental studies

The potential risk of bias for included non-experimental studies according to the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies from the National Heart, Lung, and Blood Institute, and the Critical Appraisal Skills Program (CASP) Qualitative Checklist is presented in Tables 4 and 5.

Table 4 Risk of bias of included non-experimental studies using the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies
Table 5 Risk of bias of included non-experimental studies using the Critical Appraisal Skills Program (CASP) Qualitative Checklist

Narrative synthesis results: effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare

Definitive estimates of implementation strategy effect are limited due to the small number of identified studies, and heterogeneity in implementation strategies and reported outcomes. A narrative synthesis of results is described for changes in reaction/attitudes/beliefs, learning, behaviour, and results. See Table 6 for a summary of study results.

Table 6 Summary of study results

Randomised controlled trials

Interestingly, the policy brief accompanied by an expert opinion piece was thought to improve both level 1 (change in reaction/attitudes/beliefs) and level 3 (behaviour change) outcomes. This was referred to as an “authority effect” [46]. Tailored targeted messages also reportedly improved level 3 behaviour change outcomes; however, the addition of a knowledge broker to this strategy may have been detrimental to these outcomes. When organisational research culture was considered, health departments with a low research culture may have benefited from the addition of a knowledge broker, although no p values were provided for this finding [9].

Non-randomised studies

The effect of workshops, ongoing technical assistance, and distribution of instructional digital materials on level 1 (change in reaction/attitudes/beliefs) outcomes was difficult to determine, as many measures did not change from baseline scores and the direction of change scores was not reported. However, a reduction in perceived support from state legislators for physical activity interventions was reported after the research implementation strategy. All level 2 learning outcomes were reportedly improved, with change scores larger for local than for state health department decision-makers in every category except methods for understanding cost. Results were less clear for level 3 behaviour change outcomes: only self-reported individual-adapted health behaviour change was thought to have improved [47].

Thematic synthesis results: conceptualisation of factors perceived to be associated with effective strategies and the inter-relationship between these factors

Due to the relative paucity of effectiveness studies, a thematic synthesis of non-experimental studies was used to explore the factors perceived to be associated with effective strategies and the inter-relationships between these factors. Six broad, interrelated analytic themes emerged from the thematic synthesis of data captured in this review (Fig. 2). We developed a conceptualisation of how these themes interrelated from data captured both within and across studies. Some of these analytic themes were specifically mentioned in individual papers, but no paper included in this review identified all of them, nor developed a conceptualisation of how they interrelated. The six analytic themes were conceptualised as having a unidirectional, hierarchical flow from (1) establishing an imperative for practice change, through (2) building trust between implementation stakeholders and (3) developing a shared vision, to (4) actioning change mechanisms. These were underpinned by (5) the employment of effective communication strategies and (6) the provision of resources to support change.

Fig. 2 Conceptualisation of inter-related themes (analytic themes) associated with effective strategies and the inter-relationships between these factors

Establish imperative

Organisations and individuals were driven to implement research into practice when there was an imperative for practice change. Decision-makers wanted to know why change was important to them, their organisation, and/or their community. Imperatives were seen as drivers of motivation for change to take place and were evident both internal to the decision-maker (personal gain) and external to the decision-maker (organisational and societal gain).

Personal gain

Individuals were motivated to participate in research implementation projects where they could derive personal gain [48, 50, 56]. Involvement in research was viewed as an opportunity rather than an obligation [56]. This was particularly evident in one study by Kitson et al., where nursing leaders unanimously agreed that the potential benefit of supported, experiential learning was substantial, with 13 of 14 committing to leading further interdisciplinary, cross-functional projects [50].

Organisational and societal gain

Decision-makers supported research implementation efforts when they aligned with an organisational agenda or an area of identified societal health need [48, 50, 53, 55, 59, 64]. Practice change was supported if it was deemed important by decision-makers and aligned with organisational priorities, whereas knowledge exchange was impeded if changes had questionable relevance to the workplace [48, 53, 64]. Individuals reported motivation to commit to projects they felt would address community needs. For example, in one study, nursing leaders identified their passion for health topics as a reason to volunteer in a practice change process [50]. In another study, managers were supportive of practice change to improve the care of people with dementia, as they thought this would benefit the population [55].

Build trust

Relationships, leadership authority, and governance underpinned the development of trust between stakeholder groups.

Relationships

The importance of trusting relationships between managers, researchers, change agents, and staff was emphasised in a number of studies [48, 50, 54, 59, 64]. Developing new relationships through collaborative networking and constant contact reportedly addressed mutual mistrust between policy-makers and researchers, and engaged others to change practice [54, 59]. Bullock et al. described how pre-existing personal and professional relationships might facilitate implementation strategy success by utilising organisational knowledge and identifying workplace “gatekeepers” with whom to engage. In the same study, fellows who were only weakly connected to healthcare organisations provided no real link between healthcare managers and academic resources [48].

Leadership authority

The leadership authority of those involved in research implementation influenced the development of trust between key stakeholders [50, 52, 55, 59, 61]. Dagenais et al. found that recommendations and information were valued when credited to researchers and change agents whose input was trusted [52]. The perception that individuals in senior organisational roles reduced perceived risk and resistance to change was supported by Dobbins et al., who reported that the seniority of individuals was a predictor of systematic review use in decision-making [50, 59, 61]. However, professional seniority should be relevant to the research implementation context, as a perceived lack of knowledge in the content area was a barrier to providing managerial support [55].

Governance

A number of studies expressed the importance of consistent and sustained executive support in order to maintain project momentum [48, 50, 52, 53, 59, 64]. In the study by Kitson et al., individuals expressed concern and anxiety around reputational risk if consistent organisational support was not provided [50]. Organisational capacity was enhanced by strong management support and policies [57]. Uneke et al. identified good stewardship, in the form of governance, as providing accountability and protection for individuals and organisations. Participants in this study unanimously identified the need for performance measurement mechanisms for the health policy advisory committee to promote sustainability and independent evidence-to-policy advice [54]. Bullock et al. found that managers view knowledge exchange in a transactional manner and are keen to know and use project results as soon as possible, whereas researchers and change agents may not wish to release results, depending on the phase of the project [48]. This highlights the importance of governance systems that support confidentiality and limit the release of project results until stakeholders are confident of the findings.

Develop shared vision

A shared vision for desired change and outcomes can be built around a common goal by improving understanding, influencing behaviour change, and working with the characteristics of organisations.

Stakeholder understanding

Improving the understanding of research implementation was considered a precursor to building a shared vision [50, 52, 55, 56]. Policy-makers reported that a lack of time prevented them from performing evidence reviews, and they desired experientially tailored information, education, and the avoidance of technical language to improve understanding [52, 55, 58]. A perceived lack of clarity limited project outcomes in the study by Gagliardi et al., which emphasised the need for simple processes [56]. When challenges arose in the study by Kitson et al., ensuring all participants understood their role from the outset of implementation was suggested as a process improvement [50].

Influence change

Knowledge brokers in Campbell et al. were able to elicit well-defined research questions when they were open, honest, and frank in their approach to policy-makers. Policy-makers felt that knowledge brokering was most useful for shaping the parameters, scope, budget, and format of projects, providing guidance for decision-making rather than being prescriptive [49]. However, conclusive recommendations that aim for consensus were viewed favourably by policy-makers, meaning a balance must be struck between providing guidance and being too prescriptive [63]. Interactive strategies may allow change agents to gain a better understanding of evidence in organisational decisions and to guide attitudes towards evidence-informed decision-making. Champagne et al. observed fellows participating in this interactive, social process, and Dagenais et al. reported that practical exercises and interactive discussions were appreciated by knowledge brokers in their own training [52, 59]. Another study reported that challenges to work practices could be viewed as criticism; despite this, organisational staff valued leaders’ ability to inspire a shared vision and identified ‘challenging processes’ as the most important leadership practice [50].

Characteristics of organisation

Context-specific organisational characteristics such as team dynamics, change culture, and individual personalities can influence the effectiveness of research implementation strategies [50, 53, 56, 59]. Important factors in Flanders et al. were clear lines of authority in collaborative and effective multidisciplinary teams. Organisational readiness for change was perceived as both a barrier and a facilitator to research implementation, but higher staff consensus was associated with higher engagement in organisational change [60]. Strategies in Dobbins et al. were thought to be more effective when implemented in organisations with a learning culture and practices, or when they facilitated an organisational learning culture themselves, whereas Flanders et al. reported that solutions to hospital safety problems often created more work or departed from long-standing practices, which proved a barrier to overcome [53, 61]. Individual resistance to change in the form of process concerns led to higher levels of dissatisfaction [50].

Provide resources to support change

Individuals were conscious of the need for implementation strategies to be adequately resourced [48,49,50, 55, 56, 58, 59, 61]. There was anxiety in the study by Döpp et al. around promoting research implementation programs, due to the fear of receiving more referrals than could be handled with current resourcing [55]. Managers mentioned service pressures as a major barrier to changing practice, with involvement in implementation research dependent on workload and other professional commitments [50, 56]. A lack of time prevented evidence reviews from being performed, and varied access to human resources such as librarians was also identified as a barrier [58, 59]. Policy-makers and managers appreciated links to expert researchers, especially those who previously had infrequent or irregular contact with the academic sector [49]. Managers typically viewed engagement with research implementation as transactional, wanting funding for time release (beyond salary costs), while researchers and others from the academic sector considered knowledge exchange inherently valuable [48]. Vulnerability around leadership skills and knowledge in the study by Kitson et al. exposed the importance of training, education, and professional development opportunities. Ongoing training in critical appraisal of research literature was viewed as a predictor of whether systematic reviews influenced program planning [61].

Employ effective communication strategies

Studies and study participants expressed different preferences for the format and mode of contact for implementation strategies [48, 51, 52, 55, 56, 59, 64]. Face-to-face contact was preferred by the majority of participants in the study by Waqa et al. and was useful in acquiring and accessing relevant data or literature to inform the writing of policy briefs [51]. Telephone calls were perceived as successful in Döpp et al. because they increased involvement and the opportunity to ask questions [55]. Electronic communication formats in the study by Bullock et al. provided examples of evidence-based knowledge transfer from academic settings to the clinical setting: fellows spent time reading literature at the university and would then send that information to the clinical workplace by email, while managers stated that the availability of website information positively influenced its use [48]. Regular contact in the form of reminders encouraged action, with the study by Dagenais et al. finding that a lack of ongoing, regular contact with knowledge brokers in the field limited research implementation programs [52].

Action change mechanism

Reviewers interpreted the domains (analytical themes) as representing a model of implementation strategy success that leads to a change mechanism. Change mechanisms refer to the actions taken by study participants to implement research into practice. Studies did not explicitly measure the change mechanisms that led to the implementation of research into practice; instead, implicit measurements of change mechanisms were reported, such as knowledge gain and intention-to-act measures.

Discussion

This review found that there are numerous implementation strategies that can be utilised to promote evidence-informed policy and management decisions in healthcare. These ranged from the ‘authority effect’ of a simple, low-cost policy brief to the knowledge improvement produced by a complex, multifaceted workshop with ongoing technical assistance and distribution of instructional digital materials [46, 47]. The resource intensity of these strategies was relatively low. It was evident that more resource-intensive strategies are not always better than less intensive ones, as the addition of a knowledge broker to a tailored, targeted messaging strategy was less effective than the messages alone [9]. Given the paucity of studies evaluating the effectiveness of implementation strategies, understanding why some implementation strategies succeed where others fail in different contexts is important for future strategy design. The thematic synthesis of the wider non-effectiveness literature included in our review has led us to develop a model of implementation strategy design that may action a change mechanism for evidence-informed policy and management decisions in healthcare [48,49,50,51,52,53,54,55,56,57,58,59,60,61, 63, 64].

Our findings were consistent with change management theories. The conceptual model of how themes interrelated both within and across studies includes stages similar to ‘Kotter’s 8 Step Change Model’ [65]. Leadership behaviours are commonly cited as organisational change drivers due to the formal power and authority that leaders have within organisations [66,67,68]. This supports the ‘authority effect’ described in Beynon et al. and the value decision-makers placed on information credited to experts they trust [46]. Authoritative messages are considered a key component of an effective policy brief; therefore, organisations should consider partnering with authoritative institutions, research groups, or individuals to augment the legitimacy of their message when producing policy briefs [69]. Change management research proposes that change-related training improves understanding, knowledge, and skills to embed a change vision at a group level [70,71,72]. The results of our review support the view that providing adequate training resources to decision-makers can improve understanding, knowledge, and skills, leading to desired change. The results of our thematic synthesis appear to support knowledge broker strategies in theory, and multi-component research implementation strategies are thought to have greater effects than simple strategies [73, 74]. However, the addition of knowledge brokers to a tailored targeted messaging research implementation strategy in Dobbins et al. was less effective than the messages alone [9]. This may indicate that, in some cases, simple research implementation strategies are more effective than complex, multi-component ones. Further development of strategies is needed to ensure that a range of implementation options are available that can be tailored to individual health contexts. A previous review by LaRocca et al. supports this finding, asserting that in some cases complex strategies may diminish key messages and reduce understanding of the information presented [10]. Further, the knowledge broker strategy in Dobbins et al. had little or no engagement from 30% of participants allocated to this group, emphasising the importance of tailoring strategy complexity and intensity to organisational need.

This systematic review was limited in both the quantity and quality of studies that met the inclusion criteria. Previous reviews have been similarly limited by the paucity of high-quality research evaluating the effectiveness of research implementation strategies in this context [10, 29, 32, 75]. The limited number of retrieved experimental, quantitatively evaluated effectiveness studies means the results of this review were mostly based on non-experimental qualitative data without an evaluation of effectiveness. Non-blinding of participants could have biased qualitative responses. Participants could have felt pressured to respond positively if they did not wish to lose previously provided implementation resources, and responses could vary depending on the implementation context and what changes were being made, for example, whether additional resources were being implemented to fill an existing evidence-to-practice gap, or resources were being disinvested due to a lack of supportive evidence. Despite these limitations, we believe our comprehensive search strategy achieved a relatively complete identification of studies in this field of research. A previous Cochrane review in the same implementation context recently identified only one study (also captured in our review) using their search strategy and inclusion criteria [33, 76]. A meta-analysis could not be performed due to the limited number of studies and high levels of heterogeneity in study approaches; as such, the results of this synthesis should be interpreted with caution. However, synthesising data narratively and thematically allowed this review to examine not only the effectiveness of research implementation strategies in the context area but also the mechanisms behind the inter-relating factors perceived to be associated with effective strategies. Since our original search, we have been unable to identify additional full texts for the 11 titles excluded due to no data being reported (e.g. protocol, abstract). However, the Developing and Evaluating Communication strategies to support Informed Decisions and practice based on Evidence (DECIDE) project has since developed a number of tools to improve the dissemination of evidence-based recommendations [77]. In addition, the relationship development, face-to-face interaction, and organisational climate themes in our conceptual model are supported by the full version [78] of an excluded summary article [79], identified after the original search.

Studies measured behaviour changes at the third level of the Kirkpatrick Hierarchy but did not measure whether those behaviour changes led to their intended improvements in societal outcomes (level 4, Kirkpatrick Hierarchy). Future research should also evaluate changes in health and organisational outcomes. The conceptualisation of factors perceived to be associated with effective strategies and the inter-relationships between these factors should be interpreted with caution, as it was based on low levels of evidence according to the National Health and Medical Research Council (NHMRC) of Australia designations [80]. Therefore, the association between these factors and effective strategies needs to be rigorously evaluated. Further conceptualisation of how to evaluate research implementation strategies should consider how to include health and organisational outcome measures to better understand how improved evidence-informed decision-making can lead to greater societal benefits. Future research should aim to increase the currently low number of high-quality randomised controlled trials evaluating the effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. This might allow formal meta-analysis to be performed, providing indications of which research implementation strategies are effective in which contexts.

Conclusions

Evidence is developing to support the use of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare. A number of inter-relating factors were thought to influence the effectiveness of strategies through establishing an imperative for change, building trust, developing a shared vision, and actioning change mechanisms. Employing effective communication strategies and providing resources to support change underpin these factors and should inform the design of future implementation strategies.