Introduction

Evidence-based practice (EBP) refers to the development and provision of health services according to best research evidence, health care providers’ expertise and patients’ values and preferences [1]. Adoption of EBP by organizations can create safer practices, improve patient outcomes and decrease health care costs [2]. Best practices and technologies can be defined as innovations [3, 4]. However, some authors have reported that health services and practices are not always based on best evidence [5,6,7,8,9]. Braithwaite and colleagues summarized that 60% of health services in the USA, England and Australia follow best practice guidelines; about 30% of health services are of low value; and 10% of patients globally experience iatrogenic harm [10].

To implement innovations, research evidence must be synthesized, adapted and applied in a specific health care context, and this adoption must be evaluated [11]. The adoption of innovations is improved when devoted individuals, often referred to as champions, facilitate implementation [3, 12, 13]. Champions are individuals (health care providers, management [14, 15], or lay persons [16, 17]) who volunteer or are appointed to enthusiastically promote and facilitate implementation of an innovation [13, 18, 19]. There is confusion and overlap between the concept of champion and other concepts, such as opinion leaders, facilitators, linking agents, change agents [19, 20], coaches and knowledge brokers [19]. Some studies have attempted to clarify these different roles that are intended to facilitate implementation [19, 20]. Despite this, these terms are sometimes used synonymously, while at other times treated as different concepts [19, 21]. Hence, we sought to only examine champions in this study.

There are at least four recent published reviews that reported on the effectiveness of champions [21,22,23,24]. In 2016, Shea and Belden [24] performed a scoping review (n = 42) to collate the characteristics and impacts of health information technology champions. They reported that in a subset of studies (24 qualitative and three quantitative), 23 of the 27 studies reported that champions had a positive impact during the implementation of health information technology [24]. In 2018, Miech and colleagues [21] conducted an integrative review (n = 199) of the literature on champions in health care. They reported a subset of 11 quantitative studies (four studies that randomly allocated the presence and absence of champions and seven studies that reported an odds ratio) that evaluated the effectiveness of champions [21]. They reported that, despite some mixed findings in the subset of studies, use of champions generally influenced adoption of innovations [21]. In 2020, Wood and colleagues [23] conducted a systematic review (n = 13) on the role and efficacy of clinical champions in facilitating implementation of EBPs in settings providing drug and alcohol addiction and mental health services. They reported that champions influenced health care providers’ use of best practices or evidence-based resources in four qualitative studies [23]. In 2021, Hall and colleagues [22] performed a systematic review and meta-analysis of randomized controlled trials (RCTs; n = 12) that evaluated the effectiveness of champions, as part of a multicomponent intervention, at improving guideline adherence in long-term care homes. They concluded from three RCTs that there is low-certainty evidence suggesting that the use of champions may improve staff adherence to guidelines in long-term care settings [22].

According to Tufanaru and colleagues [25], synthesizing the effectiveness of an intervention requires the summary of quantitative studies using a systematic process. As described above, two of the previous reviews discussing champions’ effectiveness were primarily composed of qualitative studies [23, 24]. Synthesizing qualitative studies may highlight relationships that exist between champions and aspects of implementation, but it does not inform champions’ effectiveness based on the definition outlined by Tufanaru and colleagues [25]. Furthermore, some of the previous reviews examining champions’ effectiveness were limited to the following: (1) types of innovations (i.e. health information technology [24]); (2) setting (i.e. long-term care settings [22] or health care settings providing mental health and addiction treatment [23]); or (3) study design/effect size (i.e. only including experimental design studies [21, 22] or studies reporting odds ratios [21]). Moreover, as some of the previous reviews sought to examine other aspects pertaining to champions in addition to champions’ effectiveness, they utilized study designs (i.e. integrative review [21], scoping review [24]) that did not require the performance of some conventional steps for systematic reviews as outlined by the JBI manual [25] and the Cochrane handbook [26]. For example, grey literature was not included, or the methodological quality of included studies was not appraised, in the two cited reviews [21, 24].

To build on the four reviews describing champions’ effectiveness [21,22,23,24], we conducted a systematic review to determine whether the use of champions, tested in isolation from other implementation strategies, is effective at increasing the use of innovations across health care settings and innovation types. Our review is rooted in a post-positivist paradigm [27] because it focused on the relationships between measurable components of champions and implementation and emphasized the rigour attributed to study design (e.g. experimental studies are more rigorous than quasi-experimental and observational studies). The research questions were as follows: (1) How are champions described and operationalized in the articles that evaluate their effectiveness? (2) What are the effects of champions on the uptake of innovations (knowledge use) by patients, providers and systems/facilities? (3) What are the effects of champions on patient, provider and system/facility outcomes?

Methods

The research team followed the JBI approach to conducting systematic reviews of effectiveness [25] and used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [28] and the Synthesis Without Meta-analysis (SWiM) reporting guidelines [29]. The research team registered the review in the Open Science Framework as part of a broader scoping review [30]. See Additional files 1 and 2 for the PRISMA and SWiM checklists, respectively.

Search strategy and study selection

Search strategy

WJS devised a search strategy with a health sciences librarian for a larger scoping review that aimed to describe champions in health care. A second health sciences librarian assessed the search strategy using the Peer Review of Electronic Search Strategies (PRESS) checklist [31]. The search strategy (outlined in Additional file 3) consisted of Boolean phrases and medical subject headings (MeSH) terms for the following concepts and their related terms: champions, implementation and health care/health care context. Eight electronic databases (Business Source Complete, CINAHL, EMBASE, Medline, Nursing and Allied Health, PsycINFO, ProQuest Thesis and Dissertations, and Scopus) were searched from inception to October 26, 2020, to identify relevant articles. Further, WJS identified and assessed additional records retrieved from the reference lists of included studies and synthesis studies that were captured by the search strategy and from forward citation searches of included studies.

Eligibility criteria

Inclusion

We included all published studies and unpublished theses and dissertations that used a quantitative study design to evaluate the effectiveness of individuals explicitly referred to as champions at either increasing the use of an innovation or improving patient, provider, or system/facility outcomes within a health care setting. English-language articles were included regardless of date of publication or type of quantitative study design.

Exclusion

Synthesis studies, qualitative studies, study protocols, conference abstracts, editorials and opinion papers, case studies, studies not published in English, articles without an available full text, and articles not about knowledge translation or EBP were excluded.

Study selection

We used Covidence [32] to deduplicate records; WJS and MDV independently assessed the titles and abstracts of these deduplicated records. Records were included if the title and abstract mentioned champions within health care. All potentially relevant articles and articles that had insufficient information were included for full text screening. WJS and MDV independently assessed the inclusion of full text articles in accordance with the eligibility criteria detailed above. Discrepancies were resolved through consensus or, if necessary, through consultation with a third, senior research team member (ML, IDG, JES). WJS and MDV piloted the eligibility criteria on 100 records and 50 full text articles.

Data extraction

WJS and MDV extracted data in duplicate using a standardized and piloted extraction form created by the research team in DistillerSR [33]. The following data were extracted: (1) study characteristics: first author, year of publication, study design, country, setting, details on the innovation being implemented, study limitations, funding, and conflict of interest; (2) study participant demographics: sample size, age, sex and gender identity, and professional role; (3) champion demographics: number of champions, age, sex and gender identity, and professional role; (4) operationalization of champions: quantitative measures relative to the presence or influence of champions and the reliability and validity of these measures; and (5) study outcome: the dependent variable evaluated with use of champions, method of measurement of the dependent variable, reliability and validity of measure(s), statistical analysis/approach undertaken, and statistical results and significance of results at a p-value of 0.05 or less. WJS and MDV resolved discrepancies through discussion or through consultation with a senior research team member. WJS contacted authors for missing data integral to the analysis (e.g. to clarify statistical test results when exact values were not reported in a figure in an article).

Quality appraisal

WJS and MDV independently appraised study methodological quality using five JBI critical appraisal tools for (1) case–control studies [34], (2) cohort studies [34], (3) cross-sectional studies [34], (4) quasi-experimental (non-randomized experimental) studies [25] and (5) randomized controlled trials [25]. Non-controlled before-and-after studies and interrupted time series were assessed using the critical appraisal tool for quasi-experimental studies [25]. Discrepancies were resolved through consensus. Each question response was attributed a score according to a scoring system from a recently published JBI systematic review [35] (Yes = 2; Unclear = 1; and No = 0). A quality score between 0 and 1 was calculated for each included study by dividing the total score by the maximum possible score. According to this quality score, the research team classified each study as either weak (quality score < 0.5), moderate (quality score between 0.5 and 0.74), or strong (quality score between 0.75 and 1) [36]. Studies were included in the data synthesis regardless of the quality score. We also examined the total percentage of “Yes” responses for each critical appraisal question to determine the areas of variability in quality between studies with the same study design.
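The scoring and classification procedure above can be sketched as follows; the response sequence is illustrative only and is not taken from any included study:

```python
# Scoring system from the cited JBI review [35]: Yes = 2, Unclear = 1, No = 0.
RESPONSE_SCORES = {"Yes": 2, "Unclear": 1, "No": 0}

def quality_score(responses):
    """Total score divided by the maximum possible score (2 per question)."""
    total = sum(RESPONSE_SCORES[r] for r in responses)
    return total / (2 * len(responses))

def quality_rating(score):
    """Classification thresholds as described in the text [36]."""
    if score < 0.5:
        return "weak"
    if score < 0.75:
        return "moderate"
    return "strong"

# Hypothetical appraisal of one study against an eight-question tool.
responses = ["Yes", "Yes", "Unclear", "No", "Yes", "Yes", "Unclear", "Yes"]
score = quality_score(responses)     # 12 / 16 = 0.75
print(score, quality_rating(score))  # 0.75 strong
```

A score of exactly 0.75 falls on the lower boundary of the “strong” band, consistent with the ranges stated above.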

Data synthesis

Through visually examining the data in tables, we found methodological and topical heterogeneity amongst the included studies (apparent from the varying types of innovations and study outcomes), which warranted a narrative synthesis of the data. WJS used both inductive and deductive content analysis [37] to aggregate study outcomes into categories as detailed below. Two senior research team members (IDG and JES) evaluated and confirmed the accuracy of the performed categorization. WJS deductively categorized each extracted study outcome as either innovation use or as outcomes as described by Straus and colleagues [38]. We specifically defined innovation use in this study as comprising (1) conceptual innovation use: an improvement in knowledge (enlightenment) or attitude towards an innovation (best practices, research use, or technology) (often referred to as conceptual knowledge use [38]); and (2) the use or adoption of an innovation (instrumental knowledge use [38]). WJS further categorized study outcomes as patient, provider, or system/facility outcomes. Examples of patient outcomes included changes in patients’ health status and quality of life. Provider outcomes included provider satisfaction with practice. System/facility outcomes included system-level indicators such as readmission rates, lengths of stay and access to training [38]. Differing from Straus and colleagues [38], we also stratified innovation use into patient, provider and system/facility innovation use according to the level of analysis and intended target for change in the study. Patient and provider innovation use was defined as uptake of an innovation by patients and providers [38]. System/facility innovation use was defined as the adoption of an innovation throughout a whole system or facility; this included, for example, adoption of programs which entailed changes in work culture, policies and workflows [39,40,41].
Moreover, WJS used inductive content analysis to further categorize study outcomes within their respective category of innovation use or outcome according to the type of practice or technology being implemented. For example, the implementation of transfer boards, hip protectors and technology were grouped together, as these innovations pertain to the introduction of new equipment in clinical practice. Study outcomes that could not be coded according to the above classifications were grouped into an “other outcomes” category (e.g. whether formal evaluations were more likely to be conducted).

To answer research question 1, we inductively organized the measures used to identify exposure to champions into three categories: (1) studies that used a single dichotomous (“Yes or No”) or Likert scale, (2) studies that appointed a champion for their study and (3) studies that used more nuanced measures for champion exposure. To answer research questions 2 and 3, we used a predetermined set of vote-counting rules used in published systematic reviews [42,43,44], as outlined in Table 1. As previously suggested by Grimshaw and colleagues [45], we reported the study design, sample sizes, significant positive, significant negative and non-significant relationships, and the magnitude of effect (if reported by the study) for all the studies. We also performed a sensitivity analysis to determine whether the number of categories for which a conclusion could be made, or the conclusion for any category, would change when weak studies were removed from the analysis [43, 46]. Lastly, we conducted a sub-group analysis of the data to evaluate whether our conclusions would change, or whether there were differences in conclusions, between studies that used a dichotomous (presence/absence) measure and studies that appointed champions or used more nuanced measures of the champion strategy.
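The actual decision rules appear in Table 1. Purely as an illustration of the vote-counting logic, a hypothetical majority rule with a four-study minimum (the threshold that limited conclusions in our sub-group analysis) could be sketched as follows; the function name, rule and inputs are our own assumptions, not the published rules:

```python
from collections import Counter

def vote_count(findings, min_studies=4):
    """findings: one label per study, e.g. 'positive', 'negative',
    'mixed' or 'none'. Hypothetical rule: a conclusion requires at
    least min_studies and one direction in more than half of the
    studies; otherwise findings are deemed inconsistent."""
    if len(findings) < min_studies:
        return "no conclusion (too few studies)"
    counts = Counter(findings)
    direction, n = counts.most_common(1)[0]
    if n > len(findings) / 2:
        return direction
    return "inconsistent"

print(vote_count(["positive"] * 5 + ["mixed", "none"]))        # positive
print(vote_count(["positive", "positive", "mixed", "mixed"]))  # inconsistent
print(vote_count(["positive", "positive"]))  # no conclusion (too few studies)
```

Under such a rule, a category dominated by significant positive findings yields a positive conclusion, while evenly split findings are reported as inconsistent, mirroring the “not consistently related” phrasing used in the Results.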

Table 1 Vote-counting rules

Results

Search results

As demonstrated in the flowchart (Fig. 1), the database search identified 6435 records and the additional citation analysis identified 3946 records. Duplicates (n = 2815) were removed using Covidence [32], leaving 7566 articles to screen. After titles and abstracts were screened, 2097 articles were identified as potentially meeting the eligibility criteria. The full texts of 2090 of these articles were reviewed (seven articles could not be retrieved), with 35 studies (37 articles) [39,40,41, 47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80] meeting all the inclusion criteria (Additional file 4 lists excluded full text articles and reasons for exclusion).

Fig. 1

PRISMA 2020 flow diagram
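The screening counts reported above are internally consistent, as the following quick tally shows (figures taken from the text; variable names are ours):

```python
# Records identified (database search plus citation analysis),
# minus duplicates, gives the number screened.
database_records = 6435
citation_records = 3946
duplicates = 2815
screened = database_records + citation_records - duplicates
assert screened == 7566  # matches the number of articles screened

# Articles retained at title/abstract screening, minus those that
# could not be retrieved, gives the number of full texts reviewed.
title_abstract_included = 2097
not_retrieved = 7
full_text_reviewed = title_abstract_included - not_retrieved
assert full_text_reviewed == 2090  # matches the full texts reviewed
```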

Characteristics of included studies

The included studies in our systematic review were primarily conducted in the last 10 years (2010–2020), with the highest proportion of studies conducted in North America (n = 28) and in acute care/tertiary settings (n = 20). The number of health care settings per article ranged from one to 1174 settings and sample sizes ranged from 80 to 6648 study participants. Table 2 summarizes study characteristics, and Table 3 provides more detailed descriptions of each study.

Table 2 Summary of included studies (n = 35)
Table 3 Description of included articles

Methodological quality

Of the 35 included studies, 19 (54.3%) were rated as strong [47, 48, 52, 58,59,60,61,62,63,64,65, 67, 68, 73,74,75,76,77,78,79,80], 11 (31.4%) were rated as moderate [39,40,41, 49, 50, 54, 56, 57, 70,71,72] and 5 (14.3%) were rated as weak [51, 53, 55, 66, 69] (see Additional file 5). Lower methodological quality was generally attributed to the lack of description of study participants and settings, the lack of reliable and valid measures used to assess exposure to champions and study outcomes, and the lack of processes for random allocation and concealment of participant allocation to groups.

Description and operationalization of champions

Overall, there was a scarcity of demographic information reported on the champions. None of the included studies reported the age of the champions, and only one study reported the sex of the champion [80]. Nine studies identified the profession of the champions as either nursing or medicine [49, 51, 54, 55, 66, 70, 72, 74, 75].

Most studies (n = 25 of 35, 71.4%) operationalized champions as the perceived presence or absence of champions by survey respondents, measured by single dichotomous (“Yes/No”) or Likert items. Tables 5 and 6 detail the operationalization of champions for each included study.

Four of the 35 studies (11.4%) described the appointment of champions in their study setting [54, 72, 73, 80]. There was a range of one champion [80] to six champions [54] in these studies [54, 72, 73, 80]. Two of these studies described the activities performed by the champions: (1) training nurses in the Kangaroo mother care and providing educational videos to mothers of neonatal intensive care patients [73] and (2) creating and implementing a protocol related to appropriate urinary catheter use and auditing urinary catheter use [80]. The other two studies detailed the training provided to champions but not their activities [54, 72].

The remaining 6 of 35 studies (17.1%) operationalized champions using five unique subscales (two studies used the same subscale) that assessed the presence of a champion who possessed or performed particular attributes, roles, or activities [50, 59, 61, 68, 77,78,79]. Overall, these measures demonstrate that champions can perform differing roles and activities, from enthusiastically promoting or role modelling behaviour towards a particular innovation to broader leadership roles (e.g. managing or acquiring resources). In 4 of the 6 studies (66.7%) [59, 61, 68, 79], the champion subscale used had acceptable internal consistency (α ≥ 0.70 [97]); one study (16.7%) reported that its champion subscale had low internal consistency (α = 0.43) [50]. In 2 of the 6 studies (33.3%), the authors conducted an exploratory factor analysis and reported that the champion items loaded highly onto a single factor [68, 77, 78]. The champion subscales were part of five larger questionnaires that measured other constructs: (1) organizational readiness in adopting electronic health technologies [56, 63]; (2) organizational factors affecting adoption of electronic mail [45], transfer boards [72, 73] and e-health usage [54]; (3) sustainability of pharmacy-based innovations [74]. Furthermore, none of the included studies evaluated whether the champions’ activities were perceived as helpful by the individuals who were intended to use the innovation. Also, none of the included studies assessed whether there was adequate exposure to champions to produce an effect.
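The internal-consistency threshold cited above (α ≥ 0.70) refers to Cronbach’s alpha. A minimal sketch of the statistic, using illustrative data rather than responses from any included study, is:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a subscale.
    items: list of respondent rows, each a list of item scores.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)."""
    k = len(items[0])                                         # number of items
    item_vars = sum(variance(col) for col in zip(*items))     # per-item variances
    total_var = variance([sum(row) for row in items])         # variance of row totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Perfectly consistent two-item responses yield alpha = 1.0.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Subscales with alpha at or above 0.70 are conventionally treated as acceptably internally consistent, as in the four studies cited above.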

Categorization of study outcomes

Across all 35 studies, we extracted and categorized 66 instances for which the relationships between champions and innovation use or patient, provider, or facility/system outcome were evaluated. Some studies evaluated the relationships between champions and more than one dependent variable. Table 4 presents the relationships between champions and innovation use and/or the resulting impact of innovation use pertaining to patients, providers and systems/facilities for each of the included studies.

Table 4 Summary of champions’ effectiveness in increasing innovation use and improving outcomes

Champions’ effectiveness in increasing innovation use

Twenty-nine studies evaluated the effectiveness of champions in increasing innovation use: five studies evaluated the effectiveness of champions in increasing conceptual innovation use [61, 64, 65, 68, 75, 77, 78], and 25 studies evaluated the effectiveness of champions in increasing instrumental innovation use [39,40,41, 47,48,49,50, 52, 54, 55, 57,58,59, 62, 63, 66, 67, 70,71,72,73,74,75,76, 80]. One study evaluated both conceptual and instrumental innovation use [75]. Based on our vote-counting rules, we were able to draw conclusions between the use of champions and the following three categories: (1) providers’ knowledge and attitudes towards an innovation (conceptual innovation use); (2) providers’ use of an innovation (instrumental innovation use); and (3) a system/facility’s establishment of processes that encourage use of best practices, programs and technology throughout an organization (instrumental innovation use). A description of each conclusion relative to these three categories of innovation use is detailed below. We present the study outcomes organized into their respective innovation use categories, the statistical analyses and approaches, and the test statistics and measures of magnitude supporting our conclusions in Table 5.

Table 5 Champions’ effectiveness in increasing patient, provider and system/facility’s innovation use

Champions’ effectiveness in increasing provider conceptual innovation use

Four studies evaluated the effectiveness of champions in improving providers’ attitudes and awareness of new technology or equipment (conceptual innovation use) [61, 64, 65, 68, 77, 78]. One of the 4 studies used a quasi-experimental design [77, 78], while the other three studies were cross-sectional observational studies [61, 68, 77, 78]. Two of the 4 studies (50%) reported that champions were effective in increasing provider conceptual innovation use [61, 64, 65], and 2 of the 4 studies (50%) reported mixed findings regarding the effectiveness of champions in increasing provider conceptual innovation use [68, 77, 78]. Therefore, our findings suggest that the use of champions in these four studies [61, 64, 65, 68, 77, 78] was, overall, not consistently related to providers’ conceptual innovation use of new technology or equipment.

Champions’ effectiveness in increasing provider instrumental innovation use

Seventeen studies evaluated the effectiveness of champions in increasing health care providers’ use of innovations (instrumental innovation use) [47,48,49, 52, 54, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76, 80]. One of the 17 studies was a clustered RCT, while 2 of the 17 studies used a quasi-experimental design [54, 80], and the remaining 14 studies were observational studies [47,48,49, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76]. Eight of the 17 studies (47.1%) reported that champions were effective in increasing providers’ use of innovations [49, 52, 57, 62, 67, 72, 76, 80]. Six of the 17 studies (35.3%) reported mixed findings regarding the effectiveness of champions in increasing providers’ use of innovations [47, 48, 54, 58, 63, 66]. Two of the 17 studies (11.8%) reported that no relationship exists between champions and providers’ use of innovations [55, 70], and one of the 17 studies (5.9%) reported that champions decreased providers’ use of an innovation [74]. Therefore, our findings suggest that the use of champions in these 17 studies [47,48,49, 52, 54, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76, 80] was, overall, not consistently related to providers’ use of best practice or technological innovations.

Champions’ effectiveness in increasing system/facility instrumental innovation use

Seven studies evaluated the effectiveness of champions in increasing systems/facilities’ adoption of technology, best practices and programs (instrumental innovation use) [39,40,41, 50, 59, 71, 75]. One of the 7 studies used a quasi-experimental design [39], while the remaining studies used observational study designs [40, 41, 50, 59, 71, 75]. Five of the 7 (71.4%) studies reported that champions were effective in increasing the formation of policies and processes and increasing uptake of technology at hospitals [59] and nursing homes [39], best practices in public health and pediatric practices [75] and programs in primary care clinics [41, 71]. One of the 7 (14.3%) studies reported that mixed findings exist regarding the effectiveness of champions in increasing the adoption of a depression program in primary care clinics [40] and 1 of the 7 (14.3%) studies reported that champions had no effect in increasing uptake of electronic mail at academic health science centres [50]. Therefore, across these seven studies [39,40,41, 50, 59, 71, 75], the use of champions was overall related to increased use of technological innovations, best practices and programs by systems/facilities.

Champions’ influence on outcomes

Ten studies evaluated the effectiveness of champions at improving outcomes. Six of the 10 studies evaluated the effectiveness of champions in improving patient health status or experience (patient outcomes) [41, 51, 53, 57, 60, 76]. One of the 10 studies evaluated the effectiveness of champions in improving provider’s satisfaction with the innovation [77, 78], and three studies evaluated the effectiveness of champions in improving system/facility-wide outcomes such as quality indicators [56], the establishment of organizational training programs [69], or sustainability of programs [79]. Based on our vote-counting rules, we drew conclusions between the use of champions and patient outcomes (see Table 6).

Table 6 Champions’ effectiveness on patient, provider and system/facility’s outcomes

Champions’ influence on patient outcomes

Six studies evaluated the effectiveness of champions in improving patient outcomes [41, 51, 53, 57, 60, 76]. All six studies used observational study designs. Three of the 6 studies (50%) reported that champions were effective in decreasing adverse patient outcomes [51, 53] or improving patients’ quality of life [60], while the other three studies (50%) reported that champions did not have a significant effect in improving patients’ standardized depression scale scores [41], patients’ laboratory tests and other markers related to diabetes [76] or their satisfaction with health services [57]. Therefore, across these six studies [41, 51, 53, 57, 60, 76], the use of champions was, overall, not consistently related to improvements in patient outcomes.

Champions’ effectiveness on innovation use and outcomes

Three of the 35 studies evaluated the effectiveness of champions in increasing both innovation use and outcomes [41, 57, 76]. In these three studies, the use of champions improved health care providers’ use of best practices [57, 76] and the uptake of a depression program by facilities [41] but did not impact patient outcomes.

Sensitivity analysis and sub-group analysis of data

We found that when weaker quality studies were removed, the number of categories on which we could make conclusions, and their respective conclusions, did not change. Further, our conclusions did not change when we examined study findings across the studies (n = 25 of 35, 71.4%) that operationalized champions using dichotomous (presence/absence) measures. We could not make conclusions, but observed trends, across studies that used more nuanced measures or appointed champions for their study (n = 10 of 35, 28.6%), because the categories of innovation use and outcomes in this subset had fewer than four included studies. In this subset of studies, a positive trend was suggested between use of champions and improvement in provider instrumental innovation use, according to three quasi-experimental studies [54, 72, 80].

Discussion

Summary of study findings

In this review, we aimed to summarize how champions are described and operationalized in studies that evaluate their effectiveness. Secondly, we assessed whether champions are effective at increasing innovation use or improving patient, provider and system/facility outcomes.

Description and operationalization of champions

We found that most studies evaluating the effectiveness of champions operationalized exposure to champions using a single-item scale that asked whether participants perceived the presence or absence of a champion. Furthermore, we found that minimal demographic information was provided regarding the champions in the included studies. Our findings add to the review by Miech and colleagues [21] by revealing four subscales [50, 59, 77,78,79] measuring champions in addition to the three champion subscales [40, 68, 100] they cited in their review. Our results reinforce Miech and colleagues’ [21] claim that more nuanced measures are needed to examine champions, as our review also found only champion subscales and did not locate a full instrument intended to measure the champion construct.

Champions’ effectiveness

Our review demonstrates that causal relationships between deployment of champions and improvement in innovation use and outcomes in health care settings cannot be drawn from the included studies because of the methodological issues (i.e. lack of description of champions, lack of valid and reliable measures and use of observational study designs) present in most of these studies. Hall and colleagues also found low-certainty evidence pertaining to champions’ effectiveness in guideline implementation in long-term care [22]. When we tried to make sense of the evidence, we found that across seven studies, champions were related to increased use of innovations at an organizational level [39,40,41, 50, 59, 71, 75]. Our findings indicate that champions do not consistently improve providers’ attitudes and knowledge across four studies [61, 64, 65, 68, 77, 78], providers’ use of innovations across 17 studies [47,48,49, 52, 54, 55, 57, 58, 62, 63, 66, 67, 70, 72, 74, 76, 80], or patient outcomes across six studies [41, 51, 53, 57, 60, 76]. We found only one study suggesting that the use of champions is associated with decreased provider instrumental innovation use [74], and none of the studies reported that the use of champions resulted in worse outcomes or harms. Damschroder and colleagues [101] reported that a single champion may be adequate in facilitating the implementation of technological innovations, but a group of champions composed of individuals from different professions may be required to encourage providers to change their practices. Furthermore, the many mixed findings pertaining to the effectiveness of champions could be related to the lack of (1) description of the champions; (2) fidelity of the champion strategy; (3) evaluation of champions’ activities and level of exposure to champions; and (4) assessments of confounding contextual factors affecting champions’ performance.
According to Shaw and colleagues [102], champions can undertake many roles and activities, and the assumption that champions operate in a similar manner may make comparisons difficult if these distinctions are not clarified.

Our results lead to conclusions similar to those of the four previously published reviews on champions [21,22,23,24]. However, as detailed in this ‘Discussion’ section, our review (1) synthesized more quantitative evidence across varying health care settings and innovation types, reinforcing the conclusions of past reviews; (2) highlighted areas in which research on champions and innovation use and outcomes remains inadequate; (3) identified four additional scales used in champion effectiveness studies that were not cited in previous reviews; and (4) provided implications of our findings for research and the deployment of champions.

Implications of study findings

One implication of our study findings is that they provide a summary of 35 studies that evaluated the effectiveness of champions across varied health care settings and innovation types. Furthermore, we identified areas in which the effectiveness of champions was not well examined: (1) patients’ innovation use, (2) organizational conceptual innovation use, (3) provider outcomes, and (4) system/facility outcomes. In addition, our findings suggest that individuals who are considering mobilizing champions should begin by reflecting on their intended implementation goal (innovation use or outcomes by patients, providers, or systems/facilities). If the goal is to increase organizational use of innovations, there is some evidence that the use of champions may be beneficial. However, if the goal is to improve innovation use by providers and patients, or to improve outcomes, individuals implementing EBP should be cautious when using champions until more conclusive evidence of their effectiveness for these goals exists. Although there is a lack of evidence suggesting that the use of champions can be harmful, deploying champions carries opportunity costs (e.g. clinician time and sometimes monetary costs) that may be better spent on a different implementation strategy. Furthermore, our findings imply that future effectiveness studies should examine whether champions perform distinct roles or activities depending on the innovation type or the level of implementation (i.e. system/facility or individual (providers or patients)). Differentiating between the several types of champions present in implementation requires future studies to provide more detailed descriptions of the champion strategy. One way to achieve this objective is through the development and use of valid, reliable and pragmatic tools that evaluate champions’ activities and exposure to champions.
A second means is through the conduct of process evaluations in conjunction with experimental studies. Strauss and colleagues [38] defined process evaluations as qualitative or mixed-methods studies intended to describe the process of implementation and the experiences of the individuals involved in it. Michie and colleagues [103] also highlighted that triangulating qualitative data with the findings of experimental studies would increase the validity of the conclusion that an observed change is due to the applied knowledge translation strategy. Lastly, process evaluations may also help inform the optimal dose of champions required to achieve an implementation goal [38].

Limitations

Limitations of our review

Apart from theses and dissertations, we did not consider other grey literature in this review. Moreover, our eligibility criteria excluded studies that were not written in English. Further, our conclusions, made through vote counting, do not consider the effect size of each individual study or its sensitivity in estimating that effect size [45]. We tried to mitigate this limitation by reporting both the effect sizes and the sample sizes for each study. Moreover, as we only included studies that explicitly called the individual a champion, our review excluded studies that deployed an individual who could have performed similar roles or activities as a champion but was not labelled as one.

Limitations in the primary studies

The heterogeneity in methodology, outcome measures and topics across the included studies did not allow us to conduct a meta-analysis to calculate the magnitude of champions’ effectiveness. The lack of description or evaluation of champions’ attributes, roles and activities in most of the studies makes it difficult to decipher why the effectiveness of champions was primarily mixed. In addition, the minimal use of experimental research designs and of reliable and valid measures to assess exposure to champions across the included studies makes it impossible to draw causal conclusions. Lastly, the included studies were mostly conducted in North American or European countries; hence, these findings may not be pertinent to other continents.

Conclusions

We aimed to evaluate the effectiveness of champions in improving innovation use and patient, provider and system/facility outcomes in health care settings. In five of seven studies, champions were positively associated with the use of innovations by systems/facilities. The effectiveness of champions in improving innovation use by providers and patients, or in improving outcomes, was either inconclusive or unexamined. There was little evidence that champions were harmful to implementation. To mitigate the uncertainty related to champions’ effectiveness, their deployment should be accompanied by a plan (1) on how the use of champions will achieve goals or address barriers to implementation; (2) defining and evaluating the fidelity of champions’ activities; and (3) evaluating champions’ effectiveness.