Background

Researchers, health policymakers, leaders, educators, and health-research collaboratives are becoming increasingly interested in effective ways to rapidly translate research into practice to improve healthcare delivery systems, and ultimately, health outcomes [1,2,3]. The field of implementation science has exploded over the past two decades, as more evidence has been generated to support strategies for translating research evidence into health practice and policy successfully, sustainably, and at scale [4]. Concurrently, there is growing recognition of the need to develop capacity within healthcare settings and among health professionals to promote evidence-based knowledge translation practices, and enable the consistent, timely and sustained use of research evidence in health practice [3, 5, 6]. This recognition has led to the emergence of education programmes in the field of implementation and dissemination science, many of which have been led by universities and target academic researchers [7, 8]. Few education programmes have specifically focused on developing knowledge translation skills in health professionals [4]. Developing the capacity and capability of healthcare services and health professionals to adopt, adapt, and implement research evidence is critical to the sustainability of healthcare delivery systems [3, 5]. For this paper, the term capacity is defined as the readiness of, and access to, the resources needed for individuals and organisations to engage in knowledge translation. Capability is defined as the knowledge and skills individuals require to engage in translation practice [9, 10].

Initiatives such as the establishment of research translation centres, academic health science centres and clinical research networks have also sought to drive integrated evidence-based healthcare delivery [11]. Investment has been made in the strategic implementation of roles such as embedded researchers [12], knowledge brokers [2, 13], mentors [14], and implementation support practitioners [3], in a bid to support the active, timely and sustained translation of research in healthcare settings. The existing evidence supporting the implementation, outcomes and sustainability of these, and other strategies, to promote the translation of research into healthcare practice, has not been reviewed systematically.

This review was undertaken as part of a broader programme of work to promote the rapid translation of research knowledge into rural and regional healthcare settings. Currently, published reviews of strategies to build knowledge translation capacity have focused predominantly on education, training and initiatives led by academic institutions and targeting either academic researchers or a mix of researchers and health professionals [4, 8]. Other reviews have focused on programmes to develop evidence-based practice knowledge, skills and capabilities for health professionals to conduct practice-based research [15, 16]. Another published review investigated the accessibility of online knowledge translation learning opportunities available for health professionals [17]. This current review aims to fill the gap in the literature by scoping the evidence on programmes that aim to build capacity and capability within settings in which healthcare is delivered to patients or consumers (healthcare settings) and in health professionals, to implement research in practice.

A search of the Cochrane Database of Systematic Reviews, Joanna Briggs Institute (JBI) Evidence Synthesis, PROSPERO and Google Scholar for reviews of knowledge translation capacity and capability building programmes and models in healthcare settings yielded no existing or planned reviews. The decision to undertake a scoping review, rather than a conventional systematic review, was based on three key factors: (1) the heterogeneity evident in knowledge translation capacity and capability building programmes and models implemented in healthcare settings; (2) the absence of an existing synthesis of evidence for knowledge translation capacity and capability building programmes delivered in healthcare settings or for health professionals; and (3) the need to identify the gaps in knowledge about these types of programmes [18].

This review aimed to scope the literature describing programmes or models designed to build capacity and capability for knowledge translation in healthcare settings, and the evidence supporting these programmes and models. The specific review questions were:

  1. What models or approaches are used to develop knowledge translation capacity and capability in healthcare settings?

  2. How are the models and approaches to building knowledge translation capacity and capability funded, and the efforts sustained in healthcare settings?

  3. How are these models or approaches evaluated and what types of outcomes are reported?

Methods

This review used the JBI scoping review methodology. Search terms were developed for population, concept and context (PCC). The review questions, inclusion and exclusion criteria and search strategies were developed in advance (Additional File 1 Scoping Review Protocol). The review is reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for scoping reviews (Additional File 2 PRISMA-ScR checklist) [19].

Search strategy

The JBI three-step search strategy was applied. The researchers identified a set of key papers based on their knowledge of knowledge translation capacity and capability building programmes. These papers were used to identify key search terms. In consultation with the research librarians (FR and JS; see “Acknowledgements”), the research team conducted preliminary scoping searches to test the search terms and strategy and refine the final search terms. A tailored search strategy using the search terms was developed for each academic database (Additional file 3 Search Strategy).

Academic databases searched included Ovid MEDLINE, CINAHL, Embase and PsycInfo. Selected grey literature platforms, chosen based on the researchers’ knowledge of relevant websites and organisations, were searched. Where larger search yields were observed (e.g. via Google and Google Scholar), the first 250 items were reviewed (Additional file 4 Grey Literature Searches). The final research database searches were conducted on 30 December 2022 by a researcher with extensive systematic literature searching experience (EW) in consultation with the research librarians. Grey literature searches were conducted on 15 March 2023. Searches of the reference lists of included records and forward citation searches were also undertaken.

Inclusion criteria and exclusion criteria

Literature was selected according to predefined inclusion and exclusion criteria developed using the PCC framework (see Table 1). Research education or capacity-building programmes delivered to qualified health professionals working in healthcare settings in high-income countries (HICs), as defined by the Organisation for Economic Co-operation and Development (OECD) [20], were included. The HIC criterion was applied to introduce a level of homogeneity around the broader resource contexts of the study populations [21]. No date limits were applied, and all types of literature published up to 30 December 2022 were included. Due to resource limitations, only literature published in English was included.

Table 1 Inclusion and exclusion criteria

Study selection, quality appraisal and data extraction

Citations were imported into Covidence (Veritas Health Innovation, Melbourne, Australia) for screening. Titles and abstracts were screened independently by two reviewers, with conflicts resolved by a third independent reviewer. Similarly, full texts were reviewed by two researchers and the reasons for exclusion were documented (Additional file 5 Excluded Studies). Data were extracted from the included texts by two independent researchers. All texts were reviewed by a second researcher to ensure the accuracy and consistency of data extraction. Formal quality appraisal was not undertaken as part of the scoping review, in line with this methodology [18].

Extracted data were tabulated and results were synthesised using a descriptive approach guided by the review objectives. The distinction between capacity and capability building strategies described in the papers was not drawn or analysed as part of this review. Outcomes measured and reported in the papers were synthesised descriptively, guided by the review objectives and the scoping review methodology, and drawing on Cooke’s framework [22]. Although Cooke’s framework was initially developed for evaluating research capacity building, its four structural levels of impact are also relevant to informing and evaluating approaches to building capacity for research translation to impact health practice [23]. Sustainability, although not included as a key concept in the initial database searches, was considered in relation to both programme funding and the maintenance and spread of programme outcomes [24]. Sustainability features were identified throughout the data extraction and synthesis processes.

Results

Of the 10,509 titles and abstracts screened, 136 records were included for full-text screening. Of these, 34 met the inclusion criteria; the reasons for exclusion of the remaining 102 articles are shown in Fig. 1. Citation searching of the initial set, comprising hand-searching of reference lists and forward citation searching, identified an additional three papers [25].

Fig. 1 PRISMA flow diagram

Knowledge translation capacity and capability building programme delivery

A total of 37 papers examining 34 knowledge translation capacity and capability building programmes were included in this review. A summary of the knowledge translation capability building programmes and their characteristics is shown in Table 2. Programmes were delivered in Australia [6, 26,27,28,29,30,31,32,33,34,35,36,37,38], Canada [39,40,41,42,43,44,45,46,47], England [48,49,50,51,52], the United States of America [53,54,55], Sweden [56, 57], Scotland [58], Saudi Arabia [59] and in multiple countries [11, 60], and were implemented from 1999 to 2021. Programmes tended to target a mix of health and research professionals; however, some targeted specific groups including allied health professionals [26, 27, 29, 32, 35,36,37, 55], nurses [34, 45, 49], doctors [59], managers [57] and cancer control practitioners [53].

Table 2 Knowledge translation capability building programme characteristics

Strategies for building knowledge translation capacity and capability in health professionals and healthcare settings

Various capacity and capability building strategies were identified in the programmes. More than half of the programmes were described as using a combination of two or more strategies to build knowledge translation capability [6, 11, 26, 27, 30, 32, 33, 36,37,38,39,40,41,42, 46, 47, 49, 52, 55,56,57,58]. Programmes commonly involved targeted training and education for individuals and teams, delivered predominantly in the healthcare workplace, with few delivered in universities [31, 51, 59, 60], or other settings (e.g. partnership organisations) [28, 47]. Education was frequently employed in concert with other strategies such as dedicated implementation support roles [6, 26, 27, 32, 36,37,38,39,40,41,42, 46, 47, 49, 52, 55,56,57,58].

Other initiatives included strategic research-practice partnerships, typically between a health service and academic institution [11, 28, 38, 49], collaboratives (three or more research-interested organisations) [11, 32, 33, 40,41,42, 46,47,48, 52, 56, 58], co-designed knowledge translation capacity-building programmes with health professionals or health programme managers [11, 26, 27, 36, 37, 47, 57, 58], and dedicated funding for knowledge translation initiatives [33, 39]. The programmes reporting isolated strategies utilised education [31, 34, 43, 44, 51, 53, 54, 59, 60], a support role [29, 35, 45, 50] and research-practice partnerships [28].

The duration of the programmes varied considerably, from 1-day workshops to upskill implementation leads [6] to comprehensive 3-year support programmes [39]. In some cases, programmes involving the implementation of a support role were described as ongoing [29, 35].

Pedagogical principles and theoretical frameworks

The pedagogical principles or learning theories underpinning the capability building programmes were rarely described explicitly, but rather were implied in the descriptions of the programmes. Many programmes purposefully made time in the curriculum for group work to foster connections with peers and promote social learning [6, 11, 26,27,28, 33, 34, 36,37,38, 40,41,42, 44, 46,47,48,49,50,51,52,53,54,55,56,57,58,59,60]. Further, experiential learning or “learning by doing” whereby participants applied their new knowledge and skills to a real-world project or knowledge translation initiative [61] was commonly described as a core component of capability building programmes [6, 26,27,28, 30, 31, 34, 36,37,38,39,40, 43, 44, 46, 47, 49,50,51, 53,54,55,56,57, 59]. Passive learning through didactic teaching (e.g. via lectures, seminars or webinars) was a common feature of education strategies [6, 31, 39, 43, 44, 46, 49, 51,52,53,54,55,56, 58,59,60]. Many programmes also incorporated individual or team-based mentoring with a more experienced knowledge translation specialist or researcher [26,27,28,29,30, 32, 35,36,37, 39,40,41,42, 44,45,46, 54, 55]. Behaviour change theory or techniques were referenced by a few studies [6, 43, 57]. Self-efficacy theory informed three programmes [34, 36, 37, 44]. Finally, one programme incorporated debate as a pedagogy [59].

Programme funding and sustained outcomes of the knowledge translation capacity and capability efforts

Sources of funding for the programmes included research institutes (e.g. Swedish Research Council, NIHR, Canadian Institutes of Health Research) [6, 11, 39,40,41,42, 44, 45, 47,48,49,50, 52, 60], government health departments (e.g. ministries or states responsible for health funding) [11, 26, 27, 32, 33, 53, 59], health services or academic health science centres [29, 31, 46, 55, 58], small grants [56, 57] and a university [54]. Five papers made no reference to a funding source [28, 30, 34, 43, 51].

Wenke [35] identified measures to promote the financial sustainability of the Health Practitioner Research Fellow role, including “Additional research and administrative funding, the use of technology and team based research” (p. 667). Proctor [54] identified the reliance on a single funding source for subsidising the TRIPLE programme as a threat to its sustainability. Similarly, Robinson [11] identified the 5-year funding cycles for Applied Research Collaborations as a factor undermining their sustainability. Gerrish [49] identified time limited funding of the Collaboration for Leadership in Applied Health Research and Care (CLAHRC) as a prompt to focus on “securing research grants and capitalising upon a range of opportunities for knowledge translation within a broader agenda focused on quality, innovation, productivity and prevention” (p. 223–224).

Few papers explicitly described factors or mechanisms to sustain the efforts and outcomes of the knowledge translation capability building programmes. Moore [43] described potential mechanisms to ensure the sustainability of the Practicing Knowledge Translation programme, such as delivering online courses. Young [37] identified the adaptability of the AH-TRIP programme as a key sustainability feature, along with the establishment of a dedicated working group to conduct a formal sustainability assessment. Hitch [29] described the development of a senior leadership position for knowledge translation in occupational therapy, in which key deliverables included the development of documentation and resources to support the ongoing sustainability of the position. Similarly, Sinfield [52] described the development of a bank of resources housed on the CLAHRC website, a “train the trainer” model, and e-learning resources to sustain the capability building efforts.

Eleven programmes were guided by a knowledge translation theory or framework, such as the Knowledge to Action (KTA) cycle [26, 27, 43, 44, 58, 59], the Dobbins (2002) framework [40] and the National Collaborating Centre for Methods and Tools resources, used to frame the education programme [41, 42]. Martin [31] used the Consolidated Framework for Implementation Research (CFIR) to guide the implementation of the programme. Mickan [32] referred to the use of knowledge management theory, the linkage and exchange model and the social change framework to inform the functions of the knowledge brokers implemented in their programme. Morrow [6] used the Theoretical Domains Framework (TDF) and behaviour change theory in the development of their intervention. Mosson [56] used the principles of training transfer to inform their education programme.

Programme implementation level of influence and manager engagement

Programmes were categorised according to their implementation at four structural levels of impact in accordance with Cooke’s [22] framework: individual, team, organisational and supra-organisational. See Table 3 for the levels, definitions and citations. Interventions implemented at the individual or team level aimed to build the knowledge translation capacity of individuals and teams through increased knowledge, self-efficacy, research culture and engagement in knowledge translation. Programmes targeted at individuals included university courses [31, 51, 60], workplace training [44, 54, 57] and fellowship programmes [30]. Several programmes delivered training in a team environment to facilitate potential collaboration [26, 27, 36, 37, 39, 53]. Some larger-scale training interventions were implemented at an organisational level. For example, one study delivered workshops to teams across 35 units from different organisations [56]. Interventions aimed at the organisational level most commonly took the form of dedicated research support roles embedded within health organisations. These roles often involved educating interested health professionals through various means [29, 32, 41, 42, 45], strengthening research culture [45], engaging stakeholders [32], developing partnerships or collaborations [35, 45] and building research infrastructure [29, 35, 41, 42]. Other organisational strategies included secondments, which provided health service staff with protected time to engage in knowledge translation endeavours [50]. Strategies implemented at the supra-organisational level generally aimed to improve healthcare practice through collaboration, and typically involved multifaceted initiatives of cross-organisational research collaborations such as CLAHRCs [48, 52] and Research Translation Centres [11]. In other cases, clinical-academic collaborations were fostered through a competitive funding initiative [33] and the development of communities of practice [58].

Table 3 Knowledge translation capability building programmes’ levels of impact

First line (middle) or senior executive managers were described as integral to many of the programmes to develop knowledge translation capacity and capability across the four levels of impact. Manager involvement was enacted in several ways: managers as programme participants [29, 38, 41,42,43,44, 46, 48, 49, 51,52,53,54, 56, 57, 60]; engagement or overt support of managers [26, 27, 32, 40, 50, 55]; letters of intent or support for team member participation [34, 39, 44]; manager involvement in the delivery of the strategy [38, 48]; and co-design of the programme with managers [58]. In one programme, managers were required to sign off to demonstrate their overt support for the programme and were subsequently involved in its delivery [56]. Several papers noted the presence of and/or need for manager involvement or support in their outcomes or findings [11, 31, 35, 45, 49, 55]. One paper, describing a programme targeting doctors working in the family medicine context, did not explicitly refer to the involvement of managers; however, it did note that doctors in these settings also filled a managerial role [59].

Programme and model evaluation

Evaluation methods

Twenty-seven programmes underwent some degree of formal evaluation, with defined aims and methods described to varying levels of detail in the papers (Table 4). The outcomes of the remaining seven programmes were described as the authors’ general reflections or learnings from an informal or otherwise undescribed evaluation process [34, 38, 45, 49, 51, 52, 58]. Data collection methods used in the evaluations included surveys [26,27,28,29,30,31, 39, 40, 43, 44, 46, 53,54,55,56,57, 59, 60], individual interviews [6, 11, 28, 30,31,32, 35,36,37, 43, 46,47,48, 50, 54, 56, 57], author reflections [26, 27, 34, 38, 45, 49, 51,52,53, 58,59,60], focus groups [26, 27, 30, 31, 35, 41, 44, 50], documentary analysis [28, 40, 42, 48, 55], attendance records [42, 43, 55], measured research outputs [29, 38] and observed changes to clinical guidelines, practice, or networks [38, 39, 55]. Twenty-four programmes were evaluated using multiple data collection methods.

Table 4 Evaluation and outcomes reported

Outcomes measured or described

Although the outcomes measured and reported varied considerably across the 34 programmes, all papers reported positive outcomes and the achievement of the programme objectives to varying extents. Outcome measures utilised in programme evaluations included participant self-reported improvements in knowledge, skills or confidence [6, 26, 27, 29, 31, 33, 34, 39, 41, 43, 44, 46, 50, 53, 54, 56, 57], participant satisfaction with or perceived quality of the programme [6, 28, 34, 37, 46, 47, 52,53,54, 56, 57, 59, 60], participant experiences of the programme [11, 29, 30, 32, 34, 36, 44, 46, 48, 50, 51], participant self-reported changes to clinical or knowledge translation practice, guidelines or organisational policy [27, 31, 33, 41, 43, 44, 47, 56, 57], barriers and enablers of knowledge translation [11, 26, 27, 35, 36, 39, 45, 50, 55], attendance at or engagement with the programme [28, 37, 39, 40, 42, 43, 55], perceptions of organisational culture [26, 27, 35, 41, 48,49,50], observed or reported behaviour change (e.g. knowledge translation leadership development or changed clinical practice) [35, 38, 39, 49, 55], milestone achievement (e.g. implementation plans completed) [33, 37, 39, 53], new or expanded partnerships, collaborations or networks [28, 29, 33, 35], traditional research outputs (e.g. papers, conference presentations, grants) [29, 33] and interest in the programme or new applications [60].

Strengths and limitations of evaluation studies

Programme evaluations were strengthened by the inclusion of multiple outcome measures. Studies which incorporated multiple outcome measures often used a combination of self-reported outcomes or experience and more objective outcomes or observations such as milestone achievement [29, 36, 37, 39, 53, 55], observed behaviour change [35, 39, 44, 55], new or strengthened collaborations [29, 33], research outputs [29], observed skill development [59] and programme cost [36, 37]. Ten programme evaluations were informed by existing theoretical models including the Kirkpatrick Model [6, 46, 53, 54, 56, 57], the TDF [26, 27], the CFIR [31], the Promoting Action on Research Implementation in Health Services framework [44], reflexive thematic analysis against knowledge brokering theory and practice [32] and the Canadian Academy of Health Sciences’ Framework for Evaluation and Payback Framework [33]. One programme used both the Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework and the TDF in two separate evaluations of the same programme [36, 37]. Evaluations tended to focus on short-term outcomes [6, 33, 46, 53, 59, 60]; however, there were examples of longer-term programme evaluation, defined as data collected beyond 12 months post-programme delivery [26,27,28,29, 31, 32, 35,36,37,38,39, 41, 42, 44, 47, 49, 57].

There were some common limitations in the programme evaluations identified across the included papers. No studies measured health outcomes as a result of the programme. Several focused on only one outcome, such as programme attendance or engagement [40], research outputs [38], perceptions or experiences of the programme [30, 32, 51], participant self-reported changes in knowledge, skills or confidence [6], barriers and enablers of knowledge translation [45], or satisfaction with or perceived quality of the programme [52]. One study did not identify any specific measurable outcomes [58]. Other commonly identified limitations were the use of self-reported outcomes only [26, 27, 33, 48, 50, 56, 57], and outcomes reported or measured among self-selected and likely more engaged and invested participants [11, 26, 28, 39, 44, 46, 47, 57]. Papers commonly described small sample sizes or low response rates in evaluations requiring participant involvement (e.g. surveys, interviews) [6, 11, 29,30,31,32, 37, 46, 47, 53]. Studies were often conducted at a single site or with a single cohort [29, 35, 41, 42, 44, 48], limiting the generalisability of the findings. Furthermore, programme evaluations were often limited by the inclusion of programme participants only in the data collection activities (e.g. [11, 26, 31, 36, 37, 47, 53]), i.e. there were no comparisons or controls. However, some programme evaluations engaged a broader range of relevant stakeholders to identify a more diverse range of outcomes at various levels of impact [28, 34, 35, 41, 42, 48, 50]. A notable example is the case study evaluation undertaken by Wenke [35], in which the relevant healthcare service executive director, the incumbent holding the implementation support role, and six clinicians who had worked with the incumbent participated in the evaluation. Similarly, Haynes’ [28] evaluation involved the chief investigators, members of the research network, and policy, practitioner and researcher partners.

There was an apparent lack of attention to the sustainability of programmes in the evaluations, and only one evaluation incorporated an economic evaluation of the programme [37]. Further, measures of objective or observed behaviour change were included in only a few evaluations, including increased clinician engagement in research [35], the attraction of research funding [38], clinical service changes [35] and sustained knowledge translation practices post-programme participation [39, 44, 50, 55].

Although some studies investigated perceptions of organisational impacts such as research or knowledge translation culture [26, 27, 41, 42, 48,49,50] and barriers and enablers of knowledge translation [11, 26, 27, 35, 36, 39, 45, 50, 55], only one programme evaluation utilised a validated tool or approach to measuring organisational factors (the TDF) [26, 27]. Few studies used validated data collection tools to measure any types of outcomes [26, 27, 44, 54]. Only one evaluation included a control group in the data collection and analysis [31]. One study evaluated participant satisfaction and engagement with the programme only [60]. Poorly described or informal evaluation methods were identified in several papers [34, 38, 40, 45, 49, 51, 52, 55, 58, 59].

Discussion

To our knowledge, this is the first scoping review of programmes designed to build capacity and capability for knowledge translation in healthcare settings. We sought to identify the models and approaches to building knowledge translation capability in healthcare settings, including the types of strategies used, the underpinning theories, funding sources, sustainability features, mechanisms of evaluation and the outcomes measured and reported. Our findings indicate that this is an area of increasing research interest and practice internationally [3]. We identified numerous types of strategies in place to promote knowledge translation capability in health settings, and an array of outcomes measured and reported in evaluations of these programmes.

Education was the most frequently described strategy and was delivered most often within health settings, followed by universities and other organisations. Education was often delivered in concert with other strategies, including implementation support roles, co-design of capability-building initiatives, funding for knowledge translation, and strategic research-practice partnerships and collaboratives. This suggests that education is the cornerstone of knowledge translation capability building. It also points to the widespread recognition of the complexity of knowledge translation in practice [62,63,64,65] and the need to take a multifaceted approach to developing health professionals’ and healthcare services’ capacity and capability.

Programmes were implemented at four structural levels of impact [22]. Translating research into health practice requires the active involvement of and collaboration with various stakeholders [66, 67]; therefore, programmes aimed at the team, organisational and supra-organisational levels are more likely to see meaningful and sustained outcomes and impacts beyond the life of the programme and evaluation [4, 5, 68]. Social and experiential pedagogies [69] and mentoring featured prominently in the capability building programmes analysed as part of this review. Although didactic learning featured in many of the programmes in our review, this approach was complemented by either collaborative or experiential learning, or both. In contrast, Juckett et al. [4] found didactic coursework was a prominent feature within academic initiatives that aimed to advance knowledge translation practice.

Many programmes appeared to be dependent on time-limited funding or non-recurrent grants (government or philanthropic), with some only funded for discrete periods of time [36, 37, 45]. There were no references to ongoing funding sources to enable programme development, delivery or evaluation. This lack of certainty around funding and resourcing may undermine the continuity, quality, sustainability and impact of knowledge translation capability building programmes. Knowledge translation capacity-building programme leads can optimise the opportunities for ongoing funding by producing high-quality evaluations demonstrating impact on practice, and the alignment of their programmes with broader health policy agendas (e.g. promoting equity and quality in healthcare, and reducing inefficiencies) [70].

Although not always described explicitly as sustainability measures, such measures were evident in numerous programmes, integrated to maintain and further spread the impact of the programmes within the setting [24]. One of the sustainability features was the active engagement of managers in many of the programmes described in this review [26, 27, 29, 32, 34, 38,39,40, 42,43,44, 46, 48,49,50,51,52,53,54,55,56,57,58, 60]. This reinforces recognition of the role of middle managers in supporting and mediating health practice changes, and in building capacity and positive attitudes toward knowledge translation within their teams [71, 72]. For organisations in which health professionals work independently, such as in medical and family practices, the middle manager role may be filled by the health professionals themselves [59]; therefore, strategies tailored to these settings and individuals are needed.

Both the content and implementation of several programmes were informed by knowledge translation theories or frameworks, which suggests a level of integrity in these programmes and the commitment of those developing and delivering the programmes to the theory and practices they seek to promote in participants. Furthermore, utilising evidence-informed implementation may promote the sustainability of the intervention in practice, and sustained outcomes and impact of the programme [73]. Several programmes were co-designed with end-users [26, 37, 38, 47, 57, 58], which not only increases the suitability of the programme to the local context but also increases a sense of ownership of the capability building programmes and strategies, and the potential to enhance sustainability [68, 74]. Other benefits of the early involvement of end-users in the development of capacity-building programmes in health settings include the integration of features and complexities reflective of the healthcare environment, and improved adoption and adaptation [75, 76]. This accentuates the need for future knowledge translation capacity and capability building programmes to be co-designed with end-users.

Overall, we found that programmes’ targeted levels of impact rarely corresponded with the outcomes measured in their evaluation. This highlights the need to develop standardised, or at least streamlined, frameworks that can be adopted by those leading the delivery or evaluation of programmes, to facilitate the planning and execution of appropriate evaluation. That is, if the programme targets the individual level, the outcomes measured should relate to individuals (for example, self-reported improvements in knowledge, attitudes, and satisfaction with the programme); if it targets the organisational level, outcomes such as research culture, the formation of new partnerships with research institutions, or changes to organisational practice and policy should be measured. Similarly, programme evaluations rarely made clear the timeframe over which the outcomes were expected and measured. In one exemplary case, Young et al. [37] presented a programme logic which identified the programme components, including inputs, activities and participant types, and linked these to the anticipated short-, medium- and long-term outcomes. The evaluation was then designed around these components and in reference to the RE-AIM framework. This points to the utility of programme logic in designing programmes and their evaluations.

Only one study, also Young et al.’s [37], explicitly referred to the absence of a dedicated evaluation budget as a limitation; however, this was likely the case for all programme delivery and evaluation teams, contributing to many of the identified limitations in the evaluations. Therefore, a standardised, theory-informed evaluation framework is needed to enable robust and consistent evaluation of multiple types of short-, medium- and longer-term outcomes, corresponding with the various levels of impact [4, 6, 31, 32, 56]. This would enable more strategic programme implementation, make effective use of limited resources and provide more illuminating programme evaluations to guide future capability building practice.

Strengths and methodological limitations

This scoping review is strengthened by the systematic methods used. Rigour was further enhanced by the involvement, at every stage of the review, of a large team of researchers with different levels of research and knowledge translation experience and representing different perspectives: experienced academic and knowledge translation researchers, those involved in developing and delivering knowledge translation capability building programmes, early career researchers, and health professionals working in healthcare settings.

The main limitation of this review stems from the nature of the review topic and the many synonyms and homonyms for several key concepts. Given the sheer breadth of relevant literature, the search strategy may not have included every relevant term and, therefore, may not have captured all eligible studies. Although formal quality assessment was not conducted, the limitations identified in the included papers and the absence of a formal evaluation in seven programmes indicate a generally low level of study quality. This underlines the need for caution when interpreting and applying the findings of this review. The programmes were delivered primarily in larger health organisations with tiered managerial structures; therefore, the findings, particularly those concerning the role of and implications for middle managers, may not apply to contexts in which health professionals work independently (for example, physicians and family medicine doctors).

The databases searched did not include education research-specific databases, which may also have inadvertently excluded relevant papers from the search yield. The review of the programmes is limited to the strategies, characteristics, evaluation methods and outcomes as reported in the papers; it is likely that some papers did not detail all programme components and elements. This review is also limited by the heterogeneity of the programmes with respect to the strategies described, outcomes measured, and findings reported. This diversity precluded the identification and application of a proxy measure of impact and the subsequent comparison of the programmes. Nonetheless, as this scoping review aimed to map the documented programmes and strategies, their characteristics, and the outcomes measured and reported, we were able to address the review questions.

Implications for practice and future research

The review findings reinforce the need for knowledge translation capacity and capability building programmes to comprise multiple strategies working in concert to achieve impact at the individual, team, organisational and supra-organisational levels. Practice-based pedagogies, collaborative learning and manager engagement are central to promoting favourable programme outcomes. The review also highlighted several gaps in the literature. First, there is a need for more rigorous programme evaluations, which requires dedicated funding or resourcing. This could be supported by future research such as a focused systematic review on the outcomes and impacts of individual strategies (e.g. education) or of structural levels of impact (e.g. team-level), to identify the most appropriate outcome measures and data collection methods. This would simplify programme evaluations and promote consistency across programmes. Furthermore, subsequent research could identify the presence and utility of conceptual frameworks to guide capacity and capability building programme development, implementation and evaluation.

Conclusion

A range of programmes aim to develop knowledge translation capacity and capability in healthcare settings. Programmes tend to be multifaceted with education as the cornerstone, facilitate experiential and collaborative learning, and target different levels of impact: individual, team, organisational and supra-organisational. All papers described successful outcomes and the achievement of programme objectives to some degree. Features to promote sustainability are evident; however, the sustainability of programmes and of their outcomes and impacts may be threatened by the lack of commitment to long-term funding and of resourcing for rigorous programme evaluation. Indeed, the outcomes and impacts of these programmes remain unclear and cannot be compared, owing to the often poorly described and widely inconsistent methods and outcome measures used to evaluate them. Future research is required to inform the development of theory-informed frameworks that guide the choice of methods and outcome measures for evaluating short-, medium- and longer-term outcomes at the different structural levels, with a view to objectively measuring the longer-term impacts on practice, policy and health outcomes.