Background

With significant time lags between evidence production and implementation [1], there is a long-standing need to accelerate the uptake of research findings into practice to improve healthcare processes and outcomes. The growing implementation science literature provides information on effective methods for moving evidence into practice; however, this scientific knowledge is large, complex, and may be challenging to apply. This has led to a paradoxical research–practice gap, whereby the evidence produced in implementation science is not being applied in real-world practice settings [2]. Thus, there have been recent calls to improve the mobilization of implementation science knowledge beyond the scientific community and into practice settings [3, 4].

Moving implementation science into practice requires a workforce of implementation practitioners who understand how to apply the science of implementation. In this paper, we define “implementation practitioners” as those who are “doing” the implementation of evidence-informed practices, as well as those who are supporting or facilitating implementation efforts [5]. This may include point-of-care staff, managers, quality improvement professionals, intermediaries, implementation support staff, and policymakers. To build the workforce of implementation practitioners, there is a need for training and professional development opportunities, which we call “capacity-building initiatives.” While there are an increasing number of implementation capacity-building initiatives available [6, 7], these programs often focus on teaching researchers about implementation science, with fewer aimed at teaching how to apply implementation science to improve implementation of evidence in practice settings (i.e., implementation practice) [7,8,9]. A recent systematic review [7] of the academic literature included 31 papers (reporting on 41 capacity-building initiatives) published between 2006 and 2019. The review found that many capacity-building initiatives were intended for researchers at a postgraduate or postdoctoral level, and there were fewer options for implementation practitioners working in practice settings.

While there are some examples of practitioner-focused capacity-building initiatives in the literature [10,11,12,13,14,15,16,17,18,19,20,21,22], most initiatives are developed and delivered in isolation and are not described in the academic or grey literature. In addition, our review of this literature revealed that most of these publications report evaluations of the short- and long-term outcomes of the capacity-building initiatives, providing only high-level details of the specific training content and the rationale for it. Although competencies [23, 24] and frameworks [22] for implementation research and practice have been developed, informed by primary studies, literature reviews, and convening experts, to our knowledge no consensus-building approach has been undertaken to date. Thus, there is limited synthesized information on what content is currently included in implementation practitioner capacity-building initiatives and no universal agreement or guidance on what content should be included to effectively teach implementation practitioners.

The increasing demand for and offerings of implementation practice capacity-building initiatives provide an opportunity to synthesize and learn from the individuals and teams offering this training. Our research team, which is composed of implementation scientists, implementation practitioners, clinicians, health leaders, and trainees, conducted a mixed-methods study to explore the experiences of teams offering capacity-building initiatives focused on implementation practice to inform the future development of high-quality training initiatives. The study had three aims. The first aim, which is the focus of this paper, was to describe what capacity-building initiative developers and deliverers identified as essential training content for teaching implementation practice. The other two aims (to be reported on elsewhere) were to describe and compare the similarities and differences between the capacity-building initiatives (e.g., structure, participants) and explore the experiences of those developing and delivering capacity-building initiatives for practitioners.

Methods

We used the Good Reporting of a Mixed Methods Study (GRAMMS) checklist [25] to inform our reporting (Additional file 1).

Study design

The overall study was a convergent mixed-methods study [26] (cross-sectional survey and qualitative descriptive design [27]) that applied an integrated knowledge translation approach [28] where all study participants were invited to contribute to the analysis, interpretation, and reporting of the study. Here we report on one component of the larger study. Specifically, we focus on a sub-set of the quantitative and qualitative data reporting on the content and curriculum of the capacity-building initiatives.

Study participants

We enrolled English-speaking individuals who had experience developing and/or delivering a capacity-building initiative that focused on teaching learners how to apply implementation science knowledge and skills to improve the implementation of evidence-informed practices in practice settings. The capacity-building initiatives must have been offered in the last 10 years and could be offered in any geographical location or online. We excluded capacity-building initiatives that focused on training researchers or graduate students to undertake implementation research.

We used purposive sampling. First, using the professional networks of the study team, we compiled a list of capacity-building initiatives and the primary contact for each (e.g., training lead). Second, three team members (JR, IDG, AM) independently screened the capacity-building initiatives included in Davis and D’Lima’s systematic review [7], consulting the full-text papers as needed to identify initiatives focused on implementation practice. Finally, we used snowball sampling to identify other individuals who had developed and delivered capacity-building initiatives. The first author (JR) invited the potential participants by email. If no response was received, email reminders were sent 2 and 4 weeks after the initial invitation.

Once the primary contact for a capacity-building initiative was enrolled, they had the opportunity to share the study invitation with their other team members. This resulted in some capacity-building initiatives having more than one person enrolled in the study, providing multiple perspectives on the development and delivery of the initiative. For simplicity, we refer to them as “teams” regardless of whether there was one person enrolled or multiple people enrolled.

Data collection

First, participants completed an online questionnaire developed by the study team, which included closed-ended and open-ended questions (Additional file 2 presents the sub-set of questions used in this analysis that focused on the content and curriculum). The questionnaire was piloted internally by two team members, and minor changes were made to improve functionality (e.g., branching logic), comprehensiveness (e.g., adding in open text boxes for respondents), and clarity (e.g., defining key terms used). We asked for one completed questionnaire per capacity-building initiative. When there were multiple team members enrolled in the study, they could nominate one person to complete the questionnaire on their behalf or complete the questionnaire together.

After completing the questionnaire, all participants were interviewed individually or in a focus group via videoconference to explore the questionnaire responses and discuss the capacity-building initiative in more detail. Individual interviews were used when there was only one team member enrolled; focus groups were used when there were two or more team members. The interviews and focus groups were facilitated by one of three research team members, all of whom identified as women and were trained in qualitative interviewing: JR is a master’s-prepared registered nurse; OD is a master’s-prepared speech-language pathologist and knowledge translation specialist with doctoral training in health rehabilitation sciences research; JL is a doctorally prepared researcher with expertise in patient engagement. A semi-structured question guide was developed by the first and senior authors (JR, IDG) and shared with the broader team. We used the team feedback to update the question guide, including adding new questions and probes, re-ordering the questions to improve flow, and refining the wording of the questions for clarity (Additional file 2 presents the sub-set of questions used in this analysis that focused on the content and curriculum).

Finally, we asked participants to share any capacity-building initiative materials to provide further details (e.g., scientific or grey literature publications, website materials, training agendas, promotional materials). We only collected publicly available materials to minimize concerns around the sharing/disclosing of proprietary content.

The questionnaire and publicly available materials provided data on what content is currently included in the capacity-building initiatives. The interview and focus group data provided information on why certain content was included, as well as how and why content changed over time. Together, this provided information on what we have labeled “essential content,” which reflects both what study participants have chosen to include in their training initiatives and their views on priority content areas for implementation practitioners, based on their own experiences developing and delivering the initiatives.

Data analysis

Closed-ended questionnaire responses were analyzed using descriptive statistics. Frequencies (counts and percentages) were calculated for nominal data. Medians and ranges were calculated for continuous data. The questionnaire responses, qualitative transcripts, and course materials shared by participants were uploaded to NVivo 12 Pro for data management [29]. The merged dataset was analyzed using conventional content analysis, with the codes emerging inductively from the data [30]. Two authors (JR, OD) started by independently reading the data and coding all segments that pertained to training content and curriculum. They met regularly to compare their coding, discuss and resolve differences, build and revise the coding scheme, and group codes into categories. When the coding scheme was well developed and the coders were coding consistently (which occurred after coding data from one-third of the teams), the remaining data were coded by either JR or OD. The coding was then audited by one of seven members of the broader research team (HA, DMB, LBG, AMH, SCH, AEL, DS). These seven team members were “senior reviewers” with subject matter expertise in implementation science and practice [31]. They audited the coding and offered feedback on how the text segments were labeled and categorized. This feedback was discussed by the two primary coders (JR, OD) and the senior author (IDG). The review process resulted in (1) changes to which codes were applied to specific text segments, (2) changes to the coding structure, including splitting existing codes into more precise labels, and (3) re-organizing existing codes into new categories. The final coding scheme was applied to the data. Finally, we categorized the identified theories, models, frameworks, and approaches (i.e., other methods in implementation) (TMFAs) [32] according to the three main aims described by Nilsen [33]: to guide, to understand or explain, or to evaluate implementation. We also categorized the identified implementation steps and skills according to the three phases in the Implementation Roadmap [34]: issue identification and clarification; build solutions; and implement, evaluate, sustain.
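To illustrate the descriptive analysis described above, the following minimal sketch (illustrative only; the data frame, column names, and values are hypothetical and not study data) shows how frequencies, medians, and ranges of this kind can be computed with pandas.

```python
# Minimal sketch (hypothetical data) of the descriptive statistics used:
# frequencies (counts and percentages) for nominal data, and medians and
# ranges for continuous data.
import pandas as pd

participants = pd.DataFrame({
    "current_role": ["research and practice", "practice only", "research only",
                     "research and practice", "practice only"],
    "years_implementation_experience": [9, 4, 30, 12, 7],
})

# Nominal data: counts and percentages
counts = participants["current_role"].value_counts()
percentages = (counts / len(participants) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percentages}))

# Continuous data: median and range
years = participants["years_implementation_experience"]
print(f"Median = {years.median()}, range = {years.min()}-{years.max()}")
```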

Integration of quantitative and qualitative data

We used integration approaches at several levels. At the methods level, we used building, whereby the interview probes were developed based on questionnaire responses [26]. We also used merging by bringing the questionnaire and interview/focus group data together for analysis [26], giving both datasets equal priority. At the interpretation and reporting level, we used a narrative weaving approach to describe the categories informed by both datasets [26]. The integration of quantitative and qualitative data contributed to an expansion of our understanding of the capacity-building initiative content [26], with the questionnaire contributing to identifying what content is included and the interview/focus group data providing the rationale for that content.

Strategies to enhance methodological rigor

Dependability and confirmability [35] were enhanced by maintaining a comprehensive audit trail including raw data (e.g., verbatim transcripts), iterations of coding and coding schemes, and notes from data analysis meetings. To enhance credibility and confirmability [35], 35% of the data were coded independently by two people. All study participants were sent a summary of their data prepared by the research team and were asked to review it for accuracy and comprehensiveness (i.e., member checking the data). In addition, having senior reviewers with content expertise audit the coding helped make sense of the different implementation concepts and terms in the data, ensuring that data were coded and categorized accurately. Finally, interested study participants were involved in the sense-making process through their involvement in writing and critically revising this manuscript. We aimed to facilitate an assessment of the transferability [35] of the findings by describing contextual information on the capacity-building initiatives and study participants.

Results

We enrolled 33 people (representing 17 teams) who developed and delivered capacity-building initiatives focused on implementation practice. Collectively, these 33 study participants shared information on 20 unique capacity-building initiatives that were offered by their 17 teams (Fig. 1). We indicate the denominator throughout the results to make clear when the results refer to capacity-building initiative-level data, which were largely collected through the questionnaire and the shared capacity-building initiative materials (n = 20), or team-level data, which were largely collected through interviews and focus groups (n = 17).

Fig. 1

Summary of data collected. a One team reported on 3 capacity-building initiatives and one team reported on 2 capacity-building initiatives; all other teams reported on 1 capacity-building initiative only. Teams participating in this study comprised between 1 and 6 people. b Five study participants took part in two focus groups

Between September 2021 and November 2022, we collected 20 questionnaire responses (i.e., one per capacity-building initiative) and conducted 10 online interviews and 7 online focus groups (i.e., one per team) (Fig. 1). The focus groups included between 2 and 6 people. Interviews lasted an average of 60 min (range = 51–77 min) and focus groups lasted an average of 68 min (range = 51–79 min). We received materials for 11 of the 20 capacity-building initiatives, specifically: 6 publications, 2 course agendas, 2 course advertisements, and 1 website.

Study participants

The 33 study participants represented a blend of both research and practice experience. Half of the study participants (n = 17/33, 52%) currently identified as both a research professional (researcher or implementation scientist) and a practice-based professional (clinician or implementation practitioner). Three-quarters of study participants (n = 24/33, 73%) were currently involved in implementation in practice settings (clinician or implementation practitioner or manager/leader) (Table 1). Nearly all study participants reported having experience in implementation practice (n = 31/33, 94%). Of those with experience, the median number of years’ experience was 9 (range = 4–30 years).

Table 1 Demographic characteristics of study participants (N = 33)

Contextual information on capacity-building initiatives

The capacity-building initiatives (n = 20) had been offered a median of 4 times (range = 1–35 offerings) between 2009 and 2022 (Table 2).

Table 2 Contextual information on capacity-building initiatives (N = 20)

Capacity-building initiative content

Nine of 17 teams (53%) explicitly described their capacity-building initiatives as introductory level. Study participants identified a variety of content areas included in their capacity-building initiatives, which we present according to four categories and 10 sub-categories, as well as the overarching categories of applied and pragmatic content and tailoring and evolving content (Fig. 2). Illustrative quotes are presented in Table 3.

Fig. 2

Organization of study findings. The number of teams that discussed each category is indicated in brackets; teams could have identified/described the category in any or all of the data sources: questionnaire, interview or focus group, shared capacity-building initiative materials. TMFAs = theories, models, frameworks, approaches

Table 3 Illustrative quotes

Taking a process approach to implementation

Twelve teams (n = 12/17, 71%) described the importance of teaching learners to take a process approach to implementation. Participants highlighted that because learners tend to be action-focused, teams needed to include content on the importance of taking a thoughtful approach and not jumping in too quickly without a thorough plan. To do this, these teams included content on how to develop a comprehensive implementation plan. Teaching this process approach also required information on how long the process can take, its iterative nature, and the need to be adaptable as things change.

Identifying and applying implementation TMFAs

All 17 teams reported that their capacity-building initiatives included two or more implementation TMFAs. In total, study participants identified 37 unique TMFAs that were introduced in their capacity-building initiatives (Table 4). The most common were the Knowledge-to-Action Framework (n = 14/20), Theoretical Domains Framework (n = 11/20), COM-B Model for Behavior Change (n = 9/20), RE-AIM (n = 9/20), Consolidated Framework for Implementation Research (n = 8/20), and the Behavior Change Wheel (n = 5/20). The remaining TMFAs were all used by four or fewer capacity-building initiatives.

Table 4 Theories, models, frameworks, and approaches (n = 37) included in the 20 capacity-building initiatives

Eleven capacity-building initiatives used a TMFA as the underpinning structure for the training content: nine were based on the Knowledge-to-Action framework, one was based on the Behavior Change Wheel, and one was based on the Awareness-to-Adherence Model.

Of the 20 capacity-building initiatives, 16 (80%) included at least one TMFA that guides implementation, 16 (80%) included at least one TMFA that explains implementation, and 10 (50%) included at least one TMFA to evaluate implementation. Eight of the 20 capacity-building initiatives (40%) included TMFAs from all three aims; six (30%) included TMFAs from two aims (guide/explain = 4, explain/evaluate = 1, guide/evaluate = 1); and six (30%) included TMFAs from one aim only (guide = 3, explain = 3).
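For readers tracing these counts, the sketch below illustrates the tallying logic of classifying each initiative by which of Nilsen’s three aims its TMFAs cover; the initiative compositions and the truncated TMFA-to-aim mapping are hypothetical, not the study data.

```python
# Illustrative tally (hypothetical data): map each initiative's TMFAs to
# Nilsen's three aims (guide, explain/understand, evaluate) and count how
# many distinct aims each initiative covers.
TMFA_AIMS = {  # hypothetical, truncated mapping of TMFAs to aims
    "Knowledge-to-Action Framework": {"guide"},
    "Theoretical Domains Framework": {"explain"},
    "RE-AIM": {"evaluate"},
}

initiatives = {  # hypothetical initiative compositions
    "Initiative A": ["Knowledge-to-Action Framework",
                     "Theoretical Domains Framework", "RE-AIM"],
    "Initiative B": ["Knowledge-to-Action Framework"],
}

for name, tmfas in initiatives.items():
    aims = set().union(*(TMFA_AIMS[t] for t in tmfas))
    print(f"{name}: covers {len(aims)} aim(s) -> {sorted(aims)}")
```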

Nine teams (n = 9/17, 53%) described the importance of focusing on the “how,” showing learners the menu of options and helping them to understand how to appropriately select and apply TMFAs to the different stages of their implementation projects. One team described introducing tools to facilitate the selection of TMFAs (e.g., Dissemination & Implementation Models in Health [75], T-CaST [76, 77]).

Teams noted that the content on TMFAs was often challenging for learners, with one team describing it as “bamboozling” (Case M). Challenges stemmed from learner anxiety about the academic nature and language of TMFAs, as well as difficulties understanding how they can be applied to their work. To address these challenges, teams changed their capacity-building initiative content to make it less theoretical (i.e., less focus on telling learners about theories), with an increased focus on how to apply theory in implementation projects. Other teams described including information to reinforce the flexible application of TMFAs, emphasizing the ability to try one out and re-visit the choice if it is not meeting the project needs.

Learning implementation steps and skills

All 17 teams described how their training content focused on practical implementation skills to complete various steps in the process. Teams described seven core steps (Fig. 2).

Defining the problem and understanding context

Fifteen teams (n = 15/17, 88%) identified the importance of teaching learners to clearly define what problem the implementation project is aiming to address. Examples of this content included: clarifying what the problem is, understanding the context and current practice, using data to show the problem (evidence-practice gap), understanding the root cause of a problem, defining a problem that is specific and feasible to address, and understanding the problem from different perspectives.

Teams described spending a significant amount of time on this content because it is foundational for learning about subsequent steps in the implementation process. However, one participant cautioned that there is a balance to strike: helping learners thoroughly define and understand their problem without going so in-depth that they lose sight of what they are trying to accomplish within their implementation project.

Finding, appraising, and adapting evidence

Many teams (n = 12/17, 71%) described content about the evidence to be implemented as critical, including how to find, appraise, and adapt evidence for the context in which it is being implemented. Several teams described how learners could be quick to select the evidence to be implemented based on hearing about something “bright and shiny” (Team M), learnings from conferences and meetings, or papers reporting on a single study. Because of this, training content on how to conduct a more comprehensive search and appraisal of the evidence was essential.

Specifically, teams included content on the importance of ensuring there is evidence to support what is being implemented, how to search for research evidence, the importance of considering other forms of evidence such as staff and patient experiences, how to merge research evidence with experiential knowledge, considerations for ensuring the fit of the evidence to be implemented with the implementation setting, and understanding the appraised quality and levels of evidence (e.g., the evidence pyramid). Two teams (n = 2/17, 12%) acknowledged that even after learners acquired some knowledge and skills to search for and appraise evidence, they rarely had the time to undertake these tasks in their day-to-day professional roles. Therefore, making learners aware of resources to support this work was important.

Seven teams (n = 7/17, 41%) described training content related to adapting the evidence, practice, or innovation being implemented to fit with the local context. The concept of adaptation could be challenging for learners accustomed to working in a more “top-down” or directive model, where they assumed the evidence, practice, or innovation would be implemented as is. In these cases, teams identified that it was especially important to include information on how the organizational context and group needs should be considered to optimize the uptake and sustainability of the evidence, practice, or innovation being implemented.

Assessing barriers and facilitators

Fifteen teams (n = 15/17, 88%) discussed the fundamental importance of including content on how to systematically assess for barriers and facilitators that are likely to influence implementation. Teams shared how learners may either skip right from evidence selection to implementation solutions or erroneously believe that simply telling people a change is being made should be enough to result in behavior change. Teaching learners about the determinants that may influence the adoption (or lack of adoption) of evidence and the process for identifying these determinants was, therefore, identified as critical by nearly all teams. The content for this stage frequently included different TMFAs to guide the work (e.g., Theoretical Domains Framework [TDF] [37], Consolidated Framework for Implementation Research [CFIR] [41, 42]).

Selecting and tailoring implementation strategies

Fifteen teams (n = 15/17, 88%) highlighted the importance of teaching learners how to select implementation strategies using a structured approach that aligns with and addresses the identified barriers. Teams shared that learners may default to using familiar implementation strategies (such as education); therefore, teaching about the full range of implementation strategies was important. The capacity-building initiatives frequently included content and activities on how to map identified barriers to specific evidence-based implementation strategies and how to prioritize which ones to select. Again, teams described relevant resources (such as the Expert Recommendations for Implementing Change [ERIC] Taxonomy [78], the Behavior Change Technique [BCT] Taxonomy [79], and the Behavior Change Wheel [38]) that they used to help learners understand and apply the implementation strategy selection process.

Monitoring and evaluating

Fifteen teams (n = 15/17, 88%) described training content related to monitoring and evaluating implementation. Teams shared how they reinforced the importance of evaluating implementation projects to make course corrections and show the impact of their work. Three teams (n = 3/17, 18%) acknowledged that monitoring and evaluation can be unfamiliar and intimidating to learners and ensured that the content covered the “nuts and bolts” of monitoring the implementation process and conducting an outcome evaluation. Five capacity-building initiatives (n = 5/20, 25%) included logic models as a tool to plan for evaluations; other TMFAs included RE-AIM [39, 40] and Proctor’s implementation outcomes [51].

Sustaining and scaling

Eleven teams (n = 11/17, 65%) stated they included content on sustainability, such as tools for sustainability planning, determinants of sustainability, strategies for assessing and enhancing sustainability, and challenges with sustaining change over time. One team (n = 1/17, 6%) described including information on spread and scale. Although this content was often introduced near the end of the capacity-building initiative, teams reminded learners that sustainability needs to be considered at the beginning and throughout the implementation process.

Disseminating

Five teams (n = 5/17, 29%) included content on how to disseminate the findings of implementation projects. Content included strategies to disseminate project findings to interested and affected parties and decision-makers, as well as dissemination through scientific venues such as conference presentations and publications.

While all capacity-building initiatives (n = 20/20) focused on the implementation of evidence into practice, two teams (n = 2/17, 12%) also included information on how to undertake a dissemination project (e.g., developing a resource to share evidence). Teams also described the need to teach learners about the full spectrum of knowledge translation and the distinction between dissemination and implementation.

Developing relational skills

All teams (n = 17/17) discussed the importance of learning about the relational skills required throughout the implementation process, with one participant describing it as the “most neglected part of capacity building” (Case N).

Teams identified three main content areas for teaching these relational skills: forming and maintaining an implementation team, identifying and engaging interested and affected parties, and building implementation leadership and facilitation. Cutting across these three main areas were general examples of other relational content, including building trusting relationships, working inter-professionally, navigating power differences and hierarchies, and communicating effectively.

Forming and maintaining an implementation team

Nine teams (n = 9/17, 53%) discussed content on how to build an implementation team and define roles, how to manage team dynamics, and how to engage members throughout the implementation project.

Identifying and engaging interested and affected parties

All teams (n = 17/17) described content related to identifying and engaging interested and affected parties. Topics included the value of engagement, identifying and mapping key influencers, strategies for engagement, tailoring engagement approaches, and evaluating engagement.

Fourteen teams (n = 14/17, 82%) stated they included content on the importance of engaging health consumers (e.g., patients, families, caregivers). While some capacity-building initiatives only briefly discussed this, others described more detailed content, such as the rationale for and importance of consumer engagement, guidance for reimbursing health consumer partners, and strategies for working with health consumers. Two teams (n = 2/17, 12%) highlighted the importance of having this content delivered by health consumers themselves to showcase their experiences and stories.

Building implementation leadership and facilitation

Eleven teams (n = 11/17, 65%) included content on the knowledge and skills needed to be a facilitator of the implementation process, including the role of the facilitator, effective leadership, change management, managing resistance, and motivating others. Learners entering the capacity-building initiative may not recognize their ability to be an implementation leader; it was, therefore, important to include content that encouraged learners to reflect on their current attitudes and skills as a leader, work on leadership development, and see themselves as leaders of implementation.

Offering applied and pragmatic content

All teams (n = 17/17) discussed the importance of applied content for teaching implementation practice. Teams acknowledged the growing and complex implementation science literature and highlighted the importance of distilling this literature into pragmatic and accessible content for learners (e.g., top five tips, toolkits, case examples). Teams reported that including practical tools and resources in the capacity-building initiatives was important so that learners had something tangible they could apply in their practice. Thirteen teams (n = 13/17, 76%) named at least one additional resource that they shared with learners. Twenty-seven unique resources were identified (Table 5).

Table 5 List of additional resources (n = 27) shared in the 20 capacity-building initiatives

Tailoring and evolving capacity-building initiative content

Seven teams (n = 7/17, 41%) described the importance of tailoring the content to each group of learners. While some teams acknowledged that there is content that is “locked in” or “universal,” other content can be tailored to meet the specific needs of learners (for example, based on learners’ area of practice, implementation projects, baseline knowledge, and learning needs).

Of the 20 capacity-building initiatives, 17 (85%) had been offered more than once. These teams described changes to their training content over time (Table 6). The content changes were prompted by feedback received via formal learner evaluation forms; informal check-ins with learners during the capacity-building initiative; observations of what learners were asking questions about or struggling with; and new developments in the fields of knowledge translation, implementation science, and adult education.

Table 6 Examples of changes to content in capacity-building initiatives

Teams shared emerging topics that are becoming increasingly important to include in their capacity-building initiatives. More recent offerings of the capacity-building initiatives have taught learners about taking an intersectionality lens, considerations for equity, diversity, and inclusion, and applying a principled approach to partnerships.

Discussion

This study aimed to describe what capacity-building initiative developers and deliverers identify as essential content for teaching implementation practice. Based on the experiences of 17 teams that delivered 20 capacity-building initiatives, we identified four categories of content: taking a process approach to implementation, implementation TMFAs, implementation steps and skills, and relational skills, as well as the overarching categories of applied and pragmatic content and tailored and evolving content. These findings provide an overview of the content being covered by a variety of capacity-building initiatives worldwide and the rationale for this content. Learning about the rationale for the content provided insights into some of the challenges current and aspiring implementation practitioners face both in the learning process and in their practice settings. The findings also provide a foundation for building, refining, and researching capacity-building initiatives to further develop the implementation practice workforce, which is essential for scaling the implementation of evidence globally.

In this study, teams identified 37 different TMFAs and 27 additional resources that were introduced across the 20 capacity-building initiatives. While some of these were applied across a substantial number of capacity-building initiatives (e.g., Knowledge-to-Action framework [36]), most were used infrequently. This signals a general lack of consensus about which TMFAs and resources to use, a finding reported elsewhere [103]. A recent scoping review identified over 200 knowledge translation practice tools (i.e., tools that guide how to do knowledge translation) [104]. The result is a potentially overwhelming number of TMFAs that are used infrequently and/or inappropriately [105, 106], with many practitioners reporting a lack of confidence in choosing a framework [107]. It is worth reflecting on whether the people developing and delivering capacity-building initiatives are propagating this challenge by sharing and endorsing so many TMFAs and resources, especially without equipping learners with the tools needed to select and apply appropriate TMFAs. While some teams in our study did describe the importance of content on how to identify and select appropriate TMFAs, only one team identified the use of selection tools to facilitate this process. As more practice-based selection tools are developed and tested [104, 106], they may help implementation practitioners navigate the large number of potential TMFAs to apply in their work.

The capacity-building initiatives in this study aligned with current understanding of core pillars [22] and essential competencies for implementation practice [23, 108]. Leppin and colleagues [22] identified three core pillars: understanding evidence-based interventions and implementation strategies; using theories, models, and frameworks during the implementation process; and methods and approaches to implementation research. The content of our included capacity-building initiatives closely aligned with the first two pillars, with less emphasis on the third pillar of implementation research. Moore and Khan identified 37 competencies linked to nine core implementation activities: inspire stakeholders and develop relationships, build implementation teams, understand the problem, use evidence to inform all aspects of implementation, assess the context, facilitate implementation, evaluate, plan for sustainability, and broker knowledge [23]. The capacity-building initiatives we examined generally covered these nine activities, although some were described less frequently (e.g., building an implementation team, sustainability). While the depth of our data did not allow for a direct comparison between the capacity-building initiative content and the more detailed individual competencies, future work should explore the alignment between training content and current and emerging competencies for implementation practice and science. For instance, novel competencies are emerging related to equity considerations in implementation science [109]. While some teams in our study described including new content on equity in their capacity-building initiatives, further work is needed to explore how this training content aligns with these emerging competencies, how effectively it develops implementation practitioners’ capacity to integrate equity considerations during implementation, and whether there are differences in equity considerations for implementation research versus implementation practice.

We identified several areas where, despite learning content in the capacity-building initiative, practitioners might experience challenges applying this knowledge in practice. First, although about 70% of the teams in our study included content on how to find, appraise, and adapt evidence, there were concerns about whether learners could (or should) action these skills in day-to-day practice, given the time-intensive nature of these tasks. This concern aligns with a systematic review that identified “lack of time” as a top barrier to healthcare providers searching for, appraising, and learning from evidence [110]. Support from librarians has been shown to have positive outcomes (e.g., time savings for healthcare providers, more timely information for decision making) [111], although we acknowledge that librarians may not be easily accessible in all practice-based settings. Second, nearly all teams included content on monitoring and evaluation. However, based on the collective experiences of our team of implementation scientists and implementation practitioners, monitoring and evaluation are often not done (or not done in depth) in practice-based settings. Setting up effective data collection and monitoring systems has been identified as one of the top ten challenges to improving quality in healthcare, with settings often lacking the required expertise and infrastructure [112]. It is possible that the high proportion of teams including monitoring and evaluation content in their training is a response to this gap and an attempt to better equip learners with the knowledge required to effectively apply these skills in their settings.

The topic of sustainability was included by fewer than two-thirds of the teams. Given the growing attention to sustainability and scalability [113,114,115], this was surprising. There are several potential explanations. First, it is possible that sustainability concepts were integrated throughout the other content and not explicitly articulated as a separate content area by study participants. Second, most of the capacity-building initiatives in our study were time-limited, introductory courses. While most capacity-building initiatives introduced process models (e.g., KTA framework [36], Quality Implementation Framework [52], EPIS [58, 59]), which encourage consideration of the full implementation process from planning to sustainability, it is possible that the training focused on the earlier phases of these models, with less attention to the longer-term activities of sustainability and scalability. However, sustainability needs to be considered early and often [116, 117], and it is worth considering who bears this responsibility. Johnson et al. [118] raised a similar question and recommended that sustainability planning be a “dynamic, multifaceted approach with the involvement of all those who have a stake in sustainability such as funders, researchers, practitioners, and program beneficiaries” [118] (p. 7). It is thus important to ensure that capacity-building initiatives equip learners with the knowledge and skills to enhance sustainability and scalability throughout the full implementation process.

All teams described the importance of relational skills in the implementation process, from forming and maintaining a core implementation team, to engaging interested and affected parties, to effectively leading and facilitating implementation. Relational skills are required to work effectively in implementation practice, with about half of the 37 implementation core competencies being relational in nature [23]. In addition, an international survey of implementation experts found that collaboration knowledge and skills (e.g., interpersonal skills, networking and relationship building, teamwork and leadership skills, motivational skills, and the ability to work with other disciplines and cultures) were the competencies most frequently identified as helpful [24]. Our study also provided several examples of how this relational content is evolving in alignment with societal priorities and emerging areas in the fields of knowledge translation and implementation science, including integrated knowledge translation [28] and co-production [119] approaches, power differences and dynamics [120], equity, diversity, and inclusion, intersectionality considerations [121,122,123,124,125,126], and taking a principled approach to partnerships [127, 128]. It is promising that many teams offering capacity-building initiatives are staying abreast of these latest advances and priorities as they develop the knowledge and skills of implementation practitioners.

Strengths and limitations

We used a comprehensive recruitment approach to enroll a geographically diverse sample of participants with a variety of implementation, clinical, and research experiences, providing an international perspective on implementation practice training. We used a recent systematic review [7] as one strategy to identify published capacity-building initiatives; however, it is important to acknowledge that we did not conduct a comprehensive review of the literature, and some capacity-building initiatives may have been missed. Furthermore, the inclusion of only English-speaking participants may have limited the identification and participation of other capacity-building initiative developers and deliverers. In addition, the current study focused on capacity-building initiatives offered primarily in the health sector. Implementation science and practice span many fields, offering an opportunity to replicate this study design to examine commonalities and unique content needs across different fields and contexts.

The use of multiple primary data collection methods facilitated the collection of in-depth information on both what content is covered in the capacity-building initiatives and how and why this content is included. However, it is important to acknowledge that we only received capacity-building initiative materials from 11 of the 20 programs, which may have limited the comprehensiveness of the information on each initiative. In addition, the discussion guide asked participants about “critical content,” and participants may therefore have only highlighted the “core” content in the time-limited interviews and focus groups. As such, while our findings provide an overview of what experts in the field identify as important training content, they are likely not reflective of every possible topic covered across capacity-building initiatives. Furthermore, while study participants shared what they included in their initiatives and why those content areas are important, this may not be representative of the optimal training content for all settings. Given that the purpose of this study was not to assess the outcomes of the capacity-building initiatives, we cannot ascertain whether specific capacity-building initiative content is associated with better learner or health-system outcomes, which is an important area for future work.

Although this work extends our knowledge of key training content for implementation practice, content and curriculum are just one component of designing and delivering effective implementation practice training programs. Our team is currently working to synthesize additional data to describe the structure, format, and evaluation approaches of the capacity-building initiatives, as well as describe the experiences of the teams who facilitate the training.

Conclusions

The results of this study highlight what experienced capacity-building initiative developers and deliverers identify as essential content for teaching implementation practice. These learnings may be informative to researchers, educators, and implementation practitioners working to develop, refine, and deliver capacity-building initiatives to enhance the translation of implementation science into practice. Future research is needed to better understand how the training content influences implementation outcomes.