Approximately 3300 veterans with transient ischemic attack (TIA) are cared for in a United States Department of Veterans Affairs (VA) Emergency Department (ED) or inpatient ward annually.1 TIA patients are at high risk of recurrent vascular events2,3,4; however, interventions that deliver timely TIA care can reduce that risk by up to 70%.5,6,7,8 Despite the known benefits of timely TIA care, gaps in care quality exist in both private-sector US hospitals9 and the VA healthcare system.10 A formative evaluation of TIA acute care in VA indicated that most facilities lacked a TIA-specific protocol and that clinicians struggled with uncertainty regarding the decision to admit TIA patients for timely care.11, 12

The objective of the PREVENT trial13 was to evaluate the effectiveness of an implementation strategy bundle to promote local adaptation and adoption of a multi-component quality improvement (QI) intervention to improve TIA care quality. The bundle was based upon several implementation frameworks and our previous success employing external facilitation with VA clinical teams.14 First, we used the integrated Promoting Action on Research Implementation in Health Services (iPARIHS) framework15 to guide external facilitation, which assisted local champions in cultivating clinical teams, disseminating professional education materials related to acute TIA care, and facilitating local QI efforts using performance data. Second, based on prior studies that distinguished CFIR constructs between highly and less effective implementation,16 we operationalized key constructs (Planning; Goals & Feedback; and Reflecting & Evaluating) from the CFIR inner setting and implementation process domains17 a priori. Participating teams were trained on these concepts and received reinforcement through external facilitation.

The specific aim of this evaluation was to examine the effect of the implementation strategy bundle on implementation success. We hypothesized that clinical teams which engaged in the implementation strategies and locally adapted the PREVENT program components would realize the greatest implementation success.



VA facilities were rank-ordered in terms of the quality of TIA care based on seven guideline-concordant processes of care, and invitations to participate were sent to VA facilities with the greatest opportunity for improvement. Recruitment, stratified by region, continued until six facilities agreed to participate. PREVENT sites were pragmatically allocated to the stepped-wedge trial in three waves based on the ability to schedule baseline and kickoff meetings.

Quality Improvement Intervention

The rationale and methods used for the development of the PREVENT intervention have been described elsewhere.13 The provider-facing QI intervention was based on a prior systematic assessment of TIA care performance at VA facilities nationwide as well as an evaluation of barriers and facilitators of TIA care performance using four sources of information: baseline quality of care data,10 staff interviews,11 existing literature,18,19,20,21,22 and validated electronic quality measures.10 The PREVENT QI intervention included five components (see Appendix 1): quality of care reporting system (see Appendix 2), clinical programs, professional education, electronic health record tools, and QI support including a virtual collaborative.

Implementation Strategies

PREVENT employed a bundle of three primary implementation strategies: (1) team activation via audit and feedback,23, 24 reflecting and evaluating, planning, and goal setting17; (2) external facilitation (EF)23,24,25; and (3) building a community of practice (CoP).26 In addition, PREVENT allowed for local adaptation of its intervention components and the coordinating site provided EF to the site champion and team.

Active implementation of PREVENT at each site began with a full-day kickoff meeting facilitated by the coordinating site, involving multidisciplinary staff members engaged in TIA clinical care at the participating site. The site team used the PREVENT Hub, a Web-based audit and feedback platform (see Appendix 2), to explore their facility-specific quality of care data and identify gaps. Using approaches from systems redesign,27, 28 site team members brainstormed about barriers to providing the highest quality of care, identified solutions to address barriers, ranked solutions on an impact-effort matrix, and developed a site-specific action plan that included high-impact/low-effort activities in the short-term plan and high-impact/high-effort activities in the long-term plan. Local QI plans were entered into the PREVENT Hub, and metrics were tracked on the Hub, allowing teams to monitor performance over time. Using the Hub to observe other participating sites’ QI activities and performance, facility teams could learn which QI activities either did or did not improve metrics at peer sites. In addition, the coordinating team introduced strategies for activating teams: team planning, goal setting and feedback, and reflecting and evaluating, using examples and data from past stroke QI teams and presenting a video of a VA stroke QI team in practice.

During the 1-year active implementation period, the site team members joined monthly PREVENT virtual collaborative conferences which served as a forum for sites to share progress on action plans, articulate goals for the next month, and review new evidence or tools. EF was provided by the PREVENT nurse trained in Lean Six Sigma methodology28 and quality management.

Evaluation Approach

The stepped-wedge29, 30 implementation trial included six participating sites where active implementation was initiated in three waves, with two facilities per wave. The unit of analysis was the VA facility.


We employed a mixed-methods design to evaluate this complex implementation intervention with prospective data collection from multiple sources. Qualitative data sources included the following: semi-structured interviews, observations, field notes, and Fast Analysis and Synthesis Template (FAST) facilitation tracking.31 Interviews were conducted in-person during site visits or by telephone at baseline and at 6 and 12 months after the start of active implementation. Key stakeholders included staff involved in the delivery of TIA care, their managers, and facility leadership; we also accepted “snowball” referrals from key stakeholders. Upon receipt of verbal consent, interviews were audio-recorded. The audio-recordings were transcribed verbatim. Transcripts were de-identified and imported into NVivo 12 for data coding and analysis.32

Using a common codebook, two team members independently coded identical transcripts for the presence or absence of CFIR constructs as well as magnitude and valence for four CFIR implementation constructs (i.e., Goals & Feedback, Planning, Reflecting & Evaluating, and Champions). Valence (+ or −) was scored for each construct if it was present and influencing the implementation of PREVENT at that site.16, 17 Magnitude was scored as 2 if it had a strong influence on PREVENT implementation, 1 if it had a weak or moderate effect, and 0 if it had a neutral effect. The evaluation team conducted formal debriefings after each kickoff, site visit, and collaborative call. These observations were recorded and transcribed for analyses.
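The dual-coded rating scheme above (valence plus magnitude per construct) can be represented compactly. The sketch below is a hypothetical illustration only; the construct names follow the text, but the example ratings and the `combined_score` helper are invented for this example and are not part of the study's codebook:

```python
# Hypothetical representation of dual-coded CFIR construct ratings.
# Valence: '+' or '-' when the construct was present and influencing
# implementation; magnitude: 2 = strong, 1 = weak/moderate, 0 = neutral.

def combined_score(valence, magnitude):
    """Collapse valence and magnitude into a signed label, e.g. '+2'."""
    if magnitude == 0:
        return "0"          # neutral influence carries no sign
    return f"{valence}{magnitude}"

# Invented ratings for one site at one timepoint.
site_coding = {
    "Goals & Feedback":        ("+", 2),
    "Planning":                ("+", 1),
    "Reflecting & Evaluating": ("+", 2),
    "Champions":               ("-", 1),
}

for construct, (valence, magnitude) in site_coding.items():
    print(construct, combined_score(valence, magnitude))
```

A matrix of such labels per site and timepoint is what a display like Table 3 summarizes.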

We also used the FAST template, which is a structured electronic log, as a rapid, systematic method for extracting key concepts across data sources including interviews, collaborative calls, and Hub utilization data.31 We adapted an external facilitator tracking sheet for prospective collection of the dose and contents of site-specific, external facilitation provided by the evaluation team to participating site teams.25

Facility Baseline Characteristics

The measure of quality of care was the “without-fail” rate (WFR), defined as the proportion of veterans with TIA who received all of the processes of care for which they were eligible from among seven processes of care (Table 1). The WFR was calculated at the facility level based on electronic health record data using validated algorithms.10 In addition to the baseline WFR for each facility, data from the Office of Productivity, Efficiency and Staffing (OPES) were obtained to classify neurology and emergency medicine staffing levels. We report the annual TIA volume at each site (seen in the ED and inpatient setting) and the proportion of patients who were admitted.
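The WFR definition amounts to an "all-or-none" composite: a patient counts toward the numerator only if every eligible process was delivered. The sketch below is a minimal illustration under that definition; the field names and patient records are hypothetical, and the trial itself used validated electronic-quality-measure algorithms on electronic health record data:10

```python
# Minimal sketch of the without-fail rate (WFR) computation.
# Field names and records are hypothetical illustrations only.

def without_fail_rate(patients):
    """Proportion of patients who received ALL processes of care
    for which they were eligible (among the seven tracked processes)."""
    without_fail = 0
    for p in patients:
        eligible = p["eligible"]   # set of processes the patient qualifies for
        received = p["received"]   # set of processes actually delivered
        if eligible and eligible <= received:   # subset test: no eligible process missed
            without_fail += 1
    return without_fail / len(patients)

patients = [
    {"eligible": {"brain_imaging", "antithrombotics"},
     "received": {"brain_imaging", "antithrombotics"}},   # passed all eligible processes
    {"eligible": {"brain_imaging", "carotid_imaging"},
     "received": {"brain_imaging"}},                      # failed one -> not without-fail
]
print(without_fail_rate(patients))  # 0.5
```

Because a single missed process excludes a patient from the numerator, the WFR is a deliberately stringent facility-level summary.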

Table 1 Facility Baseline Characteristics

Implementation Evaluation

Teams reported on implementation progress on a monthly basis during the virtual collaborative calls. Teams were encouraged to adapt the PREVENT components best suited to their local context and to address gaps in care; thus, a fidelity evaluation was not applicable. The first of three primary implementation outcomes was the facility team’s number of implementation activities, defined as intentional activities planned to change practice, aligned with each site’s action plan (see Appendix 3), and completed during the 1-year active implementation period.14

The second outcome included final level of team organization (defined as the degree of team cohesion and Group Organization [GO Score])16, 33 for improving TIA care at the end of the 12-month active implementation period. The GO Score16, 33 is a measure of team activation on a 1–10 scale for improving TIA care based on specified practices (see Table 2). The evaluation team independently determined each site’s GO Score by discussing evidence from the study data sources during team meetings and then voting independently using a digital, real-time anonymous ballot until 80% agreement was reached. The rationale for using both implementation outcomes was that they measured two distinct but complementary aspects of implementation: number of activities completed is an overall measure of implementation action, whereas the GO Score describes the degree to which the facility team is functioning as a unit to implement facility-wide policies and structures of care.
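The anonymous-ballot procedure described above (independent votes repeated until 80% agreement) can be illustrated schematically. The function and ballots below are invented for illustration; the study's actual voting was conducted by the evaluation team using a digital real-time ballot:

```python
from collections import Counter

def go_score_consensus(rounds_of_ballots, threshold=0.8):
    """Return the first GO Score (1-10) on which at least `threshold`
    of raters agree, scanning successive voting rounds in order.
    Returns None if no round reaches consensus."""
    for ballots in rounds_of_ballots:
        score, count = Counter(ballots).most_common(1)[0]
        if count / len(ballots) >= threshold:
            return score
    return None

# Round 1: 3/5 agree (60%) -> no consensus; teams re-discuss and re-vote.
# Round 2: 4/5 agree (80%) -> consensus reached at a GO Score of 6.
rounds = [[6, 6, 6, 5, 7], [6, 6, 6, 6, 5]]
print(go_score_consensus(rounds))  # 6
```

In the study, discussion of evidence from the data sources preceded each re-vote, so successive rounds were informed rather than mechanical.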

Table 2 Implementation Strategies and Outcomes

As an additional clinical measure of implementation outcome, the final column in Table 3 indicates whether the facility achieved a ≥ 15-point improvement (in absolute percentage points, to reflect planned rather than merely temporal change) in its WFR over the 1-year course of active implementation (see Table 1).

Table 3 Matrix Display of Longitudinal Implementation Data

Using a mixed-methods approach34, 35 grounded in the CFIR,16, 17 we applied a cross-case and data matrix approach36 to evaluate the degree to which the sites engaged in the bundle of implementation strategies; the association between implementation strategies and implementation and clinical success; and the associated contextual factors. Given that PREVENT’s implementation strategies and outcomes were tracked and rated by the coordinating team prospectively, no data were missing.


Baseline Context

The baseline facility context and QI team characteristics of the six participating facilities are provided in Table 1, listed in order of their baseline quality (WFR). The WFR for sites B, C, and D was similar to the fiscal year 2017 national WFR average of 34.3%, whereas site A was substantially below, and sites E and F considerably above, the national average. Sites E and F also had the highest level of neurology staffing. Emergency medicine staffing was similar across sites. More than 50% of TIA patients were admitted to the hospital, but admission rates were lowest at the two sites (A and B) with the lowest WFRs. The annual TIA patient volume varied from 13 to 46. At baseline, no sites had active teams in place working on TIA care quality, indicating that all of the teams began the active implementation period from a similar starting point. The participating site QI teams were diverse but generally included members from neurology, emergency medicine, nursing, pharmacy, and radiology; some teams also included hospitalists, primary care staff, education staff, telehealth staff, or systems redesign staff.

Implementation Strategies

Over the course of the 1-year active implementation period, we observed uniformly high site engagement with each of the implementation strategies. In Table 2, we present the dose of implementation strategies delivered within the overall strategy bundle: EF, community of practice, Hub (audit and feedback), and local adaptation of PREVENT. The total number of completed implementation activities and final GO Score after 12 months of active implementation are also presented in Table 2. Site labels are retained from Table 1 and are listed in order of the three waves.

Audit and Feedback

We observed frequent usage of the Hub. The quality of care data (i.e., the 7 processes of care that comprised the WFR) were updated monthly on the Hub; the average site champion Hub usage was 21.3 visits per 12 months (1.8 visits per month) and the average non-champion team member Hub usage was 20.2 visits per 12 months (1.7 visits per month). This Hub usage aligns with interview data from site team members indicating that they used the Hub for the process of care data, to access the QI plans, and to download materials from the library:

I [champion] used some of those slides [hub library] in order to show them [providers at local site] what the PREVENT program was and why it’s important.

External Facilitation

Facility QI teams and champions engaged with the EF during active implementation. Education on PREVENT components during the first half of the year, quality monitoring, planning, and networking between and within sites were the most commonly employed EF topics. Overcoming barriers and data reflection and evaluation were also frequent EF tasks.

The EF was really helpful in allowing me to …call and vent, and she was also really very encouraging. …That was an interesting lesson to learn that you might feel like you’re unsuccessful because of that one particular metric, …and so I appreciate her lending me her ear...

When the [EF] knew that I was encountering a barrier that was related to physicians, without asking, she would immediately provide me the data that I needed to discuss with that provider to make them understand what we were doing and that was really helpful.

Community of Practice

Sites were active participants in the virtual CoP. A site representative was present on all calls. Site champions attended an average of 80.5% of the monthly collaborative conferences (range 66 to 92%) during the active implementation period. All six teams participated in a promotion ceremony which was attended by local VA facility leadership as well as all of the PREVENT sites’ team members; peers from other sites acknowledged the implementation successes and lessons learned from the graduating site.

Implementation Outcomes

The implementation outcome data (Tables 2 and 3) indicated that implementation took place at all facilities given that all sites successfully completed at least 15 implementation activities (range 15–39, mean 26.5) as part of their action plans to improve the quality of TIA care over 12 months. Despite heavy clinical demands at all participating sites, none of the teams withdrew from PREVENT.

Table 3 provides longitudinal data for implementation outcomes as well as scores on CFIR constructs related to the implementation strategies which PREVENT facilitated among the local clinical champions and QI teams during the year of active implementation. The sites were ranked in terms of the final GO Score. All sites achieved a substantial increase in the GO Score from baseline to the midpoint of the active implementation period (6 months): mean of 1 at baseline (no facility-wide approach) to 5.5 at 6 months (some facility-wide approach activities). All sites began at the same level, 1, because they had no pre-existing organization around TIA. The kickoff and the Hub allowed teams to examine their data, identify gaps in care, develop an action plan, and start to come together as a team; these two elements of the PREVENT intervention were the key factors responsible for the observed improvements in team activation across sites during the first 6 months of active implementation.

I came out of it [kickoff] feeling that I knew what the issue is…I know what the goal is, and I have information sources so that I'm able to do it. …. one of the biggest things that I see is that I think that it really helped to come up with like more of a team…

I think [the kickoff] played a large role of how we decided we wanted to proceed … I feel like that was the first time we’ve, we really kind of drilled down on, on a good …starting plan …of an outline of what we wanted to accomplish.

The matrix display in Table 3 also indicates the key role of champions in promoting implementation success, as there was a direct link between the strong presence of effective champions (i.e., “+2” level) and the GO Score at both the 6- and 12-month timepoints. At 6 months, the correspondence between a champion score of +2 and a GO Score of ≥ 6 (cf. sites F, A, and C) was 100% (3/3). None of the other CFIR constructs systematically distinguished the three sites with scores ≥ 6 from the three sites without. Moreover, all three of these sites with GO Scores ≥ 6 at the 6-month timepoint ultimately achieved ≥ 15-point improvements (in absolute terms) in their WFRs over the 1-year active implementation period, accounting for three of the four sites (75%) with ≥ 15-point gains.

At 12 months, the correspondence between a champion score of + 2 and a positive gain in the GO Score between the 6- and 12-month timepoints was likewise 100% (cf. sites F, A, B, D). As before, none of the other constructs systematically distinguished the four sites that improved their GO Score between the 6- and 12-month timepoints from those that did not.

Furthermore, Table 3 indicates that the presence of an effective champion was a necessary but not sufficient condition for the strong presence of the CFIR constructs of Reflecting & Evaluating, Goals & Feedback, and Planning. For all five rows in Table 3 where a +2 score appeared for any of these three CFIR constructs (cf. site F, 6 and 12 months; site A, 6 and 12 months; site B, 12 months), the correspondence with a champion score of +2 was 100% (5/5). The reverse was not true, however: the correspondence between a champion score of +2 and +2 scores for all three CFIR constructs was only 43% (3/7) (cf. site A, 12 months, Reflecting & Evaluating = +1; site C, 6 months, Planning = 0; site D, 12 months, all three constructs = +1). This indicates that the strong presence of effective champions acted as a pre-condition for the strong presence of Reflecting & Evaluating, Goals & Feedback, and Planning (rather than the other way around), helping to explain how champions at the +2 level influenced ongoing implementation.

Site E had multiple individuals who engaged in clinical champion activities, and therefore had lower champion construct scores and achieved lower GO Scores. For example, individual team members at site E often engaged in activities that addressed processes within their own service areas, rather than carrying out coordinated, cross-service efforts led by a central champion. The negative valence in site D’s clinical champion construct reflected turnover in their local champion early in active implementation which resulted in the lowest total number of activities completed. However, with the replacement of the local champion, addition of new team members, dedicated EF, and CoP and Hub engagement by the new champion and team members, site D’s GO Score improved.

Although the sites completed a diverse range of implementation activities (Table 2), the most common categories included activities related to professional education (e.g., teaching house staff) and implementation of clinical programs (e.g., prospective patient identification systems). Teams that engaged in planning to the greatest degree were those with the highest number of completed implementation activities (Table 3). Sites varied considerably in terms of the timing of the implementation activities completed over the 1 year of active implementation (Fig. 1, Appendix 3). Half of the teams began strongly with implementation soon after the kickoff whereas other teams engaged in more activities during the latter half of the year.

Figure 1
figure 1

Implementation activities over the 1-year active implementation period.


Our data suggested that the presence of an effective champion was key for implementation success. Indeed, when one site lost its champion, implementation progress halted, and then revived with the replacement of the local champion, who was subsequently supported with EF. Effective champions, individuals who “drive through” implementation according to CFIR,16 appear to play a critical role in engaging in the implementation strategies, including EF, and in fostering team reflecting and evaluating, goal setting, and planning: activities that were related both to the total number of implementation activities completed and to the degree of team cohesion.37, 38

Our results also suggest that an alternative approach to implementation occurred at a site with a distributed champion model—one in which several individuals shared the responsibility for actions usually performed by a champion. In this context, each individual drives implementation only within his or her specific clinical area rather than across the overall program. Although this site (E) did not achieve high scores for team functioning, it did complete many implementation activities. A key to the success of this approach is the degree to which the various champions are able to independently complete a given implementation activity with limited guidance from a central champion.

PREVENT site champions were diverse, including staff from neurology, nursing, pharmacy, and systems redesign. Our study findings indicated that the professional discipline of the champion was less important than the role they played in either performing implementation activities themselves or engaging other team members in the QI process.

The site team members, and especially the champions, regularly contacted the EF who provided information, support, and encouragement across a broad range of topics. Two key EF activities merit further description: conducting chart reviews and facilitating implementation of the patient identification tool. Clinical champions were sometimes dismayed when the monthly performance data indicated that some patients had failed process measures. The EF conducted targeted chart review which identified gaps in care or documentation; this chart review information supported the champions in their efforts to engage in quality improvement. The EF also worked with teams to implement a patient identification tool to identify patients with TIA who were cared for in the ED or inpatient setting. This tool was used at some sites to prospectively ensure that patients received needed elements of care and at other sites to retrospectively identify opportunities for improvement. Given that many of the champions were clinicians without prior QI experience, the EF was able to help connect clinicians with local clinical informatics staff to implement the patient identification tool.

These findings have direct relevance for healthcare systems like the VA, where quality improvement resources may need to be targeted at lower performing facilities that may lack existing teams, have small patient volumes, and vary considerably in terms of baseline context. The current findings emphasized the importance of EF and indicate that EFs should be flexible given the heterogeneity in site needs. Moreover, the combination of the in-person kickoff meeting and the Hub at the launch of the active implementation period was critical in three areas: the development of site-specific action plans based on site-specific performance data; the early formation of team identity; and the training of champions and QI teams in how to reflect on and evaluate their data. The high degree of Hub usage suggests that healthcare systems implementing QI programs should provide a forum for updated site-level quality of care data as well as resources for quality improvement that can be readily shared across sites. Finally, healthcare systems should consider supporting QI teams within CoPs that serve as supported arenas for public accountability for making progress on action plans, sharing lessons learned, and providing encouragement.

Contribution to the Literature

To our knowledge, the PREVENT implementation strategy bundle is one of the few implementation interventions to operationalize CFIR implementation process constructs as strategies a priori to train its QI teams and prospectively evaluate their uptake. Indeed, a recent review of CFIR usage in implementation research identified the need for prospective CFIR use with a priori specification.34 Moreover, we provided specifications39 on the set of implementation strategies delivered as a bundle. In addition, we described the level and timing of engagement and implementation activities across QI teams. These findings provide evidence for specific implementation strategies in the setting of a complex clinical problem when no quality of care reporting system exists.40

Our results are aligned with a recent systematic review of the effect of clinical champions on implementation, which concluded that champions were necessary but insufficient for implementation.41 An emergent finding was that modest-to-strong positive scores on the CFIR Planning construct were related to implementation success (especially with respect to the number of implementation activities completed).


The primary limitation of PREVENT is that implementation occurred only within VA hospitals. Future research should evaluate implementation in non-VA settings. Because several implementation strategies were deployed simultaneously, it was difficult to disentangle the unique effects of each strategy. Although a six-site sample was sufficient to make case comparisons, future studies might include a larger number of facilities to evaluate additional implementation outcomes.

In summary, this study found that engagement in a bundle of CFIR-related implementation strategies facilitated the local adaptation and adoption of the PREVENT TIA QI program and that two alternative approaches to the role of champions were associated with implementation success.