BACKGROUND

As a growing number of healthcare organizations pursue Patient-Centered Medical Home (PCMH) implementation, the challenges inherent in transforming complex primary care systems are increasingly apparent.1–5 In response, organizations including the Veterans Health Administration (VHA) have adopted the quality improvement collaborative (QIC) strategy popularized by the Institute for Healthcare Improvement (IHI) Breakthrough Series (BTS).6–10 A QIC is “a structured framework within which teams learn about research and best practice, apply quality methods, and exchange their experiences of making improvements.”11 While popular, this time-intensive and resource-intensive method lacks the evidence base to warrant widespread adoption, prompting calls to open the “black box” of QICs to better understand their overall effectiveness, how they operate, and why they succeed or fail.7–9,11–18

Recently, there has been a growing trend toward incorporating more virtual learning modalities in QICs, as organizations pursue potential advantages of Web-based education, including greater convenience, cost-savings, economies of scale, and connectivity and support across geographical and organizational boundaries.10,19 Evidence for virtual QICs (VQICs) is nascent but promising.20–23 In one of the few published studies, the IHI tested an electronic version of its Breakthrough Series model. Outcomes were comparable to those achieved in traditional collaboratives while costs were considerably lower, providing preliminary evidence that the model can be adapted to a virtual environment.20 However, project leads acknowledged that implementation occurred under ideal conditions (small number of teams, highly trained faculty, solid topic, robust measures), and recommended further testing of VQICs in “more wide-ranging conditions such as a greater number of teams, less-experienced faculty, and [a] less-developed topic.”20, p.579

In the absence of evidence for VQICs per se, the literature on virtual learning in medical education offers further guidance. Cook et al.’s 2008 meta-analysis concluded that “Internet-based instruction is associated with favorable outcomes across a wide variety of learners, learning contexts, clinical topics, and learning outcomes [and] appears to have an effectiveness similar to traditional methods.”19, p.1195 Others building on this work highlight issues related to learner engagement with technology, creating opportunities for meaningful interaction, and “course-context interactions” that may influence a course’s success more than any intrinsic feature of the course itself.24

OBJECTIVES

Targeted evaluations are necessary to answer remaining questions about QICs, especially their virtual cousins. In particular, we still know little about what specific implementation methods are more/less effective, in what contexts, and for what audiences and learning objectives. Such process-related questions have special resonance for QICs that rely heavily on virtual modalities, as answers will likely differ from non-virtual QICs in important ways. In both cases, seeking the perspectives of learners participating in such improvement efforts is critical; they have insider knowledge about what works (or not), making their feedback essential to improving both the method and its eventual impact.

To this end, we report findings from a Survey of Learner Experiences nested within a larger multi-method process evaluation of the VISN 4 Virtual Collaborative (VC), which was developed to support PCMH implementation across one Veterans Integrated Service Network (VISN). By shedding light on why and for whom the VC met or failed to meet its objectives and identifying specific opportunities for improvement, we contribute to the evidence base on VQICs and offer insights relevant to unpacking the QIC black box more generally.

METHODS

Setting: VHA, PACT, and the VISN 4 Virtual Collaborative

VHA’s nationwide PCMH initiative, known as PACT (“Patient Aligned Care Teams”), launched in 2010 and included a two-phase strategy for training primary care teams in PCMH/PACT principles and practices (Fig. 1). However, by fall 2011, new national restrictions on travel funding prompted VHA to reconsider its training plan and promote new local approaches. Concurrent with the travel restrictions, clinical operations leaders in VISN 4, a six-state region serving more than 275,000 primary care patients, developed and launched the Virtual Collaborative (VC) to further facilitate PACT transformation across 56 primary care clinics. Training needs were particularly high in geographically remote community-based clinics, where teams were more isolated, less able to attend face-to-face trainings, and generally less “in the loop” about PACT happenings. The VC also targeted known barriers to PACT implementation, including insufficient local leadership support at several sites and the general difficulty of finding time for practice improvement work in busy clinical settings.

Figure 1 The VISN 4 Virtual Collaborative in Context.

The VC replicates IHI’s Breakthrough Series (BTS) model in several respects, including alternating learning sessions and action periods, team-driven small tests of change, collaborative resource sharing and teach-backs, and coaching by expert facilitators.6 Like the virtual BTS (VBTS), the VC relies heavily on distance technologies (Web-based collaboration software, audio-conferencing, e-mail, Intranet sites).20 However, unlike both the VBTS and its non-virtual counterpart, the VC delivers content in smaller, more frequent doses; it targets all primary care teams across the VISN rather than a select group of improvement teams; and participants cannot opt into (or out of) VC participation. Additional details about the VC model are provided in an online appendix.

Survey Design and Procedures

The Survey of Learner Experiences was an anonymous online survey designed to elicit individual learners’ perspectives on the usefulness, impact, and acceptability of the VC, and to assess whether and how these differed by team role, practice setting (i.e., medical center [VAMC] or community-based outpatient clinic [CBOC]), exposure to PACT training prior to the VC, and degree of engagement in VC activities. We designed the survey in consultation with the VC Planning Committee. The final version included 32 structured items that covered: participation in and perceived usefulness of different VC components; perceived usefulness and impact of the VC; acceptability of the training format; and basic demographic information (role on PACT team, home facility, practice setting, and exposure to prior PACT training initiatives). In addition, three free-response items invited comments about what was most useful about the VC, what changes would improve it, and any additional feedback respondents wished to convey to VISN/VC leadership.

We fielded the survey in April 2012 (Round 1) and again in September 2012 (Round 2). Following an announcement by VC faculty during a virtual session, we e-mailed all registered VC participants an invitation containing a Web link to the survey. Participants received weekly updates and reminders about the survey, as well as a summary of final results.

All evaluation activities were reviewed and approved as Quality Improvement by the Philadelphia VA Medical Center Institutional Review Board (IRB).

Participants

All VC registrants were eligible for the survey. However, the survey chiefly targeted core members of VISN 4’s approximately 350 PACT teams, for whom participation in the VC was mandatory. Under the PACT model, the core team consists of a primary care provider (PCP); a registered nurse (RN) care manager; a clinical associate (licensed practical nurse or health care technician); and a clerk.

Data Analysis

For the purposes of this paper, we are most interested in learners’ perspectives after prolonged experience with the VC, and therefore focus on Round 2 results.

Residents and those reporting no involvement in the VC were excluded from analysis. Survey responses were dichotomized (e.g., agree/strongly agree versus all other responses), and all comparisons were made using chi-square tests. For multiple comparisons (i.e., between different roles), we used a Bonferroni-type adjustment. We developed two additional measures to summarize overall engagement in the VC: (1) a “meeting dose” variable, which classified respondents into “high,” “medium,” or “low dose” groups based on the reported frequency and duration of team meetings; and (2) a dichotomous variable indicating whether or not the respondent had the “full VC experience” (attended at least four virtual sessions between April and September 2012 and had a high team meeting dose).
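To make these analytic steps more concrete, the following is a minimal sketch, in Python with pandas and SciPy, of how the dichotomization, the chi-square comparisons with a Bonferroni-type adjustment, and the “meeting dose” and “full VC experience” variables could be computed. The column names, example data, dose cut-points, and number of pairwise comparisons are hypothetical assumptions for illustration; they are not the study’s actual coding scheme or analysis code.

```python
# Illustrative sketch only (not the study's actual analysis code).
# Column names, example data, and cut-points are hypothetical assumptions.

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical survey extract: one row per respondent
df = pd.DataFrame({
    "role": ["PCP", "RN", "Clinical Associate", "Clerk", "PCP", "RN"],
    "sessions_attended": [5, 3, 1, 0, 6, 4],
    "meetings_per_month": [4, 2, 0, 1, 4, 3],
    "meeting_minutes": [60, 30, 0, 15, 45, 60],
    "vc_helps_pact": ["Agree", "Strongly agree", "Neutral",
                      "Disagree", "Agree", "Neutral"],
})

# 1) Dichotomize a Likert item: agree/strongly agree versus all other responses
df["vc_helps_pact_agree"] = df["vc_helps_pact"].isin(["Agree", "Strongly agree"])

# 2) "Meeting dose": total monthly meeting time, binned into low/medium/high
#    (cut-points here are invented for illustration)
minutes_per_month = df["meetings_per_month"] * df["meeting_minutes"]
df["meeting_dose"] = pd.cut(minutes_per_month,
                            bins=[-1, 30, 90, float("inf")],
                            labels=["low", "medium", "high"])

# 3) "Full VC experience": at least four virtual sessions AND a high meeting dose
df["full_vc_experience"] = (df["sessions_attended"] >= 4) & (df["meeting_dose"] == "high")

# 4) Chi-square test of a dichotomized item across roles
table = pd.crosstab(df["role"], df["vc_helps_pact_agree"])
chi2, p, dof, expected = chi2_contingency(table)

# 5) Bonferroni-type adjustment when several pairwise role comparisons are made
n_comparisons = 6  # e.g., all pairs among four roles
p_adjusted = min(p * n_comparisons, 1.0)
print(f"chi2={chi2:.2f}, raw p={p:.3f}, Bonferroni-adjusted p={p_adjusted:.3f}")
```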

Using a priori codes derived from survey aims, two authors (AB and GT) and a research assistant coded and summarized responses to free-response survey items to identify themes and illustrative quotes that offered further insight into quantitative results.

FINDINGS

We analyzed responses to the Round 2 survey from 353 VC participants. Table 1 describes the overall survey response rate and sample characteristics. PCPs were most strongly represented (39 %), while clerks comprised the smallest group of respondents (5 %). About one-half (48 %) of respondents worked at a VA Medical Center (VAMC), and about one-third (35 %) had participated in one or both of the national PACT training initiatives described above.

Table 1 Characteristics of Study Sample

Comprehensive results on overall participation, perceived usefulness, impact, and acceptability are presented in Table 2. Virtual sessions were well attended and given modestly positive ratings by most respondents. Team meetings were less well attended and generally shorter than originally envisioned, yet most respondents still found them useful.

Table 2 Overall Participation, Perceived Usefulness, Impact and Acceptability

Most respondents perceived some benefit to participating in the VC (greater knowledge about PACT, better access to PACT-related resources, increased communication with other teams, progress with PACT implementation). Feedback about the training format, including the virtual aspect, was also moderately positive; however, comparatively few respondents reported no disruption to their day-to-day work. Responses to open-ended items revealed additional benefits to VC participation, including a better understanding of the larger PACT vision; a broadened perspective about the challenges of PACT implementation; and increased peer-to-peer exchange of ideas between teams at the same facility and at different facilities. Several respondents echoed the following sentiments:

“I like to see what others are doing so we know other ways to implement and use PACT.” (Nurse Manager, VAMC)

“It gives me the perspective that none of us are having an easy time putting this into practice.” (Clinical Associate, VAMC)

Another respondent expressed appreciation for the ways in which the VC challenged participants to rethink the status quo and try something new that might ultimately benefit them:

“Revamping how things are done is a good thing… Getting providers and nurses out of their comfort zone in the name of making the product and process better is a good thing… I may not always like homework, meetings, and perceived busywork; I greatly appreciate the room for more autonomy with my patients and schedule.” (PCP, VAMC)

Differences by Role, Practice Setting, Prior Training, and Overall Engagement

Comparisons by role revealed differences in respondents’ participation in and experience of the virtual sessions (Table 3). RNs in this sample were less likely to have attended most/all virtual sessions, but more likely than either clinical associates or PCPs to find the presentations during these sessions useful. In terms of impact, both RNs and clinical associates were more likely than PCPs to agree that the VC had improved their access to people who could answer questions related to PACT.

Table 3 Differences by Role

While low response rates for clinical associates and clerks limited statistical analyses, qualitative data offered further insight into the ways that role differences shaped learners’ experiences. At some locations, the mandate ensuring protected time for team members to participate in virtual sessions was only enforced for providers or for providers and RNs. Even at sites where enforcement was more comprehensive, clerks were generally not able to attend sessions regularly. As one provider wrote:

“Only my RN comes to the collaborative, no one else from my team… has time or has been released from work tasks. I think if we value something then we make it a priority, so if the PACT is truly a priority, set aside time away from work to let people attend.” (PCP, VAMC)

In addition, survey comments indicated that the didactic presentations did not meet the learning needs of all team members. A number of respondents felt the bulk of the virtual sessions “were more directed towards the provider and not the rest of the PACT team” (Clinical Associate, VAMC). Clinical associates and clerks in particular expressed a wish for more content specific to their functions under PACT. In addition to such comments, the comparative absence of feedback from clerical staff likely reflects, at least in part, the degree to which that group was less engaged in the VC compared to team members in other roles.

We found few significant differences between responses from participants who worked primarily at a VAMC and those who worked primarily at a CBOC, except around the frequency and perceived usefulness of team meetings (Table 4). Respondents working at a VAMC reported, on average, spending more time meeting with their team, and were about twice as likely to have a high team meeting “dose” and to have participated fully in the VC. Not surprisingly, they were also more likely than those at CBOCs to agree that team meetings helped with PACT implementation and were a good use of time.

Table 4 Differences by Practice Setting

A large number of write-in survey comments spoke to ongoing issues related to staffing, time, and workload that interfered with teams’ ability to meet and work on practice improvement. While these issues are not unique to CBOCs, CBOC teams may be especially hard hit because they have less access to resources and support from their parent facility, as the following two comments convey:

“The Virtual Collaborative would be great if all facilities were able to implement it… Unfortunately, we have no LPNs or Health Techs and the RNs do all the triage, phone calls,… dressings, nursing visits, etc. At times, [we have] two providers sharing one RN with no additional help. This makes it quite difficult to implement any PACT concepts and basically we are NOT doing PACT…” (PCP, CBOC)

“[Not] enough people work at my CBOC to form a PACT… So, since there is no team and no PACT at my CBOC, having to participate in the Virtual Collaborative is like teaching a computer course to someone without a computer.” (PCP, CBOC)

Survey results indicated that having prior training tended to contribute positively to learners’ experiences (Table 5). Respondents with prior training were more likely to value the virtual sessions and to feel the sessions were worth their time. They were also more likely to report that their team met every 2 weeks or more and that these meetings were a good use of time and helped with PACT implementation.

Table 5 Differences by Prior PACT Training

Respondents with prior training were more likely than those without to agree that participation in the VC increased their communication with other teams and access to experts and other PACT-related resources. They were also more likely to agree that VC participation was worthwhile for someone in their role and was helping their team to operationalize PACT concepts. In terms of acceptability, those with previous training were more apt to feel that the VC was comparable to an off-site training; similarly, they were more enthusiastic about getting training in “small doses” and less likely to feel that participating in the VC interfered with their work.

Comments like the following expressed the challenge felt by participants without any previous PACT training, some of whom were learning about PACT for the first time through the VC:

“All these months into [PACT], we are just now finding out about tools to help with this process. I feel that we are embarking on a quest without having the basic fundamentals in place first.” (Clinical Associate, CBOC)

Of the 268 respondents for whom we were able to calculate a team meeting “dose,” 31 % were in the high dose group, while one-fifth of respondents had the full VC experience. Comparisons by measures of overall engagement showed that high VC engagement made a positive difference across the board, in terms of perceived usefulness, impact, and acceptability (Table 6).

Table 6 Differences by Measures of Engagement

Fully engaged respondents were more likely to value virtual sessions and team meetings and consistently voiced stronger agreement with items assessing the VC’s impact. Differences were substantial and significant across all items, but differences in perceived impact on PACT implementation were most striking: 70 % of those in the highly engaged group agreed that participating in the VC was helping their team to implement PACT, compared to only 32 % of all remaining respondents.

With regard to acceptability, respondents who spent more time meeting with their teams were more apt to feel the VC offered equivalent training and that their participation had not interfered with their day-to-day work. Similarly, those who participated fully in the VC showed significantly higher agreement on all measures of acceptability.

Additional Insights from Survey Comments

Review of survey comments yielded further insights into factors contributing to variations in participants’ learning experiences. One additional source of variation that we did not ask about explicitly relates to differences in team composition. Like other PACT training initiatives, the VC curriculum presupposed a 3:1 staffing model (three support staff to one PCP); however, few teams were actually staffed according to this model. Consequently, many participants felt that the VC did not speak to the particular realities and needs of their team. As one respondent wrote:

“I am not yet fully PACT as we are quite low in RNs [one RN to eight providers]…The presenters of the collaborative could give alternative ways of doing things for teams like ours…instead of making suggestions for the ideal theoretical team that we currently don't have yet.” (PCP, CBOC)

Qualitative data also revealed an unintended negative impact for some participants at sites with especially low staffing levels; for these respondents, participating in the VC and hearing about innovations elsewhere in the VISN was frustrating, because they felt they lacked the resources necessary to implement similar changes. One respondent expressed this forcefully:

“We are so severely understaffed that going to these sessions is like sending a diabetic to lunch at a candy store. What's the use when you can't avail yourself of such wonderful 'theoretical’ concepts. PACT is flying overhead and we're still in the bunkers.” (PCP, VAMC)

Finally, survey comments from a handful of sites conveyed frustration with a perceived lack of support and engagement from local leadership; respondents from these sites felt that frontline staff had no input into planning and decisions around PACT and described an excess of “administrative red tape” that inhibited innovation at the team level.

DISCUSSION

Our survey revealed significant variations in participants’ perceptions of the VC’s usefulness, impact, and acceptability as a vehicle for organizational change. Quantitative results identified differences related to role, practice setting, prior training, and overall engagement in VC activities, while qualitative data uncovered additional contextual factors that shaped the relative success of the VC in participants’ eyes, including differences in team composition and local resource constraints. Understanding such “course-context interactions” is critical, as it is these interactions rather than the intrinsic features of the “course” that explain what makes a program such as the VC succeed or not.24

The two factors most consistently linked to ratings of the VC were respondents’ degree of engagement in the VC and their exposure to prior PACT training. Respondents who were fully engaged in the VC found it more useful, more acceptable, and, interestingly, less disruptive, even though this group presumably devoted more time to VC activities and might plausibly be expected to have experienced more disruption. While our N’s were too small to detect significant differences between sites, one possible explanation is that these fully engaged respondents hailed from facilities characterized by stronger overall readiness for change, with greater buy-in and support for the transition to PACT. Similarly, respondents who entered the VC with some foundational knowledge about PACT were more likely to perceive a positive impact and more apt to believe that the VC provided training that was comparable to what a more traditional format would have offered. This is particularly significant, given that those familiar with earlier face-to-face trainings were arguably in a better position to draw accurate comparisons between those trainings and the VC. We posit a possible link to team maturity, in that respondents who participated in earlier training initiatives are more likely to belong to more stable and engaged PACTs.

The opportunity to hear about successful strategies and common struggles from teams across the VISN was among the most valued aspects of the VC, a benefit reported in other QIC evaluations as well.11 Survey comments suggest that such exchange with peers provided informational as well as moral support, and may have helped to counter change fatigue2 as participants learned that certain challenges were intrinsic to any change process and felt better prepared to face those challenges.

Responses to write-in questions revealed at least two major reasons why certain participants benefited less from the VC. First, the content covered in the virtual sessions did not match the learning needs of all participants, especially those in support staff roles, those on teams that were not ideally staffed, and those previously unfamiliar with PACT/PCMH concepts. Second, respondents described a number of barriers that prevented them from participating fully in VC activities, including issues related to staffing constraints, insufficient and/or unprotected time, competing workload pressures, and a perceived lack of local leadership support.

Participant feedback collected through the survey allowed VC leadership to make more informed decisions about how to improve the program and mitigate the barriers that hampered its effectiveness. These “lessons learned” carry broader implications for the design and evaluation of future virtual collaboratives:

  • Organizations must commit to creating dedicated time for team members to participate in training and improvement activities. Without such protected time, it will be hard for them to engage in, feel engaged by, and move forward with practice improvement work, and training efforts may inadvertently contribute to staff burnout.

  • Training content should reflect and address teams’ real-world working conditions, including the reality of persistent resource constraints and the need to find creative adaptations to those constraints.

  • Participants who are less experienced and/or less motivated to change would benefit from additional supports, such as more intensive coaching and linkage to more seasoned teams.

  • Generating the organizational readiness for change necessary for quality improvement efforts to succeed may require targeted education for administrators and managers around the overall vision for change and core quality improvement concepts and strategies, to ensure that teams receive the resources and autonomy they need to innovate.

Limitations

Certain limitations should be considered in interpreting the results of this study. Our survey methods do not permit us to generalize results, as we do not know whether respondents’ experiences are representative of all VC participants. Response rates varied by role and by individual site, and we have no information about non-responders’ characteristics, experiences, or reasons for not responding. Similarly, low response rates precluded comparisons between sites and limited comparisons between certain roles (especially clerks).

In addition, the survey design assumed that respondents were members of defined and fully activated PACTs; however, survey comments indicated that some PACT teams are not yet truly functioning as such. Because the wording of several survey items assumed membership on an active PACT, some respondents may have dropped out before completing the survey or selected response options that failed to capture their experience accurately.

Finally, there are several potentially relevant contextual variables that we did not measure, including individual learning histories and preferences, experiences with other collaboratives and distance learning, team composition and “maturity,” and organizational readiness for change, to name a few. These limitations are mitigated by the rich qualitative data generated by our three free-response items, which contributed to a fuller understanding of the quantitative results and yielded additional insights that would not otherwise have been captured.

Conclusion

Quality improvement collaboratives continue to gain popularity as the “facilitating vehicle” of choice for healthcare improvement initiatives; in recent years, this approach has been adopted for increasingly complex initiatives and has started to “go virtual” as organizations seek to minimize costs and achieve economies of scale. However, little is known about how QICs, much less virtual QICs, operate, which features and methods are more/less helpful in promoting desired changes, and what factors explain why some succeed more than others.

Our study extends the evidence base for QICs in general and virtual QICs in particular, by examining such “black box” questions from the perspective of participants in an innovative virtual collaborative within a large Veterans Integrated Service Network. We identified specific contextual factors that either enhanced or undermined learners’ experiences, which led to improvements to this collaborative and may usefully inform the design and evaluation of other virtual collaboratives in the future. Finally, our study demonstrates the value of seeking constructive criticism from those working at or near the frontlines of patient care, whose expertise is critical to evaluating and improving this popular improvement strategy.