INTRODUCTION

Patient engagement, including the promotion of patient participation in healthcare decision-making, is a cornerstone of patient-centered care. There is evidence that improving patient engagement leads to benefits such as better health outcomes, improved patient safety, greater quality of life, reduced healthcare costs, and decreased staff burnout.1, 2 Until recently, resources for patient engagement were limited in scope.3

The Center for Evaluation of Patient Aligned Care Teams (CEPACT) created a web-based, interactive patient engagement toolkit (https://go.usa.gov/xUtdV) to improve patient engagement across the Veterans Health Administration (VHA).4 The toolkit consists of a collection of specific patient engagement strategies that have been successfully used by primary care providers in VA clinical settings and were generated through consensus from a group of VHA primary care leadership, staff, and patients.4 The strategies include a broad range of activities that can be carried out before a primary care visit, during the visit, after the visit, or in a class setting to engage patients in their healthcare. The toolkit also provides a collection of patient engagement resources for overcoming barriers to patient engagement for both patients and providers. The toolkit is disseminated as a searchable database from which users can export details on patient engagement strategies that best fit their local needs. Using this toolkit, staff are expected to be change agents by implementing multi-pronged practices intended to improve patient engagement. The toolkit was created and subsequently disseminated to VHA healthcare facilities within one regional Veterans Integrated Service Network (VISN).

While studies have examined toolkit implementation and dissemination in healthcare settings, there is limited literature describing tools to support patient engagement.5 A recent systematic review of toolkits as a method for knowledge translation (KT) indicates a lack of detailed description of the implementation process and how well KT actually occurred.6

In this paper, guided by the KT framework (Fig. 1),7, 8 we present the evaluation of a multi-level patient engagement toolkit dissemination and implementation process at facilities across one region in the VHA. KT is defined as a dynamic and cyclical process by which practitioners incorporate knowledge from both research and practice to improve quality in healthcare settings.8 The KT framework is an iterative process that proceeds through seven steps: (1) identifying a problem, (2) adapting knowledge to local context, (3) assessing barriers to knowledge use, (4) tailoring and implementing, (5) monitoring knowledge use, (6) evaluating knowledge use, and (7) sustaining knowledge use.9 This paper describes how facilities implemented the CEPACT toolkit using these seven steps of KT and identifies important barriers and facilitators.

Figure 1 Knowledge translation (KT) framework summary of results.

METHODS

Setting and Participants

We enlisted 40 VHA facilities, primarily serving Pennsylvania, New Jersey, and Delaware, to use a patient engagement toolkit to select and implement patient engagement practices at their sites. Ongoing individual and group phone meetings were used to track implementation from February 2017 through May 2019. This project was reviewed by the Corporal Michael J. Crescenz VA Medical Center institutional review board and considered a quality improvement project.

1. Identifying the problem. Following a kick-off meeting with regional leadership, selected implementers were emailed the project’s overview with a toolkit link and an invitation to speak on the phone one-on-one. Implementers were nurse managers, registered nurses, clinical nurse specialists, primary care leads, and health promotion disease prevention (HPDP) coordinators from 10 VA medical centers (VAMCs) and 30 Community-Based Outpatient Clinics (CBOCs). The initial one-on-one phone call focused on establishing buy-in and introducing the toolkit. The toolkit consists of a list of practices categorized into activities that can take place before, during, and after a visit as well as administrative resources such as training or improving communication between staff. Implementers were provided a summary reflecting current performance (as described below) to assist in identifying a patient engagement problem at their site.

2. Adapting knowledge to local context. On the second call with each site, implementers discussed practice selection ideas and plans for recruiting additional team members to assist with implementation. These semi-structured interviews also asked implementers to describe planned strategies for establishing and working with teams, getting ongoing leadership assistance, rollout, tracking activities, and monitoring successes and challenges.

3. Assessing barriers to knowledge use. During the calls, implementers were asked to describe any facilitators and barriers that would impact execution, any strategies they would take to roll out their practices at their site, leadership’s involvement and support, and staffing capacity at the site and for the project.

4. Tailoring and implementing. Follow-up calls focused on the concrete implementation steps and were structured to answer the following questions: What progress have you made on implementing your practice? Who is supporting you with implementation? What successes and challenges have you experienced? How have you overcome your challenges? What type of tracking are you doing for this project? What are your timeline and goals for implementing this practice?

5. Monitoring knowledge use. Sites were randomly assigned to a high or low coaching intervention group to assess whether additional, external support influenced implementation. Those in the high coaching intervention group were contacted every 1–3 months for 2 years and were provided with toolkit coaching support as needed. The low coaching intervention group was contacted every 6 months for 2 years to collect data on progress. Both groups also participated in quarterly group calls with their respective peers. Quarterly group calls were established to provide a venue for implementers to share their successes and challenges and to troubleshoot barriers together. Agendas for these group meetings included implementers presenting their patient engagement practices and CEPACT evaluation staff sharing relevant data and information pertaining to the project. CEPACT staff assisted the high coaching intervention group with challenges by suggesting solutions and making connections to peers and other resources. The low coaching intervention group was not provided any additional support outside of peer connections during group calls.

6. Evaluating knowledge use. All implementers were provided a performance summary of their site’s Survey of Healthcare Experiences of Patients (SHEP)10, 11 scores to assist in monitoring patient satisfaction over time. This summary displayed data from the last 4 quarters on 16 SHEP questions related to patient engagement. Once practices were selected, a performance summary highlighting 4 SHEP questions related to the site’s chosen practices was provided on a quarterly basis (see the sketch after this list).

7. Sustaining knowledge use. At the end of the 2-year intervention, implementers were asked to describe their plans for ongoing sustainability, maintenance, and monitoring of practice(s). Implementers were also asked for feedback on the toolkit and the project.
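
To make the quarterly SHEP summary concrete, here is a minimal sketch (in Python, using made-up question identifiers and scores rather than actual SHEP items or data) of how a site's summary could be computed over the last 4 quarters and then narrowed to the questions tied to a chosen practice.

```python
from statistics import mean

# Hypothetical quarterly SHEP results for one site: {question_id: {quarter: score}}.
# A real summary covered 16 patient engagement questions; two are shown here.
shep_scores = {
    "provider_asked_about_stress": {"FY18Q1": 62.0, "FY18Q2": 64.5, "FY18Q3": 63.0, "FY18Q4": 66.0},
    "provider_explained_medications": {"FY18Q1": 71.0, "FY18Q2": 70.5, "FY18Q3": 72.0, "FY18Q4": 73.5},
}

def summarize(scores, questions=None, last_n_quarters=4):
    """Mean score over the most recent quarters for each requested question."""
    selected = questions if questions is not None else list(scores)
    summary = {}
    for q in selected:
        recent = sorted(scores[q])[-last_n_quarters:]  # quarter keys sort chronologically here
        summary[q] = mean(scores[q][quarter] for quarter in recent)
    return summary

# Baseline summary across all questions, then a focused summary once a
# stress-related practice has been selected.
print(summarize(shep_scores))
print(summarize(shep_scores, questions=["provider_asked_about_stress"]))
```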

Data Analysis

Each call was attended by two evaluators: an interviewer and a notetaker. Field notes were written and transferred to NVivo 12 software12 for analysis. Using an applied thematic analysis approach,13 the qualitative codebook was structured around the seven categories of the KT framework. Twenty percent of the field notes were coded jointly by two coders to establish coder agreement and refine the codebook. The remaining notes were divided and coded individually.

A scoring rubric was developed to assess implementation progress. The rubric included 4 domains adapted from the Patient-Centered Medical Home Assessment (PCMH-A) tool14: Tracking (extent of metrics or tools to measure implementation); Resource Mobilization (extent of resources such as staff, technology, and funding); Degree of Implementation (extent of implementation and use); and Spread (extent of spread to the team, clinic, or facility). Each domain was scored on a 10-point scale, with 1 being the lowest and 10 the highest. Two evaluators independently scored each site and then met to establish a consensus-based rating. All sites were scored every 6 months. To create a final rating, consensus scores were averaged for each site across all practices and all time periods. For example, if a site selected two practices, those practices were scored individually three times over the course of the project and averaged at the end. Once the data were coded, matrices were used to stratify the results by quartile: the lower quartile (≤ 6) was compared with the higher quartiles (> 6) to assess differences between low- and high-performing sites and between the low and high coaching intervention groups.
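
To make the rubric aggregation concrete, the sketch below (Python, with hypothetical consensus scores and practice names rather than the evaluation's actual data) averages one site's scores across domains, practices, and scoring periods and then applies the ≤ 6 versus > 6 cutoff used to distinguish lower- and higher-performing sites.

```python
from statistics import mean

# Hypothetical consensus rubric scores (1-10): {practice: {scoring period: {domain: score}}}.
site_scores = {
    "pre-visit goal setting": {
        "month 6":  {"tracking": 4, "resource_mobilization": 5, "degree_of_implementation": 3, "spread": 2},
        "month 12": {"tracking": 6, "resource_mobilization": 6, "degree_of_implementation": 5, "spread": 4},
        "month 18": {"tracking": 7, "resource_mobilization": 7, "degree_of_implementation": 6, "spread": 5},
    },
}

def site_average(scores):
    """Average a site's consensus scores across all practices, periods, and domains."""
    return mean(
        value
        for practice in scores.values()
        for period in practice.values()
        for value in period.values()
    )

def performance_group(average, cutoff=6.0):
    """Classify a site as lower (<= cutoff) or higher (> cutoff) performing."""
    return "lower" if average <= cutoff else "higher"

average = site_average(site_scores)
print(f"site average = {average:.2f}, group = {performance_group(average)}")
```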

RESULTS

KT Framework and the Patient Engagement Toolkit

Figure 1 depicts the findings of this evaluation in the context of the KT framework, with each step illustrating examples of what was accomplished. Progress was bidirectional, with key ingredients woven throughout: implementer engagement, organizational support, and strong collaboration were key markers of success. Below are the results of the seven KT steps taken to implement the patient engagement toolkit.

1) Identifying the problem

Implementers had varying responses when first introduced to the toolkit by the CEPACT team. A few used the project as an opportunity to begin a project of interest. While reactions to the toolkit were primarily positive, many were unsure of how to use the toolkit. Several pointed out practices in the toolkit that were already happening at their facility, while others pointed out practices they felt would not work at their site.

Implementers chose their practices for a variety of reasons. Most used the SHEP scores to assist in their practice selection and considered areas where there was the most room for improvement. Some implementers chose practices highlighted in individual SHEP questions. For example, one SHEP question measured whether patients were being asked about their stress level. To ensure providers were asking about stress, some sites posted the exact language near the nurses’ station, so staff would remember to ask the question verbatim. Others selected practices based on already known site issues (for example, reducing high-risk patient readmissions). Implementers also selected activities that coincided with ongoing leadership initiatives already in progress.

There were 25 different practices selected by 40 sites. Activities at the patient level ranged from goal setting to education on specific topics such as treatment options, medication changes, or how to contact staff and utilize services. Activities at the staff level included patient-centered trainings and process changes to improve the patient’s care experience.

2) Adapting knowledge to local context

Implementers used a variety of strategies to introduce the project to their facilities and obtain buy-in from staff and leadership. Meeting with facility leadership or service line leadership to present the project was a common first step. If the project involved a specific key role, some individuals would target leadership specific to that role (e.g., clerk supervisors). At initial meetings, SHEP scores were reviewed with the team as a tool for selecting practices. In anticipation of practice spread, implementers also met with each sub-facility leader individually or sought out representatives and worked on “selling the project” to these key personnel. To incentivize staff involvement, some sites linked selected practices with performance metrics and patient outcomes. Implementers also attempted to establish that a practice was easier and less time-consuming than providers might anticipate.

Not all implementers had a formal team. Those without a formal team would seek out personnel to provide consultation on the practice. Others had a formal team that would meet regularly to review progress on the project. Regardless, many involved personnel from various disciplines including nurses, clerks, primary care physicians, pharmacists, social workers, educational specialists, and administrative staff. Some implementers were never able to assemble even an informal team.

3) Assessing barriers to knowledge use

Implementers indicated several barriers to the application of patient engagement practices (Table 1). The most frequently cited barrier was staffing shortages, followed by limited time, lack of buy-in, and issues with leadership. Less frequently mentioned barriers were issues with team communication, scheduling problems, and patient pushback or confusion. Barriers such as staffing shortages impacted implementation and progression; however, most implementers continued to push forward.

4) Tailoring and implementing

Table 1 Barriers to Successful Practice Implementation

Many implementers changed or tailored their practices over the course of the project. This was sometimes due to staff loss or a change in the point-of-contact for the project. There were five instances of a new person coming on board. Once updated on the project and their predecessor’s progress, most of these individuals were able to pick up where the previous contact left off. Often changes and tailoring occurred when the practice was being simplified or combined with an existing task at the clinic. On a few occasions, implementers added elements to their practice such as showing a short educational video prior to asking high-risk patients to complete a survey. Many also opted to add additional practices to work on over the course of the project. This would occur if an individual completed a practice and was interested in another, or if their clinic was promoting another initiative that coincided with a practice in the toolkit, or as a spin-off from the original practice.

At meetings, team leaders assigned tasks, created schedules and timelines, developed standardized templates, and reviewed data trends. Successful interventions utilized the staff’s strengths and included them in the planning and decision-making. Educating staff who were not present at meetings via emails, posters or handouts, informal conversations, or formal training sessions was also important for buy-in. Implementers who were more successful checked in routinely with their staff and would troubleshoot on an ongoing basis.

Collaborations with other departments such as information technology (IT) or with other managers also facilitated practice implementation, as the scope of some projects extended beyond the implementer’s skill set. Some sites incentivized the staff by tying the practice into their performance plan, while others made the practice mandatory. Some individuals piloted their practice before expanding to other teams or sites, while others implemented at multiple locations at the same time and made site-specific adjustments as necessary. As a final step, sites educated patients on practice changes.

5) Monitoring knowledge use

Implementers had the option of using SHEP scores alone as a tracking tool or designing their own tracking mechanism. Some used only the SHEP summary, while others designed their own systems for tracking progress and outcomes. Tracking was challenging, and many implementers did not have the capacity or the knowledge to conduct their own tracking outside of the SHEP performance summary.

Apart from SHEP, tracking included gathering feedback from patients and providers. Patient feedback came primarily from informal, non-systematic verbal conversations about the practice being implemented. Provider feedback was obtained during group meetings or huddles. Other tracking methods included tallying walk-in patients, either to decrease clinic disruption from patients arriving without an appointment or to increase walk-in clinic use and thereby reduce emergency department use. Using the electronic health record (EHR) to track practices was another common approach.

If the practice involved producing and printing materials, implementers used the available stock to determine whether the materials were being used. This was not ideal, as the amount printed did not always reflect actual use. Sometimes staff would inform the implementer when materials were running low, or the implementer would spot-check the stock to assess use.

Some implementers designed their own tracking tools and systems. At one site, where huddles were being implemented, a sheet was created to track clinic issues. Others created short surveys, designed in-house, to assess patient satisfaction. Yet others found ways to spot check implementation through direct observation, audits, or email requests to staff.

6) Evaluating knowledge use

Figure 2 shows the distribution of average rubric site scores across all domains, practices, and time periods by intervention type. While there were no statistically significant differences between the mean or median rubric scores of the two groups, scores in the high coaching intervention group were consistently above 4.75. Four of the 40 sites did not successfully start a practice, while five attempted to start practices but fell short. All five of these sites attempted more than one practice over the course of the project. Sites with high rubric scores tried more implementation strategies overall (high = 24 strategies, low = 16 strategies). While a low rubric score did not indicate a difference in barriers experienced, sites with low rubric scores had fewer practice facilitators (high = 9 facilitators, low = 3 facilitators). Facilitators involved a variety of methods to establish buy-in and commitment, including implementer engagement; staff engagement; leadership support; staffing; interdisciplinary collaborations; and piggy-backing onto regional, national, or site initiatives (Table 2).

Figure 2 Rubric score distribution by coaching intervention type (high vs. low). Sites in this evaluation were randomly assigned to a high (n = 18) or low (n = 20) coaching intervention group to assess whether additional, external support influenced implementation. The boxplots display lines at the maximum, the third quartile, the median, the first quartile, and the minimum rubric scores.
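
As a rough illustration of how the distribution in Figure 2 could be rendered, the sketch below (Python/matplotlib, using made-up site averages rather than the evaluation's data) draws side-by-side boxplots with whiskers at the minimum and maximum, matching the elements described in the legend.

```python
import matplotlib.pyplot as plt

# Illustrative average rubric scores per site; not the evaluation's actual data.
high_coaching = [5.0, 6.2, 7.1, 4.8, 8.0, 6.5, 5.9, 7.4]
low_coaching = [3.5, 6.0, 5.2, 7.8, 4.1, 6.7, 2.9, 5.5]

# whis=(0, 100) extends the whiskers to the minimum and maximum, as in the legend.
plt.boxplot([high_coaching, low_coaching], whis=(0, 100))
plt.xticks([1, 2], ["High coaching", "Low coaching"])
plt.ylabel("Average rubric score (1-10)")
plt.title("Rubric score distribution by coaching intervention type")
plt.show()
```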

Table 2 Facilitators to Successful Practice Implementation

Regardless of rubric scores, sites saw no differences in their patient satisfaction over time. However, they reported that the summaries were useful for gauging their patient satisfaction measures. There were also no notable differences in rubric scores based on the level of support provided to implementers. While the frequency of individual follow-up calls (every 1–3 months vs. every 6 months) did not impact implementation, receiving calls did help to establish accountability and support. The group calls were met with mixed responses: despite inconsistent attendance, implementers indicated that they valued having a venue to troubleshoot with their peers and learn about practices others were pursuing.

7) Sustaining knowledge use

While all implementers were oriented to the patient engagement toolkit at the beginning of the project in order to select their practices, only 18 sites reported returning to it throughout the project. Ninety-three percent of high coaching intervention sites returned to the toolkit, compared with 26% of low coaching intervention sites. Those who returned to the toolkit after selecting their initial practices did so to get ideas for new projects or to share the toolkit with others at their site. Most implementers anticipated returning to the toolkit to select new practices if patient engagement remained a priority at their site. Almost all those with active practices still in place planned to complete their implementation, and those who succeeded with implementation indicated their practices were self-sustaining and now part of routine processes at their clinics.

DISCUSSION

Our region-wide implementation of a provider-facing toolkit of strategies to increase patient engagement found that several factors were vital to successful KT. Successful implementers experienced just as many barriers as less successful ones but leveraged facilitators to overcome obstacles. Key facilitators included implementer engagement, organizational support, and strong collaborations. While not critical to success, coaching support was perceived to aid the process.

Previous definitions of patient engagement lacked a description of facilitators to patient engagement.4, 15 A unique aspect of the CEPACT toolkit is the inclusion of staff resources as facilitators, an element which, in this initiative, proved to be a critical and significant marker of success that needed to be present throughout the KT process. This highlights the necessity for future toolkits to summarize the baseline resources necessary for successful implementation.

Our findings about the systematic barriers to improving patient engagement are consistent with other studies. Staffing and time limitations are known obstacles to patient engagement. Effective clinicians work within these limits to be successful.16 LaRocca et al. found that KT interventions were more effective when they allowed for flexibility and tailoring to local needs and preferences.17 In this patient engagement intervention, we encouraged implementers to tailor practices to their own context and found it enabled them to be persistent in their efforts to change practices.

The literature indicates that coaching and facilitation from sources external to the implementation can be an integral source of problem solving and support for implementers.18, 19 Sites that received more coaching were more likely to revisit the toolkit. Some implementers indicated that the follow-up calls from CEPACT evaluators held them accountable. This coaching element is not inherently built into the KT model, nor is it necessarily available in real-world settings. In the real world, the lack of continuous monitoring by outside evaluators may be counterbalanced by buy-in and collaborations with leadership and staff, holding implementers accountable to local personnel. However, this finding does highlight the value of external practice facilitation to support practice transformation.

This evaluation of a patient engagement toolkit implementation addresses the gap in the literature by providing the details of the processes and outcomes of a KT intervention. The KT framework effectively captured the process of this toolkit dissemination and implementation. As the KT framework asserts, we found the progression to be an iterative process, as implementers stalled or regressed depending on barriers encountered. This patient engagement initiative involved several key ingredients for success woven throughout the process: identifying committed and engaged implementer(s), organizational support, interdisciplinary collaborations, adaptability, and system-level integration.

Analyses of patient satisfaction (SHEP) showed no differences based on average rubric scores or level of coaching support. Patient satisfaction is a construct which is known to be difficult to measure20 and change. SHEP is limited by small sample sizes, recall bias, and lag times that may all impact accurate capture of patient satisfaction. In addition, improving patient engagement may not be enough to change satisfaction which is influenced by many factors.

This initiative was limited by several factors. Social desirability may have biased some implementers’ responses. All assessments occurred over the phone rather than in person. Nonetheless, these calls were important to the implementers for accountability and idea exchange. Implementers varied in their practices and in their approaches to implementing them, so replication of the process might produce different outcomes; however, certain key activities were similar across successful sites. These included identifying an engaged implementer, providing routine check-ins via meetings or weekly emails, and involving all the necessary staff to implement the practice (such as IT personnel) and disseminate key information required for successful implementation.

In this assessment of a patient engagement toolkit dissemination and implementation, implementers were successful when they had engagement and support from their leadership and team. Future toolkits should highlight the facilitating factors necessary for KT, which are important not only for successful implementation but also for building accountability and sustainability.