KT Framework and the Patient Engagement Toolkit
Figure 1 depicts the findings of this evaluation in the context of the KT framework. Each step in the framework is illustrated with examples of what was accomplished. Progress was bi-directional, with key ingredients woven throughout: implementer engagement, organizational support, and strong collaboration were key markers of success. Below are the results of the seven steps of KT taken to implement the patient engagement toolkit.
Identifying the problem
Implementers had varying responses when first introduced to the toolkit by the CEPACT team. A few used it as an opportunity to begin a project of interest. While reactions to the toolkit were primarily positive, many implementers were unsure of how to use it. Several pointed out practices in the toolkit that were already in place at their facility, while others pointed out practices they felt would not work at their site.
Implementers chose their practices for a variety of reasons. Most used the SHEP scores to assist in their practice selection and considered areas where there was the most room for improvement. Some implementers chose practices highlighted in individual SHEP questions. For example, one SHEP question measured whether patients were being asked about their stress level. To ensure providers were asking about stress, some sites posted the exact language near the nurses’ station, so staff would remember to ask the question verbatim. Others selected practices based on already known site issues (for example, reducing high-risk patient readmissions). Implementers also selected activities that coincided with ongoing leadership initiatives already in progress.
There were 25 different practices selected by 40 sites. Activities at the patient level ranged from goal setting to education on specific topics such as treatment options, medication changes, or how to contact staff and utilize services. Activities at the staff level included patient-centered trainings and process changes to improve the patient’s care experience.
Adapting knowledge to local context
Implementers used a variety of strategies to introduce the project to their facilities and obtain buy-in from staff and leadership. Meeting with facility leadership or service line leadership to present the project was a common first step. If the project involved a specific key role, some individuals would target leadership specific to that role (e.g., clerk supervisors). At initial meetings, SHEP scores were reviewed with the team as a tool for selecting practices. In anticipation of practice spread, implementers also met with each sub-facility leader individually or sought out representatives and worked on “selling the project” to these key personnel. To incentivize staff involvement, some sites linked selected practices with performance metrics and patient outcomes. Implementers also attempted to establish that a practice was easier and less time-consuming than providers might anticipate.
Not all implementers had a formal team. Those without a formal team would seek out personnel to provide consultation on the practice. Others had a formal team that would meet regularly to review progress on the project. Regardless, many involved personnel from various disciplines including nurses, clerks, primary care physicians, pharmacists, social workers, educational specialists, and administrative staff. Some implementers were never able to assemble even an informal team.
Assessing barriers to knowledge use
Implementers indicated several barriers to the application of patient engagement practices (Table 1). The most frequently cited barrier was staffing shortages followed by time availability, lack of buy-in, and issues with leadership. Less frequently mentioned barriers were issues with team communication, scheduling problems, and patient pushback or confusion. Barriers such as staffing shortages impacted implementation and progression; however, most implementers continued to push forward.
Tailoring and implementing
Many implementers changed or tailored their practices over the course of the project, sometimes because of staff loss or a change in the point-of-contact for the project; in five instances, a new person came on board. Once updated on the project and their predecessor’s progress, most of these individuals were able to pick up where the previous contact left off. Changes and tailoring often occurred when a practice was simplified or combined with an existing task at the clinic. On a few occasions, implementers added elements to their practice, such as showing a short educational video before asking high-risk patients to complete a survey. Many also opted to add practices to work on over the course of the project. This occurred when an individual completed a practice and was interested in another, when their clinic was promoting another initiative that coincided with a practice in the toolkit, or as a spin-off from the original practice.
At meetings, team leaders assigned tasks, created schedules and timelines, developed standardized templates, and reviewed data trends. Successful interventions utilized the staff’s strengths and included them in the planning and decision-making. Educating staff who were not present at meetings via emails, posters or handouts, informal conversations, or formal training sessions was also important for buy-in. Implementers who were more successful checked in routinely with their staff and would troubleshoot on an ongoing basis.
Collaborations with other departments such as information technology (IT) or with other managers also facilitated practice implementation, as the scope of some projects extended beyond the implementer’s skill set. Some sites incentivized the staff by tying the practice into their performance plan, while others made the practice mandatory. Some individuals piloted their practice before expanding to other teams or sites, while others implemented at multiple locations at the same time and made site-specific adjustments as necessary. As a final step, sites educated patients on practice changes.
Monitoring knowledge use
Implementers had the option of using SHEP scores alone as a tracking tool or designing their own tracking mechanism. Some used only the SHEP summary, while others designed their own systems for tracking progress and outcomes. Tracking was challenging, and many implementers did not have the capacity or the knowledge to conduct their own tracking outside of the SHEP performance summary.
Apart from SHEP, tracking included gathering feedback from patients and providers. Patient feedback came primarily from informal, non-systematic verbal conversations about the practice being implemented. Provider feedback was obtained during group meetings or huddles. Other tracking methods included tallying walk-in patients, either to decrease clinic disruption from patients arriving without an appointment or to increase walk-in clinic use and thereby decrease emergency department use. Using the electronic health record (EHR) to track practices was another common approach.
If a practice involved producing and printing materials, implementers used the remaining stock to gauge whether the materials were being used. This was not ideal, as knowing what was printed did not always indicate what was used. Sometimes staff informed the implementer when materials were running low, or the implementer spot-checked the stock to assess use.
Some implementers designed their own tracking tools and systems. At one site, where huddles were being implemented, a sheet was created to track clinic issues. Others created short surveys, designed in-house, to assess patient satisfaction. Yet others found ways to spot check implementation through direct observation, audits, or email requests to staff.
Evaluating knowledge use
Figure 2 shows the distribution of average rubric site scores across all domains, practices, and time periods by intervention type. While there were no statistically significant differences between the mean and median rubric scores of the two groups, scores in the high coaching intervention group were consistently above 4.75. Four sites did not successfully start a practice, while five (of 40) attempted to start practices but fell short; all five of these sites attempted more than one practice over the course of the project. Sites with high rubric scores tried more implementation strategies overall (high = 24 strategies, low = 16 strategies). While a low rubric score did not indicate a difference in barriers experienced, sites with low rubric scores had fewer practice facilitators (high = 9 facilitators, low = 3 facilitators). Facilitators involved a variety of methods to establish buy-in and commitment, including implementer engagement; staff engagement; leadership support; staffing; interdisciplinary collaborations; and piggy-backing onto regional, national, or site initiatives (Table 2).
Regardless of rubric scores, sites saw no differences in their patient satisfaction over time. However, they reported that the summaries were useful for gauging their patient satisfaction measures. There were also no notable differences in rubric scores based on the level of support provided to implementers. While the frequency of individual follow-up calls (1–3 months vs. 6 months) did not impact implementation, receiving calls did help to establish accountability and support. The group calls were met with mixed responses: despite inconsistent attendance, implementers indicated that they valued having a venue to troubleshoot with their peers and learn about practices others were pursuing.
Sustaining knowledge use
While all implementers were oriented to the patient engagement toolkit at the beginning of their project to select their practices, only 18 sites, based on self-report, returned to it throughout the project. Ninety-three percent of high coaching intervention sites returned to the toolkit, compared with 26% of low coaching intervention sites. Those who returned to the toolkit after selecting their initial practices did so to get ideas for new projects or to share the toolkit with others at their site. Most implementers anticipated returning to the toolkit to select new practices if patient engagement remained a priority at their site. Almost all of those with active practices still in place planned to complete their implementation, and those who succeeded with implementation indicated their practices were self-sustaining and now part of routine processes at their clinics.