INTRODUCTION

In 2001, the Institute of Medicine’s (IOM) landmark report, Crossing the Quality Chasm, identified patient-centeredness as a key aspect of high-quality care [1]. Providing patient-centered (PC) care has become a significant focus for healthcare systems across the USA. PC care represents a shift from a clinician-driven, disease-focused approach toward one in which the patient’s values, needs, and experiences are accounted for in medical consultation, treatment, and follow-up [2]. PC communication, which includes active patient involvement in shared decision-making and treatment recommendations, is a cornerstone of PC care [3]. PC communication has been associated with a range of improved health-related outcomes [4], including less patient anxiety [5], higher quality of life [6], better emotional health [7], better blood glucose control [7], higher patient activation [8], higher patient satisfaction [9], and higher functional health literacy [10].

Clinician communication skills are mutable [11,12,13]; thus, intervening to improve PC communication is a promising approach to improving patient experiences. Audit-feedback, a process in which clinician behaviors are measured and the resulting data are communicated back to clinicians [14], may be effective in improving communication [15] because it increases clinicians’ awareness of patients’ care experiences. According to an audit-feedback conceptual model [16], clinician reaction is a key factor influencing the extent to which feedback will have an impact; clinicians who perceive feedback as inaccurate, low value, or otherwise flawed are less likely to act on it. Measuring patient perceptions of clinician communication may therefore be useful only if clinicians find the feedback actionable and trustworthy.

A valid and reliable PC communication measure is needed to improve the likelihood that clinicians will value the feedback provided. Many existing PC communication measures are lengthy, are measured at the facility or department rather than the individual level, and are administered long after the clinical encounter, making it difficult for clinicians and leadership to use the information in a timely manner [17, 18]. While there are many components of PC communication [3], for this study we focused on the IOM definition [1]—the extent to which patient values and preferences are incorporated into care. collaboRATE, developed to measure shared decision-making (SDM), is a 3-item tool that assesses the extent to which each of 3 core dimensions is present in a clinical encounter: (1) explanation of the health issue, (2) elicitation of patient preferences, and (3) integration of patient preferences in treatment decisions [5, 6, 19]. These dimensions not only capture SDM but also map directly to the core principles of PC communication (see Table 1). collaboRATE has demonstrated discriminative validity, concurrent validity with measures of SDM, and intra-rater reliability, and is sensitive to change [5].

Table 1 collaboRATE and Patient-Centered Communication

We assessed the value of measuring PC communication at the point-of-care using collaboRATE and reporting this data in audit-feedback to clinicians. The aims of the study were first to determine whether collaboRATE can meaningfully identify between-clinician differences and second to assess the acceptability, practicality, and potential integration [20] of feedback using this measure with primary care providers (PCPs).

METHODS

Study Design

Brief, in-person, point-of-care surveys were administered to primary care patients at one Veterans Affairs Medical Center (VAMC). Data from the surveys were used to create audit-feedback reports which were shared during qualitative interviews in the second phase of the study.

All study procedures were approved by the local VA Institutional Review Board.

Recruitment/Participants

Data collection took place at primary care clinics at one VAMC in the northeast U.S. between April and August 2018. Patients who had a primary care appointment with one of 20 PCPs were approached by a research assistant immediately after their appointment and invited to complete a brief survey. Patients were excluded if they were not in the clinic for a primary care appointment, were not able to complete the survey in English, had already been enrolled in the study, or had been seen by a PCP for whom a sufficient number of surveys had already been collected. Of the 1208 patients approached for this study, a total of 657 were eligible and invited to participate.

After patient data collection was completed, PCPs and leadership were invited via email to take part in semi-structured qualitative interviews. Only PCPs who had at least 25 patients respond to the survey (N = 16) were eligible for the interview since prior work has indicated that this number is needed to characterize the clinician’s overall performance [7]. Eligible leaders included all medical center and regional leadership with responsibilities associated with overseeing primary care and/or improving patient experiences of care (N = 8).

Data Collection

Patients were given a one-page paper survey that took about 2 min to complete. It included structured questions regarding patients’ experience in their primary care appointment, including the 3-item collaboRATE measure, a satisfaction question, a question about their general health, and a question suggested by a patient consultant concerning how well their needs were met (see Table 2 for survey details).

Table 2 Survey Questions

The collaboRATE questions are scored on a 0–9 Likert scale anchored by “no effort was made” and “every effort was made.” Responses to each item are dichotomized prior to analysis to reflect whether the respondent gave a “top score” (i.e., marked a 9) or not. A “top score” on the full collaboRATE indicates that the respondent gave a top score on all three items. This scoring is consistent with collaboRATE recommendations derived from psychometric reliability and validity work conducted with this measure [5, 21]. Sensitivity analyses revealed no biasing effect of the “top score” cut-point: patterns of findings and significance were consistent with those reported below when the dichotomized variables were instead treated as continuous.
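The dichotomization rule described above can be sketched in a few lines of code. This is an illustration only; collaboRATE is administered as a paper survey, and the function names here are our own, not part of the measure:

```python
# Sketch of the collaboRATE "top score" scoring rule: each of the 3 items
# is rated 0-9, and an overall "top score" requires a 9 on all three items.
# Function names are illustrative, not part of the published measure.

def item_top_score(rating: int) -> bool:
    """Dichotomize a single 0-9 collaboRATE item: True only for a 9."""
    if not 0 <= rating <= 9:
        raise ValueError("collaboRATE items are scored 0-9")
    return rating == 9

def collaborate_top_score(ratings: list[int]) -> bool:
    """Overall top score: the respondent must give a 9 on all three items."""
    assert len(ratings) == 3, "collaboRATE has exactly 3 items"
    return all(item_top_score(r) for r in ratings)

print(collaborate_top_score([9, 9, 9]))  # True
print(collaborate_top_score([9, 9, 8]))  # False: one item below 9
```

Note how strict the rule is: a respondent marking 9, 9, 8 counts the same as one marking 0, 0, 0 for the dichotomized outcome, which is why the sensitivity analyses above matter.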

Qualitative interviews with clinicians were conducted in-person and lasted approximately 45 min. Prior to their interviews, PCPs were asked a small number of structured questions regarding demographic information and professional experience. Each PCP was asked to reflect on the idea of receiving regular feedback from patients about their communication during appointments. The clinician was then given a brief, personalized feedback report on how their own patients assessed their communication and their satisfaction with the visit overall. They reviewed their feedback with the interviewer using a “think-aloud” protocol [22], which elicited feedback on the presentation (e.g., ease of interpretation) and the content (e.g., alignment with expectations). They were also shown a graph anonymously depicting the proportion of collaboRATE “top scores” for all clinicians for whom ≥ 25 surveys were completed, allowing them to compare their own performance to that of their colleagues. Finally, the PCPs were asked about the usefulness of collecting and receiving such point-of-care communication data and for their recommendations. Qualitative interviews with hospital leaders were similar to those with the PCPs; however, the leaders’ feedback reports included only the graph of all clinician scores (depicted anonymously) and scores by clinic (see Fig. 1 for data collection details).

Figure 1 Data collection flow chart.

Data Analysis

Descriptive statistics were calculated to characterize the mean and distribution across clinicians for patient demographics, clinician demographics, patient perceptions of clinician communication, and overall care.

Next, to examine whether patient perceptions of clinician communication were associated with basic patient or clinician demographics or with perceptions of overall care, we employed generalized linear mixed models to account for the nesting of patients within clinicians, with intercepts and patient-level slopes treated as random effects across clinicians. A logit link function and restricted penalized quasi-likelihood (PQL) estimation were used for models with binary outcomes. All continuous patient-level predictor variables were within-clinician (group-mean) centered and all continuous clinician-level predictor variables were grand-mean centered [23]. Analysis to identify high- and low-rated clinicians was restricted to the 16 clinicians with ≥ 25 patient respondents.
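The two centering schemes described above can be illustrated with a small sketch. This assumes a long-format dataset with one row per patient; the column names are hypothetical, not the study's actual variables:

```python
import pandas as pd

# Illustrative data: one row per patient, nested within clinicians.
# Column names are hypothetical stand-ins for the study's predictors.
df = pd.DataFrame({
    "clinician_id":    [1, 1, 1, 2, 2, 2],
    "patient_age":     [40, 60, 50, 70, 80, 75],   # patient-level predictor
    "clinician_years": [5, 5, 5, 12, 12, 12],      # clinician-level predictor
})

# Within-clinician (group-mean) centering for a patient-level predictor:
# subtract each clinician's own mean, so the coefficient reflects
# differences between patients within the same clinician's panel.
df["age_wc"] = df["patient_age"] - df.groupby("clinician_id")["patient_age"].transform("mean")

# Grand-mean centering for a clinician-level predictor:
# subtract the overall mean across the whole sample.
df["years_gc"] = df["clinician_years"] - df["clinician_years"].mean()

print(df[["clinician_id", "age_wc", "years_gc"]])
```

Here clinician 1's panel mean age is 50 and clinician 2's is 75, so `age_wc` compares each patient only to that clinician's other patients, which is what separates within-clinician effects from between-clinician differences in the mixed models.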

Interviews were audio-recorded, transcribed, and imported into NVivo software [24]. Using a framework analysis approach [25], two members of the study team (ED and JH) developed an initial codebook that included a priori themes of interest and emergent themes that arose after reading three transcripts. They then coded the same three transcripts independently and met to reach coding consensus and standardize the codebook. They independently coded the remaining transcripts, meeting periodically to discuss and resolve coding challenges. Analysis included the systematic comparison of coded segments to create categories, map connections, and identify theoretical concepts and salient themes in the data. Results were shared with the larger research group for final interpretation.

RESULTS

Phase 1. Patient Perceptions of Clinician Communication and Demographic Variables

A total of 485 patients from 20 clinicians completed the survey. Table 3 shows descriptive statistics for each clinician in the overall sample. Patient perceptions of clinician communication were not associated with whether the clinician was full-time or part-time, whether the clinician worked primarily at the main VA hospital or an outpatient clinic, patients’ self-reported general health, or whether the patient was seeing their clinician for a routine visit on the day the survey was completed. Our mixed-model analysis (Table 4) revealed that age was a significant predictor of the likelihood of providing a collaboRATE “top score”: patients aged 39 or younger were significantly less likely to provide a top score (55.0%) than patients over the age of 65 (75.0%), with patients between the ages of 40 and 64 falling in the middle (71.2%). We observed a general trend in which better self-reported health was associated with a greater likelihood of providing a collaboRATE “top score”; however, this association was not statistically significant (p = 0.064). No other patient or clinician characteristics significantly predicted patient perceptions of clinician communication.

Table 3 Descriptive Statistics by Provider (N = 20)
Table 4 Predicting Patient Perceptions of Patient-Centered Clinician Communication from Patient and Clinician Demographics (N = 485 patients from 20 clinicians)

Patient Perceptions of Clinician Communication and Overall Care

Because perceptions of care were strongly skewed toward favorable responses, we dichotomized responses to both the “needs met” and “satisfaction” questions such that we compared those providing the highest rating (i.e., needs completely met; very satisfied) to all others. Among the entire sample, patients’ perceptions of care were high overall—the average rating for “needs met” was 4.70 (out of 5) and the average for satisfaction was 5.76 (out of 6) (see Table 2). Those who gave a collaboRATE “top score” to their clinician had 10.8 times the odds of reporting that their needs were “completely met” (89.6%) compared with those who did not (44.3%) (t(19) = 9.71, p < 0.001, OR = 10.8). Additionally, those who gave a collaboRATE “top score” had 13.3 times the odds of reporting that they were “very satisfied” (94.2%) compared with those who did not (55.0%) (t(19) = 7.48, p < 0.001, OR = 13.3) (see Table 4). These relationships held even when controlling for patient age and self-reported general health (Table 5).
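As a quick arithmetic check, the unadjusted odds ratios implied by the cited percentages closely match the reported values (the published ORs come from the mixed models, so exact agreement is not guaranteed):

```python
# Back-of-envelope odds-ratio check from the reported percentages:
# "needs completely met": 89.6% of top-scorers vs 44.3% of non-top-scorers;
# "very satisfied": 94.2% vs 55.0%.

def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1.0 - p)

def odds_ratio(p_top: float, p_not_top: float) -> float:
    """Odds ratio comparing top-score vs non-top-score respondents."""
    return odds(p_top) / odds(p_not_top)

print(round(odds_ratio(0.896, 0.443), 1))  # 10.8 ("needs completely met")
print(round(odds_ratio(0.942, 0.550), 1))  # 13.3 ("very satisfied")
```

This also illustrates why odds ratios read larger than risk ratios when outcomes are common: 94.2% vs 55.0% is only about a 1.7-fold difference in probability, but a 13.3-fold difference in odds.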

Table 5 Predicting Needs Met and Satisfaction from Patient Perceptions of Clinician Communication (N = 485 Patients from 20 Clinicians)

We found substantial variability across clinicians in terms of patients’ perceptions of their communication. The proportion of patients giving collaboRATE “top scores” ranged from 41 to 92% among the 16 clinicians who had ≥ 25 patient respondents (see Fig. 2).

Figure 2 Proportion of patients giving collaboRATE “top scores” among 16 clinicians who had ≥ 25 patient respondents.

Phase 2. Provider and Leadership Perspectives on the collaboRATE Measure

Seven PCPs with ≥ 25 patient respondents and six leaders participated in interviews. Characteristics of the PCP interviewees are described in Table 6. The PCPs we interviewed included internal or family medicine physicians and nurse practitioners, all of whom had worked for the VA for ≥ 5 years. Additionally, most had participated in some type of communication training prior to this study. Among the 16 PCPs eligible to be interviewed, those who participated trended toward a higher likelihood of receiving collaboRATE top scores (77.0%) than those who declined (65.0%) (t(14) = 2.04, p = 0.06, OR = 1.8). Six of the eight leaders invited to take part in an interview participated. Because some of the leaders were also practicing PCPs, we chose to analyze the PCP and leadership data together. As a result, most themes below are ascribed to “interviewees,” though we note when themes were detected primarily in one group or the other.

Table 6 Description of PCP Interviewees (N = 7)

Below we provide an overview of general perceptions of patient-centered communication and the value of receiving individualized feedback based on collaboRATE data. Participants highlighted strengths of the measure, including its timeliness and that it reflects individualized data. They also highlighted several factors that make receiving the information challenging and offered recommendations for making the information more actionable.

General Perceptions of Patient-Centered Communication

Interviewees recognized that PC communication was a means to achieving positive patient outcomes and had important intrinsic value. As one leader remarked, “I can have great outcomes, but if I do bad on this (PC communication) score then I am really missing the mark” (HL-03). The exception was one clinician who questioned at length the value of PC communication relative to the primary goal of their work, which is to “diagnose and treat” patients. This clinician said:

Someone was talking to me [about] Disneyland and [how] we should try to resonate something like that and I feel like this is healthcare and this is serious business (…). I really feel like … if 70% of my patients are in good health … I feel like that’s my job. You don’t have to like me. (P-03)

Like some other PCPs, this clinician conflated being patient-centered with being likeable.

Many suggested PC communication skills are what differentiate one clinician from another. At the same time, some noted that practicing this skill was not necessarily instinctual for clinicians; one leader characterized this by saying it was difficult to refrain from simply telling patients “this is what you have to do.” These general perceptions of PC communication provide context for the themes surrounding measurement and feedback described below.

Perceptions of PC Communication Measure and Feedback

Interviewees expressed interest in the audit-feedback report, many describing the data as unique both in its timeliness (collected at point-of-care) and in that it can be attributed to the individual clinician.

This is the only data that I’ve seen that could tell me what I’m doing right and what I could be doing better. You know? So that’s what I would put into action. (P-06)

The clinician-specific point-of-care measure was compared favorably with other patient experience measures that were collected distally and provided feedback at the facility level. More than one interviewee also noted that this measure was distinct because it felt within clinicians’ control, unlike other measures that depended greatly on patient behaviors or external factors (e.g., A1C levels).

I think because it’s the provider level that is different. And it’s really hard to manage your own practice if you don’t necessarily (know)… was it something I did?, versus external factors. (HL-03)

Although most interviewees felt it could be worthwhile to receive this type of data regularly, this opinion was uniformly accompanied by several caveats, described below.

More Information Needed to Make It Actionable

A prominent theme among PCPs was that additional information, such as patient identifiers, was needed to make the measure more actionable. Such information included patients’ primary diagnoses, the reason for the visit, or the specific reason for the score given (e.g., the clinician looked at the computer too much). As one interviewee said, “How can we improve behavior if we don’t know what the issue is?” (HL-01). Both PCPs and leadership showed interest in information regarding the range and distribution of clinician scores, noting that this information, along with additional benchmarks or norms for the collaboRATE measure, was critical for determining if and where communication issues existed.

Prioritization by Leadership Among Competing Metrics

Interviewees questioned the value of receiving these data given the other measures on which clinicians were being evaluated. Several PCPs noted that communication could not always be prioritized given the time allocated for appointments and competing demands. As one clinician said:

If all I factored in were [sighs], you know, their issues and not the reminders and the performance measure that we’re graded on, I’d probably achieve a higher score [on PC communication], honestly. (…) But then the flip side is if I do that I’ll get, you know, badgering emails saying my performance measures are dropping (…). (P-12)

Unless PC communication is recognized as an important performance metric and equally valued by leadership, many were skeptical that simply measuring it would motivate change in PCP behavior. Clinicians also felt inundated with performance metrics, raising concerns that an additional metric would add to this burden.

More Evidence That PC Communication Scores Could Improve, Especially in the Face of Systemic and Logistical Challenges

Collecting and reporting PC communication scores was considered worthwhile only if the scores were amenable to improvement. Some expressed doubt that clinicians can change their behaviors:

It’s not something that you learn, it’s something that’s ingrained in you. So, I’m not sure you could teach it, if you understand. (…) You know we’ve done the motivational interviewing, we’ve done all of the workshops. And has that really influenced people’s practice? I don’t know. (P-10)

Additionally, clinicians pointed to systemic and logistical challenges that make it hard for them to be patient-centered, even if they were to succeed at improving their communication. While it was acknowledged that the VA has longer appointment times than most private practices, many still felt that time constraints were the greatest barrier to practicing PC communication. This was specifically noted within the context of the number of clinical reminders to be addressed and frequent instant messaging from telephone triage during patient visits.

It’s tough to get everything done in the visit. There’s my agenda and there’s the patient’s agenda. And fortunately, there’s a lot of overlap. But there’s not always. (…) You know, LDL and their cholesterol and there’s endless reminders. I could spend the whole visit dealing with reminders. (P-12)

Addressing these reminders, along with general use of the electronic medical record, requires substantial time facing the computer instead of the patient. One clinician noted that, given the configuration of the room, she must sit with her back to the patient while on the computer; she remarked with exasperation that clinicians are being asked to be more patient-centered while nothing has been done to make the space in which she sees patients more conducive to PC interactions.

Recommendations for PC Communication Interventions

Peer-led or neutral third-party coaching was often mentioned as a way to improve clinicians’ PC communication. Most interventions discussed by interviewees focused on low-scorers learning from high-scorers. This was described as either self-directed and clinician-specific or more management-led and systemic, all centered on two questions: What are the highest-scoring clinicians doing differently than the low-scorers? And what supports around them help them do better? General communication trainings were almost universally considered unhelpful, though some felt such trainings could provide a foundation if they were mandatory and offered with protected time to participate.

DISCUSSION

Given the number of quality metrics currently used to evaluate PCP performance, the question may be asked: Why use a metric focused on PC communication? Our findings showed that the collaboRATE measure is sensitive, demonstrating variation in scores among clinicians, and is predictive of satisfaction and of perceptions of how well needs are met at the level of individual patients. These qualities appear to enhance the measure’s face validity for PCPs and leaders and make it a potentially valuable tool for understanding clinicians’ PC communication skill levels. Our qualitative findings suggest PCPs and leaders recognize the potential value of point-of-care surveys, such as collaboRATE, for assessing PC communication because the information is timely and individualized.

However, there are several perceived barriers that would need to be addressed to support the use of point-of-care surveys with audit-feedback processes to improve PC communication. First, providers’ general perceptions of PC communication indicate this may be a significant shift in practice for some clinicians. PC communication involves prioritizing patient preferences to inform the care provided. Yet medical education tends to be disease-focused, encouraging practitioners to identify the source of suffering and rely on standardized evidence-based treatments, with scant attention to their alignment with individual patient values and goals—a “find it, fix it” approach. This problem-solving approach may explain why interviewees sought to learn the identity of the patients who reported low scores, or at least some telling characteristics about them (e.g., has diabetes or chronic pain), to explain or fix the problem at an individual patient level.

To improve PC communication, our findings also suggest the need to better understand what PC communication entails. Several of our interviewees conflated PC communication with “being well liked” and general patient satisfaction. Although these aspects may be a part of PC communication, it is not about creating “Disneyland” or “giving in” to patient demands. Rather, PC communication involves eliciting and understanding the patient as an individual and reaching a shared understanding that is concordant with the patient’s values [3]. If clinicians who rate poorly on PC communication skills do not distinguish between PC communication and social desirability, they may be less inclined to put in effort to improve their PC communication skills, feeling that the purpose of their job lies elsewhere. A more proactive approach is needed. In fact, significant efforts are underway nationally, both outside [26] and inside the VA [27], to change the culture of healthcare and move from a provider-driven, disease-focused system to one that is more patient-centered and holistic.

Specific feedback on measuring PC communication revealed it was not perceived to be an institutional priority by many interviewees. Clinicians are evaluated on many metrics, particularly quality measures documenting preventive care. Clinicians feel, and research shows [28], that the amount of time it can take to address the clinical reminders linked to these quality measures can be overwhelming, thereby limiting one’s ability to be more patient-centered [29]. Participants discussed the seeming incompatibility between numerous clinical reminders and PC communication. Yet paradoxically, many felt this would have to be elevated to a quality metric to signal its significance to leadership and to compel clinicians to act, in the spirit of “what gets measured gets done.” It is understandable that adding a quality metric, no matter its merits, may not be well received by clinicians when they already feel they are being evaluated on too many. It may be viewed more positively if done in combination with culling less effective metrics.

Many clinicians also identified organizational and logistical issues that impede their ability to practice PC communication (e.g., unhelpful location of computers in the room). This is a concern that has been noted in the literature [30]. Failure to address these issues is viewed by some as an indicator that this is not an institutional priority. Feedback to clinicians is most likely to be effective when the healthcare organization can support PC communication via systemic improvements.

Interviewee recommendations for interventions seemed to take these challenges into account. It is perhaps because of these perceived barriers to practicing PC communication that most interviewees felt one of the most effective interventions would be to learn from peers who seem to manage to achieve high ratings even within the constraints of their practice. There was strong consensus among interviewees around using a strengths-based approach and a peer-led coaching model to improve PC communication. Peer-led coaching models have been used in many medical education programs [31] and there is some evidence of their successful use among doctors post-training [32, 33]. Appreciative inquiry approaches have also been used to create positive organizational change within healthcare institutions [34,35,36].

LIMITATIONS

This study has several limitations. First, it involved a small sample of clinicians and leaders at one VAMC; this may limit generalizability to other medical centers. Second, there was a low response rate among the PCPs (7/16) and there is a chance of selection bias as the PCPs who agreed to be interviewed had a higher likelihood of receiving collaboRATE “top scores” than those who declined. Additionally, we did not collect many patient and clinician demographics, limiting our ability to explore differences in participation and collaboRATE scores by these variables. Finally, we chose to focus our study on individual PCPs and not on healthcare teams. In this era of team-based care, it seems an important area for future study to investigate how individual scores of PC communication interact with or otherwise affect patient perceptions of the quality of PC communication by the team. Despite these limitations, this study provides important findings that may help create more effective interventions to improve clinicians’ PC communication.

CONCLUSIONS

Clinicians are measured on many metrics over which they have varying degrees of control. PC communication is an aspect of care that is within clinicians’ control and has the potential to improve patients’ experience of care and their clinical outcomes. As a sensitive, timely, and targeted measure of PC communication, collaboRATE may be the right tool for health systems. Our findings suggest that providing feedback to clinicians based on their patients’ perceptions, while valued, may not be sufficient to change communication behaviors. This study identified several barriers that should be addressed in future implementation studies to increase the likelihood of improving PC communication among clinicians. As the focus on patient-centered care increases throughout healthcare systems, now is the time both to measure it effectively and to find creative solutions to improve the ways clinicians communicate with their patients.