INTRODUCTION

Patients are interested in knowing about the quality of their physicians. Nearly 60% of US respondents report that when choosing a physician, online data and physician reviews are “somewhat” or “very” important.1 To find information about physicians, patients may look to commercial physician rating websites, government-sponsored websites, independently sponsored public reporting organizations, health system sites, insurance company portals, and other sources.1–13 Although patient use of websites that publicly report quality metrics remains minimal,14 patients may be more likely to seek out online reviews about physicians or health systems.15 Commercial physician rating websites, for example, are viewed millions of times per month, and the number of patient reviews on these sites has grown rapidly.8,11

Few prior studies have reported on physician attitudes towards online reviews or public reporting of quality information. A now decade-old study suggested that general internists’ support for public reporting of individual physician performance was limited,16 and a recent single-center study suggests that few physicians feel that numerical reviews on commercial websites are accurate.16 Because gaps remain in our understanding of physician attitudes and beliefs about the many websites that publicly report health care quality and patient experience data, we aimed to examine physician attitudes about the accuracy of data on these websites and to describe physician beliefs about the helpfulness of different types of physician data for patients who are choosing a physician.

METHODS

Survey Characteristics and Administration

The Rhode Island Department of Health (RIDOH) has administered the Rhode Island Health Information Technology (HIT) Survey to physicians since 2008 as part of the state’s legislatively mandated Healthcare Quality Reporting Program. Survey data are used to measure and report process measures relating to HIT adoption and use, as well as the impact of HIT on physicians’ workflow and job satisfaction. Individual practitioner measures have been publicly reported since 2009. Physicians are officially “required” (by the RIDOH) to respond to the survey, but there is no penalty for nonresponse. Respondents are asked to self-identify as a primary care physician (PCP), as determined by an answer of “yes” to the question, “Do you provide primary care?” Respondents are also asked to report their main practice setting (outpatient/office or inpatient/hospital) and practice size (1–3 clinicians, 4–9 clinicians, 10–15 clinicians, or more than 15 clinicians). We obtained other physician characteristics (age, gender, and specialty) and information about non-respondents from the state licensure file.

In 2017, the survey was administered between May 8 and June 12. Hard copy survey notices were mailed to all 4197 physicians licensed in Rhode Island who were in active practice and located in Rhode Island, Connecticut, or Massachusetts. Email notices and up to two email reminders were sent to those who had an email address on file with the RIDOH (N = 2296). There was no incentive offered for survey participation. The RIDOH works with local stakeholders to revise the survey tool prior to each administration in order to collect data to inform state HIT policy and initiatives and to accommodate new or evolving data needs. The RIDOH’s Institutional Review Board (IRB) reviewed this study and deemed it exempt.

Development and Description of Questions

In collaboration with RIDOH and a multi-stakeholder group (of physicians, researchers, and community members), the authors developed two additional questions for the 2017 survey about physicians’ beliefs and attitudes about websites that report data on experience and quality, including patient-generated reviews. We then reviewed the questions with stakeholders and pilot tested them in a convenience sample of local physicians.

These two final questions on the survey asked respondents about their knowledge, attitudes, and beliefs about the accuracy of data found on websites that report data about physician quality, and about the types of information that would be helpful for patients choosing a new physician. The second-to-last question on the survey read: “How accurate of a picture do you feel that the following types of online resources give about the quality of care that physicians provide?” We asked participants to respond using a 4-point Likert scale (“Not at all accurate; slightly accurate; somewhat accurate; very accurate”; “Have not heard of this” was an additional answer choice). The online resources physicians were asked to evaluate included: commercial physician rating websites (for-profit sites that are available to the public and allow patients to read and/or write quantitative and narrative reviews about physicians)10,11; health system websites (hospital and health system websites that compile and report physician-level data, most commonly ratings and comments drawn from the Consumer Assessment of Healthcare Providers and Systems [CAHPS] surveys)13; individual practice websites (any website created and maintained by an individual practice, which may or may not report data on physician quality or patient experience)3,17; the Rhode Island Department of Health’s Find a Doctor tool18 (which includes data on board certification, hospital privileges, address information, disciplinary actions, and quality metrics related to use of health information technology); Medicare public reporting websites (which include practice-level performance data but minimal data on individual physician quality or experience metrics)19; non-profit physician quality websites (e.g., organizations that report practice- or physician-level quality or experience data separately from RIDOH)20; and insurance subscriber portals (insurance company sites that include information on in-network status and, in some cases, quality and experience data). We included links to examples of each type of site within the survey.

The final question on the survey asked respondents: “Which types of physician-specific information (i.e., not about the practice overall) would be helpful to include in online resources for patients to help them choose a new physician? (Select all that apply).” Respondents were provided a list of 14 information types to choose from (board certification, insurance plans accepted, clinical interests, languages spoken, hospital affiliation, residency training location, medical school, sex/gender, history of disciplinary actions, use of electronic health records (EHRs), performance-based quality measures, patient reviews and ratings, and age) as well as an option to provide additional free-text responses.
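As an illustration of how “select all that apply” responses of this kind can be tabulated into the percentages reported in the Results, the sketch below computes the share of respondents endorsing each information type from one-hot coded survey data. The data frame, column names, and values are hypothetical examples for illustration, not the actual survey export.

```python
# Minimal sketch: tabulating "select all that apply" survey responses.
# The data frame and its column names are hypothetical examples.
import pandas as pd

# Each row is one respondent; each column is 1 if the item was selected.
responses = pd.DataFrame({
    "board_certification": [1, 1, 0, 1],
    "insurance_accepted":  [1, 0, 1, 1],
    "patient_reviews":     [0, 1, 0, 0],
})

# Percentage of respondents selecting each information type.
pct_selected = responses.mean() * 100
print(pct_selected.round(1).sort_values(ascending=False))
```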

Analysis

We stratified respondents into PCPs and non-PCPs, based on self-report in the survey, because at least one prior study focused on primary care physicians’ attitudes and beliefs towards public reporting mechanisms,16 and another described differences in the number and type of reviews for primary care compared with other specialties.10 We categorized age into three groups (30–50, 51–64, and 65–90). We used summary statistics, including means and percentages, to describe the sample, and univariable and bivariable statistics (e.g., chi-square tests) to test for differences between groups.
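For readers who wish to reproduce this style of bivariable analysis, the sketch below runs a chi-square test of independence on a contingency table of respondent characteristics. The row marginals are modeled on the age distribution reported in the Results, but the split across columns is an assumption for illustration only, not the study data.

```python
# Minimal sketch of the bivariable (chi-square) analysis described above.
# Cell counts are illustrative assumptions, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: age group (rows) x PCP status (columns).
# Rows: 30-50, 51-64, 65-90; columns: PCP, non-PCP.
table = np.array([
    [250, 540],   # aged 30-50
    [200, 510],   # aged 51-64
    [ 70, 222],   # aged 65-90
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")

# Column percentages (summary statistics) to accompany the test.
col_pct = table / table.sum(axis=0) * 100
print(np.round(col_pct, 1))
```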

RESULTS

Characteristics of Respondents and Non-respondents

Among 4197 physicians licensed in Rhode Island, the RIDOH received 1792 responses, for a response rate of 42.7%. Nearly half of respondents (44%) were aged 30–50 years, 40% were aged 51–64 years, and 16% were aged 65 years or older (Table 1). Two-thirds (68%) practiced in an outpatient setting, and nearly 60% practiced in a group of fewer than 10 physicians. Approximately one-third (29%) of respondents reported that they practiced primary care. Respondents were older than non-respondents (44% of respondents vs. 52% of non-respondents were aged 30–50 years, p < 0.001) (Table 1). A greater percentage of respondents than non-respondents were pediatricians (9% vs. 6%, p < 0.001) and psychiatrists (7% vs. 4%, p = 0.001).

Table 1 Characteristics of Physician-Respondents vs. Non-respondents

Perceived Accuracy of Reporting of Care Quality Across Website Types

Most physicians (83%) had heard of commercial physician rating sites, but most were unaware of the existence of Medicare public reporting sites (54% unaware) and non-profit quality websites (64% unaware). The majority of responding physicians felt the depiction of care quality on most website types was inaccurate (Fig. 1). This varied by website type: 43% reported that commercial physician rating websites were “not at all accurate,” compared with 18% for health system websites and 15% for individual practice sites. In contrast, 39% reported that commercial physician rating websites were “slightly or somewhat accurate,” while 51% reported that health system websites were “slightly or somewhat accurate.” Few physicians thought that reporting websites were “very accurate”: only 1% of physicians reported that commercial physician rating sites were “very accurate,” and 2% reported that Medicare public reporting websites were “very accurate.”

Figure 1 Physician attitudes towards websites that report on quality measures, patient experience data, and patient reviews.

Perceived “Helpfulness” of Types of Information Presented

Physicians felt that patients choosing a new physician would be most helped by information on board certification (80%), clinical interests (76%), and languages spoken (66%) (Fig. 2). Notably, 65% did not select performance-based quality measures as helpful information for choosing a physician, and 66% did not select reviews from other patients. PCPs and specialists differed in their beliefs about which types of information would be helpful: more specialists reported that clinical interests (PCPs = 72%, specialists = 78%, p = 0.01) and hospital affiliation (PCPs = 66%, specialists = 73%, p = 0.004) would be helpful, while more PCPs reported that languages spoken (PCPs = 70%, specialists = 64%, p = 0.05), gender (PCPs = 47%, specialists = 34%, p < 0.001), use of electronic health records (PCPs = 40%, specialists = 27%, p < 0.001), and physician age (PCPs = 31%, specialists = 26%, p = 0.034) would be helpful to patients looking for a physician.

Figure 2 Physicians’ perception of helpfulness of various types of information for patients choosing a physician.

DISCUSSION

In a statewide survey of licensed physicians, we found that most physicians believed that websites providing data about quality or experience of care are not accurate. This may stem from doubts about the validity of the data (e.g., most had heard of commercial physician rating sites and found the information inaccurate) but may also be driven by lack of knowledge about the existence and content of these websites. For example, most physicians were not aware of longstanding public mechanisms for reporting care quality, such as Medicare’s Hospital Compare or Physician Compare websites. Survey respondents overwhelmingly reported that information about “board certification” and “insurance accepted” would be helpful when choosing a physician; in contrast, only about one-third of physicians reported that performance metrics or ratings and reviews from other patients would be helpful. PCPs and specialists differed in the information that they viewed as “helpful” for patients.

Our study is the most recent in a long line of descriptions of physicians’ attitudes towards public reporting of data about health care quality and patient experience. The earliest report on the topic, a survey conducted in 1986, queried hospital leaders on their opinions about the publication (by the Health Care Financing Administration) of risk-adjusted hospital mortality data.21 That publication described widespread skepticism about releasing such data, with 70% of health care leaders rating its usefulness to hospitals as “poor.” A 2014 follow-up reported that health system leaders have, over time, shown increased faith in the validity of such data and in its contribution to improvement efforts (more than 70% of respondents to that survey described public reporting as stimulating improvement efforts).22,23 However, neither of these studies focused on practicing physicians. One decade-old qualitative study of a mixed sample of primary care physicians and subspecialists reported that physicians had concerns about the rigor and methodology of publicly reported data.24 Another (also decade-old) survey of general internists (Casalino et al.) reported that 45% supported public reporting of medical group performance and 32% supported reporting of individual physician performance.16 While we did not ask the same questions as that survey, the fact that the vast majority of respondents in our survey did not feel that public reporting websites are accurate suggests that support for public reporting among currently practicing physicians is lower than previously reported. This may reflect growing frustration with online reporting of quality and experience data in a rapidly changing landscape, marked by recent increases in patient-generated reviews8,11 and the emergence of a new phenomenon in which hospitals and health systems publish physician-specific patient experience data and patient comments on their own websites.25 However, this finding may also merely reflect that our sample differs from previously surveyed populations.

Physicians’ skepticism towards commercial physician rating sites has also been reported previously. Holliday et al., in a cross-sectional survey of 828 physicians within a single accountable care organization, reported that (similar to our findings) only 36% of physicians “somewhat” or “strongly” agreed that commercial rating websites were accurate, and 53% “somewhat” or “strongly” agreed that numerical data on health system sites were accurate.16 In contrast to that work, our study examined all physicians in a single state (rather than physicians within one health system in a single urban area), developed its questions in collaboration with a large multi-stakeholder group, and asked about other site types in addition to health system and commercial rating websites. Furthermore, Holliday et al. did not stratify responses by PCPs vs. specialists.

Review of the literature further reveals that patients’ desire for data about physician quality and patient experience, and their belief in the accuracy of such data, often conflict with physicians’ beliefs and preferences. One study, by Fernandez et al., reported that patients were significantly more likely than physicians to report that mortality data (in this case, for percutaneous coronary intervention) can provide accurate information about physician quality and can be useful in guiding physician selection.26 The types of information that patients and physicians find useful may differ as well. When presented with options for public reporting of data, most physicians prefer that at least some numeric data be included when quality data are presented publicly.27,28 In contrast, efforts to increase patients’ use of publicly reported quantitative quality metrics (e.g., process measures and results from patient experience surveys) have, for the most part, failed to demonstrate increases in uptake.7,29 And, when given the option to read narratives, patients prefer them over quantitative data.29,30 This preference may be due, in part, to difficulty understanding numeric data such as quantitative physician “report cards,”31–33 and it is not without its downside: Schlesinger et al. reported that when narratives were presented as part of the report card, patients chose physicians with lower scores on other quality metrics.29,30

There are almost no published data describing physician beliefs about what information is helpful for patients who are choosing a physician. We found that physicians were far more likely to report as helpful the information that is already ubiquitous online (e.g., data elements such as “board certification,” “insurance accepted,” and “clinical interests” that are already listed on commercial physician rating websites and on licensing board websites in most states). In contrast, quality metrics and patient-generated reviews, which are available online in some cases but not others (and may be more difficult to find), were much less likely to be reported as “helpful” to patients choosing a physician. It is notable that the items physicians less often reported as helpful (e.g., reviews, narratives, and quality metrics) also tended to be further from physicians’ control.

This study has several limitations. First, the survey was not anonymous and was administered by the state’s health department for public reporting purposes, which may have affected how physicians responded. Because the questionnaire was administered electronically, physicians who were more comfortable with technology or working online may have been more likely to answer it. Respondents were informed that the only physician-level data reported on the RIDOH’s website for 2017 were (1) whether the physician used an EHR in the prior year; (2) whether the physician used e-prescribing in the prior year; and (3) use of the EHR for purposes of patient engagement. Second, while the sample size was large and the response rate was high for a physician survey offering no incentives for participation, our response rate of just above 40% may affect generalizability. We also noted some differences in the characteristics of respondents and non-respondents. In addition, we were unable to track some non-respondents or send them reminders: only physicians with an email address on file with the department of health’s licensure division received a link via email (all others received a link via paper letter), and because SurveyMonkey tracks non-response only for email recipients, these were the only non-respondents to whom we could send reminders. Generalizability may be further limited by the fact that the survey was restricted to physicians in a single state.

In conclusion, many physicians are unaware of mechanisms for publicly reporting quality data, and most doubt the accuracy of information about physicians that is available online. Nearly two-thirds expressed skepticism about the usefulness of information that patients, in prior studies, have reported to be helpful when choosing a physician. This disconnect suggests a need to identify methods for reporting quality and experience data that are acceptable to both patients and physicians.