In a statewide survey of licensed physicians, we found that most physicians believed that websites providing data about quality or experience of care are not accurate. This may stem from doubts about the validity of the data (e.g., most had heard of commercial physician rating sites and found their information inaccurate) but may also be driven by lack of knowledge about the existence and content of these websites. For example, most physicians were not aware of longstanding public mechanisms for reporting care quality, such as Medicare’s Hospital Compare or Physician Compare websites. Survey respondents reported overwhelmingly that information about “board certification” and “insurance accepted” would be helpful when choosing a physician; in contrast, only one-third of physicians reported that performance metrics or ratings and reviews from other patients would be helpful. PCPs and specialists also differed in the information that they viewed as “helpful” for patients.
Our study is only the most recent description of physicians’ attitudes toward public reporting of data about health care quality and patient experience. The earliest report on the topic, a survey conducted in 1986, queried hospital leaders about the publication (by the Health Care Financing Administration) of risk-adjusted hospital mortality data.21 That publication described widespread skepticism about releasing such data, with 70% of health care leaders rating its usefulness to hospitals as “poor.” A 2014 follow-up to this study reported that health system leaders have, over time, shown increased faith in the validity of such data and in its contribution to improvement efforts (more than 70% of respondents to that survey reported that public reporting stimulated improvement efforts).22, 23 However, neither of these studies focused on practicing physicians. One decade-old qualitative study of a mixed sample of primary care physicians and subspecialists reported that physicians had concerns about the rigor and methodology of publicly reported data.24 Another (also decade-old) survey of general internists (Casalino et al.) reported that 45% supported public reporting of medical group performance and 32% supported reporting of individual physician performance.16 While we did not ask the same questions as that survey, the fact that the vast majority of respondents in our survey did not believe that public reporting websites are accurate suggests that support for public reporting among currently practicing physicians is lower than previously reported.
This may reflect growing frustration with online reporting of quality and experience data in a rapidly changing landscape, marked by recent increases in patient-generated reviews8, 11 and the emergence of a new phenomenon in which hospitals and health systems publish physician-specific patient experience data and patient comments on their own websites.25 However, this finding may also simply reflect that our sample differs from previously surveyed populations.
Physicians’ skepticism toward commercial physician rating sites has also been reported previously. Holliday et al., in a cross-sectional survey of 828 physicians within a single accountable care organization, reported that (similar to our findings) only 36% of physicians “somewhat” or “strongly” agreed that commercial rating websites were accurate, and 53% “somewhat” or “strongly” agreed that numerical data on health system sites were accurate.16 In contrast to that work, our study examined all physicians in a single state (rather than physicians within one health system in a single urban area), developed the questions in collaboration with a large multi-stakeholder group, and asked about other site types in addition to health system and commercial rating websites. Furthermore, Holliday et al. did not stratify responses by PCPs vs. specialists.
Review of the literature further reveals that patients’ desire for data about physician quality and patient experience, and their belief in the accuracy of such data, is often in conflict with physicians’ beliefs and preferences on the subject. One study by Fernandez et al. reported that patients were significantly more likely than physicians to report that mortality data (in this case, about percutaneous coronary intervention) can provide accurate information about physician quality and can be useful in guiding physician selection.26 The types of information that patients and physicians find useful may differ as well. When presented with options for public reporting of data, most physicians preferred that at least some numeric data be included when data about quality are presented publicly.27, 28 In contrast, efforts to increase patients’ use of publicly reported quantitative quality metrics (e.g., process measures and results from patient experience surveys) have, for the most part, failed to demonstrate increases in uptake.7, 29 And, when given the option to read narratives, patients prefer them over quantitative data.29, 30 This preference may be due, in part, to difficulty understanding numeric data such as quantitative physician “report cards,”31, 32, 33 and it is not without downsides: Schlessinger et al. reported that when narratives were included in the report card, patients chose physicians with lower scores on other quality metrics.29, 30
There are almost no published data describing which information physicians perceive to be helpful for patients who are choosing a physician. We found that physicians were far more likely to report as helpful information that is already ubiquitous online (e.g., data elements such as “board certification,” “insurance accepted,” and “clinical interests,” which are already listed on commercial physician rating websites and on licensing board websites in most states). In contrast, quality metrics and patient-generated reviews, which are available online in some cases but not others (and may be more difficult to find), were much less likely to be reported as “helpful” to patients choosing a physician. Notably, the items that physicians reported to be less often helpful (e.g., reviews, narratives, and quality metrics) also tended to be further from physicians’ control.
This study has several limitations. First, the survey was not anonymous and was administered by the state’s health department for public reporting purposes, which may have affected how physicians responded. In addition, because the questionnaire was administered electronically, physicians who were more comfortable with technology or working online may have been more likely to answer it. Respondents were informed that the only physician-level data reported on the RIDOH’s website for the year 2017 was (1) whether the physician used an EHR in the prior year; (2) whether the physician used e-prescriptions in the prior year; and (3) use of the EHR for purposes of patient engagement. Second, while the sample size was large and the response rate high for a physician survey that offered no incentives for participation, our response rate of just above 40% may affect generalizability. We also noted some differences in the characteristics of respondents and non-respondents. Moreover, we were unable to track some non-respondents or send them reminders: only physicians with an email address on file with the department of health’s licensure division received a link via email (all others received a link via paper letter), and because SurveyMonkey tracks non-responders via email, these were the only non-respondents to whom we could send a reminder. Finally, generalizability may be further limited by the fact that the survey included physicians in only a single state.
In conclusion, many physicians are unaware of existing mechanisms for public reporting of quality data and doubt the accuracy of information about physicians that is available online. More than two-thirds expressed skepticism about the usefulness of information that patients, in prior studies, have reported to be helpful when choosing a physician. This disconnect suggests a need to identify methods of reporting quality and experience data that are acceptable to both patients and physicians.