Abstract
Importance
Physician attitudes about websites that publicly report health care quality and experience data have not been recently described.
Objectives
To examine physician attitudes about the accuracy of websites that report information about quality of care and patient experience and to describe physician beliefs about the helpfulness of these data for patients choosing a physician.
Design, Participants, and Measures
The Rhode Island Department of Health (RIDOH) and a multi-stakeholder group developed and piloted two questions that were added to RIDOH’s biennial physician survey of all 4197 practicing physicians in Rhode Island: (1) “How accurate of a picture do you feel that the following types of online resources give about the quality of care that physicians provide?” (with choices) and (2) “Which types of physician-specific information (i.e., not about the practice overall) would be helpful to include in online resources for patients to help them choose a new physician? (Select all that apply).” Responses were stratified by primary care vs. subspecialty clinicians. Summary statistics and chi-squared tests were used to analyze the results.
Results
Among 1792 respondents (response rate 43%), 45% were unaware of RIDOH’s site and 54% were unaware of the Centers for Medicare & Medicaid Services (CMS) quality reporting sites. Only 2% felt that Medicare sites were “very accurate” in depicting physician quality. Most physicians supported public reporting of general information about physicians (e.g., board certification), but just over one-third of physicians felt that performance-based quality measures are “helpful” for patients choosing a physician (and a similar percentage felt that patient reviews are “helpful”).
Conclusions
Physician-respondents were either uninformed or skeptical about public reporting websites. In contrast to prior reports that a majority of patients value some forms of publicly reported data, most physicians do not consider quality metrics and patient-generated reviews helpful for patients who are choosing a physician.
INTRODUCTION
Patients are interested in knowing about the quality of their physicians. Nearly 60% of US respondents report that when choosing a physician, online data and physician reviews are “somewhat” or “very” important.1 To find information about physicians, patients may look to commercial physician rating websites, government-sponsored websites, independently sponsored public reporting organizations, health system sites, insurance company portals, and others.1,2,3,4,5,6,7,8,9,10,11,12,13 Although patient use of websites that publicly report quality metrics remains minimal,14 patients may be more likely to seek out online reviews about physicians or health systems.15 Commercial physician rating websites, for example, are viewed millions of times per month, and there have been rapid increases in the numbers of patient reviews on these sites.8, 11
Few prior studies have reported on physician attitudes towards online reviews or public reporting of quality information. A now decade-old study suggested that general internists’ support for public reporting of individual physician performance was limited,16 and a recent single-center study suggests that few physicians feel that numerical reviews on commercial websites are accurate.15 Because gaps remain in our understanding of physician attitudes and beliefs about the many websites that publicly report health care quality and patient experience data, we aimed to examine physician attitudes about the accuracy of data on these websites and to describe physician beliefs about the helpfulness of different types of physician data for patients who are choosing a physician.
METHODS
Survey Characteristics and Administration
The Rhode Island Department of Health (RIDOH) has administered the Rhode Island Health Information Technology (HIT) Survey to physicians since 2008 as part of the state’s legislatively mandated Healthcare Quality Reporting Program. Survey data are used to measure and report process measures relating to HIT adoption and use, as well as the impact of HIT on physicians’ workflow and job satisfaction. Individual practitioner measures have been publicly reported for physicians since 2009. Physicians are officially “required” (by the RIDOH) to respond to the survey, but there is no penalty for nonresponse. Respondents are asked to self-identify as a primary care physician (PCP), as determined by an answer of “yes” to the question, “Do you provide primary care?” Respondents are also asked their main practice setting (outpatient/office or inpatient/hospital) and practice size (1–3 clinicians, 4–9 clinicians, 10–15 clinicians, or more than 15 clinicians). We obtained other physician characteristics (age, gender, and specialty) and information about non-respondents from the state licensure file.
In 2017, the survey was administered between May 8 and June 12. Hard copy survey notices were mailed to all 4197 physicians licensed in Rhode Island who were in active practice and located in Rhode Island, Connecticut, or Massachusetts. Email notices and up to two email reminders were sent to those who had an email address on file with the RIDOH (N = 2296). There was no incentive offered for survey participation. The RIDOH works with local stakeholders to revise the survey tool prior to each administration in order to collect data to inform state HIT policy and initiatives and to accommodate new or evolving data needs. The RIDOH’s Institutional Review Board (IRB) reviewed this study and deemed it exempt.
Development and Description of Questions
In collaboration with RIDOH and a multi-stakeholder group (of physicians, researchers, and community members), the authors developed two additional questions for the 2017 survey about physicians’ beliefs and attitudes about websites that report data on experience and quality, including patient-generated reviews. We then reviewed the questions with stakeholders and pilot tested them in a convenience sample of local physicians.
The two final questions on the survey asked respondents about their knowledge, attitudes, and beliefs about the accuracy of data found on websites that report data about physician quality and about the types of information that would be helpful for patients choosing a new physician. The second-to-last question on the survey read: “How accurate of a picture do you feel that the following types of online resources give about the quality of care that physicians provide?” We asked participants to respond using a 4-point Likert scale (“Not at all accurate; slightly accurate; somewhat accurate; very accurate”; “Have not heard of this” was an additional answer choice). The online resources physicians were asked to evaluate included: commercial physician rating websites (for-profit sites that are available to the public and allow patients to read and/or write quantitative and narrative reviews about physicians)10, 11; health systems’ websites (hospital and health system websites that compile and report physician-level data, most commonly ratings and comments drawn from the Consumer Assessment of Healthcare Providers and Systems [CAHPS] surveys)13; individual practices’ websites (any website that is created and maintained by an individual practice, which may or may not report data on physician quality or patient experience)3, 17; the Rhode Island Department of Health’s Find a Doctor tool18 (which includes data on board certification, hospital privileges, address information, disciplinary actions, and quality metrics related to use of health information technology); Medicare public reporting websites (which include practice-level performance data but minimal data on individual physician quality or experience metrics)19; non-profit physician quality websites (e.g., organizations that report practice- or physician-level data about quality or experience separately from RIDOH)20; and insurance subscriber portals (insurance company sites that include information on in-network status and, in some cases, quality and experience data). We included links to examples of each type of site within the survey.
The final question on the survey asked respondents: “Which types of physician-specific information (i.e., not about the practice overall) would be helpful to include in online resources for patients to help them choose a new physician? (Select all that apply).” Respondents were provided a list of 14 information types to choose from (board certification, insurance plans accepted, clinical interests, languages spoken, hospital affiliation, residency training location, medical school, sex/gender, history of disciplinary actions, use of electronic health records (EHRs), performance-based quality measures, patient reviews and ratings, and age) as well as an option to provide additional free-text responses.
Analysis
We divided the group into PCPs and non-PCPs because at least one prior study focused on attitudes and beliefs of primary care physicians towards public reporting mechanisms,16 and another study described differences in the number and type of reviews for primary care compared with other specialties.10 We categorized age into three groups (30–50, 51–64, and 65–90). We used summary statistics, including means and percentages, in our analysis, and we used univariable and bivariable statistics (e.g., chi-square tests) to describe the sample and identify preliminary statistical significance. Respondents were stratified by PCP vs. non-PCP based on self-reported data from the survey.
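As a sketch, the bivariable PCP vs. non-PCP comparisons described above correspond to a standard Pearson chi-square test of independence on a 2 × 2 table. The counts below are hypothetical (for illustration only, not the study data), and the implementation uses only the Python standard library:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (no continuity correction) for the 2x2
    table [[a, b], [c, d]]. Returns (statistic, p_value), where the
    p-value uses the 1-degree-of-freedom chi-square survival function,
    P(X > x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    # Sum (observed - expected)^2 / expected over the four cells.
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (obs - expected) ** 2 / expected
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical counts: PCPs vs. non-PCPs selecting "languages spoken"
# as helpful vs. not selecting it.
stat, p = chi2_2x2(364, 156, 814, 458)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

A p-value below the conventional 0.05 threshold would flag a difference between the two strata, which is the sense in which the survey analysis used chi-square tests to "identify preliminary statistical significance."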
RESULTS
Characteristics of Respondents and Non-respondents
Among 4197 physicians licensed in Rhode Island, the RIDOH received a total of 1792 responses, for a response rate of 42.7%. Nearly half (44%) of respondents were aged 30–50 years, an additional 40% were aged 51–64 years, and about 16% were aged 65 or older (Table 1). Two-thirds (68%) practiced in an outpatient setting, with nearly 60% in a practice of fewer than 10 physicians. Approximately one-third (29%) of respondents reported that they practiced primary care. Respondents were older than non-respondents (44% of respondents vs. 52% of non-respondents were aged 30–50 years, p < 0.001) (Table 1). Compared with non-respondents, a greater percentage of respondents were pediatricians (9% vs. 6%, p < 0.001) and psychiatrists (7% vs. 4%, p = 0.001).
Perceived Accuracy of Reporting of Care Quality Across Website Types
Many physicians had heard of commercial physician rating sites (83%), but most were unaware of the existence of Medicare public reporting sites (54%) and nonprofit quality websites (64%). The majority of responding physicians felt the depiction of care quality on most website types was inaccurate (Fig. 1). This varied by website type: 43% reported that commercial physician rating websites were “not at all accurate,” while 18% reported that health system websites were “not at all accurate” and 15% reported that individual practice sites were “not at all accurate.” In contrast, 39% reported that commercial physician rating websites were “slightly or somewhat accurate,” while 51% reported that health system websites were “slightly or somewhat accurate.” Few physicians thought that reporting websites were “very accurate.” Only 1% of physicians reported that commercial physician rating sites were “very accurate” and 2% reported that Medicare public reporting websites were “very accurate.”
Perceived “Helpfulness” of Types of Information Presented
Physicians felt that patients choosing a new physician would be most helped by information on board certification (80%), clinical interests (76%), and languages spoken (66%) (Fig. 2). Importantly, 65% did not select performance-based quality measures as a helpful online resource when choosing a physician and 66% did not indicate that reviews from other patients are helpful. PCP and specialist physicians differed in their beliefs about the type of information that would be helpful, with more specialists reporting that clinical interests (PCPs = 72%, specialists = 78%, p = 0.01) and hospital affiliation (PCPs = 66%, specialists = 73%, p = 0.004) would be helpful and more PCPs reporting that languages spoken (PCPs = 70%, specialists = 64%, p = 0.05), gender (PCPs = 47%, specialists = 34%, p < 0.001), use of electronic health records (PCPs = 40%, specialists = 27%, p < 0.001), and physician age (PCPs = 31%, specialists = 26%, p = 0.034) would be helpful to patients looking for a physician.
DISCUSSION
In a statewide survey of licensed physicians, we found that most physicians believed that websites that provide data about quality or experience of care are not accurate. This may stem from doubts about the validity of the data (e.g., most had heard of commercial physician rating sites and found the information inaccurate) but may also be driven by lack of knowledge about the existence and content of these websites. For example, most physicians were not aware of longstanding public mechanisms for reporting care quality, such as Medicare’s Hospital or Physician Compare websites. Survey respondents reported overwhelmingly that information around “board certification” and “insurance accepted” would be helpful when choosing a physician; in contrast, only one-third of physicians reported that performance metrics or ratings and reviews from other patients would be helpful. PCPs and specialists differed in the information that they viewed as “helpful” for patients.
Our study is only the most recent description of physicians’ attitudes towards public reporting of data about health care quality and patient experience. The earliest report on the topic, a survey conducted in 1986, queried hospital leaders on their opinions about the publication (by the Health Care Financing Administration) of risk-adjusted mortality data for hospitals.21 The publication described widespread skepticism about the practice of releasing such data, with 70% of health care leaders reporting that its usefulness to hospitals was “poor.” A 2014 follow-up to this study reported that health system leaders have shown, over time, increased faith in the validity of such data and in its contribution to improvement efforts (with more than 70% of respondents to that survey describing that public reporting stimulated improvement efforts).22, 23 However, neither of these studies focused on practicing physicians. One decade-old qualitative study of a mixed sample of primary care physicians and subspecialists reported that physicians described concerns with rigor and methodology of publicly reported data.24 Another (also decade-old) survey of general internists (Casalino et al.) reported that 45% supported public reporting of medical group performance and 32% were supportive of reporting individual physician performance.16 While we did not ask the same questions as this survey, the fact that the vast majority of respondents in our survey did not feel that public reporting websites are accurate would suggest that support for public reporting among currently practicing physicians is lower than previously reported. 
This may suggest growing frustration with online reporting of quality and experience data because of the rapidly changing landscape, with recent increases in the presence of patient-generated reviews8, 11 and the emergence of a new phenomenon in which hospitals and health systems have begun to publish physician-specific patient experience data and patient comments on their own websites.25 However, this finding may also merely reflect that this sample is different from previously surveyed populations.
Physicians’ skepticism towards commercial physician rating sites has also been previously reported. Holliday et al., in a cross-sectional survey of 828 physicians within a single accountable care organization, reported that (similar to our findings) only 36% of physicians “somewhat” or “strongly” agreed that commercial rating websites were accurate and 53% “somewhat” or “strongly” agreed that numerical data on health system sites were accurate.15 In contrast to this work, our study examined all physicians in a single state (rather than within one health system located in a single urban area), developed the questions in collaboration with a large multi-stakeholder group, and asked about other site types in addition to health system and commercial rating websites. Furthermore, Holliday et al. did not examine responses stratified by PCPs vs. specialists.
Review of the literature further reveals that patients’ desire for data about physician quality and patient experience, and their belief in the accuracy of such data, are often in conflict with physicians’ beliefs and preferences on the subject. One study by Fernandez et al. reported that patients were significantly more likely than physicians to report that mortality data (in this case, about percutaneous coronary intervention) can provide accurate information about physician quality and can be useful in guiding physician selection.26 The types of information that patients and physicians find useful may differ as well. When presented with options for public reporting of data, most physicians preferred that at least some numeric data be included when data about quality are presented publicly.27, 28 In contrast, efforts to increase patients’ use of publicly reported quantitative quality metrics (e.g., process measures and results from patient experience surveys) have, for the most part, failed to demonstrate increases in uptake.7, 29 And, when given the option to read narratives, patients prefer them over quantitative data.29, 30 This preference may be due, in part, to difficulty understanding numeric data such as quantitative physician “report cards,”31,32,33 and it is not without its downside: Schlesinger et al. reported that when narratives were present as part of the report card, patients chose physicians with lower scores on other quality metrics.29, 30
There is almost no published data describing physician beliefs about what information is perceived to be helpful for patients who are choosing a physician. We found that physicians were far more likely to report that information already ubiquitous online (e.g., data elements that are already listed on commercial physician rating websites and licensing boards in most states such as “board certification,” “insurance accepted,” and “clinical interests”) was helpful to patients. In contrast, quality metrics and patient-generated reviews, which are available online in some cases but not others (and may be more difficult to find), were much less likely to be reported as “helpful” to patients choosing a physician. It is notable that items that physicians reported to be less often helpful (e.g., reviews, narratives, and quality metrics) also tended to be more distal from the realm of physician control.
This study has several limitations. First, the survey was not anonymous and was administered by the state’s health department for public reporting purposes, which may have affected how physicians responded. Because the questionnaire was administered electronically, physicians who were more comfortable with technology or working online might have been more likely to answer it. Respondents were informed that the only physician-level data reported on the RIDOH’s website for 2017 were (1) whether the physician used an EHR in the prior year; (2) whether the physician used e-prescriptions in the prior year; and (3) use of the EHR for purposes of patient engagement. Second, while the sample size was large and the response rate relatively high for a physician survey that offered no incentives for participation, our response rate of just above 40% may affect generalizability. We also noted some differences in the characteristics of respondents and non-respondents. We were unable to track some non-respondents and were unable to send reminders in some cases: only physicians with an email address on file with the department of health’s licensure division received a link via email, while all others received a link via paper letter. Because SurveyMonkey tracks non-respondents via email, these were the only non-respondents to whom we could send reminders. Generalizability may be further limited by the fact that the survey was limited to physicians in a single state.
In conclusion, many physicians are unaware of mechanisms for publicly reporting quality data, and most doubt the accuracy of information about physicians that is available online. More than two-thirds expressed skepticism about the usefulness of information that patients, in prior studies, have reported to be helpful when choosing a physician. This disconnect suggests a need to identify methods for reporting quality and experience data that are acceptable to both patients and physicians.
References
Hanauer DA, Zheng K, Singer DC, Gebremariam A, Davis MM. Public awareness, perception, and use of online physician rating sites. JAMA. 2014;311(7):734–735. doi:https://doi.org/10.1001/jama.2013.283194
Bardach N, Hibbard JH, Dudley RA. Users of Public Reports of Hospital Quality: Who, What, Why, and How?: An Aggregate Analysis of 16 Online Public Reporting Web Sites and Users’ and Experts’ Suggestions for Improvement. Rockville, MD: Agency for Healthcare Research and Quality; 2011.
Christianson JB, Volmar KM, Alexander J, Scanlon DP. A report card on provider report cards: current status of the health care transparency movement. J Gen Intern Med. 2010;25(11):1235–1241. doi:https://doi.org/10.1007/s11606-010-1438-2
Davies AR, Ware JE. Involving consumers in quality of care assessment. Health Aff (Millwood). 1988;7(1):33–48.
Giordano LA, Elliott MN, Goldstein E, Lehrman WG, Spencer PA. Development, Implementation, and Public Reporting of the HCAHPS Survey. Med Care Res Rev. 2010;67(1):27–37. doi:https://doi.org/10.1177/1077558709341065
Glover M, Khalilzadeh O, Choy G, et al. Hospital evaluations by social media: a comparative analysis of Facebook ratings among performance outliers. J Gen Intern Med. 2015;30(10):1440–1446. doi:https://doi.org/10.1007/s11606-015-3417-0
Goff SL, Mazor KM, Pekow PS, et al. Patient Navigators and Parent Use of Quality Data: A Randomized Trial. Pediatrics. 2016;138(4). doi:https://doi.org/10.1542/peds.2016-1140
Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients’ online ratings of their physicians over a 5-year period. J Med Internet Res. 2012;14(1):e38. doi:https://doi.org/10.2196/jmir.2003
Lagu T, Goff SL, Hannon NS, Shatz A, Lindenauer PK. A mixed-methods analysis of patient reviews of hospital care in England: implications for public reporting of health care quality data in the United States. Jt Comm J Qual Patient Saf. 2013;39(1):7–15.
Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients’ evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med. 2010;25(9):942–946. doi:https://doi.org/10.1007/s11606-010-1383-0
Lagu T, Metayer K, Moran M, et al. Website Characteristics and Physician Reviews on Commercial Physician-Rating Websites. JAMA. 2017;317(7):766–768. doi:https://doi.org/10.1001/jama.2016.18553
Lee V. Transparency and Trust - Online Patient Reviews of Physicians. N Engl J Med. 2017;376(3):197–199. doi:https://doi.org/10.1056/NEJMp1610136
Ricciardi BF, Waddell BS, Nodzo SR, et al. Provider-Initiated Patient Satisfaction Reporting Yields Improved Physician Ratings Relative to Online Rating Websites. Orthopedics. 2017;40(5):304–310. doi:https://doi.org/10.3928/01477447-20170810-03
2008 Update on Consumers’ Views of Patient Safety and Quality Information. The Kaiser Family Foundation; 2008. http://www.kff.org/kaiserpolls/posr101508pkg.cfm. Accessed June 18, 2019.
Holliday AM, Kachalia A, Meyer GS, Sequist TD. Physician and Patient Views on Public Physician Rating Websites: A Cross-Sectional Study. J Gen Intern Med. 2017;32(6):626–631. doi:https://doi.org/10.1007/s11606-017-3982-5
Casalino LP, Alexander GC, Jin L, Konetzka RT. General internists’ views on pay-for-performance and public reporting of quality scores: a national survey. Health Aff (Millwood). 2007;26(2):492–499. doi:https://doi.org/10.1377/hlthaff.26.2.492
Mehrotra A, Brannen T, Sinaiko AD. Use patterns of a state health care price transparency web site: what do patients shop for? Inquiry. 2014;51. doi:https://doi.org/10.1177/0046958014561496
Find Healthcare Providers: Department of Health. http://www.health.ri.gov/find/providers/. Accessed June 18, 2019.
Centers for Medicare and Medicaid Services. Hospital Compare. https://www.medicare.gov/hospitalcompare/search.html. Accessed August 21, 2019.
Massachusetts Health Quality Partners, Quality Insights. http://www.mhqp.org/resources-for-patients/. Accessed June 18, 2019.
Berwick DM, Wald DL. Hospital leaders’ opinions of the HCFA mortality data. JAMA. 1990;263(2):247–249.
Lindenauer PK, Lagu T, Ross JS, et al. Attitudes of hospital leaders toward publicly reported measures of health care quality. JAMA Intern Med. 2014;174(12):1904–1911. doi:https://doi.org/10.1001/jamainternmed.2014.5161
Goff SL, Lagu T, Pekow PS, et al. A Qualitative Analysis of Hospital Leaders’ Opinions About Publicly Reported Measures of Health Care Quality. Jt Comm J Qual Patient Saf. 2015;41(4):169–176.
Barr JK, Bernard SL, Sofaer S, Giannotti TE, Lenfestey NF, Miranda DJ. Physicians’ views on public reporting of hospital quality data. Med Care Res Rev. 2008;65(6):655–673. doi:https://doi.org/10.1177/1077558708319734
Lagu T, Norton CM, Russo LM, Priya A, Goff SL, Lindenauer PK. Reporting of Patient Experience Data on Health Systems’ Websites and Commercial Physician-Rating Websites: Mixed-Methods Analysis. J Med Internet Res. 2019;21(3):e12007. doi:https://doi.org/10.2196/12007
Fernandez G, Narins CR, Bruckel J, Ayers B, Ling FS. Patient and Physician Perspectives on Public Reporting of Mortality Ratings for Percutaneous Coronary Intervention in New York State. Circ Cardiovasc Qual Outcomes. 2017;10(9). doi:https://doi.org/10.1161/CIRCOUTCOMES.116.003511
Geraedts M, Hermeling P, de Cruppé W. Communicating quality of care information to physicians: a study of eight presentation formats. Patient Educ Couns. 2012;87(3):375–382. doi:https://doi.org/10.1016/j.pec.2011.11.005
Geraedts M, Hermeling P, Ortwein A, de Cruppé W. Public reporting of hospital quality data: What do referring physicians want to know? Health Policy. 2018;122(11):1177–1182. doi:https://doi.org/10.1016/j.healthpol.2018.09.010
Schlesinger M, Kanouse DE, Rybowski L, Martino SC, Shaller D. Consumer response to patient experience measures in complex information environments. Med Care. 2012;50 Suppl:S56–64. doi:https://doi.org/10.1097/MLR.0b013e31826c84e1
Schlesinger M, Kanouse DE, Martino SC, Shaller D, Rybowski L. Complexity, public reporting, and choice of doctors: a look inside the blackest box of consumer behavior. Med Care Res Rev. 2014;71(5 Suppl):38S–64S. doi:https://doi.org/10.1177/1077558713496321
Hibbard JH. What can we say about the impact of public reporting? Inconsistent execution yields variable results. Ann Intern Med. 2008;148(2):160–161.
Hibbard JH, Greene J, Daniel D. What is quality anyway? Performance reports that clearly communicate to consumers the meaning of quality of care. Med Care Res Rev. 2010;67(3):275–293. doi:https://doi.org/10.1177/1077558709356300
Hibbard JH, Greene J, Sofaer S, Firminger K, Hirsh J. An experiment shows that a well-designed report on costs and quality can help consumers choose high-value health care. Health Aff (Millwood). 2012;31(3):560–568. doi:https://doi.org/10.1377/hlthaff.2011.1168
Acknowledgments
Dr. Lagu is supported by the National Heart, Lung, and Blood Institute of the National Institutes of Health under Award Numbers K01HL114745 and R01 HL139985-01A1. Dr. Lagu has served as a consultant for the Yale Center for Outcomes Research and Evaluation, under contract to the Centers for Medicare & Medicaid Services, for which she has provided clinical and methodological expertise and input on the development, reevaluation, and implementation of hospital outcome and efficiency measures. The views expressed in this manuscript do not necessarily reflect those of the Yale Center for Outcomes Research and Evaluation or the Centers for Medicare & Medicaid Services.
The authors thank Samara Viner-Brown at RIDOH for her stewardship of the Rhode Island HIT survey over the past 10 years.
Funding
This work was funded by RIDOH under the CMS Medicaid Health Information Technology Implementation Advance Planning Document for Health Information Exchange (HIT IAPD-HIE), Rhode Island Executive Office of Health and Human Services.
Ethics declarations
The RIDOH’s Institutional Review Board (IRB) reviewed this study and deemed it exempt.
Conflict of Interest
The authors declare that they do not have a conflict of interest.
Key Points
Question: What are physicians’ knowledge, attitudes, and beliefs about public reporting and physician rating websites?
Findings: In a survey of all physicians in the state of Rhode Island, many were unaware of common public reporting websites and most felt that these sites do not accurately depict the quality of care that physicians provide. Only 35% reported that performance-based quality measures are helpful when choosing a physician and only 34% reported that reviews from other patients are helpful for patients choosing a physician.
Meaning: In contrast to prior reports that a majority of patients feel that some forms of quality data (e.g., reviews) are useful when choosing a physician, most physicians either do not know that these data exist or do not believe that the information is useful for patients.
Cite this article
Lagu, T., Haskell, J., Cooper, E. et al. Physician Beliefs About Online Reporting of Quality and Experience Data. J GEN INTERN MED 34, 2542–2548 (2019). https://doi.org/10.1007/s11606-019-05267-1