More than a quarter-century has passed since the Centers for Medicare and Medicaid Services (CMS) published its first “report cards” on hospital mortality data.1 While this initial attempt at public reporting suffered from methodological flaws and was discontinued in 1993,2 it marked the beginning of a vast international experiment. That experiment is ongoing, and its central question remains unanswered: “Can providing publicly available information about health care quality direct patient choice of health care providers and simultaneously drive health care organizations to improve?”1

The subsequent years have revealed that public reporting of quality data does seem to drive physicians and hospitals to attempt to improve care.3 Yet there is still scant evidence that these initiatives have had an impact on where patients choose to receive care.4 It has been hypothesized that this gap reflects a lack of awareness of differences in health care quality, limited literacy and numeracy among patients, and poor design of publicly funded quality reporting websites.5 CMS’s Hospital Compare is an example of a site that includes a plethora of data, including information about processes of care (e.g., the percentage of patients hospitalized for acute myocardial infarction who were treated with beta blockers); risk-adjusted outcomes, such as condition-specific mortality and readmission rates; results of surveys of patient experience of care; and measures of hospitalization costs and case volumes. However, despite the hope and expectation that rational patients would use this information to make informed choices, they rarely do. At the very least, the great experiment of public reporting has, thus far, failed to engage patients.

In contrast, social media use is widespread and growing. In 2014, 58% of US adults used the social networking site Facebook.6 Meanwhile, 51% of US hospitals are now on Twitter, and 99% are covered by Yelp.7 Patients are also looking for health care quality data online: in 2012, 65% of Americans were aware of online ratings of physicians and hospitals, and 23% reported using ratings to choose a physician.8 Notably, these numbers appear to reflect patients’ use of online reviews found on social media sites and commercial rating sites (e.g., Yelp) rather than their use of quality data found on government-funded websites.

That so many consumers are getting health care quality information from social media and commercial sources raises the question of whether the information patients get from these sites is of similar value to what they could get from sites such as Hospital Compare. In this issue of JGIM, Glover et al. examine whether hospitals’ risk-adjusted readmission rates reported on Hospital Compare were associated with the average number of “stars” patients gave hospitals in their Facebook reviews.9 Most notably, the authors observed that hospitals with lower readmission rates (and thus a higher “objective” measure of quality) were more likely to have a higher star rating than hospitals with higher readmission rates. The authors also found that 88% of hospitals had a Facebook page, indicating that this medium is widely used by hospitals for marketing, education, or other purposes. Higher-quality hospitals were also more likely to have a Facebook page and to have more total reviews.

The authors’ main finding, that there is an association between objective quality measures and Facebook reviews, is not entirely new. Prior work has suggested an association between patient ratings and objective measures of hospital quality.10 This study is important, however, in that it examines the most-used social media site in the US and provides us with new information about how US patients and hospitals use Facebook to share information about health care quality. Perhaps most important is the fact that most US hospitals have a Facebook page and are allowing themselves to be reviewed. The authors found that even many low-quality hospitals had hundreds of Facebook reviews.

One potential limitation of this work is that the observed association between higher star ratings and lower readmission rates may be driven, in part, by the number of reviews. Because hospitals with lower readmission rates had more reviews than hospitals with higher readmission rates, and because patient-generated reviews tend to skew toward the positive,11 more reviews will generally mean a higher average star rating. This could explain at least some of the observed association. However, we suggest an additional, more nuanced explanation: hospitals that are active on social media and encourage patients to provide ratings and feedback are the hospitals most concerned with patient-centeredness. Effectively using social media is an active, outward-facing endeavor that requires a commitment to transparency and engagement, attributes that are likely to feed into the cultural and leadership mix of a high-quality organization. Anecdotal evidence from other social media websites such as Twitter has shown that the hospitals most adept at engaging patients on social media are hospitals well known for providing high-quality care (e.g., the Cleveland Clinic has more than a third of a million followers; the Mayo Clinic has more than a million). Of course, the remedy for a hospital with low-quality care is not to start a Facebook page or Twitter feed; rather, use of the medium is a marker of a hospital’s overall willingness to engage with patients, receive feedback, and make improvements as a result.

Despite this study’s important findings, it cannot overcome the general criticisms of patient-generated ratings on social media and commercial sites. The validity of such reviews has been questioned, because reviews are not generated from a representative sample of all patients and because the format may encourage feedback from the most extreme positions (e.g., the very satisfied or the very dissatisfied). Others have expressed concern that narratives “distract” from objective measures of quality presented on public reporting websites.12 Because of these and other concerns, it has been argued that regardless of whether there is an association between reviews and objective measures of quality, these limitations make online reviews an unfit method for comparing hospital or physician quality. Again, we suggest a more nuanced approach. Neither the authors nor we are suggesting that patients’ reviews on social media should be the only way that patients get information about hospital quality. Yet the finding that social media reviews are associated with objective quality measures is somewhat reassuring, because this study reflects a broader trend: patients are voting with their clicks. A majority of US consumers use social media platforms, and they find these platforms accessible and easy to use. Reviews, and particularly narrative reviews, are easier to interpret than numeric data such as process measures or risk-adjusted mortality rates. Taken together, this study and past work suggest that patients are using social media to find health care quality information and that they will increasingly use this information to make decisions about doctors and hospitals.

This study and others like it may also have important implications for the future of publicly reported health care quality data. Governments in the US and UK have invested millions of dollars in collecting and presenting quality data on publicly available websites. To ensure that this investment is worthwhile, we must improve these sites so that they better engage patients. It may be that the data currently presented on these sites fail to resonate with the emotional way that patients make decisions in real life. One suggested solution is to include narrative comments and patient reviews alongside other types of quality data; this might bring more patients to government-sponsored health care quality websites and increase the likelihood that patients will use many types of data to make choices about physicians and hospitals.13

Including patient voices on quality sites would also address public reporting’s least-discussed limitation: the risk of coming across as paternalistic. The idea that the “powers that be” should define and assess quality and decide which data patients should use to make decisions stands in sharp contrast to the arrival of more social approaches to public reporting. These new approaches give the public more freedom both to report their own perceptions of quality and to choose on their own terms. Despite the limitations of allowing any patient to post a review, Glover’s study adds to recent literature suggesting that if government-funded websites do not include patients in the process of rating hospitals and physicians, patients will find other, more accessible ways to do so.