The impact of orthopaedic surgery is best measured using endpoints that matter to patients. Validated outcomes tools allow us to evaluate improvements in pain intensity, magnitude of disability, and other results that are important to patients. But metrics purporting to score patient satisfaction with the results of surgery—prominent both in the media [11] and in scientific publications of late [7, 14]—can be influenced by many factors that vary so wildly from patient to patient that trying to measure this parameter is unlikely to produce anything meaningful. And the act of quantifying “satisfaction” will result in findings that are tempting to quote, but risk misleading clinicians, policymakers, and patients.

Consider a patient who can walk two miles on uneven terrain after a knee replacement, at which point the knee will ache and she will need to rest. She might be quite satisfied with that result if she had been limited to a few blocks on level ground before knee surgery, or severely dissatisfied if she had summited Mt. Kilimanjaro the month before her elective arthroplasty. Using a single metric to summarize a patient’s sense of satisfaction after a surgical intervention at best seems insufficiently granular, and at worst, superficial. A procedure might leave a patient with little pain during typical daily activities (generally satisfying), but with important limitations in recreational activities or sports (generally dissatisfying). Even within a single domain such as pain or function, a single satisfaction metric may not capture the depth or nuance of a patient’s perceptions about a surgical result.

One might reasonably believe that the same limitations apply to any subjective patient-reported outcome measure. We believe this is not the case, at least not nearly to the same degree. Hundreds of studies have shown that numerous patient-reported outcomes tools reproducibly and reliably represent patients’ symptoms and function. But issues related to preoperative expectations, approaches to patient counseling (including promises or suggestions made by the surgeon), surgical indications, occupational and recreational demands, and premorbid levels of disability can cause the same surgical outcome to produce incomparably different levels of satisfaction. In light of this, any tool that proposes to measure satisfaction appears to face an insurmountable face-validity problem.

When psychiatric distress is added to the picture, the image muddies further. Symptoms of anxiety and depression adversely affect outcomes scores in patients whose physical health status is objectively no worse [12, 15]. While those symptoms can confound the use of otherwise valid patient-reported outcomes tools, they render the measurement of satisfaction with treatment outcomes all but impossible. Anxiety and depression appear to be major and independent drivers of dissatisfaction with surgical results [4], as are differences in provider empathy [13]; in one study, empathy accounted for so large a proportion of the satisfaction measured that the very construct of satisfaction appeared little different from a patient’s feeling that the doctor cared about him or her as a person.

But we believe it is important to distinguish between satisfaction with the results of an intervention—which, as noted, we do not believe can be reliably or meaningfully measured—and satisfaction with the process of receiving care, which numerous entities already measure. Common process-based satisfaction questionnaires include tools like the Consumer Assessment of Health Plans Study [9] and the surveys conducted by Press Ganey [2]. If you work in the United States, your patients’ levels of satisfaction with their healthcare interactions likely are being tracked and publicly reported using those instruments, and reimbursement increasingly will be tied to your scores [3]. In this regard, healthcare is being treated like a commodity or service, and just as one can measure communication, responsiveness, and cleanliness in service-sector institutions, these things can be measured in hospitals and doctors’ offices.

Importantly, such process-based metrics do not consistently correlate with care quality, effectiveness, or validated patient-reported outcomes tools [8]. Even in studies that show a correlation between satisfaction and some care-quality metrics, satisfaction does not correlate with a number of important endpoints including complications and readmissions [16]. Some of the observed variation in satisfaction with care processes may be a function of psychiatric distress influencing patients’ perceptions [1]. We need further studies on this important topic, along with more-consistent measurement of the psychosocial aspects of illness whenever patient-reported outcomes tools of any sort are used, and more-refined tools that consider the outcomes of interest alongside the key psychological factors that can influence these outcomes.

Interestingly, it appears that by giving patients what they want, and thereby earning high marks for satisfaction with care processes, we may not be giving them what they need. In one important study using a frequently employed process-based satisfaction scale, the most satisfied patients were those who received the most prescriptions, incurred the greatest healthcare expenses, and most often received inpatient care. They also were more likely to die [5]. Although others have called that analysis into question [6], it seems clear enough that patients do not always know what is best for them. Some patients with colds seek antibiotics, not everyone who asks for opioids should receive them, and the first diagnostic maneuver for most patients with low-back pain is not an MRI. But when surgeons do not fulfill patients’ desires in these or other areas, particularly when we do not explain our rationales with great sensitivity, patients may express dissatisfaction, and patient-satisfaction scores may go down. Despite that risk, we believe that measuring satisfaction with care processes (though not, as noted, with treatment outcomes) is reasonable. Communicating the reasons for our decisions—particularly when they are at odds with what a patient believes (s)he needs—is foundational to good care, and doing so in a sensitive way is a key professional duty. Incentivizing these conversations by measuring the right endpoints and paying for the right results [10] may drive empathy in through the back door, and so may represent a good use of process-based satisfaction tools. Certainly, evaluating these incentives deserves further thoughtful inquiry.

In short, there will be no getting away from the public reporting of process-based satisfaction endpoints, and it seems likely that the incentives being crafted around these scores are here to stay. By contrast, substantial—perhaps insurmountable—face-validity issues hamper studies that purport to measure patients’ satisfaction levels with the outcomes of treatment. We will look skeptically at any studies that report such measurements, and we suggest that readers do the same.