Three decades ago, the Veterans Administration was mired in criticism over substandard care, and there were widespread calls for “mainstreaming” Veterans into the private sector. In response, Congress converted VA into a cabinet department, and VA undertook a massive transformation that included dozens of aggressive programs intended to improve quality, safety, and efficiency, such as external quality measurement based upon early HEDIS metrics, a national patient safety program, and development of an electronic health record.1 Using reporting tools that were clumsy by today’s standards, VA achieved rapid and dramatic improvement on numerous measures. The Prevention Index, a composite of nine metrics, rose sharply from 34% to 81% between 1995 and 1997. Similarly, the Chronic Disease Care Index, a composite of 14 metrics related to five conditions, rose from 35% compliance to 80% in 3 years and then gradually to 90% over the next 6 years.2 During the ensuing decade, however, most of the metrics exhibited only minor improvements. Similar patterns of rapid improvement were subsequently observed for in-hospital measures of timely and effective care (ORYX), as well as for indicators of safety such as those for central line-associated bloodstream infections, surgical site infections, and thromboembolism prophylaxis, when national and local quality improvement interventions such as checklists, “ICU bundles,” and peer consultation were instituted.

As a leader in informatics and quality measurement, VA was the subject of early comparisons with private sector health systems, which revealed that for processes and conditions specifically addressed by performance measures, VA outscored non-VA hospitals by more than 50% on average.3 Seven years later, a systematic review of 36 studies reported that VA consistently performed better than non-VA comparison groups on accepted processes of care for medical conditions and that risk-adjusted mortality was no different on average.4 Even more recently, analyses using publicly reported data found that VA generally performed better on the AHRQ patient safety indicators and on hospital mortality.5 Differences from private sector institutions have diminished, however, with the broad uptake of electronic health records and quality improvement programs.

In this issue of JGIM, Price and colleagues have reexamined these earlier comparisons and have arrived at similar findings.6 Rather ironically, their study was mandated under legislation passed in response to resurgent concerns about the quality of care in VA, particularly related to access. It is therefore reassuring that VA’s quality metrics yet again remain solid and, in most cases, equivalent or superior to those of the private sector. It is also ironic that, because standard access measures in the community are not publicly reported, the present study fails to cast any light on that topic. The limited data that are available suggest that waiting times and delays for primary and specialty care in the community are similar to those in VA.7 VA’s own published access measures, however, indicate that although patients express high levels of satisfaction with their primary care physicians and with overall outpatient care, an average of only 40 to 60% of patients report that they can always obtain urgent or routine care when they need it, compared with an average of nearly 70% in the private sector.8 However, when adjustments are made for sociodemographic and clinical characteristics, insurance coverage, and geographic region, veterans’ access to care in the VA may actually be better than in the private sector.9

Analyses of performance metrics, such as this study, must be viewed in context. As mentioned, scores tend to plateau after early improvement. Most ORYX measures topped out years ago in both VA and private sector hospitals and may no longer reveal meaningful variations in quality. That is also generally true for HEDIS metrics, although the higher scores that Price’s group observed in VA on a few measures could well translate into meaningful health advantages for Veterans, especially those related to smoking cessation and management of diabetes and hypertension. These differences might be considered even more impressive because private sector hospitals have clear incentives to employ strategies that may inflate performance measures to maximize payments under value-based payment arrangements and to enhance quality ratings from external organizations. Critics have charged, however, that within VA there is also undue emphasis on metrics, given that a sizable proportion of executive bonuses are based on them, and that this, in turn, engenders distortions such as the highly publicized manipulation of data on access to care. Nonetheless, after two decades with little change, it is time to stop making comparisons on these measures.

Apart from any comparisons, the larger question is whether our stable of existing performance metrics provides truly meaningful information about quality and safety. McGlynn and her colleagues have characterized our current national approach to performance measurement as “burdensome, expensive, inaccurate, and indifferent to the complexity of care delivery.”10 Undue preoccupation with metrics encourages adoption of short-term strategies to meet arbitrary targets at the expense of undertaking more fundamental improvements in processes of care and programs to advance population health. Over time, the metrics tend to become dissociated from the underlying strategic objectives. In health care, for example, apparent improvements in outcomes are sometimes merely indicative of investment in coding rather than actual improvements in care processes.11, 12 We all recognize the “red box/green box” syndrome. Such trends are a manifestation of a phenomenon termed surrogation, which is defined as “the tendency for managers to lose sight of the strategic construct(s) that the [performance] measures are intended to represent, and subsequently behave as though the measures are the constructs of interest.”13 This is most likely to occur (and is most harmful to organizations) when strategic objectives are complex and difficult to quantify, for example, improving overall quality of care or patient-centeredness.

Surrogation is intensified when managerial compensation is tied to a performance metric but somewhat mitigated when multiple metrics are incentivized.14 Surrogation is not fueled solely by compensation; it reflects a more fundamental tendency of human beings to rely on simplifying heuristics to make choices in complex situations, even in the face of ample evidence that those heuristics are inaccurate.15 When performance metrics are applied to increasingly complicated processes, the problem worsens and is magnified by cynicism about their accuracy. Some authorities believe that these corrupting influences are a nearly inevitable consequence of rigid performance measurement systems, particularly when they are perceived to be punitive. Examples abound not only in business and finance but also in such diverse fields as public and private education, policing, and the military.16

The passionate embrace of performance measurement by the US health care system has produced an unbridled proliferation of metrics that has been decried by the National Academy of Medicine.17 In response, a federal/industrial complex has evolved to “help” hospitals and providers survive in this jungle. The systems established to cope serve as distractions from true strategic goals and induce information overload that can distract clinicians in ways deleterious to patients, such as overlooking abnormal labs or X-rays.18 Moreover, there is growing evidence that the quest to achieve ever more marginal gains risks further alienating a workforce whose morale is already low.19

There should be no argument that sizable differences in key metrics, such as those observed by Price et al. related to inpatient experience, demand attention. Nonetheless, the myopic focus on current metrics diverts attention from critically important aspects of health care, such as whether diagnoses are correct, whether treatments administered are appropriate, and whether patients’ symptoms and function are improved by treatment. To address such complex questions will require not only a fresh approach to how we measure and value health care but also development of sophisticated, intelligent systems that bring all relevant information and our best modern technology to bear in helping providers make the best clinical decisions and do the right thing, which is what we all strive to do.