Discussant
Dr. Thomas J. Watson (Rochester, NY): The issue of quality assessment, and how the data might be utilized by patients, payers, and regulatory agencies for directing care, as well as by hospitals for targeting their improvement initiatives, is certainly gaining a lot of attention among surgeons. Yet as the authors so nicely demonstrate, the manner in which quality outliers are identified varies widely, with a lack of uniformity in methodology and cut-off criteria. We are all quite indebted to the authors for bringing these inconsistencies into the light.
The manuscript is likely to fuel a significant debate regarding which methods and boundaries are appropriate for different purposes. The results of such a debate could have significant impact on institutions that fall just above or just below established thresholds.
I have two questions for the authors.
Number one, is a certain methodology more suitable than others based upon the width or standard deviation of the outcomes’ distribution? As an example, ranking hospitals in quintiles may not make sense when the outcomes are clustered closely together. Perhaps setting a minimum threshold would be more appropriate in such a circumstance.
Number two, if you were appointed health care czar today, which methodology and cut-offs would you choose?
Closing Discussant
Dr. Karl Y. Bilimoria: I think that the method selected obviously depends on the measure. And certainly, if it's something like beta blocker after MI, where everybody is at 95% or above, the range is going to be narrow. So setting different criteria for that, a sort of basement threshold, would be better.
For the vast majority of measures that we see, which are like this one, with wide variation, I think it depends entirely upon the intent: whether it's for a quality improvement initiative or whether it's to be publicly disseminated with referral and reimbursement consequences.
Similarly, it would depend what I was using the measure for. But for NSQIP, I favor using quintiles or quartiles.
Discussant
Dr. Keith D. Lillemoe (Indianapolis, IN): I'm not going to make you the health care czar, but I am going to make you the chair of a department of surgery. I get these kinds of numbers, and they are not made up. What would you recommend that I, your chair, Dr. Soper, or any other surgical chair do with these data to institute quality improvement? This isn't so much about persecuting the poor performers; it's about trying to lift up the quality.
Regardless of the metric that you look at, we are all going to have some underperformers or outliers. What is the first step in instituting quality improvement?
Closing Discussant
Dr. Karl Y. Bilimoria: I think the first step would be bringing it to light: providing people with their data and making sure it is high-quality data. We have a lack of that right now. Although you may get some reports, providing detailed, high-quality data back to the individual performers is something that has been lacking in general.
Also, it's not about the absolute number or where you rank. It's about simply showing which half of the group you are in. And if you are in the lower half, that's something to act on.
Finally, actually demonstrating performance improvement, or at least some activity toward improving performance, is needed. For some of these measures, the numbers are very small, so demonstrating absolute improvement in outcomes would be difficult. But process measures are more absolute, and in those circumstances we can perhaps improve there.
Discussant
Dr. Sharon Weber (Madison, WI): I find this whole concept a bit disturbing in this era of public reporting of outcomes. And I would be even more disturbed if the hospitals identified as low outliers changed when different methodologies were applied. Did you evaluate the specific hospitals that were identified at each end of the scale, and whether they changed position when different methodologies were applied, especially the low outliers?
Closing Discussant
Dr. Karl Y. Bilimoria: For the most part, the really low-performing hospitals are the same across most of the models. When you get to the better-performing of the low performers, there is some variation in the nature of the hospitals, so some do flip in and out of being an outlier.