Abstract
We address three questions related to public reports of diabetes quality. First, does clinic quality evolve over time? Second, does the quality of reporting clinics converge to a common standard? Third, how persistent are provider quality rankings across time? Since current methods of public reporting rely on historical data, measures of clinic quality are most informative if relative clinic performance is persistent across time. We use data from the Minnesota Community Measurement spanning 2007–2012. We employ seemingly-unrelated regression to measure quality improvement conditional on cohort effects and changes in quality metrics. Basic autoregressive models are used to measure quality persistence. There were striking differences in initial quality across cohorts of clinics, and early-reporting cohorts maintained higher quality in all years. This suggests that consumers can infer, on average, that non-reporting clinics have poorer quality than reporting clinics. Average quality, however, improves slowly in all cohorts, and quality dispersion declines over time both within and across cohorts. Relative clinic quality is highly persistent year-to-year, suggesting that publicly-reported measures can inform consumers in their choice of clinics, even though they represent measured quality for a previous time period. Finally, definition changes in measures can make it difficult to draw appropriate inferences from longitudinal public reports data.
Notes
Physician clinics are defined as entities in a single location that provide primary or specialty ambulatory care.
Minnesota Community Measurement 2012 Health Care Quality Report.
Minnesota Statutes 62U.02.
More specifically, \(\hat{q}_{ijt}^{*}=\hat{\alpha}+\hat{\beta}_1 t+\hat{\beta}_2 t^{2}+Cohort_{ij}\hat{\delta}+t \times Cohort_{ij}\hat{\lambda}\). This is quality after correcting for definition changes.
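The trend model in this note can be sketched in code. The following is a minimal illustration (not the authors' code): predicted quality is a quadratic time trend with cohort effects and cohort-by-time interactions, estimated with standard errors clustered on clinic to account for repeated observations, as described in the notes. The simulated data, variable names, and coefficient values are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clinics, n_years = 60, 6                                   # e.g., 2007-2012
clinic = np.repeat(np.arange(n_clinics), n_years)            # clinic id
t = np.tile(np.arange(n_years), n_clinics).astype(float)     # year index
cohort = np.repeat(rng.integers(0, 3, n_clinics), n_years)   # reporting cohort
clinic_fx = np.repeat(rng.normal(0, 2, n_clinics), n_years)  # clinic-level shock

# Simulated quality score with a quadratic trend, cohort shifts, and
# cohort-specific slopes (all parameter values are illustrative).
q = (40 + 1.5 * t - 0.05 * t**2 - 3 * cohort + 0.2 * t * cohort
     + clinic_fx + rng.normal(0, 1, n_clinics * n_years))

df = pd.DataFrame({"q": q, "t": t, "cohort": cohort, "clinic": clinic})

# q_hat = alpha + beta1*t + beta2*t^2 + Cohort*delta + t*Cohort*lambda
model = smf.ols("q ~ t + I(t**2) + C(cohort) + t:C(cohort)", data=df)

# Cluster standard errors on clinic: repeated observations per clinic.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["clinic"]})
print(fit.summary())
```

The clustered covariance keeps the point estimates identical to plain OLS but widens the standard errors to reflect within-clinic correlation across years.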
In this case, quality improvement could result from selecting low-risk or compliant patients, improved quality documentation, or genuine improvements in care quality.
Errors are clustered to account for repeated observations on the same clinics across time.
We test for, and reject, the hypothesis that the definition changes affect the slope of quality improvement.
The average consumer may be best served by examining the most recent year’s data; however, anecdotal evidence suggests that both provider and insurer organizations make use of trend data, so definition changes will be particularly relevant to these users.
References
Christianson, J. B., Volmar, K., Alexander, J., & Scanlon, D. P. (2010). A report card on provider report cards: Current status of the health care transparency movement. Journal of General Internal Medicine, 25(11), 1235–1241.
De Brantes, F., Bailey, E., DiLorenzo, J., & Moses, M. (2013). State report card on transparency of physician quality information. Issue Brief. Health Care Incentives (HCI) Improvement Institute.
Dranove, D., Kessler, D., McClellan, M., & Satterthwaite, M. (2003). Is more information better? The effects of report cards on cardiovascular providers and consumers. Journal of Political Economy, 111(3), 555–588.
Higgins, A., Veselovsky, G., & McKnown, L. (2013). Provider performance measures in private and public programs: Achieving meaningful alignment with flexibility to innovate. Health Affairs, 32(8), 1453–1461.
Jin, G. Z. (2005). Competition and disclosure incentives: An empirical study of HMOs. The RAND Journal of Economics, 36(1), 93–112.
Jung, K. (2010). Incentives for voluntary disclosure of quality information in HMO markets. The Journal of Risk and Insurance, 77(1), 183–210.
Minnesota Community Measurement. (2009). Minnesota HealthScoresSM. Retrieved February 10, 2009, from http://www.mnhealthcare.org/.
Minnesota Community Measurement. (2010). Breaking new ground: 2010 health care quality report. Retrieved November 30, 2010, from http://mncm.org/site/upload/files/HCQRFinal2010.pdf.
O’Connor, P., Bodkin, N., Fradkin, J., Glasgow, R., Greenfield, S., et al. (2011). Consensus report: Diabetes performance measures: Current status and future directions. Clinical Diabetes, 29(3), 102–112.
Panzer, R., Gitomer, R., Greene, W., Webster, P., Landry, K., & Riccobono, C. (2013). Increasing demands for quality measurement. Journal of the American Medical Association, 310(18), 1971–1980.
Roski, J., & Kim, K. (2010). Current efforts of regional and national performance measurement initiatives around the United States. American Journal of Medical Quality, 25(4), 249–254.
Wang, J., Hockenberry, J., Chou, S., & Yang, M. (2011). Do bad report cards have consequences? Impacts of publicly reported provider quality information on the CABG market in Pennsylvania. Journal of Health Economics, 30(2), 392–407.
Wholey, D. R., Christianson, J. B., Sanchez, S. M., Feldman, R., & Peterson, M. (1992). The voluntary dissemination of performance information by health care organizations. In R. M. Scheffler & L. F. Rossiter (Eds.), Advances in health economics and health services research (Vol. 13, pp. 1–26).
Young, G. (2012). Multistakeholder regional collaboratives have been key drivers of public reporting, but now face challenges. Health Affairs, 31(3), 578–584.
Acknowledgments
We gratefully acknowledge the Robert Wood Johnson Foundation for funding this research through the Aligning Forces for Quality Evaluation Project. We also thank Minnesota Community Measurement for the use of their data and comments regarding our research.
Appendix
See Fig. 4.
McCullough, J.S., Crespin, D.J., Abraham, J.M. et al. Public reporting and the evolution of diabetes quality. Int J Health Econ Manag. 15, 127–138 (2015). https://doi.org/10.1007/s10754-015-9167-z