Health Care Management Science, Volume 16, Issue 3, pp 245–257

Comparing health outcomes among hospitals: the experience of the Lombardy Region

Abstract

In recent years, governments and other stakeholders have increasingly used administrative data to measure healthcare outcomes and to build rankings of health care providers. However, the accuracy of such data sources has often been questioned. Starting in 2002, the Lombardy (Italy) regional administration began monitoring the effectiveness of hospital care through administrative databases, using seven outcome measures related to mortality and readmissions. The present study describes the use of benchmarking results on risk-standardized mortality from Lombardy regional hospitals. This use of the data is part of a general program of continuous improvement aimed at better health care services and organizational learning, rather than at penalizing or rewarding hospitals. In particular, hierarchical regression analyses, which account for mortality variation across hospitals, were conducted separately for each of the most relevant clinical disciplines. Overall mortality was the outcome variable, the mix of each hospital's output was captured through Diagnosis Related Group data, and both patient and hospital characteristics were adjusted for. Yearly adjusted mortality rates for each hospital were translated into a reporting tool that shows healthcare managers at a glance, in a user-friendly and non-threatening format, which hospitals are underachieving and which are over-performing. Even though benchmarking on risk-adjusted outcomes tends to elicit contrasting public opinions and diverging policymaking, we show that repeated outcome measurements, together with the development and dissemination of organizational best practices, have promoted the implementation of outcome measures in healthcare management in the Lombardy region and have stimulated the interest and involvement of healthcare stakeholders.
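The hierarchical regression approach described in the abstract can be sketched as a two-level (patients within hospitals) logistic model; the notation and covariates below are illustrative assumptions rather than the authors' exact specification:

\[
\operatorname{logit}\,\Pr(y_{ij}=1) \;=\; \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + \boldsymbol{\gamma}^{\top}\mathbf{z}_{j} + u_j,
\qquad u_j \sim N(0,\sigma_u^2),
\]

where \(y_{ij}\) indicates the death of patient \(i\) treated in hospital \(j\), \(\mathbf{x}_{ij}\) collects patient-level risk adjusters (for example age, sex, comorbidities, and DRG), \(\mathbf{z}_{j}\) collects hospital-level characteristics, and the random intercept \(u_j\) captures residual between-hospital variation in mortality. A hospital's risk-adjusted mortality rate is then derived from its estimated random effect, so hospitals can be compared after accounting for case mix.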

Keywords

Healthcare · Effectiveness · Outcomes · Performance evaluation systems · Multilevel models · DRGs


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Paolo Berta (1)
  • Chiara Seghieri (2)
  • Giorgio Vittadini (1)

  1. Department of Quantitative Methods, CRISP - University of Milan Bicocca, Milan, Italy
  2. Laboratorio Management e Sanità, Istituto di Management, Scuola Superiore Sant'Anna, Pisa, Italy