INTRODUCTION

In efforts to improve clinical outcomes related to patient safety and care quality for hospitalized patients, the Centers for Medicare and Medicaid Services (CMS) have tied hospital reimbursement to performance on 30-day condition-specific readmission rates since 2012. These condition-specific readmission rates have been publicly reported, are often considered markers of hospital quality,1 and more recently have been incorporated into the CMS composite of overall hospital quality (star ratings) to help consumers make informed decisions about where to get their health care.2 Policy makers have continued to expand the catalog of publicly reported readmission metrics to now include an adjusted all-cause 30-day hospital-wide readmissions (HWR) metric. This metric is intended to capture all-condition, unplanned 30-day readmissions at a hospital, thereby providing a broad indication of a hospital’s quality of care.3

To profile differences in hospital performance on the HWR metric, CMS uses a two-step approach commonly used to calculate and report other quality metrics: (1) estimating effects of patient characteristics (e.g., age, clinical diagnoses) on the outcome in question using within-hospital estimates, unconfounded with differences in hospital quality; (2) removing effects of patient characteristics on comparisons between hospitals, ideally leaving only hospital quality effects.4,5,6 Although this approach has been used historically for assessing hospital performance on other quality metrics, such as mortality, models using patient-level factors have performed relatively poorly in predicting readmissions.6 Broader social, environmental, community, and medical factors contribute more to readmission risk than to mortality risk. Consequently, focusing on patient-level factors may provide an incomplete picture of readmission risk at the hospital level, and hospital characteristics such as larger size, academic status, socioeconomic status, and community characteristics are potentially important causes of variations in hospital readmission rates.7,8,9

As an exploratory analysis, we sought to extend evaluations of hospital-level factors to the hospital-wide readmission metric, which is featured prominently in the CMS star rating score. Our overall goal was not to generate an alternate readmission metric to compete with the existing CMS metric; that exercise would require a re-examination of patient-level data combined with hospital-level data. Rather, we sought to characterize how performance on the CMS-calculated HWR metric may be associated with hospital-level factors including size (defined by the total number of beds), safety net status [defined by the disproportionate share hospital (DSH) proportion], academic status [defined by membership in the Association of American Medical Colleges (AAMC)], and National Cancer Institute Comprehensive Cancer Center (NCI-CCC) status. We also sought to examine whether offering particular service lines (e.g., transplant, psychiatric, critical care, emergency, hospice services)—often features of larger academic medical centers that have not been previously examined in the context of readmissions—tracks with hospital performance on the HWR metric.

METHODS

Study Design

We analyzed publicly available national data published by CMS from July 1, 2011, through June 30, 2014, which provide adjusted 30-day hospital-wide readmission rates at the hospital level using cited methodology.10 We linked these data with a cohort of US hospitals provided by the American Hospital Association (AHA), which has been previously used in readmissions research11,12 and contains cross-sectional, hospital-level statistics by year on hospital characteristics, as well as some patient sociodemographic and outcome data. The data set analyzed included 4785 hospitals in total, each observed for 1 year as reported in 2014. The objective of using this cross-sectional data set was to create a predictively valid regression model best fit to analyze 30-day hospital-wide readmission rates, which have already been adjusted for patient factors. This study was approved by the Johns Hopkins Medicine Institutional Review Board.
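The linkage of CMS outcome data to AHA survey records can be sketched as a simple inner join on a shared hospital identifier. The analysis itself was performed in Stata; this Python sketch is illustrative only, and the field names (`provider_id`, `hwr_rate`, `beds`, `aamc`) are hypothetical placeholders for the identifiers in the actual files.

```python
# Illustrative linkage of CMS hospital-wide readmission (HWR) data to AHA
# survey records. All field names are hypothetical placeholders.

def link_datasets(cms_rows, aha_rows, key="provider_id"):
    """Inner-join two lists of record dicts on a shared hospital identifier."""
    aha_by_id = {row[key]: row for row in aha_rows}
    linked = []
    for row in cms_rows:
        match = aha_by_id.get(row[key])
        if match is not None:
            merged = dict(match)
            merged.update(row)  # CMS outcome fields take precedence on overlap
            linked.append(merged)
    return linked

# Toy records: two hospitals appear in both sources; one AHA hospital has
# no CMS outcome and is dropped by the inner join.
cms = [{"provider_id": "010001", "hwr_rate": 15.8},
       {"provider_id": "010005", "hwr_rate": 14.2}]
aha = [{"provider_id": "010001", "beds": 420, "aamc": True},
       {"provider_id": "010005", "beds": 95, "aamc": False},
       {"provider_id": "999999", "beds": 60, "aamc": False}]

linked = link_datasets(cms, aha)
```

Hospitals lacking a match in either source fall out of the join, which is one reason linked cohorts like this one are smaller than the full AHA universe.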

Data Management

Based on prior research, we began with the hypothesis that readmissions are a function of academic status, cancer center status, size, and socioeconomic status.7,13,14,15 We then built upon this model using AHA variables to contribute covariates that fit well within the model while controlling for hypothesized covariates (e.g., variables that may account for an association between readmissions and AAMC status or bed size). All authors participated in a consensus discussion to select the AHA variables hypothesized to be potentially meaningful in predicting readmissions related to the types of services offered, chosen so that they did not provide overlapping information. At the hospital level, cross-sectional data were managed across 1 year, and 36 explanatory variables were explored as potentially linked to 30-day readmission. First, hospital status as an AAMC was explored as a dichotomous covariate. Second, all hospitals and AAMCs were stratified by status as an NCI-CCC. Third, to evaluate socioeconomic status, hospitals were categorized according to the DSH proportion of patients, both continuously and by quartile, according to data pulled from the Healthcare Cost Report Information System (HCRIS).16 The Medicare DSH adjustment applies to hospitals that serve a significantly disproportionate number of low-income patients, defined by the disproportionate patient percentage.17 Safety net hospitals were defined as hospitals in the highest quartile of the DSH proportion.
Fourth, size of the hospital in terms of total bed count from HCRIS was explored both continuously and in terms of categorical ranges of bed totals (1–200; 201–400; 401+), as has been done previously.7,16 Fifth, transplant services were explored in terms of the separate services offered (e.g., heart, lung, liver, kidney, pancreas), both categorically (each service present or absent) and as the cumulative number of services offered (ranging from 0 to 6). Ultimately, this variable was dichotomized as hospitals having four or more transplant services versus three or fewer. Sixth, variables were explored that categorized hospitals according to other services offered: these service-line variables were defined either by the presence versus absence of services (e.g., wound care, hospice, hemodialysis, emergency, and psychiatric services), by the proportions of beds for specific services (e.g., number of intensive care beds out of the total beds greater than the median value of all hospitals), or by the volumes of services (e.g., number of inpatient surgeries as a proportion of total discharges). Proportions of service-line beds and volumes of services were dichotomized by their median values for analyses.
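The recodings described above (bed-size categories, median splits for service-line proportions, and the transplant-service dichotomy) can be sketched as small functions. This is an illustrative Python sketch, not the original Stata code; the thresholds follow the text.

```python
# Illustrative recodings of the hospital-level explanatory variables.

def bed_category(total_beds):
    """Categorical bed ranges used in prior work: 1-200, 201-400, 401+."""
    if total_beds <= 200:
        return "1-200"
    elif total_beds <= 400:
        return "201-400"
    return "401+"

def above_median(values):
    """Dichotomize each value at the sample median (True = above median)."""
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [v > median for v in values]

def many_transplant_services(n_services):
    """Transplant dichotomy from the text: four or more services vs fewer."""
    return n_services >= 4
```

The same `above_median` split applies to both the proportion-of-beds variables (e.g., ICU beds over total beds) and the service-volume variables (e.g., inpatient surgeries over total discharges).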

Analysis Plan

We used a multilevel regression modeling approach to regress 30-day readmission rates on the aforementioned predictors in the AHA data set.18 The multilevel model tested random versus fixed intercepts for hospitals at the state level to account for the possibility of unobserved latent factors that may vary by state.10 In total, all hospitals in the AHA data set were organized at the second level into 55 weighted clusters (i.e., the 50 US states as well as Puerto Rico, Guam, the US Virgin Islands, the Marshall Islands, and American Samoa).

The model was tested under multiple constructs, treating the dependent variable for 30-day readmissions as continuous (linear mixed model), categorical according to readmission quartile (ordinal logistic mixed model), and dichotomous by comparing the fourth quartile to the other three quartiles (logistic mixed model). The purpose of the last model was to examine the effects that distinguish the worst-performing hospitals by 30-day hospital-wide readmission rate.
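Construction of the dichotomous outcome for the logistic mixed model can be sketched as follows. This Python sketch is illustrative; the exact quartile convention applied in Stata may differ.

```python
# Illustrative construction of the dichotomous outcome: an indicator for
# falling in the worst (fourth) quartile of hospital-wide readmission rates.

def worst_quartile_indicator(rates):
    """Return 1 for hospitals above the 75th-percentile readmission rate."""
    s = sorted(rates)
    # Simple nearest-rank 75th-percentile cut; software conventions vary.
    cut = s[int(0.75 * (len(s) - 1))]
    return [1 if r > cut else 0 for r in rates]
```

With four toy hospitals, `worst_quartile_indicator([10, 12, 14, 16])` flags only the highest-rate hospital.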

The multilevel regression models were developed in two iterations using Stata 14 (StataCorp, College Station, TX). The initial model was constructed to test the relationship between 30-day readmission rate and bed totals, DSH, and the separate as well as interaction terms for AAMC and NCI-CCC. Following this initial model, forward stepwise regression was used to expand the multilevel regression model to include other covariates (among those selected by consensus as above) that showed a statistically significant and potentially meaningful impact in predicting hospital-wide readmission rates.19

The final forms of the fixed- and random-intercept models were compared using a chi-square test of the log-likelihood ratios. We also tested several random slopes using independent and exchangeable correlation structures, restricted to covariates with little missing data and therefore adequate power. We assumed these models to be well powered based on a power calculation showing we could detect a 0.091% change in absolute readmission rate between quartiles based on 4785 hospitals clustered into 55 states. Since the goal of the HWR measure is to evaluate hospital performance relative to other hospitals (i.e., to rank them), we used quantile plots to describe changes in readmission rates across categorical changes in AAMC status, NCI-CCC status, bed total, and DSH quartile. To address missing data, we used a multilevel modeling approach in our analysis, which is able to model unbalanced data across clusters.20 Additionally, we conducted cross-validation to test the sensitivity of the regression modeling to missing data by fitting the regression on a 50% sample and then testing the fit of the model on the other 50%. The correlations and fit were not significantly different between the two samples; thus, we found no evidence that missing data altered the model results.
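The chi-square test of log-likelihood ratios used to compare the fixed- and random-intercept models can be illustrated as below. This Python sketch assumes one added variance parameter (df = 1) and uses made-up log-likelihood values; it also ignores the boundary correction sometimes applied when testing a variance component.

```python
import math

# Illustrative likelihood-ratio test between a restricted (fixed-intercept)
# model and a full (random-intercept) model differing by one parameter.

def lrt_pvalue_df1(loglik_restricted, loglik_full):
    """Chi-square LRT with 1 degree of freedom.

    For df = 1, the survival function is P(X > x) = erfc(sqrt(x / 2)).
    """
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, math.erfc(math.sqrt(stat / 2.0))

# Hypothetical log-likelihoods (not values from this study):
stat, p = lrt_pvalue_df1(-5230.4, -5210.1)
```

A small p-value here would favor retaining the state-level random intercept, mirroring the comparison reported in the Results.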

RESULTS

Our analysis of 4785 hospitals showed variability consistent with other studies of patient outcomes in the US. The unweighted average hospital-wide readmission rate was 15.24%. In addition, the average DSH rate was 0.35, the average bed total was 154, and all 240 AAMC and 52 NCI-CCC hospitals were represented (Table 1). Of the 52 hospitals designated as NCI-CCC, 49 were also categorized as AAMC. In general, each of these factors was positively correlated with an increased patient-adjusted 30-day hospital-wide readmission rate compared to hospitals with low DSH, smaller size, or lacking AAMC/NCI-CCC status (Fig. 1a–d).

Table 1 Characteristics of Hospital-Level Factors and Performing in the Worst Hospital-Wide Readmission Quartile
Fig. 1

Association between hospital-level factors and hospital performance* on the CMS hospital-wide readmission measure. *Performance defined as the hospital’s percentile rank on the hospital-wide readmission measure. This was obtained by ranking each of the 4785 US hospitals according to the hospital-wide readmission score and assigning a percentile rank to each institution (relative to all other institutions). CMS = Centers for Medicare & Medicaid Services; AAMC = Association of American Medical Colleges; AHA = American Hospital Association; DSH = Disproportionate Share Hospital; NCI-CCC = National Cancer Institute Comprehensive Cancer Center. Figures showing the unadjusted relationship between hospital-level factors, a DSH quartiles, b bed total categories, c AAMC alone status and AAMC and NCI-CCC status, and d Safety Net alone status and AAMC and Safety Net status (defined as hospitals in the highest DSH quartile), and national performance on the patient-level adjusted HWR metric. Box plots represent the 25th (bottom), 50th (line), and 75th (top) percentiles for each category shown

After consideration of all constructs for 30-day readmissions, a binary logistic model comparing the fourth-quartile 30-day HWR rate to the other three quartiles was selected based on goodness of fit. When regressed against the 30-day HWR metric, hospitals with higher DSH, larger bed totals, and hospitals designated as AAMC or AAMC + NCI-CCC status had statistically significant odds ratios for falling into the worst performing HWR quartile (Table 2). Furthermore, state-level random effects models (models B and C) improved the log-likelihood over a fixed-effects model (model A), suggesting that unobserved state-related factors related to 30-day readmissions may play an important role in hospital performance and were important to control for.

Table 2 Adjusted Association between Hospital-Level Factors and Performing in the Worst HWR Quartile

Compared to the first three DSH quartiles, hospitals in the highest DSH quartile (safety net status) had an adjusted odds ratio (aOR) of 1.99 (95% CI 1.61–2.45) for falling into the worst performing HWR quartile. Likewise, AAMC status appeared to have a similar effect on readmission: aOR 1.95 (95% CI 1.35–2.83). The strongest predictor of being in the worst performing HWR quartile was the combination of AAMC and NCI-CCC status: aOR 5.16 (95% CI 2.58–10.31). Finally, hospitals with more beds appeared to have a slightly increased risk of falling into the worst performing HWR quartile.
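The reported odds ratios and confidence intervals relate to the underlying logistic coefficients by aOR = exp(beta) and 95% CI = exp(beta ± 1.96 × SE). As an arithmetic illustration, the safety-net interval can be recovered by working backward from the published numbers:

```python
import math

# Back-calculating the logit-scale coefficient and standard error implied
# by the published safety-net result: aOR 1.99, 95% CI 1.61-2.45.

beta = math.log(1.99)                                # coefficient on the logit scale
se = (math.log(2.45) - math.log(1.61)) / (2 * 1.96)  # SE implied by the CI width

lower = math.exp(beta - 1.96 * se)
upper = math.exp(beta + 1.96 * se)
# lower and upper round back to the published 1.61 and 2.45
```

This symmetry on the log scale (and asymmetry on the odds-ratio scale) is why the wide AAMC + NCI-CCC interval of 2.58–10.31 is not centered on 5.16.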

Compared to a regression model restricted to DSH, bed total, and AAMC/NCI-CCC, stepwise regression produced a model of significantly better fit that retained DSH, bed count, and AAMC/NCI-CCC. Hospitals with a high proportion of beds with intensive care services, those with emergency departments, and hospitals with four or more transplant services (compared to three or fewer) had higher readmission rates. Hospitals with hospice services and those with a higher proportion of total discharges being surgical cases had lower readmission rates.

Several combinations of other interaction terms were tested, but only AAMC and NCI-CCC offered significant prediction of HWR rates. Despite exploration of random slopes among the multilevel models, none of the predictors tested were significant as random slopes. As such, the independent correlation structure was selected over the exchangeable structure for the final model versions.

DISCUSSION

We found significant effects of institution characteristics on relative hospital performance on CMS’s hospital-wide readmission measure. Specifically, large, academic safety net hospitals and cancer centers fared worse than other types of facilities. Hospitals with more than one of these factors (e.g., large academic safety net hospitals) were especially likely to perform worse on the HWR metric. Additional hospital-level factors associated with worse performance on CMS’s HWR metric included offering transplant services, offering emergency department services, and having a large proportion of total beds devoted to critical care. Conversely, the availability of hospice services and high surgical volumes were associated with better performance on the HWR metric. This is the first evaluation of hospital-level factors on the HWR metric, and, as such, further work is needed to understand the underlying reasons for readmission variability between institutions.

Potential explanations for our findings come in two broad categories. On one hand, the differences in the HWR metric may be true reflections of hospital quality.21,22,23 This would help to explain why hospitals with large surgical volumes and those providing hospice services had lower readmission rates; large surgical volumes are associated with the “volume-outcome” relationship in which a higher volume of patients undergoing a particular procedure at a hospital is associated with better outcomes for those patients,24 and offering palliative care reduces inpatient care utilization and costs for patients approaching the end of life.25,26,27 If indeed variation in hospital quality is the main driver for our findings, greater attention must be placed on the types of care defects that may exist at large, urban tertiary care centers and those that care for high-complexity patient populations (such as those with cancer or organ transplants). Specifically, we should work to identify the reasons these institutions are struggling and devise strategies to help them to improve. On the other hand, the differences in the HWR metric we observed may not represent institutional differences in quality. CMS’s current hospital-wide readmission measure may not adequately account for characteristics of certain types of hospitals, whether because of inadequate adjustment for measurable or unmeasurable patient characteristics or because of factors that cannot be accounted for by adjusting for individual patient characteristics. If so, we should work to improve the models and, in the meantime, avoid using them to define quality or trigger penalties. Of course, a combination of both explanations may be at play.

The hospital-level factors we considered in this study are broad and likely proxies for a number of underlying processes and potential challenges that institutions may face in affecting readmission outcomes, which may be inadequately accounted for in current models. For instance, bed size may be a surrogate for the number of services that are offered at an institution and may relate to referral patterns.28 Practically, if a middle-aged male with minimal comorbidities presents with an ischemic stroke, he may be managed appropriately at a small community hospital. If that same patient would also benefit from neurosurgical intervention, however, he probably would be better served by a hospital that provides all the services he needs; the more complex treatment plans are probably executed most effectively at centers that ‘see it all and do it all’—often large referral centers. Indeed, we found that hospitals that have a higher proportion of beds providing critical care services have higher readmission rates. Patient-level risk models are intended to capture the severity of illness, but they may fail to adequately account for the complexity and clinical needs of the sickest patients. However, future research should examine the reasons for this finding as higher volumes have generally been associated with reduced mortality and improved outcomes.24,29

AAMC hospitals, by their nature, are more likely to be referral centers, so one possible mechanism to explain their higher adjusted readmission rates would be a form of selection bias. Patients who fail treatment at local hospitals and are referred to AAMCs may be more vulnerable to readmission based on failing first-line treatment. Public teaching hospitals may also have missions to treat disadvantaged populations, such as indigent patients,30 and indeed, this study showed that hospitals that were identified as both safety net and AAMC hospitals were more likely to have higher readmission rates. Additionally, cutting edge treatments offered at AAMCs may lead to planned readmissions that are uncaptured by current readmission algorithms. For example, some complex staged procedures that require patients to return to the hospital are classified as unplanned with current algorithms, and novel procedures may not yet be adequately codified. Hospitals that provide these types of services might appear to have excess unplanned readmissions (rather than planned readmissions) based on nuances of coding rather than care defects.

However, we must consider the possibility that some features of academic medical centers might cause higher readmission rates. For instance, market power is higher for large teaching hospitals, and this could allow them to attract patients and succeed financially despite real care defects.31 Additionally, junior physicians may feel less comfortable not sending a patient to the ED when called post-discharge.32 On the other hand, academic medical centers are learning environments where there is likely increased attention to detail, frequent use of current medical literature to guide clinical decision making, and redundancy of supervision that might reduce adverse outcomes.33 Recent data also suggest lower mortality rates at AAMCs.34 Finally, a health care facility’s teaching status on its own does not markedly improve or worsen patient outcomes.35

A number of other hospital-level characteristics were associated with poor performance on the HWR metric. A high proportion of DSH payments reflects a hospital’s responsibility for caring for socioeconomically disadvantaged patients. While some prior studies have suggested similar rates for disease-specific 30-day readmission rates between safety net and non-safety net hospitals,12 other studies have demonstrated that socioeconomic factors can play an important role in determining patient risk of readmission.15 Safety net facilities may be less able than other hospitals to invest in quality improvement,36 and their lack of financial resources may limit access to clinical resources. CMS continues to work with groups such as the National Quality Forum to study the effect of socioeconomic status on readmissions, and it is possible that socioeconomic status will be accounted for in updated federal readmission models.37 Having an emergency department may be a surrogate for offering services to patients with low socioeconomic status.38 NCI-CCC hospitals care for high volumes of complex cancer patients, where readmissions commonly represent expected sequelae of treatment or disease progression.14 Consistent with prior literature, we found that organ transplant patients are highly vulnerable to complications and readmissions.39

Current methodology to standardize the HWR measure compares a hospital’s observed rates to those expected based on an average hospital’s performance caring for a similar mix of patients. A concern with adjusting for hospital-level factors is that doing so may improve the accuracy of the readmission measures while adjusting away the very deficits in quality that hospital comparison efforts seek to reveal.6 To address these policy challenges, different approaches have been proposed, such as comparing readmission rates across peer institutions only,4 defining preventable readmissions as ones linked to some process of hospital care when possible (e.g., improved care coordination),40,41 and incentivizing or rating institutions based on improvement rather than relative performance. Further work is needed to determine whether these strategies help achieve the ultimate goal of improving quality and outcomes.
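The observed-to-expected standardization described above can be sketched in one line of arithmetic: a hospital's standardized rate is its predicted-to-expected ratio multiplied by the national average. The numbers below are illustrative (the 15.24% national average is the unweighted mean reported in this study); the actual CMS computation derives the predicted and expected values from hierarchical patient-level models.

```python
# Illustrative observed-to-expected (O/E) risk standardization.

def standardized_rate(predicted, expected, national_rate):
    """Risk-standardized rate = (predicted / expected) * national average."""
    return (predicted / expected) * national_rate

# A hospital predicted to readmit 18% of its own case mix, where an average
# hospital caring for the same mix would readmit 16%, against a 15.24%
# national average, yields a standardized rate above the national average.
rate = standardized_rate(0.18, 0.16, 15.24)
```

A ratio above 1 (predicted exceeds expected) pushes the standardized rate above the national average; a ratio below 1 pushes it below, which is what the ranking in the HWR measure ultimately reflects.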

The results of this study should be interpreted within the context of its limitations. First, the HWR metric is focused on a Medicare population; different patterns of hospital-level readmission predictors may emerge if data become available for a larger population. Second, we focused on the HWR metric; further research would need to investigate whether these results generalize to the condition-specific metrics (such as pneumonia, congestive heart failure, or stroke). Third, adjusting for SES is challenging, particularly given that we only had access to hospital-level data; our approach of using the disproportionate share hospital measurement as a marker of SES may not have captured residual SES effects that could have been defined by measuring patient-level factors, such as income,42 or more granular regional measures, such as the area deprivation index.43 Fourth, CMS outcomes data were calculated from a time period that was longer and in some cases earlier than the AHA annual survey data, which was a limitation of the available data. Fifth, CMS and AHA data were only provided at the hospital level; future derivations and evaluations of readmission measures that include patient-level data would benefit from incorporating hospital-level factors such as the different service line variables. Sixth, our multilevel model assumes that there are state-level factors that impact readmission rates, but the variability in readmission performance may be better accounted for by other hospital-level factors alone or in combination with patient-level factors that are not collected by CMS. Seventh, our analyses were limited by the services considered in the AHA survey and the nature of survey data collection; accordingly, we may have failed to identify other services or hospital-level factors associated with performance on the HWR metric.

CONCLUSIONS

Current patient-level risk adjustment methodology, intended to allow for ranking hospitals by their relative readmission rates, may not account for certain populations and services that impact readmission rates. Large academic medical centers, hospitals that serve a disproportionate share of patients of low socioeconomic status, and those serving complex patient populations tend to fare far worse on the CMS HWR measure than other types of institutions. While our findings do not offer an alternative to the existing measure, they raise concerns about its interpretation and practical use as a quality measure. Policy makers should reexamine the appropriateness of using the current hospital-wide readmission measure as an indicator of hospital quality.