
Examining the discharge practices of surgeons at a large medical center

Published in Health Care Management Science

Abstract

We investigate the discharge practices at a large medical center. Specifically, we look for indications that patients are being discharged sooner because of hospital bed-capacity constraints. Using survival analysis techniques, we find statistically significant evidence to indicate that surgeons adjust their discharge practices to accommodate the surgical schedule and number of available recovery beds. We find higher discharge rates on days when utilization is high. We also find an increased discharge rate on days when more surgeries are scheduled. Our findings suggest that discharge decisions are made with bed-capacity constraints in mind. We discuss possible explanations for this, as well as the medical and managerial implications of our findings.


References

  1. Belien J, Demeulemeester E (2007) Building cyclic master surgery schedules with leveled resulting bed occupancy. Eur J Oper Res 176(2):1185–1204

  2. Blake JT, Dexter F, Donald J (2002) Operating room managers' use of integer programming for assigning block time to surgical groups: a case study. Anesth Analg 94:143–148

  3. Cox DR (1972) Regression models and life tables. J R Stat Soc Ser B 34(2):187–220

  4. Locker TE, Mason SM (2005) Analysis of the distribution of time that patients spend in emergency departments. BMJ 330:1188–1189

  5. McManus ML, Long MC, Cooper A, Mandell J, Berwick D, Pagano M, Litvak E (2003) Variability in surgical case load and access to intensive care services. Anesthesiology 98(6):1491–1496

  6. Millard PH, Christodoulou G, Jagger C, Harrison GW, McClean SI (2001) Modeling hospital and social care bed occupancy and use by elderly people in an English health district. Health Care Manag Sci 4:57–62

  7. Price C (2009) Applications of operations research models to problems in health care. Dissertation, University of Maryland, College Park

  8. Price C, Babineau T, Golden B, Harrington M, Wasil E (2007) Capacity management in a cardiac surgery line. Presented at the INFORMS 2007 Annual Meeting, Seattle, WA

  9. Singer DE, Carr PL, Mulley AG, Thibault GE (1983) Rationing intensive care – physician responses to a resource shortage. N Engl J Med 309(19):1155–1160

  10. Singer JD, Willett JB (1993) It's about time: using discrete-time survival analysis to study duration and the timing of events. J Educ Stat 18:155–195

  11. Strauss MJ, LoGerfo JP, Yeltatzie JA, Temkin N, Hudson LD (1986) Rationing of intensive care unit services: an everyday occurrence. J Am Med Assoc 255(9):1143–1146


Author information


Corresponding author

Correspondence to Bruce Golden.

Appendices

Appendix A: Description of the Singer-Willett Method

Singer and Willett [10] provide a way of handling discrete-time survival data using logistic regression. They extend the traditional Cox proportional hazards model [3] to discrete time, using logistic regression to estimate the hazard on each day as a function of the specified covariates. We describe their methodology below.

Consider a homogeneous population of individuals, each at risk of experiencing a certain event (e.g., being discharged from a hospital). For each individual, the data include a set of covariates, or predictors (for our problem: age, elective status, surgical track, measures of downstream bed utilization, and so on) that can, but need not, vary with time. The data set gives the time until the event occurs, or until the individual drops out of the sample without the event occurring. Time is then divided into discrete intervals, which must be the same for each individual, and a variable is defined for each interval that equals 1 if the event happens during that interval and 0 otherwise. Because the event can happen only once for each individual, each occurrence is inherently conditional on all of the preceding time periods: if the event happens in period N, it cannot have happened in periods N−1, N−2, …, 1. For example, if a patient is discharged after seven days, then the patient was not discharged on days one through six.
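As an illustration (our own sketch, not the authors' code), the expansion of per-patient records into one row per patient per day at risk — the "person-period" format this setup describes — can be written in a few lines of Python; the field names are hypothetical:

```python
# Expand per-patient survival records into person-period rows:
# one row per patient per day at risk, with a 0/1 event indicator.
# Field names (patient_id, los_days, discharged, age) are illustrative.

def to_person_period(records):
    rows = []
    for rec in records:
        for day in range(1, rec["los_days"] + 1):
            is_last_day = day == rec["los_days"]
            rows.append({
                "patient_id": rec["patient_id"],
                "day": day,                  # discrete time period j
                "age": rec["age"],           # time-invariant covariate
                # event = 1 only on the final day, and only if the stay
                # ended in discharge rather than censoring (drop-out)
                "event": int(is_last_day and rec["discharged"]),
            })
    return rows

records = [
    {"patient_id": 1, "los_days": 3, "discharged": True, "age": 64},
    {"patient_id": 2, "los_days": 2, "discharged": False, "age": 57},  # censored
]
rows = to_person_period(records)
print(len(rows))                   # 5 person-period rows (3 + 2)
print([r["event"] for r in rows])  # [0, 0, 1, 0, 0]
```

A censored patient contributes rows with event = 0 for every day observed, which is exactly how the conditional structure described above is encoded.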

A random variable T indicates the day on which the patient is discharged. Because event occurrence is conditional, T is characterized by its conditional probability distribution, i.e., the probability that the event happens on a certain day given that it has not already occurred. This distribution gives the discrete-time hazard for each time period, h_j, the probability that the event occurs in time period j given that it has not already occurred:

$$ h_j = \Pr\left[ T = j \mid T \geqslant j \right]. $$

These discrete-time hazards completely describe the distribution of time to event, and hence the average length of stay. The set of discrete-time hazards is called the hazard function. The central focus of this method is estimating the effects of the independent variables on the hazard function.
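In the absence of covariates, each h_j can be estimated directly as the fraction of individuals still at risk in period j who experience the event in that period. A minimal sketch (our own illustration, with made-up data):

```python
# Estimate the raw discrete-time hazard h_j = Pr[T = j | T >= j]
# from (length_of_stay, event_observed) pairs. Censored individuals
# count toward the risk set but never toward the event count.

def hazard_function(data, max_period):
    hazards = {}
    for j in range(1, max_period + 1):
        at_risk = sum(1 for los, _ in data if los >= j)
        events = sum(1 for los, observed in data if los == j and observed)
        hazards[j] = events / at_risk if at_risk else 0.0
    return hazards

# (length of stay in days, True if discharged / False if censored)
data = [(1, True), (2, True), (2, True), (3, False), (3, True)]
h = hazard_function(data, 3)
print(h[1])  # 1 discharge / 5 at risk = 0.2
print(h[2])  # 2 discharges / 4 at risk = 0.5
print(h[3])  # 1 discharge / 2 at risk = 0.5
```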

Now, consider a set of P predictors, Z_p (p = 1, 2, …, P), each of which describes the members of the population on a specific characteristic. Because some of these values can vary with time, the predictors are recorded for every time period. The values of the P predictors for individual i in time period j are collected in the vector z_ij = [z_1ij, z_2ij, …, z_Pij]. To account for heterogeneity in the population, an index for each member of the population is added to the hazard, now denoted h_ij: the probability that, for individual i, the event occurs on day j, given the individual's vector of predictors and that the event has not yet occurred. The hazard function now becomes

$$ h_{ij} = \Pr\left[ T_i = j \mid T_i \geqslant j,\; Z_{1ij} = z_{1ij}, Z_{2ij} = z_{2ij}, \ldots, Z_{Pij} = z_{Pij} \right]. $$

This equation indicates that the hazard depends on the values of the predictors, but it does not specify the functional form of that dependence. Because the hazards are probabilities, a standard choice is a logistic dependence on the predictors and the time periods. This model represents the log-odds of event occurrence as a linear function of the predictors and time-period dummies. The proposed model to be estimated is

$$ h_{ij} = \frac{1}{1 + e^{-\left[ \left( \alpha_1 D_{1ij} + \alpha_2 D_{2ij} + \cdots + \alpha_J D_{Jij} \right) + \left( \beta_1 Z_{1ij} + \beta_2 Z_{2ij} + \cdots + \beta_P Z_{Pij} \right) \right]}} $$

where [D_1ij, D_2ij, …, D_Jij] are dummy variables indexing the time periods, and J is the last time period observed for anyone in the sample. These dummies are defined identically for every individual, with D_Nij = 1 in time period N and 0 at all other times. The intercept parameters [α_1, α_2, …, α_J] measure the baseline hazard, and the slope parameters [β_1, β_2, …, β_P] capture the effects of the predictors on the hazard function. The regression equation that is estimated is

$$ \operatorname{logit}\left( h_{ij} \right) = \left( \alpha_1 D_{1ij} + \alpha_2 D_{2ij} + \cdots + \alpha_J D_{Jij} \right) + \left( \beta_1 Z_{1ij} + \beta_2 Z_{2ij} + \cdots + \beta_P Z_{Pij} \right) $$

where D is the vector of daily dummy variables, and Z is the vector of predictor variables.
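To make the model concrete: a fitted model predicts the hazard for a given patient-day by selecting the α for that day and adding the covariate effects, then inverting the logit. A small sketch with made-up coefficients (all values below are illustrative, not estimates from the paper):

```python
import math

# Predicted discrete-time hazard for individual i on day j:
# logit(h_ij) = alpha_j + sum_p beta_p * z_pij, so
# h_ij = 1 / (1 + exp(-(alpha_j + beta . z))).
# Coefficient values are made up for illustration.

def predicted_hazard(alpha, beta, day, z):
    logit = alpha[day - 1] + sum(b * x for b, x in zip(beta, z))
    return 1.0 / (1.0 + math.exp(-logit))

alpha = [-3.0, -2.0, -1.0]   # baseline log-odds for days 1..3
beta = [0.8, -0.02]          # e.g., high-utilization flag, age
z = [1, 60]                  # covariates for this patient-day

h2 = predicted_hazard(alpha, beta, 2, z)
# logit = -2.0 + 0.8*1 - 0.02*60 = -2.4
print(round(h2, 4))  # 1/(1 + e^2.4) ≈ 0.0832
```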

This function can be estimated using logistic regression. However, the outcomes are conditional, and therefore autocorrelated, whereas traditional logistic regression assumes that observations are independent and identically distributed. Because our data set violates that assumption, logistic regression would not, in general, be valid. Singer and Willett prove, however, that the likelihood function for the discrete-time hazard process is identical to that of a series of independent Bernoulli trials. Because the person-period records contribute to the likelihood exactly as independent Bernoulli trials would, standard logistic regression can be used to estimate the parameters of the hazard function.
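The key step in that argument can be checked numerically: the survival likelihood of one patient — for a discharge on day 3, (1 − h_1)(1 − h_2)h_3 — equals, term by term, the product of Bernoulli likelihoods of that patient's person-period rows. A sketch with made-up hazard values:

```python
# One patient, discharged on day 3, with (made-up) daily hazards h_1..h_3.
h = [0.1, 0.3, 0.5]

# Survival-analysis likelihood: survive days 1 and 2, event on day 3.
survival_lik = (1 - h[0]) * (1 - h[1]) * h[2]

# Person-period rows for the same patient carry outcomes y = 0, 0, 1.
# Bernoulli likelihood of each row: h^y * (1 - h)^(1 - y).
ys = [0, 0, 1]
bernoulli_lik = 1.0
for hj, y in zip(h, ys):
    bernoulli_lik *= hj**y * (1 - hj)**(1 - y)

print(survival_lik)   # 0.9 * 0.7 * 0.5 = 0.315
print(bernoulli_lik)  # identical, factor by factor
```

Maximizing the second form over all patients is exactly the logistic-regression likelihood on the person-period data set, which is why the standard estimator applies.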

Results from the Singer-Willett model are interpreted in the same way as those of a standard logistic regression model. However, instead of a single intercept, the series of time-period dummies measures the baseline hazard. Each slope coefficient measures the change in the log-odds of event occurrence per unit change in its predictor; exponentiating a coefficient gives the corresponding odds ratio.
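For example, a coefficient on a high-utilization indicator converts to an odds ratio by exponentiation (the coefficient value below is made up, not a result from the paper):

```python
import math

beta_full = 0.25             # made-up coefficient on a "unit is full" dummy
odds_ratio = math.exp(beta_full)
print(round(odds_ratio, 3))  # ≈ 1.284: discharge odds ~28% higher on full days
```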

Appendix B: Discrete Time Survival Analysis Outputs

Table 2 Effect of the threshold definition on the magnitude and significance of the FULL parameter
Table 3 Utilization threshold survival model (AIC = 32619, pseudo R-squared = .3393)
Table 4 Utilization threshold model with surgical group control (AIC = 32619, pseudo R-squared = .3623)
Table 5 Continuous utilization model (AIC = 32619, pseudo R-squared = .3393)
Table 6 Day of week and surgical schedule model results (AIC = 32202, pseudo R-squared = .3496)


About this article

Cite this article

Anderson, D., Price, C., Golden, B. et al. Examining the discharge practices of surgeons at a large medical center. Health Care Manag Sci 14, 338–347 (2011). https://doi.org/10.1007/s10729-011-9167-6
