Markov decision process model for patient admission decision at an emergency department under a surge demand

Article

DOI: 10.1007/s10696-017-9276-8

Cite this article as:
Lee, HR. & Lee, T. Flex Serv Manuf J (2017). doi:10.1007/s10696-017-9276-8

Abstract

We study an admission control problem for patients arriving at an emergency department in the aftermath of a mass casualty incident. A finite-horizon Markov decision process (MDP) model is formulated to determine patient admission decisions. In particular, our model considers time-dependent patient arrivals and a time-dependent reward function. We also impose a policy restriction that immediate patients must be admitted as long as beds are available. The MDP model has a continuous state space, and we solve it numerically using a state discretization technique. Structural properties of an optimal policy are reviewed, and the structures observed in the numerical solutions are explained accordingly. Experimental results with virtual patient arrival scenarios demonstrate the performance and advantages of the optimal policies obtained from the MDP model.
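The approach described in the abstract can be illustrated with a minimal sketch: backward induction on a finite-horizon admission MDP whose (here, already discretized) state is the number of occupied beds, with time-dependent arrival probabilities and rewards. All parameter values, the single patient class, and the simple discharge dynamics below are illustrative assumptions, not the paper's calibration; in particular, the immediate-patient admission restriction and the continuous state space of the actual model are omitted.

```python
# Hypothetical parameters (illustrative only, not from the paper).
T = 12               # number of decision epochs
B = 10               # bed capacity; state s = occupied beds, 0..B
p_arrival = [0.9 - 0.05 * t for t in range(T)]   # surge tapering off over time
p_discharge = 0.2                                # an occupied bed frees up w.p. 0.2
reward = [1.0 - 0.04 * t for t in range(T)]      # time-dependent admission reward

# V[t][s]: optimal expected reward-to-go; policy[t][s]: 1 = admit, 0 = divert.
V = [[0.0] * (B + 1) for _ in range(T + 1)]
policy = [[0] * (B + 1) for _ in range(T)]

for t in range(T - 1, -1, -1):
    for s in range(B + 1):
        def cont(n):
            # Expected next-epoch value after a possible random discharge.
            n = min(n, B)
            return (p_discharge * V[t + 1][max(n - 1, 0)]
                    + (1 - p_discharge) * V[t + 1][n])

        v_divert = cont(s)           # arrival turned away: occupancy unchanged
        if s < B:
            v_admit = (p_arrival[t] * (reward[t] + cont(s + 1))
                       + (1 - p_arrival[t]) * cont(s))
            if v_admit >= v_divert:
                V[t][s], policy[t][s] = v_admit, 1
                continue
        V[t][s], policy[t][s] = v_divert, 0
```

Reading off `policy[t][s]` gives a threshold-style rule in occupancy for each epoch, which is the kind of structural property the paper examines in its numerical solutions; the state discretization technique mentioned in the abstract would replace the integer bed count here with a grid over the continuous state.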

Keywords

Disaster response · Emergency department · Admission control · Mass casualty incident · Markov decision process · Optimal policy

Funding information

Funder Name: National Emergency Management Agency of Korea
Grant Number: nema-md-2013-36

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Industrial and Systems Engineering, KAIST, Daejeon, Republic of Korea
