Abstract
In this chapter, we summarize results on both finite and infinite horizon Markov decision processes (MDPs) in a random environment and with an absorbing set. Finite horizon models are used in Chap. 5. In addition, they serve as a starting point for the discussion of sequential utility maximizing decision problems in Chap. 3. The results on infinite horizon models are applied in Chap. 4. Since discounting is generally not considered in capacity control models, we focus on the expected total reward criterion.
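For a finite horizon MDP, the expected total reward criterion (without discounting) can be optimized by backward induction. The following is a minimal sketch with made-up toy data, not taken from the chapter: the function `backward_induction`, the transition matrices `P`, and the rewards `r` are all hypothetical illustrations of the standard dynamic programming recursion.

```python
import numpy as np

def backward_induction(P, r, horizon):
    """Backward induction for a finite-horizon MDP under the
    expected total (undiscounted) reward criterion.

    P: dict action -> (S, S) transition matrix
    r: dict action -> (S,) one-stage reward vector
    Returns the stage-0 value function and a per-stage greedy policy.
    """
    S = next(iter(P.values())).shape[0]
    V = np.zeros(S)                 # terminal value V_N = 0
    policy = []
    for _ in range(horizon):        # stages N-1, ..., 0
        # Q[a, s] = r_a(s) + sum_{s'} p_a(s, s') V(s')
        Q = np.stack([r[a] + P[a] @ V for a in sorted(P)])
        policy.append(np.argmax(Q, axis=0))
        V = Q.max(axis=0)
    policy.reverse()
    return V, policy

# Toy example: state 1 is absorbing with zero reward. In state 0,
# action 0 earns 1 and stays; action 1 earns 2 but absorbs.
P = {0: np.array([[1., 0.], [0., 1.]]),
     1: np.array([[0., 1.], [0., 1.]])}
r = {0: np.array([1., 0.]), 1: np.array([2., 0.])}

V, policy = backward_induction(P, r, horizon=3)
# Optimal plan from state 0: stay twice (reward 1 each), then absorb
# (reward 2), for an expected total reward of 4.
```

The absorbing state in the toy example mirrors the role of the absorbing set mentioned above: once reached, no further reward accrues, so the undiscounted total reward remains finite.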
Copyright information
© 2007 Springer-Verlag Berlin Heidelberg
Cite this chapter
(2007). Markov Decision Processes and the Total Reward Criterion. In: Risk-Averse Capacity Control in Revenue Management. Lecture Notes in Economics and Mathematical Systems, vol 597. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73014-9_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-73013-2
Online ISBN: 978-3-540-73014-9
eBook Packages: Business and Economics; Business and Management (R0)