Abstract
We consider MDPs with countable state spaces and variable discount factors; the discount factor may depend on both the state and the action. Under minimal assumptions we prove the reward iteration and formulate a structure theorem for MDPs. We also introduce the useful notion of a bounding function.
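To illustrate the setting, the following is a minimal sketch (not the chapter's construction) of value iteration on a finite MDP in which the discount factor `beta(s, a)` varies with the state and the action. All states, actions, rewards, transition probabilities, and discount values here are hypothetical, chosen only for illustration.

```python
def value_iteration(states, actions, r, p, beta, tol=1e-10, max_iter=10_000):
    """Iterate V_{n+1}(s) = max_a [ r(s,a) + beta(s,a) * sum_{s'} p(s'|s,a) V_n(s') ]
    until successive iterates differ by less than tol in the sup norm."""
    V = {s: 0.0 for s in states}
    for _ in range(max_iter):
        V_new = {
            s: max(
                r[s][a] + beta[s][a] * sum(p[s][a][t] * V[t] for t in states)
                for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
    return V

# Hypothetical two-state, two-action MDP with (s, a)-dependent discounting.
states = [0, 1]
actions = ["stay", "move"]
r = {0: {"stay": 1.0, "move": 0.0}, 1: {"stay": 2.0, "move": 0.5}}
beta = {0: {"stay": 0.9, "move": 0.5}, 1: {"stay": 0.8, "move": 0.6}}
p = {
    0: {"stay": {0: 1.0, 1: 0.0}, "move": {0: 0.0, 1: 1.0}},
    1: {"stay": {0: 0.0, 1: 1.0}, "move": {0: 1.0, 1: 0.0}},
}
V = value_iteration(states, actions, r, p, beta)
```

Since all `beta(s, a)` lie strictly below 1, the iteration is a contraction in the sup norm and converges to the unique fixed point of the optimality equation; a bounding function plays the analogous role when rewards are unbounded.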
Copyright information
© 2016 Springer International Publishing AG
Cite this chapter
Hinderer, K., Rieder, U., Stieglitz, M. (2016). Markovian Decision Processes with Discrete Transition Law. In: Dynamic Optimization. Universitext. Springer, Cham. https://doi.org/10.1007/978-3-319-48814-1_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-48813-4
Online ISBN: 978-3-319-48814-1
eBook Packages: Mathematics and Statistics (R0)