Abstract
We consider the game model
introduced in (1.1). The problems we are concerned with in this chapter are those related to the discounted case, which are summarized as follows.
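The abstract mentions the discounted case only in passing. As an illustrative sketch, in standard notation for zero-sum discrete-time Markov games (which may differ in details from the book's model (1.1)), the α-discounted payoff and the value of the game typically take the form:

```latex
% Illustrative, standard notation -- not necessarily the book's own.
% Given strategies \pi (player 1) and \gamma (player 2), stage payoff r,
% and discount factor \alpha \in (0,1), the discounted payoff from state x is
V(x,\pi,\gamma) \;=\; \mathbb{E}_x^{\pi,\gamma}\!\left[\,\sum_{t=0}^{\infty} \alpha^{t}\, r(x_t,a_t,b_t)\right].

% The lower and upper values of the game are
L(x) \;=\; \sup_{\pi}\,\inf_{\gamma}\, V(x,\pi,\gamma),
\qquad
U(x) \;=\; \inf_{\gamma}\,\sup_{\pi}\, V(x,\pi,\gamma),

% and the game is said to have a value V^{*} when they coincide:
L(x) \;=\; U(x) \;=\; V^{*}(x) \quad \text{for all states } x.
```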
Copyright information
© 2020 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Minjárez-Sosa, J.A. (2020). Discounted Optimality Criterion. In: Zero-Sum Discrete-Time Markov Games with Unknown Disturbance Distribution. SpringerBriefs in Probability and Mathematical Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-35720-7_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-35719-1
Online ISBN: 978-3-030-35720-7
eBook Packages: Mathematics and Statistics (R0)