Abstract
In this treatise we deal with optimization problems whose objective functions exhibit a sequential structure and are hence amenable to sequential methods. The corresponding field mostly goes by the name Discrete-time Dynamic Programming; other names are Discrete-time Stochastic Control and Multi-stage Optimization. To avoid confusion with programming in the computer-science sense, we speak of Dynamic Optimization.
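The sequential structure mentioned above is what makes backward induction work: the optimal cost-to-go at each stage depends only on the current state and the cost-to-go one stage later. The following minimal sketch illustrates this on a toy finite-horizon problem invented for illustration (the states, costs, and transitions are assumptions, not taken from the book):

```python
# Backward induction for a toy finite-horizon dynamic program.
# Hypothetical problem: over N stages, in state s we may either
# stay (one-stage cost 1) or step up one state (cost s + 1).

N = 3                     # number of stages (assumed for this example)
STATES = range(5)         # state space S = {0, ..., 4}

def actions(s):
    """Admissible actions in state s: stay (0), or move up (1) if possible."""
    return [0, 1] if s < 4 else [0]

def cost(s, a):
    """One-stage cost of taking action a in state s."""
    return 1 if a == 0 else s + 1

def transition(s, a):
    """Deterministic state dynamics: next state."""
    return s + a

# Value iteration backward in time:
#   V[n][s] = min over a of  cost(s, a) + V[n+1][transition(s, a)],
# with terminal values V[N][s] = 0.
V = [{s: 0.0 for s in STATES} for _ in range(N + 1)]
policy = [{} for _ in range(N)]
for n in range(N - 1, -1, -1):
    for s in STATES:
        best_a, best_v = min(
            ((a, cost(s, a) + V[n + 1][transition(s, a)]) for a in actions(s)),
            key=lambda pair: pair[1],
        )
        policy[n][s] = best_a
        V[n][s] = best_v

print(V[0][0])  # optimal total cost starting from state 0 -> 3.0
```

The point of the sketch is the recursion itself: a single backward sweep over stages replaces enumeration of all action sequences, which is the core idea developed throughout the book.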
© 2016 Springer International Publishing AG
Cite this chapter
Hinderer, K., Rieder, U., Stieglitz, M. (2016). Introduction and Organization of the Book. In: Dynamic Optimization. Universitext. Springer, Cham. https://doi.org/10.1007/978-3-319-48814-1_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-48813-4
Online ISBN: 978-3-319-48814-1
eBook Packages: Mathematics and Statistics (R0)