Introduction and Organization of the Book

Dynamic Optimization

Part of the book series: Universitext (UTX)

Abstract

In this treatise we deal with optimization problems whose objective functions exhibit a sequential structure and hence are amenable to sequential solution methods. The corresponding field mostly goes by the name Discrete-time Dynamic Programming; other names are Discrete-time Stochastic Control and Multi-stage Optimization. To avoid possible confusion with programming in the computer-science sense, we speak of Dynamic Optimization.
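As a minimal illustration of the sequential structure the abstract refers to (a sketch, not an example taken from the book), a finite-horizon deterministic control problem can be solved by backward induction: the value function at stage n is computed from the one at stage n+1 by minimizing over actions. All names and the toy problem below are invented for illustration.

```python
# Illustrative sketch (not from the book): backward induction for a
# finite-horizon deterministic control problem.
# State s, action a; one-stage cost c(s, a); transition T(s, a);
# terminal cost V_N(s).  The recursion
#     V_n(s) = min_a [ c(s, a) + V_{n+1}(T(s, a)) ]
# is evaluated backwards from the horizon N down to stage 0.

def backward_induction(states, actions, cost, transition, terminal, horizon):
    """Return value functions values[0..horizon] and policies f[0..horizon-1]."""
    V = {s: terminal(s) for s in states}        # V_N: terminal values
    values, policies = [V], []
    for _ in range(horizon):                    # stages n = N-1, ..., 0
        Vn, fn = {}, {}
        for s in states:
            # best action at (n, s): minimize stage cost plus cost-to-go
            best = min(actions, key=lambda a: cost(s, a) + V[transition(s, a)])
            fn[s] = best
            Vn[s] = cost(s, best) + V[transition(s, best)]
        V = Vn
        values.insert(0, Vn)
        policies.insert(0, fn)
    return values, policies

# Toy problem: drive a state in {0,...,4} toward 0 with moves -1, 0, +1;
# each step costs |a| plus a holding cost equal to the resulting state.
states = range(5)
actions = (-1, 0, 1)
T = lambda s, a: min(max(s + a, 0), 4)          # clamp to the state space
c = lambda s, a: abs(a) + T(s, a)
values, policies = backward_induction(states, actions, c, T, lambda s: 0, horizon=3)
print(values[0][4])  # minimal 3-stage cost starting from state 4
```

The stochastic case treated in the book replaces the transition by a transition law and the cost-to-go by its expectation; the backward recursion itself has the same shape.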

Copyright information

© 2016 Springer International Publishing AG

Cite this chapter

Hinderer, K., Rieder, U., Stieglitz, M. (2016). Introduction and Organization of the Book. In: Dynamic Optimization. Universitext. Springer, Cham. https://doi.org/10.1007/978-3-319-48814-1_1
