
Additional General Issues

Dynamic Optimization

Part of the book series: Universitext ((UTX))


Abstract

We present the basic theorems for cost minimization and for DPs with an absorbing set of states. We also prove the basic theorem using reachable states. The important notion of a bounding function is introduced.
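To make the abstract's setting concrete, here is a minimal illustrative sketch (not the book's formulation or notation) of cost minimization for a finite deterministic decision model with an absorbing set of states: value iteration computes the minimal total cost of reaching the absorbing set. The state space, actions, and costs below are invented for illustration.

```python
# Illustrative sketch, assuming a toy model: states 0..3, where state 3 is
# absorbing with zero cost; actions move the state right by 1 or 2 steps.

STATES = [0, 1, 2, 3]
ABSORBING = {3}

def actions(s):
    """Admissible actions: step sizes that do not overshoot the absorbing state."""
    return [a for a in (1, 2) if s + a <= 3]

def cost(s, a):
    """One-stage cost (an assumption): the larger step is cheaper per unit."""
    return 1.0 if a == 2 else 0.8

def value_iteration(eps=1e-9):
    """Iterate the cost-minimization Bellman operator to a fixed point V,
    where V[s] is the minimal total cost of reaching the absorbing set from s."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if s in ABSORBING:
                continue  # absorbing states incur no further cost
            best = min(cost(s, a) + V[s + a] for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration()
```

In this toy instance the minimal costs work out to V(2) = 0.8, V(1) = 1.0 (one step of size 2), and V(0) = 1.8; because the model is finite and every admissible action moves toward the absorbing set, the iteration terminates.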




Copyright information

© 2016 Springer International Publishing AG


Cite this chapter

Hinderer, K., Rieder, U., Stieglitz, M. (2016). Additional General Issues. In: Dynamic Optimization. Universitext. Springer, Cham. https://doi.org/10.1007/978-3-319-48814-1_3
