
Recursive method in stochastic optimization under compound criteria

  • Chapter
Advances in Mathematical Economics

Part of the book series: Advances in Mathematical Economics (MATHECON, volume 3)

Abstract

In this paper we propose a recursive method for stochastic optimization problems with compound criteria. By introducing four types of policy (Markov, general, primitive and expanded Markov), we establish an equivalence among three of the policy classes (general, expanded Markov and primitive). It is shown that an optimal policy exists in the general class. We then apply this result to range, ratio and variance problems, deriving both a forward recursive formula for past-value sets and a backward recursive formula for value functions. The class of compound criteria is broad and covers many criteria arising in economic decision processes.
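The chapter's own construction is not reproduced here, so the following is only a minimal sketch of the expanded-Markov idea described in the abstract: for a non-additive range criterion (the spread between the largest and smallest per-stage rewards), a backward recursion becomes possible once the state is expanded with the past values the criterion depends on. The model data (states, actions, rewards, transition probabilities) and the horizon below are hypothetical toy values, and the formulation is an illustration based on the abstract rather than the chapter's own notation.

```python
from functools import lru_cache

T = 3                        # planning horizon (hypothetical)
STATES = (0, 1)              # toy state space
ACTIONS = (0, 1)             # toy action space

def reward(s, a):
    # hypothetical per-stage reward r(s, a)
    return [[1.0, 2.0],
            [0.0, 3.0]][s][a]

def transition(s, a):
    # hypothetical transition law: list of (next state, probability) pairs
    p = 0.7 if a == 0 else 0.4
    return [(s, p), (1 - s, 1.0 - p)]

@lru_cache(maxsize=None)
def value(t, s, running_max, running_min):
    """Optimal expected terminal range from stage t, given the expanded
    Markov state (s, running_max, running_min)."""
    if t == T:
        # the range criterion is a function of the past values alone
        return running_max - running_min
    best = float("-inf")
    for a in ACTIONS:
        r = reward(s, a)
        # carry the past values forward as part of the expanded state
        new_max, new_min = max(running_max, r), min(running_min, r)
        expected = sum(p * value(t + 1, s2, new_max, new_min)
                       for s2, p in transition(s, a))
        best = max(best, expected)
    return best

# usage: start in state 0; the +/- infinity sentinels stand for an empty
# reward history and are overwritten by the first stage's reward
print(value(0, 0, float("-inf"), float("inf")))
```

For ratio- or variance-type criteria the same device would carry the accumulated quantities the criterion depends on (for instance a running sum) in place of the running extrema.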




Copyright information

© 2001 Springer Japan

About this chapter

Cite this chapter

Iwamoto, S. (2001). Recursive method in stochastic optimization under compound criteria. In: Kusuoka, S., Maruyama, T. (eds) Advances in Mathematical Economics. Advances in Mathematical Economics, vol 3. Springer, Tokyo. https://doi.org/10.1007/978-4-431-67891-5_3

  • DOI: https://doi.org/10.1007/978-4-431-67891-5_3

  • Publisher Name: Springer, Tokyo

  • Print ISBN: 978-4-431-65937-2

  • Online ISBN: 978-4-431-67891-5

  • eBook Packages: Springer Book Archive
