Recursive method in stochastic optimization under compound criteria

  • Seiichi Iwamoto
Part of the Advances in Mathematical Economics book series (MATHECON, volume 3)

Abstract

In this paper we propose a recursive method for stochastic optimization problems with compound criteria. By introducing four types of policy (Markov, general, primitive and expanded Markov), we establish an equivalence among three policy classes (general, expanded Markov and primitive). We show that an optimal policy exists in the general class. We then apply this result to range, ratio and variance problems, deriving both a forward recursive formula for past-value sets and a backward recursive formula for value functions. The class of compound criteria is rich enough to cover many economic decision processes.
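The paper's own construction is in the full text, but the idea behind the expanded Markov policy can be sketched informally. The toy example below (a hypothetical model; all names, data and the choice of criterion are illustrative, not taken from the paper) treats one compound criterion, the range of per-stage rewards, by imbedding the running maximum and minimum of past rewards into the state: a forward pass enumerates the past-value sets reachable at each stage, and a backward pass computes value functions on the expanded states, where the range criterion becomes a terminal function.

```python
# Minimal sketch only: a toy "range criterion" problem solved by state
# expansion (invariant imbedding). The model (reward, transition, N) is
# hypothetical and chosen purely for illustration.
from functools import lru_cache

N = 3                      # horizon
STATES = (0, 1, 2)
ACTIONS = (0, 1)

def reward(x, a):
    # Hypothetical per-stage reward r(x, a) taking values in {-1, 0, 1}.
    return (x - 1) * (2 * a - 1)

def transition(x, a):
    # Hypothetical transition law p(. | x, a): list of (next_state, prob).
    y = (x + a) % len(STATES)
    z = (x + a + 1) % len(STATES)
    return [(y, 0.7), (z, 0.3)]

def past_value_sets(x0):
    # Forward recursion: the (running max, running min) pairs of past
    # rewards actually reachable at each stage, starting from state x0.
    reach = [{(x0, (float("-inf"), float("inf")))}]
    for _ in range(N):
        nxt = set()
        for x, (hi, lo) in reach[-1]:
            for a in ACTIONS:
                r = reward(x, a)
                for y, p in transition(x, a):
                    if p > 0:
                        nxt.add((y, (max(hi, r), min(lo, r))))
        reach.append(nxt)
    return reach

@lru_cache(maxsize=None)
def value(n, x, hi, lo):
    # Backward recursion on the expanded state (stage, x, hi, lo):
    # minimize the expected range  max_t r_t - min_t r_t.
    if n == N:
        return hi - lo     # the compound criterion is terminal here
    best = float("inf")
    for a in ACTIONS:
        r = reward(x, a)
        ev = sum(p * value(n + 1, y, max(hi, r), min(lo, r))
                 for y, p in transition(x, a))
        best = min(best, ev)
    return best

print("optimal expected range from state 1:",
      value(0, 1, float("-inf"), float("inf")))
```

In this expanded formulation an optimal expanded Markov policy can be read off from the minimizing action at each expanded state, and the forward pass can be used to restrict the backward recursion to the reachable (hi, lo) pairs rather than all conceivable ones.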

Key words

dynamic programming; invariant imbedding; compound criteria; backward recursive formula; forward recursive formula; past-value sets; expanded Markov policy; general policy; primitive policy; range; ratio; variance

JEL Classification

C61 D81 

Mathematics Subject Classification (2000)

90C39 90C40 93E20 

Copyright information

© Springer Japan 2001

Authors and Affiliations

  • Seiichi Iwamoto
    Department of Economic Engineering, Graduate School of Economics, Kyushu University, Higashi-ku, Fukuoka, Japan
