Abstract
In this paper we propose a recursive method for stochastic optimization problems with compound criteria. By introducing four types of policy (Markov, general, primitive, and expanded Markov), we establish an equivalence among three policy classes (general, expanded Markov, and primitive). It is shown that an optimal policy exists in the general class. We then apply this result to range, ratio, and variance problems, deriving both a forward recursive formula for past-value sets and a backward recursive formula for value functions. The class of compound criteria is broad and well suited to economic decision processes.
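The expanded-Markov device behind the backward recursive formula can be sketched on a toy model. The example below is a minimal illustration, not the chapter's construction: the model data (states, rewards, transition law) and the function names are all hypothetical. It treats the range criterion, E[max_t r_t − min_t r_t], which is not additive in the stage rewards; expanding the Markov state with the running minimum and maximum of past rewards restores recursivity, so an ordinary backward recursion applies on the expanded state.

```python
# Toy illustration of the expanded-Markov idea (all model data below is
# hypothetical, not taken from the chapter).  The range criterion
#     minimize  E[ max_t r_t - min_t r_t ]
# is not additive, so backward recursion on the state alone fails.
# Expanding the state to (s, lo, hi), where lo/hi carry the running
# minimum/maximum of past rewards, restores a recursive structure.

T = 3                      # horizon: rewards accrue at stages t = 0, 1, 2
S = [0, 1]                 # states
A = [0, 1]                 # actions

def r(t, s, a):
    """Illustrative stage reward."""
    return (s + 1) * (a + 1) + t

def P(s, a):
    """Illustrative transition law: stay with prob. 0.7 (a=0) or 0.4 (a=1)."""
    p = 0.7 if a == 0 else 0.4
    return {s: p, 1 - s: 1.0 - p}

def V(t, s, lo, hi, _memo={}):
    """Minimal expected range from stage t in expanded state (s, lo, hi)."""
    if t == T:
        return hi - lo                          # terminal value: realized range
    key = (t, s, lo, hi)
    if key not in _memo:
        best = float("inf")
        for a in A:
            rw = r(t, s, a)
            nlo, nhi = min(lo, rw), max(hi, rw)  # update the carried extrema
            ev = sum(p * V(t + 1, s2, nlo, nhi) for s2, p in P(s, a).items())
            best = min(best, ev)
        _memo[key] = best
    return _memo[key]

# Before any reward is observed, the carried extrema are vacuous.
opt = V(0, 0, float("inf"), float("-inf"))
print(opt)
```

The same state-expansion pattern handles the ratio and variance criteria by carrying the appropriate running statistics (e.g., accumulated sums) in place of the extrema.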
References
Altman, E.: Constrained Markov Decision Processes. Chapman & Hall, New York 1999
Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton, NJ 1957
Bellman, R.E.: Some Vistas of Modern Mathematics. University of Kentucky Press, Lexington, KY 1968
Blackwell, D.: Discounted dynamic programming. Ann. Math. Stat. 36, 226–235 (1965)
Denardo, E.V.: Contraction mappings in the theory underlying dynamic programming. SIAM Review 9, 165–177 (1968)
Denardo, E.V.: Dynamic Programming: Models and Applications. Prentice-Hall, NJ 1982
Dynkin, E.B., Yushkevich, A.A.: Controlled Markov Processes. Springer, New York 1979
Hinderer, K.: Foundations of Non-Stationary Dynamic Programming with Discrete Time Parameter. Lecture Notes in Operations Research and Mathematical Systems 33. Springer, Berlin 1970
Howard, R.A.: Dynamic Programming and Markov Processes. MIT Press, Cambridge, Mass. 1960
Iwamoto, S.: Theory of Dynamic Program (in Japanese). Kyushu Univ. Press, Fukuoka 1987
Iwamoto, S.: Associative dynamic programs. J. Math. Anal. Appl. 201, 195–211 (1996)
Iwamoto, S.: On expected values of Markov statistics. Bull. Informatics and Cybernetics 30, 1–24 (1998)
Iwamoto, S.: Conditional decision processes with recursive reward function. J. Math. Anal. Appl. 230, 193–210 (1999)
Iwamoto, S.: “Dynamic Programming”, “Principle of Invariant Imbedding” (Japanese) In: Operations Res. Soc. Japan (ed.): Operations Research Dictionary 2000: Basic Ver., pp.229–245, & Terminology Ver. JUSE, Tokyo, 2000
Iwamoto, S., Fujita, T.: Stochastic decision-making in a fuzzy environment. J. Operations Res. Soc. Japan 38, 467–482 (1995)
Iwamoto, S., Sniedovich, M.: Sequential decision making in fuzzy environment. J. Math. Anal. Appl. 222, 208–224 (1998)
Iwamoto, S., Tsurusaki, K., Fujita, T.: Conditional decision-making in a fuzzy environment. J. Operations Res. Soc. Japan 42, 198–218 (1999)
Iwamoto, S., Ueno, T., Fujita, T.: Controlled Markov chains with utility functions. International Workshop on Markov Processes and Controlled Markov Chains, Changsha, Hunan, China, August 22–28, 1999
Karatzas, I., Shreve, S.E.: Methods of Mathematical Finance. Springer, New York 1998
Kreps, D.M.: Decision problems with expected utility criteria I. Math. Oper. Res. 2, 45–53 (1977)
Kreps, D.M.: Decision problems with expected utility criteria II; stationarity. Math. Oper. Res. 2, 266–274 (1977)
Markowitz, H.: Portfolio selection. J. Finance 8, 77–91 (1952)
Ozaki, H., Streufert, P.A.: Dynamic programming for non-additive stochastic objectives. J. Math. Econ. 25, 391–442 (1996)
Porteus, E.: An informal look at the principle of optimality. Management Sci. 21, 1346–1348 (1975)
Porteus, E.: Conditions for characterizing the structure of optimal strategies in infinite-horizon dynamic programs. J. Optim. Theory Appl. 36, 419–432 (1982)
Puterman, M.L.: Markov Decision Processes: Stochastic Models. In: Handbooks in Operations Research and Management Science (Heyman, D.P., Sobel, M.J. eds.) Vol. 2, Chap. VIII. Elsevier, Amsterdam 1990
Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley & Sons, New York 1994
Sniedovich, M.: Dynamic Programming. Marcel Dekker, New York 1992
Stokey, N.L., Lucas, R.E.: Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge, Mass. 1989
Strauch, R.: Negative dynamic programming. Ann. Math. Stat. 37, 871–890 (1966)
Streufert, P.A.: Recursive Utility and Dynamic Programming. In: Barbera, S. et al. (eds.): Handbook of Utility Theory Vol. 1, Chap. III. Kluwer, Boston 1998
Copyright information
© 2001 Springer Japan
Cite this chapter
Iwamoto, S. (2001). Recursive method in stochastic optimization under compound criteria. In: Kusuoka, S., Maruyama, T. (eds) Advances in Mathematical Economics. Advances in Mathematical Economics, vol 3. Springer, Tokyo. https://doi.org/10.1007/978-4-431-67891-5_3
Print ISBN: 978-4-431-65937-2
Online ISBN: 978-4-431-67891-5