Stochastic Optimal Control
In the long history of mathematics, stochastic optimal control is a rather recent development. Using Bellman's Principle of Optimality along with measure-theoretic and functional-analytic methods, several mathematicians, such as H. Kushner, W. Fleming, R. Rishel, W.M. Wonham and J.M. Bismut, among many others, made important contributions to this new area of mathematical research during the 1960s and early 1970s. For a complete mathematical exposition of the continuous-time case see Fleming and Rishel (1975), and for the discrete-time case see Bertsekas and Shreve (1978).
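The discrete-time flavour of the theory rests on Bellman's Principle of Optimality: the optimal expected cost-to-go from any state satisfies a backward recursion over the horizon. The sketch below illustrates this on a hypothetical two-state, two-action toy problem of my own construction (the states, costs and transition probabilities are illustrative assumptions, not drawn from the article):

```python
# Backward induction (Bellman recursion) for a finite-horizon
# stochastic control problem with discrete states and actions.
# V_t(s) = min_a [ cost(s, a) + E[ V_{t+1}(s') | s, a ] ]

def backward_induction(states, actions, transition, cost, horizon):
    """transition(s, a) -> list of (probability, next_state) pairs;
    cost(s, a) -> immediate cost. Returns the time-0 value function
    and the policy {t: {state: action}}."""
    V = {s: 0.0 for s in states}  # terminal value is zero
    policy = {}
    for t in reversed(range(horizon)):
        V_new, pol_t = {}, {}
        for s in states:
            best = None
            for a in actions:
                # immediate cost plus expected optimal cost-to-go
                q = cost(s, a) + sum(p * V[s2] for p, s2 in transition(s, a))
                if best is None or q < best:
                    best, pol_t[s] = q, a
            V_new[s] = best
        V, policy[t] = V_new, pol_t
    return V, policy

# Toy model (assumed): state 1 incurs a running cost of 1 per period,
# state 0 is free; the action "move" switches state with probability 0.8
# at a cost of 0.1, while "stay" keeps the current state for free.
states, actions = [0, 1], ["stay", "move"]

def transition(s, a):
    if a == "stay":
        return [(1.0, s)]
    return [(0.8, 1 - s), (0.2, s)]

def cost(s, a):
    return (1.0 if s == 1 else 0.0) + (0.1 if a == "move" else 0.0)

V, policy = backward_induction(states, actions, transition, cost, horizon=5)
print(policy[0][1])  # prints "move": early on, paying 0.1 to escape pays off
print(policy[4][1])  # prints "stay": in the last period, moving cannot pay off
```

The recursion exhibits the principle directly: the policy at time t is computed only from the value function at t+1, never by re-examining the whole remaining trajectory, and it can be time-dependent, as the last-period behaviour shows.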
- Arrow, K.J. and Kurz, M. 1970. Public Investment, the Rate of Return, and Optimal Fiscal Policy. Baltimore: Johns Hopkins Press.
- Bellman, R. 1957. Dynamic Programming. Princeton: Princeton University Press.
- Bertsekas, D.P. and Shreve, S.E. 1978. Stochastic Optimal Control: The Discrete Time Case. New York: Academic Press.
- Fleming, W.H. and Rishel, R.W. 1975. Deterministic and Stochastic Optimal Control. New York: Springer-Verlag.
- Malliaris, A.G. and Brock, W.A. 1982. Stochastic Methods in Economics and Finance. Amsterdam: North-Holland.