On Successive Approximation of Optimal Control of Stochastic Dynamic Systems

  • Fei-Yue Wang
  • George N. Saridis
Part of the International Series in Operations Research & Management Science book series (ISOR, volume 46)

Abstract

An approximation theory of optimal control for nonlinear stochastic dynamic systems is established. Based on the generalized Hamilton-Jacobi-Bellman equation for the cost function of a nonlinear stochastic system, general iterative procedures for approximating the optimal control are developed: the performance of a feedback control law is successively improved until a satisfactory suboptimal solution is achieved. For the infinite-time stochastic regulator problem, a successive design scheme using upper and lower bounds of the exact cost function is developed. Determining these bounds requires solving a partial differential inequality rather than an equality, which gives this method a degree of design flexibility that the exact design method lacks. Stability of the infinite-time suboptimal control problem is established under mild conditions, and stable sequences of controllers can be generated. Several examples illustrate the application of the proposed approximation theory to stochastic control. In particular, for linear quadratic Gaussian problems the approximation theory yields the exact optimal control.
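
To make the iteration concrete, here is a minimal sketch of the generalized Hamilton-Jacobi-Bellman relation that underlies the procedure, written in the standard form for stochastic regulator problems; the dynamics f and g, cost weights Q and R, and noise intensity \Sigma below are illustrative notation, not taken from the chapter. For the diffusion

    dx = \big( f(x) + g(x)\,u \big)\,dt + \Sigma^{1/2}\,dw

the cost V_k of a fixed admissible feedback law u_k(x) satisfies the generalized HJB equation

    \left( \frac{\partial V_k}{\partial x} \right)^{\top} \big( f + g\,u_k \big)
      + \frac{1}{2}\,\mathrm{tr}\!\left( \Sigma\,\frac{\partial^2 V_k}{\partial x^2} \right)
      + Q(x) + u_k^{\top} R\,u_k = 0,

and the improved law is obtained from the pointwise minimization step

    u_{k+1}(x) = -\frac{1}{2}\,R^{-1} g(x)^{\top} \frac{\partial V_k}{\partial x}.

Replacing the equality by "\leq 0" (respectively "\geq 0") and solving the resulting partial differential inequality yields an upper (respectively lower) bound on the exact cost of u_k; this is the design flexibility referred to above.

The scalar sketch below, a hypothetical example rather than one drawn from the chapter, illustrates the claim about linear quadratic Gaussian problems: for dx = (a x + b u) dt + \sigma dw with cost rate q x^2 + r u^2, additive noise contributes only a constant to the cost, so the quadratic cost coefficient obeys the same Riccati equation as in the deterministic case and the successively improved gains converge to the exact optimal feedback law.

    # Successive approximation for a scalar stochastic regulator:
    #   dx = (a*x + b*u) dt + sigma dw,  cost rate q*x**2 + r*u**2.
    # The noise intensity sigma adds only a constant to the cost, so it
    # drops out of the gain iteration. All parameter values are assumptions.
    import math

    a, b, q, r = 1.0, 1.0, 1.0, 1.0

    # Exact positive root of the scalar algebraic Riccati equation
    #   2*a*p - (b**2 / r) * p**2 + q = 0.
    p_exact = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2

    k = 2.0 * a / b  # any stabilizing initial gain, i.e. a - b*k < 0
    for i in range(8):
        # Quadratic terms of the generalized HJB equation for the fixed
        # law u = -k*x give 2*p*(a - b*k) + q + r*k**2 = 0; solve for p.
        p = -(q + r * k**2) / (2.0 * (a - b * k))
        # Minimization step: improved feedback gain.
        k = b * p / r
        print(f"iteration {i}: cost coefficient p = {p:.6f}, gain = {k:.6f}")

    print(f"exact LQG solution:  p = {p_exact:.6f}, gain = {b * p_exact / r:.6f}")

Each iterate p is the exact cost coefficient of the current law u = -k*x; the printed values of p decrease monotonically to the Riccati solution, consistent with the stable sequence of controllers described above.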

Keywords

Hamilton-Jacobi-Bellman equation · optimal control · nonlinear stochastic systems

Copyright information

© Springer Science + Business Media, Inc. 2002

Authors and Affiliations

  • Fei-Yue Wang (1)
  • George N. Saridis (2)
  1. Department of Systems and Industrial Engineering, University of Arizona, Tucson
  2. Department of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Institute, Troy
