
An optimal control depending on the conditional density of the unobserved state

  • Conference paper
Applied Stochastic Analysis

Part of the book series: Lecture Notes in Control and Information Sciences ((LNCIS,volume 177))

Abstract

The optimal control of a nonlinear, partially observed stochastic control system is computed using nonlinear filtering and linear quadratic techniques.

This work was partially supported by NSF grant DMS-9105649.
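The abstract is terse, so the following sketch may help fix ideas. It illustrates only the classical linear-Gaussian special case, not the paper's nonlinear construction: there the conditional density of the unobserved state is Gaussian, hence fully summarized by the conditional mean and variance of a Kalman filter, and the certainty-equivalence linear quadratic control feeds back the conditional mean. All model values and variable names below are assumptions chosen for illustration.

```python
# Illustrative sketch only (assumed scalar model, not from the paper):
# certainty-equivalence LQG control. The conditional density of the unobserved
# state x is Gaussian, summarized by conditional mean `xhat` and variance `P`;
# the control u = -K * xhat therefore depends on the conditional density.
import random

a, b, c = 0.9, 1.0, 1.0   # dynamics x' = a x + b u + w, observation y = c x + v
q, r = 0.1, 0.1           # process / observation noise variances
Q, R = 1.0, 1.0           # quadratic state / control cost weights

# Linear quadratic feedback gain from the scalar discrete Riccati iteration.
p = Q
for _ in range(500):
    K = a * b * p / (R + b * b * p)
    p = Q + a * p * (a - b * K)

random.seed(0)
x = 5.0                    # true (unobserved) state
xhat, P = 0.0, 1.0         # prior conditional mean and variance

for _ in range(50):
    y = c * x + random.gauss(0.0, r ** 0.5)   # noisy observation
    L = P * c / (c * P * c + r)               # Kalman gain (measurement update)
    xhat += L * (y - c * xhat)
    P = (1.0 - L * c) * P
    u = -K * xhat                             # control uses the conditional mean
    x = a * x + b * u + random.gauss(0.0, q ** 0.5)
    xhat = a * xhat + b * u                   # time update of conditional density
    P = a * P * a + q

print(round(K, 3), round(P, 3))
```

In the nonlinear, non-Gaussian setting studied here, the conditional density is infinite-dimensional and is propagated by a nonlinear filtering equation rather than by two sufficient statistics; the sketch only illustrates the structure "filter, then feed the conditional density into the control law."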


References

  1. V.E. Beneš, I. Karatzas, "On the Relation of Zakai's and Mortensen's Equation", SIAM J. Control and Optimization 21 (1983), pp. 472–489.

  2. A. Bensoussan, "Optimal Control of Partially Observed Diffusions", in "Advances in Filtering and Optimal Stochastic Control", W.H. Fleming, L.A. Gorostiza (Eds.), Proceedings of the IFIP-WG 7/1 Working Conference, Springer Lecture Notes in Control and Information Sciences, Vol. 42 (1982).

  3. N. Christopeit, M. Kohlmann (Eds.), "Stochastic Differential Systems", Proceedings of the 2nd Bad Honnef Conference, Springer Lecture Notes in Control and Information Sciences, Vol. 43 (1982).

  4. W.H. Fleming, "Nonlinear Semigroups for Controlled Partially Observed Diffusions", SIAM J. Control and Optimization, Vol. 20 (1982).

  5. K. Helmes, R.W. Rishel, "The Solution of a Partially Observed Stochastic Control Problem in Terms of Predicted Miss", to appear.

  6. R.S. Liptser, A.N. Shiryayev, Statistics of Random Processes I: General Theory, Springer-Verlag (1977).

  7. H. Kunita, "Asymptotic Behavior of the Nonlinear Filtering Errors of Markov Processes", Journal of Multivariate Analysis 1 (1971), pp. 365–391.

  8. H.J. Kushner, "On the Dynamical Equations of Conditional Probability Density Functions with Application to Optimal Stochastic Control Theory", J. Math. Anal. Appl. 8 (1964), pp. 332–344.

  9. R.E. Mortensen, "Stochastic Optimal Control with Noisy Observations", International J. Control 4 (1966), pp. 455–464.



Editor information

Ioannis Karatzas, Daniel Ocone


Copyright information

© 1992 Springer-Verlag

About this paper

Cite this paper

Helmes, K.L., Rishel, R.W. (1992). An optimal control depending on the conditional density of the unobserved state. In: Karatzas, I., Ocone, D. (eds) Applied Stochastic Analysis. Lecture Notes in Control and Information Sciences, vol 177. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0007054


  • DOI: https://doi.org/10.1007/BFb0007054

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55296-3

  • Online ISBN: 978-3-540-47017-5

  • eBook Packages: Springer Book Archive
