Journal of Optimization Theory and Applications, Volume 59, Issue 3, pp 445–465

Control problems with random and progressively known targets

  • A. Leizarowitz
Contributed Papers

Abstract

The situation considered is that of optimally controlling a deterministic system from a given state to an initially unknown target y over a fixed time interval [T0, T]. The target becomes known with certainty at a random time τ in [T0, T]. The controller knows the distributions of y and τ. We derive the Bellman equation for the problem, prove a verification theorem for it, and demonstrate how the distribution of τ influences the optimal control. We show that, in the linear-quadratic case, the optimal control is given by a feedback law that does not depend on the distribution of τ.
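The linear-quadratic claim can be illustrated on a toy scalar problem. The sketch below is our own construction, not the paper's general setup: system dx/dt = u on [0, T], cost ∫u² dt + q(x(T) − y)², with the target y revealed at a random time τ. The HJB equation gives a feedback gain k(t) that does not involve τ; before revelation the controller substitutes the conditional mean E[y] (a certainty-equivalence assumption on our part).

```python
def gain(t, T, q):
    """Feedback gain k(t) from the HJB equation for this toy problem.

    With value function V(t, x) = k(t) * (x - y)^2, the HJB equation
    reduces to the Riccati-type ODE k' = k^2 with k(T) = q, whose
    solution is k(t) = 1 / (1/q + (T - t)).  Note: no dependence on tau.
    """
    return 1.0 / (1.0 / q + (T - t))


def simulate(y, tau, T=1.0, q=100.0, Ey=0.0, n=1000):
    """Euler simulation of x' = u with u = k(t) * (target estimate - x)."""
    dt = T / n
    x = 0.0
    for i in range(n):
        t = i * dt
        y_hat = y if t >= tau else Ey   # use E[y] until y is revealed
        u = gain(t, T, q) * (y_hat - x)
        x += u * dt
    return x


# Example: target y = 2 revealed at tau = 0.3; the state steers close to y,
# and the feedback law gain(t, T, q) is the same for every tau.
xT = simulate(y=2.0, tau=0.3)
```

The distribution of τ affects the achieved cost (a late revelation leaves less time to correct), but the gain function itself is computed without any reference to τ, mirroring the paper's conclusion for the linear-quadratic case.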

Key Words

Optimal control; linear-quadratic systems; unknown targets; points of information


Copyright information

© Plenum Publishing Corporation 1988

Authors and Affiliations

  • A. Leizarowitz
  1. Department of Mathematics, Carnegie Mellon University, Pittsburgh
