Control problems with random and progressively known targets
The situation considered is that of optimally controlling a deterministic system from a given state to an initially unknown target y over a fixed time interval [T0,T]. The target becomes known with certainty at a random time τ in [T0,T]. The controller knows the distributions of y and τ. We derive the Bellman equation for the problem, prove a verification theorem for it, and demonstrate how the distribution of τ influences the optimal control. We show that, in the linear-quadratic case, the optimal control is given by a feedback law that does not depend on the distribution of τ.
Key Words. Optimal control, linear-quadratic systems, unknown targets, points of information.
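As an illustrative sketch (not taken from the paper), the linear-quadratic claim above can be checked numerically. Assuming a discrete-time system x_{t+1} = A x_t + B u_t with control cost u'Ru and terminal tracking cost (x_T - y)' Qf (x_T - y), the backward Riccati recursion produces feedback gains K_t that depend only on (A, B, R, Qf, T); the unknown target y and the reveal time τ enter only through an affine feedforward term, so the feedback law itself is unaffected by the distribution of τ. All matrices below are hypothetical example data.

```python
import numpy as np

def lq_gains(A, B, R, Qf, T):
    """Backward Riccati recursion for finite-horizon LQ control.

    Returns the list of feedback gains K_0, ..., K_{T-1}.
    Note: y and the distribution of the reveal time tau never
    appear here -- the gains are target-independent.
    """
    P = Qf.copy()
    gains = []
    for _ in range(T):
        # K_t = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_t = A' P_{t+1} (A - B K_t)   (no running state cost)
        P = A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reorder so gains[t] applies at stage t

# Hypothetical double-integrator example.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
R = np.array([[0.01]])
Qf = 10.0 * np.eye(2)

gains = lq_gains(A, B, R, Qf, T=20)
```

A controller would apply u_t = -K_t x_t plus a feedforward term computed from E[y] before τ and from the revealed y afterward (certainty equivalence); since the gains above are fixed in advance, changing the distribution of τ changes only the feedforward, consistent with the result stated in the abstract.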