Impulse Control Under Uncertainty

  • Alexander B. Kurzhanski
  • Alexander N. Daryin
Chapter
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 468)

Abstract

The present chapter extends the existing theory of closed-loop control to impulse-controlled systems subject to uncertain input disturbances, which are assumed to be unknown but bounded by a given convex set. Once again the key element of the solution is the Principle of Optimality and its infinitesimal counterpart, the related Dynamic Programming Equation [2]. The corresponding value function may be calculated here as the limit of optimal values for problems of motion correction. For one-dimensional systems this calculation yields a complete solution in explicit form, while for systems of higher dimension it leads to a numerical algorithm for computing approximations of the value function and the related feedback controls. We also revisit earlier formalizations of feedback impulse control and indicate how they adapt to the case of uncertainty [1, 7, 8, 9, 10, 11, 12].
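For orientation, the dynamic programming relation arising in this setting is typically a min-max variational inequality of Hamilton-Jacobi type. The following display is a schematic sketch only; the notation \(V\), \(A\), \(B\), \(C\), \(\mathcal{Q}\) is illustrative and is not quoted from the chapter:

\[
\min\Bigl\{\,
  \frac{\partial V}{\partial t}
  + \max_{v \in \mathcal{Q}(t)}
    \Bigl\langle \frac{\partial V}{\partial x},\, A(t)x + C(t)v \Bigr\rangle,\;
  \min_{\|h\| \le 1}
    \Bigl\langle \frac{\partial V}{\partial x},\, B(t)h \Bigr\rangle + 1
\,\Bigr\} = 0.
\]

Here the first branch governs the worst-case evolution of the value function between impulses, with the disturbance \(v(t)\in\mathcal{Q}(t)\) acting as an adversary, while the second branch encodes the option of an instantaneous jump in direction \(h\) at unit cost per unit of impulse variation.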

References

  1. Basar, T., Bernhard, P.: \(H^{\infty }\)-Optimal Control and Related Minimax Design Problems. SCFA. Birkhäuser, Basel (1995)
  2. Bellman, R.: Dynamic Programming. Princeton University Press, Princeton (1957)
  3. Bellman, R.: Stability Theory of Differential Equations. McGraw-Hill, New York (1953)
  4. Bensoussan, A., Lions, J.L.: Contrôle impulsionnel et inéquations quasi-variationnelles. Dunod, Paris (1982)
  5. Branicky, M.S., Borkar, V.S., Mitter, S.K.: A unified framework for hybrid control: model and optimal control theory. IEEE Trans. Autom. Control 43(1), 31–45 (1998)
  6. Carter, T., Humi, M.: A new approach to impulsive rendezvous near circular orbit. Celest. Mech. Dyn. Astron. 112(4), 385–426 (2012)
  7. Elliott, R.J., Kalton, N.J.: Values in differential games. Bull. Am. Math. Soc. 78(3), 427–432 (1972)
  8. Krasovski, N.N.: Rendezvous Game Problems. National Technical Information Service, Springfield, Virginia (1971)
  9. Krasovski, N.N., Subbotin, A.I.: Game-Theoretic Control Problems. Springer, New York (1988)
  10. Kurzhanski, A.B.: Pontryagin's alternated integral and the theory of control synthesis. Proc. Steklov Inst. Math. 224, 234–248 (1999) (in Russian)
  11. Leitmann, G.: Optimality and reachability with feedback controls. In: Blaquiere, A., Leitmann, G. (eds.) Dynamical Systems and Microphysics: Control Theory and Mechanics. Academic Press, Orlando (1982)
  12. Subbotin, A.I.: Generalized Solutions of First-Order PDEs: The Dynamic Optimization Perspective. SCFA. Birkhäuser, Boston (1995)

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2020

Authors and Affiliations

  • Alexander B. Kurzhanski (1)
  • Alexander N. Daryin (2)
  1. Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Moscow, Russia
  2. Google Research, Zürich, Switzerland
