
The Open-Loop and Closed-Loop Impulse Controls

  • Alexander B. Kurzhanski (email author)
  • Alexander N. Daryin
Chapter
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 468)

Abstract

This chapter describes how to find optimal open-loop and closed-loop impulse controls. We begin by defining an impulse control system and proving the existence and uniqueness of its trajectories (see also [2, 11]). We then set up the basic problem of open-loop impulse control: to transfer the system from a given initial state to a given target state within a given time under a control of minimum variation. A key point in solving the open-loop impulse control problem is the construction of reachability sets for the system. Here we indicate how to construct such sets and study their properties. After that we present some simple model examples. The solution to the optimal impulse control problem is given by the Maximum Rule for Impulse Controls, an analogue of Pontryagin's Maximum Principle for ordinary controls.
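For orientation, the basic open-loop problem described above can be sketched in generic notation as follows (a schematic formulation only; the symbols A(t), B(t), U(·) and the interval [t_0, t_1] are placeholders and need not match the chapter's own notation):

% Sketch, under generic assumptions: a linear system driven by a control
% U(.) of bounded variation, steered from a given initial state x^0 to a
% target state x^1 within the given time, with minimal total variation.
\begin{align*}
  & dx(t) = A(t)\,x(t)\,dt + B(t)\,dU(t), \qquad t \in [t_0, t_1], \\
  & x(t_0) = x^0, \qquad x(t_1) = x^1, \\
  & \operatorname{Var}_{[t_0,\,t_1]} U(\cdot) \;\to\; \min.
\end{align*}

In this reading, the reachability set at time t_1 collects all states attainable from x^0 under controls whose variation does not exceed a given bound, which is why its construction is central to solving the problem.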

References

  1. Daryin, A.N., Malakaeva, A.Y.: Numerical methods for linear impulse feedback problems. J. Comput. Syst. Sci. Int. 47(2), 207–213 (2008)
  2. Gel'fand, I.M., Shilov, G.E.: Generalized Functions. Volume I: Properties and Operations. Dover, New York (1991)
  3. Krasovskii, N.N.: The Theory of Control of Motion. Nauka, Moscow (1968)
  4. Kostousova, E.K.: Control synthesis via parallelotopes: optimization and parallel computations. Optim. Methods Softw. 14(4), 267–310 (2001)
  5. Kurzhanski, A.B.: Comparison principle for equations of the Hamilton–Jacobi type in control theory. Proc. Steklov Inst. Math. 253(S1), S185–S195 (2006)
  6. Kurzhanski, A.B., Osipov, Yu.S.: On controlling linear systems through generalized controls. Differenc. Uravn. 5(8), 1360–1370 (1969)
  7. Kurzhanski, A.B., Vályi, I.: Ellipsoidal Calculus for Estimation and Control. SCFA. Birkhäuser, Boston (1997)
  8. Kurzhanski, A.B., Varaiya, P.: Ellipsoidal techniques for reachability analysis: internal approximation. Syst. Control Lett. 41, 201–211 (2000)
  9. Leitmann, G.: The Calculus of Variations and Optimal Control: An Introduction. Plenum Press, New York (1981)
  10. Neustadt, L.W.: Optimization, a moment problem and nonlinear programming. SIAM J. Control 2(1), 33–53 (1964)
  11. Schwartz, L.: Théorie des distributions. Hermann, Paris (1950)
  12. Schwartz, L.: Méthodes mathématiques pour les sciences physiques. Hermann, Paris (1961)

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2020

Authors and Affiliations

  • Alexander B. Kurzhanski (1) (email author)
  • Alexander N. Daryin (2)
  1. Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Moscow, Russia
  2. Google Research, Zürich, Switzerland