Feedback Minimum Principle for Optimal Control Problems in Discrete-Time Systems and Its Applications

  • Vladimir Dykhta
  • Stepan Sorokin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11548)


The paper is devoted to a generalization of a necessary optimality condition in the form of the Feedback Minimum Principle for a nonconvex discrete-time free-endpoint control problem. The approach is based on an exact formula for the increment of the cost functional, which is completely determined by a solution of the adjoint system corresponding to a reference process. By minimizing this increment in the control variable for a fixed adjoint state, we define a multivalued map whose selections are feedback controls with the property of potentially “improving” the reference process. As a result, we derive a necessary optimality condition: an optimal process does not admit feedback controls of “potential descent” in the cost functional. In the case when the well-known Discrete Maximum Principle holds, our condition can be further strengthened. Note that the obtained optimality condition is quite constructive and may lead to an iterative algorithm for discrete-time optimal control problems. Finally, we present sufficient optimality conditions for problems where the Discrete Maximum Principle is not meaningful.
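The iteration scheme outlined above (a backward adjoint sweep along the reference process, pointwise minimization with the adjoint state frozen, and a forward simulation under the resulting feedback) can be sketched in a few lines of Python. All problem data below (the dynamics `f`, running cost `l`, terminal cost `phi`, and the control grid `U`) are hypothetical toy choices made for illustration only; they are not taken from the paper, and the loop is a plausible reading of the method of feedback iterations, not the authors' implementation.

```python
import numpy as np

# Toy problem data (hypothetical, for illustration only).
N = 4
U = np.linspace(-1.0, 1.0, 201)          # discretized control set
f   = lambda x, u: x + u                 # dynamics x_{k+1} = f(x_k, u_k)
f_x = lambda x, u: 1.0                   # its derivative in x
l   = lambda x, u: x * u                 # running cost
l_x = lambda x, u: u
phi   = lambda x: x ** 2                 # terminal cost
phi_x = lambda x: 2.0 * x

def simulate(policy, x0):
    """Run the system forward under a feedback policy u_k = policy(k, x)."""
    x, us, cost = x0, [], 0.0
    for k in range(N):
        u = policy(k, x)
        cost += l(x, u)
        us.append(u)
        x = f(x, u)
    return us, cost + phi(x)

def adjoint(xs, us):
    """Backward adjoint sweep along the reference process (xs, us)."""
    psi = [0.0] * (N + 1)
    psi[N] = phi_x(xs[N])
    for k in range(N - 1, -1, -1):
        psi[k] = l_x(xs[k], us[k]) + f_x(xs[k], us[k]) * psi[k + 1]
    return psi

def improve(us, x0):
    """One feedback iteration: adjoint along the reference, then a
    descent feedback obtained by minimizing l(x,u) + psi_{k+1} f(x,u)
    in u at the *current* state x, with psi frozen along the reference."""
    xs = [x0]
    for k in range(N):
        xs.append(f(xs[k], us[k]))
    psi = adjoint(xs, us)
    v = lambda k, x: U[np.argmin(l(x, U) + psi[k + 1] * f(x, U))]
    return simulate(v, x0)

x0 = 2.0
us = [0.0] * N                           # reference process: do nothing
_, J = simulate(lambda k, x: us[k], x0)
while True:
    new_us, new_J = improve(us, x0)
    if new_J >= J - 1e-12:               # no feedback descent control: stop
        break
    us, J = new_us, new_J
print("final cost:", J)
```

On this toy instance the first feedback iteration lowers the cost from 4 to 2, and the second produces no further descent, so the loop stops; this matches the role of the condition as a necessary one, since absence of a feedback descent control certifies only a candidate, not global optimality.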


Exact formula of the cost functional increment · Feedback controls · Necessary optimality conditions · Feedback Minimum Principle · Maximum Principle · Method of feedback iterations



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Matrosov Institute for System Dynamics and Control Theory, Irkutsk, Russia
