
Comments on Paper “On the Relation Between Two Approaches to Necessary Optimality Conditions in Problems with State Constraints”

  • Dmitry Karamzin
Forum

Abstract

This Forum Note concerns the question of necessary optimality conditions in optimal control problems subject to state constraints. Some critical remarks about a recently published paper are made.

Keywords

Optimal control · Maximum principle · State constraints

Mathematics Subject Classification

49N25 

1 Introduction

This Forum Note discusses paper [1], recently published in this journal. That paper addresses the question of necessary optimality conditions for optimal control problems with state constraints in the form of the Pontryagin maximum principle [2]. Constructive criticism of its results is advanced, showing how to deduce the theorems in [1] from the results provided in [3]. Some drawbacks of the method of investigation are also revealed.

2 Some Critical Remarks Addressing Paper [1]

2.1 Deduction of the Results of [1] from the Literature

The main results of [1] reside in Theorems 9.1, p. 405, and 14.1, p. 417. Since Theorem 14.1 is more general than Theorem 9.1, as the title of Sect. 12 suggests, let us focus solely on Theorem 14.1 and show how this assertion follows from the results obtained in [3]. Consider Problem C and assume that the data are \(C^2\)-smooth, as stated on p. 412 in [1]. At the first stage, assume that the type of minimum is global. Apply to Problem C the maximum principle in the Gamkrelidze form derived in [3], that is, conditions (8.47), (8.52), pp. 132–133. Maximum condition (8.52), in the notation and under the assumptions of [1], yields that
$$\begin{aligned} \psi (t)f'_u\left( y^0(t),u^0(t)\right) = \mu (t)\varGamma '_u\left( y^0(t),u^0(t)\right) ,\;\;\text{ for } \text{ a.a. }\; t\in \varDelta _2, \end{aligned}$$
where \(\varGamma (y,u):=-\left<\varPhi '(y),f(y,u)\right>\). The regularity condition stated on p. 413 implies that \(\alpha (t):=\varGamma '_u(y^0(t),u^0(t))[\varGamma '_u(y^0(t),u^0(t))]^*\ge \delta \) on \(\varDelta _2\) for some \(\delta >0\). Therefore, the function
$$\begin{aligned} \mu (t)=(\alpha (t))^{-1}\psi (t)f'_u\left( y^0(t),u^0(t)\right) \left[ \varGamma '_u(y^0(t),u^0(t))\right] ^* \end{aligned}$$
is Lipschitz continuous on \(\varDelta _2\) in view of the assumptions imposed in Sect. 12.
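For the reader's convenience, the algebra behind this formula can be made explicit; this is merely a restatement of the step above in the notation of [1], not an addition to [3]. Multiplying the stationarity relation on the right by \(\left[\varGamma'_u\right]^*\) gives

```latex
\begin{aligned}
\psi(t)f'_u\left(y^0(t),u^0(t)\right)\left[\varGamma'_u(y^0(t),u^0(t))\right]^*
&= \mu(t)\,\varGamma'_u\left(y^0(t),u^0(t)\right)\left[\varGamma'_u(y^0(t),u^0(t))\right]^* \\
&= \mu(t)\,\alpha(t).
\end{aligned}
```

Since \(\alpha(t)\ge \delta >0\) on \(\varDelta_2\), division by \(\alpha(t)\) yields the displayed expression for \(\mu(t)\), and its Lipschitz continuity follows from that of the factors involved.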

Now, define the new multiplier \(\psi _y(t):=\psi (t)+\mu (t)\varPhi '(y^0(t))\). Then, according to the considerations in [3], see pp. 131–132, and the multiplier variable change (8.46), we obtain that the multipliers \(\psi _y,\mu \) satisfy all the conditions of Theorem 14.1, except the conservation law (f). However, the conservation law easily follows from maximum condition (8.52) by the standard arguments provided in [2], Chapter 2.

The examination of the extended weak minimum (e.w.m.) is somewhat more cumbersome in view of a certain technicality of this minimum condition. In this type of minimum, the controls are allowed to compete not only with \(u^0(t)\), but also with its shifted values \(u^0(t\pm \alpha )\), and thus, at a point of discontinuity \(\tau \), with \(u^0(\tau +)\) and \(u^0(\tau -)\). This property is enough to derive the conservation law from the maximum condition associated with the e.w.m. Therefore, impose an extra constraint, an \(\varepsilon \)-tube about the minimizer \(u^0\), but one that reflects this type of minimum. For simplicity of notation, let \(u^0\) be piecewise constant: \(u^0(t)=c_j\), \(t\in \varDelta _j\). Denote by \(T_j:=\{u:|u-c_j|\le \varepsilon \}\), \(j=1,2,3\), the \(\varepsilon \)-tubes. Restrict the class of controls: for each \(u(\cdot )\) there exist \(\tau _j\in (t_j^0-\varepsilon ,t_j^0+\varepsilon )\) such that \(u(t)\in T_1\) for \(t\le \tau _1\), \(u(t)\in T_2\) for \(t\in (\tau _1,\tau _2)\), and \(u(t)\in T_3\) for \(t\ge \tau _2\). This is the e.w.m. Then, by applying the above arguments, we obtain the maximum condition in the tube \(|u-u^0(t)|\le \varepsilon \). By considering the obvious extra “shifting variations” at the points \(t^0_j\), one easily obtains that the maxima over the tubes \(T_j\) and \(T_{j+1}\) are equal at \(t^0_j\), \(j=1,2\). These shifting variations can be applied directly to Problem C, because feasible ones can be selected under the assumptions made on pp. 412–413, including regularity and the fact that the endpoints are free. Thus, the conservation law (f) still follows from the maximum condition. Condition (f) is also easy to derive from (8.52) by the well-known trick of changing the time variable, the so-called reduction to the v-problem, see, for example, [4], Sect. 2.5.1 (also used in [1], but in another context).
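For reference, the time change underlying the reduction to the v-problem can be sketched as follows; this is the standard construction along the lines of [4], Sect. 2.5.1, in illustrative notation not taken from [1]. One introduces a new independent variable \(s\in [0,1]\), treats the original time \(t\) as a state variable, and adds a nonnegative control \(v(s)\):

```latex
\frac{dt}{ds}=v(s), \qquad
\frac{dy}{ds}=v(s)\,f\bigl(y(s),u(s)\bigr), \qquad
v(s)\ge 0, \qquad \int_0^1 v(s)\,ds=T.
```

Variations of \(v\) redistribute time along the trajectory and, in particular, realize the shifting variations of the switching points, which is one way to obtain the conservation law at the junctions.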

It has been shown that the result residing in Theorem 14.1 can readily be derived from results already known in the literature, notably from [3]. Moreover, along with the stationarity conditions suggested by Theorem 14.1, the maximum condition has also been obtained. The same assertion can also be deduced from the results presented in [5], either directly as above, or by a simple reduction.

At the same time, in Remark 9.1, the authors claim that the novelty consists merely in the method of investigation. Therefore, our next step is to analyze this method.

2.2 Some Drawbacks of the Method

The basic method of investigation used in [1] consists in combining two approaches: the reduction to the v-problem (see Footnote 1) and the reduction to a mixed constrained problem. In addition, when proving the monotonicity of the Lagrange multiplier \(\mu \), an argument analogous to that of Chapter 6 in [2] is invoked, which consists in comparing with the trajectories lying in the interior of the domain.

This basic method has some drawbacks. First of all, the technique of reduction to a mixed constrained problem is obviously too restrictive, as important information on the admissible trajectories subject to state constraints is lost in this transition. As a consequence, this method does not allow one to obtain the full-fledged maximum condition. In this regard, it should be pointed out that, under the assumptions of [1], the main result residing in Theorem 14.1 can be strengthened. Namely:
  1. The maximum condition in the \(\varepsilon \)-tube about the minimizer can be obtained.
  2. It can be proved that the measure \(\mu \) is continuous on [0, T].
  3. Lipschitz continuity of \(u^0\) on \(\varDelta _2\) can be replaced by merely piecewise Lipschitz continuity.
Fact (1) has been proved in Sect. 2.1, while facts (2) and (3) follow readily from fact (1) by virtue of the considerations made, for example, in [6], see Lemmas 3.8 and 3.10.

These facts clearly reveal the drawbacks of the applied method. The following obvious question then arises: why apply such a method in the context of state constraints, if it leads to results that are even weaker than those readily obtainable from the classical results and approaches contained, notably, in [3]?

In view of these observations, the statement on p. 407, I quote, “However, this result is not, in general, valid in the case of extended weak minimality (the reason is that one cannot rely upon the maximality of Pontryagin function w.r.t. u, having in disposal only the stationarity of the extended Pontryagin function)” is wrong, because the maximum condition under the extended weak minimality holds true, see fact (1). At the same time, fact (2) gives a clear answer to the open question raised in Sect. 10.1, on p. 408, I quote, “In the general case, the question of presence or absence of atoms is open. We leave it for further research.” Therefore, the extremal constructed in the example of Sect. 10.2 does not yield the e.w.m. to (49). However, this fact remains unclear when reading this example.

The conclusion is that the method applied in [1] does not lead to the full set of necessary optimality conditions. At the same time, some restrictive assumptions on the data are required as a consequence of the arguments used in the proof, which should also be attributed to the drawbacks of the applied method. Next, we discuss these assumptions.

2.3 Restrictive Assumptions

The assumptions put forward in [1] are rather restrictive. It is assumed that the endpoints are free, while (66) is in force. Moreover, the extremal control \(u^0\) is continuous when the trajectory lies on the boundary of the state constraint set while the control constraints are inactive. These assumptions greatly simplify the problem and are hard to regard as realistic.

At the same time, the authors assume that the endpoints are strictly embedded within the state constraints. This, however, excludes from consideration the important and interesting case of the so-called non-degeneracy of the maximum principle. Regarding this issue, see the book [7] and the bibliography cited therein (in particular, the important sources [8, 9, 10, 11, 12, 13]).

2.4 Wrong Citations

In view of the arguments presented in Sect. 2.1, the following sentence in [1]: “In paper [5] and then in [6], it was shown, by a simple change of the adjoint variable, that one can pass from the conditions in the Dubovitskii-Milyutin form to the conditions in the form of Gamkrelidze, but the possibility of the inverse passage was not investigated.” is incorrect. Indeed, first of all, ref. [5] of [1] should clearly be replaced by ref. [3] of this note, which is 30 years older. The “inverse passage” is obvious and, in fact, has already been made here, in Sect. 2.1. The relation between the two distinct forms of necessary optimality conditions, including the relation to the original Gamkrelidze result, has been investigated in [3, 5].

2.5 Confusing Title?

The material of the previous section raises a question about the title of [1]. There is not a single assertion in [1] concerning any relation between the discussed forms of the necessary conditions. The title may thus be regarded as misleading, since it does not reflect the actual content of the paper.

3 A Question About the Proof

On p. 399, in formula (19), it is not clear how the authors derive that \(\sigma (\tau )\) is Lipschitz continuous on \(\varDelta _2\). The function f is assumed to be merely a \(C^1\)-function, so \(f'_u\) is continuous. However, the composition of a continuous function and a Lipschitz continuous function need not be Lipschitz continuous. Most probably, an extra smoothness assumption on f with respect to the u-variable is missing.
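To illustrate the point, here is a standard one-dimensional counterexample, not taken from [1] and added purely for illustration. Take \(g\) continuous but not Lipschitz at zero and \(h\) Lipschitz:

```latex
g(s)=\sqrt{s}, \qquad h(t)=t, \qquad (g\circ h)(t)=\sqrt{t},
\qquad
\frac{(g\circ h)(t)-(g\circ h)(0)}{t-0}=\frac{1}{\sqrt{t}}
\xrightarrow[\;t\to 0^{+}\;]{}\infty,
```

so the difference quotients of \(g\circ h\) are unbounded near zero and \(g\circ h\) fails to be Lipschitz on \([0,1]\), even though \(h\) is Lipschitz and \(g\) is continuous.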

4 Conclusions

This Forum Note has aimed at establishing an objective viewpoint on the results obtained in paper [1], recently published in this journal. These results have been constructively criticized, and some drawbacks of the method of investigation have been revealed.

Footnotes

  1. To be more accurate in formulations, the authors call it the “replication trick.” The trick of replication of variables amounts to a simple problem reformulation substantially based on the reduction to the v-problem. (The reduction to the v-problem is applied to each subarc. The same replication trick was used in ref. 8 of [1], see p. 966 therein.)


Acknowledgements

The author acknowledges the highly constructive criticism from some of the referees and the support from the Russian Foundation for Basic Research (RFBR) under project 18-29-03061.

References

  1. Dmitruk, A., Samylovskiy, I.: On the relation between two approaches to necessary optimality conditions in problems with state constraints. J. Optim. Theory Appl. 173(2), 391–420 (2017)
  2. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Interscience, New York (1962)
  3. Neustadt, L.W.: An abstract variational theory with applications to a broad class of optimization problems. II: Applications. SIAM J. Control 5(1), 90–137 (1967)
  4. Ioffe, A.D., Tikhomirov, V.M.: Theory of Extremal Problems. Elsevier, Amsterdam (1979)
  5. Arutyunov, A.V., Karamzin, D.Yu., Pereira, F.L.: The maximum principle for optimal control problems with state constraints by R.V. Gamkrelidze: revisited. J. Optim. Theory Appl. 149, 474–493 (2011)
  6. Arutyunov, A.V., Karamzin, D.Yu.: On some continuity properties of the measure Lagrange multiplier from the maximum principle for state constrained problems. SIAM J. Control Optim. 53(4), 2514–2540 (2015)
  7. Arutyunov, A.V.: Optimality Conditions: Abnormal and Degenerate Problems. Mathematics and Its Applications. Kluwer, Dordrecht (2000)
  8. Arutyunov, A.V., Tynyanskiy, N.T.: The maximum principle in a problem with phase constraints. Sov. J. Comput. Syst. Sci. 23, 28–35 (1985)
  9. Arutyunov, A.V.: On necessary optimality conditions in a problem with phase constraints. Sov. Math. Dokl. 31(1), 1033–1037 (1985)
  10. Dubovitskii, A.Ya., Dubovitskii, V.A.: Necessary conditions for strong minimum in optimal control problems with degeneration of endpoint and phase constraints. Usp. Mat. Nauk 40(2), 175–176 (1985)
  11. Arutyunov, A.V.: On the theory of the maximum principle for optimal control problems with state constraints. Doklady AN SSSR 304(1), 11–14 (1989)
  12. Vinter, R.B., Ferreira, M.M.A.: When is the maximum principle for state constrained problems nondegenerate? J. Math. Anal. Appl. 187, 438–467 (1994)
  13. Arutyunov, A.V., Aseev, S.M.: Investigation of the degeneracy phenomenon of the maximum principle for optimal control problems with state constraints. SIAM J. Control Optim. 35(3), 930–952 (1997)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, Moscow, Russia
