
On a Few Questions Regarding the Study of State-Constrained Problems in Optimal Control

Journal of Optimization Theory and Applications

Abstract

This article is focused on the investigation of necessary optimality conditions in the form of Pontryagin’s maximum principle for optimal control problems with state constraints. A number of results on this topic, which refine the existing ones, are presented. These results concern the nondegenerate maximum principle under weakened controllability assumptions, as well as the continuity of the measure Lagrange multiplier.

Notes

  1. One such example is also given here, in Sect. 3, Problem (10).

  2. Herein, the fraction \(\frac{1}{\beta }\) is treated as \(\infty \) when \(\beta =0\). Thus, the 0-nondegenerate maximum principle is precisely the conventional nondegenerate maximum principle, [24, 27].

  3. Let us clarify that, in [21], even Hölder continuity of the measure multiplier was shown; however, this was under an extra smoothness assumption w.r.t. the u-variable. Here, we intend to use the same smoothness class for the data as in [1].

References

  1. Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., Mishchenko, E.F.: The Mathematical Theory of Optimal Processes. Interscience, New York (1962)

  2. Gamkrelidze, R.V.: Optimum-rate processes with bounded phase coordinates. Dokl. Akad. Nauk SSSR 125, 475–478 (1959)

  3. Warga, J.: Minimizing variational curves restricted to a preassigned set. Trans. Am. Math. Soc. 112, 432–455 (1964)

  4. Dubovitskii, A.Y., Milyutin, A.A.: Extremum problems in the presence of restrictions. Zh. Vychisl. Mat. Mat. Fiz. 5(3), 395–453 (1965); U.S.S.R. Comput. Math. Math. Phys. 5(3), 1–80 (1965)

  5. Neustadt, L.W.: An abstract variational theory with applications to a broad class of optimization problems. II: Applications. SIAM J. Control 5, 90–137 (1967)

  6. Arutyunov, A.V., Tynyanskiy, N.T.: The maximum principle in a problem with phase constraints. Sov. J. Comput. Syst. Sci. 23, 28–35 (1985)

  7. Arutyunov, A.V.: On necessary optimality conditions in a problem with phase constraints. Sov. Math. Dokl. 31, 1 (1985)

  8. Dubovitskii, A.Y., Dubovitskii, V.A.: Necessary conditions for strong minimum in optimal control problems with degeneration of endpoint and phase constraints. Usp. Mat. Nauk 40, 2 (1985)

  9. Arutyunov, A.V.: Perturbations of extremal problems with constraints and necessary optimality conditions. J. Sov. Math. 54, 6 (1991)

  10. Arutyunov, A.V., Blagodatskikh, V.I.: Maximum-principle for differential inclusions with space constraints, Number theory, algebra, analysis and their applications. Collection of articles. Dedicated to the centenary of Ivan Matveevich Vinogradov, Trudy Mat. Inst. Steklov., vol. 200, Nauka, Moscow (1991); Proc. Steklov Inst. Math., 200 (1993)

  11. Arutyunov, A.V., Aseev, S.M., Blagodatskikh, V.I.: First-order necessary conditions in the problem of optimal control of a differential inclusion with phase constraints. Math. Sb. 184, 6 (1993)

  12. Vinter, R.B., Ferreira, M.M.A.: When is the maximum principle for state constrained problems nondegenerate? J. Math. Anal. Appl. 187, 438–467 (1994)

  13. Arutyunov, A.V., Aseev, S.M.: State constraints in optimal control. The degeneracy phenomenon. Syst. Control Lett. 26, 267–273 (1995)

  14. Arutyunov, A.V., Aseev, S.M.: Investigation of the degeneracy phenomenon of the maximum principle for optimal control problems with state constraints. SIAM J. Control Optim. 35, 3 (1997)

  15. Ferreira, M.M.A., Fontes, F.A.C.C., Vinter, R.B.: Non-degenerate necessary conditions for nonconvex optimal control problems with state constraints. J. Math. Anal. Appl. 233, 116–129 (1999)

  16. Hager, W.W.: Lipschitz continuity for constrained processes. SIAM J. Control Optim. 17, 321–338 (1979)

  17. Maurer, H.: Differential stability in optimal control problems. Appl. Math. Optim. 5(1), 283–295 (1979)

  18. Afanas’ev, A.P., Dikusar, V.V., Milyutin, A.A., Chukanov, S.A.: Necessary condition in optimal control. Nauka, Moscow (1990). [in Russian]

  19. Galbraith, G.N., Vinter, R.B.: Lipschitz continuity of optimal controls for state constrained problems. SIAM J. Control Optim. 42, 5 (2003)

  20. Arutyunov, A.V.: Properties of the Lagrange multipliers in the Pontryagin maximum principle for optimal control problems with state constraints. Differ. Equ. 48, 12 (2012)

  21. Arutyunov, A.V., Karamzin, D.Y.: On some continuity properties of the measure Lagrange multiplier from the maximum principle for state constrained problems. SIAM J. Control Optim. 53, 4 (2015)

  22. Halkin, H.: A satisfactory treatment of equality and operator constraints in the Dubovitskii–Milyutin optimization formalism. J. Optim. Theory Appl. 6, 2 (1970)

  23. Ioffe, A.D., Tikhomirov, V.M.: Theory of Extremal Problems. North-Holland, Amsterdam (1979)

  24. Arutyunov, A.V.: Optimality Conditions: Abnormal and Degenerate Problems. Mathematics and Its Applications. Kluwer Academic Publishers, Dordrecht (2000)

  25. Vinter, R.B.: Optimal Control. Birkhauser, Boston (2000)

  26. Milyutin, A.A.: Maximum Principle in a General Optimal Control Problem. Fizmatlit, Moscow (2001). [in Russian]

  27. Arutyunov, A.V., Karamzin, D.Y., Pereira, F.L.: The maximum principle for optimal control problems with state constraints by R.V. Gamkrelidze: revisited. J. Optim. Theory Appl. 149, 474–493 (2011)

  28. Vinter, R.B., Papas, G.: A maximum principle for nonsmooth optimal control problems with state constraints. J. Math. Anal. Appl. 89, 212–232 (1982)

  29. Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York (1983)

  30. Ioffe, A.D.: Necessary conditions in nonsmooth optimization. Math. Oper. Res. 9, 159–189 (1984)

  31. Colombo, G., Henrion, R., Hoang, N.D., Mordukhovich, B.S.: Discrete approximations of a controlled sweeping process. Set-Valued Var. Anal. 23(1), 69–86 (2015)

  32. Colombo, G., Henrion, R., Nguyen, D.H., Mordukhovich, B.S.: Optimal control of the sweeping process over polyhedral controlled sets. J. Differ. Equ. 260, 4 (2016)

  33. Cao, T.H., Mordukhovich, B.S.: Optimality conditions for a controlled sweeping process with applications to the crowd motion model. Discret. Contin. Dyn. Syst. Ser. B 22, 2 (2017)

  34. Bryson, A.E., Ho, Y.-C.: Applied Optimal Control. Taylor & Francis, London (1969)

  35. Betts, J.T., Huffman, W.P.: Path-constrained trajectory optimization using sparse sequential quadratic programming. J. Guid. Control Dyn. 16(1), 59–68 (1993)

  36. Buskens, C., Maurer, H.: SQP-methods for solving optimal control problems with control and state constraints: adjoint variables, sensitivity analysis and real-time control. J. Comput. Appl. Math. 120, 85–108 (2000)

  37. Haberkorn, T., Trelat, E.: Convergence results for smooth regularizations of hybrid nonlinear optimal control problems. SIAM J. Control Optim. 49, 1498–1522 (2011)

  38. Dang, T.P., Diveev, A.I., Sofronova, E.A.: A Problem of Identification Control Synthesis for Mobile Robot by the Network Operator Method. Proceedings of the 11th IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 2413–2418 (2016)

  39. Zeiaee, A., Soltani-Zarrin, R., Fontes, F.A.C.C., Langari, R.: Constrained directions method for stabilization of mobile robots with input and state constraints. Proceedings of the American Control Conference, pp. 3706–3711 (2017)

  40. Mordukhovich, B.S.: Maximum principle in the problem of time optimal response with nonsmooth constraints. J. Appl. Math. Mech. 40(6), 960–969 (1976)

  41. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation. Volume II. Applications. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Berlin (2006)

  42. Arutyunov, A.V., Vinter, R.B.: A simple ’finite approximations’ proof of the Pontryagin maximum principle under reduced differentiability hypotheses. Set-Valued Anal. 12(1–2), 5–24 (2004)

  43. Arutyunov, A.V., Karamzin, D.Y., Pereira, F.L.: Investigation of controllability and regularity conditions for state constrained problems. IFAC-PapersOnLine 50(1), 6295–6302 (2017)

  44. Arutyunov, A.V., Karamzin, D.Y.: Properties of extremals in optimal control problems with state constraints. Differ. Equ. 52, 11 (2016)

Acknowledgements

We kindly thank Professor A.V. Arutyunov for useful comments and fruitful discussions. The support of the Russian Foundation for Basic Research under Projects 16-31-60005 and 18-29-03061, the support of the FCT R&D Unit SYSTEC—POCI-01-0145-FEDER-006933/SYSTEC funded by ERDF | COMPETE2020 | FCT/MEC | PT2020 extension to 2018, of Project STRIDE NORTE-01-0145-FEDER-000033 funded by ERDF | NORTE2020, and of FCT Project POCI-01-01-0145-FEDER-032485 funded by ERDF | COMPETE2020 | POCI is also gratefully acknowledged.

Author information

Correspondence to Dmitry Karamzin.

Appendix: Proof of Theorem 3.1 in the Nonconvex Case

In this section, we demonstrate how some of the extra assumptions imposed in § 3, namely linearity and convexity, can be dispensed with in the proof of the \(\beta \)-nondegenerate maximum principle. More precisely, let us omit Hypothesis 2 and consider Case C\(_0\), in which, for simplicity, we assume that the endpoint set has the reduced form \(S=\{x_0^*\}\times S_1\). Then, Hypothesis 3 can also be removed. (Some of its assumptions are removed automatically, while others, such as \(g'_x(x_0^*,0)\ne 0\), are removed by virtue of the proof.) Therefore, in what follows we prove Theorem 3.1 under Hypothesis 1 alone, albeit for the reduced set S.

Let \((x^*,u^*)\) be an optimal process in (1). It is not restrictive to assume that \(J(u^*)=0\). Take \(\varepsilon \in \,]0,\delta [\). Consider the set-valued map

$$\begin{aligned} U_\varepsilon (t):=\left\{ \begin{array}{ll}\{u^*(t)\},&{}\quad \text{ if }\; t\in [0,\varepsilon ],\\ U,&{} \quad \text{ otherwise },\end{array}\right. \end{aligned}$$

Denote by \(\mathcal{M}\) the set of all control functions \(u:[0,1]\rightarrow \mathbb {R}^m\) such that \(u(t)\in U_\varepsilon (t)\) for a.a. t, and \(p\in S\), where \(p=(x_0^*,x_1)\), \(x_1=x(1)\), and the trajectory \(x(\cdot )\) is such that \(|x(t)|\le c:=\Vert x^*\Vert _C+1\) for all t, and \(\dot{x}(t)= f(x(t),u(t),t)\), \(x(0)=x_0^*\). The set \(\mathcal{M}\) is closed (since U is compact), and therefore, it is a complete metric space w.r.t. the norm in \(\mathbb {L}_1([0,1];\mathbb {R}^m)\).

Let \(a,b\) be nonnegative numbers. Consider the function

$$\begin{aligned} \varDelta (a,b):=\left\{ \begin{array}{ll}\displaystyle a b^{-3}, &{} \quad b>0,\\ 1,\;a>0, &{} \quad b=0,\\ 0, &{} \quad a=b=0.\end{array}\right. \end{aligned}$$

Note that the function \(\varDelta \) is lower semicontinuous. This function will serve as a certain penalty function in the method applied below.
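
For completeness, let us verify the lower semicontinuity at the only nontrivial points, namely those with \(b=0\). Let \((a_k,b_k)\rightarrow (a,0)\) with \(a>0\). For large k, either \(b_k>0\), and then \(\varDelta (a_k,b_k)=a_k b_k^{-3}\rightarrow \infty \), or \(b_k=0\), and then \(\varDelta (a_k,b_k)=1\). In both cases,

$$\begin{aligned} \liminf _{k\rightarrow \infty }\varDelta (a_k,b_k)\ge 1=\varDelta (a,0). \end{aligned}$$

If \(a=0\), then \(\liminf _{k}\varDelta (a_k,b_k)\ge 0=\varDelta (0,0)\), simply because \(\varDelta \ge 0\). At the points with \(b>0\), the function \(\varDelta \) is continuous.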

Take a positive integer i. Define \(\varepsilon _i=i^{-1}\), \(J_i(u) =(J(u)+\varepsilon _i)^+\). Consider the following functional over \(\mathcal{M}\):

$$\begin{aligned} F_i(u):=J_i(u)+\varDelta \Biggl (\int _0^1(\chi ^+)^2\mathrm{d}t,J_i(u)\Biggr ), \end{aligned}$$

Here, \(\dot{\chi }(t)=\varGamma (x(t),u(t),t)\), \(\chi (0)=0\). The functional \(F_i\) is lower semicontinuous and positive over \(\mathcal{M}\). (To see this, use \(\chi (t)\equiv g(x(t),t)\).)
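
Let us also sketch, in brief, why \(F_i\) is lower semicontinuous (a standard argument, under the integral form of the cost in (1)). Let \(u_k\rightarrow u\) in \(\mathbb {L}_1\) with \(u_k,u\in \mathcal{M}\), and pass to a subsequence which realizes \(\liminf _k F_i(u_k)\) and converges a.e. Then, in view of Hypothesis 1, the corresponding arcs satisfy \(x_k\rightrightarrows x\) and \(\chi _k\rightrightarrows \chi \), whence \(J_i(u_k)\rightarrow J_i(u)\) and \(\int _0^1(\chi _k^+)^2\mathrm{d}t\rightarrow \int _0^1(\chi ^+)^2\mathrm{d}t\). Since \(\varDelta \) is lower semicontinuous,

$$\begin{aligned} \liminf _{k\rightarrow \infty }F_i(u_k)\ge J_i(u)+\varDelta \Biggl (\int _0^1(\chi ^+)^2\mathrm{d}t,J_i(u)\Biggr )=F_i(u). \end{aligned}$$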

Consider the following problem

$$\begin{aligned} \text{ Minimize }\;\;F_i(u)\;\;\text{ subject } \text{ to }\;\; u\in \mathcal{M}. \end{aligned}$$

Note that \(F_i(u^*)=\varepsilon _i\). By applying the Ekeland variational principle, we find that for all i there exists a control function \(u_i\in \mathcal{M}\) such that

$$\begin{aligned}&F_i(u_i)\le F_i(u^*)=\varepsilon _i, \end{aligned}$$
(19)
$$\begin{aligned}&\int _0^1|u_i(t)-u^*(t)|\mathrm{d}t\le \sqrt{\varepsilon _i}, \end{aligned}$$
(20)

and \(u_i\) is the unique solution to the following problem:

$$\begin{aligned} \text{ Minimize }\;\;F_i(u)+\sqrt{\varepsilon _i}\int _0^1|u-u_i(t)|\mathrm{d}t\;\;\text{ subject } \text{ to }\;\;u\in \mathcal{M}. \end{aligned}$$
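
For the reader's convenience, let us recall the form of the Ekeland variational principle used here: if \((\mathcal{M},d)\) is a complete metric space and \(F:\mathcal{M}\rightarrow \mathbb {R}\) is lower semicontinuous, bounded below, and such that \(F(u^*)\le \inf _\mathcal{M}F+\epsilon \), then, for any \(\lambda >0\), there exists \(u_\epsilon \in \mathcal{M}\) with \(F(u_\epsilon )\le F(u^*)\), \(d(u_\epsilon ,u^*)\le \lambda \), and such that \(u_\epsilon \) is the unique minimizer of \(u\mapsto F(u)+\frac{\epsilon }{\lambda }d(u,u_\epsilon )\) over \(\mathcal{M}\). Here, it is applied with

$$\begin{aligned} F=F_i,\quad d(u,v)=\int _0^1|u(t)-v(t)|\mathrm{d}t,\quad \epsilon =\varepsilon _i,\quad \lambda =\sqrt{\varepsilon _i}, \end{aligned}$$

which is legitimate since \(F_i\ge 0\) and \(F_i(u^*)=\varepsilon _i\le \inf _\mathcal{M}F_i+\varepsilon _i\). This yields precisely (19), (20) and the perturbed problem above, as \(\epsilon /\lambda =\sqrt{\varepsilon _i}\).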

If \(J_i(u_i)=0\), then \(J(u_i)\le -\varepsilon _i<0=J(u^*)\), and hence, in view of the optimality of \(u^*\) in (1), the state constraints in (1) must be violated by the process corresponding to \(u_i\). Therefore, by the definition of \(\varDelta \), we have \(F_i(u_i)\ge 1\). This, however, contradicts (19). Hence, \(J_i(u_i)>0\), and then the control \(u_i\) is optimal in the following problem:

$$\begin{aligned} \begin{array}{rcl} \text{ Minimize } &{} &{} \displaystyle \int _0^1 f_0(x,u,t)\mathrm{d}t+\int _0^1 (\chi ^+)^2z^{-3}\mathrm{d}t+\sqrt{\varepsilon _i}\int _0^1|u-u_i(t)|\mathrm{d}t, \\ \text{ subject } \text{ to } &{} &{} \dot{x}=f(x,u,t),\\ &{} &{} \dot{y}=f_0(x,u,t),\\ &{} &{} \dot{z}=0,\\ &{} &{} \dot{\chi }=\varGamma (x,u,t)\;\;\text{ for } \text{ a.a. }\,t\in [0,1],\\ &{} &{} x_0=x_0^*,\;x_1\in S_1,\;y_0=0,\;z_0=y_1+\varepsilon _i,\;\chi _0=0,\\ &{} &{} u(t)\in U_\varepsilon (t)\;\text{ for } \text{ a.a. }\,t,\;|x(t)|\le c,\; z(t)>0\;\forall \,t\in [0,1]. \end{array} \end{aligned}$$
(21)
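
Let us briefly explain this reformulation (a short accounting, under the natural reading that \(J(u)=\int _0^1 f_0(x,u,t)\mathrm{d}t\) is the cost in (1)). Along any admissible process of (21), one has \(y_1=J(u)\) and, since \(\dot{z}=0\),

$$\begin{aligned} z(t)\equiv z_0=y_1+\varepsilon _i=J(u)+\varepsilon _i, \end{aligned}$$

so the constraint \(z(t)>0\) expresses \(J_i(u)=J(u)+\varepsilon _i>0\), and the cost functional of (21) equals

$$\begin{aligned} J(u)+\varDelta \Biggl (\int _0^1(\chi ^+)^2\mathrm{d}t,J_i(u)\Biggr )+\sqrt{\varepsilon _i}\int _0^1|u-u_i(t)|\mathrm{d}t =F_i(u)-\varepsilon _i+\sqrt{\varepsilon _i}\int _0^1|u-u_i(t)|\mathrm{d}t. \end{aligned}$$

Up to the constant \(\varepsilon _i\), this is the functional which \(u_i\) minimizes over \(\mathcal{M}\), which explains the optimality of \(u_i\) in (21); in particular, \(z_i\equiv J_i(u_i)\).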

Let \(x_i,y_i,z_i,\chi _i\) be the arcs corresponding to \(u_i(\cdot )\). From (20), after extracting a subsequence, it follows that \(u_i(t)\rightarrow u^*(t)\) for a.a. t and, thereby, \(x_i(t)\rightrightarrows x^*(t)\) uniformly on [0, 1]. Then, \(|x_i(t)|<c\) for all t and all sufficiently large i, so the state constraint \(|x(t)|\le c\) in Problem (21) is everywhere inactive. Let us apply the nonsmooth version of the maximum principle from [41]. There exist a number \(\lambda _i\ge 0\) and absolutely continuous functions \(\psi _i,\sigma _i,\mu _i\), which do not vanish simultaneously, such that

$$\begin{aligned}&\displaystyle \dot{\psi }_i(t)= -\bar{H}'_x(x_i(t),u_i(t),\psi _i(t),\mu _i(t),\lambda _i-\kappa _i,t), \nonumber \\&\displaystyle \sigma _i(t)= 3\lambda _i \int _{t}^1 (\chi _i^+)^2z_i^{-4}\mathrm{d}s, \end{aligned}$$
(22)
$$\begin{aligned}&\displaystyle \mu _i(t)= 2\lambda _i\int _t^1 \chi _i^+z_i^{-3}\mathrm{d}s,\nonumber \\&\qquad \psi _i(1)\in -N_{S_1}(x_i(1)), \end{aligned}$$
(23)
$$\begin{aligned}&\quad \displaystyle \max _{u\in U}\{\bar{H}(x_i(t),u,\psi _i(t),\mu _i(t),\lambda _i-\kappa _i,t)-\lambda _i\sqrt{\varepsilon _i}|u-u_i(t)|\} \nonumber \\&\quad =\bar{H}(x_i(t),u_i(t),\psi _i(t),\mu _i(t),\lambda _i-\kappa _i,t)\;\;\text{ for } \text{ a.a. }\,t\in [0,1], \end{aligned}$$
(24)

where \(\kappa _i:=\sigma _i(0)\).

Note that \(\mu _i\) is obviously constant on \([0,\varepsilon ]\). This fact implies nontriviality of the multipliers over the interval \([\varepsilon ,1]\). Therefore, by normalizing, we have

$$\begin{aligned} \lambda _i+|\psi _i(\varepsilon )|+ \mu _i(\varepsilon )=1. \end{aligned}$$

Then, the functions \(\mu _i\) are uniformly bounded. Bearing this fact in mind, let us demonstrate that \(\kappa _i\rightarrow 0\). Indeed,

$$\begin{aligned} \kappa _i=\sigma _i(0)=\int _0^1 3\lambda _i\frac{(\chi _i^+)^2}{z_i^4}\mathrm{d}t =-\int _0^1 \frac{3\chi _i^+}{2z_i}\mathrm{d}\mu _i. \end{aligned}$$

Therefore, it is sufficient to show that \(\chi _i^+z_i^{-1}\rightrightarrows 0\).
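
Let us spell out this step. Since \(\mathrm{d}\mu _i=-2\lambda _i\chi _i^+z_i^{-3}\mathrm{d}t\), the last equality above holds, and, because \(\mu _i\) is decreasing with \(\mu _i(1)=0\) and \(\mu _i(0)=\mu _i(\varepsilon )\le 1\) by the normalization,

$$\begin{aligned} |\kappa _i|\le \frac{3}{2}\sup _{t\in [0,1]}\frac{\chi _i^+(t)}{z_i}\bigl (\mu _i(0)-\mu _i(1)\bigr )\le \frac{3}{2}\sup _{t\in [0,1]}\frac{\chi _i^+(t)}{z_i}. \end{aligned}$$

Thus, the uniform convergence \(\chi _i^+z_i^{-1}\rightrightarrows 0\) indeed implies \(\kappa _i\rightarrow 0\).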

Assume the contrary. Then there exist a number \(\varepsilon _0>0\) and points \(t_i\in [0,1]\) such that \(\chi _i^+(t_i)\ge 2\varepsilon _0 z_i\) for all i. The functions \(\chi _i^+(\cdot )\) are Lipschitz continuous uniformly w.r.t. i, since the derivatives \(\dot{\chi }_i(t)\) are uniformly bounded by construction in view of Hypothesis 1. Therefore, there exists a number \(c_0>0\) (one may take \(c_0=\varepsilon _0/L\), where L is the common Lipschitz constant) such that

$$\begin{aligned} \chi _i^+(t)\ge \varepsilon _0 z_i,\;\;\forall \, t\in O_i=[t_i-c_0 z_i,t_i+c_0 z_i]. \end{aligned}$$

Whence, for all i, it follows that

$$\begin{aligned} \int _0^1(\chi _i^+(t))^2 z_i^{-3}\mathrm{d}t \ge \int _{[0,1]\cap O_i}(\chi _i^+(t))^2 z_i^{-3}\mathrm{d}t \ge \int _{[0,1]\cap O_i}\varepsilon _0^2 z_i^{-1}\mathrm{d}t\ge c_0\varepsilon _0^2>0. \end{aligned}$$

However, this estimate contradicts (19): indeed, the left-hand side does not exceed \(F_i(u_i)\le \varepsilon _i\rightarrow 0\), while the right-hand side is a positive constant. Therefore, \(\chi _i^+z_i^{-1}\rightrightarrows 0\), and thus, we have proved that \(\kappa _i\rightarrow 0\) as \(i\rightarrow \infty \).

Now, by using standard compactness arguments, which involve the Arzelà–Ascoli and Helly theorems, we find multipliers \(\lambda ,\psi ,\mu \), defined over [0, 1], such that \(\mu (\cdot )\) is decreasing, \(\mu (1)=0\), and, as \(i\rightarrow \infty \), \(\lambda _i\rightarrow \lambda \), \(\psi _i(t)\rightrightarrows \psi (t)\) uniformly on [0, 1], while \(\mu _i(t)\rightarrow \mu (t)\) at every point t at which the function \(\mu \) is continuous. By passing to the limit in (22)–(24), and using that \(\kappa _i\rightarrow 0\), it is easy to see that these multipliers satisfy the adjoint equation, the transversality condition \(\psi (1)\in -N_{S_1}(x_1^*)\), and the maximum condition in which the set U is replaced with \(U_\varepsilon (t)\).

The important step is to ensure the following nontriviality condition

$$\begin{aligned} \lambda +|\psi (\varepsilon )|+ \mu (\varepsilon +)=1. \end{aligned}$$

This condition is valid by virtue of the same arguments as those proposed in [43]. (See the proof of Theorem 1 therein.) Indeed, in view of the controllability assumption imposed in Definition 3.1, in the proximity of the point \(t=\varepsilon \), the conventional controllability (that is, 0-controllability) condition is satisfied. Therefore, if we assume that \(\lambda =0\), \(\psi =0\), and \(\mu (t)=0\) for all \(t\in \,]\varepsilon ,1]\), then the maximum condition considered near this point easily leads to a contradiction.

The obtained multipliers depend on \(\varepsilon \), that is, \(\lambda =\lambda _\varepsilon \), \(\psi =\psi _\varepsilon \), and \(\mu =\mu _\varepsilon \). Next, it is necessary to pass to the limit as \(\varepsilon \rightarrow 0\) in the obtained conditions. Take \(\beta \in \,]\alpha ,1[\). At this stage, we normalize the multipliers as follows

$$\begin{aligned} \lambda _\varepsilon + |\psi _\varepsilon (\varepsilon )| + \left( \int _\varepsilon ^1 |\mu _\varepsilon (t)|^{1/\beta }\mathrm{d}t\right) ^\beta = 1. \end{aligned}$$

The subsequent arguments are the same as in the proof of Theorem 3.1. By using the weak sequential compactness of the unit ball in \(\mathbb {L}_{1/\beta }([0,1];\mathbb {R})\), after extracting a subsequence, \(\mu _\varepsilon {\mathop {\rightarrow }\limits ^{w}}\mu \in \mathbb {L}_{1/\beta }([0,1];\mathbb {R})\), where we have redefined, without relabeling, \(\mu _\varepsilon (t)=0\) on \([0,\varepsilon ]\). By virtue of the Arzelà–Ascoli theorem and standard estimates, after extraction of a subsequence, \(\lambda _\varepsilon \rightarrow \lambda \), \(\psi _\varepsilon \rightrightarrows \psi \in \mathbb {W}_{1,\beta }([0,1];\mathbb {R}^n)\), where \(\psi _\varepsilon \) is also redefined accordingly. It is clear that the constructed \(\lambda ,\psi ,\mu \) satisfy all the conditions of the maximum principle. The obtained set of Lagrange multipliers \(\lambda ,\psi ,\mu \) is nontrivial by virtue of the same arguments as earlier, though with “\(\ge \)” in (8) instead of “\(=\)”. The proof is complete.

Cite this article

Karamzin, D., Pereira, F.L. On a Few Questions Regarding the Study of State-Constrained Problems in Optimal Control. J Optim Theory Appl 180, 235–255 (2019). https://doi.org/10.1007/s10957-018-1394-2
