First- and Second-Order Optimality Conditions for Optimal Control Problems of State Constrained Integral Equations

Abstract

This paper deals with optimal control problems of integral equations with initial–final and running state constraints. The order of a running state constraint is defined in the setting of integral dynamics, and we work here with constraints of arbitrarily high order. First-order necessary conditions of optimality are given through a description of the set of Lagrange multipliers. Second-order necessary conditions are expressed as the nonnegativity of the supremum of a family of quadratic forms. Second-order sufficient conditions are also obtained in the case where these quadratic forms are of Legendre type.

References

  1. Volterra, V.: Variazioni e fluttuazioni del numero d’individui in specie animali conviventi. Memoria R. comitato talassografico italiano, vol. 131 (1927)

  2. Scudo, F.M., Ziegler, J.R.: The Golden Age of Theoretical Ecology, 1923–1940. Lecture Notes in Biomathematics, vol. 22. Springer, Berlin (1978). A collection of works by V. Volterra, V.A. Kostitzin, A.J. Lotka, and A.N. Kolmogoroff

  3. Volterra, V.: Leçons sur les équations intégrales et les équations intégro-différentielles. Collection de monographies sur la théorie des fonctions. Gauthier-Villars, Paris (1913). Leçons professées à la Faculté des sciences de Rome en 1910

  4. Kamien, M.I., Muller, E.: Optimal control with integral state equations. Rev. Econ. Stud. 43(3), 469–473 (1976)

  5. Vinokurov, V.R.: Optimal control of processes describable by integral equations. I. Izv. Vysš. Učebn. Zaved., Mat. 62(7), 21–33 (1967)

  6. Vinokurov, V.R.: Optimal control of processes described by integral equations. I. SIAM J. Control 7, 324–336 (1969)

  7. Vinokurov, V.R.: Optimal control of processes described by integral equations. II. SIAM J. Control 7, 337–345 (1969)

  8. Vinokurov, V.R.: Optimal control of processes described by integral equations. III. SIAM J. Control 7, 346–355 (1969)

  9. Neustadt, L.W., Warga, J.: Comments on the paper “Optimal control of processes described by integral equations. I” by V.R. Vinokurov. SIAM J. Control 8, 572 (1970)

  10. Bakke, V.L.: A maximum principle for an optimal control problem with integral constraints. J. Optim. Theory Appl. 13, 32–55 (1974)

  11. Carlson, D.A.: An elementary proof of the maximum principle for optimal control problems governed by a Volterra integral equation. J. Optim. Theory Appl. 54(1), 43–61 (1987)

  12. de la Vega, C.: Necessary conditions for optimal terminal time control problems governed by a Volterra integral equation. J. Optim. Theory Appl. 130(1), 79–93 (2006)

  13. Carlier, G., Tahraoui, R.: On some optimal control problems governed by a state equation with memory. ESAIM Control Optim. Calc. Var. 14(4), 725–743 (2008)

  14. Bonnans, J.F., de la Vega, C.: Optimal control of state constrained integral equations. Set-Valued Var. Anal. 18(3–4), 307–326 (2010)

  15. Kawasaki, H.: An envelope-like effect of infinitely many inequality constraints on second-order necessary conditions for minimization problems. Math. Program. 41(1, Ser. A), 73–96 (1988)

  16. Cominetti, R.: Metric regularity, tangent sets, and second-order optimality conditions. Appl. Math. Optim. 21(3), 265–287 (1990)

  17. Páles, Z., Zeidan, V.: Optimal control problems with set-valued control and state constraints. SIAM J. Optim. 14(2), 334–358 (2003)

  18. Bonnans, J.F., Hermant, A.: No-gap second-order optimality conditions for optimal control problems with a single state constraint and control. Math. Program. 117(1–2, Ser. B), 21–50 (2009)

  19. Bonnans, J.F., Hermant, A.: Second-order analysis for optimal control problems with pure state constraints and mixed control-state constraints. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 26(2), 561–598 (2009)

  20. Bonnans, J.F., Hermant, A.: Revisiting the analysis of optimal control problems with several state constraints. Control Cybern. 38(4A), 1021–1052 (2009)

  21. Bonnans, J.F., Shapiro, A.: Perturbation Analysis of Optimization Problems. Springer Series in Operations Research. Springer, New York (2000)

  22. Robinson, S.M.: Stability theory for systems of inequalities. II. Differentiable nonlinear systems. SIAM J. Numer. Anal. 13(4), 497–513 (1976)

  23. Zowe, J., Kurcyusz, S.: Regularity and stability for the mathematical programming problem in Banach spaces. Appl. Math. Optim. 5(1), 49–62 (1979)

  24. Dmitruk, A.V.: Jacobi type conditions for singular extremals. Control Cybern. 37(2), 285–306 (2008)

  25. Hestenes, M.R.: Applications of the theory of quadratic forms in Hilbert space to the calculus of variations. Pac. J. Math. 1, 525–581 (1951)

  26. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. Oxford Mathematical Monographs. Clarendon Press, Oxford University Press, New York (2000)

  27. Folland, G.B.: Real Analysis: Modern Techniques and Their Applications, 2nd edn. Pure and Applied Mathematics (New York). Wiley, New York (1999)

  28. Choquet, G.: Cours d’Analyse. Tome II: Topologie. Espaces topologiques et espaces métriques. Fonctions numériques. Espaces vectoriels topologiques. Masson et Cie, Paris (1969). Deuxième édition, revue et corrigée

Acknowledgements

We thank the two anonymous referees for their useful remarks. The research leading to these results has received funding from the EU 7th Framework Programme (FP7-PEOPLE-2010-ITN), under GA No. 264735-SADCO. The first and third authors also thank the MMSN (Modélisation Mathématique et Simulation Numérique) Chair (EADS, Inria, and Ecole Polytechnique) for its support.

Author information

Corresponding author

Correspondence to Constanza de la Vega.

Additional information

Communicated by Hans Joseph Pesch.

Appendix

A.1 Functions of Bounded Variation

The main reference here is [26], Sect. 3.2. Recall that with the definition of \(\mathit{BV} ([0,T];\mathbb {R}^{n*} )\) given at the beginning of Sect. 2.2, for \(h \in\mathit{BV} ([0,T];\mathbb{R}^{n*} )\), there exist \(h_{0_{-}}, h_{T_{+}} \in\mathbb{R}^{n*}\) such that (6) holds.

Lemma A.1

Let \(h \in\mathit{BV} ([0,T];\mathbb{R}^{n*} )\). Let \(h^l, h^r\) be defined for all t∈[0,T] by

$$ h^l_t := h_{0_-} + \mathrm{d}h \bigl( [0,t[ \bigr) , $$
(147)
$$ h^r_t := h_{0_-} + \mathrm{d}h \bigl( [0,t] \bigr) . $$
(148)

Then they are both in the same equivalence class of h, \(h^l\) is left continuous, \(h^r\) is right continuous, and, for all t∈[0,T],

$$ h^l_t = h_{T_+} - \mathrm{d}h \bigl( [t,T] \bigr) , $$
(149)
$$ h^r_t = h_{T_+} - \mathrm{d}h \bigl( \mathopen{]}t,T] \bigr) . $$
(150)

Proof

Theorem 3.28 in [26]. □

The identification between measures and functions of bounded variation that we mention at the beginning of Sect. 2.2 relies on the following:

Lemma A.2

The linear map

$$ (c, \mu) \longmapsto\bigl( h \colon t \mapsto c- \mu\bigl([t,T] \bigr) \bigr) $$
(151)

is an isomorphism between \(\mathbb{R}^{r*} \times\mathit{M} ( [0,T];\mathbb{R}^{r*} )\) and \(\mathit{BV} ( [0,T];\mathbb{R}^{r*} )\), whose inverse is

$$ h \longmapsto( h_{T_+}, \mathrm{d}h ) . $$
(152)

Proof

Theorem 3.30 in [26]. □
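As a purely illustrative numerical check (not part of the paper), the isomorphism (151)–(152) can be verified in the scalar case for an atomic measure, where \(h(t) = c - \mu([t,T])\) reduces to tail sums and \(\mathrm{d}h\) to the vector of jumps. All data below are randomly generated:

```python
import numpy as np

# Discrete check of Lemma A.2 (scalar case r = 1) with an atomic measure:
# (c, mu) -> h with h(t) = c - mu([t,T]); the inverse map returns
# (h_{T+}, dh) = (c, mu).
rng = np.random.default_rng(3)
T = 1.0
s = np.concatenate([[0.0], np.sort(rng.uniform(0.0, T, 4)), [T]])  # atoms
mu = rng.normal(size=s.size)        # masses mu({s_i})
c = rng.normal()

tails = np.cumsum(mu[::-1])[::-1]   # mu([s_i, T]) for each atom s_i
h = c - tails                       # the (left-continuous) image of (c, mu)

h_Tplus = c                         # h_{T+} = c - mu(empty set) = c
dh = np.diff(np.concatenate([h, [h_Tplus]]))   # jumps of h at the atoms
assert np.allclose(dh, mu)          # the inverse map recovers mu
```

The jump of h at an atom \(s_i\) is exactly \(\mu(\{s_i\})\), which is why `np.diff` of the tail-sum values (closed by \(h_{T_+}\)) reproduces the masses.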

Let us now prove Lemma 2.1:

Proof of Lemma 2.1

By (149), a solution in P of (11) is any \(p \in L^{1}([0,T];\mathbb{R}^{n*})\) such that, for a.e. t∈[0,T],

$$ p_t = D_{y_2}\varPhi[\varPsi](y_0,y_T) + \int_t^T D_y H[p](s,u_s,y_s)\,\mathrm{d}s + \int_{[t,T]} \mathrm{d} \eta_s \,g'(y_s) . $$
(153)

We define \(\varTheta\colon L^{1}([0,T];\mathbb{R}^{n*}) \rightarrow L^{1}([0,T];\mathbb{R}^{n*})\) by

$$ \varTheta(p)_t := D_{y_2}\varPhi[\varPsi](y_0,y_T) + \int_t^T D_y H[p](s,u_s,y_s)\,\mathrm{d}s + \int_{[t,T]} \mathrm{d} \eta_s \,g'(y_s) $$
(154)

for a.e. t∈[0,T], and we show that Θ has a unique fixed point. Let C>0 be such that \(\| D_y f \|_\infty, \| D^{2}_{y,\tau} f \|_{\infty} \le C\) along (u,y). We consider the family of equivalent norms on \(L^{1}([0,T];\mathbb{R}^{n*})\)

$$ \| v \|_{1,K} := \bigl\| t \mapsto e^{-K(T-t)}v(t) \bigr\|_1 \quad(K \ge0). $$
(155)

Then, for K big enough, Θ is a contraction on \(L^{1}([0,T];\mathbb{R}^{n*})\) for \(\|\cdot\|_{1,K}\); its unique fixed point is the unique solution of (11). □
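To illustrate the weighted-norm contraction argument numerically (a toy sketch, not the paper's equation (11)): for the scalar backward equation \(p_t = a + C\int_t^T p_s\,\mathrm{d}s\), whose explicit solution is \(p_t = a\,e^{C(T-t)}\), the map Θ is a contraction for \(\|\cdot\|_{1,K}\) as soon as K > C, and Picard iteration converges:

```python
import numpy as np

# Toy illustration of the contraction argument: Theta(p)_t = a + C*int_t^T p_s ds
# has the fixed point p_t = a * exp(C*(T - t)).
T, C, a = 1.0, 2.0, 1.0
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

def theta(p):
    # right Riemann tail sums approximating int_t^T p_s ds
    tail = np.concatenate([np.cumsum((C * p * dt)[::-1])[::-1][1:], [0.0]])
    return a + tail

p = np.zeros_like(t)
for _ in range(60):                 # Picard iteration towards the fixed point
    p = theta(p)

exact = a * np.exp(C * (T - t))
assert np.max(np.abs(p - exact)) < 2e-2   # agreement up to discretization error
```

Using \(\|\cdot\|_{1,K}\) with K large is what makes Θ a contraction on the whole interval at once, instead of requiring a smallness condition on T.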

Another useful result is the following integration by parts formula:

Lemma A.3

Let \(h, k \in \mathit{BV}([0,T])\). Then \(h^l \in L^1(\mathrm{d}k)\), \(k^r \in L^1(\mathrm{d}h)\), and

$$ \int_{[0,T]} h^l \,\mathrm{d}k+ \int _{[0,T]} k^r \,\mathrm{d}h = h_{T_+}k_{T_+}- h_{0_-}k_{0_-} . $$
(156)

Proof

Let Ω:={0≤y≤x≤T}. Since \(\chi_\Omega \in L^1(\mathrm{d}h\otimes\mathrm{d}k)\), we have by Fubini’s theorem (Theorem 7.27 in [27]) and Lemma A.1 that \(h^l \in L^1(\mathrm{d}k)\), \(k^r \in L^1(\mathrm{d}h)\), and we can compute dh⊗dk(Ω) in two different ways; equating the two resulting expressions gives (156). □
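A discrete sanity check of (156), illustrative only: for purely atomic dh and dk supported on common points, the values of \(h^l\) and \(k^r\) at the atoms are partial-sum vectors, and (156) reduces to Abel summation:

```python
import numpy as np

# (156) for purely atomic dh, dk on common atoms s_0 < ... < s_n
# (atoms at 0 and T allowed); random illustrative data.
rng = np.random.default_rng(0)
T = 1.0
s = np.concatenate([[0.0], np.sort(rng.uniform(0.0, T, 5)), [T]])
dh = rng.normal(size=s.size)            # jumps of h, i.e. the measure dh
dk = rng.normal(size=s.size)            # jumps of k
h0m, k0m = rng.normal(), rng.normal()   # h_{0-}, k_{0-}

hl = h0m + np.concatenate([[0.0], np.cumsum(dh)[:-1]])  # h^l: jumps strictly before t
kr = k0m + np.cumsum(dk)                                # k^r: jumps up to and including t
hTp, kTp = h0m + dh.sum(), k0m + dk.sum()               # h_{T+}, k_{T+}

lhs = np.dot(hl, dk) + np.dot(kr, dh)   # int h^l dk + int k^r dh
rhs = hTp * kTp - h0m * k0m
assert abs(lhs - rhs) < 1e-9
```

The "strictly before"/"up to and including" asymmetry between \(h^l\) and \(k^r\) is exactly what makes the cross terms tile all pairs of atoms once, mirroring the two ways of slicing Ω in the proof.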

A.2 The Hidden Use of Assumption (A3)

We use assumption (A3) to prove Lemma 4.3 (and then Lemma 4.1, …) through the following:

Lemma A.4

Recall that \(M_{t}:= D_{\tilde{u}}G^{(q)}_{I^{\varepsilon_{0}}(t)}(t,\bar {u}_{t},\bar{y}_{t},\bar {u},\bar{y}) \in M_{|I^{\varepsilon_{0}}_{t}|,m} (\mathbb{R} )\), t∈[0,T]. Then \(M_{t}M_{t}^{*}\) is invertible and \(| ( M_{t}M_{t}^{*} ) ^{-1}| \le\gamma^{-2}\) for all t∈[0,T].

Proof

For any \(x \in\mathbb{R}^{|I^{\varepsilon_{0}}(t)|}\),

$$ \bigl\langle M_t M_t^* x,x \bigr\rangle =\bigl|M_t^*x \bigr|^2 \ge\gamma^2 |x|^2. $$

Then \(M_{t} M_{t}^{*} x =0 \) implies x=0, and the invertibility follows.

Let \(y \in\mathbb{R}^{|I^{\varepsilon_{0}}(t)|}\) and \(x:= ( M_{t}M_{t}^{*} ) ^{-1}y\).

$$ |y||x| \ge\langle y,x \rangle= \bigl\langle M_tM_t^* x,x \bigr\rangle= \bigl|M_t^*x\bigr|^2 \ge\gamma^2 |x|^2 . $$

For y≠0, we have x≠0; dividing the previous inequality by |x|, we get

$$ \gamma^2 \bigl\vert\bigl( M_tM_t^* \bigr) ^{-1}y \bigr\vert\le|y| . $$

The result follows. □
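A quick numerical illustration of Lemma A.4, with a randomly generated full-row-rank matrix standing in for \(M_t\) (nothing here comes from the paper's data): γ is the smallest singular value of M, and for the spectral norm the bound \(|(MM^*)^{-1}| \le \gamma^{-2}\) is attained:

```python
import numpy as np

# Lemma A.4 with a random full-row-rank M in place of M_t.  Here
# gamma := sigma_min(M), so |M^* x| >= gamma |x| for all x, and the
# spectral norm of (M M^*)^{-1} equals gamma^{-2}.
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 5))                        # |I| x m, |I| <= m
gamma = np.linalg.svd(M, compute_uv=False).min()
assert gamma > 0                                   # (A3)-type surjectivity

Ginv = np.linalg.inv(M @ M.T)                      # (M M^*)^{-1} exists
assert np.isclose(np.linalg.norm(Ginv, 2), gamma ** -2)
```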

Before we prove Lemma 4.3, we define the truncation of an integrable function:

Definition A.1

Given any \(\phi \in L^s(J)\) (\(s\in[1,\infty[\) and J an interval), we call the truncation of ϕ the sequence \(\phi^k \in L^\infty(J)\) defined for \(k \in\mathbb{N}\) and a.a. t∈J by

$$ \phi^k_t := \begin{cases} \phi_t & \text{if }|\phi_t| \le k,\\[3pt] k \frac{\phi_t}{|\phi_t|} & \text{otherwise.} \end{cases} $$

Observe that \(\phi^{k} \xrightarrow[k \to \infty]{L^{s}} \phi\).
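As an illustrative check of this convergence, with an example function chosen for this sketch (not from the paper): \(\phi(t) = t^{-1/3}\) belongs to \(L^2(\mathopen{]}0,1])\) but is unbounded, and its truncations converge to it in \(L^2\):

```python
import numpy as np

# phi(t) = t^(-1/3) on ]0,1] is in L^2 but unbounded; its truncations
# phi^k = sign(phi) * min(|phi|, k) converge to phi in L^2 (s = 2 here).
t = np.linspace(1e-6, 1.0, 10**6)
dt = t[1] - t[0]
phi = t ** (-1.0 / 3.0)

def l2_err(k):
    return float(np.sqrt(np.sum((phi - np.clip(phi, -k, k)) ** 2) * dt))

errs = [l2_err(k) for k in (2, 5, 20)]   # for this phi the error scales like k**-0.5
assert errs[0] > errs[1] > errs[2] > 0
```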

Proof of Lemma 4.3

In the sequel, we omit \(z_0\) from the notation.

(i) Let \(v \in \mathit{V}_s\). We claim that v satisfies

$$ M_t v_t + N_t \bigl( z[v]_t,v,z[v] \bigr) = h_t \quad \text{for a.a. } t \in J_l $$
(157)

iff there exists \(w \in L^{s}(J_{l};\mathbb{R}^{m})\) such that (v,w) satisfies

$$ \left\{ \begin{array}{l} M_t w_t = 0 , \\[3pt] v_t = M_t^* \bigl( M_t M_t^* \bigr)^{-1} \bigl( h_t - N_t \bigl (z[v]_t,v,z[v]\bigr) \bigr) + w_t, \end{array} \right. \quad \text{for a.a. } t \in J_l. $$
(158)

Clearly, if (v,w) satisfies (158), then v satisfies (157). Conversely, suppose that v satisfies (157). With Lemma A.4 in mind, we define \(\alpha\in L^{s}(J_{l};\mathbb{R}^{|I_{l}|})\) and \(w\in L^{s}(J_{l};\mathbb{R}^{m})\) by

$$ \alpha_t := \bigl( M_t M_t^* \bigr)^{-1} M_t v_t , \qquad w_t := v_t - M_t^* \alpha_t , \quad \text{for a.a. } t \in J_l . $$

Then

$$ \left\{ \begin{array}{l} Mw =0 , \\[2pt] v = M^* \alpha+ w , \end{array} \right. \quad \text{on } J_l . $$
(159)

We derive from (157) and (159) that

$$ M_tM_t^* \alpha_t + N_t \bigl( z[v]_t,v,z[v] \bigr) = h_t \quad \text{for a.a. } t \in J_l . $$

Using again Lemma A.4 and (159), we get (158).

(ii) Given \((v,h,w) \in\mathit{V}_{s} \times L^{s}(J_{l};\mathbb{R}^{|I_{l}|}) \times L^{s}(J_{l};\mathbb{R}^{m})\), there exists a unique \(\tilde{v}\in\mathit{V}_{s}\) such that

$$ \left\{ \begin{array}{l} \tilde{v}= v \quad \text{on } J_0 \cup\cdots\cup J_{l-1} \cup J_{l+1} \cup\cdots\cup J_\kappa, \\[4pt] \tilde{v}_t = M_t^* \bigl( M_t M_t^* \bigr)^{-1} \bigl( h_t - N_t \bigl(z[\tilde{v}]_t,\tilde{v},z[\tilde{v}]\bigr) \bigr) + w_t \quad \text{for a.a. } t \in J_l, \end{array} \right. $$
(160)

Indeed, one can define a mapping from V s to V s , using the right-hand side of (160). Then it can be shown, as in the proof of Lemma 2.1, that this mapping is a contraction for a well-suited norm, using Lemmas 2.2, 2.3, and A.4. The existence and uniqueness follow. Moreover, a version of the contraction mapping theorem with parameter (see, e.g., Théorème 21-5 in [28]) shows that \(\tilde{v}\) depends continuously on (v,h,w).

(iii) Let us prove (a): let \((\bar{h},v) \in L^{s}(J_{l};\mathbb{R}^{|I_{l}|}) \times\mathit {V}_{s}\), and let w:=0. Let \(\tilde{v}\in\mathit{V}_{s}\) be the unique solution of (160) for \((v,\bar{h},w)\). Then \(\tilde{v}\) is a solution of (112) by (i).

(iv) Let us prove (b): let \((\bar{h},\bar{v}) \in L^{s}(J_{l};\mathbb {R}^{|I_{l}|}) \times \mathit{V}_{s}\) as in the statement, and let \(\bar{w}\) be given by (i). Then \(\bar{v}\) is the unique solution of (160) for \((\bar{v},\bar {h},\bar{w})\).

Let \((h^{k},v^{k}) \in L^{\infty}(J_{l}; \mathbb{R}^{|I_{l}|}) \times\mathit {U}\), \(k \in \mathbb{N}\), be such that \((h^{k},v^{k})\xrightarrow[]{L^{s}\times L^{s}} (\bar{h},\bar{v})\), and let \(w^{k} \in L^{\infty}(J_{l};\mathbb{R}^{m})\), \(k \in\mathbb{N}\), be the truncation of \(\bar{w}\). It is obvious from Definition A.1 that

$$ M_t w^k_t = 0 \quad \text{for a.a. } t \in J_l . $$

Let \(\tilde{v}^{k} \in\mathit{U}\) be the unique solution of (160) for (v k,h k,w k), \(k \in\mathbb{N}\). Then, by the uniqueness and continuity in (ii),

$$ \tilde{v}^k \xrightarrow[]{L^s} \bar{v}, $$
(161)

and \(\tilde{v}^{k}\) is a solution of (114) by (i). □
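The decomposition in step (i) can be illustrated numerically in the linear case without the memory term \(N_t\) (an assumption made purely for this sketch): any solution of \(M_t v_t = h_t\) splits as the minimal-norm solution \(M_t^*(M_tM_t^*)^{-1}h_t\) plus a kernel component \(w_t\) with \(M_t w_t = 0\):

```python
import numpy as np

# Sketch of the decomposition (158) at one fixed time t, dropping the
# memory term N_t.  M plays M_t; (A3) corresponds to full row rank.
rng = np.random.default_rng(2)
M = rng.normal(size=(3, 5))          # |I_l| x m with |I_l| <= m
h = rng.normal(size=3)
Ginv = np.linalg.inv(M @ M.T)        # (M M^*)^{-1}, cf. Lemma A.4

# build some solution v of M v = h: minimal-norm part + kernel component
z = rng.normal(size=5)
v = M.T @ Ginv @ h + (z - M.T @ Ginv @ (M @ z))

# recover the decomposition of (158): v = M^*(M M^*)^{-1} h + w, M w = 0
w = v - M.T @ Ginv @ (M @ v)
assert np.allclose(M @ w, 0.0)                  # w lies in ker M
assert np.allclose(M.T @ Ginv @ h + w, v)       # v is rebuilt as in (158)
assert np.allclose(M @ v, h)                    # and solves M v = h
```

In the proof, the same splitting is done time-by-time along \(J_l\), with \(h_t\) replaced by \(h_t - N_t(z[v]_t,v,z[v])\).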

We finish this section with an example where assumption (A3) can be satisfied or not.

Example A.1

We consider the scalar Example 2.1.2 with q=1 and f(t,s)=f(2ts):

$$ y_t = \int_0^t f(2t-s) u_s \,\mathrm{d}s, \quad t \in[0,T], $$
(162)

where f is a continuous function that is not a polynomial, and the trajectory is \((\bar{u},\bar{y})=(0,0)\). Then

$$ M_t = f(t) \in M_{1,1}(\mathbb{R}), $$

and (A3) is satisfied iff

$$ f(t) \ne0 \quad\forall t \in[0,T] . $$

A.3 Approximations in W q,2

We will prove in this section Lemmas 4.2 and 4.6. First, we give the statement and the proof of a general result:

Lemma A.5

Let \(\hat{x}\in W^{q,2}([0,1])\). For j=0,…,q−1, we denote

$$ \left\{ \begin{array}{l} \hat{\alpha}_j := \hat{x}^{(j)}(0) , \\[5pt] \hat{\beta}_j := \hat{x}^{(j)}(1) , \end{array} \right. $$
(163)

and we consider \(\alpha^{k}, \beta^{k} \in\mathbb{R}^{q}\), \(k\in\mathbb{N}\), such that \((\alpha_{j}^{k},\beta_{j}^{k}) \longrightarrow(\hat{\alpha}_{j},\hat{\beta}_{j})\) for each j. Then there exists \(x^k \in W^{q,\infty}([0,1])\), \(k \in\mathbb{N}\), such that \(x^{k} \xrightarrow[]{W^{q,2}} \hat{x}\) and, for j=0,…,q−1,

$$ \left\{ \begin{array}{l} \bigl(x^k\bigr)^{(j)}(0) = \alpha^k_j, \\[5pt] \bigl(x^k\bigr)^{(j)}(1) = \beta^k_j. \end{array} \right. $$
(164)

Proof

Given uL 2([0,1]), we define x u W q,2([0,1]) by

$$ x_u(t):= \int_0^t \int _0^{s_1} \cdots\int_0^{s_{q-1}} u(s_q) \,\mathrm{d}s_q \,\mathrm{d}s_{q-1} \cdots\,\mathrm{d} s_1 ,\quad t \in[0,1]. $$

Then \(x_{u}^{(q)} = u\) and, for j=0,…,q−1,

$$ x_u^{(j)}(1) = \gamma_j \quad \Longleftrightarrow \quad \langle a_j ,u \rangle_{L^2} = \gamma_j, $$

where a j C([0,1]) is defined by

$$ a_j(t) := \frac{(1-t)^{q-1-j}}{(q-1-j)!} , \quad t \in[0,1]. $$

Indeed, a straightforward induction shows that

$$x_u^{(j)}(1) = \int_0^1 \int_0^{s_{j+1}} \cdots\int_0^{s_{q-1}} u(s_q) \,\mathrm{d}s_q \,\mathrm{d}s_{q-1} \cdots\,\mathrm{d} s_{j+1}. $$

Then integrations by parts give the expression of the \(a_j\). Note that the \(a_j\) (j=0,…,q−1) are linearly independent in \(L^2([0,1])\). Then

$$ \begin{array}{rccc} A \colon& \mathbb{R}^q&\longrightarrow&L^2 \bigl([0,1]\bigr) \\[6pt] & \begin{pmatrix} \lambda_0\\ \vdots\\ \lambda_{q-1} \end{pmatrix} &\longmapsto& \displaystyle\sum_{j=0}^{q-1} \lambda_j a_j \end{array} $$

is such that \(A^{*}A\) is invertible (here \(A^{*}\) is the adjoint operator), and

$$ x_u^{(j)}(1) = \gamma_j, \quad j = 0 , \ldots,q-1 \quad \Longleftrightarrow\quad A^*u= (\gamma _0,\ldots, \gamma_{q-1})^T . $$
(165)

Going back to the lemma, let \(\hat{u}:= \hat{x}^{(q)} \in L^{2}([0,1])\). Observe that

$$ \hat{x}(t) = \sum_{l=0}^{q-1} \frac{\hat{\alpha}_l}{l !}t^l + x_{\hat{u}}(t), \quad t \in[0,1], $$

and that \(A^{*} \hat{u}= (\hat{\gamma}_{0},\ldots,\hat{\gamma }_{q-1})^{T}\), where

$$ \hat{\gamma}_j := \hat{\beta}_j - \sum _{l=j}^{q-1} \frac{\hat{\alpha}_l}{(l-j)!} , \quad j = 0,\ldots,q-1 . $$

Then we consider, for \(k \in\mathbb{N}\), the truncation (Definition A.1) \(\hat{u}^{k} \in L^{\infty}([0,1])\) of \(\hat{u}\), the vector \(\gamma^{k}\) defined by \(\gamma^{k}_{j} := \beta^{k}_{j} - \sum_{l=j}^{q-1} \frac{\alpha^{k}_{l}}{(l-j)!}\), and

$$ u^k := \hat{u}^k + A \bigl( A^{*}A \bigr)^{-1} \bigl( \gamma^{k} - A^{*} \hat{u}^{k} \bigr) , $$
(166)
$$ x^k(t) := \sum_{l=0}^{q-1} \frac{\alpha^{k}_{l}}{l!} t^{l} + x_{u^k}(t) , \quad t \in [0,1] . $$
(167)

It is clear that \(u^k \in L^{\infty}([0,1])\) (by the definition of A); then \(x^k \in W^{q,\infty}([0,1])\). Since \(A^{*}u^k=\gamma^k\) and in view of (165), (166), and (167), (164) is satisfied. Finally, \(\gamma^{k}_{j} \longrightarrow\hat{\gamma}_{j}\) for j=0,…,q−1; then \(\gamma^{k} \longrightarrow A^{*} \hat{u}\) and \(u^{k} \longrightarrow\hat{u}\). □
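The identity \(x_u^{(j)}(1) = \langle a_j, u\rangle_{L^2}\) can be checked numerically. This is an illustrative test with q = 3 and u = cos, chosen here because its iterated integrals are explicit:

```python
import numpy as np
from math import factorial

# Check x_u^{(j)}(1) = <a_j, u>_{L^2} with a_j(t) = (1-t)^(q-1-j)/(q-1-j)!
# for q = 3 and u(t) = cos(t).
q = 3
t = np.linspace(0.0, 1.0, 10**5 + 1)
dt = t[1] - t[0]
u = np.cos(t)

def trap(f):                          # trapezoidal rule on [0,1]
    return float(np.sum(0.5 * (f[1:] + f[:-1])) * dt)

# iterated integrals of cos: x_u(t) = t - sin t, x_u'(t) = 1 - cos t,
# x_u''(t) = sin t, hence at t = 1:
exact = [1 - np.sin(1), 1 - np.cos(1), np.sin(1)]    # j = 0, 1, 2

for j in range(q):
    a_j = (1 - t) ** (q - 1 - j) / factorial(q - 1 - j)
    assert abs(trap(a_j * u) - exact[j]) < 1e-6
```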

We can also prove the following:

Lemma A.6

Let \(\hat{x}\in W^{q,2}([0,1])\) be such that \(\hat{x}^{(j)}(0)=0\) for j=0,…,q−1. Then for δ>0, there exists \(x^{\delta}\in W^{q,\infty}([0,1])\) such that \(x^{\delta}\xrightarrow[\delta\to0]{W^{q,2}} \hat{x}\) and

$$ x^\delta= 0 \quad \text{\textit{on} } [0,\delta] . $$
(168)

Proof

We consider \(u^{\delta}\in L^{\infty}([0,1])\), δ>0, such that \(u^{\delta}=0\) on [0,δ] and \(u^{\delta}\xrightarrow[\delta\to0]{L^{2}}\hat{u}:= \hat{x}^{(q)}\). Then we define \(x^{\delta}:=x_{u^{\delta}}\) (see the previous proof). □

Now the proof of Lemma 4.6 is straightforward.

Proof of Lemma 4.6

We observe that \(\bar{b}_{i}= 0\) on \(\mathit{I}_{i}\) implies that \(\bar{b}_{i}^{(j)}=0\) at the endpoints of \(\mathit{I}_{i}\) for j=0,…,q_i−1 (note that, with definition (68), if one component of \(\mathit{I}_{i}\) is a singleton, then q_i=1). The conclusion then follows from Lemma A.6 applied on each component of \(\mathit{I}_{i}^{\varepsilon}\setminus\mathit{I}_{i}\). □

Finally, we use Lemma A.5 to prove Lemma 4.2.

Proof of Lemma 4.2

In the sequel, we omit \(z_0\) from the notation. We define a connection in \(W^{q,\infty}\) between \(\psi_1\) at \(t_1\) and \(\psi_2\) at \(t_2\) as any \(\psi \in W^{q,\infty}([t_1,t_2])\) such that

$$ \left\{ \begin{array}{l} \psi^{(j)}(t_1) = \psi_1^{(j)}(t_1) , \\[6pt] \psi^{(j)}(t_2) = \psi_2^{(j)}(t_2) , \end{array} \right. \quad j=0,\ldots,q-1 . $$

(a) We define \(\tilde{b}_{i}\) on \([0,t_0]\) by \(\tilde{b}_{i} := g_{i}'(\bar{y})z[v]\), i=1,…,r. We need to explain how we define \(\tilde{b}_{i}\) on \(\mathopen{]}t_0,T]\), using \(\bar{b}_{i}\) and connections, to have \(\tilde{b}_{i} \in W^{q_{i},s}([0,T])\) and \(\tilde{b}_{i}=\bar{b}_{i}\) on each component of \(\mathit{I}_{i}^{\varepsilon}\cap\mathopen{]}t_{0},T]\). The construction differs slightly depending on whether \(t_{0} \in\mathit{I}_{i}^{\varepsilon}\) or not, i.e., whether \(i \in I^{\varepsilon}_{t_{0}}\) or not. Note that, by the definitions of \(\varepsilon_0\) and \(t_0\), \(I^{\varepsilon}_{t}\) is constant for t in a neighborhood of \(t_0\). We now distinguish the two cases just mentioned:

  1. \(i \in I^{\varepsilon}_{t_{0}}\): We denote by \([t_1,t_2]\) the connected component of \(\mathit{I}_{i}^{\varepsilon}\) such that \(t_0 \in\mathopen{]}t_1,t_2\mathclose{[}\). We derive from (105) that \(\tilde{b}_{i}= \bar{b}_{i}\) on \([t_1,t_0]\). Then we define \(\tilde{b}_{i} := \bar{b}_{i}\) on \(\mathopen{]}t_0,t_2]\).

    If \(\mathit{I}_{i}^{\varepsilon}\) has another component in \(\mathopen{]}t_2,T]\), we denote the first one by \([t_{1}',t_{2}']\). Let ψ be a connection in \(W^{q_{i},\infty}\) between \(\tilde{b}_{i}\) at \(t_2\) and \(\bar{b}_{i}\) at \(t_{1}'\). We define \(\tilde{b}_{i}:= \psi\) on \(\mathopen{]}t_{2},t_{1}'\mathclose{[}\), \(\tilde{b}_{i}:=\bar{b}_{i}\) on \([t_{1}',t_{2}']\), and so forth on \(\mathopen{]}t_{2}',T\mathclose{]}\).

    If \(\mathit{I}_{i}^{\varepsilon}\) has no more component, we define \(\tilde{b}_{i}\) on what is left as a connection in \(W^{q_{i},\infty}\) between \(\bar{b}_{i}\) at \(t_2\) and \(g_{i}'(\bar{y})z[v]\) at T.

  2. \(i \not\in I^{\varepsilon}_{t_{0}}\): If \(\mathit{I}_{i}^{\varepsilon}\) has a component in \([t_0,T]\), we denote the first one by \([t_1,t_2]\). Note that \(t_1 - t_0 \ge \varepsilon_0 - \varepsilon > 0\). We consider a connection in \(W^{q_{i},\infty}\) between \(\tilde{b}_{i}\) at \(t_0\) and \(\bar{b}_{i}\) at \(t_1\) and continue as in case 1.

    If \(\mathit{I}_{i}^{\varepsilon}\) has no component in \([t_0,T]\), we do as in case 1.

(b) For all \(k \in\mathbb{N}\), we apply (a) to \((b^k,v^k)\), and we get \(\tilde{b}^{k}\). We just need to explain how we can get, for i=1,…,r,

$$ \tilde{b}^k_i \xrightarrow[k \to\infty]{W^{q_i,2}} g_i'(\bar{y})z[\bar{v}]. $$

By construction we have

$$ \begin{array}{l@{\quad}l} \text{on } [0,t_0],& \tilde{b}^k_i =g_i'(\bar{y})z \bigl[v^k\bigr] \longrightarrow g_i'(\bar{y})z[\bar{v}] ,\\[5pt] \text{on } \mathit{I}_i^\varepsilon,&\tilde{b}^k_i = b^k_i \longrightarrow \bar{b}_i = g_i'(\bar{y})z[\bar{v}] . \end{array} $$

Then it is enough to show that every connection which appears when we apply (a) to \((b^k,v^k)\), for example, \(\psi_{i}^{k} \in W^{q_{i},\infty}([t_{1},t_{2}])\), can be chosen in such a way that

$$ \psi_i^k \longrightarrow g_i'( \bar{y})z[\bar{v}] \quad \text{on } [t_1,t_2]. $$

This is possible by Lemma A.5. □

Cite this article

Bonnans, J.F., de la Vega, C. & Dupuis, X. First- and Second-Order Optimality Conditions for Optimal Control Problems of State Constrained Integral Equations. J Optim Theory Appl 159, 1–40 (2013). https://doi.org/10.1007/s10957-013-0299-3


Keywords

  • Optimal control
  • Integral equations
  • State constraints
  • Second-order optimality conditions