Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions

In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are established in the considered case.


Introduction
It is well known (see [39, 44-46]) that many problems from mechanics (elasticity theory, semipermeability, electrostatics, hydraulics, fluid flow), economics and so on can be modeled by subdifferential inclusions or hemivariational inequalities. The latter are generalizations of partial differential equations (PDEs) and variational inequalities [26] in the sense that, besides the physical phenomena leading to classical PDEs, one has to take into consideration some nonlinear, nonmonotone and possibly multivalued laws (e.g. stress-strain, reaction-displacement, generalized forces-velocities, etc.) which can be expressed by means of the Clarke subdifferential.
In this paper, which is in a sense a continuation of [22,25], we deal with control problems for systems governed by second order evolution inclusions which are equivalent to second order hemivariational inequalities. More precisely, we consider the problem

(CP): minimize F(u, y) := F^(1)(y) + F^(2)(u) + F^(3)(y^0) + F^(4)(y^1)

subject to

(P): y''(t) + (A y')(t) + (B y)(t) + ι^* ∂J_1(ιy'(t)) + ι^* ∂J_2(ιy(t)) ∋ f(t) + (Cu)(t) for a.e. t ∈ (0, T), y(0) = y^0, y'(0) = y^1,

where T > 0, A and B are the Nemytskii operators corresponding, respectively, to a pseudomonotone operator A and a linear one B, J_1 and J_2 are locally Lipschitz superpotentials defined on a reflexive Banach space Z (∂ denotes their Clarke subdifferentials), ι is a linear, continuous and compact operator and C is an operator acting on the space U. The control is given as u = (u, y^0, y^1) ∈ U ⊂ U × V × H, and the cost functionals F^(i), i = 1, ..., 4, are typically in integral form (for details and definitions of the spaces V, H, V and W_pq, see Sect. 3). Our goal is twofold. First, we prove a new existence result for Problem (P) with the sum of two superpotentials, dependent on the velocity and the displacement, respectively. Second, we investigate the sensitivity of optimal solutions to the control problem (CP); i.e., we are interested in the behavior of optimal solutions under perturbations of the system (state relations; e.g. coefficients in the inclusion or parameters in the superpotentials are perturbed) as well as under perturbations of the cost functional (e.g. integrands depending on parameters).
Our approach is based on the sequential Γ-convergence (epi-convergence in the terminology of [3]) theory (see [7,13,14,16,48]) in the sensitivity part, while for the existence of optimal solutions we use the direct method. The nonemptiness of the solution set for (P) follows from the theory of pseudomonotone operators (cf. [24,51]) and it can be obtained for fairly general classes of operators. However, for sensitivity results, we restrict ourselves to special classes of maximal monotone operators for which the notion of PG-convergence can be applied.
The basic properties assuring the convergence of minimal values and minimizers of the perturbed control problems to the minimal value and to a minimizer, respectively, of the unperturbed problem are: on the one hand, the Painlevé-Kuratowski convergence (we use the name Kuratowski convergence in the sequel, for consistency with our previous works) of solution sets, which can be expressed as Γ-convergence of their indicator functions, and on the other hand, some "complementary Γ-convergence" of the cost functionals.
The paper is organized as follows. In Sect. 2 we present an abstract setting for the multivalued operators and subdifferential inclusions as well as the sensitivity analysis, which is based on the Γ-convergence theory. Moreover, we recall some useful definitions and results from the theory of the Clarke subdifferential and the theory of pseudomonotone operators. In Sect. 3 we recall the definition and properties of PG-convergence. Next, we present the control problem formulation and provide a priori estimates as well as the existence result for the underlying Problem (P). Furthermore, we analyze the perturbed problems, provide results on the Kuratowski convergence of solution sets, and formulate the sensitivity result. In Sect. 4 we discuss the Γ-convergence of cost functionals and present the main result on the sensitivity of optimal solutions. In Sect. 5 we give examples of concrete operators and functionals which satisfy the abstract assumptions of the preceding sections.

Abstract Scheme
In this subsection we recall the abstract scheme based on the Γ-convergence theory which we use to study the stability of optimal control problems.
We consider a control system governed by a relation R which links the state y ∈ Y to the control variable u ∈ U, Y and U being the topological spaces of states and controls, respectively. Generally, the relation R can be chosen as an ordinary differential equation, a partial differential equation or a partial differential inclusion. It is also possible to consider variational inequalities (VI) or hemivariational inequalities (HVI).
The optimal control problem under consideration reads as follows: find (u*, y*) ∈ R which minimizes a cost functional F over the set R of admissible control-state pairs,

(CP)_R: minimize {F(u, y) : (u, y) ∈ R},

where R = {(u, y) ∈ U × Y : y ∈ S(u)} and the solution map S : U → 2^Y assigns to a control u the set of all states y satisfying the relation R. The set of optimal solutions to (CP)_R is denoted by R*, i.e., R* = {(u*, y*) ∈ R : F(u*, y*) = inf {F(u, y) : (u, y) ∈ R}}.

The sensitivity (stability) is understood as a "nice-continuous" asymptotic behavior of optimal solutions to the perturbed problems, i.e. perturbed state relations R_k and perturbed cost functionals F_k. So we consider the sequence of optimal control problems (CP)_{R_k} indexed by k ∈ N̄ = N ∪ {∞}, where the index k ∈ N indicates "a perturbation" and k = ∞ corresponds to the unperturbed original problem. We are looking for conditions which assure the following stability results:

(i) m_k → m_∞, where m_k denotes the minimal value of (CP)_{R_k};
(ii) K(U × Y)-lim sup R*_k ⊂ R*_∞,

where K(U × Y)-lim sup stands for the sequential Kuratowski upper limit of sets. It is worth recalling (see e.g. Proposition 4.3 of [16]) that (ii) is equivalent to the following condition: if {k_n} is an increasing sequence in N, (u*_{k_n}, y*_{k_n}) ∈ R*_{k_n}, u*_{k_n} converges to u*_∞ in U and y*_{k_n} converges to y*_∞ in Y, then (u*_∞, y*_∞) ∈ R*_∞. In order to establish the conditions (i) and (ii), first we reformulate the problem (CP)_{R_k} as the unconstrained optimization problem

minimize {F_k(u, y) + δ_{R_k}(u, y) : (u, y) ∈ U × Y},

where δ_R denotes the indicator function of the set R, i.e., δ_R(u, y) = 0 if (u, y) ∈ R and δ_R(u, y) = +∞ otherwise, and then we apply an approach based on the theory of Γ-convergence (epi-convergence), cf. [7,13,48] and the references therein.

Sequential Γ-convergence
For the convenience of the reader, in this subsection we recall some material from the Γ-convergence theory, the generalized Clarke subdifferential and the theory of multivalued operators of monotone type. We quote here the definition of sequential Γ-convergence for functions of two variables. The case of one variable follows easily by omitting the other. For the case of functions of many variables we refer to Buttazzo and Dal Maso [7].
Let U and Y be two topological spaces. For u ∈ U and y ∈ Y we put σ_u := {{u_k} ⊂ U : u_k → u} and σ_y := {{y_k} ⊂ Y : y_k → y}. Given F_k : U × Y → R̄ = R ∪ {±∞}, k ∈ N, we define

(j) Γ_seq(U^-, Y^-)-lim inf F_k(u, y) := inf_{σ_u} inf_{σ_y} lim inf_k F_k(u_k, y_k) and Γ_seq(U^-, Y^-)-lim sup F_k(u, y) := inf_{σ_u} inf_{σ_y} lim sup_k F_k(u_k, y_k),

and if both these extended real numbers are equal, we say that there exists Γ_seq(U^-, Y^-)-lim F_k(u, y). Similarly, for other combinations of signs (+ and − denote sup and inf over the classes of sequences, respectively) we have, e.g.,

(jj) Γ_seq(U^+, Y^-)-lim inf F_k(u, y) := sup_{σ_u} inf_{σ_y} lim inf_k F_k(u_k, y_k) and Γ_seq(U^+, Y^-)-lim sup F_k(u, y) := sup_{σ_u} inf_{σ_y} lim sup_k F_k(u_k, y_k),

and if they are equal, there exists Γ_seq(U^+, Y^-)-lim F_k(u, y). In turn, if the numbers in (j) and (jj) are equal, we say that there exists the Γ-limit with no sign over the variable u, and then we write simply Γ_seq(U, Y^-)-lim F_k(u, y). The general definition of a topological Γ-limit is given by De Giorgi and Franzoni in [14], where one can also find the following theorem concerning the variational convergence of minimal values and minimizers.
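In the one-variable case the definition above reduces to the familiar two-condition characterization of the sequential Γ-limit (stated here as the standard reformulation, not as a quotation of the paper):

```latex
% One-variable sequential Gamma-limit: F = Gamma_seq(X^-)-lim F_k holds iff
\[
  \Gamma_{\mathrm{seq}}(X^-)\text{-}\lim_{k\to\infty} F_k = F
  \iff
  \begin{cases}
    \forall\, x_k \to x: \quad F(x) \le \displaystyle\liminf_{k\to\infty} F_k(x_k)
      & \text{(liminf inequality)},\\[4pt]
    \forall\, x \ \exists\, x_k \to x: \quad F(x) = \displaystyle\lim_{k\to\infty} F_k(x_k)
      & \text{(recovery sequence)}.
  \end{cases}
\]
```

Both conditions together are equivalent to the equality of the Γ-lower and Γ-upper limits appearing in (j).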

Theorem 1 Let X be a topological space and let f_k : X → R̄, k ∈ N̄, be such that f_∞ = Γ_seq(X^-)-lim f_k. If x_k ∈ X satisfy f_k(x_k) ≤ inf_X f_k + ε_k with ε_k ↓ 0 (in this case x_k is called "quasioptimal") and x_k → x_∞ in X, then f_∞(x_∞) = min_X f_∞ and f_k(x_k) → f_∞(x_∞). In the sequel we put N̄ = N ∪ {∞}.

Remark 2 If the topological space X satisfies the first axiom of countability, then the sequential Γ_seq(X^-)-convergence coincides (see Proposition 8.1 of [12]) with the topological Γ(X^-)-convergence of De Giorgi and Franzoni [14]. Moreover, the sequential Γ-limit operation is not additive, i.e. it is not enough to know Γ-lim F_k and Γ-lim δ_{R_k} in order to calculate Γ-lim(F_k + δ_{R_k}), cf. Example 6.18 in [12].
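The mechanism of Theorem 1 can be illustrated numerically. For the illustrative functions (not taken from the paper) f_k(x) = (x − 1/k)^2 + 1/k on X = R, which converge uniformly (hence also in the Γ_seq(X^-) sense) to f_∞(x) = x^2, the minimizers 1/k and minimal values 1/k tend, respectively, to the minimizer 0 and minimal value 0 of f_∞:

```python
import numpy as np

def f_k(k, x):
    # perturbed functional: unique minimizer 1/k, minimal value 1/k
    return (x - 1.0 / k) ** 2 + 1.0 / k

def f_inf(x):
    # limit functional: unique minimizer 0, minimal value 0
    return x ** 2

# minimize each f_k on a fine grid over [-2, 2]
x = np.linspace(-2.0, 2.0, 40001)
minimizers, min_values = [], []
for k in (1, 10, 100, 1000):
    values = f_k(k, x)
    i = int(np.argmin(values))
    minimizers.append(float(x[i]))
    min_values.append(float(values[i]))

# minimizers 1/k -> 0 = argmin f_inf and minimal values 1/k -> 0 = min f_inf
print(minimizers)
print(min_values)
```

Uniform convergence is used here only to have a simply verifiable example; Γ-convergence is in general much weaker and does not imply pointwise convergence.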
In order to calculate the Γ-limit of the sum of two functions we use the following two theorems.

Theorem 3 (Buttazzo and Dal Maso [7]) If F = Γ_seq(U^-, Y^-)-lim F_k and G_k converges to G continuously (i.e. G_k(u_k, y_k) → G(u, y) whenever u_k → u in U and y_k → y in Y), then there exists Γ_seq(U^-, Y^-)-lim (F_k + G_k) = F + G.

Theorem 4 If there exist Γ_seq-limits F^(1) of {F^(1)_k} and F^(2) of {F^(2)_k} in the sense of Theorem 3, then there exists also the Γ_seq-limit of {F^(1)_k + F^(2)_k}, and we have Γ_seq-lim (F^(1)_k + F^(2)_k) = F^(1) + F^(2). Note that Theorem 4 follows directly from Theorem 3. Moreover, due to Theorem 3, the convergences (i) and (ii) follow from the following result (see also Propositions 4.1 and 4.5 in [16]).

Proposition 5 Let (ũ_k, ỹ_k) be optimal or "quasioptimal" solutions to the problems (CP)_{R_k} such that the conditions (1) (the complementary Γ-convergence of the cost functionals) and (2) (the Kuratowski convergence of the solution sets) hold; then the stability results (i) and (ii) are satisfied.

Remark 6
The condition (2) of Proposition 5 is equivalent (cf. Propositions 4.3 and 4.4 of [16]) to the sequential Kuratowski convergence of the solution sets, while the condition (1) (the complementary Γ-convergence), roughly speaking, means a continuous convergence of the cost functionals with respect to y and Γ(U^-) convergence with respect to u. Note that for a sequence of operators G_k : X → Y, where X and Y are topological spaces, we say that G_k converges continuously (sequentially) to G if G_k(x_k) → G(x) in Y whenever x_k → x in X. We also recall that for a sequence of sets {A_n}_{n∈N} in the topological space X, by K(X)-lim inf A_n we mean the set of all limits of sequences {x_n} such that x_n ∈ A_n, while the set K(X)-lim sup A_n consists of all limits of subsequences {x_{n_k}} such that x_{n_k} ∈ A_{n_k} for some increasing sequence {n_k} ⊂ {n}.
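In symbols, the Kuratowski limits recalled above read:

```latex
% Sequential Kuratowski lower and upper limits of a sequence of sets A_n in X
\[
  K(X)\text{-}\liminf_{n\to\infty} A_n
  = \{\, x \in X : \exists\, x_n \in A_n \ \text{with}\ x_n \to x \,\},
\]
\[
  K(X)\text{-}\limsup_{n\to\infty} A_n
  = \{\, x \in X : \exists\, n_1 < n_2 < \cdots,\ \exists\, x_{n_k} \in A_{n_k}
     \ \text{with}\ x_{n_k} \to x \,\}.
\]
% The sequence A_n converges to A in the Kuratowski sense when both limits equal A.
```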

Clarke Subdifferential
Given a locally Lipschitz function J : Z → R, where Z is a Banach space, we recall (see [10]) the definitions of the generalized directional derivative and the generalized gradient of Clarke. The generalized directional derivative of J at a point u ∈ Z in the direction v ∈ Z, denoted by J^0(u; v), is defined by J^0(u; v) = lim sup_{w → u, λ ↓ 0} (J(w + λv) − J(w))/λ. The generalized gradient of J at u, denoted by ∂J(u), is a subset of the dual space Z* given by ∂J(u) = {ζ ∈ Z* : J^0(u; v) ≥ ⟨ζ, v⟩_{Z*×Z} for all v ∈ Z}. For the properties of the Clarke subdifferential, see for example [10].
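A standard one-dimensional example (not from the paper) shows how ∂J captures nonsmoothness: for J(u) = |u| on Z = R,

```latex
% Clarke subdifferential of the absolute value on R:
% J^0(0; v) = limsup_{w -> 0, lambda -> 0+} (|w + lambda v| - |w|)/lambda = |v|,
% hence
\[
  \partial J(u) =
  \begin{cases}
    \{-1\}, & u < 0,\\
    [-1, 1], & u = 0,\\
    \{+1\}, & u > 0,
  \end{cases}
  \qquad J(u) = |u| .
\]
```

At the kink the gradient becomes the whole interval [−1, 1]; this multivaluedness is exactly what is exploited in the subdifferential inclusions above.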

Multivalued Operators
We give the basic definitions for multivalued operators and then we quote the main surjectivity result for the operator classes under consideration (see e.g. [24,39,47]). Let Y be a reflexive Banach space and Y * be its dual space and let T : Y → 2 Y * be a multivalued operator.
We say that T is bounded if it maps bounded subsets of Y into bounded subsets of Y*, and coercive if there exists a function c : R_+ → R with c(r) → +∞ as r → +∞ such that ⟨y*, y⟩ ≥ c(‖y‖)‖y‖ for all y ∈ Y and y* ∈ Ty. Let L : D(L) ⊂ Y → Y* be a linear, densely defined and maximal monotone operator.
We say that T is L-generalized pseudomonotone if the following conditions hold: (a) for every y ∈ Y, Ty is a nonempty, convex and weakly compact subset of Y*; (b) T is upper semicontinuous from every finite dimensional subspace of Y into Y* endowed with the weak topology; (c) if {y_n} ⊂ D(L), y_n → y weakly in Y with y ∈ D(L), Ly_n → Ly weakly in Y*, y*_n ∈ Ty_n, y*_n → y* weakly in Y* and lim sup_{n→+∞} ⟨y*_n, y_n − y⟩ ≤ 0, then y* ∈ Ty and ⟨y*_n, y_n⟩ → ⟨y*, y⟩. The crucial point in the proof of the existence of a solution to the subdifferential inclusions considered below is the following surjectivity result.

Proposition 7
If Y is a reflexive, strictly convex Banach space, L : D(L) ⊂ Y → Y * is a linear, densely defined, maximal monotone operator and T : Y → 2 Y * \ {∅} is a bounded, coercive and L-generalized pseudomonotone operator, then L + T is surjective.

Control Problem for Second Order Subdifferential Inclusion
In this section we consider an optimal control problem for systems described by a second order evolution subdifferential inclusion. We first recall the notion of parabolic G-convergence (PG-convergence) of operators, then we state a result on the sensitivity of the solution set.

Notation
Let Ω be an open bounded subset of R^N and let V = W^{1,p}_0(Ω) and H = L^2(Ω), where 2 ≤ p < ∞ and 1/p + 1/q = 1. Moreover, we consider a reflexive and separable Banach space Z and a linear, continuous and compact mapping ι : V → Z. Then V ⊂ H ⊂ V* with compact embeddings. We denote, respectively, by ⟨·, ·⟩ and (·, ·) the duality pairing between V and its dual V* and the inner product in H, and by ‖·‖, |·|, ‖·‖_{V*} the norms in V, H and V*, respectively. Moreover, the adjoint operator to ι is denoted by ι* : Z* → V*. Given 0 < T < +∞, let Q = Ω × (0, T). We introduce the spaces V = L^p(0, T; V), V* = L^q(0, T; V*), W_pq = {v ∈ V : v' ∈ V*} and Y = {v ∈ V : v' ∈ W_pq} (we keep the same symbols for the spaces of time-dependent functions; the meaning is always clear from the context). The space Y is endowed with the following topology: y_n ⟶ y in Y if and only if y_n → y weakly in V and y'_n → y' weakly in W_pq. We assume that the Nemytskii operator ῑ : W_pq → Z corresponding to ι is compact (for simplicity, in the sequel we use the same symbol ι for the Nemytskii operator). For example, in a particular application, we put Z = L^p(Ω) and ι the embedding operator. Then, by the Lions-Aubin Lemma, the required compactness of ι holds. Note, moreover, that if v_n ⟶ v in W_pq, then, by the Lions-Aubin Lemma, v_n → v strongly in L^p(Q).

PG-Convergence of Parabolic Operators
Following Svanstedt [49] we start with the following definition.

Remark 9
If a ∈ M, then growth, coercivity and monotonicity inequalities hold a.e. in Q for all ξ ∈ R^N, so the mappings from the class M are uniformly bounded, coercive and monotone.
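The inequalities referred to in Remark 9 have the standard form used for such classes (the exact constants, expressed through the parameters m_0, m_1, m_2 and α of the class, are as in [49]):

```latex
% Uniform growth, coercivity and monotonicity for a in the class M,
% a.e. (x, t) in Q and for all xi, eta in R^N; the constants c, c' > 0
% depend only on the parameters of the class, not on the particular a:
\[
  |a(t, x, \xi)| \le c\,\bigl(1 + |\xi|^{p-1}\bigr),
\]
\[
  a(t, x, \xi) \cdot \xi \ge c'\,|\xi|^{p},
\]
\[
  \bigl(a(t, x, \xi) - a(t, x, \eta)\bigr) \cdot (\xi - \eta) \ge 0 .
\]
```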
Here and in the sequel the symbol D denotes the gradient with respect to the space variable x ∈ Ω and the symbol div denotes the divergence with respect to the space variable x ∈ Ω.

Remark 11 Given a_k ∈ M, it can be shown that the Nemytskii operators corresponding to the family of operators A_k(t, y) = −div a_k(t, x, Dy) are bounded, coercive, hemicontinuous and monotone. Therefore, by Proposition 7, for every k ∈ N̄ and g ∈ V*, there exists a unique solution y_k ∈ W_pq to the problem (5). The compactness of the class M with respect to the PG-convergence was established in [49]. Definition 10 generalizes the one given for a class of linear operators by Colombini and Spagnolo in [11].
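For the reader's orientation we sketch the content of Definition 10 (the auxiliary problem (5) and the precise topologies are as in [49]): a_k PG-converges to a_∞ when, for every right-hand side, the solutions and the fluxes of the auxiliary parabolic problems converge weakly to those of the limit problem.

```latex
% Auxiliary parabolic problem (cf. (5)): for g in V^* find y_k in W_pq with
\[
  y_k'(t) - \operatorname{div} a_k\bigl(t, x, D y_k(t)\bigr) = g(t)
  \quad \text{in } V^*, \qquad y_k(0) = 0 .
\]
% PG-convergence a_k -> a_infty means that for every such g:
\[
  y_k \rightharpoonup y_\infty \ \text{weakly in } W_{pq},
  \qquad
  a_k(\cdot, \cdot, D y_k) \rightharpoonup a_\infty(\cdot, \cdot, D y_\infty)
  \ \text{weakly in } L^q(Q; \mathbb{R}^N),
\]
% where y_infty solves the auxiliary problem with a_infty in place of a_k.
```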

Remark 12
We use the notion of parabolic convergence to deal with the second order (in time) problem. This approach is possible due to the fact that the viscosity operator is coercive and hence the nature of the problem is parabolic. It remains an open problem whether Definition 10 can be modified to include the second time derivative in the auxiliary problem (5). This would require showing the compactness of the underlying class of operators with respect to this new mode of convergence.

Problem Statement
We consider the following sequence (P)_k, k ∈ N̄, of second order subdifferential inclusions. The hypotheses on the data of (P)_k are the following.
(H_1): The following relation holds between the constants of the problem data.
We start with an a priori estimate for the solution of the problem (P)_k. To this end, we give the following lemma, whose estimate holds with a constant C > 0 dependent only on T, Ω and the constants M, c_i, i = 1, ..., 6.
Proof Let y ∈ Y be a solution of the problem (P)_k. Taking the duality brackets with y'(t) ∈ V and integrating over (0, t), we obtain (7), with ξ(s) ∈ ∂J^1_k(ιy'(s)) and ζ(s) ∈ ∂J^2_k(ιy(s)) for a.e. s ∈ (0, t). From the integration by parts formula (Proposition 23.23(iv), pp. 422-423 of [51]), we get (8). From H(A) and Remark 9 we obtain (10). Since B_k is linear, symmetric and monotone, it follows that (11) holds. In order to estimate the last term on the left-hand side of (7), we use the relation (12) and the fact that for all a, b > 0 and p > 1 there exists a function c̄ : (0, +∞) → (0, +∞) such that ab^{p−1} ≤ εb^p + c̄(ε)a^p for all ε > 0. From H(J^2)(i) and (12) we obtain (13). Using (13) and the Jensen inequality, we obtain a further estimate; combining the latter with (14), we obtain (15). After simple calculations we obtain (16) and (17), where d(ε), d̃(ε) > 0. From (15)-(17) we obtain, for any ε > 0, the estimate (18). In order to estimate the right-hand side of (7), we use the Young inequality with ε > 0 and obtain (19). Combining (8), (10), (11), (18) and (19), we obtain (20). Hence, due to (H_1), we can choose ε > 0 such that the coefficient in front of ∫_0^t ‖y'(s)‖^p ds is positive, getting (21), where the constant C depends on the problem data and T but is independent of the initial conditions and of k. From the formula y(t) = y(0) + ∫_0^t y'(s) ds, by a direct calculation and using (21), we obtain (22) with a constant C > 0. Moreover, since y solves (P)_k, from H(A) we obtain (23). The assertion follows from (21)-(23).
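The two elementary inequalities invoked repeatedly in the proof above are the Young inequality with ε and its variant for products with a (p − 1)-power:

```latex
% Young's inequality with epsilon: for a, b >= 0, p > 1, 1/p + 1/q = 1,
\[
  a\,b \le \varepsilon\, a^{p} + c(\varepsilon)\, b^{q},
  \qquad c(\varepsilon) = \tfrac{1}{q}\,(\varepsilon p)^{-q/p},
\]
% and the variant used for the superpotential terms:
\[
  a\, b^{p-1} \le \varepsilon\, b^{p} + \bar{c}(\varepsilon)\, a^{p},
  \qquad \varepsilon > 0 .
\]
```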
We introduce the family of mappings K_k : V → C(0, T; V) which reconstruct the trajectory from its velocity. Using this definition, the problem (P)_k can be equivalently rewritten as a first order inclusion for the velocity variable. Now we formulate the existence theorem for the problem (P)_k, k ∈ N̄. Its proof is analogous to the existence proof of [38], and therefore it is only briefly sketched here.
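The defining formula for K_k is the one consistent with y' = v and y(0) = y^0_k (we state it here in the standard form used for such velocity reformulations; the notation follows (P)_k):

```latex
% K_k maps a velocity v to the trajectory with initial state y^0_k:
\[
  (K_k v)(t) = y^0_k + \int_0^t v(s)\,\mathrm{d}s, \qquad t \in [0, T],
\]
% so that y = K_k y' and (P)_k becomes a first order inclusion for z = y':
\[
  z'(t) + (\mathcal{A}_k z)(t) + (\mathcal{B}_k K_k z)(t)
  + \iota^* \partial J^1_k\bigl(\iota z(t)\bigr)
  + \iota^* \partial J^2_k\bigl(\iota (K_k z)(t)\bigr)
  \ni f_k(t) + (C_k u)(t), \qquad z(0) = y^1_k .
\]
```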
Proof Let us fix k ∈ N̄. We will proceed in two steps.
Step 1. First we assume that y^1_k ∈ V and introduce the operators A^1_k, B^1_k, N_k and M_k defined by (24)-(27). We also consider the operator L : D(L) ⊂ V → V* given by Lv = v' with D(L) = {v ∈ W_pq : v(0) = 0}, and observe that z ∈ W_pq solves the problem (P)_k if and only if z − y^1_k ∈ D(L) solves the problem (28), where the operator T_k : V → 2^{V*} is built from the operators (24)-(27). Recall (see e.g. [51], Proposition 32.10, p. 855) that L is a linear, densely defined and maximal monotone operator. Moreover, we will prove that for each k ∈ N, the operator T_k is bounded, coercive and L-generalized pseudomonotone. The solvability of the problem (28) then follows from Proposition 7. We will state the following four lemmas on the properties of the operators A^1_k, B^1_k, N_k and M_k. The proofs of these lemmas are analogous to the proofs of Lemmas 7, 8, 9 and 12 of [38] (see also Remark 11).

Lemma 15 If H(A) holds and y^1_k ∈ V, then for each k ∈ N̄ the operator A^1_k defined by (24) satisfies: (a) A^1_k is monotone and hemicontinuous (so also demicontinuous);

Lemma 16
If H(B)(i) holds and y^1_k ∈ V, then for each k ∈ N̄ the operator B^1_k defined by (25) satisfies:

(a) B^1_k is monotone and weakly continuous. Moreover, if H(B)(i) holds and y^1_k ∈ H, then the Nemytskii operator B̄_k corresponding to B_k
is weakly continuous as a mapping from W_pq to V*.

Lemma 17
If H(J^1) holds and y^1_k ∈ V, then for each k ∈ N̄ the operator N_k defined by (26) satisfies: (a) for each v ∈ V, N_k v is a nonempty, convex and weakly compact subset of V*;

Lemma 18 If H(J^2) holds and y^1_k ∈ V, then for each k ∈ N̄ the operator M_k defined by (27) satisfies: (a) for each v ∈ V, M_k v is a nonempty, convex and weakly compact subset of V*;
We continue the proof of Theorem 14. The first two claims (the boundedness and coercivity of T_k) can be proved as in [38]; we omit the proof for brevity.

Claim 3 T_k is L-generalized pseudomonotone. It can be proved by an argument that exactly follows the lines of the proof of Theorem 6 in [38].
Step 2. Now we pass to the more general case and assume that y^1_k ∈ H. The proof is analogous to Step 2 in the proof of Theorem 6 in [38]. However, we provide it, since we deal with a more general case involving a sum of two subdifferentials. Since V ⊂ H is dense, we can find a sequence {y^{1n}_k} ⊂ V such that y^{1n}_k → y^1_k in H as n → ∞ (the index k is now fixed). Consider a solution y_n of the problem (P)_k with y^1_k replaced by y^{1n}_k, i.e., a solution of the following problem (P)^n_k:

y''_n(t) + (A_k y'_n)(t) + (B_k y_n)(t) + ι^* ∂J^1_k(ιy'_n(t)) + ι^* ∂J^2_k(ιy_n(t)) ∋ f_k(t) + (C_k u)(t) for a.e. t ∈ (0, T), y_n(0) = y^0_k, y'_n(0) = y^{1n}_k, y_n ∈ Y.

The existence of y_n, for n ∈ N, follows from the first part of the proof. We have

y''_n(t) + A_k(t, y'_n(t)) + B_k y_n(t) + ι^* ξ_n(t) + ι^* ζ_n(t) = f_k(t) + (C_k u)(t) for a.e. t ∈ (0, T), (29)

or, equivalently, (30) and (31) hold, with ξ_n(t) ∈ ∂J^1_k(ιy'_n(t)) and ζ_n(t) ∈ ∂J^2_k(ιy_n(t)) for a.e. t ∈ (0, T). From Lemma 13, since all terms on the right-hand side of (6), excluding the initial velocity term, do not depend on n, and since {y^{1n}_k} is bounded in H, also {y_n} and {y'_n} are bounded in V and W_pq, respectively. So, passing to a subsequence, we have y_n ⟶ y.
We will show that y is a solution of the problem (P)_k. From (32) we also have y_n → y weakly in W_pq. From the continuity of the embedding W_pq ⊂ C(0, T; H) it follows that y_n(0) → y(0) weakly in H, and since y_n(0) = y^0_k for all n ∈ N, we conclude that y(0) = y^0_k. Moreover, y'_n(0) → y'(0) weakly in H, and since y'_n(0) = y^{1n}_k → y^1_k strongly in H, it follows that y'(0) = y^1_k. From Lemma 16 and (32), it follows that (33) holds. Now we pass to the limit in (30) and (31). Since y_n → y and y'_n → y' weakly in W_pq and ι : W_pq → Z is compact, it follows that ιy_n → ιy and ιy'_n → ιy' strongly in Z. From the growth conditions H(J^1)(i) and H(J^2)(i), it follows that {ξ_n} and {ζ_n} are bounded in Z*. From the reflexivity of this space, we have, for a subsequence, ξ_n → ξ and ζ_n → ζ weakly in Z*. From the convergence theorem of Aubin and Cellina (see [4]), we have ξ(t) ∈ ∂J^1_k(ιy'(t)) and ζ(t) ∈ ∂J^2_k(ιy(t)) for a.e. t ∈ (0, T) (34).
In order to pass to the limit in the term A_k y'_n, we will show that lim sup_{n→∞} ⟨A_k y'_n, y'_n − y'⟩_{V*×V} ≤ 0 (35) and use Lemma 15(e). Since lim_{n→∞} ⟨f_k + C_k u, y'_n − y'⟩_{V*×V} = 0 and lim_{n→∞} ⟨ξ_n + ζ_n, ιy'_n − ιy'⟩_{Z*×Z} = 0, from (29) we obtain (36)-(38). From (36)-(38), we obtain (35), so we conclude that (39) holds. From (32), (33), (34) and (39) it follows that y solves (P)_k. The proof is complete.

Theorem 19
If, in addition to the assumptions of Theorem 14, we admit p = 2 and assume, for k ∈ N̄, conditions with constants b_1, b_2, b_3 > 0 such that b_1 > b_2‖ι‖^2, then the solution of the problem (P)_k is unique.
The proof of Theorem 19 is based on a standard technique and follows from a simple direct calculation and an application of the Gronwall lemma.
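For completeness, the form of the Gronwall lemma used in such uniqueness arguments is the integral one; applied (in a form consistent with the estimates above) to the difference of two solutions with the same data, it forces the difference to vanish:

```latex
% Gronwall lemma (integral form): if r >= 0 is integrable and
\[
  r(t) \le a + b \int_0^t r(s)\,\mathrm{d}s \qquad \text{for a.e. } t \in (0, T),
\]
% with constants a, b >= 0, then
\[
  r(t) \le a\, e^{\,b t} \qquad \text{for a.e. } t \in (0, T).
\]
% With a = 0 (two solutions y_1, y_2 sharing the initial data) and, e.g.,
% r(t) = |y_1'(t) - y_2'(t)|^2 + ||y_1(t) - y_2(t)||^2, one gets r = 0.
```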

Sensitivity of Solution Sets for (P) k
In this section we provide the result on the sensitivity of the solution set of the second order subdifferential inclusion. We assume that u ∈ U ⊂ U × V × H, where U is the set of admissible controls. Moreover, for a sequence of controls {u_k} ⊂ U, k ∈ N, the convergence (u_k, y^0_k, y^1_k) → (u_∞, y^0_∞, y^1_∞) is understood in the sense of hypothesis (H_0). Let us define the multivalued mappings S_k : U → 2^Y, which assign to the control u ∈ U the set of all solutions of the problem (P)_k corresponding to this control, where k ∈ N̄. First we observe that under the hypotheses of Theorem 14, the mappings S_k have nonempty values. We prove the following theorem.

Moreover, if the limit problem (P)_∞ has a unique solution (for example, if the assumptions of Theorem 19 hold for k = ∞), then we also have the convergence (44), so in this case S_k(u_k) converges to S_∞(u_∞) in the Kuratowski sense. Before we give the proof of Theorem 20, we prove a lemma which, to the best of our knowledge, is a new result.

Lemma 21
Let Z be a separable and reflexive Banach space, and let J_k : Z → R, k ∈ N̄, be a family of locally Lipschitz functions such that ‖∂J_k(z)‖_{Z*} ≤ c(1 + ‖z‖^{p−1}_Z) for all z ∈ Z, with some c > 0 independent of k, and satisfying the convergence condition imposed on the superpotentials in the hypotheses H(J^1), H(J^2). Then
a) for every sequence z_k → z strongly in Z and for every v ∈ Z we have lim sup_{k→∞} J^0_k(z_k; v) ≤ J^0_∞(z; v);
b) for all sequences z_k → z strongly in L^p(0, T; Z) and ξ_k → ξ weakly in L^q(0, T; Z*) such that ξ_k(t) ∈ ∂J_k(z_k(t)) for a.e. t ∈ (0, T), we have ξ(t) ∈ ∂J_∞(z(t)) for a.e. t ∈ (0, T).
Proof For the proof of a), assume that for a subsequence J^0_k(z_k; v) → α ∈ R (note that due to the growth condition the case α = ∞ is excluded here). It is enough to show that α ≤ J^0_∞(z; v). From the basic properties of the Clarke subdifferential, we have J^0_k(z_k; v) = max{⟨ξ, v⟩_{Z*×Z} : ξ ∈ ∂J_k(z_k)}, so we can choose ξ_k ∈ ∂J_k(z_k) with J^0_k(z_k; v) = ⟨ξ_k, v⟩. From the growth condition, since {z_k} is bounded, it follows that for a subsequence we have ξ_k → ξ weakly in Z* for some ξ ∈ Z*. Hence α = lim_k ⟨ξ_k, v⟩ = ⟨ξ, v⟩, and from the convergence assumption on {J_k} it follows that ⟨ξ, v⟩ ≤ J^0_∞(z; v); the proof of part a) is complete. Now we pass to the proof of b). Fix v ∈ Z. Since z_k → z strongly, using Proposition 2.2.41 in [23], for a subsequence z_{k_n} we have z_{k_n}(t) → z(t) for a.e. t ∈ (0, T) as n → ∞, and, for all n ≥ 1, ‖z_{k_n}(t)‖_Z ≤ h(t) for a.e. t ∈ (0, T) with some h ∈ L^p(0, T). From the growth condition, we obtain an integrable majorant. Since the majorant belongs to L^1(0, T), we can use the Fatou lemma and obtain (47). Since ξ_k → ξ weakly in Z*, we have (48). Applying the assertion a) of the Lemma as well as (47), (48) and the definition of the Clarke subdifferential, we obtain the inequality (49): ∫_0^T (J^0_∞(z(t); v(t)) − ⟨ξ(t), v(t)⟩) dt ≥ 0. Next, we will show that for all v ∈ Z, the integrand in (49) is nonnegative for a.e. t ∈ (0, T). Indeed, suppose to the contrary that for some v ∈ Z we have J^0_∞(z(t); v(t)) − ⟨ξ(t), v(t)⟩ < 0 for all t ∈ N ⊂ (0, T), where N is of positive measure. Define w(t) = v(t) for t ∈ N and w(t) = 0 otherwise; since w ∈ Z, we obtain a contradiction with (49). Now, by the separability of Z, consider a countable dense subset {v_n}_{n=1}^∞ of Z. Taking in place of v in (49) the constant functions w_n ∈ Z defined by w_n(t) = v_n for t ∈ (0, T), we observe that the inequality J^0_∞(z(t); v_n) − ⟨ξ(t), v_n⟩ ≥ 0 can fail only on a set N_n ⊂ (0, T) of measure zero. Now we have ⟨ξ(t), v_n⟩ ≤ J^0_∞(z(t); v_n) for all n ∈ N and all t belonging to the set of full measure (0, T) \ ∪_{n=1}^∞ N_n. Since J^0_∞(z(t); ·) is locally Lipschitz and hence continuous (see Proposition 2.1.1 in [10]), then, by density, we have ⟨ξ(t), v⟩ ≤ J^0_∞(z(t); v) for all v ∈ Z on the set of full measure, and the assertion follows.
Proof (of Theorem 20). First observe that from Theorem 14 it follows that the sets S_k(u_k) are nonempty for k ∈ N̄. Suppose that y_k ∈ S_k(u_k) for k ∈ N. From Lemma 13, it follows that {y_k} is bounded in C(0, T; V), {y'_k} is bounded in C(0, T; H) ∩ V and {y''_k} is bounded in V*. Hence, for a subsequence, we have y_k ⟶ y_∞. It remains to show that y_∞ ∈ S_∞(u_∞). In a standard way, from y_k ⟶ y_∞ it follows that y_k(0) → y_∞(0) weakly in V and y'_k(0) → y'_∞(0) weakly in H. Hence, from (H_0), we obtain y_∞(0) = y^0_∞ and y'_∞(0) = y^1_∞. From the fact that y_k ∈ S_k(u_k), it follows that for a.e. t ∈ (0, T) we have

y''_k(t) + A_k(t, y'_k(t)) + B_k y_k(t) + ι^* ξ_k(t) + ι^* ζ_k(t) = f_k(t) + (C_k u_k)(t), (50)

where ξ_k(t) ∈ ∂J^1_k(ιy'_k(t)) and ζ_k(t) ∈ ∂J^2_k(ιy_k(t)) for a.e. t ∈ (0, T). From the growth conditions, we know that both {ξ_k} and {ζ_k} are bounded in Z*, so, for subsequences denoted again by k, we have ξ_k → ξ and ζ_k → ζ weakly in Z* as k → ∞.
Moreover, since y_k → y_∞ and y'_k → y'_∞ both weakly in W_pq, and the Nemytskii operator ι : W_pq → Z is compact, we have ιy_k → ιy_∞ and ιy'_k → ιy'_∞ strongly in Z as k → ∞.
From (51), (52) and Lemma 21, we obtain (53). From Remark 9, we may assume, possibly passing to a subsequence, that (54) holds with some b ∈ L^q(Q; R^N). Let η ∈ R^N, let Ω_0 be an open set such that Ω_0 ⊂⊂ Ω, and let I be an open interval with I ⊂⊂ (0, T).
Let us consider a sequence {v_k} ⊂ W_pq of solutions to the auxiliary problem (55). From (50) and (55), we have (56). Multiplying the last equality by (y'_k − v_k)ϕ and integrating by parts, we obtain (60). We claim that (61) holds. Indeed, let z_k = y'_k − v_k and z = y'_∞ − v. We know that z_k → z weakly in W_pq and also strongly in L^p(Q), which proves (61). Since the embedding W_pq ⊂ L^p(Q) is compact and, moreover, ι : W_pq → Z is a compact operator, from (57) and y_k ⟶ y_∞ we conclude that v_k → v and y'_k → y'_∞ strongly in L^p(Q). From (54), (58) and (62), we obtain (63), and from (51), (52) and (63) we obtain (64) and (65) as k → ∞. Using (61), (64), (65), H(B), H(C) and (H_0), we pass to the limit in (60) and obtain (66). On the other hand, taking the weak limit in V* in (50), we get (67). Inserting the last equality into I_1 and integrating by parts, we obtain the limit version of (60). Hence, taking in place of ϕ a family of mollifier kernels ϕ_ε centered at (y, s) ∈ Ω_0 × I and setting g_ε(x, t) accordingly, we get g_ε → g in L^1(Ω_0 × I); so, for a subsequence, g_ε(x, t) → g(x, t) for a.e. (x, t) ∈ Ω_0 × I. Hence the resulting inequality holds a.e. in Ω_0 × I, and hence also a.e. in Q, for every η ∈ R^N. Let w ∈ R^N and λ > 0. Taking η = Dy'_∞ + λw, we obtain the corresponding monotonicity inequality. Recalling that a_∞(x, t, ·) is continuous (by the definition of the class M(m_0, m_1, m_2, α)), we pass to the limit with λ → 0 in the last inequality; if we then replace w by −w, we deduce that b(x, t) = a_∞(x, t, Dy'_∞) a.e. in Q. This together with (67) implies that y_∞ ∈ S_∞(u_∞). The proof of (43) is complete. Moreover, if S_∞(u_∞) is a singleton, then the whole sequence y_k converges to y_∞ and hence (44) follows.
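The final identification of b with the limit flux is an instance of Minty's monotonicity trick, which in abstract form reads (Dy denotes the gradient argument of the flux, as above):

```latex
% Minty's trick: a(x,t,.) monotone and continuous, and suppose
%   (b(x,t) - a(x,t,eta)) . (Dy(x,t) - eta) >= 0  for all eta, a.e. in Q.
% Taking eta = Dy + lambda w with lambda > 0 and dividing by lambda gives
\[
  \bigl(b(x,t) - a(x,t, Dy + \lambda w)\bigr) \cdot (-w) \ge 0 ,
\]
% and letting lambda -> 0 (continuity of a in its last variable) yields
\[
  \bigl(b(x,t) - a(x,t, Dy)\bigr) \cdot w \le 0
  \qquad \text{for all } w \in \mathbb{R}^N .
\]
% Applying this to both w and -w forces b(x,t) = a(x,t, Dy(x,t)) a.e.
```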

Γ-Convergence of Cost Functionals
In this section we state conditions which guarantee the suitable Γ-convergence of the cost functionals in the control problem (CP)_{R_k}. In the sequel we replace H(C) with the following, stronger, assumption: H(C)^(1): C_k ∈ L(U, L^q(Q)), k ∈ N̄, are uniformly bounded and C_k converges continuously to C_∞. We consider the costs (68) for y ∈ Y, u ∈ U, y^0 ∈ V, y^1 ∈ H. In the following hypotheses the conditions (i), (ii) and (iii) hold uniformly with respect to k ∈ N̄.
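The concrete formulas (68) are given in terms of integrands; a typical shape consistent with the hypotheses H(F^(j)) (this display illustrates the general integral form only, it is not a verbatim quotation of (68)) is:

```latex
% Integral costs: running cost in the state and its velocity, plus
% costs of the control and of the initial data.
\[
  F^{(1)}_k(y) = \int_Q F^{(1)}_k\bigl(x, t, y(x,t), y'(x,t)\bigr)\,\mathrm{d}x\,\mathrm{d}t,
  \qquad
  F^{(2)}_k(u) = \int_Q F^{(2)}_k\bigl(x, t, u(x,t)\bigr)\,\mathrm{d}x\,\mathrm{d}t,
\]
\[
  F^{(3)}_k(y^0) = \int_\Omega F^{(3)}_k\bigl(x, y^0(x)\bigr)\,\mathrm{d}x,
  \qquad
  F^{(4)}_k(y^1) = \int_\Omega F^{(4)}_k\bigl(x, y^1(x)\bigr)\,\mathrm{d}x .
\]
```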
Proof For the proof of (j), assume that y_n → y strongly in W^{1,p}(0, T; L^p(Ω)). This means that y_n → y strongly in L^p(Q) and y'_n → y' strongly in L^p(Q). From H(F^(1))(ii), we obtain

∫_Q |F^(1)_k(x, t, y(x, t), y'(x, t)) − F^(1)_k(x, t, y_n(x, t), y'_n(x, t))| dx dt → 0 uniformly in k,

and the assertion follows. The continuity of F^(i)_k for i = 2, 3, 4 follows from the fact that convex and locally bounded functions are continuous, and from the Carathéodory continuity theorem (see Example 1.22 in [12]).
To prove the continuous convergence of F^(1)_k, assume that y_k → y in Y. We split the difference |F^(1)_k(y_k) − F^(1)_∞(y)| into the two terms |F^(1)_k(y_k) − F^(1)_k(y)| and |F^(1)_k(y) − F^(1)_∞(y)|. The first term on the right-hand side tends to zero uniformly in k by (73). To prove the convergence of the second term, we proceed separately in the cases H(F^(1))(iiia) and H(F^(1))(iiib). For the proof in the first case, choose y ∈ Y to get the term

∫_Q |F^(1)_k(x, t, y(x, t), y'(x, t)) − F^(1)_∞(x, t, y(x, t), y'(x, t))| dt dx.
By the Luzin theorem we know that for every ε > 0 we can find a compact set K ⊂ Q such that y(x, t) and y'(x, t) are continuous on K and μ_{N+1}(Q \ K) < ε (where μ_{N+1} stands for the (N + 1)-dimensional Lebesgue measure). Hence it remains to estimate

∫_K |F^(1)_k(x, t, y(x, t), y'(x, t)) − F^(1)_∞(x, t, y(x, t), y'(x, t))| dt dx.
Now the fact that the last term in the above inequality can be made arbitrarily small follows from the convergence F^(1)_k(·, ·, z, w) → F^(1)_∞(·, ·, z, w) weakly in L^1(Q) for all (z, w) ∈ R^2, in a similar way as the convergence (13) in Lemma 4.1 in [17]. The proof in the case H(F^(1))(iiib) follows by a direct application of the Lebesgue dominated convergence theorem. For the proof of the Γ-convergence of F^(2)_k, F^(3)_k and F^(4)_k, observe that these functions are convex and locally equibounded. Hence, by Proposition 5.12 in [12], the Γ-convergence is equivalent to the pointwise convergence. The pointwise convergence follows from (iii) in the same way as for F^(1). This completes the proof.

Now the main result on the sensitivity of optimal control problems follows from Proposition 5, Theorem 20, Proposition 22 and the direct method for the existence part.

Theorem 23 Assume, in addition to the hypotheses H(A), H(B), H(J^1), H(J^2), H(C)^(1), (H_0) and (H_1) for (P)_k, the hypotheses H(F^(j)), j = 1, 2, 3, 4, for the cost functionals F_k(u, y^0, y^1, y) given by (68). Moreover, let the space of admissible controls U be compact in U × V × H (alternatively, we can assume that the sublevel sets of the functionals F^(2,3,4)_k : U → R defined as F^(2,3,4)_k(u, y^0, y^1) = F^(2)_k(u) + F^(3)_k(y^0) + F^(4)_k(y^1) are compact). Then
(i) For every k ∈ N̄ the problem (CP)_{R_k} has at least one optimal solution y*_k ∈ S_k(u*_k, y^{0*}_k, y^{1*}_k), with m_k := F_k(u*_k, y^{0*}_k, y^{1*}_k, y*_k) being its minimal value.
(ii) If the limit (original) problem (CP)_{R_∞} has the "uniqueness of solution property", i.e., for all u ∈ U, S_∞(u) = {y_∞(u)} (see Theorem 19), then the sequence (u*_k, y^{0*}_k, y^{1*}_k, y*_k) has a cluster point and, moreover, every cluster point of this sequence is an optimal solution to the problem (CP)_{R_∞}.
(iii) We also have m k → m ∞ as k → ∞.

Examples
In this section we give examples of particular operators and functionals for which the results of the paper are applicable. Let p = 2 and V = H^1_0(Ω). We provide examples for the following hypotheses.
H(A): Note that the operators A_k, k ∈ N̄, are given in explicit form in the assumption H(A). Svanstedt (see Theorem 3.1 in [49]) proves that any sequence {a_k}_{k=1}^∞ with a_k ∈ M has a subsequence such that a_k PG-converges to some a_∞ ∈ M.
H(B): The condition H(B)(ii) holds for the operators defined by ⟨B_k y, v⟩ = ∫_Ω (g_k(x)y(x)v(x) + (∇y(x) · h_k(x))v(x)) dx, for y, v ∈ V, k ∈ N̄, where g_k → g_∞ in L^∞(Ω) and h_k → h_∞ in W^{1,∞}(Ω; R^N). Moreover,