Stability for Semilinear Parabolic Optimal Control Problems with Respect to Initial Data

A distributed optimal control problem for a semilinear parabolic partial differential equation is investigated. The stability of locally optimal solutions with respect to perturbations of the initial data is studied. Based on different types of sufficient optimality conditions for a local solution of the unperturbed problem, Lipschitz or Hölder stability with respect to the perturbations is proved. Moreover, a particular example with a semilinear equation, constant initial data, and a standard quadratic tracking type objective functional is constructed that has at least two different locally optimal solutions. By the perturbation analysis, the existence of a problem with non-constant initial data is shown that also has at least two different locally optimal solutions.


Introduction
We consider the optimal control problem

(P)   min_{u ∈ U_ad} J(u) := (1/2) ∫_Q (y_u − y_Q)² dx dt + (γ/2) ∫_Ω (y_u(T) − y_Ω)² dx + (κ/2) ∫_Q u² dx dt,

where y_u denotes the solution of the semilinear parabolic Neumann problem

∂y/∂t − Δy + f(x,t,y) = u in Q = Ω × (0,T),   ∂_ν y = 0 on Σ = Γ × (0,T),   y(0) = y_0 in Ω,   (1.1)

and the set of admissible controls U_ad is defined by

U_ad = {u ∈ L^∞(Q) : α ≤ u(x,t) ≤ β for a.a. (x,t) ∈ Q}

with numbers −∞ ≤ α < β ≤ ∞. We assume that γ and κ are nonnegative real numbers, and y_Q, y_Ω, and f are given functions to be specified later. We address two main issues. The first is the stability of selected local solutions of the optimal control problem with respect to a perturbation of the initial function y_0. We select a fixed local minimizer ū of the problem and estimate the distance to an associated local minimizer of the problem with perturbed initial function y_0 + φ, where ‖φ‖_{L²(Ω)} is small enough. Such stability results might be of some interest for investigations of the value function in the context of feedback control, although the case T = ∞ is needed there. We refer, for instance, to the recent contributions [2,20,27]. For feedback control, the unbounded case (α = −∞, β = ∞) is important; it is allowed in our paper as a particular case.
In Sect. 3, the associated stability analysis is performed for the Tikhonov parameter κ > 0 under a second order sufficient optimality condition imposed on ū. The main result of this section is Theorem 3.4 on Lipschitz stability of local solutions with respect to φ. In Sect. 4, we investigate the same issue for κ = 0, where the second order sufficient optimality condition of Sect. 3 cannot be expected to hold. Here, we apply a second order condition that is sufficient for strong local minimizers in the sense of the calculus of variations. Under this second order condition, in Theorem 4.4 we derive Hölder stability of the associated optimal states with respect to the perturbation. For the stability of strong locally optimal controls we invoke the known condition (4.13) on the level sets of the optimal adjoint state, cf. [14]. Under this assumption, in Theorem 4.6 we are able to prove Hölder or even Lipschitz stability of strong locally optimal controls.
To the best of our knowledge, these results are new. In the literature on optimal control of partial differential equations, several contributions to the stability analysis with respect to perturbations have been published; we mention [1, 11, 19, 21-23, 28, 29].
Moreover, we refer to the discussion of general control and optimization problems in [18] and to the case of optimal control problems for ODEs in [17]. However, we are not aware of related works where perturbations of the initial data are addressed in the context of PDE control. Moreover, the application of our type of critical cone to such problems is new. In the above-mentioned papers on the control of PDEs, perturbations appeared in the differential equation, in its boundary condition, in the objective functional, or in inequality constraints. Handling perturbations of the initial data is more complicated for a nonlinear state equation, in particular since bounded initial data are needed to have a differentiable control-to-state mapping.
In Sect. 5, we construct a particular example of (P) that has two different local minimizers. It was a longstanding open problem for optimal control problems with semilinear PDEs and quadratic objective whether more than one optimal solution can exist. Recently, in [24] this question was answered for a semilinear elliptic boundary control problem by constructing a problem with two different optimal solutions. The reader is also referred to [16], where the non-uniqueness of minimizers is established for abstract tracking type problems with quite general state equations generating non affine-linear control-to-state mappings.
While [16] and [24] prove the existence of problems with non-unique minimizers, they do not construct a concrete example. In our paper, we proceed in a different way and provide a concrete example with two different local minimizers. It uses the nonlinearity f (y) = y 3 − y. For a nonconvex objective functional, an example with two different global solutions was given in [4].

Assumptions and Preliminary Results
We impose the following assumptions on the problem (P): (A1) Ω is an open bounded set in ℝ^n, 1 ≤ n ≤ 3, with a Lipschitz boundary Γ. The time T is finite, 0 < T < ∞. (A2) We assume that f : Q × ℝ → ℝ is a Carathéodory function of class C² with respect to the last variable satisfying the following properties:
Assumption (A2) is fulfilled in particular by any polynomial in y of odd degree with positive leading coefficient, or by f(y) = e^y. In particular, the function f(y) = (y − y_1)(y − y_2)(y − y_3) with fixed real numbers y_i, i = 1, 2, 3, satisfies our assumptions. This function, which appears in the so-called Schlögl model, will be used in Sect. 5.
Throughout the paper, we use the standard space W(0,T) = {y ∈ L²(0,T;H¹(Ω)) : ∂y/∂t ∈ L²(0,T;H¹(Ω)′)}. Let us recall the following known result (Theorem 2.1): Under the previous assumptions, for every u ∈ L^r(0,T;L^s(Ω)) with 1/r + n/(2s) < 1 and r, s ≥ 2, there exists a unique solution y_u ∈ L^∞(Q) ∩ W(0,T) of (1.1). Moreover, the estimates (2.5) and (2.6) hold for a monotone non-decreasing function η : [0,∞) → [0,∞) and some constant K, both independent of u. Finally, if u_k ⇀ u in L^r(0,T;L^s(Ω)), then y_{u_k} → y_u strongly in L^∞(Q) ∩ W(0,T).
The reader is referred to [5] and [6] for the proof of this result. Consequently, the mapping G : L^r(0,T;L^s(Ω)) → L^∞(Q) ∩ W(0,T) given by G(u) = y_u, the solution of (1.1), is well defined. The following differentiability properties of G are known; we refer to [5] for the proofs of these theorems. Though the proof of Theorem 2.1 in [5] is performed for the Dirichlet condition, the same arguments can be applied to the Neumann case with obvious modifications. In [5], the proof of Theorem 2.2 was carried out for s = 2, but it remains valid for our setting of r and s.
As a consequence of Theorem 2.2 and the chain rule, we deduce the following result (Corollary 2.1): The functional J : L^r(0,T;L^s(Ω)) → ℝ is of class C². Its first and second order derivatives are given by the expressions (2.10)–(2.12).

Remark 2.2
Though J is neither differentiable nor well defined in L²(Q) for n > 1, the linear and bilinear forms J′(u) and J″(u) can be extended to continuous forms defined on L²(Q) and L²(Q) × L²(Q), respectively, by the same expressions (2.10) and (2.11).
Problem (P) is a non-convex problem in general; see [24]. Therefore, we will distinguish between local and global minimizers for (P).

Definition 2.1
Given r, s ∈ [1, ∞], we say that ū is an L^r(0,T;L^s(Ω))-local minimizer of (P) if ū ∈ U_ad and there exists an L^r(0,T;L^s(Ω)) ball B_ε(ū) such that J(ū) ≤ J(u) for all u ∈ U_ad ∩ B_ε(ū). If this inequality is strict whenever u ≠ ū, then ū is called an L^r(0,T;L^s(Ω))-strict local minimizer of (P). We say that ū is a solution of (P), or a global minimizer, if ū ∈ U_ad and J(ū) ≤ J(u) for all u ∈ U_ad.

Theorem 2.3 Problem (P) has at least one solution.
This result is well known if −∞ < α < β < +∞. In the other case, due to Assumption (A4), we have κ > 0. Then, the existence of a solution of (P) is still guaranteed; see [4] or [8]. This is remarkable because the L²(Q)-Tikhonov term implies the boundedness of minimizing sequences only in L²(Q), which is not sufficient for dealing with the state equation; see Theorem 2.1.
From Corollary 2.1, the following well known results are deduced; see, for instance, [10] or [12]: Theorem 2.4 Let ū be an L^r(0,T;L^s(Ω))-local minimizer of (P) with 1/r + n/(2s) < 1 and r, s ≥ 2. Then, there exist unique functions ȳ, φ̄ ∈ W(0,T) ∩ L^∞(Q) such that the first order optimality system (2.13)–(2.15) is satisfied. Moreover, the inequality J″(ū)v² ≥ 0 for all v ∈ C_ū holds, where C_ū is the associated cone of critical directions. This theorem provides the first and second order necessary conditions for local optimality. To establish our stability results for (P), we need sufficient second order conditions. They will be addressed in Sects. 3 and 4.
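The numbered displays (2.13)–(2.15) are not reproduced in this excerpt. For problems of this class, the variational inequality in the first order system, and the projection formula (2.17) invoked later, typically take the following standard form; this is a sketch under that assumption, with φ̄ denoting the adjoint state:

```latex
% Variational inequality (a standard form; the precise numbered display
% is not reproduced in this excerpt):
\int_Q (\bar\varphi + \kappa \bar u)(u - \bar u)\, dx\, dt \;\ge\; 0
\qquad \forall u \in U_{ad},
% which, for \kappa > 0, is equivalent to the pointwise projection formula
\bar u(x,t) \;=\; \operatorname{Proj}_{[\alpha,\beta]}
\Bigl( -\tfrac{1}{\kappa}\, \bar\varphi(x,t) \Bigr)
\quad \text{for a.a.\ } (x,t) \in Q.
```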

Remark 2.3 Observe that (2.15) remains valid for every u ∈ L²(Q) satisfying the control constraints. Indeed, it is enough to take into account that these controls u can be approximated in L²(Q) by controls of U_ad.

Remark 2.5
(1) Let ū ∈ U_ad be a global minimizer of (P) and let u ∈ L^r(0,T;L^s(Ω)) with r, s ≥ 2 and 1/r + n/(2s) < 1 satisfy the control constraints α ≤ u(x,t) ≤ β for almost all (x,t) ∈ Q. Since U_ad was selected as a subset of L^∞(Q), this control u is not necessarily admissible for (P). Can it happen that J(u) < J(ū)? The answer is no. Indeed, for every integer k ≥ 1 we set u_k(x,t) = Proj_{[−k,+k]}(u(x,t)). Then u_k ∈ U_ad holds for every k ≥ max{α⁺, (−β)⁺}. The convergence u_k → u in L^r(0,T;L^s(Ω)) follows from Lebesgue's dominated convergence theorem. Using the estimates (2.5) and (2.6), it is easy to prove that y_{u_k} → y_u in W(0,T). From the optimality of ū, we get J(ū) ≤ J(u_k) for every k ≥ 1 and, consequently, J(ū) ≤ lim_{k→∞} J(u_k) = J(u).
(2) Assume now that ū ∈ L^r(0,T;L^s(Ω)) satisfies the control constraints and that ū is a local minimizer of J in the following sense: there exists ρ > 0 such that J(ū) ≤ J(u) for every u satisfying the control constraints with ‖u − ū‖_{L^r(0,T;L^s(Ω))} ≤ ρ. Then ū ∈ L^∞(Q) holds. Once again, this is obvious if −∞ < α < β < +∞. Otherwise, we observe that (2.10) leads to (2.15) and, hence, (2.17) is satisfied. Then, we can argue as in Remark 2.2 to deduce that ū ∈ L^∞(Q). These observations justify the selection of U_ad as a subset of L^∞(Q).

Lipschitz Stability: The Case κ > 0
Let ū ∈ U_ad satisfy the first order necessary optimality conditions (2.13)–(2.15). A sufficient condition for strict local optimality of ū is

J″(ū)v² > 0 for all v ∈ C_ū \ {0},   (3.1)

where the critical cone C_ū is defined above. Moreover, the next theorem establishes that, under this assumption, the quadratic growth condition (3.2) holds.
Proof In the case −∞ < α < β < +∞, the proof of (3.2) with r = s = 2 can be found in [10]. In the other case, due to (A4), we have κ > 0. Then the proof follows the same steps as in [10] with some technical differences. Thus, we argue by contradiction: if (3.2) fails, then there exists a sequence {u_k}_{k=1}^∞ ⊂ U_ad along which the quadratic growth inequality is violated. We set ρ_k = ‖u_k − ū‖_{L²(Q)} and v_k = (u_k − ū)/ρ_k. By taking a subsequence, we can assume that v_k ⇀ v in L²(Q). Then, the proof follows as in [10]. The differences concern the proof of certain convergence properties, where θ_k ∈ [0,1] denotes an intermediate parameter. To prove these, we first observe that ū + ρ_k v_k = u_k → ū and ū + θ_k ρ_k v_k = ū + θ_k(u_k − ū) → ū strongly in L^r(0,T;L^s(Ω)). Hence, from Theorem 2.1 we get y_k → ȳ and y_{θ_k} → ȳ strongly in L^∞(Q) ∩ W(0,T). Denote by ϕ_k and ϕ_{θ_k} the adjoint states corresponding to u_k and ū + θ_k(u_k − ū), respectively. Subtracting the equations satisfied by them and invoking (2.3), it is easy to deduce that ϕ_k → φ̄ and ϕ_{θ_k} → φ̄ strongly in L^∞(Q) ∩ W(0,T). Finally, an analogous argument applies to the equation satisfied by the linearized states. Combining all these convergence properties and recalling the expressions (2.10) and (2.11) for the derivatives of J, we readily confirm the desired convergences.
Let us point out that, as proved in [10], the condition (3.1) is equivalent to the coercivity condition (3.3) on the extended cone E_τ^ū. Now, we consider perturbations in the initial condition of (1.1), leading to a family of perturbed optimal control problems (P_ε). Let {φ_ε}_{ε>0} ⊂ L^∞(Ω) be a family of functions satisfying

sup_{ε>0} ‖φ_ε‖_{L^∞(Ω)} < ∞   (3.6)   and   lim_{ε→0} ‖φ_ε‖_{L²(Ω)} = 0.   (3.7)

Let us comment on this class of admissible perturbations: we need φ_ε ∈ L^∞(Ω) to have associated states in L^∞(Q). Otherwise, we cannot prove the differentiability of the control-to-state mapping that is needed for the first and second order optimality conditions. The selection of the L²-norm in (3.7) yields better and more practical perturbation results. In practice, perturbations are bounded in L^∞(Ω). However, the requirement lim_{ε→0} ‖φ_ε‖_{L^∞(Ω)} = 0 would be too strong. For instance, let {Ω_ε} ⊂ Ω be a family of measurable subsets with |Ω_ε| → 0 and φ_ε = δ_ε χ_{Ω_ε} with {δ_ε}_{ε>0} ⊂ ℝ bounded. This family of perturbation functions obeys (3.7), although it does not converge to zero in the norm of L^∞(Ω).
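The effect described by this example can be checked numerically. The following sketch is a toy 1-D illustration with Ω = (0,1), hypothetical sets Ω_ε = (0,ε), and δ_ε ≡ 1; it verifies that the L²(Ω) norms of φ_ε tend to zero while the L^∞(Ω) norms stay constant:

```python
import numpy as np

# Hypothetical 1-D illustration: Omega = (0,1), Omega_eps = (0, eps),
# phi_eps = delta * indicator(Omega_eps) with delta = 1 (bounded in eps).
delta = 1.0

def norms(eps, n=100_000):
    x = (np.arange(n) + 0.5) / n          # midpoint grid on (0,1)
    phi = delta * (x < eps)               # phi_eps = delta * chi_{(0,eps)}
    l2 = np.sqrt(np.mean(phi**2))         # L2(Omega) norm (|Omega| = 1)
    linf = np.max(np.abs(phi))            # L-infinity(Omega) norm
    return l2, linf

for eps in [0.1, 0.01, 0.001]:
    l2, linf = norms(eps)
    # L2 norm equals delta * sqrt(|Omega_eps|) -> 0, L-inf stays at delta
    print(f"eps={eps}: L2 = {l2:.5f}, Linf = {linf}")
```

The L² norm is δ·√|Ω_ε|, so it shrinks with the measure of Ω_ε, while the supremum norm remains δ for every ε.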
We associate with this family the perturbed state equations (3.8), obtained from (1.1) by replacing the initial datum y_0 with y_0 + φ_ε. For given ε and u, the solution of this equation will be denoted by y_u^ε. Then, we consider the perturbed optimal control problems (P_ε). Analogously to problem (P), every problem (P_ε) has at least one global minimizer u_ε. All these minimizers are uniformly bounded in L^∞(Q) by a constant depending on ‖y_0 + φ_ε‖_{L^∞(Ω)}; see Remark 2.4. Due to (3.6), this constant can be selected independently of ε; hence (3.9) holds. The next two theorems analyze the relation between the solutions of (P) and (P_ε).

Theorem 3.2 Let {u_ε}_{ε>0} be a family of global minimizers of the problems (P_ε).
Any control ū that is the weak* limit in L^∞(Q) of a sequence {u_{ε_k}}_{k=1}^∞ with ε_k → 0 as k → ∞ is a global minimizer of (P). Moreover, the convergence is strong in L²(Q).
Proof Notice that the existence of such weakly* converging sequences {u_{ε_k}}_{k=1}^∞ follows from (3.9). We denote by y_{ε_k} and ȳ the states associated with u_{ε_k} and ū, solutions of (3.8) and (1.1), respectively. From Theorem 2.1, (3.7), and (3.9), we infer that y_{ε_k} ⇀ ȳ in W(0,T), hence strongly in L²(Q). Using this fact and the optimality of u_{ε_k}, we deduce J(ū) ≤ lim inf_{k→∞} J_{ε_k}(u_{ε_k}) ≤ lim_{k→∞} J_{ε_k}(u) = J(u) for every u ∈ U_ad. Since ū ∈ U_ad, these inequalities imply that ū is a global minimizer of (P). With the convergence y_{ε_k} → ȳ in L²(Q) and κ > 0, the strong convergence u_{ε_k} → ū in L²(Q) follows.
Conversely, we have the following result: Theorem 3.3 Let ū be a strict local minimizer of (P) in the L^r(0,T;L^s(Ω)) sense. Then, there exists a set {u_ε}_{ε>0} of local minimizers of the problems (P_ε) such that u_ε → ū strongly in L²(Q) as ε → 0.
Proof Since ū is a strict local minimizer of (P), there exists a closed ball B̄_ρ(ū) in L^r(0,T;L^s(Ω)) such that ū is the unique minimizer of J on U_ad ∩ B̄_ρ(ū). Let us consider the control problems obtained by restricting (P) and (P_ε) to U_ad ∩ B̄_ρ(ū). Obviously, ū is the unique solution of the restricted problem associated with (P), and every restricted problem associated with (P_ε) has at least one solution u_ε. As in Theorem 3.2, we deduce that every weak limit of a converging sequence is a solution of the restricted problem associated with (P). Since ū is its unique solution, we deduce that the whole family {u_ε}_{ε>0} converges to ū. Moreover, arguing as in the proof of Theorem 3.2, we infer that this convergence is strong in L²(Q). This implies that there exists ε_0 > 0 such that ‖u_ε − ū‖_{L²(Q)} < ρ for every ε < ε_0. Therefore, u_ε is a local minimizer of (P_ε) for every ε < ε_0. Indeed, for each ε < ε_0 we take a constant ρ_ε > 0 such that every u ∈ U_ad with ‖u − u_ε‖_{L^r(0,T;L^s(Ω))} ≤ ρ_ε belongs to B̄_ρ(ū). Then u is an admissible control for the restricted problem and, consequently, J_ε(u_ε) ≤ J_ε(u). In the remainder of this section, ū will denote a local minimizer of (P) satisfying the sufficient second order condition (3.1). Its corresponding state and adjoint state will be denoted by ȳ and φ̄, respectively. Hence, Theorem 3.3 implies the existence of a set {u_ε}_{ε>0} of local minimizers of the problems (P_ε) such that u_ε → ū strongly in L²(Q) as ε → 0. The next theorem estimates u_ε − ū.
Theorem 3.4 Let ū be a local minimizer of (P) satisfying the sufficient second order condition (3.1). Then, with the notation above, there exist ε_0 > 0 and L_κ > 0 such that

‖u_ε − ū‖_{L²(Q)} ≤ L_κ ‖φ_ε‖_{L²(Ω)} for all ε < ε_0.   (3.10)

Before proving this theorem, we establish two auxiliary results. First, we fix the following notation: y^ε and y_ε denote the solutions of the unperturbed equation (1.1) and the perturbed equation (3.8), respectively, corresponding to u = u_ε. Analogously, ϕ^ε and ϕ_ε stand for the corresponding adjoint states.

Lemma 3.1
There exist constants C > 0 and ε_1 > 0 such that the estimates (3.11) hold. Proof Since u_ε → ū in L²(Q), there exist constants C_1 > 0 and ε_1 > 0 such that ‖u_ε‖_{L²(Q)} ≤ C_1 for every ε ∈ (0, ε_1). From (2.6) and (3.6), we infer a uniform bound for the perturbed states y_ε. With the adjoint state equations satisfied by ϕ_ε, we obtain a corresponding uniform bound for the adjoint states. Since u_ε is a local minimizer of (P_ε), the projection formula (2.17) holds for every ε ∈ (0, ε_1). This leads to the existence of a constant M such that (3.12) holds. We set w_ε = y_ε − y^ε and subtract the equations for y_ε and y^ε. By the mean value theorem for real-valued functions, we obtain the linearized equation (3.13) with an intermediate state. Using (3.12), we deduce the claimed estimate for w_ε, which proves the first part of (3.11). To prove the second part, we put ψ_ε = ϕ_ε − ϕ^ε. Subtracting the corresponding adjoint equations, and using (2.3), (3.12), the mean value theorem, and the uniform estimate for ϕ_ε, the estimate for ψ_ε follows from the resulting partial differential equation, which concludes the proof.
Proof Let us take τ > 0 as in (3.3). We first prove that u_ε − ū ∈ E_τ^ū for every sufficiently small ε. The differences u_ε − ū obviously satisfy the sign conditions (3.5). Set v_ε = (u_ε − ū)/‖u_ε − ū‖_{L²(Q)}. Taking a subsequence, we can assume that v_ε ⇀ v in L²(Q). In the proof of Lemma 3.1, the boundedness of {u_ε}_{ε>0} in L^∞(Q) was established. Therefore, u_ε → ū strongly in every L^p(Q) with p < ∞. Using this fact along with the optimality of ū and u_ε, we obtain J′(ū)v_ε → 0 along the subsequence. Since this is true for any convergent subsequence of {v_ε}_{ε>0}, the convergence J′(ū)v_ε → 0 as ε → 0 holds for the whole family. Therefore, there exists ε_0 such that u_ε − ū ∈ E_τ^ū for all ε < ε_0. Hence, (3.14) follows from the fact that J is of class C² in L^r(0,T;L^s(Ω)) with 1/r + n/(2s) < 1, r, s ≥ 2, and the convergence ū + θ(u_ε − ū) → ū in this space.

Proof of Theorem 3.4 From the local optimality of ū and u_ε, we infer the variational inequalities J′(ū)(u_ε − ū) ≥ 0 and J′_ε(u_ε)(ū − u_ε) ≥ 0. Adding these inequalities, we get [J′_ε(u_ε) − J′(ū)](u_ε − ū) ≤ 0. Then, using (3.14), the mean value theorem, (2.10), and (3.11), we deduce, for ε small enough, a chain of inequalities that implies (3.10).

Stability Analysis: The Case κ = 0
In this case, due to Assumption (A4), the set U_ad is bounded in L^∞(Q). This simplifies some aspects of the analysis of (P). However, the second order analysis is more complicated. We follow [6] to formulate sufficient second order conditions for local optimality. Given ū ∈ U_ad satisfying the first order necessary optimality conditions (2.13)–(2.15), we define, for arbitrary τ > 0, the cones C_τ^ū, D_τ^ū, and G_τ^ū used below. The next result was proved in [6].

Definition 4.1
We say that ū is a strong local minimizer of (P) if there exists ε > 0 such that J(ū) ≤ J(u) for every u ∈ U_ad with ‖y_u − ȳ‖_{L^∞(Q)} ≤ ε. If the inequality is strict for u ≠ ū, we say that ū is a strict strong local minimizer. If J(ū) ≤ J(u) holds for every u ∈ U_ad such that ‖y_u − ȳ‖_{L^∞(0,T;L²(Ω))} ≤ ε, then ū is said to be a strong local minimizer in the sense of L^∞(0,T;L²(Ω)).
The reader is referred to [6, Lemma 2.8] for the proof of these statements.

Corollary 4.1 The control ū is a strong local minimizer of (P) if and only if it is a strong local minimizer of (P) in the sense of L^∞(0,T;L²(Ω)).
Proof If ū is a strong local minimizer of (P) in the sense of L^∞(0,T;L²(Ω)), then from the inequality ‖y_u − ȳ‖_{L^∞(0,T;L²(Ω))} ≤ √|Ω| ‖y_u − ȳ‖_{L^∞(Q)} we infer that ū is also a strong local minimizer of (P). The converse is proved by contradiction: since {u_k}_{k=1}^∞ is a bounded sequence in L^∞(Q), we can extract a subsequence, denoted in the same way, such that u_k ⇀* ũ in L^∞(Q). Moreover, it follows from Theorem 2.1 that y_k → y_ũ in L^∞(Q). Then, (4.6) implies that y_ũ = ȳ and, hence, ũ = ū. Selecting k large enough, we obtain a contradiction with (4.6).
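The elementary inequality used in the first part of this proof can be illustrated on a discretized function. A minimal numerical sketch with Ω = (0,1), so that |Ω| = 1 and √|Ω| = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
# Discretize Q = Omega x (0,T) with Omega = (0,1), so |Omega| = 1.
nt, nx = 50, 200
v = rng.uniform(-1.0, 1.0, size=(nt, nx))       # arbitrary bounded v on Q

sup_norm = np.abs(v).max()                      # ||v||_{L^inf(Q)}
l2_in_x = np.sqrt((v**2).mean(axis=1))          # t -> ||v(t)||_{L^2(Omega)}
linf_l2 = l2_in_x.max()                         # ||v||_{L^inf(0,T;L^2(Omega))}

# The bound ||v||_{L^inf(0,T;L^2)} <= sqrt(|Omega|) * ||v||_{L^inf(Q)}:
assert linf_l2 <= np.sqrt(1.0) * sup_norm
print(linf_l2, "<=", sup_norm)
```

The inequality simply averages |v(x,t)|² over x before taking the supremum in t, so it can never exceed √|Ω| times the global supremum.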
Proof We argue again by contradiction. Let {u_k}_{k=1}^∞ ⊂ U_ad with associated states {y_k}_{k=1}^∞ satisfy (4.8). As in the proof of Corollary 4.1, we find that y_k → ȳ in L^∞(Q). From Theorem 4.1, we deduce the existence of ε > 0 and δ > 0 such that (4.5) holds. Let us take k_0 such that ‖y_k − ȳ‖_{L^∞(Q)} ≤ ε and 1/k < δ for every k ≥ k_0. Then, (4.8) contradicts (4.5). From this corollary, we deduce that any control ū ∈ U_ad satisfying the first and second order optimality conditions (2.13)–(2.15) and (4.4) is a strict strong local minimizer of (P) in the sense of L^∞(0,T;L²(Ω)). Now, we analyze the relationship between (P) and the perturbed problems (P_ε) introduced in Sect. 3. We adopt the notation introduced there for the states y_ε = y^ε_{u_ε} and y^ε = y_{u_ε}, as well as for the adjoint states ϕ_ε = ϕ^ε_{u_ε} and ϕ^ε = ϕ_{u_ε}. Theorems 3.2 and 3.3 are reformulated as follows: Theorem 4.2 Let {u_ε}_{ε>0} be a family of global minimizers of the problems (P_ε). Any control ū that is a weak* limit in L^∞(Q) of a sequence {u_{ε_k}}_{k=1}^∞ with ε_k → 0 as k → ∞ is a global minimizer of (P). Moreover, the strong convergence y_{ε_k} → ȳ in L^∞(0,T;L²(Ω)) holds.
The proof of this theorem is almost the same as that of Theorem 3.2. The only difference is that we cannot prove the strong convergence of {u_{ε_k}}_{k=1}^∞ to ū in L²(Q), because κ = 0. Instead, the strong convergence y_{ε_k} → ȳ in L^∞(0,T;L²(Ω)) can be obtained. Indeed, if we subtract the equations satisfied by y_{ε_k} and ȳ, we get y_{ε_k} − ȳ = ψ_k + η_k, where ψ_k and η_k satisfy suitable linear equations. From the first equation, we deduce that {ψ_k}_{k=1}^∞ is bounded in C^{0,θ}(Q̄) for some θ ∈ (0,1). From this boundedness, we immediately obtain that ψ_k → 0 in C(Q̄) ⊂ L^∞(0,T;L²(Ω)) as k → ∞. From the second equation, we infer ‖η_k‖_{C([0,T];L²(Ω))} ≤ C ‖φ_{ε_k}‖_{L²(Ω)} for some constant C independent of k. Hence, {η_k}_{k=1}^∞ converges to zero in C([0,T];L²(Ω)). This proves the convergence y_{ε_k} → ȳ in L^∞(0,T;L²(Ω)). Although U_ad^ρ is not convex in general, with (2.7) it is easy to prove that U_ad^ρ is bounded and weakly* sequentially closed in L^∞(Q). Hence, every problem (P_ε) has at least one solution u_ε. Moreover, ū is the unique solution of (P). As in Theorem 3.3, every weak* limit in L^∞(Q) of a weakly* converging subsequence of {u_ε}_{ε>0} is ū. Therefore, the whole family {u_ε}_{ε>0} converges weakly* to ū in L^∞(Q). In addition, arguing as in the previous theorem, the associated states {y_ε}_{ε>0} converge strongly to ȳ in L^∞(0,T;L²(Ω)). This implies the existence of ε_1 > 0 such that ‖y_ε − ȳ‖_{L^∞(0,T;L²(Ω))} < ρ/3 for every ε ≤ ε_1. We prove that u_ε is a strong local minimizer of (P_ε) in the sense of L^∞(0,T;L²(Ω)). Let u ∈ U_ad be such that ‖y_u^ε − y_ε‖_{L^∞(0,T;L²(Ω))} < ρ/3. Using (2.5) and (3.6), we infer that {y_u^ε}_{ε>0} is bounded in L^∞(Q). Hence, we argue as in (3.13) to deduce the existence of a constant K, independent of u ∈ U_ad, such that ‖y_u^ε − y_u‖_{L^∞(0,T;L²(Ω))} ≤ K ‖φ_ε‖_{L²(Ω)}. Selecting ε_0 ∈ (0, ε_1] such that K ‖φ_ε‖_{L²(Ω)} < ρ/3 for every ε ≤ ε_0, we obtain ‖y_u − ȳ‖_{L^∞(0,T;L²(Ω))} < ρ. Therefore, u ∈ U_ad^ρ and, consequently, J_ε(u_ε) ≤ J_ε(u) holds.
This proves that u_ε is a strong local minimizer of (P_ε) in the sense of L^∞(0,T;L²(Ω)) for every ε < ε_0. By Corollary 4.1, this also holds in the sense of L^∞(Q); hence u_ε is a strong local minimizer of (P_ε).
Let ū be a strong local minimizer of (P) satisfying the sufficient second order condition (4.4). Its corresponding state and adjoint state will be denoted by ȳ and φ̄, respectively. From the quadratic growth condition (4.5), we know that ū is a strict strong local minimizer of (P). In view of this, Theorem 4.3 implies the existence of a set {u_ε}_{ε>0} of strong local minimizers of the problems (P_ε) such that u_ε ⇀* ū in L^∞(Q) and y_ε → ȳ strongly in L^∞(0,T;L²(Ω)).

Remark 4.2
Though κ = 0, due to the boundedness of U_ad in L^∞(Q), Lemma 3.1 is still valid. Moreover, taking w_ε = y_u^ε − ȳ and ψ_ε = ϕ_u^ε − φ̄ and arguing as in the last part of the proof of Lemma 3.1, we deduce the existence of ε_0 > 0 such that the analogous estimates (4.9) hold for every ε < ε_0. Now, we are able to prove a result on Hölder stability of optimal states. We recall the notation y_ε = y^ε_{u_ε} and ϕ_ε = ϕ^ε_{u_ε}.

Theorem 4.4 If {u_ε}_{ε>0} is a family of strong local minimizers of the problems (P_ε) according to Theorem 4.3, then there exists a constant L_0 > 0 such that

‖y_ε − ȳ‖_{L^∞(0,T;L²(Ω))} ≤ L_0 ‖φ_ε‖_{L²(Ω)}^{1/2}   (4.10)

holds for every sufficiently small ε.
Proof From (3.11) and the triangle inequality, we infer ‖y_ε − ȳ‖ ≤ ‖y_ε − y^ε‖ + ‖y^ε − ȳ‖ ≤ C ‖φ_ε‖_{L²(Ω)} + ‖y^ε − ȳ‖. As in the proof of Theorem 4.2, we have that y_ε → ȳ strongly in L^∞(0,T;L²(Ω)) as ε → 0. Therefore, we can apply (4.5) to deduce the estimate (4.11) for every sufficiently small ε, with the splitting J(u_ε) − J(ū) = I_1 + I_2 + I_3 of (4.12). Using again (3.11), we estimate I_1. Since u_ε is a global minimizer of (P_ε) and ū is a feasible control for this problem, we have I_2 ≤ 0. Finally, we can estimate I_3 like I_1 by using (4.9) instead of (3.11). Inserting these estimates in (4.12) and invoking (4.11), we obtain (4.10).
Unlike in the case κ > 0, in order to prove a stability estimate for the controls when κ = 0, we need an extra assumption. The proof of the stability inequality (3.10) was based on the second order condition (3.3). This condition does not hold for κ = 0, except for a few extreme cases; see [3]. That is why we have used (4.4).
Nevertheless, (4.4) leads only to the quadratic growth condition (4.5), which was crucial in the proof of the stability estimate (4.10). The question is whether we could have an inequality of the type J(ū) + δ ‖u − ū‖^r ≤ J(u) for all u ∈ U_ad such that ‖y_u − ȳ‖_{L^∞(Q)} ≤ ε, with some δ > 0, ε > 0, and r ∈ [1, ∞).
An inequality of this type would allow us to derive a stability estimate for optimal controls. It was proved in [15] that such an inequality is impossible unless ū is a bang-bang control. In [14], an inequality of the above type was proved under a structural assumption on the adjoint state φ̄. Following this idea, we assume

∃λ ∈ (0,1] and C_λ > 0 such that |{(x,t) ∈ Q : |φ̄(x,t)| ≤ ε}| ≤ C_λ ε^λ for all ε > 0.   (4.13)

This assumption implies that ū is a bang-bang control. Indeed, (4.13) implies that the set of points where φ̄ vanishes has zero Lebesgue measure. Moreover, (2.15) with κ = 0 yields that ū(x,t) = α if φ̄(x,t) > 0 and ū(x,t) = β if φ̄(x,t) < 0. Therefore, ū(x,t) belongs to {α, β} for almost every point (x,t) ∈ Q. We will prove that (4.13), along with the sufficient second order optimality condition (4.4), implies the stability of the optimal control with respect to the perturbations of the initial condition. We prepare the proof with some technical results.
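Assumption (4.13) can be made concrete: if φ̄ vanishes only where it degenerates linearly, the measure of the level set {|φ̄| ≤ ε} grows like ε, so (4.13) holds with λ = 1. A toy numerical check with a hypothetical scalar stand-in φ(t) = t − 1/2 on (0,1), not the actual adjoint state of the paper:

```python
import numpy as np

# Hypothetical stand-in for the adjoint state: phi(t) = t - 0.5 on (0,1),
# vanishing linearly at t0 = 0.5 (|Q| = 1 in this toy setting).
t = np.linspace(0.0, 1.0, 1_000_001)
phi = t - 0.5

def level_set_measure(eps):
    # approximate |{ |phi| <= eps }| by the fraction of grid points
    return np.mean(np.abs(phi) <= eps)

for eps in [0.2, 0.1, 0.05]:
    m = level_set_measure(eps)
    # here |{ |phi| <= eps }| = 2*eps, i.e. (4.13) with lambda = 1, C = 2
    print(f"eps={eps}: measure = {m:.4f} <= {2 * eps}")
```

By contrast, a function vanishing on a set of positive measure would violate (4.13) for every λ, which is consistent with the bang-bang interpretation above.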
Proof From Theorem 2.1 and the boundedness of U_ad in L^∞(Q), we deduce the existence of a constant M such that ‖y_u‖_{L^∞(Q)} ≤ M for all u ∈ U_ad. Moreover, taking u = ū and v = u − ū in (2.8) with u ∈ U_ad arbitrary, we conclude that z_{u−ū} ∈ C(Q̄). Subtracting the equations satisfied by y_u, ȳ, and z_{u−ū}, and setting w = y_u − ȳ − z_{u−ū}, we get a linear equation for w. Then, using (2.3), we obtain an estimate for w that yields the assertion, which proves the lemma.

Lemma 4.2 There exists a constant C_γ such that the estimate (4.15) holds. In the proof, one shows that ‖ψ‖_{L^∞(Q)} ≤ C_γ and, consequently, (4.15) follows.
Proof The proof of this lemma follows the steps of the corresponding proof of [7, Lemma 2], established for the elliptic case. We will use the following property: for all ρ > 0 there exists ε_ρ > 0 such that, if u ∈ U_ad and ‖y_u − ȳ‖_{L^∞(Q)} < ε_ρ, then the estimate (4.17) holds. For the proof of (4.17), the reader is referred to [13, Lemma 6]; see also [9, Lemma 3.5]. Moreover, we will use the following fact, established in the proof of Corollary 3 in [13]: there exists ε_0 > 0 such that (4.18) holds for every u ∈ U_ad with ‖y_u − ȳ‖_{L^∞(Q)} < ε_0. We will prove that (4.19) holds for all u ∈ U_ad satisfying ‖y_u − ȳ‖_{L^∞(Q)} < ε_ρ for some ε_ρ ∈ (0, ε_0]. Then, (4.16) is a straightforward consequence of (4.18) and (4.19). The proof is split into three cases. Case I: u − ū ∈ C_τ^ū. Since ū satisfies the first order optimality conditions, we have J′(ū)(u − ū) ≥ 0. Moreover, from (4.4) we get a coercivity estimate for J″(ū)(u − ū)². Then, taking ρ = μ/2 in (4.17) and using the above inequality, (4.16) follows with ε_ρ = ε_{μ/2}.
Case III: u − ū ∉ D_τ^ū and u − ū ∈ G_τ^ū. Let C_γ be the constant introduced in (4.15) and set τ* = τ/max{1, C_γ}. If u − ū ∉ G_{τ*}^ū, then Case II applies. Otherwise, we define sets Q_1 and Q_2 and, associated with these sets, consider the controls u_1 = (u − ū)χ_{Q_1} and u_2 = (u − ū)χ_{Q_2}, where χ_{Q_i} denotes the characteristic function of Q_i. It is an immediate consequence of (2.10) that J′(ū)(u − ū) = J′(ū)u_1 + J′(ū)u_2. The definition of u_1 yields u_1 ∈ D_τ^ū. Let us prove that u_1 ∈ G_τ^ū holds as well. Using (4.22), (4.15), and recalling the definition of τ*, we infer the required estimate. From the last two estimates for J′(ū)(u − ū) and the fact that τ* ≤ τ, we get that u_1 ∈ G_τ^ū and, consequently, u_1 ∈ C_τ^ū. Let us confirm that u_2 is a small perturbation of u_1. Indeed, this follows by using (4.22), the fact that u − ū ∈ G_τ^ū, and (4.14).
The reader is referred to [26, Lemma 6.3] for an extension of this result to sparse optimal control; see also [25]. Using the previous lemmas, we obtain the following result (Theorem 4.5), valid for all u ∈ U_ad such that ‖y_u − ȳ‖_{L^∞(Q)} ≤ ε, where σ and μ are introduced in Lemma 4.4 and in the assumption (4.4), respectively.
Proof Performing a Taylor expansion, from (4.25) and (4.16) with ρ = 1, we infer the claimed estimate. We finish this section by establishing a stability result for the optimal controls and associated states. Let ū be a strong local minimizer of (P) satisfying the sufficient second order condition (4.4) and Assumption (4.13). Its corresponding state and adjoint state will be denoted by ȳ and φ̄, respectively. From the growth condition (4.5), we know that ū is a strict strong local minimizer of (P). Hence, Theorem 4.3 implies the existence of a set {u_ε}_{ε>0} of strong local minimizers of the problems (P_ε) such that u_ε ⇀* ū in L^∞(Q) and y_ε → ȳ strongly in L^∞(0,T;L²(Ω)).
Theorem 4.6 With the notation above, there exist L_λ > 0 and ε_0 > 0 such that the corresponding Hölder (or, for λ = 1, Lipschitz) stability estimates for u_ε − ū and y_ε − ȳ hold for every ε < ε_0.

The Problem
Although semilinear parabolic control problems have been considered for decades, it was an open question whether problems with a semilinear parabolic equation and a standard quadratic tracking type objective functional may have more than one solution. Due to the quadratic structure of the objective functional, one could expect a hidden convexity of the problem. In general, there is none. Indeed, in [24] a semilinear elliptic optimal boundary control problem with quadratic objective functional was constructed that has two different optimal controls. The author also proved a remarkable result for functionals of the form J(u) = ½ ‖G(u) − z‖_H² in real Hilbert spaces H and U, where a mapping G : U → H and z ∈ H are given.
He proved the following: the functional J is convex for all z ∈ H if and only if G is affine. This result was extended in [16] to quite general tracking type functionals and applied to control-to-state mappings G associated with a nonsmooth elliptic equation, a Signorini type variational inequality, and an evolutionary obstacle problem. In our class of optimal control problems, G stands for the nonlinear control-to-state mapping and z = y_Q. The result of [24] shows that there exists a desired state function y_Q such that J is nonconvex. This does not yet imply the existence of multiple local minima, but it is an indication that they might exist for a suitable y_Q.
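The flavour of this equivalence can already be seen in one dimension. Take the non-affine toy map G(u) = u², a stand-in chosen purely for illustration, not the control-to-state map of (P): for z > 0 the tracking functional J(u) = ½ |G(u) − z|² has J″(0) = −2z < 0, so midpoint convexity fails at u = 0:

```python
# Toy illustration of the result from [24]: J(u) = 0.5*(G(u) - z)^2 with
# the non-affine map G(u) = u^2 is nonconvex for z > 0, since
# J''(u) = 6u^2 - 2z gives J''(0) = -2z < 0.
def J(u, z=1.0):
    return 0.5 * (u**2 - z) ** 2

h = 0.5
midpoint = J(0.0)                 # J at the midpoint of [-h, h]
average = 0.5 * (J(-h) + J(h))    # average of the endpoint values
# Convexity would require midpoint <= average; here it fails:
assert midpoint > average
print(midpoint, average)
```

For z ≤ 0 the same functional is convex on ℝ, matching the statement that convexity for all z forces G to be affine.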
Indeed, in this section we present a semilinear parabolic optimal control problem with at least two different local solutions, one of them being globally optimal. We have also constructed a problem with two different global solutions. However, it has a slightly different and more academic structure and will be discussed elsewhere.
We consider the problem (E) of minimizing the objective functional of (P) subject to the state equation (1.1) and to the pointwise control constraints α ≤ u(x,t) ≤ β a.e. in Q. The desired state y_Q is defined with a switching point t_s that will be fixed below, and the nonlinearity is f(y) = y(y − 1)(y + 1) = y³ − y. Therefore, the PDE is a particular case of the so-called Schlögl model of theoretical chemistry.

A First Locally Optimal Control
As in the previous sections, we denote by y_u the state associated with the control u. For u = 0, we obtain the associated state y_u = 0, because zero is one of the so-called fixed points of f, i.e., we have f(0) = 0.
The idea of the example is as follows: We shall show that ū = 0 is a strict local minimizer. Next, we construct another control that has a smaller objective value than ū. Because the problem has a (global) solution, there must be a global solution distinct from the zero control. Therefore, at least two different local solutions must exist. To proceed in this way, we consider the adjoint equation for ū = 0; the reader can easily check that its solution φ̄ can be computed explicitly. It is easy to verify that φ̄(x,t) ≥ 0 in Q if and only if 0 ≤ t_s ≤ t̄ = ln(½(e⁴ + 1)); notice that φ̄(x,0) = 0 iff t_s = t̄. Proof First of all, we observe that ū is an admissible control for (E). Moreover, ū satisfies the first order optimality conditions (2.13)–(2.15). Indeed, the variational inequality obviously holds because φ̄ ≥ 0, ū = 0, and every control u ∈ U_ad is nonnegative. In addition, according to (2.11), the second order derivative J″(ū) can be evaluated explicitly. Therefore, (3.3) and (4.4) hold for κ > 0 and κ = 0, respectively. Then, Theorems 3.1 and 4.1, along with Remark 4.2, imply that ū ≡ 0 is an L^p(Q)-strict local minimizer of (E) for every p ∈ [1, ∞].
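The threshold t̄ = ln(½(e⁴ + 1)) can be evaluated directly; its value agrees with the figure 3.325003... quoted in the next subsection:

```python
import math

# Threshold for the nonnegativity of the adjoint state associated with u = 0:
t_bar = math.log(0.5 * (math.exp(4.0) + 1.0))
print(f"t_bar = {t_bar:.6f}")   # t_bar = 3.325003
assert round(t_bar, 6) == 3.325003
```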
Finally, we mention the objective value for ū = 0; we recall that |Ω| = 1 and ȳ = 0. This value is independent of κ, because the Tikhonov regularization term vanishes for ū = 0.

Existence of Another (Globally) Optimal Control
Once and for all, we select t_s = 3.3. Notice that t_s < t̄ = 3.325003.... We recall that t_s defines the location of the switching point of y_Q. We define a bang-bang control u_τ with switching point τ = 0.02. Let us compute an upper bound for J(u_τ). We denote by y_τ the state associated with u_τ. Since u_τ does not depend on x and y_0 = 0, we get that y_τ is independent of x. With a slight abuse of notation, we write y_τ(t) = y_τ(x, t). Then y_τ satisfies the ordinary differential equation (5.3). By separation of variables, we can solve (5.3) in [τ, T], where the control u_τ is zero. The numerical computation of y_τ as well as of J(u_τ) delivers the value J(u_τ) ≈ 1.6864 + κ < J(ū). As a consequence, we deduce that a global minimizer of (E) distinct from ū must exist. For comparison, the parabolic optimal control problem has been solved numerically; the computed optimal objective value is 1.6140 for κ = 0.3. All these numerical computations were performed by Mariano Mateos (University of Oviedo), whose support we gratefully acknowledge.
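Since the concrete data of (5.3) are not restated in this excerpt, the following sketch only illustrates the separation-of-variables step for the uncontrolled part of the dynamics, y′ = y − y³ on [τ, T]: the closed-form solution (obtained via the logistic substitution w = y²) is compared against a Runge–Kutta integration for a hypothetical initial value y(τ).

```python
import math

def exact(y0, t):
    # Closed-form solution of y' = y - y^3, y(0) = y0, from separation of
    # variables: w = y^2 satisfies the logistic equation w' = 2w(1 - w),
    # hence y(t) = y0 e^t / sqrt(1 - y0^2 + y0^2 e^{2t}).
    e2t = math.exp(2.0 * t)
    return y0 * math.exp(t) / math.sqrt(1.0 - y0**2 + y0**2 * e2t)

def rk4(y0, t, n=10000):
    # Classical 4th-order Runge-Kutta integration of y' = y - y^3 on [0, t].
    f = lambda y: y - y**3
    h, y = t / n, y0
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return y

# Hypothetical initial value y(tau) = 0.2 on a hypothetical span of length 3;
# any 0 < y0 < 1 is attracted by the stable steady state y = 1.
print(exact(0.2, 3.0), rk4(0.2, 3.0))
```

The agreement of the two values confirms the closed form used in the text for the interval where u_τ vanishes.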

Example with Perturbed Initial Data
In the previous subsection, the considered controls depended on t only, i.e. they were constant with respect to x. In this way, we were able to solve the state and adjoint equations as ordinary differential equations. We do not know if other local minimizers exist for (E) that also depend on x. Nevertheless, the example had the flavour of an example for ODEs. By our perturbation analysis, we are able to construct an example that cannot be reduced to the discussion of ODEs. To this aim, we consider the perturbed example (E_ε) subject to

∂y_u^ε/∂t − Δy_u^ε + f(y_u^ε) = u in Q,  ∂_ν y_u^ε = 0 on Σ,  y_u^ε(0) = φ_ε in Ω,  (5.4)

and to the pointwise control constraints α ≤ u(x, t) ≤ β a.e. in Q, where

φ_ε(x) = εφ(x) with φ ∈ L^∞(Ω) and ε > 0.  (5.5)

Theorem 5.2 For every ε > 0, problem (E_ε) has a strict local minimizer u_ε in the L^p(Q)-sense, for every p ∈ [1, ∞], with associated state y_ε such that the estimates (5.8) hold for all ε > 0 and for some constants L_i > 0, i = 0, 1.
Proof We have already proved that ū ≡ 0 is a strict local minimizer of (E) in the L^p(Q)-sense, which implies (5.8). In the last estimate, we used u_ε ≤ β = 10.
Funding Open Access funding enabled and organized by Projekt DEAL.