Continuous Differentiability of the Value Function of Semilinear Parabolic Infinite Time Horizon Optimal Control Problems on L^2(Ω) Under Control Constraints

An abstract framework guaranteeing the local continuous differentiability of the value function associated with optimal stabilization problems governed by abstract semilinear parabolic equations, subject to a norm constraint on the controls, is established. It guarantees that the value function satisfies the associated Hamilton–Jacobi–Bellman equation in the classical sense. The applicability of the developed framework is demonstrated for specific semilinear parabolic equations.

Continuous differentiability of the value function with respect to the initial datum is an important problem in optimal feedback control theory. Indeed, if the value function is C^1, then it is the solution of a Hamilton–Jacobi–Bellman (HJB) equation and its negative gradient can be used to define an optimal state feedback law. The subject matter of this paper is the local continuous differentiability of the value function V for infinite horizon optimal control problems subject to semilinear parabolic equations and norm constraints on the control. Such problems are intimately related to stabilization problems, which are often cast as infinite horizon optimal control problems. Investigating infinite horizon problems constitutes one of the specificities of this paper. Another one is the fact that we focus on the differentiability of V on (subsets of) L^2(Ω). Thus we need to consider the semilinear equations with initial data y_0 ∈ L^2(Ω). As a consequence the solutions of the semilinear equations only enjoy low Sobolev-space regularity. This restricts the class of nonlinearities, compared to those which are admissible if the states are in L^∞((0, ∞) × Ω), which is the situation typically addressed in the literature on optimal control [Cas] and [Tro2]. The latter necessitates taking the initial conditions in spaces strictly smaller than L^2(Ω). Here we consider L^2(Ω), first due to intrinsic interest, and secondly because ultimately the HJB equation should be solved numerically, which is easier in an L^2(Ω) setting than in other topologies, such as H^1(Ω). Let us also recall that one of the approaches to solving the HJB equation is policy iteration. It assumes that the value function is C^1.
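As an aside, the policy iteration just mentioned can be illustrated in a finite-dimensional setting. The sketch below implements Kleinman's algorithm for an unconstrained LQR problem, where each policy-evaluation step is a Lyapunov solve and each improvement step uses the gradient of the current (smooth, quadratic) value function. The matrices, the initial stabilizing gain, and the iteration count are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def solve_lyapunov(Acl, M):
    """Solve Acl^T P + P Acl = -M for symmetric P via the Kronecker (vec) form."""
    n = Acl.shape[0]
    L = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    P = np.linalg.solve(L, -M.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (P + P.T)  # symmetrize against round-off

def policy_iteration(A, B, Q, R, K0, iters=15):
    """Kleinman's policy iteration for min ∫ (y^T Q y + u^T R u) dt with u = -K y.

    Requires a stabilizing initial gain K0; each iterate remains stabilizing and
    P converges quadratically to the stabilizing Riccati solution."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                           # closed-loop generator
        P = solve_lyapunov(Acl, Q + K.T @ R @ K)  # evaluate current policy
        K = np.linalg.solve(R, B.T @ P)           # policy improvement
    return K, P

# Scalar example a = b = q = r = 1: the Riccati solution is p = 1 + sqrt(2).
K, P = policy_iteration(np.array([[1.0]]), np.array([[1.0]]),
                        np.array([[1.0]]), np.array([[1.0]]),
                        K0=np.array([[2.0]]))
```

Here each intermediate value function V_K(y) = yᵀP y is smooth; it is the infinite-dimensional analogue of this smoothness that the present paper establishes.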
The underlying analysis demands stability and sensitivity analysis of infinite dimensional optimal control problems subject to nonlinear equations. For this purpose we utilize the theory of generalized equations as established by [Don] and [Rob]. It involves first order approximations of the state and adjoint equations, which lead to restrictions on the class of nonlinearities which can be admitted. We refer to the section on examples in this respect.
The current investigations are to some degree a continuation of the first author's work on optimal feedback control for infinite dimensional systems. In [BKP1, BKP2, BKP3] Taylor approximations of the value function for problems with a concrete structure, namely bilinear control systems and the Navier–Stokes equations, were investigated, and differentiability of the value function was obtained as a by-product. In these investigations norm constraints were not considered. Here we admit norm constraints and we focus on semilinear equations. Let us also note that the systems investigated in [BKP1, BKP2, BKP3] share the property that the second derivatives with respect to the state variable of the nonlinearity in the state equation do not depend on the state itself.
Let us also compare our work to the developments in the field of parametric sensitivity analysis of semilinear parabolic equations under control constraints. There are many papers focusing on stability and sensitivity analysis of finite time horizon problems along with pointwise control constraints, see e.g. [BM, GHH, Gri, GV, Mal, MT, Tro1, Wac], and the literature cited there. First, these papers, except for [GV, Tro1], consider the case with initial data in H^1(Ω) or C(Ω̄). In [GV], again, the third derivative of the nonlinearity is zero. Secondly, all of them consider the finite horizon case. Since we treat infinite horizon problems we have to guarantee stabilizability (for small initial data) under control constraints. Then we use a fixed point argument to obtain well-posedness of the system. Well-posedness and stability with respect to parameters of the adjoint equation is significantly more involved for infinite horizon problems than for finite horizon problems. It requires techniques different from those used in the finite horizon case. Another aspect is the proper characterization of the adjoint state at t = ∞.
In the finite dimensional case there is, of course, a tremendous amount of work on the treatment of the value function if it is not C^1. Fewer papers concentrate on the case where the value function enjoys smoothness properties. We mention [Goe] and [CF] in this respect.
In order to achieve this goal, we lay out the following setup. In Section 2, we consider an abstract parametric optimization problem with an equality constraint and an additional convex constraint. Existence of an optimal solution, of a multiplier associated with the equality constraint, and Lipschitz stability of the component of the state variable which lies in the complement of the kernel of the linearized constraint will be established. This result is necessary but not sufficient for the further developments, since stability is obtained in a norm which is too weak, and since the stability estimate does not yet involve the component in the kernel of the linearized constraint nor the multiplier, i.e. the adjoint states. At the level of Section 2 this remains as Assumption (H7). In Section 3 we specify the concrete optimal stabilization problem and a set of conditions, most importantly on the nonlinearity of the state equation, under which Assumption (H7) can be established, for initial data y_0 ∈ L^2(Ω). Section 3 also contains a summary of the main results of this paper. They are stated as theorems with slightly stronger assumptions than eventually necessary, for the sake of easing the presentation. Section 4 is dedicated to verifying the assumptions of the general setup of Section 2 for the concrete optimal control problem stated in Section 3.
In conclusion we obtain the Lipschitz continuity, in the appropriate norms, of all the variables appearing in the optimality system with respect to the parameter of interest, which in our case is the initial condition y_0. Since our analysis is a local one involving second order optimality conditions, solutions to the optimality system are related to local solutions to the optimal control problem. As a corollary to these results we obtain that the local value function is Fréchet differentiable. In Section 5, we show that in the neighborhood of global solutions the value function V satisfies the Hamilton–Jacobi–Bellman (HJB) equation in the strong sense. Finally, Section 6 is devoted to demonstrating that the developed framework is applicable to some concrete examples, namely linear systems, Fisher's equation, and parabolic equations with globally Lipschitz nonlinearities. All our results require a smallness assumption on the initial condition y_0. Two aspects need to be taken into consideration in this respect. First, y_0 has to be sufficiently small so that the controlled system is stable. Secondly, a second order optimality condition is needed. For this to hold, a sufficient condition is provided by smallness of the adjoint state, which in turn can be implied by smallness of y_0. We stress that these two issues are related but of an independent nature.
2 Lipschitz stability for an abstract optimization problem.
Here we present a stability result for an abstract, infinite dimensional optimization problem which will be the building block for the results below. This result is geared towards exploiting the specific nature of optimization problems with differential equations as constraints. First, existence of a dual variable will result from a regular point condition. Subsequently the Lipschitz stability result is obtained in two steps. In the first one, we rely on the relationship between the linearized optimality conditions and an associated linear-quadratic optimization problem with an extra convex constraint. This approach is useful since it provides the existence of solutions to the linearized system on the basis of variational techniques. However, it dictates certain norms for the involved quantities. These norms are too weak for our goal of obtaining Lipschitz continuity of the adjoint variables in such a manner that differentiability of the cost with respect to the initial conditions can be argued. Therefore, in a second step we exploit the specific structure of the optimality system, using the fact that it is related to a parabolic optimal control problem, to obtain the Lipschitz continuity in the stronger norms. This two step approach is also present in some of the earlier work on stability and sensitivity analysis quoted in the introduction. But due to the fact that these papers considered finite horizon problems, it arose there only as a byproduct which improved the regularity of the adjoints. In our work it is essential to reach our goal. This is why we decided to formalize this two step approach, which was not done in earlier work.
We consider a parametric optimization problem (P_q) with a parameter dependent equality constraint, and a general constraint described by x ∈ C, where C is a closed convex subset of a real Hilbert space X. Further, W is a real Hilbert space and P is a normed linear space. In the application that we have in mind, the parameter q will appear as the initial condition in the dynamical system. The following Assumption (H1) is assumed to hold throughout.
Assumption H1. q_0 ∈ P is a nominal reference parameter, x_0 is a local solution of (P_{q_0}), f : X → R^+ is twice continuously differentiable in a neighborhood of x_0, and e : X × P → W is continuous and twice continuously differentiable w.r.t. x, with first and second derivatives Lipschitz continuous in a neighborhood of (x_0, q_0).
The derivatives with respect to x will be denoted by primes, and the derivatives w.r.t. y and u later on are denoted by subscripts. They are all understood in the Fréchet sense.
We introduce the Lagrangian L : X × P × W^* → R associated with (P_q) by L(x, q, λ) = f(x) + ⟨λ, e(x, q)⟩_{W^*,W}. (2.1) Next, further relevant assumptions are introduced, where int denotes the interior in the W topology. This regularity condition implies the existence of a Lagrange multiplier λ_0 ∈ W^*, see e.g. [MZ], such that the following first order condition holds, where ∂I_C(x) denotes the subdifferential of the indicator function of the set C at x ∈ X.
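The displays for the regular point condition and the first order condition did not survive extraction. A plausible reconstruction, consistent with the Lagrangian (2.1) and with the subdifferential ∂I_C introduced above (the exact form is an assumption), is:

```latex
0 \in \operatorname{int}\bigl\{\, e'(x_0,q_0)(x - x_0) : x \in C \,\bigr\} \subset W,
\qquad\text{(regular point condition)}
```

```latex
0 \in f'(x_0) + e'(x_0,q_0)^{*}\lambda_0 + \partial I_C(x_0),
\qquad e(x_0, q_0) = 0 .
```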
Condition (H3) is somewhat stronger than a second order sufficient optimality condition, since it does not take into consideration the activity or inactivity of the constraints. Such weaker second order conditions typically allow one to derive quadratic positive definite lower bounds on the cost and Hölder continuity with respect to perturbations. For Lipschitz continuity and differentiability, stronger assumptions such as (H3) are typically assumed. We refer exemplarily to [Gri, GHH, GV, Wac], and [IK, Section 2.3]. The constraints in these references, however, are not identical with those of the present paper.
The stability of (x_0, λ_0) with respect to perturbations of q at q_0 will be based on Robinson's strong regularity condition, which involves the linearized form of the optimality condition. We define a multivalued operator T and observe that (2.6) is equivalent to the corresponding inclusion. Here it is understood that T is evaluated at (x_0, q_0, λ_0) ∈ X × P × W^*. But T is not yet the mapping for which we need to verify the Robinson–Dontchev strong regularity condition in our context. This relates to the fact that we must treat the multiplier λ in a smaller space than W^*. Before we can properly specify this condition some additional preparation is necessary. We first introduce Banach spaces with continuous injections. We emphasize that X^* should not be confused with (X)^*. A restriction of T will be defined as a multivalued operator T : X × W^* → X^* × W. Indeed, in applications to optimal control problems extra regularity of multipliers can be obtained by investigating the solutions of (2.3), see e.g. Section 3. In the context of optimal stabilization problems this structural property will become transparent in Proposition 4.1 and Proposition 4.2, see also [BKP3, Proposition 15]. It will turn out to be essential for our purposes. But the situation where the multiplier has extra regularity is also of abstract interest. When studying stability in this setting, this means that the second coordinate of the domain of T needs to be changed from W^* to the smaller space. This entails that the range space of T has to be modified appropriately, in order to obtain stability of the λ coordinate. For this purpose we introduce X^* ⊂ X^*. The reason for further restricting X to the smaller space will become evident in the proof of Proposition 4.2. It is related to the fact that we consider infinite horizon problems. A concrete use of these spaces is elaborated in detail in Subsection 3.2.2.
Now we adapt the conditions on f and e to the choice of the spaces in (2.8).

Assumption H4.
There exists a neighborhood of (x_0, q_0) such that (ii) the restriction of e′(x, q)^* to W^* defines operators e′(x, q)^* ∈ L(W^*, X^*) for every (x, q) in this neighborhood. With these assumptions holding we define the restricted linearized Lagrangian (2.9). Next we adapt ∂I_C ⊂ X^* to the situation of (2.8) and define for x ∈ X the set valued mapping (2.10). We henceforth assume that (x_0, λ_0) ∈ X × W^*; this will also follow as a special case of (H7) below.
The following assumption will guarantee that the restriction T of T is well-defined as an operator from X × W^* to X^* × W, and the one beyond is needed for Lipschitz continuous dependence of local solutions to (P_q) with respect to q.
(2.12) Moreover, (2.6) restricted to X × X^* results in (2.13), and the multivalued operator T : X × W^* → X^* × W related to (2.7) is defined accordingly. Observe that (2.13) is equivalent to the corresponding inclusion. Existence and Lipschitz continuity of solutions in a neighborhood of (x_0, q_0, λ_0) will follow from the strong regularity assumption, which requires us to show that there exist neighborhoods V̂ ⊂ X^* × W of 0 and Û = Û_1 × Û_2 ⊂ X × W^* of (x_0, λ_0) such that T^{-1} has the properties that T^{-1}(V̂) ∩ Û is single-valued and that it is Lipschitz continuous from V̂ to Û, see [Don] (and also [Rob], [IK, Definition 2.2, p. 31], in case X = X, W^* = W^*, X^* = X^*). We approach the strong regularity assumption in two steps. In the first one we argue invertibility of T and Lipschitz continuity of the variable x in X. For this purpose we exploit the symmetry of T and consider an associated variational problem.
In our specific situation the inverse of T, and consequently of T, is single-valued, and thus the restriction to the neighborhood Û is not needed. Existence and Lipschitz continuity of λ, as well as Lipschitz continuity of x in the small space X × W^*, remains an assumption in the generality of problem (P_q). It will be verified in a second step for the optimal stabilization problems in the following sections.
For the proof we shall employ the following lemma, in which A ∈ L(X, X^*) and E ∈ L(X, W) denote generic operators. For the sake of completeness we also include its proof.
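The statement of the lemma is missing from the extracted text. Judging from the proof that follows (the functional J, the set S(b̃), and the variational inequality), it presumably concerns the linear-quadratic auxiliary problem

```latex
\min_{x \in S(\tilde b)} \; J(x) := \tfrac12 \langle Ax, x \rangle_{X^*,X}
  + \langle \tilde a, x \rangle_{X^*,X},
\qquad S(\tilde b) := \{\, x \in C : Ex = \tilde b \,\},
```

with data (ã, b̃) ∈ X^* × W, whose solution satisfies the variational inequality and the multiplier equation (2.19) recalled in the proof; this reconstruction is inferred from the proof, not verbatim from the source.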
By (H3) the sequences {y_n}_{n=1}^∞ and hence {x_n}_{n=1}^∞ are bounded. Thus there exists a subsequence {x_{n_k}} with weak limit x = x(ã, b̃) in S(b̃). Since J is weakly lower semicontinuous, we have that J(x) ≤ lim inf_{k→∞} J(x_{n_k}), and x minimizes J over S(b̃). This further implies that ⟨Ax + ã, v − x⟩_{X^*,X} ≥ 0 for all v ∈ S(b̃). Uniqueness of x follows from (H3). The regular point condition implies the existence of a multiplier λ = λ(ã, b̃) ∈ W^* such that (2.19) holds, see e.g. [IK, Theorem 1.6].

Proof of Theorem 2.1.
(i) The proof of the first assertion of Theorem 2.1 is based on the implicit function theorem of Dontchev for generalized equations, see [Don, Theorem 2.4, Remark 2.5]. We introduce the mapping F : X × P × W^* → X^* × W given by F(x, q, λ) = (L′(x, q, λ), e(x, q)), and observe that Assumption (H6) implies that (2.20) holds for all (x, q_1, λ) and (x, q_2, λ). By (H1) and (H5), and using the integral mean value theorem, it can be argued that the linearization strongly approximates F at (x_0, q_0, λ_0) in the sense of Dontchev [Don]. In the next two steps the strong regularity condition for T will be verified.
(iii) (Uniqueness and Lipschitz continuity) Let (β_1, β_2) ∈ Ṽ and (β̃_1, β̃_2) ∈ Ṽ with corresponding solutions (x, λ) ∈ X × W^* and (x̃, λ̃) ∈ X × W^*. By the first equations in (2.23) and (2.24) we obtain two inequalities, and combining these inequalities we arrive at the estimate below. The second equalities in (2.23) and (2.24) then imply (2.30). From the first equation in (2.21) we obtain a further identity; next we restrict the perturbation parameters accordingly. The analogous equation holds with (x, λ, β_1) replaced by (x̃, λ̃, β̃_1). By (H3), (2.28) and Assumption (H7) we find an estimate in which k denotes the embedding constant of W^* into W^*. Using (2.30) and rearranging terms, there exists a constant k_2 > 0 such that (2.32) holds. Applying (2.30) again, this implies the existence of k_3 such that a further estimate holds. Another application of (H7) and (2.33) implies the existence of a constant k_4 and a neighborhood V̂ of the origin in X^* × W such that the desired Lipschitz stability estimate holds.
(v) (Local solution to (P_q)) Now we show that there exists a neighborhood Ñ of q_0 such that for q ∈ Ñ the second order sufficient optimality condition is satisfied at x(q), so that x(q) is a local solution of (P_q) by e.g. [IK, Theorem 2.12, p. 42]. Due to (H3) and regularity of f and e we obtain the estimate below. Let us define E_q = e_y(x(q), q) for q ∈ N(q_0). By the surjectivity of E_{q_0} and regularity of e there exists a neighborhood Ñ ⊂ N(q_0) such that E_q is surjective for all q ∈ Ñ. Here we also use continuity of q → e_y(x(q), q) from P to W at q_0, which follows from (H1) and the continuity of q → x(q) at q_0. Consequently there exist δ_0, γ > 0 such that the corresponding z satisfies ‖z‖ ≤ γ‖h‖, by [IK, Lemma 2.13, p. 43]. Let us define the orthogonal projection onto ker E_q given by P_{ker E_q} = I − E_q^*(E_q E_q^*)^{-1} E_q. We choose Ñ so that the required smallness holds, and hence ‖z‖ ≤ γ‖h‖. From (2.35) this implies L′′(x(q), q, λ(q)) ≥ δ_0 ‖x‖^2 for all x ∈ ker E_q.
This concludes the proof.
3 Differentiability of the value function for optimal stabilization subject to semilinear parabolic equations.
Here we describe the optimal control problems which we shall analyze and state the main results.

Notation
Let Ω be an open, connected, bounded subset of R^d with Lipschitz continuous boundary Γ. The associated space-time cylinder is denoted by Q = Ω × (0, ∞) and the associated lateral boundary by Σ = Γ × (0, ∞). We define the Hilbert spaces below, where U is a Hilbert space which will be identified with its dual. Observe that the embedding V ⊂ Y is dense and compact. Further, V ⊂ Y ⊂ V^* is a Gelfand triple. Here V^* denotes the topological dual of V with respect to the pivot space Y. For any T ∈ (0, ∞) we define the space W(0, T) endowed with its natural norm. We shall frequently use that W_∞ embeds continuously into C([0, ∞); Y), see e.g. [LM, Theorem 4.2], and that lim_{t→∞} y(t) = 0 for y ∈ W_∞, see e.g. [CK]. The set of admissible controls 𝒰_ad is chosen with a positive constant η. We further set U_ad = {v ∈ U : ‖v‖_U ≤ η} and denote by P_{U_ad} the projection of U onto U_ad. For this choice of admissible controls, the dynamical system can be stabilized for all sufficiently small initial conditions in Y, see Corollary 4.3.
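The displays defining the function spaces were lost in extraction. The following standard definitions are consistent with the Gelfand triple V ⊂ Y ⊂ V^* and with the later use of W_∞, 𝒰 and 𝒰_ad; they are a reconstruction, not verbatim from the source:

```latex
W(0,T) = \bigl\{\, y \in L^2(0,T;V) : \dot y \in L^2(0,T;V^*) \,\bigr\},
\qquad
\|y\|_{W(0,T)}^2 = \|y\|_{L^2(0,T;V)}^2 + \|\dot y\|_{L^2(0,T;V^*)}^2,
```

```latex
W_\infty = W(0,\infty), \qquad I = (0,\infty), \qquad \mathcal U = L^2(I;U),
\qquad
\mathcal U_{ad} = \bigl\{\, u \in \mathcal U : \|u(t)\|_U \le \eta
  \ \text{for a.e. } t \in I \,\bigr\}.
```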

Problem formulation and assumptions.
We focus on the stabilization problem for an abstract semilinear parabolic equation, formulated as an infinite horizon optimal control problem under control constraints, subject to the semilinear parabolic equation (3.2b), (3.2c). Throughout, F is the substitution operator associated with a mapping f : R → R, so that (F y)(t) = f(y(t)). Sufficient conditions which guarantee the existence of solutions to (3.2b), (3.2c), as well as of solutions (ȳ, ū) to (P), for y_0 ∈ Y sufficiently small, will be given below. We shall also make use of the adjoint equation associated with an optimal state ȳ. Its adjoint state p will be considered in L^2(I; V) or in W_∞. The following assumption will be essential.
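The displays for the cost functional, the state equation and the adjoint equation did not survive extraction. A plausible form, consistent with the operators A, B, F introduced here and with the weight α appearing in Section 4 (the precise cost is an assumption of this reconstruction), is:

```latex
(P)\qquad \min_{u \in \mathcal U_{ad}} \;
  \frac12 \int_0^\infty \|y(t)\|_Y^2 \, dt
  + \frac{\alpha}{2} \int_0^\infty \|u(t)\|_U^2 \, dt,
```

subject to the state equation and, at an optimal state ȳ, the adjoint equation

```latex
\dot y = Ay + F(y) + Bu \ \ \text{in } L^2(I;V^*), \qquad y(0) = y_0,
\qquad\qquad
-\dot p = A^* p + F'(\bar y)^* p + \bar y, \qquad \lim_{t \to \infty} p(t) = 0 .
```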

Assumptions A.

A1.
The operator A, with domain D(A) ⊂ Y and range in Y, generates a strongly continuous analytic semigroup e^{At} on Y and can be extended to A ∈ L(V, V^*).
A2. B ∈ L(U, Y) and there exists a stabilizing feedback operator K ∈ L(Y, U) such that the semigroup e^{(A−BK)t} is exponentially stable on Y.
A4. F : W(0, T) → L^1(0, T; H^*) is weak-to-weak continuous for every T > 0, for some Hilbert space H which embeds densely in V.
Note that L^1(0, T; H^*) contains L^2(0, T; V^*), since H embeds densely in V and hence V^* ⊂ H^*.

Remark 3.1. The requirement that F(0) = 0 in (A2) is consistent with the fact that we focus on the stabilization problem with 0 as steady state for (3.2b). Without loss of generality we further assume that F′(0) = 0, (3.3) which can always be achieved by treating F′(0) as a perturbation of A.
Remark 3.2. Let us assume that (A3) holds. Then, in view of the fact that F is a substitution operator, we have [F′(y)v](t) = f′(y(t))v(t) for y and v in W_∞. For examples of functions F which satisfy (A4) we refer to Section 6.

Abstract setup.
Here we relate problem (P) to the abstract problem (P_q), which is used with the following spaces. To express (P_q) for the present case, we set x = (y, u) ∈ W_∞ × 𝒰, the parameter q becomes the initial condition, and e(x, q) = e(y, u, y_0) is defined accordingly. By (A3) the mapping e is Fréchet differentiable with respect to x = (y, u) ∈ W_∞ × 𝒰. The Lagrange functional L is defined with (p, p_1) ∈ L^2(I; V) × Y corresponding to the abstract Lagrange multiplier λ ∈ W^*.
In the remainder of this subsection we specify the mappings T and T for problem (P). This will facilitate the proofs of the main results further below.
At first we take a closer look at the adjoint equation (3.8). Now we assume that F′(y) is not only an element of L(W_∞, L^2(I; V^*)) but rather that it can be extended to an operator F′(y) ∈ L(L^2(I; V), L^2(I; V^*)). This is guaranteed by (A5) at minimizers ȳ. Then (3.8) implies that p ∈ W_∞, and hence p_1 ∈ C(Ī; Y) and p_1 = p(0), see Proposition 4.1. In particular (p, p_1) = (p, p(0)) ∈ W_∞ × Y, and (3.8) can equivalently be expressed for all v ∈ L^2(I; V). From now on, let q_0 = ȳ_0 denote a reference (or nominal) parameter with associated solution x_0 = (ȳ, ū). In Proposition 4.1 we shall argue that the regular point condition, Assumption (H2), is satisfied and that consequently there exists a Lagrange multiplier (p̄, p̄_1) such that the pair (x_0, λ_0) = (ȳ, ū, p̄, p̄_1) satisfies (2.3). Moreover, it will turn out that p̄ ∈ W_∞, p̄_1 = p̄(0), and that ū ∈ 𝒰 ∩ C(Ī; U). For convenience let us present (2.3) for the present case at (ȳ, ū, ȳ_0). We stress that while the Lagrange multiplier p̄ belongs to W_∞, the operator E_1^* in (3.9) is still considered as an element of L(L^2(I; …)). We are now prepared to specify the multivalued operators corresponding to (2.7) and (2.14) in (3.13) and (3.14). In (3.14), we underline the elements which are taken from different domains when compared to (3.13). The range of the first two coordinates of T is smaller than that of T. Accordingly we can make use of (3.9) when moving from the first row of (3.13) to the first row of (3.14).
For convenience of the subsequent work, we recall that the strong regularity condition introduced below (2.14) requires us to find neighborhoods of 0 and of (ȳ, ū, p̄, p̄(0)) such that the perturbed system admits a unique solution (y, u, p, p(0)) ∈ Û depending Lipschitz continuously on β.
Remark 3.3. We observe that, as a consequence of (A3) and Remark 3.2, the operator T is continuous.
Subsequently we shall frequently refrain from the underline-notation since the meaning should be clear from the context.

Main Theorems.
In this subsection, we present the main theorems of this paper. The first theorem asserts local continuous differentiability of the value function V w.r.t. y_0, for y_0 small enough. The second theorem establishes that V satisfies the HJB equation in the classical sense. The proof of the first theorem is based on Theorem 2.1. It will be given in Section 4 below. For this purpose it will be shown that Assumptions A imply (H1)-(H7). Moreover, we need to assert the underlying assumption that problem (P) is well-posed. This will lead to a smallness assumption on the initial states y_0. Consequently it would suffice to assume that (A3) and (A4) only hold locally in a neighborhood of the origin. Concerning (A5), observe that it is not implied by (A3). It is vacuously satisfied for ȳ = 0, which is the case for y_0 = 0, since then F′(0) = 0, see (3.3).
We invoke Theorem 2.1 to assert the Lipschitz continuity of the state, the adjoint state, and the control with respect to the initial condition y_0 ∈ Y in the neighborhood of a locally optimal solution (ȳ, ū) corresponding to a sufficiently small reference initial state ȳ_0. This will imply the differentiability of the value function associated with local minima. We shall refer to the value function associated with local minima as the 'local value function'.
Theorem 3.1. Let Assumptions (A) hold. Then associated with each local solution (ȳ(y_0), ū(y_0)) of (P) there exists a neighborhood U(y_0) ⊂ Y such that the local value function V : U(y_0) ⊂ Y → R is continuously differentiable, provided that y_0 is sufficiently close to the origin in Y.
To obtain an HJB equation we additionally require that t → (F(ȳ))(t) is continuous with values in Y for global solutions (ȳ, ū) to (P) with y_0 ∈ D(A). In view of the fact that for y_0 ∈ V we can typically expect that the solutions of semilinear parabolic equations satisfy y ∈ L^2(I; …), this is not a restrictive assumption beyond what is already assumed in (A3).
Theorem 3.2. Let Assumptions (A) hold, and let (ȳ(y_0), ū(y_0)) denote a global solution of (P), for y_0 ∈ D(A) with sufficiently small norm in Y. Assume that there exists T_{y_0} > 0 such that F(ȳ) ∈ C([0, T_{y_0}); Y). Then the Hamilton–Jacobi–Bellman equation (3.17) holds at y_0, and the optimal feedback law is determined accordingly. The condition on the smallness of y_0 will be discussed in Remark 4.2 below. Roughly, it involves well-posedness of the optimality system and second order sufficient optimality at local solutions. More detailed, respectively stronger, statements of Theorem 3.1 and Theorem 3.2 will be given in Theorem 4.1 and Theorem 5.1 below. The regularity assumption F(ȳ) ∈ C([0, T_{y_0}); Y) of Theorem 3.2 will be addressed in Section 6.
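Equation (3.17) and the feedback law are missing from the extracted text. Assuming a quadratic cost with weight α > 0 (the cost form is itself an assumption here), the constrained HJB equation and the projected feedback law would take the standard form; the following is a reconstruction, not verbatim from the paper:

```latex
\min_{u \in U_{ad}} \Bigl\{ \bigl( \nabla V(y_0),\, A y_0 + F(y_0) + B u \bigr)_Y
  + \tfrac12 \|y_0\|_Y^2 + \tfrac{\alpha}{2} \|u\|_U^2 \Bigr\} = 0,
\qquad
\bar u(y_0) = P_{U_{ad}}\Bigl( -\tfrac{1}{\alpha}\, B^{*} \nabla V(y_0) \Bigr).
```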
In this section we give the proof of Theorem 3.1. Many of the technical difficulties arise from the fact that we are working with an infinite horizon optimal control problem. In this respect we can profit from techniques which were developed in [BKP3], which, however, do not include the case of constraints on the norm. Throughout we assume that Assumptions (A1)-(A4) hold.

Well-posedness of problem (P).
Here we prove well-posedness for (P) with small initial data. First, we recall two consequences of the assumption that A is the generator of an analytic semigroup.
Consequence 1. Since A generates a strongly continuous analytic semigroup on Y, there exist ρ ≥ 0 and θ > 0 such that the associated resolvent estimate holds.

Consequence 2. For all y_0 ∈ Y, f ∈ L^2(0, T; V^*), and T > 0, there exists a unique solution y ∈ W(0, T) to ẏ = Ay + f, y(0) = y_0. (4.1) Furthermore, y satisfies ‖y‖_{W(0,T)} ≤ c(T)(‖y_0‖_Y + ‖f‖_{L^2(0,T;V^*)}) for a continuous function c. Assuming that y ∈ L^2(0, ∞; Y), consider the equation ẏ = A_ρ y + f_ρ, where A_ρ = A − ρI and f_ρ ∈ L^2(I; V^*). Then the operator A_ρ generates a strongly continuous analytic semigroup on Y which is exponentially stable, see [BPDM, p. 115, Theorem II.1.2.12]. It follows that y ∈ W_∞, that there exists M_ρ such that ‖y‖_{W_∞} ≤ M_ρ(‖y_0‖_Y + ‖f_ρ‖_{L^2(I;V^*)}), and that y is the unique solution to the shifted equation.

Lemma 4.1. There exists a constant C > 0 such that for all δ ∈ (0, 1] and all y_1, y_2 ∈ W_∞ with ‖y_1‖_{W_∞} ≤ δ and ‖y_2‖_{W_∞} ≤ δ, it holds that ‖F(y_1) − F(y_2)‖_{L^2(I;V^*)} ≤ δ C ‖y_1 − y_2‖_{W_∞}. (4.4)

Proof. Let y_1, y_2 be as in the statement of the lemma. Using (A3) and Remark 3.1 we obtain the estimate

Now the claim follows by assumption (A3).
Lemma 4.2. Let A_s be the generator of an exponentially stable analytic semigroup e^{A_s t} on Y. Let C denote the constant from Lemma 4.1. Then there exists a constant M_s such that for all y_0 ∈ Y and f ∈ L^2(I; V^*) satisfying the smallness condition below, the system has a unique solution y ∈ W_∞, which satisfies the corresponding estimate. With Lemma 4.1 holding, this lemma can be verified in the same manner as [BKP3, Lemma 5, p. 6]. In the following corollary we shall use Lemma 4.2 with A_s = A − BK, and the constant corresponding to M_s will be denoted by M_K. Further, I denotes the embedding constant of W_∞ into C(Ī; Y), i denotes the embedding constant of V into Y, and we recall the constant η from (3.1).
Corollary 4.3. For all y_0 ∈ Y satisfying the smallness conditions below, there exists a control u ∈ 𝒰_ad such that the corresponding system is stably solvable, and thus the second inequality in (4.7) holds. We still need to assert that u ∈ 𝒰_ad. This follows from the second smallness condition on ‖y_0‖_Y and (4.9).
Remark 4.1. In the above proof, stabilization was achieved by the feedback control u = −Ky. For this u to be admissible it is needed that U_ad has nonempty interior. The upper bound η could be allowed to be time dependent as long as it satisfies inf_{t>0} η(t) > 0.

Corollary 4.4. Let y_0 ∈ Y and let u ∈ 𝒰_ad be such that the system (3.2b), (3.2c) admits a solution y ∈ L^2(I; Y). Then y ∈ W_∞.

Proof. Since y ∈ L^2(I; Y), we can apply Lemma 4.2 to the equivalent system y_t = (A − ρI)y + F(y) + f̃, where f̃ = ρy + Bu. This proves the assertion.
Lemma 4.5. There exists δ_1 > 0 such that for all y_0 ∈ B_Y(δ_1), problem (P) possesses a solution (ȳ, ū) ∈ W_∞ × 𝒰_ad. Moreover, there exists a constant M > 0 independent of y_0 such that (4.11) holds.

Proof. The proof of this lemma follows by analogous argumentation as provided in [BKP3, Lemma 8]. Let us choose δ_1 ≤ min{…}, where C is as in Lemma 4.1 and M_K denotes the constant from Corollary 4.3. We obtain that for each y_0 ∈ B_Y(δ_1) there exists a control u ∈ 𝒰_ad with associated state y satisfying max{‖u‖_𝒰, ‖y‖_{W_∞}} ≤ M ‖y_0‖_Y, (4.12) where M = 2 M_K max{1, i ‖K‖_{L(Y,U)}}. We can thus consider a minimizing sequence (y_n, u_n). Then we have ‖ρ y_n + B u_n‖_{L^2(I;V^*)} ≤ η(α, M) ‖y_0‖_Y. After further reduction of δ_1, we obtain with M_ρ from Corollary 4.4:
For the derivation of the optimality system for (P), we need the following lemma which is taken from [BKP1, Lemma 2.5].
, where ‖G‖ denotes the operator norm of G. Then for all f ∈ L^2(I; V^*) and y_0 ∈ Y, there exists a unique solution to the problem. Moreover, the corresponding estimate holds. We close this section by deriving the optimality conditions for (P).
Now, if we impose the additional assumption (A5), we obtain the identity below, where we have used that z(0) = 0 and lim_{t→∞} p(t) = 0, since p ∈ W_∞. We next estimate, using (4.17), (4.19) and (4.26). By (4.11), this implies the existence of a constant C_2 such that the supremum bound holds. Now we estimate, again using (A5).

Verification of (H1)-(H6).
In this section we specialize the abstract results proved in Section 2 to the semilinear parabolic setting. We start with the following lemma, which shows that Assumptions A imply (H1)-(H6).

(i) Verification of (H1): The initial condition y_0 is our nominal reference parameter q. Lemma 4.5 guarantees the existence of a local solution (ȳ, ū) ∼ x_0 to (P) ∼ (P_{q_0}). Clearly f defined in (3.5) satisfies the required regularity assumptions. Moreover, e satisfies the regularity assumptions as a consequence of (A3).
(iii) Verification of (H3): The second derivative of e is given below. For the second derivative of L w.r.t. (y, u), we find the corresponding expression. By (A3) for F′′ and Lemma 4.5, there exists M_1 such that (4.30) holds for each solution (ȳ, ū) of (P) with y_0 ∈ B_Y(δ_2). Then we obtain the estimate below. Next choose ρ > 0 such that the semigroup generated by A − ρI is exponentially stable. This is possible due to (A1). We equivalently rewrite the system in the previous equation with the shifted operator A − ρI. Now we invoke Lemma 4.6 with A − BK replaced by A − ρI, G = F′(ȳ), and f(t) = ρ v(t) + B w(t); the role of the constant M_K will now be assumed by a parameter M_ρ. By selecting δ̃_2 ∈ (0, δ_2] such that ‖ȳ‖_{W_∞} is sufficiently small, we can guarantee that ‖F′(ȳ)‖_{L(W_∞; L^2(I;V^*))} ≤ 1/(2M_ρ), see (4.11) and (3.3) in Remark 3.1. Then the following estimate holds for a constant M_2 depending on M_ρ, ‖B‖, and the embedding of Y into V^*. These preliminaries allow the following lower bound on L′′. By a possible further reduction of δ̃_2 it can be guaranteed that γ > 0, see (4.27). Then by (4.32) we obtain the estimate.
, we obtain the positive definiteness of L ′′ , i.e.
(iv) Verification of (H4): It can easily be checked that f ′ (y, u) can be extended to an element in We refer to Remark 3.2 to show that the restriction of e ′ (y, u, y 0 ) * to W * satisfies the assumptions of (H4). (v) Verification of (H6): This is trivially satisfied.
Remark 4.2. Let us summarize our findings so far. There exists δ̃2 such that for each y 0 ∈ B Y (δ̃2) problem (P) possesses a solution (ȳ, ū) ∈ W ∞ × (U ∩ C( Ī; U )), with an adjoint p ∈ W ∞ . Further, (A1)-(A5) imply (H1)-(H6) for (P) with y 0 ∈ B Y (δ̃2). As a consequence, for each y 0 ∈ B Y (δ̃2) and each associated local solution (ȳ, ū) there exists a neighborhood V of the origin in

Remark 4.3. Here we remark on the smallness assumption on y 0 expressed by δ 2 , respectively δ̃2 . The condition y 0 ∈ B Y (δ 2 ) guarantees the well-posedness of (P) and the existence and boundedness of adjoint states, as expressed in Proposition 4.1. The additional condition y 0 ∈ B Y (δ̃2) implies that the second order optimality condition (H3) is satisfied for each local solution associated to an initial condition y 0 ∈ B Y (δ 2 ). In the following we formulate the results for all y 0 ∈ B Y (δ̃2).
Alternatively, we could narrow down the claims to neighborhoods of single local solutions (ȳ, ū) with y 0 ∈ B Y (δ 2 ), additionally assuming that the second order condition is satisfied at (ȳ, ū).
Concerning the second order condition itself, in some publications, see e.g. [Gri], it is required to hold only for elements x = (y, u) ∈ ker E with u = u 1 − u 2 , where u 1 , u 2 ∈ U ad . By a scaling argument it can easily be seen that this condition is equivalent to the one we use.
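The scaling argument alluded to can be sketched in a few lines (a sketch; x̄ denotes the reference point and γ the coercivity constant of (H3)):

```latex
% If the coercivity of the second derivative,
%   L''(\bar{x})(x,x) \ge \gamma \|x\|^2,
% holds for all x = (y,u) \in \ker E with u = u_1 - u_2, u_1, u_2 \in U_{ad},
% then for any t > 0 the quadratic homogeneity of the form gives
\[
  L''(\bar{x})(t x, t x) \;=\; t^2\, L''(\bar{x})(x,x)
  \;\ge\; t^2 \gamma \|x\|^2 \;=\; \gamma\, \|t x\|^2 ,
\]
% so the same estimate holds on the cone generated by such differences,
% which is the version of the condition used in the present paper.
```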
4.3 Verification of (H7) and Lipschitz stability of the linearized problem.
Throughout the remainder, we assume that (A1)-(A5) are satisfied and that y 0 ∈ B Y (δ̃2), so that Proposition 4.1 and Lemma 4.7 are applicable. In the following, the triple (y, u, p) refers to the solution of T (y, u, p, p 1 ) = β. Throughout, without loss of generality, we also assume that V is bounded.
Lemma 4.8. Let assumptions (A) hold and let (ȳ, ū) and p denote a local solution and associated adjoint state to (P) corresponding to an initial datum y 0 ∈ B Y (δ̃2). Then, possibly after further reduction of V , the mapping β → p (β) is continuous from V to W ∞ .

Proof.
Step 1: For β ∈ V , with V as in Remark 4.2, let (y (β) , u (β) , p (β) , p 1(β) ) be the solution to T (y, u, p, p 1 ) = β. As a consequence of (A5) it is also a solution to T (y, u, p, p(0)) = β with p (β) ∈ W ∞ . Thus the first two equations of this latter equality can be expressed as The above inequality is equivalent to

Step 2: (Boundedness of {p (β) : β ∈ V }). Since V is assumed to be bounded, the discussion in Remark 4.2 shows that there exists a constant M 1 > 0 such that To argue the boundedness of p (β) , we use a similar technique as in the proof of Proposition 4.1.
With δ as in the proof of that proposition, β ∈ V , and From the proof of Proposition 4.1, we know that there exists a constant M such that Consequently, we obtain with M from (4.11), for a.a. t > 0, Moreover we have that ‖z‖ W∞ ≤ C 1 for a constant C 1 independent of r ∈ R and β ∈ V . Due to (4.35a) and (4.35b), we have where we also used the feasibility of w ∈ U ad . Consequently The right hand side is uniformly bounded for β in the bounded set V and w.r.t. r ∈ R. Hence, taking the supremum w.r.t. r ∈ R, we have verified that

Step 3: (Continuity of p (β) in W ∞ ). Let {β n } be a convergent sequence in V with limit β. Since in W ∞ and strongly in L 2 (0, T ; Y ) for every T ∈ (0, ∞), see e.g. [Emm, Satz 8.1.12, pg. 213]. Passing to the limit in the variational form of Since the solution to this equation is unique, we have p (β n ) ⇀ p (β) weakly in W ∞ . To obtain strong convergence, we set and by the choice of V , we also have that Let R 1 = { r ∈ L 2 (I; V * ) : ‖r‖ L 2 (I;V * ) ≤ 1 }, and denote the solution to (4.36) by z = z (β) for β ∈ V . From the estimates in (4.37) there exists M 2 such that ‖z (β) ‖ W∞ ≤ M 2 for all β ∈ V and r ∈ R 1 .
Proposition 4.2. Let assumptions (A) hold and let (ȳ, ū) and p denote a local solution and associated adjoint state to (P) corresponding to an initial condition ȳ0 ∈ B Y (δ̃2). Then there exist ε > 0 and C > 0 such that for all β and β̂ ∈ V ∩ B Y (ε) the estimate holds.
Proof. As we described in Step 3 of the proof of Lemma 4.8, since p ∈ W ∞ and lim Since p(0) = p, and since by Lemma 4.8, Consequently the constraints are inactive for these parameter values, i.e. we have We next treat separately the cases [0, T ) and [T, ∞). We consider first the case [T, ∞) and set (y, u, p) = (y (β) , u (β) , p (β) ), and (ŷ, û, p̂) = (ŷ (β̂) , û (β̂) , p̂ (β̂) ). We shall use that From Lemma 4.6, see also the proof of Proposition 4.1, we know that there exists a constant In the following, C i denote constants independent of β and β̂ ∈ V ∩ B Y (ε). From (4.35a) and (4.43) we obtain, for By the embedding W (T, ∞) ⊂ C(T, ∞; Y ), there exists a constant C 6 > 0: Similarly, we estimate on [0, T ]: Choose z as Then there exists C 7 > 0 such that ‖z‖ W (0,T ) ≤ C 7 ‖r‖ L 2 (0,T ;V * ) by Lemma 4.6. Note that C 7 depends on T , but T is fixed. We obtain the following estimate, Then by a similar computation to that for the t ∈ [T, ∞) case, we obtain, Combining this estimate with (4.44) and (4.48), we obtain for some We also have and thus This yields for all β and β̂ ∈ V ∩ B Y (ε). Thus the verification of (H1)-(H7) is concluded. Here and in the following, the p 1 coordinate of the adjoint state coincides with p(0). Therefore it is not indicated.
We now obtain the following corollary to Theorem 2.1.
Next we obtain one of the main results of this paper, the Fréchet differentiability of the local value function associated to (P). By referring to a local value function we pay attention to the fact that for some y 0 ∈ B Y (δ̃2), problem (P) may not admit a unique solution. But since, due to the second order optimality condition, local solutions are locally unique under small perturbations of y 0 , there is a well-defined local value function. We continue to use the notation Û and B Y (ȳ 0 , δ 3 ) of Corollary 4.9.

Proof. Let ȳ0 ∈ B Y (δ̃2), y 0 ∈ B Y (ȳ 0 , δ 3 ), and choose δy 0 sufficiently small so that y 0 + δy 0 ∈ B Y (ȳ 0 , δ 3 ) as well. Following Corollary 4.9, let (ỹ(y 0 + s(δy 0 )), ũ(y 0 + s(δy 0 )), p(y 0 + s(δy 0 ))) ∈ Û for s ∈ [0, 1] be solutions of the optimality system, with (ỹ(y 0 + s(δy 0 )), ũ(y 0 + s(δy 0 ))) local solutions to (P). We obtain Observe the identity where p = p(y 0 ). Now we have for V(y 0 + s(δy 0 )) − V(y 0 ), and by the continuous Fréchet differentiability of F ′ due to (A3) we have Let s n → 0 be an arbitrary convergent sequence. By Corollary 4.9 we have that ‖ũ(y 0 + s n (δy 0 )) − u(y 0 )‖ U ≤ µ s n ‖δy 0 ‖, for all s n sufficiently small. Hence there exists a subsequence, denoted by the same notation, and some u such that s n −1 (ũ(y 0 + s n (δy 0 )) − u(y 0 )) ⇀ u weakly in U.
Using (4.18), we have

Remark 4.4 (Sensitivity w.r.t. other parameters). We have developed a technique to verify the continuous differentiability of the local value function V pertaining to a semilinear parabolic equation on an infinite time horizon, subject to control constraints, with respect to small initial data y 0 ∈ Y . Thus the parameter q in (P q ) is the initial condition y 0 . The reason to focus on this case is its relevance for feedback control. Without much additional effort the sensitivity analysis of the value function could be carried out with respect to other parameters, for instance additive noise on the right hand side of the state equation. The papers cited in the introduction, see e.g. [GHH], [GV], consider such situations for the finite horizon case.
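In this connection it may help to record the form the derivative takes. The identity below is a sketch, consistent with the observation in the proof of Proposition 4.2 that the p 1 coordinate of the adjoint state coincides with p(0), and with the feedback interpretation recalled in the introduction:

```latex
\[
  \mathcal{V}'(y_0)\,\delta y_0 \;=\; \big( p(0),\, \delta y_0 \big)_{Y}
  \qquad \text{for all } \delta y_0 \in Y ,
\]
% i.e. the Riesz representative of V'(y_0) in Y = L^2(\Omega) is the
% initial value p(0) of the adjoint state associated with the local
% solution emanating from y_0.
```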
5 Proof of Theorem 3.2: Derivation of the HJB Equation.
Utilizing the results established so far, we now verify that the (global) value function V (i.e. the value function associated to global minima) is a solution of a Hamilton-Jacobi-Bellman equation.
The initial conditions will be chosen from the neighborhood Y 0 of the origin in Y so that the assertions of Theorem 4.1 and Corollary 4.9 are available. It will be convenient to recall the dynamic programming principle for the infinite time horizon problem: let y 0 be an initial condition for which a solution to (P) exists. Then for all τ > 0, we have For convenience we restate Theorem 3.2. Utilizing the notation that we have already established, we can now slightly ease the assumption on the regularity of F(ȳ).
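In the notation established above, the dynamic programming principle takes the following standard form (a sketch; the infimum is taken over all admissible controls u ∈ U ad for which the state y(·; y 0 , u) exists globally):

```latex
\[
  \mathcal{V}(y_0) \;=\; \inf_{u \in \mathcal{U}_{ad}}
  \left\{ \int_0^{\tau} \ell\big(y(t; y_0, u),\, u(t)\big)\,\mathrm{d}t
        \;+\; \mathcal{V}\big(y(\tau; y_0, u)\big) \right\}
  \qquad \text{for all } \tau > 0 .
\]
```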
Step 1: Let us first prove that V ′ (y 0 )(Ay 0 + F(y 0 ) + Bu 0 ) + ℓ(y 0 , u 0 ) = 0. (5.4) For this purpose we invoke the dynamic programming principle: We have where we choose τ ∈ (0, min(T y 0 , τ y 0 )). By continuity of ŷ and û at time 0, the first term converges to ℓ(y 0 , u 0 ) as τ → 0. To take τ → 0 in the second term we first consider (5.6). Using the facts that y 0 ∈ D(A), that the terms in square brackets are continuous with values in Y , and that A generates a strongly continuous semigroup on Y , we can pass to the limit in (5.6) to obtain (5.7). Now we return to the second term in (5.5), which we express as (5.8). Using (5.7) and since y → V ′ (y) is continuous at y 0 , we can pass to the limit in (5.8) to obtain (5.9). Now we can pass to the limit in (5.5) and obtain (5.4).
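The two limit passages in Step 1 can be summarized as follows (a sketch; (ŷ, û) denotes the optimal pair emanating from y 0 and u 0 = û(0)):

```latex
\[
  \frac{1}{\tau}\int_0^{\tau} \ell\big(\hat{y}(t), \hat{u}(t)\big)\,\mathrm{d}t
  \;\longrightarrow\; \ell(y_0, u_0),
  \qquad
  \frac{\mathcal{V}\big(\hat{y}(\tau)\big) - \mathcal{V}(y_0)}{\tau}
  \;\longrightarrow\;
  \mathcal{V}'(y_0)\big(A y_0 + \mathcal{F}(y_0) + B u_0\big)
\]
% as \tau \to 0; adding the two limits and using the dynamic programming
% principle then yields (5.4).
```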

6 Some Applications
In this section we discuss the applicability of the framework in two specific cases. It should be noted that even for linear state equations, the sensitivity result for the constrained infinite horizon optimal control problem may be new.

6.1 Fisher's Equation
We consider the optimal stabilization problem for the Fisher equation in an open connected bounded domain Ω in R d , d ∈ {1, 2, 3, 4}, with Lipschitzian boundary Γ = ∂Ω: where U and U ad are as in Section 3.1, B ∈ L(U , Y ), with Y = L 2 (Ω) and V = H 1 0 (Ω). To further cast this problem in the framework of Section 3, we define the operator Ay = (∆ + I)y, y| Γ = 0, with D(A) = H 2 (Ω) ∩ V .
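With A and F as defined here, the stabilization problem can be sketched as follows; the precise running cost ℓ and the constraint set U ad are those of Section 3.1, so the display is schematic rather than a verbatim restatement:

```latex
\[
  \min_{u \in \mathcal{U}_{ad}} \;\int_0^{\infty} \ell\big(y(t), u(t)\big)\,\mathrm{d}t
  \quad \text{subject to} \quad
  \begin{cases}
    \partial_t y = \Delta y + y - y^2 + Bu & \text{in } (0,\infty) \times \Omega,\\
    y = 0 & \text{on } (0,\infty) \times \Gamma,\\
    y(0) = y_0 & \text{in } \Omega ,
  \end{cases}
\]
% where \Delta y + y = Ay and -y^2 = F(y) in the notation of Section 3.
```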
Clearly A has an extension as an operator A ∈ L(V, V * ). Moreover it generates an analytic semigroup on Y . Thus (A1) holds. For U = Y and B = I, condition (A2) is trivially satisfied. Feedback stabilization by finite dimensional controllers was analyzed in [Tri], for example.
It can readily be checked that the nonlinearity F(y) = −y 2 is twice continuously differentiable as a mapping F : W ∞ → L 2 (I; V * ). The first and second derivatives of F are given by, Since the second derivative is independent of y, its boundedness is automatic. For the sake of illustration we verify the boundedness of the bilinear form of the second derivative on W ∞ × W ∞ . For this purpose, for arbitrary y Turning to (A4), we show that F : W (0, T ) → L 1 (0, T ; V * ) is continuous for every T > 0. We consider a sequence y n ⇀ ŷ in W ∞ and let z ∈ L ∞ (0, T ; V ) be given. Then we estimate Since V is compactly embedded in Y , we obtain by the Aubin-Lions lemma that ‖y n − ŷ‖ L 2 (0,T ;Y ) → 0 for n → ∞. This implies and (A4) follows. It is simple to check that F ′ (ȳ) = −2ȳ ∈ L(L 2 (I; V ), L 2 (I; V * )) and thus (A5) holds as well.
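For F(y) = −y 2 the derivatives referred to above are obtained by direct computation:

```latex
\[
  \big(\mathcal{F}'(y)h\big)(t,x) \;=\; -2\, y(t,x)\, h(t,x),
  \qquad
  \big(\mathcal{F}''(y)(h_1, h_2)\big)(t,x) \;=\; -2\, h_1(t,x)\, h_2(t,x),
\]
% so that F''(y) is indeed independent of y, as used below and in Remark 6.1.
```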
Remark 6.1. The specificity of this example rests in the fact that the second derivative is independent of the point where it is taken. Other nontrivial cases of analogous structure are reaction diffusion systems with bilinear coupling, see [Gri], where the finite horizon case was treated. Even the case of the Navier-Stokes equations falls in this category. Sensitivity for the infinite horizon problem was treated by independent techniques in [BKP3].
6.2 Nonlinearities induced by functions with globally Lipschitz continuous second derivative.
Consider the system (P) with A associated to a strongly elliptic second order operator with domain H 2 (Ω) ∩ H 1 0 (Ω), so that (A1)-(A2) are satisfied. Let F : W ∞ → L 2 (I; V * ) be the Nemytskii operator associated to a mapping f : R → R which is assumed to be C 2 (R) with first and second derivatives globally Lipschitz continuous, and second derivative globally bounded. The regularity assumption F(ȳ) ∈ C([0, T y 0 ); Y ) for y 0 ∈ V = H 1 0 (Ω) is satisfied by parabolic regularity theory. We discuss assumptions (A3)-(A5) for such an F, and show that they are satisfied for dimensions d ∈ {1, 2}. For the finite horizon problem it will turn out that d = 3 is also admissible. By direct calculation it can be checked that F is continuously Fréchet differentiable for d ∈ {1, 2, 3}. We leave this part to the reader and immediately turn to the second derivative.
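For orientation, the Nemytskii operator and its formal derivatives act pointwise:

```latex
\[
  \big(\mathcal{F}(y)\big)(t,x) = f\big(y(t,x)\big), \qquad
  \big(\mathcal{F}'(y)h\big)(t,x) = f'\big(y(t,x)\big)\, h(t,x), \qquad
  \big(\mathcal{F}''(y)(h_1,h_2)\big)(t,x) = f''\big(y(t,x)\big)\, h_1(t,x)\, h_2(t,x).
\]
% The analysis below verifies that these formal expressions define bona fide
% Fréchet derivatives between the indicated spaces, for d in {1, 2} (and, on
% finite horizons, d = 3).
```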
We proceed by considering general dimension d to highlight how the restrictions on the dimension arise. Thus let d ∈ N with d > 1. The case d = 1 can be treated with minor modifications of the following steps.
Let us focus on d = 2. Then the choice of parameters r = 6, r ′ = 6/5, σ = 5/2, ρ = 5 satisfies all the above requirements and it is convenient to further estimate (6.5). In fact we obtain, for all y, h 1 , h 2 ∈ W ∞ , Here we use the boundedness of g. By Lebesgue's bounded convergence theorem the last factor converges to 0 for ‖h 2 ‖ W∞ → 0, and hence the fact that F is twice differentiable is verified. The continuity of the second derivative follows with the above estimates and again by the Lebesgue theorem.
Thus we fix parameters r ′ and σ such that (6.6) is satisfied for d = 3, for instance r ′ = 6/5, σ = 15/7, which implies that ρ = 15 and r ′ ρ = 18. Then for the finite horizon problem we can estimate by Hölder's inequality with η = r ′ ρ/6 : From here we can proceed as in the case d = 2 to assert the continuous second Fréchet differentiability of F in d = 3 for the finite horizon case.
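As a quick sanity check on the exponent bookkeeping for d = 3, the values r ′ = 6/5, ρ = 15, r ′ ρ = 18 and η = r ′ ρ/6 = 3 quoted above can be verified with exact rational arithmetic; this is merely an illustration of the stated arithmetic, not part of the analysis:

```python
from fractions import Fraction

# Exponents quoted for the finite horizon case in dimension d = 3.
r_prime = Fraction(6, 5)        # Hoelder conjugate of r = 6
rho = Fraction(15, 1)

r = 1 / (1 - 1 / r_prime)       # recover r from 1/r + 1/r' = 1
eta = r_prime * rho / 6         # the exponent used in Hoelder's inequality

print(r)               # -> 6
print(r_prime * rho)   # -> 18, as stated in the text
print(eta)             # -> 3
```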
We consider a sequence y n ⇀ ŷ in W (0, T ) and let z ∈ L ∞ (0, T ; V ) be given. Then we estimate Then by the compactness of the embedding of V in Y , we obtain (A4). Now we verify (A5). We recall Remark 3.1 and proceed as in (6.2) for y ∈ W ∞ , ϕ ∈ L 2 (I; V ), This shows that F ′ (y) * satisfies (A5).
We can also consider the optimal stabilization problem with cubic nonlinearity, i.e. F(y) = y 3 , in one dimension. This is a special monotone case of the Schlögl model of theoretical chemistry. In this model, one can easily verify that assumption (A1) is satisfied by taking Ay = ∆y, y| Γ = 0, and D(A) = H 2 (Ω) ∩ V . Clearly A can be extended to A ∈ L(V, V * ). Moreover A generates an analytic semigroup on Y which is uniformly stable. Assumption (A2) is satisfied by the same argumentation as for Fisher's equation. The differentiability assumption (A3) and the continuity assumption (A4) are satisfied along similar computations as in Subsections 6.2.1, 6.2.2. For (A5) we require that y 0 ∈ V . Indeed, in this case for ȳ ∈ W ∞ , by Gagliardo's inequality Thus ȳ 3 ∈ L 2 (I; Y ) and parabolic regularity theory implies that ȳ ∈ C(I; V ) if y 0 ∈ V . We estimate for h, ϕ ∈ L 2 (I; V ), suppressing the arguments (t, x), which implies (A5). Moreover we have F(ȳ) ∈ C([0, T y 0 ); Y ), since V ⊂ C( Ω̄) in dimension 1, and thus the extra regularity demanded in Theorem 3.2 is satisfied.
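The Gagliardo step can be made explicit in dimension one (a sketch; the interpolation exponent 1/3 follows from the one-dimensional Gagliardo-Nirenberg inequality, and C denotes a generic constant):

```latex
\[
  \|\bar{y}(t)\|_{L^6(\Omega)} \;\le\; C\,\|\bar{y}(t)\|_{H^1(\Omega)}^{1/3}
                                        \,\|\bar{y}(t)\|_{L^2(\Omega)}^{2/3}
  \quad\Longrightarrow\quad
  \|\bar{y}^3(t)\|_{Y}^{2} \;=\; \|\bar{y}(t)\|_{L^6(\Omega)}^{6}
  \;\le\; C\,\|\bar{y}(t)\|_{V}^{2}\,\|\bar{y}(t)\|_{Y}^{4},
\]
% and therefore
\[
  \int_0^\infty \|\bar{y}^3(t)\|_{Y}^{2}\,\mathrm{d}t
  \;\le\; C\,\|\bar{y}\|_{C(\bar{I};Y)}^{4}\,\|\bar{y}\|_{L^2(I;V)}^{2}
  \;<\; \infty ,
\]
% which yields the claimed membership \bar{y}^3 \in L^2(I;Y).
```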

Proof.
Since C is closed and convex, S( b) is closed and convex. By assumption S( b) is nonempty. Hence there exists an x ∈ C such that Ex = b. Note that each such x can be uniquely decomposed as x = w + y, with y ∈ ker E, w ∈ ker E ⊥ and Ew = b. By (H3) the functional J is bounded from below and coercive on S( b). Hence there exists a minimizing sequence {x n } in S( b) such that lim n→∞ J(x n ) = inf x∈S( b) J(x).

Remark 4.1. For δ > 0 and ȳ ∈ Y , we define the open neighborhoods B Y (δ) = {y ∈ Y : ‖y‖ Y < δ} and B Y (ȳ, δ) = {y ∈ Y : ‖y − ȳ‖ Y < δ}.