Stochastic optimal control problems with delays in the state and in the control via viscosity solutions and applications to optimal advertising and optimal investment problems

In this manuscript we consider optimal control problems of stochastic differential equations with delays in the state and in the control. First, we prove an equivalent Markovian reformulation on Hilbert spaces of the state equation. Then, using the dynamic programming approach for infinite-dimensional systems, we prove that the value function is the unique viscosity solution of the infinite-dimensional Hamilton-Jacobi-Bellman equation. We apply these results to problems coming from economics: stochastic optimal advertising problems and stochastic optimal investment problems with time-to-build.

The presence of delays is the crucial aspect of (SDDE): they appear linearly via the integral terms, where the first represents the delay in the state and the second the delay in the control. For a complete picture of optimal control problems with delays we refer the reader to [10], while here we only recall some results. A similar problem with delays only in the state was treated by means of the dynamic programming method via viscosity solutions in [10]. In this paper we aim to extend some of those results to the case of delays also in the control.
The main difficulty for delay problems is the lack of Markovianity, which prevents a direct application of the dynamic programming method, see, e.g., [10]. The approach we follow here, similarly to [10], is to lift the state equation to an infinite-dimensional Banach or Hilbert space (depending on the regularity of the data), in order to regain Markovianity. This comes at the cost of moving to infinite dimension.
Date: May 20, 2024.
In [10] (where delays are only in the state) the classical approach of rewriting the state equation in the Hilbert space X := R^n × L^2([−d, 0]; R^n) was used. However, when delays in the control appear, this procedure has to be extended carefully. One possible way would be to use the extended delay semigroup (including the control in the state space of the delay semigroup), see, e.g., [3, Part II, Chapter 4] for deterministic problems. This approach brings the complication of having an unbounded control operator ("boundary control"). However, when the delays appear in a linear way in the state equation, an alternative approach can be used. Such an approach was proposed by [39] for deterministic control problems (see also [3, Part II, Chapter 4]) and extended in [26] to the stochastic setting. In [26] a linear state equation with additive noise is considered and an equivalent abstract representation of the state equation in the Hilbert space X is proved. In this paper we generalize this result of [26] to a nonlinear state equation with multiplicative noise (note that such a state equation is more general than (1.1)). In this case the abstract state equation in X takes an analogous form, for suitable operators A, M and coefficients b, σ. See Theorem 3.2 for the equivalent abstract representation of the state equation in X.
Going back to our original problem, (1.1) can be rewritten on X as

(1.2) dY(t) = [AY(t) + f(u(t))] dt + σ(u(t)) dW(t), ∀t ≥ 0,

for suitable A, f, σ (see (4.1)). The functional J(η, δ; u(•)) is rewritten as a functional on X with a suitable cost function L (see (3.8)). Having lifted the problem to the space X, we are in a setting similar to that of [10], hence we wish to proceed in a similar way. Indeed, we want to use the theory of viscosity solutions in Hilbert spaces (see [17, Chapter 3]) in order to treat the optimal control problem. However, A, f, σ in (1.2) have a different structure from the ones in [10]: in [10] the unbounded operator is the classical delay operator, while A here is (up to a bounded perturbation) its adjoint, and the coefficient f here has a non-zero L^2-component. At this point, in order to use the theory of viscosity solutions in Hilbert spaces of [17, Chapter 3], we rewrite the state equation by introducing a maximal dissipative operator Ã in the state equation, see Proposition 4.1. Next we introduce an operator B satisfying the so-called weak B-condition (which is crucial in the theory of viscosity solutions in Hilbert spaces), see Proposition 4.3. Then we prove that the data of the problem satisfy certain regularity conditions with respect to the norm induced by the operator B^{1/2}, see Lemma 4.6. This enables us to characterize the value function of the problem as the unique viscosity solution of a fully nonlinear second order infinite-dimensional HJB equation, where H is the Hamiltonian. See Theorem 5.4 for this result. To the best of our knowledge, this is the first existence and uniqueness result for fully nonlinear HJB equations in Hilbert spaces related to stochastic optimal control problems with delays in the state and in the control. This extends the corresponding result of [10] to the case of delays (also) in the control. Moreover, in the present paper the delay kernels a_1, p_1 are only required to have rows a_1^j, p_1^j ∈ L^2([−d, 0]; R^n). Instead, in [10] a higher regularity of a_1, a_2 (where a_2 is the delay kernel associated to the diffusion) is required, i.e. a_1^j, a_2^j ∈ W^{1,2}([−d, 0]; R^n) with a_1^j(−d) = a_2^j(−d) = 0 (of course p_1 = 0, since in [10] only delays in the state are present). However, we remark that the structure of the state equation in [10], with delays only in the state, is more general.
Stochastic differential equations with delays also in the control are more difficult since in this case the so-called structure condition, that is, the requirement that the range of the control operator be contained in the range of the noise operator, does not hold, see, e.g., [17, Subsection 2.6.8]. This fact, together with the lack of smoothing of the transition semigroup associated to the linear part of the equation (a common feature also in problems with delays only in the state), prevents the use of standard techniques based on mild solutions and on backward stochastic differential equations. However, stochastic differential equations with delays only in the control, a linear state equation and additive noise (so that the HJB equation is semilinear) were completely solved in [28], [29], [30] by means of a partial smoothing property of the Ornstein-Uhlenbeck transition semigroup, which permitted applying a variant of the mild-solutions approach in the space of continuous functions. See also [31], [32], where this approach is extended to stochastic optimal control problems with unbounded control operators and applications to problems with delays only in the control (with delay kernel being a Radon measure) are given. Finally, we refer to [21] for a deterministic optimal control problem with delays only in the control and a linear state equation, solved by means of viscosity solutions in the space R × W^{1,2}([−d, 0]) (see also Remark 4.5).
At the end of the manuscript, we provide applications of our results to problems coming from economics. We consider a stochastic optimal advertising problem with delays in the state and in the control and controlled diffusion, generalizing the one from [26], [27] (see also [36] for the original deterministic model). We characterize the value function as the unique viscosity solution of the fully nonlinear HJB equation. We recall that, in the stochastic setting with additive noise, a verification theorem was proved in [26], [27] in the context of classical solutions and optimal feedback strategies were derived; moreover, an explicit (classical) solution of the HJB equation was derived in a specific case. We also recall that in [10] the case with no delays in the control (i.e. p_1 = 0) was treated via viscosity solutions. Finally, we consider a stochastic optimal investment problem with time-to-build, inspired by [16, p. 36] (see also, e.g., [1], [2] for similar models in the deterministic setting). We characterize the value function as the unique viscosity solution of the fully nonlinear HJB equation.
The paper is organized as follows. In Section 2 we introduce the problem and state the main assumptions. In Section 3 we prove an equivalent infinite-dimensional formulation for a more general state equation and rewrite the problem in an infinite-dimensional setting. In Section 4 we prove some preliminary estimates for solutions of the state equation and for the value function. In Section 5 we introduce the notion of viscosity solution of the HJB equation, state a theorem about existence and uniqueness of viscosity solutions, and characterize the value function as the unique viscosity solution. In Section 6 we provide applications to problems coming from economics: stochastic optimal advertising models and stochastic investment models with time-to-build.

Setup and assumptions
We denote by M^{m×n} the space of real m × n matrices and by | • | the Euclidean norm in R^n as well as the norm of elements of M^{m×n} seen as linear operators from R^n to R^m. We write x • y for the inner product in R^n.
Let d > 0. We consider the standard Lebesgue space L^2 := L^2([−d, 0]; R^n) of square integrable functions from [−d, 0] to R^n, denoting by ⟨•, •⟩_{L^2} the inner product in L^2 and by | • |_{L^2} the norm. We also consider the standard Sobolev space W^{1,2} := W^{1,2}([−d, 0]; R^n) of functions in L^2 admitting weak derivative in L^2, endowed with the inner product ⟨f, g⟩_{W^{1,2}} := ⟨f, g⟩_{L^2} + ⟨f′, g′⟩_{L^2}, which renders it a Hilbert space. It is well known that the space W^{1,2} can be identified with the space of absolutely continuous functions from [−d, 0] to R^n. Let τ = (Ω, F, (F_t)_{t≥0}, P, W) be a reference probability space, that is, (Ω, F, P) is a complete probability space, W = (W(t))_{t≥0} is a standard R^q-valued Wiener process with W(0) = 0, and (F_t)_{t≥0} is the augmented filtration generated by W. We consider the following controlled stochastic delay differential equation (SDDE) (2.1), where
(i) given a bounded measurable set U ⊂ R^p, u(•) is the control process, lying in the set U := {u(•) : u is U-valued, progressively measurable, and integrable a.s.};
(ii) η_0 ∈ R^n and η_1 ∈ L^2 are the initial conditions of the state y;
(iii) a_1, p_1 are the delay kernels and, if a_1^j, p_1^j denote the j-th rows of a_1(•), p_1(•), respectively, for j = 1, ..., n, then a_1^j, p_1^j ∈ L^2([−d, 0]; R^n).
Similarly to [10], we cannot treat the case of pointwise delay (e.g. a_1, p_1 = δ_{−d}, the Dirac delta). In [10] a higher regularity of a_1, a_2 (where a_2 is the delay kernel associated to the diffusion) is required, i.e. a_1^j, a_2^j ∈ W^{1,2}([−d, 0]; R^n) and a_1^j(−d) = a_2^j(−d) = 0 for every j ≤ n (of course p_1 = 0, i.e. in [10] only delays in the state are present). See also [19], [20] for similar restrictions in deterministic problems. Here, instead, we require less regularity, i.e. a_1^j, p_1^j ∈ L^2([−d, 0]; R^n). We will assume the following conditions.
Assumption 2.2. b_0: U → R^n and σ_0: U → M^{n×q} are continuous and bounded.
and each control u(•) ∈ U, there exists a unique (up to indistinguishability) strong solution to (2.1), and this solution admits a version with continuous paths that we denote by y^{η,δ;u}.
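Although the analysis below is purely theoretical, the structure of an SDDE with delays in both the state and the control can be illustrated numerically. The following Euler-Maruyama sketch is only an illustration: all coefficients, delay kernels, and discretization parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

# Illustrative Euler-Maruyama discretization of an SDDE of the form
#   dy(t) = [a0*y(t) + b0*u(t) + int_{-d}^0 a1(s) y(t+s) ds
#                              + int_{-d}^0 p1(s) u(t+s) ds] dt + sigma dW(t),
# with initial history eta1 on [-d, 0) and control history delta.
# All numbers below are made-up stand-ins, not the paper's data.
rng = np.random.default_rng(0)

d, T, dt = 1.0, 2.0, 0.01          # delay window, horizon, time step
nd, nT = int(d / dt), int(T / dt)  # steps in the delay window / horizon
a0, b0, sigma = -0.5, 1.0, 0.2     # hypothetical coefficients
s = np.linspace(-d, 0.0, nd)       # grid on [-d, 0]
a1 = np.exp(s)                     # hypothetical L^2 delay kernel (state)
p1 = np.maximum(0.0, 1.0 + s)      # hypothetical L^2 delay kernel (control)

y = np.zeros(nd + nT)              # y on [-d, T]; zero initial history
u = np.full(nd + nT, 0.3)          # constant control with constant history
y[nd - 1] = 1.0                    # initial state y(0) = eta0 = 1

for k in range(nd - 1, nd + nT - 1):
    state_delay = np.dot(a1, y[k - nd + 1 : k + 1]) * dt   # int a1(s)y(t+s)ds
    ctrl_delay = np.dot(p1, u[k - nd + 1 : k + 1]) * dt    # int p1(s)u(t+s)ds
    drift = a0 * y[k] + b0 * u[k] + state_delay + ctrl_delay
    y[k + 1] = y[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print(y[-1])  # terminal value of one sample path
```

The point of the sketch is only to make the linear appearance of the two integral delay terms concrete; convergence of the scheme is not discussed here.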
We consider the following infinite horizon optimal control problem.
where ρ > 0 is the discount factor and l: R^n × U → R is the running cost. As in [17, p. 98, Equation (2.8)], we define the set of admissible controls, where the union is taken over all reference probability spaces τ. The goal is to minimize J(η, δ; u(•)) over all u(•) ∈ U. This is a standard setup of a stochastic optimal control problem (see [40], [17]) used to apply the dynamic programming approach. We remark (see, e.g., [17, Section 2.3.2]) that the infimum is the same for every reference probability space τ, so the optimal control problem is in fact independent of the choice of a reference probability space.
(iii) There exists a local modulus of continuity for l(•, u), uniform in u ∈ U, i.e. for each R > 0 there exists a nondecreasing concave function ω_R: R_+ → R_+ such that lim_{r→0+} ω_R(r) = 0 and |l(x, u) − l(y, u)| ≤ ω_R(|x − y|) for every |x|, |y| ≤ R and u ∈ U. We will show, by suitably reformulating the state equation in an infinite-dimensional framework, that the cost functional is well defined and finite for a sufficiently large discount factor ρ > 0.
Throughout the paper we will write C > 0, ω, ω_R to indicate, respectively, a constant, a modulus of continuity, and a local modulus of continuity, which may change from place to place when the precise dependence on other data is not relevant. Equalities involving random variables are understood to hold P-a.s.

Infinite dimensional Markovian representation
The optimal control problem at hand is not Markovian due to the delay.In order to regain Markovianity and approach the problem by Dynamic Programming we reformulate the state equation in an infinite-dimensional space generalizing a well-known procedure, see [3, Part II, Chapter 4], [39] for deterministic delay equations and [26] for the stochastic case with linear state equation and additive noise.
In this section, in place of (2.1), we will consider the following more general state equation: where, in this case, b 0 : R n × U → R n , σ 0 : R n × U → M n×q , while all the other terms satisfy the same conditions as in (2.1).In this setting we consider the following assumptions.
and each control u(•) ∈ U, there exists a unique (up to indistinguishability) strong solution to (3.1), and this solution admits a version with continuous paths that we denote by y^{η,δ;u}. We define X := R^n × L^2. An element x ∈ X is a pair x = (x_0, x_1), where x_0 ∈ R^n and x_1 ∈ L^2; sometimes we will write x as a column vector with components x_0, x_1. The space X is a Hilbert space when endowed with the inner product ⟨x, y⟩_X := x_0 • y_0 + ⟨x_1, y_1⟩_{L^2}; the induced norm is denoted by | • |_X. For R > 0, we use a corresponding notation for the open balls of radius R in X, R^n, and L^2, respectively. We denote by L(X) the space of bounded linear operators from X to X, endowed with the operator norm. An operator L ∈ L(X) can be written in block form, with blocks acting between the components R^n and L^2. We also denote by L_2(H, K) the space of Hilbert-Schmidt operators from H to K, endowed with the Hilbert-Schmidt norm; when H = K we simply write L_1(H), L_2(H). We denote by S(H) the space of self-adjoint operators in L(H).
Let us define the unbounded linear operator A: D(A) ⊂ X → X and its adjoint A*. Note that A* is the generator of the delay semigroup, see, e.g., [3, Part II, Chapter 4], where Φ(t) is the semigroup of truncated right shift in L^2: (Φ(t)f)(ξ) = f(ξ − t) if ξ − t ≥ −d, and 0 otherwise. Consider the infinite-dimensional SDE (3.3). By [17, Theorem 1.127], for each u(•) ∈ U, (3.3) admits a unique mild solution; that is, there exists a unique progressively measurable process satisfying the corresponding variation-of-constants formula. Define the linear operator M componentwise as in (3.4).
Theorem 3.2. Let Assumption 3.1 hold. We have the following claims.
(i) Let Y^{x,u} be the unique mild solution to (3.3) with initial datum x ∈ X and control u(•) ∈ U. Then, for every t ≥ 0, its components admit an explicit representation.
(ii) Let y^{η,δ;u} be the solution to the SDDE (3.1) with initial data η, δ and under the control u(•) ∈ U, and let x = M(η_0, η_1, δ). Then, for every t ≥ 0, Y^{x,u}_0(t) = y^{η,δ;u}(t).
Proof. (i) Using (3.2), we can rewrite the two components of (3.4); from this we get the first claim.
(ii) Let x = (x_0, x_1) = M(η_0, η_1, δ). For ξ − t ∈ [−d, 0], ξ ∈ [−d, 0], we have an explicit expression which, inserted into (3.5), yields the representation of the L^2-component, where we have defined Ỹ^{x,u}_0 to be the extension of Y^{x,u}_0 to [−d, 0). From (3.6) we obtain the corresponding identity. To conclude the proof, by uniqueness of strong solutions to (3.1), we need to prove that Ỹ^{x,u}_0 satisfies (3.1). On the other hand, by [25, Theorem 3.2], Y^{x,u}(t) is also a weak solution of (3.3), i.e. it satisfies the weak formulation tested against elements of D(A*). Then, for every t ≥ 0, we obtain the corresponding identity. Note that, for every t ≥ 0, one of the terms is a sum of convolutions of L^2-functions. Then, taking k → +∞ in the equation above, we can pass to the limit. By (3.6) with ξ = 0, for every t ≥ 0, recalling the definition of Ỹ^{x,u}_0, we conclude that Ỹ^{x,u}_0 satisfies (3.1), which proves the claim.
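The truncated right shift Φ(t) appearing in the representation above can be made concrete on a grid. The sketch below (grid size and sample profile are arbitrary choices, not from the paper) checks the semigroup property Φ(s)Φ(t) = Φ(s + t) for a uniform discretization of [−d, 0].

```python
import numpy as np

# Sketch of the truncated right-shift semigroup Phi(t) on L^2([-d, 0]):
#   (Phi(t) f)(xi) = f(xi - t) if xi - t >= -d, and 0 otherwise.
# On a uniform grid, Phi(t) becomes a shift of the sample vector by t/dx
# slots, padded with zeros (values pushed past -d vanish).
d, n = 1.0, 200
dx = d / n
xi = np.linspace(-d, 0.0, n, endpoint=False)

def phi(t, f):
    k = int(round(t / dx))           # number of grid slots to shift
    out = np.zeros_like(f)
    if k < n:
        out[k:] = f[: n - k]         # out[i] = f[i - k], i.e. f(xi - t)
    return out

f = np.sin(np.pi * xi)               # a sample profile in L^2([-d, 0])
lhs = phi(0.3, phi(0.2, f))          # Phi(0.3) Phi(0.2) f
rhs = phi(0.5, f)                    # Phi(0.5) f
print(np.max(np.abs(lhs - rhs)))     # semigroup property on the grid
```

Since the chosen times are exact multiples of the grid step, the two sides agree exactly; for general times the scheme only approximates the shift.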
3.1. Objective functional. Using Theorem 3.2, we can give a Markovian reformulation on the Hilbert space X of the optimal control problem. We present this result for an optimal control problem with the more general state equation (3.1). Consider the functional J defined by (2.2), with y^{η,δ;u} being the solution of (3.1) (in place of (2.1)). Denote by Y^{x,u} a mild solution of (3.3) for a general initial datum x ∈ X and control u(•) ∈ U, and introduce the functional J(x; u(•)) of (3.8); the original functional J and J are then related through J(η, δ; u(•)) = J(M(η, δ); u(•)). We then consider the problem of optimizing J under (3.3) and define the value function V for this problem in (3.9). By what has been said, an optimal control u*(•) ∈ U for the functional J(x; •) with x = M(η, δ) is also optimal for J(η, δ; •). Hence, from now on, we focus on the optimization problem (3.9).

B-continuity
In order to get B-continuity of the value function V, needed to employ the theory of viscosity solutions in infinite dimension, we consider the simpler state equation (2.1). In this case, the state equation (2.1) can be rewritten in infinite dimension as (4.1), for suitable coefficients. Indeed, since the operator appearing in (4.1) differs from A by a bounded linear operator, by [15, Corollary 1.7], equation (3.3) with these specifications and equation (4.1) are equivalent and have the same (unique) mild solution Y^{x,u}(t).

4.1.
Reformulation with a maximal dissipative operator. The aim of this subsection is to rewrite (4.1) with a maximal dissipative operator Ã in place of A. The reason is that we want to use the theory of viscosity solutions in infinite dimension to treat the HJB equation associated with the optimal control problem, and the comparison theorem of this theory requires a maximal dissipative operator in the equation (see [17, Chapter 3]). The operator Ã is constructed by means of a suitable shift of the operator A.
Step 1. We prove that Ã_µ is dissipative. A direct computation on ⟨Ã_µ x, x⟩_X implies that Ã_µ is dissipative for every µ ≥ µ_0.
On the other hand, the resolvent equation yields a unique candidate solution. Taking ξ = 0 in this equality and equating with (4.4), we obtain the R^n-component. Then, for µ ≥ µ_0, the required estimate follows. Therefore, we have proved that, for every µ ≥ µ_0 and y ∈ X, there exists a unique solution x ∈ D(Ã_µ) to (4.3). The claim follows. Now we fix µ > µ_0 and denote Ã := Ã_µ = A − µI.
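The shifting argument behind Ã_µ = A − µI has a simple finite-dimensional analogue, sketched below: subtracting µI from an arbitrary matrix M makes it dissipative as soon as µ dominates the largest eigenvalue of the symmetric part of M. The matrix M is a random stand-in, not the paper's delay operator.

```python
import numpy as np

# Finite-dimensional analogue of the shift A_mu = A - mu*I:
# <(M - mu*I)x, x> = <Mx, x> - mu*|x|^2 = <sym(M)x, x> - mu*|x|^2,
# so M - mu*I is dissipative once mu >= mu0 := lambda_max((M + M^T)/2).
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))                   # arbitrary test matrix
mu0 = np.linalg.eigvalsh((M + M.T) / 2).max()     # dissipativity threshold

mu = mu0 + 0.1
A_mu = M - mu * np.eye(6)
# check dissipativity <A_mu x, x> <= 0 on random vectors
vals = [x @ A_mu @ x for x in rng.standard_normal((100, 6))]
print(max(vals))  # nonpositive for every sampled x
```

In the paper the analogous threshold µ_0 comes from an estimate on the unbounded operator A rather than from an eigenvalue computation.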
We may rewrite SDE (4.1) accordingly. Similarly to before, since the operator Ã is the sum of A with a bounded operator, by [15, Corollary 1.7] the two equations have the same mild solution. We will need an operator B satisfying the so-called weak B-condition (Definition 4.2), which requires in particular that
(i) B is strictly positive, i.e. ⟨Bx, x⟩_X > 0 for every x ≠ 0,
and, among the other conditions, that Ã*B ∈ L(X). Let Ã^{-1} be the inverse of the operator Ã; its explicit expression can be derived as in the proof of Proposition 4.1. Notice that Ã^{-1} ∈ L(X) and it is compact. Define now the compact operator B := (Ã^{-1})* Ã^{-1} (4.9). We are going to show that B satisfies the weak B-condition for Ã.
Proof. Clearly, B ∈ L(X), Ã*B = Ã^{-1} ∈ L(X), and B is self-adjoint. Moreover, B is strictly positive: indeed ⟨Bx, x⟩_X = |Ã^{-1}x|²_X, and it is easy to check that |Ã^{-1}x| > 0 whenever x ≠ 0. Finally, the remaining inequality follows by dissipativity of Ã. By strict positivity of B, the blocks B_00 and B_11 are strictly positive. We introduce the space X_{-1}, which is a Hilbert space when endowed with the inner product ⟨x, y⟩_{-1} := ⟨B^{1/2}x, B^{1/2}y⟩_X. Strict positivity of B ensures that the operator B^{1/2} can be extended to an isometry of X_{-1} into X. By (4.12), the operator Ã*B^{1/2} is well defined on the whole space X. Moreover, since Ã* is closed and B^{1/2} ∈ L(X), Ã*B^{1/2} is a closed operator. Thus, by the closed graph theorem, we have Ã*B^{1/2} ∈ L(X).
Remark 4.4. In the infinite-dimensional theory of viscosity solutions it is only required that Ã*B ∈ L(X) (condition (iii) of Definition 4.2). Such an operator can be constructed for any maximal dissipative operator Ã (see, e.g., [17, Theorem 3.11]). Similarly to [10], in the case of the present paper we additionally have Ã*B^{1/2} ∈ L(X). This stronger condition may be helpful in order to obtain differentiability properties of the value function (see [10, Proof of Theorem 6.5]) or to construct optimal feedback laws (see [11]).
Since B is a compact, self-adjoint and strictly positive operator on X, by the spectral theorem B admits a sequence of eigenvalues {λ_i}_{i∈N} ⊂ (0, +∞) with λ_i → 0⁺ and a corresponding set {f_i}_{i∈N} ⊂ X of eigenvectors forming an orthonormal basis of X. Setting e_i := (1/√λ_i) f_i, we obtain an orthonormal basis of X_{-1}. We set X_N := Span{f_1, ..., f_N} = Span{e_1, ..., e_N} for N ≥ 1, and let P_N: X → X be the orthogonal projection onto X_N and Q_N := I − P_N. Since {e_i}_{i∈N} is an orthogonal basis of X_{-1}, the projections P_N, Q_N extend to orthogonal projections in X_{-1}, and we will use the same symbols to denote them.
Remark 4.5. (i) We notice that the analogue of inequality (4.16) (which was proven in [10]) is false here. We first claim that such an inequality does not hold on unbounded sets of X. Indeed, we provide a counterexample similar to the one in [21]: let n = 1 and consider a suitable sequence (x_N) for which x_N^0 = 1 and |x_N|_{-1} → 0. Now note that if inequality (4.16) held on bounded sets B_R, then it would hold on the whole of X with C_R = C > 0, which is a contradiction. Indeed, assume the inequality holds on bounded sets B_R. Then it is true for every x = (x_0, x_1) with |x| ≤ 1, i.e. we have (4.17). Now take any y = (y_0, y_1) ∈ X, y ≠ 0, and define x = y/|y| = (y_0/|y|, y_1/|y|), so that |x| = 1 and (4.17) holds; multiplying the inequality by |y|, we obtain the inequality on the whole of X, which is a contradiction.
(ii) We point out that the fact that (4.17) is false here is not in contradiction with [10], where such an inequality was shown to be true. Indeed, we recall that the operators Ã here and in [10] are not the same: Ã here is the adjoint (up to a bounded perturbation) of the operator Ã in [10].
In [21], where delays appear in the control similarly to here, the inequality is instead proved in the smaller space R × W^{1,2}([−d, 0]), see [21, Remark 5.4], and this is enough since the optimal control problem there is deterministic. This would not be possible here due to the presence of the Brownian motion, whose trajectories are not absolutely continuous: indeed, it would not be possible to have Y_1(t) ∈ W^{1,2} as in [21].
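The spectral objects just introduced (eigenvalues λ_i → 0⁺, eigenvectors f_i, projections P_N, Q_N) can be mimicked in finite dimension. In the sketch below, B is a stand-in positive-definite matrix with decaying spectrum; the sizes and the eigenvalue profile are arbitrary choices.

```python
import numpy as np

# Finite-dimensional sketch of the spectral decomposition of a compact,
# self-adjoint, strictly positive operator B: eigenvalues lambda_i > 0
# decaying toward 0 (mimicking lambda_i -> 0+) and orthonormal eigenvectors
# f_i. P_N projects onto span{f_1, ..., f_N} and Q_N = I - P_N.
n, N = 50, 5
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal basis
lam = 1.0 / np.arange(1, n + 1) ** 2               # strictly positive, -> 0
B = (Q * lam) @ Q.T                                # B = Q diag(lam) Q^T

w, F = np.linalg.eigh(B)                           # eigenpairs, ascending
w, F = w[::-1], F[:, ::-1]                         # sort descending
P_N = F[:, :N] @ F[:, :N].T                        # orthogonal projection
Q_N = np.eye(n) - P_N

print(w.min() > 0)                                 # strict positivity
print(np.allclose(P_N @ P_N, P_N))                 # P_N is a projection
print(np.allclose(P_N @ Q_N, np.zeros((n, n))))    # complementary ranges
```

In the paper the same construction is carried out for the genuinely infinite-dimensional operator B of (4.9), where compactness replaces finite dimensionality.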

B-continuity of the value function.
In this subsection, we prove some estimates for solutions of the state equation and for the cost functional, in order to prove the B-continuity of the value function.
Lemma 4.6. There exist C > 0 and a local modulus of continuity ω_R such that the estimates (4.20)-(4.24) hold for every x, y ∈ X, u ∈ U, R > 0.
Proof. • Recalling the definition of b(x, u), (4.20) follows from the boundedness of b_0(u) and the boundedness of U.
• (4.21) follows from the boundedness of σ_0, where {r_i}_{i=1}^q is the standard orthonormal basis of R^q.
• For (4.22), note that by (2.4) the function L(x, u) = l(x_0, u) is weakly sequentially continuous, uniformly in u ∈ U. The inequality then follows by applying [17, Lemma 3.6 (iii)], which can easily be extended to functions weakly sequentially continuous uniformly with respect to the control parameter.
• (4.23) finally follows from the definition of L and (2.3).
• We notice that, by [17, Appendix B] and (4.21), the relevant bound holds. Now, taking the supremum over u and letting N → ∞, by (4.15) we obtain (4.24).
Remark 4.7. We remark that the failure of (4.16) (see Remark 4.5) is what led us to consider the simpler state equation (2.1), in place of the more general state equation (3.1), in order to use the approach via viscosity solutions. Regarding b_0, for instance, if (4.16) were true, we could have considered a more general b_0 of the form b_0(x_0, u) satisfying a suitable Lipschitz-type condition; indeed, using (4.16), we would have been able to bound the increments of b in the | • |_{-1} norm. Similarly, we could have allowed σ_0(x_0, u) to depend also on x_0. This would have been enough in order to prove the B-continuity of V and the validity of the hypotheses of the comparison theorem [17, Theorem 3.56] when the state equation is of the form (3.1). This approach was used in [10], where the inequality was proved to be true.
Next, we show continuity properties of V. We first recall the notion of B-continuity (see [17, Definition 3.4]).
Definition 4.11. Let B ∈ L(X) be a strictly positive self-adjoint operator. A function u: X → R is said to be B-upper semicontinuous (respectively, B-lower semicontinuous) if, for any sequence {x_n}_{n∈N} ⊂ X such that x_n ⇀ x ∈ X and Bx_n → Bx as n → ∞, we have lim sup_{n→∞} u(x_n) ≤ u(x) (respectively, lim inf_{n→∞} u(x_n) ≥ u(x)). A function u: X → R is said to be B-continuous if it is both B-upper semicontinuous and B-lower semicontinuous.
We remark that, since our operator B is compact, in our case B-upper/lower semicontinuity is equivalent to weak sequential upper/lower semicontinuity, respectively.
Proposition 4.12. Let Assumptions 2.2, 2.3, and 4.9 hold. For every R > 0, there exists a modulus of continuity ω_R such that the estimate (4.25) holds. Hence V is B-continuous and thus weakly sequentially continuous.
Proof. We prove the estimate as in [17, Proposition 3.73], whose assumptions are satisfied thanks to Lemma 4.6; then (4.25) follows. As for the last claim, we observe that, by (4.25) and [17, Lemma 3.6(iii)], V is B-continuous in X.
We point out that V may not be continuous with respect to the | • | −1 norm in the whole X.
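The equivalence, for compact B, between B-continuity and weak sequential continuity can be illustrated numerically: highly oscillatory unit vectors play the role of a weakly null sequence, and a compact smoothing operator sends them strongly to zero. The grid, the test vector, and the smoothing operator below are illustrative stand-ins, not the paper's B.

```python
import numpy as np

# Oscillatory unit vectors x_k "converge weakly to 0" (inner products with a
# fixed smooth vector vanish as k grows), while a smoothing operator
# B = (I - Laplacian_h)^(-1) (discrete Dirichlet Laplacian; a stand-in for a
# compact B) sends them strongly to 0: |B x_k| -> 0 although |x_k| = 1.
n = 400
h = 1.0 / (n + 1)
s = np.linspace(h, 1 - h, n)
L = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2          # discrete Laplacian
B = np.linalg.inv(np.eye(n) - L)                     # compact-like smoother

def x(k):
    v = np.sin(k * np.pi * s)                        # oscillatory profile
    return v / np.linalg.norm(v)                     # unit vector

g = s * (1 - s)                                      # a fixed smooth vector
weak = [abs(x(k) @ g) for k in (1, 10, 50)]          # decays: weak nullity
strong = [np.linalg.norm(B @ x(k)) for k in (1, 10, 50)]  # decays: B smooths
print(weak)
print(strong)
```

Each x(k) is an eigenvector of the discrete Laplacian, so B x(k) = x(k)/(1 + µ_k) with µ_k growing like k², which makes the strong decay explicit.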

The value function as unique viscosity solution to HJB equation
In this section we prove that the value function V is the unique viscosity solution of the infinite dimensional HJB equation.
Given v ∈ C^1(X), we denote by Dv(x) its Fréchet derivative at x ∈ X and we write Dv(x) = (D_{x_0}v(x), D_{x_1}v(x)), where D_{x_0}v(x), D_{x_1}v(x) are the partial Fréchet derivatives. For v ∈ C^2(X), we denote by D²v(x) its second order Fréchet derivative at x ∈ X, which we will often write in block form. We define the Hamiltonian function H: X × X × S(X) → R by

H(x, r, Z) := sup_{u∈U} ( −⟨b(x, u), r⟩_X − (1/2) Tr[σ_0(u)σ_0(u)^T Z_{00}] − l(x_0, u) ) =: H(x, r, Z_{00}),

for every x, r ∈ X, Z ∈ S(X); note that H depends on Z only through the block Z_{00}.
By [17,Theorem 3.75] the Hamiltonian H satisfies the following properties.
(i) H is uniformly continuous on bounded subsets of X × X × S(X).
(ii) For every x, r ∈ X and every Y, Z ∈ S(X) such that Z ≤ Y, we have
(5.1) H(x, r, Y) ≤ H(x, r, Z).
(iii) For every x, r ∈ X and every R > 0, a suitable growth bound holds.
(iv) For every R > 0 there exists a modulus of continuity ω_R such that, using (4.20) and (4.21), a continuity estimate for H holds for every x, p, q ∈ X and Y, Z ∈ S(X).
The HJB equation associated with the optimal control problem is the infinite-dimensional PDE (5.5). We recall the definition of B-continuous viscosity solution from [17].
(ii) g: X → R is a radial test function if it has the radial form required in [17];
(iii) a viscosity solution of (5.5) is a function v: X → R which is both a viscosity subsolution and a viscosity supersolution of (5.5).
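Property (5.1) can be sanity-checked on a toy finite-dimensional Hamiltonian with the same sup-over-controls structure, H(x, p, Z) = sup_u { −b(x,u)·p − ½ Tr[σ₀(u)σ₀(u)ᵀ Z] − l(x,u) }; the drift, diffusion, and cost below are invented for illustration only.

```python
import numpy as np

# Toy check of the monotonicity property (5.1): if Z <= Y in the sense of
# symmetric matrices, then H(x,p,Y) <= H(x,p,Z), because the map
# Z -> -Tr[sigma sigma^T Z] is order-reversing and sup preserves <=.
U = [0.0, 0.5, 1.0]                                  # finite control set
b = lambda x, u: -x + u                              # hypothetical drift
sig = lambda u: np.array([[0.2 + u, 0.0], [0.1, 0.3]])
l = lambda x, u: x @ x + u**2                        # hypothetical cost

def H(x, p, Z):
    return max(-b(x, u) @ p - 0.5 * np.trace(sig(u) @ sig(u).T @ Z)
               - l(x, u) for u in U)

rng = np.random.default_rng(3)
x, p = rng.standard_normal(2), rng.standard_normal(2)
Z = rng.standard_normal((2, 2)); Z = Z + Z.T         # symmetric
D = rng.standard_normal((2, 2)); D = D @ D.T         # positive semidefinite
Y = Z + D                                            # so Z <= Y
print(H(x, p, Y) <= H(x, p, Z) + 1e-12)              # property (5.1)
```

Since σ₀(u)σ₀(u)ᵀ and Y − Z are both positive semidefinite, Tr[σ₀σ₀ᵀ(Y − Z)] ≥ 0, which gives the inequality term by term inside the supremum.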
We can now state the theorem characterizing V as the unique viscosity solution of (5.5) in S.
Theorem 5.4. Let Assumptions 2.2, 2.3, and 4.9 hold. The value function V is the unique viscosity solution of (5.5) in the set S.

Proof. Notice that V ∈ S by Proposition 4.10. The proof of the fact that V is the unique viscosity solution of the HJB equation can be found in [17, Theorem 3.75], as all assumptions of this theorem are satisfied due to Lemma 5.1.
Remark 5.5. We remark that, similarly to [10], Theorem 5.4 also holds in the deterministic case, i.e. when σ(x, u) = 0 (in which case we may take ρ_0 = Cm and k < ρ/C in (5.7)). The theory of viscosity solutions handles degenerate HJB equations well, i.e. when the Hamiltonian only satisfies the monotonicity H(x, r, Y) ≤ H(x, r, Z) for every Y, Z ∈ S(X) such that Z ≤ Y. Hence viscosity solutions can be used in connection with the dynamic programming method for optimal control of stochastic differential equations in the case of degenerate noise in the state equation, in particular when it completely vanishes (deterministic case). This is not possible using the mild-solutions approach (see [10], [17]).
Remark 5.6.In this work we could not prove the partial differentiability of V with respect to x 0 as in [10,Theorem 6.5].Indeed in [10, Theorem 6.5] a key assumption was In [10] this condition holds under some standard assumptions, see [10,Example 6.2].In particular the cost l(•, u) is assumed to be Lipschitz (uniformly in u).However in the present paper we could not prove (5.8): indeed due to Remark 4.5 the following inequality (which holds true in [10]) is false This means that even for a Lipschitz l(•, u) (uniformly in u) we cannot use (5.9) in the following way ) , but this is of course not enough in order to prove (5.8).Hence we could not proceed as in the proof of [10,Theorem 6.5].
We finally remark that the same reason prevented us to apply C 1,1 −regularity results from [12], where L(•, u) was assumed to be Lipschitz with respect to | • | −1 , uniformly in u.

Applications
In this section we provide applications of our results to problems coming from economics.

6.1. Optimal advertising with delays. The following model generalizes the ones in [26], [27] to the case of controlled diffusion. We recall that in [10] the case with no delays in the control (i.e. p_1 = 0) was treated.
The model for the dynamics of the stock of advertising goodwill y(t) of the product is given by the controlled SDDE below, where ρ > 0 is a discount factor and l(x, u) = h(u) − g(x), with a continuous and convex cost function h: U → R and a continuous and concave utility function g: R → R satisfying Assumption 2.3. We are then in the setting of Section 4; therefore we can use Theorem 5.4 to characterize the value function V as the unique viscosity solution of (5.5).

6.2.
Optimal investment models with time-to-build. The following model is inspired by [16, p. 36]; see also, e.g., [1], [2] for similar models in the deterministic setting.
Let us consider a state process y(t), representing the capital stock of an enterprise at time t, and a control process u(t) ≥ 0, representing the investment undertaken at time t to increase y. We assume that the dynamics of y(t) is given by the SDDE below, with l(x, u) := C(u) − F(x). We are then in the setting of Section 2 (with a_1 = 0); therefore we can use Theorem 5.4 to characterize the value function V as the unique viscosity solution of (5.5).
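As a purely illustrative numerical sketch of the time-to-build dynamics (the coefficients, the lag density, and the production and cost functions below are all hypothetical), one can simulate the SDDE by Euler-Maruyama and estimate the discounted payoff of a constant investment plan by Monte Carlo.

```python
import numpy as np

# Illustrative simulation of time-to-build dynamics of the form
#   dy(t) = [b0(u(t)) + int_{-d}^0 p1(xi) u(t+xi) dxi] dt + sigma0(u(t)) dW(t),
# with a Monte Carlo estimate of E int_0^T e^{-rho t} (F(y) - C(u)) dt
# for a constant investment plan. All data are made-up stand-ins.
rng = np.random.default_rng(4)
d, T, dt, rho = 1.0, 5.0, 0.01, 0.5
nd, nT = int(d / dt), int(T / dt)
xi = np.linspace(-d, 0.0, nd)
p1 = np.where(xi < -0.5, 1.0, 0.0)          # hypothetical lag density:
p1 /= p1.sum() * dt                         # effect arrives with a 0.5-1 lag
b0 = lambda u: 0.0                          # no instantaneous effect (toy)
sigma0 = lambda u: 0.1 * u                  # noise driven by investment
F = lambda y: np.sqrt(np.maximum(y, 0.0))   # concave production function
C = lambda u: u**2                          # convex investment cost

payoffs = []
for _ in range(200):                        # Monte Carlo over noise paths
    y = np.zeros(nd + nT); y[nd - 1] = 1.0  # initial capital eta0 = 1
    u = np.full(nd + nT, 0.4)               # constant plan, constant history
    J = 0.0
    for k in range(nd - 1, nd + nT - 1):
        lag = np.dot(p1, u[k - nd + 1 : k + 1]) * dt
        y[k + 1] = (y[k] + (b0(u[k]) + lag) * dt
                    + sigma0(u[k]) * np.sqrt(dt) * rng.standard_normal())
        t = (k - nd + 1) * dt
        J += np.exp(-rho * t) * (F(y[k]) - C(u[k])) * dt
    payoffs.append(J)
print(np.mean(payoffs))                     # estimated discounted profit
```

The horizon is truncated at a finite T; for ρ large enough the tail of the infinite-horizon integral is negligible, which is consistent with the discounting assumption of Section 2.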

The advertising dynamics of Subsection 6.1 reads

dy(t) = [a_0 y(t) + b_0 u(t) + ∫_{−d}^0 a_1(ξ) y(t+ξ) dξ + ∫_{−d}^0 p_1(ξ) u(t+ξ) dξ] dt + [σ_0 + γ_0 u(t)] dW(t), t ≥ 0,
y(0) = η_0, y(ξ) = η_1(ξ), u(ξ) = δ(ξ), ξ ∈ [−d, 0),

where d > 0, the control process u(t) models the intensity of advertising spending, and W is a real-valued Brownian motion. Moreover:
(i) a_0 ≤ 0 is a constant factor of image deterioration in absence of advertising;
(ii) b_0 ≥ 0 is a constant advertising effectiveness factor;
(iii) a_1 ≤ 0 is a given deterministic function satisfying the assumptions of the previous sections and represents the distribution of the forgetting time;
(iv) p_1 ≥ 0 is a given deterministic function satisfying the assumptions of the previous sections; it is the density function of the time lag between the advertising expenditure and the corresponding effect on the goodwill level;
(v) σ_0 > 0 is a fixed uncertainty level in the market;
(vi) γ_0 > 0 is a constant uncertainty factor multiplying the advertising spending;
(vii) η_0 ∈ R is the level of goodwill at the beginning of the advertising campaign;
(viii) η_1 ∈ L^2([−d, 0]; R) is the history of the goodwill level;
(ix) δ ∈ L^2([−d, 0]; U) is the history of the advertising spending.
Again, we use the same setup of the stochastic optimal control problem as in Section 2, and the control set is here U = [0, ū] for some ū > 0. The optimization problem is

inf_{u(•)∈U} E[ ∫_0^∞ e^{−ρs} l(y(s), u(s)) ds ].

The time-to-build dynamics of Subsection 6.2 reads

dy(t) = [b_0(u(t)) + ∫_{−d}^0 p_1(ξ) u(t+ξ) dξ] dt + σ_0(u(t)) dW(t), t ≥ 0,
y(0) = η_0, u(ξ) = δ(ξ), ξ ∈ [−d, 0),

where:
(i) b_0: U → [0, ∞) is a continuous bounded function representing the instantaneous effect of the investment on the capital;
(ii) p_1 ≥ 0 is a given deterministic function satisfying the assumptions of the previous sections and representing the density function of the time-to-build between the investment and the corresponding effect on the capital stock;
(iii) σ_0: U → [0, ∞) is a continuous bounded function representing the uncertainty of achievement of the investment plans;
(iv) η_0 ∈ R is the initial level of the capital;
(v) δ ∈ L^2([−d, 0]; U) is the history of the investment spending.
Again, we use the same setup of the stochastic optimal control problem of Section 2, and the control set is here U = [0, ū] for some ū > 0. The goal is to maximize, over all u(•) ∈ U, the expected discounted future profit flow

E[ ∫_0^∞ e^{−ρt} (F(y(t)) − C(u(t))) dt ],

where F: R → R is a production function and C: U → R is a cost function. We assume that F, C satisfy Assumption 2.3. The optimization problem is equivalent to minimizing, over all u(•) ∈ U,

E[ ∫_0^∞ e^{−ρt} l(y(t), u(t)) dt ].

For problems with delays (also) in the control appearing in a linear way in the state equation, the adjoint of the delay operator, i.e. A, is used to reformulate the problem in the space X (see, e.g., [3, Part II, Chapter 4]); indeed, A is the generator of a strongly continuous semigroup e^{tA} on X, whose explicit expression can be found in, e.g., [21, Eq. (73)].