An optimal advertising model with carryover effect and mean field terms

We consider a class of optimal advertising problems under uncertainty for the introduction of a new product into the market, in the line of the seminal papers of Vidale and Wolfe, 1957 ([20]) and Nerlove and Arrow, 1962 ([16]). The main features of our model are that, on one side, we assume a carryover effect (i.e. the advertisement spending affects the goodwill with some delay); on the other side we introduce, in the state equation and in the objective, some mean field terms that take into account the presence of other agents. We take the point of view of a planner who optimizes the average profit of all agents, hence we fall into the family of the so-called "Mean Field Control" problems. The simultaneous presence of the carryover effect makes the problem infinite dimensional, hence belonging to a family of problems which are very difficult in general and whose study started only very recently; see Cosso et al., 2023 ([3]). Here we consider, as a first step, a simple version of the problem, providing the solutions in a simple case through a suitable auxiliary problem.


Introduction
Since the seminal papers of [20] and [16] on dynamic models in marketing, a considerable amount of work has been devoted to problems of optimal advertising, both in monopolistic and competitive settings, and both in deterministic and stochastic environments (see [18] for a review of the existing work up to the 1990s).
Our purpose here is to start exploring a family of models that put together two important features that may arise in such problems and that have not yet been satisfactorily treated in the current theory of optimal control. On one side we account, as in [7] and [8], for the presence of delay effects, in particular the fact that the advertisement spending affects the goodwill with some delay, the so-called carryover effect (see e.g. [13], [18], [8] and the references therein).
On the other side, and more crucially, we take into account the fact that the agents maximizing their profit/utility from advertising are embedded in an environment where other agents act and where the actions of such other agents influence their own outcome (see e.g. [15] for a specific case of such a situation). To model such interaction among maximizing agents, one typically resorts to game theory. However, cases like this, where the number of agents can be quite large (in particular if we think of web advertising), are very difficult to treat in an N-agent game setting. A way to make such a problem tractable but still meaningful is to resort to what is called mean field theory. The idea is the following: assume that the agents are homogeneous (i.e. displaying the same state equations and the same objective functionals) and send their number to infinity. The resulting limit problem is in general more tractable and, under certain conditions, its equilibria are a good approximation of those of the N-agent game (see e.g. the books [2] for an extensive survey on the topic). For the above reasons, we think it is interesting, both from the mathematical and the economic side, to consider the optimal advertising investment problem with delay of [7, 8] in the case when, in the state equation and in the objective, one adds a mean field term depending on the law of the state variable (the goodwill), which takes into account the presence of other agents. There are two main ways of looking at the problem when such mean field terms are present. One (which falls into the class of Mean Field Games (MFG), see e.g. [2, Ch. 1], and which is not our goal here) is to look at the Nash equilibria where each agent takes the distribution of the state variables of the others as given. The other one, which we follow here, is to assume a cooperative game point of view: there is a planner that optimizes the average profit of the agents; this means that we fall into the family of the so-called "Mean Field Control" (MFC) problems (or "control of McKean–Vlasov dynamics"). We believe that both viewpoints are interesting from the economic side and challenging from the mathematical side. In particular, the one we adopt here (Mean Field Control) can be seen as a benchmark (a first best) to compare, subsequently, with the non-cooperative Mean Field Game case, as is typically done in game theory (see e.g. [1]). It can also be seen as the case of a big selling company (which acts as the central planner) that has many shops in the territory whose local advertising policies interact.
The simultaneous presence of the carryover effect and of the "Mean Field Control" terms makes the problem belong to the family of infinite dimensional control of McKean–Vlasov dynamics: a family of problems that are very difficult in general and whose study started only very recently (see [3]).
Here we consider, as a first step, a simple version of the problem that displays a linear state equation, mean field terms depending only on the first moments, and an objective functional whose integrand (the running objective) is separated in the state and the control. We develop the infinite dimensional setting in this case. Moreover, we show that, in the special subcase when the running objective is linear in the state and quadratic in the control, we can solve the problem. This is done through the study of a suitable auxiliary problem whose HJB equation can be explicitly solved (see Section 4 below) and whose optimal feedback control can be found through an infinite dimensional Verification Theorem (see Section 5 below).
The paper is organized as follows.
• In Section 2, we formulate the optimal advertising problem as an optimal control problem for stochastic delay differential equations with mean field terms and delay in the control. Moreover, using the fact that the mean field terms depend only on the first moments, we introduce an auxiliary problem without mean field terms but with a "mean" constraint on the control (see (2.13)).
• In Section 3, the above "not mean field" auxiliary non-Markovian optimization problem is "lifted" to an infinite dimensional Markovian control problem, still with a "mean" constraint on the control (see (3.7)).
• In Section 4, we show how to solve the original problem in the special case when the optimal controls of the original and auxiliary problems are deterministic. We explain the strategy in Subsection 4.

Formulation of the problem
We call X(t) the stock of advertising goodwill (at time t ∈ [0, T]) of a given product. We assume that the dynamics of X(•) is given by the controlled stochastic delay differential equation (SDDE) (2.1), where u models the intensity of advertising spending. Here the Brownian motion W is defined on a filtered probability space (Ω, F, F = (F_t)_{t≥0}, P), with (Ω, F, P) complete and F the augmentation of the filtration generated by W, and, for a given closed interval U ⊂ R, the control strategy u belongs to U := L²_P(Ω × [0, T]; U), the space of U-valued square integrable progressively measurable processes. The last line in (2.1) is to be read as the extension of u to [−d, T] by means of δ.
Here the control space and the state space are both equal to the set R of real numbers. Regarding the coefficients and the initial data, we assume the conditions in Assumption 2.1 are verified. Here a_0 and a_1 are constant factors reflecting the goodwill changes in the absence of advertising, b_0 is a constant advertising effectiveness factor, and b_1(•) is the density function of the time lag between the advertising expenditure u and the corresponding effect on the goodwill level. Moreover, x is the level of goodwill at the beginning of the advertising campaign, and δ(•) is the history of the advertising expenditure before time zero (one can assume δ(•) = 0, for instance).
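Assembling the ingredients just listed, the controlled dynamics (2.1) can be sketched as follows (a hedged reconstruction: in particular, the way the mean field term enters the drift through a₁E[X(s)] is our assumption):

```latex
\begin{cases}
dX(s) = \Big[\, a_0 X(s) + a_1\,\mathbb{E}[X(s)] + b_0\, u(s)
 + \displaystyle\int_{-d}^{0} b_1(\xi)\, u(s+\xi)\, d\xi \,\Big]\, ds
 + \sigma\, dW(s), & s \in (0, T],\\[4pt]
X(0) = x,\\[2pt]
u(s) = \delta(s), & s \in [-d, 0),
\end{cases}
```

the last line being exactly the extension of u to [−d, T] by means of δ mentioned above.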
Notice that under Assumption 2.1 there exists a unique strong solution to the SDDE (2.2) starting at time t ∈ [0, T). We denote such a solution by X^{t,x,u}. In what follows, without loss of generality, we always assume to deal with a continuous version of X^{t,x,u}.
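A minimal Euler–Maruyama sketch of such an SDDE with delay in the control is given below; the lag density b₁ (uniform on [−d, 0]), the constant control u ≡ 1, the zero history δ ≡ 0, and all numeric coefficients are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_goodwill(x0=1.0, T=1.0, d=0.5, n_steps=1000,
                      a0=-0.5, b0=1.0, sigma=0.2, seed=0):
    """Euler-Maruyama sketch for a goodwill SDDE with carryover:
    dX = (a0*X + b0*u + int_{-d}^0 b1(xi) u(s+xi) dxi) ds + sigma dW.
    Illustrative choices: b1 uniform on [-d,0], u == 1, history delta == 0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    m = int(round(d / dt))                 # number of lag steps
    # control extended to [-d, T]: zero history, then u = 1
    u_ext = np.concatenate([np.zeros(m), np.ones(n_steps + 1)])
    b1 = 1.0 / d                           # uniform lag density on [-d, 0]
    X = np.empty(n_steps + 1)
    X[0] = x0
    for i in range(n_steps):
        # carryover term: integral of b1(xi) * u(t_i + xi) over [-d, 0]
        carry = b1 * np.sum(u_ext[i:i + m]) * dt
        drift = a0 * X[i] + b0 * u_ext[i + m] + carry
        X[i + 1] = X[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return X

traj = simulate_goodwill()
```

The delayed control enters only through the discretized carryover integral, so the early part of the path feels no advertising effect from before time zero, consistently with δ ≡ 0.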
The objective functional to be maximized is defined in (2.3), where for the functions f : [0, T] × R × R × R × R → R and g : R × R → R we assume that the following Assumption 2.2 is verified.
(i) The functions f, g are measurable.
(ii) There exist
(iii) f, g are locally uniformly continuous in x, m, uniformly with respect to (t, u, z), meaning that for every R > 0 there exists a modulus of continuity w_R.

Under Assumption 2.1 and Assumption 2.2, the reward functional J in (2.3) is well-defined for any (t, x; u(•)) ∈ [0, T] × R_+ × U. We also define the value function V for this problem in (2.4), for (t, x) ∈ [0, T] × R. We shall say that u* ∈ U is an optimal control strategy if J(t, x; u*) = V(t, x). Our main aim here is to find such optimal control strategies.

We now take into account the controlled ordinary delay differential equation (ODDE) (2.5), where z is extended to [t − d, T] by δ as expressed by the last line in (2.5). We denote by M^{t,m,z} the unique strong solution to (2.5). It is straightforward to notice the relationship (2.6). Property (2.6) suggests that we can couple the two systems (2.2) and (2.5) as follows. We set (2.7) and introduce, for x ∈ R², the process X^{t,x,u} as the unique strong solution of the controlled SDDE (2.9); then by (2.2), (2.5), (2.6), and (2.9), we immediately obtain (2.10). Property (2.10) states that the process X^{t,x,u} can be seen as the first projection of a two-dimensional process driven by an SDDE whose coefficients do not involve any dependence on the law. Thanks to (2.10), we can rephrase the original control problem as follows. We define, for t ∈ [0, T] and x ∈ R², the functional (2.11), where, with a slight abuse of notation, we use the identification (2.12). Then, by (2.3), (2.4), (2.10), and (2.11), it follows that (2.13) holds.
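For concreteness, the passage from the SDDE to the ODDE for the means can be sketched as follows (our hedged reconstruction: we assume the drift is linear with a first-moment mean field term a₁E[X(s)], so the exact displays (2.5)–(2.6) may differ). Taking expectations kills the stochastic integral and yields, for m(s) := E[X^{t,x,u}(s)] and z(s) := E[u(s)],

```latex
\begin{cases}
m'(s) = (a_0 + a_1)\, m(s) + b_0\, z(s)
        + \displaystyle\int_{-d}^{0} b_1(\xi)\, z(s+\xi)\, d\xi,
        & s \in (t, T],\\[4pt]
m(t) = x, \qquad z(s) = \mathbb{E}[\delta(s)] \quad \text{for } s \in [t-d, t),
\end{cases}
```

so that M^{t,x,z}(s) = E[X^{t,x,u}(s)] whenever z(s) = E[u(s)], which is the content of (2.6).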

Carryover effect of advertising: reformulation of the problem in infinite dimension
To recast the SDDE (2.9) as an abstract stochastic differential equation on a suitable Hilbert space H, we use the approach introduced first by [19] in the deterministic case and then extended in [7] to the stochastic case (see also [11], where the case of an unbounded control operator is considered). If y ∈ H, we denote by y_0 the projection of y onto R² and by y_1 the projection of y onto the L² factor; hence y = (y_0, y_1). The inner product in H is induced by its factors, and so, in particular, is the induced norm, for all y ∈ H.
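For orientation, the delay-to-Hilbert-space lift of [19] and [7] is usually built as follows; this is a standard sketch, and the precise factors and domain used here may differ in details:

```latex
H = \mathbb{R}^2 \times L^2([-d,0];\mathbb{R}^2), \qquad
\langle y, \bar y\rangle_H = \langle y_0, \bar y_0\rangle
 + \int_{-d}^{0} \langle y_1(\xi), \bar y_1(\xi)\rangle\, d\xi,
\\[6pt]
A(y_0, y_1) = \big(A_0\, y_0 + y_1(0),\; -\,y_1'\big), \qquad
D(A) = \big\{ y \in H :\ y_1 \in W^{1,2}([-d,0];\mathbb{R}^2),\ y_1(-d) = 0 \big\}.
```

The first component carries the present state and the second carries the "memory" of past controls, which is what makes the lifted dynamics Markovian.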
Recalling (2.7), we define A : D(A) ⊂ H → H together with its domain D(A). The operator A generates a C₀-semigroup {e^{tA}}_{t∈R₊} on H, and the C₀-semigroup {e^{tA*}}_{t∈R₊} generated by A* is expressed in terms of A₀*, the adjoint of A₀. We then introduce the noise operator G : R → H and the control operator B : R² → H, whose adjoint is B* : H → R².

We now introduce the abstract stochastic differential equation (3.1) on H, with t ∈ [0, T), y ∈ H, u ∈ U × U. Denote by Y^{t,y,u} the mild solution to (3.1), i.e., the pathwise continuous process in L²_P(Ω × [0, T]; H) given by the variation of constants formula (3.2):

Y^{t,y,u}(s) = e^{(s−t)A} y + ∫_t^s e^{(s−τ)A} B u(τ) dτ + ∫_t^s e^{(s−τ)A} G dW(τ).

Similarly as done in [7], if the space of admissible controls is restricted to U, one can show that (3.1) is equivalent to (2.9), in the sense that for every t ∈ [0, T), u ∈ U, and for every y = (y_0, y_1) ∈ H the identification (3.4) holds. A further equivalence is obtained by considering together (2.10) and (3.4), which lead to the value function in (3.7). Notice that the constraint z(s) = E[u(s)], for s ∈ [t, T], appearing in (3.7) does not allow us to apply the Dynamic Programming approach directly to get the HJB equation. For this reason, instead of optimizing on the set U with the constraint z(s) = E[u(s)], s ∈ [t, T], we consider a different problem, for which the optimization is performed on the set U × U with the constraint z(s) = u(s), s ∈ [t, T], hence considering the value function defined in (4.2). In general we do not know if and how this function is related to the value function in (3.7) (and consequently to our goal V). However, it is clear from the constraints involved that, if for both problems the supremum is attained on the set of deterministic controls, meaning

V(t, y) (to be proved) = sup { J(t, y, u) : u = (u, z) ∈ U × U, and u = z deterministic }, (4.3a)
V(t, y) (to be proved) = sup { J(t, y, u) : u = (u, z) ∈ U × U, and u = z deterministic }, (4.3b)

where in (4.3a) the value function and the functional are those of the constrained problem (3.7), while in (4.3b) they are those of the problem (4.2), then finding the deterministic optimal controls for the problem (4.2) is equivalent
to doing that for V. For future reference, we restate this observation in the following proposition.
Proposition 4.1. Let t ∈ [0, T] and y ∈ H. If (4.3a) and (4.3b) hold true, then a deterministic control u* = (u*, u*) ∈ U × U is optimal for the problem (3.7) if and only if it is optimal for the problem (4.2).
The HJB equation associated to the optimal control problem related to V is (4.4) below,
where the Hamiltonian is as in (4.5). In the next subsections we specify f, g and we show that, with such a choice, (4.3a) and (4.3b) are verified.
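Combining the Hamiltonian notation of (4.5) with the explicit LQ form appearing later in (4.9), equation (4.4) should read, in sketch form (a hedged reconstruction: the exact writing of the trace term, with Q = GG*, and of the terminal datum g(y₀) may differ):

```latex
\begin{cases}
v_t(t,y) + \tfrac{1}{2}\operatorname{Tr}\big(Q\,\nabla^2 v(t,y)\big)
 + \langle Ay, \nabla v(t,y)\rangle
 + H\big(t, y, B^*\nabla v(t,y)\big) - r\,v(t,y) = 0,\\[4pt]
v(T, y) = g(y_0),
\end{cases}
\qquad Q := GG^*.
```

The fact that the gradient enters only through B*∇v is what makes the Hamiltonian effectively finite dimensional.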

Explicit solution of the HJB equation in the auxiliary LQ case
In this section we specify the general model by choosing f and g as in (4.7), for (x, m, u, z) ∈ R⁴. We also set U = R. Notice that Assumption 2.2 is satisfied. Moreover, denoting α = (α_0, −α_1), β = (β_0, β_1), and recalling (2.12), we have, for q ∈ R², the maximizer u*(q) := arg max in (4.8), which entails, by considering the definition of H given in (4.5), that the HJB equation (4.4) reads as (4.9):

(1/2) Tr(Q ∇²v(t, y)) + ⟨Ay, ∇v(t, y)⟩ + … = 0,

where λ = (λ_0, −λ_1). We look for solutions of (4.9) of the form (4.10). Moreover, if v is of the form (4.10), then (4.9) reduces to an identity (4.12) containing, among others, the terms (γ_0 + γ_1), ⟨α, y_0⟩, −r⟨a(t), y⟩, and −r b(t). The equation (4.12) is to be intended in a mild sense that we are going to specify in the following, since we cannot guarantee that, for all t, a(t) ∈ D(A*). Indeed, by (4.11), a(T) ∉ D(A*). Equation (4.12) can be split into two equations by isolating the terms containing y and all the other terms, namely

⟨ȧ(t), y⟩ + ⟨y, A* a(t)⟩ + ⟨α, y_0⟩ − r⟨a(t), y⟩ = 0 (4.13)

and (4.14). Taking into account that (4.13) must hold for all y ∈ H, and combining (4.13) and (4.14) with the final conditions (4.11), we obtain two separated equations, one for a and one for b, namely (4.15) and (4.16). We solve (4.15), which turns out to be an abstract evolution equation in H, in the mild sense, obtaining (4.17). Consequently we can write the solution (4.18) to (4.16), where a is given by (4.17). So far we have found a solution v to the HJB equation (4.9) whose candidate optimal feedback is deterministic. In the next section we will prove that it is indeed the optimal control and that v = V. We will also prove that the optimal feedback control associated to the optimal control problem associated to V is deterministic. This will allow us to apply Proposition 4.1, thus finding the optimal strategies for the initial problem in the linear quadratic case.
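The structure of (4.15) and (4.17) — a linear backward equation for a(t) solved in mild form — can be checked numerically in a scalar toy model without delay, where A* reduces to multiplication by a constant a₀ and (4.13) becomes ȧ(t) = (r − a₀)a(t) − α with a(T) = λ. All parameter values below are illustrative assumptions; the point is only that backward Euler integration matches the closed-form (mild) solution.

```python
import numpy as np

# Scalar toy version of the backward equation for a(t):
#   a'(t) = (r - a0) * a(t) - alpha,   a(T) = lam,
# integrated backward with explicit Euler and compared with the
# closed-form (mild) solution at t = 0. Illustrative parameters.
a0, alpha, r, lam, T = -0.5, 1.0, 0.3, 2.0, 1.0
kappa = r - a0                          # effective decay rate

n = 100_000
dt = T / n
a = lam                                 # terminal condition a(T) = lam
for _ in range(n):                      # step backward from T to 0
    a -= (kappa * a - alpha) * dt

# mild/closed-form solution evaluated at t = 0:
# a(0) = lam * e^{-kappa T} + (alpha/kappa) * (1 - e^{-kappa T})
closed_form = lam * np.exp(-kappa * T) + (alpha / kappa) * (1.0 - np.exp(-kappa * T))
```

In (4.17) the role of the scalar exponential is played by the adjoint semigroup e^{·A*}, which is why the equation only makes sense in the mild formulation.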

Fundamental Identity and Verification Theorem in the auxiliary LQ case
The aim of this subsection is to provide a verification theorem and the existence of optimal feedback controls for the linear quadratic problem for V introduced in the previous section. This, in particular, will imply that the solution in (4.10), with a and b given respectively by (4.17) and (4.18), coincides with the value function of our optimal control problem V defined in (4.2).
The main tool needed to get the desired results is an identity (often called the "fundamental identity", see equation (4.19)) satisfied by the solutions of the HJB equation. Since the solution (4.10) is not smooth enough (it is not differentiable with respect to t, due to the presence of A* in a, given by (4.17)), we need to perform an approximation procedure thanks to which Itô's formula can be applied. Finally, we pass to the limit and obtain the needed "fundamental identity".

Proposition 4.2. Let Assumption 2.1 hold. Let v be as in (4.10), with a and b given respectively by (4.17) and (4.18), solution of the HJB equation (4.9). Then for every t ∈ [0, T], y ∈ H, and u = (u, z) ∈ U × U with u = z, the fundamental identity (4.19) holds.

We would like to apply Itô's formula to the process (e^{−rs} v(s, Y^{t,y,u}(s)))_{s∈[t,T]}, but we cannot, because Y^{t,y,u} is a mild solution (the integrals in (3.2) are convolutions with a C₀-semigroup) and not a strong solution of (3.1); moreover, v is not differentiable in t, since (λ, 0) ∉ D(A*). We therefore approximate Y^{t,y,u} by means of the Yosida approximation (see also [10, Proposition 5.1]). For k₀ ∈ N large enough, the operator k − A, k ≥ k₀, is full-range and invertible, with continuous inverse, and k(k − A)^{-1}A can be extended to a continuous operator on H. Define, for k ≥ k₀, the corresponding operator A_k on H. It is well known that, as k → ∞, e^{tA_k} y′ → e^{tA} y′ in H, uniformly for t ∈ [0, T] and for y′ on compact sets of H. Since A_k is continuous, there exists a unique strong solution Y_k^{t,y,u} of the approximating equation. By taking into account (3.2) together with the same formula with A_k in place of A, and by recalling the convergence e^{•A_k} → e^{•A} mentioned above, one can easily show the convergence of Y_k^{t,y,u} to Y^{t,y,u}. We now take into consideration the approximating HJB equation (4.22). As argued for (4.9), a solution for (4.22) is given by the analogue of (4.10), with suitable a_k and b_k. Finally, adding and subtracting the appropriate terms and passing to the limit, we obtain (4.19). We can now pass to prove a verification theorem, i.e.
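The Yosida-type approximation above can be visualized in finite dimension, where A is a matrix, A_k := kA(kI − A)^{-1} is well defined for large k, and the convergence e^{tA_k} → e^{tA} can be measured directly. The matrix chosen below is arbitrary and purely illustrative.

```python
import numpy as np
from scipy.linalg import expm, inv

# Yosida-type approximation A_k = k A (k I - A)^{-1}: for a matrix A
# the semigroups e^{t A_k} converge to e^{t A} as k grows, mirroring
# the approximation of the mild solution in the text.
A = np.array([[0.0, 1.0], [-2.0, -0.3]])   # arbitrary illustrative matrix
I2 = np.eye(2)
t = 1.0

def yosida(k):
    return k * A @ inv(k * I2 - A)

# approximation error of the semigroup at time t, for increasing k
errs = [np.linalg.norm(expm(t * yosida(k)) - expm(t * A)) for k in (10, 100, 1000)]
```

The same mechanism justifies replacing Y^{t,y,u} by the strong solutions Y_k^{t,y,u} before applying Itô's formula and then letting k → ∞.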
a sufficient condition of optimality given in terms of the solution v of the HJB equation.

(i) For all (t, y) ∈ [0, T] × H we have v(t, y) ≥ V(t, y), where V is the value function defined in (4.2).
Proof. The first statement follows directly from (4.19), due to the positivity of the integrand. Concerning the second statement, we immediately see that, when u = u*, (4.19) becomes v(t, y) = J(t, y; u*). Since we know that, for any admissible control u = (u, z) ∈ U × U with u = z, J(t, y; u) ≤ V(t, y) ≤ v(t, y), the claim immediately follows.

Equivalence with the original problem in the LQ case
To find the solution of the original problem in the LQ case we need to apply Proposition 4.1, i.e. to prove that the optimal control in the original LQ case is deterministic. This is the subject of the next statement.

Corollary 4.5. Let f, g be as in (4.7). Let t ∈ [0, T], x ∈ R. If u* is as in (4.8), with (x, x) in place of y_0, then u*(B* a(s)) is optimal for V(t, x).
Data availability: Data sharing is not applicable to this article, as no datasets were generated or analysed during the current study.

Theorem 4.3. Let Assumption 2.1 hold true. Let v be as in (4.10), with a and b given respectively by (4.17) and (4.18), solution to the HJB equation (4.9). Then the following holds.
and the Hamiltonian function defined as

H_0(t, y, p) := sup_{u∈D} { f(t, y_0, u) + ⟨Bu, p⟩ },

with H_CV denoting the current value Hamiltonian function, and D being the diagonal in U × U, meaning D = {(u, u) : u ∈ U}. Notice that H_0(t, y, p) depends on p only by means of B*p. Indeed, if we define

H(t, y, q) := sup_{u∈D} { f(t, y_0, u) + ⟨u, q⟩ }, (4.5)

we get H_0(t, y, p) = H(t, y, B*p). Notice also that, in the above equations (4.4) and (4.6), the gradient inside the Hamiltonian H is indeed a couple of directional derivatives, since it acts only through the operator B*, whose image lies in R².
We then let k → ∞ in (4.26). Recalling the convergence e^{•A_k} → e^{•A} mentioned above, we first notice that a_k → a in H and b_k → b in R, uniformly on [0, T], as k → ∞.