Abstract
In this article, we develop integration by parts formulae on Wiener space for solutions of SDEs with general McKean–Vlasov interaction and uniformly elliptic coefficients. These integration by parts formulae hold both for derivatives with respect to a real variable and for derivatives with respect to a measure, understood in the sense of Lions. They allow us to prove the existence of a classical solution to a related PDE with irregular terminal condition. We also develop bounds for the derivatives of the density of the solutions of McKean–Vlasov SDEs.
1 Introduction
The main object of study in this paper is the McKean–Vlasov stochastic differential equation (MVSDE)
driven by a Brownian motion \(B= \left( B^1, \ldots , B^d \right) \), with coefficients \(V_0, \ldots , V_d: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) and initial condition \(\theta \), a square-integrable random variable independent of B. Here and throughout, we denote by \([\xi ]\) the law of a random variable \( \xi \) and by \({\mathcal {P}}_2({\mathbb {R}}^N)\) the set of probability measures on \({\mathbb {R}}^N\) with finite second moment.
MVSDEs are equations whose coefficients depend on the law of the solution. They are also referred to as mean-field SDEs and their solutions are often called nonlinear diffusions. These MVSDEs provide a probabilistic representation of the solutions of a class of nonlinear PDEs. A particular example of such nonlinear PDEs was first studied by McKean [29]. These equations describe the limiting behaviour of an individual particle evolving within a large system of particles undergoing diffusive motion and interacting in a ‘mean-field’ sense, as the population size grows to infinity. A particular characteristic of the limiting behaviour of the system is that any finite subset of particles becomes asymptotically independent of the others. This propagation of chaos phenomenon was studied by McKean [30] and Sznitman [34], among many other authors. Existence and uniqueness results, the theory of propagation of chaos and numerical methods have been studied in a variety of settings (see, for example, [6, 7, 21, 31]).
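The mean-field limit described above can be illustrated numerically. The sketch below is our own illustration, not taken from the paper: the drift \(b(x,\mu ) = -(x - \int y \, \mu (dy))\) is a hypothetical first-order interaction, and the law \([X_t]\) is replaced by the empirical measure of the particle system.

```python
import numpy as np

def simulate_particle_system(n_particles=5000, n_steps=200, T=1.0, seed=0):
    """Euler scheme for the N-particle system approximating the
    (hypothetical) McKean-Vlasov SDE  dX_t = -(X_t - E[X_t]) dt + dB_t.

    The law [X_t] appearing in the drift is replaced by the empirical
    measure of the particles, so each particle's drift depends on the
    empirical mean -- a simple first-order ('mean-field') interaction.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.standard_normal(n_particles)  # theta ~ N(0, 1), independent of B
    for _ in range(n_steps):
        drift = -(x - x.mean())           # plug-in empirical measure for [X_t]
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
    return x

x_T = simulate_particle_system()
```

For this linear interaction the mean of the limiting equation is constant in time, so the empirical mean at \(T=1\) remains close to its initial value 0, while the centred particles behave like an Ornstein–Uhlenbeck process.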
As MVSDEs can be interpreted as limiting equations for large systems, they are widely used as models in statistical physics [7, 31] as well as in the study of large-scale social interactions within the theory of mean-field games [10, 11, 19, 20, 26,27,28]. Recently, these equations have also appeared in the mathematical finance literature in the specification and calibration of multi-factor stochastic volatility and hybrid models [5, 17].
In this paper, we develop several new integration by parts formulae for solutions of MVSDEs. In turn, these formulae enable us to use MVSDEs to define the solution of a class of partial differential equations of the form
where \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) and the operator \(\mathcal {L}\) acts on sufficiently smooth functions \(F:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) and is defined by
where \(\sigma (z, \mu )\) is the \(N \times d\) matrix with columns \(V_1(z,\mu ), \ldots , V_d(z,\mu )\). The last two terms in the description of \(\mathcal {L} F(x,[\theta ])\) involve the derivative with respect to the measure variable as introduced by Lions in his seminal lectures at the Collège de France (see [9] for details), which we describe in Sect. 2.3. The papers [3, 4, 22] present further details of the relevance of the class of nonlinear partial differential equations of the form (1.2).
For linear parabolic PDEs on \([0,T] \times {\mathbb {R}}^N\), it is well known from classical works such as [16, 18] that, under a uniform ellipticity or a Hörmander condition, there exist classical solutions even when the initial condition is not differentiable. In this paper, we explore to what extent the same is true for the PDE (1.2) under a uniform ellipticity assumption. That is, we consider the question of whether the PDE (1.2) has classical solutions when the initial condition g is not differentiable. For this, we exploit a probabilistic representation for the classical solutionFootnote 1 of the PDE (1.2) given in terms of a functional of \(X^{\theta }_t\) and of the solution of the following de-coupled equation:
We say that this equation is de-coupled as the law appearing in the coefficients is \(\left[ X^{\theta }_s \right] \) (the solution of Eq. (1.1)), rather than the law of \(X^{x, [\theta ]}_t \), the solution to Eq. (1.3) itself.Footnote 2 In the following, we show that, for a certain class of functions \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) (not necessarily smooth), the function
solves the PDE (1.2). A similar result has been proved in [8, 12] under different conditions than ours and for an initial condition g that is sufficiently smooth.
For the stochastic flow \((X_t^x)_{t \ge 0}\) solving a classical SDE with initial condition \(x \in {\mathbb {R}}^N\), the standard strategy to show that the function \(u(t,x):={\mathbb {E}}\, g(X_t^x)\) is a classical solution of a linear PDE is to show, using the flow property of \(X_t^x\), that for \(h>0\), \(u(t+h,x)={\mathbb {E}}\, [u(t,X_h^{x})]\) and then show that u is regular enough to apply Itô’s formula to \( u(t,X_h^{x})\). Expanding this process using Itô’s formula and sending \(h \rightarrow 0\) shows that u does indeed solve the related PDE. For MVSDEs, one can develop a similar approach. In this setting, to expand a function depending not only on the process \((X^{x, [\theta ]}_t )_{t \ge 0}\) (where we can use the usual Itô formula) but also on the flow of measures \(\left( [X^{\theta }_t] \right) _{t \ge 0}\), we require an extension of the classical chain rule and we use here the chain rule proved in [12]. Our main focus is therefore to provide conditions under which U, defined in (1.4), is regular enough to apply the Itô formula and the extended chain rule.
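The classical argument just described can be summarised by the following formal computation (a sketch only; here \(\mathcal {L}\) denotes the generator of the flow, and the regularity of u needed to justify each step is precisely what must be established):

```latex
u(t+h,x) = \mathbb{E}\,[u(t,X_h^{x})]
         = u(t,x) + \mathbb{E}\int_0^h (\mathcal{L}u)(t,X_s^{x})\,ds,
\qquad\text{so}\qquad
\lim_{h \to 0} \frac{u(t+h,x)-u(t,x)}{h} = (\mathcal{L}u)(t,x).
```

In the McKean–Vlasov setting the same scheme applies, with the classical Itô formula replaced by the chain rule of [12] along the flow of measures.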
For a general Lipschitz continuous function \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\), we cannot expect the mapping \((x,[\theta ]) \mapsto {\mathbb {E}}[ \, g (X^{x, [\theta ]}_t,[ X^{\theta }_t ]) ]\) to be differentiable (for a fixed \(t>0\)) even when the coefficients in the equation for \(X^{x, [\theta ]}_t\) are smooth and uniformly elliptic. This is shown in Example 5.1. We are, however, able to identify a class of non-smooth initial conditions (including interesting examples, see Example 5.4) for which we can develop integration by parts formulas and establish sufficient smoothness of the associated function U. For g in this class, we use Malliavin calculus to show that \((x,[\theta ]) \mapsto {\mathbb {E}}[ \, g (X^{x, [\theta ]}_t,[ X^{\theta }_t ]) ]\) is differentiable. The differentiability in the measure direction is somewhat surprising since there is no noise added in the measure direction, and this smoothing property seems to be new. We give further details of our results in the next section.
1.1 Outline and main results
In Sect. 2, we introduce the notation and the basic results related to MVSDEs. In particular, when describing the smoothness of the coefficients in Eqs. (1.1) and (1.3) in our assumptions, we use the notation \({\mathcal {C}}^{k,k}_{b,\text {Lip}}({\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) for functions that are k-times differentiable with bounded, Lipschitz derivatives; this class is defined precisely in Sect. 2.3. Similarly, we use the notation \({\mathbb {K}}^q_r(E,M)\) to denote processes taking values in a Hilbert space E which are smooth in both the Euclidean and measure variables as well as in the Malliavin sense, where M denotes how many times the process can be differentiated. This class, which we call the class of Kusuoka–Stroock processes, is introduced in Sect. 2.4. It generalizes the class of processes introduced in [25] and analysed in [14].
In Sect. 3, we prove some results on the differentiability of \(X^{x, [\theta ]}_t\), the solution to Eq. (1.3), with respect to the parameters \((x,[\theta ])\). The main result of Sect. 3 is Theorem 3.2, which says that if \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\), then \((t, x, [\theta ]) \mapsto X_t^{x, [\theta ]} \in {\mathbb {K}}^1_{0}({\mathbb {R}}^N,k)\). This is proved in Appendix 6.2. We then introduce the uniform ellipticity assumption (UE) in Assumption 3.3, used throughout the rest of the paper. The rest of the section details several corollaries, where we analyse the processes that will play the rôle of Malliavin weights in the integration by parts formulas and identify the class \({\mathbb {K}}^q_r(E,M)\) of Kusuoka–Stroock processes to which they belong.
With the main technical results complete, in Sect. 4 we develop integration by parts formulas for derivatives of \((x,[\theta ]) \mapsto {\mathbb {E}}f(X^{x, [\theta ]}_t)\) under (UE) and the assumption that \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,\text {Lip}}({\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). We do this for derivatives with respect to x and with respect to \(\mu \). In particular we show that (see Propositions 4.1 and 4.2), for \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\), \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\) and for \(|\alpha | + |\beta | \le [n \wedge (k-2)]\), we have
where \(I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) \) and \({\mathcal {I}}^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) \) are defined in Sect. 4.1 and satisfy \(I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) \in {\mathbb {K}}_r^{q+2|\alpha |+3|\beta |}({\mathbb {R}}, m)\) and \({\mathcal {I}}^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) \in {\mathbb {K}}_r^{q+4|\alpha |+3|\beta |}({\mathbb {R}}, m),\) where \(m=[n \wedge (k-2)]-|\alpha |-|\beta |\). We also consider integration by parts formulas for derivatives of the function \( x \mapsto {\mathbb {E}}f(X_t^{x, \delta _x})\) (see Theorem 4.4).
In Sect. 5, we return our attention to the PDE (1.2). In Definition 5.3, we introduce the class \(\mathbf (IC) \) of non-differentiable initial conditions g for which we are able to prove that \((x,[\theta ]) \mapsto {\mathbb {E}}[ \, g (X^{x, [\theta ]}_t,[ X^{\theta }_t ]) ]\) is differentiable. We do this by extending the integration by parts formulas of Sect. 4 to cover this class. Then, for g in this class, assuming uniform ellipticity and coefficients \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\) (possibly bounded, depending on the exact form of g), we are able to prove the existence and uniqueness of solutions to the PDE (1.2). In particular, we show (see Theorem 5.8) that the function U, defined in (1.4), is a classical solution of the PDE (1.2). Moreover, U is unique among all classical solutions satisfying the polynomial growth condition \(\left| U(t,x,[\theta ])\right| \le C (1+|x|+\Vert \theta \Vert _2)^q\) for some \(q>0\) and all \((t,x,[\theta ]) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\).
Finally, in Sect. 6, we apply the integration by parts formulae to the study of the density function of \(X_t^{x,\delta _x}\). We study the smoothness of the density function and obtain estimates on its derivatives. The main result (see Theorem 6.1) states that, under suitable conditions, \(X_t^{x,\delta _x}\) has a density p(t, x, z) such that \((x,z) \mapsto p(t,x, z)\) is differentiable a number of times depending on the regularity of the coefficients. Indeed, when these derivatives exist, there exists a constant C such that
where \( \mu = 4|\alpha |+ 3 |\beta | + 3 N\) and \( \nu = \textstyle \frac{1}{2} (N + | \alpha | + | \beta | )\). Moreover, if \(V_0, \ldots , V_d\) are bounded, then the following Gaussian-type estimate holds
1.2 Comparison with other works
As mentioned previously, the PDE (1.2) is also studied in [8] and [12]. Let us explain the relationship between the results in those works and the results in this paper.
In [8], the authors prove that derivatives of \((x,[\theta ]) \mapsto X^{x, [\theta ]}_t\) exist up to second order. We also prove this as part of Theorem 3.2, although we extend it to derivatives of any order (assuming sufficient smoothness of the coefficients). In [8], the hypotheses on the continuity and differentiability of the coefficients are the same as ours. The authors then consider initial conditions \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) for which the derivatives up to second order exist and are bounded, which they use to prove regularity of U. Since g is sufficiently smooth, they do not need to impose any non-degeneracy condition on the coefficients. In our work, we remove the constraint on the smoothness of g at the expense of assuming a non-degeneracy condition on the coefficients of the MVSDEs. In this sense, their results are complementary to ours.
The paper [12] has a completely different scope. The authors are interested in a nonlinear PDE on \([0,T]\times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\), called the master equation in reference to the theory of mean-field games. The PDE we consider is a special case of this, although again they assume that the function g is twice differentiable. Their strategy for proving regularity of U is also different. In their setting, the authors prove that derivatives of the lifted flow \({\mathbb {R}}^N \times L^2(\Omega ) \ni (x,\theta ) \mapsto X^{x, [\theta ]}_t\) exist up to second order (with derivatives in the variable \(\theta \) being Fréchet derivatives on the Hilbert space \(L^2(\Omega )\)), where \(X^{x, [\theta ]}_t\) is the forward component in a coupled forward-backward system. They use this result, along with sufficient smoothness of g, to prove that the lifted function \(\widetilde{U}\), defined on \([0,T]\times {\mathbb {R}}^N \times L^2(\Omega )\), is sufficiently regular in the Fréchet sense. They then prove a result which allows them to recover regularity of the second order derivatives of U from properties of the second order Fréchet derivatives of \(\widetilde{U}\). Using their strategy, the authors of [12] are able to impose hypotheses which only involve conditions on derivatives of the coefficients \(\partial _{\mu }V_i(x,[\theta ],v)\) evaluated at \(v=\theta \in L^2(\Omega )\).
This is in contrast to our assumptions which impose conditions on \(\partial _{\mu }V_i(x,[\theta ],v)\) for all \((x,[\theta ],v) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\).
More recently, two other works [2, 13] give some partial results related to the smoothness of the solutions of McKean–Vlasov SDEs. In [2], the Malliavin differentiability of McKean–Vlasov SDEs is studied using a stochastic perturbation approach of Bismut type. In [13], the strong well-posedness of McKean–Vlasov SDEs is proven when the diffusion matrix is Lipschitz with respect to both the space and measure arguments and uniformly elliptic, and the drift is bounded in space and Hölder continuous in the measure direction. Both works restrict themselves to the particular case when the coefficients' dependence on the law of the solution is of scalar type. We obtain some related results in [15], under the same scalar dependence restriction, but under the more general Hörmander condition.
We base our results on the use of Malliavin calculus techniques. The new integration by parts formulae and, more importantly, the identification of the processes appearing in these formulae as Kusuoka–Stroock processes is key to our analysis. The use of Kusuoka–Stroock processes is a very versatile tool. Not only does it enable us to identify the solution of the PDE (1.2), but it also allows us to study the density of \(X_t^{x,\delta _x}\) and obtain both polynomial and Gaussian local bounds for its derivatives. We are not aware of similar bounds obtained elsewhere in the literature for densities of solutions of MVSDEs.
2 Preliminaries
2.1 Notation and basic setup
We work on a filtered probability space \((\Omega , {\mathcal {F}}, \mathbb {F}= \{{\mathcal {F}}_t\}_{t \in [0,T]} , {\mathbb {P}})\) which supports an \(\mathbb {F}\)-adapted d-dimensional Brownian motion, \(B=(B^1, \ldots , B^d)\). We also often denote \(B^0(s)=s\) for \(s \in [0,T]\). We assume that there is a sufficiently rich sub-\(\sigma \)-algebra \(\mathcal {G} \subset {\mathcal {F}}\) independent of B such that all measures \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\) correspond to the law of a random variable in \(L^2((\Omega ,\mathcal {G}, {\mathbb {P}}) ;{\mathbb {R}}^N)\). Then, we define \(\mathbb {F}\) to be the filtration generated by B, completed and augmented by \(\mathcal {G}\). This is to ensure that in the sequel when we consider processes starting from arbitrary initial conditions \( \theta \in L^2(\Omega ;{\mathbb {R}}^N)\) these processes will be \(\mathbb {F}\)-adapted. We denote the \(L^p\) norm on \((\Omega , {\mathcal {F}},{\mathbb {P}})\) by \(\Vert \cdot \Vert _p \) and we also introduce the space \({\mathcal {S}}^p_T\) of continuous \(\mathbb {F}\)-adapted processes \(\varphi \) on [0, T], satisfying
In addition to the probability space \((\Omega , {\mathcal {F}}, {\mathbb {P}})\), we will also make use of other probability spaces \(( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}})\) and \((\widehat{\Omega }, \widehat{{\mathcal {F}}}, \widehat{{\mathbb {P}}})\) when performing the lifting operation associated with the Lions derivative. We assume that these satisfy the same conditions as \((\Omega , {\mathcal {F}}, {\mathbb {P}})\). We denote the \(L^p\) norm on each of these spaces by \(\Vert \cdot \Vert _p \) unless we want to emphasise which space we are working on, in which case we use \(\Vert \cdot \Vert _{L^p(\widetilde{\Omega })} \) etc. We use \(| \cdot |\) to denote the Euclidean norm. Throughout we denote by \(\alpha \) and \(\beta \) multi-indices on \(\{1, \ldots , N\}\) including the empty multi-index. We denote by \(Id_N\) the \(N \times N\) identity matrix. We also use some terminology from Malliavin calculus: we denote by \(\mathcal {\mathbf {D}}\) the Malliavin derivative and by \(\delta \) its adjoint, the Skorohod integral. We outline very briefly the basic operators of Malliavin calculus in Appendix 6.1.
2.2 Basic results on McKean–Vlasov SDEs
We study McKean–Vlasov SDEs with general Lipschitz interaction. The coefficients are functions from \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) to \({\mathbb {R}}^N\), where \({\mathcal {P}}_2({\mathbb {R}}^N)\) denotes the space of probability measures on \({\mathbb {R}}^N\) with finite second moment. We equip this space with the 2-Wasserstein metric, \(W_2\). For a general metric space (M, d), we define the 2-Wasserstein metric on \({\mathcal {P}}_2(M)\) by
where \({\mathcal {P}}_{\mu ,\nu }\) denotes the set of measures on \(M \times M\) with marginals \(\mu \) and \(\nu \). When we refer to the Lipschitz property of the coefficients, it is with respect to product distance on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\).
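As an illustration (our own, not from the paper): for empirical measures on \({\mathbb {R}}\) with the same number of atoms, the optimal coupling in the definition above is the monotone rearrangement, so \(W_2\) reduces to an \(L^2\) distance between sorted samples. A minimal sketch:

```python
import numpy as np

def w2_empirical_1d(xs, ys):
    """2-Wasserstein distance between two empirical measures on R with the
    same number of (equally weighted) atoms.  In one dimension the optimal
    coupling is the monotone (sorted) rearrangement, so W2 is the L2
    distance between sorted samples.  (Illustrative only; the paper works
    with measures on R^N.)"""
    xs, ys = np.sort(np.asarray(xs, float)), np.sort(np.asarray(ys, float))
    assert xs.shape == ys.shape, "need the same number of atoms"
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

# Translating a measure by c gives W2 = |c|: here c = 0.5.
d = w2_empirical_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])
```

In dimension \(N > 1\) no such closed form exists and \(W_2\) must be computed by solving the optimal transport problem.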
Proposition 2.1
(Existence, uniqueness and \(L^p\) estimates) Suppose that \(\theta \in L^2( \Omega )\) and that \(V_0, \ldots , V_d\) are uniformly Lipschitz continuous. Then there exists a unique strong solution to the equation
and there exists a constant \(C=C(T)\), such that
Similarly, there exists a unique, strong solution to the equation
and there exists a constant \(C=C(p,T)\), such that for all \(p \ge 1\),
Moreover, for all \((x, \theta , t), (x^{\prime }, \theta ^{\prime }, t^{\prime }) \in {\mathbb {R}}^N \times L^2(\Omega ) \times [0,T]\) and \(p \ge 1\),
and
Finally, we have the following flow property for any \(t \in [0,T) \), \(s \in (t,T]\), \(x \in {\mathbb {R}}^N\) and \(\theta \in L^2(\Omega )\),
Proof
The proof is standard and we leave it to the reader. We note that existence and uniqueness of a solution to Eq. (2.1) was proved in [34] for first-order McKean–Vlasov interaction. The case of a general Lipschitz McKean–Vlasov interaction is covered in [21].\(\square \)
2.3 Differentiation in \({\mathcal {P}}_2({\mathbb {R}}^N)\)
In Sect. 5, we study an SDE with a general McKean–Vlasov dependence. We will be interested in differentiability of the stochastic flow associated to this SDE and an associated PDE on \([0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\). We thus need a notion of derivative for a function on a space of probability measures. The notion of differentiability we use was introduced by P.-L. Lions in his lectures at the Collège de France, recorded in a set of notes by Cardaliaguet [9]. The underlying idea is explained very clearly in [11], which we draw on here.
Lions’ notion of differentiability is based on the lifting of functions \(U: {\mathcal {P}}_2({\mathbb {R}}^N)\rightarrow {\mathbb {R}}\) into functions \(\tilde{U}\) defined on the Hilbert space \(L^2(\tilde{\Omega };{\mathbb {R}}^N)\) over some probability space \((\tilde{\Omega },\tilde{\mathcal {F}},\tilde{\mathbb {P}})\), \(\tilde{\Omega }\) being a Polish space and \(\tilde{\mathbb {P}}\) an atomless measure, by setting \(\tilde{U}({\tilde{X}})=U([{\tilde{X}}])\) for \({\tilde{X}}\in L^2(\tilde{\Omega };{\mathbb {R}}^N)\). Then, a function U is said to be differentiable at \(\mu _0\in {\mathcal {P}}_2({\mathbb {R}}^N)\) if there exists a random variable \(\tilde{X}_0\) with law \(\mu _0\) such that the lifted function \(\tilde{U}\) is Fréchet differentiable at \(\tilde{X}_0\). Whenever this is the case, the Fréchet derivative of \(\tilde{U}\) at \(\tilde{X}_0\) can be viewed as an element of \(L^2(\tilde{\Omega };{\mathbb {R}}^N)\) by identifying \(L^2(\tilde{\Omega };{\mathbb {R}}^N)\) and its dual. The derivative in a direction \(\tilde{\gamma }\in L^2(\tilde{\Omega };{\mathbb {R}}^N)\) is given by
It then turns out (see Section 6 in [9] for details) that the distribution of \(D \tilde{U} (\tilde{X}_0) \in L^2(\tilde{\Omega };{\mathbb {R}}^N)\) depends only upon the law \(\mu _0\) and not upon the particular random variable \( \tilde{X}_0\) having distribution \(\mu _0\). It is shown in [9] that, as a random variable, \(D \tilde{U} (\tilde{X}_0)\) is of the form \( g_{\mu _0}( \tilde{X}_0)\), where \( g_{\mu _0} : {\mathbb {R}}^N \rightarrow {\mathbb {R}}^N\) is a deterministic measurable function which is uniquely defined \(\mu _0\)-almost everywhere on \({\mathbb {R}}^N\), and is square-integrable with respect to the measure \(\mu _0\). We call \(\partial _{\mu }U(\mu _0):=g_{\mu _0}\) the derivative of U at \(\mu _0\). We use the notation \(\partial _{\mu } U(\mu _{0}, \cdot ) : {\mathbb {R}}^N \ni v \mapsto \partial _{\mu } U(\mu _{0},v) \in {\mathbb {R}}^N\), which satisfies, by definition,
This holds for any random variable \(\tilde{X}_{0}\) with distribution \(\mu _0\), irrespective of the probability space on which it is defined.
In the sequel, we will consider functions which are differentiable globally on \( {\mathcal {P}}_2({\mathbb {R}}^N)\). Moreover, we will consider functions where for each \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\), there exists a version of the derivative \(\partial _{\mu }U(\mu )\) which is assumed to be a priori continuous as a function
In this case, such a version is unique: for each \(\theta \in L^2(\Omega ; {\mathbb {R}}^N)\), \(\partial _{\mu }U([\theta ],v)\) is defined \([\theta ](dv)\)-a.e.; taking a Gaussian random variable G independent of \(\theta \) and \(\epsilon >0\), \(\partial _{\mu }U([\theta +\epsilon G],v)\) is defined (dv)-a.e., and letting \(\epsilon \rightarrow 0\) and using the continuity of \(\partial _{\mu }U\) identifies \(\partial _{\mu }U([\theta ],v)\) uniquely. We show how this definition works in practice in Examples 2.5 and 2.6.
For a function \(f: {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\), we can straightforwardly apply the above discussion to each component of \(f=(f^1, \ldots , f^N)\). To extend to higher derivatives we note that \(\partial _{\mu } f^i \) takes values in \({\mathbb {R}}^N\), so we denote its components by \( ( \partial _{\mu } f^i)_j : {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) for \(j=1, \ldots , N\) and, for a fixed \(v \in {\mathbb {R}}^N\), we can discuss again the differentiability of \( {\mathcal {P}}_2({\mathbb {R}}^N) \ni \mu \mapsto (\partial _{\mu } f^i)_j(\mu ,v) \in {\mathbb {R}}\). If the derivative of this function exists and there is a continuous version of
then it is unique. It makes sense to use the multi-index notation \(\partial ^{(j,k)}_{\mu } f^i: = ( \partial _{\mu }( \partial _{\mu } f^i)_j)_k\). Similarly, for higher derivatives, if for each \((i_0, \ldots , i_n) \in \{1, \ldots , N\}^{n+1}\),
exists, we denote this \(\partial ^{\alpha }_{\mu }f^{i_0}\) with \(\alpha = (i_1, \ldots , i_n)\). Now, each derivative in \(\mu \) is a function of an ‘extra’ variable, so \(\partial ^{\alpha }_{\mu }f^{i_0}: {\mathcal {P}}_2({\mathbb {R}}^N) \times ({\mathbb {R}}^N)^n \rightarrow {\mathbb {R}}\). We always denote these variables by \(v_1, \ldots , v_n\), so
When there is no possibility of confusion, we will abbreviate \((v_1, \ldots , v_n)\) to \({\varvec{v}}\), so that
For \({\varvec{v}}=(v_1, \ldots , v_n) \in ({\mathbb {R}}^N)^n\), we will denote
with \(|\cdot |\) the Euclidean norm on \({\mathbb {R}}^N\). It then makes sense to discuss derivatives of the function \(\partial ^{\alpha }_{\mu }f^{i_0}\) with respect to the variables \(v_1, \ldots , v_n\). If, for some \(j \in \{1, \ldots , N\}\) and all \((\mu , v_1, \ldots ,v_{j-1}, v_{j+1}, \ldots , v_n) \in {\mathcal {P}}_2({\mathbb {R}}^N) \times ({\mathbb {R}}^N)^{n-1}\),
is l-times continuously differentiable, we denote the derivatives by \(\partial _{v_j}^{\beta _j}\partial ^{\alpha }_{\mu }f^{i_0}\), for \(\beta _j\) a multi-index on \(\{1, \ldots , N\}\) with \(|\beta _j| \le l\). As above, we will denote by \({\varvec{\beta }}\) the n-tuple of multi-indices \((\beta _1,\ldots , \beta _n)\). We also associate a length to \({\varvec{\beta }}\) by
and denote \(\# {\varvec{\beta }}:=n\). Then, we denote by \({\mathcal {B}}_n\) the collection of all such \({\varvec{\beta }}\) with \(\# {\varvec{\beta }}=n\), and set \({\mathcal {B}}:= \textstyle \cup _{n \ge 1} {\mathcal {B}}_n\). Again, to lighten notation, we will use
The coefficients in Eqs. (2.1) and (2.3) are of the type \(V_0, \ldots , V_d: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\), so depend on a Euclidean variable as well as a measure variable. Considering functions on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) raises a question about whether the order in which we take derivatives matters. A result from [8] says that derivatives commute when the mixed derivatives are Lipschitz continuous.
Lemma 2.2
(Lemma 4.1 in [8]) Let \(g: {\mathbb {R}}\times {\mathcal {P}}_2({\mathbb {R}}) \rightarrow {\mathbb {R}}\) and suppose that the derivative functions
both exist and are Lipschitz continuous: i.e. there exists a constant \(C>0\) such that
Then, the functions \(\partial _{x} \partial _{\mu } g\) and \(\partial _{\mu } \partial _{x} g\) are identical.
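A quick sanity check (our own example, not from [8]): take \(g(x,\mu ) = x \int y \, \mu (dy)\) on \({\mathbb {R}}\times {\mathcal {P}}_2({\mathbb {R}})\). Then

```latex
\partial_\mu g(x,\mu,v) = x
  \;\Longrightarrow\; \partial_x \partial_\mu g(x,\mu,v) = 1,
\qquad
\partial_x g(x,\mu) = \textstyle\int y\,\mu(dy)
  \;\Longrightarrow\; \partial_\mu \partial_x g(x,\mu,v) = 1,
```

so the two mixed derivatives indeed coincide, as the lemma asserts.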
With this in mind, we can introduce the following definition.
Definition 2.3
(\({\mathcal {C}}^{n,n}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N)\))
-
(a)
Let \(V: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) with components \(V^1, \ldots , V^N: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\). We say that \(V \in {\mathcal {C}}^{1,1}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ;{\mathbb {R}}^N)\) if the following hold true: for each \(i=1, \ldots , N\), \(\partial _{\mu } V^i\) exists and \(\partial _xV\) exists. Moreover, assume that for all \((x, \mu , v) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\)
$$\begin{aligned} \left| \partial _x V^i(x,\mu ) \right| + \left| \partial _\mu V^i \left( x, \mu , v \right) \right| \le C. \end{aligned}$$In addition, suppose that \(\partial _{\mu }V^i\) and \(\partial _xV\) are Lipschitz in the sense that for all \((x, \mu , v), ( x^{\prime }, \mu ^{\prime }, v^{\prime } ) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\),
$$\begin{aligned} \left| \partial _{\mu } V^i(x,\mu ,v) - \partial _{\mu } V^i(x^{\prime }, \mu ^{\prime },v^{\prime }) \right|&\le C \left( |x-x^{\prime }| + W_2(\mu , \mu ^{\prime }) + |v-v^{\prime }| \right) , \\ \left| \partial _{x} V(x,\mu ) - \partial _{x} V(x^{\prime }, \mu ^{\prime }) \right|&\le C \left( |x-x^{\prime }| + W_2(\mu , \mu ^{\prime }) \right) . \end{aligned}$$ -
(b)
We say that \(V \in {\mathcal {C}}^{n,n}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N) \) if the following hold true: for each \(i=1, \ldots , N\), and all multi-indices \(\alpha \) and \(\gamma \) on \(\{1, \ldots , N\}\) and all \({\varvec{\beta }}\in {\mathcal {B}}\) satisfying \(|\alpha | + |{\varvec{\beta }}| + |\gamma | \le n\), the derivatives
$$\begin{aligned} \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu }V^i(x,\mu ,{\varvec{v}}), \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu }\partial ^{\gamma }_xV^i(x,\mu ,{\varvec{v}}), \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\gamma }_x\partial ^{\alpha }_{\mu }V^i(x,\mu ,{\varvec{v}}) \end{aligned}$$exist. Moreover, suppose that each of these derivatives is bounded and Lipschitz.
-
(c)
We say that \(h \in {\mathcal {C}}^n_{b,Lip}({\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N)\) if \(h:{\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) does not depend on a Euclidean variable but otherwise satisfies the conditions in part (b).
Remark 2.4
-
1.
For functions \(V:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\), we will also consider the lifting \(\tilde{V} : {\mathbb {R}}^N \times L^2( \Omega ) \rightarrow {\mathbb {R}}^N\). Then, for \(\xi \in L^2(\Omega )\), \(\tilde{V}(\xi , \xi )\) should be interpreted as \(\tilde{V}(\xi ( \omega ), \xi ) = V(\xi ( \omega ), [\xi ])\) with the first argument being considered pointwise by \( \omega \) and the second depending on the random variable \(\xi \) through its law.
-
2.
From the bounds in Definition 2.3(a), we have the following simple consequences for the Fréchet derivative of the lifting \(\tilde{V}\) of V: for all \(x,x^{\prime } \in {\mathbb {R}}^N\) and \(\theta ,\theta ^{\prime }, \gamma ,\gamma ^{\prime } \in L^2(\Omega )\),
$$\begin{aligned} \left| D \tilde{V}(x,\theta )(\gamma ) \right|&\le C \, \Vert \gamma \Vert _2\\ \left| D \tilde{V}(x,\theta )(\gamma ) - D \tilde{V}(x^{\prime },\theta ^{\prime })(\gamma ^{\prime }) \right|&\le C \left[ \Vert \gamma \Vert _2 \left( |x-x^{\prime }|+\Vert \theta - \theta ^{\prime }\Vert _2 \right) \right. \\&\qquad \left. +\, \Vert \gamma -\gamma ^{\prime }\Vert _2 \right] . \end{aligned}$$ -
3.
Note that we cannot interchange the order of \(\partial _\mu \) and \(\partial _v\) in \( \partial _v \partial _{\mu }V(x,\mu ,v)\) since \(V(x,\mu )\) does not depend on v. However, if \(V \in {\mathcal {C}}^{n,n}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N) \) then for all \(\alpha , {\varvec{\beta }}, \gamma \) with \(|\alpha | + |{\varvec{\beta }}| + |\gamma | \le n\), we have that
$$\begin{aligned} \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } V(x,\mu ,{\varvec{v}}) =\partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\gamma }_x \partial ^{\alpha }_{\mu } V(x,\mu ,{\varvec{v}}) = \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } \partial ^{\gamma }_x V(x,\mu ,{\varvec{v}}) \end{aligned}$$due to Lemma 2.2.
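As a concrete illustration of the lifting in Remark 2.4(1), the following sketch evaluates \(\tilde{V}(\xi , \xi )\) for a hypothetical scalar-interaction coefficient \(V(x,\mu ) = x \int z \, d\mu (z)\), with an empirical sample standing in for \(\xi \in L^2(\Omega )\); neither the coefficient nor the sample values come from the text.

```python
import numpy as np

# Illustration only: a hypothetical scalar-interaction coefficient
# V(x, mu) = x * int z dmu(z), with an empirical sample standing in
# for the random variable xi in L^2(Omega).
def V(x, mu_samples):
    # the second argument enters only through the (empirical) law
    return x * np.mean(mu_samples)

def V_lifted(xi_samples):
    # tilde-V(xi, xi): first argument taken pointwise in omega,
    # second through the law of xi
    return np.array([V(xi_omega, xi_samples) for xi_omega in xi_samples])

xi = np.array([1.0, 2.0, 3.0])
out = V_lifted(xi)   # entrywise xi(omega) * E[xi] = xi(omega) * 2.0
```

The point of the sketch is the asymmetry of the two arguments: varying one sample entry changes the first argument pointwise, but changes the second argument only through the empirical law.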
We now introduce some concrete examples of functions \(V: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\).
Example 2.5
(Scalar interaction) Take \(U \in {\mathcal {C}}^{k+1}_b({\mathbb {R}}^N \times {\mathbb {R}};{\mathbb {R}}^N)\), \(\phi \in {\mathcal {C}}^{k+1}_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\textstyle V(x,\mu ):=U(x, \int \phi d \mu )\).
Example 2.6
(First-order interaction) Take \(W \in {\mathcal {C}}^{k+1}_b({\mathbb {R}}^N \times {\mathbb {R}}^N;{\mathbb {R}}^N)\) and \( \textstyle V(x,\mu ):= \int W(x, \cdot )d \mu \).
Lemma 2.7
In both examples, \(V \in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\).
The proof is straightforward.
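Both examples can be evaluated against an empirical measure \(\mu = n^{-1} \sum _j \delta _{z_j}\). The sketch below takes \(N=1\) with illustrative choices of U, \(\phi \) and W that are not fixed by the text.

```python
import numpy as np

# Examples 2.5 and 2.6 evaluated against an empirical measure
# mu = (1/n) sum_j delta_{z_j}, in dimension N = 1.  The choices of
# U, phi and W below are illustrative, not fixed by the text.

def scalar_interaction(x, z):
    # Example 2.5: V(x, mu) = U(x, int phi dmu), U(x, y) = x + y, phi = tanh
    return x + np.mean(np.tanh(z))

def first_order_interaction(x, z):
    # Example 2.6: V(x, mu) = int W(x, .) dmu, W(x, z) = sin(x - z)
    return np.mean(np.sin(x - z))

z = np.zeros(2)                         # empirical measure concentrated at 0
a = scalar_interaction(1.0, z)          # 1 + tanh(0) = 1.0
b = first_order_interaction(0.5, z)     # sin(0.5 - 0) = sin(0.5)
```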
2.4 Kusuoka–Stroock processes on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\)
In Sect. 4, we develop integration by parts formulae modelled on those developed by Kusuoka and Stroock [24, 25] for solutions of classical SDEs. These integration by parts formulae take the form
for processes \(\Psi , \Psi _{\alpha },\Psi _{\beta }\) belonging to a specific class. We work with a class of processes similar to the one introduced in [25], which we call the class of Kusuoka–Stroock processes.
Definition 2.8
(Kusuoka–Stroock processes on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\)) Let E be a separable Hilbert space and let \(r \in {\mathbb {R}}\), \(q,M \in \mathbb {N}\). We denote by \({\mathbb {K}}^q_{r}(E,M)\) the set of processes \(\Psi : [0,T] \times \mathbb {R}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow \mathbb {D}^{M,\infty }(E)\) satisfying the following:
-
1.
For any multi-indices \(\alpha , {\varvec{\beta }}\), \(\gamma \) satisfying \(\vert \alpha \vert + |{\varvec{\beta }}| +|\gamma | \le M\), the function
$$\begin{aligned}{}[0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \ni (t,x , [\theta ]) \mapsto \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } \Psi (t,x,[\theta ], {\varvec{v}}) \in L^p(\Omega ) \end{aligned}$$exists and is continuous for all \(p \ge 1\).
-
2.
For any \(p \ge 1\) and \(m \in {\mathbb {N}}\) with \(|\alpha | + |{\varvec{\beta }}| + |\gamma | +m \le M\), we have
$$\begin{aligned} \sup _{ {\varvec{v}}\in ({\mathbb {R}}^N)^{\# {\varvec{\beta }}}} \sup _{t \in (0,T]} t^{-r/2}&\left\| \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } \Psi (t,x,[\theta ], {\varvec{v}}) \right\| _{ {\mathbb {D}}^{m,p}(E) } \le C \, \left( 1 + |x| + \Vert \theta \Vert _2 \right) ^q. \end{aligned}$$(2.7)
Remark 2.9
This definition is different to that in [25] in the following ways:
-
1.
The processes depend on a parameter \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\).
-
2.
We keep track of polynomial growth in x of the \({\mathbb {D}}^{m,p}\)-norm through a parameter \(q>0\) instead of requiring it to be uniformly bounded.
-
3.
We require continuity in \(L^p(\Omega )\) rather than almost surely.
Remark 2.10
-
1.
The number M denotes how many times the Kusuoka–Stroock process can be differentiated; q measures the polynomial growth of the \({\mathbb {D}}^{m,p}\)-norm of the process in \((x,[\theta ])\), and r measures the growth in t.
-
2.
In the definition, we are able to stipulate that the \({\mathbb {D}}^{m,p}\)-norm of all the derivatives is uniformly bounded w.r.t. \({\varvec{v}}\) because, in the sequel, the only dependence on \({\varvec{v}}\) in any Kusuoka–Stroock process will come from \(\partial _{\mu } X^{x, [\theta ]}_t(v)\). In Lemma 6.7, \(\partial _{\mu } X^{x, [\theta ]}_t(v)\) is shown to be bounded w.r.t. v and this carries over to the \({\mathbb {D}}^{m,p}\)-norm.
To analyse the density of solutions of the MVSDE (2.1) started from a fixed initial point in \({\mathbb {R}}^N\), it is useful to have notation for Kusuoka–Stroock processes which do not depend on a measure \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\). We denote this class by \({\mathcal {K}}_r^q({\mathbb {R}},M)\). The following lemma says that if we take a Kusuoka–Stroock process on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) and evaluate its measure argument at a Dirac mass, then this forms a Kusuoka–Stroock process on \({\mathbb {R}}^N \). Its proof is straightforward.
Lemma 2.11
If \(\Psi \in {\mathbb {K}}_r^q({\mathbb {R}}, M)\) and we define \(\Phi (t,x):=\Psi (t,x,\delta _x)\), then \(\Phi \in {\mathcal {K}}_r^q({\mathbb {R}},M)\).
3 Regularity of solutions of McKean–Vlasov SDEs
This section contains some basic results about solutions of the equations involved, their integrability and their differentiability with respect to parameters. Existence and uniqueness of solutions to (1.3) is covered in Sect. 2.2.
Proposition 3.1
(First-order derivatives) Suppose that \(V_0, \ldots , V_d\in {\mathcal {C}}^{1,1}_{b,\text {Lip}}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). Then the following hold:
-
(a)
There exists a modification of \(X^{x, [\theta ]}\) such that, for all \(t \in [0,T]\), the map \(x \mapsto X_t^{x, [\theta ]}\) is \({\mathbb {P}}\)-a.s. differentiable. We denote this derivative by \(\partial _x X^{x, [\theta ]}\) and note that it solves the following SDE
$$\begin{aligned} \partial _x X^{x, [\theta ]}_t = \text {Id}_N + \sum _{i=0}^d \int _0^t \partial V_i \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, \partial _x X^{x, [\theta ]}_s \, dB^i_s. \end{aligned}$$(3.1) -
(b)
For all \(t \in [0,T]\), the maps \(\theta \mapsto X^{\theta }_t\) and \(\theta \mapsto X_t^{x, [\theta ]}\) are Fréchet differentiable in \(L^2(\Omega )\), i.e. there exists a linear continuous map \(D X^{\theta }_t : L^2(\Omega ) \rightarrow L^2(\Omega )\) such that for all \( \gamma \in L^2(\Omega )\),
$$\begin{aligned} \Vert X_t^{\theta + \gamma }- X^{\theta }_t - D X^{\theta }_t(\gamma )\Vert _2 =o(\Vert \gamma \Vert _2) \quad \text { as } \Vert \gamma \Vert _2 \rightarrow 0 , \end{aligned}$$and similarly for \(X^{x, [\theta ]}_t\). These processes satisfy the following stochastic differential equations
$$\begin{aligned} D X^{x, [\theta ]}_t (\gamma ) =&\sum _{i=0}^d \int _0^t \left[ \partial V_i\left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] \right) \, D X^{x, [\theta ]}_s(\gamma ) \right. \nonumber \\&\left. +\, D\tilde{V}_i\left( X^{x, [\theta ]}_s, X^{\theta }_s \right) \left( D X^{\theta }_s (\gamma ) \right) \right] \, dB^i_s, \end{aligned}$$(3.2)$$\begin{aligned} D X^{\theta }_t (\gamma ) =\,&\gamma + \sum _{i=0}^d \int _0^t \left[ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, DX^{\theta }_s (\gamma ) \right. \nonumber \\&\left. +\, D\tilde{V}_i\left( X^{\theta }_s, X^{\theta }_s\right) \left( D X^{\theta }_s (\gamma )\right) \right] \, dB^i_s , \end{aligned}$$(3.3)where we denote by \(\tilde{V}_i\) the lifting of \(V_i\) to a function on \({\mathbb {R}}^N \times L^2(\Omega )\). Moreover, for each \(x \in {\mathbb {R}}^N\), \(t \in [0,T]\), the map \({\mathcal {P}}_2({\mathbb {R}}^N) \ni [\theta ] \mapsto X^{x, [\theta ]}_t \in L^p(\Omega )\) is differentiable for all \(p \ge 1\). So, \(\partial _{\mu } X^{x, [\theta ]}_t(v) \) exists and it satisfies the following equation
$$\begin{aligned} \partial _{\mu } X^{x, [\theta ]}_t(v)= & {} \sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] \right) \,\partial _{\mu } X^{x, [\theta ]}_s(v)\nonumber \\&+\, \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{v,[\theta ]}_s\right) \, \partial _x \widetilde{ X}_s^{v, [\theta ]} \right] \nonumber \\&+\, \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s\right) \, \partial _{\mu } \widetilde{ X}_s^{\tilde{\theta },[\theta ]}(v) \right] \bigg \} dB^i_s , \end{aligned}$$(3.4)where \(\widetilde{ X}^{\tilde{\theta }}_s\) is a copy of \(X^{\theta }_s\) on the probability space \((\tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}})\) driven by the Brownian motion \(\tilde{B}\) and with initial condition \(\tilde{\theta }\). Similarly, \( \partial _x \widetilde{X}_s^{v, [\theta ]} \) is a copy of \( \partial _x X_s^{v, [\theta ]} \) driven by the Brownian motion \(\tilde{B}\), and \(\partial _{\mu } \widetilde{ X}_s^{\tilde{\theta },[\theta ]}(v)= \left. \partial _{\mu } \widetilde{ X}_s^{x,[\theta ]}(v) \right| _{x = \tilde{\theta }}\). Finally, the following representation holds for all \(\gamma \in L^2(\Omega )\):
$$\begin{aligned} D X^{x, [\theta ]}_t (\gamma ) = \widetilde{{\mathbb {E}}} \left[ \partial _{\mu } X^{x, [\theta ]}_t(\tilde{\theta }) \, \tilde{\gamma }\right] . \end{aligned}$$(3.5) -
(c)
For all \(t \in [0,T]\), \(X^{x, [\theta ]}_t, X^{\theta }_t \in {\mathbb {D}}^{1, \infty }\). Moreover, \(\mathcal {\mathbf {D}}_r X^{x, [\theta ]}= \left( \mathcal {\mathbf {D}}^j_r (X^{x, [\theta ]})^i \right) _{\begin{array}{c} 1 \le i \le N \\ 1 \le j \le d \end{array}}\) satisfies, for \(0 \le r \le t\)
$$\begin{aligned} \mathcal {\mathbf {D}}_r X^{x, [\theta ]}_t = \sigma \left( X^{x, [\theta ]}_r, \left[ X^{\theta }_r \right] \right) + \sum _{i=0}^d \int _r^t \partial V_i\left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, \mathcal {\mathbf {D}}_r X^{x, [\theta ]}_s \, dB^i_{s},\nonumber \\ \end{aligned}$$(3.6)where \(\sigma (z, \mu )\) is the \(N \times d\) matrix with columns \(V_1(z,\mu ), \ldots , V_d(z,\mu )\).
Proof
-
(a)
Recalling again that \(X^{x, [\theta ]}\) satisfies a classical SDE with time-dependent coefficients, it follows from [23, Theorem 4.6.5] that there exists a modification of \(X^{x, [\theta ]}_t\) which is continuously differentiable in x, and the first derivative satisfies Eq. (3.1).
-
(b)
It is shown in [12, Lemma 4.17] that the map \( \theta \mapsto (X^{\theta }_t,X^{x, [\theta ]}_t)\) is Fréchet differentiable. It is then easy to see that the Fréchet derivative processes satisfy Eqs. (3.2) and (3.3). Now, we follow the idea in [8] to show that \(\partial _{\mu } X^{x, [\theta ]}_t(v)\) solves Eq. (3.4). We first re-write the equation for \(D X^{\theta }_t(\gamma )\) in terms of \(\partial _{\mu }V_i\) instead of the Fréchet derivative of the lifting \(\tilde{V}_i\), as follows
$$\begin{aligned} D X^{\theta }_t (\gamma ) =\,&\gamma + \sum _{i=0}^d \int _0^t\bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, DX^{\theta }_s (\gamma ) \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu } V_i \left( X^{\theta }_s, \left[ X^{\theta }_s \right] , \widetilde{X}^{\tilde{\theta }}_s\right) D\widetilde{X}_s^{\tilde{\theta }}(\widetilde{\gamma })\right] \bigg \} \, dB^i_s . \end{aligned}$$(3.7)Consider the equation satisfied by \(\partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(v)\), evaluated at \(v= \widehat{\theta }\) and multiplied by \(\widehat{\gamma }\) with both random variables defined on a probability space \((\widehat{\Omega }, \widehat{{\mathcal {F}}}, \widehat{{\mathbb {P}}})\). Taking expectation with respect to \(\widehat{{\mathbb {P}}}\), we get
$$\begin{aligned} \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] =&\sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \nonumber \\&+ \widehat{{\mathbb {E}}} \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, [X_s^{\theta }], \widetilde{ X}^{\hat{\theta }, [ \theta ]}_s \right) \partial _x \widetilde{X}_s^{\widehat{\theta }, [\theta ]} \, \widehat{\gamma } \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \widehat{{\mathbb {E}}} \left[ \partial _{\mu } \widetilde{X}_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] \right] \bigg \}dB^i_s. \end{aligned}$$(3.8)In the above equation, we are able to take \(\widehat{\gamma }\) inside the Itô integral with no problem since it is defined on a separate probability space to the Brownian motion, B. We are also able to interchange the order of the Itô integral and expectation with respect to \(\widehat{{\mathbb {P}}}\) using a stochastic Fubini theorem (see for example [33, Theorem 65]). Again, since \((\widehat{\theta }, \widehat{\gamma })\) are defined on a separate probability space,
$$\begin{aligned} \widehat{{\mathbb {E}}} \widetilde{{\mathbb {E}}}&\left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\hat{\theta }, [ \theta ]}_s \right) \partial _x \widetilde{X}_s^{\widehat{\theta }, [\theta ]} \, \widehat{\gamma } \right] = \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \partial _x \widetilde{X}_s^{\tilde{\theta }, [\theta ]} \, \tilde{\gamma } \right] , \end{aligned}$$which we can replace in Eq. (3.8) to get:
$$\begin{aligned} \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] =&\sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \left( \partial _x \widetilde{X}_s^{\tilde{\theta }, [\theta ]} \, \tilde{\gamma } \right. \right. \nonumber \\&\left. \left. +\,\widehat{{\mathbb {E}}} \left[ \partial _{\mu } \widetilde{X}_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] \right) \right] \bigg \}dB^i_s. \end{aligned}$$(3.9)Now, taking Eq. (3.1), satisfied by \(\partial _x X^{x, [\theta ]}_t\) and evaluating at \(x= \theta \), multiplying by \(\gamma \) and adding to Eq. (3.8), we see that \( \partial _x X^{\theta ,[\theta ]}_t \gamma + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \) is equal to
$$\begin{aligned}&\gamma + \sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, \left( \partial _x X^{\theta ,[\theta ]}_s \gamma + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \right) \\&\quad + \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \left( \partial _x \widetilde{X}_s^{\tilde{\theta }, [\theta ]} \, \widetilde{\gamma } + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } \widetilde{X}_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \widehat{\gamma } \right] \right) \right] \bigg \} dB^i_s. \end{aligned}$$One can therefore see that the equation satisfied by \( \partial _x X^{\theta ,[\theta ]}_t \gamma + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \) is the same as Eq. (3.7) satisfied by \(D X^{\theta }_t(\gamma )\), so by uniqueness they are equal. This representation also makes clear the linearity and continuity of \(\gamma \mapsto D X^{\theta }_t(\gamma )\).
Following essentially the same procedure shows that \(\widehat{{\mathbb {E}}} \left[ \partial _{\mu } X^{x, [\theta ]}_t(\widehat{\theta }) \, \widehat{\gamma } \right] \) satisfies the same equation as \(D X^{x, [\theta ]}_t(\gamma )\), so that (3.5) holds. Hence, by definition \(\partial _{\mu } X^{x, [\theta ]}_t(v)\) exists and satisfies Eq. (3.4). This representation also makes clear the linearity and continuity of \(\gamma \mapsto D X^{x, [\theta ]}_t(\gamma )\).
-
(c)
Let \(X^{\theta , n}\) denote the Picard approximation of the solution to the McKean–Vlasov SDE (2.1), given by
$$\begin{aligned} X^{\theta , 0}_t&=\theta , \quad t \in [0,T] \\ X^{\theta , n}_t&= \theta + \sum _{i=0}^d \int _0^t V_i \left( X^{\theta , n}_s, \left[ X^{\theta , n-1}_s \right] \right) \, dB^i_s . \end{aligned}$$For each \(n \ge 1\), \(X^{\theta , n}\) is the solution of a classical SDE with time-dependent coefficients, which are differentiable in space, with each derivative of the coefficients being Lipschitz continuous. Therefore, by [32, Theorem 2.2.1], \(X^{\theta , n}_t \in {\mathbb {D}}^{1, \infty }\) for all \(t \in [0,T]\). The form of the equation satisfied by \(\mathcal {\mathbf {D}}X^{\theta , n}_t\) is the same as (3.6). It is then easy to show that \( \Vert X^{\theta , n}_t\Vert _{{\mathbb {D}}^{1, \infty }} < C(1 + \Vert \theta \Vert _2)\) uniformly in n. Now, since for all \(p \ge 2\), \( \Vert X^{\theta , n}_t - X^{\theta }_t\Vert _p \rightarrow 0\) as \(n \rightarrow \infty \), by [32, Lemma 1.5.3], \( X^{\theta }_t \in {\mathbb {D}}^{1, \infty }\). Similarly, \(X^{x, [\theta ]}_t \in {\mathbb {D}}^{1, \infty }\) since it solves a classical SDE with time-dependent coefficients. The measure term in the coefficients of the equation for \(X^{x, [\theta ]}_t\) is deterministic, so \(\mathcal {\mathbf {D}}_r(X^{x, [\theta ]}_t)\) satisfies the usual equation for the Malliavin derivative of an SDE, which is precisely Eq. (3.6). \(\square \)
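The Picard scheme in part (c) can be sketched numerically. Below is a minimal Euler–Maruyama discretisation in dimension \(d = N = 1\), with illustrative coefficients \(V_0(x,\mu ) = \int z \, d\mu (z) - x\) and \(V_1 \equiv 1\) (not from the text), and with the frozen law \([X^{\theta , n-1}_s]\) replaced by the empirical law of M i.i.d. copies of the previous iterate.

```python
import numpy as np

# Sketch (not the paper's construction): Euler-Maruyama discretisation
# of the Picard iterates X^{theta,n}, with the frozen law
# [X^{theta,n-1}_s] replaced by the empirical law of M i.i.d. copies.
# Illustrative coefficients, d = N = 1:
#   drift V_0(x, mu) = int z dmu(z) - x,   diffusion V_1 = 1.

def picard_step(prev_paths, theta, dt, dW):
    M, L = dW.shape
    X = np.empty((M, L + 1))
    X[:, 0] = theta
    for k in range(L):
        mean_prev = prev_paths[:, k].mean()   # empirical stand-in for [X^{n-1}_s]
        X[:, k + 1] = X[:, k] + (mean_prev - X[:, k]) * dt + dW[:, k]
    return X

rng = np.random.default_rng(0)
M, L, T = 500, 50, 1.0
dt = T / L
dW = rng.normal(0.0, np.sqrt(dt), (M, L))
theta = rng.normal(0.0, 1.0, M)               # square-integrable initial condition

X = np.tile(theta[:, None], (1, L + 1))       # iterate n = 0: X^{theta,0}_t = theta
for _ in range(4):                            # a few Picard iterations
    X = picard_step(X, theta, dt, dW)
```

Since each Picard iterate solves a classical SDE with the measure flow frozen from the previous iterate, the per-iterate update above is an ordinary Euler scheme; only the empirical mean couples the particles.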
For our applications, we need to extend the above result to higher-order derivatives of \(X^{x, [\theta ]}_t\). The main result is summarised in the following theorem, which classifies \(X^{x, [\theta ]}_t\) as a Kusuoka–Stroock process.
Theorem 3.2
Suppose \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\), then \((t, x, [\theta ]) \mapsto X_t^{x, [\theta ]} \in {\mathbb {K}}^1_{0}({\mathbb {R}}^N,k)\). If, in addition, \(V_0, \ldots , V_d\) are uniformly bounded then \((t, x, [\theta ]) \mapsto X_t^{x, [\theta ]} \in {\mathbb {K}}^0_{0}({\mathbb {R}}^N,k)\).
Since each derivative process satisfies a linear equation (whose exact form is not important for our purposes), the proof is quite mechanical and is deferred to Appendix 6.2. Now we introduce some operators acting on Kusuoka–Stroock processes. These are the building blocks of the integration by parts formulae to come. For the rest of this section, we will need the following uniform ellipticity assumption.
Assumption 3.3
Let \(\sigma : {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^{N \times d}\) be given by
We make the assumption that there exists \(\epsilon >0\) such that, for all \(\xi \in {\mathbb {R}}^N\), \(z \in {\mathbb {R}}^N\) and \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\),
Now, for a multi-index \(\alpha \) on \(\{1, \ldots , N\}\), we introduce the following operators acting on elements of \({\mathbb {K}}^q_r({\mathbb {R}},n)\), defined for \(\alpha =(i)\), by
For \(\alpha = (\alpha _1, \ldots , \alpha _n)\) we inductively define
and make analogous definitions for each of the other operators. The following result states that these operators are well-defined and describes how each operator transforms a given Kusuoka–Stroock process. The proof is contained in Appendix 6.2.
Proposition 3.4
If \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\), (UE) holds and \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\), then \(I^1_{\alpha }(\Psi )\) and \(I^3_{\alpha }(\Psi )\) are well-defined for \(|\alpha |\le (k \wedge n)\). \(I^2_{\alpha }(\Psi )\), \({\mathcal {I}}^1_{\alpha }(\Psi )\) and \({\mathcal {I}}^3_{\alpha }(\Psi )\) are well-defined for \(|\alpha |\le n \wedge (k-2)\). Moreover,
If \(\Psi \in {\mathbb {K}}^0_r({\mathbb {R}},n)\) and \(V_0, \ldots , V_d\) are uniformly bounded, then
4 Integration by parts formulae for the de-coupled equation
Having introduced some operators acting on Kusuoka–Stroock processes, we now show how to use these operators to construct Malliavin weights in integration by parts formulas. We first develop integration by parts formulas for derivatives of \(x \mapsto {\mathbb {E}}\, f(X^{x, [\theta ]}_t)\) and then separately \([\theta ] \mapsto {\mathbb {E}}\, f(X^{x, [\theta ]}_t)\). In the last part of this section, we will show how to combine these results to construct integration by parts formulas for derivatives of the function \(x \mapsto {\mathbb {E}}\,f(X^{x,\delta _x}_t)\).
4.1 Integration by parts in the space variable
Proposition 4.1
Let \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\).
-
1.
If \(|\alpha | \le [n \wedge k]\), then
$$\begin{aligned} {\mathbb {E}}\left[ \partial ^{\alpha }_x\left( f\left( X^{x, [\theta ]}_t \right) \right) \, \Psi (t,x, [\theta ])\right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^1_{\alpha }(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$ -
2.
If \(|\alpha | \le [n \wedge (k-2)]\), then
$$\begin{aligned} {\mathbb {E}}\left[ (\partial ^{\alpha }f)\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^2_{\alpha }(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$ -
3.
If \(|\alpha | \le [n \wedge k]\), then
$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^3_{\alpha }(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$ -
4.
If \(|\alpha | + |\beta | \le [n \wedge (k-2)]\), then
$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ (\partial ^{\beta } f)\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right]= & {} t^{-(|\alpha |+ |\beta |)/2} \, {\mathbb {E}}[f\left( X^{x, [\theta ]}_t \right) \\&I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) (t,x, [\theta ])] . \end{aligned}$$
Proof
-
1.
First, we note that Eq. (3.1) satisfied by \(\partial _x X^{x,[\theta ]}_t\) and Eq. (3.6) satisfied by \(\mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t\) are the same except for their initial conditions. It therefore follows that for \(r \le t\),
$$\begin{aligned} \partial _x X^{x,[\theta ]}_t = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r . \end{aligned}$$This allows us to make the following computations for \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\),
$$\begin{aligned} {\mathbb {E}}\left[ \partial _x \left[ f\left( X^{x, [\theta ]}_t \right) \right] \Psi (t,x, [\theta ]) \right] =&\, {\mathbb {E}}\left[ \partial f \left( X_t^{x,[\theta ]} \right) \,\partial _x X^{x,[\theta ]}_t\, \Psi (t,x, [\theta ]) \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \partial f \left( X_t^{x,[\theta ]} \right) \, \partial _x X^{x,[\theta ]}_t \Psi (t,x, [\theta ]) \, dr \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \partial f \left( X_t^{x,[\theta ]} \right) \, \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\times \left. \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \Psi (t,x, [\theta ]) \, dr \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \mathcal {\mathbf {D}}_r f \left( X_t^{x,[\theta ]} \right) \, \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \right. \\&\times \left. \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \, \Psi (t,x, [\theta ]) \, dr \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ f \left( X_t^{x,[\theta ]} \right) \, \delta \left( r \mapsto \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \right. \right. \\&\times \left. \left. \left. \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \right) ^{\top } \Psi (t,x, [\theta ]) \right) \right] , \end{aligned}$$where we have used Malliavin integration by parts \({\mathbb {E}}\langle \mathcal {\mathbf {D}}\phi , u \rangle _{H_d} = {\mathbb {E}}\left[ \phi \, \delta (u) \right] \) in the last line. This proves the result for \(|\alpha |=1\). By Proposition 3.4, \(I^1_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2}_r({\mathbb {R}},(k \wedge n)-1) \) when \(|\alpha |=1\). We can therefore iterate this argument another \(|\alpha |-1\) times to obtain the result for all \(\alpha \) satisfying \(|\alpha | \le [n \wedge k]\).
-
2.
By the chain rule,
$$\begin{aligned} {\mathbb {E}}\left[ (\partial ^{i} f)\left( X^{x, [\theta ]}_t \right) \Psi (t,x, [\theta ])\right] =&\sum _{j=1}^N {\mathbb {E}}\left[ \partial _{x_i} \left( f\left( X^{x, [\theta ]}_t \right) \right) \left( \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \right) ^{j,i} \right. \\&\left. \times \,\Psi (t,x, [\theta ])\right] \\ =&\, t^{-1/2} \, \sum _{j=1}^N {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) I^1_{(j)} \left( \left( \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \right) ^{j,i}\right. \right. \\&\left. \left. \times \,\Psi (t,x, [\theta ])\right) \right] \\ =&\, t^{-1/2} \,{\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^2_{(i)}(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$By Proposition 3.4, \(I^2_{(i)}(\Psi ) \in {\mathbb {K}}^{q+3}_r \left( {\mathbb {R}},[n \wedge (k-2)]-1 \right) \), so since \(|\alpha | \le [n \wedge (k-2)]\), we can apply this argument another \(|\alpha |-1\) times to get the result.
-
3.
We compute, for any \(i=1, \ldots , N\)
$$\begin{aligned}&\partial ^{i}_x \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \Psi (t,x, [\theta ])\right] \\&\quad = {\mathbb {E}}\left[ \partial ^i_x \left( f\left( X^{x, [\theta ]}_t \right) \right) \Psi (t,x, [\theta ]) +\, \partial ^i_x\Psi (t,x, [\theta ]) \, f\left( X^{x, [\theta ]}_t \right) \right] \\&\quad = t^{-1/2} {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \left\{ I^1_{(i)}(\Psi )(t,x, [\theta ]) +\, \sqrt{t}\, \partial _x^i\Psi (t,x, [\theta ]) \right\} \right] , \end{aligned}$$which proves the result for \(|\alpha |=1\). Again, using Proposition 3.4, \(I^3_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2}_r({\mathbb {R}},(k \wedge n)-1) \) when \(|\alpha |=1\). We can therefore iterate this argument another \(|\alpha |-1\) times to obtain the result for all \(\alpha \) satisfying \(|\alpha | \le [n \wedge k]\).
-
4.
This follows from parts 2 and 3. \(\square \)
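The Malliavin integration by parts \({\mathbb {E}}\langle \mathcal {\mathbf {D}}\phi , u \rangle _{H_d} = {\mathbb {E}}\left[ \phi \, \delta (u) \right] \) used in part 1 reduces, in the simplest case \(X^x_t = x + B_t\) (so \(\sigma = 1\) and the weight is \(\delta (u) = B_t/t\)), to the classical Gaussian identity \(\partial _x {\mathbb {E}}[f(X^x_t)] = t^{-1} {\mathbb {E}}[f(X^x_t) B_t]\). A Monte Carlo sanity check of this instance, with illustrative choices of f, x, t and sample size:

```python
import numpy as np

# Monte Carlo sanity check of the simplest instance of the Malliavin
# integration by parts used in part 1: for X^x_t = x + B_t (N = d = 1,
# sigma = 1), the Skorokhod integral of the constant weight is
# delta(u) = B_t / t, giving d/dx E[f(X^x_t)] = E[f(X^x_t) B_t / t].
# f = tanh, x, t and the sample size are illustrative choices.

rng = np.random.default_rng(1)
x, t, n = 0.3, 1.0, 400_000
B = rng.normal(0.0, np.sqrt(t), n)         # samples of B_t

lhs = np.mean(1.0 - np.tanh(x + B) ** 2)   # E[f'(X^x_t)] computed directly
rhs = np.mean(np.tanh(x + B) * B / t)      # weighted expectation, no f' needed
```

As in the proposition, the right-hand side involves only f itself, which is what allows the formulae to be applied to irregular terminal conditions.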
4.2 Integration by parts in the measure variable
We now consider derivatives of the function \([\theta ] \mapsto {\mathbb {E}}[f (X^{x, [\theta ]}_t)]\).
Proposition 4.2
Let \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\).
-
1.
If \(|\beta | \le [n \wedge (k-2)]\), then
$$\begin{aligned} {\mathbb {E}}\left[ \partial ^{\beta }_{\mu }\left( f\left( X^{x, [\theta ]}_t \right) \right) ({\varvec{v}})\, \Psi (t,x, [\theta ])\right] = t^{-|\beta |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, {\mathcal {I}}^1_{\beta }(\Psi )(t,x, [\theta ], {\varvec{v}})\right] . \end{aligned}$$ -
2.
If \(|\beta | \le [n \wedge (k-2)]\), then
$$\begin{aligned} \partial ^{\beta }_{\mu } \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] ({\varvec{v}}) = t^{-|\beta |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, {\mathcal {I}}^3_{\beta }(\Psi )(t,x, [\theta ], {\varvec{v}})\right] . \end{aligned}$$ -
3.
If \(|\alpha | + |\beta | \le [n \wedge (k-2)]\), then
$$\begin{aligned} \partial ^{\beta }_{\mu } \, {\mathbb {E}}\left[ (\partial ^{\alpha }f)\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] ({\varvec{v}})= & {} t^{-(|\alpha |+|\beta |)/2} {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \right. \\&\times \left. {\mathcal {I}}^3_{\beta }\left( I^2_{\alpha }(\Psi )\right) (t,x, [\theta ], {\varvec{v}})\right] . \end{aligned}$$
Proof
-
1.
We use again that for \(r \le t\),
$$\begin{aligned} \partial _x X^{x,[\theta ]}_t = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r . \end{aligned}$$This allows us to make the following computations for \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\),
$$\begin{aligned}&{\mathbb {E}}\left[ \partial _{\mu }\left( f\left( X^{x, [\theta ]}_t \right) \right) (v)\, \Psi (t,x, [\theta ]) \right] \\&\quad =\, {\mathbb {E}}\left[ \partial f \left( X_t^{x, [\theta ]}\right) \, \partial _{\mu } X^{x, [\theta ]}_t(v) \, \Psi (t,x, [\theta ]) \right] \\&\quad =\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \partial f \left( X_t^{x, [\theta ]}\right) \, \partial _x X^{x, [\theta ]}_t \, \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \Psi (t,x, [\theta ]) \, dr \right] \\&\quad =\, \frac{1}{t} {\mathbb {E}}\int _0^t \bigg \{ \partial f \left( X_t^{x, [\theta ]}\right) \, \mathcal {\mathbf {D}}_r X^{x, [\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\\&\qquad \times \left( X^{x, [\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x, [\theta ]}_r \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \Psi (t,x, [\theta ]) \bigg \} dr \\&\quad =\, \frac{1}{t} {\mathbb {E}}\int _0^t \bigg \{ \mathcal {\mathbf {D}}_r f \left( X_t^{x, [\theta ]}\right) \, \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \\&\qquad \times \left( X^{x, [\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x, [\theta ]}_r \, \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \Psi (t,x, [\theta ]) \bigg \} dr \\&\quad =\, \frac{1}{t} {\mathbb {E}}\bigg [ f \left( X_t^{x, [\theta ]}\right) \, \delta \bigg ( r \mapsto \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\qquad \left. \times \left( X^{x, [\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x, [\theta ]}_r \, \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \right) ^{\top } \ \Psi (t,x, [\theta ]) \bigg ) \bigg ]. \end{aligned}$$where we have used Malliavin integration by parts \({\mathbb {E}}\langle \mathcal {\mathbf {D}}\phi , u \rangle _{H_d} = {\mathbb {E}}\left[ \phi \, \delta (u) \right] \) in the last line. This proves the claim for \(|\beta |=1\). For general \(\beta \), it follows by iterating this integration by parts \(|\beta |\) times.
-
2.
$$\begin{aligned} \partial _{\mu } \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] (v)= & {} {\mathbb {E}}\left[ \partial _{\mu } \left( f\left( X^{x, [\theta ]}_t \right) \right) (v) \, \Psi (t,x, [\theta ]) \right. \\&\left. +\, f\left( X^{x, [\theta ]}_t \right) \, \partial _{\mu }\Psi (t,x, [\theta ],v)\right] . \end{aligned}$$
This is enough to prove the proposition when \(|\beta |=1\). For \(|\beta |>1\), simply repeat this argument.
-
3.
This follows from parts 1 and 2. \(\square \)
4.3 Integration by parts for McKean–Vlasov SDE with fixed initial condition
We now consider developing integration by parts formulae for derivatives of the function
We introduce the following operator acting on elements of \({\mathcal {K}}_r^q({\mathbb {R}}, M)\), the set of Kusuoka–Stroock processes on \({\mathbb {R}}\). For \(\alpha =(i)\)
and inductively, for \(\alpha =(\alpha _1, \ldots , \alpha _n)\),
Lemma 4.3
If \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) and \(\Phi \in {\mathcal {K}}^q_r({\mathbb {R}},n)\), then \(J_{\alpha }(\Phi )\) is well-defined for \(|\alpha |\le [n \wedge (k-2)]\), and
Moreover, if \(\Phi \in {\mathcal {K}}^0_r({\mathbb {R}},k)\) and \(V_0, \ldots , V_d\) are uniformly bounded, then
Proof
This is a direct result of Proposition 3.4 and Lemma 2.11. \(\square \)
Theorem 4.4
Let \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\). For all multi-indices \(\alpha \) on \(\{1, \ldots , N\}\) with \(|\alpha | \le k-2\)
In particular, we get the following bound
Proof
By the above discussion,
Now, we apply the integration by parts formulae developed earlier in Proposition 4.1 part 3 and Theorem 4.2 part 3.
and we can iterate this argument \(|\alpha |\) times. \(\square \)
Corollary 4.5
Let \( f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\alpha \) and \(\beta \) multi-indices on \(\{1, \ldots , N\}\) with \(|\alpha | + |\beta | \le k-2\). Then,
and \(I^2_{\beta }(J_{\alpha }(1))\in {\mathcal {K}}^{4|\alpha |+3|\beta |}_0({\mathbb {R}},k-2-|\alpha |-|\beta |)\).
Proof
Theorem 4.4 gives
with \(J_{\alpha }(1) \in {\mathcal {K}}^{4|\alpha |}_0({\mathbb {R}}, k-2-|\alpha |)\). Then, using Proposition 4.1 part 2, we get
\(\square \)
5 Connection with PDE
We return our attention to the PDE (1.2). The results of the last section suggest that for initial conditions \(g(z,\mu )=g(z)\), which do not depend on the measure, we can still expect there to be a classical solution, even if g is not differentiable. Indeed, we spell out the conditions under which this is true in Theorem 5.8. But first, let us consider whether the same can be true for initial conditions which do depend on the measure.
Example 5.1
Let \(g(z,\mu ) = g(\mu ) := \textstyle \left| \int y \, \mu (dy) \right| \) and \(V_0 \equiv 0\), \(V_1\equiv 1\) and \(N=d=1\), then
and
We now show that \([ \theta ] \mapsto g([X_t^{\theta }])\) is not differentiable. If we choose \(\theta \in L^2(\Omega )\) with \({\mathbb {E}}\theta =0\), then for any \(t>0\), \(h \ne 0\) and any \(\gamma \in L^2(\Omega )\),
and this limit does not exist as \(h \rightarrow 0\). Hence, the Gâteaux derivative of the map \(L^2(\Omega ) \ni \theta \mapsto g( [X_t^{\theta }])\) does not exist.
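To make the failure of differentiability explicit, here is a sketch of the computation under the example's assumptions (\(N=d=1\), \(V_0 \equiv 0\), \(V_1 \equiv 1\), so \(X_t^{\theta } = \theta + B_t\)):

```latex
g\big(\big[X_t^{\theta + h\gamma}\big]\big)
  = \big|\, \mathbb{E}[\theta + h\gamma + B_t] \,\big|
  = |h|\,\big|\mathbb{E}\gamma\big|
  \qquad (\text{since } \mathbb{E}\theta = 0,\ \mathbb{E}B_t = 0),
\\
\frac{g\big(\big[X_t^{\theta + h\gamma}\big]\big) - g\big(\big[X_t^{\theta}\big]\big)}{h}
  = \frac{|h|}{h}\,\big|\mathbb{E}\gamma\big|
  \longrightarrow \pm\,\big|\mathbb{E}\gamma\big|
  \quad \text{as } h \to 0^{\pm},
```

so the two one-sided limits disagree whenever \(\mathbb{E}\gamma \ne 0\).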
The above example shows that for a function \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) which is Lipschitz continuous, we cannot, in general, expect \([\theta ] \mapsto {\mathbb {E}}\left( \, g \left( X^{x, [\theta ]}_t,\left[ X^{x, [\theta ]}_t \right] \right) \right) \) to be differentiable (for a fixed \(t>0\)) even when the coefficients in the equation for \(X^{x, [\theta ]}_t\) are smooth and uniformly elliptic. There are, however, interesting examples of initial conditions for which we can develop integration by parts formulas. Before we introduce this class of initial conditions, we consider what form derivatives of \( U(t,x,[\theta ]):= {\mathbb {E}}\left( g \left( X^{x, [\theta ]}_t, [X^{\theta }_t] \right) \right) \) take when g is smooth. The following result is Lemma 5.1 from [8].
Lemma 5.2
We assume that the function \(g :{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) admits continuous derivatives \(\partial _x g\) and \(\partial _{\mu }g\) satisfying for some \(q>0\) and \(0 \le p <2\)
and we assume \(V_0, \ldots , V_d\in {\mathcal {C}}^{1,1}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). Then, \(\partial _{\mu }U\) exists and takes the following form:
Now we introduce a class of initial conditions \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) for which we will be able to develop integration by parts formulas.
Definition 5.3
((IC) \(_x\) and (IC) \(_v\)) We say that \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) is in the class (IC) if the following conditions hold:
1.
g is continuous with polynomial growth: there exists \(q>0\) such that for all \((x,[\theta ]) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\): \(|g(x,[\theta ])| \le C (1+ |x|+ \Vert \theta \Vert _2)^q\).
2.
There exists a sequence of functions \((g_l)_{l \ge 1}\), \(g_l: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) with polynomial growth such that \(g_l \rightarrow g\) uniformly on compacts and \(\partial _x g_l\) exists and also has polynomial growth for each \(l \ge 1\).
3.
For each \(l \ge 1\) there exists a function \(G_l: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) which is differentiable in either \(x\) or \(v\), with \(\partial _{\mu } g_l(x,\mu ,v) = \partial _x G_l(x,\mu ,v)\) or \(\partial _{\mu } g_l(x,\mu ,v) = \partial _v G_l(x,\mu ,v)\), respectively. Moreover, each \(G_l\) and its derivatives satisfy the following growth condition: there exist \(q>0\) and \(0\le r <1\) such that for all \((x,[\theta ],v) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\):
$$\begin{aligned} |h(x,[\theta ],v)| \le C \left( 1 + |x|^q + \Vert \theta \Vert _2^q + |v|^{r} \right) . \end{aligned}$$where \(h\) stands for \(G_l\), \(\partial _x G_l\) or \(\partial _v G_l\). In addition, we assume that for all \((x,\mu ,v)\) the pointwise limit \(\lim _{l \rightarrow \infty } G_l(x,\mu ,v)\) exists and the function G defined by \(G(x,\mu ,v):= \lim _{l \rightarrow \infty } G_l(x,\mu ,v)\) is continuous and satisfies the same growth condition.
If \(\partial _{\mu } g_l= \partial _x G_l\) we say g is in the class (IC) \(_x\). If \(\partial _{\mu } g_l = \partial _v G_l\), we say g is in the class (IC) \(_v\).
We give some examples of functions g in the class (IC).
Example 5.4
1.
Functions with no dependence on the measure:
Suppose that \(g(x,\mu ) = \varphi (x)\) where \(\varphi \in {\mathcal {C}}_p({\mathbb {R}}^N;{\mathbb {R}})\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= 0\). So, g belongs to the class (IC) \(_x\) and G in this case would be \(G \equiv 0\).
2.
Centred random variables:
Suppose that \(g(x,\mu ) = \varphi \left( x- \textstyle \int y \mu (dy)\right) \) where \(\varphi \in {\mathcal {C}}_p({\mathbb {R}}^N;{\mathbb {R}})\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= - \partial \varphi _l(x- \textstyle \int y \mu (dy) )\). So, g belongs to the class (IC) \(_x\) and G in this case would be \(G(x,\mu ,v) = - \varphi (x-\textstyle \int y \mu (dy))\).
3.
First order interaction:
Suppose \(g(x,\mu ) := \textstyle \int \varphi (x,y) \mu (dy)\) where \(\varphi : {\mathbb {R}}^N \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is continuous with \(|\varphi (x,y)| \le C(1+ |x|^q + |y|^r)\) for some \(q>0\) and \(0 \le r <1\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= \partial _v \varphi _l(x,v)\). So, g belongs to the class (IC) \(_v\) and G in this case would be \(G(x,\mu ,v) = \varphi (x,v)\). Note, this example includes the case of convolutions where \(\varphi (x,y)= \varphi (x-y)\).
4.
Second order interaction:
Suppose \(g(x,\mu ) := \textstyle \int \varphi (x,y,z) \mu (dy) \mu (dz)\) where \(\varphi : {\mathbb {R}}^{3N} \rightarrow {\mathbb {R}}\) is continuous with \(|\varphi (x,y,z)| \le C(1+ |x|^q + |y|^r + |z|^r)\) for some \(q>0\) and \(0 \le r <1\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= \textstyle \int \left[ \partial _v \varphi _l(x,v,y) + \partial _v \varphi _l (x,y,v) \right] \mu (dy)\). So, g belongs to the class (IC) \(_v\) and G in this case would be
$$\begin{aligned} G(x,\mu ,v) = \int \left[ \varphi (x,v,y) + \varphi (x,y,v) \right] \mu (dy). \end{aligned}$$
5.
Polynomials on the Wasserstein space:
Suppose \(g(x,\mu ) = \textstyle \prod _{i=1}^n \int \varphi _i(x,y) \mu (dy) \), where \(n \ge 1\) and each \(\varphi _i: {\mathbb {R}}^N \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is continuous with \(|\varphi _i(x,y)| \le C(1+ |x|^q )\) for some \(q>0\). Then, let \((\varphi _{i,l})_{l \ge 1}\) be a sequence of mollifications of \(\varphi _i\) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then,
$$\begin{aligned} \partial _{\mu }g_l(x,\mu ,v)= \sum _{j=1}^n \prod _{i=1,i \ne j}^n \left( \int \varphi _{i,l}(x,y) \mu (dy) \right) \partial _v \varphi _{j,l}(x,v). \end{aligned}$$Therefore g belongs to the class (IC) \(_v\) and G in this case would be
$$\begin{aligned} G(x,\mu ,v)= \sum _{j=1}^n \prod _{i=1,i \ne j}^n \left( \int \varphi _i(x,y) \mu (dy) \right) \varphi _j(x,v). \end{aligned}$$
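For items 2 and 3 above, the stated forms of \(\partial _{\mu }g_l\) follow from differentiating the lifted functions \(\theta \mapsto g_l(x,[\theta ])\); here is a sketch for \(N=1\), with \(\gamma \in L^2(\Omega )\) a direction for the Gâteaux derivative of the lift:

```latex
% Item 2: lift \theta \mapsto \varphi_l(x - \mathbb{E}\theta).
\frac{d}{dh}\Big|_{h=0} \varphi_l\big(x - \mathbb{E}[\theta + h\gamma]\big)
  = \mathbb{E}\big[ -\,\partial \varphi_l(x - \mathbb{E}\theta)\,\gamma \big]
  \;\Longrightarrow\;
  \partial_\mu g_l(x,\mu,v)
  = -\,\partial\varphi_l\Big(x - \textstyle\int y\,\mu(dy)\Big)
  = \partial_x G_l(x,\mu,v).
\\[1ex]
% Item 3: lift \theta \mapsto \mathbb{E}[\varphi_l(x,\theta)].
\frac{d}{dh}\Big|_{h=0} \mathbb{E}\big[\varphi_l(x, \theta + h\gamma)\big]
  = \mathbb{E}\big[\partial_y \varphi_l(x,\theta)\,\gamma\big]
  \;\Longrightarrow\;
  \partial_\mu g_l(x,\mu,v)
  = \partial_v \varphi_l(x,v)
  = \partial_v G_l(x,\mu,v).
```

In the first computation the derivative is read off in the form \(\partial _x G_l\) (independent of \(v\)), in the second in the form \(\partial _v G_l\); this is exactly the dichotomy in Definition 5.3.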
Now, we introduce the hypotheses under which we will be able to prove existence and uniqueness of a solution to the PDE (1.2).
(H1): (UE) holds, the coefficients \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\), and \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) is in the class (IC) \(_x\).
(H2): (UE) holds, the coefficients \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\) are uniformly bounded, and \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) is in the class (IC) \(_v\).
Lemma 5.5
Under either (H1) or (H2), for the function \(U(t,x,[\theta ]):= {\mathbb {E}} \left[ g\left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t\right] \right) \right] \), the derivative functions
exist and are continuous. Moreover, for all compacts \(K \subset {\mathcal {P}}_2({\mathbb {R}}^N)\)
Proof
Under both (H1) and (H2), g is in the class (IC), so there is a sequence of functions \((g_l)_{l \ge 1}\) approximating g. Let \(U_l(t,x,[\theta ]):= {\mathbb {E}}\left[ g_l(X^{x, [\theta ]}_t,[X^{\theta }_t])\right] \). From Proposition 4.1 we know that for \(i,j \in \{1, \ldots , N\}\)
By the growth assumption on \(g_l\), Hölder’s inequality and the moment estimates already obtained for the processes \(X^{x, [\theta ]}_t,X^{\theta }_t\) and the Kusuoka–Stroock processes in (2.2), (2.4) and Proposition 6.10, we can show that the expectations above are bounded independently of \(l \ge 1\). By dominated convergence, we can take the limit in each equation. Now, each of the Kusuoka–Stroock processes appearing in the above representations for the derivatives is, by definition, jointly continuous in \((t,x,[\theta ])\) in \(L^p(\Omega )\), \(p \ge 1\). So is \((t,x,[\theta ]) \mapsto g(X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] )\) by Theorem 3.2 (which guarantees that \((t,x,[\theta ]) \mapsto X^{x, [\theta ]}_t\) is a Kusuoka–Stroock process) and the continuity of g.
To lighten notation, we restrict to the case \(N=1\) throughout the rest of this proof. First, we assume that (H1) holds, so g is in the class (IC) \(_x\). Note that \(g_l\) satisfies the hypotheses of Lemma 5.2, which gives
Now, we recall the following identity connecting \(\mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t\) and \( \partial _x X^{x,[\theta ]}_r\):
So,
and, applying Proposition 4.1 part 2, we get
Similarly,
and applying Proposition 4.1 part 2 again, we get
So, in this case, (5.2) can be rewritten as
To show that \(\textstyle \sup _{[\theta ] \in K} {\mathbb {E}}\left| \partial _{\mu }U(t,x,[\theta ],\theta )\right| ^2 < \infty \), we note that all processes on the right-hand side of (5.3) have moments of all orders bounded polynomially in \(\Vert \theta \Vert _2\), except \(\widetilde{X}^{\tilde{\theta }}_t\) in the final term. For the final term, by the growth conditions on \(G_l\),
Clearly this is bounded in \([\theta ]\) over compacts in \({\mathcal {P}}_2({\mathbb {R}}^N)\).
Now, we consider the derivative \(\partial _v \partial _{\mu } U_l\). We note that in the definition of \( {\mathcal {I}}^1(t,x,[\theta ],v)\), the only term depending on v is \(\partial _{\mu } X^{x, [\theta ]}_t(v)\). Since \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) by assumption, \( \partial _v {\mathcal {I}}^1(t,x,[\theta ],v)\) exists and we obtain:
We again use that
Of course, this identity also holds for ‘tilde’ processes defined on \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \) and we denote by \(\widetilde{\mathcal {\mathbf {D}}}\) the Malliavin derivative on this space. So, using the above identity and the Malliavin chain rule, we obtain
and, applying the integration by parts formula in Proposition 4.1 on the space \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \), we get
So, (5.4) becomes
We can check each expectation above is finite by using the growth conditions on the functions \(g_l\), \(G_l\) and their derivatives along with Hölder’s inequality and the moment estimates on the processes involved, similar to before. In particular, note that we can obtain estimates on (5.3) and (5.5) independently of l. This allows us to use dominated convergence to pass to the limit in these equations.
Now, suppose that (H2) holds instead of (H1). Under (H2), g is in the class (IC) \(_v\). By Lemma 5.2, we have an expression for \(\partial _{\mu }U_l\) and, using the special form of \(\partial _{\mu }g_l\) for initial conditions in the class (IC) \(_v\), we get
We again use that
Of course, this identity also holds for ‘tilde’ processes defined on \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \) and we denote by \(\widetilde{\mathcal {\mathbf {D}}}\) the Malliavin derivative on this space. So, using the above identity and the Malliavin chain rule, we obtain
and, applying the integration by parts formula in Proposition 4.1 on the space \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \), we get
Similarly,
and applying the integration by parts formula in Proposition 4.2 on the space \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \), we get
Here we explain the reason for insisting that the coefficients \(V_0, \ldots , V_d\) are bounded: the Kusuoka–Stroock process \(\widetilde{{\mathcal {I}}}^1(1)(t,x,[\theta ],v)\) is bounded in \(L^p(\tilde{\Omega })\) uniformly in \((x,[\theta ],v)\). This allows us to evaluate at \(x=\tilde{\theta }\) and take expectation with respect to \(\widetilde{{\mathbb {E}}}\). If the coefficients are not bounded, the bound we have on \(\Vert \widetilde{{\mathcal {I}}}^1(1)(t,x,[\theta ],v)\Vert _p\) grows like \(|x|^4\) according to Proposition 3.4 and we cannot guarantee that \(\textstyle {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \widetilde{{\mathcal {I}}}^1(1)(t,\tilde{\theta },[\theta ],v) \right] \) is finite.
Putting the above integration by parts formulas together and using Proposition 4.2 on the space \(\left( \Omega , {\mathcal {F}}, {\mathbb {P}}\right) \) for the first term on the right hand side of (5.6), we see that it can be re-written as
and we note that the right-hand side does not depend on derivatives of the functions g and G. Also,
so, applying Proposition 4.1, we get
\(\square \)
Remark 5.6
Immediately from the proof of Lemma 5.5, one can deduce the following gradient bounds for the function \(U(t,x,[\theta ]):= {\mathbb {E}}\left[ g\left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t\right] \right) \right] \) under the same conditions (H1) or (H2): there exist positive constants C and q such that for any \((t,x,[\theta ]) \in (0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) and \(v \in {\mathbb {R}}^N \)
We now define what we mean by a classical solution to the PDE (1.2).
Definition 5.7
Suppose that \(U: [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) satisfies (1.2) and
exist and are continuous. Moreover, suppose that for all \((x,\theta ) \in {\mathbb {R}}^N \times L^2(\Omega )\)
Then we say that U is a classical solution to the PDE (1.2).
Theorem 5.8
Suppose that either (H1) or (H2) holds. Then
is a classical solution of the PDE (1.2). Moreover, U is unique among all of the classical solutions satisfying the polynomial growth condition \(\left| U(t,x,[\theta ])\right| \le C (1+|x|+\Vert \theta \Vert _2)^q\) for some \(q>0\) and all \((t,x,[\theta ]) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\).
Proof
Existence. To prove continuity at the boundary, we use continuity of g and the fact that
which follows from (2.5).
Now, we note that by the flow property we have, for \(h >0\),
so that,
Hence,
The idea is to expand the first term using the chain rule introduced in [12] and the second term using Itô’s formula. Then dividing by h and sending h to 0, together with the continuity of the terms appearing in the expansion, will prove that U indeed solves the PDE (1.2).
Lemma 5.5 guarantees that we can apply the chain rule proved in [12]. We apply it to the function \(U(t,x,\cdot )\) to get
Itô’s formula applied to \(U(t,\cdot ,[X_h^{\theta }])\) gives
We want the final term to be square integrable, so that it is a true martingale with zero expectation. We have that for some \(q>0\),
so that for all \(p \ge 1\),
and by the linear growth of \(V_j^i\), we have
Hence, the final term is indeed square integrable, and has zero expectation.
Putting the expansions back into (5.11), we get
By the earlier results on continuity of U and its derivatives and the a priori continuity of the coefficients \(V_0, \ldots , V_d\) we see that the integrand on the right-hand side is a continuous function of h. Dividing by h and sending it to zero, we see that U solves the PDE (1.2).
Uniqueness. Fix any \(t \in (0,T]\) and any classical solution W with polynomial growth. Set \(\delta >0\), so
By the polynomial growth of W, this is square integrable. Now we expand the process \((W(t-s,X^{x, [\theta ]}_s,[X^{\theta }_s]))_{s \in [\delta ,t]}\) and use that W is a solution of the PDE (1.2), so that the drift is zero, to get
As we have already noted, this is square-integrable, so the stochastic integral is a true martingale with zero expectation. So taking expectation in the above expansion, we get:
Now, sending \(\delta \searrow 0\) and using continuity of W at the boundary (condition (5.10) in the definition of a classical solution), the right-hand side vanishes, and we get that
which completes the proof. \(\square \)
6 Application to the density function
In this section, we apply the integration by parts formulae to the study of the density function p(t, x, z) of the McKean–Vlasov SDE started from a fixed point, \(X_t^{x,\delta _x}\), at a fixed time \(t \in [0,T]\). Throughout this section, we assume that (UE) holds and \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). We can consider \(X^{x, [\theta ]}_t\) as the solution of a classical SDE with time-dependent coefficients. Hence, under (UE), the smoothness of its density (call it \(q(t,x,[\theta ],\cdot )\)) has been studied in the classical work of Friedman [16]. Since \(p(t,x,z)=q(t,x,\delta _x,z)\), Friedman’s results also establish the smoothness of p(t, x, z) in the forward variable, z. However, they do not cover the smoothness of the function p(t, x, z) in the backward variable, x. The density p(t, x, z) has also been studied by Antonelli and Kohatsu-Higa in [1] under a Hörmander condition on the coefficients. In this case, they establish smoothness of the density in the forward variable, z, but do not establish estimates on the derivatives of this function. The theorem which follows establishes the smoothness of p(t, x, z) in the variables (x, z), and we also obtain estimates on its derivatives.
Theorem 6.1
Let \(\alpha , \beta \) be multi-indices on \(\{1, \ldots , N\}\) and let \(k \ge |\alpha |+|\beta |+N+2\). Then, for all \(t \in (0,T]\) and \(\theta \in L^2(\Omega )\), \(X_t^{x,\delta _x}\) has a density \(p(t,x, \cdot )\) such that \((x,z) \mapsto \partial _x^{\alpha } \, \partial _z^{\beta }p(t,x, z) \) exists and is continuous. Moreover, there exists a constant C which depends on T, N and bounds on the coefficients, such that for all \(t \in (0,T]\)
where \( \mu = 4|\alpha |+ 3 |\beta | + 3 N\) and \( \nu = \textstyle \frac{1}{2} (N + | \alpha | + | \beta | )\). If \(V_0, \ldots , V_d\) are bounded then the following estimate holds
Proof
Let \(\eta = (1,2, \ldots , N)\) and introduce the multi-dimensional indicator function \( \mathbf {1}_{ \{ z_0>z \} } := \textstyle \prod _{i=1}^N \mathbf {1}_{ \{ z_0^i>z^i \} } . \) For any \(g \in {\mathcal {C}}^{\infty }_0({\mathbb {R}}^N;{\mathbb {R}})\) the function f defined by
is in \( {\mathcal {C}}^{\infty }_p({\mathbb {R}}^N;{\mathbb {R}})\) and satisfies \(\partial ^{\eta } f = g\). Now, we first focus on \(p(t,x,\cdot )\), the density of \(X_t^{x,\delta _x}\).
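Spelled out, the definition of f being used is the following (reconstructed from the indicator notation just introduced):

```latex
f(z_0) := \int_{\mathbb{R}^N} \mathbf{1}_{\{z_0 > z\}}\, g(z)\, dz
        = \int_{-\infty}^{z_0^1} \cdots \int_{-\infty}^{z_0^N} g(z^1, \ldots, z^N)\, dz^N \cdots dz^1 ,
\qquad
\partial^{\eta} f = \partial_{z_0^1} \cdots \partial_{z_0^N} f = g ,
```

the last identity coming from differentiating the iterated integral once in each upper limit.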
where we have used at each step respectively: \(\partial ^{\eta } f = g\); Corollary 4.5; Eq. (6.3), and Fubini’s theorem. It then follows that, for any \(R>0\) and \(t \in (0,T]\), there exists \(C=C(R,t)>0\) such that
Then, it is a result from Taniguchi [35, Lemma 3.1] that \(X^{x,\delta _x}_t\) has a density function \(p(t,x,\cdot )\), and that \(\partial _x^{\alpha } \, \partial _z^{\beta }p(t,x, z)\) exists. Once we know that a smooth density exists, it follows from (6.4) that we can identify \( \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z)\) as
Now, the following estimates come from each term’s membership of the Kusuoka–Stroock class, as guaranteed by Proposition 3.4 and Corollary 4.5:
This proves the estimate (6.1). In addition, if \(V_0, \ldots , V_d\) are bounded, we can estimate
Now, we have that \(\textstyle \int _0^t V_0^i(X_s^{x,\delta _x},[X_s^{x,\delta _x}]) ds \le \Vert V_0\Vert _{\infty } t\) and the term
is a martingale with quadratic variation \( \langle M^i \rangle _t \le \textstyle \sum _{j=1}^d \Vert V_j\Vert ^2 t\). We can therefore apply the exponential martingale inequality to obtain
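The exponential martingale inequality used here gives, in this setting (the standard statement for continuous martingales with bounded quadratic variation, written out for completeness):

```latex
\mathbb{P}\Big( \sup_{s \le t} \big|M^i_s\big| \ge R \Big)
  \le 2 \exp\Big( - \frac{R^2}{2\, t \sum_{j=1}^d \Vert V_j \Vert^2} \Big),
  \qquad R > 0,
```

which supplies the Gaussian-type decay used in the final estimate.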
Then, we use \((a+b)^2 \ge \textstyle \frac{a^2}{2} - b^2\), which is a rearrangement of Young’s inequality \(a^2 = ((a+b)-b)^2 \le 2(a+b)^2 + 2b^2\), to get
So,
This establishes (6.2). \(\square \)
Notes
Equation (1.3) is therefore not an MVSDE.
When applying Lemma 6.7 to control the derivatives of \(X^{x, [\theta ]}_t\), \(a_0\) will be either 1 in the case of the \(\partial _{x_i}X^{x, [\theta ]}_t\) or 0 in all other cases.
References
Antonelli, F., Kohatsu-Higa, A.: Rate of convergence of a particle method to the solution of the McKean–Vlasov equation. Ann. Appl. Probab. 12(2), 423–476 (2002)
Banos, D.: The Bismut-Elworthy-Li Formula for Mean-Field Stochastic Differential Equations. arXiv:1510.06961
Bensoussan, A., Frehse, J., Yam, S.C.P.: On the Interpretation of the Master Equation. arXiv:1503.07754
Bensoussan, A., Frehse, J., Yam, S.C.P.: The master equation in mean field theory. J. Math. Pures Appl. 103(6), 1441–1474 (2015)
Bergomi, L.: Smile Dynamics III. Available at SSRN 1493308
Bossy, M.: Some stochastic particle methods for nonlinear parabolic PDEs. In: ESAIM: proceedings, vol. 15, EDP Sciences, pp. 18–57 (2005)
Bossy, M., Talay, D.: A stochastic particle method for the McKean–Vlasov and the Burgers equation. Math. Comput. 66(217), 157–192 (1997)
Buckdahn, R., Li, J., Peng, S., Rainer, C.: Mean-Field Stochastic Differential Equations and Associated PDEs. ArXiv e-prints (2014). arXiv:1407.1215
Cardaliaguet, P.: Notes on Mean Field Games. From P.-L. Lions lectures at College de France (2010). http://www.science.unitn.it/~bagagiol/NotesByCardaliaguet.pdf
Carmona, R., Delarue, F.: Probabilistic analysis of mean-field games. SIAM J. Control Optim. 51(4), 2705–2734 (2013)
Carmona, R., Delarue, F.: Forward–backward stochastic differential equations and controlled McKean–Vlasov dynamics. Ann. Probab. 43(5), 2647–2700 (2015)
Chassagneux, J.-F., Crisan, D., Delarue, F.: A Probabilistic Approach to Classical Solutions of the Master Equation for Large Population Equilibria. ArXiv e-prints (2014). arXiv:1411.3009
Chaudru de Raynal, P.-E.: Strong Well-Posedness of McKean–Vlasov Stochastic Differential Equation with Hölder Drift. arXiv:1510.06961
Crisan, D., Manolarakis, K., Nee, C.: Cubature methods and applications. In: Paris-Princeton Lectures on Mathematical Finance 2013, Lecture Notes in Mathematics, vol. 2081. Springer (2013)
Crisan, D., McMurray, E.: Cubature on Wiener Space for McKean–Vlasov SDEs with Smooth Scalar Interaction. arXiv:1703.04177
Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice-Hall Inc, Englewood Cliffs (1964)
Guyon, J., Henry-Labordere, P.: The Smile Calibration Problem Solved. Available at SSRN 1885032
Hörmander, L.: Hypoelliptic second-order differential equations. Acta Math. 119, 147–171 (1967)
Huang, M., Caines, P.E., Malhamé, R.P.: Large-population cost-coupled LQG problems with nonuniform agents: individual-mass behavior and decentralized \(\epsilon \)-Nash equilibria. IEEE Trans. Automat. Control 52(9), 1560–1571 (2007)
Huang, M., Malhamé, R.P., Caines, P.E.: Large population stochastic dynamic games: closed-loop McKean–Vlasov systems and the Nash certainty equivalence principle. Commun. Inf. Syst. 6(3), 221–251 (2006)
Jourdain, B., Méléard, S., Woyczynski, W.A.: Nonlinear SDEs driven by Lévy processes and related PDEs. ALEA, Latin Am. J. Probab. 4, 1–29 (2008)
Kolokoltsov, V., Troeva, M.: On the Mean Field Games with Common Noise and the Mckean–Vlasov SPDEs. arXiv:1506.04594
Kunita, H.: Stochastic differential equations and stochastic flows of diffeomorphisms. In: École d’été de Probabilités de Saint-Flour, XII—1982, vol. 1097 of Lecture Notes in Math. Springer, Berlin, pp. 143–303 (1984)
Kusuoka, S.: Malliavin calculus revisited. J. Math. Sci. Univ. Tokyo 10(2), 261–277 (2003)
Kusuoka, S., Stroock, D.: Applications of the Malliavin calculus. III. J. Fac. Sci. Univ. Tokyo Sect. IA Math. 34(2), 391–442 (1987)
Lasry, J., Lions, P.: Jeux à champ moyen. I-le cas stationnaire. Comptes Rendus Mathematique 343(9), 619–625 (2006)
Lasry, J., Lions, P.: Jeux à champ moyen. II-horizon fini et contrôle optimal. Comptes Rendus Mathematique 343(10), 679–684 (2006)
Lasry, J., Lions, P.: Mean field games. Jpn. J. Math. 2(1), 229–260 (2007)
McKean Jr., H.P.: A class of Markov processes associated with nonlinear parabolic equations. Proc. Natl. Acad. Sci. USA 56, 1907–1911 (1966)
McKean Jr., H.P.: Propagation of chaos for a class of non-linear parabolic equations. Stochastic Differential Equations (Lecture Series in Differential Equations, Session 7, Catholic Univ., 1967) (1967)
Méléard, S.: Asymptotic behaviour of some interacting particle systems; McKean–Vlasov and Boltzmann models. In: Probabilistic Models for Nonlinear Partial Differential Equations, pp. 42–95 (1996)
Nualart, D.: The Malliavin Calculus and Related Topics. Probability and Its Applications (New York), 2nd edn. Springer, Berlin (2006)
Protter, P.E.: Stochastic Integration and Differential Equations, vol. 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, 2nd edn. Version 2.1, Corrected third printing (2005)
Sznitman, A.-S.: Topics in propagation of chaos. In: Ecole d’Eté de Probabilités de Saint-Flour XIX 1989. Springer (1991)
Taniguchi, S.: Applications of Malliavin’s calculus to time-dependent systems of heat equations. Osaka J. Math. 22(2), 307–320 (1985)
Additional information
This work was partially supported by the Engineering and Physical Sciences Research Council [Grant No. EP/M506345/1].
Appendix
Appendix
1.1 Elements of Malliavin calculus
As indicated in the introduction, we will use some tools from Malliavin calculus to develop integration by parts formulae. Here we introduce the basic terminology. We follow the exposition in [14], with all proofs contained in the book by Nualart [32]. We denote \(H_d:=L^2([0,T];{\mathbb {R}}^d)\) and use this space to define the Malliavin derivative.
Definition 6.2
(Malliavin derivative) Let \(f \in {\mathcal {C}}_p^{\infty }({\mathbb {R}}^{n};{\mathbb {R}})\), for some \(n \in \mathbb {N}\), \(h_1,\ldots ,h_n \in H_d\) and \(F: \Omega \rightarrow \mathbb {R}\) be the functional given by:
where, for any \(h_i=(h_{i}^1,\dots ,h_{i}^d) \in H_d\)
Any functional of the form (6.5) is called smooth and we denote the class of all such functionals by \(\mathcal {S}\). Then the Malliavin derivative of F, denoted by \(\mathcal {\mathbf {D}}F \in L^2(\Omega ; H_d)\) is given by:
We note the isometry \(L^2(\Omega \times [0,T] ; \mathbb {R}^d) \simeq L^2(\Omega ; H_d)\). This allows us to identify \(\mathcal {\mathbf {D}}F\) with a process \(\left( \mathcal {\mathbf {D}}_r F\right) _{r \in [0,T]}\) taking values in \({\mathbb {R}}^d\), which we often do. We also denote by \(\left( \mathcal {\mathbf {D}}^j_r F\right) _{r \in [0,T]}\), \(j=1, \ldots , d,\) the components of this process.
The set of smooth functionals (random variables) \(\mathcal {S}\) is dense in \(L^p(\Omega )\) for any \(p\ge 1\), and \(\mathcal {\mathbf {D}}\) is closable as an operator from \(L^p(\Omega )\) to \(L^p\left( \Omega ; H_d \right) \). We define \({\mathbb {D}}^{1,p}\) to be the closure of the set \(\mathcal {S}\) within \(L^p(\Omega )\) with respect to the norm:
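The norm in question is the standard one from Nualart [32], stated here for completeness:

```latex
\Vert F \Vert_{1,p}
  := \Big( \mathbb{E}\,|F|^{p} + \mathbb{E}\, \Vert \mathcal{\mathbf{D}} F \Vert_{H_d}^{p} \Big)^{1/p}.
```
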
The higher order Malliavin derivatives are defined in a similar manner. For smooth random variables, we denote the iterated derivative by \(\mathcal {\mathbf {D}}^{(k)} F\), \(k\ge 2\), which is a random variable with values in \(H_d^{\otimes k}\) defined as
The above expression for \(\mathcal {\mathbf {D}}^{(k)} F\) coincides with that obtained by iteratively applying the Malliavin derivative. In an analogous way, one can close the operator \(\mathcal {\mathbf {D}}^{(k)}\) from \(L^p(\Omega )\) to \(L^p(\Omega ; H_d^{\otimes k})\). So, for any \(p \ge 1\) and natural \(k \ge 1\), we define \(\mathbb {D}^{k,p}\) to be the closure of \(\mathcal {S}\) with respect to the norm:
Moreover, there is nothing which restricts consideration to \(\mathbb {R}^d\)-valued random variables. Indeed, one can consider more general Hilbert space-valued random variables, and the theory extends in an appropriate way. To this end, we denote by \(\mathbb {D}^{k,p}(E)\) the appropriate space of E-valued random variables, where E is a separable Hilbert space. For more details, see [32], where the proof of the following chain rule formula can also be found:
Proposition 6.3
(Chain rule for the Malliavin derivative) Let \(\varphi : \mathbb {R}^m \rightarrow \mathbb {R}\) be a continuously differentiable function with bounded partial derivatives, and let \(F = (F_1, \ldots , F_m)\) be a random vector with components belonging to \(\mathbb {D}^{1,p}\) for some \(p\ge 1\). Then \(\varphi (F) \in \mathbb {D}^{1,p}\), with
where \(\nabla \varphi \) is the row vector \((\partial ^1 \varphi , \ldots , \partial ^m \varphi )\) and \(\mathcal {\mathbf {D}}F\) is the matrix \((\mathcal {\mathbf {D}}^j F_i)_{1 \le i \le m,1 \le j \le d}\).
Lemma 6.4
(The Malliavin derivative and integration) Consider an \(\mathbb {F}\)-adapted process \(f:[0,T] \times \Omega \rightarrow {\mathbb {R}}^{d}\), and suppose that for each \(t \in [0,T]\) and \(i \in \{1, \ldots , d\}\), we have \(f_i(t) \in \mathbb {D}^{1,2}\). Moreover, suppose that:
Then \(F_t := \sum _{i=1}^d \int _0^t f_i(s) dB^i_s \in \mathbb {D}^{1,2}\), with
Similarly, for any \(i \in \{1, \ldots , d \} \), \(G^i_t:= \int _0^t f_i(s) ds\) is an element of \(\, \mathbb {D}^{1,2}\), with
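The two displays, dropped in extraction, presumably read as in Nualart [32, Proposition 1.3.8]: for \(j \in \{1,\ldots ,d\}\) and \(r \le t\),

```latex
\mathcal {\mathbf {D}}^j_r F_t = f_j(r) + \sum _{i=1}^d \int _r^t \mathcal {\mathbf {D}}^j_r f_i(s) \, dB^i_s , \qquad \mathcal {\mathbf {D}}^j_r G^i_t = \int _r^t \mathcal {\mathbf {D}}^j_r f_i(s) \, ds .
```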
Proof
See Nualart [32, Proposition 1.3.8] for details. \(\square \)
The divergence operator—which is the adjoint of the Malliavin derivative—plays a vital role in the construction of our integration by parts formula. This operator is also called the Skorohod integral; it generalises the Itô integral to anticipating integrands. A detailed discussion of the divergence operator can be found in Nualart [32].
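As an illustration of the duality that defines \(\delta \) (not part of the paper's argument), the following Monte Carlo sketch checks \({\mathbb {E}}(F \delta (u)) = {\mathbb {E}}(\left\langle \mathcal {\mathbf {D}}F,u\right\rangle _{H})\) in dimension \(d=1\). The choices \(F=\exp (B_T)\) (so that \(\mathcal {\mathbf {D}}_r F = \exp (B_T)\mathbf {1}_{[0,T]}(r)\) by the chain rule) and the deterministic integrand \(u(t)=\cos t\) are our own illustrative ones; since u is deterministic, hence adapted, \(\delta (u)\) is just the Itô integral of u.

```python
import numpy as np

# Monte Carlo check of the duality E[F * delta(u)] = E[<DF, u>_H]
# for F = exp(B_T) (so D_r F = exp(B_T) for r <= T) and the deterministic
# integrand u(t) = cos(t), for which delta(u) = int_0^T u dB (Ito integral).
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 200, 200_000
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
B_T = dB.sum(axis=1)

t_mid = (np.arange(n_steps) + 0.5) * dt
u = np.cos(t_mid)
delta_u = dB @ u                # Ito integral of the deterministic u
int_u = u.sum() * dt            # int_0^T u(r) dr

F = np.exp(B_T)
lhs = np.mean(F * delta_u)      # E[F delta(u)]
rhs = np.mean(F) * int_u        # E[<DF, u>_H] = E[exp(B_T)] * int_0^T u(r) dr

# Analytically, both sides equal exp(T/2) * sin(T) for these choices.
print(lhs, rhs)
```

Both estimates should agree up to Monte Carlo error with \(e^{1/2}\sin 1 \approx 1.39\) here.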
Definition 6.5
(Divergence operator) Denote by \(\delta \) the adjoint of the operator \(\mathcal {\mathbf {D}}\). That is, \(\delta \) is an unbounded operator on \(L^2(\Omega \times [0,T];\mathbb {R}^d)\) with values in \(L^2(\Omega ;{\mathbb {R}})\) such that:
1.
Dom \(\delta \) \(= \{u \in L^2(\Omega \times [0,T];\mathbb {R}^d); |\mathbb {E}(\left\langle \mathcal {\mathbf {D}}F,u\right\rangle _{H_d})| \le c \Vert F\Vert _{L^2(\Omega )},\ \forall F \in \mathbb {D}^{1,2}\}\).
2.
For every \(u \in \text {Dom } \delta \), \(\delta (u) \in L^2(\Omega )\) satisfies:
$$\begin{aligned} \mathbb {E}(F \delta (u)) = \mathbb {E}(\left\langle \mathcal {\mathbf {D}}F,u\right\rangle _{H_d}). \end{aligned}$$
Remark 6.6
If \(u =(u^1,\ldots ,u^d)\in \text {Dom } \delta \) is \(\mathbb {F}\)-adapted, then \(\delta (u)\) is nothing more than the Itô integral of u with respect to the d-dimensional Brownian motion \(B_t = (B_t^1,\ldots ,B_t^d)\), i.e.
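In symbols (completing the remark's omitted display):

```latex
\delta (u) = \sum _{i=1}^d \int _0^T u^i_t \, dB^i_t .
```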
1.2 Proofs from Sect. 3
The first goal of this section is to prove Theorem 3.2. Since each type of derivative (w.r.t. x, \(\mu \) or v) of \(X^{x, [\theta ]}_t\) satisfies a linear equation, we will introduce a general linear equation and first derive some a priori \(L^p\) estimates on its solution. Then, we will show that this linear equation is again differentiable under certain assumptions on the coefficients. In the following, we consider an equation with coefficients \(a_1, a_2, a_3\), which depend on \((t,x,[\theta ],{\varvec{v}}) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times ({\mathbb {R}}^N)^{\# {\varvec{v}}} \), with initial condition given by a constant value \(a_0\). Below, we denote by \(v_r\) an element of the tuple \({\varvec{v}}=(v_1, \ldots , v_{\# {\varvec{v}}})\).
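Although not part of the proofs, the following toy sketch may help fix ideas: expectations over an independent copy on \((\tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}})\) (the \(\widetilde{{\mathbb {E}}}\) terms below) are, in numerical practice, approximated by empirical means over an interacting particle system. The one-dimensional coefficients here — drift \(b(x,m)=m-x\) pulling toward the population mean and unit diffusion — are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative Euler scheme for a 1-D McKean-Vlasov SDE
#   dX_t = b(X_t, E[X_t]) dt + sigma(X_t) dB_t,
# where the law [X_t] enters only through its mean, approximated by the
# empirical mean of an interacting particle system.
def mv_euler(b, sigma, x0, T, n_steps, n_particles, rng):
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        m = x.mean()                                   # empirical proxy for E[X_t]
        dB = rng.normal(0.0, np.sqrt(dt), n_particles)
        x = x + b(x, m) * dt + sigma(x) * dB
    return x

rng = np.random.default_rng(1)
# toy choice: mean-reverting toward the population mean, unit diffusion
xT = mv_euler(lambda x, m: m - x, lambda x: np.ones_like(x),
              x0=0.0, T=1.0, n_steps=100, n_particles=50_000, rng=rng)
```

For this choice the limiting dynamics is an Ornstein–Uhlenbeck-type flow around mean zero, with \(\mathrm {Var}(X_1) = (1-e^{-2})/2 \approx 0.43\), which the empirical spread of `xT` reproduces up to discretisation and particle error.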
Lemma 6.7
Let \(Y^{x, [\theta ]}({\varvec{v}})\) solve the following SDE
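The display for Eq. (6.8) is missing here; a reconstruction consistent with the derivative equations in Proposition 6.8 below (with the convention \(B^0_s = s\)) is:

```latex
Y^{x, [\theta ]}_t({\varvec{v}}) = a_0 + \sum _{i=0}^d \int _0^t \bigg \{ a^i_1 \, Y^{x, [\theta ]}_s({\varvec{v}}) + a^i_2 + \widetilde{{\mathbb {E}}} \left[ \left. a^i_3 \right| _{v=\tilde{\theta }} \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) + \sum _{r=1}^{\# {\varvec{v}}} \left. a^i_3 \right| _{v=v_r} \, \widetilde{Y}^{v_r, [\theta ]}_s({\varvec{v}}) \right] \bigg \} \, dB^i_s . \tag{6.8}
```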
where, for all \(i =1 , \ldots , d\), the coefficients \((t,x,[\theta ],{\varvec{v}}) \mapsto a_k(t,x,[\theta ],{\varvec{v}})\), \(k=1,2,3\), are continuous in \(L^p(\Omega )\) for all \(p \ge 1\), and
In (6.8), \(\widetilde{Y}^{\tilde{\theta }, [\theta ]}\) is a copy of \(Y^{x, [\theta ]}\) on the probability space \((\tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}})\), driven by the Brownian motion \(\tilde{B}\) and with \(x=\tilde{\theta }\). Similarly, \(\widetilde{Y}^{v_r, [\theta ]}\) is a copy of \(Y^{x, [\theta ]}\) on the same probability space, driven by \(\tilde{B}\) and with \(x=v_r\). If we make the following boundedness assumptions
1.
\({\mathop {\sup }\nolimits _{{x \in {\mathbb {R}}^N, [\theta ] \in {\mathcal {P}}_2({\mathbb {R}}^N), {\varvec{v}}\in ({\mathbb {R}}^N)^{\# {\varvec{v}}}}}} \Vert a_2(\cdot ,x,[\theta ],{\varvec{v}}) \Vert _{{\mathcal {S}}^p_T} < \infty \),
2.
\(a_1\) and \(a_3\) are uniformly bounded,
3.
\({\mathop {\sup }\nolimits _{{x \in {\mathbb {R}}^N, [\theta ] \in {\mathcal {P}}_2({\mathbb {R}}^N), {\varvec{v}}\in ({\mathbb {R}}^N)^{\# {\varvec{v}}}}}}\Vert a_2(\cdot ,\theta ,[\theta ],{\varvec{v}}) \Vert _{{\mathcal {S}}^2_T} < \infty \),
then we have the following estimate for \(C=C(p,T,a_1,a_3)\)
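The display for the estimate (6.9) is missing; judging from the assumptions above and the Gronwall argument below, it is presumably of the form:

```latex
\left\Vert Y^{x,[\theta ]}({\varvec{v}}) \right\Vert _{{\mathcal {S}}^p_T} \le C \left( |a_0| + \left\Vert a_2(\cdot ,x,[\theta ],{\varvec{v}}) \right\Vert _{{\mathcal {S}}^p_T} + \left\Vert a_2(\cdot ,\theta ,[\theta ],{\varvec{v}}) \right\Vert _{{\mathcal {S}}^2_T} \right) . \tag{6.9}
```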
Moreover, we also get that the mapping
is continuous.
Proof
Wherever there is no confusion, we drop the arguments \((t,x,[\theta ],{\varvec{v}})\) to lighten notation. We will write, for example, \(a_3|_{v=\tilde{\theta }}\) to denote \(a_3(s,x,[\theta ],\tilde{\theta })\). Let \(\iota ,\kappa :[0,T]\rightarrow [0,\infty )\) be defined as
We deduce from (6.8) and the Burkholder–Davis–Gundy inequality that there exists a constant C such that for any \(t\in [0,T]\) we have
so by Gronwall’s inequality,
Then, applying the Burkholder–Davis–Gundy inequality and the above estimate to \(Y^{x,[\theta ]}_t({\varvec{v}})\) we deduce that
So applying Gronwall’s inequality again and our estimate on \(\iota (T)\) we get (6.9).
Now, for a quantity G depending on \((t,x,[\theta ],{\varvec{v}})\) we introduce the notation
We can split the difference \(Y^{x, [\theta ]}_t({\varvec{v}}) - Y^{x^{\prime }, [\theta ^{\prime }]}_{t^{\prime }}({\varvec{v}}^{\prime })\) into
and consider each term individually. First,
The integrand is bounded in \(L^p(\Omega )\) uniformly in time, so using the Burkholder–Davis–Gundy inequality, we get
Using the continuity assumption on \(a_0\), we see that this goes to 0 as \(t \rightarrow t^{\prime }\). Second,
This is again a linear equation. The same argument used to obtain (6.9), except using the \(L^p\)-norm instead of the \({\mathcal {S}}^p_T\)-norm, gives
Then, using Hölder’s inequality, the fact that \(Y_s^{x,[\theta ]} ({\varvec{v}})\) is bounded in \(L^p(\Omega )\) for all \(p \ge 1\) and the continuity assumptions on \(a_1, a_2, a_3\), we see that the above quantity goes to 0. The arguments for \( \Delta _{\theta } Y_{t^{\prime }}^{x^{\prime }}({\varvec{v}})\) and \(\Delta _{{\varvec{v}}} Y_{t^{\prime }}^{x^{\prime },[\theta ^{\prime }]}\) are almost identical. \(\square \)
Now, we consider the differentiability of the generic process \(Y^{x,[\theta ]}({\varvec{v}})\) satisfying the linear Eq. (6.8) under appropriate assumptions.
Proposition 6.8
Suppose that the process \(Y^{x,[\theta ]}({\varvec{v}})\) is as in Lemma 6.7. In addition to the assumptions of Lemma 6.7, we introduce the following differentiability assumptions:
(a)
For \(k=1,2,3\), all \((s,[\theta ],{\varvec{v}}) \in [0,T]\times {\mathcal {P}}_2({\mathbb {R}}^N)\times ({\mathbb {R}}^N)^{\# {\varvec{v}}}\) and each \(p \ge 1\), \({\mathbb {R}}^N \ni x \mapsto a_k(s,x,[\theta ],{\varvec{v}}) \in L^p(\Omega )\) is differentiable.
(b)
For \(k=1,2,3\), all \((s,[\theta ],x) \in [0,T]\times {\mathcal {P}}_2({\mathbb {R}}^N)\times {\mathbb {R}}^N\) and each \(p \ge 1\), \({\mathbb {R}}^N \ni v \mapsto a_k(s,x,[\theta ],{\varvec{v}}) \in L^p(\Omega )\) is differentiable.
(c)
For all \((s,x,{\varvec{v}}) \in [0,T] \times {\mathbb {R}}^N\times ({\mathbb {R}}^N)^{\# {\varvec{v}}}\) the mapping \(L^2(\Omega ) \ni \theta \mapsto a_2(s,\theta ,[\theta ],{\varvec{v}}) \in L^2(\Omega )\) is Fréchet differentiable.
(d)
\(a_k(s,x,[\theta ],{\varvec{v}}) \in {\mathbb {D}}^{1, \infty }\) for \(k=1,2,3\) and all \((s,x,[\theta ],{\varvec{v}}) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\times ({\mathbb {R}}^N)^{\# {\varvec{v}}}\). Moreover, we assume that the following estimate on the Malliavin derivatives holds for all \(p \ge 1\).
$$\begin{aligned}&\sup _{r \in [0,T]} {\mathbb {E}}\sup _{s\in [0,T]} | \mathcal {\mathbf {D}}_r a_k(s,x,[\theta ],{\varvec{v}}) |^p < \infty , \quad k=1,2,3. \end{aligned}$$
Then, for all \(t \in [0,T]\) the following hold:
1.
Under assumption (a), \(x \mapsto Y^{x, [\theta ]}_t({\varvec{v}})\) is differentiable in \(L^p(\Omega )\) for all \(p \ge 1\) and
$$\begin{aligned} \partial _x Y_t^{x, [\theta ]}({\varvec{v}}):= L^p - \lim _{h \rightarrow 0} \frac{1}{|h|} \left( Y_t^{x+h, [\theta ]}({\varvec{v}}) - Y_t^{x, [\theta ]}({\varvec{v}}) \right) \end{aligned}$$
satisfies
$$\begin{aligned} \partial _x Y^{x, [\theta ]}_t({\varvec{v}}) =&\sum _{i=0}^d \int _0^t \bigg \{ \partial _x a^i_1 \, Y^{x, [\theta ]}_s({\varvec{v}}) + a^i_1 \, \partial _x Y^{x, [\theta ]}_s ({\varvec{v}}) + \partial _x a^i_2 \\&+ \widetilde{{\mathbb {E}}} \left[ \left. \partial _x a^i_3\right| _{v=\tilde{\theta }} \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) + \sum _{r=1}^{\# {\varvec{v}}} \left. \partial _x a^i_3\right| _{v=v_r} \widetilde{Y}^{v_r, [\theta ]}_s({\varvec{v}}) \right] \bigg \} \, dB^i_s. \end{aligned}$$
2.
Under assumption (b), \({\varvec{v}}\mapsto Y^{x, [\theta ]}_t({\varvec{v}})\) is differentiable in \(L^p(\Omega )\) for all \(p \ge 1\) and
$$\begin{aligned} \partial _{{\varvec{v}}} Y_t^{x, [\theta ]}({\varvec{v}}):= L^p - \lim _{h \rightarrow 0} \frac{1}{|h|} \left( Y_t^{x, [\theta ]}({\varvec{v}}+h) - Y_t^{x, [\theta ]}({\varvec{v}})\right) \end{aligned}$$
satisfies
$$\begin{aligned} \partial _{v_j} Y^{x, [\theta ]}_t({\varvec{v}}) =&\sum _{i=0}^d \int _0^t \bigg \{a^i_1 \, \partial _{v_j} Y^{x, [\theta ]}_s({\varvec{v}}) + \partial _{v_j} a^i_2 + \widetilde{{\mathbb {E}}} \left[ \left. \partial _v a^i_3\right| _{v=v_j} \widetilde{Y}^{v_j, [\theta ]}_s ({\varvec{v}}) \right] \\&+ \widetilde{{\mathbb {E}}} \left[ \left. a^i_3\right| _{v=v_j} \partial _x \widetilde{Y}^{v_j, [\theta ]}_s ({\varvec{v}}) + \left. a^i_3\right| _{v=\tilde{\theta }} \partial _{v_j} \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s ({\varvec{v}})\right. \\&\left. +\sum _{r=1}^{\# {\varvec{v}}}\left. a^i_3\right| _{v=v_r} \partial _{v_j} \widetilde{Y}^{v_r, [\theta ]}_s ({\varvec{v}}) \right] \bigg \} \, dB^i_s. \end{aligned}$$
3.
Under assumptions (a), (b) and (c), the maps \(\theta \mapsto Y_t^{\theta ,[\theta ]}({\varvec{v}})\) and \(\theta \mapsto Y^{x, [\theta ]}_t({\varvec{v}})\) are Fréchet differentiable for all \((x,{\varvec{v}}) \in {\mathbb {R}}^N \times ({\mathbb {R}}^N)^{\# {\varvec{v}}}\), so \(\partial _{\mu }Y^{x,[\theta ]}_t({\varvec{v}})\) exists and satisfies
$$\begin{aligned} \partial _{\mu } Y^{x, [\theta ]}_t ({\varvec{v}},v^{\prime }) =&\sum _{i=0}^d \int _0^t \bigg \{ \partial _{\mu } a^i_1 \, Y^{x, [\theta ]}_s({\varvec{v}}) + a^i_1 \, \partial _{\mu } Y^{x, [\theta ]}_s ({\varvec{v}},v^{\prime }) + \partial _{\mu } a^i_2 \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu } a^i_3 \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) + \partial _{v} a^i_3 \, \widetilde{Y}^{v^{\prime }, [\theta ]}_s({\varvec{v}}) \right. \\&\left. + \left. a^i_3 \right| _{v=\tilde{\theta }} \, \partial _{\mu } \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s ({\varvec{v}},v^{\prime }) \right] \\&+ \widetilde{{\mathbb {E}}} \left[ \left. a^i_3 \right| _{v=v^{\prime }} \, \partial _{x} \widetilde{Y}^{v^{\prime }, [\theta ]}_s ({\varvec{v}}) \right. \\&\left. + \sum _{r=1}^{\# {\varvec{v}}} \left. a^i_3 \right| _{v=v_r} \, \partial _{\mu } \widetilde{Y}^{v_r, [\theta ]}_s ({\varvec{v}},v^{\prime }) \right] \bigg \} \, dB^i_s. \end{aligned}$$
Moreover, we have the representation, for all \(\gamma \in L^2(\Omega )\),
$$\begin{aligned} D \left( Y_t^{\theta ,[\theta ]}({\varvec{v}})\right) (\gamma )= \left. \left( \partial _xY_t^{x,[\theta ]}({\varvec{v}}) \gamma + \widehat{{\mathbb {E}}} \left[ \partial _{\mu }Y_t^{x,[\theta ]}({\varvec{v}},\widehat{\theta }) \, \widehat{\gamma } \right] \right) \right| _{x=\theta }. \end{aligned}$$
4.
Under assumption (d), \(Y^{x, [\theta ]}_t \in {\mathbb {D}}^{1, \infty }\) and \(\mathcal {\mathbf {D}}_r Y^{x, [\theta ]}\) satisfies
$$\begin{aligned} \mathcal {\mathbf {D}}_r Y^{x, [\theta ]}_t({\varvec{v}}) =&\left( a^j_1 \, Y^{x, [\theta ]}_r({\varvec{v}}) + a^j_2 + \widetilde{{\mathbb {E}}} \left[ a^j_3 \widetilde{Y}^{x, [\theta ]}_r({\varvec{v}}) \right] \right) _{j=1, \ldots , d}\nonumber \\&+ \sum _{i=0}^d \int _r^t \bigg \{ \mathcal {\mathbf {D}}_r a^i_1 \, Y^{x, [\theta ]}_s({\varvec{v}}) + a^i_1 \, \mathcal {\mathbf {D}}_r Y^{x, [\theta ]}_s({\varvec{v}})\nonumber \\&+ \mathcal {\mathbf {D}}_r a^i_2 + \widetilde{{\mathbb {E}}} \left[ \mathcal {\mathbf {D}}_r a^i_3|_{v=\tilde{\theta }} \widetilde{Y}^{x, [\theta ]}_s({\varvec{v}}) \right] \bigg \} \, dB^i_s. \end{aligned}$$(6.10)
Moreover, the following bound holds:
$$\begin{aligned} \sup _{r \le T} {\mathbb {E}}\left[ \sup _{r \le t \le T} \left| \mathcal {\mathbf {D}}_r Y_t^{x, [\theta ]}({\varvec{v}}) \right| ^p \right] \le C \, \sup _{r \le T} {\mathbb {E}}\left[ \sup _{r \le t \le T} \left| \mathcal {\mathbf {D}}_r a_1\right| ^p \right] . \end{aligned}$$(6.11)
Proof
Parts 1. and 2. are standard results on differentiability of SDEs with respect to a real parameter.
3.
The arguments to show that the maps \(\theta \mapsto Y_t^{\theta ,[\theta ]}({\varvec{v}})\) and \(\theta \mapsto Y^{x, [\theta ]}_t({\varvec{v}})\) are Fréchet differentiable are essentially the same as those from Proposition 3.1 showing that \(\theta \mapsto X^{\theta ,[\theta ]}({\varvec{v}})\) and \(\theta \mapsto X^{x, [\theta ]}_t({\varvec{v}})\) are Fréchet differentiable, so we omit them.
Once we know these derivatives exist, it is fairly straightforward to see that they satisfy the equations
$$\begin{aligned} D (Y_t^{\theta ,[\theta ]}({\varvec{v}}))(\gamma ) =&\int _0^t \bigg \{ D a_1(\gamma )|_{x=\theta } \,Y^{\theta , [\theta ]}_s({\varvec{v}}) +\partial _x a_1|_{x=\theta } \, \gamma \, Y^{\theta , [\theta ]}_s({\varvec{v}}) \nonumber \\&+a_1|_{x=\theta } D (Y_s^{\theta ,[\theta ]}({\varvec{v}}))(\gamma )+ D a_2(\gamma )|_{x=\theta } \nonumber \\&+ \partial _x a_2|_{x=\theta } \, \gamma + \widetilde{{\mathbb {E}}} \left[ \partial _v a_3|_{x=\theta ,v=\tilde{\theta }} \, \widetilde{\gamma } \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _x a_3|_{x=\theta ,v=\tilde{\theta }} \, \widetilde{\gamma } \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ D a_3(\gamma )|_{x=\theta ,v=\tilde{\theta }} \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) + a_3|_{x=\theta ,v=\tilde{\theta }} \,\right. \nonumber \\&\quad \left. D \left( \widetilde{Y}_s^{\tilde{\theta },[\theta ]}({\varvec{v}})\right) (\gamma ) \right] \nonumber \\&+ \sum _{r=1}^{\# {\varvec{v}}} \widetilde{{\mathbb {E}}} \left[ \left( D a^i_3(\gamma )|_{x=\theta ,v=v_r} \, + \partial _x a^i_3|_{x=\theta ,v=v_r} \, \gamma \right) \widetilde{Y}^{v_r, [\theta ]}_s({\varvec{v}}) \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ a^i_3(\gamma )|_{x=\theta ,v=v_r}\, D \left( \widetilde{Y}^{v_r, [\theta ]}_s({\varvec{v}})\right) ( \gamma )\right] \bigg \} \, dB_s, \end{aligned}$$(6.12)
and
$$\begin{aligned} D (Y_t^{x,[\theta ]}({\varvec{v}}))(\gamma ) =&\int _0^t \bigg \{ D a_1(\gamma ) Y^{x, [\theta ]}_s({\varvec{v}}) + a_1 D (Y_s^{x,[\theta ]}({\varvec{v}}))(\gamma ) + D a_2(\gamma ) \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _v a_3|_{v=\tilde{\theta }} \, \widetilde{\gamma } \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ D a_3(\gamma ) |_{v=\tilde{\theta }} \, \widetilde{Y}^{\tilde{\theta }, [\theta ]}_s({\varvec{v}}) + a_3|_{v=\tilde{\theta }} D \left( \widetilde{Y}_s^{\tilde{\theta },[\theta ]}({\varvec{v}})\right) (\gamma ) \right] \nonumber \\&+ \sum _{r=1}^{\# {\varvec{v}}} \widetilde{{\mathbb {E}}} \left[ D a^i_3(\gamma )|_{v=v_r} \, \widetilde{Y}^{v_r, [\theta ]}_s({\varvec{v}}) + a^i_3(\gamma )|_{v=v_r}\, \right. \nonumber \\&\left. \quad D \left( \widetilde{Y}^{v_r, [\theta ]}_s({\varvec{v}})\right) ( \gamma ) \right] \bigg \} \, dB_s. \end{aligned}$$(6.13)
Now, taking the equation we claim is satisfied by \(\partial _{\mu } Y^{x,[\theta ]}_t({\varvec{v}},v^{\prime })\), evaluating at \(v^{\prime }=\widehat{\theta }\), multiplying by \(\widehat{\gamma }\), and taking expectation with respect to \(\widehat{{\mathbb {P}}}\), we can see that \(\widehat{{\mathbb {E}}}\left[ \partial _{\mu } Y^{x,[\theta ]}_t({\varvec{v}},\widehat{\theta }) \widehat{\gamma }\right] \) satisfies the same equation as \(D(Y_t^{x,[\theta ]}({\varvec{v}}))(\gamma )\), so by uniqueness, they are the same. Similarly, computing
$$\begin{aligned} \left. \left( \partial _xY_t^{x,[\theta ]}({\varvec{v}}) \gamma + \widehat{{\mathbb {E}}} \left[ \partial _{\mu }Y_t^{x,[\theta ]}({\varvec{v}},\widehat{\theta }) \, \widehat{\gamma } \right] \right) \right| _{x=\theta }, \end{aligned}$$
we can see that it satisfies the same equation as \(D \left( Y_t^{\theta ,[\theta ]}({\varvec{v}})\right) (\gamma )\).
4.
Equation (6.8) fits into the standard framework for Malliavin differentiability of SDEs, since the only unknown term appearing inside the expectation with respect to \(\widetilde{{\mathbb {P}}}\) on the right hand side, \(\widetilde{Y}_s^{\tilde{\theta }, [\theta ]}\), does not depend on \(\omega \in \Omega \). The conclusion is therefore a standard result [32, Lemma 2.2.2]. The proof of the bound (6.11) follows the same lines as the proof of (6.9).
\(\square \)
We are now in a position to prove Theorem 3.2.
Proof of Theorem 3.2
To ease the notational burden, we will prove the theorem in dimension \(N=1\). In this case, \(\alpha \) and \(\gamma \) are integers rather than multi-indices and \({\varvec{\beta }}\) is a multi-index on \( \{ 1, \ldots , \alpha \}\). We will show, by induction on \(I := \alpha + |{\varvec{\beta }}|+ \gamma \), that \( \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t\) exists and solves a linear equation of the form (6.8). We can then use Lemma 6.7 to obtain an \(L^p(\Omega )\) estimate on \( \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t\) at each level. In addition, we can obtain estimates on the \({\mathbb {D}}^{m,p}\)-norm of \( \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t\) at each level using arguments similar to the classical SDE case.
We will prove by induction that the following statements hold true for \(I=1, \ldots , k\):
(S1):
For all \(\alpha , {\varvec{\beta }}, \gamma \) satisfying \(\alpha + |{\varvec{\beta }}| + \gamma =I\), \(\partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t({\varvec{v}})\) exists and solves a linear equation of the form (6.8). Moreover, \(\Vert \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}({\varvec{v}}) \Vert _{{\mathcal {S}}^p_T}\) is bounded independently of \((x,[\theta ],{\varvec{v}})\) for all \(p \ge 1\).
(S2):
\( \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t({\varvec{v}}) \in {\mathbb {D}}^{M-I,\infty }\) and, moreover,
$$\begin{aligned}&\sup _{r_1, \ldots , r_{M-I-1} \in [0,T]} {\mathbb {E}}\left[ \sup _{r_1 \vee \cdots \vee r_{M-I-1} \le t \le T} \left| \mathcal {\mathbf {D}}^{(M-I-1)}_{r_1, \ldots , r_{M-I-1}} \, \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t ({\varvec{v}}) \right| ^p \right] \\&\qquad \le C \,(1+|x|+\Vert \theta \Vert _2)^m, \end{aligned}$$
for all \(p \ge 1\), where \(m=1\) unless the coefficients \(V_0, \ldots , V_d\) are bounded, in which case \(m=0\).
\(\underline{I=1}\):
(S1): \(\partial _x X^{x, [\theta ]}_t\) and \(\partial _{\mu } X^{x, [\theta ]}_t(v_1)\) exist and are continuous by Proposition 3.1. There is no derivative with respect to v at this level. We can write
in the form of Eq. (6.8) and identify the coefficients:
We can now check that the assumptions of Lemma 6.7 are satisfied by the coefficients \(a_1, a_2, a_3\) above to obtain a bound on \(\Vert Y^{x,[\theta ]} (v_1) \Vert _{{\mathcal {S}}^p_T}\).
Going back to the equations satisfied by \(\partial _x X^{x, [\theta ]}_t\) and \( \partial _{\mu } X^{x, [\theta ]}_t(v)\), we see that the coefficients are \((k-1)\)-times differentiable with bounded Lipschitz derivatives. Nualart [32, Theorem 2.2.2] immediately tells us that \(\partial _x X^{x, [\theta ]}_t, \partial _{\mu } X^{x, [\theta ]}_t \in {\mathbb {D}}^{k-1,\infty }\). Using the bound in (6.11), we get for \(Y^{x,[\theta ]}_t = \partial _x X^{x, [\theta ]}_t\) or \( \partial _{\mu } X^{x, [\theta ]}_t(v)\),
Now, \(\partial ^2 V_i\) is bounded and it is easy to prove that
(where \(m=1\) unless the coefficients \(V_0, \ldots , V_d\) are bounded, in which case \(m=0\)) using an argument similar to the derivation of the bound (6.9) for the solution of a linear equation. So, we get the required bound on the first Malliavin derivative of \(Y^{x,[\theta ]}({\varvec{v}})\). For the higher order Malliavin derivatives, following the proof in [32, Theorem 2.2.2], we see that each order of Malliavin derivative satisfies a linear equation. Importantly, in the equations satisfied by the higher-order Malliavin derivatives, the coefficient \(a_1^i\) is always \(\partial V_i(X^{x, [\theta ]}_s, [X^{\theta }_s])\). From the bound on the Malliavin derivative of a general linear equation in (6.11), we see that this is the only term which contributes to the estimate. Hence, the same bound as above holds for each order of Malliavin derivative. Moreover, if all of the coefficients are bounded, the estimate is uniform in \((x,[\theta ],{\varvec{v}})\).
\(\underline{2 \le I \le k}\):
(S1): By the induction hypothesis, for any \(\alpha , {\varvec{\beta }}, \gamma \) satisfying \(\alpha + |{\varvec{\beta }}| + \gamma = I\), we can write \(Y^{x,[\theta ]}_t({\varvec{v}}):=\partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } X^{x, [\theta ]}_t({\varvec{v}})\) in the form of Eq. (6.8). Now, denote
We will write this in the form of Eq. (6.8) with coefficients \(b_1,b_2,b_3\). Using Proposition 6.8, we identify these coefficients as
Now, to obtain a bound on the \({\mathcal {S}}^p_T\)-norm of \(Z^{x,[\theta ]} ({\varvec{v}},v^{\prime })\) one just has to check that the coefficients \(b_1, b_2, b_3\) satisfy the assumptions of Lemma 6.7, which is straightforward.
(S2): This is the same as the case \(I=1\). \(\square \)
The functions belonging to the set \({\mathbb {K}}^q_r(E,M)\) satisfy the following properties, which we make use of when developing integration by parts formulas in Sect. 4.
Lemma 6.9
(Properties of local Kusuoka–Stroock processes) The following hold
1.
Suppose \(\Psi \in {\mathbb {K}}^q_{r}({\mathbb {R}},M)\) and \( \Psi \) is \(\mathbb {F}\)-adapted. For \(i=1,\ldots ,d\), define
$$\begin{aligned} g_i(t,x,\mu ) :=\int _0^t \Psi (s,x, \mu ) \, dB^i_s \quad \textit{and} \quad g_0(t,x, \mu ) := \int _0^t \Psi (s,x, \mu )\, ds . \end{aligned}$$
Then, for \(i=1, \ldots , d\), \(g_i \in {\mathbb {K}}^q_{r+1}({\mathbb {R}},M)\) and \(g_0 \in {\mathbb {K}}^q_{r+2}({\mathbb {R}},M)\).
2.
If \(\Psi _i \in {\mathbb {K}}^{q_i}_{r_i}(E,M_i)\) for \(i=1, \ldots , n\), then
$$\begin{aligned} \prod _{i=1}^n \Psi _i \in {\mathbb {K}}^{q_1+ \cdots + q_n}_{r_1+ \cdots + r_n}(E, \min _{i}M_i) \quad \textit{and} \quad \sum _{i=1}^n \Psi _i \in {\mathbb {K}}^{\max _i q_i}_{\min _i r_i}(E,\min _i M_i). \end{aligned}$$
3.
If \(\Psi \in {\mathbb {K}}^q_r(H_d,M)\), then \(g(t,x,\mu ): = \textstyle \int _0^{t} \Psi (t,x,\mu )(r) \, dr \in {\mathbb {K}}^q_r({\mathbb {R}}^d,M)\). Conversely, if \( \tilde{\Psi }\in {\mathbb {K}}^q_r({\mathbb {R}}^d,M)\), then \( \tilde{g}(t,x,\mu ): = \tilde{\Psi }(\cdot ,x,\mu ) \mathbf {1}_{[0,t]}(\cdot ) \in {\mathbb {K}}_{r+1}^q(H_d,M)\).
4.
If \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},M)\), then \(\mathcal {\mathbf {D}}\Psi \in {\mathbb {K}}^q_r(H_d,M-1)\).
5.
If \(\Psi \in {\mathbb {K}}^{q_1}_{r_1}({\mathbb {R}},M_1)\) and \(u \in {\mathbb {K}}^{q_2}_{r_2}(H_d,M_2)\) then, \(\langle \mathcal {\mathbf {D}}\Psi ,u \rangle _{H_d} \in {\mathbb {K}}^{q_1+q_2}_{r_1+r_2}({\mathbb {R}}, (M_1-1) \wedge M_2)\).
6.
If \(\Psi \in {\mathbb {K}}^{q_1}_{r_1}({\mathbb {R}}^N,M_1)\) and \(u \in {\mathbb {K}}^{q_2}_{r_2}(H_{d \times N},M_2)\) is \(\mathbb {F}\)-adapted then, \(\delta \left( u \Psi \right) \in {\mathbb {K}}^{q_1+q_2}_{r_1+r_2}({\mathbb {R}}, (M_1-1) \wedge M_2)\).
7.
If \(\, \Psi \in {\mathbb {K}}^q_{r}({\mathbb {R}},M)\) then, \(\partial _x \Psi \in {\mathbb {K}}^q_r({\mathbb {R}}, M-1)\) and \((x,v,\mu ) \mapsto \partial _{\mu } \Psi (x,\mu ,v)\) is a Kusuoka–Stroock process on \({\mathbb {R}}^{2N} \times {\mathcal {P}}_2({\mathbb {R}}^N)\) in the class \({\mathbb {K}}^q_r({\mathbb {R}}, M-1)\).
Proof
These results are straightforward generalisations of results in [25] and [14]. \(\square \)
Now, we show that certain processes, which will make up the Malliavin weights in our integration by parts formulas, belong to specific Kusuoka–Stroock classes. The arguments make extensive use of the properties of generic Kusuoka–Stroock processes on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) in Lemma 6.9.
Proposition 6.10
If \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) and (UE) holds, then the following are true:
1.
Let \(|\alpha |=1\), and \( \,\Phi _1 = \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} (X^{x,[\theta ]}_{.}, [X_.^{\theta }]) \partial ^{\alpha }_x X^{x,[\theta ]}_. \mathbf {1}_{[0,t]}(\cdot )\). Then, \(\Phi _1 \in {\mathbb {K}}^2_1(H_{d},k-1)\) and if \(V_0, \ldots , V_d\) are uniformly bounded then \(\Phi _1 \in {\mathbb {K}}^0_1(H_{d},k-1)\).
2.
For all \(i,j \in \{1, \ldots , N\}\), \(\left( \partial _x X^{x,[\theta ]}_t\right) ^{-1}_{i,j} \in {\mathbb {K}}^1_0({\mathbb {R}},k-2)\) and if \(V_0, \ldots , V_d\) are uniformly bounded then \((\partial _x X^{x,[\theta ]}_t)^{-1}_{i,j} \in {\mathbb {K}}^0_0({\mathbb {R}},k-2)\).
3.
\((\partial _x X^{x,[\theta ]}_t)^{-1} \partial _{\mu } X^{x, [\theta ]}_t \in {\mathbb {K}}_{0}^{2}({\mathbb {R}}^{N \times N},k-2)\) and if \(V_0, \ldots , V_d\) are uniformly bounded then \((\partial _x X^{x,[\theta ]}_t)^{-1} \partial _{\mu } X^{x, [\theta ]}_t \in {\mathbb {K}}_{0}^{0}({\mathbb {R}}^{N \times N},k-2)\).
Proof
1.
First, note that from Assumption 3.3, it follows that the matrix \(\left( \sigma \sigma ^{\top }\right) ^{-1}(x, \mu )\) has an operator norm bounded uniformly in \((x, \mu )\). Therefore \(\sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} (\cdot , \cdot ) \) has linear growth. Also, its elements are k-times differentiable in \((x, [\theta ])\), so \(\sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} (X^{x,[\theta ]}_{t}, [X_t^{\theta }]) \in {\mathbb {K}}_0^1({\mathbb {R}}^{d \times N}, k)\). When \(|\alpha |=1\), \(\partial ^{\alpha }_x X^{x,\mu }_t \in {\mathbb {K}}_0^1({\mathbb {R}}^N, k-1)\) by part 7 of Lemma 6.9, so the product \(\sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} (X^{x,[\theta ]}_{t}, [X_t^{\theta }]) \partial ^{\alpha }_x X^{x,\mu }_t \in {\mathbb {K}}_1^2({\mathbb {R}}^d, k-1)\). Hence, by Lemma 6.9 part 3., \(\Phi _1 \in {\mathbb {K}}^2_1(H_{d},k-1)\).
2.
\((\partial _x X^{x ,[\theta ]}_t)^{-1}\) satisfies the following linear equation
$$\begin{aligned} \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} =&\, \text {Id}_N - \sum _{i=1}^d \int _0^t \left( \partial _x X^{x, [\theta ]}_s\right) ^{-1} \, \partial V_i \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, dB^i_s \nonumber \\&- \int _0^t \left( \partial _x X^{x, [\theta ]}_s\right) ^{-1} \, \partial \bar{V}_0 \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s\right] \right) \, ds, \end{aligned}$$(6.14)
where \(\bar{V}_0 =V_0 - \textstyle \frac{1}{2} \sum _{j=1}^d \partial V_j V_j\). This can be seen by applying Itô’s formula to the product \((\partial _x X^{x ,[\theta ]}_t)^{-1} \partial _x X^{x, [\theta ]}_t\). The proof of Theorem 3.2 works just as well for this equation. The only thing to note is that the above equation contains second derivatives of the vector fields. This leads to the conclusion \((\partial _x X^{x, [\theta ]}_t)^{-1} \in {\mathbb {K}}^1_0({\mathbb {R}}^{N \times N},k-2)\).
3.
To prove the claim, it is enough to note \((\partial _x X^{x,\mu }_t)^{-1} \in {\mathbb {K}}_0^1({\mathbb {R}}^{N \times N}, k-2)\) from part 2 of this lemma and \(\partial _{\mu } X^{x, [\theta ]}_t \in {\mathbb {K}}_{0}^1({\mathbb {R}}^{N \times N },k-1)\), which comes from Lemma 6.9 part 7. \(\square \)
We can now prove Proposition 3.4.
Proof of Proposition 3.4
- \(I^1_{\alpha }\):
First, fix \(|\alpha |=1\). We want to apply Lemma 6.9 part 6. with \(f=\Psi \) and \(u = \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} (X^{x,\mu }_{.}, [X_.^{\theta }]) \partial _x X^{x,\mu }_.\right) _{\alpha } \mathbf {1}_{[0,t]} \). We recall Proposition 6.10 part 1. to see that \(u \in {\mathbb {K}}^2_1(H_{d},k-1)\) or \({\mathbb {K}}^0_1(H_{d},k-1)\) if \(V_i\) is uniformly bounded, which proves that
$$\begin{aligned}&\delta \left( r \mapsto \Psi (t,x, [\theta ]) \, \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,\mu }_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,\mu }_r \right) _{\alpha } \right) \\&\quad \in {\mathbb {K}}^{q+2}_{r+1}({\mathbb {R}},(k \wedge n)-1) \end{aligned}$$
(or \({\mathbb {K}}^q_{r+1}({\mathbb {R}},k-1)\) if the \(V_i\) are bounded) and hence, dividing by \(\sqrt{t}\), we get that \(I^1_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2}_r({\mathbb {R}},(k \wedge n)-1)\) for \(|\alpha |=1\). For \(|\alpha |>1\), we iterate this argument and get \(I^1_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2|\alpha |}_r({\mathbb {R}},(k \wedge n)-|\alpha |)\).
- \(I^2_{\alpha }\):
We recall from Proposition 6.10 part 2. that, for all \(i,j \in \{1, \ldots , N\}\), \((\partial _x X^{x,\mu }_t)^{-1}_{i,j} \in {\mathbb {K}}^1_0({\mathbb {R}},k-2)\) and, if the \(V_i\) are uniformly bounded, \((\partial _x X^{x,\mu }_t)^{-1}_{i,j} \in {\mathbb {K}}^0_0({\mathbb {R}},k-2)\). So, the product \((\partial _x X^{x,\mu }_t)^{-1}_{j,i} \Psi (t,x, [\theta ]) \in {\mathbb {K}}^{q+1}_r({\mathbb {R}},n \wedge (k-2))\) and hence the sum \(\textstyle \sum _{j=1}^N (\partial _x X^{x,\mu }_t)^{-1}_{j,i} \Psi (t,x, [\theta ]) \in {\mathbb {K}}^{q+1}_r({\mathbb {R}},n \wedge (k-2))\). When the vector fields are uniformly bounded,
$$\begin{aligned} \sum _{j=1}^N \left( \partial _x X^{x,\mu }_t\right) ^{-1}_{j,i} \Psi (t,x, [\theta ]) \in {\mathbb {K}}^q_r({\mathbb {R}},n \wedge (k-2)). \end{aligned}$$
Hence, by applying \(I^1\) to these terms and using the first result of this proposition, we get that \(I^2_{(i)}(\Psi ) \in {\mathbb {K}}^{q+3}_r({\mathbb {R}},[n \wedge (k-2)]-1)\). For \(|\alpha |>1\), we iterate this argument and get \(I^2_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+3|\alpha |}_r({\mathbb {R}},[n \wedge (k-2)]-|\alpha |)\).
- \(I^3_{\alpha }\):
Note that \(\sqrt{t} \partial ^i\Psi (t,x, [\theta ]) \in {\mathbb {K}}^q_{r+1}({\mathbb {R}},n-1)\), so that \(I^1_{(i)}(\Psi ) + \sqrt{t} \partial ^i\Psi \in {\mathbb {K}}^{q+2}_r({\mathbb {R}}, (n \wedge k)-1)\). For \(|\alpha |>1\), we iterate this argument and get \(I^3_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2|\alpha |}_r({\mathbb {R}},(k \wedge n)-|\alpha |)\).
- \({\mathcal {I}}^1_{\alpha }\):
We recall from Proposition 6.10 that \((\partial _x X^{x,\mu }_t)^{-1} \partial _{\mu } X^{x, [\theta ]}_t \in {\mathbb {K}}_{0}^{2}({\mathbb {R}}^{N \times N},k-2)\), so \((\partial _x X^{x,\mu }_t)^{-1} \partial _{\mu } X^{x, [\theta ]}_t \Psi (t,x, [\theta ]) \in {\mathbb {K}}_{r}^{q+2}({\mathbb {R}}^{N \times N},n \wedge (k-2))\). We then apply Lemma 6.9 part 6. with \(u = \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} (X^{x,\mu }_{.}, [X_.^{\theta }]) \partial _x X^{x,\mu }_.\right) _{\alpha } \mathbf {1}_{[0,t]} \), which is in \({\mathbb {K}}^2_1(H_{d},k-1)\) as before, and \(f:= (\partial _x X^{x,\mu }_t)^{-1} \partial _{\mu } X^{x, [\theta ]}_t \Psi (t,x, [\theta ]) \in {\mathbb {K}}_{r}^{q+2}({\mathbb {R}}^{N \times N},n \wedge (k-2))\). So \(\delta (uf) \in {\mathbb {K}}^{q+4}_{r+1}({\mathbb {R}},[n \wedge (k-2)]-1)\). Hence, \( {\mathcal {I}}^1_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+4}_{r}({\mathbb {R}},[n \wedge (k-2)]-1)\). For \(|\alpha |>1\), we iterate this argument and get \({\mathcal {I}}^1_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+4|\alpha |}_r({\mathbb {R}},[n \wedge (k-2)]-|\alpha |)\).
- \({\mathcal {I}}^3_{\alpha }\):
Note that \(\sqrt{t} \partial _{\mu }\Psi (v) \in {\mathbb {K}}^q_{r+1}({\mathbb {R}}^{N \times N},n-1)\), so that \({\mathcal {I}}^1_{\gamma _1}(\Psi )(v) + ( \partial _{\mu }\Psi (v))_{\beta _1} \in {\mathbb {K}}^{q+4|\alpha |}_r({\mathbb {R}},[n \wedge (k-2) ]-|\alpha |)\). \(\square \)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Crisan, D., McMurray, E. Smoothing properties of McKean–Vlasov SDEs. Probab. Theory Relat. Fields 171, 97–148 (2018). https://doi.org/10.1007/s00440-017-0774-0