1 Introduction

The main object of study in this paper is the McKean–Vlasov stochastic differential equation (MVSDE)

$$\begin{aligned} X^{\theta }_t = \theta + \int _0^t V_0 \left( X^{\theta }_s, \left[ X^{\theta }_s \right] \right) \, ds + \sum _{i=1}^d \int _0^t V_i \left( X^{\theta }_s, \left[ X^{\theta }_s \right] \right) \, dB^i_{s}, \end{aligned}$$
(1.1)

driven by a Brownian motion \(B= \left( B^1, \ldots , B^d \right) \), with coefficients \(V_0, \ldots , V_d: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) and initial condition \(\theta \), a square-integrable random variable independent of B. Here and throughout, we denote by \([\xi ]\) the law of a random variable \( \xi \) and by \({\mathcal {P}}_2({\mathbb {R}}^N)\) the set of probability measures on \({\mathbb {R}}^N\) with finite second moment.

MVSDEs are equations whose coefficients depend on the law of the solution. They are also referred to as mean-field SDEs and their solutions are often called nonlinear diffusions. These MVSDEs provide a probabilistic representation of the solutions of a class of nonlinear PDEs. A particular example of such nonlinear PDEs was first studied by McKean [29]. These equations describe the limiting behaviour of an individual particle evolving within a large system of particles undergoing diffusive motion and interacting in a ‘mean-field’ sense, as the population size grows to infinity. A particular characteristic of the limiting behaviour of the system is that the particles in any finite subset become asymptotically independent of each other. This propagation of chaos phenomenon was studied by McKean [30] and Sznitman [34] among many other authors. Existence and uniqueness results, the theory of propagation of chaos and numerical methods have been studied in a variety of settings (see, for example, [6, 7, 21, 31]).
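For orientation, the approximating particle system can be sketched as follows (a standard formulation; the notation \(X^{i,n}\), \(\mu ^n_s\), \(B^{i,j}\) is illustrative and not taken from the references). Given i.i.d. copies \((\theta ^i, B^i)_{i=1}^n\) of \((\theta , B)\), the n particles interact through their empirical measure:

$$\begin{aligned} X^{i,n}_t = \theta ^i + \int _0^t V_0 \left( X^{i,n}_s, \mu ^n_s \right) \, ds + \sum _{j=1}^d \int _0^t V_j \left( X^{i,n}_s, \mu ^n_s \right) \, dB^{i,j}_s, \qquad \mu ^n_s := \frac{1}{n} \sum _{k=1}^n \delta _{X^{k,n}_s}. \end{aligned}$$

As \(n \rightarrow \infty \), the random measure \(\mu ^n_s\) is replaced by the deterministic law \(\left[ X^{\theta }_s \right] \), and each fixed particle converges to an independent copy of the solution of (1.1).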

As MVSDEs can be interpreted as limiting equations for large systems, they are widely used as models in statistical physics [7, 31] as well as in the study of large-scale social interactions within the theory of mean-field games [10, 11, 19, 20, 26,27,28]. Recently, these equations have also appeared in the mathematical finance literature in the specification and calibration of multi-factor stochastic volatility and hybrid models [5, 17].

In this paper, we develop several new integration by parts formulae for solutions of MVSDEs. In turn, these formulae enable us to use MVSDEs to define the solution of a class of partial differential equations of the form

$$\begin{aligned} \begin{array}{r@{\qquad }l} \left( \partial _t - \mathcal {L} \right) U(t,x,[\theta ]) = 0 &{}\text { for } (t,x,[\theta ]) \in (0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\\ U(0,x,[\theta ]) = g(x,[\theta ]) &{}\text { for } (x,[\theta ]) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N), \end{array} \end{aligned}$$
(1.2)

where \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) and the operator \(\mathcal {L}\) acts on sufficiently smooth functions \(F:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) and is defined by

$$\begin{aligned} \mathcal {L} F(x,[\theta ]) =&\sum _{i=1}^N V_0^i(x,[\theta ]) \, \partial _{x_i} F(x,[\theta ]) + \frac{1}{2} \sum _{i,j=1}^N [\sigma \sigma ^{\top } (x,[\theta ])]_{i,j} \, \partial _{x_i} \partial _{x_j} F(x,[\theta ])\\&+ {\mathbb {E}}\left[ \sum _{i=1}^N V_0^i(\theta ,[\theta ]) \, \partial _{\mu } F(x,[\theta ], \theta )_i \right. \\&\left. +\, \frac{1}{2} \sum _{i,j=1}^N [\sigma \sigma ^{\top } (\theta ,[\theta ])]_{i,j} \, \partial _{v_j} \partial _{\mu } F(x,[\theta ],\theta )_i \right] , \end{aligned}$$

where \(\sigma (z, \mu )\) is the \(N \times d\) matrix with columns \(V_1(z,\mu ), \ldots , V_d(z,\mu )\). The last two terms in the description of \(\mathcal {L} F(x,[\theta ])\) involve the derivative with respect to the measure variable as introduced by Lions in his seminal lectures at the Collège de France (see [9] for details), which we describe in Sect. 2.3. Papers [3, 4, 22] present further details of the relevance of the class of nonlinear partial differential equations (1.2).
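As a sanity check (our remark, not drawn from [3, 4, 22]): if F carries no measure dependence, so that \(F(x,[\theta ]) = F(x)\), then \(\partial _{\mu }F\) vanishes, the two expectation terms disappear and \(\mathcal {L}\) reduces to the classical second-order operator

$$\begin{aligned} \mathcal {L} F(x) = \sum _{i=1}^N V_0^i(x,[\theta ]) \, \partial _{x_i} F(x) + \frac{1}{2} \sum _{i,j=1}^N [\sigma \sigma ^{\top } (x,[\theta ])]_{i,j} \, \partial _{x_i} \partial _{x_j} F(x), \end{aligned}$$

i.e. the generator of the de-coupled diffusion (1.3) with the measure argument frozen.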

For linear parabolic PDEs on \([0,T] \times {\mathbb {R}}^N\), it is well known from classical works such as [16, 18] that, under a uniform ellipticity or Hörmander condition, there exist classical solutions even when the initial condition is not differentiable. In this paper, we explore to what extent the same is true for the PDE (1.2) under a uniform ellipticity assumption. That is, we consider the question of whether the PDE (1.2) has classical solutions when the initial condition g is not differentiable. For this we exploit a probabilistic representation for the classical solution of the PDE (1.2) given in terms of a functional of \(X^{\theta }_t\) and of the solution of the following de-coupled equation:

$$\begin{aligned} X^{x, [\theta ]}_t = x + \int _0^t V_0 \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, ds + \sum _{i=1}^d \int _0^t V_i \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, dB^i_{s}. \end{aligned}$$
(1.3)

We say that this equation is de-coupled as the law appearing in the coefficients is \(\left[ X^{\theta }_s \right] \) (the law of the solution of Eq. (1.1)), rather than the law of \(X^{x, [\theta ]}_s \), the solution to Eq. (1.3) itself. In the following, we show that, for a certain class of functions \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) (not necessarily smooth), the function

$$\begin{aligned} U(t,x,[\theta ]) := {\mathbb {E}}\, g \left( X^{x, [\theta ]}_t,[ X^{\theta }_t ]\right) \quad \mathrm {for} \, (t,x,[\theta ]) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\qquad \end{aligned}$$
(1.4)

solves the PDE (1.2). A similar result has been proved in [8, 12] under different conditions than ours and for an initial condition g that is sufficiently smooth.

For the stochastic flow \((X_t^x)_{t \ge 0}\) solving a classical SDE with initial condition \(x \in {\mathbb {R}}^N\), the standard strategy to show that the function \(u(t,x):={\mathbb {E}}\, g(X_t^x)\) is a classical solution of a linear PDE is to show, using the flow property of \(X_t^x\), that for \(h>0\), \(u(t+h,x)={\mathbb {E}}\, [u(t,X_h^{x})]\), and then to show that u is regular enough to apply Itô’s formula to \( u(t,X_h^{x})\). Expanding \( u(t,X_h^{x})\) using Itô’s formula and sending \(h \rightarrow 0\) shows that u does indeed solve the related PDE. For MVSDEs, one can develop a similar approach. In this setting, to expand a function depending not only on the process \((X^{x, [\theta ]}_t )_{t \ge 0}\) (where we can use the usual Itô formula) but also on the flow of measures \(\left( [X^{\theta }_t] \right) _{t \ge 0}\), we require an extension of the classical chain rule and we use here the chain rule proved in [12]. Our main focus is therefore to provide conditions under which U, defined in (1.4), is regular enough to apply the Itô formula and the extended chain rule.
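In the McKean–Vlasov setting, the analogue of the identity \(u(t+h,x)={\mathbb {E}}\, [u(t,X_h^{x})]\) reads, informally (it follows from the flow property recorded in Proposition 2.1 below, together with the tower property):

$$\begin{aligned} U(t+h,x,[\theta ]) = {\mathbb {E}}\left[ U \left( t, X^{x, [\theta ]}_h, \left[ X^{\theta }_h \right] \right) \right] , \qquad h>0, \end{aligned}$$

and it is this expression that one expands using the Itô formula and the extended chain rule.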

For a general Lipschitz continuous function \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\), we cannot expect the mapping \((x,[\theta ]) \mapsto {\mathbb {E}}[ \, g (X^{x, [\theta ]}_t,[ X^{\theta }_t ]) ]\) to be differentiable (for a fixed \(t>0\)), even when the coefficients in the equation for \(X^{x, [\theta ]}_t\) are smooth and uniformly elliptic. This is shown in Example 5.1. We are, however, able to identify a class of non-smooth initial conditions (including interesting examples, see Example 5.4) for which we can develop integration by parts formulae and establish sufficient smoothness of the associated function U. For g in this class, we use Malliavin calculus to show that \((x,[\theta ]) \mapsto {\mathbb {E}}[ \, g (X^{x, [\theta ]}_t,[ X^{\theta }_t ]) ]\) is differentiable. The differentiability in the measure direction is somewhat surprising since there is no noise added in the measure direction, and this smoothing property seems to be new. We give further details of our results in the next section.

1.1 Outline and main results

In Sect. 2, we introduce the notation and the basic results related to MVSDEs. In particular, when describing the smoothness of the coefficients in Eqs. (1.1) and (1.3) in our assumptions, we use the notation \({\mathcal {C}}^{k,k}_{b,\text {Lip}}({\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) for functions k-times differentiable with bounded, Lipschitz derivatives, which we define precisely in Sect. 2.3. Similarly, we use the notation \({\mathbb {K}}^q_r(E,M)\) to denote processes taking values in a Hilbert space E which are smooth in both the Euclidean and measure variables as well as in the Malliavin sense, where M denotes how many times the process can be differentiated. This class, which we call the class of Kusuoka–Stroock processes, is introduced in Sect. 2.4. It generalizes the class of processes introduced in [25] and analysed in [14].

In Sect. 3, we prove some results on the differentiability of \(X^{x, [\theta ]}_t\), the solution to Eq. (1.3), with respect to the parameters \((x,[\theta ])\). The main result of Sect. 3 is Theorem 3.2, which says that if \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\), then \((t, x, [\theta ]) \mapsto X_t^{x, [\theta ]} \in {\mathbb {K}}^1_{0}({\mathbb {R}}^N,k)\). This is proved in Appendix 6.2. We then introduce the uniform ellipticity assumption (UE) in Assumption 3.3, used throughout the rest of the paper. The rest of the section details several corollaries, where we analyse the processes that will play the rôle of Malliavin weights in the integration by parts formulas and identify the class \({\mathbb {K}}^q_r(E,M)\) of Kusuoka–Stroock processes to which they belong.

With the main technical results complete, in Sect. 4 we develop integration by parts formulas for derivatives of \((x,[\theta ]) \mapsto {\mathbb {E}}f(X^{x, [\theta ]}_t)\) under (UE) and the assumption that \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,\text {Lip}}({\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). We do this for derivatives with respect to x and with respect to \(\mu \). In particular, we show (see Propositions 4.1 and 4.2) that, for \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\), \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\) and \(|\alpha | + |\beta | \le [n \wedge (k-2)]\), we have

$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ (\partial ^{\beta } f)\left( X^{x, [\theta ]}_t\right) \, \Psi (t,x, [\theta ])\right]&= t^{-(|\alpha |+ |\beta |)/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t\right) \, I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) (t,x, [\theta ])\right] ,\\ \partial ^{\beta }_{\mu } \, {\mathbb {E}}\left[ (\partial ^{\alpha }f)\left( X^{x, [\theta ]}_t\right) \, \Psi (t,x, [\theta ])\right] ({\varvec{v}})&= t^{-(|\alpha |+|\beta |)/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t\right) \, {\mathcal {I}}^3_{\beta }\left( I^2_{\alpha }(\Psi )\right) (t,x, [\theta ], {\varvec{v}})\right] , \end{aligned}$$

where \(I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) \) and \({\mathcal {I}}^3_{\beta }\left( I^2_{\alpha }(\Psi )\right) \) are defined in Sect. 4.1 and satisfy \(I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) \in {\mathbb {K}}_r^{q+2|\alpha |+3|\beta |}({\mathbb {R}}, m)\) and \({\mathcal {I}}^3_{\beta }\left( I^2_{\alpha }(\Psi )\right) \in {\mathbb {K}}_r^{q+4|\beta |+3|\alpha |}({\mathbb {R}}, m),\) where \(m=[n \wedge (k-2)]-|\alpha |-|\beta |\). We also consider integration by parts formulas for derivatives of the function \( x \mapsto {\mathbb {E}}f(X_t^{x, \delta _x})\) (see Theorem 4.4).
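For instance (a special case of the first formula, taking the constant process \(\Psi \equiv 1\), which belongs to \({\mathbb {K}}^0_0({\mathbb {R}},n)\) for every n, \(\beta \) the empty multi-index and \(\alpha =(i)\)):

$$\begin{aligned} \partial _{x_i} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t\right) \right] = t^{-1/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t\right) \, I^3_{(i)}(1)(t,x, [\theta ])\right] , \end{aligned}$$

so the spatial derivative of the expectation is expressed against a Malliavin weight carrying the singularity \(t^{-1/2}\).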

In Sect. 5, we return our attention to the PDE (1.2). In Definition 5.3, we introduce the class \((\mathbf {IC})\) of non-differentiable initial conditions g for which we are able to prove that \((x,[\theta ]) \mapsto {\mathbb {E}}[ \, g (X^{x, [\theta ]}_t,[ X^{\theta }_t ]) ]\) is differentiable. We do this by extending the integration by parts formulas of Sect. 4 to cover this class. Then, for g in this class, assuming uniform ellipticity and coefficients \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\) (possibly bounded, depending on the exact form of g), we are able to prove the existence and uniqueness of solutions to the PDE (1.2). In particular, we show (see Theorem 5.8) that the function U, defined in (1.4), is a classical solution of the PDE (1.2). Moreover, U is unique among all classical solutions satisfying the polynomial growth condition \(\left| U(t,x,[\theta ])\right| \le C (1+|x|+\Vert \theta \Vert _2)^q\) for some \(q>0\) and all \((t,x,[\theta ]) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\).

Finally, in Sect. 6, we apply the integration by parts formulae to the study of the density function of \(X_t^{x,\delta _x}\). We study the smoothness of the density function and obtain estimates on its derivatives. The main result (see Theorem 6.1) states that, under suitable conditions, \(X_t^{x,\delta _x}\) has a density p(t, x, z) such that \((x,z) \mapsto p(t,x, z)\) is differentiable a number of times depending on the regularity of the coefficients. Indeed, when these derivatives exist, there exists a constant C such that

$$\begin{aligned} \left| \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z) \right| \le C \, (1+ |x|)^{\mu } \, t^{- \nu } , \end{aligned}$$

where \( \mu = 4|\alpha |+ 3 |\beta | + 3 N\) and \( \nu = \textstyle \frac{1}{2} (N + | \alpha | + | \beta | )\). Moreover, if \(V_0, \ldots , V_d\) are bounded, then the following Gaussian-type estimate holds:

$$\begin{aligned} \left| \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z) \right| \le C \, t^{- \nu } \, \exp \left( - C \, \frac{|z-x|^2}{t} \right) . \end{aligned}$$

1.2 Comparison with other works

As mentioned previously, the PDE (1.2) is also studied in [8] and [12]. Let us explain the relationship between the results in those works and the results in this paper.

In [8], the authors prove that derivatives of \((x,[\theta ]) \mapsto X^{x, [\theta ]}_t\) exist up to second order. We also prove this as part of Theorem 3.2, although we extend this to derivatives of any order (assuming sufficient smoothness of the coefficients). In [8], the hypotheses on the continuity and differentiability of the coefficients are the same as ours. The authors then consider initial conditions \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) for which the derivatives up to second order exist and are bounded, which they use to prove regularity of U. Since g is sufficiently smooth, they do not need to impose any non-degeneracy condition on the coefficients. In our work, we remove the constraint on the smoothness of g at the expense of assuming a non-degeneracy condition on the coefficients of the MVSDEs. In this sense, their results are complementary to ours.

The paper [12] has a completely different scope. The authors are interested in a nonlinear PDE on \([0,T]\times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\), called the master equation in reference to the theory of mean-field games. The PDE we consider is a special case of this, although again they assume that the function g is twice differentiable. Their strategy for proving regularity of U is also different. In their setting, the authors prove that derivatives of the lifted flow \({\mathbb {R}}^N \times L^2(\Omega ) \ni (x,\theta ) \mapsto X^{x, [\theta ]}_t\) exist up to second order (with derivatives in the variable \(\theta \) being Fréchet derivatives on the Hilbert space \(L^2(\Omega )\)), where \(X^{x, [\theta ]}_t\) is the forward component in a coupled forward-backward system. They use this result, along with sufficient smoothness of g, to prove that the lifted function \(\widetilde{U}\), defined on \([0,T]\times {\mathbb {R}}^N \times L^2(\Omega )\), is sufficiently regular in the Fréchet sense. They then prove a result which allows them to recover regularity of the second order derivatives of U from properties of the second order Fréchet derivatives of \(\widetilde{U}\). Using their strategy, the authors of [12] are able to impose hypotheses which only involve conditions on derivatives of the coefficients \(\partial _{\mu }V_i(x,[\theta ],v)\) evaluated at \(v=\theta \in L^2(\Omega )\).

This is in contrast to our assumptions which impose conditions on \(\partial _{\mu }V_i(x,[\theta ],v)\) for all \((x,[\theta ],v) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\).

More recently, two other works [2, 13] give some partial results related to the smoothness of the solutions of McKean–Vlasov SDEs. In [2], the Malliavin differentiability of McKean–Vlasov SDEs is studied using a stochastic perturbation approach of Bismut type. In [13], the strong well-posedness of McKean–Vlasov SDEs is proven when the diffusion matrix is Lipschitz with respect to both the space and measure arguments and uniformly elliptic, and the drift is bounded in space and Hölder continuous in the measure direction. Both works restrict themselves to the particular case where the dependence of the coefficients on the law of the solution is of scalar type. We obtain some related results in [15], under the same scalar dependence restriction, but under the more general Hörmander condition.

We base our results on the use of Malliavin calculus techniques. The new integration by parts formulae and, more importantly, the identification of the processes appearing in these formulae as Kusuoka–Stroock processes are key to our analysis. The use of Kusuoka–Stroock processes is a very versatile tool. Not only does it enable us to identify the solution of the PDE (1.2), but it also allows us to study the density of \(X_t^{x,\delta _x}\) and obtain both polynomial and Gaussian local bounds for its derivatives. We are not aware of similar bounds obtained elsewhere in the literature for densities of solutions of MVSDEs.

2 Preliminaries

2.1 Notation and basic setup

We work on a filtered probability space \((\Omega , {\mathcal {F}}, \mathbb {F}= \{{\mathcal {F}}_t\}_{t \in [0,T]} , {\mathbb {P}})\) which supports an \(\mathbb {F}\)-adapted d-dimensional Brownian motion, \(B=(B^1, \ldots , B^d)\). We also often write \(B^0_s=s\) for \(s \in [0,T]\). We assume that there is a sufficiently rich sub-\(\sigma \)-algebra \(\mathcal {G} \subset {\mathcal {F}}\), independent of B, such that every measure \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\) is the law of some random variable in \(L^2((\Omega ,\mathcal {G}, {\mathbb {P}}) ;{\mathbb {R}}^N)\). Then, we take \(\mathbb {F}\) to be the filtration generated by B, completed and augmented by \(\mathcal {G}\). This is to ensure that, in the sequel, when we consider processes starting from arbitrary initial conditions \( \theta \in L^2(\Omega ;{\mathbb {R}}^N)\), these processes will be \(\mathbb {F}\)-adapted. We denote the \(L^p\) norm on \((\Omega , {\mathcal {F}},{\mathbb {P}})\) by \(\Vert \cdot \Vert _p \) and we also introduce the space \({\mathcal {S}}^p_T\) of continuous \(\mathbb {F}\)-adapted processes \(\varphi \) on [0, T] satisfying

$$\begin{aligned}&\left\| \varphi \right\| _{{\mathcal {S}}^p_T} = \left( {\mathbb {E}}\sup _{s \in [0,T]} | \varphi _s|^p \right) ^{1/p}<\infty . \end{aligned}$$

In addition to the probability space \((\Omega , {\mathcal {F}}, {\mathbb {P}})\), we will also make use of other probability spaces \(( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}})\) and \((\widehat{\Omega }, \widehat{{\mathcal {F}}}, \widehat{{\mathbb {P}}})\) when performing the lifting operation associated with the Lions derivative. We assume that these satisfy the same conditions as \((\Omega , {\mathcal {F}}, {\mathbb {P}})\). We denote the \(L^p\) norm on each of these spaces by \(\Vert \cdot \Vert _p \) unless we want to emphasise which space we are working on, in which case we use \(\Vert \cdot \Vert _{L^p(\widetilde{\Omega })} \) etc. We use \(| \cdot |\) to denote the Euclidean norm. Throughout we denote by \(\alpha \) and \(\beta \) multi-indices on \(\{1, \ldots , N\}\) including the empty multi-index. We denote by \(Id_N\) the \(N \times N\) identity matrix. We also use some terminology from Malliavin calculus: we denote by \(\mathcal {\mathbf {D}}\) the Malliavin derivative and by \(\delta \) its adjoint, the Skorohod integral. We outline very briefly the basic operators of Malliavin calculus in Appendix 6.1.

2.2 Basic results on McKean–Vlasov SDEs

We study McKean–Vlasov SDEs with general Lipschitz interaction. The coefficients are functions from \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) to \({\mathbb {R}}^N\), where \({\mathcal {P}}_2({\mathbb {R}}^N)\) denotes the space of probability measures on \({\mathbb {R}}^N\) with finite second moment. We equip this space with the 2-Wasserstein metric, \(W_2\). For a general metric space (M, d), we define the 2-Wasserstein metric on \({\mathcal {P}}_2(M)\) by

$$\begin{aligned} W_2(\mu , \nu ) = \inf _{\Pi \in {\mathcal {P}}_{\mu ,\nu }} \left( \int _{M \times M} d(x,y)^2 \, \Pi (dx,dy) \right) ^{1/2}, \end{aligned}$$

where \({\mathcal {P}}_{\mu ,\nu }\) denotes the set of measures on \(M \times M\) with marginals \(\mu \) and \(\nu \). When we refer to the Lipschitz property of the coefficients, it is with respect to the product distance on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\).
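As an elementary illustration (our own example): for Dirac masses \(\delta _x, \delta _y \in {\mathcal {P}}_2(M)\), the only measure on \(M \times M\) with these marginals is \(\delta _{(x,y)}\), so

$$\begin{aligned} W_2(\delta _x, \delta _y) = \left( \int _{M \times M} d(u,w)^2 \, \delta _{(x,y)}(du,dw) \right) ^{1/2} = d(x,y). \end{aligned}$$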

Proposition 2.1

(Existence, uniqueness and \(L^p\) estimates) Suppose that \(\theta \in L^2( \Omega )\) and \(V_0, \ldots , V_d\) are uniformly Lipschitz continuous. Then there exists a unique strong solution to the equation

$$\begin{aligned} X^{\theta }_t = \theta + \sum _{i=0}^d \int _0^t V_i \left( X^{\theta }_s, \left[ X^{\theta }_s \right] \right) \, dB^i_s , \end{aligned}$$
(2.1)

and there exists a constant \(C=C(T)\), such that

$$\begin{aligned} \Vert X^{\theta } \Vert _{ {\mathcal {S}}^2_T} \le C \, \left( 1+ \Vert \theta \Vert _2 \right) . \end{aligned}$$
(2.2)

Similarly, there exists a unique strong solution to the equation

$$\begin{aligned} X^{x, [\theta ]}_t = x + \sum _{i=0}^d \int _0^t V_i \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, dB^i_s , \end{aligned}$$
(2.3)

and, for all \(p \ge 1\), there exists a constant \(C=C(p,T)\) such that

$$\begin{aligned} \Vert X^{x, [\theta ]}\Vert _{{\mathcal {S}}^p_T} \le C \, \left( 1+ |x|+ \Vert \theta \Vert _2 \right) . \end{aligned}$$
(2.4)

Moreover, for all \((x, \theta , t), (x^{\prime }, \theta ^{\prime }, t^{\prime }) \in {\mathbb {R}}^N \times L^2(\Omega ) \times [0,T]\) and \(p \ge 1\),

$$\begin{aligned} \left\| X^{x, [\theta ]} - X^{x^{\prime }, [\theta ^{\prime }]} \right\| _{{\mathcal {S}}^p_T} \le C \, \left( |x-x^{\prime }| + \Vert \theta - \theta ^{\prime }\Vert _2 \right) , \end{aligned}$$
(2.5)

and

$$\begin{aligned} \left\| X^{x, [\theta ]}_t - X^{x, [\theta ]}_{t^{\prime }} \right\| _p \le C \, (1+|x|+\Vert \theta \Vert _2) \, |t-t^{\prime }|^{\frac{1}{2}} . \end{aligned}$$
(2.6)

Finally, we have the following flow property: for any \(t \in [0,T) \), \(s \in (0,T-t]\), \(x \in {\mathbb {R}}^N\) and \(\theta \in L^2(\Omega )\),

$$\begin{aligned} \left( X_{t+s}^{x,[\theta ]}, X_{t+s}^{\theta } \right) = \left( X_s^{X_{t}^{x,[\theta ]},[X_{t}^{\theta }]}, X_s^{X_t^{\theta }} \right) \quad {\mathbb {P}}{\text {-}}a.s. \end{aligned}$$

Proof

The proof is standard and we leave it to the reader. We note that existence and uniqueness of a solution to Eq. (2.1) was proved in [34] for first-order McKean–Vlasov interaction. The case of a generic Lipschitz McKean–Vlasov interaction is covered in [21].\(\square \)

2.3 Differentiation in \({\mathcal {P}}_2({\mathbb {R}}^N)\)

In Sect. 5, we study an SDE with a general McKean–Vlasov dependence. We will be interested in differentiability of the stochastic flow associated to this SDE and an associated PDE on \([0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\). We thus need a notion of derivative for a function on a space of probability measures. The notion of differentiability we use was introduced by P.-L. Lions in his lectures at the Collège de France, recorded in a set of notes by Cardaliaguet [9]. The underlying idea is very well exposed in [11], which we draw on here.

Lions’ notion of differentiability is based on the lifting of functions \(U: {\mathcal {P}}_2({\mathbb {R}}^N)\rightarrow {\mathbb {R}}\) into functions \(\tilde{U}\) defined on the Hilbert space \(L^2(\tilde{\Omega };{\mathbb {R}}^N)\) over some probability space \((\tilde{\Omega },\tilde{\mathcal {F}},\tilde{\mathbb {P}})\), \(\tilde{\Omega }\) being a Polish space and \(\tilde{\mathbb {P}}\) an atomless measure, by setting \(\tilde{U}({\tilde{X}})=U([{\tilde{X}}])\) for \({\tilde{X}}\in L^2(\tilde{\Omega };{\mathbb {R}}^N)\). Then, a function U is said to be differentiable at \(\mu _0\in {\mathcal {P}}_2({\mathbb {R}}^N)\) if there exists a random variable \(\tilde{X}_0\) with law \(\mu _0\) such that the lifted function \(\tilde{U}\) is Fréchet differentiable at \(\tilde{X}_0\). Whenever this is the case, the Fréchet derivative of \(\tilde{U}\) at \(\tilde{X}_0\) can be viewed as an element of \(L^2(\tilde{\Omega };{\mathbb {R}}^N)\) by identifying \(L^2(\tilde{\Omega };{\mathbb {R}}^N)\) and its dual. The derivative in a direction \(\tilde{\gamma }\in L^2(\tilde{\Omega };{\mathbb {R}}^N)\) is given by

$$\begin{aligned} D \tilde{U}(\tilde{X}_0) (\tilde{\gamma }) = \langle D \tilde{U} (\tilde{X}_0), \tilde{\gamma }\rangle _{L^2(\tilde{\Omega };{\mathbb {R}}^N)} = \widetilde{{\mathbb {E}}} \left[ D \tilde{U} (\tilde{X}_0) \cdot \tilde{\gamma }\right] . \end{aligned}$$

It then turns out (see Section 6 in [9] for details) that the distribution of \(D \tilde{U} (\tilde{X}_0) \in L^2(\tilde{\Omega };{\mathbb {R}}^N)\) depends only upon the law \(\mu _0\) and not upon the particular random variable \( \tilde{X}_0\) having distribution \(\mu _0\). It is shown in [9] that, as a random variable, \(D \tilde{U} (\tilde{X}_0)\) is of the form \( g_{\mu _0}( \tilde{X}_0)\), where \( g_{\mu _0} : {\mathbb {R}}^N \rightarrow {\mathbb {R}}^N\) is a deterministic measurable function which is uniquely defined \(\mu _0\)-almost everywhere on \({\mathbb {R}}^N\) and is square-integrable with respect to the measure \(\mu _0\). We call \(\partial _{\mu }U(\mu _0):=g_{\mu _0}\) the derivative of U at \(\mu _0\). We use the notation \(\partial _{\mu } U(\mu _{0}, \cdot ) : {\mathbb {R}}^N \ni v \mapsto \partial _{\mu } U(\mu _{0},v) \in {\mathbb {R}}^N\), which satisfies, by definition,

$$\begin{aligned} D \tilde{U}(\tilde{X}_0) = g_{\mu _0}(\tilde{X}_0) =: \partial _{\mu }U(\mu _0, \tilde{X}_0). \end{aligned}$$

This holds for any random variable \(\tilde{X}_{0}\) with distribution \(\mu _0\), irrespective of the probability space on which it is defined.

In the sequel, we will consider functions which are differentiable globally on \( {\mathcal {P}}_2({\mathbb {R}}^N)\). Moreover, we will consider functions for which, for each \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\), there exists a version of the derivative \(\partial _{\mu }U(\mu )\) which is jointly continuous as a function

$$\begin{aligned} {\mathcal {P}}_2({\mathbb {R}}^N ) \times {\mathbb {R}}^N \ni (\mu ,v) \mapsto \partial _{\mu }U(\mu ,v) \in {\mathbb {R}}^N. \end{aligned}$$

In this case, such a version is unique: for each \(\theta \in L^2(\Omega ; {\mathbb {R}}^N)\), \(\partial _{\mu }U([\theta ],v)\) is defined \([\theta ](dv)\)-a.e.; taking a Gaussian random variable G independent of \(\theta \) and \(\epsilon >0\), \(\partial _{\mu }U([\theta +\epsilon G],v)\) is defined (dv)-a.e., and letting \(\epsilon \rightarrow 0\) and using the continuity of \(\partial _{\mu }U\) identifies \(\partial _{\mu }U([\theta ],v)\) uniquely. We show how this definition works in practice in Examples 2.5 and 2.6.
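To illustrate the definition (a standard computation, anticipating the scalar interaction of Example 2.5 below): let \(f(\mu ) := \int \phi \, d\mu \) with \(\phi \in {\mathcal {C}}^2_b({\mathbb {R}}^N;{\mathbb {R}})\). The lift is \(\tilde{f}({\tilde{X}}) = \widetilde{{\mathbb {E}}} [\phi ({\tilde{X}})]\), and a Taylor expansion gives, for \(\tilde{\gamma }\in L^2(\tilde{\Omega };{\mathbb {R}}^N)\),

$$\begin{aligned} \tilde{f}({\tilde{X}}+\tilde{\gamma }) - \tilde{f}({\tilde{X}}) = \widetilde{{\mathbb {E}}} \left[ \nabla \phi ({\tilde{X}}) \cdot \tilde{\gamma }\right] + O\left( \Vert \tilde{\gamma }\Vert _2^2 \right) , \end{aligned}$$

so that \(D\tilde{f}({\tilde{X}}) = \nabla \phi ({\tilde{X}})\) and hence \(\partial _{\mu }f(\mu ,v) = \nabla \phi (v)\), a deterministic function of v which is continuous in \((\mu ,v)\), as required.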

For a function \(f: {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\), we can straightforwardly apply the above discussion to each component of \(f=(f^1, \ldots , f^N)\). To extend to higher derivatives, we note that \(\partial _{\mu } f^i \) takes values in \({\mathbb {R}}^N\), so we denote its components by \( ( \partial _{\mu } f^i)_j : {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) for \(j=1, \ldots , N\) and, for a fixed \(v \in {\mathbb {R}}^N\), we can discuss again the differentiability of \( {\mathcal {P}}_2({\mathbb {R}}^N) \ni \mu \mapsto (\partial _{\mu } f^i)_j(\mu ,v) \in {\mathbb {R}}\). If the derivative of this function exists and there is a continuous version of

$$\begin{aligned} {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N \times {\mathbb {R}}^N \ni (\mu ,v_1,v_2) \mapsto \partial _{\mu }( \partial _{\mu } f^i)_j(\mu ,v_1,v_2) \in {\mathbb {R}}^N, \end{aligned}$$

then it is unique. It makes sense to use the multi-index notation \(\partial ^{(j,k)}_{\mu } f^i: = ( \partial _{\mu }( \partial _{\mu } f^i)_j)_k\). Similarly, for higher derivatives, if for each \((i_0, \ldots , i_n) \in \{1, \ldots , N\}^{n+1}\),

$$\begin{aligned} \underbrace{\partial _{\mu }(\partial _{\mu } \ldots (\partial _{\mu } }_{n \text { times}} f^{i_0})_{i_1} \ldots )_{i_n} \end{aligned}$$

exists, we denote it \(\partial ^{\alpha }_{\mu }f^{i_0}\) with \(\alpha = (i_1, \ldots , i_n)\). Now, each derivative in \(\mu \) is a function of an ‘extra’ variable, so \(\partial ^{\alpha }_{\mu }f^{i_0}: {\mathcal {P}}_2({\mathbb {R}}^N) \times ({\mathbb {R}}^N)^n \rightarrow {\mathbb {R}}\). We always denote these variables by \(v_1, \ldots , v_n\), so

$$\begin{aligned} {\mathcal {P}}_2({\mathbb {R}}^N) \times ({\mathbb {R}}^N)^n \ni (\mu , v_1, \ldots , v_n) \mapsto \partial ^{\alpha }_{\mu }f^{i_0}(\mu , v_1, \ldots , v_n) \in {\mathbb {R}}. \end{aligned}$$

When there is no possibility of confusion, we will abbreviate \((v_1, \ldots , v_n)\) to \({\varvec{v}}\), so that

$$\begin{aligned} \partial ^{\alpha }_{\mu }f^{i_0}(\mu , {\varvec{v}}) = \partial ^{\alpha }_{\mu }f^{i_0}(\mu , v_1, \ldots , v_n). \end{aligned}$$

For \({\varvec{v}}=(v_1, \ldots , v_n) \in ({\mathbb {R}}^N)^n\), we will denote

$$\begin{aligned} | {\varvec{v}}| := |v_1| + \cdots + |v_n|, \end{aligned}$$

with \(|\cdot |\) the Euclidean norm on \({\mathbb {R}}^N\). It then makes sense to discuss derivatives of the function \(\partial ^{\alpha }_{\mu }f^{i_0}\) with respect to the variables \(v_1, \ldots , v_n\). If, for some \(j \in \{1, \ldots , n\}\) and all \((\mu , v_1, \ldots ,v_{j-1}, v_{j+1}, \ldots , v_n) \in {\mathcal {P}}_2({\mathbb {R}}^N) \times ({\mathbb {R}}^N)^{n-1}\),

$$\begin{aligned} {\mathbb {R}}^N \ni v_j \mapsto \partial ^{\alpha }_{\mu }f^{i_0}(\mu , v_1, \ldots , v_n) \end{aligned}$$

is l-times continuously differentiable, we denote the derivatives \(\partial _{v_j}^{\beta _j}\partial ^{\alpha }_{\mu }f^{i_0}\), for \(\beta _j\) a multi-index on \(\{1, \ldots , N\}\) with \(|\beta _j| \le l\). Similar to the above, we will denote by \({\varvec{\beta }}\) the n-tuple of multi-indices \((\beta _1,\ldots , \beta _n)\). We also associate a length to \({\varvec{\beta }}\) by

$$\begin{aligned} |{\varvec{\beta }}|:=|\beta _1|+ \cdots +|\beta _n|, \end{aligned}$$

and denote \(\# {\varvec{\beta }}:=n\). Then, we denote by \({\mathcal {B}}_n\) the collection of all such \({\varvec{\beta }}\) with \(\# {\varvec{\beta }}=n\), and \({\mathcal {B}}:= \textstyle \cup _{n \ge 1} {\mathcal {B}}_n\). Again, to lighten notation, we will use

$$\begin{aligned} \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu }f^i (\mu , {\varvec{v}}):= \partial _{v_{n}}^{\beta _{n}} \ldots \partial ^{\beta _1}_{v_1} \partial ^{\alpha }_{\mu }f^i(\mu , v_1, \ldots , v_n). \end{aligned}$$

The coefficients in Eqs. (2.1) and (2.3) are of the type \(V_0, \ldots , V_d: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\), so depend on a Euclidean variable as well as a measure variable. Considering functions on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) raises a question about whether the order in which we take derivatives matters. A result from [8] says that derivatives commute when the mixed derivatives are Lipschitz continuous.

Lemma 2.2

(Lemma 4.1 in [8]) Let \(g: {\mathbb {R}}\times {\mathcal {P}}_2({\mathbb {R}}) \rightarrow {\mathbb {R}}\) and suppose that the derivative functions

$$\begin{aligned} {\mathbb {R}}\times {\mathcal {P}}_2({\mathbb {R}}) \times {\mathbb {R}}\ni (x, \mu , v ) \mapsto \left( \partial _{x} \partial _{\mu } g( x, \mu , v), \partial _{\mu } \partial _{x} g( x, \mu , v) \right) \in {\mathbb {R}}\times {\mathbb {R}}\end{aligned}$$

both exist and are Lipschitz continuous: i.e. there exists a constant \(C>0\) such that

$$\begin{aligned}&\left| \left( \partial _{x} \partial _{\mu } g, \partial _{\mu } \partial _{x} g \right) ( x, \mu , v) - \left( \partial _{x} \partial _{\mu } g, \partial _{\mu } \partial _{x} g \right) ( x^{\prime }, \mu ^{\prime }, v^{\prime }) \right| \\&\quad \le C \, \left( |x-x^{\prime }| + W_2(\mu ,\mu ^{\prime }) + |v-v^{\prime }| \right) . \end{aligned}$$

Then, the functions \(\partial _{x} \partial _{\mu } g\) and \(\partial _{\mu } \partial _{x} g\) are identical.

With this in mind, we can introduce the following definition.

Definition 2.3

(\({\mathcal {C}}^{n,n}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N)\))

  1. (a)

    Let \(V: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) with components \(V^1, \ldots , V^N: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\). We say that \(V \in {\mathcal {C}}^{1,1}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ;{\mathbb {R}}^N)\) if the following hold true: for each \(i=1, \ldots , N\), \(\partial _{\mu } V^i\) exists and \(\partial _xV\) exists. Moreover, assume that there exists a constant C such that for all \((x, \mu , v) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\)

    $$\begin{aligned} \left| \partial _x V^i(x,\mu ) \right| + \left| \partial _\mu V^i \left( x, \mu , v \right) \right| \le C. \end{aligned}$$

    In addition, suppose that \(\partial _{\mu }V^i\) and \(\partial _xV\) are Lipschitz in the sense that for all \((x, \mu , v), ( x^{\prime }, \mu ^{\prime }, v^{\prime } ) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\),

    $$\begin{aligned} \left| \partial _{\mu } V^i(x,\mu ,v) - \partial _{\mu } V^i(x^{\prime }, \mu ^{\prime },v^{\prime }) \right|&\le C \left( |x-x^{\prime }| + W_2(\mu , \mu ^{\prime }) + |v-v^{\prime }| \right) , \\ \left| \partial _{x} V(x,\mu ) - \partial _{x} V(x^{\prime }, \mu ^{\prime }) \right|&\le C \left( |x-x^{\prime }| + W_2(\mu , \mu ^{\prime }) \right) . \end{aligned}$$
  2. (b)

    We say that \(V \in {\mathcal {C}}^{n,n}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N) \) if the following hold true: for each \(i=1, \ldots , N\), and all multi-indices \(\alpha \) and \(\gamma \) on \(\{1, \ldots , N\}\) and all \({\varvec{\beta }}\in {\mathcal {B}}\) satisfying \(|\alpha | + |{\varvec{\beta }}| + |\gamma | \le n\), the derivatives

    $$\begin{aligned} \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu }V^i(x,\mu ,{\varvec{v}}), \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu }\partial ^{\gamma }_xV^i(x,\mu ,{\varvec{v}}), \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\gamma }_x\partial ^{\alpha }_{\mu }V^i(x,\mu ,{\varvec{v}}) \end{aligned}$$

    exist. Moreover, suppose that each of these derivatives is bounded and Lipschitz.

  3. (c)

    We say that \(h \in {\mathcal {C}}^n_{b,Lip}({\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N)\) if \(h:{\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\) does not depend on a Euclidean variable but otherwise satisfies the conditions in part (b).

Remark 2.4

  1. 1.

    For functions \(V:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\), we will also consider the lifting \(\tilde{V} : {\mathbb {R}}^N \times L^2( \Omega ) \rightarrow {\mathbb {R}}^N\). Then, for \(\xi \in L^2(\Omega )\), \(\tilde{V}(\xi , \xi )\) should be interpreted as \(\tilde{V}(\xi ( \omega ), \xi ) = V(\xi ( \omega ), [\xi ])\), with the first argument considered pointwise in \( \omega \) and the second depending on the random variable \(\xi \) only through its law.

  2. 2.

    From the bounds in Definition 2.3(a), we have the following simple consequences for the Fréchet derivative of the lifting \(\tilde{V}\) of V: for all \(x,x^{\prime } \in {\mathbb {R}}^N\) and \(\theta ,\theta ^{\prime }, \gamma ,\gamma ^{\prime } \in L^2(\Omega )\),

    $$\begin{aligned} \left| D \tilde{V}(x,\theta )(\gamma ) \right|&\le C \, \Vert \gamma \Vert _2\\ \left| D \tilde{V}(x,\theta )(\gamma ) - D \tilde{V}(x^{\prime },\theta ^{\prime })(\gamma ^{\prime }) \right|&\le C \left[ \Vert \gamma \Vert _2 \left( |x-x^{\prime }|+\Vert \theta - \theta ^{\prime }\Vert _2 \right) \right. \\&\qquad \left. +\, \Vert \gamma -\gamma ^{\prime }\Vert _2 \right] . \end{aligned}$$
  3. 3.

    Note that we cannot interchange the order of \(\partial _\mu \) and \(\partial _v\) in \( \partial _v \partial _{\mu }V(x,\mu ,v)\) since \(V(x,\mu )\) does not depend on v. However, if \(V \in {\mathcal {C}}^{n,n}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) ; {\mathbb {R}}^N) \) then for all \(\alpha , {\varvec{\beta }}, \gamma \) with \(|\alpha | + |{\varvec{\beta }}| + |\gamma | \le n\), we have that

    $$\begin{aligned} \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } V(x,\mu ,{\varvec{v}}) =\partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\gamma }_x \partial ^{\alpha }_{\mu } V(x,\mu ,{\varvec{v}}) = \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } \partial ^{\gamma }_x V(x,\mu ,{\varvec{v}}) \end{aligned}$$

    due to Lemma 2.2.

We now introduce some concrete examples of functions \(V: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^N\).

Example 2.5

(Scalar interaction) Take \(U \in {\mathcal {C}}^{k+1}_b({\mathbb {R}}^N \times {\mathbb {R}};{\mathbb {R}}^N)\), \(\phi \in {\mathcal {C}}^{k+1}_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\textstyle V(x,\mu ):=U(x, \int \phi d \mu )\).

Example 2.6

(First-order interaction) Take \(W \in {\mathcal {C}}^{k+1}_b({\mathbb {R}}^N \times {\mathbb {R}}^N;{\mathbb {R}}^N)\) and \( \textstyle V(x,\mu ):= \int W(x, \cdot )d \mu \).

Lemma 2.7

In both examples, \(V \in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\).

The proof is straightforward.
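For orientation, the first-order measure derivatives in these examples can be computed directly from the definition via the lifting (a sketch; here \(\partial _z\) denotes the derivative of U in its scalar second argument and \(\partial _2 W\) the derivative of W in its second variable):

$$\begin{aligned} \text {(Example 2.5)} \qquad \partial _{\mu } V(x,\mu ,v)&= \partial _z U\left( x, \textstyle \int \phi \, d\mu \right) \, \nabla \phi (v), \\ \text {(Example 2.6)} \qquad \partial _{\mu } V(x,\mu ,v)&= (\partial _2 W)(x,v). \end{aligned}$$

When \(U, \phi , W \in {\mathcal {C}}^{k+1}_b\), these derivatives (and, iterating, the higher-order ones) are bounded and Lipschitz, consistent with Lemma 2.7.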

2.4 Kusuoka–Stroock processes on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\)

In Sect. 4, we develop integration by parts formulas modelled on those developed in the works of Kusuoka [24] and Stroock [25] for solutions of classical SDEs. These integration by parts formulas take the form

$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ]) \right]&= {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi _{\alpha }(t,x, [\theta ]) \right] ,\\ \partial ^{\beta }_{\mu } \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ]) \right] ({\varvec{v}})&= {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi _{\beta }(t,x, [\theta ], {\varvec{v}}) \right] \end{aligned}$$

for processes \(\Psi , \Psi _{\alpha },\Psi _{\beta }\) belonging to a specific class. We work with a class of processes similar to one introduced in [25], which we call the class of Kusuoka–Stroock processes.

Definition 2.8

(Kusuoka–Stroock processes on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\)) Let E be a separable Hilbert space and let \(r \in {\mathbb {R}}\), \(q,M \in \mathbb {N}\). We denote by \({\mathbb {K}}^q_{r}(E,M)\) the set of processes \(\Psi : [0,T] \times \mathbb {R}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow \mathbb {D}^{M,\infty }(E)\) satisfying the following:

  1. 1.

    For any multi-indices \(\alpha , {\varvec{\beta }}\), \(\gamma \) satisfying \(\vert \alpha \vert + |{\varvec{\beta }}| +|\gamma | \le M\), the function

    $$\begin{aligned}{}[0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \ni (t,x , [\theta ]) \mapsto \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } \Psi (t,x,[\theta ], {\varvec{v}}) \in L^p(\Omega ) \end{aligned}$$

    exists and is continuous for all \(p \ge 1\).

  2. 2.

    For any \(p \ge 1\) and \(m \in {\mathbb {N}}\) with \(|\alpha | + |{\varvec{\beta }}| + |\gamma | +m \le M\), we have

    $$\begin{aligned} \sup _{ {\varvec{v}}\in ({\mathbb {R}}^N)^{\# {\varvec{\beta }}}} \sup _{t \in (0,T]} t^{-r/2}&\left\| \partial ^{\gamma }_x \partial ^{{\varvec{\beta }}}_{{\varvec{v}}} \partial ^{\alpha }_{\mu } \Psi (t,x,[\theta ], {\varvec{v}}) \right\| _{ {\mathbb {D}}^{m,p}(E) } \le C \, \left( 1 + |x| + \Vert \theta \Vert _2 \right) ^q. \end{aligned}$$
    (2.7)
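As a trivial illustration of the definition (our own example): the deterministic process \(\Psi (t,x,[\theta ]):=\sqrt{t}\) has vanishing derivatives in all variables and vanishing Malliavin derivatives, so that, for any m, p,

$$\begin{aligned} \sup _{t \in (0,T]} t^{-1/2} \left\| \sqrt{t} \right\| _{{\mathbb {D}}^{m,p}({\mathbb {R}})} = 1 \le \left( 1 + |x| + \Vert \theta \Vert _2 \right) ^0, \end{aligned}$$

and hence \(\sqrt{t} \in {\mathbb {K}}^0_1({\mathbb {R}},M)\) for every M. A genuinely stochastic example is \(X^{x, [\theta ]}_t\) itself, which belongs to \({\mathbb {K}}^1_0({\mathbb {R}}^N,k)\) by Theorem 3.2 below.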

Remark 2.9

This definition is different to that in [25] in the following ways:

  1. 1.

    The processes depend on a parameter \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\).

  2. 2.

    We keep track of polynomial growth in x of the \({\mathbb {D}}^{m,p}\)-norm through a parameter \(q>0\) instead of requiring it to be uniformly bounded.

  3. 3.

    We require continuity in \(L^p(\Omega )\) rather than almost surely.

Remark 2.10

  1. 1.

    The number M denotes how many times the Kusuoka–Stroock process can be differentiated; q measures the polynomial growth of the \({\mathbb {D}}^{m,p}\)-norm of the process in \((x,[\theta ])\), and r measures the growth in t.

  2. 2.

    In the definition, we are able to stipulate that the \({\mathbb {D}}^{m,p}\)-norms of all the derivatives are uniformly bounded w.r.t. \({\varvec{v}}\) because, in the sequel, the only dependence on \({\varvec{v}}\) in any Kusuoka–Stroock process will come from \(\partial _{\mu } X^{x, [\theta ]}_t(v)\). In Lemma 6.7, \(\partial _{\mu } X^{x, [\theta ]}_t(v)\) is shown to be bounded w.r.t. v, and this carries over to the \({\mathbb {D}}^{m,p}\)-norm.

To analyse the density of solutions of the MVSDE (2.1) started from a fixed initial point in \({\mathbb {R}}^N\), it is useful to have notation for Kusuoka–Stroock processes which do not depend on a measure \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\). We denote this class by \({\mathcal {K}}_r^q({\mathbb {R}},M)\). The following lemma says that if we take a Kusuoka–Stroock process on \({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) and evaluate its measure argument at a Dirac mass, then this forms a Kusuoka–Stroock process on \({\mathbb {R}}^N \). Its proof is straightforward.

Lemma 2.11

If \(\Psi \in {\mathbb {K}}_r^q({\mathbb {R}}, M)\) and we define \(\Phi (t,x):=\Psi (t,x,\delta _x)\), then \(\Phi \in {\mathcal {K}}_r^q({\mathbb {R}},M)\).

3 Regularity of solutions of McKean–Vlasov SDEs

This section contains some basic results about solutions of the equations involved, their integrability and their differentiability with respect to parameters. Existence and uniqueness of solutions to (1.3) is covered in Sect. 2.2.

Proposition 3.1

(First-order derivatives) Suppose that \(V_0, \ldots , V_d\in {\mathcal {C}}^{1,1}_{b,\text {Lip}}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). Then the following hold:

  1. (a)

    There exists a modification of \(X^{x, [\theta ]}\) such that, for all \(t \in [0,T]\), the map \(x \mapsto X_t^{x, [\theta ]}\) is \({\mathbb {P}}\)-a.s. differentiable. We denote the derivative by \(\partial _x X^{x, [\theta ]}\) and note that it solves the following SDE:

    $$\begin{aligned} \partial _x X^{x, [\theta ]}_t = \text {Id}_N + \sum _{i=0}^d \int _0^t \partial V_i \left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, \partial _x X^{x, [\theta ]}_s \, dB^i_s. \end{aligned}$$
    (3.1)
  2. (b)

    For all \(t \in [0,T]\), the maps \(\theta \mapsto X^{\theta }_t\) and \(\theta \mapsto X_t^{x, [\theta ]}\) are Fréchet differentiable in \(L^2(\Omega )\), i.e. there exists a linear continuous map \(D X^{\theta }_t : L^2(\Omega ) \rightarrow L^2(\Omega )\) such that for all \( \gamma \in L^2(\Omega )\),

    $$\begin{aligned} \Vert X_t^{\theta + \gamma }- X^{\theta }_t - D X^{\theta }_t(\gamma )\Vert _2 =o(\Vert \gamma \Vert _2) \quad \text { as } \Vert \gamma \Vert _2 \rightarrow 0 , \end{aligned}$$

    and similarly for \(X^{x, [\theta ]}_t\). These processes satisfy the following stochastic differential equations

    $$\begin{aligned} D X^{x, [\theta ]}_t (\gamma ) =&\sum _{i=0}^d \int _0^t \left[ \partial V_i\left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] \right) \, D X^{x, [\theta ]}_s(\gamma ) \right. \nonumber \\&\left. +\, D\tilde{V}_i\left( X^{x, [\theta ]}_s, X^{\theta }_s \right) \left( D X^{\theta }_s (\gamma ) \right) \right] \, dB^i_s, \end{aligned}$$
    (3.2)
    $$\begin{aligned} D X^{\theta }_t (\gamma ) =\,&\gamma + \sum _{i=0}^d \int _0^t \left[ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, DX^{\theta }_s (\gamma ) \right. \nonumber \\&\left. +\, D\tilde{V}_i\left( X^{\theta }_s, X^{\theta }_s\right) \left( D X^{\theta }_s (\gamma )\right) \right] \, dB^i_s , \end{aligned}$$
    (3.3)

    where we denote by \(\tilde{V}_i\) the lifting of \(V_i\) to a function on \({\mathbb {R}}^N \times L^2(\Omega )\). Moreover, for each \(x \in {\mathbb {R}}^N\), \(t \in [0,T]\), the map \({\mathcal {P}}_2({\mathbb {R}}^N) \ni [\theta ] \mapsto X^{x, [\theta ]}_t \in L^p(\Omega )\) is differentiable for all \(p \ge 1\). So, \(\partial _{\mu } X^{x, [\theta ]}_t(v) \) exists and satisfies the following equation:

    $$\begin{aligned} \partial _{\mu } X^{x, [\theta ]}_t(v)= & {} \sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] \right) \,\partial _{\mu } X^{x, [\theta ]}_s(v)\nonumber \\&+\, \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{v,[\theta ]}_s\right) \, \partial _x \widetilde{ X}_s^{v, [\theta ]} \right] \nonumber \\&+\, \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{x, [\theta ]}_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s\right) \, \partial _{\mu } \widetilde{ X}_s^{\tilde{\theta },[\theta ]}(v) \right] \bigg \} dB^i_s , \end{aligned}$$
    (3.4)

    where \(\widetilde{ X}^{\tilde{\theta }}_s\) is a copy of \(X^{\theta }_s\) on the probability space \((\tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}})\), driven by the Brownian motion \(\tilde{B}\) and with initial condition \(\tilde{\theta }\). Similarly, \( \partial _x \widetilde{X}_s^{v, [\theta ]} \) is a copy of \( \partial _x X_s^{v, [\theta ]} \) driven by the Brownian motion \(\tilde{B}\), and \(\partial _{\mu } \widetilde{ X}_s^{\tilde{\theta },[\theta ]}(v)= \left. \partial _{\mu } \widetilde{ X}_s^{x,[\theta ]}(v) \right| _{x = \tilde{\theta }}\). Finally, the following representation holds for all \(\gamma \in L^2(\Omega )\):

    $$\begin{aligned} D X^{x, [\theta ]}_t (\gamma ) = \widetilde{{\mathbb {E}}} \left[ \partial _{\mu } X^{x, [\theta ]}_t(\tilde{\theta }) \, \tilde{\gamma }\right] . \end{aligned}$$
    (3.5)
  3. (c)

    For all \(t \in [0,T]\), \(X^{x, [\theta ]}_t, X^{\theta }_t \in {\mathbb {D}}^{1, \infty }\). Moreover, \(\mathcal {\mathbf {D}}_r X^{x, [\theta ]}= \left( \mathcal {\mathbf {D}}^j_r (X^{x, [\theta ]})^i \right) _{\begin{array}{c} 1 \le i \le N \\ 1 \le j \le d \end{array}}\) satisfies, for \(0 \le r \le t\)

    $$\begin{aligned} \mathcal {\mathbf {D}}_r X^{x, [\theta ]}_t = \sigma \left( X^{x, [\theta ]}_r, \left[ X^{\theta }_r \right] \right) + \sum _{i=0}^d \int _r^t \partial V_i\left( X^{x, [\theta ]}_s, \left[ X^{\theta }_s \right] \right) \, \mathcal {\mathbf {D}}_r X^{x, [\theta ]}_s \, dB^i_{s},\nonumber \\ \end{aligned}$$
    (3.6)

    where \(\sigma (z, \mu )\) is the \(N \times d\) matrix with columns \(V_1(z,\mu ), \ldots , V_d(z,\mu )\).

Proof

  1. (a)

    Recalling again that \(X^{x, [\theta ]}\) satisfies a classical SDE with time-dependent coefficients, it follows from [23, Theorem 4.6.5] that there exists a modification of \(X^{x, [\theta ]}_t\) which is continuously differentiable in x, and the first derivative satisfies Eq. (3.1).

  2. (b)

    It is shown in [12, Lemma 4.17] that the map \( \theta \mapsto (X^{\theta }_t,X^{x, [\theta ]}_t)\) is Fréchet differentiable. It is then easy to see that the Fréchet derivative processes satisfy Eqs. (3.2) and (3.3). Now, we follow the idea in [8] to show that \(\partial _{\mu } X^{x, [\theta ]}_t(v)\) solves Eq. (3.4). We first re-write the equation for \(D X^{\theta }_t(\gamma )\) in terms of \(\partial _{\mu }V_i\) instead of the Fréchet derivative of the lifting \(\tilde{V}_i\), as follows:

    $$\begin{aligned} D X^{\theta }_t (\gamma ) =\,&\gamma + \sum _{i=0}^d \int _0^t\bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, DX^{\theta }_s (\gamma ) \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu } V_i \left( X^{\theta }_s, \left[ X^{\theta }_s \right] , \widetilde{X}^{\tilde{\theta }}_s\right) D\widetilde{X}_s^{\tilde{\theta }}(\widetilde{\gamma })\right] \bigg \} \, dB^i_s . \end{aligned}$$
    (3.7)

    Consider the equation satisfied by \(\partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(v)\), evaluated at \(v= \widehat{\theta }\) and multiplied by \(\widehat{\gamma }\), with both random variables defined on a probability space \((\widehat{\Omega }, \widehat{{\mathcal {F}}}, \widehat{{\mathbb {P}}})\). Taking expectation with respect to \(\widehat{{\mathbb {P}}}\), we get

    $$\begin{aligned} \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] =&\sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \nonumber \\&+ \widehat{{\mathbb {E}}} \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, [X_s^{\theta }], \widetilde{ X}^{\hat{\theta }, [ \theta ]}_s \right) \partial _x \widetilde{X}_s^{\widehat{\theta }, [\theta ]} \, \widehat{\gamma } \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \widehat{{\mathbb {E}}} \left[ \partial _{\mu } \widetilde{X}_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] \right] \bigg \}dB^i_s. \end{aligned}$$
    (3.8)

    In the above equation, we are able to take \(\widehat{\gamma }\) inside the Itô integral with no problem since it is defined on a separate probability space from the Brownian motion B. We are also able to interchange the order of the Itô integral and the expectation with respect to \(\widehat{{\mathbb {P}}}\) using a stochastic Fubini theorem (see for example [33, Theorem 65]). Again, since \((\widehat{\theta }, \widehat{\gamma })\) are defined on a separate probability space,

    $$\begin{aligned} \widehat{{\mathbb {E}}} \widetilde{{\mathbb {E}}}&\left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\hat{\theta }, [ \theta ]}_s \right) \partial _x \widetilde{X}_s^{\widehat{\theta }, [\theta ]} \, \widehat{\gamma } \right] = \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \partial _x \widetilde{X}_s^{\tilde{\theta }, [\theta ]} \, \tilde{\gamma } \right] , \end{aligned}$$

    which we can substitute into Eq. (3.8) to get:

    $$\begin{aligned} \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] =&\sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] \nonumber \\&+ \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \left( \partial _x \widetilde{X}_s^{\tilde{\theta }, [\theta ]} \, \tilde{\gamma } \right. \right. \nonumber \\&\left. \left. +\,\widehat{{\mathbb {E}}} \left[ \partial _{\mu } \widetilde{X}_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \, \widehat{\gamma } \right] \right) \right] \bigg \}dB^i_s. \end{aligned}$$
    (3.9)

    Now, taking Eq. (3.1), satisfied by \(\partial _x X^{x, [\theta ]}_t\), evaluating at \(x= \theta \), multiplying by \(\gamma \) and adding to Eq. (3.9), we see that \( \partial _x X^{\theta ,[\theta ]}_t \gamma + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \) is equal to

    $$\begin{aligned}&\gamma + \sum _{i=0}^d \int _0^t \bigg \{ \partial V_i\left( X^{\theta }_s, \left[ X_s^{\theta }\right] \right) \, \left( \partial _x X^{\theta ,[\theta ]}_s \gamma + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_s^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \right) \\&\quad + \widetilde{{\mathbb {E}}} \left[ \partial _{\mu }V_i \left( X^{\theta }_s, \left[ X_s^{\theta }\right] , \widetilde{ X}^{\tilde{\theta }}_s \right) \left( \partial _x \widetilde{X}_s^{\tilde{\theta }, [\theta ]} \, \widetilde{\gamma } + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } \widetilde{X}_s^{\tilde{\theta },[\theta ]}(\widehat{\theta }) \widehat{\gamma } \right] \right) \right] \bigg \} dB^i_s. \end{aligned}$$

    One can therefore see that the equation satisfied by \( \partial _x X^{\theta ,[\theta ]}_t \gamma + \widehat{{\mathbb {E}}}\left[ \partial _{\mu } X_t^{\tilde{\theta },[\theta ]}(\widehat{\theta })\, \widehat{\gamma } \right] \) is the same as Eq. (3.7) satisfied by \(D X^{\theta }_t(\gamma )\), so by uniqueness they are equal. This representation also makes clear the linearity and continuity of \(\gamma \mapsto D X^{\theta }_t(\gamma )\).

    Following essentially the same procedure shows that \(\widehat{{\mathbb {E}}} \left[ \partial _{\mu } X^{x, [\theta ]}_t(\widehat{\theta }) \, \widehat{\gamma } \right] \) satisfies the same equation as \(D X^{x, [\theta ]}_t(\gamma )\), so that (3.5) holds. Hence, by definition \(\partial _{\mu } X^{x, [\theta ]}_t(v)\) exists and satisfies Eq. (3.4). This representation also makes clear the linearity and continuity of \(\gamma \mapsto D X^{x, [\theta ]}_t(\gamma )\).

  3. (c)

    Let \(X^{\theta , n}\) denote the Picard approximation of the solution to the McKean–Vlasov SDE (2.1), given by

    $$\begin{aligned} X^{\theta , 0}_t&=\theta , \quad t \in [0,T] \\ X^{\theta , n}_t&= \theta + \sum _{i=0}^d \int _0^t V_i \left( X^{\theta , n}_s, \left[ X^{\theta , n-1}_s \right] \right) \, dB^i_s . \end{aligned}$$

    For each \(n \ge 1\), \(X^{\theta , n}\) is the solution of a classical SDE with time-dependent coefficients, which are differentiable in space, with each derivative of the coefficients being Lipschitz continuous. Therefore, by [32, Theorem 2.2.1], \(X^{\theta , n}_t \in {\mathbb {D}}^{1, \infty }\) for all \(t \in [0,T]\). The form of the equation satisfied by \(\mathcal {\mathbf {D}}X^{\theta , n}_t\) is the same as (3.6). It is then easy to show that \( \Vert X^{\theta , n}_t\Vert _{{\mathbb {D}}^{1, \infty }} < C(1 + \Vert \theta \Vert _2)\) uniformly in n. Now, since for all \(p \ge 2\), \( \Vert X^{\theta , n}_t - X^{\theta }_t\Vert _p \rightarrow 0\) as \(n \rightarrow \infty \), by [32, Lemma 1.5.3], \( X^{\theta }_t \in {\mathbb {D}}^{1, \infty }\). Similarly, \(X^{x, [\theta ]}_t \in {\mathbb {D}}^{1, \infty }\) since it solves a classical SDE with time-dependent coefficients. The measure term in the coefficients of the equation for \(X^{x, [\theta ]}_t\) is deterministic, so \(\mathcal {\mathbf {D}}_r(X^{x, [\theta ]}_t)\) satisfies the usual equation for the Malliavin derivative of an SDE, which is precisely Eq. (3.6). \(\square \)

For our applications, we need to extend the above result to higher order derivatives of \(X^{x, [\theta ]}_t\). The main result is summarised in the following theorem, which classifies \(X^{x, [\theta ]}_t\) as a Kusuoka–Stroock process.

Theorem 3.2

Suppose \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b, Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\), then \((t, x, [\theta ]) \mapsto X_t^{x, [\theta ]} \in {\mathbb {K}}^1_{0}({\mathbb {R}}^N,k)\). If, in addition, \(V_0, \ldots , V_d\) are uniformly bounded then \((t, x, [\theta ]) \mapsto X_t^{x, [\theta ]} \in {\mathbb {K}}^0_{0}({\mathbb {R}}^N,k)\).

Since each derivative process satisfies a linear equation (whose exact form is not important for our purposes), the proof is quite mechanical and is deferred to Appendix 6.2. Now we introduce some operators acting on Kusuoka–Stroock processes. These are the building blocks of the integration by parts formulae to come. For the rest of this section, we will need the following uniform ellipticity assumption.

Assumption 3.3

Let \(\sigma : {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}^{N \times d}\) be given by

$$\begin{aligned} \sigma (z,\mu ):= \left[ V_1(z,\mu ) | \cdots | V_d(z, \mu ) \right] . \end{aligned}$$

We make the following uniform ellipticity assumption, referred to throughout as (UE): there exists \(\epsilon >0\) such that, for all \(\xi \in {\mathbb {R}}^N\), \(z \in {\mathbb {R}}^N\) and \(\mu \in {\mathcal {P}}_2({\mathbb {R}}^N)\),

$$\begin{aligned} \xi ^{\top } \sigma (z, \mu ) \sigma (z, \mu )^{\top } \xi \ge \epsilon | \xi |^2 . \end{aligned}$$
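For example (an illustration of ours, not taken from the text): if \(d \ge N\) and \(V_i(z,\mu ) = e_i\) for \(i = 1, \ldots , N\), where \(e_i\) denotes the \(i\)-th coordinate vector, then

$$\begin{aligned} \xi ^{\top } \sigma (z,\mu ) \sigma (z,\mu )^{\top } \xi = |\xi |^2 + \sum _{i=N+1}^d \left( V_i(z,\mu ) \cdot \xi \right) ^2 \ge |\xi |^2 , \end{aligned}$$

so Assumption 3.3 holds with \(\epsilon = 1\), regardless of how the remaining vector fields depend on \((z,\mu )\).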

Now, for a multi-index \(\alpha \) on \(\{1, \ldots , N\}\), we introduce the following operators acting on elements of \({\mathbb {K}}^q_r({\mathbb {R}},n)\), defined for \(\alpha =(i)\), by

$$\begin{aligned} I^1_{(i)}(\Psi )(t,x, [\theta ]) :=&\, \frac{1}{\sqrt{t}} \, \delta \left( r \mapsto \Psi (t,x, [\theta ]) \, \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \right) _{i} \right) , \\ I^2_{(i)}(\Psi )(t,x, [\theta ]) :=&\sum _{j=1}^N I^1_{(j)} \left( \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1}_{j,i} \Psi (t,x, [\theta ])\right) , \\ I^3_{(i)}(\Psi )(t,x, [\theta ]) :=&\, I^1_{(i)}(\Psi )(t,x,[\theta ]) + \sqrt{t}\, \partial ^i\Psi (t,x,[\theta ]), \\ {\mathcal {I}}^1_{(i)} (\Psi )(t,x, [\theta ],v_1) :=&\, \frac{1}{\sqrt{t}} \, \delta \left( r \mapsto \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \, \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v_1) \right) _{i} \Psi (t,x, [\theta ]) \right) , \\ {\mathcal {I}}^3_{(i)}(\Psi )(t,x, [\theta ],v_1) :=&\, {\mathcal {I}}^1_{(i)}(\Psi )(t,x, [\theta ],v_1) + \sqrt{t}\, (\partial _{\mu } \Psi )_{i}(t,x, [\theta ],v_1). \end{aligned}$$

For \(\alpha = (\alpha _1, \ldots , \alpha _n)\) we inductively define

$$\begin{aligned} I^1_{\alpha }:=\, I^1_{\alpha _n} \circ I^1_{\alpha _{n-1}} \circ \cdots \circ I^1_{\alpha _1} , \end{aligned}$$

and make analogous definitions for each of the other operators; for instance, \(I^1_{(i,j)}(\Psi ) = I^1_{(j)}\left( I^1_{(i)}(\Psi )\right) \), so the first entry of \(\alpha \) acts first. The following result states that these operators are well-defined and describes how each operator transforms a given Kusuoka–Stroock process. The proof is contained in Appendix 6.2.

Proposition 3.4

If \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\), (UE) holds and \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\), then \(I^1_{\alpha }(\Psi )\) and \(I^3_{\alpha }(\Psi )\) are well-defined for \(|\alpha |\le (k \wedge n)\), while \(I^2_{\alpha }(\Psi )\), \({\mathcal {I}}^1_{\alpha }(\Psi )\) and \({\mathcal {I}}^3_{\alpha }(\Psi )\) are well-defined for \(|\alpha |\le n \wedge (k-2)\). Moreover,

$$\begin{aligned} I^1_{\alpha }(\Psi ), I^3_{\alpha }(\Psi )&\in {\mathbb {K}}^{q+2|\alpha |}_r({\mathbb {R}},(k \wedge n)-|\alpha |) , \\ I^2_{\alpha }(\Psi )&\in {\mathbb {K}}^{q+3|\alpha |}_r({\mathbb {R}},[n \wedge (k-2)]-|\alpha |) , \\ {\mathcal {I}}^1_{\alpha }(\Psi ), {\mathcal {I}}^3_{\alpha }(\Psi )&\in {\mathbb {K}}^{q+4|\alpha |}_r({\mathbb {R}},[n \wedge (k-2) ]-|\alpha |). \end{aligned}$$

If \(\Psi \in {\mathbb {K}}^0_r({\mathbb {R}},n)\) and \(V_0, \ldots , V_d\) are uniformly bounded, then

$$\begin{aligned} I^1_{\alpha }(\Psi ), I^3_{\alpha }(\Psi )&\in {\mathbb {K}}^{0}_r({\mathbb {R}},(k \wedge n)-|\alpha |) , \\ I^2_{\alpha }(\Psi )&\in {\mathbb {K}}^{0}_r({\mathbb {R}},[n \wedge (k-2)]-|\alpha |) ,\\ {\mathcal {I}}^1_{\alpha }(\Psi ), {\mathcal {I}}^3_{\alpha }(\Psi )&\in {\mathbb {K}}^{0}_r({\mathbb {R}},[n \wedge (k-2) ]-|\alpha |). \end{aligned}$$

4 Integration by parts formulae for the de-coupled equation

Having introduced some operators acting on Kusuoka–Stroock processes, we now show how to use these operators to construct Malliavin weights in integration by parts formulas. We first develop integration by parts formulas for derivatives of \(x \mapsto {\mathbb {E}}\, f(X^{x, [\theta ]}_t)\) and then separately \([\theta ] \mapsto {\mathbb {E}}\, f(X^{x, [\theta ]}_t)\). In the last part of this section, we will show how to combine these results to construct integration by parts formulas for derivatives of the function \(x \mapsto {\mathbb {E}}\,f(X^{x,\delta _x}_t)\).

4.1 Integration by parts in the space variable

Proposition 4.1

Let \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\).

  1.

    If \(|\alpha | \le [n \wedge k]\), then

    $$\begin{aligned} {\mathbb {E}}\left[ \partial ^{\alpha }_x\left( f\left( X^{x, [\theta ]}_t \right) \right) \, \Psi (t,x, [\theta ])\right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^1_{\alpha }(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$
  2.

    If \(|\alpha | \le [n \wedge (k-2)]\), then

    $$\begin{aligned} {\mathbb {E}}\left[ (\partial ^{\alpha }f)\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^2_{\alpha }(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$
  3.

    If \(|\alpha | \le [n \wedge k]\), then

    $$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^3_{\alpha }(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$
  4.

    If \(|\alpha | + |\beta | \le [n \wedge (k-2)]\), then

    $$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ (\partial ^{\beta } f)\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] = t^{-(|\alpha |+ |\beta |)/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^3_{\alpha }\left( I^2_{\beta }(\Psi )\right) (t,x, [\theta ])\right] . \end{aligned}$$

Proof

  1.

    First, we note that Eq. (3.1), satisfied by \(\partial _x X^{x,[\theta ]}_t\), and Eq. (3.6), satisfied by \(\mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t\), are the same linear equation and differ only in their initial conditions: run from time r, they start from \(\partial _x X^{x,[\theta ]}_r\) and \(\sigma (X^{x,[\theta ]}_r, [X^{\theta }_r])\), respectively. Composing with the right inverse \(\sigma ^{\top }(\sigma \sigma ^{\top })^{-1}\), which exists by (UE), it therefore follows that for \(r \le t\),

    $$\begin{aligned} \partial _x X^{x,[\theta ]}_t = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r . \end{aligned}$$

    This allows us to make the following computations for \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\),

    $$\begin{aligned} {\mathbb {E}}\left[ \partial _x \left[ f\left( X^{x, [\theta ]}_t \right) \right] \Psi (t,x, [\theta ]) \right] =&\, {\mathbb {E}}\left[ \partial f \left( X_t^{x,[\theta ]} \right) \,\partial _x X^{x,[\theta ]}_t\, \Psi (t,x, [\theta ]) \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \partial f \left( X_t^{x,[\theta ]} \right) \, \partial _x X^{x,[\theta ]}_t \Psi (t,x, [\theta ]) \, dr \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \partial f \left( X_t^{x,[\theta ]} \right) \, \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\times \left. \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \Psi (t,x, [\theta ]) \, dr \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \mathcal {\mathbf {D}}_r f \left( X_t^{x,[\theta ]} \right) \, \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \right. \\&\times \left. \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \, \Psi (t,x, [\theta ]) \, dr \right] \\ =&\, \frac{1}{t} {\mathbb {E}}\left[ f \left( X_t^{x,[\theta ]} \right) \, \delta \left( r \mapsto \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \right. \right. \\&\times \left. \left. \left. \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \right) ^{\top } \Psi (t,x, [\theta ]) \right) \right] , \end{aligned}$$

    where we have used Malliavin integration by parts \({\mathbb {E}}\langle \mathcal {\mathbf {D}}\phi , u \rangle _{H_d} = {\mathbb {E}}\left[ \phi \, \delta (u) \right] \) in the last line. This proves the result for \(|\alpha |=1\). By Proposition 3.4, \(I^1_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2}_r({\mathbb {R}},(k \wedge n)-1) \) when \(|\alpha |=1\). We can therefore iterate this argument another \(|\alpha |-1\) times to obtain the result for all \(\alpha \) satisfying \(|\alpha | \le [n \wedge k]\).

  2.

    By the chain rule,

    $$\begin{aligned} {\mathbb {E}}\left[ (\partial ^{i} f)\left( X^{x, [\theta ]}_t \right) \Psi (t,x, [\theta ])\right] =&\sum _{j=1}^N {\mathbb {E}}\left[ \partial _{x_j} \left( f\left( X^{x, [\theta ]}_t \right) \right) \left( \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \right) ^{j,i} \Psi (t,x, [\theta ])\right] \\ =&\, t^{-1/2} \, \sum _{j=1}^N {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) I^1_{(j)} \left( \left( \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \right) ^{j,i} \Psi (t,x, [\theta ])\right) \right] \\ =&\, t^{-1/2} \,{\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, I^2_{(i)}(\Psi )(t,x, [\theta ])\right] . \end{aligned}$$

    By Proposition 3.4, \(I^2_{(i)}(\Psi ) \in {\mathbb {K}}^{q+3}_r \left( {\mathbb {R}},[n \wedge (k-2)]-1 \right) \), so since \(|\alpha | \le [n \wedge (k-2)]\), we can apply this argument another \(|\alpha |-1\) times to get the result.

  3.

    We compute, for any \(i=1, \ldots , N\),

    $$\begin{aligned}&\partial ^{i}_x \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \Psi (t,x, [\theta ])\right] \\&\quad = {\mathbb {E}}\left[ \partial ^i_x \left( f\left( X^{x, [\theta ]}_t \right) \right) \Psi (t,x, [\theta ]) + f\left( X^{x, [\theta ]}_t \right) \, \partial ^i_x\Psi (t,x, [\theta ]) \right] \\&\quad = t^{-1/2} {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \left\{ I^1_{(i)}(\Psi )(t,x, [\theta ]) + \sqrt{t}\, \partial _x^i\Psi (t,x, [\theta ]) \right\} \right] , \end{aligned}$$

    which proves the result for \(|\alpha |=1\). Again, using Proposition 3.4, \(I^3_{\alpha }(\Psi ) \in {\mathbb {K}}^{q+2}_r({\mathbb {R}},(k \wedge n)-1) \) when \(|\alpha |=1\). We can therefore iterate this argument another \(|\alpha |-1\) times to obtain the result for all \(\alpha \) satisfying \(|\alpha | \le [n \wedge k]\).

  4.

    This follows from parts 2 and 3. \(\square \)
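To see part 1 of Proposition 4.1 in the simplest possible setting (a sanity check of ours, not part of the original argument), take \(N=d=1\), \(V_0 \equiv 0\), \(V_1 \equiv 1\) and \(\Psi \equiv 1\), so that \(X^{x,[\theta ]}_t = x + B_t\), \(\partial _x X^{x,[\theta ]}_t = 1\) and \(\sigma \equiv 1\). Then \(I^1_{(1)}(1)(t,x,[\theta ]) = t^{-1/2}\, \delta (r \mapsto 1) = t^{-1/2} B_t\), and part 1 reduces to the classical Gaussian integration by parts formula

$$\begin{aligned} {\mathbb {E}}\left[ \partial _x \left( f(x+B_t) \right) \right] = \frac{1}{t}\, {\mathbb {E}}\left[ f(x+B_t)\, B_t \right] . \end{aligned}$$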

4.2 Integration by parts in the measure variable

We now consider derivatives of the function \([\theta ] \mapsto {\mathbb {E}}[f (X^{x, [\theta ]}_t)]\).

Proposition 4.2

Let \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\) and \(\Psi \in {\mathbb {K}}^q_r({\mathbb {R}},n)\).

  1.

    If \(|\beta | \le [n \wedge (k-2)]\), then

    $$\begin{aligned} {\mathbb {E}}\left[ \partial ^{\beta }_{\mu }\left( f\left( X^{x, [\theta ]}_t \right) \right) ({\varvec{v}})\, \Psi (t,x, [\theta ])\right] = t^{-|\beta |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, {\mathcal {I}}^1_{\beta }(\Psi )(t,x, [\theta ], {\varvec{v}})\right] . \end{aligned}$$
  2.

    If \(|\beta | \le [n \wedge (k-2)]\), then

    $$\begin{aligned} \partial ^{\beta }_{\mu } \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] ({\varvec{v}}) = t^{-|\beta |/2} \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, {\mathcal {I}}^3_{\beta }(\Psi )(t,x, [\theta ], {\varvec{v}})\right] . \end{aligned}$$
  3.

    If \(|\alpha | + |\beta | \le [n \wedge (k-2)]\), then

    $$\begin{aligned} \partial ^{\beta }_{\mu } \, {\mathbb {E}}\left[ (\partial ^{\alpha }f)\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] ({\varvec{v}})= & {} t^{-(|\alpha |+|\beta |)/2} {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \right. \\&\times \left. {\mathcal {I}}^3_{\beta }\left( I^2_{\alpha }(\Psi )\right) (t,x, [\theta ], {\varvec{v}})\right] . \end{aligned}$$

Proof

  1.

    We use again that for \(r \le t\),

    $$\begin{aligned} \partial _x X^{x,[\theta ]}_t = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r . \end{aligned}$$

    This allows us to make the following computations for \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\),

    $$\begin{aligned}&{\mathbb {E}}\left[ \partial _{\mu }\left( f\left( X^{x, [\theta ]}_t \right) \right) (v) \, \Psi (t,x, [\theta ]) \right] \\&\quad =\, {\mathbb {E}}\left[ \partial f \left( X_t^{x, [\theta ]}\right) \, \partial _{\mu } X^{x, [\theta ]}_t(v) \, \Psi (t,x, [\theta ]) \right] \\&\quad =\, \frac{1}{t} {\mathbb {E}}\left[ \int _0^t \partial f \left( X_t^{x, [\theta ]}\right) \, \partial _x X^{x, [\theta ]}_t \, \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \, \Psi (t,x, [\theta ]) \, dr \right] \\&\quad =\, \frac{1}{t} {\mathbb {E}}\int _0^t \bigg \{ \partial f \left( X_t^{x, [\theta ]}\right) \, \mathcal {\mathbf {D}}_r X^{x, [\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x, [\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x, [\theta ]}_r \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \, \Psi (t,x, [\theta ]) \bigg \} dr \\&\quad =\, \frac{1}{t} {\mathbb {E}}\int _0^t \bigg \{ \mathcal {\mathbf {D}}_r f \left( X_t^{x, [\theta ]}\right) \, \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x, [\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x, [\theta ]}_r \, \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \, \Psi (t,x, [\theta ]) \bigg \} dr \\&\quad =\, \frac{1}{t} {\mathbb {E}}\bigg [ f \left( X_t^{x, [\theta ]}\right) \, \delta \bigg ( r \mapsto \left( \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x, [\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x, [\theta ]}_r \, \left( \partial _x X^{x, [\theta ]}_t\right) ^{-1} \partial _{\mu } X^{x, [\theta ]}_t(v) \right) ^{\top } \Psi (t,x, [\theta ]) \bigg ) \bigg ], \end{aligned}$$

    where we have used Malliavin integration by parts \({\mathbb {E}}\langle \mathcal {\mathbf {D}}\phi , u \rangle _{H_d} = {\mathbb {E}}\left[ \phi \, \delta (u) \right] \) in the last line. This proves the claim for \(|\beta |=1\). For general \(\beta \), it follows by iterating this integration by parts \(|\beta |\) times.

  2. 2.
    $$\begin{aligned} \partial _{\mu } \, {\mathbb {E}}\left[ f\left( X^{x, [\theta ]}_t \right) \, \Psi (t,x, [\theta ])\right] (v) = {\mathbb {E}}\left[ \partial _{\mu } \left( f\left( X^{x, [\theta ]}_t \right) \right) (v) \, \Psi (t,x, [\theta ]) + f\left( X^{x, [\theta ]}_t \right) \, \partial _{\mu }\Psi (t,x, [\theta ],v)\right] . \end{aligned}$$

    Applying part 1 to the first term on the right-hand side and recalling the definition of \({\mathcal {I}}^3\) proves the proposition when \(|\beta |=1\). For \(|\beta |>1\), simply iterate this argument.

  3.

    This follows from parts 1 and 2. \(\square \)

4.3 Integration by parts for McKean–Vlasov SDE with fixed initial condition

We now consider developing integration by parts formulae for derivatives of the function

$$\begin{aligned} x \mapsto {\mathbb {E}}f\left( X_t^{x, \delta _x}\right) . \end{aligned}$$

We introduce the following operator acting on elements of \({\mathcal {K}}_r^q({\mathbb {R}}, M)\), the set of Kusuoka–Stroock processes on \({\mathbb {R}}^N\). For \(\alpha =(i)\),

$$\begin{aligned} J_{(i)}(\Phi )(t,x):= I^3_{(i)}(\Phi )(t,x,\delta _x) + {\mathcal {I}}^3_{(i)}(\Phi )(t,x,\delta _x) \end{aligned}$$

and inductively, for \(\alpha =(\alpha _1, \ldots , \alpha _n)\),

$$\begin{aligned} J_{\alpha }:= J_{\alpha _n} \circ J_{\alpha _{n-1}} \circ \cdots \circ J_{\alpha _1}. \end{aligned}$$
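As an illustration (our remark, not from the text): when the coefficients do not depend on the measure, \(\partial _{\mu } X^{x, [\theta ]}_t \equiv 0\), so \({\mathcal {I}}^3_{(i)}(1)(t,x,\delta _x)\) vanishes, and \(J_{(i)}(1)\) collapses to the classical weight \(I^3_{(i)}(1) = I^1_{(i)}(1)\) of Bismut–Elworthy–Li type.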

Lemma 4.3

If \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) and \(\Phi \in {\mathcal {K}}^q_r({\mathbb {R}},n)\), then \(J_{\alpha }(\Phi )\) is well-defined for \(|\alpha |\le [n \wedge (k-2)]\), and

$$\begin{aligned} J_{\alpha }(\Phi ) \in {\mathcal {K}}^{q+4|\alpha |}_r({\mathbb {R}},[n \wedge (k-2)]-|\alpha |). \end{aligned}$$

Moreover, if \(\Phi \in {\mathcal {K}}^0_r({\mathbb {R}},n)\) and \(V_0, \ldots , V_d\) are uniformly bounded, then

$$\begin{aligned} J_{\alpha }(\Phi ) \in {\mathcal {K}}^{0}_r({\mathbb {R}},[n \wedge (k-2)]-|\alpha |). \end{aligned}$$

Proof

This is a direct result of Proposition 3.4 and Lemma 2.11. \(\square \)

Theorem 4.4

Let \(f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\). For all multi-indices \(\alpha \) on \(\{1, \ldots , N\}\) with \(|\alpha | \le k-2\),

$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) \right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) \, J_{\alpha }(1)(t,x) \right] . \end{aligned}$$

In particular, we get the following bound

$$\begin{aligned} \left| \partial ^{\alpha }_x \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) \right] \right| \le C \, \Vert f \Vert _{\infty } \, t^{-|\alpha |/2} \, (1+|x|)^{4 |\alpha |} . \end{aligned}$$

Proof

By the above discussion,

$$\begin{aligned} \partial ^{i}_x \, {\mathbb {E}}\left[ f\left( X_t^{x, \delta _x} \right) \right] = \partial _z^{i} \, {\mathbb {E}}\left. \left[ f\left( X_t^{z, \delta _x} \right) \right] \right| _{z=x} + \partial ^{i}_{\mu } {\mathbb {E}}\left. \left[ f\left( X_t^{x,[\theta ]} \right) \right] (v) \right| _{ [\theta ]=\delta _x,v=x} . \end{aligned}$$

Now, we apply the integration by parts formulae developed earlier, namely Proposition 4.1 part 3 and Proposition 4.2 part 2:

$$\begin{aligned} \partial _z^{i} \, {\mathbb {E}}\left. \left[ f\left( X_t^{z, \delta _x} \right) \right] \right| _{z=x}&= t^{-1/2} \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) I^3_{(i)}(1)(t,x,\delta _x) \right] \\ \partial ^{i}_{\mu } {\mathbb {E}}\left. \left[ f\left( X_t^{x,[\theta ]} \right) \right] (v) \right| _{ [\theta ]=\delta _x,v=x}&= t^{-1/2} \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) {\mathcal {I}}^3_{(i)}(1)(t,x, \delta _x, x) \right] \end{aligned}$$

and we can iterate this argument \(|\alpha |\) times. \(\square \)
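Theorem 4.4 is easy to test numerically in the degenerate case noted after the definition of \(J_{\alpha }\): when the coefficients do not depend on the measure and, say, \(N=d=1\), \(V_0 \equiv 0\), \(V_1 \equiv 1\), we have \(X^{x,\delta _x}_t = x + B_t\), \(J_{(1)}(1)(t,x) = t^{-1/2}B_t\), and the theorem reads \(\partial _x {\mathbb {E}}[f(x+B_t)] = t^{-1}{\mathbb {E}}[f(x+B_t)\,B_t]\). A minimal Monte Carlo check of ours (all names are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
t, x, M, h = 1.0, 0.3, 10**6, 1e-3
B = rng.normal(0.0, np.sqrt(t), size=M)
f = np.tanh                                            # a smooth bounded test function

fd = (f(x + h + B) - f(x - h + B)).mean() / (2 * h)    # finite-difference derivative
mw = (f(x + B) * B).mean() / t                         # Malliavin-weight representation
print(fd, mw)                                          # the two agree up to Monte Carlo error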

Corollary 4.5

Let \( f \in {\mathcal {C}}^{\infty }_b({\mathbb {R}}^N;{\mathbb {R}})\), and let \(\alpha \) and \(\beta \) be multi-indices on \(\{1, \ldots , N\}\) with \(|\alpha | + |\beta | \le k-2\). Then,

$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ (\partial ^{\beta } f)\left( X_t^{x,\delta _x} \right) \right] = t^{-\frac{|\alpha |+|\beta |}{2}} \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) \, I^2_{\beta }(J_{\alpha }(1))(t,x) \right] \end{aligned}$$

and \(I^2_{\beta }(J_{\alpha }(1))\in {\mathcal {K}}^{4|\alpha |+3|\beta |}_0({\mathbb {R}},k-2-|\alpha |-|\beta |)\).

Proof

Theorem 4.4 gives

$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ (\partial ^{\beta }f)\left( X_t^{x,\delta _x} \right) \right] = t^{-|\alpha |/2} \, {\mathbb {E}}\left[ (\partial ^{\beta }f)\left( X_t^{x,\delta _x} \right) \, J_{\alpha }(1)(t,x) \right] \end{aligned}$$

with \(J_{\alpha }(1) \in {\mathcal {K}}^{4|\alpha |}_0({\mathbb {R}}, k-2-|\alpha |)\). Then, using Proposition 4.1 part 2, we get

$$\begin{aligned} \partial ^{\alpha }_x \, {\mathbb {E}}\left[ (\partial ^{\beta } f)\left( X_t^{x,\delta _x} \right) \right] = t^{-\frac{|\alpha |+|\beta |}{2}} \, {\mathbb {E}}\left[ f\left( X_t^{x,\delta _x} \right) \, I^2_{\beta }(J_{\alpha }(1))(t,x) \right] . \end{aligned}$$

\(\square \)

5 Connection with PDE

We return our attention to the PDE (1.2). The results of the last section suggest that for initial conditions \(g(z,\mu )=g(z)\), which do not depend on the measure, we can still expect there to be a classical solution, even if g is not differentiable. Indeed, we spell out the conditions under which this is true in Theorem 5.8. But first, let us consider whether the same can be true for initial conditions which do depend on the measure.

Example 5.1

Let \(g(z,\mu ) = g(\mu ) := \textstyle \left| \int y \, \mu (dy) \right| \) and \(V_0 \equiv 0\), \(V_1\equiv 1\) and \(N=d=1\), then

$$\begin{aligned} X_t^{\theta } = \theta + B_t, \end{aligned}$$

and

$$\begin{aligned} g\left( \left[ X_t^{\theta }\right] \right) = \left| {\mathbb {E}}[\theta ]\right| . \end{aligned}$$

We now show that \([ \theta ] \mapsto g([X_t^{\theta }])\) is not differentiable. If we choose \(\theta \in L^2(\Omega )\) with \({\mathbb {E}}\theta =0\), then for any \(t>0\), \(h \ne 0\) and any \(\gamma \in L^2(\Omega )\),

$$\begin{aligned} \frac{1}{ h} \left| g\left( \left[ X_t^{\theta + h \gamma }\right] \right) - g\left( \left[ X_t^{\theta }\right] \right) \right| = \frac{|h|}{h} \, \left| {\mathbb {E}}\gamma \right| \end{aligned}$$

and, whenever \({\mathbb {E}}\gamma \ne 0\), this difference quotient takes the values \(\pm \left| {\mathbb {E}}\gamma \right| \) according to the sign of h, so it has no limit as \(h \rightarrow 0\). Hence, the Gâteaux derivative of the map \(L^2(\Omega ) \ni \theta \mapsto g( [X_t^{\theta }])\) does not exist.
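This failure is easy to observe numerically; a short check of ours (any centred \(\theta \) and any direction \(\gamma \) with \({\mathbb {E}}\gamma \ne 0\) exhibits it):

import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=10**6)
theta -= theta.mean()                      # exactly centred sample, so g([theta]) = 0
gamma = 1.0 + rng.normal(size=10**6)       # direction with E[gamma] ~ 1
g = lambda s: abs(np.mean(s))              # g([xi]) = |E xi|; the mean-zero B_t drops out
for h in (0.1, 0.01, -0.01, -0.1):
    print(h, (g(theta + h * gamma) - g(theta)) / h)   # ~ +1 for h > 0, ~ -1 for h < 0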

The above example shows that for a function \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) which is Lipschitz continuous, we cannot, in general, expect \([\theta ] \mapsto {\mathbb {E}}\left( \, g \left( X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) \right) \) to be differentiable (for a fixed \(t>0\)), even when the coefficients in the equation for \(X^{x, [\theta ]}_t\) are smooth and uniformly elliptic. There are, however, interesting examples of initial conditions for which we can develop integration by parts formulas. Before we introduce this class of initial conditions, we consider what form derivatives of \( U(t,x,[\theta ]):= {\mathbb {E}}\left( g \left( X^{x, [\theta ]}_t, [X^{\theta }_t] \right) \right) \) take when g is smooth. The following result is Lemma 5.1 from [8].

Lemma 5.2

We assume that the function \(g :{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) admits continuous derivatives \(\partial _x g\) and \(\partial _{\mu }g\) satisfying, for some \(q>0\) and \(0 \le p <2\),

$$\begin{aligned} \left| \partial _x g(x,[\theta ]) \right|&\le C \left( 1+|x|+\Vert \theta \Vert _2 \right) ^q\\ \left| \partial _\mu g(x,[\theta ],v) \right|&\le C \left( 1+|x|^q+\Vert \theta \Vert ^q_2 + |v|^p \right) \end{aligned}$$

and we assume \(V_0, \ldots , V_d\in {\mathcal {C}}^{1,1}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). Then, \(\partial _{\mu }U\) exists and takes the following form:

$$\begin{aligned} \partial _{\mu }U(t,x,[\theta ],v) =&\, {\mathbb {E}}\left[ \partial g \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \partial _{\mu } X^{x, [\theta ]}_t (v) \right] \nonumber \\&+ \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \partial _{\mu } g \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _{v} \widetilde{X}_t^{v,[\theta ]} \right. \nonumber \\&\left. +\, \partial _{\mu } g \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] . \end{aligned}$$
(5.1)

Now we introduce a class of initial conditions \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) for which we will be able to develop integration by parts formulas.

Definition 5.3

((IC) \(_x\) and (IC) \(_v\)) We say that \(g: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) is in the class (IC) if the following conditions hold:

  1.

    g is continuous with polynomial growth: i.e. there exists \(q>0\) such that for all \((x,[\theta ]) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\): \(|g(x,[\theta ])| \le C (1+ |x|+ \Vert \theta \Vert _2)^q\).

  2.

    There exists a sequence of functions \((g_l)_{l \ge 1}\), \(g_l: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) with polynomial growth such that \(g_l \rightarrow g\) uniformly on compacts and \(\partial _x g_l\) exists and also has polynomial growth for each \(l \ge 1\).

  3.

    For each \(l \ge 1\) there exists a function \(G_l: {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) which is differentiable in x or in v, such that \(\partial _{\mu } g_l(x,\mu ,v) = \partial _x G_l(x,\mu ,v)\) or \(\partial _{\mu } g_l(x,\mu ,v) = \partial _v G_l(x,\mu ,v)\), respectively. Moreover, each \(G_l\) and its derivatives satisfy the following growth condition: there exist \(q>0\) and \(0\le r <1\) such that for all \((x,[\theta ],v) \in {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \times {\mathbb {R}}^N\):

    $$\begin{aligned} |h(x,[\theta ],v)| \le C \left( 1 + |x|^q + \Vert \theta \Vert _2^q + |v|^{r} \right) , \end{aligned}$$

    where h is \(G_l\), \(\partial _x G_l\) or \(\partial _v G_l\). In addition, we assume that for all \((x,\mu ,v)\) the pointwise limit \(\lim _{l \rightarrow \infty } G_l(x,\mu ,v)\) exists and the function G defined by \(G(x,\mu ,v):= \lim _{l \rightarrow \infty } G_l(x,\mu ,v)\) is continuous and satisfies the same growth condition.

If \(\partial _{\mu } g_l= \partial _x G_l\) we say g is in the class (IC) \(_x\). If \(\partial _{\mu } g_l = \partial _v G_l\), we say g is in the class (IC) \(_v\).

We give some examples of functions g in the class (IC).

Example 5.4

  1.

    Functions with no dependence on the measure:

    Suppose that \(g(x,\mu ) = \varphi (x)\) where \(\varphi \in {\mathcal {C}}_p({\mathbb {R}}^N;{\mathbb {R}})\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= 0\). So, g belongs to the class (IC) \(_x\) and G in this case would be \(G \equiv 0\).

  2.

    Centred random variables:

    Suppose that \(g(x,\mu ) = \varphi \left( x- \textstyle \int y \mu (dy)\right) \) where \(\varphi \in {\mathcal {C}}_p({\mathbb {R}}^N;{\mathbb {R}})\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= - \partial \varphi _l(x- \textstyle \int y \mu (dy) )\). So, g belongs to the class (IC) \(_x\) and G in this case would be \(G(x,\mu ,v) = - \varphi (x-\textstyle \int y \mu (dy))\).

  3.

    First order interaction:

    Suppose \(g(x,\mu ) := \textstyle \int \varphi (x,y) \mu (dy)\) where \(\varphi : {\mathbb {R}}^N \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is continuous with \(|\varphi (x,y)| \le C(1+ |x|^q + |y|^r)\) for some \(q>0\) and \(0 \le r <1\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= \partial _v \varphi _l(x,v)\). So, g belongs to the class (IC) \(_v\) and G in this case would be \(G(x,\mu ,v) = \varphi (x,v)\). Note, this example includes the case of convolutions where \(\varphi (x,y)= \varphi (x-y)\); a numerical sketch of this example follows the list.

  4.

    Second order interaction:

    Suppose \(g(x,\mu ) := \textstyle \int \varphi (x,y,z) \mu (dy) \mu (dz)\) where \(\varphi : {\mathbb {R}}^{3N} \rightarrow {\mathbb {R}}\) is continuous with \(|\varphi (x,y,z)| \le C(1+ |x|^q + |y|^r + |z|^r)\) for some \(q>0\) and \(0 \le r <1\). Then, let \((\varphi _l)_{l \ge 1}\) be a sequence of mollifications of \(\varphi \) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then, \(\partial _{\mu }g_l(x,\mu ,v)= \textstyle \int \left[ \partial _v \varphi _l(x,v,y) + \partial _v \varphi _l (x,y,v) \right] \mu (dy)\). So, g belongs to the class (IC) \(_v\) and G in this case would be

    $$\begin{aligned} G(x,\mu ,v) = \int \left[ \varphi (x,v,y) + \varphi (x,y,v) \right] \mu (dy). \end{aligned}$$
  5.

    Polynomials on the Wasserstein space:

    Suppose \(g(x,\mu ) = \textstyle \prod _{i=1}^n \int \varphi _i(x,y) \mu (dy) \), where \(n \ge 1\) and each \(\varphi _i: {\mathbb {R}}^N \times {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is continuous with \(|\varphi _i(x,y)| \le C(1+ |x|^q )\) for some \(q>0\). Then, let \((\varphi _{i,l})_{l \ge 1}\) be a sequence of mollifications of \(\varphi _i\) and \((g_l)_{l \ge 1}\) the corresponding functions defined in the same way. Then,

    $$\begin{aligned} \partial _{\mu }g_l(x,\mu ,v)= \sum _{j=1}^n \prod _{i=1,i \ne j}^n \left( \int \varphi _{i,l}(x,y) \mu (dy) \right) \partial _v \varphi _{j,l}(x,v). \end{aligned}$$

    Therefore g belongs to the class (IC) \(_v\) and G in this case would be

    $$\begin{aligned} G(x,\mu ,v)= \sum _{j=1}^n \prod _{i=1,i \ne j}^n \left( \int \varphi _i(x,y) \mu (dy) \right) \varphi _j(x,v). \end{aligned}$$
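The identification \(\partial _{\mu } g(x,\mu ,v) = \partial _v \varphi (x,v)\) in the first-order interaction example above can be checked numerically through the lift to \(L^2(\Omega )\): the Gâteaux derivative of \(\theta \mapsto {\mathbb {E}}[\varphi (x,\theta )]\) in a direction \(\gamma \) should equal the pairing \({\mathbb {E}}[\partial _v \varphi (x,\theta )\, \gamma ]\). A minimal sketch of ours (the kernel \(\varphi \) and all names are illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
x, M, h = 0.5, 10**6, 1e-4
theta = rng.normal(size=M)                  # samples of theta, so mu = [theta]
gamma = rng.normal(size=M)                  # perturbation direction in L^2(Omega)

phi  = lambda z, y: np.cos(z + y)           # interaction kernel (hypothetical)
dphi = lambda z, y: -np.sin(z + y)          # its derivative in the second argument

lift = (phi(x, theta + h * gamma).mean()
        - phi(x, theta - h * gamma).mean()) / (2 * h)    # lifted Gateaux derivative
pair = (dphi(x, theta) * gamma).mean()                   # E[ d_mu g(x,[theta],theta) gamma ]
print(lift, pair)                                        # agree up to O(h^2) and MC error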

Now, we introduce the hypotheses under which we will be able to prove existence and uniqueness of a solution to the PDE (1.2).

(H1):

(UE) holds, the coefficients \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\), and \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) is in the class (IC) \(_x\).

(H2):

(UE) holds, the coefficients \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N); {\mathbb {R}}^N)\) are also uniformly bounded, and \(g:{\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) is in the class (IC) \(_v\).

Lemma 5.5

Under either (H1) or (H2), for the function \(U(t,x,[\theta ]):= {\mathbb {E}} \left[ g\left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t\right] \right) \right] \), the derivative functions

$$\begin{aligned} (0,T]\times {\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N) \ni (t,x,[\theta ]) \mapsto&\left( \partial _x U(t,x,[\theta ]), \, \partial ^2_{x,x}U(t,x,[\theta ]) \right) \\ (0,T]\times {\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N)\times {\mathbb {R}}^N \ni (t,x,[\theta ],v) \mapsto&\left( \partial _{\mu }U(t,x,[\theta ],v),\right. \\&\left. \partial _v \partial _{\mu }U(t,x,[\theta ],v) \right) \end{aligned}$$

exist and are continuous. Moreover, for all compacts \(K \subset {\mathcal {P}}_2({\mathbb {R}}^N)\)

$$\begin{aligned} \sup _{[\theta ] \in K} {\mathbb {E}}\left[ \left| \partial _{\mu }U(t,x,[\theta ],\theta )\right| ^2 + \left| \partial _v \partial _{\mu }U(t,x,[\theta ],\theta ) \right| ^2 \right] < \infty . \end{aligned}$$

Proof

Under both (H1) and (H2), g is in the class (IC), so there is a sequence of functions \((g_l)_{l \ge 1}\) approximating g. Let \(U_l(t,x,[\theta ]):= {\mathbb {E}}\left[ g_l(X^{x, [\theta ]}_t,[X^{\theta }_t])\right] \). From Proposition 4.1 we know that for \(i,j \in \{1, \ldots , N\}\)

$$\begin{aligned} \partial ^i_x U_l(t,x,[\theta ])&= t^{-1/2} \, {\mathbb {E}}\left[ g_l\left( X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) I^1_{(i)}(1)(t,x,[\theta ]) \right] ,\\ \partial ^{(i,j)}_x U_l(t,x,[\theta ])&= t^{-1} \, {\mathbb {E}}\left[ g_l\left( X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) I^1_{(i,j)}(1)(t,x,[\theta ]) \right] . \end{aligned}$$

By the growth assumption on \(g_l\), Hölder’s inequality and the moment estimates already obtained for the processes \(X^{x, [\theta ]}_t,X^{\theta }_t\) and the Kusuoka–Stroock processes in (2.2), (2.4) and Proposition 6.10, we can show that the expectations above are bounded independently of \(l \ge 1\). By dominated convergence, we can take the limit in each equation. Now, each of the Kusuoka–Stroock processes appearing in the above representations for the derivatives is, by definition, jointly continuous in \((t,x,[\theta ])\) in \(L^p(\Omega )\), \(p \ge 1\). So is \((t,x,[\theta ]) \mapsto g(X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] )\), by Theorem 3.2 (which guarantees that \((t,x,[\theta ]) \mapsto X^{x, [\theta ]}_t\) is a Kusuoka–Stroock process) and the continuity of g.

To lighten notation, we restrict to the case \(N=1\) for the rest of this proof. First, we assume (H1) holds, so g is in the class (IC) \(_x\). Note that \(g_l\) satisfies the hypotheses of Lemma 5.2, which gives

$$\begin{aligned} \partial _{\mu }U_l(t,x,[\theta ],v) =&\,{\mathbb {E}}\left[ \partial g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \partial _{\mu } X^{x, [\theta ]}_t (v) \right] \nonumber \\&+ \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \partial _{x} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _{v} \widetilde{X}_t^{v,[\theta ]}\right. \nonumber \\&\left. +\, \partial _{x} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] . \end{aligned}$$
(5.2)

Now, we recall the following identity connecting \(\mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t\) and \( \partial _x X^{x,[\theta ]}_r\):

$$\begin{aligned} Id_N = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1}. \end{aligned}$$

So,

$$\begin{aligned}&\partial _{x} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _{v} \widetilde{X}_t^{v,[\theta ]} \\&\quad = \, \partial _{x} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\\&\qquad \times \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{v} \widetilde{X}_t^{v,[\theta ]} \\&\quad = \, \mathcal {\mathbf {D}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \\&\qquad \times \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{v} \widetilde{X}_t^{v,[\theta ]} \end{aligned}$$

and, applying Proposition 4.1 part 2, we get

$$\begin{aligned}&{\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \mathcal {\mathbf {D}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\quad \left. \times \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{v} \widetilde{X}_t^{v,[\theta ]} \right] \\&\quad = \, t^{-1/2} \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) I^2(1)(t,x,[\theta ]) \, \partial _{v} \widetilde{X}_t^{v,[\theta ]} \right] . \end{aligned}$$

Similarly,

$$\begin{aligned}&\partial _{x} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v)\\&\quad = \, \partial _{x} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\\&\qquad \times \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \\&\quad = \, \mathcal {\mathbf {D}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \\&\qquad \times \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \end{aligned}$$

and applying Proposition 4.1 part 2 again, we get

$$\begin{aligned}&{\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \mathcal {\mathbf {D}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\quad \left. \times \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1} \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] \\&\quad = \, t^{-1/2} \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, I^2(1)(t,x,[\theta ]) \, \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] . \end{aligned}$$

So, in this case, (5.2) can be rewritten as

$$\begin{aligned} \partial _{\mu }U_l(t,x,[\theta ],v) =&\, t^{-1/2} \, {\mathbb {E}}\bigg \{ g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) {\mathcal {I}}^1(1)(t,x,[\theta ],v) \nonumber \\&+\, \widetilde{{\mathbb {E}}} \big [ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) I^2(1)(t,x,[\theta ]) \, \partial _{v} \widetilde{X}_t^{v,[\theta ]} \nonumber \\&+\, G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, I^2(1)(t,x,[\theta ]) \, \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \big ] \bigg \}. \end{aligned}$$
(5.3)

To show that \(\textstyle \sup _{[\theta ] \in K} {\mathbb {E}}\left| \partial _{\mu }U(t,x,[\theta ],\theta )\right| ^2 < \infty \), we note that all processes on the right-hand side of (5.3) have moments of all orders bounded polynomially in \(\Vert \theta \Vert _2\), except \(\widetilde{X}^{\tilde{\theta }}_t\) in the final term. For the final term, by the growth conditions on \(G_l\),

$$\begin{aligned}&\left| {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, I^2(1)(t,x,[\theta ]) \, \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] \right| ^2 \\&\quad \le \left\| G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \right\| ^2_{L^{2/r}(\Omega \times \tilde{\Omega })} \left\| I^2(1)(t,x,[\theta ]) \right\| ^2_{L^{4/(1-r)}(\Omega \times \tilde{\Omega })} \, \\&\qquad \times \left\| \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right\| ^2_{L^{4/(1-r)}(\Omega \times \tilde{\Omega })} \\&\quad \le C \left( {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \left( 1 + | X^{x, [\theta ]}_t|^q+ \Vert X^{\theta }_t\Vert _2^q + |\widetilde{X}^{\tilde{\theta }}_t|^r \right) ^{2/r} \right] \right) ^r \, (1+|x|+\Vert \theta \Vert _2)^6 \\&\quad \le C {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \left( 1 + | X^{x, [\theta ]}_t|^{2q/r}+ \Vert X^{\theta }_t\Vert _2^{2q/r} + |\widetilde{X}^{\tilde{\theta }}_t|^2 \right) \right] \, (1+|x|+\Vert \theta \Vert _2)^6 \\&\quad \le C \left( 1 + |x|^{2q/r}+ \Vert \theta \Vert _2^{2q/r} + \Vert \theta \Vert _2^2 \right) \, (1+|x|+\Vert \theta \Vert _2)^6. \end{aligned}$$

Clearly this is bounded in \([\theta ]\) over compacts in \({\mathcal {P}}_2({\mathbb {R}}^N)\).

Now, we consider the derivative \(\partial _v \partial _{\mu } U_l\). We note that in the definition of \({\mathcal {I}}^1(1)(t,x,[\theta ],v)\), the only term depending on v is \(\partial _{\mu } X^{x, [\theta ]}_t(v)\). Since \(V_0, \ldots , V_d\in {\mathcal {C}}^{3,3}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\) by assumption, \( \partial _v {\mathcal {I}}^1(1)(t,x,[\theta ],v)\) exists and we obtain:

$$\begin{aligned} \partial _v\partial _{\mu }U_l(t,x,[\theta ],v) =&\, t^{-1/2} \, {\mathbb {E}}\bigg \{ g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \, \partial _v {\mathcal {I}}^1(1)(t,x,[\theta ],v) \nonumber \\&+ \widetilde{{\mathbb {E}}} \bigg [ \partial _v G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) I^2(1)(t,x,[\theta ]) \, \left( \partial _{v} \widetilde{X}_t^{v,[\theta ]}\right) ^2 \nonumber \\&+ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) I^2(1)(t,x,[\theta ]) \, \partial ^2_{v} \widetilde{X}_t^{v,[\theta ]} \nonumber \\&+ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, I^2(1)(t,x,[\theta ]) \, \partial _v \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \bigg ] \bigg \}. \end{aligned}$$
(5.4)

We again use that

$$\begin{aligned} Id_N = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1}. \end{aligned}$$

Of course, this identity also holds for ‘tilde’ processes defined on \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \) and we denote by \(\widetilde{\mathcal {\mathbf {D}}}\) the Malliavin derivative on this space. So, using the above identity and the Malliavin chain rule, we obtain

$$\begin{aligned}&\partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) I^2(1)(t,x,[\theta ]) \, \left( \partial _{v} \widetilde{X}_t^{v,[\theta ]}\right) ^2 \\&\quad =\partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \widetilde{\mathcal {\mathbf {D}}}_r \widetilde{X}^{v,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \\&\qquad \times \left( \widetilde{X}^{v,[\theta ]}_r, \left[ \widetilde{X}_r^{\theta }\right] \right) \partial _x \widetilde{X}^{v,[\theta ]}_r I^2(1)(t,x,[\theta ]) \, \partial _{v} \widetilde{X}_t^{v,[\theta ]}\\&\quad = \widetilde{\mathcal {\mathbf {D}}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\\&\qquad \times \left( \widetilde{X}^{v,[\theta ]}_r, \left[ \widetilde{X}_r^{\theta }\right] \right) \partial _x \widetilde{X}^{v,[\theta ]}_r I^2(1)(t,x,[\theta ]) \, \partial _{v} \widetilde{X}_t^{v,[\theta ]} \end{aligned}$$

and, applying the integration by parts formula in Proposition 4.1 on the space \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \), we get

$$\begin{aligned}&{\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \widetilde{\mathcal {\mathbf {D}}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\quad \left. \times \left( \widetilde{X}^{v,[\theta ]}_r, \left[ \widetilde{X}_r^{\theta }\right] \right) \partial _x \widetilde{X}^{v,[\theta ]}_r I^2(1)(t,x,[\theta ]) \, \partial _{v} \widetilde{X}_t^{v,[\theta ]} \right] \\&\quad =t^{-1/2} \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \widetilde{I}^2 \left( \partial _{x} \widetilde{X}_{\cdot }^{\cdot ,\cdot }\right) (t,v,[\theta ]) \, I^2(1)(t,x,[\theta ]) \right] . \end{aligned}$$

So, (5.4) becomes

$$\begin{aligned}&\partial _v\partial _{\mu }U_l(t,x,[\theta ],v) = t^{-1} \, {\mathbb {E}}\bigg \{ \sqrt{t} \, g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \, \partial _v {\mathcal {I}}^1(1)(t,x,[\theta ],v) \nonumber \\&\quad + \widetilde{{\mathbb {E}}} \big [ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \, I^2(1)(t,x,[\theta ]) \nonumber \\&\quad \times \left( \widetilde{I}^2 \left( \partial _{x} \widetilde{X}_{\cdot }^{\cdot ,\cdot }\right) (t,v,[\theta ]) + \sqrt{t} \, \partial ^2_{v} \widetilde{X}_t^{v,[\theta ]} \right) \nonumber \\&\quad + \sqrt{t} \, G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, I^2(1)(t,x,[\theta ]) \, \partial _v \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \big ] \bigg \}. \end{aligned}$$
(5.5)

We can check that each expectation above is finite by using the growth conditions on the functions \(g_l\), \(G_l\) and their derivatives, along with Hölder’s inequality and the moment estimates on the processes involved, just as before. In particular, note that we can obtain estimates on (5.3) and (5.5) independently of l. This allows us to use dominated convergence to pass to the limit in these equations.

Now, suppose that (H2) holds instead of (H1). Under (H2), g is in the class (IC) \(_v\). By Lemma 5.2, we have an expression for \(\partial _{\mu }U_l\) and, using the special form of \(\partial _{\mu }g_l\) for initial conditions in the class (IC) \(_v\), we get

$$\begin{aligned} \partial _{\mu }U_l(t,x,[\theta ],v) =&\, {\mathbb {E}}\left[ \partial g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \partial _{\mu } X^{x, [\theta ]}_t (v) \right] \nonumber \\&+ \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _{v} \widetilde{X}_t^{v,[\theta ]} \right. \nonumber \\&\left. +\, \partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] . \end{aligned}$$
(5.6)

We again use that

$$\begin{aligned} Id_N = \mathcal {\mathbf {D}}_r X^{x,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( X^{x,[\theta ]}_r, \left[ X_r^{\theta }\right] \right) \partial _x X^{x,[\theta ]}_r \left( \partial _x X^{x,[\theta ]}_t\right) ^{-1}. \end{aligned}$$

Of course, this identity also holds for ‘tilde’ processes defined on \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \) and we denote by \(\widetilde{\mathcal {\mathbf {D}}}\) the Malliavin derivative on this space. So, using the above identity and the Malliavin chain rule, we obtain

$$\begin{aligned}&\partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _{v} \widetilde{X}_t^{v,[\theta ]} \\&\quad = \partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \widetilde{\mathcal {\mathbf {D}}}_r \widetilde{X}^{v,[\theta ]}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( \widetilde{X}^{v,[\theta ]}_r, \left[ \widetilde{X}_r^{\theta }\right] \right) \partial _x \widetilde{X}^{v,[\theta ]}_r\\&\quad = \widetilde{\mathcal {\mathbf {D}}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( \widetilde{X}^{v,[\theta ]}_r, \left[ \widetilde{X}_r^{\theta }\right] \right) \partial _x \widetilde{X}^{v,[\theta ]}_r \end{aligned}$$

and, applying the integration by parts formula in Proposition 4.1 on the space \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \), we get

$$\begin{aligned}&{\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \widetilde{\mathcal {\mathbf {D}}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \left( \widetilde{X}^{v,[\theta ]}_r, \left[ \widetilde{X}_r^{\theta }\right] \right) \partial _x \widetilde{X}^{v,[\theta ]}_r \right] \\&\quad = t^{-1/2} \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \widetilde{I}^1(1)(t,v,[\theta ])\right] . \end{aligned}$$

Similarly,

$$\begin{aligned}&\partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v)\\&\quad = \partial _{v} G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{\tilde{\theta }}_t \right) \widetilde{\mathcal {\mathbf {D}}}_r \widetilde{X}^{\tilde{\theta }}_t \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\\&\qquad \times \left( \widetilde{X}^{\tilde{\theta }}_r, \left[ X_r^{\theta }\right] \right) \partial _x \widetilde{X}^{\tilde{\theta },[\theta ]}_r \left( \partial _x \widetilde{X}^{\tilde{\theta },[\theta ]}_t\right) ^{-1} \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \\&\quad = \widetilde{\mathcal {\mathbf {D}}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1} \\&\qquad \times \left( \widetilde{X}^{\tilde{\theta }}_r, \left[ X_r^{\theta }\right] \right) \partial _x \widetilde{X}^{\tilde{\theta },[\theta ]}_r \left( \partial _x \widetilde{X}^{\tilde{\theta },[\theta ]}_t\right) ^{-1} \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \end{aligned}$$

and applying the integration by parts formula in Proposition 4.2 on the space \(\left( \tilde{\Omega }, \tilde{\mathcal {F}}, \tilde{\mathbb {P}}\right) \), we get

$$\begin{aligned}&{\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \widetilde{\mathcal {\mathbf {D}}}_r \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \right] \sigma ^{\top } \left( \sigma \sigma ^{\top }\right) ^{-1}\right. \\&\quad \times \left. \left( \widetilde{X}^{\tilde{\theta }}_r, \left[ X_r^{\theta }\right] \right) \partial _x \widetilde{X}^{\tilde{\theta },[\theta ]}_r \left( \partial _x \widetilde{X}^{\tilde{\theta },[\theta ]}_t\right) ^{-1} \partial _{\mu } \widetilde{X}_t^{\tilde{\theta },[\theta ]}(v) \right] \\&\quad = t^{-1/2} \, {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, \widetilde{{\mathcal {I}}}^1(1)(t,\tilde{\theta },[\theta ],v) \right] . \end{aligned}$$

Here we explain the reason for insisting that the coefficients \(V_0, \ldots , V_d\) are bounded: the Kusuoka–Stroock process \(\widetilde{{\mathcal {I}}}^1(1)(t,x,[\theta ],v)\) is bounded in \(L^p(\tilde{\Omega })\) uniformly in \((x,[\theta ],v)\). This allows us to evaluate at \(x=\tilde{\theta }\) and take expectation with respect to \(\widetilde{{\mathbb {E}}}\). If the coefficients are not bounded, the bound we have on \(\Vert \widetilde{{\mathcal {I}}}^1(1)(t,x,[\theta ],v)\Vert _p\) grows like \(|x|^4\) according to Proposition 3.4 and we cannot guarantee that \(\textstyle {\mathbb {E}}\widetilde{{\mathbb {E}}} \left[ \widetilde{{\mathcal {I}}}^1(1)(t,\tilde{\theta },[\theta ],v) \right] \) is finite.

Putting the above integration by parts formulas together and using Proposition 4.2 on the space \(\left( \Omega , {\mathcal {F}}, {\mathbb {P}}\right) \) for the first term on the right hand side of (5.6), we see that it can be re-written as

$$\begin{aligned}&\partial _{\mu }U_l(t,x,[\theta ],v) = t^{-1/2} \, {\mathbb {E}}\bigg \{ g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) {\mathcal {I}}^1(1)(t,x,[\theta ],v) \nonumber \\&\quad + \widetilde{{\mathbb {E}}} \left[ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \widetilde{I}^1(1)(t,v,[\theta ]) \right. \nonumber \\&\quad \left. + \, G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, \widetilde{{\mathcal {I}}}^1(1)(t,\tilde{\theta },[\theta ],v) \right] \bigg \} \end{aligned}$$
(5.7)

and we note the RHS does not depend on derivatives of the functions g and G. Also,

$$\begin{aligned}&\partial _v \partial _{\mu }U_l(t,x,[\theta ],v) = t^{-1/2} \, {\mathbb {E}}\bigg \{ g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \partial _v {\mathcal {I}}^1(1)(t,x,[\theta ],v) \nonumber \\&\quad + \widetilde{{\mathbb {E}}} \bigg [ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _v \widetilde{I}^1(1)(t,v,[\theta ]) \nonumber \\&\quad + G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, \partial _v \widetilde{{\mathcal {I}}}^1(1)(t,\tilde{\theta },[\theta ],v) \nonumber \\&\quad + \partial _v G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \, \partial _v \widetilde{X}^{v,[\theta ]}_t \, \widetilde{I}^1(1)(t,v,[\theta ]) \bigg ] \bigg \} \end{aligned}$$
(5.8)

so, applying Proposition 4.1, we get

$$\begin{aligned}&\partial _v \partial _{\mu }U_l(t,x,[\theta ],v) = t^{-1/2} \, {\mathbb {E}}\bigg \{ g_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] \right) \partial _v {\mathcal {I}}^1(1)(t,x,[\theta ],v) \nonumber \\&\quad + \widetilde{{\mathbb {E}}} \bigg [ G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \partial _v \widetilde{I}^1(1)(t,v,[\theta ]) \nonumber \\&\quad + G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] ,\widetilde{X}^{\tilde{\theta }}_t \right) \, \partial _v \widetilde{{\mathcal {I}}}^1(1)(t,\tilde{\theta },[\theta ],v) \nonumber \\&\quad + t^{-1/2}\, G_l \left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t \right] , \widetilde{X}^{v,[\theta ]}_t \right) \, \widetilde{I}^1 \left( \widetilde{I}^1(1)\right) (t,v,[\theta ]) \bigg ] \bigg \}. \end{aligned}$$
(5.9)

\(\square \)

Remark 5.6

Immediately from the proof of Lemma 5.5 one can deduce the following gradient bounds for the function \(U(t,x,[\theta ]):= {\mathbb {E}}\left[ g\left( X^{x, [\theta ]}_t, \left[ X^{\theta }_t\right] \right) \right] \) under the same conditions (H1) or (H2): there exist positive constants C and q such that for all \((t,x,[\theta ]) \in (0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\) and \(v \in {\mathbb {R}}^N\),

$$\begin{aligned} \left| \partial ^i_x U(t,x,[\theta ])\right| , \left| \partial _{\mu } U(t,x,[\theta ],v)\right|&\le \ Ct^{-1/2} (1+|x|+\Vert \theta \Vert _2)^q,\\ \left| \partial ^{(i,j)}_x U(t,x,[\theta ])\right| , \left| \partial _{v}\partial _{\mu } U(t,x,[\theta ],v)\right|&\le Ct^{-1} (1+|x|+\Vert \theta \Vert _2)^q. \end{aligned}$$

We now define what we mean by a classical solution to the PDE (1.2).

Definition 5.7

Suppose that \(U: [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N) \rightarrow {\mathbb {R}}\) satisfies (1.2) and

$$\begin{aligned} (0,T]\times {\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N) \ni (t,x,[\theta ]) \mapsto&\left( \partial _x U(t,x,[\theta ]), \, \partial ^2_{x,x}U(t,x,[\theta ]) \right) \\ (0,T]\times {\mathbb {R}}^N\times {\mathcal {P}}_2({\mathbb {R}}^N)\times {\mathbb {R}}^N \ni (t,x,[\theta ],v) \mapsto&\left( \partial _{\mu }U(t,x,[\theta ],v), \,\right. \\&\left. \partial _v \partial _{\mu }U(t,x,[\theta ],v) \right) \end{aligned}$$

exist and are continuous. Moreover, suppose that for all \((x,\theta ) \in {\mathbb {R}}^N \times L^2(\Omega )\)

$$\begin{aligned} \lim _{(t,y,[\gamma ]) \rightarrow (0,x,[\theta ]) } U(t,y,[\gamma ]) = g(x,[\theta ]). \end{aligned}$$
(5.10)

Then we say that U is a classical solution to the PDE (1.2).

Theorem 5.8

Suppose that either (H1) or (H2) holds. Then

$$\begin{aligned} U(t,x,[\theta ]):= {\mathbb {E}}\left( g\left( X_t^{x, [\theta ]},\left[ X^{\theta }_t \right] \right) \right) \end{aligned}$$

is a classical solution of the PDE (1.2). Moreover, U is unique among all of the classical solutions satisfying the polynomial growth condition \(\left| U(t,x,[\theta ])\right| \le C (1+|x|+\Vert \theta \Vert _2)^q\) for some \(q>0\) and all \((t,x,[\theta ]) \in [0,T] \times {\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N)\).

Proof

Existence. To prove continuity at the boundary, i.e. condition (5.10), we use the continuity of g and the fact that

$$\begin{aligned} \left\| X^{\theta }_t - \theta \right\| _2 + \left\| X^{x, [\theta ]}_t - x \right\| _2 \rightarrow 0 \quad \text { as } t \rightarrow 0, \end{aligned}$$

which follows from (2.5).

Now, we note that by the flow property we have, for \(h >0\),

$$\begin{aligned} \left( X_{t+h}^{x,[\theta ]}, X_{t+h}^{\theta } \right) = \left( X_t^{X_{h}^{x,[\theta ]},[X_{h}^{\theta }]}, X_t^{X_h^{\theta }} \right) \end{aligned}$$

so that,

$$\begin{aligned} U(t+h,x,[\theta ])&= {\mathbb {E}}\left[ g \left( X_{t+h}^{x,[\theta ]},\left[ X^{\theta }_{t+h}\right] \right) \right] \\&= {\mathbb {E}}\left[ {\mathbb {E}}\left\{ \left. g \left( X_t^{X_h^{x,[\theta ]},\left[ X_h^{\theta } \right] }, \left[ X_t^{X_h^{\theta }}\right] \right) \right| \mathcal {F}_h \right\} \right] \\&= {\mathbb {E}}\, U\left( t,X_h^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) . \end{aligned}$$

Hence,

$$\begin{aligned} U(t+h,x,[\theta ]) - U(t,x,[\theta ]) =&\, {\mathbb {E}}\, U\left( t,X_h^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) - U(t,x,[\theta ]) \nonumber \\ =&\left\{ U\left( t,x,\left[ X_h^{\theta } \right] \right) - U(t,x,[\theta ]) \right\} \nonumber \\&+ {\mathbb {E}}\left\{ U\left( t,X_h^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) - U\left( t,x,\left[ X_h^{\theta } \right] \right) \right\} . \end{aligned}$$
(5.11)

The idea is to expand the first term using the chain rule introduced in [12] and the second term using Itô’s formula. Dividing by h, letting \(h \rightarrow 0\), and using the continuity of the terms appearing in the expansions will then show that U indeed solves the PDE (1.2).
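Although not needed for the argument, it may help to see the probabilistic representation of U in concrete numerical terms. The following is a minimal interacting-particle Euler sketch in dimension one, under illustrative assumptions of our own: the callables b, s and g stand in for \(V_0\), \(\sigma \) and g, with all measure arguments replaced by the empirical particle cloud.

```python
import numpy as np

def U_particle(x, theta_sample, b, s, g, t, n_steps, rng):
    """Interacting-particle Euler approximation (dimension one) of
    U(t, x, [theta]) = E[ g(X_t^{x,[theta]}, [X_t^theta]) ].
    Y is a particle cloud approximating the law of X_s^theta; Xx holds
    copies started at x which read the cloud's empirical measure,
    mimicking the decoupled equation."""
    dt = t / n_steps
    Y = np.asarray(theta_sample, dtype=float).copy()  # cloud ~ theta
    Xx = np.full_like(Y, float(x))                    # copies of X^{x,[theta]}
    for _ in range(n_steps):
        dB_Y = rng.normal(0.0, np.sqrt(dt), Y.shape)
        dB_X = rng.normal(0.0, np.sqrt(dt), Xx.shape)
        Xx = Xx + b(Xx, Y) * dt + s(Xx, Y) * dB_X  # reads current cloud
        Y = Y + b(Y, Y) * dt + s(Y, Y) * dB_Y      # cloud evolves itself
    return np.mean(g(Xx, Y))

# Toy coefficients: attraction towards the mean of the law, unit noise.
rng = np.random.default_rng(0)
b = lambda y, cloud: -(y - cloud.mean())
s = lambda y, cloud: np.ones_like(y)
g = lambda y, cloud: np.cos(y) + cloud.mean() ** 2
print(U_particle(0.3, rng.normal(size=50_000), b, s, g, 1.0, 200, rng))
```

By the flow property noted above, running this scheme from 0 to \(t+h\) agrees (up to discretisation and Monte Carlo error) with restarting it at time h from the simulated pair \((X_h^{x,[\theta ]}, [X_h^{\theta }])\), which is exactly what the decomposition (5.11) exploits.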

Lemma 5.5 guarantees that we can apply the chain rule proved in [12]. We apply it to the function \(U(t,x,\cdot )\) to get

$$\begin{aligned}&U\left( t,x,\left[ X_h^{\theta } \right] \right) - U(t,x,[\theta ]) = \int _0^h {\mathbb {E}}\left[ \sum _{i=1}^N V_0^i\left( X_r^{ \theta },\left[ X_r^{ \theta } \right] \right) \, \partial _{\mu } U\left( t,x,\left[ X_r^{ \theta }\right] ,X_r^{ \theta } \right) _i \right] \, dr \\&\quad + \frac{1}{2} \int _0^h {\mathbb {E}}\left[ \sum _{i,j=1}^N \left[ \sigma \sigma ^{\top } \left( X_r^{ \theta } ,\left[ X_r^{ \theta } \right] \right) \right] _{i,j} \, \partial _{v_j} \partial _{\mu } U\left( t,x,\left[ X_r^{ \theta } \right] ,X_r^{ \theta } \right) _i \right] \, dr. \end{aligned}$$

Itô’s formula applied to \(U(t,\cdot ,[X_h^{\theta }])\) gives

$$\begin{aligned}&U\left( t,X_h^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) - U\left( t,x,\left[ X_h^{\theta } \right] \right) \\&\quad = \int _0^h \sum _{i=1}^N V_0^i\left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \, \partial _{x_i} U\left( t,X_r^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) \, dr \\&\quad + \frac{1}{2} \int _0^h \sum _{i,j=1}^N \left[ \sigma \sigma ^{\top } \left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \right] _{i,j} \, \partial _{x_i} \partial _{x_j} U\left( t,X_r^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) \, dr \\&\quad + \int _0^h \sum _{j=1}^d \sum _{i=1}^N V_j^i\left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \, \partial _{x_i} U\left( t,X_r^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) \, dB^j_r. \end{aligned}$$

We want the final term to be square integrable, so that it is a true martingale with zero expectation. We have that for some \(q>0\),

$$\begin{aligned} \left| \partial _{x_i} U(t,x,[\theta ]) \right|&\le t^{-1/2} \left\| g\left( X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) \right\| _2 \left\| I^1_{(i)}(1)(t,x,[\theta ])\right\| _2 \\&\le C\, t^{-1/2} \left\| \left( 1+ \left| X^{x, [\theta ]}_t \right| + \Vert X^{\theta }_t\Vert _2 \right) ^q \right\| _2 \left( 1+ |x|+\Vert \theta \Vert _2 \right) ^3 \\&\le C \, t^{-1/2} \left( 1+ |x|+\Vert \theta \Vert _2\right) ^{q+3}, \end{aligned}$$

so that for all \(p \ge 1\),

$$\begin{aligned} {\mathbb {E}}\left| \partial _{x_i} U \left( t,X_r^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) \right| ^p&\le C\,t^{-p/2} \, {\mathbb {E}}\left( 1+ \left| X_r^{x,[\theta ]} \right| + \left\| X_h^{\theta }\right\| _2\right) ^{p(q+3)} \\&\le C\,t^{-p/2} \left( 1+ \left| x \right| + \left\| \theta \right\| _2\right) ^{p(q+3)}, \end{aligned}$$

and by the linear growth of \(V_j^i\), we have

$$\begin{aligned} {\mathbb {E}}\left| V_j^i\left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \right| ^p \le C (1+|x|+\Vert \theta \Vert _2)^p. \end{aligned}$$

Hence, the final term is indeed square integrable, and has zero expectation.

Putting the expansions back into (5.11), we get

$$\begin{aligned}&U(t+h,x,[\theta ]) - U(t,x,[\theta ]) = \int _0^h {\mathbb {E}}\left[ \sum _{i=1}^N V_0^i\left( X_r^{ \theta } ,\left[ X_r^{ \theta } \right] \right) \partial _{\mu } U\left( t,x,\left[ X_r^{ \theta } \right] ,X_r^{ \theta } \right) _i \right] dr \\&\quad + \frac{1}{2} \int _0^h {\mathbb {E}}\left[ \sum _{i,j=1}^N \left[ \sigma \sigma ^{\top } \left( X_r^{ \theta } ,\left[ X_r^{ \theta } \right] \right) \right] _{i,j} \, \partial _{v_j} \partial _{\mu } U\left( t,x,\left[ X_r^{ \theta } \right] ,X_r^{\theta } \right) _i \right] \, dr \\&\quad + {\mathbb {E}}\int _0^h \sum _{i=1}^N V_0^i\left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \, \partial _{x_i} U\left( t,X_r^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) \, dr \\&\quad + \frac{1}{2} {\mathbb {E}}\int _0^h \sum _{i,j=1}^N \left[ \sigma \sigma ^{\top } \left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \right] _{i,j} \, \partial _{x_i} \partial _{x_j} U\left( t,X_r^{x,[\theta ]},\left[ X_h^{\theta } \right] \right) \, dr. \end{aligned}$$

By the earlier results on the continuity of U and its derivatives, together with the assumed continuity of the coefficients \(V_0, \ldots , V_d\), the integrands on the right-hand side are continuous. Dividing by h and letting \(h \searrow 0\), we see that U solves the PDE (1.2).

Uniqueness. Fix any \(t \in (0,T]\) and any classical solution W of polynomial growth. For \(\delta \in (0,t)\), write

$$\begin{aligned} W(t,x,[\theta ]) - W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) =&W(t,x,[\theta ]) - W\left( \delta ,X^{x, [\theta ]}_{t-\delta },\left[ X^{\theta }_{t-\delta }\right] \right) \\&+ W\left( \delta ,X^{x, [\theta ]}_{t-\delta },\left[ X^{\theta }_{t-\delta }\right] \right) \\&- W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) . \end{aligned}$$

By the polynomial growth of W, this is square integrable. Now we expand the process \((W(t-s,X^{x, [\theta ]}_s,[X^{\theta }_s]))_{s \in [\delta ,t]}\) and use that W is a solution of the PDE (1.2), so that the drift is zero, to get

$$\begin{aligned}&W(t,x,[\theta ]) - W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) \\&\quad = \sum _{j=1}^d \sum _{i=1}^N \int _{\delta }^t V_j^i\left( X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \, \partial _{x_i} W\left( t-r,X_r^{x,[\theta ]},\left[ X_r^{\theta }\right] \right) \, dB^j_r \\&\quad + W\left( \delta ,X^{x, [\theta ]}_{t-\delta },[X^{\theta }_{t-\delta }]\right) - W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) . \end{aligned}$$

As we have already noted, this is square-integrable, so the stochastic integral is a true martingale with zero expectation. So taking expectation in the above expansion, we get:

$$\begin{aligned} W(t,x,[\theta ]) - {\mathbb {E}}W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) =&{\mathbb {E}}\left[ W\left( \delta ,X^{x, [\theta ]}_{t-\delta },\left[ X^{\theta }_{t-\delta }\right] \right) \right. \\&\left. -W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) \right] . \end{aligned}$$

Now, sending \(\delta \searrow 0\) and using the continuity of W at the boundary (condition (5.10) in the definition of a classical solution), the right-hand side vanishes, and we obtain

$$\begin{aligned} W(t,x,[\theta ]) = {\mathbb {E}}\, W\left( 0,X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) = {\mathbb {E}}\left[ g \left( X^{x, [\theta ]}_t,\left[ X^{\theta }_t \right] \right) \right] , \end{aligned}$$

which completes the proof. \(\square \)

6 Application to the density function

In this section, we apply the integration by parts formulae to the study of the density function \(p(t,x,z)\) of the McKean–Vlasov SDE started from a fixed point, \(X_t^{x,\delta _x}\), at a fixed time \(t \in [0,T]\). Throughout this section, we assume that (UE) holds and \(V_0, \ldots , V_d\in {\mathcal {C}}^{k,k}_{b,Lip}({\mathbb {R}}^N \times {\mathcal {P}}_2({\mathbb {R}}^N);{\mathbb {R}}^N)\). We can consider \(X^{x, [\theta ]}_t\) as the solution of a classical SDE with time-dependent coefficients. Hence, under (UE), the smoothness of its density (call it \(q(t,x,[\theta ],\cdot )\)) has been studied in the classical work of Friedman [16]. Since \(p(t,x,z)=q(t,x,\delta _x,z)\), Friedman’s results also establish the smoothness of \(p(t,x,z)\) in the forward variable, z. However, they do not cover the smoothness of the function \(p(t,x,z)\) in the backward variable, x. The density \(p(t,x,z)\) has also been studied by Antonelli and Kohatsu-Higa in [1] under a Hörmander condition on the coefficients. In that case, they establish smoothness of the density in the forward variable, z, but do not provide estimates on its derivatives. The theorem which follows establishes the smoothness of \(p(t,x,z)\) in the variables \((x,z)\), and we also obtain estimates on its derivatives.

Theorem 6.1

Let \(\alpha , \beta \) be multi-indices on \(\{1, \ldots , N\}\) and let \(k \ge |\alpha |+|\beta |+N+2\). Then, for all \(t \in (0,T]\) and \(\theta \in L^2(\Omega )\), \(X_t^{x,\delta _x}\) has a density \(p(t,x, \cdot )\) such that \((x,z) \mapsto \partial _x^{\alpha } \, \partial _z^{\beta }p(t,x, z) \) exists and is continuous. Moreover, there exists a constant C which depends on T, N and bounds on the coefficients, such that for all \(t \in (0,T]\)

$$\begin{aligned} \left| \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z) \right|&\le C \, (1+ |x|)^{\mu } \, t^{- \nu } , \end{aligned}$$
(6.1)

where \( \mu = 4|\alpha |+ 3 |\beta | + 3 N\) and \( \nu = \textstyle \frac{1}{2} (N + | \alpha | + | \beta | )\). If \(V_0, \ldots , V_d\) are bounded then the following estimate holds

$$\begin{aligned} \left| \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z) \right|&\le C \, t^{- \nu } \, \exp \left( - C \, \frac{|z-x|^2}{t} \right) . \end{aligned}$$
(6.2)

Proof

Let \(\eta = (1,2, \ldots , N)\) and introduce the multi-dimensional indicator function \( \mathbf {1}_{ \{ z_0>z \} } := \textstyle \prod _{i=1}^N \mathbf {1}_{ \{ z_0^i>z^i \} } . \) For any \(g \in {\mathcal {C}}^{\infty }_0({\mathbb {R}}^N;{\mathbb {R}})\) the function f defined by

$$\begin{aligned} f(z_0) := \int _{{\mathbb {R}}^N} g (z) \mathbf {1}_{\{z_0>z \} } \, dz \end{aligned}$$
(6.3)

is in \( {\mathcal {C}}^{\infty }_p({\mathbb {R}}^N;{\mathbb {R}})\) and satisfies \(\partial ^{\eta } f = g\); for instance, when \(N=1\), f is simply the antiderivative \(f(z_0)=\int _{-\infty }^{z_0} g(z) \, dz\). We first focus on \(p(t,x,\cdot )\), the density of \(X_t^{x,\delta _x}\), and compute:

$$\begin{aligned} \partial _x^{\alpha } \, {\mathbb {E}}\left[ (\partial ^{\beta } g)\left( X_t^{x,\delta _x} \right) \right]&= \partial _x^{\alpha } \, {\mathbb {E}}\left[ (\partial ^{\beta * \eta } f)\left( X_t^{x,\delta _x} \right) \right] \nonumber \\&= t^{-(|\eta | + |\beta |+|\alpha | )/2} \, {\mathbb {E}}[ f\left( X_t^{x,\delta _x} \right) I^2_{\beta *\eta }(J_{\alpha }(1))(t,x)] \nonumber \\&= t^{\frac{-(N + |\beta |+ | \alpha |)}{2}} \, {\mathbb {E}}\left[ \left( \int _{{\mathbb {R}}^N} g(z) \mathbf {1}_{ \{X_t^{x,\delta _x}> z \} } \, dz \right) \, I^2_{\beta *\eta }(J_{\alpha }(1))(t,x) \right] \nonumber \\&= t^{\frac{-(N + |\beta |+ | \alpha | )}{2}} \, \int _{{\mathbb {R}}^N} g (z) \, {\mathbb {E}}\left[ \mathbf {1}_{ \{X_t^{x,\delta _x} > z \} } I^2_{\beta *\eta }(J_{\alpha }(1))(t,x) \right] \, dz, \end{aligned}$$
(6.4)

where we have used at each step respectively: \(\partial ^{\eta } f = g\); Corollary 4.5; Eq. (6.3), and Fubini’s theorem. It then follows that, for any \(R>0\) and \(t \in (0,T]\), there exists \(C=C(R,t)>0\) such that

$$\begin{aligned} \sup _{|x|\le R} \left( \left| \partial _x^{\alpha } \, {\mathbb {E}}\left[ (\partial ^{\beta } g)(X^x_t)\right] \right| + \left| \partial _x^{\alpha } \, {\mathbb {E}}\left[ (\partial ^{\beta } g)\left( X^{x, [\theta ]}_t \right) \right] \right| \right) \le C \, \Vert g\Vert _{\infty }. \end{aligned}$$

Then, it follows from a result of Taniguchi [35, Lemma 3.1] that \(X^{x,\delta _x}_t\) has a density function \(p(t,x,\cdot )\) and that \(\partial _x^{\alpha } \, \partial _z^{\beta }p(t,x, z)\) exists. Once we know that a smooth density exists, it follows from (6.4) that we can identify \( \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z)\) as

$$\begin{aligned} \partial _x^{\alpha } \, \partial _z^{\beta } p(t,x,z)&= t^{\frac{-(N + |\beta |+ | \alpha | )}{2}} \, (-1)^{|\beta |} \, {\mathbb {E}}\left[ \mathbf {1}_{ \{X_t^{x,\delta _x} > z \} } I^2_{\beta *\eta }(J_{\alpha }(1))(t,x) \right] . \end{aligned}$$
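Taken with \(\alpha = \beta = \emptyset \), this identity formally reduces to \(p(t,x,z) = t^{-N/2} \, {\mathbb {E}}\left[ \mathbf {1}_{ \{X_t^{x,\delta _x} > z \} } \, I^2_{\eta }(1)(t,x) \right] \), which suggests a kernel-free Monte Carlo estimator of the density. A minimal sketch, assuming the paths and the corresponding weights have already been simulated (all names below are our own):

```python
import numpy as np

def density_estimate(z, samples, weights, t):
    """Kernel-free Monte Carlo estimate of p(t, x, z) based on the
    identity above with alpha = beta = 0:
        p(t,x,z) = t^{-N/2} * E[ 1_{X_t > z} * I^2_eta(1)(t,x) ].
    samples: simulated copies of X_t^{x,delta_x}, shape (M, N);
    weights: the IBP weights I^2_eta(1)(t,x), one per path, shape (M,),
    treated as given since their construction appears earlier."""
    samples = np.asarray(samples)
    N = samples.shape[1]
    indicator = np.all(samples > np.asarray(z), axis=1)  # 1_{X_t > z}
    return t ** (-N / 2) * np.mean(indicator * weights)
```

The same recipe with nonempty \(\alpha \), \(\beta \) (and the factor \((-1)^{|\beta |}\)) estimates the mixed derivatives of p directly, again without differentiating any simulated quantity.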

Now, the following estimate is a consequence of each term’s membership of the Kusuoka–Stroock class, as guaranteed by Proposition 3.4 and Corollary 4.5:

$$\begin{aligned} \Vert I^2_{\beta *\eta }(J_{\alpha }(1))(t,x) \Vert _{p}&\le C\, (1+|x|)^{\mu }. \end{aligned}$$

This proves the estimate (6.1). In addition, if \(V_0, \ldots , V_d\) are bounded, we can estimate

$$\begin{aligned} \left\| \mathbf {1}_{ \{X_t^{x,\delta _x}> z \} } \right\| _{p}^p =&\, {\mathbb {P}}\left( {\displaystyle \bigcap _{i=1}^N} \left\{ \left( X_t^{x,\delta _x} \right) ^i> z^i\right\} \right) \\ \le&\min _{i=1, \ldots , N} {\mathbb {P}}\left( \left( X_t^{x,\delta _x} \right) ^i> z^i \right) \\ =&\min _{i=1, \ldots , N} {\mathbb {P}}\left( \sum _{j=1}^d \int _0^t V_j^i\left( X_s^{x,\delta _x},\left[ X_s^{x,\delta _x}\right] \right) dB^j_s > z^i \right. \\&\left. -\, x^i - \int _0^t V_0^i\left( X_s^{x,\delta _x},\left[ X_s^{x,\delta _x}\right] \right) ds \right) . \end{aligned}$$

Now, we have that \(\textstyle \left| \int _0^t V_0^i(X_s^{x,\delta _x},[X_s^{x,\delta _x}]) \, ds \right| \le \Vert V_0\Vert _{\infty } t\) and the term

$$\begin{aligned} M^i_t= \sum _{j=1}^d \int _0^t V_j^i\left( X_s^{x,\delta _x},\left[ X_s^{x,\delta _x}\right] \right) dB^j_s , \end{aligned}$$

is a martingale with quadratic variation \( \langle M^i \rangle _t \le \textstyle \sum _{j=1}^d \Vert V_j\Vert _{\infty }^2 \, t\). We can therefore apply the exponential martingale inequality, \({\mathbb {P}}\left( M^i_t \ge a \right) \le \exp \left( - a^2 \big / \left( 2 \textstyle \sum _{j=1}^d \Vert V_j\Vert _{\infty }^2 \, t \right) \right) \) for \(a > 0\), to obtain

$$\begin{aligned}&\left\| \mathbf {1}_{ \{X_t^{x,\delta _x} > z \} } \right\| _{p} \le \min _{i=1, \ldots , N} \exp \left( - c' \frac{ |z^i - x^i - t \, \Vert V_0 \Vert _{\infty } \, |^2 }{t} \right) . \end{aligned}$$

Then, we use \((a+b)^2 \ge \textstyle \frac{a^2}{2} - b^2\), which is a rearrangement of Young’s inequality (indeed, \(a^2 = ((a+b)-b)^2 \le 2(a+b)^2 + 2b^2\)), to get

$$\begin{aligned} \frac{ |z^i - x^i - t \, \Vert V_0 \Vert _{\infty } \, |^2 }{t} \ge \frac{ |z^i - x^i |^2 }{2t} - \Vert V_0 \Vert ^2_{\infty }. \end{aligned}$$

So,

$$\begin{aligned} \min _{i=1, \ldots , N} \exp \left( - c^{\prime } \frac{ |z^i - x^i - t \, \Vert V_0 \Vert _{\infty } \, |^2 }{t} \right) \le&\min _{i=1, \ldots , N} \exp \left( - C \frac{ |z^i - x^i |^2 }{t} \right) \\&\times \exp \left( c^{\prime } \Vert V_0 \Vert ^2_{\infty }\right) \\ \le&\,C \exp \left( - C \frac{ |z - x |^2 }{t} \right) . \end{aligned}$$

This establishes (6.2). \(\square \)