1 Introduction

In [42], von Renesse and Sturm introduced a diffusion process on the \(L_2\)-Wasserstein space \({\mathcal {P}}_2(\mathbb {R})\) satisfying the following properties: the large deviations in small time are governed by the Wasserstein distance, and the martingale term arising when expanding any smooth function \(\phi \) of the measure argument along the process has exactly the squared norm of the Wasserstein gradient of \(\phi \) as its local quadratic variation. There are several examples of diffusions on Wasserstein spaces, constructed either via Dirichlet forms [2, 28, 42], as limits of particle systems [21, 22] or as systems of infinitely many particles [32, 34]. Some diffusive properties of those processes have already been proved, such as a large deviation principle [29] or a restoration of uniqueness for McKean–Vlasov equations [34].

We prove in this paper another well-known diffusive property: we control the gradient of the semi-group associated to a diffusion process on the \(L_2\)-Wasserstein space. For finite-dimensional diffusions, this gradient estimate can be obtained from a Bismut–Elworthy–Li integration by parts formula. Bismut, Elworthy and Li showed that the gradient of the semi-group \(P_t \phi \) associated to the stochastic differential equation \( \mathrm {d} X_t = \sigma (X_t) \mathrm {d} W_t + b(X_t) \mathrm {d}t\) on \(\mathbb {R}^n\) can be expressed as follows

$$\begin{aligned} \nabla (P_t \phi )_{x_0} (v_0) = \frac{1}{t} {\mathbb {E}} \left[ \phi (X_t) \int _0^t \langle V_s, \sigma (X_s) \mathrm {d}W_s \rangle \right] , \end{aligned}$$

where \(V_s\) is a certain stochastic process starting at \(v_0\) (see [3, 18, 19]). In particular, that equality shows that \(\Vert \nabla (P_t \phi )_{x_0} (v_0)\Vert \) is of order \(t^{-1/2}\) for small times. Important domains of application of Bismut–Elworthy–Li formulae include geometry [1, 39, 40], non-linear PDEs [13, 43] and finance [20, 35]. Recently, interest has emerged for similar results in infinite dimension. First, Bismut–Elworthy–Li formulae were proved for Kolmogorov equations on Hilbert spaces and for reaction-diffusion systems in bounded domains of \(\mathbb {R}^n\), see [11, 15, 16]. More recently, Crisan and McMurray [12] and Baños [5] proved Bismut–Elworthy–Li formulae for McKean–Vlasov equations \(\mathrm {d}X_t = b(t, X_t, \mu _t) \mathrm {d}t + \sigma (t, X_t, \mu _t) \mathrm {d}W_t\), with \(\mu _t = {\mathcal {L}}(X_t)\). For other recent smoothing results on McKean–Vlasov equations and mean-field games, see also [4, 7, 9, 10].

1.1 A gradient estimate for a Wasserstein diffusion on the torus

In this paper, we construct a system of infinitely many particles moving on the one-dimensional torus \({\mathbb {T}}=\mathbb {S}^1\), identified with the interval \([0,2\pi ]\). Considering for each time the empirical measure associated to that system, we get a diffusion process on \({\mathcal {P}}({\mathbb {T}})\), the space of probability measures on \({\mathbb {T}}\). Then we average out that process over the realizations of an additive noise \(\beta \). This averaging increases the regularity of the process and leads to a gradient estimate for the associated semi-group.

To state more precisely the main result of the paper, let us introduce the following equation

$$\begin{aligned} x_t^g(u)= g(u) + \sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}( e^{-ik x_s^g(u)} \mathrm {d} W_s^k)+\beta _t, \quad t \geqslant 0,\ u \in [0,1]. \end{aligned}$$
(1)

Here, \(\beta \), \((W^{\mathfrak {R},k})_{k \in \mathbb {Z}}\) and \((W^{\mathfrak {I},k})_{k \in \mathbb {Z}}\) are independent standard real-valued Brownian motions, the notation \(\mathfrak {R}\) denotes the real part of a complex number and \(W^k_\cdot :=W^{\mathfrak {R},k}_\cdot + i W^{\mathfrak {I},k}_\cdot \). The sequence \((f_k)_{k \in \mathbb {Z}}\) is fixed and real-valued, typically of the form \(f_k= \frac{C}{(1+k^2)^{\alpha /2}}\). Lastly, the initial condition \(g:[0,1]\rightarrow \mathbb {R}\) is a \({\mathcal {C}}^1\)-function with positive derivative satisfying \(g(1)=g(0)+2\pi \), so that g can be seen as the quantile function of a probability measure \(\nu _0^g\) on \({\mathbb {T}}\).
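To fix ideas, Eq. (1) can be simulated for finitely many quantile levels u by an Euler–Maruyama scheme, truncating the Fourier sum at \(|k|\leqslant K\). The sketch below is purely illustrative (the truncation level, step size and parameter values are our own choices, not taken from the paper); it uses the elementary identity \(\mathfrak {R}(e^{-ikx}\,\mathrm {d}W^k)=\cos (kx)\,\mathrm {d}W^{\mathfrak {R},k}+\sin (kx)\,\mathrm {d}W^{\mathfrak {I},k}\).

```python
import math
import random

def simulate_particles(g, alpha=4.0, K=8, T=0.1, n_steps=200, n_part=11, seed=0):
    """Euler-Maruyama sketch of Eq. (1), with f_k = (1 + k^2)^(-alpha/2)
    (taking C = 1) and the Fourier sum truncated at |k| <= K.
    Returns the positions x_T^g(u) for u on a uniform grid of [0, 1]."""
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    f = {k: (1.0 + k * k) ** (-alpha / 2.0) for k in range(-K, K + 1)}
    us = [i / (n_part - 1) for i in range(n_part)]
    x = [g(u) for u in us]
    for _ in range(n_steps):
        # the noises W^k and beta are common to all particles
        dW = {k: (rng.gauss(0.0, sd), rng.gauss(0.0, sd))
              for k in range(-K, K + 1)}
        dbeta = rng.gauss(0.0, sd)
        for i, xi in enumerate(x):
            # R(e^{-ik x} dW^k) = cos(kx) dW^{R,k} + sin(kx) dW^{I,k}
            dx = sum(f[k] * (math.cos(k * xi) * dW[k][0]
                             + math.sin(k * xi) * dW[k][1])
                     for k in range(-K, K + 1))
            x[i] = xi + dx + dbeta
    return x
```

For a smooth strictly increasing g, the simulated map \(u \mapsto x_T^g(u)\) typically remains increasing, in line with the monotonicity stated in Proposition 3 below.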

For each \(u \in [0,1]\), \((x_t^g(u))_{t \in [0,T]}\) represents the trajectory of the stochastic particle starting at g(u), which is driven both by a common noise \(W=(W^k)_k\) and by an idiosyncratic noise \(\beta \), borrowing the terminology usually used for McKean–Vlasov equations. This terminology is justified since Eq. (1) can be seen as the counterpart on the torus \({\mathbb {T}}\) of the equation on the real line studied in [34], defined by

$$\begin{aligned} y_t^g (u) = g(u) + \int _\mathbb {R}f(k) \int _0^t \mathfrak {R}( e^{-ik y_s^g(u)}\mathrm {d}w(k,s))+\beta _t, \quad t \geqslant 0,\ u \in \mathbb {R}, \end{aligned}$$

where \((w(k,t))_{k \in \mathbb {R}, t\in [0,T]}\) is a complex-valued Brownian sheet on \( \mathbb {R}\times [0,T]\).

Moreover, we will show that the cloud of all particles is spread over the whole torus. More precisely, for each \(t \in [0,T]\), the probability measure \(\nu _t^g= {\text {Leb}}_{[0,1]} \circ (x_t^g)^{-1}\) has a density \(q_t^g\) w.r.t. Lebesgue measure on \({\mathbb {T}}\) such that \(q_t^g(x)>0\) for all \(x \in {\mathbb {T}}\). Instead of studying the process \((\nu _t^g)_{t \in [0,T]}\), we consider a more regular process defined by averaging over the realizations of \(\beta \):

$$\begin{aligned} \mu _t^g= ({\text {Leb}}_{[0,1]} \otimes {\mathbb {P}}^\beta ) \circ (x_t^g)^{-1}, \end{aligned}$$
(2)

i.e. \(\mu _t^g\) is the probability measure on \({\mathbb {T}}\) with density \(p_t^g(x):={\mathbb {E}}^\beta \left[ q_t^g (x)\right] \), \(x\in {\mathbb {T}}\), w.r.t. Lebesgue measure. In other words, since \(\beta \) and \((W^k)_{k \in \mathbb {Z}}\) are assumed to be independent, \(\mu _t^g\) is the conditional law of \(x_t^g\) given \((W^k)_{k \in \mathbb {Z}}\).

The semi-group associated to \((\mu _t^g)_{t \in [0,T]}\) is defined, for any bounded and continuous function \(\phi :{\mathcal {P}}_2({\mathbb {T}}) \rightarrow \mathbb {R}\), by \(P_t \phi (\mu _0^g):= {\mathbb {E}} \left[ \phi (\mu _t^g) \right] \). Alternatively, denoting the lift of \(\phi \) by \({\widehat{\phi }}(X):= \phi (({\text {Leb}}_{[0,1]} \otimes {\mathbb {P}}^\beta ) \circ X^{-1})\) for any random variable \(X \in L_2([0,1]\times \Omega ^\beta )\), we can define the semi-group by \(\widehat{P_t \phi }(g) = {\mathbb {E}} \left[ {\widehat{\phi }}\left( x_t^g \right) \right] \).

The main theorem of this paper states the following upper bound for the Fréchet derivative of \(g \mapsto \widehat{P_t \phi }(g)\), depending only on the \(L_\infty \)-norm of \(\phi \). Assume that \(g \in {\mathcal {C}}^{3+\theta }\) with \(\theta >0\) and let h be a 1-periodic \({\mathcal {C}}^1\)-function. If \(\phi \) is sufficiently regular and if \(f_k= \frac{C}{(1+k^2)^{\alpha /2}}\), \(k \in \mathbb {Z}\), with \(\alpha \in \left( \frac{7}{2},\frac{9}{2} \right) \), then there is a constant \(C_g\) independent of h such that

$$\begin{aligned} \left| D \widehat{P_t\phi }(g) \cdot h \right| =\left| \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho h})\right| \leqslant C_g \frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}\Vert h\Vert _{\mathcal C^1}, \end{aligned}$$
(3)

for any \(t \in (0,T]\). The precise assumptions on \(\phi \) and the precise statement of this theorem are given below by Definition 11 and Theorem 15, respectively. Moreover, \(C_g\) depends polynomially on \(\Vert g'''\Vert _{L_\infty }\), \(\Vert g''\Vert _{L_\infty }\), \(\Vert g'\Vert _{L_\infty }\) and \(\Vert \frac{1}{g'}\Vert _{L_\infty }\).

1.2 Comments on the main result

The order \(\alpha \) of the polynomial decay of \((f_k)_{k \in \mathbb {Z}}\) plays a key role in this paper. In Eq. (1), the diffusion coefficient in front of the noise W is written as a Fourier series, \((f_k)_{k \in \mathbb {Z}}\) being the sequence of Fourier coefficients. Therefore, it should not be surprising that the larger \(\alpha \) is, the more regular the solution to (1) is. Nevertheless, when we apply Girsanov’s Theorem with respect to W, which is part of a standard method introduced by Thalmaier and Wang [39, 40], we need \(\alpha \) to be sufficiently small in order to be able to invert the Fourier series. There is thus a trade-off in the choice of \(\alpha \), which explains why we assume in our main result that \(\alpha \) is bounded from above and from below.

Moreover, the question of the order \(\alpha \) is closely related to the rate \(t^{-(2+\theta )}\) appearing in (3). Usually, we expect a rate of \(t^{-1/2}\) for diffusions. As already mentioned, this rate follows directly from a Bismut–Elworthy–Li integration by parts formula. However, adapting the usual strategy based on Kunita’s expansion as in [39, 40], we do not get an exact integration by parts formula here. Indeed, the failure of the latter strategy in our case comes from the fact that it is impossible to choose \(\alpha \) simultaneously large enough to ensure sufficient regularity of the solution and small enough to be able to invert the Fourier series. We refer to Remark 22 below for a justification of that claim.

Therefore, the main new strategy introduced in this paper is to regularize the derivative of the solution. By doing so, we get an approximate integration by parts formula, in the sense that an additional remainder term appears in the formula:

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) =\frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}( \overline{\lambda _s^{k,\varepsilon }} \mathrm {d}W_s^k) \right] +{\mathcal {O}}(\varepsilon ), \end{aligned}$$

where \(\lambda ^{k,\varepsilon }_\cdot \) is a stochastic process. Controlling the remainder term leads us, by a final bootstrap argument, to the desired upper bound on \(| D \widehat{P_t\phi }(g) \cdot h |\), at the price of worsening the rate of blow-up. We are not claiming that the rate \(t^{-(2+\theta )}\) is sharp, but we expect that a rate of \(t^{-1/2}\) is unachievable for this process. Let us mention that the author improves this rate of blow-up to \(t^{-(1+\theta )}\), at the price of assuming \(\mathcal C^{4+\theta }\)-regularity of g and h. Since the proof is long and technical, it is not included in this paper; we refer to [33, Chapter IV] for all the details and for an application to a gradient estimate for an inhomogeneous SPDE with Hölder continuous source term.

Furthermore, the idiosyncratic noise \(\beta \) is important as well. Of course, the addition of \(\beta \) does not dramatically change the dynamics of the process, since it acts on the whole system as a rotation of the circle. Nevertheless, as has already been pointed out, the diffusion process \((\mu _t^g)_{t \in [0,T]}\) is defined by an average over the realizations of \(\beta \). The importance of that averaging is consistent with SPDE theory. Indeed, \((\mu _t^g)_{t \in [0,T]}\) solves the following equation:

$$\begin{aligned} \mathrm {d} \mu _t^g - \frac{1+ \sum _{k \in \mathbb {Z}} f_k^2}{2} \Delta (\mu _t^g) \mathrm {d}t +\partial _x \bigg ( \sum _{k \in \mathbb {Z}} f_k \mathfrak {R}\big ( e^{-ik \; \cdot \;} \mathrm {d}W_t^k \big )\; \mu _t^g \bigg )&=0 , \end{aligned}$$

with initial condition \(\mu _t^g |_{t=0} = \mu _0^g\). The noise \(\beta \) manifests itself in the additional \(\frac{1}{2}\) in the coefficient of \(\Delta (\mu _t^g)\). At the level of the densities, \((p_t^g)_{t\in [0,T]}\) solves the following equation

$$\begin{aligned} \mathrm {d} p_t^g (v)= -\partial _v \bigg ( p_t^g(v) \sum _{k \in \mathbb {Z}} f_k \mathfrak {R}( e^{-ikv} \mathrm {d}W_t^k ) \bigg ) +\lambda (p_t^g)''(v) \mathrm {d}t, \end{aligned}$$

with \(\lambda = \frac{1+ \sum _{k \in \mathbb {Z}} f_k^2}{2}\). Denis and Stoica showed in [14, 17] that the above equation is well-posed, with energy estimates, provided that \(\lambda \) is strictly larger than the critical threshold \(\lambda _{{\text {crit}}}=\frac{\sum _{k \in \mathbb {Z}} f_k^2}{2}\). If we considered Eq. (1) without \(\beta \), we would obtain exactly the above equation with \(\lambda =\lambda _{{\text {crit}}}\). Therefore, adding this level of randomness seems crucial to obtain our estimate. More precisely, in the strategy described above, the Brownian motion \(\beta \) plays a key role in controlling the remainder term.

In addition, let us note that we study a process on the one-dimensional circle \({\mathbb {T}}\), and not on the real line as e.g. in [22, 34]. We made this choice mainly for technical reasons, in order to deal with processes of compactly supported measures having a positive density on the whole space. The main result is restricted to functions g with a strictly positive derivative, meaning that the associated measure has a density with respect to Lebesgue measure on the torus. The constant \(C_g\) tends to infinity when \(\min _{u \in [0,1]} g'(u)\) gets closer to zero. The assumptions on the regularity of g and h seem reasonable since our model is close to the so-called modified massive Arratia flow \((\mu ^{\mathrm {MMAF}}_t)_{t}\) introduced in [22], which has highly singular coefficients. Indeed, Konarovskyi and von Renesse showed in [29] that \((\mu ^{\mathrm {MMAF}}_t)_{t}\), which is almost surely of finite support for all \(t>0\), solves the following SPDE:

$$\begin{aligned} \mathrm {d}\mu ^{\mathrm {MMAF}}_t = \Gamma (\mu ^{\mathrm {MMAF}}_t)\mathrm {d}t + \mathrm {div} (\sqrt{\mu ^{\mathrm {MMAF}}_t}\mathrm {d} W_t), \end{aligned}$$

where \(\Gamma \) is defined by \(\langle f,\Gamma (\nu )\rangle =\frac{1}{2} \sum _{x \in \mathrm {Supp}(\nu )} f''(x)\) for any \(f \in \mathcal C^2_{\mathrm {b}} (\mathbb {R})\).

1.3 Organization of the paper

The goal of Sect. 2 is to properly define Eq. (1) and to state the main result of the paper. The proof of the theorem is then divided into four main steps. We start in the short Sect. 3 by splitting the gradient of the semi-group into two parts, a regularized term and a remainder term, which are studied separately in Sects. 4 and 5, respectively. Finally, in Sect. 6, we complete the proof by a bootstrap argument.

2 Statement of the main result

The main result of the paper is stated in Paragraph 2.4. Before that, we define precisely the diffusion on the torus (Paragraph 2.1), its associated semi-group (Paragraph 2.2) and the assumptions on the test functions (Paragraph 2.3).

2.1 A diffusion on the torus

In this paper, we study the following stochastic differential equation on a fixed time interval [0, T]

$$\begin{aligned} \mathrm {d}x_t^g(u) = \sum _{k \in \mathbb {Z}} f_k\; \mathfrak {R}\left( e^{-ik x_t^g(u)} \mathrm {d}W^k_t \right) + \mathrm {d}\beta _t, \quad t \in [0,T], \; u \in \mathbb {R}, \end{aligned}$$
(4)

with initial condition \(x^g_0=g\). In this paragraph, we first state the assumptions made on \(W^k\), \(\beta \), \(f_k\) and g, emphasising the interpretation of \((x^g_t)_{t \in [0,T]}\) as a diffusion on the torus. Then we state existence, uniqueness and some important properties of solutions to Eq. (4).

Let \(\beta \), \((W^{\mathfrak {R},k})_{k \in \mathbb {Z}}\), \((W^{\mathfrak {I},k})_{k \in \mathbb {Z}}\) be a collection of independent standard real-valued Brownian motions. Thus \(W^k_\cdot =W^{\mathfrak {R},k}_\cdot + i W^{\mathfrak {I},k}_\cdot \) denotes a \(\mathbb {C}\)-valued Brownian motion. The notation \(\mathfrak {R}\) denotes the real part of a complex number, so that Eq. (4) can alternatively be written as follows

$$\begin{aligned} \mathrm {d}x_t^g(u)&= \sum _{k \in \mathbb {Z}} f_k \cos (kx_t^g(u)) \mathrm {d}W_t^{\mathfrak {R},k} + \sum _{k \in \mathbb {Z}} f_k \sin (kx_t^g(u)) \mathrm {d}W_t^{\mathfrak {I},k}+\mathrm {d}\beta _t. \end{aligned}$$

Definition 1

We say that \(f:=(f_k)_{k \in \mathbb {Z}}\) is of order \(\alpha >0\) if there are \(c>0\) and \(C>0\) such that \(\frac{c}{\langle k \rangle ^\alpha }\leqslant |f_k| \leqslant \frac{C}{\langle k \rangle ^\alpha }\) for every \(k \in \mathbb {Z}\), where \(\langle k \rangle :=(1+|k|^2)^{1/2}\).

Note that if f is of order \(\alpha > \frac{1}{2}\), then for each \(u \in \mathbb {R}\), the particle \((x^g_t(u))_{t\in [0,T]}\) has a finite quadratic variation equal to \(\langle x^g(u),x^g(u) \rangle _t= \Big (\sum _{k\in \mathbb {Z}} f_k^2+1\Big ) t\).
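The convergence of \(\sum _{k\in \mathbb {Z}} f_k^2\) for \(\alpha >\frac{1}{2}\) is elementary and can be checked numerically. The snippet below is a sketch with the illustrative normalization \(c=C=1\), i.e. \(f_k=\langle k \rangle ^{-\alpha }\); at \(\alpha =1\) the sum has the closed form \(\sum _{k \in \mathbb {Z}} (1+k^2)^{-1}=\pi \coth \pi \), which the partial sums approach.

```python
import math

def qv_coefficient(alpha, K):
    """Truncated quadratic-variation coefficient 1 + sum_{|k| <= K} f_k^2
    with f_k = (1 + k^2)^(-alpha/2), so f_k^2 = (1 + k^2)^(-alpha).
    The sum converges as K -> infinity precisely when alpha > 1/2."""
    return 1.0 + sum((1.0 + k * k) ** (-alpha) for k in range(-K, K + 1))
```

For instance, `qv_coefficient(1.0, K)` tends to \(1+\pi \coth \pi \approx 4.153\) as K grows.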

Let \({\mathbb {T}}\) be the one-dimensional torus, which we identify with the interval \([0,2\pi ]\). \({\mathcal {P}}({\mathbb {T}})\) denotes the space of probability measures on the torus. We consider the \(L_2\)-Wasserstein metric \(W_2^{\mathbb {T}}\) on \({\mathcal {P}}({\mathbb {T}})\), defined by \( W_2^{\mathbb {T}}(\mu ,\nu ):=\inf _{\pi \in \Pi (\mu ,\nu )} \left( \int _{{\mathbb {T}}^2} d^{\mathbb {T}}(x,y)^2 \; \mathrm {d}\pi (x,y) \right) ^{1/2} \), where \(\Pi (\mu ,\nu )\) is the set of probability measures on \({\mathbb {T}}^2\) with first marginal \(\mu \) and second marginal \(\nu \), and where \(d^{\mathbb {T}}\) is the distance on the torus defined by \(d^{\mathbb {T}}(x,y):=\inf _{k \in \mathbb {Z}} |x-y-2k\pi |\) for \(x,y \in \mathbb {R}\).
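As a small illustration (our own, not from the paper), the distance \(d^{\mathbb {T}}\) can be computed by reducing \(x-y\) modulo \(2\pi \):

```python
import math

def d_torus(x, y):
    """Distance on the torus: d(x, y) = inf_{k in Z} |x - y - 2*k*pi|.
    Python's % operator returns a value in [0, 2*pi), so the infimum is
    attained either at that representative or at its complement."""
    r = (x - y) % (2.0 * math.pi)
    return min(r, 2.0 * math.pi - r)
```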

Definition 2

Let \({\mathscr {G}}^1\) be the set of \({\mathcal {C}}^1\)-functions \(g:\mathbb {R}\rightarrow \mathbb {R}\) such that for every \(u \in \mathbb {R}\), \(g'(u)>0\) and \(g(u+1)=g(u)+2\pi \). Let \(\sim \) be the following equivalence relation on \({\mathscr {G}}^1\): \(g_1 \sim g_2\) if and only if there exists \(c\in \mathbb {R}\) such that \(g_2(\cdot )=g_1(\cdot +c)\). We denote by \({\mathbf {G}}^1\) the set of equivalence classes \({\mathscr {G}}^1 / \sim \).

An interpretation of Definition 2 is that the initial condition g is seen as the quantile function (or inverse c.d.f.) associated with the measure \(\nu _0^g \in {\mathcal {P}}({\mathbb {T}})\) with density \(p(x)=\frac{1}{g'(g^{-1}(x))}\), \(x \in [0,2\pi ]\), with respect to Lebesgue measure on \({\mathbb {T}}\). There is a one-to-one correspondence between \({\mathbf {G}}^1\) and the set of positive densities on the torus, see Paragraph A.1 in “Appendix” for more details.
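This correspondence can be checked numerically on a concrete example. In the sketch below (our own illustration), we take \(g(u)=2\pi u + a\sin (2\pi u)\) with \(|a|<1\), which belongs to \({\mathscr {G}}^1\), invert it by bisection and verify that \(p(x)=1/g'(g^{-1}(x))\) integrates to 1 over \([0,2\pi ]\).

```python
import math

A = 0.3  # amplitude of the perturbation; any |A| < 1 keeps g' > 0

def g(u):
    return 2.0 * math.pi * u + A * math.sin(2.0 * math.pi * u)

def g_prime(u):
    return 2.0 * math.pi * (1.0 + A * math.cos(2.0 * math.pi * u))

def g_inverse(y, tol=1e-13):
    """Bisection inverse of the strictly increasing g on [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def total_mass(n=2000):
    """Midpoint rule for the integral of p(x) = 1 / g'(g^{-1}(x)) on [0, 2*pi];
    by the change of variables x = g(u), the exact value is 1."""
    dx = 2.0 * math.pi / n
    return sum(dx / g_prime(g_inverse((i + 0.5) * dx)) for i in range(n))
```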

Existence, uniqueness, continuity and differentiability of solutions to (4) depend on the order \(\alpha \) of f and on the regularity of g, as shown by the following two propositions. The proofs, which are classical, are left to “Appendix”.

Proposition 3

Let \(g \in {\mathscr {G}}^1\) and f be of order \(\alpha > \frac{3}{2}\). Then strong existence and pathwise uniqueness hold for Eq. (4) in \({\mathcal {C}}(\mathbb {R}\times [0,T])\). Moreover almost surely, for every \(u\in [0,1]\), \((x_t^g(u))_{t\in [0,T]}\) satisfies Eq. (4) and for every \(t\in [0,T]\), \(u \mapsto x^g_t(u)\) is strictly increasing.

Proof

See paragraph A.2 in “Appendix”. \(\square \)

For every \(j\in \mathbb {N}\) and \( \theta \in [0,1)\), let \(\mathcal C^{j+\theta }\) denote the set of \({\mathcal {C}}^j\)-functions whose \(j^\mathrm {th}\) derivative is \(\theta \)-Hölder continuous. By extension, \({\mathscr {G}}^{j+\theta } \subseteq {\mathscr {G}}^1\) and \({\mathbf {G}}^{j+\theta } \subseteq {\mathbf {G}}^1\) are the subsets of \({\mathscr {G}}^1\) and of \({\mathbf {G}}^1\) consisting of all \(\mathcal C^{j+\theta }\)-functions and \({\mathcal {C}}^{j+\theta }\)-equivalence classes, respectively.

Proposition 4

Let \(j \geqslant 1\), \(\theta \in (0,1)\), \(g \in {\mathscr {G}}^{j+\theta }\) and f be of order \(\alpha > j + \frac{1}{2} + \theta \). Then almost surely, for every \(t\in [0,T]\), the map \(u \mapsto x_t^g(u)\) is a \({\mathcal {C}}^{j+\theta '}\)-function for every \(\theta '< \theta \). Moreover, its first derivative satisfies almost surely

$$\begin{aligned} \partial _u x_t^g(u)&=g'(u) \exp \bigg ( \sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}\left( -ik e^{-ik x_s^g(u)} \mathrm {d}W^k_s \right) -\frac{t}{2} \sum _{k \in \mathbb {Z}} f_k^2 k^2 \bigg ), \nonumber \\&\quad u\in \mathbb {R}, t\in [0,T]. \end{aligned}$$
(5)

Proof

See paragraph A.2 in “Appendix”. \(\square \)
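Formula (5) can be sanity-checked numerically by simulating two nearby particles with the same noise and comparing their finite difference with the exponential in (5) evaluated along the same discretized path. The sketch below is our own illustration (truncation at \(|k|\leqslant K\), arbitrary parameters); it uses the identity \(\mathfrak {R}(-ik e^{-ikx}\,\mathrm {d}W^k)=-k\sin (kx)\,\mathrm {d}W^{\mathfrak {R},k}+k\cos (kx)\,\mathrm {d}W^{\mathfrak {I},k}\), which is the derivative in x of the diffusion coefficient of Eq. (4).

```python
import math
import random

def check_derivative_formula(u=0.3, eps=1e-4, alpha=3.0, K=6,
                             T=0.02, n_steps=200, seed=1):
    """Compare the finite difference (x_T(u + eps) - x_T(u)) / eps with the
    exponential formula (5) for d/du x_T(u), computed along the same
    discretized noise.  Returns (finite_difference, formula_value)."""
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    f = {k: (1.0 + k * k) ** (-alpha / 2.0) for k in range(-K, K + 1)}
    g = lambda v: 2.0 * math.pi * v          # initial quantile function
    x, x_eps = g(u), g(u + eps)
    log_j = 0.0                              # log of the stochastic exponential
    ito_corr = sum(f[k] ** 2 * k * k for k in range(-K, K + 1))
    for _ in range(n_steps):
        dWr = {k: rng.gauss(0.0, sd) for k in range(-K, K + 1)}
        dWi = {k: rng.gauss(0.0, sd) for k in range(-K, K + 1)}
        dbeta = rng.gauss(0.0, sd)
        # increment of the stochastic integral in (5), evaluated at x
        log_j += sum(f[k] * (-k * math.sin(k * x) * dWr[k]
                             + k * math.cos(k * x) * dWi[k])
                     for k in range(-K, K + 1))
        inc = sum(f[k] * (math.cos(k * x) * dWr[k]
                          + math.sin(k * x) * dWi[k])
                  for k in range(-K, K + 1))
        inc_eps = sum(f[k] * (math.cos(k * x_eps) * dWr[k]
                              + math.sin(k * x_eps) * dWi[k])
                      for k in range(-K, K + 1))
        x, x_eps = x + inc + dbeta, x_eps + inc_eps + dbeta
    fd = (x_eps - x) / eps
    formula = 2.0 * math.pi * math.exp(log_j - 0.5 * T * ito_corr)  # g'(u) = 2*pi
    return fd, formula
```

For small T, eps and dt, the two returned values agree closely; both are positive, consistent with the strict monotonicity in Proposition 3.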

The next proposition states that the flow of the SDE preserves the equivalence classes of quantile functions that we introduced in Definition 2.

Proposition 5

Let \(\theta \in (0,1)\), \(g \in {\mathscr {G}}^{1+\theta }\) and f be of order \(\alpha > \frac{3}{2}+ \theta \). Then almost surely, for every \(t\in [0,T]\), the map \(u \mapsto x_t^g(u)\) belongs to \(\mathscr {G}^1\). Moreover, if \(g_1 \sim g_2\), then almost surely, \(x_t^{g_1} \sim x_t^{g_2}\) for every \(t \in [0,T]\).

Proof

By Propositions 3 and 4, it is clear that \(u \mapsto x_t^g(u)\) belongs to \({\mathcal {C}}^1\) and that \(\partial _u x_t^g (u)>0\) for every \(u\in \mathbb {R}\). Furthermore, let \((y_t^g)_{t\in [0,T]}\) be the process defined by \(y_t^g(u):=x_t^g(u+1)-2\pi \). By definition of \({\mathscr {G}}^{1+\theta }\), \(g(u+1)-2\pi =g(u)\), thus for every \(t\in [0,T]\) and \(u\in \mathbb {R}\), \(y_t^g(u)= g(u) + \sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}\left( e^{-ik y_s^g(u)} \mathrm {d}W^k_s \right) +\beta _t\). Therefore \((y_t^g(u))_{t\in [0,T], u\in \mathbb {R}}\) and \((x_t^g(u))_{t\in [0,T], u\in \mathbb {R}}\) satisfy the same equation and belong to \(\mathcal C(\mathbb {R}\times [0,T])\). By Proposition 3, there is a unique solution in this space. Thus for every \(t\in [0,T]\) and every \(u\in \mathbb {R}\), \(x_t^g(u+1)-2\pi =x_t^g(u)\). We deduce that \(x_t^g\) belongs to \({\mathscr {G}}^1\) for every \(t\in [0,T]\).

The proof of the second statement is similar; if there is \(c\in \mathbb {R}\) such that \(g_2(u)=g_1(u+c) \) for every \(u \in \mathbb {R}\), then the processes \((x_t^{g_2}(u))_{t\in [0,T], u\in \mathbb {R}}\) and \((x_t^{g_1}(u+c))_{t\in [0,T], u\in \mathbb {R}}\) satisfy the same equation and are equal. \(\square \)

By Proposition 5, we are able to give a meaning to Eq. (4) with initial value g in \({\mathbf {G}}^{1+\theta }\). Indeed, for each \(t\in [0,T]\), the solution \(x_t^g\) will take its values in \({\mathbf {G}}^{1+\theta '}\) for every \(\theta ' < \theta \). More generally, by Proposition 4, if the initial condition g belongs to \({\mathbf {G}}^{j+\theta }\) for \(j \geqslant 1\) and \(\theta \in (0,1)\), then for each \(t\in [0,T]\), \(x_t^g\) belongs to \(\mathbf{G}^{j+\theta '}\) for every \(\theta ' < \theta \).

Furthermore, the \(L_p\)-norms (in the space variable) of the derivatives \(\partial _u^{(j)} x_t^g\), \(j \geqslant 1\), and of \(\frac{1}{\partial _u x_t^g}\) can easily be controlled in terms of the initial condition. All the inequalities that will be needed later in this paper are listed and proved in Lemma 39 in “Appendix”.

To conclude this paragraph, let us mention that the solution to Eq. (4) can be equivalently seen as a solution to the following parametric SDE

$$\begin{aligned} Z_t^x = x+ \sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}\left( e^{-ik Z_s^x} \mathrm {d}W^k_s \right) + \beta _t, \quad x \in \mathbb {R}. \end{aligned}$$
(6)

Under the same assumptions on f, well-posedness and regularity of solutions to Eq. (6) can be shown, see Proposition 40 in “Appendix”. Moreover, \(Z_t^x\) is closely related to \(x_t^g\) through the following identities.

Proposition 6

Let \(g\in {\mathbf {G}}^{1+\theta }\) and f be of order \(\alpha > \frac{3}{2}+\theta \) for some \(\theta \in (0,1)\). Then almost surely, for every \(t\in [0,T]\) and \(u\in [0,1]\),

$$\begin{aligned} Z_t^{g(u)}&=x_t^g(u), \end{aligned}$$
(7)
$$\begin{aligned} \partial _x Z_t^{g(u)}&= \frac{\partial _u x_t^g(u)}{g'(u)}. \end{aligned}$$
(8)

Proof

See end of paragraph A.2 in “Appendix”. \(\square \)

2.2 Semi-group averaged out by idiosyncratic noise

According to Proposition 5, \(u \in [0,1] \mapsto x_t^g(u)\) is for each fixed t a quantile function of the measure \(\nu _t^g \in {\mathcal {P}}({\mathbb {T}})\) defined by \(\nu _t^g := {\text {Leb}}_{[0,1]} \circ (x_t^g)^{-1}\). However, the stochastic process \((\nu _t^g)_{t \in [0,T]}\) is not regular enough to obtain a gradient estimate for the associated semi-group. Therefore, we average out the realization of the noise \(\beta \) by defining \(\mu _t^g := ({\text {Leb}}_{[0,1]} \otimes {\mathbb {P}}^\beta ) \circ (x_t^g)^{-1}\). In terms of densities, if \(q_t^g\) is the density of \(\nu _t^g\), then \(p_t^g(x):={\mathbb {E}}^\beta \left[ q_t^g (x)\right] \), \(x\in {\mathbb {T}}\), is the density of \(\mu _t^g\).

To be more precise, we define three sources of randomness, for the noises \(W^k\), \(\beta \) and the initial condition g, respectively. Let \((\Omega ^W, {\mathcal {G}}^W, ({\mathcal {G}}^W_t)_{t\in [0,T]}, \mathbb P^W)\) and \((\Omega ^\beta , {\mathcal {G}}^\beta , (\mathcal G^\beta _t)_{t\in [0,T]}, {\mathbb {P}}^\beta )\) be filtered probability spaces satisfying the usual conditions, on which we define a \((\mathcal G^W_t)_{t\in [0,T]}\)-adapted collection \(((W^{k}_t)_{t\in [0,T]})_{k \in \mathbb {Z}}\) of independent \(\mathbb {C}\)-valued Brownian motions and a \(({\mathcal {G}}^\beta _t)_{t\in [0,T]}\)-adapted standard Brownian motion \((\beta _t)_{t\in [0,T]}\), respectively. Let \((\Omega ^0, \mathcal G^0, {\mathbb {P}}^0)\) be another probability space rich enough to support \({\mathbf {G}}^1\)-valued random variables with any possible distribution. We denote by \({\mathbb {E}}^\beta \), \({\mathbb {E}}^W\) and \({\mathbb {E}}^0\) the expectations associated to \({\mathbb {P}}^\beta \), \({\mathbb {P}}^W\) and \({\mathbb {P}}^0\), respectively. Let \((\Omega , {\mathcal {G}}, ({\mathcal {G}}_t)_{t\in [0,T]}, {\mathbb {P}})\) be the filtered probability space defined by \(\Omega :=\Omega ^W \times \Omega ^\beta \times \Omega ^0\), \({\mathcal {G}}:= {\mathcal {G}}^W \otimes {\mathcal {G}}^\beta \otimes {\mathcal {G}}^0\), \({\mathcal {G}}_t:= \sigma ((\mathcal G^W_s)_{s\leqslant t},({\mathcal {G}}^\beta _s)_{s\leqslant t},{\mathcal {G}}^0)\) and \({\mathbb {P}}:= {\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \otimes \mathbb P^0\). Without loss of generality, we assume the filtration \(({\mathcal {G}}_t)_{t\in [0,T]}\) to be complete and, up to adding negligible subsets to \({\mathcal {G}}^0\), we assume that \(\mathcal G_0={\mathcal {G}}^0\).

Definition 7

Fix \(t\in [0,T]\) and \(\omega \in \Omega ^W \times \Omega ^0\). Then \(x_t^g(\omega )\) is a random variable from \([0,1] \times \Omega ^\beta \) to \(\mathbb {R}\). We denote by \(\mu _t^g(\omega )\) its law, that is:

$$\begin{aligned} \mu _t^g(\omega ) := \left( {\text {Leb}}_{[0,1]} \otimes {\mathbb {P}}^\beta \right) \circ \left( x_t^g(\omega ) \right) ^{-1}. \end{aligned}$$

In particular, \(\mu _t^g\) is a random variable defined on \(\Omega ^W \times \Omega ^0\) with values in \({\mathcal {P}}_2(\mathbb {R})\).

Define now the semi-group \((P_t)_{t \in [0,T]}\) associated to \((\mu ^g_t)_{t\in [0,T]}\). Let \(\phi :{\mathcal {P}}_2(\mathbb {R})\rightarrow \mathbb {R}\) be a bounded and continuous function. Let \({\widehat{\phi }}: L_2([0,1]\times \Omega ^\beta ) \rightarrow \mathbb {R}\) be the lifted function of \(\phi \), defined by \({\widehat{\phi }}(X):= \phi (({\text {Leb}}_{[0,1]} \otimes {\mathbb {P}}^\beta ) \circ X^{-1})\). In other words, \({\widehat{\phi }}(X)= \phi ({\mathcal {L}}_{[0,1] \times \Omega ^\beta }(X))\), where \(\mathcal L_{[0,1] \times \Omega ^\beta }(X)\) denotes the law of the random variable \(X:[0,1] \times \Omega ^\beta \rightarrow \mathbb {R}\).

Definition 8

For every \(t \in [0,T]\) and \(\mu \in {\mathcal {P}}_2(\mathbb {R})\),

$$\begin{aligned} P_t\phi (\mu ):= {\mathbb {E}}^W \left[ {\widehat{\phi }}(Z_t^X) \right] , \end{aligned}$$

where \((Z_t^x)\) is the solution to SDE (6) and \(\mu ={\text {Leb}}_{[0,1]} \circ X^{-1}\).

Proposition 9

\(P_t \phi \) is well-defined and for every \(t\in [0,T]\) and \(g \in {\mathbf {G}}^1\),

$$\begin{aligned} P_t \phi (\mu _0^g) = {\mathbb {E}}^W \left[ {\widehat{\phi }}\left( x_t^g \right) \right] = {\mathbb {E}}^W \left[ \phi (\mu _t^g)\right] . \end{aligned}$$
(9)

Proof

By Proposition 40, the parametric SDE (6) is strongly well-posed. Thus, if \(X,X' \in L_2[0,1]\) have the same law, i.e. \({\mathcal {L}}_{[0,1]}(X)={\mathcal {L}}_{[0,1] }(X')\), then \({\mathbb {P}}^W\)-almost surely for every \(t\in [0,T]\), \(\mathcal L_{[0,1] \times \Omega ^\beta }(Z_t^X)={\mathcal {L}}_{[0,1]\times \Omega ^\beta }(Z_t^{X'})\). It follows that \({\mathbb {E}}^W \left[ {\widehat{\phi }}(Z_t^X) \right] ={\mathbb {E}}^W \left[ {\widehat{\phi }}(Z_t^{X'}) \right] \) for all \(t \in [0,T]\), so \(\widehat{P_t \phi } (X):= {\mathbb {E}}^W \left[ {\widehat{\phi }}(Z_t^X) \right] \) does not depend on the representative X of the law \(\mu \).

Moreover, \({\mathbb {P}}^W\)-almost surely, \({\widehat{\phi }} (x_t^g)= \phi (\mu _t^g)\). Furthermore, by Proposition 6, \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, for every \(t\in [0,T]\) and for every \(u\in [0,1]\), \(Z_t^{g(u)}=x_t^g(u)\). In particular, \(\mathbb P^W\)-almost surely and for every \(t\in [0,T]\), \(Z_t^{g(u)}=x_t^g(u)\) holds true \({\text {Leb}}_{[0,1]} \otimes \mathbb P^\beta \)-almost surely. Therefore,

$$\begin{aligned} P_t \phi (\mu _0^g) = \widehat{P_t \phi }(g) = {\mathbb {E}}^W \left[ {\widehat{\phi }}\left( Z_t^g \right) \right] = {\mathbb {E}}^W \left[ {\widehat{\phi }}\left( x_t^g \right) \right] = {\mathbb {E}}^W \left[ \phi (\mu _t^g)\right] , \end{aligned}$$

which proves (9). \(\square \)
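Identity (9) suggests a direct Monte Carlo approximation of the semi-group: simulate several realizations of the common noise W and, for each of them, approximate \(\mu _t^g\) by the empirical measure of the particles over a grid of quantile levels u and over several realizations of \(\beta \) (keeping W fixed while \(\beta \) varies). The sketch below is our own illustration, with a truncated Fourier sum and arbitrary parameters; the test function phi is fed the pooled samples as a stand-in for \(\mu _t^g\).

```python
import math
import random

def estimate_semigroup(g, phi, alpha=4.0, K=6, T=0.05, n_steps=40,
                       n_grid=10, n_w=5, n_beta=10, seed=0):
    """Monte Carlo sketch of P_t phi(mu_0^g) = E^W[ phi(mu_t^g) ]:
    outer average over realizations of W; for each W, mu_t^g is
    approximated by the samples x_t^g(u_i) over a u-grid and over
    n_beta realizations of beta."""
    rng = random.Random(seed)
    dt = T / n_steps
    sd = math.sqrt(dt)
    f = [(k, (1.0 + k * k) ** (-alpha / 2.0)) for k in range(-K, K + 1)]
    us = [(i + 0.5) / n_grid for i in range(n_grid)]
    outer = 0.0
    for _ in range(n_w):
        # one realization of the common noise W, as a table of increments
        dW = [[(rng.gauss(0.0, sd), rng.gauss(0.0, sd)) for _ in f]
              for _ in range(n_steps)]
        samples = []
        for _ in range(n_beta):
            x = [g(u) for u in us]
            for n in range(n_steps):
                dbeta = rng.gauss(0.0, sd)
                for i, xi in enumerate(x):
                    dx = sum(fk * (math.cos(k * xi) * dW[n][j][0]
                                   + math.sin(k * xi) * dW[n][j][1])
                             for j, (k, fk) in enumerate(f))
                    x[i] = xi + dx + dbeta
            samples.extend(x)
        outer += phi(samples)
    return outer / n_w
```

For instance, taking \(\phi (\mu )=\int \cos (x)\,\mathrm {d}\mu (x)\) (a \({\mathbb {T}}\)-stable functional, see Paragraph 2.3) gives an estimate that is deterministically bounded by \(\Vert \cos \Vert _{L_\infty }=1\).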

2.3 Assumptions on the test functions

The semi-group \((P_t)_{t \in [0,T]}\) acts on bounded and continuous functions \(\phi :{\mathcal {P}}_2(\mathbb {R})\rightarrow \mathbb {R}\). We will impose on \(\phi \) the assumptions defined hereafter.

Definition 10

We define an equivalence relation on \({\mathcal {P}}_2(\mathbb {R})\) by: \(\mu \sim \nu \) if and only if \(\mu (A+2\pi \mathbb {Z})=\nu (A+2\pi \mathbb {Z})\) for any \(A \in {\mathcal {B}}[0,2\pi ]\). A function \(\phi : {\mathcal {P}}_2(\mathbb {R}) \rightarrow \mathbb {R}\) is said to be \({\mathbb {T}}\)-stable if \(\phi (\mu )=\phi (\nu )\) whenever \(\mu \sim \nu \). In particular, such a \(\phi \) induces a map from \(\mathcal P({\mathbb {T}})\) to \(\mathbb {R}\).

In particular, for a \({\mathbb {T}}\)-stable function \(\phi \) and \(X \in L_2(\Omega )\), \({\widehat{\phi }}(X)= {\widehat{\phi }}(\{X\})\), where \(\{x\}\) is the unique number in \([0,2\pi )\) such that \(x-\{x\} \in 2\pi \mathbb {Z}\). Let us mention two important classes of examples of \({\mathbb {T}}\)-stable functions:

  • if \(h: \mathbb {R}\rightarrow \mathbb {R}\) is a \(2\pi \)-periodic function, the map \(\phi :\mu \in {\mathcal {P}}_2(\mathbb {R}) \mapsto \int _\mathbb {R}h(x) \mathrm {d}\mu (x)\) is \({\mathbb {T}}\)-stable. The \(2\pi \)-periodicity condition ensures that \({\widehat{\phi }}(X)={\mathbb {E}} \left[ h(X) \right] ={\mathbb {E}} \left[ h(\{X\}) \right] ={\widehat{\phi }}(\{X\})\).

  • if \(h: \mathbb {R}\rightarrow \mathbb {R}\) is a \(2\pi \)-periodic function, the map \(\phi :\mu \in {\mathcal {P}}_2(\mathbb {R}) \mapsto \int _{\mathbb {R}^2} h(x-y) \mathrm {d}(\mu \otimes \mu )(x,y)\) is also \({\mathbb {T}}\)-stable.
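The first example can be checked directly on discrete laws: shifting any sample by a multiple of \(2\pi \) leaves \({\widehat{\phi }}\) unchanged when h is \(2\pi \)-periodic. A minimal sketch (our own, with equally weighted samples standing in for the law of X):

```python
import math

def phi_hat(samples, h):
    """Lifted functional phi_hat(X) = E[h(X)] for an empirical law
    given by equally weighted samples."""
    return sum(h(x) for x in samples) / len(samples)

h = math.cos                     # a 2*pi-periodic test function
mu = [0.5, 1.7, 3.0, 5.9]
# nu shifts each sample by 2*pi*k for some integer k, so mu ~ nu
nu = [x + 2.0 * math.pi * k for x, k in zip(mu, [1, -2, 0, 3])]
```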

If the reader is not familiar with the L-derivative \(\partial _\mu \phi \), we refer to Paragraph A.3 in “Appendix” for a short introduction.

Definition 11

A function \(\phi :{\mathcal {P}}_2(\mathbb {R})\rightarrow \mathbb {R}\) is said to satisfy the \(\phi \)-assumptions if the following three conditions hold:

\((\phi 1)\):

\(\phi \) is \({\mathbb {T}}\)-stable, bounded and continuous on \({\mathcal {P}}_2(\mathbb {R})\).

\((\phi 2)\):

\(\phi \) is L-differentiable and \(\sup _{\mu \in {\mathcal {P}}_2(\mathbb {R})} \int _\mathbb {R}|\partial _\mu \phi (\mu ) (x) |^2 \mathrm {d}\mu (x) <+\infty \).

\((\phi 3)\):

The Fréchet derivative \(D{\widehat{\phi }}\) is Lipschitz-continuous: there is \(C>0\) such that

$$\begin{aligned} {\mathbb {E}} \left[ |D {\widehat{\phi }}(X)-D {\widehat{\phi }}(Y)|^2 \right] \leqslant C{\mathbb {E}} \left[ |X-Y|^2 \right] , \quad X,Y \in L_2(\Omega ). \end{aligned}$$

Remark 12

Assumption \((\phi 2)\) implies that \({\widehat{\phi }}\) is Lipschitz-continuous. Therefore \({\widehat{\phi }}\) belongs to \({\mathcal {C}}_{{\mathrm {b}}}^{1,1}(L_2(\Omega ))\), the space of bounded and Lipschitz continuous functions on \(L_2(\Omega )\) whose Fréchet derivative is also bounded and Lipschitz continuous on \(L_2(\Omega )\). Let us mention that the inf-sup convolution method introduced by Lasry and Lions allows one to construct, for each bounded uniformly continuous function \(\varphi \) defined on \(L_2(\Omega )\), a sequence \((\varphi _n)_n\) of \({\mathcal {C}}_{{\mathrm {b}}}^{1,1}(L_2(\Omega ))\)-functions converging uniformly to \(\varphi \) on \(L_2(\Omega )\), see [31].

The following statement shows that the class of functions satisfying the \(\phi \)-assumptions is stable under the action of \((P_t)_{t \in [0,T]}\).

Proposition 13

Assume that f is of order \(\alpha > \frac{5}{2}\). Let \(\phi :{\mathcal {P}}_2(\mathbb {R})\rightarrow \mathbb {R}\) be a function satisfying the \(\phi \)-assumptions. Then for every \(t\in [0,T]\), \(P_t \phi :\mathcal P_2(\mathbb {R})\rightarrow \mathbb {R}\) also satisfies the \(\phi \)-assumptions. Moreover, for any fixed \(t\in [0,T]\), the Fréchet derivative of \(g \mapsto P_t\phi (\mu _0^g)\) is given by

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho h}) = D \widehat{P_t\phi }(g) \cdot h&= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 D{\widehat{\phi }}(x_t^g)_u \; \frac{\partial _u x_t^g(u)}{g'(u)} h(u) \mathrm {d}u \right] \nonumber \\&= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} h(u) \mathrm {d}u \right] . \end{aligned}$$
(10)

Note that \(D{\widehat{\phi }}(x_t^g)\) is an element of the dual of \(L_2([0,1] \times \Omega ^\beta )\), identified here with an element of \(L_2([0,1] \times \Omega ^\beta )\). The proof of Proposition 13 is given in Paragraph A.4 in “Appendix”.

2.4 Statement of the main theorem

The main result of this paper is a gradient estimate for the semi-group \((P_t)_{t \in [0,T]}\) associated to \((\mu _t^g)_{t \in [0,T]}\), which is given at points \(g \in {\mathbf {G}}^{3+\theta }\) and for directions of perturbations h defined as follows.

Definition 14

We denote by \(\Delta ^1\) the set of 1-periodic \(\mathcal C^1\)-functions \(h:\mathbb {R}\rightarrow \mathbb {R}\). We define the following norm on \(\Delta ^1\): \( \Vert h\Vert _{{\mathcal {C}}^1}:= \sup _{u \in [0,1]} |h(u)| + \sup _{u \in [0,1]} |h'(u)|\).

A simple computation shows that for \(|\rho |\ll 1\), \(g+\rho h\) still belongs to \({\mathbf {G}}^1\). Let us state the main theorem.

Theorem 15

Let \(\phi :{\mathcal {P}}_2(\mathbb {R})\rightarrow \mathbb {R}\) satisfy the \(\phi \)-assumptions. Let \(\theta \in (0,1)\) and f be of order \(\alpha =\frac{7}{2}+\theta \). Let \(g \in {\mathbf {G}}^{3+\theta }\) and \(h \in \Delta ^1\) be two deterministic functions. Then there is \(C_g\) independent of h such that for every \(t \in (0,T]\)

$$\begin{aligned} \left| \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho h})\right| \leqslant C_g \frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}\Vert h\Vert _{{\mathcal {C}}^1}, \end{aligned}$$
(11)

where \(C_g\) is bounded when \(\Vert g'''\Vert _{L_\infty }+\Vert g''\Vert _{L_\infty }+\Vert g'\Vert _{L_\infty } + \Vert \textstyle \frac{1}{g'}\Vert _{L_\infty }\) is bounded.

In the following section, we split the derivative appearing in the l.h.s. of (11) into two terms, \(I_1\) and \(I_2\), which will be studied separately in Sects. 4 and 5, respectively.

3 Preparation for the proof

To start this paragraph, we rewrite the gradient of the semi-group in terms of the L-derivative of \(P_t \phi \) and of the linear functional derivative of \(P_t \phi \). We refer to Paragraph A.3 in “Appendix” for a short reminder about the definitions of, and the relationships between, the different types of derivatives.

For convenience, the following lemma is written for a perturbation \(g'h\) instead of h (the corresponding result for h can be obtained by applying the formula below to \(\frac{h}{g'}\) instead of h). For later purposes, the lemma is stated for random functions g and h whose randomness is \({\mathcal {G}}_0\)-measurable. Recall that within this framework, g and h are independent of \(((W^k)_{k \in \mathbb {Z}}, \beta )\).

Lemma 16

Let \(\phi \), \(\theta \) and f be as in Theorem 15. Let g and h be \({\mathcal {G}}_0\)-measurable random variables with values respectively in \({\mathbf {G}}^{3+\theta }\) and \(\Delta ^1\). Then for every \(t\in [0,T]\),

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h})&= \int _0^1 \partial _\mu (P_t\phi ) (\mu _0^g) (g(u)) \; g'(u) h(u) \;\mathrm {d}u \nonumber \\&={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \; \partial _u x_t^g(u) h(u) \mathrm {d}u \right] \nonumber \\&=-\int _0^1 \frac{\delta P_t \phi }{\delta m} (\mu _0^g)(g(u)) \; h'(u) \;\mathrm {d}u. \end{aligned}$$
(12)

Proof

Fix \(\omega ^0\) in an almost-sure event of \(\Omega ^0\) such that \(g=g(\omega ^0)\) belongs to \({\mathbf {G}}^{3+\theta }\) and \(h=h(\omega ^0)\) belongs to \(\Delta ^1\). Since \(g'\) is 1-periodic and positive, \(\rho _0:=\frac{1}{\Vert h'\Vert _{L_\infty }} \inf _{u \in \mathbb {R}} g'(u)\) is positive and \(g+\rho h\) belongs to \({\mathbf {G}}^1\) for every \(\rho \in (-\rho _0,\rho _0)\). The first equality in (12) follows from the definition of the L-derivative:

$$\begin{aligned}&\frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) = \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} \widehat{P_t \phi } (g+\rho g' h) = D \widehat{P_t\phi }(g) \cdot (g'h )\nonumber \\&\quad = \int _0^1 D \widehat{P_t\phi }(g)_u \; g'(u)h(u) \mathrm {d}u \nonumber \\&\quad =\int _0^1 \partial _\mu (P_t\phi ) (\mu _0^g) (g(u))\; g'(u) h(u) \mathrm {d}u. \end{aligned}$$
(13)

The second equality in (12) was already stated in Proposition 13. For the third equality in (12), we use the relationship between the L-derivative and the functional linear derivative (see Proposition 41)

$$\begin{aligned} \int _0^1 \partial _\mu (P_t\phi ) (\mu _0^g) (g(u))\; g'(u) h(u) \mathrm {d}u= & {} \int _0^1 \partial _v \left\{ \frac{\delta P_t \phi }{\delta m}(\mu _0^g) \right\} (g(u)) g'(u)\; h(u) \mathrm {d}u \\= & {} \int _0^1 \partial _u \left\{ \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(\cdot )) \right\} (u) \;h(u) \mathrm {d}u \\= & {} \left[ \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(u)) \; h(u) \right] _0^1 \\&-\int _0^1 \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(u)) \;h'(u) \mathrm {d}u, \end{aligned}$$

by an integration by parts formula. Furthermore, \(v \mapsto \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (v)\) is \(2\pi \)-periodic. This follows from Proposition 45 in “Appendix”, because on the one hand \(P_t \phi \) satisfies the \(\phi \)-assumptions by Proposition 13 and on the other hand the probability measure \(\mu _0^g\) has density \(g'\) on the torus, which is strictly positive everywhere. It follows that \(\frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(1))= \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(0)+2\pi )= \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(0))\). Since h is 1-periodic, we conclude that \(\left[ \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(u)) h(u) \right] _0^1=0\). \(\square \)
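
The vanishing of the boundary term in the proof above can be checked numerically. The sketch below (Python; F, g and h are hypothetical stand-ins for \(\frac{\delta P_t \phi }{\delta m}(\mu _0^g)\), for an element of \({\mathbf {G}}^1\) and for a perturbation in \(\Delta ^1\), respectively) verifies the resulting integration by parts identity \(\int _0^1 F'(g(u))\,g'(u)h(u)\,\mathrm {d}u=-\int _0^1 F(g(u))\,h'(u)\,\mathrm {d}u\) when F is \(2\pi \)-periodic, \(g(u+1)=g(u)+2\pi \) and h is 1-periodic:

```python
import numpy as np

# Hypothetical stand-ins: F for v -> (delta P_t phi / delta m)(mu_0^g)(v),
# g for an element of G^1 (increasing, g(u+1) = g(u) + 2*pi), h in Delta^1.
F, dF = np.sin, np.cos                                          # F is 2*pi-periodic
g  = lambda u: 2 * np.pi * u + 0.3 * np.sin(2 * np.pi * u)
dg = lambda u: 2 * np.pi + 0.6 * np.pi * np.cos(2 * np.pi * u)  # g' > 0
h  = lambda u: np.cos(2 * np.pi * u)                            # h is 1-periodic
dh = lambda u: -2 * np.pi * np.sin(2 * np.pi * u)

# Both integrands are 1-periodic, so the rectangle rule is spectrally accurate.
N = 4096
u = np.arange(N) / N
lhs = np.mean(dF(g(u)) * dg(u) * h(u))    # int_0^1 F'(g(u)) g'(u) h(u) du
rhs = -np.mean(F(g(u)) * dh(u))           # -int_0^1 F(g(u)) h'(u) du
assert abs(lhs - rhs) < 1e-8              # the boundary term vanishes
```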

By Proposition 5, \(x_t^g\) belongs to \(\mathbf{G}^1\) for every \(t\in [0,T]\). In particular, \(u \mapsto x_t^g(u)\) is invertible and its inverse \(F_t^g:=(x_t^g)^{-1}\) satisfies \(F_t^g(x+2\pi )=F_t^g(x)+1\). We define \((A_t^g)_{t\in [0,T]}\) by

$$\begin{aligned} A_t^g:= (\partial _u x_t^g)(F_t^g(\cdot )) \; h(F_t^g(\cdot )), \end{aligned}$$
(14)

and \((A_t^{g,\varepsilon })_{t\in [0,T]}\) by

$$\begin{aligned} A_t^{g,\varepsilon } := A_t^g *\varphi _\varepsilon = \int _\mathbb {R}A_t^g (\cdot -y) \varphi _\varepsilon (y) \mathrm {d}y, \end{aligned}$$
(15)

where \(\varphi (x)=\frac{1}{\sqrt{2\pi }}e^{-x^2/2}\) and \(\varphi _\varepsilon (x)=\frac{1}{\varepsilon }\varphi (\frac{x}{\varepsilon })\).
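
As a side illustration (not used in the proofs), mollification by \(\varphi _\varepsilon \) damps the k-th Fourier mode of a \(2\pi \)-periodic function by the factor \(e^{-k^2 \varepsilon ^2/2}\), as computed in (19). A minimal numerical check in Python, for a single hypothetical mode \(A(y)=\cos (3y)\) standing in for \(A_t^g\):

```python
import numpy as np

eps = 0.3
# Gaussian mollifier phi_eps(x) = phi(x/eps)/eps, phi the standard normal density.
phi_eps = lambda x: np.exp(-(x / eps) ** 2 / 2) / (eps * np.sqrt(2 * np.pi))

# Hypothetical single-mode stand-in for A_t^g: the k = 3 Fourier mode on the torus.
A = lambda y: np.cos(3 * y)

# Convolution over R, truncated to |x| <= 10*eps (Gaussian tails are negligible).
x = np.linspace(-10 * eps, 10 * eps, 40_001)
dx = x[1] - x[0]
def A_eps(y):
    return np.sum(A(y - x) * phi_eps(x)) * dx   # (A * phi_eps)(y)

# The k-th Fourier mode is damped by exp(-k^2 eps^2 / 2); here k = 3.
y0 = 0.7
assert abs(A_eps(y0) - np.exp(-9 * eps ** 2 / 2) * A(y0)) < 1e-5
```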

Lemma 17

For any \(t \in [0,T]\) and \(\varepsilon >0\), \(A_t^g\) is a \(2\pi \)-periodic \({\mathcal {C}}^1\)-function and \(A_t^{g,\varepsilon }\) is a \(2\pi \)-periodic \({\mathcal {C}}^\infty \)-function.

Proof

The periodicity property follows from the fact that h and \(\partial _u x_t^g\) are 1-periodic and from \(F_t^g(x+2\pi )=F_t^g(x)+1\). Furthermore, by Proposition 4, \(u \mapsto x_t^g(u)\) belongs to \({\mathcal {C}}^{3+\theta '}\) for some \(\theta ' <\theta \), thus \(\partial _u x_t^g \in {\mathcal {C}}^{2+\theta '}\). Therefore, \(A_t^g\) is a \({\mathcal {C}}^1\)-function. The properties of \(A_t^{g,\varepsilon }\) follow. \(\square \)

We conclude this paragraph by splitting the derivative of \(P_t\phi \) into two terms: \(I_1\) involves the regularized function \(A_t^{g,\varepsilon }\), whereas \(I_2\) is a remainder term which we will have to show is small in terms of \(\varepsilon \).

Proposition 18

Under the same assumptions as in Lemma 16 and for every \(t\in [0,T]\)

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) =I_1+I_2, \end{aligned}$$

where

$$\begin{aligned} I_1&:= \frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\!\int _0^t \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{\partial _u x_s^g(u)} A_s^{g,\varepsilon }(x_s^g(u)) \mathrm {d}s \mathrm {d}u \right] ;\\ I_2&:= \frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\!\int _0^t \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{\partial _u x_s^g(u)} (A_s^g-A_s^{g,\varepsilon })(x_s^g(u)) \mathrm {d}s \mathrm {d}u \right] . \end{aligned}$$

Proof

By definition of \((A_t^g)_{t\in [0,T]}\),

$$\begin{aligned} h(u)&= \frac{1}{t} \int _0^t h(u) \mathrm {d}s = \frac{1}{t} \int _0^t \frac{\partial _u x_s^g(F_s^g(x_s^g(u))) h(F_s^g(x_s^g(u))) }{\partial _u x_s^g(u)} \mathrm {d}s\\&= \frac{1}{t} \int _0^t \frac{ A^g_s(x_s^g(u))}{\partial _u x_s^g(u)} \mathrm {d}s. \end{aligned}$$

Therefore, Eq. (12) rewrites

$$\begin{aligned}&\frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) \\&\quad = {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \partial _u x_t^g(u) \left( \frac{1}{t} \int _0^t \frac{A_s^g(x_s^g(u))}{\partial _u x_s^g(u)} \mathrm {d}s \right) \mathrm {d}u \right] . \end{aligned}$$

The r.h.s. of the latter equality is clearly equal to \(I_1+I_2\). \(\square \)

The proof of Theorem 15 is now divided into three main steps. In the following two sections, we will study separately \(I_1\) and \(I_2\). This will lead to the estimate stated in Corollary 32. In Sect. 6, we conclude the proof by iterating the result of Corollary 32 over successive time intervals.

4 Analysis of \(I_1\)

In order to control \(I_1\), we adapt in this section a method of proof introduced in [39]. To follow that strategy, we take advantage of the fact that \(A_t^{g,\varepsilon }\), in contrast to \(A_t^g\), is as regular as needed. The drawback is that the control on \(I_1\) blows up as \(\varepsilon \) goes to 0, which is the reason why the explosion rate is \(t^{-2-\theta }\) in Theorem 15 and not \(t^{-1/2}\) as in [39].

Let us define

$$\begin{aligned} a_t(u)=\int _0^t \frac{g'(u)}{\partial _u x_s^g(u)} A_s^{g,\varepsilon }(x_s^g(u)) \mathrm {d}s. \end{aligned}$$

Using that notation, \(I_1\) rewrites as follows

$$\begin{aligned} I_1=\frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} a_t(u) \mathrm {d}u \right] . \end{aligned}$$

The goal of this section is to prove the following inequality:

Proposition 19

Let \(\phi \), \(\theta \) and f be as in Theorem 15. Let g and h be \({\mathcal {G}}_0\)-measurable random variables with values respectively in \({\mathbf {G}}^{3+\theta }\) and \(\Delta ^1\). Then there is \(C>0\) independent of g, h and \(\theta \) such that for every \(t\in [0,T]\), for every \(\varepsilon \in (0,1)\),

$$\begin{aligned} |I_1|&\leqslant C \; \frac{\Vert \phi \Vert _{L_\infty }}{\varepsilon ^{3+2\theta }\sqrt{t}} C_1(g) \Vert h\Vert _{\mathcal C^1}. \end{aligned}$$
(16)

where \(C_1(g)=1+\Vert g'''\Vert _{L_4}^2 + \Vert g''\Vert _{L_\infty }^{6} + \Vert g'\Vert _{L_\infty }^8 + \left\| \frac{1}{g'} \right\| _{L_\infty }^{8}\).

The proof of the proposition is based on writing the SDE satisfied by \((Z_t^{g(u)+\rho a_t(u)})_{t\in [0,T]}\), where \(Z_t^x\) is the solution to (6). We recall this expansion, known as Kunita’s theorem, in the following lemma:

Lemma 20

Let f be of order \(\alpha > \frac{3}{2}+\theta \) for some \(\theta \in (0,1)\). Let \((\zeta _t)_{t\in [0,T]}\) be a \(({\mathcal {G}}_t)_{t\in [0,T]}\)-adapted process such that \(t \mapsto \zeta _t\) is absolutely continuous, \(\zeta _0=0\) almost surely and \({\mathbb {E}} \left[ \int _0^T |\zeta _t| \mathrm {d}t \right] \) is finite. Then almost surely, for every \(x\in \mathbb {R}\), \(t \in [0,T]\) and \(\rho \in \mathbb {R}\),

$$\begin{aligned}&Z_t^{x+\rho \zeta _t} =Z_0^x+ \sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}\left( e^{-ik Z_s^{x+\rho \zeta _s}} \mathrm {d}W^k_s \right) + \beta _t + \rho \int _0^t \partial _x Z_s^{x+\rho \zeta _s} \;{\dot{\zeta }}_s \;\mathrm {d}s. \end{aligned}$$

Proof

This is an application of Theorem 3.3.1 in [27, Chapter III]. \(\square \)

4.1 Fourier inversion on the torus

A key ingredient in the study of \(I_1\) is the following Fourier inversion of \(A_t^{g,\varepsilon }\), which we will later use in order to apply Girsanov’s theorem.

Lemma 21

Let \(\theta \in (0,1)\), \(g \in {\mathbf {G}}^{3+\theta }\) and f be of order \(\alpha =\frac{7}{2}+\theta \). Fix \(\varepsilon \in (0,1)\). Then there is a collection of \(({\mathcal {G}}_t)_{t\in [0,T]}\)-adapted \(\mathbb {C}\)-valued processes \(((\lambda _t^k)_{t\in [0,T]})_{k \in \mathbb {Z}}\) such that for every \(t\in [0,T]\), the following equality holds

$$\begin{aligned} A_t^{g,\varepsilon } (y) = \sum _{k \in \mathbb {Z}} f_k e^{-ik y} \lambda _t^k, \end{aligned}$$
(17)

and such that there is a constant \(C>0\) independent of \(\varepsilon \) satisfying for each \(t\in [0,T]\)

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k|^2 \mathrm {d}s \right] \leqslant C \frac{t}{\varepsilon ^{6+4\theta }} \Vert h\Vert _{{\mathcal {C}}^1}^2 C_1(g)^2, \end{aligned}$$
(18)

where \(C_1(g)=1+\Vert g'''\Vert _{L_4}^2 + \Vert g''\Vert _{L_\infty }^{6} + \Vert g'\Vert _{L_\infty }^{8} + \left\| \frac{1}{g'} \right\| _{L_\infty }^{8}\).

Proof

Fix \(t\in (0,T]\). The map \(y \mapsto A_t^{g,\varepsilon } (y)\) is a \(2\pi \)-periodic \({\mathcal {C}}^1\)-function. Therefore, by Dirichlet’s Theorem, that map is equal to the sum of its Fourier series

$$\begin{aligned} A_t^{g,\varepsilon } (y) = \sum _{k \in \mathbb {Z}} c_k(A_t^{g,\varepsilon }) e^{-ik y} , \end{aligned}$$

where \(c_k(A):=\frac{1}{2\pi } \int _0^{2\pi } A(y) e^{iky} \mathrm {d}y\) for every \(2\pi \)-periodic function A and for every \(k \in \mathbb {Z}\).

Let us define \(\lambda _t^k:= \frac{c_k(A_t^{g,\varepsilon })}{f_k}\). Since \((A_t^{g,\varepsilon })_{t\in [0,T]}\) is \(({\mathcal {G}}_t)\)-adapted, it is clear that for each \(k \in \mathbb {Z}\), \((\lambda _t^k)_{t\in [0,T]}\) is also \(({\mathcal {G}}_t)\)-adapted. Equality (17) clearly holds true. Moreover,

$$\begin{aligned} \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k|^2 \mathrm {d}s = \sum _{k \in \mathbb {Z}} \int _0^t \left| \frac{c_k(A_s^{g,\varepsilon })}{f_k}\right| ^2 \mathrm {d}s. \end{aligned}$$

Compute the Fourier coefficient \(c_k(A_s^{g,\varepsilon })\):

$$\begin{aligned} c_k(A_s^{g,\varepsilon }) = c_k(A_s^g *\varphi _\varepsilon )&=\frac{1}{2\pi } \int _0^{2\pi } \left( \int _\mathbb {R}A_s^g(y-x) \varphi _\varepsilon (x) \mathrm {d}x \right) e^{iky} \mathrm {d}y \nonumber \\&=\int _\mathbb {R}\varphi _\varepsilon (x) e^{ikx} \left( \frac{1}{2\pi } \int _0^{2\pi } A_s^g(y-x)e^{ik(y-x)} \mathrm {d}y \right) \mathrm {d}x \nonumber \\&= c_k(A_s^g) \int _\mathbb {R}\varphi (x) e^{ik \varepsilon x}\mathrm {d}x. \end{aligned}$$
(19)

Since \(\int _\mathbb {R}\varphi (x) e^{ik \varepsilon x}\mathrm {d}x=\frac{1}{\sqrt{2\pi }} \int _\mathbb {R}e^{-x^2/2} e^{ik \varepsilon x}\mathrm {d}x=e^{-k^2 \varepsilon ^2/2}\), there is in particular \(C >0\) such that for every \(k\in \mathbb {Z}\backslash \{0\}\) and for every \(\varepsilon >0\), \(\left| \int _\mathbb {R}\varphi (x) e^{ik \varepsilon x}\mathrm {d}x \right| \leqslant \frac{C}{|k\varepsilon |^{3+2\theta } }\).

Moreover, \(A_s^g\) is a \({\mathcal {C}}^1\)-function. Thus there is C independent of k and s such that for every \(k\in \mathbb {Z}\backslash \{0\}\), \(|c_k(A_s^g)| \leqslant \frac{C}{|k|} \Vert \partial _x A_s^g \Vert _{L_\infty }\). Furthermore, \(|c_0(A_s^{g,\varepsilon })|=|c_0(A_s^g)| \leqslant \Vert A_s^g \Vert _{L_\infty }\). Since f is of order \(\alpha =\frac{7}{2}+\theta \), there is C such that for every \(k\in \mathbb {Z}\backslash \{0\}\), \(\frac{1}{|f_k|}\leqslant C |k|^{\frac{7}{2}+\theta }\). Thus we have

$$\begin{aligned} \sum _{k \in \mathbb {Z}} \int _0^t \frac{|c_k(A_s^{g,\varepsilon })|^2}{f_k^2} \mathrm {d}s&= \int _0^t \frac{|c_0(A_s^{g,\varepsilon })|^2}{f_0^2} \mathrm {d}s + \sum _{k \ne 0} \int _0^t \frac{|c_k(A_s^{g,\varepsilon })|^2}{f_k^2} \mathrm {d}s \\&\leqslant C \int _0^t \Vert A_s^g \Vert ^2_{L_\infty } \mathrm {d}s + C \sum _{k \ne 0} \int _0^t |k|^{7+2\theta } \frac{1}{|k\varepsilon |^{6+4\theta }} \frac{1}{|k|^2}\Vert \partial _x A_s^g \Vert ^2_{L_\infty } \mathrm {d}s \\&\leqslant \frac{C}{\varepsilon ^{6+4\theta }} \int _0^t \Vert A_s^g \Vert ^2_{\mathcal C^1} \mathrm {d}s, \end{aligned}$$

because \(1 \leqslant \frac{1}{\varepsilon }\) and because the sum \(\sum _{k \ne 0} \frac{1}{|k|^{1+2\theta }}\) converges. Thus there is a constant \(C>0\) independent of \(\varepsilon \) such that \({\mathbb {P}}^W \otimes \mathbb P^\beta \)-almost surely, for every \(t\in [0,T]\),

$$\begin{aligned} \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k|^2 \mathrm {d}s \leqslant \frac{C}{\varepsilon ^{6+4\theta }} \int _0^t \Vert A_s^g \Vert ^2_{{\mathcal {C}}^1} \mathrm {d}s. \end{aligned}$$
(20)

Let us now estimate \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Vert A_s^g \Vert ^2_{{\mathcal {C}}^1} \right] \). Recall that \(\Vert A_s^g\Vert _{L_\infty } \leqslant \Vert \partial _u x_s^g\Vert _{L_\infty } \Vert h\Vert _{L_\infty }\). Thus for every \(s \in [0,T]\),

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| A_s^g \right\| ^2_{L_\infty } \right]&\leqslant \Vert h\Vert _{L_\infty }^2 {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{t\leqslant T}\Vert \partial _u x_t^g\Vert _{L_\infty }^2 \right] \\&\leqslant C \Vert h\Vert _{L_\infty }^2 (1+\Vert g'' \Vert _{L_2}^2 + \Vert g'\Vert _{L_\infty }^4), \end{aligned}$$

where we used inequality (71) given in “Appendix”. Moreover, the derivative of \(A_s^g\) is equal to:

$$\begin{aligned} \partial _x A_s^g (x)&=\frac{(h' \;\partial _u x_s^g + h \;\partial ^{(2)}_u x_s^g) (F_s^g(x))}{\partial _u x_s^g(F_s^g(x))} =h' (F_s^g(x)) + \frac{h (F_s^g(x)) \;\partial ^{(2)}_u x_s^g (F_s^g(x))}{\partial _u x_s^g(F_s^g(x))} . \end{aligned}$$

We deduce that

$$\begin{aligned} \Vert \partial _x A_s^g \Vert _{L_\infty } \leqslant C \Vert h\Vert _{\mathcal C^1}\left( 1+\Vert \partial ^{(2)}_u x_s^g\Vert _{L_\infty }\left\| \frac{1}{\partial _u x_s^g} \right\| _{L_\infty }\right) . \end{aligned}$$
(21)

Therefore, for every \(s \leqslant T\),

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| \partial _x A_s^g \right\| ^2_{L_\infty } \right]&\leqslant C \Vert h\Vert _{{\mathcal {C}}^1}^2 \left( 1+ {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{t \leqslant T}\Vert \partial ^{(2)}_u x_t^g\Vert ^4_{L_\infty } \right] + {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{t \leqslant T}\left\| \frac{1}{\partial _u x_t^g} \right\| ^4_{L_\infty } \right] \right) \nonumber \\&\leqslant C \Vert h\Vert _{{\mathcal {C}}^1}^2 \left( 1+\Vert g'''\Vert _{L_4}^4 + \Vert g''\Vert _{L_\infty }^{12} + \Vert g'\Vert _{L_\infty }^{16} + \left\| \frac{1}{g'} \right\| _{L_\infty }^{16} \right) , \end{aligned}$$
(22)

by (71) and (72) and because g belongs to \({\mathbf {G}}^{3+\theta }\) and \(\alpha =\frac{7}{2}+\theta \). We deduce that

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k|^2 \mathrm {d}s \right] \nonumber \\&\quad \leqslant \frac{Ct}{\varepsilon ^{6+4\theta }} \Vert h\Vert _{{\mathcal {C}}^1}^2 \left( 1+\Vert g'''\Vert _{L_4}^4 + \Vert g''\Vert _{L_\infty }^{12} + \Vert g'\Vert _{L_\infty }^{16} + \left\| \frac{1}{g'} \right\| _{L_\infty }^{16} \right) , \end{aligned}$$

which is inequality (18). This completes the proof of the lemma. \(\square \)
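
Two elementary facts used in the proof above, the Gaussian characteristic-function identity \(\frac{1}{\sqrt{2\pi }} \int _\mathbb {R}e^{-x^2/2} e^{ik \varepsilon x}\mathrm {d}x=e^{-k^2 \varepsilon ^2/2}\) and the bound \(e^{-y^2/2} \leqslant C/y^{3+2\theta }\) with \(C=\sup _{y>0} y^{3+2\theta } e^{-y^2/2}\), can be verified numerically. A sketch in Python (the grids and the values of k, \(\varepsilon \), \(\theta \) are arbitrary illustrative choices):

```python
import numpy as np

# Characteristic function of the standard Gaussian: E[e^{i t N(0,1)}] = e^{-t^2/2}.
x = np.linspace(-12.0, 12.0, 100_001)
dx = x[1] - x[0]
gauss = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
for k, eps in [(1, 0.5), (4, 0.25), (10, 0.1)]:
    lhs = np.sum(gauss * np.exp(1j * k * eps * x)) * dx
    assert abs(lhs - np.exp(-(k * eps) ** 2 / 2)) < 1e-8

# Super-polynomial decay: e^{-y^2/2} <= C / y^(3+2*theta), with the optimal
# constant C = sup_{y>0} y^(3+2*theta) e^{-y^2/2} attained at y^2 = 3+2*theta.
theta = 0.4                      # any theta in (0,1)
p = 3 + 2 * theta
C = p ** (p / 2) * np.exp(-p / 2)
y = np.linspace(0.01, 50.0, 10_000)
assert np.all(np.exp(-y ** 2 / 2) <= C / y ** p + 1e-15)
```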

Remark 22

Now that Lemma 21 is proved, we can explain precisely why a regularization of \(A^g\) was needed. Suppose for a moment that, instead of looking for the Fourier inverse of \(A^{g,\varepsilon }\) in (17), we were looking for the Fourier inverse of \(A^g\). In order to prove an inequality like (18), we would have to show that

$$\begin{aligned} \sum _{k \in \mathbb {Z}} (1+k^2)^\alpha \int _0^t \left| c_k(A_s^{g})\right| ^2 \mathrm {d}s < \infty . \end{aligned}$$

The latter sum converges if \({\mathbb {E}} \left[ \int _0^t \Vert A_s^g \Vert ^2_{\mathcal C^p} \mathrm {d}s \right] \) is bounded for a certain \(p > 1+2\alpha \). In turn, if the latter expectation is bounded then for almost every s, \(y \mapsto A_s^g(y)\) is of class \({\mathcal {C}}^p\). But we know, by definition (14) of \(A_s^g\) and by Proposition 4, that \(A_s^g \in \mathcal C^p\) if \(h \in {\mathcal {C}}^p\), \(g \in {\mathcal {C}}^{p+\theta }\) and \(\alpha >p+\frac{1}{2}\). The regularity of h is not a serious obstacle, since we could simply require higher regularity in the assumptions of Theorem 15. However, it is impossible to choose \(\alpha \) so that both inequalities \(p > 1+2\alpha \) and \(\alpha >p+\frac{1}{2}\) hold simultaneously. Regularizing \(A^g\) allows us to work with \(A_s^{g,\varepsilon } \in \mathcal C^p\) without having to assume that \(\alpha > p +\frac{1}{2}\).

4.2 A Bismut–Elworthy-like formula

Let us state and prove an integration by parts formula, close to the Bismut–Elworthy formula.

Proposition 23

Let \(\theta \in (0,1)\), \(g \in {\mathbf {G}}^{3+\theta }\) and f be of order \(\alpha =\frac{7}{2}+\theta \). For every \(t \in (0,T]\),

$$\begin{aligned} I_1 =\frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}( \overline{\lambda _s^k} \mathrm {d}W_s^k) \right] , \end{aligned}$$
(23)

where \(\overline{\lambda _s^{k}}\) denotes the complex conjugate of \(\lambda _s^{k}\).

In preparation for the proof of Proposition 23, let us introduce the following stopping times. Let \(M_0\) be an integer large enough so that for every \(u\in \mathbb {R}\), \(\frac{1}{M_0}< g'(u) < M_0\). For every \(M \geqslant M_0\), define:

$$\begin{aligned} \tau _M^1&:= \inf \{ t\geqslant 0: \left\| \partial _u x_t^g \right\| _{L_\infty } \geqslant M \}\wedge T; \nonumber \\ \tau _M^2&:= \inf \{ t\geqslant 0: \left\| \textstyle \frac{1}{\partial _u x_t^g(\cdot )} \right\| _{L_\infty } \geqslant M \}\wedge T; \nonumber \\ \tau _M&:= \tau _M^1 \wedge \tau _M^2. \end{aligned}$$
(24)

Since \(g \in {\mathbf {G}}^{3+\theta }\) and f is of order \(\alpha > \frac{7}{2}\), inequalities (71) and (72) imply that

$$\begin{aligned} {\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \left[ \tau _M < T \right] \underset{M \rightarrow +\infty }{\longrightarrow } 0. \end{aligned}$$
(25)

Lemma 24

Let \(M \geqslant M_0\), \(\theta \in (0,1)\), \(g \in {\mathbf {G}}^{3+\theta }\) and f be of order \(\alpha =\frac{7}{2}+\theta \). Fix \(\varepsilon \in (0,1)\). Then there is a collection of \(({\mathcal {G}}_t)_{t\in [0,T]}\)-adapted \(\mathbb {C}\)-valued processes \(((\lambda _t^{k,M})_{t\in [0,T]})_{k \in \mathbb {Z}}\) such that for every \(t\in [0,T]\), the following equality holds

$$\begin{aligned} {\mathbb {1}}_{\{ t \leqslant \tau _M \}} A_t^{g,\varepsilon } (y) = \sum _{k \in \mathbb {Z}} f_k e^{-ik y} \lambda _t^{k,M}, \end{aligned}$$
(26)

and there is a constant \(C_{M,\varepsilon }>0\) such that \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, \(\sum _{k \in \mathbb {Z}} \int _0^T |\lambda _t^{k,M}|^2 \mathrm {d}t \leqslant C_{M,\varepsilon }\).

Proof

Define for every \(t\in [0,T]\), \(\lambda _t^{k,M}:= {\mathbb {1}}_{\{ t \leqslant \tau _M \}} \frac{c_k(A_t^{g,\varepsilon })}{f_k}= {\mathbb {1}}_{\{ t \leqslant \tau _M \}} \lambda _t^k\). As in the proof of Lemma 21, there is a constant \(C>0\) such that for every \(k\in \mathbb {Z}\backslash \{0\}\) and for every \(\varepsilon >0\), \(\left| \int _\mathbb {R}\varphi (x) e^{ik \varepsilon x}\mathrm {d}x \right| \leqslant \frac{C}{|k\varepsilon |^{4+2\theta } }\). Furthermore, for every \(k \in \mathbb {Z}\), \(|c_k(A_t^g)|\leqslant \Vert A_t^g \Vert _{L_2({\mathbb {T}})}\). Thus we have

$$\begin{aligned} \sum _{k \in \mathbb {Z}} \int _0^T |\lambda _t^{k,M}|^2 \mathrm {d}t&\leqslant C \int _0^T {\mathbb {1}}_{\{ t \leqslant \tau _M \}}|c_0(A_t^g)|^2 \mathrm {d}t\\&\quad + \sum _{k \ne 0} \int _0^T {\mathbb {1}}_{\{ t \leqslant \tau _M \}} |k|^{7+2\theta }\frac{1}{|k\varepsilon |^{8+4\theta }} |c_k(A_t^g)|^2 \mathrm {d}t \\&\leqslant \frac{C}{\varepsilon ^{8+4\theta }} \int _0^T {\mathbb {1}}_{\{ t \leqslant \tau _M \}} \Vert A_t^g \Vert ^2_{L_2({\mathbb {T}})} \mathrm {d}t. \end{aligned}$$

By definition (14), for every \(t\in [0,T]\),

$$\begin{aligned} {\mathbb {1}}_{\{ t \leqslant \tau _M \}} \left\| A_t^g \right\| _{L_2({\mathbb {T}})}&\leqslant C {\mathbb {1}}_{\{ t \leqslant \tau _M \}} \left\| A_t^g \right\| _{L_\infty ({\mathbb {T}})} \leqslant C{\mathbb {1}}_{\{ t \leqslant \tau _M^1 \}} \left\| h\right\| _{L_\infty } \left\| \partial _u x_t^g\right\| _{L_\infty } \\&\leqslant C M \left\| h\right\| _{L_\infty }. \end{aligned}$$

Since the constant does not depend on t, we deduce the statement of the lemma. \(\square \)

Define the \(({\mathcal {G}}_t)\)-adapted process \((a^M_t)_{t\in [0,T]}\) by \(a^M_t= a_{t \wedge \tau _M}\), in other words:

$$\begin{aligned} a^M_t(u):=\int _0^t {\mathbb {1}}_{\{ s \leqslant \tau _M \}} \frac{g'(u)}{\partial _u x_s^g(u)} A_s^{g,\varepsilon } (x_s^g(u)) \mathrm {d}s. \end{aligned}$$
(27)

We easily check that for every \(u\in \mathbb {R}\), \(a^M_0(u)=0\) and that \({\dot{a}}^M_t(u)=\frac{g'(u)}{\partial _u x_t^g(u)} {\mathbb {1}}_{\{ t \leqslant \tau _M \}} A_t^{g,\varepsilon } (x_t^g(u))\) is a 1-periodic and continuous function of \(u\in \mathbb {R}\).

Lemma 25

Let \(\varepsilon \in (0,1)\). For every \(M \geqslant M_0\), there are two constants \(C^a_M\) (depending on T, M, \(g'\) and h) and \(C_{M,\varepsilon }\) (depending on T, M, \(\varepsilon \) and h) such that for every \(t\in [0,T]\)

$$\begin{aligned} \Vert a_t^M\Vert _{L_\infty }&\leqslant C_M^a ; \end{aligned}$$
(28)
$$\begin{aligned} \int _0^{t \wedge \tau _M}\Vert A_s^{g,\varepsilon } \Vert ^2_{{\mathcal {C}}^1} \mathrm {d}s&\leqslant C_{M,\varepsilon }. \end{aligned}$$
(29)

Proof

By definition of \(\tau _M\), \(|a^M_t(u)| \leqslant T \Vert g'\Vert _{L_\infty } M \sup _{s \leqslant \tau _M} \Vert A_s^{g,\varepsilon }\Vert _{L_\infty }\) for every \(t\in [0,T]\) and \(u\in \mathbb {R}\). Since \(A_s^{g,\varepsilon }\) is \(2\pi \)-periodic and \(A_s^{g,\varepsilon }=A_s^g *\varphi _\varepsilon \), with \(\left\| \varphi _\varepsilon \right\| _{L_1(\mathbb {R})}=1\), we have \(\left\| A_s^{g,\varepsilon }\right\| _{L_\infty }\leqslant \left\| A_s^g\right\| _{L_\infty ({\mathbb {T}})}\). Recall that by definition (14), \(\left\| A_s^g\right\| _{L_\infty ({\mathbb {T}})} \leqslant \left\| h\right\| _{L_\infty } \left\| \partial _u x_s^g\right\| _{L_\infty }\). We deduce that

$$\begin{aligned} {\mathbb {1}}_{\{s \leqslant \tau _M \}} \left\| A_s^{g,\varepsilon }\right\| _{L_\infty ({\mathbb {T}})} \leqslant \left\| h\right\| _{L_\infty } {\mathbb {1}}_{\{s \leqslant \tau _M^1 \}}\left\| \partial _u x_s^g\right\| _{L_\infty } \leqslant M\left\| h\right\| _{L_\infty }. \end{aligned}$$
(30)

Therefore, inequality (28) holds with \(C^a_M:= T \Vert g'\Vert _{L_\infty } M^2 \Vert h\Vert _{L_\infty }\).

For every \(t\in [0,T]\), \( \partial _x A_t^{g,\varepsilon }= A_t^g *\partial _x \varphi _\varepsilon \). Since \(\left\| \partial _x \varphi _\varepsilon \right\| _{L_1(\mathbb {R})}\leqslant \frac{C}{\varepsilon }\), we obtain

$$\begin{aligned} {\mathbb {1}}_{\{t \leqslant \tau _M \}} \left\| \partial _x A_t^{g,\varepsilon }\right\| _{L_\infty ({\mathbb {T}})} \leqslant \frac{C}{\varepsilon }\left\| h\right\| _{L_\infty } {\mathbb {1}}_{\{t \leqslant \tau _M^1 \}}\left\| \partial _u x_t^g\right\| _{L_\infty } \leqslant \frac{CM}{\varepsilon }\left\| h\right\| _{L_\infty }. \end{aligned}$$
(31)

It follows from (30) and (31) that \(\int _0^{t \wedge \tau _M}\Vert A_s^{g,\varepsilon } \Vert ^2_{{\mathcal {C}}^1} \mathrm {d}s \leqslant T \frac{C}{\varepsilon ^2} M^2 \Vert h\Vert _{L_\infty }^2\) for every \(t\in [0,T]\), whence we obtain (29). \(\square \)
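
The bound \(\left\| \partial _x \varphi _\varepsilon \right\| _{L_1(\mathbb {R})}\leqslant \frac{C}{\varepsilon }\) used above is explicit for the Gaussian mollifier: a direct computation gives \(\int _\mathbb {R}|\varphi _\varepsilon '(x)| \mathrm {d}x=\sqrt{2/\pi }/\varepsilon \). A quick numerical check in Python:

```python
import numpy as np

# phi_eps'(x) = -x / (eps^3 sqrt(2 pi)) * exp(-x^2 / (2 eps^2)), and a direct
# computation gives ||phi_eps'||_{L1(R)} = sqrt(2/pi) / eps: the constant C in
# ||d/dx phi_eps||_{L1} <= C/eps can be taken equal to sqrt(2/pi).
for eps in [1.0, 0.5, 0.1]:
    x = np.linspace(-12 * eps, 12 * eps, 200_001)
    dphi = -x / (eps ** 3 * np.sqrt(2 * np.pi)) * np.exp(-x ** 2 / (2 * eps ** 2))
    l1 = np.sum(np.abs(dphi)) * (x[1] - x[0])
    assert abs(l1 - np.sqrt(2 / np.pi) / eps) < 1e-4 / eps
```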

Using the constant \(C_M^a\) appearing in (28), we define \(\rho _0:=\rho _0(M)=\frac{1}{2 C^a_M}\). The following lemma makes use of Kunita’s expansion.

Lemma 26

Let f be of order \(\alpha =\frac{7}{2}+\theta \). Define the auxiliary process \((Y_t^{\rho ,M})_{t\in [0,T]}\) as the solution to:

$$\begin{aligned} Y_t^{\rho ,M} (u)&= g(u) +\sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}\left( e^{-ik Y_s^{\rho ,M}(u)} \mathrm {d}W^k_s \right) \nonumber \\&\quad + \beta _t + \rho \int _0^t {\mathbb {1}}_{\{ s \leqslant \tau _M \}} A_s^{g,\varepsilon } (Y_s^{\rho ,M}(u)) \mathrm {d}s. \end{aligned}$$
(32)

Then there exists C depending on M, f, \(g'\), h, T and \(\varepsilon \) such that for every \(\rho \in (-\rho _0,\rho _0)\) and for every \(t\in [0,T]\),

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 |Z_t^{g(u)+\rho a_t^M(u)} - Y_t^{\rho ,M} (u)|^2 \mathrm {d}u \right] ^{1/2}\leqslant C |\rho |^{5/4}. \end{aligned}$$
(33)

Proof

Fix \(u\in \mathbb {R}\) and write the equation satisfied by \((Z_t^{g(u)+\rho a_t^M(u)})_{t\in [0,T]}\). We apply Kunita’s expansion (Lemma 20) with \(x=g(u)\) and \((\zeta _t)_{t\in [0,T]}:=(a_t^M(u))_{t\in [0,T]}\). By inequality (28), we have \({\mathbb {E}} \left[ \int _0^T |\zeta _t|\mathrm {d}t \right] \leqslant C_M\). Thus \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely,

$$\begin{aligned} Z_t^{g(u)+\rho a_t^M(u)}&=g(u)+ \sum _{k \in \mathbb {Z}} f_k \int _0^t \mathfrak {R}\left( e^{-ik Z_s^{g(u)+\rho a_s^M(u)}} \mathrm {d}W^k_s \right) + \beta _t \nonumber \\&\quad + \rho \int _0^t \partial _x Z_s^{g(u)+\rho a_s^M(u)} \; \frac{g'(u)}{\partial _u x_s^g(u)} \; {\mathbb {1}}_{\{ s \leqslant \tau _M \}} A_s^{g,\varepsilon } (x_s^g(u)) \;\mathrm {d}s, \end{aligned}$$
(34)

where we used the identities (7) and (8).

Comparing Eq. (34) with Eq. (32) satisfied by \((Y_t^{\rho ,M} (u))_{t\in [0,T]}\), we have for every \(t\in [0,T]\),

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 |Z_t^{g(u)+\rho a_t^M(u)} - Y_t^{\rho ,M} (u)|^2 \mathrm {d}u \right] \leqslant 3(E_1+ \rho ^2 E_2+\rho ^2 E_3), \end{aligned}$$
(35)

where

$$\begin{aligned} E_1&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \Bigg |\sum _{k \in \mathbb {Z}}\int _0^t f_k \mathfrak {R}\left( (e^{-ik Z_s^{g(u)+\rho a_s^M(u)}}-e^{-ik Y_s^{\rho ,M}(u)}) \mathrm {d}W^k_s \right) \Bigg |^2 \mathrm {d}u \right] ; \\ E_2&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left| \int _0^t {\mathbb {1}}_{\{ s \leqslant \tau _M \}} (A_s^{g,\varepsilon } (Z_s^{g(u)}) - A_s^{g,\varepsilon } (Y_s^{\rho ,M}(u)) ) \mathrm {d}s \right| ^2 \mathrm {d}u \right] ; \\ E_3&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left| \int _0^t (\partial _x Z_s^{g(u)+\rho a_s^M(u)}-\partial _x Z_s^{g(u)} )\frac{g'(u)}{\partial _u x_s^g(u)} {\mathbb {1}}_{\{ s \leqslant \tau _M \}} A_s^{g,\varepsilon } (x_s^g(u)) \mathrm {d}s \right| ^2 \mathrm {d}u \right] . \end{aligned}$$

Control on \(E_1\). By Itô’s isometry and since \(y \mapsto e^{-iky}\) is \(|k|\)-Lipschitz,

$$\begin{aligned} E_1&\leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\!\int _0^t \sum _{k \in \mathbb {Z}} f_k^2 k^2 \left| Z_s^{g(u)+\rho a_s^M(u)} - Y_s^{\rho ,M}(u)\right| ^2 \mathrm {d}s \mathrm {d}u \right] \nonumber \\&\leqslant C \int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left| Z_s^{g(u)+\rho a_s^M(u)} - Y_s^{\rho ,M}(u)\right| ^2 \mathrm {d}u \right] \mathrm {d}s , \end{aligned}$$
(36)

because \(\sum _{k \in \mathbb {Z}} f_k^2 k^2 <+\infty \), since \(\alpha > \frac{3}{2}\).
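This summability can be checked numerically. The sketch below is only an illustration under the model assumption \(f_k = (1+|k|)^{-\alpha }\) (the coefficients of the paper are only required to be of order \(\alpha \)); it verifies that the partial sums of \(\sum _{k} f_k^2 k^2\) stabilize when \(\alpha > \frac{3}{2}\).

```python
# Illustration only: assume the model decay f_k = (1 + |k|)**(-alpha).
# Then f_k^2 * k^2 ~ k^(2 - 2*alpha), so the series converges iff alpha > 3/2.

def partial_sum(alpha, K):
    """Symmetric partial sum of f_k^2 * k^2 over 0 < |k| < K."""
    return 2.0 * sum((1 + k) ** (-2 * alpha) * k ** 2 for k in range(1, K))

s_small = partial_sum(2.0, 10_000)
s_large = partial_sum(2.0, 100_000)
# For alpha = 2 > 3/2 the partial sums have essentially converged.
```

For \(\alpha \leqslant \frac{3}{2}\) the partial sums keep growing, which is consistent with the role of the order \(\alpha \) in the assumptions.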

Control on \(E_2\). By (29), there is \(C>0\) such that \(\int _0^{t \wedge \tau _M} \left\| \partial _x A_s^{g,\varepsilon }\right\| _{L_\infty ({\mathbb {T}})}^2 \mathrm {d}s \leqslant C\). It follows from Cauchy-Schwarz inequality that

$$\begin{aligned} E_2&\leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \int _0^{t \wedge \tau _M} \left\| \partial _x A_s^{g,\varepsilon }\right\| _{L_\infty ({\mathbb {T}})}^2 \mathrm {d}s \; \int _0^t |Z_s^{g(u)} - Y_s^{\rho ,M}(u)|^2 \mathrm {d}s \mathrm {d}u \right] \\&\leqslant C \int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left| Z_s^{g(u)+\rho a_s^M(u)} - Y_s^{\rho ,M}(u)\right| ^2 \mathrm {d}u \right] \mathrm {d}s \\&\quad + C \int _0^1 \!\! \int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| Z_s^{g(u)} - Z_s^{g(u)+\rho a_s^M(u)} \right| ^2 \right] \mathrm {d}s \mathrm {d}u. \end{aligned}$$

Moreover, by inequality (28), \(|\rho a_t^M(u)| \leqslant \rho _0 C^a_M =\frac{1}{2}\) for every \(t\in [0,T]\), \(u\in \mathbb {R}\) and \(\rho \in (-\rho _0,\rho _0)\). Fix \(u\in [0,1]\). Let \(J_u\) be the interval \([g(u)-\frac{1}{2}, g(u)+\frac{1}{2}]\). By inequality (76) and by Kolmogorov’s Lemma (see [38, p.26, Thm I.2.1]), it follows that, up to considering a modification of the process \((Z_t^x)_{x\in J_u}\), there is a constant \(C_{{\text {Kol}}}\) independent of u such that

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{x,y \in J_u, x\ne y} \frac{\sup _{t\leqslant T} |Z_t^x - Z_t^y |^2}{|x-y|^{1/2}} \right] \leqslant C_{{\text {Kol}}}. \end{aligned}$$

We deduce that for every \(\rho \in (-\rho _0,\rho _0)\),

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| Z_s^{g(u)} - Z_s^{g(u)+\rho a_s^M(u)} \right| ^2 \right] \\&\quad \leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \bigg [ {\mathbb {1}}_{\{\rho a_t^M(u)\ne 0 \}} \frac{\sup _{t \leqslant T} | Z_t^{g(u)} - Z_t^{g(u)+\rho a_t^M(u)} |^2 }{|\rho a_t^M(u)|^{1/2}} |\rho a_t^M(u)|^{1/2}\bigg ] \\&\quad \leqslant C C_{{\text {Kol}}} |\rho |^{1/2}, \end{aligned}$$

where the constants are independent of s and u. We conclude that for every \(\rho \in (-\rho _0,\rho _0)\)

$$\begin{aligned} E_2 \leqslant C\int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left| Z_s^{g(u)+\rho a_s^M(u)} - Y_s^{\rho ,M}(u)\right| ^2 \mathrm {d}u \right] \mathrm {d}s + C |\rho |^{1/2}. \end{aligned}$$
(37)
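The Hölder continuity of the flow in its initial point, which produces the \(|\rho |^{1/2}\) factor above, can be observed numerically. The following sketch uses a toy one-dimensional flow \(\mathrm {d}Z = \sin (Z)\, \mathrm {d}W + \mathrm {d}\beta \) with a hypothetical Lipschitz coefficient, not the Fourier coefficients of the paper: two solutions driven by the same noise from nearby starting points stay close in mean square.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_gap(x, delta, n_paths=2000, n_steps=200, T=1.0):
    """Euler scheme for the toy flow dZ = sin(Z) dW + d(beta), driven by the
    SAME noise from the two initial points x and x + delta; returns an
    estimate of E[sup_{t<=T} |Z_t^x - Z_t^{x+delta}|^2]."""
    dt = T / n_steps
    z1 = np.full(n_paths, float(x))
    z2 = np.full(n_paths, float(x) + delta)
    sup_gap = np.abs(z1 - z2)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        db = rng.normal(0.0, np.sqrt(dt), n_paths)
        z1 = z1 + np.sin(z1) * dW + db
        z2 = z2 + np.sin(z2) * dW + db
        sup_gap = np.maximum(sup_gap, np.abs(z1 - z2))
    return float(np.mean(sup_gap ** 2))

g_coarse = flow_gap(0.3, 0.1)
g_fine = flow_gap(0.3, 0.01)
# The mean-square gap shrinks as the initial offset shrinks.
```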

Control on \(E_3\). By definition (24) of \(\tau ^2_M\), for every \(s \leqslant \tau _M\), \(\left\| \textstyle \frac{1}{\partial _u x_s^g(\cdot )} \right\| _{L_\infty } \leqslant M\). Thus, by the Cauchy–Schwarz inequality,

$$\begin{aligned} E_3\leqslant & {} \Vert g'\Vert _{L_\infty }^2 M^2 {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\! \int _0^t \left| \partial _x Z_s^{g(u)+\rho a_s^M(u)}-\partial _x Z_s^{g(u)} \right| ^2 \mathrm {d}s \; \int _0^{t\wedge \tau _M} \Vert A_s^{g,\varepsilon }\Vert _{L_\infty } ^2 \mathrm {d}s \mathrm {d}u \right] \\\leqslant & {} C {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\! \int _0^t \left| \partial _x Z_s^{g(u)+\rho a_s^M(u)}-\partial _x Z_s^{g(u)} \right| ^2 \mathrm {d}s \mathrm {d}u \right] , \end{aligned}$$

where the last inequality follows from (29). By inequality (78) and the fact that f is of order \(\alpha > \frac{5}{2}\), we can apply as before Kolmogorov’s Lemma on \(\partial _x Z\) instead of Z. We get for every \(\rho \in (-\rho _0,\rho _0)\),

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| \partial _x Z_s^{g(u)+\rho a_s^M(u)}-\partial _x Z_s^{g(u)} \right| ^2 \right] \leqslant C C_{{\text {Kol}}} |\rho |^{1/2}. \end{aligned}$$

Therefore, for every \(\rho \in (-\rho _0,\rho _0)\), \(E_3 \leqslant C |\rho |^{1/2}\).

Conclusion. Putting together the last inequality with (35), (36) and (37), we obtain for every \(t\in [0,T]\)

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 |Z_t^{g(u)+\rho a_t^M(u)} - Y_t^{\rho ,M} (u)|^2 \mathrm {d}u \right] \\&\quad \leqslant C \int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left| Z_s^{g(u)+\rho a_s^M(u)} - Y_s^{\rho ,M}(u)\right| ^2 \mathrm {d}u \right] \mathrm {d}s +C |\rho |^{5/2}. \end{aligned}$$
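Writing \(\psi (t)\) for the left-hand side of the last inequality, the final step can be spelled out as follows:

```latex
% Gronwall step, with
% \psi(t) := \mathbb{E}^W \mathbb{E}^\beta\Big[\int_0^1 |Z_t^{g(u)+\rho a_t^M(u)} - Y_t^{\rho,M}(u)|^2\,\mathrm{d}u\Big]:
\psi(t) \leqslant C\int_0^t \psi(s)\,\mathrm{d}s + C|\rho|^{5/2}
\;\Longrightarrow\;
\psi(t) \leqslant C|\rho|^{5/2} e^{Ct} \leqslant C e^{CT} |\rho|^{5/2},
```

so that taking square roots gives the rate \(|\rho |^{5/4}\) announced in (33).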

By Gronwall’s inequality, the proof of Lemma 26 is complete. \(\square \)

Finally, the following lemma states a Bismut–Elworthy–Li formula. Note that the only difference from the formula of Proposition 23 is the localization by \(\tau _M\).

Lemma 27

Let \(\theta \in (0,1)\), \(g \in {\mathbf {G}}^{3+\theta }\) and f be of order \(\alpha =\frac{7}{2}+\theta \). For every \(M \geqslant M_0\) and for every \(t\in [0,T]\),

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} a_t^M(u) \mathrm {d}u \right] \nonumber \\&={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k) \right] . \end{aligned}$$
(38)

Proof

Take the real part of equality (26) with \(y=Y_s^{\rho ,M}(u)\). Recall that \(A_s^{g,\varepsilon }\) and \(f_k\) are real-valued. We obtain for every \(M \geqslant M_0\), for every \(u\in \mathbb {R}\), for every \(\rho \in (-\rho _0(M),\rho _0(M))\) and for every \(s \in [0,T]\),

$$\begin{aligned} {\mathbb {1}}_{\{ s \leqslant \tau _M \}} A_s^{g,\varepsilon } (Y_s^{\rho ,M}(u)) = \sum _{k \in \mathbb {Z}} f_k \mathfrak {R}\left( e^{-ik Y_s^{\rho ,M}(u)} \lambda _s^{k,M} \right) . \end{aligned}$$

Thus, we rewrite equality (32) in the following way: for every \(t\in [0,T]\)

$$\begin{aligned} Y_t^{\rho ,M} (u) = g(u) +\sum _{k \in \mathbb {Z}}\int _0^t f_k \mathfrak {R}\left( e^{-ik Y_s^{\rho ,M}(u)} (\mathrm {d}W^k_s +\rho \lambda _s^{k,M} \mathrm {d}s) \right) + \beta _t. \end{aligned}$$

Recall that \(\lambda _s^{k,M}\) is complex-valued. Define for every \(t\in [0,T]\)

$$\begin{aligned} {\mathcal {E}}^\rho _t :=\exp \Bigg (-\rho \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k) -\frac{\rho ^2}{2} \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^{k,M}|^2 \mathrm {d}s \Bigg ). \end{aligned}$$

Recall that by Lemma 24, there is a constant \(C_{M,\varepsilon }>0\) such that \({\mathbb {P}}^W \otimes \mathbb P^\beta \)-almost surely, \(\sum _{k \in \mathbb {Z}} \int _0^T |\lambda _s^{k,M}|^2 \mathrm {d}s \leqslant C_{M,\varepsilon }\). It follows from Novikov’s condition that the process \(({\mathcal {E}}_t^\rho )_{t\in [0,T]}\) is a \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-martingale. Let \({\mathbb {P}}^\rho \) be the probability measure on \(\Omega ^W \times \Omega ^\beta \) such that \({\mathbb {P}}^\rho \) is absolutely continuous with respect to \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \) with density \(\frac{\mathrm {d} {\mathbb {P}}^\rho }{\mathrm {d} ({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta )}={\mathcal {E}}^\rho _T\). By Girsanov’s Theorem, \(((W_t^k +\rho \lambda _t^{k,M})_{t\in [0,T]})_{k \in \mathbb {Z}}\) is a collection of independent \({\mathbb {P}}^\rho \)-Brownian motions, independent of \((\beta ,{\mathcal {G}}_0)\). By uniqueness in law of Eq. (4), the law of \((Y_t^{\rho ,M})_{t\in [0,T]}\) under \({\mathbb {P}}^\rho \) is equal to the law of \((x_t^g)_{t\in [0,T]}\) under \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \).
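The martingale property of \(({\mathcal {E}}_t^\rho )_{t\in [0,T]}\) can be illustrated by a Monte Carlo check that a stochastic exponential with bounded integrand has mean one. The sketch below replaces the family \((\lambda ^{k,M})_{k}\) by a single hypothetical deterministic integrand \(\lambda (s)=\cos (s)\) and one real Brownian motion; it illustrates the Girsanov density, not the construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def doleans_mean(rho, n_paths=50_000, n_steps=100, T=1.0):
    """Monte Carlo estimate of E[exp(-rho * int lam dW - rho^2/2 * int lam^2 ds)]
    for the toy bounded integrand lam(s) = cos(s); Novikov's condition holds,
    so the stochastic exponential is a martingale with mean one."""
    dt = T / n_steps
    s = np.linspace(0.0, T, n_steps, endpoint=False)
    lam = np.cos(s)
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    stoch_int = (lam * dW).sum(axis=1)       # int_0^T lam dW
    quad = float((lam ** 2).sum() * dt)      # int_0^T lam^2 ds
    return float(np.exp(-rho * stoch_int - 0.5 * rho ** 2 * quad).mean())

m = doleans_mean(0.5)
# m is close to 1, as predicted by the martingale property.
```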

Fix \(t\in [0,T]\). Recall that \({\widehat{\phi }}(Y):= \phi (\mathcal L_{[0,1] \times \Omega ^\beta }(Y))\) for every \(Y \in L_2([0,1]\times \Omega ^\beta )\). Then

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Y_t^{\rho ,M} ) {\mathcal {E}}_t^\rho \right] = {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Y_t^{\rho ,M} ) \frac{\mathrm {d} \mathbb P^\rho }{\mathrm {d}({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta )} \right] = {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(x_t^g) \right] . \end{aligned}$$

The r.h.s. does not depend on \(\rho \), so we have

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Y_t^{\rho ,M} ) {\mathcal {E}}_t^\rho \right] =0. \end{aligned}$$
(39)

Let us now prove that \(\frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Z_t^{g+\rho a_t^M}) {\mathcal {E}}_t^\rho \right] =0\). By assumption \((\phi 2)\), \({\widehat{\phi }}\) is a Lipschitz-continuous function. By Lemma 26, we have for every \(\rho \in (-\rho _0, \rho _0)\)

$$\begin{aligned}&\left| {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Z_t^{g+\rho a_t^M}) {\mathcal {E}}_t^\rho \right] -{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Y_t^{\rho ,M} ) {\mathcal {E}}_t^\rho \right] \right| \\&\quad \leqslant {\mathbb {E}}^W \left[ |{\widehat{\phi }}(Z_t^{g+\rho a_t^M}) - {\widehat{\phi }}(Y_t^{\rho ,M} ) |^2\right] ^{1/2} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| {\mathcal {E}}_t^\rho \right| ^2 \right] ^{1/2} \\&\quad \leqslant \Vert {\widehat{\phi }}\Vert _{{\text {Lip}}} {\mathbb {E}}^W \left[ \Vert Z_t^{g+\rho a_t^M} - Y_t^{\rho ,M}\Vert _{{L_2([0,1] \times \Omega ^\beta )}}^2\right] ^{1/2} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| {\mathcal {E}}_t^\rho \right| ^2 \right] ^{1/2} \\&\quad \leqslant C_{M,\varepsilon } |\rho |^{5/4}{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| {\mathcal {E}}_t^\rho \right| ^2 \right] ^{1/2}. \end{aligned}$$

Moreover, recalling that \(\sum _{k \in \mathbb {Z}} \int _0^T |\lambda _s^{k,M}|^2 \mathrm {d}s \leqslant C_{M,\varepsilon }\) (see Lemma 24)

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| {\mathcal {E}}_t^\rho \right| ^2 \right] \\&\quad \leqslant e^{\rho _0^2 C_{M,\varepsilon }} \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \textstyle \exp \Big (\! -2\rho \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k) -\frac{(2\rho )^2}{2} \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^{k,M}|^2 \mathrm {d}s \Big ) \right] \\&\quad =e^{\rho _0^2 C_{M,\varepsilon }}, \end{aligned}$$

since the exponential term is a \({\mathbb {P}}^W \otimes \mathbb P^\beta \)-martingale. Therefore,

$$\begin{aligned} \left| {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Z_t^{g+\rho a_t^M}) {\mathcal {E}}_t^\rho \right] -{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Y_t^{\rho ,M} ) {\mathcal {E}}_t^\rho \right] \right| \leqslant C_{M,\varepsilon }|\rho |^{5/4}. \end{aligned}$$
(40)

It follows from (39) and (40) that \(\frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Z_t^{g+\rho a_t^M}) {\mathcal {E}}_t^\rho \right] =0\). By (10), we compute

$$\begin{aligned} 0&= \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Z_t^{g+\rho a_t^M}) {\mathcal {E}}_t^\rho \right] \\&= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 D{\widehat{\phi }}(Z_t^g)_u \; \partial _x Z_t^{g(u)} a_t^M(u) \mathrm {d}u \right] \\&\quad - {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\widehat{\phi }}(Z_t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k) \right] \\&= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} a_t^M(u) \mathrm {d}u \right] \\&\quad - {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k) \right] . \end{aligned}$$

We used Proposition 6 for the last equality. Therefore, equality (38) holds true. \(\square \)

Finally, we prove Proposition 23.

Proof (Proposition 23)

We want to prove

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} a_t(u) \mathrm {d}u \right] \\&\quad ={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k}} \mathrm {d}W_s^k) \right] , \end{aligned}$$

which is equivalent to (23). In order to obtain that equality, it suffices to pass to the limit as \(M \rightarrow +\infty \) in (38). Recall that by (25), \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \left[ \tau _M < T \right] \rightarrow _{M \rightarrow +\infty } 0\). Since \(\{ \tau _M < T\}_{M \geqslant M_0}\) is a non-increasing sequence of events, it follows that \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, \({\mathbb {1}}_{\{ t \leqslant \tau _M \}} \rightarrow {\mathbb {1}}_{\{ t \leqslant T \}}\). Thus it only remains to prove uniform integrability of both sides of equality (38). Precisely, we want to prove:

$$\begin{aligned} \sup _{M \geqslant M_0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \bigg | \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} a_t^M(u) \bigg |^{3/2} \mathrm {d}u \right]&<+\infty ; \end{aligned}$$
(41)
$$\begin{aligned} \sup _{M \geqslant M_0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \bigg (\phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k)\bigg )^{2} \right]&<+\infty . \end{aligned}$$
(42)

Proof of (41). For every \(M \geqslant M_0\), by Hölder’s inequality

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \bigg | \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} a_t^M(u) \bigg |^{3/2} \mathrm {d}u \right] \\&\quad \leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \bigg ( \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \bigg )^2 \mathrm {d}u \right] ^{3/4}{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \bigg | \frac{\partial _u x_t^g(u)}{g'(u)} a_t^M(u) \bigg |^6 \mathrm {d}u \right] ^{1/4}. \end{aligned}$$

By assumption \((\phi 2)\), \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \big ( \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \big )^2 \mathrm {d}u \right] \) is bounded. Moreover, by inequality (68),

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \bigg | \frac{\partial _u x_t^g(u)}{g'(u)} a_t^M(u) \bigg |^6 \mathrm {d}u \right] \\&\quad \leqslant C \left\| \frac{1}{g'} \right\| ^6_{L_\infty } \Vert g'\Vert ^6_{L_{12}} \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \big | a_t^M(u) \big |^{12} \mathrm {d}u \right] ^{1/2}. \end{aligned}$$

By definition (27) of \(a_t^M\), we have

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 |a_t^M(u) |^{12} \mathrm {d}u \right]&\leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 T^{11} \int _0^t \left| \frac{g'(u)}{\partial _u x_s^g(u)} A_s^{g,\varepsilon } (x_s^g(u)) \right| ^{12} \mathrm {d}s \mathrm {d}u \right] . \end{aligned}$$

Remark that for every \(s\in [0,T]\), \(\Vert A_s^{g,\varepsilon }\Vert _{L_\infty }\leqslant \Vert A_s^g\Vert _{L_\infty } \leqslant \Vert \partial _u x_s^g\Vert _{L_\infty } \Vert h\Vert _{L_\infty }\). Thus

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 |a_t^M(u) |^{12} \mathrm {d}u \right] \\&\quad \leqslant C \Vert g'\Vert ^{12}_{L_\infty } \Vert h\Vert ^{12}_{L_\infty } {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{t\leqslant T}\Vert \partial _u x_t^g\Vert ^{12}_{L_\infty } \sup _{t\leqslant T}\left\| \textstyle \frac{1}{\partial _u x_t^g(\cdot )} \right\| ^{12}_{L_\infty } \right] \\&\quad \leqslant C \Vert g'\Vert ^{12}_{L_\infty } \Vert h\Vert ^{12}_{L_\infty } {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{t\leqslant T}\Vert \partial _u x_t^g\Vert ^{24}_{L_\infty } \right] ^{1/2} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{t\leqslant T}\left\| \textstyle \frac{1}{\partial _u x_t^g(\cdot )} \right\| ^{24}_{L_\infty } \right] ^{1/2} \leqslant C, \end{aligned}$$

where the constant C does not depend on M. The last inequality is obtained by inequalities (71) and (72), because \(g \in {\mathbf {G}}^{2+\theta }\) and \(\alpha >\frac{5}{2}+\theta \). We deduce (41).

Proof of (42). For every \(M \geqslant M_0\) and \(t\in [0,T]\)

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \bigg (\phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}(\overline{\lambda _s^{k,M}} \mathrm {d}W_s^k)\bigg )^{2} \right]&\leqslant \Vert \phi \Vert ^2_{L_\infty } {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^{k,M}|^2 \mathrm {d}s \right] \nonumber \\&\leqslant \Vert \phi \Vert ^2_{L_\infty } {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k|^2 \mathrm {d}s \right] , \end{aligned}$$

since \(\lambda _s^{k,M}= {\mathbb {1}}_{\{ s \leqslant \tau _M \}} \lambda _s^k\). By inequality (20), \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k|^2 \mathrm {d}s \right] \) is bounded, so we deduce (42). This completes the proof of Proposition 23. \(\square \)

4.3 Conclusion of the analysis

Putting together Lemma 21 and Proposition 23, we conclude the proof of Proposition 19.

Proof (Proposition 19)

By the Cauchy–Schwarz inequality applied to (23),

$$\begin{aligned} |I_1| \leqslant \frac{1}{t}\Vert \phi \Vert _{L_\infty } {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sum _{k \in \mathbb {Z}} \int _0^t |\lambda _s^k |^2 \mathrm {d}s \right] ^{1/2} \leqslant \frac{C}{t}\Vert \phi \Vert _{L_\infty } \frac{\sqrt{t}}{\varepsilon ^{3+2\theta }} C_1(g) \Vert h\Vert _{{\mathcal {C}}^1}, \end{aligned}$$

where we applied inequality (18). \(\square \)

Remark 28

Note that we have proved a Bismut–Elworthy–Li integration by parts formula up to a remainder term. Indeed, by Propositions 18 and 23, we proved that

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) =\frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \phi (\mu _t^g) \sum _{k \in \mathbb {Z}} \int _0^t \mathfrak {R}( \overline{\lambda _s^k} \mathrm {d}W_s^k) \right] +I_2, \end{aligned}$$

where it should be recalled that \(((\lambda _t^k)_{t\in [0,T]})_{k \in \mathbb {Z}}\), defined by (17), depends on \(\varepsilon \). In the next section, we prove that \(I_2\) is of order \({\mathcal {O}}(\varepsilon )\).

5 Analysis of \(I_2\)

In this section, we look for an upper bound on \(|I_2|\). Define \(H_s^{g,\varepsilon }(u):= \frac{(A_s^g-A_s^{g,\varepsilon })(x_s^g(u))}{\partial _u x_s^g(u)}\). Then \(I_2\) can be rewritten as follows:

$$\begin{aligned} I_2= \frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\!\int _0^t \partial _\mu \phi (\mu _t^g) (x_t^g(u)) \;\partial _u x_t^g(u) \; H_s^{g,\varepsilon }(u) \;\mathrm {d}s \mathrm {d}u \right] . \end{aligned}$$
(43)

Moreover, let \((K_t^{g,\varepsilon })_{t\in [0,T]}\) be the process defined by:

$$\begin{aligned} K_t^{g,\varepsilon }(u):= \int _0^t H_s^{g,\varepsilon }(u) \frac{1}{t-s}\int _s^t \partial _u x_r^g(u) \mathrm {d}\beta _r \; \mathrm {d}s. \end{aligned}$$
(44)

We also introduce the notation \(\left[ \frac{\delta \psi }{\delta m} \right] \), denoting the zero-average linear functional derivative. For every \(\mu \in {\mathcal {P}}_2(\mathbb {R})\) and \(v \in \mathbb {R}\),

$$\begin{aligned} \left[ \frac{\delta \psi }{\delta m} \right] (\mu ) (v):= \frac{\delta \psi }{\delta m} (\mu ) (v) - \int _\mathbb {R}\frac{\delta \psi }{\delta m} (\mu ) (v') \mathrm {d}\mu (v'). \end{aligned}$$
(45)
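As an elementary illustration of (45), consider the hypothetical test functional \(\psi (\mu )=\int v^2 \,\mathrm {d}\mu (v)\), whose linear functional derivative is \(\frac{\delta \psi }{\delta m}(\mu )(v)=v^2\) (up to an additive constant). The sketch below checks, on an empirical measure, that the centered version integrates to zero against \(\mu \).

```python
import numpy as np

def centered_derivative(sample, v):
    """Zero-average linear functional derivative of psi(mu) = int v^2 dmu
    against the empirical measure of `sample`: v^2 - int v'^2 dmu(v')."""
    return v ** 2 - np.mean(sample ** 2)

sample = np.array([0.0, 1.0, -1.0, 2.0])   # atoms of an empirical mu
vals = centered_derivative(sample, sample)
# By construction, the centered derivative has zero mu-average.
```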

The main result of this section is the following proposition.

Proposition 29

Under the same assumptions as Proposition 19, there is \(C>0\) independent of g, h and \(\theta \) such that for every \(t\in [0,T]\) and \(\varepsilon \in (0,1)\),

$$\begin{aligned} |I_2|&\leqslant \frac{C}{\sqrt{t}}\;\varepsilon \Vert h\Vert _{{\mathcal {C}}^1} C_2(g) \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(u)) \frac{K_t^{g,\varepsilon } (u)}{\left\| K_t^{g,\varepsilon } \right\| _{L_\infty }}\mathrm {d}u\right| ^2 \right] ^{1/2}, \end{aligned}$$
(46)

where \(C_2(g) = 1+ \Vert g'''\Vert _{L_8}^3 + \Vert g''\Vert _{L_\infty }^{12}+\Vert g'\Vert _{L_\infty }^{12}+ \left\| \frac{1}{g'} \right\| ^{24}_{L_\infty }\).

5.1 Progressive measurability

We start by showing that the Yamada–Watanabe Theorem applies here, i.e. that the processes \((x_t^g)_{t\in [0,T]}\), \((\mu _t^g)_{t\in [0,T]}\) and \((H_t^{g,\varepsilon })_{t\in [0,T]}\) can all be written as progressively measurable functions of u and the noises \((W^k)_{k \in \mathbb {Z}}\) and \(\beta \).

For that purpose, let \((\Theta , {\mathcal {B}}(\Theta ))\) be the canonical space defined by \(\Theta = {\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}\times {\mathcal {C}}([0,T], \mathbb {R})\) and \({\mathcal {B}}(\Theta )= {\mathcal {B}}(\mathcal C([0,T], \mathbb {C})^\mathbb {Z}) \otimes {\mathcal {B}}({\mathcal {C}}([0,T], \mathbb {R}))\). Let \({\mathbf {P}}\) be the probability measure on \((\Theta , \mathcal B(\Theta ))\) defined as the distribution of \(((W^k)_{k \in \mathbb {Z}}, \beta )\) on \(\Omega ^W \times \Omega ^\beta \). Let \(\mathcal B_t({\mathcal {C}}([0,T], \mathbb {R})):=\sigma ({\mathbf {x}}(s);0\leqslant s \leqslant t)\); in other words, \(({\mathcal {B}}_t({\mathcal {C}}([0,T], \mathbb {R})))_{t\in [0,T]}\) is the canonical filtration on \(({\mathcal {C}}([0,T], \mathbb {R}), {\mathcal {B}}({\mathcal {C}}([0,T], \mathbb {R}) ))\). Similarly, let \((\mathcal B_t({\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}))_{t\in [0,T]}\) be the canonical filtration on \(({\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}, {\mathcal {B}}(\mathcal C([0,T], \mathbb {C})^\mathbb {Z}))\). Let \((\widehat{{\mathcal {B}}}_t(\Theta ))_{t\in [0,T]}\) be the augmentation of the filtration \((\mathcal B_t({\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z})\otimes {\mathcal {B}}_t({\mathcal {C}}([0,T], \mathbb {R}) ))_{t\in [0,T]}\) by the null sets of \({\mathbf {P}}\). These notations are inspired by the textbook [26, pp. 308-311]. We denote elements of \(\Theta \) in bold, e.g. \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in \Theta \).

Lemma 30

Let \(\theta \), \(\varepsilon \), g, f and h be as in Proposition 29. Then

  1. (a)

    there is a \({\mathcal {B}}(\mathbb {R}) \otimes \mathcal B(\Theta )/{\mathcal {B}}({\mathcal {C}}([0,T], \mathbb {R}))\)-measurable function

    $$\begin{aligned} {\mathcal {X}}: \mathbb {R}\times \Theta&\rightarrow {\mathcal {C}}([0,T], \mathbb {R}) \\ (u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})&\mapsto \mathcal X(u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

    which is, for every fixed \(t\in [0,T]\), \({\mathcal {B}}(\mathbb {R}) \otimes \widehat{{\mathcal {B}}}_t(\Theta ) /{\mathcal {B}}_t({\mathcal {C}}([0,T], \mathbb {R}))\)-measurable, such that \({\mathcal {X}}\) is continuous in u for \({\mathbf {P}}\)-almost every fixed \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, \mathbf{b}) \in \Theta \) and such that \({\mathbb {P}}^W \otimes \mathbb P^\beta \)-almost surely, for every \(u\in \mathbb {R}\) and for every \(t\in [0,T]\),

    $$\begin{aligned} x_t^g(u)= {\mathcal {X}}_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ); \end{aligned}$$
    (47)
  2. (b)

    there is a \({\mathcal {B}}({\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}) /{\mathcal {B}}({\mathcal {C}}([0,T], {\mathcal {P}}_2(\mathbb {R})))\)-measurable function

    $$\begin{aligned} {\mathcal {P}}: {\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}&\rightarrow {\mathcal {C}}([0,T], {\mathcal {P}}_2(\mathbb {R})) \\ ({\mathbf {w}}^k)_{k\in \mathbb {Z}}&\mapsto {\mathcal {P}}(({\mathbf {w}}^k)_{k\in \mathbb {Z}}) \end{aligned}$$

    which is, for every fixed \(t\in [0,T]\), \({\mathcal {B}}_t(\mathcal C([0,T], \mathbb {C})^\mathbb {Z}) /{\mathcal {B}}_t({\mathcal {C}}([0,T], \mathcal P_2(\mathbb {R})))\)-measurable, such that \({\mathbb {P}}^W\)-almost surely, for every \(t\in [0,T]\),

    $$\begin{aligned} \mu _t^g= {\mathcal {P}}_t ((W^k)_{k \in \mathbb {Z}}); \end{aligned}$$
    (48)
  3. (c)

    there is a progressively-measurable function \(\mathcal H:[0,T] \times \mathbb {R}\times \Theta \rightarrow \mathbb {R}\), i.e. for every \(t\in [0,T]\),

    $$\begin{aligned} {[}0,t] \times \mathbb {R}\times \Theta&\rightarrow \mathbb {R}\\ (s,u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})&\mapsto \mathcal H_s(u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

    is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}(\mathbb {R}) \otimes \widehat{\mathcal B}_t(\Theta ) /{\mathcal {B}}(\mathbb {R})\)-measurable, such that \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, for every \(u\in \mathbb {R}\) and for every \(t\in [0,T]\),

    $$\begin{aligned} H_t^{g,\varepsilon }(u)= {\mathcal {H}}_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ). \end{aligned}$$
    (49)

Proof

Consider the canonical space \((\Theta , {\mathcal {B}}(\Theta ), (\widehat{{\mathcal {B}}}_t(\Theta ))_{t\in [0,T]}, {\mathbf {P}})\). By Proposition 3, there is a strong and pathwise unique solution to (4) with initial condition \(x_0^g=g\). Therefore, for every fixed \(u \in \mathbb {R}\), there is a unique solution \(({\mathbf {x}}^g_t(u))_{t\in [0,T]}\) to

$$\begin{aligned} {\mathbf {x}}^g_t(u)=g(u)+\sum _{k \in \mathbb {Z}} \int _0^t f_k \mathfrak {R}\left( e^{-ik {\mathbf {x}}^g_s(u)} \mathrm {d}{\mathbf {w}}^k_s \right) + {\mathbf {b}}_t. \end{aligned}$$

Proof of (a). By the Yamada–Watanabe Theorem, the law of \((x^g, (W^k)_{k \in \mathbb {Z}}, \beta )\) under \({\mathbb {P}}^W \otimes \mathbb P^\beta \) is equal to the law of \(({\mathbf {x}}^g, ({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})\) under \({\mathbf {P}}\). This result is proved in [26, Prop. 5.3.20] for a finite-dimensional noise, but the proof is the same for the infinite-dimensional noise \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in \Theta \). By a corollary to this theorem (see [26, Coro. 5.3.23]), it follows that for every \(u\in \mathbb {Q}\), there is a \(\mathcal B(\Theta )/{\mathcal {B}}({\mathcal {C}}([0,T], \mathbb {R}))\)-measurable function

$$\begin{aligned} {\mathcal {X}}^u: \Theta&\rightarrow {\mathcal {C}}([0,T], \mathbb {R}) \\ (({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})&\mapsto \mathcal X^u(({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

which is, for every fixed \(t\in [0,T]\), \(\widehat{\mathcal B}_t(\Theta ) /{\mathcal {B}}_t({\mathcal {C}}([0,T], \mathbb {R}))\)-measurable, such that \({\mathbf {P}}\)-almost surely, for every \(t\in [0,T]\),

$$\begin{aligned} {\mathbf {x}}^g_t(u)= {\mathcal {X}}^u_t (({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}). \end{aligned}$$
(50)

Moreover, again by Proposition 3, there is a \({\mathbf {P}}\)-almost sure event \(A \in {\mathcal {B}}(\Theta )\) such that for every \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in A\), the function \((t,u) \mapsto {\mathbf {x}}^g_t(u)\) is continuous on \([0,T] \times \mathbb {R}\). Up to modifying the almost-sure event A, we may assume that for every \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in A\) and for every \(u \in \mathbb {Q}\), equality (50) holds. Therefore, we can define a continuous function in the variable \(u \in \mathbb {R}\) by extending \(u \in \mathbb {Q}\mapsto {\mathcal {X}}^u\). More precisely, define for every \(u \in \mathbb {R}\), \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in \Theta \),

$$\begin{aligned} {\mathcal {X}} (u, ({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})= \left\{ \begin{aligned}&\lim _{\begin{array}{c} u_n \rightarrow u \\ (u_n)_n \in \mathbb {Q}^\mathbb {N} \end{array}} {\mathcal {X}}^{u_n} ( ({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \quad&\text { if } (({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in A, \\&0 \quad&\text { otherwise.} \end{aligned} \right. \end{aligned}$$

In the latter definition, the limit exists and for every \(((\mathbf {w}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in A\), \({\mathcal {X}}_t (u, (\mathbf {w}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})= {\mathbf {x}}_t^g(u)\) holds for any \(t\in [0,T]\) and \(u \in \mathbb {R}\). By construction, for every \(((\mathbf {w}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in \Theta \), \(u \in \mathbb {R}\mapsto \mathcal X (u, ({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \in {\mathcal {C}}([0,T], \mathbb {R})\) is continuous. It remains to show that \({\mathcal {X}}\) is progressively-measurable. Fix \(t\in [0,T]\). By construction of \({\mathcal {X}}^u\), we know that for every \(u\in \mathbb {Q}\),

$$\begin{aligned} {[}0,t] \times \Theta&\rightarrow \mathbb {R}\\ (s,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )&\mapsto \mathcal X_s^u(({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes \widehat{{\mathcal {B}}}_t(\Theta )/ \mathcal B(\mathbb {R})\)-measurable. Since \({\mathcal {X}}\) is the limit of \(\mathcal X_n:= \sum _{k \in \mathbb {Z}} {\mathcal {X}}^{k/n} \mathbbm {1}_{\{ u \in [\frac{k}{n}, \frac{k+1}{n})\}}\), we deduce that for every \(t\in [0,T]\),

$$\begin{aligned} {[}0,t] \times \mathbb {R}\times \Theta&\rightarrow \mathbb {R}\\ (s,u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )&\mapsto \mathcal X_s(u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}(\mathbb {R}) \otimes \widehat{\mathcal B}_t(\Theta )/ {\mathcal {B}}(\mathbb {R})\)-measurable.
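The approximation \({\mathcal {X}}_n\) by functions that are piecewise constant in u can be sketched on a toy continuous function; the choice \(f=\sin \) below is purely illustrative. The step approximation converges pointwise (here even uniformly) as \(n \rightarrow \infty \), which is what transfers measurability in \((s,({\mathbf {w}}^k)_k, {\mathbf {b}})\) to joint measurability in \((s,u,({\mathbf {w}}^k)_k, {\mathbf {b}})\).

```python
import numpy as np

def step_approx(f, n, u):
    """Piecewise-constant approximation u -> f(floor(n u)/n), mirroring
    X_n := sum_k X^{k/n} 1_{[k/n,(k+1)/n)} from the measurability argument."""
    return f(np.floor(n * u) / n)

u = np.linspace(0.0, 1.0, 1001)
err = float(np.max(np.abs(step_approx(np.sin, 1000, u) - np.sin(u))))
# err is of order 1/n since sin is 1-Lipschitz and |u - floor(n u)/n| < 1/n
```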

Recall that \({\mathcal {L}}^{{\mathbb {P}}^W \otimes {\mathbb {P}}^\beta } (x^g, (W^k)_{k \in \mathbb {Z}}, \beta )= {\mathcal {L}}^{{\mathbf {P}}} ({\mathbf {x}}^g, ({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})\). Since \({\mathbf {P}}\)-almost surely, for every \(u \in \mathbb {R}\) and for every \(t\in [0,T]\), \(\mathbf{x}_t^g(u)= {\mathcal {X}}_t (u, ({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})\), we deduce that \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, for every \(u \in \mathbb {R}\) and for every \(t\in [0,T]\), equality (47) holds. This completes the proof of (a).

Proof of (b). This step amounts to finding \(\mathcal P:{\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}\rightarrow {\mathcal {C}}([0,T], {\mathcal {P}}_2(\mathbb {R}))\) such that for every bounded measurable function \(\Upsilon :\mathbb {R}\rightarrow \mathbb {R}\), the function

$$\begin{aligned} \langle \Upsilon ,{\mathcal {P}} \rangle : {\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}&\rightarrow {\mathcal {C}}([0,T],\mathbb {R}) \\ ({\mathbf {w}}^k)_{k\in \mathbb {Z}}&\mapsto \langle \Upsilon , \mathcal P(({\mathbf {w}}^k)_{k\in \mathbb {Z}}) \rangle = \int _\mathbb {R}\Upsilon (x) \mathrm {d} {\mathcal {P}}(({\mathbf {w}}^k)_{k\in \mathbb {Z}}) (x) \end{aligned}$$

is \({\mathcal {B}}_t({\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}) /{\mathcal {B}}_t(\mathcal C([0,T], \mathbb {R}))\)-measurable for every fixed \(t\in [0,T]\). Define \({\mathcal {P}}\) by duality: for every \(\Upsilon :\mathbb {R}\rightarrow \mathbb {R}\) bounded and measurable,

$$\begin{aligned} \langle \Upsilon , {\mathcal {P}}(({\mathbf {w}}^k)_{k\in \mathbb {Z}}) \rangle := \int _{{\mathcal {C}}([0,T],\mathbb {R})} \int _0^1 \Upsilon ( {\mathcal {X}}(v,(\mathbf {w}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )) \; \mathrm {d}v \; \mathrm {d} \mu _{\mathrm {Wiener}} ({\mathbf {b}}), \end{aligned}$$

where \(\mu _{\mathrm {Wiener}}\) denotes the Wiener measure on \({\mathcal {C}}([0,T],\mathbb {R})\). Thus \({\mathbb {P}}^W\)-almost surely, for every \(t \in [0,T]\), for every \(\Upsilon :\mathbb {R}\rightarrow \mathbb {R}\) bounded and measurable,

$$\begin{aligned} \langle \Upsilon , {\mathcal {P}}_t((W^k)_{k\in \mathbb {Z}}) \rangle&=\mathbb E^\beta \left[ \int _0^1 \Upsilon ( {\mathcal {X}}_t(v,(W^k)_{k\in \mathbb {Z}}, \beta )) \mathrm {d}v \right] = \langle \Upsilon , \mu _t^g \rangle , \end{aligned}$$

where the last equality follows from Definition 7. This proves equality (48).

Moreover, for every \(t\in [0,T]\), by composition of two measurable functions,

$$\begin{aligned} {[}0,t] \times \mathbb {R}\times \Theta&\rightarrow \mathbb {R}\\ (s,u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )&\mapsto \Upsilon ( {\mathcal {X}}_s(u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )) \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}(\mathbb {R}) \otimes \widehat{\mathcal B}_t(\Theta )/ {\mathcal {B}}(\mathbb {R})\)-measurable. By Fubini’s Theorem, it follows that for every \(t\in [0,T]\),

$$\begin{aligned} {[}0,t] \times {\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z}&\rightarrow \mathbb {R}\\ (s,({\mathbf {w}}^k)_{k\in \mathbb {Z}} )&\mapsto \int _{\mathcal C([0,T],\mathbb {R})} \int _0^1 \Upsilon ( {\mathcal {X}}_s(v,(\mathbf {w}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )) \;\mathrm {d}v \; \mathrm {d} \mu _{\mathrm {Wiener}} ({\mathbf {b}}), \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}_t( {\mathcal {C}}([0,T], \mathbb {C})^\mathbb {Z})/{\mathcal {B}}(\mathbb {R})\)-measurable. This completes the proof of (b).

Proof of (c). Define, on the canonical space \((\Theta , {\mathcal {B}}(\Theta ))\), \({\mathbf {F}}_t^g= ({\mathbf {x}}_t^g)^{-1}\) and

$$\begin{aligned} {\mathbf {A}}_t^g&:= \partial _u {\mathbf {x}}_t^g({\mathbf {F}}_t^g(\cdot )) h({\mathbf {F}}_t^g(\cdot )) ; \\ {\mathbf {A}}_t^{g,\varepsilon }&:= \int _\mathbb {R}{\mathbf {A}}_t^g(\cdot -y) \varphi _\varepsilon (y) \mathrm {d}y ; \\ {\mathbf {H}}_t^{g,\varepsilon }(u)&:= \frac{1}{\partial _u {\mathbf {x}}_t^g(u)} ({\mathbf {A}}_t^g-{\mathbf {A}}_t^{g,\varepsilon })({\mathbf {x}}_t^g(u)). \end{aligned}$$

In order to prove that \({\mathbf {H}}^{g, \varepsilon }\) can be written as a progressively measurable function of u and \((({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})\), we will prove successively that this property holds for \(\partial _u {\mathbf {x}}^g\), \({\mathbf {F}}^g\), \({\mathbf {A}}^g\) and \({\mathbf {A}}^{g,\varepsilon }\) and we will deduce the result for \(\mathbf{H}^{g, \varepsilon }\) by composition of progressively measurable functions.

Let us start with \(\partial _u {\mathbf {x}}^g\). By Proposition 4, since \(g\in \mathbf{G}^{1+\theta }\) and \(\alpha > \frac{3}{2}+\theta \), \({\mathbf {P}}\)-almost surely, for every \(t\in [0,T]\), the map \(u\mapsto {\mathbf {x}}_t^g(u)\) is of class \({\mathcal {C}}^1\). Thus there exists a \(\mathbf{P}\)-almost-sure event \(A \in {\mathcal {B}}(\Theta )\) such that for every \((({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}}) \in A\), \( \mathbf{x}_t^g(u)={\mathcal {X}}_t (u, ({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})\) holds for every \((t,u) \in [0,T] \times \mathbb {R}\) and \(u \mapsto {\mathbf {x}}_t^g(u)\) is of class \({\mathcal {C}}^1\) for every \(t\in [0,T]\). Define for every \((({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}}) \in A\), for every \((t,u) \in [0,T] \times \mathbb {R}\),

$$\begin{aligned} \partial _u{\mathcal {X}}_t(u,({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}}):= \limsup _{\eta \searrow 0} \frac{{\mathcal {X}}_t(u+\eta ,({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})-{\mathcal {X}}_t(u,({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})}{\eta }. \end{aligned}$$

Thus for every \((({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}}) \in A\) and for every \((t,u) \in [0,T] \times \mathbb {R}\), \(\partial _u \mathbf{x}_t^g(u)=\partial _u{\mathcal {X}}_t(u,({\mathbf {w}}^k)_{k \in \mathbb {Z}}, \mathbf{b})\). Moreover, by progressive measurability of \({\mathcal {X}}\), it follows from the definition that \(\partial _u {\mathcal {X}}\) is also progressively measurable; more precisely, for every \(t\in [0,T]\),

$$\begin{aligned} {[}0,t] \times \mathbb {R}\times \Theta&\rightarrow \mathbb {R}\\ (s,u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )&\mapsto \partial _u {\mathcal {X}}_s(u,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}(\mathbb {R}) \otimes \widehat{\mathcal B}_t(\Theta )/ {\mathcal {B}}(\mathbb {R})\)-measurable.

Now, consider \({\mathbf {F}}^g\). Define for every \(x\in [0,2\pi ]\)

$$\begin{aligned} \widetilde{{\mathbf {F}}}_t^g(x):=\int _0^1 {\mathbb {1}}_{\{\mathbf{x}_t^g(v)-{\mathbf {x}}_t^g(0) \leqslant x\}} \mathrm {d}v. \end{aligned}$$
(51)

Thus we have for every \(x\in [{\mathbf {x}}_t^g(0),{\mathbf {x}}_t^g(0)+2\pi ]\)

$$\begin{aligned} \widetilde{{\mathbf {F}}}_t^g(x-{\mathbf {x}}_t^g(0))=\int _0^1 \mathbbm {1}_{\{{\mathbf {x}}_t^g(v) \leqslant x\}} \mathrm {d}v =\int _0^1 \mathbbm {1}_{\{v\leqslant {\mathbf {F}}_t^g(x)\}} \mathrm {d}v={\mathbf {F}}_t^g(x). \end{aligned}$$

Therefore, since for every \(x\in \mathbb {R}\), \({\mathbf {F}}_t^g(x+2\pi )=\mathbf{F}_t^g(x)+1\), we have

$$\begin{aligned} {\mathbf {F}}_t^g(x)=\sum _{k \in \mathbb {Z}} {\mathbb {1}}_{\{ x-2\pi k \in [\mathbf{x}_t^g(0),{\mathbf {x}}_t^g(0)+2\pi ) \}} \left( \widetilde{\mathbf{F}}_t^g(x-2\pi k -{\mathbf {x}}_t^g(0))+k \right) . \end{aligned}$$

Hence it is sufficient to prove that we can write \(\widetilde{{\mathbf {F}}}_t^g\) as a progressively measurable function of x and \((({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})\). Recall that \({\mathbf {P}}\)-almost surely, \(u \mapsto {\mathbf {x}}^g (u) = \mathcal X(u,({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})\) is continuous. Thus there is \({\mathcal {I}}\) such that \({\mathbf {P}}\)-almost surely, for every \(v \in [0,1]\), for every \(x \in [0,2\pi ]\), \({\mathbb {1}}_{\{\mathbf{x}_\cdot ^g(v)-{\mathbf {x}}_\cdot ^g(0) \leqslant x\}} ={\mathcal {I}}_\cdot (v,x,({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}})\) and such that for every \(t\in [0,T]\),

$$\begin{aligned} {[}0,t] \times [0,1] \times [0,2\pi ] \times \Theta&\rightarrow \mathbb {R}\\ (s,v,x,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )&\mapsto \mathcal I_s(v,x,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}([0,1] \times [0,2\pi ]) \otimes \widehat{{\mathcal {B}}}_t(\Theta )/ {\mathcal {B}}(\mathbb {R})\)-measurable. It follows from Fubini’s Theorem and from (51) that for every \(t\in [0,T]\),

$$\begin{aligned} {[}0,t] \times [0,2\pi ] \times \Theta&\rightarrow \mathbb {R}\\ (s,x,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}} )&\mapsto \int _0^1 {\mathcal {I}}_s(v,x,({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}) \mathrm {d}v = \widetilde{{\mathbf {F}}}_s^g(x) \end{aligned}$$

is \({\mathcal {B}}[0,t] \otimes {\mathcal {B}}( [0,2\pi ]) \otimes \widehat{{\mathcal {B}}}_t(\Theta )/ {\mathcal {B}}(\mathbb {R})\)-measurable.
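The construction of \({\mathbf {F}}_t^g\) from \(\widetilde{{\mathbf {F}}}_t^g\) can be illustrated numerically. In the sketch below (an illustration only; the increasing map `x_g`, satisfying \(x_g(u+1)=x_g(u)+2\pi \), is an arbitrary stand-in for \(u \mapsto {\mathbf {x}}_t^g(u)\)), \(\widetilde{F}\) is computed by a Riemann sum as in (51) and \(F = x_g^{-1}\) is recovered through the periodicity relation \(F(x+2\pi )=F(x)+1\):

```python
import numpy as np

def F_tilde(x, x_g, n=100_000):
    """Riemann-sum approximation of \\int_0^1 1{x_g(v) - x_g(0) <= x} dv."""
    v = (np.arange(n) + 0.5) / n
    return np.mean(x_g(v) - x_g(0.0) <= x)

def F(x, x_g):
    """Recover the inverse of x_g on all of R from F_tilde on [0, 2*pi],
    using the periodicity F(x + 2*pi) = F(x) + 1."""
    # The unique integer k with x - 2*pi*k in [x_g(0), x_g(0) + 2*pi):
    k = int(np.floor((x - x_g(0.0)) / (2 * np.pi)))
    return F_tilde(x - 2 * np.pi * k - x_g(0.0), x_g) + k

# Example: a smooth increasing map with x_g(u + 1) = x_g(u) + 2*pi.
x_g = lambda u: 2 * np.pi * u + 0.3 * np.sin(2 * np.pi * u) + 1.0
for x in [-2.0, 1.0, 7.5]:
    u = F(x, x_g)
    assert abs(x_g(u) - x) < 1e-3  # F inverts x_g up to discretization error
```

Note that the unique integer k with \(x - 2\pi k \in [x_g(0), x_g(0)+2\pi )\) is \(\lfloor (x - x_g(0))/2\pi \rfloor \), which replaces the sum over \(k \in \mathbb {Z}\) in the display above.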

Let us conclude with \({\mathbf {A}}^g\), \({\mathbf {A}}^{g,\varepsilon }\) and \({\mathbf {H}}^{g, \varepsilon }\). First, remark that \({\mathbf {A}}^g\) is obtained by products and compositions of \(\partial _u {\mathbf {x}}^g\), \(\mathbf{F}^g\) and h, where h is a \({\mathcal {C}}^1\)-function. Thus \(x \mapsto {\mathbf {A}}^g(x)\) is a progressively measurable function of x and \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})\). It also follows that \((x,y) \mapsto {\mathbf {A}}^g (x-y) \varphi _\varepsilon (y)\) is a progressively measurable function of x, y and \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})\). By Fubini’s Theorem, we deduce that \(x \mapsto {\mathbf {A}}^{g, \varepsilon }(x)\) is a progressively measurable function of x and \((({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}})\). Again by products and compositions, it follows that there is a progressively measurable function \({\mathcal {H}}\) such that \(\mathbf{P}\)-almost surely, for every \(u\in \mathbb {R}\) and for every \(t\in [0,T]\),

$$\begin{aligned} {\mathbf {H}}_t^{g,\varepsilon }(u) = {\mathcal {H}}_t (u, ({\mathbf {w}}^k)_{k\in \mathbb {Z}}, {\mathbf {b}}). \end{aligned}$$

It follows that \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, equality (49) holds. This completes the proof of (c). \(\square \)

5.2 Idiosyncratic noise

Coming back to equality (43), and applying the relation \(\partial _\mu \phi (\mu _t^g) = \partial _v \left\{ \frac{\delta \phi }{\delta m} (\mu _t^g)\right\} \) (see Proposition 41 in “Appendix”), we have

$$\begin{aligned} I_2&= \frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \!\!\int _0^t \partial _v \left\{ \frac{\delta \phi }{\delta m} (\mu _t^g)\right\} (x_t^g(u)) \; \partial _u x_t^g(u) \; H^{g,\varepsilon }_s(u) \;\mathrm {d}s \mathrm {d}u \right] \\&=\frac{1}{t}\int _0^1 \!\!\int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \partial _u \left\{ \frac{\delta \phi }{\delta m} (\mu _t^g) (x_t^g(\cdot )) \right\} (u)\; H^{g,\varepsilon }_s(u) \right] \mathrm {d}s \mathrm {d}u. \end{aligned}$$

By definition (45), \(\left[ \frac{\delta \phi }{\delta m}\right] \) is equal to \(\frac{\delta \phi }{\delta m}\) up to a constant, so their derivatives are equal and

$$\begin{aligned} I_2=\frac{1}{t}\int _0^1 \!\!\int _0^t {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \partial _u \left\{ \left[ \frac{\delta \phi }{\delta m}\right] (\mu _t^g) (x_t^g(\cdot )) \right\} (u) \; H^{g,\varepsilon }_s(u) \right] \mathrm {d}s \mathrm {d}u. \end{aligned}$$
(52)

In the following lemma, we prove that \(I_2\) can be expressed in terms of \(\frac{\delta \phi }{\delta m}\) instead of its derivative. This key step is, as shown below, a consequence of Girsanov’s Theorem applied with respect to the idiosyncratic noise \(\beta \).
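Before turning to the proof, it may help to recall the simplest instance of this integration by parts. For one-dimensional Brownian motion (a toy analogue for orientation only, unrelated to the specific dynamics studied here), the case \(\sigma = 1\), \(b = 0\) of the formula recalled in the introduction reads \({\mathbb {E}}[\phi '(x+W_t)] = \frac{1}{t} {\mathbb {E}}[\phi (x+W_t) W_t]\), which can be checked by Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
t, x, n = 1.0, 0.5, 2_000_000

phi = np.sin   # test function
dphi = np.cos  # its derivative

W_t = rng.normal(0.0, np.sqrt(t), size=n)  # Brownian motion at time t
X_t = x + W_t

lhs = dphi(X_t).mean()             # direct derivative of the semi-group
rhs = (phi(X_t) * W_t).mean() / t  # Bismut-Elworthy-Li weight
exact = np.cos(x) * np.exp(-t / 2)  # closed form of E[cos(x + W_t)]

assert abs(lhs - rhs) < 5e-3
assert abs(lhs - exact) < 5e-3
```

Both estimators agree with the closed-form value \(\cos (x)\, e^{-t/2}\) up to Monte Carlo error of order \(n^{-1/2}\).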

Lemma 31

Let \(\theta \in (0,1)\), \(g \in {\mathbf {G}}^{1+\theta }\) and f be of order \(\alpha >\frac{3}{2}+\theta \). Let \(h \in \Delta ^1\) and \(\varepsilon >0\). Fix \(u\in [0,1]\) and \(s<t \in [0,T]\). Then the following equality holds:

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \partial _u \left\{ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(\cdot )) \right\} (u) \; H^{g,\varepsilon }_s(u) \right] \nonumber \\&\quad ={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g)(x_t^g(u)) \;H^{g,\varepsilon }_s(u) \; \frac{1}{t-s}\int _s^t \partial _u x_r^g(u) \mathrm {d}\beta _r \right] . \end{aligned}$$
(53)

Proof

Fix \(u\in [0,1]\) and \(s<t \in [0,T]\). Define, for every \(r\in [0,T]\), \(\xi _r:= \frac{1}{t-s}\int _0^r {\mathbb {1}}_{\{z \in [s,t] \}} \mathrm {d}z\).
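Explicitly, \(\xi \) is piecewise linear:

$$\begin{aligned} \xi _r = {\left\{ \begin{array}{ll} 0 &{} \text {if } r \in [0,s], \\ \frac{r-s}{t-s} &{} \text {if } r \in [s,t], \\ 1 &{} \text {if } r \in [t,T], \end{array}\right. } \qquad {\dot{\xi }}_r = \frac{{\mathbb {1}}_{\{r \in [s,t] \}}}{t-s}. \end{aligned}$$

In particular, \(\xi \equiv 0\) on [0, s] and \(\xi _t=1\), two facts used repeatedly below.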

For every \(\nu \in [-1,1]\), denote by \((x_r^\nu )_{r\in [0,T]}\) the process \((x_r^{g+\nu \xi _r})_{r\in [0,T]}\). By Proposition 6, \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely, \(x_r^\nu (u)=x_r^{g+\nu \xi _r}(u)=Z_r^{g(u)+\nu \xi _r}\). Apply Kunita’s expansion (Lemma 20) to \(x=g(u)\) and \(\zeta _t=\xi _t\). We obtain for every \(u\in \mathbb {R}\), \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-almost surely for every \(r\in [0,T]\) and every \(\nu \in [-1,1]\):

$$\begin{aligned} x_r^\nu (u) = g(u)+\sum _{k\in \mathbb {Z}} f_k \int _0^r \mathfrak {R}\left( e^{-ik x_z^\nu (u)} \mathrm {d}W_z^k \right) + \beta _r + \nu \int _0^r \partial _x Z_z^{g(u)+\nu \xi _z} {\dot{\xi }}_z \mathrm {d}z. \end{aligned}$$

Since both sides of the last equality are almost surely continuous with respect to \(u\in \mathbb {R}\), that equality holds almost surely for every \(u \in \mathbb {R}\).

For every \(\nu \in [-1,1]\), define the following stopping time

$$\begin{aligned} \sigma ^\nu :=\inf \left\{ r\geqslant 0: \left| \nu \int _0^r \partial _x Z_z^{g(u)+\nu \xi _z} {\dot{\xi }}_z \mathrm {d}\beta _z \right| \geqslant 1 \right\} \wedge T. \end{aligned}$$

Define the process \((y_r^\nu )_{r\in [0,T]}\) as the solution to

$$\begin{aligned} \mathrm {d}y_r^\nu (u)&= \sum _{k\in \mathbb {Z}} f_k \mathfrak {R}\left( e^{-ik y_r^\nu (u)} \mathrm {d}W_r^k \right) + \mathrm {d}\beta _r + \nu \mathbbm {1}_{\{ r \leqslant \sigma ^\nu \}} \partial _x Z_r^{g(u)+\nu \xi _r} {\dot{\xi }}_r \mathrm {d}r, \end{aligned}$$

and \(\beta ^\nu _r:= \beta _r + \nu \int _0^r {\mathbb {1}}_{\{ z \leqslant \sigma ^\nu \}} \partial _x Z_z^{g(u)+\nu \xi _z} {\dot{\xi }}_z \mathrm {d}z\). Let us define for every \(r\in [0,T]\)

$$\begin{aligned} {\mathcal {E}}^\nu _r=\exp \left( -\nu \int _0^{r\wedge \sigma ^\nu } \partial _x Z_z^{g(u)+\nu \xi _z} {\dot{\xi }}_z \mathrm {d}\beta _z -\frac{\nu ^2}{2} \int _0^{r\wedge \sigma ^\nu } \left| \partial _x Z_z^{g(u)+\nu \xi _z} {\dot{\xi }}_z\right| ^2 \mathrm {d} z\right) . \end{aligned}$$

By definition of \(\sigma ^\nu \), we have \({\mathcal {E}}^\nu _r \leqslant \exp \left( -\nu \int _0^{r\wedge \sigma ^\nu } \partial _x Z_z^{g(u)+\nu \xi _z} {\dot{\xi }}_z \mathrm {d}\beta _z \right) \leqslant \exp \left( 1 \right) \). In particular, \(({\mathcal {E}}^\nu _r)_{r\in [0,T]}\) is a \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \)-martingale. Define \(\mathbb P^\nu \) as the probability measure absolutely continuous with respect to \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \) with density \(\frac{\mathrm {d} {\mathbb {P}}^\nu }{\mathrm {d}({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta )}= {\mathcal {E}}^\nu _T\). Thus by Girsanov’s Theorem, the law under \({\mathbb {P}}^\nu \) of \(((W^k)_{k\in \mathbb {Z}},\beta ^\nu )\) is equal to the law under \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \) of \(((W^k)_{k\in \mathbb {Z}},\beta )\). It follows that \((\Omega , {\mathcal {G}}, (\mathcal G_t)_{t\in [0,T]}, {\mathbb {P}}^\nu , y^\nu , (W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\) is a weak solution to Eq. (4).
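The role of this change of measure can be understood on a simplified finite-dimensional model (a sketch for orientation only: one-dimensional noise, unit diffusion coefficient, deterministic perturbation, no stopping time). For \(X_t^\nu := W_t + \nu \xi _t\) and \({\mathcal {E}}_T^\nu := \exp \big ( -\nu \int _0^T {\dot{\xi }}_z \mathrm {d}W_z - \frac{\nu ^2}{2} \int _0^T |{\dot{\xi }}_z|^2 \mathrm {d}z \big )\), Girsanov’s Theorem implies that \(\nu \mapsto {\mathbb {E}} [ \phi (X_t^\nu ) {\mathcal {E}}_T^\nu ]\) is constant, and differentiating at \(\nu =0\) gives

$$\begin{aligned} {\mathbb {E}} \left[ \phi '(W_t) \, \xi _t \right] = {\mathbb {E}} \left[ \phi (W_t) \int _0^t {\dot{\xi }}_z \mathrm {d}W_z \right] . \end{aligned}$$

Equality (53) has exactly this structure, with the additional random factors \(\partial _x Z\) and \(H^{g,\varepsilon }\) carried along.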

By Lemma 30 and the Yamada-Watanabe Theorem,

$$\begin{aligned} {\mathcal {L}}^{{\mathbb {P}}^\nu } (y^\nu , (W^k)_{k \in \mathbb {Z}}, \beta ^\nu )= {\mathcal {L}}^{{\mathbb {P}}^W \otimes {\mathbb {P}}^\beta } (x^g, (W^k)_{k \in \mathbb {Z}}, \beta )= {\mathcal {L}}^{{\mathbf {P}}} ({\mathbf {x}}^g, ({\mathbf {w}}^k)_{k \in \mathbb {Z}}, {\mathbf {b}}), \end{aligned}$$

and \({\mathbb {P}}^\nu \)-almost surely, for every \(u\in \mathbb {R}\) and \(t\in [0,T]\),

$$\begin{aligned} y^\nu _t(u)= {\mathcal {X}}_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu ). \end{aligned}$$
(54)

Moreover, we claim that \({\mathbb {E}}^\nu \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (y_t^\nu (u)) H^{g,\varepsilon }_s(u) \right] \) does not depend on \(\nu \). Indeed, by (54)

$$\begin{aligned} F(\nu ):= & {} {\mathbb {E}}^\nu \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (y_t^\nu (u)) H^{g,\varepsilon }_s(u) \right] \\= & {} {\mathbb {E}}^\nu \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) \big ({\mathcal {X}}_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\big ) H^{g,\varepsilon }_s(u) \right] \\= & {} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) \big ({\mathcal {X}}_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\big ) H^{g,\varepsilon }_s(u) \frac{\mathrm {d} {\mathbb {P}}^\nu }{\mathrm {d}({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta )} \right] . \end{aligned}$$

Furthermore, by Lemma 30

$$\begin{aligned}&F(\nu )\\&\quad ={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] \big ({\mathcal {P}}_t ((W^k)_{k \in \mathbb {Z}})\big ) \; \big (\mathcal X_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\big ) \; {\mathcal {H}}_s (u,(W^k)_{k \in \mathbb {Z}}, \beta ) \; \frac{\mathrm {d} {\mathbb {P}}^\nu }{\mathrm {d}(\mathbb P^W \otimes {\mathbb {P}}^\beta )} \right] \\&\quad ={\mathbb {E}}^\nu \left[ \left[ \frac{\delta \phi }{\delta m} \right] \big ({\mathcal {P}}_t ((W^k)_{k \in \mathbb {Z}})\big ) \; \big (\mathcal X_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\big ) \; {\mathcal {H}}_s (u,(W^k)_{k \in \mathbb {Z}}, \beta ) \right] . \end{aligned}$$

Moreover, the processes \((\beta _r)\) and \((\beta ^\nu _r)\) are equal on the interval [0, s], because \(\xi _r\equiv 0\) on [0, s]. Since \(({\mathcal {H}}_r)_{r\in [0,T]}\) is progressively measurable, \({\mathbb {P}}^\nu \)-almost surely we have \({\mathcal {H}}_s (u,(W^k)_{k \in \mathbb {Z}}, \beta )= {\mathcal {H}}_s (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\). Therefore,

$$\begin{aligned} F(\nu )&={\mathbb {E}}^\nu \left[ \left[ \frac{\delta \phi }{\delta m} \right] \big ({\mathcal {P}}_t ((W^k)_{k \in \mathbb {Z}})\big ) \; \big (\mathcal X_t (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu )\big ) \; {\mathcal {H}}_s (u,(W^k)_{k \in \mathbb {Z}}, \beta ^\nu ) \right] \\&= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] \big (\mathcal P_t ((W^k)_{k \in \mathbb {Z}})\big ) \; \big ({\mathcal {X}}_t (u,(W^k)_{k \in \mathbb {Z}}, \beta )\big ) \; {\mathcal {H}}_s (u,(W^k)_{k \in \mathbb {Z}}, \beta ) \right] , \end{aligned}$$

since the law of \(((W^k)_{k\in \mathbb {Z}},\beta ^\nu )\) under \(\mathbb P^\nu \) is equal to the law of \(((W^k)_{k\in \mathbb {Z}},\beta )\) under \({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \). The last term of that equality no longer depends on \(\nu \), so we finally get

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\nu }_{\vert \nu =0} F(\nu ) = \frac{\mathrm {d}}{\mathrm {d}\nu }_{\vert \nu =0} {\mathbb {E}}^\nu \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (y_t^\nu (u)) H^{g,\varepsilon }_s(u) \right] =0. \end{aligned}$$
(55)

Furthermore,

$$\begin{aligned} {\mathbb {E}}^\nu \Big [\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (y_t^\nu (u)) H^{g,\varepsilon }_s(u) \Big ]&={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (y_t^\nu (u)) H^{g,\varepsilon }_s(u) {\mathcal {E}}^\nu _t \right] \nonumber \\&={\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (x_t^\nu (u)) H^{g,\varepsilon }_s(u){\mathcal {E}}^\nu _t \right] +R(\nu ), \end{aligned}$$
(56)

where \(R(\nu )= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ {\mathbb {1}}_{\{ \sigma ^\nu <T \}} \Big ( \Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (y_t^\nu (u)) - \Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (x_t^\nu (u)) \Big ) H^{g,\varepsilon }_s(u){\mathcal {E}}^\nu _t \right] \); we used here the fact that \({\mathbb {1}}_{\{ \sigma ^\nu =T \}} (x_t^\nu (u)-y_t^\nu (u))=0\). Let us show that \(R(\nu )= {\mathcal {O}}(|\nu |^2)\). By Hölder’s inequality and by the fact that \({\mathcal {E}}_t^\nu \leqslant \exp (1)=e\), we have

$$\begin{aligned} |R(\nu )|&\leqslant 2 e ({\mathbb {P}}^W \otimes {\mathbb {P}}^\beta ) \Big [\sigma ^\nu <T\Big ]^{1/4} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ H^{g,\varepsilon }_s(u)^2 \right] ^{1/2}\nonumber \\&\quad {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{v\in \mathbb {R}} \Big |\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (v)\Big |^4 \right] ^{\frac{1}{4}}. \end{aligned}$$
(57)

We control the different terms appearing on the r.h.s. of (57). By Markov’s inequality, by the Burkholder-Davis-Gundy inequality and by inequality (77), for every \(\nu \in [-1,1]\)

$$\begin{aligned} {\mathbb {P}}^W \otimes {\mathbb {P}}^\beta \Big [ \sigma ^\nu < T \Big ]&\leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{r \leqslant T} \Big |\nu \int _0^r \partial _x Z_z^{g(u)+\nu \xi _z} \frac{{\mathbb {1}}_{\{z \in [s,t] \}}}{t-s} \mathrm {d}\beta _z \Big |^8 \right] \\&\leqslant C |\nu |^8 \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big |\int _s^t \Big |\partial _x Z_r^{g(u)+\nu \xi _r} \Big |^2 \frac{1}{(t-s)^2}\mathrm {d}r \Big |^4 \right] \\&\leqslant \frac{C}{(t-s)^5} |\nu |^8 \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _s^t \Big |\partial _x Z_r^{g(u)+\nu \xi _r} \Big |^8 \mathrm {d}r \right] \leqslant C |\nu |^8, \end{aligned}$$

where C is a constant depending on s and t that may change from line to line.
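In the chain of inequalities above, the passage from the second line to the third follows from Jensen’s inequality applied to the uniform probability measure on [s, t]:

$$\begin{aligned} \Big | \int _s^t f(r) \mathrm {d}r \Big |^4 \leqslant (t-s)^3 \int _s^t |f(r)|^4 \mathrm {d}r, \qquad \text {with } f(r) = \frac{1}{(t-s)^2}\Big |\partial _x Z_r^{g(u)+\nu \xi _r} \Big |^2, \end{aligned}$$

which produces the factor \((t-s)^{-5}\) on the third line.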

Let us show that \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ H^{g,\varepsilon }_s(u)^2 \right] <+\infty \). Recall that \(H_s^{g,\varepsilon }(u):= \frac{(A_s^g-A_s^{g,\varepsilon })(x_s^g(u))}{\partial _u x_s^g(u)}\). Thus

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ H^{g,\varepsilon }_s(u)^2 \right] ^{1/2}&\leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| \frac{1}{\partial _u x_s^g} \right\| ^4_{L_\infty } \right] ^{1/4} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Vert A_s^g - A_s^{g,\varepsilon }\Vert ^4_{L_\infty } \right] ^{1/4}. \end{aligned}$$

By (72), \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| \frac{1}{\partial _u x_s^g} \right\| ^4_{L_\infty } \right] \) is finite. Moreover, since \(A_s^{g,\varepsilon } = A_s^g *\varphi _\varepsilon \), we have \(\Vert A_s^g - A_s^{g,\varepsilon }\Vert _{L_\infty } \leqslant C \varepsilon \Vert \partial _x A_s^g \Vert _{L_\infty }\), where \(C= \int _\mathbb {R}|y|\varphi (y)\mathrm {d}y\). Using inequality (21) and an analogue of (22) with exponent 4 instead of 2, we check that \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Vert \partial _x A_s^g \Vert _{L_\infty }^4 \right] \) is finite. Thus there is C such that \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ H^{g,\varepsilon }_s(u)^2 \right] \leqslant C\).
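The bound \(\Vert A_s^g - A_s^{g,\varepsilon }\Vert _{L_\infty } \leqslant C \varepsilon \Vert \partial _x A_s^g \Vert _{L_\infty }\) is the standard mollification estimate: \(|A(x) - (A*\varphi _\varepsilon )(x)| \leqslant \int _\mathbb {R}|A(x)-A(x-y)| \varphi _\varepsilon (y) \mathrm {d}y \leqslant \Vert A'\Vert _{L_\infty } \int _\mathbb {R}|y| \varphi _\varepsilon (y) \mathrm {d}y\). A quick numerical illustration (the smooth function \(A=\sin \) and the Gaussian mollifier are arbitrary choices for this sketch, not the \(\varphi \) of the text):

```python
import numpy as np

def check_mollifier_bound(eps, n=4001):
    A = np.sin                             # smooth test function, sup |A'| = 1
    x = np.linspace(0.0, 2 * np.pi, 1000)
    y = np.linspace(-6 * eps, 6 * eps, n)  # numerical support of the mollifier
    dy = y[1] - y[0]
    phi = np.exp(-y**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    phi /= phi.sum() * dy                  # renormalize the truncated Gaussian
    # (A * phi_eps)(x) = \int A(x - y) phi_eps(y) dy, by quadrature
    A_eps = (A(x[:, None] - y[None, :]) * phi[None, :]).sum(axis=1) * dy
    gap = np.max(np.abs(A(x) - A_eps))
    bound = (np.abs(y) * phi).sum() * dy   # \int |y| phi_eps(y) dy
    return gap, bound

gap, bound = check_mollifier_bound(0.1)
assert gap <= bound  # here gap is O(eps^2), well below the O(eps) bound
```

For a smooth A and a symmetric mollifier the actual gap is of order \(\varepsilon ^2\), comfortably inside the \({\mathcal {O}}(\varepsilon )\) bound; only Lipschitz regularity of A is needed for the bound itself.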

Let us now show that \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{v\in \mathbb {R}} \Big |\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (v)\Big |^4 \right] \) is finite. By definition (45), for every \(v \in \mathbb {R}\),

$$\begin{aligned} \Big |\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (v)\Big | \leqslant {\mathbb {E}}^\beta \left[ \int _0^1 \Big | \frac{\delta \phi }{\delta m}(\mu _t^g) (v) - \frac{\delta \phi }{\delta m} (\mu _t^g) (x_t^g(u)) \Big | \mathrm {d}u \right] . \end{aligned}$$
(58)

By inequality (80), there is a \(C>0\) such that \({\mathbb {P}}^W \)-almost surely for every \(x\in [0,2\pi ]\),

$$\begin{aligned} \left| \partial _v \left\{ \frac{\delta \phi }{\delta m}(\mu _t^g)\right\} (x)\right| =\left| \partial _\mu \phi (\mu _t^g)(x)\right| \leqslant C(1+2\pi ) + C {\mathbb {E}}^\beta \left[ \int _0^1 |x_t^g (u')| \mathrm {d}u'\right] . \end{aligned}$$
(59)

Thus there is \(C>0\) such that \({\mathbb {P}}^W \)-almost surely, for every \(v, v' \in [0,2\pi ]\),

$$\begin{aligned} \left| \frac{\delta \phi }{\delta m}(\mu _t^g)(v) - \frac{\delta \phi }{\delta m}(\mu _t^g)(v')\right| \leqslant C + C {\mathbb {E}}^\beta \left[ \int _0^1 |x_t^g (u')| \mathrm {d}u'\right] . \end{aligned}$$

By Proposition 45, \(v\mapsto \frac{\delta \phi }{\delta m}(\mu _t^g)(v) \) is \(2\pi \)-periodic, thus the latter inequality holds for every \(v, v' \in \mathbb {R}\). Combining that inequality with (58), we get for every \(v \in \mathbb {R}\),

$$\begin{aligned} \Big |\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (v)\Big | \leqslant C + C {\mathbb {E}}^\beta \left[ \int _0^1 |x_t^g (u')| \mathrm {d}u'\right] . \end{aligned}$$

This leads to

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{v\in \mathbb {R}} \Big |\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (v)\Big |^4 \right] \leqslant C + C{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 |x_t^g (u')|^4 \mathrm {d}u' \right] , \end{aligned}$$
(60)

which is finite. Thus \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{v\in \mathbb {R}} \Big |\Big [ \frac{\delta \phi }{\delta m} \Big ](\mu _t^g) (v)\Big |^4 \right] \leqslant C\), whence we finally deduce, in view of inequality (57), that \(|R(\nu )| \leqslant C |\nu |^2\).

Thus \(R(\nu )= {\mathcal {O}}(|\nu |^2)\). It follows from (55) and (56) that

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}\nu }_{\vert \nu =0}{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^\nu (u)) \;H^{g,\varepsilon }_s(u)\;{\mathcal {E}}^\nu _t \right] =0. \end{aligned}$$

By (60), \(\Big (\left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^\nu (u))\Big )_{\nu \in [-1,1]}\) is uniformly integrable. Using inequality (59), we prove in the same way that \(\Big ( \partial _v \Big \{\left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) \Big \}(x_t^\nu (u))\Big )_{\nu \in [-1,1]}\) is also uniformly integrable. Recall that \(x_t^\nu (u)=Z_t^{g(u)+\nu \xi _t}\) and that, by inequality (77), \((\partial _x Z_t^{g(u)+\nu \xi _t})_{\nu \in [-1,1]}\) is uniformly integrable. Thus we get by differentiation:

$$\begin{aligned} 0&= \frac{\mathrm {d}}{\mathrm {d}\nu }_{\vert \nu =0} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (Z_t^{g(u)+\nu \xi _t}) \; H^{g,\varepsilon }_s(u) \; {\mathcal {E}}^\nu _t \right] \\&= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \partial _v \Big \{\left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) \Big \} (Z_t^{g(u)}) \; \partial _x Z_t^{g(u)} \; \xi _t \; H^{g,\varepsilon }_s(u) \right] \\&\quad - {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (Z_t^{g(u)}) \; H^{g,\varepsilon }_s(u)\int _0^t \partial _x Z_r^{g(u)} {\dot{\xi }}_r \mathrm {d}\beta _r \right] \end{aligned}$$

Using \(Z_t^{g(u)}=x_t^g(u)\) and \(\partial _x Z_t^{g(u)} = \frac{\partial _u x_t^g(u)}{g'(u)}\) and recalling that \(\xi _r:=\frac{1}{t-s}\int _0^r {\mathbb {1}}_{\{z \in [s,t] \}} \mathrm {d}z\), we have proved that

$$\begin{aligned}&{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \partial _v \Big \{\left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) \Big \} (x_t^g(u)) \frac{\partial _u x_t^g(u)}{g'(u)} \; H^{g,\varepsilon }_s(u) \right] \\&\quad = {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(u)) H^{g,\varepsilon }_s(u)\frac{1}{t-s}\int _s^t \frac{\partial _u x_r^g(u)}{g'(u)} \mathrm {d} \beta _r \right] . \end{aligned}$$

Multiplying both sides by \(g'(u)\), we obtain equality (53), since \(\partial _u \left\{ \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(\cdot )) \right\} (u) =\partial _v \Big \{\left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) \Big \} (x_t^g(u)) \; \partial _u x_t^g(u)\). \(\square \)

5.3 Conclusion of the analysis

We conclude the proof of Proposition 29.

Proof (Proposition 29)

Putting together equalities (52) and (53) with definition (44) of \(K_t^{g,\varepsilon }\), we obtain

$$\begin{aligned} I_2 = \frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \int _0^1 \left[ \frac{\delta \phi }{\delta m}\right] (\mu _t^g)(x_t^g(u)) K_t^{g,\varepsilon }(u) \mathrm {d}u \right] . \end{aligned}$$

By the Cauchy-Schwarz inequality,

$$\begin{aligned} |I_2| \leqslant \frac{1}{t} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| K_t^{g,\varepsilon } \right\| _{L_\infty }^2 \right] ^{1/2} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(u)) \frac{K_t^{g,\varepsilon } (u)}{\left\| K_t^{g,\varepsilon } \right\| _{L_\infty }}\mathrm {d}u\right| ^2 \right] ^{1/2}. \end{aligned}$$

It remains to estimate \( {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| K_t^{g,\varepsilon } \right\| _{L_\infty }^2 \right] \). For every \(u\in [0,1]\),

$$\begin{aligned} \left| K_t^{g,\varepsilon }(u)\right|&\leqslant \int _0^t \left\| A_s^g-A_s^{g,\varepsilon } \right\| _{L_\infty } \frac{1}{|\partial _u x_s^g(u)|} \frac{1}{t-s}\left| \int _s^t \partial _u x_r^g(u) \mathrm {d}\beta _r\right| \mathrm {d}s \end{aligned}$$

By inequality (21),

$$\begin{aligned} \left\| A_s^g-A_s^{g,\varepsilon } \right\| _{L_\infty } \leqslant C \varepsilon \left\| \partial _x A_s^g \right\| _{L_\infty } \leqslant C \varepsilon \Vert h\Vert _{\mathcal C^1}\left( 1+\Vert \partial ^{(2)}_u x_s^g\Vert _{L_\infty }\left\| \frac{1}{\partial _u x_s^g} \right\| _{L_\infty }\right) . \end{aligned}$$

Thus we obtain

$$\begin{aligned} \left\| K_t^{g,\varepsilon } \right\| _{L_\infty }&\leqslant C\varepsilon \Vert h\Vert _{{\mathcal {C}}^1} \left\{ 1+ \textstyle \sup _{r\leqslant T}\left\| \partial ^{(2)}_u x_r^g \right\| _{L_\infty } \right\} \left\{ \textstyle \sup _{r\leqslant T} \left\| \frac{1}{\partial _u x_r^g} \right\| _{L_\infty }+\textstyle \sup _{r\leqslant T}\left\| \frac{1}{\partial _u x_r^g} \right\| _{L_\infty }^2\right\} \\&\quad \quad \cdot \int _0^t \frac{1}{t-s}\left\| \int _s^t \partial _u x_r^g(\cdot ) \mathrm {d}\beta _r \right\| _{L_\infty } \mathrm {d}s. \end{aligned}$$

By Hölder’s inequality, we obtain

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| K_t^{g,\varepsilon } \right\| _{L_\infty }^2 \right] ^{1/2}&\leqslant C\varepsilon \Vert h\Vert _{{\mathcal {C}}^1} E_1E_2E_3, \end{aligned}$$
(61)

where

$$\begin{aligned} E_1&:= 1+ {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \textstyle \sup _{r\leqslant T} \Vert \partial ^{(2)}_u x_r^g \Vert _{L_\infty }^8 \right] ^{1/8};\\ E_2&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \textstyle \sup _{r\leqslant T} \Vert \frac{1}{\partial _u x_r^g} \Vert _{L_\infty }^8 \right] ^{1/8}+{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \textstyle \sup _{r\leqslant T} \Vert \frac{1}{\partial _u x_r^g} \Vert _{L_\infty }^{16} \right] ^{1/8}; \\ E_3&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left( \int _0^t \frac{1}{t-s}\left\| \int _s^t \partial _u x_r^g(\cdot ) \mathrm {d}\beta _r \right\| _{L_\infty } \mathrm {d}s \right) ^4 \right] ^{1/4}. \end{aligned}$$

Recall that g belongs to \({\mathbf {G}}^{3+\theta }\) and f is of order \(\alpha >\frac{7}{2}+\theta \). Thus by (71) and by (72)

$$\begin{aligned} E_1&\leqslant C(1+ \Vert g''' \Vert _{L_8} + \Vert g''\Vert _{L_\infty }^3+\Vert g'\Vert _{L_\infty }^3) ; \\ E_2&\leqslant C (1+\Vert g''\Vert _{L_\infty }^4+\Vert g'\Vert _{L_\infty }^4 + \Vert \textstyle \frac{1}{g'}\Vert _{L_\infty }^8) . \end{aligned}$$

Furthermore, \(E_3 \leqslant E_{3,1}+E_{3,2}\), where

$$\begin{aligned} E_{3,1}&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big ( \int _0^t \frac{1}{t-s}\Big | \int _s^t \partial _u x_r^g(0) \mathrm {d}\beta _r \Big |\mathrm {d}s \Big )^4 \right] ^{1/4}; \\ E_{3,2}&:= {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big ( \int _0^t \frac{1}{t-s} \int _0^1 \Big | \int _s^t \partial ^{(2)}_u x_r^g(v) \mathrm {d}\beta _r \Big |\mathrm {d}s \mathrm {d}v\Big )^4 \right] ^{1/4}. \end{aligned}$$

By Hölder’s inequality, we have

$$\begin{aligned} E_{3,1}&\leqslant {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big ( \int _0^t \frac{1}{|t-s|^{1/2}}\mathrm {d}s \Big )^3 \int _0^t \frac{1}{|t-s|^{5/2}}\Big | \int _s^t \partial _u x_r^g(0) \mathrm {d}\beta _r \Big |^4 \mathrm {d}s \right] ^{1/4} \\&\leqslant C \; t^{3/8} \Big (\int _0^t \frac{1}{|t-s|^{5/2}} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \Big | \int _s^t \partial _u x_r^g(0) \mathrm {d}\beta _r \Big |^4 \right] \mathrm {d}s\Big )^{1/4}. \end{aligned}$$

By the Burkholder-Davis-Gundy inequality, it follows that

$$\begin{aligned} E_{3,1}&\leqslant C \; t^{3/8} \Big ( \int _0^t \frac{|t-s|^2}{|t-s|^{5/2}}\mathrm {d}s \Big )^{1/4} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{r\leqslant T} |\partial _u x_r^g(0)|^4 \right] ^{1/4}\leqslant C\sqrt{t} \Vert g'\Vert _{L_\infty }, \end{aligned}$$

where the last inequality holds by (69). By the same computation,

$$\begin{aligned} E_{3,2}&\leqslant C \sqrt{t}{\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \sup _{r\leqslant T} \int _0^1 |\partial _u^{(2)} x_r^g(v)|^4 \mathrm {d}v \right] ^{1/4} \leqslant C \sqrt{t}\;(1+\Vert g''\Vert _{L_\infty }+\Vert g'\Vert ^2_{L_\infty }), \end{aligned}$$

where the last inequality holds by (70). We deduce that \(E_3 \leqslant C \sqrt{t}\;(1+\Vert g''\Vert _{L_\infty }+\Vert g'\Vert ^2_{L_\infty })\). By inequality (61) and the estimates on \(E_i\), for \(i=1,2,3\), we finally get:

$$\begin{aligned} {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left\| K_t^{g,\varepsilon } \right\| _{L_\infty }^2 \right] ^{1/2}&\leqslant C \sqrt{t}\varepsilon \Vert h\Vert _{{\mathcal {C}}^1} C_2(g), \end{aligned}$$

where \(C_2(g) = 1+ \Vert g'''\Vert _{L_8}^3 + \Vert g''\Vert _{L_\infty }^{12}+\Vert g'\Vert _{L_\infty }^{12}+ \Vert \textstyle \frac{1}{g'}\Vert ^{24}_{L_\infty } \). \(\square \)

As a conclusion of Sects. 4 and 5, we have proved the following inequality.

Corollary 32

Let \(\phi \), \(\theta \) and f be as in Theorem 15. Let g and h be \({\mathcal {G}}_0\)-measurable random variables with values respectively in \({\mathbf {G}}^{3+\theta }\) and \(\Delta ^1\). Let \((K_t^{g,\varepsilon })_{t\in [0,T]}\) be defined by (44). Then there is \(C>0\) independent of g, h and \(\theta \) such that \({\mathbb {P}}^0\)-almost surely, for every \(t\in (0,T]\) and \(\varepsilon \in (0,1)\),

$$\begin{aligned}&\left| \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) \right| \nonumber \\&\quad \leqslant C \; \frac{\Vert \phi \Vert _{L_\infty }}{\varepsilon ^{3+2\theta }\sqrt{t}} C_1(g) \Vert h\Vert _{{\mathcal {C}}^1} \nonumber \\&\quad \quad + \frac{C}{\sqrt{t}}\;\varepsilon \Vert h\Vert _{{\mathcal {C}}^1} C_2(g) \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(u)) \frac{K_t^{g,\varepsilon } (u)}{\left\| K_t^{g,\varepsilon } \right\| _{L_\infty }}\mathrm {d}u\right| ^2 \right] ^{1/2}, \end{aligned}$$
(62)

where \(C_1(g)=1+\Vert g'''\Vert _{L_4}^2 + \Vert g''\Vert _{L_\infty }^{6} + \Vert g'\Vert _{L_\infty }^8 + \left\| \frac{1}{g'} \right\| _{L_\infty }^{8}\) and \(C_2(g) = 1+ \Vert g'''\Vert _{L_8}^3 + \Vert g''\Vert _{L_\infty }^{12}+\Vert g'\Vert _{L_\infty }^{12}+ \left\| \frac{1}{g'} \right\| ^{24}_{L_\infty }\).

6 Proof of the main theorem

Essentially, Corollary 32 states that the gradient of \(P_t \phi \) can be controlled by the gradient of \(\phi \). Iterating this inequality over successive time steps will complete the proof of Theorem 15.

Definition 33

Let \({\mathcal {K}}_t\) be the set of \({\mathcal {G}}_t\)-measurable random variables taking their values \({\mathbb {P}}\)-almost surely in the set of continuous 1-periodic functions \(k:\mathbb {R}\rightarrow \mathbb {R}\) satisfying \(\Vert k\Vert _{L_\infty }=1\).

Proposition 34

Let \(\phi \), \(\theta \), f and g be as in Theorem 15. Let \(t,s \in [0,T]\) be such that \(t+s \leqslant T\). There exists \(C_g>0\), independent of s, t and h, such that for each \({\mathcal {G}}_s\)-measurable function h with values in \(\Delta ^1\) satisfying \({\mathbb {P}}\)-almost surely \(\Vert h\Vert _{\mathcal C^1} \leqslant 4\),

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_{t}\phi }{\delta m}\right] (\mu _{s}^g) (x_{s}^g(u)) \; h' (u) \mathrm {d}u \right| ^2 \right] ^{\frac{1}{2}}\nonumber \\&\quad \leqslant C_g\frac{ \Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} \nonumber \\&\qquad + \frac{1}{2^{3+\theta }} \sup _{k \in \mathcal K_{t+s}}{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _{t+s}^g) (x_{t+s}^g(u)) \; k(u) \mathrm {d}u\right| ^2 \right] ^{\frac{1}{2}}. \end{aligned}$$
(63)

Proof

By equality (12),

$$\begin{aligned}&\frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho g' h}) \nonumber \\&\quad =-\int _0^1 \frac{\delta P_t \phi }{\delta m} (\mu _0^g)(g(u)) \; h'(u) \mathrm {d}u \nonumber \\&\quad = -\int _0^1 \left[ \frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(u)) - \int _0^1\frac{\delta P_t \phi }{\delta m} (\mu _0^g) (g(u')) \mathrm {d}u' \right] h'(u) \mathrm {d}u \nonumber \\&\quad =-\int _0^1 \left[ \frac{\delta P_t \phi }{\delta m}\right] (\mu _0^g)(g(u)) \; h'(u) \mathrm {d}u, \end{aligned}$$
(64)

where the second equality follows from the fact that h is 1-periodic, so that \(\int _0^1 h'(u) \mathrm {d}u = h(1)-h(0)=0\), and the last equality follows from (45). Now apply inequality (62) with \(\varepsilon _0=\frac{1}{2^{\frac{7}{2}+\theta }} \frac{\sqrt{t}}{C \Vert h\Vert _{{\mathcal {C}}^1} C_2(g)} \). For every \({\mathcal {G}}_0\)-measurable g and h,

$$\begin{aligned}&\left| \int _0^1 \left[ \frac{\delta P_t\phi }{\delta m}\right] (\mu _0^g) (g(u)) \;h' (u) \mathrm {d}u \right| \\&\quad \leqslant C\frac{ \Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}C_3(g) \Vert h\Vert _{{\mathcal {C}}^1}^{4+2\theta }\\&\qquad + \frac{1}{2^{\frac{7}{2}+\theta }} \; {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(u)) \frac{K_t^{g,\varepsilon _0} (u)}{\left\| K_t^{g,\varepsilon _0} \right\| _{L_\infty }}\mathrm {d}u\right| ^2 \right] ^{\frac{1}{2}}, \end{aligned}$$

where \(C_3(g)=C_1(g) C_2(g)^{3+2\theta }\). Moreover, \({\mathbb {E}}^W {\mathbb {E}}^\beta \left[ \cdot \right] ={\mathbb {E}} \left[ \cdot \vert {\mathcal {G}}_0 \right] \), since for any random variable X on \(\Omega \) and any \({\mathcal {G}}_0\)-measurable Y, \({\mathbb {E}} \left[ XY \right] = {\mathbb {E}}^0 {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ XY \right] ={\mathbb {E}}^0\left[ {\mathbb {E}}^W {\mathbb {E}}^\beta \left[ X \right] Y \right] \). Thus it follows from the latter inequality that:

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_t\phi }{\delta m}\right] (\mu _0^g) (g(u)) \;h' (u) \mathrm {d}u \right| ^2 \Big \vert \mathcal G_0 \right] \\&\quad \leqslant C \frac{\Vert \phi \Vert _{L_\infty }^2}{t^{4+2\theta }}C_3(g)^2 \Vert h\Vert _{{\mathcal {C}}^1}^{8+4\theta }\\&\qquad + \frac{2}{2^{7+2\theta }} \; {\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _t^g) (x_t^g(u)) \frac{K_t^{g,\varepsilon _0} (u)}{\left\| K_t^{g,\varepsilon _0} \right\| _{L_\infty }}\mathrm {d}u\right| ^2 \Big \vert {\mathcal {G}}_0 \right] . \end{aligned}$$

Now, consider a deterministic function g and a \(\mathcal G_s\)-measurable h, where \(s \leqslant T -t\). Then, repeating the whole argument with the \({\mathcal {G}}_s\)-measurable variables \(x_s^g\) and h instead of g and a \({\mathcal {G}}_0\)-measurable h, respectively, we get:

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_t\phi }{\delta m}\right] (\mu _s^g) (x_s^g(u)) \;h' (u) \mathrm {d}u \right| ^2 \Big \vert {\mathcal {G}}_s \right] \\&\quad \leqslant C\frac{ \Vert \phi \Vert _{L_\infty }^2}{t^{4+2\theta }}C_3(x_s^g)^2 \Vert h\Vert _{\mathcal C^1}^{8+4\theta }\\&\qquad + \frac{1}{2^{6+2\theta }} \; {\mathbb {E}} \left[ \Bigg | \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _{t+s}^{s,x_s^g}) (x_{t+s}^{s,x_s^g}(u)) \frac{K_{t+s}^{s,x_s^g,\varepsilon _s} (u)}{\Vert K_{t+s}^{s,x_s^g,\varepsilon _s} \Vert _{L_\infty }}\mathrm {d}u\Bigg |^2 \Big \vert {\mathcal {G}}_s \right] , \end{aligned}$$

where \(x_{t+s}^{s,x_s^g}(u)\) denotes the value at time \(t+s\) and at point u of the unique solution to (4) which is equal to \(x_s^g\) at time s and where \(\varepsilon _s\) is \(\mathcal G_s\)-measurable. By strong uniqueness of (4), we have the following flow property: \(x_{t+s}^{s,x_s^g}=x_{t+s}^g\) and \(\mu _{t+s}^{s,x_s^g}=\mu _{t+s}^g\). Therefore,

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_t\phi }{\delta m}\right] (\mu _s^g) (x_s^g(u)) \;h' (u) \mathrm {d}u \right| ^2 \Big \vert {\mathcal {G}}_s \right] \\&\quad \leqslant C\frac{\Vert \phi \Vert _{L_\infty }^2}{t^{4+2\theta }}C_3(x_s^g)^2 \Vert h\Vert _{{\mathcal {C}}^1}^{8+4\theta }\\&\qquad + \frac{1}{2^{6+2\theta }} \; {\mathbb {E}} \left[ \Bigg | \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _{t+s}^g) (x_{t+s}^g(u)) \frac{K_{t+s}^{s,x_s^g,\varepsilon _s} (u)}{\Vert K_{t+s}^{s,x_s^g,\varepsilon _s} \Vert _{L_\infty }}\mathrm {d}u\Bigg |^2 \Big \vert {\mathcal {G}}_s \right] . \end{aligned}$$

Note that \(u \mapsto K_{t+s}^{s,x_s^g,\varepsilon _s} (u) / \Vert K_{t+s}^{s,x_s^g,\varepsilon _s}\Vert _{L_\infty }\) belongs to \(\mathcal K_{t+s}\). Thus, taking the expectation of the latter inequality, there is \(C>0\) such that for every \({\mathcal {G}}_s\)-measurable function h satisfying \(\Vert h\Vert _{{\mathcal {C}}^1} \leqslant 4\):

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_t\phi }{\delta m}\right] (\mu _s^g) (x_s^g(u)) \;h' (u) \mathrm {d}u \right| ^2 \right] ^{\frac{1}{2}} \\&\quad \leqslant C\frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} {\mathbb {E}} \left[ C_3(x_s^g)^2 \right] ^{\frac{1}{2}} \\&\qquad + \frac{1}{2^{3+\theta }} \; \sup _{k \in {\mathcal {K}}_{t+s}} {\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta \phi }{\delta m} \right] (\mu _{t+s}^g) (x_{t+s}^g(u)) \; k(u) \;\mathrm {d}u\right| ^2 \right] ^{\frac{1}{2}}. \end{aligned}$$

In order to prove inequality (63), it remains to show that there is \(C_g\) such that \({\mathbb {E}} \left[ C_3(x_s^g)^2 \right] \leqslant C_g\). Since \(C_3(g)=C_1(g) C_2(g)^{3+2\theta }\), we have:

$$\begin{aligned} {\mathbb {E}} \left[ C_3(x_s^g)^2 \right]&={\mathbb {E}} \Big [ \Big (1+\Vert \partial _u^{(3)} x_s^g\Vert _{L_4}^2 + \Vert \partial _u^{(2)} x_s^g\Vert _{L_\infty }^{6} + \Vert \partial _u x_s^g\Vert _{L_\infty }^8 + \Vert \textstyle \frac{1}{\partial _u x_s^g} \Vert _{L_\infty }^{8} \Big ) ^2 \nonumber \\&\quad \cdot \left( 1+ \Vert \partial _u^{(3)} x_s^g\Vert _{L_8}^3 + \Vert \partial _u^{(2)} x_s^g\Vert _{L_\infty }^{12}+\Vert \partial _u x_s^g\Vert _{L_\infty }^{12}+ \Vert \textstyle \frac{1}{\partial _u x_s^g}\Vert ^{24}_{L_\infty } \right) ^{6+4\theta } \Big ]. \end{aligned}$$

We refer to (70), (71) and (72) to argue that the right-hand side is bounded by a constant which is uniform in \(s \in [0,T]\) and depends polynomially on \(\Vert g'''\Vert _{L_\infty }\), \(\Vert g''\Vert _{L_\infty }\), \(\Vert g'\Vert _{L_\infty }\) and \(\Vert \textstyle \frac{1}{g'}\Vert _{L_\infty }\). That constant is finite since g belongs to \({\mathbf {G}}^{3+\theta }\). \(\square \)

Corollary 35

Let \(\phi \), \(\theta \), f and g satisfy the same assumptions as in Proposition 34. Let \(t,s \in [0,T]\) be such that \(2t+s \leqslant T\). Let \(h:\mathbb {R}\rightarrow \mathbb {R}\) be a \({\mathcal {G}}_s\)-measurable random variable with values in \(\Delta ^1\) satisfying \(\mathbb P\)-almost surely \(\Vert h\Vert _{{\mathcal {C}}^1} \leqslant 4\). Then there exists \(C_g>0\) independent of s, t and h such that

$$\begin{aligned}&{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_{2t}\phi }{\delta m}\right] (\mu _{s}^g) (x_{s}^g(u)) \; h' (u) \mathrm {d}u \right| ^2 \right] ^{\frac{1}{2}} \leqslant C_g\frac{ \Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} \nonumber \\&\qquad + \frac{1}{2^{3+\theta }} \sup _{k \in \mathcal K_{t+s}}{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_t \phi }{\delta m} \right] (\mu _{t+s}^g) (x_{t+s}^g(u)) \; k(u) \mathrm {d}u\right| ^2 \right] ^{\frac{1}{2}}. \end{aligned}$$
(65)

Proof

We obtain the above inequality by applying (63) to \(P_t \phi \) instead of \(\phi \), noting that \(P_t (P_t \phi )=P_{2t} \phi \) and that \( \Vert P_t \phi \Vert _{L_\infty } \leqslant \Vert \phi \Vert _{L_\infty }\). \(\square \)

Fix \(t_0 \in (0,T]\). For every \(t\in (0,t_0]\), define

$$\begin{aligned} S_t:=\sup _{k \in {\mathcal {K}}_{t_0-t}}{\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_t \phi }{\delta m} \right] (\mu _{t_0-t}^g) (x_{t_0-t}^g(u)) \; k(u) \; \mathrm {d}u\right| ^2 \right] ^{\frac{1}{2}}, \end{aligned}$$

where \( {\mathcal {K}}_{t_0-t}\) is defined in Definition 33.

Proposition 36

Let \(\phi \), \(\theta \), f and g be as in Theorem 15. For every \(t\in (0,\frac{t_0}{2}]\), we have:

$$\begin{aligned} S_{2t} \leqslant C_g\frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} + \frac{1}{2^{3+\theta }}S_t. \end{aligned}$$
(66)

Proof

Fix \(t\in (0,\frac{t_0}{2}]\) and \(k \in {\mathcal {K}}_{t_0-2t}\). Hence k is a continuous 1-periodic function and a \(\mathcal G_{t_0-2t}\)-measurable random variable such that \({\mathbb {P}}\)-almost surely, \(\Vert k \Vert _{L_\infty }=\sup _{u \in [0,1]}|k(u)|=1\).

Let us denote by h the map defined for every \(u \in \mathbb {R}\) by \(h(u):= \int _0^u (k(v)-{\overline{k}}) \mathrm {d}v\), where \({\overline{k}}=\int _0^1 k(v) \mathrm {d}v\). We check that h is a \({\mathcal {G}}_{t_0-2t}\)-measurable 1-periodic \(\mathcal C^1\)-function. Moreover, \(\Vert h\Vert _{L_\infty }\leqslant 2\) and \(\Vert \partial _u h\Vert _{L_\infty } \leqslant 2\); thus \(\Vert h\Vert _{{\mathcal {C}}^1}\leqslant 4\). Therefore, the assumptions of Corollary 35 are satisfied; applying (65) with \(s=t_0-2t\) yields:

$$\begin{aligned} {\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_{2t}\phi }{\delta m}\right] (\mu _{t_0-2t}^g) (x_{t_0-2t}^g(u)) \; h' (u) \;\mathrm {d}u \right| ^2 \right] ^{\frac{1}{2}} \leqslant C_g\frac{ \Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} + \frac{1}{2^{3+\theta }}S_t. \end{aligned}$$
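The stated properties of h can be checked directly from the 1-periodicity of k and \(\Vert k\Vert _{L_\infty }=1\):

$$\begin{aligned} h(u+1)-h(u)&=\int _u^{u+1} (k(v)-{\overline{k}}) \mathrm {d}v = {\overline{k}}-{\overline{k}}=0, \\ \Vert h\Vert _{L_\infty }&\leqslant \int _0^1 |k(v)-{\overline{k}}| \mathrm {d}v \leqslant \Vert k\Vert _{L_\infty }+|{\overline{k}}| \leqslant 2, \qquad \Vert \partial _u h\Vert _{L_\infty }=\Vert k-{\overline{k}}\Vert _{L_\infty } \leqslant 2. \end{aligned}$$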

Moreover, \(h'(u)=k(u)-{\overline{k}}\) and by definition (45), \(\int _0^1 \left[ \frac{\delta P_{2t}\phi }{\delta m}\right] (\mu _{t_0-2t}^g) (x_{t_0-2t}^g(u)) \cdot {\overline{k}} \;\mathrm {d}u =0\). Thus

$$\begin{aligned} {\mathbb {E}} \left[ \left| \int _0^1 \left[ \frac{\delta P_{2t}\phi }{\delta m}\right] (\mu _{t_0-2t}^g) (x_{t_0-2t}^g(u)) \; k(u) \; \mathrm {d}u \right| ^2 \right] ^{\frac{1}{2}} \leqslant C_g\frac{ \Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} + \frac{1}{2^{3+\theta }}S_t, \end{aligned}$$

and by taking the supremum over all k in \({\mathcal {K}}_{t_0-2t}\), we get \(S_{2t} \leqslant C_g\frac{ \Vert \phi \Vert _{L_\infty }}{t^{2+\theta }} + \frac{1}{2^{3+\theta }}S_t\). \(\square \)

We can now complete the proof of Theorem 15.

Proof (Theorem 15)

Multiplying inequality (66) by \((2t)^{2+\theta }\), it follows from Proposition 36 that for every \(t\in (0,\frac{t_0}{2}]\),

$$\begin{aligned} (2t)^{2+\theta } S_{2t} \leqslant 2^{2+\theta } C_g \Vert \phi \Vert _{L_\infty } + \frac{1}{2} t^{2+\theta } S_t. \end{aligned}$$

Therefore, setting \({\mathbf {S}}:=\sup _{t\in (0,t_0]} t^{2+\theta } S_t\), we have \({\mathbf {S}} \leqslant 2^{2+\theta } C_g \Vert \phi \Vert _{L_\infty } + \frac{1}{2} {\mathbf {S}}\). Since \({\mathbf {S}}\) is finite, we obtain \({\mathbf {S}} \leqslant 2^{3+\theta } C_g \Vert \phi \Vert _{L_\infty }\). Thus for every \(t_0 \in (0,T]\), \(t_0^{2+\theta } S_{t_0} \leqslant 2^{3+\theta } C_g \Vert \phi \Vert _{L_\infty }\), where \(S_{t_0}\) involves only the deterministic quantities \(x_0^g=g\) and \(\mu _0^g\). Therefore, applying this bound to \(k/\Vert k\Vert _{L_\infty } \in {\mathcal {K}}_0\), we get that for any deterministic 1-periodic function \(k:\mathbb {R}\rightarrow \mathbb {R}\) and for every \(t\in (0,T]\), we have

$$\begin{aligned} \left| \int _0^1 \left[ \frac{\delta P_t \phi }{\delta m} \right] (\mu _0^g) (g(u)) \; k(u)\; \mathrm {d}u\right| \leqslant C_g \frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}\Vert k\Vert _{L_\infty }. \end{aligned}$$

Let \(h \in \Delta ^1\). Then \(k=\partial _u \left( \frac{h}{g'} \right) \) is a 1-periodic function and we deduce that

$$\begin{aligned}&\left| \int _0^1 \left[ \frac{\delta P_t \phi }{\delta m} \right] (\mu _0^g) (g(u))\; \partial _u \left( \frac{h}{g'} \right) (u) \; \mathrm {d}u\right| \\&\quad \leqslant C_g \frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}\left\| \partial _u \left( \frac{h}{g'} \right) \right\| _{L_\infty }\leqslant C_g \frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}\left\| h\right\| _{{\mathcal {C}}^1}, \end{aligned}$$

for a new constant \(C_g\), where the last inequality uses \(\partial _u \left( \frac{h}{g'} \right) = \frac{h'}{g'} - \frac{h\, g''}{(g')^2}\) and the fact that g belongs to \({\mathbf {G}}^{3+\theta }\). Applying equality (64) with \(\frac{h}{g'}\) in place of h, we obtain

$$\begin{aligned}&\left| \frac{\mathrm {d}}{\mathrm {d}\rho }_{\vert \rho =0} P_t \phi (\mu _0^{g+\rho h})\right| = \left| \int _0^1 \left[ \frac{\delta P_t \phi }{\delta m} \right] (\mu _0^g) (g(u))\; \partial _u \left( \frac{h}{g'} \right) (u) \; \mathrm {d}u\right| \\&\quad \leqslant C_g \frac{\Vert \phi \Vert _{L_\infty }}{t^{2+\theta }}\left\| h\right\| _{{\mathcal {C}}^1}, \end{aligned}$$

which concludes the proof of the theorem. \(\square \)