1 Introduction

We consider the parabolic PDE with space-time random potential given by

$$\begin{aligned} \partial _t u^\varepsilon (x,t)&= \partial ^2_x u^\varepsilon (x,t)+\varepsilon ^{-\beta } V\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon ^\alpha }\right) u^\varepsilon (x,t), \nonumber \\ u^\varepsilon (x,0)&= u_0(x), \end{aligned}$$
(1.1)

where \(x \in \mathbf{R},\,t \ge 0\) and \(V\) is a stationary centred random field. The homogenisation theory of equations of this type has been studied by a number of authors. The case when \(V\) is time-independent was considered in [1, 8]. The articles [4, 5] considered a situation where \(V\) is a stationary process as a function of time, but periodic in space. Purely periodic/quasiperiodic operators with large potential were also studied in [3, 9]. The case of a time-dependent Gaussian \(V\) was considered in [2], where a central limit theorem was also established.

For \(\alpha \ge 2\) and \(\beta = {\alpha \over 2}\), (1.1) was studied in [10], where it was shown that its solutions converge as \(\varepsilon \rightarrow 0\) to the solutions to

$$\begin{aligned} \partial _t u(x,t) = \partial ^2_x u(x,t) + \bar{V} u(x,t),\quad u(x,0)=u_0(x), \end{aligned}$$
(1.2)

where the constant \(\bar{V}\) is given by

$$\begin{aligned} \bar{V} = \mathop \int \limits _0^\infty \Phi (0,t) \,dt, \end{aligned}$$
(1.3)

in the case \(\alpha > 2\) and

$$\begin{aligned} \bar{V} = \mathop \int \limits _0^\infty \mathop \int \limits _{-\infty }^\infty {e^{-{x^2 \over 4t}}\over 2\sqrt{\pi t}} \Phi (x,t) \,dx \,dt, \end{aligned}$$
(1.4)

in the case \(\alpha = 2\). Here, \(\Phi (x,t) = \mathbf{E}V(0,0)V(x,t)\) is the correlation function of \(V\) which is assumed to decay sufficiently fast.

In the case \(0<\alpha <2\), it was conjectured in [10] that the correct scaling to use in order to obtain a non-trivial limit is \(\beta = 1/2+\alpha /4\), but the corresponding value of \(\bar{V}\) was not obtained. Furthermore, the techniques used there seem to break down in this case. The main result of the present article is that the conjecture does indeed hold true and that the solutions to (1.1) do again converge to those of (1.2) as \(\varepsilon \rightarrow 0\). This time, the limiting constant \(\bar{V}\) is given by

$$\begin{aligned} \bar{V} = \frac{1}{2\sqrt{\pi }} \mathop \int \limits _0^\infty \frac{\overline{\Phi }(t)}{\sqrt{t}}\,dt, \end{aligned}$$
(1.5)

where we have set \(\overline{\Phi }(s):=\int _\mathbf{R}\Phi (x,s)dx\).

Remark 1.1

One can “guess” both (1.3) and (1.5) if one admits that (1.4) holds. Indeed, (1.3) is obtained from (1.4) by replacing \(\Phi (x,t)\) by \(\Phi (\delta x, t)\) and taking the limit \(\delta \rightarrow 0\). This reflects the fact that, at the diffusive scale, the temporal oscillations of the potential are faster than the spatial oscillations. Similarly, (1.5) is obtained by replacing \(\Phi (x,t)\) with \(\delta ^{-1}\Phi (\delta ^{-1} x, t)\) and then taking the limit \(\delta \rightarrow 0\), reflecting the fact that we are in the reverse situation, where the spatial oscillations are faster. These arguments also allow one to guess the correct exponent \(\beta \) in both regimes.
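This substitution can be carried out explicitly. The following formal computation (assuming enough decay of \(\Phi \) to justify the limit) recovers (1.5): replacing \(\Phi (x,t)\) by \(\delta ^{-1}\Phi (\delta ^{-1} x, t)\) in (1.4) and changing variables \(y = \delta ^{-1}x\) gives

$$\begin{aligned} \mathop \int \limits _0^\infty \mathop \int \limits _{-\infty }^\infty {e^{-{x^2 \over 4t}}\over 2\sqrt{\pi t}}\, \delta ^{-1}\Phi (\delta ^{-1}x,t) \,dx \,dt = \mathop \int \limits _0^\infty \mathop \int \limits _{-\infty }^\infty {e^{-{\delta ^2 y^2 \over 4t}}\over 2\sqrt{\pi t}}\, \Phi (y,t) \,dy \,dt \;\longrightarrow \; \frac{1}{2\sqrt{\pi }} \mathop \int \limits _0^\infty \frac{\overline{\Phi }(t)}{\sqrt{t}}\,dt, \end{aligned}$$

as \(\delta \rightarrow 0\). Conversely, substituting \(\Phi (\delta x,t)\) and changing variables turns the Gaussian factor into an approximation of a Dirac mass at \(x=0\) (it has total mass one and concentrates as \(\delta \rightarrow 0\)), so that the integral converges to \(\int _0^\infty \Phi (0,t)\,dt\), which is (1.3).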

The techniques employed in the present article are very different from those of [10]: instead of relying on probabilistic techniques, we adapt the analytical techniques from [6]. The techniques used here also appear to be able to handle the cases treated in [10]. Both methods require quite involved estimates, and the results are not strictly equivalent. The range of application of the present method seems to be wider, but it is nevertheless useful to have several methods available for certain cases.

From now on, we will rewrite (1.1) as

$$\begin{aligned} \partial _t u^\varepsilon (x,t)=\partial ^2_x u^\varepsilon (x,t) + V_\varepsilon (x,t)u^\varepsilon (x,t) ,\quad u^\varepsilon (x,0) = u_0(x), \end{aligned}$$

where \(V_\varepsilon \) is the rescaled potential given by

$$\begin{aligned} V_\varepsilon (x,t) = \varepsilon ^{-(1/2+\alpha /4)} V\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon ^\alpha }\right) . \end{aligned}$$
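For later reference, note that stationarity of \(V\) yields the following expression for the covariance of the rescaled potential, which is used implicitly throughout the estimates below:

$$\begin{aligned} \mathbf{E}\, V_\varepsilon (x,t)\, V_\varepsilon (x',t') = \varepsilon ^{-1-\alpha /2}\, \Phi \left( \frac{x-x'}{\varepsilon },\frac{t-t'}{\varepsilon ^\alpha }\right) . \end{aligned}$$

In other words, \(V_\varepsilon \) has correlation length of order \(\varepsilon \) in space and \(\varepsilon ^\alpha \) in time, with variance of order \(\varepsilon ^{-1-\alpha /2}\).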

Before we proceed, we give a more precise description of our assumptions on the random potential \(V\).

1.1 Assumptions on the potential

Besides some regularity and integrability assumptions, our main assumption will be a sufficiently fast decay of maximal correlations for \(V\). Recall that the “maximal correlation coefficient” of \(V\), subsequently denoted by \(\varrho \), is given by the following definition where, for any given compact set \(K\subset \mathbf{R}^2\), we denote by \(\mathcal{F}_K\) the \(\sigma \)-algebra generated by \(\{V(x,t):(x,t) \in K\}\).

Definition 1.2

For any \(r > 0,\,\varrho (r)\) is the smallest value such that the bound

$$\begin{aligned} \mathbf{E}\bigl (\varphi _1(V)\varphi _2(V)\bigr ) \le \varrho (r) \sqrt{\mathbf{E}\varphi _1^2(V) \, \mathbf{E}\varphi _2^2(V)}, \end{aligned}$$

holds for any two compact sets \(K_1,\,K_2\) such that

$$\begin{aligned} d(K_1,K_2) {\mathop {=}\limits ^\mathrm{def}}\inf _{(x_1,t_1) \in K_1}\inf _{(x_2,t_2) \in K_2} (|x_1 - x_2| + |t_1-t_2|) \ge r, \end{aligned}$$

and any two random variables \(\varphi _i(V)\) such that \(\varphi _i(V)\) is \(\mathcal {F}_{K_i}\)-measurable and \(\mathbf{E}\varphi _i(V)\! =\! 0\).

Note that \(\varrho \) is a decreasing function. With this notation at hand, we then make the following assumption:

Assumption 1.3

The field \(V\) is stationary, centred, continuous, and \(\mathcal{C}^1\) in the \(x\)-variable. Furthermore,

$$\begin{aligned} \mathbf{E}\bigl (|V(x,t)|^p + |\partial _x V(x,t)|^p\bigr ) < \infty \end{aligned}$$

for every \(p > 0\).

For most of our results, we will furthermore require that the correlations of \(V\) decay sufficiently fast in the following sense:

Assumption 1.4

The maximal correlation function \(\varrho \) from Definition 1.2 satisfies \(\varrho (R) \lesssim (1+R)^{-q}\) for every \(q > 0\).

Remark 1.5

Retracing the steps of our proof, one can see that in order to obtain our main result, Theorem 1.8, we actually only need this bound for some sufficiently large \(q\). Similarly, the assumption on the \(x\)-differentiability of \(V\) is not absolutely necessary, but simplifies some of our arguments.

Let us first give a few examples of random fields satisfying our assumptions.

Example 1.6

Take a measure space \((\mathcal{M},\nu )\) with some finite measure \(\nu \) and a function \(\psi :\mathcal{M}\times \mathbf{R}^2 \rightarrow \mathbf{R}\) such that

$$\begin{aligned} \sup _{m \in \mathcal{M}} \sup _{x,t} {|\psi (m,x,t)| + |\partial _x \psi (m,x,t)| \over 1 + |x|^q + |t|^q}< \infty , \end{aligned}$$

for all \(q > 0\). Assume furthermore that \(\psi \) satisfies the centering condition

$$\begin{aligned} \mathop \int \limits _\mathbf{R}\mathop \int \limits _\mathbf{R}\mathop \int \limits _{\mathcal{M}} \psi (m,y,s)\,\nu (dm)\,dy\,ds = 0. \end{aligned}$$

Consider now a realisation \(\mu \) of the Poisson point process on \(\mathcal{M}\times \mathbf{R}^2\) with intensity measure \(\nu (dm)\,dy\,ds\) and set

$$\begin{aligned} V(x,t) = \mathop \int \limits _{\mathcal{M}} \mathop \int \limits _\mathbf{R}\mathop \int \limits _\mathbf{R}\psi (m,y-x,s-t)\,\mu (dm,dy,ds). \end{aligned}$$

Then \(V\) satisfies Assumptions 1.3 and 1.4.
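In this example, the correlation function can be computed explicitly. The following is a sketch based on Campbell's formula for Poisson point processes, which expresses the covariance of two Poisson integrals as the integral of the product of their kernels against the intensity measure:

$$\begin{aligned} \Phi (x,t) = \mathbf{E}\, V(0,0)V(x,t) = \mathop \int \limits _{\mathcal{M}} \mathop \int \limits _\mathbf{R}\mathop \int \limits _\mathbf{R}\psi (m,y,s)\,\psi (m,y-x,s-t)\,dy\,ds\,\nu (dm). \end{aligned}$$

Since \(\psi \) decays faster than any polynomial in \((y,s)\), uniformly over \(m\), and \(\nu \) is finite, this correlation function decays faster than any polynomial in \((x,t)\).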

Example 1.7

Take for \(V\) a centred Gaussian field with covariance \(\Phi \) such that

$$\begin{aligned} \sup _{x,t} {|\Phi (x,t)| + |\partial _x^2 \Phi (x,t)| \over 1 + |x|^q + |t|^q}< \infty , \end{aligned}$$

for all \(q > 0\). Then \(V\) does not quite satisfy Assumptions 1.3 and 1.4 because \(V\) and \(\partial _x V\) are not necessarily continuous. However, it is easy to check that our proofs still work in this case.

The advantage of Definition 1.2 is that it is invariant under composition with measurable functions. In particular, given a finite number of independent random fields \(\{V_1,\ldots ,V_k\}\) of the type of Examples 1.6 and 1.7 (or, more generally, any mutually independent fields satisfying Assumptions 1.3 and 1.4) and a function \(F:\mathbf{R}^k \rightarrow \mathbf{R}\) such that

  1. \(\mathbf{E}F(V_1(x,t),\ldots ,V_k(x,t)) = 0\),

  2. \(F\), together with its first partial derivatives, grows no faster than polynomially at infinity,

our results hold with \(V(x,t) = F(V_1(x,t),\ldots ,V_k(x,t))\).

1.2 Statement of the result

Consider the solution to the heat equation with constant potential

$$\begin{aligned} \partial _t u(x,t)&= \partial ^2_x u(x,t)+ \bar{V} u(x,t),\quad t\ge 0,x\in \mathbf{R}; \nonumber \\ u(x,0)&= u_0(x), \end{aligned}$$
(1.6)

where \(\bar{V}\) is defined by (1.5). Then, the main result of this article is the following convergence result:

Theorem 1.8

Let \(V\) be a random potential satisfying Assumptions 1.3 and 1.4, and let \(u_0\in \mathcal{C}^{3/2}(\mathbf{R})\) be of no more than exponential growth. Then, as \(\varepsilon \rightarrow 0\), one has \(u^\varepsilon (x,t)\rightarrow u(x,t)\) in probability, locally uniformly in \(x \in \mathbf{R}\) and \(t \ge 0\).

Remark 1.9

The precise assumption on \(u_0\) is that it belongs to the space \(\mathcal{C}^{3/2}_{e_\ell }\) for some \(\ell \in \mathbf{R}\), see Sect. 2.1 below for the definition of this space.

Remark 1.10

The fact that \(\mathbf{E}V = 0\) is of course not essential, since one can easily subtract the mean by performing a suitable rescaling of the solution.

To prove Theorem 1.8, we use the standard “trick” of introducing a corrector that “kills” the large potential \(V_\varepsilon \) to leading order. The less usual feature of this problem is that, in order to obtain the required convergence, it turns out to be advantageous to use two correctors, which ensures that the remaining terms can be brought under control. These correctors, which we denote by \(Y^\varepsilon \) and \(Z^\varepsilon \), are given by the solutions to the following inhomogeneous heat equations:

$$\begin{aligned} \partial _t Y^\varepsilon (x,t)&= \partial ^2_x Y^\varepsilon (x,t)+V_\varepsilon (x,t),\nonumber \\ \partial _t Z^\varepsilon (x,t)&= \partial ^2_x Z^\varepsilon (x,t)+ \left| \partial _x Y^\varepsilon (x,t)\right| ^2-\bar{V}_\varepsilon (t), \end{aligned}$$
(1.7)

where we have set \(\bar{V}_\varepsilon (t) = \mathbf{E}\left| \partial _x Y^\varepsilon (x,t)\right| ^2\). In both cases, we start with the flat (zero) initial condition at \(t=0\). Writing

$$\begin{aligned} v^\varepsilon (x,t)=u^\varepsilon (x,t)\exp \left[ -\left( Y^\varepsilon (x,t)+Z^\varepsilon (x,t) \right) \right] , \end{aligned}$$

Theorem 1.8 is then a consequence of the following two claims:

  1. 1.

    Both \(Y^\varepsilon \) and \(Z^\varepsilon \) converge locally uniformly to 0.

  2. 2.

    The process \(v^\varepsilon \) converges locally uniformly to the solution \(u\) of (1.6).

It is straightforward to verify that \(v^\varepsilon \) solves the equation

$$\begin{aligned} \partial _t v^\varepsilon =\partial ^2_x v^\varepsilon +\bar{V}_\varepsilon \, v^\varepsilon + 2 \left( \partial _x Y^\varepsilon +\partial _x Z^\varepsilon \right) \partial _x v^\varepsilon + \left( \left| \partial _x Z^\varepsilon \right| ^2 +2 \partial _x Z^\varepsilon \partial _x Y^\varepsilon \right) v^\varepsilon , \nonumber \\ \end{aligned}$$
(1.8)

with initial condition \(u_0\). The second claim will then essentially follow from the first (except that, due to the appearance of nonlinear terms involving the derivatives of the correctors, we need somewhat tighter control than just locally uniform convergence), combined with the fact that the function \(\bar{V}_\varepsilon (t)\) converges locally uniformly to the constant \(\bar{V}\).
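For completeness, here is a sketch of that verification. Writing \(W^\varepsilon = Y^\varepsilon + Z^\varepsilon \), so that \(u^\varepsilon = v^\varepsilon e^{W^\varepsilon }\), one has

$$\begin{aligned} \partial _t u^\varepsilon&= \bigl (\partial _t v^\varepsilon + v^\varepsilon \,\partial _t W^\varepsilon \bigr )\, e^{W^\varepsilon },\\ \partial ^2_x u^\varepsilon&= \bigl (\partial ^2_x v^\varepsilon + 2\, \partial _x v^\varepsilon \,\partial _x W^\varepsilon + v^\varepsilon (\partial _x W^\varepsilon )^2 + v^\varepsilon \,\partial ^2_x W^\varepsilon \bigr )\, e^{W^\varepsilon }. \end{aligned}$$

Inserting these identities into (1.1) and using the fact that, by (1.7), \(\partial _t W^\varepsilon = \partial ^2_x W^\varepsilon + V_\varepsilon + |\partial _x Y^\varepsilon |^2 - \bar{V}_\varepsilon \), the terms \(\partial ^2_x W^\varepsilon \), \(V_\varepsilon \) and \(|\partial _x Y^\varepsilon |^2\) cancel, and since \((\partial _x W^\varepsilon )^2 - |\partial _x Y^\varepsilon |^2 = |\partial _x Z^\varepsilon |^2 + 2\,\partial _x Z^\varepsilon \,\partial _x Y^\varepsilon \), one is left with (1.8).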

Remark 1.11

One way of “guessing” the correct forms for the correctors \(Y^\varepsilon \) and \(Z^\varepsilon \) is to note the analogy of the problem with that of building solutions to the KPZ equation. Indeed, performing the Cole-Hopf transform \(h^\varepsilon = \log u^\varepsilon \), one obtains for \(h^\varepsilon \) the equation

$$\begin{aligned} \partial _t h^\varepsilon = \partial _x^2 h^\varepsilon + \bigl (\partial _x h^\varepsilon \bigr )^2 + V_\varepsilon , \end{aligned}$$

which, in the case where \(V_\varepsilon \) is replaced by space-time white noise, was recently analysed in detail in [6]. The correctors \(Y^\varepsilon \) and \(Z^\varepsilon \) then arise naturally in this analysis as the first terms in the Wild expansion of the KPZ equation.
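Indeed, assuming \(u^\varepsilon > 0\) so that the logarithm is well-defined, the equation for \(h^\varepsilon \) follows from the elementary computation

$$\begin{aligned} \partial _t h^\varepsilon = \frac{\partial _t u^\varepsilon }{u^\varepsilon } = \frac{\partial ^2_x u^\varepsilon }{u^\varepsilon } + V_\varepsilon = \partial _x^2 h^\varepsilon + \bigl (\partial _x h^\varepsilon \bigr )^2 + V_\varepsilon , \end{aligned}$$

where the last step uses the identity \(\partial ^2_x u^\varepsilon = \bigl (\partial ^2_x h^\varepsilon + (\partial _x h^\varepsilon )^2\bigr )\, u^\varepsilon \).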

This also suggests that it would be possible to find a diverging sequence of constants \(C_\varepsilon \) such that the solutions to

$$\begin{aligned} \partial _t u^\varepsilon (x,t)=\partial ^2_x u^\varepsilon (x,t)+\varepsilon ^{-{1+\alpha \over 2}} V\left( \frac{x}{\varepsilon },\frac{t}{\varepsilon ^\alpha } \right) u^\varepsilon (x,t) - C_\varepsilon u^\varepsilon (x,t), \end{aligned}$$

converge in law to the solutions to the multiplicative stochastic heat equation driven by space-time white noise. In the non-Gaussian case, this still seems to be out of reach at the moment, although some recent progress can be found in [7].

The proof of Theorem 1.8 now goes as follows. In a first step, which is rather long and technical and constitutes Sect. 2 below, we obtain sharp a priori bounds for \(Y^\varepsilon \) and \(Z^\varepsilon \) in various norms. In a second step, which is performed in Sect. 3, we then combine these estimates in order to show that the only terms in (1.8) that matter are indeed the first two terms on the right hand side.

Remark 1.12

Throughout this article, the notation \(X \lesssim Y\) will be equivalent to the notation \(X \le C Y\) for some constant \(C\) independent of \(\varepsilon \).

2 Estimates of \(Y^\varepsilon \) and \(Z^\varepsilon \)

In this section, we shall prove that both \(Y^\varepsilon \) and \(Z^\varepsilon \) tend to zero as \(\varepsilon \rightarrow 0\), and establish further estimates on those sequences of functions which will be needed for taking the limit of the sequence \(v^\varepsilon \). But before doing so, let us first introduce some technical tools which will be needed both in this section and in the last one.

2.1 Weighted Hölder continuous spaces of functions and the heat semigroup

First of all, we define the notion of an admissible weight \(w\) as a function \(w:\mathbf{R}\rightarrow \mathbf{R}_+\) such that there exists a constant \(C\ge 1\) with

$$\begin{aligned} C^{-1} \le {w(x) \over w(y)} \le C, \end{aligned}$$
(2.1)

for all pairs \((x,y)\) with \(|x-y| \le 1\). Given such an admissible weight \(w\), we then define the space \(\mathcal{C}_w\) as the closure of \(\mathcal{C}_0^\infty \) under the norm

$$\begin{aligned} \Vert f\Vert _w= \Vert f\Vert _{0,w} = \sup _{x \in \mathbf{R}} { |f(x)| \over w(x)}. \end{aligned}$$

We also define \(\mathcal{C}^\beta _w\) for \(\beta \in (0,1)\) as the closure of \(\mathcal{C}_0^\infty \) under the norm

$$\begin{aligned} \Vert f\Vert _{\beta ,w} = \Vert f\Vert _w+ \sup _{|x-y| \le 1} {|f(x)-f(y)| \over w(x) |x-y|^\beta }. \end{aligned}$$

Similarly, for \(\beta \ge 1\), we define \(\mathcal{C}^\beta _w\) recursively as the closure of \(\mathcal{C}_0^\infty \) under the norm

$$\begin{aligned} \Vert f\Vert _{\beta ,w} = \Vert f\Vert _w+ \Vert f'\Vert _{\beta -1,w}. \end{aligned}$$

It is clear that, if \(w_1\) and \(w_2\) are two admissible weights, then so is \(w= w_1\,w_2\). Furthermore, it is a straightforward exercise to use the Leibniz rule to verify that there exists a constant \(C\) such that the bound

$$\begin{aligned} \Vert f_1 f_2\Vert _{\beta ,w} \le C \Vert f_1\Vert _{\beta _1,w_1} \Vert f_2\Vert _{\beta _2,w_2}, \end{aligned}$$
(2.2)

holds for every \(f_i \in \mathcal{C}_{w_i}^{\beta _i}\), provided that \(\beta \le \beta _1 \wedge \beta _2\).

We now show that a similar inequality still holds if one of the two Hölder exponents is negative. For \(\beta \in (-1,0)\), we can indeed define weighted spaces of negative “Hölder regularity” by postulating that \(\mathcal{C}^{\beta }_w\) is the closure of \(\mathcal{C}_0^\infty \) under the norm

$$\begin{aligned} \Vert f\Vert _{\beta ,w} = \sup _{|x-y| \le 1} {|\int _x^y f(z)\,dz| \over w(x) |x-y|^{\beta +1}}. \end{aligned}$$

In other words, we essentially want the antiderivative of \(f\) to belong to \(\mathcal{C}^{\beta +1}_w\), except that we do not worry about its growth.

With these notations at hand, we then have the bound:

Proposition 2.1

Let \(w_1\) and \(w_2\) be two admissible weights and let \(\beta _1 < 0 < \beta _2\) be such that \(\beta _2 > |\beta _1|\). Then, the bound (2.2) holds with \(\beta = \beta _1\).

Proof

We only need to show the bound for smooth and compactly supported elements \(f_1\) and \(f_2\), the general case then follows by density. Denote now by \(F_1\) an antiderivative for \(f_1\), so that

$$\begin{aligned} \int _x^y f_1(z)f_2(z)\,dz =\int _x^y f_2(z)\,dF_1(z), \end{aligned}$$

where the right hand side is a Riemann–Stieltjes integral. For any interval \(I\subset \mathbf{R}\), we now write

It then follows from Young’s inequality [12] that there exists a constant \(C\) depending only on the precise values of the \(\beta _i\) and on the constants appearing in the definition (2.1) of admissibility for the weights \(w_i\), such that

which is precisely the requested bound. \(\square \)

There are two types of admissible weights that will play a crucial role in the sequel:

$$\begin{aligned} e_\ell (x) {\mathop {=}\limits ^\mathrm{def}}\exp (- \ell |x|),\quad p_\kappa (x) {\mathop {=}\limits ^\mathrm{def}}1 + |x|^\kappa , \end{aligned}$$

where the exponent \(\kappa \) will always be positive, but \(\ell \) could have any sign. One has of course the identity

$$\begin{aligned} e_\ell \cdot e_m = e_{\ell +m}. \end{aligned}$$
(2.3)

Furthermore, it is straightforward to verify that there exists a constant \(C\) such that the bound

$$\begin{aligned} p_\kappa (x) e_\ell (x) \le C \ell ^{-\kappa }, \end{aligned}$$
(2.4)

holds uniformly in \(x \in \mathbf{R},\,\kappa \in (0,1]\), and \(\ell \in (0,1]\).
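A sketch of the proof of (2.4): for \(x \ge 0\), the function \(x \mapsto x^\kappa e^{-\ell x}\) attains its maximum at \(x = \kappa /\ell \), so that

$$\begin{aligned} p_\kappa (x)\, e_\ell (x) \le 1 + \sup _{x \ge 0} x^\kappa e^{-\ell x} = 1 + \Bigl (\frac{\kappa }{e\,\ell }\Bigr )^{\!\kappa } \le 2\, \ell ^{-\kappa }, \end{aligned}$$

where the last inequality uses \(\ell ^{-\kappa } \ge 1\) and \((\kappa /e)^\kappa \le 1\), both valid for \(\kappa , \ell \in (0,1]\).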

Finally, we have the following regularising property of the heat semigroup:

Proposition 2.2

Let \(\beta \in (-1,\infty )\), let \(\gamma > \beta \), and let \(\ell , \kappa \in \mathbf{R}\). Then, for every \(t>0\), the operator \(P_t\) extends to a bounded operator from \(\mathcal{C}^\beta _{e_\ell }\) to \(\mathcal{C}^\gamma _{e_\ell }\) and from \(\mathcal{C}^\beta _{p_\kappa }\) to \(\mathcal{C}^\gamma _{p_\kappa }\). Furthermore, for every \(\ell _0 > 0\) and \(\kappa _0 > 0\), there exists a constant \(C\) such that the bounds

$$\begin{aligned} \Vert P_t f\Vert _{\gamma ,e_\ell } \le C t^{-{\gamma - \beta \over 2}} \Vert f\Vert _{\beta ,e_\ell },\quad \Vert P_t g\Vert _{\gamma ,p_\kappa } \le C t^{-{\gamma - \beta \over 2}} \Vert g\Vert _{\beta ,p_\kappa }, \end{aligned}$$

hold for every \(f \in \mathcal{C}_{e_\ell }^\beta \), every \(g \in \mathcal{C}_{p_\kappa }^\beta \), every \(t \in (0,1]\), every \(|\ell | \le \ell _0\), and every \(|\kappa | \le \kappa _0\).

Proof

The proof is standard: one first verifies that the semigroup preserves these norms, so that the case \(\gamma = \beta \) is covered. The case of integer values of \(\gamma \) can easily be verified by an explicit calculation. The remaining values then follow by interpolation.\(\square \)

We close this section with a quantitative version of Kolmogorov’s continuity criterion, which will be used a couple of times in this paper.

Lemma 2.3

Let \(R\) be a compact subset of \(\mathbf{R}^d\) (for us \(d\) will be either 1 or 2) and, for each \(\varepsilon >0\), let \(\{\xi ^\varepsilon _u,\, u\in R\}\) be a stochastic process such that, for some positive constants \(C,\,\gamma ,\,\delta \) and some \(\varrho \in \mathbf{R}\), one has, for all \(u,v\in R\),

$$\begin{aligned} \mathbf{E}\left[ |\xi ^\varepsilon _u-\xi ^\varepsilon _v|^\gamma \right] \le C\varepsilon ^\varrho |u-v|^{d+\delta }. \end{aligned}$$

Then there exists a continuous modification of \(\xi ^\varepsilon \) (which, by a slight abuse of notation, we still denote by \(\xi ^\varepsilon \)), and for all \(0\le \beta <\delta /\gamma \) and \(\varepsilon >0\), there exists a positive random variable \(\zeta _{\beta ,\varepsilon }\) such that

$$\begin{aligned} \mathbf{E}\left[ (\zeta _{\beta ,\varepsilon })^\gamma \right] \le C_\beta \varepsilon ^{\varrho }, \end{aligned}$$

where \(C_\beta \) depends only upon \(C,\,\beta ,\,d,\,\gamma ,\,\delta \) and the diameter of \(R\), and

$$\begin{aligned} |\xi ^\varepsilon _u-\xi ^\varepsilon _v|\le \zeta _{\beta ,\varepsilon }|u-v|^\beta , \end{aligned}$$

for all \(u,v\in R\) a.s.

Proof

The result follows readily from an application of Theorem 0.2.1 in [11] to the process \(\varepsilon ^{-\varrho /\gamma }\xi ^\varepsilon \). The claim about the constant \(C_\beta \) can be easily deduced from the proof of that Theorem. \(\square \)

2.2 Bounds and convergence of \(Y^\varepsilon \)

The main results of this section are Lemma 2.12 and Corollary 2.13 below. For any integer \(k\ge 2\), define the \(k\)-point correlation function \(\Phi ^{(k)}\) for \(x, t \in \mathbf{R}^k\) by

$$\begin{aligned} \Phi ^{(k)}(x,t) = \mathbf{E}\bigl (V(x_1,t_1)\ldots V(x_k, t_k)\bigr ). \end{aligned}$$

(In particular, \(\Phi ^{(2)}(x_1,t_1,x_2,t_2) = \Phi (x_1-x_2,t_1-t_2)\), where \(\Phi \) is the correlation function of \(V\) defined above.) With these notations at hand, we have the following bound which will prove to be useful:

Lemma 2.4

The function \(\Psi ^{(4)}\) given by

$$\begin{aligned} \Psi ^{(4)}(x,t) = \Phi ^{(4)}(x,t) - \Phi (x_1-x_2,t_1-t_2)\Phi (x_3-x_4,t_3-t_4), \end{aligned}$$

satisfies the bound

$$\begin{aligned} |\Psi ^{(4)}(x,t) |&\le \eta (|x_1-x_3|+|t_1-t_3|)\eta (|x_2-x_4|+|t_2-t_4|) \nonumber \\&+ \eta (|x_1-x_4|+|t_1-t_4|)\eta (|x_2-x_3|+|t_2-t_3|), \end{aligned}$$
(2.5)

where the function \(\eta :\mathbf{R}_+ \rightarrow \mathbf{R}_+\) is defined by

$$\begin{aligned} \eta (r)=\sqrt{K\varrho (r/3)},\quad \text { with }\ K=4\big (\Vert V(x,t)\Vert _{2}\Vert V^3(x,t)\Vert _{2}+ \Vert V^2(x,t)\Vert ^2_{2}\big ), \end{aligned}$$

where we write \(\Vert \cdot \Vert _2\) for the \(L^2(\Omega )\) norm of a real-valued random variable.

Remark 2.5

In the Gaussian case, one has the identity

$$\begin{aligned} \Psi ^{(4)}(x,t)&= \Phi (x_1-x_3,t_1-t_3)\Phi (x_2-x_4,t_2-t_4) \\&+\Phi (x_1-x_4,t_1-t_4)\Phi (x_2-x_3,t_2-t_3), \end{aligned}$$

so that the bound (2.5) follows from the fact that \(\varrho \) dominates the decay of the correlation function \(\Phi \).

Proof

For the sake of brevity denote \(\xi _j=(x_j,t_j)\). We set

$$\begin{aligned} R_1=\max \limits _{1\le i\le 4}\mathrm{dist}\left( \xi _i,\bigcup \limits _{j\not =i}\{\xi _j\}\right) ,\quad R_2=\max \mathrm{dist}\left( \{\xi _{i_1},\xi _{i_2}\}, \{\xi _{i_3},\xi _{i_4}\}\right) , \end{aligned}$$

where the second maximum is taken over all permutations \(\{i_1,i_2,i_3,i_4\}\) of \(\{1,2,3,4\}\).

Consider first the case \(R_1\ge R_2\). Without loss of generality we can assume that \(R_1=\mathrm{dist}(\xi _1,\bigcup \limits _{j\not =1}\{\xi _j\})\). It is easily seen that, in the case under consideration,

$$\begin{aligned} \mathrm{dist}(\xi _i, \xi _j)\le 3R_1,\quad i,\,j=1,\,2,\,3,\,4. \end{aligned}$$
(2.6)

Then the functions \(\Phi ^{(4)}\) and \(\Phi (\xi _1-\xi _2)\Phi (\xi _3-\xi _4)\) admit the following upper bounds:

$$\begin{aligned} |\Phi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)|&= |\mathbf{E}(V(\xi _1)V(\xi _2)V (\xi _3)V(\xi _4))| \\&\le \varrho (R_1)\Vert V(\xi _1)\Vert _{2}\Vert V(\xi _2)V(\xi _3)V(\xi _4)\Vert _{2} \\&\le \varrho (R_1)\Vert V(\xi )\Vert _{2}\Vert (V(\xi ))^3\Vert _{2}, \end{aligned}$$

and

$$\begin{aligned} \Phi (\xi _1-\xi _2)\Phi (\xi _3-\xi _4)\le \varrho (R_1)\Vert V\Vert ^2_{2} \,\Vert V\Vert ^2_{2} \end{aligned}$$

Therefore,

$$\begin{aligned} |\Psi ^{(4)}(x,t)|\le \varrho (R_1) \left( \Vert V(\xi )\Vert _{2}\Vert (V(\xi ))^3\Vert _{2}+\Vert V\Vert ^4_{2}\right) . \end{aligned}$$

From (2.6) and the fact that \(\varrho \) is a decreasing function we derive

$$\begin{aligned} K\varrho (R_1)=\eta (3R_1)\eta (3R_1)\le \eta (|\xi _1-\xi _3|)\eta (|\xi _2-\xi _4|). \end{aligned}$$

This yields the desired inequality.

Assume now that \(R_1<R_2\) and \(\mathrm{dist}(\{\xi _1,\xi _2\}, \{\xi _3,\xi _4\})= R_2\). In this case

$$\begin{aligned} \mathrm{dist}(\xi _1,\xi _2)<R_2 \quad \text {and}\ \ \mathrm{dist}(\xi _3,\xi _4)<R_2. \end{aligned}$$
(2.7)

Indeed, if we assume that \(\mathrm{dist}(\xi _1,\xi _2)\ge R_2\), then, since \(\mathrm{dist}(\{\xi _1,\xi _2\}, \{\xi _3,\xi _4\})= R_2\) already implies \(\mathrm{dist}(\xi _1,\xi _3)\ge R_2\) and \(\mathrm{dist}(\xi _1,\xi _4)\ge R_2\), we obtain \(\mathrm{dist}(\xi _1,\{\xi _2,\xi _3,\xi _4\})\ge R_2\) and, thus, \(R_1\ge R_2\), which contradicts our assumption. We have

$$\begin{aligned} \big |\Psi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)\big |&= \big |\Phi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)-\Phi (\xi _1-\xi _2) \Phi (\xi _3-\xi _4)\big |\nonumber \\&= \big |\mathbf{E}\big ([V(\xi _1)V(\xi _2)-\mathbf{E}(V(\xi _1)V(\xi _2))] [V(\xi _3)V(\xi _4) \nonumber \\&-\mathbf{E}(V(\xi _3)V(\xi _4))]\big )\big |\nonumber \\&\le \varrho (R_2)\Vert (V(\xi ))^2\Vert ^2_{2}. \end{aligned}$$
(2.8)

In view of (2.7), \(\mathrm{dist}(\xi _1,\xi _3)\le 3R_2\) and \(\mathrm{dist}(\xi _2,\xi _4)\le 3R_2\). Therefore,

$$\begin{aligned} K\varrho (R_2)\le \eta (|\xi _1-\xi _3|)\eta (|\xi _2-\xi _4|), \end{aligned}$$

and the desired inequality follows.

It remains to consider the case \(R_1<R_2\) and \(\mathrm{dist}(\{\xi _1,\xi _3\}, \{\xi _2,\xi _4\})= R_2\); the case \(\mathrm{dist}(\{\xi _1,\xi _4\}, \{\xi _2,\xi _3\})= R_2\) can be addressed in the same way. In this case

$$\begin{aligned} \mathrm{dist}(\xi _1,\xi _2)\ge R_2,\quad \mathrm{dist}(\xi _1,\xi _4)\ge R_2, \quad \mathrm{dist}(\xi _1,\xi _3)< R_2. \end{aligned}$$

Therefore, \(\mathrm{dist}(\xi _1,\{\xi _2,\xi _3,\xi _4\})= \mathrm{dist}(\xi _1,\xi _3)\), and we have

$$\begin{aligned} |\Phi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)|&\le \varrho (|\xi _1-\xi _3|) \Vert V(\xi )\Vert _{2}\Vert (V(\xi ))^3\Vert _{2} \\ |\Phi (\xi _1-\xi _2)\Phi (\xi _3-\xi _4)|&\le \varrho (R_2)\Vert V\Vert ^4_{2}\le \varrho (|\xi _1-\xi _3|) \Vert V\Vert ^4_{2}. \end{aligned}$$

This yields

$$\begin{aligned} |\Psi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)|\le \varrho (|\xi _1-\xi _3|) \big (\Vert V(\xi )\Vert _{2}\Vert (V(\xi ))^3\Vert _{2}+ \Vert V\Vert ^4_{2}\big ) \end{aligned}$$

In the same way one gets

$$\begin{aligned} |\Psi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)|\le \varrho (|\xi _2-\xi _4|) \big (\Vert V(\xi )\Vert _{2}\Vert (V(\xi ))^3\Vert _{2}+ \Vert V\Vert ^4_{2}\big ) \end{aligned}$$

From the last two estimates we obtain

$$\begin{aligned} |\Psi ^{(4)}(\xi _1,\xi _2,\xi _3,\xi _4)|&\le \sqrt{\varrho (|\xi _1-\xi _3|)} \sqrt{\varrho (|\xi _2-\xi _4|)} \big (\Vert V(\xi )\Vert _{2}\Vert (V(\xi ))^3\Vert _{2}+ \Vert V\Vert ^4_{2}\big )\\&\le \eta (|\xi _1-\xi _3|)\eta (|\xi _2-\xi _4|). \end{aligned}$$

This implies the desired inequality and completes the proof of Lemma 2.4. \(\square \)

In order to prove our next result, we will need the following small lemma:

Lemma 2.6

Let \(F:\mathbf{R}_+ \rightarrow \mathbf{R}_+\) be an increasing function with \(F(r) \le r^q\). Then, \(\int _0^\infty (1+r)^{-p} dF(r) < \infty \) as soon as \(p > q > 0\).

Proof

We have \(\int _0^\infty (1+r)^{-p} dF(r) \le 1 + \int _1^\infty r^{-p} dF(r)\), so we only need to bound the latter. We write

$$\begin{aligned} \mathop \int \limits _1^\infty r^{-p} dF(r) \le \sum _{k \ge 0} \mathop \int \limits _{2^k}^{2^{k+1}}r^{-p}dF(r) \le \sum _{k \ge 0}2^{-pk} \mathop \int \limits _{2^k}^{2^{k+1}}dF(r) \le \sum _{k \ge 0}2^{-pk} 2^{q(k+1)}. \end{aligned}$$

This expression is summable as soon as \(p > q\), thus yielding the claim. \(\square \)

Lemma 2.7

Fix \(t > 0\) and let \(\varphi :\mathbf{R}\times \mathbf{R}_+\rightarrow \mathbf{R}_+\) be a smooth function with compact support. Define \(\varphi _\delta (x,t)=\delta ^{-3}\varphi \left( \frac{x}{\delta },\frac{t}{\delta ^2}\right) \). Then, for all \(p\ge 1,\,\varepsilon , \delta >0\), one has the bound

$$\begin{aligned}&\left[ \mathbf{E}\left| \mathop \int \limits _0^t\mathop \int \limits _\mathbf{R}\varphi _\delta (x-y,t-s)V_\varepsilon (y,s)\,dy\,ds\right| ^p\right] ^{1/p}\\&\quad \le C_\varphi \bigl (\varepsilon ^{-1/2-\alpha /4}\wedge \delta ^{-1/2}\varepsilon ^{-\alpha /4}\wedge \delta ^{-3/2}\varepsilon ^{\alpha /4} \bigr ), \end{aligned}$$

where \(C_\varphi \) depends on \(p\), on the supremum and the support of \(\varphi \), and on the bound of Assumption 1.3.

Proof

We consider separately the cases \(\delta >\max (\varepsilon ,\varepsilon ^{\alpha }),\,\delta <\min (\varepsilon ,\varepsilon ^\alpha )\), as well as \(\min (\varepsilon ,\varepsilon ^\alpha )\le \delta \le \max (\varepsilon ,\varepsilon ^{\alpha })\).

Assume first that \(\delta >\max (\varepsilon ,\varepsilon ^{\alpha })\). Without loss of generality we also assume that \(p\) is even, that is \(p=2k\) with \(k\in {\mathbb {N}}\). Then

$$\begin{aligned} {\mathcal {J}}_p^{\varepsilon ,\delta }:&= \mathbf{E}\left| \mathop \int \limits _0^t\mathop \int \limits _\mathbf{R}\varphi _\delta (x-y,t-s)V_\varepsilon (y,s)\,dy\,ds\right| ^p\\&= \mathop \int \limits _0^t\!\!\dots \!\mathop \int \limits _0^t\mathop \int \limits _\mathbf{R}\!\!\dots \!\mathop \int \limits _\mathbf{R}\prod \limits _{i=1}^{2k}\varphi _\delta (x-y_i, t-s_i) \mathbf{E}\left( \prod \limits _{i=1}^{2k}V_\varepsilon (y_i,s_i)\right) d\vec {y}d\vec {s}, \end{aligned}$$

where \(d\vec y=dy_1\dots dy_{2k}\) and \(d\vec {s}=ds_1\dots ds_{2k}\). Changing the variables \(\tilde{y}_i=\varepsilon ^{-1}y_i\) and \(\tilde{s}_i=\varepsilon ^{-\alpha }s_i\), and considering the definition of \(\varphi _\delta \) and \(V_\varepsilon \), we obtain

$$\begin{aligned} {\mathcal {J}}_p^{\varepsilon ,\delta }\!=\!\delta ^{-6k}\varepsilon ^{-k-\frac{\alpha k}{2}}\varepsilon ^{2k+2\alpha k} \int \limits _{[0,t/\varepsilon ^\alpha ]^{2k}}\int \limits _{{\mathbf {R}}^{2k}} \prod \limits _{i=1}^{2k}\varphi \Big (\frac{x\!-\!\varepsilon \tilde{y}_i}{\delta },\frac{t\!-\!\varepsilon ^\alpha \tilde{s}_i}{\delta ^2}\Big ) \mathbf{E}\left( \prod \limits _{i=1}^{2k}V(\tilde{y}_i,\tilde{s}_i)\right) d\vec {{\tilde{y}}}d\vec {{\tilde{s}}}. \end{aligned}$$

The support of the function \(\prod \limits _{i=1}^{2k}\varphi \big (\frac{x-\varepsilon \tilde{y}_i}{\delta },\frac{t-\varepsilon ^\alpha \tilde{s}_i}{\delta ^2}\big )\) is contained in the rectangle \((x-k\frac{\delta }{\varepsilon }s_\varphi ,x+k\frac{\delta }{\varepsilon }s_\varphi )^{2k} \times (t-k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi ,t+ k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}\), where \(s_\varphi \) is the diameter of the support of \(\varphi \). Denote \(\Pi ^1_{\delta ,\varepsilon }=(0,2k\frac{\delta }{\varepsilon }s_\varphi )^{2k}\) and \(\Pi ^2_{\delta ,\varepsilon }=(0,2k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}\). Since \(V(y,s)\) is stationary, we have

$$\begin{aligned} {\mathcal {J}}_p^{\varepsilon ,\delta }\le \delta ^{-6k}\varepsilon ^{-k-\frac{\alpha k}{2}}\varepsilon ^{2k+2\alpha k} \Vert \varphi \Vert _{C}^{2k}\int \limits _{(0,2k\frac{\delta }{\varepsilon }s_\varphi )^{2k}} \int \limits _{(0,2k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}} \left| \mathbf{E}\left( \prod \limits _{i=1}^{2k}V(\tilde{y}_i,\tilde{s}_i)\right) \right| d\vec {{\tilde{y}}}d\vec {{\tilde{s}}}. \nonumber \\ \end{aligned}$$
(2.9)

For any \(R\ge 0\) we introduce a subset of \({\mathbf {R}}^{4k}\)

$$\begin{aligned}&{\mathcal {V}}_{\delta ,\varepsilon }(R)\\&\quad =\left\{ \!(\tilde{y},\tilde{s})\!\in \! \Pi ^1_{\delta ,\varepsilon }\!\times \!\Pi ^2_{\delta ,\varepsilon }\,: \max \limits _{1\le j\le 2k}\mathrm{dist}(\tilde{y}_j,\bigcup \limits _{i\not =j}\tilde{y}_i)\!\le \! R, \max \limits _{1\le j\le 2k}\mathrm{dist}(\tilde{s}_j,\bigcup \limits _{i\not =j}\tilde{s}_i)\!\le \! R\!\right\} \!,\qquad \end{aligned}$$

and denote by \(|{\mathcal {V}}_{\delta ,\varepsilon }|(R)\) the Lebesgue measure of this set. It is easy to check that the set \({\mathcal {V}}_{\delta ,\varepsilon }(0)\) is the union of sets of the form

$$\begin{aligned} \left\{ (\tilde{y},\tilde{s})\in \Pi ^1_{\delta ,\varepsilon }\times \Pi ^2_{\delta ,\varepsilon }\,:\, \tilde{y}_{i_1}=\tilde{y}_{i_2}, \dots , \tilde{y}_{i_{2k-1}}=\tilde{y}_{i_{2k}},\ \tilde{s}_{j_1}=\tilde{s}_{j_2}, \dots , \tilde{s}_{j_{2k-1}}=\tilde{s}_{j_{2k}}\right\} \end{aligned}$$

with \(i_l\not =i_m\) and \(j_l\not =j_m\) if \(l\not =m\), that is, \({\mathcal {V}}_{\delta ,\varepsilon }(0)\) is the union of a finite number of subsets of \(2k\)-dimensional planes in \({\mathbf {R}}^{4k}\). The \(2k\)-dimensional measure of this set satisfies the following upper bound:

$$\begin{aligned} |{\mathcal {V}}_{\delta ,\varepsilon }(0)|_{2k}\le C(k)\Big (\frac{\delta }{\varepsilon }\Big )^{k} \Big (\frac{\delta ^2}{\varepsilon ^\alpha }\Big )^{k}. \end{aligned}$$

Therefore,

$$\begin{aligned} |{\mathcal {V}}_{\delta ,\varepsilon }|(R)\lesssim \Big (\frac{\delta }{\varepsilon }\Big )^{k} \Big (\frac{\delta ^2}{\varepsilon ^\alpha }\Big )^{k} R^{2k}. \end{aligned}$$
(2.10)

For each \((\tilde{y}, \tilde{s})\in \mathcal {V}_{\delta ,\varepsilon }(R)\) we have

$$\begin{aligned} \Big |\mathbf{E}\Big (\prod \limits _{i=1}^{2k}V(\tilde{y}_i,\tilde{s}_i)\Big )\Big | \le \varrho (R)C_1(k)\Vert V\Vert _{L^2(\Omega )}\Vert V^{2k-1}\Vert _{L^2(\Omega )}. \end{aligned}$$
(2.11)

Combining (2.9), (2.10) and (2.11) yields

$$\begin{aligned} {\mathcal {J}}_p^{\varepsilon ,\delta }\lesssim \delta ^{-6k}\varepsilon ^{-k-\frac{\alpha k}{2}}\varepsilon ^{2k+2\alpha k} \mathop \int \limits _0^\infty \varrho (R)\,d|{\mathcal {V}}_{\delta ,\varepsilon }|(R) \lesssim \delta ^{-3k}\varepsilon ^{\frac{\alpha k}{2}}. \end{aligned}$$

Here, the last inequality holds due to Assumption 1.4, combined with (2.10) and Lemma 2.6. Therefore, recalling that \(p=2k\), we have the bound

$$\begin{aligned} \left( {\mathcal {J}}_p^{\varepsilon ,\delta }\right) ^{1/p}\lesssim \delta ^{-3/2}\varepsilon ^{\alpha /4}. \end{aligned}$$
(2.12)

In the case \(\delta <\min (\varepsilon ,\varepsilon ^\alpha )\) we have

$$\begin{aligned} {\mathcal {J}}_p^{\varepsilon ,\delta }&= \int \nolimits _{[0,t]^{2k}}\int \nolimits _{{\mathbf {R}}^{2k}}\prod \limits _{i=1}^{2k}\varphi _\delta (x-y_i, t-s_i) {\mathbf {E}}\left( \prod \limits _{i=1}^{2k}V_\varepsilon (y_i,s_i)\right) d\vec {y}\,d\vec {s}\\&\le \int \nolimits _{[0,t]^{2k}}\int \nolimits _{{\mathbf {R}}^{2k}}\prod \limits _{i=1}^{2k}|\varphi _\delta (x-y_i, t-s_i)|\, \left| {\mathbf {E}}\left( \prod \limits _{i=1}^{2k}V_\varepsilon (y_i,s_i)\right) \right| d\vec {y}\,d\vec {s}\\&\le {\mathbf {E}}\big (V_\varepsilon (y_1,s_1)^{2k}\big )\mathop \int \nolimits _{[0,t]^{2k}}\int \nolimits _{{\mathbf {R}}^{2k}} \prod \limits _{i=1}^{2k}|\varphi _\delta (x-y_i, t-s_i)|d\vec {y}\,d\vec {s}\\&\lesssim \varepsilon ^{-k-\frac{\alpha k}{2}}\Vert \varphi \Vert _{L^1}^{2k}, \end{aligned}$$

so that

$$\begin{aligned} ({\mathcal {J}}_p^{\varepsilon ,\delta })^{1/p}\lesssim \varepsilon ^{-1/2-\alpha /4}. \end{aligned}$$
(2.13)

Finally, if we are in the regime \(\varepsilon <\delta <\varepsilon ^{\alpha /2}\), then

$$\begin{aligned} {\mathcal {J}}_p^{\varepsilon ,\delta }&= \delta ^{-6k}\varepsilon ^{k+{3\alpha \over 2} k} \int \limits _{[0,t/\varepsilon ^\alpha ]^{2k}}\int \limits _{{\mathbf {R}}^{2k}} \prod \limits _{i=1}^{2k}\varphi \left( \frac{x\!-\!\varepsilon \tilde{y}_i}{\delta },\frac{t\!-\!\varepsilon ^\alpha \tilde{s}_i}{\delta ^2}\right) \mathbf{E}\left( \prod \limits _{i=1}^{2k}V(\tilde{y}_i,\tilde{s}_i)\right) d\vec {{\tilde{y}}}\,d\vec {{\tilde{s}}}\\&\le \delta ^{-6k}\varepsilon ^{k+3\alpha k/2}\, \Vert \varphi \Vert _{L^\infty }^{2k} \int \limits _{(0,2k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}} \int \limits _{(0,2k\frac{\delta }{\varepsilon }s_\varphi )^{2k}} \left| \mathbf{E}\left( \prod \limits _{i=1}^{2k}V(\tilde{y}_i,\tilde{s}_i)\right) \right| d\vec {{\tilde{y}}}\,d\vec {{\tilde{s}}}\\&\lesssim \delta ^{-6k}\varepsilon ^{k+3\alpha k/2}\, \Vert \varphi \Vert _{L^\infty }^{2k} \int \limits _{(0,2k\frac{\delta ^2}{\varepsilon ^\alpha }s_\varphi )^{2k}}\!\!\!\!\!\! \Vert V\Vert _{L^2(\Omega )}\Vert V^{2k-1}\Vert _{L^2(\Omega )}\,d\vec {{\tilde{s}}}\ \Bigg (\frac{\delta }{\varepsilon }\Bigg )^k\int \limits _0^\infty \varrho (R)R^{k-1}\,dR\\&\lesssim \delta ^{-k}\varepsilon ^{-\alpha k/2}. \end{aligned}$$

Hence,

$$\begin{aligned} ({\mathcal {J}}_p^{\varepsilon ,\delta })^{1/p}\lesssim \delta ^{-1/2}\varepsilon ^{-\alpha /4} \end{aligned}$$
(2.14)

so that, combining (2.12), (2.13) and (2.14), the desired estimate holds. \(\square \)
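As a consistency check (a numerical aside of ours, tracking exponents only and dropping constants), the bounds (2.12), (2.13) and (2.14) agree at the regime boundaries \(\delta =\varepsilon ^{\alpha /2}\) and \(\delta =\varepsilon \), so the combined estimate is continuous in \(\delta \):

```python
# The three regime bounds with constants dropped (exponents only);
# the names b1-b3 are ours, for illustration.
def b1(eps, delta, alpha):
    # (2.12): delta^{-3/2} eps^{alpha/4} (first regime)
    return delta ** -1.5 * eps ** (alpha / 4.0)

def b2(eps, alpha):
    # (2.13): eps^{-1/2 - alpha/4}, regime delta < min(eps, eps^alpha)
    return eps ** (-0.5 - alpha / 4.0)

def b3(eps, delta, alpha):
    # (2.14): delta^{-1/2} eps^{-alpha/4}, regime eps < delta < eps^{alpha/2}
    return delta ** -0.5 * eps ** (-alpha / 4.0)

for eps in (1e-2, 1e-4):
    for alpha in (0.5, 1.0, 1.5):
        d_hi = eps ** (alpha / 2.0)  # boundary delta = eps^{alpha/2}
        # (2.12) and (2.14) match at delta = eps^{alpha/2} ...
        assert abs(b1(eps, d_hi, alpha) - b3(eps, d_hi, alpha)) <= 1e-9 * b3(eps, d_hi, alpha)
        # ... and (2.13) and (2.14) match at delta = eps
        assert abs(b2(eps, alpha) - b3(eps, eps, alpha)) <= 1e-9 * b2(eps, alpha)
```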

Lemma 2.8

Fix \(t > 0\) and let \(\varphi :\mathbf{R}\times \mathbf{R}_+\rightarrow \mathbf{R}_+\) be a function which is uniformly bounded and decays exponentially in \(x\), uniformly over \(s \in [0,t]\).

Then, for all \(p\ge 1,\,\varepsilon >0\), one has the bound

$$\begin{aligned} \left[ \mathbf{E}\left| \int _0^t\int _\mathbf{R}\varphi (x-y,t-s)V_\varepsilon (y,s)\,dy\,ds\right| ^p\right] ^{1/p} \le C_\varphi \left( \varepsilon ^{-1/2-\alpha /4}\wedge \varepsilon ^{-\alpha /4}\wedge \varepsilon ^{\alpha /4} \right) . \end{aligned}$$

Here, the proportionality constant depends on \(p\), on \(t\), on the bounds on \(\varphi \), and on the bounds of Assumption 1.3.

Proof

The proof of this lemma is similar (with some simplifications) to that of the previous statement. We leave it to the reader. \(\square \)

In the proof of the next Lemma, we shall exploit in an essential way the representation

$$\begin{aligned} Y^\varepsilon (x,t)=\int _0^t\mathop \int \limits _\mathbf{R}p_{t-s}(x-y)V_\varepsilon (y,s)dyds. \end{aligned}$$

The fact that this integral converges follows readily from Assumption 1.3. Indeed

$$\begin{aligned} \mathbf{E}\int _0^t\mathop \int \limits _\mathbf{R}p_{t-s}(x-y) |V_\varepsilon (y,s)| dyds&= \mathop \int \limits _0^t\mathop \int \limits _\mathbf{R}p_{t-s}(x-y)\mathbf{E}[|V_\varepsilon (y,s)|]dyds\\&\le C\varepsilon ^{-(1/2+\alpha /4)}\mathop \int \limits _0^t\mathop \int \limits _\mathbf{R}p_{t-s}(x-y)dyds <\infty , \end{aligned}$$

hence

$$\begin{aligned} \mathop \int \limits _0^t\mathop \int \limits _\mathbf{R}p_{t-s}(x-y)|V_\varepsilon (y,s)|dyds<\infty \end{aligned}$$

a.s., and all the operations done in the next proof are valid a.s. in \(\omega \).

Lemma 2.9

For each \(p\ge 1\), there exists a constant \(C_p\) such that for all \(\varepsilon >0,\,t\ge 0,\,x\in \mathbf{R}\),

$$\begin{aligned}&\left[ \mathbf{E}\left( \left| Y^\varepsilon (x,t)\right| ^p\right) \right] ^{1/p}\le C_p(1+\sqrt{t})\varepsilon ^{\alpha /4} \end{aligned}$$
(2.15)
$$\begin{aligned}&\left[ \mathbf{E}\left( \left| \partial _x Y^\varepsilon (x,t)\right| ^p\right) \right] ^{1/p}\le C_p \end{aligned}$$
(2.16)
$$\begin{aligned}&\left[ \mathbf{E}\left( \left| \partial ^2_x Y^\varepsilon (x,t)\right| ^p\right) \right] ^{1/p}\le C_p\varepsilon ^{-1}. \end{aligned}$$
(2.17)

Proof

Our main ingredient is the existence of a function \(\psi :\mathbf{R}_+ \rightarrow [0,1]\) which is smooth, compactly supported in the interval \([1/2,2]\), and such that

$$\begin{aligned} \sum _{n \in \mathbf{Z}} \psi (2^{-n}r) = 1, \end{aligned}$$

for all \(r > 0\).

As a consequence, we can rewrite the heat kernel as

$$\begin{aligned} p_t(x) = \sum _{n \in \mathbf{Z}} 2^{-2n}\varphi _n(x,t), \end{aligned}$$
(2.18)

where

$$\begin{aligned} \varphi _n(x,t) = 2^{3n} \varphi (2^{n}x, 2^{2n} t),\quad \varphi (x,t) = p_t(x) \psi (\sqrt{x^2 + t}). \end{aligned}$$
(2.19)

The advantage of this formulation is that the function \(\varphi \) is smooth and compactly supported. The reason why we scale \(\varphi _n\) in this way, at the expense of still having a prefactor \(2^{-2n}\) in (2.18) is that this is the scaling used in Lemma 2.7 (setting \(\delta = 2^{-n}\)).

We use this decomposition to define \(Y^\varepsilon _n\) by

$$\begin{aligned} Y^\varepsilon _n(x,t) = 2^{-2n} \mathop \int \limits _0^t \mathop \int \limits _\mathbf{R}\varphi _n(x-y, t-s)\, V_\varepsilon (y,s)\,dy\,ds, \end{aligned}$$
(2.20)

so that, by (2.18), one has \(Y^\varepsilon = \sum _n Y^\varepsilon _n\). Setting \(\tilde{\varphi }(x,t) = \partial _x \varphi (x,t)\) and defining \(\tilde{\varphi }_n(x,t)= 2^{3n} \tilde{\varphi }(2^{n}x, 2^{2n} t)\) as in (2.19), the derivative of \(Y^\varepsilon \) can be decomposed in the same way:

$$\begin{aligned} \partial _x Y^\varepsilon _n(x,t) = 2^{-n} \mathop \int \limits _0^t \mathop \int \limits _\mathbf{R}\tilde{\varphi }_n(x-y, t-s)\, V_\varepsilon (y,s)\,dy\,ds. \end{aligned}$$
(2.21)

We first bound the derivative of \(Y^\varepsilon \). Since \(\tilde{\varphi }\) is smooth and compactly supported, the constants appearing in Lemma 2.7 do not depend on \(t\) and we have

$$\begin{aligned} \bigl (\mathbf{E}|\partial _x Y^\varepsilon _n(x,t)|^p\bigr )^{1/p} \lesssim 2^{n/2} \varepsilon ^{\alpha /4} \wedge 2^{-n/2} \varepsilon ^{-\alpha /4}= 2^{-|{n\over 2} + {\alpha \over 4}\log _2 \varepsilon |}. \end{aligned}$$

Since the sum (over \(n\)) of this quantity is bounded independently of \(\varepsilon \), (2.16) now follows by the triangle inequality.
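Concretely, \(2^{n/2}\varepsilon ^{\alpha /4}\wedge 2^{-n/2}\varepsilon ^{-\alpha /4}=2^{-|n/2+(\alpha /4)\log _2\varepsilon |}\) is a two-sided geometric series in \(n\), centred near \(n_0\approx -\frac{\alpha }{2}\log _2\varepsilon \), so its sum is bounded independently of \(\varepsilon \). A short numerical check (illustrative only; the explicit majorant \(\sqrt{2}\,(1+2(\sqrt{2}+1))<8.25\) is a crude bound of ours):

```python
import math

def dyadic_sum(eps, alpha, N=400):
    # sum_n 2^{-|n/2 + (alpha/4) log2(eps)|}: each term equals
    # 2^{n/2} eps^{alpha/4} \wedge 2^{-n/2} eps^{-alpha/4}
    c = (alpha / 4.0) * math.log2(eps)
    return sum(2.0 ** (-abs(n / 2.0 + c)) for n in range(-N, N + 1))

# |n/2 + c| >= (|n - n0| - 1)/2 for the integer n0 nearest to -2c, so the
# sum is at most sqrt(2) * (1 + 2 * (sqrt(2) + 1)) ~ 8.25, uniformly in eps
bound = math.sqrt(2.0) * (1.0 + 2.0 * (math.sqrt(2.0) + 1.0))
for eps in (1.0, 1e-1, 1e-3, 1e-6, 1e-12):
    for alpha in (0.5, 1.0, 1.9):
        assert dyadic_sum(eps, alpha) <= bound
```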

Note that (2.17) follows from the same argument, if we integrate by parts (hence differentiate \(V_\varepsilon \)).

In order to finally establish (2.15), we bound \(Y^\varepsilon \) in a similar way. This time however, we combine all the terms with \(n < 0\) into one single term, setting

$$\begin{aligned} p_t^{-}(x) = \sum _{n \le 0} 2^{-2n} \varphi _n(x,t),\quad Y^\varepsilon _-(x,t) = \int _0^t \int _\mathbf{R}p_{t-s}^-(x-y)\, V_\varepsilon (y,s)\,dy\,ds, \end{aligned}$$

so that \(Y^\varepsilon = \sum _{n > 0} Y^\varepsilon _n + Y^\varepsilon _-\). Similarly to before, we obtain

$$\begin{aligned} \bigl (\mathbf{E}|Y^\varepsilon _n(x,t)|^p\bigr )^{1/p} \lesssim 2^{-n/2} \varepsilon ^{\alpha /4}. \end{aligned}$$
(2.22)

In order to bound \(Y^\varepsilon _-\), we apply Lemma 2.8 with \(\varphi = p^-\) and \(\varepsilon \le 1\), which yields

$$\begin{aligned} \bigl (\mathbf{E}|Y^\varepsilon _-(x,t)|^p\bigr )^{1/p} \lesssim \varepsilon ^{\alpha /4}. \end{aligned}$$

Combining this with (2.22), summed over \(n > 0\), yields the desired bound. \(\square \)

We deduce from Lemma 2.9 and (1.7) the following corollary.

Corollary 2.10

As \(\varepsilon \rightarrow 0,\,\sup _{(x,t)\in D}|Y^\varepsilon (x,t)|\rightarrow 0\) in probability, for any bounded subset \(D\subset \mathbf{R}\times \mathbf{R}_+\).

Proof

It follows from Lemma 2.9 and (1.7) that for some \(a,b>0\) and all \(p\ge 1\), all bounded subsets \(D\subset \mathbf{R}\times \mathbf{R}_+\),

$$\begin{aligned}&\sup _{(x,t)\in D}\mathbf{E}\left[ |Y^\varepsilon (x,t)|^p\right] \lesssim \varepsilon ^{pa},\end{aligned}$$
(2.23)
$$\begin{aligned}&\sup _{(x,t)\in D}\mathbf{E}\left[ |\partial _xY^\varepsilon (x,t)|^p\right] \lesssim \varepsilon ^{-pb},\ \sup _{(x,t)\in D}\mathbf{E}\left[ |\partial _tY^\varepsilon (x,t)|^p\right] \lesssim \varepsilon ^{-pb}. \end{aligned}$$
(2.24)

We deduce from (2.23) that for all \((x,t), (y,s)\in D,\,p\ge 1\),

$$\begin{aligned} \mathbf{E}\left[ |Y^\varepsilon (x,t)-Y^\varepsilon (y,s)|^p\right] \lesssim \varepsilon ^{pa}, \end{aligned}$$

and from (2.24), writing \(Y^\varepsilon (x,t)-Y^\varepsilon (y,s)\) as the sum of an integral of \(\partial _xY^\varepsilon \) and an integral of \(\partial _tY^\varepsilon \), we get

$$\begin{aligned} \mathbf{E}\left[ |Y^\varepsilon (x,t)-Y^\varepsilon (y,s)|^p\right] \lesssim (|x-y|+|t-s|)^p\varepsilon ^{-pb}. \end{aligned}$$

Hence, interpolating between the last two bounds by Hölder's inequality (here \(\alpha ,\beta >0\) are auxiliary exponents, unrelated to those in (1.1)), we obtain

$$\begin{aligned} \mathbf{E}\left[ |Y^\varepsilon (x,t)-Y^\varepsilon (y,s)|^{\alpha +\beta }\right] \lesssim (|x-y|+|t-s|)^\beta \varepsilon ^{\alpha a-\beta b}. \end{aligned}$$

Provided \(\beta >2\) and \(\alpha >\beta b/a\), we obtain an estimate which allows us to deduce the result from a combination of (2.23) and Kolmogorov’s Lemma 2.3. \(\square \)

We will also need

Lemma 2.11

The function \(t\mapsto \bar{V}_\varepsilon (t)\) is continuous, and, for each \(\varepsilon >0\), there exists a positive constant \(\bar{V}^0_\varepsilon \) such that

$$\begin{aligned} \bar{V}_\varepsilon (t)\rightarrow \bar{V}_\varepsilon ^0,\quad \text {as }t\rightarrow \infty . \end{aligned}$$

Furthermore,

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\bar{V}_\varepsilon ^0=\bar{V}:= \int \limits _0^\infty \int \limits _{\mathbf {R}} \frac{\Phi (y,t)}{2\sqrt{\pi t}}\,dy\,dt, \end{aligned}$$

and \(\bar{V}_\varepsilon (t)\rightarrow \bar{V}\) as \(\varepsilon \rightarrow 0\), uniformly in \(t\in [1,+\infty ]\).

Proof

Writing \(\Phi _\varepsilon \) for the correlation function of \(V_\varepsilon \) and using the definition of \(\bar{V}_\varepsilon (t)\), we have

$$\begin{aligned} \bar{V}_\varepsilon (t)&= \mathbf{E}\left[ \left( \frac{\partial }{\partial x}\int \limits _0^t \int \limits _{\mathbf {R}}p_{t-s}(x-y) V_\varepsilon (y,s)\,dy\,ds\right) ^2 \right] \\&= \mathbf{E}\left[ \left( \int \limits _0^t \int \limits _{\mathbf {R}}p_{t-s}'(x-y) V_\varepsilon (y,s)\,dy\,ds\right) ^2 \right] \\&= \mathbf{E}\left[ \int \limits _0^t\!\!\!\int \limits _0^t\!\!\! \int \limits _{\mathbf {R}}\!\!\!\int \limits _{\mathbf {R}} p_{t-s}'(x-y)p_{t-r}'(x-z) V_\varepsilon (y,s) V_\varepsilon (z,r)\,dy\,dz\,ds\,dr \right] \\&= \int \limits _0^t\!\!\!\int \limits _0^t\!\!\! \int \limits _{\mathbf {R}}\!\!\!\int \limits _{\mathbf {R}} p_{t-s}'(x-y)p_{t-r}'(x-z) \Phi _\varepsilon (y-z,s-r)\,dy\,dz\,ds\,dr\\&= \int \limits _0^t\!\!\!\int \limits _0^t\!\!\! \int \limits _{\mathbf {R}}\!\!\!\int \limits _{\mathbf {R}} p_{s}'(y)p_{r}'(z) \Phi _\varepsilon (y-z,s-r)\,dy\,dz\,ds\,dr. \end{aligned}$$

It is easy to check that, for each \(\varepsilon >0\), this integral is a continuous function of \(t\) and that it converges, as \(t\rightarrow +\infty \). Performing the change of variables \(y'=\frac{y}{\varepsilon ^{1/2+\alpha /4}},\, z'=\frac{z}{\varepsilon ^{1/2+\alpha /4}},\,s'=\frac{s}{\varepsilon ^{1+\alpha /2}},\, r'=\frac{r}{\varepsilon ^{1+\alpha /2}}\), renaming the new variables and setting \(T_\varepsilon =\varepsilon ^{-1-\alpha /2}t\), we obtain

$$\begin{aligned} \bar{V}_\varepsilon (t) =\frac{1}{16\pi } \int \limits _0^{T_\varepsilon }\!\! \int \limits _0^{T_\varepsilon }\!\! \int \limits _{\mathbf {R}}\!\!\!\int \limits _{\mathbf {R}} \frac{y}{s^{3/2}} \frac{z}{r^{3/2}} e^{-\frac{y^2}{4s}-\frac{z^2}{4r}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr. \end{aligned}$$

We represent the integral on the right-hand side as

$$\begin{aligned} \bar{V}_\varepsilon (t)= \frac{1}{16\pi } \int \limits _0^{T_\varepsilon }\!\! \int \limits _0^{T_\varepsilon }\!\! \int \limits _{\mathbf {R}}\!\!\!\int \limits _{\mathbf {R}} \frac{z^2}{s^{3/2} r^{3/2}} e^{-\frac{z^2}{4s}-\frac{z^2}{4r}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr + r_\varepsilon (t). \nonumber \\ \end{aligned}$$
(2.25)

The further analysis relies on the following limit relation:

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\ \sup _{0<t\le +\infty }\,|r_\varepsilon (t)|=0. \end{aligned}$$
(2.26)

In order to justify it we denote \(\varkappa =\frac{1}{2}-\frac{\alpha }{4}\) and \(\varkappa _1=\frac{\varkappa }{10}\), and divide the integration area into four parts as follows

$$\begin{aligned} \Pi _1&= \left\{ (y,z,s,r)\in {\mathbf {R}}^2\times ({\mathbf {R}}^+)^2\,:\,s\le \varepsilon ^{\varkappa _1},\,r\le \varepsilon ^{\varkappa _1}\right\} ,\\ \Pi _2&= \left\{ (y,z,s,r)\in {\mathbf {R}}^2\times ({\mathbf {R}}^+)^2\,:\, \varepsilon ^{\varkappa _1}<s\le T_\varepsilon ,\,r\le \varepsilon ^{\varkappa _1}\right\} ,\\ \Pi _3&= \left\{ (y,z,s,r)\in {\mathbf {R}}^2\times ({\mathbf {R}}^+)^2\,:\,s\le \varepsilon ^{\varkappa _1},\, \varepsilon ^{\varkappa _1}<r\le T_\varepsilon \right\} ,\\ \Pi _4&= \left\{ (y,z,s,r)\in {\mathbf {R}}^2\times ({\mathbf {R}}^+)^2\,:\,\varepsilon ^{\varkappa _1}<s\le T_\varepsilon ,\,\varepsilon ^{\varkappa _1}<r\le T_\varepsilon \right\} . \end{aligned}$$

In \(\Pi _1\) we have

$$\begin{aligned} \int \limits _{\Pi _1}\frac{|y|\,|z|}{s^\frac{3}{2}r^\frac{3}{2}} e^{-\frac{y^2}{4s}-\frac{z^2}{4r}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr\le C^2\int \limits _0^{\varepsilon ^{\varkappa _1}}\int \limits _0^{\varepsilon ^{\varkappa _1}} \,\frac{dsdr}{s^\frac{1}{2}r^\frac{1}{2}}=4C^2\varepsilon ^{\varkappa _1}. \nonumber \\ \end{aligned}$$
(2.27)

To estimate the integral over \(\Pi _2\) we first notice that there exists a constant \(C_1\) such that

$$\begin{aligned} \frac{|y|}{s^\frac{1}{2}}e^{-\frac{y^2}{4s}}\le C_1 \end{aligned}$$

uniformly over all \(s > 0\) and \(y\in \mathbf{R}\). Then,

$$\begin{aligned}&\displaystyle \int \limits _{\Pi _2} \displaystyle \!\frac{|y|\,|z|}{s^\frac{3}{2}r^\frac{3}{2}} e^{-\frac{y^2}{4s}-\frac{z^2}{4r}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr\nonumber \\&\quad \displaystyle \le C_1\int \limits ^{T_\varepsilon }_{\varepsilon ^{\varkappa _1}} \int \limits _0^{\varepsilon ^{\varkappa _1}} \int \limits _{\mathbf {R}} \,\frac{|z|\,dz\,dr\,ds}{s\,r^\frac{3}{2}}e^{-\frac{z^2}{4r}} \int \limits _{\mathbf {R}}\Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\nonumber \\&\quad \displaystyle =C_1\varepsilon ^\varkappa \int \limits ^{T_\varepsilon }_{\varepsilon ^{\varkappa _1}} \int \limits _0^{\varepsilon ^{\varkappa _1}} \int \limits _{\mathbf {R}} \,e^{-\frac{z^2}{4r}}\,\overline{\Phi }\left( \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}} \right) \frac{|z|\,dz\,dr\,ds}{s\,r^\frac{3}{2}} \nonumber \\&\quad \displaystyle =CC_1\varepsilon ^\varkappa \int \limits ^{T_\varepsilon }_{\varepsilon ^{\varkappa _1}} \int \limits _0^{\varepsilon ^{\varkappa _1}} \,\overline{\Phi }\left( \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \frac{dr\,ds}{s\,r^\frac{1}{2}} \le CC_1\varepsilon ^\varkappa \int \limits ^{T_\varepsilon }_{\varepsilon ^{\varkappa _1}} \int \limits _0^{\varepsilon ^{\varkappa _1}} \,\widehat{\overline{\Phi }}\left( \frac{s}{\varepsilon ^{\frac{\alpha }{2}-1}} \right) \frac{dr\,ds}{s\,r^\frac{1}{2}}\nonumber \\&\quad \displaystyle =2CC_1\varepsilon ^\varkappa \varepsilon ^{\frac{\varkappa _1}{2}} \int \limits ^{T_\varepsilon }_{\varepsilon ^{\varkappa _1}} \,\widehat{\overline{\Phi }}\left( \frac{s}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \frac{ds}{s} \le 2CC_1\varepsilon ^\varkappa \varepsilon ^{\frac{\varkappa _1}{2}} \int \limits ^{\infty }_{\varepsilon ^{\varkappa _1+2\varkappa }} 
\,\widehat{\overline{\Phi }}(s)\frac{ds}{s}\nonumber \\&\quad \displaystyle \le C_2 (\varkappa _1+2\varkappa )\varepsilon ^{\varkappa +\frac{\varkappa _1}{2}} |\log \varepsilon |; \end{aligned}$$
(2.28)

here \(\overline{\Phi }(t)=\int _{\mathbf {R}}\Phi (x,t)dx\), and \(\widehat{\overline{\Phi }}(t)\) stands for \(\max \{\overline{\Phi }(s)\,:\,t-1\le s\le t\}\). A similar estimate holds true for the integral over \(\Pi _3\). Therefore,

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\ \sup \limits _{0<t\le +\infty } \left| \int \limits _{\Pi _1\cup \Pi _2\cup \Pi _3}\frac{y}{s^{3/2}} \frac{z}{r^{3/2}} e^{-\frac{y^2}{4s}-\frac{z^2}{4r}} \Phi \Big (\frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\Big ) \,dy\,dz\,ds\,dr\right| =0. \nonumber \\ \end{aligned}$$
(2.29)

We also have

$$\begin{aligned}&\displaystyle \int \limits _{\Pi _1\cup \Pi _2}\frac{z^2}{s^{3/2} r^{3/2}} e^{-\frac{z^2}{4s}-\frac{z^2}{4r}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr \nonumber \\&\quad \displaystyle = C\varepsilon ^\varkappa \int \limits _{0}^{T_\varepsilon } \int \limits _0^{\varepsilon ^{\varkappa _1}} \overline{\Phi }\Big (\frac{s-r}{\varepsilon ^{-2\varkappa }}\Big ) \frac{ds\,dr}{(s+r)^\frac{3}{2}} =C\int \limits _{0}^{\varepsilon ^{2\varkappa }T_\varepsilon } \int \limits _0^{\varepsilon ^{\varkappa _1+2\varkappa }} \overline{\Phi }(s-r)\frac{ds\,dr}{(s+r)^\frac{3}{2}}\nonumber \\&\quad \displaystyle \le C\int \limits _{0}^1 \int \limits _0^{\varepsilon ^{\varkappa _1+2\varkappa }} \overline{\Phi }(s-r)\frac{ds\,dr}{(s+r)^\frac{3}{2}} + C\int \limits _1^\infty \int \limits _0^{\varepsilon ^{\varkappa _1+2\varkappa }} \overline{\Phi }(s-r)\frac{ds\,dr}{(s+r)^\frac{3}{2}} \le C_4\varepsilon ^\varkappa . \nonumber \\ \end{aligned}$$
(2.30)

Combining this estimate with a similar estimate for the integral over \(\Pi _1\cup \Pi _3\), we obtain

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\ \sup \limits _{0<t\le +\infty } \left| \int \limits _{\Pi _1\cup \Pi _2\cup \Pi _3}\frac{z^2}{s^{3/2}r^{3/2}} e^{-\frac{z^2}{4s}-\frac{z^2}{4r}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}},\frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr\right| =0. \nonumber \\ \end{aligned}$$
(2.31)

In order to justify (2.26) it remains to show that

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\ \sup \limits _{0<t\le +\infty } \int \limits _{\Pi _4}\frac{\big |yz\, e^{-\frac{y^2}{4s}-\frac{z^2}{4r}} -z^2 e^{-\frac{z^2}{4s}-\frac{z^2}{4r}}\big |}{s^{3/2} r^{3/2}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\frac{1}{2}-\frac{\alpha }{4}}}, \frac{s\!-\!r}{\varepsilon ^{\frac{\alpha }{2}-1}}\right) \,dy\,dz\,ds\,dr=0. \nonumber \\ \end{aligned}$$
(2.32)

We first estimate

$$\begin{aligned} J_\varepsilon (t)&:= \int \limits _{\Pi _4}\frac{|yz|}{s^{3/2} r^{3/2}} e^{-\frac{z^2}{4r}} \big |e^{-\frac{y^2}{4s}} -e^{-\frac{z^2}{4s}}\big | \Phi \Big (\frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dy\,dz\,ds\,dr\nonumber \\&\le \frac{1}{4}\int \limits _{\Pi _4}\frac{|yz|\,|z^2-y^2|}{s^{5/2} r^{3/2}} e^{-\frac{z^2}{4r}} \left( e^{-\frac{y^2}{4s}} +e^{-\frac{z^2}{4s}}\right) \Phi \Big (\frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dy\,dz\,ds\,dr \nonumber \\&\lesssim \varepsilon ^\varkappa \int \limits _{\Pi _4} \left( \frac{|y|^3+|y\!-\!z|^3}{s^{5/2} r^{3/2}}e^{-\frac{y^2}{4s}} \!+\! \frac{|z|^3+|y\!-\!z|^3}{s^{5/2} r^{3/2}}e^{-\frac{z^2}{4s}} \right) \nonumber \\&\quad \times \,e^{-\frac{z^2}{4r}} \Phi _1\Big (\frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dy\,dz\,ds\,dr \end{aligned}$$
(2.33)

with \(\Phi _1(x,t)=|x|\Phi (x,t)\); here we have used the inequality \(|e^a-e^b|\le |b-a|(e^a+e^b)\) and the estimates \(|yz||y+z|\le C(|y|^3+|y-z|^3)\) and \(|yz||y+z|\le C(|z|^3+|y-z|^3)\), which follow from Young's inequality. Let us estimate the integral

$$\begin{aligned}&\varepsilon ^\varkappa \int \limits _{\Pi _4} \frac{|y|^3}{s^{5/2} r^{3/2}}e^{-\frac{y^2}{4s}} e^{-\frac{z^2}{4r}} \Phi _1\Big (\frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dy\,dz\,ds\,dr \\&\quad \le C_3\varepsilon ^\varkappa \int \limits _{\Pi _4} \frac{1}{s r^{3/2}} e^{-\frac{z^2}{4r}} \Phi _1\Big (\frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dy\,dz\,ds\,dr \\&\quad \le C_4\varepsilon ^{2\varkappa }\int \limits _{\varepsilon ^{\varkappa _1}}^\infty \int \limits _{\varepsilon ^{\varkappa _1}}^\infty \frac{1}{s r} \overline{\Phi }_1\Big (\frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,ds\,dr \\&\quad =C_4\varepsilon ^{2\varkappa }\int \limits _{\varepsilon ^{\varkappa _1+2\varkappa }}^\infty \int \limits _{\varepsilon ^{\varkappa _1+2\varkappa }}^\infty \overline{\Phi }_1\big (s\!-\!r\big )\frac{ds\,dr}{s r}\, \le C_5 \varepsilon ^{2\varkappa }(\log \varepsilon )^2; \end{aligned}$$

here \(C_3=\max \nolimits _{x\ge 0}\big (x^3e^{-x^2/4}\big )\), and \(\overline{\Phi }_1(t)\) stands for \(\int _{\mathbf {R}}\Phi _1(x,t)dx\). Other terms on the right-hand side of (2.33) can be estimated in a similar way. Thus we obtain

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\ \sup \limits _{0<t\le \infty }\,J_\varepsilon (t)=0. \end{aligned}$$
(2.34)

The inequality

$$\begin{aligned} \int \limits _{\Pi _4}\frac{|yz-z^2|}{s^{3/2} r^{3/2}} e^{-\frac{z^2}{4r}} e^{-\frac{z^2}{4s}} \Phi \left( \frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\right) \,dy\,dz\,ds\,dr\le C\varepsilon ^\varkappa (\log \varepsilon )^2 \end{aligned}$$

can be obtained in the same way with a number of simplifications. This yields (2.26).

It remains to notice that

$$\begin{aligned}&\int \limits _{\varepsilon ^{\varkappa _1}}^{T_\varepsilon }\!\! \int \limits _{\varepsilon ^{\varkappa _1}}^{T_\varepsilon }\!\! \int \limits _{\mathbf {R}}\!\!\!\int \limits _{\mathbf {R}} \frac{z^2}{s^{3/2} r^{3/2}} e^{-\frac{z^2}{4s}-\frac{z^2}{4r}} \Phi \Big (\frac{y\!-\!z}{\varepsilon ^{\varkappa }}, \frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dy\,dz\,ds\,dr\\&\quad = \varepsilon ^\varkappa \int \limits _{\varepsilon ^{\varkappa _1}}^{T_\varepsilon }\!\! \int \limits _{\varepsilon ^{\varkappa _1}}^{T_\varepsilon }\!\! \int \limits _{\mathbf {R}} \frac{z^2}{s^{3/2} r^{3/2}} e^{-\frac{z^2}{4s}-\frac{z^2}{4r}} \overline{\Phi }\Big (\frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \,dz\,ds\,dr\\&\quad = C_0 \varepsilon ^\varkappa \int \limits _{\varepsilon ^{\varkappa _1}}^{T_\varepsilon }\!\! \int \limits _{\varepsilon ^{\varkappa _1}}^{T_\varepsilon } \overline{\Phi }\Big (\frac{s\!-\!r}{\varepsilon ^{-2\varkappa }}\Big ) \frac{ds\,dr}{(s+r)^{3/2}} =C_0\int \limits _{\varepsilon ^{\varkappa _1+2\varkappa }}^{\varepsilon ^{-\alpha }t} \int \limits _{\varepsilon ^{\varkappa _1+2\varkappa }}^{\varepsilon ^{-\alpha }t} \overline{\Phi }(s\!-\!r)\frac{ds\,dr}{(s+r)^{3/2}} \\&\quad =C_0\int \limits _{0}^{\infty } \int \limits _{0}^{\infty } \overline{\Phi }(s\!-\!r)\frac{ds\,dr}{(s+r)^{3/2}} +R_\varepsilon (t) \end{aligned}$$

with \(C_0=\int _{\mathbf {R}}z^2e^{-z^2/4}\,dz\), and

$$\begin{aligned} \lim \limits _{\varepsilon \rightarrow 0}\sup \limits _{1\le t\le +\infty }|R_\varepsilon (t)|=0. \end{aligned}$$

Combining the last two relations with (2.25) and (2.26), we obtain the desired statement. \(\square \)
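The Gaussian constants appearing in this proof can be checked numerically. This is an illustrative aside (the helper `midpoint_integral`, the grids and the tolerances are ours): \(C_0=\int _{\mathbf {R}}z^2e^{-z^2/4}\,dz=4\sqrt{\pi }\), while the suprema of \(u\,e^{-u^2/4}\) and \(u^3e^{-u^2/4}\) over \(u\ge 0\), which govern the pointwise bounds \(|y|s^{-1/2}e^{-y^2/4s}\le C_1\) and \(|y|^3s^{-3/2}e^{-y^2/4s}\le C_3\) used above, equal \(\sqrt{2}e^{-1/2}\) and \(6^{3/2}e^{-3/2}\) respectively (any larger constants also suffice in the proof):

```python
import math

def midpoint_integral(f, lo, hi, n=200000):
    # composite midpoint rule; very accurate here since the integrand and all
    # its derivatives vanish at the endpoints
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# C0 = \int_R z^2 e^{-z^2/4} dz; the closed form is 4 sqrt(pi)
c0 = midpoint_integral(lambda z: z * z * math.exp(-z * z / 4.0), -40.0, 40.0)
assert abs(c0 - 4.0 * math.sqrt(math.pi)) < 1e-7

# sharp values of the pointwise bounds: max u e^{-u^2/4} = sqrt(2) e^{-1/2}
# (attained at u = sqrt(2)), max u^3 e^{-u^2/4} = 6^{3/2} e^{-3/2} (at u = sqrt(6))
grid = [i * 1e-4 for i in range(200000)]
c1 = max(u * math.exp(-u * u / 4.0) for u in grid)
c3 = max(u ** 3 * math.exp(-u * u / 4.0) for u in grid)
assert abs(c1 - math.sqrt(2.0) * math.exp(-0.5)) < 1e-6
assert abs(c3 - 6.0 ** 1.5 * math.exp(-1.5)) < 1e-6
```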

Lemma 2.12

For any \(T>0\), any even integer \(k\ge 2\), any \(0<\beta <1/k\), any \(p>k\) and any \(\kappa > 0\), there exists a constant \(C\) such that for all \(0\le t\le T,\,\varepsilon >0\),

$$\begin{aligned}&\left( \mathbf{E}\Vert Y^\varepsilon (t)\Vert _{0,p_\kappa }^p\right) ^{1/p}\le C\ \varepsilon ^{\frac{\alpha }{4}(1-\kappa )},\quad \left( \mathbf{E}\Vert \partial _xY^\varepsilon (t)\Vert _{0,p_\kappa }^p\right) ^{1/p}\le C\ \varepsilon ^{-\kappa },\;\\&\left( \mathbf{E}\Vert \partial _x Y^\varepsilon (t)\Vert _{\beta ,p_\kappa }^p\right) ^{1/p} \le C \varepsilon ^{-\kappa }. \end{aligned}$$

Proof

We establish the estimates of the norms of \(\partial _xY^\varepsilon (t)\) only. The norm of \(Y^\varepsilon (t)\) is estimated similarly. Let \(q>1\) and \(p=qk\). For any \(x<y\), we have the identity

$$\begin{aligned} |\partial _xY^\varepsilon (t,y)-\partial _xY^\varepsilon (t,x)|^k=k \int _x^y(\partial _xY^\varepsilon (t,z)-\partial _xY^\varepsilon (t,x))^{k-1} \partial ^2_xY^\varepsilon (t,z)dz. \end{aligned}$$

Raising this to the power \(q\) and taking expectations, we obtain

$$\begin{aligned}&\mathbf{E}(|\partial _xY^\varepsilon (t,y)-\partial _xY^\varepsilon (t,x)|^{p})\nonumber \\&\quad \le k^q\, \mathbf{E}\left| \int _x^y(\partial _xY^\varepsilon (t,z)-\partial _xY^\varepsilon (t,x))^{k-1} \partial ^2_xY^\varepsilon (t,z)dz\right| ^q \nonumber \\&\quad \lesssim (y-x)^{q-1}\int _x^y\mathbf{E}\left( \left| (\partial _xY^\varepsilon (t,z)-\partial _xY^\varepsilon (t,x))^{k-1} \partial ^2_xY^\varepsilon (t,z)\right| ^q\right) dz\nonumber \\&\quad \lesssim (y-x)^q\sqrt{\mathbf{E}\bigl (\left| \partial _xY^\varepsilon (t,x)\right| ^{2q(k-1)}\bigr ) \mathbf{E}\left( \left| \partial ^2_xY^\varepsilon (t,x)\right| ^{2q}\right) } \lesssim (y-x)^q \varepsilon ^{-q},\nonumber \\ \end{aligned}$$
(2.35)

where we have used the stationarity (in \(z\)) of the processes \(\partial _xY^\varepsilon (t,z)\) and \(\partial ^2_xY^\varepsilon (t,z)\), as well as the estimates (2.16) and (2.17) from Lemma 2.9.

As a consequence of (2.16), (2.35) and Kolmogorov's Lemma 2.3, there exists a stationary sequence of positive random variables \(\{\xi _n\}_{n \in \mathbf{Z}}\) such that for every \(n \in \mathbf{Z}\), the bound

$$\begin{aligned} \sup _{x \in [n,n+1]} |\partial _x Y^\varepsilon (t,x)| \le \xi _n, \end{aligned}$$

holds almost surely, and such that \(\bigl (\mathbf{E}|\xi _n|^p\bigr )^{1/p} \lesssim \varepsilon ^{-1/k}\) for every \(p \ge 1\). The bound on \(\Vert \partial _x Y^\varepsilon (t)\Vert _{0,p_\kappa }\) then follows: choosing \(p>1/\kappa \), we have

$$\begin{aligned} \Vert \partial _xY^\varepsilon (t)\Vert _{0,p_\kappa }&\le 2 \sup _{n\in \mathbf{Z}}\frac{\xi _n}{1+|n|^\kappa }\\&\le 2+2\sum _{n\in \mathbf{Z}}\left( \frac{\xi _n}{1+|n|^\kappa }\right) ^p\\ \mathbf{E}(\Vert \partial _xY^\varepsilon (t)\Vert _{0,p_\kappa })&\le 2+2\mathbf{E}(|\xi _n|^p)\sum _{n\in \mathbf{Z}}(1+|n|^\kappa )^{-p}\\&\le C\varepsilon ^{-p/k}. \end{aligned}$$

The bound on \(\Vert \partial _x Y^\varepsilon (t)\Vert _{\beta ,p_\kappa }\) follows in virtually the same way, using the fact that (2.35) also yields the bound

$$\begin{aligned} \sup _{x,y \in [n-1,n+1]} {|\partial _x Y^\varepsilon (t,x) - \partial _x Y^\varepsilon (t,y)| \over |x-y|^\beta } \le \tilde{\xi }_n, \end{aligned}$$

for some stationary sequence of random variables \(\tilde{\xi }_n\) which has all of its moments bounded in the same way as the sequence \(\{\xi _n\}\). \(\square \)
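The two elementary facts underpinning the summation step above, namely \(a\le 1+a^p\) for \(a\ge 0,\,p\ge 1\), and the convergence of \(\sum _n(1+|n|^\kappa )^{-p}\) for \(p>1/\kappa \), can be checked quickly (an illustrative sketch with sample values of our choosing):

```python
import random

random.seed(0)
p = 8.0
kappa = 0.25  # sample values with p > 1/kappa = 4 (our choice, for illustration)

# a <= 1 + a^p for a >= 0, p >= 1: trivial for a <= 1, and a^p >= a for a > 1
for _ in range(10000):
    a = random.uniform(0.0, 10.0)
    assert a <= 1.0 + a ** p + 1e-9

# sum_n (1 + |n|^kappa)^{-p} converges when p * kappa > 1
# (here the terms behave like n^{-2}, so the tail is tiny)
def weight_sum(N):
    return sum((1.0 + abs(n) ** kappa) ** -p for n in range(-N, N + 1))

assert weight_sum(100000) - weight_sum(1000) < 0.01
```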

We further obtain the following bound on the “negative Hölder norm” of \(\partial _x Y^\varepsilon \):

Corollary 2.13

For any \(T>0\), any even integer \(k\ge 2\), any \(p>k\) and \(\kappa =1/k\), there exists a constant \(C_{T,p,\kappa }\) such that

$$\begin{aligned} \left( \mathbf{E}\Vert \partial _xY^\varepsilon (t)\Vert _{-{1\over 4},p_\kappa }^p\right) ^{1/p}\le C_{T,p,\kappa }\ \varepsilon ^{\alpha /16 -\kappa }, \end{aligned}$$

for all \(0\le t\le T,\,\varepsilon >0\).

Proof

We note that

$$\begin{aligned} \Vert \partial _xY^\varepsilon (t)\Vert _{-{1\over 4},p_\kappa }&= \sup _{|x-y|\le 1} \frac{|Y^\varepsilon (t,x)-Y^\varepsilon (t,y)|}{p_\kappa (x)|x-y|^{3/4}},\\ \Vert Y^\varepsilon (t)\Vert _{0,p_\kappa }&= \sup _x\frac{|Y^\varepsilon (t,x)|}{p_\kappa (x)},\quad \Vert \partial _xY^\varepsilon (t)\Vert _{0,p_\kappa }=\sup _x\frac{|\partial _xY^\varepsilon (t,x)|}{p_\kappa (x)}. \end{aligned}$$

We have, for \(|x-y|\le 1\),

$$\begin{aligned} \frac{|Y^\varepsilon (t,x)\!-\!Y^\varepsilon (t,y)|}{p_\kappa (x)|x\!-\!y|^{3/4}}\!&= \! \left( \frac{|Y^\varepsilon (t,x)\!-\!Y^\varepsilon (t,y)|}{p_\kappa (x)}\right) ^{1/4} \left( \frac{|Y^\varepsilon (t,x)\!-\!Y^\varepsilon (t,y)|}{p_\kappa (x)|x\!-\!y|}\right) ^{3/4}\\ \!&\le \! \left( \frac{|Y^\varepsilon (t,x)|}{p_\kappa (x)}\!+\!C_\kappa \frac{|Y^\varepsilon (t,y)|}{p_\kappa (y)}\right) ^{1/4} \left( C_\kappa \sup _{x\le z\le y}\frac{|\partial _xY^\varepsilon (t,z)|}{p_\kappa (z)}\right) ^{3/4}\!. \end{aligned}$$

It remains to take supremums and apply Hölder’s inequality. \(\square \)

Remark 2.14

By interpolating in a similar way between the first and the third bound of Lemma 2.12, one could actually strengthen the second bound to obtain a bound on \(\mathbf{E}\Vert \partial _xY^\varepsilon (t)\Vert _{0,p_\kappa }^p\) by some positive power of \(\varepsilon \). This is however not required for our main result.

2.3 Bounds and convergence of \(Z^\varepsilon \)

The main result of this subsection is Lemma 2.18, which follows essentially from a combination of Lemma 2.15 and Lemma 2.17.

Lemma 2.15

For any \(T>0\), there exists a constant \(C_T\) such that for all \(\varepsilon >0,\,0\le t\le T\) and \(x\in \mathbf{R}\),

$$\begin{aligned} \left[ \mathbf{E}\left( \left| Z^\varepsilon (x,t)\right| ^2\right) \right] ^{1/2}\le C_T \varepsilon ^{\alpha }. \end{aligned}$$

Proof

The main ingredient in the proof is a bound on the correlation function of the right hand side of the equation for \(Z^\varepsilon \), which we denote by

$$\begin{aligned} \Lambda _\varepsilon (z,z') = \mathbf{Cov}\bigl (|\partial _x Y^\varepsilon (z)|^2, |\partial _x Y^\varepsilon (z')|^2\bigr ). \end{aligned}$$

Inserting the definition of \(Y^\varepsilon \), we obtain the identity

$$\begin{aligned} \Lambda _\varepsilon (z,z') = \int \!\!\cdots \!\!\int \! \tilde{P}(z\!-\!z_1)\tilde{P}(z\!-\!z_2)\tilde{P}(z'\!-\!z_3)\tilde{P}(z'\!-\!z_4) \Psi ^{(4)}_\varepsilon (z_1,\ldots ,z_4)\,dz_1\ldots dz_4, \end{aligned}$$

where

$$\begin{aligned} \tilde{P}(z) = \tilde{P}(x,t) = \partial _x p_t(x), \end{aligned}$$

with \(p_t\) the standard heat kernel and

$$\begin{aligned} \Psi ^{(4)}_\varepsilon (z_1,\ldots ,z_4) = \varepsilon ^{-2-\alpha }\Psi ^{(4)} \Bigl ({x_1\over \varepsilon },\ldots ,{x_4\over \varepsilon },{t_1\over \varepsilon ^\alpha }, \ldots ,{t_4\over \varepsilon ^\alpha }\Bigr ). \end{aligned}$$

Here, we used the shorthand notation \(z_i = (x_i,t_i)\), and integrals over \(z_i\) are understood as shorthand for \(\int _0^t \int _\mathbf{R}\, dx_i\,dt_i\). We now make use of Lemma 2.4, which allows us to bound this integral as

$$\begin{aligned} |\Lambda _\varepsilon (z,z')| \lesssim \Bigl (\varepsilon ^{-1-{\alpha \over 2} }\int \int \tilde{P}(z-z_1)\tilde{P}(z'-z_3) \varrho _\varepsilon (z_1-z_3)\,dz_1\, dz_3\Bigr )^2 {\mathop {=}\limits ^\mathrm{def}}\tilde{\varrho }_\varepsilon ^2(z,z'), \end{aligned}$$

where we used the shorthand notation

$$\begin{aligned} \varrho _\varepsilon (x,t) = \varrho \Bigl ({x\over \varepsilon }, {t\over \varepsilon ^\alpha }\Bigr ). \end{aligned}$$

We will show below that the following bound holds:

Lemma 2.16

For any \(\gamma \ge \frac{2}{2-\alpha }\),

$$\begin{aligned} \tilde{\varrho }_\varepsilon (z,z') \lesssim \left( 1 \wedge {\varepsilon ^{\alpha \gamma /2} \over d_p^\gamma (z,z')}\right) + (1+t+t')\varepsilon ^{\alpha /2} {\mathop {=}\limits ^\mathrm{def}} \zeta _ \varepsilon (z-z') + (1+t+t')\varepsilon ^{\alpha /2}, \end{aligned}$$

where \(d_p\) denotes the parabolic distance given by

$$\begin{aligned} d_p(z,z')^2 = |x-x'|^2 + |t-t'|. \end{aligned}$$

Taking this bound for granted, we write, as in the proof of Lemma 2.9, \(Z^\varepsilon = Z^\varepsilon _- + \sum _{n > 0} Z^\varepsilon _n\) with

$$\begin{aligned} Z^\varepsilon _n(z) = 2^{-2n} \int \varphi _n(z-z')\, \bigl (|\partial _x Y^\varepsilon (z')|^2 - \bar{V}_\varepsilon (t')\bigr )\,dz', \end{aligned}$$

and similarly for \(Z^\varepsilon _-\). Squaring this expression and inserting the bound from Lemma 2.16, we obtain

$$\begin{aligned} \mathbf{E}|Z^\varepsilon _n(z)|^2&\lesssim 2^{-4n} \int \int \varphi _n(z-z')\varphi _n(z-z'')\, \bigl (\zeta _\varepsilon ^2(z'-z'') \!+\! (1\!+\!t'\!+\!t'')^2\varepsilon ^\alpha \bigr ) \,dz'\,dz''\\&\lesssim 2^{-n} \int \zeta _\varepsilon ^2(z') \,dz' + 2^{-4n} (1+t)^4\varepsilon ^\alpha , \end{aligned}$$

where we made use of the scaling of \(\varphi _n\) given by (2.19). The corresponding computation for \(Z^\varepsilon _-\) similarly yields

$$\begin{aligned} \mathbf{E}|Z^\varepsilon _-(z)|^2\lesssim t \int \zeta _\varepsilon ^2(z') \,dz' + (1+t)^4\varepsilon ^\alpha . \end{aligned}$$

The claim now follows from the bound

$$\begin{aligned} \int \zeta _\varepsilon ^2(z') \,dz' \le \int _0^t \int _\mathbf{R}{\varepsilon ^{\alpha \gamma } \over \bigl (|x|^2 + |s|\bigr )^\gamma }\wedge 1 \,dx\,ds \lesssim \varepsilon ^{\alpha \gamma } +\varepsilon ^{2\alpha }. \end{aligned}$$

Consequently, for \(\varepsilon \le 1\), the right-hand side is of order \(\varepsilon ^{(2\wedge \gamma )\alpha }\), and this for any \(\gamma \ge \frac{2}{2-\alpha }\). Choosing \(\gamma \ge 2\), the right-hand side is indeed bounded by \(\varepsilon ^{2\alpha }\), as required. \(\square \)

Proof of Lemma 2.16

Similarly to the proof of Lemma 2.9, we write

$$\begin{aligned} \tilde{\varrho }_\varepsilon (z,z') = \sum _{n_1\ge 0}\sum _{n_2\ge 0}\tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z'), \end{aligned}$$

with

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') = \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1-n_2}\int \int \tilde{\varphi }_{n_1}(z-z_1)\tilde{\varphi }_{n_2}(z'-z_2) \varrho _\varepsilon (z_1-z_2)\,dz_1\, dz_2. \end{aligned}$$

Here, for \(n \ge 1\), \(\tilde{\varphi }_n\) is defined as in the proof of Lemma 2.9, whereas \(\tilde{\varphi }_0\) differs from the one used there and is now given by

$$\begin{aligned} \tilde{\varphi }_0(x,t) = \partial _x p_t^-(x). \end{aligned}$$

By symmetry, we can restrict ourselves to the case \(n_1 \ge n_2\), which we do in the sequel. When \(n_2 > 0\), the above integral can be restricted to the set of pairs \((z_1, z_2)\) whose parabolic distance satisfies

$$\begin{aligned} d_p(z_1,z_2) \ge \bigl (d_p(z,z') - 2^{2-n_2}\bigr )_+, \end{aligned}$$

where \((\cdots )_+\) denotes the positive part of a number.

Replacing \(\tilde{\varphi }_{n_2}\) by its supremum and integrating out \(\tilde{\varphi }_{n_1}\) and \(\varrho _\varepsilon \) yields the bound

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \bigl (1+\delta _{n_2,0}(t+t')\bigr )2^{2n_2-n_1} \varepsilon ^{\alpha /2} \int _{A_\varepsilon (n_2)} \varrho (z_3)\,dz_3, \end{aligned}$$

where \(A_\varepsilon (0) = \mathbf{R}^2\) and

$$\begin{aligned} A_\varepsilon (n_2) = \bigl \{z_3\,:\, d_p(0,z_3) \ge \varepsilon ^{-\alpha /2}\bigl (d_p(z,z') - 2^{2-n_2}\bigr )_+\bigr \}, \end{aligned}$$

for \(n_2 > 0\). (Note that the prefactor \(1+t+t'\) is relevant only in the case \(n_1=n_2 = 0\).) It follows from the integrability of \(\varrho \) that one always has the bound

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \bigl (1+\delta _{n_2,0}(t+t')\bigr ) 2^{2n_2-n_1} \varepsilon ^{\alpha /2}. \end{aligned}$$
(2.36)

Moreover, we deduce from Assumption 1.4 that, whenever \(n_2 > 0\) and \(d_p(z,z') \ge 2^{3-n_2}\), one has the improved bound: for any \(\gamma >0\),

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim 2^{2n_2-n_1} \varepsilon ^{\alpha /2} \left( 1 \wedge {\varepsilon ^{\alpha \gamma /2} \over d_p^\gamma (z,z')}\right) . \end{aligned}$$
(2.37)

The bound (2.36) is sufficient for our needs in the case \(n_2 = 0\), so we assume that \(n_2 > 0\) from now on.

We now obtain a second bound on \(\tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z')\) which will be useful in the regime where \(n_2\) is very large. Since the integral of \(\tilde{\varphi }_{n_1}\) is bounded independently of \(n_1\), we obtain

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1-n_2}\sup _{d_p(z_1,z) \le 2^{1-n_1}} \int \tilde{\varphi }_{n_2}(z'-z_2) \varrho _\varepsilon (z_1-z_2)\,dz_2. \nonumber \\ \end{aligned}$$
(2.38)

We now distinguish between three cases, which depend on the size of \(z-z'\).

Case 1: \(d_p(z,z') \le \varepsilon ^{\alpha /2}\). In this case, we proceed as in the proof of Lemma 2.7, which yields

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z')&\lesssim \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1-n_2}\sup _{z_1} \int \tilde{\varphi }_{n_2}(z_2) \varrho _\varepsilon (z_2-z_1)\,dz_2\nonumber \\&\lesssim \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1-n_2}\sup _{x_1} \int _{\mathbf{R}} \sup _{s} \varrho _\varepsilon (x_2-x_1,s) \int _{0}^t \tilde{\varphi }_{n_2}(x_2,t_2) \,dt_2\,dx_2\nonumber \\&\lesssim \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1}\int _{\mathbf{R}} \sup _{s} \varrho _\varepsilon (x_2,s)\,dx_2\lesssim \varepsilon ^{-{\alpha \over 2} }2^{-n_1}. \end{aligned}$$
(2.39)

Case 2: \(|x-x'| \ge d_p(z,z')/2 \ge \varepsilon ^{\alpha /2}/2\). Note that in (2.38), the argument of \(\varrho _\varepsilon \) can only ever take values with \(|x_1 - x_2| \in B_\varepsilon (n_2)\) where

$$\begin{aligned} B_\varepsilon (n_2) = \left\{ \bar{x} \,:\, |\bar{x}| \ge \bigl (|x-x'| - 2^{2-n_2}\bigr )\right\} . \end{aligned}$$

As a consequence, we obtain the bound

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1-n_2}\sup _{\bar{x} \in B_\varepsilon (n_2)} \sup _{s \in \mathbf{R}} \varrho _\varepsilon (\bar{x},s). \end{aligned}$$

The case of interest to us for this bound will be \(2^{6-n_2} \le \varepsilon ^{\alpha /2}\), in which case we deduce from this calculation and Assumption 1.4 that

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \varepsilon ^{-1-{\alpha \over 2} }2^{-n_1-n_2} \left( {\varepsilon \over d_p(z,z')}\right) ^\gamma , \end{aligned}$$

where \(\gamma \) is an arbitrarily large exponent. Choosing \(\gamma \ge \frac{2}{2-\alpha }\), we conclude that one also has the bound

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \varepsilon ^{-{\alpha \over 2} }2^{-n_1}\left( 1\wedge {\varepsilon ^{\alpha /2} \over d_p(z,z')}\right) ^\gamma , \end{aligned}$$
(2.40)

which will be sufficient for our needs.

Case 3: \(|t-t'| \ge d_p^2(z,z')/2 \ge \varepsilon ^{\alpha }/2\). Similarly, we obtain

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \varepsilon ^{-{\alpha \over 2} }2^{-n_1}\int _{\mathbf{R}} \sup _{s\in B_\varepsilon '(n_2)} \varrho _\varepsilon (x_2,s)\,dx_2, \end{aligned}$$

where

$$\begin{aligned} B_\varepsilon '(n_2) = \bigl \{s \,:\, |s| \ge \varepsilon ^{-\alpha } \bigl (|t-t'| - 2^{8-2n_2}\bigr )\bigr \}. \end{aligned}$$

Restricting ourselves again to the case \(2^{6-n_2} \le \varepsilon ^{\alpha /2}\), this yields as before

$$\begin{aligned} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim \varepsilon ^{-{\alpha \over 2} }2^{-n_1} \left( 1 \wedge {\varepsilon ^{\alpha /2} \over d_p(z,z')}\right) ^\gamma . \end{aligned}$$
(2.41)

It now remains to sum over all values \(n_1 \ge n_2\ge 0\).

For \(n_2 = 0\), we sum the bound (2.36), which yields

$$\begin{aligned} \sum _{n_1 \ge 0} \tilde{\varrho }_\varepsilon ^{n_1,0}(z,z') \lesssim (1+ t+t') \varepsilon ^{\alpha /2}. \end{aligned}$$

In order to sum the remaining terms, we first consider the case \(d_p(z,z') < \varepsilon ^{\alpha /2}\). In this case, we use (2.36) and (2.39) to deduce that

$$\begin{aligned} \sum _{n_1 \ge n_2} \tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim 2^{n_2} \varepsilon ^{\alpha /2}\wedge 2^{-n_2} \varepsilon ^{-\alpha /2}, \end{aligned}$$

so that in this case \(\tilde{\varrho }_\varepsilon (z,z') \lesssim 1+(1+ t+t') \varepsilon ^{\alpha /2}\).
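Indeed, the last sum is dominated by a geometric series on either side of the crossover scale \(2^{n_2} \approx \varepsilon ^{-\alpha /2}\):

$$\begin{aligned} \sum _{n_2 \ge 1} \bigl (2^{n_2} \varepsilon ^{\alpha /2}\wedge 2^{-n_2} \varepsilon ^{-\alpha /2}\bigr ) \lesssim \sum _{2^{n_2} \le \varepsilon ^{-\alpha /2}} 2^{n_2} \varepsilon ^{\alpha /2} + \sum _{2^{n_2} > \varepsilon ^{-\alpha /2}} 2^{-n_2} \varepsilon ^{-\alpha /2} \lesssim 1. \end{aligned}$$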

It remains to consider the case \(d_p(z,z') \ge \varepsilon ^{\alpha /2}\). For this, we break the sum over \(n_2\) into three pieces:

$$\begin{aligned} N_1&= \left\{ n_2 \ge 1\,:\, 2^{-n_2} \ge d_p(z,z')/8\right\} ,\\ N_2&= \left\{ n_2 \ge 1\,:\, 2^{-6} \varepsilon ^{\alpha /2} \le 2^{-n_2} < d_p(z,z')/8\right\} ,\\ N_3&= \left\{ n_2 \ge 1\,:\, 2^{-n_2} < 2^{-6} \varepsilon ^{\alpha /2}\right\} . \end{aligned}$$

For \(n_2 \in N_1\), we only make use of the bound (2.36). Summing first over \(n_1 \ge n_2\) and then over \(n_2 \in N_1\), we obtain

$$\begin{aligned} \sum _{n_2 \in N_1}\sum _{n_1 \ge n_2}\tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim {\varepsilon ^{\alpha /2} \over d_p(z,z')}. \end{aligned}$$
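In more detail, summing (2.36) first over \(n_1 \ge n_2\) gives a bound of order \(2^{n_2}\varepsilon ^{\alpha /2}\) and, since every \(n_2 \in N_1\) satisfies \(2^{n_2} \le 8/d_p(z,z')\), the remaining geometric sum is dominated by its largest term:

$$\begin{aligned} \sum _{n_2 \in N_1}\sum _{n_1 \ge n_2} 2^{2n_2-n_1}\varepsilon ^{\alpha /2} \lesssim \varepsilon ^{\alpha /2} \sum _{n_2 \in N_1} 2^{n_2} \lesssim {\varepsilon ^{\alpha /2} \over d_p(z,z')}. \end{aligned}$$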

For \(n_2 \in N_2\), we only make use of the bound (2.37). Summing again first over \(n_1 \ge n_2\) and then over \(n_2 \in N_2\), we obtain

$$\begin{aligned} \sum _{n_2 \in N_2}\sum _{n_1 \ge n_2}\tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim {\varepsilon ^{\alpha \gamma /2} \over d_p^\gamma (z,z')}. \end{aligned}$$

In the last case, we similarly use either (2.40) or (2.41), depending on whether \(|x-x'| \ge d_p(z,z')/2\) or \(|t-t'| \ge d_p^2(z,z')/2\), which yields again

$$\begin{aligned} \sum _{n_2 \in N_3}\sum _{n_1 \ge n_2}\tilde{\varrho }_\varepsilon ^{n_1,n_2}(z,z') \lesssim {\varepsilon ^{\alpha \gamma /2} \over d_p^\gamma (z,z')}. \end{aligned}$$

Combining the above bounds, the claim follows. \(\square \)

Lemma 2.17

For any \(T>0,\,p\ge 1,\,\kappa >0,\,0\le \gamma <1\), there exists a constant \(C_{T,p,\kappa ,\gamma }\) such that for all \(0\le t\le T,\,\varepsilon >0\),

$$\begin{aligned} \bigl (\mathbf{E}\Vert \partial _x Z^\varepsilon (t)\Vert _{\gamma ,p_\kappa }^p\bigr )^{1/p} \le \bigl (\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{\gamma +1,p_\kappa }^p\bigr )^{1/p} \le C_{T,p,\kappa ,\gamma } \varepsilon ^{-2\kappa }. \end{aligned}$$

Proof

The first inequality is obvious from the definition. For the second one, we use the second statement of Proposition 2.2 (with \(\beta =0\) and \(\gamma \) replaced by \(\gamma +1\)), followed by the second estimate from Lemma 2.12. As a consequence, we have indeed

$$\begin{aligned} \Vert Z^\varepsilon (t)\Vert _{\gamma +1,p_\kappa } \le \int _0^t\Vert P_{t-s}v^\varepsilon (s)\Vert _{\gamma +1,p_\kappa }ds \le Ct^{1-(\gamma +1)/2}\varepsilon ^{-2\kappa }, \end{aligned}$$

where we set \(v^\varepsilon (s):=|\partial _xY^\varepsilon (s)|^2-\mathbf{E}\bigl (|\partial _xY^\varepsilon (s)|^2\bigr )\). \(\square \)

Combining this result with Lemma 2.15, we deduce

Lemma 2.18

For any \(T>0,\,\kappa , \bar{\kappa }>0\), and \(p > {2/\kappa }\), there exists a constant \(C\) such that for all \(0\le t\le T,\,\varepsilon >0\),

$$\begin{aligned} \mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p \le C \varepsilon ^{{\alpha p \over p+1} - \bar{\kappa }},\quad \mathbf{E}\Vert \partial _x Z^\varepsilon (t)\Vert _{0,p_\kappa }^p \le C \varepsilon ^{{\alpha p \over 2(p+1)} -\bar{\kappa }}. \end{aligned}$$

Proof

We first derive the bound on \(\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p\). For this, we set \(x_k = k \varepsilon ^{\gamma }\) with \(k \in \mathbf{Z}\), as well as \(I_k = [x_k, x_{k+1}]\). For any fixed function \(Z :\mathbf{R}\rightarrow \mathbf{R}\), we then have

$$\begin{aligned} \Vert Z\Vert _{L^\infty (I_k)} \le |Z(x_k)| + \varepsilon ^\gamma \Vert \partial _x Z\Vert _{L^\infty (I_k)} \le |Z(x_k)| + \varepsilon ^\gamma (1+|x_k|^\kappa ) \Vert \partial _x Z\Vert _{0,p_\kappa }, \end{aligned}$$

so that

$$\begin{aligned} \Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p \lesssim \varepsilon ^{\gamma p} \Vert \partial _x Z^\varepsilon (t)\Vert _{0,p_\kappa }^p + \sup _{k \in \mathbf{Z}} {|Z^\varepsilon (t,x_k)|^p \over (1 + |x_k|^\kappa )^p}, \end{aligned}$$
(2.42)

with a proportionality constant depending only on \(p\). Using the Cauchy–Schwarz inequality, we furthermore obtain the bound

$$\begin{aligned} \mathbf{E}\sup _{k \in \mathbf{Z}} {|Z^\varepsilon (t,x_k)|^p \over (1 + |x_k|^\kappa )^p}&\le \sqrt{\mathbf{E}\sum _{k \in \mathbf{Z}} {|Z^\varepsilon (t,x_k)|^2 \over 1 + |x_k|^2}} \sqrt{\mathbf{E}\sup _{k \in \mathbf{Z}} {|Z^\varepsilon (t,x_k)|^{2p-2} \over (1 + |x_k|^\kappa )^{p-{2\over \kappa }}}} \\&\lesssim \varepsilon ^\alpha \sqrt{\sum _{k \in \mathbf{Z}} {1 \over 1 + |x_k|^2}} \sqrt{\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_{\hat{\kappa }}}^{2p-2}}, \end{aligned}$$

where we have set

$$\begin{aligned} \hat{\kappa }= {\kappa p - 2 \over 2p-2}, \end{aligned}$$

and we used Lemma 2.15 to get \(\mathbf{E}|Z^\varepsilon (t,x_k)|^2 \le C\varepsilon ^{2\alpha }\). If \(\hat{\kappa }> 0\) (which explains the requirement on \(p\) in our assumptions), then it follows from Lemma 2.17 that the second factor in this expression is bounded by \(C \varepsilon ^{- \bar{\kappa }}\). On the other hand, one has

$$\begin{aligned} \sum _{k \in \mathbf{Z}} {1 \over 1 + |x_k|^2} \lesssim \varepsilon ^{-\gamma }, \end{aligned}$$

so that the expectation of the second term in (2.42) is bounded by \(C \varepsilon ^{\alpha - \gamma - \bar{\kappa }}\). Using again Lemma 2.17, the first term in (2.42) is bounded by \(C \varepsilon ^{p \gamma - \bar{\kappa }}\). Optimising over \(\gamma \) yields the required bound on \(\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }\).
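The optimisation is elementary: the two contributions are of order \(\varepsilon ^{p\gamma -\bar{\kappa }}\) and \(\varepsilon ^{\alpha -\gamma -\bar{\kappa }}\), and equating the exponents gives

$$\begin{aligned} p\gamma = \alpha - \gamma \qquad \Longleftrightarrow \qquad \gamma = {\alpha \over p+1}, \end{aligned}$$

for which both terms are of order \(\varepsilon ^{{\alpha p \over p+1} - \bar{\kappa }}\), as claimed.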

Concerning the bounds on \(\Vert \partial _x Z^\varepsilon (t)\Vert _{0,p_\kappa }\), we use the easily verifiable fact that any function \(f\) defined on an interval \(I\) satisfies the bound

$$\begin{aligned} \Vert f'\Vert _{L^\infty } \le {2\Vert f\Vert _{L^\infty } \over |I|} + \Vert f'\Vert _\beta \, |I|^\beta . \end{aligned}$$

Cutting the real line into intervals of size \(\varepsilon ^\gamma \) as before, we deduce that

$$\begin{aligned} \Vert f'\Vert _{0,p_\kappa } \lesssim \varepsilon ^{-\gamma } \Vert f\Vert _{0,p_\kappa } + \varepsilon ^{\beta \gamma } \Vert f'\Vert _{\beta ,p_\kappa }. \end{aligned}$$

Choosing \(\beta \) very close to 1 and combining this with the bound just obtained on \(\mathbf{E}\Vert Z^\varepsilon (t)\Vert _{0,p_\kappa }^p\) as well as Lemma 2.17, we have

$$\begin{aligned} \mathbf{E}\Vert \partial _x Z^\varepsilon (t)\Vert _{0,p_\kappa }^p \lesssim \varepsilon ^{-\gamma p + {\alpha p \over p+1} - \bar{\kappa }} + \varepsilon ^{\gamma p - \bar{\kappa }} .\end{aligned}$$

Optimising over \(\gamma \) allows us to conclude. \(\square \)
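Explicitly, the last optimisation balances the exponents \(-\gamma p + {\alpha p \over p+1}\) and \(\gamma p\), so that

$$\begin{aligned} -\gamma p + {\alpha p \over p+1} = \gamma p \qquad \Longleftrightarrow \qquad \gamma = {\alpha \over 2(p+1)}, \end{aligned}$$

and both terms in the previous display are then of order \(\varepsilon ^{{\alpha p \over 2(p+1)} - \bar{\kappa }}\), which is the second bound of Lemma 2.18.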

We will need moreover

Corollary 2.19

As \(\varepsilon \rightarrow 0,\,Z^\varepsilon (x,t)\rightarrow 0\) in probability, locally uniformly in \((x,t)\).

Proof

It follows from estimate (2.16) that for any \(p>1\) and any bounded subset \(K\subset {\mathbf {R}}\times {\mathbf {R}}^+\), there exists a constant \(C_{p,K}\) such that

$$\begin{aligned} \mathbf{E}\left( \int \limits _K \big ||\partial _xY^\varepsilon (x,t)|^2-\bar{V}_\varepsilon (t)\big |^p \,dx\,dt\right) \le C_{p,K}. \end{aligned}$$

Then, by the Nash estimate, we obtain

$$\begin{aligned} \mathbf{E}\Vert Z^\varepsilon \Vert _{C^\gamma (K)}\le C_{K}, \end{aligned}$$
(2.43)

where the Hölder exponent \(\gamma >0\) and \(C_{K}\) do not depend on \(\varepsilon \). As a consequence of the first estimate of Lemma 2.18, we have for \(p\) sufficiently large the bound

$$\begin{aligned} \mathbf{E}\Vert Z^\varepsilon \Vert ^p_{L^p(K)}\le C_{p,K}\varepsilon ^{\delta }, \end{aligned}$$
(2.44)

for some exponent \(\delta > 0\). Combining (2.43) and (2.44) one can easily derive the required convergence. \(\square \)

3 Proof of the main result

Before concluding with the proof of our main theorem, we prove a result for a parabolic equation whose coefficients take values in spaces of weighted Hölder continuous functions.

We consider an abstract evolution equation of the type

$$\begin{aligned} \partial _t u = \partial _x^2 u + F\,\partial _x u + G\, u, \end{aligned}$$
(3.1)

where \(F\) and \(G\) are measurable functions of time, taking values in \(\mathcal{C}^{-\gamma }_{p_\kappa }\) for some suitable \(\kappa > 0\) and \(\gamma < {1\over 2}\). The main result of this section is the following:

Theorem 3.1

Let \(\gamma \) and \(\kappa \) be positive numbers such that \(\gamma + 2 \kappa < {1\over 2}\) and let \(F\) and \(G\) be functions in \(L^p_\mathrm{loc}(\mathbf{R}_+, \mathcal{C}^{-\gamma }_{p_\kappa })\) for every \(p \ge 1\).

Let furthermore \(\ell \in \mathbf{R}\) and \(u_0 \in \mathcal{C}^{3/2}_{e_\ell }\). Then, there exists a unique global mild solution to (3.1). Furthermore, this solution is continuous with values in \(\mathcal{C}^{3/2}_{e_m}\) for every \(m < \ell \) and, for every set of parameters \(\ell , m, \kappa , \gamma \) satisfying the above restrictions, there exists a value \(p\) such that the map \((u_0, F,G) \mapsto u\) is jointly continuous in these topologies.

Proof

We will show a slightly stronger statement, namely that for every \(\delta > 0\) sufficiently small, the mild solution has the property that \(u_t \in \mathcal{C}^{\frac{3}{2}}_{e_{\ell -\delta t}}\) for \(t \in [0,T]\) for arbitrary values of \(T>0\). We fix \(T,\,\delta \) and \(\ell \) from now on.

We then write

$$\begin{aligned} |\!|\!|u |\!|\!|_{\delta ,\ell ,T} {\mathop {=}\limits ^\mathrm{def}}\sup _{t \in [0,T]} \Vert u_t\Vert _{{3\over 2},e_{\ell -\delta t}}, \end{aligned}$$

and we denote by \(\mathcal{B}_{\delta ,\ell ,T}\) the corresponding Banach space. With this notation at hand, we define a map \(\mathcal{M}_T :\mathcal{B}_{\delta ,\ell ,T} \rightarrow \mathcal{B}_{\delta ,\ell ,T}\) by

$$\begin{aligned} \bigl (\mathcal{M}_T u\bigr )_t = \int _0^t P_{t-s} \bigl (F_s \,\partial _x u_s + G_s\, u_s\bigr )\,ds,\quad t \in [0,T]. \end{aligned}$$

It follows from Proposition 2.2 that we have the bound

$$\begin{aligned} \big \Vert \bigl (\mathcal{M}_T u\bigr )_t \big \Vert _{{3\over 2},e_{\ell -\delta t}} \le C\int _0^t (t-s)^{-{3 + 2\gamma \over 4}}\bigl \Vert F_s \,\partial _x u_s + G_s\, u_s\bigr \Vert _{-\gamma ,e_{\ell -\delta t}}\,ds. \end{aligned}$$

Combining Proposition 2.1 with (2.3) and (2.4), we furthermore obtain the bound

$$\begin{aligned} \bigl \Vert F_s \,\partial _x u_s\bigr \Vert _{-\gamma ,e_{\ell -\delta t}}&\le C \bigl (\delta |t-s|\bigr )^{-\kappa } \Vert F_s\Vert _{-\gamma ,p_\kappa } \bigl \Vert \partial _x u_s\bigr \Vert _{{1\over 2},e_{\ell -\delta s}} \\&\le C \bigl (\delta |t-s|\bigr )^{-\kappa } \Vert F_s\Vert _{-\gamma ,p_\kappa } |\!|\!|u |\!|\!|_{\delta ,\ell ,T}, \end{aligned}$$

where the proportionality constant \(C\) is uniformly bounded for \(\delta \in (0,1]\) and bounded \(\ell \) and \(s\). A similar bound holds for \(G_s u_s\) so that, combining these bounds and using Hölder's inequality for the integral over \(s\), we obtain the existence of constants \(\zeta > 0\) and \(p>1\) such that the bound

$$\begin{aligned} |\!|\!|\mathcal{M}_T u |\!|\!|_{\delta ,\ell ,T} \le C \delta ^{-\kappa } T^{\zeta } \bigl (\Vert F\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })} + \Vert G\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })}\bigr ) |\!|\!|u |\!|\!|_{\delta ,\ell ,T}, \end{aligned}$$

holds. Since the norm of this operator is strictly less than \(1\) provided that \(T\) is small enough, the short-time existence and uniqueness of solutions follow from Banach’s fixed point theorem. The existence of solutions up to the final time \(T\) follows by iterating this argument, noting that the interval of short-time existence restarting from \(u(t)\) at time \(t\) can be bounded from below by a constant that is uniform over all \(t \in [0,T]\), as a consequence of the linearity of the equation.
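To make the iteration quantitative (a rough accounting, writing \(K := \Vert F\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })} + \Vert G\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })}\)): the contraction constant is at most \(1/2\) on any interval of length

$$\begin{aligned} T_0 = \bigl (2C\delta ^{-\kappa } K\bigr )^{-1/\zeta }, \end{aligned}$$

and each application of Banach's fixed point theorem multiplies the norm of the solution by at most a fixed constant, so that the \(\lceil t/T_0\rceil \) iterations needed to reach time \(t\) produce a factor of at most \(\exp \bigl (C' t\, K^{1/\zeta }\bigr )\). This is the source of the exponential factor in the bound that follows.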

Actually, we obtain the bound

$$\begin{aligned} \Vert u_t\Vert _{{3\over 2},e_{\ell -\delta t}} \lesssim \exp \left( C t \left( \Vert F\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })} + \Vert G\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })}\right) ^{1/\zeta } \right) \Vert u_0\Vert _{{3\over 2},e_\ell }, \end{aligned}$$

where the constants \(C\) and \(\zeta \) depend on the choice of \(\ell \) and \(\delta \).

The solutions are obviously linear in \(u_0\) since the equation is linear in \(u\). It remains to show that the solutions also depend continuously on \(F\) and \(G\). Let \(\bar{u}\) be the solution to the equation

$$\begin{aligned} \partial _t \bar{u} = \partial _x^2 \bar{u} + \bar{F}\,\partial _x \bar{u} + \bar{G}\, \bar{u}, \end{aligned}$$
(3.2)

and write \(\varrho = u - \bar{u}\). The difference \(\varrho \) then satisfies the equation

$$\begin{aligned} \partial _t \varrho = \partial _x^2 \varrho + F\,\partial _x \varrho + G\, \varrho + (F - \bar{F})\,\partial _x \bar{u} + (G - \bar{G})\,\bar{u}, \end{aligned}$$

with zero initial condition. Similarly to before, we thus have

$$\begin{aligned} \varrho _t = \bigl (\mathcal{M}_T \varrho \bigr )_t + \int _0^t P_{t-s} \bigl ((F_s - \bar{F}_s)\,\partial _x \bar{u}_s + (G_s - \bar{G}_s)\,\bar{u}_s\bigr )\,ds. \end{aligned}$$

It follows from the above bounds that

$$\begin{aligned} |\!|\!|\varrho |\!|\!|_{\delta ,\ell ,T} \lesssim |\!|\!|\mathcal{M}_T \varrho |\!|\!|_{\delta ,\ell ,T} \!+\! C \delta ^{-\kappa } T^\zeta \bigl (\Vert F-\bar{F}\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })} \!+\! \Vert G-\bar{G}\Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })}\bigr ) |\!|\!|\bar{u} |\!|\!|_{\delta ,\ell ,T}. \end{aligned}$$

Over short times, the required continuity statement thus follows at once. Over fixed times, it follows as before by iterating the argument. \(\square \)

Remark 3.2

In principle, one could obtain a similar result for less regular initial conditions, but this does not seem worth the additional effort in this context.

We now have all the ingredients in place to give the proof of our main result.

Proof of Theorem 1.8

We apply Theorem 3.1 with \(\gamma = {1\over 4}\) and \(\kappa = {1\over 10}\). Note that the equation (1.8) for \(v^\varepsilon \) is precisely of the form (3.1) with

$$\begin{aligned} F = F^\varepsilon =2 \partial _x Y^\varepsilon + 2\partial _x Z^\varepsilon ,\quad G = G^\varepsilon =|\partial _x Z^\varepsilon |^2 + 2\,\partial _x Z^\varepsilon \partial _x Y^\varepsilon . \end{aligned}$$

It follows from Corollary 2.13 and Lemma 2.18 that, for every \(p > 0\) there exists \(\delta > 0\), such that one has the bound

$$\begin{aligned} \left| \mathbf{E}\int _0^T \Vert F^\varepsilon \Vert _{-\gamma ,p_\kappa }^p\,dt\right| \lesssim \varepsilon ^{\delta }. \end{aligned}$$

Similarly, it follows from Lemmas 2.12 and 2.18 that one also has the bound

$$\begin{aligned} \left| \mathbf{E}\int _0^T \Vert G^\varepsilon \Vert _{0,p_\kappa }^p\,dt\right| \lesssim \varepsilon ^{\delta }, \end{aligned}$$

for a possibly different constant \(\delta > 0\). These estimates imply that for every \(p>0\), \(\Vert F^\varepsilon \Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })} + \Vert G^\varepsilon \Vert _{L^p(\mathcal{C}^{-\gamma }_{p_\kappa })}\) tends to zero in probability as \(\varepsilon \rightarrow 0\). As a consequence of Theorem 3.1, this immediately shows that \(v^\varepsilon \rightarrow u\) in probability, locally uniformly both in space and in time. We conclude by recalling that, by Corollaries 2.10 and 2.19, the correctors \(Y^\varepsilon \) and \(Z^\varepsilon \) themselves converge locally uniformly to 0 in probability. \(\square \)