1 Introduction

1.1 Main result

We are interested in parabolic equations in divergence form in dimension \(d\ge 3\):

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \displaystyle {\partial _t u_\varepsilon (t,x,\omega )=\frac{1}{2}\nabla \cdot \left( \tilde{a}\left( \frac{x}{\varepsilon },\omega \right) \nabla u_\varepsilon (t,x,\omega )\right) } &{}\text {on } \mathbb {R}_+\times \mathbb {R}^d,\\ \displaystyle {u_\varepsilon (0,x,\omega )=f(x) } &{} \text {on } \mathbb {R}^d, \end{array} \right. \end{aligned}$$
(1.1)

where \(\omega \in \Omega \) denotes a particular random realization sampled from a probability space \((\Omega ,\mathcal {F},\mathbb {P})\), the function f is bounded and smooth, and \(\tilde{a}: \mathbb {R}^d\times \Omega \rightarrow \mathbb {R}^{d\times d}\) is a random field of symmetric matrices satisfying, for some constant \(C>0\) and every \(x\), \(\omega \) and \(\xi \in \mathbb {R}^d\), the uniform ellipticity condition

$$\begin{aligned} C^{-1}|\xi |^2\le \xi ^T \tilde{a}(x,\omega )\xi \le C|\xi |^2. \end{aligned}$$
(1.2)

Standard homogenization theory shows that under the assumptions of stationarity and ergodicity on the random field \(\tilde{a}(x,\omega )\), there exists a deterministic matrix \(\bar{A}\) such that \(u_\varepsilon \) converges to the solution \(u_{\mathrm {hom}}\) of a “homogenized” equation:

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \displaystyle {\partial _t u_{\mathrm {hom}}(t,x)=\frac{1}{2}\nabla \cdot (\bar{A}\nabla u_{\mathrm {hom}}(t,x)) } &{}\text {on}\quad \mathbb {R}_+\times \mathbb {R}^d,\\ \displaystyle {u_{\mathrm {hom}}(0,x)=f(x) } &{} \text {on}\quad \mathbb {R}^d. \end{array} \right. \end{aligned}$$
(1.3)

The goal of this paper is to further analyze the difference between \(u_\varepsilon (t,x,\omega )\) and \(u_{\mathrm {hom}}(t,x)\), in a pointwise sense. We assume that the coefficients \(\tilde{a}\) have a short range of dependence (more precisely, that they can be written as a local function of a homogeneous Poisson point process). For each fixed \((t,x)\), we show that

$$\begin{aligned} u_\varepsilon (t,x,\omega )-u_{\mathrm {hom}}(t,x) = \varepsilon \nabla u_{\mathrm {hom}}(t,x)\cdot \tilde{\phi }(x/\varepsilon ,\omega )+o(\varepsilon ), \end{aligned}$$
(1.4)

where \(\tilde{\phi }\) is the (stationary) corrector, and where \(o(\varepsilon )/\varepsilon \rightarrow 0\) in \(L^1(\Omega )\).
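The expansion (1.4) is the first-order truncation of the classical two-scale ansatz. As a purely formal heuristic (this is not how the result is proved below), one writes

$$\begin{aligned} u_\varepsilon (t,x,\omega )\approx u_{\mathrm {hom}}(t,x)+\varepsilon \, \tilde{\phi }(x/\varepsilon ,\omega )\cdot \nabla u_{\mathrm {hom}}(t,x), \end{aligned}$$

so that \(\nabla u_\varepsilon \approx (\mathrm {Id}+\nabla \tilde{\phi }(x/\varepsilon ,\omega ))\nabla u_{\mathrm {hom}}\). Inserting this ansatz into (1.1), the terms of order \(\varepsilon ^{-1}\) cancel precisely when each component \(\tilde{\phi }_{e_k}\) solves the corrector equation \(\nabla \cdot (\tilde{a}\,(e_k+\nabla \tilde{\phi }_{e_k}))=0\), and averaging the remaining terms yields the homogenized matrix \(\bar{A}\) appearing in (1.3).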

1.2 Context

There is a large body of literature on stochastic homogenization, starting from the work of Kozlov [27] and Papanicolaou–Varadhan [35] on divergence form operators. Their results show that as the correlation length of the random coefficients goes to zero, the operator converges in a certain sense to the one with constant coefficients. The qualitative convergence essentially comes from an ergodic theorem. In order to provide convergence rates, a quantification of ergodicity is required. The first quantitative result was given by Yurinskii [37], where an algebraic rate was obtained. Other suboptimal results were obtained in [29]. Caffarelli and Souganidis considered nonlinear equations, and also derived an error estimate [9].

Optimal results have started appearing only very recently, beginning with the groundbreaking work of Gloria and Otto [18, 19] and Gloria, Neukamm and Otto [15, 16]. Further developments include [13, 17, 20, 28].

We would like in particular to draw the reader’s attention to the results in [16]. There, linear elliptic equations in divergence-form on the d-dimensional torus \(\mathbb {T}\) are considered (so that there is no boundary layer), and a two-scale expansion is proved, in the sense that

$$\begin{aligned} \Vert u_\varepsilon (x,\omega ) -u_{\mathrm {hom}}(x) - \varepsilon \nabla u_{\mathrm {hom}}(x) \cdot \tilde{\phi }(x/\varepsilon ,\omega ) \Vert _{H^1(\mathbb {T})} = O(\varepsilon ), \end{aligned}$$

with obvious notation for \(u_\varepsilon \) and \(u_{\mathrm {hom}}\), and where \(O(\varepsilon )/\varepsilon \) is bounded in \(L^2(\Omega )\) uniformly over \(\varepsilon \). (Strictly speaking, the equations studied there are discrete, and a minor modification in the definition of \(u_\varepsilon \) is necessary in order to suppress the discretization error.) This statement is probably best understood as the summary of two estimates: one on \(u_\varepsilon \), and one on its gradient:

$$\begin{aligned}&\left(\int _{\mathbb {T}} \left|u_\varepsilon (x,\omega ) -u_{\mathrm {hom}}(x)\right|^2 \, dx\right)^{1/2} = O(\varepsilon ),\\&\left(\int _{\mathbb {T}} \left|\nabla u_\varepsilon (x,\omega ) - \nabla u_{\mathrm {hom}}(x) - \nabla \tilde{\phi }(x/\varepsilon ,\omega ) \nabla u_{\mathrm {hom}}(x)\right|^2 \, dx\right)^{1/2} = O(\varepsilon ). \end{aligned}$$

In particular, it does not follow from this result that

$$\begin{aligned} u_\varepsilon (x,\omega ) - u_{\mathrm {hom}}(x) - \varepsilon \nabla u_{\mathrm {hom}}(x) \cdot \tilde{\phi }(x/\varepsilon ,\omega ) = o(\varepsilon ). \end{aligned}$$
(1.5)

In fact, one of us (JCM) started this project with the belief that the expansion (1.5) was wrong in general; that in order for it to be true, an additional symmetry property of the coefficients had to be assumed, a good candidate being the invariance of the law of the coefficients under the transformation \(z \mapsto -z\). Even the weaker fact that

$$\begin{aligned} \mathbb {E}\left\{ u_\varepsilon (x,\omega ) \right\} - u_{\mathrm {hom}}(x) = o(\varepsilon ) \end{aligned}$$
(1.6)

seemed a priori unlikely to be true in general. For the most part, this belief was based on three observations:

  1. (1)

    Numerical evidence, in the discrete setting, indicates that \(\varepsilon ^{-1}(\mathbb {E}\{u_\varepsilon (x,\omega )\} - u_{\mathrm {hom}}(x)) \) does not converge to 0 for “generic” periodic environments, see [11, Section 4.4.2 and Figure 15];

  2. (2)

    A simple toy model was proposed in [11, Remark 4.4] to “explain” that \(\varepsilon ^{-1}(\mathbb {E}\{u_\varepsilon (x,\omega ) \} - u_{\mathrm {hom}}(x))\) should be of order 1 in general: when summing i.i.d. random variables, the rate of convergence in the central limit theorem is generically of order \(\varepsilon \) when \(\varepsilon ^{-2}\) random variables are summed; but it is of order \(\varepsilon ^2\) when the law of the random variables is invariant under the transformation \(z \mapsto -z\);

  3. (3)

    In the regime of small ellipticity contrast, Conlon and Fahim showed that \(\mathbb {E}\{u_\varepsilon (x,\omega )\} - u_{\mathrm {hom}}(x) = O(\varepsilon ^2)\) when the law of the coefficients is invariant under the transformation \(z \mapsto -z\), but only that it is \(O(\varepsilon )\) in general; see [10, Theorem 1.2, Proposition A.1, Remark 8 and Lemma A.2].

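Observation (2) is easy to reproduce numerically. The sketch below (an illustration of the toy model, not taken from [11]) computes \(\mathbb {E}\{h(S_n/\sqrt{n})\}-\mathbb {E}\{h(Z)\}\) exactly by convolving lattice distributions: for a skewed law the error decays like \(n^{-1/2}\) (i.e. like \(\varepsilon \) when \(n = \varepsilon ^{-2}\) variables are summed), while for a symmetric law it decays like \(n^{-1}\). All distributions and the test function are illustrative choices.

```python
import numpy as np

def sum_pmf(pmf, n):
    """Exact PMF of the sum of n i.i.d. copies of an integer-valued
    variable, by repeated convolution (no Monte Carlo noise)."""
    out = np.array([1.0])
    for _ in range(n):
        out = np.convolve(out, pmf)
    return out

def clt_error(values, probs, n, h):
    """E[h(S_n / sqrt(n Var))] - E[h(Z)], computed exactly, for S_n a sum
    of n i.i.d. copies of the lattice variable with the given law."""
    values, probs = np.asarray(values), np.asarray(probs, float)
    mu = probs @ values
    var = probs @ (values - mu) ** 2
    lo = values.min()
    pmf = np.zeros(values.max() - lo + 1)
    pmf[values - lo] = probs
    total = sum_pmf(pmf, n)
    support = (np.arange(len(total)) + n * lo - n * mu) / np.sqrt(n * var)
    # E[h(Z)] via Gauss-Hermite quadrature against the standard Gaussian
    x, w = np.polynomial.hermite.hermgauss(80)
    return total @ h(support) - (w @ h(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

h = lambda x: np.exp(-(x - 0.5) ** 2)      # a generic smooth observable

skew = [clt_error([-1, 2], [2/3, 1/3], n, h) for n in (64, 256)]  # 3rd moment 2
symm = [clt_error([-1, 1], [1/2, 1/2], n, h) for n in (64, 256)]  # symmetric law
print(skew[1] / skew[0], symm[1] / symm[0])  # roughly 1/2 vs roughly 1/4
```

Quadrupling n halves the error for the skewed law but quarters it for the symmetric one, which is exactly the rate gap behind observation (2).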
Despite these strong indications to the contrary, our result (1.4) on the parabolic equation implies the corresponding result for the elliptic equation. That is, the expansion (1.5) is actually true in general (i.e. without it being necessary to assume that the law of the coefficients is invariant under a transformation such as \(z \mapsto -z\)).

Why, then, are there such convincing arguments to the contrary? It seems to us that the core of the matter is that the foregoing observations (1–3) all concern discrete equations (i.e. where the underlying space is \(\mathbb {Z}^d\)), while our proof of (1.4) and (1.5) applies to continuous equations. Interestingly, we do not know how to prove our result (or the weaker statement (1.6)) in the discrete setting without making use of an assumption such as the invariance of the law of the coefficients under the transformation \(z \mapsto -z\).

Finally, we would like to point out that while it is fairly easy to pass from a result on the parabolic equation to one on the elliptic equation, the converse does not seem to be possible. In fact, we are not aware of any previous “two-scale expansion” result for parabolic equations.

1.3 The probabilistic approach

From a probabilistic point of view, homogenizing a differential operator with random coefficients corresponds to proving an invariance principle for a random motion in random environment. Kipnis and Varadhan have developed a general central limit theorem for additive functionals of reversible Markov processes [25]. A large class of random motions in random environment can be analyzed by following their approach, using also the idea of the “medium seen from the moving particle” (see [26] and the references therein). The proof is based on a martingale decomposition and an application of the martingale central limit theorem (CLT).

In order to make this argument quantitative, two ingredients are necessary. One is a quantitative version of the martingale CLT; the other is a quantitative estimate on the speed of convergence to equilibrium of the medium seen from the particle. This route was already pursued in [21, 30, 31]. The quantitative martingale CLT developed in [31] for general martingales was further explored in [21]. It was shown there that by focusing on continuous martingales, one can express the first-order correction in the CLT in simple terms involving the quadratic variation of the martingale. This will provide us with a suitable quantitative martingale CLT. In addition, we will also need to verify that the process of the environment seen from the particle converges to equilibrium sufficiently fast. This question was first investigated in [29], and we will borrow from there the idea that it is sometimes sufficient to understand the convergence to equilibrium of the environment as seen by a standard Brownian motion (independent of the environment). Furthermore, we will rely crucially on moment bounds on the corrector and on the gradients of the Green function recently obtained in [14, 20]. All these tools will enable us to identify a deterministic first-order correction to the expansion in (1.4), which we will finally show to be zero.

1.4 Other relevant work

The probabilistic approach is particularly well-suited for obtaining pointwise information such as (1.4). While such pointwise results are relatively rare, the precise behavior of more global random quantities has received considerable attention. In particular, a central limit theorem for the averaged energy density was derived in [8, 34, 36]. The large-scale correlations and, subsequently, the scaling limit of the corrector were investigated in [32, 33]. A comparable study of the scaling limit of the fluctuations of \(u_\varepsilon \) was performed in [22]. We stress however that this result only characterizes the fluctuations of \(u_\varepsilon \), not the bias \(\mathbb {E}[u_\varepsilon ] - u_\mathrm {hom}\). The desire to understand the typical size of the bias (cf. (1.6)) is what initiated our study.

For other types of equations, e.g. a deterministic operator perturbed by a highly oscillatory random potential, fluctuations around homogenized limits have been analyzed in various contexts [4, 5, 13, 21]; see [6] for a review. From a probabilistic perspective, this setting corresponds to a random motion independent of the random environment.

1.5 Organization and notation

The rest of the paper is organized as follows. We make assumptions on the random field \(\tilde{a}(x,\omega )\) and state the main results in Sect. 2. Then we present a standard approach to diffusions in random environments in Sect. 3. Some key estimates of the correctors and the Green functions are contained in Sect. 4. The proofs of the main results are presented in Sects. 5, 6 and 7.

We write \(a\lesssim b\) when \(a\le Cb\) with a constant C independent of \(\varepsilon ,t,x\). The normal distribution with mean \(\mu \) and variance \(\sigma ^2\) is denoted by \(N(\mu ,\sigma ^2)\), and \(q_t(x)\) is the density of N(0, t). The Fourier transform is defined by \(\hat{f}(\xi )=\int _{\mathbb {R}^d}f(x)e^{-i\xi \cdot x}dx\). We will have two independent probability spaces with the associated expectations denoted by \(\mathbb {E},\mathbb {E}_B\) respectively. The expectation in the product probability space is then denoted by \(\mathbb {E}\mathbb {E}_B\).

2 Assumptions and main results

Let \(\mathcal {M}\) be an arbitrary metric space equipped with its Borel \(\sigma \)-algebra, and let \(\mu \) be a \(\sigma \)-finite measure on \(\mathcal {M}\). We let \(\omega \) be a Poisson point process on \(\mathcal {M} \times \mathbb {R}^d\) with intensity measure \({\mathrm {d}}\mu (m) \, {\mathrm {d}}x\). We think of \(\omega \) as an element of the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), where \(\Omega \) is the collection of countable subsets of \(\mathcal {M} \times \mathbb {R}^d\), and \(\mathcal {F}\) is the smallest \(\sigma \)-algebra that makes the maps

$$\begin{aligned} \left\{ \begin{array}{lll} \Omega &{} \rightarrow &{} \mathbb {N}\cup \left\{ +\infty \right\} \\ \omega &{} \mapsto &{} \mathrm {Card} (\omega \cap A) \end{array} \right. \end{aligned}$$

measurable, for every measurable \(A \subseteq \mathcal {M} \times \mathbb {R}^d\). For a construction of such Poisson point processes, we refer to [24, Section 2.5]. For any measurable \(S\subseteq \mathbb {R}^d\), we denote the \(\sigma \)-algebra generated by the Poisson point process restricted to \(\mathcal {M}\times S\) by \(\mathcal {F}_S\).

The group of translations of \(\mathbb {R}^d\) can be naturally lifted to the space \(\Omega \) by defining, for every \(x \in \mathbb {R}^d\),

$$\begin{aligned} \tau _x \omega = \left\{ (m,x+z) \ : \ (m,z) \in \omega \right\} = \omega + (0,x). \end{aligned}$$

It is a classical result that \(\left\{ \tau _x,x\in \mathbb {R}^d\right\} \) satisfies the following properties:

  1. (1)

    Measure-preserving: \(\mathbb {P}\circ \tau _x=\mathbb {P}\).

  2. (2)

    Ergodicity: if a measurable set \(A \subseteq \Omega \) is such that for every \(x \in \mathbb {R}^d\), \(A = \tau _x(A)\), then \(\mathbb {P}(A)\in \{0,1\}\).

  3. (3)

    Stochastic continuity: for any \(\delta >0\) and f bounded measurable,

    $$\begin{aligned} \lim _{h\rightarrow 0}\mathbb {P}\left\{ |f(\tau _h\omega )-f(\omega )|\ge \delta \right\} =0. \end{aligned}$$

We denote the inner product and norm on \(L^2(\Omega )\) by \(\langle .,.\rangle \) and \(\Vert .\Vert \) respectively, and define the operator \(T_x\) on \(L^2(\Omega )\) as \(T_x f:=f\circ \tau _{-x}\). The family \(\{T_x,x\in \mathbb {R}^d\}\) forms a d-parameter group of unitary operators on \(L^2(\Omega )\). Stochastic continuity implies that the group is strongly continuous, and ergodicity asserts that a function f is constant if and only if \(T_xf=f\) for all \(x\in \mathbb {R}^d\).

Let \(\{D_k,k=1,\ldots ,d\}\) be the generators of the group \(\{T_x,x\in \mathbb {R}^d\}\). They correspond to differentiations in \(L^2(\Omega )\) in the canonical directions denoted by \(\{e_k,k=1,\ldots ,d\}\). The gradient is then denoted by \(D:=(D_1,\ldots ,D_d)\), and we define the Sobolev space \(H^1(\Omega )\) as the completion of smooth functions under the norm \(\Vert f\Vert _{H^1}^2:=\langle f,f\rangle +\sum _{k=1}^d\langle D_kf,D_kf\rangle \).

Any function f on \(\Omega \) can be extended to a stationary random field \(\tilde{f}(x,\omega ):=f(\tau _{-x}\omega )\). The random coefficients \(\tilde{a}(x,\omega )\) appearing in (1.1) are given by \(\tilde{a}(x,\omega )=a(\tau _{-x}\omega )\) for some measurable \(a:\Omega \rightarrow \mathbb {R}^{d\times d}\). We further make the following assumptions on a:

  1. (1)

    Uniform ellipticity and smoothness. For every \(\omega \in \Omega \), \(a(\omega )\) is a symmetric matrix satisfying

    $$\begin{aligned} C^{-1}|\xi |^2\le \xi ^T a(\omega )\xi \le C|\xi |^2 \end{aligned}$$
    (2.1)

    for some constant \(C>0\). Each entry \(\tilde{a}_{ij}(x,\omega )=a_{ij}(\tau _{-x}\omega )\) has \(C^2\) sample paths whose first and second order derivatives are uniformly bounded in \((x,\omega )\).

  2. (2)

    Local dependence. There exists a constant \(C > 0\) such that for all \(x\in \mathbb {R}^d\), \(\tilde{a}(x,\omega )=a(\tau _{-x}\omega )\) is \(\mathcal {F}_{\{y:|y-x|\le C\}}\)-measurable.

The coefficient field \(a(\omega )\) can for instance be constructed by choosing a “shape function” \(g: \mathcal {M} \times \mathbb {R}^d \rightarrow E\) for some measurable vector space E (e.g. the space of symmetric matrices) and a “cut-off function” \(F : E \rightarrow \mathbb {R}^{d\times d}\) (that can be used to ensure uniform ellipticity), and letting

$$\begin{aligned} a(\omega ) = F\left(\sum _{(m,z) \in \omega } g(m,z)\right). \end{aligned}$$

The condition of local dependence on a is guaranteed if g(m,z) is non-zero only for z varying in a compact set. As we will see below, the Poisson structure is only used to establish the covariance estimate (4.2) and then prove Propositions 4.6 and 4.7. Although the law of the Poisson point process is invariant under transformations such as \(z \mapsto -z\), this is of course not the case in general for the coefficient field \(\tilde{a}(x,\omega )\) itself.
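For concreteness, here is a minimal numerical sketch of this construction (all names and parameter choices below are illustrative, not from the paper): marks are uniform on [0, 1] as a stand-in for \(\mathcal {M}\), g is m times a smooth bump supported in the unit ball (which ensures local dependence), and the cut-off \(F(s) = (1 + s/(1+s))\,\mathrm {Id}\) enforces (1.2) with \(C = 2\) by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2

# Homogeneous Poisson point process of unit intensity on a box [-L/2, L/2]^d,
# with i.i.d. uniform marks m in [0, 1] standing in for the mark space M
L = 20.0
n_pts = rng.poisson(L ** d)
points = rng.uniform(-L / 2, L / 2, size=(n_pts, d))
marks = rng.uniform(0.0, 1.0, size=n_pts)

def g(m, z):
    """Shape function g(m, z): m times a smooth bump supported in |z| < 1,
    which guarantees the local-dependence assumption."""
    r2 = np.sum(z ** 2, axis=-1)
    inside = r2 < 1.0
    out = np.zeros_like(r2)
    out[inside] = m[inside] * np.exp(-1.0 / (1.0 - r2[inside]))
    return out

def a_tilde(x):
    """Coefficient field a~(x, omega) = F(sum g(m, z - x)), with cut-off
    F(s) = (1 + s / (1 + s)) Id, whose eigenvalues always lie in [1, 2)."""
    s = g(marks, points - x).sum()
    return (1.0 + s / (1.0 + s)) * np.eye(d)
```

Since the bump has unit range, \(\tilde{a}(x,\omega )\) only depends on the points of \(\omega \) within distance 1 of x, as required by the local dependence assumption.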

The following is our main theorem.

Theorem 2.1

Assume \(f\in \mathcal {C}_c^\infty (\mathbb {R}^d)\). For every \((t,x)\), there exists \(C_\varepsilon \rightarrow 0\) in \(L^1(\Omega )\) such that

$$\begin{aligned} u_\varepsilon (t,x,\omega )-u_{\mathrm {hom}}(t,x) = \varepsilon \nabla u_{\mathrm {hom}}(t,x)\cdot \phi (\tau _{-x/\varepsilon }\omega )+\varepsilon C_\varepsilon . \end{aligned}$$
(2.2)

Here \(\phi =(\phi _{e_1},\ldots ,\phi _{e_d})\), where \(\phi _{e_k}\) is the (zero-mean) stationary corrector in the canonical direction \(e_k\).

Remark 2.2

The existence of \(\phi \) is given by Theorem 4.1. An examination of the proof reveals that the smoothness condition on f can be relaxed. It suffices to assume that sufficiently many weak derivatives of f belong to \(L^2(\mathbb {R}^d)\) (i.e., the Fourier transform \(\hat{f}\) is such that \(\hat{f}(\xi )(1+|\xi |)^n\) is integrable for some large n).

Remark 2.3

It would be interesting to quantify the convergence of \(\mathbb {E}[C_\varepsilon ]\) to 0. We discuss a possible approach to show that \(\mathbb {E}[C_\varepsilon ] \lesssim \sqrt{\varepsilon }\) (up to logarithmic corrections) in Remark 6.8 below.

Remark 2.4

To the best of our knowledge, Theorem 2.1 was not known even in the periodic case. We explain how to adapt our methods to this setting in Sect. 8.

Theorem 2.1 gives, for every \((t,x)\), the existence of some \(C_\varepsilon = C_\varepsilon (t,x)\) such that (2.2) holds. Our proof actually shows more. In particular, for every \(T >0\), \(\sup _{x \in \mathbb {R}^d, t \leqslant T} \mathbb {E}\{|C_\varepsilon (t,x)|\}\) tends to 0 as \(\varepsilon \) tends to 0, and we also obtain some control on the growth of this quantity as T grows. Therefore, we can derive a similar result for elliptic equations, which we now describe more precisely.

Let \(U_\varepsilon (x,\omega )\) and \(U_{\mathrm {hom}}(x)\) respectively solve the following equations on \(\mathbb {R}^d\):

$$\begin{aligned} U_\varepsilon (x,\omega )-\frac{1}{2}\nabla \cdot \left( \tilde{a}\left( \frac{x}{\varepsilon },\omega \right) \nabla U_\varepsilon (x,\omega )\right)= & {} f(x),\end{aligned}$$
(2.3)
$$\begin{aligned} U_{\mathrm {hom}}(x)-\frac{1}{2}\nabla \cdot (\bar{A} \nabla U_{\mathrm {hom}}(x))= & {} f(x). \end{aligned}$$
(2.4)

Theorem 2.5

Under the same assumptions as in Theorem 2.1 and for every x, there exists \(\tilde{C}_\varepsilon \rightarrow 0\) in \(L^1(\Omega )\) such that

$$\begin{aligned} U_\varepsilon (x,\omega )-U_{\mathrm {hom}}(x) =\varepsilon \nabla U_{\mathrm {hom}}(x)\cdot \phi (\tau _{-x/\varepsilon }\omega )+\varepsilon \tilde{C}_\varepsilon . \end{aligned}$$

3 Diffusions in random environments

In this section, we present a standard approach to diffusions in random environments, including the process of the medium seen from the particle, corrector equations and the martingale decomposition. A complete introduction can be found in [26, Chapter 9], so we do not present the details.

For every fixed \(\omega \in \Omega , x\in \mathbb {R}^d\) and \(\varepsilon >0\), we define the diffusion process \(X_t^\omega \) on \(\mathbb {R}^d\), starting from \(x/\varepsilon \), by the Itô stochastic differential equation

$$\begin{aligned} dX_t^\omega =\tilde{b}(X_t^\omega ,\omega )dt+\tilde{\sigma }(X_t^\omega ,\omega )dB_t. \end{aligned}$$
(3.1)

Here, the drift \(\tilde{b}=(\tilde{b}_1,\ldots ,\tilde{b}_d)\) is defined by \(\tilde{b}_i=\frac{1}{2}\sum _{j=1}^d \partial _{x_j}\tilde{a}_{ji}\), the diffusion matrix is \(\tilde{\sigma }=\sqrt{\tilde{a}}\), and the driving force \(B_t=(B_t^1,\ldots ,B_t^d)\) is a standard d-dimensional Brownian motion built on a different probability space \((\Sigma ,\mathcal {A},\mathbb {P}_B)\) with the associated expectation \(\mathbb {E}_B\). (Although we keep it implicit in the notation, note that the starting point of the diffusion depends on \(\varepsilon \).)

The medium or environment seen from the particle is the process taking values in \(\Omega \) defined by

$$\begin{aligned} \omega _s:=\tau _{-X_s^\omega }\omega . \end{aligned}$$
(3.2)

The following lemma is taken from [26, Proposition 9.8].

Lemma 3.1

\((\omega _s)_{s \,\geqslant \, 0}\) is a Markov process that is reversible and ergodic with respect to the measure \(\mathbb {P}\). Its generator is given by

$$\begin{aligned}L:=\frac{1}{2}\sum _{i,j=1}^d D_i(a_{ij}D_j).\end{aligned}$$

The diffusively rescaled process \(\varepsilon X_{t/\varepsilon ^2}^\omega \) starts from x, with an infinitesimal generator given by

$$\begin{aligned} L_\varepsilon ^\omega :=\frac{1}{2}\sum _{i,j=1}^d\tilde{a}_{ij}\left( \frac{x}{\varepsilon },\omega \right) \partial _{x_i}\partial _{x_j}+\frac{1}{\varepsilon }\tilde{b}\left( \frac{x}{\varepsilon },\omega \right) \cdot \nabla =\frac{1}{2}\nabla \cdot \left( \tilde{a}\left( \frac{x}{\varepsilon },\omega \right) \nabla \right) . \end{aligned}$$
(3.3)

Hence, we can express the solution to (1.1) as an average with respect to the diffusion process \(\varepsilon X_{t/\varepsilon ^2}^\omega \), i.e., for every fixed \(\omega \in \Omega , t>0, x\in \mathbb {R}^d, \varepsilon >0\), we have

$$\begin{aligned} u_\varepsilon (t,x,\omega )=\mathbb {E}_B\left\{ f\left( \varepsilon X_{t/\varepsilon ^2}^\omega \right) \right\} . \end{aligned}$$
(3.4)

With the above probabilistic representation, the problem reduces to an analysis of the asymptotic behavior of \(\varepsilon X_{t/\varepsilon ^2}^\omega \). In view of (3.1), the process can be written as

$$\begin{aligned} \varepsilon X_{t/\varepsilon ^2}^\omega= & {} x+\varepsilon \int _0^{t/\varepsilon ^2}\tilde{b}\left( X_s^\omega ,\omega \right) \, ds+\varepsilon \int _0^{t/\varepsilon ^2}\tilde{\sigma }\left( X_s^\omega ,\omega \right) \, dB_s\\= & {} x+ \varepsilon \int _0^{t/\varepsilon ^2}b(\omega _s) \, ds+\varepsilon \int _0^{t/\varepsilon ^2}\sigma (\omega _s) \, dB_s. \end{aligned}$$

It is clear that \(b=(b_1,\ldots ,b_d)\) with \(b_i=\frac{1}{2}\sum _{j=1}^d D_ja_{ji}\) and \(\sigma =\sqrt{a}\).

The idea is to decompose the drift term \(\varepsilon \int _0^{t/\varepsilon ^2}b(\omega _s)ds\) as a martingale plus some small remainder. Since it is an additive functional of a stationary and ergodic Markov process, we can use the Kipnis–Varadhan method. For any \({\uplambda }>0\), the \({\uplambda }\)-corrector in the direction of \(\xi \in \mathbb {R}^d\), denoted by \(\phi _{{\uplambda },\xi }\), is defined as the solution in \(L^2(\Omega )\) to the following equation:

$$\begin{aligned} ({\uplambda }-L)\phi _{{\uplambda },\xi }=\xi \cdot b. \end{aligned}$$
(3.5)

By Itô’s formula,

$$\begin{aligned} \begin{aligned} \tilde{\phi }_{{\uplambda },\xi }\left( X_{t/\varepsilon ^2}^\omega ,\omega \right) - \tilde{\phi }_{{\uplambda },\xi }\left( X_{0}^\omega ,\omega \right)&= \int _0^{t/\varepsilon ^2} L_1^\omega \tilde{\phi }_{{\uplambda },\xi }\left( X_s^\omega ,\omega \right) \, ds \\&\quad + \sum _{i,j = 1}^d \int _0^{t/\varepsilon ^2} \partial _{x_i} \tilde{\phi }_{{\uplambda },\xi }\left( X_s^\omega ,\omega \right) \tilde{\sigma }_{ij}\left( X_s^\omega ,\omega \right) \, dB_s^j. \end{aligned} \end{aligned}$$
(3.6)

Hence, the projection on \(\xi \) of the drift term can be decomposed as

$$\begin{aligned} \begin{aligned} \int _0^{t/\varepsilon ^2}(\xi \cdot b)(\omega _s) \, ds=&\int _0^{t/\varepsilon ^2}{\uplambda }\phi _{{\uplambda },\xi }(\omega _s) \, ds-\phi _{{\uplambda },\xi }(\omega _{t/\varepsilon ^2})+\phi _{{\uplambda },\xi }(\omega _0)\\&+\sum _{i,j=1}^d \int _0^{t/\varepsilon ^2}D_i\phi _{{\uplambda },\xi }(\omega _s)\sigma _{ij}(\omega _s) \, dB_s^j, \end{aligned} \end{aligned}$$

so the projection on \(\xi \) of the rescaled process admits the following representation:

$$\begin{aligned} \xi \cdot \left( \varepsilon X_{t/\varepsilon ^2}^\omega \right) =\xi \cdot x+R_t^\varepsilon ({\uplambda })+M_t^\varepsilon ({\uplambda }), \end{aligned}$$
(3.7)

where the remainder \(R_t^\varepsilon ({\uplambda })\) and the martingale \(M_t^\varepsilon ({\uplambda })\) are given by

$$\begin{aligned} R_t^\varepsilon ({\uplambda }):= & {} \varepsilon \int _0^{t/\varepsilon ^2}{\uplambda }\phi _{{\uplambda },\xi }(\omega _s) \, ds-\varepsilon \phi _{{\uplambda },\xi }(\omega _{t/\varepsilon ^2})+\varepsilon \phi _{{\uplambda },\xi }(\omega _0),\end{aligned}$$
(3.8)
$$\begin{aligned} M_t^\varepsilon ({\uplambda }):= & {} \sum _{j=1}^d \varepsilon \int _0^{t/\varepsilon ^2}\sum _{i=1}^d\left( D_i\phi _{{\uplambda },\xi }(\omega _s)+\xi _i\right) \sigma _{ij}(\omega _s) \, dB_s^j. \end{aligned}$$
(3.9)

We point out that Eq. (3.5) on the probability space \(\Omega \) corresponds to the following PDE on the physical space \(\mathbb {R}^d\):

$$\begin{aligned} \left( {\uplambda }-L_1^\omega \right) \tilde{\phi }_{{\uplambda },\xi }=\xi \cdot \tilde{b}, \end{aligned}$$
(3.10)

where we recall that \(\tilde{\phi }_{{\uplambda },\xi }(x,\omega )=\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\). Letting \(G^\omega _{\uplambda }(x,y)\) be the Green function associated with \({\uplambda }-L_1^\omega \), we have the integral representation

$$\begin{aligned} \phi _{{\uplambda },\xi }(\tau _{-x}\omega )=\int _{\mathbb {R}^d}G^\omega _{\uplambda }(x,y)\xi \cdot b(\tau _{-y}\omega ) \, dy. \end{aligned}$$
(3.11)

We briefly discuss the proof of homogenization, see [26, Chapter 9] for details. For the remainder, it can be shown that \({\uplambda }\langle \phi _{{\uplambda },\xi },\phi _{{\uplambda },\xi }\rangle \rightarrow 0\) as \({\uplambda }\rightarrow 0\), so by applying Lemma 3.1 and choosing \({\uplambda }=\varepsilon ^2\), we obtain \(\mathbb {E}\mathbb {E}_B\{|R_t^\varepsilon ({\uplambda })|^2\}\rightarrow 0\) as \(\varepsilon \rightarrow 0\). For the martingale, we can first show that \(D\phi _{{\uplambda },\xi }\) converges in \(L^2(\Omega )\), with the limit formally written as \(D\phi _\xi \). Then by a martingale central limit theorem, \(M_t^\varepsilon ({\uplambda })\) converges in distribution to a Gaussian with mean zero and variance \(t\sigma _\xi ^2\), where \(\sigma _\xi ^2:=\xi ^T\bar{A}\xi \) and the homogenized matrix \(\bar{A}\) is given by

$$\begin{aligned} \bar{A}_{ij}=\mathbb {E}\left\{ \left( e_i+D \phi _{e_i}\right) ^Ta\left( e_j+D\phi _{e_j}\right) \right\} . \end{aligned}$$
(3.12)
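As a sanity check of (3.12) in the simplest setting: in dimension 1 with a periodic coefficient (an illustrative toy case, not the random setting of this paper), \(\bar{A}\) reduces to the harmonic mean of a, and the variance of the rescaled process driven by (3.1) should grow like \(\bar{A}t\). A minimal Euler–Maruyama sketch, with all parameters chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 1d periodic coefficient: for the generator (1/2)(a u')',
# formula (3.12) reduces in 1d to the harmonic mean of a; here
# <1/a> = <1 + 0.6 sin(4x)> = 1, so A_bar = 1, whereas the arithmetic
# mean <a> = 1.25 would be the wrong prediction.
a = lambda x: 1.0 / (1.0 + 0.6 * np.sin(4.0 * x))
da = lambda x: -2.4 * np.cos(4.0 * x) / (1.0 + 0.6 * np.sin(4.0 * x)) ** 2

# Euler-Maruyama for dX = (1/2) a'(X) dt + sqrt(a(X)) dB, all paths at once
n_paths, T, dt = 3000, 80.0, 0.0025
X = np.zeros(n_paths)
for _ in range(int(T / dt)):
    X += 0.5 * da(X) * dt + np.sqrt(a(X) * dt) * rng.standard_normal(n_paths)

print(X.var() / T)   # empirically close to A_bar = 1
```

The empirical variance per unit time lands near the harmonic mean 1 rather than the arithmetic mean 1.25; the gap between the two is exactly what the corrector term in (3.12) accounts for.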

We can express the solution (3.4) in the Fourier domain using (3.7) as

$$\begin{aligned} u_\varepsilon (t,x,\omega )=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ e^{i R_t^\varepsilon ({\uplambda })} e^{iM_t^\varepsilon ({\uplambda })}\right\} \, d\xi . \end{aligned}$$
(3.13)

By the convergence of \(R_t^\varepsilon ({\uplambda })\rightarrow 0\) and \(M_t^\varepsilon ({\uplambda })\rightarrow N(0,t\sigma _\xi ^2)\), it can be shown that

$$\begin{aligned} u_\varepsilon (t,x,\omega )\rightarrow u_{\mathrm {hom}}(t,x)=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}e^{-\frac{1}{2}\xi ^T\bar{A}\xi t} \, d\xi \end{aligned}$$
(3.14)

in probability.
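The representation (3.14) is easy to check numerically in dimension 1 (with illustrative values, not data from the paper): for \(f(x)=e^{-x^2}\), whose Fourier transform in the convention of Sect. 1.5 is \(\hat{f}(\xi )=\sqrt{\pi }\,e^{-\xi ^2/4}\), the homogenized heat semigroup has a closed form, and the Fourier integral reproduces it.

```python
import numpy as np

A_bar, t, x = 1.7, 0.8, 0.4          # illustrative values

# Fourier-side formula (3.14) in d = 1 for f(x) = exp(-x^2); the integrand
# is even in xi, so the complex exponential reduces to a cosine
xi = np.linspace(-30.0, 30.0, 60001)
f_hat = np.sqrt(np.pi) * np.exp(-xi ** 2 / 4)
integrand = f_hat * np.cos(xi * x) * np.exp(-0.5 * A_bar * xi ** 2 * t)
u_fourier = np.sum(integrand) * (xi[1] - xi[0]) / (2 * np.pi)

# Exact heat semigroup: exp(-x^2) convolved with the N(0, A_bar * t) kernel
u_exact = np.exp(-x ** 2 / (1 + 2 * A_bar * t)) / np.sqrt(1 + 2 * A_bar * t)
print(u_fourier, u_exact)
```

The Riemann sum agrees with the closed form to near machine precision, since the integrand is smooth and decays like a Gaussian.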

4 Properties of correctors and functionals of the environment seen from the particle

In this section, we first present some key estimates on the corrector \(\phi _{{\uplambda },\xi }\) and the Green function \(G_{\uplambda }^\omega (x,y)\). Then we analyze the decorrelation rate of certain functionals of the corrector by an application of the spectral gap inequality. In the end, we estimate the variance decay of functionals of the environmental process by a comparison of resolvents. Throughout the section, \(\xi \) is a fixed vector in \(\mathbb {R}^d\).

The following two theorems are borrowed from [20, Proposition 1] and [14, Corollary 1.5].

Theorem 4.1

[20] Recall that \(d\ge 3\). There exists \(\phi _{\xi }\in H^1(\Omega )\) such that \(\phi _{{\uplambda },\xi }\rightarrow \phi _\xi \) in \(H^1(\Omega )\) as \({\uplambda }\) tends to 0. Furthermore, the p-th moments of \(\phi _{{\uplambda },\xi },D\phi _{{\uplambda },\xi },\phi _\xi ,D\phi _\xi \) are uniformly bounded in \({\uplambda }\) for any \(p<\infty \).

Remark 4.2

From (3.5), it is clear that \(\mathbb {E}\{\phi _{{\uplambda },\xi }\}=0\), so \(\mathbb {E}\{\phi _\xi \}=0\).

Remark 4.3

For the gradient of the corrector \(D\phi _{{\uplambda },\xi }\), [20, Proposition 1] proves

$$\begin{aligned} \mathbb {E}\left\{ \left(\int _{|x|\le 1}|\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega )|^2dx\right)^p\right\} \le C_p \end{aligned}$$

for any \(p>0\), i.e., a high moment bound of some spatial average. This can be improved with additional regularity assumptions on \(\tilde{a}\). Recall that for almost every \(\omega \), \(\tilde{\phi }_{{\uplambda },\xi }(x,\omega )\) is the weak solution to

$$\begin{aligned} {\uplambda }\tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\frac{1}{2}\nabla \cdot \tilde{a}(x,\omega )(\xi + \nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega ))=0, \end{aligned}$$

and since the sample path of \(\tilde{a}_{ij}(x,\omega )\) is \(C^2\) and hence Hölder continuous (uniformly over \(\omega \)), the following estimate is given by standard Hölder regularity theory [23, Theorems 3.13 and 3.1]

$$\begin{aligned} |\nabla \tilde{\phi }_{{\uplambda },\xi }(0,\omega )|^2\le C\left( 1+\int _{|x|\le 1} |\tilde{\phi }_{{\uplambda },\xi }(x,\omega )|^2dx+\int _{|x|\le 1} |\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega )|^2dx\right) , \end{aligned}$$

with the constant C independent of \(\omega \) and \({\uplambda }\leqslant 1\). Taking expectations of powers of this inequality, and using the moment bounds above, we derive a bound on the \(L^p\) norm of \(D\phi _{{\uplambda },\xi }\) that is uniform in \({\uplambda }\leqslant 1\).

Theorem 4.4

[1, 14] Recall that \(d\ge 3\). For every \(p>0\), there exists \(C_p < \infty \) such that for every \({\uplambda }\geqslant 0\) and \(x,y\in \mathbb {R}^d\),

$$\begin{aligned} \mathbb {E}\left\{ |\nabla _x G^\omega _{\uplambda }(x,y)|^p\right\} ^{\frac{1}{p}}\le & {} \frac{C_p}{|x-y|^{d-1}},\\ \mathbb {E}\left\{ |\nabla _x\nabla _y G^\omega _{\uplambda }(x,y)|^p\right\} ^{\frac{1}{p}}\le & {} \frac{C_p}{|x-y|^{d}}, \end{aligned}$$

where the constant \(C_p>0\) does not depend on \({\uplambda }\), and \(\nabla _x\nabla _y\) denotes the mixed second order derivatives.

The Poisson structure that we assume enables us to decompose the randomness into i.i.d. random variables, i.e., we have \(\omega =\{\eta _k,k\in \mathbb {Z}^d\}\) with \(\eta _k\) the Poisson point process restricted to \(\mathcal {M}\times (k+[0,1)^d)\). In this way, we can use a spectral gap inequality given by [15, Lemma 1] to estimate the decorrelation rates of functions on \(\Omega \). For any \(f\in L^2(\Omega )\) with \(\mathbb {E}\{f\}=0\), the inequality states that

$$\begin{aligned} \mathbb {E}\left\{ f^2\right\} \le \sum _{k\in \mathbb {Z}^d}\mathbb {E}\left\{ |\partial _k f|^2\right\} , \end{aligned}$$
(4.1)

with \(\partial _k f:=f-\mathbb {E}\{f|\{\eta _i, i\ne k\}\}\) describing the dependence of f on \(\eta _k\).

By following the same argument, a covariance estimate can be derived, i.e., for any \(f,g\in L^2(\Omega )\) with \(\mathbb {E}\{f\}=\mathbb {E}\{g\}=0\), we have

$$\begin{aligned} |\mathbb {E}\left\{ fg\right\} |\le \sum _{k\in \mathbb {Z}^d}\sqrt{\mathbb {E}\left\{ |\partial _k f|^2\right\} }\sqrt{\mathbb {E}\left\{ |\partial _k g|^2\right\} }. \end{aligned}$$
(4.2)

We further claim that

$$\begin{aligned} \mathbb {E}\{|\partial _k f|^2\}= \frac{1}{2} \, \mathbb {E}\{|f-f_k|^2\}. \end{aligned}$$
(4.3)

Here \(f_k(\omega ):=f(\omega _k)\) with \(\omega _k:=\{\eta _i,i\ne k\}\cup \{\tilde{\eta }_k\}\) and \(\tilde{\eta }_k\) an independent copy of \(\eta _k\), i.e., \(\omega _k\) is a perturbation of \(\omega \) at k. First, since conditional expectation is an \(L^2\) projection, we have \(\mathbb {E}\{|\partial _k f|^2\}=\mathbb {E}\{f^2\}-\mathbb {E}\{|\mathbb {E}\{f|\{\eta _i, i\ne k\}\}|^2\}\). Secondly, \(\mathbb {E}\{|f-f_k|^2\}=2\mathbb {E}\{f^2\}-2\mathbb {E}\{ff_k\}\) and \(\mathbb {E}\{ff_k\}=\mathbb {E}\{|\mathbb {E}\{f|\{\eta _i, i\ne k\}\}|^2\}\) by conditioning on \(\{\eta _i,i\ne k\}\). So (4.3) is proved.
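The identity (4.3) can be verified exactly on a toy example with two independent coordinates in place of \(\{\eta _k\}\). In the following sketch, the functional \(f(\eta _1,\eta _2)=\eta _1\eta _2\) with Bernoulli(1/2) coordinates is an arbitrary illustrative choice (not tied to the Poisson structure of the paper); both sides of (4.3) evaluate to 1/8 by exhaustive enumeration:

```python
from itertools import product
from fractions import Fraction

def f(e1, e2):
    # toy functional of two independent "coordinates" of the environment
    return Fraction(e1 * e2)

# E{|d_1 f|^2} with d_1 f = f - E[f | e2]: condition on e2, average over e1
lhs = Fraction(0)
for e1, e2 in product([0, 1], repeat=2):
    cond = sum(f(a, e2) for a in [0, 1]) / 2   # E[f | e2]
    lhs += (f(e1, e2) - cond) ** 2
lhs /= 4                                       # uniform weight on the 4 states

# (1/2) E{|f - f_1|^2} with the first coordinate resampled independently
rhs = Fraction(0)
for e1, e2, e1_new in product([0, 1], repeat=3):
    rhs += (f(e1, e2) - f(e1_new, e2)) ** 2
rhs = rhs / 8 / 2                              # 8 equally likely triples, then 1/2

assert lhs == rhs == Fraction(1, 8)
```

Both sides are computed exactly with rational arithmetic, so the identity holds with no numerical tolerance.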

Combining (4.2) and (4.3), we obtain

$$\begin{aligned} |\mathbb {E}\{fg\}|\le \sum _{k\in \mathbb {Z}^d}\sqrt{\mathbb {E}\{|f-f_k|^2\}}\sqrt{\mathbb {E}\{|g-g_k|^2\}}. \end{aligned}$$
(4.4)

This will be our main tool to estimate the decorrelation rate of functionals on \(\Omega \).

Remark 4.5

The covariance estimate also holds for the random checkerboard structure, e.g., let \(\tilde{a}(x,\omega )=\eta _k\) if \(x-k\in [0,1)^d\), with \(\{\eta _k, k\in \mathbb {Z}^d\}\) i.i.d. matrix-valued random variables. However, in that case \(\tilde{a}(x,\omega )\) is only stationary with respect to shifts in \(\mathbb {Z}^d\), and such situations are not covered by Theorems 4.1 and 4.4.

The following is an estimate of the decorrelation rate of \(\phi _\xi \).

Proposition 4.6

\(|\mathbb {E}\{\phi _\xi (\tau _0\omega )\phi _\xi (\tau _{-x}\omega )\}|\lesssim |\xi |^2(1\wedge \frac{1}{|x|^{d-2}})\).

Proof

By Theorem 4.1, \(\phi _{{\uplambda },\xi }\rightarrow \phi _\xi \) in \(L^2(\Omega )\), so we only need to show that the estimate holds for \(\phi _{{\uplambda },\xi }\) with an implicit constant independent of \({\uplambda }\). Clearly, it suffices to consider |x| sufficiently large.

By (4.4) we have

$$\begin{aligned} \begin{aligned}&|\mathbb {E}\{\phi _{{\uplambda },\xi }(\tau _0\omega )\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\}|\\&\,\le \sum _{k\in \mathbb {Z}^d}\sqrt{\mathbb {E}\{|\phi _{{\uplambda },\xi }(\tau _0\omega )-\phi _{{\uplambda },\xi }(\tau _0\omega _k)|^2\}}\sqrt{\mathbb {E}\{|\phi _{{\uplambda },\xi }(\tau _{-x}\omega )-\phi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\}}, \end{aligned} \end{aligned}$$
(4.5)

where \(\omega _k\) is obtained by replacing \(\eta _k\) in \(\omega \) by an independent copy \(\tilde{\eta }_k\).

Now we only need to control \(\mathbb {E}\{|\phi _{{\uplambda },\xi }(\tau _{-x}\omega )-\phi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\}\) for \(x\in \mathbb {R}^d, k\in \mathbb {Z}^d\). Since this quantity is bounded, it suffices to consider the case when \(|x-k|\) is large. Recall that we write \(\tilde{\phi }_{{\uplambda },\xi }(x,\omega )=\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\), and that

$$\begin{aligned} {\uplambda }\tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\frac{1}{2}\nabla \cdot (\tilde{a}(x,\omega )\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega ))&= \xi \cdot \tilde{b}(x,\omega ),\end{aligned}$$
(4.6)
$$\begin{aligned} {\uplambda }\tilde{\phi }_{{\uplambda },\xi }(x,\omega _k)-\frac{1}{2}\nabla \cdot (\tilde{a}(x,\omega _k)\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega _k))&= \xi \cdot \tilde{b}(x,\omega _k). \end{aligned}$$
(4.7)

As a consequence,

$$\begin{aligned} \begin{aligned}&\tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\tilde{\phi }_{{\uplambda },\xi }(x,\omega _k)\\&\,=\int _{\mathbb {R}^d}G_{\uplambda }^\omega (x,y)\left( \xi \cdot (\tilde{b}(y,\omega )-\tilde{b}(y,\omega _k))+\frac{1}{2}\nabla \cdot (\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k))\nabla \tilde{\phi }_{{\uplambda },\xi }(y,\omega _k)\right) \, dy\\&\,=-\int _{\mathbb {R}^d}\nabla _y G_{\uplambda }^\omega (x,y)\left( \frac{1}{2} (\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k))\xi +\frac{1}{2}(\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k))\nabla \tilde{\phi }_{{\uplambda },\xi }(y,\omega _k)\right) \, dy, \end{aligned} \end{aligned}$$
(4.8)

since \(\xi \cdot \tilde{b} = \frac{1}{2} \nabla \cdot (\tilde{a} \xi )\). By the assumptions on a, \(\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k)=0\) when \(|y-k|\ge C\) for some constant C, so

$$\begin{aligned} |\tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\tilde{\phi }_{{\uplambda },\xi }(x,\omega _k)|\lesssim \int _{|y-k|\le C}|\nabla _y G_{\uplambda }^\omega (x,y)|(|\xi |+|\nabla \tilde{\phi }_{{\uplambda },\xi }(y,\omega _k)|) \, dy, \end{aligned}$$
(4.9)

which implies

$$\begin{aligned} \begin{aligned}&\mathbb {E}\left\{ |\tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\tilde{\phi }_{{\uplambda },\xi }(x,\omega _k)|^2\right\} \\&\quad \lesssim |\xi |^2\int _{|y-k|\le C}\mathbb {E}\left\{ |\nabla _y G_{\uplambda }^\omega (x,y)|^2\right\} \, dy\\&\qquad + \int _{|y-k|\le C}\sqrt{\mathbb {E}\left\{ |\nabla _y G_{\uplambda }^\omega (x,y)|^4\right\} }\sqrt{\mathbb {E}\left\{ |\nabla \tilde{\phi }_{{\uplambda },\xi }(y,\omega _k)|^4\right\} } \, dy. \end{aligned} \end{aligned}$$
(4.10)

By Theorem 4.1 and the fact that \(\tilde{\phi }_{{\uplambda },\xi }\) is linear in \(\xi \), we first observe that

$$\begin{aligned} \sqrt{\mathbb {E}\left\{ |\nabla \tilde{\phi }_{{\uplambda },\xi }(y,\omega _k)|^4\right\} }\lesssim |\xi |^2, \end{aligned}$$
(4.11)

then we apply Theorem 4.4 to the r.h.s. of (4.10) to derive

$$\begin{aligned} \sqrt{\mathbb {E}\left\{ |\phi _{{\uplambda },\xi }(\tau _{-x}\omega )-\phi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\right\} }\lesssim |\xi |\left( 1\wedge \frac{1}{|x-k|^{d-1}}\right) . \end{aligned}$$
(4.12)

Now we have

$$\begin{aligned} |\mathbb {E}\{\phi _{{\uplambda },\xi }(\tau _0\omega )\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\}|\lesssim |\xi |^2\sum _{k\in \mathbb {Z}^d} \left( 1\wedge \frac{1}{|k|^{d-1}}\right) \left( 1\wedge \frac{1}{|x-k|^{d-1}}\right) \lesssim \frac{|\xi |^2}{|x|^{d-2}}, \end{aligned}$$
(4.13)

where the last inequality comes from Lemma 9.1. The proof is complete. \(\square \)
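The last step of (4.13), the discrete convolution bound supplied by Lemma 9.1, can be sanity-checked numerically. The sketch below (in \(d=3\), with the lattice sum truncated to a finite box, which is an approximation introduced only for the experiment) confirms that \(S(x)=\sum _k (1\wedge |k|^{1-d})(1\wedge |x-k|^{1-d})\) decays like \(|x|^{2-d}=|x|^{-1}\):

```python
import numpy as np

def conv_sum(n, box=64, d=3):
    # S(x) = sum_k (1 ∧ |k|^{1-d})(1 ∧ |x-k|^{1-d}) at x = (n,0,0),
    # truncated to the lattice box max_i |k_i| <= box
    r = np.arange(-box, box + 1, dtype=float)
    kx, ky, kz = np.meshgrid(r, r, r, indexing="ij", sparse=True)
    nk = np.sqrt(kx**2 + ky**2 + kz**2)
    nxk = np.sqrt((kx - n) ** 2 + ky**2 + kz**2)
    a = np.minimum(1.0, np.where(nk > 0, nk, 1.0) ** (1 - d))
    b = np.minimum(1.0, np.where(nxk > 0, nxk, 1.0) ** (1 - d))
    return float(np.sum(a * b))

# S(x)·|x| should stay bounded, matching the |x|^{2-d} decay in (4.13)
ratios = [conv_sum(n) * n for n in (4, 8, 16)]
assert max(ratios) < 60
assert max(ratios) / min(ratios) < 2.0
```

The ratios stay within a bounded band as \(|x|\) doubles, consistent with the claimed \(|x|^{-1}\) decay (the exponents \(d-1=2\) satisfy \(2<d\) and \(2+2>d\), so no logarithmic correction appears here).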

Define

$$\begin{aligned} \begin{aligned} \psi _\xi :=&(\xi +D\phi _\xi )^Ta(\xi +D\phi _\xi )-\xi ^T\bar{A}\xi \\ =&\sum _{i,j=1}^d \xi _i\xi _j\left( (e_i+D\phi _{e_i})^Ta(e_j+D \phi _{e_j})-\bar{A}_{ij}\right) , \end{aligned} \end{aligned}$$
(4.14)

By the definition of the homogenized matrix \(\bar{A}\) in (3.12), \(\psi _\xi \) has mean zero, and we can write it as \(\psi _\xi =\sum _{i,j=1}^d\xi _i\xi _j\psi _{ij}\) with

$$\begin{aligned} \psi _{ij}:=(e_i+D\phi _{e_i})^Ta(e_j+D \phi _{e_j})-\bar{A}_{ij}. \end{aligned}$$
(4.15)

The following is an estimate of the decorrelation rate of \(\psi _\xi \).

Proposition 4.7

\(|\mathbb {E}\{\psi _\xi (\tau _0\omega )\psi _\xi (\tau _{-x}\omega )\}|\lesssim |\xi |^4( 1\wedge \frac{\log (2+|x|)}{|x|^d})\).

Proof

First we define \(\psi _{{\uplambda },\xi }:=(\xi +D\phi _{{\uplambda },\xi })^Ta(\xi +D\phi _{{\uplambda },\xi })-\xi ^T\bar{A}_{\uplambda }\xi \), where \(\bar{A}_{\uplambda }\) is chosen so that \(\psi _{{\uplambda },\xi }\) has zero mean. By Theorem 4.1, \(\psi _{{\uplambda },\xi }\rightarrow \psi _\xi \) in \(L^2(\Omega )\), so we only need to consider \(\psi _{{\uplambda },\xi }\) and show that the estimate holds uniformly in \({\uplambda }\).

Similarly, we apply (4.4) to obtain

$$\begin{aligned} \begin{aligned}&|\mathbb {E}\{\psi _{{\uplambda },\xi }(\tau _0\omega )\psi _{{\uplambda },\xi }(\tau _{-x}\omega )\}|\\&\,\le \sum _{k\in \mathbb {Z}^d}\sqrt{\mathbb {E}\{|\psi _{{\uplambda },\xi }(\tau _0\omega )-\psi _{{\uplambda },\xi }(\tau _0\omega _k)|^2\}}\sqrt{\mathbb {E}\{|\psi _{{\uplambda },\xi }(\tau _{-x}\omega )-\psi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\}}, \end{aligned} \end{aligned}$$
(4.16)

with \(\omega _k\) the perturbation of \(\omega \) at k.

For any vectors \(x_i,y_i\in \mathbb {R}^d\) and matrices \(A_i\in \mathbb {R}^{d\times d}\), \(i=1,2\), we have

$$\begin{aligned} |x_1^TA_1y_1-x_2^TA_2y_2|\le |x_1-x_2|\cdot |y_1|\cdot \Vert A_1\Vert +|x_2|\cdot |y_1|\cdot \Vert A_1-A_2\Vert +|x_2|\cdot |y_1-y_2|\cdot \Vert A_2\Vert , \end{aligned}$$
(4.17)

with \(\Vert \cdot \Vert \) denoting the matrix operator norm, so by the moment bounds of \(D\phi _{{\uplambda },\xi }\), we derive

$$\begin{aligned} \begin{aligned} \mathbb {E}\left\{ |\psi _{{\uplambda },\xi }(\tau _{-x}\omega )-\psi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\right\}&\lesssim |\xi |^4 \sqrt{\mathbb {E}\left\{ \Vert a(\tau _{-x}\omega )-a(\tau _{-x}\omega _k)\Vert ^4\right\} }\\&\quad +|\xi |^2\sqrt{\mathbb {E}\left\{ |D\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\!-\!D\phi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^4\right\} }. \end{aligned} \end{aligned}$$

First, \(\sqrt{\mathbb {E}\{\Vert a(\tau _{-x}\omega )-a(\tau _{-x}\omega _k)\Vert ^4\}}\lesssim 1_{|x-k|\le C}\) by the local dependence of a on \(\omega \).

Secondly, recalling (4.8),

$$\begin{aligned} \begin{aligned} \partial _{x_i}\tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\partial _{x_i}\tilde{\phi }_{{\uplambda },\xi }(x,\omega _k)&=-\int _{\mathbb {R}^d}\nabla _y \partial _{x_i}G_{\uplambda }^\omega (x,y)\left( \frac{1}{2} (\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k))\xi \right. \\&\quad \left. +\,\frac{1}{2}(\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k))\nabla \tilde{\phi }_{{\uplambda },\xi }(y,\omega _k)\right) \, dy. \end{aligned} \end{aligned}$$

By the same discussion as in the proof of Proposition 4.6, we obtain

$$\begin{aligned} \sqrt{\mathbb {E}\left\{ |\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega )-\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega _k)|^4\right\} }\lesssim |\xi |^2\left( 1\wedge \frac{1}{|x-k|^{2d}}\right) . \end{aligned}$$
(4.18)

To summarize, since \(D\phi _{{\uplambda },\xi }(\tau _{-x}\omega )=\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega )\), we have

$$\begin{aligned} \mathbb {E}\left\{ |\psi _{{\uplambda },\xi }(\tau _{-x}\omega )-\psi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\right\} \lesssim |\xi |^4\left( 1_{|x-k|\le C}+1\wedge \frac{1}{|x-k|^{2d}}\right) , \end{aligned}$$

so

$$\begin{aligned} |\mathbb {E}\{\psi _{{\uplambda },\xi }(\tau _0\omega )\psi _{{\uplambda },\xi }(\tau _{-x}\omega )\}|&\lesssim |\xi |^4\sum _{k\in \mathbb {Z}^d}\left( 1\wedge \frac{1}{|k|^d}\right) \left( 1\wedge \frac{1}{|x-k|^d}\right) \\&\lesssim |\xi |^4\frac{\log (2+|x|)}{|x|^d}, \end{aligned}$$

where the last inequality comes from Lemma 9.1. The proof is complete. \(\square \)

For any \(f\in L^2(\Omega )\) with \(\mathbb {E}\{f\}=0\), we are interested in the variance decay of

$$\begin{aligned} f_t:=\mathbb {E}_B\{f(\omega _t)\}. \end{aligned}$$
(4.19)

Since \(\omega _t=\tau _{-X_t^\omega }\omega \) and \(X_t^\omega \) is driven by the generator \(L_1^\omega =\frac{1}{2}\nabla \cdot (\tilde{a}(x,\omega )\nabla )\) with \(\tilde{a}\) strictly positive definite, heuristically \(X_t^\omega \) should spread at least as fast as a Brownian motion with a sufficiently small diffusion constant. In other words, letting \(f_t^o:=\mathbb {E}_B\{f(\omega _t^o)\}\) with \(\omega _t^o=\tau _{-B_t}\omega \), we expect the decay to 0 of \(f_t\) to be at least as fast as that of \(f_t^o\) (up to rescaling time by a suitable constant). The following result is a precise statement of this idea (see [29, Lemma 5.1] for a classical proof).

Proposition 4.8

For any \({\uplambda }\geqslant 0\),

$$\begin{aligned} \int _0^\infty e^{-{\uplambda }t}\mathbb {E}\{|f_{t}|^2\} \, dt\le C\int _0^\infty e^{-{\uplambda }t}\mathbb {E}\{|f_{t}^o|^2\} \, dt. \end{aligned}$$

The constant \(C>0\) only depends on the ellipticity constant in (2.1).

For \(f=\phi _\xi \) or \(\psi _\xi \), the following results hold.

Proposition 4.9

$$\begin{aligned} \mathbb {E}\left\{ |\mathbb {E}_B\{\phi _\xi (\omega _{t})\}|^2\right\} \lesssim |\xi |^2 \left| \begin{array}{ll} t^{-\frac{1}{2}} &{} \text {if } d = 3, \\ t^{-1}\log (2+t) &{} \text {if } d = 4, \\ t^{-1} &{} \text {if } d \geqslant 5. \end{array} \right. \end{aligned}$$

Proof

First, for any f we have

$$\begin{aligned} \mathbb {E}\left\{ \left| f_{t/2}^o\right| ^2\right\} =\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ f(\tau _{-B_{t/2}}\omega )\right\} \right| ^2\right\} =\mathbb {E}\left\{ \mathbb {E}_{B^1,B^2}\left\{ f\left( \tau _{-B^1_{t/2}}\omega \right) f\left( \tau _{-B^2_{t/2}}\omega \right) \right\} \right\} , \end{aligned}$$

where \(B^1,B^2\) are two independent Brownian motions and \(\mathbb {E}_{B^1,B^2}\) denotes the average with respect to them.

Next, let \(f=\phi _\xi \) and let \(R_{\phi _\xi }\) be the covariance function of \(\phi _\xi \) (recall that \(q_t\) is the density of the law N(0, t)); we obtain

$$\begin{aligned} \begin{aligned} \mathbb {E}\left\{ \left| f_{t/2}^o\right| ^2\right\}&= \mathbb {E}_{B^1,B^2}\left\{ R_{\phi _\xi }\left( B^1_{t/2}-B^2_{t/2}\right) \right\} =\int _{\mathbb {R}^d}R_{\phi _\xi }(x)q_t(x) \, dx\\&= \int _{\mathbb {R}^d}R_{\phi _\xi }(\sqrt{t}x)q_1(x) \, dx \lesssim |\xi |^2\int _{\mathbb {R}^d} \left( 1\wedge \frac{1}{|\sqrt{t}x|^{d-2}}\right) q_1(x) \, dx\\&\lesssim |\xi |^2\left( 1\wedge \frac{1}{t^{\frac{d}{2}-1}}\right) , \end{aligned} \end{aligned}$$

where we used the result \(|R_{\phi _\xi }(x)|\lesssim |\xi |^2(1\wedge |x|^{2-d})\) given by Proposition 4.6.

Since \(\mathbb {E}\{|f_{t/2}|^2\}\) decreases in t, from Proposition 4.8 we have

$$\begin{aligned} \mathbb {E}\left\{ |f_{t/2}|^2\right\} \le \frac{C{\uplambda }\int _0^\infty e^{-{\uplambda }s}\mathbb {E}\{|f_{s/2}^o|^2\} \, ds}{1-e^{-{\uplambda }t}}\lesssim \frac{C{\uplambda }|\xi |^2\int _0^\infty e^{-{\uplambda }s}(1\wedge s^{-\frac{d}{2}+1}) \, ds}{1-e^{-{\uplambda }t}} \end{aligned}$$
(4.20)

for any \({\uplambda }>0\). We can choose \({\uplambda }=1/t\) on the r.h.s. of the above display and derive

$$\begin{aligned} \mathbb {E}\{|f_{t/2}|^2\}\lesssim |\xi |^2 \left| \begin{array}{l@{\quad }l} t^{-\frac{1}{2}} &{} \text {if } d = 3, \\ t^{-1}\log (2+t) &{} \text {if } d = 4, \\ t^{-1} &{} \text {if } d \geqslant 5. \end{array} \right. \end{aligned}$$
(4.21)

The proof is complete. \(\square \)
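The Gaussian-averaging step in the proof above, \(\int (1\wedge |\sqrt{t}x|^{2-d})q_1(x)\,dx\lesssim t^{1-\frac{d}{2}}\), can be checked numerically. The sketch below works in \(d=3\), where \(|Z|\) for \(Z\sim N(0,I_3)\) has the chi density \(\sqrt{2/\pi }\,r^2e^{-r^2/2}\); multiplying by \(\sqrt{t}\) should then give an \(O(1)\) quantity (the limit \(\sqrt{2/\pi }\approx 0.8\) comes from \(\mathbb {E}\{|Z|^{-1}\}\)):

```python
import numpy as np

def decay(t, d=3):
    # E{1 ∧ |sqrt(t) Z|^{2-d}} for Z ~ N(0, I_3), by radial quadrature
    r = np.linspace(1e-6, 12.0, 200_000)
    dr = r[1] - r[0]
    chi3 = np.sqrt(2.0 / np.pi) * r**2 * np.exp(-(r**2) / 2)   # density of |Z|
    capped = np.minimum(1.0, (np.sqrt(t) * r) ** (2 - d))       # 1 ∧ |√t r|^{-1}
    return float(np.sum(capped * chi3) * dr)

# t^{1/2} * decay(t) should stabilize near sqrt(2/pi) ≈ 0.8 as t grows
vals = [decay(t) * np.sqrt(t) for t in (10.0, 100.0, 1000.0)]
assert all(0.7 < v < 0.9 for v in vals)
```

This matches the \(t^{-1/2}\) rate for \(d=3\) in the statement of Proposition 4.9.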

Proposition 4.10

\(\int _0^\infty \mathbb {E}\{|\mathbb {E}_B\{\psi _\xi (\omega _t)\}|^2\} \, dt\lesssim |\xi |^4\).

Proof

Let \(f=\psi _\xi \). By Proposition 4.8, we have

$$\begin{aligned} \int _0^\infty \mathbb {E}\{|f_{t}|^2\} \, dt\le C\int _0^\infty \mathbb {E}\{|f_{t}^o|^2\} \, dt, \end{aligned}$$
(4.22)

so we only need to prove that \(\int _0^\infty \mathbb {E}\{|f_{t}^o|^2\} \, dt\lesssim |\xi |^4\). Let \(R_{\psi _\xi }\) be the covariance function of \(\psi _\xi \). By the same argument as in Proposition 4.9,

$$\begin{aligned} \int _0^\infty \mathbb {E}\{|f_{t}^o|^2\} \, dt=\int _0^\infty \int _{\mathbb {R}^d}R_{\psi _\xi }(x)q_{2t}(x) \, dxdt. \end{aligned}$$
(4.23)

By Proposition 4.7, \(|R_{\psi _\xi }(x)|\lesssim |\xi |^4(1\wedge |x|^{-d}\log (2+|x|))\), so after integrating in t we obtain

$$\begin{aligned} \int _0^\infty \int _{\mathbb {R}^d}R_{\psi _\xi }(x)q_{2t}(x) \, dxdt\lesssim \int _{\mathbb {R}^d}\frac{|R_{\psi _\xi }(x)|}{|x|^{d-2}} \, dx\lesssim |\xi |^4 \end{aligned}$$
(4.24)

since \(d\ge 3\). The proof is complete. \(\square \)
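The step from (4.23) to (4.24) integrates the heat kernel in time, using \(\int _0^\infty q_{2t}(x)\,dt=c_d|x|^{2-d}\) (the Green function of Brownian motion, finite precisely because \(d\ge 3\)). A quick numerical sanity check in \(d=3\), where the closed form is \(1/(4\pi |x|)\):

```python
import numpy as np

def green(r, d=3):
    # ∫_0^∞ q_{2t}(x) dt for |x| = r in d = 3, where q_{2t} is the N(0, 2t·Id)
    # density (4πt)^{-3/2} exp(-r²/(4t)); closed form: 1/(4πr)
    lt = np.linspace(np.log(1e-6), np.log(1e8), 400_000)
    t = np.exp(lt)
    q = (4 * np.pi * t) ** (-d / 2) * np.exp(-(r**2) / (4 * t))
    return float(np.sum(q * t) * (lt[1] - lt[0]))   # ∫ f dt = ∫ f·t d(log t)

for r in (1.0, 2.0):
    target = 1 / (4 * np.pi * r)
    assert abs(green(r) - target) / target < 1e-2
```

The log-spaced quadrature is only an approximation of the infinite time integral; the truncation error is negligible at this tolerance.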

Before presenting the proof of the main theorem, we decompose the error as

$$\begin{aligned} \begin{aligned} u_\varepsilon (t,x,\omega )-u_{\mathrm {hom}}(t,x)&=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\{e^{i R_t^\varepsilon ({\uplambda })} e^{iM_t^\varepsilon ({\uplambda })}\} \, d\xi \\&\quad -\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}e^{-\frac{1}{2}\xi ^T\bar{A}\xi t} \, d\xi . \end{aligned} \end{aligned}$$
(4.25)

Since \(u_\varepsilon -u_{\mathrm {hom}}\) does not depend on \({\uplambda }\), we can send \({\uplambda }\rightarrow 0\) on the r.h.s. of the above display. By Theorem 4.1, \(R_t^\varepsilon ({\uplambda })\rightarrow R_t^\varepsilon \) and \(M_t^\varepsilon ({\uplambda })\rightarrow M_t^\varepsilon \) in \(L^2(\Omega \times \Sigma )\), where

$$\begin{aligned} R_t^\varepsilon&:= -\varepsilon \phi _{\xi }(\omega _{t/\varepsilon ^2})+\varepsilon \phi _{\xi }(\omega _0),\end{aligned}$$
(4.26)
$$\begin{aligned} M_t^\varepsilon&:= \sum _{j=1}^d \varepsilon \int _0^{t/\varepsilon ^2}\sum _{i=1}^d(D_i\phi _{\xi }(\omega _s)+\xi _i)\sigma _{ij}(\omega _s) \, dB_s^j. \end{aligned}$$
(4.27)

Therefore, the error can be rewritten as

$$\begin{aligned} \begin{aligned} u_\varepsilon (t,x,\omega )-u_{\mathrm {hom}}(t,x)&=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ \left( e^{i R_t^\varepsilon }-1\right) e^{iM_t^\varepsilon }\right\} \, d\xi \\&\quad +\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\left( \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\right\} -e^{-\frac{1}{2}\xi ^T\bar{A}\xi t}\right) \, d\xi . \end{aligned} \end{aligned}$$
(4.28)

The first part measures how small the remainder \(R_t^\varepsilon \) is, and the second part measures how close the martingale \(M_t^\varepsilon \) is to a Brownian motion. It turns out that the error coming from the remainder generates the random, centered fluctuation, while the error coming from the martingale is of lower order. We will analyze them separately in the following two sections.

5 An analysis of the remainder

We define the error coming from the remainder in (4.28) as

$$\begin{aligned} \mathcal {E}_1:=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ \left( e^{i R_t^\varepsilon }-1\right) e^{iM_t^\varepsilon }\right\} \, d\xi . \end{aligned}$$
(5.1)

Let \(\phi =(\phi _{e_1},\ldots ,\phi _{e_d})\). The goal of this section is to show

Proposition 5.1

$$\begin{aligned} \mathbb {E}\{|\mathcal {E}_1-\varepsilon \nabla u_{\mathrm {hom}}(t,x)\cdot \phi (\tau _{-x/\varepsilon }\omega )|\}\le C(1+t^\frac{1}{2}) \left| \begin{array}{l@{\quad }l} \varepsilon ^{\frac{4}{3}} &{} \text {if } d = 3, \\ \varepsilon ^{\frac{3}{2}}|\log \varepsilon |^\frac{1}{2} &{} \text {if } d = 4, \\ \varepsilon ^\frac{3}{2} &{} \text {if } d \geqslant 5, \end{array} \right. \end{aligned}$$

where C is some constant.

Recall that

$$\begin{aligned} R_t^\varepsilon =-\varepsilon \phi _{\xi }(\omega _{t/\varepsilon ^2})+\varepsilon \phi _{\xi }(\omega _0). \end{aligned}$$

By Theorem 4.1 and the stationarity of \(\omega _s\), we obtain that

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\{|R_t^\varepsilon |^4\}\lesssim |\xi |^4\varepsilon ^4. \end{aligned}$$
(5.2)

Using the fact that \(|e^{ix}-1-ix|\le x^2\) and \(\hat{f}(\xi )|\xi |^2\in L^1(\mathbb {R}^d)\), we derive

$$\begin{aligned} \mathbb {E}\{|\mathcal {E}_1-\mathcal {E}_2|^2\}\lesssim \varepsilon ^4, \end{aligned}$$
(5.3)

where

$$\begin{aligned} \mathcal {E}_2:=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ iR_t^\varepsilon e^{iM_t^\varepsilon }\right\} \, d\xi . \end{aligned}$$
(5.4)

Now we only need to analyze \(\mathcal {E}_2\). The two terms in \(R_t^\varepsilon \) are analyzed separately. For \(-\varepsilon \phi _\xi (\omega _{t/\varepsilon ^2})\), we can use the variance decay of \(\mathbb {E}_B\{\phi _\xi (\omega _t)\}\) when t is large. For \(\varepsilon \phi _\xi (\omega _0)\), since it is independent of the Brownian path, we expect \(e^{iM_t^\varepsilon }\) to average out under \(\mathbb {E}_B\). This will be proved by applying a special case of a quantitative martingale central limit theorem, which we present as the following proposition.

Proposition 5.2

[31, Theorem 3.2] Let \(M_t\) be a continuous martingale with predictable quadratic variation \(\langle M\rangle _t\), and let \(W_t\) be a standard Brownian motion. Then

$$\begin{aligned} d_{1,k}(M_t,\sigma W_t)\le (k\vee 1)\mathbb {E}\{ |\langle M\rangle _t-\sigma ^2t|\}, \end{aligned}$$
(5.5)

with the distance \(d_{1,k}\) defined as

$$\begin{aligned} d_{1,k}(X,Y)=\sup \{|\mathbb {E}\{f(X)-f(Y)\}|: f\in C_b^2(\mathbb {R}), \Vert f'\Vert \le 1, \Vert f''\Vert _\infty \le k\}.\qquad \end{aligned}$$
(5.6)

Remark 5.3

In fact, the argument in [31] simplifies when we assume (as we do here) that the martingale \(M_t\) is continuous. In this case, the multiplicative constant \((k \vee 1)\) in (5.5) can be replaced by k, and the condition \(\Vert f'\Vert \leqslant 1\) in (5.6) can be dropped.
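A toy illustration of how (5.5) enters in (5.20): if a martingale is an independently time-changed Brownian motion, \(M_t=B_{V}\) with \(V=\langle M\rangle _t\) independent of B (a simplified situation, not the one of Sect. 5), then \(\mathbb {E}_B\{e^{iM_t}\}=\mathbb {E}\{e^{-V/2}\}\), and (5.5) reduces to \(|\mathbb {E}\{e^{-V/2}\}-e^{-v/2}|\le \mathbb {E}\{|V-v|\}\) with \(v=\sigma ^2t\), which follows from the pointwise bound \(|e^{-x/2}-e^{-y/2}|\le \frac{1}{2}|x-y|\) on \(x,y\ge 0\). A Monte Carlo sketch of this reduced inequality (the lognormal law for V is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# If <M>_t = V is independent of the driving Brownian motion, then
# E_B{e^{iM_t}} = E{e^{-V/2}} and (5.5) reads |E{e^{-V/2}} - e^{-v/2}| <= E{|V - v|}
ok = True
for _ in range(100):
    v = rng.uniform(0.1, 5.0)                  # target variance v = sigma^2 t
    V = v * rng.lognormal(mean=0.0, sigma=rng.uniform(0.1, 1.0), size=10_000)
    lhs = abs(np.mean(np.exp(-V / 2)) - np.exp(-v / 2))
    rhs = np.mean(np.abs(V - v))
    ok = ok and lhs <= rhs
assert ok
```

The inequality holds with a factor-2 margin here, reflecting the Lipschitz constant 1/2 of \(x\mapsto e^{-x/2}\).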

We also need the following second moment estimate of additive functionals of \(\omega _s\).

Lemma 5.4

For any \(f\in L^2(\Omega )\), we have

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left( \int _0^t f(\omega _s) \, ds\right) ^2\right\} \le 2t\int _0^t\mathbb {E}\left\{ |\mathbb {E}_B\{f(\omega _{s/2})\}|^2\right\} \, ds. \end{aligned}$$

Proof

The proof is a standard calculation. First, by stationarity we have

$$\begin{aligned} \begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left( \int _0^t f(\omega _s) \, ds\right) ^2\right\}&=2\int _{0\le s\le u\le t} \mathbb {E}\mathbb {E}_B\left\{ f(\omega _s)f(\omega _u)\right\} \, dsdu\\&=2\int _{0\le s\le u\le t}\mathbb {E}\mathbb {E}_B\left\{ f(\omega _0)f(\omega _{u-s})\right\} \, dsdu. \end{aligned} \end{aligned}$$
(5.7)

Secondly, we change variable \(s\mapsto u-s\) and integrate in u to obtain

$$\begin{aligned} 2\int _{0\le s\le u\le t}\mathbb {E}\mathbb {E}_B\{f(\omega _0)f(\omega _{u-s})\} \, dsdu =2\int _0^t(t-s)\mathbb {E}\mathbb {E}_B\{f(\omega _0)f(\omega _s)\} \, ds. \end{aligned}$$
(5.8)

By reversibility we further derive

$$\begin{aligned} \begin{aligned} 2\int _0^t(t-s)\mathbb {E}\mathbb {E}_B\{f(\omega _0)f(\omega _s)\} \, ds&=2\int _0^t(t-s)\mathbb {E}\{|\mathbb {E}_B\{f(\omega _{s/2})\}|^2\} \, ds\\&\le 2t\int _0^t\mathbb {E}\{|\mathbb {E}_B\{f(\omega _{s/2})\}|^2\} \, ds. \end{aligned} \end{aligned}$$
(5.9)

The proof is complete. \(\square \)
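The identity behind (5.7)-(5.8), \(\mathbb {E}(\int _0^t f(\omega _s)\,ds)^2=2\int _0^t(t-s)\,\mathbb {E}\{f(\omega _0)f(\omega _s)\}\,ds\), holds for any stationary process. A Monte Carlo sketch with a stationary Ornstein-Uhlenbeck process (an arbitrary stand-in for the environment process, with covariance \(e^{-s}\), so the r.h.s. equals \(2(t-1+e^{-t})\)):

```python
import numpy as np

rng = np.random.default_rng(0)
N, t, h = 40_000, 1.0, 0.005          # paths, horizon, time step
n = int(t / h)
# stationary OU process dX = -X dt + sqrt(2) dW, so E{X_0 X_s} = e^{-s}
x = rng.standard_normal(N)            # stationary initial law N(0,1)
acc = np.zeros(N)
a, b = np.exp(-h), np.sqrt(1 - np.exp(-2 * h))   # exact one-step OU update
for _ in range(n):
    acc += x * h                      # left Riemann sum of ∫_0^t X_s ds
    x = a * x + b * rng.standard_normal(N)
mc = float(np.mean(acc**2))
exact = 2 * (t - 1 + np.exp(-t))      # 2 ∫_0^t (t-s) e^{-s} ds
assert abs(mc - exact) < 0.05
```

The tolerance covers both the Monte Carlo error and the \(O(h)\) discretization bias of the Riemann sum.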

Now we can combine (5.3) with the following Lemmas 5.5 and 5.6 to complete the proof of Proposition 5.1.

Lemma 5.5

$$\begin{aligned} \mathbb {E}\left\{ \left( \int _{\mathbb {R}^d}|\hat{f}(\xi )| \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_t^\varepsilon }\right\} \right| \, d\xi \right) ^2\right\} \lesssim \left| \begin{array}{l@{\quad }l} \varepsilon ^{\frac{2}{3}} &{} \text {if } d = 3, \\ \varepsilon |\log \varepsilon | &{} \text {if } d = 4, \\ \varepsilon &{} \text {if } d \geqslant 5. \end{array} \right. \end{aligned}$$

Proof

First, we have for any \(u\in (0,t)\) that

$$\begin{aligned} \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_u^\varepsilon }\right\} =\mathbb {E}_B\left\{ \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})|\mathcal {F}_{u/\varepsilon ^2}\right\} e^{iM_u^\varepsilon }\right\} , \end{aligned}$$
(5.10)

where \(\mathcal {F}_s\) is the natural filtration associated with \(B_s\). By the stationarity of \(\omega _s\), we obtain

$$\begin{aligned} \begin{aligned} \mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_u^\varepsilon }\right\} \right| ^2\right\}&\le \mathbb {E}\mathbb {E}_B\left\{ \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})\,\middle |\,\mathcal {F}_{u/\varepsilon ^2}\right\} \right| ^2\right\} \\&= \mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{(t-u)/\varepsilon ^2})\right\} \right| ^2\right\} . \end{aligned} \end{aligned}$$
(5.11)

Secondly, we have

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\mathbb {E}_B\left\{ \left| \phi _\xi (\omega _{t/\varepsilon ^2})\left( e^{iM_t^\varepsilon }-e^{iM_u^\varepsilon }\right) \right| ^2\right\} \, d\xi \\&\quad \le \int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\mathbb {E}_B\left\{ |\phi _\xi (\omega _{t/\varepsilon ^2})|^2|M_t^\varepsilon -M_u^\varepsilon |^2\right\} \, d\xi \\&\quad \le \int _{\mathbb {R}^d}|\hat{f}(\xi )|\sqrt{\mathbb {E}\mathbb {E}_B\left\{ |\phi _\xi (\omega _{t/\varepsilon ^2})|^4\right\} }\sqrt{\mathbb {E}\mathbb {E}_B\left\{ |M_t^\varepsilon -M_u^\varepsilon |^4\right\} } \, d\xi . \end{aligned} \end{aligned}$$
(5.12)

By moment bounds of \(\phi _\xi \), the first factor \(\sqrt{\mathbb {E}\mathbb {E}_B\{|\phi _\xi (\omega _{t/\varepsilon ^2})|^4\}}\lesssim |\xi |^2\). For the second factor, we apply moment inequalities of martingales to derive

$$\begin{aligned} \sqrt{\mathbb {E}\mathbb {E}_B\left\{ |M_t^\varepsilon -M_u^\varepsilon |^4\right\} }\lesssim \sqrt{\mathbb {E}\mathbb {E}_B\left\{ |\langle M^\varepsilon \rangle _t-\langle M^\varepsilon \rangle _u|^2\right\} }, \end{aligned}$$
(5.13)

with \(\langle M^\varepsilon \rangle _t\) the quadratic variation of \(M_t^\varepsilon \):

$$\begin{aligned} \begin{aligned} \langle M^\varepsilon \rangle _t&=\sum _{j=1}^d \varepsilon ^2\int _0^{t/\varepsilon ^2}\left( \sum _{i=1}^d (D_i\phi _\xi (\omega _s)+\xi _i)\sigma _{ij}(\omega _s)\right) ^2 \,ds\\&=\varepsilon ^2\int _0^{t/\varepsilon ^2}(\xi +D\phi _{\xi }(\omega _s))^Ta(\omega _s)(\xi +D\phi _\xi (\omega _s)) \,ds. \end{aligned} \end{aligned}$$
(5.14)

By moment bounds of \(D\phi _{\xi }\), we have \(\sqrt{\mathbb {E}\mathbb {E}_B\{|M_t^\varepsilon -M_u^\varepsilon |^4\}}\lesssim (t-u)|\xi |^2\). Therefore, we have obtained

$$\begin{aligned} \int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\mathbb {E}_B\left\{ \left| \phi _\xi (\omega _{t/\varepsilon ^2})\left( e^{iM_t^\varepsilon }-e^{iM_u^\varepsilon }\right) \right| ^2\right\} \, d\xi \lesssim t-u. \end{aligned}$$
(5.15)

Now we can write

$$\begin{aligned} \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_t^\varepsilon }\right\} =\mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})(e^{iM_t^\varepsilon }-e^{iM_u^\varepsilon })\right\} +\mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_u^\varepsilon }\right\} ,\nonumber \\ \end{aligned}$$
(5.16)

and derive

$$\begin{aligned}&\int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_t^\varepsilon }\right\} \right| ^2\right\} \, d\xi \nonumber \\&\quad \lesssim t-u+\int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{(t-u)/\varepsilon ^2})\right\} \right| ^2\right\} \, d\xi . \end{aligned}$$
(5.17)

By Proposition 4.9,

$$\begin{aligned} \int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ \phi _\xi (\omega _{t/\varepsilon ^2})e^{iM_t^\varepsilon }\right\} \right| ^2\right\} \, d\xi \lesssim t-u+\left| \begin{array}{l@{\quad }l} (\frac{\varepsilon ^2}{t-u})^{\frac{1}{2}} &{} \text {if } d = 3, \\ \frac{\varepsilon ^2}{t-u}\log \left( 2+\frac{t-u}{\varepsilon ^2}\right) &{} \text {if } d = 4, \\ \frac{\varepsilon ^2}{t-u} &{} \text {if } d \geqslant 5. \end{array} \right. \end{aligned}$$
(5.18)

After optimizing with respect to u on the r.h.s. of the above display, we complete the proof. \(\square \)
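The optimization is elementary: in \(d=3\), for instance, the r.h.s. of (5.18) has the form \(s+(\varepsilon ^2/s)^{1/2}\) with \(s=t-u\), and balancing the two terms at \(s\sim \varepsilon ^{2/3}\) produces the \(\varepsilon ^{2/3}\) rate of Lemma 5.5 (the other dimensions are analogous). A small numerical check of this balancing by grid search:

```python
import numpy as np

# r.h.s. of (5.18) in d = 3, as a function of s = t - u: s + (eps^2/s)^{1/2}
for eps in (1e-2, 1e-4, 1e-6):
    s = np.logspace(-8, 0, 200_000)
    val = float(np.min(s + eps / np.sqrt(s)))
    # calculus: minimizer s* = (eps/2)^{2/3}, minimum value 3 (eps/2)^{2/3}
    exact = 3 * (eps / 2) ** (2 / 3)
    assert abs(val - exact) / exact < 1e-3
```

The minimum scales as \(\varepsilon ^{2/3}\), which is exactly the \(d=3\) rate appearing in Lemma 5.5.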

Lemma 5.6

$$\begin{aligned}&\mathbb {E}\left\{ \left| \frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}i\varepsilon \phi _\xi (\omega _0)\mathbb {E}_B\{e^{iM_t^\varepsilon }\} \, d\xi -\varepsilon \nabla u_{\mathrm {hom}}(t,x)\cdot \phi (\tau _{-x/\varepsilon }\omega )\right| \right\} \\&\qquad \lesssim \varepsilon ^2\sqrt{t}. \end{aligned}$$

Proof

For almost every fixed \(\omega \in \Omega \) and \(\varepsilon >0\),

$$\begin{aligned} M_t^\varepsilon =\sum _{j=1}^d \varepsilon \int _0^{t/\varepsilon ^2}\sum _{i=1}^d(D_i\phi _{\xi }(\omega _s)+\xi _i)\sigma _{ij}(\omega _s) \, dB_s^j \end{aligned}$$
(5.19)

is a continuous square integrable martingale on \((\Sigma ,\mathcal {A},\mathbb {P}_B)\), so by Proposition 5.2, we have

$$\begin{aligned} \left| \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\right\} -e^{-\frac{1}{2}\sigma _\xi ^2t}\right| \le \mathbb {E}_B\left\{ \left| \langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t\right| \right\} , \end{aligned}$$
(5.20)

where \(\langle M^\varepsilon \rangle _t\) is the quadratic variation of \(M_t^\varepsilon \):

$$\begin{aligned} \begin{aligned} \langle M^\varepsilon \rangle _t&=\sum _{j=1}^d \varepsilon ^2\int _0^{t/\varepsilon ^2}\left( \sum _{i=1}^d (D_i\phi _\xi (\omega _s)+\xi _i)\sigma _{ij}(\omega _s)\right) ^2 \, ds\\&= \varepsilon ^2\int _0^{t/\varepsilon ^2}(\xi +D\phi _{\xi }(\omega _s))^Ta(\omega _s)(\xi +D\phi _\xi (\omega _s)) \, ds, \end{aligned} \end{aligned}$$
(5.21)

and \(\sigma _\xi ^2=\xi ^T\bar{A}\xi \), with the homogenized matrix \(\bar{A}\) given by (3.12).

Thus we have derived

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\left\{ \left| \phi _\xi (\omega _0)\left( \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\right\} -e^{-\frac{1}{2}\sigma _\xi ^2t}\right) \right| \right\} \, d\xi \\&\,\lesssim \int _{\mathbb {R}^d}|\hat{f}(\xi )||\xi |\sqrt{\mathbb {E}\mathbb {E}_B\left\{ \left| \langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t\right| ^2\right\} } \, d\xi . \end{aligned} \end{aligned}$$

By recalling (4.14), \(\langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t=\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\), so we apply Lemma 5.4 and Proposition 4.10 to obtain

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left| \langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t\right| ^2\right\}&= \mathbb {E}\mathbb {E}_B\left\{ \left| \varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\right| ^2\right\} \nonumber \\&\lesssim \varepsilon ^2t\int _0^{t/\varepsilon ^2}\mathbb {E}\left\{ |\mathbb {E}_B\{\psi _\xi (\omega _{s/2})\}|^2\right\} \, ds \nonumber \\&\lesssim \varepsilon ^2t|\xi |^4. \end{aligned}$$
(5.22)

To summarize, we have

$$\begin{aligned} \begin{aligned}&\mathbb {E}\left\{ \left| \frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}i\varepsilon \phi _\xi (\omega _0)(\mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\sigma _\xi ^2t}) \, d\xi \right| \right\} \\&\,\lesssim \varepsilon \int _{\mathbb {R}^d}|\hat{f}(\xi )||\xi |\sqrt{\mathbb {E}\mathbb {E}_B\left\{ |\langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t|^2\right\} } \, d\xi \lesssim \varepsilon ^2\sqrt{t}. \end{aligned} \end{aligned}$$
(5.23)

Since \(\phi _\xi =\sum _{k=1}^d \xi _k\phi _{e_k}\) and \(\omega _0=\tau _{-X_0^\omega }\omega =\tau _{-x/\varepsilon }\omega \), it is straightforward to check that

$$\begin{aligned} \frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}i\varepsilon \phi _\xi (\omega _0)e^{-\frac{1}{2}\sigma _\xi ^2t} \, d\xi =\varepsilon \nabla u_{\mathrm {hom}}(t,x)\cdot \phi (\tau _{-x/\varepsilon }\omega ). \end{aligned}$$
(5.24)

The proof is complete. \(\square \)

6 An analysis of the martingale

We define the error coming from the martingale part in (4.28) as

$$\begin{aligned} \begin{aligned} \mathcal {E}_3&:=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\left( \mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\xi ^T\bar{A}\xi t}\right) \, d\xi \\&=\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d} \hat{f}(\xi )e^{i\xi \cdot x}\left( \mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\sigma _\xi ^2 t}\right) \, d\xi . \end{aligned} \end{aligned}$$
(6.1)

By the estimate in (5.22), we already have

$$\begin{aligned} \begin{aligned} \mathbb {E}\{|\mathcal {E}_3|^2\}&\lesssim \int _{\mathbb {R}^d} |\hat{f}(\xi )| \mathbb {E}\left\{ \left| \mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\sigma _\xi ^2 t}\right| ^2\right\} \, d\xi \\&\le \int _{\mathbb {R}^d}|\hat{f}(\xi )| \mathbb {E}\mathbb {E}_B\left\{ \left| \langle M^\varepsilon \rangle _t-\sigma _\xi ^2t\right| ^2\right\} \, d\xi \lesssim \varepsilon ^2t. \end{aligned} \end{aligned}$$
(6.2)

Thus \(\mathcal {E}_3\) is of order at most \(\varepsilon \), and we need to refine this estimate to show that it is actually of lower order. The following is the main result of this section.

Proposition 6.1

$$\begin{aligned} \mathbb {E}\{|\mathcal {E}_3|\}\le \varepsilon C_\varepsilon (t) \end{aligned}$$
(6.3)

with \(C_\varepsilon (t)\rightarrow 0\) as \(\varepsilon \rightarrow 0\) and \(C_\varepsilon (t)\le C(1+t)\) for some constant \(C>0\).

The proof of Proposition 6.1 can be decomposed into two parts. One part consists in showing that (6.3) holds with \(\mathcal {E}_3\) replaced by

$$\begin{aligned} \mathcal {E}_3-\varepsilon t\sum _{i,j,k=1}^d c_{ijk}\partial _{x_ix_jx_k}u_{\mathrm {hom}}(t,x) \end{aligned}$$

for some constants \(c_{ijk}\) defined below, see (6.12). In other words,

$$\begin{aligned} \varepsilon t\sum _{i,j,k=1}^d c_{ijk}\partial _{x_ix_jx_k}u_{\mathrm {hom}}(t,x) \end{aligned}$$

is what we find to be the deterministic error at the order of \(\varepsilon \). The second part consists in observing that actually, the constants \(c_{ijk}\) are all equal to zero!

We begin by defining \(c_{ijk}\), and then observing that they are in fact zero. The following lemma from the proof of [25, Theorem 1.8] is needed, and we include a proof here for the reader's convenience.

Lemma 6.2

For any \(V\in L^2(\Omega )\) with mean zero, let \(\varphi _{\uplambda }\) be the regularized corrector, i.e., \(({\uplambda }-L)\varphi _{\uplambda }=V\). If

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left( \frac{1}{t^\frac{1}{2}}\int _0^tV(\omega _s) \, ds\right) ^2\right\} \le C \end{aligned}$$

for some constant \(C>0\) independent of t, then \({\uplambda }\langle \varphi _{\uplambda },\varphi _{\uplambda }\rangle \rightarrow 0\) and \(D_k\varphi _{\uplambda }\) converges in \(L^2(\Omega )\), \(k=1,\ldots ,d\).

Proof

First, by the calculation in Lemma 5.4, we have

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left( \frac{1}{t^\frac{1}{2}}\int _0^tV(\omega _s) \, ds\right) ^2\right\} =\frac{2}{t}\int _0^t\int _0^s\langle e^{uL}V,V\rangle \, duds. \end{aligned}$$
(6.4)

Since \(\int _0^s\langle e^{uL}V,V\rangle \, du\) is non-decreasing as a function of s, boundedness of the l.h.s. of the above display is equivalent to \(\int _0^\infty \langle e^{uL}V,V\rangle \, du<\infty \), i.e. \(\langle V,(-L)^{-1}V\rangle <\infty \). Let \(U(d\xi )\) be the projection-valued measure associated with \(-L\), i.e., \(-L=\int _0^\infty \xi \, U(d\xi )\), and let \(\nu (d\xi )\) be the spectral measure associated with V, i.e. \(\nu (d\xi )=\langle U(d\xi ) V,V\rangle \). The condition \(\langle V,(-L)^{-1}V\rangle <\infty \) is then equivalent to

$$\begin{aligned} \int _0^\infty \frac{1}{\xi } \, \nu (d\xi )<\infty . \end{aligned}$$
(6.5)

It follows that

$$\begin{aligned} {\uplambda }\langle \varphi _{\uplambda },\varphi _{\uplambda }\rangle =\int _0^\infty \frac{{\uplambda }}{({\uplambda }+\xi )^2} \, \nu (d\xi )\rightarrow 0 \end{aligned}$$
(6.6)

as \({\uplambda }\rightarrow 0\) by the dominated convergence theorem, since \(\frac{{\uplambda }}{({\uplambda }+\xi )^2}\le \frac{1}{4\xi }\), which is \(\nu \)-integrable by (6.5), and the integrand tends to zero pointwise for \(\xi >0\) (note that \(\nu (\{0\})=0\) by (6.5)). By the uniform ellipticity, we have

$$\begin{aligned} \langle D(\varphi _{{\uplambda }_1}-\varphi _{{\uplambda }_2}),D(\varphi _{{\uplambda }_1}-\varphi _{{\uplambda }_2})\rangle \lesssim \langle \varphi _{{\uplambda }_1}-\varphi _{{\uplambda }_2},-L(\varphi _{{\uplambda }_1}-\varphi _{{\uplambda }_2})\rangle , \end{aligned}$$
(6.7)

and since

$$\begin{aligned} \langle \varphi _{{\uplambda }_1},-L\varphi _{{\uplambda }_2}\rangle =\int _0^\infty \frac{\xi }{({\uplambda }_1+\xi )({\uplambda }_2+\xi )} \, \nu (d\xi )\rightarrow \int _0^\infty \frac{1}{\xi } \, \nu (d\xi ) \end{aligned}$$
(6.8)

as \({\uplambda }_1,{\uplambda }_2\rightarrow 0\), we further obtain

$$\begin{aligned} \langle D(\varphi _{{\uplambda }_1}-\varphi _{{\uplambda }_2}),D(\varphi _{{\uplambda }_1}-\varphi _{{\uplambda }_2})\rangle \rightarrow 0. \end{aligned}$$
(6.9)

The proof is complete. \(\square \)

For \(\psi _{ij}=(e_i+D\phi _{e_i})^Ta(e_j+D\phi _{e_j})-\bar{A}_{ij} \) (\( i,j=1,\ldots ,d\)), a polarization of the inequality in (5.22) ensures that

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left| \varepsilon \int _0^{t/\varepsilon ^2}\psi _{ij}(\omega _s) \, ds\right| ^2\right\} \lesssim t, \end{aligned}$$
(6.10)

i.e., the asymptotic variance is finite, so we can apply Lemma 6.2: letting \(\Psi _{{\uplambda },ij}\) be the regularized corrector associated with \(\psi _{ij}\), i.e.,

$$\begin{aligned} ({\uplambda }-L)\Psi _{{\uplambda },ij}=\psi _{ij}, \end{aligned}$$
(6.11)

we have \({\uplambda }\langle \Psi _{{\uplambda },ij},\Psi _{{\uplambda },ij}\rangle \rightarrow 0\) as \({\uplambda }\rightarrow 0\). We also have the convergence of \(D_k\Psi _{{\uplambda },ij}\) in \(L^2(\Omega )\), with the limit formally written as \(D_k\Psi _{ij}:=\lim _{{\uplambda }\rightarrow 0}D_k\Psi _{{\uplambda },ij}\).

Let \(D\Psi _{ij}=(D_1\Psi _{ij},\ldots ,D_d\Psi _{ij})\); then the constant \(c_{ijk}\), for \(i,j,k=1,\ldots ,d\), is given by

$$\begin{aligned} c_{ijk}:=\frac{1}{2}\mathbb {E}\left\{ (D\Psi _{ij})^Ta(e_k+D\phi _{e_k})\right\} . \end{aligned}$$
(6.12)

Lemma 6.3

\(c_{ijk}=0\) for \(i,j,k=1,\ldots ,d\).

Proof

By the \(L^2\) convergence of \(D\Psi _{{\uplambda },ij}\rightarrow D\Psi _{ij}\) and \(D\phi _{{\uplambda },e_k}\rightarrow D\phi _{e_k}\), we have

$$\begin{aligned} c_{ijk}=\lim _{{\uplambda }\rightarrow 0}\frac{1}{2}\mathbb {E}\left\{ (D\Psi _{{\uplambda },ij})^Ta(e_k+D\phi _{{\uplambda },e_k})\right\} . \end{aligned}$$
(6.13)

An integration by parts leads to

$$\begin{aligned}&\frac{1}{2}\mathbb {E}\left\{ (D\Psi _{{\uplambda },ij})^Ta(e_k+D\phi _{{\uplambda },e_k})\right\} \nonumber \\&\qquad =\left\langle \Psi _{{\uplambda },ij}, \frac{1}{2}\sum _{m,n=1}^d D_m(a_{mn}D_n\phi _{{\uplambda },e_k})+\frac{1}{2}\sum _{m=1}^dD_ma_{mk}\right\rangle . \end{aligned}$$
(6.14)

The r.h.s. of the above display can be rewritten as \(\langle \Psi _{{\uplambda },ij},L\phi _{{\uplambda },e_k}+e_k\cdot b\rangle \), and by recalling the equation satisfied by the regularized corrector (3.5), we have

$$\begin{aligned} \frac{1}{2}\mathbb {E}\left\{ (D\Psi _{{\uplambda },ij})^Ta(e_k+D\phi _{{\uplambda },e_k})\right\} = \langle \Psi _{{\uplambda },ij},{\uplambda }\phi _{{\uplambda },e_k}\rangle , \end{aligned}$$
(6.15)

which goes to zero as \({\uplambda }\rightarrow 0\), since by the Cauchy–Schwarz inequality \(|\langle \Psi _{{\uplambda },ij},{\uplambda }\phi _{{\uplambda },e_k}\rangle |\le ({\uplambda }\langle \Psi _{{\uplambda },ij},\Psi _{{\uplambda },ij}\rangle )^{1/2}({\uplambda }\langle \phi _{{\uplambda },e_k},\phi _{{\uplambda },e_k}\rangle )^{1/2}\) and both factors vanish. The proof is complete. \(\square \)

To refine the estimation of \(\mathcal {E}_3\), we need a more accurate estimation of \(\mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\sigma _\xi ^2 t}\) compared with the one obtained by Proposition 5.2. This is given by the following quantitative martingale central limit theorem.

Proposition 6.4

[21, Proposition 3.2] Let \(M_t\) be a continuous martingale with predictable quadratic variation \(\langle M\rangle _t\), and let \(W_t\) be a standard Brownian motion. Then for any \(f\in \mathcal {C}_b(\mathbb {R})\) with bounded and continuous derivatives up to third order, we have

$$\begin{aligned} \left| \mathbb {E}\left\{ f(M_t)-f(\sigma W_t)-\frac{1}{2}f''(M_\tau )(\langle M\rangle _t-\sigma ^2 t)\right\} \right| \le C\Vert f'''\Vert _\infty \mathbb {E}\left\{ \left| \langle M\rangle _t-\sigma ^2t \right| ^{\frac{3}{2}}\right\} , \end{aligned}$$

where \(\tau =\sup \{s\in [0,t]:\langle M\rangle _s\le \sigma ^2t\}\), \(\Vert f'''\Vert _\infty \) denotes the supremum norm of \(f'''\), and C is a universal constant.

Remark 6.5

In the discrete-space setting, the corresponding martingales have jumps, and we do not know how to adapt Proposition 6.4 and the subsequent argument to recover Theorem 2.1 in this case.

By the above proposition, we have for almost every \(\omega \in \Omega \) that

$$\begin{aligned} \left| \mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\sigma _\xi ^2 t}+\frac{1}{2}\mathbb {E}_B\left\{ e^{iM_\tau ^\varepsilon }\left( \langle M^\varepsilon \rangle _t-\sigma _\xi ^2t\right) \right\} \right| \le C\mathbb {E}_B\left\{ \left| \langle M^\varepsilon \rangle _t-\sigma _\xi ^2t\right| ^\frac{3}{2}\right\} , \end{aligned}$$
(6.16)

where

$$\begin{aligned} \tau =\sup \left\{ s\in [0,t]: \sum _{j=1}^d \varepsilon ^2\int _0^{s/\varepsilon ^2}\left( \sum _{i=1}^d (D_i\phi _\xi (\omega _u)+\xi _i)\sigma _{ij}(\omega _u)\right) ^2 \, du\leqslant \sigma _\xi ^2 t\right\} . \end{aligned}$$
(6.17)

Combining with (5.22), we obtain

$$\begin{aligned} \mathbb {E}\{|\mathcal {E}_3-\mathcal {E}_4|\}\lesssim \int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\mathbb {E}_B\left\{ |\langle M^\varepsilon \rangle _t-\sigma _\xi ^2t|^\frac{3}{2}\right\} \, d\xi \lesssim \varepsilon ^\frac{3}{2}t^\frac{3}{4} \end{aligned}$$
(6.18)

for

$$\begin{aligned} \begin{aligned} \mathcal {E}_4:=&-\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ e^{iM_\tau ^\varepsilon }\left( \langle M^\varepsilon \rangle _t-\sigma _\xi ^2t\right) \right\} \, d\xi \\ =&-\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ e^{iM_\tau ^\varepsilon }\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s)ds\right\} \, d\xi . \end{aligned} \end{aligned}$$
(6.19)

Define

$$\begin{aligned} \mathcal {E}_5:=-\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s)ds\right\} \, d\xi . \end{aligned}$$
(6.20)

The following Lemmas 6.6 and 6.7 combine with (6.18) to complete the proof of Proposition 6.1.

Lemma 6.6

\(\mathbb {E}\{|\mathcal {E}_4-\mathcal {E}_5|\}\lesssim \varepsilon ^\frac{3}{2}t^\frac{3}{4}\).

Proof

By (5.22), we know \(\mathbb {E}\mathbb {E}_B\{|\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds|^2\}\lesssim \varepsilon ^2t|\xi |^4\), so, using \(|e^{iM_\tau ^\varepsilon }-e^{iM_t^\varepsilon }|\le |M_\tau ^\varepsilon -M_t^\varepsilon |\) and the Cauchy–Schwarz inequality,

$$\begin{aligned} \mathbb {E}\{|\mathcal {E}_4-\mathcal {E}_5|\}\lesssim \varepsilon t^\frac{1}{2}\int _{\mathbb {R}^d}|\hat{f}(\xi )||\xi |^2\sqrt{\mathbb {E}\mathbb {E}_B\left\{ |M_\tau ^\varepsilon -M_t^\varepsilon |^2\right\} } \, d\xi . \end{aligned}$$
(6.21)

By the definition of \(\tau \), we have

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^d}|\hat{f}(\xi )||\xi |^2\sqrt{\mathbb {E}\mathbb {E}_B\left\{ |M_\tau ^\varepsilon -M_t^\varepsilon |^2\right\} } \, d\xi&\lesssim \int _{\mathbb {R}^d}|\hat{f}(\xi )||\xi |^2\sqrt{\mathbb {E}\mathbb {E}_B\left\{ |\sigma _\xi ^2t-\langle M^\varepsilon \rangle _t|\right\} } \, d\xi \\&\lesssim \varepsilon ^\frac{1}{2}t^\frac{1}{4}, \end{aligned} \end{aligned}$$
(6.22)

so \(\mathbb {E}\{|\mathcal {E}_4-\mathcal {E}_5|\}\lesssim \varepsilon ^\frac{3}{2}t^\frac{3}{4}\). The proof is complete. \(\square \)

Lemma 6.7

$$\begin{aligned} \mathbb {E}\left\{ \left| \mathcal {E}_5-\varepsilon t\sum _{i,j,k=1}^d c_{ijk}\partial _{x_ix_jx_k}u_{\mathrm {hom}}(t,x)\right| \right\} \le \varepsilon C_\varepsilon (t), \end{aligned}$$

with \(C_\varepsilon (t)\rightarrow 0\) as \(\varepsilon \rightarrow 0\) and \(C_\varepsilon (t)\le C(1+t)\) for some constant C.

Proof

We write

$$\begin{aligned} \frac{\mathcal {E}_5}{\varepsilon }=-\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\varepsilon \int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\right\} \, d\xi , \end{aligned}$$
(6.23)

where \(\varepsilon \int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\) is of central limit scaling. To apply the Kipnis–Varadhan method, the only condition we need to check is the finiteness of the asymptotic variance, and this is already given by (5.22), i.e. we have

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left| \varepsilon \int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\right| ^2\right\} \lesssim t|\xi |^4. \end{aligned}$$
(6.24)

Therefore, we can write \(\varepsilon \int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds=\mathcal {R}_t^\varepsilon +\mathcal {M}_t^\varepsilon \) with

$$\begin{aligned} \begin{aligned} \mathcal {R}_t^\varepsilon&=\varepsilon \int _0^{t/\varepsilon ^2}{\uplambda }\Psi _{{\uplambda },\xi }(\omega _s) \, ds-\varepsilon \Psi _{{\uplambda },\xi }(\omega _{t/\varepsilon ^2})+\varepsilon \Psi _{{\uplambda },\xi }(\omega _0)\\&\quad +\sum _{j=1}^d \varepsilon \int _0^{t/\varepsilon ^2}\sum _{i=1}^d \left( D_i\Psi _{{\uplambda },\xi }(\omega _s)-D_i\Psi _\xi (\omega _s)\right) \sigma _{ij}(\omega _s) \, dB_s^j, \end{aligned} \end{aligned}$$
(6.25)

and

$$\begin{aligned} \mathcal {M}_t^\varepsilon =\sum _{j=1}^d \varepsilon \int _0^{t/\varepsilon ^2}\sum _{i=1}^d D_i\Psi _\xi (\omega _s)\sigma _{ij}(\omega _s) \, dB_s^j. \end{aligned}$$
(6.26)

Recall that the formally-written random variable \(D_i\Psi _\xi \) is the \(L^2\)-limit of \(D_i\Psi _{{\uplambda },\xi }\) as \({\uplambda }\rightarrow 0\), with \(\Psi _{{\uplambda },\xi }\) solving the regularized corrector equation

$$\begin{aligned} ({\uplambda }-L)\Psi _{{\uplambda },\xi }=\psi _\xi . \end{aligned}$$
(6.27)

Since \(\psi _\xi =\sum _{i,j=1}^d\xi _i\xi _j\psi _{ij}\), by linearity we have \(\Psi _{{\uplambda },\xi }=\sum _{i,j=1}^d \xi _i\xi _j\Psi _{{\uplambda },ij}\), with \(\Psi _{{\uplambda },ij}\) solving

$$\begin{aligned} ({\uplambda }-L)\Psi _{{\uplambda },ij}=\psi _{ij}. \end{aligned}$$
(6.28)

Now we can write

$$\begin{aligned} \frac{\mathcal {E}_5}{\varepsilon }=-\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}\mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\left( \mathcal {R}_t^\varepsilon +\mathcal {M}_t^\varepsilon \right) \right\} d\xi . \end{aligned}$$
(6.29)

First, by choosing \({\uplambda }=\varepsilon ^2\) and using the stationarity of \(\omega _s\) we have

$$\begin{aligned}&\mathbb {E}\mathbb {E}_B\left\{ \left| \varepsilon \int _0^{t/\varepsilon ^2}{\uplambda }\Psi _{{\uplambda },\xi }(\omega _s) \, ds-\varepsilon \Psi _{{\uplambda },\xi }(\omega _{t/\varepsilon ^2})+\varepsilon \Psi _{{\uplambda },\xi }(\omega _0)\right| ^2\right\} \nonumber \\&\qquad \lesssim {\uplambda }\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi }\rangle (1+t^2). \end{aligned}$$
(6.30)

For the stochastic integral, we have

$$\begin{aligned} \begin{aligned}&\mathbb {E}\mathbb {E}_B\left\{ \left| \sum _{j=1}^d \varepsilon \int _0^{t/\varepsilon ^2}\sum _{i=1}^d (D_i\Psi _{{\uplambda },\xi }(\omega _s)-D_i\Psi _\xi (\omega _s))\sigma _{ij}(\omega _s) \, dB_s^j\right| ^2\right\} \\&\quad =\mathbb {E}\mathbb {E}_B\left\{ \sum _{j=1}^d \varepsilon ^2\int _0^{t/\varepsilon ^2}\left( \sum _{i=1}^d \left( D_i\Psi _{{\uplambda },\xi }(\omega _s)-D_i\Psi _\xi (\omega _s)\right) \sigma _{ij}(\omega _s)\right) ^2 \, ds\right\} \\&\quad \lesssim \sum _{i=1}^d\left\langle D_i\Psi _{{\uplambda },\xi }-D_i\Psi _\xi ,D_i\Psi _{{\uplambda },\xi }-D_i\Psi _\xi \right\rangle t. \end{aligned} \end{aligned}$$
(6.31)

Therefore,

$$\begin{aligned} \begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ |\mathcal {R}_t^\varepsilon |^2\right\}&\lesssim {\uplambda }\left\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi }\right\rangle (1+t^2)\\&\quad +\sum _{i=1}^d\left\langle D_i\Psi _{{\uplambda },\xi }-D_i\Psi _\xi ,D_i\Psi _{{\uplambda },\xi }-D_i\Psi _\xi \right\rangle t. \end{aligned} \end{aligned}$$
(6.32)

By Lemma 6.2, \({\uplambda }\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi }\rangle \rightarrow 0\) as \({\uplambda }\rightarrow 0\), and \(D_i\Psi _{{\uplambda },\xi }\rightarrow D_i\Psi _\xi \) in \(L^2(\Omega )\), so we derive

$$\begin{aligned} \int _{\mathbb {R}^d}|\hat{f}(\xi )| \mathbb {E}\mathbb {E}_B\{|\mathcal {R}_t^\varepsilon |\} \, d\xi \le C_\varepsilon (1+t) \end{aligned}$$
(6.33)

with \(C_\varepsilon \rightarrow 0\) as \(\varepsilon \rightarrow 0\).

Secondly, for the martingale part \(\mathbb {E}_B\{e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \}\), it is clear that \(M_t^\varepsilon \) and \(\mathcal {M}_t^\varepsilon \) can be written as \(\sum _{j=1}^d\varepsilon \int _0^{t/\varepsilon ^2}f_j(\omega _s) \, dB_s^j\) and \(\sum _{j=1}^d\varepsilon \int _0^{t/\varepsilon ^2}g_j(\omega _s) \, dB_s^j\), respectively, for some \(f_j,g_j\in L^2(\Omega )\). We claim that for fixed \(\xi \in \mathbb {R}^d\) and \(t>0\),

$$\begin{aligned} \mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \right\} -c_\xi \right| \right\} \rightarrow 0 \end{aligned}$$
(6.34)

for some constant \(c_\xi \).

Recall that \(\omega _s\) depends on \(\varepsilon \) through the initial condition \(\omega _0=\tau _{-x/\varepsilon }\omega \). By stationarity we can shift the environment \(\omega \) by an amount of \(x/\varepsilon \) without changing the value of \(\mathbb {E}\{|\mathbb {E}_B\{e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \}-c_\xi |\}\). So we can assume \(\omega _s=\tau _{X_s^\omega }\omega \) with \(X_0^\omega =0\).

For almost every \(\omega \in \Omega \), by ergodicity we have

$$\begin{aligned}&\sum _{j=1}^d \varepsilon ^2\int _0^{t/\varepsilon ^2}f_j^2(\omega _s) \, ds\rightarrow t\sum _{j=1}^d \langle f_j,f_j\rangle \\&\sum _{j=1}^d \varepsilon ^2\int _0^{t/\varepsilon ^2}g_j^2(\omega _s) \, ds\rightarrow t\sum _{j=1}^d \langle g_j,g_j\rangle \\&\sum _{j=1}^d \varepsilon ^2\int _0^{t/\varepsilon ^2}f_j(\omega _s)g_j(\omega _s) \, ds\rightarrow t\sum _{j=1}^d \langle f_j,g_j\rangle \end{aligned}$$

almost surely in \(\Sigma \). Thus by a martingale central limit theorem [12, page 339, Theorem 1.4], we have that for almost every \(\omega \in \Omega \),

$$\begin{aligned} (M_t^\varepsilon ,\mathcal {M}_t^\varepsilon )\Rightarrow (N_1,N_2) \end{aligned}$$
(6.35)

in distribution in \(\Sigma \), where \((N_1,N_2)\) is a Gaussian vector with mean zero and whose covariance matrix is determined by \(\mathbb {E}\{N_1^2\}=t\sum _{j=1}^d \langle f_j,f_j\rangle \), \(\mathbb {E}\{N_2^2\}=t\sum _{j=1}^d \langle g_j,g_j\rangle \), and \(\mathbb {E}\{N_1N_2\}=t\sum _{j=1}^d \langle f_j,g_j\rangle \).

Now let \(g_K(x)=(x\wedge K)\vee (-K)\) be a continuous and bounded cutoff function for \(K>0\), and set \(h_K(x)=x-g_K(x)\); we have

$$\begin{aligned} \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \right\} =\mathbb {E}_B\left\{ e^{iM_t^\varepsilon }g_K(\mathcal {M}_t^\varepsilon )\right\} +\mathbb {E}_B\left\{ e^{iM_t^\varepsilon }h_K(\mathcal {M}_t^\varepsilon )\right\} \end{aligned}$$
(6.36)

It is clear that \(\mathbb {E}\mathbb {E}_B\{|\mathcal {M}_t^\varepsilon |^2\}\lesssim t|\xi |^4\), so

$$\begin{aligned} \mathbb {E}\mathbb {E}_B\left\{ \left| h_K(\mathcal {M}_t^\varepsilon )\right| \right\} \le \mathbb {E}\mathbb {E}_B\left\{ \left| \mathcal {M}_t^\varepsilon \right| 1_{|\mathcal {M}_t^\varepsilon |\ge K}\right\} \le \frac{1}{K}\mathbb {E}\mathbb {E}_B\left\{ |\mathcal {M}_t^\varepsilon |^2\right\} \lesssim \frac{t|\xi |^4}{K}. \end{aligned}$$
(6.37)

Therefore,

$$\begin{aligned} \begin{aligned}&\limsup _{\varepsilon \rightarrow 0}\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \right\} -\mathbb {E}\left\{ e^{iN_1}N_2\right\} \right| \right\} \\&\quad \le \lim _{\varepsilon \rightarrow 0}\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }g_K(\mathcal {M}_t^\varepsilon )\right\} -\mathbb {E}\left\{ e^{iN_1}g_K(N_2)\right\} \right| \right\} \\&\qquad +\left| \mathbb {E}\left\{ e^{iN_1}h_K(N_2)\right\} \right| +\mathbb {E}\mathbb {E}_B\left\{ \left| e^{iM_t^\varepsilon }h_K(\mathcal {M}_t^\varepsilon )\right| \right\} \\&\quad \lesssim \left| \mathbb {E}\left\{ e^{iN_1}h_K(N_2)\right\} \right| +\frac{t|\xi |^4}{K}. \end{aligned} \end{aligned}$$
(6.38)

Letting \(K\rightarrow \infty \) (using that \(\mathbb {E}\{e^{iN_1}h_K(N_2)\}\rightarrow 0\) by dominated convergence), (6.34) is proved with \(c_\xi =\mathbb {E}\{e^{iN_1}N_2\}\).

For the constant \(c_\xi \), we have

$$\begin{aligned} c_\xi =ie^{-\frac{1}{2}\mathbb {E}\{N_1^2\}}\mathbb {E}\{N_1N_2\} \end{aligned}$$
(6.39)

(this can be easily seen by differentiating the formula for \(\mathbb {E}\{e^{i N_1 + i\zeta N_2}\}\) with respect to \(\zeta \)). Recall that \(f_j = \sum _{i=1}^d (D_i \phi _\xi + \xi _i) \sigma _{ij}\) and \(g_j = \sum _{i = 1}^d D_i \Psi _{\xi } \sigma _{ij}\). After some calculation, we obtain

$$\begin{aligned} \sum _{j=1}^d\langle f_j,g_j\rangle =\sum _{i,j,k=1}^d \xi _i\xi _j\xi _k\mathbb {E}\left\{ (D\Psi _{ij})^Ta(e_k+D\phi _{e_k})\right\} , \end{aligned}$$
(6.40)

so, recalling (6.12),

$$\begin{aligned} c_\xi =2ie^{-\frac{1}{2}\sigma _\xi ^2t}t\sum _{i,j,k=1}^d \xi _i\xi _j\xi _k c_{ijk}. \end{aligned}$$
(6.41)
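For completeness, the Gaussian identity (6.39) follows by differentiating the joint characteristic function of \((N_1,N_2)\),

$$\begin{aligned} \mathbb {E}\{e^{iN_1+i\zeta N_2}\}=\exp \left( -\frac{1}{2}\mathbb {E}\{N_1^2\}-\zeta \mathbb {E}\{N_1N_2\}-\frac{1}{2}\zeta ^2\mathbb {E}\{N_2^2\}\right) , \end{aligned}$$

at \(\zeta =0\): the l.h.s. gives \(i\mathbb {E}\{e^{iN_1}N_2\}\) and the r.h.s. gives \(-\mathbb {E}\{N_1N_2\}e^{-\frac{1}{2}\mathbb {E}\{N_1^2\}}\), which is (6.39). Then (6.41) follows by inserting \(\mathbb {E}\{N_1^2\}=\sigma _\xi ^2t\), \(\mathbb {E}\{N_1N_2\}=t\sum _{j=1}^d\langle f_j,g_j\rangle \), (6.40), and the definition (6.12) of \(c_{ijk}\).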

By the above expression of \(c_\xi \) and the fact that \(\mathbb {E}\mathbb {E}_B\{|\mathcal {M}_t^\varepsilon |^2\}\lesssim t|\xi |^4\), we have

$$\begin{aligned} \mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \right\} -c_\xi \right| \right\} \lesssim t^\frac{1}{2}|\xi |^2+t|\xi |^3, \end{aligned}$$
(6.42)

so applying the dominated convergence theorem, we conclude that for \(t>0\)

$$\begin{aligned} \int _{\mathbb {R}^d}|\hat{f}(\xi )|\mathbb {E}\left\{ \left| \mathbb {E}_B\left\{ e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \right\} -c_\xi \right| \right\} \, d\xi \rightarrow 0 \end{aligned}$$
(6.43)

as \(\varepsilon \rightarrow 0\).

To summarize, by combining (6.33) and (6.43) we have proved

$$\begin{aligned} \mathbb {E}\left\{ \left| \frac{\mathcal {E}_5}{\varepsilon }+\frac{1}{2\left( 2\pi \right) ^d}\int _{\mathbb {R}^d}\hat{f}\left( \xi \right) e^{i\xi \cdot x}c_\xi \, d\xi \right| \right\} \rightarrow 0 \end{aligned}$$
(6.44)

as \(\varepsilon \rightarrow 0\), and the following bound holds

$$\begin{aligned} \mathbb {E}\left\{ \left| \frac{\mathcal {E}_5}{\varepsilon }+\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}c_\xi \, d\xi \right| \right\} \le C(1+t) \end{aligned}$$
(6.45)

for some constant \(C>0\) independent of \((t,x)\).

Now we only need to note that

$$\begin{aligned} \begin{aligned}&-\frac{1}{2(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )e^{i\xi \cdot x}c_\xi \, d\xi \\&\,={t}\sum _{i,j,k=1}^d c_{ijk}\frac{1}{(2\pi )^d}\int _{\mathbb {R}^d}\hat{f}(\xi )(-i) \xi _i\xi _j\xi _ke^{i\xi \cdot x}e^{-\frac{1}{2}\sigma _\xi ^2t} \, d\xi \\&\,=t\sum _{i,j,k=1}^dc_{ijk}\partial _{x_ix_jx_k}u_{\mathrm {hom}}(t,x) \end{aligned} \end{aligned}$$
(6.46)

to complete the proof. \(\square \)

Remark 6.8

From the proof above, we see that in order to estimate the rate of convergence to 0 of \(\mathbb {E}\{|C_\varepsilon |\}\) in Theorem 2.1, the rates of convergence of \({\uplambda }\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi } \rangle \) to 0 and of \(D\Psi _{{\uplambda },\xi }\) to \(D\Psi _\xi \) as \({\uplambda }\rightarrow 0\) need to be quantified. This in turn could be obtained by reinforcing Proposition 4.10 to

$$\begin{aligned} \mathbb {E}\left\{ \left| \mathbb {E}_B\{\psi _\xi (\omega _t)\}\right| ^2\right\} \lesssim t^{-\gamma }, \end{aligned}$$
(6.47)

for some \(\gamma > 1\). More precisely, spectral computations similar to those of [29] show that (6.47) implies

$$\begin{aligned} {\uplambda }\left\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi } \right\rangle \lesssim {\uplambda }^{(\gamma -1) \wedge 1}, \end{aligned}$$

and the same estimate for \(\mathbb {E}\{|D \Psi _{{\uplambda },\xi }-D \Psi _\xi |^2\}\). It was shown in [18, Theorem 2.1] that the spatial averages of \(\psi _\xi \) behave as if \(\psi _\xi \) were a local function of the coefficient field. If \(\psi _\xi \) is replaced by a truly local function, then the methods of [15] show that (6.47) holds with \(\gamma = d/2\). For our actual function \(\psi _\xi \), it is thus natural to expect (6.47) to hold at least for every \(\gamma < d/2\), but a proof of this stronger result would require more work, so we preferred to present a simpler argument here.

7 Results on elliptic equations

The solutions to the elliptic equations can be written as

$$\begin{aligned} U_\varepsilon (x,\omega )=\int _0^\infty e^{-t}u_\varepsilon (t,x,\omega ) \, dt\end{aligned}$$
(7.1)
$$\begin{aligned} U_{\mathrm {hom}}(x)=\int _0^\infty e^{-t}u_{\mathrm {hom}}(t,x) \, dt. \end{aligned}$$
(7.2)
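For orientation, by the Laplace-transform representation of the resolvent (\(\int _0^\infty e^{-t}e^{tL}f \, dt=(1-L)^{-1}f\) for a generator L), the functions \(U_\varepsilon \) and \(U_{\mathrm {hom}}\) defined by (7.1) and (7.2) solve the elliptic equations

$$\begin{aligned} U_\varepsilon -\frac{1}{2}\nabla \cdot \left( \tilde{a}\left( \frac{x}{\varepsilon },\omega \right) \nabla U_\varepsilon \right) =f,\qquad U_{\mathrm {hom}}-\frac{1}{2}\nabla \cdot (\bar{A}\nabla U_{\mathrm {hom}})=f \end{aligned}$$

on \(\mathbb {R}^d\).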

Recall the error decomposition for fixed \((t,x)\) in the parabolic case

$$\begin{aligned} u_\varepsilon (t,x,\omega )-u_{\mathrm {hom}}(t,x) =\varepsilon \nabla u_{\mathrm {hom}}(t,x)\cdot \phi (\tau _{-x/\varepsilon }\omega )+\varepsilon C_\varepsilon (t,x), \end{aligned}$$
(7.3)

where \(C_\varepsilon (t,x)\rightarrow 0\) in \(L^1(\Omega )\). By Propositions 5.1 and 6.1, we actually have

$$\begin{aligned} \mathbb {E}\{|C_\varepsilon (t,x)|\}\le C(1+t) \end{aligned}$$
(7.4)

for some constant \(C>0\), so by the dominated convergence theorem

$$\begin{aligned} \int _0^\infty e^{-t}\mathbb {E}\{|C_\varepsilon (t,x)|\} \, dt\rightarrow 0 \end{aligned}$$
(7.5)

as \(\varepsilon \rightarrow 0\). Therefore, we obtain the error decomposition for fixed x in the elliptic case

$$\begin{aligned} U_\varepsilon (x,\omega )-U_{\mathrm {hom}}(x) = \varepsilon \int _0^\infty e^{-t}\nabla u_{\mathrm {hom}}(t,x) \, dt\cdot \phi (\tau _{-x/\varepsilon }\omega )+\varepsilon \tilde{C}_\varepsilon (x) \end{aligned}$$
(7.6)

with \(\tilde{C}_\varepsilon (x)\rightarrow 0\) in \(L^1(\Omega )\).

The first term on the r.h.s. of (7.6) gives

$$\begin{aligned} \varepsilon \int _0^\infty e^{-t}\nabla u_{\mathrm {hom}}(t,x) \, dt\cdot \phi (\tau _{-x/\varepsilon }\omega )=\varepsilon \nabla U_{\mathrm {hom}}(x)\cdot \phi (\tau _{-x/\varepsilon }\omega ), \end{aligned}$$
(7.7)

which completes the proof of Theorem 2.5.

8 Results for periodic coefficients

It is natural to ask whether the same result holds for periodic rather than random coefficients. Understanding the first-order errors in periodic homogenization is a classical problem; however, the pointwise expansion proved in this paper does not seem to be known. Our approach applies with some minor modifications, which we now briefly discuss.

The existence of a “stationary” corrector now becomes trivial. We assume that the coefficient \(\tilde{a}(x)\) is defined on the \(d\)-dimensional torus \(\mathbb {T}\). Since \(\tilde{b}=(\tilde{b}_1,\ldots ,\tilde{b}_d)\) with \(\tilde{b}_i=\frac{1}{2}\sum _{j=1}^d \partial _{x_j}\tilde{a}_{ji}\) is a vector of derivatives of periodic functions, we have

$$\begin{aligned} \int _{\mathbb {T}} \tilde{b}(x)dx=0. \end{aligned}$$

By the Fredholm alternative, the corrector equation

$$\begin{aligned} -\frac{1}{2}\nabla \cdot \tilde{a}(x)\nabla \tilde{\phi }_\xi =\xi \cdot \tilde{b} \end{aligned}$$

has a unique solution satisfying \(\int _{\mathbb {T}} \tilde{\phi }_\xi (x)dx=0\). The same discussion applies to \(\tilde{\psi }_\xi =(\xi +\nabla \tilde{\phi }_\xi )^T\tilde{a}(\xi +\nabla \tilde{\phi }_\xi )-\xi ^T\bar{A}\xi \) since \(\int _{\mathbb {T}} \tilde{\psi }_\xi (x)dx=0\), that is, there exists a unique \(\tilde{\Psi }_\xi \) solving

$$\begin{aligned} -\frac{1}{2}\nabla \cdot \tilde{a}(x)\nabla \tilde{\Psi }_\xi =\tilde{\psi }_\xi \end{aligned}$$

such that \(\int _{\mathbb {T}} \tilde{\Psi }_\xi (x)dx=0\). Since we assume \(\tilde{a}\) to be Hölder regular, the functions \(\tilde{\phi }_\xi , \nabla \tilde{\phi }_\xi \) and \(\tilde{\Psi }_\xi \) are bounded in x (see [23, Theorem 3.13]).
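As an illustrative numerical sketch (not part of the argument: the coefficient \(a(x)=2+\sin (2\pi x)\), the reduction to \(d=1\), and the finite-difference discretization are our own hypothetical choices), one can solve the one-dimensional cell problem on the torus and recover the classical fact that the homogenized coefficient is the harmonic mean of \(a\):

```python
import numpy as np

# One-dimensional periodic cell problem (a(x) * (1 + phi'(x)))' = 0 on the
# unit torus.  In d = 1 the flux a(1 + phi') is constant and equals the
# harmonic mean of a, which is the homogenized coefficient A_bar.
n = 512
h = 1.0 / n
x = np.arange(n) * h
a_mid = 2.0 + np.sin(2.0 * np.pi * (x + 0.5 * h))  # a at cell interfaces

# Conservative finite differences for (a * phi')' = -a', periodic BCs.
A = np.zeros((n, n))
rhs = np.zeros(n)
for i in range(n):
    ap, am = a_mid[i], a_mid[i - 1]  # a at interfaces i+1/2 and i-1/2
    A[i, i] = -(ap + am) / h**2
    A[i, (i + 1) % n] += ap / h**2
    A[i, (i - 1) % n] += am / h**2
    rhs[i] = -(ap - am) / h          # discrete -a'(x_i)

# The system is singular (constants lie in the kernel); pin the mean of phi.
A[0, :] = 1.0
rhs[0] = 0.0
phi = np.linalg.solve(A, rhs)

# Homogenized coefficient = average of the (constant) flux a * (1 + phi').
dphi = (np.roll(phi, -1) - phi) / h
A_bar = np.mean(a_mid * (1.0 + dphi))
harmonic_mean = 1.0 / np.mean(1.0 / a_mid)
print(A_bar, harmonic_mean)  # both close to sqrt(3) = 1.7320508...
```

In higher dimensions \(\bar{A}\) has no such closed form, and the cell problem must be solved as in the corrector equation above.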

Our estimates of variance decay in Propositions 4.9 and 4.10 can be replaced by a spectral gap inequality in the periodic setting. For the diffusion on the torus given by

$$\begin{aligned} dX_t=\tilde{b}(X_t)dt+\tilde{\sigma }(X_t)dB_t, \end{aligned}$$

the Lebesgue measure on \(\mathbb {T}\) is the unique invariant measure and the following estimate holds [7, Page 373, Theorem 3.2]:

$$\begin{aligned} \sup _{X_0\in \mathbb {T}}|\mathbb {E}_B\{ g(X_t)\}|\lesssim e^{-\rho t} \sup _{x\in \mathbb {T}}|g(x)|, \end{aligned}$$
(8.1)

for some \(\rho >0\), provided \(\int _{\mathbb {T}} g(x)dx=0\). This enables us to replace the estimates of Propositions 4.9 and 4.10 by exponential bounds.

With the above two points in mind, we apply the same arguments to derive a result similar to Theorem 2.1: for every fixed \((t,x)\),

$$\begin{aligned} u_\varepsilon \left( t,x\right) -u_{\mathrm {hom}}\left( t,x\right) = \varepsilon \nabla u_{\mathrm {hom}}\left( t,x\right) \cdot \tilde{\phi }\left( \frac{x}{\varepsilon }\right) +o\left( \varepsilon \right) \end{aligned}$$

where \(\tilde{\phi }=(\tilde{\phi }_{e_1},\ldots ,\tilde{\phi }_{e_d})\).