Abstract
We investigate the first-order correction in the homogenization of linear parabolic equations with random coefficients. In dimension 3 and higher and for coefficients having a finite range of dependence, we prove a pointwise version of the two-scale expansion. A similar expansion is derived for elliptic equations in divergence form. The result is surprising, since it was not expected to be true without further symmetry assumptions on the law of the coefficients.
1 Introduction
1.1 Main result
We are interested in parabolic equations in divergence form when \(d\ge 3\):
where \(\omega \in \Omega \) denotes a particular random realization sampled from a probability space \((\Omega ,\mathcal {F},\mathbb {P})\), the function f is bounded and smooth, and \(\tilde{a}: \mathbb {R}^d\times \Omega \rightarrow \mathbb {R}^{d\times d}\) is a random field of symmetric matrices satisfying the uniform ellipticity condition
Standard homogenization theory shows that under the assumptions of stationarity and ergodicity on the random field \(\tilde{a}(x,\omega )\), there exists a deterministic matrix \(\bar{A}\) such that \(u_\varepsilon \) converges to the solution \(u_{\mathrm {hom}}\) of a “homogenized” equation:
The goal of this paper is to further analyze the difference between \(u_\varepsilon (t,x,\omega )\) and \(u_{\mathrm {hom}}(t,x)\), in a pointwise sense. We assume that the coefficients \(\tilde{a}\) have a short range of dependence (more precisely, that they can be written as a local function of a homogeneous Poisson point process). For each fixed (t, x), we show that
where \(\tilde{\phi }\) is the (stationary) corrector, and where \(o(\varepsilon )/\varepsilon \rightarrow 0\) in \(L^1(\Omega )\).
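Schematically, and assuming the normalization in which the coefficients enter (1.1) as \(\tilde{a}(x/\varepsilon ,\omega )\), the expansion we prove takes the following form (a sketch; the precise statement is Theorem 2.1):

```latex
u_\varepsilon(t,x,\omega)
  \;=\; u_{\mathrm{hom}}(t,x)
  \;+\; \varepsilon \sum_{k=1}^d \partial_{x_k} u_{\mathrm{hom}}(t,x)\,
        \tilde{\phi}_{e_k}\!\Big(\frac{x}{\varepsilon},\omega\Big)
  \;+\; o(\varepsilon),
```

where, for each fixed (t, x), the error term is \(o(\varepsilon )\) in \(L^1(\Omega )\).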
1.2 Context
There is a large body of literature on stochastic homogenization, starting from the work of Kozlov [27] and Papanicolaou–Varadhan [35] on divergence form operators. Their results show that as the correlation length of the random coefficients goes to zero, the operator converges in a certain sense to the one with constant coefficients. The qualitative convergence essentially comes from an ergodic theorem. In order to provide convergence rates, a quantification of ergodicity is required. The first quantitative result was given by Yurinskii [37], where an algebraic rate was obtained. Other suboptimal results were obtained in [29]. Caffarelli and Souganidis considered nonlinear equations, and also derived an error estimate [9].
Optimal results have started appearing only very recently, beginning with the groundbreaking work of Gloria and Otto [18, 19] and Gloria, Neukamm and Otto [15, 16]. Further developments include [1–3, 17, 20, 28].
We would like in particular to draw the reader’s attention to the results in [16]. There, linear elliptic equations in divergence-form on the d-dimensional torus \(\mathbb {T}\) are considered (so that there is no boundary layer), and a two-scale expansion is proved, in the sense that
with obvious notation for \(u_\varepsilon \) and \(u_{\mathrm {hom}}\), and where \(O(\varepsilon )/\varepsilon \) is bounded in \(L^2(\Omega )\) uniformly over \(\varepsilon \). (Strictly speaking, the equations studied there are discrete, and a minor modification in the definition of \(u_\varepsilon \) is necessary in order to suppress the discretization error.) This statement is probably best understood as the summary of two estimates: one on \(u_\varepsilon \), and one on its gradient:
In particular, it does not follow from this result that
In fact, one of us (JCM) started this project with the belief that the expansion (1.5) was wrong in general; that in order for it to be true, an additional symmetry property of the coefficients had to be assumed, a good candidate being the invariance of the law of the coefficients under the transformation \(z \mapsto -z\). Even the weaker fact that
seemed a priori unlikely to be true in general. For the most part, this belief was based on three observations:
(1) Numerical evidence, in the discrete setting, indicates that \(\varepsilon ^{-1}(\mathbb {E}\{u_\varepsilon (x,\omega )\} - u_{\mathrm {hom}}(x)) \) does not converge to 0 for “generic” periodic environments, see [11, Section 4.4.2 and Figure 15];
(2) A simple toy model was proposed in [11, Remark 4.4] to “explain” that \(\varepsilon ^{-1}(\mathbb {E}\{u_\varepsilon (x,\omega ) \} - u_{\mathrm {hom}}(x))\) should be of order 1 in general: when summing i.i.d. random variables, the rate of convergence in the central limit theorem is generically of order \(\varepsilon \) when \(\varepsilon ^{-2}\) random variables are summed; but it is of order \(\varepsilon ^2\) when the law of the random variables is invariant under the transformation \(z \mapsto -z\);
(3) In the regime of small ellipticity contrast, Conlon and Fahim showed that \(\mathbb {E}\{u_\varepsilon (x,\omega )\} - u_{\mathrm {hom}}(x) = O(\varepsilon ^2)\) when the law of the coefficients is invariant under the transformation \(z \mapsto -z\), but only that it is \(O(\varepsilon )\) in general; see [10, Theorem 1.2, Proposition A.1, Remark 8 and Lemma A.2].
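The toy model in observation (2) is easy to probe numerically. The sketch below (an illustration with hypothetical laws and a hypothetical test function, not taken from [11]) computes the exact law of a sum of n i.i.d. lattice variables by convolution and measures the bias of a smooth statistic against its Gaussian limit; with \(\varepsilon = n^{-1/2}\), the bias decays like \(\varepsilon \) for a generic law and like \(\varepsilon ^2\) for a law invariant under \(z \mapsto -z\):

```python
import numpy as np

def exact_pmf_of_sum(p, n):
    """Exact pmf of S_n = X_1 + ... + X_n for i.i.d. variables on a unit
    lattice (support {v0, v0+1, ...}), computed by repeated convolution."""
    pmf = np.array([1.0])
    for _ in range(n):
        pmf = np.convolve(pmf, p)
    return pmf

def clt_error(p, values, n, f):
    """|E f((S_n - n*mu) / (sigma*sqrt(n))) - E f(Z)| with Z ~ N(0, 1)."""
    mu = float(np.dot(p, values))
    var = float(np.dot(p, (values - mu) ** 2))
    pmf = exact_pmf_of_sum(p, n)
    support = values[0] * n + np.arange(len(pmf))
    e_sum = float(np.dot(pmf, f((support - n * mu) / np.sqrt(n * var))))
    z = np.linspace(-10.0, 10.0, 40001)        # quadrature grid for E f(Z)
    e_gauss = float(np.sum(f(z) * np.exp(-z * z / 2)) * (z[1] - z[0])
                    / np.sqrt(2.0 * np.pi))
    return abs(e_sum - e_gauss)

f = lambda x: np.exp(-(x - 0.5) ** 2)          # a generic smooth statistic
ns = [100, 200, 400, 800]
# generic law on {0, 1, 2}: the third cumulant is nonzero
asym = [clt_error(np.array([0.5, 0.3, 0.2]), np.arange(3.0), n, f) for n in ns]
# law on {-1, 0, 1} invariant under z -> -z: the third cumulant vanishes
sym = [clt_error(np.array([0.25, 0.5, 0.25]), np.arange(3.0) - 1.0, n, f)
       for n in ns]
slope = lambda errs: np.polyfit(np.log(ns), np.log(errs), 1)[0]
```

The two fitted log-log slopes come out close to -1/2 and -1 respectively, in line with the Edgeworth heuristic: the \(n^{-1/2}\) correction is proportional to the third cumulant, which vanishes for symmetric laws.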
Despite these strong indications to the contrary, our result (1.4) on the parabolic equation implies the corresponding result for the elliptic equation. That is, the expansion (1.5) is actually true in general (i.e. without it being necessary to assume that the law of the coefficients is invariant under a transformation such as \(z \mapsto -z\)).
Why, then, are there such convincing arguments to the contrary? It seems to us that the core of the matter is that the foregoing observations (1)–(3) all concern discrete equations (i.e. where the underlying space is \(\mathbb {Z}^d\)), while our proof of (1.4) and (1.5) applies to continuous equations. Interestingly, we do not know how to prove our result (or the weaker statement (1.6)) in the discrete setting without making use of an assumption such as the invariance of the law of the coefficients under the transformation \(z \mapsto -z\).
Finally, we would like to point out that while it is fairly easy to pass from a result on the parabolic equation to one on the elliptic equation, the converse does not seem to be possible. In fact, we are not aware of any previous “two-scale expansion” result for parabolic equations.
1.3 The probabilistic approach
From a probabilistic point of view, homogenizing a differential operator with random coefficients corresponds to proving an invariance principle for a random motion in random environment. Kipnis and Varadhan have developed a general central limit theorem for additive functionals of reversible Markov processes [25]. A large class of random motions in random environment can be analyzed by following their approach, using also the idea of the “medium seen from the moving particle” (see [26] and the references therein). The proof is based on a martingale decomposition and an application of the martingale central limit theorem (CLT).
In order to make this argument quantitative, two ingredients are necessary. One is a quantitative version of the martingale CLT; the other is a quantitative estimate on the speed of convergence to equilibrium of the medium seen from the particle. This route was already pursued in [21, 30, 31]. The quantitative martingale CLT developed in [31] for general martingales was further explored in [21]. It was shown there that by focusing on continuous martingales, one can express the first-order correction in the CLT in simple terms involving the quadratic variation of the martingale. This will provide us with a suitable quantitative martingale CLT. In addition, we will also need to assert that the process of the environment seen from the particle converges to equilibrium sufficiently fast. This question was first investigated in [29], and we will borrow from there the idea that it is sometimes sufficient to understand the convergence to equilibrium of the environment as seen by a standard Brownian motion (independent of the environment). Furthermore, we will rely crucially on moment bounds on the corrector and on the gradients of the Green function recently obtained in [14, 20]. All these tools will enable us to identify a deterministic first-order correction to the expansion in (1.4), which we will finally show to be zero.
1.4 Other relevant work
The probabilistic approach is particularly well-suited for obtaining pointwise information such as (1.4). While such pointwise results are relatively rare, the precise behavior of more global random quantities has received considerable attention. In particular, a central limit theorem for the averaged energy density was derived in [8, 34, 36]. The large-scale correlations and then the scaling limit of the corrector are investigated in [32, 33]. A comparable study of the scaling limit of the fluctuations of \(u_\varepsilon \) was performed in [22]. We stress however that this result only characterizes the fluctuations of \(u_\varepsilon \), but not the bias \(\mathbb {E}[u_\varepsilon ] - u_\mathrm {hom}\). The desire to understand the typical size of the bias (cf. (1.6)) is what initiated our study.
For other types of equations, e.g. a deterministic operator perturbed by a highly oscillatory random potential, fluctuations around homogenized limits have been analyzed in various contexts [4, 5, 13, 21]; see the review [6]. From a probabilistic perspective, this setting corresponds to a random motion independent of the random environment.
1.5 Organization and notation
The rest of the paper is organized as follows. We state our assumptions on the random field \(\tilde{a}(x,\omega )\) and the main results in Sect. 2. We then present a standard approach to diffusions in random environments in Sect. 3. Some key estimates on the correctors and the Green functions are contained in Sect. 4. The proofs of the main results are presented in Sects. 5, 6 and 7.
We write \(a\lesssim b\) when \(a\le Cb\) with a constant C independent of \(\varepsilon ,t,x\). The normal distribution with mean \(\mu \) and variance \(\sigma ^2\) is denoted by \(N(\mu ,\sigma ^2)\), and \(q_t(x)\) is the density of N(0, t). The Fourier transform is defined by \(\hat{f}(\xi )=\int _{\mathbb {R}^d}f(x)e^{-i\xi \cdot x}dx\). We will have two independent probability spaces with the associated expectations denoted by \(\mathbb {E},\mathbb {E}_B\) respectively. The expectation in the product probability space is then denoted by \(\mathbb {E}\mathbb {E}_B\).
2 Assumptions and main results
Let \(\mathcal {M}\) be an arbitrary metric space equipped with its Borel \(\sigma \)-algebra, and let \(\mu \) be a \(\sigma \)-finite measure on \(\mathcal {M}\). We let \(\omega \) be a Poisson point process on \(\mathcal {M} \times \mathbb {R}^d\) with intensity measure \({\mathrm {d}}\mu (m) \, {\mathrm {d}}x\). We think of \(\omega \) as an element of the probability space \((\Omega ,\mathcal {F},\mathbb {P})\), where \(\Omega \) is the collection of countable subsets of \(\mathcal {M} \times \mathbb {R}^d\), and \(\mathcal {F}\) is the smallest \(\sigma \)-algebra that makes the maps
measurable, for every measurable \(A \subseteq \mathcal {M} \times \mathbb {R}^d\). For a construction of such Poisson point processes, we refer to [24, Section 2.5]. For any measurable \(S\subseteq \mathbb {R}^d\), we denote the \(\sigma \)-algebra generated by the Poisson point process restricted to \(\mathcal {M}\times S\) by \(\mathcal {F}_S\).
The group of translations of \(\mathbb {R}^d\) can be naturally lifted to the space \(\Omega \) by defining, for every \(x \in \mathbb {R}^d\),
It is a classical result that \(\left\{ \tau _x,x\in \mathbb {R}^d\right\} \) satisfies the following properties:
(1) Measure-preserving: \(\mathbb {P}\circ \tau _x=\mathbb {P}\).
(2) Ergodicity: if a measurable set \(A \subseteq \Omega \) is such that for every \(x \in \mathbb {R}^d\), \(A = \tau _x(A)\), then \(\mathbb {P}(A)\in \{0,1\}\).
(3) Stochastic continuity: for any \(\delta >0\) and f bounded measurable,
$$\begin{aligned} \lim _{h\rightarrow 0}\mathbb {P}\left\{ |f(\tau _h\omega )-f(\omega )|\ge \delta \right\} =0. \end{aligned}$$
We denote the inner product and norm on \(L^2(\Omega )\) by \(\langle .,.\rangle \) and \(\Vert .\Vert \) respectively, and define the operator \(T_x\) on \(L^2(\Omega )\) as \(T_x f:=f\circ \tau _{-x}\). The family \(\{T_x,x\in \mathbb {R}^d\}\) forms a d-parameter group of unitary operators on \(L^2(\Omega )\). Stochastic continuity implies that the group is strongly continuous, and ergodicity asserts that a function f is constant if and only if \(T_xf=f\) for all \(x\in \mathbb {R}^d\).
Let \(\{D_k,k=1,\ldots ,d\}\) be the generators of the group \(\{T_x,x\in \mathbb {R}^d\}\). They correspond to differentiations in \(L^2(\Omega )\) in the canonical directions denoted by \(\{e_k,k=1,\ldots ,d\}\). The gradient is then denoted by \(D:=(D_1,\ldots ,D_d)\), and we define the Sobolev space \(H^1(\Omega )\) as the completion of smooth functions under the norm \(\Vert f\Vert _{H^1}^2:=\langle f,f\rangle +\sum _{k=1}^d\langle D_kf,D_kf\rangle \).
Any function f on \(\Omega \) can be extended to a stationary random field \(\tilde{f}(x,\omega ):=f(\tau _{-x}\omega )\). The random coefficients \(\tilde{a}(x,\omega )\) appearing in (1.1) are given by \(\tilde{a}(x,\omega )=a(\tau _{-x}\omega )\) for some measurable \(a:\Omega \rightarrow \mathbb {R}^{d\times d}\). We further make the following assumptions on a:
(1) Uniform ellipticity and smoothness. For every \(\omega \in \Omega \), \(a(\omega )\) is a symmetric matrix satisfying
$$\begin{aligned} C^{-1}|\xi |^2\le \xi ^T a(\omega )\xi \le C|\xi |^2 \end{aligned}$$
(2.1)
for some constant \(C>0\). Each entry \(\tilde{a}_{ij}(x,\omega )=a_{ij}(\tau _{-x}\omega )\) has \(C^2\) sample paths whose first and second order derivatives are uniformly bounded in \((x,\omega )\).
(2) Local dependence. There exists a constant \(C > 0\) such that for every \(x\in \mathbb {R}^d\), \(\tilde{a}(x,\omega )=a(\tau _{-x}\omega )\) is \(\mathcal {F}_{\{y:|y-x|\le C\}}\)-measurable.
The coefficient field \(a(\omega )\) can for instance be constructed by choosing a “shape function” \(g: \mathcal {M} \times \mathbb {R}^d \rightarrow E\) for some measurable vector space E (e.g. the space of symmetric matrices) and a “cut-off function” \(F : E \rightarrow \mathbb {R}^{d\times d}\) (that can be used to ensure uniform ellipticity), and letting
The condition of local dependence on a is guaranteed if g(m, z) is non-zero only for z varying in a compact set. As we will see below, the Poisson structure is only used to establish the covariance estimate (4.2) and then prove Propositions 4.6 and 4.7. Although the law of the Poisson point process is invariant under transformations such as \(z \mapsto -z\), this is of course not the case in general for the coefficient field \(\tilde{a}(x,\omega )\) itself.
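As an illustration, the construction above can be simulated in a few lines. Everything below (the bump g, the clipping used as the cut-off F, and all parameters) is a hypothetical choice, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(m, z):
    """Hypothetical shape function: a compactly supported bump (radius 1)
    carried by each Poisson point, with amplitude modulated by the mark m."""
    r2 = np.sum(z * z, axis=-1)
    return np.where(r2 < 1.0,
                    m * np.exp(-1.0 / (1.0 - np.minimum(r2, 0.999))), 0.0)

def sample_poisson_cloud(L, d, intensity):
    """Poisson point process on [0, L]^d with i.i.d. uniform marks."""
    n = rng.poisson(intensity * L ** d)
    return rng.uniform(0.0, L, size=(n, d)), rng.uniform(0.5, 1.5, size=n)

def coefficient(x, points, marks, lam=0.5, Lam=2.0):
    """Scalar coefficient a(x) = F(sum_i g(m_i, x - z_i)), where the cut-off
    F clips into [lam, Lam] to enforce the uniform ellipticity bound (2.1)."""
    s = np.sum(g(marks, x[None, :] - points), axis=0)
    return float(np.clip(1.0 + s, lam, Lam))

points, marks = sample_poisson_cloud(L=10.0, d=2, intensity=1.0)
x = np.array([5.0, 5.0])
a_x = coefficient(x, points, marks)  # one pointwise evaluation of the field
```

Since g vanishes for \(|z|\ge 1\), only the Poisson points within distance 1 of x contribute to a(x), which gives the local dependence condition with \(C=1\); the clipping enforces the uniform ellipticity bounds of (2.1).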
The following is our main theorem.
Theorem 2.1
Assume \(f\in \mathcal {C}_c^\infty (\mathbb {R}^d)\). For every (t, x), there exists \(C_\varepsilon \rightarrow 0\) in \(L^1(\Omega )\) such that
Here \(\phi =(\phi _{e_1},\ldots ,\phi _{e_d})\), where \(\phi _{e_k}\) is the (zero-mean) stationary corrector in the canonical direction \(e_k\).
Remark 2.2
The existence of \(\phi \) is given by Theorem 4.1. An examination of the proof reveals that the smoothness condition on f can be relaxed. It suffices to assume that sufficiently many weak derivatives of f belong to \(L^2(\mathbb {R}^d)\) (i.e., the Fourier transform \(\hat{f}\) is such that \(\hat{f}(\xi )(1+|\xi |)^n\) is integrable for some large n).
Remark 2.3
It would be interesting to quantify the convergence of \(\mathbb {E}[C_\varepsilon ]\) to 0. We discuss a possible approach to show that \(\mathbb {E}[C_\varepsilon ] \lesssim \sqrt{\varepsilon }\) (up to logarithmic corrections) in Remark 6.8 below.
Remark 2.4
To the best of our knowledge, Theorem 2.1 was not known even in the periodic case. We explain how to adapt our methods to this setting in Sect. 8.
Theorem 2.1 gives, for every (t, x), the existence of some \(C_\varepsilon = C_\varepsilon (t,x)\) such that (2.2) holds. Our proof actually shows more. In particular, for every \(T >0\), \(\sup _{x \in \mathbb {R}^d, t \leqslant T} \mathbb {E}\{|C_\varepsilon (t,x)|\}\) tends to 0 as \(\varepsilon \) tends to 0, and we also obtain some control on the growth of this quantity as T grows. Therefore, we can derive a similar result for elliptic equations, which we now describe more precisely.
Let \(U_\varepsilon (x,\omega )\) and \(U_{\mathrm {hom}}(x)\) solve the following equations on \(\mathbb {R}^d\) respectively
Theorem 2.5
Under the same assumption as in Theorem 2.1 and for every x, there exists \(\tilde{C}_\varepsilon \rightarrow 0\) in \(L^1(\Omega )\) such that
3 Diffusions in random environments
In this section, we present a standard approach to diffusions in random environments, including the process of the medium seen from the particle, corrector equations and the martingale decomposition. A complete introduction can be found in [26, Chapter 9], so we do not present the details.
For every fixed \(\omega \in \Omega , x\in \mathbb {R}^d\) and \(\varepsilon >0\), we define the diffusion process \(X_t^\omega \) on \(\mathbb {R}^d\), starting from \(x/\varepsilon \), by the Itô stochastic differential equation
Here, the drift \(\tilde{b}=(\tilde{b}_1,\ldots ,\tilde{b}_d)\) is defined by \(\tilde{b}_i=\frac{1}{2}\sum _{j=1}^d \partial _{x_j}\tilde{a}_{ji}\), the diffusion matrix is \(\tilde{\sigma }=\sqrt{\tilde{a}}\), and the driving force \(B_t=(B_t^1,\ldots ,B_t^d)\) is a standard d-dimensional Brownian motion built on a different probability space \((\Sigma ,\mathcal {A},\mathbb {P}_B)\) with the associated expectation \(\mathbb {E}_B\). (Although we keep it implicit in the notation, note that the starting point of the diffusion depends on \(\varepsilon \).)
The medium or environment seen from the particle is the process taking values in \(\Omega \) defined by
The following lemma is taken from [26, Proposition 9.8].
Lemma 3.1
\((\omega _s)_{s \,\geqslant \, 0}\) is a Markov process that is reversible and ergodic with respect to the measure \(\mathbb {P}\). Its generator is given by
The diffusively rescaled process \(\varepsilon X_{t/\varepsilon ^2}^\omega \) starts from x, with an infinitesimal generator given by
Hence, we can express the solution to (1.1) as an average with respect to the diffusion process \(\varepsilon X_{t/\varepsilon ^2}^\omega \), i.e., for every fixed \(\omega \in \Omega , t>0, x\in \mathbb {R}^d, \varepsilon >0\), we have
With the above probabilistic representation, the problem reduces to an analysis of the asymptotic behavior of \(\varepsilon X_{t/\varepsilon ^2}^\omega \). In view of (3.1), the process can be written as
It is clear that \(b=(b_1,\ldots ,b_d)\) with \(b_i=\frac{1}{2}\sum _{j=1}^d D_ja_{ji}\) and \(\sigma =\sqrt{a}\).
The idea is to decompose the drift term \(\varepsilon \int _0^{t/\varepsilon ^2}b(\omega _s)ds\) as a martingale plus some small remainder. Since it is an additive functional of a stationary and ergodic Markov process, we can use the Kipnis–Varadhan method. For any \({\uplambda }>0\), the \({\uplambda }\)-corrector in the direction of \(\xi \in \mathbb {R}^d\), denoted by \(\phi _{{\uplambda },\xi }\), is defined as the solution in \(L^2(\Omega )\) to the following equation:
By Itô’s formula,
Hence, the projection on \(\xi \) of the drift term can be decomposed as
so the projection on \(\xi \) of the rescaled process admits the following representation:
where the remainder \(R_t^\varepsilon ({\uplambda })\) and the martingale \(M_t^\varepsilon ({\uplambda })\) are given by
We point out that Eq. (3.5) on the probability space \(\Omega \) corresponds to the following PDE on the physical space \(\mathbb {R}^d\):
where we recall that \(\tilde{\phi }_{{\uplambda },\xi }(x,\omega )=\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\). Letting \(G^\omega _{\uplambda }(x,y)\) be the Green function associated with \({\uplambda }-L_1^\omega \), we have the integral representation
We briefly discuss the proof of homogenization; see [26, Chapter 9] for details. For the remainder, it can be shown that \({\uplambda }\langle \phi _{{\uplambda },\xi },\phi _{{\uplambda },\xi }\rangle \rightarrow 0\) as \({\uplambda }\rightarrow 0\), so by applying Lemma 3.1 and choosing \({\uplambda }=\varepsilon ^2\), we obtain \(\mathbb {E}\mathbb {E}_B\{|R_t^\varepsilon ({\uplambda })|^2\}\rightarrow 0\) as \(\varepsilon \rightarrow 0\). For the martingale, we can first show that \(D\phi _{{\uplambda },\xi }\) converges in \(L^2(\Omega )\), with the limit formally written as \(D\phi _\xi \). Then, by a martingale central limit theorem, \(M_t^\varepsilon ({\uplambda })\) converges in distribution to a Gaussian with mean zero and variance \(\sigma _\xi ^2:=\xi ^T\bar{A}\xi \), where the homogenized matrix \(\bar{A}\) is given by
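In this framework, the formula for \(\bar{A}\) takes the classical form (we record it as a sketch, consistent with the definition of \(\psi _\xi \) in Sect. 4, which has mean zero precisely by this choice):

```latex
\xi^T \bar{A}\,\xi \;=\; \mathbb{E}\big\{ (\xi + D\phi_\xi)^T a\, (\xi + D\phi_\xi) \big\},
\qquad \xi \in \mathbb{R}^d .
```

This is the standard Kipnis–Varadhan representation of the effective diffusivity.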
We can express the solution (3.4) in the Fourier domain using (3.7) as
By the convergence of \(R_t^\varepsilon ({\uplambda })\rightarrow 0\) and \(M_t^\varepsilon ({\uplambda })\rightarrow N(0,\sigma _\xi ^2)\), it can be shown that
in probability.
4 Properties of correctors and functionals of the environment seen from the particle
In this section, we first present some key estimates on the corrector \(\phi _{{\uplambda },\xi }\) and the Green function \(G_{\uplambda }^\omega (x,y)\). Then we analyze the decorrelation rate of certain functionals of the corrector by an application of the spectral gap inequality. In the end, we estimate the variance decay of functionals of the environmental process by a comparison of resolvents. Throughout the section, \(\xi \) is a fixed vector in \(\mathbb {R}^d\).
The following two theorems are borrowed from [20, Proposition 1] and [14, Corollary 1.5].
Theorem 4.1
[20] Recall that \(d\ge 3\). There exists \(\phi _{\xi }\in H^1(\Omega )\) such that \(\phi _{{\uplambda },\xi }\rightarrow \phi _\xi \) in \(H^1(\Omega )\) as \({\uplambda }\) tends to 0. Furthermore, the p-th moments of \(\phi _{{\uplambda },\xi },D\phi _{{\uplambda },\xi },\phi _\xi ,D\phi _\xi \) are uniformly bounded in \({\uplambda }\) for any \(p<\infty \).
Remark 4.2
From (3.5), it is clear that \(\mathbb {E}\{\phi _{{\uplambda },\xi }\}=0\), so \(\mathbb {E}\{\phi _\xi \}=0\).
Remark 4.3
For the gradient of the corrector \(D\phi _{{\uplambda },\xi }\), [20, Proposition 1] proves
for any \(p>0\), i.e., a high moment bound of some spatial average. This can be improved with additional regularity assumptions on \(\tilde{a}\). Recall that for almost every \(\omega \), \(\tilde{\phi }_{{\uplambda },\xi }(x,\omega )\) is the weak solution to
and since the sample path of \(\tilde{a}_{ij}(x,\omega )\) is \(C^2\) and hence Hölder continuous (uniformly over \(\omega \)), the following estimate is given by standard Hölder regularity theory [23, Theorems 3.13 and 3.1]
with the constant C independent of \(\omega \) and \({\uplambda }\leqslant 1\). By taking expectation, we derive a bound on the \(L^p\) norm of \(D\phi _{{\uplambda },\xi }\) that is uniform in \({\uplambda }\leqslant 1\).
Theorem 4.4
[1, 14] Recall that \(d\ge 3\). For every \(p>0\), there exists \(C_p < \infty \) such that for every \({\uplambda }\geqslant 0\) and \(x,y\in \mathbb {R}^d\),
where the constant \(C_p>0\) does not depend on \({\uplambda }\), and \(\nabla _x\nabla _y\) denotes the mixed second order derivatives.
The Poisson structure that we assume enables us to decompose the randomness into i.i.d. random variables: we have \(\omega =\{\eta _k,k\in \mathbb {Z}^d\}\), where \(\eta _k\) is the Poisson point process restricted to \(\mathcal {M}\times (k+[0,1)^d)\). In this way, we can use the spectral gap inequality given by [15, Lemma 1] to estimate the decorrelation rates of functions on \(\Omega \). For any \(f\in L^2(\Omega )\) with \(\mathbb {E}\{f\}=0\), the inequality states that
with \(\partial _k f:=f-\mathbb {E}\{f|\{\eta _i, i\ne k\}\}\) describing the dependence of f on \(\eta _k\).
By following the same argument, a covariance estimate can be derived, i.e., for any \(f,g\in L^2(\Omega )\) with \(\mathbb {E}\{f\}=\mathbb {E}\{g\}=0\), we have
We further claim that
Here \(f_k(\omega ):=f(\omega _k)\) with \(\omega _k:=\{\eta _i,i\ne k\}\cup \{\tilde{\eta }_k\}\) and \(\tilde{\eta }_k\) an independent copy of \(\eta _k\), i.e., \(\omega _k\) is a perturbation of \(\omega \) at k. First, since conditional expectation is an \(L^2\) projection, we have \(\mathbb {E}\{|\partial _k f|^2\}=\mathbb {E}\{f^2\}-\mathbb {E}\{|\mathbb {E}\{f|\{\eta _i, i\ne k\}\}|^2\}\). Secondly, \(\mathbb {E}\{|f-f_k|^2\}=2\mathbb {E}\{f^2\}-2\mathbb {E}\{ff_k\}\) and \(\mathbb {E}\{ff_k\}=\mathbb {E}\{|\mathbb {E}\{f|\{\eta _i, i\ne k\}\}|^2\}\) by conditioning on \(\{\eta _i,i\ne k\}\). So (4.3) is proved.
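The identity (4.3) can also be checked on a minimal discrete example by exhaustive enumeration; here \(\eta _1,\eta _2\) are i.i.d. Bernoulli variables and f is an arbitrary (illustrative) function of the pair:

```python
import itertools
import numpy as np

# Exhaustive check of E|∂_k f|^2 = (1/2) E|f - f_k|^2 for k = 1, where
# ∂_1 f = f - E{f | η_2} and f_1 resamples η_1 by an independent copy.
vals = [0.0, 1.0]                       # η_i i.i.d. uniform on {0, 1}

def f(e1, e2):
    return e1 * e2 + e1                 # an arbitrary non-symmetric choice

# E|∂_1 f|^2: condition on η_2, i.e. average f over the first coordinate.
lhs = np.mean([(f(a, b) - np.mean([f(ap, b) for ap in vals])) ** 2
               for a, b in itertools.product(vals, vals)])

# (1/2) E|f - f_1|^2: resample the first coordinate independently.
rhs = 0.5 * np.mean([(f(a, b) - f(at, b)) ** 2
                     for a, at, b in itertools.product(vals, vals, vals)])
# lhs == rhs == 0.625 for this choice of f
```

The two sides agree exactly, as the conditioning argument in the proof of (4.3) predicts.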
Combining (4.2) and (4.3), we obtain
This will be our main tool to estimate the decorrelation rate of functionals on \(\Omega \).
Remark 4.5
The covariance estimate also holds for the random checkerboard structure, e.g., let \(\tilde{a}(x,\omega )=\eta _k\) if \(x-k\in [0,1)^d\), with \(\{\eta _k, k\in \mathbb {Z}^d\}\) i.i.d. matrix-valued random variables. However, in that case \(\tilde{a}(x,\omega )\) is only stationary with respect to shifts in \(\mathbb {Z}^d\), and such situations are not covered by Theorems 4.1 and 4.4.
The following is an estimate of the decorrelation rate of \(\phi _\xi \).
Proposition 4.6
\(|\mathbb {E}\{\phi _\xi (\tau _0\omega )\phi _\xi (\tau _{-x}\omega )\}|\lesssim |\xi |^2(1\wedge \frac{1}{|x|^{d-2}})\).
Proof
By Theorem 4.1, \(\phi _{{\uplambda },\xi }\rightarrow \phi _\xi \) in \(L^2(\Omega )\), so we only need to show that the estimate holds for \(\phi _{{\uplambda },\xi }\) with an implicit constant independent of \({\uplambda }\). Clearly, it suffices to consider |x| sufficiently large.
By (4.4) we have
where \(\omega _k\) is obtained by replacing \(\eta _k\) in \(\omega \) by an independent copy \(\tilde{\eta }_k\).
Now we only need to control \(\mathbb {E}\{|\phi _{{\uplambda },\xi }(\tau _{-x}\omega )-\phi _{{\uplambda },\xi }(\tau _{-x}\omega _k)|^2\}\) for \(x\in \mathbb {R}^d, k\in \mathbb {Z}^d\). Since this quantity is bounded, it suffices to consider the case where \(|x-k|\) is large. Recall that we write \(\tilde{\phi }_{{\uplambda },\xi }(x,\omega )=\phi _{{\uplambda },\xi }(\tau _{-x}\omega )\), and that
As a consequence,
since \(\xi \cdot \tilde{b} = \frac{1}{2} \nabla \cdot (\tilde{a} \xi )\). By the assumptions on a, \(\tilde{a}(y,\omega )-\tilde{a}(y,\omega _k)=0\) when \(|y-k|\ge C\) for some constant C, so
which implies
By Theorem 4.1 and the fact that \(\tilde{\phi }_{{\uplambda },\xi }\) is linear in \(\xi \), we first observe that
then we apply Theorem 4.4 on the r.h.s. of (4.10) to derive
Now we have
where the last inequality comes from Lemma 9.1. The proof is complete. \(\square \)
Define
by the definition of the homogenized matrix \(\bar{A}\) in (3.12), \(\psi _\xi \) has mean zero, and we can write it as \(\psi _\xi =\sum _{i,j=1}^d\xi _i\xi _j\psi _{ij}\) with
The following is an estimate of the decorrelation rate of \(\psi _\xi \).
Proposition 4.7
\(|\mathbb {E}\{\psi _\xi (\tau _0\omega )\psi _\xi (\tau _{-x}\omega )\}|\lesssim |\xi |^4( 1\wedge \frac{\log (2+|x|)}{|x|^d})\).
Proof
First we define \(\psi _{{\uplambda },\xi }:=(\xi +D\phi _{{\uplambda },\xi })^Ta(\xi +D\phi _{{\uplambda },\xi })-\xi ^T\bar{A}_{\uplambda }\xi \), where \(\bar{A}_{\uplambda }\) is chosen so that \(\psi _{{\uplambda },\xi }\) has zero mean. By Theorem 4.1, \(\psi _{{\uplambda },\xi }\rightarrow \psi _\xi \) in \(L^2(\Omega )\), so we only need to consider \(\psi _{{\uplambda },\xi }\) and show that the estimate holds uniformly in \({\uplambda }\).
Similarly, we apply (4.4) to obtain
with \(\omega _k\) the perturbation of \(\omega \) at k.
For any vectors \(x_i,y_i\in \mathbb {R}^d\) and matrices \(A_i\in \mathbb {R}^{d\times d}\), \(i=1,2\), we have
with \(\Vert .\Vert \) denoting the matrix norm here, so by the moment bounds of \(D\phi _{{\uplambda },\xi }\), we derive
First, \(\sqrt{\mathbb {E}\{\Vert a(\tau _{-x}\omega )-a(\tau _{-x}\omega _k)\Vert ^4\}}\lesssim 1_{|x-k|\le C}\) by the local dependence of a on \(\omega \).
Secondly, recalling (4.8),
By the same discussion as in the proof of Proposition 4.6, we obtain
To summarize, since \(D\phi _{{\uplambda },\xi }(\tau _{-x}\omega )=\nabla \tilde{\phi }_{{\uplambda },\xi }(x,\omega )\), we have
so
where the last inequality comes from Lemma 9.1. The proof is complete. \(\square \)
For any \(f\in L^2(\Omega )\) with \(\mathbb {E}\{f\}=0\), we are interested in the variance decay of
Since \(\omega _t=\tau _{-X_t^\omega }\omega \) and \(X_t^\omega \) is driven by the generator \(L_1^\omega =\frac{1}{2}\nabla \cdot (\tilde{a}(x,\omega )\nabla )\) with \(\tilde{a}\) strictly positive definite, heuristically \(X_t^\omega \) should spread at least as fast as a Brownian motion with a sufficiently small diffusion constant. In other words, letting \(f_t^o:=\mathbb {E}_B\{f(\omega _t^o)\}\) with \(\omega _t^o=\tau _{-B_t}\omega \), we expect the decay to 0 of \(f_t\) to be at least as fast as that of \(f_t^o\) (up to rescaling time by a suitable constant). The following result is a precise statement of this idea (see [29, Lemma 5.1] for a classical proof).
Proposition 4.8
For any \({\uplambda }\geqslant 0\),
The constant \(C>0\) only depends on the ellipticity constant in (2.1).
For \(f=\phi _\xi \) or \(\psi _\xi \), the following results hold.
Proposition 4.9
Proof
First, for any f we have
where \(B^1,B^2\) are two independent Brownian motions and \(\mathbb {E}_{B^1,B^2}\) denotes the average with respect to them.
Next, let \(f=\phi _\xi \) and let \(R_{\phi _\xi }\) be the covariance function of \(\phi _\xi \) (recall that \(q_t\) is the density of the law N(0, t)); we obtain
where we used the result \(|R_{\phi _\xi }(x)|\lesssim |\xi |^2(1\wedge |x|^{2-d})\) given by Proposition 4.6.
Since \(\mathbb {E}\{|f_{t/2}|^2\}\) decreases in t, from Proposition 4.8 we have
for any \({\uplambda }>0\). We can choose \({\uplambda }=1/t\) on the r.h.s. of the above display and derive
The proof is complete. \(\square \)
Proposition 4.10
\(\int _0^\infty \mathbb {E}\{|\mathbb {E}_B\{\psi _\xi (\omega _t)\}|^2\} \, dt\lesssim |\xi |^4\).
Proof
Let \(f=\psi _\xi \). By Proposition 4.8, we have
so we only need to prove that \(\int _0^\infty \mathbb {E}\{|f_{t}^o|^2\} \, dt\lesssim |\xi |^4\). Let \(R_{\psi _\xi }\) be the covariance function of \(\psi _\xi \). By the same argument as in Proposition 4.9,
By Proposition 4.7, \(|R_{\psi _\xi }(x)|\lesssim |\xi |^4(1\wedge |x|^{-d}\log (2+|x|))\), so after integrating in t we obtain
since \(d\ge 3\). The proof is complete. \(\square \)
Before presenting the proof of the main theorem, we decompose the error as
Since \(u_\varepsilon -u_{\mathrm {hom}}\) does not depend on \({\uplambda }\), we can send \({\uplambda }\rightarrow 0\) on the r.h.s. of the above display. By Theorem 4.1, \(R_t^\varepsilon ({\uplambda })\rightarrow R_t^\varepsilon \) and \(M_t^\varepsilon ({\uplambda })\rightarrow M_t^\varepsilon \) in \(L^2(\Omega \times \Sigma )\), where
Therefore, the error can be rewritten as
The first part measures how small the remainder \(R_t^\varepsilon \) is, and the second part measures how close the martingale \(M_t^\varepsilon \) is to a Brownian motion. It turns out that the error coming from the remainder generates the random, centered fluctuation, while the error coming from the martingale is of lower order. We will analyze them separately in the following two sections.
5 An analysis of the remainder
We define the error coming from the remainder in (4.28) as
Let \(\phi =(\phi _{e_1},\ldots ,\phi _{e_d})\). The goal of this section is to show
Proposition 5.1
where C is some constant.
Recall that
By Theorem 4.1 and the stationarity of \(\omega _s\), we obtain that
Using the fact that \(|e^{ix}-1-ix|\le x^2\) and \(\hat{f}(\xi )|\xi |^2\in L^1(\mathbb {R}^d)\), we derive
where
Now we only need to analyze \(\mathcal {E}_2\). The two terms in \(R_t^\varepsilon \) are analyzed separately. For \(-\varepsilon \phi _\xi (\omega _{t/\varepsilon ^2})\), we can use the variance decay of \(\mathbb {E}_B\{\phi _\xi (\omega _t)\}\) when t is large. For \(\varepsilon \phi _\xi (\omega _0)\), since it is independent of the Brownian path, we expect that \(e^{iM_t^\varepsilon }\) averages itself. This will be proved by applying a special case of a quantitative martingale central limit theorem, which we present as the following proposition.
Proposition 5.2
[31, Theorem 3.2] If \(M_t\) is a continuous martingale with predictable quadratic variation \(\langle M\rangle _t\), and \(W_t\) is a standard Brownian motion, then
with the distance \(d_{k}\) defined as
Remark 5.3
In fact, the argument in [31] simplifies when we assume (as we do here) that the martingale \(M_t\) is continuous. In this case, the multiplicative constant \((k \vee 1)\) in (5.5) can be replaced by k, and the condition \(\Vert f'\Vert \leqslant 1\) in (5.6) can be dropped.
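The content of Proposition 5.2 is that a martingale whose quadratic variation concentrates around \(\sigma ^2 t\) is close in distribution to a Brownian motion at time \(\sigma ^2 t\). The following toy simulation (not from the paper; the predictable volatility rule is an arbitrary choice made up for this sketch) illustrates the comparison numerically.

```python
import numpy as np

# Toy illustration of the quantitative martingale CLT: a discrete martingale
# with predictable volatility, compared against a Gaussian with matching
# variance.  All parameters below are arbitrary choices for this sketch.
rng = np.random.default_rng(0)
n, steps, t = 10_000, 200, 1.0
dt = t / steps
dB = rng.standard_normal((n, steps)) * np.sqrt(dt)

# Predictable volatility: depends only on the path up to the previous step.
path = np.cumsum(dB, axis=1)
prev = np.concatenate([np.zeros((n, 1)), path[:, :-1]], axis=1)
vol = np.where(prev > 0, 1.1, 0.9)

M = (vol * dB).sum(axis=1)            # martingale M_t = int_0^t vol dB
qv = (vol ** 2 * dt).sum(axis=1)      # its quadratic variation <M>_t

sigma2 = qv.mean()                    # reference variance sigma^2 * t
W = rng.standard_normal(n) * np.sqrt(sigma2)

f = np.tanh                           # a smooth bounded test function
gap = abs(f(M).mean() - f(W).mean())  # |E f(M_t) - E f(W_{sigma^2 t})|
spread = np.sqrt(np.mean(np.abs(qv - sigma2)))
print(gap, spread)
```

In this example `gap` comes out well below `spread`, consistent with a bound on the distributional distance in terms of \(\mathbb {E}\{|\langle M\rangle _t-\sigma ^2t|\}^{1/2}\).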
We also need the following second moment estimate of additive functionals of \(\omega _s\).
Lemma 5.4
For any \(f\in L^2(\Omega )\), we have
Proof
The proof is a standard calculation. First, by stationarity we have
Secondly, we change variables \(s\mapsto u-s\) and integrate in u to obtain
By reversibility we further derive
The proof is complete. \(\square \)
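Writing \(f_t=e^{tL}f\) as in Proposition 4.9, the chain of identities in the proof can be reconstructed as follows (a sketch consistent with the three stated steps, offered in place of the elided displays):

```latex
\begin{aligned}
\mathbb{E}\,\mathbb{E}_B\Big\{\Big|\int_0^t f(\omega_s)\,ds\Big|^2\Big\}
  &= 2\int_0^t\!\!\int_0^u \langle e^{(u-s)L}f,\,f\rangle\,ds\,du
  && \text{(stationarity)}\\
  &= 2\int_0^t (t-s)\,\langle e^{sL}f,\,f\rangle\,ds
  && \text{(change of variables, integrate in } u\text{)}\\
  &\le 2t\int_0^t \mathbb{E}\{|f_{s/2}|^2\}\,ds
  && \text{(reversibility: } \langle e^{sL}f,f\rangle=\|e^{sL/2}f\|^2\text{)}.
\end{aligned}
```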
Now we can combine (5.3) with the following Lemmas 5.5 and 5.6 to complete the proof of Proposition 5.1.
Lemma 5.5
Proof
First, we have for any \(u\in (0,t)\) that
where \(\mathcal {F}_s\) is the natural filtration associated with \(B_s\). By the stationarity of \(\omega _s\), we obtain
Secondly, we have
By moment bounds of \(\phi _\xi \), the first factor \(\sqrt{\mathbb {E}\mathbb {E}_B\{|\phi _\xi (\omega _{t/\varepsilon ^2})|^4\}}\lesssim |\xi |^2\). For the second factor, we apply moment inequalities of martingales to derive
with \(\langle M^\varepsilon \rangle _t\) the quadratic variation of \(M_t^\varepsilon \):
By moment bounds of \(D\phi _{\xi }\), we have \(\sqrt{\mathbb {E}\mathbb {E}_B\{|M_t^\varepsilon -M_u^\varepsilon |^4\}}\lesssim (t-u)|\xi |^2\). Therefore, we have obtained
Now we can write
and derive
By Proposition 4.9,
After optimizing with respect to u on the r.h.s. of the above display, we complete the proof. \(\square \)
Lemma 5.6
Proof
For almost every fixed \(\omega \in \Omega \) and \(\varepsilon >0\),
is a continuous square integrable martingale on \((\Sigma ,\mathcal {A},\mathbb {P}_B)\), so by Proposition 5.2, we have
where \(\langle M^\varepsilon \rangle _t\) is the quadratic variation of \(M_t^\varepsilon \):
and \(\sigma _\xi ^2=\xi ^T\bar{A}\xi \), with the homogenized matrix \(\bar{A}\) given by (3.12).
Thus we have derived
By recalling (4.14), \(\langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t=\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\), so we apply Lemma 5.4 and Proposition 4.10 to obtain
To summarize, we have
Since \(\phi _\xi =\sum _{k=1}^d \xi _k\phi _{e_k}\) and \(\omega _0=\tau _{-X_0^\omega }\omega =\tau _{-x/\varepsilon }\omega \), it is straightforward to check that
The proof is complete. \(\square \)
6 An analysis of the martingale
We define the error coming from the martingale part in (4.28) as
By the estimate in (5.22), we already have
Thus \(\mathcal {E}_3\) is of order at most \(\varepsilon \), and we need to refine this estimate to show that it is actually of lower order. The following is the main result of this section.
Proposition 6.1
with \(C_\varepsilon (t)\rightarrow 0\) as \(\varepsilon \rightarrow 0\) and \(C_\varepsilon (t)\le C(1+t)\) for some constant \(C>0\).
The proof of Proposition 6.1 can be decomposed into two parts. One part consists in showing that (6.3) holds with \(\mathcal {E}_3\) replaced by
for some constants \(c_{ijk}\) defined below, see (6.12). In other words,
is what we find to be the deterministic error at the order of \(\varepsilon \). The second part consists in observing that actually, the constants \(c_{ijk}\) are all equal to zero!
We begin by defining \(c_{ijk}\), and then observing that they are in fact zero. The following lemma from the proof of [25, Theorem 1.8] is needed, and we present a proof here for the sake of convenience.
Lemma 6.2
For any \(V\in L^2(\Omega )\) with mean zero, let \(\varphi _{\uplambda }\) be the regularized corrector, i.e., \(({\uplambda }-L)\varphi _{\uplambda }=V\). If
for some constant \(C>0\) independent of t, then \({\uplambda }\langle \varphi _{\uplambda },\varphi _{\uplambda }\rangle \rightarrow 0\) and \(D_k\varphi _{\uplambda }\) converges in \(L^2(\Omega )\), \(k=1,\ldots ,d\).
Proof
First, by the calculation in Lemma 5.4, we have
Since \(\int _0^s\langle e^{uL}V,V\rangle \, du\) is non-decreasing as a function of s, the l.h.s. of the above display being bounded is equivalent to \(\int _0^\infty \langle e^{uL}V,V\rangle \, du<\infty \), i.e. \(\langle V,(-L)^{-1}V\rangle <\infty \). Let \(U(d\xi )\) be the projection-valued measure associated with \(-L\), i.e., \(-L=\int _0^\infty \xi \, U(d\xi )\), and \(\nu (d\xi )\) be the spectral measure associated with V, i.e. \(\nu (d\xi )=\langle U(d\xi ) V,V\rangle \). The fact that \(\langle V,(-L)^{-1}V\rangle <\infty \) is equivalent to
It follows that
as \({\uplambda }\rightarrow 0\) by the dominated convergence theorem. By the uniform ellipticity, we have
and since
as \({\uplambda }_1,{\uplambda }_2\rightarrow 0\), we further obtain
The proof is complete. \(\square \)
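The two convergence statements used in the proof can be made explicit in terms of the spectral measure \(\nu \). The following is a sketch of the standard Kipnis–Varadhan computation, reconstructing the elided displays:

```latex
\begin{aligned}
{\uplambda}\,\langle \varphi_{\uplambda},\varphi_{\uplambda}\rangle
  &= \int_0^\infty \frac{{\uplambda}}{({\uplambda}+\xi)^2}\,\nu(d\xi)
   \;=\; \int_0^\infty \frac{1}{\xi}\cdot\frac{{\uplambda}\xi}{({\uplambda}+\xi)^2}\,\nu(d\xi)
   \;\longrightarrow\; 0 \quad ({\uplambda}\to 0),\\
\big\langle \varphi_{{\uplambda}_1}-\varphi_{{\uplambda}_2},\,
  (-L)(\varphi_{{\uplambda}_1}-\varphi_{{\uplambda}_2})\big\rangle
  &= \int_0^\infty \xi\Big(\frac{1}{{\uplambda}_1+\xi}-\frac{1}{{\uplambda}_2+\xi}\Big)^2\nu(d\xi)
   \;\longrightarrow\; 0 \quad ({\uplambda}_1,{\uplambda}_2\to 0).
\end{aligned}
```

Both limits follow by dominated convergence, since \({\uplambda}\xi /({\uplambda}+\xi )^2\le 1/4\), each integrand is bounded by a constant times \(\xi ^{-1}\), and \(\int _0^\infty \xi ^{-1}\nu (d\xi )<\infty \); uniform ellipticity then converts the Dirichlet-form bound into Cauchy convergence of \(D\varphi _{\uplambda }\) in \(L^2(\Omega )\).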
For \(\psi _{ij}=(e_i+D\phi _{e_i})^Ta(e_j+D\phi _{e_j})-\bar{A}_{ij} \) (\( i,j=1,\ldots ,d\)), a polarization of the inequality in (5.22) ensures that
i.e., the asymptotic variance is finite, so we can apply Lemma 6.2: letting \(\Psi _{{\uplambda },ij}\) be the regularized corrector associated with \(\psi _{ij}\), i.e.,
we have \({\uplambda }\langle \Psi _{{\uplambda },ij},\Psi _{{\uplambda },ij}\rangle \rightarrow 0\) as \({\uplambda }\rightarrow 0\). We also have the convergence of \(D_k\Psi _{{\uplambda },ij}\) in \(L^2(\Omega )\), with the limit formally written as \(D_k\Psi _{ij}:=\lim _{{\uplambda }\rightarrow 0}D_k\Psi _{{\uplambda },ij}\).
Let \(D\Psi _{ij}=(D_1\Psi _{ij},\ldots ,D_d\Psi _{ij})\), then the constant \(c_{ijk}\) for \(i,j,k=1,\ldots ,d\) is given by
Lemma 6.3
\(c_{ijk}=0\) for \(i,j,k=1,\ldots ,d\).
Proof
By the \(L^2\) convergence of \(D\Psi _{{\uplambda },ij}\rightarrow D\Psi _{ij}\) and \(D\phi _{{\uplambda },e_k}\rightarrow D\phi _{e_k}\), we have
An integration by parts leads to
The r.h.s. of the above display can be rewritten as \(\langle \Psi _{{\uplambda },ij},L\phi _{{\uplambda },e_k}+e_k\cdot b\rangle \), and by recalling the equation satisfied by the regularized corrector (3.5), we have
which goes to zero as \({\uplambda }\rightarrow 0\). The proof is complete. \(\square \)
To refine the estimation of \(\mathcal {E}_3\), we need a more accurate estimation of \(\mathbb {E}_B\{e^{iM_t^\varepsilon }\}-e^{-\frac{1}{2}\sigma _\xi ^2 t}\) compared with the one obtained by Proposition 5.2. This is given by the following quantitative martingale central limit theorem.
Proposition 6.4
[21, Proposition 3.2] If \(M_t\) is a continuous martingale with predictable quadratic variation \(\langle M\rangle _t\), and \(W_t\) is a standard Brownian motion, then for any \(f\in \mathcal {C}_b(\mathbb {R})\) with bounded and continuous derivatives up to third order, we have
where \(\tau =\sup \{s\in [0,t]:\langle M\rangle _s\le \sigma ^2t\}\), \(\Vert f'''\Vert _\infty \) denotes the supremum norm of \(f'''\), and C is a universal constant.
Remark 6.5
In the discrete-space setting, the corresponding martingales have jumps, and we do not know how to adapt Proposition 6.4 and the subsequent argument to recover Theorem 2.1 in this case.
By the above proposition, we have for almost every \(\omega \in \Omega \) that
where
Combining with (5.22), we obtain
for
Define
The following Lemmas 6.6 and 6.7 combine with (6.18) to complete the proof of Proposition 6.1.
Lemma 6.6
\(\mathbb {E}\{|\mathcal {E}_4-\mathcal {E}_5|\}\lesssim \varepsilon ^\frac{3}{2}t^\frac{3}{4}\).
Proof
By (5.22), we know \(\mathbb {E}\mathbb {E}_B\{|\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds|^2\}\lesssim \varepsilon ^2t|\xi |^4\), so
By the definition of \(\tau \), we have
so \(\mathbb {E}\{|\mathcal {E}_4-\mathcal {E}_5|\}\lesssim \varepsilon ^\frac{3}{2}t^\frac{3}{4}\). The proof is complete. \(\square \)
Lemma 6.7
with \(C_\varepsilon (t)\rightarrow 0\) as \(\varepsilon \rightarrow 0\) and \(C_\varepsilon (t)\le C(1+t)\) for some constant C.
Proof
We write
where \(\varepsilon \int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds\) is of central limit scaling. To apply the Kipnis–Varadhan method, the only condition we need to check is the finiteness of the asymptotic variance, and this is already given by (5.22), i.e. we have
Therefore, we can write \(\varepsilon \int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s) \, ds=\mathcal {R}_t^\varepsilon +\mathcal {M}_t^\varepsilon \) with
and
Recall that the formally-written random variable \(D_i\Psi _\xi \) is the \(L^2\)-limit of \(D_i\Psi _{{\uplambda },\xi }\) as \({\uplambda }\rightarrow 0\), with \(\Psi _{{\uplambda },\xi }\) solving the regularized corrector equation
Since \(\psi _\xi =\sum _{i,j=1}^d\xi _i\xi _j\psi _{ij}\), by linearity we have \(\Psi _{{\uplambda },\xi }=\sum _{i,j=1}^d \xi _i\xi _j\Psi _{{\uplambda },ij}\), with \(\Psi _{{\uplambda },ij}\) solving
Now we can write
First, by choosing \({\uplambda }=\varepsilon ^2\) and using the stationarity of \(\omega _s\) we have
For the stochastic integral, we have
Therefore,
By Lemma 6.2, \({\uplambda }\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi }\rangle \rightarrow 0\) as \({\uplambda }\rightarrow 0\), and \(D_i\Psi _{{\uplambda },\xi }\rightarrow D_i\Psi _\xi \) in \(L^2(\Omega )\), so we derive
with \(C_\varepsilon \rightarrow 0\) as \(\varepsilon \rightarrow 0\).
Secondly, for the martingale part \(\mathbb {E}_B\{e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \}\), \(M_t^\varepsilon \) and \(\mathcal {M}_t^\varepsilon \) can be written as \(\sum _{j=1}^d\varepsilon \int _0^{t/\varepsilon ^2}f_j(\omega _s) \, dB_s^j\) and \(\sum _{j=1}^d\varepsilon \int _0^{t/\varepsilon ^2}g_j(\omega _s) \, dB_s^j\) for some \(f_j,g_j\in L^2(\Omega )\), respectively. We claim that for fixed \(\xi \in \mathbb {R}^d\) and \(t>0\),
for some constant \(c_\xi \).
Recall that \(\omega _s\) depends on \(\varepsilon \) through the initial condition \(\omega _0=\tau _{-x/\varepsilon }\omega \). By stationarity we can shift the environment \(\omega \) by an amount of \(x/\varepsilon \) without changing the value of \(\mathbb {E}\{|\mathbb {E}_B\{e^{iM_t^\varepsilon }\mathcal {M}_t^\varepsilon \}-c_\xi |\}\). So we can assume \(\omega _s=\tau _{X_s^\omega }\omega \) with \(X_0^\omega =0\).
For almost every \(\omega \in \Omega \), by ergodicity we have
almost surely in \(\Sigma \). Thus by a martingale central limit theorem [12, page 339, Theorem 1.4], we have that for almost every \(\omega \in \Omega \),
in distribution in \(\Sigma \), where \((N_1,N_2)\) is a Gaussian vector with mean zero and whose covariance matrix is determined by \(\mathbb {E}\{N_1^2\}=t\sum _{j=1}^d \langle f_j,f_j\rangle \), \(\mathbb {E}\{N_2^2\}=t\sum _{j=1}^d \langle g_j,g_j\rangle \), and \(\mathbb {E}\{N_1N_2\}=t\sum _{j=1}^d \langle f_j,g_j\rangle \).
Now let \(g_K(x)=(x\wedge K)\vee (-K)\) be a continuous and bounded cutoff function for \(K>0\), and let \(h_K(x)=x-g_K(x)\); we have
It is clear that \(\mathbb {E}\mathbb {E}_B\{|\mathcal {M}_t^\varepsilon |^2\}\lesssim t|\xi |^4\), so
Therefore,
Letting \(K\rightarrow \infty \), (6.34) is proved for \(c_\xi =\mathbb {E}\{e^{iN_1}N_2\}\).
For the constant \(c_\xi \), we have
(this can be easily seen by differentiating the formula for \(\mathbb {E}\{e^{i N_1 + i\zeta N_2}\}\) with respect to \(\zeta \)). Recall that \(f_j = \sum _{i=1}^d (D_i \phi _\xi + \xi _i) \sigma _{ij}\) and \(g_j = \sum _{i = 1}^d D_i \Psi _{\xi } \sigma _{ij}\). After some calculation, we obtain
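The differentiation mentioned above can be spelled out. Since \((N_1,N_2)\) is a centered Gaussian vector,

```latex
\mathbb{E}\{e^{iN_1+i\zeta N_2}\}
  = \exp\Big(-\tfrac12\,\mathbb{E}\{N_1^2\}
      -\zeta\,\mathbb{E}\{N_1N_2\}
      -\tfrac12\,\zeta^2\,\mathbb{E}\{N_2^2\}\Big),
\qquad\text{so at }\zeta=0:\quad
\mathbb{E}\{e^{iN_1}N_2\}
  = i\,\mathbb{E}\{N_1N_2\}\,e^{-\frac{1}{2}\mathbb{E}\{N_1^2\}}
  = i\,t\sum_{j=1}^d\langle f_j,g_j\rangle\,e^{-\frac{1}{2}\sigma_\xi^2 t},
```

where we used \(\mathbb {E}\{N_1^2\}=t\sum _{j=1}^d\langle f_j,f_j\rangle =\sigma _\xi ^2 t\), which follows from \(\langle M^\varepsilon \rangle _t-\sigma _\xi ^2 t=\varepsilon ^2\int _0^{t/\varepsilon ^2}\psi _\xi (\omega _s)\,ds\) and \(\mathbb {E}\{\psi _\xi \}=0\).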
so, recalling (6.12),
By the above expression of \(c_\xi \) and the fact that \(\mathbb {E}\mathbb {E}_B\{|\mathcal {M}_t^\varepsilon |^2\}\lesssim t|\xi |^4\), we have
so applying the dominated convergence theorem, we conclude that for \(t>0\)
as \(\varepsilon \rightarrow 0\).
To summarize, by combining (6.33) and (6.43) we have proved
as \(\varepsilon \rightarrow 0\), and the following bound holds
for some constant \(C>0\) independent of (t, x).
Now we only need to note that
to complete the proof. \(\square \)
Remark 6.8
From the proof above, we see that in order to estimate the rate of convergence to 0 of \(\mathbb {E}\{|C_\varepsilon |\}\) in Theorem 2.1, the rates of convergence of \({\uplambda }\langle \Psi _{{\uplambda },\xi },\Psi _{{\uplambda },\xi } \rangle \) to 0 and of \(D\Psi _{{\uplambda },\xi }\) to \(D\Psi _\xi \) as \({\uplambda }\rightarrow 0\) need to be quantified. This in turn could be obtained by reinforcing Proposition 4.10 to
for some \(\gamma > 1\). More precisely, spectral computations similar to those of [29] show that (6.47) implies
and the same estimate for \(\mathbb {E}\{|D \Psi _{{\uplambda },\xi }-D \Psi _\xi |^2\}\). It was shown in [18, Theorem 2.1] that the spatial averages of \(\psi _\xi \) behave as if \(\psi _\xi \) was a local function of the coefficient field. If \(\psi _\xi \) is replaced by a truly local function, then the methods of [15] show that (6.47) holds with \(\gamma = d/2\). For our actual function \(\psi _\xi \), it is thus natural to expect (6.47) to hold at least for every \(\gamma < d/2\), but a proof of this stronger result would require more work, so we preferred to present a simpler argument here.
7 Results on elliptic equations
The solutions to elliptic equations can be written as
Recall the error decomposition for fixed (t, x) in the parabolic case
where \(C_\varepsilon (t,x)\rightarrow 0\) in \(L^1(\Omega )\). By Propositions 5.1 and 6.1, we actually have
for some constant \(C>0\), so by the dominated convergence theorem
as \(\varepsilon \rightarrow 0\). Therefore, we obtain the error decomposition for fixed x in the elliptic case
with \(\tilde{C}_\varepsilon (x)\rightarrow 0\) in \(L^1(\Omega )\).
The first term on the r.h.s. of (7.6) gives
which completes the proof of Theorem 2.5.
8 Results for periodic coefficients
It is natural to ask whether the same result holds for periodic rather than random coefficients. Understanding the first-order errors in periodic homogenization is a classical problem; however, the pointwise expansion proved in this paper does not seem to be known. Our approach applies with some minor modifications, which we now briefly discuss.
The existence of a “stationary” corrector now becomes trivial. We assume the coefficient \(\tilde{a}(x)\) is defined on the \(d\)-dimensional torus \(\mathbb {T}\), and by the fact that \(\tilde{b}=(\tilde{b}_1,\ldots ,\tilde{b}_d)\) with \(\tilde{b}_i=\frac{1}{2}\sum _{j=1}^d \partial _{x_j}\tilde{a}_{ji}\), we have
By the Fredholm alternative, the corrector equation
has a unique solution satisfying \(\int _{\mathbb {T}} \tilde{\phi }_\xi (x)dx=0\). The same discussion applies to \(\tilde{\psi }_\xi =(\xi +\nabla \tilde{\phi }_\xi )^T\tilde{a}(\xi +\nabla \tilde{\phi }_\xi )-\xi ^T\bar{A}\xi \) since \(\int _{\mathbb {T}} \tilde{\psi }_\xi (x)dx=0\), that is, there exists a unique \(\tilde{\Psi }_\xi \) solving
such that \(\int _{\mathbb {T}} \tilde{\Psi }_\xi (x)dx=0\). Since we assume \(\tilde{a}\) to be Hölder regular, the functions \(\tilde{\phi }_\xi , \nabla \tilde{\phi }_\xi \) and \(\tilde{\Psi }_\xi \) are bounded in x (see [23, Theorem 3.13]).
Our estimates of variance decay in Propositions 4.9 and 4.10 can be replaced by a spectral gap inequality in the periodic setting. For the diffusion on the torus given by
the Lebesgue measure on \(\mathbb {T}\) is the unique invariant measure, and the following estimate holds [7, page 373, Theorem 3.2]:
for some \(\rho >0\), provided \(\int _{\mathbb {T}} g(x)dx=0\). This makes it possible to replace the estimates of Propositions 4.9 and 4.10 by exponential bounds.
With the above two points in mind, we apply the same arguments to derive a result similar to Theorem 2.1: for every fixed (t, x),
where \(\tilde{\phi }=(\tilde{\phi }_{e_1},\ldots ,\tilde{\phi }_{e_d})\).
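As a sanity check on the periodic picture (not from the paper, which works in \(d\ge 3\)): in one dimension the corrector equation can be solved in closed form, the effective coefficient is the harmonic mean of \(a\), and the flux \(a(x)(1+\tilde{\phi }'(x))\) is constant. The following sketch verifies this numerically for the arbitrary example \(a(x)=2+\sin (2\pi x)\).

```python
import numpy as np

# 1D periodic homogenization check: abar is the harmonic mean of a, and the
# corrector gradient satisfies a(x) * (1 + phi'(x)) = abar.
# a(x) = 2 + sin(2*pi*x) is an arbitrary smooth example.
N = 100_000
x = (np.arange(N) + 0.5) / N           # midpoint grid on the torus [0, 1)
a = 2.0 + np.sin(2 * np.pi * x)

abar = 1.0 / np.mean(1.0 / a)          # harmonic mean of a
dphi = abar / a - 1.0                  # corrector gradient phi'
phi = np.cumsum(dphi) / N
phi -= phi.mean()                      # normalize: mean-zero corrector

print(abar)                            # ~ sqrt(3) = 1.7320... for this a
print(abs(dphi.mean()))                # ~ 0: phi has a well-defined period
print(np.ptp(a * (1.0 + dphi)))        # ~ 0: the flux a(1 + phi') is constant
```

For this choice of \(a\), \(\int _0^1 (2+\sin (2\pi x))^{-1}dx=1/\sqrt{3}\), so the computed `abar` is \(\sqrt{3}\) up to quadrature error.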
References
Armstrong, S.N., Smart, C.K.: Quantitative stochastic homogenization of convex integral functionals. Annales scientifiques de l’Ecole normale supérieure (2015, to appear)
Armstrong, S.N., Mourrat, J.-C.: Lipschitz regularity for elliptic equations with random coefficients. Arch. Ration. Mech. Anal. (2015, to appear)
Armstrong, S.N., Kuusi, T., Mourrat, J.-C.: Mesoscopic higher regularity and subadditivity in elliptic homogenization. arXiv:1507.06935 (2015, preprint)
Bal, G.: Central limits and homogenization in random media. Multiscale Model. Simul. 7, 677–702 (2008)
Bal, G.: Homogenization with large spatial random potential. Multiscale Model. Simul. 8, 1484–1510 (2010)
Bal, G., Gu, Y.: Limiting models for equation with large random potential; a review. Commun. Math. Sci. 13, 729–748 (2015)
Bensoussan, A., Lions, J.-L., Papanicolaou, G.: Asymptotic Analysis for Periodic Structures. Elsevier, Amsterdam (1978)
Biskup, M., Salvi, M., Wolff, T.: A central limit theorem for the effective conductance: linear boundary data and small ellipticity contrasts. Commun. Math. Phys. 328, 701–731 (2014)
Caffarelli, L.A., Souganidis, P.E.: Rates of convergence for the homogenization of fully nonlinear uniformly elliptic pde in random media. Invent. Math. 180, 301–360 (2010)
Conlon, J., Fahim, A.: Strong convergence to the homogenized limit of parabolic equations with random coefficients. Trans. AMS 367, 3041–3093 (2015)
Egloffe, A.-C., Gloria, A., Mourrat, J.-C., Nguyen, T.N.: Random walk in random environment, corrector equation, and homogenized coefficients: from theory to numerics, back and forth. IMA J. Numer. Anal. 35, 499–545 (2015)
Ethier, S.N., Kurtz, T.G.: Markov Processes: Characterization and Convergence. Wiley, New York (1986)
Figari, R., Orlandi, E., Papanicolaou, G.: Mean field and Gaussian approximation for partial differential equations with random coefficients. SIAM J. Appl. Math. 42, 1069–1077 (1982)
Gloria, A., Marahrens, D.: Annealed estimates on the Green functions and uncertainty quantification. arXiv:1409.0569 (2014, preprint)
Gloria, A., Neukamm, S., Otto, F.: Quantification of ergodicity in stochastic homogenization: optimal bounds via spectral gap on Glauber dynamics. Invent. Math. 199, 455–515 (2015)
Gloria, A., Neukamm, S., Otto, F.: An optimal quantitative two-scale expansion in stochastic homogenization of discrete elliptic equations. ESAIM Math. Model. Numer. Anal. 48, 325–346 (2014)
Gloria, A., Neukamm, S., Otto, F.: A regularity theory for random elliptic operators. arXiv:1409.2678 (2015, preprint)
Gloria, A., Otto, F.: An optimal variance estimate in stochastic homogenization of discrete elliptic equations. Ann. Probab. 39, 779–856 (2011)
Gloria, A., Otto, F.: An optimal error estimate in stochastic homogenization of discrete elliptic equations. Ann. Appl. Probab. 22, 1–28 (2012)
Gloria, A., Otto, F.: Quantitative results on the corrector equation in stochastic homogenization. J. Eur. Math. Soc. (2015, to appear)
Gu, Y., Bal, G.: Fluctuations of parabolic equations with large random potentials. SPDEs Anal. Comput. 3, 1–51 (2015)
Gu, Y., Mourrat, J.-C.: Scaling limit of fluctuations in stochastic homogenization. arXiv:1503.00578 (2015, preprint)
Han, Q., Lin, F.: Elliptic Partial Differential Equations. Courant Lecture Notes in Mathematics. Courant Institute of Mathematical Sciences, New York (1997)
Kingman, J.F.C.: Poisson Processes. Oxford University Press, New York (1993)
Kipnis, C., Varadhan, S.: Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Commun. Math. Phys. 104, 1–19 (1986)
Komorowski, T., Landim, C., Olla, S.: Fluctuations in Markov Processes: Time Symmetry and Martingale Approximation, vol. 345. Springer, Berlin (2012)
Kozlov, S.M.: Averaging of random operators. Matematicheskii Sbornik 151, 188–202 (1979)
Marahrens, D., Otto, F.: Annealed estimates on the Green’s function. Probab. Theory Related Fields (2015, to appear)
Mourrat, J.-C.: Variance decay for functionals of the environment viewed by the particle. Annales de l’Institut H. Poincaré Probabilités et Statistiques 47, 294–327 (2011)
Mourrat, J.-C.: A quantitative central limit theorem for the random walk among random conductances. Electron. J. Probab. 17(97), 1–17 (2012)
Mourrat, J.-C.: Kantorovich distance in the martingale CLT and quantitative homogenization of parabolic equations with random coefficients. Probab. Theory Related Fields 160(1–2), 279–314 (2014)
Mourrat, J.-C., Nolen, J.: A scaling limit of the corrector in stochastic homogenization. arXiv:1502.07440 (2015, preprint)
Mourrat, J.-C., Otto, F.: Correlation structure of the corrector in stochastic homogenization. Ann. Probab. (2015, to appear)
Nolen, J.: Normal approximation for a random elliptic equation. Probab. Theory Related Fields 159, 661–700 (2014)
Papanicolaou, G.C., Varadhan, S.R.S.: Boundary value problems with rapidly oscillating random coefficients. In: Random Fields, Vols. I, II (Esztergom, 1979), Colloq. Math. Soc. János Bolyai, vol. 27, pp. 835–873. North-Holland, Amsterdam (1981)
Rossignol, R.: Noise-stability and central limit theorems for effective resistance of random electric networks. Ann. Probab. (2015, to appear)
Yurinskii, V.: Averaging of symmetric diffusion in random medium. Sib. Math. J. 27, 603–613 (1986)
Acknowledgments
YG would like to thank his Ph.D. advisor Prof. Guillaume Bal for many helpful discussions on the subject.
Appendix A: Estimating convolution of powers
Lemma 9.1
When \(d\ge 3\), for any \(x\in \mathbb {R}^d\),
Proof
The proofs of (9.1) and (9.2) are similar, so we only consider (9.1).
First, for \(|x|>100\), we divide \(\mathbb {Z}^d\) into three regions: \((I)=\{k\in \mathbb {Z}^d: |k|\le |x|,|k|\le |x-k|\}\), \((II)=\{k\in \mathbb {Z}^d: |k-x|\le |x|,|k|>|x-k|\}\), and \((III)=\{k\in \mathbb {Z}^d:|k|>|x|,|k-x|>|x|\}\). In (I), we have \(|x-k|\ge \max (|k|,|x|-|k|)\ge |x|/2\), so
Similarly, in (II) we have \(|k|\ge |x|/2\), so
In (III), \(|x-k|\ge |k|/2\), so
Now for \(|x|\le 100\), it is clear that the summation is bounded since \(d\ge 3\), so the proof of (9.1) is complete. \(\square \)
Gu, Y., Mourrat, JC. Pointwise two-scale expansion for parabolic equations with random coefficients. Probab. Theory Relat. Fields 166, 585–618 (2016). https://doi.org/10.1007/s00440-015-0667-z
Keywords
- Quantitative homogenization
- Martingale
- Central limit theorem
- Diffusion in random environment
Mathematics Subject Classification
- 35B27
- 35K05
- 60G44
- 60F05
- 60K37