1 Introduction

The results of this paper should be seen as an extension of those first obtained in Sznitman and Zeitouni [12] for stationary diffusion processes in random environment on \(\mathbb {R}^d\), for \(d\ge 3\), which are a small perturbation of Brownian motion and which satisfy a restricted isotropy condition and finite range dependence. The framework depends upon an underlying probability space \(\left( {\varOmega },\mathcal {F},\mathbb {P}\right) \), which can be viewed as indexing the collection of all equations or environments described, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\), by the coefficients

$$\begin{aligned} A(x,\omega )\in \mathcal {S}(d)\quad \text {and}\quad b(x,\omega )\in \mathbb {R}^d, \end{aligned}$$
(1.1)

for \(\mathcal {S}(d)\) the space of \(d\times d\) symmetric matrices.

More precisely, the stationarity is described by a group of transformations \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\) preserving the measure of \({\varOmega }\) and satisfying, for each \(x,y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} A(x+y,\omega )=A(x,\tau _y\omega )\quad \text {and}\quad b(x+y,\omega )=b(x,\tau _y\omega ). \end{aligned}$$
(1.2)

There exists \(R>0\) quantifying the finite-range dependence such that, whenever subsets \(A,B\subset \mathbb {R}^d\) satisfy \(d(A,B)\ge R\), the sigma-algebras

$$\begin{aligned} \sigma \left( A(x,\omega ), b(x,\omega )\;|\;x\in A\right) \quad \text {and}\quad \sigma \left( A(x,\omega ), b(x,\omega )\;|\;x\in B\right) \quad \text {are independent.} \end{aligned}$$
(1.3)

The coefficients are isotropic in the sense that for every orthogonal transformation \(r:\mathbb {R}^d\rightarrow \mathbb {R}^d\) preserving the coordinate axes, for every \(x\in \mathbb {R}^d\), the random variables

$$\begin{aligned} \left( rb(x,\omega ), rA(x,\omega )r^t\right) \quad \text {and}\quad \left( b(rx,\omega ), A(rx,\omega )\right) \quad \text {have the same law.} \end{aligned}$$
(1.4)

Finally, the perturbation is described, for a parameter \(\eta >0\) to be chosen small, by the condition

$$\begin{aligned} |b(x,\omega )|< \eta \quad \text {and}\quad |A(x,\omega )-I|< \eta \quad \text {on}\quad \mathbb {R}^d\times {\varOmega }. \end{aligned}$$
(1.5)

We remark that these assumptions are identical to the model considered in [12] and are the continuous counterpart of the model first studied in the discrete setting by Bricmont and Kupiainen [2].

The coefficients will be sufficiently regular to guarantee, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\), the well-posedness of the martingale problem corresponding to the generator

$$\begin{aligned} \frac{1}{2}\sum _{i,j=1}^da_{ij}(y,\omega )\frac{\partial ^2}{\partial y_i \partial y_j}+\sum _{i=1}^db_i(y,\omega )\frac{\partial }{\partial y_i}, \end{aligned}$$

where we have written \(A=(a_{ij})_{i,j=1}^d\) for the diffusion matrix. See Stroock and Varadhan [11, Chapter 6, 7]. We denote by \(P_{x,\omega }\) the corresponding probability measure on the space of continuous paths \(\hbox {C}([0,\infty );\mathbb {R}^d)\) and recall that, almost surely with respect to \(P_{x,\omega }\), paths \(X_t\in \hbox {C}([0,\infty );\mathbb {R}^d)\) satisfy the stochastic differential equation

$$\begin{aligned} \left\{ \begin{array}{l} dX_t=b(X_t,\omega )dt+\sigma (X_t,\omega )dB_t, \\ X_0=x,\end{array}\right. \end{aligned}$$

for \(A(y,\omega )=\sigma (y,\omega )\sigma (y,\omega )^t\), and for \(B_t\) a standard Brownian motion under \(P_{x,\omega }\) with respect to the canonical right continuous filtration on \(\hbox {C}([0,\infty );\mathbb {R}^d)\).

We now present our main result, where, in the statement, we write, for every measurable subset \(E\in \mathcal {F}\), using the transformation group appearing in (1.2),

$$\begin{aligned} P_t(\omega ,E)=P_{0,\omega }\left( \tau _{X_t}\omega \in E\right) . \end{aligned}$$
(1.6)

Theorem 1.1

There exists a unique probability measure \(\pi \) on \(({\varOmega },\mathcal {F})\) which is absolutely continuous with respect to \(\mathbb {P}\) and satisfies, for every \(t\ge 0\) and \(E\in \mathcal {F}\),

$$\begin{aligned}\pi (E)=\int _{{\varOmega }}P_t(\omega ,E)\;d\pi .\end{aligned}$$

Furthermore, \(\pi \) is mutually absolutely continuous with respect to \(\mathbb {P}\) and defines an ergodic probability measure with respect to the canonical Markov process on \({\varOmega }\) defining (1.6).

Theorem 1.1 is obtained by analyzing the long term behavior of solutions \(u:\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=\frac{1}{2}{{\mathrm{tr}}}(A(y,\omega )D^2u)+b(y,\omega )\cdot Du &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u(x,0,\omega )=f(x,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} ,\end{array}\right. \end{aligned}$$
(1.7)

since, for \(\mathbf{{1}}_E:{\varOmega }\rightarrow \mathbb {R}\) the indicator function of \(E\in \mathcal {F}\), for \(f_E(x,\omega )=\mathbf{{1}}_E(\tau _x\omega ),\) if \(u_E(x,t,\omega )\) satisfies (1.7) with initial data \(f_E(x,\omega )\), then, for each \(\omega \in {\varOmega }\) and \(t\ge 0\),

$$\begin{aligned} u_E(0,t,\omega )=E_{0,\omega }\left( \mathbf{{1}}_E(\tau _{X_t}\omega )\right) =P_{0,\omega }\left( \tau _{X_t}\omega \in E\right) =P_t(\omega , E). \end{aligned}$$

Indeed, along an exponentially increasing sequence of time scales \(L_n^2\), see (2.18), the invariant measure \(\pi \) is first identified, for every \(E\in \mathcal {F}\), as the limit

$$\begin{aligned} \pi (E)=\lim _{n\rightarrow \infty }\mathbb {E}\left( u_E(0,L_n^2,\omega )\right) . \end{aligned}$$

We prove that the limit exists in Propositions 3.10 and 3.11 and, in Proposition 3.12, we prove that \(\pi \) defines a probability measure on \(({\varOmega },\mathcal {F})\) which is absolutely continuous with respect to \(\mathbb {P}\).

An almost sure characterization of \(\pi \) is then established along the full limit, as \(t\rightarrow \infty \), for a class of subsets \(E\in \mathcal {F}\) whose indicator functions satisfy a version of (1.3), see Proposition 4.3. Precisely, on a subset of full probability depending on E,

$$\begin{aligned} \lim _{t\rightarrow \infty }u_E(0,t,\omega )=\pi (E). \end{aligned}$$
(1.8)

Here, we use crucially the results of [12], where it is shown that, with high probability, there exists a coupling at large length and time scales between the diffusion process generated in environment \(\omega \) by coefficients \(A(y,\omega )\) and \(b(y,\omega )\) and a Brownian motion with deterministic variance, see Control 2.2. Notice, however, that this coupling cannot in general provide an effective comparison between solutions of (1.7) and solutions \(\overline{u}:\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying, for \(\overline{\alpha }>0\) defined in Theorem 2.1, the deterministic equation

$$\begin{aligned} \left\{ \begin{array}{ll} \overline{u}_t=\frac{\overline{\alpha }}{2}{\varDelta }\overline{u}&{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ \overline{u}(x,0,\omega )=f(x,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} ,\end{array}\right. \end{aligned}$$
(1.9)

since, for stationary initial data, we expect

$$\begin{aligned} \lim _{t\rightarrow \infty }u(0,t,\omega )= & {} \int _{{\varOmega }}f(0,\omega )\;d\pi \quad \text {and}\quad \lim _{t\rightarrow \infty }\overline{u}(0,t,\omega )\\= & {} \int _{{\varOmega }}f(0,\omega )\;d\mathbb {P}=\mathbb {E}(f(0,\omega )). \end{aligned}$$

However, in Proposition 3.9, this coupling does provide a means by which the solution of (1.7) can be effectively compared, with high probability, on large length and time scales, to a quantity which, for suitable initial data, is nearly constant. That is, with high probability, we obtain an effective comparison between the solution \(u(x,t,\omega )\) of (1.7) at time \(L_{n+1}^2\) and the solution of (1.9) at time \(L_{n+1}^2-6L_n^2\) corresponding to initial data \(u(x,6L_n^2,\omega )\).

This is essentially to say that \(u(x,L_{n+1}^2,\omega )\) is an averaged version of \(u(x,6L_n^2,\omega )\), where we provide a quantitative version of the averaging in Proposition 4.4 for subsets whose characteristic function satisfies a version of (1.3), see Propositions 4.2 and 4.3. In combination, the comparison and averaging complete the proof of (1.8).

Finally, in [12], localization estimates for the diffusion in environment \(\omega \) are obtained with high probability, see Control 2.3. We use this localization in Proposition 4.6 to upgrade the convergence along the discrete sequence \(L_n^2\) to the full limit, as \(t\rightarrow \infty \), at the cost of obtaining the convergence on a marginally smaller portion of space. The proofs of invariance and uniqueness then follow by standard arguments, see Proposition 4.7 and Theorem 4.8.

As an application of Proposition 4.6, we establish a homogenization result for oscillating initial data which are locally measurable with respect to the coefficients. Precisely, we define, for each \(R>0\), the sigma algebra

$$\begin{aligned} \sigma _{B_R}=\sigma \left( A(x,\omega ),b(x,\omega )\;|\;x\in B_R\right) , \end{aligned}$$

and consider functions \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) which are stationary with respect to the translation group \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\) and satisfy \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_R})\), where \(L^\infty ({\varOmega },\sigma _{B_R})\) denotes the space of bounded \(\sigma _{B_R}\)-measurable functions on \({\varOmega }\).

Theorem 1.2

Suppose that \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) and \(R>0\) satisfy, for each \(x,y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} f(x+y,\omega )=f(x,\tau _y\omega ), \end{aligned}$$

with \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_R})\). For each \(\epsilon >0\), let \(u^\epsilon :\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) denote the solution to

$$\begin{aligned} \left\{ \begin{array}{ll} u^\epsilon _t=\frac{1}{2}{{\mathrm{tr}}}(A(x/\epsilon ,\omega )D^2u^\epsilon )+\frac{1}{\epsilon }b(x/\epsilon ,\omega )\cdot Du^\epsilon &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u^\epsilon (x,0,\omega )=f(x/\epsilon ,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$
(1.10)

There exists a subset of full probability such that, as \(\epsilon \rightarrow 0\),

$$\begin{aligned} u^\epsilon \rightarrow \int _{{\varOmega }}f(0,\omega )\;d\pi \quad \text {locally uniformly on}\quad \mathbb {R}^d\times (0,\infty ). \end{aligned}$$

These methods also apply to equations like (1.10) involving an oscillating right-hand side and to the analogous time-independent problems. See Theorems 5.2 and 5.3.

We remark that, in the case \(b(y,\omega )=0\), the existence of an invariant measure and applications to homogenization were established by Papanicolaou and Varadhan [9] and Yurinsky [13]. Furthermore, when equation (1.7) may be rewritten in divergence form, results have been obtained by De Masi et al. [3], Kozlov [5], Olla [6], Osada [7] and Papanicolaou and Varadhan [8]. We point the interested reader to the introduction of [12] for a more complete list of references regarding related problems in the discrete setting.

The paper is organized as follows. In Sect. 2, we present our notation and assumptions as well as provide a summary of the aspects of [12] most relevant to our arguments. We identify the invariant measure in Sect. 3 and, in Sect. 4, we prove that the invariant measure is indeed invariant and unique. Finally, in Sect. 5, we prove the general homogenization result for functions which are locally measurable with respect to the coefficients.

2 Preliminaries

2.1 Notation

Elements of \(\mathbb {R}^d\) are denoted by x and y, elements of \([0,\infty )\) by t, and \((x,y)\) denotes the standard inner product on \(\mathbb {R}^d\). We write Dv and \(v_t\) for the derivative of the scalar function v with respect to \(x\in \mathbb {R}^d\) and \(t\in [0,\infty )\) respectively, while \(D^2v\) stands for the Hessian of v. The spaces of \(k\times l\) matrices and of \(k\times k\) symmetric matrices with real entries are respectively written \(\mathcal {M}^{k\times l}\) and \(\mathcal {S}(k)\). If \(M\in \mathcal {M}^{k\times l}\), then \(M^t\) is its transpose and \(|M|={{\mathrm{tr}}}(MM^t)^{1/2}\) is its norm. If M is a square matrix, we write \({{\mathrm{tr}}}(M)\) for its trace. The Euclidean distance between subsets \(A,B\subset \mathbb {R}^d\) is

$$\begin{aligned} d(A,B)=\inf \left\{ |a-b|\;|\;a\in A, b\in B\right\} \end{aligned}$$

and, for an index set \(\mathcal {A}\) and a family of measurable functions \(\left\{ f_\alpha :\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}^{n_\alpha }\right\} _{\alpha \in \mathcal {A}}\), we write

$$\begin{aligned}\sigma (f_\alpha (x,\omega )\;|\;x\in A, \alpha \in \mathcal {A})\end{aligned}$$

for the sigma algebra generated by the random variables \(f_\alpha (x,\omega )\) for \(x\in A\) and \(\alpha \in \mathcal {A}\). For \(U\subset \mathbb {R}^d\), \({{\mathrm{USC}}}(U;\mathbb {R}^d)\), \({{\mathrm{LSC}}}(U;\mathbb {R}^d)\), \({{\mathrm{BUC}}}(U;\mathbb {R}^d)\), \({{\mathrm{Lip}}}(U;\mathbb {R}^d)\), \(\hbox {C}^{0,\beta }(U;\mathbb {R}^d)\) and \(\hbox {C}^k(U;\mathbb {R}^d)\) are the spaces of upper-semicontinuous, lower-semicontinuous, bounded uniformly continuous, Lipschitz continuous, \(\beta \)-Hölder continuous and k-times continuously differentiable functions on U with values in \(\mathbb {R}^d\). For \(f:\mathbb {R}^d\rightarrow \mathbb {R}\), we write \({{\mathrm{Supp}}}(f)\) for the support of f. Furthermore, \(B_R\) and \(B_R(x)\) are respectively the open balls of radius R centered at zero and at \(x\in \mathbb {R}^d\). For a real number \(r\in \mathbb {R}\), we write \(\left[ r\right] \) for the largest integer less than or equal to r. Finally, throughout the paper we write C for constants that may change from line to line but are independent of \(\omega \in {\varOmega }\) unless otherwise indicated.

2.2 The random environment

There exists an underlying probability space \(({\varOmega },\mathcal {F},\mathbb {P})\) indexing the individual realizations of the random environment. Since the environment is described, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\), by the diffusion matrix \(A(x,\omega )\) and drift \(b(x,\omega )\), we may take

$$\begin{aligned} \mathcal {F}=\sigma \left( A(x,\omega ),b(x,\omega )\;|\;x\in \mathbb {R}^d\right) . \end{aligned}$$
(2.1)

Furthermore, we assume this space is equipped with a

$$\begin{aligned} \text {group of ergodic, measure-preserving transformations}\; \left\{ \tau _x:{\varOmega }\rightarrow {\varOmega }\right\} _{x\in \mathbb {R}^d}, \end{aligned}$$
(2.2)

such that the coefficients \(A:\mathbb {R}^d\times {\varOmega }\rightarrow \mathcal {S}(d)\) and \(b:\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}^d\) are bi-measurable stationary functions satisfying, for each \(x,y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} A(x+y,\omega )=A(x,\tau _y\omega )\quad \text {and}\quad b(x+y,\omega )=b(x,\tau _y\omega ). \end{aligned}$$
(2.3)

We remark that the ergodicity is not an assumption, and can be deduced from (2.1) and (2.7).

We assume that the diffusion matrix and drift are bounded and Lipschitz uniformly for \(\omega \in {\varOmega }\). There exists \(C>0\) such that, for all \(y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} |b(y,\omega )|\le C\quad \text {and}\quad |A(y,\omega )|\le C \end{aligned}$$
(2.4)

and, for all \(x,y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} |b(x,\omega )-b(y,\omega )|\le C|x-y|\quad \text {and}\quad |A(x,\omega )-A(y,\omega )|\le C|x-y|. \end{aligned}$$
(2.5)

In addition, we assume that the diffusion matrix is uniformly elliptic, uniformly in \(\omega \in {\varOmega }\). There exists \(\nu >1\) such that, for all \(y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} \frac{1}{\nu } I\le A(y,\omega )\le \nu I. \end{aligned}$$
(2.6)

The coefficients satisfy a finite-range dependence condition. There exists \(R>0\) such that, whenever \(A,B\subset \mathbb {R}^d\) satisfy \(d(A,B)\ge R\), the sigma algebras

$$\begin{aligned} \sigma (A(x,\omega ), b(x,\omega )\;|\;x\in A)\quad \text {and}\quad \sigma (A(x,\omega ), b(x,\omega )\;|\;x\in B)\quad \text {are independent.}\end{aligned}$$
(2.7)

The diffusion matrix and drift satisfy a restricted isotropy condition. For every orthogonal transformation \(r:\mathbb {R}^d\rightarrow \mathbb {R}^d\) which preserves the coordinate axes, for every \(x\in \mathbb {R}^d\),

$$\begin{aligned} (b(rx,\omega ),A(rx,\omega ))\quad \text {and}\quad (rb(x,\omega ),rA(x,\omega )r^t)\quad \text {have the same law.} \end{aligned}$$
(2.8)

And, finally, the diffusion matrix and drift are a small perturbation of the Laplacian. There exists \(\eta _0>0\), to later be chosen small, such that, for all \(y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} |b(y,\omega )|\le \eta _0\quad \text {and}\quad |A(y,\omega )-I|\le \eta _0. \end{aligned}$$
(2.9)

To avoid cumbersome statements in what follows, we introduce a steady assumption.

$$\begin{aligned} \text {Assume}\;(2.1),(2.2), (2.3), (2.4), (2.5), (2.6), (2.7), (2.8)\;\text {and}\;(2.9). \end{aligned}$$
(2.10)

The collection of assumptions (2.4), (2.5) and (2.6) guarantee the well-posedness of the martingale problem set on \(\mathbb {R}^d\), for each \(\omega \in {\varOmega }\) and \(x\in \mathbb {R}^d\), associated to the generator

$$\begin{aligned} \frac{1}{2}\sum _{i,j=1}^da_{ij}(y,\omega )\frac{\partial ^2}{\partial y_i \partial y_j}+\sum _{i=1}^db_i(y,\omega )\frac{\partial }{\partial y_i}, \end{aligned}$$

see [11, Chapter 6, 7]. We write \(P_{x,\omega }\) and \(E_{x,\omega }\) for the corresponding probability measure and expectation on the space of continuous paths \(\hbox {C}([0,\infty );\mathbb {R}^d)\) and remark that, almost surely with respect to \(P_{x,\omega }\), paths \(X_t\in \hbox {C}([0,\infty );\mathbb {R}^d)\) satisfy the stochastic differential equation

$$\begin{aligned} \left\{ \begin{array}{l} dX_t=b(X_t,\omega )dt+\sigma (X_t,\omega )dB_t, \\ X_0=x,\end{array}\right. \end{aligned}$$
(2.11)

for \(A(y,\omega )=\sigma (y,\omega )\sigma (y,\omega )^t\), and for \(B_t\) a standard Brownian motion under \(P_{x,\omega }\) with respect to the canonical right-continuous filtration on \(\hbox {C}([0,\infty );\mathbb {R}^d)\).
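
We recall, for the reader's convenience, the standard relation between (2.11) and the generator: for every bounded, twice continuously differentiable \(\varphi :\mathbb {R}^d\rightarrow \mathbb {R}\), Itô's formula gives, under \(P_{x,\omega }\),

$$\begin{aligned} d\varphi (X_t)=\left( \frac{1}{2}\sum _{i,j=1}^da_{ij}(X_t,\omega )\frac{\partial ^2\varphi }{\partial y_i\partial y_j}(X_t)+\sum _{i=1}^db_i(X_t,\omega )\frac{\partial \varphi }{\partial y_i}(X_t)\right) dt+D\varphi (X_t)\cdot \sigma (X_t,\omega )dB_t, \end{aligned}$$

so that \(\varphi (X_t)-\int _0^t\left( \frac{1}{2}{{\mathrm{tr}}}(A(X_s,\omega )D^2\varphi (X_s))+b(X_s,\omega )\cdot D\varphi (X_s)\right) \;ds\) is a \(P_{x,\omega }\)-martingale, which is the defining property of the martingale problem.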

We write \(\mathbb {P}_x=\mathbb {P}\times P_{x,\omega }\) and \(\mathbb {E}_x=\mathbb {E}\times E_{x,\omega }\) for the corresponding semi-direct product measure and expectation on \({\varOmega }\times \hbox {C}([0,\infty );\mathbb {R}^d)\). The annealed law \(\mathbb {P}_x\) inherits the translation invariance and restricted rotational invariance implied by (2.3) and (2.8). In particular, for all \(x,y\in \mathbb {R}^d\),

$$\begin{aligned} \mathbb {E}_{x+y}(X_t)=\mathbb {E}_y(x+X_t)=x+\mathbb {E}_y(X_t), \end{aligned}$$
(2.12)

and, for all orthogonal transformations r preserving the coordinate axes and \(x\in \mathbb {R}^d\),

$$\begin{aligned} \mathbb {E}_{x}(rX_t)=\mathbb {E}_{rx}(X_t). \end{aligned}$$
(2.13)

This stands in contrast to the quenched laws \(P_{x,\omega }\), for which no invariance properties can be expected to hold, in general.

2.3 A review of [12]

In this section, we review the aspects of [12] most relevant to our arguments. Observe that this summary is by no means complete, as considerably more was achieved in their paper than we mention here.

We are interested in the long term behavior of the equation, for a fixed, Hölder continuous function \(f:\mathbb {R}^d\rightarrow \mathbb {R}\),

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=\frac{1}{2}{{\mathrm{tr}}}(A(x,\omega )D^2u)+b(x,\omega )\cdot Du &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u=f(x) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$
(2.14)

This is essentially achieved by comparing the solutions of (2.14) to the solution of the deterministic problem, for \(\overline{\alpha }>0\) identified in Theorem 2.1,

$$\begin{aligned} \left\{ \begin{array}{ll} \overline{u}_t=\frac{\overline{\alpha }}{2}{\varDelta }\overline{u} &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ \overline{u}=f(x) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} ,\end{array}\right. \end{aligned}$$
(2.15)

along an increasing sequence of length and time scales.

The constant \(\overline{\alpha }\) defining (2.15) is identified in [12, Proposition 5.7] through a process we describe after introducing some notation. Fix the dimension

$$\begin{aligned} d\ge 3, \end{aligned}$$
(2.16)

and fix a Hölder exponent

$$\begin{aligned} \beta \in \left( 0,\frac{1}{2}\right] \quad \text {and a constant}\quad a\in \left( 0,\frac{\beta }{1000d}\right] . \end{aligned}$$
(2.17)

Let \(L_0\) be a large integer multiple of five. For each \(n\ge 0\), inductively define

$$\begin{aligned} \ell _n=5\left[ \frac{L_n^a}{5}\right] \quad \text {and}\quad L_{n+1}=\ell _n L_n, \end{aligned}$$
(2.18)

so that, for \(L_0\) sufficiently large, we have \(\frac{1}{2}L_n^{1+a}\le L_{n+1}\le 2L_n^{1+a}\). For each \(n\ge 0\), for \(c_0>0\), let

$$\begin{aligned} \kappa _n=\exp (c_0(\log \log (L_n))^2)\quad \text {and}\quad \tilde{\kappa }_n=\exp (2c_0(\log \log (L_n))^2),\end{aligned}$$
(2.19)

where we remark that, as n tends to infinity, \(\kappa _n\) is eventually dominated by every positive power of \(L_n\). Furthermore, define, for each \(n\ge 0\),

$$\begin{aligned} D_n=L_n\kappa _n\quad \text {and}\quad \tilde{D}_n=L_n\tilde{\kappa }_n.\end{aligned}$$
(2.20)

We choose \(L_0\) sufficiently large so that, for each \(n\ge 0\),

$$\begin{aligned} L_n<D_n< \tilde{D}_n< L_{n+1},\quad 4\tilde{\kappa }_n<\tilde{\kappa }_{n+1}\quad \text {and}\quad 3\tilde{D}_{n+1} < L_{n+1}^2. \end{aligned}$$
(2.21)
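
For instance, the two-sided bound on \(L_{n+1}\) stated after (2.18) is an immediate consequence of the definition of \(\ell _n\): since \(L_n^a-5\le 5\left[ L_n^a/5\right] \le L_n^a\),

$$\begin{aligned} L_n^{1+a}-5L_n\le L_{n+1}=\ell _nL_n\le L_n^{1+a}, \end{aligned}$$

and, for \(L_0\) sufficiently large, \(5L_n\le \frac{1}{2}L_n^{1+a}\), which yields \(\frac{1}{2}L_n^{1+a}\le L_{n+1}\le 2L_n^{1+a}\).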

The following constants enter into the probabilistic statements below. Fix \(m_0\ge 2\) satisfying

$$\begin{aligned} (1+a)^{m_0-2}\le 100<(1+a)^{m_0-1}, \end{aligned}$$
(2.22)

and \(\delta >0\) and \(M_0>0\) satisfying

$$\begin{aligned} \delta =\frac{5}{32}\beta \quad \text {and}\quad M_0\ge 100d(1+a)^{m_0+2}. \end{aligned}$$
(2.23)

In the arguments to follow, we will use the fact that \(\delta \) and \(M_0\) are sufficiently large relative to a.

We now describe the identification of \(\overline{\alpha }\). Recall, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\), the quenched law \(P_{x,\omega }\) on \(\hbox {C}([0,\infty );\mathbb {R}^d)\) and, for each \(x\in \mathbb {R}^d\), the annealed law \(\mathbb {P}_x\) on \({\varOmega }\times \hbox {C}([0,\infty );\mathbb {R}^d)\). The constant \(\overline{\alpha }\) is obtained, in effect, as the limit of the averaged effective diffusivities of the ensemble of equations (2.14) along the sequence of time scales \(L_n^2\). However, so as to apply the finite range dependence, see (2.7), the stopping time

$$\begin{aligned} T_n=\inf \left\{ s\ge 0\;|\;|X_s-X_0|\ge \tilde{D}_n\right\} \end{aligned}$$
(2.24)

is introduced, for each \(n\ge 0\), and the approximate effective diffusivity of the ensemble (2.14) is defined as

$$\begin{aligned} \alpha _n=\frac{1}{dL_n^2}\mathbb {E}_0[|X_{T_n\wedge L_n^2}|^2]. \end{aligned}$$
(2.25)
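
We remark, as a consistency check on the normalization in (2.25) and not as a statement taken from [12], that for a Brownian motion \(B_t\) with generator \(\frac{\alpha }{2}{\varDelta }\) started at the origin we have \(\mathbb {E}\left( |B_t|^2\right) =d\alpha t\), so that

$$\begin{aligned} \frac{1}{dL_n^2}\mathbb {E}\left( |B_{L_n^2}|^2\right) =\alpha , \end{aligned}$$

and the stopping time \(T_n\) is expected to contribute only a small correction, since, by Control 2.3, the diffusion rarely travels a distance \(\tilde{D}_n\) before time \(L_n^2\).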

The following theorem describes the control and convergence of the \(\alpha _n\) to \(\overline{\alpha }\), see [12, Proposition 5.7].

Theorem 2.1

Assume (2.10). There exists \(L_0\) and \(c_0\) sufficiently large and \(\eta _0>0\) sufficiently small such that, for all \(n\ge 0\),

$$\begin{aligned} \frac{1}{2\nu }\le \alpha _n\le 2\nu \quad \text {and}\quad |\alpha _{n+1}-\alpha _n|\le L_n^{-(1+\frac{9}{10})\delta }, \end{aligned}$$

which implies the existence of \(\overline{\alpha }>0\) satisfying

$$\begin{aligned} \frac{1}{2\nu }\le \overline{\alpha }\le 2\nu \quad \text {and}\quad \lim _{n\rightarrow \infty }\alpha _n=\overline{\alpha }. \end{aligned}$$
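
The existence of the limit asserted in Theorem 2.1 follows, for instance, by summing the increments: since, for \(L_0\) sufficiently large, \(\ell _n\ge 5\) and therefore \(L_n\ge 5^nL_0\), we have, for every \(m\ge n\ge 0\),

$$\begin{aligned} |\alpha _m-\alpha _n|\le \sum _{j=n}^{\infty }|\alpha _{j+1}-\alpha _j|\le \sum _{j=n}^{\infty }L_j^{-(1+\frac{9}{10})\delta }\rightarrow 0\quad \text {as}\quad n\rightarrow \infty , \end{aligned}$$

so that the sequence \(\left\{ \alpha _n\right\} _{n\ge 0}\) is Cauchy and its limit \(\overline{\alpha }\) inherits the bounds \(\frac{1}{2\nu }\le \overline{\alpha }\le 2\nu \).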

We discuss next the coupling between solutions of (2.14) and (2.15). The first step involves comparing solutions of (2.14), for each \(n\ge 0\), at time \(L_n^2\), with respect to a Hölder norm at scale \(L_n\), to solutions of the deterministic problem

$$\begin{aligned} \left\{ \begin{array}{ll} u_{n,t}=\frac{\alpha _n}{2}{\varDelta }u_n &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u_{n}=f(x) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$
(2.26)

To do so, introduce, for each \(n\ge 0\), the rescaled Hölder norm

$$\begin{aligned} |f|_n=\sup _{x\in \mathbb {R}^d}|f(x)|+L_n^\beta \sup _{x\ne y}\frac{|f(x)-f(y)|}{|x-y|^\beta }. \end{aligned}$$
(2.27)
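
As an illustration of the scaling, not used in the sequel: if \(f(x)=g(x/L_n)\) for some \(g\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\), then

$$\begin{aligned} |f|_n=\sup _{x\in \mathbb {R}^d}|g(x)|+\sup _{x\ne y}\frac{|g(x)-g(y)|}{|x-y|^\beta }, \end{aligned}$$

so that the norm \(|\cdot |_n\) measures Hölder regularity on the length scale \(L_n\).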

A localized control of the difference between solutions of (2.14) and (2.26) at time \(L_n^2\) is obtained via a cutoff function. For each \(v>0\), let

$$\begin{aligned} \chi (y)=1\wedge (2-|y|)_+\quad \text {and}\quad \chi _{v}(y)=\chi \left( \frac{y}{v}\right) , \end{aligned}$$
(2.28)

and define, for each \(x\in \mathbb {R}^d\) and \(n\ge 0\),

$$\begin{aligned} \chi _{n,x}(y)=\chi _{30\sqrt{d}L_n}(y-x). \end{aligned}$$
(2.29)
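
We note the elementary properties of this cutoff, which follow at once from (2.28):

$$\begin{aligned} \chi _{n,x}=1\quad \text {on}\quad B_{30\sqrt{d}L_n}(x)\quad \text {and}\quad \chi _{n,x}=0\quad \text {on}\quad \mathbb {R}^d{\setminus }B_{60\sqrt{d}L_n}(x), \end{aligned}$$

and \(\chi _{n,x}\) is Lipschitz continuous with constant \((30\sqrt{d}L_n)^{-1}\).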

The following result then describes the desired comparison between solutions of (2.14) and (2.26), at time \(L_n^2\), for Hölder continuous initial data.

We emphasize here that this control depends upon \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(n\ge 0\). It is not true, in general, that this control is available for all such triples \((x,\omega ,n)\). However, as described below, it is shown in [12, Proposition 5.1] that such controls are available for large n, with high probability, on a large portion of space.

Control 2.2

Fix \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(n\ge 0\). Let u and \(u_n\) respectively denote the solutions of (2.14) and (2.26) corresponding to initial data \(f\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\). We have

$$\begin{aligned} |\chi _{n,x}(y)\left( u(y,L_n^2)-u_n(y,L_n^2)\right) |_n\le L_n^{-\delta }|f|_n. \end{aligned}$$

The final control we will use concerns tail-estimates for the diffusion process. We wish to control, under \(P_{x,\omega }\), for \(X_t\in \hbox {C}([0,\infty );\mathbb {R}^d)\), the probability that

$$\begin{aligned} X_t^*=\max _{0\le s\le t}|X_s-X_0| \end{aligned}$$
(2.30)

is large with respect to the time elapsed. The desired result is similar to the standard exponential estimates for Brownian motion at large length scales.

As with Control 2.2, this control depends upon \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(n\ge 0\). It is not true, in general, that this type of localization control is available for all such triples \((x,\omega ,n)\), but it is shown in [12, Proposition 2.2] that such controls are available for large n, with high probability, on a large portion of space.

Control 2.3

Fix \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(n\ge 0\). For each \(v\ge D_n\), for all \(|y-x|\le 30\sqrt{d}L_n\),

$$\begin{aligned} P_{y,\omega }(X^*_{L_n^2}\ge v)\le \exp \left( -\frac{v}{D_n}\right) . \end{aligned}$$

We now introduce the primary probabilistic statement concerning Controls 2.2 and 2.3. Notice that the event defined below does not include the control of traps described in [12, Proposition 3.3], which plays an important role in propagating Control 2.2 in their arguments. Since we simply use the Hölder control obtained there, we do not require a further use of their control of traps.

Consider, for each \(x\in \mathbb {R}^d\), the event

$$\begin{aligned} B_n(x)=\left\{ \omega \in {\varOmega }\;|\;\text {Controls 2.2 and 2.3 hold for the triple}\;(x,\omega ,n)\right\} . \end{aligned}$$
(2.31)

Notice that, in view of (2.3), for all \(x\in \mathbb {R}^d\) and \(n\ge 0\),

$$\begin{aligned} \mathbb {P}(B_n(x))=\mathbb {P}(B_n(0)). \end{aligned}$$
(2.32)

It is then shown that the probability of the complement of \(B_n(0)\) approaches zero as n tends to infinity, see [12, Theorem 1.1].

Theorem 2.4

Assume (2.10). There exist \(L_0\) and \(c_0\) sufficiently large and \(\eta _0>0\) sufficiently small such that, for each \(n\ge 0\),

$$\begin{aligned} \mathbb {P}\left( {\varOmega }{\setminus }B_n(0)\right) \le L_n^{-M_0}. \end{aligned}$$

We henceforth fix the constants \(L_0\), \(c_0\) and \(\eta _0\) appearing above.

$$\begin{aligned}&\text {Fix constants}\;L_0, c_0\;\text {and}\;\eta _0\;\text {satisfying (2.21) and the hypotheses of}\nonumber \\&\quad \text {Theorems 2.1 and 2.4.} \end{aligned}$$
(2.33)

We conclude this section with a few basic observations concerning Control 2.2, Control 2.3 and the Hölder norms introduced in (2.27). Since Control 2.2 cannot be expected to hold globally in space, it will frequently be necessary to introduce cutoff functions of the type appearing in (2.28). The primary purpose of Control 2.3 is to bound the error we introduce, as seen in the following proposition.

Proposition 2.5

Assume (2.10) and (2.33). Fix \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(n\ge 0\) and suppose that Control 2.3 is satisfied for the triple \((x,\omega ,n)\). For \(f\in L^\infty (\mathbb {R}^d)\) satisfying

$$\begin{aligned} d\left( {{\mathrm{Supp}}}(f), B_{30\sqrt{d}L_n}(x)\right) \ge D_n+30\sqrt{d}L_n, \end{aligned}$$

let \(u(y,t)\) satisfy (2.14) with initial data f(y). Then, for each \(|y-x|\le 30\sqrt{d}L_n\),

$$\begin{aligned} |u(y,L_n^2)|\le \exp \left( -\frac{d({{\mathrm{Supp}}}(f),y)}{D_n}\right) ||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Proof

The proof is immediate from the representation formula for the solution. We have, for each \(y\in \mathbb {R}^d\),

$$\begin{aligned}u(y,L_n^2)=E_{y,\omega }\left( f(X_{L_n^2})\right) .\end{aligned}$$

Therefore,

$$\begin{aligned} |u(y,L_n^2)|\le P_{y,\omega }\left( X_{L_n^2}^*\ge d({{\mathrm{Supp}}}(f),y)\right) ||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Since \(d({{\mathrm{Supp}}}(f), B_{30\sqrt{d}L_n}(x))\ge D_n+30\sqrt{d}L_n\), and since Control 2.3 is satisfied for the triple \((x,\omega ,n)\), this implies that, for all \(|y-x|\le 30\sqrt{d}L_n\),

$$\begin{aligned}|u(y,L_n^2)|\le \exp \left( -\frac{d({{\mathrm{Supp}}}(f),y)}{D_n}\right) ||f||_{L^\infty (\mathbb {R}^d)},\end{aligned}$$

which completes the argument.

The following two elementary propositions will be used to extend Control 2.2 to a larger portion of space. The first is an elementary and well-known fact concerning the product of Hölder continuous functions.

Proposition 2.6

For each \(n\ge 0\), for every \(f,g\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\),

$$\begin{aligned} |fg|_n\le |f|_n|g|_n. \end{aligned}$$
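
For completeness, a one-line verification: writing, for this remark only, \([f]_n=L_n^\beta \sup _{x\ne y}|f(x)-f(y)|/|x-y|^\beta \), we have, for every \(x\ne y\),

$$\begin{aligned} |f(x)g(x)-f(y)g(y)|\le \sup |f|\,|g(x)-g(y)|+\sup |g|\,|f(x)-f(y)|, \end{aligned}$$

so that \([fg]_n\le \sup |f|\,[g]_n+\sup |g|\,[f]_n\) and therefore \(|fg|_n\le \sup |f|\sup |g|+\sup |f|\,[g]_n+\sup |g|\,[f]_n\le |f|_n|g|_n\).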

The second will play the most important role in extending Control 2.2. The only observation is that the Hölder norms introduced in (2.27) occur at the length scale \(L_n\). Therefore, a function agreeing locally with Hölder continuous functions on scale \(L_n\) must itself be globally Hölder continuous. The proof is elementary and can be found in [12, Lemma A.1].

Proposition 2.7

Let I be an arbitrary index set and \(n\ge 0\). If \(f:\mathbb {R}^d\rightarrow \mathbb {R}\) and \(\left\{ g_i:\mathbb {R}^d\rightarrow \mathbb {R}\right\} _{i\in I}\) are such that, for a collection \(\left\{ x_i\right\} _{i\in I}\subset \mathbb {R}^d\),

$$\begin{aligned} f=g_i\quad \text {on}\quad B(x_i, 20\sqrt{d}L_n) \quad \text {and}\quad {{\mathrm{Supp}}}(f)\subset \bigcup _{i\in I}B(x_i,10\sqrt{d}L_n), \end{aligned}$$
(2.34)

then

$$\begin{aligned} |f|_n\le 3\sup _{i\in I}|g_i|_n. \end{aligned}$$
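
A sketch of the argument, which is given in full in [12, Lemma A.1], runs as follows. If \(|x-y|\le 10\sqrt{d}L_n\) and, say, \(f(x)\ne 0\), then \(x\in B(x_i,10\sqrt{d}L_n)\) for some \(i\in I\), so that \(x,y\in B(x_i,20\sqrt{d}L_n)\) and \(f(x)-f(y)=g_i(x)-g_i(y)\). If instead \(|x-y|>10\sqrt{d}L_n\), then

$$\begin{aligned} L_n^\beta \frac{|f(x)-f(y)|}{|x-y|^\beta }\le \frac{2\sup _{i\in I}\sup |g_i|}{(10\sqrt{d})^\beta }\le 2\sup _{i\in I}|g_i|_n. \end{aligned}$$

Combining the two cases with \(\sup |f|\le \sup _{i\in I}\sup |g_i|\) yields the stated bound.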

3 The identification of the invariant measure

In order to identify the invariant measure, we will analyze the long term behavior of the solution \(u:\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=\frac{1}{2}{{\mathrm{tr}}}(A(x,\omega )D^2u)+b(x,\omega )\cdot Du &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u=f(x,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$
(3.1)

Therefore, to simplify the notation in what follows, we write, for each \(s\ge 0\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} R_sf(x,\omega )=u(x,s,\omega ), \end{aligned}$$

for \(u(x,s,\omega )\) satisfying (3.1) with initial data \(f(y,\omega )\).

We will be particularly interested in translations of functions \(\tilde{f}\in L^\infty ({\varOmega })\) with respect to the translation group \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\), and therefore assume in many of the propositions to follow that a function \(f:\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}\) is stationary with respect to \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\). Precisely, for each \(x,y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} f(x+y,\omega )=f(x,\tau _y\omega ). \end{aligned}$$
(3.2)

For every \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2), we identify a deterministic constant \(\overline{\pi }(f)\in \mathbb {R}\), obtained effectively as the limit of the sequence defined, for each \(n\ge 0\), by

$$\begin{aligned} \mathbb {E}\left( R_{L_n^2}f(0,\omega )\right) . \end{aligned}$$
(3.3)

And, for \(\mathbf{{1}}_E:{\varOmega }\rightarrow \mathbb {R}\) the indicator function of a measurable subset \(E\in \mathcal {F}\), by taking \(f_E(x,\omega )=\mathbf{{1}}_E(\tau _x\omega )\), we define a measure \(\pi :\mathcal {F}\rightarrow \mathbb {R}\) on \(({\varOmega },\mathcal {F})\) by the rule

$$\begin{aligned} \pi (E)=\overline{\pi }(f_E).\end{aligned}$$
(3.4)

We will prove that \(\pi \) is a probability measure on \(({\varOmega },\mathcal {F})\) which is absolutely continuous with respect to \(\mathbb {P}\). And, for every \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2),

$$\begin{aligned} \overline{\pi }(f)=\int _{{\varOmega }}f(0,\omega )\;d\pi . \end{aligned}$$

The following two propositions describe the basic existence and regularity results concerning equation (3.1) for bounded and stationary initial data.

Proposition 3.1

Assume (2.10). For each \(\omega \in {\varOmega }\) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) there exists a unique solution \(u(x,t,\omega ):\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) of (3.1) satisfying, for each \(T>0\) and \(\omega \in {\varOmega }\), \(u(x,t,\omega )\in {{\mathrm{BUC}}}(\mathbb {R}^d\times [0,T])\) with, for each \(\omega \in {\varOmega }\),

$$\begin{aligned} ||u(x,t,\omega )||_{L^\infty (\mathbb {R}^d\times [0,\infty ))}\le ||f(x,\omega )||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Furthermore, if \(f(x,\omega )\) satisfies (3.2), then for each \(t\ge 0\), the map \(u(x,t,\omega ):\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}\) is stationary. Precisely, for each \(x,y\in \mathbb {R}^d\), \(t\ge 0\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} u(x,t,\tau _y\omega )=u(x+y,t,\omega ). \end{aligned}$$

Proof

The existence and uniqueness of a solution to (3.1) satisfying the above estimates, for each \(\omega \in {\varOmega }\), is an elementary consequence of (2.4), (2.5) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\). See, for instance, Friedman [4, Chapter 1, Theorem 12]. The stationarity is a consequence of (3.2) and the uniqueness since, for each \(\omega \in {\varOmega }\), both \(u(x,t,\tau _y\omega )\) and \(u(x+y,t,\omega )\) satisfy (3.1) for \(\tau _y\omega \).

Proposition 3.2

Assume (2.10). For each \(\omega \in {\varOmega }\), \(t\ge 1\) and \(g\in L^\infty (\mathbb {R}^d)\), for \(C>0\) independent of \(\omega \in {\varOmega }\) and \(t\ge 1\),

$$\begin{aligned} ||R_tg(x,\omega )||_{C^{0,\beta }(\mathbb {R}^d)}\le C||g||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Proof

Fix \(\omega \in {\varOmega }\) and \(g\in L^\infty (\mathbb {R}^d)\). Recall that, for each \(t\ge 0\) and \(x\in \mathbb {R}^d\), see [4, Chapter 1, Theorem 12],

$$\begin{aligned} R_tg(x,\omega )=E_{x,\omega }\left( g(X_t)\right) =\int _{\mathbb {R}^d}p(x,t,y,\omega )g(y)\;dy, \end{aligned}$$
(3.5)

for \(p(x,t,y,\omega ):\mathbb {R}^d\times (0,\infty )\times \mathbb {R}^d\rightarrow \mathbb {R}\) satisfying, for each \(0<t\le 1\), for \(C>0\) and \(c>0\) independent of \(\omega \),

$$\begin{aligned} |p(x,t,y,\omega )|\le Ct^{-d/2}e^{-c|x-y|^2/t}\quad \text {and}\quad |D_xp(x,t,y,\omega )|\le Ct^{-(d+1)/2}e^{-c|x-y|^2/t}. \end{aligned}$$
(3.6)

First, we observe that for each \(x\in \mathbb {R}^d\) and \(t\ge 0\), using (3.5),

$$\begin{aligned} |R_tg(x,\omega )|\le ||g||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.7)

It remains to bound the Hölder semi-norm.

Whenever \(x,y\in \mathbb {R}^d\) satisfy \(|x-y|\ge 1\),

$$\begin{aligned} |R_1g(x,\omega )-R_1g(y,\omega )|\le 2||g||_{L^\infty (\mathbb {R}^d)}\le 2|x-y|^\beta ||g||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.8)

And, whenever \(x,y\in \mathbb {R}^d\) satisfy \(|x-y|<1\), since (3.5) and (3.6) imply that \(R_1g(x,\omega )\) is Lipschitz with constant determined by \(||g||_{L^\infty (\mathbb {R}^d)}\), for \(C>0\) independent of \(\omega \in {\varOmega }\),

$$\begin{aligned} |R_1g(x,\omega )-R_1g(y,\omega )|\le C|x-y|||g||_{L^\infty (\mathbb {R}^d)}\le C|x-y|^\beta ||g||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.9)

Therefore, for each \(x,y\in \mathbb {R}^d\) and \(t\ge 1\), writing \(R_tg=R_1(R_{t-1}g)\) and applying (3.8) and (3.9) to the function \(R_{t-1}g\), which by (3.7) satisfies \(||R_{t-1}g||_{L^\infty (\mathbb {R}^d)}\le ||g||_{L^\infty (\mathbb {R}^d)}\),

$$\begin{aligned} |R_tg(x,\omega )-R_tg(y,\omega )|=|R_{1}(R_{t-1}g)(x,\omega )-R_{1}(R_{t-1}g)(y,\omega )|\le C|x-y|^\beta ||g||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.10)

The claim follows from (3.7), (3.8) and (3.10), since \(\omega \in {\varOmega }\) and \(g\in L^\infty (\mathbb {R}^d)\) were arbitrary.

Before proceeding, it is convenient to introduce some useful notation. We write, for each \(n\ge 0\) and \(f\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\),

$$\begin{aligned} R_nf(x,\omega )=u(x,L_n^2), \end{aligned}$$
(3.11)

for \(u(x,t)\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=\frac{1}{2}{{\mathrm{tr}}}(A(x,\omega )D^2u)+b(x,\omega )\cdot Du &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u=f(x) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$

Similarly, for each \(n\ge 0\) and \(f\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\),

$$\begin{aligned} \overline{R}_nf(x)=\overline{u}_n(x,L_n^2), \end{aligned}$$
(3.12)

for \(\overline{u}_n(x,t)\) satisfying

$$\begin{aligned}\left\{ \begin{array}{ll} \overline{u}_{n,t}=\frac{\alpha _n}{2}{\varDelta }\overline{u}_n &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ \overline{u}_n=f(x) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$

And, finally, for each \(n\ge 0\) and \(f\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\),

$$\begin{aligned} S_nf(x,\omega )=R_nf(x,\omega )-\overline{R}_nf(x).\end{aligned}$$
(3.13)

This allows us to restate Control 2.2 in the following equivalent way, where we recall from (2.29), for each \(x\in \mathbb {R}^d\) and \(n\ge 0\), the cutoff function \(\chi _{n,x}\).

Control 3.3

Fix \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(n\ge 0\). For each \(f\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\),

$$\begin{aligned} |\chi _{n,x}S_nf|_n\le L_n^{-\delta }|f|_n. \end{aligned}$$

We now make two elementary observations concerning the interaction of the heat kernels \(\overline{R}_n\) introduced in (3.12) and the scaled Hölder norms introduced in (2.27), and an observation concerning the localization properties of the kernels \(\overline{R}_n\). Notice that, in the following proposition, we make use of Theorem 2.1, which in particular provides a lower bound for the \(\alpha _n\). This lower bound ensures that the kernels \(\overline{R}_n\) provide a sufficient regularization, uniformly in \(n\ge 0\), for our arguments to follow.

Proposition 3.4

Assume (2.10) and (2.33). There exists \(C>0\) satisfying, for each \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d)\),

$$\begin{aligned} |\overline{R}_nf|_n\le C||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Proof

Fix \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d)\). In view of (3.12), for each \(x\in \mathbb {R}^d\),

$$\begin{aligned} \overline{R}_nf(x)=\int _{\mathbb {R}^d}(4\pi \alpha _n L_n^2)^{-d/2}e^{-|x-y|^2/4\alpha _n L_n^2}f(y)\;dy. \end{aligned}$$

Therefore,

$$\begin{aligned} ||\overline{R}_nf(x)||_{L^\infty (\mathbb {R}^d)}\le ||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.14)

It remains to bound the Hölder semi-norm.

For each \(x\in \mathbb {R}^d\),

$$\begin{aligned} D\overline{R}_nf(x)=\pi ^{-d/2}(4\alpha _n L_n^2)^{-1/2}\int _{\mathbb {R}^d}\frac{x-y}{(4\alpha _n L_n^2)^{(d+1)/2}}e^{-|x-y|^2/4\alpha _n L_n^2}f(y)\;dy. \end{aligned}$$

Therefore, in view of Theorem 2.1, for each \(x\in \mathbb {R}^d\), for \(C>0\) independent of \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d)\),

$$\begin{aligned} |D\overline{R}_nf(x)|= & {} \left| \pi ^{-d/2}(4\alpha _n L_n^2)^{-1/2}\int _{\mathbb {R}^d} ye^{-|y|^2}f\left( (4\alpha _nL_n^2)^{1/2}y+x\right) \;dy\right| \\\le & {} CL_n^{-1}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

So, whenever \(x,y\in \mathbb {R}^d\) satisfy \(0<|x-y|<L_n\),

$$\begin{aligned} L_n^\beta \frac{|\overline{R}_nf(x)-\overline{R}_nf(y)|}{|x-y|^\beta }\le CL_n^{\beta -1}||f||_{L^\infty (\mathbb {R}^d)}|x-y|^{1-\beta }\le C||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.15)

And, in view of (3.14), if \(|x-y|\ge L_n\),

$$\begin{aligned} L_n^\beta \frac{|\overline{R}_nf(x)-\overline{R}_nf(y)|}{|x-y|^\beta }\le 2||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.16)

The claim follows from (3.14), (3.15) and (3.16).

The following observation is elementary and well-known. The kernels \(\overline{R}_n\) are contractions with respect to the scaled Hölder norms introduced in (2.27).

Proposition 3.5

For each \(n\ge 0\) and \(f\in \hbox {C}^{0,\beta }(\mathbb {R}^d)\),

$$\begin{aligned} |\overline{R}_nf|_n\le |f|_n. \end{aligned}$$
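
This can be seen directly from the fact that \(\overline{R}_n\) is convolution with a probability density: writing \(G_n\) for the Gaussian kernel appearing in the proof of Proposition 3.4,

$$\begin{aligned} |\overline{R}_nf(x)-\overline{R}_nf(y)|\le \int _{\mathbb {R}^d}G_n(z)|f(x-z)-f(y-z)|\;dz\le |x-y|^\beta \sup _{z\ne w}\frac{|f(z)-f(w)|}{|z-w|^\beta }, \end{aligned}$$

and, similarly, \(\sup |\overline{R}_nf|\le \sup |f|\), so that both terms defining \(|\cdot |_n\) in (2.27) are preserved.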

Finally, the following proposition describes the localization properties of the kernels \(\overline{R}_n\). Here, notice again the role of Theorem 2.1 and recall the cutoff function introduced in (2.28).

Proposition 3.6

Assume (2.10) and (2.33). There exist \(C=C(d)>0\) and \(c>0\), independent of n, such that, for each \(f\in L^\infty (\mathbb {R}^d)\),

$$\begin{aligned} |\overline{R}_n(1-\chi _{\tilde{D}_n})f(0)|\le Ce^{-c\tilde{\kappa }_n^2}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Proof

Fix \(n\ge 0\). Then, for \(C=C(d)>0\),

$$\begin{aligned} |\overline{R}_n(1-\chi _{\tilde{D}_n})f(0)|\le & {} \int _{\mathbb {R}^d{\setminus }B_{\tilde{D}_n}}(4\pi \alpha _n L_n^2)^{-d/2}e^{-|y|^2/4\alpha _n L_n^2}|f(y)|\;dy \\\le & {} C||f||_{L^\infty (\mathbb {R}^d)}\int _{\tilde{D}_n/(2\sqrt{\alpha _n}L_n)}^\infty re^{-r^2}\;dr. \end{aligned}$$

Therefore, using Theorem 2.1, there exists \(c>0\) independent of n such that, for \(C=C(d)>0\),

$$\begin{aligned}|\overline{R}_n(1-\chi _{\tilde{D}_n})f(0)|\le Ce^{-\tilde{\kappa }_n^2/4\alpha _n}||f||_{L^\infty (\mathbb {R}^d)}\le Ce^{-c\tilde{\kappa }_n^2}||f||_{L^\infty (\mathbb {R}^d)},\end{aligned}$$

which completes the argument.

We are now prepared to begin our identification of the measure. In order to exploit the finite range dependence in what follows, see (2.7), we introduce localized versions of the kernels \(R_n\). Define, for each \(n\ge 0\), \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} \tilde{R}_nf(x,\omega )=\tilde{u}(x,L_n^2,\omega ), \end{aligned}$$

for \(\tilde{u}:\overline{B}_{6\tilde{D}_n}(x)\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} \tilde{u}_t=\frac{1}{2}{{\mathrm{tr}}}(A(y,\omega )D^2\tilde{u})+b(y,\omega )\cdot D\tilde{u} &{}\quad \text {on}\quad B_{6\tilde{D}_n}(x)\times (0,\infty ), \\ \tilde{u}=f(y,\omega ) &{}\quad \text {on}\quad B_{6\tilde{D}_n}(x)\times \left\{ 0\right\} \cup \partial B_{6\tilde{D}_n}(x)\times [0,\infty ).\end{array}\right. \end{aligned}$$
(3.17)

The following proposition describes the basic properties of the solutions to (3.17).

Proposition 3.7

Assume (2.10). For each \(n\ge 0\), \(x\in \mathbb {R}^d\) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) there exists a unique solution \(\tilde{u}(y,t,\omega ):\overline{B}_{6\tilde{D}_n}(x)\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) of (3.17) satisfying, for each \(T>0\) and \(\omega \in {\varOmega }\), \(\tilde{u}(y,t,\omega )\in {{\mathrm{BUC}}}(\overline{B}_{6\tilde{D}_n}(x)\times [0,T])\) with, for each \(\omega \in {\varOmega }\),

$$\begin{aligned} ||\tilde{u}(y,t,\omega )||_{L^\infty (\overline{B}_{6\tilde{D}_n}(x)\times [0,\infty ))}\le ||f(y,\omega )||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$

Furthermore, if \(f(x,\omega )\) satisfies (3.2), then for each \(n\ge 0\) and \(k\ge 0\), the map \((\tilde{R}_n)^kf(x,\omega ):\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}\) is stationary. Precisely, for each \(x,y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} \left( \tilde{R}_n\right) ^kf(x,\tau _y\omega )=\left( \tilde{R}_n\right) ^kf(x+y,\omega ). \end{aligned}$$

Proof

Fix \(n\ge 0\) and \(k\ge 0\). The existence and uniqueness of a solution to (3.17) satisfying the above estimates, for each \(\omega \in {\varOmega }\), is an elementary consequence of (2.4), (2.5) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\). See, for instance [4, Chapter 3, Theorem 9]. The stationarity is a consequence of (2.3) and the uniqueness since, for each \(\omega \in {\varOmega }\) and \(x,y\in \mathbb {R}^d\), if \(\tilde{u}(\cdot ,\cdot ,\omega )\) satisfies (3.17) corresponding to \(\omega \) on \(\overline{B}_{6\tilde{D}_n}(x+y)\times [0,\infty )\) then \(\tilde{u}(\cdot +y,\cdot ,\omega )\) satisfies (3.17) corresponding to \(\tau _y\omega \) on \(\overline{B}_{6\tilde{D}_n}(x)\times [0,\infty )\).

We now obtain Controls 2.3 and 3.3 on a large portion of space, with high probability. Define, for each \(n\ge 0\),

$$\begin{aligned}\tilde{A}_n=\left\{ \omega \in {\varOmega }\;|\;\omega \in B_n(x)\;\text {for all}\;x\in L_n\mathbb {Z}^d\cap [-2L_{n+2}^2, 2L_{n+2}^2]^d\right\} ,\end{aligned}$$

and, for each \(n\ge 0\),

$$\begin{aligned} A_n=\tilde{A}_n\cap \tilde{A}_{n+1}\cap \tilde{A}_{n+2}. \end{aligned}$$
(3.18)

The following proposition provides, for each \(n\ge 0\), a lower bound for the probability of \(A_n\). We remark that, in view of (2.17) and (2.23), the exponent \((2(1+a)^2-1)d-M_0<0\).

Proposition 3.8

Assume (2.10) and (2.33). For each \(n\ge 0\), for \(C>0\) independent of n,

$$\begin{aligned} \mathbb {P}({\varOmega }{\setminus }A_n)\le CL_n^{(2(1+a)^2-1)d-M_0}. \end{aligned}$$

Proof

In view of (2.32), for each \(n\ge 0\), for \(C>0\) independent of n,

$$\begin{aligned}\mathbb {P}({\varOmega }{\setminus }\tilde{A}_n)\le \sum _{x\in L_n\mathbb {Z}^d\cap [-2L_{n+2}^2, 2L_{n+2}^2]^d}\mathbb {P}\left( {\varOmega }{\setminus }B_n(x)\right) \le C\left( L_{n+2}^2/L_n\right) ^d\mathbb {P}\left( {\varOmega }{\setminus } B_n(0)\right) .\end{aligned}$$

Therefore, using Theorem 2.4, for each \(n\ge 0\), for \(C>0\) independent of n,

$$\begin{aligned} \mathbb {P}({\varOmega }{\setminus }\tilde{A}_n)\le CL_n^{(2(1+a)^2-1)d-M_0}. \end{aligned}$$

This implies that, for each \(n\ge 0\),

$$\begin{aligned}\mathbb {P}\left( {\varOmega }{\setminus }A_n\right)\le & {} \mathbb {P}\left( {\varOmega }{\setminus } \tilde{A}_n\right) +\mathbb {P}\left( {\varOmega }{\setminus }\tilde{A}_{n+1}\right) +\mathbb {P}\left( {\varOmega }{\setminus } \tilde{A}_{n+2}\right) \\\le & {} C\left( L_n^{(2(1+a)^2-1)d-M_0}+L_{n+1}^{(2(1+a)^2-1)d-M_0}+L_{n+2}^{(2(1+a)^2-1)d-M_0}\right) \\\le & {} CL_n^{(2(1+a)^2-1)d-M_0}, \end{aligned}$$

which completes the argument.

The following proposition is the essential step toward constructing the invariant measure and provides the first comparison between \(R_{n+1}f(x,\omega )\) and \(R_nf(x,\omega )\) for environments in the subset \(A_n\) defined in (3.18). Notice that the estimates contained below depend upon the unscaled \(\beta \)-Hölder norm of the initial data. Since the identification of the measure requires us to consider initial data \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\), we will later use Proposition 3.2 and apply the following result to \(R_1f(x,\omega )\). Observe that, in view of (2.17) and (2.23), the exponent \(\beta -7(\delta -5a)\) appearing below is negative.

Proposition 3.9

Assume (2.10) and (2.33). For each \(n\ge 0\), \(\omega \in A_n\), \(1\le k<\ell _{n+1}^2\) and \(f\in C^{0,\beta }(\mathbb {R}^d)\), for \(C>0\) independent of n,

$$\begin{aligned} \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}|\left( R_{n+1}\right) ^kf(x,\omega )-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6f(x,\omega )|\le CL_n^{\beta -7(\delta -5a)}||f||_{C^{0,\beta }(\mathbb {R}^d)}. \end{aligned}$$

Proof

Fix \(n\ge 0\), \(\omega \in A_n\), \(1\le k<\ell _{n+1}^2\) and \(f\in C^{0,\beta }(\mathbb {R}^d)\). In what follows, we suppress the dependence on \(\omega \in {\varOmega }\). Notice that (2.18), (2.19) and (2.20) imply that, since \(1\le k<\ell _{n+1}^2\),

$$\begin{aligned} 4\sqrt{k}\tilde{D}_{n+1}<\tilde{D}_{n+2}.\end{aligned}$$
(3.19)

Also, notice (2.21) implies that, in the definition of \(A_n\),

$$\begin{aligned} 3\tilde{D}_{n+2}< L_{n+2}^2. \end{aligned}$$
(3.20)

And, recall that, by the semigroup property, since \(L_{n+1}^2=\ell _n^2L_n^2\) by (2.18),

$$\begin{aligned} \left( R_{n+1}\right) ^kf(x)=\left( R_n\right) ^{k\ell _n^2}f(x). \end{aligned}$$

Fix \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and define the cutoff function \(\tilde{\chi }_{n,x}:\mathbb {R}^d\rightarrow \mathbb {R}\), recalling (2.28),

$$\begin{aligned} \tilde{\chi }_{n,x}(y)=\chi _{\tilde{D}_{n+2}}(y-x)\quad \text {on}\quad \mathbb {R}^d. \end{aligned}$$
(3.21)

Since

$$\begin{aligned} ||R_nf||_{L^\infty (\mathbb {R}^d)}\le ||f||_{L^\infty (\mathbb {R}^d)}, \end{aligned}$$
(3.22)

and since \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(\omega \in A_n\), Control 2.3, Proposition 2.5, (3.19) and (3.20) imply that

$$\begin{aligned}&|\left( R_n\right) ^{k\ell _n^2}f(x)-\left( R_n\right) ^{k\ell _n^2-1}\tilde{\chi }_{n,x}R_nf(x)|=|\left( R_n\right) ^{k\ell _n^2-1}(1-\tilde{\chi }_{n,x})R_nf(x)|\\&\quad \le e^{-\kappa _{n+2}}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Proceeding inductively,

$$\begin{aligned}&|\left( R_n\right) ^{k\ell _n^2}f(x)-\left( \tilde{\chi }_{n,x}R_n\right) ^{k\ell _n^2}f(x)|\le k\ell _n^2e^{-\kappa _{n+2}}||f||_{L^\infty (\mathbb {R}^d)}\nonumber \\&\quad \le \ell _{n+1}^2\ell _n^2e^{-\kappa _{n+2}}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.23)

We now write

$$\begin{aligned} \left( \tilde{\chi }_{n,x}R_n\right) ^{k\ell _n^2}f(x)=\left( \tilde{\chi }_{n,x}S_n+\tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2}f(x), \end{aligned}$$

and, for nonnegative integers \(k_i\ge 0\),

$$\begin{aligned}&\left( \tilde{\chi }_{n,x}S_n+\tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2}f(x)\\&\quad =\sum _{m=0}^{k\ell _n^2}\sum _{k_0+\cdots +k_m+m=k\ell _n^2}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_1} \ldots \tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_m}f(x).\quad \end{aligned}$$

Since, for each \(n\ge 0\),

$$\begin{aligned} |f|_n\le L_n^\beta ||f||_{C^{0,\beta }(\mathbb {R}^d)}, \end{aligned}$$

and since \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(\omega \in A_n\), Proposition 2.6, Proposition 2.7, Control 3.3, Proposition 3.4, Proposition 3.5, (3.19) and (3.20) imply that

$$\begin{aligned}&\left| \sum _{m=7}^{k\ell _n^2}\sum _{k_0+\cdots +k_m+m=k\ell _n^2}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_1} \ldots \tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_m}f(x)\right| \\&\quad \le \sum _{m=7}^{k\ell _n^2} {k\ell _n^2 \atopwithdelims ()m} 3^mL_n^{\beta -m\delta }||f||_{C^{0,\beta }(\mathbb {R}^d)}. \end{aligned}$$

Therefore, for \(C>0\) independent of n, using (2.17) to write \(4a+2a^2<5a\) and since \(1\le k<\ell _{n+1}^2\), the left-hand side of the above string of inequalities is bounded by

$$\begin{aligned} \sum _{m=7}^{k\ell _n^2} \frac{3^m}{m!}L_n^{\beta -m(\delta -5a)}||f||_{C^{0,\beta }(\mathbb {R}^d)}\le CL_n^{\beta -7(\delta -5a)}||f||_{C^{0,\beta }(\mathbb {R}^d)},\end{aligned}$$
(3.24)

where we remark that \(\beta -7(\delta -5a)<0\) in view of (2.17) and (2.23).
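
The elementary estimate used in passing to (3.24), which we record for the reader, is that, since \(k\ell _n^2\le \ell _{n+1}^2\ell _n^2\le L_n^{5a}\) for \(L_0\) sufficiently large by (2.17) and (2.18),

$$\begin{aligned} {k\ell _n^2 \atopwithdelims ()m}3^mL_n^{\beta -m\delta }\le \frac{(k\ell _n^2)^m}{m!}3^mL_n^{\beta -m\delta }\le \frac{3^m}{m!}L_n^{\beta -m(\delta -5a)}, \end{aligned}$$

after which the sum over \(m\ge 7\) is bounded by \(CL_n^{\beta -7(\delta -5a)}\), since \(L_n^{-(\delta -5a)}\le 1\) and \(\sum _{m\ge 7}3^m/m!<\infty \).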

It remains to consider

$$\begin{aligned} \sum _{m=0}^{6}\sum _{k_0+\cdots +k_m+m=k\ell _n^2}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_1} \ldots \tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_m}f(x). \end{aligned}$$
(3.25)

We will prove that, up to an error which vanishes as n approaches infinity, the above sum reduces to

$$\begin{aligned} \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x). \end{aligned}$$

To do so, we consider each summand in m individually.

For the case \(m=0\), the single summand is

$$\begin{aligned} \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2}f(x). \end{aligned}$$
(3.26)

For the case \(m=1\), observe that, as a consequence of the assumptions \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(\omega \in A_n\), together with Proposition 2.6, Proposition 2.7, Control 3.3, Proposition 3.4, Proposition 3.5, (3.19) and (3.20), this summand in (3.25) satisfies, using (2.17) to write \(4a+2a^2<5a\), for \(C>0\) independent of n,

$$\begin{aligned}&\left| \sum _{k_0+k_1+1=k\ell _n^2}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_1}f(x)-\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-1}\tilde{\chi }_{n,x}S_nf(x)\right| \nonumber \\&\quad \le {k\ell _n^2 \atopwithdelims ()1}3L_n^{-\delta }||f||_{L^\infty (\mathbb {R}^d)}\le CL_n^{5a-\delta }||f||_{L^\infty (\mathbb {R}^d)}, \end{aligned}$$
(3.27)

where we observe that \(5a-\delta <0\) in view of (2.17) and (2.23). Furthermore,

$$\begin{aligned} \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-1}\tilde{\chi }_{n,x}S_nf(x)=\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-1}\tilde{\chi }_{n,x}R_nf(x)-\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2}f(x). \end{aligned}$$
(3.28)

Notice the additive cancellation between (3.26) and the final term of (3.28).

In the following, we use the fact that, since \(R_n\) and \(\overline{R}_n\) are both contractions on \(L^\infty (\mathbb {R}^d)\), for every \(f\in L^\infty (\mathbb {R}^d)\),

$$\begin{aligned}||S_nf||_{L^\infty (\mathbb {R}^d)}\le 2||f||_{L^\infty (\mathbb {R}^d)}.\end{aligned}$$

Fix \(2\le m\le 6\). In this case, as in the cases \(m=0\) and \(m=1\), Proposition 2.6, Proposition 2.7, Control 3.3, Proposition 3.4 and Proposition 3.5 allow us to reduce the sum to the single term \(k_i=0\) for all \(1\le i\le m\). Observe that, since \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(\omega \in A_n\),

$$\begin{aligned} \left| \sum _{k_m\ne 0}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\tilde{\chi }_{n,x}S_n\ldots \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_m}f(x)\right| \le {k\ell _n^2 \atopwithdelims ()m}3^mL_n^{-m\delta }||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

And, generally, for \(1\le i\le m\), since \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(\omega \in A_n\),

$$\begin{aligned}&\left| \sum _{k_i\ne 0,\;k_j=0\;\text {if}\;j>i}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\tilde{\chi }_{n,x}S_n\ldots \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_i}\left( \tilde{\chi }_{n,x}S_n\right) ^{m-i}f(x)\right| \\&\quad \le {k\ell _n^2-m+i \atopwithdelims ()i}2^{m-i}3^iL_n^{-i\delta }||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Therefore, for \(C>0\) independent of \(2\le m\le 6\) and n, using (2.17) to write \(4a+2a^2<5a\),

$$\begin{aligned}&\left| \sum _{k_0+\cdots +k_m+m=k\ell _n^2}\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_0}\ldots \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k_m}f(x)-\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\left( \tilde{\chi }_{n,x}S_n\right) ^mf(x)\right| \nonumber \\&\quad \le \sum _{i=1}^m{k\ell _n^2-m+i \atopwithdelims ()i}2^{m-i}3^iL_n^{-i\delta }||f||_{L^\infty (\mathbb {R}^d)}\le CL_n^{5a-\delta }||f||_{L^\infty (\mathbb {R}^d)}, \end{aligned}$$
(3.29)

where we observe that \(5a-\delta <0\) in view of (2.17) and (2.23).
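Here the final inequality follows since \({k\ell _n^2-m+i \atopwithdelims ()i}\le \left( k\ell _n^2\right) ^i\) and, as in (3.27), \(k\ell _n^2L_n^{-\delta }\le CL_n^{5a-\delta }\), so that each of the at most six terms in the sum satisfies

$$\begin{aligned} {k\ell _n^2-m+i \atopwithdelims ()i}2^{m-i}3^iL_n^{-i\delta }\le C\left( L_n^{5a-\delta }\right) ^i\le CL_n^{5a-\delta }. \end{aligned}$$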

Furthermore, again using Proposition 2.6, Proposition 2.7, Control 3.3, Proposition 3.4, Proposition 3.5 and (3.14), since \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(\omega \in A_n\), for each \(2\le m\le 6\),

$$\begin{aligned}&\left| \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\left( \tilde{\chi }_{n,x}S_n\right) ^mf(x)-\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\left( \tilde{\chi }_{n,x}S_n\right) ^{m-1}\tilde{\chi }_{n,x}R_nf(x)\right| \\&\quad =\left| \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\left( \tilde{\chi }_{n,x}S_n\right) ^{m-1}\tilde{\chi }_{n,x}\overline{R}_nf(x)\right| \le 3^{m-1}L_n^{-(m-1)\delta }||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$

Proceeding inductively, for each \(2\le m\le 6\), for \(C>0\) independent of m and n,

$$\begin{aligned}&\left| \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\left( \tilde{\chi }_{n,x}S_n\right) ^mf(x)-\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}R_n\right) ^{m-1}f(x)\right| \\&\quad \le CL_n^{-\delta }||f||_{L^\infty (\mathbb {R}^d)}, \end{aligned}$$

where we observe that

$$\begin{aligned}&\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2}f(x)+\sum _{m=1}^6\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}R_n\right) ^{m-1}f(x) \nonumber \\&\quad =\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{\chi }_{n,x}R_n\right) ^{6}f(x). \end{aligned}$$
(3.30)
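The identity (3.30) is a telescoping sum: writing, as in (3.28), \(\tilde{\chi }_{n,x}S_n=\tilde{\chi }_{n,x}R_n-\tilde{\chi }_{n,x}\overline{R}_n\), each summand satisfies

$$\begin{aligned}&\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\tilde{\chi }_{n,x}S_n\left( \tilde{\chi }_{n,x}R_n\right) ^{m-1}f(x) \\&\quad =\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m}\left( \tilde{\chi }_{n,x}R_n\right) ^{m}f(x)-\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-m+1}\left( \tilde{\chi }_{n,x}R_n\right) ^{m-1}f(x), \end{aligned}$$

so that the sum over \(1\le m\le 6\) telescopes and the term \(m=1\) cancels the leading \(\left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2}f(x)\).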

And, using the assumptions \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\), \(\omega \in A_n\) and \(1\le k<\ell _{n+1}^2\), Control 2.3, Proposition 2.5, Proposition 3.6, (3.19) and (3.20), there exist \(C>0\) and \(c>0\) independent of n such that

$$\begin{aligned}&\left| \left( \tilde{\chi }_{n,x}\overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{\chi }_{n,x}R_n\right) ^6f(x)-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x)\right| \nonumber \\&\quad \le C\ell _{n+1}^2\ell _n^2e^{-c\kappa _{n+2}}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.31)

Therefore, following the sequence of inequalities (3.20), (3.23), (3.24), (3.26), (3.27), (3.29), (3.30) and (3.31), we have obtained an effective comparison between \((R_{n+1})^kf(x)\) and the quantity regularized by the heat kernel \((\overline{R}_n)^{k\ell _n^2-6}R_n^6f(x)\), which takes the form, for \(C>0\) and \(c>0\) independent of n,

$$\begin{aligned}&|\left( R_{n+1}\right) ^kf(x)-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x)|=|\left( R_n\right) ^{k\ell _n^2}f(x)-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x)| \nonumber \\&\quad \le C\ell _{n+1}^2\ell _n^2e^{-c\kappa _{n+2}}||f||_{L^\infty (\mathbb {R}^d)}+CL_n^{\beta -7(\delta -5a)}||f||_{C^{0,\beta }(\mathbb {R}^d)}+CL_n^{5a-\delta }||f||_{L^\infty (\mathbb {R}^d)}.\nonumber \\ \end{aligned}$$
(3.32)

In view of (2.17), (2.18), (2.19) and (2.23), there exists \(C>0\) independent of n such that, for all \(n\ge 0\),

$$\begin{aligned}\ell _{n+1}^2\ell _n^2e^{-c\kappa _{n+2}}\le CL_n^{\beta -7(\delta -5a)}\quad \text {and}\quad L_n^{5a-\delta }\le CL_n^{\beta -7(\delta -5a)}.\end{aligned}$$

And, since \(||f||_{L^\infty (\mathbb {R}^d)}\le ||f||_{C^{0,\beta }(\mathbb {R}^d)}\), we have, using (3.32), for \(C>0\) independent of n,

$$\begin{aligned} |\left( R_{n+1}\right) ^kf(x,\omega )-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x,\omega )|\le CL_n^{\beta -7(\delta -5a)}||f||_{C^{0,\beta }(\mathbb {R}^d)}. \end{aligned}$$
(3.33)

Finally, since \(\omega \in A_n\), using (3.18), Control 2.3, (3.17) and the fact that \(6\tilde{D}_n< L_{n+2}^2\), we have, for each \(y\in [-L_{n+2}^2,L_{n+2}^2]^d\),

$$\begin{aligned} |\left( R_n\right) ^6f(y,\omega )-\left( \tilde{R}_n\right) ^6f(y,\omega )|\le 6e^{-\kappa _{n}}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.34)

We write

$$\begin{aligned}&\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x)-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6f(x)\right| \\&\quad \le \left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\tilde{\chi }_{n,x}\left( \left( R_n\right) ^6-\left( \tilde{R}_n\right) ^6\right) f(x)\right| \\&\qquad +\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}(1-\tilde{\chi }_{n,x})\left( \left( R_n\right) ^6-\left( \tilde{R}_n\right) ^6\right) f(x)\right| \end{aligned}$$

and observe that, using Proposition 3.6, (3.19), (3.20), (3.22) and (3.34), since \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(1\le k<\ell _{n+1}^2\), for \(C_1>0\) and \(c_1>0\) independent of n,

$$\begin{aligned} \left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( R_n\right) ^6f(x)-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6f(x)\right| \le C_1e^{-c_1\kappa _n}||f||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$
(3.35)

Since \(||f||_{L^\infty (\mathbb {R}^d)}\le ||f||_{C^{0,\beta }(\mathbb {R}^d)}\) and since (2.17), (2.18), (2.19) and (2.23) imply that there exists \(C>0\) satisfying, for all \(n\ge 0\),

$$\begin{aligned}C_1e^{-c_1\kappa _n}\le C L_n^{\beta -7(\delta -5a)},\end{aligned}$$

we conclude that, in view of (3.33) and (3.35),

$$\begin{aligned} |\left( R_{n+1}\right) ^kf(x,\omega )-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6f(x,\omega )|\le CL_n^{\beta -7(\delta -5a)}||f||_{C^{0,\beta }(\mathbb {R}^d)}. \end{aligned}$$

Since \(n\ge 0\), \(1\le k<\ell _{n+1}^2\), \(\omega \in A_n\), \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) and \(f\in C^{0,\beta }(\mathbb {R}^d)\) were arbitrary, this completes the proof.

We are now prepared to provide the initial characterization of the invariant measure \(\pi :\mathcal {F}\rightarrow \mathbb {R}\). In view of Proposition 3.9, for each \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\), define

$$\begin{aligned} \pi _n(f)=\mathbb {E}\left( \left( \tilde{R}_n\right) ^6R_1f(0,\omega )\right) . \end{aligned}$$
(3.36)

The following two propositions prove that, for each \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2), the sequence \(\left\{ \pi _n(f)\right\} _{n=0}^\infty \) is Cauchy. Notice in particular that the rate of convergence depends only upon the \(L^\infty \)-norm of the initial condition.

Proposition 3.10

Assume (2.10) and (2.33). For each \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2), for \(C>0\) independent of n,

$$\begin{aligned} |\pi _{n+1}(f)-\pi _n(f)|\le CL_n^{\beta -7(\delta -5a)}||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$

Proof

Fix \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2). Since

$$\begin{aligned}||R_1f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\le ||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })},\end{aligned}$$

we have, for each \(\omega \in A_n\), using Control 2.3 and (3.17),

$$\begin{aligned} |\left( R_{n+1}\right) ^6R_1f(0,\omega )-\left( \tilde{R}_{n+1}\right) ^6R_1f(0,\omega )|\!\le \! 6e^{-\kappa _{n+1}} ||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}, \end{aligned}$$
(3.37)

and, using Proposition 3.2 and Proposition 3.9 for \(k=6\), for \(C>0\) independent of n and f,

$$\begin{aligned}&|\left( R_{n+1}\right) ^6R_1f(0,\omega )-\left( \overline{R}_n\right) ^{6\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_1f(0,\omega )|\nonumber \\&\quad \le CL_n^{\beta -7(\delta -5a)}||R_1f(x,\omega )||_{C^{0,\beta }(\mathbb {R}^d)} \nonumber \\&\quad \le CL_n^{\beta -7(\delta -5a)}||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$
(3.38)

Since (2.17), (2.18), (2.19) and (2.23) imply that there exists \(C>0\) satisfying, for each \(n\ge 0\),

$$\begin{aligned}e^{-\kappa _{n+1}}\le CL_n^{\beta -7(\delta -5a)},\end{aligned}$$

we have, for each \(\omega \in A_n\), in view of (3.37) and (3.38), for \(C>0\) independent of n,

$$\begin{aligned} \left| \left( \tilde{R}_{n+1}\right) ^6R_1f(0,\omega )-\left( \overline{R}_n\right) ^{6\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_1f(0,\omega )\right| \le CL_n^{\beta -7(\delta -5a)}||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$
(3.39)

Therefore, since Proposition 3.1, the stationarity guaranteed by (3.2) and Proposition 3.7 imply that, for each \(x\in \mathbb {R}^d\),

$$\begin{aligned} \mathbb {E}\left( \left( \overline{R}_n\right) ^{6\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_1f(x,\omega )\right) =\pi _n(f), \end{aligned}$$

and since (2.17) and (2.23) imply that, for each \(n\ge 0\),

$$\begin{aligned}L_{n}^{(2(1+a)^2-1)d-M_0}\le L_n^{\beta -7(\delta -5a)},\end{aligned}$$

by Proposition 3.8 and (3.39), for \(C>0\) independent of n,

$$\begin{aligned} |\pi _{n+1}(f)-\pi _n(f)|\le & {} CL_n^{\beta -7(\delta -5a)}||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}+2||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\mathbb {P}\left( {\varOmega }{\setminus } A_n\right) \\\le & {} CL_n^{\beta -7(\delta -5a)}||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}, \end{aligned}$$

which, since \(n\ge 0\) and \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) were arbitrary, completes the argument.

Proposition 3.11

Assume (2.10) and (2.33). For each \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2) there exists a unique \(\overline{\pi }(f)\in \mathbb {R}\) satisfying

$$\begin{aligned} \overline{\pi }(f)=\lim _{n\rightarrow \infty }\pi _n(f). \end{aligned}$$

Furthermore, for each \(n\ge 0\), for \(C>0\) independent of n and f,

$$\begin{aligned} |\pi _n(f)-\overline{\pi }(f)|\le C||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\sum _{m=n}^\infty L_m^{\beta -7(\delta -5a)}. \end{aligned}$$

Proof

In view of (2.17), (2.18) and (2.23), since \(\beta -7(\delta -5a)<0\), the ratio test implies that

$$\begin{aligned} \sum _{m=0}^\infty L_m^{\beta -7(\delta -5a)}<\infty . \end{aligned}$$
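To make the application explicit, with \(\gamma =\beta -7(\delta -5a)<0\), the relevant observation is that

$$\begin{aligned} \limsup _{m\rightarrow \infty }\frac{L_{m+1}^{\gamma }}{L_m^{\gamma }}=\limsup _{m\rightarrow \infty }\left( \frac{L_{m+1}}{L_m}\right) ^{\gamma }<1, \end{aligned}$$

which holds provided the ratios \(L_{m+1}/L_m\) are bounded below by a constant strictly greater than one, as the growth of the scales in (2.18) is used to guarantee.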

Since, for each \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) stationary in the sense of (3.2), Proposition 3.10 implies that the sequence \(\left\{ \pi _n(f)\right\} _{n=1}^\infty \) is Cauchy, there exists a unique \(\overline{\pi }(f)\in \mathbb {R}\) satisfying

$$\begin{aligned} \lim _{n\rightarrow \infty }\pi _n(f)=\overline{\pi }(f). \end{aligned}$$
(3.40)

Furthermore, for each \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2), the triangle inequality, Proposition 3.10 and (3.40) imply that, for each \(n\ge 0\), for \(C>0\) independent of n and f,

$$\begin{aligned} |\pi _n(f)-\overline{\pi }(f)|\le \sum _{m=n}^\infty |\pi _{m+1}(f)-\pi _m(f)|\le C||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\sum _{m=n}^\infty L_m^{\beta -7(\delta -5a)}, \end{aligned}$$

which completes the argument.

We now identify what is shown in the next section to be the unique invariant measure. For every \(E\in \mathcal {F}\), write \(\mathbf{{1}}_E:{\varOmega }\rightarrow \mathbb {R}\) for the indicator function of \(E\subset {\varOmega }\), and define \(f_E:\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}\) using the translation group \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\), see (2.2),

$$\begin{aligned} f_E(x,\omega )=\mathbf{{1}}_E(\tau _x\omega ). \end{aligned}$$
(3.41)

We define \(\pi :\mathcal {F}\rightarrow \mathbb {R}\), for each \(E\in \mathcal {F}\), by the rule

$$\begin{aligned} \pi (E)=\overline{\pi }(f_E), \end{aligned}$$
(3.42)

and prove now that \(\pi \) defines a probability measure on \(({\varOmega },\mathcal {F})\) which is absolutely continuous with respect to \(\mathbb {P}\).

Proposition 3.12

Assume (2.10) and (2.33). The function \(\pi :\mathcal {F}\rightarrow \mathbb {R}\) defined in (3.42) is a probability measure on \(({\varOmega },\mathcal {F})\) which is absolutely continuous with respect to \(\mathbb {P}\).

Proof

For each \(E\in \mathcal {F}\), since \(0\le f_E\le 1\) on \(\mathbb {R}^d\times {\varOmega }\), the comparison principle implies that, for each \(n\ge 0\),

$$\begin{aligned}0\le \pi _n(f_E)\le 1, \end{aligned}$$

and, therefore, for each \(E\in \mathcal {F}\),

$$\begin{aligned} 0\le \pi (E)=\overline{\pi }(f_E)\le 1. \end{aligned}$$
(3.43)

Furthermore, since \(f_{\varOmega }\) is identically one and \(f_\emptyset \) is identically zero, we have, for each \(n\ge 0\),

$$\begin{aligned} \pi _n(f_{\varOmega })=1\quad \text {and}\quad \pi _n(f_\emptyset )=0. \end{aligned}$$

Therefore,

$$\begin{aligned} \pi ({\varOmega })=1\quad \text {and}\quad \pi (\emptyset )=0. \end{aligned}$$
(3.44)

It remains to prove that \(\pi \) is countably additive and absolutely continuous.

Let \(\left\{ A_i\right\} _{i=1}^\infty \subset \mathcal {F}\) be a countable collection of disjoint subsets. Since, for each \(n\ge 0\) and \(1\le m\le \infty \),

$$\begin{aligned}\pi _n(f_{\bigcup _{i=1}^m A_i})=\int _{{\varOmega }}\int _{\hbox {C}([0,\infty );\mathbb {R}^d)}R_1f_{\bigcup _{i=1}^m A_i}(X_{6L_n^2\wedge T_n},\omega )\;dP_{0,\omega }d\mathbb {P},\end{aligned}$$

for the stopping time

$$\begin{aligned}T_n=\inf \left\{ s\ge 0\;|\; X_s\notin B_{6\tilde{D}_n}\right\} ,\end{aligned}$$

the dominated convergence theorem implies that, for each \(n\ge 0\), there exists \(k_n\ge n\) such that

$$\begin{aligned} \left| \pi _n(f_{\bigcup _{i=1}^\infty A_i})-\pi _n(f_{\bigcup _{i=1}^{k_n}A_i})\right| \le \frac{1}{n}. \end{aligned}$$
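Here, the dominated convergence theorem applies because, as \(m\rightarrow \infty \), the indicators \(\mathbf{{1}}_{\bigcup _{i=1}^mA_i}\) increase pointwise to \(\mathbf{{1}}_{\bigcup _{i=1}^\infty A_i}\), so that, for every \(\omega \in {\varOmega }\) and every \(X\in \hbox {C}([0,\infty );\mathbb {R}^d)\),

$$\begin{aligned}R_1f_{\bigcup _{i=1}^m A_i}(X_{6L_n^2\wedge T_n},\omega )\rightarrow R_1f_{\bigcup _{i=1}^\infty A_i}(X_{6L_n^2\wedge T_n},\omega )\quad \text {as}\quad m\rightarrow \infty ,\end{aligned}$$

with every term bounded in absolute value by one.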

Therefore, in view of Proposition 3.11, since each initial condition has \(L^\infty \)-norm at most one, the triangle inequality implies, for \(C>0\) independent of n,

$$\begin{aligned} \left| \pi \left( \bigcup _{i=1}^\infty A_i\right) -\pi \left( \bigcup _{i=1}^{k_n}A_i\right) \right| \le \frac{1}{n}+C\sum _{m=n}^\infty L_m^{\beta -7(\delta -5a)}. \end{aligned}$$
(3.45)

Furthermore, since the \(\left\{ A_i\right\} _{i=1}^\infty \) are disjoint, for each \(n\ge 0\),

$$\begin{aligned} \pi \left( \bigcup _{i=1}^{k_n}A_i\right) =\sum _{i=1}^{k_n}\pi (A_i). \end{aligned}$$
(3.46)

Therefore, in view of (3.45) and (3.46), since we choose \(k_n\ge n\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\left| \pi \left( \bigcup _{i=1}^\infty A_i\right) -\pi \left( \bigcup _{i=1}^{k_n}A_i\right) \right|= & {} \lim _{n\rightarrow \infty }\left| \pi \left( \bigcup _{i=1}^\infty A_i\right) -\sum _{i=1}^{k_n}\pi \left( A_i\right) \right| \\= & {} \left| \pi \left( \bigcup _{i=1}^\infty A_i\right) -\sum _{i=1}^\infty \pi (A_i)\right| =0, \end{aligned}$$

which, since the family \(\left\{ A_i\right\} _{i=1}^\infty \) was arbitrary, completes the proof of countable additivity.

We now prove the absolute continuity. We first show that whenever \(E\in \mathcal {F}\) satisfies \(\mathbb {P}(E)=0\) we have \(R_1f_E(x,\omega )=0\) on \(\mathbb {R}^d\) for almost every \(\omega \in {\varOmega }\). To do so, recall that there exists a density \(p(x,1,y,\omega )\) satisfying for each \(x\in \mathbb {R}^d\), \(\omega \in {\varOmega }\) and \(E\in \mathcal {F}\),

$$\begin{aligned} R_1f_E(x,\omega )=\int _{\mathbb {R}^d}p(x,1,y,\omega )f_E(y,\omega )\;dy. \end{aligned}$$

Furthermore, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\), the probability measure defined by \(p(x,1,y,\omega )\;dy\) on \(\mathbb {R}^d\) is equivalent to Lebesgue measure. See, for instance [4, Chapter 1, Theorem 11].

Fix \(E\in \mathcal {F}\) satisfying \(\mathbb {P}(E)=0\). Then, for each \(x\in \mathbb {R}^d\), using (2.2) and \(\mathbb {P}(E)=0\), by Fubini’s theorem

$$\begin{aligned} \mathbb {E}\left( R_1f_E(x,\omega )\right)= & {} \int _{{\varOmega }}\int _{\mathbb {R}^d}p(x,1,y,\omega )\mathbf{{1}}_E(\tau _y\omega )\;dyd\mathbb {P}\\= & {} \int _{\mathbb {R}^d}\int _{{\varOmega }}p(x,1,y,\omega )\mathbf{{1}}_E(\tau _y\omega )\;d\mathbb {P}dy=0, \end{aligned}$$

since \(\mathbf{{1}}_E(\tau _y\omega )=0\) almost everywhere in \({\varOmega }\) for every \(y\in \mathbb {R}^d\). Therefore, Fubini’s theorem implies that, for every \(x\in \mathbb {R}^d\), there exists a subset \(A_x\subset {\varOmega }\) of full probability such that, for every \(\omega \in A_x\),

$$\begin{aligned}R_1f_E(x,\omega )=0.\end{aligned}$$

Define the subset of full probability

$$\begin{aligned}A=\bigcap _{x\in \mathbb {Q}^d} A_x,\end{aligned}$$

and observe that, for each \(\omega \in A\) and \(x\in \mathbb {Q}^d\),

$$\begin{aligned} R_1f_E(x,\omega )=0. \end{aligned}$$

Since Proposition 3.2 implies that, for every \(\omega \in {\varOmega }\), we have \(R_1f_E(x,\omega )\in C^{0,\beta }(\mathbb {R}^d)\), we conclude that, for every \(x\in \mathbb {R}^d\) and \(\omega \in A\),

$$\begin{aligned}R_1f_E(x,\omega )=0,\end{aligned}$$

and, therefore, for every \(\omega \in A\) and \(n\ge 0\),

$$\begin{aligned} \left( \tilde{R}_n\right) ^6R_1f_E(0,\omega )=0. \end{aligned}$$

Since \(\mathbb {P}(A)=1\), this implies that, for each \(n\ge 0\), \(\pi _n(f_E)=0\) and, therefore, that \(\pi (E)=0\). Since \(E\in \mathcal {F}\) satisfying \(\mathbb {P}(E)=0\) was arbitrary, this completes the argument.

In the final proposition of this section, we prove that for each \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2), the constant \(\overline{\pi }(f)\) characterizes the integral of \(f(0,\omega )\) with respect to \(\pi \). This is essentially an immediate consequence of the definition of \(\pi \) and the fact that the kernels \(R_t\) do not increase the \(L^\infty \)-norm of the initial data.

Proposition 3.13

Assume (2.10) and (2.33). For every \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2),

$$\begin{aligned} \overline{\pi }(f)=\int _{{\varOmega }}f(0,\omega )\;d\pi . \end{aligned}$$

Proof

We recall, for every \(E\in \mathcal {F}\), the definition \(f_E(x,\omega )=\mathbf{{1}}_E(\tau _x\omega )\), from which it follows immediately, by the definitions of \(\overline{\pi }\) and \(\pi \), that

$$\begin{aligned} \overline{\pi }(f_E)=\pi (E)=\int _{{\varOmega }}f_E(0,\omega )\;d\pi =\int _{{\varOmega }}\mathbf{{1}}_E(\omega )\;d\pi . \end{aligned}$$
(3.47)

And, since, for every \(t\ge 0\), \(n\ge 0\), \(\omega \in {\varOmega }\) and \(f\in L^\infty (\mathbb {R}^d)\),

$$\begin{aligned} ||R_tf(x,\omega )||_{L^\infty (\mathbb {R}^d)}\le ||f||_{L^\infty (\mathbb {R}^d)}\quad \text {and}\quad ||\tilde{R}_nf(x,\omega )||_{L^\infty (\mathbb {R}^d)}\le ||f||_{L^\infty (\mathbb {R}^d)}, \end{aligned}$$

we have, for each \(n\ge 0\) and \(f,g\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2),

$$\begin{aligned} |\pi _n(f-g)|=|\pi _n(f)-\pi _n(g)|\le ||f-g||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$

Therefore, for every pair \(f,g\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2),

$$\begin{aligned} |\overline{\pi }(f)-\overline{\pi }(g)|\le ||f-g||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$
(3.48)

The claim now follows from (3.47), (3.48) and the definition of the Lebesgue integral.
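To make the final step explicit, here is a minimal sketch, under the assumption that the stationarity (3.2) takes the form \(f(x,\omega )=f(0,\tau _x\omega )\). Choose simple functions \(g_k(0,\omega )=\sum _jc_{j,k}\mathbf{{1}}_{E_{j,k}}(\omega )\) converging uniformly on \({\varOmega }\) to \(f(0,\omega )\), and define \(g_k(x,\omega )=\sum _jc_{j,k}f_{E_{j,k}}(x,\omega )\), so that \(||f-g_k||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\rightarrow 0\). Then, by (3.47) and the linearity of \(\pi _n\) and \(\overline{\pi }\),

$$\begin{aligned} \overline{\pi }(g_k)=\sum _jc_{j,k}\pi (E_{j,k})=\int _{{\varOmega }}g_k(0,\omega )\;d\pi , \end{aligned}$$

and (3.48), together with the corresponding estimate for the integral against the probability measure \(\pi \), allows passage to the limit \(k\rightarrow \infty \) on both sides.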

4 The proof of invariance and uniqueness

In this section, we prove that the measure \(\pi \) defined in (3.42) is the unique invariant measure which is absolutely continuous with respect to \(\mathbb {P}\). Furthermore, \(\pi \) is mutually absolutely continuous with respect to \(\mathbb {P}\) and defines an ergodic probability measure for the canonical Markov process on \({\varOmega }\) defining (1.6). We observe that, for each \(t\ge 0\), \(\omega \in {\varOmega }\) and \(E\in \mathcal {F}\), for \(P_t(\omega ,E)\) defined below and as in (1.6),

$$\begin{aligned} R_tf_E(0,\omega )=P_{0,\omega }(\tau _{X_t}\omega \in E)=P_t(\omega , E). \end{aligned}$$

In order to prove invariance, therefore, it suffices to prove that, for each \(t\ge 0\) and \(E\in \mathcal {F}\),

$$\begin{aligned} \pi (E)=\int _{{\varOmega }}R_tf_E(0,\omega )\;d\pi . \end{aligned}$$

See Proposition 4.7.

To exploit the finite range dependence we define, for each \(R>0\), \(t\ge 1\) and \(\omega \in {\varOmega }\), the localized kernels

$$\begin{aligned} \tilde{R}_{t,R}f(x,\omega )=\tilde{u}_R(x,t,\omega ), \end{aligned}$$
(4.1)

for \(\tilde{u}_R:\overline{B}_R(x)\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll}\tilde{u}_{R,t}=\frac{1}{2}{{\mathrm{tr}}}(A(y,\omega )D^2\tilde{u}_R)+b(y,\omega )\cdot D\tilde{u}_R &{}\quad \text {on}\quad B_R(x)\times (0,\infty ), \\ \tilde{u}_R=f &{}\quad \text {on}\quad B_R(x)\times \left\{ 0\right\} \cup \partial B_R(x)\times [0,\infty ).\end{array}\right. \end{aligned}$$
(4.2)

The following proposition controls the error we make due to this localization. And, in contrast to Control 2.3, we obtain this control globally for \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\) at the cost of an effective length scale which is significantly larger than that appearing in Control 2.3. That is, this control is effective at length scale approximately t whereas Control 2.3 is effective at length scale approximately \(\sqrt{t}\).

Proposition 4.1

Assume (2.10). For each \(x\in \mathbb {R}^d\), \(t\ge 1\), \(\omega \in {\varOmega }\) and \(R>0\), for \(C>0\) independent of x, t, \(\omega \) and R, for every \(f\in L^\infty (\mathbb {R}^d)\),

$$\begin{aligned}|R_tf(x,\omega )-\tilde{R}_{t,R}f(x,\omega )|\le ||f||_{L^\infty (\mathbb {R}^d)}e^{-\frac{(R-Ct)_+^2}{Ct}}\end{aligned}$$

Proof

Fix \(R>0\), \(t\ge 1\), \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\). Introduce the stopping time \(T_R:\hbox {C}([0,\infty );\mathbb {R}^d)\rightarrow \mathbb {R}\) defined by

$$\begin{aligned} T_R=\inf \left\{ s\ge 0\;|\;X_s\notin B_R(x)\right\} . \end{aligned}$$

Then, for each \(f\in L^\infty (\mathbb {R}^d)\), for \(X^*_t=\sup _{0\le s\le t}|X_s-X_0|\) defined in (2.30),

$$\begin{aligned} |R_tf(x,\omega )-\tilde{R}_{t,R}f(x,\omega )|\le ||f||_{L^\infty (\mathbb {R}^d)}P_{x,\omega }\left( X^*_t\ge R\right) . \end{aligned}$$
(4.3)

We recall that, almost surely with respect to \(P_{x,\omega }\), for \(B_s\) a Brownian motion on \(\mathbb {R}^d\) under \(P_{x,\omega }\) with respect to the canonical right-continuous filtration on \(\hbox {C}([0,\infty );\mathbb {R}^d)\), paths \(X_s\in \hbox {C}([0,\infty );\mathbb {R}^d)\) satisfy the stochastic differential equation

$$\begin{aligned} \left\{ \begin{array}{l} dX_s=b(X_s,\omega )ds+\sigma (X_s,\omega )dB_s, \\ X_0=x.\end{array}\right. \end{aligned}$$

Therefore, using the exponential inequality for martingales, see Revuz and Yor [10, Chapter 2, Proposition 1.8], and (2.4) and (2.5), for every \(\tilde{R}\ge 0\), for \(C>0\) independent of \(\tilde{R}\), t, x and \(\omega \),

$$\begin{aligned} P_{x,\omega }\left( X^*_t\ge \tilde{R}+Ct\right) \le e^{-\frac{\tilde{R}^2}{Ct}}. \end{aligned}$$
(4.4)
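For orientation, the estimate (4.4) can be obtained from the following decomposition, sketched under the assumption, consistent with the appeal to (2.4) and (2.5), that the drift and the diffusion matrix are bounded uniformly on \(\mathbb {R}^d\times {\varOmega }\): writing \(M_s=\int _0^s\sigma (X_r,\omega )\;dB_r\),

$$\begin{aligned} X^*_t=\sup _{0\le s\le t}|X_s-X_0|\le \sup _{0\le s\le t}\left| \int _0^sb(X_r,\omega )\;dr\right| +\sup _{0\le s\le t}\left| M_s\right| \le Ct+\sup _{0\le s\le t}\left| M_s\right| . \end{aligned}$$

Each coordinate of \(M_s\) is a martingale with quadratic variation at most \(Cs\), so the exponential inequality bounds \(P_{x,\omega }\left( \sup _{0\le s\le t}|M_s|\ge \tilde{R}\right) \) by \(2d\,e^{-\tilde{R}^2/Ct}\), and, since \(t\ge 1\), the dimensional constant may be absorbed by enlarging the constant appearing in the shift \(Ct\).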

Therefore, by choosing \(\tilde{R}=(R-Ct)_+\) in (4.4), we conclude in view of (4.3) that, for \(C>0\) independent of x, t, \(\omega \) and R,

$$\begin{aligned} |R_tf(x,\omega )-\tilde{R}_{t,R}f(x,\omega )|\le ||f||_{L^\infty (\mathbb {R}^d)}e^{-\frac{(R-Ct)_+^2}{Ct}}, \end{aligned}$$

which, since x, t, \(\omega \) and R were arbitrary, completes the argument.

We define, for each subset \(A\subset \mathbb {R}^d\), the sub sigma algebra of \(\mathcal {F}\)

$$\begin{aligned} \sigma _A=\sigma \left( A(x,\omega ), b(x,\omega )\;|\;x\in A\right) . \end{aligned}$$
(4.5)

The following proposition uses stationarity, see (2.3), to describe the interaction between the transformation group \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\) and the sigma algebras \(\sigma _A\).

Proposition 4.2

Assume (2.10). For every subset \(A\subset \mathbb {R}^d\) and \(y\in \mathbb {R}^d\),

$$\begin{aligned} \tau _y\left( \sigma _A\right) =\left\{ \tau _y(B)\;|\;B\in \sigma _A\right\} =\sigma _{A-y}. \end{aligned}$$

Proof

Fix \(A\subset \mathbb {R}^d\). We identify \(\mathcal {S}(d)\) with \(\mathbb {R}^{(d+1)d/2}\) and write \(\mathcal {B}_d\) and \(\mathcal {B}_{(d+1)d/2}\) for the Borel sigma algebras on \(\mathbb {R}^d\) and \(\mathbb {R}^{(d+1)d/2}\) respectively. Observe that \(\sigma _A\) is generated by sets of the form, for fixed \(x\in A\), \(B_d\in \mathcal {B}_d\) and \(B_{(d+1)d/2}\in \mathcal {B}_{(d+1)d/2}\),

$$\begin{aligned} b(x,\omega )^{-1}(B_d)=\left\{ \omega \in {\varOmega }\;|\;b(x,\omega )\in B_d\right\} , \end{aligned}$$
(4.6)

and

$$\begin{aligned} A(x,\omega )^{-1}(B_{(d+1)d/2})=\left\{ \omega \in {\varOmega }\;|\;A(x,\omega )\in B_{(d+1)d/2}\right\} . \end{aligned}$$
(4.7)

Furthermore, since the group \( \{\tau _y \}_{y\in \mathbb {R}^d}\) is composed of invertible, measure-preserving transformations, for every fixed \(y\in \mathbb {R}^d\),

$$\begin{aligned} \tau _y\left( \sigma _A\right) =\left\{ \tau _yB\;|\;B\in \sigma _A\right\} , \end{aligned}$$

is a sigma algebra generated by sets of the form, for fixed \(x\in A\), \(B_d\in \mathcal {B}_d\) and \(B_{(d+1)d/2}\in \mathcal {B}_{(d+1)d/2}\),

$$\begin{aligned} \tau _y\left( b(x,\omega )^{-1}(B_d)\right) \quad \text {and}\quad \tau _y\left( A(x,\omega )^{-1}(B_{(d+1)d/2})\right) . \end{aligned}$$
(4.8)

And, in view of the stationarity (2.3), for each \(y\in \mathbb {R}^d\), \(x\in A\), \(B_d\in \mathcal {B}_d\) and \(B_{(d+1)d/2}\in \mathcal {B}_{(d+1)d/2}\),

$$\begin{aligned} \tau _y\left( b(x,\omega )^{-1}(B_d)\right) =b(x,\tau _{-y}\omega )^{-1}(B_d)=b(x-y,\omega )^{-1}(B_d), \end{aligned}$$
(4.9)

and

$$\begin{aligned} \tau _y\left( A(x,\omega )^{-1}(B_{(d+1)d/2})\right) =A(x,\tau _{-y}\omega )^{-1}(B_{(d+1)d/2})=A(x-y,\omega )^{-1}(B_{(d+1)d/2}). \end{aligned}$$
(4.10)

We therefore conclude, using (4.6), (4.7), (4.8), (4.9) and (4.10), for each \(y\in \mathbb {R}^d\),

$$\begin{aligned} \tau _y\left( \sigma _A\right) =\tau _y\sigma \left( A(x-y,\omega ), b(x-y,\omega )\;|\;x\in A \right) =\sigma _{A-y}. \end{aligned}$$
(4.11)

Since \(A\subset \mathbb {R}^d\) was arbitrary, this completes the argument.

We will later use the fact that

$$\begin{aligned} \mathcal {F}=\sigma \left( A(x,\omega ), b(x,\omega )\;|\;x\in \mathbb {R}^d\right) =\sigma \left( \bigcup _{R>0}\sigma _{B_R}\right) . \end{aligned}$$

This will allow us to obtain our general statements by first considering measurable subsets \(E\subset {\varOmega }\) in the algebra of subsets \(\cup _{R>0}\sigma _{B_R}\), for which the next proposition shows that the finite range dependence (2.7) can be applied effectively.

Proposition 4.3

Assume (2.10) and (2.33). Suppose that, for \(R_1>0\), \(E\in \mathcal {F}\) satisfies \(E\in \sigma _{B_{R_1}}\). For each \(x\in \mathbb {R}^d\), \(t\ge 1\) and \(R_2>0\), see (3.41) and (4.1),

$$\begin{aligned} \sigma \left( \tilde{R}_{t,R_2}f_E(x,\omega )\right) \subset \sigma _{B_{R_2+R_1}(x)}. \end{aligned}$$

Proof

Fix \(E\in \mathcal {F}\) and \(R_1>0\) satisfying \(E\in \sigma _{B_{R_1}}\), \(x\in \mathbb {R}^d\), \(t\ge 1\) and \(R_2>0\). In view of the definition (4.1),

$$\begin{aligned} \sigma \left( \tilde{R}_{t,R_2}f_E(x,\omega )\right) \subset \sigma \left( A(y,\omega ), b(y,\omega ), f_E(y,\omega )\;|\;y\in B_{R_2}(x)\right) . \end{aligned}$$
(4.12)

And, since \(f_E(y,\omega )=\mathbf{{1}}_E(\tau _y\omega )\) and using Proposition 4.2, since \(E\in \sigma _{B_{R_1}}\), for every \(y\in \mathbb {R}^d\),

$$\begin{aligned} \sigma (f_E(y,\omega ))=\left\{ \tau _{-y}(E), {\varOmega }{\setminus }\tau _{-y}\left( E\right) \right\} \subset \tau _{-y}(\sigma _{B_{R_1}})=\sigma _{B_{R_1}+y}. \end{aligned}$$
(4.13)

Therefore, in view of (4.12) and (4.13),

$$\begin{aligned} \sigma \left( \tilde{R}_{t,R_2}f_E(x,\omega )\right) \subset \sigma \left( \bigcup _{y\in B_{R_2}(x)}\sigma _{B_{R_1}+y}\right) =\sigma _{B_{R_2}(x)+B_{R_1}}=\sigma _{B_{R_2+R_1}(x)}, \end{aligned}$$

which, since \(R_1\), \(E\in \sigma _{B_{R_1}}\), x, t, and \(R_2\) were arbitrary, completes the argument.

Recall that in Proposition 3.9, with high probability, we obtained an effective comparison between the kernels

$$\begin{aligned} R_{n+1}\quad \text {and}\quad \overline{R}_n^{\ell _n^2-6}R_n^6, \end{aligned}$$

where, in view of Theorem 2.1, the expectation is that the presence of the heat kernel will result in significant averaging for appropriate initial data. The following proposition quantifies the effect of this averaging.

Proposition 4.4

Assume (2.10) and (2.33). Suppose that, for \(R_1>0\), \(E\in \mathcal {F}\) satisfies \(E\in \sigma _{B_{R_1}}\). For each \(n\ge 0\), \(1\le k<\ell _{n+1}^2\) and \(t\ge 0\) there exists \(C=C(t,R_1)>0\) independent of E, n and k, and there exists \(\zeta >0\) independent of \(R_1\), E, n, k and t, such that

$$\begin{aligned} \mathbb {P}\left( \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right| >\frac{C}{\tilde{\kappa }_n}\right) \le Ck^{-(1+\zeta )}L_n^{-\zeta }. \end{aligned}$$

Proof

Fix \(E\in \mathcal {F}\) and \(R_1>0\) satisfying \(E\in \sigma _{B_{R_1}}\), \(t\ge 0\), \(n\ge 0\) and \(1\le k<\ell _{n+1}^2\). We define

$$\begin{aligned} R_2=6\tilde{D}_n, \end{aligned}$$
(4.14)

and observe that, in view of Proposition 4.1, for every \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\), for \(C_1>0\) independent of n, k, x, t and \(\omega \),

$$\begin{aligned} \left| R_{1+t}f_E(x,\omega )-\tilde{R}_{1+t,R_2}f_E(x,\omega )\right|\le & {} ||f_E||_{L^\infty (\mathbb {R}^d\times {\varOmega })}e^{-\frac{(6\tilde{D}_n-C_1(1+t))_+^2}{C_1(1+t)}}\nonumber \\\le & {} e^{-\frac{(6\tilde{D}_n-C_1(1+t))_+^2}{C_1(1+t)}}. \end{aligned}$$
(4.15)

In order to obtain better localization properties, we consider the quantity

$$\begin{aligned} \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{1+t,R_2}f_E(0,\omega )=\int _{\mathbb {R}^d}p_{n,k}(y)\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(y,\omega )\;dy, \end{aligned}$$
(4.16)

where, for each \(y\in \mathbb {R}^d\),

$$\begin{aligned}p_{n,k}(y)=(4\pi \alpha _n(kL_{n+1}^2-6L_n^2))^{-d/2}e^{-|y|^2/4\alpha _n (kL_{n+1}^2-6L_n^2)}.\end{aligned}$$

Here, observe that Theorem 2.1 implies, for \(C>0\) independent of n,

$$\begin{aligned} ||p_{n,k}(y)||_{L^\infty (\mathbb {R}^d)}\le Ck^{-d/2}L_{n+1}^{-d}. \end{aligned}$$
(4.17)
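Explicitly, the bound (4.17) is the value of the Gaussian density at its peak: assuming, as the appeal to Theorem 2.1 suggests, that the \(\alpha _n\) are bounded uniformly from below, and using that \(kL_{n+1}^2-6L_n^2\ge \frac{1}{2}kL_{n+1}^2\) for every \(k\ge 1\) once \(L_{n+1}^2\ge 12L_n^2\),

$$\begin{aligned} ||p_{n,k}(y)||_{L^\infty (\mathbb {R}^d)}=\left( 4\pi \alpha _n\left( kL_{n+1}^2-6L_n^2\right) \right) ^{-d/2}\le \left( 2\pi \alpha _nkL_{n+1}^2\right) ^{-d/2}\le Ck^{-d/2}L_{n+1}^{-d}. \end{aligned}$$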

We now define, for each \(n\ge 0\),

$$\begin{aligned} \tilde{\pi }_n(R_tf_E)=\mathbb {E}\left( \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{1+t,R_2}f_E(0,\omega )\right) =\mathbb {E}\left( \left( \tilde{R}_n\right) ^6\tilde{R}_{1+t,R_2}f_E(0,\omega )\right) , \end{aligned}$$
(4.18)

and observe, in view of (4.15), that, for each \(n\ge 0\),

$$\begin{aligned} |\pi _n(R_tf_E)-\tilde{\pi }_n(R_tf_E)|\le e^{-\frac{(6\tilde{D}_n-C_1(1+t))_+^2}{C_1(1+t)}}. \end{aligned}$$
(4.19)

Furthermore, using the stationarity (2.3), and since \(f_E\) is stationary in the sense of (3.2), we have, for each \(x\in \mathbb {R}^d\),

$$\begin{aligned} \tilde{\pi }_n(R_tf_E)=\mathbb {E}\left( \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{1+t,R_2}f_E(x,\omega )\right) =\mathbb {E}\left( \left( \tilde{R}_n\right) ^6\tilde{R}_{1+t, R_2}f_E(x,\omega )\right) . \end{aligned}$$
(4.20)

The definition of \(\tilde{R}_n\) in (3.17) and the choice of \(R_2=6\tilde{D}_n\) in (4.14) imply that, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )=\tilde{R}_{6L_n^2+1+t,R_2}f_E(x,\omega ), \end{aligned}$$

and, using Proposition 4.3, for each \(x\in \mathbb {R}^d\),

$$\begin{aligned} \sigma \left( \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )\right) \subset \sigma _{B_{R_2+R_1}(x)}=\sigma _{B_{6\tilde{D}_n+R_1}(x)}. \end{aligned}$$
(4.21)

Therefore, for \(R>0\) as in (2.7), whenever \(x,y\in \mathbb {R}^d\) satisfy \(|x-y|\ge 12\tilde{D}_n+2R_1+R,\) the random variables

$$\begin{aligned} \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )\quad \text {and}\quad \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(y,\omega )\quad \text {are independent.} \end{aligned}$$
(4.22)

We now write \(\tilde{\pi }_n=\tilde{\pi }_n(R_tf_E)\) and compute the variance

$$\begin{aligned}&\mathbb {E}\left( \left( \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(0,\omega )-\tilde{\pi }_n\right) ^2\right) \nonumber \\&\quad =\mathbb {E}\left( \int _{\mathbb {R}^d}\int _{\mathbb {R}^d}p_{n,k}(y)p_{n,k}(z)\left( \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(y,\omega )-\tilde{\pi }_n\right) \right. \nonumber \\&\qquad \times \left. \left( \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(z,\omega )-\tilde{\pi }_n\right) \;dydz\right) . \end{aligned}$$
(4.23)

Since there exists \(C=C(R_1)>0\) such that, for all \(n\ge 0\),

$$\begin{aligned} C\tilde{D}_n\ge 12\tilde{D}_n+2R_1+R, \end{aligned}$$

and since, for each \(y\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} -1\le \left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(y,\omega )-\tilde{\pi }_n\le 1, \end{aligned}$$

we have, in view of (4.22), for \(C=C(R_1)>0\) independent of n, k and t,

$$\begin{aligned} \mathbb {E}\left( \left( \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(0,\omega )-\tilde{\pi }_n\right) ^2\right) \le \int _{\mathbb {R}^d}\int _{B_{C\tilde{D}_n}(y)}p_{n,k}(y)p_{n,k}(z)\;dzdy. \end{aligned}$$
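Here, the restriction of the inner integral reflects the independence (4.22): whenever \(|y-z|\ge C\tilde{D}_n\ge 12\tilde{D}_n+2R_1+R\), the two centered factors in (4.23) are independent and the expectation of their product vanishes, while on the remaining region the product is bounded in absolute value by one. Furthermore, since \(\int _{\mathbb {R}^d}p_{n,k}(y)\;dy=1\),

$$\begin{aligned} \int _{\mathbb {R}^d}\int _{B_{C\tilde{D}_n}(y)}p_{n,k}(y)p_{n,k}(z)\;dzdy\le \left| B_{C\tilde{D}_n}\right| \,||p_{n,k}||_{L^\infty (\mathbb {R}^d)}\int _{\mathbb {R}^d}p_{n,k}(y)\;dy\le C\tilde{D}_n^d\,||p_{n,k}||_{L^\infty (\mathbb {R}^d)}. \end{aligned}$$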

Therefore, using (4.17), for \(C=C(R_1)>0\) independent of n, k and t,

$$\begin{aligned} \mathbb {E}\left( \left( \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(0,\omega )-\tilde{\pi }_n\right) ^2\right) \le Ck^{-d/2}L_{n+1}^{-d}\tilde{D}_n^d\le Ck^{-d/2}\tilde{\kappa }_{n}^d\ell _n^{-d}, \end{aligned}$$
(4.24)

and, together with (4.24), Chebyshev’s inequality implies that, for \(C=C(R_1)>0\) independent of n, k and t,

$$\begin{aligned} \mathbb {P}\left( \left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(0,\omega )-\tilde{\pi }_n\right| \ge 1/\tilde{\kappa }_n\right) \le Ck^{-d/2}\tilde{\kappa }_{n}^{d+2}\ell _n^{-d}. \end{aligned}$$
(4.25)

We will now extend a version of this estimate to the whole of \(B_{4\sqrt{k}\tilde{D}_{n+1}}\).

Fix \(0<\gamma <1\) satisfying, in view of (2.16) and (2.17),

$$\begin{aligned} \frac{1}{1+a}<\gamma <1,\quad \text {which implies}\quad \frac{2}{d}<\gamma <1. \end{aligned}$$
(4.26)

Since \(f_E\) is stationary in the sense of (3.2), the stationarity (2.3) and (4.20) imply that, for \(C=C(R_1)>0\) independent of n, k, and t,

$$\begin{aligned}&\mathbb {P}\left( \sup _{x\in \left( \sqrt{k}L_{n+1}\right) ^\gamma \mathbb {Z}^d\cap B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )-\tilde{\pi }_n\right| \ge 1/\tilde{\kappa }_n\right) \\&\quad \le C\left( \frac{4\sqrt{k}\tilde{D}_{n+1}}{\left( \sqrt{k}L_{n+1}\right) ^\gamma }\right) ^dk^{-d/2}\tilde{\kappa }_{n}^{d+2}\ell _n^{-d}\le C\tilde{\kappa }_{n+1}^d\tilde{\kappa }_n^{d+2}k^{-\gamma d/2}L_n^{(1-(1+a)\gamma )d}, \end{aligned}$$

where we observe that (4.26) implies \(-\gamma d/2<-1\) and \((1-(1+a)\gamma )d<0\). Therefore, in view of (2.18) and (2.19), there exist \(\zeta >0\) and \(C=C(R_1)>0\) independent of n, k, and t such that

$$\begin{aligned}&\mathbb {P}\left( \sup _{x\in \left( \sqrt{k}L_{n+1}\right) ^\gamma \mathbb {Z}^d\cap B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )-\tilde{\pi }_n\right| \ge 1/\tilde{\kappa }_n\right) \nonumber \\&\quad \le Ck^{-(1+\zeta )}L_n^{-\zeta }. \end{aligned}$$
(4.27)

Using Theorem 2.1, Proposition 3.1 and (4.16), for each \(\omega \in {\varOmega }\) and \(x\in \mathbb {R}^d\), for \(C>0\) independent of n, k, t and \(R_1\),

$$\begin{aligned} ||D_x\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )||_{L^\infty (\mathbb {R}^d)} \le \int _{\mathbb {R}^d}\left| D_yp_{n,k}(y)\right| \;dy\le Ck^{-1/2}L_{n+1}^{-1}. \end{aligned}$$
(4.28)

Since, for each \(x\in B_{4\sqrt{k}\tilde{D}_{n+1}}\) there exists \(y\in (\sqrt{k}L_{n+1})^\gamma \mathbb {Z}^d\cap B_{4\sqrt{k}\tilde{D}_{n+1}}\) satisfying \(|x-y|\le C(\sqrt{k}L_{n+1})^\gamma \) for \(C>0\) independent of n, we conclude that, in view of (4.28),

$$\begin{aligned}&\sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )-\tilde{\pi }_n\right| \nonumber \\&\qquad \le \sup _{x\in \left( \sqrt{k}L_{n+1}\right) ^\gamma \mathbb {Z}^d\cap B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )\right. \nonumber \\&\qquad \quad \left. -\tilde{\pi }_n\right| +C\left( \sqrt{k}L_{n+1}\right) ^{\gamma -1}. \end{aligned}$$
(4.29)

Because (2.18) and (2.19) imply that there exists \(C>0\) independent of n such that, for all \(n\ge 0\) and \(k\ge 1\),

$$\begin{aligned} \left( \sqrt{k}L_{n+1}\right) ^{\gamma -1}\le C/\tilde{\kappa }_n, \end{aligned}$$
(4.30)

for \(C=C(R_1)>0\) independent of n, k, and t, using (4.27), (4.29) and (4.30),

$$\begin{aligned} \mathbb {P}\left( \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}|\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6\tilde{R}_{t+1,R_2}f_E(x,\omega )-\tilde{\pi }_n|>\frac{C}{\tilde{\kappa }_n}\right) \le Ck^{-(1+\zeta )}L_n^{-\zeta }. \end{aligned}$$
(4.31)

Finally, since (2.19) and (2.20) imply that there exists \(C=C(t)>0\) such that, for \(C_1>0\) as in (4.15), for all \(n\ge 0\),

$$\begin{aligned} e^{-\frac{(6\tilde{D}_n-C_1(1+t))_+^2}{C_1(1+t)}}\le C/\tilde{\kappa }_n, \end{aligned}$$

we conclude in view of (4.15), (4.19) and (4.31) that there exists \(C=C(R_1,t)>0\) independent of n and k such that

$$\begin{aligned} \mathbb {P}\left( \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}|\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_{t+1}f_E(x,\omega )-\pi _n(R_tf_E)|>\frac{C}{\tilde{\kappa }_n}\right) \le Ck^{-(1+\zeta )}L_n^{-\zeta }, \end{aligned}$$

which, since E, \(R_1\), n, k and t were arbitrary, completes the argument.

The following proposition is essentially a restatement of Proposition 3.9 best suited to our current circumstances, where we recall the definition of the subsets \(\left\{ A_n\right\} _{n=0}^\infty \) in (3.18).

Proposition 4.5

Assume (2.10) and (2.33). For every \(E\in \mathcal {F}\), \(n\ge 0\), \(1\le k<\ell _{n+1}^2\), \(t\ge 0\) and \(\omega \in A_n\), for \(C>0\) independent of E, n, k, t and \(\omega \),

$$\begin{aligned}\sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}|\left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_{1+t}f_E(x,\omega )|\le CL_n^{\beta -7(\delta -5a)}. \end{aligned}$$

Proof

Fix \(E\in \mathcal {F}\), \(n\ge 0\), \(1\le k<\ell _{n+1}^2\) and \(\omega \in A_n\). In view of Proposition 3.2, there exists \(C>0\) independent of E, n, k, t and \(\omega \) such that

$$\begin{aligned}||R_{1+t}f_E||_{C^{0,\beta }(\mathbb {R}^d)}\le C ||f_E||_{L^\infty (\mathbb {R}^d)}\le C.\end{aligned}$$

Therefore, since \(\omega \in A_n\), Proposition 3.9 implies that, for \(C>0\) independent of E, n, k and \(\omega \),

$$\begin{aligned}&\sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}|\left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\left( \overline{R}_n\right) ^{k\ell _n^2-6}\left( \tilde{R}_n\right) ^6R_{1+t}f_E(x,\omega )|\\&\qquad \le CL_n^{\beta -7(\delta -5a)}||R_{1+t}f_E||_{C^{0,\beta }(\mathbb {R}^d)}\le CL_n^{\beta -7(\delta -5a)}, \end{aligned}$$

which, since E, n, k, t and \(\omega \) were arbitrary, completes the argument.

Observe that the convergence obtained in Proposition 4.5 occurs along the discrete sequence of time steps \(kL_{n+1}^2\) on the balls \(B_{4\sqrt{k}\tilde{D}_{n+1}}\). We now upgrade this convergence to the full limit \(s\rightarrow \infty \), using Control 2.3. The cost is that the convergence now occurs on a marginally smaller portion of space.

Proposition 4.6

Assume (2.10) and (2.33). Suppose, for some \(R_1>0\), \(E\in \mathcal {F}\) satisfies \(E\in \sigma _{B_{R_1}}.\) For each \(n\ge 0\), \(1\le k<\ell _{n+1}^2\) and \(t\ge 0\), for \(C=C(R_1,t)>0\) and \(\zeta >0\) appearing in Proposition 4.4,

$$\begin{aligned} \mathbb {P}\left( \sup _{kL_{n+1}^2\le s\le (k+1)L_{n+1}^2}\sup _{x\in B_{\sqrt{k}\tilde{D}_{n+1}}}\left| R_sR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right| >\frac{C}{\tilde{\kappa }_n}\right) \le Ck^{-(1+\zeta )}L_n^{-\zeta }. \end{aligned}$$

Proof

Fix \(E\in \mathcal {F}\) and \(R_1>0\) satisfying \(E\in \sigma _{B_{R_1}}\), \(n\ge 0\), \(1\le k<\ell _{n+1}^2\) and \(t\ge 0\). We observe that, in view of (2.17), (2.19) and (2.23), for each \(n\ge 0\), for \(\zeta >0\) defined in Proposition 4.4, there exists \(C>0\) independent of n satisfying,

$$\begin{aligned}L_n^{\beta -7(\delta -5a)}\le C/\tilde{\kappa }_n,\end{aligned}$$

and, for each \(n\ge 0\) and \(1\le k<\ell _{n+1}^2\),

$$\begin{aligned} L_n^{(2(1+a)^2-1)d-M_0}\le k^{-(1+\zeta )}L_n^{-\zeta }. \end{aligned}$$

Therefore, Propositions 3.8, 4.4 and 4.5 imply that, for \(C=C(R_1,t)>0\) independent of E, n and k,

$$\begin{aligned} \mathbb {P}\left( \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right| >\frac{C}{\tilde{\kappa }_n}\right) \le Ck^{-(1+\zeta )}L_n^{-\zeta }. \end{aligned}$$
(4.32)

Recall the cutoff function \(\chi _{2\sqrt{k}\tilde{D}_{n+1}}\) defined in (2.28). For each \(x\in B_{\sqrt{k}\tilde{D}_{n+1}}\) and \(kL_{n+1}^2\le s\le (k+1)L_{n+1}^2\), we write

$$\begin{aligned} |R_sR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)|= & {} \left| R_{s-kL_{n+1}^2}\left( \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right) \right| \\\le & {} \left| R_{s-kL_{n+1}^2}\chi _{2\sqrt{k}\tilde{D}_{n+1}}\left( \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right) \right| \\&+\left| R_{s-kL_{n+1}^2}(1-\chi _{2\sqrt{k}\tilde{D}_{n+1}})\left( \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right) \right| . \end{aligned}$$

Since, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned} -1<\left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)<1, \end{aligned}$$

we have, for each \(\omega \in A_n\) and \(x\in B_{\sqrt{k}\tilde{D}_{n+1}}\), using Control 2.3, since \(0\le s-kL_{n+1}^2\le L_{n+1}^2\),

$$\begin{aligned} \left| R_{s-kL_{n+1}^2}(1-\chi _{2\sqrt{k}\tilde{D}_{n+1}})\left( \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right) \right| \le e^{-\kappa _{n+1}}. \end{aligned}$$
(4.33)

Furthermore, for each \(x\in \mathbb {R}^d\) and \(\omega \in {\varOmega }\),

$$\begin{aligned}&\left| R_{s-kL_{n+1}^2}\chi _{2\sqrt{k}\tilde{D}_{n+1}}\left( \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right) \right| \nonumber \\&\quad \le \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}\left| \left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right| . \end{aligned}$$
(4.34)

To conclude, observe that (2.17), (2.18), (2.19) and (2.23) imply that there exists \(C>0\) satisfying, for each \(n\ge 0\),

$$\begin{aligned}e^{-\kappa _n}\le C/\tilde{\kappa }_n.\end{aligned}$$

Therefore, because \(kL_{n+1}^2\le s< (k+1)L_{n+1}^2\) was arbitrary, using (4.33) and (4.34), for \(C=C(R_1,t)>0\) independent of E, n and k,

$$\begin{aligned}&\mathbb {P}\left( \sup _{kL_{n+1}^2\le s\le (k+1)L_{n+1}^2}\sup _{x\in B_{\sqrt{k}\tilde{D}_{n+1}}}|R_sR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)|>\frac{C}{\tilde{\kappa }_n}\right) \\&\quad \le \mathbb {P}\left( \sup _{x\in B_{4\sqrt{k}\tilde{D}_{n+1}}}|\left( R_{n+1}\right) ^kR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)|>\frac{C}{\tilde{\kappa }_n}\right) +\mathbb {P}\left( {\varOmega }{\setminus } A_n\right) . \end{aligned}$$

Since, for each \(n\ge 0\) and \(1\le k< \ell _{n+1}^2\),

$$\begin{aligned} L_n^{(2(1+a)^2-1)d-M_0}\le k^{-(1+\zeta )}L_n^{-\zeta }, \end{aligned}$$

in view of Proposition 3.8 and (4.32), for \(C=C(R_1,t)>0\) independent of E, n and k,

$$\begin{aligned}&\mathbb {P}\left( \sup _{kL_{n+1}^2\le s\le (k+1)L_{n+1}^2}\sup _{x\in B_{\sqrt{k}\tilde{D}_{n+1}}}|R_sR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)|>\frac{C}{\tilde{\kappa }_n}\right) \\&\quad \le Ck^{-(1+\zeta )}L_n^{-\zeta }, \end{aligned}$$

which completes the proof.

We are now prepared to present our main result. Define, for each \(n\ge 0\), \(1\le k<\ell _{n+1}^2\), \(t\ge 0\) and \(E\in \mathcal {F}\) satisfying, for some \(R_1>0\), \(E\in \sigma _{B_{R_1}}\), for \(C=C(R_1,t)>0\) as in Proposition 4.6,

$$\begin{aligned} B_{n,k,t}(E)=\left\{ \omega \in {\varOmega }\;|\;\sup _{kL_{n+1}^2\le s\le (k+1)L_{n+1}^2}\sup _{x\in B_{\sqrt{k}\tilde{D}_{n+1}}}\left| R_sR_{1+t}f_E(x,\omega )-\pi _n(R_tf_E)\right| \le \frac{C}{\tilde{\kappa }_n}\;\right\} , \end{aligned}$$
(4.35)

and, for each \(n\ge 0\),

$$\begin{aligned} B_{n,t}(E)=\bigcap _{k=1}^{\ell _{n+1}^2-1}B_{n,k,t}(E). \end{aligned}$$
(4.36)

We have, using Proposition 4.6, for each \(n\ge 0\), \(t\ge 0\), \(E\in \mathcal {F}\) satisfying, for some \(R_1>0\), \(E\in \sigma _{B_{R_1}},\) for \(C=C(R_1,t)>0\) independent of n and E,

$$\begin{aligned}\mathbb {P}\left( {\varOmega }{\setminus }B_{n,t}(E)\right) \le \sum _{k=1}^{\ell _{n+1}^2-1}\mathbb {P}\left( {\varOmega }{\setminus } B_{n,k,t}(E)\right) \le \sum _{k=1}^{\ell _{n+1}^2-1}Ck^{-(1+\zeta )}L_n^{-\zeta }\le CL_{n}^{-\zeta }.\end{aligned}$$

And, in view of (2.18), for each \(t\ge 0\) and \(E\in \mathcal {F}\) satisfying \(E\in \sigma _{B_{R_1}}\), for some \(R_1>0\),

$$\begin{aligned}\sum _{n=0}^\infty \mathbb {P}\left( {\varOmega }{\setminus } B_{n,t}(E)\right) <\infty .\end{aligned}$$

We therefore define, for each \(t\ge 0\) and \(E\in \mathcal {F}\) satisfying \(E\in \sigma _{B_{R_1}}\), for some \(R_1>0\), the subset

$$\begin{aligned} {\varOmega }_{t}(E)=\left\{ \omega \in {\varOmega }\;|\;\text {There exists}\;\overline{n}(\omega )\ge 0\;\text {such that, for all}\;n\ge \overline{n},\;\omega \in B_{n,t}(E)\right\} , \end{aligned}$$
(4.37)

where the Borel–Cantelli lemma implies that, for every \(t\ge 0\) and \(E\in \mathcal {F}\) satisfying \(E\in \sigma _{B_{R_1}}\), for some \(R_1>0\),

$$\begin{aligned} \mathbb {P}({\varOmega }_t(E))=1. \end{aligned}$$

We now present the invariance property of the measure \(\pi \). In the proof, we use the fact that

$$\begin{aligned} \mathcal {F}=\sigma \left( \bigcup _{R>0}\sigma _{B_R}\right) . \end{aligned}$$

That is, the sigma algebra \(\mathcal {F}\) is generated by subsets satisfying the hypothesis of Proposition 4.3.

Proposition 4.7

Assume (2.10) and (2.33). For every \(E\in \mathcal {F}\) and \(t\ge 0\),

$$\begin{aligned} \pi (E)=\overline{\pi }(R_tf_E)=\int _{{\varOmega }}R_tf_E(0,\omega )\;d\pi . \end{aligned}$$

Proof

We define

$$\begin{aligned} \tilde{\mathcal {F}}=\bigcup _{R>0}\sigma _{B_R}, \end{aligned}$$
(4.38)

and observe that \(\tilde{\mathcal {F}}\) is an algebra of subsets of \({\varOmega }\). That is, \(\tilde{\mathcal {F}}\) is closed under relative complements and finite unions. Furthermore, for every \(E\in \tilde{\mathcal {F}}\), there exists \(R_1=R_1(E)>0\) satisfying \(E\in \sigma _{B_{R_1}}\).
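For instance, if \(E_1\in \sigma _{B_{R'}}\) and \(E_2\in \sigma _{B_{R''}}\) with \(0<R'\le R''\), then, since the definition (4.5) implies \(\sigma _{B_{R'}}\subset \sigma _{B_{R''}}\),

$$\begin{aligned} E_1\cup E_2\in \sigma _{B_{R''}}\subset \tilde{\mathcal {F}}\quad \text {and}\quad E_1{\setminus }E_2\in \sigma _{B_{R''}}\subset \tilde{\mathcal {F}}. \end{aligned}$$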

Fix \(t\ge 0\) and \(E\in \tilde{\mathcal {F}}\). For every \(\omega \in {\varOmega }_t(E)\cap {\varOmega }_0(E)\), (4.35) and (4.36) imply that

$$\begin{aligned} \pi (E)=\overline{\pi }(f_E)=\lim _{s\rightarrow \infty }R_sR_1f_E(0,\omega )=\lim _{s\rightarrow \infty }R_sR_{t+1}f_E(0,\omega )=\overline{\pi }(R_tf_E). \end{aligned}$$

And, in view of Proposition 3.13, since \(R_tf_E\) satisfies (3.2),

$$\begin{aligned} \overline{\pi }(R_tf_E)=\int _{{\varOmega }}R_tf_E(0,\omega )\;d\pi . \end{aligned}$$

Therefore, for every \(t\ge 0\) and \(E\in \tilde{\mathcal {F}}\),

$$\begin{aligned} \pi (E)=\int _{{\varOmega }}R_tf_E(0,\omega )\;d\pi . \end{aligned}$$

To conclude, the absolute continuity of \(\pi \) with respect to \(\mathbb {P}\) and the dominated convergence theorem imply that, using a repetition of the argument appearing in Proposition 3.12, for each \(t\ge 0\), the rule

$$\begin{aligned}E\rightarrow \int _{{\varOmega }}R_tf_E(0,\omega )\;d\pi \end{aligned}$$

defines a probability measure on \(({\varOmega }, \mathcal {F})\). Therefore, since

$$\begin{aligned} \mathcal {F}=\sigma (\tilde{\mathcal {F}}), \end{aligned}$$

the Caratheodory Extension Theorem implies that, for every \(E\in \mathcal {F}\) and \(t\ge 0\),

$$\begin{aligned} \pi (E)=\int _{{\varOmega }}R_tf_E(0,\omega )\;d\pi , \end{aligned}$$

which completes the argument.

In the final proposition of this section, we prove that the invariant measure \(\pi \) is the unique invariant measure which is absolutely continuous with respect to \(\mathbb {P}\). Furthermore, \(\pi \) is mutually absolutely continuous with respect to \(\mathbb {P}\) and defines an ergodic probability measure for the canonical Markov process on \({\varOmega }\) defining (1.6). The proof of ergodicity is presented for the convenience of the reader, since it is virtually identical to that presented in [9, Theorem 2.1].

Recall that the ergodicity of (2.2) implies that whenever \(E\in \mathcal {F}\) satisfies, with equality up to sets of measure zero,

$$\begin{aligned} \tau _x(E)=E\quad \text {for every}\quad x\in \mathbb {R}^d\quad \text {then}\quad \mathbb {P}(E)=1\quad \text {or}\quad \mathbb {P}(E)=0. \end{aligned}$$
(4.39)

Theorem 4.8

Assume (2.10) and (2.33). There exists a unique invariant measure which is absolutely continuous with respect to \(\mathbb {P}\). Furthermore, the invariant measure is mutually absolutely continuous with respect to \(\mathbb {P}\) and defines an ergodic probability measure for the canonical Markov process on \({\varOmega }\) defining (1.6).

Proof

The measure \(\pi \) constructed according to (3.42) was shown in Proposition 3.12 to be absolutely continuous with respect to \(\mathbb {P}\) and was shown to be invariant in Proposition 4.7. It therefore suffices to prove the uniqueness.

Suppose that \(\mu \) is a probability measure on \(({\varOmega }, \mathcal {F})\) which is absolutely continuous with respect to \(\mathbb {P}\) and satisfies, for each \(t\ge 0\) and \(E\in \mathcal {F}\),

$$\begin{aligned} \mu (E)=\int _{{\varOmega }}R_tf_E(0,\omega )\;d\mu . \end{aligned}$$
(4.40)

Fix \(E\in \tilde{\mathcal {F}}\). In view of (4.35) and (4.36), for every \(\omega \in {\varOmega }_0(E)\), as \(t\rightarrow \infty \),

$$\begin{aligned}R_tf_E(0,\omega )\rightarrow \pi (E).\end{aligned}$$

Furthermore, since \(\mu \) is absolutely continuous with respect to \(\mathbb {P}\),

$$\begin{aligned} \mu ({\varOmega }_0(E))=\mathbb {P}({\varOmega }_0(E))=1. \end{aligned}$$
(4.41)

Therefore, the dominated convergence theorem, (4.40) and (4.41) imply that

$$\begin{aligned} \mu (E)=\lim _{t\rightarrow \infty }\int _{\varOmega }R_tf_E(0,\omega )\;d\mu =\int _{\varOmega }\pi (E)\;d\mu =\pi (E). \end{aligned}$$

Since \(E\in \tilde{\mathcal {F}}\) was arbitrary, and since \(\tilde{\mathcal {F}}\) is an algebra of subsets, the Caratheodory Extension Theorem and the fact that \(\mathcal {F}=\sigma (\tilde{\mathcal {F}})\) imply that, for every \(E\in \mathcal {F}\),

$$\begin{aligned}\pi (E)=\mu (E),\end{aligned}$$

which completes the argument.

The argument for mutual absolute continuity proceeds by contradiction. If \(\pi \) and \(\mathbb {P}\) are not mutually absolutely continuous, the Lebesgue decomposition theorem implies that there exists a subset \(E\subset {\varOmega }\) satisfying \(0<\mathbb {P}({\varOmega }{\setminus }E)<1\) and \(\pi ({\varOmega }{\setminus }E)=0\), and with \(\mathbb {P}\) absolutely continuous with respect to \(\pi \) on E. Since

$$\begin{aligned} 1=\pi (E)=\int _{{\varOmega }}R_1f_E(0,\omega )\;d\pi =\int _{\varOmega }\int _{\mathbb {R}^d}p(0,1,y,\omega )\mathbf{{1}}_E(\tau _y\omega )\;dy\;d\pi , \end{aligned}$$

and since \(\mathbb {P}\) is absolutely continuous with respect to \(\pi \) on E, this implies that, for almost every \(\omega \in E\) with respect to \(\mathbb {P}\), and for almost every \(y\in \mathbb {R}^d\), we have \(\tau _y\omega \in E.\) Fubini’s theorem therefore implies that, for almost every \(y\in \mathbb {R}^d\), up to a set of measure zero,

$$\begin{aligned} \tau _y(E)=E. \end{aligned}$$

And, since the map \(y\rightarrow \mathbf{{1}}_E(\tau _y\omega )\) is continuous from \(\mathbb {R}^d\) to \(L^1({\varOmega })\), we conclude that, for every \(y\in \mathbb {R}^d\), up to a set of measure zero,

$$\begin{aligned} \tau _y(E)=E. \end{aligned}$$

Therefore, using (4.39), we have \(\mathbb {P}(E)=0\) or \(\mathbb {P}(E)=1\), a contradiction, which completes the proof of mutual absolute continuity.

We now prove the ergodicity. Suppose that, for \(E\in \mathcal {F}\) and \(t\ge 0\), we have

$$\begin{aligned} R_tf_E(0,\omega )=P_t(\omega ,E)=1\quad \text {for almost every}\quad \omega \in E\quad \text {with respect to}\quad \pi . \end{aligned}$$

Since \(\mathbb {P}\) is absolutely continuous with respect to \(\pi \), this implies

$$\begin{aligned}R_tf_E(0,\omega )=P_t(\omega ,E)=1\quad \text {for almost every}\quad \omega \in E\quad \text {with respect to}\quad \mathbb {P},\end{aligned}$$

which, by repeating the above argument, implies \(E\in \mathcal {F}\) is an invariant set under the transformation group \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\). Therefore, because (4.39) implies \(\mathbb {P}(E)=0\) or \(\mathbb {P}(E)=1\), we conclude that, since \(\pi \) is absolutely continuous with respect to \(\mathbb {P}\), either \(\pi (E)=0\) or \(\pi (E)=1\), and this completes the argument.

5 A proof of homogenization for locally measurable functions

In this section, we characterize the limiting behavior, as \(\epsilon \rightarrow 0\), on a subset of full probability, of solutions \(u^\epsilon :\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} u^\epsilon _t=\frac{1}{2}{{\mathrm{tr}}}(A(x/\epsilon ,\omega )D^2u^\epsilon )+\frac{1}{\epsilon }b(x/\epsilon ,\omega )\cdot Du^\epsilon &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u^\epsilon (x,0,\omega )=f(x/\epsilon ,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} ,\end{array}\right. \end{aligned}$$
(5.1)

for initial data \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) which is stationary in the sense of (3.2) and locally measurable in the sense that, for some \(R>0\), \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_R}).\)

The local measurability ensures that the triplet \((A(x,\omega ), b(x,\omega ), f(x,\omega ))\) satisfies a finite range dependence in the sense of (2.7), and the following three statements can be extended to this situation by a repetition of the arguments appearing in Sect. 4. We remark, however, that after a rescaling the case of local measurability and Theorem 5.3 below are enough to prove the convergence, on a subset of full probability, for each \(p\in \mathbb {R}^d\), of the approximate first-order correctors

$$\begin{aligned}\delta v^\delta (y)=\frac{1}{2}{{\mathrm{tr}}}(A(y,\omega )D^2v^\delta )+b(y,\omega )\cdot (p+Dv^\delta )\quad \text {on}\quad \mathbb {R}^d,\end{aligned}$$

in the sense that, on a subset of full probability, as \(\delta \rightarrow 0\) and using the isotropy (2.8) to obtain the final equality,

$$\begin{aligned}-\delta v^\delta (0,\omega )\rightarrow \int _{\varOmega }p\cdot b(0,\omega )\;d\pi =0.\end{aligned}$$

See for instance Bensoussan et al. [1, Chapter 3] for the role of correctors in the periodic homogenization theory of equations like (5.1) with non-oscillating initial data and elliptic analogues.

In what follows, for each \(E\in \bigcup _{R>0}\sigma _{B_R}\), recall the subset of full probability \({\varOmega }_0(E)\) defined in (4.37), and the related subsets \(B_{n,0}(E)\) defined in (4.36).

Theorem 5.1

Assume (2.10) and (2.33). Suppose \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfies (3.2) and, for some \(R_1>0\), satisfies \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_{R_1}})\). For each \(\epsilon >0\), write \(u^\epsilon \) for the solution of (5.1) with initial data \(f(x/\epsilon ,\omega )\). On a subset of full probability, as \(\epsilon \rightarrow 0\),

$$\begin{aligned} u^\epsilon \rightarrow \overline{\pi }(f)=\int _{{\varOmega }}f(0,\omega )\;d\pi \quad \text {locally uniformly on}\quad \mathbb {R}^d\times (0,\infty ). \end{aligned}$$

Proof

Observe that, for each \(n\ge 0\), \(f,g\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfying (3.2) and \(\alpha ,\beta \in \mathbb {R}\), we have

$$\begin{aligned}\pi _n(\alpha f+\beta g)=\alpha \pi _n(f)+\beta \pi _n(g)\quad \text {and}\quad \overline{\pi }(\alpha f+\beta g)=\alpha \overline{\pi }(f)+\beta \overline{\pi }(g).\end{aligned}$$

Furthermore, for each \(f,g\in L^\infty (\mathbb {R}^d\times {\varOmega })\), for each \(n\ge 0\),

$$\begin{aligned} |\pi _n(f)-\pi _n(g)|=|\pi _n(f-g)|\le ||f-g||_{L^\infty (\mathbb {R}^d\times {\varOmega })}, \end{aligned}$$

and

$$\begin{aligned} |\overline{\pi }(f)-\overline{\pi }(g)|=|\overline{\pi }(f-g)|\le ||f-g||_{L^\infty (\mathbb {R}^d\times {\varOmega })}. \end{aligned}$$

It therefore suffices, using the definition of the Lebesgue integral, to prove the theorem for translates under \(\left\{ \tau _x\right\} _{x\in \mathbb {R}^d}\) of indicator functions corresponding to locally measurable sets in \(\mathcal {F}\).
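More precisely — a brief sketch of the approximation, written under the convention, as in the preceding sections, that \(f_E(x,\omega )=\mathbf {1}_E(\tau _x\omega )\) denotes the stationary indicator function of \(E\in \mathcal {F}\) — since \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_{R_1}})\), there exist simple functions

$$\begin{aligned} s_k(x,\omega )=\sum _{i=1}^{m_k}\alpha _{k,i}f_{E_{k,i}}(x,\omega )\quad \text {with}\quad E_{k,i}\in \sigma _{B_{R_1}}\quad \text {and}\quad \lim _{k\rightarrow \infty }||f-s_k||_{L^\infty (\mathbb {R}^d\times {\varOmega })}=0. \end{aligned}$$

Since the comparison principle bounds the difference of the solutions of (5.1) corresponding to \(f\) and \(s_k\) by \(||f-s_k||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\), the linearity and contraction estimates above transfer the convergence from the indicator functions \(f_{E_{k,i}}\) to \(f\).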

Fix \(R_1>0\) and \(E\in \sigma _{B_{R_1}}\). We write \(u^\epsilon \) for the solution of (5.1) with initial data \(f_E(x/\epsilon ,\omega )\). Observe that, for each \(\epsilon >0\), \(u^\epsilon (x,t,\omega )=u(x/\epsilon ,t/\epsilon ^2,\omega )\) for \(u:\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} u_t=\frac{1}{2}{{\mathrm{tr}}}(A(y,\omega )D^2u)+b(y,\omega )\cdot Du &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u(x,0,\omega )=f_E(x,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} .\end{array}\right. \end{aligned}$$
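Indeed, this is the standard parabolic rescaling: formally, suppressing questions of regularity, since

$$\begin{aligned} u^\epsilon _t(x,t,\omega )=\epsilon ^{-2}u_t(x/\epsilon ,t/\epsilon ^2,\omega ),\quad Du^\epsilon (x,t,\omega )=\epsilon ^{-1}Du(x/\epsilon ,t/\epsilon ^2,\omega ),\quad D^2u^\epsilon (x,t,\omega )=\epsilon ^{-2}D^2u(x/\epsilon ,t/\epsilon ^2,\omega ), \end{aligned}$$

each term of the equation satisfied by \(u^\epsilon (x,t,\omega )=u(x/\epsilon ,t/\epsilon ^2,\omega )\) is \(\epsilon ^{-2}\) times the corresponding term of the equation for \(u\) evaluated at \((x/\epsilon ,t/\epsilon ^2)\), so that \(u^\epsilon \) satisfies (5.1) with initial data \(f_E(x/\epsilon ,\omega )\).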

It therefore remains to prove that, on a subset of full probability, for each \(R>0\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\sup _{(x,t)\in B_{R/\epsilon }\times [R^2/\epsilon ^2,\infty )}|u(x,t,\omega )-\overline{\pi }(f_E)|=0. \end{aligned}$$
(5.2)

We now prove that (5.2) is satisfied for every \(\omega \in {\varOmega }_0(E)\), for every \(R>0\).

Fix \(R>0\) and \(\omega \in {\varOmega }_0(E)\). Since \(\omega \in {\varOmega }_0(E)\), choose \(\overline{\epsilon }_1>0\) such that, whenever \(0<\epsilon <\overline{\epsilon }_1\), \(n\ge 0\) and \(1\le k<\ell _n^2\) satisfy

$$\begin{aligned}kL_n^2\le R^2/\epsilon ^2\le (k+1)L_n^2,\end{aligned}$$

we have \(\omega \in B_{n,0}(E)\). In view of (2.19) and (2.20), choose also \(\overline{\epsilon }_2>0\) such that, whenever \(0<\epsilon <\overline{\epsilon }_2\) satisfies, for \(n\ge 0\) and \(1\le k<\ell _n^2\),

$$\begin{aligned}kL_n^2\le R^2/\epsilon ^2<(k+1)L_n^2,\end{aligned}$$

we have

$$\begin{aligned} (k+1)R^2L_n^2\le k\tilde{D}_n^2. \end{aligned}$$

Define \(\overline{\epsilon }=\min \left\{ \overline{\epsilon }_1,\overline{\epsilon }_2\right\} \) and observe that, whenever \(0<\epsilon <\overline{\epsilon }\) satisfies, for \(n\ge 0\) and \(1\le k<\ell _n^2\),

$$\begin{aligned}kL_n^2\le R^2/\epsilon ^2\le (k+1)L_n^2,\end{aligned}$$

it follows that \(\omega \in B_{n,0}(E)\) and \(B_{R/\epsilon }\subset B_{\sqrt{k}\tilde{D}_n}\). Therefore, Proposition 4.6 implies, whenever \(0<\epsilon <\overline{\epsilon }\) satisfies, for \(\overline{n}\ge 0\) and \(1\le \overline{k}<\ell _{\overline{n}}^2\),

$$\begin{aligned} \overline{k}L_{\overline{n}}^2\le R^2/\epsilon ^2<(\overline{k}+1)L_{\overline{n}}^2, \end{aligned}$$

for \(C>0\) independent of \(\overline{n}\),

$$\begin{aligned}\sup _{(x,t)\in B_{R/\epsilon }\times [(R^2+\epsilon ^2)/\epsilon ^2,\infty )}|u(x,t,\omega )-\overline{\pi }(f_E)|\le C/\tilde{\kappa }_{\overline{n}}+\sup _{n\ge \overline{n}}|\pi _n(f_E)-\overline{\pi }(f_E)|.\end{aligned}$$

This, in view of Proposition 3.11, completes the argument, since \(\overline{n}\rightarrow \infty \) as \(\epsilon \rightarrow 0\), and since \(R>0\), \(R_1>0\) and \(E\in \sigma _{B_{R_1}}\) were arbitrary.

Finally, as an immediate consequence of Theorem 5.1, our methods also imply the homogenization of equations involving an oscillating righthand side,

$$\begin{aligned} \left\{ \begin{array}{ll} u^\epsilon _t=\frac{1}{2}{{\mathrm{tr}}}(A(x/\epsilon ,\omega )D^2u^\epsilon )+\frac{1}{\epsilon }b(x/\epsilon ,\omega )\cdot Du^\epsilon +f(x/\epsilon ,\omega ) &{}\quad \text {on}\quad \mathbb {R}^d\times (0,\infty ), \\ u^\epsilon =0 &{}\quad \text {on}\quad \mathbb {R}^d\times \left\{ 0\right\} ,\end{array}\right. \end{aligned}$$
(5.3)

and the homogenization of analogous time-independent problems, such as

$$\begin{aligned} u^\epsilon =\frac{1}{2}{{\mathrm{tr}}}(A(x/\epsilon ,\omega )D^2u^\epsilon )+\frac{1}{\epsilon }b(x/\epsilon ,\omega )\cdot Du^\epsilon +f(x/\epsilon ,\omega )\quad \text {on}\quad \mathbb {R}^d, \end{aligned}$$
(5.4)

for righthand side \(f(x,\omega )\) satisfying (3.2) with, for some \(R>0\), \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_R})\). The following two theorems summarize the results.

Theorem 5.2

Assume (2.10) and (2.33). Suppose \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfies (3.2) and, for some \(R_1>0\), satisfies \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_{R_1}})\). For each \(\epsilon >0\), write \(u^\epsilon \) for the solution of (5.3) with righthand side \(f(x/\epsilon ,\omega )\). On a subset of full probability, as \(\epsilon \rightarrow 0\),

$$\begin{aligned} u^\epsilon (x,t,\omega )\rightarrow t\overline{\pi }(f)=t\int _{{\varOmega }}f(0,\omega )\;d\pi \quad \text {locally uniformly on}\quad \mathbb {R}^d\times [0,\infty ). \end{aligned}$$

Proof

Fix \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) and \(R_1>0\) satisfying (3.2) and \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_{R_1}}).\) For each \(\epsilon >0\), let \(u^\epsilon :\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfy (5.3) with righthand side \(f(x/\epsilon ,\omega )\) and let \(\tilde{u}^\epsilon :\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfy (5.1) with initial data \(f(x/\epsilon ,\omega ).\) We observe that, for each \(\epsilon >0\),

$$\begin{aligned} u^\epsilon (x,t,\omega )=\int _{0}^t\tilde{u}^\epsilon (x,s,\omega )\;ds\quad \text {on}\quad \mathbb {R}^d\times [0,\infty )\times {\varOmega }. \end{aligned}$$
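Indeed — a formal verification, suppressing questions of regularity — differentiating under the integral and using the equation satisfied by \(\tilde{u}^\epsilon \),

$$\begin{aligned} \frac{1}{2}{{\mathrm{tr}}}(A(x/\epsilon ,\omega )D^2u^\epsilon )+\frac{1}{\epsilon }b(x/\epsilon ,\omega )\cdot Du^\epsilon +f(x/\epsilon ,\omega )=\int _0^t\tilde{u}^\epsilon _s(x,s,\omega )\;ds+f(x/\epsilon ,\omega )=\tilde{u}^\epsilon (x,t,\omega )=u^\epsilon _t(x,t,\omega ), \end{aligned}$$

with \(u^\epsilon (x,0,\omega )=0\), so that \(u^\epsilon \) is the solution of (5.3). Furthermore, since \(|\tilde{u}^\epsilon |\le ||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\), the locally uniform convergence of \(\tilde{u}^\epsilon \) permits passing to the limit under the integral.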

This, in view of Theorem 5.1, completes the proof.

Theorem 5.3

Assume (2.10) and (2.33). Suppose \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) satisfies (3.2) and, for some \(R_1>0\), satisfies \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_{R_1}})\). For each \(\epsilon >0\), write \(u^\epsilon \) for the solution of (5.4) with righthand side \(f(x/\epsilon ,\omega )\). On a subset of full probability, as \(\epsilon \rightarrow 0\),

$$\begin{aligned} u^\epsilon \rightarrow \overline{\pi }(f)=\int _{{\varOmega }}f(0,\omega )\;d\pi \quad \text {locally uniformly on}\quad \mathbb {R}^d. \end{aligned}$$

Proof

Fix \(f\in L^\infty (\mathbb {R}^d\times {\varOmega })\) and \(R_1>0\) satisfying (3.2) and \(f(0,\omega )\in L^\infty ({\varOmega },\sigma _{B_{R_1}}).\) For each \(\epsilon >0\), let \(u^\epsilon :\mathbb {R}^d\times {\varOmega }\rightarrow \mathbb {R}\) satisfy (5.4) with righthand side \(f(x/\epsilon ,\omega )\) and let \(\tilde{u}^\epsilon :\mathbb {R}^d\times [0,\infty )\times {\varOmega }\rightarrow \mathbb {R}\) satisfy (5.1) with initial data \(f(x/\epsilon ,\omega ).\) We observe that, for each \(\epsilon >0\),

$$\begin{aligned} u^\epsilon (x,\omega )=\int _{0}^\infty e^{-s}\tilde{u}^\epsilon (x,s,\omega )\;ds\quad \text {on}\quad \mathbb {R}^d\times {\varOmega }. \end{aligned}$$
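Indeed — again a formal verification, suppressing questions of regularity — differentiating under the integral, using the equation satisfied by \(\tilde{u}^\epsilon \) and integrating by parts in \(s\),

$$\begin{aligned} \frac{1}{2}{{\mathrm{tr}}}(A(x/\epsilon ,\omega )D^2u^\epsilon )+\frac{1}{\epsilon }b(x/\epsilon ,\omega )\cdot Du^\epsilon +f(x/\epsilon ,\omega )=\int _0^\infty e^{-s}\tilde{u}^\epsilon _s(x,s,\omega )\;ds+f(x/\epsilon ,\omega )=\int _0^\infty e^{-s}\tilde{u}^\epsilon (x,s,\omega )\;ds=u^\epsilon (x,\omega ), \end{aligned}$$

since the boundary term of the integration by parts is \(-\tilde{u}^\epsilon (x,0,\omega )=-f(x/\epsilon ,\omega )\). Because \(\int _0^\infty e^{-s}\;ds=1\) and \(|\tilde{u}^\epsilon |\le ||f||_{L^\infty (\mathbb {R}^d\times {\varOmega })}\), the dominated convergence theorem permits passing to the limit under the integral.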

This, in view of Theorem 5.1, completes the proof.