1 Introduction

Consider the operator \(L^a\) in divergence form,

$$\begin{aligned} (L^a f)(x) = \sum _{i,j=1}^d \partial _{x_i}(a_{ij}(\cdot )\, \partial _{x_j}f(\cdot ))(x), \end{aligned}$$

acting on functions on \({\mathbb {R}}^d\), where the symmetric positive definite matrix \(a = (a_{ij}(x))\) is bounded, measurable and uniformly elliptic, i.e. for some \(0 < {\uplambda }_1 \le {\uplambda }_2 < \infty \)

$$\begin{aligned} {\uplambda }_1\, |\xi |^2 \le \sum _{i, j = 1}^d a_{ij}(x)\, \xi _i\, \xi _j \le {\uplambda }_2\, |\xi |^2, \quad \forall \, x, \xi \in {\mathbb {R}}^d. \end{aligned}$$
(1.1)

Then, a celebrated result of Moser [27] states that the elliptic Harnack inequality (EHI) holds, that is, there exists a constant \(C_{\mathrm {EH}} \equiv C_{\mathrm {EH}}({\uplambda }_1, {\uplambda }_2) < \infty \) such that for any positive function \(u\) which is \(L^a\)-harmonic on the ball \(B(x_0, r)\), i.e. \((L^a u)(x)=0\) for all \(x \in B(x_0,r)\), we have

$$\begin{aligned} \max _{B(x_0,r/2)} u \le C_{\mathrm {EH}}\, \min _{B(x_0,r/2)} u. \end{aligned}$$

A few years later, Moser also proved that the parabolic Harnack inequality (PHI) holds (see [28]), that is, there exists a constant \(C_{\mathrm {PH}} \equiv C_{\mathrm {PH}}({\uplambda }_1, {\uplambda }_2) < \infty \) such that for any positive caloric function \(u\), i.e. any positive weak solution of the heat equation \((\partial _t-L^a)u(t,x)=0\) on \([t_0, t_0 + r^2] \times B(x_0,r)\), we have

$$\begin{aligned} \max _{(t,x) \in Q_{-}} u(t,x) \le C_{\mathrm {PH}}\, \min _{(t,x) \in Q_+} u(t,x), \end{aligned}$$

where \(Q_- = [t_0 + \frac{1}{4} r^2, t_0 + \frac{1}{2} r^2] \times B(x_0, \frac{1}{2} r)\) and \(Q_+ = [t_0 + \frac{3}{4} r^2, t_0 + r^2] \times B(x_0, \frac{1}{2} r)\).

Both the EHI and the PHI have had a major influence on PDE theory and differential geometry; in particular, they play a prominent role in elliptic regularity theory. One application of Harnack inequalities, observed by Nash in [30], is to show Hölder regularity for solutions of elliptic or parabolic equations. For instance, the PHI implies the Hölder continuity of the fundamental solution \(p^a_t(x,y)\) of the parabolic equation

$$\begin{aligned} ( \partial _t - L^a ) p^a_t(x,y) = 0, \quad p^a_0(x,y) = \delta _x(y). \end{aligned}$$

Furthermore, the PHI is also one of the key tools in Aronson’s proof [5] of Gaussian type bounds for \(p^a_t(x,y)\).

The remarkable fact about these results is that they rely only on the uniform ellipticity and not on any regularity of the matrix \(a\). We refer to the monograph [32] for more details on this topic.

1.1 The model

In this paper we will be dealing with a discretised version of the operator \(L^a\). We consider an infinite, connected, locally finite graph \(G = (V, E)\) with vertex set \(V\) and edge set \(E\). We will write \(x \sim y\) if \(\{x,y\} \in E\) and \(e \sim e'\) for \(e, e' \in E\) if the edges \(e\) and \(e'\) have a common vertex. A path of length \(n\) between \(x\) and \(y\) in \(G\) is a sequence \(\{x_i : i = 0, \ldots , n\}\) with the property that \(x_0 = x\), \(x_n = y\) and \(x_i \sim x_{i+1}\) for all \(i = 0, \ldots , n-1\). Let \(d\) be the natural graph distance on \(G\), i.e. \(d(x,y)\) is the minimal length of a path between \(x\) and \(y\). We denote by \(B(x,r)\) the closed ball with center \(x\) and radius \(r\), i.e. \(B(x,r) \mathrel {\mathop :}=\{y \in V \mid d(x,y) \le r\}\).

The graph is endowed with the counting measure, i.e. the measure of \(A \subset V\) is simply the number \(|A|\) of elements in \(A\). For any non-empty, finite \(A \subset V\) and \(p \in [1, \infty )\), we introduce space-averaged \(\ell ^{p}\)-norms on functions \(f: A \rightarrow {\mathbb {R}}\) by the usual formula

$$\begin{aligned} ||f||_{p,A} \mathrel {\mathop :}=\left( \frac{1}{|A|}\; \sum _{x \in A}\, |f(x)|^p\right) ^{\frac{1}{p}} \quad \text {and} \quad ||f||_{\infty , A} \mathrel {\mathop :}=\max _{x\in A} |f(x)|. \end{aligned}$$

For a given set \(B \subset V\), we define the relative internal boundary of \(A \subset B\) by

$$\begin{aligned} \partial _B A \mathrel {\mathop :}=\{ x \in A | \exists \, y \in B {\setminus } A \text { s.th. } \{x,y\} \in E \} \end{aligned}$$

and we simply write \(\partial A\) instead of \(\partial _V A\). Throughout the paper we will make the following assumption on \(G\).

Assumption 1.1

For some \(d\ge 2\) the graph \(G\) satisfies the following conditions:

  1. (i)

    volume regularity of order \(d\), that is, there exists \(C_{\mathrm {reg}} \in (0, \infty )\) such that

    $$\begin{aligned} C^{-1}_{\mathrm {reg}}\, r^d \le |B(x,r)| \le C_{\mathrm {reg}}\, r^d, \quad \forall \, x \in V,\; r \ge 1. \end{aligned}$$
    (1.2)
  2. (ii)

    relative isoperimetric inequality of order \(d\), that is, there exists \(C_{\mathrm {riso}} \in (0, \infty )\) such that for all \(x \in V\) and \(r \ge 1\),

    $$\begin{aligned} \frac{|\partial _{B(x,r)} A|}{|A|} \ge \frac{C_{\mathrm {riso}}}{r}, \quad \forall \; A \subset B(x,r)\; \text { s.th. } |A| < \tfrac{1}{2} |B(x,r)|. \end{aligned}$$
    (1.3)

Remark 1.2

The Euclidean lattice, \(({\mathbb {Z}}^d, E_d)\), satisfies Assumption 1.1.

Assume that the graph, \(G\), is endowed with positive weights, i.e. we consider a family \(\omega = \{\omega (e) \in (0, \infty ) : e \in E\}\). With an abuse of notation we also denote the conductance matrix by \(\omega \), that is for \(x, y \in V\) we set \(\omega (x,y) = \omega (y,x) = \omega (\{x,y\})\) if \(\{x,y\} \in E\) and \(\omega (x,y) = 0\) otherwise. We also refer to \(\omega (x,y)\) as the conductance of the corresponding edge \(\{x,y\}\) and we call \((V,E,\omega )\) a weighted graph. Let us further define measures \(\mu ^{\omega }\) and \(\nu ^{\omega }\) on \(V\) by

$$\begin{aligned} \mu ^{\omega }(x) \mathrel {\mathop :}=\sum _{y \sim x}\, \omega (x,y) \quad \text {and} \quad \nu ^{\omega }(x) \mathrel {\mathop :}=\sum _{y \sim x}\, \frac{1}{\omega (x,y)}. \end{aligned}$$

For any fixed \(\omega \) we consider a reversible continuous time Markov chain, \(Y = \{Y_t: t \ge 0 \}\), on \(V\) with generator \({\mathcal {L}}^{\omega }_Y\) acting on bounded functions \(f: V \rightarrow {\mathbb {R}}\) as

$$\begin{aligned} ({\mathcal {L}}_Y^{\omega }\, f)(x) = \mu ^\omega (x)^{-1} \sum _{y \sim x} \omega (x,y) \, (f(y) - f(x)). \end{aligned}$$
(1.4)

We denote by \({{\mathrm{\mathrm {P}}}}_x^\omega \) the law of the process starting at the vertex \(x \in V\). The corresponding expectation will be denoted by \({{\mathrm{\mathrm {E}}}}_x^\omega \). Setting \(p^{\omega }(x,y) \mathrel {\mathop :}=\omega (x,y) / \mu ^{\omega }(x)\), this random walk waits at \(x\) an exponential time with mean \(1\) and chooses its next position \(y\) with probability \(p^{\omega }(x,y)\). Since the law of the waiting times does not depend on the location, \(Y\) is also called the constant speed random walk (CSRW). Furthermore, \(Y\) is a reversible Markov chain with symmetrising measure given by \(\mu ^{\omega }\). In [3] we mainly consider the variable speed random walk (VSRW) \(X = \{X_t: t \ge 0 \}\), which waits at \(x\) an exponential time with mean \(1/\mu ^{\omega }(x)\), with generator

$$\begin{aligned} ({\mathcal {L}}^{\omega } f)(x) = \sum _{y \in {\mathbb {Z}}^d} \omega (x,y)\, (f(y) - f(x)) = \mu ^\omega (x)\, ({\mathcal {L}}^{\omega }_Y f)(x). \end{aligned}$$
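
To make the dynamics concrete, the following minimal Python sketch (added for illustration only, not part of the original text) implements one step of the CSRW and of the VSRW on a small weighted graph; the graph and the conductance values are arbitrary choices.

```python
import random

# Illustrative conductances on a 4-cycle with vertices 0..3; any positive,
# symmetric choice of weights would do.
omega = {(0, 1): 2.0, (1, 2): 0.5, (2, 3): 1.5, (3, 0): 0.1}
omega.update({(y, x): w for (x, y), w in omega.items()})  # symmetrise

def mu(x):
    # mu^omega(x) = sum of the conductances of the edges incident to x
    return sum(w for (a, _), w in omega.items() if a == x)

def jump(x):
    # choose the next position y with probability p(x, y) = omega(x, y) / mu(x)
    nbrs = [(y, w) for (a, y), w in omega.items() if a == x]
    r, acc = random.random() * mu(x), 0.0
    for y, w in nbrs:
        acc += w
        if r <= acc:
            return y
    return nbrs[-1][0]

def csrw_step(x):
    # CSRW: exponential waiting time with mean 1, independent of the location
    return random.expovariate(1.0), jump(x)

def vsrw_step(x):
    # VSRW: exponential waiting time with mean 1 / mu(x)
    return random.expovariate(mu(x)), jump(x)

print(csrw_step(0), vsrw_step(0))
```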

1.2 Harnack inequalities on graphs

In the case of uniformly bounded conductances, i.e. if there exist \(0 < {\uplambda }_1 \le {\uplambda }_2 < \infty \) such that

$$\begin{aligned} {\uplambda }_1 \le \omega (x,y) \le {\uplambda }_2, \quad \forall \, \{x,y\} \in E, \end{aligned}$$

both the EHI and the PHI have been derived in the setting of an infinite, connected, locally finite graph by Delmotte in [19, 20] with constants \(C_{\mathrm {EH}} = C_{\mathrm {EH}}({\uplambda }_1, {\uplambda }_2)\) and \(C_{\mathrm {PH}} = C_{\mathrm {PH}}({\uplambda }_1, {\uplambda }_2)\) depending only on \({\uplambda }_1\) and \({\uplambda }_2\). It is obvious that for a given ball \(B(x_0,n)\) the constants \({\uplambda }_1\) and \({\uplambda }_2\) can be replaced by

$$\begin{aligned} \max _{\begin{array}{c} x,y \in B(x_0,n) \\ x \sim y \end{array}} \frac{1}{\omega (x,y)} \quad \text {and} \quad \max _{\begin{array}{c} x,y \in B(x_0, n) \\ x \sim y \end{array}} \omega (x,y). \end{aligned}$$

Our main result shows that these uniform upper and lower bounds can be replaced by \(L^p\) bounds.

Theorem 1.3

(Elliptic Harnack inequality) For any \(x_0 \in V\) and \(n \ge 1\) let \(B(n) = B(x_0,n)\). Suppose that \(u > 0\) is harmonic on \(B(n)\), i.e. \({\mathcal {L}}_Y^\omega \, u = 0\) on \(B(n)\). Then, for any \(p,q \in (1, \infty )\) with

$$\begin{aligned} \frac{1}{p} + \frac{1}{q} < \frac{2}{d} \end{aligned}$$

there exists \(C_{\mathrm {EH}} \equiv C_{\mathrm {EH}}(||\mu ^\omega ||_{p,B(n)}, ||\nu ^\omega ||_{q,B(n)})\) such that

$$\begin{aligned} \max _{x \in B(n/2)} u(x) \le C_{\mathrm {EH}}\, \min _{x \in B(n/2)} u(x). \end{aligned}$$
(1.5)

The constant \(C_{\mathrm {EH}}\) is more explicitly given by

$$\begin{aligned} C_{\mathrm {EH}}(||\mu ^\omega ||_{p,B(n)}, ||\nu ^\omega ||_{q,B(n)}) = c_1\, \exp ( c_2\, ( 1 \vee ||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)})^\kappa ) \end{aligned}$$

for some positive \(c_i = c_i(d,p,q)\) and \(\kappa = \kappa (d,p,q)\).

For brevity, we introduce \(M_{x_0,n}^\omega \mathrel {\mathop :}=1 \vee (||\mu ^\omega ||_{1,B(n)}/||\mu ^\omega ||_{1,B(n/2)})\).

Theorem 1.4

(Parabolic Harnack inequality) For any \(x_0\in V\), \(t_0\ge 0\) and \(n\ge 1\) let \(Q(n) = [t_0,t_0+n^2]\times B(x_0,n)\). Suppose that \(u > 0\) is caloric on \(Q(n)\), i.e. \(\partial _t u - {\mathcal {L}}_Y^{\omega } u = 0\) on \(Q(n)\). Then, for any \(p,q \in (1, \infty )\) with

$$\begin{aligned} \frac{1}{p} + \frac{1}{q} < \frac{2}{d} \end{aligned}$$

there exists \(C_{\mathrm {PH}} = C_{\mathrm {PH}}(M_{x_0,n}^\omega , \Vert \mu ^{\omega }\Vert _{p,B(x_0,n)}, \Vert \nu ^{\omega }\Vert _{q,B(x_0,n)})\) such that

$$\begin{aligned} \max _{(t,x) \in Q_{-}} u(t,x) \le C_{\mathrm {PH}}\, \min _{(t,x) \in Q_+} u(t,x), \end{aligned}$$
(1.6)

where \(Q_- = [t_0 + \frac{1}{4} n^2, t_0 + \frac{1}{2}n^2] \times B(x_0, \frac{1}{2} n)\) and \(Q_+ = [t_0 + \frac{3}{4} n^2, t_0 + n^2] \times B(x_0, \frac{1}{2} n)\). The constant \(C_{\mathrm {PH}}\) is more explicitly given by

$$\begin{aligned}&C_{\mathrm {PH}} ( M^{\omega }_{x_0,n}, ||\mu ^\omega ||_{p,B(x_0,n)}, ||\nu ^\omega ||_{q,B(x_0,n)})\\&\quad = c_1\, \exp ( c_2\, (M^\omega _{x_0,n} ( 1 \vee ||\mu ^\omega ||_{p,B(x_0,n)})\, ( 1 \vee ||\nu ^\omega ||_{q,B(x_0,n)}) )^{\kappa }) \end{aligned}$$

for some positive \(c_i = c_i(d,p,q)\) and \(\kappa = \kappa (d,p,q)\).

In the context of elliptic PDEs on \({\mathbb {R}}^d\) discussed above, the classical results on the EHI and the regularity of harmonic functions have been extended to the situation where the uniform ellipticity assumption on the coefficients [see (1.1)] is weakened to a certain integrability condition very similar to ours, see [22, 26, 29, 33, 34]. In particular, the same \(L^p\)-condition \(1/p+1/q<2/d\) is obtained. This condition also appears in [35], where an upper bound on the solution of a degenerate parabolic equation is derived by using Moser iteration and Davies’ method (cf. [15]).

Remark 1.5

Given a speed measure \(\pi ^\omega : V \rightarrow (0,\infty )\), one can also consider the process \(Z = \{Z_t: t \ge 0\}\) on \(V\) that is defined by a time change of the VSRW \(X\), i.e. \(Z_t \mathrel {\mathop :}=X_{a_t}\) for \(t \ge 0\), where \(a_t \mathrel {\mathop :}=\inf \{s \ge 0 : A_s > t\}\) denotes the right-continuous inverse of the functional

$$\begin{aligned} A_t = \int _0^t \pi ^\omega (X_s) \, \mathrm {d}s, \quad t \ge 0. \end{aligned}$$

Its generator is given by

$$\begin{aligned} ({\mathcal {L}}_{\pi }^{\omega } f)(x) = \pi ^{\omega }(x)^{-1} \sum _{y \in V} \omega (x,y) (f(y)-f(x)). \end{aligned}$$
(1.7)

The perhaps most natural choice for the speed measure is \(\pi ^{\omega } = \mu ^{\omega }\), for which we recover the CSRW \(Y\). Since the harmonic functions w.r.t. \( {\mathcal {L}}_{\pi }^{\omega }\) are the same for any choice of \(\pi \), it is obvious that the EHI in Theorem 1.3 also holds for \({\mathcal {L}}_{\pi }^{\omega }\). On the other hand, for the PHI one can show the following statement along the lines of the proof of Theorem 1.4. Let \(u > 0\) be caloric w.r.t. \({\mathcal {L}}_{\pi }^{\omega }\) on \(Q(n)\); then for any \(p,q,r \in (1, \infty )\) with

$$\begin{aligned} \frac{1}{r} + \frac{1}{p-1} \frac{r-1}{r} + \frac{1}{q} < \frac{2}{d} \end{aligned}$$
(1.8)

there exists \(C_{\mathrm {PH}} = C_{\mathrm {PH}}(M_{x_0,n}^\omega , ||\mu ^{\omega } / (\pi ^{\omega })^{1 - \frac{1}{p}}||_{p,B(n)}, ||\nu ^\omega ||_{q,B(n)}, ||\pi ^\omega ||_{r,B(n)})\) with \(M_{x_0,n}^\omega \mathrel {\mathop :}=1 \vee (||\pi ^{\omega }||_{1,B(n)} / ||\pi ^{\omega }||_{1,B(n/2)})\) such that

$$\begin{aligned} \max _{(t,x) \in Q_{-}} u(t,x) \le C_{\mathrm {PH}}\, \min _{(t,x) \in Q_+} u(t,x). \end{aligned}$$

In particular, for the VSRW \(X\) we have \(\pi ^{\omega } \equiv 1\), and choosing \(r=\infty \) the condition on \(p\) and \(q\) reads \(1/(p-1)+1/q <2/d\) in this case.

Notice that the Harnack constants \(C_{\mathrm {EH}}\) and \(C_{\mathrm {PH}}\) in Theorems 1.3 and 1.4 do depend on the ball under consideration, namely on its center \(x_0\) as well as on its radius \(n\). As a consequence, one cannot directly deduce Hölder continuity estimates from these results. This requires ergodicity w.r.t. translations as an additional assumption (see Propositions 3.8 and 4.8 below).

Assumption 1.6

Assume that

$$\begin{aligned} \bar{\mu }&= \sup _{x_0 \in V} \limsup _{n\rightarrow \infty } ||\mu ^{\omega }||_{p,B(x_0,n)} < \infty , \quad \bar{\nu } = \sup _{x_0 \in V} \limsup _{n\rightarrow \infty } ||\nu ^{\omega }||_{q,B(x_0,n)} < \infty ,\\ \overline{M}&= \sup _{x_0 \in V} \limsup _{n\rightarrow \infty } M^{\omega }_{x_0,n} < \infty . \end{aligned}$$

In particular, Assumption 1.6 implies that the Harnack constants appearing in Theorems 1.3 and 1.4 satisfy \(C^*_{\mathrm {EH}} \mathrel {\mathop :}=2 C_{\mathrm {EH}}(\bar{\mu }, \bar{\nu }) < \infty \) and \(C^*_{\mathrm {PH}} \mathrel {\mathop :}=2 C_{\mathrm {PH}}(\overline{M}, \bar{\mu }, \bar{\nu }) < \infty \), which do not depend on \(x_0\) and \(n\). Furthermore, there exists \(s^\omega (x_0) \ge 1\) such that

$$\begin{aligned} \sup _{n \ge s^\omega (x_0)}&C_{\mathrm {EH}}(||\mu ^{\omega }||_{p,B(x_0,n)}, ||\nu ^{\omega }||_{q,B(x_0,n)} ) \le C^*_{\mathrm {EH}} \;<\; \infty ,\\ \sup _{n \ge s^\omega (x_0)}&C_{\mathrm {PH}}(M^\omega _{x_0,n},||\mu ^{\omega }||_{p,B(x_0,n)}, ||\nu ^{\omega }||_{q,B(x_0,n)} ) \le C^*_{\mathrm {PH}} \;<\; \infty . \end{aligned}$$

In other words, under Assumption 1.6 the results in Theorems 1.3 and 1.4 become effective Harnack inequalities for balls with sufficiently large radius \(n\).

Corollary 1.7

Suppose that Assumption 1.6 holds. Then, for any \(x_0 \in V\) and \(n \ge s^\omega (x_0)\) the elliptic and parabolic Harnack inequalities in Theorems 1.3 and 1.4 hold with the Harnack constants in (1.5) and (1.6) replaced by \(C^*_{\mathrm {EH}}\) and \(C^*_{\mathrm {PH}}\), respectively.

In the applications to the random conductance model discussed below, the conductances will be stationary ergodic random variables, so under suitable moment conditions Assumption 1.6 will be satisfied by the ergodic theorem.

It is well known that a PHI is equivalent to Gaussian lower and upper bounds on the heat kernel in many situations, for instance in the case of uniformly bounded conductances on a locally finite graph, see [20]. Indeed, in a forthcoming paper [4] we will prove upper Gaussian estimates on the heat kernel under Assumption 1.6 following the approach in [35], i.e. we combine Moser’s iteration technique with Davies’ method. We remark that in our non-elliptic setting we cannot deduce off-diagonal Gaussian bounds directly from the PHI due to the dependence of the Harnack constant on the underlying ball. More precisely, in order to get effective Gaussian off-diagonal bounds, one needs to apply the PHI on a number of balls with radius \(n\) having a distance of order \(n^2\). In general, the ergodic theorem does not give the required uniform control on the convergence of space-averages of stationary random variables over such balls (see [1]). However, we still obtain some near-diagonal estimates (see Proposition 4.7 below).

1.3 The method

Since the pioneering works [27, 28], Moser’s iteration technique has been by far the best-established tool for proving Harnack inequalities. Moser’s iteration is based on two main ideas: a Sobolev-type inequality, which allows one to control the \(\ell ^r\) norm, with \(r = r(d) = d/(d-2) > 1\), of a function in terms of its Dirichlet form, and a control on the Dirichlet form of any harmonic (or caloric) function \(u\). In the uniformly elliptic case this is rather standard. In our case, where the conductances are neither bounded from above nor bounded away from zero, we need to work with a weighted Sobolev inequality with a dimension-dependent exponent, established in [3] by using Hölder’s inequality. That is, the exponent \(r(d)\) is replaced by

$$\begin{aligned} r(d,p,q) = \frac{d - d/p}{(d-2) + d/q}. \end{aligned}$$

For the Moser iteration we of course need \(r(d,p,q) > 1\), which is equivalent to the condition \(1/p + 1/q < 2/d\) appearing in the statements of Theorems 1.3 and 1.4. As a result we obtain a maximum inequality for \(u\), that is an estimate for the \(\ell ^{\infty }\)-norm of \(u\) on a ball in terms of the \(\ell ^{\alpha }\)-norm of \(u\) on a slightly bigger ball for some \(\alpha > 0\). Since the same holds for \(u^{-1}\), to conclude the Harnack inequality it remains to link the \(\ell ^{\alpha }\)-norm and the \(\ell ^{-\alpha }\)-norm of \(u\). In the setting of uniformly elliptic conductances (see [19]) this can be done by using the John–Nirenberg inequality, that is the exponential integrability of BMO functions. In this paper we establish this link by an abstract lemma of Bombieri and Giusti [11] (see Lemma 2.5 below). In order to apply this lemma, besides the maximum inequalities mentioned above, we need to establish a weighted Poincaré inequality, which is classically obtained by the Whitney covering technique (see e.g. [6, 20, 32]). However, in this paper we can avoid the Whitney covering by using results from a recent work by Dyda and Kassmann [21].
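
For the reader’s convenience, the equivalence between \(r(d,p,q) > 1\) and the condition \(1/p + 1/q < 2/d\) mentioned above is a one-line computation (added here; it is not displayed in the original text): since the denominator \((d-2) + d/q\) is positive for \(d \ge 2\),

$$\begin{aligned} r(d,p,q)> 1 \;\Longleftrightarrow \; d - \frac{d}{p} > (d-2) + \frac{d}{q} \;\Longleftrightarrow \; \frac{1}{p} + \frac{1}{q} < \frac{2}{d}. \end{aligned}$$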

1.4 Local limit theorem for the random conductance model

Our main motivation for deriving the above Harnack inequalities is the quenched local CLT for the random conductance model. Consider the \(d\)-dimensional Euclidean lattice, \((V_d, E_d)\), for \(d \ge 2\). The vertex set, \(V_d\), of this graph equals \({\mathbb {Z}}^d\) and the edge set, \(E_d\), is given by the set of all non-oriented nearest-neighbour bonds, i.e. \(E_d \mathrel {\mathop :}=\{ \{x,y\}: x,y \in {\mathbb {Z}}^d, |x-y|=1\}\).

Let \((\Omega , {\mathcal {F}}) = ({\mathbb {R}}_+^{E_d}, {\mathcal {B}}({\mathbb {R}}_+)^{\otimes \, E_d})\) be a measurable space. Let the graph \((V_d, E_d)\) be endowed with a configuration \(\omega = \{\omega (e) : e \in E_d\} \in \Omega \) of conductances. We will henceforth denote by \({{\mathrm{\mathbb {P}}}}\) a probability measure on \((\Omega , {\mathcal {F}})\), and we write \({{\mathrm{\mathbb {E}}}}\) to denote the expectation with respect to \({{\mathrm{\mathbb {P}}}}\). A space shift by \(z \in {\mathbb {Z}}^d\) is a map \(\tau _z: \Omega \rightarrow \Omega \),

$$\begin{aligned} (\tau _z \omega )(x, y) \mathrel {\mathop :}=\omega (x+z, y+z), \quad \forall \, \{x,y\} \in E_d. \end{aligned}$$
(1.9)

The set \(\{\tau _x : x \in {\mathbb {Z}}^d\}\) together with the operation \(\tau _{x} \circ \tau _{y} \mathrel {\mathop :}=\tau _{x+y}\) defines the group of space shifts. We will study the nearest-neighbour random conductance model. For any fixed realization \(\omega \) it is a reversible continuous time Markov chain, \(Y = \{Y_t: t \ge 0 \}\), on \({\mathbb {Z}}^d\) with generator \({\mathcal {L}}^{\omega }_Y\) defined as in (1.4). We denote by \(q^{\omega }(t,x,y)\) for \(x, y \in {\mathbb {Z}}^d\) and \(t \ge 0\) the transition density (or heat kernel associated with \({\mathcal {L}}_Y^{\omega }\)) of the CSRW with respect to the reversible measure \(\mu ^{\omega }\), i.e.

$$\begin{aligned} q^{\omega }(t,x,y) \mathrel {\mathop :}=\frac{{{\mathrm{\mathrm {P}}}}_x^\omega [Y_t = y]}{\mu ^{\omega }(y)}. \end{aligned}$$

As a consequence of (1.9) we have \(q^{\tau _z \omega }(t, x, y) = q^{\omega }(t, x+z, y+z)\).

Assumption 1.8

Assume that \({{\mathrm{\mathbb {P}}}}\) satisfies the following conditions:

  1. (i)

    \({{\mathrm{\mathbb {P}}}}[ 0 < \omega (e) < \infty ] = 1\) for all \(e \in E_d\) and \({{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)] < \infty \).

  2. (ii)

    \({{\mathrm{\mathbb {P}}}}\) is ergodic with respect to translations of \({\mathbb {Z}}^d\), i.e. \({{\mathrm{\mathbb {P}}}}\circ \, \tau _x^{-1} = {{\mathrm{\mathbb {P}}}}\,\) for all \(x \in {\mathbb {Z}}^d\) and \({{\mathrm{\mathbb {P}}}}[A] \in \{0,1\}\,\) for any \(A \in {\mathcal {F}}\) such that \(\tau _x(A) = A\,\) for all \(x \in {\mathbb {Z}}^d\).

Assumption 1.9

There exist \(p,q \in (1, \infty ]\) satisfying \(1/p + 1/q < 2/d\) such that

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[\omega (e)^p] < \infty \quad \text {and} \quad {{\mathrm{\mathbb {E}}}}[\omega (e)^{-q}] < \infty \end{aligned}$$
(1.10)

for any \(e \in E_d\).

Notice that under Assumptions 1.8 and 1.9 the spatial ergodic theorem gives that for \({{\mathrm{\mathbb {P}}}}\)-a.e. \(\omega \),

$$\begin{aligned}&\lim _{n\rightarrow \infty } ||\mu ^{\omega }||_{p, B(x_0, n)}^p = {{\mathrm{\mathbb {E}}}}[ \mu ^{\omega }(0)^p] < \infty , \quad \lim _{n\rightarrow \infty } ||\nu ^{\omega }||_{q, B(x_0, n)}^q = {{\mathrm{\mathbb {E}}}}[\nu ^{\omega }(0)^q] < \infty ,\\&\quad \lim _{n \rightarrow \infty } M_{x_0,n}^{\omega } = 1. \end{aligned}$$

In particular, Assumption 1.6 is fulfilled in this case, and thus for \({{\mathrm{\mathbb {P}}}}\)-a.e. \(\omega \) the EHI and the PHI hold for \({\mathcal {L}}_Y^{\omega }\) in the sense of Corollary 1.7, i.e. provided \(n \ge s^{\omega }(x_0)\).
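
As a purely numerical illustration of this point (not part of the paper), the following Python sketch computes the space-averaged norms \(||\mu ^{\omega }||_{p,B(x_0,n)}\) and \(||\nu ^{\omega }||_{q,B(x_0,n)}\) over growing boxes for i.i.d. conductances on \({\mathbb {Z}}^2\); the log-normal conductance law is an arbitrary example satisfying Assumption 1.9, and boxes are used instead of balls for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, q = 2, 4.0, 4.0          # 1/p + 1/q = 0.5 < 2/d = 1 for d = 2
L = 200                        # half-width of the largest box

# i.i.d. conductances on the horizontal and vertical edges of a box in Z^2
w_h = rng.lognormal(size=(2 * L + 1, 2 * L))   # edges {x, x + e_1}
w_v = rng.lognormal(size=(2 * L, 2 * L + 1))   # edges {x, x + e_2}

# mu(x): sum of the incident conductances; nu(x): likewise with 1/omega
mu = np.zeros((2 * L + 1, 2 * L + 1))
nu = np.zeros((2 * L + 1, 2 * L + 1))
mu[:, :-1] += w_h;  mu[:, 1:] += w_h
mu[:-1, :] += w_v;  mu[1:, :] += w_v
nu[:, :-1] += 1 / w_h;  nu[:, 1:] += 1 / w_h
nu[:-1, :] += 1 / w_v;  nu[1:, :] += 1 / w_v

def avg_norm(f, n, r):
    # space-averaged ell^r norm over the box of radius n around the centre
    block = f[L - n:L + n + 1, L - n:L + n + 1]
    return np.mean(np.abs(block) ** r) ** (1 / r)

for n in (10, 50, 100, 200):
    print(n, avg_norm(mu, n, p), avg_norm(nu, n, q))
# As n grows the values stabilise, in line with the spatial ergodic theorem.
```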

We are interested in the \({{\mathrm{\mathbb {P}}}}\)-a.s. or quenched long-range behaviour, in particular in obtaining a quenched functional central limit theorem (QFCLT) or invariance principle for the process \(Y\) starting at \(0\) and a quenched local limit theorem for \(Y\). We first recall that the following QFCLT has recently been obtained as the main result in [3].

Theorem 1.10

(QFCLT) Suppose that \(d \ge 2\) and Assumptions 1.8 and 1.9 hold. Then, \({{\mathrm{\mathbb {P}}}}\)-a.s. \(Y_t^{(n)} \mathrel {\mathop :}=\frac{1}{n} Y_{n^2 t}\) converges (under \({{\mathrm{\mathrm {P}}}}_{0}^\omega )\) in law to a Brownian motion on \({\mathbb {R}}^d\) with a deterministic non-degenerate covariance matrix \(\Sigma _Y^2\).

Proof

See Theorem 1.3 and Remark 1.5 in [3].\(\square \)

We refer to [16] for a quenched invariance principle for diffusions under an analogous moment condition. In this paper our main concern is to establish a local limit theorem for \(Y\). It roughly describes how the transition probabilities of the random walk \(Y\) can be rescaled in order to obtain the Gaussian transition density of the Brownian motion, which appears as the limit process in Theorem 1.10. Write

$$\begin{aligned} k_t(x) \equiv k_t^{\Sigma _Y}(x) \mathrel {\mathop :}=\frac{1}{\sqrt{(2 \pi t)^d \det \Sigma _Y^2} }\, \exp (- x \cdot ( \Sigma ^2_Y)^{-1}x / 2t) \end{aligned}$$
(1.11)

for the Gaussian heat kernel with covariance matrix \(\Sigma ^2_Y\). For \(x \in {\mathbb {R}}^d\) write \(\lfloor x \rfloor = ( \lfloor x_1 \rfloor , \dots , \lfloor x_d \rfloor )\).

Theorem 1.11

(Quenched Local Limit Theorem) Let \(T_2 > T_1>0\) and \(K > 0\) and suppose that \(d \ge 2\) and Assumptions 1.8 and 1.9 hold. Then,

$$\begin{aligned} \lim _{n \,\rightarrow \, \infty } \sup _{|x| \le K}\, \sup _{t \,\in [T_1,T_2]}\, | n^{d}\, q^\omega (n^2 t, 0, \lfloor nx \rfloor ) - a\, k_t(x) | = 0, \quad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$

with \(a \mathrel {\mathop :}=1 / {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)]\).
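
The statement can also be explored numerically. The following sketch (an illustration added here, not the argument of the paper) computes the CSRW heat kernel on a small discrete torus via a matrix exponential and compares the diffusively rescaled kernel with the Gaussian density; the torus, the uniform conductance law, and the empirical estimates of \(\Sigma _Y^2\) and of \(a\) are simplifying assumptions, and for such a small \(n\) the agreement is only approximate.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
N, t, n = 31, 1.0, 5           # torus Z_N x Z_N, macroscopic time t, rescaling n

idx = lambda i, j: (i % N) * N + (j % N)
# i.i.d. conductances on the edges of the torus (uniform in [0.5, 2] as an example)
w_h = rng.uniform(0.5, 2.0, size=(N, N))   # edge {(i, j), (i, j + 1)}
w_v = rng.uniform(0.5, 2.0, size=(N, N))   # edge {(i, j), (i + 1, j)}

G = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        for w, k in ((w_h[i, j], idx(i, j + 1)), (w_v[i, j], idx(i + 1, j))):
            G[idx(i, j), k] += w
            G[k, idx(i, j)] += w
mu = G.sum(axis=1)
gen = G / mu[:, None] - np.eye(N * N)       # CSRW generator, cf. (1.4)

P = expm(n * n * t * gen)                   # transition probabilities at time n^2 t
q = P[idx(0, 0), :] / mu                    # heat kernel q(n^2 t, 0, .) w.r.t. mu

# centred displacements on the torus and the empirical covariance of Y_{n^2 t}
disp = np.array([[(i + N // 2) % N - N // 2, (j + N // 2) % N - N // 2]
                 for i in range(N) for j in range(N)], dtype=float)
prob = P[idx(0, 0), :]
Sigma2 = (disp * prob[:, None]).T @ disp / (n * n * t)   # estimate of Sigma_Y^2

a = 1.0 / mu.mean()                         # spatial average in place of E[mu(0)]
for x in (np.array([0.0, 0.0]), np.array([0.4, 0.2])):
    y = np.floor(n * x).astype(int)
    k_t = np.exp(-x @ np.linalg.solve(Sigma2, x) / (2 * t)) / (
        2 * np.pi * t * np.sqrt(np.linalg.det(Sigma2)))
    print(n ** 2 * q[idx(y[0], y[1])], a * k_t)   # should be of comparable size
```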

Remark 1.12

Similarly to Remark 1.5 it is possible to state a local limit theorem for the process \(Z\), obtained as a time-change of the VSRW \(X\) in terms of a speed measure \(\pi ^{\omega }: {\mathbb {Z}}^d \rightarrow (0,\infty )\) with generator \({\mathcal {L}}_{\pi }^{\omega }\) defined in (1.7). Indeed, if \(\pi ^\omega (x) = \pi ^{\tau _x \omega }(0)\) for every \(x \in {\mathbb {Z}}^d\), if \(0 < {{\mathrm{\mathbb {E}}}}[\pi ^{\omega }(0)] < \infty \) and if Assumptions 1.8 and 1.9 hold, we have a QFCLT also for the process \(Z\) with some non-degenerate covariance matrix \(\Sigma _Z^2 = {{\mathrm{\mathbb {E}}}}[\pi ^\omega (0)]^{-1} \Sigma ^2_X\), where \(\Sigma ^2_X\) denotes the covariance matrix in the QFCLT for the VSRW \(X\) (see Remark 1.5 in [3]). If in addition there exists \(r > 1\) such that \(p\), \(q\) and \(r\) satisfy (1.8) and \( {{\mathrm{\mathbb {E}}}}[\pi ^{\omega }(0)^r]< \infty \), writing \(q^{\omega }_Z(t,x,y)\) for the heat kernel associated with \({\mathcal {L}}_{\pi }^{\omega }\) and \(Z\), we have

$$\begin{aligned} \lim _{n \,\rightarrow \, \infty } \sup _{|x| \le K}\, \sup _{t \,\in [T_1,T_2]}\, | n^{d}\, q_Z^\omega (n^2 t, 0, \lfloor nx \rfloor ) - a\, k_t^{\Sigma _Z}(x) | = 0, \quad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$

with \(a \mathrel {\mathop :}=1/ {{\mathrm{\mathbb {E}}}}[\pi ^{\omega }(0)]\).

Analogous results have been obtained in other settings, for instance for random walks on supercritical percolation clusters, see [8]. In [18], the arguments of [8] have been used to establish a general criterion for a local limit theorem to hold, which is applicable in a number of different situations such as graph trees converging to a continuum random tree or local homogenisation for nested fractals.

For the random conductance model a local limit theorem has been proven in the case where the conductances are uniformly elliptic i.i.d. random variables (see Theorem 5.7 in [8]). Later this has been improved for the VSRW under i.i.d. conductances which are only uniformly bounded away from zero (Theorem 5.14 in [7]). However, if the conductances are i.i.d. but have fat tails at zero, the QFCLT still holds (see [2]), but due to a trapping phenomenon the heat kernel decay is sub-diffusive, so the transition density does not have enough regularity for a local limit theorem (see [9, 10]).

Hence, it is clear that some moment conditions are needed. It turns out that the moment conditions in Assumption 1.9 are optimal in the sense that for any \(p\) and \(q\) satisfying \(1/p + 1/q > 2/d\) there exists an environment of ergodic random conductances with \({{\mathrm{\mathbb {E}}}}[\omega (e)^p] < \infty \) and \({{\mathrm{\mathbb {E}}}}[\omega (e)^{-q}]<\infty \), under which the local limit theorem fails (see Theorem 5.4 below). In the case when the conductances are bounded from above, i.e. \(p = \infty \), the optimality of the condition \(q > d/2\) is already indicated by the examples in [12, 23].

However, in Sect. 6 below we will show that for some examples of conductances our moment condition can be improved if it is replaced by a moment condition on the heat kernel. While for the example in [23], where the conductances are given by \(\omega (x,y) = \theta ^{\omega }(x) \wedge \theta ^{\omega }(y)\) for a family \(\{\theta ^\omega (x): x \in {\mathbb {Z}}^d\}\) of random variables bounded from above, no improvement can be achieved by this method (see Proposition 6.2), we obtain that \(q > \frac{3}{4} \frac{d}{d+1}\) is sufficient if for instance \(\omega (x,y) = \theta ^{\omega }(x) \vee \theta ^{\omega }(y)\) (see Proposition 6.5). Moreover, in the case of i.i.d. conductances uniformly bounded from above this procedure gives the following result (see Proposition 6.3).

Theorem 1.13

Assume that the conductances \(\{\omega (e) : e \in E_d\}\) are independent and uniformly bounded from above, i.e. \(p = \infty \), and \({{\mathrm{\mathbb {E}}}}[\omega (e)^{-q}] < \infty \) for some \(q > 1/4\). Then, the QFCLT in Theorem 1.10 and the quenched local limit theorem in Theorem 1.11 hold for both the CSRW and the VSRW.

Under the assumptions of Theorem 1.13, a PHI and a local limit theorem have recently been proven in [13, Theorem 1.9], on the one hand for the VSRW if again \(q>1/4\) and on the other hand for the CSRW if \(q>d/(8d-4)\), which improves the result in Theorem 1.13. We refer to Remark 1.10 in [13] for a discussion on the optimality of these conditions for both the CSRW and the VSRW.

We shall prove Theorem 1.11 by using the approach in [8, 18]. The two main ingredients of the proof are the QFCLT in Theorem 1.10 and a Hölder-continuity estimate on the heat kernel, which in the end enables us to upgrade the weak convergence given by the QFCLT to the pointwise convergence in Theorem 1.11.

Finally, notice that our result applies to a random conductance model given by

$$\begin{aligned} \omega (x,y) = \exp (\phi (x) + \phi (y)), \quad \{x,y\} \in E_d, \end{aligned}$$

where for \(d \ge 3\), \(\{\phi (x) : x \in {\mathbb {Z}}^d\}\) is the discrete massless Gaussian free field, cf. [14]. In this case the moment condition (1.10) holds for any \(p, q \in (0, \infty )\), of course.

When \(d\ge 3\) we can also establish a local limit theorem for the Green kernel. Indeed, as a direct consequence of the PHI we get some on-diagonal bounds on the heat kernel (see Proposition 4.7 below), from which we obtain the existence of the Green kernel \(g^\omega (x,y)\) defined by

$$\begin{aligned} g^\omega (x,y) = \int _0^\infty q^\omega (t,x,y)\, \mathrm {d}t. \end{aligned}$$

By integration we can deduce the following local central limit theorem for the Green kernel from Theorem 1.11.

Theorem 1.14

Suppose that \(d \ge 3\) and Assumptions 1.8 and 1.9 hold. Then, for any \(x \not = 0\) we have

$$\begin{aligned} \lim _{n \,\rightarrow \, \infty } | n^{d-2}\, g^{\omega }(0, \lfloor nx \rfloor ) - a\, g_{\mathrm {BM}}(x)| = 0, \quad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$

with \(a \mathrel {\mathop :}=1 / {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)]\) and \(g_{\mathrm {BM}}(x) = \int _0^\infty k_t(x)\, \mathrm {d}t\) denoting the Green kernel of a Brownian motion with covariance matrix \(\Sigma ^2_Y\).
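
For orientation, we note (this is a standard Gaussian computation, added here and not part of the statement) that integrating (1.11) in time yields, for \(d \ge 3\),

$$\begin{aligned} g_{\mathrm {BM}}(x) = \int _0^\infty k_t(x)\, \mathrm {d}t = \frac{\Gamma (\tfrac{d}{2}-1)}{2\, \pi ^{d/2}\, \sqrt{\det \Sigma _Y^2}}\; \big ( x \cdot (\Sigma _Y^2)^{-1} x \big )^{-\frac{d-2}{2}}, \end{aligned}$$

so Theorem 1.14 indeed captures the expected decay of order \(|x|^{-(d-2)}\).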

The paper is organised as follows: In Sect. 2 we provide a collection of Sobolev-type inequalities needed to set up the Moser iteration and to apply the Bombieri–Giusti criterion. Then, Sects. 3 and 4 contain the proofs of the EHI and the PHI in Theorems 1.3 and 1.4, respectively. The local limit theorem for the random conductance model is proven in Sect. 5. Some examples of conductances allowing an improvement of the moment conditions are discussed in Sect. 6. Finally, the appendix contains a collection of elementary estimates needed in the proofs.

Throughout the paper we write \(c\) to denote a positive constant which may change on each appearance. Constants denoted \(C_i\) will be the same throughout each argument.

2 Sobolev and Poincaré inequalities

2.1 Setup and preliminaries

For functions \(f:A \rightarrow {\mathbb {R}}\), where either \(A \subseteq V\) or \(A \subseteq E\), the \(\ell ^p\)-norm \(||f||_{\ell ^{p}(A)}\) will be taken with respect to the counting measure. The corresponding scalar products in \(\ell ^2(V)\) and \(\ell ^2(E)\) are denoted by \(\langle \cdot ,\cdot \rangle _{\ell ^{2}(V)}\) and \(\langle \cdot ,\cdot \rangle _{\ell ^{2}(E)}\), respectively. Similarly, the \(\ell ^p\)-norm and scalar product w.r.t. a measure \(\varphi : V\rightarrow {\mathbb {R}}\) will be denoted by \(||f||_{\ell ^{p}(V,\varphi )}\) and \(\langle \cdot ,\cdot \rangle _{\ell ^{2}(V,\varphi )}\), i.e. \(\langle f,g\rangle _{\ell ^{2}(V,\varphi )}=\langle f,g \cdot \varphi \rangle _{\ell ^{2}(V)}\). For any non-empty, finite \(A \subset V\) and \(p \in [1, \infty )\), we introduce space-averaged norms on functions \(f: A \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} ||f||_{p,A,\varphi } \mathrel {\mathop :}=\left( \frac{1}{|A|}\; \sum _{x \in A}\, |f(x)|^p \; \varphi (x)\right) ^{\frac{1}{p}}. \end{aligned}$$

The operators \(\nabla \) and \(\nabla ^*\) are defined by \(\nabla f: E \rightarrow {\mathbb {R}}\) and \(\nabla ^*F: V \rightarrow {\mathbb {R}}\)

$$\begin{aligned} \nabla f(e) \mathrel {\mathop :}=f(e^+) - f(e^-), \quad \text {and} \quad \nabla ^*F (x) \mathrel {\mathop :}=\sum _{e: e^+ =\,x} F(e) - \sum _{e:e^-= x} F(e) \end{aligned}$$

for \(f: V \rightarrow {\mathbb {R}}\) and \(F: E \rightarrow {\mathbb {R}}\), where for each non-oriented edge \(e \in E\) we specify out of its two endpoints one as its initial vertex \(e^-\) and the other one as its terminal vertex \(e^+\). Nothing in what follows depends on this particular choice. Note that \(\nabla ^*\) is the adjoint of \(\nabla \), i.e. for all \(f \in \ell ^2(V)\) and \(F \in \ell ^2(E)\) it holds that \(\langle \nabla f,F\rangle _{\ell ^{2}(E)} = \langle f,\nabla ^* F\rangle _{\ell ^{2}(V)}\). We define the products \(f \cdot F\) and \(F \cdot f\) between a function, \(f\), defined on the vertex set and a function, \(F\), defined on the edge set in the following way

$$\begin{aligned} (f \cdot F)(e) \mathrel {\mathop :}=f(e^-)\, F(e), \quad \text {and} \quad (F \cdot f)(e) \mathrel {\mathop :}=f(e^+)\, F(e). \end{aligned}$$

Then, the discrete analog of the product rule can be written as

$$\begin{aligned} \nabla (f g) = (g \cdot \nabla f) + (\nabla g \cdot f). \end{aligned}$$
(2.1)

In contrast to the continuum setting, an exact discrete chain rule is not available. However, by means of the estimate (7.1), for \(f \ge 0\) the term \(|\nabla f^{\alpha }|\) can be bounded from above by

$$\begin{aligned} \frac{1}{1 \vee |\alpha |}\,|\nabla f^{\alpha }| \le |f^{\alpha -1} \cdot \nabla f| + |\nabla f \cdot f^{\alpha -1}|, \quad \forall \, \alpha \in {\mathbb {R}}. \end{aligned}$$
(2.2)

On the other hand, the estimate (7.4) implies the following lower bound

$$\begin{aligned} 2\,|\nabla f^{\alpha }| \ge |f^{\alpha -1} \cdot \nabla f| + |\nabla f \cdot f^{\alpha -1}| \quad \forall \, \alpha \ge 1. \end{aligned}$$
(2.3)
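
As a quick sanity check (added here, not part of the original text), the product rule (2.1) and the surrogate chain-rule bounds (2.2) and (2.3) can be tested numerically on a single edge with random positive endpoint values; this is only an empirical illustration of the inequalities, not a proof.

```python
import numpy as np

rng = np.random.default_rng(2)

# Product rule (2.1): nabla(fg)(e) = g(e^-) * nabla f(e) + f(e^+) * nabla g(e)
fa, fb, ga, gb = rng.uniform(0.1, 10.0, size=4)   # values of f, g at e^-, e^+
assert np.isclose(fb * gb - fa * ga, ga * (fb - fa) + fb * (gb - ga))

def check(alpha, trials=10_000):
    # empirically test the chain-rule surrogates (2.2) and (2.3)
    worst_upper, worst_lower = 0.0, 0.0
    for _ in range(trials):
        a, b = rng.uniform(0.1, 10.0, size=2)     # f(e^-), f(e^+) with f > 0
        grad, grad_alpha = b - a, b ** alpha - a ** alpha
        rhs = (a ** (alpha - 1) + b ** (alpha - 1)) * abs(grad)
        worst_upper = max(worst_upper, abs(grad_alpha) / max(1.0, abs(alpha)) - rhs)
        if alpha >= 1:
            worst_lower = max(worst_lower, rhs - 2 * abs(grad_alpha))
    return worst_upper, worst_lower

for alpha in (-2.0, 0.5, 1.0, 3.0):
    print(alpha, check(alpha))   # non-positive values: the bounds hold on all samples
```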

The Dirichlet form or energy associated to \({\mathcal {L}}^{\omega }_Y\) is defined by

$$\begin{aligned} {\mathcal {E}}^{\omega }(f,g) \mathrel {\mathop :}=\langle f,-{\mathcal {L}}^{\omega }_Yg\rangle _{\ell ^{2}(V, \mu ^\omega )} = \langle \nabla f,\omega \nabla g\rangle _{\ell ^{2}(E)}, \quad {\mathcal {E}}^{\omega }(f) \equiv {\mathcal {E}}^{\omega }(f,f). \end{aligned}$$
(2.4)

For a given function \(\eta : B \subset V \rightarrow {\mathbb {R}}\), we denote by \({\mathcal {E}}_{\eta ^2}^{\omega }(u)\) the Dirichlet form where \(\omega (e)\) is replaced by \(\frac{1}{2}(\eta ^2(e^+) + \eta ^2(e^-))\, \omega (e)\) for \(e \in E\).
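
Since the summation-by-parts identity behind (2.4), \(\langle f, -{\mathcal {L}}^{\omega }_Y g\rangle _{\ell ^{2}(V,\mu ^{\omega })} = \langle \nabla f, \omega \nabla g\rangle _{\ell ^{2}(E)}\), is used repeatedly in Sects. 3 and 4, here is a minimal numerical verification on a small weighted cycle (an illustration only; the graph and the conductances are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)

# A weighted 5-cycle: vertices 0..4, edge e_i = {i, i+1 mod 5} with e^- = i, e^+ = i+1
V = 5
edges = [(i, (i + 1) % V) for i in range(V)]
omega = rng.uniform(0.5, 2.0, size=V)

mu = np.zeros(V)
for (x, y), w in zip(edges, omega):
    mu[x] += w
    mu[y] += w

def L_Y(f):
    # (L_Y f)(x) = mu(x)^{-1} sum_y omega(x, y) (f(y) - f(x)), cf. (1.4)
    out = np.zeros(V)
    for (x, y), w in zip(edges, omega):
        out[x] += w * (f[y] - f[x])
        out[y] += w * (f[x] - f[y])
    return out / mu

def grad(f):
    # (nabla f)(e) = f(e^+) - f(e^-)
    return np.array([f[y] - f[x] for x, y in edges])

f, g = rng.normal(size=V), rng.normal(size=V)
lhs = np.dot(f * mu, -L_Y(g))            # <f, -L_Y g> in l^2(V, mu)
rhs = np.dot(grad(f), omega * grad(g))   # <nabla f, omega nabla g> in l^2(E)
print(lhs, rhs)                          # the two numbers agree up to rounding
```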

2.2 Local Poincaré and Sobolev inequality

The main objective in this subsection is to establish a version of a local Poincaré inequality.

Suppose that the graph \((V, E)\) satisfies condition (ii) in Assumption 1.1. Then, by means of a discrete version of the co-area formula, the classical local \(\ell ^1\)-Poincaré inequality on \(V\) can be easily established, see e.g. [31, Lemma 3.3], which also implies an \(\ell ^{\alpha }\)-Poincaré inequality for any \(\alpha \in [1, d)\). Moreover, condition (i) in Assumption 1.1 ensures that balls in \(V\) have regular volume growth. Note that, due to [17, Théorème 4.1], the volume regularity of the balls \(B(x_0, n)\) and the local \(\ell ^{\alpha }\)-Poincaré inequality on \(V\) imply that for \(d \ge 2\) and any \(u: V \rightarrow {\mathbb {R}}\)

$$\begin{aligned} \inf _{a \in {\mathbb {R}}} ||u - a||_{\frac{d \alpha }{d-\alpha }, B(x_0, n)} \le C_1\, n\, \Bigg ( \frac{1}{|B(x_0, n)|}\, \sum _{\begin{array}{c} x, y \in B(x_0, n) \\ x \sim y \end{array}} | u(x) - u(y) |^\alpha \Bigg )^{\frac{1}{\alpha }}.\qquad \end{aligned}$$
(2.5)

This inequality is the starting point to prove a local \(\ell ^2\)-Poincaré inequality on the weighted graph \((V, E, \omega )\).

Proposition 2.1

(Local Poincaré inequality) Suppose \(d \ge 2\) and for any \(x_0 \in V\) and \(n \ge 1\), let \(B(n) \equiv B(x_0,n)\). Then, there exists \(C_{\mathrm {PI}} \equiv C_{\mathrm {PI}}(d) < \infty \) such that for any \(u: V \rightarrow {\mathbb {R}}\)

$$\begin{aligned} ||u - (u)_{B(n)}||_{2, B(n)}^2 \le C_{\mathrm {PI}}\, ||\nu ^{\omega }||_{\frac{d}{2}, B(n)}\; \frac{n^2}{|B(n)|} \sum _{\begin{array}{c} x, y \in B(n) \\ x \sim y \end{array}} \omega (x, y)\, ( u(x) - u(y))^2,\nonumber \\ \end{aligned}$$
(2.6)

and for any \(p, q \in (1, \infty ]\) such that \(1/p + 1/q = 2/d\),

$$\begin{aligned}&||u - (u)_{B(n), \mu ^{\omega }}||_{2, B(n), \mu ^{\omega }}^2 \nonumber \\&\quad \le \; C_{\mathrm {PI}}\, ||\mu ^{\omega }||_{p, B(n)}\, ||\nu ^{\omega }||_{q, B(n)}\; \frac{n^2}{|B(n)|} \sum _{\begin{array}{c} x, y \in B(n) \\ x \sim y \end{array}} \omega (x, y)\, ( u(x) - u(y))^2,\qquad \end{aligned}$$
(2.7)

where \((u)_{B(n)} \equiv (u)_{B(n),1}\) and \((u)_{B(n), \pi } \mathrel {\mathop :}=\sum _{x \in B(n)} \pi (x) u(x) / \sum _{x \in B(n)} \pi (x)\) for any non-negative weight \(\pi \) on \(V\).

Remark 2.2

We prove here a strong version of the local Poincaré inequality in the sense that the balls on both sides of the inequality have the same radius, in contrast to the weak local Poincaré inequalities used for instance in [6].

Proof

For any \(\alpha \in [1, 2)\), Hölder’s inequality yields

$$\begin{aligned}&\Bigg ( \frac{1}{|B(n)|} \sum _{\begin{array}{c} x, y \in B(n)\\ x \sim y \end{array}} |u(x) - u(y)|^{\alpha }\Bigg )^{\frac{1}{\alpha }} \nonumber \\&\quad \le ||\nu ^{\omega }||_{\frac{\alpha }{2 - \alpha }, B(n)}^{1 / 2}\; \Bigg ( \frac{1}{|B(n)|} \sum _{\begin{array}{c} x, y \in B(n)\\ x \sim y \end{array}} \omega (x, y)\, ( u(x) - u(y) )^2 \Bigg )^{\frac{1}{2}}. \end{aligned}$$
(2.8)

Hence, since \(\inf _{a \in {\mathbb {R}}} \Vert u - a\Vert _{2, B(n)} = \Vert u - (u)_{B(n)}\Vert _{2, B(n)}\), the assertion (2.6) follows immediately from (2.5) by choosing \(\alpha = 2d / (d+2)\).

Next, for any \(p, q \in (1, \infty ]\) with \(1/p + 1/q = 2/d\) set \(\alpha = 2 p d / (p d - d + 2p)\). Then, \(\alpha \in [1, 2)\) and \(\alpha / (2 - \alpha ) = q\). Moreover, an application of Hölder’s inequality yields

$$\begin{aligned} ||u - (u)_{B(n), \mu ^{\omega }}||_{2, B(n), \mu ^{\omega }}^2&= \inf _{a \in {\mathbb {R}}} ||u - a||_{2, B(n), \mu ^{\omega }}^2\\&\le ||\mu ^{\omega }||_{p, B(n)}\; \inf _{a \in {\mathbb {R}}} ||u - a||_{2 p_*, B(n)}^2, \end{aligned}$$

where \(p_* = p / (p-1)\). Thus, since \(2 p_* = d \alpha / (d - \alpha )\), (2.7) follows by combining the last estimate with (2.5) and (2.8).\(\square \)

Later we will also need a weighted version of the local Poincaré inequality.

Proposition 2.3

(Local Poincaré inequality with radial cutoff) Suppose \(d \ge 2\) and for \(x_0 \in V\) and \(n \ge 1\) let \(B(n) \equiv B(x_0, n)\). Further, let \(\eta \) be a non-negative, radially decreasing weight with \({{\mathrm{\mathrm {supp }}}}\eta \subset B(n)\), i.e. \(\eta (x) = \Phi (d(x_0, x))\) for some non-increasing and non-negative càdlàg function \(\Phi \). Then, for any \(u: V \rightarrow {\mathbb {R}}\)

$$\begin{aligned} ||u - (u)_{B(n), \eta ^2}||_{2, B(n), \eta ^2}^2 \le C_{\mathrm {PI}}\, M_1\, n^2\, ||\nu ^{\omega }||_{\frac{d}{2}, B(n)}\, \frac{{\mathcal {E}}^{\omega ,\eta }(u)}{|B(n)|}, \end{aligned}$$

where \(M_1 = 8^2\, \frac{|B(n)|}{|B(n/2)|}\, \frac{\Phi (0)}{\Phi (1/2)}\), and for any \(p, q \in (1, \infty ]\) such that \(1/p + 1/q = 2/d\),

$$\begin{aligned} ||u - (u)_{B(n), \eta ^2 \mu ^{\omega }}||_{2, B(n), \eta ^2 \mu ^{\omega }}^2 \le C_{\mathrm {PI}}\, M_2\, n^2\, ||\mu ^{\omega }||_{p, B(n)}||\nu ^{\omega }||_{q, B(n)}\, \frac{{\mathcal {E}}^{\omega ,\eta }(u)}{|B(n)|}, \end{aligned}$$

where \(M_2 = M_1 ||\mu ^{\omega }||_{1,B(n)}/||\mu ^{\omega }||_{1, B(n/2)}\) and

$$\begin{aligned} {\mathcal {E}}^{\omega ,\eta }(u) \mathrel {\mathop :}=\sum _{e \in E} \min \{\eta ^2(e^+), \eta ^2(e^-)\}\, \omega (e)\, (\nabla u(e))^2. \end{aligned}$$

Proof

Given the local Poincaré inequality in Proposition 2.1, this follows from the same arguments as in Theorem 1, Proposition 4 and Corollary 5 in [21].\(\square \)

From the relative isoperimetric inequality in Assumption 1.1 one can deduce that the following Sobolev inequality holds for any function \(u\) with finite support

$$\begin{aligned} ||u||_{\ell ^{\frac{d}{d-1}}(V)} \le C_{S_1}\, ||\nabla u||_{\ell ^{1}(E)} \end{aligned}$$
(2.9)

(cf. Remark 3.3 in [3]). Note that (2.9) is a Sobolev inequality on an unweighted graph, while for our purposes we need a version involving the conductances as weights. Let

$$\begin{aligned} \rho = \rho (q,d) \mathrel {\mathop :}=\frac{q d}{q(d-2) + d}. \end{aligned}$$
(2.10)

Notice that \(\rho (q,d)\) is monotone increasing in \(q\) and \(\rho (d/2,d) = 1\). In [3] we derived the following weighted Sobolev inequality by using (2.9) and the Hölder inequality.

Proposition 2.4

(Sobolev inequality) Suppose that the graph \((V, E)\) has the isoperimetric dimension \(d \ge 2\) and let \(B \subset V\) be finite and connected. Consider a non-negative function \(\eta \) with

$$\begin{aligned} {{\mathrm{\mathrm {supp }}}}\eta \;\subset \; B, \quad 0 \le \eta \le 1 \quad \text {and} \quad \eta \equiv 0 \quad \text {on} \quad \partial B. \end{aligned}$$

Then, for any \(q \in [1, \infty ]\), there exists \(C_{\mathrm {\,S}} \equiv C_{\mathrm {\,S}}(d,q) < \infty \) such that for any \(u:V \rightarrow {\mathbb {R}}\),

$$\begin{aligned} ||(\eta \, u)^2||_{\rho , B} \le C_{\mathrm {\,S}}\, |B|^{\frac{2}{d}}\, ||\nu ^{\omega }||_{q,B}\; \left( \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u)}{|B|} + ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\, ||u^2||_{1, B, \mu ^{\omega }}\right) ,\qquad \end{aligned}$$
(2.11)

where \({\mathcal {E}}_{\eta ^2}^{\omega }(u) \mathrel {\mathop :}=\frac{1}{2} \sum _{x,y} \eta ^2(x)\, \omega (x,y) \, (u(y)-u(x))^2\).

Proof

See [3, Proposition 3.5].\(\square \)

2.3 The lemma of Bombieri and Giusti

In order to obtain the Harnack inequalities we will basically follow the approach in [19]. However, due to the lack of a suitable control on the BMO norm, and thus of a John–Nirenberg lemma, we will use the following abstract lemma by Bombieri and Giusti (cf. [11]):

Lemma 2.5

Let \(\{U_\upsigma : \upsigma \in (0,1] \}\) be a collection of subsets of a fixed measure space endowed with a measure \(m\), such that \(U_{\upsigma '}\subset U_\upsigma \) if \(\upsigma '<\upsigma \). Fix \(0<\delta <1\), \(0<\alpha ^*\le \infty \) and let \(f\) be a positive function on \(U:=U_1\). Suppose that

  1. (i)

    there exists \(\gamma > 0\) such that for all \(\delta \le \upsigma ' < \upsigma \le 1\) and \(0 < \alpha \le \min \{1, \alpha ^*/2 \}\),

    $$\begin{aligned} \left( \int _{U_{\upsigma '}} f^{\alpha ^*} \, \mathrm {d}m \right) ^{\frac{1}{\alpha ^*}} \le ( C_{\mathrm {BG1}}\, (\upsigma - \upsigma {}')^{-\gamma }\, m[U]^{-1})^{\frac{1}{\alpha } - \frac{1}{\alpha ^*}}\; \left( \int _{U_{\upsigma }} f^{\alpha } \, \mathrm {d}m \right) ^{\frac{1}{\alpha }}, \end{aligned}$$
  2. (ii)

    for all \({\uplambda }> 0\),

    $$\begin{aligned} m[ \ln f > {\uplambda }] \le C_{\mathrm {BG2}}\, m[U] \, {\uplambda }^{-1}, \end{aligned}$$

with some positive constants \(C_{\mathrm {BG1}}\) and \(C_{\mathrm {BG2}}\). Then, there exists \(A = A(C_{\mathrm {BG1}}, C_{\mathrm {BG2}}, \alpha ^*, \delta , \gamma )\) such that

$$\begin{aligned} \int _{U_{\delta }} f^{\alpha ^*}\, \mathrm {d}m \le A\, m[U], \quad \text {if } \quad \alpha ^* < \infty , \quad \text {and} \quad \sup _{x\in U_\delta } f(x) \le A, \quad \text {if } \quad \alpha ^*= \infty . \end{aligned}$$

The constant \(A\) is more explicitly given by

$$\begin{aligned} A = \exp ( c_1 (c_2 + 2\, (C_{\mathrm {BG1}} \vee C_{\mathrm {BG2}})^3)) \end{aligned}$$

for some positive \(c_1 \equiv c_1(\delta , \gamma )\) and \(c_2 \equiv c_2(\alpha ^*)\).

Proof

See Lemma 2.2.6 in [32].\(\square \)

3 Elliptic Harnack inequality

In this section we prove Theorem 1.3.

3.1 Mean value inequalities

Lemma 3.1

Consider a connected, finite subset \(B \subset V\) and a function \(\eta \) on \(V\) with

$$\begin{aligned} {{\mathrm{\mathrm {supp }}}}\eta \;\subset \; B, \qquad 0 \le \eta \le 1 \qquad \text {and} \qquad \eta \equiv 0 \quad \text {on} \quad \partial B. \end{aligned}$$

Further, let \(u > 0\) be such that \({\mathcal {L}}_Y^\omega \, u \ge 0\) on \(B\). Then, there exists \(C_2 < \infty \) such that for all \(\alpha \ge 1\)

$$\begin{aligned} \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u^{\alpha })}{|B|} \le C_2\, \tfrac{\alpha ^4}{(2\alpha -1)^2}\; ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||u^{2 \alpha }||_{1,B, \mu ^{\omega }}. \end{aligned}$$
(3.1)

Proof

Since \({\mathcal {L}}_Y^\omega \, u \ge 0\), a summation by parts yields,

$$\begin{aligned} 0 \;\ge \; \langle \eta ^2 u^{2\alpha -1},-{\mathcal {L}}_Y^\omega \, u\rangle _{\ell ^{2}(V,\mu ^{\omega })} = \langle \nabla (\eta ^2 u^{2\alpha -1}),\omega \, \nabla u\rangle _{\ell ^{2}(E)}. \end{aligned}$$
(3.2)

As an immediate consequence of the product rule (2.1) and (7.2), we obtain

$$\begin{aligned}&\langle \nabla (\eta ^2 u^{2\alpha -1}),\omega \, \nabla u\rangle _{\ell ^{2}(E)} \nonumber \\&\quad \ge \; \frac{2 \alpha - 1}{\alpha ^2}\; {\mathcal {E}}_{\eta ^2}^{\omega }(u^\alpha ) \,-\, \frac{1}{2}\, \langle |\nabla \eta ^2|,\omega \, (u^{2 \alpha -1} \cdot |\nabla u| + |\nabla u| \cdot u^{2\alpha -1})\rangle _{\ell ^{2}(E)}.\qquad \quad \end{aligned}$$
(3.3)

Using (7.5) we find that

$$\begin{aligned}&\langle |\nabla \eta ^2|,\omega \, (u^{2 \alpha -1} \cdot |\nabla u| + |\nabla u| \cdot u^{2\alpha -1})\rangle _{\ell ^{2}(E)}\\&\quad \le \; 4\, \langle (\eta \cdot \sqrt{\omega } + \sqrt{\omega } \cdot \eta )\, |\nabla u^{\alpha }|,\sqrt{\omega }\,(u^{\alpha } \cdot |\nabla \eta | + |\nabla \eta | \cdot u^{\alpha })\rangle _{\ell ^{2}(E)} \end{aligned}$$

Thus, by applying the Young inequality \(|a b| \le \frac{1}{2}(\varepsilon a^2 + b^2 / \varepsilon )\) and combining the result with (3.3), we obtain

$$\begin{aligned}&\langle \nabla (\eta ^2 u^{2\alpha -1}),\omega \, \nabla u\rangle _{\ell ^{2}(E)}\nonumber \\&\quad \ge \; \bigg (\frac{2 \alpha - 1}{\alpha ^2} - 4 \varepsilon \; \bigg )\; {\mathcal {E}}_{\eta ^2}^{\omega }(u^{\alpha }) \,-\, \frac{4}{\varepsilon }\; |B|\; ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||u^{2 \alpha }||_{1,B, \mu ^{\omega }}. \end{aligned}$$
(3.4)

By choosing \(\varepsilon = \frac{2 \alpha - 1}{8 \alpha ^2}\) and combining the result with (3.2), the claim follows.\(\square \)

Proposition 3.2

For any \(x_0 \in V\) and \(n \ge 1\), let \(B(n) \equiv B(x_0,n)\). Let \(u > 0\) be such that \({\mathcal {L}}_Y^\omega \, u \ge 0\) on \(B(n)\). Then, for any \(p,q \in (1, \infty ]\) with

$$\begin{aligned} \frac{1}{p} + \frac{1}{q} \;<\; \frac{2}{d} \end{aligned}$$

there exist \(\kappa =\kappa (d,p,q) \in (1, \infty )\) and \(C_3 = C_3(d,q)\) such that for all \(\beta \in [2 p_*,\infty ]\) and for all \(1/2 \le \upsigma ' < \upsigma \le 1\) we have

$$\begin{aligned} ||u||_{\beta ,B(\upsigma ' n)} \le C_3\, \left( \frac{1 \vee ||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}}{ (\upsigma - \upsigma ')^{2}} \right) ^{\kappa }\; ||u||_{2 p_*,B(\upsigma n)}, \end{aligned}$$
(3.5)

where \(p_* = p / (p-1)\).

Proof

For fixed \(1/2 \le \upsigma ' < \upsigma \le 1\), consider a sequence \(\{B(\upsigma _k n)\}_k\) of balls with radius \(\upsigma _k n\) centered at \(x_0\), where

$$\begin{aligned} {\upsigma }_{k} = \upsigma {}' + 2^{-k} (\upsigma - \upsigma {}') \qquad \text {and} \qquad \tau _{k} = 2^{-k-1} (\upsigma - \upsigma {}'), \quad k = 0, 1, \ldots \end{aligned}$$

Note that \(\upsigma _k = \upsigma _{k+1} + \tau _k\) and \(\upsigma _0 = \upsigma \). Further, we consider a cut-off function \(\eta _k\) with \({{\mathrm{\mathrm {supp }}}}\eta _k \subset B(\upsigma _{k} n)\) having the property that \(\eta _k \equiv 1\) on \(B(\upsigma _{k+1} n)\), \(\eta _k \equiv 0\) on \(\partial B(\upsigma _{k} n)\) and \(\eta _k\) decays linearly on \(B(\upsigma _{k} n) {\setminus } B(\upsigma _{k+1} n)\). This choice of \(\eta _k\) implies that \(|\nabla \eta _k(e)| \le 1/(\tau _{k} n)\) for all \(e \in E\).

Set \(\alpha _k = (\rho / p_*)^k\). Since \(1/p + 1/q < 2/d\) we have \(\rho > p_*\). Hence, we have that \(\alpha _k \ge 1\) for every \(k\). Thus, by applying the Sobolev inequality (2.11) to \(u^{\alpha _k}\), we get

$$\begin{aligned}&||(\eta _k\, u^{\alpha _k})^2||_{\rho , B(\upsigma _{k} n)}\nonumber \\&\quad \le C_{\mathrm {S}}\, |B(\upsigma _k n)|^{\frac{2}{d}}\, ||\nu ^{\omega }||_{q,B(\upsigma _k n)}\, \left( \frac{{\mathcal {E}}_{\eta _k^2}^{\omega }(u^{\alpha _k})}{|B(\upsigma _{k} n)|} \,+\, \frac{||\mu ^{\omega }||_{p,B(\upsigma _k n)}}{(\tau _k n)^2}\, ||u^{2 \alpha _k}||_{p_*,B(\upsigma _{k} n)} \right) . \end{aligned}$$

On the other hand, by Lemma 3.1 and using the Hölder inequality with \(1/p + 1/p_* = 1\) we have

$$\begin{aligned} \frac{{\mathcal {E}}_{\eta _k^2}^{\omega }(u^{\alpha _k})}{|B(\upsigma _k n)|} \le C_2\, \left( \frac{\alpha _k}{\tau _k n}\right) ^2\; ||\mu ^{\omega }||_{p,B(\upsigma _k n)}\; ||u^{2 \alpha _k}||_{p_*,B(\upsigma _{k} n)}. \end{aligned}$$

Since \(\alpha _{k+1} p_* = \alpha _k \rho \), combining these two estimates and using the volume regularity (which implies that \(|B(n)| / |B(\upsigma 'n)| \le C_\mathrm{{reg}}^2 2^d\)), we obtain

$$\begin{aligned} ||u||_{2 \alpha _{k+1} p_*, B(\upsigma _{k+1} n)}&\le \left( c\; \frac{2^{2k}\, \alpha _k^2}{(\upsigma -\upsigma ')^{2}}\; ||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}\, \right) ^{\frac{1}{2\alpha _k}}\; ||u||_{2 \alpha _k p_*,B(\upsigma _{k} n)}\nonumber \\ \end{aligned}$$
(3.6)

for some \(c < \infty \). First, for \(\beta \in [2p_*,\infty )\) choose \(K < \infty \) such that \(\beta \le 2 \alpha _{K} p_*\). Then, Jensen’s inequality implies \(\Vert u\Vert _{\beta , B(\upsigma _K n)} \le \Vert u\Vert _{2 \alpha _K p_*,B(\upsigma _K n)}\). By iterating (3.6) and using the fact that \(\sum _{k=0}^{\infty }k/\alpha _k < \infty \), there exists \(C_3 < \infty \) independent of \(K\) such that

$$\begin{aligned} ||u||_{\beta , B(\upsigma ' n)}&\le C_3\, \prod _{k=0}^{K-1} \left( \frac{||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}}{(\upsigma -\upsigma ')^{2}} \right) ^{\frac{1}{2\alpha _k}}\; ||u||_{2 p_*,B(\upsigma n)}. \end{aligned}$$

Thus, the claim is immediate with \(\kappa = \frac{1}{2} \sum _{k=0}^\infty (1/\alpha _k) < \infty \). Now, let us consider the case \(\beta = \infty \). Note that

$$\begin{aligned} \max _{x \in B(\upsigma ' n)} u(x) \le |B(\upsigma {}' n)|^{\frac{1}{2 \alpha _n p_*}}\, ||u||_{2 \alpha _n p_*, B(\upsigma ' n)} \le c\, ||u||_{2 \alpha _n p_*, B(\upsigma _n n)}. \end{aligned}$$

By iterating the inequality (3.6), there exists \(C_3 < \infty \) independent of \(n\) such that

$$\begin{aligned} \max _{x \in B(\upsigma ' n)} u(x) \le C_3\, \prod _{k = 0}^n \left( \frac{||\mu ^{\omega }||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}}{(\upsigma -\upsigma ')^2} \right) ^{\frac{1}{2\alpha _k}}\, ||u||_{2 p_*, B(\upsigma n)}. \end{aligned}$$

Finally, choosing \(\kappa \) as before, the claim is immediate.\(\square \)
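
Purely as an illustration of the iteration just carried out (not needed for the proof), the exponents can be evaluated numerically; the sketch below computes \(p_*\), \(\rho \), the ratio \(\rho /p_*\) and \(\kappa = \tfrac{1}{2}\sum _k 1/\alpha _k\) for one admissible choice of \(d\), \(p\), \(q\).

```python
# Sample parameters with 1/p + 1/q < 2/d (here d = 3 and p = q = 4, so 0.5 < 2/3)
d, p, q = 3, 4.0, 4.0
p_star = p / (p - 1)
rho = q * d / (q * (d - 2) + d)       # Sobolev exponent (2.10)
ratio = rho / p_star                  # > 1 precisely when 1/p + 1/q < 2/d

alphas = [ratio ** k for k in range(25)]
kappa = 0.5 * sum(1 / a for a in alphas)      # truncated geometric series
print(p_star, rho, ratio, kappa, 0.5 / (1 - 1 / ratio))
# The last two numbers agree up to truncation: kappa = 1 / (2 (1 - p_star / rho)).
```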

Corollary 3.3

Let \(u > 0\) be such that \({\mathcal {L}}_Y^{\omega }\, u = 0\) on \(B(n)\). Then, for all \(\alpha \in (0, \infty )\) and \(1/2 \le \upsigma ' < \upsigma \le 1\) there exists \(C_4 < \infty \) such that

$$\begin{aligned} \max _{x \in B(\upsigma ' n)} u(x)^{-1} \le \left( C_4\, \frac{1 \vee ||\mu ^{\omega }||_{p,B(n)}\, ||\nu ^{\omega }||_{q,B(n)}}{(\upsigma - \upsigma ')^{2}} \right) ^{\frac{2 p_*\kappa }{\alpha }}\; ||u^{-1}||_{\alpha , B(\upsigma n)}. \end{aligned}$$
(3.7)

Proof

Since \(u > 0\) is harmonic on \(B(n)\), the function \(f = u^{-\alpha / (2p_*)}\) is sub-harmonic on \(B(n)\), i.e. \({\mathcal {L}}_Y^{\omega }\, f \ge 0\). Thus, (3.7) follows by applying (3.5) with \(\beta = \infty \) to \(f\).\(\square \)

Corollary 3.4

Consider a non-negative \(u\) such that \({\mathcal {L}}_Y^{\omega }\, u \ge 0\) on \(B(n)\). Then, for all \(\alpha \in (0, \infty )\) and \(1/2 \le \upsigma ' < \upsigma \le 1\), there exists \(C_5(\alpha ) < \infty \) such that

$$\begin{aligned} \max _{x \in B(\upsigma ' n)} u(x) \le \left( C_5(\alpha )\, \frac{1 \vee ||\mu ^{\omega }||_{p,B(n)}\, ||\nu ^{\omega }||_{q,B(n)}}{(\upsigma - \upsigma ')^{2}} \right) ^{\kappa '}\; ||u||_{\alpha , B(\upsigma n)} \end{aligned}$$
(3.8)

with \(\kappa '=\left( 1\vee \frac{2p_*}{\alpha }\right) \kappa >1\).

Proof

The proof of this Corollary, based on arguments given originally in [32, Theorem 2.2.3], can be found in [3, Corollary 3.9].\(\square \)

The proof of the EHI, or more precisely the verification of condition (i) in the lemma of Bombieri and Giusti, requires a certain bound on the averaged \(\ell ^\alpha \)-norm of a harmonic function \(u\) for arbitrarily small \(\alpha > 0\). However, the constant \(C_5(\alpha )\) diverges as \(\alpha \) tends to zero, so that Corollary 3.4 cannot directly be used to verify condition (i) in Lemma 2.5. Therefore, we proceed by establishing a bound for super-harmonic functions, for which another Moser iteration is needed.

Lemma 3.5

Consider a connected, finite subset \(B \subset V\) and a function \(\eta \) on \(V\) with

$$\begin{aligned} {{\mathrm{\mathrm {supp }}}}\eta \;\subset \; B, \qquad 0 \le \eta \le 1 \qquad \text {and} \qquad \eta \equiv 0 \quad \text {on} \quad \partial B. \end{aligned}$$

Further, let \(u > 0\) be such that \({\mathcal {L}}_Y^\omega \, u \le 0\) on \(B\). Then, for any \(\alpha < \frac{1}{2}\),

$$\begin{aligned} \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u^{\alpha })}{|B|} \le \frac{32 \alpha ^2}{(1-2\alpha )^2} \, {{\mathrm{\mathrm {osr}}}}( \eta ^2 ) \, ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||u^{2 \alpha }||_{1,B, \mu ^{\omega }}, \end{aligned}$$
(3.9)

where \({{\mathrm{\mathrm {osr}}}}(\eta ) \mathrel {\mathop :}=\max \{\eta (y) / \eta (x) \wedge 1 \mid \{x,y\} \in E,\; \eta (x) \ne 0\}\).

Proof

Since \({\mathcal {L}}_Y^\omega \, u \le 0\), a summation by parts yields,

$$\begin{aligned} 0 \le \langle \eta ^2 u^{2\alpha -1},-{\mathcal {L}}_Y^\omega \, u\rangle _{\ell ^{2}(V,\mu ^{\omega })} = \langle \nabla (\eta ^2 u^{2\alpha -1}),\omega \, \nabla u\rangle _{\ell ^{2}(E)}. \end{aligned}$$
(3.10)

Further, using (7.7) (with \(\beta = 1-2\alpha \)) and (7.6) we obtain for any \(\{x,y\}\in E\) that

$$\begin{aligned}&\nabla (\eta ^2 u^{2\alpha -1})(x,y)\, (\nabla u)(x,y)\le \; \frac{1}{2}\, \min \{\eta ^2(x), \eta ^2(y)\}\, (\nabla u^{2\alpha -1})(x,y)\, (\nabla u)(x,y)\\&\quad \quad +\, \frac{8}{1-2\alpha }\, {{\mathrm{\mathrm {osr}}}}(\eta ^2)\, (\nabla \eta )^2(x,y) \, ( u^{2\alpha }(x)+u^{2\alpha }(y) )\\&\quad \le \; \frac{2\alpha -1}{2\alpha ^2}\, (\nabla u^{\alpha })^2(x,y) \,+\, \frac{8}{1-2\alpha }\, {{\mathrm{\mathrm {osr}}}}(\eta ^2)\, (\nabla \eta )^2(x,y)\, ( u^{2\alpha }(x)+u^{2\alpha }(y) ), \end{aligned}$$

and therefore

$$\begin{aligned}&\langle \nabla (\eta ^2 u^{2\alpha -1}),\omega \, \nabla u\rangle _{\ell ^{2}(E)}\nonumber \\&\le \; \frac{2\alpha -1}{2\alpha ^2} \; {\mathcal {E}}_{\eta ^2}^{\omega }(u^\alpha ) \,+\, \frac{16 }{1-2\alpha } \,|B| \, {{\mathrm{\mathrm {osr}}}}( \eta ^2 ) \, ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||u^{2 \alpha }||_{1,B, \mu ^{\omega }}, \end{aligned}$$
(3.11)

which together with (3.10) gives the claim.\(\square \)

Proposition 3.6

For any \(x_0 \in V\) and \(n \ge 1\), let \(B(n) \equiv B(x_0,n)\). Let \(u > 0\) be such that \({\mathcal {L}}_Y^\omega \, u \le 0\) on \(B(n)\). Then, for any \(p, q \in (1, \infty ]\) with

$$\begin{aligned} \frac{1}{p} + \frac{1}{q} \;<\; \frac{2}{d} \end{aligned}$$

and any \(\beta \in (0, p_*/2)\) there exists \(C_6 = C_6(d, p, q)\) and \(\kappa = \kappa (d, p, q)\) such that for all \(\alpha \in (0, \beta / 2)\) and for all \(1/2 \le \upsigma ' < \upsigma \le 1\) we have

$$\begin{aligned} ||u||_{\beta ,B(\upsigma ' n)} \le \left( C_6 \, \frac{1 \vee ||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q, B(n)}}{ (\upsigma - \upsigma ')^{2}} \right) ^{\kappa \, \left( \frac{1}{\alpha } - \frac{1}{\beta }\right) }\; ||u||_{\alpha , B(\upsigma n)}. \end{aligned}$$
(3.12)

Proof

We will proceed as in [31, Theorem 2.2.5]. Since \(1/p + 1/q < 2/d\), let us recall that \(\rho / p_* > 1\). W.l.o.g. we also assume that \(\rho /p_* \le 2\) (otherwise we replace \(\rho \) by \(\rho ' \mathrel {\mathop :}=2p_* > 1\) in the following argument). Then, there exists \(j \ge 2\) such that

$$\begin{aligned} \beta \, \left( \frac{\rho }{p_*} \right) ^{-j} \le \alpha \;<\; \beta \, \left( \frac{\rho }{p_*} \right) ^{-(j-1)}. \end{aligned}$$

Further, set

$$\begin{aligned} \alpha _k \mathrel {\mathop :}=\frac{\beta }{2 p_*} \left( \frac{\rho }{p_*} \right) ^{k-j} \le \; \frac{\beta }{2 p_*} \;<\; \frac{1}{4}, \qquad \forall \,k = 0, \ldots , j. \end{aligned}$$

Let \(\upsigma _k n\), \(\tau _k\) and \(\eta _k\) be defined as in the proof of Proposition 3.2 above. Note that \({{\mathrm{\mathrm {osr}}}}(\eta ^2_k) \le 4\) for all \(k\). Then, we apply again the Sobolev inequality (2.11) to \(u^{\alpha _k}\) and afterwards we use Lemma 3.5, the volume regularity and the relation \(\alpha _{k+1} p_* = \alpha _k \rho \) to obtain

$$\begin{aligned} ||u||_{2 \alpha _{k+1} p_*, B(\upsigma _{k+1} n)}&\le \left( C_6\; 2^{2k}\dfrac{||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}}{(\upsigma -\upsigma ')^{2}} \right) ^{\frac{1}{2\alpha _k}}\; ||u||_{2 \alpha _k p_*,B(\upsigma _{k} n)}\qquad \quad \end{aligned}$$
(3.13)

for some \(C_6 < \infty \) (note that for all \(\alpha _k \in (0, 1/4)\) the constant in (3.9) is uniformly bounded in \(k\)). Further, since \(2p_* \alpha _0 \le \alpha \), Jensen’s inequality yields \(\Vert u\Vert _{2\alpha _0 p_*, B(\upsigma n)} \le \Vert u\Vert _{\alpha ,B(\upsigma n)}\). By iterating (3.13),

$$\begin{aligned} ||u||_{\beta , B(\upsigma ' n)}&\le \prod _{k=0}^{j-1} \left( C_6 \, 2^{2k} \, \frac{||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}}{(\upsigma -\upsigma ')^{2}} \right) ^{\frac{1}{2\alpha _k}}\; ||u||_{\alpha ,B(\upsigma n)}. \end{aligned}$$

Since \(\rho / p_* \ge 1\) and \(j \ge 2\) we have that

$$\begin{aligned} \frac{(\rho /p_*)^j}{\beta } \,-\, \frac{1}{\beta } \le \left( 1 + \frac{\rho }{p_*} \right) \, \left( \frac{1}{\alpha } - \frac{1}{\beta } \right) \end{aligned}$$

and therefore

$$\begin{aligned} \sum _{k=0}^{j-1} \frac{1}{\alpha _k} = \frac{2\, p_* \rho }{\rho - p_*}\, \left( \frac{(\rho /p_*)^j}{\beta } -\frac{1}{\beta } \right) \le \frac{ 2\, \rho (\rho +p_*)}{\rho -p_*}\, \left( \frac{1}{\alpha } - \frac{1}{\beta } \right) . \end{aligned}$$

Finally, since \(\sum _{k=0}^{j-1} (k/\alpha _k) \le c \sum _{k=0}^{j-1}1 /\alpha _k < \infty \) for some \(c=c(d,p,q)\) the claim follows.\(\square \)
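To make the exponent bookkeeping at the end of this proof concrete, the following short Python sketch checks the geometric-sum identity and the bound for \(\sum _{k=0}^{j-1} 1/\alpha _k\) numerically; the values of \(\rho \), \(p_*\), \(\alpha \) and \(\beta \) used below are assumed sample parameters (with \(1 < \rho /p_* \le 2\) as above), not the constants of the paper.

```python
# Numerical sketch of the exponent bookkeeping in the proof of Proposition 3.6.
import math

rho, p_star = 1.5, 1.0          # assumed sample values with 1 < rho/p_star <= 2
beta = 0.4                      # target exponent, beta < p_star/2
alpha = 0.01                    # starting exponent, alpha < beta/2

r = rho / p_star
# smallest j >= 2 with beta * r**(-j) <= alpha < beta * r**(-(j-1))
j = max(2, math.ceil(math.log(beta / alpha, r)))
assert beta * r ** (-j) <= alpha < beta * r ** (-(j - 1))

alphas = [beta / (2 * p_star) * r ** (k - j) for k in range(j)]
assert all(a < 0.25 for a in alphas)          # alpha_k < 1/4, as used for (3.9)

lhs = sum(1.0 / a for a in alphas)
closed = 2 * p_star * rho / (rho - p_star) * (r ** j / beta - 1 / beta)
bound = 2 * rho * (rho + p_star) / (rho - p_star) * (1 / alpha - 1 / beta)
assert abs(lhs - closed) < 1e-6 * closed and lhs <= bound
print(f"j = {j}, sum of 1/alpha_k = {lhs:.2f} <= bound {bound:.2f}")
```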

3.2 From mean value inequalities to Harnack principle

Lemma 3.7

Consider a connected, finite subset \(B \subset V\) and a function \(\eta \) on \(V\) with

$$\begin{aligned} {{\mathrm{\mathrm {supp }}}}\eta \;\subset \; B, \qquad 0 \le \eta \le 1 \qquad \ \text {and} \qquad \eta \equiv 0 \quad \text {on} \quad \partial B. \end{aligned}$$

Further, let \(u > 0\) be such that \({\mathcal {L}}_Y^\omega \, u \le 0\) on \(B\). Then,

$$\begin{aligned} \frac{{\mathcal {E}}^{\omega , \eta }(\ln u)}{|B|} \le 8\, {{\mathrm{\mathrm {osr}}}}(\eta ^2)\; ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||\mu ^\omega ||_{1,B}, \end{aligned}$$
(3.14)

where \({{\mathrm{\mathrm {osr}}}}\) is defined as in Lemma 3.5.

Proof

Since \(u > 0\) and \({\mathcal {L}}_Y^{\omega }\, u \le 0\) on \(B\), a summation by parts yields

$$\begin{aligned} 0 \le \langle \eta ^2 u^{-1},-{\mathcal {L}}_Y^{\omega }\, u\rangle _{\ell ^{2}(V,\mu ^{\omega })} = \langle \nabla (\eta ^2 u^{-1}),\omega \, \nabla u\rangle _{\ell ^{2}(E)}. \end{aligned}$$
(3.15)

In view of the estimates (7.7) (with the choice \(\beta =1\)) and (7.3), we obtain for any \(\{x, y\} \in E\) that

$$\begin{aligned}&\nabla (\eta ^2 u^{-1})(x,y)\, (\nabla u)(x,y)\nonumber \\&\quad \le \frac{1}{2}\, \min \{\eta ^2(x), \eta ^2(y)\}\, (\nabla u^{-1})(x,y)\, (\nabla u)(x,y) \,+\, 8\, {{\mathrm{\mathrm {osr}}}}(\eta ^2)\, (\nabla \eta )^2(x,y)\nonumber \\&\quad \le - \frac{1}{2}\, \min \{\eta ^2(x), \eta ^2(y)\}\, (\nabla \ln u )^2(x,y)\, \,+\, 8\, {{\mathrm{\mathrm {osr}}}}(\eta ^2)\, (\nabla \eta )^2(x,y). \end{aligned}$$
(3.16)

Thus, by combining the resulting estimate with (3.15) the claim (3.14) follows.\(\square \)

Proof of Theorem 1.3

Consider a harmonic function \(u > 0\) on \(B(n) \equiv B(x_0,n)\). Pick \(\varrho \) so that \(3/4 \le \varrho < 1\) and set

$$\begin{aligned} \eta (x) \mathrel {\mathop :}=\left( 1 \,-\, \frac{[ d(x_0, x) - \varrho n\, ]_+ }{(1-\varrho )\, n} \right) \;\vee \; 0. \end{aligned}$$

Obviously, the cut-off function \(\eta \) has the properties that \({{\mathrm{\mathrm {supp }}}}\eta \subset B(n)\), \(\eta \equiv 1\) on \(B(\varrho n)\), \(\eta \equiv 0\) on \(\partial B(n)\), \(\Vert \nabla \eta \Vert _{\ell ^\infty (E)} \le 1 / (1-\varrho ) n\) and \({{\mathrm{\mathrm {osr}}}}(\eta ^2) \le 4\).

Now let \(f_1 \mathrel {\mathop :}=u^{-1}\, \mathrm {e}^{\xi }\) with \(\xi = \xi (u) \mathrel {\mathop :}=(\ln u)_{B(n),\eta ^2}\). Our aim is to apply Lemma 2.5 to the function \(f_1\) with \(U = B(\varrho n)\), \(U_\upsigma =B(\upsigma \varrho n)\), \(\delta = 1 / 2 \varrho \), \(\alpha ^* = \infty \) and \(m\) being the counting measure. Note that Corollary 3.3 implies that the condition (i) of Lemma 2.5 is satisfied with \(\gamma = 4p_* \kappa \), where \(\kappa \) and \(p_*\) are chosen as in Proposition 3.2 and

$$\begin{aligned} C_{\mathrm {BG1}} = ( C_4\, (1 \vee ||\mu ^\omega ||_{p,B(n)}\, ||\nu ^\omega ||_{q,B(n)}) )^{2p_* \kappa }. \end{aligned}$$

To verify the second condition of Lemma 2.5, we apply the weighted local \(\ell ^2\)-Poincaré inequality in Proposition 2.3 to the function \(\ln u\). This yields

$$\begin{aligned} ||\ln u - (\ln u)_{B(n),\eta ^2}||_{2, B(n), \eta ^2}^2&\le C_{\mathrm {PI}}\, M_1\, n^2\, ||\nu ^{\omega }||_{\frac{d}{2}, B(n)}\; \frac{{\mathcal {E}}_{\eta ^2 \wedge \eta ^2}^{\omega }(\ln u)}{|B(n)|}\\&\le c\, ||\nu ^{\omega }||_{\frac{d}{2}, B(n)}\, ||\mu ^{\omega }||_{1, B(n)}, \end{aligned}$$

where we used (3.14) in the last step. Hence,

$$\begin{aligned} {\uplambda }\, ||1\!\!1_{\{ \ln f_1 \,>\, {\uplambda }\}}||_{1,B(\varrho n)} \le \frac{|B(n)|}{|B(\varrho n)|}\; ||\ln u - \xi ||_{1, B(n), \eta ^2} \le c\, ( ||\nu ^{\omega }||_{\frac{d}{2}, B(n)}\, ||\mu ^{\omega }||_{1, B(n)} )^{\frac{1}{2}}. \end{aligned}$$

This shows that the second hypothesis of Lemma 2.5 is satisfied by \(f_1\). Thus, we can apply Lemma 2.5 to \(f_1\) and, since \(U_\delta =B(n/2)\), we conclude that

$$\begin{aligned} ||f_1||_{\infty ,B(n/2)} \le A_1^\omega \qquad \Longleftrightarrow \qquad \mathrm {e}^{\xi } \le A_1^{\omega }\, \min _{x \in B(n/2)} u(x). \end{aligned}$$
(3.17)

Analogously one can verify that the function \(f_2 = u\, \mathrm {e}^{-\xi }\), with \(\xi (u) \mathrel {\mathop :}=(\ln u)_{B(n), \eta ^2}\), satisfies the conditions of Lemma 2.5 with the choice \(U\) and \(U_\upsigma \) as before, \(\delta = 3/4 \varrho \) and any fixed \(\alpha ^*\in (0, p_*/2)\). Indeed, the first hypothesis is fulfilled by Proposition 3.6 with the choice \(\beta = \alpha ^*\) and the second condition again by the weighted local \(\ell ^2\)-Poincaré inequality in Proposition 2.3. Hence, \(||f_2||_{\alpha ^*,B(3n/4)} \; \le \;A_2^{\omega }\) and by Corollary 3.4 we get

$$\begin{aligned} \max _{x \in B(n/2)} u(x) \le \big ( 16\, C_5(\alpha ^*)\, \big ( 1 \vee ||\mu ^{\omega }||_{p,B(n)}\, ||\nu ^{\omega }||_{q,B(n)} \big ) \big )^{\kappa '}\; A_2^{\omega }\, \mathrm {e}^{\xi }. \end{aligned}$$
(3.18)

By combining (3.17) and (3.18) we obtain the EHI. Finally, the explicit form of the Harnack constant follows from the representation of the constant in Lemma 2.5.\(\square \)
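As a small illustration of the cut-off function used in the proof above, the following Python sketch (with assumed sample values of \(n\) and \(\varrho \), and with the graph replaced by a one-dimensional slice) verifies the listed properties: \(\eta \equiv 1\) on \(B(\varrho n)\), \(\eta \equiv 0\) outside \(B(n)\), \(\Vert \nabla \eta \Vert _{\ell ^\infty (E)} \le 1/((1-\varrho )n)\) and \({{\mathrm{\mathrm {osr}}}}(\eta ^2) \le 4\).

```python
# Sketch: the cut-off eta from the proof of Theorem 1.3 on a 1-d slice of the
# graph (x_0 = 0); n and varrho are assumed sample values.
n, varrho = 20, 0.75

def eta(x):
    d = abs(x)                                   # graph distance to x_0 = 0
    return max(1.0 - max(d - varrho * n, 0.0) / ((1 - varrho) * n), 0.0)

assert all(eta(x) == 1.0 for x in range(-int(varrho * n), int(varrho * n) + 1))
assert eta(n) == 0.0 and eta(n + 5) == 0.0       # eta vanishes outside B(n)
grad = max(abs(eta(x + 1) - eta(x)) for x in range(-n - 2, n + 2))
assert grad <= 1.0 / ((1 - varrho) * n) + 1e-12
ratios = [eta(b) ** 2 / eta(a) ** 2
          for x in range(-n - 2, n + 2)
          for a, b in ((x, x + 1), (x + 1, x)) if eta(a) > 0]
assert max(ratios + [1.0]) <= 4.0 + 1e-12        # osr(eta^2) <= 4
print("cut-off properties verified on the 1-d slice")
```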

3.3 Hölder continuity

As an immediate consequence of the elliptic Harnack inequality, we prove the Hölder continuity of harmonic functions.

Proposition 3.8

Suppose that Assumption 1.6 holds. Let \(x_0 \in V\) and let \(s^{\omega }(x_0)\) and \(C^*_{\mathrm {EH}}\) be as in Assumption 1.6. Further, let \(R \ge s(x_0)\) and suppose that \(u > 0\) is a harmonic function on \(B(x_0, R_0)\) for some \(R_0 \ge 2 R\). Then, for any \(x_1, x_2\in B(x_0,R)\),

$$\begin{aligned} | u(x_1) - u(x_2) | \le c \cdot \left( \frac{R}{R_0} \right) ^{\theta } \max _{B(x_0, R_0 / 2)} u, \end{aligned}$$

where \(\theta = \ln (2C^*_\mathrm{{EH}}/(2C^*_\mathrm{{EH}}-1))/\ln 2\) and \(c\) depends only on \(C^*_\mathrm{{EH}}\).

Proof

The proof is standard (cf. [31, page 50]), so we omit it here. For a similar but more involved argument, see the corresponding parabolic result in Proposition 4.8.\(\square \)

4 Parabolic Harnack inequality

In this section we prove the parabolic Harnack inequality in Theorem 1.4.

4.1 Mean value inequalities

Lemma 4.1

Suppose that \(Q = I \times B\), where \(I = [s_1, s_2]\) is an interval and \(B\) is a finite, connected subset of \(V\). Consider a function \(\eta : V \rightarrow {\mathbb {R}}\) and a smooth function \(\zeta : {\mathbb {R}}\rightarrow {\mathbb {R}}\) with

$$\begin{aligned} {{\mathrm{\mathrm {supp }}}}\eta&\;\subset \; B,&0&\;\le \; \eta \;\le \; 1&\text {and}&\eta&\;\equiv \; 0 \quad \text {on} \quad \partial B, \\ {{\mathrm{\mathrm {supp }}}}\zeta&\;\subset \; I,&0&\;\le \; \zeta \;\le \; 1&\text {and}&\zeta (s_1)&\;=\; 0. \end{aligned}$$

Further, let \(u > 0\) be such that \(\partial _t u -{\mathcal {L}}_Y^\omega \, u \le 0\) on \(Q\). Then, for all \(\alpha \ge 1\),

$$\begin{aligned} \int _{I} \zeta (t)\; \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha })}{|B|} \, \mathrm {d}t \le 64\, \alpha ^2\, ( ||\nabla \eta ||_{\ell ^{\infty }(E)}^2 \,+\, \Vert \zeta ' \Vert _{L^{\infty } (I)} )\; \int _{I}\; ||u_t^{2 \alpha }||_{1,B, \mu ^{\omega }}\, \mathrm {d}t\qquad \end{aligned}$$
(4.1)

and

$$\begin{aligned} \max _{t \in I} ( \zeta (t)\; ||(\eta \, u_t^{\alpha })^2||_{1, B, \mu ^{\omega }} ) \le 64\, \alpha ^2\, ( ||\nabla \eta ||_{\ell ^{\infty }(E)}^2 \,+\, \Vert \zeta ' \Vert _{L^{\infty } (I)} )\; \int _{I}\; ||u_t^{2 \alpha }||_{1 , B, \mu ^{\omega } }\, \mathrm {d}t.\nonumber \\ \end{aligned}$$
(4.2)

Proof

Since \(\partial _t u - {\mathcal {L}}_Y^\omega \, u \le 0\) on \(Q\) we have, for every \(t \in I\),

$$\begin{aligned} \frac{1}{2 \alpha }\, \partial _t \langle \eta ^2,u_t^{2\alpha }\rangle _{\ell ^{2}(V, \mu ^{\omega })}&\,=\, \langle \eta ^2\, u_t^{2 \alpha - 1},\partial _t\, u_t\rangle _{\ell ^{2}(V, \mu ^{\omega })}\nonumber \\&\,\le \, \langle \eta ^2\, u_t^{2 \alpha - 1},{\mathcal {L}}_Y^{\omega }\, u_t\rangle _{\ell ^{2}(V,\mu ^{\omega })} {=} -\langle \nabla (\eta ^2\, u_t^{2\alpha -1}),\omega \, \nabla u_t\rangle _{\ell ^{2}(E)}.\nonumber \\ \end{aligned}$$

By choosing \(\varepsilon = \frac{1}{8 \alpha }\) in (3.4) and exploiting the fact that \(\alpha \ge 1\), we obtain

$$\begin{aligned} \langle \nabla (\eta ^2 u_t^{2\alpha -1}),\omega \, \nabla u_t\rangle _{\ell ^{2}(E)} \;\ge \; \frac{1}{2 \alpha }\; {\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha }) \,-\, 32\, \alpha \; |B|\;||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||u_t^{2\alpha }||_{1, B, \mu ^{\omega }}. \end{aligned}$$

Hence,

$$\begin{aligned} \partial _t ||(\eta \, u_t^{\alpha })^2||_{1, B, \mu ^{\omega }} \,+\, \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha })}{|B|} \le 64\, \alpha ^2\; ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\; ||u_t^{2 \alpha }||_{1, B, \mu ^{\omega }}. \end{aligned}$$
(4.3)

By multiplying both sides of (4.3) with \(\zeta (t)\), we get

$$\begin{aligned}&\partial _t ( \zeta (t)\, ||(\eta \, u_t^{\alpha })^2||_{1, B, \mu ^{\omega }} ) \,+\, \zeta (t)\, \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha })}{|B|}\nonumber \\&\quad \le \; 64\, \alpha ^2 \, ||\nabla \eta ||_{\ell ^{\infty }(E)}^2\, ||u_t^{2 \alpha }||_{1, B, \mu ^{\omega }} \,+\, \Vert \zeta ' \Vert _{L^{\infty } (I)} ||(\eta \, u_t^{\alpha })^2||_{1, B, \mu ^{\omega }}. \end{aligned}$$

Integrating this inequality over \([s_1,s]\) for any \(s \in I\), we obtain

$$\begin{aligned}&\zeta (s)\, ||(\eta \, u_s^{\alpha })^2||_{1, B, \mu ^{\omega }} \,+\, \int _{s_1}^s \zeta (t)\; \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha })}{|B|}\, \mathrm {d}t\nonumber \\&\quad \le \; 64\, \alpha ^2\, ( ||\nabla \eta ||_{\ell ^{\infty }( E)}^2 \,+\, \Vert \zeta ' \Vert _{L^{\infty } (I)} )\; \int _{I}\; ||u_t^{2 \alpha }||_{1, B, \mu ^{\omega }}\, \mathrm {d}t. \end{aligned}$$
(4.4)

By neglecting the first term on the left-hand side of (4.4) we get (4.1), whereas (4.2) follows once we neglect the second term on the left-hand side of (4.4).\(\square \)

For any \(x_0 \in V\), \(t_0 \ge 0\) and \(n \ge 1\), we write \(Q(n) \equiv [t_0, t_0 + n^2] \times B(n)\) with \(B(n) \equiv B(x_0, n)\). Further, we consider a family of intervals \(\{I_{\upsigma } : \upsigma \in [0, 1]\}\), i. e.

$$\begin{aligned} I_{\upsigma } \mathrel {\mathop :}=[ \upsigma \, t_0 + ( 1 - \upsigma ) s', ( 1 - \upsigma ) s'' + \upsigma ( t_0 + n^2 ) ] \end{aligned}$$

interpolating between the intervals \([t_0, t_0+n^2]\) and \([s', s'']\), where \(s' \le s''\) are chosen such that, given \(\varepsilon \in (0, 1/4)\), we have \(s' - t_0 \ge \varepsilon n^2\) and either \(t_0 + n^2 - s'' \ge \varepsilon n^2\) or \(s'' = t_0 + n^2\). Moreover, set \( Q(\upsigma n) \mathrel {\mathop :}=I_{\upsigma } \times B(\upsigma n)\). In addition, for any sets \(I\) and \(B\) as in Lemma 4.1 we introduce an \(L^p\)-norm on functions \(u :{\mathbb {R}}\times V \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} ||u||_{p,I \times B, \mu ^{\omega }} \mathrel {\mathop :}=\left( \frac{1}{|I|}\; \int _{I}\; ||u_t||_{p, B, \mu ^{\omega }}^p\; \mathrm {d}t \right) ^{ \frac{1}{p}}, \end{aligned}$$

where \(u_t = u(t, \cdot )\) for \(t \in {\mathbb {R}}\).
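The following minimal sketch (all numerical values are assumed sample parameters) illustrates the interpolation property of the family \(I_{\upsigma }\): \(I_1\) is the full time interval \([t_0, t_0+n^2]\), \(I_0 = [s', s'']\), and the intervals are nested in \(\upsigma \).

```python
# Sketch of the interpolating intervals I_sigma; all values are assumed samples.
def I(sigma, t0, n, s1, s2):
    return (sigma * t0 + (1 - sigma) * s1,
            (1 - sigma) * s2 + sigma * (t0 + n ** 2))

t0, n, eps = 0.0, 10.0, 0.1
s1, s2 = t0 + eps * n ** 2, t0 + n ** 2 - eps * n ** 2

assert I(1.0, t0, n, s1, s2) == (t0, t0 + n ** 2)   # I_1 = [t_0, t_0 + n^2]
assert I(0.0, t0, n, s1, s2) == (s1, s2)            # I_0 = [s', s'']
lo_small, hi_small = I(0.5, t0, n, s1, s2)
lo_big, hi_big = I(0.8, t0, n, s1, s2)
assert lo_big <= lo_small and hi_small <= hi_big    # nested in sigma
print("I_sigma interpolates between [s', s''] and [t_0, t_0 + n^2]")
```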

Proposition 4.2

Let \(u > 0\) be such that \(\partial _t u - {\mathcal {L}}_Y^{\omega }\, u \le 0\) on \(Q(n)\). Then, for any \(p,q \in (1,\infty ]\) with

$$\begin{aligned} \frac{1}{p} \,+\, \frac{1}{q} \;<\; \frac{2}{d} \end{aligned}$$
(4.5)

there exists \(\kappa \equiv \kappa (d,p,q)\) and \(C_7 \equiv C_7(d, q, \varepsilon )\) such that for all \(\beta \ge 1\) and for all \(1/2 \le \upsigma ' < \upsigma \le 1\), we have

$$\begin{aligned} ||u||_{2 \beta , Q(\upsigma ' n),\mu ^{\omega }} \le C_7\, \left( \frac{ (1 \vee ||\mu ^{\omega }||_{p,B(n)})\, (1 \vee ||\nu ^{\omega }||_{q,B(n)}) }{(\upsigma - \upsigma ')^2} \right) ^{ \kappa } ||u||_{2, Q(\upsigma n), \mu ^{\omega }}.\nonumber \\ \end{aligned}$$
(4.6)

Proof

Proceeding as in the proof of Proposition 3.2, consider a sequence \(\{B(\upsigma _k n)\}_k\) of balls with radius \(\upsigma _k n\) centered at \(x_0\), where

$$\begin{aligned} \upsigma _{k} = \upsigma ' + 2^{-k} (\upsigma - \upsigma ') \qquad \text {and} \qquad \tau _k = 2^{-k-1} (\upsigma - \upsigma '), \quad k = 0, 1, \ldots \end{aligned}$$

and a sequence \(\{\eta _k\}_k\) of cut-off functions in space such that \({{\mathrm{\mathrm {supp }}}}\eta _k \subset B(\upsigma _k n)\), \(\eta _k \equiv 1\) on \(B(\upsigma _{k+1} n)\), \(\eta _k \equiv 0\) on \(\partial B(\upsigma _k n)\) and \(||\nabla \eta _k||_{\ell ^{\infty }(E)} \le 1/\tau _{k} n\). Further, let \(\{\zeta _k\}_k\) be a sequence of cut-off functions in time, i.e. \(\zeta _k \in C^{\infty }({\mathbb {R}})\), \({{\mathrm{\mathrm {supp }}}}\zeta _k \subset I_{\upsigma _k}\), \(\zeta _k \equiv 1\) on \(I_{\upsigma _{k+1}}\), \(\zeta _k(\upsigma _k t_0 + (1-\upsigma _k)s') = 0\) and \(\Vert \zeta _k' \Vert _{L^{\infty } ([t_0,t_0+n^2])} \le 1 / \varepsilon \tau _k n^2\).

Set \(\alpha _k = (1 + (\rho - p_*) / \rho )^k\). Since \(1/p + 1/q < 2/d\), it holds that \(\rho > p_*\). Hence, we have that \(\alpha _k \ge 1\) for every \(k \in {\mathbb {N}}_0\). By Hölder’s inequality, we have that

$$\begin{aligned}&||u_t^{2 \alpha _k}||_{1 + \frac{\rho -p_*}{\rho }, B(\upsigma _{k+1} n), \mu ^{\omega }}^{1+\frac{\rho - p_*}{\rho }}\\&\quad \le \; \frac{|B(\upsigma _k n)|}{|B(\upsigma _{k+1} n)|}\; ||(\eta _k\, u_t^{\alpha _k})^2||_{1, B(\upsigma _{k} n), \mu ^{\omega }}^{1-\frac{p_*}{\rho }}\; ||(\eta _k\, u_t^{\alpha _k})^2||_{\rho ,B(\upsigma _{k} n)}\; ||\mu ^{\omega }||_{p, B(\upsigma _k n)}^{\frac{p_*}{\rho }}. \end{aligned}$$

Since by the volume regularity \(|B(\upsigma _k n)| / |B(\upsigma _{k+1} n)| \le C_{\mathrm {reg}}^2 2^d\), we obtain

$$\begin{aligned} ||u^{2\alpha _k}||_{1 + \frac{\rho -p_*}{\rho }, Q(\upsigma _{k+1} n), \mu ^{\omega }}^{1+\frac{\rho - p_*}{\rho }}&\le C_{\mathrm {reg}}^2 2^d\, ||\mu ^{\omega }||_{p,B(\upsigma _{k} n)}^{\frac{p_*}{\rho }} \left( \max _{t \in I_{\upsigma _{k+1}}} ||(\eta _k\, u_t^{\alpha _k})^2||_{1, B(\upsigma _{k} n), \mu ^{\omega }}^{1 - \frac{p_*}{\rho }} \right) \\&\quad \times \left( \frac{1}{|I_{\upsigma _{k+1}}|}\, \int _{I_{\upsigma _{k+1}}} ||(\eta _k\, u_t^{\alpha _k})^2||_{\rho ,B(\upsigma _{k} n)}\, \mathrm {d}t \right) . \end{aligned}$$

Now, in view of (2.11), the integrand can be estimated from above by

$$\begin{aligned} ||(\eta _k\, u_t^{\alpha _k})^2||_{\rho , B(\upsigma _k n)}&\le C_{\mathrm {S}}\, n^2\, ||\nu ^{\omega }||_{q,B(\upsigma _k n)} \\&\ \quad \times \left( \frac{{\mathcal {E}}_{\eta _k^2}^{\omega }\big (u_t^{\alpha _k}\big )}{|B(\upsigma _k n)|} \,+\, \frac{1}{(\tau _k n)^2}\, ||u_t^{2 \alpha _k}||_{1, B(\upsigma _k n), \mu ^{\omega }} \right) . \end{aligned}$$

On the other hand, we obtain from (4.1) that

$$\begin{aligned} \int _{I_{\upsigma _{k+1}}} \frac{{\mathcal {E}}_{\eta _k^2}^{\omega }(u_t^{\alpha _k})}{|B(\upsigma _k n)|}\; \mathrm {d}t \le 64\, \left( \frac{\alpha _k}{\tau _k n}\right) ^2\, \left( 1 + \frac{\tau _k}{\varepsilon } \right) \, \int _{I_{\upsigma _k}} ||u_t^{2 \alpha _k}||_{1,B(\upsigma _k n), \mu ^{\omega }}\, \mathrm {d}t. \end{aligned}$$

Combining the estimates above and using (4.2), we finally get

$$\begin{aligned}&||u||_{2 \alpha _{k+1}, Q(\upsigma _{k+1} n), \mu ^{\omega }}\\&\quad \le \; \left( c\, \varepsilon ^{-1}\, \frac{2^{2k}\, \alpha _k^2}{(\upsigma - \upsigma ')^2}\; \left( 1 \vee ||\mu ^{\omega }||_{p,B(n)}^{\frac{p_*}{\rho }}\, ||\nu ^{\omega }||_{q,B(n)} \right) \right) ^{ \frac{1}{2 \alpha _k}} ||u||_{2 \alpha _k,Q(\upsigma _k n), \mu ^{\omega }} \end{aligned}$$

for some \(c < \infty \), where we used that \(\tau _k \le 1\) and \(|I_{\upsigma _k}|/|I_{\upsigma _{k+1}}| = \upsigma _k/\upsigma _{k+1}\le 2\).

Now, for any \(\beta \in [1, \infty )\), choose \(K < \infty \) such that \(2 \beta \le 2 \alpha _{K}\). Iterating the inequality above and using that \(||u||_{2 \beta , Q(\upsigma ' n)} \le \, ||u||_{2 \alpha _{K}, Q(\upsigma _{K} n)}\), we obtain

$$\begin{aligned} ||u||_{2 \beta , Q(\upsigma ' n), \mu ^{\omega }} \le C_7\, \prod _{k=0}^{K-1} \left( \frac{ (1 \vee ||\mu ^{\omega }||_{p,B(n)})\, (1 \vee ||\nu ^{\omega }||_{q,B(n)}) }{(\upsigma - \upsigma ')^2} \right) ^{ \frac{1}{2 \alpha _k}} ||u||_{2,Q(\upsigma n), \mu ^{\omega }},\nonumber \\ \end{aligned}$$
(4.7)

where \(C_7 < \infty \) is a constant independent of \(k\) since \(\sup _k 2^{(k+1)/(2 \alpha _k)} < \infty \). Finally, by choosing \(\kappa = \frac{1}{2} \sum _{k=0}^{\infty } 1/\alpha _k < \infty \) the claim follows.\(\square \)
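As a quick numerical illustration of the constants appearing at the end of this proof, the sketch below (with \(\rho \) and \(p_*\) replaced by assumed sample values satisfying \(\rho > p_*\)) evaluates \(\kappa = \frac{1}{2}\sum _{k \ge 0} 1/\alpha _k\) and \(\sup _k 2^{(k+1)/(2\alpha _k)}\) for the exponents \(\alpha _k = (1 + (\rho -p_*)/\rho )^k\).

```python
# Sketch of the parabolic iteration exponents from the proof of Proposition 4.2;
# rho, p_star are assumed sample values with rho > p_star.
rho, p_star = 1.5, 1.0
a = 1.0 + (rho - p_star) / rho                      # growth factor > 1
alphas = [a ** k for k in range(200)]               # truncation of the series

kappa = 0.5 * sum(1.0 / x for x in alphas)
kappa_closed = (2 * rho - p_star) / (2 * (rho - p_star))   # geometric series
sup_term = max(2.0 ** ((k + 1) / (2 * alphas[k])) for k in range(200))

print(f"kappa ~ {kappa:.6f} (closed form {kappa_closed:.6f})")
print(f"sup_k 2^((k+1)/(2 alpha_k)) ~ {sup_term:.4f} < infinity")
```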

Corollary 4.3

Suppose that the assumptions of Proposition 4.2 are satisfied. In addition, assume that \(u > 0\) solves \(\partial _t u - {\mathcal {L}}^{\omega } u = 0\) on \(Q(n)\).

  1. (i)

    Suppose that \(s' - t_0 \ge \varepsilon n^2\) and either \(t_0 + n^2 - s'' \ge \varepsilon n^2\) or \(s'' = t_0 + n^2\). Then, for all \(\alpha \in (0, \infty )\), there exists \(C_8 = C_8(d, q, \varepsilon )\) and \(\kappa ' = 2 (1 + \kappa ) / \alpha \) such that

    $$\begin{aligned}&\max _{(t,x) \in Q(\upsigma ' n)} u(t,x)^{-1} \nonumber \\&\quad \le \left( C_8\, \frac{ (1 \vee ||\mu ^{\omega }||_{p,B(n)})\, (1 \vee ||\nu ^{\omega }||_{q,B(n)}) }{(\upsigma - \upsigma ')^2} \right) ^{ \kappa '} ||u^{-1}||_{\alpha , Q(\upsigma n), \mu ^{\omega }}. \end{aligned}$$
    (4.8)
  2. (ii)

    Suppose that \(s' - t_0 \ge \varepsilon n^2\) and \(t_0 + n^2 - s'' \ge \varepsilon n^2\). Then, for all \(\alpha \in (0, \infty )\), there exists \(C_9(\alpha ) = C_9(d, q, \varepsilon , \alpha )\) and \(\kappa ' = \left( 1 \vee \frac{2}{\alpha }\right) (1 + \kappa )\) such that

    $$\begin{aligned}&\max _{(t,x) \in Q(\upsigma ' n)} u(t,x)\nonumber \\&\quad \le \left( C_9(\alpha )\, \frac{ (1 \vee ||\mu ^{\omega }||_{p, B(n)})\, (1 \vee ||\nu ^{\omega }||_{q, B(n)}) }{(\upsigma - \upsigma ')^2} \right) ^{ \kappa '}||u||_{\alpha , Q(\upsigma n), \mu ^{\omega }}. \end{aligned}$$
    (4.9)

Proof

(i) We shall show that for \(\beta ' \ge 1 \vee \alpha \) large enough there exists \(c < \infty \) which is independent of \(n\) such that

$$\begin{aligned} \max _{(t,x) \in Q(\upsigma ' n)} u(t,x)^{-1} \le c\, \left( \frac{1 \vee ||\nu ^{\omega }||_{1,B(n)}}{(\upsigma - \upsigma ')^2} \right) \; ||u^{-1}||_{2 \beta ', Q(\bar{\upsigma } n), \mu ^{\omega }}, \end{aligned}$$
(4.10)

where \(\bar{\upsigma } = (\upsigma ' + \upsigma ) / 2\). Since \(u > 0\) is caloric on \(Q(n)\), the function \(f = u^{-\alpha /2}\), \(\alpha \in (0, \infty )\), is sub-caloric on \(Q(n)\), that is \(\partial _t f - {\mathcal {L}}_{Y}^{\omega } f \le 0\). Thus, provided we have established (4.10), (4.8) follows by applying (4.6) to the function \(f\) with \(\upsigma '\) replaced by \(\bar{\upsigma }\) and \(\beta = 2\beta ' / \alpha \).

The proof of (4.10) uses arguments similar to the ones given in [20]. Since \(u\) is caloric and positive on \(Q(n)\), we have for every \(t \in I_{\upsigma }\) and \(x \in B(\upsigma n)\),

$$\begin{aligned} \partial _t\, u(t,x) = {\mathcal {L}}_{Y}^{\omega }\, u(t, \cdot )(x) \;\ge \; -u(t,x), \end{aligned}$$

which implies that \(u(t_2,x) \ge \mathrm {e}^{-(t_2-t_1)}\, u(t_1,x)\) for every \(t_1, t_2 \in I_{\upsigma }\) with \(t_1 < t_2\) and \(x \in B(\upsigma n)\). Now, choose \((t_*, x_*) \in Q(\upsigma ' n)\) in such a way that

$$\begin{aligned} u(t_*, x_*)^{-1} = \max _{(t,x) \in Q(\upsigma ' n)} u(t,x)^{-1}. \end{aligned}$$

Moreover, set \(I' \mathrel {\mathop :}=[\bar{\upsigma }\, t_0 + (1-\bar{\upsigma })\, s' , t_*] \subset I_{\bar{\upsigma }}\). Then, for any \(t \in I'\) we have that \(u(t,x_*)^{-1} \ge \mathrm {e}^{-(t_* - t)}\, u(t_*, x_*)^{-1}\). This implies

$$\begin{aligned} \int _{I_{\bar{\upsigma }}} ||u_t^{-1}||_{\beta ', B(\bar{\upsigma } n)}^{\beta '}\, \mathrm {d}t \;\ge \; \frac{1}{|B(\bar{\upsigma } n)|}\, \int _{I'} u(t, x_*)^{-\beta '}\, \mathrm {d}t \;\ge \; \frac{1 - \mathrm {e}^{-\beta ' |I'|}}{\beta ' |B(n)|}\, u(t_*,x_*)^{-\beta '}. \end{aligned}$$

Recall that \(s' - t_0 \ge \varepsilon n^2\). Hence, \(|I'| \ge \frac{1}{2} \varepsilon (\upsigma - \upsigma ')\, n^2\). Thus, for all \(n\) large enough, we have that \(1 - \mathrm {e}^{-\beta ' |I'|} \ge 1/2\). Choosing \(\beta ' = |B(n)| n^2 \vee \alpha \), an application of the Cauchy–Schwarz inequality yields

$$\begin{aligned} ||u^{-1}||_{2 \beta ', Q(\bar{\upsigma } n), \mu ^{\omega }}\, ||\nu ^{\omega }||_{1, B(\bar{\upsigma } n)}^{\frac{1}{2 \beta '}} \ge ||u^{-1}||_{\beta ', Q(\bar{\upsigma } n)} \ge \frac{1}{c}\, (\upsigma - \upsigma ' )^{\frac{1}{\beta '}} \max _{(t,x) \in Q(\upsigma ' n)} u(t,x)^{-1} , \end{aligned}$$

where \(c = c(\varepsilon ) \in (0, \infty )\). Here, we used that \(||1 / \mu ^{\omega }||_{1,B(\bar{\upsigma } n)} \le ||\nu ^{\omega }||_{1,B(\bar{\upsigma } n)}\). This completes the proof of (4.10).

(ii) Analogously, for every \(t \in [t_*, (1 - \bar{\upsigma })\, s'' + \bar{\upsigma }\, (t_0 + n^2)] \subset I_{\bar{\upsigma }}\) we have that \(u(t, x_*) \ge \mathrm {e}^{-(t-t_*)}\, u(t_*, x_*)\), where \(u(t_*, x_*) = \max _{(t,x) \in Q(\upsigma ' n)}u(t, x)\). Therefore, by a computation similar to the one above and in view of (4.6), we obtain

$$\begin{aligned} \max _{(t,x) \in Q(\upsigma ' n)} u(t,x) \le c\, \left( \frac{ (1 \vee ||\mu ^{\omega }||_{p, B(n)})\, (1 \vee ||\nu ^{\omega }||_{q, B(n)}) }{(\upsigma - \upsigma ')^2} \right) ^{ 1+\kappa } ||u||_{2, Q(\upsigma n), \mu ^{\omega }}. \end{aligned}$$

In particular, this inequality implies (4.9) for every \(\alpha \ge 2\). For \(\alpha \in (0, 2)\) the proof goes literally along the lines of the proof of Corollary 3.4.\(\square \)

Lemma 4.4

Suppose that \(Q = I \times B\), where \(I = [s_1, s_2]\) is an interval and \(B\) is a finite, connected subset of \(V\). Consider a function \(\eta : V \rightarrow {\mathbb {R}}\) and a smooth function \(\zeta : {\mathbb {R}}\rightarrow {\mathbb {R}}\) with

$$\begin{aligned} {{\mathrm{\mathrm {supp }}}}\eta&\;\subset \; B,&0&\;\le \; \eta \;\le \; 1&\text {and}&\eta&\;\equiv \; 0 \quad \text {on} \quad \partial B, \\ {{\mathrm{\mathrm {supp }}}}\zeta&\;\subset \; I,&0&\;\le \; \zeta \;\le \; 1&\text {and}&\zeta (s_2)&\;=\; 0. \end{aligned}$$

Further, let \(u > 0\) be such that \(\partial _t u -{\mathcal {L}}_Y^\omega \, u \ge 0\) on \(Q\). Then, for all \(\alpha \in (0,1/2)\),

$$\begin{aligned}&\int _{I} \zeta (t)\; \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha })}{|B|} \, \mathrm {d}t\nonumber \\&\le \; \frac{8}{(1 - 2\alpha )^2}\, ( {{\mathrm{\mathrm {osr}}}}(\eta ^2)\, ||\nabla \eta ||_{\ell ^{\infty }( E)}^2 \,+\, \Vert \zeta ' \Vert _{L^{\infty } (I)} )\; \int _{I}\; ||u_t^{2 \alpha }||_{1,B, \mu ^{\omega }}\, \mathrm {d}t \end{aligned}$$
(4.11)

and

$$\begin{aligned}&\max _{t \in I} ( \zeta (t)\; ||(\eta \, u_t^{\alpha })^2||_{1, B, \mu ^{\omega }} )\nonumber \\&\quad \le \; \frac{16}{1 - 2\alpha }\, ( {{\mathrm{\mathrm {osr}}}}(\eta ^2)\, ||\nabla \eta ||_{\ell ^{\infty }( E)}^2 \,+\, \Vert \zeta ' \Vert _{L^{\infty } (I)} )\; \int _{I}\; ||u_t^{2 \alpha }||_{1 , B, \mu ^{\omega } }\, \mathrm {d}t. \end{aligned}$$
(4.12)

Proof

Since \(\partial _t u - {\mathcal {L}}_Y^\omega \, u \ge 0\) on \(Q\) we have, for every \(t \in I\) and \(\alpha \in (0, 1/2)\),

$$\begin{aligned} \frac{1}{2 \alpha }\, \partial _t \langle \eta ^2 ,u_t^{2\alpha }\rangle _{\ell ^{2}(V, \mu ^{\omega })}&\,=\, \langle \eta ^2\, u_t^{2 \alpha - 1} ,\partial _t\, u_t\rangle _{\ell ^{2}(V, \mu ^{\omega })}\nonumber \\&\,\ge \, \langle \eta ^2 u_t^{2 \alpha - 1} ,{\mathcal {L}}_Y^{\omega } u_t\rangle _{\ell ^{2}(V,\mu ^{\omega })} {=} -\langle \nabla (\eta ^2\, u_t^{2\alpha -1}),\omega \, \nabla u_t\rangle _{\ell ^{2}(E)}. \end{aligned}$$

Together with the estimate (3.11), we obtain that

$$\begin{aligned}&-\partial _t ||(\eta \, u_t^{\alpha })^2||_{1, B, \mu ^{\omega }} \,+\, \frac{1-2\alpha }{\alpha }\, \frac{{\mathcal {E}}_{\eta ^2}^{\omega }(u_t^{\alpha })}{|B|}\nonumber \\&\quad \le \frac{32\, \alpha }{1-2\alpha }\; {{\mathrm{\mathrm {osr}}}}(\eta ^2)\; ||\nabla \eta ||_{\ell ^{\infty }( E)}^2\; ||u_t^{2 \alpha }||_{1, B, \mu ^{\omega }}. \end{aligned}$$
(4.13)

The difference between (4.3) and (4.13) is the minus sign that appears in front of the first term on the left-hand side of (4.13). This is the reason why we choose a cut-off function, \(\zeta \), that vanishes at \(s_2\). To finish the proof, it suffices to repeat the remaining arguments in Lemma 4.1.\(\square \)

Proposition 4.5

Let \(u > 0\) be such that \(\partial _t u - {\mathcal {L}}_Y^{\omega }\, u \ge 0\) on \(Q(n)\). Then, for any \(p, q \in (1,\infty ]\) with

$$\begin{aligned} \frac{1}{p} \,+\, \frac{1}{q} \;<\; \frac{2}{d} \end{aligned}$$

and any \(\beta \in (0, 1/2)\) there exists \(C_{10} \equiv C_{10}(d, q, \varepsilon )\) and \(\kappa \equiv \kappa (d,p,q)\) such that for all \(\alpha \in (0, \beta / 2)\) and for all \(1/2 \le \upsigma ' < \upsigma \le 1\), we have

$$\begin{aligned} ||u||_{\beta , Q(\upsigma ' n),\mu ^{\omega }} \le \left( C_{10}\, \frac{ (1 \vee ||\mu ^{\omega }||_{p,B(n)}) (1 \vee ||\nu ^{\omega }||_{q,B(n)}) }{(\upsigma {-} \upsigma ')^2} \right) ^{ \kappa \,(\frac{1}{\alpha } \!-\! \frac{1}{\beta })} ||u||_{\alpha , Q(\upsigma n), \mu ^{\omega }}.\nonumber \\ \end{aligned}$$
(4.14)

Proof

The proof goes literally along the lines of Proposition 3.6 (cf. [31, Theorem 5.2.17]). The only difference is that we have to replace \(\rho / p_*\) by \(1 + (\rho - p_*) / \rho \) in the definition of \(\alpha _k\).\(\square \)

4.2 From mean value inequalities to Harnack principle

From now on we denote by \(m\) the product measure \(m = \mathrm {d}t \times | \cdot |\) on \({\mathbb {R}}_+ \times V\), where \(| \cdot |\) is still the counting measure on \(V\).

Lemma 4.6

For \(x_0 \in V\), \(t_0 \ge 0\) and \(n \ge 1\), let \(Q(n) = I_1 \times B(n)\) where \(I_1 = [t_0, t_0 + n^2]\) and \(B(n) \equiv B(x_0, n)\). Further, let \(u > 0\) be such that \(\partial _t u -{\mathcal {L}}_Y^\omega \, u \ge 0\) on \(Q(n)\). Then, there exists \(C_{11}=C_{11}(d)\) and \(\xi = \xi (u)\) such that for all \({\uplambda }> 0\) and all \(\varrho \in [1/2, 1)\),

$$\begin{aligned} m [ \{ (t,z) \in I_1 \times B(\varrho n) :\, \ln u(t, z) \,<\, -{\uplambda }\,-\, \xi \} ] \le C_{11}\, M_3\, M_4\, m[I_1 \times B(\varrho n)]\, {\uplambda }^{-1}, \end{aligned}$$

and

$$\begin{aligned} m [ \{ (t,z) \in I_1 \times B(\varrho n) :\, \ln u(t,z) \,>\, {\uplambda }\,-\, \xi \} ] \le C_{11}\, M_3 \, M_4\, m[I_1 \times B(\varrho n)] \, {\uplambda }^{-1}, \end{aligned}$$

where \(M_3 \mathrel {\mathop :}=||\mu ^{\omega }||_{1, B(n)} /||\mu ^{\omega }||_{1, B(n/2)}\) and \(M_4 \mathrel {\mathop :}=1 \vee ||\mu ^{\omega }||_{p, B(n)}^2\, ||\nu ^{\omega }||_{q, B(n)} \).

Proof

Let \(\eta \) be defined as

$$\begin{aligned} \eta (x) = \left[ 1 - \frac{d(x_0,x)}{n}\right] _+. \end{aligned}$$

In particular, \(\eta \le 1\), \(\max _{x\sim y} |\nabla \eta (x,y)| \le 1/n\) and \({{\mathrm{\mathrm {osr}}}}(\eta ^2) \le 4\). Moreover, note that \(\eta \ge 1 - \varrho \) on \(B(\varrho n)\), and by the volume regularity of \(V\) we get

$$\begin{aligned} |B(n)|\, ||\mu ^{\omega }||_{1, B(n)}\ge & {} \langle \eta ^2,1\rangle _{\ell ^{2}(V,\mu ^\omega )}\nonumber \\\ge & {} (1 - \varrho )^2\, \mu ^{\omega }[B(\varrho n)] \;\ge \; \frac{(1 - \varrho )^2}{2^{d}\, C_{\mathrm {reg}}^{2}}\, |B(n)|\, ||\mu ^{\omega }||_{1, B(n/2)}.\nonumber \\ \end{aligned}$$
(4.15)

Since \(\partial _t u -{\mathcal {L}}_Y^\omega \, u \ge 0\) on \(Q(n)\), we have

$$\begin{aligned} \partial _t \langle \eta ^2,-\ln u_t\rangle _{\ell ^{2}(V, \mu ^{\omega })}&= \langle \eta ^2 u_t^{-1},-\partial _t u_t\rangle _{\ell ^{2}(V, \mu ^{\omega })}\nonumber \\&\le \langle \eta ^2 u_t^{-1},-{\mathcal {L}}_Y^{\omega }\, u_t\rangle _{\ell ^{2}(V, \mu ^{\omega })} = \langle \nabla (\eta ^2 u_t^{-1}),\omega \, \nabla u_t\rangle _{\ell ^{2}(E)}. \end{aligned}$$

In view of (3.16) and (4.15), we obtain

$$\begin{aligned} \langle \nabla (\eta ^2 u_t^{-1}),\omega \, \nabla u_t\rangle _{\ell ^{2}(E)} \le -{\mathcal {E}}^{\omega ,\eta }(\ln u_t) \,+\, \frac{c_1}{n^2}\, \langle \eta ^2,1\rangle _{\ell ^{2}(V, \mu ^\omega )} \, M_3. \end{aligned}$$

Hence,

$$\begin{aligned} \partial _t \langle \eta ^2,-\ln u_t\rangle _{\ell ^{2}(V, \mu ^{\omega })} \le -{\mathcal {E}}^{\omega ,\eta }(\ln u_t) \,+\, \frac{c_1}{n^2}\, \langle \eta ^2,1\rangle _{\ell ^{2}(V, \mu ^\omega )} \, M_3. \end{aligned}$$
(4.16)

From now on the proof goes literally along the lines of [32, Lemma 5.4.1].\(\square \)

Proof of Theorem 1.4

Consider a caloric function \(u > 0\) on the space-time cylinder \(Q(n) = [t_0, t_0 + n^2] \times B(n)\) with \(B(n) = B(x_0, n)\) and fix some \(\varrho \in [3/4, 1)\). Our aim is to apply Lemma 2.5 to the function \(f_1 \mathrel {\mathop :}=u^{-1}\, \mathrm {e}^{-\xi }\), where \(\xi = \xi (u)\) is as in Lemma 4.6, with \(U = [t_0, t_0 + n^2] \times B(\varrho n)\),

$$\begin{aligned} U_{\upsigma } = [\upsigma \, t_0 + (1 - \upsigma )\,s' ,\, t_0 + n^2] \times B(\upsigma \varrho n) \quad \text {with} \quad s' = t_0 + \frac{3 \varrho }{4 \varrho - 2}\; n^2, \end{aligned}$$

\(\delta = 1/2 \varrho \), \(\alpha ^* = \infty \) and \(m = \mathrm {d}t \times | \cdot |\). In view of (4.8) the condition (i) of Lemma 2.5 is satisfied with \(\gamma = 2(1 + \kappa )\) and constant \(C_{\mathrm {BG1}}\) given by

$$\begin{aligned} C_{\mathrm {BG1}} = c\, ( (1 \vee ||\mu ^\omega ||_{p,B(n)})\; (1 \vee ||\nu ^\omega ||_{q,B(n)}) )^{ 2 (1 + \kappa )}. \end{aligned}$$

Since \(\ln f_1 > {\uplambda }\) is equivalent to \(\ln u < -{\uplambda }-\xi \), the condition (ii) of Lemma 2.5 follows immediately from Lemma 4.6. Noting that \(U_{\delta } = Q_+\), an application of Lemma 2.5 gives

$$\begin{aligned} \max _{(t, x) \in Q_+} f_1(t,x) \le A_1^\omega \qquad \Longleftrightarrow \qquad \mathrm {e}^{-\xi } \le A_1^\omega \min _{(t,x) \in Q_+} u(t,x). \end{aligned}$$
(4.17)

Next, consider the function \(f_2 = u\, \mathrm {e}^{\xi }\), where \(\xi = \xi (u)\) is again as in Lemma 4.6. Now, set \(U_{\upsigma } = [\upsigma \, t_0 + (1 - \upsigma )\, s', (1 - \upsigma )\, s'' + \upsigma (t_0 + n^2)] \times B(\upsigma \varrho n)\) with

$$\begin{aligned} s' = t_0 + \frac{\varrho }{4 \varrho - 2}\, n^2 \qquad \text {and} \qquad s'' = t_0 + \frac{\varrho - 1}{2 \varrho - 1}\, n^2, \end{aligned}$$

where \(\delta =3/4\varrho \), \(\alpha ^*\in (0,1/2)\) is arbitrary but fixed, and \(m\) is as before. Similarly to the procedure above, the two conditions of Lemma 2.5 are verified by means of Proposition 4.5 and Lemma 4.6. Hence, \(||f_2||_{\alpha ^*,U_\delta }\le A_2^\omega \), and since \(U_{1/2\varrho } = Q_-\) we obtain from Corollary 4.3

$$\begin{aligned} \max _{(t,x) \in Q_{-}} u(t,x) \le ( c(\varrho )\, C_9(\alpha ^*)\, ( 1 \vee ||\mu ^{\omega }||_{p,B(n)}\, ||\nu ^{\omega }||_{q,B(n)} ) )^{\kappa '}\; A_2^{\omega }\, \mathrm {e}^{-\xi }. \end{aligned}$$
(4.18)

By combining (4.17) and (4.18), the assertion is immediate. Note that the quantity \(M^\omega _{x_0,n}\) enters the Harnack constant \(C_{\mathrm {PH}}\) through the application of Lemma 4.6.\(\square \)

4.3 Near-diagonal estimates and Hölder-continuity

It is well known that the parabolic Harnack inequality is a powerful property with a number of important consequences. We state only a few of them here: it immediately yields near-diagonal estimates on the heat kernel and a quantitative Hölder continuity estimate for caloric functions.

Proposition 4.7

(Near-diagonal heat-kernel bounds) Let \(t > 0\) and \(x_1 \in V\). Then,

  1. (i)

    for any \(x_2 \in V\),

    $$\begin{aligned} q^\omega (t, x_1, x_2) \le \frac{C_{\mathrm {PH}}}{\mu ^{\omega }\left[ B\left( x_1, \tfrac{1}{2} \sqrt{t}\right) \right] } \le \frac{C_{\mathrm {PH}}\, C_{\mathrm {reg}}}{||\mu ^{\omega }||_{1, B\left( x_1, \sqrt{t} / 2\right) }}\; 2^{d}\,t^{-\frac{d}{2}}, \end{aligned}$$
  2. (ii)

    for any \(x_2 \in B\left( x_1,\tfrac{1}{2} \sqrt{t}\right) \)

    $$\begin{aligned} q^\omega (t,x_1,x_2) \;\ge \; \frac{1}{C_{\mathrm {PH}}^{2}\, \big |B\left( x_1, \frac{1}{2} \sqrt{t}\right) \big |} \;\ge \; \frac{C_{\mathrm {reg}}}{C_{\mathrm {PH}}^{2}}\, \min \{1, 2^{d}\, t^{-\frac{d}{2}} \} \end{aligned}$$

where \(C_{\mathrm {PH}}(M^\omega _{x_1,\sqrt{t}}, \Vert \mu \Vert _{p, B(x_1, \sqrt{t})}, \Vert \nu \Vert _{q, B(x_1, \sqrt{t})})\) is the constant in Theorem 1.4.

Proof

This follows by the same arguments as in [20, Proposition 3.1].\(\square \)

Proposition 4.8

Suppose that Assumption 1.6 holds. Let \(x_0 \in V\) and let \(s(x_0)\) and \(C_{\mathrm {PH}}^*\) be as in Assumption 1.6. Further, let \(R\ge s(x_0)\) and \(\sqrt{T} \ge R\). Set \(T_0 \mathrel {\mathop :}=T+1\) and \(R_0^2 \mathrel {\mathop :}=T_0\) and suppose that \(u > 0\) is a caloric function on \([0, T_0] \times B(x_0, R_0)\). Then, for any \(x_1, x_2 \in B(x_0,R)\) we have

$$\begin{aligned} |u(T,x_1) - u(T,x_2)| \le c\, \left( \frac{R}{T^{1/2}} \right) ^{ \theta }\, \max _{[3 T_0 / 4, T_0] \times B(x_0, R_0 / 2)} u, \end{aligned}$$

where \(\theta = \ln (2C^*_\mathrm{{PH}}/(2C^*_\mathrm{{PH}}-1)) / \ln 2\) and \(c\) depends only on \(C^*_\mathrm{{PH}}\).

Proof

We will follow the arguments in [18, Lemma 6] (cf. also [8, Proposition 3.2]), but some extra care is needed due to the special form of the Harnack constant. Set \(R_k = 2^{-k} R_0\). Let

$$\begin{aligned} Q_k = [T_0 - R_k^2, T_0] \times B(x_0,R_k), \end{aligned}$$

and let \(Q_k^-\) and \(Q_k^+\) be accordingly defined as

$$\begin{aligned} Q_k^- {=} \left[ T_0 - \tfrac{3}{4} R_k^2, T_0 {-} \tfrac{1}{2} R_k^2 \right] \times B\left( x_0, \tfrac{1}{2} R_k\right) , \quad Q_k^+ {=} \left[ T_0 - \tfrac{1}{4} R_k^2, T_0 \right] \times B\left( x_0,\tfrac{1}{2} R_k\right) . \end{aligned}$$

In particular, note that \(Q_{k+1}\subset Q_k\) and \(Q_k^+ = Q_{k+1}\). Setting

$$\begin{aligned} v_k = \frac{u - \min _{Q_k} u}{\max _{Q_k} u - \min _{Q_k}u}, \end{aligned}$$

we have that \(v_k\) is caloric, \(0 \le v_k \le 1\) and \({{\mathrm{\mathrm {osc}}}}(v_k, Q_k) = 1\). (Here \({{\mathrm{\mathrm {osc}}}}(u,A)=\max _A u - \min _A u\) denotes the oscillation of \(u\) on \(A\).) Replacing \(v_k\) by \(1-v_k\) if necessary we may assume that \(\max _{Q_k^-} v_k \ge \frac{1}{2}\).

Now for every \(k\) such that \(R_{k} \ge s(x_0)\) we apply the PHI in Theorem 1.4 and using the definition of \(C^*_\mathrm{{PH}}\) we get

$$\begin{aligned} \tfrac{1}{2} \le \max _{Q_k^-} v_k \le C_{\mathrm {PH}}^*\, \min _{Q_k^+} v_k. \end{aligned}$$

Since \({{\mathrm{\mathrm {osc}}}}(u,Q_{k+1}) = {{\mathrm{\mathrm {osc}}}}(u,Q_k^+)\), it follows that for such \(k\),

$$\begin{aligned} {{\mathrm{\mathrm {osc}}}}(u,Q_{k+1})&= \frac{\max _{Q_k^+} u - \min _{Q_k^+} u}{{{\mathrm{\mathrm {osc}}}}(u, Q_k)} {{\mathrm{\mathrm {osc}}}}(u,Q_k)\\&= \left( 1 \,+\, \frac{\max _{Q_k^+} u - \max _{Q_k} u }{{{\mathrm{\mathrm {osc}}}}(u, Q_k)} \,-\, \min _{Q_k^+} v_k \right) \; {{\mathrm{\mathrm {osc}}}}(u, Q_k). \end{aligned}$$

Hence,

$$\begin{aligned} {{\mathrm{\mathrm {osc}}}}(u, Q_{k+1}) \le (1 - \delta )\, {{\mathrm{\mathrm {osc}}}}(u, Q_k) \end{aligned}$$

with \(\delta = (2 C_{\mathrm {PH}}^*)^{-1}\). Now we choose \(k_0\) such that \(R_{k_0} \ge R > R_{k_0+1}\). Then, we can iterate the above inequality on the chain of boxes \(Q_1 \supset Q_2 \supset \cdots \supset Q_{k_0}\) to obtain

$$\begin{aligned} {{\mathrm{\mathrm {osc}}}}(u, Q_{k_0}) \le (1 - \delta )^{k_0-1}\, {{\mathrm{\mathrm {osc}}}}(u,Q_1). \end{aligned}$$

Note that \(B(x_0, R) \subseteq B(x_0, R_{k_0})\), \(T \in [T_0 - R_{k_0}^2,T_0]\) and \(Q_1 = Q_0^+\). Finally, since \((1 - \delta )^{k_0} \le c (R / T^{1/2})^\theta \), the claim follows.\(\square \)
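The following short sketch (the constant \(C\) below stands for \(C^*_{\mathrm {PH}}\) and is an assumed sample value, as are \(R\) and \(T\)) illustrates the quantitative step behind this proof: with \(\delta = (2C)^{-1}\) one has \(1-\delta = 2^{-\theta }\), so the oscillation bound \((1-\delta )^{k_0}\) is comparable to \((R/T^{1/2})^{\theta }\).

```python
# Sketch of the oscillation decay behind Proposition 4.8; C, R, T are assumed
# sample parameters.
import math

C = 5.0
delta = 1.0 / (2.0 * C)
theta = math.log(2 * C / (2 * C - 1)) / math.log(2)
assert abs((1 - delta) - 2 ** (-theta)) < 1e-12     # 1 - delta = 2^{-theta}

R, T = 10.0, 1.0e4
R0 = math.sqrt(T + 1)                       # R_0^2 = T_0 = T + 1
k0 = math.floor(math.log2(R0 / R))          # R_{k0} >= R > R_{k0 + 1}
decay = (1 - delta) ** k0                   # oscillation reduction after k0 steps
target = (R / math.sqrt(T)) ** theta
assert decay <= 2 ** theta * target         # (1 - delta)^{k0} <= c (R/T^{1/2})^theta
print(f"theta = {theta:.4f}, (1-delta)^k0 = {decay:.4f}, (R/sqrt(T))^theta = {target:.4f}")
```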

5 Local limit theorem

In this section, we prove the quenched local limit theorem for the random conductance model as formulated in Theorem 1.11. For this reason, we consider the \(d\)-dimensional Euclidean lattice \(({\mathbb {Z}}^d, E_d)\) as the underlying graph and assume throughout this section that the random weights \(\omega \) satisfy the Assumptions 1.8 and 1.9. Let us recall that \(k_t = k_t^{\Sigma _Y} \), defined in (1.11), denotes the Gaussian heat kernel with diffusion matrix \(\Sigma _Y^2\).

The proof of the local limit theorem is based on the approach in [8] and [18]. In the sequel, we will briefly explain the strategy. For \(x \in {\mathbb {R}}^d\) and \(\delta >0\), let

$$\begin{aligned} J(t, n) \;\equiv \; J(t, x, \delta , n) \mathrel {\mathop :}={{\mathrm{\mathrm {P}}}}_0^{\omega } [ Y^{(n)}_t \in C(x, \delta ) ] \,-\, \int _{C(x, \delta )} k_t(y)\, \mathrm {d}y. \end{aligned}$$
(5.1)

Here, we denote by \(C(x, \delta ) = x + [-\delta , \delta ]^d\) the cube with center \(x\) and side length \(2 \delta \), i.e. \(C(x, \delta )\) is a ball in \({\mathbb {R}}^d\) with respect to the supremum norm \(|\cdot |_{\infty }\). Further, for any \(n \ge 1\), we set \(C^n(x, \delta ) \mathrel {\mathop :}=n C(x, \delta ) \cap {\mathbb {Z}}^d\).

Now, let us rewrite (5.1) as \(J(t,n) = J_1(t,n) + J_2(t, n) + J_3(t, n) + J_4(t, n)\), where

$$\begin{aligned} J_1(t, n)&\mathrel {\mathop :}=\sum _{z \in C^n(x,\delta )} ( q^{\omega }(n^2 t, 0, z) \,-\, q^{\omega }(n^2 t, 0, \lfloor nx \rfloor ) )\, \mu ^{\omega }(z),\\ J_2(t, n)&\mathrel {\mathop :}=\mu ^{\omega }[C^n(x,\delta ) ]\, ( q^{\omega }(n^2 t, 0, \lfloor nx \rfloor ) \,-\, n^{-d}\, a\, k_t(x) ),\\ J_3(t, n)&\mathrel {\mathop :}=k_t(x)\, \big ( \mu ^{\omega }[C^n(x,\delta ) ]\, n^{-d}\, a - (2\delta )^d \big ),\\ J_4(t, n)&\mathrel {\mathop :}=\int _{C(x,\delta )} ( k_t(x) - k_t(y) )\, \mathrm {d}y, \end{aligned}$$

and \(a = 1 / {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)]\). Thus, in order to conclude a quenched local limit theorem, it remains to establish suitable upper bounds on the terms \(J_1\), \(J_3\) and \(J_4\), uniformly for \(t \in [T_1, T_2] \subset (0, \infty )\), that are small compared to \(\mu ^{\omega }[C^n(x, \delta )]\, n^{-d}\). Further, note that by means of the spatial ergodic theorem, cf. Lemma 5.1 below, \(\mu ^{\omega }[C^n(x, \delta )]\, n^{-d}\) converges \({{\mathrm{\mathbb {P}}}}\)-a.s. to \({{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)] (2 \delta )^{d}\) as \(n\) tends to infinity. Hence, we obtain that \(\lim _{n \rightarrow \infty } |J_3(t, n)| / \mu ^{\omega }[C^n(x, \delta )] n^{-d} = 0\) and \(\lim _{n \rightarrow \infty } |J(t,n)| / \mu ^{\omega }[C^n(x, \delta )] n^{-d} = 0\) uniformly for all \(t \in [T_1, T_2]\). The latter follows from the quenched invariance principle, see Theorem 1.10. Further, note that \(q^{\omega }(\cdot , 0, \cdot )\) is a caloric function. Thus, provided that the assumptions of Proposition 4.8 are satisfied, the Hölder continuity combined with the near-diagonal upper bounds of Proposition 4.7 implies

$$\begin{aligned} \max _{z \in C^{n}(x, \delta )}\, n^{d}\, | q^{\omega }( n^2 t, 0, z ) \,-\, q^{\omega }( n^2 t, 0, \lfloor n x \rfloor ) | \le c\, \delta ^{\theta }\, T_1^{-\frac{d}{2} - \frac{\theta }{2}}. \end{aligned}$$

This estimate allows us to deduce that \(\lim _{n \rightarrow \infty } |J_1(t,n)| / \mu ^{\omega }[C^n(x, \delta )] n^{-d} = {\mathcal {O}}(\delta ^{\theta })\) uniformly for \(t \in [T_1, T_2]\), which vanishes when \(\delta \) tends to \(0\). The verification of the assumptions of Proposition 4.8 will be based on the ergodic theorem as well. Finally, a uniform upper bound on \(|J_4(t, n)| / \mu ^{\omega }[C^n(x, \delta )] n^{-d}\) can be deduced from the explicit form of \(k_t(x)\).
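As a purely algebraic sanity check of the decomposition \(J = J_1 + J_2 + J_3 + J_4\), the following sketch (assuming the sympy package is available) verifies that the four terms telescope; the symbols below are mere abbreviations for the quantities defined above.

```python
# Bookkeeping check that J_1 + J_2 + J_3 + J_4 = J(t, n).  Abbreviations:
# S = sum_z q(n^2 t,0,z) mu(z) = P_0[Y_t^(n) in C(x,delta)],
# Q = q(n^2 t,0,floor(nx)), M = mu[C^n(x,delta)], K = k_t(x),
# Intg = int_{C(x,delta)} k_t(y) dy, V = (2 delta)^d, nd = n^{-d}, a = 1/E[mu(0)].
import sympy as sp

S, Q, M, K, Intg, V, a, nd = sp.symbols('S Q M K Intg V a nd', positive=True)

J1 = S - M * Q
J2 = M * (Q - nd * a * K)
J3 = K * (M * nd * a - V)
J4 = V * K - Intg
assert sp.simplify(J1 + J2 + J3 + J4 - (S - Intg)) == 0
print("J_1 + J_2 + J_3 + J_4 telescopes to J, as claimed")
```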

For the proof of Theorem 1.11 we will apply the general criterion in [18, Theorem 1], so we will just verify the required assumptions on the underlying sequence of rescaled graphs, the weights and a PHI on sufficiently large balls.

Let us start with some immediate consequences of the ergodic theorem.

Lemma 5.1

For any \(x \in {\mathbb {R}}^d\) and \(\delta > 0\) we have that, for \({{\mathrm{\mathbb {P}}}}\)-a.e. \(\omega \),

$$\begin{aligned} \lim _{n \rightarrow \infty }\, ||\mu ^{\omega }||_{p, C^n(x, \delta )}^p = {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)^p] \quad \; \text {and} \quad \; \lim _{n \rightarrow \infty }\, ||\nu ^{\omega }||_{q,C^n(x, \delta )}^q = {{\mathrm{\mathbb {E}}}}[\nu ^{\omega }(0)^q]. \end{aligned}$$

In particular, for every \(\varepsilon > 0\) we have \(r^{\omega }(x, \delta , \varepsilon ) < \infty \) for \({{\mathrm{\mathbb {P}}}}\)-a.e. \(\omega \), where

$$\begin{aligned} r^{\omega }(x, \delta , \varepsilon ) \mathrel {\mathop :}=\inf \{ n&\ge 1 \;:\;\; \left| ||\mu ^{\omega }||_{p, C^{n}(x, \delta )}^p \,-\, {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)^p] \right| \;<\; \varepsilon \\&\text {and } \left| ||\nu ^{\omega }||_{q, C^{n}(x, \delta )}^q \,-\, {{\mathrm{\mathbb {E}}}}[\nu ^{\omega }(0)^q] \right| \;<\; \varepsilon \}. \end{aligned}$$

Proof

The assertions of the lemma are an immediate consequence of the spatial ergodic theorem [25, Theorem 2.8 in Chapter 6], i.e.

$$\begin{aligned} \lim _{n \rightarrow \infty } ||\mu ^{\omega }||_{p, C^n(x, \delta )}^p = \lim _{n \rightarrow \infty } \frac{1}{| C^n(x, \delta ) |}\, \sum _{z \in C^n(x, \delta )} ( \mu ^{\tau _z \omega }(0) )^p = {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)^p], \end{aligned}$$

provided we have shown that the sequence \(\{ C^n(x, \delta ) : n \in {\mathbb {N}}\}\) is regular. For this purpose, consider \(\{C^{n}(0, |x|_{\infty } + \delta ) : n \in {\mathbb {N}}\}\). Obviously, this sequence of cubes is increasing and has the property that \(C^n(x, \delta ) \subset C^{n}(0, |x|_{\infty } + \delta )\) for all \(n \in {\mathbb {N}}\). Moreover, there exists \(K \equiv K(x, \delta ) < \infty \) such that

$$\begin{aligned} | C^{n}(0, |x|_{\infty } + \delta ) | \le K\, | C^n(x, \delta ) |, \qquad \forall \, n \in {\mathbb {N}}. \end{aligned}$$

Thus, the sequence \(\{ C^n(x, \delta ) : n \in {\mathbb {N}}\}\) is regular.\(\square \)

Remark 5.2

Note that \(\{ B(\lfloor x n \rfloor , \delta n) : n \in {\mathbb {N}}\}\) is also a regular sequence of boxes for every given \(x \in {\mathbb {R}}^d\) and \(\delta > 0\). Thus, the assertion of Lemma 5.1 also holds if we replace \(C^n(x,\delta )\) by \(B(\lfloor n x \rfloor , \delta n)\).

Proof of Theorem 1.11

We just need to verify Assumptions 1 and 4 in [18]. Then, the assertion follows immediately from [18, Theorem 1].

Consider the sequence of rescaled Euclidean lattices \(\big \{\! \left( \frac{1}{n} {\mathbb {Z}}^d, \frac{1}{n} E_d\right) : n \in {\mathbb {N}}\big \}\) on the underlying metric space \(({\mathbb {R}}^d, |\cdot - \cdot |_{\infty })\) and set \(\alpha (n) = n\), \(\beta (n) = n^d / {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)]\) and \(\gamma (n) = n^2\). Recall that we denote by \(d(x,y)\) the graph distance for \(x, y \in \tfrac{1}{n} {\mathbb {Z}}^d\). Since,

$$\begin{aligned} \alpha (n)\, |x - y|_{\infty } \le d(x,y) \le d\, \alpha (n)\, |x - y|_{\infty }, \qquad \forall \, x, y \in \tfrac{1}{n}\, {\mathbb {Z}}^d, \end{aligned}$$

and for all \(\delta > 0\)

$$\begin{aligned} \lim _{n \rightarrow \infty }\, \tfrac{1}{n}\, {\mathbb {Z}}^d \cap C(0, \delta ) = C(0, \delta ) \end{aligned}$$

with respect to the usual Hausdorff topology on non-empty compact subsets of \(({\mathbb {R}}^d, |\cdot - \cdot |_{\infty })\), Assumptions 1(a), (b) and 4(b) are satisfied. Assumption 1(c) requires that for every \(x \in {\mathbb {R}}^d\) and \(\delta > 0\),

$$\begin{aligned} \lim _{n \rightarrow \infty }\, \beta (n)^{-1}\, \mu ^{\omega }[C^n(x,\delta )] = (2 \delta )^d = \int _{C(x,\delta )} \mathrm {d}y. \qquad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$

But, in view of Lemma 5.1 and the definition of \(\beta (n)\), this assumption is satisfied. Further, Assumption 1(d) requires that for any compact interval \(I \subset (0, \infty )\), \(x \in {\mathbb {R}}^d\) and \(\delta > 0\)

$$\begin{aligned} \lim _{n \rightarrow \infty } {{\mathrm{\mathrm {P}}}}_0^{\omega } \Bigg [\tfrac{1}{n} Y_{\gamma (n) t} \in C(x, \delta ) \Bigg ] = \int _{C(x, \delta )} k_t(y)\, \mathrm {d}y, \qquad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$

uniformly for all \(t \in I\). As we have discussed above, this is a direct consequence of Theorem 1.10.

It remains to verify Assumption 4(c): for every \(x_0 \in \frac{1}{n} {\mathbb {Z}}^d\), \(n \ge 1\), there exists \(s_n(x_0) < \infty \) such that the parabolic Harnack inequality with constant \(C_{\mathrm {PH}}^*\) holds on \(Q(r) \equiv [0, r^2] \times B(x_0, r)\) for all \(r \ge s_n(x_0)\), and for every \(x \in {\mathbb {R}}^d\) we have \(\alpha (n)^{-1} s_n(\lfloor n x \rfloor ) \rightarrow 0\) as \(n \rightarrow \infty \).

By the ergodic theorem we know that \(\Vert \mu ^{\omega }\Vert _{p, B(x_0, n)}\) and \(\Vert \nu ^{\omega }\Vert _{q, B(x_0, n)}\) converge as \(n\) tends to infinity for \({{\mathrm{\mathbb {P}}}}\)-a.e. \(\omega \). In particular, by Lemma 5.1 and Remark 5.2, we have for all \(x \in {\mathbb {R}}^d\) and \(\varepsilon > 0\) that \({{\mathrm{\mathbb {P}}}}\)-a.s. there exists \(r^{\omega }(x, 1, \varepsilon ) < \infty \) such that

$$\begin{aligned} ||\mu ^{\omega }||_{p, B(\lfloor n x \rfloor , n)} \le ( \varepsilon \,+\, {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)^p] )^{\frac{1}{p}}, \qquad ||\nu ^{\omega }||_{q, B(\lfloor n x \rfloor , n)} \le ( \varepsilon \,+\, {{\mathrm{\mathbb {E}}}}[\nu ^{\omega }(0)^q] )^{\frac{1}{q}} \end{aligned}$$

for all \(n \ge r^{\omega }(x, 1, \varepsilon )\). Since the Harnack constant appearing in Theorem 1.4 depends monotonically on \(\Vert \mu ^{\omega }\Vert _{p, B(\lfloor n x \rfloor , n)}\) and \(\Vert \nu ^{\omega }\Vert _{q, B(\lfloor n x \rfloor , n)}\), this implies that for any \(r \ge r^{\omega }(x, 1, \varepsilon )\) the parabolic Harnack inequality holds on \(Q(r)\) with constant \(C_{\mathrm {PH}}^* \mathrel {\mathop :}=C_{\mathrm {PH}}((\varepsilon + {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)^p])^{1/p}, (\varepsilon + {{\mathrm{\mathbb {E}}}}[\nu ^{\omega }(0)^q])^{1/q} )\). Thus, Assumption 4(c) of [18] is satisfied as well.\(\square \)

We now give an example, communicated to us by Noam Berger, which shows that the moment condition in Assumption 1.9 is optimal for a local limit theorem. As a preparation, we first note in the following lemma that a local limit theorem implies certain near-diagonal bounds on the heat kernel.

Lemma 5.3

Suppose that the local limit theorem holds in the sense that

$$\begin{aligned} \lim _{n \,\rightarrow \, \infty } \sup _{|x| \le 1}\, \sup _{t \in [1,4]}\, | n^{d}\, q^{\omega }(n^2 t, 0, \lfloor nx \rfloor ) - a\, k_t(x) | = 0, \qquad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$
(5.2)

with \(a \mathrel {\mathop :}=1 / {{\mathrm{\mathbb {E}}}}[\mu ^{\omega }(0)]\). Then, there exists \(N_0 = N_0(\omega )\) and constants \(C_{12}\) and \(C_{13}\) such that \({{\mathrm{\mathbb {P}}}}\)-a.s. for all \(t \ge N_0\) and \(x \in B\left( 0,\sqrt{t}\right) \),

$$\begin{aligned} C_{12} t^{-d/2} \le q^\omega (t,0,x) \le C_{13} t^{-d/2}. \end{aligned}$$
(5.3)

Proof

Setting \(n=\lfloor \sqrt{t} \rfloor \) and \(h(t)=t/\lfloor \sqrt{t} \rfloor ^2\) we write

$$\begin{aligned} q^\omega (t,0,x) = n^{-d}\, ( n^d q^\omega (n^2 h(t), 0, \lfloor n (x/n) \rfloor ) - a\, k_{h(t)}(x/n) ) + n^{-d}\, a k_{h(t)}(x/n). \end{aligned}$$

Since \(|x|/n \le 1\) and \(h(t) \in [1,4]\) for \(t \ge 1\) we have \(c^{-1} \le k_{h(t)}(x/n)\le c\) for some \(c > 0\) independent of \(t\), \(x\) and \(n\). Further, by (5.2) for \({{\mathrm{\mathbb {P}}}}\)-a.e. \(\omega \),

$$\begin{aligned}&| n^d\, q^\omega (n^2 h(t),0, \lfloor n (x/n) \rfloor ) \,-\, a\, k_{h(t)}(x/n) |\\&\quad \le \; \sup _{|y| \le 1} \sup _{s\in [1,4]} | n^d\, q^\omega (n^2s, 0, \lfloor n y\rfloor ) \,-\, a\, k_s(y) | \;\rightarrow \; 0 \end{aligned}$$

as \(n \rightarrow \infty \), and we obtain (5.3).\(\square \)
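A quick numerical sanity check (the sampled values of \(t\) are arbitrary) of the rescaling used in this proof: for \(t \ge 1\), \(h(t) = t/\lfloor \sqrt{t}\rfloor ^2\) indeed stays in \([1,4)\), so (5.2) applies uniformly over the relevant times.

```python
# Sanity check of h(t) = t / floor(sqrt(t))**2 for t >= 1.
import math

for t in [1.0, 1.5, 3.9, 4.0, 10.0, 99.9, 1234.5, 999999.0]:
    n = math.floor(math.sqrt(t))
    h = t / n ** 2
    assert 1.0 <= h < 4.0, (t, h)
print("h(t) in [1, 4) for all sampled t >= 1")
```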

Theorem 5.4

Let \(d \ge 2\) and \(p,q \in (1,\infty ]\) be such that \(1/p + 1/q > 2/d\). There exists an ergodic law \({{\mathrm{\mathbb {P}}}}\) on \((\Omega , {\mathcal {F}})\) with \({{\mathrm{\mathbb {E}}}}[\omega (e)^p] < \infty \) and \({{\mathrm{\mathbb {E}}}}[\omega (e)^{-q}] < \infty \) such that the conclusion of Lemma 5.3 does not hold.

The strategy of the proof is to construct an ergodic environment such that in every large box one can find a sufficiently deep trap for the walk.

Proof

Step 1: Definition of the environment. We first construct an ergodic law on \((\Omega , {\mathcal {F}})\). Let \(\{\xi (e): e\in E_d\}\) be a collection of i.i.d. random variables uniformly distributed over \([0,1]\). Further let \(k_0\in {\mathbb {N}}\) be such that \(2^{-k_0}<p_c\), where \(p_c\) denotes the critical probability for bond percolation on \({\mathbb {Z}}^d\). Then, for \(k\ge k_0\) we define \(\{\theta ^k(e) : e \in E_d\}\) by

$$\begin{aligned} \theta ^k(e) = {\left\{ \begin{array}{ll} 1 &{}\quad \text {if }\xi (e)\le 2^{-k}, \\ 0 &{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$

Further, set \({\mathcal {O}}^k \mathrel {\mathop :}=\{ e \in E_d: \theta ^k(e)=1 \}\) and let \(\{{\mathcal {H}}^k_j \}_{j \ge 1}\) be the collection of connected components of the graph \(({\mathbb {Z}}^d, {\mathcal {O}}^k)\) and \({\mathcal {H}}^k \mathrel {\mathop :}=\bigcup _{j\ge 1} {\mathcal {H}}^k_j\), \(k\ge k_0\). Note that \({{\mathrm{\mathbb {P}}}}\)-a.s. every component \({\mathcal {H}}^k_j\) is finite by our choice of \(k_0\). Notice that by construction the components are decreasing in \(k\), i.e. for each \({\mathcal {H}}^{k+1}_j\) there exists a unique \(i\) such that \({\mathcal {H}}^{k+1}_j \subseteq {\mathcal {H}}^k_i\). Since we have assumed \(1/p + 1/q > 2/d\) there exist \(\alpha < 1/p\) and \(\beta < 1/q\) such that \((\alpha + \beta )d /2 > 1\).

We now define the conductances by an iteration procedure over \(k\) starting from \(k_0\). We initialize \(\omega (e) = 1\) for all \(e \in E_d\). In the iteration step \(k\) for all \(j \ge 1\) we choose in each component \({\mathcal {H}}^k_j\) one edge \(e_{k,j}\) uniformly at random and set \(\omega (e_{k,j}) = 2^{\alpha k}\) and \(\omega (e') = 2^{-\beta k}\) for all \(e' \sim e_{k,j}\). Since the components \(\{{\mathcal {H}}^k_j\}\) are separated by at least one edge, this procedure is well-defined and does not allow high conductances on neighbouring edges. That is, on each component \({\mathcal {H}}^k_j\) we put one trap (see Fig. 1). Note that during the iteration it might happen that some traps are replaced by even deeper traps and that some of the weaker bonds are left when one of the associated bonds with high conductance has been overwritten.

Henceforth, we will call an edge \(e \in E_d\) a \(k\)-trap if \(\omega (e) \ge 2^{\alpha k}\) and \(\omega (e') \le 2^{-\beta k}\) for all \(e' \sim e\). Note that if the CSRW starts at a \(k\)-trap \(\{x,y\} \in E_d\), it will spend a time of order \(2^{(\alpha +\beta )k}\) in \(\{x,y\}\). In particular, there exists \(C_{14} > 0\) independent of \(k\) such that

$$\begin{aligned} {{\mathrm{\mathrm {P}}}}^{\omega }_x[Y_{2^{(\alpha +\beta )k}} = x] \;\ge \; C_{14} \qquad \text {for all }\,\, k \ge k_0. \end{aligned}$$
(5.4)
Fig. 1: Illustration of a \(k\)-trap in \(d = 2\)
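The iterative construction of Step 1 can be mimicked on a finite box; the following Python sketch is purely illustrative (the box size, the range of \(k\), the exponents \(\alpha , \beta \) and the boundary treatment are assumptions, and the infinite-volume and ergodicity aspects are of course not captured).

```python
# Illustrative sketch of Step 1 on a finite L x L box of Z^2 (boundary effects
# ignored): i.i.d. uniform labels xi(e), components of {xi <= 2^{-k}} found by
# union-find, and one trap edge per component.  All parameters are sample values.
import random

random.seed(0)
L, k0, K = 60, 3, 8
alpha, beta = 0.4, 0.7                       # sample exponents, (alpha+beta)*d/2 > 1

sites = [(i, j) for i in range(L) for j in range(L)]
edges = [((i, j), (i + 1, j)) for i in range(L - 1) for j in range(L)] + \
        [((i, j), (i, j + 1)) for i in range(L) for j in range(L - 1)]
xi = {e: random.random() for e in edges}
omega = {e: 1.0 for e in edges}
incident = {v: [] for v in sites}
for e in edges:
    incident[e[0]].append(e)
    incident[e[1]].append(e)

def components(open_edges):
    """Connected components (as edge lists) of the subgraph of open edges."""
    parent = {v: v for v in sites}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in open_edges:
        parent[find(u)] = find(w)
    comps = {}
    for e in open_edges:
        comps.setdefault(find(e[0]), []).append(e)
    return comps.values()

for k in range(k0, K + 1):
    open_edges = [e for e in edges if xi[e] <= 2 ** (-k)]
    for comp in components(open_edges):
        trap = random.choice(comp)           # the trap edge e_{k,j} of H_j^k
        omega[trap] = 2 ** (alpha * k)
        for v in trap:
            for e in incident[v]:
                if e != trap:
                    omega[e] = 2 ** (-beta * k)

print(sum(1 for e in edges if omega[e] > 1.0), "trap edges placed")
```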

Step 2: Ergodicity. The environment constructed in Step 1 is ergodic. First, the stationarity is obvious from the construction. To see that it is ergodic, consider two local functions \(f, g \in L^2(\Omega )\), meaning that \(f\) and \(g\) only depend on finitely many coordinates \(\{\omega (e): e \in K \}\) for some finite \(K \subset E_d\). Set

$$\begin{aligned} D_K \mathrel {\mathop :}=\max _{j: {\mathcal {H}}^{k_0}_j \cap K \ne \emptyset } {{\mathrm{\mathrm {diam }}}}{\mathcal {H}}_j^{k_0} \;<\; \infty \qquad {{\mathrm{\mathbb {P}}}}\text {-a.s.} \end{aligned}$$

Then, note that by construction the conductances on edges contained in \(K\) are independent of the conductances on edges at distance at least \(D_K + {{\mathrm{\mathrm {diam }}}}K\) from \(K\). This implies

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ f \cdot g\circ \tau _x ] \;\rightarrow \; {{\mathrm{\mathbb {E}}}}[f] \, {{\mathrm{\mathbb {E}}}}[g] \qquad \text {as } |x| \rightarrow \infty . \end{aligned}$$

By a standard density argument the same holds for all \(f,g \in L^2(\Omega )\). Thus, for any shift-invariant set \(A\in \mathcal {F}\),

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[ A ] = {{\mathrm{\mathbb {P}}}}[ A \cap \tau _x(A) ] \;\rightarrow \; {{\mathrm{\mathbb {P}}}}[A]^2 \qquad \text {as } |x| \rightarrow \infty , \end{aligned}$$

and therefore \({{\mathrm{\mathbb {P}}}}[A] \in \{0,1\}\).

Step 3: Moment condition. The environment satisfies the required moment condition. Indeed, by construction we have for any edge \(e\) the obvious bounds

$$\begin{aligned} \min _{e' \sim e} \xi (e')^{\beta } \le \min _{e' \sim e} \min _{k: \xi (e') \le 2^{-k}} 2^{-\beta k} \le \omega (e)\le \max _{k: \xi (e) \le 2^{-k}} 2^{\alpha k} \le \xi (e)^{-\alpha }, \end{aligned}$$

and therefore

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ \omega (e)^p ]&\le {{\mathrm{\mathbb {E}}}}[\xi (e)^{-\alpha p} ] = \int _0^1 u^{-\alpha p} \, \mathrm {d}u \;<\; \infty ,\\ {{\mathrm{\mathbb {E}}}}[\omega (e)^{-q}]&\le {{\mathrm{\mathbb {E}}}}[ \max _{e'\sim e} \xi (e')^{-\beta q} ] \le 2d\, \int _0^1 u^{- \beta q}\, \mathrm {d}u \;<\; \infty , \end{aligned}$$

where we used that \(\alpha < 1/p\) and \(\beta < 1/q\).

Step 4: Let \(A_n(e) \mathrel {\mathop :}=\{\omega (e)\ge 2^{n \alpha } \}\) and \(R_n = (n 2^n)^{1/d}\). In this step we show that almost surely for infinitely many \(n\) we find an \(n\)-trap in \(B(R_n) \equiv B(0,R_n)\). First,

$$\begin{aligned}&{{\mathrm{\mathbb {P}}}}\big [\bigcap \nolimits _{e \in E(R_n)} A_n(e)^c\big ]\\&\quad \le \; {{\mathrm{\mathbb {P}}}}[{\mathcal {H}}^n \cap B(R_n) = \emptyset ] \,+\, {{\mathrm{\mathbb {P}}}}\big [ {\mathcal {H}}^n \cap B(R_n) \ne \emptyset ,\, \bigcap \nolimits _{e\in E(R_n)} A_n(e)^c \big ], \end{aligned}$$

where \(E(R_n)\) is the set of all edges with at least one endpoint contained in \(B(R_n)\). The first term on the right hand side can be easily computed as

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[{\mathcal {H}}^n \cap B(R_n) = \emptyset ]= & {} {{\mathrm{\mathbb {P}}}}[\theta ^n(e)=0, \, \forall \, e \in E(R_n)]\nonumber \\= & {} {{\mathrm{\mathbb {P}}}}[\xi (e)>2^{-n}]^{c R_n^d} = (1-2^{-n})^{cR_n^d}. \end{aligned}$$
(5.5)

For the second term we obtain

$$\begin{aligned}&{{\mathrm{\mathbb {P}}}}\big [ {\mathcal {H}}^n \cap B(R_n) \ne \emptyset ,\; \bigcap \nolimits _{e \in E(R_n)} A_n(e)^c \big ]\\&\quad \le {{\mathrm{\mathbb {P}}}}\bigg [{\mathcal {H}}^n \cap B\big (\tfrac{1}{2} R_n\big ) = \emptyset \bigg ]\\&\quad \quad +\, {{\mathrm{\mathbb {P}}}}\Bigg [ {\mathcal {H}}^n \cap B\big (\tfrac{1}{2} R_n\big ) \ne \emptyset ,\, {\mathcal {H}}^n \cap B(R_n) \not =\emptyset ,\, \bigcap \nolimits _{e\in E(R_n)} A_n(e)^c \Bigg ]\\&\quad \le \; {{\mathrm{\mathbb {P}}}}\bigg [{\mathcal {H}}^n \cap B\big (\tfrac{1}{2} R_n\big ) = \emptyset \bigg ] \,+\, {{\mathrm{\mathbb {P}}}}\bigg [ \exists \, j \ge 1 : \mathrm {diam} {\mathcal {H}}_j^n\ge \tfrac{1}{2} R_n \bigg ]\\&\quad \le \; (1 - 2^{-n})^{cR_n^d} \,+\, |\partial B(R_n)|\, c\, \mathrm {e}^{-c R_n}, \end{aligned}$$

where in the last step we used an argument similar to the one leading to (5.5) and a tail estimate for the size of the holes in supercritical percolation clusters (see Theorems 6.10 and 6.14 in [24]). Thus,

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}\big [\bigcap \nolimits _{e\in E(R_n)} A_n(e)^c\big ] \le 2 (1 - 2^{-n})^{cR_n^d} \,+\, |\partial B(R_n)|\, c\, \mathrm {e}^{-c R_n}, \end{aligned}$$

which by our choice of \(R_n\) is summable over \(n\). In particular, by Borel–Cantelli, \({{\mathrm{\mathbb {P}}}}\)-a.s. there exists \(n_0 = n_0(\omega )\) such that for all \(n \ge n_0\) there is an edge \(\bar{e}_n \in E(R_n)\) for which the event \(A_n(\bar{e}_n)\) occurs, that is, for each \(n \ge n_0\) we find an \(n\)-trap \(\bar{e}_n\) in \(B(R_n)\).
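
The summability can also be seen numerically: with \(R_n = (n 2^n)^{1/d}\) the first term behaves like \(\mathrm {e}^{-cn}\) and the boundary term decays even faster. The following back-of-the-envelope check (with purely illustrative constants and \(|\partial B(R_n)| \asymp R_n^{d-1}\)) shows that the partial sums stabilise.

```python
# Illustrative summability check for 2(1 - 2^{-n})^{c R_n^d} + |dB(R_n)| c e^{-c R_n}
# with R_n = (n 2^n)^{1/d}; the constant c and the surface-order prefactor are
# placeholders, not the ones coming from the percolation estimates.
import math

d, c = 2, 0.5

def bound(n: int) -> float:
    R = (n * 2.0 ** n) ** (1.0 / d)
    # (1 - 2^{-n})^{c n 2^n} = exp(c n 2^n log(1 - 2^{-n})) ~ e^{-c n}
    first = 2.0 * math.exp(c * n * 2.0 ** n * math.log1p(-2.0 ** (-n)))
    second = c * (2 * d) * R ** (d - 1) * math.exp(-c * R)
    return first + second

print(sum(bound(n) for n in range(1, 40)))
print(sum(bound(n) for n in range(1, 80)))   # the partial sums barely change: the series converges
```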

Step 5: Now suppose that the heat kernel bounds in Lemma 5.3 hold. For any positive \(\delta < C_{14}\) there exists \(n_{\delta }\) such that, setting \(t_1 = t_1(n) \mathrel {\mathop :}=R_n^2\) and \(t_2 = t_2(n) \mathrel {\mathop :}=t_1^{(\alpha +\beta )d/2}\), we have for any \(x \in B(R_n)\),

$$\begin{aligned} \frac{{{\mathrm{\mathrm {P}}}}_0^\omega [Y_{t_1+t_2} = x]}{{{\mathrm{\mathrm {P}}}}_0^\omega [Y_{t_1} = x]} = \frac{q^\omega (t_1+t_2, 0, x)}{ q^\omega (t_1, 0, x)} \le c\, \Bigg (1 + \frac{t_2}{t_1}\Bigg )^{-\frac{d}{2}} \le \delta , \qquad \forall \, n \ge n_{\delta }, \end{aligned}$$
(5.6)

since \(t_2/t_1 = R_n^{(\alpha +\beta )d-2}\gg 1\) as \((\alpha + \beta )d - 2 > 0\) by our choice of \(\alpha \) and \(\beta \). On the other hand, using the Markov property for \(n \ge n_0\) and (5.4),

$$\begin{aligned} \frac{{{\mathrm{\mathrm {P}}}}_0^\omega [Y_{t_1+t_2} = \bar{e}_n^+]}{{{\mathrm{\mathrm {P}}}}_0^\omega [Y_{t_1} = \bar{e}_n^+]} \;\ge \; {{\mathrm{\mathrm {P}}}}_0^\omega \big [Y_{t_1+t_2} = \bar{e}_n^+ \, | \, Y_{t_1}=\bar{e}_n^+ \big ] = {{\mathrm{\mathrm {P}}}}^\omega _{\bar{e}_n^+}[Y_{t_2} = \bar{e}_n^+] \;\ge \; C_{14}, \end{aligned}$$

which for \(n \ge \max \{ n_0, n_\delta \}\) contradicts (5.6) as \(\delta < C_{14}\).\(\square \)
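
Plugging in numbers makes the contradiction transparent. In the following sketch the constants \(c\) and \(C_{14}\) (and the choice \(d = 2\), \(\alpha = \beta = 0.6\)) are purely illustrative; what matters is that \(t_2/t_1 = R_n^{(\alpha +\beta )d - 2} \rightarrow \infty \), so the upper bound in (5.6) eventually falls below any fixed \(\delta < C_{14}\), while the trap keeps the same ratio bounded below by \(C_{14}\).

```python
# Illustrative arithmetic for Step 5: the heat-kernel upper bound on the ratio
# q(t1+t2, 0, x) / q(t1, 0, x) decays like (t2/t1)^{-d/2}, while the n-trap
# forces the same ratio to stay >= C14.  All constants are placeholders.
d, alpha, beta, c, C14 = 2, 0.6, 0.6, 5.0, 0.1

for n in (10, 20, 40, 80):
    R = (n * 2.0 ** n) ** (1.0 / d)
    t1 = R ** 2
    t2 = t1 ** ((alpha + beta) * d / 2)
    upper = c * (1.0 + t2 / t1) ** (-d / 2)
    print(n, upper, upper < C14)      # eventually True, contradicting the trap bound
```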

6 Improving the lower moment condition

One cannot expect that, for general ergodic distributions, a necessary condition for a local limit theorem can be formulated in terms of the marginal distribution of the conductances alone. In fact, for any edge \(\{x, y\} \in E_d\), the lower bound condition

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ \omega (x,y)^{-q} ] \;<\; \infty \end{aligned}$$
(6.1)

for some \(q > 0\) can be improved in several situations to a more general condition based on the heat kernel of the VSRW

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ p^{\omega }(t, x, y)^{-q'} ] \;<\; \infty \end{aligned}$$
(6.2)

or on the heat kernel of the CSRW

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[( \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) )^{-q'} ] \;<\; \infty , \end{aligned}$$
(6.3)

respectively, for some \(q' > 0\).

Proposition 6.1

Suppose that \(d \ge 2\) and that Assumptions 1.8 and 1.9 hold with the moment condition \({{\mathrm{\mathbb {E}}}}[\omega (x,y)^{-q}] < \infty \) replaced by

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ p^{\omega }(t, x, y)^{-q}] \;<\; \infty \qquad \text {or} \qquad {{\mathrm{\mathbb {E}}}}[ ( \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) )^{-q} ] \;<\; \infty \end{aligned}$$

for any \(t>0\) and all \(x, y \in {\mathbb {Z}}^d\) such that \(\{x, y\} \in E_d\). Then, the statements of Theorem 1.10 and Theorem 1.11 hold.

Proof

We only consider the moment condition on the heat kernel \(q(t,x,y)\) of the CSRW. For \(x \in {\mathbb {Z}}^d\) and \(t > 0\), set \(\tilde{\nu }^\omega (x) \mathrel {\mathop :}=\sum _{y \sim x} (\mu ^{\omega }(x)\, q^{\omega }(t,x,y)\, \mu ^{\omega }(y))^{-1}\). Our main objective is to establish a Sobolev inequality similar to the one in (2.11). Indeed, one can show along the lines of the proof of this Sobolev inequality in [3, Proposition 3.5] that

$$\begin{aligned} ||(\eta u)^2||_{\rho ,B} {\le } c |B|^{\frac{2}{d}}\, ||\tilde{\nu }^{\omega }||_{q, B} \frac{1}{|B|} \sum _{\{x,y\} \in E} \mu ^{\omega }(x)\, q^{\omega }(t,x,y)\, \mu ^{\omega }(y) ( (\eta u)(x) {-} (\eta u)(y) )^2 \end{aligned}$$

(cf. equation (3.13) in [3]). Since the sum on the right hand side is smaller than

$$\begin{aligned} {\mathcal {E}}^{q_t}(\eta u) \mathrel {\mathop :}=\frac{1}{2}\, \sum _{x,y}\, \mu ^{\omega }(x)\, q^{\omega }(t,x,y)\, \mu ^{\omega }(y) ((\eta u)(x) - (\eta u)(y) )^2 \end{aligned}$$

and \(t^{-1} {\mathcal {E}}^{q_t}_{\eta ^2}(u) \nearrow {\mathcal {E}}^{\omega }_{\eta ^2}(u)\) as \(t \searrow 0\), cf. [15, p. 250], following the arguments in [3, Proposition 3.5] we conclude that (2.11) holds with \(\nu ^{\omega }\) replaced by \(\tilde{\nu }^{\omega }\), but with a constant depending now also on \(t\). Finally, since \(\tilde{\nu }^{\omega }(x) = \tilde{\nu }^{\tau _x \omega }(0)\) for every \(x \in {\mathbb {Z}}^d\), an application of the ergodic theorem gives that

$$\begin{aligned} \lim _{n \rightarrow \infty }\, ||\tilde{\nu }^\omega ||_{q,B(n)}^q = {{\mathrm{\mathbb {E}}}}[\tilde{\nu }^\omega (0)^q] \;<\; \infty . \end{aligned}$$

The statements of Theorems 1.10 and 1.11 now follow as before.\(\square \)

Of course, it is not easy to verify (6.3) (or (6.2)), because this would require a priori lower bounds on the heat kernel. However, note that

$$\begin{aligned} \mu ^{\omega }(y)\, q^{\omega }(t,x,y) = {{\mathrm{\mathrm {P}}}}_x^{\omega }[ Y_t = y ] = \mathrm {e}^{-t}\, \sum _{n=0}^{\infty }\, \frac{t^n}{n!}\, (p^{\omega })^{n}(x,y), \end{aligned}$$
(6.4)

where, as before, \(p^{\omega }(x,y) = \omega (x,y) / \mu ^{\omega }(x)\) and \((p^{\omega })^{n+1}(x,y) = \sum _z (p^{\omega })^{n}(x,z)\, p^{\omega }(z,y)\). In particular, since \((p^{\omega })^{n}(0, x) = 0\) for every \(x \in {\mathbb {Z}}^d\) with \(|x|_{1} = 1\) and even \(n\), the summation is only over odd \(n\).
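
The representation (6.4) is easy to test numerically. The following sketch (illustrative only: a random environment on a small torus with even side length stands in for \({\mathbb {Z}}^d\)) compares the truncated series with \({{\mathrm{\mathrm {P}}}}_x[Y_t = y]\) computed from the semigroup \(\mathrm {e}^{t(P - I)}\), and checks that the even powers indeed vanish for a nearest-neighbour pair, by bipartiteness.

```python
# Sanity check of (6.4) on a 6 x 6 torus with random conductances (illustrative).
import math
import numpy as np

rng = np.random.default_rng(1)
L = 6                                          # even side length keeps the torus bipartite
V = [(i, j) for i in range(L) for j in range(L)]
idx = {v: k for k, v in enumerate(V)}
N = len(V)

W = np.zeros((N, N))
for (i, j) in V:
    for di, dj in ((1, 0), (0, 1)):
        u, v = idx[(i, j)], idx[((i + di) % L, (j + dj) % L)]
        W[u, v] = W[v, u] = rng.uniform(0.1, 1.0)

mu = W.sum(axis=1)
P = W / mu[:, None]                            # p^w(x, y) = w(x, y) / mu^w(x)
t, x, y = 1.5, idx[(0, 0)], idx[(0, 1)]        # a nearest-neighbour pair

# Right-hand side of (6.4): truncated Poisson series; even powers vanish for x ~ y.
series, Pn = 0.0, np.eye(N)
for n in range(50):
    if n > 0:
        Pn = Pn @ P
    if n % 2 == 0:
        assert abs(Pn[x, y]) < 1e-12           # only odd n contribute
    series += math.exp(-t) * t ** n / math.factorial(n) * Pn[x, y]

# Left-hand side: P_x[Y_t = y] = [exp(t(P - I))]_{x, y}, via the symmetrisation
# S = D^{-1/2} W D^{-1/2} with D = diag(mu), so that P = D^{-1/2} S D^{1/2}.
S = W / np.sqrt(np.outer(mu, mu))
lam, U = np.linalg.eigh(S)
expS = (U * np.exp(t * (lam - 1.0))) @ U.T
print(series, expS[x, y] * math.sqrt(mu[y] / mu[x]))   # the two values agree
```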

Our first result deals with an example where (6.1) and (6.3) are equivalent, i.e. no improvement can be achieved.

Proposition 6.2

Let either \(\omega (x,y) = \theta ^{\omega }(x) \wedge \theta ^{\omega }(y)\) or \(\omega (x,y) = \theta ^{\omega }(x)\, \theta ^{\omega }(y)\), where \(\{\theta ^{\omega }(x) : x \in {\mathbb {Z}}^d\}\) is a family of stationary random variables, i.e. \(\theta ^{\omega }(x) = \theta ^{\tau _x \omega }(0)\) for all \(x \in {\mathbb {Z}}^d\). Then, for every \(\{x, y\} \in E_d\) and all \(q \ge 1\),

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ \omega (x, y)^{-q}] \;<\; \infty \quad \Longleftrightarrow \quad {{\mathrm{\mathbb {E}}}}[( \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) )^{-q} ] \;<\; \infty . \end{aligned}$$

Proof

It suffices to show that for every odd \(n\) and every edge \(\{x, y\} \in E_d\)

$$\begin{aligned} \mu ^{\omega }(x) (p^{\omega })^{n}(x,y) \le (2d)^n\, \omega (x,y). \end{aligned}$$
(6.5)

Once the claim (6.5) is proven, the assertion of the Proposition is immediate. In order to see this, note that (6.4) implies the following trivial estimate

$$\begin{aligned} \mu ^{\omega }(x)\, q^\omega (t, x, y)\, \mu ^\omega (y) \;\ge \; t\, e^{-t}\, \mu ^{\omega }(x)\, p^{\omega }(x, y) = t\, e^{-t}\, \omega (x, y). \end{aligned}$$

On the other hand, using again (6.4) and (6.5) we obtain

$$\begin{aligned} \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) \le \mathrm {e}^{-t}\, \omega (x,y)\, \sum _{k \text { odd}} \frac{t^k}{k!}\, (2d)^k \le \mathrm {e}^{(2d-1)t}\, \omega (x, y). \end{aligned}$$

In what follows, we will prove the claim (6.5). For this purpose, we distinguish two cases depending on the definition of the weights \(\omega \). To start with, suppose that \(\omega (x, y) = \theta ^{\omega }(x) \wedge \theta ^{\omega }(y)\) for \(\{x, y\} \in E_d\). Since \(\mu ^{\omega }(x) \le 2 d\, \theta ^{\omega }(x)\) and \((p^{\omega })^{n}(x, y) \le 1\), we obtain that

$$\begin{aligned} \mu ^{\omega }(x)\, (p^{\omega })^{n}(x, y) \le 2d\, \theta ^{\omega }(x). \end{aligned}$$

On the other hand, by exploiting the detailed balance condition we get

$$\begin{aligned} \mu ^{\omega }(x)\, (p^{\omega })^{n}(x, y) = \mu ^{\omega }(y)\, (p^{\omega })^{n}(y, x) \le 2d\, \theta ^{\omega }(y). \end{aligned}$$

Thus, (6.5) follows by combining these two estimates, which give \(\mu ^{\omega }(x)\, (p^{\omega })^{n}(x, y) \le 2d\, \big ( \theta ^{\omega }(x) \wedge \theta ^{\omega }(y) \big ) = 2d\, \omega (x,y) \le (2d)^n\, \omega (x,y)\).

Let us now consider the case \(\omega (x, y) = \theta ^{\omega }(x)\, \theta ^{\omega }(y)\). By introducing the abbreviation \(\alpha ^{\omega }(x) = \sum _{z \sim x} \theta ^{\omega }(z)\), we can rewrite

$$\begin{aligned} \mu ^{\omega }(x) = \theta ^{\omega }(x)\, \alpha ^{\omega }(x) \qquad \text {and} \qquad p^{\omega }(x, y) = \frac{\theta ^{\omega }(y)}{\alpha ^{\omega }(x)}. \end{aligned}$$
(6.6)

Thus, for \(n = 3\), we have that

$$\begin{aligned} \mu ^{\omega }(x)\, (p^{\omega })^{3}(x, y) = \theta ^{\omega }(x)\, \theta ^{\omega }(y)\, \sum _{z_1 \sim x}\, \sum _{\substack{z_2 \sim z_1 \\ z_2 \sim y}}\, \frac{\theta ^{\omega }(z_1)}{\alpha ^{\omega }(z_2)}\, \frac{\theta ^{\omega }(z_2)}{\alpha ^{\omega }(z_1)} \le (2 d)^2\, \omega (x,y), \end{aligned}$$

where we used that \(\theta ^{\omega }(z_1) / \alpha ^{\omega }(z_2) \le 1\) and \(\theta ^{\omega }(z_2) / \alpha ^{\omega }(z_1) \le 1\) since \(z_1 \sim z_2\). This implies (6.5) for \(n = 3\). For general odd \(n\) the claim (6.5) follows inductively.\(\square \)
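
As an illustration (not needed for the proof), the bound (6.5) can be verified numerically for both choices of weights; in the sketch below the field \(\theta \) is sampled on a small torus, which simply stands in for \({\mathbb {Z}}^2\).

```python
# Numerical check of (6.5): mu(x) (p^w)^n(x, y) <= (2d)^n w(x, y) for edges {x, y}
# and odd n, for w(x,y) = min(theta(x), theta(y)) and for w(x,y) = theta(x)*theta(y).
import numpy as np

rng = np.random.default_rng(2)
d, L = 2, 6
V = [(i, j) for i in range(L) for j in range(L)]
idx = {v: k for k, v in enumerate(V)}
N = len(V)
theta = rng.uniform(0.05, 1.0, size=N)            # stationary field on the vertices

for kind in ("min", "product"):
    W = np.zeros((N, N))
    for (i, j) in V:
        for di, dj in ((1, 0), (0, 1)):
            u, v = idx[(i, j)], idx[((i + di) % L, (j + dj) % L)]
            W[u, v] = W[v, u] = (min(theta[u], theta[v]) if kind == "min"
                                 else theta[u] * theta[v])
    mu = W.sum(axis=1)
    P = W / mu[:, None]
    edge = W > 0                                   # restrict the check to edges {x, y}
    Pn, ok = P.copy(), True
    for n in (1, 3, 5, 7):
        if n > 1:
            Pn = Pn @ P @ P                        # advance to the next odd power
        ok &= bool(np.all((mu[:, None] * Pn)[edge] <= ((2 * d) ** n * W)[edge] + 1e-12))
    print(kind, ok)                                # True, True
```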

Note that in Proposition 6.2 no assumptions beyond stationarity have been made on the distribution of the random variables \(\{\theta ^{\omega }(x) : x \in {\mathbb {Z}}^d\}\). In general the computations can be rather intricate, in particular due to the quotient in the definition of \(p^\omega (x,y)\). For simplicity we will now focus on the case of conductances bounded from above, that is \(p = \infty \).

Proposition 6.3

Assume that the conductances \(\{\omega (e) : e \in E_d\}\) are independent and uniformly bounded from above, i.e. \({{\mathrm{\mathbb {P}}}}\)-a.s. \(\omega (e) \le 1\). Then, if (6.1) holds for some \(q > 0\), (6.3) holds for all \(q' < 2 d q\). In particular, if \({{\mathrm{\mathbb {E}}}}[ \omega (x,y)^{-q} ] < \infty \) for some \(q > 1/4\) and all \(\{x, y\} \in E_d\), then there exists \(q' > \frac{d}{2}\) such that

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[( \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) )^{-q'} ] \;<\; \infty . \end{aligned}$$

For the proof we will need the following simple lemma.

Lemma 6.4

Let \(Z_1, Z_2, \ldots , Z_N\) be independent random variables such that

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[0 < Z_i \le 1] = 1 \qquad \text {and} \qquad c_i(\alpha ) = {{\mathrm{\mathbb {E}}}}[ Z_i^{-\alpha }] \;<\; \infty , \end{aligned}$$

for some \(\alpha > 0\). Then

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[( Z_1 + Z_2 + \cdots + Z_N )^{-\beta }] \;<\; \infty \end{aligned}$$

for all \(\beta < N \alpha \).

Proof

Note that \(Z_1 + Z_2 + \cdots + Z_N \ge \max _{i = 1, \ldots , N} Z_i\), and Čebyšev’s inequality yields

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}\Big [\Big ( \max _{i = 1, \ldots , N} Z_i \Big )^{ -\beta } >\, t \Big ]&= {{\mathrm{\mathbb {P}}}}\Big [\max _{i=1,\ldots ,N} Z_i \,<\, t^{-\frac{1}{\beta }}\Big ] \le t^{-\frac{N \alpha }{\beta }}\, \prod _{i=1}^N c_i(\alpha ). \end{aligned}$$

Since \({{\mathrm{\mathbb {P}}}}[0 < Z_i \le 1] = 1\), the assertion follows by integrating the estimate above in \(t\) over the interval \([1, \infty )\).\(\square \)
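
For concreteness, the tail estimate in the proof can be checked by simulation; in the sketch below the \(Z_i\) are taken uniform on \((0,1)\) (so that \(c_i(\alpha ) = (1-\alpha )^{-1}\) for \(\alpha < 1\)), a choice made purely for illustration.

```python
# Monte Carlo check of P[(max_i Z_i)^{-beta} > t] <= t^{-N alpha/beta} prod_i c_i(alpha)
# for Z_i ~ Uniform(0, 1) (illustrative), N = 4, alpha = 0.5 and beta = 1.5 < N*alpha.
import numpy as np

rng = np.random.default_rng(3)
N, alpha, beta = 4, 0.5, 1.5
Z = rng.uniform(size=(10**6, N))
c = 1.0 / (1.0 - alpha)

for t in (10.0, 100.0):
    empirical = np.mean(Z.max(axis=1) ** (-beta) > t)
    print(t, empirical, (c ** N) * t ** (-N * alpha / beta))   # empirical <= bound
```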

Proof of Proposition 6.3

Suppose that (6.1) holds for some \(q > 0\). Since the conductances are assumed to be bounded from above, \(\omega (x,y) \le 1\), we have \(\mu ^{\omega }(z) \le 2d\) for every \(z\) and therefore \(\mu ^{\omega }(x)\, (p^\omega )^n(x,y) \ge (2d)^{-n}\, \omega ^{n}(x,y)\), where we define \(\omega ^{1}(x, y) \mathrel {\mathop :}=\omega (x, y)\) and \(\omega ^{n+1}(x,y) = \sum _z \omega ^{n}(x,z)\, \omega (z,y)\). Thus, in view of (6.4) we get

$$\begin{aligned} \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) \;\ge \; c\, ( \omega (x, y) \,+\, \omega ^{3}(x, y) \,+\, \omega ^{9}(x, y)), \end{aligned}$$

for some positive constant \(c \equiv c(d, t)\). Hence, it suffices to show that

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ ( \omega (x, y) \,+\, \omega ^{3}(x, y) \,+\, \omega ^{9}(x, y) )^{-q'} ] \;<\; \infty . \end{aligned}$$
(6.7)

Note that for \(\{x, y\} \in E_d\) there are at least \(2d\) different disjoint nearest neighbour oriented paths \(\{\gamma _i : i = 1, \ldots , 2d\}\) from \(x\) to \(y\), meaning that the sets of edges along the paths \((\gamma _i)\) are pairwise disjoint: one direct path of length \(1\), \(2(d-1)\) paths of length \(3\) and one path of length \(9\) (see Fig. 2 and the explicit choice written out after the figure). That is,

$$\begin{aligned} \omega (x, y) + \omega ^{3}(x, y) + \omega ^{9}(x, y) \;\ge \; Z_1 + Z_2 + \cdots + Z_{2d}, \end{aligned}$$

where \(Z_i = \prod _{e \in \gamma _i} \omega (e)\) for \(i = 1, \ldots , 2d\). Using the independence we see that \({{\mathrm{\mathbb {E}}}}[Z_i^{-q}] < \infty \) for \(i = 1, \ldots , 2d\). This implies (6.7) by Lemma 6.4.\(\square \)

Fig. 2 Illustration of the four different disjoint nearest neighbour paths \(\{\gamma _i : i = 1, \ldots , 4\}\) from \(x\) to \(y\) for \(d = 2\)
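
For \(d = 2\) one admissible choice of the four paths is written out below (one of many; the proposition only needs existence). The assertions check that the paths are nearest-neighbour, edge-disjoint as required above, and in fact have pairwise disjoint interior vertices, which is the property used again in the proof of Proposition 6.5.

```python
# Explicit choice of 2d = 4 disjoint nearest-neighbour paths from x = (0,0) to y = (1,0)
# in d = 2: one of length 1, two of length 3 and one of length 9 (cf. Fig. 2).
paths = [
    [(0, 0), (1, 0)],
    [(0, 0), (0, 1), (1, 1), (1, 0)],
    [(0, 0), (0, -1), (1, -1), (1, 0)],
    [(0, 0), (-1, 0), (-1, -1), (-1, -2), (0, -2), (1, -2),
     (2, -2), (2, -1), (2, 0), (1, 0)],
]

edge_sets = [{frozenset(e) for e in zip(p, p[1:])} for p in paths]
interiors = [set(p[1:-1]) for p in paths]

assert all(p[0] == (0, 0) and p[-1] == (1, 0) for p in paths)
assert all(abs(u[0] - v[0]) + abs(u[1] - v[1]) == 1
           for p in paths for u, v in zip(p, p[1:]))              # nearest-neighbour steps
assert sum(map(len, edge_sets)) == len(set().union(*edge_sets))   # edge-disjoint
assert sum(map(len, interiors)) == len(set().union(*interiors))   # disjoint interiors
print([len(p) - 1 for p in paths])                                # [1, 3, 3, 9]
```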

Proposition 6.5

Let either \(\omega (x,y) = \theta ^{\omega }(x) \vee \theta ^{\omega }(y)\) or \(\omega (x,y) = \theta ^{\omega }(x) + \theta ^{\omega }(y)\), where \(\{\theta ^{\omega }(x) : x \in {\mathbb {Z}}^d\}\) is a family of uniformly bounded i.i.d. random variables, that is \(p = \infty \). Then, if (6.1) holds for some \(q > 0\), (6.3) holds for all \(q' < \frac{2}{3} (d+1) q\). In particular, if \({{\mathrm{\mathbb {E}}}}[ \omega (x,y)^{-q} ] < \infty \) for some \(q > \frac{3}{4}\, \frac{d}{d+1}\) and all \(\{x, y\} \in E_d\), then there exists \(q' > \frac{d}{2}\) such that

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[( \mu ^{\omega }(x)\, q^{\omega }(t, x, y)\, \mu ^{\omega }(y) )^{-q'} ] \;<\; \infty . \end{aligned}$$

Lemma 6.6

Let \(X_1\) and \(X_2\) be i.i.d. random variables such that

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[0 < X_i \le 1] = 1, \qquad \text {and} \qquad c(\alpha ) = {{\mathrm{\mathbb {E}}}}[ X_i^{-\alpha }] \;<\; \infty , \qquad i=1,2, \end{aligned}$$

for some \(\alpha > 0\). Then

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ (X_1^2 X_2 \vee X_1 X_2^2 )^{-\beta } ] \;<\; \infty \end{aligned}$$

for all \(\beta < \frac{2}{3} \alpha \).

Proof

First of all, note that for any \(s>0\) we have

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[X_1^2 X_2 \vee X_1 X_2^2 < s] =&{{\mathrm{\mathbb {P}}}}[X_1< s^{\frac{1}{3}},\, X_2 < s^{\frac{1}{3}} ]\\&+\; {{\mathrm{\mathbb {P}}}}[ X_1\ge s^{\frac{1}{3}},\, X_2 < s^{\frac{1}{3}},\, X_2< s X_1^{-2} ]\\&+\; {{\mathrm{\mathbb {P}}}}[ X_1< s^{\frac{1}{3}},\, X_2 \ge s^{\frac{1}{3}},\, X_1< s X_2^{-2} ]\\ =&{{\mathrm{\mathbb {P}}}}[X_1< s^{\frac{1}{3}} ]\, {{\mathrm{\mathbb {P}}}}[X_2 < s^{\frac{1}{3}}] \,+\, 2\, {{\mathrm{\mathbb {P}}}}[X_1\ge s^{ \frac{1}{3}}, X_2< s X_1^{-2}], \end{aligned}$$

where the second equality follows from independence and a symmetry argument. Let us denote by \(\widehat{{{\mathrm{\mathbb {E}}}}}_{\alpha }\) the expectation w.r.t. \(\mathrm {d}\widehat{{{\mathrm{\mathbb {P}}}}}_{\alpha } \mathrel {\mathop :}=c(\alpha )^{-1} X_1^{-\alpha }\, \mathrm {d}{{\mathrm{\mathbb {P}}}}\). Then, we obtain

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}[ 1\!\!1_{\{X_1 \ge s^{1/3} \}}\, X_1^{-2 \alpha } ] = c(\alpha )\; \widehat{{{\mathrm{\mathbb {E}}}}}_{\alpha }[ 1\!\!1_{\{ X_1 \ge s^{1/3} \}}\, X_1^{-\alpha } ] \le c(\alpha )\; s^{-\frac{1}{3} \alpha }. \end{aligned}$$

Thus, Čebyšev’s inequality yields the following upper bound

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[X_1\ge s^{\frac{1}{3}},\, X_2< s X_1^{-2} ] = {{\mathrm{\mathbb {E}}}}[ 1\!\!1_{\{X_1 \ge s^{1/3}\}}\, {{\mathrm{\mathbb {P}}}}[ X_2^{-\alpha } > s^{-\alpha } X_1^{2 \alpha } |\, X_1 ] ] \le c(\alpha )^2\, s^{\frac{2}{3} \alpha }. \end{aligned}$$

Hence, for any \(t > 0\), again by Čebyšev’s inequality

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[ (X_1^2 X_2 \vee X_1 X_2^2 )^{-\beta } > t ] =&{{\mathrm{\mathbb {P}}}}[ X_1 < t^{-\frac{1}{3 \beta }} ]\, {{\mathrm{\mathbb {P}}}}[ X_2 < t^{-\frac{1}{3 \beta }} ]\\&+2\, {{\mathrm{\mathbb {P}}}}[ X_1 \ge t^{-\frac{1}{3\beta }},\, X_2 < t^{-\frac{1}{\beta }}\, X_1^{-2}] \le 3\, c(\alpha )^2\, t^{-\frac{2 \alpha }{3 \beta }}. \end{aligned}$$

Since \({{\mathrm{\mathbb {P}}}}[0 < X_i \le 1] = 1\), the assertion follows by integrating the estimate above in \(t\) over the interval \([1, \infty )\), which is finite since \(\frac{2 \alpha }{3 \beta } > 1\).\(\square \)
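
Again, the final tail bound can be checked by a quick simulation; in the sketch below the \(X_i\) are uniform on \((0,1)\) (so \(c(\alpha ) = (1-\alpha )^{-1}\)), an illustrative choice only.

```python
# Monte Carlo check of P[(X1^2 X2 v X1 X2^2)^{-beta} > t] <= 3 c(alpha)^2 t^{-2 alpha/(3 beta)}
# for X_i ~ Uniform(0, 1) (illustrative), alpha = 0.6 and beta = 0.3 < 2*alpha/3.
import numpy as np

rng = np.random.default_rng(4)
alpha, beta = 0.6, 0.3
X1, X2 = rng.uniform(size=(2, 10**6))
M = np.maximum(X1**2 * X2, X1 * X2**2)
c = 1.0 / (1.0 - alpha)

for t in (10.0, 100.0):
    print(t, np.mean(M ** (-beta) > t), 3 * c**2 * t ** (-2 * alpha / (3 * beta)))
```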

Proof of Proposition 6.5

Since

$$\begin{aligned} \frac{1}{2}\, ( \theta ^{\omega }(x) + \theta ^{\omega }(y) ) \le \theta ^{\omega }(x) \vee \theta ^{\omega }(y) \le \theta ^{\omega }(x) + \theta ^{\omega }(y) \end{aligned}$$

it is enough to consider the case \(\omega (x,y) = \theta ^{\omega }(x) \vee \theta ^{\omega }(y)\). Again it suffices to show (6.7). First note that we have \({{\mathrm{\mathbb {P}}}}[ \theta ^{\omega }(x) < s ]^2 = {{\mathrm{\mathbb {P}}}}[\omega (x,y) < s ]\) for any edge \(\{x, y\} \in E_d\) and therefore

$$\begin{aligned} {{\mathrm{\mathbb {P}}}}[\theta ^{\omega }(x)^{-r} > t ] = {{\mathrm{\mathbb {P}}}}[ \theta ^{\omega }(x) < t^{-\frac{1}{r}} ] = {{\mathrm{\mathbb {P}}}}[ \omega (x,y)^{-q} > t^{\frac{q}{r}} ]^{\frac{1}{2}} \le t^{-\frac{q}{2r}}\, {{\mathrm{\mathbb {E}}}}[ \omega (x,y)^{-q} ]^{\frac{1}{2}}. \end{aligned}$$

In particular, \({{\mathrm{\mathbb {E}}}}[ \theta ^{\omega }(x)^{-r} ] < \infty \) for all \(r < q/2\). Let \(x = x_0, x_1, \ldots , x_{2k}, x_{2k+1} = y\) be a self-avoiding nearest neighbour path from \(x\) to \(y\). Then,

$$\begin{aligned} \prod _{j=0}^{2k} \omega (x_j,x_{j+1})= & {} \prod _{j=0}^{2k} \theta ^{\omega }(x_j) \vee \theta ^{\omega }(x_{j+1})\nonumber \\\ge & {} \prod _{j=1}^{k} \theta ^{\omega }(x_j) \cdot ( \theta ^{\omega }(x_k) \vee \theta ^{\omega }(x_{k+1}) ) \cdot \prod _{j = k+1}^{2k} \theta ^{\omega }(x_j). \end{aligned}$$
(6.8)

In particular, note that the right hand side does not depend on \(\theta ^{\omega }(x)\) and \(\theta ^{\omega }(y)\). Moreover, by Lemma 6.6

$$\begin{aligned} {{\mathrm{\mathbb {E}}}}\bigg [ \Bigg ( \prod \nolimits _{j=0}^{2k} \omega (x_j,x_{j+1}) \Bigg )^{ -\beta } \bigg ] \;<\; \infty , \qquad \forall \, \beta < \frac{q}{3}. \end{aligned}$$

In order to prove (6.7) we consider again \(2d\) disjoint nearest neighbour paths from \(x\) to \(y\) as in the proof of Proposition 6.3, chosen such that their interior vertices are also pairwise disjoint, and define \(Z_i\), \(i = 1, \ldots , 2d\) as before. Then, using (6.8) we have \(Z_i \ge \tilde{Z}_i\) for \(i=2,\ldots ,2d\), where \(\{\tilde{Z}_i : i = 2, \ldots , 2d \}\) is a collection of independent random variables with \({{\mathrm{\mathbb {E}}}}[\tilde{Z}_i^{-\beta } ] < \infty \) for all \(\beta < \frac{q}{3}\). Furthermore, \(\{\tilde{Z}_i \}\) is independent of \(Z_1 = \omega (x, y)\). Thus,

$$\begin{aligned} \omega (x, y) + \omega ^{3}(x, y) + \omega ^{9}(x, y) \;\ge \; Z_1 + \tilde{Z}_2 + \cdots + \tilde{Z}_{2d} \end{aligned}$$

and by Lemma 6.4 we conclude that (6.7) holds for all \(q'< q + (2d-1)\, \beta \) with \(\beta < \frac{q}{3}\), that is for all \(q' < \frac{2}{3}\, (d+1)\, q\), which implies the claim.\(\square \)
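
The bookkeeping of exponents can be summarised as follows (illustrative arithmetic only): the direct edge contributes negative moments up to \(q\), each of the \(2d-1\) longer paths up to \(q/3\), so \(q' < \frac{2}{3}(d+1)q\), and this exceeds \(d/2\) exactly when \(q > \frac{3}{4}\, \frac{d}{d+1}\).

```python
# Threshold check for Proposition 6.5: q' < (2/3)(d+1) q exceeds d/2 iff q > (3/4) d/(d+1).
for d in (2, 3, 4):
    q_star = 0.75 * d / (d + 1)
    for q in (0.9 * q_star, 1.1 * q_star):
        print(d, round(q, 3), (2.0 / 3.0) * (d + 1) * q > d / 2)
```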