1 Introduction and Main Results

Section 1.1 provides a brief introduction to the parabolic Anderson model. Section 1.2 introduces basic notation and key assumptions. Section 1.3 states the main theorem and gives an outline of the remainder of the paper.

1.1 The PAM and Intermittency

The parabolic Anderson model (PAM) is the Cauchy problem

$$\begin{aligned} \partial _t u(x,t) = \Delta _\mathscr {X}u(x,t) + \xi (x) u(x,t) , \qquad t>0, \, x \in \mathscr {X}, \end{aligned}$$
(1.1)

where \(\mathscr {X}\) is an ambient space, \(\Delta _\mathscr {X}\) is a Laplace operator acting on functions on \(\mathscr {X}\), and \(\xi \) is a random potential on \(\mathscr {X}\). Most of the literature considers the setting where \(\mathscr {X}\) is either \(\mathbb {Z}^d\) or \(\mathbb {R}^d\) with \(d \ge 1\) (for mathematical surveys we refer the reader to [1, 13]). More recently, other choices for \(\mathscr {X}\) have been considered as well: the complete graph [8], the hypercube [3], Galton–Watson trees [7], and random graphs with prescribed degrees [7].

The main target for the PAM is a description of intermittency: for large t the solution \(u(\cdot ,t)\) of (1.1) concentrates on well-separated regions in \(\mathscr {X}\), called intermittent islands. Much of the literature has focussed on a detailed description of the size, shape and location of these islands, and the profiles of the potential \(\xi (\cdot )\) and the solution \(u(\cdot ,t)\) on them. A special role is played by the case where \(\xi \) is an i.i.d. random potential with a double-exponential marginal distribution

$$\begin{aligned} \mathrm {P}(\xi (0) > u) = \mathrm {e}^{-\mathrm {e}^{u/\varrho }}, \qquad u \in \mathbb {R}, \end{aligned}$$
(1.2)

where \(\varrho \in (0,\infty )\) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and therefore represents a class of its own.
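As an aside, samples from the distribution in (1.2) are easy to generate by inverse transform: setting \(\mathrm {e}^{-\mathrm {e}^{u/\varrho }} = v\) with v uniform on (0, 1) and solving for u gives \(u = \varrho \log (-\log v)\). A minimal Python sketch (for illustration only; the parameter choices are ours):

```python
import math
import random

def sample_double_exp(rho, rng=random):
    """Draw xi with P(xi > u) = exp(-exp(u / rho)), as in (1.2)."""
    v = rng.random()
    while v == 0.0:          # avoid log(0); v should be uniform on (0, 1)
        v = rng.random()
    return rho * math.log(-math.log(v))

random.seed(0)
rho = 1.0
n = 200_000
samples = [sample_double_exp(rho) for _ in range(n)]

u = 0.5
empirical_tail = sum(1 for x in samples if x > u) / n
exact_tail = math.exp(-math.exp(u / rho))   # roughly 0.19
```

The empirical tail agrees with the closed form to within Monte Carlo error.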

The analysis of intermittency typically starts with a computation of the large-time asymptotics of the total mass, encapsulated in what are called Lyapunov exponents. There is an important distinction between the annealed setting (i.e., averaged over the random potential) and the quenched setting (i.e., almost surely with respect to the random potential). Often both types of Lyapunov exponents admit explicit descriptions in terms of characteristic variational formulas that contain information about where and how the mass concentrates in \(\mathscr {X}\). These variational formulas contain a spatial part (identifying where the concentration on islands takes place) and a profile part (identifying what the size and shape of both the potential and the solution are on the islands).

In the present paper we focus on the case where \(\mathscr {X}\) is a Galton–Watson tree, in the quenched setting (i.e., almost surely with respect to the random tree and the random potential). In [7] the large-time asymptotics of the total mass was derived under the assumption that the degree distribution has bounded support. The goal of the present paper is to relax this assumption to unbounded degree distributions. In particular, we identify the weakest condition on the tail of the degree distribution under which the arguments in [7] can be pushed through. To do so we need to control the occurrence of large degrees uniformly in large subtrees of the Galton–Watson tree.

1.2 The PAM on a Graph

We begin with some basic definitions and notations (and refer the reader to [1, 13] for more background).

Let \(G = (V,E)\) be a simple connected undirected graph, either finite or countably infinite. Let \(\Delta _G\) be the Laplacian on G, i.e.,

$$\begin{aligned} (\Delta _G f)(x) := \sum _{\begin{array}{c} {y\in V:} \\ { \{x,y\} \in E} \end{array}} [f(y) - f(x)], \qquad x \in V,\,f:\,V\rightarrow \mathbb {R}. \end{aligned}$$
(1.3)

Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,

$$\begin{aligned} \begin{array}{llll} \partial _t u(x,t) &=& (\Delta _G u)(x,t) + \xi (x) u(x,t), &x \in V,\, t>0,\\ u(x,0) &=& \delta _\mathcal {O}(x), &x \in V, \end{array} \end{aligned}$$
(1.4)

where \(\mathcal {O}\in V\) is referred to as the root of G. We say that G is rooted at \(\mathcal {O}\) and call \(G=(V,E,\mathcal {O})\) a rooted graph. The quantity \(u(x,t)\) can be interpreted as the amount of mass present at time t at site x when initially there is unit mass at \(\mathcal {O}\).

Criteria for existence and uniqueness of the non-negative solution to (1.4) are well known (see [9, 10] for the case \(G=\mathbb {Z}^d\)), and the solution is given by the Feynman–Kac formula

$$\begin{aligned} u(x,t) = \mathbb {E}_\mathcal {O}\left[ \mathrm {e}^{\int _0^t \xi (X_s)\, \mathrm{d}s}\,\mathbb {1}\{X_t = x\}\right] , \end{aligned}$$
(1.5)

where \(X=(X_t)_{t \ge 0}\) is the continuous-time random walk on the vertices V with jump rate 1 along the edges E, and \(\mathbb {P}_\mathcal {O}\) denotes the law of X given \(X_0=\mathcal {O}\). We are interested in the total mass of the solution,

$$\begin{aligned} U(t):= \sum _{x\in V} u(x,t) = \mathbb {E}_\mathcal {O}\left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm{d}s}\right] . \end{aligned}$$
(1.6)

Often we suppress the dependence on \(G,\xi \) from the notation. Note that, by time reversal and the linearity of (1.4), \(U(t) = {\hat{u}}(\mathcal {O},t)\) with \({\hat{u}}\) the solution of (1.4) with a different initial condition, namely, \({\hat{u}}(x,0) = 1\) for all \(x \in V\).
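On a finite graph, (1.4) is a linear system of ordinary differential equations, \(\partial _t u = (\Delta _G + \xi )u\), so both the solution and the time-reversal identity \(U(t) = {\hat{u}}(\mathcal {O},t)\) can be checked numerically. Below is a minimal Python sketch on a hypothetical four-vertex path graph with a fixed potential (the graph and the values of \(\xi \) are illustrative assumptions, not taken from the paper):

```python
# Path graph O - 1 - 2 - 3, rooted at vertex 0, with an illustrative potential.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
xi = [2.0, 0.5, 1.0, 0.0]

def apply_A(u):
    """Apply Delta_G + xi, with Delta_G the graph Laplacian of (1.3)."""
    out = [xi[x] * u[x] for x in range(n)]
    for x, y in edges:
        out[x] += u[y] - u[x]
        out[y] += u[x] - u[y]
    return out

def solve(u0, t, steps=4000):
    """Integrate u' = (Delta_G + xi) u with classical RK4 steps."""
    h = t / steps
    u = list(u0)
    for _ in range(steps):
        k1 = apply_A(u)
        k2 = apply_A([u[i] + 0.5 * h * k1[i] for i in range(n)])
        k3 = apply_A([u[i] + 0.5 * h * k2[i] for i in range(n)])
        k4 = apply_A([u[i] + h * k3[i] for i in range(n)])
        u = [u[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(n)]
    return u

t = 1.0
u_delta = solve([1.0, 0.0, 0.0, 0.0], t)   # initial mass: delta at the root
U = sum(u_delta)                           # total mass U(t) of (1.6)
u_flat = solve([1.0] * n, t)               # flat initial condition, cf. u-hat
```

Because \(\Delta _G + \xi \) is a symmetric operator, the two runs give \(U(t)\) and \({\hat{u}}(\mathcal {O},t)\), and the two numbers coincide up to rounding.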

As in [7], throughout the paper we assume that the random potential \(\xi = (\xi (x))_{x \in V}\) consists of i.i.d. random variables with marginal distribution satisfying:

Assumption 1.1

(Asymptotic double-exponential potential)

For some \(\varrho \in (0,\infty )\),

$$\begin{aligned} \mathrm {P}\left( \xi (0) \ge 0\right) = 1, \qquad \mathrm {P}\left( \xi (0) > u \right) = \mathrm {e}^{-\mathrm {e}^{u/\varrho }} \;\; \text {for } u \text { large enough.} \end{aligned}$$
(1.7)

The restrictions in (1.7) are helpful to avoid certain technicalities that require no new ideas. In particular, (1.7) is enough to guarantee existence and uniqueness of the non-negative solution to (1.4) on any graph whose largest degrees grow modestly with the size of the graph (as can be inferred from the proof in [10] for the case \(G=\mathbb {Z}^d\); see Appendix C for more details). All our results remain valid under milder restrictions (e.g. [10, Assumption (F)] plus an integrability condition on the lower tail of \(\xi (0)\)).

The following characteristic variational formula is important for the description of the asymptotics of U(t) when \(\xi \) has a double-exponential tail. Denote by \(\mathcal {P}(V)\) the set of probability measures on V. For \(p \in \mathcal {P}(V)\), define

$$\begin{aligned} I_E(p) := \sum _{\{x,y\} \in E} \left( \sqrt{p(x)} - \sqrt{p(y)}\,\right) ^2, \qquad J_V(p) := - \sum _{x \in V} p(x) \log p(x), \end{aligned}$$
(1.8)

and set

$$\begin{aligned} \chi _G(\varrho ) := \inf _{p \in \mathcal {P}(V)} [I_E(p) + \varrho J_V(p)], \qquad \varrho \in (0,\infty ). \end{aligned}$$
(1.9)

The first term in (1.9) is the quadratic form associated with the Laplacian, describing the solution \(u(\cdot ,t)\) in the intermittent islands, while the second term in (1.9) is the Legendre transform of the rate function for the potential, describing the highest peaks of \(\xi (\cdot )\) in the intermittent islands.
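For intuition, (1.9) can be evaluated numerically on very small graphs. The sketch below grid-searches over \(p = (q, 1-q)\) on a hypothetical two-vertex graph with a single edge: for small \(\varrho \) a near-uniform p (making \(I_E\) small) is optimal, while for large \(\varrho \) a near-point mass (making \(J_V\) small at the price of \(I_E \approx 1\)) is optimal. Illustration only, not part of the analysis:

```python
import math

def objective(q, rho):
    """I_E(p) + rho * J_V(p) of (1.8) for p = (q, 1-q) on a single edge."""
    I = (math.sqrt(q) - math.sqrt(1.0 - q)) ** 2
    J = -sum(w * math.log(w) for w in (q, 1.0 - q) if w > 0.0)
    return I + rho * J

def chi(rho, grid=100_000):
    """Grid-search approximation of chi_G(rho) in (1.9)."""
    return min(objective(k / grid, rho) for k in range(grid + 1))

chi_small = chi(0.1)    # close to the uniform value 0.1 * log 2
chi_large = chi(10.0)   # close to the point-mass value 1
```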

1.3 The PAM on a Galton–Watson Tree

Let D be a random variable taking values in \(\mathbb {N}\). Start with a root vertex \(\mathcal {O}\), and attach edges from \(\mathcal {O}\) to D first-generation vertices. Proceed recursively: after having attached the n-th generation of vertices, attach to each one of them independently a number of vertices having the same distribution as D, and declare the union of these vertices to be the \((n+1)\)-th generation of vertices. Denote by \({\mathcal {G}\mathcal {W}}=(V,E)\) the graph thus obtained, by \(\mathfrak {P}\) its probability law, and by \(\mathfrak {E}\) the corresponding expectation. Write \(\mathcal {P}\) and \(\mathcal {E}\) to denote probability and expectation for D, and \({{\,\mathrm{supp}\,}}(D)\) to denote the support of \(\mathcal {P}\). The law of D can be viewed as the offspring distribution of \({\mathcal {G}\mathcal {W}}\), and the law of \(D+1\) as the degree distribution of \({\mathcal {G}\mathcal {W}}\).

Throughout the paper, we assume that the degree distribution satisfies:

Assumption 1.2

(Exponential tails)

  1. (1)

    \(d_{\min } := \min {{\,\mathrm{supp}\,}}(D) \ge 2\) and \(\mathcal {E}[D] \in (2,\infty )\).

  2. (2)

    \(\mathcal {E}\big [\mathrm {e}^{aD}\big ] < \infty \) for all \(a \in (0,\infty )\).

Under this assumption, \({\mathcal {G}\mathcal {W}}\) is \(\mathfrak {P}\)-a.s. an infinite tree. Moreover,

$$\begin{aligned} \lim _{r \rightarrow \infty } \frac{\log |B_r(\mathcal {O})|}{r} = \log \mathcal {E}[D] =: \vartheta \in (0,\infty ) \qquad \mathfrak {P}-a.s., \end{aligned}$$
(1.10)

where \(B_r(\mathcal {O}) \subset V\) is the ball of radius r around \(\mathcal {O}\) in the graph distance (see e.g. [14, pp. 134–135]). Note that this ball depends on \({\mathcal {G}\mathcal {W}}\) and therefore is random. For our main result we need an assumption that is much stronger than Assumption 1.2(2).
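The growth rate in (1.10) can be tested by simulating the generation sizes alone. The sketch below uses a hypothetical offspring law, D uniform on \(\{2,3\}\), so that \(\mu = 2.5\) and \(\vartheta = \log 2.5\); the offspring law is an illustrative assumption, not taken from the paper:

```python
import math
import random

random.seed(42)

def offspring():
    """Hypothetical offspring law: D uniform on {2, 3}, so E[D] = 2.5."""
    return random.choice((2, 3))

r = 15
Z = [1]                           # Z_0 = 1: the root
for _ in range(r):
    Z.append(sum(offspring() for _ in range(Z[-1])))

ball_volume = sum(Z)              # |B_r(O)| = Z_0 + ... + Z_r
theta = math.log(2.5)             # theta = log E[D], cf. (1.10)
rate = math.log(ball_volume) / r  # approaches theta as r grows
```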

Assumption 1.3

(Super-double-exponential tails) There exists a function \(f:\,(0,\infty ) \rightarrow (0,\infty )\) satisfying \(\lim _{s\rightarrow \infty } f(s) = 0\) and \(\lim _{s\rightarrow \infty } f'(s) = 0\) such that

$$\begin{aligned} \limsup _{s\rightarrow \infty } \mathrm {e}^{-s} \log \mathcal {P}(D>s^{f(s)}) < -2\vartheta . \end{aligned}$$
(1.11)

To state our main result, we define the constant

$$\begin{aligned} {\widetilde{\chi }}(\varrho ) := \inf \big \{ \chi _T(\varrho ) :\, T \text { is an infinite tree with degrees in } {{\,\mathrm{supp}\,}}(D) \big \}, \end{aligned}$$
(1.12)

with \(\chi _G(\varrho )\) defined in (1.9), and abbreviate

$$\begin{aligned} \mathfrak {r}_t = \frac{\varrho t}{\log \log t}. \end{aligned}$$
(1.13)

Theorem 1.4

(Quenched Lyapunov exponent) Subject to Assumptions 1.1–1.3,

$$\begin{aligned} \frac{1}{t} \log U(t) = \varrho \log (\vartheta \mathfrak {r}_t) -\varrho - {\widetilde{\chi }}(\varrho ) + o(1), \quad t \rightarrow \infty , \qquad (\mathrm {P}\times \mathfrak {P})\text {-a.s.} \end{aligned}$$
(1.14)

With Theorem 1.4 we have completed our task of extending the main result in [7] to degree distributions with unbounded support. The extension comes at the price of having to assume a tail that decays faster than double-exponentially, as stipulated in (1.11). This property is needed to control the occurrence of large degrees uniformly in large subtrees of \({\mathcal {G}\mathcal {W}}\). No doubt Assumption 1.3 is stronger than is needed, but to go beyond would require a major overhaul of the methods developed in [7], which remains a challenge.

In (1.4) the initial mass is located at the root. The asymptotics in (1.14) is robust against other choices of starting location. A heuristic explanation of where the terms in (1.14) come from was given in [7, Sect. 1.5]. The asymptotics of U(t) is controlled by random walk paths in the Feynman–Kac formula in (1.6) that run within time \(\mathfrak {r}_t/\varrho \log \mathfrak {r}_t\) to an intermittent island at distance \(\mathfrak {r}_t\) from \(\mathcal {O}\), and afterwards stay near that island for the rest of the time. The intermittent island turns out to consist of a subtree with degree \(d_{\min }\) where the potential has height \(\varrho \log (\vartheta \mathfrak {r}_t)\) and a shape that is the solution of a variational formula restricted to that subtree. The first and third terms in (1.14) are the contribution of the path after it has reached the island; the second term is the cost for reaching the island.

For \(d \in \mathbb {N}\setminus \{1\}\), let \(\mathcal {T}_d\) be the infinite homogeneous tree in which every node has downward degree d. It was shown in [7] that if \(\varrho \ge 1/\log (d_{\min }+1)\), then

$$\begin{aligned} {\widetilde{\chi }}(\varrho ) = \chi _{\mathcal {T}_{d_{\min }}}(\varrho ). \end{aligned}$$
(1.15)

Presumably \(\mathcal {T}_{d_{\min }}\) is the unique minimizer of (1.12), but proving so would require more work.

Outline. The remainder of the paper is organised as follows. Section 2 collects some structural properties of Galton–Watson trees. Section 3 contains several preparatory lemmas, which identify the maximum size of the islands where the potential is suitably high, estimate the contribution to the total mass in (1.6) by the random walk until it exits a subset of \({\mathcal {G}\mathcal {W}}\), bound the principal eigenvalue associated with the islands, and estimate the number of locations where the potential is intermediate. Section 4 uses these preparatory lemmas to find the contribution to the Feynman–Kac formula in (1.6) coming from various sets of paths. Section 5 uses these contributions to prove Theorem 1.4. Appendices A–B contain some facts about variational formulas and largest eigenvalues that are needed in Sect. 3. Appendix C provides a proof that the Feynman–Kac formula in (1.5) holds as soon as Assumptions 1.1–1.2 are in force.

Assumptions 1.1–1.2 are needed throughout the paper. Only in Sects. 4–5 do we need Assumption 1.3.

2 Structural Properties of the Galton–Watson Tree

In this section we collect a few structural properties of \({\mathcal {G}\mathcal {W}}\) that play an important role throughout the paper. None of these properties were needed in [7]. Section 2.1 looks at volumes, Sect. 2.2 at degrees, and Sect. 2.3 at tree animals.

2.1 Volumes

Let \(Z_k\) be the number of offspring in generation k, i.e.,

$$\begin{aligned} Z_k = |\{x\in V:\,d(x,\mathcal {O}) = k\}|, \end{aligned}$$
(2.1)

where \(d(x,\mathcal {O})\) is the distance from \(\mathcal {O}\) to x. Let \(\mu = \mathcal {E}[D]\). By the Kesten–Stigum theorem (applicable because \(\mathcal {E}[D \log D] < \infty \) under Assumption 1.2(2)), there exists a random variable \(W \in (0,\infty )\) such that

$$\begin{aligned} W_k := \mathrm {e}^{-k\vartheta } Z_k = \mu ^{-k} Z_k \rightarrow W \qquad \mathfrak {P}\text {-a.s. as } k \rightarrow \infty . \end{aligned}$$
(2.2)

It is shown in [2, Theorem 5] that

$$\begin{aligned} \exists \,C<\infty , c>0:\,\quad \mathfrak {P}(|W_k-W| \ge \varepsilon ) \le C\mathrm {e}^{-c\,\varepsilon ^{2/3}\mu ^{k/3}} \qquad \forall \,\varepsilon > 0,\,k \in \mathbb {N}. \end{aligned}$$
(2.3)

In addition, it is shown in [4, Theorems 2–3] that if D is bounded, then

$$\begin{aligned} -\log \mathfrak {P}(W \ge x)= & {} x^{\gamma ^+/(\gamma ^+-1)}\,[L^+(x) + o(1)], \qquad x \rightarrow \infty , \end{aligned}$$
(2.4)
$$\begin{aligned} -\log \mathfrak {P}(W \le x)= & {} x^{-\gamma ^-/(1-\gamma ^-)}\,[L^-(x)+ o(1)], \qquad x \downarrow 0, \end{aligned}$$
(2.5)

where \(\gamma ^+ \in (1,\infty )\) and \(\gamma ^- \in (0,1)\) are the unique solutions of the equations

$$\begin{aligned} \mu ^{\gamma ^+} = d_{\max }, \qquad \mu ^{\gamma ^-} = d_{\min }, \end{aligned}$$
(2.6)

with \(L^+,L^-:(0,\infty ) \rightarrow (0,\infty )\) real-analytic functions that are multiplicatively periodic with period \(\mu ^{\gamma ^+-1}\), respectively, \(\mu ^{1-\gamma ^-}\). Note that Assumption 1.2(1) guarantees that \(\gamma ^- \ne 1\).

The tail behaviour in (2.4) requires that \(d_{\max }<\infty \). In our setting we have \(d_{\max }=\infty \), which corresponds to \(\gamma ^+=\infty \), and so we expect exponential tail behaviour. The following lemma provides a rough bound.

Lemma 2.1

(Exponential tail for generation sizes) If there exists an \(a>0\) such that \(\mathcal {E}[\mathrm {e}^{aD}] <\infty \), then there exists an \(a_* > 0\) such that \(\mathfrak {E}[\mathrm {e}^{a_* W}] < \infty \).

Proof

First note that if there exists an \(a>0\) such that \(\mathcal {E}[\mathrm {e}^{aD}] <\infty \), then there exist \(b>0\) large and \(c >0\) small such that

$$\begin{aligned} \varphi (a):= \mathcal {E}[\mathrm {e}^{aD}] \le \mathrm {e}^{\mu a + b a^2} \qquad \forall \, 0< a < c. \end{aligned}$$
(2.7)

Hence

$$\begin{aligned} \mathfrak {E}[\mathrm {e}^{a Z_{n+1}}] = \mathfrak {E}[\varphi (a)^{Z_n}] \le \mathfrak {E}[\mathrm {e}^{(\mu a +b a^2)Z_n}] \end{aligned}$$
(2.8)

and consequently, because \(\mu >1\),

$$\begin{aligned} \mathfrak {E}\left[ \mathrm {e}^{a W_{n+1}}\right] \le \mathfrak {E}\left[ \mathrm {e}^{(a + ba^2\mu ^{-(n+2)})W_n}\right] \le \mathfrak {E}\left[ \mathrm {e}^{a \exp (bc\mu ^{-(n+2)})W_n}\right] . \end{aligned}$$
(2.9)

Put \(a_n := c \exp (-bc \sum _{k=0}^{n-1} \mu ^{-(k+2)})\), which satisfies \(0< a_n \le c\). From the last inequality in (2.9) it follows that

$$\begin{aligned} \mathfrak {E}\left[ \mathrm {e}^{a_{n+1}W_{n+1}}\right] \le \mathfrak {E}\left[ \mathrm {e}^{a_nW_n}\right] . \end{aligned}$$
(2.10)

Since \(n \mapsto a_n\) is decreasing with \(\lim _{n\rightarrow \infty } a_n = a_* >0\), Fatou’s lemma gives

$$\begin{aligned} \mathfrak {E}\left[ \mathrm {e}^{a_* W}\right] \le \mathfrak {E}\left[ \mathrm {e}^{a_0 W_0}\right] . \end{aligned}$$
(2.11)

Because \(\mathfrak {E}[\mathrm {e}^{a_0 W_0}] = \mathrm {e}^{a_0}<\infty \) (recall that \(W_0 = Z_0 = 1\)), we get the claim. \(\square \)
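The quadratic bound (2.7) can be made concrete for a hypothetical offspring law \(D = 2 + \mathrm {Poisson}(\lambda )\), for which \(\log \varphi (a) = 2a + \lambda (\mathrm {e}^a - 1)\) and \(\mu = 2 + \lambda \). Since \(\mathrm {e}^a - 1 - a \le \tfrac{1}{2}\mathrm {e}^c a^2\) for \(0< a < c\), the choice \(b = \tfrac{1}{2}\lambda \mathrm {e}^c\) works. A numerical check (assumed law, illustration only):

```python
import math

lam = 1.5                        # hypothetical: D = 2 + Poisson(lam)
mu = 2.0 + lam                   # mu = E[D]
c = 1.0
b = 0.5 * lam * math.exp(c)      # from e^a - 1 - a <= (e^c / 2) a^2 on (0, c)

def log_phi(a):
    """log E[e^{aD}] for D = 2 + Poisson(lam)."""
    return 2.0 * a + lam * (math.exp(a) - 1.0)

quadratic_bound_holds = all(
    log_phi(a) <= mu * a + b * a * a + 1e-12
    for a in (k / 1000.0 for k in range(1, 1000))
)
```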

The following lemma says that \(\mathfrak {P}\)-a.s. a ball of radius \(R_r\) centred anywhere in \(B_r(\mathcal {O})\) has volume \(\mathrm {e}^{\vartheta R_r + o(R_r)}\) as \(r\rightarrow \infty \), provided \(R_r\) is large compared to \(\log r\).

Lemma 2.2

(Volumes of large balls) Subject to Assumption 1.2(1), if there exists an \(a>0\) such that \(\mathcal {E}[\mathrm {e}^{aD}] <\infty \), then for any \(R_r\) satisfying \(\lim _{r\rightarrow \infty } R_r/\log r = \infty \),

$$\begin{aligned} \liminf _{r\rightarrow \infty } \frac{1}{R_r} \log \Big (\inf _{x \in B_r(\mathcal {O})} |B_{R_r}(x)|\Big ) = \limsup _{r\rightarrow \infty } \frac{1}{R_r} \log \Big (\sup _{x \in B_r(\mathcal {O})} |B_{R_r}(x)|\Big ) = \vartheta \qquad \mathfrak {P}\text {-a.s.} \end{aligned}$$
(2.12)

Proof

For \(y\in {\mathcal {G}\mathcal {W}}\) that lies k generations below \(\mathcal {O}\), let \(y[-i]\), \(0 \le i\le k\), be the vertex that lies i generations above y. Define the lower ball of radius r around y as

$$\begin{aligned} B^\downarrow _r(y):= \{x\in V:\, \exists \, 0 \le i \le r\ \text {with}\ x[-i] = y\}. \end{aligned}$$
(2.13)

Note that \(B^\downarrow _r(\mathcal {O}) = B_r(\mathcal {O})\).

We first prove the claim for lower balls. Afterwards we use a sandwich argument to get the claim for balls.

Let \(\mathcal {Z}_k\) denote the vertices in the k-th generation. To get the upper bound, pick \(\delta >0\) and estimate

$$\begin{aligned} \begin{aligned}&\mathfrak {P}\Big (\sup _{x \in B_r(\mathcal {O})} |B_{R_r}^\downarrow (x)| \ge \mathrm {e}^{(1+\delta )\vartheta R_r}\Big ) \le \sum _{k=0}^r \mathfrak {P}\Big (\sup _{x \in \mathcal {Z}_k} |B_{R_r}^\downarrow (x)| \ge \mathrm {e}^{(1+\delta )\vartheta R_r}\Big )\\&\quad = \sum _{k=0}^r\sum _{l\in \mathbb {N}} \mathfrak {P}\Big (\sup _{x \in \mathcal {Z}_k} |B_{R_r}^\downarrow (x)| \ge \mathrm {e}^{(1+\delta )\vartheta R_r} ~\Big |~ Z_k =l\Big ) \mathfrak {P}(Z_k =l)\\&\quad \le \sum _{k=0}^r\sum _{l\in \mathbb {N}} l\ \mathfrak {P}\Big (|B^\downarrow _{R_r}(\mathcal {O})| \ge \mathrm {e}^{(1+\delta )\vartheta R_r}\Big )\mathfrak {P}(Z_k= l) \\&\quad = \mathfrak {P}\Big (|B^\downarrow _{R_r}(\mathcal {O})| \ge \mathrm {e}^{(1+\delta )\vartheta R_r}\Big ) \sum _{k=0}^r \mathfrak {E}(Z_k). \end{aligned} \end{aligned}$$
(2.14)

By (1.10), \(\sum _{k=0}^r \mathfrak {E}(Z_k) = \frac{\mathrm {e}^{\vartheta (r+1)} -1}{\mathrm {e}^\vartheta -1} = O(\mathrm {e}^{\vartheta r})\), and so in order to be able to apply the Borel-Cantelli lemma, it suffices to show that the probability in the last line decays faster than exponentially in r for any \(\delta >0\). To that end, estimate

$$\begin{aligned}&\mathfrak {P}\Big (|B^\downarrow _{R_r}(\mathcal {O})| \ge \mathrm {e}^{(1+\delta )\vartheta R_r}\Big ) = \mathfrak {P}\Big (\sum _{k=0}^{R_r} Z_k \ge \mathrm {e}^{(1+\delta )\vartheta R_r}\Big )\nonumber \\&\quad = \mathfrak {P}\Big (\sum _{k=0}^{R_r} W_k \ge \mathrm {e}^{\delta \vartheta R_r} \mathrm {e}^{\vartheta (R_r-k)}\Big ) \le \sum _{k=0}^{R_r} \mathfrak {P}\Big (W_k \ge \frac{1}{R_r+1} \mathrm {e}^{\delta \vartheta R_r}\mathrm {e}^{\vartheta (R_r-k)}\Big )\nonumber \\&\quad = \sum _{k=0}^{R_r} \mathfrak {P}\Big (W+(W_k-W) \ge \frac{1}{R_r+1} \mathrm {e}^{\delta \vartheta R_r} \mathrm {e}^{\vartheta (R_r-k)} \Big )\nonumber \\&\quad \le \sum _{k=0}^{R_r} \mathfrak {P}\Big (W \ge \frac{1}{2(R_r+1)} \mathrm {e}^{\delta \vartheta R_r} \mathrm {e}^{\vartheta (R_r-k)} \Big )\nonumber \\&\qquad \qquad + \sum _{k=0}^{R_r} \mathfrak {P}\Big (|W_k-W| \ge \frac{1}{2(R_r+1)} \mathrm {e}^{\delta \vartheta R_r} \mathrm {e}^{\vartheta (R_r-k)} \Big )\nonumber \\&\quad \le \mathfrak {E}[\mathrm {e}^{a_* W}]\sum _{k=0}^{R_r} \exp \Big (-a_*\frac{1}{2(R_r+1)} \mathrm {e}^{\delta \vartheta R_r} \mathrm {e}^{\vartheta (R_r -k)}\Big ) \nonumber \\&\qquad \qquad + \sum _{k=0}^{R_r} C \exp \Big (-c\Big [\frac{1}{2(R_r+1)}\,\mathrm {e}^{\delta \vartheta R_r}\,\mathrm {e}^{\vartheta (R_r-k)}\Big ]^{2/3}(\mathrm {e}^\vartheta )^{k/3}\Big )\nonumber \\&\quad \le \mathfrak {E}[\mathrm {e}^{a_* W}](R_r+1) \exp \Big (-a_*\frac{1}{2(R_r+1)}\mathrm {e}^{\delta \vartheta R_r}\Big )\nonumber \\&\qquad \qquad + C(R_r+1) \exp \Big (-c\Big [\frac{1}{2(R_r+1)}\,\mathrm {e}^{\delta \vartheta R_r}\Big ]^{2/3}\Big ), \end{aligned}$$
(2.15)

where we use (2.3) with \(\mu = \mathrm {e}^\vartheta \). This produces the desired estimate.

To get the lower bound, pick \(0<\delta <1\) and estimate

$$\begin{aligned} \begin{aligned}&\mathfrak {P}\Big (\inf _{x \in B_r(\mathcal {O})} |B_{R_r}^\downarrow (x)| \le \mathrm {e}^{(1-\delta )\vartheta R_r}\Big ) \le \sum _{k=0}^r \mathfrak {P}\Big (\inf _{x \in \mathcal {Z}_k} |B_{R_r}^\downarrow (x)| \le \mathrm {e}^{(1-\delta )\vartheta R_r}\Big )\\&\quad = \sum _{k=0}^r\sum \limits _{l\in \mathbb {N}} \mathfrak {P}\Big (\inf _{x \in \mathcal {Z}_k} |B_{R_r}^\downarrow (x)| \le \mathrm {e}^{(1-\delta )\vartheta R_r} ~\Big |~ Z_k =l\Big )\mathfrak {P}(Z_k =l)\\&\quad \le \sum _{k=0}^r\sum \limits _{l\in \mathbb {N}} l\,\mathfrak {P}\Big (|B^\downarrow _{R_r}(\mathcal {O})| \le \mathrm {e}^{(1-\delta )\vartheta R_r}\Big )\mathfrak {P}(Z_k= l) \\&\quad = \mathfrak {P}\Big (|B^\downarrow _{R_r}(\mathcal {O})| \le \mathrm {e}^{(1-\delta )\vartheta R_r}\Big ) \sum \limits _{k=0}^r \mathfrak {E}(Z_k). \end{aligned} \end{aligned}$$
(2.16)

It again suffices to show that the probability in the last line decays faster than exponentially in r for any \(\delta >0\). To that end, estimate

$$\begin{aligned} \begin{aligned}&\mathfrak {P}\Big (|B^\downarrow _{R_r}(\mathcal {O})| \le \mathrm {e}^{(1-\delta )\vartheta R_r}\Big ) =\mathfrak {P}\Big ( \mathrm {e}^{-\vartheta R_r} \sum _{k=0}^{R_r} Z_k \le \mathrm {e}^{-\delta \vartheta R_r}\Big )\\&\le \mathfrak {P}\Big (W_{R_r} \le \mathrm {e}^{-\delta \vartheta R_r}\Big ) \le \mathfrak {P}(W \le 2\,\mathrm {e}^{-\delta \vartheta R_r})+ \mathfrak {P}(W - W_{R_r} \ge \mathrm {e}^{-\delta \vartheta R_r}) \\&\le \exp \Big (-c^-(2\mathrm {e}^{\delta \vartheta R_r})^{\frac{\gamma ^-}{1-\gamma ^-}}[1+o(1)]\Big ) + C\exp \Big (-c\,[\mathrm {e}^{-\frac{2}{3}\delta \vartheta }(\mathrm {e}^\vartheta )^{\frac{1}{3}}]^{R_r}\Big ), \end{aligned} \end{aligned}$$
(2.17)

where we use (2.5), (2.3) with \(\mu =\mathrm {e}^\vartheta \), and put \(c^- := \inf L^- \in (0,\infty )\). For \(\delta \) small enough this produces the desired estimate. This completes the proof of (2.12) for lower balls.

To get the claim for balls, we observe that

$$\begin{aligned} B_r^\downarrow (x) \subseteq B_r(x) \subseteq \bigcup _{k=0}^r B_r^\downarrow (x[-k]), \end{aligned}$$
(2.18)

and therefore

$$\begin{aligned} |B_r^\downarrow (x)| \le |B_r(x)| \le \sum _{k=0}^r |B_r^\downarrow (x[-k])|. \end{aligned}$$
(2.19)

It follows from (2.19) that

$$\begin{aligned} \inf _{x\in B_r(\mathcal {O})} |B_r^\downarrow (x)| \le \inf _{x\in B_r(\mathcal {O})} |B_r(x)| \le \sup _{x\in B_r(\mathcal {O})} |B_r(x)| \le (r+1)\sup _{x\in B_r(\mathcal {O})}|B_r^\downarrow (x)|. \end{aligned}$$
(2.20)

Hence we get (2.12). \(\square \)

2.2 Degrees

Write \(D_x\) to denote the degree of vertex x. The following lemma implies that, \(\mathfrak {P} \)-a.s. and for \(r\rightarrow \infty \), \(D_x\) is bounded by a vanishing power of \(\log r\) for all \(x \in B_{2r}(\mathcal {O})\).

Lemma 2.3

(Maximal degree in a ball around the root)

  1. (a)

    Subject to Assumption 1.2(2), for every \(\delta >0\),

    $$\begin{aligned} \sum _{r \in \mathbb {N}} \mathfrak {P}\big (\exists \,x \in B_{2r}(\mathcal {O}):\, D_x > \delta r \big ) < \infty . \end{aligned}$$
    (2.21)
  2. (b)

Subject to Assumption 1.3, there exists a function \(r \mapsto \delta _r\) from \((0,\infty )\) to \((0,\infty )\) satisfying \(\lim _{r\rightarrow \infty } \delta _r = 0\) and \(\lim _{r\rightarrow \infty } r\,\tfrac{\mathrm {d}}{\mathrm {d}r}\delta _r = 0\) such that

    $$\begin{aligned} \sum _{r \in \mathbb {N}} \mathfrak {P}\big (\exists \,x \in B_{2r}(\mathcal {O}):\, D_x > (\log r)^{\delta _r} \big ) < \infty . \end{aligned}$$
    (2.22)

Proof

  1. (a)

    Estimate

    $$\begin{aligned} \begin{aligned}&\mathfrak {P}\big (\exists \, x\in B^\downarrow _{2r}(\mathcal {O}):\, D_x> \delta r\big ) \le \sum _{k=0}^{2r} \mathfrak {P}\big (\exists \, x \in \mathcal {Z}_k:\,D_x> \delta r \big )\\&= \sum _{k=0}^{2r} \sum _{l\in \mathbb {N}} \mathfrak {P}\big (\exists \,x \in \mathcal {Z}_k:\, D_x> \delta r \mid Z_k = l\big )\,\mathfrak {P}(Z_k = l)\\&\le \mathcal {P}(D> \delta r) \sum _{k=0}^{2r} \sum _{l\in \mathbb {N}} l\,\mathfrak {P}\big (Z_k = l) = \mathcal {P}(D > \delta r) \sum _{k=0}^{2r}\mathfrak {E}(Z_k). \end{aligned} \end{aligned}$$
    (2.23)

    Since \(\sum _{k=0}^{2r} \mathfrak {E}(Z_k) = \frac{\mathrm {e}^{(2r+1)\vartheta }-1}{\mathrm {e}^\vartheta -1} = O(\mathrm {e}^{2r\vartheta })\), it suffices to show that \(\mathcal {P}(D > \delta r) = O(\mathrm {e}^{-cr})\) for some \(c>2\vartheta \). Since \(\mathcal {P}(D > \delta r) \le \mathrm {e}^{-a\delta r}\mathcal {E}(\mathrm {e}^{aD})\), the latter is immediate from Assumption 1.2(2) when we choose \(a>2\vartheta /\delta \).

  2. (b)

    The only change is that in the last line \(\mathcal {P}(D > \delta r)\) must be replaced by \(\mathcal {P}(D > (\log r)^{\delta _r})\). To see that the latter is \(O(\mathrm {e}^{-cr})\) for some \(c>2\vartheta \), we use the tail condition in (1.11) with \(\delta _r = f(s)\) and \(s=\log r\).

\(\square \)
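The Chernoff step in part (a), \(\mathcal {P}(D > s) \le \mathrm {e}^{-as}\,\mathcal {E}[\mathrm {e}^{aD}]\), can be compared with an exact tail for a hypothetical offspring law \(D = 2 + \mathrm {Poisson}(\lambda )\), which has all exponential moments finite as required by Assumption 1.2(2). A Python sketch (assumed law, illustration only):

```python
import math

lam = 1.5                            # hypothetical: D = 2 + Poisson(lam)

def exp_moment(a):
    """E[e^{aD}] for D = 2 + Poisson(lam); finite for every a > 0."""
    return math.exp(2.0 * a + lam * (math.exp(a) - 1.0))

def tail_exact(s):
    """P(D > s) for integer s >= 2, from the Poisson pmf."""
    pmf_sum = sum(math.exp(-lam) * lam ** k / math.factorial(k)
                  for k in range(s - 1))        # P(Poisson <= s - 2)
    return 1.0 - pmf_sum

a = 2.0
chernoff_holds = all(
    tail_exact(s) <= math.exp(-a * s) * exp_moment(a)
    for s in range(3, 20)
)
```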

2.3 Tree Animals

For \(n \in \mathbb {N}_0\) and \(x \in B_r(\mathcal {O})\), let

$$\begin{aligned} \mathcal {A}_n(x) = \{\Lambda \subset B_n(x) :\, \Lambda \text { is connected}, \Lambda \ni x, |\Lambda |=n+1\} \end{aligned}$$
(2.24)

be the set of tree animals of size \(n+1\) that contain x. Put \(a_n(x) = |\mathcal {A}_n(x)|\).

Lemma 2.4

(Number of tree animals) Subject to Assumption 1.2(2), \(\mathfrak {P}\)-a.s. there exists an \(r_0\in \mathbb {N}\) such that \(a_n(x) \le r^n\) for all \(r \ge r_0\), \(x\in B_r(\mathcal {O})\) and \(0 \le n\le r\).

Proof

For \(n \in \mathbb {N}_0\) and \(x \in B^\downarrow _r(\mathcal {O})\), let

$$\begin{aligned} \mathcal {A}^\downarrow _n(x) = \{\Lambda \subset B^\downarrow _n(x) :\, \Lambda \text { is connected}, \Lambda \ni x, |\Lambda |=n+1\} \end{aligned}$$
(2.25)

be the set of lower tree animals of size \(n+1\) that contain x. Put \(a^\downarrow _n(x) = |\mathcal {A}^\downarrow _n(x)|\).

We first prove the claim for lower tree animals. Afterwards we use a sandwich argument to get the claim for tree animals.

Fix \(\delta >0\). By Lemma 2.3(a) and the Borel-Cantelli lemma, \(\mathfrak {P}\)-a.s. there exists an \(r_0=r_0(\delta ) \in \mathbb {N}\) such that \(D_x \le \delta r\) for all \(r \ge r_0\) and all \(x \in B^\downarrow _{2r}(\mathcal {O})\). Any lower tree animal of size \(n+1\) containing a vertex in \(B^\downarrow _r(\mathcal {O})\) is contained in \(B^\downarrow _{r+n}(\mathcal {O})\). Any lower tree animal of size \(n+1\) can be created by adding a vertex to the outer boundary of a lower tree animal of size n. This leads to the recursive inequality

$$\begin{aligned} a^\downarrow _n(x)\le (\delta r) a^\downarrow _{n-1}(x) \qquad \forall \,x \in B^\downarrow _r(\mathcal {O}), \quad 1 \le n \le r. \end{aligned}$$
(2.26)

Since \(a^\downarrow _0(x) =1\), it follows that

$$\begin{aligned} a^\downarrow _n(x) \le (\delta r)^n \qquad \forall \,x \in B^\downarrow _r(\mathcal {O}),\,\,0 \le n \le r. \end{aligned}$$
(2.27)

Pick \(\delta = \tfrac{1}{2}\) to get the claim for lower tree animals, with room to spare.

To get the claim for tree animals, note that \(a_n(x) \le \sum _{k=0}^n a^\downarrow _n(x[-k])\) (compare with (2.19)), and so \(a_n(x) \le (n+1)(r/2)^n \le r^n\) for all \(x\in B_r(\mathcal {O})\) and all \(0 \le n\le r\), where we use that \(n+1 \le 2^n\) for all \(n \in \mathbb {N}_0\). \(\square \)
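For small trees, the sets \(\mathcal {A}_n(x)\) in (2.24) can be enumerated by brute force, which gives a feel for how far the bound of Lemma 2.4 is from being sharp. On the complete binary tree the counts at the root follow the Catalan numbers \(1, 2, 5, 14, \dots \). A Python sketch (the helper names are ours):

```python
from itertools import count

def binary_tree(depth):
    """Adjacency lists of the complete binary tree of the given depth."""
    adj = {0: []}
    nodes = [0]
    ids = count(1)
    for _ in range(depth):
        new = []
        for v in nodes:
            for _ in range(2):
                w = next(ids)
                adj[v].append(w)
                adj[w] = [v]
                new.append(w)
        nodes = new
    return adj

def tree_animals(adj, x, size):
    """All connected vertex sets of the given size containing x, cf. (2.24)."""
    frontier = {frozenset([x])}
    for _ in range(size - 1):
        grown = set()
        for animal in frontier:
            for v in animal:
                for w in adj[v]:
                    if w not in animal:
                        grown.add(animal | {w})
        frontier = grown
    return frontier

adj = binary_tree(5)
counts = [len(tree_animals(adj, 0, s)) for s in range(1, 5)]   # Catalan numbers
```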

3 Preliminaries

In this section we extend the lemmas in [7, Sect. 2]. Section 3.1 identifies the maximum size of the islands where the potential is suitably high. Section 3.2 estimates the contribution to the total mass in (1.6) by the random walk until it exits a subset of \({\mathcal {G}\mathcal {W}}\). Section 3.3 gives a bound on the principal eigenvalue associated with the islands. Section 3.5 estimates the number of locations where the potential is intermediate.

Abbreviate \(L_r = L_r({\mathcal {G}\mathcal {W}}) = |B_r(\mathcal {O})|\) and put

$$\begin{aligned} S_r := (\log r)^\alpha , \qquad \alpha \in (0,1). \end{aligned}$$
(3.1)

3.1 Maximum Size of the Islands

For every \(r \in \mathbb {N}\) there is a unique \(a_r\) such that

$$\begin{aligned} \mathrm {P}(\xi (0) > a_r) = \frac{1}{r}. \end{aligned}$$
(3.2)

By Assumption 1.1, for r large enough

$$\begin{aligned} a_r = \varrho \log \log r. \end{aligned}$$
(3.3)
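Indeed, inserting the exact double-exponential tail from (1.7) into (3.2) and solving for \(a_r\) gives, for r large enough,

```latex
\mathrm{P}(\xi(0) > a_r) = \mathrm{e}^{-\mathrm{e}^{a_r/\varrho}} = \frac{1}{r}
\;\Longleftrightarrow\; \mathrm{e}^{a_r/\varrho} = \log r
\;\Longleftrightarrow\; a_r = \varrho \log \log r .
```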

For \(r \in \mathbb {N}\) and \(A>0\), let

$$\begin{aligned} \Pi _{r,A} = \Pi _{r,A}(\xi ) := \{z \in B_r(\mathcal {O}):\,\xi (z)>a_{L_r}-2A\} \end{aligned}$$
(3.4)

be the set of vertices in \(B_r(\mathcal {O})\) where the potential is close to maximal,

$$\begin{aligned} D_{r,A} = D_{r,A}(\xi ) := \{z \in B_r(\mathcal {O}):\,\mathrm {dist}(z,\Pi _{r,A}) \le S_r\} \end{aligned}$$
(3.5)

be the \(S_r\)-neighbourhood of \(\Pi _{r,A}\), and \(\mathfrak {C}_{r,A}\) be the set of connected components of \(D_{r,A}\) in \({\mathcal {G}\mathcal {W}}\), which we think of as islands. For \(M_A\in \mathbb {N}\), define the event

$$\begin{aligned} \mathcal {B}_{r,A} := \big \{ \exists \, \mathcal {C}\in \mathfrak {C}_{r,A}:\, |\mathcal {C}\cap \Pi _{r,A}| > M_A \big \}. \end{aligned}$$
(3.6)

Note that \(\Pi _{r,A}, D_{r,A}, \mathcal {B}_{r,A}\) depend on \({\mathcal {G}\mathcal {W}}\) and therefore are random.

Lemma 3.1

(Maximum size of the islands) Subject to Assumptions 1.11.2, for every \(A > 0\) there exists an \(M_A \in \mathbb {N}\) such that

$$\begin{aligned} \sum _{r \in \mathbb {N}} \mathrm {P}(\mathcal {B}_{r,A}) < \infty \qquad \mathfrak {P}-a.s. \end{aligned}$$
(3.7)

Proof

We follow [5, Lemma 6.6]. By Assumption 1.1, for every \(x \in V\) and r large enough,

$$\begin{aligned} \mathrm {P}(x \in \Pi _{r,A}) = \mathrm {P}(\xi (x) > a_{L_r} -2A) = L_r^{-c_A} \end{aligned}$$
(3.8)

with \(c_A = e^{-2A/\varrho }\). By Lemma 2.2, \(\mathfrak {P}\)-a.s. for every \(y \in B_r(\mathcal {O})\) and r large enough,

$$\begin{aligned} |B_{S_r}(y)| \le |B_{o(r)}(\mathcal {O})| = L_{o(r)} = L_r^{o(1)}, \end{aligned}$$
(3.9)

where we use that \(S_r = o(\log r)=o(r)\), and hence for every \(m\in \mathbb {N}\),

$$\begin{aligned} \mathrm {P}(|B_{S_r}(y) \cap \Pi _{r,A}| \ge m) \le \left( {\begin{array}{c}|B_{S_r}(y)|\\ m\end{array}}\right) L_r^{-c_Am} \le (|B_{S_r}(y)|L_r^{-c_A})^{m} \le L_r^{-c_Am[1+o(1)]}. \end{aligned}$$
(3.10)

Consequently, \(\mathfrak {P}\)-a.s.

$$\begin{aligned} \begin{aligned} \mathrm {P}(\exists \,\mathcal {C}\in \mathfrak {C}_{r,A}:\, |\mathcal {C}\cap \Pi _{r,A}| \ge m)&\le \mathrm {P}(\exists \,y \in B_r(\mathcal {O}):\, |B_{S_r}(y) \cap \Pi _{r,A}| \ge m)\\&\le |B_r(\mathcal {O})|\, L_r^{-c_Am[1+o(1)]} = L_r^{(1-c_Am)[1+o(1)]}. \end{aligned} \end{aligned}$$
(3.11)

By choosing \(m>1/c_A\), we see that the above probability becomes summable in r, and so we have proved the claim with \(M_A=\lceil 1/c_A \rceil \). \(\square \)
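The two elementary estimates used in (3.10) — the union bound \(\mathrm {P}(\mathrm {Bin}(n,p) \ge m) \le \left( {\begin{array}{c}n\\ m\end{array}}\right) p^m\) over \(m\)-subsets, and the crude bound \(\left( {\begin{array}{c}n\\ m\end{array}}\right) \le n^m\) — can be checked numerically; a minimal sketch in which all parameter values are illustrative:

```python
import math

def binom_tail(n, p, m):
    """Exact P(Bin(n,p) >= m)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# Check  P(Bin(n,p) >= m) <= C(n,m) p^m <= (n p)^m  on a small grid,
# mirroring the two inequalities used in (3.10).
for n in (10, 50, 200):
    for m in (1, 3, 7):
        for p in (0.01, 0.1, 0.3):
            tail = binom_tail(n, p, m)
            union = math.comb(n, m) * p**m
            crude = (n * p)**m
            assert tail <= union + 1e-12
            assert union <= crude + 1e-12
```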

By the Borel–Cantelli lemma, Lemma 3.1 implies that \((\mathrm {P}\times \mathfrak {P})\)-a.s. \(\mathcal {B}_{r,A}\) does not occur eventually as \(r \rightarrow \infty \). Note that \(\mathfrak {P}\)-a.s. on the event \([\mathcal {B}_{r,A}]^c\),

$$\begin{aligned} \forall \,\mathcal {C}\in \mathfrak {C}_{r,A}:\, |\mathcal {C}\cap \Pi _{r,A}| \le M_A, \, {{\,\mathrm{diam}\,}}_{\mathcal {G}\mathcal {W}}(\mathcal {C}) \le 2M_A S_r, \, |\mathcal {C}| \le \mathrm {e}^{2\vartheta M_AS_r}, \end{aligned}$$
(3.12)

where the last inequality follows from Lemma 2.2.

3.2 Mass Up to an Exit Time

Lemma 3.2

(Mass up to an exit time) Subject to Assumption 1.2(2), \(\mathfrak {P}\)-a.s. for any \(\delta >0\), \(r \ge r_0\), \(y \in \Lambda \subset B_r(\mathcal {O})\), \(\xi \in [0,\infty )^V\) and \(\gamma > \lambda _\Lambda = \lambda _\Lambda (\xi ,{\mathcal {G}\mathcal {W}})\),

$$\begin{aligned} \mathbb {E}_y \left[ \mathrm {e}^{\int _0^{\tau _{\Lambda ^{\mathrm{c}}}} (\xi (X_s) - \gamma )\, \mathrm{d}s} \right] \le 1 + \frac{(\delta r)\,|\Lambda |}{ \gamma - \lambda _\Lambda }. \end{aligned}$$
(3.13)

Proof

We follow the proof of [10, Lemma 2.18] and [11, Lemma 4.2]. Define

$$\begin{aligned} u(x) := \mathbb {E}_x \left[ \mathrm {e}^{ \int _0^{\tau _{\Lambda ^{\mathrm{c}}}} (\xi (X_s) - \gamma )\, \mathrm{d}s} \right] . \end{aligned}$$
(3.14)

This is the solution to the boundary value problem

$$\begin{aligned} \begin{aligned} (\Delta + \xi -\gamma )u&= 0 \quad \text {on}\ \Lambda \\ u&=1 \quad \text {on}\ \Lambda ^{\mathrm{c}}. \end{aligned} \end{aligned}$$
(3.15)

Via the substitution \(u=:1+v\), this turns into

$$\begin{aligned} \begin{aligned} (\Delta + \xi -\gamma )v&= \gamma - \xi \quad \text {on}\ \Lambda \\ v&=0 \qquad \quad \text {on}\ \Lambda ^{\mathrm{c}}. \end{aligned} \end{aligned}$$
(3.16)

It is readily checked that for \(\gamma > \lambda _\Lambda \) the solution exists and is given by

$$\begin{aligned} v = \mathcal {R}_\gamma (\xi -\gamma ), \end{aligned}$$
(3.17)

where \(\mathcal {R}_\gamma \) denotes the resolvent of \(\Delta + \xi \) in \(\ell ^2(\Lambda )\) with Dirichlet boundary condition. Hence

$$\begin{aligned} v(x) \le (\delta r)\,(\mathcal {R}_\gamma \mathbbm {1})(x) \le (\delta r)\,\langle \mathcal {R}_\gamma \mathbbm {1},\mathbbm {1}\rangle _\Lambda \le \frac{(\delta r)\,|\Lambda |}{\gamma -\lambda _\Lambda }, \quad x \in \Lambda , \end{aligned}$$
(3.18)

where \(\mathbbm {1}\) denotes the constant function equal to 1, and \(\langle \cdot ,\cdot \rangle _\Lambda \) denotes the inner product in \(\ell ^2(\Lambda )\). To get the first inequality, we combine Lemma 2.3(a) with the lower bound in (B.2) from Lemma B.1, to get \(\xi - \gamma \le \lambda _\Lambda + \delta r -\gamma \le \delta r\) on \(\Lambda \). The positivity of the resolvent gives

$$\begin{aligned} 0 \le [\mathcal {R}_\gamma (\delta r - (\xi -\gamma ))](x) = (\delta r)\,[\mathcal {R}_\gamma \mathbbm {1}](x) - [\mathcal {R}_\gamma (\xi -\gamma )](x). \end{aligned}$$
(3.19)

To get the second inequality, we write

$$\begin{aligned} (\delta r)\,(\mathcal {R}_\gamma \mathbbm {1})(x) \le (\delta r) \sum _{y \in \Lambda } (\mathcal {R}_\gamma \mathbbm {1})(y) = (\delta r) \sum _{y \in \Lambda } (\mathcal {R}_\gamma \mathbbm {1})(y)\mathbbm {1}(y) = (\delta r)\, \langle \mathcal {R}_\gamma \mathbbm {1},\mathbbm {1}\rangle _\Lambda . \end{aligned}$$
(3.20)

To get the third inequality, we use the Fourier expansion of the resolvent with respect to the orthonormal basis of eigenfunctions of \(\Delta + \xi \) in \(\ell ^2(\Lambda )\). \(\square \)
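The resolvent argument in (3.16)–(3.18) can be illustrated numerically on a toy \(\Lambda \) of three consecutive vertices of a path graph; the potential values, the degrees and the constant playing the role of \(\delta r\) below are illustrative assumptions, not quantities from the model:

```python
import math

# Toy Lambda = 3 consecutive vertices of a path graph, Dirichlet outside.
# H = Delta + xi on l^2(Lambda): diagonal xi(i) - deg(i), off-diagonal 1.
xi = [0.5, 2.0, 1.0]     # illustrative potential
deg = [2, 2, 2]          # degrees in the ambient graph
n = len(xi)
H = [[0.0] * n for _ in range(n)]
for i in range(n):
    H[i][i] = xi[i] - deg[i]
for i in range(n - 1):
    H[i][i + 1] = H[i + 1][i] = 1.0

# Principal eigenvalue lambda_Lambda via power iteration on H + 10*I.
v = [1.0] * n
for _ in range(5000):
    w = [sum(H[i][j] * v[j] for j in range(n)) + 10.0 * v[i] for i in range(n)]
    nrm = math.sqrt(sum(x * x for x in w))
    v = [x / nrm for x in w]
lam = sum(v[i] * sum(H[i][j] * v[j] for j in range(n)) for i in range(n))

gamma = lam + 0.1        # any gamma > lambda_Lambda

# Solve (gamma*I - H) v_sol = xi - gamma, i.e. v_sol = R_gamma(xi - gamma),
# by Gaussian elimination with partial pivoting.
A = [[(gamma if i == j else 0.0) - H[i][j] for j in range(n)] for i in range(n)]
b = [x - gamma for x in xi]
M = [row[:] + [bi] for row, bi in zip(A, b)]
for c in range(n):
    piv = max(range(c, n), key=lambda r: abs(M[r][c]))
    M[c], M[piv] = M[piv], M[c]
    for r in range(c + 1, n):
        f = M[r][c] / M[c][c]
        for k in range(c, n + 1):
            M[r][k] -= f * M[c][k]
v_sol = [0.0] * n
for r in range(n - 1, -1, -1):
    v_sol[r] = (M[r][n] - sum(M[r][k] * v_sol[k] for k in range(r + 1, n))) / M[r][r]

# The role of delta*r is played by an upper bound on xi - gamma; cf. (3.18).
delta_r = max(x - gamma for x in xi)
bound = delta_r * n / (gamma - lam)
assert all(vs <= bound + 1e-8 for vs in v_sol)
```

The assertion checks the chain \(v(x) \le (\delta r)\,|\Lambda |/(\gamma -\lambda _\Lambda )\) componentwise for this toy instance.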

3.3 Principal Eigenvalue of the Islands

The following lemma provides a spectral bound.

Lemma 3.3

(Principal eigenvalues of the islands) Subject to Assumptions 1.1 and 1.2(2), for any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(r \rightarrow \infty \),

$$\begin{aligned} \text {all}\ \mathcal {C}\in \mathfrak {C}_{r,A}\ \text {satisfy} :\, \lambda _\mathcal {C}(\xi ; {\mathcal {G}\mathcal {W}}) \le a_{L_r} - {\widehat{\chi }}_{\mathcal {C}}({\mathcal {G}\mathcal {W}}) + \varepsilon . \end{aligned}$$
(3.21)

Proof

We follow the proof of [7, Lemma 2.6]. For \(\varepsilon >0\) and \(A>0\), define the event

$$\begin{aligned} {\bar{\mathcal {B}}}_{r,A} {:}{=} \left\{ \begin{array}{c} \text {there exists a connected subset } \Lambda \subset V \text { with } \Lambda \cap B_{r}(\mathcal {O}) \ne \emptyset ,\\ |\Lambda | \le \mathrm {e}^{2\vartheta M_{A} S_{r}},\,\lambda _{\Lambda }(\xi ; {\mathcal {G}\mathcal {W}}) > a_{L_{r}} - {\widehat{\chi }}_{\Lambda }({\mathcal {G}\mathcal {W}}) + \varepsilon \end{array}\right\} \end{aligned}$$
(3.22)

with \(M_A\) as in Lemma 3.1. Note that, by (1.2), \(\mathrm {e}^{\xi (x)/\varrho }\) is stochastically dominated by \(Z \vee N\), where Z is an \(\mathrm {Exp}(1)\) random variable and \(N>0\) is a constant. Thus, for any \(\Lambda \subset V\), using [7, Eq. (2.17)], putting \(\gamma = \sqrt{\mathrm {e}^{\varepsilon /\varrho }} > 1\) and applying Markov’s inequality, we may estimate

$$\begin{aligned} \begin{aligned}&\mathrm {P}\left( \lambda _\Lambda (\xi ; {\mathcal {G}\mathcal {W}})> a_{L_r} - {\widehat{\chi }}_\Lambda ({\mathcal {G}\mathcal {W}}) + \varepsilon \right) \le \mathrm {P}\left( \mathcal {L}_\Lambda (\xi - a_{L_r}-\varepsilon )> 1\right) \\&= \mathrm {P}\left( \gamma ^{-1} \mathcal {L}_\Lambda (\xi )> \gamma \log L_r \right) \le \mathrm {e}^{-\gamma \log L_r} \mathrm {E}[\mathrm {e}^{\gamma ^{-1} \mathcal {L}_\Lambda (\xi )}] \le \mathrm {e}^{-\gamma \log L_r } K_\gamma ^{|\Lambda |} \end{aligned} \end{aligned}$$
(3.23)

with \(K_\gamma = \mathrm {E}[\mathrm {e}^{\gamma ^{-1}(Z \vee N)}] \in (1,\infty )\). Next, by Lemma 2.4, for any \(x \in B_r(\mathcal {O})\) and \(1 \le n \le r\), the number of connected subsets \(\Lambda \subset V\) with \(x \in \Lambda \) and \(|\Lambda |=n+1\) is \(\mathfrak {P}\)-a.s. at most \((n+1)r^n \le \mathrm {e}^{2n\log r}\) for \(r \ge r_0\). Noting that \(\mathrm {e}^{S_r} \le r\), we use a union bound together with the fact that, by Lemma 2.2, \(\log L_r = \vartheta r + o(r)\) as \(r\rightarrow \infty \) \(\mathfrak {P}\)-a.s., to estimate for r large enough,

$$\begin{aligned} \begin{aligned} \mathrm {P}(\bar{\mathcal {B}}_{r,A})&\le \mathrm {e}^{-(\gamma -1) \log L_r} \sum _{n=1}^{\lfloor \mathrm {e}^{2\vartheta M_A S_r} \rfloor } \mathrm {e}^{2n \log r} K_\gamma ^n\\&\le \mathrm {e}^{2\vartheta M_A S_r} \exp \left\{ -\vartheta (\gamma -1) r + o(r) + (2\log r + \log K_\gamma )\,\mathrm {e}^{2\vartheta M_A S_r}\right\} \\&= r^{o(1)} \exp \left\{ -\vartheta (\gamma -1) r + o(r) + (\log r)\, r^{o(1)}\right\} \le \mathrm {e}^{-\tfrac{1}{2} \vartheta (\gamma -1) r}. \end{aligned} \end{aligned}$$
(3.24)

Via the Borel–Cantelli lemma this implies that \((\mathrm {P}\times \mathfrak {P})\)-a.s. \(\bar{\mathcal {B}}_{r,A}\) does not occur eventually as \(r\rightarrow \infty \). The proof is completed by invoking Lemma 3.1. \(\square \)

Corollary 3.4

(Uniform bound on principal eigenvalue of the islands) Subject to Assumptions 1.1–1.2, for \(\vartheta \) as in (1.10), and any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(r \rightarrow \infty \),

$$\begin{aligned} \max _{\mathcal C\in \mathfrak {C}_{r,A}}\lambda ^{{{\scriptscriptstyle {({1}})}}}_\mathcal C(\xi ; G) \le a_{L_r} - {\widetilde{\chi }}(\varrho ) + \varepsilon . \end{aligned}$$
(3.25)

Proof

See [7, Corollary 2.8]. The proof carries over verbatim because the degrees play no role. \(\square \)

3.4 Maximum of the Potential

The next lemma shows that \(a_{L_r}\) is the leading order of the maximum of \(\xi \) in \(B_r(\mathcal {O})\).

Lemma 3.5

(Maximum of the potential) Subject to Assumptions 1.1–1.2, for \(\vartheta \) as in (1.10), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(r \rightarrow \infty \),

$$\begin{aligned} \left| \max _{x \in B_r(\mathcal {O})} \xi (x) - a_{L_r} \right| \le \frac{2 \varrho \log r}{\vartheta r}. \end{aligned}$$
(3.26)

Proof

See [7, Lemma 2.4]. The proof carries over verbatim and uses Lemma 2.2. \(\square \)

3.5 Number of Intermediate Peaks of the Potential

We recall the following Chernoff bound for a binomial random variable with parameters n and p (see e.g. [6, Lemma 5.9]):

$$\begin{aligned} \mathrm {P}\left( \text {Bin}(n,p) \ge u\right) \le \mathrm {e}^{-u[\log (\frac{u}{np}) - 1]}, \qquad u > 0. \end{aligned}$$
(3.27)
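As a sanity check, this Chernoff bound can be compared against the exact binomial tail; a small sketch in which the parameter values are illustrative:

```python
import math

def binom_tail(n, p, u):
    """Exact P(Bin(n,p) >= u) for a positive threshold u."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(math.ceil(u), n + 1))

def chernoff(n, p, u):
    """Right-hand side of (3.27): exp(-u [log(u/(np)) - 1]) = (e n p / u)^u."""
    return math.exp(-u * (math.log(u / (n * p)) - 1))

# The bound is trivial when u <= e*n*p (RHS >= 1) and kicks in for large u.
for n, p in [(30, 0.05), (100, 0.02), (500, 0.01)]:
    for u in (1, 2, 5, 12):
        assert binom_tail(n, p, u) <= chernoff(n, p, u) + 1e-12
```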

Lemma 3.6

(Number of intermediate peaks of the potential) Subject to Assumptions 1.1 and 1.2(2), for any \(\beta \in (0,1)\) and \(\varepsilon \in (0, \tfrac{1}{2}\beta )\) the following holds. For a self-avoiding path \(\pi \) in \({\mathcal {G}\mathcal {W}}\), set

$$\begin{aligned} N_{\pi } = N_{\pi }(\xi ) :=|\{z \in {{\,\mathrm{supp}\,}}(\pi ) :\, \xi (z) > (1-\varepsilon ) a_{L_r} \}|. \end{aligned}$$
(3.28)

Define the event

$$\begin{aligned} \mathcal {B}_r := \left\{ \begin{array}{c} \text {there exists a self-avoiding path } \pi \text { in } {\mathcal {G}\mathcal {W}}\text { with } \\ {{\,\mathrm{supp}\,}}(\pi ) \cap B_r \ne \emptyset , \, |{{\,\mathrm{supp}\,}}(\pi )| \ge (\log L_r)^{\beta } \text { and } N_\pi > \frac{|{{\,\mathrm{supp}\,}}(\pi )|}{(\log {L_r})^\varepsilon } \end{array} \right\} . \end{aligned}$$
(3.29)

Then

$$\begin{aligned} \sum _{r \in \mathbb {N}_0} \mathrm {P}(\mathcal {B}_r) < \infty \qquad \mathfrak {P}-a.s. \end{aligned}$$
(3.30)

Proof

We follow the proof of [7, Lemma 2.9]. Fix \(\beta \in (0,1)\) and \(\varepsilon \in (0,\frac{1}{2}\beta )\). By (1.7),

$$\begin{aligned} p_r := \mathrm {P}(\xi (0) > (1-\varepsilon )a_{L_r}) = \exp \left\{ -(\log L_r)^{1-\varepsilon }\right\} . \end{aligned}$$
(3.31)

Fix \(x \in B_r(\mathcal {O})\) and \(k \in \mathbb {N}\). The number of self-avoiding paths \(\pi \) in \(B_r(\mathcal {O})\) with \(|{{\,\mathrm{supp}\,}}(\pi )|=k\) and \(\pi _0 = x\) is at most \(\mathrm {e}^{k \log r}\) by Lemma 2.4 for r sufficiently large. For such a \(\pi \), the random variable \(N_{\pi }\) has a Bin(k, \(p_r\))-distribution. Using (3.27), we obtain

$$\begin{aligned}&\mathrm {P}\Bigl ( \exists \, \text { self-avoiding } \pi \text { with } |{{\,\mathrm{supp}\,}}(\pi )|=k, \pi _0 = x \text { and } N_{\pi } > k/ (\log L_r)^\varepsilon \Bigr ) \nonumber \\&\quad \le \exp \Big \{ -k \Big ((\log L_r)^{1-2\varepsilon } - \log r - \frac{1+ \varepsilon \log \log L_r}{(\log L_r)^{\varepsilon }}\Big ) \Big \}. \end{aligned}$$
(3.32)

By the definition of \(\varepsilon \), together with the fact that \(L_r > r\) and \(x \mapsto (\log \log x)/(\log x)^\varepsilon \) is eventually decreasing, the expression in parentheses above is at least \(\frac{1}{2}(\log L_r)^{1- 2\varepsilon }\). Summing over \(k \ge (\log L_r)^\beta \) and \(x \in B_r(\mathcal {O})\), we get \(\mathfrak {P}-a.s.\)

$$\begin{aligned} \begin{aligned}&\mathrm {P}\left( \mathcal {B}_r\right) \le 2 L_r \exp \Big \{-\tfrac{1}{2} (\log L_r)^{1+\beta -2\varepsilon } \Big \} \le c_1 \exp \Big \{-c_2 (\log L_r)^{1+\delta } \Big \} \end{aligned} \end{aligned}$$
(3.33)

for some \(c_1, c_2, \delta >0\). Since \(L_r > r\), (3.33) is summable in r. \(\square \)

Lemma 3.6 implies that \((\mathrm {P}\times \mathfrak {P})\)-a.s. for r large enough, all self-avoiding paths \(\pi \) in \({\mathcal {G}\mathcal {W}}\) with \({{\,\mathrm{supp}\,}}(\pi ) \cap B_r \ne \emptyset \) and \(|{{\,\mathrm{supp}\,}}(\pi )| \ge (\log L_r)^{\beta }\) satisfy \(N_{\pi } \le \frac{|{{\,\mathrm{supp}\,}}(\pi )|}{(\log L_r)^\varepsilon }\).

Lemma 3.7

(Number of high exceedances of the potential) Subject to Assumptions 1.1 and 1.2(2), for any \(A>0\) there is a \(C \ge 1\) such that, for all \(\delta \in (0,1)\), the following holds. For a self-avoiding path \(\pi \) in \({\mathcal {G}\mathcal {W}}\), let

$$\begin{aligned} N_\pi := |\{ x \in {{\,\mathrm{supp}\,}}(\pi ) :\, \xi (x) > a_{L_r} - 2A \}|. \end{aligned}$$
(3.34)

Define the event

$$\begin{aligned} \mathcal B_r := \left\{ \begin{array}{c} \text {there exists a self-avoiding path } \pi \text { in } {\mathcal {G}\mathcal {W}}\text { with } \\ {{\,\mathrm{supp}\,}}(\pi ) \cap B_r \ne \emptyset , \, |{{\,\mathrm{supp}\,}}(\pi )| \ge C (\log L_r)^{\delta } \text { and } N_\pi > \frac{|{{\,\mathrm{supp}\,}}(\pi )|}{(\log {L_r})^\delta } \end{array} \right\} . \end{aligned}$$
(3.35)

Then \(\sum _{r \in \mathbb {N}_0} \mathrm {P}(\mathcal B_r) < \infty \) \(\mathfrak {P}\)-a.s. In particular, \((\mathrm {P}\times \mathfrak {P})\)-a.s. for r large enough, all self-avoiding paths \(\pi \) in \({\mathcal {G}\mathcal {W}}\) with \({{\,\mathrm{supp}\,}}(\pi ) \cap B_r \ne \emptyset \) and \(|{{\,\mathrm{supp}\,}}(\pi )| \ge C (\log L_r)^{\delta }\) satisfy

$$\begin{aligned} N_\pi = |\{ x \in {{\,\mathrm{supp}\,}}(\pi ) :\, \xi (x) > a_{L_r} - 2A \}| \le \frac{|{{\,\mathrm{supp}\,}}(\pi )|}{(\log L_r)^\delta }. \end{aligned}$$
(3.36)

Proof

Proceed as for Lemma 3.6, noting that this time

$$\begin{aligned} p_r := \mathrm {P}\big (\xi (0) > a_{L_r} - 2A\big ) = L_r^{-c_A} \end{aligned}$$
(3.37)

with \(c_A = \mathrm {e}^{-2A/\varrho }\) as in (3.8), and taking \(C > 2/c_A\). \(\square \)

4 Path Expansions

In this section we extend [7, Sect. 3]. Section 4.1 proves three lemmas that concern the contribution to the total mass in (1.6) coming from various sets of paths. Section 4.2 proves a key proposition that controls the entropy associated with a key set of paths. The proof is based on the three lemmas in Sect. 4.1.

We need various sets of nearest-neighbour paths in \({\mathcal {G}\mathcal {W}}=(V,E,\mathcal {O})\), defined in [7]. For \(\ell \in \mathbb {N}_0\) and subsets \(\Lambda , \Lambda ' \subset V\), put

$$\begin{aligned} \begin{aligned}&\mathscr {P}_\ell (\Lambda ,\Lambda ') := \left\{ (\pi _0, \ldots , \pi _{\ell }) \in V^{\ell +1} :\, \begin{array}{ll} &{}\pi _0 \in \Lambda , \pi _{\ell } \in \Lambda ',\\ &{}\{\pi _{i}, \pi _{i-1}\} \in E \;\forall \, 1 \le i \le \ell \end{array} \right\} ,\\&\mathscr {P}(\Lambda , \Lambda ') := \bigcup _{\ell \in \mathbb {N}_0} \mathscr {P}_\ell (\Lambda ,\Lambda '), \end{aligned} \end{aligned}$$
(4.1)

and set

$$\begin{aligned} \mathscr {P}_\ell := \mathscr {P}_\ell (V,V), \qquad \mathscr {P}:= \mathscr {P}(V,V). \end{aligned}$$
(4.2)

When \(\Lambda \) or \(\Lambda '\) consists of a single point, write x instead of \(\{x\}\). For \(\pi \in \mathscr {P}_\ell \), set \(|\pi | := \ell \). Write \({{\,\mathrm{supp}\,}}(\pi ) := \{\pi _0, \ldots , \pi _{|\pi |}\}\) to denote the set of points visited by \(\pi \).

Let \(X=(X_t)_{t\ge 0}\) be the continuous-time random walk on G that jumps from \(x \in V\) to any neighbour \(y\sim x\) at rate 1. Denote by \((T_k)_{k \in \mathbb {N}_0}\) the sequence of jump times (with \(T_0 := 0\)). For \(\ell \in \mathbb {N}_0\), let

$$\begin{aligned} \pi ^{{{\scriptscriptstyle {({\ell }})}}}(X) := (X_0, \ldots , X_{T_{\ell }}) \end{aligned}$$
(4.3)

be the path in \(\mathscr {P}_\ell \) consisting of the first \(\ell \) steps of X. For \(t \ge 0\), let

$$\begin{aligned} \pi (X_{[0,t]}) = \pi ^{{{\scriptscriptstyle {({\ell _t}})}}}(X), \quad \text { with } \ell _t \in \mathbb {N}_0 \, \text { satisfying } \, T_{\ell _t} \le t < T_{\ell _t+1}, \end{aligned}$$
(4.4)

denote the path in \(\mathscr {P}\) consisting of all the steps taken by X between times 0 and t.

Recall the definitions from Sect. 3.1. For \(\pi \in \mathscr {P}\) and \(A>0\), define

$$\begin{aligned} \lambda _{r,A}(\pi ) := \sup \big \{ \lambda ^{{{\scriptscriptstyle {({1}})}}}_\mathcal C(\xi ; G) :\, \mathcal C\in \mathfrak {C}_{r,A}, \, {{\,\mathrm{supp}\,}}(\pi )\cap \mathcal C\cap \Pi _{r,A} \ne \emptyset \big \}, \end{aligned}$$
(4.5)

with the convention \(\sup \emptyset = -\infty \). This is the largest principal eigenvalue among the components of \({\mathfrak {C}}_{r,A}\) in \({\mathcal {G}\mathcal {W}}\) that have a point of high exceedance visited by the path \(\pi \).

Lemma 4.1

(Mass up to an exit time) Subject to Assumption 1.3, \(\mathfrak {P}\)-a.s. for any \(r \ge r_0\), \(y \in \Lambda \subset B_r(\mathcal {O})\), \(\xi \in [0,\infty )^V\) and \(\gamma > \lambda _\Lambda = \lambda _\Lambda (\xi ,{\mathcal {G}\mathcal {W}})\),

$$\begin{aligned} \mathbb {E}_y \left[ \mathrm {e}^{\int _0^{\tau _{\Lambda ^{\mathrm{c}}}} (\xi (X_s) - \gamma )\, \mathrm{d}s} \right] \le 1 + \frac{(\log r)^{\delta _r}\,|\Lambda |}{ \gamma - \lambda _\Lambda }. \end{aligned}$$
(4.6)

Proof

The proof is identical to that of Lemma 3.2, with \(\delta r\) replaced by \((\log r)^{\delta _r}\) (recall Lemma 2.3). \(\square \)

4.1 Mass of the Solution Along Excursions

Lemma 4.2

(Path evaluation) For \(\ell \in \mathbb {N}_0\), \(\pi \in \mathscr {P}_\ell \) and \(\gamma > \max _{0 \le i < |\pi |} \{\xi (\pi _i)-D_{\pi _i}\}\),

$$\begin{aligned} \mathbb {E}_{\pi _0} \left[ \mathrm {e}^{\int _0^{T_{\ell }} (\xi (X_s) - \gamma )\, \mathrm{d}s} ~\Big |~ \pi ^{{{\scriptscriptstyle {({\ell }})}}}(X) = \pi \right] = \prod _{i=0}^{\ell -1} \frac{D_{\pi _i}}{\gamma - [\xi (\pi _i)-D_{\pi _i}]}. \end{aligned}$$
(4.7)

Proof

The proof is identical to that of [7, Lemma 3.2]. The left-hand side of (4.7) can be evaluated by using the fact that \(T_\ell \) is the sum of \(\ell \) independent \(\mathrm {Exp}(D_{\pi _i})\) random variables that are independent of \(\pi ^{{{\scriptscriptstyle {({\ell }})}}}(X)\). The condition on \(\gamma \) ensures that all \(\ell \) integrals are finite. \(\square \)
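The identity underlying (4.7) — for \(T \sim \mathrm {Exp}(\lambda )\) and \(a < \lambda \), \(\mathbb {E}[\mathrm {e}^{aT}] = \lambda /(\lambda - a)\), applied once per step with \(\lambda = D_{\pi _i}\) and \(a = \xi (\pi _i) - \gamma \) — can be sanity-checked by deterministic numerical integration; the degrees and potential values below are illustrative:

```python
import math

def mgf_exp(lam, a, T=60.0, steps=200_000):
    """Midpoint-rule approximation of E[e^{aT}] = int_0^inf lam e^{-lam t} e^{a t} dt,
    truncated at T (valid for a < lam, so the integrand decays)."""
    h = T / steps
    return sum(lam * math.exp((a - lam) * ((i + 0.5) * h)) for i in range(steps)) * h

# Per-step identity: E[e^{aT}] = lam / (lam - a).
for lam, a in [(2.0, 0.5), (3.0, -1.0), (1.0, 0.3)]:
    assert abs(mgf_exp(lam, a) - lam / (lam - a)) < 1e-4

# Product formula (4.7) for a toy 3-step path with degrees D and potential xi:
D = [2.0, 3.0, 2.0]
xi = [1.0, 0.5, 1.5]
gamma = 2.0                        # gamma > max_i (xi_i - D_i)
closed = 1.0
numeric = 1.0
for Di, xii in zip(D, xi):
    closed *= Di / (gamma - (xii - Di))
    numeric *= mgf_exp(Di, xii - gamma)
assert abs(numeric - closed) < 1e-3
```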

For a path \(\pi \in \mathscr {P}\) and \(\varepsilon \in (0,1)\), we write

$$\begin{aligned} M^{r,\varepsilon }_\pi := \big | \bigl \{0 \le i < |\pi | :\, \xi (\pi _i) \le (1-\varepsilon )a_{L_r}\bigr \}\big |, \end{aligned}$$
(4.8)

with the interpretation that \(M^{r,\varepsilon }_\pi = 0\) if \(|\pi |=0\).

Lemma 4.3

(Mass of excursions) Subject to Assumptions 1.1–1.3, for every \(A, \varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. there exists an \(r_0 \in \mathbb {N}\) such that, for all \(r \ge r_0\), all \(\gamma > a_{L_r} - A\) and all \(\pi \in \mathscr {P}(B_r(\mathcal {O}), B_r(\mathcal {O}))\) satisfying \(\pi _i \notin \Pi _{r,A}\) for all \(0 \le i < \ell :=|\pi |\),

$$\begin{aligned} \mathbb {E}_{\pi _0} \left[ \mathrm {e}^{ \int _0^{T_{\ell }}(\xi (X_s) - \gamma )\, \mathrm{d}s} ~\Big |~ \pi ^{{{\scriptscriptstyle {({\ell }})}}}(X) = \pi \right] \le q_{r,A}^{\ell } \mathrm {e}^{ M^{r,\varepsilon }_\pi \log [(\log r)^{\delta _r}/a_{L_r,A,\varepsilon }q_{r,A}]}, \end{aligned}$$
(4.9)

where

$$\begin{aligned} a_{L_r,A,\varepsilon } := \varepsilon a_{L_r}-A, \qquad q_{r,A} := \left( 1+\frac{A}{(\log r)^{\delta _r}}\right) ^{-1}. \end{aligned}$$
(4.10)

Note that \(\pi _{\ell } \in \Pi _{r,A}\) is allowed.

Proof

The proof is identical to that of [7, Lemma 3.3], with \(d_{\max }\) replaced by \((\log r)^{\delta _r}\) (recall Lemma 2.3). \(\square \)

We follow [7, Definition 3.4] and [6, Sect. 6.2]. Note that the distance between \(\Pi _{r,A}\) and \(D_{r,A}^{\mathrm{c}}\) in \({\mathcal {G}\mathcal {W}}\) is at least \(S_r = (\log r)^\alpha \) (recall (3.4)–(3.5)).

Definition 4.4

(Concatenation of paths) (a) When \(\pi \) and \(\pi '\) are two paths in \(\mathscr {P}\) with \(\pi _{|\pi |} = \pi '_0\), we define their concatenation as

$$\begin{aligned} \pi \circ \pi ' := (\pi _0, \ldots , \pi _{|\pi |}, \pi '_1, \ldots , \pi '_{|\pi '|}) \in \mathscr {P}. \end{aligned}$$
(4.11)

Note that \(|\pi \circ \pi '| = |\pi | + |\pi '|\).

(b) When \(\pi _{|\pi |} \ne \pi '_0\), we can still define the shifted concatenation of \(\pi \) and \(\pi '\) as \(\pi \circ {\hat{\pi }}'\), where \({\hat{\pi }}' := (\pi _{|\pi |}, \pi _{|\pi |} + \pi '_1 - \pi '_0, \ldots , \pi _{|\pi |} + \pi '_{|\pi '|} - \pi '_0)\). The shifted concatenation of multiple paths is defined inductively via associativity.

Now, if a path \(\pi \in \mathscr {P}\) intersects \(\Pi _{r,A}\), then it can be decomposed into an initial path, a sequence of excursions between \(\Pi _{r,A}\) and \(D_{r,A}^{\mathrm{c}}\), and a terminal path. More precisely, there exists \(m_\pi \in \mathbb {N}\) such that

$$\begin{aligned} \pi = {\check{\pi }}^{{{\scriptscriptstyle {({1}})}}} \circ {\hat{\pi }}^{{{\scriptscriptstyle {({1}})}}} \circ \cdots \circ {\check{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}} \circ {\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}} \circ {\bar{\pi }}, \end{aligned}$$
(4.12)

where the paths in (4.12) satisfy

$$\begin{aligned} \begin{aligned} {\check{\pi }}^{{{\scriptscriptstyle {({1}})}}}&\in \mathscr {P}(V, \Pi _{r,A})&\qquad \text {with}\qquad&{\check{\pi }}^{{{\scriptscriptstyle {({1}})}}}_i&\notin \Pi _{r,A},&\quad \, 0\le i< |{\check{\pi }}^{{{\scriptscriptstyle {({1}})}}}|, \\ {\hat{\pi }}^{{{\scriptscriptstyle {({k}})}}}&\in \mathscr {P}(\Pi _{r,A}, D_{r,A}^{\mathrm{c}})&\qquad \text {with}\qquad&{\hat{\pi }}^{{{\scriptscriptstyle {({k}})}}}_i&\in D_{r, A},&\quad \, 0\le i< |{\hat{\pi }}^{{{\scriptscriptstyle {({k}})}}}|, \; 1 \le k \le m_{\pi } - 1, \\ {\check{\pi }}^{{{\scriptscriptstyle {({k}})}}}&\in \mathscr {P}(D_{r,A}^{\mathrm{c}}, \Pi _{r,A})&\qquad \text {with}\qquad&{\check{\pi }}^{{{\scriptscriptstyle {({k}})}}}_i&\notin \Pi _{r,A},&\quad \, 0\le i< |{\check{\pi }}^{{{\scriptscriptstyle {({k}})}}}|, \; 2 \le k \le m_\pi , \\ {\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}}&\in \mathscr {P}(\Pi _{r,A}, V)&\qquad \text {with}\qquad&{\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}}_i&\in D_{r,A},&\quad \, 0\le i < |{\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}}|, \end{aligned} \end{aligned}$$
(4.13)

while

$$\begin{aligned} \begin{array}{ll} {\bar{\pi }} \in \mathscr {P}(D_{r,A}^{\mathrm{c}}, V) \text { and } {\bar{\pi }}_i \notin \Pi _{r,A} \; \forall \, i \ge 0 &{} \text { if } {\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}} \in \mathscr {P}(\Pi _{r,A}, D^{\mathrm{c}}_{r, A}), \\ {\bar{\pi }}_0 \in D_{r,A}, |{\bar{\pi }}| = 0 &{} \text { otherwise.} \end{array} \end{aligned}$$
(4.14)

Note that the decomposition in (4.12)–(4.14) is unique, and that the paths \({\check{\pi }}^{{{\scriptscriptstyle {({1}})}}}\), \({\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}}\) and \({\bar{\pi }}\) can have zero length. If \(\pi \) is contained in \(B_r(\mathcal {O})\), then so are all the paths in the decomposition.
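The decomposition (4.12)–(4.14) is algorithmic: scan the path, cut at each first hitting of \(\Pi _{r,A}\) and at each first exit from \(D_{r,A}\). A minimal sketch, in which the path and the sets `Pi` (standing for \(\Pi _{r,A}\)) and `D` (standing for \(D_{r,A}\)) are toy values on the integers:

```python
def decompose(path, Pi, D):
    """Split `path` into check/hat excursion pieces between Pi and the
    complement of D, plus a terminal piece, following (4.12)-(4.14).
    Consecutive pieces share their endpoint, as in the concatenation (4.11)."""
    checks, hats = [], []
    i, n = 0, len(path) - 1
    while True:
        j = i
        while j <= n and path[j] not in Pi:   # check-piece: run until Pi is hit
            j += 1
        if j > n:                             # Pi never hit again: terminal piece
            return checks, hats, path[i:]
        checks.append(path[i:j + 1])
        k = j
        while k <= n and path[k] in D:        # hat-piece: run until D is left
            k += 1
        if k > n:                             # path ends inside D: |bar| = 0
            hats.append(path[j:])
            return checks, hats, [path[n]]
        hats.append(path[j:k + 1])
        i = k                                 # next check-piece starts in D^c

# Toy example: Pi = {5}, D = {4,5,6} (its neighbourhood).
path = [0, 1, 2, 3, 4, 5, 6, 5, 4, 3, 2]
checks, hats, bar = decompose(path, {5}, {4, 5, 6})
assert checks == [[0, 1, 2, 3, 4, 5]]
assert hats == [[5, 6, 5, 4, 3]]
assert bar == [3, 2]
```

A path that never meets `Pi` is returned as a single terminal piece, matching the convention \(m_\pi = 0\) below.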

Whenever \({{\,\mathrm{supp}\,}}(\pi ) \cap \Pi _{r,A} \ne \emptyset \) and \(\varepsilon > 0\), we define

$$\begin{aligned} s_\pi := \sum _{i=1}^{m_\pi } |{\check{\pi }}^{{{\scriptscriptstyle {({i}})}}}| + |{\bar{\pi }}|, \qquad k^{r,\varepsilon }_\pi := \sum _{i=1}^{m_\pi } M^{r,\varepsilon }_{{\check{\pi }}^{{{\scriptscriptstyle {({i}})}}}} + M^{r,\varepsilon }_{{\bar{\pi }}} \end{aligned}$$
(4.15)

to be the total time spent in exterior excursions, respectively, on moderately low points of the potential visited by exterior excursions (without their last point).

In case \({{\,\mathrm{supp}\,}}(\pi ) \cap \Pi _{r,A} = \emptyset \), we set \(m_\pi := 0\), \(s_\pi := |\pi |\) and \(k^{r,\varepsilon }_\pi := M^{r,\varepsilon }_{\pi }\). Recall from (4.5) that, in this case, \(\lambda _{r,A}(\pi ) = -\infty \).

We say that \(\pi , \pi ' \in \mathscr {P}\) are equivalent, written \(\pi ' \sim \pi \), if \(m_{\pi } = m_{\pi '}\), \({\check{\pi }}'^{{{\scriptscriptstyle {({i}})}}}={\check{\pi }}^{{{\scriptscriptstyle {({i}})}}}\) for all \(i=1,\ldots ,m_{\pi }\), and \({\bar{\pi }}' = {\bar{\pi }}\). If \(\pi ' \sim \pi \), then \(s_{\pi '}\), \(k^{r, \varepsilon }_{\pi '}\) and \(\lambda _{r,A}(\pi ')\) are all equal to the counterparts for \(\pi \).

To state our key lemma, we define, for \(m,s \in \mathbb {N}_0\),

$$\begin{aligned} \mathscr {P}^{(m,s)} := \left\{ \pi \in \mathscr {P}:\, m_\pi = m, s_\pi = s \right\} , \end{aligned}$$
(4.16)

and denote by

$$\begin{aligned} C_{r,A}:= \max \{|\mathcal C| :\, \mathcal C\in \mathfrak {C}_{r,A}\} \end{aligned}$$
(4.17)

the maximal size of the islands in \(\mathfrak {C}_{r,A}\).

Lemma 4.5

(Mass of an equivalence class) Subject to Assumptions 1.1 and 1.3, for every \(A,\varepsilon > 0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. there exists an \(r_0 \in \mathbb {N}\) such that, for all \(r \ge r_0\), all \(m,s \in \mathbb {N}_0\), all \(\pi \in \mathscr {P}^{(m,s)}\) with \({{\,\mathrm{supp}\,}}(\pi ) \subset B_r(\mathcal {O})\), all \(\gamma > \lambda _{r,A}(\pi ) \vee (a_{L_r} -A)\) and all \(t \ge 0\),

$$\begin{aligned}&\mathbb {E}_{\pi _0} \left[ \mathrm{e}^{\int _0^t (\xi (X_u) - \gamma )\, \mathrm{d}u}\, {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \sim \pi \}} \right] \nonumber \\&\quad \le \left( C_{r,A}^{1/2} \right) ^{{\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{m>0\}}} \left( 1+\frac{(\log r)^{\delta _r} \, C_{r,A}}{\gamma - \lambda _{r,A}(\pi )} \right) ^m \left( \frac{q_{r,A}}{d_{\min }}\right) ^s \mathrm {e}^{k^{r,\varepsilon }_{\pi }\log [(\log r)^{\delta _r}/a_{L_r,A,\varepsilon }q_{r,A}]}. \end{aligned}$$
(4.18)

Proof

The proof is identical to that of [7, Lemma 3.5], with \(d_{\max }\) replaced by \((\log r)^{\delta _r}\) (recall Lemma 2.3). \(\square \)

4.2 Key Proposition

The main result of this section is the following proposition.

Proposition 4.6

(Entropy reduction) Let \(\alpha \in (0,1)\) and \(\kappa \in (\alpha ,1)\). Subject to Assumption 1.3, there exists an \(A_0(r)\) such that, for all \(A \ge A_0(r)\), with \(\mathfrak {P}\)-probability tending to one as \(r\rightarrow \infty \), the following statement is true. For each \(x \in B_r(\mathcal {O})\), each \(\mathcal N\subset \mathscr {P}(x,B_r(\mathcal {O}))\) satisfying \({{\,\mathrm{supp}\,}}(\pi ) \subset B_r(\mathcal {O})\) and \(\max _{1 \le \ell \le |\pi |} {{\,\mathrm{dist}\,}}_{G}(\pi _\ell , x) \ge (\log L_r)^\kappa \) for all \(\pi \in \mathcal {N}\), and each assignment \(\pi \mapsto (\gamma _\pi , z_\pi )\in \mathbb {R}\times V\) satisfying

$$\begin{aligned} \gamma _\pi \ge \left( \lambda _{r,A}(\pi ) + \mathrm{e}^{-S_r} \right) \vee (a_{L_r}- A) \qquad \forall \,\,\pi \in \mathcal N\end{aligned}$$
(4.19)

and

$$\begin{aligned} z_\pi \in {{\,\mathrm{supp}\,}}(\pi ) \cup \bigcup _{ \begin{array}{c} \mathcal C\in \mathfrak {C}_{r,A} :\\ {{\,\mathrm{supp}\,}}(\pi ) \cap \mathcal C\cap \Pi _{r,A} \ne \emptyset \end{array}} \mathcal C\qquad \forall \,\, \pi \in \mathcal N, \end{aligned}$$
(4.20)

the following inequality holds for all \(t \ge 0\):

$$\begin{aligned} \log \mathbb {E}_x \left[ \mathrm{e}^{\int _0^t \xi (X_s) \mathrm{d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \in \mathcal {N}\}}\right] \le \sup _{\pi \in \mathcal {N}} \Big \{ t \gamma _\pi + {{\,\mathrm{dist}\,}}_{G}(x,z_\pi ) \log [(\log r)^{\delta _r}/a_{L_r,A,\varepsilon }q_{r,A}]\Big \}.\nonumber \\ \end{aligned}$$
(4.21)

Proof

The proof is based on [7, Sect. 3.4]. First fix \(c_0 >2\) and define

$$\begin{aligned} A_0(r) = (\log r)^{\delta _r} \left( \mathrm {e}^{4 c_0(\log r)^{1-\alpha }}-1\right) . \end{aligned}$$
(4.22)

Fix \(A \ge A_0(r)\), \(\beta \in (0,\alpha )\) and \(\varepsilon \in (0,\frac{1}{2}\beta )\) as in Lemma 3.6. Let \(r_0 \in \mathbb {N}\) be as given in Lemma 4.5, and take \(r \ge r_0\) so large that the conclusions of Lemmas 2.3, 3.1, 3.3 and 3.6 hold, i.e., assume that the events \(\mathcal B_r\) and \(\mathcal B_{r,A}\) in these lemmas do not occur. Fix \(x \in B_r(\mathcal {O})\). Recall the definitions of \(C_{r,A}\) and \(\mathscr {P}^{(m,s)}\). Note that the relation \(\sim \) is an equivalence relation in \(\mathscr {P}^{(m,s)}\), and define

$$\begin{aligned} {\widetilde{\mathscr {P}}}^{(m,s)}_x := \big \{\text {equivalence classes of the paths in } \mathscr {P}(x,V) \cap \mathscr {P}^{(m,s)}\big \}. \end{aligned}$$
(4.23)

The following bound on the cardinality of this set is needed.

Lemma 4.7

(Bound on equivalence classes) Subject to Assumption 1.3, \(\mathfrak {P}\)-a.s., \(|{\widetilde{\mathscr {P}}}^{(m,s)}_x| \le (2C_{r,A})^m (\log r)^{\delta _r (m+s)}\) for all \(m,s \in \mathbb {N}_0\).

Proof

We can copy the proof of [7, Lemma 3.6], replacing \(d_{\max }\) by \((\log r)^{\delta _r}\).

The estimate is clear when \(m=0\). To prove that it holds for \(m \ge 1\), write \(\partial \Lambda := \{z \notin \Lambda :\, {{\,\mathrm{dist}\,}}_{G}(z, \Lambda )=1\}\) for \(\Lambda \subset V\). Then \(|\partial \mathcal C\cup \mathcal C| \le ((\log r)^{\delta _r}+1) |\mathcal C| \le 2(\log r)^{\delta _r} C_{r,A}\) by Lemma 2.3. Define the map \(\Phi :{\widetilde{\mathscr {P}}}^{(m,s)}_x \rightarrow \mathscr {P}_s(x,V) \times \{1, \ldots , 2(\log r)^{\delta _r} C_{r,A} \}^m\) as follows. For each \(\Lambda \subset V\) with \(1 \le |\Lambda | \le 2(\log r)^{\delta _r} C_{r,A}\), fix an injection \(f_\Lambda :\Lambda \rightarrow \{1, \ldots , 2(\log r)^{\delta _r} C_{r,A} \}\). Given a path \(\pi \in \mathscr {P}^{(m,s)} \cap \mathscr {P}(x,V)\), decompose \(\pi \), and denote by \({\widetilde{\pi }} \in \mathscr {P}_s(x, V)\) the shifted concatenation of \({{\check{\pi }}}^{{{\scriptscriptstyle {({1}})}}}, \ldots , {{\check{\pi }}}^{{{\scriptscriptstyle {({m}})}}}\), \({\bar{\pi }}\). Note that, for \(2\le k\le m\), the point \({{\check{\pi }}}^{{{\scriptscriptstyle {({k}})}}}_0\) lies in \(\partial \mathcal C_k\) for some \(\mathcal C_k\in \mathfrak {C}_{r,A}\), while \({\bar{\pi }}_0 \in \partial {\overline{\mathcal C}} \cup {\overline{\mathcal C}}\) for some \({\overline{\mathcal C}} \in \mathfrak {C}_{r,A}\). Thus, it is possible to set

$$\begin{aligned} \Phi (\pi ):= \bigl ({{\widetilde{\pi }}},f_{\partial \mathcal C_2}({\check{\pi }}^{{{\scriptscriptstyle {({2}})}}}_0),\dots , f_{\partial \mathcal C_m}({\check{\pi }}^{{{\scriptscriptstyle {({m}})}}}_0), f_{\partial {\bar{\mathcal C}} \cup {\bar{\mathcal C}}}({\bar{\pi }}_0) \bigr ). \end{aligned}$$
(4.24)

It is readily checked that \(\Phi (\pi )\) depends only on the equivalence class of \(\pi \) and, when restricted to equivalence classes, \(\Phi \) is injective. Hence the claim follows. \(\square \)

Now take \(\mathcal N\subset \mathscr {P}(x, V)\) as in the statement, and set

$$\begin{aligned} \widetilde{\mathcal {N}}^{(m,s)} := \big \{\text {equivalence classes of paths in } \mathcal N\cap \mathscr {P}^{(m,s)}\big \} \subset {\widetilde{\mathscr {P}}}^{(m,s)}_x. \end{aligned}$$
(4.25)

For each \(\mathcal M\in {\widetilde{\mathcal N}}^{(m,s)}\), choose a representative \(\pi _\mathcal M\in \mathcal M\), and use Lemma 4.7 to write

$$\begin{aligned}&\mathbb {E}_x \left[ \mathrm{e}^{\int _0^t \xi (X_u) \mathrm{d}u} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \in \mathcal {N}\}} \right] = \sum _{m, s \in \mathbb {N}_0} \sum _{\mathcal M\in \widetilde{\mathcal {N}}^{(m,s)}} \mathbb {E}_x \left[ \mathrm{e}^{\int _0^t \xi (X_u) \mathrm{d}u} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \sim \pi _\mathcal M\}} \right] \nonumber \\&\quad \qquad \le \sum _{m, s \in \mathbb {N}_0} (2 (\log r)^{\delta _r} C_{r,A})^m ((\log r)^{\delta _r})^s \sup _{\pi \in \mathcal N^{(m,s)}} \mathbb {E}_x \left[ \mathrm{e}^{\int _0^t \xi (X_u) \mathrm{d}u} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \sim \pi \}} \right] \end{aligned}$$
(4.26)

with the convention \(\sup \emptyset = 0\), where \(\mathcal {N}^{(m,s)} := \mathcal N\cap \mathscr {P}^{(m,s)}\). For fixed \(\pi \in \mathcal {N}^{(m,s)}\), we use (4.19), and apply (4.18) and Lemma 3.1, to obtain, for all r large enough and \(c_0 > 2\),

$$\begin{aligned} \begin{aligned}&(2 (\log r)^{\delta _r} C_{r,A})^m (\log r)^{\delta _r s}\, \mathbb {E}_x \left[ \mathrm{e}^{\int _0^t \xi (X_u) \mathrm{d}u} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \sim \pi \}} \right] \\&\qquad \le \mathrm{e}^{t \gamma _\pi } \mathrm{e}^{c_0 m \log r} [q_{r,A}(\log r)^{\delta _r}]^s\, \mathrm {e}^{k^{r,\varepsilon }_\pi \log [(\log r)^{\delta _r}/a_{L_r,A,\varepsilon }q_{r,A}]}. \end{aligned} \end{aligned}$$
(4.27)

We next claim that, for r large enough and \(\pi \in \mathcal N^{(m,s)}\),

$$\begin{aligned} s \ge \left[ (m-1)\vee 1 \right] S_r . \end{aligned}$$
(4.28)

Indeed, when \(m\ge 2\), \(|{{\,\mathrm{supp}\,}}({\check{\pi }}^{{{\scriptscriptstyle {({i}})}}})| \ge S_r\) for all \(2 \le i \le m\). When \(m=0\), \(|{{\,\mathrm{supp}\,}}(\pi )| \ge \max _{1 \le \ell \le |\pi |} |\pi _\ell -x| \ge (\log L_r)^\kappa \gg S_r\) by assumption. When \(m=1\), the latter assumption and Lemma 3.1 together imply that \({{\,\mathrm{supp}\,}}(\pi ) \cap D^{\mathrm{c}}_{r,A} \ne \emptyset \), and so either \(|{{\,\mathrm{supp}\,}}({\check{\pi }}^{{{\scriptscriptstyle {({1}})}}})| \ge S_r\) or \(|{{\,\mathrm{supp}\,}}({\bar{\pi }})|\ge S_r\). Thus, (4.28) holds by the definition of \(S_r\) and s.

Note that \(q_{r,A}^{S_r} < \mathrm {e}^{-4c_0\log r}\), so

$$\begin{aligned}&\sum _{m \ge 0} \sum _{s \ge [(m-1)\vee 1] S_r} \mathrm {e}^{c_0 m \log r} [q_{r,A}(\log r)^{\delta _r}]^s \nonumber \\&\quad = \frac{[q_{r,A}(\log r)^{\delta _r}]^{S_r} + \mathrm {e}^{c_0 \log r}[q_{r,A}(\log r)^{\delta _r}]^{S_r} + \sum _{m \ge 2} \mathrm {e}^{mc_0 \log r } [q_{r,A}(\log r)^{\delta _r}]^{(m-1)S_r}}{1-q_{r,A}(\log r)^{\delta _r}}\nonumber \\&\quad \le \frac{3 \mathrm {e}^{-c_0 \log r}}{1-q_{r,A}(\log r)^{\delta _r}} < 1 \end{aligned}$$
(4.29)

for r large enough. Inserting this back into (4.26), we obtain

$$\begin{aligned} \log \mathbb {E}_x \left[ \mathrm{e}^{\int _0^t \xi (X_s) \mathrm{d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \in \mathcal {N}\}} \right] \le \sup _{\pi \in \mathcal {N}} \Big \{ t \gamma _\pi + k^{r,\varepsilon }_\pi \log [(\log r)^{\delta _r}/a_{L_r,A,\varepsilon }q_{r,A}]\Big \}.\qquad \end{aligned}$$
(4.30)

Thus the proof will be finished once we show that, for some \(\varepsilon ' > 0\) and whp, respectively, a.s. eventually as \(r \rightarrow \infty \),

$$\begin{aligned} k^{r,\varepsilon }_\pi \ge {{\,\mathrm{dist}\,}}_{G}(x,z_{\pi })(1-2(\log L_r)^{-\varepsilon '}) \qquad \forall \,\pi \in \mathcal N. \end{aligned}$$
(4.31)

We can copy the argument at the end of [7, Sect. 3.4]. For each \(\pi \in \mathcal N\) define an auxiliary path \(\pi _\star \) as follows. First note that, by our assumptions, we can find points \(z', z'' \in {{\,\mathrm{supp}\,}}(\pi )\) (not necessarily distinct) such that

$$\begin{aligned} {{\,\mathrm{dist}\,}}_{G}(x,z') \ge (\log L_r)^\kappa , \qquad {{\,\mathrm{dist}\,}}_{G}(z'', z_\pi ) \le 2 M_A S_r, \end{aligned}$$
(4.32)

where the latter holds by (3.12). Write \(\{z_1, z_2 \} = \{z', z''\}\) with \(z_1\), \(z_2\) ordered according to their hitting times by \(\pi \), i.e., \(\inf \{ \ell :\pi _\ell = z_1 \} \le \inf \{\ell :\pi _\ell = z_2\}\). Define \(\pi _e\) as the concatenation of the loop erasure of \(\pi \) between \(x\) and \(z_1\) and the loop erasure of \(\pi \) between \(z_1\) and \(z_2\). Since \(\pi _e\) is the concatenation of two self-avoiding paths, it visits each point at most twice. Finally, define \(\pi _\star \sim \pi _e\) by replacing the excursions of \(\pi _e\) from \(\Pi _{r,A}\) to \(D_{r,A}^{\mathrm{c}}\) by direct paths between the corresponding endpoints, i.e., replace each \({\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}}\) with \(|{\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}}|=\ell _i\), \(({\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}})_0 = x_i \in \Pi _{r,A}\) and \(({\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}})_{\ell _i} = y_i \in D_{r,A}^{\mathrm{c}}\) by a shortest-distance path \({\widetilde{\pi }}_\star ^{{{\scriptscriptstyle {({i}})}}}\) with the same endpoints and \(|{\widetilde{\pi }}_\star ^{{{\scriptscriptstyle {({i}})}}}| = {{\,\mathrm{dist}\,}}_{G}(x_i, y_i)\). Since \(\pi _\star \) visits each \(x \in \Pi _{r,A}\) at most twice,

$$\begin{aligned} \begin{aligned} k^{r,\varepsilon }_\pi \ge k^{r,\varepsilon }_{\pi _\star } \ge M^{r,\varepsilon }_{\pi _\star } - 2 |{{\,\mathrm{supp}\,}}(\pi _\star )\cap \Pi _{r,A}|(S_r+1) \ge M^{r,\varepsilon }_{\pi _\star } - 4 |{{\,\mathrm{supp}\,}}(\pi _\star )\cap \Pi _{r,A}| S_r. \end{aligned} \end{aligned}$$
(4.33)

Note that \(M_{\pi _\star }^{r, \varepsilon } \ge \left| \{x \in {{\,\mathrm{supp}\,}}(\pi _\star ) :\, \xi (x) \le (1-\varepsilon ) a_{L_r}\} \right| - 1\) and, by (4.32), \(|{{\,\mathrm{supp}\,}}(\pi _\star )| \ge {{\,\mathrm{dist}\,}}_{G}(x,z') \ge (\log L_r)^\kappa \gg (\log L_r)^{\alpha +2\varepsilon '}\) for some \(0<\varepsilon '<\varepsilon \). Applying Lemmas 3.6–3.7 and using (3.1) and \(L_r > r\), we obtain, for r large enough,

$$\begin{aligned} \begin{aligned} k^{r,\varepsilon }_\pi&\ge |{{\,\mathrm{supp}\,}}(\pi _\star )|\left( 1 - \frac{2}{(\log L_r)^{\varepsilon }} - \frac{4 S_r}{(\log L_r)^{\alpha +2\varepsilon '}}\right) \ge |{{\,\mathrm{supp}\,}}(\pi _\star )|\left( 1 - \frac{1}{(\log L_r)^{\varepsilon '}}\right) . \end{aligned} \end{aligned}$$
(4.34)

On the other hand, since \(|{{\,\mathrm{supp}\,}}(\pi _\star )| \ge (\log L_r)^\kappa \), by (4.32) we have

$$\begin{aligned} \begin{aligned} \left| {{\,\mathrm{supp}\,}}(\pi _\star ) \right|&= \big (\left| {{\,\mathrm{supp}\,}}(\pi _\star ) \right| + 2 M_A S_r\big ) - 2 M_A S_r\\&= \big (\left| {{\,\mathrm{supp}\,}}(\pi _\star ) \right| + 2 M_A S_r\big ) \left( 1- \frac{2 M_A S_r}{\left| {{\,\mathrm{supp}\,}}(\pi _\star ) \right| + 2 M_A S_r}\right) \\&\ge \left( {{\,\mathrm{dist}\,}}_{G}(x,z'') + 2 M_A S_r \right) \left( 1-\frac{2 M_A S_r}{(\log L_r)^\kappa } \right) \\&\ge {{\,\mathrm{dist}\,}}_{G}(x,z_\pi )\left( 1-\frac{1}{(\log L_r)^{\varepsilon '}} \right) , \end{aligned} \end{aligned}$$
(4.35)

where the first inequality uses that the distance between two points on \(\pi _\star \) is at most the total length of \(\pi _\star \). Now (4.31) follows from (4.34)–(4.35). \(\square \)
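As a quick numerical sanity check (outside the proof), the resummation in the first equality of (4.29) can be verified for hypothetical values of the parameters; below, a, q, S stand in for \(\mathrm {e}^{c_0 \log r}\), \(q_{r,A}(\log r)^{\delta _r}\) and \(S_r\), chosen so that \(aq^S < 1\):

```python
# Hypothetical stand-ins for the quantities appearing in (4.29).
a, q, S = 2.0, 0.1, 3

def double_sum(mmax=60, stail=400):
    # Left-hand side of (4.29): sum over m >= 0 and s >= [(m-1) v 1] S.
    total = 0.0
    for m in range(mmax):
        smin = max(m - 1, 1) * S
        total += sum(a**m * q**s for s in range(smin, smin + stail))
    return total

def closed_form(mmax=60):
    # Right-hand side of the first equality in (4.29).
    geom = sum(a**m * q**((m - 1) * S) for m in range(2, mmax))
    return (q**S + a * q**S + geom) / (1 - q)

assert abs(double_sum() - closed_form()) < 1e-12
```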

5 Proof of the Main Theorem

Define

$$\begin{aligned} U^*(t) := \mathrm {e}^{t[\varrho \log (\vartheta \mathfrak {r}_t) -\varrho - {\widetilde{\chi }}(\varrho )]}, \end{aligned}$$
(5.1)

where we recall (1.13). To prove Theorem 1.4 we show that

$$\begin{aligned} \frac{1}{t} \log U(t) - \frac{1}{t} \log U^*(t) = o(1), \quad t \rightarrow \infty , \qquad (\mathrm {P}\times \mathfrak {P})\text {-a.s.} \end{aligned}$$
(5.2)

The proof proceeds via an upper and a lower bound, proved in Sects. 5.1 and 5.2, respectively. Throughout this section, Assumptions 1.1, 1.2(1) and 1.3 are in force.

5.1 Upper Bound

We follow [7, Sect. 4.2]. The proof of the upper bound in (5.2) relies on two lemmas showing that paths staying inside a ball of radius \(\lceil t^\gamma \rceil \) for some \(\gamma \in (0,1)\) or leaving a ball of radius \(t \log t\) have a negligible contribution to (1.6), the total mass of the solution.

Lemma 5.1

(No long paths) For any \(\ell _t \ge t \log t\),

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{U^*(t)}\,\mathbb {E}_{\mathcal {O}} \left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _{[B_{\ell _t}]^{\mathrm{c}}}< t\}}\right] = 0 \quad (\mathrm {P}\times \mathfrak {P})\text {-a.s.} \end{aligned}$$
(5.3)

Proof

We follow [7, Lemma 4.2]. For \(r \ge \ell _t\), let

$$\begin{aligned} \mathcal B_r := \left\{ \max _{x \in B_r(\mathcal {O})} \xi (x) \ge a_{L_r} + 2 \varrho \right\} . \end{aligned}$$
(5.4)

Since \(\lim _{t\rightarrow \infty } \ell _t = \infty \), Lemma 3.5 gives that \(\mathrm {P}\)-a.s.

$$\begin{aligned} \bigcup _{r \ge \ell _t} \mathcal B_r \text { does not occur eventually as } t\rightarrow \infty . \end{aligned}$$
(5.5)

Therefore we can work on the event \(\bigcap _{r \ge \ell _t} [\mathcal B_r]^{\mathrm{c}}\). On this event, we write

$$\begin{aligned} \mathbb {E}_{\mathcal {O}} \left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _{[B_{\ell _t}]^{\mathrm{c}}}< t\}} \right]&= \sum _{r \ge \ell _t} \mathbb {E}_{\mathcal {O}} \left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\sup _{s \in [0,t]}|X_s| = r \}} \right] \nonumber \\&\le \mathrm {e}^{2\varrho t} \sum _{r \ge \ell _t}\, \mathrm {e}^{\varrho t \log r + \log (\delta _r\log \log r)} \, \mathbb {P}_{\mathcal {O}} \left( J_t \ge r \right) , \end{aligned}$$
(5.6)

where \(J_t\) is the number of jumps of X up to time t, and we use that \(|B_r(\mathcal {O})| \le (\log r)^{\delta _r r}\). Next, \(J_t\) is stochastically dominated by a Poisson random variable with parameter \(t (\log r)^{\delta _r}\). Hence

$$\begin{aligned} \mathbb {P}_{\mathcal {O}} \left( J_t \ge r \right) \le \frac{[\mathrm {e}t\, (\log r)^{\delta _r}]^r}{r^r} \le \exp \left\{ -r \log \left( \frac{r}{\mathrm {e}t\, (\log r)^{\delta _r}}\right) \right\} \end{aligned}$$
(5.7)

for large r. Using that \(\ell _t \ge t \log t\), we can easily check that, for \(r \ge \ell _t\) and t large enough,

$$\begin{aligned} \varrho t \log r - r \log \left( \frac{r}{\mathrm {e}t\, (\log r)^{\delta _r}}\right) < -3 r, \qquad r \ge \ell _t. \end{aligned}$$
(5.8)

Thus (5.6) is at most

$$\begin{aligned} \mathrm {e}^{2\varrho t} \sum _{r \ge \ell _t}\, \mathrm {e}^{-3r + \log (\delta _r\log \log r)} \, \le \mathrm {e}^{2\varrho t} \sum _{r \ge \ell _t}\, \mathrm {e}^{-2r} \le 2\,\mathrm {e}^{2\varrho t}\,\mathrm {e}^{-2\ell _t} \le \mathrm {e}^{-\ell _t}. \end{aligned}$$
(5.9)

Since \(\lim _{t\rightarrow \infty } \ell _t = \infty \) and \(\lim _{t\rightarrow \infty } U^*(t) = \infty \), this settles the claim. \(\square \)
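The estimate (5.7) used above is the standard Chernoff bound for the Poisson tail, \(\mathbb {P}(\mathrm{Poi}(\lambda ) \ge r) \le \mathrm {e}^{-\lambda }(\mathrm {e}\lambda /r)^r \le (\mathrm {e}\lambda /r)^r\) for \(r \ge \lambda \), here with \(\lambda = t(\log r)^{\delta _r}\). A minimal numerical check with arbitrary illustrative values of the parameter and threshold:

```python
import math

def poisson_tail(lam, r, terms=500):
    # P(Poi(lam) >= r), summed directly from the probability mass function
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(r, r + terms))

def chernoff(lam, r):
    # The bound (e * lam / r)^r appearing in (5.7)
    return (math.e * lam / r) ** r

# The bound is valid whenever the threshold r exceeds the mean lam
for lam, r in [(2.0, 15), (5.0, 20), (10.0, 40)]:
    assert poisson_tail(lam, r) <= chernoff(lam, r)
```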

Lemma 5.2

(No short paths) For any \(\gamma \in (0,1)\),

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{U^*(t)}\,\mathbb {E}_{\mathcal {O}} \left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _{[B_{\lceil t^\gamma \rceil }]^{\mathrm{c}}} > t\}} \right] = 0 \quad (\mathrm {P}\times \mathfrak {P})\text {-a.s.} \end{aligned}$$
(5.10)

Proof

We follow [7, Lemma 4.3]. By Lemma 3.5 with \(r = \lceil t^\gamma \rceil \), we may assume that

$$\begin{aligned} \max _{x \in B_{\lceil t^\gamma \rceil }} \xi (x) \le \varrho \log \log L_{\lceil t^\gamma \rceil } + \frac{2 \varrho \log \lceil t^\gamma \rceil }{\vartheta \lceil t^\gamma \rceil } \le \gamma \varrho \log t + O(1), \quad t \rightarrow \infty , \end{aligned}$$
(5.11)

where the second inequality uses that \(\log L_{\lceil t^\gamma \rceil } \sim \log |B_{\lceil t^\gamma \rceil }(\mathcal {O})| \sim \vartheta \lceil t^\gamma \rceil \). Hence

$$\begin{aligned} \frac{1}{U^*(t)} \,\mathbb {E}_{\mathcal {O}} \left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _{[B_{\lceil t^\gamma \rceil }]^{\mathrm{c}}} > t\}}\right]\le & {} \frac{1}{U^*(t)}\,\mathrm {e}^{\gamma \varrho t \log t+O(1)}\nonumber \\\le & {} \mathrm {e}^{-(1-\gamma )\varrho t \log t + C t \log \log \log t}, \quad t \rightarrow \infty , \end{aligned}$$
(5.12)

for some constant \(C>0\), and the right-hand side vanishes as \(t \rightarrow \infty \). \(\square \)

The proof of the upper bound in (5.2) also relies on a third lemma, estimating the contribution of paths that leave a ball of radius \(\lceil t^\gamma \rceil \) for some \(\gamma \in (0,1)\) but stay inside a ball of radius \(t \log t\). We slice the annulus between these two balls into layers, and derive an estimate for paths that reach a given layer but not the next one. To that end, fix \(\gamma \in (\alpha ,1)\) with \(\alpha \) as in (3.1), and let

$$\begin{aligned} K_t := \lceil t^{1-\gamma } \log t \rceil , \qquad r^{(k)}_t := k \lceil t^\gamma \rceil , \quad 1 \le k \le K_t, \qquad \ell _t := K_t \lceil t^\gamma \rceil \ge t \log t.\quad \end{aligned}$$
(5.13)

For \(1 \le k \le K_t\), define (recall (4.1))

$$\begin{aligned} \mathcal {N}^{{{\scriptscriptstyle {({k}})}}}_t := \left\{ \pi \in \mathscr {P}(\mathcal {O}, V) :\, {{\,\mathrm{supp}\,}}(\pi ) \subset B_{r^{{{\scriptscriptstyle {({k+1}})}}}_t}(\mathcal {O}),\, {{\,\mathrm{supp}\,}}(\pi )\cap B^{\mathrm{c}}_{r^{{{\scriptscriptstyle {({k}})}}}_t}(\mathcal {O}) \ne \emptyset \right\} \end{aligned}$$
(5.14)

and set

$$\begin{aligned} U^{{{\scriptscriptstyle {({k}})}}}(t) := \mathbb {E}_\mathcal {O}\left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\pi (X_{[0,t]}) \in \mathcal {N}^{{{\scriptscriptstyle {({k}})}}}_t \}}\right] . \end{aligned}$$
(5.15)

Lemma 5.3

(Upper bound on \(U^{{{\scriptscriptstyle {({k}})}}}(t)\)) For any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(t \rightarrow \infty \),

$$\begin{aligned} \sup _{1 \le k \le K_t} \frac{1}{t} \log U^{{{\scriptscriptstyle {({k}})}}}(t) \le \frac{1}{t}\log U^*(t) + \varepsilon . \end{aligned}$$
(5.16)

Proof

We follow [7, Lemma 4.4]. Fix \(k \in \{1, \ldots , K_t\}\). For \(\pi \in \mathcal {N}^{{{\scriptscriptstyle {({k}})}}}_t\), let

$$\begin{aligned} \gamma _\pi := \lambda _{r^{{{\scriptscriptstyle {({k+1}})}}}_t, A}(\pi ) + \mathrm {e}^{-S_{\lceil t^\gamma \rceil }}, \qquad z_\pi \in {{\,\mathrm{supp}\,}}(\pi ), |z_\pi | > r^{{{\scriptscriptstyle {({k}})}}}_t, \end{aligned}$$
(5.17)

chosen such that (4.19)–(4.20) are satisfied. By Proposition 4.6 and (4.10), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(t \rightarrow \infty \),

$$\begin{aligned} \begin{aligned} \frac{1}{t} \log U^{{{\scriptscriptstyle {({k}})}}}(t) \le \gamma _\pi - \frac{|z_\pi |}{t} \left( \log [ \varepsilon \varrho \log (\vartheta r^{(k+1)}_t)] - \delta _r\log [ \log (r^{(k+1)}_t)] + o(1) \right) . \end{aligned} \end{aligned}$$
(5.18)

Using Corollary 3.4 and \(\log L_r \sim \vartheta r\), we bound

$$\begin{aligned} \begin{aligned} \gamma _\pi \le \varrho \log (\vartheta r^{(k+1)}_t) - {\widetilde{\chi }}(\varrho ) + \tfrac{1}{2} \varepsilon + o(1). \end{aligned} \end{aligned}$$
(5.19)

Moreover, \(|z_\pi | > r^{{{\scriptscriptstyle {({k+1}})}}}_t - \lceil t^\gamma \rceil \) and

$$\begin{aligned} \begin{aligned}&\frac{\lceil t^\gamma \rceil }{t} \left( \log [ \varepsilon \varrho \log ( \vartheta r^{(k+1)}_t)] - \delta _r \log [\log (r^{(k+1)}_t)] \right) \\&\qquad \le \frac{1}{t^{1-\gamma }} \log \log (2 t \log t) = o(1). \end{aligned} \end{aligned}$$
(5.20)

Hence

$$\begin{aligned} \gamma _\pi \le F_t(r^{(k+1)}_t) - {\widetilde{\chi }}(\varrho ) + \tfrac{1}{2} \varepsilon + o(1) \end{aligned}$$
(5.21)

with

$$\begin{aligned} F_t(r) := \varrho \log (\vartheta r) - \frac{r}{t} \big [ \log (\varepsilon \varrho \log (\vartheta r)) - \delta _r \log (\log r) \big ], \qquad r>0. \end{aligned}$$
(5.22)

The function \(F_t\) is maximized at any point \(r_t\) satisfying

$$\begin{aligned} \varrho t = r_t \left[ \log (\varepsilon \varrho \log (\vartheta r_t)) - \big (\delta _{r_t} + r_t\tfrac{\mathrm {d}}{\mathrm {d}r}\delta _r\big |_{r=r_t}\big ) \log \log r_t + \frac{1}{\log (\vartheta r_t)} - \frac{\delta _{r_t}}{\log r_t} \right] . \end{aligned}$$
(5.23)
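Indeed, differentiating (5.22) with respect to r gives

$$\begin{aligned} F_t'(r) = \frac{\varrho }{r} - \frac{1}{t}\Big [ \log (\varepsilon \varrho \log (\vartheta r)) - \delta _r \log \log r \Big ] - \frac{r}{t}\Big [ \frac{1}{r\log (\vartheta r)} - \tfrac{\mathrm {d}}{\mathrm {d}r}\delta _r \, \log \log r - \frac{\delta _r}{r\log r} \Big ], \end{aligned}$$

and setting \(F_t'(r_t) = 0\) and multiplying through by \(t r_t\) yields (5.23).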

In particular, \(r_t = \mathfrak {r}_t[1+o(1)]\), which implies that

$$\begin{aligned} \sup _{r > 0} F_t(r) \le \varrho \log (\vartheta \mathfrak {r}_t) - \varrho + o(1), \qquad t \rightarrow \infty . \end{aligned}$$
(5.24)

Inserting (5.24) into (5.21), we obtain \(\displaystyle \frac{1}{t} \log U^{{{\scriptscriptstyle {({k}})}}}(t) < \varrho \log (\vartheta \mathfrak {r}_t) - \varrho - {\widetilde{\chi }}(\varrho ) + \varepsilon \), which is the desired upper bound because \(\varepsilon >0\) is arbitrary. \(\square \)

Proof of the upper bound in (5.2)

To avoid repetition, all statements hold \((\mathfrak {P}\times \mathrm {P})\)-a.s. eventually as \(t \rightarrow \infty \). Set

$$\begin{aligned} U^{{{\scriptscriptstyle {({0}})}}}(t):= & {} \mathbb {E}_\mathcal {O}\left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _{[B_{\lceil t^\gamma \rceil }]^{\mathrm{c}}}>t\}}\right] , \nonumber \\ U^{{{\scriptscriptstyle {({\infty }})}}}(t):= & {} \mathbb {E}_\mathcal {O}\left[ \mathrm {e}^{\int _0^t \xi (X_s) \mathrm {d}s} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _{[B_{\lceil t \log t \rceil }]^{\mathrm{c}}} \le t\}}\right] . \end{aligned}$$
(5.25)

Then

$$\begin{aligned} U(t) \le U^{{{\scriptscriptstyle {({0}})}}}(t) + U^{{{\scriptscriptstyle {({\infty }})}}}(t) + K_t \max _{1 \le k \le K_t} U^{{{\scriptscriptstyle {({k}})}}}(t). \end{aligned}$$
(5.26)

From Lemmas 5.1–5.3 and the fact that \(K_t = o(t)\), we get

$$\begin{aligned} \limsup _{t\rightarrow \infty } \left\{ \frac{1}{t} \log U(t) - \frac{1}{t} \log U^*(t)\right\} \le \varepsilon . \end{aligned}$$
(5.27)

Since \(\varepsilon >0\) is arbitrary, this completes the proof of the upper bound in (1.14). \(\square \)

5.2 Lower Bound

We follow [7, Sect. 4.1]. Fix \(\varepsilon >0\). By the definition of \({\widetilde{\chi }}\), there exists an infinite rooted tree \(T=(V',E',\mathcal Y)\) with degrees in \({{\,\mathrm{supp}\,}}(D_g)\) such that \(\chi _T(\varrho ) < {\widetilde{\chi }}(\varrho ) + \tfrac{1}{4} \varepsilon \). Let \(Q_r = B^T_r(\mathcal Y)\) be the ball of radius r around \(\mathcal Y\) in T. By Proposition A.1 and (A.2), there exist a radius \(R \in \mathbb {N}\) and a potential profile \(q:B^T_R \rightarrow \mathbb {R}\) with \(\mathcal {L}_{Q_R}(q;\varrho )<1\) (in particular, \(q\le 0\)) such that

$$\begin{aligned} \lambda _{Q_R}(q;T) \ge -{\widehat{\chi }}_{Q_R}(\varrho ;T) - \tfrac{1}{2} \varepsilon > -{\widetilde{\chi }}(\varrho ) - \varepsilon . \end{aligned}$$
(5.28)

For \(\ell \in \mathbb {N}\), let \(B_\ell = B_\ell (\mathcal {O})\) denote the ball of radius \(\ell \) around \(\mathcal {O}\) in \({\mathcal {G}\mathcal {W}}\). We will show next that, \((\mathfrak {P} \times \mathrm {P})\)-a.s. eventually as \(\ell \rightarrow \infty \), \(B_\ell \) contains a copy of the ball \(Q_R\) on which the potential \(\xi \) is bounded from below by \(\varrho \log \log |B_\ell | + q\).

Proposition 5.4

(Balls with high exceedances) \((\mathfrak {P}\times \mathrm {P})\)-almost surely eventually as \(\ell \rightarrow \infty \), there exists a vertex \(z \in B_\ell \) with \(B_{R+1}(z) \subset B_\ell \) and an isomorphism \(\varphi :B_{R+1}(z) \rightarrow Q_{R+1}\) such that \(\xi \ge \varrho \log \log |B_\ell | + q \circ \varphi \) in \(B_R(z)\). In particular,

$$\begin{aligned} \lambda _{B_R(z)}(\xi ; {\mathcal {G}\mathcal {W}}) > \varrho \log \log |B_\ell | - {\widetilde{\chi }}(\varrho ) - \varepsilon . \end{aligned}$$
(5.29)

Any such z necessarily satisfies \(|z| \ge c \ell \) \((\mathfrak {P}\times \mathrm {P})\)-a.s. eventually as \(\ell \rightarrow \infty \) for some constant \(c = c(\varrho , \vartheta , {\widetilde{\chi }}(\varrho ), \varepsilon ) >0\).

Proof

See [7, Proposition 4.1]. The proof carries over verbatim because the degrees play no role. \(\square \)

Proof of the lower bound in (1.14)

Let z be as in Proposition 5.4. Write \(\tau _z\) for the hitting time of z by the random walk X. For \(s\in (0,t)\), we estimate

$$\begin{aligned} \begin{aligned} U(t)&\ge \mathbb {E}_\mathcal {O}\Big [\mathrm {e}^{\int _0^t \xi (X_u)\,\mathrm {d}u}\,{\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _z\le s\}}\, {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{X_u\in B_R(z)\,\forall u\in [\tau _z,t]\}}\Big ]\\&=\mathbb {E}_\mathcal {O}\Big [\mathrm {e}^{\int _0^{\tau _z} \xi (X_u)\,\mathrm {d}u}\,{\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{{\{\tau _z\le s\}}}\, \mathbb {E}_z\Big [\mathrm {e}^{\int _0^{v} \xi (X_u)\,\mathrm {d}u}\,{\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{{\{X_u\in B_R(z)\,\forall u\in [0,v]\}}}\Big ]\Big |_{v=t-\tau _z}\Big ], \end{aligned} \end{aligned}$$
(5.30)

where we use the strong Markov property at time \(\tau _z\). We first bound the last term in the integrand in (5.30). Since \(\xi \ge \varrho \log \log |B_\ell | +q \) in \(B_R(z)\),

$$\begin{aligned} \begin{aligned} \mathbb {E}_z\Big [\mathrm {e}^{\int _0^{v} \xi (X_u)\,\mathrm {d}u} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{X_u\in B_R(z)\,\forall u\in [0,v]\}}\Big ]&\ge \mathrm {e}^{v \varrho \log \log |B_\ell |} \mathbb {E}_{\mathcal Y}\Big [\mathrm {e}^{\int _0^{v} q(X_u)\,\mathrm {d}u} {\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{X_u\in Q_R\,\forall u\in [0,v]\}}\Big ] \\&\ge \mathrm {e}^{ v \varrho \log \log |B_\ell |} \mathrm {e}^{v \lambda _{Q_R}(q;T)} \phi ^{{{\scriptscriptstyle {({1}})}}}_{Q_R}(\mathcal Y)^2 \\&> \exp \big \{ v \left( \varrho \log \log |B_{\ell }| - {\widetilde{\chi }}(\varrho ) - \varepsilon \right) \big \} \end{aligned} \end{aligned}$$
(5.31)

for large v, where we used that \(B_{R+1}(z)\) is isomorphic to \(Q_{R+1}\) for the indicators in the first inequality, and applied Lemma B.2 and (5.28) to obtain the second and third inequalities, respectively. On the other hand, since \(\xi \ge 0\),

$$\begin{aligned} \mathbb {E}_\mathcal {O}\Big [\mathrm {e}^{\int _0^{\tau _z} \xi (X_u)\,\mathrm {d}u}{\mathchoice{1\mathrm l}{1\mathrm l}{1\mathrm l}{1\mathrm l}}_{\{\tau _z\le s\}}\Big ] \ge \mathbb {P}_\mathcal {O}(\tau _z\le s), \end{aligned}$$
(5.32)

and we can bound the latter probability from below by the probability that the random walk runs along a shortest path from the root \(\mathcal {O}\) to z within time at most s. Such a path \((y_i)_{i=0}^{|z|}\) has \(y_0 = \mathcal {O}\), \(y_{|z|} = z\) and \(y_i \sim y_{i-1}\) for \(i=1, \ldots , |z|\); at each step from \(y_i\) the walk has precisely \(\deg (y_i)\) choices for the next step, each with equal probability, and the step is carried out after an exponential time \(E_i\) with parameter \(\deg (y_i)\). This gives

$$\begin{aligned} \begin{aligned} \mathbb {P}_\mathcal {O}(\tau _z\le s)&\ge \Big (\prod _{i=1}^{|z|}\frac{1}{\deg (y_i)}\Big ) P\Big (\sum _{i=1}^{|z|} E_i \le s\Big ) \ge ((\log |z|)^{\delta _\ell })^{-|z|} \mathrm{Poi}_{d_{\mathrm{min}} s}([|z|,\infty )), \end{aligned} \end{aligned}$$
(5.33)

where \(\mathrm{Poi}_\gamma \) is the Poisson distribution with parameter \(\gamma \), and P is the generic symbol for probability. Summarising, we obtain

$$\begin{aligned} U(t)\ge & {} ((\log |z|)^{\delta _\ell })^{-|z|} \mathrm {e}^{-d_{\mathrm{min}} s}\frac{(d_{\mathrm{min}} s)^{|z|}}{|z|!} \mathrm {e}^{(t-s)\left[ \varrho \log \log |B_{\ell }| - {\widetilde{\chi }}(\varrho ) - \varepsilon \right] } \nonumber \\\ge & {} \exp \left\{ -d_{\min } s \!+\! (t-s)\left[ \varrho \log \log |B_{\ell }| \!-\! {\widetilde{\chi }}(\varrho ) - \varepsilon \right] \! -\! |z| \log \! \left( \frac{(\log |z|)^{\delta _\ell }}{d_{\min }}\frac{|z|}{s}\right) \! \right\} \qquad \nonumber \\\ge & {} \exp \left\{ -d_{\min } s \!+\! (t-s)\left[ \varrho \log \log |B_{\ell }| - {\widetilde{\chi }}(\varrho ) - \varepsilon \right] \!- \!\ell \log \! \left( \frac{(\log \ell )^{\delta _\ell }}{d_{\min }}\frac{\ell }{s}\right) \!\right\} ,\quad \end{aligned}$$
(5.34)

where in the last inequality we use that \(s \le |z|\) and \(\ell \ge |z|\). Further assuming that \(\ell = o(t)\), we see that the optimum over s is obtained at

$$\begin{aligned} s= \frac{\ell }{d_{\min }+\varrho \log \log |B_{\ell }|-{\widetilde{\chi }}(\varrho ) - \varepsilon } =o(t). \end{aligned}$$
(5.35)
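Indeed, the derivative with respect to s of the exponent in the last line of (5.34) equals

$$\begin{aligned} -d_{\min } - \left[ \varrho \log \log |B_{\ell }| - {\widetilde{\chi }}(\varrho ) - \varepsilon \right] + \frac{\ell }{s}, \end{aligned}$$

which vanishes precisely at the value of s given in (5.35).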

Note that, by Proposition 5.4, this s indeed satisfies \(s\le |z|\). Applying (1.10) we get, after a straightforward computation, \((\mathfrak {P}\times \mathrm {P})\)-a.s. eventually as \(t \rightarrow \infty \),

$$\begin{aligned} \frac{1}{t}\log U(t) \ge \varrho \log \log |B_\ell | - \frac{\ell }{t} \log \log \ell - \frac{\ell }{t} \delta _\ell \log \log \ell - {\widetilde{\chi }}(\varrho ) - \varepsilon + O\left( \frac{\ell }{t} \right) .\quad \end{aligned}$$
(5.36)

Inserting \(\log |B_\ell | \sim \vartheta \ell \), we get

$$\begin{aligned} \frac{1}{t}\log U(t) \ge F_\ell - {\widetilde{\chi }}(\varrho ) - \varepsilon + o(1) + O\left( \frac{\ell }{t} \right) \end{aligned}$$
(5.37)

with

$$\begin{aligned} F_\ell = \varrho \log (\vartheta \ell ) - \frac{\ell }{t} \log \log \ell - \frac{\ell }{t} \delta _\ell \log \log \ell . \end{aligned}$$
(5.38)

The optimal \(\ell \) for \(F_\ell \) satisfies

$$\begin{aligned} \varrho t = \ell \big [1+ \delta _\ell + \ell \tfrac{\mathrm {d}}{\mathrm {d}\ell }\delta _\ell \big ] \log \log \ell + \frac{\ell \delta _\ell }{\log \ell } + \frac{\ell }{\log \ell }, \end{aligned}$$
(5.39)
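Indeed, writing \(F_\ell = \varrho \log (\vartheta \ell ) - \frac{\ell }{t}(1+\delta _\ell )\log \log \ell \) and setting \(\frac{\mathrm {d}}{\mathrm {d}\ell } F_\ell = 0\) gives

$$\begin{aligned} 0 = \frac{\varrho }{\ell } - \frac{1}{t}\big (1+\delta _\ell + \ell \tfrac{\mathrm {d}}{\mathrm {d}\ell }\delta _\ell \big )\log \log \ell - \frac{1}{t}\,\frac{1+\delta _\ell }{\log \ell }, \end{aligned}$$

which after multiplication by \(t\ell \) is (5.39).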

i.e., \(\ell = \mathfrak {r}_t[1+o(1)]\). For this choice we obtain

$$\begin{aligned} \frac{1}{t}\log U(t)\ge \varrho \log (\vartheta \mathfrak {r}_t) - \varrho -{\widetilde{\chi }}(\varrho ) - \varepsilon + o(1). \end{aligned}$$
(5.40)

Hence \((\mathfrak {P}\times \mathrm {P})\)-a.s.

$$\begin{aligned} \liminf _{t \rightarrow \infty } \left\{ \frac{1}{t}\log U(t) - \frac{1}{t} \log U^*(t)\right\} \ge - \varepsilon . \end{aligned}$$
(5.41)

Since \(\varepsilon >0\) is arbitrary, this completes the proof of the lower bound in (1.14). \(\square \)
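The second inequality in (5.33) above rests on the gamma–Poisson duality: for i.i.d. exponential times \(E_i\) with parameter d, \(P(\sum _{i=1}^n E_i \le s) = \mathrm{Poi}_{ds}([n,\infty ))\), and the larger rates \(\deg (y_i) \ge d_{\min }\) only increase the probability. A quick Monte Carlo sketch of this identity, with arbitrary illustrative parameters:

```python
import math
import random

def exp_sum_prob(d, n, s, trials=200_000, seed=1):
    # Monte Carlo estimate of P(E_1 + ... + E_n <= s), E_i i.i.d. Exp(d)
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if sum(rng.expovariate(d) for _ in range(n)) <= s)
    return hits / trials

def poisson_tail(lam, n):
    # Poi_lam([n, infinity)) = 1 - P(Poi(lam) <= n - 1)
    return 1.0 - sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
                     for k in range(n))

mc = exp_sum_prob(d=2.0, n=5, s=3.0)   # duality predicts Poi_{d*s} = Poi_6
exact = poisson_tail(6.0, 5)
assert abs(mc - exact) < 0.01
```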

Remark

It is clear from (5.23) and (5.39) that, in order to get the correct asymptotics, it is crucial that both \(\delta _r\) and \(r\frac{\mathrm {d}}{\mathrm {d}r}\delta _r\) tend to zero as \(r\rightarrow \infty \). This is why Assumption 1.3 is the weakest condition on the tail of the degree distribution under which the arguments in [7] can be pushed through.