Abstract
In den Hollander et al. (The parabolic Anderson model on a Galton–Watson tree, to appear in In and Out of Equilibrium 3: Celebrating Vladas Sidoravicius, Progress in Probability, Birkhäuser, Basel, 2021) a detailed analysis was given of the large-time asymptotics of the total mass of the solution to the parabolic Anderson model on a supercritical Galton–Watson random tree with an i.i.d. random potential whose marginal distribution is double-exponential. Under the assumption that the degree distribution has bounded support, two terms in the asymptotic expansion were identified under the quenched law, i.e., conditional on the realisation of the random tree and the random potential. The second term contains a variational formula indicating that the solution concentrates on a subtree with minimal degree according to a computable profile. The present paper extends the analysis to degree distributions with unbounded support. We identify the weakest condition on the tail of the degree distribution under which the arguments in the paper cited above can be pushed through. To do so we need to control the occurrence of large degrees uniformly in large subtrees of the Galton–Watson tree.
1 Introduction and Main Results
Section 1.1 provides a brief introduction to the parabolic Anderson model. Section 1.2 introduces basic notation and key assumptions. Section 1.3 states the main theorem and gives an outline of the remainder of the paper.
1.1 The PAM and Intermittency
The parabolic Anderson model (PAM) is the Cauchy problem
$$\begin{aligned} \partial _t u(x,t) = (\Delta _\mathscr {X} u)(x,t) + \xi (x)\,u(x,t), \qquad x \in \mathscr {X},\, t > 0, \end{aligned}$$(1.1)
where \(\mathscr {X}\) is an ambient space, \(\Delta _\mathscr {X}\) is a Laplace operator acting on functions on \(\mathscr {X}\), and \(\xi \) is a random potential on \(\mathscr {X}\). Most of the literature considers the setting where \(\mathscr {X}\) is either \(\mathbb {Z}^d\) or \(\mathbb {R}^d\) with \(d \ge 1\) (for mathematical surveys we refer the reader to [1, 13]). More recently, other choices for \(\mathscr {X}\) have been considered as well: the complete graph [8], the hypercube [3], Galton–Watson trees [7], and random graphs with prescribed degrees [7].
The main target for the PAM is a description of intermittency: for large t the solution \(u(\cdot ,t)\) of (1.1) concentrates on well-separated regions in \(\mathscr {X}\), called intermittent islands. Much of the literature has focussed on a detailed description of the size, shape and location of these islands, and on the profiles of the potential \(\xi (\cdot )\) and the solution \(u(\cdot ,t)\) on them. A special role is played by the case where \(\xi \) is an i.i.d. random potential with a double-exponential marginal distribution
$$\begin{aligned} \mathrm {P}(\xi (0) > u) = \mathrm {e}^{-\mathrm {e}^{u/\varrho }}, \qquad u \in \mathbb {R}, \end{aligned}$$(1.2)
where \(\varrho \in (0,\infty )\) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and therefore represents a class of its own.
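The exact double-exponential law \(\mathrm {P}(\xi (0)>u) = \exp (-\mathrm {e}^{u/\varrho })\) can be sampled by inversion: if U is uniform on (0, 1), then \(\varrho \log (-\log U)\) has this law. A minimal sketch (sample size and check point are arbitrary choices):

```python
import math
import random

def sample_double_exponential(rho, rng):
    """Sample xi with Prob(xi > u) = exp(-exp(u / rho)) by inversion."""
    u = rng.random()  # uniform on [0, 1)
    return rho * math.log(-math.log(u))

rng = random.Random(12345)
rho = 1.0
n = 200_000
samples = [sample_double_exponential(rho, rng) for _ in range(n)]

# Empirical check of the tail at u = 0: Prob(xi > 0) = exp(-1) ~ 0.368.
frac = sum(1 for x in samples if x > 0.0) / n
print(frac)
```

The inversion uses that \(\exp (-\mathrm {e}^{\xi /\varrho })\) is uniformly distributed, so the empirical tail frequency should match \(\exp (-\mathrm {e}^{u/\varrho })\) at any fixed u.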
The analysis of intermittency typically starts with a computation of the large-time asymptotics of the total mass, encapsulated in what are called Lyapunov exponents. There is an important distinction between the annealed setting (i.e., averaged over the random potential) and the quenched setting (i.e., almost surely with respect to the random potential). Often both types of Lyapunov exponents admit explicit descriptions in terms of characteristic variational formulas that contain information about where and how the mass concentrates in \(\mathscr {X}\). These variational formulas contain a spatial part (identifying where the concentration on islands takes place) and a profile part (identifying what the size and shape of both the potential and the solution are on the islands).
In the present paper we focus on the case where \(\mathscr {X}\) is a Galton–Watson tree, in the quenched setting (i.e., almost surely with respect to the random tree and the random potential). In [7] the large-time asymptotics of the total mass was derived under the assumption that the degree distribution has bounded support. The goal of the present paper is to relax this assumption to unbounded degree distributions. In particular, we identify the weakest condition on the tail of the degree distribution under which the arguments in [7] can be pushed through. To do so we need to control the occurrence of large degrees uniformly in large subtrees of the Galton–Watson tree.
1.2 The PAM on a Graph
We begin with some basic definitions and notations (and refer the reader to [1, 13] for more background).
Let \(G = (V,E)\) be a simple connected undirected graph, either finite or countably infinite. Let \(\Delta _G\) be the Laplacian on G, i.e.,
$$\begin{aligned} (\Delta _G f)(x) = \sum _{\{x,y\} \in E} [f(y) - f(x)], \qquad x \in V,\, f:V \rightarrow \mathbb {R}. \end{aligned}$$(1.3)
Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,
$$\begin{aligned} \partial _t u(x,t) = (\Delta _G u)(x,t) + \xi (x)\,u(x,t), \quad u(x,0) = \mathbbm {1}_{\{x = \mathcal {O}\}}, \qquad x \in V,\, t > 0, \end{aligned}$$(1.4)
where \(\mathcal {O}\in V\) is referred to as the root of G. We say that G is rooted at \(\mathcal {O}\) and call \(G=(V,E,\mathcal {O})\) a rooted graph. The quantity u(x, t) can be interpreted as the amount of mass present at time t at site x when initially there is unit mass at \(\mathcal {O}\).
Criteria for existence and uniqueness of the non-negative solution to (1.4) are well known (see [9, 10] for the case \(G=\mathbb {Z}^d\)), and the solution is given by the Feynman-Kac formula
$$\begin{aligned} u(x,t) = \mathbb {E}_\mathcal {O}\Big [\mathrm {e}^{\int _0^t \xi (X_s)\,\mathrm {d}s}\,\mathbbm {1}\{X_t = x\}\Big ], \end{aligned}$$(1.5)
where \(X=(X_t)_{t \ge 0}\) is the continuous-time random walk on the vertices V with jump rate 1 along the edges E, and \(\mathbb {P}_\mathcal {O}\) denotes the law of X given \(X_0=\mathcal {O}\). We are interested in the total mass of the solution,
$$\begin{aligned} U(t) := \sum _{x \in V} u(x,t) = \mathbb {E}_\mathcal {O}\Big [\mathrm {e}^{\int _0^t \xi (X_s)\,\mathrm {d}s}\Big ]. \end{aligned}$$(1.6)
Often we suppress the dependence on \(G,\xi \) from the notation. Note that, by time reversal and the linearity of (1.4), \(U(t) = {\hat{u}}(\mathcal {O},t)\) with \({\hat{u}}\) the solution of (1.4) with a different initial condition, namely, \({\hat{u}}(x,0) = 1\) for all \(x \in V\).
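This time-reversal identity is easy to check numerically. The sketch below integrates (1.4) with RK4 on a small hand-picked graph (graph, potential, time horizon and step count are all illustrative choices, not objects from the paper) and compares the total mass started from the root with the flat-initial-condition solution evaluated at the root:

```python
# Check of U(t) = u_hat(O, t) on a small arbitrary graph; root O = vertex 0.
EDGES = [(0, 1), (1, 2), (2, 3), (1, 3)]
ADJ = {x: [] for x in range(4)}
for x, y in EDGES:
    ADJ[x].append(y)
    ADJ[y].append(x)
XI = [0.5, -0.2, 1.0, 0.3]  # an arbitrary potential

def rhs(u):
    """Right-hand side of du/dt = Delta_G u + xi u."""
    return [sum(u[y] - u[x] for y in ADJ[x]) + XI[x] * u[x] for x in range(4)]

def rk4(u0, t, steps=1000):
    u, h = list(u0), t / steps
    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs([u[i] + 0.5 * h * k1[i] for i in range(4)])
        k3 = rhs([u[i] + 0.5 * h * k2[i] for i in range(4)])
        k4 = rhs([u[i] + h * k3[i] for i in range(4)])
        u = [u[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
             for i in range(4)]
    return u

t = 1.0
total_mass = sum(rk4([1.0, 0.0, 0.0, 0.0], t))  # U(t): unit mass at O
u_hat_at_root = rk4([1.0] * 4, t)[0]            # flat initial condition
print(total_mass, u_hat_at_root)
```

Because the operator \(\Delta _G + \xi \) is symmetric, the two quantities agree up to floating-point error, reflecting \(\langle \mathrm {e}^{t(\Delta _G+\xi )}\delta _\mathcal {O}, \mathbbm {1}\rangle = \langle \delta _\mathcal {O}, \mathrm {e}^{t(\Delta _G+\xi )}\mathbbm {1}\rangle \).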
As in [7], throughout the paper we assume that the random potential \(\xi = (\xi (x))_{x \in V}\) consists of i.i.d. random variables with marginal distribution satisfying:
Assumption 1.1
(Asymptotic double-exponential potential)
For some \(\varrho \in (0,\infty )\),
The restrictions in (1.7) are helpful to avoid certain technicalities that require no new ideas. In particular, (1.7) is enough to guarantee existence and uniqueness of the non-negative solution to (1.4) on any graph whose largest degrees grow modestly with the size of the graph (as can be inferred from the proof in [10] for the case \(G=\mathbb {Z}^d\); see Appendix C for more details). All our results remain valid under milder restrictions (e.g. [10, Assumption (F)] plus an integrability condition on the lower tail of \(\xi (0)\)).
The following characteristic variational formula is important for the description of the asymptotics of U(t) when \(\xi \) has a double-exponential tail. Denote by \(\mathcal {P}(V)\) the set of probability measures on V. For \(p \in \mathcal {P}(V)\), define
and set
The first term in (1.9) is the quadratic form associated with the Laplacian, describing the solution \(u(\cdot ,t)\) in the intermittent islands, while the second term in (1.9) is the Legendre transform of the rate function for the potential, describing the highest peaks of \(\xi (\cdot )\) in the intermittent islands.
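For orientation, a standard form of these objects in the double-exponential setting, consistent with the description of the two terms just given (the displays here are a reconstruction, not a verbatim copy of (1.8)–(1.9)), is
$$\begin{aligned} I_E(p) := \sum _{\{x,y\} \in E} \big (\sqrt{p(x)}-\sqrt{p(y)}\,\big )^2, \qquad J_V(p) := -\sum _{x \in V} p(x)\log p(x), \end{aligned}$$
with
$$\begin{aligned} \chi _G(\varrho ) := \inf _{p \in \mathcal {P}(V)} \big [\,I_E(p) + \varrho \,J_V(p)\,\big ]. \end{aligned}$$
Here \(I_E(p) = \langle \sqrt{p}, -\Delta _G\sqrt{p}\,\rangle \) is the quadratic form of the Laplacian, and \(\varrho \,J_V\) arises as the Legendre transform of the rate function of the double-exponential distribution.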
1.3 The PAM on a Galton–Watson Tree
Let D be a random variable taking values in \(\mathbb {N}\). Start with a root vertex \(\mathcal {O}\), and attach edges from \(\mathcal {O}\) to D first-generation vertices. Proceed recursively: after having attached the n-th generation of vertices, attach to each one of them independently a number of vertices having the same distribution as D, and declare the union of these vertices to be the \((n+1)\)-th generation of vertices. Denote by \({\mathcal {G}\mathcal {W}}=(V,E)\) the graph thus obtained, by \(\mathfrak {P}\) its probability law, and by \(\mathfrak {E}\) the corresponding expectation. Write \(\mathcal {P}\) and \(\mathcal {E}\) to denote probability and expectation for D, and \({{\,\mathrm{supp}\,}}(D)\) to denote the support of \(\mathcal {P}\). The law of D can be viewed as the offspring distribution of \({\mathcal {G}\mathcal {W}}\), and the law of \(D+1\) as the degree distribution of \({\mathcal {G}\mathcal {W}}\).
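The recursive construction is straightforward to simulate. The sketch below tracks only the generation sizes (the offspring law, D uniform on {2, 3}, is an arbitrary choice with \(d_{\min } \ge 2\)); the ratios \(Z_k/\mu ^k\) with \(\mu = \mathcal {E}[D]\) illustrate the almost-sure martingale limit W appearing in Sect. 2.1:

```python
import random

def gw_generation_sizes(offspring, n_gens, rng):
    """Generation sizes Z_0, ..., Z_{n_gens} of a Galton-Watson tree,
    where offspring(rng) samples the offspring number D of one vertex."""
    sizes = [1]  # Z_0 = 1: the root O
    for _ in range(n_gens):
        sizes.append(sum(offspring(rng) for _ in range(sizes[-1])))
    return sizes

def D(rng):
    return rng.choice([2, 3])  # illustrative: d_min = 2, mu = 2.5

rng = random.Random(7)
mu = 2.5
Z = gw_generation_sizes(D, 10, rng)
ratios = [z / mu**k for k, z in enumerate(Z)]
print(Z)
print(ratios[-3:])  # Z_k / mu^k stabilises around the a.s. limit W
```

Since \(d_{\min } \ge 2\), every generation at least doubles, so the tree is infinite with probability one, in line with Assumption 1.2(1).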
Throughout the paper, we assume that the degree distribution satisfies:
Assumption 1.2
(Exponential tails)
-
(1)
\(d_{\min } := \min {{\,\mathrm{supp}\,}}(D) \ge 2\) and \(\mathcal {E}[D] \in (2,\infty )\).
-
(2)
\(\mathcal {E}\big [\mathrm {e}^{aD}\big ] < \infty \) for all \(a \in (0,\infty )\).
Under this assumption, \({\mathcal {G}\mathcal {W}}\) is \(\mathfrak {P}\)-a.s. an infinite tree. Moreover,
$$\begin{aligned} \lim _{r\rightarrow \infty } \frac{1}{r} \log |B_r(\mathcal {O})| = \log \mathcal {E}[D] =: \vartheta \in (0,\infty ) \qquad \mathfrak {P}\text {-a.s.}, \end{aligned}$$(1.10)
where \(B_r(\mathcal {O}) \subset V\) is the ball of radius r around \(\mathcal {O}\) in the graph distance (see e.g. [14, pp. 134–135]). Note that this ball depends on \({\mathcal {G}\mathcal {W}}\) and therefore is random. For our main result we need an assumption that is much stronger than Assumption 1.2(2).
Assumption 1.3
(Super-double-exponential tails) There exists a function \(f:\,(0,\infty ) \rightarrow (0,\infty )\) satisfying \(\lim _{s\rightarrow \infty } f(s) = 0\) and \(\lim _{s\rightarrow \infty } f'(s) = 0\) such that
To state our main result, we define the constant
with \(\chi _G(\varrho )\) defined in (1.9), and abbreviate
Theorem 1.4
(Quenched Lyapunov exponent) Subject to Assumptions 1.1–1.3,
With Theorem 1.4 we have completed our task of extending the main result in [7] to degree distributions with unbounded support. The extension comes at the price of having to assume a tail that decays faster than double-exponentially, as stipulated in (1.11). This property is needed to control the occurrence of large degrees uniformly in large subtrees of \({\mathcal {G}\mathcal {W}}\). No doubt Assumption 1.3 is stronger than needed, but going beyond it would require a major overhaul of the methods developed in [7], which remains a challenge.
In (1.4) the initial mass is located at the root. The asymptotics in (1.14) is robust against different choices of the initial condition. A heuristic explanation of where the terms in (1.14) come from was given in [7, Sect. 1.5]. The asymptotics of U(t) is controlled by random walk paths in the Feynman-Kac formula in (1.6) that move within time \(\mathfrak {r}_t/\varrho \log \mathfrak {r}_t\) to an intermittent island at distance \(\mathfrak {r}_t\) from \(\mathcal {O}\), and afterwards stay near that island for the rest of the time. The intermittent island turns out to consist of a subtree with degree \(d_{\min }\) where the potential has height \(\varrho \log (\vartheta \mathfrak {r}_t)\) and a shape that is the solution of a variational formula restricted to that subtree. The first and third terms in (1.14) are the contribution of the path after it has reached the island, while the second term is the cost for reaching the island.
For \(d \in \mathbb {N}\setminus \{1\}\), let \(\mathcal {T}_d\) be the infinite homogeneous tree in which every vertex has downward degree d. It was shown in [7] that if \(\varrho \ge 1/\log (d_{\min }+1)\), then
Presumably \(\mathcal {T}_{d_{\min }}\) is the unique minimizer of (1.12), but proving so would require more work.
Outline. The remainder of the paper is organised as follows. Section 2 collects some structural properties of Galton–Watson trees. Section 3 contains several preparatory lemmas, which identify the maximum size of the islands where the potential is suitably high, estimate the contribution to the total mass in (1.6) by the random walk until it exits a subset of \({\mathcal {G}\mathcal {W}}\), bound the principal eigenvalue associated with the islands, and estimate the number of locations where the potential is intermediate. Section 4 uses these preparatory lemmas to find the contribution to the Feynman-Kac formula in (1.6) coming from various sets of paths. Section 5 uses these contributions to prove Theorem 1.4. Appendices A–B contain some facts about variational formulas and largest eigenvalues that are needed in Sect. 3. Appendix C provides a proof that the Feynman-Kac formula in (1.5) holds as soon as Assumptions 1.1–1.2 are in force.
Assumptions 1.1–1.2 are needed throughout the paper. Only in Sects. 4–5 do we need Assumption 1.3.
2 Structural Properties of the Galton–Watson Tree
In this section we collect a few structural properties of \({\mathcal {G}\mathcal {W}}\) that play an important role throughout the paper. None of these properties were needed in [7]. Section 2.1 looks at volumes, Sect. 2.2 at degrees, and Sect. 2.3 at tree animals.
2.1 Volumes
Let \(Z_k\) be the number of offspring in generation k, i.e.,
$$\begin{aligned} Z_k := |\{x \in V:\, d(x,\mathcal {O}) = k\}|, \end{aligned}$$(2.1)
where \(d(x,\mathcal {O})\) is the distance from \(\mathcal {O}\) to x. Let \(\mu = \mathcal {E}[D]\). Then there exists a random variable \(W \in (0,\infty )\) such that
$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{Z_k}{\mu ^k} = W \qquad \mathfrak {P}\text {-a.s.} \end{aligned}$$(2.2)
It is shown in [2, Theorem 5] that
In addition, it is shown in [4, Theorems 2–3] that if D is bounded, then
where \(\gamma ^+ \in (1,\infty )\) and \(\gamma ^- \in (0,1)\) are the unique solutions of the equations
with \(L^+,L^-:(0,\infty ) \rightarrow (0,\infty )\) real-analytic functions that are multiplicatively periodic with period \(\mu ^{\gamma ^+-1}\), respectively, \(\mu ^{1-\gamma ^-}\). Note that Assumption 1.2(1) guarantees that \(\gamma ^- \ne 1\).
The tail behaviour in (2.4) requires that \(d_{\max }<\infty \). In our setting we have \(d_{\max }=\infty \), which corresponds to \(\gamma ^+=\infty \), and so we expect exponential tail behaviour. The following lemma provides a rough bound.
Lemma 2.1
(Exponential tail for generation sizes) If there exists an \(a>0\) such that \(\mathcal {E}[\mathrm {e}^{aD}] <\infty \), then there exists an \(a_* > 0\) such that \(\mathfrak {E}[\mathrm {e}^{a_* W}] < \infty \).
Proof
First note that if there exists an \(a>0\) such that \(\mathcal {E}[\mathrm {e}^{aD}] <\infty \), then there exist \(b>0\) large and \(c >0\) small such that
Hence
and consequently, because \(\mu >1\),
Put \(a_n := c \exp (-bc \sum _{k=0}^{n-1} \mu ^{-(k+2)})\), which satisfies \(0< a_n \le c\). From the last inequality in (2.9) it follows that
Since \(n \mapsto a_n\) is decreasing with \(\lim _{n\rightarrow \infty } a_n = a_* >0\), Fatou’s lemma gives
Because \(\mathcal {E}[\mathrm {e}^{a_0 W_0}] = \mathrm {e}^{a_0}<\infty \), we get the claim. \(\square \)
The following lemma says that \(\mathfrak {P}\)-a.s. a ball of radius \(R_r\) centred anywhere in \(B_r(\mathcal {O})\) has volume \(\mathrm {e}^{\vartheta R_r + o(R_r)}\) as \(r\rightarrow \infty \), provided \(R_r\) is large compared to \(\log r\).
Lemma 2.2
(Volumes of large balls) Subject to Assumption 1.2(1), if there exists an \(a>0\) such that \(\mathcal {E}[\mathrm {e}^{aD}] <\infty \), then for any \(R_r\) satisfying \(\lim _{r\rightarrow \infty } R_r/\log r = \infty \),
Proof
For \(y\in {\mathcal {G}\mathcal {W}}\) that lies k generations below \(\mathcal {O}\), let \(y[-i]\), \(0 \le i\le k\), be the vertex that lies i generations above y. Define the lower ball of radius r around y as
Note that \(B^\downarrow _r(\mathcal {O}) = B_r(\mathcal {O})\).
We first prove the claim for lower balls. Afterwards we use a sandwich argument to get the claim for balls.
Let \(\mathcal {Z}_k\) denote the vertices in the k-th generation. To get the upper bound, pick \(\delta >0\) and estimate
By (1.10), \(\sum _{k=0}^r \mathfrak {E}(Z_k) = \frac{\mathrm {e}^{\vartheta (r+1)} -1}{\mathrm {e}^\vartheta -1} = O(\mathrm {e}^{\vartheta r})\), and so in order to be able to apply the Borel-Cantelli lemma, it suffices to show that the probability in the last line decays faster than exponentially in r for any \(\delta >0\). To that end, estimate
where we use (2.3) with \(\mu = \mathrm {e}^\vartheta \). This produces the desired estimate.
To get the lower bound, pick \(0<\delta <1\) and estimate
It again suffices to show that the probability in the last line decays faster than exponentially in r for any \(\delta >0\). To that end, estimate
where we use (2.5), (2.3) with \(\mu =\mathrm {e}^\vartheta \), and put \(c^- := \inf L^- \in (0,\infty )\). For \(\delta \) small enough this produces the desired estimate. This completes the proof of (2.12) for lower balls.
To get the claim for balls, we observe that
and therefore
It follows from (2.19) that
Hence we get (2.12). \(\square \)
2.2 Degrees
Write \(D_x\) to denote the degree of vertex x. The following lemma implies that, \(\mathfrak {P} \)-a.s. and for \(r\rightarrow \infty \), \(D_x\) is bounded by a vanishing power of \(\log r\) for all \(x \in B_{2r}(\mathcal {O})\).
Lemma 2.3
(Maximal degree in a ball around the root)
-
(a)
Subject to Assumption 1.2(2), for every \(\delta >0\),
$$\begin{aligned} \sum _{r \in \mathbb {N}} \mathfrak {P}\big (\exists \,x \in B_{2r}(\mathcal {O}):\, D_x > \delta r \big ) < \infty . \end{aligned}$$(2.21) -
(b)
Subject to Assumption 1.3, there exists a function \(\delta _r:\,(0,\infty ) \rightarrow (0,\infty )\) satisfying \(\lim _{r\rightarrow \infty } \delta _r = 0\) and \(\lim _{r\rightarrow \infty } r\frac{\mathrm {d}}{\mathrm {d}r}\delta _r = 0\) such that
$$\begin{aligned} \sum _{r \in \mathbb {N}} \mathfrak {P}\big (\exists \,x \in B_{2r}(\mathcal {O}):\, D_x > (\log r)^{\delta _r} \big ) < \infty . \end{aligned}$$(2.22)
Proof
-
(a)
Estimate
$$\begin{aligned} \begin{aligned}&\mathfrak {P}\big (\exists \, x\in B^\downarrow _{2r}(\mathcal {O}):\, D_x> \delta r\big ) \le \sum _{k=0}^{2r} \mathfrak {P}\big (\exists \, x \in \mathcal {Z}_k:\,D_x> \delta r \big )\\&= \sum _{k=0}^{2r} \sum _{l\in \mathbb {N}} \mathfrak {P}\big (\exists \,x \in \mathcal {Z}_k:\, D_x> \delta r \mid Z_k = l\big )\,\mathfrak {P}(Z_k = l)\\&\le \mathcal {P}(D> \delta r) \sum _{k=0}^{2r} \sum _{l\in \mathbb {N}} l\,\mathfrak {P}\big (Z_k = l) = \mathcal {P}(D > \delta r) \sum _{k=0}^{2r}\mathfrak {E}(Z_k). \end{aligned} \end{aligned}$$(2.23)Since \(\sum _{k=0}^{2r} \mathfrak {E}(Z_k) = \frac{\mathrm {e}^{(2r+1)\vartheta }-1}{\mathrm {e}^\vartheta -1} = O(\mathrm {e}^{2r\vartheta })\), it suffices to show that \(\mathcal {P}(D > \delta r) = O(\mathrm {e}^{-cr})\) for some \(c>2\vartheta \). Since \(\mathcal {P}(D > \delta r) \le \mathrm {e}^{-a\delta r}\mathcal {E}(\mathrm {e}^{aD})\), the latter is immediate from Assumption 1.2(2) when we choose \(a>2\vartheta /\delta \).
-
(b)
The only change is that in the last line \(\mathcal {P}(D > \delta r)\) must be replaced by \(\mathcal {P}(D > (\log r)^{\delta _r})\). To see that the latter is \(O(\mathrm {e}^{-cr})\) for some \(c>2\vartheta \), we use the tail condition in (1.11) with \(\delta _r = f(s)\) and \(s=\log r\).
\(\square \)
2.3 Tree Animals
For \(n \in \mathbb {N}_0\) and \(x \in B_r(\mathcal {O})\), let
be the set of tree animals of size \(n+1\) that contain x. Put \(a_n(x) = |\mathcal {A}_n(x)|\).
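Tree animals of size \(n+1\) containing x are simply the connected vertex sets of that size containing x, so on a small graph they can be counted by brute force. The sketch below uses an illustrative path graph (not a Galton–Watson tree); the helper name `count_animals` is hypothetical:

```python
from itertools import combinations

def count_animals(adj, x, n):
    """Number of connected vertex sets of size n+1 containing x, i.e.
    |A_n(x)|, counted by brute force; adj maps vertex -> neighbours."""
    def connected(vertices):
        S = set(vertices)
        seen, stack = {x}, [x]
        while stack:  # flood fill inside S starting from x
            v = stack.pop()
            for w in adj[v]:
                if w in S and w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen == S
    others = [v for v in adj if v != x]
    return sum(1 for extra in combinations(others, n)
               if connected((x,) + extra))

# Path graph 1 - 2 - 3 - 4 - 5; animals of size 3 containing the middle vertex
# are {1,2,3}, {2,3,4}, {3,4,5}.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(count_animals(adj, 3, 2))  # 3
```

Brute-force enumeration is exponential in the graph size; it serves only to make the definition concrete, not to suggest an efficient counting method.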
Lemma 2.4
(Number of tree animals) Subject to Assumption 1.2(2), \(\mathfrak {P}\)-a.s. there exists an \(r_0\in \mathbb {N}\) such that \(a_n(x) \le r^n\) for all \(r \ge r_0\), \(x\in B_r(\mathcal {O})\) and \(0 \le n\le r\).
Proof
For \(n \in \mathbb {N}_0\) and \(x \in B^\downarrow _r(\mathcal {O})\), let
be the set of lower tree animals of size \(n+1\) that contain x. Put \(a^\downarrow _n(x) = |\mathcal {A}^\downarrow _n(x)|\).
We first prove the claim for lower tree animals. Afterwards we use a sandwich argument to get the claim for tree animals.
Fix \(\delta >0\). By Lemma 2.3(a) and the Borel-Cantelli lemma, \(\mathfrak {P}\)-a.s. there exists an \(r_0=r_0(\delta ) \in \mathbb {N}\) such that \(D_x \le \delta r\) for all \(r \ge r_0\) and all \(x \in B^\downarrow _{2r}(\mathcal {O})\). Any lower tree animal of size \(n+1\) containing a vertex in \(B^\downarrow _r(\mathcal {O})\) is contained in \(B^\downarrow _{r+n}(\mathcal {O})\), and any lower tree animal of size \(n+1\) can be created by adding a vertex to the outer boundary of a lower tree animal of size n. This leads to the recursive inequality
Since \(a^\downarrow _0(x) =1\), it follows that
Pick \(\delta =1\) to get the claim for lower tree animals.
To get the claim for tree animals, note that \(a_n(x) \le \sum _{k=0}^n a^\downarrow _n(x[-k])\) (compare with (2.19)), and so \(a_n(x) \le (n+1) r^n\) for all \(x\in B_r(\mathcal {O})\) and all \(0 \le n\le r\). \(\square \)
3 Preliminaries
In this section we extend the lemmas in [7, Sect. 2]. Section 3.1 identifies the maximum size of the islands where the potential is suitably high. Section 3.2 estimates the contribution to the total mass in (1.6) by the random walk until it exits a subset of \({\mathcal {G}\mathcal {W}}\). Section 3.3 gives a bound on the principal eigenvalue associated with the islands. Section 3.4 bounds the maximum of the potential. Section 3.5 estimates the number of locations where the potential is intermediate.
Abbreviate \(L_r = L_r({\mathcal {G}\mathcal {W}}) = |B_r(\mathcal {O})|\) and put
3.1 Maximum Size of the Islands
For every \(r \in \mathbb {N}\) there is a unique \(a_r\) such that
By Assumption 1.1, for r large enough
For \(r \in \mathbb {N}\) and \(A>0\), let
be the set of vertices in \(B_r(\mathcal {O})\) where the potential is close to maximal,
be the \(S_r\)-neighbourhood of \(\Pi _{r,A}\), and \(\mathfrak {C}_{r,A}\) be the set of connected components of \(D_{r,A}\) in \({\mathcal {G}\mathcal {W}}\), which we think of as islands. For \(M_A\in \mathbb {N}\), define the event
Note that \(\Pi _{r,A}, D_{r,A}, \mathcal {B}_{r,A}\) depend on \({\mathcal {G}\mathcal {W}}\) and therefore are random.
Lemma 3.1
(Maximum size of the islands) Subject to Assumptions 1.1–1.2, for every \(A > 0\) there exists an \(M_A \in \mathbb {N}\) such that
Proof
We follow [5, Lemma 6.6]. By Assumption 1.1, for every \(x \in V\) and r large enough,
with \(c_A = e^{-2A/\varrho }\). By Lemma 2.2, \(\mathfrak {P}\)-a.s. for every \(y \in B_r(\mathcal {O})\) and r large enough,
where we use that \(S_r = o(\log r)=o(r)\), and hence for every \(m\in \mathbb {N}\),
Consequently, \(\mathfrak {P}\)-a.s.
By choosing \(m>1/c_A\), we see that the above probability becomes summable in r, and so we have proved the claim with \(M_A=\lceil 1/c_A \rceil \). \(\square \)
Lemma 3.1 implies that \((\mathrm {P}\times \mathfrak {P})\)-a.s. the event \(\mathcal {B}_{r,A}\) eventually does not occur as \(r \rightarrow \infty \). Note that \(\mathfrak {P}\)-a.s. on the event \([\mathcal {B}_{r,A}]^c\),
where the last inequality follows from Lemma 2.2.
3.2 Mass Up to an Exit Time
Lemma 3.2
(Mass up to an exit time) Subject to Assumption 1.2(2), \(\mathfrak {P}\)-a.s. for any \(\delta >0\), \(r \ge r_0\), \(y \in \Lambda \subset B_r(\mathcal {O})\), \(\xi \in [0,\infty )^V\) and \(\gamma > \lambda _\Lambda = \lambda _\Lambda (\xi ,{\mathcal {G}\mathcal {W}})\),
Proof
We follow the proof of [10, Lemma 2.18] and [11, Lemma 4.2]. Define
This is the solution to the boundary value problem
Via the substitution \(u=:1+v\), this turns into
It is readily checked that for \(\gamma > \lambda _\Lambda \) the solution exists and is given by
where \(\mathcal {R}_\gamma \) denotes the resolvent of \(\Delta + \xi \) in \(\ell ^2(\Lambda )\) with Dirichlet boundary condition. Hence
where \(\mathbbm {1}\) denotes the constant function equal to 1, and \(\langle \cdot ,\cdot \rangle _\Lambda \) denotes the inner product in \(\ell ^2(\Lambda )\). To get the first inequality, we combine Lemma 2.3(a) with the lower bound in (B.2) from Lemma B.1, to get \(\xi - \gamma \le \lambda _\Lambda + \delta r -\gamma \le \delta r\) on \(\Lambda \). The positivity of the resolvent gives
To get the second inequality, we write
To get the third inequality, we use the Fourier expansion of the resolvent with respect to the orthonormal basis of eigenfunctions of \(\Delta + \xi \) in \(\ell ^2(\Lambda )\). \(\square \)
3.3 Principal Eigenvalue of the Islands
The following lemma provides a spectral bound.
Lemma 3.3
(Principal eigenvalues of the islands) Subject to Assumptions 1.1 and 1.2(2), for any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(r \rightarrow \infty \),
Proof
We follow the proof of [7, Lemma 2.6]. For \(\varepsilon >0\) and \(A>0\), define the event
with \(M_A\) as in Lemma 3.1. Note that, by Assumption 1.1, \(\mathrm {e}^{\xi (x)/\varrho }\) is stochastically dominated by \(Z \vee N\), where Z is an \(\mathrm {Exp}(1)\) random variable and \(N>0\) is a constant. Thus, for any \(\Lambda \subset V\), using [7, Eq. (2.17)], putting \(\gamma = \sqrt{\mathrm {e}^{\varepsilon /\varrho }} > 1\) and applying Markov's inequality, we may estimate
with \(K_\gamma = \mathrm {E}[\mathrm {e}^{\gamma ^{-1}(Z \vee N)}] \in (1,\infty )\). Next, by Lemma 2.4, for any \(x \in B_r(\mathcal {O})\) and \(1 \le n \le r\), the number of connected subsets \(\Lambda \subset V\) with \(x \in \Lambda \) and \(|\Lambda |=n+1\) is \(\mathfrak {P}\)-a.s. at most \((n+1)r^n \le \mathrm {e}^{2n\log r}\) for \(r \ge r_0\). Noting that \(\mathrm {e}^{S_r} \le r\), we use a union bound together with the fact that, by Lemma 2.2, \(\log L_r = \vartheta r + o(r)\) as \(r\rightarrow \infty \) \(\mathfrak {P}\)-a.s., to estimate for r large enough,
Via the Borel-Cantelli lemma this implies that \((\mathrm {P}\times \mathfrak {P})\)-a.s. the event \(\bar{\mathcal {B}}_{r,A}\) eventually does not occur as \(r\rightarrow \infty \). The proof is completed by invoking Lemma 3.1. \(\square \)
Corollary 3.4
(Uniform bound on principal eigenvalue of the islands) Subject to Assumptions 1.1–1.2, for \(\vartheta \) as in (1.10), and any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(r \rightarrow \infty \),
Proof
See [7, Corollary 2.8]. The proof carries over verbatim because the degrees play no role. \(\square \)
3.4 Maximum of the Potential
The next lemma shows that \(a_{L_r}\) is the leading order of the maximum of \(\xi \) in \(B_r(\mathcal {O})\).
Lemma 3.5
(Maximum of the potential) Subject to Assumptions 1.1–1.2, for any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(r \rightarrow \infty \),
Proof
See [7, Lemma 2.4]. The proof carries over verbatim and uses Lemma 2.2. \(\square \)
3.5 Number of Intermediate Peaks of the Potential
We recall the following Chernoff bound for a binomial random variable with parameters n and p (see e.g. [6, Lemma 5.9]):
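One standard form of such a bound (a reconstruction, not necessarily the exact statement of [6, Lemma 5.9]) is \(\mathrm {P}(\mathrm {Bin}(n,p) \ge u) \le \exp \big (-u\,[\log (u/np) - 1]\big )\) for \(u > 0\), obtained by optimising the exponential Markov inequality over the tilting parameter. The sketch below checks it against the exact binomial tail for a few illustrative parameter choices:

```python
import math

def binom_tail(n, p, u):
    """Exact P(Bin(n, p) >= u), computed by direct summation."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(math.ceil(u), n + 1))

def chernoff_bound(n, p, u):
    """exp(-u (log(u / (n p)) - 1)), a standard Chernoff upper bound."""
    return math.exp(-u * (math.log(u / (n * p)) - 1))

for n, p, u in [(100, 0.1, 30), (100, 0.1, 40), (500, 0.02, 30)]:
    exact, bound = binom_tail(n, p, u), chernoff_bound(n, p, u)
    print(f"n={n} p={p} u={u}: exact={exact:.3e} bound={bound:.3e}")
```

The bound is informative only for \(u > \mathrm {e}\,np\) (otherwise it exceeds 1), which is exactly the regime used in the lemmas of this section.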
Lemma 3.6
(Number of intermediate peaks of the potential) Subject to Assumptions 1.1 and 1.2(2), for any \(\beta \in (0,1)\) and \(\varepsilon \in (0, \tfrac{1}{2}\beta )\) the following holds. For a self-avoiding path \(\pi \) in \({\mathcal {G}\mathcal {W}}\), set
Define the event
Then
Proof
We follow the proof of [7, Lemma 2.9]. Fix \(\beta \in (0,1)\) and \(\varepsilon \in (0,\frac{1}{2}\beta )\). Condition (1.7) implies
Fix \(x \in B_r(\mathcal {O})\) and \(k \in \mathbb {N}\). The number of self-avoiding paths \(\pi \) in \(B_r(\mathcal {O})\) with \(|{{\,\mathrm{supp}\,}}(\pi )|=k\) and \(\pi _0 = x\) is at most \(\mathrm {e}^{k \log r}\) by Lemma 2.4 for r sufficiently large. For such a \(\pi \), the random variable \(N_{\pi }\) has a Bin(k, \(p_r\))-distribution. Using (3.27), we obtain
By the definition of \(\varepsilon \), together with the fact that \(L_r > r\) and that \(x \mapsto (\log \log x)/(\log x)^\varepsilon \) is eventually decreasing, the expression in parentheses above is at least \(\frac{1}{2}(\log L_r)^{1- 2\varepsilon }\). Summing over \(k \ge (\log L_r)^\beta \) and \(x \in B_r(\mathcal {O})\), we get \(\mathfrak {P}\)-a.s.
for some \(c_1, c_2, \delta >0\). Since \(L_r > r\), (3.33) is summable in r. \(\square \)
Lemma 3.6 implies that \((\mathrm {P}\times \mathfrak {P})\)-a.s. for r large enough, all self-avoiding paths \(\pi \) in \({\mathcal {G}\mathcal {W}}\) with \({{\,\mathrm{supp}\,}}(\pi ) \cap B_r \ne \emptyset \) and \(|{{\,\mathrm{supp}\,}}(\pi )| \ge (\log L_r)^{\beta }\) satisfy \(N_{\pi } \le \frac{|{{\,\mathrm{supp}\,}}(\pi )|}{(\log L_r)^\varepsilon }\).
Lemma 3.7
(Number of high exceedances of the potential) Subject to Assumptions 1.1 and 1.2(2), for any \(A>0\) there is a \(C \ge 1\) such that, for all \(\delta \in (0,1)\), the following holds. For a self-avoiding path \(\pi \) in \({\mathcal {G}\mathcal {W}}\), let
Define the event
Then \(\sum _{r \in \mathbb {N}_0} \sup _{G \in \mathfrak {G}_r} \mathrm {P}(\mathcal B_r) < \infty \). In particular, \((\mathrm {P}\times \mathfrak {P})\)-a.s. for r large enough, all self-avoiding paths \(\pi \) in \({\mathcal {G}\mathcal {W}}\) with \({{\,\mathrm{supp}\,}}(\pi ) \cap B_r \ne \emptyset \) and \(|{{\,\mathrm{supp}\,}}(\pi )| \ge C (\log L_r)^{\delta }\) satisfy
Proof
Proceed as for Lemma 3.6, noting that this time
where \(\epsilon =\mathrm {e}^{-2A/\varrho }\), and taking \(C > 2/\epsilon \). \(\square \)
4 Path Expansions
In this section we extend [7, Sect. 3]. Section 4.1 proves three lemmas that concern the contribution to the total mass in (1.6) coming from various sets of paths. Section 4.2 proves a key proposition that controls the entropy associated with a key set of paths. The proof is based on the three lemmas in Sect. 4.1.
We need various sets of nearest-neighbour paths in \({\mathcal {G}\mathcal {W}}=(V,E,\mathcal {O})\), defined in [7]. For \(\ell \in \mathbb {N}_0\) and subsets \(\Lambda , \Lambda ' \subset V\), put
and set
When \(\Lambda \) or \(\Lambda '\) consists of a single point, write x instead of \(\{x\}\). For \(\pi \in \mathscr {P}_\ell \), set \(|\pi | := \ell \). Write \({{\,\mathrm{supp}\,}}(\pi ) := \{\pi _0, \ldots , \pi _{|\pi |}\}\) to denote the set of points visited by \(\pi \).
Let \(X=(X_t)_{t\ge 0}\) be the continuous-time random walk on G that jumps from \(x \in V\) to any neighbour \(y\sim x\) at rate 1. Denote by \((T_k)_{k \in \mathbb {N}_0}\) the sequence of jump times (with \(T_0 := 0\)). For \(\ell \in \mathbb {N}_0\), let
be the path in \(\mathscr {P}_\ell \) consisting of the first \(\ell \) steps of X. For \(t \ge 0\), let
denote the path in \(\mathscr {P}\) consisting of all the steps taken by X between times 0 and t.
Recall the definitions from Sect. 3.1. For \(\pi \in \mathscr {P}\) and \(A>0\), define
with the convention \(\sup \emptyset = -\infty \). This is the largest principal eigenvalue among the components of \({\mathfrak {C}}_{r,A}\) in \({\mathcal {G}\mathcal {W}}\) that have a point of high exceedance visited by the path \(\pi \).
Lemma 4.1
(Mass up to an exit time) Subject to Assumption 1.3, \(\mathfrak {P}\)-a.s. for any \(r \ge r_0\), \(y \in \Lambda \subset B_r(\mathcal {O})\), \(\xi \in [0,\infty )^V\) and \(\gamma > \lambda _\Lambda = \lambda _\Lambda (\xi ,{\mathcal {G}\mathcal {W}})\),
Proof
The proof is identical to that of Lemma 3.2, with \(\delta r\) replaced by \((\log r)^{\delta _r}\) (recall Lemma 2.3). \(\square \)
4.1 Mass of the Solution Along Excursions
Lemma 4.2
(Path evaluation) For \(\ell \in \mathbb {N}_0\), \(\pi \in \mathscr {P}_\ell \) and \(\gamma > \max _{0 \le i < |\pi |} \{\xi (\pi _i)-D_{\pi _i}\}\),
Proof
The proof is identical to that of [7, Lemma 3.2]. The left-hand side of (4.7) can be evaluated by using the fact that \(T_\ell \) is the sum of \(\ell \) independent Exp(\(\deg (\pi _i)\)) random variables that are independent of \(\pi ^{{{\scriptscriptstyle {({\ell }})}}}(X)\). The condition on \(\gamma \) ensures that all \(\ell \) integrals are finite. \(\square \)
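The key single-step integral behind this evaluation is elementary: if \(\tau \sim \mathrm {Exp}(d)\) is the holding time at a vertex with potential value \(\xi \), then \(\mathrm {E}[\mathrm {e}^{(\xi -\gamma )\tau }] = d/(d+\gamma -\xi )\) whenever \(\gamma > \xi - d\). A Monte Carlo sketch with arbitrary illustrative values:

```python
import math
import random

# Illustrative values (not from the paper): jump rate d, potential value xi,
# and gamma satisfying gamma > xi - d, so that the expectation is finite.
d, xi, gamma = 3.0, 2.0, 1.5
rng = random.Random(42)
n = 100_000
mc = sum(math.exp((xi - gamma) * rng.expovariate(d)) for _ in range(n)) / n
exact = d / (d + gamma - xi)  # closed form for E[exp((xi - gamma) * tau)]
print(mc, exact)
```

Multiplying one such factor per step (together with the probability \(1/\deg (\pi _i)\) of choosing the prescribed neighbour) is what produces the product formula in the path evaluation.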
For a path \(\pi \in \mathscr {P}\) and \(\varepsilon \in (0,1)\), we write
with the interpretation that \(M^{r,\varepsilon }_\pi = 0\) if \(|\pi |=0\).
Lemma 4.3
(Mass of excursions) Subject to Assumptions 1.1–1.3, for every \(A, \varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. there exists an \(r_0 \in \mathbb {N}\) such that, for all \(r \ge r_0\), all \(\gamma > a_{L_r} - A\) and all \(\pi \in \mathscr {P}(B_r(\mathcal {O}), B_r(\mathcal {O}))\) satisfying \(\pi _i \notin \Pi _{r,A}\) for all \(0 \le i < \ell :=|\pi |\),
where
Note that \(\pi _{\ell } \in \Pi _{r,A}\) is allowed.
Proof
The proof is identical to that of [7, Lemma 3.3], with \(d_{\max }\) replaced by \((\log r)^{\delta _r}\) (recall Lemma 2.3). \(\square \)
We follow [7, Definition 3.4] and [6, Sect. 6.2]. Note that the distance between \(\Pi _{r,A}\) and \(D_{r,A}^{\mathrm{c}}\) in \({\mathcal {G}\mathcal {W}}\) is at least \(S_r = (\log L_r)^\alpha \) (recall (3.4)–(3.5)).
Definition 4.4
(Concatenation of paths) (a) When \(\pi \) and \(\pi '\) are two paths in \(\mathscr {P}\) with \(\pi _{|\pi |} = \pi '_0\), we define their concatenation as
Note that \(|\pi \circ \pi '| = |\pi | + |\pi '|\).
(b) When \(\pi _{|\pi |} \ne \pi '_0\), we can still define the shifted concatenation of \(\pi \) and \(\pi '\) as \(\pi \circ {\hat{\pi }}'\), where \({\hat{\pi }}' := (\pi _{|\pi |}, \pi _{|\pi |} + \pi '_1 - \pi '_0, \ldots , \pi _{|\pi |} + \pi '_{|\pi '|} - \pi '_0)\). The shifted concatenation of multiple paths is defined inductively via associativity.
Now, if a path \(\pi \in \mathscr {P}\) intersects \(\Pi _{r,A}\), then it can be decomposed into an initial path, a sequence of excursions between \(\Pi _{r,A}\) and \(D_{r,A}^{\mathrm{c}}\), and a terminal path. More precisely, there exists \(m_\pi \in \mathbb {N}\) such that
where the paths in (4.12) satisfy
while
Note that the decomposition in (4.12)–(4.14) is unique, and that the paths \({\check{\pi }}^{{{\scriptscriptstyle {({1}})}}}\), \({\hat{\pi }}^{{{\scriptscriptstyle {({m_\pi }})}}}\) and \({\bar{\pi }}\) can have zero length. If \(\pi \) is contained in \(B_r(\mathcal {O})\), then so are all the paths in the decomposition.
Whenever \({{\,\mathrm{supp}\,}}(\pi ) \cap \Pi _{r,A} \ne \emptyset \) and \(\varepsilon > 0\), we define
to be the total time spent in exterior excursions, respectively, on moderately low points of the potential visited by exterior excursions (without their last point).
In case \({{\,\mathrm{supp}\,}}(\pi ) \cap \Pi _{r,A} = \emptyset \), we set \(m_\pi := 0\), \(s_\pi := |\pi |\) and \(k^{r,\varepsilon }_\pi := M^{r,\varepsilon }_{\pi }\). Recall from (4.5) that, in this case, \(\lambda _{r,A}(\pi ) = -\infty \).
We say that \(\pi , \pi ' \in \mathscr {P}\) are equivalent, written \(\pi ' \sim \pi \), if \(m_{\pi } = m_{\pi '}\), \({\check{\pi }}'^{{{\scriptscriptstyle {({i}})}}}={\check{\pi }}^{{{\scriptscriptstyle {({i}})}}}\) for all \(i=1,\ldots ,m_{\pi }\), and \({\bar{\pi }}' = {\bar{\pi }}\). If \(\pi ' \sim \pi \), then \(s_{\pi '}\), \(k^{r, \varepsilon }_{\pi '}\) and \(\lambda _{r,A}(\pi ')\) are all equal to the counterparts for \(\pi \).
To state our key lemma, we define, for \(m,s \in \mathbb {N}_0\),
and denote by
the maximal size of the islands in \(\mathfrak {C}_{r,A}\).
Lemma 4.5
(Mass of an equivalence class) Subject to Assumptions 1.1 and 1.3, for every \(A,\varepsilon > 0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. there exists an \(r_0 \in \mathbb {N}\) such that, for all \(r \ge r_0\), all \(m,s \in \mathbb {N}_0\), all \(\pi \in \mathscr {P}^{(m,s)}\) with \({{\,\mathrm{supp}\,}}(\pi ) \subset B_r(\mathcal {O})\), all \(\gamma > \lambda _{r,A}(\pi ) \vee (a_{L_r} -A)\) and all \(t \ge 0\),
Proof
The proof is identical to that of [7, Lemma 3.5], with \(d_{\max }\) replaced by \((\log r)^{\delta _r}\) (recall Lemma 2.3). \(\square \)
4.2 Key Proposition
The main result of this section is the following proposition.
Proposition 4.6
(Entropy reduction) Let \(\alpha \in (0,1)\) and \(\kappa \in (\alpha ,1)\). Subject to Assumption 1.3, there exists an \(A_0(r)\) such that, for all \(A \ge A_0(r)\), with \(\mathfrak {P}\)-probability tending to one as \(r\rightarrow \infty \), the following statement is true. For each \(x \in B_r(\mathcal {O})\), each \(\mathcal N\subset \mathscr {P}(x,B_r(\mathcal {O}))\) satisfying \({{\,\mathrm{supp}\,}}(\pi ) \subset B_r(\mathcal {O})\) and \(\max _{1 \le \ell \le |\pi |} {{\,\mathrm{dist}\,}}_{G}(\pi _\ell , x) \ge (\log L_r)^\kappa \) for all \(\pi \in \mathcal {N}\), and each assignment \(\pi \mapsto (\gamma _\pi , z_\pi )\in \mathbb {R}\times V\) satisfying
and
the following inequality holds for all \(t \ge 0\):
Proof
The proof is based on [7, Sect. 3.4]. First fix \(c_0 >2\) and define
Fix \(A \ge A_0(r)\), \(\beta \in (0,\alpha )\) and \(\varepsilon \in (0,\frac{1}{2}\beta )\) as in Lemma 3.6. Let \(r_0 \in \mathbb {N}\) be as given in Lemma 4.5, and take \(r \ge r_0\) so large that the conclusions of Lemmas 2.3, 3.1, 3.3 and 3.6 hold, i.e., assume that the events \(\mathcal B_r\) and \(\mathcal B_{r,A}\) in these lemmas do not occur. Fix \(x \in B_r(\mathcal {O})\). Recall the definitions of \(C_{r,A}\) and \(\mathscr {P}^{(m,s)}\). Note that the relation \(\sim \) is an equivalence relation in \(\mathscr {P}^{(m,s)}\), and define
The following bound on the cardinality of this set is needed.
Lemma 4.7
(Bound on equivalence classes) Subject to Assumption 1.3, \(\mathfrak {P}\)-a.s., \(|{\widetilde{\mathscr {P}}}^{(m,s)}_x| \le (2C_{r,A})^m (\log r)^{\delta _r (m+s)}\) for all \(m,s \in \mathbb {N}_0\).
Proof
We can copy the proof of [7, Lemma 3.6], replacing \(d_{\max }\) by \((\log r)^{\delta _r}\).
The estimate is clear when \(m=0\). To prove that it holds for \(m \ge 1\), write \(\partial \Lambda := \{z \notin \Lambda :\, {{\,\mathrm{dist}\,}}_{G}(z, \Lambda )=1\}\) for \(\Lambda \subset V\). Then \(|\partial \mathcal C\cup \mathcal C| \le ((\log r)^{\delta _r}+1) |\mathcal C| \le 2(\log r)^{\delta _r} C_{r,A}\) by Lemma 2.3. Define the map \(\Phi :{\widetilde{\mathscr {P}}}^{(m,s)}_x \rightarrow \mathscr {P}_s(x,V) \times \{1, \ldots , 2(\log r)^{\delta _r} C_{r,A} \}^m\) as follows. For each \(\Lambda \subset V\) with \(1 \le |\Lambda | \le 2(\log r)^{\delta _r} C_{r,A}\), fix an injection \(f_\Lambda :\Lambda \rightarrow \{1, \ldots , 2(\log r)^{\delta _r} C_{r,A} \}\). Given a path \(\pi \in \mathscr {P}^{(m,s)} \cap \mathscr {P}(x,V)\), decompose \(\pi \), and denote by \({\widetilde{\pi }} \in \mathscr {P}_s(x, V)\) the shifted concatenation of \({{\check{\pi }}}^{{{\scriptscriptstyle {({1}})}}}, \ldots , {{\check{\pi }}}^{{{\scriptscriptstyle {({m}})}}}\), \({\bar{\pi }}\). Note that, for \(2\le k\le m\), the point \({{\check{\pi }}}^{{{\scriptscriptstyle {({k}})}}}_0\) lies in \(\partial \mathcal C_k\) for some \(\mathcal C_k\in \mathfrak {C}_{r,A}\), while \({\bar{\pi }}_0 \in \partial {\overline{\mathcal C}} \cup {\overline{\mathcal C}}\) for some \({\overline{\mathcal C}} \in \mathfrak {C}_{r,A}\). Thus, it is possible to set
It is readily checked that \(\Phi (\pi )\) depends only on the equivalence class of \(\pi \) and, when restricted to equivalence classes, \(\Phi \) is injective. Hence the claim follows. \(\square \)
Now take \(\mathcal N\subset \mathscr {P}(x, V)\) as in the statement, and set
For each \(\mathcal M\in {\widetilde{\mathcal N}}^{(m,s)}\), choose a representative \(\pi _\mathcal M\in \mathcal M\), and use Lemma 4.7 to write
with the convention \(\sup \emptyset = 0\). For fixed \(\pi \in \mathcal {N}^{(m,s)}\), by (4.19) we can apply (4.18) and Lemma 3.1 to obtain, for all \(r\) large enough and with \(c_0 > 2\),
We next claim that, for r large enough and \(\pi \in \mathcal N^{(m,s)}\),
Indeed, when \(m\ge 2\), \(|{{\,\mathrm{supp}\,}}({\check{\pi }}^{{{\scriptscriptstyle {({i}})}}})| \ge S_r\) for all \(2 \le i \le m\). When \(m=0\), \(|{{\,\mathrm{supp}\,}}(\pi )| \ge \max _{1 \le \ell \le |\pi |} |\pi _\ell -x| \ge (\log L_r)^\kappa \gg S_r\) by assumption. When \(m=1\), the latter assumption and Lemma 3.1 together imply that \({{\,\mathrm{supp}\,}}(\pi ) \cap D^{\mathrm{c}}_{r,A} \ne \emptyset \), and so either \(|{{\,\mathrm{supp}\,}}({\check{\pi }}^{{{\scriptscriptstyle {({1}})}}})| \ge S_r\) or \(|{{\,\mathrm{supp}\,}}({\bar{\pi }})|\ge S_r\). Thus, (4.28) holds by the definition of \(S_r\) and s.
Note that \(q_{r,A}^{S_r} < \mathrm {e}^{-4c_0\log r}\), so
for r large enough. Inserting this back into (4.26), we obtain
Thus the proof will be finished once we show that, for some \(\varepsilon ' > 0\) and whp, respectively, a.s. eventually as \(r \rightarrow \infty \),
We can copy the argument at the end of [7, Sect. 3.4]. For each \(\pi \in \mathcal N\) define an auxiliary path \(\pi _\star \) as follows. First note that by using our assumptions we can find points \(z', z'' \in {{\,\mathrm{supp}\,}}(\pi )\) (not necessarily distinct) such that
where the latter holds by (3.12). Write \(\{z_1, z_2 \} = \{z', z''\}\) with \(z_1\), \(z_2\) ordered according to their hitting times by \(\pi \), i.e., \(\inf \{ \ell :\pi _\ell = z_1 \} \le \inf \{\ell :\pi _\ell = z_2\}\). Define \(\pi _e\) as the concatenation of the loop erasure of \(\pi \) between x and \(z_1\) and the loop erasure of \(\pi \) between \(z_1\) and \(z_2\). Since \(\pi _e\) is the concatenation of two self-avoiding paths, it visits each point at most twice. Finally, define \(\pi _\star \sim \pi _e\) by replacing the excursions of \(\pi _e\) from \(\Pi _{r,A}\) to \(D_{r,A}^{\mathrm{c}}\) by direct paths between the corresponding endpoints, i.e., replace each \({\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}}\) by \(|{\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}}|=\ell _i\), \(({\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}})_0 = x_i \in \Pi _{r,A}\), and \(({\hat{\pi }}_e^{{{\scriptscriptstyle {({i}})}}})_{\ell _i} = y_i \in D_{r,A}^{\mathrm{c}}\) by a shortest-distance path \({\widetilde{\pi }}_\star ^{{{\scriptscriptstyle {({i}})}}}\) with the same endpoints and \(|{\widetilde{\pi }}_\star ^{{{\scriptscriptstyle {({i}})}}}| = {{\,\mathrm{dist}\,}}_{G}(x_i, y_i)\). Since \(\pi _\star \) visits each \(x \in \Pi _{r,A}\) at most 2 times,
Note that \(M_{\pi _\star }^{r, \varepsilon } \ge \left| \{x \in {{\,\mathrm{supp}\,}}(\pi _\star ) :\, \xi (x) \le (1-\varepsilon ) a_{L_r}\} \right| - 1\) and, by (4.32), \(|{{\,\mathrm{supp}\,}}(\pi _\star )| \ge {{\,\mathrm{dist}\,}}_{G}(x,z') \ge (\log L_r)^\kappa \gg (\log L_r)^{\alpha +2\varepsilon '}\) for some \(0<\varepsilon '<\varepsilon \). Applying Lemmas 3.6–3.7 and using (3.1) and \(L_r > r\), we obtain, for r large enough,
On the other hand, since \(|{{\,\mathrm{supp}\,}}(\pi _\star )| \ge (\log L_r)^\kappa \), by (4.32) we have
where the first inequality uses that the distance between two points on \(\pi _\star \) is less than the total length of \(\pi _\star \). Now (4.31) follows from (4.34)–(4.35). \(\square \)
5 Proof of the Main Theorem
Define
where we recall (1.13). To prove Theorem 1.4 we show that
The proof proceeds via an upper bound and a lower bound, proved in Sects. 5.1 and 5.2, respectively. Throughout this section, Assumptions 1.1, 1.2(1) and 1.3 are in force.
5.1 Upper Bound
We follow [7, Sect. 4.2]. The proof of the upper bound in (5.2) relies on two lemmas showing that paths staying inside a ball of radius \(\lceil t^\gamma \rceil \) for some \(\gamma \in (0,1)\) or leaving a ball of radius \(t \log t\) have a negligible contribution to (1.6), the total mass of the solution.
Lemma 5.1
(No long paths) For any \(\ell _t \ge t \log t\),
Proof
We follow [7, Lemma 4.2]. For \(r \ge \ell _t\), let
Since \(\lim _{t\rightarrow \infty } \ell _t = \infty \), Lemma 3.5 gives that \(\mathrm {P}\)-a.s.
Therefore we can work on the event \(\bigcap _{r \ge \ell _t} [\mathcal B_r]^{\mathrm{c}}\). On this event, we write
where \(J_t\) is the number of jumps of X up to time t, and we use that \(|B_r(\mathcal {O})| \le (\log r)^{\delta _r r}\). Next, \(J_t\) is stochastically dominated by a Poisson random variable with parameter \(t (\log r)^{\delta _r}\). Hence
for large r. Using that \(\ell _t \ge t \log t\), we can easily check that, for \(r \ge \ell _t\) and t large enough,
Thus (5.6) is at most
Since \(\lim _{t\rightarrow \infty } \ell _t = \infty \) and \(\lim _{t\rightarrow \infty } U^*(t) = \infty \), this settles the claim. \(\square \)
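The Poisson domination used above can be quantified by the standard Chernoff bound (recorded here for orientation, with \(\lambda = t(\log r)^{\delta _r}\)): for \(N \sim \mathrm {Poi}_\lambda \) and \(n > \lambda \),

```latex
\mathbb{P}(N \ge n)
  \le \inf_{\theta > 0} \mathrm{e}^{-\theta n}\,\mathbb{E}\bigl[\mathrm{e}^{\theta N}\bigr]
  = \inf_{\theta > 0} \mathrm{e}^{\lambda(\mathrm{e}^{\theta}-1)-\theta n}
  = \mathrm{e}^{-\lambda}\Bigl(\frac{\mathrm{e}\lambda}{n}\Bigr)^{n},
```

the infimum being attained at \(\theta = \log (n/\lambda )\); the bound decays super-exponentially in \(n\) once \(n \gg \lambda \).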
Lemma 5.2
(No short paths) For any \(\gamma \in (0,1)\),
Proof
We follow [7, Lemma 4.3]. By Lemma 3.5 with \(r = \lceil t^\gamma \rceil \), we may assume that
where the second inequality uses that \(\log L_{\lceil t^\gamma \rceil } \sim \log |B_{\lceil t^\gamma \rceil }(\mathcal {O})| \sim \vartheta \lceil t^\gamma \rceil \). Hence
for any constant \(C>1\). \(\square \)
The proof of the upper bound in (5.2) also relies on a third lemma estimating the contribution of paths leaving a ball of radius \(\lceil t^\gamma \rceil \) for some \(\gamma \in (0,1)\) but staying inside a ball of radius \(t \log t\). We slice the annulus between these two balls into layers, and derive an estimate for paths that reach a given layer but do not reach the next layer. To that end, fix \(\gamma \in (\alpha ,1)\) with \(\alpha \) as in (3.1), and let
For \(1 \le k \le K_t\), define (recall (4.1))
and set
Lemma 5.3
(Upper bound on \(U^{{{\scriptscriptstyle {({k}})}}}(t)\)) For any \(\varepsilon >0\), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(t \rightarrow \infty \),
Proof
We follow [7, Lemma 4.4]. Fix \(k \in \{1, \ldots , K_t\}\). For \(\pi \in \mathcal {N}^{{{\scriptscriptstyle {({k}})}}}_t\), let
chosen such that (4.19)–(4.20) are satisfied. By Proposition 4.6 and (4.10), \((\mathrm {P}\times \mathfrak {P})\)-a.s. eventually as \(t \rightarrow \infty \),
Using Corollary 3.4 and \(\log L_r \sim \vartheta r\), we bound
Moreover, \(|z_\pi | > r^{{{\scriptscriptstyle {({k+1}})}}}_t - \lceil t^\gamma \rceil \) and
Hence
with
The function \(F_t\) is maximized at any point \(r_t\) satisfying
In particular, \(r_t = \mathfrak {r}_t[1+o(1)]\), which implies that
Inserting (5.24) into (5.21), we obtain \(\displaystyle \frac{1}{t} \log U^{{{\scriptscriptstyle {({k}})}}}_t < \varrho \log (\vartheta \mathfrak {r}_t) - \varrho - {\widetilde{\chi }}(\varrho ) + \varepsilon \), which is the desired upper bound because \(\varepsilon >0\) is arbitrary. \(\square \)
Proof of the upper bound in (5.2)
To avoid repetition, all statements hold \((\mathfrak {P}\times \mathrm {P})\)-a.s. eventually as \(t \rightarrow \infty \). Set
Then
From Lemmas 5.1–5.3 and the fact that \(K_t = o(t)\), we get
Since \(\varepsilon >0\) is arbitrary, this completes the proof of the upper bound in (1.14). \(\square \)
5.2 Lower Bound
We follow [7, Sect. 4.1]. Fix \(\varepsilon >0\). By the definition of \({\widetilde{\chi }}\), there exists an infinite rooted tree \(T=(V',E',\mathcal Y)\) with degrees in \({{\,\mathrm{supp}\,}}(D_g)\) such that \(\chi _T(\varrho ) < {\widetilde{\chi }}(\varrho ) + \tfrac{1}{4} \varepsilon \). Let \(Q_r = B^T_r(\mathcal Y)\) be the ball of radius r around \(\mathcal Y\) in T. By Proposition A.1 and (A.2), there exist a radius \(R \in \mathbb {N}\) and a potential profile \(q:B^T_R \rightarrow \mathbb {R}\) with \(\mathcal {L}_{Q_R}(q;\varrho )<1\) (in particular, \(q\le 0\)) such that
For \(\ell \in \mathbb {N}\), let \(B_\ell = B_\ell (\mathcal {O})\) denote the ball of radius \(\ell \) around \(\mathcal {O}\) in \({\mathcal {G}\mathcal {W}}\). We will show next that, \((\mathfrak {P} \times \mathrm {P})\)-a.s. eventually as \(\ell \rightarrow \infty \), \(B_\ell \) contains a copy of the ball \(Q_R\) where the potential \(\xi \) is bounded from below by \(\varrho \log \log |B_\ell | + q\).
Proposition 5.4
(Balls with high exceedances) \((\mathfrak {P}\times \mathrm {P})\)-almost surely eventually as \(\ell \rightarrow \infty \), there exists a vertex \(z \in B_\ell \) with \(B_{R+1}(z) \subset B_\ell \) and an isomorphism \(\varphi :B_{R+1}(z) \rightarrow Q_{R+1}\) such that \(\xi \ge \varrho \log \log |B_\ell | + q \circ \varphi \) in \(B_R(z)\). In particular,
Any such z necessarily satisfies \(|z| \ge c \ell \) \((\mathfrak {P}\times \mathrm {P})\)-a.s. eventually as \(\ell \rightarrow \infty \) for some constant \(c = c(\varrho , \vartheta , {\widetilde{\chi }}(\varrho ), \varepsilon ) >0\).
Proof
See [7, Proposition 4.1]. The proof carries over verbatim because the degrees play no role. \(\square \)
Proof of the lower bound in (1.14)
Let z be as in Proposition 5.4. Write \(\tau _z\) for the hitting time of z by the random walk X. For \(s\in (0,t)\), we estimate
where we use the strong Markov property at time \(\tau _z\). We first bound the last term in the integrand in (5.30). Since \(\xi \ge \varrho \log \log |B_\ell | +q \) in \(B_R(z)\),
for large v, where we used that \(B_{R+1}(z)\) is isomorphic to \(Q_{R+1}\) for the indicators in the first inequality, and applied Lemma B.2 and (5.28) to obtain the second and third inequalities, respectively. On the other hand, since \(\xi \ge 0\),
and we can bound the latter probability from below by the probability that the random walk runs along a shortest path from the root \(\mathcal {O}\) to z within a time at most s. Such a path \((y_i)_{i=0}^{|z|}\) has \(y_0 = \mathcal {O}\), \(y_{|z|} = z\), \(y_i \sim y_{i-1}\) for \(i=1, \ldots , |z|\), has at each step from \(y_i\) precisely \(\deg (y_i)\) choices for the next step with equal probability, and the step is carried out after an exponential time \(E_i\) with parameter \(\deg (y_i)\). This gives
where \(\mathrm{Poi}_\gamma \) is the Poisson distribution with parameter \(\gamma \), and P is the generic symbol for probability. Summarising, we obtain
where in the last inequality we use that \(s \le |z|\) and \(\ell \ge |z|\). Further assuming that \(\ell = o(t)\), we see that the optimum over s is obtained at
Note that, by Proposition 5.4, this s indeed satisfies \(s\le |z|\). Applying (1.10) we get, after a straightforward computation, \((\mathfrak {P}\times \mathrm {P})\)-a.s. eventually as \(t \rightarrow \infty \),
Inserting \(\log |B_\ell | \sim \vartheta \ell \), we get
with
The optimal \(\ell \) for \(F_\ell \) satisfies
i.e., \(\ell = \mathfrak {r}_t[1+o(1)]\). For this choice we obtain
Hence \((\mathfrak {P}\times \mathrm {P})\)-a.s.
Since \(\varepsilon >0\) is arbitrary, this completes the proof of the lower bound in (1.14). \(\square \)
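The appearance of the Poisson distribution in the shortest-path estimate above rests on a standard identity (stated here for i.i.d. rate-\(\lambda \) exponential times, a simplifying assumption):

```latex
\mathbb{P}\Bigl(\sum_{i=1}^{n} E_i \le s\Bigr)
  = \mathbb{P}\bigl(\mathrm{Poi}_{\lambda s} \ge n\bigr),
\qquad E_1,\ldots,E_n \sim \mathrm{Exp}(\lambda)\ \text{i.i.d.},
```

since a rate-\(\lambda \) Poisson process makes at least \(n\) jumps by time \(s\) if and only if the sum of its first \(n\) inter-jump times is at most \(s\).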
Remark
It is clear from (5.23) and (5.39) that, in order to get the correct asymptotics, it is crucial that both \(\delta _r\) and \(r\frac{\mathrm {d}}{\mathrm {d}r}\delta _r\) tend to zero as \(r\rightarrow \infty \). This is why Assumption 1.3 is the weakest condition on the tail of the degree distribution under which the arguments in [7] can be pushed through.
Data Availability
Data sharing not applicable as no datasets were generated or analysed during the current study.
References
Astrauskas, A.: From extreme values of i.i.d. random fields to extreme eigenvalues of finite-volume Anderson Hamiltonian. Probab. Surv. 13, 156–244 (2016)
Athreya, K.B.: Large deviation rates for branching processes. I. Single type case. Ann. Appl. Probab. 4, 779–790 (1994)
Avena, L., Gün, O., Hesse, M.: The parabolic Anderson model on the hypercube. Stoch. Proc. Appl. 130, 3369–3393 (2020)
Biggins, J., Bingham, N.: Large deviations in the supercritical branching process. Adv. Appl. Prob. 25, 757–772 (1993)
Biskup, M., König, W.: Eigenvalue order statistics from random Schrödinger operators with doubly-exponential tails. Commun. Math. Phys. 341, 179–218 (2016)
Biskup, M., König, W., dos Santos, R.S.: Mass concentration and aging in the parabolic Anderson model with doubly-exponential tails. Probab. Theory Relat. Fields 171, 251–331 (2018)
den Hollander, F., König, W., dos Santos, R.S.: The parabolic Anderson model on a Galton–Watson tree. To appear in: In and Out of Equilibrium 3: Celebrating Vladas Sidoravicius. Progress in Probability. Birkhäuser, Basel (2021)
Fleischmann, K., Molchanov, S.A.: Exact asymptotics in a mean field model with random potential. Probab. Theory Relat. Fields 86, 239–251 (1990)
Gärtner, J., Molchanov, S.A.: Parabolic problems for the Anderson model I. Intermittency and related problems. Commun. Math. Phys. 132, 613–655 (1990)
Gärtner, J., Molchanov, S.A.: Parabolic problems for the Anderson model II. Second-order asymptotics and structure of high peaks. Probab. Theory Relat. Fields 111, 17–55 (1998)
Gärtner, J., König, W., Molchanov, S.: Geometric characterization of intermittency in the parabolic Anderson model. Ann. Probab. 35, 439–499 (2007)
Grimmett, G.: Percolation (2nd. ed.). Grundlehren der mathematischen Wissenschaften, vol 321. Springer, Berlin (1999)
König, W.: The Parabolic Anderson Model. Pathways in Mathematics, Birkhäuser (2016)
Lyons, R., Peres, Y.: Probability on Trees and Networks. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, New York (2016)
Acknowledgements
The work in this paper was supported through Gravitation-grant NETWORKS-024.002.003 of the Netherlands Organisation for Scientific Research (NWO). The authors thank Götz Kersting and Anton Wakolbinger for helpful discussions on large deviation properties of the Galton–Watson process.
Communicated by Simone Warzel.
Appendices
A Dual Variational Formula
We introduce alternative representations for \(\chi \) in (1.9) in terms of a ‘dual’ variational formula. Fix \(\varrho \in (0,\infty )\) and a graph \(G=(V,E)\). The functional
plays the role of a large deviation rate function for the potential \(\xi \) in V (compare with (1.7)). For \(\Lambda \subset V\), define
The condition \(\mathcal {L}(q;G) \le 1\) under the supremum ensures that the potentials q have a fair probability under the i.i.d. double-exponential distribution. Write \({\widehat{\chi }}(G) = {\widehat{\chi }}_V(G)\).
Proposition A.1
(Alternative representations for \(\chi \) ) For any graph \(G = (V,E)\) and any \(\Lambda \subset V\),
Proof
See [7, Sect. A.1]. \(\square \)
B Largest Eigenvalue
We recall the Rayleigh-Ritz formula for the principal eigenvalue of the Anderson Hamiltonian. For \(\Lambda \subset V\) and \(q:\,V \rightarrow [-\infty , \infty )\), let \(\lambda _\Lambda (q; G)\) denote the largest eigenvalue of the operator \(\Delta _G + q\) in \(\Lambda \) with Dirichlet boundary conditions on \(V\setminus \Lambda \), i.e.,
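In its standard form, the Rayleigh-Ritz formula reads (recorded here for orientation, for finite \(\Lambda \), with \(\langle \cdot ,\cdot \rangle \) the \(\ell ^2(V)\) inner product):

```latex
\lambda_\Lambda(q;G)
  = \sup\bigl\{ \langle (\Delta_G + q)\phi, \phi \rangle :\,
      \phi \in \mathbb{R}^V,\ \mathrm{supp}(\phi) \subset \Lambda,\ \|\phi\|_2 = 1 \bigr\}.
```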
Lemma B.1
(Spectral bounds)
-
(1)
For any \(\Gamma \subset \Lambda \subset V\),
$$\begin{aligned} \max _{z \in \Gamma } q(z) - D_{{\bar{z}}} \le \lambda _\Gamma (q;G) \le \lambda _\Lambda (q;G) \le \max _{z \in \Lambda } q(z) \end{aligned}$$(B.2)with \({\bar{z}} = \mathrm {arg}\max _{z \in \Gamma } q(z)\) and \(D_{\bar{z}}\) the degree of \({\bar{z}}\).
-
(2)
The eigenfunction corresponding to \(\lambda _\Lambda (q;G)\) can be taken to be non-negative.
-
(3)
If q is real-valued and \(\Gamma \subsetneq \Lambda \) is finite and connected in G, then the second inequality in (B.2) is strict and the eigenfunction corresponding to \(\lambda _\Lambda (q;G)\) is strictly positive.
Proof
Write
where the first sum in the last line runs over all ordered pairs (x, y) with \((x,y) \ne (y,x)\), which gives rise to the factor \(\tfrac{1}{2}\). The upper bound in (B.2) follows from the estimate
To get the lower bound in (B.2), we use the fact that \(\lambda _\Lambda \) is non-decreasing in q. Hence, replacing q(z) by \(-\infty \) for every \(z \ne {\bar{z}}\) and taking as test function \(\phi ={{\bar{\phi }}} = \delta _{{\bar{z}}}\), we get from (B.3) that
which settles the claim in (1). The claims in (2) and (3) are standard. \(\square \)
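For orientation, the test-function computation behind the lower bound is one line: with the graph Laplacian \((\Delta _G\phi )(x) = \sum _{y \sim x} [\phi (y)-\phi (x)]\) used here,

```latex
\bigl\langle (\Delta_G + q)\,\delta_{\bar z}, \delta_{\bar z} \bigr\rangle
  = q(\bar z) + \sum_{y \sim \bar z}
      \bigl[\delta_{\bar z}(y) - \delta_{\bar z}(\bar z)\bigr]
  = q(\bar z) - D_{\bar z},
```

which is the left-hand side of (B.2).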
Inside \({\mathcal {G}\mathcal {W}}\), fix a finite connected subset \(\Lambda \subset V\), and let \(H_\Lambda \) denote the Anderson Hamiltonian in \(\Lambda \) with zero Dirichlet boundary conditions on \(\Lambda ^c = V \backslash \Lambda \) (i.e., the restriction of the operator \(H_G = \Delta _G + \xi \) to the class of functions supported on \(\Lambda \)). For \(y \in \Lambda \), let \(u^y_\Lambda \) be the solution of
and set \(U^y_\Lambda (t) := \sum _{x \in \Lambda } u^y_\Lambda (x,t)\). The solution admits the Feynman-Kac representation
where \(\tau _{\Lambda ^{\mathrm{c}}}\) is the hitting time of \(\Lambda ^{\mathrm{c}}\). It also admits the spectral representation
where \(\lambda ^{{{\scriptscriptstyle {({1}})}}}_\Lambda \ge \lambda ^{{{\scriptscriptstyle {({2}})}}}_\Lambda \ge \cdots \ge \lambda ^{{{\scriptscriptstyle {({|\Lambda |}})}}}_\Lambda \) and \(\phi ^{{{\scriptscriptstyle {({1}})}}}_\Lambda , \phi ^{{{\scriptscriptstyle {({2}})}}}_\Lambda , \ldots , \phi ^{{{\scriptscriptstyle {({|\Lambda |}})}}}_\Lambda \) are, respectively, the eigenvalues and the corresponding orthonormal eigenfunctions of \(H_\Lambda \). These two representations may be exploited to obtain bounds for one in terms of the other, as shown by the following lemma.
Lemma B.2
(Bounds on the solution) For any \(y \in \Lambda \) and any \(t > 0\),
Proof
The first and third inequalities follow from (B.7)–(B.8) after a suitable application of Parseval’s identity. The second inequality is elementary. \(\square \)
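For orientation, a standard pair of bounds of this type (a sketch in the spirit of Lemma B.2, not its exact statement) follows from the spectral representation: since the \(\phi ^{{{\scriptscriptstyle {({k}})}}}_\Lambda \) are orthonormal,

```latex
u^y_\Lambda(y,t)
  = \sum_{k=1}^{|\Lambda|} \mathrm{e}^{t\lambda^{(k)}_\Lambda}\,\phi^{(k)}_\Lambda(y)^2
  \ge \mathrm{e}^{t\lambda^{(1)}_\Lambda}\,\phi^{(1)}_\Lambda(y)^2,
\qquad
U^y_\Lambda(t)
  \le \mathrm{e}^{t\lambda^{(1)}_\Lambda} \sum_{k=1}^{|\Lambda|}
      \bigl|\phi^{(k)}_\Lambda(y)\bigr|\,
      \bigl|\langle \phi^{(k)}_\Lambda, \mathbb{1}_\Lambda \rangle\bigr|
  \le \sqrt{|\Lambda|}\,\mathrm{e}^{t\lambda^{(1)}_\Lambda},
```

where the last step uses Cauchy-Schwarz together with Parseval's identity, \(\sum _k \phi ^{{{\scriptscriptstyle {({k}})}}}_\Lambda (y)^2 = 1\) and \(\sum _k \langle \phi ^{{{\scriptscriptstyle {({k}})}}}_\Lambda , \mathbb {1}_\Lambda \rangle ^2 = |\Lambda |\).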
C Existence and Uniqueness of the Feynman-Kac Formula
We follow the argument in [9, Sect. 2], where existence and uniqueness of the Feynman-Kac formula in (1.5) was shown for \(G= \mathbb {Z}^d\).
Theorem C.1
Subject to Assumptions 1.1 and 1.2, (1.4) has a unique nonnegative solution \((P \times \mathfrak {P})\)-almost surely. This solution admits the Feynman-Kac representation in (1.5).
We note that, due to the exponential growth of the Galton–Watson tree, the condition on the potential needed here is stronger than the one required in [9] on \(\mathbb {Z}^d\).
The proof of Theorem C.1 requires several preparatory results. Lemmas C.2 and C.3 below show the existence and uniqueness, respectively, of the Feynman-Kac solution for a deterministic potential. Lemma C.4 extends this to a random potential.
Consider the problem
where q is a deterministic potential that is bounded from below. Without loss of generality, we may assume that q is nonnegative.
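The reduction to nonnegative \(q\) is the usual exponential tilt: if \(c := \inf _V q > -\infty \) and \(u\) solves (C.1), then

```latex
v(x,t) := \mathrm{e}^{-ct}\,u(x,t)
\quad\text{solves}\quad
\partial_t v = \Delta v + (q-c)\,v, \qquad v(\cdot,0) = u(\cdot,0),
```

with nonnegative potential \(q-c\), and \(u \mapsto v\) is a bijection preserving nonnegativity of solutions.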
Define
Lemma C.2
(Existence) (C.1) admits at least one nonnegative solution if and only if
If (C.3) is fulfilled, then v is the minimal nonnegative solution of (C.1).
Proof
See [7, Lemma 2.2]. The proof relies on restricting the Feynman-Kac functional in (C.3) to cubes of length 2N around the origin and letting \(N \rightarrow \infty \). On the tree we restrict to balls of radius R around the root and let \(R \rightarrow \infty \). The arguments carry over with this change. \(\square \)
Lemma C.3
(Uniqueness) If q is bounded from below, then (C.1) admits at most one nonnegative solution \(\mathfrak {P}\)-almost surely.
Proof
It suffices to show that (C.1) with initial condition \(u(x,0) = 0\), \(x\in V\), only has the 0 solution. We follow the proof of [9, Lemma 2.3]. For \(R \in \mathbb {N}\), define \(\Gamma _R\) to be the set of paths
consisting of neighbouring vertices in V such that \(x_0,\ldots ,x_{n-1}\in B_R(\mathcal {O})\) and \(x_n \in Z_{R+1}\). Furthermore, define
Let \(\tau _R\) be the first time when the random walk hits \(Z_{R+1}\), and let v be a solution of (C.1). The Feynman-Kac representation of v reads
We are done once we show that
for all \(0 < t \le T\) and all \(R\in \mathbb {N}\). Indeed, in that case the right-hand side tends to infinity as \(R \rightarrow \infty \), and therefore so does the left-hand side, which leaves \(v(t,0) = 0\) for all \(t \in (0,T]\) as the only possible solution.
To prove (C.7), fix an arbitrary path \(\gamma \in \Gamma _R\). The contribution of the random walk moving along the path \(\gamma \) equals \(\chi _\gamma (t)\) with
where \(q_i = q(x_i)\) and the \(\sigma _i\) are the successive waiting times of the random walk, which are independent and exponentially distributed with parameter \(D_i\). Letting m be such that \(q(x_m) = \displaystyle \min _{x\in \gamma }q(x)\), we can rewrite (C.8) as
where the inequality uses Lemma 2.3(a), which shows that the maximal degree is o(R) as \(R\rightarrow \infty \). After some straightforward manipulations and making a change of integration variables (for full details see [9, (2.14)–(2.15)]), we arrive at (C.7). \(\square \)
Lemma C.4
For each \(t>0\),
where, for \(x\in V\), \(|x|= {{\,\mathrm{dist}\,}}(\mathcal {O}, x)\) denotes the distance between x and the root \(\mathcal {O}\).
Proof
For fixed R, let \(({\tilde{X}}_t)_{t\ge 0}\) be the random walk on the regular tree with offspring o(R), and define \((N(t))_{t\ge 0}\) to be the Poisson process with rate \(\log R\), associated with the jumps of \({\tilde{X}}\). We estimate
Since
(C.10) follows from Stirling’s formula. \(\square \)
Proof of Theorem C.1
We follow the proof of [9, Theorem 2.1 a)]. We need to check that the expression in (1.5) is finite for arbitrary \((t,x) \in \mathbb {R}_+ \times V\). To that end we estimate
We know from Lemma 3.5 that \(\displaystyle \max _{y\in B_R(\mathcal {O})}\xi (y) \sim \varrho \log (\vartheta R)\) as \(R\rightarrow \infty \), \((P\times \mathfrak {P})\)-a.s. (recall (1.10) and (3.3)). Applying Lemma C.4, we see that the sum on the right-hand side is indeed finite. \(\square \)
den Hollander, F., Wang, D. The Parabolic Anderson Model on a Galton–Watson Tree Revisited. J Stat Phys 189, 8 (2022). https://doi.org/10.1007/s10955-022-02951-1
Keywords
- Parabolic Anderson model
- Galton–Watson tree
- Double-exponential distribution
- Quenched Lyapunov exponent
- Variational formula