1 Introduction and Main Results

1.1 History and introduction

Our work will focus on two distinct but related models: The \(\mathbb {H}^{2|2}\)-model, a lattice spin model which is related to the Anderson transition, and the vertex-reinforced jump process (VRJP), a random walk on graphs which is more likely to jump to vertices on which it has already spent a lot of time.

The \(\mathbb {H}^{2|2}\)-model was initially introduced by Zirnbauer [1] as a toy model for studying the Anderson transition. Formally, it is a lattice spin model taking values in the hyperbolic superplane \(\mathbb {H}^{2|2}\), a supersymmetric analogue of hyperbolic space. Independently, the VRJP was introduced by Davis and Volkov [2] as a natural example of a reinforced (and consequently non-Markovian) continuous-time random walk. Somewhat surprisingly, Sabot and Tarrès [3] observed that these two models are intimately related. Namely, the time the VRJP asymptotically spends on vertices can be expressed in terms of the \(\mathbb {H}^{2|2}\)-model. This has been used to see the VRJP as a random walk in random environment, with the environment being given by the \(\mathbb {H}^{2|2}\)-model. Furthermore, the two models are linked by a Dynkin-type isomorphism theorem due to Bauerschmidt, Helmuth and Swan [4, 5], analogous to the connection between simple random walk and the Gaussian free field [6].

Both models are parametrised by an inverse temperature \(\beta > 0\) and, depending on the background geometry of the graph under consideration, may exhibit a phase transition at some critical parameter \(\beta _{\textrm{c}} \in \left( 0,\infty \right] \). For the \(\mathbb {H}^{2|2}\)-model the expected transition is between a disordered high-temperature phase (\(\beta < \beta _{\textrm{c}}\)) and a symmetry-broken low-temperature phase (\(\beta > \beta _{\textrm{c}}\)) exhibiting long-range order. For the VRJP the transition is between a recurrent phase due to strong reinforcement effects and a transient phase due to weak reinforcement effects.

On \(\mathbb {Z}^{D}\) a fair bit is known about the phase diagram of the two models. In dimension \(D\le 2\) both models are never delocalised (i.e. they are always disordered and recurrent, respectively) [2,3,4, 7,8,9]. In dimensions \(D\ge 3\), however, they exhibit a phase transition from a localised to a delocalised phase at a unique \(\beta _{\textrm{c}} \in (0,\infty )\) [3, 8, 10,11,12,13,14].

Fig. 1 The rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) for \(d=2\), shown up to its third generation, with the root vertex denoted by 0

In this article we consider both models on the geometry of a rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) with \(d\ge 2\) (see Fig. 1). For the VRJP this setting was previously explored by various authors [15,16,17,18,19]. In particular, Basdevant and Singh [17] showed that the VRJP on Galton–Watson trees with mean offspring \(m>1\) has a phase transition from recurrence to transience at some explicitly characterised \(\beta _{\textrm{c}} \in (0,\infty )\). For simplicity, we focus on the “deterministic case”, but our results should translate to Galton–Watson trees as well (up to some technical restrictions on the offspring distribution).

The main goal of this work is to provide new information on the supercritical phase (\(\beta > \beta _{\textrm{c}}\)), including the near-critical regime. Roughly speaking, we show that on the infinite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) the order parameters of the VRJP and the \(\mathbb {H}^{2|2}\)-model diverge as \(\exp (c/\sqrt{\beta -\beta _{\textrm{c}}})\) as one approaches the critical point from the supercritical regime, \(\beta \searrow \beta _{\textrm{c}}\) (see Theorems 1.2 and 1.5, respectively). Such behaviour has previously been predicted by Zirnbauer for Efetov’s model [20]. This “infinite-order” behaviour towards the critical point is rather surprising, as it conflicts with the usual scaling hypotheses in statistical mechanics, which predict algebraic singularities as one approaches the critical point. Moreover, we show that on finite rooted \((d+1)\)-regular trees, the VRJP and the \(\mathbb {H}^{2|2}\)-model exhibit an additional multifractal intermediate regime for \(\beta \in (\beta _{\textrm{c}}, \beta _{\textrm{c}}^{\textrm{erg}})\) (see Theorems 1.3, 1.4, and 1.6). An illustration of some of our results for the VRJP is given in Fig. 2.

Fig. 2 Sketch of the phase diagram for the VRJP on \(\mathbb {T}_{d}\) with \(d\ge 2\). The recurrence/transience transition at \(\beta _{\textrm{c}}\) is phrased in terms of \(\mathbb {E}[L^{0}_{\infty }]\), i.e. the expected total time the walk (on the infinite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\)) spends at the starting vertex. In this article, we obtain precise asymptotics for \(\mathbb {E}[L^{0}_{\infty }]\) as \(\beta \searrow \beta _{\textrm{c}}\). Second, we show that there is an additional transition point \(\beta _{\textrm{c}}^{\textrm{erg}} > \beta _{\textrm{c}}\). It is phrased in terms of the volume-scaling of the fraction of total time, \(\lim _{t\rightarrow \infty } L^{0}_{t}/t\), the VRJP on the finite tree \(\mathbb {T}_{d,n}\) spends at the origin. Here, the symbol “\(\sim \)” is understood loosely, and we refer to the text for precise error terms

Connection to the Anderson Transition and Efetov’s Model. Inspiration for our work originates from predictions in the physics literature on Efetov’s model [20,21,22,23,24,25]. The latter is a supersymmetric lattice sigma model that is considered to capture the Anderson transition [26, 27]. To be more precise, Efetov’s model can be derived from a granular limit (similar to a Griffiths–Simon construction [28]) of the random band matrix model, followed by a sigma model approximation [29, 30]. The connection to our work is due to Zirnbauer, who introduced the \(\mathbb {H}^{2|2}\)-model as a simplification of Efetov’s model [1]. Namely, in Efetov’s model spins take values in the symmetric superspace \(\textrm{U}(1,1|2)/[\textrm{U}(1|1)\otimes \textrm{U}(1|1)]\). According to Zirnbauer, the essential features of this target space are its hyperbolic symmetry and its supersymmetry. In this sense, \(\mathbb {H}^{2|2}\) is the simplest target space with these two properties. Study of the \(\mathbb {H}^{2|2}\)-model may guide the analysis of supersymmetric field theories more closely related to the Anderson transition.

Moreover, the \(\mathbb {H}^{2|2}\)-model and the VRJP are directly and rigorously related to an Anderson-type model, which we refer to as the STZ-Anderson model (see Definition 1.8). This fact was already hinted at by Disertori, Spencer and Zirnbauer [10], but only fully appreciated by Sabot, Tarrès and Zeng [31, 32], who exploited the relationship to gain new insights on the VRJP. It is an interesting open problem to better understand the spectral properties of this model and how it relates to the VRJP and the \(\mathbb {H}^{2|2}\)-model.

Notably, the phase diagram of the \(\mathbb {H}^{2|2}\)-model is better understood than that of Efetov’s model or the Anderson model on a lattice. For example, for the \(\mathbb {H}^{2|2}\)-model there is proven absence of long-range order in 2D [4] as well as proven existence of a phase transition in 3D [10, 11]. For the Anderson model on \(\mathbb {Z}^{D}\), the existence of a phase transition in \(D\ge 3\) and the absence of one in \(D=2\) are arguably among the most prominent open problems in mathematical physics. A good example of the Anderson model’s intricacies is given by the work of Aizenman and Warzel [33, 34]. Despite many previous efforts, they were the first to gain a somewhat complete understanding of the model’s spectral properties on the regular tree. However, many questions are still open; in particular, there are no rigorous results on the Anderson model’s (near-)critical behaviour. In this sense one might (somewhat generously) interpret this article as a step towards a better understanding of the near-critical behaviour for a model in the “Anderson universality class”.

We would also like to comment on the methods used in the physics literature on Efetov’s model. The analysis of the model on a regular tree, initiated by Efetov and Zirnbauer [20, 21], relies on a recursion/consistency relation that is specific to the tree setting. Using this approach, Zirnbauer predicted the divergence of the order parameter (relevant for the symmetry-breaking transition of Efetov’s model) for \(\beta \searrow \beta _{\textrm{c}}\). We should mention that Mirlin and Gruzberg [35] argued that this analysis should essentially carry through for the \(\mathbb {H}^{2|2}\)-model. In our case, we take a different path, exploiting a branching random walk structure in the “horospherical marginal” of the \(\mathbb {H}^{2|2}\)-model (the t-field).

After completion of this work, we were made aware by Martin Zirnbauer of recent numerical investigations for the Anderson transition on random tree-like graphs [36, 37]. The observed scaling behaviour near the transition point might suggest the need for a field-theoretic description beyond the supersymmetric approach of Efetov (also see [38, 39]). At this point, there does not seem to exist a consensus on the theoretical description of near-critical scaling for the Anderson transition of tree-like graphs and rigorous results would be of great value.

Notation: In multi-line estimates, we occasionally use “running constants” \(c,C > 0\) whose precise value may vary from line to line. We denote by \([n] = \{1,\ldots ,n\}\) the range of positive integers up to n. For a graph \(G = (V,E)\), an unoriented edge \(\{x,y\} \in E\) will be denoted by the juxtaposition xy, whereas an oriented edge is denoted by a tuple \((x,y)\), which is oriented from x to y. Write \(\vec{E}\) for the set of oriented edges. For a vertex x in a rooted tree (or a particle of a branching random walk), we denote its generation (i.e. distance from the origin) by \(\left| x\right| \). We use the short-hand \(\sum _{\left| x\right| = n}\ldots \) to denote summation over all vertices/particles at generation n. Variants of this convention will be used and the meaning should be clear from context. When our results concern the \((d+1)\)-regular rooted tree \(\mathbb {T}_{d}\), we assume \(d\ge 2\) and will typically suppress the d-dependence of all involved constants, unless specified otherwise. Mentions of \(\beta _{\textrm{c}}\) implicitly refer to the critical parameter \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d)\) as given by Proposition 2.14.

Fig. 3 An illustration of various interconnected models that we touch on. Solid lines denote rigorous connections, i.e. relevant quantities in one model can be expressed in terms of the other; dashed lines signify conceptual/heuristic connections

1.2 Model definitions and results

In this section, we define the VRJP, the \(\mathbb {H}^{2|2}\)-model, the t-field and the STZ-Anderson model. We are aware that spin systems with fermionic degrees of freedom, such as the \(\mathbb {H}^{2|2}\)-model, might be foreign to some readers. However, understanding this model is not necessary for the main results on the VRJP, and the reader may comfortably skip references to the \(\mathbb {H}^{2|2}\)-model on a first reading. We also note that all models that we introduce are intimately related (as illustrated in Fig. 3) and Sect. 2 will illuminate some of these connections.

1.2.1 Vertex-reinforced jump process

Definition 1.1

Let \(G = (V,E)\) be a locally finite graph equipped with positive edge-weights \((\beta _e)_{e\in E}\), and a starting vertex \(i_0\in V\). The VRJP \((X_t)_{t\ge 0}\) starting at \(X_{0} = i_0\) is the continuous-time jump process that at time t jumps from a vertex \(X_{t} = x\) to a neighbour y at rate

$$\begin{aligned} \beta _{xy}[1+L_{t}^{y}] \quad \text {with} \quad L_{t}^{y}:=\int _{0}^t \mathbbm {1}_{X_s=y}\text {d}s. \end{aligned}$$
(1.1)

We refer to \(L_{t}^{y}\) as the local time at y up to time t.

Unless specified otherwise, the VRJP on a graph G refers to the case of constant weights \(\beta _{e} \equiv \beta \), and the dependence on \(\beta \) is indicated by a subscript, as in \(\mathbb {E}_{\beta }\) or \(\mathbb {P}_{\beta }\). By a slight abuse of language, we refer to \(\beta \) as an inverse temperature.
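Since the rate in (1.1) to jump from the current vertex x to a neighbour y only depends on the local time \(L_{t}^{y}\) of the target vertex, which stays frozen while the walk sits at x, the jump rates are piecewise constant and the VRJP can be simulated exactly. The following Python sketch is our own illustration (the function name simulate_vrjp and the input format are not part of the paper) of such a simulation on an arbitrary finite weighted graph.

```python
import random
from collections import defaultdict

def simulate_vrjp(neighbours, weights, i0, t_max, seed=0):
    """Exact simulation of the VRJP of Definition 1.1 up to time t_max.

    neighbours: dict mapping each vertex to a list of its neighbours.
    weights:    dict mapping unoriented edges frozenset({x, y}) to beta_xy > 0.
    Returns the local-time field {x: L^x_{t_max}}.

    While the walk sits at x, the rate to a neighbour y is beta_xy * (1 + L^y),
    which does not change until the next jump, so holding times are exactly
    exponential with the current total rate.
    """
    rng = random.Random(seed)
    L = defaultdict(float)                       # local times L^x_t
    x, t = i0, 0.0
    while True:
        rates = [weights[frozenset((x, y))] * (1.0 + L[y]) for y in neighbours[x]]
        total = sum(rates)
        hold = rng.expovariate(total)            # holding time at x
        if t + hold >= t_max:                    # time budget exhausted while sitting at x
            L[x] += t_max - t
            return dict(L)
        L[x] += hold
        t += hold
        u, acc = rng.random() * total, 0.0       # jump to y with probability rate_y / total
        for y, r in zip(neighbours[x], rates):
            acc += r
            if u <= acc:
                x = y
                break

# Constant weights beta = 1 on the star graph with centre 0 and three leaves.
nbrs = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
wts = {frozenset((0, j)): 1.0 for j in (1, 2, 3)}
print(simulate_vrjp(nbrs, wts, i0=0, t_max=50.0))
```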

Results for the VRJP. Fig. 2 gives a rough picture of our statements for the VRJP; below we provide the exact results.

In the following, \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d)\) will denote the critical inverse temperature for the recurrence/transience transition of the VRJP on the infinite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) with \(d\ge 2\). By Basdevant and Singh [17] this inverse temperature is well-defined and finite: \(\beta _{\textrm{c}} \in (0,\infty )\) (cf. Proposition 2.14). Alternatively, \(\beta _{\textrm{c}}\) is characterised in terms of divergence of the expected total local time at the origin: \(\beta _{\textrm{c}} = \inf \{\beta > 0: \mathbb {E}_{\beta }[L^{0}_{\infty }] < \infty \}\). The following theorem provides information about the divergence of \(\mathbb {E}_{\beta }[L^{0}_{\infty }]\) as we approach the critical point from the transient regime.

Theorem 1.2

(Local-Time Asymptotics as \(\beta \searrow \beta _{\textrm{c}}\) for the VRJP on \(\mathbb {T}_{d}\)). Consider the VRJP, started at the root 0 of the infinite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) with \(d\ge 2\). Let \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d) \in (0,\infty )\) be as in Proposition 2.14. Let \(L^{0}_{\infty } = \lim _{t\rightarrow \infty }L^{0}_{t}\) denote the total time the VRJP spends at the root. There are constants \(c,C>0\) such that for sufficiently small \(\epsilon > 0\):

$$\begin{aligned} \exp (c/\sqrt{\epsilon })\le \mathbb {E}_{\beta _{\textrm{c}}+\epsilon }[L^{0}_{\infty }]\le \exp (C/\sqrt{\epsilon }). \end{aligned}$$
(1.2)

The above result concerned the infinite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\). On a finite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d,n}\) the total local time at the origin always diverges, but we may consider the fraction of time the walk spends at the starting vertex. In terms of this quantity we can identify both the recurrence/transience transition point \(\beta _{\textrm{c}}\) as well as an additional intermediate phase inside the transient regime.

Theorem 1.3

(Intermediate Phase for VRJP on Finite Trees). Consider the VRJP started at the root of the rooted \((d+1)\)-regular tree of depth n, \(\mathbb {T}_{d,n}\), with \(d\ge 2\). Let \(L_{t}^{0}\) denote the total time the walk spent at the root up until time t. We have

$$\begin{aligned} \textstyle \lim _{t\rightarrow \infty } \tfrac{L^{0}_{t}}{t} = \left| \mathbb {T}_{d,n}\right| ^{-\nu (\beta ) + o(1)} \qquad \text {w.h.p.\ as } n\rightarrow \infty \end{aligned}$$
(1.3)

with \(\beta \mapsto \nu (\beta )\) continuous and non-decreasing such that

$$\begin{aligned} \nu (\beta ) {\left\{ \begin{array}{ll} = 0 &{}\text { for } \beta \le \beta _{\textrm{c}}\\ \in (0,1) &{}\text { for } \beta _{\textrm{c}}< \beta < \beta _{\textrm{c}}^{\textrm{erg}}\\ =1 &{}\text { for } \beta > \beta _{\textrm{c}}^{\textrm{erg}}, \end{array}\right. } \end{aligned}$$
(1.4)

for some \(\beta _{\textrm{c}}^{\textrm{erg}} = \beta _{\textrm{c}}^{\textrm{erg}}(d) > \beta _{\textrm{c}}\). More precisely, we have

$$\begin{aligned} \nu (\beta ) = \max \!\Big (0, \inf _{\eta \in \left( 0,1 \right] } \frac{\psi _{\beta }(\eta )}{\eta \log d}\Big ) \end{aligned}$$
(1.5)

with \(\psi _{\beta }(\eta )\) given in (3.7).

Moreover, in the intermediate phase the inverse fraction of time at the origin shows a multifractal scaling behaviour:

Theorem 1.4

(Multifractality in the Intermediate Phase). Consider the setup of Theorem 1.3 and suppose \(\beta \in (\beta _{\textrm{c}}, \beta _{\textrm{c}}^{\textrm{erg}})\). For \(\eta \in (0,1)\) we have

$$\begin{aligned} \textstyle \mathbb {E}_{\beta }[(\lim _{t \rightarrow \infty }\tfrac{L_{t}^{0}}{t})^{-\eta }] \sim \left| \mathbb {T}_{d,n}\right| ^{\tau _{\beta }(\eta ) + o(1)} \quad \text {as} \quad n\rightarrow \infty , \end{aligned}$$
(1.6)

where

$$\begin{aligned} \tau _{\beta }(\eta ) = {\left\{ \begin{array}{ll} \frac{\eta }{\eta _{\beta }} \frac{\psi _{\beta }(\eta _{\beta })}{\log d} &{}\text { for } \eta \le \eta _{\beta }\\ \frac{\psi _{\beta }(\eta )}{\log d} &{}\text { for } \eta \ge \eta _{\beta }, \end{array}\right. } \end{aligned}$$
(1.7)

where \(\psi _{\beta }\) is given in (3.7) and \(\eta _{\beta } = \textrm{argmin}_{\eta >0} \psi _{\beta }(\eta )/\eta \in (0,1)\).

1.2.2 The \(\mathbb {H}^{2|2}\)-model

Definition of the \(\mathbb {H}^{2|2}\)-Model. We start by writing down the formal expressions defining the \(\mathbb {H}^{2|2}\)-model, and then make sense of them afterwards. Conceptually, we think of the hyperbolic superplane \(\mathbb {H}^{2|2}\) as the set of vectors \(\textbf{u} = (z,x,y,\xi ,\eta )\), satisfying

$$\begin{aligned} -1 = \textbf{u}\cdot \textbf{u} :=-z^{2} + x^{2} + y^{2} - 2\xi \eta . \end{aligned}$$
(1.8)

Here, \(z,x,y\) are even/bosonic coordinates and \(\xi ,\eta \) are odd/fermionic, a notion that will be explained shortly. For two vectors \(\textbf{u}_{i} = (z_{i},x_{i},y_{i},\xi _{i},\eta _{i})\) and \(\textbf{u}_{j} = (z_{j},x_{j},y_{j},\xi _{j},\eta _{j})\), we define the inner product

$$\begin{aligned} \textbf{u}_{i} \cdot \textbf{u}_{j} :=-z_{i}z_{j} + x_{i}x_{j} + y_{i}y_{j} + \eta _{i}\xi _{j} - \xi _{i}\eta _{j}. \end{aligned}$$
(1.9)

In other words, this pairing is of hyperbolic type in the even variables and of symplectic type in the odd variables.

Consider a finite graph \(G = (V,E)\) with non-negative edge weights \((\beta _{e})_{e \in E}\) and magnetic field \(h > 0\). Morally, we think of the \(\mathbb {H}^{2|2}\)-model on G as a probability measure on spin configurations \(\underline{\textbf{u}} = (\textbf{u}_{i})_{i\in V} \in (\mathbb {H}^{2|2})^{V}\), such that the formal expectation of a functional \(F \in C^{\infty }((\mathbb {H}^{2|2})^{V})\) is given by

$$\begin{aligned} \langle {F(\underline{\textbf{u}})}\rangle _{\beta , h} :=\int \limits _{(\mathbb {H}^{2|2})^{V}} \prod _{i\in V}\text {d}{\textbf{u}_{i}} F(\underline{\textbf{u}}) \, e^{\sum _{ij \in E} \beta _{ij}(\textbf{u}_{i}\cdot \textbf{u}_{j} + 1) - h\sum _{i\in V} (z_{i} - 1)}, \end{aligned}$$
(1.10)

with \(\text {d}{\textbf{u}}\) denoting the Haar measure over \(\mathbb {H}^{2|2}\). In other words, formally everything is analogous to the definition of spin/sigma models with “usual” target spaces, such as spheres \(S^{n}\) or hyperbolic spaces \(\mathbb {H}^{n}\). The only subtlety is that we still need to understand what a functional such as \(F \in C^{\infty }((\mathbb {H}^{2|2})^{V})\) means and how to interpret the integral above.

Rigorously, the space \(\mathbb {H}^{2|2}\) is not understood as a set of points, but rather is defined in a dual sense by directly specifying its set of smooth functions to be

$$\begin{aligned} C^{\infty }(\mathbb {H}^{2|2}) :=C^{\infty }(\mathbb {R}^{2}) \otimes \Lambda (\mathbb {R}^{2}) \end{aligned}$$
(1.11)

In other words, this is the exterior algebra in two generators with coefficients in \(C^{\infty }(\mathbb {R}^{2})\) (which is the same as \(C^{\infty }(\mathbb {R}^{2|2})\), analogous to the fact that \(\mathbb {H}^{2} \cong \mathbb {R}^{2}\) as smooth manifolds). Note that this set naturally carries the structure of a graded-commutative algebra. More concretely, any superfunction \(f\in C^{\infty }(\mathbb {H}^{2|2})\) can be written as

$$\begin{aligned} f = f_{0}(x,y) + f_{\xi }(x,y) \xi + f_{\eta }(x,y) \eta + f_{\xi \eta }(x,y) \xi \eta \end{aligned}$$
(1.12)

with smooth functions \(f_{0},f_{\xi },f_{\eta },f_{\xi \eta } \in C^{\infty }(\mathbb {R}^{2})\) and \(\xi ,\eta \) generating a Grassmann algebra, i.e. they satisfy the algebraic relations \(\xi \eta = -\eta \xi \) and \(\xi ^{2} = \eta ^{2} = 0\). We think of such f as a smooth function in the variables \(x,y,\xi ,\eta \) and write \(f = f(x,y,\xi ,\eta )\). In particular, the coordinate functions \(x,y,\xi ,\eta \) are themselves superfunctions. In light of (1.8), we define the z-coordinate to be the (even) superfunction

$$\begin{aligned} z :=(1 + x^{2} + y^{2} - 2\xi \eta )^{1/2} :=(1 + x^{2} + y^{2})^{1/2} - \frac{\xi \eta }{(1+x^{2}+y^{2})^{1/2}} \in C^{\infty }(\mathbb {H}^{2|2}). \nonumber \\ \end{aligned}$$
(1.13)

In this sense the coordinate vector \(\textbf{u} = (z,x,y,\xi ,\eta )\) satisfies \(\textbf{u}\cdot \textbf{u} = -1\). By abuse of notation we write \(\textbf{u} \in \mathbb {H}^{2|2}\), but more correctly one might say that \(\textbf{u}\) parametrises \(\mathbb {H}^{2|2}\). For a superfunction \(f \in C^{\infty }(\mathbb {H}^{2|2})\) we write \(f(\textbf{u}) = f(x,y,\xi ,\eta ) = f\) and in line with physics terminology we might say that f is a function of the even/bosonic variables \(z,x,y\) and the odd/fermionic variables \(\xi ,\eta \).

The definition of z in (1.13) shows a particular example of a more general principle: The composition of an ordinary function (the square root in the example) with a superfunction (in the example that is \(1 + x^{2} + y^{2} - 2\xi \eta \)) is defined by formal Taylor expansion in the Grassmann variables. Due to nilpotency of the Grassmann variables this is well-defined.
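To make the bookkeeping behind (1.12) and (1.13) concrete, the following small Sympy sketch (our illustration, not part of the paper) stores a superfunction via its four coefficient functions, implements the Grassmann relations \(\xi \eta = -\eta \xi \), \(\xi ^{2} = \eta ^{2} = 0\), and verifies symbolically that the superfunction z from (1.13) satisfies \(\textbf{u}\cdot \textbf{u} = -1\) as in (1.8).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

class Super:
    """Superfunction f0 + fxi*xi + feta*eta + fxieta*xi*eta, cf. (1.12)."""
    def __init__(self, f0=0, fxi=0, feta=0, fxieta=0):
        self.c = [sp.sympify(f0), sp.sympify(fxi), sp.sympify(feta), sp.sympify(fxieta)]

    def __add__(self, other):
        return Super(*[a + b for a, b in zip(self.c, other.c)])

    def __neg__(self):
        return Super(*[-a for a in self.c])

    def __sub__(self, other):
        return self + (-other)

    def __mul__(self, other):
        a0, a1, a2, a3 = self.c                  # coefficients of 1, xi, eta, xi*eta
        b0, b1, b2, b3 = other.c
        return Super(
            a0 * b0,
            a0 * b1 + a1 * b0,
            a0 * b2 + a2 * b0,
            a0 * b3 + a3 * b0 + a1 * b2 - a2 * b1,   # uses xi*eta = -eta*xi, xi^2 = eta^2 = 0
        )

xi, eta = Super(0, 1, 0, 0), Super(0, 0, 1, 0)
X, Y = Super(x), Super(y)

r = 1 + x**2 + y**2
z = Super(sp.sqrt(r), 0, 0, -1 / sp.sqrt(r))     # (1.13): Taylor expansion in the nilpotent part

u_dot_u = -(z * z) + X * X + Y * Y - Super(2) * (xi * eta)   # (1.8)
print([sp.simplify(ci) for ci in u_dot_u.c])     # expect [-1, 0, 0, 0]
```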

Next we would like to introduce a notion of integrating a superfunction \(f(\textbf{u})\) over \(\mathbb {H}^{2|2}\). Expressing f as in (1.12), we define the derivations \(\partial _{\xi }, \partial _{\eta }\) acting via

$$\begin{aligned} \partial _{\xi }f = f_{\xi }(x,y) + f_{\xi \eta }(x,y) \eta \quad \text {and} \quad \partial _{\eta }f = f_{\eta }(x,y) - f_{\xi \eta }(x,y) \xi . \end{aligned}$$
(1.14)

In particular, note that these derivations are odd: they anticommute, \(\partial _{\xi }\partial _{\eta } = -\partial _{\eta }\partial _{\xi }\), and satisfy a graded Leibniz rule. The \(\mathbb {H}^{2|2}\)-integral of \(f \in C^{\infty }(\mathbb {H}^{2|2})\) is then defined to be the linear functional

$$\begin{aligned} \int _{\mathbb {H}^{2|2}}\text {d}{\textbf{u}}f(\textbf{u}) :=\int _{\mathbb {R}^{2}}\text {d}{x}\text {d}{y} \partial _{\eta }\partial _{\xi } [\frac{1}{z} f]. \end{aligned}$$
(1.15)

The factor \(\tfrac{1}{z}\) plays the role of a \(\mathbb {H}^{2|2}\)-volume element in the coordinates \(x,y,\xi ,\eta \). Note that this integral evaluates to a real number.

In a final step to formalise (1.10) we define multivariate superfunctions over \(\mathbb {H}^{2|2}\)

$$\begin{aligned} C^{\infty }((\mathbb {H}^{2|2})^{V}) :=\bigotimes \limits _{i\in V} C^{\infty }(\mathbb {H}^{2|2}) \cong C^{\infty }(\mathbb {R}^{2\left| V\right| }) \otimes \Lambda (\mathbb {R}^{2 \left| V\right| }), \end{aligned}$$
(1.16)

that is the Grassmann algebra in \(2\left| V\right| \) generators \(\{\xi _{i}, \eta _{i}\}_{i\in V}\) with coefficients in \(C^{\infty }(\mathbb {R}^{2\left| V\right| })\). An element of this algebra is considered a functional over spin configurations \(\underline{\textbf{u}} = \{\textbf{u}_{i}\}_{i\in V}\) and we write \(F = F(\underline{\textbf{u}})\). Any superfunction \(F \in C^{\infty }((\mathbb {H}^{2|2})^{V})\) can be expressed, analogously to (1.12), as

$$\begin{aligned} \sum _{I,J \subseteq V} f_{I,J}(\{x_{i},y_{i}\}_{i\in V}) \prod _{i\in I} \xi _{i} \prod _{j\in J} \eta _{j}. \end{aligned}$$
(1.17)

The integral of such F over \((\mathbb {H}^{2|2})^{V}\) is defined as

$$\begin{aligned} \hspace{-1em} \int \limits _{(\mathbb {H}^{2|2})^{V}} \text {d}{\underline{\textbf{u}}} F(\underline{\textbf{u}}) :=\int \limits _{(\mathbb {H}^{2|2})^{V}} \prod _{i\in V}\text {d}{\textbf{u}_{i}} F(\underline{\textbf{u}}) :=\int \limits _{\mathbb {R}^{2\left| V\right| }} \prod _{i\in V}\text {d}{x_{i}}\text {d}{y_{i}} \prod _{i\in V}\partial _{\eta _{i}}\partial _{\xi _{i}} [{(\textstyle \prod _{i\in V}\tfrac{1}{z_{i}})} F(\underline{\textbf{u}})]. \nonumber \\ \end{aligned}$$
(1.18)

With this notion of integration, the definition of the \(\mathbb {H}^{2|2}\)-model in (1.10) can be understood in a rigorous sense: The “Gibbs factor” is the composition of a regular function (exponential) with a superfunction (the exponent). As such it is defined by expansion in the Grassmann variables.

Results for the \(\mathbb {H}^{2|2}\)-Model. In the following we simply rephrase the above theorems in terms of the \(\mathbb {H}^{2|2}\)-model.

Theorem 1.5

(Asymptotics as \(\beta \searrow \beta _{\textrm{c}}\) for the \(\mathbb {H}^{2|2}\)-model on \(\mathbb {T}_{d}\)). Consider the \(\mathbb {H}^{2|2}\)-model on \(\mathbb {T}_{d,n}\). Suppose \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d) \in (0,\infty )\) is as in Proposition 2.14. The quantity

$$\begin{aligned} \langle x_{0}^{2} \rangle _{\beta _{\textrm{c}} + \epsilon }^{+} :=\lim _{h\searrow 0} \lim _{n\rightarrow \infty } \langle x_{0}^{2} \rangle _{\beta _{\textrm{c}} + \epsilon ;h,\mathbb {T}_{d,n}} \end{aligned}$$
(1.19)

is well-defined and finite for any \(\epsilon > 0\). There exist constants \(c,C > 0\) such that for sufficiently small \(\epsilon > 0\)

$$\begin{aligned} \exp (c/\sqrt{\epsilon }) \le \langle x_{0}^{2} \rangle _{\beta _{\textrm{c}} + \epsilon }^{+} \le \exp (C/\sqrt{\epsilon }). \end{aligned}$$
(1.20)

The above statement considered the infinite-volume limit, i.e. taking \(n\rightarrow \infty \) before removing the magnetic field \(h \searrow 0\). One may also consider a finite-volume limit (also referred to as the inverse-order thermodynamic limit [40]): in that case, we consider scaling limits of observables as \(h\searrow 0\) before taking \(n\rightarrow \infty \). In this limit, we also demonstrate an intermediate multifractal regime for the \(\mathbb {H}^{2|2}\)-model.

Theorem 1.6

(Intermediate Phase for the \(\mathbb {H}^{2|2}\)-Model on \(\mathbb {T}_{d,n}\)). There exist \(0< \beta _{\textrm{c}}< \beta _{\textrm{c}}^{\textrm{erg}} < \infty \) as in Theorem 1.3, such that for \(\beta _{\textrm{c}}< \beta < \beta _{\textrm{c}}^{\textrm{erg}}\) we have for \(\eta \in (0,1)\)

$$\begin{aligned} \textstyle \lim _{h\searrow 0} h^{-\eta }\langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle _{\beta ,h;\mathbb {T}_{d,n}} \sim \left| \mathbb {T}_{d,n}\right| ^{\tau _{\beta }(\eta ) + o(1)} \quad \text {as} \quad n\rightarrow \infty \end{aligned}$$
(1.21)

with \(\tau _{\beta }(\eta )\) as given in (1.7).

At first glance, the observable in (1.21) might seem somewhat obscure. However, in the physics literature on Efetov’s model and the Anderson transition, analogous quantities are predicted to encode disorder-averaged (fractional) moments of eigenstates at a given vertex and energy level, see for example [25, Equation (6)]. The volume-scaling of these quantities provides information about the (de)localisation behaviour of the eigenstates.

1.2.3 The t-field

Despite the inconspicuous name, the t-field is the most relevant object for our analysis. It is directly related both to the VRJP, encoding the time the VRJP asymptotically spends on each vertex, and to the \(\mathbb {H}^{2|2}\)-model, arising as a marginal in horospherical coordinates (see Sect. 2 for details).

Definition 1.7

(t-field Distribution). Consider a finite graph \(G = (V,E)\), a vertex \(i_{0} \in V\) and non-negative edge-weights \((\beta _e)_{e\in E}\). The law of the t-field, with weights \((\beta _e)_{e\in E}\), pinned at \(i_{0}\), is a probability measure on configurations \(\textbf{t} = \{t_{i}\}_{i\in V} \in \mathbb {R}^{V}\) given by

$$\begin{aligned} \mathcal {Q}^{(i_{0})}_{\beta }(\text {d}{\textbf{t}}) :=e^{-\sum _{ij \in E}\beta _{ij}[\cosh (t_{i} - t_{j}) - 1]} D_{\beta }(\textbf{t})^{1/2}\; \delta (t_{i_{0}})\prod _{i \in V {\setminus }\{i_{0}\}} \frac{\text {d}{t}_{i}}{\sqrt{2\pi /\beta }}, \end{aligned}$$
(1.22)

with the determinantal term

$$\begin{aligned} D_{\beta }(\textbf{t}) :=\sum _{T \in {\mathcal {T}}^{(i_{0})}} \prod _{(i,j) \in T} \beta _{ij}e^{t_{i} - t_{j}}, \end{aligned}$$
(1.23)

where \({\mathcal {T}}^{(i_{0})}\) is the set of spanning trees in G oriented away from \(i_{0}\).

Alternatively, one can write \(D_{\beta }(\textbf{t}) = \prod _{i\in V\setminus \{i_{0}\}} e^{-2t_{i}} \det _{i_{0}}(-\Delta _{\beta (\textbf{t})})\), where \(\det _{i_{0}}\) denotes the principal minor with respect to \(i_{0}\) and \(-\Delta _{\beta (\textbf{t})}\) is the discrete Laplacian for edge-weights \(\beta (\textbf{t}) = (\beta _{ij}e^{t_{i}+t_{j}})_{ij}\).
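The equality of (1.23) with the principal-minor expression above is an instance of the weighted matrix–tree theorem. The following short numerical check (our illustration; the graph and all names are chosen ad hoc) compares the two formulas on a small graph with unit weights and randomly drawn t-values.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Small test graph: a triangle {0,1,2} with a pendant vertex 3 attached at 2.
V = [0, 1, 2, 3]
E = [(0, 1), (0, 2), (1, 2), (2, 3)]
beta = {frozenset(e): 1.0 for e in E}
t = rng.normal(size=len(V)); t[0] = 0.0          # t-values, pinned at i0 = 0
i0 = 0

def spanning_trees():
    """All |V|-1 edge subsets of E without a cycle (union-find check)."""
    for S in itertools.combinations(E, len(V) - 1):
        parent = list(range(len(V)))
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        is_tree = True
        for i, j in S:
            ri, rj = find(i), find(j)
            if ri == rj:
                is_tree = False
                break
            parent[ri] = rj
        if is_tree:
            yield S

def orient_away(tree, root):
    """Orient the edges of a spanning tree away from the root."""
    adj = {v: [] for v in V}
    for i, j in tree:
        adj[i].append(j); adj[j].append(i)
    out, seen, stack = [], {root}, [root]
    while stack:
        v = stack.pop()
        for w in adj[v]:
            if w not in seen:
                seen.add(w); out.append((v, w)); stack.append(w)
    return out

# (1.23): sum over spanning trees oriented away from i0.
D_trees = sum(
    np.prod([beta[frozenset((i, j))] * np.exp(t[i] - t[j]) for i, j in orient_away(T, i0)])
    for T in spanning_trees()
)

# Alternative formula: prod_{i != i0} e^{-2 t_i} * det_{i0}(-Delta_{beta(t)}).
w = np.zeros((len(V), len(V)))
for i, j in E:
    w[i, j] = w[j, i] = beta[frozenset((i, j))] * np.exp(t[i] + t[j])
Lap = np.diag(w.sum(axis=1)) - w                 # -Delta_{beta(t)}
minor = np.delete(np.delete(Lap, i0, 0), i0, 1)
D_det = np.exp(-2 * t[1:].sum()) * np.linalg.det(minor)

print(D_trees, D_det)                            # the two numbers agree
```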

In general the determinantal term renders the law \(\mathcal {Q}_{\beta }^{(i_{0})}\) highly non-local. However, in case the underlying graph G is a tree, only a single summand contributes to (1.23) and the measure factorises in terms of the oriented edge-increments \(\{t_{i}-t_{j}\}_{(i,j)}\). This simplification is essential for this article and gives us the possibility to analyse the t-field on rooted \((d+1)\)-regular trees in terms of a branching random walk.

1.2.4 STZ-Anderson model

The following introduces a random Schrödinger operator, which is related to the previously introduced models. It will only be required for translating our results on the intermediate phase to the \(\mathbb {H}^{2|2}\)-model (Sect. 5.2), so the reader may skip this definition on a first reading. As Sabot, Tarrès and Zeng [31, 32] were the first to study this system in detail, we refer to it as the STZ-Anderson model.

Definition 1.8

(STZ-Anderson model). Consider a locally finite graph \(G = (V,E)\), equipped with non-negative edge-weights \((\beta _e)_{e\in E}\). For \(B = (B_{i})_{i\in V} \in \mathbb {R}_{+}^{V}\) define the Schrödinger-type operator

$$\begin{aligned} H_{B} :=-\Delta _{\beta } + V(B) \quad \text {with} \quad [V(B)]_{i} = B_{i} - {\textstyle \sum _{j}\beta _{ij}}. \end{aligned}$$
(1.24)

Define a probability distribution \(\nu _{\beta }\) over configurations \(B = (B_{i})_{i\in V}\) by specifying the Laplace transforms of its finite-dimensional marginals: For any vector \((\lambda _{i})_{i\in V} \in \left[ 0,\infty \right) ^{V}\) with only finitely many non-zero entries, we have

$$\begin{aligned} \int e^{-(\lambda ,B)}\nu _{\beta }(\text {d}{B}) = \frac{1}{\prod _{i\in V}\sqrt{1+2\lambda _{i}}} \exp [-\sum _{ij \in E}\beta _{ij}(\sqrt{1+2\lambda _{i}} \sqrt{1+2\lambda _{j}} - 1)]. \nonumber \\ \end{aligned}$$
(1.25)

Subject to this distribution, we refer to B as the STZ-field and to \(H_{B}\) as the STZ-Anderson model.

One may note that on finite graphs, the density of \(\nu _{\beta }\) is explicit:

$$\begin{aligned} \nu _{\beta }(\text {d}{B}) \propto \frac{e^{-\tfrac{1}{2} \sum _{i}B_{i}}}{\sqrt{\det (H_{B})}} \mathbbm {1}_{H_{B} > 0} \text {d}{B}, \end{aligned}$$
(1.26)

where \(H_{B} > 0\) means that the matrix \(H_{B}\) is positive definite. The definition via (1.25) is convenient, since it allows us to directly consider the infinite-volume limit. We also note that while the density (1.26) seems highly non-local, the Laplace transform in (1.25) only involves values of \(\lambda \) at adjacent vertices and therefore implies 1-dependency of the STZ-field.

In the original literature the STZ-field is denoted by \(\beta \) and referred to as the \(\beta \)-field. In order to be consistent with the statistical physics literature and avoid confusion with the inverse temperature, we introduced this slightly different notation. To be precise, we used this change of notation to also introduce a slightly more convenient normalisation: one has \(B_{i} = 2\beta _{i}\) compared to the normalisation of the \(\beta \)-field \(\{\beta _{i}\}\) used by Sabot, Tarrès and Zeng.

1.3 Further comments

Comments on Related Work. As noted earlier, the VRJP on tree geometries was already studied by various authors [15,16,17,18,19]. One notable difference to our work is that we do not consider the more general setting of Galton–Watson trees. While this is mostly to avoid unnecessary notational and technical difficulties, the Galton–Watson setting might be more subtle. This is due to an “extra” phase transition in the transient phase, observed by Chen and Zeng [18]. This phase transition depends on the probability of the Galton–Watson tree having precisely one offspring. It is an interesting question how this would interact with our analysis.

In regard to our results, the recent work by Rapenne [19] is of particular interest. He provides precise quantitative information on the (sub-)critical phase \(\beta \le \beta _{\textrm{c}}\). The results are phrased in terms of a certain martingale, associated with the STZ-Anderson model, but they can be formulated in terms of the \(\mathbb {H}^{2|2}\)-model with wired boundary conditions (or analogously the VRJP started from the boundary) on a rooted \((d+1)\)-regular tree of finite depth. In this sense, Rapenne’s article can be considered as complementary to our work.

Another curious connection to our work is given by the Derrida–Retaux model [41,42,43,44,45,46,47,48]. The latter is a toy model for a hierarchical renormalisation procedure related to the depinning transition. It has recently been shown [48] that the free energy of this model may vanish as \(\sim \exp (-c/\sqrt{p - p_{\textrm{c}}})\) as one approaches the critical point from the supercritical phase, \(p\searrow p_{\textrm{c}}\). There are further formal similarities between their analysis and the present article. It would be of interest to shed further light on the universality of this type of behaviour.

Debate on Intermediate Phase. We would like to highlight that the presence/absence of such an intermediate phase for the Anderson transition on tree geometries has been a recent topic of debate in the physics literature (see [40, 49] and references therein). In short, the debate concerns the question of whether the intermediate phase only arises due to finite-volume and boundary effects on the tree.

While the presence of a non-ergodic delocalised phase on finite regular trees has been established in recent years [24, 25, 50], it was not clear if this behaviour persists in the absence of a large “free” boundary. To study this, one can consider a system on large random regular graphs (RRGs) as a “tree without boundary” (alternatively one could consider trees with wired boundary conditions). For the Anderson transition on RRGs, early numerical simulations [23, 51, 52] suggested the existence of an intermediate phase, in conflict with existing theoretical predictions [22, 53,54,55]. Shortly afterwards, it was argued that the discrepancy was due to finite-size effects that vanish at very large system sizes [24, 49, 56], even though this does not seem to be the consensus [40, 52].

We should note that Aizenman and Warzel [33, 57] have shown the existence of an energy-regime of “resonant delocalisation” for the Anderson model on regular trees. It would be interesting to understand if/how this phenomenon is related to the intermediate phase discussed here.

In accordance with the physics literature, we refer to the intermediate phase (\(\beta _{\textrm{c}}< \beta < \beta _{\textrm{c}}^{\textrm{erg}}\)) as multifractal as opposed to the ergodic phase (\(\beta > \beta _{\textrm{c}}^{\textrm{erg}}\)).

1.4 Structure of this article

In Sect. 2 we provide details on the connections between the various models and recall previously known results for the VRJP. In particular, we recall that the VRJP can be seen as a random walk in random conductances given in terms of a t-field (referred to as the t-field environment). On the tree, the t-field can be seen as a branching random walk (BRW) and we recall various facts from the BRW literature. In Sect. 3 we apply BRW techniques to establish a statement on effective conductances in random environments given in terms of critical BRWs (Theorem 3.2). With Theorem 3.1 we prove a result on effective conductances in the near-critical t-field environment. We close the section by showing how the result on effective conductances implies Theorem 1.2 on expected local times for the VRJP. In Sect. 4 we continue to use BRW techniques for the t-field to establish Theorem 1.3 on the intermediate phase for the VRJP. We also prove Theorem 1.4 on the multifractality in the intermediate phase. Moreover, we argue that Rapenne’s recent work [19] implies the absence of such an intermediate phase on trees with wired boundary conditions. In Sect. 5 we show how to establish the results for the \(\mathbb {H}^{2|2}\)-model. For the near-critical asymptotics (Theorem 1.5) this is an easy consequence of a Dynkin isomorphism between the \(\mathbb {H}^{2|2}\)-model and the VRJP. For Theorem 1.6 on the intermediate phase, we make use of the STZ-field to connect the observable for the \(\mathbb {H}^{2|2}\)-model with the observable \(\lim _{t\rightarrow \infty } L^{0}_{t}/t\) that we study for the VRJP.

2 Additional Background

2.1 Dynkin isomorphism for the VRJP and the \(\mathbb {H}^{2|2}\)-Model

Analogous to the connection between the Gaussian free field and the (continuous-time) simple random walk, there is a Dynkin-type isomorphism theorem relating correlation functions of the \(\mathbb {H}^{2|2}\)-model with the local time of a VRJP.

Theorem 2.1

([5, Theorem 5.6]). Suppose \(G = (V,E)\) is a finite graph with positive edge-weights \(\{\beta _{ij}\}_{ij\in E}\). Let \(\langle \cdot \rangle _{\beta ,h}\) denote the expectation of the \(\mathbb {H}^{2|2}\)-model and suppose that under \(\mathbb {E}_{i}\), the process \((X_{t})_{t\ge 0}\) denotes a VRJP started from i. Suppose \(g:\mathbb {R}^{V} \rightarrow \mathbb {R}\) is a smooth bounded function. Then, for any \(i,j\in V\)

$$\begin{aligned} \langle x_{i}x_{j} g(\textbf{z}-1) \rangle _{\beta ,h} = \int _{0}^{\infty } \mathbb {E}_{i}[g(\textbf{L}_{t})\mathbbm {1}_{X_{t} = j}]e^{-ht}\text {d}{t}, \end{aligned}$$
(2.1)

where \(\textbf{L}_{t} = (L_{t}^{x})_{x\in V}\) denotes the VRJP’s local time field.

This result will be key to deduce Theorem 1.5 from Theorem 1.2.

2.2 VRJP as random walk in a t-field environment

As a continuous-time process, there is some freedom in the time-parametrisation of the VRJP. While the definition in (1.1) (the linearly reinforced timescale) is the “usual” parametrisation, we also make use of the exchangeable timescale VRJP \((\tilde{X}_t)_{t\in \left[ 0,+\infty \right) }\):

$$\begin{aligned} \textstyle \tilde{X}_t :=X_{A^{-1}(t)} \quad \text {with} \quad A(t) :=\int _{0}^{t} 2(1+L_{s}^{X_{s}})\text {d}{s} = \sum _{x\in V} [(1+L_{t}^{x})^{2} - 1] \end{aligned}$$
(2.2)

Writing \(\tilde{L}_{t}^{x} = \int _{0}^{t}\mathbbm {1}\{\tilde{X}_{s} = x\}\text {d}{s}\), the local times in the two timescales are related by

$$\begin{aligned} L_{t}^{x} = \sqrt{1+\tilde{L}_{t}^{x}} - 1. \end{aligned}$$
(2.3)

The above reparametrisation is motivated by the following result of Sabot and Tarrès [3], showing that the VRJP in exchangeable timescale can be seen as a (Markovian) random walk in random conductances given in terms of the t-field.

Theorem 2.2

(VRJP as Random Walk in Random Environment [3]). Consider a finite graph \(G = (V,E)\), a starting vertex \(i_{0} \in V\) and edge-weights \((\beta _e)_{e\in E}\). The exchangeable timescale VRJP, started at \(i_{0}\), equals in law an (annealed) continuous-time Markov jump process, with jump rates from i to j given by

$$\begin{aligned} \tfrac{1}{2}\beta _{ij}e^{T_j-T_{i}}, \end{aligned}$$
(2.4)

where \(\textbf{T} = (T_x)_{x\in V}\) are random variables distributed according to the law of the t-field (1.22) pinned at \(i_{0}\).

As a consequence of Theorem 2.2, the t-field can be recovered from the VRJP’s asymptotic local time:

Corollary 2.3

(t-field from Asymptotic Local Time [31]). Consider the setting of Theorem 2.2. Let \((L_{t}^{x})_{x\in V}\) and \((\tilde{L}_{t}^{x})_{x\in V}\) denote the local time field of the VRJP in linearly reinforced and exchangeable timescale, respectively. Then

$$\begin{aligned} \begin{aligned} T_{i}&:=\lim \limits _{t\rightarrow \infty } \log \left( L^{i}_{t}/L^{i_{0}}_{t}\right) \qquad (i \in V)\\ \tilde{T}_{i}&:=\tfrac{1}{2} \lim \limits _{t\rightarrow \infty } \log \left( \tilde{L}^{i}_{t}/\tilde{L}^{i_{0}}_{t}\right) \qquad (i \in V) \end{aligned} \end{aligned}$$
(2.5)

exist and follow the law \(\mathcal {Q}^{(i_{0})}_{\beta }\) of the t-field in (1.22).

Proof

For the exchangeable timescale, Sabot, Tarrès and Zeng [31, Theorem 2] provide a proof. The statement for the usual (linearly reinforced) VRJP then follows by the time change formula for local times (2.3). \(\square \)

Considering the VRJP as a random walk in random environment enables us to study its local time properties with the tools of random conductance networks. For a t-field \(\textbf{T} = (T_x)_{x\in V}\) pinned at \(i_{0}\), we refer to the collection of random edge weights (or conductances)

$$\begin{aligned} \{\beta _{ij}e^{T_{i} + T_{j}}\}_{ij \in E} \end{aligned}$$
(2.6)

as the t-field environment. This should be thought of as a symmetrised version of the VRJP’s random environment (2.4). It is easier to study a random walk with symmetric jump rates, since it is amenable to the methods of conductance networks. The following lemma relates local times in the t-field environment with the local times in the environment of the exchangeable timescale VRJP:

Lemma 2.4

Consider the setting of Theorem 2.2. Let \((\tilde{X}_{t})_{t\ge 0}\) and \((Y_{t})_{t\ge 0}\) denote two continuous-time Markov jump processes started from \(i_{0}\) with rates given by (2.4) and (2.6), respectively. We write \(\tilde{L}_{t}^{x}\) and \(l_{t}^{x}\) for their respective local time fields. Let \(B \subseteq V\) and write \(\tilde{\mathcal {T}}_{B}\) and \(\mathcal {T}_{B}\) for the respective hitting times of B. Then

$$\begin{aligned} L_{\tilde{\mathcal {T}}_{B}}^{x} {\mathop {=}\limits ^{\tiny \text {law}}} 2 e^{T_{x}} l_{\mathcal {T}_{B}}^{x}, \end{aligned}$$
(2.7)

for \(x \in V\). In particular, \(L_{\tilde{\mathcal {T}}_{B}}^{i_{0}} {\mathop {=}\limits ^{\tiny \text {law}}} 2 l_{\mathcal {T}_{B}}^{i_{0}}\).

Proof

The embedded discrete-time jump chains associated with \((\tilde{X}_{t})_{t\ge 0}\) and \((Y_{t})_{t\ge 0}\) agree, since in both cases the jump probability from a vertex i to a neighbour j is proportional to \(\beta _{ij}e^{T_{j}}\). In particular, they both visit a vertex x the same number of times, before hitting B. Every time \(\tilde{X}_{t}\) visits the vertex x, it spends an \(\textrm{Exp}(\sum _{y}\tfrac{1}{2}\beta _{xy}e^{T_{y} - T_{x}})\)-distributed time there, before jumping to another vertex. \(Y_{t}\) on the other hand will spend time distributed as \(\textrm{Exp}(\sum _{y}\beta _{xy}e^{T_{x} + T_{y}}) = \tfrac{1}{2} e^{-2T_{x}} \textrm{Exp}(\sum _{y}\tfrac{1}{2}\beta _{xy}e^{T_{y} - T_{x}})\). This concludes the proof. \(\square \)

2.3 Effective conductance

Our approach to proving Theorem 1.2 will rely on establishing asymptotics for the effective conductance in the t-field environment (Theorem 3.1).

Definition 2.5

Consider a locally finite graph \(G = (V,E)\) with edge weights (or conductances) \(\{w_{ij}\}_{ij \in E}\). For two disjoint sets \(A,B \subseteq V\), the effective conductance between them is defined as

$$\begin{aligned} C^{\textrm{eff}}(A,B) :=\inf \limits _{\begin{array}{c} U:V\rightarrow \mathbb {R}\\ U\vert _{A} \equiv 0,\, U\vert _{B} \equiv 1 \end{array}} \sum _{ij \in E} w_{ij}\, (U(i) - U(j))^{2}. \end{aligned}$$
(2.8)

The variational definition (2.8) makes it easy to deduce monotonicity and boundedness properties:

Lemma 2.6

Consider the situation of Definition 2.5. Suppose \(S \subseteq E\) is an edge-cutset separating A and B. Then

$$\begin{aligned} C^{\textrm{eff}}(A,B) \le \sum _{ij \in S} w_{ij}. \end{aligned}$$
(2.9)

Alternatively, suppose \(C \subseteq V\) is a vertex-cutset separating A and B. Then

$$\begin{aligned} C^{\textrm{eff}}(A,B) \le C^{\textrm{eff}}(A,C). \end{aligned}$$
(2.10)

Proof

For the first statement, consider (2.8) for the function \(U:V \rightarrow \mathbb {R}\) that is constant zero (resp. one) in the component of A (resp. B) in \(V{\setminus }S\). For the second statement, note that for any function \(U :V \rightarrow \mathbb {R}\) with \(U\vert _{A} \equiv 0\) and \(U\vert _{C} \equiv 1\) we can define a function \(\tilde{U}\) that agrees with U on C and the connected component of \(V\setminus C\) containing A, and is constant equal to one on the component of B in \(V\setminus C\). Then, \(\tilde{U}\vert _{A} \equiv 0\) and \(\tilde{U}\vert _{B} \equiv 1\) and \(\sum _{ij\in E} w_{ij} (\tilde{U}(i) - \tilde{U}(j))^{2} \le \sum _{ij\in E} w_{ij} (U(i) - U(j))^{2}\), which proves the claim. \(\square \)

The monotonicity in (2.10) makes it possible to define an effective conductance to infinity. For an increasing exhaustion \(V_{1} \subseteq V_{2} \subseteq \cdots \) of the vertex set \(V = \bigcup _{n}V_{n}\) and a given finite set \(A\subseteq V\), we define the effective conductance from A to infinity by

$$\begin{aligned} C^{\textrm{eff}}_{\infty }(A) = \lim _{n\rightarrow \infty } C^{\textrm{eff}}(A,V\setminus V_{n}). \end{aligned}$$
(2.11)

One may check that this is independent from the choice of exhaustion. For us, the main use of effective conductances stems from their relation to escape times:

Lemma 2.7

Consider a locally finite graph \(G = (V,E)\) with edge weights (or conductances) \(\{w_{ij}\}_{ij \in E}\). Let \(C^{\textrm{eff}}(i_{0},B)\) denote the effective conductance between the singleton \(\{i_{0}\}\) and a disjoint set B. Consider a continuous-time random walk \((X_{t})_{t\ge 0}\) on G, starting at \(X_{0} = i_{0}\) and jumping from \(X_{t} = i\) to j at rate \(w_{ij}\). Let \(L_{\textrm{esc}}(i_{0},B)\) denote the total time the walk spends at \(i_{0}\) before visiting B for the first time. Then \(L_{\textrm{esc}}(i_{0},B)\) is distributed as an \(\textrm{Exp}(1/C^{\textrm{eff}}(i_{0},B))\)-random variable.

For an infinite graph G, the above conclusions also hold for B “at infinity”: We let \(L_{\textrm{esc},\infty }(i_{0})\) denote the total time spent at \(i_{0}\) and understand \(C_{\infty }^{\textrm{eff}}(i_{0})\) as in (2.11). Then \(L_{\textrm{esc},\infty }(i_{0}) \sim \textrm{Exp}(1/C^{\textrm{eff}}_{\infty }(i_{0}))\).

Proof

According to [6, Sect. 2.2], the walk’s number of visits at \(i_{0}\) before hitting B is a geometric random variable \(N\sim \textrm{Geo}(p_{\textrm{esc}})\) with the escape probability \(p_{\textrm{esc}} = C^{\textrm{eff}}(i_{0},B)/(\sum _{j\sim i_{0}} w_{i_{0}j})\). Moreover, for the continuous-time process, every time we visit \(i_{0}\) we spend an \(\textrm{Exp}(\sum _{j\sim i_{0}} w_{i_{0}j})\)-distributed time there, before jumping to a neighbour. Hence, \(L_{\textrm{esc}}(i_{0},B)\) is distributed as the sum of N independent \(\textrm{Exp}(\sum _{j\sim i_{0}} w_{i_{0}j})\)-distributed random variables. By standard results for the exponential distribution (easily checked via its moment-generating function), this implies the claim. Note that this argument also holds true for B “at infinity”, in which case \(N\sim \textrm{Geo}(p_{\textrm{esc}})\) with \(p_{\textrm{esc}} = C^{\textrm{eff}}_{\infty }(i_{0})/(\sum _{j\sim i_{0}} w_{i_{0}j})\) will simply denote the total number of visits at \(i_{0}\) (see [6, Sect. 2.2] for more details). \(\square \)
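As a concrete illustration of Definition 2.5 and Lemma 2.7 (ours, not from the paper), the sketch below computes \(C^{\textrm{eff}}(i_{0},B)\) on a small conductance network by solving the discrete Dirichlet problem for the minimiser of (2.8), and checks by simulation that the mean total time the walk spends at \(i_{0}\) before hitting B is \(1/C^{\textrm{eff}}(i_{0},B)\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductance network: a 4-cycle with one chord; i0 = 0, B = {2}.
n, i0, B = 4, 0, {2}
w = np.zeros((n, n))
for i, j, c in [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (3, 0, 0.5), (1, 3, 1.0)]:
    w[i, j] = w[j, i] = c

# The minimiser of (2.8) is harmonic off {i0} u B, with U(i0) = 0 and U = 1 on B.
Lap = np.diag(w.sum(axis=1)) - w
fixed = [i0] + sorted(B)
free = [v for v in range(n) if v not in fixed]
U = np.zeros(n)
U[sorted(B)] = 1.0
U[free] = np.linalg.solve(Lap[np.ix_(free, free)], -Lap[np.ix_(free, fixed)] @ U[fixed])
C_eff = 0.5 * (w * (U[:, None] - U[None, :]) ** 2).sum()    # Dirichlet energy of the minimiser

# Monte-Carlo check of Lemma 2.7: the mean time spent at i0 before hitting B is 1/C_eff.
def time_at_origin_before_B():
    x, time_at_i0 = i0, 0.0
    while x not in B:
        rates = w[x]
        total = rates.sum()
        if x == i0:
            time_at_i0 += rng.exponential(1.0 / total)      # exponential holding time, rate = total
        x = rng.choice(n, p=rates / total)
    return time_at_i0

samples = [time_at_origin_before_B() for _ in range(20000)]
print(C_eff, 1.0 / np.mean(samples))                        # the two numbers should be close
```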

2.4 The t-field from the \(\mathbb {H}^{2|2}\)- and STZ-Anderson model

t-Field as a Horospherical Marginal of the \(\mathbb {H}^{2|2}\)-Model. First we introduce horospherical coordinates on \(\mathbb {H}^{2|2}\). In these coordinates, \(\textbf{u} \in \mathbb {H}^{2|2}\) is parametrised by \((t,s,\bar{\psi },\psi )\), with \(t,s \in \mathbb {R}\) and Grassmann variables \(\bar{\psi },\psi \) via

$$\begin{aligned} \left( \begin{array}{c} z\\ x\\ y\\ \xi \\ \eta \end{array}\right) = \left( \begin{array}{c}\cosh (t) + e^{t} (\tfrac{1}{2} s^{2} + \bar{\psi }\psi ) \\ \sinh (t) - e^{t} (\tfrac{1}{2} s^{2} + \bar{\psi }\psi ) \\ e^{t}s \\ e^{t}\bar{\psi } \\ e^{t}\psi \end{array}\right) . \end{aligned}$$
(2.12)

A particular consequence of this is that \(e^{t} = z + x\). By rewriting the Gibbs measure for the \(\mathbb {H}^{2|2}\)-model, defined in (1.10), in terms of horospherical coordinates and integrating out the fermionic variables \(\psi , \bar{\psi }\), one obtains a marginal density in \(\underline{t} = \{t_{x}\}_{x\in V}\) and \(\underline{s} = \{s_{x}\}_{x\in V}\), which can be interpreted probabilistically:

Lemma 2.8

(Horospherical Marginal of the \(\mathbb {H}^{2|2}\)-Model [4, 10, 11]). Consider a finite graph \(G = (V,E)\), a vertex \(i_{0} \in V\), and non-negative edge-weights \((\beta _{ij})_{ij\in E}\). There exist random variables \(\underline{T} = \{T_{x}\}_{x\in V} \in \mathbb {R}^{V}\) and \(\underline{S} = \{S_{x}\}_{x \in V} \in \mathbb {R}^{V}\), such that for any \(F \in C^{\infty }_{\textrm{c}}(\mathbb {R}^{V}\times \mathbb {R}^{V})\)

$$\begin{aligned} \langle F(\underline{t}, \underline{s}) \rangle _{\beta } = \mathbb {E}[F(\underline{T}, \underline{S})]. \end{aligned}$$
(2.13)

The law of \(\underline{T}\) is given by the t-field pinned at \(i_{0}\) (see Definition 1.7). Moreover, conditionally on \(\underline{T}\), the s-field follows the law of a Gaussian free field in conductances \(\{\beta _{ij}e^{T_{i} + T_{j}}\}_{ij \in E}\), pinned at \(i_{0}\), \(S_{i_{0}} = 0\).

t-Field and the STZ-Anderson Model. It turns out that the (zero-energy) Green’s function of the STZ-Anderson model is directly related to the t-field:

Proposition 2.9

[31] For \(H_{B}\) denoting the STZ-Anderson model as in Definition 1.8, define the Green’s function \(G_{B}(i,j) = [H_{B}^{-1}]_{i,j}\). For a vertex \(i_{0} \in V\), define \(\{T_{i}\}_{i\in V}\) via

$$\begin{aligned} e^{T_{i}} :=G_{B}(i_{0}, i)/G_{B}(i_{0},i_{0}). \end{aligned}$$
(2.14)

Then \(\{T_{i}\}\) follows the law \(\mathcal {Q}_{\beta }^{(i_{0})}\) of the t-field, pinned at \(i_{0}\). Moreover, with \(\{T_{i}\}\) as above we have \(B_{i} = \sum _{j\sim i}\beta _{ij}e^{T_{j} - T_{i}}\) for all \(i \in V\setminus \{i_{0}\}\).

This provides a way of coupling the STZ-field with the t-field, as well as a coupling of t-fields pinned at different vertices.

Remark 2.10

(Natural Coupling). Lemma 2.8 and Proposition 2.9 give us a way to define a natural coupling of STZ-field, t-field and s-field as follows: Fix some pinning vertex \(i_{0} \in V\). Sample an STZ-Anderson model \(H_{B}\) with respect to edge weights \(\{\beta _{ij}\}_{ij \in E}\). Then define the t-field \(\{T_{i}\}_{i\in V}\), pinned at \(i_{0}\) via (2.14). Then, conditionally on the t-field, sample the s-field \(\{S_{i}\}_{i\in V}\) as a Gaussian free field in conductances \(\{\beta _{ij}e^{T_{i} + T_{j}}\}_{ij \in E}\), pinned at \(i_{0}\), \(S_{i_{0}} = 0\).

2.5 Monotonicity properties of the t-field

A rather surprising property of the t-field, proved by the first author, is the monotonicity of various expectation values with respect to the edge-weights. The following is a restatement of [8, Theorem 6] after applying Proposition 2.9:

Theorem 2.11

([8, Theorem 6]). Consider a finite graph \(G = (V,E)\) and fix some vertex \(i_{0} \in V\). Under \(\mathbb {E}_{\pmb {\beta }}\), we let \(\textbf{T} = \{T_{i}\}_{i\in V}\) denote a t-field pinned at \(i_{0}\) with respect to non-negative edge weights \(\pmb {\beta } = \{\beta _{e}\}_{e\in E}\). Then, for any convex \(f:\left[ 0,\infty \right) \rightarrow \mathbb {R}\) and non-negative \(\{\lambda _{i}\}_{i\in V}\), the map

$$\begin{aligned} \textstyle \pmb {\beta } \mapsto \mathbb {E}_{\pmb {\beta }}[f(\sum _{i}\lambda _{i} e^{T_{i}})] \end{aligned}$$
(2.15)

is decreasing.

A direct corollary of the above is that expectations of the form \(\mathbb {E}_{\beta }[e^{\eta T_{x}}]\) are increasing in \(\beta \) for \(\eta \in [0,1]\) and are decreasing for \(\eta \ge 1\). This will be the extent to which we make use of the result.

2.6 The t-field on \(\mathbb {T}_{d}\)

Consider the t-field measure (1.22) on \(\mathbb {T}_{d,n} = (V_{d,n}, E_{d,n})\), the rooted \((d+1)\)-regular tree of depth n, pinned at the root \(i_{0} = 0\). Only one term contributes to the determinantal term (1.23), namely the term corresponding to \(\mathbb {T}_{d,n}\) itself, oriented away from the root:

$$\begin{aligned} \mathcal {Q}_{\beta ; \mathbb {T}_{d,n}}^{(0)}(\text {d}{\textbf{t}}) = e^{-\sum _{(i,j) \in \vec{E}_{d,n}}[\beta \, (\cosh (t_{j} - t_{i}) - 1) + \tfrac{1}{2} (t_{j} - t_{i})]} \delta (t_{0}) \prod _{i \in V_{d,n} {\setminus }\{0\}} \frac{\text {d}{t}_{i}}{\sqrt{2\pi /\beta }}, \end{aligned}$$
(2.16)

where \(\vec{E}_{d,n}\) is the set of edges in \(\mathbb {T}_{d,n}\) oriented away from the root. In other words, the increments of the t-field along outgoing edges are i.i.d. and distributed according to the following:

Definition 2.12

(t-field Increment Measure). For \(\beta > 0\) define the probability distribution

$$\begin{aligned} \mathcal {Q}^{\textrm{inc}}_{\beta }(\text {d}{t}) = e^{-\beta [\cosh (t) - 1] - t/2} \frac{\text {d}{t}}{\sqrt{2\pi /\beta }} \quad \text {with} \quad t \in \mathbb {R}. \end{aligned}$$
(2.17)

We refer to this as the t-field increment distribution and if not specified otherwise, T will always denote a random variable with distribution \(\mathcal {Q}^{\textrm{inc}}_{\beta }\). The dependence on \(\beta \) is either implicit or denoted by a subscript, such as in \(\mathbb {E}_{\beta }\) or \(\mathbb {P}_{\beta }\).

The density (2.17) implies that

$$\begin{aligned} e^{T} \sim \textrm{IG}(1,\beta ) \quad \text {and} \quad e^{-T} \sim \textrm{RIG}(1,\beta ), \end{aligned}$$
(2.18)

where IG (RIG) denotes the (reciprocal) inverse Gaussian distribution (cf. (A.4)). Note that changing variables to \(t\mapsto e^{t}\) and comparing to the density of the inverse Gaussian, we see that (2.17) is normalised.
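The identification (2.18) is easy to check numerically. The following sketch (our illustration) integrates the density (2.17) and compares its total mass and first two exponential moments with the inverse-Gaussian values \(1\), \(\mathbb {E}[X] = 1\) and \(\mathbb {E}[X^{2}] = 1 + 1/\beta \) for \(X \sim \textrm{IG}(1,\beta )\).

```python
import numpy as np
from scipy.integrate import quad

def inc_density(t, beta):
    """t-field increment density (2.17)."""
    return np.sqrt(beta / (2 * np.pi)) * np.exp(-beta * (np.cosh(t) - 1) - t / 2)

for beta in (0.5, 1.0, 3.0):
    mass = quad(lambda t: inc_density(t, beta), -np.inf, np.inf)[0]
    m1 = quad(lambda t: np.exp(t) * inc_density(t, beta), -np.inf, np.inf)[0]
    m2 = quad(lambda t: np.exp(2 * t) * inc_density(t, beta), -np.inf, np.inf)[0]
    # For e^T ~ IG(1, beta) one expects: mass = 1, E[e^T] = 1, E[e^{2T}] = 1 + 1/beta.
    print(beta, mass, m1, m2, 1 + 1 / beta)
```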

Definition 2.13

(Free Infinite Volume t-field on \(\mathbb {T}_{d}\)). For \(\beta > 0\), associate to every edge e of the infinite rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) a t-field increment \(\tilde{T}_{e}\), distributed according to (2.17). For every vertex \(x \in \mathbb {T}_{d}\) let \(\gamma _{x}\) denote the unique self-avoiding path from 0 to x and define \(T_{x} :=\sum _{e\in \gamma _{x}} \tilde{T}_{e}\). The random field \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\) is the free infinite volume t-field on \(\mathbb {T}_{d}\) at inverse temperature \(\beta > 0\). In particular, its restriction \(\{T_{x}\}_{x \in \mathbb {T}_{d,n}}\) onto vertices up to generation n follows the law \(\mathcal {Q}^{(0)}_{\beta ;\mathbb {T}_{d,n}}\).

By construction, \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\) can be considered a branching random walk (BRW) with a deterministic number of offspring (every particle gives rise to d new particles in the next generation). In Sect. 2.8 we will elaborate on this perspective.
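A minimal sampler for the free infinite-volume t-field of Definition 2.13 (our illustration): increments are drawn via \(e^{T} \sim \textrm{IG}(1,\beta )\), for which we use scipy's invgauss(1/beta, scale=beta) parametrisation of the mean-one, shape-\(\beta \) inverse Gaussian (an assumption about scipy's conventions that can be re-checked against (2.17)), and the BRW positions are built up level by level.

```python
import numpy as np
from scipy.stats import invgauss

def sample_t_field(d, beta, n, rng):
    """Free infinite-volume t-field on T_d, restricted to generations 0..n.

    Returns a list 'levels' where levels[k] holds the t-values of the d**k
    vertices at generation k, viewed as the positions of a BRW (Definition 2.13).
    """
    levels = [np.zeros(1)]                               # the root is pinned at t = 0
    for _ in range(n):
        parents = np.repeat(levels[-1], d)               # every particle has exactly d children
        # increments: e^T ~ IG(mean 1, shape beta), i.e. invgauss(1/beta, scale=beta) in scipy
        incs = np.log(invgauss.rvs(1.0 / beta, scale=beta, size=parents.size, random_state=rng))
        levels.append(parents + incs)
    return levels

rng = np.random.default_rng(2)
levels = sample_t_field(d=2, beta=1.0, n=12, rng=rng)
# Rescaled position of the left-most particle per generation (cf. the BRW velocity in Sect. 2.8):
print([round(lv.min() / k, 3) for k, lv in enumerate(levels) if k > 0])
```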

2.7 Previous results for VRJP on trees

As we have already noted in the introduction, the VRJP on tree graphs has received quite some attention [15,16,17,18,19]. In particular, Basdevant and Singh [17] studied the VRJP on Galton–Watson trees with general offspring distribution, and exactly located the recurrence/transience phase transition:

Proposition 2.14

(Basdevant-Singh [17]). Let \(\mathcal {T}\) denote a Galton–Watson tree with mean offspring \(b > 1\). Consider the VRJP started from the root of \(\mathcal {T}\), conditionally on non-extinction of the tree. There exists a critical parameter \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(b)\), such that the VRJP is

  • recurrent for \(\beta \le \beta _\textrm{c}\),

  • transient for \(\beta > \beta _\textrm{c}\).

Moreover, \(\beta _{\textrm{c}}\) is characterised as the unique positive solution to

$$\begin{aligned} \frac{1}{b} = \sqrt{\frac{\beta _{\textrm{c}}}{2\pi }}\int _{-\infty }^{+\infty }\text {d}{t} e^{-\beta _{\textrm{c}} (\cosh (t) - 1)}. \end{aligned}$$
(2.19)
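For the regular tree (taking \(b = d\)), the characterising equation (2.19) is easily solved numerically. The sketch below (ours) rewrites the integral via the classical identity \(\int _{-\infty }^{\infty } e^{-\beta \cosh (t)}\,\text {d}t = 2K_{0}(\beta )\) for the modified Bessel function and applies a standard root-finder.

```python
import numpy as np
from scipy.special import k0
from scipy.optimize import brentq

def F(beta, b):
    """Left-hand side minus right-hand side of (2.19); its zero in beta is beta_c(b)."""
    # sqrt(beta/(2 pi)) * int exp(-beta (cosh t - 1)) dt = sqrt(beta/(2 pi)) * 2 exp(beta) K_0(beta)
    return 1.0 / b - np.sqrt(beta / (2 * np.pi)) * 2.0 * np.exp(beta) * k0(beta)

for d in (2, 3, 4):
    beta_c = brentq(lambda beta: F(beta, d), 1e-8, 50.0)
    print(d, beta_c)
```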

We also take the opportunity to highlight Rapenne’s recent results [19] concerning the (sub)critical phase, \(\beta \le \beta _{\textrm{c}}\). His statements can be seen to complement our results, which focus on the supercritical phase \(\beta > \beta _{\textrm{c}}\).

2.8 Background on branching random walks

Let us quickly recall some basic results from the theory of branching random walks. For a more comprehensive treatment we refer to Shi’s monograph [58].

A branching random walk (BRW) with offspring distribution \(\mu \in \textrm{Prob}(\mathbb {N}_{0})\) and increment distribution \(\nu \) is constructed as follows: We start with a “root” particle \(x = 0\) at generation \(\left| 0\right| = 0\) and starting position \(V(0) = v_{0}\). We sample its number of offspring according to \(\mu \). They constitute the particles at generation one, \(\{\left| x\right| = 1\}\). Every such particle is assigned a position \(v_{0} + \delta V_{x}\) with \(\{\delta V_{x}\}_{\left| x\right| = 1}\) being i.i.d. according to the increment distribution \(\nu \). This process is repeated recursively and we end up with a random collection of particles \(\{x\}\), each equipped with a position \(V(x) \in \mathbb {R}\), a generation \(\left| x\right| \in \mathbb {N}_{0}\) and a history \(0=x_{0}, x_{1}, \ldots , x_{\left| x\right| } = x\) of predecessors. Unless otherwise stated, we assume from now on that a BRW always starts from the origin, \(v_{0} = 0\).

A particularly useful quantity for the study of BRWs is the \(\log \)-Laplace transform of the offspring process:

$$\begin{aligned} \psi (\eta ) :=\log \mathbb {E}\Big [ \sum _{\left| x\right| = 1} e^{- \eta V(x)} \Big ], \end{aligned}$$
(2.20)

where the sum goes over all particles in the first generation. A priori, we have \(\psi (\eta ) \in [0,\infty ]\), but we typically assume \(\psi (0) > 0\) and \(\inf _{\eta >0} \psi (\eta ) < \infty \). The first assumption corresponds to supercriticality of the offspring distribution (see Footnote 4), whereas the second assumption enables us to study the average over histories of the BRW in terms of a single random walk:

Proposition 2.15

(Many-To-One Formula). Consider a BRW with log-Laplace transform \(\psi (\eta )\). Choose \(\eta > 0\) such that \(\psi (\eta ) < \infty \) and define a random walk \(0 = S_{0}, S_{1}, \ldots \) with i.i.d. increments such that for any measurable \(h:\mathbb {R}\rightarrow \mathbb {R}\)

$$\begin{aligned} \textstyle \mathbb {E}[h(S_{1})] = \mathbb {E}\left[ \sum _{\left| x\right| = 1}e^{-\eta V(x)} h(V(x)) \right] \Big / \mathbb {E}\left[ \sum _{\left| x\right| = 1} e^{-\eta V(x)} \right] . \end{aligned}$$
(2.21)

Then, for all \(n\ge 1\) and \(g:\mathbb {R}^{n} \rightarrow \left[ 0,\infty \right) \) measurable we have

$$\begin{aligned} \textstyle \mathbb {E}\left[ \sum _{\left| x\right| = n} g(V(x_{1}), \ldots , V(x_{n}))\right] = \mathbb {E}\left[ e^{n\psi (\eta ) + \eta S_{n}} g(S_{1}, \ldots , S_{n}) \right] . \end{aligned}$$
(2.22)

For a proof we refer to Shi’s lecture notes [58, Theorem 1.1]. An application of the many-to-one formula is the following statement about the velocity of extremal particles (cf. [58, Theorem 1.3]).

Proposition 2.16

(Asymptotic Velocity of Extremal Particles). Suppose \(\psi (0) > 0\) and \(\inf \limits _{\eta > 0} \psi (\eta ) < \infty \). Then, almost surely under the event of non-extinction, we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n} \inf _{\left| x\right| = n} V(x) = -\inf _{\eta > 0} \psi (\eta )/\eta . \end{aligned}$$
(2.23)
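The many-to-one formula is easy to test numerically. The following sketch (our illustration only, not part of any argument) uses Gaussian increments \(V \sim \mathcal {N}(m, \sigma ^{2})\) and a deterministic number d of offspring, so that \(\psi (1) = \log d - m + \sigma ^{2}/2\) and the tilted walk defined by (2.21) has \(\mathcal {N}(m - \sigma ^{2}, \sigma ^{2})\) increments; it compares both sides of (2.22) for \(g = \mathbbm {1}\{\text {all positions along the path stay below a level } a\}\).

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, sigma, n, a = 2, 0.5, 1.0, 6, 1.5
psi1 = np.log(d) - m + 0.5 * sigma**2          # psi(1) = log(d E[e^{-V}])

def lhs(num_trees=20000):
    # Left-hand side of (2.22): average over BRW realisations of the number of
    # generation-n particles whose path (V(x_1),...,V(x_n)) stays <= a.
    total = 0.0
    for _ in range(num_trees):
        paths = np.zeros(1)                    # positions of the current particles
        alive = np.ones(1, dtype=bool)         # whether the whole path stayed <= a
        for _ in range(n):
            paths = np.repeat(paths, d) + rng.normal(m, sigma, size=d * len(paths))
            alive = np.repeat(alive, d) & (paths <= a)
        total += alive.sum()
    return total / num_trees

def rhs(num_walks=200000):
    # Right-hand side of (2.22): a single random walk with the tilted increments
    # N(m - sigma^2, sigma^2), reweighted by exp(n psi(1) + S_n).
    steps = rng.normal(m - sigma**2, sigma, size=(num_walks, n))
    S = np.cumsum(steps, axis=1)
    stays = (S <= a).all(axis=1)
    return np.mean(np.exp(n * psi1 + S[:, -1]) * stays)

print(lhs(), rhs())
```

Both estimates should agree up to Monte Carlo error; discrepancies mainly reflect the variance of the exponential reweighting on the right-hand side.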

Critical Branching Random Walks. A common assumption, under which BRWs exhibit various universal properties, is \(\psi (1) = \psi '(1) = 0\). While not common terminology in the literature, we will refer to this as criticality:

$$\begin{aligned} \textstyle \text {BRW with } \psi (\eta ) = \log \mathbb {E}[\sum _{\left| x\right| = 1} e^{-\eta V(x)}] \text { is critical } \quad \overset{\tiny \text {def}}{\Longleftrightarrow }\quad \psi (1) = \psi '(1) = 0 \nonumber \\ \end{aligned}$$
(2.24)

This definition can be motivated by considering the many-to-one formula (Proposition 2.15) applied to a critical BRW for \(\eta = 1\): In that case, the random walk \(S_{i}\) has mean zero increments, \(\mathbb {E}[S_{1}] = -\psi '(1) = 0\), and the exponential drift in (2.22) vanishes, \(e^{n\psi (1)} = 1\). Consequently, as far as the many-to-one formula is concerned, critical BRWs inherit some of the universality of mean zero random walks (e.g. Donsker’s theorem, say under an additional second moment assumption). Moreover, the notion of criticality is particularly useful, since in many cases we can reduce a BRW to the critical case by a simple rescaling/drift transformation:

Lemma 2.17

(Critical Rescaling of a BRW). Consider a BRW with log-Laplace transform \(\psi (\eta ) = \log \mathbb {E}[\sum _{\left| x\right| = 1} e^{-\eta V(x)}]\). Suppose there exists \(\eta ^{*} > 0\) solving the equation

$$\begin{aligned} \psi (\eta ^{*}) = \eta ^{*} \psi '(\eta ^{*}). \end{aligned}$$
(2.25)

Equivalently, \(\eta ^{*}\) is a critical point of \(\eta \mapsto \psi (\eta )/\eta \). Define a BRW with the same particles \(\{x\}\) and rescaled positions

$$\begin{aligned} V^{*}(x) = \eta ^{*} V(x) + \psi (\eta ^{*}) \left| x\right| . \end{aligned}$$
(2.26)

The resulting BRW is critical.

Proof

Write \(\psi ^{*}(\gamma ) = \log \mathbb {E}\sum _{\left| x\right| = 1} e^{-\gamma V^{*}(x)}\) for the log-Laplace transform of the rescaled BRW. We easily check

$$\begin{aligned} \begin{aligned} \psi ^{*}(1) = \log \mathbb {E}\sum _{\left| x\right| =1}e^{-\eta ^{*}V(x) - \psi (\eta ^{*})}&= -\psi (\eta ^{*}) + \log \mathbb {E}\sum _{\left| x\right| =1}e^{-\eta ^{*}V(x)}\\&= -\psi (\eta ^{*}) + \psi (\eta ^{*}) = 0. \end{aligned} \end{aligned}$$
(2.27)

Equivalently, \(1 = \mathbb {E}\sum _{\left| x\right| =1}e^{-\eta ^{*}V(x) - \psi (\eta ^{*})}\), which together with (2.25) yields

$$\begin{aligned} \begin{aligned} (\psi ^{*})'(1)&= -\frac{\mathbb {E}\sum _{\left| x\right| =1}(\eta ^{*}V(x) + \psi (\eta ^{*}))e^{-\eta ^{*}V(x) - \psi (\eta ^{*})}}{\mathbb {E}\sum _{\left| x\right| =1}e^{-\eta ^{*}V(x) - \psi (\eta ^{*})}}\\&= -\eta ^{*} \mathbb {E}\sum _{\left| x\right| =1}V(x)e^{-\eta ^{*} V(x) - \psi (\eta ^{*})} - \psi (\eta ^{*})\\&= \eta ^{*}\psi '(\eta ^{*}) - \psi (\eta ^{*}) = 0, \end{aligned} \end{aligned}$$
(2.28)

which concludes the proof. \(\square \)
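As a toy illustration of Lemma 2.17 (ours, not the t-field): for a BRW with d offspring and Gaussian increments \(V\sim \mathcal {N}(m,s^{2})\) one has \(\psi (\eta ) = \log d - \eta m + \eta ^{2}s^{2}/2\), and (2.25) can be solved in closed form, \(\eta ^{*} = \sqrt{2\log d}/s\). The sketch below recovers \(\eta ^{*}\) by root-finding on \(\eta \mapsto \eta \psi '(\eta ) - \psi (\eta )\) and checks numerically that the rescaled transform satisfies \(\psi ^{*}(1) = (\psi ^{*})'(1) = 0\).

```python
import numpy as np
from scipy.optimize import brentq

d, m, s = 2, 1.3, 0.7                        # toy parameters (not the t-field)

def psi(eta):
    # log-Laplace transform of a BRW with d offspring and N(m, s^2) increments
    return np.log(d) - eta * m + 0.5 * eta**2 * s**2

def dpsi(eta):
    return -m + eta * s**2

# eta* solves eta * psi'(eta) = psi(eta), cf. (2.25); closed form sqrt(2 log d)/s
eta_star = brentq(lambda e: e * dpsi(e) - psi(e), 1e-6, 50.0)
print(eta_star, np.sqrt(2 * np.log(d)) / s)

def psi_rescaled(gamma):
    # log-Laplace transform of V*(x) = eta* V(x) + psi(eta*) |x|, cf. (2.26)
    return np.log(d) - gamma * psi(eta_star) - gamma * eta_star * m \
           + 0.5 * gamma**2 * eta_star**2 * s**2

h = 1e-6
print(psi_rescaled(1.0))                                          # ~ 0
print((psi_rescaled(1.0 + h) - psi_rescaled(1.0 - h)) / (2 * h))  # ~ 0
```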

3 VRJP and the t-Field as \(\beta \searrow \beta _{\textrm{c}}\)

The main goal of this section is to prove Theorem 1.2 on the asymptotic escape time of the VRJP as \(\beta \searrow \beta _{\textrm{c}}\). The main work will be in establishing the following result on the effective conductance in a t-field environment:

Theorem 3.1

(Near-Critical Effective Conductance). Let \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\) denote the (free) t-field on \(\mathbb {T}_{d}\), pinned at the origin. Let \(C^{\textrm{eff}}_{\infty }\) denote the effective conductance from the origin to infinity in the network given by conductances \(\{\beta e^{T_{i} + T_{j}} \mathbbm {1}_{i\sim j}\}_{i,j\in \mathbb {T}_{d}}\). There exist constants \(c,C > 0\) such that

$$\begin{aligned} \exp [-(C+o(1))/\sqrt{\epsilon }] \le \mathbb {E}_{\beta _{\textrm{c}} + \epsilon }[C^{\textrm{eff}}_{\infty }] \le \exp [-(c+o(1))/\sqrt{\epsilon }], \end{aligned}$$
(3.1)

as \(\epsilon \searrow 0\), where \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d) > 0\) is given by Proposition 2.14.

To establish this result, the BRW perspective on the t-field is essential. The lower bound will follow from a mild modification of a result by Gantert, Hu and Shi [59] (see Theorem 3.8). For the upper bound we will consider the critical rescaling of the near-critical t-field (cf. Lemma 2.17). The bound will then follow by a perturbative argument applied to a result on effective conductances in a critical BRW environment. We prove the latter in a more general form, for which it is convenient to introduce some additional notions.

For a random variable V and a fixed offspring degree d we write

$$\begin{aligned} \psi _{V}(\eta ) :=\log (d\, \mathbb {E}[e^{-\eta V}]). \end{aligned}$$
(3.2)

Analogous to Definition 2.13, for an increment distribution given by V, we define a random field \(\{V_{x}\}_{x\in \mathbb {T}_{d}}\) and refer to it as the BRW with increments V. We say that V is a critical increment if \(\{V_{x}\}_{x \in \mathbb {T}_{d}}\) is critical, i.e. \(\psi _{V}(1) = \psi _{V}'(1) = 0\). Note that this implicitly depends on our choice of \(d \ge 2\), but we choose to suppress this dependency. For a critical increment V we write

$$\begin{aligned} \sigma _{V}^{2} :=\psi ''_{V}(1) = d\,\mathbb {E}[V^{2} e^{-V}]. \end{aligned}$$
(3.3)

Note that this is the variance of the (mean-zero) increments of the random walk \((S_{i})_{i\ge 0}\) given by the many-to-one formula (Proposition 2.15 for \(\eta = 1\)).

Theorem 3.2

Fix some offspring degree \(d \ge 2\) and consider a critical increment V with \(\sigma _{V}^{2} < \infty \) and \(\psi _{V}(1+2a) < \infty \) for some constant \(a > 0\). Write \(\{V_{x}\}_{x \in \mathbb {T}_{d}}\) for the BRW with increments V and define the conductances \(\{e^{-\gamma (V_{x} + V_{y})}\}_{xy}\). Let \(C_{n,\gamma }^{\textrm{eff}}\) denote the effective conductance between the origin 0 and the vertices in the n-th generation. Then, for \(\gamma \in (1/2, 1/2 + a)\), we have

$$\begin{aligned} \mathbb {E}[C_{n,\gamma }^{\textrm{eff}}] \le \exp \Big [-\big [ \min (\tfrac{1}{4},\gamma - \tfrac{1}{2})\, (\pi ^{2}\sigma _{V}^{2})^{1/3} +o(1)\big ]n^{1/3}\Big ] \quad \text {as} \quad n\rightarrow \infty . \end{aligned}$$
(3.4)

Moreover, this is uniform with respect to \(\gamma \), \(\sigma _{V}^{2}\) and \(\psi _{V}(1+2a)\) in the following sense: Suppose there is a family \(V^{(k)}\), \(k\in \mathbb {N}\), of critical increments and define \(C^{\textrm{eff}}_{n,\gamma ;k}\) as above. Further assume \(0< \inf _{k}\sigma ^{2}_{V^{(k)}} \le \sup _{k}\sigma ^{2}_{V^{(k)}} < \infty \) and \(\sup _{k} \psi _{V^{(k)}}(1+2a) < \infty \). Then we have

$$\begin{aligned} \limsup _{n\rightarrow \infty }\, \sup _{k} \sup _{\frac{1}{2}< \gamma < \frac{1}{2} + a}\, \Bigg ( n^{-1/3}\log \mathbb {E}[C^{\textrm{eff}}_{n,\gamma ;k}] + \min (\tfrac{1}{4},\gamma - \tfrac{1}{2})\, (\pi ^{2}\sigma _{V^{(k)}}^{2})^{1/3}\Bigg ) \le 0. \nonumber \\ \end{aligned}$$
(3.5)

We note that random walks in (critical) multiplicative environments on trees have previously been studied, see for example [60,61,62,63,64,65]. In particular, Hu and Shi [63, Theorem 2.1] established bounds analogous to (3.4) for escape probabilities, instead of effective conductances. While the quantities are related, bounds on the expected escape probability do not directly translate into bounds for the expected effective conductance. Moreover, their setup for the random environment does not directly apply to our setting (see Footnote 5). Last but not least, for our applications, we require additional uniformity of the bounds with respect to the underlying BRW.
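A direct simulation gives a feel for the \(n^{1/3}\) scale in (3.4). The following sketch (a numerical illustration under our own choices, not used anywhere in the proofs) takes the critical Gaussian increment \(V\sim \mathcal {N}(2\log d,\, 2\log d)\) for offspring degree d (so that \(\psi _{V}(1) = \psi _{V}'(1) = 0\) and \(\sigma _{V}^{2} = 2\log d\)), builds the conductances \(e^{-\gamma (V_{x}+V_{y})}\) on \(\mathbb {T}_{d,n}\), and estimates \(\mathbb {E}[C^{\textrm{eff}}_{n,\gamma }]\) by the usual series/parallel recursion from the leaves to the root.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2
mu = sig2 = 2 * np.log(d)        # critical Gaussian increment: psi_V(1) = psi_V'(1) = 0
gamma = 0.8                      # conductance exponent, gamma in (1/2, 1/2 + a)

def effective_conductance(n):
    # Sample the BRW {V_x} on T_{d,n} and compute the effective conductance
    # between the root and generation n for conductances exp(-gamma (V_x + V_y)).
    V = [np.zeros(1)]
    for _ in range(n):
        V.append(np.repeat(V[-1], d) + rng.normal(mu, np.sqrt(sig2), d * len(V[-1])))
    C = np.full(d**n, np.inf)                   # conductance "beyond" a leaf
    for k in range(n - 1, -1, -1):
        edge = np.exp(-gamma * (np.repeat(V[k], d) + V[k + 1]))
        through = 1.0 / (1.0 / edge + 1.0 / C)  # edge in series with its subtree
        C = through.reshape(-1, d).sum(axis=1)  # siblings combine in parallel
    return C[0]

for n in (6, 9, 12):
    est = np.mean([effective_conductance(n) for _ in range(300)])
    rate = min(0.25, gamma - 0.5) * (np.pi**2 * sig2) ** (1 / 3)
    print(n, -np.log(est) / n ** (1 / 3), rate)   # compare with the constant in (3.4)
```

For accessible values of n the \(o(1)\) corrections in (3.4) are still substantial, so the comparison is only indicative.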

3.1 The t-field as a branching random walk

Fig. 4 Illustration of \(\psi _{\beta }(\eta )/\log (d)\) for \(d=2\) at different values of \(\beta \). Its minimum is always at \(\eta = 1/2\), and the value of this minimum is increasing with \(\beta \). It is equal to zero at \(\beta = \beta _{\textrm{c}}\)

Considered as a BRW, the t-field \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\) on the rooted \((d+1)\)-regular tree \(\mathbb {T}_{d}\) (or more precisely the negative t-field) has a log-Laplace transform given by

$$\begin{aligned} \psi _{\beta }(\eta ) :=\log \mathbb {E}[\sum _{\left| x\right| = 1} e^{\eta T_{x}}] = \log (d\, \mathbb {E}_{\beta }[e^{\eta T}]) \qquad (\eta > 0), \end{aligned}$$
(3.6)

where T denotes the t-field increment as introduced in Definition 2.12. One can check easily that \(\psi _{\beta }(0) = \psi _{\beta }(1) = \log d\). More generally, using the density for T we have

$$\begin{aligned} \psi _{\beta }(\eta ) = \log \Big (d \int \frac{\text {d}{t}}{\sqrt{2\pi /\beta }} e^{- \beta \, [\cosh (t) - 1] - (\tfrac{1}{2} - \eta )\, t}\Big ) = \log \Big (\frac{ d\sqrt{2\beta }e^{\beta } }{\sqrt{\pi }} K_{\eta - \tfrac{1}{2}}(\beta )\Big ) \nonumber \\ \end{aligned}$$
(3.7)

where \(K_{\alpha }\) denotes the modified Bessel function of the second kind. An illustration of \(\psi _{\beta }\) for different values of \(\beta \) is given in Fig. 4. In particular, \(\psi _{\beta }(\eta )\) is a smooth function of \(\beta ,\eta > 0\), and one may check that it is strictly convex in \(\eta \) since

$$\begin{aligned} \psi _{\beta }^{\prime \prime }(\eta ) = \frac{\mathbb {E}_{\beta }[T^{2}e^{\eta T}]}{\mathbb {E}_{\beta }[e^{\eta T}]} - \frac{\mathbb {E}_{\beta }[T e^{\eta T}]^{2}}{\mathbb {E}_{\beta }[e^{\eta T}]^{2}} > 0 \end{aligned}$$
(3.8)

equals the variance of a non-deterministic random variable. Moreover, by the symmetry and monotonicity properties of the Bessel function (\(K_{\alpha } = K_{-\alpha }\) and \(K_{\alpha } \le K_{\alpha '}\) for \(0 \le \alpha \le \alpha '\)), the infimum of \(\psi _{\beta }(\eta )\) is attained at \(\eta = 1/2\):

$$\begin{aligned} \inf _{\eta > 0} \psi _{\beta }(\eta ) = \psi _{\beta }(1/2) = \log (d \, \mathbb {E}_{\beta } [ e^{T/2}]) = \log (\frac{ \sqrt{2\beta }e^{\beta } d}{\sqrt{\pi }} K_{0}(\beta )) \end{aligned}$$
(3.9)

The critical inverse temperature \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d) > 0\), as given in Proposition 2.14, is equivalently characterised by the vanishing of this infimum:

$$\begin{aligned} \psi _{\beta _{\textrm{c}}}(1/2) = \inf _{\eta > 0} \psi _{\beta _{\textrm{c}}}(\eta ) = 0. \end{aligned}$$
(3.10)
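For concreteness, \(\beta _{\textrm{c}}\) can be evaluated numerically from (3.9)–(3.10); the following short Python sketch (an illustration only, not used in any argument) does this via the exponentially scaled Bessel function.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import kve          # kve(a, x) = exp(x) * K_a(x)

def psi_half(beta, d):
    # psi_beta(1/2) = log( d * sqrt(2*beta/pi) * exp(beta) * K_0(beta) ), cf. (3.9)
    return np.log(d * np.sqrt(2.0 * beta / np.pi) * kve(0.0, beta))

# beta_c is the unique root of psi_beta(1/2), cf. (3.10); the bracket is generous
for d in (2, 3, 4):
    beta_c = brentq(lambda b: psi_half(b, d), 1e-8, 100.0, xtol=1e-12)
    print(d, beta_c)
```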

In particular, by Lemma 2.17, the characterisation (3.10) implies that \(\{-\tfrac{1}{2} T_{x}\}_{x\in \mathbb {T}_{d}}\) is a critical BRW at \(\beta =\beta _{\textrm{c}}\). More generally, it will be useful to consider critical rescalings of \(\{T_{x}\}\) for general \(\beta > 0\). For this we write

$$\begin{aligned} \eta _{\beta } :=\textrm{argmin}_{\eta> 0} \frac{\psi _{\beta }(\eta )}{\eta } \quad \text {and} \quad \gamma _{\beta } :=\inf _{\eta > 0} \frac{\psi _{\beta }(\eta )}{\eta } = \frac{\psi _{\beta }(\eta _{\beta })}{\eta _{\beta }}. \end{aligned}$$
(3.11)

An illustration of these quantities is given in Fig. 5. If \(\eta _{\beta }\) as above is well-defined, then it satisfies (2.25) and hence by Lemma 2.17 the rescaled field

$$\begin{aligned} \tau ^{\beta }_{x} = - \eta _{\beta } T_{x} + \psi _{\beta }(\eta _{\beta }) \left| x\right| \end{aligned}$$
(3.12)

defines a critical BRW. The following lemma lends rigour to this:

Lemma 3.3

\(\eta _{\beta }\) as given in (3.11) is well-defined and the unique positive root of the strictly increasing map \(\eta \mapsto \eta \psi _{\beta }^{\prime }(\eta ) - \psi _{\beta }(\eta )\). Consequently, the maps \(\beta \mapsto \eta _{\beta }\) and \(\beta \mapsto \gamma _{\beta }\) are continuously differentiable.

Proof

Recall the Bessel function asymptotics \(K_{\alpha }(\beta ) \sim \tfrac{1}{2} (2/\beta )^{\alpha } \Gamma (\alpha )\) as \(\alpha \rightarrow \infty \), hence by (3.7) we have \(\psi _{\beta }(\eta ) \sim \eta \log \eta \) for \(\eta \rightarrow \infty \). Consequently, \(\psi _{\beta }(\eta )/\eta \) diverges as \(\eta \rightarrow \infty \) (and it also diverges as \(\eta \searrow 0\)). Hence it attains its infimum at some finite value. We claim that there is a unique minimiser \(\eta _{\beta }\). Since \(\psi _{\beta }(\eta )/\eta \) is continuously differentiable in \(\eta > 0\), at any minimum it will have vanishing derivative \(\partial _{\eta } (\psi _{\beta }(\eta )/\eta ) = [\eta \psi _{\beta }^{\prime }(\eta ) - \psi _{\beta }(\eta )]/\eta ^{2}\). And in fact the map \(\eta \mapsto \eta \psi _{\beta }^{\prime }(\eta ) - \psi _{\beta }(\eta )\) is strictly increasing, since its derivative equals \(\eta \psi _{\beta }^{\prime \prime }(\eta ) > 0\), see (3.8), and as such has at most one root. This implies that \(\eta _{\beta }\) as in (3.11) is well-defined and the unique root of \(\eta \psi _{\beta }^{\prime }(\eta ) - \psi _{\beta }(\eta )\).

Continuous differentiability of \(\beta \mapsto \eta _{\beta }\) follows from the implicit function theorem applied to \(f(\eta , \beta ) :=\eta \psi _{\beta }'(\eta ) - \psi _{\beta }(\eta )\), noting that \(\partial _{\eta }f(\eta ,\beta ) = \eta \psi _{\beta }^{\prime \prime }(\eta ) > 0\). This directly implies continuous differentiability of \(\beta \mapsto \gamma _{\beta } = \psi _{\beta }(\eta _{\beta })/\eta _{\beta }\). \(\square \)
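The quantities \(\eta _{\beta }\) and \(\gamma _{\beta }\) from (3.11) are equally easy to evaluate numerically via the Bessel representation (3.7); the following sketch (again only an illustration, with our own function names) is essentially how plots such as those in Fig. 5 below can be reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import kve          # kve(a, x) = exp(x) * K_a(x)

d = 2

def psi(beta, eta):
    # psi_beta(eta) = log( d * sqrt(2*beta/pi) * exp(beta) * K_{eta-1/2}(beta) ), cf. (3.7)
    return np.log(d * np.sqrt(2.0 * beta / np.pi) * kve(eta - 0.5, beta))

def eta_and_gamma(beta):
    # eta_beta minimises psi_beta(eta)/eta; gamma_beta is the minimal value, cf. (3.11)
    res = minimize_scalar(lambda e: psi(beta, e) / e,
                          bounds=(1e-3, 20.0), method="bounded")
    return res.x, res.fun

for beta in (0.05, 0.1, 0.5, 1.0, 2.0):
    eta_b, gamma_b = eta_and_gamma(beta)
    print(beta, eta_b, gamma_b / np.log(d))
```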

Fig. 5 Illustration of \(\eta _{\beta }\), \(\gamma _{\beta }/\log d\) and \(\psi _{\beta }(\eta )/(\eta \log d)\) for \(d=2\). For the figure on the left, note that \(\gamma _{\beta }\) is positive for \(\beta > \beta _{\textrm{c}}\) and attains its maximum at \(\beta _{\textrm{c}}^{\textrm{erg}}\), at the same point at which \(\eta _{\beta } = 1\). The right figure illustrates the same point: The minima of \(\psi _{\beta }(\eta )/\eta \) move to the right with increasing \(\beta \) and attain their highest value at \(\beta = \beta _{\textrm{c}}^{\textrm{erg}}\)

Considering the graphs in Fig. 5, one would conjecture that \(\eta _{\beta }\) is strictly increasing in \(\beta \). One can apply the implicit function theorem to \(f(\eta , \beta ) :=\eta \psi _{\beta }'(\eta ) - \psi _{\beta }(\eta )\) to obtain

$$\begin{aligned} \frac{\text {d}{\eta _{\beta }}}{\text {d}{\beta }} = - \frac{[\partial _{\beta }f](\eta _{\beta }, \beta )}{[\partial _{\eta }f](\eta _{\beta }, \beta )} = \frac{[\partial _{\beta }\psi _{\beta }](\eta _{\beta }) - \eta _{\beta } [\partial _{\beta } \psi '_{\beta }](\eta _{\beta })}{\eta _{\beta } \psi _{\beta }''(\eta _{\beta })}. \end{aligned}$$
(3.13)

The denominator is positive by (3.8), but we do not know how to show non-negativity of the numerator for general \(\beta \). We can, however, make use of (3.13) at the special point \(\beta = \beta _{\textrm{c}}\), which is all that is needed in Sect. 3.3 in order to prove Theorem 3.1.

Proposition 3.4

Let \(\psi _{\beta }(\eta )\) and \(\eta _{\beta }\) be as in (3.7) and (3.11), for some \(d\ge 2\). For \(\beta _{\textrm{c}} = \beta _{\textrm{c}}(d) > 0\), as given in Proposition 2.14, we have \(\eta _{\beta _{\textrm{c}}} = 1/2\) and

$$\begin{aligned} \frac{\text {d}}{\text {d}{\beta }}\Big \vert _{\beta =\beta _{\textrm{c}}} \eta _{\beta }> 0\,\, \quad \text {and} \quad \,\, \frac{\text {d}}{\text {d}{\beta }}\Big \vert _{\beta =\beta _{\textrm{c}}} \psi _{\beta }(\eta _{\beta }) > 0 \end{aligned}$$
(3.14)

Proof

By (3.10) we have \(\tfrac{1}{2} \psi _{\beta _{\textrm{c}}}^{\prime }(\tfrac{1}{2}) - \psi _{\beta _{\textrm{c}}}(\tfrac{1}{2}) = -\psi _{\beta _{\textrm{c}}}(\tfrac{1}{2}) = 0\). Lemma 3.3 therefore implies \(\eta _{\beta _{\textrm{c}}} = 1/2\). Applying (3.13) and recalling \(\psi _{\beta }'(\tfrac{1}{2}) = 0\), we get

$$\begin{aligned} \frac{\text {d}{\eta _{\beta }}}{\text {d}{\beta }}\Big \vert _{\beta = \beta _{\textrm{c}}} = \frac{\partial _{\beta }\vert _{\beta =\beta _{\textrm{c}}}\psi _{\beta }(\tfrac{1}{2})}{\tfrac{1}{2} \psi _{\beta }''(\eta _{\beta })}. \end{aligned}$$
(3.15)

The denominator is positive by (3.8). As for the numerator, we recall (3.7) for \(\eta = 1/2\):

$$\begin{aligned} \begin{aligned} \psi _{\beta }(\tfrac{1}{2}) = \log \left( d \int \sqrt{\frac{\beta }{2\pi }}e^{-\beta (\cosh (t)-1)}\text {d}{t}\right) . \end{aligned} \end{aligned}$$
(3.16)

To see monotonicity of the integral in \(\beta \), it is convenient to apply the following change of variables:

$$\begin{aligned} \begin{aligned} u&= e^{t/2} - e^{-t/2} = 2\sinh (t/2) \Longleftrightarrow t = 2 {\text {arsinh}}(u/2)\\ \frac{\text {d}{u}}{\text {d}{t}}&= \frac{1}{2} (e^{t/2} + e^{-t/2}) = \sqrt{1 + u^{2}/4} \end{aligned} \end{aligned}$$
(3.17)

Note that \(u^{2}/2 = \tfrac{1}{2}(e^{t} + e^{-t}) - 1 = \cosh (t)-1\), hence

$$\begin{aligned} \begin{aligned} \int \sqrt{\frac{\beta }{2\pi }}e^{-\beta (\cosh (t)-1)}\text {d}{t} =&\int \sqrt{\frac{\beta }{2\pi }}e^{-\frac{\beta }{2}u^2 } \frac{2}{\sqrt{u^2+4}}\text {d}{u} \\ =&\int \sqrt{\frac{1}{2\pi }}e^{-\frac{1}{2}s^2 } \frac{2}{\sqrt{s^2/\beta +4}}\text {d}{s}. \end{aligned} \end{aligned}$$
(3.18)

Clearly, the integrand in the last line is strictly increasing in \(\beta \), hence \(\partial _{\beta }\psi _{\beta }(\tfrac{1}{2})>0\). This implies the first statement in (3.14). For the second statement note that \(\psi _{\beta _{\textrm{c}}}^{\prime }(\tfrac{1}{2}) = 0\). Hence, \(\partial _{\beta }\vert _{\beta =\beta _{\textrm{c}}}\psi _{\beta }(\eta _{\beta }) = \partial _{\beta }\vert _{\beta =\beta _{\textrm{c}}}\psi _{\beta }(\tfrac{1}{2}) > 0\). \(\square \)

As already suggested in Fig. 5, there is a second natural transition point \(\beta _{\textrm{c}}^{\textrm{erg}} > \beta _{\textrm{c}}\), which is “special” due to \(\gamma _{\beta }\) attaining its maximum there. This transition point will be relevant for the study of the intermediate phase in Sect. 4.

Proposition 3.5

(Characterisation of \(\beta _{\textrm{c}}^{\textrm{erg}}\)). Let \(\psi _{\beta }(\eta )\) and \(\eta _{\beta }\) be as in (3.7) and (3.11), for some \(d\ge 2\). The map \(\beta \mapsto \psi _{\beta }^{\prime }(1) - \psi _{\beta }(1)\) is strictly decreasing and there exists a unique \(\beta _{\textrm{c}}^{\textrm{erg}} = \beta _{\textrm{c}}^{\textrm{erg}}(d) > 0\), such that

$$\begin{aligned} \psi _{\beta _{\textrm{c}}^{\textrm{erg}}}(1) = \psi ^{\prime }_{\beta _{\textrm{c}}^{\textrm{erg}}}(1). \end{aligned}$$
(3.19)

Equivalently, \(\beta _{\textrm{c}}^{\textrm{erg}} > 0\) is characterised by any of the following conditions:

$$\begin{aligned} \mathbb {E}_{\beta _{\textrm{c}}^{\textrm{erg}}}[T] = - \log d \quad \Longleftrightarrow \quad \eta _{\beta _{\textrm{c}}^{\textrm{erg}}} = 1 \quad \Longleftrightarrow \quad \gamma _{\beta _{\textrm{c}}^{\textrm{erg}}} = \sup _{\beta > 0} \gamma _{\beta } = \log d. \end{aligned}$$
(3.20)

Moreover, for \(\beta < \beta _{\textrm{c}}^{\textrm{erg}}\) we have that \(\eta _{\beta } < 1\) and that \(\beta \mapsto \gamma _{\beta }\) is increasing, while for \(\beta > \beta _{\textrm{c}}^{\textrm{erg}}\) one has \(\eta _{\beta } > 1\) and \(\beta \mapsto \gamma _{\beta }\) is decreasing.

Proof

By definition of \(\psi _{\beta }\) and the t-field increment measure we have

$$\begin{aligned} \psi _{\beta }^{\prime }(1) - \psi _{\beta }(1) = \mathbb {E}_{\beta }[T e^{T}] - \log d = -\mathbb {E}_{\beta }[T] - \log d. \end{aligned}$$
(3.21)

We claim that \(\beta \mapsto \mathbb {E}_{\beta }[T]\) is strictly increasing. In fact, using the change of variables in (3.17) and noting that \(e^{-t/2} = \cosh (t/2) - \sinh (t/2) = \sqrt{1+(u/2)^{2}} - u/2\), we have

$$\begin{aligned} \begin{aligned} \mathbb {E}_{\beta }[T] =&\int \sqrt{\frac{\beta }{2\pi }}e^{-\beta (\cosh (t)-1)}e^{-t/2}t\text {d}{t}\\ =&\int \sqrt{\frac{\beta }{2\pi }}e^{-\frac{\beta }{2}u^2 } \,\frac{2{\text {arsinh}}(u/2)(\sqrt{1+(u/2)^{2}} - u/2)}{\sqrt{1+(u/2)^{2}}}\text {d}{u}\\ =&-2 \int \sqrt{\frac{\beta }{2\pi }}e^{-\frac{\beta }{2}u^2 } \,\frac{u}{2}\frac{{\text {arsinh}}(u/2)}{\sqrt{1+(u/2)^{2}}}\text {d}{u}. \end{aligned} \end{aligned}$$
(3.22)

In the last step, the term \(2{\text {arsinh}}(u/2)\) on its own integrates to zero against the even weight \(\sqrt{\beta /2\pi }\,e^{-\beta u^{2}/2}\) by antisymmetry. It is easy to check that \(x {\text {arsinh}}(x)/\sqrt{1+x^{2}}\) is strictly increasing in \(\left| x\right| \). Consequently, rescaling \(u = s/\sqrt{\beta }\) as in (3.18), we see that the above integral is strictly increasing in \(\beta \). Moreover, one also observes that \(\mathbb {E}_{\beta }[T] \rightarrow -\infty \) for \(\beta \searrow 0\), whereas \(\mathbb {E}_{\beta }[T] \rightarrow 0\) for \(\beta \rightarrow \infty \). Hence by (3.21), there exists a unique \(\beta _{\textrm{c}}^{\textrm{erg}} > 0\), such that \(\psi _{\beta _{\textrm{c}}^{\textrm{erg}}}^{\prime }(1) = \psi _{\beta _{\textrm{c}}^{\textrm{erg}}}(1)\). In particular, \(\eta _{\beta _{\textrm{c}}^{\textrm{erg}}} = 1\).

The first two alternative characterisations in (3.20) follow from (3.21) and our previous considerations. Also, by Theorem 2.11, we have

$$\begin{aligned} \psi _{\beta }(1) \lessgtr \psi _{\beta }^{\prime }(1) \quad \text {for} \quad \beta \lessgtr \beta _{\textrm{c}}^{\textrm{erg}}, \end{aligned}$$
(3.23)

which by Lemma 3.3 implies that \(\eta _{\beta } \lessgtr 1\) for \(\beta \lessgtr \beta _{\textrm{c}}^{\textrm{erg}}\).

To show the last characterisation in (3.20), we calculate the derivative of \(\beta \mapsto \gamma _{\beta } = \psi _{\beta }(\eta _{\beta })/\eta _{\beta }\):

$$\begin{aligned} \begin{aligned} \partial _{\beta }\gamma _{\beta }&= \partial _{\beta }[\frac{\psi _{\beta }(\eta _{\beta })}{\eta _{\beta }}]\\&= \tfrac{1}{\eta _{\beta }}[\partial _{\beta }\psi _{\beta }](\eta _{\beta }) + \tfrac{1}{\eta _{\beta }} [\partial _{\beta }\eta _{\beta }] \psi _{\beta }^{\prime }(\eta _{\beta }) - \tfrac{1}{\eta _{\beta }^{2}} [\partial _{\beta }\eta _{\beta }] \psi _{\beta }(\eta _{\beta })\\&= \tfrac{1}{\eta _{\beta }}[\partial _{\beta }\psi _{\beta }](\eta _{\beta }), \end{aligned} \end{aligned}$$
(3.24)

where in the last line we used that \(\eta _{\beta }\psi ^{\prime }_{\beta }(\eta _{\beta }) - \psi _{\beta }(\eta _{\beta }) = 0\). By Theorem 2.11, the last line in (3.24) is non-negative if \(\eta _{\beta } \le 1\) and non-positive for \(\eta _{\beta } \ge 1\). Since \(\eta _{\beta } \lessgtr 1\) for \(\beta \lessgtr \beta _{\textrm{c}}^{\textrm{erg}}\) this implies the last statement in (3.20) as well as the stated monotonicity behaviour of \(\beta \mapsto \gamma _{\beta }\). \(\square \)
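Numerically, the characterisation \(\mathbb {E}_{\beta _{\textrm{c}}^{\textrm{erg}}}[T] = -\log d\) from (3.20) is the most convenient one; the sketch below (an illustration only) locates \(\beta _{\textrm{c}}^{\textrm{erg}}\) by combining it with the rescaled integral representation of \(\mathbb {E}_{\beta }[T]\) from (3.22).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def mean_T(beta):
    # E_beta[T] = -2 * int phi(s) * g(s / (2*sqrt(beta))) ds, with phi the standard
    # normal density and g(x) = x * arsinh(x) / sqrt(1 + x^2), cf. (3.22) and (3.18).
    g = lambda x: x * np.arcsinh(x) / np.sqrt(1.0 + x * x)
    integrand = lambda s: np.exp(-0.5 * s * s) / np.sqrt(2.0 * np.pi) \
        * g(s / (2.0 * np.sqrt(beta)))
    return -2.0 * quad(integrand, -12.0, 12.0)[0]

# beta_c^erg solves E_beta[T] = -log d, cf. (3.20); the bracket is chosen generously
for d in (2, 3, 4):
    beta_erg = brentq(lambda b: mean_T(b) + np.log(d), 1e-3, 100.0)
    print(d, beta_erg)
```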

3.2 Effective conductance in a critical environment (Proof of Theorem 3.2)

First we recall some results on small deviations of random walks. To be precise, we use an extension of Mogulskii’s Lemma [66], due to Gantert, Hu and Shi [59].

Lemma 3.6

(Triangular Mogulskii’s Lemma [59, Lemma 2.1]). For each \(n\ge 1\), let \(X_i^{(n)}\), \(1\le i \le n\), be i.i.d. real-valued random variables. Let \(g_1<g_2\) be continuous functions on [0, 1] with \(g_1(0)<0<g_2(0)\). Let \((a_n)\) be a sequence of positive numbers such that \(a_n \rightarrow \infty \) and \(a^{2}_n/n \rightarrow 0\) as \(n \rightarrow \infty \). Assume that there exist constants \(\eta >0\) and \(\sigma ^2>0\) such that:

$$\begin{aligned} \sup _{n\ge 1} \mathbb {E}\left[ |X_1^{(n)}|^{2+\eta }\right] <\infty ,\qquad \mathbb {E}\left[ X_1^{(n)}\right] =o\bigg (\frac{a_n}{n}\bigg ), \qquad \text {Var}\big [X_1^{(n)}\big ]\rightarrow \sigma ^2. \end{aligned}$$
(3.25)

Consider the measurable event

$$\begin{aligned} E_n:=\bigg \{ g_1\left( \frac{i}{n}\right) \le \frac{S_i^{(n)}}{a_n}\le g_2\left( \frac{i}{n}\right) \; \forall i\in [n]\bigg \}, \end{aligned}$$
(3.26)

where \(S_i^{(n)}:= X_1^{(n)}+\cdots +X_i^{(n)},\ 1\le i \le n\). We have

$$\begin{aligned} \frac{a_n^2}{n}\log \left( \mathbb {P}[E_n]\right) \xrightarrow [n\rightarrow \infty ]{} -\frac{\pi ^2\sigma ^2}{2} \int _0^1 \frac{1}{(g_2(t)-g_1(t))^2}\text {d}{t}. \end{aligned}$$
(3.27)

Lemma 3.7

For each \(k\ge 1\), let \(X_{i}^{(k)}\), \(i\in \mathbb {N}\), be i.i.d. real-valued random variables with \(\mathbb {E}[X_{i}^{(k)}] = 0\) and \(\sigma _{k}^{2} :=\mathbb {E}[(X_{i}^{(k)})^{2}]\). Suppose that \(0< \inf _{k}\sigma _{k}^{2} \le \sup _{k}\sigma _{k}^{2} < \infty \). Write \(S_{i}^{(k)} = X_{1}^{(k)} + \cdots + X_{i}^{(k)}\). For \(\gamma > 0\) and \(\nu \in (0,\tfrac{1}{2})\), define the events

$$\begin{aligned} E^{(k)}_{n} :=\{ |S_{i}^{(k)}| \le \gamma n^{\nu },\; \forall i\in [n]\}. \end{aligned}$$
(3.28)

Then we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \sup _{k\in \mathbb {N}} \left| n^{-(1-2\nu )}\log \mathbb {P}[E^{(k)}_{n}] + \Big (\frac{\pi \sigma _{k}}{2\gamma }\Big )^{2}\right| = 0. \end{aligned}$$
(3.29)

Proof

We proceed by contradiction. Write \(b_{n}^{(k)} :=-n^{-(1-2\nu )}\log \mathbb {P}[E^{(k)}_{n}]\) and \(b_{\infty }^{(k)} :=\big (\frac{\pi \sigma _{k}}{2\gamma }\big )^{2}\) and suppose (3.29) does not hold. Then there exist \(\epsilon > 0\), a sequence \((k_{n})_{n\in \mathbb {N}}\), and a subsequence \(\mathcal {N}_{0} \subseteq \mathbb {N}\) such that

$$\begin{aligned} \forall n\in \mathcal {N}_{0}:\left| b_{n}^{(k_{n})} - b_{\infty }^{(k_{n})}\right| > \epsilon . \end{aligned}$$
(3.30)

Since the \(\sigma _{k}^{2}\) are bounded away from zero and infinity, we can refine to a subsequence \(\mathcal {N}_{1} \subseteq \mathcal {N}_{0} \subseteq \mathbb {N}\), such that \(\sigma ^{2}_{k_{n}} \rightarrow \tilde{\sigma }^{2} > 0\) along \(\mathcal {N}_{1}\). But by Lemma 3.6 (with \(a_{n} = n^{\nu }\), \(g_{1} = -\gamma \), and \(g_{2} = +\gamma \)) we have \(b_{n}^{(k_{n})} \rightarrow \big (\frac{\pi \tilde{\sigma }}{2\gamma }\big )^{2}\) along \(\mathcal {N}_{1}\), in contradiction with (3.30). \(\square \)

Proof of Theorem 3.2

Recall the notation in Theorem 3.2. We proceed by proving the statement for an individual increment V, but indicate at which steps care has to be taken to establish the uniformity (3.5).

Write \(\partial \Lambda _{n} :=\{x \in \mathbb {T}_{d} :\left| x\right| = n\}\) for the vertices at distance n from the origin. Set \(\alpha :=\frac{1}{2}(\pi ^{2}\sigma _{V}^{2})^{1/3}\). Define the stopping lines of \(\{V_{x}\}_{x\in \mathbb {T}_{d}}\) at level \(\alpha n^{1/3}\):

$$\begin{aligned} \mathcal {L}^{(n)} :=\{(x,y) \in {E}:V_{y}\ge \alpha n^{1/3},\; \forall z \prec y : V_{z} < \alpha n^{1/3}\}, \end{aligned}$$
(3.31)

where we write \({E}\) for the set of edges oriented away from the origin and “\(a \prec b\)” means that a is an ancestor of b. Let \(A_{n}\) denote the event that \(\mathcal {L}^{(n)}\) is a cut-set between the origin and \(\partial \Lambda _{n}\). By (2.9), conditionally on the event \(A_{n}\) we have the pointwise bound

$$\begin{aligned} C^{\textrm{eff}}_{n,\gamma } \le \sum _{xy \in \mathcal {L}^{(n)}}e^{-\gamma (V_{x} + V_{y})}. \end{aligned}$$
(3.32)

We thus have:

$$\begin{aligned} \mathbb {E}\Big [C^{\textrm{eff}}_{n,\gamma }\Big ]\le \mathbb {E}\Big [\sum _{xy \in \mathcal {L}^{(n)}}e^{-\gamma (V_{x} + V_{y})}\Big ]+\mathbb {E}\Big [C^{\textrm{eff}}_{n,\gamma }\mathbbm {1}_{A_{n}^{\textrm{c}}}\Big ] \end{aligned}$$
(3.33)

Bounding the second summand. Clearly, we have

$$\begin{aligned} \begin{aligned} \mathbb {P}[A_{n}^{\textrm{c}}]&\le \mathbb {P}[\exists |x|=n,\text { such that } \forall y\prec x, |V_{y}|\le \alpha n^{1/3}]\\&\hspace{3em}+ \mathbb {P}[\exists |x|\le n,\text { such that } V_{x} \le -\alpha n^{1/3}]. \end{aligned} \end{aligned}$$
(3.34)

To bound the first summand on the right hand side, we apply the many-to-one formula (Proposition 2.15) with \(\eta = 1\), and get a random walk \((S_{i})_{i\ge 0}\), such that

$$\begin{aligned} \begin{aligned}&\hspace{-8em} \mathbb {P}[\exists |x|=n,\text { such that } \forall y\prec x, |V_{y}|\le \alpha n^{1/3}]\\&\le \mathbb {E}\Big [{\textstyle \sum _{|x|=n}} \mathbbm {1}\{\forall y\prec x, |V_{y}|\le \alpha n^{1/3}\}\Big ]\\&= \mathbb {E}[e^{S_n}\mathbbm {1}_{\forall i\in [n], |S_i|\le \alpha n^{1/3}}]\\&\le e^{\alpha n^{1/3}} \mathbb {P}[\forall i\in [n], |S_i|\le \alpha n^{1/3}]. \end{aligned} \end{aligned}$$
(3.35)

In the third line we used that \(\psi _{V}(1) = 0\). We recall that (since \(\psi _{V}(1) = \psi _{V}'(1) = 0\)) we have \(\mathbb {E}[S_{1}] = 0\) and \(\mathbb {E}[S_{1}^{2}] = \sigma _{V}^{2}\). Applying Lemma 3.7 (with \(\gamma = \alpha \) and \(\nu = 1/3\)) yields

$$\begin{aligned} \mathbb {P}[\forall i\in [n], |S_i|\le \alpha n^{1/3}] = e^{-[2\alpha +o(1)] n^{1/3}}, \end{aligned}$$
(3.36)

where we used that \((\tfrac{\pi \sigma _{V}}{2\alpha })^{2} = 2 \alpha \). Moreover, Lemma 3.7 states that the convergence in (3.36) is uniform over a family \(V^{(k)}\), \(k\in \mathbb {N}\), of critical increments given that \(0< \inf _{k}\sigma ^{2}_{V^{(k)}} \le \sup _{k}\sigma ^{2}_{V^{(k)}} < \infty \). In conclusion we have

$$\begin{aligned} \mathbb {P}[\exists |x|=n,\text { such that } \forall y\prec x, |V_{y}|\le \alpha n^{1/3}] \le e^{-[\alpha + o(1)]n^{1/3}}. \end{aligned}$$
(3.37)

For the second summand in (3.34) we have

$$\begin{aligned} \begin{aligned} \mathbb {P}[\exists |x|\le n,\text { such that } V_{x} \le -\alpha n^{1/3}] \le&\sum _{i=1}^n\mathbb {E}\Big [\sum _{|x|=i} \mathbbm {1}_{V_{x}\le -\alpha n^{1/3}} \Big ]\\ =&\sum _{i=1}^n \sum _{|x|=i} \mathbb {E}[e^{-V_{x}} \,e^{V_{x}}\mathbbm {1}_{V_{x}\le -\alpha n^{1/3}}]\\ \le&\sum _{i=1}^n \sum _{|x|=i} \mathbb {E}[e^{-V_{x}}]e^{-\alpha n^{1/3}}\\ =&\sum _{i=1}^n e^{i \psi _{V}(1)} e^{-\alpha n^{1/3}}\\ =&\sum _{i=1}^n e^{-\alpha n^{1/3}}\\ =&n e^{-\alpha n^{1/3}}. \end{aligned} \end{aligned}$$
(3.38)

Here we used that \(e^{i \psi _{V}(\eta )} = \sum _{\left| x\right| = i} \mathbb {E}[e^{-\eta V_{x}}]\), which one may check inductively. In conclusion, (3.34), (3.37) and (3.38) yield \(\mathbb {P}(A_{n}^{\textrm{c}}) \le e^{-(\alpha +o(1)) n^{1/3}}\). We proceed by controlling the second summand in (3.33) using the Cauchy–Schwarz inequality and properties of the effective conductance (Lemma 2.6):

$$\begin{aligned} \begin{aligned} \mathbb {E}[C^{\textrm{eff}}_{n,\gamma } \mathbbm {1}_{A_{n}^{\textrm{c}}}] \le \sqrt{\mathbb {E}[(C_{n,\gamma }^{\textrm{eff}})^{2}]}\; e^{-[\frac{\alpha }{2} + o(1)]\, n^{1/3}} \end{aligned} \end{aligned}$$
(3.39)

To bound the first factor on the right hand side note that \(C_{n,\gamma }^{\textrm{eff}} \le {\textstyle \sum _{\left| x\right| =1}}e^{-\gamma V_{x}}\) by Lemma 2.6. By Jensen’s and Hölder’s inequality

$$\begin{aligned} \begin{aligned} \mathbb {E}[({\textstyle \sum _{\left| x\right| =1}}e^{-\gamma V_{x}})^{2}]&\le d\, \mathbb {E}[{\textstyle \sum _{\left| x\right| =1}}e^{-2\gamma V_{x}}]\\&= d^{2}\, \mathbb {E}[e^{-2\gamma V}]\\&\le d^{2} \mathbb {E}[e^{-V}]^{2\gamma (1-\frac{2\gamma -1}{2a})} \mathbb {E}[e^{-(1+2a)V}]^{\frac{2\gamma }{1+2a}\frac{2\gamma -1}{2a}}\\&\le d^{2 - 2\gamma (1-\frac{2\gamma -1}{2a})} [\tfrac{1}{d} e^{\psi _{V}(1+2a)}]^{\frac{2\gamma }{1+2a}\frac{2\gamma -1}{2a}}, \end{aligned} \end{aligned}$$
(3.40)

where we used \(1 = e^{\psi _{V}(1)} = d\, \mathbb {E}[e^{-V}]\). The last line in (3.40) is continuous in \(\gamma \in \mathbb {R}\), hence uniformly bounded for \(\gamma \in (1/2, 1/2 + a)\). In conclusion, we have

$$\begin{aligned} \sup _{1/2< \gamma < 1/2 + a} \mathbb {E}[C^{\textrm{eff}}_{n,\gamma } \mathbbm {1}_{A_{n}^{\textrm{c}}}] \le C(\psi _{V}(1+2a))\, e^{-[\frac{\alpha }{2} + o(1)] n^{1/3}}, \end{aligned}$$
(3.41)

for a constant \(C(\psi _{V}(1+2a)) > 0\) depending continuously on \(\psi _{V}(1+2a)\). In particular, this yields a uniform bound over a family of critical increments \(V^{(k)}\) with \(0< \inf _{k}\sigma ^{2}_{V^{(k)}} \le \sup _{k}\sigma ^{2}_{V^{(k)}} < \infty \) and \(\sup _{k}\psi _{V^{(k)}}(1+2a) < \infty \).

Bounding the first summand. For a vertex \(x \in \partial \Lambda _{n}\) we write \((x_{k})_{k=0,\ldots ,n}\) for its sequence of predecessors (\(x_{0} = 0, x_{n} = x\)). For a walk \(X = (X_{i})_{i\ge 0}\), analogously to our stopping lines, we introduce the stopping time at level \(\alpha n^{1/3}\):

$$\begin{aligned} T^{(n)}_{X} = \inf \{i\ge 0:X_{i} \ge \alpha n^{1/3}\} \end{aligned}$$
(3.42)

Note that on the event \(A_{n}\), we know for every \(x\in \partial \Lambda _{n}\) that the sequence \((V_{x_{i}})_{i=0,\ldots ,n}\) crosses level \(\alpha n^{1/3}\). In other words, \(T^{(n)}_{(V_{x_{i}})} \le n\).

Consequently, the first summand in (3.33) is bounded via

$$\begin{aligned} \begin{aligned} \mathbb {E}\Big [ \sum _{xy \in \mathcal {L}^{(n)}}e^{-\gamma (V_{x} + V_{y})} \Big ]&\le \sum _{k=1}^{n} \mathbb {E}\Big [ \sum _{\left| x\right| = k} \mathbbm {1}\{T^{(n)}_{(V_{x_{i}})} = k\} e^{-\gamma (V_{x_{k-1}} + V_{x_{k}})} \Big ]. \end{aligned} \end{aligned}$$
(3.43)

The last line is amenable to the many-to-one formula (Proposition 2.15). Write \((S_{i})_{i\ge 0}\) for the associated random walk (choosing \(\eta = 1\)); then the last line in (3.43) is equal to

$$\begin{aligned} \sum _{k=1}^{n} \mathbb {E}\Big [ \mathbbm {1}\{T^{(n)}_{S} = k\} e^{S_{k}} e^{-\gamma (S_{k-1} + S_{k})} \Big ] = \sum _{k=1}^{n} \mathbb {E}\Big [ \mathbbm {1}\{T^{(n)}_{S} = k\} e^{-(2\gamma -1)S_{k-1}} e^{(1-\gamma ) (S_{k} - S_{k-1})} \Big ]. \nonumber \\ \end{aligned}$$
(3.44)

Now, since \(S_{k}\ge \alpha n^{1/3}\) for \(T^{(n)}_{S} = k\), and since \(\gamma > 1/2\) by assumption, we can bound the right hand side and obtain

$$\begin{aligned} \mathbb {E}\Big [ \sum _{xy \in \mathcal {L}^{(n)}}e^{-\gamma (V_{x} + V_{y})} \mathbbm {1}_{A_{n}} \Big ]\le & {} e^{-(2\gamma - 1)\alpha n^{1/3}} \times \sum _{k=1}^{n} \mathbb {E}\Big [ \mathbbm {1}\{T^{(n)}_{S} = k\} e^{(1-\gamma )(S_{k} - S_{k-1})} \Big ] \nonumber \\\le & {} e^{-(2\gamma - 1)\alpha n^{1/3}} \times n\mathbb {E}\Big [ e^{(1-\gamma ) S_{1}} \Big ] \end{aligned}$$
(3.45)

Now by using the definition of \((S_{i})\) in (2.21) we have

$$\begin{aligned} \mathbb {E}[e^{(1-\gamma )S_{1}}] = d\,\mathbb {E}[e^{-\gamma V}] \le d\, \mathbb {E}[e^{-(1+2a)V}]^{\frac{\gamma }{1+2a}} \le d\, [\tfrac{1}{d} e^{\psi _{V}(1+2a)}]^{\frac{\gamma }{1+2a}} \le C(\psi _{V}(1+2a)), \nonumber \\ \end{aligned}$$
(3.46)

for a constant \(C(\psi _{V}(1+2a)) > 0\) that is independent of \(\gamma \in (1/2, 1/2 + a)\) and continuous with respect to \(\psi _{V}(1+2a)\). Hence,

$$\begin{aligned} \mathbb {E}\Big [ \sum _{xy \in \mathcal {L}^{(n)}}e^{-\gamma (V_{x} + V_{y})} \Big ] \le e^{-[(2\gamma - 1)\alpha +o(1)] n^{1/3}}, \end{aligned}$$
(3.47)

and this bound holds uniformly with respect to \(\gamma \in (1/2, 1/2 + a)\) and over a family of critical increments \(V^{(k)}\), given that \(\sup _{k} \psi _{V^{(k)}}(1+2a) < \infty \). In conclusion, (3.33), (3.41) and (3.47) yield

$$\begin{aligned} \begin{aligned} \mathbb {E}[C_{n,\gamma }^{\textrm{eff}}]&\le e^{-[\alpha /2 + o(1)]n^{1/3}} + e^{-[(2\gamma - 1)\alpha + o(1)]n^{1/3}}\\&\le e^{-[\min (\tfrac{1}{2},2\gamma - 1) \alpha + o(1)]n^{1/3}}\\&= e^{-[ \min (\tfrac{1}{4},\gamma - \tfrac{1}{2}) (\pi ^{2}\sigma _{V}^{2})^{1/3} +o(1)]n^{1/3}} \end{aligned} \end{aligned}$$
(3.48)

uniformly over \(\gamma \in (1/2, 1/2 + a)\) as \(n\rightarrow \infty \). And as noted, this bound is also uniform over a family of critical increments \(V^{(k)}\), given the assumptions in the theorem. This concludes the proof. \(\square \)

3.3 Near-critical effective conductance (Proof of Theorem 3.1)

The upper bound in Theorem 3.1 will follow from Theorem 3.2 and a perturbative argument. For the lower bound, we will apply a modification of a result due to Gantert, Hu and Shi [59]. In their work they give the asymptotics, as \(\delta \searrow 0\), for the probability that some trajectory of a critical branching random walk stays below the linear barrier \(\delta i\). We are interested in this result applied to the critical rescaling of the t-field, \(\{\tau _{x}^{\beta }\}_{x\in \mathbb {T}_{d}}\), as given in (3.12). Compared to Gantert, Hu and Shi’s result, we will require additional uniformity in \(\beta \):

Theorem 3.8

Let \(\{\tau _{x}^{\beta }\}_{x\in \mathbb {T}_{d}}\) be as in (3.12). For any \(a>0\) small enough, there exists a constant \(C>0\) such that for all \(\beta \in [\beta _c,\beta _c+a]\), for \(\delta \) small enough:

$$\begin{aligned} \mathbb {P}_{\beta }[\exists \text {a path } \gamma :0\rightarrow \infty \text { s.t.\ } \forall i\in \mathbb {N},\ \tau ^{\beta }_{\gamma _i}\le \delta i]\ge e^{-C/\sqrt{\delta }}. \end{aligned}$$

This theorem will be proven in Appendix B, as it closely follows the arguments of Gantert, Hu and Shi, while taking some extra care to ensure the required uniformity.

Proof of Theorem 3.1

The main idea is to consider, for \(\beta = \beta _{\textrm{c}} + \epsilon \), the critical rescaling of the t-field (see Lemma 2.17, (3.11) and Lemma 3.3)

$$\begin{aligned} \tau ^{\beta }_{i} = - \eta _{\beta } T_{i} + \psi _{\beta }(\eta _{\beta }) \left| i\right| . \end{aligned}$$
(3.49)

By Proposition 3.4 (together with the continuous differentiability established in Lemma 3.3), the constants appearing in this rescaling have the following near-critical behaviour:

$$\begin{aligned} \begin{aligned} \eta _{\beta _{\textrm{c}} + \epsilon }&= \tfrac{1}{2} + c_{\eta } \epsilon + O(\epsilon ^{2})\quad \text {with} \quad c_{\eta }> 0\\ \psi _{\beta _{\textrm{c}} + \epsilon }(\eta _{\beta _{\textrm{c}} + \epsilon })&= c_{\psi }\epsilon + O(\epsilon ^{2}) \quad \text {with} \quad c_{\psi } > 0. \end{aligned} \end{aligned}$$
(3.50)

Together with these asymptotics, applying Theorem 3.8 and Theorem 3.2 to \(\{\tau ^{\beta }_{i}\}_{i\in \mathbb {T}_{d}}\) will yield the lower and upper bound, respectively.

Lower Bound: According to Theorem 3.8, there exist constants \(a, C>0\) such that for all sufficiently small \(\delta >0\):

$$\begin{aligned} \inf _{\beta _{\textrm{c}}< \beta < \beta _{\textrm{c}} + a}\mathbb {P}_{\beta }[\exists \text {a path } \gamma :0\rightarrow \infty \text { s.t.\ } \forall i\in \mathbb {N},\ \tau ^{\beta }_{\gamma _i}\le \delta i]\ge e^{-C/\sqrt{\delta }}. \end{aligned}$$
(3.51)

Note that \(\tau _{\gamma _{i}} \le \delta i\) is equivalent to \(T_{\gamma _{i}} \ge \eta _{\beta }^{-1} [\psi _{\beta }(\eta _{\beta }) - \delta ] i\). Choosing \(\delta (\epsilon ) = \tfrac{1}{2} c_{\psi } \epsilon \), we have \(\eta _{\beta _{\textrm{c}}+\epsilon }^{-1} [\psi _{\beta _{\textrm{c}}+\epsilon }(\eta _{\beta _{\textrm{c}}+\epsilon }) - \delta (\epsilon )] = c_{\psi }\epsilon + O(\epsilon ^{2})\). Hence, for \(\epsilon >0\) small enough

$$\begin{aligned} \mathbb {P}_{\beta _{\textrm{c}} + \epsilon }[\exists \text {a path } \gamma :0\rightarrow \infty \text { s.t.\ } \forall i\in \mathbb {N},\ T_{\gamma _i}\ge \tfrac{1}{2} c_{\psi }\epsilon i] \ge e^{-C/\sqrt{\epsilon }}. \end{aligned}$$
(3.52)

Write \(A_{\epsilon }\) for the event in brackets. Conditionally on this event, we can bound \(C_{\infty }^{\textrm{eff}}\) from below by the conductance along the path \(\gamma \) (which is given by Kirchhoff’s rule for conductors in series):

$$\begin{aligned} \text {On } A_{\epsilon }:\quad C_{\infty }^{\textrm{eff}} \ge \Big [\sum _{i=0}^{\infty } \frac{1}{\beta } e^{-2\tfrac{1}{2}c_{\psi }\epsilon \, i}\Big ]^{-1} = \beta (1 - e^{- c_{\psi }\epsilon }). \end{aligned}$$
(3.53)

Consequently, (3.52) and (3.53) yield

$$\begin{aligned} \mathbb {E}_{\beta _{\textrm{c}}+\epsilon }[C_{\infty }^{\textrm{eff}}] \ge (\beta _{\textrm{c}} + \epsilon ) (1 - e^{-c_{\psi }\epsilon }) e^{-C/\sqrt{\epsilon }} = e^{-[C + o(1)]/\sqrt{\epsilon }} \text { as } \epsilon \rightarrow 0. \end{aligned}$$
(3.54)

This concludes the proof of the lower bound in (3.1).

Upper Bound: Recalling the definition (3.49), we have for any \(i,j \in \mathbb {T}_{d,n} \subseteq \mathbb {T}_{d}\) that

$$\begin{aligned} e^{T_{i} + T_{j}} = e^{(\left| i\right| + \left| j\right| )\, \psi _{\beta }(\eta _{\beta })/\eta _{\beta }} e^{-\eta ^{-1}_{\beta }(\tau ^{\beta }_{i} + \tau ^{\beta }_{j})} \le e^{2n\, \psi _{\beta }(\eta _{\beta })/\eta _{\beta }} e^{-\eta ^{-1}_{\beta }(\tau ^{\beta }_{i} + \tau ^{\beta }_{j})}. \end{aligned}$$
(3.55)

Hence, if we write \(\tilde{C}^{\textrm{eff}}_{n}\) for the effective conductance between the origin and \(\partial \Lambda _{n} = \{x\in \mathbb {T}_{d}:\left| x\right| = n\}\) in the electrical network with conductances \(\{e^{-\eta ^{-1}_{\beta }(\tau ^{\beta }_{i} + \tau ^{\beta }_{j})}\}_{ij \in E}\), we have

$$\begin{aligned} \mathbb {E}_{\beta }[C_{n}^{\textrm{eff}}] \le e^{2n\, \psi _{\beta }(\eta _{\beta })/\eta _{\beta }}\, \mathbb {E}_{\beta }[\tilde{C}^{\textrm{eff}}_{n}]. \end{aligned}$$
(3.56)

For any \(\beta > 0\), the field \(\tau ^{\beta }_{i}\) is the BRW for the critical increment \(\tau ^{\beta } :=-\eta _{\beta } T + \psi _{\beta }(\eta _{\beta })\), where T is distributed as a t-field increment (at inverse temperature \(\beta \)). Hence, Theorem 3.2 implies

$$\begin{aligned} \mathbb {E}_{\beta }[\tilde{C}^{\textrm{eff}}_{n}] \le \exp [-\big [\min (\tfrac{1}{4},\eta ^{-1}_{\beta } - 1/2)\, (\pi ^{2}\sigma _{\tau ^{\beta }}^{2})^{1/3} +o(1)\big ]n^{1/3}] \quad \text {as} \quad n\rightarrow \infty , \nonumber \\ \end{aligned}$$
(3.57)

and moreover this holds uniformly as \(\beta \searrow \beta _{\textrm{c}}\). Note that by (3.50) we have \(\min (\tfrac{1}{4},\eta ^{-1}_{\beta } - 1/2) = \tfrac{1}{4}\) for \(\beta \) sufficiently close to \(\beta _{\textrm{c}}\). In the following write \(\beta = \beta _{\textrm{c}} + \epsilon \). By (3.50) we have \(\psi _{\beta _{\textrm{c}} + \epsilon }(\eta _{\beta _{\textrm{c}} + \epsilon })/\eta _{\beta _{\textrm{c}} + \epsilon } \sim 2 c_{\psi } \epsilon \) as \(\epsilon \searrow 0\). Hence, choosing \(n = n(\epsilon ) = c' \epsilon ^{-3/2}\) we have

$$\begin{aligned} 2n(\epsilon )\, \psi _{\beta _{\textrm{c}} + \epsilon }(\eta _{\beta _{\textrm{c}} + \epsilon })/\eta _{\beta _{\textrm{c}} + \epsilon } \sim 4 c_{\psi } c' \epsilon ^{-1/2} \quad \text {and} \quad n(\epsilon )^{1/3} = c'^{1/3} \epsilon ^{-1/2}, \end{aligned}$$
(3.58)

consequently for \(c' > 0\) sufficiently small, (3.56) and (3.57) together with Lemma 2.6 yield

$$\begin{aligned} \ \mathbb {E}_{\beta _{\textrm{c}} + \epsilon }[C^{\textrm{eff}}_{\infty }] \le \mathbb {E}_{\beta _{\textrm{c}} + \epsilon }[C^{\textrm{eff}}_{n(\epsilon )}] \le e^{-(C+o(1))\,\epsilon ^{-1/2}} \quad \text {as} \quad \epsilon \searrow 0, \end{aligned}$$
(3.59)

for some constant \(C > 0\). \(\square \)

A corollary of the proof above, in particular of (3.52) and (3.53), is the following:

Lemma 3.9

In the setting of Theorem 3.1 one has, for some constants \(c, C > 0\)

$$\begin{aligned} \mathbb {P}_{\beta _{\textrm{c}} + \epsilon } [C_{\infty }^{\textrm{eff}} > c \epsilon ] \ge \exp [-(C + o(1))/\sqrt{\epsilon }], \end{aligned}$$
(3.60)

as \(\epsilon \searrow 0\).

3.4 Average escape time of the VRJP as \(\beta \searrow \beta _{\textrm{c}}\) (Proof of Theorem 1.2)

Lemma 3.10

(Local Time and Effective Conductance). Let \(L^{0}_{\infty }\) denote the time the VRJP spends at the origin. Let \(C_{\infty }^{\textrm{eff}}\) be the effective conductance between the origin and infinity in the t-field environment. Also suppose Z is an independent exponential random variable of unit mean. Then we have

$$\begin{aligned} L^{0}_{\infty } {\mathop {=}\limits ^{\tiny \text {law}}} \sqrt{1+2 Z/C_{\infty }^{\textrm{eff}}}\, -1. \end{aligned}$$
(3.61)

Proof

Write \(\tilde{L}^{0}_{\infty }\) for the total time the exchangeable timescale VRJP spends at the origin. By the time change formula for the local times (2.3), we have:

$$\begin{aligned} L^{0}_{\infty }=\sqrt{1+\tilde{L}^{0}_{\infty }}-1. \end{aligned}$$
(3.62)

By Theorem 2.2, Lemma 2.4, and Lemma 2.7, \(\tilde{L}^{0}_{\infty }\) is \(\textrm{Exp}(2/C_{\infty }^{\textrm{eff}})\)-distributed. \(\square \)

Lemma 3.11

Let \(C^\textrm{eff}_{\infty }\) be as in Theorem 3.1. For any \(\alpha >0\), there exists a constant \(c = c(d,\alpha ) > 0\), such that for \(\epsilon > 0\) small enough and \(x \ge e^{c/\sqrt{\epsilon }}\)

$$\begin{aligned} \mathbb {P}_{\beta _{\textrm{c}}+ \epsilon }[\tfrac{1}{C^\textrm{eff}_{\infty }} > x] \le x^{-\alpha }. \end{aligned}$$
(3.63)

In particular, there exists a constant \(C > 0\) such that

$$\begin{aligned} \mathbb {E}_{\beta _{\textrm{c}} + \epsilon }\Big [\frac{1}{C^{\textrm{eff}}_\infty }\Big ]\le e^{\frac{C}{\sqrt{\epsilon }}} \end{aligned}$$
(3.64)

Proof

Recall that the t-field environment is given by edge-weights \(\{\beta _{ij}e^{T_{i} + T_{j}}\}_{ij \in E(\mathbb {T}_{d})}\), where the t-field \(T_{i}\) has independent increments along outgoing edges and is defined to equal 0 at the origin. In particular, the environment on the subtree emanating from x (which is isomorphic to \(\mathbb {T}_{d}\)) is distributed as a t-field environment on \(\mathbb {T}_{d}\) multiplied by \(e^{2T_{x}}\) (which is the same as requiring that the t-field equals \(T_{x}\) at the “origin” x). For any \(n\in \mathbb {N}\), and a vertex x at generation n, write \(\omega _{n,x}\) for the effective conductance from x to infinity. By the above we have that \(\{e^{-2T_{x}}\omega _{n,x}\}_{\left| x\right| = n}\) are independently distributed as \(C_{\infty }^{\textrm{eff}}\). Also, they are independent from the t-field up to generation n.

In the following, we replace each of the \(d^{n}\) subtrees emanating from the vertices x at generation n by a single edge “to infinity” with weight \(\omega _{n,x}\). The resulting network has the same effective conductance between 0 and infinity.

Define the event

$$\begin{aligned} A_{n} :=\{\exists \left| x\right| = n : e^{-2T_{x}}\omega _{n,x} > 2c \epsilon \}. \end{aligned}$$
(3.65)

By Lemma 3.9 we have \(\mathbb {P}_{\beta _{\textrm{c}} + \epsilon }[e^{-2T_{x}}\omega _{n,x}>2c\epsilon ]\ge e^{-2 C/\sqrt{\epsilon }}\) and hence

$$\begin{aligned} \mathbb {P}_{\beta _{\textrm{c}} + \epsilon }[A_{n}^{\textrm{c}}] = 1 - \mathbb {P}_{\beta _{\textrm{c}} + \epsilon }[A_{n}] \le (1-e^{-2 C/\sqrt{\epsilon }})^{d^n}\le e^{-d^n e^{-2 C/\sqrt{\epsilon }}}, \end{aligned}$$
(3.66)

which is small for appropriately chosen n.

Hence, suppose we are working under the event \(A_{n}\), and let \(x_0\) be a vertex at generation n, such that \(e^{-2T_{x_0}}\omega _{n,x_0}>2c\epsilon \). The effective conductance on the tree is larger than the effective conductance on the subgraph where we only keep the edges between 0 and \(x_0\), as well as an edge between \(x_0\) and infinity with conductance \(e^{2T_{x_{0}}} 2c\epsilon < \omega _{n,x_{0}}\). Denote the conductance of this reduced graph by \(C^{\textrm{red}}\). We write \(y_0=0,\dots ,y_n=x_0\) for the vertices along the path from 0 to \(x_0\). The series formula for conductances yields

$$\begin{aligned} \frac{1}{C^\textrm{eff}_{\infty }}\le \frac{1}{C^{\textrm{red}}} = \frac{1}{\beta } \sum \limits _{i=0}^{n-1}e^{-(T_{y_i}+T_{y_{i+1}})} + \frac{1}{2c\epsilon }e^{-2 T_{y_n}}. \end{aligned}$$
(3.67)

We bound \(T_{y_{i}} + T_{y_{i+1}} \ge 2 \min (T_{y_{i}}, T_{y_{i+1}})\). Recall that, jointly in i, \(T_{y_{i}} \overset{\tiny \text {law}}{=} \sum _{k=1}^{i} T^{(k)}\) with i.i.d. samples \(\{T^{(k)}\}_{k\ge 1}\) from the t-field increment measure (2.17). This yields

$$\begin{aligned} \frac{1}{C^{\textrm{red}}} \le (\tfrac{n}{\beta } + \tfrac{1}{2c\epsilon }) e^{-2\min (T_{y_{0}}, \ldots , T_{y_{n}})}. \end{aligned}$$
(3.68)

For fixed \(\tau > 0\) we apply a union bound and Chernoff’s bound (resp. Lemma A.1)

$$\begin{aligned} \textstyle \begin{aligned} \mathbb {P}_{\beta }[\min (T_{y_{0}}, \ldots , T_{y_{n}})< - n \tau ]&\le \sum _{i=1}^{n} \mathbb {P}[{\textstyle \sum _{k=1}^{i}T^{(k)}} < -n\tau ]\\&\le \sum _{i=1}^{n} \exp (-i \Psi _{\beta }^{*}(\tfrac{n}{i} \tau )), \end{aligned} \end{aligned}$$
(3.69)

where \(\Psi ^{*}_{\beta }(\tau ) = \sup _{\lambda \ge 0}(\lambda \tau - \log \mathbb {E}_{\beta } [e^{-\lambda T}])\) is the Fenchel-Legendre dual of the (negative) t-field increment’s log-MGF. Convexity of \(\Psi _{\beta }^{*}\) (and \(\Psi ^{*}_{\beta }(0) = 0\)) implies \(\Psi _{\beta }^{*}(\tfrac{n}{i} \tau ) \ge \tfrac{n}{i} \Psi _{\beta }^{*}(\tau )\). Consequently, (3.69) yields

$$\begin{aligned} \mathbb {P}_{\beta }[\min (T_{y_{0}}, \ldots , T_{y_{n}}) < - n \tau ] \le (n+1) e^{-n\Psi ^{*}_{\beta }(\tau )} \quad \text {for} \quad \tau > 0 \end{aligned}$$
(3.70)

which by (3.67) and (3.68) implies

$$\begin{aligned} \mathbb {P}_{\beta _{\textrm{c}} + \epsilon }[\tfrac{1}{C^{\textrm{eff}}_{\infty }} > (\tfrac{n}{\beta } + \tfrac{1}{2c\epsilon }) e^{2 n \tau } |A_{n}] \le (n+1)\exp [-n \Psi ^{*}_{\beta _{\textrm{c}} + \epsilon }(\tau )], \end{aligned}$$
(3.71)

In Appendix A we obtain lower bounds on \(\Psi _{\beta }^{*}\) (Lemma A.1). By (A.3), we have that for fixed \(\alpha > 0\) and sufficiently small \(\epsilon > 0\), any sufficiently large \(\tau > 0\) will satisfy \(\Psi ^{*}_{\beta _{\textrm{c}} + \epsilon }(\tau ) \ge 7\alpha \tau \), uniformly as \(\epsilon \searrow 0\). To conclude, we choose \(n\ge N(\epsilon ) :=\frac{4C}{\log (d) \sqrt{\epsilon }}\), such that \(\mathbb {P}[A_{n}^{\textrm{c}}] \le e^{-d^{n/2}}\). In conclusion, with the above choices, (3.66) and (3.71) yield

$$\begin{aligned} \mathbb {P}_{\beta _{\textrm{c}}+ \epsilon }\big [\tfrac{1}{C^\textrm{eff}_{\infty }} > e^{3n\tau }\big ] \le e^{-6n\alpha \tau }+ e^{-d^{n/2}} \end{aligned}$$
(3.72)

This implies the claim. \(\square \)

Proof of Theorem 1.2

We start with the lower bound: By Lemma 3.10 there exists an exponential random variable Z of expectation 1 such that:

$$\begin{aligned} \begin{aligned} \mathbb {E}[L^{0}_{\infty }]&= \mathbb {E}\big [\sqrt{1+2 Z/C^{\textrm{eff}}_\infty }\big ]-1\\&\ge \mathbb {E}\big [\sqrt{1+2 Z/\mathbb {E}(C^{\textrm{eff}}_\infty )}\big ]-1 \text { by cond.\ Jensen inequality}\\&\ge \mathbb {E}\big [\sqrt{2Z}\big ]\big /\sqrt{\mathbb {E}[C^{\textrm{eff}}_\infty ]}-1 \\&\ge \exp \big ((c/2+o(1))/\sqrt{\epsilon }\big ) -1 \text { by Theorem}~3.1. \end{aligned} \end{aligned}$$
(3.73)

For the upper bound, we start with Jensen’s inequality:

$$\begin{aligned} \begin{aligned} \mathbb {E}[L^{0}_{\infty }]&= \mathbb {E}\big [\sqrt{1+2Z/C^{\textrm{eff}}_{\infty }}-1\big ]\\&\le \sqrt{1+2\mathbb {E}\big [Z/C^{\textrm{eff}}_{\infty }\big ]}-1\\&= \sqrt{1+2\mathbb {E}\big [1/C^{\textrm{eff}}_{\infty }\big ]}-1\\&\le \sqrt{2} \sqrt{\mathbb {E}\big [1/C^{\textrm{eff}}_{\infty }\big ]}. \end{aligned} \end{aligned}$$
(3.74)

The result now follows by Lemma 3.11. \(\square \)

4 Intermediate Phase of the VRJP

In this section we show that the VRJP on large finite regular trees exhibits an intermediate phase. We also argue that Rapenne’s recent results [19] imply the absence of such an intermediate phase on regular trees with wired boundary conditions.

4.1 Existence of an intermediate phase on \(\mathbb {T}_{d,n}\) (Proof of Theorem 1.3)

The intermediate phase is characterised by the VRJP spending an unusually large amount of time at the root, despite being transient. More precisely, on finite trees the fraction of time spent at the origin decays as a fractional power of the system volume. Beyond the second transition point the walk reverts to the behaviour one expects by comparison with simple random walk, spending a fraction of time at the starting vertex that is inversely proportional to the tree’s volume.

We will see that the different scalings are due to different regimes for the log-Laplace transform of the t-field increments, \(\psi _{\beta }(\eta ) = \log [d \,\mathbb {E}_{\beta } e^{\eta T}]\), as elaborated in Sect. 3.1.

Before starting the proof, we show how the observable in Theorem 1.3 can be rephrased in terms of a t-field. The proof will then proceed by analysing the resulting t-field quantity via branching random walk methods.

Lemma 4.1

Consider the situation of Theorem 1.3. Further consider a t-field \(\{T_{x}\}\) on \(\mathbb {T}_{d,n}\), rooted at the origin 0. We then have

$$\begin{aligned} \textstyle \lim _{t\rightarrow \infty } \tfrac{L^{0}_{t}}{t} \overset{\text {law}}{=} \Big [\sum _{\left| x\right| \le n}e^{T_{x}}\Big ]^{-1}, \end{aligned}$$
(4.1)

Proof

Trivially one has \(t = \sum _{\left| x\right| \le n} L_{t}^{x}\). Consequently,

$$\begin{aligned} \lim _{t\rightarrow \infty } \tfrac{L^{0}_{t}}{t} = \lim _{t\rightarrow \infty } \Big [\sum _{\left| x\right| \le n}L^{x}_{t}/L^{0}_{t}\Big ]^{-1}. \end{aligned}$$
(4.2)

Hence, the claim follows from Corollary 2.3. \(\square \)

Proof of Theorem 1.3

In light of Lemma 4.1 we consider a t-field \(\{T_{x}\}\) on \(\mathbb {T}_{d}\), rooted at the origin. In the following we analyse the asymptotic behaviour of the random variable \(\sum _{\left| x\right| \le n}e^{T_{x}}\).

Case \(\beta _{\textrm{c}}< \beta < \beta _{\textrm{c}}^{\textrm{erg}}\): We note that it suffices to show

$$\begin{aligned} \sum _{\left| x\right| \le n} e^{T_{x}} = e^{n \gamma _{\beta } + o(n)} \quad \text {a.s. for} \quad n\rightarrow \infty \quad \text {with} \quad \gamma _{\beta } = \inf _{\eta> 0} \psi _{\beta }(\eta )/\eta > 0, \end{aligned}$$
(4.3)

since we have \(0< \gamma _{\beta } < \log (d)\) by Proposition 3.5. The lower bound in (4.3) follows from Proposition 2.16:

$$\begin{aligned} \sum _{\left| x\right| \le n} e^{T_{x}} \ge \sum _{\left| x\right| = n} e^{T_{x}} \ge e^{\max _{\left| x\right| = n} T_{x}} = e^{n\gamma _{\beta } + o(n)}. \end{aligned}$$
(4.4)

For the upper bound in (4.3) note that for \(\eta \in (0,1)\) and \(\epsilon > 0\) we have

$$\begin{aligned} \begin{aligned} \mathbb {P}[\sum _{\left| x\right| \le n} e^{T_{x}} > e^{n(\gamma _{\beta } + \epsilon )}]&\le e^{-n \eta (\gamma _{\beta } + \epsilon )} \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }]\\&\le e^{-n \eta (\gamma _{\beta } + \epsilon )} \mathbb {E}[\sum _{\left| x\right| \le n} e^{\eta T_{x}}]\\&= e^{-n \eta (\gamma _{\beta } + \epsilon )} \sum _{k=0}^{n} e^{\psi (\eta ) k} \end{aligned} \end{aligned}$$
(4.5)

Now let \(\eta = \eta _{\beta }\) as in Lemma 3.3, i.e. such that \(\gamma _{\beta } = \psi _{\beta }(\eta _{\beta })/\eta _{\beta } > 0\). Note that by Proposition 3.5, we have \(\gamma _{\beta } \in (0,\log (d))\) and \(\eta _{\beta } \in (0,1)\), so that (4.5) applies. With this choice, (4.5) and the Borel–Cantelli lemma imply \(\limsup _{n\rightarrow \infty }\tfrac{1}{n}\log \sum _{\left| x\right| \le n} e^{T_{x}} \le \gamma _{\beta } + \epsilon \) almost surely for any \(\epsilon > 0\). This yields the upper bound in (4.3).

Case \(\beta \le \beta _{\textrm{c}}\): This proceeds similarly to the previous case. For the lower bound we simply use \(\sum _{\left| x\right| \le n} e^{T_{x}} \ge e^{T_{0}} = 1\). For the upper bound we use (4.5) with \(\gamma _{\beta } \mapsto 0\) and \(\eta = 1/2\), which implies that \(\limsup _{n\rightarrow \infty }\tfrac{1}{n}\log \sum _{\left| x\right| \le n} e^{T_{x}} \le \epsilon \) almost surely for any \(\epsilon > 0\).

Case \(\beta > \beta _{\textrm{c}}^{\textrm{erg}}\): First note that the quantity \(W_{n} :=d^{-n} \sum _{\left| x\right| = n} e^{T_{x}}\) is a martingale. In the branching random walk literature this is referred to as the additive martingale associated with the BRW \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\). Since \(W_{n}\) is non-negative it converges almost surely to a random variable \(W_{\infty } = \lim _{n\rightarrow \infty } W_{n}\). Biggins' martingale convergence theorem [58, Theorem 3.2] implies that for \(\beta > \beta _{\textrm{c}}^{\textrm{erg}}\) (equivalently \(\psi _{\beta }'(1) < \psi _{\beta }(1)\), see Proposition 3.5), the sequence is uniformly integrable and the limit \(W_{\infty }\) is almost surely strictly positive. Consequently we also get convergence for the weighted average

$$\begin{aligned} \frac{1}{\left| \mathbb {T}_{d,n}\right| } \sum _{\left| x\right| \le n}e^{T_{x}} = \frac{1}{\left| \mathbb {T}_{d,n}\right| }\sum _{k=0}^{n}d^{k}W_{k} \rightarrow W_{\infty } > 0 \quad \text {a.s. for } \quad n\rightarrow \infty . \end{aligned}$$
(4.6)

In other words,

$$\begin{aligned} \sum _{\left| x\right| \le n}e^{T_{x}} \sim \left| \mathbb {T}_{d,n}\right| W_{\infty } = d^{n + O(1)} \quad \text {as} \quad n\rightarrow \infty , \end{aligned}$$
(4.7)

which implies the claim for \(\beta > \beta _{\textrm{c}}^{\textrm{erg}}\). \(\square \)
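The three regimes just distinguished can also be located numerically. The sketch below is an illustration only and rests on two assumptions not restated here: \(\psi _{\beta }(\eta ) = \log d + \log \mathbb {E}[e^{\eta T}]\) with the Bessel-K expression for \(\mathbb {E}[e^{\eta T}]\) from (3.7), and the characterisations \(\gamma _{\beta } \le 0\) for \(\beta \le \beta _{\textrm{c}}\) and \(\psi _{\beta }'(1) < \psi _{\beta }(1)\) for \(\beta > \beta _{\textrm{c}}^{\textrm{erg}}\) used above.

```python
import numpy as np
from scipy.special import kv                      # modified Bessel function K_nu
from scipy.optimize import minimize_scalar

d = 2

def psi(eta, beta):
    # psi_beta(eta) = log d + log E[e^{eta T}], with the Bessel-K formula of (3.7)
    return np.log(d) + 0.5 * np.log(2 * beta / np.pi) + beta + np.log(kv(eta - 0.5, beta))

for beta in [0.2, 0.5, 1.0, 2.0, 5.0]:
    res = minimize_scalar(lambda e: psi(e, beta) / e, bounds=(1e-3, 10.0), method="bounded")
    gamma_beta, eta_beta = res.fun, res.x
    dpsi_at_1 = (psi(1 + 1e-5, beta) - psi(1 - 1e-5, beta)) / 2e-5       # psi_beta'(1)
    if gamma_beta <= 0:
        regime = "beta <= beta_c"
    elif dpsi_at_1 > psi(1.0, beta):
        regime = "intermediate phase"
    else:
        regime = "ergodic phase"
    print(f"beta={beta:4.1f}  gamma_beta={gamma_beta:+.3f}  eta_beta={eta_beta:.2f}  {regime}")
```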

4.2 Multifractality of the intermediate phase (Proof of Theorem 1.4)

For the proof we will make use of explicit large deviation asymptotics for the maximum of the t-field. These follow (as an easy special case) from results due to Gantert and Höfelsauer on the large deviations of the maximum of a branching random walk [67, Theorem 3.2]:

Lemma 4.2

Consider the t-field \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\) on \(\mathbb {T}_{d}\), pinned at the origin 0. Let \(\gamma _{\beta } = \inf _{\eta > 0} \psi _{\beta }(\eta )/\eta \) as in (3.11). For any \(\gamma > \gamma _{\beta }\) we have

$$\begin{aligned} \textstyle \liminf _{n\rightarrow \infty } \tfrac{1}{n} \log \mathbb {P}[\max _{\left| x\right| = n} T_{x} \ge n\gamma ] = - \sup _{\eta \in \mathbb {R}} [\gamma \eta - \psi _{\beta }(\eta )] < 0. \end{aligned}$$
(4.8)

Proof

As noted, this is a direct consequence of [67, Theorem 3.2]. To be precise, we consider the special case of a deterministic offspring distribution (instead of Galton-Watson trees) and fluctuations above the asymptotic velocity \(\gamma _{\beta }\) (corresponding to the case \(x > x^{*}\) in [67]). In this case, the rate function given by Gantert and Höfelsauer (denoted by \(x\mapsto I(x) - \log (m)\) in their article) is equal to

$$\begin{aligned} \gamma \mapsto \sup _{\eta \in \mathbb {R}} (\gamma \eta - \log \mathbb {E}[e^{\eta T}]) - \log d = \sup _{\eta \in \mathbb {R}} [\gamma \eta - \psi _{\beta }(\eta )]. \end{aligned}$$
(4.9)

This concludes the proof. \(\square \)
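As an aside, the rate function in (4.8) is straightforward to evaluate numerically. The sketch below reuses the same (assumed) Bessel-K expression for \(\psi _{\beta }\) as above; the values \(d = 2\), \(\beta = 1.5\) are arbitrary. It merely illustrates that \(\sup _{\eta \in \mathbb {R}}[\gamma \eta - \psi _{\beta }(\eta )]\) is strictly positive for \(\gamma > \gamma _{\beta }\), which is the only feature used in the proof of Theorem 1.4 below.

```python
import numpy as np
from scipy.special import kv
from scipy.optimize import minimize_scalar

d, beta = 2, 1.5          # illustrative values only

def psi(eta):
    return np.log(d) + 0.5 * np.log(2 * beta / np.pi) + beta + np.log(kv(eta - 0.5, beta))

def rate(gamma):
    # gamma*eta - psi(eta) is concave in eta (psi is convex), so a bounded
    # one-dimensional maximisation suffices
    res = minimize_scalar(lambda e: -(gamma * e - psi(e)), bounds=(-10.0, 10.0), method="bounded")
    return -res.fun

gamma_beta = minimize_scalar(lambda e: psi(e) / e, bounds=(1e-3, 10.0), method="bounded").fun
for gamma in [gamma_beta + 0.1, gamma_beta + 0.5, gamma_beta + 1.0]:
    print(f"gamma={gamma:.3f}   sup_eta [gamma*eta - psi(eta)] = {rate(gamma):.4f}")
```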

Proof of Theorem 1.4

By Lemma 4.1, we would like to understand fractional moments of

$$\begin{aligned}{}[\lim _{t\rightarrow \infty }L_{t}^{0}/t]^{-1} \overset{\tiny \text {law}}{=} \sum _{\left| x\right| \le n} e^{T_{x}}, \end{aligned}$$
(4.10)

where \(\{T_{x}\}_{x\in \mathbb {T}_{d}}\) denotes a t-field on the rooted \((d+1)\)-regular tree, pinned at the origin. Recall the definition of \(\eta _{\beta }\) in (3.11) and Lemma 3.3. For \(\beta \in (\beta _{\textrm{c}}, \beta _{\textrm{c}}^{\textrm{erg}})\) we have \(\eta _{\beta } \in (0,1)\) by Proposition 3.5.

Case \(\eta \in \left( 0,\eta _{\beta } \right] \): We recall Proposition 2.16, which implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n} \max _{\left| x\right| = n} T_{x} = \gamma _{\beta } = \psi _{\beta }(\eta _{\beta })/\eta _{\beta }. \end{aligned}$$
(4.11)

Using \(\sum _{\left| x\right| \le n} e^{T_{x}} \ge e^{\max _{\left| x\right| = n} T_{x}}\), Jensen’s inequality and Fatou’s lemma (together with (4.11)) we get

$$\begin{aligned} \begin{aligned} \liminf _{n \rightarrow \infty } \tfrac{1}{n} \log \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }]&\ge \liminf _{n \rightarrow \infty } \tfrac{1}{n} \log \mathbb {E}[e^{\eta \max _{\left| x\right| =n}T_{x}}]\\&\ge \liminf _{n\rightarrow \infty } \frac{\eta }{n} \mathbb {E}[\max _{\left| x\right| = n} T_{x}]\\&\ge \eta \psi _{\beta }(\eta _{\beta })/\eta _{\beta }. \end{aligned} \end{aligned}$$
(4.12)

On the other hand, since \(\eta / \eta _{\beta } \le 1\), Jensen’s inequality gives

$$\begin{aligned} \begin{aligned} \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }] \le \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta _{\beta }}]^{\eta /\eta _{\beta }} \end{aligned} \end{aligned}$$
(4.13)

For any \(\eta \in (0,1)\) and \(\beta > \beta _{\textrm{c}}\) we can bound

$$\begin{aligned} \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }] \le \mathbb {E}[\sum _{\left| x\right| \le n} e^{\eta T_{x}}] \le \sum _{k=0}^{n} e^{k \psi _{\beta }(\eta )} \le e^{n \psi _{\beta }(\eta ) + o(n)}, \end{aligned}$$
(4.14)

where we used that \(\inf _{\eta> 0} \psi _{\beta }(\eta ) = \psi _{\beta }(1/2) > 0\) for \(\beta > \beta _{\textrm{c}}\) (cf. (3.10), (3.9) and (3.16)). Applying (4.14) with \(\eta = \eta _{\beta }\) to the right hand side of (4.13), we obtain

$$\begin{aligned} \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }] \le e^{n\, \eta \, \psi _{\beta }(\eta _{\beta })/\eta _{\beta } + o(n)} \end{aligned}$$
(4.15)

Case \(\eta \in \left[ \eta _{\beta }, 1 \right) \): The upper bound already follows from (4.14). For the lower bound we start with

$$\begin{aligned} \begin{aligned} \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }]&\ge \mathbb {E}[e^{\eta \max _{\left| x\right| = n} T_{x}}]\\&\ge e^{n\eta \gamma }\, \mathbb {P}[\max _{\left| x\right| = n} T_{x} \ge n\gamma ] \quad \text {for any} \quad \gamma > 0. \end{aligned} \end{aligned}$$
(4.16)

We get that for any \(\gamma > 0\):

$$\begin{aligned} \liminf _{n\rightarrow \infty } \tfrac{1}{n} \log \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }] \ge \eta \gamma + \liminf _{n\rightarrow \infty } \tfrac{1}{n} \log \mathbb {P}[\max _{\left| x\right| = n} T_{x} \ge n\gamma ]. \end{aligned}$$
(4.17)

By Lemma 4.2, we have

$$\begin{aligned} \liminf _{n\rightarrow \infty } \tfrac{1}{n} \log \mathbb {E}[(\sum _{\left| x\right| \le n} e^{T_{x}})^{\eta }] \ge \sup _{\gamma > \gamma _{\beta }} \Big (\eta \gamma - \sup _{\tilde{\eta } \in \mathbb {R}} [\gamma \tilde{\eta } - \psi _{\beta }(\tilde{\eta })]\Big ). \end{aligned}$$
(4.18)

We claim that the right hand side of (4.18) is equal to \(\psi _{\beta }(\eta )\). For the upper bound simply choose \(\tilde{\eta } = \eta \). For the lower bound first note that the supremum of \(\tilde{\eta }\mapsto \gamma \tilde{\eta } - \psi _{\beta }(\tilde{\eta })\) is attained at the unique \(\tilde{\eta }\) such that \(\psi _{\beta }^{\prime }(\tilde{\eta }) = \gamma \) (uniqueness follows from the convexity of \(\eta \mapsto \psi _{\beta }(\eta )\)). Since we assumed \(\eta > \eta _{\beta }\), we may choose \(\gamma = \psi _{\beta }^{\prime }(\eta )\), satisfying \(\gamma > \gamma _{\beta } = \psi _{\beta }^{\prime }(\eta _{\beta })\). Together with the previous observation this shows that the right hand side is at least \(\psi _{\beta }(\eta )\). This concludes the proof. \(\square \)
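For orientation, the two cases just treated combine to the following fractional-moment asymptotics, whose non-linearity in \(\eta \) is the multifractal behaviour asserted in Theorem 1.4:

$$\begin{aligned} \lim _{n\rightarrow \infty } \tfrac{1}{n} \log \mathbb {E}\Big [\Big (\sum _{\left| x\right| \le n} e^{T_{x}}\Big )^{\eta }\Big ] = \begin{cases} \eta \, \psi _{\beta }(\eta _{\beta })/\eta _{\beta } = \eta \, \gamma _{\beta }, & \eta \in (0, \eta _{\beta }],\\ \psi _{\beta }(\eta ), & \eta \in [\eta _{\beta }, 1). \end{cases} \end{aligned}$$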

4.3 On the intermediate phase for wired boundary conditions

We recall that for the Anderson transition it was debated whether an intermediate multifractal phase persists in the infinite volume and on tree-like graphs without free boundary conditions (see Sect. 1.3).

We conjecture that there is no intermediate phase for the VRJP on regular trees with wired boundary conditions. In this section, we would like to provide some evidence for this claim, based on recent work by Rapenne [19].

Let \(\overline{\mathbb {T}}_{d,n}\) denote the rooted \((d+1)\)-regular tree of depth n with wired boundary, i.e. all vertices at generation n have an outgoing edge to a single boundary ghost \(\mathfrak {g}\). We consider \(\mathbb {T}_{d,n} \subset \overline{\mathbb {T}}_{d,n}\) as the subgraph induced by the vertices excluding the ghost. Let \(\{\overline{T}^{\mathfrak {g}}_{x}\}_{x\in \overline{\mathbb {T}}_{d,n}}\) denote a t-field on the wired tree \(\overline{\mathbb {T}}_{d,n}\), pinned at the ghost \(\mathfrak {g}\), and at inverse temperature \(\beta \). We define

$$\begin{aligned} \psi _{n}(x) = e^{\overline{T}^{\mathfrak {g}}_{x}} \text { for } x\in \mathbb {T}_{d,n}, \end{aligned}$$
(4.19)

where we use the index n to make the dependence on the underlying domain \(\overline{\mathbb {T}}_{d,n}\) more explicit. This coincides with the (vector) martingale \(\{\psi _{n}(x)\}_{x \in \mathbb {T}_{d,n}}\) considered by Rapenne (see [32, Lemma 2] for a proof that these are in fact the same). By [19, Theorem 2] we have for \(\beta > \beta _{\textrm{c}}\) and \(p\in (1,\infty )\)

$$\begin{aligned} \textstyle \sup _{n\ge 1} \mathbb {E}_{\beta }[\psi _{n}(0)^{p}] < \infty . \end{aligned}$$
(4.20)

Our statement about the absence of an intermediate phase will be conditional on a (conjectural) extension of this result:

$$\begin{aligned} \text { Conjecture: } \sup _{n\ge 1} \frac{1}{\left| \mathbb {T}_{d,n}\right| } \sum _{x \in \mathbb {T}_{d,n}} \mathbb {E}_{\beta }[\psi _{n}(x)^{p}] < \infty \quad \text {for} \quad p>1 \text { and } \beta > \beta _{\textrm{c}}. \end{aligned}$$
(4.21)

We believe this statement to be true due to the following heuristic: Given that the origin of \(\overline{\mathbb {T}}_{d,n}\) is furthest away from the ghost \(\mathfrak {g}\), at which the t-field in (4.19) is pinned, we expect the fluctuations of \(\psi _{n}(x)\) to be largest at \(x=0\). Hence, we expect the moments of \(\psi _{n}(x)\) to be comparable with the ones of \(\psi _{n}(0)\), in which case (4.20) would imply (4.21).

Proposition 4.3

Consider a VRJP started from the root of \(\overline{\mathbb {T}}_{d,n}\) and let \(L_{t}^{0}\) denote the time it has spent at the root up to time t. Assume (4.21) holds true. Then, for any \(\beta > \beta _{\textrm{c}}\)

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{L_{t}^{0}}{t} \le \left| \mathbb {T}_{d,n}\right| ^{-1 + o(1)} \quad \text {w.h.p.\ as} \quad n\rightarrow \infty . \end{aligned}$$
(4.22)

This is to be contrasted with the behaviour in Theorem 1.3.

Proof

Let \(\{\overline{T}_{x}\}_{x\in \overline{\mathbb {T}}_{d,n}}\) denote the t-field on \(\overline{\mathbb {T}}_{d,n}\), pinned at the origin 0. We stress that this is different from \(\overline{T}^{\mathfrak {g}}_{x}\), as used in (4.19), which is pinned at the ghost \(\mathfrak {g}\). However, we can sample the former from the latter: First consider an STZ-Anderson operator \(H_{B}\) on the infinite graph \(\mathbb {T}_{d}\), as defined in Definition 1.8. Define \(\hat{G}_{n} :=(H_{B}\vert _{\mathbb {T}_{d,n}})^{-1}\) and also define \(\{\psi _{n}(x)\}_{x\in \mathbb {T}_{d}}\) by

$$\begin{aligned} (H_{B}\psi _{n})\vert _{\mathbb {T}_{d,n}} = 0 \text { and } \psi _{n}\vert _{\mathbb {T}_{d} \setminus \mathbb {T}_{d,n}} \equiv 1. \end{aligned}$$
(4.23)

By [32, Lemma 2], the \(\psi _{n}\) so defined (and restricted to \(\mathbb {T}_{d,n}\)) agree in law with the definition in (4.19). Then define \(\overline{T}_{x}\) for \(x\in \mathbb {T}_{d,n}\) via

$$\begin{aligned} e^{\overline{T}_{x}} = \frac{\hat{G}_{n}(0,x) + \frac{1}{2\gamma } \psi _{n}(0)\psi _{n}(x)}{\hat{G}_{n}(0,0) + \frac{1}{2\gamma } \psi _{n}(0)^{2}}, \end{aligned}$$
(4.24)

where \(\gamma \sim \mathrm {Gamma}(\tfrac{1}{2}, 1)\) is independent of \(H_{B}\). By [32, Proposition 8], \(\{\overline{T}_{x}\}_{x\in \mathbb {T}_{d,n}}\) has the law of a t-field on \(\overline{\mathbb {T}}_{d,n}\), pinned at the origin 0 (and restricted to \(\mathbb {T}_{d,n}\)). Note that \(\overline{T}_{\mathfrak {g}}\) is not defined by (4.24). Using the conditional law of the t-field on \(\overline{\mathbb {T}}_{d,n}\) given its values away from the ghost, we can however define it such that \(\{\overline{T}_{x}\}_{x\in \overline{\mathbb {T}}_{d,n}}\) is the “full” t-field on \(\overline{\mathbb {T}}_{d,n}\), pinned at the origin. Then, as in (4.1), we have that

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{L_{t}^{0}}{t} \overset{\text {law}}{=} \Bigg [\sum _{x \in \overline{\mathbb {T}}_{d,n}}e^{\overline{T}_{x}}\Bigg ]^{-1}. \end{aligned}$$
(4.25)

By (4.24) and positivity of \(\hat{G}_{n}\) we get

$$\begin{aligned} \sum _{x \in \overline{\mathbb {T}}_{d,n}}e^{\overline{T}_{x}} \ge \sum _{x \in \mathbb {T}_{d,n}}e^{\overline{T}_{x}} \ge \frac{\psi _{n}(0)}{2\gamma \hat{G}_{n}(0,0) + \psi _{n}(0)^{2}} \sum _{x \in \mathbb {T}_{d,n}} \psi _{n}(x). \end{aligned}$$
(4.26)

By [32, Theorem 1], for \(\beta > \beta _{\textrm{c}}\) the fraction on the right hand side converges a.s. to a (random) positive number as \(n\rightarrow \infty \). Hence, the claim in (4.22) follows if we show that \(\sum _{x \in \mathbb {T}_{d,n}} \psi _{n}(x) \ge \left| \mathbb {T}_{d,n}\right| ^{1 - o(1)}\) a.s. as \(n\rightarrow \infty \). For any \(s>0\) and \(q\ge 1\) we have

$$\begin{aligned} \begin{aligned} \mathbb {P}[\sum _{x \in \mathbb {T}_{d,n}} \psi _{n}(x) \le s \left| \mathbb {T}_{d,n}\right| ]&= \mathbb {P}[(\frac{1}{\left| \mathbb {T}_{d,n}\right| } \sum _{x \in \mathbb {T}_{d,n}} \psi _{n}(x))^{-q} \ge s^{-q}]\\&\le s^{q}\, \mathbb {E}[(\frac{1}{\left| \mathbb {T}_{d,n}\right| } \sum _{x \in \mathbb {T}_{d,n}} \psi _{n}(x))^{-q}]\\&\le s^{q} \frac{1}{\left| \mathbb {T}_{d,n}\right| } \sum _{x \in \mathbb {T}_{d,n}} \mathbb {E}[\psi _{n}(x)^{-q}]\\&= s^{q} \frac{1}{\left| \mathbb {T}_{d,n}\right| } \sum _{x \in \mathbb {T}_{d,n}} \mathbb {E}[\psi _{n}(x)^{1+q}], \end{aligned} \end{aligned}$$
(4.27)

where in the last line we used the reflection property of the t-field (see Lemma C.1). Subject to the assumption that (4.21) holds true, we may choose \(q=1\) and \(s = n^{-2}\) in (4.27). An application of the Borel-Cantelli lemma then yields that \(\sum _{x \in \mathbb {T}_{d,n}} \psi _{n}(x) \ge \left| \mathbb {T}_{d,n}\right| ^{1 - o(1)}\) a.s. as \(n\rightarrow \infty \). Together with (4.25) and (4.26), this implies (4.22). \(\square \)

5 Results for the \(\mathbb {H}^{2|2}\)-Model

5.1 Asymptotics for the \(\mathbb {H}^{2|2}\)-model as \(\beta \searrow \beta _{\textrm{c}}\) (Proof of Theorem 1.5)

Proof of Theorem 1.5

By Theorem 1.2 it suffices to show that

$$\begin{aligned} \langle x_{0}^{2} \rangle _{\beta }^{+} = \lim _{h\searrow 0} \lim _{n\rightarrow \infty } \langle x_{0}^{2} \rangle _{\beta ;h,\mathbb {T}_{d,n}} = \mathbb {E}_{\beta }[L_{\infty }^{0}]. \end{aligned}$$
(5.1)

For this, we use the \(\mathbb {H}^{2|2}\)-Dynkin isomorphism (Theorem 2.1):

$$\begin{aligned} \langle x_{0}^{2} \rangle _{\beta ;h,\mathbb {T}_{d,n}} = \int \limits _{0}^{\infty }\text {d}{t} \mathbb {E}_{\beta ;\mathbb {T}_{d,n}}\big [e^{-ht}\, \mathbbm {1}_{X_{t}=0}\big ], \end{aligned}$$
(5.2)

where, subject to \(\mathbb {E}_{\beta ;\mathbb {T}_{d,n}}\), \((X_{t})_{t\ge 0}\) is a VRJP on \(\mathbb {T}_{d,n}\) started at 0. Coupling the VRJP on \(\mathbb {T}_{d,n}\) with a VRJP on the infinite tree \(\mathbb {T}_{d}\) up to the time they first visit the leaves of \(\mathbb {T}_{d,n}\), we get

$$\begin{aligned} \left| \mathbb {E}_{\beta ;\mathbb {T}_{d,n}}[\mathbbm {1}_{X_{t} = 0}] - \mathbb {E}_{\beta ;\mathbb {T}_{d}}[\mathbbm {1}_{X_{t} = 0}]\right| \le \mathbb {P}_{\beta ;\mathbb {T}_{d}}[T_{n} \le t], \end{aligned}$$
(5.3)

with \(T_{n}\) being the VRJP’s hitting time of \(\partial \mathbb {T}_{d,n} = \{x\in \mathbb {T}_{d,n}: \left| x\right| = n\}\). By definition of the VRJP, the time it takes to reach \(\partial \mathbb {T}_{d,n}\) is stochastically lower bounded by an exponential random variable of rate \(d\beta /n\). Consequently, the right hand side of (5.3) converges to zero as \(n\rightarrow \infty \). By this observation, dominated convergence in n (with dominating function \(e^{-ht}\)) and the monotone convergence theorem as \(h\searrow 0\) we have

$$\begin{aligned} \langle x_{0}^{2} \rangle _{\beta }^{+} = \lim _{h\searrow 0} \int \limits _{0}^{\infty }\text {d}{t} e^{-ht} \mathbb {E}_{\beta ;\mathbb {T}_{d}}\big [\mathbbm {1}_{X_{t}=0}\big ] = \int \limits _{0}^{\infty }\text {d}{t} \mathbb {E}_{\beta ;\mathbb {T}_{d}}\big [\mathbbm {1}_{X_{t}=0}\big ] = \mathbb {E}_{\beta ;\mathbb {T}_{d}}[L_{\infty }^{0}], \end{aligned}$$
(5.4)

which proves the claim. \(\square \)

5.2 Intermediate phase for the \(\mathbb {H}^{2|2}\)-model (Proof of Theorem 1.6)

In this section, we want to prove Theorem 1.6 on the intermediate phase of the \(\mathbb {H}^{2|2}\)-model. We will make use of the STZ-Anderson model, as defined in Definition 1.8, and of its restriction properties as discussed in [8, 68].

The proof consists of three parts: First we evaluate the quantity on the left hand side of (1.21) on a graph consisting of a single vertex (and a coupling to a ghost vertex). Then we reduce the actual quantity in (1.21) to the case of a single vertex with a random effective magnetic field \(h^{\textrm{eff}}\). As \(h\searrow 0\), the law of \(h^{\textrm{eff}}\) can be expressed in terms of the t-field and we can deduce Theorem 1.6 from Theorem 1.4 on the VRJP’s multifractality.

Lemma 5.1

Consider the \(\mathbb {H}^{2|2}\)-model on a single vertex 0 with magnetic field \(h>0\). For \(\eta \in (0,1)\) we have

$$\begin{aligned} \langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle _{h;\{0\}} = h^{\eta } \times g_{\eta }(h) \end{aligned}$$
(5.5)

with

$$\begin{aligned} g_{\eta }(h) :=\frac{1}{\pi }e^{h} (2h)^{(1-\eta )/2}\, \Gamma (\tfrac{1}{2} - \tfrac{\eta }{2}) K_{(1-\eta )/2}(h). \end{aligned}$$
(5.6)

In particular

$$\begin{aligned} c_{\eta } :=\frac{1}{\pi } 2^{-\eta } \, \Gamma (\tfrac{1}{2} - \tfrac{\eta }{2})^{2} = \lim _{h\searrow 0} g_{\eta }(h) \end{aligned}$$
(5.7)

Proof

For convenience, let us write \(\langle \cdot \rangle = \langle \cdot \rangle _{h;\{0\}}\). By \(e^{t_{0}} = z_{0}+x_{0}\) and \(y_{0} = s_{0}e^{t_{0}}\), see (2.12), together with the rotation symmetry in the \((x,y)\)-plane (which gives \(\langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle = \langle z_{0} \left| y_{0}\right| ^{-\eta } \rangle \) and \(\langle x_{0} \left| y_{0}\right| ^{-\eta } \rangle = 0\)), we have

$$\begin{aligned} \langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle&= \langle z_{0} \left| y_{0}\right| ^{-\eta } \rangle = \langle (e^{t_{0}} - x_{0}) \left| y_{0}\right| ^{-\eta } \rangle = \langle e^{t_{0}} \left| y_{0}\right| ^{-\eta } \rangle \\&= \langle e^{t_{0}} \left| s_{0}\right| ^{-\eta } e^{-\eta t_{0}} \rangle = \langle e^{(1-\eta ) t_{0}} \left| s_{0}\right| ^{-\eta } \rangle . \end{aligned}$$
(5.8)

The last line can be interpreted in purely probabilistic terms: \(t_{0}\) follows the law of a t-field increment with inverse temperature \(h>0\) and conditionally on \(t_{0}\), \(s_{0}\) is a Gaussian random variable with variance \(e^{-t_{0}}/h\). Consequently,

$$\begin{aligned} \begin{aligned} \mathbb {E}[|s_{0}|^{-\eta }|t_{0}]&= \sqrt{\frac{h e^{t_{0}}}{2\pi }} \int _{-\infty }^{+\infty } \text {d}{s} |s|^{-\eta } e^{-he^{t_{0}} s^{2}/2}\\&= (h e^{t_{0}})^{\eta /2} \frac{1}{\sqrt{2\pi }} \int _{-\infty }^{+\infty } \text {d}{x} |x|^{-\eta } e^{-x^{2}/2}\\&= (h e^{t_{0}})^{\eta /2}\, \frac{2^{-\eta /2}}{\sqrt{\pi }} \Gamma (\tfrac{1}{2} - \tfrac{\eta }{2}). \end{aligned} \end{aligned}$$
(5.9)

With (5.8) we obtain

$$\begin{aligned} \langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle = h^{\eta /2}\, \frac{2^{-\eta /2}}{\sqrt{\pi }} \Gamma (\tfrac{1}{2} - \tfrac{\eta }{2}) \mathbb {E}_{h}[e^{(1-\eta /2) T}], \end{aligned}$$
(5.10)

where T denotes a t-field increment at inverse temperature h. Expressing the exponential moments of T in terms of the modified Bessel function of the second kind \(K_{\alpha }\), as in (3.7), and using small-argument asymptotics for the latter, we obtain

$$\begin{aligned} \mathbb {E}_{h}[e^{(1-\eta /2) T}] = \frac{\sqrt{2h} e^{h}}{\sqrt{\pi }} K_{(1-\eta )/2}(h) \sim h^{\eta /2} \times \frac{2^{(1-\eta )/2} \Gamma (\tfrac{1}{2} - \tfrac{\eta }{2})}{\sqrt{2\pi }} \quad \text {as} \quad h\searrow 0. \qquad \end{aligned}$$
(5.11)

Inserting this into (5.10) yields the claim. \(\square \)
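The closed form (5.5)–(5.6) and the limit (5.7) are easy to sanity-check numerically. The sketch below (an illustration only) compares the closed form \(h^{\eta } g_{\eta }(h)\) with the product of the Gaussian constant from (5.9) and the Bessel moment in (5.11), and displays the convergence \(g_{\eta }(h) \rightarrow c_{\eta }\) as \(h\searrow 0\); the value \(\eta = 0.4\) is arbitrary.

```python
import numpy as np
from scipy.special import kv, gamma as Gamma

def g(eta, h):
    # g_eta(h) from (5.6)
    return np.exp(h) * (2 * h) ** ((1 - eta) / 2) * Gamma(0.5 - eta / 2) * kv((1 - eta) / 2, h) / np.pi

def via_moments(eta, h):
    # constant from (5.9) times the Bessel moment E_h[e^{(1-eta/2)T}] from (5.11)
    moment = np.sqrt(2 * h / np.pi) * np.exp(h) * kv((1 - eta) / 2, h)
    return h ** (eta / 2) * 2 ** (-eta / 2) / np.sqrt(np.pi) * Gamma(0.5 - eta / 2) * moment

eta = 0.4
c_eta = 2 ** (-eta) * Gamma(0.5 - eta / 2) ** 2 / np.pi                  # c_eta from (5.7)
for h in [1.0, 0.1, 0.01, 0.001]:
    print(f"h={h:6.3f}   h^eta*g={h**eta * g(eta, h):.6f}   product={via_moments(eta, h):.6f}"
          f"   g(h)={g(eta, h):.4f}   c_eta={c_eta:.4f}")
```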

Effective Weight. Before proceeding, we need to introduce the notion of effective weight for the STZ-field: Consider an STZ-Anderson model \(H_{B}\) as in Definition 1.8 and suppose the underlying graph \(G = (V,E)\) is finite. Write \(G_{B} = (H_{B})^{-1}\). Then, for \(i_{0},j_{0}\in V\), the effective weight between these two vertices is defined by

$$\begin{aligned} \beta _{i_{0}j_{0}}^{\textrm{eff}} :=\frac{G_{B}(i_{0},j_{0})}{G_{B}(i_{0},i_{0})G_{B}(j_{0},j_{0}) - G_{B}(i_{0},j_{0})^{2}}. \end{aligned}$$
(5.12)

Another expression can be deduced using the Schur complement: Write \(V_{0} = \{i_{0}, j_{0}\}\) and \(V_{1} = V\setminus \{{i_{0}, j_{0}}\}\) and decompose \(H_{B}\) as

$$\begin{aligned} H_{B} = \begin{pmatrix} H_{00} & H_{01}\\ H_{10} & H_{11} \end{pmatrix}, \end{aligned}$$
(5.13)

with \(H_{00}\) being the restriction of \(H_{B}\) to entries with indices in \(V_{0}\) and analogously for the other submatrices. By the Schur complement formula we have

$$\begin{aligned} \begin{aligned} G_{B}\vert _{V_{0}}&= H_{B}^{-1}\vert _{V_{0}}\\&= (H_{00} - H_{01}H_{11}^{-1}H_{10})^{-1}\\&=\begin{pmatrix} B_{i_{0}} - [H_{01}H_{11}^{-1}H_{10}](i_{0},i_{0}) & -\beta _{i_{0}j_{0}} - [H_{01}H_{11}^{-1}H_{10}](i_{0},j_{0})\\ -\beta _{j_{0}i_{0}} - [H_{01}H_{11}^{-1}H_{10}](j_{0},i_{0}) & B_{j_{0}} - [H_{01}H_{11}^{-1}H_{10}](j_{0},j_{0}) \end{pmatrix}^{-1}. \end{aligned} \end{aligned}$$
(5.14)

Note that (5.12) reads as \(\beta _{i_{0}j_{0}}^{\textrm{eff}} = G_{B}(i_{0},j_{0}) / \det (G_{B}\vert _{V_{0}}) = G_{B}(i_{0},j_{0}) \det ([G_{B}\vert _{V_{0}}]^{-1})\). Hence using the familiar formula for the inverse of a \(2\times 2\)-matrix we get

$$\begin{aligned} \beta _{i_{0}j_{0}}^{\textrm{eff}} = \beta _{i_{0}j_{0}} + [H_{01}H_{11}^{-1}H_{10}](i_{0},j_{0}), \end{aligned}$$
(5.15)

which is measurable with respect to \(B\vert _{V_{1}}\). The relevance of the effective weight stems from the following lemma (see [8, Sect. 6]):

Lemma 5.2

For a finite graph \(G=(V,E)\) with positive edge-weights \(\{\beta _{ij}\}_{ij\in E}\) and a pinning vertex \(i_{0}\), consider the natural coupling of an STZ-field \((B_{i})_{i\in V}\) and a t-field \((T_{i})_{i\in V}\) (see Remark 2.10). For a vertex \(j_{0} \in V\setminus \{i_{0}\}\) write \(V_{0} :=\{i_{0}, j_{0}\}\) and \(V_{1} :=V\setminus \{{i_{0}, j_{0}}\}\).

Then, conditionally on \(B\vert _{V_{1}}\), the t-field \(T\vert _{V_{0}} = (T_{i_{0}}, T_{j_{0}})\) is distributed as a t-field on \(V_{0}\), pinned at \(i_{0}\), with edge-weight given by \(\beta _{i_{0}j_{0}}^{\textrm{eff}} = \beta _{i_{0}j_{0}}^{\textrm{eff}}(B\vert _{V_{1}})\).

Moreover, the notions of effective weight and effective conductance are directly related:

Lemma 5.3

(Effective Conductance vs. Weight) Consider the setting of Lemma 5.2. For \(j_{0} \in V\setminus \{i_{0}\}\), let \(C^{\textrm{eff}}_{i_{0}j_{0}}\) denote the effective conductance between \(i_{0}\) and \(j_{0}\) in the t-field environment \(\{\beta _{ij}e^{T_{i} + T_{j}}\}_{ij \in E}\). Then

$$\begin{aligned} C^{\textrm{eff}}_{i_{0}j_{0}} = e^{T_{j_{0}}} \beta ^{\textrm{eff}}_{i_{0}j_{0}}. \end{aligned}$$
(5.16)

This statement is proved in Appendix C. In the following, we will come back to the setting of the regular tree.
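Before returning to the tree, note that the equality of (5.12) and (5.15) is purely algebraic: it holds for any symmetric matrix \(H\) for which the relevant blocks are invertible, with \(\beta _{i_{0}j_{0}} = -H(i_{0},j_{0})\). The following toy check (with an arbitrary matrix, not a sample of the STZ-field) illustrates this.

```python
import numpy as np

# Toy numerical check (illustration only): the expressions (5.12) and (5.15) for
# the effective weight coincide for a generic symmetric matrix H, where
# beta_{i0 j0} = -H[i0, j0].
rng = np.random.default_rng(1)
n, i0, j0 = 6, 0, 1
A = rng.standard_normal((n, n))
H = A @ A.T + n * np.eye(n)                        # symmetric and invertible

G = np.linalg.inv(H)
eff_via_G = G[i0, j0] / (G[i0, i0] * G[j0, j0] - G[i0, j0] ** 2)           # (5.12)

V1 = [k for k in range(n) if k not in (i0, j0)]
correction = H[[i0], :][:, V1] @ np.linalg.inv(H[np.ix_(V1, V1)]) @ H[V1, :][:, [j0]]
eff_via_schur = -H[i0, j0] + correction[0, 0]                              # (5.15)

print(eff_via_G, eff_via_schur)                    # agree up to rounding errors
```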

Reduction to Two Vertices on the Tree. We denote by \(\tilde{\mathbb {T}}_{d,n}\) the graph obtained by adding an additional ghost vertex \(\mathfrak {g}\) connected to every vertex of the graph \(\mathbb {T}_{d,n}\). For the \(\mathbb {H}^{2|2}\)-model (and consequently the t-/s-field) we refer to the model on \(\mathbb {T}_{d,n}\) with magnetic field \(h>0\) as the model on \(\tilde{\mathbb {T}}_{d,n}\), pinned at the ghost \(\mathfrak {g}\), with weights \(\beta _{x\mathfrak {g}} = h\) between the ghost and any other vertex.

Lemma 5.4

(Effective Magnetic Field at the Origin). Consider the natural coupling of t-field, s-field and STZ-field on \(\tilde{\mathbb {T}}_{d,n}\), at inverse temperature \(\beta > 0\) and with magnetic field \(h>0\), pinned at the ghost \(\mathfrak {g}\). The random fields are denoted by \(T_{x}\), \(S_{x}\) and \(B_{x}\), respectively (\(x \in \tilde{\mathbb {T}}_{d,n}\)). Write \(V_{0} :=\{0, \mathfrak {g}\}\) and \(V_{1} :=\tilde{\mathbb {T}}_{d,n}\setminus \{0,\mathfrak {g}\}\) and define \(H_{11} :=H_{B}\vert _{V_{1}}\).

Conditionally on \(B\vert _{V_{1}}\), the t-/s-field at the origin \((T_{0}, S_{0})\) follows the law of a t-/s-field on \(\{0,\mathfrak {g}\}\) with effective magnetic field

$$\begin{aligned} h^{\textrm{eff}} :=\beta ^{\textrm{eff}}_{0\mathfrak {g}} = h + h \beta \sum _{x,y\in V_{1}: y\sim 0} H_{11}^{-1}(y,x). \end{aligned}$$
(5.17)

Proof

By Lemma 5.2, conditionally on \(B\vert _{V_{1}}\), the t-field at the origin \(T_{0}\) has the law of a t-field increment at inverse temperature \(h^{\textrm{eff}}\). We claim that the analogous fact is true for the joint measure of \((T_{0}, S_{0})\).

Recall that, conditionally on \(\{T_{x}\}\), the law of \(\{S_{x}\}\) is that of a Gaussian free field, pinned at \(\mathfrak {g}\), with edge-weights given by the t-field environment \(\{\beta _{ij}e^{T_{i} + T_{j}}\}\) over edges in \(\tilde{\mathbb {T}}_{d,n}\) with \(\beta _{x\mathfrak {g}} = h\). Let \(C_{0\mathfrak {g}}^{\textrm{eff}}\) denote the effective conductance between the origin 0 and the ghost \(\mathfrak {g}\) in the t-field environment. Then, conditionally on \(\{T_{x}\}\), we have that \(S_{0}\) is a centred normal random variable with variance given by the effective resistance \(1/C^{\textrm{eff}}_{0\mathfrak {g}}\) (see [6, Proposition 2.24]). By Lemma 5.3 we have \(C^{\textrm{eff}}_{0\mathfrak {g}} = e^{T_{0}} \beta _{0\mathfrak {g}}^{\textrm{eff}} = e^{T_{0}} h^{\textrm{eff}}\). To conclude, it suffices to note that \(h^{\textrm{eff}}\) is measurable with respect to \(B\vert _{V_{1}}\). \(\square \)

Lemma 5.5

(Law of Effective Magnetic Field as \(h\searrow 0\)). Consider the setting of Lemma 5.4. Further consider a t-field \(\{T^{\mathrm {(0)}}_{x}\}\) on \(\mathbb {T}_{d,n}\), pinned at the origin, at the same inverse temperature \(\beta \). Then we have that

$$\begin{aligned} \frac{h^{\textrm{eff}}}{h} \overset{\textrm{law}}{\longrightarrow } \sum _{x\in \mathbb {T}_{d,n}} e^{T^{\mathrm {(0)}}_{x}} \quad \text {as} \quad h\searrow 0. \end{aligned}$$
(5.18)

Proof

By (5.17) and since \(e^{T_{0}^{(0)}} = 1\), it suffices to show that, jointly for all \(x\in V_{1}\),

$$\begin{aligned} \beta \sum _{y\in V_{1}:y\sim 0}H_{11}^{-1}(y,x) \overset{\text {law}}{\longrightarrow } e^{T_{x}^{(0)}} \quad \text {as} \quad h\searrow 0. \end{aligned}$$
(5.19)

We start by decomposing the restriction of \(H_{B}\) to \(\mathbb {T}_{d,n}\), i.e. without the ghost vertex \(\mathfrak {g}\), as follows

$$\begin{aligned} H_{B}\vert _{\mathbb {T}_{d,n}} = \begin{pmatrix} B_{0} & -\beta _{0}^{\top }\\ -\beta _{0} & H_{11} \end{pmatrix}, \end{aligned}$$
(5.20)

where we write \(\beta _{0} = [\beta \mathbbm {1}_{y\sim 0}]_{y\in V_{1}}\). By the Schur complement formula we have

$$\begin{aligned} (H_{B}\vert _{\mathbb {T}_{d,n}})^{-1} = \begin{pmatrix} (B_{0} - \beta _{0}^{\top } H^{-1}_{11}\beta _{0})^{-1} & (B_{0} - \beta _{0}^{\top } H^{-1}_{11}\beta _{0})^{-1} \beta _{0}^{\top } H^{-1}_{11}\\ \cdots & \cdots \end{pmatrix}. \end{aligned}$$
(5.21)

As a consequence, for any \(x\in V_{1}\)

$$\begin{aligned} \frac{(H_{B}\vert _{\mathbb {T}_{d,n}})^{-1}(0,x)}{(H_{B}\vert _{\mathbb {T}_{d,n}})^{-1}(0,0)} = (\beta _{0}^{\top }H^{-1}_{11})(x) = \beta \sum _{y\in V_{1}:y\sim 0} H^{-1}_{11}(y,x). \end{aligned}$$
(5.22)

We now note that as \(h\searrow 0\) the law of \(B\vert _{\mathbb {T}_{d,n}}\) converges to that of an STZ-field on \(\mathbb {T}_{d,n}\), as can be seen from (1.25). Consequently, by Proposition 2.9, the law of the left hand side in (5.22) converges (jointly in x) to that of \(e^{T_{x}^{(0)}}\), which proves the claim. \(\square \)

Proof of Theorem 1.6

Combining Lemmas 5.1 and 5.4 we have

$$\begin{aligned} \lim _{h\searrow 0} h^{-\eta }\langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle _{\beta ,h;\mathbb {T}_{d,n}} = \lim _{h\searrow 0} \mathbb {E}_{\beta ,h}[\Big (\frac{h^{\textrm{eff}}}{h}\Big )^{\eta } g_{\eta }(h^{\textrm{eff}})] \end{aligned}$$
(5.23)

We note that by [8, Proposition 6.1.2] we have \(\mathbb {E}[h^{\textrm{eff}}] \le h \left| \mathbb {T}_{d,n}\right| \). Hence, for any fixed \(C>0\) we have \(h^{\textrm{eff}} \le C\) with probability \(1-o(1)\) as \(h\searrow 0\), and, since \(\eta < 1\), the same moment bound shows that \((h^{\textrm{eff}}/h)^{\eta }\) is uniformly integrable as \(h\searrow 0\). Lemma 5.5 therefore implies

$$\begin{aligned} \lim _{h\searrow 0} h^{-\eta }\langle z_{0} \left| x_{0}\right| ^{-\eta } \rangle _{\beta ,h;\mathbb {T}_{d,n}} = c_{\eta } \mathbb {E}_{\beta }[\Big (\sum _{x\in \mathbb {T}_{d,n}} e^{T^{\mathrm {(0)}}_{x}}\Big )^{\eta }], \end{aligned}$$
(5.24)

with \(c_{\eta } > 0\) given in (5.7). Consequently, an application of Lemma 4.1 and Theorem 1.4 concludes the proof. \(\square \)