1 The Arboreal Gas and Uniform Forest Model

1.1 Definition and main results

Let \({{\mathbb {G}}} = (\Lambda ,E)\) be a finite (undirected) graph. A forest is a subgraph \(F=(\Lambda , E')\) that does not contain any cycles. We write \(\mathcal {F}\) for the set of all forests. For \(\beta >0\) the arboreal gas (or weighted uniform forest model) is the measure on forests F defined by

$$\begin{aligned} {\mathbb {P}}_{\beta }[F] \equiv \frac{1}{Z_{\beta }} \beta ^{|F|}, \qquad Z_{\beta } \equiv \sum _{F\in \mathcal {F}} \beta ^{|F|}, \end{aligned}$$
(1.1)

where |F| denotes the number of edges in F. It is an elementary observation that the arboreal gas with parameter \(\beta \) is precisely Bernoulli bond percolation with parameter \(p_{\beta }=\beta /(1+\beta )\) conditioned to be acyclic:

$$\begin{aligned} {\mathbb {P}}_{p_{\beta }}^{\mathrm{perc}}\left[ F \mid \text {acyclic}\right] \equiv \frac{p_{\beta }^{|F |}(1-p_{\beta })^{|E |-|F |}}{\sum _{F} p_{\beta }^{|F |}(1-p_{\beta })^{|E |-|F |}} = \frac{ \beta ^{|F |}}{\sum _{F}\beta ^{|F |}} = {\mathbb {P}}_\beta [F]. \end{aligned}$$
(1.2)
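
For small graphs the identity (1.2) can be checked directly by enumerating all acyclic edge sets. The following minimal sketch (Python, standard library only; the choice of graph and of \(\beta \) is purely illustrative) compares the arboreal gas weights with Bernoulli percolation conditioned on the acyclic event.

```python
# Brute-force check of (1.1)-(1.2) on a small graph: a 4-cycle with one diagonal.
from itertools import combinations

n, beta = 4, 0.7
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
p = beta / (1 + beta)  # percolation parameter p_beta

def acyclic(edge_subset):
    """A subgraph is a forest iff no edge joins two already-connected vertices."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge would close a cycle
        parent[ru] = rv
    return True

forests = [F for k in range(len(edges) + 1)
           for F in combinations(edges, k) if acyclic(F)]
Z = sum(beta ** len(F) for F in forests)  # arboreal gas partition function (1.1)

def perc(F):
    """Bernoulli(p) weight of an acyclic configuration, cf. (1.2)."""
    return p ** len(F) * (1 - p) ** (len(edges) - len(F))

Z_perc = sum(perc(F) for F in forests)
for F in forests:  # the two normalised measures agree on every forest
    assert abs(beta ** len(F) / Z - perc(F) / Z_perc) < 1e-12
print(f"{len(forests)} forests, Z_beta = {Z:.6f}")
```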

The arboreal gas model is also the limit, as \(q\rightarrow 0\) with \(p=\beta q\), of the q-state random cluster model, see [40]. The particular case \(\beta =1\) is the uniform forest model mentioned in, e.g., [25, 26, 31, 40]. We emphasize that the uniform forest model is not the weak limit of a uniformly chosen spanning tree; emphasis is needed since the latter model is called the ‘uniform spanning forest’ (USF) in the probability literature. We will shortly see that the arboreal gas has a richer phenomenology than the USF. In fact, in finite volume, the uniform spanning tree is the \(\beta \rightarrow \infty \) limit of the arboreal gas.

Given that the arboreal gas arises from bond percolation, it is natural to ask about the percolative properties of the arboreal gas. It is straightforward to rule out the occurrence of percolation for small values of \(\beta \) via the following proposition, see Appendix A.

Proposition 1.1

On any finite graph, the arboreal gas with parameter \(\beta \) is stochastically dominated by Bernoulli bond percolation with parameter \(p_{\beta }\).

In particular, on all subgraphs of \({\mathbb {Z}}^{d}\), all trees have uniformly bounded expected size if \(p_{\beta }<p_{c}(d)\), where \(p_{c}(d)\) is the critical parameter for Bernoulli bond percolation on \({\mathbb {Z}}^{d}\).

In the infinite-volume limit, the arboreal gas is a singular conditioning of bond percolation, and hence the existence of a percolation transition as \(\beta \) varies is non-obvious. However, on the complete graph it is known that there is a phase transition, see [8, 34, 36]. To illustrate some of our methods we will give a new proof of the existence of a transition.

Proposition 1.2

Let \({\mathbb {E}}_{N,\alpha }\) denote the expectation of the arboreal gas on the complete graph \(K_{N}\) with \(\beta = \alpha /N\), and let \(T_{0}\) be the tree containing a fixed vertex 0. Then

$$\begin{aligned} {\mathbb {E}}_{N,\alpha } |T_{0} | = (1+o(1)) {\left\{ \begin{array}{ll} \frac{\alpha }{1-\alpha } &{} \alpha <1 \\ c N^{1/3} &{} \alpha = 1 \\ (\frac{\alpha -1}{\alpha })^2 N &{} \alpha >1 \end{array}\right. } \end{aligned}$$
(1.3)

where \(c=3^{2/3}\Gamma (4/3)/\Gamma (2/3)\) and \(\Gamma \) denotes the Euler Gamma function.

Thus there is a transition for the arboreal gas exactly as for the Erdős–Rényi random graph with edge probability \(\alpha /N\). To compare the arboreal gas directly with the Erdős–Rényi graph, recall that Proposition 1.1 shows the arboreal gas is stochastically dominated by the Erdős–Rényi graph with edge probability \(p_{\beta } = \beta - \beta ^{2}/(1+\beta )\). The fact that the Erdős–Rényi graph asymptotically has all components trees in the subcritical regime \(\alpha <1\) makes the behaviour of the arboreal gas when \(\alpha <1\) unsurprising. On the other hand, the conditioning plays a role when \(\alpha >1\), as can be seen at the level of the expected tree size. For the supercritical Erdős–Rényi graph the expected cluster size is approximately \(4(\alpha -1)^{2}N\) as \(\alpha \downarrow 1\); this follows from the fact that the largest component of the Erdős–Rényi graph with \(\alpha >1\) has size yN where y solves \(e^{-\alpha y}=1-y\), see, e.g., [3]. For further discussion, see Sect. 1.3.
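
The comparison of the two supercritical densities is easy to carry out numerically. The sketch below (Python; the values of \(\alpha \) are illustrative choices) solves \(e^{-\alpha y}=1-y\) by bisection and prints the Erdős–Rényi expected cluster fraction \(y^{2}\) next to the arboreal gas tree fraction \(((\alpha -1)/\alpha )^{2}\) from (1.3).

```python
# Compare the supercritical densities: arboreal gas ((alpha-1)/alpha)^2 versus the
# Erdos-Renyi value y^2, where y solves exp(-alpha*y) = 1 - y (giant component density).
import math

def er_giant_density(alpha, tol=1e-12):
    """Nontrivial root of exp(-alpha*y) = 1 - y in (0, 1), by bisection (alpha > 1)."""
    lo, hi = 1e-9, 1.0 - 1e-15
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.exp(-alpha * mid) - (1.0 - mid) < 0:
            lo = mid  # still below the nontrivial root
        else:
            hi = mid
    return 0.5 * (lo + hi)

for alpha in [1.1, 1.5, 2.0, 3.0]:
    y = er_giant_density(alpha)
    print(f"alpha = {alpha:3.1f}:  arboreal gas ~ {((alpha - 1) / alpha) ** 2:.4f} N,"
          f"  Erdos-Renyi ~ {y * y:.4f} N")
```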

On \({\mathbb {Z}}^2\), the singular conditioning that defines the arboreal gas has a profound effect. In the next theorem statement and henceforth, for finite subgraphs \(\Lambda \) of \({\mathbb {Z}}^{2}\) we write \({\mathbb {P}}_{\Lambda ,\beta }\) for the arboreal gas on \(\Lambda \).

Theorem 1.3

For all \(\beta >0\) there is a constant \(c_\beta >0\) such that the connection probabilities satisfy

$$\begin{aligned} {\mathbb {P}}_{\Lambda ,\beta }[0\leftrightarrow j] \leqslant |j|^{-c_\beta } \quad \text {for}\, j\in \Lambda \subset {\mathbb {Z}}^2, \end{aligned}$$
(1.4)

for all \(\Lambda \subset {\mathbb {Z}}^2\), where ‘\(i\leftrightarrow j\)’ denotes the event that the vertices i and j are in the same tree.

This theorem, together with classical techniques from percolation theory, implies the following corollary for the infinite volume limit, see Appendix A.

Corollary 1.4

Suppose \({\mathbb {P}}_{\beta }\) is a translation-invariant weak limit of \({\mathbb {P}}_{\Lambda _{n},\beta }\) for an increasing exhaustion of finite volumes \(\Lambda _{n}\uparrow {\mathbb {Z}}^{2}\). Then all trees are finite \({\mathbb {P}}_{\beta }\)-almost surely.

Thus on \({\mathbb {Z}}^{2}\) the behaviour of the arboreal gas is completely different from that of Bernoulli percolation. The absence of a phase transition can be non-rigorously predicted from the representation of the arboreal gas as the \(q\rightarrow 0\) limit (with \(p=\beta q\) fixed) of the random cluster model with \(q>0\) [19]. We briefly describe how this prediction can be made. The critical point of the random cluster model for \(q\geqslant 1\) on \({\mathbb {Z}}^{2}\) is known to be \(p_{c}(q)=\sqrt{q}/(1+\sqrt{q})\) [9]. Conjecturally, this formula holds for \(q>0\). Thus \(p_{c}(q)\sim \sqrt{q}\) as \(q\downarrow 0\), and by assuming continuity in q one obtains \(\beta _{c}=\infty \) for the arboreal gas. This heuristic applies also to the triangular and hexagonal lattices. Our proof is in fact quite robust, and applies to much more general recurrent two-dimensional graphs. We have focused on \({\mathbb {Z}}^{2}\) for the sake of concreteness.

This absence of percolation is not believed to persist in dimensions \(d\geqslant 3\): we expect that there is a percolative transition on \({\mathbb {Z}}^{d}\) with \(d\geqslant 3\). In the next section we will discuss the conjectural behaviour of the arboreal gas on \({\mathbb {Z}}^{d}\) for all \(d\geqslant 2\). Before this, we outline how we obtain the above results. Our starting point is an alternate formulation of the arboreal gas. Namely, in [13, 14, 16] it was noticed that the arboreal gas can be represented in terms of a model of fermions, and that this fermionic model can be extended to a sigma model with values in the superhemisphere. We also use this fermionic representation, but our results rely in an essential way on the new observation that this model is most naturally connected to a sigma model taking values in a hyperbolic superspace. Similar sigma models have recently received a great deal of attention due to their relationship with random band matrices and reinforced random walks [6, 21, 44, 45]. We will discuss the connection between our techniques and these papers after introducing the sigma models relevant to the present paper. A key step in our proof is the following integral formula for connection probabilities in the arboreal gas (see Corollary 2.14 for a version with general edge weights):

$$\begin{aligned} {\mathbb {P}}_{\Lambda ,\beta }[0\leftrightarrow j]= & {} \frac{1}{Z_\beta } \int _{{\mathbb {R}}^{\Lambda }} e^{t_j} e^{-\sum _{i\sim j} \beta (\cosh (t_i-t_j)-1)} \nonumber \\&\times \left( { e^{-2\sum _{i}t_i} \det (-\Delta _{\beta (t)})}\right) ^{3/2} \delta _0(dt_0) \prod _{i\ne 0} \frac{dt_i}{\sqrt{2\pi }} \end{aligned}$$
(1.5)

where \(\Delta _{\beta (t)}\) is the graph Laplacian with edge weights \(\beta e^{t_i+t_j}\), understood as acting on \(\Lambda \setminus 0\). This formula is a consequence of the hyperbolic sigma model representation of the arboreal gas.
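
On the graph with two vertices and a single edge, (1.5) reduces to a one-dimensional integral and can be checked by quadrature. A minimal sketch (Python with NumPy; the graph and the value of \(\beta \) are illustrative choices) compares the result with the exact connection probability \(\beta /(1+\beta )\).

```python
# Numerical check of (1.5) for vertices {0, 1} joined by one edge of weight beta.
# Pinning t_0 = 0 via delta_0(dt_0), the Laplacian restricted to Lambda \ 0 is the
# 1x1 matrix (beta * e^{t_1}), and the only forests are the empty one and the edge.
import numpy as np

beta = 1.7
Z = 1 + beta
t1 = np.linspace(-30.0, 30.0, 60001)

det_factor = (np.exp(-2 * t1) * beta * np.exp(t1)) ** 1.5  # (e^{-2 sum t} det(-Delta))^{3/2}
integrand = np.exp(t1) * np.exp(-beta * (np.cosh(t1) - 1)) * det_factor
prob = np.sum(integrand) * (t1[1] - t1[0]) / (np.sqrt(2 * np.pi) * Z)

print(f"integral formula: {prob:.6f}   exact beta/(1+beta): {beta / (1 + beta):.6f}")
```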

Surprisingly, if the exponent 3/2 in (1.5) is replaced by 1/2, then the integrand on the right-hand side is the mixing measure of the vertex-reinforced jump process found by Sabot and Tarrès [45]. The Sabot–Tarrès formula (along with a closely related version for the edge-reinforced random walk) is known as the magic formula [32]. It seems even more magical to us that the same formula, with only a change of exponent, describes the arboreal gas. We will explain in Sect. 2 that there are in fact three ingredients to this magic: a ‘non-linear’ version of the matrix-tree theorem, supersymmetric localisation, and horospherical coordinates for (super-)hyperbolic space.

We remark that the whole family of sigma models taking values in hyperbolic superspaces has interesting behaviour, but for the present paper we restrict our attention to those related to the arboreal gas. A more general discussion of such models can be found in [17] by the second author.

1.2 Context and conjectured behaviour

Recall that ‘\(i\leftrightarrow j\)’ denotes the event that the vertices i and j are in the same tree. We also write \({\mathbb {P}}_{\beta }\left[ ij\right] \) for the probability an edge ij is in the forest.

The following conjecture asserts that the arboreal gas has a phase transition in dimensions \(d\geqslant 3\), just as in mean-field theory (Proposition 1.2). Numerical evidence for this transition can be found in [19].

Conjecture 1.5

For \(d\geqslant 3\) there exists \(\beta _c > 0\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty } \lim _{\Lambda \uparrow {\mathbb {Z}}^{d}} {\mathbb {E}}_{\Lambda ,\beta }\frac{|T_0\cap B_{n}|}{|B_{n}|} {\left\{ \begin{array}{ll} = 0 &{} (\beta < \beta _c)\\> 0 &{} (\beta > \beta _c) \end{array}\right. } \end{aligned}$$
(1.6)

where \(T_0\) is the tree containing 0 and \(B_{n}\) is the ball of radius n centred at 0. Moreover, when \(\beta <\beta _c\) there are constants \(C, c_\beta >0\) such that

$$\begin{aligned} {\mathbb {P}}_{\Lambda ,\beta }[i\leftrightarrow j] \leqslant Ce^{-c_\beta |i-j|}, \qquad (i,j\in {\mathbb {Z}}^d). \end{aligned}$$
(1.7)

When \(\beta > \beta _c\) there is a constant \(c_{\beta }'> 0\) such that

$$\begin{aligned} \lim _{\Lambda \uparrow {\mathbb {Z}}^{d}}{\mathbb {P}}_{\Lambda ,\beta }[i\leftrightarrow j] \geqslant c_\beta '. \end{aligned}$$
(1.8)

As indicated in the previous section, it is straightforward to prove the first equality of (1.6) when \(\beta \) is sufficiently small. The existence of a transition, i.e., a percolating phase for \(\beta \) large, is open. However, a promising approach to proving the existence of a percolation transition when \(d\geqslant 3\) and \(\beta \gg 1\) is to adapt the methods of [21]; we are currently pursuing this direction. Obviously, the existence of a sharp transition, i.e., a precise \(\beta _{c}\) separating the two behaviours in (1.6) is also open. The next conjecture distinguishes the supercritical behaviour of the arboreal gas from that of percolation for which the (centered) connection probabilities have exponential decay.

Conjecture 1.6

For \(d\geqslant 3\), when \(\beta >\beta _{c}\)

$$\begin{aligned} \lim _{\Lambda \uparrow {\mathbb {Z}}^{d}}{\mathbb {P}}_{\Lambda ,\beta }[i\leftrightarrow j] - c_{\beta }' \approx |i-j|^{-(d-2)}, \qquad \text {as}\, |i-j|\rightarrow \infty , \end{aligned}$$
(1.9)

where \(c_{\beta }'\) is the optimal constant for which (1.8) holds.

Assuming the existence of a phase transition, one can also ask about the critical behaviour of the arboreal gas. One intriguing aspect of this question is that the upper critical dimension is not clear, even heuristically. There is some evidence that the upper critical dimension of the arboreal gas should be \(d=6\), as for percolation, as opposed to \(d=4\) for the Heisenberg model. For further details, and for other related conjectures, see [16, Section 12].

Theorem 1.3 shows that the behaviour of the arboreal gas in two dimensions is different from that of percolation. This difference would be considerably strengthened by the following conjecture, which first appeared in [13].

Conjecture 1.7

For any \(\beta >0\) there exists a constant \(c_\beta >0\) such that

$$\begin{aligned} \lim _{\Lambda \uparrow {\mathbb {Z}}^{2}}{\mathbb {P}}_{\Lambda , \beta }[i\leftrightarrow j] \approx e^{-c_\beta |i-j|}, \qquad (i,j\in {\mathbb {Z}}^2). \end{aligned}$$
(1.10)

As \(\beta \rightarrow \infty \), the constant \(c_{\beta }\) is exponentially small in \(\beta \):

$$\begin{aligned} c_\beta \approx e^{-c\beta }. \end{aligned}$$
(1.11)

In particular, \({\mathbb {E}}_{\beta }|T_0| \approx e^{c\beta } < \infty \) (with a different c) where \(T_0\) is the tree containing 0.

This conjecture is much stronger than the main result of the present paper, Theorem 1.3, which establishes only that all trees are finite almost surely, a significantly weaker property than having finite expectation.

Conjecture 1.7 is a version of the mass gap conjecture for ultraviolet asymptotically free field theories. The conjecture is based on the field theory representation discussed in Sect. 2, and supporting heuristics can be found in, e.g., [13]. Other models with the same conjectural feature include the two-dimensional Heisenberg model [41], the two-dimensional vertex-reinforced jump process [21] (and other \({\mathbb {H}}^{n|2m}\) models with \(2m-n\leqslant 0\), see [17]), the two-dimensional Anderson model [1], and most prominently four-dimensional Yang–Mills Theories [29, 41].

Let us briefly indicate why Conjecture 1.7 seems challenging. Note that in finite volume the (properly normalized) arboreal gas converges weakly to the uniform spanning tree as \(1/\beta \rightarrow 0\), see Appendix B. For the uniform spanning tree it is a triviality that \(c_{\beta }=0\), and this is consistent with the conjecture \(c_\beta \approx e^{-c\beta }\) as \(\beta \rightarrow \infty \). On the other hand, \(c_\beta \approx e^{-c\beta }\) suggests a subtle effect, not approachable via perturbative methods such as a low-temperature expansion in the small parameter \(1/\beta >0\), as can be done for, e.g., the Ising model. Indeed, since \(t \mapsto e^{-c/t}\) has an essential singularity at \(t=0\), its behaviour as \(t=1/\beta \rightarrow 0\) cannot be detected at any finite order in \(t=1/\beta \). The same difficulty applies to the other models mentioned above for which analogous behaviour is conjectured.

The last conjecture we mention is the negative correlation conjecture stated in [26, 31, 40] and recently in [10, 27]. This conjecture is also expected to hold true for general (positive) edge weights, see Sect. 2.1.

Conjecture 1.8

For any finite graph and any \(\beta >0\) negative correlation holds: for distinct edges ij and kl,

$$\begin{aligned} {\mathbb {P}}_\beta [ij,kl] \leqslant {\mathbb {P}}_\beta [ij] {\mathbb {P}}_\beta [kl]. \end{aligned}$$
(1.12)

More generally, for all distinct edges \(i_1j_1, \dots , i_nj_n\) and \(m<n\),

$$\begin{aligned} {\mathbb {P}}_\beta [i_1j_1,\dots , i_nj_n] \leqslant {\mathbb {P}}_\beta [i_1j_1,\dots , i_mj_m] {\mathbb {P}}_\beta [i_{m+1}j_{m+1},\dots , i_nj_n]. \end{aligned}$$
(1.13)
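
On very small graphs, (1.12) can be verified by exhaustive enumeration. A minimal sketch (Python, standard library only; the graph \(K_4\) and the value of \(\beta \) are illustrative choices) is the following.

```python
# Brute-force check of the negative correlation inequality (1.12) on K_4.
from itertools import combinations

n, beta = 4, 1.3
edges = list(combinations(range(n), 2))

def is_forest(subset):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

forests = [set(F) for k in range(len(edges) + 1)
           for F in combinations(edges, k) if is_forest(F)]
Z = sum(beta ** len(F) for F in forests)

def prob(required_edges):
    """P_beta[all edges in required_edges belong to the forest]."""
    return sum(beta ** len(F) for F in forests if required_edges <= F) / Z

for e, f in combinations(edges, 2):
    assert prob({e, f}) <= prob({e}) * prob({f}) + 1e-12
print(f"(1.12) holds for all edge pairs of K_4 at beta = {beta}")
```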

The weaker inequality \({\mathbb {P}}_\beta [ij,kl] \leqslant 2{\mathbb {P}}_\beta [ij] {\mathbb {P}}_\beta [kl]\) was recently proved in [10]. It is intriguing that the Lorentzian signature plays an important role in both [10] and the present work, but we are not aware of a direct relation. An important consequence of the full conjecture (with factor 1) is the existence of translation invariant arboreal gas measures on \({\mathbb {Z}}^d\); we prove this in Appendix A.

Proposition 1.9

Assume Conjecture 1.8 is true. Suppose \(\Lambda _{n}\) is an increasing family of subgraphs such that \(\Lambda _{n}\uparrow {\mathbb {Z}}^{d}\), and let \({\mathbb {P}}_{\beta ,n}\) be the arboreal gas on the finite graph \(\Lambda _{n}\). Then the weak limit \(\lim _{n}{\mathbb {P}}_{\beta ,n}\) exists and is translation invariant.

Remark 1

The conjectured inequality (1.12) can be recast as a reversed second Griffiths inequality. More precisely, (1.12) can be rewritten in terms of the \({\mathbb {H}}^{0|2}\) spin model introduced below in Sect. 2 as

$$\begin{aligned} \langle (u_i\cdot u_j)(u_k\cdot u_l) \rangle _\beta - \langle u_i\cdot u_j \rangle _\beta \,\langle u_k\cdot u_l \rangle _\beta \leqslant 0. \end{aligned}$$
(1.14)

This equivalence follows immediately from the results in Sect. 2.

1.3 Related literature

The arboreal gas has received attention under various names. An important reference for our work is [13], along with subsequent works by subsets of these authors and collaborators [7, 8, 14, 15, 16, 28]. These authors considered the connection of the arboreal gas with the antiferromagnetic \({{\mathbb {S}}} ^{0|2}\) model.

Our results are in part based on a re-interpretation of the \({{\mathbb {S}}} ^{0|2}\) formulation in terms of the hyperbolic \({\mathbb {H}}^{0|2}\) model. At the level of infinitesimal symmetries these models are equivalent. The power behind the hyperbolic language is that it allows for a further reformulation in terms of the \({\mathbb {H}}^{2|4}\) model, which is analytically useful. The \({\mathbb {H}}^{2|4}\) representation arises from a dimensional reduction formula, which in turn is a consequence of supersymmetric localization [2, 11, 39]. Much of Sect. 2 is devoted to explaining this. The upshot is that this representation allows us to make use of techniques originally developed for the non-linear \({\mathbb {H}}^{2|2}\) sigma model [20, 21, 49, 50, 51] and the vertex-reinforced jump process [4, 45]. In particular, our proof of Theorem 1.3 makes use of an adaptation of a Mermin–Wagner argument for the \({\mathbb {H}}^{2|2}\) model [6, 33, 44]; the particular argument we adapt is due to Sabot [44]. For more on the connections between these models, see [6, 45].

Conjecture 1.8 seems to have first appeared in print in [30]. Subsequent related works, including proofs for some special subclasses of graphs, include [10, 26, 46, 48].

As mentioned before, considerably stronger results are known for the arboreal gas on the complete graph. The first result in this direction concerned forests with a fixed number of edges [34], and later a fixed number of trees was considered [8]. Later in [36] the arboreal gas itself was considered, in the guise of the Erdős–Rényi graph conditioned to be acyclic. In [34] it was understood that the scaling window is of size \(N^{-1/3}\), and results on the behaviour of the ordered component sizes when \(\alpha = 1 +\lambda N^{-1/3}\) were obtained. In particular, the large components in the scaling window are of size \(N^{2/3}\). A very complete description of the component sizes in the critical window was obtained in [36].

We remark on an interesting aspect of the arboreal gas that was first observed in [34] and is consistent with Conjecture 1.6. Namely, in the supercritical regime, the component sizes of the k largest non-giant components are of order \(N^{2/3}\) [34, Theorem 5.2]. This is in contrast to the Erdős–Rényi graph, where the non-giant components are of logarithmic size. The critical size of the non-giant components is reminiscent of self-organised criticality, see [42] for example. A clearer understanding of the mechanism behind this behaviour for the arboreal gas would be interesting.

1.4 Outline

In the next section we introduce the \({\mathbb {H}}^{0|2}\) and \({\mathbb {H}}^{2|4}\) sigma models, relate them to the arboreal gas, and derive several useful facts. In Sect. 3 we use the \({\mathbb {H}}^{0|2}\) representation and Hubbard–Stratonovich type transformations to prove Theorem 3.1 by a stationary phase argument. In Sect. 4 we prove the quantitative part of Theorem 1.3, i.e., (1.4). The deduction that all trees are finite almost surely follows from adaptations of well-known arguments and is given in Appendix A. For the convenience of readers, we briefly discuss the fermionic representation of rooted spanning forests and spanning trees in Appendix B.

2 Hyperbolic Sigma Model Representation

In [13], it was noticed that the arboreal gas has a formulation in terms of fermionic variables, which in turn can be related to a supersymmetric spin model with values in the superhemisphere and negative (i.e., antiferromagnetic) spin couplings. In Sect. 2.1, we reinterpret this fermionic model as the \({\mathbb {H}}^{0|2}\) model (defined there) with positive (i.e., ferromagnetic) spin couplings. This reinterpretation has important consequences: in Sect. 2.4, we relate the \({\mathbb {H}}^{0|2}\) model to the \({\mathbb {H}}^{2|4}\) model (defined there) by a form of dimensional reduction applied to the target space. Technically this amounts to exploiting supersymmetric localisation associated to an additional set of fields. The \({\mathbb {H}}^{2|4}\) model allows the introduction of horospherical coordinates, which leads to an analytically useful probabilistic representation of the model as a gradient model with a non-local and non-convex potential. This gradient model is very similar to gradient models that arise in the study of linearly-reinforced random walks. In fact, up to the power of a determinant, this representation is in terms of a measure that is identical to the magic formula describing the mixing measure of the vertex-reinforced jump process, see (1.5).

2.1 \({\mathbb {H}}^{0|2}\) model and arboreal gas

Let \(\Lambda \) be a finite set, let \(\varvec{\beta } = (\beta _{ij})_{i,j\in \Lambda }\) be real-valued symmetric edge weights, and let \(\varvec{h} = (h_i)_{i\in \Lambda }\) be real-valued vertex weights. Throughout we will use this bold notation to denote tuples indexed by vertices or edges. For \(f:\Lambda \rightarrow {\mathbb {R}}\), we define the Laplacian associated with the edge weights by

$$\begin{aligned} \Delta _{\beta }f(i) \equiv \sum _{j\in \Lambda }\beta _{ij}(f(j)-f(i)). \end{aligned}$$
(2.1)

The non-zero edge weights induce a graph \({{\mathbb {G}}} = (\Lambda , E)\), i.e., \(ij\in E\) if and only if \(\beta _{ij}\ne 0\).

Let \(\Omega ^{2\Lambda }\) be a (real) Grassmann algebra (or exterior algebra) with generators \((\xi _i,\eta _i)_{i\in \Lambda }\), i.e., all of the \(\xi _i\) and \(\eta _i\) anticommute with each other. For \(i,j\in \Lambda \), define the even elements

$$\begin{aligned} z_i&\equiv \sqrt{1- 2\xi _i\eta _i} \equiv 1-\xi _i\eta _i \end{aligned}$$
(2.2)
$$\begin{aligned} u_i \cdot u_j&\equiv -\xi _i\eta _j - \xi _j\eta _i - z_iz_j = -1 -\xi _i\eta _j - \xi _j\eta _i + \xi _i\eta _i + \xi _j\eta _j - \xi _i\eta _i\xi _j\eta _j. \end{aligned}$$
(2.3)

Note that \(u_i \cdot u_i = -1\), which we formally interpret as meaning that \(u_i=(\xi ,\eta ,z) \in {\mathbb {H}}^{0|2}\) by analogy with the hyperboloid model for hyperbolic space. However, we emphasize that ‘\(\in {\mathbb {H}}^{0|2}\)’ does not have any literal sense. Similarly we write \(\varvec{u} = (u_{i})_{i\in \Lambda }\in ({\mathbb {H}}^{0|2})^{\Lambda }\). The fermionic derivative \(\partial _{\xi _i}\) is defined in the natural way, i.e., as the odd derivation that acts on \(\Omega ^{2\Lambda }\) by

$$\begin{aligned} \partial _{\xi _i} (\xi _i F) \equiv F, \quad \partial _{\xi _i}F \equiv 0 \end{aligned}$$
(2.4)

for any form F that does not contain \(\xi _{i}\). An analogous definition applies to \(\partial _{\eta _i}\). The hyperbolic fermionic integral is defined in terms of the fermionic derivative by

$$\begin{aligned}{}[F]_0 \equiv \int _{({\mathbb {H}}^{0|2})^\Lambda } F \equiv \prod _{i \in \Lambda } \left( {\partial _{\eta _i} \partial _{\xi _i} \frac{1}{z_i}}\right) F = \partial _{\eta _N}\partial _{\xi _N} \cdots \partial _{\eta _1}\partial _{\xi _1} \left( {\frac{1}{z_1\cdots z_N} F}\right) \in {\mathbb {R}}\nonumber \\ \end{aligned}$$
(2.5)

if \(\Lambda =\{1,\dots ,N\}\). It is well-known that while the fermionic integral is formally equivalent to a fermionic derivative, it behaves in many ways like an ordinary integral. The factors of 1/z make the hyperbolic fermionic integral invariant under a fermionic version of the Lorentz group; see (2.18).
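
For orientation, consider a single vertex \(\Lambda =\{1\}\). Every form is then \(F = a + b\xi _1 + c\eta _1 + d\xi _1\eta _1\) with real coefficients, \(1/z_1 = 1+\xi _1\eta _1\), and

$$\begin{aligned}{}[F]_0 = \partial _{\eta _1}\partial _{\xi _1}\left( {(1+\xi _1\eta _1)(a + b\xi _1 + c\eta _1 + d\xi _1\eta _1)}\right) = a+d, \end{aligned}$$

so in particular \([1]_0 = 1\) and \([\xi _1\eta _1]_0 = 1\).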

The \({\mathbb {H}}^{0|2}\) sigma model action is the even form \(H_{\beta ,h}(\varvec{u})\) in \(\Omega ^{2\Lambda }\) given by

$$\begin{aligned} H_{\beta ,h}(\varvec{u}) \equiv \frac{1}{2}(\varvec{u},-\Delta _{\beta }\varvec{u}) + (\varvec{h},\varvec{z}-1) = \frac{1}{4} \sum _{i,j}\beta _{ij}(u_i-u_j)^2 + \sum _i h_i (z_i-1)\quad \end{aligned}$$
(2.6)

where \((a,b) \equiv \sum _{i}a_{i}\cdot b_{i}\), with \(a_i\cdot b_i\) interpreted as the \({\mathbb {H}}^{0|2}\) inner product defined by (2.3). The corresponding unnormalised expectation \([\cdot ]_{\beta ,h}\) and normalised expectation \(\langle \cdot \rangle _{\beta ,h}\) are defined by

$$\begin{aligned}{}[F]_{\beta ,h} \equiv [Fe^{-H_{\beta ,h}}]_{0}, \qquad \langle F \rangle _{\beta ,h} \equiv \frac{[F]_{\beta ,h}}{[1]_{\beta ,h}}, \end{aligned}$$
(2.7)

the latter definition holding when \([1]_{\beta ,h}\ne 0\). In (2.7) the exponential of the even form \(H_{\beta ,h}\) is defined by the formal power series expansion, which truncates at finite order since \(\Lambda \) is finite. For an introduction to Grassmann algebras and integration as used in this paper, see [5, Appendix A].

Note that the unnormalised expectation \([\cdot ]_{\beta ,h}\) is well-defined for all real values of the \(\beta _{ij}\) and \(h_i\), including negative values, and in particular \(\varvec{h}=\varvec{0}\), \(\varvec{\beta }=\varvec{0}\), or both, are permitted. We will use the abbreviations \([\cdot ]_\beta \equiv [\cdot ]_{\beta ,0}\) and \(\langle \cdot \rangle _\beta \equiv \langle \cdot \rangle _{\beta ,0}\).

The following theorem shows that the partition function \([1]_{\beta ,h}\) of the \({\mathbb {H}}^{0|2}\) model is exactly the partition function of the arboreal gas \(Z_\beta \) defined in (1.1) when \(\varvec{h}=\varvec{0}\), and that it is a generalization of the partition function when \(\varvec{h} \ne \varvec{0}\), which we will subsequently denote by \(Z_{\beta ,h}\). This connection between spanning forests and the antiferromagnetic \({{\mathbb {S}}} ^{0|2}\) model, which is equivalent to our ferromagnetic \({\mathbb {H}}^{0|2}\) model, was previously observed in [13]. As mentioned earlier, our hyperbolic interpretation will have important consequences in what follows.

Theorem 2.1

For any real-valued weights \(\varvec{\beta }\) and \(\varvec{h}\),

$$\begin{aligned}{}[1]_{\beta ,h} = \sum _{F\in \mathcal {F}} \prod _{ij\in F}\beta _{ij}\prod _{T\in F} (1+\sum _{i\in T}h_{i}) \end{aligned}$$
(2.8)

where the second product runs over the trees T that make up the forest F.
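
Theorem 2.1 can also be checked mechanically on small graphs. The sketch below (Python, standard library only; the graph, the value of \(\beta \), and the encoding of the generators are illustrative choices) represents forms as dictionaries of monomials, builds \(e^{-H_{\beta }}\) edge by edge using \((1+u_i\cdot u_j)^2=0\), evaluates the fermionic integral (2.5) by reading off the top-degree coefficient, and compares the result with a direct enumeration of forests.

```python
# Check of Theorem 2.1 (h = 0) on a small graph.  A form is a dict mapping an ordered
# tuple of generator indices to its coefficient; generators xi_i -> 2*i, eta_i -> 2*i+1.
from itertools import combinations

def gmul(a, b):
    """Grassmann product: concatenate monomials, sort with a sign, drop repeats."""
    out = {}
    for ka, va in a.items():
        for kb, vb in b.items():
            if set(ka) & set(kb):
                continue                      # repeated generator squares to zero
            gens, sign = list(ka) + list(kb), 1
            for i in range(len(gens)):        # insertion sort; each swap flips the sign
                j = i
                while j > 0 and gens[j - 1] > gens[j]:
                    gens[j - 1], gens[j] = gens[j], gens[j - 1]
                    sign, j = -sign, j - 1
            key = tuple(gens)
            out[key] = out.get(key, 0.0) + sign * va * vb
    return out

def pair_term(i, j, beta):
    """beta * (u_i . u_j + 1), written out as in (2.3)."""
    xi_i, eta_i, xi_j, eta_j = 2 * i, 2 * i + 1, 2 * j, 2 * j + 1
    return {(xi_i, eta_j): -beta, (xi_j, eta_i): -beta, (xi_i, eta_i): beta,
            (xi_j, eta_j): beta, (xi_i, eta_i, xi_j, eta_j): -beta}

n, beta = 4, 0.8
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# e^{-H_beta} = product over edges of (1 + beta*(u_i.u_j + 1)), since (u_i.u_j + 1)^2 = 0.
form = {(): 1.0}
for i, j in edges:
    factor = {(): 1.0}
    factor.update(pair_term(i, j, beta))
    form = gmul(form, factor)

# Fermionic integral (2.5): multiply by prod_i 1/z_i = prod_i (1 + xi_i eta_i) and
# read off the coefficient of the top monomial xi_0 eta_0 ... xi_{n-1} eta_{n-1}.
for i in range(n):
    form = gmul(form, {(): 1.0, (2 * i, 2 * i + 1): 1.0})
lhs = form.get(tuple(range(2 * n)), 0.0)

def is_forest(subset):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

rhs = sum(beta ** k for k in range(len(edges) + 1)
          for F in combinations(edges, k) if is_forest(F))
print(f"[1]_beta = {lhs:.6f}   sum over forests = {rhs:.6f}")
```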

For the reader’s convenience and to keep our exposition self-contained, we provide a concise proof of Theorem 2.1 below. The interested reader may consult the original paper [13], where they can also find generalizations to hyperforests. The \(\varvec{h}=\varvec{0}\) case of Theorem 2.1 also implies the following useful representations of probabilities for the arboreal gas.

Corollary 2.2

Let \(\varvec{h}=\varvec{0}\) and assume the edge weights \(\varvec{\beta }\) are non-negative. Then for all edges ab,

$$\begin{aligned} {\mathbb {P}}_\beta \left[ ab\right] = \beta _{ab}\langle u_a\cdot u_b +1 \rangle _\beta , \end{aligned}$$
(2.9)

and more generally, for all sets of edges S,

$$\begin{aligned} {\mathbb {P}}_\beta [S] = \langle \prod _{ij \in S} \beta _{ij} (u_{i}\cdot u_{j}+1) \rangle _\beta . \end{aligned}$$
(2.10)

Moreover, for all vertices \(a,b \in \Lambda \),

$$\begin{aligned} {\mathbb {P}}_\beta [a\leftrightarrow b] = -\langle z_az_b \rangle _\beta = -\langle u_a\cdot u_b \rangle _\beta = \langle \xi _{a}\eta _{b} \rangle _{\beta } = 1- \langle \eta _{a}\xi _{a}\eta _{b}\xi _{b} \rangle _{\beta }, \end{aligned}$$
(2.11)

and also

$$\begin{aligned} \langle z_a \rangle _\beta =0. \end{aligned}$$
(2.12)

We will prove Theorem 2.1 and Corollary 2.2 in Sect. 2.3, but first we establish some integration identities associated with the symmetries of \({\mathbb {H}}^{0|2}\).

2.2 Ward Identities for \({\mathbb {H}}^{0|2}\)

Define the operators

$$\begin{aligned}&T \equiv \sum _{i\in \Lambda } T_i \equiv \sum _{i\in \Lambda } z_i\partial _{\xi _i}, \qquad {\bar{T}} \equiv \sum _{i\in \Lambda } {\bar{T}}_i \equiv \sum _{i\in \Lambda } z_i \partial _{\eta _i}, \qquad \nonumber \\&S \equiv \sum _{i\in \Lambda } S_i \equiv \sum _{i\in \Lambda } (\eta _i\partial _{\xi _i} + \xi _i \partial _{\eta _i}). \end{aligned}$$
(2.13)

Using (2.2), one computes that these act on coordinates as

$$\begin{aligned} T\xi _a&= z_a, \quad T\eta _a = 0, \quad Tz_a = -\eta _a, \end{aligned}$$
(2.14)
$$\begin{aligned} {\bar{T}}\xi _a&= 0, \quad {\bar{T}}\eta _a = z_a, \quad {\bar{T}}z_a = \xi _a, \end{aligned}$$
(2.15)
$$\begin{aligned} S\xi _a&= \eta _a, \quad S\eta _a= \xi _a, \quad S z_a = 0. \end{aligned}$$
(2.16)

The operator S is an even derivation on \(\Omega ^{2\Lambda }\), meaning that it obeys the usual Leibniz rule \(S(FG) = S(F)G + FS(G)\) for any forms F, G. On the other hand, the operators T and \({{\bar{T}}}\) are odd derivations on \(\Omega ^{2\Lambda }\), also called supersymmetries. This means that if F is an even or odd form, then \(T(FG) = (TF)G \pm F(TG)\), with ‘\(+\)’ for F even and ‘−’ for F odd. We remark that T and \({{\bar{T}}}\) can be regarded as analogues of the infinitesimal Lorentz boost symmetries of \({\mathbb {H}}^{n}\), while S is an infinitesimal symplectic symmetry. In particular, the inner product (2.3) is invariant with respect to these symmetries, in the sense that

$$\begin{aligned} T (u_a\cdot u_b) = {{\bar{T}}}(u_a \cdot u_b) = S (u_a\cdot u_b) = 0. \end{aligned}$$
(2.17)

For T, this follows from \(T (u_a\cdot u_b) = T(-\xi _a\eta _b-\xi _b\eta _a-z_az_b) = -z_a\eta _b-z_b\eta _a+\eta _az_b+\eta _bz_a=0\) since the \(z_{i}\) are even. Analogous computations apply to \({{\bar{T}}}\) and S.

A complete description of the infinitesimal symmetries of \({\mathbb {H}}^{0|2}\) is given by the orthosymplectic Lie superalgebra \(\mathfrak {osp}(1|2)\), which is spanned by the three operators described above, together with a further two symplectic symmetries; see [13, Section 7] for details.

Lemma 2.3

For any \(a \in \Lambda \), the operators \(T_a\), \({{\bar{T}}}_a\) and S are symmetries of the non-interacting expectation \([\cdot ]_0\) in the sense that, for any form F,

$$\begin{aligned} {[}T_aF]_0 = [{{\bar{T}}}_a F]_0 = [S_aF]_0 = 0. \end{aligned}$$
(2.18)

Moreover, for any \(\varvec{\beta }=(\beta _{ij})\) and \(\varvec{h} = \varvec{0}\), also \(T = \sum _{i\in \Lambda } T_i\) and \({{\bar{T}}} = \sum _{i\in \Lambda } \bar{T}_i\) are symmetries of the interacting expectation \([\cdot ]_\beta \):

$$\begin{aligned}{}[TF]_\beta = [{{\bar{T}}} F]_\beta = 0, \end{aligned}$$
(2.19)

and similarly \(S = \sum _{i\in \Lambda } S_i\) is a symmetry of \([\cdot ]_{\beta ,h}\) for any \(\varvec{\beta }\) and \(\varvec{h}\).

Proof

First assume that \(\varvec{\beta }=\varvec{0}\). Then by (2.13),

$$\begin{aligned}{}[T_a F]_0 = \int \prod _i \partial _{\eta _i} \partial _{\xi _i} \frac{1}{z_i} (T_aF) = \int \left( \prod _{i\ne a} \partial _{\eta _i} \partial _{\xi _i} \frac{1}{z_i}\right) \partial _{\eta _a} \partial _{\xi _a} \partial _{\xi _a} F = 0 \end{aligned}$$
(2.20)

since \((\partial _{\xi _a})^2\) acts as 0: any form contains at most one factor of \(\xi _a\). The same argument applies to \({{\bar{T}}}\), and a similar argument applies to S.

We now show that this implies T and \({{\bar{T}}}\) are also symmetries of \([\cdot ]_\beta \). Indeed, for any form F that is even (respectively odd), the fact that T is an odd derivation and the fact that \([\cdot ]_0\) is invariant imply the integration by parts formula

$$\begin{aligned}{}[TF]_\beta = \pm [F(TH_{\beta })]_\beta , \qquad H_{\beta } = H_{\beta ,0} = \frac{1}{4} \sum _{i,j\in \Lambda } \beta _{ij} (u_i-u_j)^2. \end{aligned}$$
(2.21)

For any \(\varvec{\beta }\) the right-hand side vanishes since \(TH_{\beta } = 0\) by (2.17). A similar argument applies for \({{\bar{T}}}\). Since every form F can be written as a sum of an even and an odd form, (2.19) follows.

The argument for S being a symmetry of \(\left[ \cdot \right] _{\beta ,h}\) is similar. \(\quad \square \)

To illustrate the use of these operators, we give a proof of the identities on the right-hand side of (2.11) and a proof of (2.12). Define

$$\begin{aligned} \lambda _{ab} \equiv z_b\xi _a, \qquad {{\bar{\lambda }}}_{ab} \equiv z_b\eta _a, \end{aligned}$$
(2.22)

and note \(T\lambda _{ab} = \xi _a\eta _b + z_az_b \) and \({{\bar{T}}} {{\bar{\lambda }}}_{ab}= \xi _{b}\eta _{a}+z_{a}z_{b}\). Hence

$$\begin{aligned} \langle u_a\cdot u_b \rangle _\beta = \langle z_az_b - T \lambda _{ab}-{{\bar{T}}} {{\bar{\lambda }}}_{ab} \rangle _\beta = \langle z_az_b \rangle _\beta , \end{aligned}$$
(2.23)

where the final equality is by linearity and Lemma 2.3. In particular, \(\langle z_{a}^{2} \rangle _{\beta }=-1\). Reasoning similarly, we obtain

$$\begin{aligned} \langle z_a \rangle _\beta&= \langle T\xi _a \rangle _\beta =0, \end{aligned}$$
(2.24)
$$\begin{aligned} \langle z_a z_b \rangle _{\beta }&= \langle T\lambda _{ab} \rangle _\beta -\langle \xi _a \eta _b \rangle _{\beta } = -\langle \xi _a \eta _b \rangle _{\beta }, \end{aligned}$$
(2.25)

which proves (2.12), and implies \(\langle \xi _{a}\eta _{a} \rangle _{\beta }=1\). Since \(z_az_b = (1-\xi _a\eta _a)(1-\xi _b\eta _b) = 1 - \xi _a\eta _a - \xi _b\eta _b + \xi _a\eta _a \xi _b\eta _b\) this also gives

$$\begin{aligned} -\langle z_a z_b \rangle _\beta = 1-\langle \xi _a\eta _a \xi _b\eta _b \rangle _\beta . \end{aligned}$$
(2.26)

Finally, we note that the symplectic symmetry and \(S(\xi _a\xi _b) = \xi _a\eta _b - \xi _b\eta _a\) imply

$$\begin{aligned} \langle \xi _a\eta _b \rangle _{\beta ,h} = \langle \xi _b\eta _a \rangle _{\beta ,h}. \end{aligned}$$
(2.27)

2.3 Proofs of Theorem 2.1 and Corollary 2.2

Our first lemma relies on the identities of the previous section.

Lemma 2.4

For any forest F,

$$\begin{aligned} \left[ { \prod _{ij \in F} (u_i\cdot u_j+1)}\right] _0 = 1. \end{aligned}$$
(2.28)

Proof

By factorization for fermionic integrals, it suffices to prove (2.28) when F is in fact a tree. We recall the definition

$$\begin{aligned}{}[G]_0 = \prod _i \partial _{\eta _i}\partial _{\xi _i} \frac{1}{z_i} G = \prod _i \partial _{\eta _i}\partial _{\xi _i} (1+\xi _i\eta _i) G. \end{aligned}$$
(2.29)

Hence, if T contains no edges then we have \([1]_{0}=1\). We complete the proof by induction, with the inductive assumption that the claim holds for all trees on k or fewer vertices. To advance the induction, let T be a tree on \(k+1\geqslant 2\) vertices and choose a leaf edge \(\{a,b\}\) of T, with a the leaf vertex. We then consider the sum of the integrals that result from expanding \((u_{a}\cdot u_{b}+1)\) in (2.28).

Note that by Lemma 2.3, if \(G_{1}\) is even (resp. odd) and \(TG=0\), then

$$\begin{aligned}{}[(TG_{1})G]_0 = \mp [G_{1}(TG)]_0 \end{aligned}$$
(2.30)

and similarly if \({{\bar{T}}} G = 0\). Thus for such a G, recalling the definition (2.22) of \(\lambda _{ab}\) and \({{\bar{\lambda }}}_{ab}\),

$$\begin{aligned}{}[(u_a\cdot u_b)G]_0 = [(z_az_b-T\lambda _{ab}-{{\bar{T}}}{{\bar{\lambda }}}_{ab})G]_0 = [z_az_bG]_0&= \frac{1}{2} [((T\xi _a)z_b+({{\bar{T}}}\eta _a)z_b)G]_{0} \nonumber \\&= \frac{1}{2} [(-\xi _a\eta _b+\eta _a\xi _b)G]_{0}, \end{aligned}$$
(2.31)

where we have used (2.30) in the second and final equalities. Applying this identity with \(G=\prod _{ij\in T\setminus \{a,b\}}(u_{i}\cdot u_{j}+1)\), the right-hand side is 0 since the product does not contain the missing generator at a to give a non-vanishing expectation. The inductive assumption and factorization for fermionic integrals implies \([G]_0=1\), and thus

$$\begin{aligned} {[}\prod _{ij \in T}(u_{i}\cdot u_{j}+1)]_0 = [(u_a\cdot u_b+1)G]_0 = [G]_0 = 1, \end{aligned}$$
(2.32)

advancing the induction. \(\quad \square \)

Lemma 2.5

For any \(i,j\in \Lambda \) we have \((u_i\cdot u_j+1)^2=0\), and for any graph C that contains a cycle,

$$\begin{aligned} \prod _{ij \in C} (u_i\cdot u_j+1) = 0. \end{aligned}$$
(2.33)

Proof

It suffices to consider when C is a cycle or doubled edge. Orienting C, the oriented edges of C are \((1,2),\dots , (k-1,k),(k,1)\) for some \(k\geqslant 2\). Then, with the convention \(k+1=1\),

$$\begin{aligned} \prod _{i=1}^{k} (u_i\cdot u_{i+1}+1)&= \prod _{i=1}^{k}(-\xi _{i}\eta _{i+1}+\eta _{i}\xi _{i+1} + \xi _{i}\eta _{i}+\xi _{i+1}\eta _{i+1} - \xi _{i}\eta _{i}\xi _{i+1}\eta _{i+1}) \nonumber \\&=\prod _{i=1}^{k}(-\xi _{i}\eta _{i+1}+\eta _{i}\xi _{i+1} + \xi _{i}\eta _{i}+\xi _{i+1}\eta _{i+1}) , \end{aligned}$$
(2.34)

the second equality by nilpotency of the generators and \(k\geqslant 2\). To complete the proof of the claim we consider which terms are non-zero in the expansion of this product. First consider the term that arises when choosing \(\xi _{1}\eta _{1}\) in the first term in the product: then for the second term any choice other than \(\xi _{2}\eta _{2}\) results in zero. Continuing in this manner, the only non-zero contribution is \(\prod _{i=1}^{k}\xi _{i}\eta _{i}\). Similar arguments apply to the other three choices possible in the first product, leading to

$$\begin{aligned}&\prod _{i=1}^{k}(-\xi _{i}\eta _{i+1}+\eta _{i}\xi _{i+1} + \xi _{i}\eta _{i}+\xi _{i+1}\eta _{i+1}) \nonumber \\&\quad = \prod _{i=1}^{k}\xi _{i}\eta _{i} + \prod _{i=1}^{k}\xi _{i+1}\eta _{i+1} + \prod _{i=1}^{k}(-\xi _{i}\eta _{i+1}) + \prod _{i=1}^{k}\eta _{i}\xi _{i+1} \nonumber \\&\quad = (1 + (-1)^{k}+(-1)^{2k-1}+(-1)^{k-1}) \prod _{i=1}^{k}\xi _{i}\eta _{i} \end{aligned}$$
(2.35)

which is zero for all k. The signs arise from re-ordering the generators. We have used that C is a cycle for the third and fourth terms. \(\quad \square \)

Proof of Theorem 2.1 when \(\varvec{h}=\varvec{0}\). By Lemma 2.5,

$$\begin{aligned} e^{\frac{1}{2} (\varvec{u},\Delta _\beta \varvec{u})} = \sum _{S} \prod _{ij \in S} \beta _{ij} (u_i\cdot u_j+1) = \sum _{F} \prod _{ij \in F} \beta _{ij} (u_i\cdot u_j+1), \end{aligned}$$
(2.36)

where the sum runs over sets S of edges and that over F is over forests. By taking the unnormalised expectation \([\cdot ]_0\) we conclude from Lemma 2.4 that

$$\begin{aligned} Z_{\beta ,0} = [e^{\frac{1}{2} (\varvec{u},\Delta _\beta \varvec{u})}]_0 = \sum _{F} \prod _{ij \in F} \beta _{ij}. \end{aligned}$$
(2.37)

\(\square \)

Establishing the theorem for \(\varvec{h} \ne \varvec{0}\) requires one further preliminary, which uses the idea of pinning the spin \(u_{0}\) at a chosen vertex \(0 \in \Lambda \). Informally, this means that \(u_0\) always evaluates to \((\xi ,\eta ,z) = (0,0,1)\). Formally, this means the following. To compute the pinned expectation of a function F of the forms \((u_{i}\cdot u_{j})_{i,j\in \Lambda }\), we replace \(\Lambda \) by \(\Lambda _{0} = \Lambda \setminus \{0\}\), set

$$\begin{aligned} h_j = \beta _{0 j}, \end{aligned}$$
(2.38)

in \(H_{\beta }\), and replace all instances of \(u_0 \cdot u_j\) by \(-z_j\) in both F and \(e^{-H_{\beta }}\). The pinned expectation of F is the hyperbolic fermionic integral (2.5) of this form with respect to the generators \((\xi _{i},\eta _{i})_{i\in \Lambda _{0}}\). We denote this expectation by

$$\begin{aligned}{}[\cdot ]_\beta ^0, \quad \langle \cdot \rangle _\beta ^0. \end{aligned}$$
(2.39)

This procedure gives a way to identify any function of the forms \((u_{i}\cdot u_{j})_{i,j\in \Lambda }\) with a function of the forms \((u_{i}\cdot u_{j})_{i,j\in \Lambda _{0}}\) and \((z_{i})_{i\in \Lambda _{0}}\). To minimize the notation, we will implicitly identify \(u_{0}\cdot u_{j}\) with \(-z_{j}\) when taking pinned expectations of functions F of the \((u_{i}\cdot u_{j})\).

The following proposition relates the pinned and unpinned models.

Proposition 2.6

For any polynomial F in \((u_i\cdot u_j)_{i,j\in \Lambda }\),

$$\begin{aligned}{}[F]_\beta ^0 = [(1-z_0)F]_\beta ,\qquad \langle F \rangle _{\beta }^0 = \langle (1-z_0) F \rangle _\beta . \end{aligned}$$
(2.40)

Proof

It suffices to prove the first equation of (2.40), as this implies \([1]_{\beta }^{0}=[1-z_{0}]_{\beta }=[1]_{\beta }\) since \([z_{0}]_{\beta }=0\) by (2.24).

Since \(1-z_0 = \xi _0\eta _0\), for any form F that contains a factor of \(\xi _0\) or \(\eta _0\), we have \((1-z_0)F=0\). Thus the expectation \([(1-z_0)F]_\beta \) amounts to the expectation with respect to \([\cdot ]_{0}\) of \(F e^{-H_\beta }\) with all terms containing a factor of \(\xi _0\) or \(\eta _0\) removed. The claim thus follows by computing the right-hand side using the observations that (i) removing all terms with factors of \(\xi _0\) and \(\eta _0\) from \(u_0\cdot u_i\) yields \(-z_i\), and (ii) \(\partial _{\eta _{0}}\partial _{\xi _{0}}\xi _{0}\eta _{0}z_{0}^{-1}=1\).

\(\square \)

There is a correspondence between pinning and external fields. If one first chooses \(\Lambda \) and then pins at \(0\in \Lambda \), the result is that there is an external field \(h_j\) for all \(j\in \Lambda \setminus 0\). One can also view this the other way around, by beginning with \(\Lambda \) and an external field \(h_j\) for all \(j\in \Lambda \), and then realizing this as due to pinning at an ‘external’ vertex \(\delta \notin \Lambda \). This idea shows that Theorem 2.1 with \(\varvec{h}\ne \varvec{0}\) follows from the case \(\varvec{h}=\varvec{0}\); for the reader who is not familiar with arguments of this type, we provide the details below.

Proof of Theorem 2.1 when \(\varvec{h}\ne \varvec{0}\). The partition function of the arboreal gas with \(\varvec{h} \ne \varvec{0}\) can be interpreted as that of the arboreal gas with \(\varvec{h} \equiv \varvec{0}\) on the graph \({{\tilde{G}}}\) obtained by augmenting G with an additional vertex \(\delta \), with weights \({\tilde{\beta }}\) given by \({\tilde{\beta }}_{ij} = \beta _{ij}\) for all \(i,j\in G\) and \({\tilde{\beta }}_{i\delta } = {\tilde{\beta }}_{\delta i} = h_i\). Each \(F'\in \mathcal {F}({{\tilde{G}}})\) is a union of some \(F\in \mathcal {F}(G)\) with a collection of edges \(\{i_{r}\delta \}_{r\in R}\) for some \(R\subset V(G)\). Since \(F'\) is a forest, \(|V(T)\cap R |\leqslant 1\) for each tree T in F. Conversely, for any \(F\in \mathcal {F}(G)\) and any \(R\subset V(G)\) satisfying \(|V(T)\cap R |\leqslant 1\) for each T in F, \(F\cup \{i_{r}\delta \}_{r\in R}\in \mathcal {F}({{\tilde{G}}})\). Thus

$$\begin{aligned} Z_{{\tilde{\beta }},0}^{{{\tilde{G}}}} = \sum _{F'\in \mathcal {F}({{\tilde{G}}})} \prod _{ij \in F'} {\tilde{\beta }}_{ij} = \sum _{F\in \mathcal {F}(G)}\prod _{ij \in F} \beta _{ij} \prod _{T\in F}(1+\sum _{i\in T}h_{i}) = Z_{\beta ,h}^G. \end{aligned}$$
(2.41)

To conclude, note that \([(1-z_\delta )F]_{{\tilde{\beta }}} = [F]_{{\tilde{\beta }}}\) for any function F with \(TF=0\); this follows from \([z_aF] = [(T\xi _a)F] = -[\xi _a(TF)] = 0\). The conclusion now follows from Proposition 2.6 (where \(\delta \) takes the role of 0 in that proposition), which shows \([(1-z_\delta )F]_{{\tilde{\beta }}} = [F]_{\beta ,h}\). \(\quad \square \)

Proof of Corollary 2.2

Since \({\mathbb {P}}_\beta \left[ ab\right] = \beta _{ab}\frac{d}{d\beta _{ab}} \log Z\), we have

$$\begin{aligned} {\mathbb {P}}_\beta \left[ ab\right] = -\frac{1}{2}\beta _{ab}\langle (u_{a}-u_{b})^{2} \rangle , \end{aligned}$$
(2.42)

and expanding the right-hand side yields (2.9). Alternatively, multiplying (2.36) by \(\beta _{ab}(1+u_a\cdot u_b)\), using Lemma 2.5, and then applying Lemma 2.4 yields the result. Similar considerations yield (2.10), and also show that

$$\begin{aligned} {\mathbb {P}}_{\beta }[i\nleftrightarrow j] = \langle 1+u_{i}\cdot u_{j} \rangle _{\beta }. \end{aligned}$$
(2.43)

Therefore \({\mathbb {P}}_{\beta }[i\leftrightarrow j] = -\langle u_{i}\cdot u_{j} \rangle _{\beta }\). Together with the identities (2.23)–(2.26), this proves (2.11). We already established (2.12) in Sect. 2.2. \(\quad \square \)

2.4 \({\mathbb {H}}^{2|4}\) model and dimensional reduction

In this section we define the \({\mathbb {H}}^{2|4}\) model, and show that for a class of ‘supersymmetric observables’ expectations with respect to the \({\mathbb {H}}^{2|4}\) model can be reduced to expectations with respect to the \({\mathbb {H}}^{0|2}\) model. To study the arboreal gas we will use this reduction in reverse: first we express arboreal gas quantities as \({\mathbb {H}}^{0|2}\) expectations, and in turn as \({\mathbb {H}}^{2|4}\) expectations. The utility of this rewriting will be explained in the next section, but in short, \({\mathbb {H}}^{2|4}\) expectations can be rewritten as ordinary integrals, and this carries analytic advantages.

The \({\mathbb {H}}^{2|4}\) model is a special case of the following more general \({\mathbb {H}}^{n|2m}\) model. These models originate with Zirnbauer’s \({\mathbb {H}}^{2|2}\) model [21, 51], but make sense for all \(n, m \in {\mathbb {N}}\). For fixed n and m with \(n+m>0\), the \({\mathbb {H}}^{n|2m}\) model is defined as follows.

Let \(\phi ^1,\dots , \phi ^n\) be n real variables, and let \(\xi ^1,\eta ^1,\dots ,\xi ^m,\eta ^m\) be 2m generators of a Grassmann algebra (i.e., they anticommute pairwise and are nilpotent of order 2). Note that we are using superscripts to distinguish variables. Forms, sometimes called superfunctions, are elements of \(\Omega ^{2m}({\mathbb {R}}^n)\), where \(\Omega ^{2m}({\mathbb {R}}^{n})\) is the Grassmann algebra generated by \((\xi ^k,\eta ^k)_{k=1}^m\) over \(C^\infty ({\mathbb {R}}^n)\). See [5, Appendix A] for details. We define a distinguished even element z of \(\Omega ^{2m}({\mathbb {R}}^n)\) by

$$\begin{aligned} z \equiv \sqrt{1+\sum _{\ell =1}^n (\phi ^\ell )^2 + \sum _{\ell =1}^m (-2\xi ^\ell \eta ^\ell )} \end{aligned}$$
(2.44)

and let \(u = (\phi ,\xi , \eta ,z)\). Given a finite set \(\Lambda \), we write \(\varvec{u} = (u_i)_{i\in \Lambda }\), where \(u_i=(\phi _i, \xi _i, \eta _i, z_i)\) with \(\phi _i\in {\mathbb {R}}^{n}\) and \(\xi _i=(\xi _{i}^{1},\dots ,\xi _{i}^{m})\) and \(\eta _i=(\eta _{i}^{1},\dots ,\eta _{i}^{m})\), each \(\xi _{i}^{j}\) (resp. \(\eta _{i}^{j}\)) a generator of \(\Omega ^{2m\Lambda }({\mathbb {R}}^{n\Lambda })\). We define the ‘inner product’

$$\begin{aligned} u_{i}\cdot u_{j} \equiv \sum _{\ell =1}^{n}\phi ^{\ell }_{i}\phi ^{\ell }_{j} + \sum _{\ell =1}^{m} (\eta _{i}^{\ell }\xi _{j}^{\ell }-\xi _{i}^{\ell }\eta _{j}^{\ell }) -z_{i}z_{j} . \end{aligned}$$
(2.45)

Note that these definitions imply \(u_i\cdot u_i = -1\). If \(m=0\), the constraint \(u_i\cdot u_i=-1\) defines the hyperboloid model for hyperbolic space \({\mathbb {H}}^n\), as in this case \(u_i \cdot u_j\) reduces to the Minkowski inner product on \({\mathbb {R}}^{n+1}\). For this reason we write \(u_i\in {\mathbb {H}}^{n|2m}\) and \(\varvec{u}\in ({\mathbb {H}}^{n|2m})^\Lambda \) and think of \({\mathbb {H}}^{n|2m}\) as a hyperbolic supermanifold. As we do not need to enter into the details of this mathematical object, we shall not discuss it further (see [51] for further details). We remark, however, that the expression \(\sum _{\ell =1}^{m} (-\xi _i^\ell \eta _j^\ell +\eta _i^\ell \xi _j^\ell )\) is the natural fermionic analogue of the Euclidean inner product \(\sum _{\ell =1}^{n}\phi _i^\ell \phi _j^\ell \) and motivates the supermanifold terminology.

The general class of models of interest are defined analogously to the \({\mathbb {H}}^{0|2}\) model by the action

$$\begin{aligned} H_{\beta ,h}(\varvec{u}) \equiv \frac{1}{2}(\varvec{u},-\Delta _{\beta }\varvec{u}) + (\varvec{h},\varvec{z}-1), \end{aligned}$$
(2.46)

where we now require \(\varvec{\beta } \geqslant 0\) and \(\varvec{h}\geqslant 0\), i.e., \(\varvec{\beta }=(\beta _{ij})_{i,j\in \Lambda }\) and \(\varvec{h}=(h_i)_{i\in \Lambda }\) satisfy \(\beta _{ij} \geqslant 0\) and \(h_i\geqslant 0\) for all \(i,j\in \Lambda \). We have again used the notation \((a,b) = \sum _{i\in \Lambda }a_{i}\cdot b_{i}\) but where \(\cdot \) now refers to (2.45). For a form \(F \in \Omega ^{2m\Lambda }({\mathbb {H}}^{n})\), the corresponding unnormalised expectation is

$$\begin{aligned} \left[ F\right] ^{{\mathbb {H}}^{n|2m}} \equiv \int _{({\mathbb {H}}^{n|2m})^\Lambda } F e^{-H_{\beta ,h}} \end{aligned}$$
(2.47)

where the superintegral of a form G is

$$\begin{aligned} \int _{({\mathbb {H}}^{n|2m})^{\Lambda }}G \equiv \int _{{\mathbb {R}}^{n\Lambda }} \prod _{i\in \Lambda }\frac{d\phi ^1_{i} \dots d\phi ^n_{i}}{(2\pi )^{n/2}} \, \partial _{\eta ^1_{i}}\partial _{\xi ^1_{i}} \cdots \partial _{\eta ^m_{i}}\partial _{\xi ^m_{i}} \left( \prod _{i\in \Lambda }\frac{1}{z_{i}}\right) G, \end{aligned}$$
(2.48)

where the \(z_{i}\) are defined by (2.44).

Henceforth we will only consider the \({\mathbb {H}}^{0|2}\) and \({\mathbb {H}}^{2|4}\) models, and hence we will write \(x_{i}=\phi _{i}^{1}\) and \(y_{i}=\phi ^{2}_{i}\) for notational convenience. We will also assume \(\varvec{\beta }\geqslant 0\) and \(\varvec{h}\geqslant 0\) to ensure both models are well-defined.

2.4.1 Dimensional reduction

The following proposition shows that, due to an internal supersymmetry, all observables F that are functions of \(u_i\cdot u_j\) have the same expectations under the \({\mathbb {H}}^{0|2}\) and the \({\mathbb {H}}^{2|4}\) models. Here \(u_i\cdot u_j\) is defined as in (2.3) for \({\mathbb {H}}^{0|2}\), respectively as in (2.45) for \({\mathbb {H}}^{2|4}\). In this section and henceforth we work under the convention that \(z_i = -u_\delta \cdot u_i\) with \(u_\delta = (0,\dots , 0,1)\), and that \((u_i\cdot u_j)_{i,j}\) refers to the collection of forms indexed by \(i,j \in {\tilde{\Lambda }} \equiv \Lambda \cup \{\delta \}\). In other words, functions of \((u_i\cdot u_j)_{i,j}\) are also permitted to depend on \((z_i)_{i}\).

Proposition 2.7

For any \(F:{\mathbb {R}}^{{\tilde{\Lambda }}\times {\tilde{\Lambda }}}\rightarrow {\mathbb {R}}\) smooth with enough decay that the integrals exist,

$$\begin{aligned} \left[ F((u_{i}\cdot u_{j})_{i,j})\right] _{\beta ,h}^{{\mathbb {H}}^{0|2}} = \left[ F((u_{i}\cdot u_{j})_{i,j})\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}}. \end{aligned}$$
(2.49)

In view of this proposition we will subsequently drop the superscript \({\mathbb {H}}^{n|2m}\) for expectations of observables F that are functions of \((u_i\cdot u_j)_{i,j}\). That is, we will simply write \(\left[ F\right] _{\beta ,h}\) for

$$\begin{aligned}{}[F]_{\beta ,h} =[F]_{\beta ,h}^{{\mathbb {H}}^{0|2}} =[F]_{\beta ,h}^{{\mathbb {H}}^{2|4}}. \end{aligned}$$
(2.50)

We will similarly write \(\langle F \rangle _{\beta ,h}=\langle F \rangle _{\beta ,h}^{{\mathbb {H}}^{0|2}}=\langle F \rangle _{\beta ,h}^{{\mathbb {H}}^{2|4}}\) whenever \(\left[ 1\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}}\) is positive and finite.

The proof of Proposition 2.7 uses the following fundamental localisation theorem. To state the theorem, consider forms in \(\Omega ^{2N}({\mathbb {R}}^{2N})\) and denote the even generators of this algebra by \((x_i,y_i)\) and the odd generators by \((\xi _i,\eta _i)\). Then we define

$$\begin{aligned} Q \equiv \sum _{i=1}^{N} Q_i\,, \qquad Q_i \equiv \xi _i \frac{\partial }{\partial x_i} + \eta _i \frac{\partial }{\partial y_i} - x_i\frac{\partial }{\partial \eta _i} + y_i \frac{\partial }{\partial \xi _i}. \end{aligned}$$
(2.51)

Theorem 2.8

Suppose \(F \in \Omega ^{2N}({\mathbb {R}}^{2N})\) is integrable and satisfies \(QF=0\). Then

$$\begin{aligned} \int _{{\mathbb {R}}^{2N}} \frac{dx\, dy\, \partial _{\eta }\, \partial _{\xi }}{2\pi }\, F = F_0(0) \end{aligned}$$
(2.52)

where the right-hand side is the degree-0 part of F evaluated at 0.

A proof of this theorem can be found, for example, in [5, Appendix B].

Proof of Proposition 2.7

To distinguish \({\mathbb {H}}^{0|2}\) and \({\mathbb {H}}^{2|4}\) variables, we write the latter as \(u_i'\), i.e.,

$$\begin{aligned} u_i \cdot u_j&= -\xi _i^1\eta _j^1-\xi _j^1\eta _i^1 - z_i z_j \end{aligned}$$
(2.53)
$$\begin{aligned} u_i' \cdot u_j'&= x_ix_j + y_iy_j -\xi _i^1\eta _j^1-\xi _j^1\eta _i^1 -\xi _i^2\eta _j^2-\xi _j^2\eta _i^2 - z_i' z_j'. \end{aligned}$$
(2.54)

We begin by considering the case \(N=1\), i.e., a graph with a single vertex. Since \(e^{-H_{\beta ,h}(\varvec{u})}\) is a function of \((u_i\cdot u_j)_{i,j}\), we will absorb the factor of \(e^{-H_{\beta ,h}(\varvec{u})}\) into the observable F to ease the notation. The \({\mathbb {H}}^{2|4}\) integral can be written as

$$\begin{aligned} \int _{{\mathbb {H}}^{2|4}}F = \int _{{\mathbb {R}}^{2}} \frac{dx \, dy}{2\pi } \, \partial _{\eta ^1}\partial _{\xi ^1} \, \partial _{\eta ^{2}}\partial _{\xi ^{2}} \frac{1}{z'} F = \partial _{\eta ^1}\partial _{\xi ^1} \int _{{\mathbb {R}}^2} \frac{dx \, dy}{2\pi } \, \partial _{\eta ^{2}}\partial _{\xi ^{2}} \frac{1}{z'} F \end{aligned}$$
(2.55)

where

$$\begin{aligned} z' = \sqrt{1+x^2 +y^2 - 2\xi ^1\eta ^1- 2\xi ^{2}\eta ^{2}} \end{aligned}$$
(2.56)

and \(\int _{{\mathbb {R}}^{2}}dx \, dy \, \partial _{\eta ^{2}}\partial _{\xi ^{2}} \frac{1}{z'} F\) is the form in \((\xi ^{1},\eta ^{1})\) obtained by integrating the coefficient functions term-by-term. Applying the localisation theorem (Theorem 2.8) to the variables \((x,y,\xi ^{2},\eta ^{2})\) gives, after noting \(z'\) localises to \(z = \sqrt{1-2 \xi ^{1}\eta ^{1}}\),

$$\begin{aligned} \int _{{\mathbb {R}}^2} \frac{dx \, dy}{2\pi } \, \partial _{\eta ^{2}}\partial _{\xi ^{2}} \frac{1}{z'} F((u_i'\cdot u_j')) = \frac{1}{z}F((u_i\cdot u_j)_{i,j}). \end{aligned}$$
(2.57)

Therefore

$$\begin{aligned} \int _{{\mathbb {H}}^{2|4}} F((u'_i\cdot u_j')_{i,j}) = \int _{{\mathbb {H}}^{0|2}} F((u_i\cdot u_j)_{i,j}) \end{aligned}$$
(2.58)

which is the claim. The argument for the case of general N is exactly analogous. \(\quad \square \)

2.5 Horospherical coordinates

Proposition 2.7 showed that ‘supersymmetric observables’ have the same expectations in the \({\mathbb {H}}^{0|2}\) and the \({\mathbb {H}}^{2|4}\) model. This is useful because the richer structure of the \({\mathbb {H}}^{2|4}\) model allows the introduction of horospherical coordinates, whose importance was recognised in [21, 47]. We will shortly define horospherical coordinates, but before doing this we state the result that we will deduce using them.

For the statement of the proposition, we require the following definitions. Let \(\Delta _{\beta (t),h(t)}\) be the matrix with (ij)th element \(\beta _{ij}e^{t_{i}+t_{j}}\) for \(i\ne j\) and ith diagonal element \(-\sum _{j\in \Lambda }\beta _{ij}e^{t_{i}+t_{j}}-h_{i}e^{t_{i}}\); this is the Laplacian (2.1) with edge weights \(\beta _{ij}e^{t_{i}+t_{j}}\) and with \(h_{i}e^{t_{i}}\) subtracted from the diagonal. Let

$$\begin{aligned} {{\tilde{H}}}_{\beta ,h}(t,s)&\equiv \sum _{ij}\beta _{ij}(\cosh (t_{i}-t_{j})+\frac{1}{2} e^{t_i+t_j}(s_i-s_j)^2-1) \nonumber \\&\quad + \sum _{i}h_{i}(\cosh (t_{i})+\frac{1}{2} e^{t_i}s_i^2-1) - 2\log \det (-\Delta _{\beta (t),h(t)}) + 3\sum _i t_i \end{aligned}$$
(2.59)
$$\begin{aligned} {{\tilde{H}}}_{\beta ,h}(t)&\equiv \sum _{ij}\beta _{ij}(\cosh (t_{i}-t_{j})-1) + \sum _{i}h_{i}(\cosh (t_{i})-1) \nonumber \\&\quad - \frac{3}{2}\log \det (-\Delta _{\beta (t),h(t)}) + 3 \sum _i t_i \end{aligned}$$
(2.60)

where we abuse notation by using the symbol \({{\tilde{H}}}_{\beta ,h}\) both for the function \({{\tilde{H}}}_{\beta ,h}(t,s)\) and \({{\tilde{H}}}_{\beta ,h}(t)\). Below we will assume that \(\varvec{\beta }\) is irreducible, by which we mean that \(\varvec{\beta }\) induces a connected graph.

Proposition 2.9

Assume \(\varvec{\beta } \geqslant 0\) and \(\varvec{h} \geqslant 0\) with \(\varvec{\beta }\) irreducible and \(h_i>0\) for at least one \(i\in \Lambda \). For all smooth functions \(F:{\mathbb {R}}^{2\Lambda } \rightarrow {\mathbb {R}}\), respectively \(F:{\mathbb {R}}^\Lambda \rightarrow {\mathbb {R}}\), such that the integrals on the left- and right-hand sides converge absolutely,

$$\begin{aligned} \left[ F((x_i+z_i)_{i}, (y_i)_i)\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}}&= \int _{{\mathbb {R}}^{2\Lambda }} F((e^{t_i})_{i}, (e^{t_i}s_i)_i) e^{-{{\tilde{H}}}_{\beta ,h}(t,s)} \prod _{i} \frac{dt_i\, ds_i}{2\pi } \end{aligned}$$
(2.61)
$$\begin{aligned} \left[ F((x_{i}+z_i)_{i})\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}}&= \int _{{\mathbb {R}}^{\Lambda }} F((e^{t_i})_{i}) e^{-{{\tilde{H}}}_{\beta ,h}(t)} \prod _{i} \frac{dt_i}{\sqrt{2\pi }}. \end{aligned}$$
(2.62)

In particular, the normalising constant \(\left[ 1\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}}\) is the partition function \(Z_{\beta ,h}\) of the arboreal gas.

Abusing notation further, we will denote either of the expectations on the right-hand sides of (2.61) and (2.62) by \([{\cdot }]_{\beta ,h}\), and we will write \(\langle \cdot \rangle _{\beta ,h}\) for the normalised versions. Before giving the proof of the proposition, which is essentially standard, we collect some resulting identities that will be used later.

Corollary 2.10

For all \(\varvec{\beta }\) and \(\varvec{h}\) as in Proposition 2.9,

$$\begin{aligned} \langle e^{t_i} \rangle _{\beta ,h} = \langle e^{2t_i} \rangle _{\beta ,h} = \langle z_i \rangle _{\beta ,h}, \quad \langle e^{3t_i} \rangle _{\beta ,h}=1 \end{aligned}$$
(2.63)

and

$$\begin{aligned} \langle s_is_je^{t_i+t_j} \rangle _{\beta ,h} = \langle \xi _i\eta _j \rangle _{\beta ,h}, \end{aligned}$$
(2.64)

where the left-hand sides are evaluated as on the right-hand side of (2.61), and the right-hand sides are given by the \({\mathbb {H}}^{0|2}\) expectation (2.7).

Proof

To lighten notation, we write \(\langle \cdot \rangle \equiv \langle \cdot \rangle _{\beta ,h}\). For the \({\mathbb {H}}^{2|4}\) expectation (2.47), we have \(\langle x_i^qz_i^p \rangle = 0\) whenever \(q>0\) is an odd integer by the symmetry \(x\mapsto -x\) (recall that \(x=\phi ^1\)). Also note that

$$\begin{aligned} \langle x_i^2 \rangle = \langle y_i^2 \rangle = \langle \xi _i^1\eta _i^1 \rangle = \langle \xi _i^2\eta _i^2 \rangle , \end{aligned}$$
(2.65)

where we emphasize that the superscript of \(x_i^2\) denotes the square and the superscript of \(\xi _i^2\) denotes the second component. These identities follow from the \(x\leftrightarrow y\) and \(\xi _{i}^{1}\eta _{i}^{1}\leftrightarrow \xi _{i}^{2}\eta _{i}^{2}\) symmetries of the \({\mathbb {H}}^{2|4}\) model and \(\langle x_i^2+y_i^2-2\xi _i^1\eta _i^1 \rangle =0\) by supersymmetric localisation, i.e., Theorem 2.8. Since

$$\begin{aligned} \langle z_i^2 \rangle&= 1- 2\langle \xi _i\eta _i \rangle&\text {in } {\mathbb {H}}^{0|2}, \end{aligned}$$
(2.66)
$$\begin{aligned} \langle z_i^2 \rangle&= 1+\langle x_i^2+ y_i^2-2\xi _i^1\eta _i^1-2\xi _i^2\eta _i^2 \rangle = 1- 2\langle \xi _i^2\eta _i^2 \rangle&\text {in } {\mathbb {H}}^{2|4}, \end{aligned}$$
(2.67)

and since the left-hand sides are equal by Proposition 2.7, we further see that the \({\mathbb {H}}^{2|4}\) expectation (2.65) equals the \({\mathbb {H}}^{0|2}\) expectation \(\langle \xi _i\eta _i \rangle \). Similarly, \(\langle x_{i}^{2}z_{i} \rangle = \langle y_{i}^{2}z_{i} \rangle = \langle \xi _{i}^{1}\eta _{i}^{1}z_{i} \rangle = \langle \xi _{i}^{2}\eta _{i}^{2}z_{i} \rangle \). By using the preceding equalities and by expanding \(\langle (-1+z_{i}^{2})z_{i} \rangle =\langle (u_{i}\cdot u_{i}+z_{i}^{2})z_{i} \rangle \) in both \({\mathbb {H}}^{0|2}\) and \({\mathbb {H}}^{2|4}\), one obtains

$$\begin{aligned} -2\langle x_{i}^{2}z_{i} \rangle = -\langle z_{i} \rangle + \langle z_{i}^{3} \rangle = -2\langle \xi _{i}\eta _{i} \rangle , \end{aligned}$$
(2.68)

where the first expectation is with respect to \({\mathbb {H}}^{2|4}\) and the others are with respect to \({\mathbb {H}}^{0|2}\). Using these identities and (2.61), we then find

$$\begin{aligned} \langle e^{t_i} \rangle&= \langle x_i+z_i \rangle = \langle z_i \rangle \end{aligned}$$
(2.69)
$$\begin{aligned} \langle e^{2t_i} \rangle&= \langle (x_i + z_i)^2 \rangle = \langle x_i^2 \rangle + \langle z_i^2 \rangle = \langle \xi _i\eta _i \rangle + \langle 1 - 2\xi _i\eta _i \rangle = \langle 1-\xi _i\eta _i \rangle = \langle z_i \rangle \end{aligned}$$
(2.70)
$$\begin{aligned} \langle e^{3t_i} \rangle&= \langle (x_i + z_i)^3 \rangle = \langle 3x_i^2z_i \rangle + \langle z_i^3 \rangle = 3\langle \xi _{i}\eta _{i} \rangle + \langle 1 - 3\xi _i\eta _i \rangle = 1 . \end{aligned}$$
(2.71)

The identity (2.64) follows analogously:

$$\begin{aligned} \langle s_is_je^{t_i+t_j} \rangle = \langle y_iy_j \rangle = \frac{1}{2} \langle \xi _i\eta _j+\xi _j\eta _i \rangle = \langle \xi _i\eta _j \rangle \end{aligned}$$
(2.72)

where we used the generalisation of (2.65) for the mixed expectation \(\langle x_ix_j \rangle \) and that \(\langle \xi _i\eta _j \rangle = \langle \xi _j\eta _i \rangle \), see (2.27). \(\quad \square \)

To describe the proof of Proposition 2.9 we now define horospherical coordinates for \({\mathbb {H}}^{2|4}\). These are a change of generators from the variables \((x,y,\xi ^{\gamma },\eta ^{\gamma })\) with \(\gamma =1,2\) to \((t,s,\psi ^{\gamma },{{\bar{\psi }}}^{\gamma })\), where

$$\begin{aligned} x = \sinh t - e^{t}(\frac{1}{2}s^{2} + {{\bar{\psi }}}^{1}\psi ^{1} + {{\bar{\psi }}}^{2}\psi ^{2}), \;\; y = e^{t}s,\;\; \eta ^{i} = e^{t}{{\bar{\psi }}}^{i}, \;\; \xi ^{i} = e^{t}\psi ^{i}.\qquad \end{aligned}$$
(2.73)

We note that \({{\bar{\psi }}}_{i}\) is simply notation to indicate a generator distinct from \(\psi _{i}\), i.e., the bar does not denote complex conjugation, which would not make sense. In these coordinates the action is quadratic in \(s, {{\bar{\psi }}}^{1},\psi ^{1},{{\bar{\psi }}}^{2},\psi ^{2}\). This leads to a proof of Proposition 2.9 by explicitly integrating out these variables when t is fixed via the following standard lemma, whose proof we omit.

Lemma 2.11

For any \(N \times N\) matrix A,

$$\begin{aligned} \left( {\prod _i \partial _{\eta _i}\partial _{\xi _i}}\right) e^{(\xi ,A\eta )} = \det A, \end{aligned}$$
(2.74)

and, for a positive definite \(N\times N\) matrix A,

$$\begin{aligned} \int _{{\mathbb {R}}^{N}} e^{-\frac{1}{2} (s,As)} \, \prod _{i=1}^{N}\frac{ds_i}{\sqrt{2\pi }} = (\det A)^{ -1/2}. \end{aligned}$$
(2.75)
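For readers who want a quick sanity check of the Gaussian half of Lemma 2.11, the following short numerical sketch (not part of the argument; it assumes only numpy, and the \(2\times 2\) positive definite test matrix is arbitrary) approximates the integral in (2.75) by a Riemann sum and compares it with \((\det A)^{-1/2}\). The fermionic identity (2.74) is the exact algebraic analogue and is not checked here.

```python
# Numerical sanity check of the Gaussian integral (2.75) in Lemma 2.11.
# A minimal sketch assuming only numpy; the test matrix A is arbitrary.
import numpy as np

A = np.array([[2.0, 0.7],
              [0.7, 1.5]])
L, n = 10.0, 801                         # integration box [-L, L]^2 and grid size
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
quad_form = A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2
integral = np.sum(np.exp(-0.5 * quad_form)) * dx * dx / (2 * np.pi)

print(integral, np.linalg.det(A) ** -0.5)   # both are approximately 0.631
```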

Proof of Proposition 2.9

The first step is to compute the Berezinian for the horospherical change of coordinates. This can be done as in [6, Appendix A]. There is an \(e^{t}\) for the s-variables and an \(e^{-t}\) for each fermionic variable, leading to a Berezinian \(z e^{-3t}\), i.e.,

$$\begin{aligned} \left[ F\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}} = \int \left( \prod _{i}ds_{i} dt_{i}\partial _{\psi _{i}^{1}}\partial _{{{\bar{\psi }}}^{1}_{i}} \partial _{\psi ^{2}_{i}}\partial _{{{\bar{\psi }}}^{2}_{i}}\right) Fe^{-{{\bar{H}}}_{\beta ,h}(s,t,\psi ,{{\bar{\psi }}})} \prod _{i}\frac{e^{-3t_{i}}}{2\pi } \end{aligned}$$
(2.76)

where \({{\bar{H}}}_{\beta ,h}(s,t,\psi ,{{\bar{\psi }}})\) is \(H_{\beta ,h}\) expressed in horospherical coordinates.

The second step is to apply Lemma 2.11 repeatedly. To prove (2.61), we apply it twice, once for \(({{\bar{\psi }}}^{1},\psi ^{1})\) and once for \(({{\bar{\psi }}}^{2},\psi ^{2})\). The lemma applies since F does not depend on \(\psi ^{1},{{\bar{\psi }}}^{1},\psi ^{2},{{\bar{\psi }}}^{2}\) by assumption. To prove (2.62), we apply it three times, once for \(({{\bar{\psi }}}^{1},\psi ^{1})\), once for \(({{\bar{\psi }}}^{2},\psi ^{2})\), and once for s. Each integral contributes a power of \(\det (-\Delta _{\beta (t),h(t)})\), namely \(-1/2\) for the Gaussian integral over s and \(+1\) for each fermionic integral. This explains the coefficient 2 of the \(\log \det \) term in (2.59), which enters (2.61), and the coefficient \(3/2=2-1/2\) in (2.60), which enters (2.62).

The final claim follows because the conditions that \(\varvec{\beta }\) induces a connected graph and that \(h_{i}>0\) for some i imply that \(\left[ 1\right] _{\beta ,h}^{{\mathbb {H}}^{2|4}}\) is finite. The claim then follows from Proposition 2.7 and Theorem 2.1. \(\quad \square \)
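A quick way to see why the substitution \(x_i+z_i=e^{t_i}\) (and \(y_i=e^{t_i}s_i\)) appears in (2.61)–(2.62) is to drop the fermionic generators in (2.73): then \(z=\sqrt{1+x^2+y^2}=\cosh t+\tfrac{1}{2}e^{t}s^2\) and \(x+z=e^{t}\) exactly. The following minimal numerical sketch (assuming only numpy; the sample points are arbitrary) checks this identity.

```python
# Check of the bosonic part of the horospherical coordinates (2.73): with the
# fermionic generators set to zero, x = sinh(t) - 0.5*exp(t)*s**2, y = exp(t)*s,
# z = sqrt(1 + x**2 + y**2), and then x + z = exp(t) exactly.
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=5)
s = rng.normal(size=5)

x = np.sinh(t) - 0.5 * np.exp(t) * s**2
y = np.exp(t) * s
z = np.sqrt(1 + x**2 + y**2)

print(np.max(np.abs(x + z - np.exp(t))))   # of the order of machine precision
```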

2.6 Pinned measure for the \({\mathbb {H}}^{2|4}\) model

This section introduces a pinned version of the \({\mathbb {H}}^{2|4}\) model and relates it to the pinned \({\mathbb {H}}^{0|2}\) model that was introduced in Sect. 2.2. For the \({\mathbb {H}}^{2|4}\) model, pinning means that \(u_{0}\) always evaluates to \((x,y,\xi ^{1},\eta ^{1},\xi ^{2},\eta ^{2},z) = (0,0,0,0,0,0,1)\). As before, we implement this by replacing \(\Lambda \) by \(\Lambda _0 = \Lambda \setminus \{0\}\) and setting

$$\begin{aligned} h_j = \beta _{0j}, \end{aligned}$$
(2.77)

and replacing \(u_0 \cdot u_j\) by \(-z_j\). We denote the corresponding expectations by

$$\begin{aligned}{}[\cdot ]_\beta ^0, \quad \langle \cdot \rangle _\beta ^0. \end{aligned}$$
(2.78)

We can relate the pinned and unpinned measures exactly as for the \({\mathbb {H}}^{0|2}\) model.

Proposition 2.12

For any polynomial F in \((u_{i}\cdot u_{j})_{i,j\in \Lambda }\),

$$\begin{aligned}{}[F]_\beta ^0 = [(1-z_0)F]_\beta ,\qquad \langle F \rangle _{\beta }^0 = \langle (1-z_0) F \rangle _\beta . \end{aligned}$$
(2.79)

Moreover, \([1]^0_\beta = [1]_\beta \) and hence for any pairs of vertices \(i_kj_k\),

$$\begin{aligned} \langle \prod _{k} (u_{i_k}\cdot u_{j_k}+1) \rangle ^0_\beta = \langle \prod _{k} (u_{i_k}\cdot u_{j_k}+1) \rangle _\beta . \end{aligned}$$
(2.80)

Proof

The first equality in (2.79) follows by reducing the \({\mathbb {H}}^{2|4}\) expectation to a \({\mathbb {H}}^{0|2}\) expectation by Proposition 2.7 (recall the convention that \(z_0 = u_\delta \cdot u_0\)), then applying Proposition 2.6 for the \({\mathbb {H}}^{0|2}\) expectation, and finally applying Proposition 2.7 again (in reverse). The second equality in (2.79) then follows by normalising using that \([1]_\beta ^0 = [1-z_0]_\beta = [1]_\beta \) (as in Proposition 2.6). The equalities (2.80) follow from \([1]_\beta ^0 = [1]_\beta \) by differentiating with respect to the \(\beta _{i_kj_k}\). \(\quad \square \)

The next corollary expresses the pinned model in horospherical coordinates. For \(i,j \in \Lambda \), set

$$\begin{aligned} \beta _{ij}(t) \equiv \beta _{ij} e^{t_i+t_j}, \end{aligned}$$
(2.81)

and let \({{\tilde{D}}}_\beta (t)\) be the determinant of \(-\Delta _{\beta (t)}\) restricted to \(\Lambda _0 = \Lambda \setminus \{0\}\), i.e., the determinant of the submatrix of \(-\Delta _{\beta (t)}\) indexed by \(\Lambda _{0}\). When \(\varvec{\beta }\) induces a connected graph, this determinant is non-zero, and by the matrix-tree theorem it can be written as

$$\begin{aligned} {{\tilde{D}}}_\beta (t) = \sum _T \prod _{ij\in T} \beta _{ij}e^{t_i+t_j} \end{aligned}$$
(2.82)

where the sum is over all spanning trees T of \(\Lambda \). For \(t\in {\mathbb {R}}^\Lambda \), we then define

$$\begin{aligned} {{\tilde{H}}}_\beta ^0(t) \equiv \frac{1}{2} \sum _{i,j} \beta _{ij} (\cosh (t_i-t_j)-1) - \frac{3}{2} \log {{\tilde{D}}}_\beta (t) + 3\sum _{i} t_i. \end{aligned}$$
(2.83)
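The weighted matrix-tree identity (2.82) can be checked by brute force on a small graph. The sketch below (not part of the argument; it assumes only numpy and the standard library, and the test graph and the weights \(\beta _{ij}\), \(t_i\) are arbitrary) builds the positive semi-definite Laplacian with edge weights \(\beta _{ij}e^{t_i+t_j}\), i.e., the matrix \(-\Delta _{\beta (t)}\), deletes the row and column of the vertex 0, and compares its determinant with the sum over spanning trees.

```python
# Brute-force check of the weighted matrix-tree identity (2.82).
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (1, 3)]
beta = {e: rng.uniform(0.2, 2.0) for e in edges}
t = rng.normal(size=n)

# positive semi-definite Laplacian with weights beta_ij * exp(t_i + t_j)
L = np.zeros((n, n))
for (i, j), b in beta.items():
    w = b * np.exp(t[i] + t[j])
    L[i, j] -= w
    L[j, i] -= w
    L[i, i] += w
    L[j, j] += w
det_reduced = np.linalg.det(L[1:, 1:])    # row/column of the pinned vertex 0 removed

def spans(subset):
    """True if the n-1 edges in `subset` connect all n vertices (a spanning tree)."""
    comp = list(range(n))
    def find(a):
        while comp[a] != a:
            comp[a] = comp[comp[a]]
            a = comp[a]
        return a
    for i, j in subset:
        comp[find(i)] = find(j)
    return len({find(v) for v in range(n)}) == 1

tree_sum = sum(
    np.prod([beta[e] * np.exp(t[e[0]] + t[e[1]]) for e in subset])
    for subset in itertools.combinations(edges, n - 1)
    if spans(subset)
)

print(det_reduced, tree_sum)   # the two numbers agree up to rounding
```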

By combining Proposition 2.12 with Proposition 2.9, we have the following representation of the pinned measure in horospherical coordinates.

Corollary 2.13

For any smooth function \(F:{\mathbb {R}}^\Lambda \rightarrow {\mathbb {R}}\) with sufficient decay,

$$\begin{aligned}{}[{F((x+z)_i)}]_{\beta }^0 = \int F((e^{t_i})_{i}) e^{-\tilde{H}_\beta ^0(t)}\, \delta _0(dt_0) \prod _{i\ne 0} \frac{dt_i}{\sqrt{2\pi }}. \end{aligned}$$
(2.84)

Proof

We recall the definition of the left-hand side, i.e., that the expectation \([{\cdot }]_\beta ^0\) is defined in (2.77)–(2.78) as the expectation on \(\Lambda _{0}\) given by \([{\cdot }]^{0}_\beta = [{\cdot }]_{{\tilde{\beta }},{{\tilde{h}}}}\) with \({\tilde{\beta }}_{ij}=\beta _{ij}\) and \({{\tilde{h}}}_i = \beta _{0i}\) for \(i,j\in \Lambda _0\). The equality now follows from (2.62), together with the observation that \(\Delta _{\beta (t)}|_{\Lambda _0}\) is \(\Delta _{{\tilde{\beta }}(t),{{\tilde{h}}}(t)}\) if \(t_{0}=0\). \(\quad \square \)

In view of (2.84) and since \([1]_{\beta }^{0}=Z_{\beta }\) by Proposition 2.12, we again abuse notation somewhat and write the normalised expectation of a function of \(t=(t_i)_{i\in \Lambda }\) as

$$\begin{aligned} \langle F \rangle _\beta ^0 = \frac{1}{Z_{\beta }}\int _{{\mathbb {R}}^\Lambda } F((t_i)_i) e^{-{{\tilde{H}}}_\beta ^0(t)} \delta _0(dt_0)\, \prod _{i \ne 0} \frac{dt_i}{\sqrt{2\pi }} . \end{aligned}$$
(2.85)

Corollary 2.14

The connection probabilities can be written in terms of the pinned \({\mathbb {H}}^{2|4}\) measure:

$$\begin{aligned} {\mathbb {P}}_\beta [0\leftrightarrow i] = \langle e^{t_i} \rangle _{\beta }^0. \end{aligned}$$
(2.86)

Moreover, for any vertex i,

$$\begin{aligned} \langle e^{3t_{i}} \rangle _{\beta }^{0}=1. \end{aligned}$$
(2.87)

Proof

(2.86) follows by applying first (2.11), then (2.80), then using the fact that \(u_0\cdot u_i=-z_i\) under \(\langle \cdot \rangle _\beta ^0\), then using that \(\langle x_i \rangle _\beta ^0=0\) by symmetry, and finally applying (2.84):

$$\begin{aligned} {\mathbb {P}}_\beta [0\leftrightarrow i] = -\langle u_0\cdot u_i \rangle _\beta = \langle z_i \rangle _\beta ^0 = \langle z_i+x_i \rangle _\beta ^0 = \langle e^{t_i} \rangle _\beta ^0. \end{aligned}$$
(2.88)

The argument that \(\langle e^{3t_{i}} \rangle _{\beta }^{0}=1\) is identical to (2.71) with \(\langle \cdot \rangle _\beta \) replaced by \(\langle \cdot \rangle _\beta ^0\). \(\quad \square \)

3 Phase Transition on the Complete Graph

The following theorem shows that on the complete graph the arboreal gas undergoes a transition very similar to the percolation transition, i.e., the Erdős–Rényi graph. As mentioned in the introduction, this result has been obtained previously [8, 34, 36]. We have included a proof only to illustrate the utility of the \({\mathbb {H}}^{0|2}\) representation. The study of spanning forests of the complete graph goes back to (at least) Rényi [43] who obtained a formula which can be seen to imply that their asymptotic number grows like \(\sqrt{e}n^{n-2}\), see [37].

Throughout this section we consider \({{\mathbb {G}}} = K_{N}\), the complete graph on N vertices with vertex set \(\{0,1,2,\dots , N-1\}\), and we choose \(\beta _{ij} = \alpha /N\) with \(\alpha >0\) fixed for all edges ij. For notational simplicity we write \(Z_{\beta }\) and \({\mathbb {P}}_{\beta }\), i.e., we leave the dependence on N implicit.

Theorem 3.1

In the high temperature phase \(\alpha < 1\),

$$\begin{aligned} Z_\beta&\sim e^{(N+1)\alpha /2} \sqrt{1-\alpha }, \quad {\mathbb {P}}_\beta [0\leftrightarrow 1] \sim \left[ {\frac{\alpha }{1-\alpha }}\right] \frac{1}{N}. \end{aligned}$$
(3.1)

In the low temperature phase \(\alpha > 1\),

$$\begin{aligned} Z_\beta&\sim \frac{\alpha ^{N+3/2} e^{(\alpha ^2+N)/(2 \alpha )}}{(\alpha -1)^{5/2} N}, \quad {\mathbb {P}}_\beta [0\leftrightarrow 1] \sim \left[ {\frac{\alpha -1}{\alpha }}\right] ^{2} . \end{aligned}$$
(3.2)

In the critical case \(\alpha = 1\),

$$\begin{aligned} Z_\beta&\sim \frac{3^{1/6}\Gamma (\frac{2}{3})e^{(N+1)/2}}{N^{1/6}\sqrt{2\pi }}, \quad {\mathbb {P}}_\beta [0\leftrightarrow 1] \sim \left[ {\frac{3^{2/3}\Gamma (\tfrac{4}{3})}{\Gamma (\tfrac{2}{3})}}\right] \frac{1}{N^{2/3}}. \end{aligned}$$
(3.3)

3.1 Integral representation

The first step in the proof of the theorem is the following integral representation that follows from a transformation of the fermionic field theory representation from Sect. 2.1. We introduce the effective potential

$$\begin{aligned} V({{\tilde{z}}}) \equiv - P(i\alpha {{\tilde{z}}}), \qquad P(w) \equiv \frac{w^2}{2\alpha } + w + \log (1 -w) \end{aligned}$$
(3.4)

and set

$$\begin{aligned} F(w) \equiv 1 - \frac{\alpha }{1-w}, \quad F_{01}(w) \equiv -\left( {\frac{w}{1-w}}\right) ^2 \left( {F(w) - \frac{2\alpha }{N(-w)(1-w)}}\right) .\qquad \end{aligned}$$
(3.5)

Proposition 3.2

For all \(\alpha >0\) and all positive integers N,

$$\begin{aligned} Z_{\beta }&= e^{(N+1)\alpha /2} \sqrt{\frac{N\alpha }{2\pi }} \int _{{\mathbb {R}}} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F(i\alpha {{\tilde{z}}}) \end{aligned}$$
(3.6)
$$\begin{aligned} Z_{\beta }[0\leftrightarrow 1]&= e^{(N+1)\alpha /2} \sqrt{\frac{N\alpha }{2\pi }} \int _{{\mathbb {R}}} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F_{01}(i\alpha {{\tilde{z}}}), \end{aligned}$$
(3.7)

where \(Z_{\beta }[0\leftrightarrow 1] \equiv {\mathbb {P}}_{\beta }[0\leftrightarrow 1]Z_{\beta }\).

Proof

We start from the representations of the partition functions in terms of the \({\mathbb {H}}^{0|2}\) model, i.e., Theorem 2.1 and Corollary 2.2, which we simplify using the assumption that the graph is the complete graph. Let \(\Delta _\beta \) be the mean-field Laplacian, \((-\Delta _\beta f)_i = \frac{\alpha }{N}\sum _{j=0}^{N-1} (f_i-f_j)\), and let \(\varvec{h}= (h_i)_i\). Then

$$\begin{aligned} \frac{1}{2} (\varvec{u},-\Delta _\beta \varvec{u})&= - (\varvec{\xi },-\Delta _\beta \varvec{\eta }) - \frac{1}{2} (\varvec{z},-\Delta _\beta \varvec{z}) \nonumber \\&= - (\varvec{\xi },-\Delta _\beta \varvec{\eta }) + \alpha \sum _{i=0}^{N-1} \xi _i\eta _i + \frac{\alpha }{2N} \left( {\sum _{i=0}^{N-1} z_i}\right) ^2 - \frac{\alpha N}{2} \end{aligned}$$
(3.8)
$$\begin{aligned} (\varvec{h}, \varvec{z}-\varvec{1})&= - \sum _{i=0}^{N-1} h_i\xi _i\eta _i . \end{aligned}$$
(3.9)

In the sequel we will omit the range of sums and products when there is no risk of ambiguity.

To decouple the two terms that are not diagonal sums we use the following Hubbard–Stratonovich-type transforms in terms of auxiliary variables \({\tilde{\xi }},{\tilde{\eta }}\) (fermionic) and \({{\tilde{z}}}\) (real). Let \({\mathbf {1}}\) be the vector such that \({\mathbf {1}}_i=1\) for all \(0\leqslant i\leqslant N-1\).

$$\begin{aligned} e^{+(\varvec{\xi },-\Delta _\beta \varvec{\eta })}&= \frac{1}{N\alpha } \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} e^{\alpha ({\tilde{\xi }} {\mathbf {1}} -\varvec{\xi },{\tilde{\eta }} {\mathbf {1}}-\varvec{\eta })} = \frac{1}{N\alpha } \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} \left[ { e^{N\alpha {\tilde{\xi }}{\tilde{\eta }}} \prod _i e^{\alpha (\xi _i\eta _i-{\tilde{\xi }}\eta _i-\xi _i{\tilde{\eta }})} }\right] \end{aligned}$$
(3.10)
$$\begin{aligned} e^{-\frac{\alpha }{2N} (\sum _i z_i)^2}&= \sqrt{\frac{N\alpha }{2\pi }} \int _{\mathbb {R}}d{{\tilde{z}}} \, e^{-\frac{1}{2} N\alpha {{\tilde{z}}}^2} e^{i\alpha {{\tilde{z}}} \sum _i z_i}. \end{aligned}$$
(3.11)

The second formula is the formula for the Fourier transform of a Gaussian measure. The first formula can be seen by making use of the following identity. Write \(Af \equiv \frac{1}{N}\sum _i f_i\) for the average of f, so that

$$\begin{aligned} \alpha ({\tilde{\xi }} {\mathbf {1}} -\varvec{\xi },{\tilde{\eta }} {\mathbf {1}}-\varvec{\eta })&= \alpha ([{\tilde{\xi }} -A\varvec{\xi }]{\mathbf {1}}-[\varvec{\xi }-(A\varvec{\xi }) {\mathbf {1}}],[{\tilde{\eta }} -A\varvec{\eta }]{\mathbf {1}}-[\varvec{\eta }-(A\varvec{\eta }){\mathbf {1}}]) \nonumber \\&= \alpha ([{\tilde{\xi }} -A\varvec{\xi }]{\mathbf {1}},[{\tilde{\eta }} -A\varvec{\eta }]{\mathbf {1}})+\alpha (\varvec{\xi }-(A\varvec{\xi }){\mathbf {1}},\varvec{\eta }-(A\varvec{\eta }){\mathbf {1}}) \nonumber \\&= N\alpha ({\tilde{\xi }} -A\varvec{\xi })({\tilde{\eta }} -A\varvec{\eta })+(\varvec{\xi },-\Delta _\beta \varvec{\eta }). \end{aligned}$$
(3.12)

Using this identity the first equality in (3.10) is readily obtained by computing the fermionic derivatives, while the second equality follows by expanding the exponent. In the second line of (3.12) we used the orthogonality of constant functions with the mean 0 function \(\varvec{\xi }-(A\varvec{\xi }){\mathbf {1}}\). Finally, on the last line of (3.12), we used that \([{\tilde{\eta }}-A\varvec{\eta }]{\mathbf {1}}\) is a constant to write the \(\ell ^2\) inner product as a product multiplied by a factor N, and the factor \(\alpha \) in the second term was absorbed into \(\Delta _{\beta }\).
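The Gaussian identity (3.11) is elementary, but a direct numerical check may be reassuring. The following sketch (assuming only numpy; the values of N, \(\alpha \) and \(S=\sum _i z_i\) are arbitrary test values) evaluates the right-hand side of (3.11) by quadrature.

```python
# Numerical check of the Hubbard-Stratonovich identity (3.11): for real S,
# exp(-alpha*S**2/(2*N)) equals sqrt(N*alpha/(2*pi)) times the integral of
# exp(-N*alpha*z**2/2 + 1j*alpha*z*S) over the real line.
import numpy as np

N, alpha, S = 10, 0.7, 3.2
z = np.linspace(-20.0, 20.0, 200001)
dz = z[1] - z[0]
integrand = np.exp(-0.5 * N * alpha * z**2 + 1j * alpha * z * S)
rhs = np.sqrt(N * alpha / (2 * np.pi)) * np.sum(integrand) * dz

print(rhs.real, np.exp(-alpha * S**2 / (2 * N)))   # both approximately 0.699
```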

Substituting (3.10)–(3.11) into (2.8) gives

$$\begin{aligned} Z_{\beta ,h}&= \prod _i \partial _{\eta _i} \partial _{\xi _i} \frac{1}{z_i} e^{-\frac{1}{2} (\varvec{u},-\Delta _\beta \varvec{u})-(\varvec{h},\varvec{z}-\varvec{1})} \nonumber \\&= \frac{e^{N\alpha /2}}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} \; e^{-\frac{1}{2} N\alpha {{\tilde{z}}}^2 + N\alpha {\tilde{\xi }}{\tilde{\eta }}+\alpha /2} \nonumber \\&\quad \prod _{i=1}^N \left[ {\partial _{\eta _i}\partial _{\xi _i} \left( { \exp \left( {\alpha (\xi _i\eta _i-{\tilde{\xi }}\eta _i-\xi _i{\tilde{\eta }})+i\alpha {{\tilde{z}}} (1-\xi _i\eta _i)-\alpha \xi _i\eta _i+(1+h_i)\xi _i\eta _i}\right) }\right) }\right] . \end{aligned}$$
(3.13)

Simplifying the term inside the exponential gives

$$\begin{aligned} Z_{\beta ,h}&= \frac{e^{N\alpha /2}}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} \; e^{-\frac{1}{2} N\alpha {{\tilde{z}}}^2 + N\alpha {\tilde{\xi }}{\tilde{\eta }}+N\alpha i{{\tilde{z}}}+\alpha /2} \nonumber \\&\quad \prod _{i=1}^N \left[ {\partial _{\eta _i}\partial _{\xi _i} \left( { \exp \left( {(1+h_i-i\alpha {{\tilde{z}}})(\xi _i\eta _i)-\alpha ({\tilde{\xi }}\eta _i+\xi _i{\tilde{\eta }})}\right) }\right) }\right] . \end{aligned}$$
(3.14)

Since \(({\tilde{\xi }}{\tilde{\eta }})^2=0\) and \(({\tilde{\xi }}\eta _i+\xi _i{\tilde{\eta }})^3=0\), the exponential can be replaced by its third-order Taylor expansion, giving

$$\begin{aligned} Z_{\beta ,h}&= \frac{e^{(N+1)\alpha /2}}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} \; e^{-N\alpha [\frac{1}{2}{{\tilde{z}}}^2 - {\tilde{\xi }}{\tilde{\eta }}- i{{\tilde{z}}}]} \prod _{i} \left[ { (1+h_i-i\alpha {{\tilde{z}}}) - \alpha ^2{\tilde{\xi }}{\tilde{\eta }}}\right] \nonumber \\&= \frac{e^{(N+1)\alpha /2}}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} \; e^{-N\alpha [\frac{1}{2}{{\tilde{z}}}^2 - {\tilde{\xi }}{\tilde{\eta }}- i{{\tilde{z}}}]} \prod _{i} (1+h_i-i\alpha {{\tilde{z}}}) \prod _{i} \nonumber \\&\quad [1 - \frac{\alpha ^2}{1+h_i-i\alpha {{\tilde{z}}}} {\tilde{\xi }}{\tilde{\eta }}]. \end{aligned}$$
(3.15)

Using again nilpotency of \({\tilde{\xi }}{\tilde{\eta }}\) this may be rewritten as

$$\begin{aligned} Z_{\beta ,h}&= \frac{e^{(N+1)\alpha /2}}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \partial _{{\tilde{\eta }}} \partial _{{\tilde{\xi }}} \;\nonumber \\&\quad e^{-N\alpha [\frac{1}{2}{{\tilde{z}}}^2- i{{\tilde{z}}}]} \prod _{i} (1+h_i-i\alpha {{\tilde{z}}}) \left[ {1 + \left( {N\alpha - \sum _{i} \frac{\alpha ^2}{1+h_i-i\alpha {{\tilde{z}}}}}\right) {\tilde{\xi }}{\tilde{\eta }}}\right] . \end{aligned}$$
(3.16)

Evaluating the fermionic derivatives gives the identity

$$\begin{aligned} Z_{\beta ,h}= & {} \frac{e^{(N+1)\alpha /2}\alpha N}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \; \nonumber \\&\quad e^{-N\alpha [\frac{1}{2}{{\tilde{z}}}^2 - i{{\tilde{z}}}]} \prod _{i=1}^N (1+h_i-i\alpha {{\tilde{z}}})\left[ {1 - \frac{\alpha }{N}\sum _i (1+h_i-i\alpha {{\tilde{z}}})^{-1}}\right] .\nonumber \\ \end{aligned}$$
(3.17)

To show (3.6)–(3.7) we now take \(\varvec{h}=0\). By definition the last bracket in (3.17) is then \(F(i\alpha {{\tilde{z}}})\) and the remaining integrand defines \(e^{-NV({{\tilde{z}}})}\), proving (3.6). For (3.7) we use that \(z_i = e^{z_i-1}\) (both sides equal \(1-\xi _i\eta _i\)), and hence that \([z_0z_1]_{\beta } = Z_{\beta ,-1_0-1_1}\). Therefore (3.17) implies

$$\begin{aligned}{}[z_0z_1]_{\beta }&= \frac{e^{(N+1)\alpha /2}\alpha N}{\sqrt{2\pi N\alpha }} \int _{{\mathbb {R}}} d {{\tilde{z}}} \; \nonumber \\&\quad e^{-NV({{\tilde{z}}})} \left( {\frac{-i\alpha {{\tilde{z}}}}{1-i\alpha {{\tilde{z}}}}}\right) ^2 \left[ {F(i\alpha {{\tilde{z}}}) + \frac{2\alpha }{N} \left[ { \frac{1}{1-i\alpha {{\tilde{z}}}}-\frac{1}{-i\alpha {{\tilde{z}}}}}\right] }\right] . \end{aligned}$$
(3.18)

By definition, the integrand equals \(-F_{01}(i\alpha {{\tilde{z}}})\), so together with the relation \(Z_\beta [0\leftrightarrow 1] = -[z_0z_1]_{\beta }\), which holds by (2.11), the claim (3.7) follows. \(\quad \square \)

3.2 Asymptotic analysis

To apply the method of stationary phase to evaluate the asymptotics of the integrals, we need the stationary points of V, and asymptotic expansions for V and F. The first two derivatives of P are

$$\begin{aligned} P'(w) = \frac{w}{\alpha } + 1 - \frac{1}{1-w} ,\qquad P''(w) = \frac{1}{\alpha } - \frac{1}{(1-w)^2} \end{aligned}$$
(3.19)

The stationary points are those \(w=i\alpha {{\tilde{z}}}\) such that \(P'(w)=0\). This equation can be rewritten as

$$\begin{aligned} w^2 - w(1 -\alpha ) =0 , \end{aligned}$$
(3.20)

which has solutions \(w=0\) and \(w=1-\alpha \). We call a root \(w_0\) stable if \(P''(w_0) >0\) and unstable if \(P''(w_0)<0\). For \(\alpha <1\) the root 0 is stable whereas \(1-\alpha \) is unstable; for \(\alpha >1\) the root \(1-\alpha \) is stable whereas 0 is unstable; for \(\alpha =1\) the two roots collide at 0 and \(P''(0)=0\).
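The classification of the two roots is easy to tabulate; the following minimal sketch (standard library only; the values of \(\alpha \) are arbitrary) simply evaluates \(P''\) at \(w=0\) and \(w=1-\alpha \).

```python
# Stationary points of (3.20) and their stability via P''(w), cf. (3.19).
def P2(w, alpha):
    # second derivative of P(w) = w^2/(2*alpha) + w + log(1 - w)
    return 1.0 / alpha - 1.0 / (1.0 - w) ** 2

for alpha in (0.5, 1.0, 2.0):
    print(alpha, [(w, P2(w, alpha)) for w in (0.0, 1.0 - alpha)])
# alpha = 0.5: P''(0) = +1    (stable),   P''(0.5) = -2    (unstable)
# alpha = 1.0: the roots coincide and     P''(0)   =  0    (degenerate)
# alpha = 2.0: P''(0) = -0.5  (unstable), P''(-1)  = +0.25 (stable)
```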

For the asymptotic analysis, we start with the nondegenerate case \(\alpha \ne 1\). First observe that we can view the right-hand sides of (3.6)–(3.7) as contour integrals and can, due to analyticity of the integrand and the decay of \(e^{-N\alpha {{\tilde{z}}}^2/2}\) when \({{\,\mathrm{Re}\,}}{{\tilde{z}}}\) is large, shift this contour to the horizontal line \({\mathbb {R}}+iw\) for any \(w\in {\mathbb {R}}\). We will then apply Laplace’s method in the version given by the next theorem, which is a simplified formulation of [38, Theorem 7, p.127].

Theorem 3.3

Let I be a horizontal line in \({\mathbb {C}}\). Suppose that \(V,G:U \rightarrow {\mathbb {R}}\) are analytic in a neighbourhood U of the contour I, that \(t_0 \in I\) is such that \(V'\) has a simple root at \(t_0\), and that \({{\,\mathrm{Re}\,}}(V(t)-V(t_0))\) is positive and bounded away from 0 for t away from \(t_0\). Then

$$\begin{aligned} \int _I e^{-NV(t)} G(t) \, dt \sim 2e^{-NV(t_0)} \sum _{s=0}^\infty \Gamma (s+1/2) \frac{b_{s}}{N^{s+1/2}} , \end{aligned}$$
(3.21)

where the notation \(\sim \) means that the right-hand side is an asymptotic expansion for the left-hand side, and the coefficients are given by (with all functions evaluated at \(t_0\)):

$$\begin{aligned} b_0 = \frac{G}{(2 V'')^{1/2}},\qquad b_1 = \left( 2G'' - \frac{2V'''G'}{V''} + \left[ \frac{5V'''^2}{6V''^2} -\frac{V''''}{2V''} \right] G \right) \frac{1}{(2V'')^{3/2}} ,\nonumber \\ \end{aligned}$$
(3.22)

and with \(b_{s}\) as given in [38] for \(s \ge 2\). (Also recall that \(\Gamma (1/2) = \sqrt{\pi }\) and that \(\Gamma (s+1)=s\Gamma (s)\).)
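As a sanity check of the coefficients (3.22), one can compare the two-term expansion with an integral known in closed form, for instance \(V(t)=t^2/2\) and \(G(t)=e^t\), for which \(\int e^{-NV}G\,dt = \sqrt{2\pi /N}\,e^{1/(2N)}\). The sketch below (standard library only; the choice of V and G is our illustration and not taken from [38]) shows that the two-term expansion is then accurate to order \(N^{-5/2}\).

```python
# Sanity check of the first two Laplace coefficients (3.22) on a toy integral.
from math import gamma, sqrt, pi, exp

def laplace_two_terms(N, V2, V3, V4, G0, G1, G2):
    # the first two terms of (3.21) with b_0, b_1 as in (3.22), at a real minimum
    b0 = G0 / (2 * V2) ** 0.5
    b1 = (2 * G2 - 2 * V3 * G1 / V2
          + (5 * V3**2 / (6 * V2**2) - V4 / (2 * V2)) * G0) / (2 * V2) ** 1.5
    return 2 * (gamma(0.5) * b0 / N**0.5 + gamma(1.5) * b1 / N**1.5)

for N in (10, 100, 1000):
    exact = sqrt(2 * pi / N) * exp(1 / (2 * N))
    approx = laplace_two_terms(N, V2=1.0, V3=0.0, V4=0.0, G0=1.0, G1=1.0, G2=1.0)
    print(N, exact, approx, exact - approx)   # the difference is O(N^{-5/2})
```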

For \(\alpha \ne 1\), denote by \(w_0\) the unique stable root. As discussed in the previous paragraph, we can shift the contour to the line \({\mathbb {R}}-i \frac{w_0}{\alpha }\), and the previous theorem implies that

$$\begin{aligned}&\sqrt{\frac{N\alpha }{2\pi }} \int _{\mathbb {R}}e^{-NV({{\tilde{z}}})} G({{\tilde{z}}}) d{{\tilde{z}}} \nonumber \\&\quad = \sqrt{\frac{1}{\alpha P''}} e^{NP} \left[ { F - \frac{1}{4NP''} \left( 2F'' - \frac{2P'''F'}{P''} + \left[ \frac{5P'''^2}{6P''^2} -\frac{P''''}{2P''} \right] F \right) + O\left( \frac{1}{N^{2}}\right) }\right] ,\nonumber \\ \end{aligned}$$
(3.23)

with all functions on the right-hand side evaluated at \(w_0\). From this the proof of Theorem 3.1 for \(\alpha \ne 1\) is an elementary (albeit somewhat tedious) computation of the derivatives of P, F, and \(F_{01}\) at \(w_0\).

Proof of Theorem 3.1, \(\alpha <1\). The stable root is \(w_0=0\). By (3.23) and elementary computations for the derivatives of P and F and \(F_{01}\), we find

$$\begin{aligned} \sqrt{\frac{N\alpha }{2\pi }} \int _{\mathbb {R}}e^{-NV({{\tilde{z}}})} F(i\alpha {{\tilde{z}}}) d{{\tilde{z}}}&\sim \sqrt{1-\alpha } \end{aligned}$$
(3.24)
$$\begin{aligned} \sqrt{\frac{N\alpha }{2\pi }} \int _{\mathbb {R}}e^{-NV({{\tilde{z}}})} F_{01}(i\alpha {{\tilde{z}}}) d{{\tilde{z}}}&\sim \frac{1}{N}\frac{\alpha }{\sqrt{1-\alpha }}. \end{aligned}$$
(3.25)

Recalling the definitions (3.6)–(3.7), this implies the claims. \(\quad \square \)
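The asymptotics (3.24)–(3.25) can also be confirmed numerically. The sketch below (assuming only numpy; the values of \(\alpha <1\) and N are arbitrary test values) evaluates the two integrals on the real line by quadrature and compares them with \(\sqrt{1-\alpha }\) and \(\alpha /(N\sqrt{1-\alpha })\).

```python
# Numerical confirmation of (3.24)-(3.25); the imaginary parts cancel.
import numpy as np

alpha, N = 0.4, 4000
z = np.linspace(-2.0, 2.0, 200000)            # even number of points: no z = 0
dz = z[1] - z[0]
w = 1j * alpha * z
P = w**2 / (2 * alpha) + w + np.log(1 - w)    # P(w) of (3.4); e^{-NV} = e^{NP}
F = 1 - alpha / (1 - w)                        # (3.5)
F01 = -(w / (1 - w)) ** 2 * (F - 2 * alpha / (N * (-w) * (1 - w)))
pref = np.sqrt(N * alpha / (2 * np.pi))

I_F = pref * np.sum(np.exp(N * P) * F) * dz
I_F01 = pref * np.sum(np.exp(N * P) * F01) * dz

print(I_F.real, np.sqrt(1 - alpha))                   # both approximately 0.7746
print(I_F01.real, alpha / (N * np.sqrt(1 - alpha)))   # both approximately 1.29e-4
```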

Proof of Theorem 3.1, \(\alpha >1\). The stable root is \(w_0=1-\alpha \). Again (3.23) and elementary computations for the derivatives of P and F and \(F_{01}\) lead to

$$\begin{aligned} \sqrt{\frac{N\alpha }{2\pi }} \int _{\mathbb {R}}e^{-NV({{\tilde{z}}})} F(i\alpha {{\tilde{z}}}) d{{\tilde{z}}}&\sim e^{NP} \frac{\alpha ^{3/2}}{N(\alpha -1)^{5/2}} \end{aligned}$$
(3.26)
$$\begin{aligned} \sqrt{\frac{N\alpha }{2\pi }} \int _{\mathbb {R}}e^{-NV({{\tilde{z}}})} F_{01}(i\alpha {{\tilde{z}}}) d{{\tilde{z}}}&\sim e^{NP} \frac{1}{N(\alpha -1)^{1/2}\alpha ^{1/2}}, \end{aligned}$$
(3.27)

and \(P = P(w_0)=P(1-\alpha )\). Again the claims follow from (3.6)–(3.7). \(\quad \square \)

At the critical point \(\alpha =1\), the two roots collide at 0 and \(P''(0)=0\). We analyse the integral as follows.

Proof of Theorem 3.1, \(\alpha =1\). We begin by using the symmetry \({{\tilde{z}}}\mapsto -{{\tilde{z}}}\), under which the integrand is complex conjugated, to write

$$\begin{aligned} N^{\frac{2}{3}}\int _{{\mathbb {R}}} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F(i{{\tilde{z}}}) = 2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{0}^\infty d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F(i{{\tilde{z}}}). \end{aligned}$$
(3.28)

Using analyticity of the integrand, we then deform the contour from \([0, \infty )\) to \([0, e^{i\pi /6}\infty )\); the contribution of the boundary arc vanishes due to the decay of \(e^{-N\alpha {{\tilde{z}}}^2/2}\) on this arc. We now split the contour into two intervals \(I_1 = [0, e^{i\pi /6}N^{-3/10})\) and \( I_2 = [e^{i\pi /6}N^{-3/10},e^{i\pi /6} \infty )\), and denote the integrals over these regions as \(J_1\) and \(J_2\) respectively.

Over the first interval \(I_1\), we introduce the new real variable \(s = N^{\frac{1}{3}}e^{-i\pi /6}{\tilde{z}}\), in terms of which

$$\begin{aligned} J_1&= 2N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_1} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F(i{{\tilde{z}}})\nonumber \\&= 2{{\,\mathrm{Re}\,}}\int _{0}^{N^{\frac{1}{30}}} ds\, e^{-NV(e^{\frac{i\pi }{6}}N^{-\frac{1}{3}}s)} N^{\frac{1}{3}}e^{\frac{i\pi }{6}}F(e^{\frac{2i\pi }{3}}N^{-\frac{1}{3}}s). \end{aligned}$$
(3.29)

We then approximate the arguments as

$$\begin{aligned} NV(e^{\frac{i\pi }{6}}N^{-\frac{1}{3}}s)&= \frac{1}{3}s^3 + {O}(N^{-\frac{1}{3}}s^4) = \frac{1}{3}s^3 + {O}(N^{-\frac{6}{30}})\end{aligned}$$
(3.30)
$$\begin{aligned} N^{\frac{1}{3}}e^{\frac{i\pi }{6}}F(e^{\frac{2i\pi }{3}}N^{-\frac{1}{3}}s)&= e^{-\frac{i\pi }{6}}s + {O}(N^{-\frac{1}{3}}s^2) = e^{-\frac{i\pi }{6}}s + {O}(N^{-\frac{8}{30}}), \end{aligned}$$
(3.31)

where the last error bounds hold uniformly for \(s\in [0,N^{1/30}]\). This gives

$$\begin{aligned} J_1= & {} 2{{\,\mathrm{Re}\,}}\int _{0}^{N^\frac{1}{30}} ds\, e^{-\frac{i\pi }{6}}se^{-\frac{1}{3}s^3} + {O}(N^{-\frac{4}{30}}) \nonumber \\= & {} 2{{\,\mathrm{Re}\,}}\int _{0}^{\infty } ds\, e^{-\frac{i\pi }{6}}se^{-\frac{1}{3}s^3} + o(1) =3^{\frac{1}{6}}\Gamma (\tfrac{2}{3})+o(1). \end{aligned}$$
(3.32)

The second term \(J_2\) is asymptotically negligible. To see this, we bound \(|F(i{\tilde{z}})| \le 1\), introduce the real variable \(s = e^{-\frac{i\pi }{6}}{\tilde{z}}\), and split the resulting domain as \( [N^{-3/10},2) \cup [2,\infty ) = I_2' \cup I_2''\):

$$\begin{aligned} J_2= & {} 2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_2} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F(i{{\tilde{z}}}) \nonumber \\\le & {} 2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_2'} ds\, e^{-NV(e^{\frac{i\pi }{6}} s)} +2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_2''} ds\, e^{-NV(e^{\frac{i\pi }{6}} s)} . \end{aligned}$$
(3.33)

Over \(I_2'\), we use that \(|I_2'| \le 2\) and bound the integral in terms of the supremum of the integrand:

$$\begin{aligned}&2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_2'} ds\, e^{-NV(e^{\frac{i\pi }{6}} s)} e^{\frac{i\pi }{6}}F(e^{\frac{2i\pi }{3}}s) \nonumber \\&\quad \le 2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_2'} ds\, e^{-NV(e^{\frac{i\pi }{6}} s)} \le 4 N^{\frac{2}{3}}\sup _{s \in I_2'} e^{-{{\,\mathrm{Re}\,}}[NV(e^{\frac{i\pi }{6}} s)]}, \end{aligned}$$
(3.34)

and since \({{\,\mathrm{Re}\,}}NV(e^{\frac{i\pi }{6}} s)\) is increasing in s, this supremum is attained at the boundary point \(s = N^{-3/10}\). Taylor expanding as before gives us

$$\begin{aligned} 4 N^{\frac{2}{3}}\sup _{s \in I_2'} e^{-{{\,\mathrm{Re}\,}}NV(e^{\frac{i\pi }{6}} s)} = 4 N^{\frac{2}{3}}e^{-{{\,\mathrm{Re}\,}}NV(e^{\frac{i\pi }{6}}N^{-\frac{3}{10}})}=e^{-(\frac{1}{3} + o(1))N^{\frac{1}{10}}}. \end{aligned}$$
(3.35)

Over \(I_2''\), we use that \({{\,\mathrm{Re}\,}}[NV(e^{\frac{i\pi }{6}} s)] \ge \frac{Ns^2}{4}\) for all \(s \ge 2\) to bound the second term as

$$\begin{aligned} 2 N^{\frac{2}{3}}{{\,\mathrm{Re}\,}}\int _{I_2''} ds\, e^{-NV(e^{\frac{i\pi }{6}} s)} \le 2 N^{\frac{2}{3}}\int _{I_2''} ds\, e^{-\frac{Ns^2}{4}} \le e^{-(1+o(1))N}. \end{aligned}$$
(3.36)

Putting together the estimates for \(J_1\) and \(J_2\), we therefore find

$$\begin{aligned} N^{\frac{2}{3}}\int _{{\mathbb {R}}} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F(i{{\tilde{z}}}) = J_1 + J_2 = 3^{\frac{1}{6}}\Gamma (\tfrac{2}{3}) + o(1) \end{aligned}$$
(3.37)

and hence the first asymptotic relation in (3.3) follows from (3.6), i.e.,

$$\begin{aligned} Z_\beta \sim \frac{3^{\frac{1}{6}}\Gamma \left( \frac{2}{3}\right) e^{\frac{(N+1)}{2}}}{N^{\frac{1}{6}}\sqrt{2\pi }}. \end{aligned}$$
(3.38)

Using the same procedure, we can compute \({\mathbb {P}}_\beta [0\leftrightarrow 1]\). We again split the (conveniently scaled) integral into two terms as

$$\begin{aligned}&N^{\frac{4}{3}}\int _{{\mathbb {R}}} d{{\tilde{z}}}\, e^{-NV({{\tilde{z}}})} F_{01}(i{{\tilde{z}}}) = 2 {{\,\mathrm{Re}\,}}\int _{0}^{N^{\frac{1}{30}}} ds\, e^{-NV(e^{\frac{i\pi }{6}}N^{-\frac{1}{3}}s)} Ne^{\frac{i\pi }{6}}F_{01}(e^{\frac{2i\pi }{3}}N^{-\frac{1}{3}}s) \nonumber \\&\quad + 2{{\,\mathrm{Re}\,}}\int _{N^{\frac{1}{30}}}^{\infty } ds\, e^{-NV(e^{\frac{i\pi }{6}}N^{-\frac{1}{3}}s)} Ne^{\frac{i\pi }{6}}F_{01}(e^{\frac{2i\pi }{3}}N^{-\frac{1}{3}}s) = J_1 + J_2. \end{aligned}$$
(3.39)

As before \(J_2\) is asymptotically negligible. For \(J_1\), we approximate the \(F_{01}\) term as

$$\begin{aligned} Ne^{\frac{i\pi }{6}}F_{01}(e^{\frac{2i\pi }{3}}N^{-\frac{1}{3}}s) = e^{\frac{i\pi }{6}}s^3 + O(N^{-\frac{1}{3}}s^4) = e^{\frac{i\pi }{6}}s^3 + O(N^{-\frac{6}{30}}), \end{aligned}$$
(3.40)

uniformly for \(s\in [0,N^{1/30}]\), to obtain the asymptotic relation

$$\begin{aligned} J_1 = 2{{\,\mathrm{Re}\,}}\int _{0}^{N^{\frac{1}{30}}} ds\, e^{-NV(e^{\frac{i\pi }{6}}N^{-\frac{1}{3}}s)} Ne^{\frac{i\pi }{6}}F_{01}(e^{\frac{2i\pi }{3}}N^{-\frac{1}{3}}s) \sim 2{{\,\mathrm{Re}\,}}\int _{0}^{\infty } ds\, e^{\frac{i\pi }{6}}s^3e^{-\frac{1}{3}s^3} = 3^{\frac{5}{6}}\Gamma (\tfrac{4}{3}). \end{aligned}$$
(3.41)

From (3.7), we therefore find

$$\begin{aligned} Z_\beta [0\leftrightarrow 1] \sim \frac{3^{\frac{5}{6}}\Gamma \left( \frac{4}{3}\right) e^{\frac{(N+1)}{2}}}{N^{\frac{5}{6}}\sqrt{2\pi }} \end{aligned}$$
(3.42)

which after dividing by \(Z_\beta \) shows the second asymptotic relation in (3.3). \(\quad \square \)
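The two scaling-limit constants obtained above, and their ratio appearing in (3.3), are easily confirmed numerically; a minimal sketch assuming only numpy and the standard library:

```python
# Numerical check of the constants in (3.32), (3.41), and their ratio in (3.3).
import numpy as np
from math import gamma, cos, pi

s = np.linspace(0.0, 10.0, 1000001)
ds = s[1] - s[0]
weight = np.exp(-s**3 / 3.0)

I1 = 2 * cos(pi / 6) * np.sum(s * weight) * ds        # 2 Re e^{-i pi/6} * (real integral)
I2 = 2 * cos(pi / 6) * np.sum(s**3 * weight) * ds     # 2 Re e^{+i pi/6} * (real integral)

print(I1, 3 ** (1 / 6) * gamma(2 / 3))                         # both approximately 1.626
print(I2, 3 ** (5 / 6) * gamma(4 / 3))                         # both approximately 2.231
print(I2 / I1, 3 ** (2 / 3) * gamma(4 / 3) / gamma(2 / 3))     # the constant in (3.3)
```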

4 No Percolation in Two Dimensions

In this section, we consider the arboreal gas on (finite approximations of) \({\mathbb {Z}}^2\) with constant nearest neighbour weights, i.e., with \(\beta _{ij}=\beta >0\) for all edges ij and vertex weights \(h_i=h\) for all vertices i. As such we write \(\beta \) instead of \(\varvec{\beta }\) in this section. Constant weights are merely a convenient choice; everything in this section also applies to translation-invariant finite range weights, for example. In contrast with the case of the complete graph, we show that on \({\mathbb {Z}}^2\) the tree containing a fixed vertex always has finite density. Our arguments are closely based on estimates developed for the vertex-reinforced jump process [6, 33, 44]. The main new idea is to use these bounds in combination with dimensional reduction from Sect. 2.4.

4.1 Two-point function decay in two dimensions

The proof of Theorem 1.3 makes use of the representation from Sect. 2.6, and closely follows [44]; an alternative proof could likely be obtained by adapting instead [33].

To lighten the notation, for a finite subgraph \(\Lambda \subset {\mathbb {Z}}^{2}\) we write \({\mathbb {P}}_{\beta }\) in place of \({\mathbb {P}}_{\Lambda ,\beta }\). By (2.86), the connection probability can be written in the horospherical coordinates of the \({\mathbb {H}}^{2|4}\) model as

$$\begin{aligned} {\mathbb {P}}_\beta [0 \leftrightarrow j] = \langle e^{t_j} \rangle _\beta ^0 \end{aligned}$$
(4.1)

where \(\langle \cdot \rangle ^0_\beta \) denotes the expectation with pinning at vertex 0. Explicitly, by (2.85), the measure \(\langle \cdot \rangle _\beta ^0\) on the right-hand side can be written as the \(a=3/2\) case of

$$\begin{aligned} Q_{\beta ,a}(dt) \equiv \frac{1}{Z_{\beta ,a}} \exp \left( -\frac{1}{2} \sum _{i,j} \beta _{ij} (\cosh (t_i-t_j)-1)\right) D(\beta ,t)^a \prod _{i\ne 0} \frac{dt_i}{ \sqrt{2\pi }}, \end{aligned}$$
(4.2)

where

$$\begin{aligned} D(\beta ,t) \equiv {{\tilde{D}}}_\beta (t) \prod _{i} e^{- 2 t_i}, \end{aligned}$$
(4.3)

and where \({{\tilde{D}}}_{\beta }(t)\) was given explicitly in (2.82) and \(Z_{\beta ,a}\) is a normalising constant. We have made the parameter a explicit as our argument adapts that of [44], which concerned the case \(a=1/2\). When \(a=1/2\) supersymmetry implies that \(Z_{\beta ,1/2} =1\) and \({\mathbb {E}}_{Q_{\beta ,1/2}}(e^{t_k})=1\) for all \(\varvec{\beta }=(\beta _{ij})\) and all \(k\in \Lambda \). These identities require the following replacement when \(a\ne 1/2\):

$$\begin{aligned} Z_{\beta ,a} \text { is increasing in all of the } \,\beta _{ij}, \qquad {\mathbb {E}}_{Q_{\beta ,a}}(e^{2at_k}) = 1 \quad \text {for all } \,(\beta _{ij}) \text { and all } \,k\in \Lambda .\nonumber \\ \end{aligned}$$
(4.4)

When \(a=3/2\) the first of these facts follows from the forest representation for the partition function, see Proposition 2.9, and the second is (2.87) of Corollary 2.14. A proof that (4.4) holds for general half-integer \(a \geqslant 0\) appears in [17], and we conjecture that these assumptions are true for any \(a\geqslant 0\).

With (4.4) given, it is straightforward to adapt [44, Lemma 1] to obtain the following lemma. In the next lemma we assume \(0,j\in \Lambda \), but we make no further assumptions beyond that \(\varvec{\beta }\) induces a connected graph.

Lemma 4.1

(Sabot [44, Lemma 1] for \(a=1/2\)). Let \(a\geqslant 0\), \(s\in (0,1)\), and \(\gamma > 0\). Assume (4.4) holds. Then for any \(v \in {\mathbb {R}}^\Lambda \) with \(v_j=1\), \(v_{0}=0\), and

$$\begin{aligned} \gamma |v_i-v_k| \leqslant \frac{1}{2} (1-s)^2 \quad \text {for all}\, i\sim k, \end{aligned}$$
(4.5)

one has, with \(q=1/(1-s)\),

$$\begin{aligned} {\mathbb {E}}_{Q_{\beta ,a}}(e^{2as t_j}) \leqslant e^{-2a s\gamma } e^{\frac{1}{2} \gamma ^2q^2\sum _{i,k}(\beta _{ik}+2a)(v_i-v_k)^2}. \end{aligned}$$
(4.6)

Proof

As mentioned, our proof is an adaptation of [44, Lemma 1], and hence we indicate the main steps but will be somewhat brief. In this reference \(a=1/2\), \(Q_{\beta ,a}\) is denoted Q, \(\beta _{ij}\) is denoted \(W_{ij}\), and t is denoted by u. Let \(Q_{\beta ,a}^\gamma \) denote the distribution of \(t-\gamma v\). Since the partition function does not change under translation of the underlying measure, by following [44, Prop. 1] we obtain,

$$\begin{aligned}&\frac{dQ_{\beta ,a}}{dQ^\gamma _{\beta ,a}}(t) = \exp \left( {-\frac{1}{2} \sum _{i,k} \beta _{ik} (\cosh (t_i-t_k)-\cosh (t_i-t_k+\gamma (v_i-v_k)))}\right) \nonumber \\&\quad \frac{D(\beta ,t)^a}{D(\beta , t+\gamma v)^a} . \end{aligned}$$
(4.7)

With \(e^t\) replaced by \(e^{2at}\) but otherwise exactly as in the argument leading to [44, (2)], by using that \(s^{-1}\) and q are Hölder conjugate and using the second part of (4.4),

$$\begin{aligned} {\mathbb {E}}_{Q_{\beta ,a}}(e^{2a s t_k})={\mathbb {E}}_{Q^\gamma _{\beta ,a}}\left( {{\frac{dQ_{\beta ,a}}{dQ^\gamma _{\beta ,a}}}e^{2a s t_k}}\right)&\leqslant {\mathbb {E}}_{Q^\gamma _{\beta ,a}}\left( {\left( {\frac{dQ_{\beta ,a}}{dQ^\gamma _{\beta ,a}}}\right) ^q}\right) ^{1/q} \left( {{\mathbb {E}}_{Q^\gamma _{\beta ,a}}({e^{2a t_k}})}\right) ^s \nonumber \\&\leqslant {\mathbb {E}}_{Q^\gamma _{\beta ,a}}\left( {\left( {\frac{dQ_{\beta ,a}}{dQ^\gamma _{\beta ,a}}}\right) ^q}\right) ^{1/q} e^{-2a s\gamma }. \end{aligned}$$
(4.8)

The expectation on the right-hand side is estimated as in [44], with the only change that \(\sqrt{D(\beta ,t)}\) is replaced by \(D(\beta ,t)^a\) in all expressions, and that the change of measure from \(Q_{\beta ,a}\) to \(Q_{{\tilde{\beta }},a}\) involves the normalisation constants, i.e., a factor \(Z_{{\tilde{\beta }},a}/Z_{\beta ,a}\). Setting \(\gamma '= \gamma (q-1)\), we obtain

$$\begin{aligned}&{\mathbb {E}}_{Q^\gamma _{\beta ,a}}\left( {\left( {\frac{dQ_{\beta ,a}}{dQ^\gamma _{\beta ,a}}}\right) ^q}\right) \nonumber \\&\quad = {\mathbb {E}}_{Q^{\gamma '}_{\beta ,a}}\left( {\left( {\frac{dQ_{\beta ,a}}{dQ^\gamma _{\beta ,a}}}\right) ^{q-1}\frac{dQ_{\beta ,a}}{dQ^{\gamma '}_{\beta ,a}}}\right) \nonumber \\&\quad \leqslant {\mathbb {E}}_{Q^{\gamma '}_{\beta ,a}}\left( \exp \left( {\frac{q}{2} \sum _{i,k} \beta _{ik}\cosh (t_i-t_k+\gamma '(v_i-v_k))(2q^2\gamma ^2 (v_i-v_k)^2)}\right) \right) \nonumber \\&\quad = e^{\frac{1}{2} \sum _{i,k} \beta _{ik} { q^3}\gamma ^2 (v_i-v_k)^2} \frac{Z_{{\tilde{\beta }},a}}{Z_{\beta ,a}}{\mathbb {E}}_{Q_{{\tilde{\beta }},a}}\left( {\left( {\frac{D(\beta ,t)}{D({\tilde{\beta }},t)}}\right) ^a}\right) \end{aligned}$$
(4.9)

where

$$\begin{aligned} {\tilde{\beta }}_{ik} = \beta _{ik}(1-2q^3\gamma ^2(v_i-v_k)^2) \in [\frac{1}{2} \beta _{ik}, \beta _{ik}]. \end{aligned}$$
(4.10)

The ratio of determinants is bounded using the matrix-tree theorem as done on [44, p.7], and we use that \(Z_{{\tilde{\beta }},a} \leqslant Z_{\beta ,a}\), by (4.4). The result is (4.6). \(\quad \square \)

Proof of Theorem 1.3

We may choose \(s=1/(2a) = 1/3 \in (0,1)\) in Lemma 4.1. We then combine (4.1) and (4.6) and choose v as a difference of Green functions (exactly as in [44, Section 2.2]) to find that,

$$\begin{aligned} {\mathbb {P}}_\beta [0\leftrightarrow j] ={\mathbb {E}}_{Q_{\beta ,a}}(e^{t_j}) ={\mathbb {E}}_{Q_{\beta ,a}}(e^{2ast_j}) \leqslant |j|^{-c_\beta } \end{aligned}$$
(4.11)

as needed. \(\quad \square \)

4.2 Mermin–Wagner theorem

We now show that the vanishing of the density of the cluster containing a fixed vertex on the torus also follows from a version of the classical Mermin–Wagner theorem. We first derive an expression for a quantity closely related to the mean tree size. For constant h, Theorem 2.1 implies that

$$\begin{aligned} \left[ z_{i}\right] _{\beta ,h} = \sum _{F\in \mathcal {F}}\prod _{jk\in F}\beta _{jk} \prod _{T\in F}(1+\sum _{l\in T}(h-1_{l=i})), \end{aligned}$$
(4.12)

which leads to

$$\begin{aligned} \langle z_{i} \rangle _{\beta ,h}&= {\mathbb {E}}_{\beta ,h} \frac{h|T_{i} |}{1+h|T_{i} |}, \end{aligned}$$
(4.13)

where \(T_i\) is the (random) tree containing the vertex i.
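The identity (4.13) can be checked by brute-force enumeration on a small graph: one sums over all forests with the weight \(\prod _{jk\in F}\beta _{jk}\prod _{T\in F}(1+h|T|)\) (with |T| the number of vertices of T, as read off from (4.12)) and compares \(\langle z_i \rangle _{\beta ,h}\) computed from (4.12) with the expectation on the right-hand side of (4.13). A minimal sketch assuming only the standard library; the test graph, weights and marked vertex are arbitrary.

```python
# Brute-force check of (4.12)-(4.13) on a small graph.
import itertools

n, h, beta = 4, 0.3, 0.5
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small test graph
i0 = 0                                              # the marked vertex i in (4.13)

def components(edge_subset):
    comp = list(range(n))
    def find(a):
        while comp[a] != a:
            comp[a] = comp[comp[a]]
            a = comp[a]
        return a
    for a, b in edge_subset:
        comp[find(a)] = find(b)
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return list(groups.values())

Z = num_412 = num_413 = 0.0
for r in range(len(edges) + 1):
    for F in itertools.combinations(edges, r):
        comps = components(F)
        if sum(len(c) - 1 for c in comps) != r:     # the subgraph contains a cycle
            continue
        w_full = w_412 = beta ** r
        for c in comps:
            w_full *= 1 + h * len(c)
            w_412 *= 1 + h * len(c) - (1 if i0 in c else 0)   # factor from (4.12)
        Z += w_full
        num_412 += w_412
        Ti = next(len(c) for c in comps if i0 in c)           # |T_{i0}| in this forest
        num_413 += w_full * h * Ti / (1 + h * Ti)

print(num_412 / Z, num_413 / Z)   # <z_{i0}>_{beta,h} computed in two ways; equal
```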

Let \(\Lambda \) be a d-dimensional discrete torus, and let \(\lambda (p)\) be the Fourier multiplier of the corresponding discrete Laplacian:

$$\begin{aligned} \lambda (p) \equiv \sum _{j\in \Lambda } \beta _{0j} (1-\cos (p\cdot j)), \qquad p\in \Lambda ^{\star } \end{aligned}$$
(4.14)

where \(\cdot \) is the Euclidean inner product on \({\mathbb {R}}^d\) and \(\Lambda ^{\star }\) is the Fourier dual of the discrete torus \(\Lambda \).

Theorem 4.2

Let \(d \geqslant 1\), and let \(\Lambda \) be a d-dimensional discrete torus of side length L. Then

$$\begin{aligned} \frac{1}{\langle z_0 \rangle _{\beta ,h}} \ge 1+ \frac{1}{(2\pi L)^d} \sum _{p \in \Lambda ^{\star }} \frac{1}{\lambda (p) + h}. \end{aligned}$$
(4.15)

Proof

The proof is analogous to [6, Theorem 1.5]. We write the \({\mathbb {H}}^{0|2}\) expectations \(\langle \xi _i\eta _j \rangle _{\beta ,h}\) and \(\langle z_i \rangle _{\beta ,h}\) in horospherical coordinates using Corollary 2.10:

$$\begin{aligned} \langle \xi _i\eta _j \rangle _{\beta ,h}= \langle s_is_je^{t_i+t_j} \rangle _{\beta ,h}, \quad \langle z_i \rangle _{\beta ,h} = \langle e^{t_i} \rangle _{\beta ,h} = \langle e^{2t_i} \rangle _{\beta ,h}. \end{aligned}$$
(4.16)

Set

$$\begin{aligned} S(p) = \frac{1}{\sqrt{|\Lambda |}} \sum _j e^{i(p\cdot j)} e^{t_j}s_j, \quad D = \frac{1}{\sqrt{|\Lambda |}} \sum _j e^{-i(p\cdot j)} \frac{\partial }{\partial s_j}. \end{aligned}$$
(4.17)

Since the expectation of functions depending only on (s, t) in horospherical coordinates is an expectation with respect to a probability measure, denoted \(\langle \cdot \rangle \) from here on, the Cauchy–Schwarz inequality implies

$$\begin{aligned} \langle |S(p)|^2 \rangle \geqslant \frac{|\langle S(p)D{\tilde{H}} \rangle |^2}{\langle |D{\tilde{H}}|^2 \rangle }. \end{aligned}$$
(4.18)

Since the density in horospherical coordinates is \(e^{-{\tilde{H}}(s,t)}\), the probability measure \(\langle \cdot \rangle \) obeys the integration by parts identity \(\langle FD{{\tilde{H}}} \rangle = \langle DF \rangle \) for any function \(F=F(s,t)\) that does not grow too fast. Therefore by translation invariance, with \(y_i = s_ie^{t_i}\),

$$\begin{aligned} \langle |S(p)|^2 \rangle&= \frac{1}{|\Lambda |} \sum _{j,l} e^{i p\cdot (j-l)} \langle y_jy_l \rangle = \frac{1}{|\Lambda |} \sum _{j,l} e^{i p\cdot (j-l)} \langle y_0y_{j-l} \rangle = \sum _{j} e^{i (p\cdot j)} \langle y_0y_{j} \rangle , \end{aligned}$$
(4.19)
$$\begin{aligned} \langle S(p)D{\tilde{H}} \rangle&= \langle DS(p) \rangle = \frac{1}{|\Lambda |} \sum _{j,l} e^{ip\cdot (j-l)}\langle \frac{\partial y_j}{\partial s_l} \rangle = \frac{1}{|\Lambda |} \sum _{j} \langle e^{t_j} \rangle = \langle z_0 \rangle . \end{aligned}$$
(4.20)

By Cauchy–Schwarz, translation invariance, and (4.16) we also have

$$\begin{aligned} \langle e^{t_j+t_l} \rangle \leqslant \langle e^{2t_0} \rangle = \langle z_0 \rangle . \end{aligned}$$
(4.21)

Using (4.21) and the integration by parts identity it follows that

$$\begin{aligned} \langle |D{\tilde{H}}|^2 \rangle= & {} \langle D{{\bar{D}}}{\tilde{H}} \rangle = \frac{1}{|\Lambda |} \sum _{j,l} \beta _{jl} \langle e^{t_j+t_l} \rangle (1-\cos (p\cdot (j-l))) \nonumber \\&+ \frac{h}{|\Lambda |} \sum _j \langle e^{t_j} \rangle \leqslant \langle z_0 \rangle ( \lambda (p) + h ). \end{aligned}$$
(4.22)

In summary, we have proved

$$\begin{aligned} \sum _{j} e^{i (p\cdot j)} \langle \xi _0\eta _{j} \rangle = \sum _{j} e^{i (p\cdot j)} \langle y_0y_{j} \rangle = \langle |S(p)|^2 \rangle \geqslant \frac{|\langle S(p)D{\tilde{H}} \rangle |^2}{\langle |D{\tilde{H}}|^2 \rangle } \geqslant \frac{\langle z_0 \rangle }{\lambda (p) + h}.\nonumber \\ \end{aligned}$$
(4.23)

Summing over \(p \in \Lambda ^{\star }\) in the Fourier dual of \(\Lambda \) (with the sum correctly normalized), the left-hand side becomes \(\langle \xi _0\eta _0 \rangle \). Using \(\langle z_0 \rangle = 1-\langle \xi _0\eta _0 \rangle \) this then gives the claim:

$$\begin{aligned} \frac{1}{\langle z_0 \rangle }-1 \ge \frac{1}{(2\pi L)^d} \sum _{p\in \Lambda ^*} \frac{1}{\lambda (p) + h}. \end{aligned}$$
(4.24)

\(\square \)

From the Mermin–Wagner theorem we obtain that on a finite torus of side length L the density of the tree containing 0 tends to 0 as \(L\rightarrow \infty \). We write \(\lesssim \) for inequalities that hold up to universal constants.

Corollary 4.3

Let \(\Lambda \) be the 2-dimensional discrete torus of side length L. Then

$$\begin{aligned} {\mathbb {E}}_{\beta ,0} \frac{|T_0|}{|\Lambda |} \lesssim \frac{1}{\sqrt{\log L}}. \end{aligned}$$
(4.25)

Proof

For any \(h \leqslant 1/|\Lambda |\) we have \(h|T_0| \leqslant 1\). Thus, by Theorem 4.2 with \(d=2\),

$$\begin{aligned} {\mathbb {E}}_{\beta ,h} \frac{|T_0|}{|\Lambda |} = \frac{1}{|\Lambda |h} {\mathbb {E}}_{\beta ,h} h|T_0| \leqslant \frac{2}{|\Lambda |h} {\mathbb {E}}_{\beta ,h} \frac{h|T_0|}{1+h|T_0|} = \frac{2}{|\Lambda |h} \langle z_0 \rangle _{\beta ,h} \lesssim \frac{1}{h L^2 \log L}\nonumber \\ \end{aligned}$$
(4.26)

where we used that, for all \(h \geqslant 0\), the Green’s function of the discrete torus satisfies

$$\begin{aligned} \frac{1}{(2\pi L)^2} \sum _{p \in \Lambda ^{\star }} \frac{1}{\lambda (p) + h} \gtrsim \log (h^{-1}\wedge L). \end{aligned}$$
(4.27)

In Lemma 4.4 below, stated and proved directly after this proof, we show that if X is a random variable with \(|X|\leqslant 1\), and if \(h \ll 1/|\Lambda |\),

$$\begin{aligned} \left|{\mathbb {E}}_{\beta ,h} X - {\mathbb {E}}_{\beta ,0} X \right| = O(h|\Lambda |). \end{aligned}$$
(4.28)

Applying this estimate with \(X=|T_0|/|\Lambda |\), for \(h \ll 1/|\Lambda |\) we have

$$\begin{aligned} \left| {\mathbb {E}}_{\beta ,h} \frac{|T_0|}{|\Lambda |} - {\mathbb {E}}_{\beta ,0} \frac{|T_0|}{|\Lambda |} \right| = O(hL^2). \end{aligned}$$
(4.29)

With \(h =L^{-2}(\log L)^{-1/2}\), combining both estimates gives

$$\begin{aligned} {\mathbb {E}}_{\beta ,0} \frac{|T_0|}{|\Lambda |} \lesssim \frac{1}{hL^2 \log L} + h L^2 \lesssim \frac{1}{\sqrt{\log L}} . \end{aligned}$$
(4.30)

\(\square \)
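The logarithmic divergence behind (4.27) is easy to see numerically. The following sketch (assuming only numpy, with nearest-neighbour weights \(\beta =1\) and the normalisation \(|\Lambda |^{-1}\sum _p\), which differs from the constant in (4.15)/(4.27) only by a bounded factor) evaluates the lattice Green's function on 2-dimensional tori of increasing side length with \(h = 1/L^2\).

```python
# Numerical illustration of the logarithmic growth behind (4.27).
import numpy as np

def green_sum(L, h, beta=1.0):
    k = 2 * np.pi * np.arange(L) / L
    lam1 = beta * (2 - 2 * np.cos(k))               # one-dimensional multipliers
    lam = lam1[:, None] + lam1[None, :]             # lambda(p), p in the dual torus
    return np.sum(1.0 / (lam + h)) / L**2

for L in (16, 64, 256, 1024):
    print(L, green_sum(L, h=1.0 / L**2), np.log(L))  # the sum grows like log L
```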

Lemma 4.4

Let \(\Lambda \) be any finite graph with \(|\Lambda |\) vertices. Let X be a random variable with \(|X|\leqslant 1\). Then for \(h \ll 1/|\Lambda |\),

$$\begin{aligned} \left|{\mathbb {E}}_{\beta ,h} X - {\mathbb {E}}_{\beta ,0} X \right| = O(h|\Lambda |). \end{aligned}$$
(4.31)

Proof

By definition,

$$\begin{aligned} {\mathbb {E}}_{\beta ,h} X = \frac{{\mathbb {E}}_{\beta ,0} (X \prod _{T\in F} (1+h|T|))}{{\mathbb {E}}_{\beta ,0}(\prod _{T\in F} (1+h|T|))}. \end{aligned}$$
(4.32)

With the elementary identity \(A'/(1+\varepsilon )-A = (A'-A) - A' (\varepsilon /(1+\varepsilon )) = (A'-A) - (A'/(1+\varepsilon )) \varepsilon \), applied with \(A = {\mathbb {E}}_{\beta ,0}X\), \(A' = {\mathbb {E}}_{\beta ,0}(X \prod _{T}(1+h|T|))\) and \(1+\varepsilon = {\mathbb {E}}_{\beta ,0}(\prod _{T}(1+h|T|))\), we get

$$\begin{aligned} {\mathbb {E}}_{\beta ,h}X - {\mathbb {E}}_{\beta ,0} X = {\mathbb {E}}_{\beta ,0}(X (\prod _{T} (1+h|T|)-1)) - {\mathbb {E}}_{\beta ,h}(X){\mathbb {E}}_{\beta ,0}(\prod _{T} (1+h|T|)-1) . \end{aligned}$$
(4.33)

Since \(|X|\leqslant 1\) it suffices to bound

$$\begin{aligned} \prod _{T\in F} (1+h |T|)-1 = \sum _{\emptyset \ne F' \subset F}\prod _{T\in F'} h |T| \end{aligned}$$
(4.34)

where the sum runs over non-empty subforests \(F'\) of F, i.e., non-empty unions of the disjoint trees in F. Since the trees \(T_1, T_2, \dots \) of F satisfy \(\sum _i |T_i| \leqslant |\Lambda |\),

$$\begin{aligned} \sum _{\emptyset \ne F' \subset F}\prod _{T\in F'} h |T| \leqslant \sum _{n\geqslant 1} \sum _{i_1, \dots , i_n} \prod _{k=1}^n (h|T_{i_k}|) \leqslant \sum _{n\geqslant 1} \left( {h\sum _{i} |T_i|}\right) ^n \leqslant \sum _{n\geqslant 1} (h|\Lambda |)^n = O(h|\Lambda |)\nonumber \\ \end{aligned}$$
(4.35)

whenever \(h|\Lambda |\ll 1\). \(\quad \square \)