1 Introduction

In this article, we consider the Gaussian free field \(\varphi \) on the cable system \(\widetilde{{\mathcal {G}}}\) associated to an arbitrary transient weighted graph \({\mathcal {G}}\); see the discussion around (1.1) below for the precise setup. Cable processes have increasingly proved an insightful object of study, as shown for instance in the recent articles [7, 8, 19, 21, 27] and [29]. In the present work, we investigate a well-chosen observable, the capacity of finite clusters in the excursion set \(E^{\ge h}\) of \(\varphi \) above height \(h\in {\mathbb {R}}\), see (1.5) below. This quantity features prominently in our article [10]. Our main result, stated below in Theorem 1.1 – see also Sect. 3 for a more exhaustive discussion – underlines the central nature of this observable and unveils some of its deeper ramifications.

To wit, our findings imply for instance that the cluster capacity observable at height \(h=0\) is finite almost surely, for any transient graph \({\mathcal {G}}\), see Theorem 1.1,1) (our setup allows for a killing measure, including the degenerate case of Dirichlet boundary conditions, which will play an important role below). This immediately leads to a much improved understanding of why the height \(h=0\) tends to be critical for the percolation problem \(\{ E^{\ge h }: h \in {\mathbb {R}}\}\) in the massless case, i.e. in the absence of killing, and more generally when \({\mathbf {h}}_{\text {kill}}<1\) (see (1.2) below). A simple criterion, see (Cap) and Theorem 1.1,1), which covers an extensive number of cases, can then be used to check if the sign clusters of \(\varphi \) percolate or not.

For instance, see Corollary 1.2, as a consequence of this criterion, our results yield that the sign clusters of \(\varphi \) on any vertex-transitive graph with no killing are bounded and thus establish the phase transition of \(\{ E^{\ge h }: h \in {\mathbb {R}}\}\) as being second order. Corresponding results hold for the loop soup \({\mathcal {L}}_{1/2}\), see Corollary 3.6; see also the discussion following Theorem 1.1 regarding the current state of affairs.

When the sign clusters of \(\varphi \) are bounded – which holds e.g. when (Cap) holds – we are able to identify the distribution of the cluster capacity observable at any level \(h \in {\mathbb {R}}\), see Theorem 1.1,2) below. This law is explicitly characterized by (\({Law}_h\)), introduced above Theorem 1.1 (see also (3.8) for the corresponding density). Moreover, we show that this information is equivalent to the ‘strong Ray-Knight-type’ isomorphism recently derived in [27] (refining [19], see also (Isom) above Theorem 1.1) under slightly stronger assumptions than those to follow. This identity relates the free field itself with the local times of random interlacements on \(\widetilde{{\mathcal {G}}}\). Thus, we effectively obtain a characterization of an isomorphism theorem (in the non-interacting case) in terms of the free field alone. In fact, for massless graphs (or even if \({\mathbf {h}}_{\text {kill}}<1\)) our results imply under (\(\mathrm {Law}_{0}\)) the dichotomy \({\widetilde{h}}_* \in {\{0,\infty \}}\), where \({\widetilde{h}}_*\) refers to the corresponding critical level; cf. Theorem 1.1,3). We further refer to the forthcoming article [22] for sharpness and limitations to the validity of these results. The identity (\({Law}_h\)) is derived in [10] by means of differential formulas, and has important consequences regarding the (near-)critical regime for level sets of \(\varphi \) on \(\widetilde{{\mathcal {G}}}\); see [10] regarding these matters.

We now introduce our setup and refer to Sect. 2 for details. We consider a transient weighted graph \( {\mathcal {G}}= ({\overline{G}},{\bar{\lambda }},{\bar{\kappa }}),\) where \({\overline{G}}\) is a finite or countably infinite set, \({\bar{\lambda }}_{x,y}\in {[0,\infty )}\), \(x,y\in {\overline{G}},\) are non-negative weights satisfying \({\bar{\lambda }}_{x,y} ={\bar{\lambda }}_{y,x} \ge 0\) and \({\bar{\lambda }}_{x,x}=0\) for all \(x,y\in {\overline{G}}.\) Furthermore, \({\bar{\kappa }}_x\in {[0,\infty ]}\), \(x\in {\overline{G}},\) is a killing measure, possibly infinite. To deal with the latter in a convenient way, given \( {\mathcal {G}}= ({\overline{G}},{\bar{\lambda }},{\bar{\kappa }}),\) we introduce the triplet \((G,\lambda , \kappa )\), to which we will mostly refer throughout the article, by setting \((G,\lambda , \kappa )= ({\overline{G}}^M,{\bar{\lambda }}^M,{\bar{\kappa }}^M)\), the latter being defined in (2.12), with M a certain set of ‘mid-points’ given by (2.11). In particular, this definition entails that \((G,\lambda , \kappa )= ({\overline{G}},{\bar{\lambda }},{\bar{\kappa }})\) whenever \({\bar{\kappa }}_x < \infty \) for all \( x\in {\overline{G}}\). Otherwise \((G,\lambda , \kappa )\) is obtained by suitable ‘enhancement’ of \( {\mathcal {G}}\) (exploiting network equivalence). As a result, the killing measure \(\kappa \) is finite everywhere, i.e. \(\kappa _x < \infty \) for all \(x \in G\).

We always tacitly assume that the induced graph \((G,E)\) with edge set \(E =\{ \{x,y \}: x,y \in G,\, \lambda _{x,y}>0\}\) is connected and locally finite. We write \(x\sim y\) when \(\{x,y\}\in {E},\) and we define

$$\begin{aligned} \begin{aligned}&\lambda _x=\kappa _x+\sum _{y\in {G}}\lambda _{x,y},\ \rho _x=\frac{1}{2\kappa _x} \text { for }x\in {G} \text { and } \rho _{x,y}=\frac{1}{2\lambda _{x,y}}\text { for }x\sim y\in {G} \end{aligned} \end{aligned}$$
(1.1)

(with \(\rho _x=\infty \) when \(\kappa _x=0\) ). One naturally associates to \( {\mathcal {G}}\) a continuous version \({\widetilde{{\mathcal {G}}}},\) the corresponding cable system or metric graph, obtained by replacing each edge \(e =\{x,y \} \in E\) by an open interval \(I_e\) of length \(\rho _{x,y}\), glued to G through its endpoints x and y. One further attaches to each vertex \(x\in G\) an additional interval \(I_x\) isometric to \([0,\rho _x),\) glued to x through 0 (we refer to Sect. 2.3 and Remark 3.8,1) for their raison-d’être).
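
For concreteness (a standard example which plays no special role in what follows), if \({\mathcal {G}}\) is \({\mathbb {Z}}^d\), \(d\ge 3,\) with unit weights and \(\kappa \equiv 0,\) then (1.1) yields

$$\begin{aligned} \rho _{x,y}=\frac{1}{2\lambda _{x,y}}=\frac{1}{2}\text { for }x\sim y \quad \text {and} \quad \rho _x=\frac{1}{2\kappa _x}=\infty \text { for all }x\in {{\mathbb {Z}}^d}, \end{aligned}$$

so that every edge is replaced by an interval of length 1/2 and every vertex carries an unbounded cable \(I_x\).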

One then defines (e.g. in terms of its associated Dirichlet form, see (2.1) and (2.2) below for details) a diffusion process \(({X}_t)_{t \ge 0 }\) on \({\widetilde{{\mathcal {G}}}} \cup \{\Delta \}\), where \(\Delta \) denotes an (absorbing) cemetery state, which can be viewed as Brownian motion on the cable system. The process X induces a pure jump process \(Z=(Z_t)_{t\ge 0}\) on \(G\cup \{\Delta \}\), which we refer to as its trace (or print) on G, see (2.4), associated to a corresponding trace form. The induced process Z has the law of the continuous time Markov chain that jumps from \(x \in G\) to \(y \in G\) at rate \(\lambda _{x,y}\) and is killed at rate \(\kappa _x.\) Similarly, the trace of X on \(\{x\in {{\overline{G}}}:\,{\bar{\kappa }}_x<\infty \}\) has the law of the continuous time Markov chain on \({\overline{G}}\) that jumps from \(x \in {\overline{G}}\) to \(y \in {\overline{G}}\) at rate \({\bar{\lambda }}_{x,y}\) and is killed at rate \({\bar{\kappa }}_x.\) We write \(P_x\) for the canonical law of \(X_{\cdot }\) with starting point \(x \in {\widetilde{{\mathcal {G}}}}\), and occasionally \(P_x^{{\widetilde{{\mathcal {G}}}}}\) in place of \(P_x\) to stress the dependence on the datum \({\widetilde{{\mathcal {G}}}}\). We say that \(X_{\cdot }\) is killed if \(X_{\cdot }\) exits \({\widetilde{{\mathcal {G}}}}\) via \(I_x\) for some \(x\in {G}\) with \(\kappa _x>0\) (which is equivalent to Z being killed, i.e. entering \(\Delta \)). Accordingly, we define

$$\begin{aligned} {\mathbf {h}}_{\text {kill}}(x){\mathop {=}\limits ^{\text {def.}}}P_x(X_{\cdot } \text { is killed}), \text { for all }x\in {\widetilde{{\mathcal {G}}}}. \end{aligned}$$
(1.2)

Moreover, we say that \({\mathbf {h}}_{\text {kill}}<1\) if \({\mathbf {h}}_{\text {kill}}(x)<1\) for all \(x \in {\widetilde{{\mathcal {G}}}},\) or equivalently if \({\mathbf {h}}_{\text {kill}}(x)<1\) for some \(x\in {\widetilde{{\mathcal {G}}}}\) (recall that \((G,E)\) is assumed to be a connected graph). An important family of graphs satisfying \({\mathbf {h}}_{\text {kill}}<1\) is that of massless graphs with \({\bar{\kappa }} =\kappa \equiv 0\), or equivalently \({\mathbf {h}}_{\text {kill}}(\cdot )=0\).
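
As a simple illustration (not needed in the sequel), suppose that \({\mathcal {G}}\) is \({\mathbb {Z}}^d\), \(d\ge 3,\) with unit weights and with killing measure concentrated at a single vertex \(x_0\), say \(\kappa =\kappa _0 1_{\{x_0\}}\) for some \(\kappa _0\in {(0,\infty )}\). Getting killed requires entering the cable \(I_{x_0}\), hence hitting \(x_0\), whence

$$\begin{aligned} {\mathbf {h}}_{\text {kill}}(x)\le P_x(H_{x_0}<\infty )<1\text { for all }x\in {{\mathbb {Z}}^d}\setminus \{x_0\}, \end{aligned}$$

with \(H_{x_0}\) the entrance time of \(x_0\) for X, the last inequality following from the transience of the simple random walk on \({\mathbb {Z}}^d\); thus \({\mathbf {h}}_{\text {kill}}<1\) even though \(\kappa \not \equiv 0\).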

Our results deal with the graph \({\mathcal {G}}\) and its associated metric graph \({\widetilde{{\mathcal {G}}}}\), when \({\mathcal {G}}\) is transient; that is, when the Markov chain Z is transient, which we tacitly assume from now on. In particular, the graph \({\mathcal {G}}\) may be finite when \(\kappa \not \equiv 0.\) We then define the Gaussian free field on \({{\widetilde{{\mathcal {G}}}}}\), whose canonical law \({\mathbb {P}}^G\) (occasionally denoted as \({\mathbb {P}}^G_{{\widetilde{{\mathcal {G}}}}}\)), defined on the space \(C({\widetilde{{\mathcal {G}}}},{\mathbb {R}})\) endowed with the \(\sigma \)-algebra generated by the coordinate maps \(\varphi _x,\) \(x\in {{\widetilde{{\mathcal {G}}}}},\) is such that

$$\begin{aligned} \text {under}\ {\mathbb {P}}^G, (\varphi _x)_{x\in {{\widetilde{{\mathcal {G}}}}}}\ \text {is a centered Gaussian field with covariance function}\ g(\cdot ,\cdot ).\nonumber \\ \end{aligned}$$
(1.3)

Here, \(g(\cdot ,\cdot )\) refers to the Green density of \(X_{\cdot }\) with respect to the Lebesgue measure m on \(\widetilde{{\mathcal {G}}}\), see (2.5). The restriction of this process to G has the same law as the usual Gaussian free field on \({\mathcal {G}}\) associated to the discrete Markov chain Z.

We now describe our main results, which deal with the excursion sets \(E^{\ge h}{\mathop {=}\limits ^{\text {def.}}}\{y\in {{\widetilde{{\mathcal {G}}}}}:\varphi _y\ge h\}\) of \(\varphi \), for varying height \(h\in {\mathbb {R}}.\) We endow \(\widetilde{ {\mathcal {G}}}\) with the (geodesic) distance \(d(\cdot ,\cdot )\) such that all intervals \(I_e\), \(e\in E\), and \(I_x\), when \(\rho _x< \infty \), have length one (rather than \(\rho _{e}\) and \(\rho _x,\) respectively). Albeit not essential, we assume for convenience that d also assigns length one to \(I_x\) when \(\rho _x= \infty \) (by means of some strictly increasing bijection \([0,1) \rightarrow [0,\infty )\)). The clusters, i.e. maximal connected components, of \(E^{\ge h}\), are defined as

$$\begin{aligned} \begin{aligned}&{E}^{\ge h}(x_0){\mathop {=}\limits ^{\text {def.}}} \big \{y\in {{\widetilde{{\mathcal {G}}}}}:\,x_0\leftrightarrow y\text { in }E^{\ge h} \big \}, \hbox { for } x_0\in {{\widetilde{{\mathcal {G}}}}} \hbox {,} \,\,h\in {\mathbb {R}}; \end{aligned} \end{aligned}$$
(1.4)

here, for measurable \({A}\subset {\widetilde{{\mathcal {G}}}}\) and \(x,y\in {{\widetilde{{\mathcal {G}}}}}\), we write \(\{x\leftrightarrow y \text { in } {A}\}\) if there exists a (continuous) path from x to y in A,  and we say that A is connected in \({\widetilde{{\mathcal {G}}}}\) if \(z\leftrightarrow z'\) in A for all \(z,z'\in {{A}}.\) A central role in this work will be played by the cluster capacity functional

$$\begin{aligned} \mathrm {{ cap}}({E}^{\ge h}(x_0)), \text { for}\ h\in {\mathbb {R}}, x_0\in {\widetilde{{\mathcal {G}}}}; \end{aligned}$$
(1.5)

We refer to (2.20) and (2.27) below for the definition of \(\text {cap}(A)\), the electrostatic capacity of A, for arbitrary closed, possibly unbounded subsets A of \({\widetilde{{\mathcal {G}}}}\). For instance, in case \(A\subset G\) is finite (or more generally if \(A'\subset {\widetilde{{\mathcal {G}}}}\) is compact and \(\partial A' = A\)), then \(\text {cap}(A)\) (and \(\text {cap}(A')\)) coincide with the usual capacity of the set A for the discrete chain Z.
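
As a simple example, anticipating the variational characterization (2.21) below, for a single point \(x_0\in {{\widetilde{{\mathcal {G}}}}}\) one finds

$$\begin{aligned} \mathrm {cap}(\{x_0\})=\frac{1}{g(x_0,x_0)}, \end{aligned}$$

since \(\delta _{x_0}\) is the only probability measure carried by \(\{x_0\}\); here \(g(\cdot ,\cdot )\) denotes the Green density appearing in (1.3).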

One of our interests lies in the percolative properties of the set \(E^{\ge h}\) (with respect to d). We introduce the corresponding critical parameter

$$\begin{aligned} \begin{aligned} {{\widetilde{h}}}_*&=\inf \big \{ h \in {\mathbb {R}}: \text { for all}\ x_0\in {{\widetilde{{\mathcal {G}}}}}, \, {\mathbb {P}}^G(E^{\ge h}(x_0) \text { is unbounded})=0 \big \} \end{aligned} \end{aligned}$$
(1.6)

(with the convention \(\inf \varnothing = \infty \); note that \({{\widetilde{h}}}_*\) is equivalently defined as the smallest level h such that \({\mathbb {P}}^G\)-a.s. \(E^{\ge h}\) contains no unbounded connected component). A fortiori, (1.6) entails that for each \(h< {\widetilde{h}}_*,\) with positive \({\mathbb {P}}^G\)-probability the discrete set \(E^{\ge h}\cap G\) contains a percolating connected component in the usual sense (i.e., the component is unbounded with respect to the graph distance on \((G,E)\)). In other words, the corresponding critical parameter \(h_*\) (see for instance (1.8) in [8] for its definition) satisfies \(h_* \ge {{\widetilde{h}}}_* \). Other natural definitions of critical parameters associated to the sets \(\{ E^{\ge h} , h \in {\mathbb {R}}\}\) exist and will be of interest, see (3.1) and (3.2) below. They correspond to several natural ways of measuring the ‘magnitude’ of clusters in \(E^{\ge h}\), and (1.5) reflects one such choice, based on capacity as a measure of size.

We now briefly introduce the process of random interlacements on \({\widetilde{{\mathcal {G}}}},\) see [11, 24] and [28], to the extent necessary to formulate our main findings; further details are provided in Sect. 2.5. The interlacement process will play a prominent role in the present context, due to recent isomorphisms, see [19, 27] and (Isom) below, relating it to \(\varphi \) in a very explicit fashion. Under a suitable probability measure \({\mathbb {P}}^I,\) for each \(u>0,\) random interlacements at level u on \(\widetilde{{\mathcal {G}}}\) constitute a Poisson point process \(\omega _u\) with intensity \(u\nu _{{\widetilde{{\mathcal {G}}}}},\) where \(\nu _{{\widetilde{{\mathcal {G}}}}}\) is a measure on doubly non-compact trajectories modulo time-shift (when \(\kappa \not \equiv 0,\) these trajectories may be killed by the measure \(\kappa \) before escaping to infinity, i.e., they may ‘exit \({\widetilde{{\mathcal {G}}}}\) via \(I_x\)’ for some \(x\in {G}\) with \(\kappa _x>0\); see (2.39) and (2.40) for the precise definition of \(\nu _{{\widetilde{{\mathcal {G}}}}}\)). We denote by \((\ell _{x,u})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) the continuous field of local times associated to \(\omega _u,\) i.e. the sum of the local time densities relative to the Lebesgue measure on \({\widetilde{{\mathcal {G}}}}\) of all the trajectories in \(\omega _u.\) We then define the interlacement set as \({{{\mathcal {I}}}}^u = \{x \in {\widetilde{{\mathcal {G}}}}: \ell _{x,u} > 0 \}\), a random open subset of \({\widetilde{{\mathcal {G}}}}\). Without any further assumptions on \({{\mathcal {G}}}\), it can be shown that for all \(u>0,\)

$$\begin{aligned} \Big (\ell _{x,u}+\frac{1}{2}\varphi _x^2\Big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { has the same law under }{\mathbb {P}}^G\otimes {\mathbb {P}}^I\text { as }\Big (\frac{1}{2}(\varphi _x+\sqrt{2u})^2\Big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { under }{\mathbb {P}}^G;\nonumber \\ \end{aligned}$$
(1.7)

see [25] for the original derivation of this result on the (discrete) base graph \({\mathcal {G}}\) in case \(\kappa \equiv 0\), based on the generalized second Ray-Knight theorem of [12]; see also Proposition 6.3 of [19] and (1.27)–(1.30) in [27] for extensions to \({\widetilde{{\mathcal {G}}}}\). We refer to Remark 2.2 below regarding a justification for the validity of (1.7) in the present setup, which is more general. As first observed in [19], the isomorphism (1.7) implies a stochastic domination of each connected component of \({\mathcal {I}}^u\) by a level-set cluster of \(\varphi \), which straightforwardly yields (recall (1.2)) that

$$\begin{aligned} \text { if }{\mathbf {h}}_{\text {kill}}<1,\text { then }{\widetilde{h}}_*\ge 0, \end{aligned}$$
(1.8)

see the paragraph following (3.19) below for details. The reverse inequality \({\widetilde{h}}_*\le 0\) is an entirely different matter and has so far only been verified in a handful of cases (see below Theorem 1.1 for a list). Part of our main result addresses this issue.

Under additional assumptions, refining the link between \({\mathcal {I}}^u\) and level-sets of \(\varphi \) described above (1.8), the identity (1.7) can be considerably strengthened. Indeed, Theorem 2.4 in [27] asserts that, if

$$\begin{aligned}&{\mathbb {P}}^G\text {-a.s., }E^{\ge 0}\text { only contains bounded connected components,} \end{aligned}$$
(Sign)

and \(g|_{G \times G}\) is uniformly bounded on the diagonal, see also (1.42) in [27] for a slightly weaker condition (but see below; our results will imply that this latter condition is in fact unnecessary), then

$$\begin{aligned}&\begin{array}{l} \big (\varphi _x 1_{x\notin {{\mathcal {C}}_u}}+\sqrt{\varphi _x^2+2\ell _{x,u}}\, 1_{x\in {{\mathcal {C}}_u}}\big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { has the same law} \\ \text {under }{{\mathbb {P}}}^{I}\otimes {\mathbb {P}}^G \text { as }\big (\varphi _x+\sqrt{2u}\big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { under }{\mathbb {P}}^G,\text { for all }u\ge 0, \end{array} \end{aligned}$$
(Isom)

where \({\mathcal {C}}_u\) denotes the closure of the union of those connected components of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:|\varphi _x|>0\}\) (the sign clusters of \(\varphi \)) that intersect the interlacement set \({{{\mathcal {I}}}}^u.\) In particular, noting that \(\ell _{x,u}=0\) if \(x \notin {\mathcal {C}}_u\), (Isom) is seen to yield (1.7) upon taking squares. In practice, the main obstacle to deducing the identity (Isom) is showing that (Sign) holds (cf. the discussion following Theorem 1.1).

Our main result investigates the newly introduced capacity observable (1.5) and explores the links between this quantity, the value of the critical parameter \({\widetilde{h}}_*\) in (1.6) and the validity of the identity (Isom). A natural structural property that will appear in this context is the (weak) condition that

$$\begin{aligned}&\mathrm {cap}(A)=\infty \text { for all}\ (d\text {-})\text {unbounded, closed, connected sets }A\subset \widetilde{{\mathcal {G}}} \end{aligned}$$
(Cap)

(see (3.6) for an equivalent formulation in terms of the base graph \({\mathcal {G}}\) and below (1.5) for the definition of \(\text {cap}(\cdot )\) in the present context). One can for instance show that (Cap) is verified whenever the Green function \(g|_{G \times G}\) is uniformly bounded on the diagonal, see Lemma 3.4 below (cf. also (3.7) for a slightly more general condition). In particular, (Cap) holds on any vertex-transitive graph.

We now present a succinct version of our main result. It entails several findings which are discussed in Sect. 3 in a more comprehensive form. For later reference we introduce the condition

[(\({Law}_h\)): a displayed condition, shown as an image in the original, prescribing the Laplace transform of \(\mathrm {cap}({E}^{\ge h}(x_0))\) under \({\mathbb {P}}^G\)];

note that the Laplace transform in (\({Law}_h\)) can be equivalently described in terms of an associated density \(\rho _h\), which is explicit, see (3.8) and Lemma 5.2 below.

Theorem 1.1

Let \({\mathcal {G}}\) be a transient weighted graph. Then:

  (1)

    \({\mathbb {P}}^G\)-a.s., the random variable \(\mathrm {cap}(E^{\ge 0}(x_0))\) is finite for all \(x_0\in {{\widetilde{{\mathcal {G}}}}}.\) In particular, the condition (Cap) implies (Sign) (see Theorem 3.2 and Corollary 3.3 for details).

  (2)

    The implications displayed in Fig. 1 below hold true.

    In particular, in view of (1.8), if \({\mathcal {G}}\) is a transient weighted graph such that \({\mathbf {h}}_{\text {kill}}<1\) and (Cap) is fulfilled, then \({\widetilde{h}}_*=0\) and the law of \(\mathrm {cap}({E}^{\ge h}(x_0))\) is characterized by (\({Law}_{h}\)), for \(h \ge 0\) (equivalently, (Isom) holds).

  (3)

    If (\({Law}_{0}\)) holds but (Sign) does not hold, then \({\widetilde{h}}_*=\infty \) (see Corollary 3.11 for details).

    In particular, in view of (1.8), if (\({Law}_{0}\)) holds and \({\mathbf {h}}_{\text {kill}}<1,\) then \({\widetilde{h}}_* \in {\{0,\infty \}}\).

To appreciate the strength of Theorem 1.1, we highlight one particular consequence, which follows directly from items 1) and 2) above together with Lemma 3.4,2) below.

Corollary 1.2

(No percolation at criticality) Let \({\mathcal {G}}\) be a vertex-transitive, massless, transient weighted graph. Then (\({\widetilde{h}}_*=0\) and) the clusters of \(E^{\ge 0}\) are \({\mathbb {P}}^G\)-a.s. bounded.

We further refer to Corollary 3.6 below for interesting consequences of Theorem 1.1 regarding loop soups, and to [10] regarding the (near-)critical picture associated to the (continuous) phase transition exhibited by Corollary 1.2.

We now elaborate on the results of Theorem 1.1 in due detail and give some ideas concerning their proofs. In part 1) of Theorem 1.1, the finiteness of the capacity functional (1.5) at height \(h=0\) – which, remarkably, holds without any further assumption on \({\mathcal {G}}\) – can loosely be regarded as an indication that the sign clusters of the Gaussian free field on \({\widetilde{{\mathcal {G}}}}\) do not percolate, at least when measured in terms of capacity, cf. also (3.2) and Theorem 3.2 below. Condition (Cap) formalizes this intuition, since it directly implies that closed connected sets have finite capacity if and only if they are bounded. Thus, if (Cap) holds true, so does (Sign), which in turn directly entails \({\widetilde{h}}_*\le 0,\) see (1.6). The condition (Cap) is moreover usually easy to verify, since it depends only on the structure of the graph \({\mathcal {G}},\) and not on the Gaussian free field. As alluded to above, the inequality \({\widetilde{h}}_*\le 0\) had previously only been proved on a certain number of graphs with \(\kappa \equiv 0\), which all verify condition (Cap), namely:

  • \({\mathbb {Z}}^d,\) \(d\ge 3,\) with unit weights, see Theorem 1 and Proposition 5.5 in [19]. This proof could actually be easily extended to all amenable, vertex-transitive graphs, and such graphs verify (Cap), see Lemma 3.4,2).

  • The \((d+1)\)-regular tree \({\mathbb {T}}_d\), \(d\ge 2,\) with unit weights, see Proposition 4.1 in [27]. It is easy to prove that these graphs verify (Cap), using Lemma 3.4,3), the fact that \(e_{K,{\mathbb {T}}_d}(x) \ge c(d)\) (which holds uniformly over connected finite subsets \(K \subset {\mathbb {T}}_d\) and \(x \in \partial K\)), along with the isoperimetric bound \(|\partial K| \ge c'(d)|K|\) (see for instance [2], p.80).

  • Any tree \( {\mathbb {T}}\) with unit weights such that \(\{x\in \mathbb {T}:\,R_x^{\infty }>A\}\) only has bounded components for some \(A>0,\) where \(R_x^{\infty }\) is the effective resistance between x and infinity for the descendants of x,  see Proposition 2.2 in [1]. These graphs verify (Cap) by Lemma 3.4,3).

  • Any transient graph with controlled weights (see e.g. condition \((p_0)\) in [8]), such that the volume of balls has polynomial growth and the Green function decreases polynomially fast, see Proposition 5.2 in [8]. These graphs verify (Cap), see Lemma 3.2 in [8].

Hence, Theorem 1.1 subsumes and generalizes all these previous results, and it covers many new cases, such as all vertex-transitive graphs, see Lemma 3.4,2) below. What is more, without assuming that (Cap) is fulfilled, it is possible to construct a graph \({\mathcal {G}}\) such that \({\widetilde{h}}_*\le 0\) fails to hold, see Proposition 8.1 in [22]. One can also easily find examples of graphs such that (Sign) is verified, while (Cap) is not, see Remark 3.5,3), or Proposition 7.1 in [22] for more details. A further, very interesting question is whether there exist examples of graphs \({\mathcal {G}}\) not satisfying (\({Law}_{0}\)), or any of the other equivalent conditions appearing in Theorem 1.1,2).

A stepping stone for the proof of Theorem 1.1,1) (and, as will soon turn out, of Part 2) as well) is the observation that the identity (Isom), if assumed to hold, implies (\({Law}_h\))\(_{h \ge 0}\), see Proposition 4.2 and Lemma 6.1 below. Crucially, this observation can be applied immediately when \({\mathcal {G}}\) is a finite (transient) graph, for (Isom) is then a direct consequence of the isomorphism between loop soups and the Gaussian free field, see [17] and [19], that we recall in (4.6). We refer to Lemma 4.4, proved in Appendix B using ideas similar to those in the proof of Theorem 8 in [20], for corresponding details.

Equipped with (Isom), and thus (\({Law}_h\))\(_{h\ge 0},\) on finite transient graphs we then approximate the Gaussian free field on any infinite transient graph \({\mathcal {G}}\) by the Gaussian free field on a sequence of finite transient graphs \({\mathcal {G}}_n\) increasing to \({\mathcal {G}}\) as \(n\rightarrow \infty \), see (4.10) and Lemma 4.6. The fact that our setup allows for 0-boundary conditions (i.e. \({\bar{\kappa }}_x =\infty \) for some \(x \in {\overline{G}}\)) is central for this purpose. The capacity functional (1.5) has certain desirable monotonicity properties under this approximation, see (4.16), and Theorem 1.1,1) corresponds to the information that survives in the limit \(n \rightarrow \infty \) without further assumptions on \({\mathcal {G}}\).

Let us now comment on Part 2) of Theorem 1.1 and its proof. Figure 1 illustrates the various implications involved in its statement in a more explicit fashion and will hopefully provide some useful guidance for the reader.

Fig. 1

The detailed chain of implications constituting Theorem 1.1,2). The implications in the second line immediately yield the equivalence of (\(\mathrm {Law}_{0}\)), (Isom) and (\({Law}_h\))\(_{h \ge 0}\).

The equivalence a) in Fig. 1 entails that if \({\widetilde{h}}_*=0,\) then the level sets of the GFF never percolate at the critical point \(h=0,\) even if (Cap) (which implies (Sign)) is not verified. We comment on its proof at the very end of this discussion. Implication b) represents the desired improvement over the argument delineated above yielding Theorem 1.1,1), by which the full information (\({Law}_h\))\(_{h\ge 0}\) survives in the limit as \(n \rightarrow \infty \) under the assumption that the sign clusters of \(\varphi \) are bounded (which holds e.g. under condition (Cap)). In fact, when (Cap) is satisfied, we also provide an explicit formula for the law of the capacity of clusters above negative levels, see Theorem 3.7 for further details; see also Remark 3.10,4), Lemma 4.3 and Remark 5.3,2) regarding the (related) symmetry properties relating compact clusters in \(E^{\ge h}\) and \(E^{\ge -h}\), for arbitrary \(h>0\).

The exact formula (\({Law}_h\))\(_{h\ge 0}\) describing the law of the capacity functional (1.5) is of course instrumental and witnesses a certain degree of integrability of the model \(\{ E^{\ge h }: h \in {\mathbb {R}}\}\). For instance, one can immediately deduce from it (see (3.8)) that the capacity of critical clusters has heavy tails satisfying

$$\begin{aligned} \mathbb {P}^G\big ( \mathrm {cap}\big ({E}^{\ge 0}(x_0)\big ) \ge r \big ) \sim \big (\pi ^2g(x_0,x_0)r\big )^{-1/2}, \text { as }r \rightarrow \infty . \end{aligned}$$
(1.9)
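
In particular, since the right-hand side of (1.9) is not integrable in r at infinity, the expectation of \(\mathrm {cap}({E}^{\ge 0}(x_0))\) under \({\mathbb {P}}^G\) is infinite, even though this random variable is \({\mathbb {P}}^G\)-a.s. finite by Theorem 1.1,1):

$$\begin{aligned} \int _0^{\infty }{\mathbb {P}}^G\big ( \mathrm {cap}\big ({E}^{\ge 0}(x_0)\big ) \ge r \big )\, \mathrm {d}r=\infty . \end{aligned}$$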

Further to (1.9), one can use (\({Law}_h\))\(_{h\ge 0}\) to directly deduce bounds on various quantities of interest related to the (near-)critical behavior for the percolation of \(\{ E^{\ge h }: h \in {\mathbb {R}}\}\), see [10]. The approach using differential formulas developed therein actually leads to an independent proof of the implication b), along with extended results valid on any transient graph \({\mathcal {G}}\), see Theorem 1.1 in [10]. Incidentally, an explicit formula for the probability of the event \(\{x\longleftrightarrow y\) in \(E^{\ge 0}\}\) has also been obtained in Proposition 5.2 of [19], and was a key ingredient for all previous proofs of the inequality \({\widetilde{h}}_*\le 0.\)

We now turn to the equivalences c) and d) in the second line of Fig. 1. The direct (i.e. left-to-right) implications appearing there already imply the equivalences. The direct implication in d) is another application of our initial observation, Proposition 4.2, applied above in the context of Theorem 1.1,1) for finite graphs only, but remaining valid in infinite volume.

Remarkably, the direct implication in c) asserts that it is sufficient to know that the law of the capacity of the sign clusters is given by (\(\mathrm {Law}_{0}\)) in order to deduce the strong version (Isom) of the isomorphism theorem. In particular, together with b), this implies that (Isom) holds whenever (Sign) is verified, which generalizes Theorem 2.4 of [27] that required stronger assumptions, cf. the above discussion leading to (Isom).

Extending the setting in which the identity (Isom) is valid is also interesting as this relation has already been useful in [27] and [1] to compare the critical parameter for the percolation of random interlacements and the Gaussian free field on discrete trees, and in [8] to prove strong percolation for the level sets of the discrete Gaussian free field at a positive level on a large class of graphs, for instance \({\mathbb {Z}}^d,\) \(d\ge 3,\) or various fractal graphs. It is not always easy to check that the conditions (1.32) and (1.34), or (1.42), of Theorem 2.4 in [27] are exactly verified, see the proof of Corollary 5.3 in [8], which sparked our interest, and it can thus be interesting to replace them by the weaker condition (Cap), which is easier to verify.

The proof of c) requires deriving a full-fledged isomorphism theorem relating random interlacements and the Gaussian free field on an adequate class of graphs, assuming the identity (\(\mathrm {Law}_{0}\)) alone. In order to prove (Isom), we employ an approximation scheme, starting from a finite-volume setup. The scheme is similar in spirit to the previously used approximation for \(\varphi \), but more involved, as it requires approximating random interlacements on infinite graphs by random interlacements on finite graphs, see Lemma 6.3. Combining the approximations for the free field and the interlacement process, we then obtain (Isom) if (\(\mathrm {Law}_{0}\)) is fulfilled, see Lemma 6.4.

Moreover, our proof of (Isom), which relies on taking a suitable limit rather than proceeding directly in infinite volume and using the Markov property as in [27], immediately lets us derive a signed version of the isomorphism for random interlacements on discrete graphs, taking advantage of the equivalent discrete isomorphism for the loop soup, (4.8). As a by-product of the proof, we thus obtain a version of the isomorphism (Isom) for the discrete graph \({\mathcal {G}}\) in Theorem 3.9, see (3.16), similar to the version of the second Ray-Knight theorem from Theorem 8 in [20].

Finally, the isomorphism (Isom) has another interesting consequence, stated in Theorem 1.1,3) and Corollary 3.11: if (\(\mathrm {Law}_{0}\)) holds but (Sign) does not hold, then \({\widetilde{h}}_*=\infty .\) This can be regarded as a partial converse to the implication (Sign) \(\Longrightarrow \) (\(\mathrm {Law}_{0}\)) from part 2), which leads to a dichotomy for the value of \({\widetilde{h}}_*\) in case \({\mathbf {h}}_{\text {kill}}<1\). In particular, if \({\mathcal {G}}\) is a graph such that \({\widetilde{h}}_*\le 0,\) then the clusters of \(E^{\ge h}\) are \({\mathbb {P}}^G\)-a.s. bounded for all \(h>0,\) and thus (\({Law}_h\)) holds for all \(h>0,\) see Theorem 3.7. Taking the limit as \(h\searrow 0,\) one can then prove that (\(\mathrm {Law}_{0}\)), and thus (Isom), hold. Since \({\widetilde{h}}_*\ne \infty ,\) this means that (Sign) must hold, and thus we also obtain Theorem 1.1,2),a) (see Fig. 1).

We now explain how this article is organized. Section 2 recalls the main objects of interest, the diffusion X,  the Gaussian free field, and random interlacements on the cable system in the present (broad) setup. It also supplies suitable notions of equilibrium measure and capacity on \({\widetilde{{\mathcal {G}}}}\), see Lemma 2.1, (2.16) and (2.20).

Section 3 contains the detailed versions of all our findings, which together imply Theorem 1.1, and that we prove in the rest of the article. The central results are the three Theorems 3.2, 3.7 and 3.9, along with their respective corollaries.

Section 4 gathers various key preliminary results, notably Proposition 4.2, which derives (\({Law}_h\))\(_{h\ge 0}\) as a consequence of (Isom) (or more precisely, an equivalent but more handy formulation (Isom’) introduced in Sect. 3). It also contains the approximation scheme for \(\varphi \), see Lemma 4.6, as well as the isomorphism (Isom) on finite graphs, see Lemma 4.4. These results are the ingredients of various arguments in the sequel.

First, Sect. 5 is devoted to the proof of Theorems 3.2 and 3.7, which roughly correspond to Theorem 1.1,1) and 2),b) in Fig. 1, but contain more detailed results. Their proofs quickly follow from the preparatory work done in Sect. 4.

Section 6 is then concerned with the proof of the isomorphism between random interlacements and the Gaussian free field (Isom) under the condition (\(\mathrm {Law}_{0}\)), and with its consequences, Corollaries 3.11 and 3.12. At the technical level, an important role is played by the approximation of random interlacements on a graph \({\mathcal {G}}\) by random interlacements on a sequence of graphs increasing to \({\mathcal {G}},\) see Lemmas 6.2 and 6.3. Some concluding remarks and open questions are gathered at the end of that section.

Throughout the article, we will sometimes add \({\widetilde{{\mathcal {G}}}}\) as a subscript to the notation to stress the underlying graph \({\mathcal {G}}\) that we consider. For the reader’s orientation, we note that the conditions (Sign), (\({Law}_h\)) and (Isom) are all introduced above Theorem 1.1, and that the condition (Isom’) is introduced above Theorem 3.9.

2 Preliminaries and useful results

We return to the framework described around (1.1), consisting of a transient weighted graph \( {\mathcal {G}}\), the induced triplet \((G,\lambda ,\kappa )\) satisfying \(\kappa _x<\infty \) for all \(x\in {{G}}\) and the associated cable system \(\widetilde{{\mathcal {G}}}\). We now define the various objects attached to this setup. We first sketch a construction of the canonical diffusion X on \(\widetilde{{\mathcal {G}}}\) and of its trace on suitable subsets F of G from the associated Dirichlet form in Sect. 2.1. In Sect. 2.2 we introduce several aspects of potential theory on \(\widetilde{{\mathcal {G}}}\) in this general framework, which can be conveniently defined probabilistically by ‘enhancements’, exploiting instances of network equivalence on the base graph \({\mathcal {G}}\), see Lemma 2.1 below. We then briefly discuss the cables \(I_x\) (Sect. 2.3) and their role in taking suitable graph limits, recall the Gaussian free field \(\varphi \) and its Markovian decomposition (Sect. 2.4), and supply the definition of random interlacements in the present context (Sect. 2.5).

Recall the definition of the cable system \({\widetilde{{\mathcal {G}}}}\): first, each edge \(e=\{x,y\}\in E\) is replaced by an open interval \(I_e,\) isometric to \((0,\rho _{x,y}),\) see (1.1). In addition, an open interval \(I_x\) of length \(\rho _x(=\frac{1}{2\kappa _x})\) (possibly unbounded) is attached to each vertex x of G. The cable system \({\widetilde{{\mathcal {G}}}}\) is then obtained by glueing together the intervals \(I_e,\) \(e\in {E},\) to G through their respective endpoints, and by glueing one endpoint of \(I_x,\) \(x\in {G},\) to x. Note that G can be naturally viewed as a subset of \({\widetilde{{\mathcal {G}}}}.\) The elements of G will still be called vertices and the intervals \(I_e,\) \(e \in E,\) and \(I_x,\) \(x\in {{G}},\) will be referred to as the edges of \({\widetilde{{\mathcal {G}}}}.\)

The canonical distance on each \({I}_e,\) \(e\in {E},\) and \({I}_x,\) \(x\in {G},\) is denoted by \(\rho _{{\widetilde{{\mathcal {G}}}}}(\cdot , \cdot ).\) Note that \(\rho _{{\widetilde{{\mathcal {G}}}}}(x,y)\) is only defined if x and y are on the same edge. In a slight abuse of notation, for any edge \(e=\{x,y\}\in {E}\) and any \(t\in {[0,\rho _{x,y}}],\) we denote by \(x+t\cdot I_e=y+(\rho _{x,y}-t)\cdot I_e\) the point of \({I}_e\) at (\(\rho _{{\widetilde{{\mathcal {G}}}}}\)-)distance t from x, and for any vertex \(x\in {G}\) and \(t\in {[0,\rho _{x}}),\) by \(x+t\cdot I_x\) the point of \({I}_x\) at distance t from x. We also consider the distance d on \({\widetilde{{\mathcal {G}}}},\) cf. above (1.4), which is such that \(d(x,y),\) \(x,y\in {{\widetilde{{\mathcal {G}}}}}\), is the minimal length of a continuous path between x and y, when changing the length of each \(I_e,\) \(e\in {E\cup G}\) from \(\rho _e\) to 1. In particular, the restriction of \(d(\cdot ,\cdot ) \) to \(G\times G\) is just the graph distance \(d_{{\mathcal {G}}}\) on \({\mathcal {G}}.\) We consider \(({\widetilde{{\mathcal {G}}}},d)\) as a metric space, and for \(A\subset {\widetilde{{\mathcal {G}}}}\) we define \(\partial A\) as the boundary of A in \({\widetilde{{\mathcal {G}}}}\) for d. Finally, throughout the article, we say that a set \(K\subset {\widetilde{{\mathcal {G}}}}\) is compact if it is compact for the distance d.

2.1 The canonical diffusion on the cable system

We define the set of forward trajectories \(W^+_{{\widetilde{{\mathcal {G}}}}}\) as the set of functions \(w^+:[0,\infty )\rightarrow {\widetilde{{\mathcal {G}}}}\cup \{\Delta \},\) where \(\Delta \) is a cemetery point (not in \({\widetilde{{\mathcal {G}}}}\)), for which there exists \(\zeta \in {[0,\infty ]}\) such that \(w^+_{|{[0,\zeta )}}\in {C([0,\zeta ),{\widetilde{{\mathcal {G}}}})}\) and, when \(\zeta <\infty ,\) \(w^+(t)=\Delta \) for all \(t\ge \zeta .\) For each \(t\ge 0\) we denote by \(X_t\) the projection at time t,  i.e. \(X_t(w^+)=w^+(t)\) for all \(w^+\in {W^+_{{\widetilde{{\mathcal {G}}}}}},\) and by \({\mathcal {W}}^+_{{\widetilde{{\mathcal {G}}}}}\) the \(\sigma \)-algebra on \(W^+_{{\widetilde{{\mathcal {G}}}}}\) generated by \(X_t,\) \(t\ge 0.\) By m we denote the Lebesgue measure on \({\widetilde{{\mathcal {G}}}},\) which can be informally described as the sum of the Lebesgue measures on each \(I_e,\) \(e\in {E},\) and \(I_x,\) \(x\in {G},\) with the normalization \(m(I_e)=\rho _e\) and \(m(I_x)= \rho _x\) (with, say, mass 1 associated to each sub-interval of Euclidean length 1). We proceed to define a diffusion on \({\widetilde{{\mathcal {G}}}},\) which we will characterize through its associated Dirichlet form. In order to define the latter, introduce for measurable \(f:{\widetilde{{\mathcal {G}}}}\rightarrow {\mathbb {R}}\),

$$\begin{aligned} (f,f)_{m}{\mathop {=}\limits ^{\text {def.}}}\sum _{e\in {E\cup {G}}}\int _{I_e}f^2\, \mathrm {d}m_{|I_e}, \end{aligned}$$
(2.1)

the corresponding Hilbert space \(L^2({\widetilde{{\mathcal {G}}}},m){\mathop {=}\limits ^{\text {def.}}}\{f:{\widetilde{{\mathcal {G}}}}\rightarrow {\mathbb {R}}\text { measurable}; \,(f,f)_{m}<\infty \}\) (modulo the usual equivalence relation) and \((f,g)_{m}\) the associated quadratic form on \(L^2({\widetilde{{\mathcal {G}}}},m)\) obtained via polarization. Let \(C_0({\widetilde{{\mathcal {G}}}})\) be the closure for the \(\Vert \cdot \Vert _\infty \)-norm of the set of continuous functions with compact support on \({\widetilde{{\mathcal {G}}}}\) and let \(D({\widetilde{{\mathcal {G}}}},m)\subset L^2({\widetilde{{\mathcal {G}}}},m)\) be the space of functions \(f\in {C_0({\widetilde{{\mathcal {G}}}})}\) such that \(f_{|I_e}\in {W^{1,2}(I_e,m_{|I_e}})\) for all \(e\in {E\cup {G}}\) and

$$\begin{aligned} \sum _{e\in E\cup {G}}\Vert f_{|I_e}\Vert _{W^{1,2}(I_e,m_{|I_e})}^2<\infty , \end{aligned}$$

where \(W^{1,2}(I_e,m_{|I_e})\) denotes the respective Sobolev space on \(I_e.\) We now define the Dirichlet form on \(L^2({\widetilde{{\mathcal {G}}}},m)\) (in which \(D({{\widetilde{{\mathcal {G}}}}},m)\) is densely embedded),

$$\begin{aligned} {\mathcal {E}}_{{\widetilde{{\mathcal {G}}}}}(f,g){\mathop {=}\limits ^{\text {def.}}}\frac{1}{2}(f',g')_m\text { for all }f,g\in {D({{\widetilde{{\mathcal {G}}}}},m)}. \end{aligned}$$
(2.2)
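
As a quick sanity check (not needed in the sequel), if \(f\in {D({{\widetilde{{\mathcal {G}}}}},m)}\) is affine on the cable \(I_e\) of an edge \(e=\{x,y\}\in {E}\), then the contribution of \(I_e\) to \({\mathcal {E}}_{{\widetilde{{\mathcal {G}}}}}(f,f)\) equals

$$\begin{aligned} \frac{1}{2}\int _{I_e}(f')^2\, \mathrm {d}m_{|I_e}=\frac{\big (f(y)-f(x)\big )^2}{2\rho _{x,y}}=\lambda _{x,y}\big (f(y)-f(x)\big )^2 \end{aligned}$$

by (1.1), i.e. the contribution of the edge e to the usual discrete Dirichlet form on \({\mathcal {G}}\); this is consistent with the fact, recalled below (2.4), that the trace of X on G jumps from x to y at rate \(\lambda _{x,y}\).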

By Theorem 7.2.2 in [15], one associates to the Dirichlet form \({\mathcal {E}}_{{\widetilde{{\mathcal {G}}}}}\), for each \(x\in {{\widetilde{{\mathcal {G}}}}}\), an m-symmetric diffusion starting in x with state space \({\widetilde{{\mathcal {G}}}}\cup \{\Delta \}.\) We denote by \(P_x\, (=P_x^{{\widetilde{{\mathcal {G}}}}})\) its law on \((W_{{\widetilde{{\mathcal {G}}}}}^+,{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}^+)\) and also define, for any non-negative measure \(\mu \) on \({\widetilde{{\mathcal {G}}}}\) with countable support \(\text {supp}(\mu )\), the measures

$$\begin{aligned} P_\mu {\mathop {=}\limits ^{\text {def.}}}\sum _{x\in {\text {supp}(\mu )}}\mu _xP_x. \end{aligned}$$
(2.3)

Note that \(\zeta = \inf \{ t \ge 0 : X_t =\Delta \}\) is either \(\infty ,\) or the first time X blows up (i.e., X escapes all d-bounded sets) or gets killed (i.e., exits \( {\widetilde{{\mathcal {G}}}}\) through some \(I_x\) with \(\kappa _x>0\)). Informally, one can obtain a diffusion with law \(P_x\) as follows: first, one runs a Brownian motion starting at x on \(I_e,\) with \(x\in {I_e},\) \(e\in {E\cup {G}},\) until a vertex y is reached. Then one chooses uniformly at random an edge or vertex v among \(\{ y\} \cup \{\{y,z\}: z\sim y\}\) and runs a Brownian excursion on \(I_v\) until a vertex is reached; this procedure is iterated until either the process blows up or the open end of the interval \(I_x\) is reached for some \(x\in {{G}},\) in which case the process is killed at that time. We refer to Sect. 2 of [9] or [19] for a more formal description of this construction on \({\mathbb {Z}}^d,\) \(d\ge 3.\)

We now briefly review how to take traces of the process X on suitable subsets F of \({\widetilde{{\mathcal {G}}}}\). One can show, analogously to Sect. 2 of [19], that the process X under \(P^{{\widetilde{{\mathcal {G}}}}}_x\) allows for a space-time continuous family of local times \((\ell _y(t))_{y\in {{\widetilde{{\mathcal {G}}}}},t\ge 0}.\) Therefore, using that \(P^{{\widetilde{{\mathcal {G}}}}}_x\) lives on the canonical space \((W_{{\widetilde{{\mathcal {G}}}}}^+,{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}^+),\) for all sets \(F\subset \widetilde{G}\) of the form \(F = \bigcup _{e \in F_1} \overline{I}_e \cup \bigcup _{x \in F_2} \{x \}\), where \(F_1\subset E\cup {G}\) and \(F_2\subset {G}\) are arbitrary, we can define the time change

$$\begin{aligned} \tau _t^F{\mathop {=}\limits ^{\text {def.}}}\inf \Big \{s>0:\,\int _0^s 1_{\{X_u\in { \bigcup _{e \in F_1} I_e}\}}\,\mathrm {d}u+\sum _{y\in {F_2}}\ell _y(s)>t\Big \}\text { for all }t\ge 0\text { and }w^+\in {W^+_{{\widetilde{{\mathcal {G}}}}}}. \end{aligned}$$

Here, we use the convention \(\inf \varnothing =\zeta \) and denote the trace of X on F by \(X^F=(X_{\tau _t^F})_{t\ge 0}\) with the convention \(X_{\infty }=\Delta \), which corresponds to a time changed process with respect to a positive continuous additive functional (PCAF), see (A.2.36) and below in [15] for instance. As a first application of this definition, letting

$$\begin{aligned} Z{\mathop {=}\limits ^{\text {def.}}} X^{G} \text { (the trace of}\ X\ \text {on}\ G) \end{aligned}$$
(2.4)

it follows from Theorem 6.2.1. in [15] that for all \(x\in {{G}}\) the law of Z under \(P_x^{{\widetilde{{\mathcal {G}}}}}\) is that of the continuous time Markov chain that jumps from \(x \in G\) to \(y \in G\) at rate \(\lambda _{x,y}\) and is killed at rate \(\kappa _x.\) Furthermore, the local times \((\ell _y(\zeta ))_{y\in G}\) of X after being killed have the same law under \(P_x^{{\widetilde{{\mathcal {G}}}}}\) as the total occupation times of that jump process (after being killed), see for instance (1.97) and (2.80) in [26]. We also denote by \(({\widehat{Z}}_n)_{n\in {\mathbb {N}}}\) the discrete time skeleton of Z,  i.e. the sequence of elements of G visited by the process Z, with the convention that \({\widehat{Z}}_n = \Delta \) for all large enough n if Z gets killed.
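
In particular, the skeleton \({\widehat{Z}}\) is the discrete time Markov chain on \(G\cup \{\Delta \}\) with one-step transition probabilities

$$\begin{aligned} P_x({\widehat{Z}}_1=y)=\frac{\lambda _{x,y}}{\lambda _x}\text { for }y\sim x \quad \text {and} \quad P_x({\widehat{Z}}_1=\Delta )=\frac{\kappa _x}{\lambda _x}, \end{aligned}$$

with \(\lambda _x\) as in (1.1) and \(\Delta \) absorbing.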

2.2 Elements of potential theory on \({\widetilde{{\mathcal {G}}}}\)

Our next goal is to supply workable notions of equilibrium measure and capacity on \({\widetilde{{\mathcal {G}}}}\), for arbitrary closed (and in particular compact) subsets of \({\widetilde{{\mathcal {G}}}}\), as necessary in order to investigate observables like \(\mathrm {cap}(E^{\ge h}(x_0))\) (cf. Theorem 1.1). We first define the Green function of an open set \(U\subset {{\widetilde{{\mathcal {G}}}}}\) by

$$\begin{aligned} g_{U}(x,y)=E_x[\ell _y(T_U)]\text { for all } x,y\in {{\widetilde{{\mathcal {G}}}}}, \end{aligned}$$
(2.5)

where \(E_x\) denotes expectation with respect to \(P_x= P^{{\widetilde{{\mathcal {G}}}}}_x\) and \(T_U=\inf \{t\ge 0:X_t\notin {U}\}\) is the first exit time of U,  with the convention \(\inf \varnothing =\zeta .\) We simply write \(g= g_{{\widetilde{{\mathcal {G}}}}}\) for the usual Green function on \({\widetilde{{\mathcal {G}}}}.\)
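
In particular, choosing \(U={\widetilde{{\mathcal {G}}}}\) in (2.5) yields \(g(x,y)=E_x[\ell _y(\zeta )]\), the expected total local time of X at y when started from x. Moreover, by the m-symmetry of X,

$$\begin{aligned} g(x,y)=g(y,x)\text { for all }x,y\in {{\widetilde{{\mathcal {G}}}}}, \end{aligned}$$

consistently with the role of \(g(\cdot ,\cdot )\) as a covariance function in (1.3).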

We now introduce the notions of equilibrium measure and capacity on \({\widetilde{{\mathcal {G}}}}\) by ‘enhancements’, see Lemma 2.1 below. This will allow us to directly reformulate the equilibrium problem in a discrete setup and to thereby import the respective standard versions of these notions on transient graphs, see (2.16), (2.20) and (2.27) below. In particular, this approach immediately provides several useful identities, e.g. relating exit distributions for the diffusion X with the corresponding equilibrium measure, cf. (2.19) and (2.17).

On the (transient) graph \((G,\lambda , \kappa )\) associated to \({\mathcal {G}}\), for all finite \(A\subset G\) the equilibrium measure and capacity of A are defined by

$$\begin{aligned} e_{A,{\mathcal {G}}}(x){\mathop {=}\limits ^{\mathrm {def.}}}\lambda _xP_{x}({\widetilde{H}}_{A}({\widehat{Z}})=\infty ) 1_{A}(x) \text { for all }x\in {{G}}, \quad \text {and} \quad \mathrm {{ cap}}_{{\mathcal {G}}}(A){\mathop {=}\limits ^{\mathrm {def.}}}\sum _{x\in {A}}e_{A,{\mathcal {G}}}(x),\nonumber \\ \end{aligned}$$
(2.6)

where \({\widetilde{H}}_{A}({\widehat{Z}}) {\mathop {=}\limits ^{\mathrm {def.}}}\inf \{n\ge 1,\ {\widehat{Z}}_n\in {A}\},\) with \(\inf \varnothing =\infty ,\) is the first return time to A for the discrete time random walk \({\widehat{Z}}\) on \({\mathcal {G}}\), cf. below (2.4). The following observation is key.

Lemma 2.1

(Enhancements). For all countable sets \(A\subset {\widetilde{{\mathcal {G}}}}\) without accumulation point in \({\widetilde{{\mathcal {G}}}},\) there exists a unique graph \({\mathcal {G}}^{A}=(G^A,\lambda ^A,\kappa ^A)\) with vertex set \(G^A=A\cup G,\) such that

$$\begin{aligned}&\text {(with a slight abuse of notation)}, {\widetilde{{\mathcal {G}}}}\ \text {is a subset of}\ {\widetilde{{\mathcal {G}}}}^A, \text {the cable system of}\ {\mathcal {G}}^{A}; \end{aligned}$$
(2.7)
$$\begin{aligned}&\text {for all }x\in {{G}^A},\text { the laws of the traces } X^{G^A}=(X_{\tau ^{{G}^{A}}_t})_{t\ge 0}\text { under }P^{{\widetilde{{\mathcal {G}}}}}_x\text { and under }P^{{\widetilde{{\mathcal {G}}}}^{A}}_x\nonumber \\&\text {coincide.} \end{aligned}$$
(2.8)

Proof

We first introduce the weights \(\lambda ^A\) and the killing measure \(\kappa ^A\). For each \(e=\{x_0,x_1 \}\in E\), let \(A\cap I_e=\{z_1(e),\dots , z_{n-1}(e)\}\), where \(n = n(e) \ge 1\) is such that \(n-1=|A\cap I_e|\) and the \(z_{k}(e)\)’s are labeled by order of appearance as one traverses the (open) edge \(I_e\) from, say, \(x_0\) to \(x_1\) (the underlying choice of orientation of e will not affect the definition of \(\lambda ^A, \kappa ^A\) in (2.9) below). For later convenience, we set \(z_0(e)=x_0\) and \(z_n(e)=x_1\), and drop the argument e in the sequel whenever no risk of confusion arises. Similarly, for \(x \in G\), we enumerate \(A\cap I_x=\{z_1(x),\dots , z_{n-1}(x)\}\) (with \(n=n(x)\in {{\mathbb {N}}\cup \{\infty \}}\) such that \(n-1=|A\cap I_{x}|\) if \(|A\cap I_{x}|<\infty ,\) and \(n=\infty \) otherwise) according to increasing distance from x, and set \(z_0(x)=x\). We then define, for \(z, z' \in G^A = G\cup A\),

$$\begin{aligned} \begin{aligned} \lambda _{z,z'}^{A}&={\left\{ \begin{array}{ll} \frac{1}{2\rho _{{\widetilde{{\mathcal {G}}}}}(z,z')},&{} \text {if } \{z,z'\}=\{z_{k-1}(v),z_k(v)\} \text { for some }v \in E\cup G \text { and } k \ge 1,\\ 0,&{}\text {otherwise,} \end{array}\right. }\\ \kappa _z^{A}&={\left\{ \begin{array}{ll} \frac{\kappa _{x}}{1-2\kappa _{x}\rho _{{\widetilde{{\mathcal {G}}}}}(x,z)},&{}\text {if}\ z= z_{n-1}(x)\ \text {for some}\ x \in G\ (\text {with}\ n=n(x)<\infty ), \\ 0,&{}\text {otherwise}. \end{array}\right. } \end{aligned} \end{aligned}$$
(2.9)

Thus, each edge \(e\in E\) is replaced by a linear chain of \(n=n(e)\) edges \(\{ z_{k-1}, z_k \}\), \(1\le k \le n\), with weights \(\lambda _{ z_{k-1}, z_k}^{A}\), and similarly a chain of \(n(x)-1\) edges is attached to each \(x \in G\), with killing \(\kappa _{ z_{n-1}(x)}^{A}\) at its ‘dangling’ end. By (2.9) and (1.1), for all \(e=\{ x_0,x_1\} \in E\) and \(x \in G\),

$$\begin{aligned}&\sum _{k=1}^{n(e)} \rho ^A_{z_{k-1},z_k}= \sum _{k=1}^{n(e)} \rho _{{\widetilde{{\mathcal {G}}}}}(z_{k-1},z_k) =\rho _{{\widetilde{{\mathcal {G}}}}}(x_0,x_1)=\rho _{x_0,x_1}, \nonumber \\&\sum _{k=1}^{n(x)-1} \rho ^A_{z_{k-1},z_k} + \frac{1}{2\kappa _{ z_{n(x)-1}}^{A}} = \sum _{k=1}^{n(x)-1} \rho _{{\widetilde{{\mathcal {G}}}}}(z_{k-1},z_k) + \frac{1}{2\kappa _x}- \rho _{{\widetilde{{\mathcal {G}}}}}(x,z_{n(x)-1}) \nonumber \\&\qquad =\rho _x, \text { if}\ n(x)<\infty , \nonumber \\&\sum _{k=1}^{\infty } \rho ^A_{z_{k-1},z_k} = \sum _{k=1}^{\infty } \rho _{{\widetilde{{\mathcal {G}}}}}(z_{k-1},z_k)=\rho _x, \text { if}\ n(x)=\infty . \end{aligned}$$
(2.10)

Therefore, \({\widetilde{{\mathcal {G}}}}\) can be identified with the set \({\widetilde{{\mathcal {G}}}}^A \setminus I\), where \({\widetilde{{\mathcal {G}}}}^A\) is the cable system associated to \((G^A,\lambda ^A,\kappa ^A)\) and \(I = I_1\cup I_2\cup I_3\), where

$$\begin{aligned} I_1= \bigcup _{e \in E} \bigcup _{ k=1}^{n(e)-1} I_{z_k(e)}, \quad I_2= \bigcup _{x \in G,n(x)<\infty } \bigcup _{ k=1}^{n(x)-2} I_{z_k(x)} \text { and } I_3 =\bigcup _{x \in G,n(x)=\infty } \bigcup _{ k=1}^{\infty } I_{z_k(x)}. \end{aligned}$$

By a similar reasoning as detailed below around (2.31), it then follows that for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) (viewed as a subset of \({\widetilde{{\mathcal {G}}}}^A\)), the law of the trace of X on \({\widetilde{{\mathcal {G}}}}\) under \(P^{{\widetilde{{\mathcal {G}}}}^{A}}_x\) is \(P^{{\widetilde{{\mathcal {G}}}}}_x.\) In view of (2.4), the claim (2.8) then follows. \(\square \)
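
To illustrate the construction in the simplest case (a direct specialization of (2.9) and (2.10)), suppose \(A=\{a\}\) consists of a single point lying on the cable \(I_e\) of an edge \(e=\{x,y\}\in {E}\), at \(\rho _{{\widetilde{{\mathcal {G}}}}}\)-distance \(t\in {(0,\rho _{x,y})}\) from x. Then \(n(e)=2\) and

$$\begin{aligned} \lambda ^A_{x,a}=\frac{1}{2t}, \quad \lambda ^A_{a,y}=\frac{1}{2(\rho _{x,y}-t)} \quad \text {and} \quad \kappa ^A_a=0, \end{aligned}$$

so that the two resistances in series add up to \(\rho ^A_{x,a}+\rho ^A_{a,y}=t+(\rho _{x,y}-t)=\rho _{x,y}\), in accordance with the first line of (2.10).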

By slightly adapting the above arguments, one defines the graph \(({\overline{G}}^M,{\bar{\lambda }}^M,{\bar{\kappa }}^M)\) alluded to at the beginning of Sect. 1, see above (1.1), as follows. Given \({\mathcal {G}}=({\overline{G}},{\bar{\lambda }},{\bar{\kappa }})\), possibly with \({\bar{\kappa }}_x =\infty \) for some \(x \in {\overline{G}}\), let

$$\begin{aligned} M{\mathop {=}\limits ^{\text {def.}}}\{ a : \text { midpoint of}\ I_e\ \text {for some}\ e \in E_{{\bar{\kappa }}}\} \end{aligned}$$
(2.11)

where \( E_{{\bar{\kappa }}}=\big \{\{x,y\}:\,x,y \in {\overline{G}},\,{\bar{\lambda }}_{x,y}>0,{\bar{\kappa }}_x=\infty \text { and }{\bar{\kappa }}_y<\infty \big \}\) and \(I_e\) is an interval isometric to the open interval \((0,1/(2{\bar{\lambda }}_{x,y}))\) glued at 0 to y, with boundary \(\{x,y\}\).

$$\begin{aligned} (G,\lambda ,\kappa ) {\mathop {=}\limits ^{\text {def.}}}({\overline{G}}^M,{\overline{\lambda }}^M,{\overline{\kappa }}^M)\text { with }{\overline{G}}^M=\{ x \in {\overline{G}}: {\bar{\kappa }}_x < \infty \} \cup M\text { and }M\text { as in }(2.11),\nonumber \\ \end{aligned}$$
(2.12)

by treating \(I_e\) for \(e=\{ x,y\} \in E_{{\bar{\kappa }}}\) with \({\bar{\kappa }}_y <\infty \) in the same manner as \(I_y\) in (2.9) (whence \(\lambda _{y,a}={\bar{\lambda }}_{y,a}^M=2{\bar{\lambda }}_{y,x},\) \(\kappa _y={\bar{\kappa }}^M_y=0\) and \(\kappa _a={\bar{\kappa }}_a^M = 2{\bar{\lambda }}_{y,x}\) for \(a\in M\) the midpoint of \(I_{x,y}\)), and keeping the same weights and killing measures for the other vertices. Plainly, \(({\overline{G}}^M,{\overline{\lambda }}^M,{\overline{\kappa }}^M)\) satisfies \({\overline{\kappa }}^M< \infty \). Similarly as below (2.4), it follows from Theorem 6.2.1. in [15] that the law of the trace of X (under \(P_x^{\widetilde{{\mathcal {G}}}}\)) on \(\{x\in {{\overline{G}}}:\,{\bar{\kappa }}_x<\infty \}\) is that of the continuous time Markov chain on \({\overline{G}}\) that jumps from \(x \in {\overline{G}}\) to \(y \in {\overline{G}}\) at rate \({\bar{\lambda }}_{x,y}\) and is killed at rate \({\bar{\kappa }}_x,\) hence justifying our choice of \((G,\lambda ,\kappa )\) as in (2.12) to define the cable system \({\widetilde{{\mathcal {G}}}}.\) Note also that \((G,\lambda ,\kappa )={\mathcal {G}}\) when \({\bar{\kappa }}<\infty \) since \(E_{{\bar{\kappa }}}=\varnothing \) in that case.

The following remark will come in handy in a couple of instances in this article.

Remark 2.2

(Generating any given cable system from a graph without killing) As an application of Lemma 2.1, given \((G,\lambda , \kappa )\) and the corresponding cable system \({\widetilde{{\mathcal {G}}}}\), one can naturally associate \({\widetilde{{\mathcal {G}}}}\) to a triplet \((G', \lambda ', \kappa ')\) with \(\kappa ' \equiv 0\). To do so, one considers, for each \(I_x\) with \(\kappa _x \in (0,\infty )\) a sequence \(z_n(x)\), \(n \ge 0,\) converging to the open end of \(I_x\) (note that such a sequence does not have an accumulation point in \({\widetilde{{\mathcal {G}}}}\)). Then, with \(A= \{ z_n(x): n \ge 0, \, x\in G \text { s.t. }\kappa _x \in (0,\infty )\}\), one defines \(G'=G^A\) and \(\lambda '=\lambda ^A\) as given by Lemma 2.1 (note that \(\kappa ^A \equiv 0\) by (2.9)). By (2.7), one has that \({\widetilde{{\mathcal {G}}}}\subset {\widetilde{{\mathcal {G}}}}^A\) and \({\widetilde{{\mathcal {G}}}}\) is in fact obtained from \({\widetilde{{\mathcal {G}}}}^A\) by removing all (unbounded) cables \(I_x\), \(x\in A\). In particular, combining this observation with the isomorphism [25], which holds on \((G',\lambda ')\), one readily infers that (1.7) holds for \({\widetilde{{\mathcal {G}}}}\).

We now extend the definition of the equilibrium measure from (2.6) to the cable graph setting. When K is a compact subset of \({\widetilde{{\mathcal {G}}}},\) we define its exterior boundary

$$\begin{aligned} {\widehat{\partial }}K=\left\{ x\in { K}:\,P_x\left( X_{L_K}=x,L_K>0\right) >0\right\} , \end{aligned}$$
(2.13)

where \(L_K=\sup \{t>0:X_t\in {K}\}\) is the last exit time of K,  with the convention \(\sup \varnothing =0.\) Note that \( {\widehat{\partial }}K\) is finite since K is bounded and \(I_e\) contains at most two points of \( {\widehat{\partial }}K\) for all \(e\in {E\cup G}.\) Consider now any sets \(K,{\widehat{K}},A\subset {\widetilde{{\mathcal {G}}}}\) such that

$$\begin{aligned} K\ \text {is compact}, {\widehat{K}}\ \text {finite},\ A\ \text {has no accumulation point and }{\widehat{\partial }} K\subset {\widehat{K}}\subset (K\cap G^A).\nonumber \\ \end{aligned}$$
(2.14)

For all \(x,y\in {A},\) by (2.8) as well as (1.56) in [26] (and its straightforward adaptation to infinite transient weighted graphs; this also applies to subsequent references to [26]) applied to the graph \({\mathcal {G}}^{A},\) noting that \(L_{{\widehat{K}}}=L_K\) a.s. and \(\{L_{{\widehat{K}}}>0 , X_{L_{{\widehat{K}}}}=x\}=\{{\overline{L}}_{{\widehat{K}},A} >0 , X^{G^A}_{{\overline{L}}_{{\widehat{K}},A}^-}=x\}\) where \({\overline{L}}_{{\widehat{K}},A}\) is the last exit time of \({\widehat{K}}\) for \(X^{G^A},\) the trace of X on \(G^A,\) and \(X^{G^A}_{{\overline{L}}_{{\widehat{K}},A}^-}\) is the last vertex of \({\widehat{K}}\) visited by \(X^{G^A}\) before time \({\overline{L}}_{{\widehat{K}},A},\)

$$\begin{aligned} P^{{\widetilde{{\mathcal {G}}}}}_y( L_K >0 , X_{L_{K}}=x)=g(y,x)e_{{\widehat{K}},{\mathcal {G}}^{A}}(x). \end{aligned}$$
(2.15)

We now define the equilibrium measure of K in \({\widetilde{{\mathcal {G}}}}\) by

$$\begin{aligned} e_{K,{\widetilde{{\mathcal {G}}}}}(x){\mathop {=}\limits ^{\text {def.}}}e_{ {\widehat{\partial }} K,{\mathcal {G}}^{ {\widehat{\partial }} K}}(x) 1_{\{x\in { {\widehat{\partial }} K}\}}, \end{aligned}$$
(2.16)

with \({\mathcal {G}}^{ {\widehat{\partial }} K}\) as supplied by Lemma 2.1 and the (discrete) equilibrium measure on the right-hand side as defined in (2.6). For \(K,{\widehat{K}}\) and A as in (2.14), we then have that

$$\begin{aligned} e_{{\widehat{K}},{\mathcal {G}}^{A}}(x)=e_{K,{\widetilde{{\mathcal {G}}}}}(x)\text { for all }x\in {A}. \end{aligned}$$
(2.17)

Indeed, (2.17) follows from (2.15) when \(x\in {{\widehat{\partial }}K},\) and both terms of (2.17) are equal to 0 when \(x\in {A\setminus {\widehat{\partial }}K}\) by (2.15) and (2.16). In particular if \(K\subset G,\) by (2.17) with \({\widehat{K}}=K\) and \(A=\varnothing ,\) the definition (2.16) of the equilibrium measure on the cable system coincides with the definition of the equilibrium measure from (2.6). Moreover, (2.17) can be used to obtain a description of the equilibrium measure purely in terms of the diffusion X,  instead of using the equilibrium measure on the discrete graph \({\mathcal {G}}^{{\widehat{\partial }}K}\) as in (2.16). Indeed, denoting by \(B_{\rho }(x,\varepsilon )\) the ball centered at \(x\in {{\widetilde{{\mathcal {G}}}}}\) with radius \(\varepsilon \ge 0\) for the distance \(\rho \) introduced above Sect. 2.1, which is well defined for small enough \(\varepsilon ,\) one has

$$\begin{aligned} e_{K,{\widetilde{{\mathcal {G}}}}}(x)=\lim _{\varepsilon \rightarrow 0}\frac{d_x}{2\varepsilon }P_x(L_K<H_{\partial B_{\rho }(x,\varepsilon )})\text { for all }x\in {{\widehat{\partial }}K}, \end{aligned}$$
(2.18)

where \(d_x\) is the degree of x if \(x\in {G},\) and \(d_x=2\) otherwise. In order to prove (2.18), one uses (2.17) with \(A=\partial B_{\rho }(x,\varepsilon )\cup {\widehat{\partial }}K\) and \({\widehat{K}}=A\cap K,\) and (2.6), noting that \(\lambda _x^A= d_x/(2\varepsilon )\) by (2.9) and that \({\widetilde{H}}_K(X^{G^A})=\infty \) if and only if \(L_K<H_{\partial B_{\rho }(x,\varepsilon )}\) for \(\varepsilon \) small enough. In fact, the equality in (2.18) already holds without taking the limit, for all small enough \(\varepsilon .\) Moreover, we obtain from (2.15) and (2.17) that

$$\begin{aligned} P^{{\widetilde{{\mathcal {G}}}}}_y( L_K >0 , X_{L_{K}}=x)=g(y,x)e_{K,{\widetilde{{\mathcal {G}}}}}(x), \text { for all}\ x,y\in {{\widetilde{{\mathcal {G}}}}}. \end{aligned}$$
(2.19)

The identity (2.19) is reminiscent of the equilibrium measure for the usual Brownian motion (on \({\mathbb {R}}^d\), with suitable killing when \(d=1,2\)), see for instance Proposition 3.3 in [23]. In fact, (2.19) (or (2.18)) could be used instead of (2.17) as defining \(e_{K,{\widetilde{{\mathcal {G}}}}}(\cdot )\).
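For instance, summing (2.19) over \(x\in {{\widehat{\partial }} K}\) (the only points at which \(e_{K,{\widetilde{{\mathcal {G}}}}}\) does not vanish) expresses the probability of visiting K as the potential of its equilibrium measure, in direct analogy with the Brownian setting just mentioned:

$$\begin{aligned} P^{{\widetilde{{\mathcal {G}}}}}_y( L_K >0)=\sum _{x\in {{\widehat{\partial }} K}}g(y,x)e_{K,{\widetilde{{\mathcal {G}}}}}(x) \quad \text { for all }y\in {{\widetilde{{\mathcal {G}}}}}. \end{aligned}$$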

The capacity of a compact set \(K\subset {\widetilde{{\mathcal {G}}}}\) is defined as the total mass of the equilibrium measure,

$$\begin{aligned} \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K){\mathop {=}\limits ^{\mathrm {def.}}}\sum _{x\in { {\widehat{\partial }} K}}e_{K,{\widetilde{{\mathcal {G}}}}}(x). \end{aligned}$$
(2.20)

When there is no risk of ambiguity, we will simply write \(e_K\), \(\mathrm {{ cap}}(K)\) instead of \(e_{K,{\widetilde{{\mathcal {G}}}}}\), \(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K)\).
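To fix ideas, consider the simplest case \(K=\{x\}\) for some \(x\in {G}\): taking \(y=x\) and \(K=\{x\}\) in (2.19), and noting that \(L_{\{x\}}>0\) holds \(P_x\)-a.s. (the diffusion started at x returns to x at arbitrarily small times), one obtains

$$\begin{aligned} 1=P^{{\widetilde{{\mathcal {G}}}}}_x\big ( L_{\{x\}} >0 , X_{L_{\{x\}}}=x\big )=g(x,x)\,e_{\{x\}}(x), \quad \text {that is, }\ \mathrm {{ cap}}(\{x\})=g(x,x)^{-1}. \end{aligned}$$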

Using (2.8), (2.16), and (2.17), we can now extend a variety of useful results on equilibrium measures from the discrete case to \({\widetilde{{\mathcal {G}}}}.\) By (an adaptation of) [26, (1.57)], one easily obtains the following variational characterization of the capacity:

$$\begin{aligned} \begin{aligned} \mathrm {cap}(K)=\Big (\inf _{\mu }\sum _{x,y\in {{\widehat{K}}}}g(x,y)\mu (x)\mu (y)\Big )^{-1}, \end{aligned} \end{aligned}$$
(2.21)

for \(K,{\widehat{K}}\subset {\widetilde{{\mathcal {G}}}}\) as in (2.14) with \(A={\widehat{K}},\) where the infimum is over all probability measures \(\mu \) on \({\widehat{K}},\) see e.g. Proposition 1.9 in [26]. In view of (2.17), when \(K\subset K'\) are two compacts of \({\widetilde{{\mathcal {G}}}},\) using (1.59) in [26], one obtains the ‘sweeping identity’

$$\begin{aligned} P_{e_{K'}}(X_{H_K}=x,H_K<\zeta )=e_K(x)\text { for all }x\in {\widetilde{{\mathcal {G}}}}, \end{aligned}$$
(2.22)

where \(H_K=\inf \{t\ge 0:\,X_t\in {K}\},\) with the convention \(\inf \varnothing =\zeta .\) In particular, summing (2.22) over \(x\in {{\partial K}}\) yields the monotonicity property

$$\begin{aligned} \mathrm {{ cap}}(K)\le \mathrm {{ cap}}(K'), \hbox { for}\ K\subset K'\ \hbox {compacts of}\ {\widetilde{{\mathcal {G}}}}. \end{aligned}$$
(2.23)

We now proceed to extend the notion of capacity to closed (not necessarily bounded) sets with finitely many components, cf. (2.26) below, which will turn out to be helpful in the proof of Lemma 4.6 below. For any measurable function \(f:{\widetilde{{\mathcal {G}}}}\rightarrow {\mathbb {R}}\) and K a compact subset of \({\widetilde{{\mathcal {G}}}},\) the harmonic extension \(\eta ^f_K\) of f on K is defined as

$$\begin{aligned} \eta _K^f(x){\mathop {=}\limits ^{\mathrm {def.}}}\sum _{y\in {\partial K}}P_{x}(X_{{H}_{K}}=y,H_{K}<\zeta )f(y) \quad \text { for all }x\in {{\widetilde{{\mathcal {G}}}}}. \end{aligned}$$
(2.24)

Note that the sum in (2.24) is well defined since for each \(x\in {{\widetilde{{\mathcal {G}}}}}\) the set \(\partial _{x}K{\mathop {=}\limits ^{\mathrm {def.}}}\{y\in {\partial K}:\,P_{x}(X_{{H}_{K}}=y,H_{K}<\zeta )>0\}\) contains at most two points per edge of \({\widetilde{{\mathcal {G}}}}\) intersecting K, and hence is finite. In the sequel, a decreasing sequence of compacts \((K_n)_{n\in {\mathbb {N}}}\) is said to decrease to a compact K if \(K=\bigcap _{n\in {\mathbb {N}}} K_n\). Moreover, in a slight abuse of notation, we say that an increasing sequence of compacts \((K_n)_{n\in {\mathbb {N}}}\) increases to a compact K if K is the closure of \(\bigcup _{n\in {\mathbb {N}}} K_n\) (later on, this notion permits us to assert, for instance, that if \({E}^{\ge h}(x_0)\) is compact, cf. (1.4), the clusters \({E}^{\ge h'}(x_0)\) increase to \({E}^{\ge h}(x_0)\) as \(h' \searrow h\)). The following convergence result for harmonic extensions will be useful.

Lemma 2.3

Let \(f:{\widetilde{{\mathcal {G}}}}\rightarrow {\mathbb {R}}\) be a continuous function and \(K_n\), \(n\in {\mathbb {N}}\), as well as K be compact subsets of \({\widetilde{{\mathcal {G}}}}\) such that \((K_n)_{n\in {\mathbb {N}}}\) increases or decreases to K. Then for all \(x\in {{\widetilde{{\mathcal {G}}}}}\),

$$\begin{aligned} \eta _{K_n}^f(x)\displaystyle \mathop {\longrightarrow }_{n\rightarrow \infty }\eta _{K}^f(x). \end{aligned}$$
(2.25)

Proof

Fix some \(x\in {{\widetilde{{\mathcal {G}}}}}.\) For all \(y\in {\partial _x K},\) let \(A_{n}^y=\{z\in {\partial _x{K_n}}:\,d(z,y)\le d(z,y')\text { for all }y'\in {\partial _x K}\}.\) Then \(\max _{z\in {A_n^y}}d(z,y)\displaystyle \mathop {\longrightarrow }_{n\rightarrow \infty }0\) for all \(y\in {\partial _x K},\) and there exists an integer N such that for all \(n\ge N,\) the set \((A_{n}^y)_{y\in \partial _x K}\) is a partition of \(\partial _x K_n.\) By (2.24), for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) and \(n\ge N\),

$$\begin{aligned} \eta _{K}^f(x)-\eta _{K_n}^f(x)= & {} \sum _{y\in {\partial _x K}}\Big (P_x(X_{H_{K}}=y,H_{K}<\zeta )f(y)\\&-\sum _{z\in {A_{n}^y}}P_x(X_{H_{K_n}}=z,H_{K_n}<\zeta )f(z)\Big ). \end{aligned}$$

By continuity, for any \(\varepsilon >0\) there exists \(N'\ge N\) such that for all \(n\ge N',\) \(y\in {\partial _x K}\) and \(z\in {A_n^y}\) we have \(|f(y)-f(z)|\le \varepsilon .\) Therefore, for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) and \(n\ge N',\)

$$\begin{aligned} |\eta _{K}^f(x)-\eta _{K_n}^f(x)|\le & {} \varepsilon +\sum _{y\in \partial _x K}|f(y)| \cdot \big |P_x(X_{H_{K}}=y,H_{K}<\zeta )\\&-P_x(X_{H_{K_n}}\in {{A_{n}^y}},H_{K_n}<\zeta )\big |. \end{aligned}$$

Since for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) and \(y\in {\partial _x K}\) the absolute value of the difference on the right-hand side is bounded by

$$\begin{aligned}&P_x(X_{H_{K}}=y,X_{H_{K_n}}\notin A_{n}^y,H_{K}<\zeta ,H_{K_n}<\zeta )+P_x(H_{K}<\zeta ,H_{K_n}=\zeta ) \\&\quad +P_x(X_{H_{K}}\ne y,X_{H_{K_n}}\in A_{n}^y,H_{K}<\zeta ,H_{K_n}<\zeta )+P_x(H_{K}=\zeta ,H_{K_n}<\zeta ) \end{aligned}$$

and each of these terms tends to 0 as \(n \rightarrow \infty \), (2.25) follows as f is uniformly bounded on compacts. \(\square \)

An interesting and immediate consequence of Lemma 2.3 and (2.22) is the following: let \(K_n\), \(n\in {\mathbb {N}}\), and K be compacts of \({\widetilde{{\mathcal {G}}}}\) such that \((K_n)_{n\in {\mathbb {N}}}\) increases or decreases to K, and consider the quantity \(\sum _{x\in \partial K}e_K(x)\eta _{K_n}^1(x)\) in case the \(K_n\) are increasing, resp. \(\sum _{x\in \partial K_1}e_{K_1}(x)\eta _{K_n}^1(x)\) in case the \(K_n\) are decreasing (both of which equal \(\text {cap}(K_n)\) by virtue of (2.22)). Taking \(n \rightarrow \infty \) and applying (2.25) with \(f=1\), we obtain that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\mathrm {cap}(K_n)=\mathrm {{ cap}}(K). \end{aligned}$$
(2.26)

Hence, we can extend the definition of the capacity to any closed set \(A\subset {\widetilde{{\mathcal {G}}}}\) by setting

$$\begin{aligned} \mathrm {cap}(A)=\lim \limits _{n\rightarrow \infty }\mathrm {cap}(A\cap K_n), \end{aligned}$$
(2.27)

where \((K_n)_{n\in {\mathbb {N}}}\) is any increasing sequence of compacts of \({\widetilde{{\mathcal {G}}}}\) exhausting \({\widetilde{{\mathcal {G}}}}.\) This limit exists and does not depend on the choice of the sequence \((K_n)_{n\in {\mathbb {N}}}\) by (2.23), and it is consistent with the existing definition of capacity for compacts, cf. (2.20), by means of (2.26).

2.3 Varying killing measure and the cables \(I_x\)

In the sequel, it will repeatedly be useful to compare the diffusion X on \({\widetilde{{\mathcal {G}}}}\) for varying killing measure. In particular, this comprises ‘infinite-volume’ limits, in which all but finitely many \(x\in {\overline{G}}\) initially satisfy \({\bar{\kappa }}_{x}=\infty \), and \({\bar{\kappa }}\) is sequentially reduced, see (4.10) below. Consider the family of graphs \(( {{\mathcal {G}}}_{{\bar{\kappa }}})_{{\bar{\kappa }}}\), where \({{\mathcal {G}}}_{{\bar{\kappa }}}=({\overline{G}}, {\bar{\lambda }}, {\bar{\kappa }})\), for fixed \({\overline{G}}\) and \({\bar{\lambda }}\) and varying killing measure \({\bar{\kappa }} \in [0,\infty ]^{{\overline{G}}}\). Let \({\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}}\) be the cable system associated to \({{\mathcal {G}}}_{{\bar{\kappa }}}\) (cf. below (1.1)). In view of (2.11), (2.12), one can interpret

$$\begin{aligned} {\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}'}\subset {\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}} \quad \text { if } {\bar{\kappa }}' \ge {\bar{\kappa }}, \end{aligned}$$
(2.28)

where \({\bar{\kappa }}' \ge {\bar{\kappa }}\) means \({\bar{\kappa }}'_x \ge {\bar{\kappa }}_x\) for all \(x \in {\overline{G}}.\) We then set, under \(P^{{\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}}}_x,\) \(x\in {{\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}'}}(\subset {\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}}),\)

$$\begin{aligned} X_t^{{\bar{\kappa }}'}= {\left\{ \begin{array}{ll} X_t, \text { if } t< \zeta _{{\bar{\kappa }}'}\\ \Delta , \text { if } t\ge \zeta _{{\bar{\kappa }}'} \end{array}\right. } \text { where } \zeta _{{\bar{\kappa }}'} = \inf \{ t \ge 0 : X_t \notin {\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}'} \}. \end{aligned}$$
(2.29)

By Theorem 4.4.2. in [15], the Dirichlet form associated to \(X_t^{{\bar{\kappa }}'}\) is \({\mathcal {E}}_{{\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}'}},\) and so

$$\begin{aligned} \text {the law of}\ X_t^{{\bar{\kappa }}'} \text {under}\ P^{{\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}}}_x\ \text {is}\ P^{{\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}'}}_x\ \text {for all}\,\, x\in {{\widetilde{{\mathcal {G}}}}_{{\bar{\kappa }}^{'}}}. \end{aligned}$$
(2.30)

We now briefly compare the above setup to existing definitions of the metric graph \({\widetilde{{\mathcal {G}}}}\) and its associated diffusion X, which do not usually involve attaching cables \(I_x\) to the vertices \(x \in G\) (see e.g. Section 5 of [4], Section 2 of [14] or Section 2 of [19]). Upon considering a suitable trace process in the present context, see (2.31) below, these two descriptions are essentially equivalent and in particular, they lead to the same notion of capacity for most sets of interest. Most important to our investigations is the feature that the cables \(I_x\) provide natural embeddings as \(\kappa \) varies, see (2.28)–(2.29) above. This will be useful for approximation purposes, see (4.10) and Lemmas 4.6 and 6.3 below, as well as to derive (\({Law}_h\)) and (Isom) in the case \(\kappa \ne 0\). We define \({\widetilde{{\mathcal {G}}}}^{-}\) as the closed subset of \({\widetilde{{\mathcal {G}}}}\) consisting of the closure of the union of the intervals \(I_e,\) \(e\in {E}\), (or, in other words, the subset of \({\widetilde{{\mathcal {G}}}}\) obtained upon removing the intervals \(I_x\), \(x \in G\)) and denote by \(X^{{\widetilde{{\mathcal {G}}}}^{-}}\) the trace on \({\widetilde{{\mathcal {G}}}}^{-}\) of X. One can prove by Theorem 6.2.1. in [15] that the Dirichlet form on \(L^2_{-}({\widetilde{{\mathcal {G}}}}^{-},m_{|{\widetilde{{\mathcal {G}}}}^{-}})= \{f\in {L^2({\widetilde{{\mathcal {G}}}}^{-},m_{|{\widetilde{{\mathcal {G}}}}^{-}})}:\sum _{x\in {{G}}}\kappa _xf(x)^2<\infty \}\) associated to \(X^{{\widetilde{{\mathcal {G}}}}^{-}}\) is

$$\begin{aligned} {\mathcal {E}}_{{\widetilde{{\mathcal {G}}}}^{-}}(f,g)&{\mathop {=}\limits ^{\text {def.}}}&\frac{1}{2}(f',g')_{m_{|{\widetilde{{\mathcal {G}}}}^{-}}}+\sum _{x\in {G}}\kappa _xf(x)g(x)\nonumber \\&\text { for }f,g\in {D({{\widetilde{{\mathcal {G}}}}^{-}},m_{|{\widetilde{{\mathcal {G}}}}^{-}})\cap L^2_{-}({{\widetilde{{\mathcal {G}}}}^{-}},m_{|{\widetilde{{\mathcal {G}}}}^{-}})}, \end{aligned}$$
(2.31)

where we recall that the space D had been introduced below (2.1). If \(\kappa \equiv 0\) on \({\mathcal {G}}\), the process \(X^{{\widetilde{{\mathcal {G}}}}^{-}}\) thus corresponds to the usual diffusion on the cable system \({\widetilde{{\mathcal {G}}}}^{-}.\) If \(\kappa \ge 0\) on \({\mathcal {G}}\) (i.e. \({\widetilde{{\mathcal {G}}}}={\widetilde{{\mathcal {G}}}}_{\kappa }\)), it follows from Theorems 6.1.1. and A.2.11. in [15] that \(X^{{\widetilde{{\mathcal {G}}}}^{-}}\) has the same law under \(P^{{\widetilde{{\mathcal {G}}}}}_x\) as the diffusion \(X^{{\widetilde{{\mathcal {G}}}}^{-}_0}\) under \(P^{{\widetilde{{\mathcal {G}}}}_{0}}_x\) (where \({\widetilde{{\mathcal {G}}}}_{0}= {\widetilde{{\mathcal {G}}}}_{\kappa \equiv 0}\)) killed at time \( \zeta _{\kappa }^{-}=\inf \{t<\zeta _0:\,\sum _{x\in {{G}}}\ell _x(t)\kappa _x\ge \xi \}\), where \(\xi \) is an independent exponential variable with parameter 1 (with the convention \(\inf \varnothing =\zeta _0\)). The latter is the process studied e.g. in Section 2 of [19]. Moreover, the trace of \(X^{{\widetilde{{\mathcal {G}}}}^{-}}\) (under \(P^{{\widetilde{{\mathcal {G}}}}}_x\)) on G has the same law as Z, hence the local times \((\ell _y(t))_{y\in {{\widetilde{{\mathcal {G}}}}^{-}},t\ge 0}\) have the same law under \(P^{{\widetilde{{\mathcal {G}}}}}_x\) as those of the process \(X^{{\widetilde{{\mathcal {G}}}}_0^{-}}\) (killed at time \(\zeta _{\kappa }^{-}\)) under \(P^{{\widetilde{{\mathcal {G}}}}_0}_x,\) i.e. the local times of the process introduced in [19].

Consequently, for compact \(K\subset {\widetilde{{\mathcal {G}}}}^{-}\) one could have defined a notion \(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}^{-}}(K)\) similarly as in (2.16) and (2.20), but starting from the process \(X^{{\widetilde{{\mathcal {G}}}}^{-}}\) and considering suitable enhancements of \({\widetilde{{\mathcal {G}}}}^{-}\), resulting in \(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}^{-}}(K)= \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K)\) for all \(K\subset {\widetilde{{\mathcal {G}}}}^{-}\). This can be further strengthened when \(\kappa \equiv 0\), as asserted in the following lemma, which records the capacity of the cables \(I_x\) for later purposes.

Lemma 2.4

For all \(x\in {{G}},\) the following dichotomy holds:

$$\begin{aligned} \text { if }\kappa _x>0,\text { then }\mathrm {{ cap}}({\overline{I}}_x)=\infty ,\text { and if }\kappa _x=0,\text { then }\mathrm { cap}({\overline{I}}_x)=\mathrm {cap}(\{x\}). \end{aligned}$$
(2.32)

Moreover, if \(\kappa \equiv 0,\) then for all connected and closed sets \(A\subset {\widetilde{{\mathcal {G}}}}\) such that \(A\cap {\widetilde{{\mathcal {G}}}}^{-}\ne \varnothing ,\) one has \(\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}}(A)=\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}^{-}}(A\cap {\widetilde{{\mathcal {G}}}}^{-}).\)

Proof

We first show (2.32). If \(\kappa _x>0,\) then for all \(t\in {(0,\rho _x)},\) writing \(y_t=x+(\rho _x-t)\cdot I_x\) (see the beginning of Sect. 2 for notation), we see by (2.9) that \(\kappa _{y_t}^{\{{y_t}\}}=\frac{1}{2t}.\) Let \(I_x^t=\{ x+s\cdot I_x : 0\le s \le \rho _x -t\}\). Then by (2.16) \(e_{I_x^t}(y_t)=\lambda _{y_t}^{\{y_t\}}P_{y_t}^{{\mathcal {G}}^{\{{y_t}\}}}({\widetilde{H}}_{\{x,y_t\}}=\infty )=\kappa ^{\{y_t\}}_{y_t} ,\) and so we see that \(\mathrm {cap}(I_x^t)\ge e_{I_x^t}(y_t)=\frac{1}{2t}.\) Hence, by (2.27), we obtain \(\mathrm {cap}({\overline{I}}_x)=\infty \) as \(t \downarrow 0\).

If \(\kappa _x=0,\) so that \(\rho _x=\infty ,\) we use the analogous notation \(I_x^t=\{ x+s\cdot I_x : 0\le s \le t\}\) and \(y_t=x+t\cdot I_x,\) and we have for all \(t\in {(0,\infty )}\) that \(P_{y_t}^{{\mathcal {G}}^{\{{y_t}\}}}({\widetilde{H}}_{I_x^t}=\infty )=0,\) since X behaves like a Brownian motion on \(I_x\) and hence always returns to \(I_x^t\) in finite time. Moreover, \(P_{x}^{{\mathcal {G}}^{\{{y_t}\}}}({\widetilde{H}}_{I_x^t}=\infty )=P_{x}^{{\mathcal {G}}^{\{{y_t}\}}}({\widetilde{H}}_{\{x\}}=\infty ).\) Therefore, by (2.16), we get \(\mathrm { cap}(I_x^t)=e_{I_x^t}(x)+0=e_{\{x\}}(x)=\mathrm {{ cap}}(\{x\}),\) and by (2.27) we obtain that \(\mathrm { cap}({\overline{I}}_x)=\mathrm { cap}(\{x\})\).

Suppose now that \(\kappa \equiv 0,\) and let \(K\subset {\widetilde{{\mathcal {G}}}}\) be a connected and compact set such that \(K\cap {\widetilde{{\mathcal {G}}}}^{-}\ne \varnothing .\) Then, since X cannot be killed via \(I_x\) for any \(x\in {G},\) we have \({\widehat{\partial }} (K\cap {\widetilde{{\mathcal {G}}}}^{-})={\widehat{\partial }}K\) and for all \(x\in {{\widehat{\partial }} K}\)

$$\begin{aligned} e_{K\cap {\widetilde{{\mathcal {G}}}}^{-}}(x)=\lambda _x^{{\widehat{\partial }} K}P^{{\widetilde{{\mathcal {G}}}}^{{\widehat{\partial }} K}}_x({\widetilde{H}}_{K\cap {\widetilde{{\mathcal {G}}}}^{-}}=\infty )=\lambda _x^{{\widehat{\partial }} K}P^{{\widetilde{{\mathcal {G}}}}^{{\widehat{\partial }} K}}_x({\widetilde{H}}_K=\infty )=e_{K}(x), \end{aligned}$$

from which the claim follows for such K, and for arbitrary closed connected sets by means of (2.27). \(\square \)

Remark 2.5

The second part of Lemma 2.4 implies that, when \(\kappa \equiv 0,\) one can consider \({\widetilde{{\mathcal {G}}}}^-\) instead of \({\widetilde{{\mathcal {G}}}}\) and all our results, for instance (Isom) or (\({Law}_h\)) for \(\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}^-}(E^{\ge h}_-(x_0))\) instead of \(\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}}(E^{\ge h}(x_0)),\) hold under the same conditions, where \(E^{\ge h}_-(x_0)=E^{\ge h}(x_0)\cap {\widetilde{{\mathcal {G}}}}^-\) is the connected component of \(x_0\) in \(\{x\in {{\widetilde{{\mathcal {G}}}}^-}:\,\varphi _x\ge h\}.\) Note that this is not true anymore when \(\kappa \not \equiv 0\). Indeed, for \(x\in {G}\) with \(\kappa _x>0\) one has by (2.32) that \(\mathrm{cap}_{\widetilde{{\mathcal {G}}}}({\overline{I}}_x)=\infty ,\) yet \(\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}^{-}}({\overline{I}}_x\cap {\widetilde{{\mathcal {G}}}}^{-})= \mathrm{cap}_{{\mathcal {G}}}(\{x\}) \le \lambda _x < \infty .\) Therefore, one cannot simply replace \(\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}}(E^{\ge h}(x_0))\) by \(\mathrm {cap}_{{\widetilde{{\mathcal {G}}}}^-}(E^{\ge h}_-(x_0))\) in (\({Law}_h\)), and, when considering \({\widetilde{{\mathcal {G}}}}^-\) instead of \({\widetilde{{\mathcal {G}}}},\) one has to change the isomorphism (Isom) to take into account the influence of the trajectories in the random interlacement process entirely included in one of the cables \(I_x,\) \(x\in {G}\) with \(\kappa _x>0,\) possibly hitting the sign clusters; see Remark 3.10,4) for details.

2.4 The Gaussian free field

We now collect a few important properties of the Gaussian free field \((\varphi _x)_{x\in {{\widetilde{{\mathcal {G}}}}}}\) on the cable system \({{\widetilde{{\mathcal {G}}}}}\) defined in (1.3). We first recall its strong spatial Markov property and refer to Section 1 of [27] for details. For any open set \(O\subset {\widetilde{{\mathcal {G}}}},\) we consider the \(\sigma \)-algebra \({\mathcal {A}}_O=\sigma ({\varphi }_x,\,x\in {O}),\) and for any compact \(K\subset {\widetilde{{\mathcal {G}}}}\) we define \({\mathcal {A}}_K^+=\bigcap _{\varepsilon >0}{\mathcal {A}}_{K^\varepsilon }\), where \(K^\varepsilon \) is the open \(\varepsilon \)-ball around K for the distance d. We say that \({\mathcal {K}}\) is a compatible random compact subset of \({\widetilde{{\mathcal {G}}}}\) if \({\mathcal {K}}\) is a compact subset of \({\widetilde{{\mathcal {G}}}}\) with finitely many connected components and \(\{{\mathcal {K}}\subset O\}\in {{\mathcal {A}}_O}\) for any open set \(O\subset {\widetilde{{\mathcal {G}}}}.\) We then define

$$\begin{aligned} {\mathcal {A}}_{{\mathcal {K}}}^+=\big \{A\in {{\mathcal {A}}_{{\widetilde{{\mathcal {G}}}}}}:\,A\cap \{{\mathcal {K}}\subset K\}\in {{\mathcal {A}}_K^+}&\,\,\text {for all }K\subset {\widetilde{{\mathcal {G}}}} \text { which are compact}\\&\text {and the closure of their respective interiors}\big \}. \end{aligned}$$
(2.33)

The Markov property now states that for any compatible random compact \({\mathcal {K}},\)

$$\begin{aligned} \text {conditionally on }{\mathcal {A}}_{{\mathcal {K}}}^+,\ (\varphi _x)_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { is a Gaussian field with mean }\eta _{{\mathcal {K}}}^{\varphi }\text { and covariance }g_{{\mathcal {K}}^c}, \end{aligned}$$
(2.34)

where \(\eta _{{\mathcal {K}}}^{\varphi }\) was defined in (2.24) and \(g_{{\mathcal {K}}^c}\) in (2.5). An application of the Markov property is that, conditionally on \((\varphi _x)_{x\in {{G}}},\) if \(e=\{y,z\}\in {E},\) the law of \((\varphi _x)_{x\in {I_e}}\) is that of a Brownian bridge of length \(\rho _e\) between \(\varphi _y\) and \(\varphi _z\) of a Brownian motion with variance 2 at time 1, and these Brownian bridges are independent as e varies. Similarly, conditionally on \((\varphi _x)_{x\in {{G}}},\) one can describe the law of \((\varphi _x)_{x\in {I_y}}\) as that of a Brownian bridge of length \(\rho _y\) between \(\varphi _y\) and 0 of a Brownian motion with variance 2 at time 1 if \(\kappa _y>0,\) and as that of a Brownian motion starting from \(\varphi _y\) with variance 2 at time 1 if \(\kappa _y=0,\) and all these Brownian bridges and Brownian motions are independent. We refer to Sect. 2 of [9] for a proof of this result on \({\mathbb {Z}}^d,\) \(d\ge 3,\) which can easily be adapted to any transient graph. In particular, we have that

$$\begin{aligned} \begin{array}{l} \text {conditionally on }(\varphi _x)_{x\in {{G}}},\text { the random fields }(\varphi _x)_{x\in {I_e}},\ e\in {E\cup {G}},\text { are} \\ \text {independent, and for all }e\in {E\cup {G}}, \text { the field }(\varphi _{x})_{x\in {I_e}}\text { only depends on }\varphi _{|e}, \end{array} \end{aligned}$$
(2.35)

where \(\varphi _{|e}=(\varphi _x,\varphi _y)\) if \(e=\{x,y\}\in {E}\) and \(\varphi _{|e}=\varphi _x\) if \(e=x\in {{G}}.\) Moreover, using the exact formula for the distribution of the maximum of a Brownian bridge, see e.g. [3], Chapter IV.26, one knows that for all \(e\in {E\cup {G}}\)

$$\begin{aligned} {\mathbb {P}}^G(|\varphi _z|>0\text { for all }z\in {I_e}\,|\,\varphi _{|e})=\big (1- p_e^{{\mathcal {G}}}(\varphi )\big ) 1_{e\in {E}}, \end{aligned}$$
(2.36)

where for all \(e=\{x,y\}\in {E}\) and \(f:{G}\rightarrow {\mathbb {R}},\)

$$\begin{aligned} p_e^{{\mathcal {G}}}(f){\mathop {=}\limits ^{\mathrm {def.}}}p_e^{{\mathcal {G}}}(f,0)={\left\{ \begin{array}{ll} \exp \big (-2\lambda _{x,y}f(x)f(y)\big ),&{}\text { if }f(x)f(y)\ge 0, \\ 1,&{}\text { otherwise.} \end{array}\right. } \end{aligned}$$
(2.37)

A useful notation \( p_e^{{\mathcal {G}}}(f,g)\), which includes (2.37) as the special case \(g=0\), will be introduced later; see (3.12) below.
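To illustrate how (2.36) and (2.37) arise from the quoted bridge formula, here is a minimal sketch for an edge \(e=\{x,y\}\in {E}\), assuming the usual normalization \(\rho _e=1/(2\lambda _{x,y})\) for the length of \(I_e\): for the Brownian bridge described below (2.34), from \(a>0\) to \(b>0\) over \(I_e\), the formula for its minimum gives

$$\begin{aligned} {\mathbb {P}}^G\big (\varphi _z>0\text { for all }z\in {I_e}\,\big |\,\varphi _{|e}=(a,b)\big )=1-\exp \Big (-\frac{ab}{\rho _e}\Big )=1-\exp \big (-2\lambda _{x,y}ab\big ); \end{aligned}$$

the case \(a,b<0\) follows by symmetry, and if \(ab\le 0\) the field vanishes somewhere on \(I_e\) by continuity, which matches \(1-p_e^{{\mathcal {G}}}(\varphi )\) in (2.37). For \(e=x\in {G}\), the bridge towards 0 (when \(\kappa _x>0\)), respectively the recurrence of one-dimensional Brownian motion (when \(\kappa _x=0\)), forces a zero of \(\varphi \) on \(I_x\) almost surely, in accordance with the factor \(1_{e\in {E}}\) in (2.36).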

2.5 Random interlacements

We now briefly introduce random interlacements on the cable system \({\widetilde{{\mathcal {G}}}}.\) We define the set of doubly infinite trajectories \(W_{{\widetilde{{\mathcal {G}}}}}\) as the set of functions \(w:{\mathbb {R}}\rightarrow {\widetilde{{\mathcal {G}}}}\cup {\Delta },\) for which there exist \(-\infty \le \zeta ^-<\zeta ^+\le \infty \) such that \(w_{|(\zeta ^-,\zeta ^+)}\in {C((\zeta ^-,\zeta ^+),{\widetilde{{\mathcal {G}}}})}\) and \(w(t)=\Delta \) for all \(t\notin {(\zeta ^-,\zeta ^+)}.\) For each \(w\in {W_{{\widetilde{{\mathcal {G}}}}}},\) we also define \( p_{{{\widetilde{{\mathcal {G}}}}}}^*(w)= p^*(w)\) as the equivalence class of w modulo time shift; here, w and \(w'\) are equal modulo time shift if there exists \(t_0\in {\mathbb {R}}\) such that \(w(t+t_0)=w(t)\) for all \(t\in {\mathbb {R}},\) and \(W_{{\widetilde{{\mathcal {G}}}}}^*=\{p^*(w):\,w\in {W_{{\widetilde{{\mathcal {G}}}}}}\}.\) Let \({\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}\) be the \(\sigma \)-algebra on \(W_{{\widetilde{{\mathcal {G}}}}}\) generated by the coordinate functions, and \({\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}^*=\{A\subset W_{{\widetilde{{\mathcal {G}}}}}^*:(p^*)^{-1}(A)\in {{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}}\}.\) For each compact K of \({\widetilde{{\mathcal {G}}}},\) we denote by \(W_{K,{\widetilde{{\mathcal {G}}}}}^0\) the set of trajectories \(w\in {W_{{\widetilde{{\mathcal {G}}}}}}\) with \(H_K(w)=0,\) where \(H_K(w)=\inf \{t\in {\mathbb {R}}:\,w(t)\in {K}\},\) with the convention \(\inf \varnothing =\zeta ^+,\) and \(W_{K,{\widetilde{{\mathcal {G}}}}}^*=\{ w^*\in W_{{\widetilde{{\mathcal {G}}}}}^* : (p^*)^{-1}(w^*)\cap W_{K,{\widetilde{{\mathcal {G}}}}}^0\ne \varnothing \}\). For \(w\in {W_{{\widetilde{{\mathcal {G}}}}}},\) we define the forward part of w as \((w(t))_{t\ge 0}\) and the backward part of w as \((w(-t))_{t\ge 0},\) which are both elements of \(W_{{\widetilde{{\mathcal {G}}}}}^+,\) see above (2.1). For \(w^*\in {W_{K,{\widetilde{{\mathcal {G}}}}}^*}\) we define the forward (resp. backward) part of \(w^*\) on hitting K as the forward (resp. backward) part of the unique trajectory in \((p^*)^{-1}(\{w^*\})\cap W_{K,{\widetilde{{\mathcal {G}}}}}^0.\)

The intensity measure underlying random interlacements on \({\widetilde{{\mathcal {G}}}}\) is defined as follows. For a set \(A\in {{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}}\) we write \(A^{\pm }{\mathop {=}\limits ^{\text {def.}}}\{(w(\pm t))_{t\ge 0}:\,w\in {A}\}\), whence \(A^+,A^-\in {{\mathcal {W}}^+_{{\widetilde{{\mathcal {G}}}}}}.\) The set of all \(A\in {{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}}\) with \(A\subset {{W}_{K,{\widetilde{{\mathcal {G}}}}}^0},\) such that A is equal to the set of \(w\in {W_{K,{\widetilde{{\mathcal {G}}}}}^0}\) whose forward part is in \(A^+\) and whose backward part is in \(A^-,\) is denoted by \({\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}}^0.\) We then observe that \({\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}}^0\) and \(\{A\in {{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}}:\,W_{K,{\widetilde{{\mathcal {G}}}}}^0\cap A=\varnothing \}\) generate \({\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}.\) Recalling the definition of the last exit time \(L_K\) and the exterior boundary \({\widehat{\partial }}K\) from (2.13) and below, for all \(x\in { {\widehat{\partial }} K}\) let

$$\begin{aligned} P^{ K,{\widetilde{{\mathcal {G}}}}}_x \equiv P^{K}_x \text { be the law of }(X_{t+L_K})_{t\ge 0} \text { under }P_x(\cdot \,|\, L_K >0 , X_{L_{K}}=x). \end{aligned}$$
(2.38)

We now define a measure \(Q_{K,{\widetilde{{\mathcal {G}}}}}\) on \({\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}},\) whose restriction to \({\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}}^0\) is given by

$$\begin{aligned} Q_{K,{\widetilde{{\mathcal {G}}}}}(A)=\sum _{x\in { {\widehat{\partial }} K}}e_K(x)P^{{\widetilde{{\mathcal {G}}}}}_x(X \in {A^+})P^{K,{\widetilde{{\mathcal {G}}}}}_x(X\in {A^-}), \quad A \in {\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}}^0, \end{aligned}$$
(2.39)

and such that \(Q_{K,{\widetilde{{\mathcal {G}}}}}(A)=0\) for all \(A\in {{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}}\) with \(A\cap W_{K,{\widetilde{{\mathcal {G}}}}}^0=\varnothing .\) It is essentially folklore by now that there exists a unique measure \(\nu _{{\widetilde{{\mathcal {G}}}}}\) on \(W^*_{{\widetilde{{\mathcal {G}}}}},\) such that for all compacts \(K\subset {\widetilde{{\mathcal {G}}}},\)

$$\begin{aligned} \nu _{{\widetilde{{\mathcal {G}}}}}(A^*)=Q_{K,{\widetilde{{\mathcal {G}}}}}\big ((p^*)^{-1}(A^*)\big )\text { for all }A^*\in {{\mathcal {W}}_{{\widetilde{{\mathcal {G}}}}}^*},\, A^*\subset W_{K,{\widetilde{{\mathcal {G}}}}}^*. \end{aligned}$$
(2.40)

We will not give a proof of the existence of the measure \(\nu _{{\widetilde{{\mathcal {G}}}}};\) instead, we refer to [28] for a proof of the existence of such a measure on the discrete graph \({\mathcal {G}}\) when \(\kappa \equiv 0,\) and to [19] for the setting of the cable system associated to \({{\mathbb {Z}}}^d,\) \(d\ge 3.\) Indeed, one can easily adapt these proofs to obtain a measure \(\nu _{{\widetilde{{\mathcal {G}}}}}\) such that (2.40) holds for all compacts K of \({\widetilde{{\mathcal {G}}}}\) with \( {\widehat{\partial }} K\subset {G},\) also in the case \(\kappa \not \equiv 0\) (see also Remark 2.2). Considering now the case of arbitrary compact subsets K of \({\widetilde{{\mathcal {G}}}},\) one can thus construct a measure \(\nu _{{\widetilde{{\mathcal {G}}}}^{ {\widehat{\partial }} K}}\) such that (2.40) holds for \(\nu _{{\widetilde{{\mathcal {G}}}}^{ {\widehat{\partial }} K}}\) and K. Using the fact that \(P^{{\widetilde{{\mathcal {G}}}}}_x\) is the law of the trace of X on \({\widetilde{{\mathcal {G}}}}\) under \(P_x^{{\widetilde{{\mathcal {G}}}}^{ {\widehat{\partial }} K}},\) one easily deduces that \(\nu _{{\widetilde{{\mathcal {G}}}}}\) is the ‘trace on \({\widetilde{{\mathcal {G}}}}\)’ of \(\nu _{{\widetilde{{\mathcal {G}}}}^{ {\widehat{\partial }} K}},\) so that (2.40) also holds for \(\nu _{{\widetilde{{\mathcal {G}}}}}\) and K. Alternatively, a direct proof of (2.40) on the cable system is also presented in Theorem 3.2 of [22].

The random interlacement process \({\omega }\) is a Poisson point process on \(W^*_{{\widetilde{{\mathcal {G}}}}}\times (0,\infty )\) under the probability \({\mathbb {P}}^{I}_{{\widetilde{{\mathcal {G}}}}}\) with intensity measure \(\nu _{{\widetilde{{\mathcal {G}}}}}\otimes \lambda ,\) where \(\lambda \) is the Lebesgue measure on \((0,\infty ).\) When \(\kappa \not \equiv 0,\) the forward and backward parts of the trajectories can be killed before blowing up; in our setup this is realized by either part of the trajectory exiting \({\widetilde{{\mathcal {G}}}}\) to \(\Delta \) via \(I_x\) for some \(x\in {G}\) with \(\kappa _x>0\). We also denote by \({\omega }_u\) the point process which consists of the trajectories in \(\omega \) with label less than u, by \((\ell _{x,u})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) the continuous field of local times relative to m on \({\widetilde{{\mathcal {G}}}}\) of \(\omega _u\), and by \({{{\mathcal {I}}}}^u=\{x\in {{\widetilde{{\mathcal {G}}}}}:\ \ell _{x,u}>0\}\) the interlacement set at level u. The set \({{{\mathcal {I}}}}^u\) is characterized by the following identity: for any measurable set \(A\subset {{\widetilde{{\mathcal {G}}}}},\)

$$\begin{aligned} {\mathbb {P}}^I_{{\widetilde{{\mathcal {G}}}}}({{{\mathcal {I}}}}^u\cap A=\varnothing )=\exp \left( -u\, \mathrm {{ cap}}({\overline{A}})\right) \end{aligned}$$
(2.41)

(note that the set \({{{\mathcal {I}}}}^u\) is open, so it intersects A if and only if it intersects \({\overline{A}}\)). The trace \({\widehat{\omega }}_u\) of \(\omega _u\) on G has the same law under \({\mathbb {P}}^I_{{\widetilde{{\mathcal {G}}}}}\) as the usual discrete random interlacement process, see [28] in the case \(\kappa \equiv 0.\) If \(\kappa \not \equiv 0,\) a trajectory in \({\widehat{\omega }}_u\) can start or end at a fixed point \(x\in {G},\) and in this case we say that this trajectory is killed at x. We also define \({{{\mathcal {I}}}}_E^u\subset E\cup G\) to be the set of edges in E crossed by at least one trajectory in \({\widehat{\omega }}_u,\) together with the set of vertices at which a trajectory in \({\widehat{\omega }}_u\) is killed. In the case \(\lambda _{x,y}=\frac{T}{T+1}\) for all \(\{x,y\}\in {E}\) and \(\kappa _x=\frac{\text {deg}(x)}{T+1}\) for all \(x\in {G},\) \(T>0,\) the discrete random interlacement process \({\widehat{\omega }}_u\) corresponds to the model of ‘finitary random interlacements’ studied in [5]. In view of Remark 2.2, this actually fits within the framework of [28] upon suitable enhancement of \({\mathcal {G}}\).
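For instance, (2.41) applied to \(A=\{x\},\) \(x\in {G},\) gives \({\mathbb {P}}^I_{{\widetilde{{\mathcal {G}}}}}(x\notin {{{\mathcal {I}}}}^u)=\exp (-u\,\mathrm {{ cap}}(\{x\}))=\exp (-u/g(x,x))\) (recall the computation of \(\mathrm {{ cap}}(\{x\})\) below (2.20)), while combining (2.41) with (2.32) shows that, for every \(x\in {G}\) with \(\kappa _x>0\) and every \(u>0,\) the interlacement set enters the cable \(I_x\) almost surely:

$$\begin{aligned} {\mathbb {P}}^I_{{\widetilde{{\mathcal {G}}}}}({{{\mathcal {I}}}}^u\cap {\overline{I}}_x=\varnothing )=\exp \big (-u\, \mathrm {{ cap}}({\overline{I}}_x)\big )=0. \end{aligned}$$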

The law of \(\omega _u\) can also be described as follows: for any compact K of \({\widetilde{{\mathcal {G}}}}\), the forward trajectories in \(\omega _u\) hitting K form a Poisson point process with intensity \(uP_{e_K}^{{\widetilde{{\mathcal {G}}}}},\) which can be constructed from a Poisson point process of discrete trajectories with intensity \(uP_{e_K}^{{\widetilde{{\mathcal {G}}}}^{ {\widehat{\partial }} K}}({\widehat{Z}} \in \cdot )\) by adding Brownian excursions on the edges. Hence, \(\omega _u\) can be constructed from \({\widehat{\omega }}_u\) by adding independent Brownian excursions on the edges, see [19] for details. In particular,

$$\begin{aligned} \begin{array}{l} \text {conditionally on }{\widehat{\omega }}_u,\text { the random variables }(\ell _{x,u})_{x\in {I_e}},e\in {E\cup {G}}, \text { are} \\ \text {independent, and for all }e\in {E\cup {G}},(\ell _{x,u})_{x\in {I_e}}\text { only depends on }{\widehat{\omega }}_{u,e}, \end{array} \end{aligned}$$
(2.42)

where \({\widehat{\omega }}_{u,e}\) is the set of trajectories in \({\widehat{\omega }}_u\) hitting e. When there is no risk of ambiguity, we abbreviate \({\mathbb {P}}^I={\mathbb {P}}^I_{{\widetilde{{\mathcal {G}}}}},\) and \(\nu =\nu _{{\widetilde{{\mathcal {G}}}}}.\)

3 Main results

In this section, we state our main results, Theorems 3.2, 3.7 and 3.9, and explore their consequences. Put together, these results in particular imply Theorem 1.1, see the end of this section for the short proof, but in fact they provide more detailed results. Theorem 3.2, together with its Corollary 3.3, roughly corresponds to 1) in Theorem 1.1. Theorem 3.7 investigates the properties of the cluster capacity observable. In particular, it establishes that, when bounded almost surely, the cluster \(E^{\ge h}(x_0)\) has a capacity described by (\({Law}_h\)). Theorem 3.9 then broadly speaking relates (\({Law}_h\))\(_{h\ge 0}\) and the identity (Isom) between random interlacements and the Gaussian free field on \({\widetilde{{\mathcal {G}}}}\). In doing so, it also supplies new instances of (Isom), see Remark 3.10,1), along with a version on the discrete base graph \({\mathcal {G}}\), see (3.16). Finally, some further interesting consequences are put together in Corollaries 3.11 and 3.12.

We now lay the ground for our first main result, Theorem 3.2. Its true meaning becomes transparent upon defining, next to \({\widetilde{h}}_*\) (see (1.6)), two further critical parameters. As will soon become clear, the conditions \(\kappa \equiv 0\) or (Cap) appearing in Theorem 3.2 will cause several of these parameters to coincide, leading to streamlined results. We first introduce

$$\begin{aligned} {\widetilde{h}}_*^{\mathrm{com}} =\inf \big \{ h \in {\mathbb {R}}: \text { for all}\ x_0\in {{\widetilde{{\mathcal {G}}}}}, \, {\mathbb {P}}^G(E^{\ge h}(x_0) \text { is non-compact})=0 \big \} \end{aligned}$$
(3.1)

(recall that compactness is with respect to the graph distance d). Every compact set is (d-)bounded, so we always have \({\widetilde{h}}_*^{\mathrm{com}} \ge {\widetilde{h}}_*.\) The third critical parameter, involving the capacity of clusters in \(E^{\ge h},\) is

$$\begin{aligned} {\widetilde{h}}_*^{\mathrm{cap}} =\inf \big \{ h \in {\mathbb {R}}: \text { for all}\ x_0\in {{\widetilde{{\mathcal {G}}}}}, \, {\mathbb {P}}^G(\mathrm {cap}(E^{\ge h}(x_0))=\infty )=0 \big \}, \end{aligned}$$
(3.2)

see (2.27) for the definition of capacity in this context. Note that (3.2) is well-defined due to the monotonicity of \(\text {cap}(\cdot )\), see (2.23), which extends to arbitrary closed sets on account of (2.27). Every compact set has finite capacity, so \({\widetilde{h}}_*^{\mathrm{com}} \ge {\widetilde{h}}_*^{\mathrm{cap}} ,\) and we therefore have that

$$\begin{aligned} \text {on any transient graph, }{\widetilde{h}}_*^{\mathrm{com}} \ge {\widetilde{h}}_*^{\mathrm{cap}} \text { and }{\widetilde{h}}_*^{\mathrm{com}} \ge {\widetilde{h}}_*. \end{aligned}$$
(3.3)

On any graph such that \(\kappa \equiv 0\) or (Cap) is verified, the situation becomes simpler, due to the following basic result. Its proof can be omitted at first reading.

Lemma 3.1

\((h\in {\mathbb {R}}\), \(x_0\in {{\widetilde{{\mathcal {G}}}}})\).

\({\mathbb {P}}^G\)-a.s., if either \(h\ge 0,\) \(\mathrm {{ cap}}(E^{\ge h}(x_0))<\infty ,\) or \(\kappa \equiv 0\) on G, then \(E^{\ge h}(x_0)\) is compact if and only if it is bounded.

Proof

Observe that by definition, a connected set K is compact if and only if it is a closed and bounded subset of \({\widetilde{{\mathcal {G}}}}\) such that \(I_x\cap K\) is a connected compact subset of \(I_x\) for all \(x\in {{G}}.\) Therefore, if the level set \(E^{\ge h}(x_0)\) of \(x_0\) is compact, then it is bounded. Hence, we only have to show the reverse implication, and we assume from now on that \(E^{\ge h}(x_0)\) is bounded. First note that, as explained below (2.34), if \(\kappa _x=0,\) since \(\varphi \) on \(I_x\) conditioned on \(\varphi _x\) has the same law as a Brownian motion starting in \(\varphi _x\) with variance 2 at time 1,  we have that \(I_x\cap E^{\ge h}(x_0)\) is \({\mathbb {P}}^G\)-a.s. a connected compact of \(I_x.\) Therefore \(E^{\ge h}(x_0)\) is a.s. compact if \(\kappa \equiv 0.\) If \(\kappa _x>0\) we have by (2.32) applied to the graph \({\mathcal {G}}^{\{x+t\cdot I_x\}}\) (cf. Lemma 2.1 for notation) that \(\mathrm {{ cap}}(I_x^t)=\infty \), where \(I_x^t=\{ x+s\cdot I_x : t \le s < \rho _x\}\). If \(\mathrm {{ cap}}(E^{\ge h}(x_0))<\infty ,\) by (2.23) we obtain \( I_x^t\not \subset E^{\ge h}(x_0),\) that is \(I_x\cap E^{\ge h}(x_0)\) is a connected compact of \(I_x,\) and so \(E^{\ge h}(x_0)\) is compact. Finally, if \(\kappa _x>0\) and \(h\ge 0,\) as explained below (2.34), since \(\varphi \) on \(I_x\) conditioned on \(\varphi _x\) has the same law as a Brownian bridge of finite length between \(\varphi _x\) and 0 of a Brownian motion with variance 2 at time 1,  \(I_x\cap E^{\ge h}(x_0)\) is a.s. a connected compact of \(I_x,\) and so \(E^{\ge h}(x_0)\) is a.s. compact. \(\square \)

Lemma 3.1 has two immediate consequences. On the one hand, in view of (1.6) and (3.1), and by (3.3), Lemma 3.1 (applied in the case \(\kappa \equiv 0\)) yields that

$$\begin{aligned} \text { if }{\mathcal {G}}\text { is a transient graph with }\kappa \equiv 0\text {, then }{\widetilde{h}}_*^{\mathrm{com}} = {\widetilde{h}}_*\ge {\widetilde{h}}_*^{\mathrm{cap}} . \end{aligned}$$
(3.4)

We refer to Remark 8.2,3) in [22] for an example of a graph for which the inequality in (3.4) is strict. On the other hand, if condition (Cap) is fulfilled, then every connected closed set with finite capacity is bounded, and so \({\widetilde{h}}_*^{\mathrm{cap}} \ge {\widetilde{h}}_*\) by (1.6) and (3.2). But by Lemma 3.1, for all \(x_0\in {{\widetilde{{\mathcal {G}}}}},\) if \(\mathrm {{ cap}}(E^{\ge h}(x_0))<\infty ,\) then \(E^{\ge h}(x_0)\) is also compact, and so \({\widetilde{h}}_*^{\mathrm{cap}} \ge {\widetilde{h}}_*^{\mathrm{com}} .\) Thus, we obtain that

$$\begin{aligned} \text { if }{\mathcal {G}}\text { is a transient graph verifying (Cap) then, }{\widetilde{h}}_*^{\mathrm{com}} ={\widetilde{h}}_*^{\mathrm{cap}} \ge {\widetilde{h}}_*. \end{aligned}$$
(3.5)

In particular, if \({\mathcal {G}}\) satisfies (Cap) and \(\kappa \equiv 0,\) then from (3.5) and (3.4) it is clear that the three critical parameters \({\widetilde{h}}_*^{\mathrm{com}} ,\) \({\widetilde{h}}_*\) and \({\widetilde{h}}_*^{\mathrm{cap}} \) coincide; hence, in this case, in order to prove that they are equal to zero, it is sufficient to show that one of them is non-negative while another one is non-positive. Our first main result provides such a statement, without any further assumption on \({\mathcal {G}}\) (recall our setup from above (1.1)).

Theorem 3.2

Let \({\mathcal {G}}\) be a transient weighted graph. For each \(x_0\in {\widetilde{{\mathcal {G}}}}\) and \(h\ge 0,\) the random variable \(\mathrm {{ cap}}({E}^{\ge h}(x_0))\) is \({\mathbb {P}}^G\)-a.s. finite, and for each \(h<0\) the level set \(E^{\ge h}(x_0)\) of \(x_0\) is non-compact with positive probability.

The proof of Theorem 3.2 appears over the next two sections. Note that the fact that \(E^{\ge h}(x_0)\) is non-compact with positive probability for all \(h<0\) could alternatively be obtained from the Markov property (2.34) similarly as in [6], see also the Appendix of [1] for details, or from the isomorphism (1.7), see (1.8) and above. Here, we will obtain it as a direct consequence of our methods. In particular, Theorem 3.2 implies \({\widetilde{h}}_*^{\mathrm{cap}} \le 0\) and \({\widetilde{h}}_*^{\mathrm{com}} \ge 0.\) Thus, together with Lemma 3.1, (3.4) and (3.5), Theorem 3.2 has the following immediate

Corollary 3.3

Let \({\mathcal {G}}\) be a transient weighted graph.

  1. (1)

    If \({\mathcal {G}}\) satisfies (Cap), then (Sign) holds and \({\widetilde{h}}_*^{\mathrm{com}} ={\widetilde{h}}_*^{\mathrm{cap}} =0 \, ( \ge {\widetilde{h}}_*).\)

  2. (2)

    If \(\kappa \equiv 0,\) then for each \(h<0,\) the level set \(E^{\ge h}(x_0)\) of \(x_0\) is unbounded with positive probability; hence \(({\widetilde{h}}_*^{\mathrm{com}} =)\, {\widetilde{h}}_*\ge 0.\)

Therefore, if \({\mathcal {G}}\) satisfies (Cap) and \(\kappa \equiv 0,\) then \({\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{com}} ={\widetilde{h}}_*^{\mathrm{cap}} =0.\)

Notice that Theorem 3.2 and Corollary 3.3 immediately imply item 1) of Theorem 1.1. We now comment on Theorem 3.2 and Corollary 3.3, and first elaborate on the condition (Cap), which is central in obtaining \({\widetilde{h}}_*=0\). Further comments on Theorem 3.2 and Corollary 3.3 are collected below in Remark 3.5.

The following lemma supplies a large class of graphs for which (Cap) holds. In particular, by means of this lemma, Corollary 3.3 generalizes all previously known results about \({\widetilde{h}}_*=0\) (see below Theorem 1.1 for a list). We highlight item 2) of Lemma 3.4, comprising the condition (3.7), which is sufficient for (Cap) but stated only in terms of the Green function on \({\mathcal {G}},\) and can thus be easier to verify. It implies for instance that any vertex-transitive graph verifies (Cap). Part 3) below accounts for the trees studied in [1] and shows that Proposition 2.2 in [1] can be seen as a direct consequence of Corollary 3.3,1); see also the discussion following Theorem 1.1.

Lemma 3.4

(Criteria for (Cap)).

  1. (1)

    Condition (Cap) holds true if and only if

    $$\begin{aligned} \mathrm {{ cap}}(A)=\infty \text { for all infinite and connected sets }A\subset {G}. \end{aligned}$$
    (3.6)
  2. (2)

    If

    $$\begin{aligned}&\begin{array}{l} \text {there exists }g_0<\infty \text { such that }\{x\in {{G}}:\,g(x,x)>g_0\} \\ \text {has no unbounded connected component} \end{array} \end{aligned}$$
    (3.7)

    then condition (Cap) is verified for \({\mathcal {G}}.\) In particular, if \({\mathcal {G}}\) is vertex-transitive, (Cap) holds.

  3. (3)

    Let \({\mathbb {T}}\) be a transient tree with zero killing measure and unit weights and denote by \(R_x^{\infty }\) the effective resistance between x and \(\infty \) in \({\mathbb {T}}_x\), the sub-tree of \({\mathbb {T}}\) consisting only of x and its descendants (relative to a base point \(x_0 \in {\mathbb {T}}\)). If \(\{x\in {{\mathbb {T}}}:\,R_x^{\infty }>A\}\) only has bounded connected components for some \(A>0,\) then (Cap) is verified.

Lemma 3.4 is proved in Appendix A. We proceed to make further comments around Theorem 3.2, Corollary 3.3 and Lemma 3.4.

Remark 3.5

  1. (1)

    In order to develop an intuition for the results of Theorem 3.2 and Corollary 3.3, consider the case where \({\mathcal {G}}\) is a finite transient graph. Recall that for \(x\in {G}\) such that \(\kappa _x>0\) (such x necessarily exists when \({\mathcal {G}}\) is finite and transient) the field \(\varphi \) on \(I_x,\) conditionally on \(\varphi _x,\) has the same law as a Brownian bridge of length \(\rho _x<\infty \) between \(\varphi _x\) and 0 of a Brownian motion with variance 2 at time 1,  see the discussion below (2.34). Therefore, for all \(h<0,\) we have that \({\mathbb {P}}^G(\varphi _y\ge h\text { for all }y\in {I_x})>0,\) and since \(I_x\) is non-compact, we obtain \({\widetilde{h}}_*^{\mathrm{com}} \ge 0.\) Now similarly if \(h\ge 0,\) then \({\mathbb {P}}^G(\varphi _y\ge h\text { for all }y\in {I_x})=0\) for all \(x\in {G},\) and since G is finite, it follows that \({\widetilde{h}}_*^{\mathrm{com}} \le 0.\) Since (Cap) is trivially verified on finite graphs, we thus have by (3.5) that \({\widetilde{h}}_*^{\mathrm{com}} ={\widetilde{h}}_*^{\mathrm{cap}} =0.\) Note, however, that trivially \({\widetilde{h}}_*=-\infty \) since there are no unbounded sets on finite graphs, and so the inequality in (3.5) can be strict. In fact, the situation \(0= {\widetilde{h}}_*^{\mathrm{com}} ={\widetilde{h}}_*^{\mathrm{cap}} > {\widetilde{h}}_* \ge -\infty \) is emblematic of graphs with sub-exponential volume growth and (say) a uniform killing measure, and one typically has both strict inequalities \(0> {\widetilde{h}}_* > -\infty \) when \({\mathcal {G}}\) is infinite, see Corollary 5.2 and Remark 5.7,2) in [22].

  2. (2)

    We refer to Proposition 8.1 in [22] for an example of a graph for which (Cap) is not satisfied, and \({\widetilde{h}}_*^{\mathrm{cap}} \le 0\) (necessarily by Theorem 3.2) yet \({\widetilde{h}}_*^{\mathrm{com}} ={\widetilde{h}}_*=\infty \) – in particular, this is a further example where the critical parameters do not coincide.

  3. (3)

    We now construct an example of a graph not fulfilling (Cap), but for which we still have \({\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{cap}} ={\widetilde{h}}_*^{\mathrm{com}} =0\) (and therefore, as will turn out, (Sign) holds, cf. Corollary 3.12 below, or the first equivalence in Theorem 1.1,2)). Consider a graph \({\mathcal {G}}\) with \(\kappa \equiv 0\) except possibly at \(x \in G\), where \(\kappa _x \in [0,\infty )\). Let \(A \subset I_x \) be an infinite sequence converging towards the open end of \(I_x\), and, interpreting A also as the set of its values, consider the graph \({\mathcal {G}}^A\) given by Lemma 2.1. If \({\mathcal {G}}{\mathop {=}\limits ^{\text {def.}}}{\mathbb {Z}}^3\) with unit weights and \(\kappa \equiv 0\), then noting that \((\widetilde{{\mathcal {G}}}^A)\setminus \bigcup _{x\in {A}}I_x\) can be identified with \(\widetilde{{\mathcal {G}}}\) (see (2.7) and below (2.10)), it readily follows that \({\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{cap}} ={\widetilde{h}}_*^{\mathrm{com}} =0\) on \(\widetilde{{\mathcal {G}}}^A\). This chain of equalities follows (with a moment’s thought) from the corresponding one on \(\widetilde{{\mathcal {G}}}\), where it holds by Corollary 3.3, for instance using Lemma 3.4,2) to argue that (Cap) holds on \(\widetilde{{\mathcal {G}}}\). But for \(A_n\) finite with \(A_n \nearrow A\), the equilibrium measure of \(A_n\) is supported on at most two points, whence \(\text {cap}(A)< \infty \) by (2.27). In particular, \({\mathcal {G}}^A\) does not fulfill (Cap).

    The previous example remains instructive if one considers instead \({\mathcal {G}}\) a finite graph and \(\kappa _x>0\), in order to appreciate the difference between \({\widetilde{h}}_*\) and \({\widetilde{h}}_*^{\mathrm{com}}\). With A as above, one has \({\widetilde{h}}_*^{\mathrm{com}}({{\mathcal {G}}}), {\widetilde{h}}_*^{\mathrm{com}}({{\mathcal {G}}}^A)\ge 0 \) by Theorem 3.2. On the other hand, \({\widetilde{h}}_*({{\mathcal {G}}})=-\infty \) since \({\mathcal {G}}\) is finite, but \({\widetilde{h}}_*({{\mathcal {G}}}^A)\ge 0\) by Corollary 3.3,2) since \(\kappa ^A\equiv 0\). This shows that \({\widetilde{h}}_*\) really depends on the choice of base graph \({\mathcal {G}}\) and not only on \(\widetilde{{\mathcal {G}}}.\) We refer to Proposition 7.1 in [22] for a less trivial example of a graph verifying (Sign) but not (Cap).

  4. (4)

    An interesting direct consequence of Corollary 3.3 concerns \({{\mathcal {L}}}_{\alpha }\), the discrete (Poissonian) loop soup at intensity parameter \(\alpha > 0\) (we refer to [19] for precise definitions).

Corollary 3.6

Let \({\mathcal {G}}\) be a transient weighted graph such that (Cap) holds. Then \({\mathcal {L}}_{1/2}\) a.s. consists of finite clusters only.

Proof

If \({\mathcal {G}}\) satisfies (Cap), then by Corollary 3.3,1) and the symmetry and continuity of \(\varphi \), the set \(\{x\in {{\widetilde{{\mathcal {G}}}}}:|\varphi _x|>0\}\) only contains compact connected components. Hence, by Theorem 1 in [19], the set on which the field of local times of the loop soup \(\widetilde{{\mathcal {L}}}_{1/2}\) on \({\widetilde{{\mathcal {G}}}}\) is positive only contains compact connected components. A fortiori, \({\mathcal {L}}_{1/2}\) only consists of finite clusters. \(\square \)

  5. (5)

    The condition (3.7) is strictly stronger than the condition (Cap). Indeed, consider \({\mathcal {G}}\) a rooted \((d+1)\)-regular tree, with weights \(1/(n+1)\) for each edge between a vertex at generation n and one of its children at generation \(n+1,\) and zero killing measure. Then \(g(x,x)\ge n+1\) for each x in generation n, and so (3.7) does not hold. On the other hand, for each infinite connected subset K of the tree having at most one vertex per generation, denoting by \(K_n\subset K\) the subset of all points in K having generation at most n, one sees that for \(x\in {K}\) at generation k and all \(n \ge k\), the equilibrium measure of \(K_n\) at x is at least \(c(k+1)^{-1}\) for some constant \(c=c(d)>0,\) and so \(\mathrm {cap}(K)= \infty \) on account of (2.27). Since any infinite connected set A contains such K, (Cap) follows using Lemma 3.4,1) and (2.23). All in all, \({\mathcal {G}}\) verifies (Cap) but not (3.7).

Next, we investigate the random variable \(\text {cap}\big ({E}^{\ge h}(x_0)\big ), \text { for }x_0 \in \widetilde{{\mathcal {G}}}, \, h \in {\mathbb {R}}\) (see (2.27) for the definition of \(\text {cap}(\cdot )\) in this context), which will play a central role throughout the remainder of this article.

Theorem 3.7

Let \({\mathcal {G}}\) be a transient weighted graph. For all \(x_0\in {\widetilde{{\mathcal {G}}}}\) and \(h\ge 0,\) if \(E^{\ge h}(x_0)\) is \({\mathbb {P}}^G\)-a.s. bounded, then the random variable \({\mathrm{cap}}\big ({E}^{\ge h}(x_0)\big )\) has moment generating function given by (\({Law}_h\)) and density given by

$$\begin{aligned} \rho _h(t)=\frac{1}{2\pi t\sqrt{g(x_0,x_0)(t-g(x_0,x_0)^{-1})}}\exp \left( -\frac{h^2t}{2}\right) 1_{t\ge g(x_0,x_0)^{-1}}. \end{aligned}$$
(3.8)

Furthermore, assuming only that \({\mathcal {G}}\) satisfies (Cap), one has for each \(h\ge 0\) and \(x_0 \in \widetilde{{\mathcal {G}}}\) that

$$\begin{aligned}&(\mathrm {Law}_{h}) \text { holds, and} \end{aligned}$$
(3.9)
$$\begin{aligned}&{\mathrm{cap}}\big ({E}^{\ge -h}(x_0)\big )1_{{\mathrm{cap}}({E}^{\ge -h}(x_0))\in {(0,\infty )}}\text { has the same law as }{\mathrm{cap}}\big ({E}^{\ge h}(x_0)\big ) 1_{\varphi _{x_0}{\ge h}}. \end{aligned}$$
(3.10)

In particular,

$$\begin{aligned} {\mathbb {P}}^G\big ({\mathrm{cap}}\big ({E}^{\ge -h}(x_0)\big )=\infty \big )={\mathbb {P}}^G(\varphi _{x_0}\in {(-h,h)}). \end{aligned}$$
(3.11)
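Note that the density in (3.8) has total mass \({\mathbb {P}}^G(\varphi _{x_0}\ge h)\) rather than 1, in line with the fact that \(E^{\ge h}(x_0)=\varnothing \), whence \({\mathrm{cap}}\big ({E}^{\ge h}(x_0)\big )=0\), on the event \(\{\varphi _{x_0}< h\}\) (cf. the indicator in (3.10)). For instance, for \(h=0\), abbreviating \(g=g(x_0,x_0)\) and substituting \(gt-1=v^2\), one finds

$$\begin{aligned} \int _{g^{-1}}^{\infty }\frac{\mathrm {d}t}{2\pi t\sqrt{g(t-g^{-1})}}=\int _{0}^{\infty }\frac{\mathrm {d}v}{\pi (1+v^2)}=\frac{1}{2}={\mathbb {P}}^G(\varphi _{x_0}\ge 0), \end{aligned}$$

and a similar (slightly longer) computation applies for \(h>0\).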

Remark 3.8

  1. (1)

    In case \(\kappa \equiv 0\) one can replace \({\widetilde{{\mathcal {G}}}}\) in the statements of Theorems 3.2 and 3.7 by \({\widetilde{{\mathcal {G}}}}^{-}\), which corresponds to removing the cables \(I_x,\) \(x\in {G},\) from \({\widetilde{{\mathcal {G}}}},\) see above (2.31) for notation, but not when \(\kappa \not \equiv 0,\) see Remark 2.5.

  2. (2)

    When \({\mathcal {G}}\) is a finite graph, one can deduce (3.11) directly from Corollary 1, (ii) in [21] with constant boundary condition \(h\ge 0,\) since the statement that the random pseudo-metric between \(x_0\) and the boundary of \({\mathcal {G}}\) introduced therein equals 0 is equivalent to the statement that \(E^{\ge -h}(x_0)\) is non-compact, or equivalently has infinite capacity. The statement (3.11) then follows by using the reflection principle and the fact that the effective resistance between \(x_0\) and the boundary of \({\mathcal {G}}\) is equal to \(g(x_0,x_0).\) When \({\mathcal {G}}={\mathbb {Z}}^d,\) \(d\ge 3,\) (3.11) is equivalent to the statement in Theorem 3 of [7].

The proof of Theorem 3.7 (along with that of Theorem 3.2) is given in the next two sections. Our starting point for both proofs is the observation (see Proposition 4.2 below) that, if true, the isomorphism (Isom) entails a great deal of information about the observables \(\text {cap}\big ({E}^{\ge h}(x_0)\big ),\) \(h \in {\mathbb {R}}\). We use this observation on suitable finite-volume approximations of the free field on \({\mathcal {G}}\), which our setup naturally allows for (essentially obtained by iteratively reducing \(\kappa \) starting from \(\kappa =\infty \) outside a finite set). This is possible because (Isom) can be shown to hold without further assumptions on finite graphs. The condition (Cap) then provides a very efficient criterion in order to avoid losing too much information when passing to the limit (in particular, one retains (\({Law}_h\))), thus yielding (3.9)–(3.11). In a sense, the first part of Theorem 3.2 describes the information that survives in the limit without any further assumptions on \({\mathcal {G}}\).

As (\({Law}_h\))\(_{h \ge 0}\) is essentially derived from (Isom) on finite-volume approximations of \(\widetilde{{\mathcal {G}}}\), one naturally wonders how the validity of (\({Law}_h\))\(_{h \ge 0}\) compares to that of (Isom) on \(\widetilde{{\mathcal {G}}}\) itself. This is the object of our next main result, Theorem 3.9 below; see in particular (3.14). Addressing this question will require proving that the full strength of (Isom) can be passed to the limit (which is rather more involved than what is required for the proof of Theorem 3.7), thereby obtaining an isomorphism on \(\widetilde{{\mathcal {G}}}\) under suitable assumptions (namely (Sign) or (\(\mathrm {Law}_{0}\))).

In order to state Theorem 3.9, we introduce a variation (Isom’) of the identity (Isom), which will sometimes be more convenient to work with. The two are in fact equivalent, see (3.14) and Corollary 6.1 below. The appeal of (Isom’) is that it makes certain symmetries more apparent (see for instance Lemma 4.3). It will also naturally imply a certain discrete isomorphism on the base graph \({\mathcal {G}}\), see (3.16) below, interesting in its own right.

The identity (Isom’) involves additional randomness. We henceforth assume that, on a suitable extension \({\widetilde{{\mathbb {P}}}}_{{\widetilde{{\mathcal {G}}}}}\) of \({\mathbb {P}}^G_{{\widetilde{{\mathcal {G}}}}}\otimes {\mathbb {P}}^{I}_{{\widetilde{{\mathcal {G}}}}}\) (which we simply denote by \({\widetilde{{\mathbb {P}}}}\) when there is no risk of ambiguity) there exists for each \(u>0\) an additional process \((\sigma _x^u)_{x\in {{\widetilde{{\mathcal {G}}}}}}\in {\{-1,1\}^{{\widetilde{{\mathcal {G}}}}}},\) such that, conditionally on \((|\varphi _x|)_{x\in {{\widetilde{{\mathcal {G}}}}}}\) and \(\omega _u,\) \(\sigma ^u\) is constant on each of the connected components of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\ 2\ell _{x,u}+\varphi _x^2>0\},\) \(\sigma ^u_x=1\) for all \(x\in {{{{\mathcal {I}}}}^u},\) and the values of \(\sigma ^u\) on each other cluster of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\ 2\ell _{x,u}+\varphi _x^2>0\}\) are independent and uniformly distributed. For x such that \(2\ell _{x,u}+\varphi _x^2=0,\) the value of \(\sigma _x^u\) will not play any role in what follows, and one can fix it arbitrarily (e.g. to have the value \(+1\)). Recalling the definition of \({\mathcal {C}}_u\) from below (Isom), it is clear that the clusters of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\ 2\ell _{x,u}+\varphi _x^2>0\}\) are the union of the clusters of the interior of \({\mathcal {C}}_u\) and the clusters of \(\{x\in {{\widetilde{{\mathcal {G}}}}:|\varphi _x|>0}\}\cap ({\mathcal {C}}_u)^c,\) and so one can equivalently define \(\sigma ^u\) as follows: \(\sigma ^u_x=1\) for all \(x\in {{\mathcal {C}}_u},\) \(\sigma ^u\) is constant on each of the clusters of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\ |\varphi _x|>0\}\cap ({\mathcal {C}}_u)^c,\) and its values on each cluster are independent and uniformly distributed. We will investigate the validity of the relation

$$\begin{aligned} \text {for every }u>0,\ \big (\sigma ^u_x\sqrt{2\ell _{x,u}+\varphi _x^2}\,\big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { has the same law under }{\widetilde{{\mathbb {P}}}}\text { as }\big (\varphi _x+\sqrt{2u}\big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { under }{\mathbb {P}}^G. \end{aligned}$$

(Isom’)

It is then an easy matter to see that (Isom) and (Isom’) are equivalent, see Lemma 6.1 below. Let \( p_e^{{\mathcal {G}}}:{\mathbb {R}}^{{G}}\times [0,\infty )^{{G}}\rightarrow [0,1]\) for \(e = \{x,y\} \in {E}\), and similarly \( p_x^{u,{\mathcal {G}}}\), \(x\in G\), be defined by

$$\begin{aligned}&p_e (f,g) \equiv p_e^{{\mathcal {G}}}(f,g)=\exp \Big (-\lambda _{x,y}\big (f(x)f(y)+\sqrt{(f(x)^2+2g(x))(f(y)^2+2g(y))}\big )\Big ), \end{aligned}$$
(3.12)
$$\begin{aligned}&p_x(f,g) \equiv p_x^{u,{\mathcal {G}}}(f,g)=\exp \Big (-\kappa _x\sqrt{2u(f(x)^2+2g(x))}\Big ). \end{aligned}$$
(3.13)
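To illustrate (3.12) and (3.13), consider for instance an edge \(e=\{x,y\}\in {E}\) with \(g(x)=g(y)=0\): since \(\sqrt{f(x)^2f(y)^2}=|f(x)||f(y)|,\) one then has

$$\begin{aligned} p_e(f,g)=\exp \big (-\lambda _{x,y}(f(x)f(y)+|f(x)||f(y)|)\big ), \end{aligned}$$

which equals \(\exp (-2\lambda _{x,y}f(x)f(y))\) if \(f(x)f(y)>0\) and 1 if \(f(x)f(y)\le 0\); similarly, \(p_x(f,g)=\exp (-\kappa _x|f(x)|\sqrt{2u})\) whenever \(g(x)=0.\) In particular, in the definition of \(\widehat{{\mathcal {E}}}_u\) in Theorem 3.9 below, an edge \(e=\{x,y\}\) not contained in \({{{\mathcal {I}}}}_E^u\) with \(\ell _{x,u}=\ell _{y,u}=0\) and \(\varphi _x\varphi _y\le 0\) is never added to \(\widehat{{\mathcal {E}}}_u.\)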

Our last main result is the following theorem, which is proved in Sect. 6.

Theorem 3.9

Let \({\mathcal {G}}\) be a transient weighted graph. Then

$$\begin{aligned} (\mathrm {Law}_0)\Longleftrightarrow (\mathrm {Law}_{h})_{h>0} \Longleftrightarrow (Isom) \Longleftrightarrow (Isom'). \end{aligned}$$
(3.14)

Moreover, defining for any \(u>0\) on a suitable extension \({\widehat{{\mathbb {P}}}}\) of \({\mathbb {P}}^G\otimes {\mathbb {P}}^I\) a random set \(\widehat{{\mathcal {E}}}_u\subset E\cup G\) such that, conditionally on \((\varphi _x)_{x\in {G}}\) and \({\widehat{\omega }}_u,\) the set \(\widehat{{\mathcal {E}}}_u\) contains each edge and vertex that is contained in \({{{\mathcal {I}}}}_E^u\) (see below (2.41) for notation), and it contains each additional edge and vertex \(e\in {E\cup G}\) conditionally independently with probability \(1-p_e(\varphi ,\ell _{.,u}),\) the following holds: If any of the conditions in (3.14) is fulfilled, with \({\mathcal {E}}_u{\mathop {=}\limits ^{\text {def.}}}\{e\in {E\cup {G}}:\,2\ell _{x,u}+\varphi _x^2>0\text { for all }x\in {I_e}\}\),

$$\begin{aligned} \widehat{{\mathcal {E}}}_u\ \text {has the same law under}\ {\widehat{{\mathbb {P}}}}\ \text {as}\ {\mathcal {E}}_u\ \text {under}\ {\widetilde{{\mathbb {P}}}}. \end{aligned}$$
(3.15)

In particular, if one defines (under \({\widehat{{\mathbb {P}}}}\)) a process \(({\widehat{\sigma }}_x^u)_{x\in {{G}}}\in \{-1,1\}^{G},\) such that, conditionally on \((\varphi _x)_{x\in {G}},\) \({\widehat{\omega }}_u\) and \(\widehat{{\mathcal {E}}}_u,\)

  • The process \({\widehat{\sigma }}^u\) is constant on each of the clusters (of edges) induced by \(\widehat{{\mathcal {E}}}_u\cap E,\)

  • \({\widehat{\sigma }}_x^u=1\) for all \(x\in {({{{\mathcal {I}}}}^u\cup \widehat{{\mathcal {E}}}_u)\cap G},\) and

  • The values of \({\widehat{\sigma }}^u\) on all other clusters are independent and uniformly distributed,

then

$$\begin{aligned} \big ({\widehat{\sigma }}_x^u\sqrt{2\ell _{x,u}+\varphi _x^2}\big )_{x\in {{G}}}\text { has the same law under }{\widehat{{\mathbb {P}}}} \text { as }\big (\varphi _x+\sqrt{2u}\big )_{x\in {{G}}}\text { under }{\mathbb {P}}^G.\nonumber \\ \end{aligned}$$
(3.16)
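In particular, since \(({\widehat{\sigma }}_x^u)^2=1,\) taking squares on both sides of (3.16) shows that, under any of the conditions in (3.14), \((2\ell _{x,u}+\varphi _x^2)_{x\in {G}}\) under \({\widehat{{\mathbb {P}}}}\) has the same law as \(\big ((\varphi _x+\sqrt{2u})^2\big )_{x\in {G}}\) under \({\mathbb {P}}^G\), in the spirit of the isomorphism (1.7); the additional content of (3.16) thus lies in the description of the signs \({\widehat{\sigma }}^u.\)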

Remark 3.10

  1. (1)

    The conclusions of Theorem 3.7 in combination with (3.14) yield the validity of (Isom) assuming either (Sign) or (Cap) only.

  2. (2)

    The discrete isomorphism (3.16) bears similarities to the coupling derived in Theorem 1.bis of [19] (see also (4.8) below) in the context of loop soups, as well as to the coupling derived in Theorem 8 of [20] in the context of Markov jump processes. Notice that by construction, see the definition of \(\widehat{{\mathcal {E}}}_u\) and (3.12), (3.13), the coupling \({\widehat{{\mathbb {P}}}}\) yielding \(({\widehat{\sigma }}^u_{x})_{x\in G}\) only requires information on \({\mathcal {G}}\), i.e., the reference to \({\widetilde{{\mathcal {G}}}}\) can be completely bypassed.

  3. (3)

    If \({\mathbf {h}}\) is a harmonic function on \({\widetilde{{\mathcal {G}}}},\) one can define the notion of \({\mathbf {h}}\)-transform of random interlacements, and an isomorphism between the \({\mathbf {h}}\)-transform of random interlacements and the Gaussian free field on \({\widetilde{{\mathcal {G}}}}\) similar to (Isom) holds, under the same conditions, see Theorem 6.5 in [22] for details.

  4. (4)

    One can also deduce from Theorem 3.9 another isomorphism on \({\widetilde{{\mathcal {G}}}}^{-},\) see Sect. 2.3. Let \({\mathcal {E}}_u^{-}\subset {\widetilde{{\mathcal {G}}}}^{-}\) be a random set such that, conditionally on \((\varphi _x)_{x\in {{\widetilde{{\mathcal {G}}}}^{-}}}\) and \(\omega _u^{{\widetilde{{\mathcal {G}}}}^{-}},\) the trace of the random interlacement process \(\omega _u\) on \({\widetilde{{\mathcal {G}}}}^{-},\) the set \({\mathcal {E}}_u^{-}\) contains \({{{\mathcal {I}}}}^u\cap {\widetilde{{\mathcal {G}}}}^{-}\) and each additional vertex \(x\in {G}\) conditionally independently with probability \(1-p_x^{u,{\mathcal {G}}}(\varphi ,\ell _{\cdot ,u})\) (or equivalently \(1-p_x^{u,{\mathcal {G}}}(\varphi ,0)\)). Let also \({\mathcal {C}}_u^{-}\) be the closure of the union of the connected components of the sign clusters \(\{x\in {{\widetilde{{\mathcal {G}}}}^{-}}:\,|\varphi _x|>0\}\) intersecting \({\mathcal {E}}_u^{-}.\) Then the isomorphism obtained by replacing \({\widetilde{{\mathcal {G}}}}\) by \({\widetilde{{\mathcal {G}}}}^{-}\) and \({\mathcal {C}}_u\) by \({\mathcal {C}}_u^{-}\) in (Isom) is also equivalent to any of the conditions in (3.14). In particular, if \(\kappa \equiv 0,\) then \({\mathcal {C}}_u^{-}={\mathcal {C}}_u\cap {\widetilde{{\mathcal {G}}}}^-,\) and so the isomorphism (Isom) (or also (\({Law}_h\)) in view of Lemma 2.4) can be equivalently stated on \({\widetilde{{\mathcal {G}}}}\) or \({\widetilde{{\mathcal {G}}}}^{-}.\)

  5. (5)

    The conclusion (3.10) can a posteriori be strengthened. Indeed, knowing that (Isom’) holds (which follows from (3.9) and (3.14)), one easily shows that the compact clusters in \(E^{\ge h}\) and \(E^{\ge -h}\) have the same law, for all \(h >0\), see Lemma 4.3 below. In particular, under (Sign), the clusters of \(E^{\ge h}\) have the same law as the compact clusters of \(E^{\ge -h},\) and so for all \(x_0\in {{\widetilde{{\mathcal {G}}}}}\)

    $$\begin{aligned} {\mathrm{cap}}\big ({E}^{\ge -h}(x_0)\big )1_{{E}^{\ge -h}(x_0)\text { is compact},\varphi _{x_0}\ge -h}\text { has the same law as }{\mathrm{cap}}\big ({E}^{\ge h}(x_0)\big ) 1_{\varphi _{x_0}{\ge h}}, \end{aligned}$$
    (3.17)

    whose law is described by (\({Law}_h\)) in view of Theorem 3.7. Contrary to (3.10), the conclusion (3.17) is however not sufficient to entirely describe the law of our variable of interest \(\text {cap}({E}^{\ge -h}(x_0)).\) But if condition (Cap) holds, then on account of Lemma 3.1, \({E}^{\ge -h}(x_0)\) is compact if and only if \(\text {cap}({E}^{\ge -h}(x_0))<\infty ,\) and so (3.17) is then equivalent to (3.10).

    Similarly, with regard to (3.11), using Lemma 4.3 (which applies under (Sign) by means of Theorems 3.7 and 3.9), one finds that, under (Sign), for all \(h \ge 0\),

    $$\begin{aligned} \begin{aligned} {\mathbb {P}}^G({E}^{\ge -h}(x_0) \text { is compact})&= {\mathbb {P}}^G(\varphi _0 \le -h)+ {\mathbb {P}}^G( \emptyset \ne {E}^{\ge -h}(x_0) \text { is compact})\\&= {\mathbb {P}}^G(\varphi _0 \le -h)+ {\mathbb {P}}^G( \emptyset \ne {E}^{\ge h}(x_0) \text { is compact})\qquad \\&= {\mathbb {P}}^G(\varphi _0 \le -h)+{\mathbb {P}}^G(\varphi _0 \ge h), \end{aligned} \end{aligned}$$
    (3.18)

    using (Sign) and Lemma 3.1 in the last step. In particular, one recovers  (3.11) from (3.18) in case (Cap) holds. We further refer to Remark 5.3,2) regarding the symmetry of clusters in \(E^{\ge h}\) and \(E^{\ge -h}\) contained in a given compact set \(K \subset {\widetilde{{\mathcal {G}}}}\), which does not require (Isom’) to hold.

  6. (6)

    Let us explain how to explicitly construct the process \(\sigma ^u\) appearing in (Isom’) on \({\widetilde{{\mathcal {G}}}}\). Let \((x_n)_{n\in {\mathbb {N}}}\) be a dense sequence in \({\widetilde{{\mathcal {G}}}}\) and \((\sigma '_n)_{n\in {\mathbb {N}}}\in {\{-1,1\}^{\mathbb {N}}}\) be a sequence of independent and uniformly distributed random variables under \({\widetilde{{\mathbb {P}}}}.\) Let m(x) be the smallest \(n\in {\mathbb {N}}\) such that \(x_n\) and x are in the same cluster of \(\{y\in {{\widetilde{{\mathcal {G}}}}}:\ 2\ell _{y,u}+\varphi _y^2>0\};\) since \((x_n)_{n\in {\mathbb {N}}}\) is dense and \(y\mapsto 2\ell _{y,u}+\varphi _y^2\) is continuous, we have that \(m(x)<\infty \) as soon as \(2\ell _{x,u}+\varphi _x^2>0.\) We then define \(\sigma ^u_x=\sigma '_{{m(x)}}\) if \(\varphi _x^2>0\) and \(x\notin {{\mathcal {C}}_u},\) and \(\sigma ^u_x=1\) otherwise, which has the desired properties. As an aside, note that in the isomorphism (4.6) between loop soups and the Gaussian free field, one could also construct the law of the signs \(\sigma \) explicitly by a similar procedure.

Let us now give several interesting consequences of Theorem 3.9, as well as of the usual isomorphism (1.7). By continuity of the Gaussian free field, as already noted in and below (5.3) of [8], one can easily deduce from (1.7) that

$$\begin{aligned} \begin{aligned}&\text {there exists a coupling between}\ {{{\mathcal {I}}}}^u\ \text {and}\ \varphi \ \text {such that a.s. each connected component}\\&\text {of }{{{\mathcal {I}}}}^u\text { is either included in }\{x\in {{\widetilde{{\mathcal {G}}}}}:\,\varphi _x>-\sqrt{2u}\}\text { or in }\{x\in {{\widetilde{{\mathcal {G}}}}}:\,\varphi _x<-\sqrt{2u}\}. \end{aligned} \end{aligned}$$
(3.19)

Moreover, if \({\mathbf {h}}_{\text {kill}}<1,\) see (1.2), then each forward trajectory of the random interlacement process has positive probability of never being killed, and so \({{{\mathcal {I}}}}^u\) is unbounded with positive probability for all \(u>0.\) Hence, we obtain that for all \(u>0\) either \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\,\varphi _x>-\sqrt{2u}\}\) or \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\,\varphi _x<-\sqrt{2u}\}\) is unbounded with positive probability, and by symmetry of the Gaussian free field, it follows that (1.8) holds.

Note that this improves the result from Corollary 3.3, ii). However, the proof of (1.8) relies on the isomorphism (1.7) between random interlacements and the Gaussian free field on infinite graphs, whereas the proof of Corollary 3.3, ii) only relies on this isomorphism on finite graphs, or equivalently the second Ray-Knight theorem (see Theorem 2 in [20]), or alternatively on an argument based on the Markov property for the Gaussian free field from [6], as explained below Theorem 3.2.

The advantage of the isomorphism (Isom) is that when it holds, or equivalently (\(\mathrm {Law}_{0}\)) by Theorem 3.9, one can directly improve (3.19) to prove that

$$\begin{aligned} \text {there exists a coupling between}\ {{{\mathcal {I}}}}^u\ \text {and}\ \varphi \ \text {such that a.s.}\ {{{\mathcal {I}}}}^u\subset \{x\in {{\widetilde{{\mathcal {G}}}}}:\,\varphi _x>-\sqrt{2u}\}. \end{aligned}$$
(3.20)

In particular, by symmetry of the Gaussian free field, we obtain that there exists a coupling between \({{{\mathcal {V}}}}^u\) and \(\varphi \) such that \(E^{\ge \sqrt{2u}}\subset {{{\mathcal {V}}}}^u,\) where \({{{\mathcal {V}}}}^u=({{{\mathcal {I}}}}^u)^c\) is the vacant set of random interlacements, thus generalizing Theorem 3 in [19] from \({\mathbb {Z}}^d\) to any graph satisfying (\(\mathrm {Law}_{0}\)), or simply (Cap) by (3.9). We refer to [1, 27] and [8] for other applications of couplings similar to (3.20). Another interesting consequence of Theorem 3.9 is the following result concerning the value of \({\widetilde{h}}_*.\)

Corollary 3.11

Let \({\mathcal {G}}\) be a transient weighted graph satisfying (\(\mathrm {Law}_{0}\)). Then either \({\mathbb {P}}^G\)-a.s. the sign clusters of the Gaussian free field on \({\widetilde{{\mathcal {G}}}}\) only contain compact connected components, or \(E^{\ge h}\) contains for each \(h\in {\mathbb {R}}\) at least one unbounded connected component with \({\mathbb {P}}^G\)-positive probability. In particular, if (\(\mathrm {Law}_{0}\)) holds and \({\mathbf {h}}_{\text {kill}}<1,\) then by (1.8), \({\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{com}} \in {\{0,\infty \}}.\)

The proof of Corollary 3.11 appears at the end of Sect. 6. We refer to [22] for an example of a graph satisfying \({\mathbf {h}}_{\text {kill}}<1,\) but for which \({\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{com}} =\infty .\) Note however that we still have \({\widetilde{h}}_*^{\mathrm{cap}} \le 0\) by Theorem 3.2. In view of Corollary 3.11, an interesting open question is then whether a transient graph with \({\widetilde{h}}_*\in {(0,\infty )},\) or \({\widetilde{h}}_*^{\mathrm{com}} \in {(0,\infty )},\) exists or not. Another interesting consequence of Corollary 3.11 is that if \({\widetilde{h}}_*=0,\) then the level sets of the Gaussian free field do not percolate at the critical point \(h=0,\) as implied by the following:

Corollary 3.12

If \({\mathcal {G}}\) is a transient graph such that \({\widetilde{h}}_*\le 0,\) then \(E^{\ge 0}\) contains only bounded connected components.

We refer to the end of Sect. 6 for the proof of Corollary 3.12. We conclude this section with the short

Proof of Theorem 1.1

Theorem 1.1,1) follows from the first conclusion of Theorem 3.2 and Corollary 3.3,i). The first equivalence in Theorem 1.1,2) is a consequence of Corollary 3.12 (the reverse implication being immediate, see (1.6)). The implication (Sign)\(\Longrightarrow \) (\(\mathrm {Law}_{0}\)) is a consequence of the first conclusion of Theorem 3.7, and the remaining equivalences follow from Corollary 3.12 and (3.14) in Theorem 3.9. Finally, Theorem 1.1,3) is implied by Corollary 3.11. \(\square \)

4 Some preparation

In this section, we prepare the ground for the proofs of Theorems 3.2 and 3.7. Their proofs, given in the next section, combine three main ingredients, corresponding to Proposition 4.2, Lemma 4.4 and Lemma 4.6 below. They also rely on a symmetry property implied by (Isom’), stated in Lemma 4.3, which is of independent interest. These results will also be useful in Sect. 6 in the course of proving Theorem 3.9, albeit in a different manner.

Our starting point, Proposition 4.2 below, contains the key observation that (\({Law}_h\))\(_{h\ge 0}\) follows from the identity (Isom’), if the latter is assumed to hold. Lemma 4.4 implies a version of the isomorphism (Isom’), valid on finite graphs (this result is in fact a consequence of the isomorphism theorems between loop soups and the Gaussian free field from [19], see also (4.6) below; the proof of Lemma 4.4 is given in “Appendix B”). Importantly, Lemma 4.4 allows Proposition 4.2 to apply automatically in a finite setup. Finally, Lemma 4.6 supplies a useful approximation scheme for \(\varphi \) based on (2.28), see (4.10) below, which entails the important limits (4.16), (4.17) from Corollary 4.7. With these results at hand, the proofs of Theorems 3.2 and 3.7 quickly follow. They appear in the next section.

Unless specified otherwise, we tacitly assume that \({\mathcal {G}}\) is a transient weighted graph (see above (1.1) for our setup). We begin with the following technical lemma.

Lemma 4.1

For each \(x_0\in {{\widetilde{{\mathcal {G}}}}}\) and \(h\in {\mathbb {R}},\) defining \(E^{>h}=\{y\in {{\widetilde{{\mathcal {G}}}}}:\varphi _y>h\}\) and \(E^{>h}(x_0)=\{y\in {{\widetilde{{\mathcal {G}}}}}:\,y\leftrightarrow x_0\text { in }E^{>h}\},\) and denoting by \(\overline{E^{>h}(x_0)}\) the closure of \(E^{>h}(x_0),\) one has

$$\begin{aligned} \overline{E^{>h}(x_0)}=E^{\ge h}(x_0)\ {\mathbb {P}}^G\text {-a.s.} \end{aligned}$$

Proof

Since \(E^{\ge h}(x_0)\) is closed, it is clear that \(\overline{E^{>h}(x_0)}\subset E^{\ge h}(x_0).\) Let us now fix some compact \(K\subset {\widetilde{{\mathcal {G}}}},\) let \(E^{>h}_K(x_0)=\{y\in {{\widetilde{{\mathcal {G}}}}}:\,y\leftrightarrow x_0\text { in }E^{>h}\cap K\},\) and let \({\mathcal {K}}\) be the set containing \(\overline{E^{>h}_K(x_0)}\) as well as each \(x\in {G}\) such that \(\overline{I_{\{x,y\}}}\cap \overline{E^{>h}_K(x_0)}\ne \varnothing \) for some \(y\sim x.\) In order to apply the Markov property (2.34) to the random compact \({\mathcal {K}},\) we first need to show that it is compatible. Let us thus fix some open set O, and define \(O'\) as the set obtained from O by removing \(\overline{I_{\{x,y\}}}\) for all \(x\sim y\) such that \(I_{\{x,y\}}\cap O\ne \varnothing \) and \(x \notin O\) or \(y\notin O\). One then sees that \({\mathcal {K}}\subset O\) if and only if \(\overline{E^{>h}_K(x_0)}\subset O'.\) Moreover, \(\overline{E^{>h}_K(x_0)}\subset O'\) if and only if for every connected path \(\pi \) from \(x_0\) to \(y\in {\partial O'},\) with \(\pi \) closed in \(x_0\) and open in y, there exists \(z\in {\pi }\) with \(\varphi _z\le h.\) Therefore, the event \(\{\overline{E^{>h}_K(x_0)}\subset O'\}\) is measurable with respect to \({\mathcal {A}}_{O'}\subset {\mathcal {A}}_O,\) and so \({\mathcal {K}}\) is compatible.

Let us now assume that \(E^{\ge h}_K(x_0)\not \subset \overline{E^{>h}_K(x_0)}\). Then there exists a closed path \(\pi \subset E^{\ge h}_K(x_0)\) starting in \(x_0\) such that \(\pi \not \subset \overline{E^{>h}_K(x_0)}.\) With probability one, we can moreover assume that \(\varphi \ne h\) on G. Then by definition of \({\mathcal {K}}\) there exists an edge or vertex \(e\in {E\cup G}\) and \(x\in {{I_e}\cap \partial E^{>h}_{K}(x_0)},\) with x in the interior of \(\pi ,\) and, if \(e\in {E},\) a point \(y\in {\overline{I_e}\cap \partial {\mathcal {K}}}\) with \(y\ne x.\) Since \(\varphi _x=h\) by continuity of \(\varphi \), using the Markov property (2.34) and a reasoning similar to the one above (2.9) in [9], one can show that when \(e\in {E},\) conditionally on \({\mathcal {A}}_{{\mathcal {K}}}^+,\) the law of \(\varphi \) on the edge between x and y is the same as the law of a Brownian bridge, with variance 2 at time 1, on the edge between x and y with value h at x and \(\varphi _y\) at y. This Brownian bridge is a.s. strictly smaller than h infinitely many times in any neighborhood of x, and so a.s. \(\varphi <h\) infinitely many times in any neighborhood of x, that is \(x\in {\partial E^{\ge h}(x_0)}.\) If \(e\in {G},\) one can prove similarly that \(x\in {\partial E^{\ge h}(x_0)},\) since the law of \(\varphi \) on the edge between x and the open end of \(I_e\) is the same as the law of a Brownian bridge with variance 2 at time 1 between \(\varphi _x\) and 0. This is a contradiction since x is in the interior of \(\pi \subset E^{\ge h}_K(x_0),\) and so \(E^{\ge h}_K(x_0)\subset \overline{E^{>h}_K(x_0)}\subset \overline{E^{>h}(x_0)}\) a.s. Taking a sequence of compacts \(K=K_n\) increasing to \({\widetilde{{\mathcal {G}}}},\) we conclude. \(\square \)

Proposition 4.2

Suppose (Isom’) is verified on \({\mathcal {G}}\). Then (\({Law}_h\))\(_{h\ge 0}\) holds true.

Proof

Let

$$\begin{aligned} \begin{aligned} \Sigma _h{\mathop {=}\limits ^{\text {def.}}}\{y\in {{\widetilde{{\mathcal {G}}}}};\,|\varphi _y- h|>0\}, \, \Sigma \equiv \Sigma _0 \text { and }\Sigma (x){\mathop {=}\limits ^{\text {def.}}}\{y\in {{\widetilde{{\mathcal {G}}}}}:y\leftrightarrow x\text { in }\Sigma \} \text { for}\ x\in {{\widetilde{{\mathcal {G}}}}} \end{aligned} \end{aligned}$$
(4.1)

(see below (1.4) for notation). We first consider the case \(h=0,\) and the sets \(\overline{\Sigma (x)},\) \(x\in {{\widetilde{{\mathcal {G}}}}},\) which are the closures of the sign clusters \({\Sigma (x)}\). Note that if \(\Sigma (x)\cap {{{\mathcal {I}}}}^u=\varnothing ,\) then the cluster of x in \(\{y\in {{\widetilde{{\mathcal {G}}}}}:2\ell _{y,u}+\varphi _y^2>0\}\) is equal to \(\Sigma (x)\) (both \(\Sigma (x)\) and \({{{\mathcal {I}}}}^u\) are open) and so \(\sigma ^u_x=\pm 1\) with conditional probability \(\frac{1}{2}\) given \((|\varphi _x|)_{x\in {{\widetilde{{\mathcal {G}}}}}}\) and \(\omega _u\) under \({\widetilde{{\mathbb {P}}}}\) (recall \(\sigma ^u\) as defined above Theorem 3.9). On the other hand, if \(\Sigma (x)\cap {{{\mathcal {I}}}}^u\ne \varnothing ,\) then \(x\leftrightarrow {{{\mathcal {I}}}}^u\) in \(\{y\in {{\widetilde{{\mathcal {G}}}}}:2\ell _{y,u}+\varphi _y^2>0\},\) and so \(\sigma ^u_x=1.\) As \(E[\text {sign}(X+a)]=P(X>-a)-P(X<-a)=P(|X|< a)\) for any centered Gaussian variable X and \(a >0\) (using the symmetry of X in the last equality), by (Isom’), (2.41) and the symmetry of the Gaussian free field, we thus obtain, for all \(u>0\) and \(x\in {{\widetilde{{\mathcal {G}}}}}\),

$$\begin{aligned} \begin{aligned} 2{\mathbb {P}}^G(\varphi _x\ge \sqrt{2u})&=1-{\mathbb {E}}^G\big [\text {sign}(\varphi _x+\sqrt{2u})\big ] =1-{\widetilde{{\mathbb {E}}}}[\sigma _x^u]\\&=1-{\widetilde{{\mathbb {P}}}}\big (\Sigma (x)\cap {{{\mathcal {I}}}}^u\ne \varnothing \big ) ={\mathbb {E}}^G\left[ \exp \left( -u\mathrm {{cap}}\big (\overline{\Sigma (x)}\big )\right) \right] . \end{aligned} \end{aligned}$$
(4.2)

Next, we note that by Lemma 4.1 for \(h=0,\) \({\mathbb {P}}^G\)-a.s., \(\overline{\Sigma (x)}=E^{\ge 0}(x)\) on \(\{ \varphi _x>0\}\). Therefore, by symmetry of the Gaussian free field in combination with (4.2), we have

$$\begin{aligned} {\mathbb {E}}^G\Big [\exp \big (-u\mathrm {{ cap}}\big (E^{\ge 0}(x)\big )\big )1_{\varphi _x\ge 0}\Big ]&=\frac{1}{2}{\mathbb {E}}^G\left[ \exp \left( -u\mathrm {{cap}}\big (\overline{\Sigma (x)}\big )\right) \right] ={\mathbb {P}}^G(\varphi _x\ge \sqrt{2u}), \end{aligned}$$
(4.3)

which is (\(\mathrm {Law}_0\)).

Let us now consider some \(h>0,\) and let \(u_0=h^2/2;\) we will reduce this to the case \(h=0\). By the symmetry of the Gaussian free field, (Isom’) and Lemma 4.1, we have that \(E^{\ge h}(x)\) has the same law under \({\mathbb {P}}^G\) as the closure of the connected component of x in \(\{y\in {{\widetilde{{\mathcal {G}}}}}:\,\sigma ^{u_0}_y=-1\}\) under \({\widetilde{{\mathbb {P}}}},\) which is the law of the set that equals \(\overline{\Sigma (x)}\) if \({{{\mathcal {I}}}}^{u_0}\cap {\Sigma (x)}=\varnothing \) and \(\sigma ^{u_0}_x=-1,\) and equals \(\varnothing \) otherwise. Therefore, by (2.41) we have for all \(u>0\)

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}^G\Big [\exp \big (-u\mathrm {{ cap}}(E^{\ge h}(x))\big )1_{\varphi _{x}\ge h}\Big ]={\widetilde{{\mathbb {E}}}}\left[ 1_{{{{\mathcal {I}}}}^{u_0}\cap {\Sigma (x)}=\varnothing ,\sigma ^{u_0}_x=-1}\exp \left( -u\mathrm {{ cap}}(\overline{\Sigma (x)})\right) \right] \qquad \\&\quad =\frac{1}{2}{\mathbb {E}}^G\left[ \exp \left( -(u+u_0)\mathrm {{ cap}}(\overline{\Sigma (x)})\right) \right] ={\mathbb {P}}^G\big (\varphi _x\ge \sqrt{2u+h^2}\big ), \end{aligned} \end{aligned}$$
(4.4)

using (4.2) in the last step. \(\square \)

Next, we observe a symmetry property of compact clusters implied by (Isom’).

Lemma 4.3

Let \({\mathcal {G}}\) be a graph such that (Isom’) holds. Then for all \(h \ge 0\), the compact clusters of \(E^{\ge -h}\) have the same law as the compact clusters of \(E^{\ge h}.\)

Proof

If (Isom’) holds, then by Lemma 4.1 the compact clusters of \(E^{\ge -\sqrt{2u}}\) have the same law as the closures of the clusters of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\sigma _x^u\sqrt{2\ell _{x,u}+\varphi _x^2}>0\}\) whose closure is compact. Each cluster of \({{{\mathcal {I}}}}^u\) is non-compact, and so by definition of \(\sigma ^u,\) the compact clusters of \(E^{\ge -\sqrt{2u}}\) have the same law as the closures of the clusters of \(\Sigma \) (cf. (4.1)) whose closure is compact, which do not intersect \({{{\mathcal {I}}}}^u\) and on which \(\sigma ^u=1\). By definition of \(\sigma ^u\), the law of these clusters of \(\Sigma \) is unchanged if one retains all the previous properties but the last one and requires \(\sigma ^u=-1\) instead. But by (Isom’), the resulting clusters have the same law as those of \(\{x\in {{\widetilde{{\mathcal {G}}}}}: \varphi _x < - \sqrt{2u} \}\) whose closure is compact, i.e. by Lemma 4.1 the clusters whose closures are the compact clusters of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:\varphi _x\le -\sqrt{2u}\}.\) Finally, by the symmetry of the Gaussian free field, these closures have the same law as the compact clusters of \(E^{\ge \sqrt{2u}}\). \(\square \)

The proofs of our next two ingredients, Lemmas 4.4 and 4.6 below, rely on certain aspects of Poissonian loop soups. This requires a small amount of notation, which we now introduce. We also review certain features of loop soups, which will be used in the sequel. Following e.g. [13, 17], one defines a measure \(\mu ^L\) on loops in \({\widetilde{{\mathcal {G}}}}\) with compact closure in \({\widetilde{{\mathcal {G}}}}\) associated with \(P^{{\widetilde{{\mathcal {G}}}}}_x,\) \(x\in {{\widetilde{{\mathcal {G}}}}},\) and, under a suitable probability measure \({\mathbb {P}}^L= {\mathbb {P}}^L_{{\widetilde{{\mathcal {G}}}}}\), for all \(\alpha >0\) the loop soup \(\widetilde{{\mathcal {L}}}_{\alpha }\) with parameter \(\alpha \) as the Poisson point process on the space of (compact) loops on \({\widetilde{{\mathcal {G}}}}\) with intensity \(\alpha \mu ^L.\) We denote by \((L_x^{(\alpha )})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) its field of local times relative to m on \({\widetilde{{\mathcal {G}}}}\) (cf. above (2.1)), which can be taken to be continuous, see Lemma 2.2 in [19]. Moreover, we denote by \({\mathcal {L}}_{\alpha }\) the Poisson point process consisting of the trace on G of each loop in \(\widetilde{{\mathcal {L}}}_{\alpha },\) which has the same law as the loop soup associated with \(P_x^{\mathcal {G}},\) see Sect. 2 of [19] or Section 7.3 of [13] for details. An important property of the loop soup \(\widetilde{{\mathcal {L}}}_{\alpha }\) is the restriction property, see Sect. 6 of [13]: for all connected and open subsets A of \({\widetilde{{\mathcal {G}}}},\) if \(\widetilde{{\mathcal {L}}}_{\alpha }^A\) stands for the set of loops in \(\widetilde{{\mathcal {L}}}_{\alpha }\) which are entirely included in A,  then

$$\begin{aligned} \widetilde{{\mathcal {L}}}_{\alpha }^A\text { has the same law under }{\mathbb {P}}^L_{{\widetilde{{\mathcal {G}}}}} \text { as }\widetilde{{\mathcal {L}}}_{\alpha }\text { under }{\mathbb {P}}^L_{{\widetilde{{\mathcal {G}}}}^A_{\infty }}; \end{aligned}$$
(4.5)

here, \({\mathcal {G}}^A_{\infty }\) is the graph with the same vertices, edges and weights as \({\mathcal {G}}^{\partial A}\) (see Lemma 2.1), but with killing measure equal to \(\kappa \) on \({(G\cap A)\setminus \partial A},\) and equal to infinity on \(\partial A\cup (G\cap A^c).\) I.e., for all \(x\in {A},\) the diffusion X under \(P^{{\widetilde{{\mathcal {G}}}}^A_{\infty }}_x\) has the same law as X killed on exiting A under \(P^{\widetilde{{{\mathcal {G}}}}}_x.\)

When \(\alpha =\frac{1}{2},\) the loop soup \(\widetilde{{\mathcal {L}}}_{1/2}\) is linked to the Gaussian free field on \({\widetilde{{\mathcal {G}}}}\) via the following isomorphism, due to Lupu [19]; see also Le Jan, Theorem 2 of [17] for a similar identity regarding the square of the Gaussian free field on the discrete base graph G (not including the sign of \(\varphi \)). Introducing the shorthand \(L_{\cdot }=L_{\cdot }^{(1/2)}\) for the local time field of \(\widetilde{{\mathcal {L}}}_{1/2}\) to simplify notation, let \({\widetilde{{\mathbb {P}}}}^{L}_{{\widetilde{{\mathcal {G}}}}}\) be a suitable extension of \({\mathbb {P}}^{L}_{{\widetilde{{\mathcal {G}}}}}\) carrying a process \((\sigma _x)_{x\in {{\widetilde{{\mathcal {G}}}}}}\in {\{-1,1\}^{{\widetilde{{\mathcal {G}}}}}}\) such that, conditionally on \(\widetilde{{\mathcal {L}}}_{1/2},\) \(\sigma \) is constant on each cluster of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:L_x>0\}\), and its values on each cluster are independent and uniformly distributed. Then

$$\begin{aligned} \text { under }{\widetilde{{\mathbb {P}}}}^L_{{\widetilde{{\mathcal {G}}}}} \text { the law of } \big (\sigma _x\sqrt{2L_x}\big )_{x\in {{\widetilde{{\mathcal {G}}}}}}\text { is }{\mathbb {P}}^G_{{\widetilde{{\mathcal {G}}}}}; \end{aligned}$$
(4.6)

the measure \({\widetilde{{\mathbb {P}}}}^L_{{\widetilde{{\mathcal {G}}}}}\) is essentially the coupling constructed in Proposition 2.1 of [19], where the (explicit) law of \(\sigma \) on \({\widetilde{{\mathcal {G}}}}\) follows from a version of Lemma 3.2 in [19] on \({\widetilde{{\mathcal {G}}}}\) rather than \({\widetilde{{\mathcal {G}}}}^{-},\) cf. above (2.31).

The identity (4.6) also comes with the following discrete version. Define (still under \({\widetilde{{\mathbb {P}}}}^{L}_{{\widetilde{{\mathcal {G}}}}}\)) a random subset \(\widehat{{\mathcal {E}}}\) of E such that, conditionally on \({\mathcal {L}}_{\frac{1}{2}},\) \(\widehat{{\mathcal {E}}}\) contains each edge crossed by some loop in \({\mathcal {L}}_{\frac{1}{2}},\) and each additional edge \(e\in {E}\) conditionally independently with probability \(1-p_e^{{\mathcal {G}}}(\sqrt{L}),\) with \(p_e^{{\mathcal {G}}}\) as given by (2.37). Then

$$\begin{aligned} \widehat{{\mathcal {E}}}\ \text {has the same law under}\ {\widetilde{{\mathbb {P}}}}^{L}_{{\widetilde{{\mathcal {G}}}}}(\cdot \,|\,{\mathcal {L}}_{\frac{1}{2}})\ \text {as}\ {\mathcal {E}}{\mathop {=}\limits ^{\text {def.}}}\{e\in {E}:\,L_x>0\text { for all }x\in {I_e}\}\ \text {under}\ {{\mathbb {P}}}^{L}_{{\widetilde{{\mathcal {G}}}}}(\cdot \,|\,{\mathcal {L}}_{\frac{1}{2}}). \end{aligned}$$
(4.7)

In particular, if we define a process \(({\widehat{\sigma }}_x)_{x\in {{G}}}\in \{-1,1\}^{G},\) such that, conditionally on \({\mathcal {L}}_{\frac{1}{2}}\) and \(\widehat{{\mathcal {E}}},\) \({\widehat{\sigma }}\) is constant on each of the (discrete) clusters induced by \(\widehat{{\mathcal {E}}}\) and its values on each cluster are independent and uniformly distributed, then

$$\begin{aligned} \big ({\widehat{\sigma }}_x\sqrt{2L_x}\big )_{x\in {{{G}}}}\text { has the same law under }{\widetilde{{\mathbb {P}}}}^L_{{\widetilde{{\mathcal {G}}}}} \text { as }(\varphi _x)_{x\in {G}}\text { under }{\mathbb {P}}^G_{{\widetilde{{\mathcal {G}}}}} \end{aligned}$$
(4.8)

(Corollary 3.6 in [19] provides (4.7), and one can then directly derive (4.8), see Theorem 1.bis in [19]). The identity (4.6) is an analogue in the context of loop soups of the relation (Isom’) for interlacements (a similar analogy can be drawn between (4.8) and (3.16)). In particular, the following holds on finite graphs, i.e. on graphs \({\mathcal {G}}=({\overline{G}},{{\bar{\lambda }}},{{\bar{\kappa }}})\) such that \(\{x \in {\overline{G}} : {{\bar{\kappa }}}_x<\infty \}\) is finite (note that this implies that the induced graph \((G,\lambda , \kappa )\) has finite vertex set G, cf. (2.12)).

Lemma 4.4

If \({\mathcal {G}}\) is a finite transient weighted graph, then (Isom’) holds. Moreover, conditionally on \({\widehat{\omega }}_u\) and \((\varphi _x)_{x\in {{G}}},\) the family of events \(\{e\in {{\mathcal {E}}_u}\},\) \(e\in {E\cup {G}}\) (with \({\mathcal {E}}_u\) defined above (3.15)), is independent, and for all \(e\in {E\cup {G}}\)

$$\begin{aligned} {\widetilde{{\mathbb {P}}}}(e\in {{\mathcal {E}}_u}\,|\,{\widehat{\omega }}_u,(\varphi _x)_{x\in G})=1_{e\in {{{{\mathcal {I}}}}_E^u}}\vee (1-p_e(\varphi ,\ell _{.,u})). \end{aligned}$$
(4.9)

For completeness, we have included the proof of Lemma 4.4 in “Appendix B”. We briefly sketch the proof here. To deduce (Isom’), one essentially considers the decomposition \(\widetilde{{\mathcal {L}}}_{1/2}=\widetilde{{\mathcal {L}}}_{1/2}^{\, \text {in}}+\widetilde{{\mathcal {L}}}_{1/2}^{\, *}\) of the loop soup on the cable system \({\widetilde{{\mathcal {G}}}}^*\) of a suitable one-point compactification \({{\mathcal {G}}}^*={{\mathcal {G}}} \cup \{ x_*\}\) of \({{\mathcal {G}}}\) (with killing at \(x_*\), so \({\mathcal {G}}^{*}\) is transient), into the ‘interior’ loops constituting \(\widetilde{{\mathcal {L}}}_{1/2}^{\, \text {in}}\) which never hit \(x_*,\) and the loops \(\widetilde{{\mathcal {L}}}_{1/2}^{\, *}\) which contain \(x_*\). The two processes are independent. Inserting the corresponding decomposition of the local times \(L_{\cdot }\) of \(\widetilde{{\mathcal {L}}}_{1/2}\) into (4.6) (applied on \({\widetilde{{\mathcal {G}}}}^*\)), one can then generate in law the field \(\sigma _{\cdot }^u\sqrt{2\ell _{\cdot ,u}+\varphi _{\cdot }^2}\) appearing in (Isom’) by suitable conditioning, and one sees that this conditioning causes a global shift by \(\sqrt{2u}\) in (4.6). Roughly speaking, the local times of \(\widetilde{{\mathcal {L}}}_{1/2}^{\, \text {in}}\) generate \(\varphi _{\cdot }^2/2\) in this procedure by (4.5) and (4.6), whereas the local times of \(\widetilde{{\mathcal {L}}}_{1/2}^{\, *}\) give rise to \(\ell _{\cdot ,u}\); see also [20], or Sect. 2 of [18], for similar ideas to deduce the second Ray-Knight theorem from (4.6), which is related to the interlacement by concatenating the trajectories contributing to \(\ell _{\cdot ,u}\) to represent the successive excursions of a single diffusion \(X_{\cdot \wedge \tau _u}\) under \(P_{x_*}^{{\widetilde{{\mathcal {G}}}}^*}\) stopped at \(\tau _u =\inf \{t \ge 0: \ell _{x_*}(t) \ge u\}\). The conditional law in (4.9) is then obtained by following ideas of [20], Section 2.5.
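Schematically, writing \(L_{\cdot }^{\,\text {in}}\) and \(L_{\cdot }^{\,*}\) for the local times of \(\widetilde{{\mathcal {L}}}_{1/2}^{\, \text {in}}\) and \(\widetilde{{\mathcal {L}}}_{1/2}^{\, *},\) respectively, the above sketch amounts to the decomposition

$$\begin{aligned} 2L_{\cdot }=2L_{\cdot }^{\,\text {in}}+2L_{\cdot }^{\,*}, \qquad \text {with } \big (2L_{x}^{\,\text {in}}\big )_{x\in {{\widetilde{{\mathcal {G}}}}}}{\mathop {=}\limits ^{\text {law}}}\big (\varphi _x^2\big )_{x\in {{\widetilde{{\mathcal {G}}}}}} \end{aligned}$$

by (4.5) and (4.6), while, after the conditioning alluded to above, \(2L_{\cdot }^{\,*}\) plays the role of \(2\ell _{\cdot ,u},\) so that \(2L_{\cdot }\) corresponds to \(\varphi _{\cdot }^2+2\ell _{\cdot ,u}=\big (\sigma ^u_{\cdot }\sqrt{2\ell _{\cdot ,u}+\varphi _{\cdot }^2}\,\big )^2.\)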

Remark 4.5

The proof of Lemma 4.4 delineated above uses the isomorphism (4.6) relating loop soups and the Gaussian free field. Similarly to the proof of Theorem 2.4 of [27], one could alternatively use the Markov property (2.34) to prove that (Isom) (which is easily seen to be equivalent to (Isom’), see Lemma 6.1 below) holds on any finite transient graph (or more generally on any transient graph with bounded Green function such that (Sign) holds). However, this approach does not directly provide the discrete isomorphism described by (4.9).

We proceed to state the third ingredient, Lemma 4.6 below, which supplies a way to approximate the Gaussian free field on any transient graph \({\mathcal {G}}\) by Gaussian free fields on finite graphs. The following definition is key. For a given graph \({\mathcal {G}}=({\overline{G}}, {\overline{\lambda }}, {\overline{\kappa }})\), we say that

$$\begin{aligned} \begin{aligned}&\text {a sequence of graphs}\ {\mathcal {G}}_n\ \text {increases to}\ {\mathcal {G}}\ \text {if}\ {\mathcal {G}}_n=({\overline{G}},{\overline{\lambda }}, {\overline{\kappa }}^{(n)})\ \text {for a sequence}\\&{\overline{\kappa }}^{(n)} \subset [0,\infty ]^{{\overline{G}}}\ \text {of killing measures such that}\ {\overline{\kappa }}^{(n)}_x \searrow {\overline{\kappa }}_x\ \text {as}\ n \rightarrow \infty \ \text {for all}\ x\in {{\overline{G}}}.\qquad \end{aligned} \end{aligned}$$
(4.10)

In particular, we will be interested in finite-volume approximations of \({\mathcal {G}}\), for which \({\overline{\kappa }}^{(n)}=\infty \) outside of a finite set \(U_n\) for every n, with \(U_n\) exhausting \({\overline{G}}\) as \(n \rightarrow \infty \). The graphs \({\mathcal {G}}_n\) thus considered are finite (in the sense defined above Lemma 4.4).
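For instance, when \({\mathcal {G}}\) is given by \({\mathbb {Z}}^d,\) \(d\ge 3,\) endowed with unit weights and vanishing killing measure, a natural choice is \(U_n=([-n,n]\cap {\mathbb {Z}})^d\) together with \({\overline{\kappa }}^{(n)}_x=0\) for \(x\in {U_n}\) and \({\overline{\kappa }}^{(n)}_x=\infty \) for \(x\notin {U_n},\) which amounts to imposing Dirichlet boundary conditions outside \(U_n\) for the corresponding Gaussian free fields.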

Due to the observations made around (2.28), for \({\mathcal {G}}_n\) as in (4.10), we can view \({\widetilde{{\mathcal {G}}}}_n\) as a subset of \({\widetilde{{\mathcal {G}}}}\) such that the sequence \({\widetilde{{\mathcal {G}}}}_n\) increases to \({\widetilde{{\mathcal {G}}}}\) and such that for each compact \(K \subset {\widetilde{{\mathcal {G}}}}\) we have \(K\subset {\widetilde{{\mathcal {G}}}}_n\) for large enough n.

Lemma 4.6

Let \({\mathcal {G}}\) be a transient weighted graph, and let \({\mathcal {G}}_n,\) \(n\in {\mathbb {N}},\) be a sequence of transient weighted graphs increasing to \({\mathcal {G}}_{\infty }={\mathcal {G}}.\) There exists a probability space \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) on which the processes \((\varphi ^{(n)}_x)_{x\in {{\widetilde{{\mathcal {G}}}}_n}},\) \(n\in {\mathbb {N}},\) and \((\varphi ^{(\infty )}_x)_{x\in {{\widetilde{{\mathcal {G}}}}}}\) are defined, with the following properties:

$$\begin{aligned}&\text {for all}\ n\in {\mathbb {N}}\cup \{\infty \}, (\varphi _{x}^{(n)})_{x\in {{\widetilde{{\mathcal {G}}}}_n}}\ \text {has law}\,\, {\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}_n}^G; \end{aligned}$$
(4.11)
$$\begin{aligned}&{\mathbb {P}}\text {-a.s. for all compact}\ K\subset {\widetilde{{\mathcal {G}}}},\ \text {one has}\ \varphi _x^{(n)}=\varphi _x^{(\infty )}\ \text {for}\ x\in {K}\ \text {and}\ n\ \text {large enough.} \end{aligned}$$
(4.12)

Proof

Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space carrying a process \(\widetilde{{\mathcal {L}}}^{(\infty )}\) with the same law as \(\widetilde{{\mathcal {L}}}_{1/2}\) under \({\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}}^L\) (for instance one can choose \({\mathbb {P}}= {\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}}^L\)). For each \(n\in {\mathbb {N}}\) we denote by \(({L}_x^{(n)})_{x\in {{\widetilde{{\mathcal {G}}}}_n}}\) the accumulated local times of those loops in \(\widetilde{{\mathcal {L}}}^{(\infty )}\) which are entirely contained in the open set \({\widetilde{{\mathcal {G}}}}_n\subset {\widetilde{{\mathcal {G}}}}.\) One can clearly identify \({\widetilde{{\mathcal {G}}}}_n\) with \({\widetilde{{\mathcal {G}}}}_\infty ^{{\widetilde{{\mathcal {G}}}}_n},\) and by (4.5), the law of \(({L}_x^{(n)})_{x\in { {\widetilde{{\mathcal {G}}}}_n}}\) is the same as the law of \((L_x)_{x\in {{\widetilde{{\mathcal {G}}}}}}\) under \({\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}_n}^L.\) Moreover, for each \(x\in {{\widetilde{{\mathcal {G}}}}},\) the sequence \(L_x^{(n)},\) \(n\in {\mathbb {N}},\) is increasing, and we denote by \(L_x^{(\infty )}\) its limit. Since each loop of \(\widetilde{{\mathcal {L}}}^{(\infty )}\) is relatively compact, it is contained in \({\widetilde{{\mathcal {G}}}}_n\) for n large enough, and so \((L_x^{(\infty )})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) equals the total local times of the loops in \(\widetilde{{\mathcal {L}}}^{(\infty )},\) whence

$$\begin{aligned} L_{\cdot }^{(\infty )} =\lim _n \uparrow L_{\cdot }^{(n)} {\mathop {=}\limits ^{\text {law}}} L_{\cdot } \end{aligned}$$
(4.13)

where \(L_{\cdot }\) is the occupation time field of \(\widetilde{{\mathcal {L}}}_{1/2}\) (on \({\widetilde{{\mathcal {G}}}}\)).

For each \(n\in {\mathbb {N}},\) let \(({\mathcal {A}}_p^{(n)})_{p\in {\mathbb {N}}}\) be some enumeration of the countably many clusters of \(\{ L^{(n)}>0 \} (=\{x\in {{\widetilde{{\mathcal {G}}}}}:L_x^{(n)}>0\} \subset {\widetilde{{\mathcal {G}}}}_n),\) and let \((\sigma _p)_{p\in {\mathbb {N}}}\in {\{-1,1\}^{\mathbb {N}}}\) be an independent sequence of uniformly distributed random variables. For each \(n\in {\mathbb {N}}\) and \(x\in {{\widetilde{{\mathcal {G}}}}_n}\) we define \(E_n^{{\mathcal {L}}}(x)=\{y\in {{\widetilde{{\mathcal {G}}}}_n}:x\leftrightarrow y\text { in } \{ L^{(n)}>0 \} \},\) and if \(L_x^{(n)}\ne 0,\) we denote by \(k_n(x)\in \{1,\dots ,n\}\) the smallest index k such that \({\widetilde{{\mathcal {G}}}}_k\) intersects the cluster of x in \(\{L^{(n)}>0\}\), i.e. \(E_n^{{\mathcal {L}}}(x)\cap {\widetilde{{\mathcal {G}}}}_{k_n(x)}\ne \varnothing \) and \(E_n^{{\mathcal {L}}}(x)\cap {\widetilde{{\mathcal {G}}}}_{k_n(x)-1}=\varnothing ,\) with the convention \({\widetilde{{\mathcal {G}}}}_0=\varnothing .\)

We also define \(p_n(x)=\inf \{p\in {\mathbb {N}}:\,{\mathcal {A}}_p^{(k_n(x))}\subset E^{{\mathcal {L}}}_n(x)\},\) with the convention \(\inf \varnothing =+\infty .\) Note that since \(L^{(n)}_x,\) \(n\in {\mathbb {N}},\) is increasing for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) and \(k_n(x)\le n,\) we have that \(p_n(x)<\infty \) if \(L_x^{(n)}\ne 0.\) For each \(n\in {\mathbb {N}}\) and \(x\in {{\widetilde{{\mathcal {G}}}}_n},\) we then let \(\sigma _{x}^{(n)}=\sigma _{p_n(x)}\) if \(L_x^{(n)}>0\) and \(\sigma _x^{(n)}=1\) otherwise, and set

$$\begin{aligned} \varphi _x^{(n)}{\mathop {=}\limits ^{\text {def.}}}\sigma _x^{(n)}\sqrt{2L_x^{(n)}}. \end{aligned}$$
(4.14)

Due to (4.6), \((\varphi _x^{(n)})_{x\in {{\widetilde{{\mathcal {G}}}}_n}}\) has law \({\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}_n}^G.\) Moreover, for each \(x\in {{\widetilde{{\mathcal {G}}}}}\) with \(L_x^{(\infty )}>0,\) for all n large enough we have \(x\in {{\widetilde{{\mathcal {G}}}}_n}\) as well as \(L_x^{(n)}>0,\) hence \(k_n(x)\) is constant for n large enough since \(E_n^{{\mathcal {L}}}(x)\) increases to \(E_\infty ^{{\mathcal {L}}}(x).\) As a consequence, the sequence \(p_n(x),\) \(n\in {\mathbb {N}},\) is decreasing for n large enough, and we denote by \(p_\infty (x)\) its limit. Note that we then have \(p_n(x)=p_\infty (x)\) for n large enough. We define \(\sigma _x^{(\infty )}=\sigma _{p_\infty (x)}\) if \(L_x^{(\infty )}>0\) and \(\sigma _x^{(\infty )}=1\) otherwise, and \(\varphi _x^{(\infty )}=\sigma _x^{(\infty )}\sqrt{2L_x^{(\infty )}}.\) We then have \(\varphi ^{(n)}_x\displaystyle \mathop {\longrightarrow }_{n\rightarrow \infty }\varphi _x^{(\infty )}\) for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) due to (4.13), (4.14) and since \(\text {sign}(\varphi ^{(n)}_x)=\text {sign}(\varphi ^{(\infty )}_x) \) for all large enough n. Finally, \(g_{{\widetilde{{\mathcal {G}}}}_n}(x,y)\displaystyle \mathop {\longrightarrow }_{n\rightarrow \infty }g_{{\widetilde{{\mathcal {G}}}}}(x,y)=g(x,y)\) for all \(x,y\in {{\widetilde{{\mathcal {G}}}}},\) whence

$$\begin{aligned} \lim _n {\mathbb {E}}\big [\exp ( i \langle \mu _{\alpha },\varphi ^{(n)}\rangle )\big ] =\exp \big (-\langle \mu _{\alpha },G \mu _{\alpha } \rangle /2\big )= {\mathbb {E}}^G\big [\exp ( i \langle \mu _{\alpha },\varphi \rangle )\big ]\quad \end{aligned}$$
(4.15)

for any finite point measure \(\mu _{\alpha }=\sum _{x \in A}\alpha _x \delta _x\), \(\alpha \in {\mathbb {R}}^{A}\) with \(A \subset {\widetilde{{\mathcal {G}}}}\) finite and \((G\mu )(x)=\int _{{\widetilde{{\mathcal {G}}}}} g(x,y)d\mu (y)\). The statement that \((\varphi _x^{(\infty )})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) has law \({\mathbb {P}}^G\) follows from (4.15) and convergence of \(\varphi ^{(n)}\) (in law). This shows (4.11).

With probability 1, for each \(K \subset {\widetilde{{\mathcal {G}}}}\) connected compact, there exists a random \(N\in {\mathbb {N}},\) such that for all \(n\ge N,\) one has \(K\subset {\widetilde{{\mathcal {G}}}}_n,\) and no trajectory in \(\widetilde{{\mathcal {L}}}^{(\infty )}\) hitting K hits \({\widetilde{{\mathcal {G}}}}\setminus {\widetilde{{\mathcal {G}}}}_n\). One then has the equality \(L_x^{(n)}=L_x^{(\infty )}\) for all \(n\ge N\) and \(x\in {K},\) and the clusters of \(\{L_{\cdot }^{(n)}>0\}\) in \({\widetilde{{\mathcal {G}}}}\) whose closure is contained in K are equal to the clusters of \(\{L_{\cdot }^{(\infty )}>0\}\) whose closure is contained in K. As a consequence, once \(n\ge N,\) one has that \(\sigma _x^{(n)}=\sigma _x^{(\infty )}\) on all these clusters. Since \(\partial K\) is finite, we also have \(\sigma _x^{(n)}=\sigma _x^{(\infty )} (=1)\) for all \(x\in {\partial K}\) and n large enough. The claim (4.12) follows. \(\square \)

Lemma 4.6 yields the following important result.

Corollary 4.7

(Limits of cluster capacities) Let \(E_{n}^{\ge h}(x_0)= E_{n,{\widetilde{{\mathcal {G}}}}}^{\ge h}(x_0)\), where \(E_{n,K}^{\ge h}(x_0)=\{x\in {{\widetilde{{\mathcal {G}}}}_n\cap K}:\,x_0\leftrightarrow x\text { in } \{ \varphi ^{(n)} \ge h\}\cap K\}\), for \(K \subset {\widetilde{{\mathcal {G}}}}\). Then \({\mathbb {P}}\)-a.s., for all \(h \in {\mathbb {R}}\), \(x_0\in {{\widetilde{{\mathcal {G}}}}}\),

$$\begin{aligned}&\lim _{n\rightarrow \infty }\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}\big (E_{n,K}^{\ge h}(x_0)\big )=\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}\big (E^{\ge h}_{\infty ,K}(x_0)\big ), \text { for compact}\ K \subset {\widetilde{{\mathcal {G}}}},\ \text {and} \end{aligned}$$
(4.16)
$$\begin{aligned}&\lim _{n\rightarrow \infty }\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}\big (E_{n}^{\ge h}(x_0)\big )=\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}\big (E^{\ge h}_\infty (x_0)\big ), \text { if}\ E^{\ge h}_\infty (x_0)\ \text {is compact.} \end{aligned}$$
(4.17)

Proof

As a consequence of (4.12), one knows that for compact \(K \subset {\widetilde{{\mathcal {G}}}}\), one has \(\varphi ^{(n)}=\varphi ^{(\infty )}\) on K for large enough n, whence \(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(E_{n,K}^{\ge h}(x_0))=\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(E^{\ge h}_{\infty ,K}(x_0))\) for such n. From this, (4.16) follows using that \(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(A) \rightarrow \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(A)\) for compact A as \(n\rightarrow \infty \), applied with the choice \(A= E^{\ge h}_{\infty ,K}(x_0)\) (indeed, using (2.6), (2.16) and (2.20), it is not hard to show that the equilibrium measure of any compact set A on \({\widetilde{{\mathcal {G}}}}_n\) converges, in fact decreases, to the equilibrium measure of A on \({\widetilde{{\mathcal {G}}}}\)). Now, if \(E_{\infty }^{\ge h}(x_0)\) is compact, then \(E^{\ge h}_{\infty }(x_0)=E^{\ge h}_{\infty ,K}(x_0)=E^{\ge h}_{n,K}(x_0)\) for a suitable compact K (depending on \(\varphi \)) and all large enough n. Together with (4.16), this immediately gives (4.17). \(\square \)

5 Proofs of Theorems 3.2 and 3.7

With the results of the last section at hand, we are ready to give the proofs of Theorems 3.2 and 3.7. This is the subject of the present section. Both proofs rely on Proposition 4.2 in combination with Lemmas 4.4 and 4.6 and Corollary 4.7.

First, as a consequence of Proposition 4.2 and Lemma 4.3, we collect the following

Corollary 5.1

If (Isom’) and (Cap) are satisfied on \({\mathcal {G}}\), then (3.10) and (3.11) hold.

Proof

If (Isom’) and (Cap) are satisfied, (Sign) follows from (\(\mathrm {Law}_{0}\)) (which holds on account of Proposition 4.2) by letting \(u\downarrow 0\) and using (Cap). Therefore (3.17) holds, which, together with (Cap) and Lemma 3.1, yields (3.10). Then, using (3.10) we have that \({\mathbb {P}}^G\big (\mathrm {{ cap}}({E}^{\ge -h}(x_0))\in {(\mathrm {{ cap}}(\{x_0\}),\infty )}\big )={\mathbb {P}}^G(\varphi _{x_0}\ge h)\). Since \({\mathbb {P}}^G\big (\mathrm {{ cap}}({E}^{\ge -h}(x_0))\le \mathrm {{ cap}}(\{x_0\})\big )={\mathbb {P}}^G(\varphi _{x_0}\le -h),\) we infer (3.11). \(\square \)

We now give the

Proof of Theorem 3.2

For a given graph \({\mathcal {G}}=({\overline{G}},{\overline{\lambda }}, {\overline{\kappa }})\), consider an increasing sequence \(U_n,\) \(n\in {\mathbb {N}},\) of finite connected subsets of \({\overline{G}}\) exhausting \({\overline{G}}\), i.e. satisfying \(U_n \subset U_{n+1}\) for all n and \(\bigcup _n U_n={\overline{G}}\). Now, define \({\mathcal {G}}_n=({\overline{G}},{\overline{\lambda }}, {\overline{\kappa }}^{(n)})\) with killing measure \(\bar{\kappa }_x^{(n)}=\bar{\kappa } _x\) if \(x\in {U_n},\) and \(\bar{\kappa } _x^{(n)}=\infty \) otherwise. The sequence of graphs \({\mathcal {G}}_n,\) \(n\in {\mathbb {N}},\) increases to \({\mathcal {G}}\) in the sense of (4.10), and \({\mathcal {G}}_n\) is finite for each \(n\in {\mathbb {N}}\) in the sense defined above Lemma 4.4. Fixing a point \(x_0 \in {\widetilde{{\mathcal {G}}}}\), we may furthermore assume that \(x_0 \in {{\widetilde{{\mathcal {G}}}}}_n\) for all \(n \in {\mathbb {N}}\) (for instance by choosing \(U_n= B_d(z_0, n+1) \), where \(z_0 \in G\) is the vertex closest to \(x_0\) relative to d).

Considering the sequence \((\varphi _x^{(n)})_{x\in {{\widetilde{{\mathcal {G}}}}_n}},\) \(n \in {\mathbb {N}},\) from Lemma 4.6, which is in force, and applying Lemma 4.4 and Proposition 4.2, which together yield (\({Law}_h\)) on \({\mathcal {G}}_n\), we obtain that for all \(n\in {\mathbb {N}},\)

$$\begin{aligned} {\mathbb {E}}\left[ \exp \left( -u\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}({E}_n^{\ge h}(x_0))\right) 1_{\varphi _{x_0}^{(n)}\ge h}\right] {=}{\mathbb {P}}(\varphi _{x_0}^{(n)}\ge \sqrt{2u+h^2})\text { for all }u>0, \, h \ge 0.\nonumber \\ \end{aligned}$$
(5.1)

Fixing \(h=0\), (5.1) and the monotonicity property (2.23) thus yield, for any compact \(K \subset {\widetilde{{\mathcal {G}}}}\),

$$\begin{aligned} {\mathbb {E}}\left[ \exp \left( -u\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}({E}_{n,K}^{\ge 0}(x_0))\right) 1_{\varphi _{x_0}^{(n)}\ge 0}\right] \ge {\mathbb {P}}(\varphi _{x_0}^{(n)}\ge \sqrt{2u})\text { for all }u>0, \end{aligned}$$
(5.2)

with \({E}_{n,K}^{\ge h}(x_0)\) as defined above (4.16). Now, applying (4.16) and dominated convergence to take the limit \(n\rightarrow \infty \) on both sides of (5.2), and subsequently considering an increasing sequence of compacts K exhausting \({\widetilde{{\mathcal {G}}}}\), one obtains, in view of (2.27),

$$\begin{aligned} {\mathbb {E}}^G\left[ \exp \left( -u\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}({E}^{\ge 0}(x_0))\right) 1_{\varphi _{x_0}\ge 0}\right] \ge {\mathbb {P}}^G(\varphi _{x_0}\ge \sqrt{2u})\text { for all }u>0. \end{aligned}$$
(5.3)

Hence, taking \(u\rightarrow 0\) we obtain by dominated convergence that

$$\begin{aligned} {\mathbb {P}}^G\big (\mathrm {{ cap}}(E^{\ge 0}(x_0))<\infty ,\varphi _{x_0}\ge 0 \big )\ge \frac{1}{2}. \end{aligned}$$

Since \(E^{\ge 0}(x_0)=\varnothing \) when \(\varphi _{x_0}<0\) and \({\mathbb {P}}^G(\varphi _{x_0}<0)=\frac{1}{2},\) we obtain that \(\mathrm {{ cap}}(E^{\ge 0}(x_0))\) is \({\mathbb {P}}^G\)-a.s. finite, which proves the first part of the statement.

Let us now fix some \(h<0.\) If \(E^{\ge h}_n(x_0)\) is a non-compact subset of \({\widetilde{{\mathcal {G}}}}_n\) for infinitely many n,  then for all compacts K of \({\widetilde{{\mathcal {G}}}}\) we have \(E_n^{\ge h}(x_0)\not \subset K\) for infinitely many \(n\in {\mathbb {N}}.\) Since \(\varphi ^{(n)}=\varphi ^{(\infty )}\) on a neighborhood of K for n large enough, we then have that \(E_{\infty }^{\ge h}(x_0)\not \subset K\) for all compacts K,  that is \(E_{\infty }^{\ge h}(x_0)\) is a non-compact subset of \({\widetilde{{\mathcal {G}}}}.\) Since (3.11) holds on \({\mathcal {G}}_n\) by Lemma 4.4 and Corollary 5.1, we moreover have that

$$\begin{aligned} {\mathbb {P}}(E^{\ge h}_n(x_0)\text { is non-compact in }{\widetilde{{\mathcal {G}}}}_n\text { i.o.})&\ge \liminf _{n\rightarrow \infty }{\mathbb {P}}(E^{\ge h}_n(x_0)\text { is non-compact in }{\widetilde{{\mathcal {G}}}}_n)\\&=\liminf _{n\rightarrow \infty }{\mathbb {P}}(\varphi _{x_0}^{(n)}\in {(h,-h)})\\&={\mathbb {P}}^G(\varphi _{x_0}\in {(h,-h)})>0, \end{aligned}$$

and so \(E_{\infty }^{\ge h}(x_0)\) is non-compact with positive probability. \(\square \)

Prior to giving the proof of Theorem 3.7, we first briefly study some properties of the law of the capacity of the level sets of the Gaussian free field, when its Laplace transform is given by (\({Law}_h\)) (see above Theorem 1.1). The next lemma computes the corresponding density (on the event \( \{ {E}^{\ge h}(x_0) \ne \emptyset \}\)).

Lemma 5.2

For all \(u\ge 0\) and \(h\in {\mathbb {R}},\)

$$\begin{aligned} \int _{g(x_0,x_0)^{-1}}^\infty \rho _h(t)\exp (-ut)\,\mathrm {d}t={\mathbb {P}}^G\big (\varphi _{x_0}\ge \sqrt{2u+h^2}\big ), \end{aligned}$$
(5.4)

with \(\rho _h\) as defined in (3.8).

Proof

Taking \(v=u+h^2/2\) and \(a=g(x_0,x_0)^{-1},\) it is enough to show that

$$\begin{aligned} \int _{a}^\infty \frac{1}{t\sqrt{2\pi (t-a)}}\exp (-vt)\,\mathrm {d}t=\int _{\sqrt{2v}}^\infty \exp \Big (-\frac{at^2}{2}\Big )\,\mathrm {d}t\text { for all }v,a\ge 0. \end{aligned}$$
(5.5)

For \(v=0\) we have, taking \(s=\sqrt{t-a},\)

$$\begin{aligned} \int _{a}^\infty \frac{1}{t\sqrt{2\pi (t-a)}}\,\mathrm {d}t=\sqrt{\frac{2}{\pi }}\int _{0}^\infty \frac{1}{s^2+a}\,\mathrm {d}s=\sqrt{\frac{2}{a\pi }}\Big [\arctan \Big (\frac{s}{\sqrt{a}}\Big )\Big ]_0^\infty =\sqrt{\frac{\pi }{2a}}, \end{aligned}$$

and so (5.5) holds for \(v=0.\) Moreover, by dominated convergence, the left-hand side of (5.5) viewed as a function of \(v > 0\) is continuously differentiable with derivative

$$\begin{aligned} -\int _{a}^\infty \frac{1}{\sqrt{2\pi (t-a)}}\exp (-vt)\,\mathrm {d}t= & {} -\sqrt{\frac{2}{\pi }}\int _{0}^\infty \exp \big (-v(a+s^2)\big )\,\mathrm {d}s\\= & {} -\frac{1}{\sqrt{2v}}\exp (-va), \end{aligned}$$

and so is equal to the derivative with respect to v of the term on the right-hand side of (5.5). This yields (5.5) and hence (5.4). \(\square \)
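Note that the lower limit of integration in (5.4) reflects the bound \(\mathrm {{cap}}\big (E^{\ge h}(x_0)\big )\ge \mathrm {{cap}}(\{x_0\})=g(x_0,x_0)^{-1},\) which holds on the event \(\{\varphi _{x_0}\ge h\}\) by monotonicity of the capacity, and that setting \(u=0\) in (5.4) yields \(\int _{g(x_0,x_0)^{-1}}^{\infty }\rho _h(t)\,\mathrm {d}t={\mathbb {P}}^G(\varphi _{x_0}\ge h)\) for \(h\ge 0,\) in accordance with (\({Law}_h\)).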

We now proceed to the

Proof of Theorem 3.7

Consider the approximating sequence \({\mathcal {G}}_n\) introduced at the beginning of the proof of Theorem 3.2. In particular, (5.1) still holds (as a consequence of Lemma 4.4 and Proposition 4.2). Now, let \(h\ge 0\) and suppose \(E^{\ge h}(x_0)\) is \({\mathbb {P}}^G\)-a.s. bounded, hence compact in view of Lemma 3.1. Then (4.17) holds and one can safely pass to the limit in (5.1) using dominated convergence, thus obtaining that (\({Law}_h\)) holds on \({\widetilde{{\mathcal {G}}}}\). In turn, (3.8) holds on \({\widetilde{{\mathcal {G}}}}\) by means of Lemma 5.2. In particular, the previous argument shows that, if \(h\ge 0\) and \(E^{\ge h}(x_0)\) is \({\mathbb {P}}^G\)-a.s. bounded, then

$$\begin{aligned} \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(E_n^{\ge h}(x_0))\ \text {converges in law to}\ \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(E^{\ge h}(x_0)), \text {which is given by}\,\, (\mathrm {Law}_{h}).\nonumber \\ \end{aligned}$$
(5.6)

Assume now that (Cap) is fulfilled on \({\mathcal {G}}.\) Then (Sign) holds by Corollary 3.3, and so we obtain (3.9) from (5.6). In order to deduce (3.10), first observe that (3.10) holds on \({\widetilde{{\mathcal {G}}}}_n\) by means of Lemma 4.4 and Corollary 5.1, as (Cap) is trivially satisfied on \({\widetilde{{\mathcal {G}}}}_n\). For all \(h \ge 0\), due to (5.6) the random variable \({\mathrm{cap}}({E}_n^{\ge h}(x_0)) 1_{\varphi _{x_0}^{(n)}{\ge h}}\) converges in law to \({\mathrm{cap}}({E}^{\ge h}(x_0)) 1_{\varphi _{x_0}{\ge h}}\), hence so does \({\mathrm{cap}}({E}_n^{\ge -h}(x_0))1_{{\mathrm{cap}}({E}_n^{\ge -h}(x_0))\in {(0,\infty )}}\). To identify this with the law of \({\mathrm{cap}}({E}^{\ge -h}(x_0))1_{{\mathrm{cap}}({E}^{\ge -h}(x_0))\in {(0,\infty )}}\), one applies dominated convergence, noting that, due to (Cap) and Lemma 3.1, \({\mathrm{cap}}({E}^{\ge -h}(x_0))< \infty \) is tantamount to \({E}^{\ge -h}(x_0)\) being compact, and using (4.17). All in all, this gives (3.10). Finally (3.11) is an immediate consequence of (3.10), as in the proof of Corollary 5.1. This completes the proof of Theorem 3.7. \(\square \)

Remark 5.3

  1. (1)

    In view of the above proof of Theorem 3.7, we see that the validity of (\(\mathrm {Law}_{0}\)) (and thus, equivalently, of (Isom) by (3.14), once Theorem 3.9 is proved) can be viewed as a question about removing the compactness assumption in (4.17). Indeed, (\(\mathrm {Law}_{0}\)) holds if and only if there exists a sequence \({\mathcal {G}}_n\) of graphs verifying (\(\mathrm {Law}_{0}\)) increasing to \({\mathcal {G}}\) in the sense of (4.10) such that, \({\mathbb {P}}\)-a.s.,

    $$\begin{aligned} \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}\big (E_n^{\ge 0}(x_0)\big ){\mathop {\longrightarrow }\limits ^{n \rightarrow \infty }} \mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}\big (E_{\infty }^{\ge 0}(x_0)\big )\text { for all }x_0\in {{\widetilde{{\mathcal {G}}}}}. \end{aligned}$$
    (5.7)
  2. (2)

    Let \(K \subset {\widetilde{{\mathcal {G}}}}\) be connected and compact. By Lemmas  4.4 and 4.3, it follows that if \({\mathcal {G}}\) is a finite graph, then the compact clusters of \(E^{\ge -h}\) and \(E^{\ge h}\) have the same law. In particular,

    $$\begin{aligned} \text {the clusters of}\ E^{\ge -h}\ \text {and}\ E^{\ge h}\ \text {included in}\ K\ \text {have the same law.} \end{aligned}$$
    (5.8)

    The conclusion (5.8) remains true for an arbitrary transient graph \({\mathcal {G}}\). Indeed, by following the arguments of Proposition 1.11 in [26], starting from \({{\mathcal {G}}}^{\partial K}\), one can construct a transient weighted graph \({{\mathcal {G}}}_*^{\partial K}\) with (finite) vertex set \(G^{\partial K}\cap K\) (recall Lemma 2.1 for notation) whose weights coincide with \(\lambda _{x,y}^{\partial K}\) whenever \(x,y \in G^{\partial K}\cap K\) are neighbors in \({{\mathcal {G}}}^{\partial K}\), in such a way that \((\varphi _x)_{x\in {K}}\) has the same law under \({\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}}^G\) as under \({\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}^{\partial K}_*}^G\). The conclusion (5.8) for arbitrary \({\mathcal {G}}\) then simply follows by regarding the clusters of \(E^{\ge -h}\) and \(E^{\ge h}\) included in K as parts of \({\widetilde{{\mathcal {G}}}}^{\partial K}_*\). One can also prove that the conclusion (3.17) holds under condition (Sign) using (5.8), by considering a sequence of compacts increasing to \({\widetilde{{\mathcal {G}}}}.\)

6 Proof of Theorem 3.9

In this section, we prove Theorem 3.9, along with its corollaries. In particular, this comprises the equivalences (3.14), including the isomorphism (Isom) between random interlacements and the Gaussian free field, as well as its discrete counterpart (3.16). We first compare random interlacements on \({\mathcal {G}}={\mathcal {G}}_{{\bar{\kappa }}}\) (recall the notation from above (2.28)) with random interlacements on \({\mathcal {G}}_{{{\bar{\kappa }}}'}\) for some \({{\bar{\kappa }}}'\ge {{\bar{\kappa }}}\) in Lemma 6.2, and then take advantage of this comparison to approximate random interlacements on any transient graph by random interlacements on finite graphs as in (4.10), see Lemma 6.3. Together with the corresponding ‘finite-volume’ approximation of the Gaussian free field from Lemma 4.6, and in combination with the fact that Theorem 3.9 holds on finite graphs (see Lemma 4.4), we can then prove the isomorphism (Isom), see Lemma 6.4, under suitable assumptions. This is the key step of the proof of Theorem 3.9, presented thereafter. Finally, at the end of the section, we deduce from Theorem 3.9 that Corollaries 3.11 and 3.12 also hold.

We first dispense with the equivalence between (Isom) (see above Theorem 1.1) and (Isom’) (see above Theorem 3.9).

Lemma 6.1

The identity (Isom) holds true if and only if (Isom’) does.

Proof

It suffices to argue that \((\varphi _x 1_{x\notin {{\mathcal {C}}_u}}+\sqrt{\varphi _x^2+2\ell _{x,u}}\, 1_{x\in {{\mathcal {C}}_u}})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) has the same law under \({{\mathbb {P}}}^{I}\otimes {\mathbb {P}}^G \) as \((\sigma _x^u\sqrt{2\ell _{x,u}+\varphi _x^2})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) under \({\widetilde{{\mathbb {P}}}}\). By definition of \({{\mathcal {C}}_u}\) and since \(|\sigma _x^u|=1\), the absolute value of either field equals \(\sqrt{2\ell _{\cdot ,u}+\varphi _{\cdot }^2}\) in law. To deal with the signs, rewriting \(\varphi _x=\text {sign}(\varphi _x)\sqrt{\varphi _x^2+2\ell _{x,u}}\) for all \(x\notin {{\mathcal {C}}_u}\), one observes that the law of \((\text {sign}(\varphi _x)1_{x\notin {{\mathcal {C}}_u}}+1_{x\in {{\mathcal {C}}_u}})_{x\in {{\widetilde{{\mathcal {G}}}}}}\) under \(({\mathbb {P}}^I\otimes {\mathbb {P}}^G)(\cdot \,|\,|\varphi |,\omega ^u)\) is the same as the law of \(\sigma ^u\) under \({\widetilde{{\mathbb {P}}}}_{{\widetilde{{\mathcal {G}}}}}(\cdot \,|\,|\varphi |,\omega ^u)\), which follows immediately from the definitions of \({\mathcal {C}}_u\) and \(\sigma ^u\), respectively, together with Lemma 3.2 in [19] (the latter asserts that given \(|\varphi |\), the field \(\text {sign}(\varphi )\) is constant on each cluster of \(\{|\varphi |>0\},\) and the values on each cluster are independent and uniformly distributed, a consequence of the strong Markov property). \(\square \)
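To spell out the claim about absolute values, here is a short pointwise verification, valid under each of the two measures; it uses the observation (not restated in the proof above) that a.s. \(\ell _{x,u}=0\) whenever \(x\notin {{\mathcal {C}}_u}\), since a.s. every point of the open set \({{{\mathcal {I}}}}^u\) belongs to \({\mathcal {C}}_u\), together with \(|\sigma _x^u|=1\):

$$\begin{aligned} \big |\varphi _x 1_{x\notin {{\mathcal {C}}_u}}+\sqrt{\varphi _x^2+2\ell _{x,u}}\, 1_{x\in {{\mathcal {C}}_u}}\big |=\sqrt{\varphi _x^2+2\ell _{x,u}}\qquad \text {and}\qquad \big |\sigma _x^u\sqrt{2\ell _{x,u}+\varphi _x^2}\big |=\sqrt{2\ell _{x,u}+\varphi _x^2}, \end{aligned}$$

the first identity using precisely that \(\ell _{x,u}=0\) for \(x\notin {{\mathcal {C}}_u}\).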

We are now going to approximate random interlacements on any transient graph \({\mathcal {G}}\) by random interlacements on a sequence of finite graphs \({\mathcal {G}}_{n}\) increasing to \({\mathcal {G}}\) in the sense of (4.10). To this end, we first compare random interlacements on two graphs \({\mathcal {G}}= ({\overline{G}}, {\bar{\lambda }}, {\bar{\kappa }})\) and \({\mathcal {G}}' =({\overline{G}}, {\bar{\lambda }}, {\bar{\kappa }}')\) with killing measures \( {\bar{\kappa }}' \ge {\bar{\kappa }}\), and corresponding cable systems \({\widetilde{{\mathcal {G}}}}\) and \({\widetilde{{\mathcal {G}}}}'\). Thus, \({\widetilde{{\mathcal {G}}}}= {\widetilde{{\mathcal {G}}}}_{{{\bar{\kappa }}}}\), \({\widetilde{{\mathcal {G}}}}'= {\widetilde{{\mathcal {G}}}}_{{{\bar{\kappa }}}'}\) in the notation from the beginning of Sect. 2.3 and in particular, cf. (2.28), one can regard \({\widetilde{{\mathcal {G}}}}'\) as a subset of \({\widetilde{{\mathcal {G}}}}.\) Accordingly, for all trajectories \(w\in {W_{{\widetilde{{\mathcal {G}}}}}}\) with \(\zeta ^-< 0 < \zeta ^+\) (see Sect. 2.5 for notation; recall in particular that \(\zeta ^\pm \) are such that \(w(t)=\Delta \) if and only if \(t\notin {(\zeta ^-,\zeta ^+)}\)), we define the killing times \(\zeta _{{{\bar{\kappa }}}'}^{\pm }\) by

$$\begin{aligned} \zeta _{{{\bar{\kappa }}}'}^\pm (w)\overset{\text {def.}}{=}\pm \inf \big \{t\in {[0, \pm \zeta ^\pm (w))}:\, w(\pm t)\notin {{\widetilde{{\mathcal {G}}}}'}\big \},\quad \text {with the convention }\inf \varnothing =\pm \zeta ^\pm (w), \end{aligned}$$
(6.1)

so that \( \zeta ^-(w) \le \zeta _{{{\bar{\kappa }}}'}^{-}(w)< 0 < \zeta _{{{\bar{\kappa }}}'}^+(w) \le \zeta ^+(w)\) for any \(w\in {W_{{\widetilde{{\mathcal {G}}}}}}\). For any compact \(K\subset {\widetilde{{\mathcal {G}}}},\) we then introduce \(\pi _{K} :W_{K,{\widetilde{{\mathcal {G}}}}}^{0}\rightarrow W_{K,{\widetilde{{\mathcal {G}}}}'}^{0}\) by

$$\begin{aligned} \pi _{K}(w)(t) \equiv \pi _{K,{\widetilde{{\mathcal {G}}}},{\widetilde{{\mathcal {G}}}}'} (w)(t)={\left\{ \begin{array}{ll} w(t),&{}\text {if }t\in {(\zeta _{{{\bar{\kappa }}}'}^-(w),\zeta _{{{\bar{\kappa }}}'}^+(w))}, \\ \Delta ,&{}\text {otherwise,} \end{array}\right. } \end{aligned}$$
(6.2)

and denote by \(\pi _{K}^{*}:W_{K,{\widetilde{{\mathcal {G}}}}}^{*}\rightarrow W_{K,{\widetilde{{\mathcal {G}}}}'}^{*}\) the unique function such that \(p_{{\widetilde{{\mathcal {G}}}}'}^*\circ \pi _{K}(w)=\pi _{K}^{*}\circ p^*_{{\widetilde{{\mathcal {G}}}}}(w)\) for all \(w\in {W_{K,{\widetilde{{\mathcal {G}}}}}^{0}}.\) In words, \(\pi _{K}^{*}(w^*)\) is the doubly infinite trajectory modulo time shift on \({\widetilde{{\mathcal {G}}}}'\) whose forward and backward parts, seen from the first time of hitting K, are the forward and backward parts of \(w^*\) seen from the first time of hitting K, both stopped on exiting \({\widetilde{{\mathcal {G}}}}'.\)
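As a simple sanity check of these definitions (a direct consequence of (6.1) and (6.2)): if a trajectory never leaves \({\widetilde{{\mathcal {G}}}}'\), then nothing is cut off, that is,

$$\begin{aligned} w(t)\in {{\widetilde{{\mathcal {G}}}}'}\ \text { for all }t\in {(\zeta ^-(w),\zeta ^+(w))}\quad \Longrightarrow \quad \zeta _{{{\bar{\kappa }}}'}^{\pm }(w)=\zeta ^{\pm }(w)\ \text { and }\ \pi _{K}(w)=w. \end{aligned}$$

In general, \(\pi _{K}(w)\) retains exactly the excursion of w inside \({\widetilde{{\mathcal {G}}}}'\) straddling time 0 and sends the rest of the trajectory to the cemetery point \(\Delta .\)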

Lemma 6.2

\(({\widetilde{{\mathcal {G}}}}= ({\overline{G}}, {\bar{\lambda }}, {\bar{\kappa }}),\, {\widetilde{{\mathcal {G}}}}'= ({\overline{G}}, {\bar{\lambda }}, {\bar{\kappa }}'), \, {\bar{\kappa }}' \ge {\bar{\kappa }} )\). Let \(V\subset K\) be compact subsets of \({\widetilde{{\mathcal {G}}}}'\). There exists a non-negative measure \({\mu }^{K,V}={\mu }^{K,V}_{{\widetilde{{\mathcal {G}}}},{\widetilde{{\mathcal {G}}}}'}\) on \(W^*_{K,{\widetilde{{\mathcal {G}}}}'}\) such that

$$\begin{aligned} \big (\nu _{{\widetilde{{\mathcal {G}}}}}1_{W^*_{K,{\widetilde{{\mathcal {G}}}}}\setminus W^*_{V,{\widetilde{{\mathcal {G}}}}}}\big )\circ ({\pi }_{K}^*)^{-1}+{\mu }^{K,V}=\nu _{{\widetilde{{\mathcal {G}}}}'}1_{W^*_{K,{\widetilde{{\mathcal {G}}}}'}\setminus W^*_{V,{\widetilde{{\mathcal {G}}}}'}} \end{aligned}$$
(6.3)

(with a slight abuse of notation, the right-hand side is viewed as a measure on \(W^*_{K,{\widetilde{{\mathcal {G}}}}'}\)). Moreover,

$$\begin{aligned} {\mu }^{K,V}(W^*_{K,{\widetilde{{\mathcal {G}}}}'})=\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}'}(K)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}'}(V)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K)+\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(V). \end{aligned}$$
(6.4)

Proof

Throughout the proof, let \( {\widehat{\partial }}K\) be as in (2.13) but relative to \(P_x^{{\widetilde{{\mathcal {G}}}}'}\) (rather than \(P_x= P_x^{{\widetilde{{\mathcal {G}}}}}\)). Let \((G,\lambda ,\kappa )\) and \((G',\lambda ',\kappa ')\) refer to the induced graphs corresponding to \({\mathcal {G}}\) and \({\mathcal {G}}'\), respectively (cf. (2.12)). By considering the graphs \({\mathcal {G}}^{A}\) and \(({\mathcal {G}}')^{A}\) for any \(A \supset {\widehat{\partial }} K\) (see Lemma 2.1) instead of \({\mathcal {G}}\) and \({\mathcal {G}}'\), we can assume without loss of generality that \( {\widehat{\partial }} K\subset ({G} \cap G')\). By choosing \(A= A' \cup {\widehat{\partial }} K\), where \(A'\subset {\widetilde{{\mathcal {G}}}}'\) is a set containing exactly one (arbitrary) vertex between each \(x\in {{\widehat{\partial }} K}\) and \(y\in {\partial {\widetilde{{\mathcal {G}}}}'}\) which are connected by a cable, we can further ‘move away’ \({\widehat{\partial }}K\) from \(\partial {\widetilde{{\mathcal {G}}}}',\) so that \(d({\widehat{\partial }}K,\partial {{\widetilde{{\mathcal {G}}}}'})>1,\) where d is the canonical distance on \({\widetilde{{\mathcal {G}}}}\) defined above (1.4). All in all, we thus assume henceforth that

$$\begin{aligned} {\widehat{\partial }} K\subset {(G \cap G')} \quad \text {and}\quad d({\widehat{\partial }}K,\partial {{\widetilde{{\mathcal {G}}}}'})>1, \end{aligned}$$
(6.5)

which is no loss of generality. Recall \(X' \equiv X^{{{\bar{\kappa }}}'}\) and \(\zeta ' \equiv \zeta _{{{\bar{\kappa }}}'}\) from (2.29) and note that for all \(w\in {W_{K,{\widetilde{{\mathcal {G}}}}}^0},\) the forward part \(\{ (\pi _{K}(w))_t : 0 \le t \le \zeta _{{{\bar{\kappa }}}'}^+\}\) of \(\pi _{K}(w)\) from the time of first hitting K onward is precisely \(\{X_t'(w^+) : 0\le t \le \zeta '\}\), where \(w^+\) is the forward part of w. Recalling (6.1) as well as the notation from (2.16) and (2.38), we then define the countably additive set function \({\widetilde{\mu }}^{K,V}\) on \({\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}'}^0\) by

$$\begin{aligned} \begin{aligned} {\widetilde{\mu }}^{K,V}(A){\mathop {=}\limits ^{\text {def.}}}\sum _{x\in { {\widehat{\partial }} K}}&\Big (e_{K,{\widetilde{{\mathcal {G}}}}'}(x)P_x^{{\widetilde{{\mathcal {G}}}}'}\big (X \in A^+,{H}_{V}=\zeta \big )P^{K,{\widetilde{{\mathcal {G}}}}'}_x(X\in A^-) \\&-e_{K,{\widetilde{{\mathcal {G}}}}}(x)P_x^{{\widetilde{{\mathcal {G}}}}}\big (X'\in {A^+},{H}_{V}=\zeta \big )P^{K,{\widetilde{{\mathcal {G}}}}}_x\big (X'\in {A^-}\big )\Big ) \end{aligned} \end{aligned}$$
(6.6)

(note that following our convention below (2.22), \(\{{H}_{V}=\zeta \}\) under \(P_x^{{\widetilde{{\mathcal {G}}}}}\) refers to the event that V is not visited by X) with \(A^{\pm }\) denoting \(\{(w(\pm t))_{t\ge 0}:\,w\in {A}\}\) for all \(A \in {\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}'}^0\) and \(X'\) as introduced below (6.5). In (6.6), we also used implicitly the convention that \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)P^{K,{\widetilde{{\mathcal {G}}}}}_x=0\) for all \(x\in { {\widehat{\partial }}K}\) with \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)=0.\) Moreover, \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)\le e_{K,{\widetilde{{\mathcal {G}}}}'}(x)\) for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) by (2.6) and (2.17), and so it follows from (2.19) that \(\text {supp}(e_{K,{\widetilde{{\mathcal {G}}}}}) \subset {\widehat{\partial }} K.\) If \({\widetilde{\mu }}^{K,V}\) is non-negative on \({\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}'}^0\) we can extend it to a measure on \({\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}'}\) by taking \({\widetilde{\mu }}^{K,V}(A)=0\) for all \(A\in {{\mathcal {W}}_{K,{\widetilde{{\mathcal {G}}}}'}}\) with \(A\cap {W}_{K,{\widetilde{{\mathcal {G}}}}'}^0=\varnothing .\) Defining \({\mu }^{K,V}={\widetilde{\mu }}^{K,V}\circ (p_{{\widetilde{{\mathcal {G}}}}'}^*)^{-1},\) in view of (6.6), (2.39) and (2.40), it then follows that (6.3) is fulfilled.

We now show that \({\widetilde{\mu }}^{K,V}\) is non-negative. Recall \({\widehat{Z}}\), the discrete skeleton of Z, from below (2.4). We denote by \({\widehat{L}}_K=\sup \{n\in {{\mathbb {N}}}:\,{\widehat{Z}}_n\in {K}\}\) the last exit time of K for \({\widehat{Z}}\) and by \(L_K=\sup \{t\ge 0:\,X_t\in {K}\}\) the last exit time of K for X, with the convention \(\sup \varnothing =\infty ,\) so in particular \(\{X_{L_K}=x\}=\{{\widehat{Z}}_{{\widehat{L}}_K}=x\}\) for all \(x\in {{\widehat{\partial }}K}\) (on the event \(\{L_K< \infty \}=\{{\widehat{L}}_K < \infty \} \), which has full \(P_x^{{\widetilde{{\mathcal {G}}}}}\)-measure by transience). We also define \((Y_t)_{t> 0}\) as the process \((X_{t+L_K})_{t>0},\) killed the first time \((X_t)_{t\ge L_K}\) hits \(\partial {\widetilde{{\mathcal {G}}}}'.\) By definition of \(P^{K,{\widetilde{{\mathcal {G}}}}}_x\), see (2.38), and (2.19), we have for all \(x\in { {\widehat{\partial }} K}\) with \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)>0\) that

$$\begin{aligned}&e_{K,{\widetilde{{\mathcal {G}}}}}(x)P^{K,{\widetilde{{\mathcal {G}}}}}_x(X'\in {\cdot }\, ) \\&\ =e_{K,{\widetilde{{\mathcal {G}}}}}(x)P_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot }\,|\,X_{L_K}=x\big ) =\frac{1}{g_{{\widetilde{{\mathcal {G}}}}}(x,x)}P_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot },{\widehat{Z}}_{{\widehat{L}}_K}=x\big ) \\&\ =\frac{1}{g_{{\widetilde{{\mathcal {G}}}}}(x,x)}\sum _{n\ge 0}P_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot },{\widehat{Z}}_{n}{=}x,{\widehat{L}}_{K}{=}n\big ) {=}\lambda _xP_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot },{{\widehat{L}}_K}=0\big ); \end{aligned}$$

here, in the last equality, we used the strong Markov property at the time of the n-th jump and the fact that \(g_{{\widetilde{{\mathcal {G}}}}}(x,x)=\frac{1}{\lambda _x}\sum _{n\ge 0}P_x^{{\widetilde{{\mathcal {G}}}}}({\widehat{Z}}_n=x)\). By a similar calculation, and in view of (2.30), we obtain for \(x \in {\widehat{\partial }}K\),

$$\begin{aligned} e_{K,{\widetilde{{\mathcal {G}}}}'}(x)P^{K,{\widetilde{{\mathcal {G}}}}'}_x(X\in \cdot \,)&=\lambda '_xP_x^{{\widetilde{{\mathcal {G}}}}'}\big ((X_{t+L_K})_{t>0}\in {\cdot },{{\widehat{L}}_K}=0\big )\\&=\lambda '_xP_x^{{\widetilde{{\mathcal {G}}}}}\big ((X_{t+L_K})_{t>0}\in {\cdot },{{\widehat{L}}'_K}=0\big ), \end{aligned}$$

where \(L_K'\), \({\widehat{L}}'_K\) are defined as above but with \(X'\) in place of X. On the event \({\widehat{L}}_K=0,\) since \(d({\widehat{\partial }}K,\partial {{\widetilde{{\mathcal {G}}}}'})>1\) due to (6.5), we have and \(\lambda _x=\lambda '_x\) for all \(x\in {{\widehat{\partial }} K}.\) Hence, for all \(x\in {\widehat{\partial }} K\) with \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)>0\),

(6.7)

Note that if \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)=0\) and \(x\in {{\widehat{\partial }}K},\) then \(P^{{\widetilde{{\mathcal {G}}}}}_x\)-a.s., and so the previous equality still holds. Moreover, using (2.30), we have for all \(x\in { {\widehat{\partial }} K}\) that

$$\begin{aligned} P_x^{{\widetilde{{\mathcal {G}}}}'}\big (X \in \cdot ,{H}_{V}=\zeta \big )-P_x^{{\widetilde{{\mathcal {G}}}}}\big (X'\in {\cdot },{H}_{V}= \zeta \big )=P_x^{{\widetilde{{\mathcal {G}}}}}\big (X'\in \cdot ,\zeta>{H}_{V}>\zeta '\big ). \nonumber \\ \end{aligned}$$
(6.8)

Combining (6.6), (6.7) and (6.8), we thus obtain that, for \( A \in {\mathcal {W}}^0_{K,{\widetilde{{\mathcal {G}}}}'},\)

(6.9)

and so \({\widetilde{\mu }}^{K,V}\) is non-negative on \({\mathcal {W}}^0_{K,{\widetilde{{\mathcal {G}}}}'}.\) Finally, we have by (2.20) and (2.22) that

$$\begin{aligned} {\mu }^{K,V}(W^*_{K,{\widetilde{{\mathcal {G}}}}'})&={\widetilde{\mu }}^{K,V}(W_{K,{\widetilde{{\mathcal {G}}}}'}^0) {\mathop {=}\limits ^{(6.6)}}\sum _{x\in { {\widehat{\partial }} K}}\Big (e_{K,{\widetilde{{\mathcal {G}}}}'}(x)P^{{\widetilde{{\mathcal {G}}}}'}_x({H}_{V}=\zeta )-e_{K,{\widetilde{{\mathcal {G}}}}}(x)P^{{\widetilde{{\mathcal {G}}}}}_x({H}_{V}=\zeta )\Big ) \\&=\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}'}(K)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}'}({V})-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K)+\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}({V}), \end{aligned}$$

which gives (6.4) and completes the proof. \(\square \)
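To spell out the strong Markov step used in the chain of equalities for \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)P^{K,{\widetilde{{\mathcal {G}}}}}_x(X'\in {\cdot })\) in the proof above (a sketch, in the notation introduced there): for every \(n\ge 0\), on the event \(\{{\widehat{Z}}_n=x,\,{\widehat{L}}_K=n\}\) both the process \((Y_t)_{t>0}\) and the requirement of no further visits to K are functions of the path after the n-th jump time only, so that the Markov property applied at that time gives

$$\begin{aligned} P_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot },\,{\widehat{Z}}_{n}=x,\,{\widehat{L}}_{K}=n\big )=P_x^{{\widetilde{{\mathcal {G}}}}}\big ({\widehat{Z}}_{n}=x\big )\,P_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot },\,{\widehat{L}}_K=0\big ); \end{aligned}$$

summing over \(n\ge 0\) and using \(g_{{\widetilde{{\mathcal {G}}}}}(x,x)=\frac{1}{\lambda _x}\sum _{n\ge 0}P_x^{{\widetilde{{\mathcal {G}}}}}({\widehat{Z}}_n=x)\) then yields the identity \(e_{K,{\widetilde{{\mathcal {G}}}}}(x)P^{K,{\widetilde{{\mathcal {G}}}}}_x(X'\in {\cdot })=\lambda _xP_x^{{\widetilde{{\mathcal {G}}}}}\big ((Y_t)_{t>0}\in {\cdot },{{\widehat{L}}_K}=0\big )\) displayed there.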

In words, when \(V\subset K\) are compact subsets of \({\widetilde{{\mathcal {G}}}}',\) the difference between the trajectories under \(\nu _{{\widetilde{{\mathcal {G}}}}}\) and \(\nu _{{\widetilde{{\mathcal {G}}}}'}\) that hit K but not V comes in two parts: first, it is more likely for the forward trajectories not to hit V before time \(\zeta '\) than before time \(\zeta ;\) second, it is more likely for the backward trajectories not to return to K before time \(\zeta '\) than before time \(\zeta \). These two differences are contained in the measure \(\mu _{{\widetilde{{\mathcal {G}}}},{\widetilde{{\mathcal {G}}}}'}^{K,V}\) from (6.3), see (6.9).

Taking a sequence \((K_p)_{p\in {\mathbb {N}}}\) of compacts increasing to \({\widetilde{{\mathcal {G}}}}',\) one can then use Lemma 6.2 to construct a random interlacement process on \({\widetilde{{\mathcal {G}}}}'\) from the random interlacement process \(\omega \) on \({\widetilde{{\mathcal {G}}}}\): take the image through \(\pi _{K_p}^*\) of each trajectory in the support of \(\omega \) hitting \(K_p\) but not \(K_{p-1}\) for all \(p\in {\mathbb {N}},\) with \(K_0=\varnothing ,\) and add Poisson point processes with intensity \(\mu _{{\widetilde{{\mathcal {G}}}},{\widetilde{{\mathcal {G}}}}'}^{K_{p},K_{p-1}}\otimes \lambda \) for all \(p\in {\mathbb {N}}.\) Using this construction and the estimate (6.4), we will now suitably approximate random interlacements on \({\mathcal {G}}\) by random interlacements on a sequence of finite graphs, thus mirroring Lemma 4.6.
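In formulas (a compact way to record this construction, using only (6.3) and the fact that the compacts \(K_p\) exhaust \({\widetilde{{\mathcal {G}}}}'\), so that every trajectory charged by \(\nu _{{\widetilde{{\mathcal {G}}}}'}\) hits some \(K_p\)): summing (6.3) over \(p\ge 1\) with \(V=K_{p-1}\) and \(K_0=\varnothing \) gives

$$\begin{aligned} \nu _{{\widetilde{{\mathcal {G}}}}'}=\sum _{p\ge 1}\Big (\big (\nu _{{\widetilde{{\mathcal {G}}}}}1_{W^*_{K_p,{\widetilde{{\mathcal {G}}}}}\setminus W^*_{K_{p-1},{\widetilde{{\mathcal {G}}}}}}\big )\circ \big ({\pi }_{K_p}^*\big )^{-1}+{\mu }^{K_p,K_{p-1}}_{{\widetilde{{\mathcal {G}}}},{\widetilde{{\mathcal {G}}}}'}\Big ), \end{aligned}$$

which is precisely the intensity of the point process obtained by the two operations just described.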

Lemma 6.3

Let \({\mathcal {G}}\) be a transient weighted graph and \({\mathcal {G}}_n,\) \(n\in {\mathbb {N}},\) be a sequence of transient weighted graphs increasing to \({\mathcal {G}}_{\infty }={\mathcal {G}}\) in the sense of (4.10). There exists a probability space \((\Omega ',{\mathcal {F}}',{\mathbb {P}}')\) on which one can define a sequence of processes \(\omega ^{(n)},\) \(n\in {\mathbb {N}},\) and \(\omega ^{(\infty )}\) with the following properties:

$$\begin{aligned}&\ \text { for all}\ n\in {\mathbb {N}}\cup \{\infty \},\ \text {the process}\ \omega ^{(n)}\ \text {has the same law as}\ \omega \ \text {under}\ {\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}_{n}}^{I};\end{aligned}$$
(6.10)
$$\begin{aligned}&\begin{array}{l} \text {there exists an increasing sequence}\ (a_n)_{n\in {\mathbb {N}}}\ \text {such that for each}\ u>0, {\mathbb {P}}'\text {-a.s.~for}\\ \text {all compact}\ K\subset {\widetilde{{\mathcal {G}}}},\ \text {the restriction to}\ K\ \text {of the set of trajectories hitting}\ K\ \text {is the}\\ \text {same for}\ \omega ^{(a_n)}_u\ \text {and}\ \omega ^{(\infty )}_u\ \text {for all}\ n\ \text {large enough.} \end{array} \end{aligned}$$
(6.11)

Proof

Let \((K_n)_{n\in {\mathbb {N}}}\) be a sequence such that \(K_n\) is a compact subset of \({\widetilde{{\mathcal {G}}}}_n\) for each \(n\in {\mathbb {N}},\) and such that \(K_n,\) \(n\in {\mathbb {N}},\) increases to \({\widetilde{{\mathcal {G}}}}.\) Let \(\omega ^{(\infty )}\) be a Poisson point process under \((\Omega ',{\mathcal {F}}',{\mathbb {P}}')\) with the same law as the random interlacement process \(\omega \) under \({\mathbb {P}}^I_{{\widetilde{{\mathcal {G}}}}}.\) For each \(n\in {\mathbb {N}}\) and \(k\in \{1,\dots ,n\},\) we define, recalling the notation from (4.10), the process \(\omega _1^{(k,n)}\) as the Poisson point process which is given by the image through \( \pi ^*_{k,n} \equiv \pi _{ K_k, {\widetilde{{\mathcal {G}}}}, {\widetilde{{\mathcal {G}}}}_n}^*\), cf. (6.2), of all the trajectories in \(\omega ^{(\infty )}_u\) which hit \(K_{k}\) but not \(K_{k-1},\) with the convention \(K_0=\varnothing ;\) this constitutes a Poisson point process with intensity \((\nu _{{\widetilde{{\mathcal {G}}}}}1_{W^*_{K_k,{\widetilde{{\mathcal {G}}}}}\setminus W^*_{K_{k-1},{\widetilde{{\mathcal {G}}}}}})\circ \big ( \pi ^*_{k,n} )^{-1}.\) By suitably extending \({\mathbb {P}}'\) we further introduce \(\omega _2^{(k,n)}\) as an independent Poisson point process with intensity \(\mu ^{K_{k},K_{k-1}}_{{\widetilde{{\mathcal {G}}}}, {\widetilde{{\mathcal {G}}}}_n}\otimes \lambda \) (see Lemma 6.2) and \(\omega _3^{(n)}\) as an independent Poisson point process with intensity \((\nu _{{\widetilde{{\mathcal {G}}}}_{n}}1_{(W_{K_n,{\widetilde{{\mathcal {G}}}}_{n}}^*)^c})\otimes \lambda .\) Thus, defining for each \(n\in {\mathbb {N}}\)

$$\begin{aligned} \omega ^{(n)}{\mathop {=}\limits ^{\text {def.}}}\omega _3^{(n)}+\sum _{k=1}^{n}\big (\omega _1^{(k,n)}+\omega _2^{(k,n)}\big ), \end{aligned}$$

we have by (6.3) that \(\omega ^{(n)}\) has the same law as \(\omega \) under \({\mathbb {P}}_{{\widetilde{{\mathcal {G}}}}_{n}}^{I},\) whence (6.10).

We now argue that (6.11) holds. Let \(u>0\) and \(p\in {\mathbb {N}}.\) By definition, no trajectories of \(\omega _1^{(k,n)},\) \(\omega _2^{(k,n)}\) and \(\omega _3^{(n)}\) hit \(K_p\) if \(p<k\le n.\) Moreover, there are only finitely many trajectories in \(\omega _u^{(\infty )}\) hitting \(K_p,\) each returning finitely many times to \(K_p,\) and so for each \(k\in \{1,\dots ,p\},\) the restriction to \(K_p\) of all the trajectories of \(\omega _1^{(k,n)}\) at level u hitting \(K_p\) is constant, equal to the restriction to \(K_p\) of the corresponding trajectories of \(\omega _u^{(\infty )},\) for all n large enough. By (6.4), for each \(n\ge p,\) the number of trajectories in \(\sum _{k=1}^p\omega _2^{(k,n)}\) at level u is a Poisson random variable with parameter \(u(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(K_p)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K_p)),\) and one can easily prove by (2.6), (2.16) and (2.20), since \(K_p\) is compact, that \(\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(K_p)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K_p)\rightarrow 0\) as \(n \rightarrow \infty \). As a consequence of Borel-Cantelli, one can thus find a sequence \((a_n)_{n\in {\mathbb {N}}}\) such that \({\mathbb {P}}'\)-a.s., \(\sum _{k=1}^p\omega _2^{(k,a_n)}\) contains no trajectory at level u for all \(u>0\) and n large enough, and by a diagonal argument, one can take \((a_n)_{n\in {\mathbb {N}}}\) independent of the choice of p. Since for all compacts \(K \subset {\widetilde{{\mathcal {G}}}},\) there exists \(p\in {\mathbb {N}}\) such that \(K\subset K_p,\) and \({\mathbb {P}}'\)-a.s., the restriction to \(K_p\) of all the trajectories of \(\omega ^{(a_n)}_u\) hitting \(K_p\) coincides with that of \(\omega ^{(\infty )}_u\) for all n large enough, we conclude (6.11). \(\square \)
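To make the Poisson-parameter computation in the proof above explicit (a direct telescoping of (6.4), using \(\mathrm {{ cap}}(\varnothing )=0\)): the total mass at level u of the intensity of \(\sum _{k=1}^p\omega _2^{(k,n)}\) equals

$$\begin{aligned} u\sum _{k=1}^{p}{\mu }^{K_k,K_{k-1}}_{{\widetilde{{\mathcal {G}}}},{\widetilde{{\mathcal {G}}}}_n}\big (W^*_{K_k,{\widetilde{{\mathcal {G}}}}_n}\big )&=u\sum _{k=1}^{p}\Big (\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(K_k)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(K_{k-1})-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K_k)+\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K_{k-1})\Big )\\&=u\big (\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}(K_p)-\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}}(K_p)\big ), \end{aligned}$$

which is the parameter appearing in the Borel-Cantelli step.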

Together, Lemmas 4.6 and 6.3 supply suitable ‘finite-volume’ approximations for the Gaussian free field and random interlacements on a general transient weighted graph \({\widetilde{{\mathcal {G}}}}\). With the help of Lemma 4.4, this yields the following result, from which Theorem 3.9 will readily follow.

Lemma 6.4

If either (Sign) or (\({Law}_{0}\)) is fulfilled, then (Isom) and (4.9) hold true on \({\mathcal {G}}.\)

Proof

Let \({\mathcal {G}}_n,\) \(n\in {\mathbb {N}}\) be a sequence of finite graphs increasing to \({\mathcal {G}}\) in the sense of (4.10) (for instance, the one introduced at the beginning of the proof of Theorem 3.2) and consider the space \((\Omega \times \Omega ',{\mathcal {F}}\otimes {\mathcal {F}}',{\mathbb {P}}\otimes {\mathbb {P}}'),\) which is the product of the probability spaces from Lemmas 4.6 and 6.3. By passing to a subsequence of \({\mathcal {G}}_n,\) \(n\in {\mathbb {N}},\) we may assume that \(a_n=n\) in (6.11). Note that Lemma 4.4 applies to \({\mathcal {G}}_n.\) For each \(n\in {\mathbb {N}}\cup \{\infty \},\) let \((\ell _{x,u}^{(n)})_{x\in {{\widetilde{{\mathcal {G}}}}_{n}}}\) denote the total local times of the trajectories of \(\omega _u^{(n)},\) \({{{\mathcal {I}}}}^u_{n}=\{x\in {{\widetilde{{\mathcal {G}}}}_{n}}:\,\ell _{x,u}^{(n)}>0\},\) \({\Sigma _n(x)}=\{y\in {{\widetilde{{\mathcal {G}}}}_{n}}:x\leftrightarrow y\text { in }\{z\in {{\widetilde{{\mathcal {G}}}}_{n}}:|\varphi _z^{(n)}|>0\}\}\) and \(\overline{\Sigma _n(x)}\) its closure for all \(x\in {{\widetilde{{\mathcal {G}}}}_{n}},\) as well as \({\mathcal {C}}_{u,n}\) the closure of \(\{x\in {{\widetilde{{\mathcal {G}}}}}:{\Sigma _n(x)}\cap {{{\mathcal {I}}}}^u_n\ne \varnothing \}.\) Let us first prove that there exists a sequence \((b_n)_{n\in {\mathbb {N}}}\) such that, \({\mathbb {P}}\otimes {\mathbb {P}}'\)-a.s. for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) with \(|\varphi _x^{(\infty )}|>0,\)

$$\begin{aligned} \big \{x\in {{\mathcal {C}}_{u,\infty }}\big \}=\liminf _{n\rightarrow \infty }\big \{x\in {{\mathcal {C}}_{u,b_n}}\big \}=\limsup _{n\rightarrow \infty }\big \{x\in {{\mathcal {C}}_{u,b_n}}\big \}. \end{aligned}$$
(6.12)

For this purpose, consider \(x\in {{\widetilde{{\mathcal {G}}}}}\) with \(|\varphi _x^{(\infty )}|>0.\) If \(x\in {{\mathcal {C}}_{u,\infty }},\) then there exists \(y\in {{{{\mathcal {I}}}}_{\infty }^u\cap \Sigma _{\infty }(x)}.\) By (6.11), \(y\in {{{{\mathcal {I}}}}^{u}_{n}}\) for n large enough and there is a path \(\pi \subset {\widetilde{{\mathcal {G}}}}\) between x and y in \(\{z\in {{\widetilde{{\mathcal {G}}}}}:|\varphi _z^{(\infty )}|>0\}.\) Since \(\pi \) can be chosen to be compact, by (4.12) we have \(\varphi ^{(n)}=\varphi ^{(\infty )}\) on \(\pi \) for all n large enough. Therefore, \(\pi \) is also a path between x and y in \(\{z\in {{\widetilde{{\mathcal {G}}}}}:|\varphi _z^{(n)}|>0\},\) and so \(y\in {{{{\mathcal {I}}}}^u_{n}\cap \Sigma _{n}(x)}\) for n large enough, that is \(x\in {{\mathcal {C}}_{u,n}}.\) As a consequence,

$$\begin{aligned} \big \{x\in {{\mathcal {C}}_{u,\infty }}\big \}\subset \liminf _{n\rightarrow \infty }\big \{x\in {{\mathcal {C}}_{u,n}}\big \}(\subset \limsup _{n\rightarrow \infty }\big \{x\in {{\mathcal {C}}_{u,n}}\big \}). \end{aligned}$$
(6.13)

To prove the reverse inclusions in (6.12), first assume that (Sign) is fulfilled and that \(x\in {{\mathcal {C}}_{u,n}}\) for infinitely many n. By (4.12) and (6.11), since \(\overline{\Sigma _{\infty }(x)}\) is compact, we have that \(\varphi ^{(n)}\) and \({{{\mathcal {I}}}}^u_n\) are constant for n large enough on \(\overline{\Sigma _{\infty }(x)},\) and then \(\Sigma _n(x)\cap {{{\mathcal {I}}}}_n^u=\Sigma _{\infty }(x)\cap {{{\mathcal {I}}}}_\infty ^u\) for n large enough. Therefore, infinitely often, \({{{\mathcal {I}}}}_{\infty }^u\cap \Sigma _{\infty }(x)={{{\mathcal {I}}}}_{n}^u\cap \Sigma _n(x)\ne \varnothing \) (note that x cannot lie in the boundary since \({{{\mathcal {I}}}}_{\infty }^u\), \({{{\mathcal {I}}}}_{n}^u\) are open and \(|\varphi _x^{(n)}|>0\) for large enough n), that is \(x\in {{\mathcal {C}}_{u,\infty }}.\) Combining with (6.13), we obtain (6.12) with \(b_n=n.\)

Now suppose that (\(\mathrm {Law}_{0}\)) holds on \({\mathcal {G}}.\) For all \(n\in {\mathbb {N}}\cup \{\infty \},\) by (2.41), since \({{{\mathcal {I}}}}^u_n\) is open,

$$\begin{aligned} ({\mathbb {P}}\otimes {\mathbb {P}}')\big (x\in {{\mathcal {C}}_{u,n}}\big )= & {} ({\mathbb {P}}\otimes {\mathbb {P}}')\big ({{{\mathcal {I}}}}^u_n\cap \overline{\Sigma _n(x)} \ne \emptyset \big )\nonumber \\= & {} 1-{\mathbb {E}}\Big [\exp \Big (-u\mathrm {{ cap}}_{{\widetilde{{\mathcal {G}}}}_n}\big (\overline{\Sigma _n(x)}\big )\Big )\Big ]. \end{aligned}$$
(6.14)

As \({\mathcal {G}}_n\) is finite for each \(n\in {\mathbb {N}},\) Lemma 4.4 and Proposition 4.2 imply that (\(\mathrm {Law}_{0}\)) holds on \({\widetilde{{\mathcal {G}}}}_{n}.\) Therefore, denoting by \(\Phi \) the distribution function of a standard Gaussian random variable, by symmetry of \(\varphi ^{(n)}\) we obtain that

$$\begin{aligned} \begin{aligned} ({\mathbb {P}}\otimes {\mathbb {P}}')\big (x\in {{\mathcal {C}}_{u,n}}\big )&{\mathop {=}\limits ^{(6.14),(\mathrm {Law}_0)}}1-2{\mathbb {P}}^G_{{\widetilde{{\mathcal {G}}}}_{n}}(\varphi _{x}\ge \sqrt{2u}) =2\Phi \big (\sqrt{2u}(g_{{\widetilde{{\mathcal {G}}}}_{n}}(x,x))^{-1/2}\big )-1\\&\quad \ \displaystyle \mathop {\longrightarrow }_{n\rightarrow \infty }2\Phi \big (\sqrt{2u}(g_{{\widetilde{{\mathcal {G}}}}}(x,x))^{-1/2}\big )-1 =({\mathbb {P}}\otimes {\mathbb {P}}')\big (x\in {{\mathcal {C}}_{u,\infty }}\big ), \end{aligned} \end{aligned}$$
(6.15)

taking advantage of the validity of (\(\mathrm {Law}_{0}\)) for the graph \({\mathcal {G}}\) and of (6.14) in the last equality. Hence, using (6.13) and (6.15), there exists a sequence \((b_n)_{n\in {\mathbb {N}}}\) such that

$$\begin{aligned} \sum _{n\in {\mathbb {N}}}{\mathbb {P}}\otimes {\mathbb {P}}'\big (\big \{x\in {{\mathcal {C}}_{u,b_n}}\big \}\setminus \big \{x\in {{\mathcal {C}}_{u,\infty }}\big \}\big )<\infty , \end{aligned}$$

and Borel-Cantelli entails that \(({\mathbb {P}}\otimes {\mathbb {P}}')\)-a.s., \(\limsup _{n\rightarrow \infty }\big \{x\in {{\mathcal {C}}_{u,b_n}}\big \}=\big \{x\in {{\mathcal {C}}_{u,\infty }}\big \}\). Using a diagonal argument and the separability of \({\widetilde{{\mathcal {G}}}},\) we can actually choose the sequence \((b_n)_{n\in {\mathbb {N}}}\) uniformly in \(x\in {{\widetilde{{\mathcal {G}}}}}.\) Combining with (6.13), we obtain (6.12).
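To spell out the convergence underlying this choice of \((b_n)\) (with the ad hoc abbreviations \(A_n=\{x\in {{\mathcal {C}}_{u,n}}\}\) and \(A=\{x\in {{\mathcal {C}}_{u,\infty }}\}\)): the inclusion in (6.13) gives \(({\mathbb {P}}\otimes {\mathbb {P}}')(A\setminus A_n)\rightarrow 0\) by dominated convergence, and combining this with the convergence of probabilities in (6.15),

$$\begin{aligned} ({\mathbb {P}}\otimes {\mathbb {P}}')(A_n\setminus A)=({\mathbb {P}}\otimes {\mathbb {P}}')(A_n)-({\mathbb {P}}\otimes {\mathbb {P}}')(A)+({\mathbb {P}}\otimes {\mathbb {P}}')(A\setminus A_n)\mathop {\longrightarrow }\limits ^{n \rightarrow \infty }0, \end{aligned}$$

so that a subsequence \((b_n)_{n\in {\mathbb {N}}}\) along which these probabilities are summable can indeed be extracted.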

By passing to a subsequence of \({\mathcal {G}}_n,\) \(n\in {\mathbb {N}},\) we assume without loss of generality from now on that \(b_n=n\) in (6.12), which, together with (4.12) and (6.11) directly implies that

$$\begin{aligned}&\lim \limits _{n\rightarrow \infty }\Big ( \varphi _x^{(n)} 1_{x\notin {{\mathcal {C}}_{u,n}}}+\sqrt{(\varphi _x^{(n)})^2+2\ell _{x,u}^{(n)}} \, 1_{x\in {{\mathcal {C}}_{u,n}}}\Big ) =\varphi _x^{(\infty )} 1_{x\notin {{\mathcal {C}}_{u,\infty }}}+\sqrt{(\varphi _x^{(\infty )})^2+2\ell _{x,u}^{(\infty )}}\, 1_{x\in {{\mathcal {C}}_{u,\infty }}}.\nonumber \\ \end{aligned}$$
(6.16)

for all \(x\in {{\widetilde{{\mathcal {G}}}}}\) with \(\varphi _x^{(\infty )}\ne 0.\) Moreover, if \(\varphi _x^{(\infty )}=0,\) then by (4.12) and (6.11) we have \(\varphi _x^{(n)}=0\) and \(\ell _{x,u}^{(n)}=\ell _{x,u}^{(\infty )}\) for all n large enough, and so (6.16) remains true. Since \({\mathcal {G}}_n\) is finite for all \(n\in {\mathbb {N}},\) Lemmas 6.1 and 4.4 yield that (Isom) holds on \({\mathcal {G}}_n\) for all \(n\in {\mathbb {N}},\) and, noting that \(\varphi _x^{(n)}+\sqrt{2u}\rightarrow \varphi _x^{(\infty )}+\sqrt{2u}\) as \(n \rightarrow \infty \) and applying (6.16), we infer that (Isom) holds for \({\mathcal {G}}_{\infty }={\mathcal {G}}.\)
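The last step uses only the following elementary fact, recorded here for definiteness (with \(X^{(n)},Y^{(n)},X,Y\) generic notation, not used elsewhere): if two sequences of random fields have the same law for every n and each converges pointwise almost surely, then the limits have the same law, as one sees by testing against bounded continuous functions of finitely many coordinates. Schematically,

$$\begin{aligned} X^{(n)}\overset{\mathrm {law}}{=}Y^{(n)}\ \text { for all }n,\qquad X^{(n)}\rightarrow X,\ Y^{(n)}\rightarrow Y\ \text { a.s. pointwise}\quad \Longrightarrow \quad X\overset{\mathrm {law}}{=}Y, \end{aligned}$$

applied here, in view of the convergences just noted, to the fields appearing on either side of (Isom) on \({\mathcal {G}}_n.\)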

It remains to show that (4.9) holds (on \({{\mathcal {G}}}\)). Fix \(e \in E\cup G\). For sufficiently large n, which we will tacitly assume henceforth, \(e\in E_n\cup G_n\), where \((G_n,E_n)\) refers to the graph induced by \({\mathcal {G}}_n\). Define for all \(n\in {\mathbb {N}}\cup \{\infty \}\) the random set of edges and vertices \({\mathcal {E}}_u^{(n)}=\{e\in {E_n\cup {G}_n}:\,2\ell _{x,u}^{(n)}+(\varphi _x^{(n)})^2>0\text { for all }x\in {I_e}\}.\) By Lemma 4.4 applied to \({\mathcal {G}}_n\), we have for all \(n\in {\mathbb {N}}\) that

$$\begin{aligned} ({\mathbb {P}}\otimes {\mathbb {P}}')\big (e\in {{\mathcal {E}}_u^{(n)}}\,|\,{\widehat{\omega }}_u^{(n)},\varphi ^{(n)}_{|G_n}\big )= 1_{e\in {{{{\mathcal {I}}}}_{E,n}^u}}\vee p_e^{{\mathcal {G}}_n}(\varphi ^{(n)},\ell ^{(n)}_{.,u}), \end{aligned}$$

where \({{{\mathcal {I}}}}_{E,n}^u\) is the union of the set of edges crossed by the trace \({\widehat{\omega }}_u^{(n)}\) of \(\omega _u^{(n)}\) on \(G_n,\) and of the set of vertices on which a trajectory of \({\widehat{\omega }}_u^{(n)}\) is killed. Moreover, using (2.35) and (2.42), we have that for any finite set \(S \subset (E\cup G)\), conditionally on \((\varphi _x^{(n)})_{x\in {{G_n}}}\) and \({\widehat{\omega }}_u^{(n)},\) the family \(\{e\in {{\mathcal {E}}_u^{(n)}}\},\) \(e \in S,\) is independent for all large enough n (including \(\infty \)), and for all \(e\in S,\)

$$\begin{aligned} ({\mathbb {P}}\otimes {\mathbb {P}}')\big (e\in {{\mathcal {E}}_u^{(n)}}\,|\,{\widehat{\omega }}_u^{(n)},\varphi ^{(n)}_{|G_n}\big )=({\mathbb {P}}\otimes {\mathbb {P}}')\big (e\in {{\mathcal {E}}_u^{(n)}}\,|\,{\widehat{\omega }}_{u,e}^{(n)},(\varphi ^{(n)})_{|e}\big ). \end{aligned}$$

Note that \(({\mathbb {P}}\otimes {\mathbb {P}}')\)-a.s., for all large enough n, we have and \((\varphi ^{(n)})_{|e}=(\varphi ^{(\infty )})_{|e}\) and  \({\widehat{\omega }}_{u,e}^{(n)}={\widehat{\omega }}_{u,e}^{(\infty )}\) as well as \(1\{ e\in {{{\mathcal {I}}}}_{E,n}^u \}= 1\{ e\in {{{\mathcal {I}}}}_{E,\infty }^u\}\) for each \(e\in S\) by (4.12) and (6.11). Now due to (3.12) and (3.13), we also have \(p_e^{{\mathcal {G}}_n}(\varphi ^{(n)},\ell ^{(n)}_{.,u})=p_e^{{\mathcal {G}}}(\varphi ^{(\infty )},\ell ^{(\infty )}_{.,u})\) for each \(e\in S\) and all n large enough, and so

$$\begin{aligned} ({\mathbb {P}}\otimes {\mathbb {P}}')\big (e\in {{\mathcal {E}}_u^{(\infty )}}\,|\,{\widehat{\omega }}_u^{(\infty )},\varphi ^{(\infty )}_{|G}\big )= 1_{e\in {{{{\mathcal {I}}}}_{E,\infty }^u}}\vee p_e^{{\mathcal {G}}}(\varphi ^{(\infty )},\ell ^{(\infty )}_{.,u}), \end{aligned}$$

which yields (4.9) for the graph \({\mathcal {G}}\) on account of (4.11), (6.10) and since \(S \subset (E\cup G)\) was arbitrary. \(\square \)

Let us now quickly explain how to deduce Theorem 3.9 and Corollaries 3.11 and 3.12 from Lemma 6.4.

Proof of Theorem 3.9

We start with the proof of (3.14). If (Isom’) holds, then (\({Law}_h\))\(_{h>0}\) also holds by Proposition 4.2. If (\({Law}_h\))\(_{h>0}\) holds, then (\(\mathrm {Law}_{0}\)) also holds by taking the limit as \(h\searrow 0\) in (\({Law}_h\)) and using (2.23). If (\(\mathrm {Law}_{0}\)) holds, then (Isom) also holds by Lemma 6.4. Since (Isom’) and (Isom) are equivalent by Lemma 6.1, we obtain (3.14).
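Schematically, the cycle of implications just established can be summarized as follows (the labels indicate the ingredient used at each step):

$$\begin{aligned} \text {(Isom')}\ \overset{\text {Prop. 4.2}}{\Longrightarrow }\ ({Law}_h)_{h>0}\ \overset{h\searrow 0}{\Longrightarrow }\ (\mathrm {Law}_{0})\ \overset{\text {Lemma 6.4}}{\Longrightarrow }\ \text {(Isom)}\ \overset{\text {Lemma 6.1}}{\Longleftrightarrow }\ \text {(Isom')}. \end{aligned}$$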

Let us now assume that one of the conditions in (3.14) holds. Then by Lemma 6.4, we have that (Isom’) and (4.9) hold. Moreover, the family \(\{e\in {{\mathcal {E}}_u}\},\) \(e\in {E\cup {G}},\) is independent conditionally on \({\widehat{\omega }}_u\) and \((\varphi _x)_{x\in {G}}\) by (2.35) and (2.42), and, by (4.9) we thus have that \(({\mathcal {E}}_u,(\varphi _x)_{x\in {{G}}},{\widehat{\omega }}_u)\) has the same law under \({\widetilde{{\mathbb {P}}}}\) as \((\mathcal {{\widehat{E}}}_u,(\varphi _x)_{x\in G},{\widehat{\omega }}_u)\) under \({\widehat{{\mathbb {P}}}}.\) Finally, since by (2.41) and (2.32) \({\mathbb {P}}^I({{{\mathcal {I}}}}^u\cap I_x\ne \varnothing )=1\) for all \(x\in {G}\) with \(\kappa _x>0,\) for each \(x\in {G},\) we have \(x\in {{\mathcal {C}}_u}\cap G\) if and only if there is a path \(\pi \subset {\mathcal {E}}_u\cap E\) between x and some \(y\in {({{{\mathcal {I}}}}^u\cup {\mathcal {E}}_u)\cap G},\) and so \((\sigma _x)_{x\in {G}}\) and \({\widehat{\sigma }}\) also have the same law. The equality (3.16) then follows directly from (Isom’). \(\square \)

Proof of Corollary 3.11

Let \({\mathcal {G}}\) be a graph such that (\(\mathrm {Law}_{0}\)) is fulfilled. Then (Isom) holds by (3.14). Let us assume that \({E}^{\ge 0}\) contains at least one non-compact component with positive probability. In particular, there exists \(x_0\in {{\widetilde{{\mathcal {G}}}}}\) such that \(E^{\ge 0}(x_0)\) is non-compact with positive probability. By Theorem 3.2, we know that \(\mathrm {{ cap}}(E^{\ge 0}(x_0))<\infty \) \({\mathbb {P}}^G\)-a.s., and so, by Lemma 3.1, \(E^{\ge 0}(x_0)\) is in fact unbounded with positive probability. Now, by (2.41), it follows that for all \(u>0,\) with \(({{\mathbb {P}}}^I\otimes {\mathbb {P}}^G)\)-positive probability, \(E^{\ge 0}(x_0)\) is unbounded and \(x_0\notin {{\mathcal {C}}_{u}}.\) By (Isom) and symmetry of the Gaussian free field, we obtain that for all \(u>0,\) \(E^{\ge \sqrt{2u}}(x_0)\) is unbounded with positive probability. In particular, if \({\widetilde{h}}_*^{\mathrm{com}} >0,\) then \(E^{\ge 0}\) contains a non-compact component with positive probability, and so \(E^{\ge h}\) contains an unbounded component for all \(h>0\) by the above reasoning, that is, \({\widetilde{h}}_*=\infty .\) If moreover \({\mathbf {h}}_{\text {kill}}<1,\) then \({\widetilde{h}}_*\ge 0\) by (1.8). Therefore, by (3.3), we have \({\widetilde{h}}_*^{\mathrm{com}} \ge {\widetilde{h}}_*\ge 0.\) Since \({\widetilde{h}}_*=\infty \) if \({\widetilde{h}}_*^{\mathrm{com}} >0,\) we thus obtain \({\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{com}} \in {\{0,\infty \}}.\) \(\square \)
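In short, the concluding dichotomy follows from the two facts derived above by elementary bookkeeping:

$$\begin{aligned} 0\le {\widetilde{h}}_*\le {\widetilde{h}}_*^{\mathrm{com}} \quad \text {and}\quad \big ({\widetilde{h}}_*^{\mathrm{com}}>0\ \Rightarrow \ {\widetilde{h}}_*=\infty \big )\quad \Longrightarrow \quad {\widetilde{h}}_*={\widetilde{h}}_*^{\mathrm{com}} \in {\{0,\infty \}}. \end{aligned}$$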

Proof of Corollary 3.12

Let us assume that \({\widetilde{h}}_*\le 0;\) then \(E^{\ge h}\) is \({\mathbb {P}}^G\)-a.s. bounded for all \(h>0.\) By Theorem 3.7, we thus have that (\({Law}_h\)) holds for all \(h>0,\) and so (\(\mathrm {Law}_{0}\)) also holds by (3.14). Since \(E^{\ge h}\) is \({\mathbb {P}}^G\)-a.s. bounded for all \(h>0,\) we then obtain by Corollary 3.11 that \(E^{\ge 0}\) is \({\mathbb {P}}^G\)-a.s. bounded. \(\square \)

Remark 6.5

  1. (1)

    From Proposition 4.2, Corollary 5.1 and Lemma 6.4, one could immediately reprove Theorem 3.7 (which, however, does not require access to the information (Isom’) on \({\widetilde{{\mathcal {G}}}}\)).

  2. (2)

    Similarly to Theorem 8 of [20], one could also use (4.6) to deduce an isomorphism theorem between random interlacements and the Gaussian free field even if G is infinite. More precisely, if \({\mathcal {G}}\) is a graph such that \(|\{x\in {G}:\kappa _x>0\}|<\infty ,\) one can merge all the open ends of the cables \(I_x,\) \(x\in {G}\) with \(\kappa _x>0,\) into a new vertex \(x_*,\) and apply (4.6) to the new (locally finite) graph \({\mathcal {G}}\cup \{x_*\}.\) Decomposing the loop soup into loops hitting \(x_*\) and loops avoiding \(x_*\) as in “Appendix B”, one can then prove an isomorphism similar to Theorem 3.9, but replacing random interlacements on \({\widetilde{{\mathcal {G}}}}\) by killed random interlacements on \({\widetilde{{\mathcal {G}}}},\) that is, all the trajectories in the random interlacement process whose forward and backward parts are both killed before escaping all bounded sets, and replacing \(\varphi +\sqrt{2u}\) by \(\varphi +\sqrt{2u}{\mathbf {h}}_{\text {kill}},\) see (1.2). In Corollary 6.9 of [22], this isomorphism between killed random interlacements and the Gaussian free field is extended to any graph satisfying (\(\mathrm {Law}_{0}\)).

  3. (3)

    An interesting open question is whether there exists a transient graph \({\mathcal {G}}\) such that (\(\mathrm {Law}_{0}\)), or any of the other equivalent conditions appearing in (3.14), fails. In view of Corollary 3.11, one could also ask whether there exists a transient graph \({\mathcal {G}}\) such that \({\mathbf {h}}_{\text {kill}}<1\) is fulfilled but \({\widetilde{h}}_*\in {(0,\infty )}\) or \({\widetilde{h}}_*^{\mathrm{com}} \in {(0,\infty )};\) on such a graph, (\(\mathrm {Law}_{0}\)) would not hold. We would, however, still have by Theorem 3.7 that (\({Law}_h\)) holds for all \(h>{\widetilde{h}}_*^{\mathrm{com}} .\)