Cluster capacity functionals and isomorphism theorems for Gaussian free fields

We investigate level sets of the Gaussian free field on continuous transient metric graphs $\tilde{\mathcal G}$ and study the capacity of its level set clusters. We prove, without any further assumption on the base graph $\mathcal{G}$, that the capacity of sign clusters on $\tilde{\mathcal G}$ is finite almost surely. This leads to a new and effective criterion to determine whether the sign clusters of the free field on $\tilde{\mathcal G}$ are bounded or not. It also elucidates why the critical parameter for percolation of level sets on $\tilde{\mathcal G}$ vanishes in most instances in the massless case and establishes the continuity of this phase transition in a wide range of cases, including all vertex-transitive graphs. When the sign clusters on $\tilde{\mathcal G}$ do not percolate, we further determine by means of isomorphism theory the exact law of the capacity of compact clusters at any height. Specifically, we derive this law from an extension of Sznitman's refinement of Lupu's recent isomorphism theorem relating the free field and random interlacements, proved along the way, and which holds under the sole assumption that sign clusters on $\tilde{\mathcal G}$ are bounded. Finally, we show that the law of the cluster capacity functionals obtained in this way actually characterizes the isomorphism theorem, i.e. the two are equivalent.


Introduction
In this article, we consider the Gaussian free field ϕ on the cable system G associated to an arbitrary transient weighted graph G; see the discussion around (1.1) below for the precise setup. Cable processes have increasingly proved an insightful object of study, as shown for instance in the recent articles [19], [27], [21], [8], [7] and [29]. In the present work, we investigate a well-chosen observable, the capacity of finite clusters in the excursion set E ≥h of ϕ above height h ∈ R, see (1.5) below. This quantity features prominently in our article [10]. Our main result, stated below in Theorem 1.1 (see also Section 3 for a more exhaustive discussion), underlines the central nature of this observable and unveils some of its deeper ramifications.
To wit, our findings imply for instance that the cluster capacity observable at height h = 0 is finite almost surely, for any transient graph G, see Theorem 1.1,1) (our setup allows for a killing measure, including the degenerate case of Dirichlet boundary conditions, which will play an important role below). This immediately leads to a much improved understanding of why the height h = 0 tends to be critical for the percolation problem {E ≥h : h ∈ R} in the massless case, i.e. in the absence of killing, and more generally when h kill < 1 (see (1.2) below). A simple criterion, see (Cap) on p.4 and Theorem 1.1,1), which covers an extensive number of cases, can then be used to check if the sign clusters of ϕ percolate or not.
For instance, see Corollary 1.2, as a consequence of this criterion, our results yield that the sign clusters of ϕ on any vertex-transitive graph with no killing are bounded and thus establish the phase transition of {E ≥h : h ∈ R} as being second order. Corresponding results hold for the loop soup L 1/2 , see Corollary 3.6; see also the discussion following Theorem 1.1 regarding the current state of affairs.
When the sign clusters of ϕ are bounded (which holds e.g. when (Cap) holds), we are able to identify the distribution of the cluster capacity observable at any level h ∈ R, see Theorem 1.1,2) below. This law is explicitly characterized by (Law h ), introduced on p.4 (see also (3.8) for the corresponding density). Moreover, we show that this information is equivalent to the 'strong Ray-Knight-type' isomorphism recently derived in [27] (refining [19], see also (Isom) on p.4) under slightly stronger assumptions than those to follow. This identity relates the free field itself with the local times of random interlacements on G. Thus, we effectively obtain a characterization of an isomorphism theorem (in the non-interacting case) in terms of the free field alone. In fact, for massless graphs (or even if h kill < 1) our results imply under (Law 0 ) the dichotomy h * ∈ {0, ∞}, where h * refers to the corresponding critical level; cf. Theorem 1.1,3). We further refer to the forthcoming article [22] for sharpness and limitations to the validity of these results. The identity (Law h ) is derived in [10] by means of differential formulas, and has important consequences regarding the (near-)critical regime for level sets of ϕ on G; see [10] regarding these matters.
We now introduce our setup and refer to Section 2 for details. We consider a transient weighted graph G = (G, λ̄, κ̄), where G is a finite or countably infinite set and λ̄_{x,y} ∈ [0, ∞), x, y ∈ G, are non-negative weights satisfying λ̄_{x,y} = λ̄_{y,x} ≥ 0 and λ̄_{x,x} = 0 for all x, y ∈ G. Furthermore, κ̄_x ∈ [0, ∞], x ∈ G, is a killing measure, possibly infinite. To deal with the latter in a convenient way, given G = (G, λ̄, κ̄), we introduce the triplet (G, λ, κ), to which we will mostly refer throughout the article, by setting (G, λ, κ) = (G_M, λ̄_M, κ̄_M), the latter being defined in (2.12), with M a certain set of 'mid-points' given by (2.11). In particular, this definition entails that (G, λ, κ) = (G, λ̄, κ̄) whenever κ̄_x < ∞ for all x ∈ G. Otherwise (G, λ, κ) is obtained by a suitable 'enhancement' of G (exploiting network equivalence). As a result, the killing measure κ is finite everywhere, i.e. κ_x < ∞ for all x ∈ G.
We always tacitly assume that the induced graph (G, E) with edge set E = {{x, y} : x, y ∈ G, λ_{x,y} > 0} is connected and locally finite. We write x ∼ y when {x, y} ∈ E, and we define

(1.1) λ_x = κ_x + Σ_{y∈G} λ_{x,y}, ρ_x = 1/(2κ_x) for x ∈ G, and ρ_{x,y} = 1/(2λ_{x,y}) for x ∼ y ∈ G

(with ρ_x = ∞ when κ_x = 0). One naturally associates to G a continuous version G, the corresponding cable system or metric graph, obtained by replacing each edge e = {x, y} ∈ E by an open interval I_e of length ρ_{x,y}, glued to G through its endpoints x and y. One further attaches to each vertex x ∈ G an additional interval I_x isometric to [0, ρ_x), glued to x through 0 (we refer to Section 2.3 and Remark 3.8,1) for their raison-d'être).
One then defines (e.g. in terms of its associated Dirichlet form, see (2.1) and (2.2) below for details) a diffusion process (X_t)_{t≥0} on G ∪ {∆}, where ∆ denotes an (absorbing) cemetery state, which can be viewed as Brownian motion on the cable system. The process X induces a pure jump process Z = (Z_t)_{t≥0} on G ∪ {∆}, which we refer to as its trace (or print) on G, see (2.4), associated to a corresponding trace form. The induced process Z has the law of the continuous-time Markov chain that jumps from x ∈ G to y ∈ G at rate λ_{x,y} and is killed at rate κ_x. Similarly, the trace of X on {x ∈ G : κ̄_x < ∞} has the law of the continuous-time Markov chain on G that jumps from x ∈ G to y ∈ G at rate λ̄_{x,y} and is killed at rate κ̄_x. We write P_x for the canonical law of X_· with starting point x ∈ G, and occasionally P^G_x in place of P_x to stress the dependence on the datum G. We say that X_· is killed if X_· exits G via I_x for some x ∈ G with κ_x > 0 (which is equivalent to Z being killed, i.e. entering ∆). Accordingly, we define

(1.2) h_kill(x) def.= P_x(X_· is killed), for all x ∈ G.
Moreover, we say that h_kill < 1 if h_kill(x) < 1 for all x ∈ G, or equivalently if h_kill(x) < 1 for some x ∈ G (recall that (G, E) is assumed to be a connected graph). An important family of graphs satisfying h_kill < 1 are massless graphs, with κ̄ = κ ≡ 0, or equivalently h_kill(·) ≡ 0.
Our results deal with the graph G and its associated metric graph G, when G is transient; that is, when the Markov chain Z is transient, which we tacitly assume from now on. In particular, the graph G may be finite when κ ̸≡ 0. We then define the Gaussian free field on G: its canonical law P^G (occasionally denoted by P^G_G), defined on the space C(G, R) endowed with the σ-algebra generated by the coordinate maps ϕ_x, x ∈ G, is such that

(1.3) under P^G, (ϕ_x)_{x∈G} is a centered Gaussian field with covariance function g(·, ·).
Here, g(·, ·) refers to the Green density of X · with respect to the Lebesgue measure m on G, see (2.5). The restriction of this process to G has the same law as the usual Gaussian free field on G associated to the discrete Markov chain Z.
We now describe our main results, which deal with the excursion sets E^{≥h} def.= {y ∈ G : ϕ_y ≥ h} of ϕ, for varying height h ∈ R. We endow G with the (geodesic) distance d(·, ·) such that all intervals I_e, e ∈ E, and I_x, when ρ_x < ∞, have length one (rather than ρ_e and ρ_x, respectively). Albeit not essential, we assume for convenience that d also assigns length one to I_x when ρ_x = ∞ (by means of some strictly increasing bijection [0, 1) → [0, ∞)). The clusters, i.e. maximal connected components, of E^{≥h} are defined as

(1.4) E^{≥h}(x_0) = {y ∈ G : x_0 ↔ y in E^{≥h}}, for x_0 ∈ G, h ∈ R;

here, for measurable A ⊂ G and x, y ∈ G, we write {x ↔ y in A} if there exists a (continuous) path from x to y in A, and we say that A is connected in G if z ↔ z′ in A for all z, z′ ∈ A. A central role in this work will be played by the cluster capacity functional

(1.5) cap(E^{≥h}(x_0)), for h ∈ R, x_0 ∈ G.

We refer to (2.20) and (2.27) below for the definition of cap(A), the electrostatic capacity of A, for arbitrary closed, possibly unbounded subsets A of G. For instance, in case A ⊂ G is finite (or more generally if A′ ⊂ G is compact and ∂A′ = A), then cap(A) (and cap(A′)) coincide with the usual capacity of the set A for the discrete chain Z.
One of our interests lies in the percolative properties of the set E^{≥h} (with respect to d). We introduce the corresponding critical parameter

(1.6) h_* = inf{h ∈ R : for all x_0 ∈ G, P^G(E^{≥h}(x_0) is unbounded) = 0}

(with the convention inf ∅ = ∞; note that h_* is equivalently defined as the smallest level h such that P^G-a.s. E^{≥h} contains no unbounded connected component). A fortiori, (1.6) entails that for each h < h_*, with positive P^G-probability the discrete set E^{≥h} ∩ G contains a percolating connected component in the usual sense (i.e., the component is unbounded with respect to the graph distance on (G, E)). In other words, the critical parameter for the corresponding discrete problem (see for instance (1.8) in [8] for its definition) is bounded from below by h_* as defined in (1.6). Other natural definitions of critical parameters associated to the sets {E^{≥h}, h ∈ R} exist and will be of interest, see (3.1) and (3.2) below. They correspond to several natural ways of measuring the 'magnitude' of clusters in E^{≥h}, and (1.5) reflects one such choice, based on capacity as a measure of size.
We now briefly introduce the process of random interlacements on G, see [24], [11] and [28], to the extent necessary to formulate our main findings; further details are provided in Section 2.5. The interlacement process will play a prominent role in the present context, due to recent isomorphisms, see [19], [27] and (Isom) below, relating it to ϕ in a very explicit fashion. Under a suitable probability measure P^I, for each u > 0, random interlacements at level u on G constitute a Poisson point process ω^u with intensity uν_G, where ν_G is a measure on doubly non-compact trajectories modulo time-shift (when κ ̸≡ 0, these trajectories may be killed by the measure κ before escaping to infinity, i.e., they may 'exit G via I_x' for some x ∈ G with κ_x > 0; see (2.39) and (2.40) for the precise definition of ν_G). We denote by (ℓ_{x,u})_{x∈G} the continuous field of local times associated to ω^u, i.e. the sum of the local time densities relative to the Lebesgue measure on G of all the trajectories in ω^u. We then define the interlacement set as I^u = {x ∈ G : ℓ_{x,u} > 0}, a random open subset of G. Without any further assumptions on G, it can be shown that for all u > 0,

(1.7) (ℓ_{x,u} + ½ϕ_x²)_{x∈G} has the same law under P^G ⊗ P^I as (½(ϕ_x + √(2u))²)_{x∈G} under P^G;

see [25] for the original derivation of this result on the (discrete) base graph G in case κ ≡ 0, based on the generalized second Ray-Knight theorem of [12]; see also Proposition 6.3 of [19] and (1.27)-(1.30) in [27] for extensions to G. We refer to Remark 2.2 below regarding a justification for the validity of (1.7) in the present, more general setup. As first observed in [19], the isomorphism (1.7) implies a stochastic domination of each connected component of I^u by a level-set cluster of ϕ, which straightforwardly yields (recall (1.2)) that

(1.8) if h_kill < 1, then h_* ≥ 0;

see the paragraph following (3.19) below for details.
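As a quick plausibility check on (1.7) (not part of the article's argument), the expectations of both sides agree at every point x. Here we use E^G[ϕ_x] = 0, E^G[ϕ_x²] = g(x, x), and the normalization E^I[ℓ_{x,u}] = u, which we take as an assumption on the normalization of ν_G consistent with (1.7):

```latex
% Left-hand side of (1.7), in expectation:
\mathbb{E}^{G}\otimes\mathbb{E}^{I}\!\left[\ell_{x,u}+\tfrac12\varphi_x^2\right]
  \;=\; u+\tfrac12\,g(x,x),
% Right-hand side: the cross term vanishes since E[phi_x] = 0.
\qquad
\mathbb{E}^{G}\!\left[\tfrac12\big(\varphi_x+\sqrt{2u}\big)^2\right]
  \;=\; \tfrac12\,g(x,x)+\sqrt{2u}\,\mathbb{E}^{G}[\varphi_x]+u
  \;=\; \tfrac12\,g(x,x)+u .
```

Of course this only matches first moments; the full identity (1.7) is a statement about the joint laws of the two fields.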
The reverse inequality h * ≤ 0 is an entirely different matter and has so far only been verified in a handful of cases (see below Theorem 1.1 for a list). Part of our main result addresses this issue.
Under additional assumptions, refining the link between I^u and level sets of ϕ described above (1.8), the identity (1.7) can be considerably strengthened. Indeed, Theorem 2.4 in [27] asserts that, if

(Sign) P^G-a.s., E^{≥0} only contains bounded connected components,

and g|_{G×G} is uniformly bounded on the diagonal (see also (1.42) in [27] for a slightly weaker condition; as it turns out, our results will imply that this latter condition is in fact unnecessary), then

(Isom) (1{x ∉ C_u} ϕ_x + 1{x ∈ C_u} √(ϕ_x² + 2ℓ_{x,u}))_{x∈G} under P^G ⊗ P^I has the same law as (ϕ_x + √(2u))_{x∈G} under P^G,

where C_u denotes the closure of the union of those sign clusters, i.e. connected components of {x ∈ G : |ϕ_x| > 0}, that intersect the interlacement set I^u. In particular, noting that ℓ_{x,u} = 0 if x ∉ C_u, (Isom) is seen to yield (1.7) upon taking squares. In practice, the main obstacle to deducing the identity (Isom) is showing that (Sign) holds (cf. the discussion following Theorem 1.1).
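To spell out the squaring step just mentioned, write ψ for the signed field appearing on the left-hand side of (Isom), whose form we take as given from [27]; the point is that ℓ_{x,u} vanishes off C_u:

```latex
% The signed field of (Isom):
\psi_x \;:=\; \mathbf{1}\{x\notin\mathcal{C}_u\}\,\varphi_x
       \;+\; \mathbf{1}\{x\in\mathcal{C}_u\}\sqrt{\varphi_x^2+2\ell_{x,u}}\,.
% Squaring, and using ell_{x,u} = 0 for x outside C_u:
\psi_x^2 \;=\; \mathbf{1}\{x\notin\mathcal{C}_u\}\,\varphi_x^2
       \;+\; \mathbf{1}\{x\in\mathcal{C}_u\}\big(\varphi_x^2+2\ell_{x,u}\big)
       \;=\; \varphi_x^2+2\ell_{x,u}\,.
```

Hence, if ψ has the law of ϕ + √(2u), then ℓ_{x,u} + ½ϕ_x² = ½ψ_x² has the law of ½(ϕ_x + √(2u))², which is exactly (1.7).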
Our main result investigates the newly introduced capacity observable (1.5) and explores the links between this quantity, the value of the critical parameter h_* in (1.6) and the validity of the identity (Isom). A natural structural property that will appear in this context is the (weak) condition that

(Cap) every unbounded, closed, connected subset of G has infinite capacity

(see (3.6) for an equivalent formulation in terms of the base graph G, and below (1.5) for the definition of cap(·) in the present context). One can for instance show that (Cap) is verified whenever the Green function g|_{G×G} is uniformly bounded on the diagonal, see Lemma 3.4 below (cf. also (3.7) for a slightly more general condition). In particular, (Cap) holds on any vertex-transitive graph.
We now present a succinct version of our main result. It entails several findings which are discussed in Section 3 in a more comprehensive form. For later reference we introduce the condition (Law_h), which specifies the law of the cluster capacity cap(E^{≥h}(x_0)) through an explicit formula for its Laplace transform; note that the Laplace transform in (Law_h) can be equivalently described in terms of an associated density ρ_h, which is explicit, see (3.8) and Lemma 5.2 below.

Theorem 1.1. Let G be a transient weighted graph. Then:

1) P^G-a.s., the random variable cap(E^{≥0}(x_0)) is finite for all x_0 ∈ G. In particular, the condition (Cap) implies (Sign) (see Theorem 3.2 and Corollary 3.3 for details).
2) The following implications hold true (cf. also Fig. 1 below):

h_* ≤ 0 ⟺ (Sign) =⇒ (Law_h) holds for all h ≥ 0, and (Law_h) holds for all h ≥ 0 ⟺ (Law_0) ⟺ (Isom).

In particular, in view of (1.8), if G is a transient weighted graph such that h_kill < 1 and (Cap) is fulfilled, then h_* = 0 and the law of cap(E^{≥h}(x_0)) is characterized by (Law_h), for h ≥ 0 (equivalently, (Isom) holds).

3) If (Law_0) holds but (Sign) fails, then h_* = ∞; in particular, if moreover h_kill < 1, then h_* ∈ {0, ∞} (cf. Corollary 3.11).
To appreciate the strength of Theorem 1.1, we highlight one particular consequence, which follows directly from items 1) and 2) above together with Corollary 3.4,2) below.

Corollary 1.2 (No percolation at criticality). Let G be a vertex-transitive, massless, transient weighted graph. Then (h_* = 0 and) the clusters of E^{≥0} are P^G-a.s. bounded.
We further refer to Corollary 3.6 below for interesting consequences of Theorem 1.1 regarding loop soups, and to [10] regarding the (near-)critical picture associated to the (continuous) phase transition exhibited by Corollary 1.2.
We now elaborate on the results of Theorem 1.1 in due detail and give some ideas concerning their proofs. In part 1) of Theorem 1.1, the finiteness of the capacity functional (1.5) at height h = 0 (which, remarkably, holds without any further assumption on G) can loosely be regarded as an indication that the sign clusters of the Gaussian free field on G do not percolate, at least when measured in terms of capacity, cf. also (3.2) and Theorem 3.2 below. Condition (Cap) formalizes this intuition, since it directly implies that closed connected sets have finite capacity if and only if they are bounded. Thus, if (Cap) holds true, so does (Sign), which in turn directly entails h_* ≤ 0, see (1.6). The condition (Cap) is moreover usually easy to verify, since it depends only on the structure of the graph G, and not on the Gaussian free field. As alluded to above, the inequality h_* ≤ 0 had previously only been proved on a certain number of graphs with κ ≡ 0, which all verify condition (Cap), namely:

• Z^d, d ≥ 3, with unit weights, see Theorem 1 and Proposition 5.5 in [19]. This proof could actually be easily extended to all amenable, vertex-transitive graphs, and such graphs verify (Cap), see Lemma 3.4,2).

• The (d + 1)-regular tree T_d, d ≥ 2, with unit weights, see Proposition 4.1 in [27]. It is easy to prove that these graphs verify (Cap), using Lemma 3.4,3), the fact that e_{K,T_d}(x) ≥ c(d) (which holds uniformly over connected finite subsets K ⊂ T_d and x ∈ ∂K), along with the isoperimetric bound |∂K| ≥ c′(d)|K| (see for instance [2], p.80).

• Any tree T with unit weights such that {x ∈ T : R^∞_x > A} only has bounded components for some A > 0, where R^∞_x is the effective resistance between x and infinity for the descendants of x, see Proposition 2.2 in [1]. These graphs verify (Cap) by Lemma 3.4,3).

• Any transient graph with controlled weights (see e.g. condition (p_0) in [8]), such that the volume of balls grows polynomially and the Green function decays polynomially, see Proposition 5.2 in [8]. These graphs verify (Cap), see Lemma 3.2 in [8].
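In the tree case above, the two quoted inputs combine into a linear lower bound on capacity; schematically (a sketch, with c(d), c′(d) the constants from the cited statements and ∂K the boundary on which the equilibrium-measure bound holds):

```latex
% The equilibrium measure e_{K,T_d} is non-negative on K, so
\operatorname{cap}(K) \;=\; \sum_{x\in K} e_{K,T_d}(x)
  \;\geq\; \sum_{x\in\partial K} e_{K,T_d}(x)
  \;\geq\; c(d)\,\lvert\partial K\rvert
  \;\geq\; c(d)\,c'(d)\,\lvert K\rvert .
```

Since capacity is monotone in the set, letting K range over finite connected subsets exhausting an unbounded connected set shows that the latter has infinite capacity, which is the content of (Cap) (via Lemma 3.4,3)).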
Hence, Theorem 1.1 subsumes and generalizes all these previous results, and it covers many new cases, such as all vertex-transitive graphs, see Lemma 3.4,2) below. What is more, without assuming that (Cap) is fulfilled, it is possible to construct a graph G such that h * ≤ 0 fails to hold, see Proposition 8.1 in [22]. One can also easily find examples of graphs such that (Sign) is verified, while (Cap) is not, see Remark 3.5,3), or Proposition 7.1 in [22] for more details. A further, very interesting question is whether there exist examples of graphs G not satisfying (Law 0 ), or any of the other equivalent conditions appearing in Theorem 1.1,2). A stepping stone for the proof of Theorem 1.1,1) (and, as will soon turn out, of Part 2) as well) is the observation that the identity (Isom), if assumed to hold, implies (Law h ) h≥0 , see Proposition 4.2 and Lemma 6.1 below. Crucially, this observation can be applied immediately when G is a finite (transient) graph, for (Isom) is then a direct consequence of the isomorphism between loop soups and the Gaussian free field, see [17] and [19], that we recall in (4.6). We refer to Lemma 4.4, proved in Appendix B using similar ideas as in the proof of Theorem 8 in [20], for corresponding details.
Equipped with (Isom), and thus (Law_h)_{h≥0}, on finite transient graphs, we then approximate the Gaussian free field on any infinite transient graph G by the Gaussian free field on a sequence of finite transient graphs G_n increasing to G as n → ∞, see (4.10) and Lemma 4.6. The fact that our setup allows for 0-boundary conditions (i.e. κ̄_x = ∞ for some x ∈ G) is central for this purpose. The capacity functional (1.5) has certain desirable monotonicity properties under this approximation, see (4.16), and Theorem 1.1,1) corresponds to the information that survives in the limit n → ∞ without further assumptions on G.
Let us now comment on Part 2) of Theorem 1.1 and its proof. Figure 1 illustrates the various implications involved in its statement in a more explicit fashion and will hopefully provide some useful guidance for the reader. The equivalence a) in Figure 1 entails that if h_* = 0, then the level sets of the GFF never percolate at the critical point h = 0, even if (Cap) (which implies (Sign)) is not verified. We comment on its proof at the very end of this discussion. Implication b) represents the desired improvement over the argument delineated above yielding Theorem 1.1,1), by which the full information (Law_h)_{h≥0} survives in the limit as n → ∞ under the assumption that the sign clusters of ϕ are bounded (which holds e.g. under condition (Cap)). In fact, when (Cap) is satisfied, we also provide an explicit formula for the law of the capacity of clusters above negative levels, see Theorem 3.7 for further details; see also Remark 3.10,4), Lemma 4.3 and Remark 5.3,2) regarding the (related) symmetry properties relating compact clusters in E^{≥h} and E^{≥−h}, for arbitrary h > 0.
The exact formula (Law_h)_{h≥0} describing the law of the capacity functional (1.5) is of course instrumental and witnesses a certain degree of integrability of the model {E^{≥h} : h ∈ R}. For instance, one can immediately deduce from it (see (3.8)) that the capacity of critical clusters has heavy (polynomial) tails, see (1.9). Further to (1.9), one can use (Law_h)_{h≥0} to directly deduce bounds on various quantities of interest related to the (near-)critical behavior for the percolation of {E^{≥h} : h ∈ R}, see [10]. The approach using differential formulas developed therein actually leads to an independent proof of the implication b), along with extended results valid on any transient graph G, see Theorem 1.1 in [10]. Incidentally, an explicit formula for the probability of the event {x ←→ y in E^{≥0}} has also been obtained in Proposition 5.2 of [19], and was a key ingredient for all previous proofs of the inequality h_* ≤ 0.

We now turn to the equivalences c) and d) in the second line of Figure 1. The direct (i.e. left-to-right) implications appearing there already imply the equivalences. The direct implication in d) is another application of our initial observation, Proposition 4.2, applied above in the context of Theorem 1.1,1) for finite graphs only, but remaining valid in infinite volume.
Remarkably, the direct implication in c) asserts that it is sufficient to know that the law of the capacity of the sign clusters is given by (Law 0 ) in order to deduce the strong version (Isom) of the isomorphism theorem. In particular, together with b), this implies that (Isom) holds whenever (Sign) is verified, which generalizes Theorem 2.4 of [27] that required stronger assumptions, cf. the above discussion leading to (Isom).
Extending the setting in which the identity (Isom) is valid is also interesting as this relation has already been useful in [27] and [1] to compare the critical parameters for the percolation of random interlacements and the Gaussian free field on discrete trees, and in [8] to prove strong percolation for the level sets of the discrete Gaussian free field at a positive level on a large class of graphs, for instance Z^d, d ≥ 3, or various fractal graphs. It is not always easy to check that the conditions (1.32) and (1.34), or (1.42), of Theorem 2.4 in [27] are exactly verified, see the proof of Corollary 5.3 in [8] which sparked our interest, and it can thus be interesting to replace them by the weaker condition (Cap), which is easier to verify.
The proof of c) requires deriving a full-fledged isomorphism theorem relating random interlacements and the Gaussian free field on an adequate class of graphs, assuming the identity (Law 0 ) alone. In order to prove (Isom), we employ an approximation scheme, starting from a finite-volume setup. The scheme is similar in spirit to the previously used approximation for ϕ, but more involved, as it requires approximating random interlacements on infinite graphs by random interlacements on finite graphs, see Lemma 6.3. Combining the approximations for the free field and the interlacement process, we then obtain (Isom) if (Law 0 ) is fulfilled, see Lemma 6.4.
Moreover, our proof of (Isom), which relies on taking a suitable limit rather than proceeding directly in infinite volume and using the Markov property as in [27], immediately lets us derive a signed version of the isomorphism for random interlacements on discrete graphs, taking advantage of the equivalent discrete isomorphism for the loop soup, (4.8). As a by-product of the proof, we thus obtain a version of the isomorphism (Isom) for the discrete graph G in Theorem 3.9, see (3.16), similar to the version of the second Ray-Knight theorem from Theorem 8 in [20].
Finally, the isomorphism (Isom) has another interesting consequence, stated in Theorem 1.1,3) and Corollary 3.11: if (Law_0) holds but (Sign) does not hold, then h_* = ∞. This can be regarded as a partial converse to the implication (Sign) =⇒ (Law_0) from part 2), which leads to a dichotomy for the value of h_* in case h_kill < 1. In particular, if G is a graph such that h_* ≤ 0, then the clusters of E^{≥h} are P^G-a.s. bounded for all h > 0, and thus (Law_h) holds for all h > 0, see Theorem 3.7. Taking the limit as h ↘ 0, one can then prove that (Law_0), and thus (Isom), hold. Since h_* < ∞, this means that (Sign) must hold, and we thus also obtain Theorem 1.1,2),a) (see Figure 1).
We now explain how this article is organized. Section 2 recalls the main objects of interest, the diffusion X, the Gaussian free field, and random interlacements on the cable system in the present (broad) setup. It also supplies suitable notions of equilibrium measure and capacity on G, see Lemma 2.1, (2.16) and (2.20).
Section 3 contains the detailed versions of all our findings, which together imply Theorem 1.1, and that we prove in the rest of the article. The central results are the three Theorems 3.2, 3.7 and 3.9, along with their respective corollaries. Section 4 gathers various key preliminary results, notably Proposition 4.2, which derives (Law h ) h≥0 as a consequence of (Isom) (or more precisely, an equivalent but more handy formulation (Isom') introduced in Section 3). It also contains the approximation scheme for ϕ, see Lemma 4.6, as well as the isomorphism (Isom) on finite graphs, see Lemma 4.4. These results are the ingredients of various arguments in the sequel.
First, Section 5 is devoted to the proof of Theorems 3.2 and 3.7, which roughly correspond to Theorem 1.1,1), and 2),b) in Figure 1, but contain more detailed results. Their proof quickly follows from the preparatory work done in Section 4. Section 6 is then concerned with the proof of the isomorphism between random interlacements and the Gaussian free field (Isom) under the condition (Law_0), and with its consequences, Corollaries 3.11 and 3.12. At the technical level, an important role is played by the approximation of random interlacements on a graph G by random interlacements on a sequence of graphs increasing to G, see Lemmas 6.2 and 6.3. Some concluding remarks and open questions are gathered at the end of that section. Throughout the article, we will sometimes add G as a subscript to the notation to stress the underlying graph G that we consider. For the reader's orientation, we note that the conditions (Sign), (Law_h) and (Isom) are all introduced above Theorem 1.1, and that the condition (Isom') is introduced above Theorem 3.9.
Recall the definition of the cable system G: first, each edge e = {x, y} ∈ E is replaced by an open interval I_e, isometric to (0, ρ_{x,y}), see (1.1). In addition, an open interval I_x of length ρ_x (= 1/(2κ_x)) (possibly unbounded) is attached to each vertex x of G. The cable system G is then obtained by glueing together the intervals I_e, e ∈ E, to G through their respective endpoints, and by glueing one endpoint of I_x, x ∈ G, to x. Note that G can be naturally viewed as a subset of G. The elements of G will still be called vertices and the intervals I_e, e ∈ E, and I_x, x ∈ G, will be referred to as the edges of G.
The canonical distance on each I_e, e ∈ E, and I_x, x ∈ G, is denoted by ρ_G(·, ·). Note that ρ_G(x, y) is only defined if x and y are on the same edge. In a slight abuse of notation, for any edge e = {x, y} ∈ E and any t ∈ [0, ρ_{x,y}], we denote by x + t · I_e = y + (ρ_{x,y} − t) · I_e the point of I_e at (ρ_G-)distance t from x, and, for any vertex x ∈ G and t ∈ [0, ρ_x), by x + t · I_x the point of I_x at distance t from x.

2.1. The canonical diffusion on the cable system. We define the set of forward trajectories W^+_G as the set of functions w^+ : [0, ∞) → G ∪ {∆}, where ∆ is a cemetery point (not in G), for which there exists ζ ∈ [0, ∞] such that w^+|_{[0,ζ)} ∈ C([0, ζ), G) and, when ζ < ∞, w^+(t) = ∆ for all t ≥ ζ. For each t ≥ 0 we denote by X_t the projection at time t, i.e. X_t(w^+) = w^+(t) for all w^+ ∈ W^+_G, and by W^+_G the σ-algebra on W^+_G generated by X_t, t ≥ 0. By m we denote the Lebesgue measure on G, which can be informally described as the sum of the Lebesgue measures on each I_e, e ∈ E, and I_x, x ∈ G, with the normalization m(I_e) = ρ_e and m(I_x) = ρ_x (with, say, mass 1 associated to each sub-interval of Euclidean length 1). We proceed to define a diffusion on G, which we will characterize through its associated Dirichlet form. In order to define the latter, introduce, for measurable f : G → R, the quantity (f, f)_m = ∫_G f² dm, the space L²(G, m) = {f : (f, f)_m < ∞} (modulo the usual equivalence relation), and (f, g)_m the associated quadratic form on L²(G, m) obtained via polarization. Let C_0(G) be the closure for the ∥·∥_∞-norm of the set of continuous functions with compact support on G and let D(G, m) ⊂ L²(G, m) be the space of functions f ∈ C_0(G) such that f|_{I_e} ∈ W^{1,2}(I_e, m|_{I_e}) for all e ∈ E ∪ G, where W^{1,2}(I_e, m|_{I_e}) denotes the respective Sobolev space on I_e. We now define the Dirichlet form on L²(G, m) (in which D(G, m) is densely embedded) as E_G(f, g) = ½ ∫_G f′ g′ dm, for f, g ∈ D(G, m), the Dirichlet form of Brownian motion on the cable system. By Theorem 7.2.2
in [15], one associates to the Dirichlet form E_G, for each x ∈ G, an m-symmetric diffusion starting in x with state space G ∪ {∆}. We denote by P_x (= P^G_x) its law on (W^+_G, W^+_G) and also define, for any non-negative measure µ on G with countable support supp(µ), the measure P_µ = Σ_{x∈supp(µ)} µ(x) P_x. Note that ζ = inf{t ≥ 0 : X_t = ∆} is either ∞, or the first time X blows up (i.e., X escapes all d-bounded sets) or gets killed (i.e., exits G through some I_x with κ_x > 0). Informally, one can obtain a diffusion with law P_x as follows: first, one runs a Brownian motion starting at x on I_e, with x ∈ I_e, e ∈ E ∪ G, until a vertex y is reached. Then one chooses uniformly at random an edge or vertex v among {y} ∪ {{y, z} : z ∼ y} and runs a Brownian excursion on I_v until a vertex is reached; this procedure is iterated until either the process blows up or the open end of the interval I_x is reached for some x ∈ G, in which case the process is killed at that time. We refer to Section 2 of [9] or [19] for a more formal description of this construction on Z^d, d ≥ 3.
We now briefly review how to take traces of the process X on suitable subsets F of G. One can show, analogously to Section 2 of [19], that the process X under P^G_x admits a space-time continuous family of local times (ℓ_y(t))_{y∈G, t≥0}. Therefore, using that P^G_x lives on the canonical space (W^+_G, W^+_G), for all sets F ⊂ G of the form F = ∪_{e∈F_1} I_e ∪ F_2, where F_1 ⊂ E ∪ G and F_2 ⊂ G are arbitrary, we can define the associated time change (τ^F_t)_{t≥0}, the right-continuous inverse of the corresponding positive continuous additive functional (PCAF), see (A.2.36) and below in [15] for instance. Here, we use the convention inf ∅ = ζ and denote the trace of X on F by X^F = (X_{τ^F_t})_{t≥0}, with the convention X_∞ = ∆. As a first application of this definition, letting Z = (Z_t)_{t≥0} be the trace of X on the vertex set G, it follows from Theorem 6.2.1 in [15] that for all x ∈ G the law of Z under P^G_x is that of the continuous-time Markov chain that jumps from x ∈ G to y ∈ G at rate λ_{x,y} and is killed at rate κ_x. Furthermore, the local times (ℓ_y(ζ))_{y∈G} of X after being killed have the same law under P^G_x as the total occupation times of that jump process (after being killed), see for instance (1.97) and (2.80) in [26]. We also denote by (Z̃_n)_{n∈N} the discrete-time skeleton of Z, i.e. the sequence of elements of G visited by the process Z, with the convention that Z̃_n = ∆ for all large enough n if Z gets killed.

2.2.
Elements of potential theory on G. Our next goal is to supply workable notions of equilibrium measure and capacity on G, for arbitrary closed (and in particular compact) subsets of G, as necessary in order to investigate observables like cap(E^{≥h}(x_0)) (cf. Theorem 1.1). We first define the Green function of an open set U ⊂ G by g_U(x, y) = E_x[ℓ_y(T_U)], for x, y ∈ G, where E_x denotes expectation with respect to P_x = P^G_x and T_U = inf{t ≥ 0 : X_t ∉ U} is the first exit time of U, with the convention inf ∅ = ζ. We simply write g = g_G for the usual Green function on G.
We now introduce the notions of equilibrium measure and capacity on G by 'enhancements', see Lemma 2.1 below. This will allow us to reformulate the equilibrium problem directly in a discrete setup and thereby to import the respective standard versions of these notions on transient graphs, see (2.16), (2.20) and (2.27) below. In particular, this approach immediately provides several useful identities, e.g. relating exit distributions for the diffusion X with the corresponding equilibrium measure, cf. (2.19) and (2.17).
On the (transient) graph (G, λ, κ) associated to G, for all finite A ⊂ G the equilibrium measure and capacity of A are defined by e_A(x) = (λ_x + κ_x) P_x(T̃_A = ∞) 1_{x∈A}, where λ_x = Σ_{y∼x} λ_{x,y}, and cap(A) = Σ_{x∈A} e_A(x), (2.6) where T̃_A = inf{n ≥ 1 : Z_n ∈ A}, with inf ∅ = ∞, is the first return time to A for the discrete-time random walk Z on G, cf. below (2.4). The following observation is key.
Lemma 2.1. Let A ⊂ G be a set with no accumulation point. Then there exists a graph (G^A, λ^A, κ^A) with vertex set G^A = G ∪ A (with a slight abuse of notation) such that: G is a subset of G^A, the cable system of G^A; (2.7) for all x ∈ G^A, the laws of the traces X^{G^A} = (X_{τ^{G^A}_t})_{t≥0} under P^G_x and P^{G^A}_x coincide. (2.8) Proof. We first introduce the weights λ^A and the killing measure κ^A. For each e = {x_0, x_1} ∈ E, let A ∩ I_e = {z_1(e), . . . , z_{n−1}(e)}, where n = n(e) ≥ 1 is such that n − 1 = |A ∩ I_e| and the z_k(e)'s are labeled by order of appearance as one traverses the (open) edge I_e from, say, x_0 to x_1 (the underlying choice of orientation of e will not affect the definition of λ^A, κ^A in (2.9) below). For later convenience, we set z_0(e) = x_0 and z_n(e) = x_1, and drop the argument e in the sequel whenever no risk of confusion arises. Similarly, for x ∈ G, we enumerate the points of A ∩ I_x by order of appearance along I_x. Thus, each edge e ∈ E is replaced by a linear chain of n = n(e) edges {z_{k−1}, z_k}, 1 ≤ k ≤ n, with weights λ^A_{z_{k−1},z_k}, and similarly a chain of n(x) − 1 edges is attached to each x ∈ G, with killing κ^A_{z_{n−1}(x)} at its 'dangling' end. By (2.9), the identification (2.10) follows. Therefore, G can be identified with the set G^A \ I, where G^A is the cable system associated to (G^A, λ^A, κ^A) and I = I_1 ∪ I_2 ∪ I_3. By a similar reasoning as detailed below around (2.31), it then follows that for all x ∈ G (viewed as a subset of G^A), the law of the trace on G of the diffusion attached to G^A, under P^{G^A}_x, coincides with that of X under P^G_x. In view of (2.4), the claim (2.8) then follows.
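The discrete notions entering (2.6) lend themselves to a quick numerical sanity check. A minimal sketch (the toy network, its parameters and the normalization e_A(x) = (λ_x + κ_x) P_x(T̃_A = ∞) are our own assumptions, not taken from the paper): it verifies the classical identity cap({x}) = 1/g(x, x) and the monotonicity of the capacity.

```python
# Hypothetical toy network (our choice, not from the paper): a path 0-1-2-3
# with unit edge weights and killing measure kappa = 0.5 at every vertex.
V = [0, 1, 2, 3]
lam = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
lam.update({(y, x): w for (x, y), w in list(lam.items())})
kappa = {x: 0.5 for x in V}
D = {x: sum(w for (a, _), w in lam.items() if a == x) + kappa[x] for x in V}

def hitting_prob(A, iters=4000):
    """h(y) = P_y(the walk hits A before being killed), by Gauss-Seidel iteration."""
    h = {y: (1.0 if y in A else 0.0) for y in V}
    for _ in range(iters):
        for y in V:
            if y not in A:
                h[y] = sum(lam[(y, z)] * h[z] for z in V if (y, z) in lam) / D[y]
    return h

def capacity(A):
    """cap(A) = sum_{x in A} e_A(x), with e_A(x) = kappa_x + sum_y lam_{x,y}(1 - h(y)),
    i.e. (lambda_x + kappa_x) * P_x(no return to A before being killed)."""
    h = hitting_prob(A)
    return sum(kappa[x] + sum(lam[(x, y)] * (1.0 - h[y]) for y in V if (x, y) in lam)
               for x in A)

def green(x0, iters=4000):
    """g(x0, x0): expected total time spent at x0, solving (D - Lambda) g = delta_{x0}."""
    u = {x: 0.0 for x in V}
    for _ in range(iters):
        for x in V:
            u[x] = ((1.0 if x == x0 else 0.0)
                    + sum(lam[(x, y)] * u[y] for y in V if (x, y) in lam)) / D[x]
    return u[x0]

print(capacity({1}), 1.0 / green(1))      # classical identity cap({x}) = 1/g(x, x)
print(capacity({1}) <= capacity({1, 2}))  # monotonicity of the capacity
```

The two printed capacities agree, and enlarging the set can only increase the capacity.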
By slightly adapting the above arguments, one defines the graph (G^M, λ^M, κ^M) alluded to at the beginning of Section 1, see above (1.1), as follows. Given G = (G, λ, κ), possibly with infinite killing at some vertices, set M = {a : a is the midpoint of I_e for some e ∈ E_κ}, where E_κ = {{x, y} : x, y ∈ G, λ_{x,y} > 0, κ_x = ∞ and κ_y < ∞} and I_e is an interval isomorphic to the open interval (0, 1/(2λ_{x,y})), glued at 0 to y, with boundary {x, y}. Now, by a small extension of Lemma 2.1, one constructs from G = (G, λ, κ) the graph (G^M, λ^M, κ^M) by treating I_e for e = {x, y} ∈ E_κ with κ_y < ∞ in the same manner as I_y in (2.9) (whence λ^M_{y,a} = 2λ_{y,x}, κ^M_y = 0 and κ^M_a = 2λ_{y,x} for a ∈ M the midpoint of I_{x,y}), and keeping the same weights and killing measures for the other vertices. Similarly as below (2.4), it follows from Theorem 6.2.1 in [15] that the law of the trace of X (under P^G_·) on {x ∈ G : κ_x < ∞} is that of the continuous-time Markov chain on G that jumps from x ∈ G to y ∈ G at rate λ_{x,y} and is killed at rate κ_x, hence justifying our choice of (G, λ, κ) as in (2.12) to define the cable system G. Note also that (G^M, λ^M, κ^M) = (G, λ, κ) when κ is everywhere finite, since E_κ = ∅ in that case.
The following remark turns out to be handy in a couple of instances in this article.
Remark 2.2 (Generating any given cable system from a graph without killing). As an application of Lemma 2.1, given (G, λ, κ) and the corresponding cable system G, one can naturally associate to it a graph with vanishing killing measure as follows. For each x ∈ G with κ_x > 0, choose a sequence of points of I_x converging to the open end of I_x (note that such a sequence does not have an accumulation point in G).
With A the union of these sequences, one defines G' = G^A and λ' = λ^A as given by Lemma 2.1 (note that κ^A ≡ 0 by (2.9)). By (2.7), one has that G ⊂ G^A, and G is in fact obtained from G^A by removing all (unbounded) cables I_x, x ∈ A. In particular, combining this observation with the isomorphism of [25], which holds on (G', λ'), one readily infers that (1.7) holds for G.
We now extend the definition of the equilibrium measure from (2.6) to the cable graph setting. When K is a compact subset of G, we define its exterior boundary ∂K in (2.13). Note that ∂K is finite since K is bounded and I_e contains at most two points of ∂K for all e ∈ E ∪ G. Consider now any sets K, K', A ⊂ G such that (2.14) K is compact, K' is finite, and A has no accumulation point. For all x, y ∈ A, by (2.8) as well as (1.56) in [26] (and its straightforward adaptation to infinite transient weighted graphs; this also applies to subsequent references to [26]) applied to the graph G^A, and noting that L_K = L_{K'} a.s., one obtains the identity (2.15). We now define the equilibrium measure of K in G by means of (2.16), with G^∂K as supplied by Lemma 2.1 and the (discrete) equilibrium measure on the right-hand side of (2.16) as defined in (2.6). For K, K' and A as in (2.14), we then have the identity (2.17). Indeed, (2.17) follows from (2.15) when x ∈ ∂K, and both terms of (2.17) are equal to 0 when x ∈ A \ ∂K by (2.15) and (2.16). In particular, if K ⊂ G, then by (2.17) with K' = K and A = ∅, the definition (2.16) of the equilibrium measure on the cable system coincides with the definition of the equilibrium measure from (2.6). Moreover, (2.17) can be used to obtain a description of the equilibrium measure purely in terms of the diffusion X, instead of using the equilibrium measure on the discrete graph G^∂K as in (2.16). Indeed, denoting by B_ρ(x, ε) the ball centered at x ∈ G with radius ε > 0 for the distance ρ introduced above Section 2.1, which is well defined for small enough ε, one has the representation (2.18), where d_x is the degree of x if x ∈ G, and d_x = 2 otherwise. In order to prove (2.18), one uses (2.17) with A = ∂B_ρ(x, ε) ∪ ∂K and K' = A ∩ K, and (2.6), noting that λ^A_x = d_x/(2ε) by (2.9) and that H_K(X^{G^A}) = ∞ if and only if L_K < H_{∂B_ρ(x,ε)} for ε small enough. In fact, the equality (2.18) already holds without the limit as ε → 0, for any small enough ε.
Moreover, we obtain from (2.15) and (2.17) the identity (2.19), which is reminiscent of the equilibrium measure of the usual Brownian motion (on R^d, with suitable killing when d = 1, 2), see for instance Proposition 3.3 in [23]. In fact, (2.19) (or (2.18)) could be used instead of (2.16) as the definition of e_{K,G}(·).
The capacity of a compact set K ⊂ G is defined as the total mass of the equilibrium measure, cap_G(K) = Σ_{x∈∂K} e_{K,G}(x). (2.20) When there is no risk of ambiguity, we will simply write e_K, cap(K) instead of e_{K,G}, cap_G(K). Using (2.8), (2.16) and (2.17), we can now extend a variety of useful results on equilibrium measures from the discrete case to G. By (an adaptation of) [26, (1.57)], one easily shows the characterization of the capacity in terms of the variational problem cap(K) = (inf_µ ∫∫ g(x, y) dµ(x) dµ(y))^{−1}, (2.21) for K, K' ⊂ G as in (2.14) with A = K', where the infimum is over all probability measures µ supported on K, see e.g. Proposition 1.9 in [26]. In view of (2.17), when K ⊂ K' are two compacts of G, using (1.59) in [26] one obtains the 'sweeping identity' e_K(x) = Σ_{y∈∂K'} e_{K'}(y) P_y(X_{H_K} = x, H_K < ζ), x ∈ ∂K, (2.22) where H_K = inf{t ≥ 0 : X_t ∈ K}, with the convention inf ∅ = ζ. In particular, summing (2.22) over x ∈ ∂K yields the monotonicity property cap(K) ≤ cap(K'). (2.23) We now proceed to extend the notion of capacity to closed (not necessarily bounded) sets with finitely many components, cf. (2.26) below, which will turn out to be helpful in the proof of Lemma 4.6 below. For any measurable function f : G → R and K a compact subset of G, the harmonic extension η^f_K of f on K is defined as η^f_K(x) = Σ_{y∈∂K} f(y) P_x(X_{H_K} = y, H_K < ζ), x ∈ G. (2.24) Note that the sum in (2.24) is well defined since for each x ∈ G the set ∂_x K def.
= {y ∈ ∂K : P_x(X_{H_K} = y, H_K < ζ) > 0} contains at most two points per edge of G intersecting K, and hence is finite. In the sequel, a decreasing sequence of compacts (K_n)_{n∈N} is said to decrease to a compact K if K = ∩_{n∈N} K_n. Moreover, in a slight abuse of notation, we say that an increasing sequence of compacts (K_n)_{n∈N} increases to a compact K if K is the closure of ∪_{n∈N} K_n (later on, this notion permits us to assert convergence statements along such sequences). The following convergence result for harmonic extensions will be useful. Lemma 2.3. Let f : G → R be a continuous function, and let K_n, n ∈ N, as well as K be compact subsets of G such that (K_n)_{n∈N} increases or decreases to K. Then, for all x ∈ G, η^f_{K_n}(x) → η^f_K(x) as n → ∞. (2.25) Proof. One first checks that P_x(X_{H_K} = y, H_K < ζ) > 0 for all y ∈ ∂_x K, and that there exist sets A^y_n, y ∈ ∂_x K, and an integer N such that for all n ≥ N, the collection (A^y_n)_{y∈∂_x K} is a partition of ∂_x K_n. By (2.24), for all x ∈ G and n ≥ N, one can decompose η^f_{K_n}(x) over this partition. By continuity, for any ε > 0 the resulting terms can be compared to those of η^f_K(x): since for all x ∈ G and y ∈ ∂_x K the absolute value of the difference on the right-hand side is bounded by finitely many error terms, each of which tends to 0 as n → ∞, (2.25) follows as f is uniformly bounded on compacts.
An interesting and immediate consequence of Lemma 2.3 and (2.22) is the following: if K_n, n ∈ N, and K are compacts of G such that (K_n)_{n∈N} increases or decreases to K, consider the quantity Σ_{x∈∂K} e_K(x) η^1_{K_n}(x) in case the K_n are increasing, and Σ_{x∈∂K_1} e_{K_1}(x) η^1_{K_n}(x) in case the K_n are decreasing, respectively (both of which equal cap(K_n) by virtue of (2.22)). We can then take n → ∞ while applying (2.25) with f = 1 to obtain that cap(K_n) → cap(K) as n → ∞. (2.26) Hence, we can extend the definition of the capacity to any closed set A ⊂ G by setting cap(A) = lim_n cap(A ∩ K_n), (2.27) where (K_n)_{n∈N} is any increasing sequence of compacts of G exhausting G. This limit exists and does not depend on the choice of the sequence (K_n)_{n∈N} by (2.23), and it is consistent with the existing definition of capacity for compacts, cf. (2.20), by means of (2.26).
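The sweeping identity (2.22) and the monotonicity (2.23) can be illustrated numerically in the discrete setting they are imported from. A minimal sketch (the toy killed chain and its parameters are our own assumptions, not the paper's), taking K = {1} inside K' = {1, 2}:

```python
# Hypothetical toy network (our choice, not from the paper): a path 0-1-2-3
# with unit edge weights and kappa = 0.5 at every vertex.
V = [0, 1, 2, 3]
lam = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}
lam.update({(y, x): w for (x, y), w in list(lam.items())})
kappa = {x: 0.5 for x in V}
D = {x: sum(w for (a, _), w in lam.items() if a == x) + kappa[x] for x in V}

def hit(A, iters=4000):
    """h(y) = P_y(H_A < zeta); equals 1 on A itself."""
    h = {y: (1.0 if y in A else 0.0) for y in V}
    for _ in range(iters):
        for y in V:
            if y not in A:
                h[y] = sum(lam[(y, z)] * h[z] for z in V if (y, z) in lam) / D[y]
    return h

def eq_measure(A):
    """Discrete equilibrium measure e_A(x) = kappa_x + sum_y lam_{x,y}(1 - h(y))."""
    h = hit(A)
    return {x: kappa[x] + sum(lam[(x, y)] * (1.0 - h[y]) for y in V if (x, y) in lam)
            for x in A}

K, Kp = {1}, {1, 2}
eK, eKp = eq_measure(K), eq_measure(Kp)
hK = hit(K)
# Sweeping: e_K(1) = sum_{y in K'} e_{K'}(y) P_y(hits K), the discrete analogue
# of (2.22) (K is a single point, so the hitting position is determined).
swept = sum(eKp[y] * hK[y] for y in Kp)
print(eK[1], swept)
# Summing over K recovers the monotonicity cap(K) <= cap(K'), cf. (2.23).
print(sum(eK.values()) <= sum(eKp.values()))
```

Both sides of the sweeping identity agree up to the solver tolerance.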

2.3.
Varying killing measure and the cables I_x. In the sequel, it will repeatedly be useful to compare the diffusion X on G for varying killing measure. In particular, this comprises 'infinite-volume' limits, in which all but finitely many x ∈ G initially satisfy κ_x = ∞, and κ is sequentially reduced, see (4.10) below. Consider the family of graphs (G_κ)_κ, where G_κ = (G, λ, κ), for fixed G and λ and varying killing measure κ ∈ [0, ∞]^G. Let G_κ be the cable system associated to G_κ (cf. below (1.1)). In view of (2.11), (2.12), one can interpret these cable systems as naturally embedded into one another as κ varies, see (2.28)-(2.29). By Theorem 4.4.2 in [15], the Dirichlet form associated to X^{G_{κ'}} is E^{G_{κ'}}, and so the corresponding traces are consistent with these embeddings. We now briefly compare the above setup to existing definitions of the metric graph G and its associated diffusion X, which do not usually involve attaching cables I_x to the vertices x ∈ G (see e.g. Section 5 of [4], Section 2 of [14] or Section 2 of [19]). Upon considering a suitable trace process in the present context, see (2.31) below, these two descriptions are essentially equivalent and, in particular, they lead to the same notion of capacity for most sets of interest. Most important to our investigations is the feature that the cables I_x provide natural embeddings as κ varies, see (2.28)-(2.29) above. This will be useful for approximation purposes, see (4.10) and Lemmas 4.6 and 6.3 below, as well as to derive (Law_h) and (Isom) in the case κ = 0. We define G^− as the closed subset of G consisting of the closure of the union of the intervals I_e, e ∈ E (in other words, the subset of G obtained upon removing the intervals I_x, x ∈ G), and denote by X^{G^−} the trace of X on G^−. One can prove by means of Theorem 6.2.1 in [15] that the Dirichlet form associated to X^{G^−} is the corresponding trace Dirichlet form, see (2.31), where we recall that the space D had been introduced below (2.1). If κ ≡ 0 on G, the process X^{G^−} thus corresponds to the usual diffusion on the cable system G^−. For general κ (i.e. G = G_κ), it follows from Theorems 6.1.1 and A.2.11
in [15] that X^{G^−} has the same law under P^G_x as the diffusion X^{G^−}_0 killed at time ζ^−_κ = inf{t ≥ 0 : Σ_{x∈G} κ_x ℓ_x(t) > ξ}, where ξ is an independent exponential variable with parameter 1 (with the convention inf ∅ = ζ_0). The latter is the process studied e.g. in Section 2 of [19]. Moreover, the trace of X^{G^−}_0 (killed at time ζ^−_κ) on G has the same law as Z; hence the local times (ℓ_y(t))_{y∈G^−, t≥0} have the same law under P^G_x as those of the process X^{G^−}_0 (killed at time ζ^−_κ) under P^{G_0}_x, i.e. the local times of the process introduced in [19].
Consequently, for compact K ⊂ G − one could have defined a notion cap G − (K) similarly as in (2.16) and (2.20), but starting from the process X G − and considering suitable enhancements of G − , resulting in cap G − (K) = cap G (K) for all K ⊂ G − . This can be further strengthened when κ ≡ 0, as asserted in the following lemma, which records the capacity of the cables I x for later purposes.
Lemma 2.4. For all x ∈ G, the following dichotomy holds: cap(I_x) = ∞ if κ_x > 0, while cap(I_x) = cap({x}) if κ_x = 0. (2.32) Moreover, if κ ≡ 0, then for all connected and closed sets A ⊂ G, the capacity of A coincides with that of A ∩ G^−. Proof. We first show (2.32). If κ_x > 0, then for all t ∈ (0, ρ_x), writing y_t = x + (ρ_x − t) · I_x (see the beginning of Section 2 for notation), we see by (2.9) that the killing parameter κ^{{y_t}}_{y_t} tends to ∞ as t ↓ 0. Hence, by (2.27), we obtain cap(I_x) = ∞. If κ_x = 0, then keeping the same notation, we have for all t ∈ (0, ∞) that P^{G^{{y_t}}}_{y_t}(H̃_{I^t_x} = ∞) = 0, since X behaves like a Brownian motion on I_x and hence always returns; by (2.27) we then obtain that cap(I_x) = cap({x}). Suppose now that κ ≡ 0, and let K ⊂ G be a connected and compact set. One then checks that cap(K) = cap(K ∩ G^−), from which the claim follows for such K, and for arbitrary closed connected sets by means of (2.27).
Remark 2.5. The second part of Lemma 2.4 implies that, when κ ≡ 0, one can consider G^− instead of G in all our results, for instance in (Isom). Note that this is no longer true when κ ≢ 0. Indeed, by (2.32) one then has cap(I_x) = ∞ whenever κ_x > 0 and, when considering G^− instead of G, one has to change the isomorphism (Isom) to take into account the influence of the trajectories in the random interlacement process entirely included in one of the cables I_x, x ∈ G with κ_x > 0, possibly hitting the sign clusters; see Remark 3.10, 4) for details.
2.4. The Gaussian free field. We now collect a few important properties of the Gaussian free field (ϕ_x)_{x∈G} on the cable system G defined in (1.3). We first recall its strong spatial Markov property and refer to Section 1 of [27] for details. For any open set O ⊂ G, we consider the σ-algebra A_O generated by the restriction of ϕ to O. We say that K is a compatible random compact subset of G if K is a compact subset of G with finitely many connected components and {K ⊂ O} ∈ A_O for any open set O ⊂ G. The Markov property now states that, for any compatible random compact K, the field ϕ decomposes as the sum of the harmonic extension η^ϕ_K and an independent centered Gaussian field with covariance g_{K^c}, see (2.34), where η^ϕ_K was defined in (2.24) and g_{K^c} in (2.5). An application of the Markov property is that, conditionally on (ϕ_x)_{x∈G}, if e = {y, z} ∈ E, the law of (ϕ_x)_{x∈I_e} is that of a Brownian bridge of length ρ_e between ϕ_y and ϕ_z of a Brownian motion with variance 2 at time 1, and these Brownian bridges are independent as e varies. Similarly, conditionally on (ϕ_x)_{x∈G}, one can describe the law of (ϕ_x)_{x∈I_y} as that of a Brownian bridge of length ρ_y between ϕ_y and 0 of a Brownian motion with variance 2 at time 1 if κ_y > 0, and as that of a Brownian motion starting in ϕ_y with variance 2 at time 1 if κ_y = 0; all these Brownian bridges and Brownian motions are independent. We refer to Section 2 of [9] for a proof of this result on Z^d, d ≥ 3, which can easily be adapted to any transient graph. In particular, we have that (2.35) conditionally on (ϕ_x)_{x∈G}, the random fields (ϕ_x)_{x∈I_e}, e ∈ E ∪ G, are independent, and for all e ∈ E ∪ G, the field (ϕ_x)_{x∈I_e} only depends on ϕ_{|e}. Moreover, using the exact formula for the distribution of the maximum of a Brownian bridge, see e.g. [3], Chapter IV.26, one knows that for all e ∈ E ∪ G the conditional probability that ϕ exceeds a given height on I_e admits the explicit expression (2.37), where for all e = {x, y} ∈ E and f : G → R the right-hand side only involves f(x), f(y) and λ_{x,y}. A useful notation p^G_e(f, g) will later be introduced, which includes (2.37) as a special case when g = 0, see (3.12) below.
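The bridge-maximum formula invoked above can be checked by simulation. A minimal sketch (the parameters below are our own assumptions, consistent only with the 'variance 2 at time 1' convention stated above): for a bridge of length t from a to b built from such a Brownian motion, the reflection principle gives P(max ≥ h) = exp(−(h − a)(h − b)/t) for h ≥ max(a, b), which is presumably the identity underlying (2.37).

```python
import math, random

random.seed(7)

def bridge_max(a, b, t, n):
    """Maximum of a discretized bridge from a to b over [0, t] whose underlying
    Brownian motion has variance 2 at time 1 (n time steps)."""
    dt = t / n
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += random.gauss(0.0, math.sqrt(2.0 * dt))  # variance 2 per unit time
        path.append(w)
    # pin the endpoint: X_s = a + W_s - (s/t) * (W_t - (b - a))
    return max(a + path[i] - (i / n) * (path[n] - (b - a)) for i in range(n + 1))

a, b, t, h = 0.0, 0.0, 1.0, 1.0
N, n = 4000, 800
est = sum(bridge_max(a, b, t, n) >= h for _ in range(N)) / N
exact = math.exp(-(h - a) * (h - b) / t)
print(est, exact)
```

With these parameters the exact value is e^{−1} ≈ 0.368; the discretized estimate comes out slightly smaller, since level crossings between grid points are missed.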

2.5. Random interlacements.
We now briefly introduce random interlacements on the cable system G. We define the set of doubly infinite trajectories W_G as the set of functions w : R → G ∪ {∆} for which there exist −∞ ≤ ζ^− < ζ^+ ≤ ∞ such that w|_{(ζ^−,ζ^+)} ∈ C((ζ^−, ζ^+), G) and w(t) = ∆ for all t ∉ (ζ^−, ζ^+). For each w ∈ W_G, we also define p*_G(w) = p*(w) as the equivalence class of w modulo time shift; here, w and w' are equal modulo time shift if there exists t_0 ∈ R such that w'(t + t_0) = w(t) for all t ∈ R, and W*_G = {p*(w) : w ∈ W_G}. Let W_G be the σ-algebra on W_G generated by the coordinate functions, and W*_G the σ-algebra on W*_G induced by p*. For w ∈ W_G, we define the forward part of w as (w(t))_{t≥0} and the backward part of w as (w(−t))_{t≥0}, which are both elements of W^+_G, see above (2.1). Given a compact K ⊂ G, we write W^0_{K,G} for the set of trajectories in W_G hitting K for the first time at time 0, and W*_{K,G} = p*(W^0_{K,G}). For w* ∈ W*_{K,G}, we define the forward (resp. backward) part of w* on hitting K as the forward (resp. backward) part of the unique trajectory in p*^{−1}(w*) ∩ W^0_{K,G}. The intensity measure underlying random interlacements on G is defined as follows. Recalling the definition of the last exit time L_K and the exterior boundary ∂K from (2.13) and below, one defines a measure Q_{K,G} on W_G, concentrated on W^0_{K,G}, whose restriction to W^0_{K,G} is given by (2.39); roughly, the forward part on hitting K is governed by P_{e_K}, while the backward part never returns to K at positive times. It is essentially folklore by now that there exists a unique measure ν_G on W*_G such that, for all compacts K ⊂ G, the restriction of ν_G to W*_{K,G} equals p* ∘ Q_{K,G}. (2.40) We will not give a proof of the existence of the measure ν_G; instead, we refer to [28] for a proof of the existence of such a measure on the discrete graph G when κ ≡ 0, and to [19] for the setting of the cable system associated to Z^d, d ≥ 3. Indeed, one can easily adapt these proofs to obtain a measure ν_G such that (2.40) holds for all compacts K of G with ∂K ⊂ G, also in the case κ ≢ 0 (see also Remark 2.2). Considering now the case of arbitrary compact subsets K of G, one can thus construct a measure ν_{G^∂K} such that (2.40) holds for ν_{G^∂K} and K.
Using Lemma 2.1, one easily deduces that ν_G is the 'trace on G' of ν_{G^∂K}, so that (2.40) also holds for ν_G and K. Alternatively, a direct proof of (2.40) on the cable system is also presented in Theorem 3.2 of [22].
The random interlacement process ω is a Poisson point process on W*_G × (0, ∞) under the probability P^I_G with intensity measure ν_G ⊗ λ, where λ is the Lebesgue measure on (0, ∞). When κ ≢ 0, the forward and backward parts of the trajectories can be killed before blowing up; in our setup this is realized by either part of the trajectory exiting G to ∆ via I_x for some x ∈ G with κ_x > 0. We also denote by ω^u the point process which consists of the trajectories in ω with label less than u, by (ℓ_{x,u})_{x∈G} the continuous field of local times of ω^u relative to m on G, and by I^u = {x ∈ G : ℓ_{x,u} > 0} the interlacement set at level u. The set I^u is characterized by the following identity: for any measurable set A ⊂ G, P^I_G(I^u ∩ A = ∅) = exp(−u cap(Ā)). (2.41) The trace of ω^u on G has the same law under P^I_G as the usual discrete random interlacement process, see [28] in the case κ ≡ 0. If κ ≢ 0, a trajectory in ω^u can start or end at a fixed point x ∈ G, and in this case we say that this trajectory is killed at x. We also define I^u_E ⊂ E ∪ G to be the set of edges in E crossed by at least one trajectory in ω^u, union with the set of vertices at which a trajectory in ω^u is killed. In the case λ_{x,y} = T/(T + 1) for all {x, y} ∈ E and κ_x = deg(x)/(T + 1) for all x ∈ G, where T > 0, the discrete random interlacement process corresponds to the model of 'finitary random interlacements' studied in [5]. In view of Remark 2.2, this actually fits within the framework of [28] upon suitable enhancement of G.
The law of ω^u can also be described as follows: for any compact K of G, the collection of forward trajectories in ω^u hitting K is a Poisson point process with intensity u P^G_{e_K}, which can be constructed from a Poisson point process of discrete trajectories with intensity u P^{G^∂K}_{e_K}(Z ∈ ·) by adding Brownian excursions on the edges. Hence, ω^u can be constructed from its discrete trace by adding independent Brownian excursions on the edges, see [19] for details. In particular, (2.42) conditionally on the trace of ω^u on G, the random variables (ℓ_{x,u})_{x∈I_e}, e ∈ E ∪ G, are independent, and for all e ∈ E ∪ G, (ℓ_{x,u})_{x∈I_e} only depends on ω_{u,e}, where ω_{u,e} is the set of trajectories in ω^u hitting e. When there is no risk of ambiguity, we abbreviate P^I = P^I_G and ν = ν_G.

3. Main results
In this section, we state our main results, Theorems 3.2, 3.7 and 3.9, and explore their consequences. Put together, these results in particular imply Theorem 1.1, see the end of this section for the short proof, but in fact they provide more detailed results. Theorem 3.2, together with its Corollary 3.3, roughly corresponds to 1) in Theorem 1.1. Theorem 3.7 investigates the properties of the cluster capacity observable. In particular, it establishes that, when bounded almost surely, the cluster E ≥h (x 0 ) has a capacity described by (Law h ). Theorem 3.9 then broadly speaking relates (Law h ) h≥0 and the identity (Isom) between random interlacements and the Gaussian free field on G. In doing so, it also supplies new instances of (Isom), see Remark 3.10,1), along with a version on the discrete base graph G, see (3.16). Finally, some further interesting consequences are put together in Corollaries 3.11 and 3.12.
We now lay the ground for our first main result, Theorem 3.2. Its true meaning becomes transparent upon defining, next to h_* (see (1.6)), two further critical parameters. As will soon become clear, the conditions κ ≡ 0 or (Cap) appearing in Theorem 3.2 will cause several of these parameters to coincide, leading to streamlined results. We first introduce h^{com}_* = inf{h ∈ R : for all x_0 ∈ G, E^{≥h}(x_0) is P^G-a.s. compact} (3.1) (recall that compactness is with respect to the graph distance d). Every compact set is (d-)bounded, so we always have h^{com}_* ≥ h_*. The third critical parameter, involving the capacity of clusters in E^{≥h}, is h^{cap}_* = inf{h ∈ R : for all x_0 ∈ G, cap(E^{≥h}(x_0)) is P^G-a.s. finite}. (3.2) On any graph such that κ ≡ 0 or (Cap) is verified, the situation becomes simpler, due to the following basic result. Its proof can be omitted at first reading.
Lemma 3.1. Let h ≥ 0, or h ∈ R in case κ ≡ 0, and let x_0 ∈ G. Then, P^G-a.s., the cluster E^{≥h}(x_0) is compact if and only if it is bounded. Proof. Observe that, by definition, a connected set K is compact if and only if it is a closed and bounded subset of G such that I_x ∩ K is a connected compact subset of I_x for all x ∈ G. Therefore, if the level set E^{≥h}(x_0) of x_0 is compact, then it is bounded. Hence, we only have to show the reverse implication, and we assume from now on that E^{≥h}(x_0) is bounded. First note that, as explained below (2.34), if κ_x = 0, then ϕ on I_x conditioned on ϕ_x has the same law as a Brownian motion starting in ϕ_x with variance 2 at time 1, so that I_x ∩ E^{≥h}(x_0) is a.s. a bounded, closed and connected subset of I_x, hence compact. If κ_x > 0, we have by (2.32) applied to the graph G^{{x+t·I_x}} (cf. Lemma 2.1 for notation) that the capacity of any neighborhood of the open end of I_x is infinite. Moreover, for κ_x > 0 and h ≥ 0, as explained below (2.34), since ϕ on I_x conditioned on ϕ_x has the same law as a Brownian bridge of finite length between ϕ_x and 0 of a Brownian motion with variance 2 at time 1, I_x ∩ E^{≥h}(x_0) is a.s. a connected compact subset of I_x, and so E^{≥h}(x_0) is a.s. compact. Lemma 3.1 has two immediate consequences. On the one hand, in view of (1.6), (3.1) and (3.3), Lemma 3.1 (applied in the case κ ≡ 0) yields that (3.4) if G is a transient graph with κ ≡ 0, then h^{com}_* = h_* ≥ h^{cap}_*.
We refer to Remark 8.2,3) in [22] for an example of a graph for which the inequality in (3.4) is strict. On the other hand, if condition (Cap) is fulfilled, then every connected closed set with finite capacity is bounded, and so h^{cap}_* ≥ h_* by (1.6) and (3.2). But by Lemma 3.1, for all x_0 ∈ G, if cap(E^{≥h}(x_0)) < ∞, then E^{≥h}(x_0) is also compact, and so h^{cap}_* ≥ h^{com}_*. Thus, we obtain that (3.5) if G satisfies (Cap), then h^{cap}_* ≥ h^{com}_* ≥ h_*. In particular, if G satisfies (Cap) and κ ≡ 0, then from (3.5) and (3.4) it is clear that the three critical parameters h^{com}_*, h_* and h^{cap}_* coincide; hence, in this case, in order to prove that they are equal to zero, it is sufficient to show that one of them is non-negative while another one is non-positive. Our first main result provides such a statement, without any further assumption on G (recall our setup from above (1.1)).
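For the reader's orientation, the relations just derived can be collected in a single display (our summary, in the notation above):

```latex
\underbrace{\;h^{\mathrm{cap}}_* \le h_* = h^{\mathrm{com}}_*\;}_{\text{if } \kappa \equiv 0,\ \text{by (3.4)}}
\qquad\text{and}\qquad
\underbrace{\;h_* \le h^{\mathrm{com}}_* \le h^{\mathrm{cap}}_*\;}_{\text{under (Cap)},\ \text{by (3.5)}}
```

Thus, when κ ≡ 0 and (Cap) both hold, the three parameters coincide, and a bound of the form h^{cap}_* ≤ 0 ≤ h^{com}_* forces their common value to vanish.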
Theorem 3.2. Let G be a transient weighted graph. For each x 0 ∈ G and h ≥ 0, the random variable cap(E ≥h (x 0 )) is P G -a.s. finite, and for each h < 0 the level set E ≥h (x 0 ) of x 0 is non-compact with positive probability.
The proof of Theorem 3.2 appears over the next two sections. Note that the fact that E^{≥h}(x_0) is non-compact with positive probability for all h < 0 could alternatively be obtained from the Markov property (2.34) similarly as in [6], see also the Appendix of [1] for details, or from the isomorphism (1.7), see (1.8) and above. Here, we will obtain it as a direct consequence of our methods. The following lemma supplies a large class of graphs for which (Cap) holds. In particular, by means of this lemma, Corollary 3.3 generalizes all previously known results about h_* = 0 (see below Theorem 1.1 for a list). We highlight item 2) of Lemma 3.4, comprising the condition (3.7), which is sufficient for (Cap) but stated only in terms of the Green function on G, and thus can be easier to verify. It implies for instance that any vertex-transitive graph verifies (Cap). Part 3) below accounts for the trees studied in [1] and shows that Proposition 2.2 in [1] can be seen as a direct consequence of Corollary 3.3,1); see also the discussion following Theorem 1.1.

2) If there exists g_0 < ∞ such that {x ∈ G : g(x, x) > g_0} has no unbounded connected component, (3.7) then condition (Cap) is verified for G. In particular, if G is vertex-transitive, (Cap) holds.
3) Let T be a transient tree with zero killing measure and unit weights, and denote by R^∞_x the effective resistance between x and ∞ in T_x, the sub-tree of T consisting only of x and its descendants (relative to a base point x_0 ∈ T). If {x ∈ T : R^∞_x > A} only has bounded connected components for some A > 0, then (Cap) is verified.

Remark 3.5. 1) In order to develop an intuition for the results of Theorem 3.2 and Corollary 3.3, consider the case where G is a finite transient graph. Recall that for x ∈ G such that κ_x > 0 (such x necessarily exists when G is finite and transient), the field ϕ on I_x, conditionally on ϕ_x, has the same law as a Brownian bridge of length ρ_x < ∞ between ϕ_x and 0 of a Brownian motion with variance 2 at time 1, see the discussion below (2.34). Therefore, for all h < 0, we have that P^G(ϕ_y ≥ h for all y ∈ I_x) > 0, and since I_x is non-compact, we obtain h^{com}_* ≥ 0. Similarly, if h ≥ 0, then P^G(ϕ_y ≥ h for all y ∈ I_x) = 0 for all x ∈ G, and since G is finite, it follows that h^{com}_* ≤ 0. Since (Cap) is trivially verified on finite graphs, we thus have by (3.5) that h^{com}_* = h^{cap}_* = 0. Note, however, that trivially h_* = −∞ since there are no unbounded sets on finite graphs, and so the inequality in (3.5) can be strict. In fact, the situation 0 = h^{com}_* = h^{cap}_* > h_* ≥ −∞ is emblematic of graphs with sub-exponential volume growth and (say) a uniform killing measure, and one typically has both strict inequalities 0 > h_* > −∞ when G is infinite, see Corollary 5.2 and Remark 5.7,2) in [22].
2) We refer to Proposition 8.1 in [22] for an example of a graph for which (Cap) is not satisfied and h^{cap}_* ≤ 0 (necessarily, by Theorem 3.2), yet h^{com}_* = h_* = ∞; in particular, this is a further example where the critical parameters do not coincide.
3) We now construct an example of a graph not fulfilling (Cap), but for which we still have h_* = h^{cap}_* = h^{com}_* = 0 (and therefore, as will turn out, (Sign) holds, cf. Corollary 3.12 below, or the first equivalence in Theorem 1.1,2)). Consider a graph G with κ ≡ 0 except possibly at x ∈ G, where κ_x ∈ [0, ∞). Let A ⊂ I_x be an infinite sequence converging towards the open end of I_x and, simultaneously interpreting A as the set of its values, consider the graph G^A given by Lemma 2.1. If G = Z^3 with unit weights and κ ≡ 0, then, noting that G^A \ ∪_{x∈A} I_x can be identified with G (see (2.7) and below (2.10)), it readily follows that h_* = h^{cap}_* = h^{com}_* = 0 on G^A. This chain of equalities follows (with a moment's thought) from the corresponding one on G, where it holds by Corollary 3.3, for instance using Lemma 3.4,2) to argue that (Cap) holds on G. But for A_n finite with A_n ր A, the equilibrium measure of A_n is supported on at most two points, whence cap(A) < ∞ by (2.27). In particular, G^A does not fulfill (Cap).
The previous example remains instructive if one instead takes G to be a finite graph with κ_x > 0, in order to appreciate the difference between h_* and h^{com}_*. With A as above, one has h^{com}_*(G), h^{com}_*(G^A) ≥ 0 by Theorem 3.2. On the other hand, h_*(G) = −∞ since G is finite, but h_*(G^A) ≥ 0 by Corollary 3.3,2) since κ^A ≡ 0. This shows that h_* really depends on the choice of base graph G, and not only on the cable system G. We refer to Proposition 7.1 in [22] for a less trivial example of a graph verifying (Sign) but not (Cap). 4) An interesting direct consequence of Corollary 3.3 concerns L_α, the discrete (Poissonian) loop soup at intensity parameter α > 0 (we refer to [19] for precise definitions).
Corollary 3.6. Let G be a transient weighted graph such that (Cap) holds. Then L 1/2 a.s. consists of finite clusters only.
Proof. If G satisfies (Cap), then by Corollary 3.3,1) and the symmetry and continuity of ϕ, the set {x ∈ G : |ϕ_x| > 0} only contains compact connected components. Hence, by Theorem 1 in [19], the loop soup L_{1/2} on G only contains compact connected components, on which its field of local times is positive. A fortiori, L_{1/2} only consists of finite clusters.
5) The condition (3.7) is strictly stronger than the condition (Cap). Indeed, consider G a rooted (d + 1)-regular tree, with weights 1/(n + 1) for each edge between a vertex at generation n and one of its children at generation n + 1, and zero killing measure. Then g(x, x) ≥ n + 1 for each x in generation n, and so (3.7) does not hold. On the other hand, for each infinite connected subset K of the tree having at most one vertex per generation, denoting by K n ⊂ K the subset of all points in K having generation at most n, one sees that for x ∈ K at generation k and all n ≥ k, the equilibrium measure of K n at x is at least c(k + 1) −1 for some absolute constant c = c(d), and so cap(K) = ∞ on account of (2.27). Since any infinite connected set A contains such K, (Cap) follows using Lemma 3.4,1) and (2.23). All in all, G verifies (Cap) but not (3.7).
Next, we investigate the random variable cap(E^{≥h}(x_0)), for x_0 ∈ G, h ∈ R (see (2.27) for the definition of cap(·) in this context), which will play a central role throughout the remainder of this article.
Theorem 3.7. If, for some h ∈ R and x_0 ∈ G, the cluster E^{≥h}(x_0) is P^G-a.s. bounded, then the random variable cap(E^{≥h}(x_0)) has moment generating function given by (Law_h) and an explicit density. Furthermore, assuming only that G satisfies (Cap), one has for each h ≥ 0 and x_0 ∈ G that (Law_h) and (3.9) hold, and that (3.10) cap(E^{≥−h}(x_0)) 1_{cap(E^{≥−h}(x_0))∈(0,∞)} has the same law as cap(E^{≥h}(x_0)) 1_{ϕ_{x_0}≥h}. In particular, (3.11) follows. Remark 3.8. 1) In case κ ≡ 0, one can replace G in the statements of Theorems 3.2 and 3.7 by G^−, which corresponds to removing the cables I_x, x ∈ G, from G, see above (2.31) for notation; this is no longer possible when κ ≢ 0, see Remark 2.5.
2) When G is a finite graph, one can deduce (3.11) directly from Corollary 1,(ii) in [21] with constant boundary condition h ≥ 0: saying that the random pseudo-metric between x_0 and the boundary of G introduced therein is equal to 0 is equivalent to saying that E^{≥−h}(x_0) is non-compact, or equivalently has infinite capacity. The statement (3.11) then follows by using the reflection principle and the fact that the effective resistance between x_0 and the boundary of G equals g(x_0, x_0). When G = Z^d, d ≥ 3, (3.11) is equivalent to the statement in Theorem 3 of [7].
The proof of Theorem 3.7 (along with that of Theorem 3.2) is given in the next two sections. Our starting point for both proofs is the observation (see Proposition 4.2 below) that, if true, the isomorphism (Isom) entails a great deal of information about the observables cap E ≥h (x 0 ) , h ∈ R. We use this observation on suitable finite-volume approximations of the free field on G, which our setup naturally allows for (essentially obtained by iteratively reducing κ starting from κ = ∞ outside a finite set). This is possible because (Isom) can be shown to hold without further assumptions on finite graphs. The condition (Cap) then provides a very efficient criterion in order to avoid losing too much information when passing to the limit (in particular, one retains (Law h )), thus yielding (3.9)-(3.11). In a sense, the first part of Theorem 3.2 describes the information that survives in the limit without any further assumptions on G.
As (Law_h), h ≥ 0, is essentially derived from (Isom) on finite-volume approximations of G, one naturally wonders how the validity of (Law_h), h ≥ 0, compares to that of (Isom) on G itself. This is the object of our next main result, Theorem 3.9 below; see in particular (3.14). Addressing this question will require us to prove that the full strength of (Isom) can be passed to the limit (which is rather more involved than what is required for the proof of Theorem 3.7), thereby obtaining an isomorphism on G, under suitable assumptions (namely (Sign) or (Law_0)).
In order to state Theorem 3.9, we introduce a variation (Isom') of the identity (Isom), which will sometimes be more convenient to work with. The two are in fact equivalent, see (3.14) and Lemma 6.1 below. The appeal of (Isom') is that it makes certain symmetries more apparent (see for instance Lemma 4.3). It will also naturally imply a certain discrete isomorphism on the base graph G, see (3.16) below, interesting in its own right.
The identity (Isom') involves additional randomness. We henceforth assume that, on a suitable extension P of P^G ⊗ P^I (which we simply denote by P when there is no risk of ambiguity), there exists for each u > 0 an additional process (σ^u_x)_{x∈G} ∈ {−1, 1}^G such that, conditionally on (|ϕ_x|)_{x∈G} and ω^u, σ^u is constant on each of the connected components of {x ∈ G : 2ℓ_{x,u} + ϕ_x² > 0}, σ^u_x = 1 for all x ∈ I^u, and the values of σ^u on each other cluster of {x ∈ G : 2ℓ_{x,u} + ϕ_x² > 0} are independent and uniformly distributed. For x such that 2ℓ_{x,u} + ϕ_x² = 0, the value of σ^u_x will not play any role in what follows, and one can fix it arbitrarily (e.g. to the value +1). Recalling the definition of C_u from below (Isom), it is clear that the clusters of {x ∈ G : 2ℓ_{x,u} + ϕ_x² > 0} are the union of the clusters of the interior of C_u and the clusters of {x ∈ G : |ϕ_x| > 0} ∩ (C_u)^c, and so one can equivalently define σ^u as follows: σ^u_x = 1 for all x ∈ C_u, σ^u is constant on each of the clusters of {x ∈ G : |ϕ_x| > 0} ∩ (C_u)^c, and its values on each cluster are independent and uniformly distributed. We will investigate the validity of the relation

(Isom') for each u > 0, the field (σ^u_x √(2ℓ_{x,u} + ϕ_x²))_{x∈G} has the same law under P as the field (ϕ_x + √(2u))_{x∈G} under P^G.
It is then an easy matter to see that (Isom) and (Isom') are equivalent, see Lemma 6.1 below.
Let p_e(ϕ, ℓ_{·,u}), for e = {x, y} ∈ E, and similarly p^{u,G}_x, x ∈ G, be defined by (3.12) and (3.13), respectively. Our last main result is the following theorem, which is proved in Section 6. Moreover, defining for any u > 0, on a suitable extension P of P^G ⊗ P^I, a random set Ê_u ⊂ E ∪ G such that, conditionally on (ϕ_x)_{x∈G} and ω^u, the set Ê_u contains each edge and vertex that is contained in I^E_u (see below (2.41) for notation), and it contains each additional edge and vertex e ∈ E ∪ G conditionally independently with probability 1 − p_e(ϕ, ℓ_{·,u}), the following holds: if any of the conditions in (3.14) is fulfilled, then, with

(3.15) E_u := {e ∈ E ∪ G : 2ℓ_{x,u} + ϕ_x² > 0 for all x ∈ I_e},

E_u has the same law under P as Ê_u under P.
In particular, if one defines (under P) a process (σ̂^u_x)_{x∈G} ∈ {−1, 1}^G such that, conditionally on (ϕ_x)_{x∈G}, ω^u and Ê_u,
• the process σ̂^u is constant on each of the clusters (of edges) induced by Ê_u ∩ E,
• the values of σ̂^u on all other clusters are independent and uniformly distributed,
then

(3.16) (σ̂^u_x √(2ℓ_{x,u} + ϕ_x²))_{x∈G} has the same law under P as (ϕ_x + √(2u))_{x∈G} under P^G.
Remark 3.10. 1) The conclusions of Theorem 3.7 in combination with (3.14) yield the validity of (Isom) assuming either (Sign) or (Cap) only.
2) The discrete isomorphism (3.16) bears similarities to the coupling derived in Theorem 1.bis of [19] (see also (4.8) below) in the context of loop soups, as well as to the coupling derived in Theorem 8 of [20] in the context of Markov jump processes. Notice that by construction, see the definition of Ê_u and (3.12), (3.13), the coupling P yielding (σ̂^u_x)_{x∈G} only requires information on the discrete base graph G, i.e., the reference to the cable system can be completely bypassed.
3) If h is a harmonic function on G, one can define the notion of h-transform of random interlacements, and an isomorphism between the h-transform of random interlacements and the Gaussian free field on G similar to (Isom) holds, under the same conditions, see Theorem 6.5 in [22] for details. 4) One can also deduce from Theorem 3.9 another isomorphism on G − , see Section 2.3. Let E − u ⊂ G − be a random set such that, conditionally on (ϕ x ) x∈ G − and ω G − u , the trace of the random interlacement process ω u on G − , the set E − u contains I u ∩ G − and each additional vertex x ∈ G conditionally independently with probability 1−p u,G x (ϕ, ℓ ·,u ) (or equivalently 1 − p u,G x (ϕ, 0)). Let also C − u be the closure of the union of the connected components of the sign clusters {x ∈ G − : |ϕ x | > 0} intersecting E − u . Then the isomorphism obtained by replacing G by G − and C u by C − u in (Isom) is also equivalent to any of the conditions in (3.14). In particular, if κ ≡ 0, then C − u = C u ∩ G − , and so the isomorphism (Isom) (or also (Law h ) in view of Lemma 2.4) can be equivalently stated on G or G − .
5) The conclusion (3.10) can a posteriori be strengthened. Indeed, knowing that (Isom') holds (which follows from (3.9) and (3.14)), one easily shows that compact clusters in E^{≥h} and E^{≥−h} have the same law, for all h > 0, see Lemma 4.3 below. In particular, under (Sign), the clusters of E^{≥h} have the same law as the compact clusters of E^{≥−h}, and so (3.18) holds for all h > 0, using (Sign) and Lemma 3.1 in the last step. In particular, one recovers (3.11) from (3.18) in case (Cap) holds. We further refer to Remark 5.3,2) regarding the symmetry of clusters in E^{≥h} and E^{≥−h} contained in a given compact set K ⊂ G, which does not require (Isom') to hold.
6) Let us explain how to explicitly construct the process σ^u on G in (Isom'). Let (x_n)_{n∈N} be a dense sequence in G and (σ'_n)_{n∈N} ∈ {−1, 1}^N be a sequence of independent and uniformly distributed random variables under P. Let m(x) be the smallest n ∈ N such that x_n and x are in the same cluster of {y ∈ G : 2ℓ_{y,u} + ϕ_y² > 0}; since (x_n)_{n∈N} is dense and y ↦ 2ℓ_{y,u} + ϕ_y² is continuous, we have that m(x) < ∞ whenever 2ℓ_{x,u} + ϕ_x² > 0. One then sets σ^u_x = σ'_{m(x)} if 2ℓ_{x,u} + ϕ_x² > 0 and x ∉ C_u, and σ^u_x = 1 otherwise, which has the desired properties. As an aside, note that in the isomorphism (4.6) between loop soups and the Gaussian free field, one could also construct explicitly the law of the signs σ by a similar procedure.
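The construction in 6) has an elementary discrete counterpart: on a finite graph one can realize signs that are constant on each cluster of a positive set and i.i.d. uniform across clusters by attaching to each cluster the sign indexed by its smallest-index vertex, in analogy with the dense sequence (x_n). A hypothetical sketch (union-find on a toy graph; all data are illustrative):

```python
import random

def cluster_signs(values, edges, rng):
    """Assign i.i.d. uniform +-1 signs, one per cluster of {values > 0},
    mirroring the construction via a fixed enumeration of points:
    the sign of x is the sign attached to the smallest-index vertex
    of the cluster of x; vertices outside {values > 0} get the default +1."""
    n = len(values)
    parent = list(range(n))
    def find(a):  # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for x, y in edges:  # merge clusters of the positive set
        if values[x] > 0 and values[y] > 0:
            parent[find(x)] = find(y)
    # m(x): smallest index in the cluster of x (plays the role of the dense sequence)
    m = {}
    for x in range(n):
        if values[x] > 0:
            r = find(x)
            m[r] = min(m.get(r, x), x)
    sigma_prime = [rng.choice([-1, 1]) for _ in range(n)]  # i.i.d. uniform signs
    return [sigma_prime[m[find(x)]] if values[x] > 0 else 1 for x in range(n)]

rng = random.Random(0)
vals = [0.5, 1.2, 0.0, 0.3, 0.7]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
sig = cluster_signs(vals, edges, rng)
# clusters of {values > 0}: {0, 1} and {3, 4}; vertex 2 gets the default sign +1
```

Since the index m(x) only depends on the cluster of x, the resulting sign field is automatically constant on clusters, and distinct clusters receive independent uniform signs.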
Let us now give several interesting consequences of Theorem 3.9, as well as of the usual isomorphism (1.7). By continuity of the Gaussian free field, as already noted in (5.3) and below in [8], one can easily deduce from (1.7) that

(3.19) there exists a coupling between I^u and ϕ such that a.s. each connected component of I^u is contained in {x ∈ G : ϕ_x > −√(2u)} or in {x ∈ G : ϕ_x < −√(2u)}.

Moreover, if h_kill < 1, see (1.2), then each forward trajectory of the random interlacement process has a positive probability of not being killed, and so I^u is unbounded with positive probability for all u > 0. Hence, we obtain that for all u > 0 either {x ∈ G : ϕ_x > −√(2u)} or {x ∈ G : ϕ_x < −√(2u)} is unbounded with positive probability, and by symmetry of the Gaussian free field, it follows that (1.8) holds.
Note that this improves the result from Corollary 3.3, ii). However, the proof of (1.8) relies on the isomorphism (1.7) between random interlacements and the Gaussian free field on infinite graphs, whereas the proof of Corollary 3.3, ii) only relies on this isomorphism on finite graphs, or equivalently the second Ray-Knight theorem (see Theorem 2 in [20]), or alternatively on an argument based on the Markov property for the Gaussian free field from [6], as explained below Theorem 3.2.
The advantage of the isomorphism (Isom) is that when it holds, or equivalently (Law_0) by Theorem 3.9, one can directly improve (3.19) to prove that

(3.20) there exists a coupling between I^u and ϕ such that a.s. I^u ⊂ {x ∈ G : ϕ_x > −√(2u)}.
In particular, by symmetry of the Gaussian free field, we obtain that there exists a coupling between V u and ϕ such that E ≥ √ 2u ⊂ V u , where V u = (I u ) c is the vacant set of random interlacements, thus generalizing Theorem 3 in [19] from Z d to any graph satisfying (Law 0 ), or simply (Cap) by (3.9). We refer to [27], [1] and [8] for other applications of couplings similar to (3.20). Another interesting consequence of Theorem 3.9 is the following for the value of h * .
Corollary 3.11. Let G be a transient weighted graph satisfying (Law_0). Then either P^G-a.s. the sign clusters of the Gaussian free field on G only contain compact connected components, or E^{≥h} contains for each h ∈ R at least one unbounded connected component with P^G-positive probability. In particular, if (Law_0) holds and h_kill < 1, then by (1.8), h_*, h^{com}_* ∈ {0, ∞}. The proof of Corollary 3.11 appears at the end of Section 6. We refer to [22] for an example of a graph satisfying h_kill < 1, but for which h_* = h^{com}_* = ∞. Note however that we still have h^{cap}_* ≤ 0 by Theorem 3.2. In view of Corollary 3.11, an interesting open question is then whether a transient graph with h_* ∈ (0, ∞), or h^{com}_* ∈ (0, ∞), exists or not. Another interesting consequence of Corollary 3.11 is that if h_* = 0, then the level sets of the Gaussian free field do not percolate at the critical point h = 0, as implied by the following: Corollary 3.12. If G is a transient graph such that h_* ≤ 0, then E^{≥0} contains only bounded connected components.
We refer to the end of Section 6 for the proof of Corollary 3.12. We conclude this section with the short Proof of Theorem 1.1. Theorem 1.1,1) follows from the first conclusion of Theorem 3.2 and Corollary 3.3,i). The first equivalence in Theorem 1.1,2) is a consequence of Corollary 3.12 (the reverse implication being immediate, see (1.6)). Next, the implication (Sign) =⇒ (Law_0) is a consequence of the first conclusion of Theorem 3.7, and the remaining equivalences follow from Corollary 3.12 and (3.14) in Theorem 3.9. Finally, Theorem 1.1,3) is implied by Corollary 3.11.

4 Some preparation
In this section, we prepare the ground for the proofs of Theorems 3.2 and 3.7. Their proofs, given in the next section, combine three main ingredients, corresponding to Proposition 4.2, Lemma 4.4 and Lemma 4.6 below. They also rely on a symmetry property implied by (Isom'), stated in Lemma 4.3, which is of independent interest. These results will also be useful in Section 6 in the course of proving Theorem 3.9, albeit in a different manner.
Our starting point, Proposition 4.2 below, contains the key observation that (Law_h), h ≥ 0, follows from the identity (Isom'), if assumed to hold. Lemma 4.4 implies a version of the isomorphism (Isom'), valid on finite graphs (this result is in fact a consequence of the isomorphism theorems between loop soups and the Gaussian free field from [19], see also (4.6) below; the proof of Lemma 4.4 is given in Appendix B). Importantly, Lemma 4.4 allows Proposition 4.2 to apply automatically in a finite setup. Finally, Lemma 4.6 supplies a useful approximation scheme for ϕ based on (2.28), see (4.10) below, which entails the important limits (4.16), (4.17) from Corollary 4.7. With these results at hand, the proofs of Theorems 3.2 and 3.7 quickly follow. They appear in the next section.
Unless specified otherwise, we tacitly assume that G is a transient weighted graph (see above (1.1) for our setup). We begin with the following technical lemma. For each x_0 ∈ G and h ∈ R, defining E^{>h} = {y ∈ G : ϕ_y > h} and E^{>h}(x_0) = {y ∈ G : y ↔ x_0 in E^{>h}}, and denoting by Ē^{>h}(x_0) the closure of E^{>h}(x_0), one has, P^G-a.s. on the event {ϕ_{x_0} > h}, that Ē^{>h}(x_0) = E^{≥h}(x_0). To see this, observe that a point y lies outside the cluster if and only if for every connected path π from x_0 to y ∈ ∂O', with π closed in x and open in y, there exists z ∈ π with ϕ_z ≤ h. With probability one, we can moreover assume that ϕ_x ≠ h for all x ∈ G. Then, by definition of K, there exists an edge or vertex e ∈ E ∪ G, x ∈ I_e ∩ ∂E^{>h}_K(x_0), with x in the interior of π, and, if e ∈ E, y ∈ I_e ∩ ∂K with y ≠ x. Since ϕ_x = h by continuity of ϕ, using the Markov property (2.34) and a reasoning similar to the one above (2.9) in [9], one can show that when e ∈ E, conditionally on A^+_K, the law of ϕ on the edge between x and y is the same as the law of a Brownian bridge with variance 2 at time 1, on the edge between x and y, with value h at x and ϕ_y at y. This Brownian bridge is a.s. strictly smaller than h infinitely many times in any neighborhood of x, and so a.s. ϕ < h infinitely many times in any neighborhood of x, that is, x ∈ ∂E^{≥h}(x_0). If e ∈ G, one can prove similarly that x ∈ ∂E^{≥h}(x_0), since the law of ϕ on the edge between x and the open end of I_e is the same as the law of a Brownian bridge with variance 2 at time 1 between ϕ_x and 0. This is a contradiction since x is in the interior of π ⊂ E^{≥h}_K(x_0), and so E^{≥h}_K(x_0) = Ē^{>h}_K(x_0) P^G-a.s. Taking a sequence of compacts K = K_n increasing to G, we conclude. Define Σ = {y ∈ G : |ϕ_y| > 0} and

(4.1) Σ(x) = {y ∈ G : y ↔ x in Σ} for x ∈ G

(see below (1.4) for notation). We first consider the case h = 0, and the sets Σ̄(x), x ∈ G, which are the closures of the sign clusters Σ(x).
Note that if Σ(x) ∩ I^u = ∅, then the cluster of x in {y ∈ G : 2ℓ_{y,u} + ϕ_y² > 0} is equal to Σ(x) (both Σ(x) and I^u are open), and so σ^u_x = ±1 with conditional probability 1/2 given (|ϕ_x|)_{x∈G} and ω^u under P (recall σ^u as defined above Theorem 3.9). On the other hand, if Σ(x) ∩ I^u ≠ ∅, then x ↔ I^u in {y ∈ G : 2ℓ_{y,u} + ϕ_y² > 0}, and so σ^u_x = 1. As E[sign(X + a)] = P(|X| < a) for any centered Gaussian variable X and a > 0, by (Isom'), (2.41) and the symmetry of the Gaussian free field, we thus obtain (4.2), valid for all u > 0 and x ∈ G. Next, we note that by Lemma 4.1 for h = 0, P^G-a.s., Σ̄(x) = E^{≥0}(x) on {ϕ_x > 0}. Therefore, by symmetry of the Gaussian free field in combination with (4.2), we obtain (Law_0).
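The Gaussian identity E[sign(X + a)] = P(|X| < a) used above reduces to the observation that both sides equal 2Φ(a/σ) − 1 when X is centered Gaussian with standard deviation σ. A quick numerical check (all parameters are illustrative):

```python
import math
import random

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def lhs(a, sigma):
    """E[sign(X + a)] = P(X > -a) - P(X <= -a) for X ~ N(0, sigma^2)."""
    return 1.0 - 2.0 * Phi(-a / sigma)

def rhs(a, sigma):
    """P(|X| < a) for X ~ N(0, sigma^2)."""
    return Phi(a / sigma) - Phi(-a / sigma)

# exact agreement of the two closed forms
for a, s in [(0.3, 1.0), (1.0, 2.0), (2.5, 0.7)]:
    assert abs(lhs(a, s) - rhs(a, s)) < 1e-12

# Monte Carlo sanity check (seeded; sample size chosen for illustration)
rng = random.Random(42)
a, s, n = 1.0, 1.5, 200_000
est = sum(math.copysign(1.0, rng.gauss(0.0, s) + a) for _ in range(n)) / n
assert abs(est - rhs(a, s)) < 0.02
```

Both closed forms collapse to erf(a/(σ√2)), which is why the sign expectation turns into a small-absolute-value probability in the derivation of (4.2).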
Let us now consider some h > 0, and let u_0 = h²/2. We will reduce this to the case h = 0. By the symmetry of the Gaussian free field, (Isom') and Lemma 4.1, we have that E^{≥h}(x) has the same law under P^G as the closure of the connected component of x in {y ∈ G : σ^{u_0}_y = −1} under P, which is the law of the set that equals Σ̄(x) if I^{u_0} ∩ Σ(x) = ∅ and σ^{u_0}_x = −1, and equals ∅ otherwise. Therefore, by (2.41), we obtain (Law_h) for all u > 0, using (4.2) in the last step.
Next, we observe a symmetry property of compact clusters implied by (Isom'): for each u > 0, the compact clusters of E^{≥−√(2u)} have the same law as the compact clusters of E^{≥√(2u)}. Indeed, by (Isom') and Lemma 4.1, the compact clusters of E^{≥−√(2u)} have the same law as the closures of the clusters of {x ∈ G : σ^u_x = 1} whose closure is compact. Each cluster of I^u is non-compact, and so, by definition of σ^u, the compact clusters of E^{≥−√(2u)} have the same law as the closures of the clusters of Σ (cf. (4.1)) whose closure is compact, that do not intersect I^u and for which σ^u = 1. By definition of σ^u, the law of these clusters of Σ is unchanged if one retains all the previous properties but the last one and requires σ^u = −1 instead. But by (Isom'), the resulting clusters have the same law as those of {x ∈ G : ϕ_x < −√(2u)} whose closure is compact, i.e. by Lemma 4.1 the clusters whose closures are the compact clusters of {x ∈ G : ϕ_x ≤ −√(2u)}. Finally, by the symmetry of the Gaussian free field, these closures have the same law as the compact clusters of E^{≥√(2u)}.
The proofs of our next two ingredients, Lemmas 4.4 and 4.6 below, rely on certain aspects of Poissonian loop soups. This requires a small amount of notation, which we now introduce. We also review certain features of loop soups, which will be used in the sequel. Following e.g. [13], [17], one defines a measure μ^L on loops in G with compact closure in G associated with P^G_x, x ∈ G, and, under a suitable probability measure P^L = P^L_G, for all α > 0 the loop soup L_α with parameter α as the Poisson point process on the space of (compact) loops on G with intensity α μ^L. We denote by (L^{(α)}_x)_{x∈G} its field of local times relative to m on G (cf. above (2.1)), which can be taken to be continuous, see Lemma 2.2 in [19]. Moreover, we denote by L_α the Poisson point process consisting of the trace on G of each loop in L_α, which has the same law as the loop soup associated with P^G_x, see Section 2 of [19]. We will also use the restriction property (4.5), where G^A_∞ is the graph with the same vertices, edges and weights as G^{∂A} (see Lemma 2.1), but with killing measure equal to κ on (G ∩ A) \ ∂A, and equal to infinity on ∂A ∪ (G ∩ A^c). That is, for all x ∈ A, the diffusion X under P^{G^A_∞}_x has the same law as X killed on exiting A under P^G_x. When α = 1/2, the loop soup L_{1/2} is linked to the Gaussian free field on G via the following isomorphism, due to Lupu [19]; see also Theorem 2 of Le Jan [17] for a similar identity regarding the square of the Gaussian free field on the discrete base graph G (not including the sign of ϕ).
Introducing the shorthand L_· = L^{(1/2)}_· for the local time field of L_{1/2} to simplify notation, let P^L_G be suitably extended (keeping the same notation) so as to carry a process (σ_x)_{x∈G} ∈ {−1, 1}^G such that, conditionally on L_{1/2}, σ is constant on each cluster of {x ∈ G : L_x > 0}, and its values on each cluster are independent and uniformly distributed. Then

(4.6) under P^L_G, the law of (σ_x √(2L_x))_{x∈G} is P^G_G;

the measure P^L_G is essentially the coupling constructed in Proposition 2.1 of [19], where the (explicit) law of σ on G follows from a version of Lemma 3.2 in [19] on G rather than G^−, cf. above (2.31). The identity (4.6) also comes with the following discrete version. Define (still under P^L) the random edge set Ê as in (4.7). In particular, if we define a process (σ̂_x)_{x∈G} ∈ {−1, 1}^G such that, conditionally on L_{1/2} and Ê, σ̂ is constant on each of the (discrete) clusters induced by Ê and its values on each cluster are independent and uniformly distributed, then

(4.8) (σ̂_x √(2L_x))_{x∈G} has the same law under P^L_G as (ϕ_x)_{x∈G} under P^G_G

(Corollary 3.6 in [19] provides (4.7), and one can then directly derive (4.8), see Theorem 1.bis in [19]). The identity (4.6) is an analogue in the context of loop soups of the relation (Isom') for interlacements (a similar analogy can be drawn between (4.8) and (3.16)). In particular, the following holds on finite graphs, i.e. on graphs G = (G, λ, κ) such that {x ∈ G : κ_x < ∞} is finite (note that this implies that the induced graph (G, λ, κ) has finite vertex set G, cf. (2.12)).

Lemma 4.4.
If G is a finite transient weighted graph, then (Isom') holds. Moreover, conditionally on ω^u and (ϕ_x)_{x∈G}, the family of events {e ∈ Ê_u}, e ∈ E ∪ G (with Ê_u defined above (3.15)), is independent, and for all e ∈ E ∪ G,

(4.9) P(e ∈ Ê_u | ω^u, (ϕ_x)_{x∈G}) = 1_{e ∈ I^E_u} ∨ (1 − p_e(ϕ, ℓ_{·,u})).

For completeness, we have included the proof of Lemma 4.4 in Appendix B. We briefly sketch it here. To deduce (Isom'), one essentially considers the decomposition L_{1/2} = L^{in}_{1/2} + L*_{1/2} of the loop soup on the cable system G* of a suitable one-point compactification G* = G ∪ {x*} of G (with killing at x*, so that G* is transient), into the 'interior' loops constituting L^{in}_{1/2}, which never hit x*, and the loops L*_{1/2}, which contain x*. The two processes are independent. Inserting the corresponding decomposition of the local times L_· of L_{1/2} into (4.6) (applied on G*), one can then generate in law the field σ^u_· √(2ℓ_{·,u} + ϕ²_·) appearing in (Isom') by suitable conditioning, and one observes that this conditioning causes a global shift by √(2u) in (4.6). Roughly speaking, the local times of L^{in}_{1/2} generate ϕ²_·/2 in this procedure by (4.5) and (4.6), whereas the local times of L*_{1/2} give rise to ℓ_{·,u}; see also [20], or Section 2 of [18], for similar ideas used to deduce the second Ray-Knight theorem from (4.6). The latter is related to the interlacement process by concatenating the trajectories contributing to ℓ_{·,u}, so as to represent the successive excursions of a single diffusion X_{·∧τ_u} under P^{G*}_{x*} stopped at τ_u = inf{t ≥ 0 : ℓ_{x*}(t) ≥ u}. The conditional law in (4.9) is then obtained by following ideas of [20], Section 2.5.
Remark 4.5. The proof of Lemma 4.4 delineated above uses the isomorphism (4.6) relating loop soups and the Gaussian free field. Similarly to the proof of Theorem 2.4 of [27], one could alternatively use the Markov property (2.34) to prove that (Isom) (which is easily seen to be equivalent to (Isom'), see Lemma 6.1 below) holds on any finite transient graph (or more generally on any transient graph with bounded Green function such that (Sign) holds). However, this approach does not directly provide the discrete isomorphism described by (4.9).
We proceed to state the third ingredient, Lemma 4.6 below, which supplies a way to approximate the Gaussian free field on any transient graph G by Gaussian free fields on finite graphs. The following definition is key. For a given graph G = (G, λ, κ), we say that a sequence of graphs G_n increases to G if

(4.10) G_n = (G, λ, κ^{(n)}) for a sequence κ^{(n)} ∈ [0, ∞]^G, n ∈ N, of killing measures such that κ^{(n)}_x ↓ κ_x as n → ∞ for all x ∈ G.

In particular, we will be interested in finite-volume approximations of G, for which κ^{(n)} = ∞ outside a finite set U_n for every n, with U_n exhausting G as n → ∞. The graphs G_n thus considered are finite (in the sense defined above Lemma 4.4). Due to the observations made around (2.28), for G_n as in (4.10), we can view each G_n as a subset of G, in such a way that the sequence G_n increases to G and, for each compact K ⊂ G, we have K ⊂ G_n for large enough n.
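The effect of the finite-volume approximation (4.10) can be made concrete on a toy network: setting κ^{(n)} = ∞ outside U_n amounts to taking a principal submatrix of the discrete energy form, and the corresponding Green functions then increase to the Green function of the full graph. A hypothetical sketch (the path graph and all numbers are illustrative):

```python
from fractions import Fraction as F

def green(conds, kappa, alive):
    """g(x0, x0) on the subgraph of 'alive' vertices (kappa = infinity elsewhere):
    first entry of (Lambda - W)^{-1} e_{x0}, restricted to 'alive'. Edges leading
    to killed vertices still contribute to the diagonal, i.e. act as killing."""
    idx = {v: i for i, v in enumerate(alive)}
    n = len(alive)
    M = [[F(0)] * n for _ in range(n)]
    for i, v in enumerate(alive):
        M[i][i] = kappa[v] + sum(c for e, c in conds.items() if v in e)
        for (a, b), c in conds.items():
            if a == v and b in idx:
                M[i][idx[b]] -= c
            if b == v and a in idx:
                M[i][idx[a]] -= c
    # solve M v = e_0 by exact Gaussian elimination
    A = [row[:] + [F(1) if i == 0 else F(0)] for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return A[0][n]

# hypothetical path graph 0-1-2-3-4, unit conductances, killing 1 at vertex 4
conds = {(i, i + 1): F(1) for i in range(4)}
kappa = {0: F(0), 1: F(0), 2: F(0), 3: F(0), 4: F(1)}
gs = [green(conds, kappa, list(range(m + 1))) for m in range(5)]
print(gs)  # [1, 2, 3, 4, 5]: g_{G_n}(x0, x0) increases to g(x0, x0)
```

Each enlargement of the alive set U_n removes some killing and adds one more unit of series resistance to the boundary, so the Green function at the base point increases monotonically to its full-graph value, in line with the monotone convergence used below Lemma 4.6.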
Lemma 4.6. Let G be a transient weighted graph, and let G_n, n ∈ N, be a sequence of transient weighted graphs increasing to G_∞ = G. There exists a probability space (Ω, F, P) on which the processes (ϕ^{(n)}_x)_{x∈G_n}, n ∈ N, and (ϕ^{(∞)}_x)_{x∈G} are defined, with the following properties:

(4.11) for all n ∈ N ∪ {∞}, (ϕ^{(n)}_x)_{x∈G_n} has law P^G_{G_n};
(4.12) P-a.s., for all compact K ⊂ G, one has ϕ^{(n)}_x = ϕ^{(∞)}_x for x ∈ K and n large enough.

Proof. Consider the loop soup L^{(∞)} = L_{1/2} on G under P^L_G, and denote by (L^{(n)}_x)_{x∈G_n} the accumulated local times of those loops in L^{(∞)} which are entirely contained in the open set G_n ⊂ G. One can clearly identify G_n with G^{G_n}_∞, and by (4.5), the law of (L^{(n)}_x)_{x∈G_n} is the same as the law of (L_x)_{x∈G} under P^L_{G_n}. Moreover, for each x ∈ G, the sequence L^{(n)}_x, n ∈ N, is increasing, and we denote by L^{(∞)}_x its limit. Since each loop of L^{(∞)} is relatively compact, it is contained in G_n for n large enough, and so (L^{(∞)}_x)_{x∈G} equals the field of total local times of the loops in L^{(∞)}, whence (4.13), where L_· is the occupation time field of L_{1/2} (on G). Let, for each k ∈ N, (A^{(k)}_p)_{p∈N} be an enumeration of the clusters of {L^{(k)} > 0} (⊂ G_k), and let (σ_p)_{p∈N} ∈ {−1, 1}^N be an independent sequence of uniformly distributed random variables. For each n ∈ N and x ∈ G_n we define E^L_n(x) = {y ∈ G_n : x ↔ y in {L^{(n)} > 0}}, and if L^{(n)}_x ≠ 0, we denote by k_n(x) ∈ {1, . . . , n} the smallest index k such that {L^{(k)} > 0} intersects the cluster of x in {L^{(n)} > 0}, with the convention L^{(0)} ≡ 0. We also define p_n(x) = inf{p ∈ N : A^{(k_n(x))}_p ⊂ E^L_n(x)}, with the convention inf ∅ = +∞. Note that, since L^{(n)}_x, n ∈ N, is increasing for all x ∈ G and k_n(x) ≤ n, we have that p_n(x) < ∞ if L^{(n)}_x ≠ 0. For each n ∈ N and x ∈ G_n, we then let σ^{(n)}_x = σ_{p_n(x)} if L^{(n)}_x > 0 and σ^{(n)}_x = 1 otherwise, and set

(4.14) ϕ^{(n)}_x := σ^{(n)}_x √(2 L^{(n)}_x), x ∈ G_n.
Moreover, for each x ∈ G with L^{(∞)}_x > 0, we have x ∈ G_n as well as L^{(n)}_x > 0 for all n large enough, hence k_n(x) is constant for n large enough, since E^L_n(x) increases to E^L_∞(x). As a consequence, the sequence p_n(x), n ∈ N, is eventually decreasing, and we denote by p_∞(x) its limit; note that we then have p_n(x) = p_∞(x) for n large enough. We define σ^{(∞)}_x = σ_{p_∞(x)} if L^{(∞)}_x > 0 and σ^{(∞)}_x = 1 otherwise, and set ϕ^{(∞)}_x = σ^{(∞)}_x √(2 L^{(∞)}_x) for x ∈ G. We then have ϕ^{(n)}_x → ϕ^{(∞)}_x for all x ∈ G due to (4.13), (4.14) and since σ^{(n)}_x = σ^{(∞)}_x for all large enough n. Finally, g_{G_n}(x, y) → g_G(x, y) = g(x, y) as n → ∞ for all x, y ∈ G, see (4.15), whence the fact that (ϕ^{(∞)}_x)_{x∈G} has law P^G follows from (4.15) and the convergence of ϕ^{(n)} (in law). This shows (4.11).
With probability 1, for each connected compact K ⊂ G, there exists a random N ∈ N such that for all n ≥ N, one has K ⊂ G_n, and no trajectory in L^{(∞)} hitting K hits G \ G_n. One then has the equality L^{(n)}_x = L^{(∞)}_x for all n ≥ N and x ∈ K, and the clusters of {L^{(n)}_· > 0} in G whose closure is contained in K are equal to the clusters of {L^{(∞)}_· > 0} whose closure is contained in K. As a consequence, once n ≥ N, one has that σ^{(n)} = σ^{(∞)} on all these clusters. Since ∂K is finite, we also have σ^{(n)}_x = σ^{(∞)}_x (= 1) for all x ∈ ∂K with L^{(∞)}_x = 0 and n large enough. The claim (4.12) follows.
Proof. As a consequence of (4.12), one knows that for each compact K ⊂ G, one has ϕ^{(n)} = ϕ^{(∞)} on K for large enough n, whence cap_{G_n}(E^{≥h}_{n,K}(x_0)) = cap_{G_n}(E^{≥h}_{∞,K}(x_0)) for such n. From this, (4.16) follows using that cap_{G_n}(A) → cap_G(A) for compact A as n → ∞, applied with the choice A = E^{≥h}_{∞,K}(x_0) (indeed, using (2.6), (2.16) and (2.20), it is not hard to show that the equilibrium measure of any compact set A on G_n converges, in fact decreases, to the equilibrium measure of A on G). Moreover, on the event that E^{≥h}_∞(x_0) is compact, one has E^{≥h}_n(x_0) = E^{≥h}_{n,K}(x_0) for large enough n and K, depending on ϕ. Together with (4.16), this immediately gives (4.17).
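The monotone convergence cap_{G_n}(A) ↓ cap_G(A) invoked in the proof can likewise be observed on a toy discrete network, by computing the equilibrium potential (equal to 1 on A and harmonic off A, with the killed vertices acting as a grounded boundary) and summing the equilibrium measure over A. All names and numbers below are illustrative:

```python
from fractions import Fraction as F

def solve(M, b):
    """Exact Gaussian elimination: solve M v = b over the rationals."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n] for row in A]

def capacity(conds, kappa, alive, A):
    """cap(A) on the subgraph 'alive' (kappa = infinity outside): solve for the
    equilibrium potential h (= 1 on A, (Lambda - W) h = 0 off A) and sum the
    equilibrium measure e_A(x) = ((Lambda - W) h)(x) over x in A."""
    lam = {v: kappa[v] + sum(c for e, c in conds.items() if v in e) for v in alive}
    def cond(v, w):
        return conds.get((v, w), conds.get((w, v), F(0)))
    free = [v for v in alive if v not in A]
    M = [[(lam[v] if v == w else -cond(v, w)) for w in free] for v in free]
    b = [sum(cond(v, a) for a in A) for v in free]
    hf = dict(zip(free, solve(M, b))) if free else {}
    h = {**{a: F(1) for a in A}, **hf}
    return sum(lam[x] - sum(cond(x, w) * h[w] for w in alive if w != x) for x in A)

# hypothetical path 0-1-2-3-4, unit conductances, killing 1 at vertex 4, A = {0, 2}
conds = {(i, i + 1): F(1) for i in range(4)}
kappa = {i: F(0) for i in range(4)}; kappa[4] = F(1)
caps = [capacity(conds, kappa, list(range(m + 1)), {0, 2}) for m in (2, 3, 4)]
print(caps)  # decreases: [1, 1/2, 1/3]
```

As the killed region recedes, it becomes harder for the diffusion started outside A to reach the boundary before hitting A, so the equilibrium measure, and hence the capacity, decreases to its infinite-volume value.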

5 Proofs of Theorems 3.2 and 3.7
With the results of the last section at hand, we are ready to give the proofs of Theorems 3.2 and 3.7. Proof. If (Isom') and (Cap) are satisfied, (Sign) follows from (Law_0) (which holds on account of Proposition 4.2) by letting u ↓ 0 and using (Cap). Therefore (3.17) holds, which, together with (Cap) and Lemma 3.1, yields (3.10). Then, (3.11) follows using (3.10). We now give the Proof of Theorem 3.2. For a given graph G = (G, λ, κ), consider an increasing sequence U_n, n ∈ N, of finite connected subsets of G exhausting G, i.e. satisfying U_n ⊂ U_{n+1} for all n and ⋃_n U_n = G. Now, define G_n = (G, λ, κ^{(n)}) with killing measure κ^{(n)}_x = κ_x for x ∈ U_n and κ^{(n)}_x = ∞ otherwise. The sequence of graphs G_n, n ∈ N, increases to G in the sense of (4.10), and G_n is finite for each n ∈ N in the sense defined above Lemma 4.4. Fixing a point x_0 ∈ G, we may furthermore assume that x_0 ∈ G_n for all n ∈ N (for instance by choosing U_n = B_d(z_0, n + 1), where z_0 ∈ G is the vertex closest to x_0 relative to d).
Considering the sequence (ϕ^{(n)}_x)_{x∈G_n}, n ∈ N, from Lemma 4.6, which is in force, we obtain, applying Lemma 4.4 and Proposition 4.2 (which implies (Law_h)), the identity (5.1) for all n ∈ N. Fixing h = 0, (5.1) and the monotonicity property (2.23) thus yield (5.2) for any compact K ⊂ G, with E^{≥h}_{n,K}(x_0) as defined above (4.16). Now, applying (4.16) and dominated convergence to take the limit n → ∞ on both sides of (5.1), and subsequently considering an increasing sequence of compacts K exhausting G, one obtains, in view of (2.27), a corresponding bound for the Laplace transform of cap(E^{≥0}(x_0)). Hence, taking u → 0, we obtain by dominated convergence that P^G(cap(E^{≥0}(x_0)) < ∞) ≥ P^G(ϕ_{x_0} ≥ 0). Since E^{≥0}(x_0) = ∅ when ϕ_{x_0} < 0 and P^G(ϕ_{x_0} < 0) = 1/2, we obtain that cap(E^{≥0}(x_0)) is P^G-a.s. finite, which proves the first part of the statement.
Let us now fix some h < 0. If E^{≥h}_n(x_0) is a non-compact subset of G_n for infinitely many n, then for every compact K ⊂ G we have E^{≥h}_n(x_0) ⊄ K for infinitely many n ∈ N. Since ϕ^{(n)} = ϕ^{(∞)} on a neighborhood of K for n large enough, we then have that E^{≥h}_∞(x_0) ⊄ K for all compacts K, that is, E^{≥h}_∞(x_0) is a non-compact subset of G. Since (3.11) holds on G_n by Lemma 4.4 and Corollary 5.1, we moreover have a positive lower bound, uniform in n, for the probability that E^{≥h}_n(x_0) is non-compact, and so E^{≥h}_∞(x_0) is non-compact with positive probability.
Prior to giving the proof of Theorem 3.7, we first briefly study some properties of the law of the capacity of the level sets of the Gaussian free field, when their Laplace transform is given by (Law_h) (see above Theorem 1.1). The next lemma computes the corresponding density (on the event {E^{≥h}(x_0) ≠ ∅}).
Lemma 5.2. For all u ≥ 0 and h ∈ R, the identity (5.4) holds, with ρ_h as defined in (3.8).
Proof. Taking v = u + h²/2 and a = g(x_0, x_0)^{−1}, it is enough to show (5.5). For v = 0, both sides of (5.5) agree, and so (5.5) holds for v = 0. Moreover, by dominated convergence, the left-hand side of (5.5), viewed as a function of v > 0, is continuously differentiable, and its derivative is equal to the derivative with respect to v of the term on the right-hand side of (5.5). This yields (5.5) and hence (5.4).
We now proceed to the Proof of Theorem 3.7. Consider the approximating sequence G n introduced at the beginning of the proof of Theorem 3.2. In particular, (5.1) still holds (as a consequence of Lemma 4.4 and Proposition 4.2). Now, let h ≥ 0 and suppose E ≥h (x 0 ) is P G -a.s. bounded, hence compact in view of Lemma 3.1. Then (4.17) holds and one can safely pass to the limit in (5.1) using dominated convergence, thus obtaining that (Law h ) holds on G. Then, (3.8) holds on G by means of Lemma 5.2. In particular, the previous argument shows that, if h ≥ 0 and E ≥h (x 0 ) is P G -a.s. bounded, then (5.6) cap Gn (E ≥h n (x 0 )) converges in law to cap G (E ≥h (x 0 )), which is given by (Law h ).
Assume now that (Cap) is fulfilled on G. Then (Sign) holds by Corollary 3.3, and so we obtain (3.9) from (5.6). In order to deduce (3.10), first observe that (3.10) holds on G_n by means of Lemma 4.4 and Corollary 5.1, as (Cap) is trivially satisfied on G_n. For all h ≥ 0, due to (5.6) the random variable cap(E^{≥h}_n(x_0)) 1_{ϕ^{(n)}_{x_0} ≥ h} converges in law to cap(E^{≥h}(x_0)) 1_{ϕ_{x_0} ≥ h}, hence so does cap(E^{≥−h}_n(x_0)) 1_{cap(E^{≥−h}_n(x_0)) ∈ (0,∞)}. To identify the limit with the law of cap(E^{≥−h}(x_0)) 1_{cap(E^{≥−h}(x_0)) ∈ (0,∞)}, one applies dominated convergence, noting that, due to (Cap) and Lemma 3.1, cap(E^{≥−h}(x_0)) < ∞ is tantamount to E^{≥−h}(x_0) being compact, and using (4.17). All in all, this gives (3.10). Finally, (3.11) is an immediate consequence of (3.10), as in the proof of Corollary 5.1. This completes the proof of Theorem 3.7.

Remark 5.3. 1) In view of the above proof of Theorem 3.7, we see that the validity of (Law_0) (and thus equivalently of (Isom), by (3.14), once Theorem 3.9 is proved) can be viewed as a question about removing the compactness assumption in (4.17). Indeed, (Law_0) holds if and only if there exists a sequence G_n of graphs verifying (Law_0) and increasing to G in the sense of (4.10) such that, P-a.s., (5.7) holds.

2) Let K ⊂ G be connected and compact. By Lemmas 4.4 and 4.3, it follows that if G is a finite graph, then the compact clusters of E^{≥−h} and E^{≥h} have the same law. In particular,

(5.8) the clusters of E^{≥−h} and E^{≥h} included in K have the same law.
The conclusion (5.8) remains true for an arbitrary transient graph G. Indeed, by following the arguments of Proposition 1.11 in [26], starting from G^{∂K}, one can construct a transient weighted graph G^{∂K}_* with (finite) vertex set G^{∂K} ∩ K (recall Lemma 2.1 for notation) whose weights coincide with λ^{∂K}_{x,y} whenever x, y ∈ G^{∂K} ∩ K are neighbors in G^{∂K}, in such a way that (ϕ_x)_{x∈K} has the same law under P^G_G as under P^G_{G^{∂K}_*}. The conclusion (5.8) for arbitrary G then simply follows by regarding the clusters of E^{≥−h} and E^{≥h} included in K as parts of G^{∂K}_*. One can also prove that the conclusion (3.17) holds under condition (Sign) using (5.8), by considering a sequence of compacts increasing to G.
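The reduction to G^{∂K}_* has a transparent matrix analogue: restricting a Gaussian free field to a finite set K corresponds to taking the Schur complement of its precision matrix, which is again the energy form of a finite network whose weights inside K agree with the original ones. A hypothetical sketch (path network for illustration):

```python
from fractions import Fraction as F

def solve(M, b):
    """Exact Gaussian elimination: solve M v = b over the rationals."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n] for row in A]

def inv(M):
    n = len(M)
    cols = [solve(M, [F(i == j) for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def mat(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# hypothetical path 0-1-2-3, unit conductances, killing 1 at vertex 3
Q = [[F(1), F(-1), F(0), F(0)],
     [F(-1), F(2), F(-1), F(0)],
     [F(0), F(-1), F(2), F(-1)],
     [F(0), F(0), F(-1), F(2)]]
S, B = [0, 1], [2, 3]  # S plays the role of K, B of its complement
G = inv(Q)  # full Green function = covariance of the field
G_SS = [[G[i][j] for j in S] for i in S]
# reduced precision on S: Schur complement Q_SS - Q_SB Q_BB^{-1} Q_BS
Q_SS = [[Q[i][j] for j in S] for i in S]
Q_SB = [[Q[i][j] for j in B] for i in S]
Q_BS = [[Q[i][j] for j in S] for i in B]
Q_BB = [[Q[i][j] for j in B] for i in B]
corr = mat(Q_SB, mat(inv(Q_BB), Q_BS))
schur = [[Q_SS[i][j] - corr[i][j] for j in range(2)] for i in range(2)]
assert inv(schur) == G_SS  # the reduced network reproduces the law of the field on S
```

Note that the off-diagonal entry of the Schur complement here stays −1 (the original conductance between the two vertices of S), while its diagonal picks up an extra killing term at the vertex adjacent to the removed part, mirroring the construction of G^{∂K}_* whose weights inside K coincide with the original ones.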
6 Proof of Theorem 3.9 In this section, we prove Theorem 3.9, along with its corollaries. In particular, this comprises the isomorphism between random interlacements and the Gaussian free field and the equivalences (3.14), as well as its discrete counterpart (3.16). We first compare random interlacements on G = G_κ̄ (recall the notation from above (2.28)) with random interlacements on G_κ̄' for some κ̄' ≥ κ̄ in Lemma 6.2, and then take advantage of this comparison to approximate random interlacements on any transient graph by random interlacements on finite graphs as in (4.10), see Lemma 6.3. Together with the corresponding 'finite-volume' approximation of the Gaussian free field from Lemma 4.6 and in combination with the fact that Theorem 3.9 holds on finite graphs (see Lemma 4.4), we can then prove the isomorphism (Isom), see Lemma 6.4, under suitable assumptions. This is the key step of the proof of Theorem 3.9, presented thereafter. Finally, at the end of the section, we deduce from Theorem 3.9 that Corollaries 3.11 and 3.12 also hold.
We first dispense with the equivalence between (Isom) (see p. 4) and (Isom') (see p. 23). Lemma 6.1. The identities (Isom) and (Isom') are equivalent. Proof. It suffices to argue that (ϕ_x 1_{x ∉ C_u} + √(ϕ_x² + 2ℓ_{x,u}) 1_{x ∈ C_u})_{x∈G} has the same law under P^I ⊗ P^G as (σ^u_x √(2ℓ_{x,u} + ϕ_x²))_{x∈G} under P. By definition of C_u and since |σ^u_x| = 1, the absolute value of either field equals √(2ℓ_{·,u} + ϕ²_·) in law. To deal with the signs, rewriting ϕ_x = sign(ϕ_x) √(ϕ_x² + 2ℓ_{x,u}) for all x ∉ C_u, one observes that the law of (sign(ϕ_x) 1_{x ∉ C_u} + 1_{x ∈ C_u})_{x∈G} under (P^I ⊗ P^G)(· | |ϕ|, ω^u) is the same as the law of σ^u under P(· | |ϕ|, ω^u), which follows immediately from the definitions of C_u and σ^u, respectively, together with Lemma 3.2 in [19] (the latter asserts that, given |ϕ|, the field sign(ϕ) is constant on each cluster of {|ϕ| > 0}, and its values on the clusters are independent and uniformly distributed, a consequence of the strong Markov property).
We are now going to approximate random interlacements on any transient graph G by random interlacements on a sequence of finite graphs G_n increasing to G in the sense of (4.10). To this end, we first compare random interlacements on two graphs G = (G, λ, κ) and G′ = (G, λ, κ′) with killing measures κ′ ≥ κ, and corresponding cable systems G and G′. Thus, G = G_κ and G′ = G_{κ′} in the notation from the beginning of Section 2.3, and in particular, cf. (2.28), one can regard G′ as a subset of G. Accordingly, for all trajectories w ∈ W_G with ζ_− < 0 < ζ_+ (see Section 2.5 for notation; recall in particular that ζ_± are such that w(t) = ∆ if and only if t ∉ (ζ_−, ζ_+)), we define the killing times ζ^±_{κ′}, with the corresponding convention for any w ∈ W_G. For any compact K ⊂ G, we then introduce a map π_K on W^0 and denote by π^*_K the corresponding map on W^*. In words, π^*_K(w^*) is the doubly infinite trajectory modulo time shift on G′ whose forward and backward parts, seen from the first time of hitting K, are the forward and backward parts of w^* seen from the first time of hitting K, both stopped on exiting G′.

Lemma 6.2. (G = (G, λ, κ), G′ = (G, λ, κ′), κ′ ≥ κ). Let V ⊂ K be compact subsets of G′. There exists a non-negative measure μ_{K,V} = μ^{G,G′}_{K,V} on W^*_{K,G′} such that (6.3) holds (with a slight abuse of notation, the right-hand side being viewed as a measure on W^*_{K,G′}); moreover, (6.4) holds.

Proof. Throughout the proof, let ∂K be as in (2.13), but relative to P^{G′}_x (rather than P_x = P^G_x). Let (G, λ, κ) and (G′, λ′, κ′) refer to the induced graphs corresponding to G and G′, respectively (cf. (2.12)). By considering the graphs G^A and (G′)^A for any A ⊃ ∂K, see Lemma 2.1, instead of G and G′, we can assume without loss of generality that ∂K ⊂ (G ∩ G′).
By choosing A = A′ ∪ ∂K, where A′ ⊂ G′ is a set containing exactly one (arbitrary) vertex between each x ∈ ∂K and y ∈ ∂G′ which are connected by a cable, we can further 'move away' ∂K from ∂G′, so that d(∂K, ∂G′) > 1, where d is the canonical distance on G defined above (1.4). All in all, we thus assume henceforth that (6.5) holds, which entails no loss of generality. Recall X′ ≡ X_{κ′} and ζ′ ≡ ζ_{κ′} from (2.29), and note that the forward part {(π_K(w))_t : 0 ≤ t ≤ ζ^+_{κ′}} of π_K(w) from the time of first hitting K onward is precisely {X′_t(w^+) : 0 ≤ t ≤ ζ′}, where w^+ is the forward part of w. Recalling (6.1) as well as the notation from (2.16) and (2.38), we then define the countably additive set function μ_{K,V} on W^0_{K,G′} by (6.6) (note that, following our convention below (2.22), {H_V = ζ} under P^G_x refers to the event that V is not visited by X), with A^± denoting {(w(±t))_{t≥0} : w ∈ A} for all A ∈ W^0_{K,G′} and X′ as introduced below (6.5). In (6.6), we also used implicitly the convention that e_{K,G}(x) P^{K,G}_x = 0 for all x ∈ ∂K with e_{K,G}(x) = 0. Moreover, e_{K,G}(x) ≤ e_{K,G′}(x) for all x ∈ G by (2.6) and (2.17), and so it follows from (2.19) that supp(e_{K,G}) ⊂ ∂K. If μ_{K,V} is non-negative on W^0_{K,G′}, we can extend it to a measure on W^*_{K,G′}; in view of (6.6), (2.39) and (2.40), it then follows that (6.3) is fulfilled.
We now show that μ_{K,V} is non-negative. Recall Z, the discrete skeleton of X, from below (2.4). We denote by L̄_K = sup{n ∈ N : Z_n ∈ K} the last exit time of K for Z, and by L_K = sup{t ≥ 0 : X_t ∈ K} the last exit time of K for X, with the convention sup ∅ = ∞ (in particular, the event {L_K < ∞} has full P^G_x-measure by transience). We also define (Y_t)_{t≥0} as the same process as (X_{t+L_K})_{t≥0}, but killed the first time (X_{t+L_K})_{t≥0} hits ∂G′. By the definition of P^{K,G}_x, see (2.38), and (2.19), we have (6.7) for all x ∈ ∂K with e_{K,G}(x) > 0; here, we used in the last equality the strong Markov property at the time of the n-th jump and the fact that g_G(x,x) = (1/λ_x) Σ_{n≥0} P^G_x(Z_n = x). By a similar calculation, and in view of (2.30), we obtain an analogous identity for x ∈ ∂K, where L̄′_K, L′_K are defined as above but with X′ in place of X. On the event {L̄_K = 0}, since d(∂K, ∂G′) > 1 due to (6.5), we have L̄′_K = 0 as well. Note that if e_{K,G}(x) = 0 and x ∈ ∂K, then L′_K < L_K P^G_x-a.s., and so the previous equality still holds. Moreover, using (2.30), we have (6.8) for all x ∈ ∂K. Combining (6.6), (6.7) and (6.8), we thus obtain (6.9) for A ∈ W^0_{K,G′}, which gives (6.4) and completes the proof.
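For the reader's convenience, the classical last-exit decomposition underlying the computation above can be recorded as follows (a standard identity, stated here for orientation only; the normalizations of $g$ and $e_{K,\mathcal G}$ are assumed to match (2.4) and (2.19)):

```latex
\[
P^{\mathcal G}_x\big(\bar L_K<\infty,\; Z_{\bar L_K}=y\big)
 \;=\; g_{\mathcal G}(x,y)\,e_{K,\mathcal G}(y),
\qquad
e_{K,\mathcal G}(y)\;=\;\lambda_y\,P^{\mathcal G}_y\big(\widetilde H_K=\infty\big),
\quad y\in \partial K,
\]
% with $\widetilde H_K$ the return time of the discrete skeleton $Z$ to $K$;
% summing over $y\in\partial K$ recovers the usual formula
% $P^{\mathcal G}_x(H_K<\infty)=\sum_{y}g_{\mathcal G}(x,y)\,e_{K,\mathcal G}(y)$.
```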
In words, the difference between the trajectories under ν_G and ν_{G′} that hit K but not V, when V ⊂ K are compact subsets of G′, comes in two parts: first, it is more likely for the forward trajectories not to hit V before time ζ′ than before time ζ; secondly, it is more likely for the backward trajectories not to come back to K before time ζ′ than before time ζ. These two differences are contained in the measure μ^{G,G′}_{K,V} from (6.3), see (6.9). Taking a sequence (K_p)_{p∈N} of compacts increasing to G′, one can then use Lemma 6.2 to construct a random interlacement process on G′ from the random interlacement process ω on G: take the image through π^*_{K_p} of each trajectory in the support of ω hitting K_p but not K_{p−1} for all p ∈ N, with K_0 = ∅, and add independent Poisson point processes with intensity μ^{G,G′}_{K_p,K_{p−1}} ⊗ λ for all p ∈ N. Using this construction and the estimate (6.4), we will now suitably approximate random interlacements on G by random interlacements on a sequence of finite graphs, thus mirroring Lemma 4.6.

Lemma 6.3. Let G be a transient weighted graph and G_n, n ∈ N, be a sequence of transient weighted graphs increasing to G_∞ = G in the sense of (4.10). There exists a probability space (Ω′, F′, P′) on which one can define a sequence of processes ω^{(n)}, n ∈ N, and ω^{(∞)}, with the following properties: for all n ∈ N ∪ {∞}, the process ω^{(n)} has the same law as ω under P^I_{G_n}; (6.10) there exists an increasing sequence (a_n)_{n∈N} such that for each u > 0, P′-a.s., for all compact K ⊂ G, the restriction to K of the set of trajectories hitting K is the same for ω^{(n)} and ω^{(∞)} for all n large enough.

Proof. Let (K_n)_{n∈N} be a sequence such that K_n is a compact subset of G_n for each n ∈ N, and such that K_n, n ∈ N, increases to G. Let ω^{(∞)} be a Poisson point process under (Ω′, F′, P′) with the same law as the random interlacement process ω under P^I_G. For each n ∈ N and k ∈ {1, . . .
, n}, we define, recalling the notation from (4.10), the process ω^{(k,n)}_1 as the Poisson point process given by the image through π^*_{k,n} ≡ π^*_{K_k} of the trajectories of ω^{(∞)} hitting K_k but not K_{k−1}, together with an independent Poisson point process ω^{(k,n)}_2 with intensity μ^{G,G_n}_{K_k,K_{k−1}} ⊗ λ (see Lemma 6.2). One then checks that the sets I^u_n are constant for n large enough on Σ_∞(x), and then Σ_n(x) ∩ I^u_n = Σ_∞(x) ∩ I^u_∞ for n large enough. Therefore, infinitely often, I^u_∞ ∩ Σ_∞(x) = I^u_n ∩ Σ_n(x) ≠ ∅ (note that x cannot lie on the boundary, since I^u_∞, I^u_n are open and |ϕ^{(n)}_x| > 0 for n large enough), that is, x ∈ C_{u,∞}. Combining with (6.13), we obtain (6.12) with b_n = n. Now suppose that (Law_0) holds on G. For all n ∈ N ∪ {∞}, by (2.41), since I^u_n is open, we have (6.14). As G_n is finite for each n ∈ N, Lemma 4.4 and Proposition 4.2 imply that (Law_0) holds on G_n. Therefore, denoting by Φ the distribution function of a standard Gaussian random variable, by symmetry of ϕ^{(n)} we obtain (6.15) for (P ⊗ P′)(x ∈ C_{u,n}), taking advantage of the validity of (Law_0) for the graph G and of (6.14) in the last equality. Hence, using (6.13) and (6.15), there exists a sequence (b_n)_{n∈N} such that the corresponding bound holds for all n ∈ N, and Borel–Cantelli entails that (P ⊗ P′)-a.s., lim sup_{n→∞} {x ∈ C_{u,b_n}} = {x ∈ C_{u,∞}}. Using a diagonal argument and the separability of G, we can actually choose the sequence (b_n)_{n∈N} uniformly in x ∈ G. Combining with (6.13), we obtain (6.12).
for all x ∈ G, we have ℓ^{(n)}_{x,u} = ℓ^{(∞)}_{x,u} for all n large enough, and so (6.16) remains true. Since G_n is finite for all n ∈ N, Lemma 6.1 and Lemma 4.4 yield that (Isom) holds on G_n for all n ∈ N, and, letting n → ∞ and applying (6.16), we infer that (Isom) holds for G_∞ = G.
It remains to show that (4.9) holds (on G). Fix e ∈ E ∪ G. For sufficiently large n, which we will tacitly assume henceforth, e ∈ E_n ∪ G_n, where (G_n, E_n) refers to the graph induced by G_n. Define for all n ∈ N ∪ {∞} the random set of edges and vertices E^{(n)} with (ϕ^{(n)}_x)² > 0 for all x ∈ I_e. By Lemma 4.4 applied to G_n, we have for all n ∈ N that, where I^u_{E,n} is the union of the set of edges crossed by the trace ω^u }, e ∈ S, is independent for all large enough n (including ∞), and for all e ∈ S, u,e , (ϕ^{(n)})|_e.
Let us now quickly explain how to deduce Theorem 3.9 and Corollaries 3.11 and 3.12 from Lemma 6.4.
Proof of Theorem 3.9. Let us now assume that one of the conditions in (3.14) holds. Then, by Lemma 6.4, both (Isom′) and (4.9) hold. Moreover, the family of events {e ∈ E^u}, e ∈ E ∪ G, is independent conditionally on ω^u and (ϕ_x)_{x∈G} by (2.35) and (2.42), and by (4.9) we thus have that (E^u, (ϕ_x)_{x∈G}, ω^u) has the same law under P as (E^u, (ϕ_x)_{x∈G}, ω^u) under P. Finally, since by (2.41) and (2.32), P^I(I^u ∩ I_x = ∅) = 1 for all x ∈ G with κ_x > 0, for each x ∈ G we have x ∈ C_u ∩ G if and only if there is a path π ⊂ E^u ∩ E between x and some y ∈ (I^u ∪ E^u) ∩ G, and so (σ_x)_{x∈G} and σ also have the same law. The equality (3.16) then follows directly from (Isom′).
Proof of Corollary 3.11. Let G be a graph such that (Law_0) is fulfilled. Then (Isom) holds by (3.14). Let us assume that E^{≥0} contains at least one non-compact component with positive probability. In particular, there exists x_0 ∈ G such that E^{≥0}(x_0) is non-compact with positive probability. By Theorem 3.2, we know that cap(E^{≥0}(x_0)) < ∞ P^G-a.s., and so by Lemma 3.1, E^{≥0}(x_0) is also unbounded with positive probability. Now, by (2.41), it follows that for all u > 0, with (P^I ⊗ P^G)-positive probability, E^{≥0}(x_0) is unbounded and x_0 ∉ C_u. By (Isom) and the symmetry of the Gaussian free field, we obtain that for all u > 0, E^{≥√(2u)}(x_0) is unbounded with positive probability. In particular, if h^{com}_* > 0, then E^{≥0} contains a non-compact component with positive probability, and so E^{≥h} contains an unbounded component for all h > 0 by the above reasoning, that is, h_* = ∞. If moreover h^{kill} < 1, then h_* ≥ 0 by (1.8). Therefore, by (3.3), we have h^{com}_* ≥ h_* ≥ 0. Since h_* = ∞ if h^{com}_* > 0, we thus obtain h_* = h^{com}_* ∈ {0, ∞}.
Proof of Corollary 3.12. Let us assume that h_* ≤ 0; then E^{≥h} is P^G-a.s. bounded for all h > 0. By Theorem 3.7, we thus have that (Law_h) holds for all h > 0, and so (Law_0) also holds by (3.14). Since E^{≥h} is P^G-a.s. bounded for all h > 0, we thus obtain by Corollary 3.11 that E^{≥0} is P^G-a.s. bounded.

2) Similarly to Theorem 8 of [20], one could also use (4.6) to deduce an isomorphism theorem between random interlacements and the Gaussian free field even if G is infinite. More precisely, if G is a graph such that |{x ∈ G : κ_x > 0}| < ∞, one can merge all the open ends of the cables I_x, x ∈ G with κ_x > 0, into a new vertex x_*, and apply (4.6) to the new (locally finite) graph G ∪ {x_*}. Decomposing the loop soup into loops hitting x_* and loops avoiding x_*, similarly as in Appendix B, one can then prove an isomorphism similar to Theorem 3.9, but replacing random interlacements on G by killed random interlacements on G, that is, keeping all the trajectories in the random interlacement process whose forward and backward parts are both killed before escaping all bounded sets, and replacing ϕ + √(2u) by ϕ + √(2u) h^{kill}, see (1.2). In Corollary 6.9 of [22], this isomorphism between killed random interlacements and the Gaussian free field is extended to any graph satisfying (Law_0).

3) An interesting open question is whether there exists a transient graph G such that (Law_0) does not hold, or any of the other equivalent conditions appearing in (3.14). In view of Corollary 3.11, one could also ask if a transient graph G exists such that h^{kill} < 1 is fulfilled, but h_* ∈ (0, ∞) or h^{com}_* ∈ (0, ∞); then (Law_0) would not hold. On such a graph, we would still have by Theorem 3.7 that (Law_h) holds for all h > h^{com}_*.
A Appendix: the condition (Cap)

We gather in this section various pertinent observations around the condition (Cap) appearing on p. 4, including a proof of Lemma 3.4. The following result is simple but useful in the absence of any quantitative information on the asymptotic behavior of g(·, ·).
Lemma A.1 (Decay of Green's function). If A ⊂ G is an infinite set, then for all sequences x n ∈ A, n ≥ 0, such that lim n d G (x 0 , x n ) = ∞ and g(x n , x n ) ≤ g 0 ∈ (0, ∞) for all (but finitely many) n, one has (A.1) g(x 0 , x n ) → 0, as n → ∞.
Proof. We argue by contradiction. Suppose that, for some ε > 0 and some sequence x_n ∈ A, n ≥ 0, with g(x_n, x_n) ≤ g_0 and lim_n d_G(x_0, x_n) = ∞, one has (A.2): g(x_0, x_n) ≥ ε for all n ≥ 0.
By passing to a subsequence, we may also assume that (A.3): d_G(x_0, x_n) > n for all n ≥ 0. Let H_y = inf{n ∈ N : Z_n = y} be the hitting time of y for the discrete skeleton Z, with the convention inf ∅ = ∞.
Since d_G(x_0, x_n) > n by (A.3), the strong Markov property then implies, for all n ≥ 0, an estimate involving T_{B(x,n)} = inf{p ∈ N : Z_p ∉ B(x, n)}, the first exit time of the discrete ball B(x, n) for the graph distance d_G on G, with inf ∅ = ∞. Since T_n = T_{B(x_0,n)} increases to ∞, there exists a sequence (n_k)_{k≥0} along which summing these estimates over T_{n_k} ≤ p < T_{n_{k+1}} yields a contradiction to the transience of G.
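The mechanism behind the contradiction can be made explicit through the standard identity relating the Green's function to hitting probabilities (a sketch; only this classical fact and the hypotheses (A.2), g(x_n, x_n) ≤ g_0 enter):

```latex
\[
g(x_0,x_n)\;=\;P^{\mathcal G}_{x_0}\big(H_{x_n}<\infty\big)\,g(x_n,x_n),
\qquad\text{whence (A.2) forces}\qquad
P^{\mathcal G}_{x_0}\big(H_{x_n}<\infty\big)\;\ge\;\varepsilon/g_0
\quad\text{for all } n.
\]
% The walk started at $x_0$ thus hits each of the infinitely many distant points
% $x_n$ with probability bounded away from $0$; the exit times $T_{n_k}$ convert
% this into infinitely many returns toward $K$-like sets, contradicting transience.
```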
The utility of a control like (A.1) is illustrated by the following criterion.

Proof. A proof of this can be found in [16], Lemma 2.13; we give a different one. Let ε > 0 and n ≥ 1. Consider the 'refined' set A_{ε,n} = {x_0, . . . , x_n} ⊂ A defined as follows. Fix an arbitrary x_0 ∈ A. Given {x_0, . . . , x_{k−1}} for some 1 ≤ k ≤ n, applying Lemma A.2, which is in force due to (A.4), we find by means of (A.1) a point x_k ∈ A such that g(x_k, x_{k′}) < ε for all k′ < k.
Overall it follows that (A.5): g(x, y) ≤ ε for all x ≠ y ∈ A_{ε,n}, from which cap(A) = ∞ follows by letting first n → ∞ and then ε → 0.
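The final step can be quantified via the classical energy lower bound for the capacity, cap(B) ≥ |B|² / Σ_{x,y∈B} g(x,y) (a sketch; here g_0 denotes the uniform bound on g(x, x) over A provided by (A.4)):

```latex
\[
\operatorname{cap}(A)\;\ge\;\operatorname{cap}(A_{\varepsilon,n})
\;\ge\;\frac{(n+1)^2}{\sum_{x,y\in A_{\varepsilon,n}} g(x,y)}
\;\ge\;\frac{(n+1)^2}{(n+1)\,g_0+n(n+1)\,\varepsilon}
\;=\;\frac{n+1}{g_0+n\varepsilon}
\;\xrightarrow[\,n\to\infty\,]{}\;\frac{1}{\varepsilon}.
\]
% Diagonal terms: $(n+1)$ of size at most $g_0$; off-diagonal terms: $n(n+1)$
% ordered pairs, each at most $\varepsilon$ by (A.5). Letting
% $\varepsilon\downarrow 0$ then gives $\operatorname{cap}(A)=\infty$.
```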
We conclude this section with the Proof of Lemma 3.4. 1) Let us first assume that (Cap) holds true for the graph G. In this case, for every infinite and connected A ⊂ G, writing Ā for the union of the cables I_e over all edges e ∈ E between two vertices of A, we have by (2.16) and (2.27) that cap(A) = cap(Ā) = ∞, since Ā is an unbounded and connected subset of the cable system, and so (3.6) is satisfied. Assume now that G is a graph for which (3.6) is verified, and let A be a connected and unbounded subset of the cable system. Then A contains an infinite and connected set A′ ⊂ G, and so by (2.23) and (3.6), cap(A) ≥ cap(A′) = ∞, that is, (Cap) holds.
3) By part 1) of Lemma 3.4, (2.23) and (2.27), it is enough to prove that cap(B) = ∞ for all infinite and connected sets B containing exactly one vertex per generation. Let us fix some x_0 ∈ B, and for all i ≥ 0 define recursively x_{i+1} as the first descendant x ∈ B of x_i in B such that T_x \ B is infinite. Note that such a vertex x_{i+1} must exist, since otherwise R^∞_x = ∞ for all descendants x of x_i in B. For each i ∈ N, the set {x ∈ T_{x_i} \ B : R^∞_x > A} is finite, and so there exists a cut-set C_i between x_i and infinity in T_{x_i} \ B such that R^∞_y ≤ A for all y ∈ C_i. Taking B_n = {x_0, . . . , x_n}, we have for all n ∈ N and i ∈ {1, . . . , n − 1} the bound (A.6), where y^−_i is the first ancestor of y_i and we used (1.11) in [1] in the second inequality. Since T is transient and the random walk on Z is recurrent, it is easy to see that B is visited infinitely often with probability 0. Therefore, for each i ∈ N, under P^T_{x_i}, if Z_n ∈ T_{x_i} for all n ∈ N, then there exists p ≥ i such that H_{C_p} < H_{x_i}, and so we obtain a further bound, where in the last inequality we used λ_{x_i} P^T_{x_i}(H_{x_p} < H_{x_i}) = λ_{x_p} P^T_{x_p}(H_{x_i} < H_{x_p}) ≤ λ_{x_p}. Moreover, for all y ∈ B between x_i and x_{i+1}, the effective resistance between y and ∞ in T_{x_i} is R^∞_y, and so, using a series transformation, we have R^∞_y ≥ R^∞_{x_{i+1}}. Therefore, since B is an unbounded and connected set, we have R^∞_{x_i} ≤ A infinitely often, and so the sum on the right-hand side of (A.6) must be infinite. Using (2.20) and (2.27), we conclude that cap(B) = lim_{n→∞} cap(B_n) = ∞, which completes the proof.
B Appendix: Proof of Lemma 4.4

In this appendix we prove that the coupling between loop soups and the Gaussian free field, (4.6), implies the coupling between random interlacements and the Gaussian free field on finite graphs, Lemma 4.4, following ideas similar to the proof of Theorem 8 in [20]. Let us define U_κ := {x ∈ G : κ_x > 0}, and let G_* be the graph with vertex set G, plus an additional vertex x_*. The symmetric weights on G_* are

λ^*_{x,y} = λ_{x,y} when x, y ∈ G;  κ_x when x ∈ U_κ and y = x_*;  0 when x ∉ U_κ and y = x_*,

and the killing measure is κ_* = 1_{x_*}. We write G_* = G ∪ {x_*} and E_* = {{x, y} ∈ G_* × G_* : λ^*_{x,y} > 0} for the vertex and edge sets of G_*. Note that each edge I_e of G_*, e ∈ E_*, can be identified with some edge I_e of G, e ∈ E ∪ U_κ, and one can then identify the cable system G_* \ (I_{x_*} ∪ ⋃_{x∈U_κ} I_x) with G. By (2.30), for all x ∈ G, the law of the trace of X on G killed on hitting x_* under P^{G_*}_x is thus P^G_x. Recall the decomposition of the loop soup L_{1/2} = L; each loop hitting x_* can be decomposed into its excursions outside x_*, that is, trajectories entirely contained in G, starting and ending in U_κ, together with the process L^{e,*}. We can now compare the Gaussian free field on G_* with the Gaussian free field on G, and the loops of L^*_{1/2} hitting x_* on G_* with random interlacements on G.
Proposition B.1. Let G be a transient graph such that G is finite. For any u > 0,

(B.1) (ϕ_x)_{x∈G} has the same law under P^G_{G_*}(· | ϕ_{x_*} = √(2u)) as (ϕ_x + √(2u))_{x∈G} under P^G_G,