Geometry of Gaussian free field sign clusters and random interlacements

For a large class of amenable transient weighted graphs $G$, we prove that the sign clusters of the Gaussian free field on $G$ fall into a regime of strong supercriticality, in which two infinite sign clusters dominate (one for each sign), and finite sign clusters are necessarily tiny, with overwhelming probability. Examples of graphs belonging to this class include regular lattices like $\mathbb{Z}^d$, for $d \geqslant 3$, but also more intricate geometries, such as Cayley graphs of suitably growing (finitely generated) non-Abelian groups, and cases in which random walks exhibit anomalous diffusive behavior, for instance various fractal graphs. As a consequence, we also show that the vacant set of random interlacements on these objects, introduced by Sznitman in arXiv:0704.2560, and which is intimately linked to the free field, contains an infinite connected component at small intensities. In particular, this result settles an open problem from arXiv:1010.1490.


Introduction
This article rigorously investigates the phenomenon of phase coexistence associated with the geometry of certain random fields in their supercritical phase, characterized by the presence of strong, slowly decaying correlations. Our aim is to prove the existence of such a regime, and to describe the random geometry arising from the competing influences between two supercritical phases. The leitmotiv of this work is to study the sign clusters of the Gaussian free field in "high dimensions" (transient for the random walk), which offer a framework that is analytically tractable and has a rich algebraic structure, but questions of this flavor have emerged in various contexts, involving fields with similar large-scale behavior. One such instance is the model of random interlacements, introduced in [55] and also studied in this article, which relates to the broad question of how random walks tend to create interfaces in high dimension, see e.g. [53], [54], and also [65], [14]. Another case in point (not studied in this article) is the nodal domain of a monochromatic random wave, e.g. a randomized Laplace eigenfunction on the $n$-sphere $S^n$, at high frequency, which appears to display supercritical behavior when $n \geq 3$, see [49] and references therein.
As a snapshot of the first of our main results, Theorem 1.1 below gives an essentially complete picture of the sign cluster geometry of the Gaussian free field $\Phi$ (see (1.5) for its definition) on a large class of transient graphs G. It can be informally summarized as follows. Under suitable assumptions on G, which hold e.g. when $G = \mathbb{Z}^d$, $d \geq 3$ (but see (1.4) below for further examples, which hopefully convey the breadth of our setup),

there exist exactly two infinite sign clusters of $\Phi$, one for each sign, which "consume all the ambient space," up to (stretched) exponentially small finite islands of $+/-$ signs; (1.1)

see Theorem 1.1 for the corresponding precise statement. In fact, we will show that this regime of phase coexistence persists for level sets above small enough height $h = \varepsilon > 0$. It is worth emphasizing that (1.1) really comprises two distinct features, namely (i) the presence of unbounded sign clusters, which is an existence result, and (ii) their ubiquity, which is structural and forces bounded connected components to be very small. Our results further indicate a certain universality of this phenomenon, as the class of transient graphs G for which we can establish (1.1) includes possibly fractal geometries, see the examples (1.4) below, where random walks typically experience slowdown due to the presence of "traps at every scale," see e.g. [6], [24], [25] and the monograph [4].
As it turns out, the phase coexistence regime for sign(Φ) described by (1.1) is also related to the existence of a supercritical phase for the vacant set of random interlacements; cf. [55] and below (1.15) for a precise definition. This is due to a certain algebraic relation linking Φ and the interlacements, see [57], [33], [60], whose origins can be traced back to early work in constructive field theory, see [51], and also [13], [20], and which will be a recurrent theme throughout this work. Interestingly, the arguments leading to the phase described in (1.1), paired with the symmetry of Φ, allow us to embed (in distribution) a large part of the interlacement set inside its complement, the vacant set, at small levels. As a consequence, we deduce the existence of a supercritical regime of the latter by appealing to the good connectivity properties of the former, for all graphs G belonging to our class. We will soon return to these matters and explain them in due detail. For the time being, we note that these insights yield the answer to an important open question from [56], see the final Remark 5.6(2) therein and our second main result, Theorem 1.2 below.
We now describe our results more precisely, and refer to Section 2 for the details of our setup. We consider an infinite, connected, locally finite graph G endowed with a positive and symmetric weight function $\lambda$ on the edges. To the data $(G, \lambda)$, we associate a canonical discrete-time random walk, which is the Markov chain with transition probabilities given by $p_{x,y} = \lambda_{x,y}/\lambda_x$, where $\lambda_x = \sum_{y \in G} \lambda_{x,y}$. It is characterized by the generator
$$Lf(x) = \frac{1}{\lambda_x} \sum_{y \in G} \lambda_{x,y}\,\big(f(y) - f(x)\big), \quad \text{for } x \in G, \tag{1.2}$$
for $f : G \to \mathbb{R}$ with finite support. We assume that the transition probabilities of this walk are uniformly bounded from below, see $(p_0)$ in Section 2, and, writing $g(x,y)$, $x, y \in G$, for the corresponding Green density, see (2.3) below, that there exist parameters $\alpha$ and $\beta$ with $2 \leq \beta < \alpha$ such that, for some distance function $d(\cdot,\cdot)$ on G,
$$\lambda(B(x,L)) \asymp L^{\alpha} \quad \text{and} \quad g(x,y) \asymp d(x,y)^{-(\alpha - \beta)}, \quad \text{for } x, y \in G, \tag{1.3}$$
where $\asymp$ means that the quotient of the two sides is uniformly bounded from above and below by positive constants, $B(x,L)$ is the ball of radius L in the metric $d(\cdot,\cdot)$ and $\lambda(A) = \sum_{x \in A} \lambda_x$ is the measure of $A \subset G$; see $(V_\alpha)$ and $(G_\beta)$ in Section 2 for the precise formulation of (1.3). The exponent $\beta$ in (1.3) reflects the diffusive (when $\beta = 2$) or sub-diffusive (when $\beta > 2$) behavior of the walk on G, cf. Proposition 3.3 below. Note that the condition on $g(\cdot,\cdot)$ in (1.3) implies in particular that G is transient for the walk. For more background on why condition (1.3) is natural, we refer to [24], [25], as well as Remark 2.2 and Remark 3.4 below regarding its relation to heat kernel estimates. As will further become apparent in Section 3, see in particular Proposition 3.5 and Corollary 3.9, choosing d to be the graph distance on G is not necessarily a canonical choice, for instance when G has a product structure. Apart from $(p_0)$, $(V_\alpha)$ and $(G_\beta)$, we will often make one additional geometric assumption (WSI) on G, introduced in Section 2.
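To fix ideas, here is how (1.3) reads in the benchmark case $G = \mathbb{Z}^d$, $d \geq 3$, with unit weights and the graph distance; the exponents below are the classical ones and serve purely as an illustration:

```latex
% Benchmark case: G = \mathbb{Z}^d with unit weights, d \geq 3
\lambda(B(x,L)) \asymp L^{d}
\quad\text{and}\quad
g(x,y) \asymp d(x,y)^{-(d-2)} \quad (x \neq y),
% i.e. (1.3) holds with \alpha = d and \beta = 2, and hence \nu = \alpha - \beta = d - 2,
% recovering the classical polynomial decay of the Green function on \mathbb{Z}^d.
```

Sub-diffusive examples, such as the fractal graphs appearing in (1.4), instead exhibit $\beta > 2$, with correspondingly slower decay of $g$.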
Roughly speaking, this hypothesis ensures a (weak) sectional isoperimetry of various large subsets of G, which allows for certain contour arguments. Rather than explaining this in more detail, we single out the representative examples $G_1, \dots, G_4$ of graphs listed in (1.4), which satisfy all four aforementioned assumptions $(p_0)$, $(V_\alpha)$, $(G_\beta)$ and (WSI), cf. Corollary 3.9 below (see e.g. [6], pp. 6-7, for definitions of G and $G_3$; the latter corresponds to $V^{(d)}$ in the notation of [6]), all endowed with unit weights and a suitable distance function d (see Remark 2.1 and Section 3). The graph $G_2$ is a benchmark case for various aspects of [56], to which we will return in Theorem 1.2 below. The case $G_3$ underlines the fact that even in the fractal context a product structure is not necessarily required. The fact that (WSI) holds in the cases $G_2$, $G_3$ and $G_4$ is not evident, and will follow by expanding on results of [67], see Section 3. In the case of $G_4$, (WSI) crucially relies on Gromov's deep structural result [27]. The reader may choose to focus on (1.4), or even on $G_1$ alone, for the purpose of this introduction.
Our first main result deals with the Gaussian free field $\Phi$ on the weighted graph $(G, \lambda)$. Its canonical law $P^G$ is the unique probability measure on $\mathbb{R}^G$ such that $(\Phi_x)_{x \in G}$ is a mean-zero Gaussian field with covariance function
$$E^G[\Phi_x \Phi_y] = g(x,y), \quad \text{for any } x, y \in G. \tag{1.5}$$
On account of (1.3), $\Phi$ exhibits (strong) algebraically decaying correlations with respect to the distance d, captured by the exponent
$$\nu \overset{\text{def.}}{=} \alpha - \beta \; (> 0). \tag{1.6}$$
We study the geometry of $\Phi$ in terms of its level sets
$$E^{\geq h} = \{x \in G;\ \Phi_x \geq h\}, \quad \text{for } h \in \mathbb{R}. \tag{1.7}$$
The random set $E^{\geq h}$ decomposes into connected components, also referred to as clusters: two points belong to the same cluster of $E^{\geq h}$ if they can be joined by a path of edges whose endpoints all lie inside $E^{\geq h}$. Finite clusters are sometimes called islands.
As h varies, the onset of a supercritical phase in $E^{\geq h}$ is characterized by a critical parameter $h_* = h_*(G)$, which records the emergence of infinite clusters,
$$h_* \overset{\text{def.}}{=} \inf\big\{h \in \mathbb{R};\ P^G\big[\text{there exists an infinite cluster in } E^{\geq h}\big] = 0\big\} \tag{1.8}$$
(with the convention $\inf \emptyset = \infty$). The existence of a nontrivial phase transition, i.e., the statement $-\infty < h_* < \infty$, was initially investigated in [12], and even in the case $G = G_1 = \mathbb{Z}^d$ with $d \geq 3$, has only been completely resolved recently in [47]. It was further shown in Corollary 2 of [12] that $h_* \geq 0$ on $\mathbb{Z}^d$, and this proof can actually be adapted to any locally finite transient weighted graph, see the Appendix of [1], or [33] for a different proof.
Of particular interest are the connected components of $E^{\geq 0}$. The symmetry of $\Phi$ implies that $E^{\geq 0}$ and its complement in G have the same distribution. The connected components of $E^{\geq 0}$ and of its complement are referred to as the positive and negative sign clusters of $\Phi$, respectively. It is an important problem to understand whether these sign clusters fall into a supercritical regime (below $h_*$), and, if so, what the resulting sign cluster geometry of $\Phi$ looks like. In order to formulate our results precisely, we introduce a critical parameter $\bar{h}$ characterizing a regime of local uniqueness for $E^{\geq h}$, whose distinctive features (1.10) and (1.11) below reflect (i) and (ii) in the discussion following (1.1). Namely,
$$\bar{h} = \sup\{h \in \mathbb{R};\ \Phi \text{ strongly percolates above level } h' \text{ for all } h' < h\}, \tag{1.9}$$
where the Gaussian free field $\Phi$ is said to strongly percolate above level h if there exist constants $c(h) > 0$ and $C(h) < \infty$ such that for all $x \in G$ and $L \geq 1$,
$$P^G\big[E^{\geq h} \cap B(x,L) \text{ contains a cluster of diameter at least } L/10\big] \geq 1 - C(h)\, e^{-c(h) L^{c}}, \tag{1.10}$$
$$P^G\big[\text{any two clusters of } E^{\geq h} \cap B(x,L) \text{ of diameter at least } L/10 \text{ are connected in } E^{\geq h} \cap B(x, C_{10} L)\big] \geq 1 - C(h)\, e^{-c(h) L^{c}} \tag{1.11}$$
(the constant $C_{10}$ is defined in (3.4) below). With the help of (1.10), (1.11) and a Borel-Cantelli argument, one can easily patch up large clusters in $E^{\geq h} \cap B(x, 2^k)$ for $k \geq 0$ when $h < \bar{h}$ to deduce that $\bar{h} \leq h_*$. One also readily argues that for all $h < \bar{h}$, there is a unique infinite cluster in $E^{\geq h}$, as explained in (2.12) below. We will prove the following result, which makes (1.1) precise. For reference, conditions $(p_0)$, $(V_\alpha)$, $(G_\beta)$ and (WSI) appearing in (1.13) are defined in Section 2. All but $(p_0)$ depend on the choice of metric d on G. Following (1.3), in assuming that conditions $(V_\alpha)$, $(G_\beta)$ and (WSI) are met in various statements below, we understand that
$$(V_\alpha),\ (G_\beta) \text{ and (WSI) hold with respect to some distance function } d(\cdot,\cdot) \text{ on } G, \text{ for some values of } \alpha \text{ and } \beta \text{ satisfying } \alpha > 2 \text{ and } \beta \in [2, \alpha). \tag{1.12}$$
Theorem 1.1. Assume that
$$G \text{ satisfies } (p_0),\ (V_\alpha),\ (G_\beta) \text{ and (WSI)}. \tag{1.13}$$
Then $\bar{h} > 0$.
The proof of Theorem 1.1 is given in Section 8.
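To illustrate how (1.10) and (1.11) interact, the following is a sketch, in our notation and with generic constants, of the Borel-Cantelli patching argument yielding $\bar{h} \leq h_*$:

```latex
% Fix h < \bar{h} and x \in G, and for k \geq 0 consider the event
A_k = \left\{
\begin{array}{l}
E^{\geq h} \cap B(x,2^k) \text{ and } E^{\geq h} \cap B(x,2^{k+1}) \text{ each contain a cluster of}\\[2pt]
\text{diameter at least } 2^k/10, \text{ all of which are connected in } E^{\geq h} \cap B(x, C_{10}\, 2^{k+1})
\end{array}
\right\}.
% By (1.10) and (1.11), \sum_{k} P^G[A_k^c] < \infty (stretched exponential decay in 2^k),
% so by Borel-Cantelli A_k occurs for all large k, P^G-a.s. Chaining the large clusters
% across consecutive scales then produces an infinite cluster in E^{\geq h},
% whence h \leq h_* for every h < \bar{h}, i.e. \bar{h} \leq h_*.
```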
For a list of pertinent examples, see (1.4) and Section 3, notably Corollary 3.9 below, which implies that all conditions appearing in (1.13) hold true for the graphs listed in (1.4), and in particular for $\mathbb{Z}^d$, $d \geq 3$. Some progress in the direction of Theorem 1.1 was obtained in the recent work [17] by the authors, where it was shown that $h_*(\mathbb{Z}^d) > 0$. The sole existence of an infinite sign cluster, without proof of (1.11) at small enough $h \geq 0$, can be obtained under slightly weaker assumptions, see condition $(\widetilde{\text{WSI}})$ in Remark 8.5 and Theorem 8.8 below. As an immediate consequence of (1.10), (1.11) and (1.13), we note that for all $h < \bar{h}$, and in particular when $h = 0$, denoting by $C_h(x)$ the cluster of x in $E^{\geq h}$,
$$P^G\big[L \leq \operatorname{diam}(C_h(x)) < \infty\big] \leq C(h)\, e^{-c(h) L^{c}}, \quad \text{for all } x \in G \text{ and } L \geq 1. \tag{1.14}$$
The parameter $\bar{h}$, or a slight modification of it, see Remark 8.9(1) below, has already appeared when $G = \mathbb{Z}^d$ in [19], [59], [41], [48], [10] and [15] to test various geometric properties of the percolation cluster in $E^{\geq h}$ in the regime $h < \bar{h}$; note that $\bar{h} > -\infty$ is known to hold on $\mathbb{Z}^d$ as a consequence of Theorem 2.7 in [19], thus making these results not vacuously true, but little is known about $\bar{h}$ otherwise. These findings can now be combined with Theorem 1.1. For instance, as a consequence of (1.13) and Theorem 1.1 in [41], when $G = \mathbb{Z}^d$, denoting by $C^+_\infty$ the infinite $+$-sign cluster,

$P^G$-a.s., conditionally on starting in $C^+_\infty$, the random walk on $C^+_\infty$ (see below (1.2) in [41] for its definition) converges weakly to a non-degenerate Brownian motion under diffusive rescaling of space and time. (1.15)

We refer to the above references for further results exhibiting, akin to (1.15), the "well-behavedness" of the phase $h < \bar{h}$, to which the sign clusters belong.
We now introduce and state our results regarding random interlacements, leading to Theorem 1.2 below, and explain their significance. As alluded to above, cf. also the discussion following Theorem 1.2 for further details, the interlacements, which constitute a Poisson cloud $\omega^u$ of bi-infinite random walk trajectories as in (1.2) modulo time-shift, were introduced on $\mathbb{Z}^d$ in [55], see also [62] and Section 2, and naturally emerge due to their deep ties to $\Phi$. The parameter $u > 0$ appears multiplicatively in the intensity measure of $\omega^u$ and hence governs how many trajectories enter the picture: the larger u, the more trajectories. The law of the interlacement process $(\omega^u)_{u>0}$ is denoted by $P^I$ and the random set $\mathcal{I}^u \subset G$, the interlacement set at level u, is the subset of vertices of G visited by at least one trajectory in the support of $\omega^u$. Its complement $\mathcal{V}^u = G \setminus \mathcal{I}^u$ is called the vacant set (at level u). The process $\omega^u$ is also related to the loop-soup construction of [32], if one "closes the bi-infinite trajectories at infinity," as in [58].
Originally, $\omega^u$ was introduced in order to investigate the local limit of the trace left by simple random walk on large, locally transient graphs $\{G_N;\ N \geq 1\}$ with $G_N \to G$ as $N \to \infty$, when run up to suitable timescales of the form $u\, t_N$ with $u > 0$ and $t_N = t_N(G_N)$, see [9], [52], [53], [54], [65], as well as [69] and [14]. The trajectories in the support of $\omega^u$ can roughly be thought of as corresponding to successive excursions of the random walk in suitably chosen sets, and the timescale $t_N$ defines a Poissonian limiting regime for the occurrence of these excursions (note that this limit is hard to establish due to the long-range dependence between the excursions of the walk). Of particular interest in this context are the percolative properties of $\mathcal{V}^u$, as described by the critical parameter
$$u_* \overset{\text{def.}}{=} \inf\big\{u \geq 0;\ P^I\big[\text{there exists an infinite cluster in } \mathcal{V}^u\big] = 0\big\} \tag{1.16}$$
(note that $\mathcal{V}^u$ is decreasing in u). This corresponds to a drastic change in the behavior of the complement of the trace of the walk on $G_N$, as the parameter u appearing multiplicatively in front of $t_N$ varies across $u_*$, provided this threshold is non-trivial; see for instance [65] for simulations when $G_N = (\mathbb{Z}/N\mathbb{Z})^d$ with $t_N = N^d$. The finiteness of $u_*$, i.e. the existence of a subcritical phase for $\mathcal{V}^u$, and even a phase of stretched exponential decay for the connectivity function of $\mathcal{V}^u$ at large values of u, can be obtained by adapting classical techniques, once certain decoupling inequalities are available. As a consequence of Theorem 2.4 below, see Remark 7.2(1) and Corollary 7.3, such a phase is exhibited for any graph G satisfying $(p_0)$, $(V_\alpha)$ and $(G_\beta)$ as in (1.12).
On the contrary, the existence of a supercritical phase is much less clear in general. It was proved in [56] that $u_* > 0$ for graphs of the type $G = G' \times \mathbb{Z}$, endowed with some distance d such that (1.3) holds, see (1.8) and (1.11) in [56], but only in cases where $\nu \geq 1$, cf. (1.6), excluding for instance the case $G = G_2$, in which $\nu = \frac{\log 9 - \log 5}{\log 4} < 1$, see [30] and [2]. As a consequence of the following result, we settle the question of the positivity of $u_*$ affirmatively under our assumptions, thus solving a principal open problem from [56], see Remark 5.6(2) therein. We remind the reader of the convention (1.12) regarding conditions $(V_\alpha)$, $(G_\beta)$ and (WSI), which is in force in the following:

Theorem 1.2. Suppose G satisfies $(p_0)$, $(V_\alpha)$, $(G_\beta)$ and (WSI). Then there exists $\bar{u} > 0$ and, for every $u \in (0, \bar{u}]$, a probability space $(\Omega_u, \mathcal{F}_u, Q_u)$ governing three random subsets $\mathcal{I}$, $\mathcal{V}$ and $\mathcal{K}$ of G with the following properties:

i) $\mathcal{I}$, resp. $\mathcal{V}$, has the law of $\mathcal{I}^u$, resp. $\mathcal{V}^u$, under $P^I$;
ii) $\mathcal{K}$ is independent of $\mathcal{I}$; (1.17)
iii) $Q_u$-a.s., $\mathcal{I} \cap \mathcal{K}$ contains an infinite cluster, and $(\mathcal{I} \cap \mathcal{K}) \subset \mathcal{V}$.

A fortiori, $u_* \geq \bar{u}\, (> 0)$. Thus, our construction of an infinite cluster of $\mathcal{V}^u$ for small $u > 0$, and hence our resolution of the conjecture in [56], proceeds by stochastically embedding a large part of its complement, $\mathcal{I}^u \cap \mathcal{K}$, inside $\mathcal{V}^u$. The law of the set $\mathcal{K}$ can be given explicitly, see Remark 8.9(2).
While we will in fact deal more generally with product graphs in Section 3, let us briefly elaborate on the important case $G = G' \times \mathbb{Z}$ considered in [56]. In this setting, the conclusions of Theorem 1.2 hold under the mere assumptions that $(p_0)$ holds and $G'$ satisfies the upper and lower heat kernel estimates (UHK($\alpha$, $\beta$)) and (LHK($\alpha$, $\beta$)), see Remark 2.2, with respect to $d = d_{G'}$, the graph distance on $G'$, for some $\alpha > 1$ and $\beta \in [2, 1 + \alpha)$; for instance, if $G = G_2$ from (1.4), whose factor $G'$ is the discrete skeleton of the Sierpinski gasket, then $\alpha = \log 3/\log 2$ and $\beta = \log 5/\log 2$, see [7, 30]. This (and more) will follow from Propositions 3.5 and 3.7 below; see also Remark 3.10 for further examples. Incidentally, let us note that Theorem 1.2 is also expected to provide further insights into the disconnection of cylinders $G_N \times \mathbb{Z}$ by a simple random walk trace, for $G_N$ a large finite graph, for instance when $G_N$ is a ball of radius N in the discrete skeleton of the Sierpinski gasket (corresponding to $G_2$ of (1.4)), cf. Remark 5.1 in [52].
Since Theorem 1.2 builds on the arguments leading to Theorem 1.1, we delay further remarks concerning (1.17) for a few lines, and first provide an overview of the proof of Theorem 1.1.
As hinted at above, a key ingredient and the starting point of the proof of Theorem 1.1 is a certain isomorphism theorem, see [57], [33], [60] and (5.2) and Corollary 5.3 below, which links the free field $\Phi$ to the interlacement $\omega^u$. The argument unfolds by first studying the random set $\mathcal{I}^u$, which has remarkable connectivity properties: even though its density tends to 0 as $u \downarrow 0$, $\mathcal{I}^u$ is an unbounded connected set for every $u > 0$. Much more is in fact true, see Section 4, in particular Proposition 4.1 below: the set $\mathcal{I}^u$ is actually locally well-connected. These features of $\mathcal{I}^u$, especially for u close to 0, will figure prominently in our construction of various large random sets, and ultimately serve as an indispensable tool to build percolating sign clusters. Indeed, as a consequence of the aforementioned correspondence between $\Phi$ and $\omega^u$, see also (5.4) below, one can use $\mathcal{I}^u$ in a first step as a system of "highways" to produce connections inside $E^{\geq -h}$, for ever so small $h = \sqrt{2u} > 0$. A substantial part of these connections persists in $\widetilde{E}^{\geq -h}$ ($h > 0$), the level sets of the free field $\varphi$ on a continuous extension $\widetilde{G}$ of the graph, the associated cable system. This object, to which all of the above processes can naturally be extended, goes back at least to [8] and is obtained by replacing the edges between vertices by one-dimensional cables. This result, which quantifies and strengthens the early insight $h_*(\mathbb{Z}^d) \geq 0$ of [12] (deduced therein by a soft but indirect and general argument), is in fact sharp on the cables, see Theorem 8.10 below. Importantly, the recent result of [60], which can be applied in our framework, see Corollary 5.3, further allows us to formulate a condition, in terms of an (auxiliary) Gaussian free field $\gamma$ appearing in the isomorphism and $\widetilde{\mathcal{I}}^u$, the continuous interlacement, for points in $E^{\geq -h}$ to "rapidly" (i.e. at scale $L_0$ in the renormalization argument detailed in the next paragraph) connect to the interlacement $\widetilde{\mathcal{I}}^{u}$, $u = h^2/2$.
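For orientation, we recall the identity at the root of this correspondence, namely the isomorphism theorem in the form going back to [57] (a sketch; here $\ell_{x,u}$ denotes the interlacement local time at $x \in G$ and level u, and $\Phi$ on the left-hand side is a free field as in (1.5), independent of $\omega^u$):

```latex
\Big( \ell_{x,u} + \tfrac{1}{2}\,\Phi_x^2 \Big)_{x \in G}
\;\overset{\text{law}}{=}\;
\Big( \tfrac{1}{2}\big(\Phi_x + \sqrt{2u}\big)^2 \Big)_{x \in G},
\qquad \text{for each } u > 0.
```

Since $\ell_{x,u} > 0$ precisely for $x \in \mathcal{I}^u$, this identity suggests that the interlacement set should sit inside a level set of the field at height $-\sqrt{2u}$; on the cable system this heuristic becomes exact, see [33]: there is a coupling under which $\widetilde{\mathcal{I}}^u$ is contained in $\widetilde{E}^{\geq -\sqrt{2u}}$, which underlies the inclusion (5.4).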
Following ideas from our precursor work [17], we can then rely on a certain robustness property exhibited on the cables to pass from $E^{\geq -h}$ to $E^{\geq +h}$ by means of a suitable coupling, which operates independently at any given vertex when certain favorable conditions are met. These conditions in turn become typical as $u \to 0^+$, see Lemma 5.5 and Proposition 5.6.
The previous observations can be combined into a set of good features, assembled in Definition 7.4 below, which are both increasingly likely as $L_0 \to \infty$ and entirely local, in that all properties constituting a good vertex $x \in G$ are phrased in terms of the various fields inside balls of radius $\approx L_0$ in the distance d around x. This notion can then be used as the starting point of a renormalization argument, presented in Sections 7 and 8, to show that good regions form large connected components. Importantly, with a view towards (1.10) and (1.11), good regions not only need to form, but must do so everywhere inside of G. This comes under the proviso of (WSI) as a feature of the renormalization scheme, which ensures that subsets of G having large diameter are typically connected by paths of good vertices, see Lemmas 8.6 and 8.7 below.
A renormalization of the parameters involved in the scheme is necessary due to the presence of the strong correlations, and it relies on suitable decoupling inequalities, see Theorem 2.4 below. At the level of generality considered here, namely assuming only $(p_0)$, $(V_\alpha)$, $(G_\beta)$, and particularly in the case of $\mathcal{I}^u$, see (2.21), these inequalities generalize results of [56] and are interesting in their own right. At the technical level, they are eventually obtained from the soft local time technique introduced in [39] and developed therein on $\mathbb{Z}^d$. The difficulty stems from having to control the resulting error term, which is key in obtaining (2.21). This control ultimately rests on chaining arguments and a suitable elliptic Harnack inequality, see in particular Lemmas 6.5 and 6.7, which provide good bounds as long as certain sets of interest do not get too close (note that, due to their Euclidean nature, the arguments leading to the precise controls of [39], valid even at short distances, seem out of reach within the current setup). Fortunately, this is good enough for the purposes we have in mind.
The proof of Theorem 1.2 then proceeds by using the results leading to Theorem 1.1 and adding one more application of the coupling provided by Corollary 5.3. Indeed, the above steps essentially allow us to roughly translate the probabilities in (1.10) and (1.11) regarding $E^{\geq h}$, for $h > 0$, in terms of the interlacement $\mathcal{I}^u$, for $u = h^2/2$, and some "noise," see Lemma 8.4 and (the proof of) Lemma 8.7; but $E^{\geq h}$ is in turn naturally embedded into $\mathcal{V}^u$, see (5.4). Following how the percolative regime for $\mathcal{V}^u$ is obtained, one thus starts with its complement $\mathcal{I}^u$, first passes to $\Phi$ and proves the phase coexistence regime around $h = 0$ asserted in Theorem 1.1, and then translates back to $\mathcal{V}^u$. The existence of the phase coexistence regime, along with the symmetry of $\Phi$, is then ultimately responsible for producing the inclusion iii) in (1.17). The set $\mathcal{K}$ appearing there morally corresponds to all the undesired noise produced by bad regions in the argument leading to Theorem 1.1. It would be interesting to devise a direct argument for $u_* > 0$ which bypasses the use of $\Phi$. We are currently unable to do so, except when $\nu > 1$, in which case the reasoning of [56] can be adapted, see Remark 7.2(2). We refer to Remark 8.9(5)-(8) for further open questions.
We now describe how this article is organized. Section 2 introduces the precise framework, the processes of interest and, importantly, the conditions $(p_0)$, $(V_\alpha)$, $(G_\beta)$ and (WSI) appearing in our main results. We then collect some first consequences of this setup. The decoupling inequalities mentioned above are stated in Theorem 2.4 at the end of that section. Section 3 has two main purposes. After gathering some preliminary tools from harmonic analysis (for L in (1.2)), which are used throughout, we first discuss in Proposition 3.5 how $(V_\alpha)$ and $(G_\beta)$ are obtained for product graphs of the form $G = G' \times G''$, when the factors satisfy suitable heat kernel estimates. This has important applications, notably to the graph $G = G_2$ in (1.4), and requires that we work with general distances d in conditions $(V_\alpha)$, $(G_\beta)$. For this reason, we have also included a proof of the classical (in case $d = d_G$, the graph distance) estimates of Proposition 3.3 in the appendix. The second main result of Section 3 is to deduce in Corollary 3.9 that the relevant conditions $(p_0)$, $(V_\alpha)$, $(G_\beta)$ and (WSI) appearing in Theorems 1.1 and 1.2 apply in all cases of (1.4). In addition to Proposition 3.5, this requires proving (WSI) and dealing with boundary connectivity properties of connected sets, which is the object of Proposition 3.7.
Section 4 collects the local connectivity properties of the continuous interlacement set $\widetilde{\mathcal{I}}^u$, see Proposition 4.1 and Corollary 4.2. The overall strategy is similar to what was done in [43] on $\mathbb{Z}^d$, see also [17], to which we frequently refer. The proof of Proposition 4.1 could be omitted on first reading. Section 5 is centered around the isomorphism on the cables. The main takeaway for later purposes is Corollary 5.3, see also Remark 5.4, which asserts that the coupling of Theorem 2.4 in [60] can be constructed in our framework. This requires that certain conditions be met, which are shown in Lemma 5.1 and Proposition 5.2. The latter also yields the desired inclusion (5.4). The generic absence of ergodicity makes the verification of these properties somewhat cumbersome. Lemma 5.5 contains the adaptation of the sign-flipping argument from [17], from which certain desirable couplings needed later on in the renormalization are derived in Proposition 5.6. Section 5 closes with a more detailed overview of the last three sections, leading to the proofs of our main results. Section 6 is devoted to the proof of Theorem 2.4, which contains the decoupling inequalities. While the free field can readily be dispensed with by adapting results of [38], the interlacements are more difficult to deal with. We apply the soft local times technique from [39]. All the work lies in controlling a corresponding error term, see Lemma 6.6. The regularity estimates for hitting probabilities needed in this context, see the proof of Lemma 6.7, rely on Harnack's inequality, see Lemma 6.5 for a tailored version.
Section 7 introduces the renormalization scheme needed to put together the ingredients of the proof. The important Definition 7.4 of good vertices appears at the end of that section, and Lemma 7.6 collects the features of good long paths, which are later relied upon. The good properties appearing in this context are expressed in terms of (an extension of) the coupling from Corollary 5.3.
The pieces are put together in Section 8, and the proofs of Theorems 1.1 and 1.2 appear towards the end of this last section. The important steps leading up to them are Proposition 8.3 and Lemmas 8.4 and 8.7. By applying the scheme from Section 7 with the decoupling inequalities of Theorem 2.4, Proposition 8.3 yields the desired estimate that long paths of bad vertices are very unlikely, for suitable choices of the parameters. Lemmas 8.4 and 8.7 provide precursor estimates to (1.10) and (1.11), which are naturally associated to our notion of goodness, and from which (together with the couplings from Proposition 5.6) (1.10) and (1.11) are eventually inferred. An important technical step with regard to Lemma 8.7 and (1.11) is Lemma 8.6, which asserts that sets of large diameter are typically connected by a path of good vertices. Proposition 5.6 then exhibits the coupling transforming (for instance) good regions into subsets of $E^{\geq h}$, $h > 0$. Finally, Section 8 also contains the simpler existence result, Theorem 8.8, alluded to above, which can be obtained under a slightly weaker condition $(\widetilde{\text{WSI}})$, introduced in Remark 8.5.
We conclude this introduction with our convention regarding constants. In the rest of this article, we denote by $c, c', \dots$ and $C, C', \dots$ positive constants changing from place to place. Numbered constants $c_0, C_0, c_1, C_1, \dots$ are fixed when they first appear and do so in increasing numerical order. All constants may depend implicitly "on the graph G" through conditions $(p_0)$, $(V_\alpha)$ and $(G_\beta)$ below; in particular, they may depend on $\alpha$ and $\beta$. Their dependence on any other quantity will be made explicit.
For the reader's orientation, we emphasize that the conditions $(p_0)$, $(V_\alpha)$, $(G_\beta)$ and (WSI), which will be frequently referred to, are all introduced in Section 2. We seize this opportunity to highlight the set of assumptions (3.1) on $(G, \lambda)$ appearing at the beginning of Section 3, which will be in force from then on until the end.

Basic setup and first properties
In this section, we introduce the precise framework alluded to in the introduction, formulate the assumptions appearing in Theorems 1.1 and 1.2, and collect some of the basic geometric features of our setup. We also recall the definitions and several useful facts concerning the two protagonists, random interlacements and the Gaussian free field on G, as well as their counterparts on the cable system. We then state in Theorem 2.4 the relevant decoupling inequalities for both interlacements and the free field, which will be proved in Section 6.
Let $(G, E)$ be a countably infinite and connected graph with vertex set G and (unoriented) edge set $E \subset G \times G$. We will often tacitly identify the graph $(G, E)$ with its vertex set G. We write $x \sim y$, or $y \sim x$, if $\{x, y\} \in E$, i.e., if x and y are connected by an edge in G. Such vertices x and y will be called neighbors. We also say that two edges in E are neighbors if they have a common vertex. A path is a sequence of neighboring vertices in G, finite or infinite. For $A \subset G$, we set $A^c = G \setminus A$, we write $\partial A = \{y \in A;\ \exists\, z \in A^c,\ z \sim y\}$ for its inner boundary, and define the external boundary of A by
$$\partial_{\mathrm{ext}} A = \{y \in A^c;\ \exists \text{ an unbounded path in } A^c \text{ beginning in } y \text{ and } \exists\, z \in A,\ z \sim y\}.$$
We write $x \leftrightarrow y$ in A (or $x \overset{A}{\longleftrightarrow} y$ in short) if there exists a nearest-neighbor path in A containing x and y, and we say that A is connected if $x \overset{A}{\longleftrightarrow} y$ for any $x, y \in A$. For all $A_1 \subset A_2 \subset G$, we write $A_1 \subset\subset A_2$ to express that $A_1$ is a finite subset of $A_2$. We endow G with a non-negative and symmetric weight function $\lambda = (\lambda_{x,y})_{x,y \in G}$, such that $\lambda_{x,y} \geq 0$ for all $x, y \in G$ and $\lambda_{x,y} > 0$ if and only if $\{x, y\} \in E$. We define the weight of a vertex $x \in G$ and of a set $A \subset G$ by $\lambda_x = \sum_{y \sim x} \lambda_{x,y}$ and $\lambda(A) = \sum_{x \in A} \lambda_x$. We often regard $\{\lambda_x : x \in G\}$ as a positive measure on G, endowed with its power set $\sigma$-algebra, in the sequel.
To the weighted graph $(G, \lambda)$, we associate the discrete-time Markov chain with transition probabilities
$$p_{x,y} = \frac{\lambda_{x,y}}{\lambda_x}, \quad \text{for } x, y \in G. \tag{2.2}$$
We write $P_x$, $x \in G$, for the canonical law of this chain started at x, and $Z = (Z_n)_{n \geq 0}$ for the corresponding canonical coordinates. For a finite measure $\mu$ on G, we also set $P_\mu = \sum_{x \in G} \mu(x) P_x$. Our assumptions, see in particular $(G_\beta)$ below, will ensure that Z is in fact transient. We assume that G has controlled weights, i.e., there exists a constant $c_0 > 0$ such that
$$p_{x,y} \geq c_0, \quad \text{for all } x \sim y \text{ in } G. \tag{p_0}$$
Note that $(p_0)$ implies that each $x \in G$ has at most $1/c_0$ neighbors, so G has uniformly bounded degree. We introduce the symmetric Green function associated to Z,
$$g(x,y) = \frac{1}{\lambda_y} \sum_{n \geq 0} P_x(Z_n = y), \quad \text{for } x, y \in G, \tag{2.3}$$
we write $H_A\, (= T_{A^c}) = \inf\{k \geq 0;\ Z_k \in A\}$ for the first entrance time in A, and introduce the killed Green function
$$g_A(x,y) = \frac{1}{\lambda_y} \sum_{n \geq 0} P_x(Z_n = y,\ n < T_A), \quad \text{for } x, y \in G.$$
Applying the strong Markov property at time $T_A$ for $A \subset\subset G$, we obtain the relation
$$g(x,y) = g_A(x,y) + E_x\big[T_A < \infty;\ g(Z_{T_A}, y)\big].$$
Finally, the heat kernel of Z is defined as
$$p_n(x,y) = \lambda_y^{-1}\, P_x(Z_n = y), \quad \text{for all } x, y \in G \text{ and } n \in \mathbb{N}. \tag{2.7}$$
We further assume that G is endowed with a distance function d.
Remark 2.1. A natural choice is $d = d_G$, the graph distance on G, but this does not always fit our needs. We will return to this point in the next section. Roughly speaking, some care is needed due to our interest in product graphs such as $G_1$ in (1.4), and more generally graphs of the type $G = G' \times \mathbb{Z}$ as in [56]. This is related to the way in which conditions $(V_\alpha)$ and $(G_\beta)$ below propagate to a product graph, especially in cases where the factors have different diffusive scalings, see Proposition 3.5 and in particular (3.22) below.
We denote by B(x, L) = {y ∈ G : d(x, y) ≤ L} the closed ball of center x and radius L for the distance d, and by $B_E(x, L)$ the set of edges for which both endpoints are in B(x, L). For A ⊂ G, we let $\delta(A) = \sup_{x, y \in A} d(x, y) \in [0, \infty]$ denote the diameter of A. Note that unless d = d_G, balls in the distance d are not necessarily connected in the sense defined below (2.1).
We now introduce two natural assumptions on (G, λ), one geometric and the other analytic (see Remark 2.2 below). We suppose that G has regular volume growth of degree α with respect to d, that is, there exist α > 2 and constants $0 < c_1 \leq C_1 < \infty$ such that

(V_α) $c_1 L^{\alpha} \leq \lambda(B(x, L)) \leq C_1 L^{\alpha}$, for all $x \in G$ and $L \geq 1$.
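For instance, for G = Z^3 with unit weights and d = d_G, condition (V_α) holds with α = 3. The small sketch below (an illustration of ours, not from the text) counts lattice points in graph-distance balls and recovers the growth exponent empirically.

```python
import math

def ball_volume(L):
    # number of points of Z^3 in the l^1-ball of radius L, i.e. the ball for
    # the graph distance; with unit weights, lambda(B(0,L)) = 6 |B(0,L)| up to
    # boundary corrections
    return sum(1
               for x in range(-L, L + 1)
               for y in range(-L, L + 1)
               for z in range(-L, L + 1)
               if abs(x) + abs(y) + abs(z) <= L)

assert ball_volume(2) == 25   # 1 center + 6 at distance 1 + 18 at distance 2
# empirical growth exponent from two scales, log(N(64)/N(8)) / log(8)
slope = math.log(ball_volume(64) / ball_volume(8)) / math.log(8)
print(f"empirical growth exponent ~ {slope:.2f}")
assert 2.7 < slope < 3.1      # close to alpha = 3, up to lower-order terms
```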
We also assume that the Green function g has the following decay: there exist constants $0 < c_2 \leq C_2 < \infty$ and β ∈ [2, α), with α as in (V_α), such that

(G_β) $c_2\, (1 \vee d(x, y))^{-\nu} \leq g(x, y) \leq C_2\, (1 \vee d(x, y))^{-\nu}$, for all $x, y \in G$,

where we recall that ν = α − β from (1.6). The parameter β ≥ 2 in (1.6) can be thought of as characterizing the order of the mean exit time from balls (of radius L), which grows like $L^{\beta}$ as L → ∞, see Lemma A.1.
Remark 2.2 (Equivalence to heat kernel bounds). The above assumptions are very natural. Indeed, in case d(·, ·) is the graph distance (but see Remark 2.1 above), the results of [24], see in particular Theorem 2.1 therein, assert that, assuming (p_0), the conditions (V_α) and (G_β) are equivalent to the following sub-Gaussian estimates on the heat kernel: for all x, y ∈ G and n > 0,

(UHK(α, β)) $p_n(x, y) \leq \frac{C}{n^{\alpha/\beta}} \exp\Big[-\Big(\frac{d(x, y)^{\beta}}{C n}\Big)^{\frac{1}{\beta - 1}}\Big]$,

(LHK(α, β)) $p_n(x, y) + p_{n+1}(x, y) \geq \frac{c}{n^{\alpha/\beta}} \exp\Big[-\Big(\frac{d(x, y)^{\beta}}{c\, n}\Big)^{\frac{1}{\beta - 1}}\Big]$, when $d(x, y) \leq n$.

Many examples of graphs G for which (UHK(α, β)) and (LHK(α, β)) hold for the graph distance are given in [30], [5] and [28], and further characterizations of these estimates can be found in [25], [3], [7] and [4]. We will return to the consequences of (V_α), (G_β), and their relation to estimates of the above kind within our framework, i.e., for a general distance function d, in Section 3, cf. Proposition 3.3 and Remark 3.4 below.
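The polynomial on-diagonal decay $n^{-\alpha/\beta}$ in these estimates can be observed directly in the simplest case of the lazy random walk on Z (where α = 1, β = 2), by exact iteration of the one-step kernel. The sketch below is our own illustration and is not part of the text.

```python
import numpy as np

kernel = np.array([0.25, 0.5, 0.25])   # lazy SRW on Z: stays put w.p. 1/2
p = np.array([1.0])                    # delta mass at the origin
snapshots = {}
for n in range(1, 1025):
    p = np.convolve(p, kernel)         # exact n-step distribution, length 2n+1
    if n in (256, 1024):
        snapshots[n] = p

center = lambda q: q[len(q) // 2]      # the origin sits in the middle
ratio = center(snapshots[1024]) / center(snapshots[256])
print(f"p_1024(0,0)/p_256(0,0) = {ratio:.3f}")
# on-diagonal decay n^{-alpha/beta} = n^{-1/2}: quadrupling n halves p_n(0,0)
assert 0.45 < ratio < 0.55
# sub-Gaussian off-diagonal decay: p_n(0,x) is negligible for x ~ 4 sqrt(n)
assert snapshots[256][len(snapshots[256]) // 2 + 64] < 1e-5
```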
We now collect some simple geometric consequences of the above setup. We seize the opportunity to recall our convention regarding constants at the end of Section 1.

Lemma 2.3. Assume (p_0), (V_α), and (G_β) to be fulfilled. Then the estimates (2.8)-(2.10) hold.

Proof. We first show (2.8). Using (p_0), (G_β), and the strong Markov property at time $H_y$, there exists c > 0 such that for all $x \sim y \in G$,

$g(x, y) = P_x(H_y < \infty)\, g(y, y) \geq p_{x,y}\, g(y, y) \geq c_0 c_2$,

where $p_{x,y}$ is the transition probability between x and y for the random walk Z, see (2.2). Thus, by the upper bound in (G_β), one can find $C_3$ such that

(2.11) $d(x, y) \leq C_3$, for all $x \sim y \in G$.

For arbitrary x and y in G, we then consider a geodesic for the graph distance between x and y, apply the triangle inequality (for d) and use (2.11) repeatedly to deduce (2.8), i.e., that $d(x, y) \leq C_3\, d_G(x, y)$ for all $x, y \in G$.
Similarly, one obtains the corresponding comparison (2.9) for all $x \neq y \in G$. We now turn to (2.10). For x ∈ G, we have x ∈ B(x, 1), and thus $\lambda_x \leq \lambda(B(x, 1)) \leq C_1$ by (V_α); moreover, $g(x, x) \geq \lambda_x^{-1}$ by definition, and thus $\lambda_x$ is also bounded from below, by (p_0) and (G_β).

We now define the weak sectional isoperimetric condition alluded to in Section 1. This is an additional condition on the geometry of G that will enter in Section 8 to guarantee that certain "bad" regions are sizeable and thus costly in terms of probability, cf. the proofs of Lemma 8.4 and Lemma 8.6. The weak sectional isoperimetric condition is a condition on the existence of long R-paths in the boundary of sets, and similar conditions have already been used to study Bernoulli percolation, see [40]. Precisely, we say that G satisfies the weak sectional isoperimetric condition if

(WSI) there exist $R_0 \geq 1$ and $c_5 > 0$ such that for each finite connected subset A of G and all $x \in \partial_{\mathrm{ext}} A$, there exists an $R_0$-path from x to $B(x, c_5 \delta(A))^c$ in $\partial_{\mathrm{ext}} A$.
We now introduce the processes of interest. For each x ∈ G, we denote by $\Phi_x$ the coordinate map on $\mathbb{R}^G$ endowed with its canonical σ-algebra, $\Phi_x(\omega) = \omega_x$ for all $\omega \in \mathbb{R}^G$, and $P^G$ is the probability measure defined in (1.5). Any process $(\varphi_x)_{x \in G}$ with law $P^G$ will be called a Gaussian free field on G; see [50] as well as the references therein for a rigorous introduction to the relevance of this process. Recalling the definition of the level sets $E^{\geq h}$ of Φ in (1.7) and of the parameter $\bar h$ from (1.9), we now provide a simple argument showing that

(2.12) for each $h < \bar h$, $P^G$-a.s., $E^{\geq h}$ contains a unique infinite cluster.
Indeed, on the event $A^h_L = \{B(x, L/2)$ intersects at least two infinite clusters of $E^{\geq h}\}$, if L is large enough, there are at least two clusters of $E^{\geq h} \cap B(x, L)$ with diameter at least L/10 which are not connected in $E^{\geq h}$, and thus the event in (1.11) occurs. The events $A^h_L$ increase toward the event $\{E^{\geq h}$ has at least two infinite clusters$\}$ as L goes to infinity, and thus by (1.11), $E^{\geq h}$ contains $P^G$-a.s. at most one infinite cluster for all $h < \bar h$; (2.12) then follows since $\bar h \leq h_*$, as explained below (1.11).
On the other hand, random interlacements on a graph G as above are defined under a probability measure $P^I$ as a Poisson point process ω on the product space of doubly infinite trajectories on G modulo time-shift, whose forward and backward parts escape all compact sets in finite time, times the label space [0, ∞), see [62]. For u > 0, we denote by $\omega^u$ the random interlacement process at level u, which consists of all the trajectories in ω with label at most u. By $I^u$ we denote the random interlacement set associated to $\omega^u$, which is the set of vertices visited by at least one trajectory in the support of $\omega^u$, by $V^u \stackrel{\text{def.}}{=} G \setminus I^u$ the vacant set of random interlacements, and by $(\ell_{x,u})_{x \in G}$ the field of occupation times associated to $\omega^u$, see (1.8) in [57], which collects the total time spent at each vertex of G by the trajectories in the support of $\omega^u$. As stated in Corollary 4.2 below, if (p_0), (V_α) and (G_β) hold,

(2.13) for all u > 0, $I^u$ is $P^I$-a.s. an infinite connected subset of G.
For vertex-transitive G, (2.13) is in fact a consequence of Theorem 3.3 of [64], since all graphs considered in the present paper are amenable on account of (3.16) below as well as display (14) and thereafter in [64] (their spectral radius is equal to one).
Recall the definitions of the critical parameters h * and u * from (1.8) and (1.16), which describe the phase transition of E ≥h , the level sets of Φ (as h varies), and that of V u (as u varies). Note that (2.13) indicates a very different geometry of I u and V u as u → 0 in comparison with independent Bernoulli percolation on G. Indeed, it is proved in [63] that for all the graphs from (1.4), both the set of open vertices and its complement undergo a non-trivial phase transition.
In order to derive an alternative representation of the critical parameters $u_*$ and $h_*$, we recall that the FKG inequality was proved in Theorem 3.1 of [62] for random interlacements, and that it also holds for the Gaussian free field on G. Indeed, it is shown in [37] for any centered Gaussian field with non-negative covariance function on a finite space, and by conditioning on a finite set and using a martingale convergence theorem this result can be extended to an infinite space, see for instance the proof of Theorem 2.8 in [26]. As a consequence, for any x ∈ G, we have that

(2.14) $u_* = \inf\{u \geq 0;\ P^I(\text{the connected component of } V^u \text{ containing } x \text{ is infinite}) = 0\}$,

and similarly for $h_*$.
The proofs of Theorems 1.1 and 1.2 involve a continuous version of the graph G, its cable system $\widetilde G$, and of the various processes associated to it. We attach to each edge e = {x, y} of G a segment $I_e$ of length $\rho_{x,y} = 1/(2\lambda_{x,y})$, and $\widetilde G$ is obtained by glueing these intervals to G through their respective endpoints. In other words, $\widetilde G$ is the metric graph where every edge e has been replaced by an interval of length $\rho_e$. We regard G as a subset of $\widetilde G$, and the elements of G will still be called vertices. One can define on $\widetilde G$ a continuous diffusion X, via probabilities $P_z$, $z \in \widetilde G$, such that for all x ∈ G, the projection on G of the trajectory of X under $P_x$ has the same law as the discrete random walk Z on the weighted graph G under $P_x$. This diffusion can be defined from its Dirichlet form or directly constructed from the random walk Z by adding independent Brownian excursions on the edges beginning at a vertex. We refer to Section 2 of [33] or Section 2 of [21] for a precise definition and construction of the cable system $\widetilde G$ and the diffusion X; see also Section 2 of [17] for a detailed description in the case G = Z^d.
We denote by $\widetilde g(x, y)$, $x, y \in \widetilde G$, the Green function associated to X, i.e., the density relative to the Lebesgue measure on $\widetilde G$ of the 0-potential of X, which agrees with g on G, as well as by $\widetilde g_U$, for $U \subset \widetilde G$, the Green function associated to the process X killed on exiting U.
We define for $A \subset \widetilde G$ the set $A^* \subset G$ as the smallest set such that $A^* \supset G \cap A$, and such that for all $z \in A \setminus G$, there exist $x, y \in A^*$ such that $z \in I_{\{x, y\}}$. For all x ∈ G and L > 0, we write $\widetilde B(x, L)$ for the largest subset B of $\widetilde G$ such that $B^* = B(x, L)$, and for all $A \subset \widetilde G$ and L > 0, we let $\widetilde B(A, L)$ denote the largest subset B of $\widetilde G$ such that $B^* = B(A^*, L)$. Moreover, for $A \subset \widetilde G$, we write $z \leftrightarrow z'$ in A if there exists a continuous path between z and z′ in A, and we say that A is connected if $z \leftrightarrow z'$ in A for all $z, z' \in A$. The Gaussian free field naturally extends to the metric graph $\widetilde G$: let $\widetilde\Phi_z$, $z \in \widetilde G$, be the coordinate functions on the space of continuous real-valued functions $C(\widetilde G, \mathbb R)$, the latter endowed with the σ-algebra generated by the maps $\widetilde\Phi_z$, $z \in \widetilde G$. Let $\widetilde P^G$ be the probability measure on $C(\widetilde G, \mathbb R)$ such that, under $\widetilde P^G$,

(2.16) $(\widetilde\Phi_z)_{z \in \widetilde G}$ is a centered Gaussian field with covariance function $\widetilde g$.

The existence of such a continuous process was shown in [33]. Any random variable $\widetilde\varphi$ on $C(\widetilde G, \mathbb R)$ with law $\widetilde P^G$ will be called a Gaussian free field on $\widetilde G$. Moreover, if $\widetilde\varphi$ is a Gaussian free field on $\widetilde G$, then it is plain that $(\widetilde\varphi_x)_{x \in G}$ is a Gaussian free field on G.
With a slight abuse of notation, we will henceforth write $\varphi_x$ instead of $\widetilde\varphi_x$ when x ∈ G for emphasis. We now recall the spatial Markov property of the Gaussian free field on $\widetilde G$, see Section 1 of [60]. Let $K \subset \widetilde G$ be a compact subset with finitely many connected components, and let $U = \widetilde G \setminus K$ be its complement. We can decompose any Gaussian free field $\widetilde\varphi$ on $\widetilde G$ as $\widetilde\varphi = \widetilde\varphi^U + \widetilde h^U$, where $\widetilde h^U$ denotes the harmonic extension of the restriction of $\widetilde\varphi$ to K, and $\widetilde\varphi^U$ is a Gaussian free field independent of $\sigma(\widetilde\varphi_z,\ z \in K)$ and with covariance function $\widetilde g_U$; in particular, $\widetilde\varphi^U$ vanishes on K.
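The Gaussian identity behind this decomposition is the corresponding splitting of the Green function. As a hypothetical finite-dimensional illustration (ours, not from the text), the sketch below verifies, for a walk on a finite box V with K ⊂ V and U = V \ K, that g splits into $g_U$ plus the hitting distribution of K applied to g, which is exactly the statement that the conditional covariance of the field given its values on K is $g_U$.

```python
import itertools
import numpy as np

# Walk on Z^2 killed outside V = {0,...,6}^2 (a finite stand-in for a transient
# graph), unit weights so lambda_x = 4; K is a small compact set, U = V \ K.
V = list(itertools.product(range(7), repeat=2))
iV = {v: i for i, v in enumerate(V)}
n = len(V)
P = np.zeros((n, n))
for v in V:
    for dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        w = (v[0] + dv[0], v[1] + dv[1])
        if w in iV:
            P[iV[v], iV[w]] = 0.25
g = np.linalg.inv(np.eye(n) - P) / 4.0      # Green function of the killed walk

inK = np.array([v in {(3, 3), (3, 4)} for v in V])
U = ~inK
# Green function of the walk additionally killed on K
gU = np.linalg.inv(np.eye(U.sum()) - P[np.ix_(U, U)]) / 4.0
# hitting distribution H[x, z] = P_x(hit K at z before leaving V), x in U
H = np.linalg.solve(np.eye(U.sum()) - P[np.ix_(U, U)], P[np.ix_(U, inK)])
# Markov decomposition: g(x, y) = g_U(x, y) + E_x[g(Z_{H_K}, y)] on U x U
assert np.allclose(g[np.ix_(U, U)], gU + H @ g[np.ix_(inK, U)])
print("Markov decomposition of the Green function verified")
```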
One can also adapt the usual definition of random interlacements on G, see [62], to the cable system $\widetilde G$, as in [33], [60] and [17]. For each u > 0, one thus introduces under a probability measure $P^I$ the random interlacement process $\widetilde\omega^u$ on $\widetilde G$ at level u, whose restriction to the trajectories hitting $K \subset\subset G$ can be described by a Poisson point process with intensity $u\, P_{e_K}$, where $e_K$ is the usual equilibrium measure of K ⊂⊂ G, see (3.6) below. One then defines a continuous field of local times $(\widetilde\ell_{z,u})_{z \in \widetilde G}$ relative to the Lebesgue measure on $\widetilde G$ associated to the random interlacement process on $\widetilde G$ at level u, i.e., $\widetilde\ell_{z,u}$ corresponds for all $z \in \widetilde G$ to the density with respect to the Lebesgue measure on $\widetilde G$ of the total time spent by the random interlacement process around z. For all u > 0, the restriction $(\widetilde\ell_{x,u})_{x \in G}$ of the local times to G coincides with the field of occupation times $(\ell_{x,u})_{x \in G}$ associated with the discrete random interlacement process $\omega^u$ defined above (2.13), and just like for the free field, we will write $\ell_{x,u}$ instead of $\widetilde\ell_{x,u}$ when x ∈ G. We also define for each measurable subset B of $\widetilde G$ and u > 0 the family

(2.18) $\widetilde\ell_{B,u} = (\widetilde\ell_{z,u})_{z \in B} \in C(B, \mathbb R)$,

and the random interlacement set at level u by

(2.19) $\widetilde{\mathcal I}^u = \{z \in \widetilde G;\ \widetilde\ell_{z,u} > 0\}$.
The connectivity properties of $\widetilde{\mathcal I}^u$ will be studied in Section 4. In particular, as stated in Corollary 4.2, $\widetilde{\mathcal I}^u$ is $P^I$-a.s. an unbounded and connected subset of $\widetilde G$, and the same is true of $I^u$ (as a subset of G). We will elaborate on an important link between the fields $\widetilde\ell_{\widetilde G, u}$ and $\widetilde\varphi$ from (2.18) and (2.16) in Section 5. Finally, one of the main tools in the study of the percolative properties of the vacant set of random interlacements and of the level sets of the Gaussian free field, and the driving force behind the renormalization arguments of Section 8, is a certain family of correlation inequalities on $\widetilde G$, which we now state. Their common feature is a small sprinkling of the parameters u and h, respectively, which partially compensates for the absence of a BK inequality (after van den Berg and Kesten, see for instance [26]) caused by the presence of long-range correlations in these models. The results below, in particular (2.21), are of independent interest. We recall the notation from the paragraph preceding (2.16) and (2.18) and use C(A, ℝ) to denote the space of continuous functions from A to the reals, where the topology on A is generally clear from the context.
Theorem 2.4. There exist $C_6$ and $c_6$ such that for all ε ∈ (0, 1), and all measurable functions $f_i : C(\widetilde A_i, \mathbb R) \to [0, 1]$, i = 1, 2, which are either both increasing or both decreasing, if s > 0, the inequality (2.20) holds, and there exist $C_7$, $C_8$ and $c_8$ such that for all u > 0, ε ∈ (0, 1) and $f_i$ as above, if $s \geq C_7 (r \vee 1)$, the inequality (2.21) holds, where the plus sign corresponds in both equations to the case where the functions $f_i$ are increasing and the minus sign to the case where the functions $f_i$ are decreasing.
The proof of Theorem 2.4 is deferred to Section 6. While (2.20) follows rather straightforwardly from the decoupling inequality from [38] for the Gaussian free field (see also Theorem 6.2 for a strengthening of (2.20)), the proof of (2.21) is considerably more involved. It uses the soft local times technique introduced in [39] on Z d for random interlacements, but a generalization to the present setup requires some effort (note also that for graphs of the type G = G × Z, one could also use the inequalities of [56], which are proved by different means).

Preliminaries and examples
We now gather several aspects of potential theory for random walks on the weighted graphs introduced in the last section. These include estimates on killed Green functions, see Lemma 3.1 below, a resulting (elliptic) Harnack inequality, bounds on the capacities of various sets, see Lemma 3.2, and on the heat kernel, see Proposition 3.3, which will be used throughout. We then proceed to discuss product graphs in Proposition 3.5 and, with a view towards (WSI), connectivity properties of external boundaries in Proposition 3.7. These results are helpful in showing how the examples from (1.4), which constitute an important class, fit within the framework of the previous section. We conclude this section by deducing in Corollary 3.9 that our main results, Theorems 1.1 and 1.2, apply in all cases of (1.4).
From now on, we assume that (G, λ) is an infinite, connected, weighted graph endowed with a distance function d such that

(3.1) (p_0), (V_α) and (G_β) hold (see Section 2).

Throughout the remainder of this article, we always tacitly work under the assumptions (3.1). Any additional assumption will be mentioned explicitly.
The following lemma collects an estimate similar to (G_β) for the stopped Green function (2.5).

Lemma 3.1. There exists $C_9 \geq 1$ such that for all $U_1 \subset U_2 \subset\subset G$ with $d(U_1, U_2^c) \geq C_9 (\delta(U_1) \vee 1)$ and all $x, y \in U_1$,

(3.2) $\frac{c_2}{2}\, (1 \vee d(x, y))^{-\nu} \leq g_{U_2}(x, y) \leq C_2\, (1 \vee d(x, y))^{-\nu}$.
Proof. Let $U_1 \subset U_2 \subset\subset G$ with $d(U_1, U_2^c) \geq C_9 (\delta(U_1) \vee 1)$. The upper bound in (3.2) follows immediately from (G_β) since $g_{U_2}(x, y) \leq g(x, y)$ for all x, y ∈ G by definition. For the lower bound, using (2.6) and (G_β), we obtain that for all $x \neq y \in U_1$,

$g_{U_2}(x, y) \geq g(x, y) - \sup_{z \in U_2^c} g(z, y) \geq c_2\, d(x, y)^{-\nu} - C_2\, d(U_1, U_2^c)^{-\nu}.$

Thus, choosing $C_9$ large enough so that $C_2\, d(U_1, U_2^c)^{-\nu} \leq \frac{c_2}{2}\, d(x, y)^{-\nu}$ whenever $d(U_1, U_2^c) \geq C_9 (\delta(U_1) \vee 1)$, the lower bound follows. The lower bound for $g_{U_2}(x, x)$, $x \in U_1$, is obtained similarly.
Using Lemma 10.2 in [24], an important consequence of (3.2) is the elliptic Harnack inequality in (3.3) below. For this purpose, recall that a function f defined on $\overline U_2 \stackrel{\text{def.}}{=} B_{d_G}(U_2, 1)$, the closed 1-neighborhood of $U_2$ for the graph distance, is called L-harmonic (or simply harmonic) in $U_2$ if $E_x[f(Z_1)] = f(x)$, or equivalently $Lf(x) = 0$ (see (1.2)), for all $x \in U_2$. The bounds of (3.2) imply that there exists a constant $c_9 \in (0, 1)$ such that for all $U_1 \subset U_2 \subset\subset G$ with $d(U_1, U_2^c) \geq C_9 (\delta(U_1) \vee 1)$, and any non-negative function f on $\overline U_2$ which is harmonic in $U_2$,

(3.3) $\inf_{x \in U_1} f(x) \geq c_9\, \sup_{x \in U_1} f(x)$.

Another important consequence of (3.2) is that the balls for the distance d are almost connected in the following sense:

(3.4) for all $x \in G$, $R \geq 1$ and $y, y' \in B(x, R)$, $y \leftrightarrow y'$ in $B(x, C_{10} R)$, with $C_{10} = 2 C_9 + 1$.
Indeed, for all U ⊂⊂ G and $y, y' \in G$, $y \overset{U}{\longleftrightarrow} y'$ is equivalent to $g_U(y, y') > 0$, and by definition, for $y, y' \in B(x, R)$,

$d(\{y, y'\}, B(x, C_{10} R)^c) \geq C_{10} R - R = 2 C_9 R \geq C_9\, (\delta(\{y, y'\}) \vee 1).$

As a consequence, (3.2) implies that $g_{B(x, C_{10} R)}(y, y') > 0$ for all $y, y' \in B(x, R)$. We now recall some facts about the equilibrium measure and capacity of various sets. For A ⊂⊂ U ⊂ G, the equilibrium measure of A relative to U is defined as

(3.5) $e_{A,U}(x) = \lambda_x\, P_x(\widetilde H_A > T_U)\, 1_{x \in A}$, for $x \in G$,

where $\widetilde H_A \stackrel{\text{def.}}{=} \inf\{n \geq 1;\ Z_n \in A\}$ is the first return time in A for the random walk on G, and the capacity of A relative to U as the total mass of the equilibrium measure,

(3.6) $\mathrm{cap}_U(A) = \sum_{x \in A} e_{A,U}(x)$.

For all A ⊂⊂ U ⊂ G, the following last-exit decomposition relates the entrance time $H_A$ of Z in A, the exit time $T_U$ of U, the stopped Green function and the equilibrium measure:

(3.7) $P_x(H_A < T_U) = \sum_{y \in A} g_U(x, y)\, e_{A,U}(y)$, for all $x \in G$.

For A ⊂⊂ G and x ∈ G, we introduce the equilibrium measure, capacity and harmonic measure

(3.8) $e_A(x) = \lambda_x\, P_x(\widetilde H_A = \infty)\, 1_{x \in A}$, $\mathrm{cap}(A) = \sum_{x \in A} e_A(x)$, and (3.9) $\bar e_A(x) = e_A(x)/\mathrm{cap}(A)$,

respectively. The capacity is a central notion for random interlacements, since we have the following characterization of the random interlacement set $I^u$:

(3.10) $P^I(I^u \cap A = \emptyset) = \exp\{-u\, \mathrm{cap}(A)\}$, for all $A \subset\subset G$,

see Remark 2.3 in [62]. With these definitions, it then follows using (3.8) and (2.8) that the capacity of a ball $B(x_0, R)$ is comparable to $R^{\nu}$ for all $R \geq C_3$ and $x_0 \in G$, and hence there exist constants $0 < c_{11} \leq C_{11} < \infty$ only depending on G such that for all $R \geq 1$ and $x \in G$,

(3.11) $c_{11} R^{\nu} \leq \mathrm{cap}(B(x, R)) \leq C_{11} R^{\nu}$.

A useful characterization of the capacity in terms of a variational problem is given by

(3.12) $\mathrm{cap}(A) = \Big(\inf_{\mu} \sum_{x, y \in A} \mu(x)\, \mu(y)\, g(x, y)\Big)^{-1}$,

where the infimum is over probability measures µ on A, see e.g. Proposition 1.9 in [58] for the case of a finite graph with non-vanishing killing measure (the proof can be extended to the present setup). In particular, since every probability measure µ on A is also a probability measure on any set containing A, the capacity is increasing:

(3.13) $\mathrm{cap}(A) \leq \mathrm{cap}(B)$, for all $A \subset B \subset\subset G$.

Another consequence of the representation (3.12) is the following lower bound on the capacity of a set.
Lemma 3.2. There exists a constant c depending only on G such that for all $L \geq 1$ and $A \subset G$ connected with diameter at least L,

(3.14) $\mathrm{cap}(A) \geq c\, L^{\nu}$.

Moreover, if A ⊂ G is infinite and connected, then for all $x_0 \in G$ and all sufficiently large L,

(3.15) $\mathrm{cap}(A \cap B(x_0, L)) \geq c\, L^{\nu}$,

and thus $A \cap I^u \neq \emptyset$, $P^I$-a.s., for every u > 0.
Proof. Let us fix $L \geq 1$ and a connected subset A of G with diameter at least L. Using (2.8) and (2.11), one can select points of A whose mutual d-distances grow proportionally to their index differences, and the uniform probability measure µ on these points has energy $\sum_{x, y} \mu(x)\, \mu(y)\, g(x, y) \leq C L^{-\nu}$ by (G_β). Combining this bound with (3.12), the inequality (3.14) follows. If A is now an infinite and connected subset of G, then for each $x_0 \in G$ there exists $L_0 > 0$ such that for all $L \geq L_0$, the set $A \cap B_{d_G}(x_0, L/C_3)$ has diameter at least $\frac{L}{2 C_3}$, and thus by (2.8) the set $A \cap B(x_0, L)$ contains at least a connected component of diameter $\frac{L}{2 C_3}$, and (3.15) then follows directly from (3.14). Finally, by (3.10),

$P^I(A \cap I^u = \emptyset) \leq \lim_{L \to \infty} \exp\{-u\, \mathrm{cap}(A \cap B(x_0, L))\} = 0$, for every $u > 0$.

Next, we collect an upper bound on the heat kernel (2.7) and an estimate on the distribution of the exit time of a ball $T_{B(x, R)}$.
Proposition 3.3. i) There exists a constant C such that for all x, y ∈ G and n > 0,

(3.16) $p_n(x, y) \leq C\, n^{-\alpha/\beta}$.

ii) There exist constants c and C such that for all x ∈ G, R > 0 and positive integer n,

(3.17) $P_x\big(T_{B(x, R)} \leq n\big) \leq C \exp\Big\{-c\, \big(R^{\beta}/n\big)^{\frac{1}{\beta - 1}}\Big\}$.

Proposition 3.3 is essentially known; for instance, if d is the graph distance $d_G$, then these results (as well as (UHK(α, β)) and (LHK(α, β))) are proved in [24]. For a general distance d, some estimates similar to (3.16) and (3.17) (as well as (UHK(α, β)) and (LHK(α, β))) are also proved in [23] and [22] in the more general setting of metric spaces, and we could apply them to the variable rate continuous-time Markov chain on G. However, there does not seem to be any proof in the literature that exactly fits our needs (general distance d, discrete-time random walk Z), and so, for the reader's convenience, we have included a proof of Proposition 3.3 in the Appendix.

Remark 3.4. 1) With Proposition 3.3 at our disposal, following up on Remark 2.2, we briefly discuss the relation of the above assumptions (3.1) to heat kernel bounds within our setup. A consequence of (3.16) and (3.17) is that, under condition (p_0), (UHK(α, β)) holds, where (UHK(α, β)) is defined in Remark 2.2; note that, in contrast to the results of Remark 2.2, this holds true even when d is not the graph distance. Indeed, for $d = d_G$ this implication is part of Proposition 8.1 in [24], but the proof remains valid for any distance d. However, the corresponding lower bound (LHK(α, β)) on the heat kernel does not always hold. To see this, take for example G a graph such that (p_0), (V_α) and (G_β) hold when d is the graph distance, and let $d' = d^{1/\kappa}$ for some κ > 1 (cf. Proposition 3.5 and (3.22) below for a situation where this is relevant). Then for the graph G endowed with the distance d′, the conditions (p_0), (V_{α'}) and (G_{β'}) hold with α′ = ακ and β′ = βκ. Moreover, using (UHK(α, β)) for the distance d, one sees that the heat kernel decays too fast off the diagonal, and thus (LHK(α′, β′)) cannot hold for G endowed with the distance d′.
2) Even in cases where (LHK(α, β)) does not hold, it is still possible to obtain slightly worse lower bounds for a general distance d. We will not need these results in the rest of the article, and therefore we only sketch the proofs. We introduce the following near-diagonal lower estimate (NLHK(α, β)):

(3.18) $p_n(x, y) + p_{n+1}(x, y) \geq c\, n^{-\alpha/\beta}$, for all $x, y \in G$ and $n \geq c\, d(x, y)^{\beta}$.
Let us assume that the condition (p_0) is fulfilled. We then have the following equivalence for all α > 2 and β ∈ [2, α):

(3.19) (V_α) and (G_β) $\Longleftrightarrow$ (UHK(α, β)) and (NLHK(α, β)).

The first implication follows from (13.3) in [24], whose proof remains valid for a general distance d, given (3.18), (3.16), (A.1) and (3.3), and the proof of its converse is exactly the same as the proof of Proposition 15.1 in [24] or of Lemma 4.22 and Theorem 4.26 in [4]. Estimates similar to (UHK(α, β)) and (NLHK(α, β)) for the continuous-time Markov chain on G with jump rates $(\lambda_x)_{x \in G}$ and transition probabilities $(p_{x,y})_{x,y \in G}$, see (2.2), are also equivalent to (3.19), see Theorem 3.14 in [23]. Let us now also assume the following chain condition (D_ζ): there exist constants c > 0 and ζ ∈ [1, β) such that for all r > 0, k ∈ ℕ and x, y ∈ G with $d(x, y) \leq c\, k^{1/\zeta} r$, there exists a sequence $x = x_0, x_1, \ldots, x_k = y$ in G with $d(x_{i-1}, x_i) \leq r$ for all $1 \leq i \leq k$. Then the conditions in (3.19) are also equivalent to (UHK(α, β)) together with the following lower estimate:

(LHK(α, β, ζ)) $p_n(x, y) + p_{n+1}(x, y) \geq \frac{c}{n^{\alpha/\beta}} \exp\Big[-\Big(\frac{d(x, y)^{\beta}}{c\, n}\Big)^{\frac{\zeta}{\beta - \zeta}}\Big]$.

Indeed, under condition (D_ζ), the proof that (3.19) implies (LHK(α, β, ζ)) is similar to the proof of Proposition 13.2 in [24] or Proposition 4.38 in [4], modulo some slight modifications when d is a general distance, and its converse is trivial. Note that if $d = d_G$, it is clear that (D_1) holds and that the lower estimate (LHK(α, β, 1)) is the same as (LHK(α, β)), and thus we recover the results from Remark 2.2. If $d' = d_G^{1/\kappa}$ for some κ > 1, as in the counterexample of Remark 3.4, 1), and (V_α) and (G_β) hold for the distance $d_G$, then (D_κ) holds for the distance d′, and thus so does (LHK(ακ, βκ, κ)) for the distance d′, which is exactly the same estimate as (LHK(α, β)) for the distance $d_G$.
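Returning to the potential-theoretic identities recalled above (equilibrium measure, last-exit decomposition, and the variational characterization of the capacity), these can be verified exactly on a finite example. The sketch below is our own illustration, with our naming, for simple random walk on a two-dimensional box.

```python
import itertools
import numpy as np

# U = {0,...,8}^2 subset of Z^2, unit weights (lambda_x = 4), A a central block
U = list(itertools.product(range(9), repeat=2))
idx = {u: i for i, u in enumerate(U)}
n = len(U)
P = np.zeros((n, n))
for u in U:
    for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        v = (u[0] + d[0], u[1] + d[1])
        if v in idx:
            P[idx[u], idx[v]] = 0.25        # jumps leaving U are killed
gU = np.linalg.inv(np.eye(n) - P) / 4.0     # stopped Green function g_U

inA = np.array([3 <= u[0] <= 5 and 3 <= u[1] <= 5 for u in U])
# h(x) = P_x(H_A < T_U): equal to 1 on A, harmonic on U \ A, zero outside U
h = np.ones(n)
W = ~inA
h[W] = np.linalg.solve(np.eye(W.sum()) - P[np.ix_(W, W)],
                       P[np.ix_(W, inA)] @ np.ones(inA.sum()))
# equilibrium measure e_{A,U}(x) = lambda_x P_x(tilde H_A > T_U) 1_{x in A}
e = np.zeros(n)
e[inA] = 4.0 * (P @ (1.0 - h))[inA]
# last-exit decomposition: P_x(H_A < T_U) = sum_y g_U(x, y) e_{A,U}(y)
assert np.allclose(h, gU @ e)
cap = e.sum()                               # relative capacity cap_U(A)
# variational bound: cap_U(A) >= 1/energy(mu) for any probability mu on A
mu = inA / inA.sum()
assert cap >= 1.0 / (mu @ gU @ mu) - 1e-9
print(f"cap_U(A) = {cap:.4f}")
```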
We now discuss product graphs. Let $G_1$ and $G_2$ be two graphs as in the previous section (countably infinite, connected and with bounded degree), endowed with weight functions $\lambda^1$ and $\lambda^2$. The graph $G = G_1 \times G_2$ is defined such that $x = (x_1, x_2) \sim y = (y_1, y_2)$ if and only if there exists $i \neq j \in \{1, 2\}$ such that $x_i \sim y_i$ and $x_j = y_j$. One naturally associates with G the weight function λ given by (3.20), for all $x = (x_1, x_2) \sim y = (y_1, y_2)$, under which the two coordinates $X^1_\cdot$ and $X^2_\cdot$ of the associated continuous-time chain are independent. Let $X_\cdot$ be the corresponding walk on G (with jump rates $\lambda_x$, cf. (3.20)). Then $X_\cdot$ has the same law as $(X^1_\cdot, X^2_\cdot)$, and in view of (2.4), the Green function of G admits the representation (3.23) for all $x = (x_1, x_2)$ and $y = (y_1, y_2)$. We introduce, for i = 1, 2, the additive functionals $A^i_\cdot$, along with $\tau^i_t = \inf\{s \geq 0;\ A^i_s \geq t\}$ and the corresponding time-changed processes $Y^i_\cdot = X^i_{\tau^i_\cdot}$. By the above assumptions, the discrete skeletons of $Y^i_\cdot$, i = 1, 2, satisfy the respective heat kernel bounds HK(α_i, β_i) in the notation of [4], and thus by Theorem 5.25 in [4] (the process $Y^i_\cdot$ has unit jump rate), the bounds (3.25) hold for all $x = (x_1, x_2)$ and $y = (y_1, y_2)$ in G, abbreviating $d_i = d_i(x_i, y_i)$, where the lower bound holds for all $t \geq d_i \vee 1$ and the upper bound for all $t \geq d_i$. Going back to (3.23), we note that the time changes are comparable to t for all t ≥ 0 by (2.10) and (3.24), and observe the comparison (3.26), which follows for instance from Theorem 5.17 in [4]. We obtain (3.27) for all x and y, with constants possibly depending on $\alpha_i$ and $\beta_i$, keeping in mind that $d_2^{\beta} = d_i^{\beta_i}$ for some i in the third line below, and recalling the definition of α and β from (3.22) in the last step; we also note that the integral over u in the last but one line is finite since β < α. In view of (1.6), (3.27) yields the desired upper bound. For the corresponding lower bound, one proceeds similarly, starting from (3.23), discarding part of the integral, and applying the lower bound from (3.25). Thus, (G_β) holds, which completes the proof.
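The mechanism exploited in this proof, namely the independence of the two coordinates of the continuous-time chain on a product graph, can be illustrated in a toy example: for independent unit-rate walks, the generator of the pair is the Kronecker sum of the factor generators, so the joint heat kernel factorizes. The sketch below (ours; a simplified normalization, not the weights of (3.20)) checks this.

```python
import numpy as np

def path_laplacian(m):
    # generator of the unit-rate continuous-time walk on a path with m vertices
    L = np.zeros((m, m))
    for i in range(m - 1):
        L[i, i + 1] = L[i + 1, i] = 1.0
    np.fill_diagonal(L, -L.sum(axis=1))
    return L

def expm_sym(M, t):
    # matrix exponential exp(tM) of a symmetric M via eigendecomposition
    w, Q = np.linalg.eigh(M)
    return (Q * np.exp(t * w)) @ Q.T

L1, L2 = path_laplacian(3), path_laplacian(4)
# generator of two independent coordinates = Kronecker sum L1 (+) L2
L12 = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)
t = 0.7
# the two Kronecker terms commute, so the heat kernel factorizes exactly
assert np.allclose(expm_sym(L12, t), np.kron(expm_sym(L1, t), expm_sym(L2, t)))
print("product-chain heat kernel factorizes")
```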
We now turn to the proof of (WSI) for product graphs and the standard d-dimensional Sierpinski carpet, d ≥ 3. If $G = G_1 \times G_2$, we say that two vertices $x = (x_1, x_2)$ and $y = (y_1, y_2)$ are *-neighbors if and only if both the graph distance in $G_1$ between $x_1$ and $y_1$ and the graph distance in $G_2$ between $x_2$ and $y_2$ are at most 1. If G is the standard d-dimensional Sierpinski carpet, we say that $x = (x_1, \ldots, x_d)$ and $y = (y_1, \ldots, y_d)$ in G are *-neighbors if and only if there exist i, j ∈ {1, ..., d} such that $|x_i - y_i| \leq 1$, $|x_j - y_j| \leq 1$, and $x_k = y_k$ for all $k \notin \{i, j\}$. Moreover, we say in both cases that A ⊂ G is *-connected if every two vertices of A are connected by a path of *-neighboring vertices. We are going to prove that in these two examples, the external boundary of any finite and connected subset A of G is *-connected. In order to do this, we are first going to prove a property which generalizes Lemma 2 in [67], and then apply it to our graphs. In Proposition 3.7, we say that C is a cycle of edges if it is a finite set of edges such that every vertex has even degree in C, that P is a path of edges between x and y in G if x and y are the only vertices with odd degree in P, and we always understand the addition of sets of edges modulo 2.

Proposition 3.7. Let C be a set of cycles of edges such that for all finite sets of edges S ⊂ E and all cycles of edges Q,

(3.29) $\Big(Q + \sum_{C \in C_0} C\Big) \cap S = \emptyset$, for some finite $C_0 \subset C$.

Then for all finite and connected sets A ⊂ G and for all $x \in A^c$, the set $\partial^x_{\mathrm{ext}} A$ is connected in $G^+$, the graph with the same vertices as G and where {y, z} is an edge of $G^+$ if and only if y and z are both traversed by some C ∈ C.
In particular, if A is either a finite and connected subset of G 1 × G 2 for two infinite and locally finite graphs G 1 and G 2 , or of the standard d-dimensional Sierpinski carpet for d 3, then ∂ ext A is * -connected.
Proof. Let A be a finite and connected subset of G, and let us fix some $x_0 \in A$, $x_1 \in A^c$, and $S_1$ and $S_2$ two arbitrary non-empty disjoint subsets of G such that $\partial^{x_1}_{\mathrm{ext}} A = S_1 \cup S_2$, and let $\bar S_i = \{\{x, y\} \in E;\ x \in A$ and $y \in S_i\}$ for each i ∈ {1, 2}. We will prove that there exists C ∈ C which contains at least one edge of $\bar S_1$ and one edge of $\bar S_2$; thus, by contraposition, $\partial^{x_1}_{\mathrm{ext}} A$ will be connected in $G^+$ since $S_1$ and $S_2$ were chosen arbitrarily. Since A is finite and connected and $S_1$ and $S_2$ are non-empty, there exist two paths $P_1$ and $P_2$ of edges between $x_0$ and $x_1$ such that $P_i \cap \bar S_i \neq \emptyset$ but $P_i \cap \bar S_{3-i} = \emptyset$ for all i ∈ {1, 2}, and then $Q = P_1 + P_2$ is a cycle of edges. By (3.29) applied with $S = \bar S_1$, there exists a finite $C_0 \subset C$, each of whose elements may be assumed to intersect $\bar S_1$ (cycles disjoint from $\bar S_1$ can simply be dropped from $C_0$), such that

(3.30) $P_2 + \Big(Q + \sum_{C \in C_0} C\Big) = P_1 + \sum_{C \in C_0} C$.

The left-hand side of (3.30) is a path of edges between $x_0$ and $x_1$ which does not intersect $\bar S_1$ by definition, and thus it intersects $\bar S_2$. Therefore, the right-hand side of (3.30) intersects $\bar S_2$ as well, i.e., there exists $C \in C_0$ which intersects $\bar S_2$, and also $\bar S_1$ by definition.
We now prove that $\partial_{\mathrm{ext}} A$ is *-connected when $G = G_1 \times G_2$, for $G_1$ and $G_2$ two infinite and locally finite graphs. We start by considering the case in which $G_2$ is a tree, i.e., it does not contain any cycle. We define C by saying that C ∈ C if and only if it contains exactly every edge between $(x_1, x_2)$, $(x_1, y_2)$, $(y_1, y_2)$ and $(y_1, x_2)$ for some $x_1 \sim y_1 \in G_1$ and $x_2 \sim y_2 \in G_2$. Hence a set is connected in $G^+$ if and only if it is *-connected. Note that since $G_1$ and $G_2$ are infinite, $\partial_{\mathrm{ext}} A = \partial^x_{\mathrm{ext}} A$ for all $x \in A^c$, and thus we only need to prove (3.29).
Let S be a finite set of edges and $Q_0$ be a cycle of edges. We fix a nearest-neighbor path of vertices $\pi = (y_0, y_1, \ldots, y_p) \in G_2^{p+1}$ such that all the vertices of $G_2$ visited by the edges of $Q_0$ and S lie in $\{y_0, \ldots, y_{p-1}\}$. For all n ∈ {0, ..., p − 1} and all edges $e = (e_1, y_n) \in E_1 \times \{y_n\}$, with $E_1$ denoting the edges of $G_1$, we define $C^n_e$ as the unique cycle in C containing the edges e and $(e_1, y_{n+1})$. Next, we recursively define a sequence $(Q_n)_{n \in \{0, \ldots, p\}}$ of sets of edges by

$Q_{n+1} = Q_n + \sum_{e \in Q_n \cap (E_1 \times \{y_n\})} C^n_e.$

By construction, for all n ∈ {0, ..., p − 1}, $Q_p$ does not contain any edge in $G_1 \times \{y_n\}$, and thus if e is an edge in $Q_p$ of the form $(e_1, y)$ for some $e_1 \in E_1$ and $y \in G_2$, then necessarily $y = y_p$. Since $Q_p$ is a cycle of edges and since $G_2$ does not have any cycle, $Q_p \subset G_1 \times \{y_p\}$, and thus $Q_p \cap S = \emptyset$, which gives us (3.29).
Let us now assume that $G_2$ contains exactly one cycle of edges, and let $\{x_2, y_2\}$ and $\{x_2, z_2\}$ be two different edges of this cycle. Let A be a finite and connected subset of G; then the exterior boundary of A in $G_1 \times (G_2 \setminus \{x_2, y_2\})$ and the exterior boundary of A in $G_1 \times (G_2 \setminus \{x_2, z_2\})$ are *-connected by the tree case above. The other cases are similar, and we obtain that the exterior boundary of A in G is *-connected. We can thus prove by induction on the number of cycles that if $G_2$ has a finite number of cycles of edges, then the external boundary of any finite and connected subset A of G is *-connected. Otherwise, let x and y be any two vertices in $\partial_{\mathrm{ext}} A$, and let $\pi_x$ be an infinite nearest-neighbor path in $A^c$, without loops, beginning in x, such that the projection of $\pi_x$ on $G_1$ is a finite path on $G_1$, i.e. constant after some time, and $\pi_y$ be a finite nearest-neighbor path in $A^c$, without loops, beginning in y and ending in $\pi_x$. Let $G_2'$ be the graph whose vertex set is the projection on $G_2$ of $A \cup \partial_{\mathrm{ext}} A \cup \pi_x \cup \pi_y$, and with the same edges between two vertices of $G_2'$ as in $G_2$. By definition $G_2'$ is infinite and only contains a finite number of cycles of edges, so the exterior boundary of A in $G_1 \times G_2'$ is *-connected in $G_1 \times G_2'$, and thus x and y are *-connected in G.
Let us now take G to be the standard d-dimensional Sierpinski carpet, d ≥ 3, that we consider as a subset of $\mathbb N^d$, and A a finite and connected subset of G. We define C as the set of cycles with exactly 4 edges, and then a set is connected in $G^+$ if and only if it is *-connected, thus we only need to prove (3.29). Let S be a finite set of edges, $Q_0$ be a cycle of edges, and p ∈ ℕ be such that every vertex visited by $Q_0$ or S has first coordinate strictly less than p. Let us now define recursively two sequences $(Q_n)_{n \in \{0, \ldots, p\}}$ and $(R_n)_{n \in \{1, \ldots, p\}}$ of cycles of edges such that $Q_n \subset \{n, \ldots, p\} \times \mathbb Z^{d-1}$ for all n ∈ {0, ..., p}. For each square $V \in V_n$, all the vertices of $\{n\} \times V$ have an even degree, and $Q_n$ is a cycle of edges. Moreover, since d ≥ 3, every cycle of edges in $\{n\} \times V$ is a sum of cycles with exactly 4 edges in $\{n\} \times V$, and thus one can find such a set $R_{n+1}$. By construction, every edge $e = (n, e_1)$ of $Q_n$ is such that $(n + 1, e_1) \in G$, and we then define $C^n_e$ as the unique cycle in C containing the edges e and $(n + 1, e_1)$, and we take $Q_{n+1}$ accordingly, as in the product case. Remark 3.8. 1) One can extend Proposition 3.7 similarly to Theorem 3 in [67]. Let us assume that there exists C such that (3.29) holds, and that for each $e \in E^+ \setminus E$ there exists a cycle of edges $O_e$ with $e \in O_e$ and $O_e \setminus \{e\} \subset E$. Then for all finite sets A connected in $G^+$ and for all $x \in A^c$, the set $\partial^x_{\mathrm{ext}} A$ is connected in $G^{++}$, the graph with the same vertices and edges as $G^+$ plus every edge of the type {x, y} for x, y both crossed by $O_e$ for some edge $e \in E^+ \setminus E$. Indeed, let $G^+_A$ be the graph with the same vertices as G, and edge set $E^+_A$ which consists of E plus the edges in $E^+ \setminus E$ with both endpoints in A; the sum of a cycle Q of $G^+_A$ with the cycles $O_e$, $e \in Q \setminus E$, is a cycle of edges in E, and thus by (3.29) for G with the set of cycles of edges C, one can easily show that (3.29) also holds for $G^+_A$ with the set of cycles of edges $C \cup \{O_e;\ e \in E^+_A \setminus E\}$. Taking $O_e$ such that $O_e \setminus \{e\}$ only contains two connected edges of E for each $e \in E^+ \setminus E$, we get that the external boundary of every finite and *-connected subset A of G is *-connected, since $G^{++} = G^+$.
2) Proposition 3.7 provides us with a stronger result than Lemma 2 in [67], even when $G = \mathbb Z^d$, d ≥ 3. Indeed, $\mathbb Z^d = \mathbb Z^{d-1} \times \mathbb Z$, and thus the external boundary of every finite and connected (or even *-connected) subset of $\mathbb Z^d$ is *-connected in the sense of product graphs previously defined, i.e., it is connected in the graph obtained from $\mathbb Z^d$ by adding the edges $\{(x, n), (y, n + 1)\}$, for $n \in \mathbb Z$ and $x \sim y \in \mathbb Z^{d-1}$.
3) An example of a graph G for which we cannot apply Proposition 3.7, and in fact where we can find a finite and connected set whose boundary is not * -connected, and where (WSI) does not hold, but where (G β ) and (V α ) hold, is the Menger sponge. It is defined as the graph associated to the following generalized 3-dimensional Sierpinski carpet, see Section 2 of [6]: split [0, 1] 3 into 27 cubes of side length 1/3, remove the central cube of each face as well as the central cube of [0, 1] 3 , and iterate this process for each remaining cube. It is easy to show that G endowed with the graph distance satisfies (V α ) with α = log(20)/log(3), and (G β ) follows from Theorem 5.3 in [6] since the random walk on the Menger sponge is transient, see p. 741 of [5]. One can then easily check that, taking A n = (3 n /2, 5 × 3 n /2) 3 ∩ G, where we see G as a subset of R 3 , the set ∂ ext A n is not * -connected. In fact, for each x ∈ ∂ ext A n and p < n, there is no 3 p -path between x and B(x, 2 × 3 p ) c , and thus (WSI) does not hold.
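As a quick arithmetic sanity check of the exponent α = log(20)/log(3) ≈ 2.727 (a numerical sketch, not part of the source argument): each refinement step of the Menger sponge splits a cube into 27 subcubes and removes the 6 face-central ones together with the central one, keeping 20, so the number of cubes of side 3^−n grows like 20^n = (3^n)^α.

```python
import math

def menger_cube_count(n: int) -> int:
    """Number of cubes of side 3**-n kept after n refinement steps:
    each step keeps 27 - 6 - 1 = 20 of the 27 subcubes."""
    return 20 ** n

# The volume-growth exponent alpha solves 20 = 3**alpha.
alpha = math.log(20) / math.log(3)

print(menger_cube_count(3))  # 8000
print(round(alpha, 3))       # 2.727
```

Since α > 2 here, the volume condition (V α ) is of the same strength as in the other examples; it is the boundary geometry, not the growth, that makes (WSI) fail.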
We can now conclude that our main results apply to the examples mentioned in the introduction.
Proof. Condition (p 0 ) holds plainly in all cases since all graphs in (1.4) have unit weights and uniformly bounded degree. For G 1 , we classically have α = d, β = 2 and (WSI) follows e.g. from Proposition 3.
Finally, G 4 endowed with the graph distance d = d G 4 satisfies (V α ) for some α > 2 by assumption and (G β ) holds with β = 2 by Theorem 5.1 in [29]. To see that (WSI) holds, we first observe that the group Γ = ⟨S⟩ which has G 4 as a Cayley graph is finitely presented. Indeed, by a classical theorem of Gromov [27], Γ is virtually nilpotent, i.e., it has a normal subgroup H of finite index which is nilpotent. Furthermore, H is finitely generated (this is because Γ/H is finite, so writing gH, g ∈ C, with |C| < ∞ and 1 ∈ C, for all the cosets, one readily sees that H is generated by the finite set {h ∈ H; h = g −1 sg′ for some g, g′ ∈ C and some s ∈ S}).
Since H is nilpotent and finitely generated, it is in fact finitely presented, see for instance 2.2.4 (and thereafter) and 5.2.18 in [45], and so is Γ/H, being finite. Together with the normality of H one straightforwardly deduces from this that Γ is finitely presented, see again 2.2.4 in [45]. As a consequence Γ = ⟨S | R⟩ for a suitable finite set of relators R. This yields a generating set of cycles for G 4 of maximal cycle length t < ∞, where t is the largest length of any relator in R, and Theorem 5.1 of [66] (alternatively, one could also apply Proposition 3.7) readily yields that, for all x ∈ ∂ ext A, every two vertices of ∂ x ext A are linked via an R 0 path in ∂ x ext A, with R 0 = t/2. Moreover, since G has sub-exponential growth, {∂ x ext A, x ∈ ∂ ext A} contains at most two elements, see for instance Theorems 10.10 and 12.2, (g), in [70] and, since G does not have linear growth, in fact only one, see for instance Lemma 5.4, (a), and Theorem 5.12 in [31]. We also prove this fact for any graph satisfying (3.1) in the course of proving Lemma 6.5.
In order to prove (WSI), we thus only need to show that there exists c > 0 such that δ(∂ ext A) ≥ cδ(A) for all finite and connected subgraphs A of G, and we are actually going to show this inequality in the general setting of vertex-transitive graphs G. Write m def. = δ(∂ ext A), let us fix some x 0 ∈ ∂ ext A, and let us call B̃(x, m) = {y ∈ G; every unbounded path beginning in y intersects B(x, m)}, for all x ∈ G. Let us assume that there exists m). Iterating this reasoning, we can thus construct recursively a sequence (x n ) n∈N of vertices such that B̃(x n+1 , m) ⊂ B̃(x n , m) \ B(x n , m), and x n ↔ x n+1 in B̃(x n , m) for all n ∈ N. Therefore, there exists an unbounded path beginning in

(1.4), but also for any product graphs G 1 × G 2 under the same hypotheses as in Proposition 3.5. Further interesting examples can be generated involving graphs G′ endowed with a distance d′ = d G′ which is not of the form of a product of graph distances as in (3.22). For instance, in Corollary 4.12 of [28], estimates similar to (UHK(α′, α′ + 1)) and (LHK(α′, α′ + 1, ζ)) for some α′ > 1 and ζ ∈ [1, α′ + 1) are proved for different recurrent fractal graphs G′ when the distance d′ on G′ is the effective resistance as defined in (2.4) of [28]. By Lemma 3.2 in [28], (V α′ ) holds on G′ endowed with the distance d′ , and thus one can then prove similarly as in the proof of Proposition 3.5 that G′′ = G′ × Z (or some other product with an infinite graph satisfying (UHK(α, β)) and (NLHK(α, β))) satisfies (V α ) and (G β ) with α = (3α′ + 1)/2 and β = α′ + 1 for the distance for all x′ , y′ ∈ G′ and n, m ∈ Z.
Moreover, (WSI) is also verified on G′′ by Proposition 3.7, and thus the conclusions of Theorems 1.1 and 1.2 hold for G′′ . It should be noted that d′ is not always equivalent to the graph distance on G′ , see for instance the graph G′ considered in Corollary 4.16 of [28]. This graph is also another example of a graph where (D ζ ) holds for some ζ > 1 but not for ζ = 1, and where the estimates (UHK(α, β)) and (LHK(α, β, ζ)) are optimal at this level of generality.
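For orientation, and under the assumption that ν = α − β, as in (1.6), is the capacity-growth exponent, the exponents obtained above for G′′ = G′ × Z are mutually consistent:

```latex
\nu \;=\; \alpha - \beta \;=\; \frac{3\alpha' + 1}{2} - (\alpha' + 1) \;=\; \frac{\alpha' - 1}{2} \;>\; 0
\qquad \text{whenever } \alpha' > 1,
```

so the hypothesis α′ > 1 on the recurrent fractal factor is exactly what keeps the exponent ν of the product G′′ positive.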

Strong connectivity of the interlacement set
We now prove a strong connectivity result for the random interlacement set on the cable system, Proposition 4.1 below; see also Proposition 1 in [43] and Lemma 3.2 in [17] for similar findings in the case G = Z d . The availability of controls on the heat kernel and exit times provided by Proposition 3.3 will figure prominently in obtaining the desired estimates; see also Remark 4.8 below. The connectivity result will play a crucial role in Section 8, where I u will be used as a random network to construct certain continuous level-set paths for the free field. For each z ∈ G, u > 0, and L ≥ 1, if z ∈ I u we denote by C u (z, L) the set of points of G connected to z by a continuous path in I u ∩ B(z, L), and we take C u (z, L) = ∅ if z ∈ V u . We recall the notation introduced in (2.15) and (3.4), and our standing assumptions (3.1).
Proposition 4.1. For each u 0 > 0, there exist constants c 12 > 0, c > 0 and C < ∞, all depending on u 0 , such that, for all x 0 ∈ G, u ∈ (0, u 0 ] and L ≥ 1, The proof of Proposition 4.1 requires some auxiliary lemmas and appears at the end of the section. In the case G = Z d , Proposition 4.1 might appear stronger than Proposition 1 in [43] or Lemma 3.2 in [17] at first glance, since for all z, , but it is in fact essentially equivalent, see for instance the proof of Lemma 13 in [43]. An immediate consequence of Proposition 4.1 is the following corollary, which is a generalization of Corollary 2.3 of [55] from Z d to G as in (3.1).
Corollary 4.2. Let u > 0. Then P I -a.s., the subset I u of G is unbounded and connected. Analogously, P I -a.s., the subset I u of G is infinite and connected.
Proof of Corollary 4.2. Fix any vertex x 0 ∈ G. Let A L denote the event appearing on the left-hand side of (4.1). The events A L are increasing with lim L P I (A L ) = 1 by (3.11), and by (4.1) and a Borel-Cantelli argument, The same reasoning applies also to I u (with (4.2) below in place of (4.1)).
Let us denote for each u > 0 by I u the set of edges of G traversed by at least one of the trajectories in the trace of the random interlacement process ω u , and for each x ∈ G and L ≥ 1, if x ∈ I u , by C u (x, L) the set of vertices in G connected to x by a path of edges in I u ∩ B E (x, L), and we take C u (x, L) = ∅ otherwise. From the construction of the random interlacement process on the cable system G from the corresponding process on G by adding Brownian excursions on the edges, it follows that the inequality (4.2) below, valid for all u ≤ u 0 , will entail (4.1), where for x, y ∈ G and A ⊂ E, {x ←→ y in A} means that there exists a nearest neighbor path from x to y crossing only edges contained in A. We refer to the discussion at the beginning of the Appendix of [17] for a similar argument on why (4.2) implies (4.1). In order to prove (4.2), we will apply a strategy inspired by the proof of Proposition 1 in [43] for the case G = Z d .
For U ⊂⊂ G let N u U be the number of trajectories in supp(ω u ) which enter U. By definition, N u U is a Poisson variable with parameter ucap(U ), and thus there exist constants c, C ∈ (0, ∞) such that uniformly in u ∈ (0, ∞), see [43]. We now state a lemma which gives an estimate in terms of capacity for the probability to link two subsets of B(x, L) through edges in I u ∩ B(x, C 10 L).

Lemma 4.3. There exist constants c ∈ (0, 1) and C ∈ [1, ∞) such that for all L ≥ 1, u > 0 and all subsets U and V of B(x, L), with ν as in (1.6).
Proof. For U not to be connected to V through edges in I u ∩ B E (x, C 10 L), all of the N u U trajectories hitting U must not hit V after hitting U and before leaving B(x, C 10 L), so where we also used e V ≤ e V,B(x,C 10 L) in the first inequality. Since cap(V ) ≤ C 11 L ν by (3.11), we can combine (4.5), (4.3) and (4.6) to get (4.4).
On our way to establishing (4.2) we introduce the following thinned processes.  L) for all x ∈ G and L > 0. Now fix some x 0 ∈ G and L > 0, and assume there exist x, y ∈ I u ∩ B (x 0 , L) such that C u (x, L) is not connected to C u (y, L) through edges in I u/2 ∩ B E (x 0 , C 10 L) . Let i, j ∈ {1, . . . , 6} be such that x ∈ I u/6 i and y ∈ I u/6 j , and let k = k(i, j) ∈ {1, 2, 3} be different from i and j. By definition, C  .7). In order to obtain (4.2), we now need a lower bound on the capacity of C u/6 i (x, L), and for this purpose we begin with a lower bound on the capacity of the range of N random walks. For each N ∈ N and S N = (x 1 , . . . , x N ) ∈ G N we define a sequence (Z i ) i∈{1,...,N } of independent random walks on G with fixed initial point Z i 0 = x i under some probability measure P S N , i.e., for each i ∈ {1, . . . , N }, Z i has the same law under P S N as Z under P x i . For all positive integers M and N we define the trace T (N, M ) on G of the N first random walks up to time M by For ease of notation, we also set with α and β from (V α ) and (G β ). The function F γ reflects the fact that the "size" of {Z n ; n ≥ 0} (as captured by β, see Lemma A.1) becomes increasingly small relative to the overall geometry of G (controlled by α) as γ grows. As a consequence, intersections between independent walks in I u are harder to produce for larger γ. This is implicit in the estimates below.
Proof. Consider positive integers N and M, and S N ∈ G N . By Markov's inequality, (4.10) Applying (3.12) with the probability measure µ = Moreover, using the heat kernel bound (3.16) and the Markov property at time p, we have uniformly in all p ∈ N and x, y ∈ G, and, thus, for p < q, with P · an independent copy of P · governing the process Z, using symmetry of g(·, ·), and the same upper bound applies to E x i g(Z i q , Z i p ) , again by symmetry of g. Considering the on-diagonal terms in the first sum on the right-hand side of (4.11), we obtain (4.14) For i = j on the other hand, (4.12) implies Combining this with (4.10), (4.11) and (4.14) yields (4.9).
We now iterate the bound from Lemma 4.4 over the different parts of the random walks (Z i ) i∈{1,...,N } in order to improve it.
Proof. For ε ∈ (0, 1), all positive integers N, M and k, we define By the Markov property and Lemma 4.4, for all t > 0, ε ∈ (0, 1) and Moreover, by the monotonicity property (3.13). Thus, applying the Markov property and using (4.17) inductively we obtain, for all t small enough and M ≥ 2. This yields (4.15).
The next step is to transfer the bound in Lemma 4.5 from the trace on G of N independent random walks to a subset of the random interlacement. For all u > 0 and A ⊂⊂ G, conditionally on the number N u A of trajectories in supp(ω u ) which hit A, let S u A ∈ G N u A be the family of entrance points in A by trajectories in the support of the random interlacement process ω u on G. With a slight abuse of notation, we identify

Proof. Writing, with N = cucap(A) , the inequality (4.18) easily follows from the Poisson bound (4.3) and Lemma 4.5. We turn to the proof of (4.19), and we fix x ∈ G, ε ∈ (0, 1 ∧ (γ − 1)) as well as positive integers k and M. Let us write A k = B(x, kM (1+ε)/β ) to simplify notation. If Ψ(u, A k , M ) ⊄ A k+1 , then for at least one trajectory Z i among the forward trajectories Z 1 , . . . , Z N u A k in supp(ω u ) which hit A k , the walk Z i will leave B(Z i 0 , M (1+ε)/β ) before time M, which is atypically short on account of Proposition 3.3 ii). Therefore, since N u Using (4.3), (3.11) and (3.17), we get and (4.19) follows since ε/(β − 1) ≤ ε ≤ γ − 1 = ν/β ≤ ν(1 + ε)/β by our hypothesis on ε.
For γ ≥ 2, stronger bounds than those provided by Lemma 4.6 are required to deduce (4.20). The idea is to apply Lemma 4.6 recursively to a sequence of ⌈γ⌉ independent random interlacement processes at level u/⌈γ⌉, as in Lemmas 8, 9 and 10 of [43] or Lemma A.3 and Corollary A.4 in [17] for G = Z d . We refer the reader to these references for details.
We conclude with the proof of Proposition 4.1.
Remark 4.8. The resulting connectivity estimate (4.1) is not optimal, see for instance (4.22). Notwithstanding, its salient feature for later purposes (see Section 8) is that it imposes a polynomial condition on u and L of the type u a L b ≥ C, for some a, b > 0, in order for the complement of the probability in (4.1) to fall below any given deterministic threshold (later denoted c 16 l 0 −4α , see Proposition 7.1).

Isomorphism, cable system and sign flipping
In the first part of this section we explore some connections between the interlacement I u and the (continuous) level sets of the Gaussian free field on the cable system defined in (2.16). Among other things, we aim to eventually apply a recent strengthening of the Ray-Knight type isomorphism from [57], see Theorem 2.4 in [60] and Corollary 5.3 below. This improvement will be crucial in our understanding that certain level sets tend to locally (i.e. at the smallest scale L 0 of our renormalization scheme, see Section 7) connect to I u , and that the latter can be used to build connections of the desired type, but it requires that certain conditions be met within our framework (3.1). We will in fact prove that the critical parameter for the percolation of the (continuous) level sets (5.1) is zero, and that E >−h contains P G -a.s. a unique unbounded connected component for all h > 0. In the second part of this section, we use a "sign-flipping" device which we introduced in [17], see Lemma 5.5, but improve it in view of the isomorphism from Corollary 5.3, which leads to certain desirable couplings gathered in Proposition 5.6 as a first step in proving Theorems 1.1 and 1.2. Our starting point is the following observation from [33], see also (1.27)-(1.30) in [57] (N.B.: (5.2) below is in fact true on any transient weighted graph (G, λ)). For each u > 0, there exists a coupling P u between two Gaussian free fields ϕ and γ on G, and local times ·,u of a random interlacement process on G at level u such that, P u -a.s., ·,u and γ are independent and (5.2) holds. The isomorphism (5.2) has the following immediate consequence: P u -a.s., In particular, by continuity, I u is either included in {z ∈ G; ϕ z > − √ 2u} or {z ∈ G; ϕ z < − √ 2u}. This result will be improved with the help of Corollary 4.2 in Proposition 5.2. We begin with the following lemma about the connected components of {z ∈ G; | Φ z + h| > 0}.
contains a unique unbounded connected component.
Proof. We begin with i). Conditionally on γ, let us denote by ( C γ i ) 0≤i<N an enumeration of the unbounded connected components of {z ∈ G; | γ z | > 0} for some N ∈ N ∪ {∞} (if there is no such component, we take N = 0), and let C γ i = C γ i ∩ G for all 0 ≤ i < N. Under P u , γ and I u are independent, see (5.2), and thus, P u -a.s., for all 0 ≤ i < N, x 0 ∈ G and L > 0, Using a union bound, we get that conditionally on γ, P u -a.s. all the unbounded connected components of {z ∈ G; | γ z | > 0} intersect I u and the claim follows by averaging under P u . We now argue that ii) holds. By symmetry of Φ it is sufficient to consider the case h > 0. For convenience, we write h = √ 2u for suitable u > 0. The existence of an unbounded connected component as desired follows from (5.3) in combination with Corollary 4.2. Thus, it remains to show uniqueness. Assume on the contrary that the set {z ∈ G; | ϕ z + √ 2u| > 0} contains at least two unbounded connected components. Then by connectivity of I u , see Corollary 4.2, and by the inclusion (5.3), at least one of these unbounded connected components does not intersect I u . Call it C u . Since C u ⊂ V u , the isomorphism (5.2) and continuity imply that C u is an unbounded connected component of {z ∈ G; | γ z | > 0}, which contradicts i).
The uniqueness and existence of the unbounded component of {z ∈ G; | Φ z + h| > 0} for h > 0 ensured by Lemma 5.1 implies that P G -a.s. either E >−h or G \ E >−h contains an unbounded connected component, and we are about to show that it is always E >−h . For graphs G having a suitable action by a group of translations (for instance graphs of the form G = G × Z), this result is clear by ergodicity and symmetry of the Gaussian free field. Due to the lack of ergodicity, we use a different argument here. The measure P u refers to the coupling in (5.2).
Proposition 5.2. For all h > 0, P G -a.s., the set E >h only contains bounded connected components whereas the set E >−h contains a unique unbounded connected component. Moreover, for all u > 0, P u -a.s.,

Proof. We only need to show that for all h > 0 Assume that (5.5) does not hold for some height h > 0, which is henceforth fixed, and set u = h 2 /2. Let C h ⊂ G be the set of points belonging to the infinite connected component of {z ∈ G; ϕ z < −h} whenever it exists (C h = ∅ if there is no such component). By a union bound there exists x 0 ∈ G such that For all n ∈ N, we define the random variable All constants from here on until the end of this proof may depend implicitly on u (or h). By definition of random interlacements, P I (x ∈ I u ) = 1 − e −u/g(x,x) , whence for all x ∈ G, c ≤ P I (x ∈ I u ) ≤ C due to (G β ) and thus, in view of (5.7), Following the lines of the proof of (1.38) in [56] one finds with the help of (G β ) that there exists a constant C such that for all x, x′ ∈ G, Moreover, by (2.10) and Lemma A.1, there exists a constant C < ∞ such that for all x ∈ G and n ∈ N, (5.10) Σ y∈B(x,n) g(x, y) ≤ Cn β .
Having shown Proposition 5.2, taking complements in (5.4), we know that for all u > 0, (and in particular h * ≤ √ 2u * ) for all graphs G satisfying our assumptions (3.1). Moreover, as will become clear in the proof of Corollary 5.3 below, Proposition 5.2 provides us with a very explicit way to construct a coupling P u as in (5.2) with the help of [60]. With a slight abuse of notation (which will soon be justified), for all u > 0, we consider a (canonical) coupling P u between a Gaussian free field γ on G (with law P G ) and an independent family of local times ( z,u ) z∈ G , continuous in z ∈ G, of a random interlacement process with the same law as under P I , cf. (2.18). Note that this defines the set I u by means of (2.19). We then define C ∞ u as the union of the connected components of {z ∈ G; 2 z,u + γ 2 z > 0} intersecting I u .

(5.19)
The following is essentially an application of Theorem 2.4 in [60].
for all z ∈ G, is a Gaussian free field, i.e., its law is P G , and the joint field ( γ · , ·,u , ϕ · ) thereby defined constitutes a coupling such that (5.2) holds. Moreover, C ∞ u is the unique unbounded connected component of {z ∈ G; ϕ z > − √ 2u}.
Proof. We aim at invoking Theorem 2.4 in [60] in order to deduce that the field ϕ defined in (5.20) is indeed a Gaussian free field. The conditions to apply this result are that (5.21) P G -a.s., {z ∈ G; | Φ z | > 0} only contains bounded connected components, and g(x, x) is uniformly bounded. The latter is clear by (G β ), but it is not obvious that (5.21) holds. However, by direct inspection of the proof of Theorem 2.4 in [60], we see that (5.21) is only used to prove (1.33) and (2.48) in [60], and that it can be replaced by the following (weaker) conditions: for all u > 0, P u -a.s., I u ⊂ {z ∈ G; ϕ z > − √ 2u} and By (5.19), z,u = 0 for z / ∈ C ∞ u and it then follows plainly from (5.20) that (5.2) holds. Finally, the fact that C ∞ u is the unique unbounded cluster of {z ∈ G; ϕ z > − √ 2u} is a consequence of Proposition 5.2 and the definitions of C ∞ u and ϕ, recalling that I u = {z ∈ G; z,u > 0} is an unbounded connected set due to Corollary 4.2 and (2.19).
Remark 5.4. 1) An interesting consequence of Corollary 5.3 is that, for all graphs satisfying our assumptions (3.1), the inclusion (5.18) can be strengthened, see Corollary 2.5 in [60].
2) For the remainder of this article, with a slight abuse of notation, we will solely refer to P u as the coupling between ( γ · , ·,u , ϕ · ) constructed around (5.19) and (5.20). Thus, the conclusions of Corollary 5.3 hold, and in particular P u satisfies (5.2).
We now adapt a result from Section 5 in [17] which roughly shows that, under P u , for each x ∈ G and with u = h 2 /2 for a suitable h > 0, except on an event with small probability, a suitable conditional probability that ϕ z ≥ −h for all z on the first half of an edge starting in x is smaller than the respective conditional probability that ϕ x ≥ h at the vertex x whenever h (or u) is small enough.
For each x ∼ y ∈ G, we denote by U x,y the compact subset of G which consists of the points on the closed half of the edge I {x,y} beginning in x, and for x ∈ G let U x = ∪ y∼x U x,y and K x = ∂U x , i.e., K x is the finite set of midpoints on any edge incident on x. For all U ⊂ G, we denote by A U the σ-algebra σ( ϕ z , z ∈ U ). For all x ∈ G, u > 0 and K > 0, we also define the events For all z ∈ K x , let y z be the unique y ∼ x such that z ∈ U x,y . Recall that by the Markov property (2.17) of the free field, one can write, for all x ∈ G, where β U x x is a centered Gaussian variable independent of A K x and with variance g U x (x, x) = 2 ( Σ y∼x (ρ x,y /2) −1 ) −1 = 1/(2λ x ), where we recall ρ x,y = 1/(2λ x,y ) and refer to Section 2 of [33] for details on these calculations.

Lemma 5.5. There exists c 13 > 0 such that for all u > 0, x ∈ G and K > √ 2u satisfying we have and, denoting by F the cumulative distribution function of a standard normal variable,

Proof. We first consider the event (5.25) and (5.26) and thus For any x ∈ G and z ∈ K x , by the Markov property (2.17), the law of the Gaussian free field ϕ on U x,yz conditionally on A K x ∪{x} is that of a Brownian bridge of length ρ x,yz /2 = (4λ x,yz ) −1 between ϕ x and ϕ z , for a Brownian motion with variance 2 at time 1. Furthermore, still conditionally on A K x ∪{x} , these bridges form an independent family in z ∈ K x . Therefore, on the event using an exact formula for the distribution of the maximum of a Brownian bridge, see for instance [11], Chapter IV.26, we obtain We now choose the constant c 13 such that the right-hand side of (5.33) is smaller than 1 if √ 2uλ x K ≤ c 13 , and (5.28) then readily follows from (5.33). The inequality (5.29) follows simply from (5.26): for all u > 0, K > √ 2u and x ∈ G, on the event {β U x x ≥ K}, This completes the proof of Lemma 5.5.
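The "exact formula" invoked in the proof is the classical law of the maximum of a Brownian bridge ([11], Chapter IV.26). In the normalization read off from the surrounding text (variance parameter σ² = 2, bridge length ℓ = ρ_{x,y_z}/2), it reads: for a bridge (X_t)_{0≤t≤ℓ} from a to b and any K ≥ a ∨ b,

```latex
\mathbb{P}\Bigl(\max_{0 \le t \le \ell} X_t \ge K \,\Big|\, X_0 = a,\ X_\ell = b\Bigr)
= \exp\Bigl(-\frac{2(K - a)(K - b)}{\sigma^2 \ell}\Bigr).
```

With σ² = 2 and ℓ = (4λ_{x,y_z})^{-1} the exponent equals −4λ_{x,y_z}(K − a)(K − b), which is consistent with the λ_x-dependence of the smallness condition √(2u) λ_x K ≤ c_13 in Lemma 5.5.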
For all parameters u > 0 and p ∈ (0, 1), we consider a probability measure Q u,p , extension of the coupling P u introduced above (5.19), see also Remark 5.4, 2), governing the fields (( γ z ) z∈ G , ( z,u ) z∈ G , (B p x ) x∈G ) such that, under Q u,p , the fields γ · , ·,u are those from above (5.19) (and thus Corollary 5.3 applies), (B p x ) x∈G is a family of i.i.d. {0, 1}-valued random variables with Q u,p (B p x = 1) = p, and the three fields B p · , γ · , ·,u are independent. (5.34) Recalling the definition of the σ-algebra A K x , x ∈ G, we consider a family (X x u,K,p ) x∈G ∈ {0, 1} G of random variables defined with the same underlying probability Q u,p from (5.34) and the property that, for K > √ 2u and all x ∈ G, We will consider the following two natural choices for X u,K,p , either (5.36) or (5.38), and we will allow for both. The reason for this twofold choice is explained below in Remark 8.9, 2). In case (5.36), inequality (5.35) follows directly from the definition (5.34) and (5.37), whereas in the case (5.38) it is a consequence of the decomposition (5.26) and the fact that γ y ≥ −K + √ 2u for all y ∈ K x . We also introduce the following random subsets of G, cf. (5.25) for the definitions of R x u and S x K : R u def. = {x ∈ G; R x u occurs}, S K def. = {x ∈ G; S x K occurs}, and X u,K,p def. = {x ∈ G; X x u,K,p = 1}.
(5.40) By (5.20), under Q u,p , if ϕ z < −K, then γ z < −K + √ 2u for all z ∈ G, and thus for all x ∈ G, in view of (5.25) and (5.39), We now take advantage of Lemma 5.5 to obtain the following coupling.
Proposition 5.6. For all u > 0, K > √ 2u and p ∈ (0, 1) such that (5.27) and (5.37) hold true for all x ∈ G, with (X x u,K,p ) x∈G as in (5.36) or (5.38), there exists a probability space (Ω u,K,p , F u,K,p , Q u,K,p ) on which, with a slight abuse of notation, one can define eight random subsets I u , H u,K,p , H u,K,p , R u , ) has the same law under Q u,K,p as well as (5.43), and the following inclusions hold:

Proof. For fixed values of u, K and p satisfying the above assumptions, we define the probability ν 1 on ({0, 1} G ) 2 × ({0, 1} G ) 3 as the (joint) law of with respect to Q u,p ( · | A K ) only depend on ϕ |K x for every x ∈ G, and thus can be written as for all x ∈ G. Now, for all x ∈ G and ψ x ∈ R K x , let ν x ψ x be the law of (5.46) where U is an independent uniform random variable on [0, 1] and (ψ x ) (this union should be read as corresponding to a partition of the event E x 6 ). Let us denote by (ξ 1 , . . . , ξ 5 ) the coordinates of {0, 1} 5 . In view of (5.46), (5.45) and the definition of the events E x i , one readily checks that Q u,p -a.s., ξ 3 = ξ 5 , and, For all ψ = (ψ x ) x∈G ∈ R K we define the following probabilities on By ( ) has a product law under Q u,p ( · | A K ) and they have the same law under Q u,p as (ξ 1 , ξ 2 , ξ 3 ) and (ξ 4 , ξ 5 ) under ν 2 , where ξ i = {x ∈ G; ξ x i = 1}, respectively, and ξ x i , x ∈ G and i ∈ {1, . . . , 5} are the coordinates of ({0, 1} G ) 3 × ({0, 1} G ) 2 . We finally concatenate these probabilities by defining the probability Q u,K,p on the product space where we wrote the coordinates under ν i as (η i 1 , η i 2 ) for all i ∈ {1, 2, 3}, and furthermore ν 3 (η 3 2 ∈ · | η 3 1 = ·) is a regular conditional probability distribution on ({0, 1} G ) 2 for η 3 2 given σ(η 3 1 ) (and similarly for ν 2 (η 2 2 ∈ · | η 2 1 = ·)).
One then defines the eight random sets from the statement of the theorem under Q u,K,p as follows: the sets I u and H u,K,p are defined by the marginals of η 1 1 , the sets R u , H u,K,p , E ≥− √ 2u ϕ by those of η 1 2 , the first marginal of η 2 2 gives E ≥ √ 2u ϕ , and V u as well as E ≥0 γ are given by η 3 2 . One verifies that (5.42) and (5.43) hold, and, using (5.41), (5.4), (5.47) and (5.24), that (5.44) holds.
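The construction around (5.45)–(5.46) uses an elementary monotone coupling: if a conditional probability is bounded below by p uniformly in the conditioning, then sampling both indicator variables from one shared uniform variable U makes the Bernoulli(p) bit dominated pointwise. A minimal sketch of this device (the argument cond_prob below is hypothetical, standing in for the conditional probabilities appearing in (5.45)):

```python
import random

def coupled_bits(p: float, cond_prob: float, u: float) -> tuple[int, int]:
    # One uniform u drives both indicators; whenever cond_prob >= p,
    # the second coordinate dominates the first pointwise.
    return (1 if u <= p else 0, 1 if u <= cond_prob else 0)

rng = random.Random(0)
p = 0.3
for _ in range(1000):
    cond_prob = rng.uniform(p, 1.0)   # conditional probability, >= p by assumption
    b, a = coupled_bits(p, cond_prob, rng.random())
    assert b <= a                     # domination holds sample by sample
```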
Remark 5.7. Lemma 5.5 is stated in terms of the field ϕ under the measure P u with u > 0, or equivalently under the measure Q u,p , to which it will eventually be applied. Nevertheless, let us note here that it could in fact be stated for the Gaussian free field Φ under P G for any weighted graph (G, λ), since the assumptions (3.1) are not required for its proof. Proposition 5.6 is valid on any transient weighted graph (G, λ) such that

We close this section with an outlook on the remaining sections. Under Q u,K,p from Proposition 5.6 with X u,K,p from (5.36), we have that H u,K,p and I u are independent, and that I u ∩ H u,K,p ⊂ V u . In order to prove Theorem 1.2 (but not Theorem 1.1), we thus only need to show that I u ∩ H u,K,p percolates for a suitable choice of u, K and p with Kλ x √ 2u ≤ c 13 and p ≤ F ( √ 2λ x (K − √ 2u)) for all x ∈ G. A promising strategy to prove that the intersection of I u and a large set percolates on G is to apply the decoupling inequalities of Theorem 2.4 to a suitable renormalization scheme, similarly to [44] and [17]. This requires roughly the same amount of work as obtaining an estimate like (1.10) for small h > 0 (both are "existence"-type results), and they will follow as a by-product of the renormalization argument developed in the course of the next three sections. The actual renormalization scheme will be considerably more involved than the arguments presented in [44] and [17] in order to produce an estimate like (1.11) for small h > 0 and thereby allow us to deduce Theorem 1.1.

Proof of decoupling inequalities
The coupling Q u,p of (5.34) will eventually feature within a certain renormalization scheme that will lead to the proof of our main results, Theorems 1.1 and 1.2. This is the content of Sections 7 and 8. The successful deployment of these multi-scale techniques hinges on the availability of suitable decoupling inequalities, which were stated in Theorem 2.4 and which we now prove. In essence, both inequalities (2.20) (for the free field) and (2.21) (for interlacements) constituting Theorem 2.4 will follow from two corresponding results in [38] and [39], see also (6.4) and (6.29) below (these results are stated in [38], [39], for Z d but can be extended to G, the cable system of any graph satisfying (3.1)), once certain error terms are shown to be suitably small. In the free field case, see Lemma 6.4, the respective estimate is straightforward and we give the short argument, along with the proof of (2.20), first.
The issue of controlling the error term is considerably more delicate for the interlacement. The key control comes in Lemma 6.6 below. Following arguments in [39], it essentially boils down to estimates on the second moment and on the tail of the so-called soft local times attached to the relevant excursion process (for one random walk trajectory), see (6.25) below, which are given in Lemma 6.7. For G = Z d , these bounds follow from the strong estimates of Proposition 6.1 in [39], but its proof is no longer valid at the level of generality considered here (the details of the argument are very Euclidean; see for instance Section 8 in [39]). We bypass this issue by presenting a way to obtain the desired bounds in Lemma 6.7, and along with it the decoupling inequality (2.21), without relying on (strong) estimates akin to Proposition 6.1 of [39]. This approach is shorter even when G = Z d , but comes at the price of requiring an additional assumption on the distance between the sets. An essential ingredient is a certain consequence of the Harnack inequality (3.3), see Lemma 6.5 below.
The following lemma will be useful to find "approximate lattices" at all scales inside G. It will be applied in the context of certain chaining arguments below. These lattices will also be essential in setting up an appropriate renormalization scheme in Section 7.
We start with some preparation towards (2.20). Let A 1 and A 2 be two disjoint measurable subsets of G such that A 1 is compact with finitely many connected components, and let U 1 = A c 1 . We recall the definition of the harmonic extension β U 1 of the Gaussian free field Φ from (2.17), and for each ε > 0 define the event The following result is stated on Z d in [38] but its proof is actually valid on G, for any G as in (3.1), using the Markov property of the free field on G, cf. (2.17), instead of the Markov property on Z d .
Theorem 6.2. Let A 1 and A 2 be two disjoint measurable subsets of G such that A 1 is compact with finitely many connected components, and let f 2 : C( A 2 , R) → [0, 1] be a measurable and increasing or decreasing function. Then for all ε > 0, P G -a.s., where σ = 1 if f 2 is increasing and σ = −1 if f 2 is decreasing.

Remark 6.3. We note in passing that conditions (p 0 ), (V α ) and (G β ) are not even necessary here: Theorem 6.2 holds on any locally finite, transient, connected weighted graph (G, λ).
Assume now that A 1 is no longer compact, but only bounded (and measurable) and let A 1 be the largest subset B of G such that B * = A * 1 (see before display (2.15) for a definition of B * ), i.e., A 1 is the closure of the set where one adds to A 1 all the edges I e such that A 1 ∩ I e = ∅, and A * 1 = A * 1 ⊂ G is the "print" of A 1 in G. Note that every continuous path started in G \ A 1 and entering A 1 will do so by traversing one of the vertices in A * 1 . The set A 1 is a compact subset of G with finitely many connected components. We can thus define H ε as in (6.3) but with U 1 def.
= ( A 1 ) c in place of U 1 , for any bounded measurable set A 1 ⊂ G. The inequality (2.20) will readily follow from Theorem 6.2 once we have the following lemma, which is similar to Proposition 1.4 in [38]. Lemma 6.4. Let A 1 and A 2 be two Borel-measurable subsets of G, s = d( A * 1 , A * 2 ) and r = δ( A * 1 ). Assume that s > 0 and r < ∞. There exist constants c 6 > 0 and C 6 < ∞ such that for all such A 1 , A 2 and all ε > 0, Proof. Let K = ∂B( A * 1 , s). By assumption, every connected path on G from A 2 to A 1 must enter K prior to A * 1 . By the strong Markov property of X, we have β x for all z ∈ A 2 and therefore, in view of (6.3), we obtain the bound Here, the equality follows from the fact that under P x for x ∈ K, X T U 1 = X H A 1 is always on A * 1 (cf. the discussion below Remark 6.3), that the law of Φ |G under P G is P G , and that the law of X |G under P x is P x for each x ∈ G. Following the proof of Proposition 1.4 in [38] (see the computation of Var(h x ) therein), if s > 2C 3 , then for each x ∈ K, β A * 1 x is a centered Gaussian variable with variance upper bounded by noting that d(K, A * 1 ) ≥ s − C 3 by (2.8). By possibly adjusting the constant C, we see that (6.7) continues to hold if s ≤ 2C 3 , for then s −ν ≥ c and sup x∈K,y∈ A * 1 g(x, y) ≤ sup x∈G g(x, x) ≤ C 2 by (G β ) and using that g(x, y) = P x (H y < ∞)g(y, y) ≤ g(y, y). By a union bound, using (V α ) and (2.10), we finally get with (6.7) and (6.6), for all s > 0 and r < ∞, which completes the proof.
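The mechanism of the last step, a Gaussian tail beating the polynomial entropy of the union bound over K, can be illustrated numerically; the constants and exponents below are placeholders, not the actual ones from (V α ) and (G β ):

```python
import math

def gauss_tail(eps, var):
    """P(|N(0, var)| > eps) = erfc(eps / sqrt(2*var))."""
    return math.erfc(eps / math.sqrt(2.0 * var))

def union_bound(r, s, eps, alpha=2.0, nu=1.0, C=1.0):
    """Schematic version of the estimate: |K| ~ C (r+s)^alpha boundary
    points, each carrying a centered Gaussian of variance ~ C (r/s)^nu."""
    K_size = C * (r + s) ** alpha          # polynomial entropy factor
    var = C * (r / s) ** nu                # variance decays with separation
    return K_size * gauss_tail(eps, var)

# the polynomial factor loses against the Gaussian tail once s >> r
assert union_bound(r=10.0, s=100.0, eps=2.0) < 1e-3
assert union_bound(r=10.0, s=1000.0, eps=2.0) < union_bound(r=10.0, s=100.0, eps=2.0)
```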
We now turn to (2.21), the decoupling inequality for random interlacements. We will eventually use the soft local times technique which has been introduced in [39] to prove a similar (stronger) inequality on Z d , for d ≥ 3. In anticipation of arising difficulties when estimating the error term which naturally appears within this method, we first show a certain Harnack-type inequality, see (6.11) below, which will be our main tool to deal with this issue. Let (6.9) K ≥ 5 ∨ (2C 3 ) 2 be a parameter to be fixed later (the choice of K will correspond to the constant C 7 appearing above (2.21), see (6.36) below). We consider two measurable subsets A 1 and A 2 of G, and we assume that the diameter r of A * 1 is finite and smaller than the diameter of A * 2 (recall the definition of A * ⊂ G for A ⊂ G from Section 2), and that s = d( A * 1 , A * 2 ) ≥ K(r ∨ 1) and s > 0. We then define (6.10) These assumptions imply that s ≥ K (6.9) ≥ 2C 3 √ K, so that by (2.8), the sets A 1 , A 2 and V are disjoint subsets of G, A 2 ⊃ A * 2 and any nearest neighbor path from A 1 to A 2 crosses V. The following lemma will follow from (3.3) and a chaining argument. Lemma 6.5. For all K ≥ c, there exists C 14 = C 14 (K) ≥ 1 such that for any A 1 , A 2 , V as above, B ∈ {A 1 , A 2 , A 1 ∪ A 2 }, and v a non-negative function on G, L-harmonic on B c , one has (6.11). Proof. Set ε(K) = 1/ √ K and let V ′ be the largest component of V (= ∂U 1 ) which is connected in U c 0 ∩ U 2 , where C 9 corresponds to the constant in the elliptic Harnack inequality, see above (3.3) and Lemma 3.1. We first prove that if K ≥ c (so that ε is small enough) then V ′ = V, i.e., V is connected in U c 0 ∩ U 2 . We first assume that K ≥ c so that U 0 ⊂ U 1 ⊂ U 2 . If V ′ ≠ V, then there exist y, y ′ ∈ V such that y is not connected to y ′ in U c 0 ∩ U 2 , and in particular, using the strong Markov property of Z at time H U 0 . Recall the relative equilibrium measure e U 0 ,U 2 (·) and capacity cap U 2 (U 0 ) from (3.6) and (3.7).
Using that s ≥ Kr, it follows that for K ≥ c , d(U 1 , U c 2 ) ≥ C 9 δ(U 1 ) so that, by (3.2) and (3.8), one obtains for all x ∈ A 1 ⊂ U 0 , We further assume that K ≥ c and ε is small enough so that d(U 0 , V ) ≥ εs/4, and then, using again (3.2) and (3.8), for all y ∈ V , We stress that C is uniform in K (and ε) in (6.14). On the other hand, applying the strong Markov property at time H y and (3.2) we find for all x ∈ U 1 Combining (6.12) with (6.14) and (6.15) (recall that U 0 ⊂ U 1 and y ∈ U 1 ) we get, for K ≥ c (with constants C and C ′ uniform in K and ε). But since y, y ′ ∈ V, (6.17) d(y, y ′) ≤ 2(r + εs) ≤ 4εs.
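The chaining mechanism behind Lemma 6.5 can be summarized schematically as follows (a sketch under the assumption that consecutive balls in the chain overlap and have radii proportional to εs; the bookkeeping below is illustrative, not the exact argument):

```latex
% Elliptic Harnack inequality (3.3): for v >= 0, L-harmonic on B(z, 2\rho),
%   \sup_{B(z,\rho)} v \;\le\; C_9 \, \inf_{B(z,\rho)} v .
% Chaining: given y, y' \in V with d(y, y') \le 4\varepsilon s, choose balls
%   B(z_1,\rho), \ldots, B(z_N,\rho), \quad z_1 = y,\; z_N = y',\; \rho \asymp \varepsilon s,
% with B(z_i,\rho) \cap B(z_{i+1},\rho) \ne \emptyset and v harmonic on each
% B(z_i, 2\rho). Iterating the inequality along the chain,
%   v(y) \;\le\; C_9^{\,N}\, v(y').
% Since \rho is proportional to the distance scale, the number N of balls
% stays bounded in terms of K alone, yielding (6.11) with C_{14} = C_{14}(K).
```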
We now recall some facts about soft local times from [39]. We continue with the setup of (6.10) and introduce the excursion process between B ∈ {A 1 , A 2 , A 1 ∪ A 2 } and V for the Markov chain Z · on G as follows. Let θ n : G N → G N denote the canonical time shifts on G N , that is for all n, p ∈ N and ω ∈ G N , (θ n (ω)) p = ω n+p . The successive return times to B and V are recursively defined by D 0 = 0 and for all k ≥ 1, where H B is the first hitting time of B by Z · , cf. below (2.4). Let N B = inf{k ≥ 0 : R k = ∞}, and, for each k < N B , the trajectory Σ k = (Z n ) n∈{R k ,...,D k } is called an excursion between B and V. It takes values in Ξ B , the set of trajectories starting in ∂B and either ending the first time V is hit or never visiting V. We add a cemetery point ∆ to Ξ B and, with a slight abuse of notation, introduce a new point ∆ ′ in G such that for any random variable For each x ∈ ∂B, let Ξ B (x) be the set of trajectories in Ξ B \ {∆} starting in x. Set Ξ B (∆ ′) = {∆} and for all σ ∈ Ξ B , let σ e ∈ V be the last point visited by σ if σ is a finite trajectory of Ξ B \ {∆}, and σ e = ∆ ′ otherwise. Upon defining Σ k = ∆ for k ≥ N B , the sequence (Σ k ) k≥1 can be viewed as a Markov process on Ξ B , called the excursion process between B and V.
According to Proposition 4.3 in [39], (σ n ) n≥1 has the same law under P as (Σ n ) n≥1 under P e V (recall the notation from (2.3)). By definition, see (6.21), for all σ, σ ′ ∈ Ξ B , p B (σ, σ ′) only depends on the last vertex visited by σ and on the first vertex visited by σ ′ and thus, on account of (6.23), for all x ∈ ∂B ∪ {∆ ′} and σ, σ ′ ∈ Ξ B (x), Γ n (σ) = Γ n (σ ′). In particular, we can define the soft local time up to time T B def. = inf{n; σ n = ∆} of the excursion process between B and V by where σ x is any trajectory in Ξ B (x). By definition, see (6.23), we can also write Assume that (Ω, F, P) is suitably enlarged so as to carry a family F = {F B k ; k = 1, 2, . . . } of i.i.d. random variables with the same law as F B 1 , and, for each u > 0, a random variable Θ V u ; these objects correspond to the soft local times attached to each of the trajectories in the support of ω u , the interlacement point process, which visit the set V (by (6.10) these are the trajectories causing correlations between the local times on A 1 and on A 2 at level u). For all u > 0 and x ∈ ∂B, we then set which has the same law as the accumulated soft local time of the excursion process between B and V up to level u defined in (5.22) of [39] (note that Section 5 in [39] can be adapted, mutatis mutandis, to any transient graph). The proof of Proposition 5.3 in [39] then shows that there exists a coupling Q between three random interlacements processes ω, ω 1 and ω 2 such that ω 1 and ω 2 are independent and, for all u > 0 and ε ∈ (0, 1), where (ω u ) |A i is the point process consisting of the restriction to A i of the trajectories in ω u hitting A i and we write µ ≤ ν if and only if ν − µ is a non-negative measure.
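For readers unfamiliar with the soft local times of [39], here is a self-contained toy implementation on a finite state space (the function name and the specific chain are invented for illustration); it samples a Markov chain by raising a curve against a Poisson point process, which is exactly the mechanism used above for the excursion process:

```python
import random

def soft_local_time_chain(p, mu, x0, n_steps, rng):
    """Toy soft local time sampler (Popov--Teixeira) on a finite state space.

    p[x][z] : transition density of the chain w.r.t. the base measure mu
    mu[z]   : base measure; densities satisfy sum_z p[x][z] * mu[z] == 1
    """
    states = list(mu)
    ppp = {z: [] for z in states}   # Poisson process on {z} x R_+, rate mu[z]

    def first_point_above(z, level):
        # lazily extend the Poisson process until a point exceeds `level`
        while not ppp[z] or ppp[z][-1] <= level:
            last = ppp[z][-1] if ppp[z] else 0.0
            ppp[z].append(last + rng.expovariate(mu[z]))
        for u in ppp[z]:
            if u > level:
                return u

    G = {z: 0.0 for z in states}    # the soft local time G_n
    x, path = x0, []
    for _ in range(n_steps):
        g = p[x]
        # xi_n = min over z of (first point above G(z)) - G(z), over density
        xi, nxt, u_star = min(
            ((first_point_above(z, G[z]) - G[z]) / g[z], z,
             first_point_above(z, G[z]))
            for z in states if g[z] > 0.0)
        for z in states:
            G[z] += xi * g[z]       # raise the curve G_{n-1} -> G_n
        G[nxt] = u_star             # exact, so the used point is not reused
        path.append(nxt)
        x = nxt
    return path

# sanity check: one- and two-step marginals of the sampled chain
mu = {"a": 1.0, "b": 1.0, "c": 1.0}
p = {"a": {"a": 0.2, "b": 0.5, "c": 0.3},
     "b": {"a": 0.6, "b": 0.1, "c": 0.3},
     "c": {"a": 0.3, "b": 0.3, "c": 0.4}}
rng = random.Random(0)
n = 20000
counts1 = {z: 0 for z in mu}
counts2 = {z: 0 for z in mu}
for _ in range(n):
    s1, s2 = soft_local_time_chain(p, mu, "a", 2, rng)
    counts1[s1] += 1
    counts2[s2] += 1
for z in mu:
    expected2 = sum(p["a"][w] * p[w][z] for w in mu)
    assert abs(counts1[z] / n - p["a"][z]) < 0.02
    assert abs(counts2[z] / n - expected2) < 0.02
```

The point of the construction, used crucially above, is that two different processes sampled against the *same* Poisson points can be compared through their soft local time curves G.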
Adding independent Brownian excursions on the cable system G as in the proof of Theorem 3.6 in [17], one then easily infers that (6.28) can be extended to the local times on the cable system, and thus, in the framework of (6.10), since A 1 = A * 1 and A * 2 ⊂ A 2 , that there exists a coupling Q such that (6.29) holds, where ( ℓ x,u ) x∈ G , ( ℓ 1 x,u ) x∈ G and ( ℓ 2 x,u ) x∈ G have the law under Q of local times of random interlacements on the cable system G, cf. around (2.18), with ℓ 1 independent of ℓ 2 . The decoupling inequality (2.21) will follow at once from (6.29), see the end of this section, once the following large deviation inequality on the error term is shown. We continue with the setup leading to (6.10). Recall the multiplicative parameter K in (6.9) controlling the distance d( A * 1 , A * 2 ).
Lemma 6.6. There exists K 0 ≥ 5 ∨ (2C 3 ) 2 such that for all u > 0, ε ∈ (0, 1) and B ∈ {A 1 , A 2 , A 1 ∪ A 2 } as in (6.10) with K ≥ K 0 and x ∈ ∂B, In order to prove Lemma 6.6, cf. (6.27), we need some estimates on the law of F B 1 (x), which deals with one excursion process between B and V. Let us define π B (y, x), the average number of times an excursion starts in x for the excursion process beginning in y (here, δ x,y = 1 if x = y and 0 otherwise; recall N B from below (6.19)). It follows from (5.24) in [39] that The following estimates will be useful to prove Lemma 6.6.
Lemma 6.7. For K ≥ K 0 , there exist c 15 (K) > 0 and C 15 (K) < ∞ such that, for all B ∈ {A 1 , A 2 , A 1 ∪ A 2 } as in (6.10), all x ∈ ∂B and v ∈ (0, ∞), Proof. We tacitly assume throughout the proof that K ≥ c so that Lemma 6.5 applies. Theorem 4.8 in [39] asserts that for all x ∈ ∂B The function y ′ → π B (y ′, x) is L-harmonic on B c , and (i) follows from (6.31) and Lemma 6.5. We now turn to the proof of (ii). Using (6.26) and (6.21), we have for all x ∈ ∂B and x ′ ∈ ∂B ∪ {∆ ′}, P-a.s., where we used the fact that y → P y (Z H B = x) is harmonic on B c and Lemma 6.5 in the last inequality. Slight care is needed above if σ e T B −1 = ∆ ′, in which case P σ e T B −1 for all x ∈ ∂B and x ′ ∈ ∂B ∪ {∆ ′} so that (6.32) continues to hold. With (6.32), we obtain for all x ∈ ∂B and v ∈ (0, ∞), since π B (x) ≥ inf y ′ ∈V P y ′ (Z H B = x) by (6.30) and (6.31). By (6.24) and (6.25), applied to the marked points (σ 1 , v 1 ), . . . , (σ T B , v T B ), and by (6.33), we obtain for all x ∈ ∂B and v ∈ (0, ∞), where a 1 and a 2 denote the two terms on the right-hand side. We bound a 1 and a 2 separately. For all x ′ ∈ ∂B ∪ {∆ ′}, µ B (Ξ B (x ′)) = 1, see (6.20), so the parameter of the Poisson variable in (6.34) is controlled by Lemma 6.5, and thus a 1 in (6.34) is indeed bounded by C(K) exp{−c ′(K)v} by a standard concentration estimate for the Poisson distribution (recall that C 14 = C 14 (K)).
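The "standard concentration estimate for the Poisson distribution" invoked for a 1 is the usual Chernoff bound; a quick numerical sanity check with arbitrary illustrative parameter values:

```python
import math

def poisson_tail(lam, k0):
    """Exact P(N >= k0) for N ~ Poisson(lam), via the complementary cdf."""
    pmf = math.exp(-lam)
    cdf = 0.0
    for k in range(k0):
        cdf += pmf
        pmf *= lam / (k + 1)
    return 1.0 - cdf

def chernoff_bound(lam, k0):
    """Chernoff bound P(N >= k0) <= e^{-lam} (e lam / k0)^{k0}, for k0 > lam."""
    return math.exp(-lam + k0 - k0 * math.log(k0 / lam))

lam = 5.0
for k0 in (10, 15, 20):
    assert poisson_tail(lam, k0) <= chernoff_bound(lam, k0)
# the bound decays exponentially in the overshoot, which is what controls a_1
assert chernoff_bound(lam, 20) < chernoff_bound(lam, 10) * 1e-3
```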
We now seek an upper bound for a 2 . Assume for now that B = A 1 , whence {Σ 1 = ∆} = {H A 1 = ∞} P y -a.s. for all y ∈ V, and thus T B (= inf{n; Σ n = ∆}) is dominated by a geometric random variable with parameter inf y∈V P y (H A 1 = ∞) = 1 − sup y∈V P y (H A 1 < ∞). By (3.8) and (6.10), for all y ∈ V, (6.36) where we used s ≥ (2C 3 √ K) ∨ (Kr) in the last inequality (this is guaranteed, cf. around (6.10)). By choosing K 0 large enough, we can ensure that the last constant in (6.36) is, say, at most 1/2 for all K ≥ K 0 , so that T B is dominated by a geometric random variable with positive parameter, and then a 2 in (6.35) is bounded by C(K) exp{−c(K)v} for all K ≥ K 0 and v ∈ (0, ∞). The proof is essentially the same if B = A 2 or B = A 1 ∪ A 2 ; the only point that requires slight care is that T B ≥ 2 on account of (6.10), and thus we use instead that T B − 1 is bounded by a suitable geometric random variable.
With Lemma 6.7 at hand, we are now able to prove Lemma 6.6 using arguments similar to those appearing in the proof of Theorem 2.1 in [39].
Proof of Lemma 6.6. By (6.27), (6.31) and Markov's inequality, we can write for all a > 0, x ∈ ∂B and ε ∈ (0, 1), recalling that Θ V u and the family F are independent, We now bound E[exp{aF B 1 (x)}] for small enough a. If t ∈ [0, 1], then e t ≤ 1 + t + t 2 , so by (i) of Lemma 6.7, for K ≥ K 0 , x ∈ ∂B and a > 0 (recall for purposes to follow that C 14 and also C 15 , c 15 all depend on K). Moreover, by (ii) of Lemma 6.7, for all K ≥ K 0 , x ∈ ∂B and a ∈ (0, c 15 /(2π B (x))), where we took advantage of the inequality e −x < 1/x 2 for x > 0 in the last step. Thus, combining (6.37), (6.38) and (6.39) with the choice a = c(K)ε/π B (x) for a small enough constant c(K) > 0, we have for all x ∈ ∂B, ε ∈ (0, 1) and K ≥ K 0 , In a similar way, one can bound P( ) from above. Indeed, using instead that for all t > 0, e −t ≤ 1 − t + t 2 , and so by (i) of Lemma 6.7, one obtains for a > 0, x ∈ ∂B and K ≥ K 0 , This completes the proof.
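The two elementary inequalities driving the exponential-moment bounds above can be checked numerically; the toy variable F below is an invented stand-in for F B 1 (x):

```python
import math

# e^t <= 1 + t + t^2 for t in [0, 1]
for i in range(1001):
    t = i / 1000.0
    assert math.exp(t) <= 1.0 + t + t * t + 1e-12

# e^{-t} <= 1 - t + t^2 for t > 0
for i in range(1, 5001):
    t = i / 100.0
    assert math.exp(-t) <= 1.0 - t + t * t + 1e-12

# consequence: for a bounded nonnegative F with aF <= 1,
#   E[e^{aF}] <= 1 + a E[F] + a^2 E[F^2],
# e.g. for F uniform on {0, 1/2, 1} and a = 1/2:
vals, a = [0.0, 0.5, 1.0], 0.5
mgf = sum(math.exp(a * v) for v in vals) / len(vals)
m1 = sum(vals) / len(vals)
m2 = sum(v * v for v in vals) / len(vals)
assert mgf <= 1.0 + a * m1 + a * a * m2
```

Bounding the moment generating function by first and second moments is exactly what reduces Lemma 6.6 to the estimates of Lemma 6.7.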
We can now conclude.
Proof of (2.21). Consider A 1 and A 2 as in the statement of Theorem 2.4 and set C 7 = K 0 with K 0 as appearing in Lemma 6.6. This fits within the framework described above (6.10) with K = K 0 , whence (6.29) and Lemma 6.6 apply. Thus, (2.21) follows upon using (V α ), (2.10) and (6.10) to bound |∂B| for any B ∈ {A 1 , A 2 , A 1 ∪ A 2 }.

General renormalization scheme
We now set up the framework for the multi-scale analysis that will lead to the proof of Theorems 1.1 and 1.2 in Section 8. This will bring together the coupling P u from Section 5, see Corollary 5.3 and Remark 5.4, 2), and the decoupling inequalities of Theorem 2.4, which have been proved in Section 6 and which will be used to propagate certain estimates from one scale to the next, see Proposition 7.1 below, much in the spirit of [55] and [56]. Crucially, this renormalization scheme will be applied to a carefully chosen set of "good" local features indexed by points on the approximate lattice Λ(L 0 ) (cf. Lemma 6.1) at the lowest scale L 0 , see Definition 7.4, which involve the fields ( γ · , ·,u , B p · ) from the coupling Q u,p , see (5.34). Importantly, good regions will allow for good local control on the set C ∞ u entering the definition (5.20) of ϕ · , and in particular on the γ · -sign clusters in the vicinity of the interlacement, cf. (5.19). This will for instance be key in obtaining the desired ubiquity of the two infinite sign clusters in (1.13), see also (1.10) and (1.11).
Following ideas of [55], improved in [56], [39] for random interlacements and extended in [47], [38] to the Gaussian free field, we first introduce an adequate renormalization scheme. As before, G is any graph satisfying the assumptions (3.1). We introduce a triple L = (L 0 , l, l 0 ) of parameters (see (2.8) for the definition of C 3 , before (2.21) for C 7 , (6.2) for C 13 , and recall ν from (1.6)), and define We recall here that the distance d in (7.3) and entering the definition of balls is the one from (3.1) (consistent with the regularity assumptions (V α ) and (G β )) and thus in general not the graph distance, cf. Remark 3.4. Note that since L 0 ≥ C 3 and l 0 ≥ 2l ≥ 4, see (7.1), then by (2.8), (6.1) and (7.2) the union in (7.3) is not empty. For A any measurable subset of G and B a measurable subset of C( A, R), we say that B is increasing if for all f ∈ B and f ′ ∈ C( A, R) with f ≤ f ′, one has f ′ ∈ B, and that B is decreasing if B c is increasing. For h ∈ R and u > 0, we define the events and we add the convention B I,u = ∅ for u ≤ 0. If B is increasing then (7.4) implies that B G,h ⊂ B G,h′ for h < h ′ and B I,u ⊂ B I,u′ for u < u ′.
Proposition 7.1. For all graphs G satisfying (3.1), there exist c 16 > 0 and C 16 ≥ 1 such that for all L 0 , l and l 0 as in (7.1), all ε > 0 and h ∈ R (resp. u > 0) with and all families B = {B x : x ∈ Λ L 0 } such that the sets B x , x ∈ Λ L 0 , are either all increasing or all decreasing measurable subsets of C( B(x, lL 0 ), R) satisfying one has for all n ∈ {0, 1, 2, . . . } and x ∈ Λ L n , where the plus sign corresponds to the case where the sets B x are all decreasing and the minus sign to the case where the sets B x are all increasing.
Proof. We give the proof for the Gaussian free field in the case of decreasing events. The proof for increasing events and/or random interlacements is similar and relies in the latter case on (2.21) rather than (2.20), which will be used below. Thus, fix some ε > 0, h ∈ R, l and l 0 as in (7.1), and assume B = {B x : x ∈ Λ L 0 } is such that B x is a decreasing subset of C( B(x, lL 0 ), R) satisfying (7.6), for all x ∈ Λ L 0 . The sequence (h n ) n≥0 is defined by h 0 = h and, for all n ≥ 1, h n = h + Σ n k=1 (ε ∧ 1)2 −k , whence h n ≤ h + ε for all n.
We now argue that there exists a constant C 16 such that, if the first inequality in (7.5) holds, then for all n ∈ {0, 1, 2, . . . }, with α as in (V α ) and C 13 defined by (6.2). It is then clear that (7.7) follows from (7.8), since h n ≤ h + ε and the sets B x , x ∈ Λ L 0 , are decreasing. We prove (7.8) by induction on n: for n = 0, (7.8) is just (7.6) upon choosing C 16 suitably. Assume that (7.8) holds at level n − 1 for some n ≥ 1. Note that by (7.3) and (7.1), for all h > 0 and . Let r n = 2lL n−1 . Then, for all x ∈ Λ L n and y, y ′ ∈ Λ L n−1 ∩ B(x, lL n ) such that d(y, y ′) ≥ L n (as appearing in the union in (7.3)), lL n ≥ d(B(y, r n ), B(y ′, r n )) ≥ (l 0 − 4l)L n−1 = s n .
Using (6.2), (7.3), (7.2), a union bound and the decoupling inequality (2.20), we get where the supremum is over all y ∈ Λ L n−1 ∩ B(x, lL n ). Then (7.8) follows by the induction hypothesis upon choosing C 16 large enough such that for all l and l 0 as in (7.1), ε ∈ (0, 1) and L 0 ≥ 1 such that the first inequality in (7.5) holds, as well as all n ≥ 1, which is possible since ε 2 s n ν ≥ ε 2 (L 0 l 0 n /2) ν ≥ C 16 log(L 0 + 1)( √ l 0 l 0 n−1 /2) ν and l 0 ν ≥ 8. Remark 7.2. 1) (Existence of a subcritical regime) As a first consequence of the scheme put forth in (7.1)-(7.4) and noteworthily under the mere assumptions (3.1), Proposition 7.1 can be readily applied to a suitable family of events B = {B x : x ∈ Λ L 0 } and of parameters L in (7.1) to obtain (stretched) exponential controls on the connectivity function above large levels. This complements results in [56]. The argument is classical, see e.g. [56], so we collect this result and simply sketch its proof. Let h ** be defined by (7.9), where the event under the probability refers to the existence of a nearest neighbor path of vertices from the ball B(x, L) to the boundary of the ball ∂B(x, 2L) in E ≥h . The parameter u ** is defined similarly, but with the infimum ranging over u ≥ 0 in (7.9) and the probability under consideration replaced by the corresponding probability for the vacant set V u . Moreover, for all h > h ** and u > u ** , there exist constants c > 0 and C < ∞ depending on u and h such that for all x ∈ G and L ≥ 1, (7.12) We now outline the proof, and focus on (7.11). One chooses l = 4 and l 0 = 8 1/ν ∨ C 13 −1/(2α) ∨ (8 + 4C 7 )l in (7.1), takes ε = 1 and fixes some L 0 large enough so that the second condition in (7.5) holds for all u ≥ 1. It is then clear from (V α ), (G β ) and (2.10) that one can find u ≥ 1 large enough such that P I B(x, 2L 0 ) , for all x ∈ G, where we used (3.10) and a union bound to infer the first inequality.
Having fixed such u, one first shows that u ** ≤ 2u, and hence that u ** is finite as asserted, by applying Proposition 7.1 as follows: for x ∈ G, one considers which are decreasing measurable subsets of C( B(x, 4L 0 ), R), and one proves by induction over n with the help of (6.1) that for all n ∈ {0, 1, 2, . . . } and x ∈ G, {B(x, 2L n ) V u(1+ε) ←→ ∂B(x, 4L n )} ⊂ G L x,n (B I,u(1+ε) ) (for now ε = 1 but this is in fact true for any ε, u > 0). By the above choices, Proposition 7.1 applies, yielding for all n ≥ 0 that P I (G L x,n (B I,2u )) ≤ 2 −2 n ≤ C exp{−L n c }, and in particular, lim n P I (B(x, 2L n ) V 2u ←→ ∂B(x, 4L n )) = 0, as desired.
To prove the equality in (7.11), one repeats the above argument but with different choices of u, L 0 and ε. Namely, one considers any u > 0 for which the liminf condition (7.14) holds. It suffices to show that u(1 + ε) ≥ u ** , for then, by letting ε ↓ 0, it follows that u ** is less than or equal to the infimum in (7.11), and the reverse inequality is obvious, as follows from (7.9). With u and ε fixed, one selects L 0 ≥ 1 large enough so as to ensure (7.5), and such that the probabilities in (7.14) are smaller than c 17 . Proposition 7.1 then implies, as explained above, that L → P I (B(x, L) V u(1+ε) ←→ ∂B(x, 2L)) has stretched exponential decay in L for all x ∈ G, thus yielding that u(1 + ε) ≥ u ** and the interlacement part of (7.12) as a by-product. The proof of (7.10) and the free field part of (7.12) follow similar lines.
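The quantitative heart of such applications of Proposition 7.1 is a recursion of the form q n ≤ (number of pairs) · q n−1 2 + (decoupling error); once the initial bound and the errors are small, the decay is doubly exponential. A sketch with invented, illustrative constants:

```python
import math

def iterate_renormalization(q0, pairs, err, n_max):
    """Iterate q_n = pairs * q_{n-1}**2 + err(n), the schematic recursion
    obtained when propagating a bound from scale L_{n-1} to scale L_n:
    an event at scale n forces two well-separated events at scale n-1
    (a union over ~`pairs` choices), up to a decoupling/sprinkling error."""
    q = [q0]
    for n in range(1, n_max + 1):
        q.append(pairs * q[-1] ** 2 + err(n))
    return q

# invented numbers: ~l_0^{2 alpha} pairs of sub-boxes per box, and an
# error decaying stretched-exponentially in the scale (here exp{-20 * 2^n})
pairs = 1.0e4
err = lambda n: math.exp(-20.0 * 2.0 ** n)
q = iterate_renormalization(q0=1e-5, pairs=pairs, err=err, n_max=10)
assert q[1] < q[0]
assert q[5] < 1e-20          # doubly exponential decay kicks in
assert q[10] < q[5] ** 2
```

This is why a single sufficiently good seed estimate at scale L 0 , as in (7.5)-(7.6), suffices to trigger the cascade.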
2) (Existence of a supercritical regime for ν > 1) Another simple consequence of Proposition 7.1 is that if G is a graph satisfying (3.1) with ν > 1 which contains a subgraph isomorphic to N 2 , then, identifying with a slight abuse of notation this subgraph with N 2 , there exists u > 0 such that P I -a.s., Examples of graphs for which we can prove (7.15) are product graphs G = G 1 × G 2 as in Proposition 3.5 with ν = α − β > 1 since if P 1 and P 2 are two semi-infinite geodesics of G 1 and G 2 , which exist by Theorem 3.1 in [68], then P 1 ×P 2 is a subgraph of G isomorphic to N 2 . Also, finitely generated Cayley graphs verifying (V α ) for some α > 3 which are not almost isomorphic to Z, see Theorem 7.18 in [36], are covered by this setting.
Let us now sketch the proof of (7.15). Using the result from Exercise 1.16 in [18], which is given for Z d but immediately transfers to our setting, we have for all positive integers L, M and N, since ν > 1, Here, we used that d((k, N ), (p, N )) ≤ C 3 d G ((k, N ), (p, N )) ≤ C 3 |k − p| in the second inequality, see (2.8). One then takes l = 4 and l 0 = 8 1/ν ∨ C 13 −1/(2α) ∨ (8 + 4C 7 )l in (7.1), takes ε = 1/2, and chooses C 17 such that for all u > 0 and L 0 ≥ C 3 with uL 0 ≥ C 17 , and all x ∈ {4L 0 + 1, 4L 0 + 2, . . . } 2 , where A *←→ C in B means that there exists a *-path in B ⊂ N 2 , as defined above Proposition 3.7, beginning in A and ending in C. Since ν > 1, one can find L 0 large enough so that (7.5) holds when u = C 17 L 0 −1 , and, applying Proposition 7.1 and using a property similar to (7.13) for *-paths of I u , we get that L → sup x P I (x *-I u/2 ∩N 2 ←→ S(x, L)) has stretched exponential decay, with the supremum ranging over all x ∈ {L + 1, L + 2, . . . } 2 . If V u ∩ N 2 has no infinite connected component, then for any positive integer L the sphere ∂ N 2 [0, L] 2 is not connected to ∞ in V u ∩ N 2 . Thus, by planar duality, see for instance Proposition 3.7, there exists L ′ ≥ L − 1 and x ∈ {L + 1, L + 2} × {L + 1} which is connected to S(x, L ′) by a *-path in I u ∩ N 2 , which happens with probability 0.
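The planar-duality step rests on a Peierls-type count: each vertex of N 2 has 8 *-neighbors, so at most 8 n *-paths of length n start from a fixed vertex, and a per-vertex failure probability below 1/8 (taken i.i.d. here purely for illustration; the actual argument handles correlations via the renormalization) kills long bad *-paths:

```python
def star_neighbors(v):
    """The 8 *-neighbors of a vertex of Z^2 (diagonal steps included)."""
    x, y = v
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def count_star_paths(start, n):
    """Number of *-paths of length n from `start`, revisits allowed:
    exactly 8**n, the entropy factor of the Peierls bound."""
    frontier = {start: 1}
    for _ in range(n):
        nxt = {}
        for v, cnt in frontier.items():
            for w in star_neighbors(v):
                nxt[w] = nxt.get(w, 0) + cnt
        frontier = nxt
    return sum(frontier.values())

assert count_star_paths((0, 0), 3) == 8 ** 3
# with per-vertex failure probability p (i.i.d. for illustration), the
# expected number of bad *-paths of length n is at most (8 p)^n
p = 0.05
assert (8 * p) ** 20 < 1e-7
```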
In order to prove u * > 0 for ν = 1 by the same method, one would need to remove the polynomial term (r n + s n ) α in the decoupling inequality (2.21), and it seems plausible that one could do so for a large class of graphs (including Z 3 ), using arguments similar to [16] or [61]. This is proved in the case G = G ′ × Z in [56]. However, this method does not seem to work in the case ν < 1. A (simpler) proof of u * > 0 is given for G = Z d in [42] without using decoupling inequalities, but it seems that one cannot simply adapt its proof to more general graphs if ν < 1. Therefore, the result u * > 0 from Theorem 1.2 is particularly interesting when ν < 1.
We now introduce the families of events of the form (7.4) to which Proposition 7.1 will eventually be applied. The reason for the following choices will become apparent in the next section. The strategy developed in [17] to prove h * > 0 on Z d , d ≥ 3, serves as a starting point in the current setting, but the desired ubiquity result (1.13) requires a considerably finer and more involved analysis, see also Remark 7.5 below. All our events will be defined under the probability Q u,p from (5.34), under which the Gaussian free field ϕ · on G is defined in terms of ( γ · , ·,u ) by means of (5.20).
We now come to the central definition of good vertices. As usual, we denote by ( x,u ) x∈G = ( x,u ) x∈G , I u = I u ∩ G, γ = ( γ x ) x∈G and ϕ = ( ϕ x ) x∈G the projections of , I u , γ and ϕ on the graph G. For all u > 0, these fields have the same law as the occupation time field of random interlacements at level u, a random interlacement set at level u and two Gaussian free fields on G, respectively. We recall the definition of the constants C 10 from (3.4) and C 3 from (2.8), as well as the definition of B p y from (5.34). Definition 7.4 (Good vertex). For u > 0, L 0 ≥ 1, K > 0, p ∈ (0, 1), x ∈ G, the event Moreover, a vertex x ∈ G is said to be (L 0 , u, K, p)-good if the event x occurs, and (L 0 , u, K, p)-bad otherwise.
Remark 7.5. The above definition of good vertices differs in a number of ways from a corresponding notion introduced in previous work [17] (cf. Definition 4.2 therein) by the authors. This is due to the refined understanding of the isomorphism (5.2) stemming from (5.19) and (5.20). Notably, properties (i) and (iii) above are new in dealing directly with γ · (rather than ϕ · ). Observe that (iii) involves both fields γ · and ·,u simultaneously, which are however independent. It will play a fundamental role for considerations ultimately leading to a proof of (1.11) for small h > 0. Property (ii) can be viewed as a more transparent substitute for the events involved in Lemma 3.3 and Definition 3.4 in [17] (see also (4.1) in [44]). It would be possible to find sharp estimates on the 'size' of the interlacement in a ball similar to Lemma 3.3 in [17] on the class of graphs considered here, but such bounds are in fact unnecessary once we have Proposition 4.1.
We conclude this section by collecting the following result, which will be crucially used in the next section. It sheds some light on why good vertices may be useful. For A ⊂ G, we denote by B G ( A, 1) the largest subset B of G such that B * is equal to the ball B G ( A ∩ G, 1) for the graph distance, i.e., B G ( A, 1) is the set of elements of G either in A or on an edge on the exterior boundary of A ∩ G.
Lemma 7.6. For all u > 0, L 0 ≥ 1, K > 0, p ∈ (0, 1) and any nearest neighbor path (x 0 , . . . , x n ) in G such that for each i ∈ {0, . . . , n}, the vertex x i is an (L 0 , u, K, p)-good vertex, let Then A is a connected subset of I u such that for all i, A ∩ B(x i , L 0 ) ≠ ∅ and every connected component of {y ∈ G; γ y > 0} ∩ B(x i , L 0 ) as well as of {y ∈ G; γ y < 0} ∩ B(x i , L 0 ) with diameter at least L 0 /4 intersects A, γ z ≥ −K for all z ∈ B G ( A, 1) and B p y = 1 for all y ∈ A ∩ G.
Proof. By (ii) of Definition 7.4, for each i ∈ {0, . . . , n} there exists z i ∈ I u ∩ B(x i , L 0 ). By (2.8), for each i, d(x i , z i+1 ) ≤ L 0 + C 3 and thus, by (7.17), Since z i ∈ I u ∩ B(x i , L 0 ) was chosen arbitrarily, any z i ∈ I u ∩ B(x i , 2C 10 (L 0 + C 3 )) which is connected to I u ∩ B(x i , L 0 ) by a path in I u ∩ B(x i , 2C 10 (L 0 + C 3 )), is also connected to every element of I u ∩ B(x i+1 , L 0 ) by a path in I u ∩ B(x i , 2C 10 (L 0 + C 3 )), and hence A is connected. Moreover, by (2.8), and thus by definition of B G ( A, 1), see above Lemma 7.6, as well as B(x i , 2C 10 (L 0 +C 3 )+ C 3 ), see above (2.15), one infers from (i), (iii) and (iv) of Definition 7.4 the remaining properties of A.

Denouement
We proceed to the proof of our main results, Theorems 1.1 and 1.2. This comes in several steps. The first one is reached in Proposition 8.3 below and yields under the mere assumptions (3.1) that long good (R-)paths, cf. Definition 7.4, are very likely for suitable choices of the parameters. The second step entails two precursor estimates to (1.10) and (1.11) (for small h > 0), presented in Lemmas 8.4 and 8.7, respectively. In a sense, the resulting estimates (8.16) and (8.24) provide a rough translation of the events appearing in (1.10) and (1.11) to the world of interlacements. Apart from the quantitative bounds leading to Proposition 8.3, these two estimates crucially rely on the additional geometric information provided by (WSI), on all aspects of Definition 7.4 and on certain features of the renormalization scheme, in particular with regards to the desired ubiquity, gathered in Lemma 8.6 below. The third step then deduces Theorems 1.1 and 1.2 from these results, using the couplings gathered in Proposition 5.6. As a by-product of our methods, Theorem 8.8 asserts the existence of infinite sign clusters (in slabs) without any statements regarding their local structural properties under the slightly weaker assumption ( WSI), introduced in Remark 8.5 below. We then conclude with some final remarks.
We continue in the framework of the previous section and recall in particular the scheme (7.1)-(7.3), the measure Q u,p from (5.34) and Definition 7.4. We also keep our standing (but often implicit) assumption that G satisfies (3.1) and mention any other condition, such as (WSI), explicitly. Henceforth, we set Note that l and l 0 satisfy the conditions appearing in (7.1). For all L 0 ≥ C 3 , we write L 0 = (L 0 , l, l 0 ) rather than L to insist on the choice (8.1). Thus L 0 ≥ C 3 remains a free parameter at this point. We now define bad vertices at all scales L n , n ≥ 0, cf. (7.2).
For all L 0 ≥ C 3 , x ∈ Λ(L 0 ) = Λ L 0 0 , u > 0, K > 0 and p ∈ (0, 1), we introduce (recall (7.2)) , and we then say that the vertex x is n-(L 0 , u, K, p)-bad if (recall (7.3)) , and x is n-(L 0 , u, K, p)-good otherwise. In view of (7.18) and the first line of (7.3), an (L 0 , u, K, p)-bad vertex in Λ L 0 0 is always a 0-(L 0 , u, K, p)-bad vertex, but not vice versa. A key to Proposition 8.3, see (8.14) below, is to prove that the probability of having an n-(L 0 , u, K, p)-bad vertex decays rapidly in n for a suitable range of parameters (L 0 , u, K, p). This relies on individual bounds for each of the events in (8.4), which are the object of Lemmas 8.1 and 8.2 as well as (8.10) below. Due to the presence of long-range correlations, the decoupling estimates from Proposition 7.1 will be crucially needed.
Lemma 8.1. There exist constants C 18 < ∞ and C 18 ′ < ∞ such that for all L 0 ≥ C 18 , K ≥ C 18 ′ log(L 0 ), n ∈ {0, 1, 2, . . . } and x ∈ Λ L 0 n , and all u > 0, p ∈ (0, 1), With this observation, and since γ has the same law under Q u,p as Φ under P G , in order to show (8.5), it is enough by Proposition 7.1 to prove that there exists C 18 ′ such that where C 18 ≥ C 3 ∨ 2 is chosen so that the first inequality in (7.5) holds for all L 0 ≥ C 18 , with l 0 as in (8.1) and ε = 1. Conditionally on the field γ = γ |G , and for each edge e = {y, y ′}, the process ( γ y+te ) t∈[0,ρ y,y′ ] on I e has the same law as a Brownian bridge of length ρ y,y′ = 1/(2λ y,y′ ) (the length of I e , cf. below (2.14)) between γ y and γ y′ of a Brownian motion with variance 2 at time 1, as defined in Section 2 of [17]. This fact has already appeared in the literature, see Section 2 of [33], Section 1 of [35] or Section 2 of [34] for example. We refer to Section 2 of [17] for a proof of this result when G = Z d , which can be easily adapted to a general graph satisfying (3.1). Let us denote by (W y,y′ t ) t∈[0,ρ y,y′ ] , defined as W y,y′ t = γ y+te − 2λ y,y′ t γ y′ − (1 − 2λ y,y′ t) γ y , the Brownian bridge of length ρ y,y′ between 0 and 0 of a Brownian motion with variance 2 at time 1 associated with ( γ y+te ) t∈[0,ρ y,y′ ] . For all L ≥ 1, K > 0 and x ∈ G, we thus have We consider both terms in (8.7) separately. For all y ∈ B(x, L), γ y is a centered Gaussian variable with variance g(y, y), thus by (V α ), (G β ) and (2.10), The law of the maximum of a Brownian bridge is well-known, see for instance [11], Chapter IV.26, and so for all y ∼ y ′ in G, where to obtain the inequality we took advantage of the fact that 1/ρ y,y′ = 2λ y,y′ ≥ c, cf. (2.10). Therefore, returning to (8.7), using (V α ), (2.10) and the fact that G has uniformly bounded degree, we obtain that for all L ≥ 1 and K ≥ 1, Q u,p (sup z∈ B(x,L) γ z ≥ K) ≤ CL α exp{−cK 2 }.
Choosing L = lL 0 and using the symmetry of γ · , we can finally bound for all L 0 ≥ C 18 and K ≥ 1, from which (8.6) readily follows for a suitable choice of C 18 .
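The explicit law of the bridge maximum used above comes from the reflection principle: for a bridge of a variance-2 Brownian motion from a to b over length ρ, P(max ≥ K) = exp{−(K − a)(K − b)/ρ} for K ≥ a ∨ b. A hedged Monte Carlo check (the discretization is an approximation and biases the estimate slightly low):

```python
import math, random

def bridge_max_tail_exact(a, b, T, K, var=2.0):
    """P(max of a Brownian bridge from a to b on [0,T] exceeds K), for a
    Brownian motion with variance `var` per unit time; reflection principle."""
    if K <= max(a, b):
        return 1.0
    return math.exp(-2.0 * (K - a) * (K - b) / (var * T))

def bridge_max_mc(a, b, T, K, var, n_steps, n_paths, rng):
    """Crude Monte Carlo estimate via a discretized bridge (biased low,
    since the discrete maximum misses excursions between grid points)."""
    dt = T / n_steps
    hits = 0
    for _ in range(n_paths):
        w, path = 0.0, [0.0]
        for _ in range(n_steps):
            w += rng.gauss(0.0, math.sqrt(var * dt))
            path.append(w)
        wT = path[-1]
        # pin the walk into a bridge from a to b
        m = max(a + (b - a) * (i / n_steps) + path[i] - (i / n_steps) * wT
                for i in range(n_steps + 1))
        hits += (m >= K)
    return hits / n_paths

rng = random.Random(1)
exact = bridge_max_tail_exact(0.0, 0.0, 1.0, 1.0)       # = exp(-1)
est = bridge_max_mc(0.0, 0.0, 1.0, 1.0, 2.0, 500, 4000, rng)
assert abs(est - exact) < 0.1
assert est < exact + 0.02                                # discretization bias is downward
```

The Gaussian tail exp{−cK 2 } in (8.9) is precisely this exponential, uniform over edges thanks to 2λ y,y′ ≥ c.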
The next lemma deals with the events involving the families D L 0 ,u x and E L 0 ,u x in (8.4), which both involve the interlacement parameter u > 0. In the former case, this will bring into play the connectivity estimates from Section 4 in order to initiate the decoupling.
Finally, for the events involving the family (F L 0 ,p ) c in (8.4), by a reasoning similar to that in Lemma 4.7 of [44] and using (V α ), there exists a constant C 20 such that for all p ∈ (0, 1) with p ≥ exp{−C 20 L 0 −α }, all u > 0, n ≥ 0 and x ∈ Λ L 0 n , For all u 0 > 0 and R ≥ 1 we define where we keep the dependence of various constants and of L 0 (u) on u 0 and R implicit. Furthermore, we choose constants C 21 and c 21 such that log(C 21 u −c 21 ) ≥ C 18 ′ log(l 0 L 0 (u)) for all u ∈ (0, u 0 ), and constants C 22 and c 22 such that 1 − C 22 u c 22 ≥ exp{−C 20 (l 0 L 0 (u)) −α } for all u ∈ (0, u 0 ), which can both be achieved on account of (8.11). Then, by (8.4), Lemmas 8.1 and 8.2 and (8.10), for all n ∈ N and u ∈ (0, u 0 ), Relying on (8.12), we now deduce a strong bound on the probability of seeing long R-paths of (L 0 (u), u, K, p)-bad vertices (see above (WSI) for a definition of R-paths). We emphasize that the following result holds for all graphs satisfying (3.1). In particular, (WSI) is not required for (8.13) below to hold. Proposition 8.3. For G satisfying (3.1) and each u 0 > 0, there exist constants c(u 0 ), C(u 0 ) ∈ (0, ∞) such that for all R ≥ 1, x ∈ G, u ∈ (0, u 0 ), K > 0 with K ≥ log(C 21 u −c 21 ), p ∈ (0, 1) with p ≥ 1 − C 22 u c 22 , and N > 0, (8.13) Q u,p (there exists an R-path of (L 0 (u), u, K, p)-bad vertices from x to B(x, N ) c ) ≤ C(u 0 ) exp{−(N/L 0 (u)) c(u 0 ) }.
Using the additional condition (WSI), Proposition 8.3 together with Lemma 7.6 can be used to show the existence of a certain set $\tilde A$, see Lemma 8.4 below, from which the prevalence of the infinite cluster of $E^{\geq h}$, for small $h > 0$, will eventually be deduced. The bound obtained in (8.16) will later lead to (1.10).
Lemma 8.4. Assume $G$ satisfies (WSI) (in addition to (3.1)), and let $R = R_0$ as in (WSI). Furthermore, let $u_0 > 0$, $u \in (0,u_0)$, $K > 0$ with $K \geq \log(C_{21}u^{-c_{21}})$, and $p \in (0,1)$ with $p \geq 1 - C_{22}u^{c_{22}}$. Then $Q^{u,p}$-a.s. there exists a connected and unbounded set $\tilde A \subset \tilde G$ such that
(8.15) $\tilde A \subset \tilde{\mathcal{I}}^u$, $\tilde\gamma_z \geq -K$ for all $z \in B_{\tilde G}(\tilde A,1)$, and $B^p_x = 1$ for all $x \in \tilde A \cap G$,
and there exist constants $c > 0$ and $C < \infty$ depending on $u$ and $u_0$ such that (8.16) holds for all $x_0 \in G$ and $L > 0$.

Proof. Fix a vertex $x_0 \in G$. By (WSI), there exists $R_0 \geq 1$ such that (8.17) holds for all finite connected subsets $A$ of $G$ with $x_0 \in A$ and $\delta(A) \geq C_3$, noting that $d(x,x_0) \leq \delta(A) + C_3 \leq 2\delta(A)$ for all $x \in \partial_{\mathrm{ext}}A$ by (2.8). It is then enough to prove that for all $u \in (0,u_0)$, $K \geq \log(C_{21}u^{-c_{21}})$ and $p \geq 1 - C_{22}u^{c_{22}}$, the probability under $Q^{u,p}$ of the event
(8.18) $\{$there does not exist an unbounded nearest-neighbor path in $G$ of $(L_0(u),u,K,p)$-good vertices starting in $B(x_0,L)\}$
has stretched-exponential decay in $L$ (with constants depending on $u$ and $u_0$). Indeed, if (8.18) does not occur, then by Lemma 7.6 there exists an unbounded connected component $\tilde A \subset \tilde{\mathcal{I}}^u$ intersecting $\tilde B(x_0, L + L_0(u))$ such that $\tilde\gamma_z \geq -K$ for all $z \in B_{\tilde G}(\tilde A,1)$ and $B^p_x = 1$ for all $x \in \tilde A \cap G$, as required by (8.15); therefore, the bound (8.16) follows. Thus, in order to establish the desired decay, assume that (8.18) occurs for some $u \in (0,u_0)$, $K \geq \log(C_{21}u^{-c_{21}})$, $p \geq 1 - C_{22}u^{c_{22}}$ and a positive integer $L$; we may assume that $L \geq C_3$. We now use Proposition 8.3 and a contour argument involving (8.17) to bound its probability. Note that the assumptions of Proposition 8.3 on the set of parameters $(L_0,u,K,p)$ are met for all $u \in (0,u_0)$, with $L_0$ as in (8.11), by our choice of constants. Define
$$A_L \;\stackrel{\mathrm{def.}}{=}\; B(x_0,L) \cup \{x \in G;\ x \leftrightarrow B(x_0,L) \text{ in the set of } (L_0(u),u,K,p)\text{-good vertices in } G\},$$
the set of vertices of $G$ either in, or connected to, $B(x_0,L)$ by a nearest-neighbor path of $(L_0(u),u,K,p)$-good vertices. Since (8.18) occurs, $A_L$ is finite. It is also connected, and $\delta(A_L) \geq C_3$.
Hence, since every vertex in $\partial_{\mathrm{ext}}A_L$ is $(L_0(u),u,K,p)$-bad, by (8.17) there exist $x \in \partial_{\mathrm{ext}}A_L$ and an $R_0$-path of $(L_0(u),u,K,p)$-bad vertices from $x$ to $B(x, c_6d(x,x_0)/2)^c$. Let $N = d(x,x_0)$; then $N \geq L$, and thus by a union bound the probability that the event (8.18) occurs is smaller than a sum which has stretched-exponential decay in $L$ by $(V_\alpha)$, (2.10) and Proposition 8.3.
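The omitted sum can plausibly be read as follows (a hedged reconstruction of the union bound, not a quotation of the original display): summing the estimate (8.13) of Proposition 8.3 over the possible starting points $x$, whose number at distance $N$ from $x_0$ is at most $CN^{\alpha}$ by $(V_\alpha)$ and (2.10), one obtains a bound of the form
$$\sum_{N \geq L} C N^{\alpha}\, C(u_0)\exp\big\{-\big(c_6N/(2L_0(u))\big)^{c(u_0)}\big\},$$
which indeed has stretched-exponential decay in $L$.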
Remark 8.5. One can replace (WSI) by the following weaker condition $(\widetilde{\mathrm{WSI}})$ and still retain a statement similar to Lemma 8.4. This is of interest in order to determine how little space (in $G$) one can afford to use in order for various sets, in particular $\mathcal{V}^u$ at small $u > 0$ in Theorem 1.2, to retain an unbounded component; see Theorem 8.8 and Remark 8.9, 5) below. We first introduce $(\widetilde{\mathrm{WSI}})$. Suppose that there exist an infinite connected subgraph $G_p$ of $G$, $\zeta > 0$, $R_0 \geq 1$, a vertex $x_0 \in G_p$ and $c_{23} > 0$ such that for all finite connected $A \subset G_p$ with $x_0 \in A$, there exist $x \in (\partial_{\mathrm{ext}}A) \cap G_p$ and an $R_0$-path from $x$ to $B(x, c_{23}\delta(A)^{\zeta})^c$ in $(\partial_{\mathrm{ext}}A) \cap G_p$, i.e., all the vertices of this path are in $(\partial_{\mathrm{ext}}A) \cap G_p$. It is easy to see that (WSI) implies $(\widetilde{\mathrm{WSI}})$ with $\zeta = 1$. Suppose now that, instead of (WSI), condition $(\widetilde{\mathrm{WSI}})$ holds for some subgraph $G_p$ of $G$. Then the conclusions of Lemma 8.4 leading to (8.15) still hold, and the set $\tilde A$ thereby constructed satisfies $\tilde A \subset \tilde B(G_p, 2C_{10}(L_0(u)+C_3))$. To see this, one replaces (8.17) by the following consequence of $(\widetilde{\mathrm{WSI}})$: there exist $R_0 \geq 1$, $x_0 \in G_p$ and $c > 0$ such that (8.17') holds for all finite connected subsets $A$ of $G_p$ with $x_0 \in A$. One then argues as above, with small modifications due to (8.17'), whence, in particular, the set $A_L$ needs to be replaced by
$$A_L(G_p) \;\stackrel{\mathrm{def.}}{=}\; \big(B(x_0,L) \cap G_p\big) \cup \{x \in G_p;\ x \leftrightarrow B(x_0,L) \cap G_p \text{ in the set of } (L_0(u),u,K,p)\text{-good vertices in } G_p\},$$
so that $A_L = A_L(G)$.
The bound (8.16) will be useful to prove that (1.10) holds, and we seek a similar result which roughly translates (1.11) to the world of random interlacements. This appears in Lemma 8.7 below. Its proof rests on the following technical result, which is a feature of the renormalization scheme.

Lemma 8.6. Assume $G$ satisfies (WSI), and recall the definition of $c_{18}$ from (8.2). For any $L_0 \geq C_3$, $K > 0$, $u > 0$ and $n \in \{0,1,2,\dots\}$, if there exists a vertex $x \in \Lambda^{L_0}_n$ which is $n$-$(L_0,u,K,p)$-good, then every two connected components of $B(x, 20c_{18}L_n)$ with diameter at least $c_{18}L_n$ are connected via a path of $(L_0,u,K,p)$-good vertices in $B(x, 30c_{18}C_{10}L_n)$.
Proof. We use induction on $n$. For $n = 0$, if $x$ is $0$-$(L_0,u,K,p)$-good, then in view of (8.3), (8.4) and Definition 7.4, every path in $B(x, 20c_{18}L_0)$ is a path of $(L_0,u,K,p)$-good vertices and all the vertices in $B(x, 20c_{18}C_{10}L_0)$ are $(L_0,u,K,p)$-good, so the result follows directly from (3.4). Let us now assume that the conclusion of the lemma holds at level $n-1$ for some $n \geq 1$, and let
(8.19) $x$ be an $n$-$(L_0,u,K,p)$-good vertex.
Using Lemma 7.6 and the quantitative bounds derived earlier in this section, we infer from Lemma 8.6 the following estimate, tailored to our later purposes. In referring below to two (discrete) sets $S, S' \subset \tilde G$ connected by a (continuous) path $\tilde\pi$ in $\tilde G$, we mean that the two endpoints of $\tilde\pi$ are two vertices belonging to $S$ and $S'$, respectively.

Lemma 8.7. Assume $G$ satisfies (WSI) (in addition to (3.1)), and take $R = R_0$ from (WSI). Then for all $u_0 > 0$, $u \in (0,u_0)$, $x \in G$, $K > 0$ with $K \geq \log(C_{21}u^{-c_{21}})$, $p \in (0,1)$ with $p \geq 1 - C_{22}u^{c_{22}}$ and $L > 0$,
$$(8.24)\qquad Q^{u,p}\left(\begin{array}{l}\text{every two components of } \{y \in \tilde G;\ \varphi_y \geq -\sqrt{2u}\} \cap \tilde B(x,L) \text{ with diameter at least } L/10\\ \text{are connected via a path } \tilde\pi \text{ in } \{z \in \tilde G;\ \varphi_z \geq -\sqrt{2u}\} \cap \tilde B(x, 2C_{10}L) \text{ such that}\\ \tilde\gamma_z \geq -K \text{ for all } z \in B_{\tilde G}(\tilde\pi,1) \text{ and } B^p_y = 1 \text{ for all } y \in \tilde\pi \cap G\end{array}\right) \geq 1 - C(u,u_0)\exp\{-L^{c(u,u_0)}\}.$$

Proof. Let us call $E^{u,n}_{L_0}$ the complement of the event on the left-hand side of (8.25). On the event $E^{u,n}_{L_0}$, fix any two components $\tilde U^u_1$ and $\tilde U^u_2$ of $\{y \in \tilde G;\ \varphi_y \geq -\sqrt{2u}\} \cap \tilde B(x, 20c_{18}L_n)$ with diameter at least $c_{18}L_n$, which are thus connected by a nearest-neighbor path $(x_0,\dots,x_p)$ of $(L_0,u,K,p)$-good vertices in $B(x, 30c_{18}C_{10}L_n)$. We may assume that $\varphi_\cdot > -\sqrt{2u}$ on $\tilde U^u_1$ and $\tilde U^u_2$. We now argue that (8.26) holds if $E^{u,n}_{L_0}$ occurs. We distinguish two cases. On $\tilde U^u_1$ we have $\varphi_\cdot > -\sqrt{2u}$, and thus by (5.20) either i) every $y \in \tilde U^u_1 \cap \tilde B(x_0, L_0/2)$ is in $(\tilde{\mathcal{C}}^u_\infty)^c$, and then $\tilde U^u_1 \cap \tilde B(x_0, L_0/2)$ is contained in a connected component of $\{y \in \tilde G;\ \tilde\gamma_y > 0\}$ of diameter at least $L_0/4$, which must intersect $\tilde{\mathcal{I}}^u$ by (iii) of Definition 7.4, whence (8.26) holds; or ii) there exists $y \in \tilde U^u_1 \cap \tilde B(x_0, L_0/2)$ which is in $\tilde{\mathcal{C}}^u_\infty$, and then, if $y \in \tilde{\mathcal{I}}^u$ or if there exists a path in $\{z \in \tilde G;\ |\tilde\gamma_z| > 0\} \cap \tilde B(x_0, L_0)$ connecting $y$ to $\tilde{\mathcal{I}}^u$, (8.26) immediately follows.
Otherwise, $\tilde{\mathcal{C}}^u_\infty \cap \tilde B(x_0,L_0)$ contains a connected component of $\{z \in \tilde G;\ |\tilde\gamma_z| > 0\}$ not intersecting $\tilde{\mathcal{I}}^u$, whose intersection with $\tilde B(x_0,L_0)$ is contained in a connected component of $\{y \in \tilde G;\ \tilde\gamma_y > 0\}$ or of $\{y \in \tilde G;\ \tilde\gamma_y < 0\}$ having diameter at least $L_0/4$, which again is excluded by (iii) of Definition 7.4. Thus, (8.26) holds in all cases. Similarly, $\tilde U^u_2$ is connected to $\tilde{\mathcal{I}}^u \cap \tilde B(x_p,L_0)$ by a continuous path in $\{z \in \tilde G;\ \varphi_z \geq -\sqrt{2u}\} \cap \tilde B(x_p,L_0)$. With this observation, by (5.4) and Lemma 7.6, on the event $E^{u,n}_{L_0}$ we can thus find a continuous path $\tilde\pi$ in $\tilde B(x, 30c_{18}C_{10}L_n + 2C_{10}(L_0+C_3)) \subset \tilde B(x, 31c_{18}C_{10}L_n)$ between $\tilde U^u_1$ and $\tilde U^u_2$ such that $\varphi_z \geq -\sqrt{2u}$ for all $z \in \tilde\pi$, $\tilde\gamma_z \geq -K$ for all $z \in B_{\tilde G}(\tilde\pi,1)$ and $B^p_y = 1$ for all $y \in \tilde\pi \cap G$. Therefore, for all $L \geq L_0(u)$, taking $n \in \mathbb{N}$ and $L_0 \in \{L_0(u),\dots,l_0L_0(u)\}$ such that $L = l_0^nL_0$, the probability in (8.24) is larger than
$$Q^{u,p}\big(E^{u,n}_{L_0}\big) \overset{(8.25)}{\geq} 1 - 4 \times 2^{-2^n} \geq 1 - C(u,u_0)\exp\{-L^{c(u,u_0)}\}.$$
We now deduce Theorem 1.1 from Proposition 5.6 and Lemmas 8.4 and 8.7.
Proof of Theorem 1.1. We first show that there exists $h_1 > 0$ such that (1.10) holds for all $h \leq h_1$. Recall that $\lambda_x \leq C_4$ for all $x \in G$, see (2.10). We now specify the range of values of $u > 0$ and $p \in (0,1)$ for which we will consider $Q^{u,p}$, cf. (5.34). Fix an arbitrary reference level $u_0 > 0$, say $u_0 = 1$, and choose $u_1 \in (0,u_0)$ such that, for all $0 < u \leq u_1$, the first condition in (8.27) holds and there exists $p \in (0,1)$ satisfying the second, where we recall that $F$ denotes the cumulative distribution function of a standard normal distribution. Note that $u_1$ with the desired properties exists, as one sees by considering the limit as $u \downarrow 0$ and using the standard bound $F(x) \geq 1 - \frac{1}{\sqrt{2\pi}\,x}\exp\{-\frac{x^2}{2}\}$ for all $x > 0$ in the second line. For a given $u \in (0,u_1]$, we then select any specific value of $K = K(u)$ and $p = p(u)$ satisfying the constraints in (8.27), and henceforth refer to these values when writing $K$ and $p$. By (8.27), Lemma 8.4 applies for each $u \in (0,u_1]$ (with $K - \sqrt{2u}$ in place of $K$), yielding $Q^{u,p}$-a.s. the existence of an unbounded cluster $\tilde A \subset \tilde G$ with the respective properties. Setting $A \stackrel{\mathrm{def.}}{=} \tilde A \cap G$ and recalling $S_K$ from (5.39) and (5.40), we infer $A \subset (\mathcal{I}^u \cap S_K)$ by (8.15). Moreover, for $u \in (0,u_1]$, by (8.15), $Q^{u,p}$-a.s., $A$ is a subset of $X^{u,K,p}$ (cf. (5.36) and (5.40) for the choice of $X^{u,K,p}_x$, $x \in G$), and $p$ satisfies the constraint in (5.36) on account of (8.27) and (2.10). All in all, we thus obtain that for each $u \in (0,u_1]$, $Q^{u,p}$-a.s.,
(8.28) $\mathcal{I}^u \cap S_K \cap X^{u,K,p}$ contains an infinite cluster $C_\infty\,(\supset A)$, and (5.35) holds.
The properties of $p$ entailed by the latter parts of (8.28) and (8.27) also imply that Proposition 5.6 applies for $u \in (0,u_1]$; thus, in view of (5.42), there exists $Q^{u,K,p}$-a.s. an infinite cluster $\bar C_\infty \subset \mathcal{I}^u \cap H^{u,K,p}$ corresponding to $C_\infty$ in (8.28). In particular, (1.10) holds for $h_1 \stackrel{\mathrm{def.}}{=} \sqrt{2u_1}$, and by monotonicity for all $h \leq h_1$. It is clear from (8.24) that (1.11) holds for all $h < 0$, and we now argue that it also holds for all $h \in [-h_1,h_1]$. This will follow from an application of Lemma 8.7 (with $K - \sqrt{2u}$ in place of $K$), whose assumptions are met on account of (8.27). The event in (8.24) implies the existence of a continuous path $\tilde\pi$ with $(\tilde\pi \cap G) \subset (R^u \cap S_K \cap X^{u,K,p})$ connecting any two components of $E^{\geq -\sqrt{2u}}_\varphi \cap B(x,L)$ with diameter at least $L/10$ inside $B(x, 2C_{10}L)$. Applying Lemma 8.7 and Proposition 5.6, we thus obtain, in view of (5.42), for all $u \in (0,u_1]$,
$$(8.30)\qquad Q^{u,K,p}\left(\begin{array}{l}\exists \text{ connected components of } E^{\geq -\sqrt{2u}}_\varphi \cap B(x,L) \text{ with diameter at least } L/10\\ \text{which are not connected via a nearest-neighbor path } \pi \text{ in } R^u \cap H^{u,K,p} \cap B(x, 2C_{10}L)\end{array}\right) \leq C(u)\exp\{-L^{c(u)}\}.$$
This inequality, together with (5.44), implies (1.11) for any $h \in [-\sqrt{2u},\sqrt{2u}]$ and any $u \in (0,u_1]$, and thus (1.11) holds for all $h \in [-h_1,h_1]$. In view of (1.9), the statement (1.13) follows.
We now continue with the proof of Theorem 1.2.
Proof of Theorem 1.2. We continue with the same setup as in the proof of Theorem 1.1. In particular, $u_1$ is defined by (8.27), $K$ and $p$ are given functions of $u$ for every $u \in (0,u_1]$, and (8.28) holds. We now show that Theorem 1.2 holds with $u \stackrel{\mathrm{def.}}{=} u_1$ and $Q^u \stackrel{\mathrm{def.}}{=} Q^{u,K,p}$ as given by Proposition 5.6, for every $u \in (0,u_1]$, upon choosing (under $Q^u$)
(8.31) $\mathcal{I} = \mathcal{I}^u$, $\mathcal{K} = H^{u,K,p}$ and $\mathcal{V} = \mathcal{V}^u$.
With the choices (8.31), part i) of (1.17) follows plainly from (5.42) and (5.43), and part ii) is a consequence of (5.43), noting that $\mathcal{I}^u$ and $S_K \cap X^{u,K,p}$, with $X^{u,K,p}$ coming from (5.36), are independent under $Q^{u,p}$, which follows from (5.34) on account of (5.39). Finally, the inclusion $\mathcal{I} \cap \mathcal{K} \subset \mathcal{V}$ holds by (5.44), and $\mathcal{I} \cap \mathcal{K}$ contains an infinite cluster $C_\infty$, see below (8.28). This completes the proof.
As the perceptive reader will have noticed, the inclusion in part iii) of Theorem 1.2 can be somewhat strengthened to a statement of the form $(\mathcal{I} \cap \mathcal{K}) \subset (\mathcal{V} \cap \mathcal{K}')$ with $\mathcal{K}'$ independent of $\mathcal{V}$, by taking into account the effect of $E^{\geq 0}_{\tilde\gamma}$ in (5.44); cf. also (5.43), (8.31) and (5.34) regarding the asserted independence.
The sole existence of an infinite cluster, without the local connectivity picture entailed in (1.11), can be obtained under the slightly weaker geometric assumption $(\widetilde{\mathrm{WSI}})$ from Remark 8.5. We record this in the following theorem.

Theorem 8.8. Under the assumptions (3.1) and $(\widetilde{\mathrm{WSI}})$ on $G$, there exists $h_1 > 0$ such that for all $h \leq h_1$, (1.10) holds for some $x \in G$, and there exists a.s. an infinite connected component in $E^{\geq h} \cap B(G_p, CL_0(h^2/2))$ and in $\mathcal{V}^{h^2/2} \cap B(G_p, CL_0(h^2/2))$, with $L_0(\cdot)$ given by (8.11). In particular, $h_* > 0$ and $u_* > 0$.
Proof. One adapts the argument leading to (1.10) in the proof of Theorem 1.1, replacing the use of Lemma 8.4 by the corresponding result obtained under the weaker assumption $(\widetilde{\mathrm{WSI}})$, as described in Remark 8.5. We omit further details.
We conclude with several comments.

Remark 8.9. 1) In [19], on $\mathbb{Z}^d$, $d \geq 3$, a slightly different parameter $h_1$ is introduced, since only super-polynomial decay in $L$ is required in the conditions corresponding to (1.10) and (1.11), and in [59] yet another parameter $h_2$ is introduced by allowing the addition of a small sprinkling parameter $h$ to connect together the large paths of $E^{\geq h}$. However, it is clear that $h \leq h_1 \leq h_2$, and so the parameters $h_1$ and $h_2$ are also positive as a consequence of Theorem 1.1.
2) Looking at the proof of Theorem 1.2, one sees that for $u$ small enough, the set $\mathcal{K}$ can be taken with the same law under $Q^u$ as $S_K \cap X^{u,K,p}$ under $Q^{u,p}$, for some $K > 0$ and $p \in (0,1)$ as in (8.27), where $S_K$ is defined in (5.39) and (5.40), and $X^{u,K,p}$ in (5.36) and (5.40). Replacing the event $C^{L_0,p}_x$ in Definition 7.4 by the increasing event which occurs if and only if $\varphi_z \geq -K$ for all $z \in \tilde B(x, 2C_{10}(L_0+C_3)+C_3)$, and the event $F^{L_0,p}_x$ by the decreasing event which occurs if and only if $\varphi_z \leq K$ for all $z \in \tilde B(x, 2C_{10}(L_0+C_3)+C_3)$, one can show as in Lemma 8.4 that there exists a connected and unbounded set $\tilde A \subset \tilde G$ such that $\tilde A \subset \tilde{\mathcal{I}}^u$ and $|\varphi_z| \leq K$ for all $z \in B_{\tilde G}(\tilde A,1)$.
Therefore, adapting the proof of Theorem 1.2, one can take $\mathcal{K}$ with the same law under $Q^u$ as $S_K \cap X^{u,K,p}$ under $Q^{u,p}$, for some $K > 0$ and $p \in (0,1)$ as in (8.27), where $S_K$ is defined in (5.25) and (5.40), and $X^{u,K,p}$ in (5.38) and (5.40), or with the same law as $\{x \in G;\ |\varphi_z| \leq K \text{ for all } z \in U_x\}$, and i) and iii) in (1.17) still hold. This choice of $\mathcal{K}$ has a simple expression and would suffice for the purpose of proving $h_* > 0$ and $u_* > 0$, but has the disadvantage of not being independent of $\mathcal{I}$. Independence, however, is expected to be useful for future applications.
3) Looking at the proof of Theorem 8.8, we have in reality shown the following statement: for all $h \leq h_1$,
$$P^G\left(\begin{array}{l}\exists \text{ connected components of } E^{\geq -h} \cap B(x,L) \text{ with diameter at least } L/10\\ \text{which are not connected in } E^{\geq h} \cap B(x, 2C_{10}L)\end{array}\right) \leq Ce^{-L^c},$$
which is stronger than (1.11); in particular, any component of $E^{\geq -h} \cap B(x,L)$ with diameter at least $L/10$ intersects $E^{\geq h}$ with very high probability.

4) A direct consequence of (8.16) and (8.24) is that we have strong percolation as in (1.9) for the level sets $\tilde E^{>h}$, see (5.1), for all $h < 0$, in the sense that (1.10) and (1.11) hold, but for the level sets $\tilde E^{>h}$ of the Gaussian free field on the cable system $\tilde G$ instead of the graph $G$. Moreover, the critical parameter $\tilde h_*$ for percolation of the continuous level sets $\tilde E^{>h}$ is exactly equal to $0$ by Proposition 5.2, and thus the strongly percolative phase consists of the entire supercritical phase for the Gaussian free field on the cable system; i.e., introducing the analogue of the parameter in (1.9), but putting "tildes everywhere" in (1.10) and (1.11), one arrives at the conclusion that this parameter equals $\tilde h_* = 0$. This result can also be proved without condition (WSI). Indeed, by (5.4), (3.11) and the definition of random interlacements, the probability that $\tilde E^{>-\sqrt{2u}} \cap \tilde B(x,L)$ does not contain a connected component of diameter at least $L/10$ has stretched-exponential decay in $L$ for any $u > 0$. Moreover, by Corollary 5.3, any connected component of $\{z \in \tilde G;\ \varphi_z > -\sqrt{2u}\} \cap \tilde B(x,L)$ either intersects $\tilde{\mathcal{I}}^u$ or is a connected component of $\{z \in \tilde G;\ \tilde\gamma_z > 0\}$ not intersecting $\tilde{\mathcal{I}}^u$. Since $\tilde{\mathcal{I}}^u$ and $\tilde\gamma$ are independent under $Q^{u,p}$, the probability that $\tilde{\mathcal{I}}^u$ does not intersect a component of $\{z \in \tilde G;\ \tilde\gamma_z > 0\}$ with diameter at least $L/10$ has stretched-exponential decay by Lemma 3.2 and (3.10). Therefore, with high enough probability, any connected component of $\{z \in \tilde G;\ \varphi_z > -\sqrt{2u}\} \cap \tilde B(x,L)$ with diameter at least $L/10$ intersects $\tilde{\mathcal{I}}^u$, and the strong connectivity of $\tilde E^{\geq -\sqrt{2u}}$ then readily follows from Proposition 4.1.
5) Looking at Theorem 8.8, we have in fact proved that if $(\widetilde{\mathrm{WSI}})$ holds for some subgraph $G_p$ of $G$, then there exists $0 < h_1 \leq h_*$ such that for all $h < h_1$ there exists $L > 0$ with
$$P^G\big(\text{there exists an infinite connected component in } E^{\geq h} \cap B(G_p, L)\big) = 1.$$
It then follows by (5.18) that the same is true for $\mathcal{V}^u$, i.e., there exists $0 < u_1 \leq u_*$ such that for all $u < u_1$ and some $L > 0$,
$$P^I\big(\text{there exists an infinite connected component in } \mathcal{V}^u \cap B(G_p, L)\big) = 1.$$
If $G = G_1 \times G_2$, we may choose $G_p = P_1 \times P_2$, a half-plane, where $P_1$ and $P_2$ are two semi-infinite geodesics in $G_1$ and $G_2$, respectively. Hence, we obtain that $E^{\geq h}$ and $\mathcal{V}^u$ percolate in thick planes $B(G_p, L)$ for $h > 0$ and $u > 0$ small enough. If $\nu > 1$, then $\mathcal{V}^u$ actually percolates in the plane $G_p$ for $u$ small enough, see Remark 7.2, 2), and in Theorem 5.1 of [56] it is shown that this is also true if $\nu = 1$ and $G_1 = \mathbb{Z}$. Whether this remains true for $\nu < 1$ is still unclear, and an interesting open question.
6) The existence of a non-trivial supercritical phase for Bernoulli percolation (and other models) is proved in [63] if $G$ satisfies the volume upper bound of $(V_\alpha)$ and a local isoperimetric inequality. The proof involves events similar to those considered in (1.11), and it is possible that our condition (WSI) could be replaced by this local isoperimetric inequality, which would for example cover the case of the Menger sponge, see Remark 3.8, 3). However, one would then need to take a super-geometric scale in our renormalization scheme (7.2), and would thereby lose the stretched-exponential decay in (1.10) and (1.11).

7) One may also inquire whether a phase coexistence regime exists for percolation of $\{|\varphi| > h\}$ and $\{|\varphi| < h\}$, or similarly for the level sets $\{x \in G;\ \ell_{x,u} > \alpha\}$ of the local times of random interlacements, with $u > 0$, $\alpha \geq 0$, considered in [46]. For instance, regarding the latter: is it possible, for all $\alpha > 0$, to find $u \geq 0$ such that percolation of the local times at level $u$ above and below $\alpha$ occurs simultaneously?

8) Finally, it would be desirable to have a conceptual understanding of the mechanism that lurks behind the percolation above small enough levels $h \geq 0$ for the discrete level sets $E^{\geq h}$ (as opposed to their continuous counterparts $\tilde E^{\geq h}$, cf. 4) above). Our current techniques are based on stochastic comparison, see Lemma 5.5 and Proposition 5.6, but the induced couplings suggest that one should be able to exhibit these features as a property of $\varphi$ itself, without resorting to additional randomness.
In order to prove Proposition 3.3, ii) we first need the following bounds on the expected time at which the random walk Z on G leaves a ball.
Lemma A.1. There exist constants $0 < c_{24} \leq C_{24} < \infty$, depending only on $G$, such that for all $x \in G$ and $R \geq 1$,
$$c_{24}R^{\beta} \leq E_x\big[T_{B(x,R)}\big] \leq C_{24}R^{\beta}.$$
For the lower bound, we can assume w.l.o.g. that $R$ is large, and we write
$$E_x\big[T_{B(x,R)}\big] = \sum_{y \in B(x,R)}\lambda_y\, g_{B(x,R)}(x,y) \geq c\sum_{y \in B(x,R/2)\setminus\{x\}}\lambda_y\, d(x,y)^{-\nu} \overset{(V_\alpha)}{\geq} cR^{\alpha-\nu}.$$
We now follow the proof of Proposition 4.33 in [4]. The bounds in Lemma A.1 on the expected exit time from a ball give us the following lemma, which is the first step in the proof of Proposition 3.3, ii).
Lemma A.2. There exist constants $C_{25} > 0$ and $c_{25} > 0$, depending only on $G$, such that for all $x \in G$ and $R > 0$,
$$P_x\big(T_{B(x,R)} > C_{25}R^{\beta}\big) \geq c_{25}.$$
Proof. Take $C_{25} = (c_{24}\wedge 1)/4$. Fix $x \in G$ and $R > 0$; we may assume w.l.o.g. that $C_{25}R^{\beta} \geq 1/2$ (and then $R \geq 1$). We first remark that, by Lemma A.1, for all $y \in B(x,R)$,
$$E_y\big[T_{B(x,R)}\big] \leq E_y\big[T_{B(y,2R)}\big] \leq C_{24}(2R)^{\beta}.$$
Let us write $n = C_{25}R^{\beta}$. An application of the Markov property of $Z$ at time $n$ gives (A.2). On the other hand, Lemma A.1 gives (A.3), and combining (A.2) and (A.3) allows us to conclude.
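The displays (A.2) and (A.3) are not reproduced above; for orientation, here is a hedged sketch of the standard computation they implement (cf. Proposition 4.33 in [4]), writing $T = T_{B(x,R)}$ and taking the integer time $n = \lceil C_{25}R^{\beta}\rceil \leq 2C_{25}R^{\beta}$ (which holds since $C_{25}R^{\beta} \geq 1/2$). The Markov property at time $n$ gives
$$E_x[T] \leq n + E_x\big[\mathbf{1}_{\{T > n\}}\,E_{Z_n}[T]\big] \leq n + P_x(T > n)\sup_{y \in B(x,R)}E_y[T] \leq n + P_x(T > n)\,C_{24}(2R)^{\beta},$$
while Lemma A.1 gives $E_x[T] \geq c_{24}R^{\beta}$. Since $n \leq 2C_{25}R^{\beta} \leq \frac{c_{24}}{2}R^{\beta}$ by the choice $C_{25} = (c_{24}\wedge 1)/4$, rearranging yields
$$P_x(T > n) \geq \frac{c_{24}R^{\beta} - n}{C_{24}(2R)^{\beta}} \geq \frac{c_{24}}{2^{\beta+1}C_{24}} \;=:\; c_{25}.$$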
It is interesting to note that Lemma A.2 is the analogue of Proposition 3.3, ii) for $n = C_{25}R^{\beta}$, and we are going to use it iteratively, with the help of (2.8), to finish the proof of Proposition 3.3.
Proof of Proposition 3.3, ii). Let us fix $x \in G$, $r > 0$ and a positive integer $m$. We define recursively the sequence of stopping times $S_p$, $p \in \mathbb{N}$, by $S_0 = 0$ and, for all $p \geq 1$, $S_p = S_{p-1} + T_{B(Z_{S_{p-1}},r)} \circ \theta_{S_{p-1}}$, and (3.17) (with $C = 1$) then readily follows from (A.4) as long as $C_3^{-1}R < n < c_{26}R^{\beta}$. Finally, if $n < C_3^{-1}R$, then by (2.8), $B_G(x,n) \subset B(x,R)$ and the left-hand side of (3.17) is always $0$; and it is easy to find a constant $C$ large enough so that the right-hand side of (3.17) is always larger than $1$ whenever $n > c_{26}R^{\beta}$.
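For orientation, the chaining estimate behind the omitted display (A.4) can be sketched as follows; this is a hedged reconstruction of the standard argument, and the exact constants of (A.4) are not reproduced. By (2.8), each exit from a ball of radius $r$ moves the walk by at most $r + C_3$, so if $m(r+C_3) < R$ then $S_m \leq T_{B(x,R)}$. By Lemma A.2 and the strong Markov property, the increments $S_p - S_{p-1}$ stochastically dominate i.i.d. variables taking the value $C_{25}r^{\beta}$ with probability $c_{25}$ (and $0$ otherwise), whence, by a Chernoff bound for the binomial distribution,
$$P_x\Big(T_{B(x,R)} \leq \tfrac{c_{25}}{2}\,m\,C_{25}r^{\beta}\Big) \leq P_x\big(S_m \leq \tfrac{c_{25}}{2}\,m\,C_{25}r^{\beta}\big) \leq P\big(\mathrm{Bin}(m,c_{25}) \leq \tfrac{c_{25}}{2}m\big) \leq e^{-cm}.$$
Optimizing over $r$ (with $m \asymp R/r$) then produces a bound of the shape $P_x(T_{B(x,R)} \leq n) \leq C\exp\{-c(R^{\beta}/n)^{1/(\beta-1)}\}$, which is the form one expects in (3.17).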