Harnack inequality and one-endedness of UST on reversible random graphs

We prove that for recurrent, reversible graphs, the following conditions are equivalent: (a) existence and uniqueness of the potential kernel, (b) existence and uniqueness of harmonic measure from infinity, (c) a new anchored Harnack inequality, and (d) one-endedness of the wired uniform spanning tree. In particular this gives a proof of the anchored (and in fact also elliptic) Harnack inequality on the UIPT. This also complements and strengthens some results of Benjamini et al. (Ann Probab 29(1):1–65, 2001). Furthermore, we make progress towards a conjecture of Aldous and Lyons by proving that these conditions are fulfilled for strictly subdiffusive recurrent unimodular graphs. Finally, we discuss the behaviour of the random walk conditioned to never return to the origin, which is well defined as a consequence of our results.


1 Introduction

1.1 Background and main result
Let (G, o) be a random unimodular rooted graph, which is almost surely recurrent (with E(deg(o)) < ∞). The wired Uniform Spanning Tree (UST for short) on G is defined to be the unique weak limit of the uniform spanning tree on any finite exhaustion of the graph, with wired boundary conditions. The existence of this limit is well known, see e.g. [LP16]. (In fact, since the graph is assumed to be recurrent, the wired or free boundary conditions give the same weak limit.) The UST is a priori a spanning forest of the graph G, but since G is recurrent this spanning forest consists in fact a.s. of a single spanning tree, which we denote by T (see e.g. [Pem91]). We say that T is one-ended if the removal of any finite set of vertices A does not disconnect T into at least two infinite connected components. Intuitively, a one-ended tree consists of a unique semi-infinite path (the spine) to which finite bushes are attached.
The question of the one-endedness of the UST (or of the components of the UST, when the graph is not assumed to be recurrent) has been the focus of intense research ever since the seminal work of Benjamini, Lyons, Peres and Schramm [BLPS01]. Among many other results, these authors proved (in Theorem 10.1) that on every vertex-transitive graph, and more generally on a network with a transitive unimodular automorphism group, every component is a.s. one-ended unless the graph is itself roughly isometric to Z (in which case it and the UST are both two-ended). (This was extended by Lyons, Morris and Schramm [LMS08] to graphs that are neither transitive nor unimodular but satisfy a certain isoperimetric condition slightly stronger than uniform transience.) More generally, a conjecture attributed to Aldous and Lyons is that on every unimodular one-ended graph, every component of the UST is a.s. one-ended. This has been proved in the planar case in the remarkable paper of Angel, Hutchcroft, Nachmias and Ray [AHNR18] (Theorem 5.16), and in the transient case by results of Hutchcroft [Hut18, Hut16]. The conjecture therefore remains open in the recurrent case, which is the focus of this article.
Let us motivate further the question of the one-endedness of the UST. It can in some sense be seen as the analogue of the question of percolation at the critical value. To see this, note that when the UST is one-ended, every edge can be oriented towards the unique end, so that following the edges forward from any given vertex w, we have a unique semi-infinite path starting from w obtained by following the edges forward successively. Observe that this forward path necessarily eventually arrives at the spine and moves to infinity along it. Given a vertex v, we may define the past Past(v) of v to be the set of vertices w for which the forward path from w contains v; it is natural to view Past(v) as the analogue of a connected component in percolation. From this point of view, the a.s. one-endedness of the tree is equivalent to the finiteness of the past (i.e., of the connected component in this analogy) of every vertex, as anticipated. We further note that on a unimodular graph, the expected value of the size of the past is however always infinite, as shown by a simple application of the mass transport principle. This confirms the view that the past displays properties expected from a critical percolation model. In fact, Hutchcroft proved in [Hut20] that the two models have the same critical exponents in sufficiently high dimension.
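The mass transport computation showing that the expected size of the past is infinite can be sketched in one line (a sketch; here FwdPath(w) denotes the forward path of w described above, under the orientation towards the unique end):

```latex
% Transport unit mass from u to every vertex on its forward path:
%   f(G, u, v) := \mathbf{1}\{ v \in \mathrm{FwdPath}(u) \}.
% The mass transport principle for the unimodular pair (G, o) gives
\mathbb{E}\Big[\sum_{v \in v(G)} f(G, o, v)\Big]
   \;=\; \mathbb{E}\Big[\sum_{u \in v(G)} f(G, u, o)\Big],
% that is,
\mathbb{E}\big[\,|\mathrm{FwdPath}(o)|\,\big] \;=\; \mathbb{E}\big[\,|\mathrm{Past}(o)|\,\big],
% and the left-hand side is infinite since the forward path is a semi-infinite ray.
```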
In this paper we give necessary and sufficient conditions for the one-endedness of the UST on a recurrent, unimodular graph. These are, respectively: (a) existence of the potential kernel, (b) existence of the harmonic measure from infinity, and finally (c) an anchored Harnack inequality. Although we do not solve the remaining case of the Aldous–Lyons conjecture, we illustrate our results by showing that they give straightforward proofs of the aforementioned result of Benjamini, Lyons, Peres and Schramm [BLPS01] in the recurrent case (which is one of the most difficult aspects of the proof of the whole theorem, and is in fact stated as Theorem 10.6). We also apply our results to some unimodular random graphs of interest, such as the Uniform Infinite Planar Triangulation (UIPT) and related models of infinite planar maps, for which we deduce the Harnack inequality.
To state these results, we first recall the following definitions. Our results can be stated for reversible environments or reversible random graphs, i.e., random rooted graphs such that if X_0 is the root and X_1 the first step of the random walk conditionally given G and X_0, then (G, X_0, X_1) and (G, X_1, X_0) have the same law. As noted by Benjamini and Curien in [BC12], any unimodular graph (G, o) with E(deg(o)) < ∞ satisfies this reversibility condition after biasing by the degree of o. Conversely, any reversible random graph gives rise to a unimodular rooted random graph after unbiasing by the degree of the root. This biasing/unbiasing does not affect any of the results below, since they are almost sure properties of the graph. Note also that, again by results in [BC12], a rooted random graph whose law is stationary for random walk is in fact necessarily reversible. See also Hutchcroft and Peres [HP15] for a nice discussion and Aldous and Lyons [AL07] for a systematic treatment.
For a nonempty set A ⊂ v(G) we define the Green function by setting, for x, y ∈ v(G),

G_A(x, y) := E_x[ Σ_{k=0}^{T_A − 1} 1{X_k = y} ],    (1)

where T_A denotes the hitting time of A, and we let g_A(x, y) := G_A(x, y)/deg(y) denote the normalised Green function. (Note that due to reversibility, g_A(x, y) = g_A(y, x).) Let A_n be any sequence of finite sets of vertices such that d(A_n, o) → ∞ as n → ∞. Here, by d(A_n, o) we just mean the minimal distance of any vertex in A_n to o. It is natural to construct the potential kernel of the infinite graph G by an approximation procedure; we set

a_{A_n}(x, y) := g_{A_n}(y, y) − g_{A_n}(x, y).    (2)

In this manner, the potential kernel compares the number of visits to y, starting from x versus from y, until hitting the far away set A_n. We are interested in existence and uniqueness of limits of a_{A_n} as n → ∞. In this case we call the unique limit the potential kernel of the graph G. We will see that the existence and uniqueness of this potential kernel turns out to be equivalent to a number of very different looking properties of the graph.

We move on to harmonic measure from infinity. Let A be a fixed finite, nonempty set of vertices. Let µ_n(·) denote the harmonic measure on A, started from A_n, when we wire all the vertices in A_n together. The harmonic measure from infinity, if it exists, is the limit of µ_n (necessarily a probability measure on A).

Now let us turn to the Harnack inequality. We say that (G, o) satisfies an (anchored) Harnack inequality (AHI) if there exists an exhaustion (V_R)_{R≥1} of the graph (i.e. V_R is a finite subset of vertices and ∪_{R≥1} V_R = v(G)), and there exists a nonrandom constant C > 0, such that the following holds. For every function h : v(G) → R_+ which is harmonic except possibly at o, and such that h(o) = 0,

max_{x ∈ ∂V_R} h(x) ≤ C min_{x ∈ ∂V_R} h(x).    (3)

The word anchored in this definition refers to the fact that the exhaustion is allowed to depend on the choice of root o, and the functions are not required to be harmonic there. (As we show in Remark 6.14, a consequence of our results is that an anchored Harnack inequality automatically implies the Elliptic Harnack inequality (EHI) on a suitably defined sequence of growing sets.) We now state the main theorem.
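A worked example of definition (2), on the (two-ended) graph Z with unit conductances: take y = 0 and the symmetric truncation A_n = {−n, n}. Gambler's ruin estimates give

```latex
g_{A_n}(0,0) \;=\; \frac{1}{\deg(0)\, P_0(T_{A_n} < T_0^+)} \;=\; \frac{n}{2},
\qquad
g_{A_n}(x,0) \;=\; P_x(T_0 < T_{A_n})\, g_{A_n}(0,0) \;=\; \Big(1-\frac{|x|}{n}\Big)\frac{n}{2},
```

so that a_{A_n}(x, 0) = |x|/2 for every n. For the asymmetric choice A_n = {−n, 2n}, however, the limit is x/3 for x > 0 and 2|x|/3 for x < 0: on Z the limit along a given sequence exists but depends on (A_n), so uniqueness genuinely can fail; this is consistent with the fact that Z is two-ended (compare Corollary 3.12 below).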
Theorem 1.1. Suppose (G, o) is a recurrent reversible graph (or, equivalently after unbiasing by the degree of the root, (G, o) is recurrent and unimodular with E(deg(o)) < ∞). The following properties are equivalent.
(a) The pointwise limit of the truncated potential kernel a_{A_n}(x, y) exists and does not depend on the choice of A_n.
(b) The weak limit of the harmonic measure µ_n from A_n exists and does not depend on A_n.
(c) The graph G satisfies the anchored Harnack inequality (3).

(d) The uniform spanning tree T is a.s. one-ended.
Furthermore, if any of these conditions holds, a suitable exhaustion for the anchored Harnack inequality is provided by the sublevel sets of the potential kernel; see Sections 5 and 6.

1.2 Some applications
Strengthening of [BLPS01]. Before showing some applications of this result, let us point out that Theorem 1.1 complements and strengthens some of the results of Benjamini, Lyons, Peres and Schramm [BLPS01]. In that paper, the (easy) implication (d) implies (b) was noted. We therefore in particular obtain a converse. One can furthermore easily see using their results that on any recurrent planar graph with bounded face degrees (e.g., any recurrent triangulation), (d) holds, i.e., the uniform spanning tree is a.s. one-ended: indeed, for such a graph, there is a rough embedding from the planar dual to the primal, which is assumed to be recurrent, and therefore the planar dual must be recurrent too by Theorem 2.17 in [LP16]. By Theorem 12.4 in [BLPS01] this implies that the uniform spanning tree (on the primal) is a.s. one-ended, and so (d) holds. (In fact, Theorem 5.16 in [AHNR18] shows that the bounded face degree assumption is not needed.)
Applications to planar maps. Therefore, in combination with [BLPS01], Theorem 1.1 above applies in particular to unimodular, recurrent triangulations such as the UIPT, or similar maps such as the UIPQ. This implies that these maps have a well-defined potential kernel, a harmonic measure from infinity, and satisfy the anchored Harnack inequality. As shown in Remark 6.14, this also implies the elliptic Harnack inequality (for sublevel sets of the potential kernel; see Theorem 6.2 for a precise statement). We point out that the elliptic Harnack inequality should not be expected to hold on usual metric balls, but can only be expected on growing sequences of sets which take into account the "natural conformal embedding" of these maps. This is exactly what the potential kernel and its sublevel sets allow us to do.
More general implications. We mention already that the equivalence between (a) and (b) is valid more generally, for instance for any locally finite, recurrent graph. The implication (a) ⇒ (c), i.e. the deduction of the anchored Harnack inequality, is then valid under the additional assumption that the potential kernel grows to infinity (something which we can prove assuming unimodularity). We recall that (d) implies (b) is also true for deterministic graphs, as proved in [BLPS01].
Remark 1.2. Many of the arguments in this article are valid for deterministic graphs. The unimodularity (or reversibility) of the graph with respect to random walk is only used in Lemma 5.4, whose main use is to show that the potential kernel, if it exists, diverges to infinity along any sequence going to infinity (see Lemma 5.5). This property is used for instance in both directions of the relations between (c) and (d), since both go via (a). The unimodularity (or stationarity) is also used to prove that the walks conditioned not to return to the origin satisfy the infinite intersection property, a key aspect of the proof of one-endedness. Finally, it is also used to show that if there is a bi-infinite path in the UST then it must essentially be almost space-filling, which is the other main argument of the proof of one-endedness.
Deterministic case of the Aldous–Lyons conjecture. As previously mentioned, Theorem 1.1 can be applied to give a direct proof of the one-endedness of the UST for recurrent vertex-transitive graphs not roughly isometric to Z, which is Theorem 10.6 in [BLPS01].
Corollary 1.3. Suppose G is a recurrent, vertex-transitive graph. If G is one-ended then the UST is also a.s. one-ended. Otherwise, G is roughly isometric to Z.
Proof. First note that the volume growth of the graph is at most polynomial (as otherwise the walk cannot be recurrent). By results of Trofimov [Tro85], the graph is therefore roughly isometric to a Cayley graph Γ. Since Γ is recurrent (as recurrence is preserved under rough isometries, see Theorem 2.17 and Proposition 2.18 of [LP16]), we deduce by a classical theorem of Varopoulos (see e.g. Theorem 1 and its corollary in [Var91]) that Γ is a finite extension of Z or Z^2 and is therefore (as is relatively easily checked) roughly isometric to one of these lattices. Since both of these lattices enjoy the Parabolic Harnack Inequality (PHI), which is, by a consequence of a result proved by Grigoryan [Gri91] and Saloff-Coste [SC92] independently, preserved under rough isometries (see also [CSC95]), we see that G itself satisfies PHI and therefore also the Elliptic Harnack Inequality (EHI): for any R > 1, if h is nonnegative and harmonic in the metric ball B(2R) of radius 2R around the origin, then sup_{B(R)} h(x) ≤ C inf_{B(R)} h(x). (In fact, by a deep recent result of Barlow and Murugan, EHI is now known directly to be stable under rough isometries [BM18], but here we can appeal to the much simpler stability of PHI. We recommend the following textbooks for related expository material: [Kum14], [Bar17] and [VSCC08].) Suppose that G is not roughly isometric to Z; then it is roughly isometric to Z^2. Let us show that G satisfies the anchored Harnack inequality (3), with the exhaustion sequence simply obtained by considering metric balls V_R = B(R). Let h be nonnegative and harmonic on G except possibly at o. Since G is roughly isometric to Z^2, we can cover ∂V_R with a fixed number (say K) of balls of radius R/10 such that the union of these balls is connected (here we used two-dimensionality). Given x, y ∈ ∂V_R, we can find x = x_0, . . ., x_K = y with d(x_i, x_{i+1}) ≤ R/10 and d(x_i, o) > 2R/10. Exploiting the EHI in each of the K balls B(x_i, 2R/10) inductively (since h is harmonic in each of these balls), we find that h(x) ≤ C^K h(y). Since x, y are arbitrary in ∂V_R, this proves the anchored Harnack inequality (3).
Subdiffusivity implies one-endedness. As an application of our results, we also show that the one-endedness of the UST holds for unimodular recurrent graphs if we in addition assume that they are strictly subdiffusive; that is, we settle the Aldous–Lyons conjecture in that case. (This encompasses many models of random planar maps, but can of course hold on more general graphs; see in particular [Lee17], recalled also in Remark 4.2, for sufficient conditions guaranteeing this.)
Theorem 1.4. Suppose (G, o) is unimodular with E(deg(o)) < ∞, almost surely recurrent, and strictly subdiffusive (i.e., satisfies (SD) below). Then the UST on G is a.s. one-ended.

This applies e.g. to the high-dimensional incipient infinite percolation cluster, as explained after Remark 4.2. The proof of Theorem 1.4 takes as an input the result of Benjamini, Duminil-Copin, Kozma and Yadin [BDCKY15], which shows that for strictly subdiffusive unimodular graphs there are no nonconstant harmonic functions of linear growth, and the trivial observation that the effective resistance between points is at most linear in the distance between these points. We believe it should be possible to use the same idea to prove the result assuming only diffusivity: to do this, it would suffice to prove that the effective resistance grows strictly sublinearly, except on graphs roughly isometric to Z.
Random walk conditioned to avoid the origin. The existence of the potential kernel allows us to define (by h-transform) a random walk conditioned to never touch a given point (even though this is of course a degenerate conditioning on recurrent graphs). We study some properties of the conditioned walk and show, among other things, that two independent conditioned walks must intersect infinitely often, a fact which plays an important role in the proof of Theorem 1.1 for the equivalence between (a) and (d). We conclude the article with a finer study of this conditioned walk on CRT-mated random planar maps. In this case we are able to show that the hitting probability of a point far away from the origin by the conditioned walk remains bounded away from 1 in the limit as the point diverges to infinity (and is bounded away from 0 for "almost all" such points). See Theorem 9.1 for a precise statement. We also discuss a conjecture (see (49)) which, if true, would show a significant difference of behaviour with respect to the more standard case of Z^2 (where these hitting probabilities converge to 1/2, as surprisingly shown in [Pop21]).
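To make the h-transform construction concrete, here is a minimal sketch (not from the paper) on G = Z. Since Z is two-ended its potential kernel is not unique, but h(x) = |x| has exactly the properties the construction uses: h ≥ 0, h(0) = 0, and h harmonic off the origin. The Doob transform p̂(x, y) = p(x, y) h(y)/h(x) then defines the walk conditioned to avoid 0; the function name `conditioned_step_probs` is ours.

```python
from fractions import Fraction

def conditioned_step_probs(x):
    """One step of the Doob h-transform of simple random walk on Z with h(v) = |v|:
    p_hat(x, y) = p(x, y) * h(y) / h(x) for x != 0.
    Because h is harmonic off the origin, each row sums to 1."""
    assert x != 0
    h = lambda v: Fraction(abs(v))
    return {y: Fraction(1, 2) * h(y) / h(x) for y in (x - 1, x + 1)}

# From 1 the conditioned walk can never step to 0:
print(conditioned_step_probs(1))   # {0: Fraction(0, 1), 2: Fraction(1, 1)}
# Transition rows are genuine probability distributions:
for x in (2, 3, -5, 10):
    assert sum(conditioned_step_probs(x).values()) == 1
```

From x ≥ 1 the conditioned walk steps to x + 1 with probability (x + 1)/(2x) > 1/2, so on Z the conditioned walk is transient, in line with the degenerate nature of the conditioning.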

2 Background and notation
Before we begin with the proofs of our theorems, we need to introduce the main notation that we will use throughout this text.
A graph G consists of a countable collection of vertices v(G) and edges e(G) ⊂ {{x, y} : x, y ∈ v(G)}, and we will always assume that the vertex degrees are finite. We will work with undirected graphs, but will sometimes consider the set of directed edges e⃗(G) = {(x, y) : {x, y} ∈ e(G)}.
The graph G comes with a natural metric d(x, y), which is the graph distance, i.e. the minimal length of a path between two vertices x and y. For n ∈ N, we will denote by B(x, n) = {y ∈ v(G) : d(x, y) ≤ n} the metric ball of radius n around x. For a set A ⊂ v(G), we will write ∂A for its outer boundary in v(G), that is, ∂A = {x ∈ v(G) \ A : there exists a y ∈ A with x ∼ y}.
We will make extensive use of the graph Laplacian, which we normalise as follows: for functions f : v(G) → R,

∆f(x) := Σ_{y∼x} c(x, y)(f(y) − f(x)),    (4)

where c(x, y) is the conductance of the edge (x, y), which is typically equal to one in this paper, except in Section 7 where we consider the random walk conditioned to avoid the origin forever. A function f is called harmonic at x if ∆f(x) = 0. We let (X_n)_{n≥0} denote the simple random walk on G, with its law written as P, and P_x to mean P(· | X_0 = x). For a set A ⊂ v(G), we define the hitting time T_A = inf{n ≥ 0 : X_n ∈ A} and T_x := T_{{x}} whenever A = {x} consists of just one element. We will write T_A^+ for the first return time to a set A. Suppose that G is a connected graph. The effective resistance between a vertex x and a set A is defined through

R_eff(x ↔ A) := g_A(x, x)    (5)

(recall our normalisation of the Green function G in (1)). Recall the useful identity

R_eff(x ↔ A) = 1/(deg(x) P_x(T_A < T_x^+)).    (6)

The proof is obvious from the definition of effective resistance and our normalisation of the Green function once we use the obvious identity G_A(x, x) = 1/P_x(T_A < T_x^+), which can be seen by considering the number of excursions from x to x before T_A, which is a geometric random variable by the Markov property.
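The identity R_eff(x ↔ y) = 1/(deg(y) P_y(T_x < T_y^+)) can be checked numerically on small graphs. Below is a self-contained sketch (helper names `hit_prob` and `r_eff` are ours), which solves the discrete Dirichlet problem exactly with rational arithmetic and recovers, on the triangle K_3, the value 2/3 given by the series/parallel laws.

```python
from fractions import Fraction

def hit_prob(adj, ones, zeros):
    """h(v) = P_v(T_ones < T_zeros): h = 1 on `ones`, h = 0 on `zeros`,
    and h harmonic elsewhere; solved exactly by Gauss-Jordan elimination."""
    inner = [v for v in adj if v not in ones and v not in zeros]
    idx = {v: i for i, v in enumerate(inner)}
    m = len(inner)
    M = [[Fraction(0)] * (m + 1) for _ in range(m)]   # augmented system
    for v in inner:
        i = idx[v]
        M[i][i] = Fraction(len(adj[v]))
        for u in adj[v]:
            if u in idx:
                M[i][idx[u]] -= 1
            elif u in ones:
                M[i][m] += 1                          # boundary value 1
    for c in range(m):
        p = next(r for r in range(c, m) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    h = {v: Fraction(1) for v in ones}
    h.update({v: Fraction(0) for v in zeros})
    h.update({v: M[idx[v]][m] / M[idx[v]][idx[v]] for v in inner})
    return h

def r_eff(adj, x, y):
    """R_eff(x <-> y) = 1 / (deg(y) P_y(T_x < T_y^+)); the return probability
    is obtained by averaging the Dirichlet solution over the neighbours of y."""
    h = hit_prob(adj, {x}, {y})
    ret = sum(h[u] for u in adj[y]) / len(adj[y])     # P_y(T_x < T_y^+)
    return 1 / (len(adj[y]) * ret)

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(r_eff(triangle, 0, 1))   # 2/3
```

On a path of two unit edges the same code returns 2, the series formula.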
For infinite graphs G, we will say that a sequence of subgraphs (G_n)_{n≥1} of G is an exhaustion of G whenever G_n is finite for each n and v(G_n) ↑ v(G) as n → ∞. Fix some exhaustion (G_n)_{n≥1} of an infinite graph G and define the graph G*_n as G_n together with the identification of G_n^c into a single vertex, where we have deleted all self-loops created in the process. For two vertices x, y ∈ v(G) we recall that R_eff(x ↔ y; G) = lim_{n→∞} R_eff(x ↔ y; G*_n); see for instance [LP16, Section 9.1]. As is well known, the effective resistance defines a metric (see for instance Exercise 2.67 in [LP16]).
Later, we will often work with the metric R_eff(· ↔ ·) on v(G) instead of the standard graph distance. We introduce the notation B_eff(x, r) := {y ∈ v(G) : R_eff(x ↔ y) ≤ r} for the closed ball with respect to the effective resistance metric. Notice that, in general, this metric space is not a length space, making it somewhat inconvenient.
Another result that we will use a few times is the 'last exit decomposition', or rather two versions thereof, which can be proved similarly to [LL10, Proposition 4.6.4].
Lemma 2.1 (Last Exit Decomposition). Let G be a graph and A ⊂ B ⊂ v(G) be finite. Then for all x ∈ A and b ∈ ∂B we have

Moreover, for x ∈ B we have

3 Equivalence between (a) and (b)

3.1 Base case of the equivalence
We will say that a sequence of finite sets of vertices (A_n)_{n≥1} 'goes to infinity' whenever d(A_n, o) → ∞ as n → ∞. Here, by d(A_n, o) we just mean the minimal distance of any vertex in A_n to o.
Recall the definition of a_{A_n}, which also satisfies

a_{A_n}(x, y) = g_{A_n}(y, y) − g_{A_n}(x, y) = (1/deg(y)) · P_x(T_{A_n} < T_y) / P_y(T_{A_n} < T_y^+).    (7)

Clearly, both the numerator and the denominator tend to 0 as n tends to infinity, by recurrence of the underlying graph G. When a sequence of subsets A_n has been chosen, we will write a_n instead of a_{A_n}, with a small abuse of notation. The goal of this section is to prove the equivalence between (a) and (b) in Theorem 1.1 (in the base case where the set A consists of two points; this will be extended to arbitrary finite sets in Section 3.3). First, we show that subsequential limits of a_n always exist.
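For concreteness, the truncated kernel can be computed exactly on finite truncations of Z, using the representation of a_n(x, y) as the ratio P_x(T_{A_n} < T_y) / Σ_{u∼y} P_u(T_{A_n} < T_y), which appears in the proof of Lemma 3.1 below. The sketch (helper names `hit_prob`, `a_n`, `segment` are ours; exact rational arithmetic) illustrates both phenomena of interest: for the symmetric truncation A_n = {−n, n} one finds a_n(x, 0) = |x|/2 for every n, while an asymmetric truncation yields a different value, in line with the fact that the two-ended graph Z has no unique potential kernel.

```python
from fractions import Fraction

def hit_prob(adj, ones, zeros):
    """h(v) = P_v(T_ones < T_zeros): h = 1 on `ones`, h = 0 on `zeros`,
    and h harmonic elsewhere; solved exactly by Gauss-Jordan elimination."""
    inner = [v for v in adj if v not in ones and v not in zeros]
    idx = {v: i for i, v in enumerate(inner)}
    m = len(inner)
    M = [[Fraction(0)] * (m + 1) for _ in range(m)]   # augmented system
    for v in inner:
        i = idx[v]
        M[i][i] = Fraction(len(adj[v]))
        for u in adj[v]:
            if u in idx:
                M[i][idx[u]] -= 1
            elif u in ones:
                M[i][m] += 1                          # boundary value 1
    for c in range(m):
        p = next(r for r in range(c, m) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    h = {v: Fraction(1) for v in ones}
    h.update({v: Fraction(0) for v in zeros})
    h.update({v: M[idx[v]][m] / M[idx[v]][idx[v]] for v in inner})
    return h

def a_n(adj, A, x, y):
    """Truncated potential kernel via hitting probabilities:
    a_A(x, y) = P_x(T_A < T_y) / sum_{u ~ y} P_u(T_A < T_y)."""
    h = hit_prob(adj, A, {y})
    return h[x] / sum(h[u] for u in adj[y])

def segment(lo, hi):
    """Path graph on {lo, ..., hi} (a finite truncation of Z)."""
    return {k: [u for u in (k - 1, k + 1) if lo <= u <= hi] for k in range(lo, hi + 1)}

for n in (5, 9):   # symmetric truncation: a_n(x, 0) = |x|/2, independently of n
    vals = [a_n(segment(-n, n), {-n, n}, x, 0) for x in (1, 2, -3)]
    assert vals == [Fraction(1, 2), Fraction(1), Fraction(3, 2)]

print(a_n(segment(-9, 18), {-9, 18}, 3, 0))   # 1, not 3/2: the value depends on (A_n)
```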
Lemma 3.1. Let (A_n)_{n≥1} be some sequence of finite sets of vertices going to infinity. There exists a subsequence (n_k)_{k≥1} going to infinity such that for all x, y ∈ v(G) the limit a(x, y) := lim_{k→∞} a_{n_k}(x, y) exists in [0, ∞). Moreover, a(x, y) > 0 precisely when the removal of y from G does not disconnect x from A_{n_k} for all k large enough.
Proof. Fix y ∈ v(G) and suppose first that for u ∼ y we have P_u(T_{A_n} < T_y) > 0 for all n large enough (i.e., y does not disconnect a portion of the graph from infinity).
Let x ∈ v(G) and fix n so large that A_n does not contain y, x or any of the neighbours of y. For each u ∼ y, we can force the random walk started from x to go through u before touching A_n or y, to get

P_x(T_{A_n} < T_y) ≥ P_x(T_u < T_{A_n} ∧ T_y) P_u(T_{A_n} < T_y).    (8)
Upon taking u ∼ y which maximizes P_u(T_{A_n} < T_y), and by recurrence of G, we get the existence of c(x, y) > 0 for which

a_n(x, y) = P_x(T_{A_n} < T_y) / Σ_{u∼y} P_u(T_{A_n} < T_y) ≥ c(x, y).    (9)

The same reasoning as in (8), but in the other direction, gives an upper bound. Hence, using again recurrence of G, we get that there is some C(x, y) < ∞ such that (upon taking the right u) a_n(x, y) ≤ C(x, y). We deduce that for fixed x, y, subsequential limits of a_n(x, y) exist, and the existence of subsequential limits for all x, y simultaneously follows from diagonal extraction.
The existence of subsequential limits in the general case is the same, as we can always lower bound a_n(x, y) by 0 and the upper bound does not change. Now, if x ∈ v(G) is such that the removal of y disconnects x from A_{n_k}, then a_{n_k}(x, y) = 0. Suppose thus that x is such that the removal of y does not disconnect x from A_{n_k}, for all k large enough. In this case, we can restrict ourselves to just the component of G with y removed in which both A_{n_k} and x lie, as the hitting probabilities are the same in this case. Hence, we are back in the situation above, and a_{n_k}(x, y) ≥ c(x, y) > 0.
We next present a result which shows that any subsequential limit appearing in Lemma 3.1 must satisfy a certain number of properties.

Proposition 3.2. Let a(x, y) be any subsequential limit as in Lemma 3.1. Then a : v(G)^2 → R_+ satisfies:

(i) for each y ∈ v(G), ∆a(·, y) = δ_y(·) and a(y, y) = 0, where we recall that ∆ is defined in (4) and is normalised so that ∆f(x) = Σ_{y∼x} (f(y) − f(x));
(ii) for all x, y ∈ v(G) we have

a(x, y) = lim_{k→∞} P_{A_{n_k}}(T_x < T_y) · R_eff(x ↔ y),

where P_A refers to the law of a random walk starting from A, when all of the vertices in A have been wired together.
The equivalence between (a) and (b) of Theorem 1.1 (in the base case where the finite set B on which we need to define harmonic measure consists of two points) is then obvious, and we collect it here:

Corollary 3.3. Let G be a recurrent graph. Then hm_{x,y}(x) := lim_{n→∞} P_{A_n}(T_x < T_y) exists for all x, y ∈ v(G) and is independent of the sequence (A_n)_n if and only if the potential kernel is uniquely defined. Furthermore, in this case, a(x, y) = hm_{x,y}(x) R_eff(x ↔ y).
Proof of Proposition 3.2. The proof of item (i) is rather elementary. Fix y ∈ v(G) and n ≥ 1. Since x ↦ P_x(T_{A_n} < T_y) is a harmonic function outside of y and A_n by the simple Markov property, we get that x ↦ a_n(x, y) is harmonic outside y and A_n; see (7). It follows that x ↦ a(x, y) is harmonic at least away from y. Furthermore, note that a_n(y, y) = 0 by definition, and

Σ_{u∼y} a_n(u, y) = Σ_{u∼y} P_u(T_{A_n} < T_y) / Σ_{u∼y} P_u(T_{A_n} < T_y) = 1,

so ∆a_n(·, y)|_{·=y} = 1. This finishes the proof of (i).
For part (ii), we notice first that, by properties of the electrical resistance,

which allows us to write

Identify the vertices in A_n and delete possible self-loops created in the process. The resulting graph G_n is then still recurrent. Let G^y(·, ·) denote the Green function on this graph when the walk is killed at y. We can also express the effective resistance in terms of the normalised Green function: that is,

Using the Markov property, and since G_n is reversible,

by using the same argument in the other direction, and

where the effective resistance in the last line is calculated in G_n.
Since the graph G is recurrent, it follows that R_eff(x ↔ y; G_n) converges to R_eff(x ↔ y; G) as n → ∞ (as the free and wired effective resistances agree). We deduce that

a(x, y) = lim_{k→∞} P_{A_{n_k}}(T_x < T_y) · R_eff(x ↔ y),

which finishes part (ii).
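The chain of identities in the proof above can be probed numerically: at each finite stage one expects a_n(x, y) = P_{A_n}(T_x < T_y) · R_eff(x ↔ y; G_n), where G_n is the graph with A_n wired into a single vertex, before any limit is taken. The following self-contained sketch (helper names `hit_prob`, `glue` are ours; exact rational arithmetic) confirms this on a truncation of Z.

```python
from fractions import Fraction

def hit_prob(adj, ones, zeros):
    """h(v) = P_v(T_ones < T_zeros): h = 1 on `ones`, h = 0 on `zeros`,
    and h harmonic elsewhere; solved exactly by Gauss-Jordan elimination."""
    inner = [v for v in adj if v not in ones and v not in zeros]
    idx = {v: i for i, v in enumerate(inner)}
    m = len(inner)
    M = [[Fraction(0)] * (m + 1) for _ in range(m)]   # augmented system
    for v in inner:
        i = idx[v]
        M[i][i] = Fraction(len(adj[v]))
        for u in adj[v]:
            if u in idx:
                M[i][idx[u]] -= 1
            elif u in ones:
                M[i][m] += 1                          # boundary value 1
    for c in range(m):
        p = next(r for r in range(c, m) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    h = {v: Fraction(1) for v in ones}
    h.update({v: Fraction(0) for v in zeros})
    h.update({v: M[idx[v]][m] / M[idx[v]][idx[v]] for v in inner})
    return h

def glue(adj, A, star="*"):
    """Wire all vertices of A into a single vertex `star`, dropping self-loops."""
    g = {star: []}
    for v, nbrs in adj.items():
        if v in A:
            g[star] += [u for u in nbrs if u not in A]
        else:
            g[v] = [star if u in A else u for u in nbrs]
    return g

n, x, y = 8, 0, 3
adj = {k: [u for u in (k - 1, k + 1) if -n <= u <= n] for k in range(-n, n + 1)}

# a_n(x, y) on the truncation, via the hitting-probability representation
h = hit_prob(adj, {-n, n}, {y})
a = h[x] / sum(h[u] for u in adj[y])

# hm_n = P_{A_n}(T_x < T_y) and R_eff(x <-> y; G_n), both on the glued graph
g = glue(adj, {-n, n})
h2 = hit_prob(g, {x}, {y})
hm = h2["*"]
ret = sum(h2[u] for u in g[y]) / len(g[y])        # P_y(T_x < T_y^+) in G_n
R = 1 / (len(g[y]) * ret)

print(a, hm * R)   # 15/16 15/16
assert a == hm * R
```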

3.2 Triangle inequality for the potential kernel
Before we start the proof of the remaining implications, we need some preliminary estimates on the potential kernel, showing that it satisfies a form of triangle inequality. This plays a crucial role throughout the rest of this paper. We also need a decomposition of the potential kernel in order to prove that, for reversible graphs, the potential kernel (if it is well defined) satisfies the growth condition.
We start with a simple and well known application of the optional stopping theorem.

Lemma 3.4. Let A be some finite set and suppose that x, y ∈ A. Then

Proof. This is Proposition 4.6.2 in [LL10], but we include it for completeness since its proof is simple. Let x, y ∈ A and notice that

is a martingale. Applying the optional stopping theorem and taking n → ∞, since A is finite, we deduce from dominated (resp. monotone) convergence that

showing the result.
Proposition 3.5. Let x, y, z ∈ v(G) be three vertices. We have the identity

Proof. Fix x, y, z ∈ v(G) and let (A_n)_{n≥1} be some sequence of finite sets of vertices going to infinity. Glue together A_n on the one hand, and the vertices of B(o, m)^c on the other hand. Delete all self-loops created in the process and write ∂_m for the vertex corresponding to B(o, m)^c. Let X̃ be the simple random walk on the graph obtained from gluing A_n and ∂_m. We define for w, w' ∈ B(o, m) ∪ {∂_m} the function

By recurrence and (9), we have that a_{n,m}(w, w') → a_n(w, w') as m → ∞, for all w, w'.
Fix n so large that x, y and z are not in A_n. Let m be so large that x, y, z and A_n are all contained in B(o, m). On the other hand, by definition of E_{m,n} we have

where a priori the hitting probabilities are calculated on the graph where A_n and ∂_m are glued. However, as we are only interested in the first hitting time of either of these sets, it does not matter, and we can calculate the probabilities also for the random walk on the graph G. Notice that, by definition, a_{n,m}(∂_m, y) = 0. Plugging this back into (11) we obtain

We have already observed that a_{n,m}(w, y) → a_n(w, y) for each w as m → ∞. Then, by recurrence of G and monotone convergence, we get

Next, we wish to take n → ∞. The left-hand side converges to a(x, y) as n → ∞, by definition of the potential kernel. The first term on the right-hand side converges to a(z, y) by the same argument and recurrence of the graph G. Using once more monotone convergence, we find

as n goes to infinity. We are left to deal with the term P_x(T_{A_n} < T_z) a_n(A_n, y), which we claim converges to a(x, z).
From the definition of a_n, together with the representation in (9), we find

Thus, using again the same representation of a_n(x, z), we see that

Using recurrence of G, we notice that

as n → ∞. In particular, we deduce that

as n → ∞. Plugging this, together with (13), back into (12) we conclude:

as desired.
Remark 3.6. Proposition 3.5 is an extension of results known for the lattice Z^2; see Proposition 4.6.3 in [LL10] and the discussion thereafter. As far as we know, those proofs are based on precise asymptotic behaviour of the potential kernel, a tool which we do not seem to have here.
Remark 3.7. The statement of Proposition 3.5 is also valid for an arbitrary subsequential limit a(·, ·) of a_n(·, ·), even when a proper limit is not known to exist. In particular, it shows that given such a subsequential limit a(·, y), there is a unique way to coherently define a(·, z). For this reason, if lim_{n→∞} a_n(x, y) is shown to exist for a fixed y and all x ∈ v(G), it follows that this limit exists for all x, y ∈ v(G) simultaneously. This will be used in Theorem 3.10.
Corollary 3.8. For each x, z ∈ v(G) and all ε > 0 there exists an N = N(ε, x, z) such that for all y with d(x, y) ≥ N we have |a(x, y) − a(z, y)| ≤ ε, and in particular lim_{n→∞} a(x, y_n) − a(z, y_n) = 0 for any sequence (y_n)_{n≥1} going to infinity.
Notice that Corollary 3.8 does not say that a(y_n, x) − a(y_n, z) → 0 as n → ∞ in general! Indeed, a similar argument shows that a(y_n, x) − a(y_n, z) → a(z, x) − a(x, z), which is nonzero in general.
Proof. Fix x, z ∈ v(G) and suppose by contradiction that there is some ε > 0 such that for infinitely many n ≥ 1 (in fact, with a small abuse of notation, we can assume for all n ≥ 1 after taking a subsequence), there is some y_n with d(x, y_n) ≥ n and |a(x, y_n) − a(z, y_n)| > ε. By Proposition 3.5 and deg(·)-reversibility of the simple random walk we have

Take A_n = {y_n} and recall (see e.g. (10)) that

Since this converges to zero as n → ∞, we get the desired contradiction.
We immediately deduce that the harmonic measures from infinity of {x, y} and {z, y} are very similar if y is far away from x and z.

Corollary 3.9. Fix x, z ∈ v(G). For every ε > 0, there exists an N = N(x, z, ε) such that for all y with d(x, y) ≥ N we have

Proof. This follows from Corollary 3.8, using the expression of Corollary 3.3.

3.3 Gluing and harmonic measure
We suppose throughout this section that the potential kernel is well defined, in the sense that the subsequential limits appearing in Lemma 3.1 are all equal. By Corollary 3.3, this implies that the harmonic measure from infinity is well defined for two-point sets.
Let B ⊂ v(G) be a set. Glue together all vertices in B and delete all self-loops that were created in the process. We denote by G_B the graph induced by the gluing. Note that G_B need not be a simple graph, even when G was.
We will prove in this section that, if the potential kernel is well defined on G, it is also well defined on G_B whenever B is a finite set. Furthermore, we will prove an explicit expression for the potential kernel on the graph G_B in the case where B is a finite set. These results are an extension of results on the lattice Z^2, see for instance [LL10, Chapter 6], but we will use different arguments, relying on the expression for the potential kernel in terms of harmonic measure from infinity as in Corollary 3.3.

Theorem 3.10 (Gluing Theorem). Suppose a(x, y) = lim_{n→∞} a_n(x, y) exists for all x, y ∈ v(G) and does not depend on the choice of the sequence of sets A_n going to infinity. Let B ⊂ v(G) be a finite set whose removal does not disconnect G, and suppose x ∈ B. Then

exists and is given by

Extending q_B to v(G) in the natural way (i.e., using (15) with w ∈ v(G)), we have

where the Laplacian ∆ is calculated on G via (4).
Note in particular that in the expression (15) for q_B, any choice of x ∈ B gives the same value, and so this choice is irrelevant. We will prove this theorem in the two subsequent subsections, proving first (14) and (15) in Section 3.3.1, and then (16) in Section 3.3.2.
Before we give the proof, we first state some corollaries. The first one is that the harmonic measure from infinity is well defined for an arbitrary finite set B (subject to the assumption that the removal of B does not disconnect G).
Corollary 3.11. Fix a finite set B ⊂ v(G) as in Theorem 3.10. Let A_n be a sequence of sets of vertices going to infinity. Then for any x ∈ B,

hm_B(x) := lim_{n→∞} P_{A_n}(X_{T_B} = x)    (17)

exists, and is positive for all x ∈ B such that the removal of B \ {x} does not disconnect x from infinity.
Proof. Fix w ∉ B. Arguing as in (9) and (10), we obtain a limit as n → ∞, which is by definition the desired value of hm_{B∪{w}}(w). Note furthermore that q_B(w) is strictly positive by Lemma 3.1.
Applying the same reasoning, but with B replaced by B' = B \ {x} (for x ∈ B) and w = x, shows that the limit in (17) exists. Furthermore, if the removal of B does not disconnect x from ∞, we see again that q_{B'}(x) > 0, and so hm_B(x) > 0.
Next, we show that the potential kernel can only be well defined if the graph G is one-ended.
Corollary 3.12. If the potential kernel is well defined, then G is one-ended.
Proof. Intuitively, on a multiply-ended graph there is no single harmonic measure from infinity, since there are several ways of converging to infinity. Suppose G has more than one end. Let x_1, x_2, . . ., x_M be finitely many vertices such that removing them from v(G) leaves an induced graph with (at least) two infinite components. Write B_n = B(o, n) and choose n large enough that x_1, . . ., x_M ∈ B_n. Consider the graph G_{B_n} resulting from gluing B_n together as in the theorem. Clearly, the removal of B_n creates at least two infinite components. Pick a vertex z ∈ B_n^c and suppose it lies in one infinite component. Let (w_i)_{i≥1} be any sequence of vertices going to infinity in an infinite component that does not contain z. Then P_{w_i}(T_z < T_{B_n}) = 0 for each i, yet by Corollary 3.11 this converges to hm_{B_n∪{z}}(z) > 0, since the removal of B_n does not disconnect z from infinity. This is the desired contradiction.
Theorem 3.10 a priori only shows that the potential kernel with 'pole' B is well defined when B does not disconnect G. We can, however, extend it to arbitrary finite sets B and to an arbitrary second variable y.
Corollary 3.13. Let B ⊂ v(G) be any finite set. The potential kernel a_B : v(G_B)^2 → R_+ is well defined, in the sense that the limit exists for all w, y ∈ v(G_B) and does not depend on the choice of the sequence of sets A_n. Here, the probability and effective resistance are calculated on the graph G_B.
Proof. We start by taking the hull B̄ of B (in the sense of complex analysis, meaning we "fill it in" with respect to the point at infinity), defined by adding to B all the points of v(G) that belong to finite connected components of v(G) \ B. Since G is one-ended by Corollary 3.12, B̄ does not disconnect G. By Theorem 3.10, for any sequence of sets (A_n)_{n≥1} going to infinity, the limit exists for each w ∈ v(G_{B̄}) and does not depend on the choice of the sequence. Moreover, this limit also trivially exists (and is zero) if w is in one of the finite components of v(G) \ B.
Hence we deduce that for all w ∈ v(G_B) the limit exists and does not depend on the choice of the sequence of sets (A_n) going to infinity. Now, by Proposition 3.5 (see Remark 3.7), we get that for all w, y ∈ v(G_B) the limit exists and does not depend on the choice of the sequence A_n. This is the desired result.

Proof of (14) and (15)
Proof. Fix a sequence (A_n)_{n≥1} of finite sets of vertices going to infinity. For a finite set B ⊂ v(G) and x ∈ B, we define the function q_B via (15), with q_B(B) = 0, whenever the potential kernel on G is well defined. We will prove (14) by induction on the number of vertices m in B. To be more precise, we will show that for any recurrent graph G for which the potential kernel is well defined (in other words, lim_{n→∞} a_{A_n}(x, y) = a(x, y) does not depend on the sequence A_n) and any set B ⊂ v(G) with |B| = m and v(G) \ B connected, (14) holds. The base case m = 1 holds trivially. Let m ∈ N and suppose that for any recurrent graph G on which the potential kernel is well defined and any subset B ⊂ v(G) with |B| = m and v(G) \ B connected, (14) and (15) are satisfied for each x ∈ B.
In this case, the limit defining q_B(w) exists by assumption and does not depend on the sequence (A_n)_{n≥1}, so we also have a_{G_B}(·, B) = q_B(·) by (9). Remark 3.7 then shows that a_{G_B}(·, y) is well defined for any y ∈ v(G_B), and hence the potential kernel is well defined on G_B too.
Induction step. Let G be a recurrent graph for which the potential kernel is well defined and let B ⊂ v(G) be a set with |B| = m + 1 such that v(G) \ B is connected. We split into two cases, depending on x. We begin with the easy case, and suppose we are in situation (i). For all w ∉ B (and n large enough), the limit on the left-hand side exists as the potential kernel is well defined, see Corollary 3.3, and hence lim_{n→∞} P_{A_n}(T_w < T_B) R_eff(w ↔ B) exists and equals a(w, x). Moreover, we also have the identity which proves the result for this choice of x.
We move on to the more interesting case (ii). Since we are not in case (i), we can find a set B' ⊂ B with |B'| = m and v(G) \ B' connected (indeed, since we are not in case (i), there is at least one path going from some vertex in B to infinity without touching x, and removing from B the last vertex of B visited by this path provides such a set B'). Take y to be the vertex such that {y} = B \ B'.
Since |B'| = m, we have by the induction hypothesis that the potential kernel a_{G_{B'}}(·, ·) is well defined. Pick w ∈ v(G) such that w ∉ B, which we can also view as a vertex in G_B and in G_{B'}. Fix n so large that both B and w are not in A_n. Using (9), we focus on the probability appearing on the right-hand side. By the law of total probability and the strong Markov property of the simple random walk, and since G (and hence G_{B'}) is recurrent, taking n → ∞ in the above identity after multiplying by R_eff(B' ↔ A_n) and using once more recurrence, we deduce the stated convergence, because the potential kernel on G_{B'} is well defined by assumption. This implies in particular that the limit exists and does not depend on the sequence A_n, and thus a_{G_B}(w, B) is well defined. We are left to prove that q_B(w) = a_{G_B}(w, B). By the induction hypothesis (since |B'| = m), using this in (18) we get
= a(w, x) − P_w(X_{T_B} = y) a(y, x) − Σ_{z∈B'} P_w(X_{T_B} = z) a(z, x),
where in the last line we used, for z ∈ B', the equality which holds due to the strong Markov property of the random walk. But of course, this is the same as q_B(w), so indeed a_{G_B}(w, B) = q_B(w), which finishes the induction argument.

Proof of (16)
Let B ⊂ v(G) be a finite set such that its removal does not disconnect G. So far, we have shown that the potential kernel is well defined on the graph G_B, and hence that the harmonic measure from infinity is well defined, see Corollary 3.11. In this section, we will prove (16), the third statement of Theorem 3.10. First, let us introduce some notation that will only be used here. If G is a graph and B ⊂ v(G) a (finite) set, then we write ∆ for the Laplacian on G and ∆_{G_B} for the Laplacian on G_B.
Proof of (16). Let G be a recurrent graph on which the potential kernel is well defined, and suppose that B ⊂ v(G) is a finite set such that v(G) \ B is connected. Fix x ∈ B. We split into two cases: (i) the removal of x disconnects B \ {x} from infinity in G, or (ii) it does not.
In the first case, we have that hm_B(x) = 1 and also that q_B(w) = a(w, x) for all w ∈ v(G) (indeed, for w ∉ B this follows immediately from (15), and for w ∈ B \ {x} we have q_B(w) = 0 = a(w, x) in this case). Hence we deduce the identity which shows the result in case (i).
In case (ii), take B' = B \ {x}. We will show that (19) holds, where ∆ is the Laplacian acting on the variable u. Let us first explain how this shows the final result. As in (18) and (15), we know the corresponding identity (when q_B is viewed as a function on v(G_B)); moreover, when w ∈ B, we have the analogous statement. Hence (19) implies the final result.
To prove (19), recall the expression from (5). The last equality follows from Corollary 3.3, which allows us to rewrite it accordingly. This shows (16) and therefore concludes the proof of Theorem 3.10. In turn, this finishes the proof that (a) is equivalent to (b) in Theorem 1.1 (see e.g. Corollary 3.11).
4 Proof of Theorem 1.4
Before proceeding with the remaining equivalences, we give a proof that (a) holds under the assumption of Theorem 1.4. Recall that a random graph (G, o) is strictly subdiffusive whenever there exists a β > 2 for which the corresponding displacement estimate holds. We will use the following theorem of [BDCKY15]. The main theorem of that paper shows that, assuming subdiffusivity, strictly sublinear harmonic functions must be constant. In fact, as already mentioned in that paper (see Example 2.10), the arguments there also show that, assuming strict subdiffusivity, even harmonic functions of at most linear growth must be constant. It is this extension which we use here, and which we quote below.
Theorem 4.1 (Theorem 3 in [BDCKY15]). Let (G, o, X_1) be a strictly subdiffusive (SD), recurrent, stationary environment. Almost surely, every harmonic function on G that is of at most linear growth is constant.
We now give the proof of Theorem 1.4 using this result.
Proof of Theorem 1.4. Let (G, o) be a unimodular graph that is almost surely strictly subdiffusive and recurrent, satisfying E[deg(o)] < ∞. Degree biasing (G, o) gives a reversible environment, and hence, almost surely, all harmonic functions on (G, o, X_1) of at most linear growth are constant by Theorem 4.1. After degree unbiasing, the same statement holds for (G, o).
Let a_1, a_2 : v(G)^2 → R_+ be two potential kernels arising as subsequential limits in the sense of Lemma 3.1. Fix y ∈ v(G). By Proposition 3.2, each a_i(·, y) is of the displayed form. Set h := a_1(·, y) − a_2(·, y). Clearly, h is harmonic everywhere outside y by the choice of the a_i's and the linearity of the Laplacian. Since ∆a_1(·, y) = ∆a_2(·, y) by Proposition 3.2, we also get ∆h(y) = 0, and we deduce that h is harmonic everywhere.
Next, we notice that h is of at most linear growth, and thus h must be constant. Since h(y) = a_1(y, y) − a_2(y, y) = 0, it follows that h(x) = 0, and hence a_1(x, y) = a_2(x, y) for all x ∈ v(G). Since y ∈ v(G) was arbitrary, we deduce the desired result.
Remark 4.2. Strict subdiffusivity of the UIPT was obtained by Benjamini and Curien in the beautiful paper [BC13]. A result of Lee [Lee17, Theorem 1.10] gives a more general condition which guarantees strict subdiffusivity (essentially, the graph needs to be planar with at least cubic volume growth).
As a prominent application of Theorem 1.4, consider the incipient infinite percolation cluster (IIC) of Z^d for sufficiently large d. By a combination of Theorem 1.2 in [KN09] and Theorem 1.1 in [Lee20], one can check that strict subdiffusivity holds in all sufficiently high dimensions. Recurrence is easier to check. (Note that a weaker form of subdiffusivity can be deduced by combining [KN09] with [BJKS08].) In fact, it was already checked earlier that in high dimensions the backbone of the IIC is one-ended ([VdHJ04]), implying that the UST is also one-ended in this case.
We point out that the result should also apply in dimension two (even for non-nearest-neighbour walks), or for the IIC of spread-out percolation, although we do not know whether strict subdiffusivity has been verified in those cases.
5 The sublevel set of the potential kernel
Let (G, o) be some recurrent, rooted graph for which the potential kernel is well defined, in the sense that a_n(x, y) has a limit which does not depend on the choice of the sequence (A_n)_{n≥1} of finite sets of vertices going to infinity.
Fix z ∈ v(G) and R ∈ R_+. Recall the notation (6) for the ball with respect to the effective resistance metric, and introduce the notation Λ_a(z, R) := {x ∈ v(G) : a(x, z) ≤ R} for the sublevel set of a(·, z). In case z = o, we drop z from the notation and write B_eff(R) and Λ_a(R), respectively. Although a(·, ·) fails to be a distance, as it is not symmetric, it is what we call a quasi-distance, since it satisfies the triangle inequality by Proposition 3.5. On 2-connected graphs (where the removal of any single vertex does not disconnect the graph), we have a(x, y) = 0 precisely when x = y. In particular, this is true for triangulations.
Let us first explain why we care about the sublevel sets of the potential kernel and why we will prefer them over effective resistance balls. We call a set A ⊂ v(G) simply connected whenever it is connected (that is, for any two vertices x, y in A there exists a path connecting x and y using only vertices of A) and removing A from the graph does not disconnect any part of the graph from infinity. We make the following observation, which holds because x → a(x, o) is harmonic outside of o.
Observation. The set Λ_a(R) is simply connected. This is not true, in general, for B_eff(R). Introduce the hull B̄_eff(z, R) of B_eff(z, R) as the set B_eff(z, R) together with the finite components of v(G) \ B_eff(z, R). Even though B̄_eff(z, R) does not have any more "holes", it is not evident (nor true in general) that B̄_eff(z, R) is connected.
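The hull operation can be made concrete on a finite piece of the graph, with a designated set of boundary vertices standing in for the point at infinity. A hedged sketch (the flood-fill representation and all names are ours):

```python
from collections import deque

def hull(B, vertices, adj, boundary):
    """The hull of B: B together with every component of the complement
    that cannot reach `boundary` (the stand-in for infinity)."""
    reached = set(v for v in boundary if v not in B)
    queue = deque(reached)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in B and w not in reached:
                reached.add(w)
                queue.append(w)
    # complement vertices never reached from the boundary lie in a "hole"
    return set(B) | {v for v in vertices if v not in B and v not in reached}
```

On a 5 × 5 grid whose border plays the role of infinity, the hull of the eight cells surrounding the centre also contains the centre cell itself, since that cell is cut off from the border.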
We do notice that B_eff(R) ⊆ Λ_a(R) by Corollary 3.3. See also Figure 1 for a schematic picture. We thus get that the sets Λ_a(R) are more regular than the sets B_eff(R), and if G is planar, they correspond to Euclidean simply connected sets.
In this section, we are interested in some properties of Λ_a(R) that we will need to prove our Harnack inequalities. We now state the main result, which shows that lim_{z→∞} a(z, x) = ∞ under the additional assumption that the underlying rooted graph is random and (stationary) reversible.
Proposition 5.1. Suppose (G, o) is a reversible random graph that is a.s. recurrent and for which the potential kernel is a.s. well defined. Almost surely, the sets Λ_a(z, R) are finite for each R ≥ 1 and all z ∈ v(G), and hence (Λ_a(z, R))_{R≥1} defines an exhaustion of G.

Although we expect this proposition to hold for all graphs on which the potential kernel is well defined, we were not able to prove the general case. In addition, the proof actually yields something slightly stronger, which may not necessarily hold in full generality.
Note also that for all R ≥ 0, the set v(G) \ Λ_a(R) is non-empty, because x → a(x, o) is unbounded (to see this, assume it is bounded and use recurrence and the optional stopping theorem to deduce that a(x, o) would be identically zero, which is not possible since the Laplacian is nonzero at o). We introduce the following definition, which we will use throughout the remainder of the paper.
• We call x (δ, o)-good if hm_{o,x}(x) ≥ δ. We will omit the root from the notation when it is clear from the context.
• We call the rooted graph (G, o) δ-good if for all ε > 0, there exist infinitely many (δ − ε, o)-good points.
• We call the rooted graph (G, o) uniformly δ-good if all vertices are (δ, o)-good.
Note that if the graph (G, o) is uniformly δ-good for some δ > 0, then actually Λ_a(δR) ⊂ B_eff(R), so that the sets Λ_a(δR) are finite for each R. It turns out that (G, o) being δ-good is also enough, which is the content of Lemma 5.5 below.
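For the reader's convenience, the inclusion can be spelled out in one line, using the product form of the potential kernel from Corollary 3.3 (as it is used again in the proof of Lemma 5.5 below):

```latex
x \in \Lambda_a(\delta R)
\;\Longrightarrow\;
\delta R \,\ge\, a(x,o)
\,=\, \mathrm{hm}_{o,x}(x)\, R_{\mathrm{eff}}(o \leftrightarrow x)
\,\ge\, \delta\, R_{\mathrm{eff}}(o \leftrightarrow x)
\;\Longrightarrow\;
R_{\mathrm{eff}}(o \leftrightarrow x) \le R,
```

that is, x ∈ B_eff(R); uniform δ-goodness enters precisely in the middle inequality.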
Although the definition of δ-goodness is given in terms of rooted graphs (G, o), the next (deterministic) lemma shows that the definition is actually invariant under the choice of the root, and hence we can omit the root and say "G is δ-good" instead.
By Corollary 3.9, we can take R_0 := R_0(z, o, ε_2) so large that the displayed bound holds for all x ∉ B(o, R_0). This implies that any (δ − ε_1, o)-good vertex x ∈ B(o, R_0)^c must in fact be (δ − ε, z)-good. This shows the desired result, as ε was arbitrary.
The next lemma shows the somewhat surprising fact that reversible environments are always δ-good, with δ arbitrarily close to 1/2.

Lemma 5.4. Suppose that (G, o, X_1) is a reversible environment on which the potential kernel is a.s. well defined. Then for each δ < 1/2, a.s., (G, o) is δ-good.

Proof. In this proof, we write P, E to denote probability and expectation with respect to the law of the random rooted graph (G, o), and, in compliance with the rest of the document, P, E to denote probability and expectation with respect to the law of the simple random walk, conditionally on (G, o).
By Lemma 5.3, (G, o) being δ-good does not depend on the root, and hence, for each δ > 0, the event A_δ that (G, o) is δ-good is invariant under re-rooting. A natural approach would be to use that any unimodular law is a mixture of ergodic laws [AL07, Theorem 4.7]. We will not use this, as there is an even simpler argument in this case. Nevertheless, we will use the invariance under re-rooting to prove that A_δ has probability one. Suppose, to the contrary, that the event A_δ does not occur with probability one, so that P(A_δ) ∈ [0, 1). Then we can condition the law P on A_δ^c to obtain again a reversible law P(· | A_δ^c) (it is here that we use the invariance of A_δ under re-rooting), under which A_δ has probability zero. However, we will show that P(A_δ) > 0 always holds when δ < 1/2, regardless of the exact underlying reversible law P, as long as the potential kernel is a.s. well defined and the graph is a.s. recurrent. This implies that we must in fact have P(A_δ) = 1, which is the desired result.
Fix δ < 1/2. We thus still need to prove that P(A_δ) > 0, which we do by contradiction. Assume henceforth that P(A_δ) = 0. By reversibility, we get for each n ∈ N the stated equality, due to the fact that (G, o, X_n) has the same law as (G, X_n, o), which is reversibility (here, the expectation is with respect to both the environment and the walk). As P(A_δ) = 0, we can assume that a.s. there exists a (random) N = N(G, o) ∈ N such that for all x ∉ B(o, N) we have hm_{x,o}(x) ≤ δ. Also, as the environment is a.s. null-recurrent, we have the corresponding convergence as n → ∞. Moreover, notice the stated bound for each n. Since hm_{o,X_n}(X_n) ∈ [0, 1], we can apply Fatou's lemma (applied to the expectation with respect to the law of (G, o) only, so that we can use the inequality just found), and we deduce a contradiction with δ < 1/2.
We next show that for any δ-good (rooted) graph, the set Λ_a(R) is finite for each R ≥ 1. Combined with Lemma 5.4, this implies Proposition 5.1 in the case of reversible environments. However, Lemma 5.4 shows more than just this fact: indeed, Λ_a(R) being finite need not imply that (G, o) is δ-good for some δ > 0.
Proof. Let δ > 0 and suppose that G is δ-good. We will show that for each R ≥ 1 there exists an M ≥ 1 such that the displayed bound holds for all x ∉ B(o, M). This implies the final result.
By the assumption of δ-goodness, for each R ≥ 1 there exists a vertex x_R outside B_eff(R) which is (δ/2, o)-good. This implies by Corollary 3.3 that a(x_R, o) ≥ (δ/2) R_eff(o ↔ x_R) ≥ δR/2. Fix R ≥ 1 and define the set B_R = {o, x_R}. By Theorem 3.10, we get for all x the stated decomposition of a(x, o), where q_{B_R}(·) is the potential kernel on the graph G_{B_R}, which we recall is the graph G with B_R glued together. Since potential kernels are non-negative, we can focus our attention on the right-most term. Take M = M(o, x_R, δ) so large that the required bound holds for all x ∉ B(o, M), which is possible as the potential kernel is well defined, see Proposition 3.2 and Corollary 3.3. We deduce the claimed bound for all x ∉ B(o, M), as desired.

6 Two Harnack inequalities
We are now ready to prove the equivalence between (c) and (a). The first part of this section deals with a classical Harnack inequality, whereas the second part provides a variation thereof, where the functions may have a single pole. The first Harnack inequality (Theorem 6.2 below) does not involve Theorem 1.1.
Recall that Λ_a(z, R) is the sublevel set {x ∈ v(G) : a(x, z) ≤ R} (for R not necessarily an integer) and that a(z, x) defines a quasi-distance on G. Also recall the notation B_eff(z, R) = {x : R_eff(z ↔ x) ≤ R} for the (closed) ball with respect to the effective resistance distance.

6.1 The standing assumptions
Throughout this section, we work with deterministic graphs G satisfying a certain number of assumptions. To be precise, we say that G satisfies the standing assumptions whenever it is recurrent, the potential kernel is well defined, and the level sets (Λ_a(z, R))_{R≥1} are finite for some (hence all) z ∈ v(G).

Remark 6.1. Proposition 5.1 implies that for any unimodular random graph with E[deg(o)] < ∞ that is a.s. recurrent and for which the potential kernel is a.s. well defined, the level sets Λ_a(z, R) are finite for all R and z ∈ v(G) (so that such a graph satisfies the standing assumptions). Note for instance that the UIPT therefore satisfies the standing assumptions.

6.2 Elliptic Harnack Inequality
We first show that, under the standing assumptions, a version of the elliptic Harnack inequality holds, in which the constants are uniform over all graphs satisfying the standing assumptions. Recall the definition of the hull B̄_eff(z, R) introduced in Section 5.

Theorem 6.2 (Harnack Inequality). There exist M, C > 1 such that the following holds. Let G be a graph satisfying the standing assumptions. For all z ∈ v(G), all R ≥ 1 and all h : Λ_a(z, MR) ∪ ∂Λ_a(z, MR) → R_+ that are harmonic on Λ_a(z, MR), we have the displayed bound.

Remark 6.3. In case the rooted graph (G, o) is in addition uniformly δ-good for some δ (that is, hm_{x,o}(x) ≥ δ for each x, see Definition 5.2), then the stated comparison holds, and hence the Harnack inequality above becomes a standard "elliptic Harnack inequality" for the graph equipped with the effective resistance distance. (As will be discussed below, we conjecture that many infinite models of random planar maps, including the UIPT, satisfy the property of being δ-good for some nonrandom δ > 0.)

The harmonic exit measure.
In the proof, we fix the root o ∈ v(G), but it plays no special role. Define, for k ∈ N, x ∈ Λ_a(k) and b ∈ ∂Λ_a(k), the "harmonic exit measure" µ_k, where T_k is the first hitting time of ∂Λ_a(k), and recall the definition of the Green function in (1). The following proposition shows that changing the starting points x, y ∈ B_eff(R) does not significantly change the exit measure µ_k(·, b). The Harnack inequality will follow easily from this proposition (in fact, it is equivalent to it).
Proposition 6.4. There exist constants C, M > 1 such that for all G satisfying the standing assumptions, all R ≥ 1 and all x, y ∈ ∂B_eff(R), the stated comparison holds for each b ∈ ∂Λ_a(MR).
We first prove the following lemma, giving an estimate on the number of times the simple random walk started from x visits y before exiting the set Λ_a(MR).

Lemma 6.5. For all M_0 > 1, there exist M > M_0 and C = C(M, M_0) > 1 such that the stated bounds hold for all G satisfying the standing assumptions and all R ≥ 1.

Proof. Fix M_0 > 1 and let M > M_0 + 2 for now. Let G be any graph satisfying the standing assumptions. Let R ≥ 1, take x ∈ ∂Λ_a(M_0 R) and y ∈ ∂B_eff(R). Notice that, by Lemma 3.4, we can write the corresponding identity. Recalling that a(·, ·) is a quasi-distance satisfying the triangle inequality by Proposition 3.5, we obtain, by the assumption on x and y and the expression for the potential kernel in terms of harmonic measure and effective resistance (Corollary 3.3), the corresponding estimates. Going back to (21) and upper-bounding −a(x, y) ≤ 0, we find the desired upper bound. For the lower bound, fix again z ∈ ∂Λ_a(MR) and apply Theorem 3.10; on the other hand, invoking the triangle inequality (as in (22)), the lower bound follows from (21) and (23). Since M > M_0 + 2, we can take C = C(M, M_0) such that the result follows.
Proof of Proposition 6.4. Take M_0 > 1, M = M(M_0) and C > 1 as in Lemma 6.5. Let G be a graph satisfying the standing assumptions. Fix R ≥ 1 and let x, y ∈ ∂B_eff(R). For b ∈ Λ_a(MR), we use the last-exit decomposition (Lemma 2.1). By Lemma 6.5, we have the two-sided bound for each z ∈ ∂Λ_a(M_0 R). Defining C' = C² and using the deg(·)-reversibility of the simple random walk, we thus get the claim, showing the final result.
Proof of Theorem 6.2. The proof of Theorem 6.2 is now easy. Indeed, let C, M > 1 be large enough, as in Proposition 6.4, and take any graph G satisfying the standing assumptions. Using the maximum principle for harmonic functions, we deduce that it is enough to prove the bound with the maximum taken over ∂B_eff(R). Take x, y ∈ ∂B_eff(R). By optional stopping and Proposition 6.4, we obtain the result.

6.3 (a) implies (c): anchored Harnack inequality
Sometimes, one wants to apply a version of the Harnack inequality to functions that are harmonic on a big ball except at some vertex inside this ball (the pole). Clearly, we can only hope to compare the values of a harmonic function at points that are "far away" from the pole, say on the boundary of a ball centered at the pole. Such an "anchored" inequality does not always follow from the Harnack inequality as stated in Theorem 6.2. As an example, consider the graph Z with nearest-neighbour connections. Pick any two positive real numbers α, β satisfying α + β = 1. Then the function h that maps x to α(−x) when x is negative and to βx when x is positive is harmonic everywhere outside of 0, with ∆h(0) = 1. This implies that no form of "anchored Harnack inequality" can hold on Z.
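This counterexample can be checked mechanically. A minimal numerical sketch (ours), using the convention ∆f(x) = Σ_{y∼x}(f(y) − f(x)), under which ∆h(0) = α + β = 1:

```python
def h(x, alpha=0.25, beta=0.75):
    """The function from the text: alpha * (-x) for x < 0 and beta * x for
    x >= 0, with alpha + beta = 1 (this alpha, beta is one illustrative choice)."""
    return alpha * (-x) if x < 0 else beta * x

def laplacian(f, x):
    # unnormalised graph Laplacian on Z: sum of increments over the two neighbours
    return f(x - 1) + f(x + 1) - 2 * f(x)

# h is harmonic at every x != 0 ...
assert all(laplacian(h, x) == 0 for x in range(-50, 51) if x != 0)
# ... while at the pole, Delta h(0) = alpha + beta = 1
assert laplacian(h, 0) == 1.0
# yet h(R)/h(-R) = beta/alpha is arbitrary, so no constant can bound the
# ratio of h over the two boundary points {-R, R} of a ball around the pole
assert h(10) / h(-10) == 3.0
```

Varying α and β makes the ratio β/α as large as desired, which is exactly why Z must be excluded from any anchored Harnack statement.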
We next present a reformulation of "(a) implies (c)" in Theorem 1.1. We will use it to prove results for the "conditioned random walk" introduced in Section 7.

Theorem 6.6 (Anchored Harnack Inequality). There exists a C < ∞ such that the following holds. Let G be a graph satisfying the standing assumptions. For z ∈ v(G), R ≥ 1 and all h : v(G) → R_+ that are harmonic outside of z and satisfy h(z) = 0, we have (aH).

Remark 6.7. Actually, we will prove that for each z ∈ v(G) and R ≥ 1, there exists Ψ_z(R) ≥ R such that for all functions h : Λ_a(z, Ψ_z(R)) ∪ ∂Λ_a(z, Ψ_z(R)) → R_+ that are harmonic on Λ_a(z, Ψ_z(R)) \ {z} with h(z) = 0, the analogous bound holds. As before, if the graph is uniformly δ-good for some δ > 0, we can actually take Ψ_z(R) = MR for some M = M(δ) depending only on δ.
Proof of Theorem 6.6. The proof will be somewhat similar to that of Theorem 6.2. Again, we will prove it for the vertex o to simplify the writing, but it does not matter which vertex we choose. For k ∈ N, we again write T_k = T_{Λ_a(k)^c} for the first time the random walk exits the sublevel set Λ_a(k). Fix k ∈ N and x ∈ Λ_a(k). Define the exit measure for b ∈ ∂Λ_a(k). We begin by showing that, for x, y ∈ Λ_a(R), the exit measures ν_k(x, ·) and ν_k(y, ·) are comparable after division by a(x, o) and a(y, o) respectively, when k is large enough.
Although it might seem at first slightly counterintuitive that we need to divide by a(x, o), this actually means that the exit measures of the walk conditioned to avoid o are comparable.

Proposition 6.8. There exists a C < ∞ such that for each R ≥ 1, there exists a constant Ψ(R) ≥ R such that the stated comparison holds for all x, y ∈ Λ_a(R) \ Λ_a(1) and all b ∈ ∂Λ_a(Ψ(R)).

In order to prove this proposition, we first prove a few preliminary lemmas. The next result offers bounds on the probability that the random walk goes "far away" before hitting o, in terms of the potential kernel.

Lemma 6.9. For each z ∈ v(G) \ Λ_a(1) and all M > a(z, o), the displayed bounds hold.

Proof. This is a straightforward consequence of the optional stopping theorem. Indeed, since M ≤ a(w, o) ≤ M + 1 for each w ∈ ∂Λ_a(M) and a(o, o) = 0, we find the desired bounds.
Lemma 6.10. For each R ≥ 1, there exist M, M_0 > R such that the stated bounds hold for all x ∈ Λ_a(R).

Proof. Fix R ≥ 1 and x, y ∈ Λ_a(R). Take M_0 = M_0(R) at least so large that the required bound (with constant 1/2) holds for all w ∉ Λ_a(M_0), which is possible due to Corollary 3.8. Fix then M = 5M_0 and B_M = {o} ∪ Λ_a(M)^c. Take z ∈ Λ_a(M_0). By the choice of M and Lemma 6.9, and using the strong Markov property of the walk, the definition of the Green function and Corollary 3.3 allow us to write the stated identity, which implies the corresponding statement for each b ∈ Λ_a(M). Thus (26) is equivalent to (27). Hence, by (25), and using (24) twice, with w = z and w = b respectively, in (27), we get the desired result.
Proof of Proposition 6.8. Just as in the proof of Proposition 6.4, we use the last-exit decomposition. We are left to define Ψ(R) = M and C = 20 to obtain the desired result.
Finishing the proof of Theorem 6.6 is now straightforward. Indeed, fix C > 1 and Ψ as in Proposition 6.8. Let R ≥ 1 and let h : Λ_a(Ψ(R)) → R_+ be harmonic outside o with h(o) = 0. Fix x, y ∈ Λ_a(R). By optional stopping, which is valid since Λ_a(Ψ(R)) is finite, we obtain the desired result for x, y ∈ ∂Λ_a(R).
6.4 (c) implies (a)

Proposition 6.11. Suppose that the (rooted) graph (G, o) satisfies the anchored Harnack inequality with respect to the sequence (V_R)_{R≥1} and some (non-random) constant C, for all h : v(G) → R_+ harmonic outside possibly o and such that h(o) = 0. In this case, the potential kernel a(x, o) is well defined.
We take some inspiration from [SBS15], although the strategy in fact goes back to a paper of Ancona [Anc78]. Pick some sequence e = (e_R)_{R≥1} in v(G) satisfying e_R ∈ ∂V_R.

Lemma 6.12. Let R ≥ 1 and suppose that h, g are two positive functions, harmonic on Λ_a(Ψ(R)) \ {o} and vanishing at o. Then the stated comparison, involving h(e_R) and g(e_R), holds.
Proof. Fix R ≥ 1 and let h, g be as above. Write T_R = T_{∂V_R}. By optional stopping, h(o) = 0 and the Harnack inequality, we get the final result.
Proof of Proposition 6.11. We follow closely Section 3.2 in [SBS15]. We will show that whenever h_1, h_2 : v(G) → [0, ∞) are harmonic functions on v(G) \ {o} vanishing at o such that h_1(e_1) = h_2(e_1), we have h_1 = h_2. The result then follows, as we can pick h_1(·) and h_2(·) to be two subsequential limits of a_{A_n}(·, o) (for possibly different sequences (A_n) going to infinity), rescaled so that they are equal at e_1. Consider h_1, h_2 : v(G) → [0, ∞), harmonic on v(G) \ {o} and vanishing at o. Assume without loss of generality that h_1(e_1) = h_2(e_1) = 1. By Lemma 6.12, there is some appropriate (large) M, not depending on h_1, h_2, for which the comparison holds for all x ∈ V_R and R ≥ 1. Setting x = e_1, using this in (28) and letting R → ∞, we obtain the corresponding bound for all x ∈ v(G) \ {o}. Define h_i recursively for i ≥ 3 via (30). It is straightforward to check that h_i is non-negative (as follows from an iterated version of (29)) and harmonic outside o. Since M did not depend on h_1, h_2, and because h_i(e_1) = 1 as well, we obtain a uniform bound. On the other hand, it is straightforward to check that the recursion (30) can be solved explicitly; unless h_1(x) = h_2(x), the solution grows exponentially, which is incompatible with (31). Therefore h_1 = h_2.

Remark 6.13. The proof above makes it clear that if the potential kernel is uniquely defined (i.e. if (a) holds), then any function h : v(G) → R_+ satisfying ∆h(x) = 0 for all x ∈ v(G) \ {o} and h(o) = 0 is of the form αa(x, o) for some α ≥ 0.
Remark 6.14. If G is reversible and satisfies the anchored Harnack inequality, then it satisfies (a) as a consequence of the above. It therefore satisfies the standing assumptions; in particular, Theorem 6.2 holds, so it also satisfies the elliptic Harnack inequality (EHI). We have therefore proved that the anchored Harnack inequality (AHI) implies the (EHI), at least for reversible random graphs, which is not a priori obvious.
7 Random walk conditioned to not hit the root

Let (G, o) satisfy the standing assumptions, i.e., it is recurrent, the potential kernel is well defined and the potential kernel tends to infinity. In this section, we define what we call the conditioned random walk (CRW), which is the simple random walk on G conditioned to never hit the root o (or any other fixed vertex). Of course, a priori this does not make sense, as the event that the simple random walk X never hits o has probability zero. However, we can take the Doob a(·, o)-transform and use it to define the CRW; we make this precise below. We then apply some of the results derived earlier to answer some basic questions about the CRW. For example: is there a connection between the harmonic measure from infinity and the hitting probabilities of points (and sets)? What is the probability that the CRW will ever hit a given vertex? Do the traces of two independent random walks intersect infinitely often? Does the random walk satisfy a Harnack inequality? Does it satisfy the Liouville property? The answer turns out to be yes to all of the above, and the majority of this section is devoted to proving such statements.
In a series of papers studying the conditioned random walk ([CPV15, GPV19, PRU20]; see also the lecture notes by Popov [Pop21]), the following remarkable observation about the CRW (X̂_t, t ≥ 0) on Z² was made. Let q(y) = P(X̂_t = y for some t ≥ 0) = P(T̂_y < ∞); then lim_{y→∞} q(y) = 1/2, even though asymptotically the conditioned walk X̂ looks very similar to the unconditioned walk.
One may wonder whether such a fact holds in the generality of stationary random graphs for which the potential kernel is well defined. This question was in fact an inspiration for the rest of the paper. Unfortunately, we are not able to answer it in generality, but we believe it should not be true in general. In fact, on most natural models of random planar maps, we expect every value in the interval between lim inf_{y→∞} q(y) and lim sup_{y→∞} q(y) to arise as a subsequential limit. We will prove the upper bound of (32), and a form of the lower bound on CRT-mated maps, in Theorem 9.1. The fact that every value between lim inf_{y→∞} q(y) and lim sup_{y→∞} q(y) arises as a subsequential limit holds in general and will be proved in Proposition 7.6.

Definition and first estimates
Instead of the graph distance or the effective resistance distance, we will work with the quasi-distance a(x, y). Recall the definition Λ_a(y, R) := {x ∈ v(G) : a(x, y) ≤ R} and Λ_a(R) = Λ_a(o, R). We will fix y = o, but we note that in the random setting it is of no importance that we perform our constructions at the root (in that setting, everything here is conditional on some realization of (G, o)).
We can thus define the conditioned random walk (CRW), denoted X̂, as the so-called Doob h-transform of the simple random walk, with h(x) = a(x, o). To avoid unnecessarily loaded notation, we will write a(x) = a(x, o) in the rest of this section.
To be precise, let p(x, y) denote the transition kernel of the simple random walk on G. Then the transition kernel of the CRW is defined as

p̂(x, y) = (a(y)/a(x)) p(x, y)  if y ≠ o,  and  p̂(x, y) = 0  if y = o.
It is a standard exercise to show that p̂ indeed defines a transition kernel. To include the root o as a possible starting point for the CRW, we let X̂_1 have the law P̂_o(X̂_1 = x) = p(o, x) a(x) (which sums to one over the neighbours x of o, since ∆a(o) = 1), and then follow the law of the CRW afterwards. In this case, we can think of the CRW as the walk conditioned to never return to o.
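For concreteness, on Z (where the potential kernel relative to o = 0 is a(x) = |x|) the kernel above can be written out and checked explicitly. The following sketch is purely illustrative (the function names are ours, not the paper's); exact rational arithmetic makes the stochasticity check an identity rather than a floating-point approximation.

```python
from fractions import Fraction

def a(x):
    # Potential kernel of simple random walk on Z relative to o = 0: a(x) = |x|.
    return abs(x)

def p_hat(x, y):
    # Doob a-transform of the simple random walk on Z, killed at o = 0:
    # p_hat(x, y) = a(y)/a(x) * p(x, y) for y != 0, and 0 for y = 0.
    if x == 0 or y == 0 or abs(x - y) != 1:
        return Fraction(0)
    return Fraction(a(y), a(x)) * Fraction(1, 2)

# Rows of p_hat sum to 1 away from o: harmonicity of a outside o gives
# (a(x-1) + a(x+1))/2 = a(x) for x != 0.
for x in [-5, -2, -1, 1, 2, 7]:
    assert p_hat(x, x - 1) + p_hat(x, x + 1) == 1

# Starting at o: first step to a neighbour y of o with weight p(o, y) a(y);
# here each of +-1 gets (1/2) * 1 = 1/2, and the weights sum to one.
start = {y: Fraction(1, 2) * Fraction(a(y)) for y in (-1, 1)}
assert sum(start.values()) == 1
print(p_hat(3, 4), p_hat(3, 2))  # -> 2/3 1/3
```

Note the bias: from x = 3 the conditioned walk steps away from the root with probability 2/3, which is the usual drift of a Doob transform by a function growing at infinity.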
We now collect some preliminary results, starting with transience, and then showing that the walk conditioned to hit a far-away region before returning to the origin converges to the conditioned walk, as expected.
We will write T̂_A for the first hitting time of a set A ⊂ v(G) by the conditioned random walk, and T̂_x when A = {x}. We will also denote T̂_R = T̂_{v(G)\Λ_a(R)}. We recall that a(·, ·) satisfies a triangle inequality (see Proposition 3.5), and hence we have the growth condition

a(x) ≤ a(y) + 1 (33)

for any two neighbouring sites x, y, since a(x, y) ≤ 1 in this case.
Proposition 7.1. Let x ∈ v(G) and let X̂ be the CRW avoiding the root o. Then:

(i) The walk X̂ is transient.

(ii) The process n ↦ 1/a(X̂_{n∧T̂_N}) is a martingale, where N = {y : y ∼ o}.

Proof. The proof of (ii) is straightforward, since 1/a(X̂_{n∧T̂_N}) is the Radon–Nikodym derivative of the usual simple random walk with respect to the conditioned walk. (i) then follows from the fact that a(y, o) → ∞ along at least a sequence of vertices. Indeed, fix 2 < r < R large and y ∈ v(G) \ Λ_a(o, r). By optional stopping (since 1/a(y) is bounded), Rearranging gives Taking R → ∞, we see that P̂_y(T̂_r < ∞) ≤ (r + 1)/(1 + a(y)) < 1, showing that the chain is transient.
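The one-step computation behind (ii) can again be checked exactly on Z with a(x) = |x|: away from the neighbours N of o, the conditioned kernel preserves the mean of 1/a, while at a neighbour of o it does not, which is exactly why the process is only stopped at T̂_N. A hypothetical sketch (our own names) using exact rationals:

```python
from fractions import Fraction

def p_hat(x, y):
    # Doob a-transform on Z with a(x) = |x|: kernel of the CRW killed at o = 0.
    if x == 0 or y == 0 or abs(x - y) != 1:
        return Fraction(0)
    return Fraction(abs(y), 2 * abs(x))

def one_step_expectation_of_inverse_a(x):
    # E_x[ 1/a(X_1) ] under the conditioned walk.
    return sum(p_hat(x, y) * Fraction(1, abs(y))
               for y in (x - 1, x + 1) if y != 0)

# Martingale identity away from the neighbours N = {-1, +1} of o:
for x in [2, 3, 10, -4]:
    assert one_step_expectation_of_inverse_a(x) == Fraction(1, abs(x))

# At a neighbour of o the identity fails (the mass towards o is killed),
# which is why the martingale is stopped at the hitting time of N:
assert one_step_expectation_of_inverse_a(1) == Fraction(1, 2)  # not 1/a(1) = 1
```

The same computation, read in the other direction, is the optional-stopping step used in the proof of transience.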
We now check (as claimed earlier) that the conditioned walk X̂ can be viewed as a limit of the simple random walk conditioned on an appropriate event of positive (but vanishingly small) probability.
Proof. The proof is similar to [Pop21, Lemma 4.4]. Assume here that x ≠ o for simplicity; the proof for x = o follows by splitting off the first step and then treating the remainder of the path in the same way. Let us first assume that the end point ϕ_m of ϕ lies in ∂Λ_a(R). Then Since ϕ_m ∈ ∂Λ_a(R), we know that a(ϕ_m) ∈ (R, R + 1] due to (33). By optional stopping, we also see that a(X_{T_R}) ∈ (R, R + 1]. We thus find that We can deal with the denominator in a similar fashion, only this time we note that the beginning and end point are the same. Hence the a(y)-terms cancel and we get Ĝ_R(x, y) = (a(y)/a(x)) G_R(x, y). This shows the first equality appearing in Proposition 7.3 upon taking R → ∞. The second statement follows from Proposition 3.5.

Intersection and hitting probabilities
Suppose X̂ and Ŷ are two independent CRWs. We will begin by describing hitting probabilities of points and sets, and use this to prove that the traces of X̂ and Ŷ intersect infinitely often a.s. We start with a description of the hitting probability of a vertex y by the CRW started from x. Although it is a rather straightforward consequence of the expression for the Green function of the CRW, it is still remarkably clean.
Since the potential kernel is assumed to be well defined, we also have that P_y(T_x < T_o) → hm_{o,x}(x) as y → ∞ due to Corollary 3.3, and hence we immediately deduce the next result. In particular, on transitive graphs that are recurrent and for which the potential kernel is well defined, one always has q(y) → 1/2 by symmetry. This gives another proof of a result of [Pop21] on the square lattice.
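On Z the limit q(y) = 1/2 is elementary to see directly: the conditioned walk is nearest-neighbour and never revisits o, so it hits a vertex y ≠ 0 exactly when its first step has the sign of y, an event of probability 1/2. A small simulation sketch confirming this (illustrative only; crw_hits and the escape cap are our own devices, not the paper's):

```python
import random

def crw_hits(y, cap=200, rng=random):
    # One run of the conditioned walk on Z started from o = 0:
    # first step uniform on the neighbours of o, then transitions
    # p_hat(x, x+-1) = |x+-1| / (2|x|).  Returns (hit y?, first step).
    x = rng.choice([-1, 1])
    first = x
    while abs(x) <= cap:
        if x == y:
            return True, first
        up = abs(x + 1) / (2 * abs(x))  # probability of stepping right
        x = x + 1 if rng.random() < up else x - 1
    return x == y, first

rng = random.Random(0)
runs = [crw_hits(5, rng=rng) for _ in range(2000)]
# Nearest-neighbour structure: the walk hits y = 5 iff its first step is +1.
assert all(hit == (first == 1) for hit, first in runs)
q_estimate = sum(hit for hit, _ in runs) / len(runs)
assert abs(q_estimate - 0.5) < 0.05
print(q_estimate)
```

Note that from x = 1 the walk steps right with probability |2|/(2·1) = 1, so it indeed never returns to o; by symmetry the same happens on the negative side.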
We can now prove that the subsequential limits of the hitting probabilities q(y) fill out an interval, as promised. Note that this proposition is fairly general: it does not require the underlying graph to be unimodular, only that it satisfy the standing assumptions (recurrence, existence of the potential kernel, and convergence to infinity of the potential kernel).
Proposition 7.6. For each q ∈ [lim inf_{y→∞} q(y), lim sup_{y→∞} q(y)], there exists a sequence of vertices (y_n)_{n≥1} going to infinity such that lim_{n→∞} q(y_n) = q.
Proof. Assume that there exist q_1 < q_2 such that there are sequences (y^1_n)_{n≥1} and (y^2_n)_{n≥1} going to infinity for which lim_{n→∞} q(y^i_n) = q_i, but there does not exist a sequence y_n going to infinity for which q_1 < lim_{n→∞} q(y_n) < q_2. We will derive a contradiction. We do so via the following claim.
To see that this claim is true, we use Lemma 7.4 and Corollary 3.9 to get the existence of N_1 and N_2 such that (38) and (39) hold. Thus, taking equations (38) and (39) together, we obtain q(x) − q(y) ≤ ε.
Since x, y are arbitrary neighbours, this implies the claim upon taking N = N_1 ∨ N_2. By Corollary 3.12, we know that the graph G is one-ended, as the potential kernel is assumed to be well defined. Take ε > 0 so small that q_2 > q_1 + 3ε. By the assumption on q_1, q_2, we thus have that for each n large enough there exist two neighbouring vertices x, y ∉ B(o, n) satisfying q(y) > q_2 − ε > q_1 + 2ε > q(x) + ε, so that q(y) > q(x) + ε, a contradiction.

Harnack inequality for conditioned walk
Notice that the conditioned random walk, viewed as a Doob h-transform, may be viewed as a random walk on the original graph G but with new conductances given by ĉ(x, y) = a(x)a(y) for each edge {x, y} ∈ e(G). Indeed, the symmetry of this function is obvious, as is its non-negativity, and since a is harmonic for the original graph Laplacian ∆ away from o, the random walk associated with these conductances coincides with the Doob h-transform description of the conditioned walk. We can thus consider the network (G, ĉ), which is transient by Proposition 7.1. It will be useful to consider the graph Laplacian ∆̂ associated with these conductances, defined by setting

(∆̂h)(x) = Σ_{y∼x} ĉ(x, y)(h(y) − h(x))
for a function h defined on the vertices of G; note that h does not need to be defined at o (the edges incident to o carry zero conductance, since a(o) = 0). We will say that a function h : v(G) \ {o} → R is harmonic (w.r.t. the network (G, ĉ)) whenever ∆̂h ≡ 0; this is of course equivalent to the mean-value property h(x) = Σ_y p̂(x, y)h(y) for all x ≠ o. It should come as little surprise that the anchored Harnack inequality (Theorem 6.6) implies (in fact, is equivalent to, although this will not be needed) an elliptic Harnack inequality on the graph G with conductance function ĉ, at least when viewed from the root (i.e., for exhaustion sequences centred on the root o).
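The claim that the walk on the network (G, ĉ) coincides with the Doob transform rests on the identity Σ_{y∼x} a(y) = deg(x) a(x) for x ≠ o. On Z this can be verified mechanically; the sketch below (our own names, not the paper's notation) checks that the two descriptions of the conditioned walk produce the same transition probabilities.

```python
from fractions import Fraction

def a(x):
    # Potential kernel on Z relative to o = 0.
    return abs(x)

def c_hat(x, y):
    # Conductance c_hat(x, y) = a(x) a(y) on each edge of Z.
    return Fraction(a(x) * a(y)) if abs(x - y) == 1 else Fraction(0)

def network_step(x, y):
    # Transition probability of the random walk on the network (G, c_hat):
    # conductance of the edge over the total conductance at x.
    total = c_hat(x, x - 1) + c_hat(x, x + 1)
    return c_hat(x, y) / total

def doob_step(x, y):
    # Doob a-transform kernel p_hat(x, y) = a(y)/a(x) * p(x, y) for y != 0.
    if y == 0 or abs(x - y) != 1:
        return Fraction(0)
    return Fraction(a(y), 2 * a(x))

# The two descriptions agree away from o, because
# sum_{y ~ x} a(y) = deg(x) a(x) by harmonicity of a outside o.
for x in [1, 2, 5, -3]:
    for y in (x - 1, x + 1):
        assert network_step(x, y) == doob_step(x, y)
```

Note that the edge {0, ±1} automatically has conductance 0 because a(0) = 0, which is the network-side expression of the killing at o.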
Proposition 7.7. There exists C > 1 such that the following holds. Suppose the graph G satisfies the standing assumptions, and let ĥ : v(G) \ {o} → R_+ be harmonic with respect to (G, ĉ). Then

max_{x∈∂Λ_a(R)} ĥ(x) ≤ C min_{x∈∂Λ_a(R)} ĥ(x).

Equivalently, the max and the min could (by the maximum principle) be taken over Λ_a(R) instead of ∂Λ_a(R).
Proof. Since the graph satisfies the standing assumptions, it satisfies the anchored Harnack inequality of Theorem 6.6. Furthermore, ĥ(x) is ∆̂-harmonic if and only if x ↦ a(x)ĥ(x) is harmonic for ∆ away from o. Since |a(x) − R| ≤ 1 for x ∈ ∂Λ_a(R), we obtain the result immediately.
As a corollary we obtain the Liouville property for X̂: the network (G, ĉ) does not carry any non-constant bounded harmonic functions. This implies in turn that the invariant σ-algebra I of the CRW is trivial.

Corollary 7.8. The network (G, ĉ) satisfies the Liouville property, that is: any function h : v(G) \ {o} → R that is harmonic and bounded must be constant.
Proof. Let h be a bounded harmonic function with respect to (G, ĉ). Define the function ĥ = h − inf h, which is non-negative and harmonic. Moreover, for each ε > 0, there exists a vertex x_ε such that ĥ(x_ε) ≤ ε. Take R_ε so large that x_ε ∈ Λ_a(R_ε). By the Harnack inequality (Proposition 7.7) we deduce that for all x ∈ Λ_a(R_ε),

0 ≤ ĥ(x) ≤ C ĥ(x_ε) ≤ Cε.

Since ε is arbitrary, and C depends on neither R_ε nor ε, this shows the desired result.

Infinite intersection of two conditioned walks
We finish this section by showing that two independent conditioned random walks have traces that intersect infinitely often (for simplicity, here the CRWs are conditioned not to hit the same root o). We manage to prove this under two (different) additional assumptions. We start by adding the assumption that (G, o) is random and reversible.
Proposition 7.10. Suppose that (G, o) is a reversible random graph such that a.s. it is recurrent and a.s. the potential kernel is well defined. Let X̂, Ŷ be two independent CRWs started from x, y ∈ v(G) respectively, avoiding o. Then a.s.

P̂(|{X̂_n : n ≥ 0} ∩ {Ŷ_n : n ≥ 0}| = ∞) = 1.
Proof. Suppose that (G, o) has infinitely many 1/3-good vertices, and call the set of such vertices A := A(G, o). Since there are various sources of randomness here, it is useful to recall that the underlying probability measure P̂ is always conditional on the rooted graph (G, o). Then by Lemma 7.9, we know that Since Ŷ is independent of X̂ (when conditioned on (G, o)), we can use Lemma 7.9 again to see that on an event of P̂-probability 1, Taking expectation w.r.t. X̂, we deduce that the traces of X̂ and Ŷ intersect infinitely often P̂-almost surely, conditioned on (G, o) having infinitely many 1/3-good vertices. However, Lemma 5.4 implies that, under our assumptions on (G, o), this happens with P-probability one, showing the desired result.
A consequence of the infinite intersection property is that the (random) network (G, ĉ) is a.s. Liouville. We therefore get a new proof of the Liouville property for the conditioned walk (already obtained in Corollary 7.8), but this time without using the Harnack inequality. On the other hand, [BCG12] proved that for planar graphs the Liouville property is in fact equivalent to the infinite intersection property, and this result extends without any additional arguments to the case of planar networks. By Proposition 7.7 and Corollary 7.8 we thus also obtain, as a corollary of [BCG12], the infinite intersection property for planar networks such that the potential kernel tends to infinity.

Proposition 7.11. Suppose G is a (not necessarily reversible) planar graph satisfying the standing assumptions. Let X̂ and Ŷ be two independent CRWs avoiding o, started from x, y ∈ v(G) respectively. Then the traces of X̂ and Ŷ intersect infinitely often almost surely.

Remark 7.12. It will be useful for us to recall that the infinite intersection property implies that one walk intersects the loop-erasure of the other: where LE(X̂) is the loop erasure of X̂, and X̂, Ŷ are two CRWs that do not hit the root o, started from x, y respectively. See [LPS03] for this result.

(a) implies (d): One-endedness of the uniform spanning tree
In this section we show that the uniform spanning tree is one-ended, provided that the underlying graph satisfies the standing assumptions and is either planar or unimodular. In particular, since on unimodular graphs a(x) → ∞ along any sequence x → ∞ (see Proposition 5.1), we prove that (a) implies (d) in Theorem 1.1.
Theorem 8.1. Suppose that (G, o) is a reversible, recurrent graph for which the potential kernel is a.s. well defined and such that a(x) → ∞ along any sequence x → ∞. Then the uniform spanning tree is one-ended almost surely.
Before proving this theorem, we start with a few preparatory lemmas. We write T for the uniform spanning tree and begin by recalling the following "path reversal" for the simple random walk, a standard result. In what follows, fix the vertex o ∈ v(G); it plays no particular role (in the random setting) other than to simplify the notation.

Lemma 8.2 (Path reversal). Let o, u ∈ v(G). For any subset of paths P where a path ϕ ∈ P if and only if the reversal of the path is in P.
See Exercise 2.1(d) in [LP16]. The next result says that the random walk started from o and stopped when hitting u, conditioned to hit u before returning to o, looks locally like a conditioned random walk when u is far away. This is an extension of Lemma 7.2 and its proof is similar.
Lemma 8.3. For each M ∈ N and each ε > 0, there exists an L such that for all u ∉ Λ_a(L) and uniformly over all paths ϕ going from o to ∂Λ_a(M),

Consider the oriented uniform spanning tree obtained in this fashion. Note that if x, y are two vertices on a bi-infinite path of T, then it makes sense to ask whether y is in the past of x or vice versa: exactly one of these alternatives must hold.
We are now ready to start with the proof of Theorem 8.1.
Proof of Theorem 8.1. Notice that if G is a graph satisfying the standing assumptions (recurrent, the potential kernel a(x) is well defined, and a(x) → ∞ as x → ∞) and is moreover planar, or random and unimodular, then (almost surely) G satisfies the intersection property for CRWs (cIP) due to Propositions 7.11 and 7.10 respectively. Suppose (G, o) is reversible and satisfies the standing assumptions a.s. For a vertex x of G, consider the event A_2(x) that there are two disjoint simple paths from x to infinity in the UST T; in other words, that there is a bi-infinite path going through x. Note that it is sufficient to prove P(A_2(x)) = 0 for each x ∈ v(G) a.s., where we remind the reader that here P is conditional given the graph (i.e., it averages over the spanning tree T). Indeed, for the tree T to be more than one-ended, there must be at least one simple path in T which goes to infinity in both directions. By biasing and unbiasing by the degree of the root to get a unimodular graph, it is sufficient to prove that P(A_2(o)) = 0 a.s. Therefore it is sufficient to prove P(A_2(o)) = 0, where we remind the reader that P also averages over the graph. Suppose for contradiction that P(A_2(o)) ≥ ε > 0.
The idea is that if this is the case, then it is possible for both A_2(o) and A_2(x) to hold simultaneously for many other vertices x, including vertices far away from o. However, T is connected (since G is recurrent) and, by Theorem 6.2 and Proposition 7.1 in [AL07], T is at most two-ended. Therefore the bi-infinite paths going through x and o must coincide: essentially, the bi-infinite path containing o must be almost space-filling. Suppose x is in the past of o (which we can assume without loss of generality by reversibility). Using Wilson's algorithm rooted at infinity to sample first the path from o and then the one from x, the event A_2(o) ∩ A_2(x) requires a very unlikely behaviour: namely, a random walk starting from x must hit the loop-erasure of the conditioned walk starting from o exactly at o. This is precisely what Lemma 8.4 shows is unlikely, because of the infinite intersection property.
Let us now give the details. Given G, we sample k independent random walks (X^1, ..., X^k) from o, independently of T, where k = k(ε) will be chosen below. Observe that by stationarity of (G, o), we have for every n ≥ 0, P(A_2(X^i_n)) = P(A_2(o)) ≥ ε.
First we show that we can choose k such that for every n, there are i and j such that A_2(X^i_n) ∩ A_2(X^j_n) holds with P-probability at least ε/2. Indeed, fix n ≥ 0 arbitrarily for now, and write Then, by the Bonferroni inequalities, Choosing k = ⌈2/ε⌉, we deduce that for some 1 ≤ i < j ≤ k,

The theorem guaranteeing the existence of the potential kernel (Theorem 1.4) applies a.s. to these maps. We now discuss a more quantitative statement concerning the harmonic measure from infinity, which underlines substantial differences with the usual square lattice. We will write B_euc(x, n) for the ball of vertices z ∈ v(G) such that the Euclidean distance between z and x (w.r.t. the natural embedding) is at most n.
In fact, we expect the following stronger result to hold: Indeed, sharp values for a and b can be conjectured by considering the minimal and maximal exponents for the LQG volume of a Euclidean ball of radius ε in a γ-quantum cone, which all decay polynomially as ε → 0 (see Lemma A.1 in [BG20]). We also conjecture that this holds for, say, the UIPT.
Based on this, we conjecture that max(a, b) < 1/2. This would be in stark contrast with the square lattice Z^2, where we recall that a = b = 1/2 (see e.g. [Pop21]). The upper bound in (49) is of course stated in Theorem 9.1, so it is the lower bound in (49) that we are asking about. While we are not able to prove this, we may use the unimodularity of the law P to prove a slightly weaker lower bound:

Proof. Let P̃ denote the law P after degree biasing. We write (G̃, õ) for the random graph with law P̃. On the one hand, by reversibility of P̃, we know that P̃(hm_{õ,X_n}(X_n) > 1 − δ) = P̃(hm_{õ,X_n}(õ) > 1 − δ) = P̃(hm_{õ,X_n}(X_n) < δ).
On the other hand, by Theorem 9.1 and the reverse Fatou lemma, we have The result now follows by contradiction: indeed, suppose that with positive probability there is a positive asymptotic fraction of vertices x ∈ B(n) with hm_{õ,x}(x) < δ; then the random walk would spend a positive fraction of its time at such points, giving a contradiction.
Throughout the proof, take the constants C, α such that Lemmas 9.4, 9.5 and 9.6 and Proposition 9.7 hold simultaneously with the same constants.
Proof. The second statement follows immediately from the first statement, from the identity lim sup_{y→∞} q(y) = lim sup_{y→∞} hm_{y,o}(y) in Corollary 7.5, and from the fact that for each ε > 0 there are infinitely many (1/2 − ε)-good vertices by Lemma 5.4. We are thus left to prove the first statement.
To that end, fix N_0 so large that for all n ≥ N_0, n^{2/α}/(n − 1)^{2/α} ≤ 3. By Lemmas 9.5 and 9.6 respectively, it holds that P(H^c_m) ≤ m^{−α} and P(R^c_m) ≤ m^{−α}. Therefore, using again a Borel–Cantelli argument, there exists some (random) N_2 ≥ N_1 ≥ N_0 such that almost surely, for all n ≥ N_2, the events H_{n^{2/α}} and R_{n^{2/α}} occur. In particular, we know that almost surely,

Figure 1: A schematic drawing. In dark gray, the set B_eff(R). The blue parts are B̄_eff(R) \ B_eff(R). The red area (and everything inside it) is then the sublevel set Λ_a(R).
Define next, for m ≥ 1, the event E_m that

a(x, o) ≤ C log(m) for all x ∈ B_euc(m). (50)

By Proposition 9.7, we know that P(E^c_m) ≤ log(m)^{−α}, and therefore Σ_{n=1}^∞ P(E^c_{e^{n^{2/α}}}) < ∞. By Borel–Cantelli, this implies that there is some (random) N_1 = N_1(G, o) < ∞ such that E_{e^{n^{2/α}}} occurs for all n ≥ N_1. Suppose without loss of generality that N_1 ≥ N_0 almost surely. In this case, it follows that

a(x, o) ≤ C log |x| for all x ∉ B_euc(o, N_1). (51)

Next, define the event H_m that for all x ∈ B_euc(3m) \ B_euc(m) and all h : v(G) → R_+ harmonic outside of x,

max_{z∈∂B_euc(x,|x|)} h(z) ≤ C min_{z∈∂B_euc(x,|x|)} h(z),

and the event R_m that for all x ∈ B_euc(3m),

R_eff(x ↔ ∂B_euc(x, m)) ≥ (1/C) log(m).
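The Borel–Cantelli step above only needs the elementary identity that, along the subsequence m = e^{n^{2/α}}, the bound P(E^c_m) ≤ log(m)^{−α} becomes exactly n^{−2}, which is summable. A quick numerical sanity check (the values of α below are illustrative choices of ours, kept small enough that exp(n^{2/α}) stays within floating-point range):

```python
import math

def tail_bound(n, alpha):
    # P(E_m^c) <= log(m)^(-alpha), evaluated along the subsequence
    # m = exp(n^(2/alpha)).
    m = math.exp(n ** (2 / alpha))
    return math.log(m) ** (-alpha)

# Along this subsequence the bound equals n^(-2), hence is summable:
for alpha in (0.5, 1.0, 2.0):
    for n in (2, 3, 5):
        assert abs(tail_bound(n, alpha) - n ** (-2)) < 1e-9

# Partial sums of n^(-2) stay below pi^2/6, so Borel-Cantelli applies:
partial = sum(n ** (-2.0) for n in range(1, 10**5))
assert partial < math.pi ** 2 / 6
```

The point of the subsequence e^{n^{2/α}} is precisely to turn the logarithmic tail bound into a summable power of n.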
Next, pick N_2 such that all z ∉ B(o, N_2) have R_eff(o ↔ z) > 4. Let x, y ∉ B(o, N_1 ∨ N_2) be neighbours. Due to (33), we have |a(x) − a(y)| ≤ 1, and by the triangle inequality for effective resistance also R_eff