Non-ubiquity in high dimensions
The goal of this section is to prove the following.
Proposition 4.1
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\), let \(\mathfrak {F}\) be the uniform spanning forest of \(\mathbb {G}\), let H be a finite hypergraph with boundary, and let \(r\ge 1\). Then the following hold:
-
(1)
If H has a subhypergraph that does not have any d-buoyant coarsenings, then H is not faithfully ubiquitous in \(\mathcal {C}^{hyp}_r(\mathfrak {F})\) almost surely.
-
(2)
If every quotient \(H'\) of H such that \(R_\mathbb {G}(H') \le r\) has a subhypergraph that does not have any d-buoyant coarsenings, then H is not ubiquitous in \(\mathcal {C}^{hyp}_r(\mathfrak {F})\) almost surely.
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d > 4\), and let \(\mathfrak {F}\) be the uniform spanning forest of \(\mathbb {G}\). Let \(H=(\partial V,V_\circ ,E)\) be a finite hypergraph with boundary such that \(E \ne \emptyset \), and let \(r\ge 1\). Recall that \(\mathscr {W}(x,\xi )\) is defined to be the event that \(\xi \) is a witness for the faithful presence of H at x. For each \(N> n\), we define
$$\begin{aligned} S^H_x(n,N) = \sum _{\xi \in \Xi _{\bullet x}(n,N)} \mathbb {1}\left[ \mathscr {W}(x,\xi )\right] . \end{aligned}$$
For each \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) and each \((\xi _e )_{e\in E} \in \mathbb {V}^{E}\), we also define
$$\begin{aligned} W^H(x,\xi ) = \prod _{u \in \partial V} \langle x_u, \{\xi _e: e \perp u\} \rangle ^{-(d-4)} \prod _{u \in V_\circ } \langle \{\xi _e: e \perp u\} \rangle ^{-(d-4)} \end{aligned}$$
and
$$\begin{aligned} \mathbb {W}^H_x(n,N) = \sum _{\xi \in \Xi _x(n,N)} W^H(x,\xi ), \end{aligned}$$
so that, if we choose a vertex \(u(e) \perp e\) arbitrarily for each \(e\in E\) and set \((\xi _e)_{e\in E} = (\xi _{(e,u(e))})_{e\in E}\), it follows from Proposition 2.2 that
$$\begin{aligned} \mathbb {E}\left[ S^H_{x}(n,N)\right] = \sum _{\xi \in \Xi _{\bullet x}(n,N)} \mathbb {P}(\mathscr {W}(x,\xi )) \preceq \sum _{\xi \in \Xi _{x}(n,N)} W^H(x,\xi ) = \mathbb {W}^H_{x}(n,N) \end{aligned}$$
for every x, n, and N.
To avoid trivialities, in the case that H does not have any edges we define \(\mathbb {W}^H_x(n,N)=1\) for every \(x\in \mathbb {V}^{\partial V}\) and \(N>n\).
In order to prove Proposition 4.1, it will suffice to show that if H has a subhypergraph with boundary that does not have any d-buoyant coarsenings, then for every \(\varepsilon >0\) there exists a collection of vertices \((x_u)_{u\in \partial V}\) such that the vertices \(x_u\) are all in different components of \(\mathfrak {F}\) with probability at least 1/2 (which, by Theorem 2.1, will be the case if the vertices are all far away from each other), but \(\mathbb {P}(H\text { is faithfully present at }x)= \mathbb {P}(S^H_{x}(0,\infty ) >0) \le \varepsilon \). In order to prove this, we seek to obtain upper bounds on the quantity \(\mathbb {W}^H_{x}(n,N)\). We begin by considering the case of a single distant scale, that is, the case that \(N-n\) is a constant and all the points of x are contained in \(\Lambda _x(0,n-1)\). Recall that \(\hat{\eta }_{d}(H)\) is defined to be \(\min \{\eta _{d}(H') : H'\text { is a coarsening of }H\}\).
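The quantities \(\eta _d\) and \(\hat{\eta }_d\) are straightforward to compute mechanically for small hypergraphs. The following sketch is illustrative only and is not part of the argument: it assumes that a coarsening merges each class of an equivalence relation on E into a single edge incident to the union of the incidence sets of its members (leaving the vertex set unchanged), and it identifies d-buoyancy with \(\eta _d \le 0\), consistent with how \(\hat{\eta }_d\) is used in Lemmas 4.8 and 4.9 below.

```python
from math import inf

def eta(d, edges, n_interior):
    """eta_d(H) = (d-4)*Delta - d*|E| - (d-4)*|V_interior|, where each edge is
    represented by the frozenset of vertices it is incident to."""
    delta = sum(len(e) for e in edges)
    return (d - 4) * delta - d * len(edges) - (d - 4) * n_interior

def partitions(items):
    """Enumerate all set partitions of a list, used here to enumerate coarsenings."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def eta_hat(d, edges, n_interior):
    """Minimum of eta_d over all coarsenings: each block of a partition of E is
    merged into a single edge incident to the union of the blocks' vertex sets."""
    best = inf
    for part in partitions(list(edges)):
        merged = [frozenset().union(*block) for block in part]
        best = min(best, eta(d, merged, n_interior))
    return best

# Two boundary vertices joined by a single edge, no interior vertices:
# eta_d = 2(d-4) - d = d - 8, so (identifying buoyancy with eta_d <= 0)
# this hypergraph has a d-buoyant coarsening exactly when d <= 8.
edge = frozenset({'u', 'v'})
print(eta(8, [edge], 0), eta_hat(9, [edge], 0))   # prints: 0 1
```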
Lemma 4.2
(A single distant scale) Let \(\mathbb {G}\) be a d-dimensional transitive graph and let H be a finite hypergraph with boundary. Then for every \(m \ge 0\), there exists a constant \(c=c(\mathbb {G},H,m)\) such that
$$\begin{aligned} \log _2\mathbb {W}^H_x(n,n+m) \le -\hat{\eta }_d(H) \, n + |E|^2\log _2 n + c \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) and all n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\).
It will be useful for applications in Sect. 4.3 to prove a more general result. A graph \(\mathbb {G}\) is said to be d-Ahlfors regular if there exists a positive constant c such that \(c^{-1} r^d \le |B(x,r)| \le cr^d\) for every \(r\ge 1\) and every \(x \in \mathbb {V}\) (in which case we say \(\mathbb {G}\) is d-Ahlfors regular with constant c). Given \(\alpha >0\) and a finite hypergraph with boundary H, we define
$$\begin{aligned} \eta _{d,\alpha }(H) = (d-2\alpha )\Delta - d|E| - (d-2\alpha )|V_\circ |, \end{aligned}$$
where we recall that \(\Delta =\sum _{e\in E}\deg (e) = \sum _{v\in V} \deg (v)\), and define \(\hat{\eta }_{d,\alpha }(H) = \min \{\eta _{d,\alpha }(H') : H'\text { is a coarsening of }H\}\). Given a graph \(\mathbb {G}\), a finite hypergraph with boundary \(H=(\partial V, V_\circ , E)\), and points \((x_v)_{v\in \partial V}\), \((\xi _e)_{e\in E}\) we also define
$$\begin{aligned} W_\alpha ^H(x,\xi ) = \prod _{u \in \partial V} \langle x_u, \{\xi _e: e \perp u\} \rangle ^{-(d-2\alpha )} \prod _{u \in V_\circ } \langle \{\xi _e: e \perp u\} \rangle ^{-(d-2\alpha )} \end{aligned}$$
and, for each \(N> n\),
$$\begin{aligned} \mathbb {W}^{H,\alpha }_{x}(n,N) = \sum _{\xi \in \Xi _x(n,N)} W_\alpha ^H(x,\xi ). \end{aligned}$$
Note that \(\eta _d=\eta _{d,2}\) and \(\mathbb {W}_x^H=\mathbb {W}_x^{H,2}\), so that Lemma 4.2 follows as a special case of the following lemma.
Lemma 4.3
(A single distant scale, generalised) Let \(\mathbb {G}\) be a d-Ahlfors regular graph with constant \(c'\), let H be a finite hypergraph with boundary, and let \(\alpha \in \mathbb {R}\) be such that \(d\ge 2\alpha \). Then for every \(m \ge 0\), there exists a constant \(c=c(c',H,\alpha ,d,m)\) such that
$$\begin{aligned} \log _2\mathbb {W}^{H,\alpha }_{x}(n,n+m) \le -\hat{\eta }_{d,\alpha }(H) \, n + |E|^2\log _2 n + c \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) and all n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\).
Before proving this lemma, we will require a quick detour to analyze a relevant optimization problem.
Optimization on the ultrametric polytope
Recall that a (semi)metric space (X, d) is an ultrametric space if \(d(x,y) \le \max \{d(x,z),d(z,y)\}\) for every three points \(x,y,z\in X\). For each finite set A, the ultrametric polytope on A is defined to be
$$\begin{aligned} \mathcal {U}_A = \left\{ (x_{a,b})_{a,b \in A} \in [0,1]^{A^2} : \begin{array}{l} x_{a,a}=0 \text { for all }a \in A,\, x_{a,b}=x_{b,a} \text { for all }a,b\in A, \\ \text {and } x_{a,b} \le \max \left\{ x_{a,c},x_{c,b}\right\} \text {for all }a,b,c \in A \end{array} \right\} , \end{aligned}$$
which is a compact subset of \(\mathbb {R}^{A^2}\). We consider \(\mathcal {U}_A\) to be the set of all ultrametrics on A with distances bounded by 1. We write \(\mathcal {P}(A^2)\) for the set of subsets of \(A^2\).
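For example, when \(A=\{a,b,c\}\) the ultrametric constraint says precisely that the maximum of \(x_{a,b}\), \(x_{a,c}\), and \(x_{b,c}\) is attained by at least two of the three distances, so that every triangle in an ultrametric space is isosceles with its two longest sides equal.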
Lemma 4.4
Let A be a finite non-empty set, and let \(F:\mathbb {R}^{A^2}\rightarrow \mathbb {R}\) be of the form
$$\begin{aligned} F(x) = \sum _{k=1}^K c_k \min \{x_{a,b} : (a,b) \in W_k\}, \end{aligned}$$
where \(K<\infty \), \(c_1,\ldots ,c_K \in \mathbb {R}\), and \(W_1,\ldots , W_K \in \mathcal {P}(A^2)\). Then the maximum of F on \(\mathcal {U}_A\) is obtained by an ultrametric for which all distances are either zero or one. That is,
$$\begin{aligned} \max \{F(x) : x \in \mathcal {U}_A\} = \max \left\{ F(x) : x \in \mathcal {U}_A,\, x_{a,b} \in \{0,1\} \text { for all }a,b\in A\right\} . \end{aligned}$$
Proof
We prove the claim by induction on |A|. The case \(|A|=1\) is trivial. Suppose that the claim holds for all sets with cardinality less than that of A. We may assume that \((a,a) \notin W_k\) for every \(1\le k \le K\) and \(a \in A\), since if \((a,a)\in W_k\) for some \(1\le k \le K\) then the term \(c_k \min \{x_{a,b} : (a,b) \in W_k\}\) is identically zero on \(\mathcal {U}_A\). We write \(\mathbf {1}\) for the vector
$$\begin{aligned} \mathbf {1}_{(a,b)}=\mathbb {1}(a\ne b). \end{aligned}$$
It is easily verified that
$$\begin{aligned} F(\lambda x) = \lambda F(x) \quad \text { and } \quad F(x+\alpha \mathbf {1}) = F(x) + \alpha F(\mathbf {1}) \end{aligned}$$
for every \(x\in \mathbb {R}^{A^2}\), every \(\lambda \ge 0\), and every \(\alpha \in \mathbb {R}\).
Suppose \(y\in \mathcal {U}_A\) is such that \(F(y) = \max _{x\in \mathcal {U}_A} F(x)\). We may assume that \(F(y)>F(\mathbf {1})\) and that \(F(y)>F(0)=0\), since otherwise the claim is trivial. Let \(m = \min \{ y_{a,b} : a,b \in A, a\ne b\}\), which is less than one by assumption. We have that
$$\begin{aligned} \frac{y}{1-m}- \frac{m}{1-m}\mathbf {1} \in \mathcal {U}_A \end{aligned}$$
and
$$\begin{aligned} F\left( \frac{y}{1-m}- \frac{m}{1-m}\mathbf {1}\right) = \frac{F(y)}{1-m} - \frac{mF(\mathbf {1})}{1-m} = F(y) + \frac{m}{1-m}(F(y)-F(\mathbf {1})), \end{aligned}$$
and so we must have \(m=0\) since y maximizes F.
Define an equivalence relation \(\bowtie \) on A by letting a and b be related if and only if \(y_{a,b}=0\). We write \(\hat{a}\) for the equivalence class of a under \(\bowtie \). Let C be the set of equivalence classes of \(\bowtie \), and let \(\phi : \mathcal {U}_C \rightarrow \mathcal {U}_A\) be the function defined by
$$\begin{aligned} \phi (x)_{a,b} = x_{\hat{a}, \hat{b}} \end{aligned}$$
for every \(x\in \mathcal {U}_C\). For each \(1\le k \le K\), let \(\hat{W}_k\) be the set of pairs \((\hat{a}, \hat{b}) \in C^2\) such that \((a,b) \in W_k\) for some a in the equivalence class \(\hat{a}\) and b in the equivalence class \(\hat{b}\). Let \(\hat{F} : \mathcal {U}_C \rightarrow \mathbb {R}\) be defined by
$$\begin{aligned} \hat{F}(x) = \sum _{k=1}^K c_k \min \{x_{\hat{a},\hat{b}} : (\hat{a}, \hat{b}) \in \hat{W}_k \}. \end{aligned}$$
We have that \(\hat{F} = F \circ \phi \). Moreover, since \(m=0\) the relation \(\bowtie \) has a non-singleton class, so that \(|C|<|A|\), and since y is an ultrametric it is constant on pairs of equivalence classes and hence lies in the image of \(\phi \). We therefore deduce from the induction hypothesis that
$$\begin{aligned} \max \{F(x): x \in \mathcal {U}_A\}&= \max \{\hat{F}(x) : x \in \mathcal {U}_C\}\\&= \max \{\hat{F}(x) : x \in \mathcal {U}_C,\, x_{\hat{a}, \hat{b}} \in \{0,1\} \text { for all }\hat{a}, \hat{b} \in C\}, \end{aligned}$$
completing the proof. \(\square \)
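The following is a quick numerical sanity check of Lemma 4.4, not part of the proof: for \(|A|=3\) it enumerates all ultrametrics with values in a finite grid containing 0 and 1, draws a random objective F of the stated form, and compares the maximum over the grid ultrametrics with the maximum over \(\{0,1\}\)-valued ultrametrics; the two agree, as the lemma predicts.

```python
import itertools, random

A = ['a', 'b', 'c']
pairs = [(a, b) for a in A for b in A if a != b]     # off-diagonal ordered pairs

def is_ultrametric(x):
    # x maps all ordered pairs (including the diagonal) to values in [0, 1]
    return all(x[(a, b)] <= max(x[(a, c)], x[(c, b)]) + 1e-12
               for a in A for b in A for c in A)

def F(x, terms):
    # an objective of the form in Lemma 4.4: terms is a list of (c_k, W_k)
    return sum(c * min(x[p] for p in W) for c, W in terms)

def ultrametrics(values):
    # enumerate symmetric x with zero diagonal and off-diagonal entries in `values`,
    # keeping only those that satisfy the ultrametric inequality
    unordered = [('a', 'b'), ('a', 'c'), ('b', 'c')]
    for vals in itertools.product(values, repeat=len(unordered)):
        x = {(a, a): 0.0 for a in A}
        for (a, b), v in zip(unordered, vals):
            x[(a, b)] = x[(b, a)] = v
        if is_ultrametric(x):
            yield x

random.seed(0)
terms = [(random.uniform(-1, 1), random.sample(pairs, random.randint(1, 3)))
         for _ in range(4)]

grid = [i / 4 for i in range(5)]                     # crude discretisation of [0, 1]
best_grid = max(F(x, terms) for x in ultrametrics(grid))
best_01 = max(F(x, terms) for x in ultrametrics([0.0, 1.0]))
print(best_grid, best_01)    # the two maxima agree, as Lemma 4.4 predicts
```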
We will also require the following generalisation of Lemma 4.4. For each finite collection of disjoint finite sets \(\{A_i\}_{i\in I}\) with union \(A = \bigcup _{i\in I} A_i\), we define
$$\begin{aligned} \mathcal {U}_{\{A_i\}_{i\in I}} = \{ x \in \mathcal {U}_{A} : x_{a,b}=1 \text { for every distinct }i,j \in I\text { and every }a \in A_i\text { and }b \in A_j\}. \end{aligned}$$
Lemma 4.5
Let \(\{A_i\}_{i\in I}\) be a finite collection of disjoint, finite, non-empty sets with union \(A = \bigcup _{i\in I}A_i\), and let \(F:\mathbb {R}^{A^2}\rightarrow \mathbb {R}\) be of the form
$$\begin{aligned} F(x) = \sum _{k=1}^K c_k \min \{x_{a,b} : (a,b) \in W_k\}, \end{aligned}$$
where \(K<\infty \), \(c_1,\ldots ,c_K \in \mathbb {R}\), and \(W_1,\ldots , W_K \in \mathcal {P}(A^2)\). Then the maximum of F on \(\mathcal {U}_{\{A_i\}_{i\in I}}\) is obtained by an ultrametric for which all distances are either zero or one. That is,
$$\begin{aligned} \max \{F(x) : x \in \mathcal {U}_{\{A_i\}_{i\in I}}\} = \max \left\{ F(x) : x \in \mathcal {U}_{\{A_i\}_{i\in I}},\, x_{a,b} \in \{0,1\} \text { for all } a,b\in A\right\} . \end{aligned}$$
Proof
We prove the claim by fixing the index set I and inducting on |A|. The case \(|A|=|I|\) is trivial. Suppose that the claim holds for all collections of finite disjoint sets indexed by I with total cardinality less than that of A. We may assume that \((a,a) \notin W_k\) for every \(1\le k \le K\) and \(a \in A\), since if \((a,a)\in W_k\) for some \(1\le k \le K\) then the term \(c_k \min \{x_{a,b} : (a,b) \in W_k\}\) is identically zero on \(\mathcal {U}_A\). Furthermore, we may assume that each \(W_k\) contains a pair of distinct elements of some set \(A_i\), since otherwise the term \(c_k \min \{x_{a,b} : (a,b) \in W_k\}\) is equal to the constant \(c_k\) on \(\mathcal {U}_{\{A_i\}_{i\in I}}\). We write \(\mathbf {1}\) and \(\mathbf {i}\) for the vectors
$$\begin{aligned} \mathbf {1}_{a,b} = \mathbb {1}(a\ne b) \end{aligned}$$
and
$$\begin{aligned} \mathbf {i}_{a,b} = \mathbb {1}(a\ne b,\text { and }a,b\in A_i\text { for some }i \in I). \end{aligned}$$
It is easily verified that
$$\begin{aligned} F(\lambda x) = \lambda F(x) \quad \text { and } \quad F(x+\alpha \mathbf {i}) = F(x) + \alpha F(\mathbf {1}) \end{aligned}$$
for every \(x\in \mathcal {U}_{\{A_i\}_{i\in I}}\), every \(\lambda \ge 0\), and every \(\alpha \in \mathbb {R}\) such that \(x + \alpha \mathbf {i} \in \mathcal {U}_{\{A_i\}_{i\in I}} \).
The rest of the proof is similar to that of Lemma 4.4. \(\square \)
Back to the uniform spanning forest
We now return to the proofs of Proposition 4.1 and Lemma 4.3.
Proof of Lemma 4.3
In this proof, implicit constants will be functions of \(c',H,\alpha ,d\) and m. The case that \(E = \emptyset \) is trivial (by the assumption that \(d \ge 2 \alpha \)), so we may assume that \(|E|\ge 1\).
Write \(\Xi =\Xi _x(n,n+m)\). First, observe that
$$\begin{aligned} \langle x_u, \{\xi _e: e \perp u\} \rangle \asymp 2^{n} \langle \{\xi _e: e \perp u\} \rangle \end{aligned}$$
for every \(\xi \in \Xi \) and \(u \in \partial V\), and hence that
$$\begin{aligned} \mathbb {W}^{H,\alpha }_{x}(n,n+m)&= \sum _{\xi \in \Xi } \prod _{u \in \partial V} \langle x_u, \{\xi _e: e \perp u\} \rangle ^{-(d-2\alpha )} \prod _{u \in V_\circ } \langle \{\xi _e: e \perp u\} \rangle ^{-(d-2\alpha )} \\&\preceq 2^{-(d-2\alpha )|\partial V|n} \sum _{\xi \in \Xi } \prod _{u \in V} \langle \{\xi _e: e \perp u\} \rangle ^{-(d-2\alpha )}. \end{aligned}$$
Let L be the set of symmetric functions \(\ell :E^2 \rightarrow \{0,\ldots ,n\}\) such that \(\ell (e,e)=0\) for every \(e\in E\). For each \(\ell \in L\), let
$$\begin{aligned} \Xi _\ell = \left\{ \xi \in \Xi : \begin{array}{l} 2^{\ell (e,e')} \le \langle \xi _e \xi _{e'} \rangle \le 2^{\ell (e,e')+m+1} \text { for all }e,e' \in E \end{array} \right\} , \end{aligned}$$
so that \(\Xi = \bigcup _{\ell \in L} \Xi _\ell \). For each \(\ell \) in L, let
$$\begin{aligned} \hat{\ell }(e,e') = \min \left( \{\ell (e,e')\}\cup \left\{ \max \{\ell (e,e_1),\ldots ,\ell (e_k,e')\}: k\ge 1 \text { and } e_1,\ldots ,e_k \in E \right\} \right) . \end{aligned}$$
In other words, \(\hat{\ell }\) is the largest ultrametric on E that is dominated by \(\ell \). Observe that for every \(\ell \in L\), every \(\xi \in \Xi _\ell \), and every \(e,e',e'' \in E\), we have that
$$\begin{aligned} \log _2 \langle \xi _e \xi _{e'} \rangle&\le \log _2\left[ \langle \xi _e \xi _{e''} \rangle + \langle \xi _{e''} \xi _{e'} \rangle \right] \le \log _2\max \{\langle \xi _e \xi _{e''} \rangle ,\, \langle \xi _{e''} \xi _{e'} \rangle \} +1\\&\le \max \{\ell (e,e''),\, \ell (e'',e')\}+2m +3, \end{aligned}$$
and hence, by induction, that
$$\begin{aligned} \log _2 \langle \xi _e \xi _{e'} \rangle \le \hat{\ell }(e,e')+(2m+3)|E| \approx \hat{\ell }(e,e'). \end{aligned}$$
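The function \(\hat{\ell }\) is the usual subdominant (minimax-path) ultrametric associated to \(\ell \), and can be computed by a Floyd–Warshall-type min–max closure; the sketch below (illustrative only, on a hypothetical three-edge example) is one way to do this.

```python
def subdominant_ultrametric(ell, E):
    """Largest ultrametric dominated by the symmetric function ell on E x E:
    hat_ell(e, e') = min over chains e = e_0, e_1, ..., e_k = e' of the largest
    step ell(e_{i-1}, e_i), computed by a Floyd-Warshall-style min-max closure."""
    hat = {(e, f): ell[(e, f)] for e in E for f in E}
    for g in E:                                   # allow chains routed through g
        for e in E:
            for f in E:
                hat[(e, f)] = min(hat[(e, f)], max(hat[(e, g)], hat[(g, f)]))
    return hat

# Toy example: three 'edges' with a symmetric ell and zero diagonal.
E = ['e1', 'e2', 'e3']
ell = {(e, e): 0 for e in E}
for (e, f), v in {('e1', 'e2'): 5, ('e2', 'e3'): 2, ('e1', 'e3'): 9}.items():
    ell[(e, f)] = ell[(f, e)] = v
hat = subdominant_ultrametric(ell, E)
print(hat[('e1', 'e3')])   # 5: the chain e1 -> e2 -> e3 has largest step 5
```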
Let \(e_1,\ldots ,e_{|E|}\) be an enumeration of E. For every \(\ell \in L\), every \(1 \le j < i \le |E|\) and every \(\xi \in \Xi _\ell \) we have that
$$\begin{aligned} \xi _{e_i} \in B\left( \xi _{e_j},\, 2^{\hat{\ell }(e_i,e_j)+(2m+3)|E|}\right) \text { and } \left| B\left( \xi _{e_j},\, 2^{\hat{\ell }(e_i,e_j)+(2m+3)|E|}\right) \right| \preceq 2^{d\hat{\ell }(e_i,e_j)}. \end{aligned}$$
By considering the number of choices we have for \(\xi _{e_i}\) at each step given our previous choices, it follows that
$$\begin{aligned} \log _2 |\Xi _\ell | \lesssim dn+ d\sum _{i=2}^{|E|}\min \left\{ \hat{\ell }(e_i,e_j) : j<i\right\} . \end{aligned}$$
(4.1)
Now, for every \(\xi \in \Xi _\ell \), we have that
$$\begin{aligned}&\log _2 \prod _{u \in V} \langle \{\xi _e: e \perp u\} \rangle ^{-(d-2\alpha )} \nonumber \\&\quad \approx -(d-2\alpha )\sum _{u \in V}\sum _{i=2}^{|E|}\mathbb {1}(e_i \perp u) \min \left\{ \ell (e_i,e_j) : j<i,\, e_j \perp u\right\} \nonumber \\&\quad = -(d-2\alpha )\sum _{i=2}^{|E|}\sum _{u \perp e_i}\min \left\{ \ell (e_i,e_j) : j<i,\, e_j \perp u\right\} . \end{aligned}$$
(4.2)
Thus, from (4.1) and (4.2) we have that
$$\begin{aligned}&\log _2 \sum _{\xi \in \Xi _\ell } \prod _{u \in V} \langle \{\xi _e: e \perp u \} \rangle ^{-(d-2\alpha )}\nonumber \\&\quad \lesssim dn+ \sum _{i=2}^{|E|}\left[ d\min \{\hat{\ell }(e_i,e_j) : j<i\} -(d-2\alpha ) \sum _{u \perp e_i}\min \{\ell (e_i,e_j) : j<i,\, e_j \perp u\}\right] . \nonumber \\ \end{aligned}$$
(4.3)
Let \(Q: L \rightarrow \mathbb {R}\) be defined to be the expression on the right hand side of (4.3). We clearly have that \(Q(\hat{\ell }) \ge Q(\ell )\) for every \(\ell \in L\), and so there exists \(\ell \in L\) maximizing Q such that \(\ell \) is an ultrametric. It follows from Lemma 4.4 (applied to the normalized ultrametric \(\ell /n\)) that there exists \(\ell \in L\) maximizing Q such that \(\ell \) is an ultrametric and every value of \(\ell \) is in \(\{0,n\}\). Fix one such \(\ell \), and define an equivalence relation \(\bowtie \) on E by letting \(e \bowtie e'\) if and only if \(\ell (e,e')=0\), which is an equivalence relation since \(\ell \) is an ultrametric. Observe that, for every \(2 \le i \le |E|\),
$$\begin{aligned} \min \{\ell (e_i,e_j) : j< i\} = \mathbb {1}[e_j\text { is not in the equivalence class of }e_i\text { for any }j<i]\, n, \end{aligned}$$
and hence that
$$\begin{aligned} dn + d\sum _{i=2}^{|E|} \min \{\ell (e_i,e_j) : j < i\} = d\,|\{\text {equivalence classes of }\bowtie \}| \, n. \end{aligned}$$
Similarly, we have that, for every vertex u of H,
$$\begin{aligned} \sum _{i=2}^{|E|} \min \{\ell (e_i,e_j) : j < i, e_j \perp u\} = \left( |\{\text {equivalence classes of }\bowtie \text { incident to } u\}|-1\right) \,n, \end{aligned}$$
where we say that an equivalence class of \(\bowtie \) is incident to u if it contains an edge that is incident to u. Thus, we have that
$$\begin{aligned} Q(\ell )= & {} d|\{\text {equivalence classes of } \bowtie \}|\, n\nonumber \\&-(d-2\alpha )\sum _{u \in V} (|\{\text {equivalence classes of } \bowtie \hbox { incident to }u\}|-1)\, n.\nonumber \\ \end{aligned}$$
(4.4)
Let \(H'=H/\!\bowtie \) be the coarsening of H associated to \(\bowtie \) as in Sect. 2.7. We can rewrite (4.4) as
$$\begin{aligned} Q(\ell )&= d|E(H')|\, n -(d-2\alpha )\Delta (H')\, n+(d-2\alpha )|V(H)|\, n\\&= -\eta _{d,\alpha }(H')\, n+(d-2\alpha )|\partial V| \, n. \end{aligned}$$
Since \(|L| \le (n+1)^{|E|^2}\), we deduce that
$$\begin{aligned} \log _2\mathbb {W}^{H,\alpha }_{x}(n,n+m)\lesssim & {} -(d-2\alpha )|\partial V|\, n + \log _2 \sum _{\ell \in L} \exp _2 Q(\ell )\\\le & {} \max _{\ell \in L}Q(\ell ) - (d-2\alpha )|\partial V|\, n + \log _2|L| \\\lesssim & {} - \hat{\eta }_{d,\alpha }(H)\, n + |E|^2 \log _2 n \end{aligned}$$
as claimed. \(\square \)
Next, we consider the case that the points \(x_v\) are roughly equally spaced and we are summing over points \(\xi \) that are on the same scale as the spacing of the \(x_v\).
Lemma 4.6
(The close scale) Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\) and let H be a finite hypergraph with boundary. Let \(m_1,m_2\ge 0\). Then there exists a constant \(c=c(\mathbb {G},H,m_1,m_2)\) such that
$$\begin{aligned} \log _2\mathbb {W}^H_x(0,n+m_2) \le -\hat{\eta }_d(H)\, n + |E \cup \partial V|^2\log _2 n + c \end{aligned}$$
for every \(n \ge 1\) and every \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) such that \(2^{n-m_1} \le \langle x_u x_v \rangle \le 2^n\) for every distinct \(u,v\in \partial V\).
Proof
We may assume that \(E \ne \emptyset \), the case \(E=\emptyset \) being trivial. For notational convenience, we will write \(\xi _v=x_v\), and consider \(v \perp v\) for every vertex \(v\in \partial V\). Write \(\Xi =\Xi _x(0,n+m_2)\), and observe that for each \(\xi \in \Xi \) and \(e \in E\) there exists at most one \(v\in \partial V\) for which \(\log _2 \langle \xi _e \xi _v\rangle < n-m_1-1\). To account for these degrees of freedom, we define \(\Phi \) to be the set of functions \(\phi :E \cup \partial V \rightarrow \partial V \cup \{\star \}\) such that \(\phi (v)=v\) for every \(v\in \partial V\). For each \(\phi \in \Phi \), let \(L_\phi \) be the set of symmetric functions \(\ell : (E \cup \partial V)^2 \rightarrow \{0,\ldots ,n\}\) such that \(\ell (e,e)=0\) for every \(e\in E \cup \partial V\) and \(\ell (e,e')=n\) for every \(e,e' \in E \cup \partial V\) such that \(\phi (e)\ne \phi (e')\). For each \(\phi \in \Phi \) and \(\ell \in L_\phi \), let
$$\begin{aligned} \Xi _{\phi ,\ell }= & {} \left\{ \xi \in \Xi : \ell (e,e')- m_1 - 1 \le \log _2 \langle \xi _e \xi _{e'} \rangle \le \ell (e,e') + m_2\right. \\&\left. +\,1 \text { for every }e,e' \in E \cup \partial V\right\} , \end{aligned}$$
and observe that \(\Xi = \bigcup _{\phi \in \Phi } \bigcup _{\ell \in L_\phi } \Xi _{\phi ,\ell }\).
Now, for each \(\phi \in \Phi \) and \(\ell \in L_\phi \), let \(\hat{\ell }\) be the largest ultrametric on \(E \cup \partial V\) that is dominated by \(\ell \). Observe that \(\hat{\ell } \in L_\phi \), and that, as in the previous lemma, we have that
$$\begin{aligned} \log _2 \langle \xi _e \xi _{e'} \rangle \lesssim \hat{\ell }(e,e') \end{aligned}$$
for every \(e,e' \in E \cup \partial V\).
Let \(e_1,\ldots ,e_{|E|}\) be an enumeration of E, and let \(e_0,e_{-1},\ldots ,e_{-|\partial V|+1}\) be an enumeration of \(\partial V\). As in the proof of the previous lemma, we have the volume estimate
$$\begin{aligned} \log _2 |\Xi _{\phi ,\ell }| \lesssim d\sum _{i=1}^{|E|}\min \{\hat{\ell }(e_i,e_j) : j<i\} \end{aligned}$$
(4.5)
Now, for every \(\xi \in \Xi _{\phi ,\ell }\), we have that, similarly to the previous proof,
$$\begin{aligned} \log _2 W^H(x,\xi )&\approx -(d-4)\sum _{i=1}^{|E|}\sum _{u \perp e_i}\min \{\ell (e_i,e_j) : j<i,\, e_j \perp u\}. \end{aligned}$$
(Recall that we are considering \(u\perp u\) for each \(u \in \partial V\).) Thus, we have
$$\begin{aligned}&\log _2 \sum _{\xi \in \Xi _{\phi ,\ell }} W^H(x,\xi )\nonumber \\&\quad \lesssim \sum _{i=1}^{|E|}\left[ d\min \{\hat{\ell }(e_i,e_j) : j<i\} -(d-4) \sum _{u \perp e_i}\min \{\ell (e_i,e_j) : j<i,\, e_j \perp u\}\right] .\nonumber \\ \end{aligned}$$
(4.6)
Let \(Q: L_\phi \rightarrow \mathbb {R}\) be defined to be the expression on the right hand side of (4.6). Similarly to the previous proof but applying Lemma 4.5 instead of Lemma 4.4, there is an \(\ell \in L_\phi \) maximizing Q such that \(\ell \) is an ultrametric and \(\ell (e,e') \in \{0,n\}\) for all \(e,e' \in E \cup \partial V\). Fix one such \(\ell \), and define an equivalence relation \(\bowtie \) on \(E \cup \partial V\) by letting \(e \bowtie e'\) if and only if \(\ell (e,e')=0\), which is an equivalence relation since \(\ell \) is an ultrametric. Similarly to the proof of the previous lemma, we can compute that
$$\begin{aligned} Q(\ell )= & {} dn\left| \{\text {equivalence classes of }\bowtie \text { that are contained in }E\}\right| \\&-(d-4)n\sum _{u \in \partial V} \left| \{\text {equivalence classes of } \bowtie \text { incident to }u\text { that do not contain }u\}\right| \\&-(d-4)n \sum _{u \in V_\circ } \left( \left| \{\text {equivalence classes of }\bowtie \text { incident to }u\}\right| -1\right) . \end{aligned}$$
Since \(d>4\) and each equivalence class of \(\bowtie \) can contain at most one vertex of \(\partial V\), we see that Q increases if we remove a vertex \(v\in \partial V\) from its equivalence class. Since \(\ell \) was chosen to maximize Q, we deduce that the equivalence class of v under \(\bowtie \) is a singleton for every \(v\in \partial V\). Thus, there exists an ultrametric \(\ell \in L_\phi \) maximizing Q such that \(\ell (e,e')\in \{0,n\}\) for every \(e,e' \in E\) and \(\ell (e,v)=n\) for every \(e\in E\) and \(v\in \partial V\). Letting \(\bowtie '\) be the equivalence relation on E (rather than \(E \cup \partial V\)) corresponding to such an optimal \(\ell \), we have
$$\begin{aligned} Q(\ell )= & {} dn\left| \{\text {equivalence classes of } \bowtie '\}\right| \nonumber \\&-(d-4)n\sum _{u \in \partial V} \left| \{\text {equivalence classes of } \bowtie '\text { incident to }u\}\right| \nonumber \\&-(d-4)n \sum _{u \in V_\circ } \left( \left| \{\text {equivalence classes of }\bowtie '\text { incident to }u\}\right| -1\right) .\nonumber \\ \end{aligned}$$
(4.7)
The rest of the proof is similar to that of Lemma 4.3. \(\square \)
We can now bootstrap from the single scale estimates Lemmas 4.2 and 4.6 to a multi-scale estimate. Given a hypergraph with boundary \(H=(\partial V, V_\circ , E)\) and a set of edges \(E'\subseteq E\), we write \(V_\circ (E')=\bigcup _{e\in E'}\{v\in V_\circ : v \perp e\}\) and define \(H(E') = (\partial V, V_\circ (E'), E')\).
Lemma 4.7
(Induction estimate) Let \(\mathbb {G}\) be a d-dimensional transitive graph and let H be a finite hypergraph with boundary. Then there exists a constant \(c=c(\mathbb {G},H)\) such that
$$\begin{aligned}&\log _2\left[ \mathbb {W}^H_x(0,N+|E|+2) - \mathbb {W}^H_x(0,N)\right] \\&\quad \le \max _{E' \subsetneq E} \left\{ \log _2 \mathbb {W}^{H(E')}_x(0,N+|E|+2)\right. \\&\qquad \left. - \left[ \hat{\eta }_d(H) -\hat{\eta }_d(H(E')) \right] \, N + |E{\setminus } E'|^2\log _2 N \right\} + c \end{aligned}$$
for every \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) and every N such that \(\langle x_u x_v \rangle \le 2^{N-1}\) for all \(u,v \in \partial V\).
Note that when \(|E|\ge 1\) we must consider the term \(E'=\emptyset \) when taking the maximum in this lemma, which gives \(-\hat{\eta }_d(H) N + |E|^2 \log _2 N\).
Proof
The claim is trivial in the case \(E=\emptyset \), so suppose that \(|E|\ge 1\). Let \(\Xi = \Xi _x(0,N+|E|+2) {\setminus } \Xi _x(0,N)\) so that
$$\begin{aligned} \mathbb {W}^H_x(0,N+|E|+2) - \mathbb {W}^H_x(0,N) \le \sum _{\xi \in \Xi } W^H(x,\xi ). \end{aligned}$$
For each \(E'\subsetneq E\) and every \(1 \le m \le |E|+2\), let
$$\begin{aligned} \Xi ^{E', m} = \left( \Lambda _x(0,N+m-1)\right) ^{E'} \times \left( \Lambda _x(N+m,N+|E|+2)\right) ^{E {\setminus } E'}. \end{aligned}$$
Observe that if \(\xi \in \Xi \) then, by the pigeonhole principle, there must exist \(1 \le m \le |E|+2\) such that \(\xi _e\) is not in \(\Lambda _x(N+m-1,N+m)\) for any \(e \in E\), and we deduce that
$$\begin{aligned} \Xi = \bigcup \left\{ \Xi ^{E',m} : E'\subsetneq E,\, 1\le m \le |E|+2 \right\} . \end{aligned}$$
Thus, to prove the lemma it suffices to show that
$$\begin{aligned}&\log _2 \sum _{\xi \in \Xi ^{E',m}} W^H(x,\xi ) \lesssim \log _2 \mathbb {W}^{H(E')}_x(0,N+|E|+2) - \left( \hat{\eta }_d(H) -\hat{\eta }_d(H(E')) \right) \, N \nonumber \\&\quad + |E {\setminus } E'|^2\log _2 N \end{aligned}$$
(4.8)
whenever \(1\le m \le |E|+2\) and \(E' \subsetneq E\). If \(E'=\emptyset \) then this follows immediately from Lemma 4.2, so we may suppose not.
To this end, fix \(E' \subsetneq E\) with \(|E'|\ge 1\) and write \(H'=H(E') = (\partial V,V_\circ (E'),E') = (\partial V, V_\circ ',E')\). Choose some \(v_0 \in \partial V\) arbitrarily, and write \(x_v = x_{v_0}\) for every \(v \in V_\circ '\). Then for every \(\xi \in \Xi ^{E',m}\), using the fact that we have the empty scale \(\Lambda _x(N+m-1,N+m)\) separating \(\{\xi _e : e \in E'\}\) from \(\{\xi _e : e \notin E'\}\), we have that
$$\begin{aligned} \left\langle \{x_u\} \cup \{\xi _e: e \perp u\} \right\rangle \asymp \left\langle \{x_u\}\cup \{\xi _e: e\in E',\, e \perp u\} \right\rangle \left\langle \{x_u\}\cup \{\xi _e: e \notin E',\, e \perp u\} \right\rangle \end{aligned}$$
for every vertex \(u\in \partial V\),
$$\begin{aligned} \left\langle \{\xi _e: e \perp u\} \right\rangle \asymp \left\langle \{\xi _e: e\in E',\, e \perp u\} \right\rangle \left\langle \{x_u\}\cup \{\xi _e: e \notin E',\, e \perp u\} \right\rangle \end{aligned}$$
for every vertex \(u \in V'_\circ \), and that, trivially,
$$\begin{aligned} \left\langle \{\xi _e: e \perp u\} \right\rangle = \left\langle \{\xi _e: e \notin E',\, e \perp u\} \right\rangle \end{aligned}$$
for every vertex \(u \in V_\circ {\setminus } V'_\circ \). Define a hypergraph with boundary \(H'' =(\partial V'', V_\circ '', E'',\perp '')\) by setting
$$\begin{aligned} \partial V''= & {} \partial V \cup V_\circ ',\qquad V_\circ '' = V_\circ {\setminus } V_\circ ',\qquad V''= \partial V'' \cup V_\circ ''=V,\qquad E'' = E {\setminus } E',\\ \text {and }\perp ''= & {} \perp \cap \, (V'' \times E''). \end{aligned}$$
For each \(\xi \in \Xi ^{E',m}\), let \(\xi '=(\xi '_e)_{e\in E'}=(\xi _e)_{e\in E'}\) and \( \xi ''=( \xi ''_e)_{e\in E''} = (\xi _e)_{e\in E''}\). Then the above displays imply that
$$\begin{aligned} W^H(x,\xi ) \asymp W^{H'}\left( x,\xi '\right) \cdot W^{H''}\big (x, \xi ''\big ) \end{aligned}$$
for every \(\xi \in \Xi ^{E',m}\). Thus, summing over \(\xi ' \in (\Lambda _x(0,N+m-1))^{E'}\) and \( \xi '' \in (\Lambda _x(N+m,N+|E|+2))^{E''}\), we obtain that
$$\begin{aligned} \log _2 \sum _{\xi \in \Xi ^{E',m}} W^H(x,\xi )&\lesssim \log _2 \mathbb {W}^{H'}_x(0,N+m-1) + \log _2 \mathbb {W}^{H''}_x(N+m,N+|E|+2) \nonumber \\&\lesssim \log _2 \mathbb {W}^{H'}_x(0,N+|E|+2) - \hat{\eta }_d(H'') N + |E''|^2 \log _2 N, \end{aligned}$$
(4.9)
where the second inequality follows from Lemma 4.2.
To deduce (4.8) from (4.9), it suffices to show that
$$\begin{aligned} \hat{\eta }_d(H) \le \hat{\eta }_d(H') + \hat{\eta }_d(H''). \end{aligned}$$
(4.10)
To this end, let \(\bowtie '\) be an equivalence relation on \(E'\) and let \(\bowtie ''\) be an equivalence relation on \(E''\). We can define an equivalence relation \(\bowtie \) on E by setting \(e \bowtie e'\) if and only if either \(e,e' \in E'\) and \(e \bowtie ' e'\) or \(e,e' \in E''\) and \(e \, \bowtie '' \,e'\). We easily verify that \(\Delta (H/\!\bowtie ) = \Delta (H'/\!\bowtie ') + \Delta (H''/\!\bowtie '')\), \(|V_\circ (H/\!\bowtie )| = |V_\circ (H'/\!\bowtie ')| + |V_\circ (H''/\!\bowtie '')|\), and \(|E(H/\!\bowtie )| = |E(H'/\!\bowtie ')| + |E(H''/\!\bowtie '')|\), so that
$$\begin{aligned} \eta _d(H/\!\bowtie ) = \eta _d(H'/\!\bowtie ') + \eta _d(H''/\!\bowtie ''), \end{aligned}$$
and the inequality (4.10) follows by taking the minimum over \(\bowtie '\) and \(\bowtie ''\). \(\square \)
We now use Lemmas 4.6 and 4.7 to perform an inductive analysis of \(\mathbb {W}\). Although we are mostly interested in the non-buoyant case, we begin by controlling the buoyant case.
Lemma 4.8
(Many scales, buoyant case) Let H be a finite hypergraph with boundary. Let \(m \ge 1\), and suppose that \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) is such that \(2^{n-m} \le \langle x_u x_v \rangle \le 2^{n-1}\) for all distinct \(u,v \in \partial V\). If every subhypergraph of H has a d-buoyant coarsening, then there exists a constant \(c=c(\mathbb {G},H,m)\) such that
$$\begin{aligned} \log _2\mathbb {W}^H_x(0,N) \le -{\hat{\eta }}_d(H) \, N + (|E \cup \partial V|^2+1)\log _2 N + c \end{aligned}$$
for all \(N\ge n\).
Proof
We induct on the number of edges in H. The claim is trivial when \(E = \emptyset \). Suppose that \(|E|\ge 1\) and that the claim holds for all finite hypergraphs with boundary that have fewer edges than H. By assumption, \(\hat{\eta }_d(H') \le 0\) for all subhypergraphs \(H'\) of H. Thus, it follows from the induction hypothesis that
$$\begin{aligned} \log _2\mathbb {W}^{H'}_x(0,N+|E|+2) \lesssim -{\hat{\eta }}_d(H') \, N + (|E' \cup \partial V'|^2+1)\log _2 N \end{aligned}$$
for each proper subhypergraph \(H'\) of H, and hence that
$$\begin{aligned}&\log _2 \mathbb {W}^{H'}_x(0,N+|E|+2) - \left[ \hat{\eta }_d(H) -\hat{\eta }_d(H') \right] N + |E{\setminus } E'|^2\log _2 N \\&\quad \lesssim - \hat{\eta }_d(H) \, N + (|E' \cup \partial V|^2+1 + |E{\setminus } E'|^2) \log _2 N. \end{aligned}$$
(Note that the implicit constants depending on \(H'\) from the induction hypothesis are bounded by a constant depending on H since H has only finitely many subhypergraphs.) Observe that whenever \(E' \subsetneq E\) we have that
$$\begin{aligned} |E' \cup \partial V|^2+1 + |E{\setminus } E'|^2 \le |E\cup \partial V|^2, \end{aligned}$$
and so we deduce that
$$\begin{aligned}&\log _2 \mathbb {W}^{H'}_x(0,N+|E|+2) - \left[ \hat{\eta }_d(H) -\hat{\eta }_d(H') \right] N + |E{\setminus } E'|^2\log _2 N\\&\quad \lesssim - \hat{\eta }_d(H) \, N + |E \cup \partial V|^2 \log _2 N \end{aligned}$$
for every proper subhypergraph \(H'\) of H. Thus, we have that
$$\begin{aligned} \log _2 \left[ \mathbb {W}^H_x(0,N+1) - \mathbb {W}^H_x(0,N) \right]&\le \log _2 \left[ \mathbb {W}^H_x(0,N+|E|+2) - \mathbb {W}^H_x(0,N) \right] \\&\lesssim - \hat{\eta }_d(H) \, N + |E \cup \partial V|^2 \log _2 N \end{aligned}$$
for all \(N \ge n\), where we applied Lemma 4.7 in the second inequality. Summing from n to N we deduce that
$$\begin{aligned} \mathbb {W}^H_x(0,N) - \mathbb {W}^H_x(0,n)&\preceq \sum _{i=n}^N \exp _2\left[ - \hat{\eta }_d(H)\,i + |E\cup \partial V|^2\log _2 i\right] \\&\preceq \exp _2\left[ - \hat{\eta }_d(H)\,N + (|E\cup \partial V|^2+1)\log _2 N \right] . \end{aligned}$$
Using Lemma 4.6 to control the term \(\mathbb {W}^H_x(0,n)\) completes the induction. \(\square \)
We are now ready to perform a similar induction for the non-buoyant case. Note that in this case the induction hypothesis concerns probabilities rather than expectations. This is necessary because the expectations can grow as \(N\rightarrow \infty \) for the wrong reasons if H has a buoyant coarsening but has a subhypergraph that does not have a buoyant coarsening (e.g. the tree in Fig. 3).
Lemma 4.9
(Every scale, non-buoyant case) Let H be a finite hypergraph with boundary such that \(E\ne \emptyset \), let \(m \ge 1\), and suppose that H has a subhypergraph that does not have any d-buoyant coarsenings. Then there exist positive constants \(c_1=c_1(\mathbb {G},H,m)\) and \(c_2=c_2(\mathbb {G},H,m)\) such that
$$\begin{aligned} \log _2\mathbb {P}(S^H_{x}(0,\infty ) > 0) \le -c_1 \, n + |E\cup \partial V|^2\log _2 n + c_2 \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in \mathbb {V}^{\partial V}\) such that \(2^{n-m} \le \langle x_u x_v \rangle \le 2^{n-1}\) for all distinct \(u,v \in \partial V\).
Proof
We induct on the number of edges in H. For the base case, suppose that H has a single edge. In this case we must have that \(\eta _d(H)>0\), and we deduce from Lemmas 4.2 and 4.6 that
$$\begin{aligned} \mathbb {W}^H_x(0,N)&\le \mathbb {W}^H_x(0,n)+ \sum _{i=n+1}^N \mathbb {W}^H_x(i-1,i)\\&\preceq \exp _2\left[ -\hat{\eta }_d(H)\, n + |E\cup \partial V|^2\log _2 n \right] \\&\quad + \sum _{i=n+1}^N \exp _2\left[ -\hat{\eta }_d(H) \, i + |E|^2\log _2 i \right] \\&\preceq \exp _2\left[ -\hat{\eta }_d(H) \, n + |E\cup \partial V|^2\log _2 n\right] , \end{aligned}$$
so that the claim follows from Markov’s inequality. This establishes the base case of the induction.
Now suppose that \(|E|>1\) and that the claim holds for all finite hypergraphs with boundary that have fewer edges than H. If H has a proper subhypergraph \(H'\) with \(\hat{\eta }_d(H')>0\), then \(S^{H'}_x(0,\infty )\) is positive if \(S^{H}_x(0,\infty )\) is, and so the claim follows from the induction hypothesis, letting \(c_1(\mathbb {G},H,m)=c_1(\mathbb {G},H',m)\) and \(c_2(\mathbb {G},H,m)=c_2(\mathbb {G},H',m)\).
Thus, it suffices to consider the case that \(\hat{\eta }_d(H) >0\) but that \(\hat{\eta }_d(H')\le 0\) for every proper subhypergraph \(H'\) of H. In this case, we apply Lemma 4.7 to deduce that
$$\begin{aligned}&\log _2 \left[ \mathbb {W}^H_x(0,N+1) - \mathbb {W}^H_x(0,N) \right] \le \log _2 \left[ \mathbb {W}^H_x(0,N+|E|+2) - \mathbb {W}^H_x(0,N) \right] \\&\quad \lesssim \max _{E'\subsetneq E} \left\{ \log _2 \mathbb {W}^{H(E')}_x(0,N+|E|+2) \right. \\&\qquad \left. - \left[ \hat{\eta }_d(H) -\hat{\eta }_d(H(E')) \right] \, N + |E{\setminus } E'|^2\log _2 N \right\} . \end{aligned}$$
Lemma 4.8 then yields that
$$\begin{aligned}&\log _2 \left[ \mathbb {W}^H_x(0,N+1) - \mathbb {W}^H_x(0,N) \right] \\&\quad \lesssim - \hat{\eta }_d(H) \, N + (|E'\cup \partial V|^2+1 +|E{\setminus } E'|^2 ) \log _2 N\\&\quad \lesssim - \hat{\eta }_d(H) \, N + |E\cup \partial V|^2 \log _2 N. \end{aligned}$$
Finally, combining this with Lemma 4.6 yields that, since \(\hat{\eta }_d(H)>0\),
$$\begin{aligned} \mathbb {W}_x^H(0,N)&\preceq \exp _2\left[ -\hat{\eta }_d(H)\, n + |E\cup \partial V|^2\log _2 n \right] \\&\quad + \sum _{i=n}^N \exp _2\left[ -\hat{\eta }_d(H)\,i+|E\cup \partial V|^2\log _2 i\right] \\&\preceq \exp _2\left[ -\hat{\eta }_d(H)\, n + |E\cup \partial V|^2\log _2 n\right] , \end{aligned}$$
and the claim follows from Markov’s inequality.\(\square \)
Proof of Proposition 4.1
Let H be a finite hypergraph with boundary that has a subhypergraph that does not have any d-buoyant coarsenings, so that in particular H has at least one edge. Lemma 4.9 and Theorem 2.1 imply that for every \(\varepsilon >0\), there exists \(x=(x_v)_{v\in \partial V}\) such that the points \(x_v\) are all in different components of \(\mathfrak {F}\) with probability at least \(1-\varepsilon \), but H has probability at most \(\varepsilon \) to be faithfully present at x in the component hypergraph \(\mathcal {C}^{hyp}_r(\mathfrak {F})\). It follows that H is not faithfully ubiquitous in the component hypergraph \(\mathcal {C}^{hyp}_r(\mathfrak {F})\) a.s.
Now suppose that H is a hypergraph with boundary such that every quotient \(H'\) of H such that \(R_\mathbb {G}(H') \le r\) has a subhypergraph that does not have any d-buoyant coarsenings. Note that if \(H'\) is a quotient of H such that \(R_\mathbb {G}(H') > r\) then \(H'\) is not faithfully present anywhere in \(\mathbb {G}\) a.s. This follows immediately from the definition of \(R_\mathbb {G}(H')\). On the other hand, Lemma 4.9 and Theorem 2.1 imply that for every \(\varepsilon >0\), there exists \(x=(x_v)_{v\in \partial V}\) such that the points \(x_v\) are all in different components of \(\mathfrak {F}\) with probability at least \(1-\varepsilon \), but, for each quotient \(H'\) of H with \(R_\mathbb {G}(H') \le r\), the hypergraph \(H'\) has probability at most \(\varepsilon / |\{\)quotients of \(H\}|\) to be faithfully present at x in the component hypergraph \(\mathcal {C}^{hyp}_r(\mathfrak {F})\), since \(H'\) must have a subhypergraph none of whose coarsenings are d-buoyant by assumption. It follows by a union bound that H has probability at most \(\varepsilon \) to be present in \(\mathcal {C}^{hyp}_r(\mathfrak {F})\) at this x.
It follows as above that H is not ubiquitous in the component hypergraph \(\mathcal {C}^{hyp}_r(\mathfrak {F})\) a.s. \(\square \)
Positive probability of robust faithful presence in low dimensions
Recall that if \(\mathbb {G}\) is a d-dimensional transitive graph, \(H=(\partial V, V_\circ , E)\) is a finite hypergraph with boundary, \(r\ge 1\), and \((x_v)_{v\in \partial V}\) is a collection of points in \(\mathbb {G}\), we say that H is r-robustly faithfully present at \(x=(x_v)_{v\in \partial V}\) if there is an infinite collection \(\{ \xi ^i = (\xi ^i_{(e,v)})_{(e,v)\in E_\bullet } : i \ge 1 \}\) such that \(\xi ^i\) is a witness for the r-faithful presence of H at x for every i, and \(\xi ^i_{(e,v)} \ne \xi ^j_{(e',v')}\) for every \(i, j \ge 1\) and \((e,v),(e',v') \in E_\bullet \) such that \(i \ne j\). As in the introduction, for each \(M\ge 1\) we let \(R_\mathbb {G}(M)\) be minimal such that it is possible for a set of diameter \(R_\mathbb {G}(M)\) to intersect M distinct components of the uniform spanning forest of \(\mathbb {G}\), and let \(R_\mathbb {G}(H) = R_\mathbb {G}(\max _{e\in E} \deg (e) ).\)
We say that a set \(W \subset \mathbb {V}\) is well-separated if the vertices of W are all in different components of the uniform spanning forest \(\mathfrak {F}\) with positive probability.
Lemma 4.10
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\), and let \(\mathfrak {F}\) be the uniform spanning forest of \(\mathbb {G}\). Then a finite set \(W \subset \mathbb {V}\) is well-separated if and only if when we start a collection of independent simple random walks \(\{X^v : v \in W\}\) at the vertices of W, the event that \(\{ X^u_i : i\ge 0 \} \cap \{X^v_i :i \ge 0 \} = \emptyset \) for every distinct \(u,v\in W\) has positive probability.
Proof
We will be brief since the statement is intuitively obvious from Wilson’s algorithm and the details are somewhat tedious. The ‘if’ implication follows trivially from Wilson’s algorithm. To see the reverse implication, suppose that W is well-separated and consider the paths \(\{(\Gamma ^v_i)_{i\ge 0} : v\in W\}\) from the vertices of W to infinity in \(\mathfrak {F}\). Using Wilson’s algorithm and the Green function estimate (2.4), it is easily verified that
$$\begin{aligned} \lim _{i\rightarrow \infty } \sum _{v\in W} \sum _{u\in W {\setminus } \{v\}} \left[ \langle \Gamma ^v_i \Gamma ^u_i \rangle ^{-d+4} + \sum _{j=0}^{i-1} \langle \Gamma ^v_i \Gamma ^u_j \rangle ^{-d+2}\right] =0 \end{aligned}$$
(4.11)
almost surely on the event that the vertices of W are all in different components of \(\mathfrak {F}\). Let \(i\ge 1\) and consider the collection of simple random walks \(Y^{v,i}\) started at \(\Gamma ^v_i\) and conditionally independent of each other and of \(\mathfrak {F}\) given \((\Gamma ^v_i)_{v\in W}\), and let \(\tilde{Y}^{v,i}\) be the random path formed by concatenating \((\Gamma ^{v}_j)_{j=1}^i\) with \(Y^{v,i}\). It follows from (4.11) and Markov’s inequality that
$$\begin{aligned} \limsup _{i\rightarrow \infty }\mathbb {P}\left( \bigl \{\tilde{Y}^{v,i}_j : j \ge 0\bigr \} \cap \bigl \{\tilde{Y}^{u,i}_j: j \ge 0\bigr \} = \emptyset \text { for every distinct }u,v\in W\right) = \mathbb {P}(\mathscr {F}(W))>0, \end{aligned}$$
(4.12)
where we recall that \(\mathscr {F}(W)\) is the event that all the vertices of W are in different components of \(\mathfrak {F}\). In particular, it follows that the probability appearing on the left hand side of (4.12) is positive for some \(i_0\ge 0\). The result now follows since the walks \(\{X^v : v \in W\}\) have a positive probability of following the paths \(\Gamma ^v\) for their first \(i_0\) steps, and on this event their conditional distribution coincides with that of \(\{\tilde{Y}^{v,i_0} : v\in W\}\). \(\square \)
The goal of this subsection is to prove criteria for robust faithful presence to occur with positive probability. We begin with the case that \(d/(d-4)\) is not an integer (i.e., \(d\notin \{5,6,8\}\)), which is technically simpler. The corresponding proposition for \(d=5,6,8\) is given in Proposition 4.15.
Proposition 4.11
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\) such that \(d/(d-4)\) is not an integer, and let \(\mathfrak {F}\) be the uniform spanning forest of \(\mathbb {G}\). Let H be a finite hypergraph with boundary with at least one edge, and suppose that H has a coarsening all of whose subhypergraphs are d-buoyant. Then for every \(r\ge R_\mathbb {G}(H)\) and every well-separated collection of points \((x_v)_{v\in \partial V}\) in \(\mathbb {V}\), there is a positive probability that the vertices \(x_u\) are all in different components of \(\mathfrak {F}\) and that H is robustly faithfully present at x in \(\mathcal {C}^{hyp}_r(\mathfrak {F})\).
The Proof of Proposition 4.11 will employ the notion of constellations. The reason we work with constellations is that a constellation of witnesses for the presence of H (defined below) necessarily contains a witness for every refinement of H. This allows us to pass to a coarsening and work in the setting that every subhypergraph of H is d-buoyant.
For each finite set A, we define the rooted powerset of A, denoted \(\mathcal {P}_\bullet (A)\), to be
$$\begin{aligned} \mathcal {P}_\bullet (A) := \{(B,b) : B \text { is a subset of }A\hbox { and }b \in B\}. \end{aligned}$$
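For instance, if \(A=\{a,b\}\) then \(\mathcal {P}_\bullet (A) = \{(\{a\},a),(\{b\},b),(\{a,b\},a),(\{a,b\},b)\}\); in general \(|\mathcal {P}_\bullet (A)| = \sum _{k=1}^{|A|} k \binom{|A|}{k} = |A|\, 2^{|A|-1}\).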
We call a set of vertices \(y=(y_{(B,b)})\) of \(\mathbb {G}\) indexed by \(\mathcal {P}_\bullet (A)\) an A-constellation. Given an A-constellation y, we define \(\mathscr {A}_r(y)\) to be the event that \(y_{(B,b)}\) and \(y_{(B',b')}\) are connected in \(\mathfrak {F}\) if and only if \(b=b'\), and in this case they are connected by a path in \(\mathfrak {F}\) with diameter at most r. We say that an A-constellation y in \(\mathbb {G}\) is r-good if it satisfies the following conditions.
-
(1)
\(\langle y_{(B,b)} y_{(B',b')} \rangle \le r\) for every \((B,b),(B',b') \in \mathcal {P}_\bullet (A)\).
-
(2)
\(\langle y_{(B,b)} y_{(B,b')} \rangle \le R_{\mathbb {G}}(|B|) +1\) for every \(B \subseteq A\) and \(b,b' \in B\), and
-
(3)
\(\mathbb {P}(\mathscr {A}_r(y)) \ge 1/r\).
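For instance, if \(A=\{a,b\}\) as above, an A-constellation consists of four points \(y_{(\{a\},a)}, y_{(\{b\},b)}, y_{(\{a,b\},a)}, y_{(\{a,b\},b)}\), and on the event \(\mathscr {A}_r(y)\) the two points rooted at a lie in one component of \(\mathfrak {F}\), the two points rooted at b lie in a different component, and each of these two pairs is joined by a path of diameter at most r in \(\mathfrak {F}\).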
The proof of the following lemma is deferred to Sect. 4.3.
Lemma 4.12
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\). Let A be a finite set. Then there exists \(r=r(|A|)\) such that for every vertex x of \(\mathbb {G}\), there exists an r-good A-constellation contained in the ball of radius r around x.
Let \(H=(\partial V,V_\circ ,E)\) be a finite hypergraph with boundary with at least one edge, and let \(r=r(\max _e\deg (e))\) be as in Lemma 4.12. We write \(\mathcal {P}_\bullet (e) = \mathcal {P}_\bullet (\{v \in V : v \perp e\})\) for each \(e\in E\). For each \(\xi =(\xi _e)_{e\in E}\in \mathbb {V}^E\) and each \(e\in E\), we let \((\xi _{(e,B,v)})_{(B,v) \in \mathcal {P}_\bullet (e)}\) be an r-good e-constellation contained in the ball of radius r about \(\xi _e\), whose existence is guaranteed by Lemma 4.12.
For each \(x=(x_v)_{v\in \partial V}\) and \(\xi =(\xi _e)_{e\in E}\), we define \(\tilde{\mathscr {W}}(x,\xi )\) to be the event that the following conditions hold:
-
(1)
For each boundary vertex \(v \in \partial V\), every point in the set \(\{x_v\} \cup \{\xi _{(e,A,v)} : e\in E, (A,v) \in \mathcal {P}_\bullet (e)\}\) is in the same component of \(\mathfrak {F}\),
-
(2)
For each interior vertex \(v \in V_\circ \), every point in the set \(\{\xi _{(e,A,v)} : e\in E, (A,v) \in \mathcal {P}_\bullet (e)\}\) is in the same component of \(\mathfrak {F}\), and
-
(3)
For any two distinct vertices \(u,v \in V\), the components of \(\mathfrak {F}\) containing the sets \(\{\xi _{(e,A,u)} : e\in E, (A,u) \in \mathcal {P}_\bullet (e)\}\) and \(\{\xi _{(e,A,v)} : e\in E, (A,v) \in \mathcal {P}_\bullet (e)\}\) are distinct.
Thus, on the event \(\tilde{\mathscr {W}}(x,\xi )\) every refinement \(H'\) of H is \(R_\mathbb {G}(H')\)-faithfully present at x: indeed, letting \(\phi _V: V'\rightarrow V\) and \(\phi _E: E'\rightarrow E\) be as in the definition of a coarsening and letting \(A(e') = \{v \in V : \phi ^{-1}_V(v) \perp ' e'\}\) for each \(e'\in E'\), the collection \((\xi _{(e',v')})_{(e',v') \in E_\bullet '} = (\xi _{(\phi _E(e'),A(e'),\phi _V(v'))})_{(e',v') \in E_\bullet '}\) is a witness for the \(R_\mathbb {G}(H')\)-faithful presence of \(H'\) at x.
For each \(n\ge 0\), let \(\Omega _x(n)\) be the set
$$\begin{aligned} \Omega _x(n) = \left\{ (\xi _{e})_{e\in E } \in \Lambda _x(n,n+1)^{E} : \langle \xi _e \xi _{e'} \rangle \ge 2^{n-C_1} \text { for all distinct }e,e' \in E\right\} , \end{aligned}$$
where \(C_1=C_1(E)\) is chosen so that \(\log _2|\Omega _x(n)|\approx nd|E|\) for all n sufficiently large and all x. It is easy to see that such a constant exists using the d-dimensionality of \(\mathbb {G}\). For each \(n\ge 0\) we define \(\tilde{S}_x(n)\) to be the random variable
$$\begin{aligned} \tilde{S}_x(n) := \sum _{\xi \in \Omega _x(n)}\mathbb {1}(\tilde{\mathscr {W}}(x,\xi )), \end{aligned}$$
so that every refinement \(H'\) of H is \(R_\mathbb {G}(H')\)-faithfully present at x on the event that \(\tilde{S}_x(n)\) is positive for some \(n\ge 0\), and every refinement \(H'\) of H is \(R_\mathbb {G}(H')\)-robustly faithfully present at x on the event that \(\tilde{S}_x(n)\) is positive for infinitely many \(n\ge 0\).
The following lemma lower bounds the first moment of \(\tilde{S}_x(n)\).
Lemma 4.13
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\). Let H be a finite hypergraph with boundary with at least one edge, let \(\varepsilon >0\), and suppose that \(x=(x_v)_{v\in \partial V}\) is such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\) and satisfies
$$\begin{aligned} \mathbb {P}\Bigl (\{X^u_i : i \ge 0 \} \cap \{X^v_i : i \ge 0\} =\emptyset \text { for every distinct }u,v\in \partial V\Bigr )\ge \varepsilon \end{aligned}$$
when \(\{ X^v : v \in \partial V\}\) are a collection of independent simple random walks started at \((x_v)_{v\in \partial V}\). Then there exist constants \(c=c(\mathbb {G},H,\varepsilon )\) and \(n_0=n_0(\mathbb {G},H,\varepsilon )\) such that if \(n\ge n_0\) then
$$\begin{aligned} \log _2 \mathbb {P}(\tilde{\mathscr {W}}(x,\xi )) \ge -(d-4)\left( \Delta -|V_\circ |\right) \, n -c \end{aligned}$$
for every \(\xi \in \Omega _x(n)\) and hence that
$$\begin{aligned} \log _2\mathbb {E}[\tilde{S}_x(n)] \ge -\eta _d(H)\, n - c. \end{aligned}$$
The proofs of Lemmas 4.12 and 4.13 are unfortunately rather technical, and are deferred to Sect. 4.3. For the rest of this section, we will take these lemmas as given, and use them to prove Proposition 4.11. The key remaining step is to upper bound the second moment of the random variable \(\tilde{S}_x(n)\).
Lemma 4.14
(Restricted second moment upper bound) Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\) such that \(d/(d-4)\) is not an integer. Let H be a hypergraph with boundary with at least one edge. Suppose that every subhypergraph of H is d-buoyant. Then there exists a positive constant \(c=c(\mathbb {G},H)\) such that
$$\begin{aligned} \log _2\mathbb {E}[\tilde{S}_x(n)^2] \le -2\eta _d(H)\, n + c \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in (\mathbb {V})^{\partial V}\) and all n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\).
Proof
Observe that if \(\xi ,\zeta \in \Omega _x(n)\) are such that the events \(\tilde{\mathscr {W}}(x,\xi )\) and \(\tilde{\mathscr {W}}(x,\zeta )\) both occur, then the following hold:
-
(1)
For each \(v \in V\), there is at most one \(v' \in V\) such that \(\xi _{(e,A,v)}\) and \(\zeta _{(e',A',v')}\) are in the same component of \(\mathfrak {F}\) for some (and hence every) \(e,e' \in E\) and \((A,v) \in \mathcal {P}_\bullet (e)\), \((A',v')\in \mathcal {P}_\bullet (e')\).
-
(2)
For each \(e \in E\), there is at most one \(e'\) such that \(\langle \xi _e \zeta _{e'} \rangle \le 2^{n-C_1-1}\).
As a bookkeeping tool to account for the first of these degrees of freedom, we define \(\Phi \) to be the set of functions \(\phi : V_\circ \rightarrow V_\circ \cup \{\star \}\) such that the preimage \(\phi ^{-1}(v)\) has at most one element for each \(v\in V_\circ \). We write \(\phi ^{-1}(v)=\star \) if v is not in the image of \(\phi \in \Phi \), and write \(\phi (v)=v\) for every \(v\in \partial V\). (Here and elsewhere, we use \(\star \) as a dummy symbol so that we can encode partial bijections by functions.) For each \(\phi \in \Phi \) and \(\xi ,\zeta \in \mathbb {V}^E\), define \(\tilde{\mathscr {W}}_\phi (\xi ,\zeta )\) to be the event that the event \(\tilde{\mathscr {W}}(x,\xi )\cap \tilde{\mathscr {W}}(x,\zeta )\) occurs, and that for any two distinct vertices \(u,v \in V_\circ \) the components of \(\mathfrak {F}\) containing \(\{\xi _{(e,A,u)} : e\in E, (A,u)\in \mathcal {P}_\bullet (e)\}\) and \(\{\zeta _{(e,A,v)}: e\in E, (A,v)\in \mathcal {P}_\bullet (e) \}\) coincide if and only if \(v =\phi (u)\). Thus, we have that
$$\begin{aligned} \tilde{\mathscr {W}}(x,\xi )\cap \tilde{\mathscr {W}}(x,\zeta ) = \bigcup _{\phi \in \Phi } \tilde{\mathscr {W}}_{\phi }(\xi ,\zeta ) \end{aligned}$$
and hence that
$$\begin{aligned} \tilde{S}_x(n)^2 = \sum _{\xi ,\zeta \in \Omega _x(n)} \mathbb {1}[\tilde{\mathscr {W}}(x,\xi ) \cap \tilde{\mathscr {W}}(x,\zeta )] \le \sum _{\phi \in \Phi }\sum _{\xi ,\zeta \in \Omega _x(n)} \mathbb {1}[\tilde{\mathscr {W}}_\phi (\xi ,\zeta )]. \end{aligned}$$
It follows from Proposition 2.2 that
$$\begin{aligned} \mathbb {P}(\tilde{\mathscr {W}}_\phi (\xi ,\zeta ))\preceq & {} \prod _{u \in \partial V}\langle \{x_u\}\cup \{\xi _{e} : e \perp u\},\{\zeta _{e} : e\perp u\} \rangle ^{-(d-4)}\nonumber \\&\cdot \prod _{u \in V_\circ ,\, \phi (u)=\star }\langle \{\xi _{e} : e \perp u\}\rangle ^{-(d-4)}\nonumber \\&\cdot \prod _{u \in V_\circ ,\, \phi ^{-1}(u)=\star }\langle \{\zeta _{e} : e \perp u\}\rangle ^{-(d-4)} \nonumber \\&\cdot \prod _{u \in V_\circ ,\, \phi (u) \ne \star }\langle \{\xi _{e} : e \perp u\} \cup \{\zeta _e : e \perp \phi (u)\}\rangle ^{-(d-4)} \end{aligned}$$
(4.13)
We define \(R_\phi (\xi ,\zeta )\) to be the expression on the right hand side of (4.13), so that
$$\begin{aligned} \mathbb {E}\left[ \tilde{S}_x(n)^2\right] \preceq \sum _{\phi \in \Phi }\sum _{\xi ,\zeta \in \Omega _x(n)} R_\phi (\xi ,\zeta ). \end{aligned}$$
We now account for the second of the two degrees of freedom above. Let \(\Psi \) be the set of functions \(\psi : E \rightarrow E \cup \{\star \}\) such that the preimage \(\psi ^{-1}(e)\) has at most one element for every \(e\in E\). For each \(\psi \in \Psi \) and \(k = (k_e)_{e \in E} \in \{0,\ldots ,n\}^{E}\), let
$$\begin{aligned}&\Omega ^{\psi ,k} \\&\quad =\left\{ (\xi ,\zeta ) \in (\Omega _x(n))^2 : \begin{array}{l} 2^{n-k_e} \le \langle \zeta _e \xi _{\psi (e)} \rangle \le 2^{n-k_e+2} \text { for all }e\in E\hbox { such that }\psi (e) \ne \star ,\\ \text {and }\langle \zeta _e \xi _{e'} \rangle \ge 2^{n-C_1-2} \text { for all }e,e'\in E\hbox { such that }e' \ne \psi (e) \end{array} \right\} , \end{aligned}$$
where \(C_1\) is the constant from the definition of \(\Omega _x(n)\), and observe that
$$\begin{aligned} \log _2|\Omega ^{\psi ,k}| \lesssim 2d|E|n - d\sum _{\psi (e)\ne \star } k_e. \end{aligned}$$
(4.14)
For each \(\xi ,\zeta \in \Omega _x(n)\) and \(e \in E\), there is at most one \(e' \in E\) such that \(\langle \zeta _e \xi _{e'} \rangle \le 2^{n-C_1-2}\), and it follows that
$$\begin{aligned} \left( \Omega _x(n)\right) ^2 = \bigcup _{\psi ,k} \Omega ^{\psi ,k}, \end{aligned}$$
where the union is taken over \(\psi \in \Psi \) and \(k \in \{0,\ldots ,n\}^E\).
Now, for every \((\xi ,\zeta ) \in \Omega ^{\psi ,k}\) and \(u \in V_\circ \) with \(\phi (u)\ne \star \), we have that
$$\begin{aligned}&\log _2 \langle \{\xi _e : e \perp u\}\cup \{\zeta _e : e \perp \phi (u)\}\rangle ^{-(d-4)} \approx -(d-4)\left( \deg (u)+\deg (\phi (u))-1\right) \, n\\&\quad + (d-4) \sum _{e \perp u} \mathbb {1}[\psi (e)\perp \phi (u)] \, k_e. \end{aligned}$$
Meanwhile, we have that
$$\begin{aligned} \log _2 \langle \{\xi _e : e \perp u\} \rangle ^{-(d-4)} \approx \log _2 \langle \{\zeta _e : e \perp u\} \rangle ^{-(d-4)} \approx -(d-4)(\deg (u)-1)\, n \end{aligned}$$
for every \(u\in V_\circ \), and
$$\begin{aligned}&\log _2 \langle x_u,\{\xi _e : e \perp u\},\{\zeta _e : e\perp u\} \rangle ^{-(d-4)} \\&\quad \approx -2(d-4)\deg (u)n + (d-4)\sum _{e \perp u}\mathbb {1}[\psi (e)\perp u]\,k_e \end{aligned}$$
for every \(u\in \partial V\). Summing these estimates yields
$$\begin{aligned} \log _2 R_\phi (\xi ,\zeta )\approx & {} -2(d-4)\Delta n + 2(d-4)|V_\circ |n - (d-4)|\{ v \in V_\circ : \phi (v) \ne \star \}|\,n\\&+ (d-4)\sum _{e}|\{u \perp e : \phi (u) \perp \psi (e)\}|k_e. \end{aligned}$$
Thus, using the volume estimate (4.14), we have that
$$\begin{aligned}&\log _2 \sum _{(\xi ,\zeta ) \in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta ) \lesssim -2\eta _d(H)n - (d-4)|\{ u \in V_\circ : \phi (u) \ne \star \}|\,n\\&\quad + (d-4)\sum _{\psi (e)\ne \star }|\{u \perp e : \phi (u) \perp \psi (e)\}|k_e - d \sum _{\psi (e)\ne \star }k_e. \end{aligned}$$
Observe that for every \(\psi \in \Psi \) and \(e\in E\), we have that
$$\begin{aligned}&\sum _{k_e=0}^n \exp _2\left( \left[ (d-4)|\{u\perp e:\phi (u)\perp \psi (e)\}| - d\right] k_e \right) \\&\quad \preceq {\left\{ \begin{array}{ll} \exp _2\left( \left[ (d-4)|\{u\perp e:\phi (u)\perp \psi (e)\}| - d\right] n\right) &{} \text { if }(d-4)|\{u\perp e:\phi (u)\perp \psi (e)\}| >d\\ n &{} \text { if }(d-4)|\{u\perp e:\phi (u)\perp \psi (e)\}| =d\\ 1 &{} \text { if }(d-4)|\{u\perp e:\phi (u)\perp \psi (e)\}| <d. \end{array}\right. } \end{aligned}$$
Thus, summing over k, we see that for every \(\psi \in \Psi \) and \(\phi \in \Phi \) we have that
$$\begin{aligned}&\log _2 \sum _{k\in \{0,\ldots ,n\}^E}\sum _{(\xi ,\zeta ) \in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta ) \lesssim -2\eta _d(H)\, n - (d-4)|\{ u \in V_\circ : \phi (u) \ne \star \}| n\nonumber \\&\quad +\sum _{e\in E}\left[ (d-4)|\{u \perp e : \phi (u) \perp \psi (e)\}|-d\right] \mathbb {1}\left( |\{u \perp e : \phi (u) \perp \psi (e)\}| > d/(d-4) \right) n\nonumber \\&\quad +\sum _{e\in E}\mathbb {1}\left( |\{u\perp e : \phi (u) \perp \psi (e)\}| = d/(d-4)\right) \log _2 n. \end{aligned}$$
(4.15)
Since \(d/(d-4)\) is not an integer, the last term is zero, so that if we define \(Q : \Phi \times \Psi \rightarrow \mathbb {R}\) by
$$\begin{aligned} Q(\phi ,\psi )= & {} - (d-4)|\{ u \in V_\circ : \phi (u) \ne \star \}|\nonumber \\&+\sum _{e\in E}\left[ (d-4)|\{u \perp e : \phi (u) \perp \psi (e)\}|-d\right] \nonumber \\&\qquad \mathbb {1}[|\{u \perp e : \phi (u) \perp \psi (e)\}| > d/(d-4) ], \end{aligned}$$
(4.16)
then we have that
$$\begin{aligned} \log _2 \sum _{k \in \{0,\ldots ,n\}^E} \sum _{(\xi ,\zeta )\in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta ) \lesssim -2\eta _d(H)n + Q(\phi ,\psi )n. \end{aligned}$$
Thus, since \(|\Phi \times \Psi |\) does not depend on n, we have that
$$\begin{aligned} \log _2 \mathbb {E}[\tilde{S}_x(n)^2]\lesssim & {} \log _2 \sum _{\phi \in \Phi }\sum _{\psi \in \Psi } \sum _{k \in \{0,\ldots ,n\}^E} \sum _{(\xi ,\zeta )\in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta ) \\\lesssim & {} \max _{\phi ,\psi } \log _2 \sum _{k \in \{0,\ldots ,n\}^E} \sum _{(\xi ,\zeta )\in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta ) \lesssim -2\eta _d(H)n + \max _{\phi ,\psi }Q(\phi ,\psi )n, \end{aligned}$$
and so it suffices to prove that \(Q(\phi ,\psi )\le 0\) for every \((\phi ,\psi )\in \Phi \times \Psi \).
To prove this, first observe that we can bound
$$\begin{aligned} Q(\phi ,\psi ) \le \tilde{Q}(\phi ):= & {} - (d-4)|\{ u \in V_\circ : \phi (u) \ne \star \}|\\&+\sum _{e\in E}\left[ (d-4)|\{u \perp e : \phi (u) \ne \star \}|-d\right] \\&\quad \mathbb {1}[|\{u \perp e : \phi (u) \ne \star \}| > d/(d-4) ]. \end{aligned}$$
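Here the termwise comparison behind this bound is the following: for each \(e\in E\),
$$\begin{aligned} |\{u\perp e : \phi (u)\perp \psi (e)\}| \;\le \; |\{u \perp e: \phi (u)\ne \star \}|, \end{aligned}$$
and the function \(t\mapsto \left[ (d-4)t-d\right] \mathbb {1}\left[ t>d/(d-4)\right] =\max \{(d-4)t-d,\,0\}\) is non-decreasing, so each summand of \(Q(\phi ,\psi )\) is at most the corresponding summand of \(\tilde{Q}(\phi )\), while the two first terms agree.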
Let \(H'\) be the subhypergraph of H with boundary vertices given by the boundary vertices of H, edges given by the set \(E'\) of edges of H that have \(|\{u\perp e:\phi (u)\ne \star \}|>d/(d-4)\), and interior vertices given by the set of interior vertices u of H for which \(\phi (u)\ne \star \) and \(\phi (u)\perp e\) for some \(e\in E'\); write \(V'\) for the vertex set of \(H'\). Then we can rewrite
$$\begin{aligned} \tilde{Q}(\phi ) = \eta _d(H')-(d-4)\bigl |\{v\in V_\circ : \phi (v)\ne \star \}{\setminus } V'\bigr | \le 0, \end{aligned}$$
(4.17)
where the final inequality follows by the assumption that every subhypergraph of H is d-buoyant. This completes the proof. \(\square \)
Proof of Proposition 4.11
Suppose that the finite hypergraph with boundary H has a d-optimal coarsening \(H'\) all of whose subhypergraphs are d-buoyant. Then the lower bound on the square of the first moment of \(\tilde{S}^{H'}_x(n)\) provided by Lemma 4.13 and the upper bound on the second moment of \(\tilde{S}^{H'}_x(n)\) provided by Lemma 4.14 coincide, so that the Cauchy–Schwarz inequality implies that
$$\begin{aligned} \mathbb {P}\left( \tilde{S}^{H'}_x(n)>0\right) \ge \frac{\mathbb {E}\left[ \tilde{S}^{H'}_x(n)\right] ^2}{\mathbb {E}\left[ \tilde{S}^{H'}_x(n)^2\right] } \succeq 1 \end{aligned}$$
for every n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for every \(u,v \in \partial V\). It follows from Fatou’s lemma that
$$\begin{aligned} \mathbb {P}\left( \tilde{S}^{H'}_x(n)>0 \text { for infinitely many { n}}\right) \ge \limsup _{n\rightarrow \infty } \mathbb {P}\left( \tilde{S}^{H'}_x(n) > 0\right) \succeq 1, \end{aligned}$$
so that H is robustly faithfully present at x with positive probability as claimed.\(\square \)
The cases \(d=5,6,8\).
We now treat the cases in which \(d/(d-4)\) is an integer. This requires somewhat more care owing to the possible presence of the logarithmic term in (4.15). Indeed, we will only treat certain special ‘building block’ hypergraphs directly via the second moment method. We will later build other hypergraphs out of these special hypergraphs in order to prove the main theorems.
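Note that these are exactly the dimensions named in the heading: since
$$\begin{aligned} \frac{d}{d-4} = 1 + \frac{4}{d-4}, \end{aligned}$$
the ratio \(d/(d-4)\) is an integer if and only if \(d-4\) divides 4, that is, if and only if \(d \in \{5,6,8\}\), where it takes the values 5, 3, and 2 respectively.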
Let \(H=(\partial V,V_\circ , E)\) be a finite hypergraph with boundary. We say that a subhypergraph \(H'=(\partial V',V_\circ ',E')\) of H is bordered if \(\partial V'=\partial V\) and every vertex \(v\in V {\setminus } V'\) is incident to at most one edge in \(E'\). For example, every full subhypergraph containing every boundary vertex is bordered. We say that a subhypergraph of H is proper if it is not equal to H and non-trivial if it has at least one edge. We say that H is d-basic if it does not have any edges of degree less than or equal to \(d/(d-4)\) and does not contain any proper, non-trivial bordered subhypergraphs \(H'\) with \(\eta _d(H')=0\).
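For example (a worked instance of the second condition, writing \(\eta _d(H')=(d-4)\Delta (H')-d|E(H')|-(d-4)|V_\circ (H')|\) as in the computations of this section): take \(d=8\), so that \(d/(d-4)=2\), and suppose that H has more than one edge, one of which, say e, contains exactly two boundary vertices. Then the subhypergraph \(H'\) with \(\partial V'=\partial V\), \(E'=\{e\}\), and \(V_\circ '=\emptyset \) is proper, non-trivial, and bordered, and
$$\begin{aligned} \eta _8(H') = 4\cdot 2 - 8\cdot 1 - 4\cdot 0 = 0, \end{aligned}$$
so that H is not 8-basic; compare the similar observation in the proof of Lemma 4.16 below.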
Proposition 4.15
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d\in \{5,6,8\}\), and let \(\mathfrak {F}\) be the uniform spanning forest of \(\mathbb {G}\). Let H be a finite hypergraph with boundary with at least one edge. Suppose additionally that one of the following assumptions holds:
-
(1)
H is a refinement of a hypergraph with boundary that has exactly one edge, the unique edge contains exactly \(d/(d-4)\) boundary vertices, and every interior vertex is incident to the unique edge.
or
-
(2)
H has a d-basic coarsening with more than one edge, all of whose subhypergraphs are d-buoyant.
Then for every \(r\ge R_\mathbb {G}(H)\) and every well-separated collection of points \((x_v)_{v\in \partial V}\) in \(\mathbb {V}\) there is a positive probability that the vertices \(x_u\) are all in different components of \(\mathfrak {F}\) and that H is robustly faithfully present at x.
The Proof of Proposition 4.15 will apply the following lemma, which is the analogue of Lemma 4.14 in this context.
Lemma 4.16
Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d\in \{5,6,8\}\). Let H be a hypergraph with boundary with at least one edge such that every subhypergraph of H is d-buoyant.
-
(1)
If H has exactly one edge, this unique edge is incident to exactly \(d/(d-4)\) boundary vertices, and every interior vertex is incident to this unique edge, then there exists a constant \(c=c(\mathbb {G},H)\) such that
$$\begin{aligned} \log _2\mathbb {E}[\tilde{S}_x(n)^2] \le \log _2 n + c \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in (\mathbb {V})^{\partial V}\) and all n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\).
-
(2)
If H is d-basic, then there exists a constant \(c=c(\mathbb {G},H)\) such that
$$\begin{aligned} \log _2\mathbb {E}[\tilde{S}_x(n)^2] \le -2\eta _d(H)\, n +c \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in (\mathbb {V})^{\partial V}\) and all n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\).
Proof
Note that in both cases we have that every subhypergraph of H is d-buoyant. We use the notation of the proof of Lemma 4.14. As in Eq. (4.15) of that proof, we have that
$$\begin{aligned}&\log _2 \sum _{k\in \{0,\ldots ,n\}^E}\sum _{(\xi ,\zeta ) \in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta )\nonumber \\&\quad \lesssim -2\eta _d(H)\, n +Q(\phi ,\psi )n +\left| \left\{ e \in E : \left| \left\{ u\perp e : \phi (u) \perp \psi (e)\right\} \right| \right. \right. \nonumber \\&\quad \left. \left. = d/(d-4)\right\} \right| \log _2 n, \end{aligned}$$
(4.18)
where \(Q(\phi ,\psi )\) is defined as in (4.16). Moreover, the same argument used in that proof shows that \(Q(\phi ,\psi )\le 0\) for every \((\phi ,\psi )\in \Phi \times \Psi \). In case (1) of the lemma, in which H has a single edge, we immediately obtain the desired bound since \(\eta _d(H)=0\) and the coefficient of the \(\log _2 n\) term is either 0 or 1.
Now suppose that H is d-basic. Let \(L(\phi ,\psi )\) be the coefficient of \(\log _2 n\) in (4.18). Note that H cannot have an edge whose intersection with \(\partial V\) has \(d/(d-4)\) elements or more, since otherwise the subhypergraph \(H'\) of H with that single edge and with no interior vertices is proper, bordered, and has \(\eta _d(H')\ge 0\). Thus, we have that if \(\phi _0\) is defined by \(\phi _0(v)=\star \) for every \(v\in V_\circ \) then
$$\begin{aligned} L(\phi _0,\psi )\le \left| \left\{ e \in E : |\psi (e) \cap \partial V| \ge d/(d-4)\right\} \right| =0 \end{aligned}$$
for every \(\psi \in \Psi \).
Let \({\text {Isom}} \subseteq \Phi \times \Psi \) be the set of all \((\phi ,\psi )\) such that \(\phi (u)\perp \psi (e)\) for every \(e\in E\) and \(u\perp e\). Since H is d-basic we have that if \((\phi ,\psi )\in {\text {Isom}}\) then
$$\begin{aligned} L(\phi ,\psi ) = \left| \left\{ e \in E : \deg (e) = d/(d-4)\right\} \right| =0. \end{aligned}$$
We claim that \(Q(\phi ,\psi )\le -(d-4)\) unless either \(\phi =\phi _0\) or \((\phi ,\psi )\in {\text {Isom}}\). Once proven this will conclude the proof, since we will then have that
$$\begin{aligned}&\log _2 \sum _{k\in \{0,\ldots ,n\}^E}\sum _{(\xi ,\zeta ) \in \Omega ^{\psi ,k}}R_\phi (\xi ,\zeta ) \lesssim -2\eta _d(H)\, n\\&\quad + \max \{ -(d-4) n + |E|\log _2 n, 0\} \lesssim -2\eta _d(H) n \end{aligned}$$
for every \((\phi ,\psi )\in \Phi \times \Psi \), from which we can conclude by summing over \(\Phi \times \Psi \) as done previously.
We first prove that \(Q(\phi ,\psi )\le -(d-4)\) unless either \(\phi =\phi _0\) or \(\phi (v) \ne \star \) for every \(v\in V\). Note that since \(d-4\) divides d, the d-apparent weight of every hypergraph with boundary is a multiple of \(d-4\), and so we must have that \(\eta _d(H')\le -(d-4)\) for every subhypergraph \(H'\) of H with \(\eta _d(H')<0\). As in (4.17), we have that \(Q(\phi ,\psi )\le \eta _d(H')\), where \(H'=H'(\phi )\) is the subhypergraph of H with boundary vertices given by the boundary vertices of H, edges given by the set of edges of H that have \(|\{u\perp e:\phi (u)\ne \star \}|\ge d/(d-4)\), and interior vertices given by the set of interior vertices u of H for which \(\phi (u)\ne \star \) and \(\phi (u)\perp e\) for some \(e\in E'\).
We claim that if \(\phi \) is such that \(\eta _d(H')=0\) then \(H'\) is bordered, and consequently is either equal to H or does not have any edges by our assumptions on H. To see this, suppose for contradiction that \(H'\) is not bordered, so that there exists a vertex \(v\in V_\circ {\setminus } V_\circ '\) that is incident to more than one edge of \(H'\). Let \(H''\) be the subhypergraph of H obtained from \(H'\) by adding the vertex v. Then we have that \(|E(H'')|=|E(H')|\), \(|V_\circ (H'')|=|V_\circ (H')|+1\) and \(\Delta (H'')\ge \Delta (H')+2\), and consequently that \(\eta _d(H'')\ge \eta _d(H')+(d-4)\). Since every subhypergraph of H is d-buoyant, we have that \(\eta _d(H'')\le 0\) and consequently that \(\eta _d(H')\le -(d-4)\), a contradiction. This establishes that \(Q(\phi ,\psi )\le -(d-4)\) unless either \(\phi =\phi _0\) or \(\phi (v) \ne \star \) for every \(v\in V\), as claimed.
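Spelled out, the weight comparison used in the previous argument is
$$\begin{aligned} \eta _d(H'')-\eta _d(H')= (d-4)\bigl (\Delta (H'')-\Delta (H')\bigr )-d\bigl (|E(H'')|-|E(H')|\bigr )-(d-4)\bigl (|V_\circ (H'')|-|V_\circ (H')|\bigr )\ge 2(d-4)-0-(d-4)=d-4, \end{aligned}$$
writing \(\eta _d\) in the form \((d-4)\Delta -d|E|-(d-4)|V_\circ |\) used in the computations of this section.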
It remains to show that if \(\phi (v) \ne \star \) for every \(v\in V\) then \(Q(\phi ,\psi ) \le -(d-4)\) unless \((\phi ,\psi )\in {\text {Isom}}\). Since every edge of H has degree strictly larger than \(d/(d-4)\), we have that
$$\begin{aligned}&\left[ (d-4)|\{u \perp e : \phi (u) \perp \psi (e)\}|-d\right] \mathbb {1}[|\{u \perp e : \phi (u) \perp \psi (e)\}| > d/(d-4) ]\\&\quad \le \left[ (d-4)\deg (e)-d\right] -(d-4) \end{aligned}$$
for every \(e\in E\) and every \((\phi ,\psi )\in \Phi \times \Psi \) such that \(|\{u\perp e : \phi (u) \perp \psi (e)\}| < \deg (e)\). It follows easily from this and the definition of \(Q(\phi ,\psi )\) that if \(\phi \) has \(\phi (v) \ne \star \) for every \(v\in V\), then
$$\begin{aligned} Q(\phi ,\psi ) \le \eta _d(H) - (d-4) |\{ e \in E : |\{u \perp e : \phi (u) \perp \psi (e)\}| < \deg (e) \}|. \end{aligned}$$
Since \(\eta _d(H)\le 0\) by assumption, it follows that \(Q(\phi ,\psi )\le -(d-4)\) unless \((\phi ,\psi )\in {\text {Isom}}\). This concludes the proof. \(\square \)
Lemma 4.16 (together with Lemma 4.13) is already sufficient to yield case (2) of Proposition 4.15. To handle case (1), we will require the following additional estimate.
Lemma 4.17
(Different scales are uncorrelated) Let \(\mathbb {G}\) be a d-dimensional transitive graph with \(d>4\). Let H be a hypergraph with boundary. Then there exists a positive constant \(c=c(\mathbb {G},H,r)\) such that
$$\begin{aligned} \log _2\mathbb {E}[\tilde{S}_x(n) \tilde{S}_{x}(n+m)] \le -\eta _d(H)\, (2n+m) + c \end{aligned}$$
for all \(x=(x_u)_{u\in \partial V} \in (\mathbb {V})^{\partial V}\), all \(m\ge 2\), and all n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for all \(u,v \in \partial V\).
Proof
Let \(\Phi \) and \(\tilde{\mathscr {W}}_\phi (\xi ,\zeta )\) be defined as in the Proof of Lemma 4.14.
For every \(\xi \in \Omega _x(n)\) and \(\zeta \in \Omega _x(n+m)\), we have that all distances relevant to our calculations are on the order of either \(2^n\) or \(2^{n+m}\). That is,
$$\begin{aligned} \log _2 \langle \xi _e \xi _{e'} \rangle ,\, \log _2 \langle \xi _e x_v \rangle \approx n \quad \text { and } \quad \log _2 \langle \xi _e \zeta _{e'} \rangle ,\, \log _2 \langle \zeta _e \zeta _{e'} \rangle ,\, \log _2 \langle \zeta _e x_v \rangle \approx n+m \end{aligned}$$
for all \(e,e' \in E\) and \(v\in \partial V\). Thus, using (4.13), we can estimate
$$\begin{aligned}&\frac{1}{d-4}\log _2\mathbb {P}(\tilde{\mathscr {W}}_\phi (\xi ,\zeta ))\\&\quad \lesssim -\sum _{u\in \partial V}|\{e \in E : e\perp u\}|\,(2n+m) \\&\qquad -\sum _{u \in V_\circ ,\, \phi (u) =\star } (|\{e\in E : e \perp u\}|-1)\, n\\&\qquad -\sum _{u \in V_\circ ,\, \phi ^{-1}(u) =\emptyset } (|\{e\in E : e \perp u\}|-1)\, (n+m)\\&\qquad -\sum _{u \in V_\circ ,\, \phi (u) \ne \star } \left( \left| \{e\in E : e \perp u\}\right| n-n +\left| \{e\in E : e \perp \phi (u)\}\right| (n+m)\right) \,\\&\quad = -\Delta (2n+m) + |V_\circ |\,(2n+m) - |\{v \in V_\circ : \phi (v)\ne \star \}|\, (n+m), \end{aligned}$$
which is maximized when \(\phi (v)=\star \) for all \(v\in V_\circ \). Now, since
$$\begin{aligned} \log _2|\Omega _x(n)\times \Omega _x(n+m)| \le \log _2\left| \Lambda (n-1,n)^{E} \times \Lambda (n+m-1,n+m)^{E}\right| \lesssim d|E|(2n+m), \end{aligned}$$
we deduce that
$$\begin{aligned} \log _2\mathbb {E}[\tilde{S}_x(n)\tilde{S}_x(n+m)]&\le \log _2|\Omega _x(n)\times \Omega _x(n+m)| + \log _2|\Phi |\\&\quad + \max \{\log _2\mathbb {P}(\tilde{\mathscr {W}}_\phi (\xi ,\zeta )): \xi \in \Omega _x(n),\,\zeta \in \Omega _x(n+m),\, \phi \in \Phi \}\\&\lesssim d|E|(2n+m) - (d-4)\Delta (2n+m) +(d-4) |V_\circ |(2n+m)\\&= -\eta _d(H) (2n+m) \end{aligned}$$
as claimed. \(\square \)
Proof of Proposition 4.15 given Lemmas 4.12 and 4.13
The second case, in which H has a d-basic coarsening with more than one edge all of whose subhypergraphs are d-buoyant, follows from Lemma 4.12 and Lemmas 4.13 and 4.16 exactly as in the proof of Proposition 4.11. Now suppose that H is a refinement of a hypergraph with boundary \(H'\) that has \(d/(d-4)\) boundary vertices and a single edge incident to every vertex. Then \(\eta _d(H')=0\) and every subhypergraph of \(H'\) is d-buoyant. Applying Lemmas 4.13, 4.16 and 4.17, we deduce that
$$\begin{aligned} \mathbb {E}\left[ \sum _{k=n}^{2n} \tilde{S}^{H'}_x(2k)\right] \succeq n, \quad \text { and } \quad \mathbb {E}\left[ \left( \sum _{k=n}^{2n} \tilde{S}^{H'}_x(2k)\right) ^2\right] \preceq n^2, \end{aligned}$$
for every n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for every \(u,v \in \partial V\), from which it follows by Cauchy–Schwarz that
$$\begin{aligned} \mathbb {P}\left( \sum _{k=n}^{2n}\tilde{S}^{H'}_x(2k) >0 \right) \succeq 1. \end{aligned}$$
for every n such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for every \(u,v \in \partial V\). The proof can now be concluded as in the Proof of Proposition 4.11. \(\square \)
Proof of Lemmas 4.12 and 4.13
In this section we prove Lemmas 4.12 and 4.13. We begin with some background on random walk estimates. Given a graph G and a vertex u of G, we write \(\mathbf {P}_u\) for the law of the random walk on G started at u, and we write \(p_n(x,y)\) for the probability that a random walk on G started at x is at y at time n. Given positive constants c and \(c'\), we say that G satisfies \((c,c')\)-Gaussian heat kernel estimates if
$$\begin{aligned} \frac{c}{|B(x,n^{1/2})|}e^{-d(x,y)^2/(c n)} \le p_n(x,y) + p_{n+1}(x,y) \le \frac{c'}{|B(x,n^{1/2})|}e^{-d(x,y)^2/(c' n)} \end{aligned}$$
(4.19)
for every \(n\ge 0\) and every pair of vertices x, y in G with \(d(x,y)\le n\). We say that G satisfies Gaussian heat kernel estimates if it satisfies \((c,c')\)-Gaussian heat kernel estimates for some positive constants c and \(c'\).
Theorem 4.18
(Hebisch and Saloff-Coste [10]) Let \(\mathbb {G}\) be a d-dimensional transitive graph. Then \(\mathbb {G}\) satisfies Gaussian heat kernel estimates.
Hebisch and Saloff-Coste proved their result only for Cayley graphs, but the general case can be proven by similar methods; see e.g. [30, Corollary 14.5 and Theorem 14.19].
Now, recall that two graphs \(G=(V,E)\) and \(G'=(V',E')\) are said to be \((\alpha ,\beta )\)-rough isometric if there exists a function \(\phi :V \rightarrow V'\) such that the following conditions hold.
-
(1)
\(\phi \) roughly preserves distances: The estimate
$$\begin{aligned} \alpha ^{-1} d(x,y) - \beta \le d'(\phi (x),\phi (y)) \le \alpha d(x,y) + \beta \end{aligned}$$
holds for all \(x,y \in V\).
-
(2)
\(\phi \) is roughly surjective: For every \(x \in V'\), there exists \(y \in V\) such that \(d'(x,\phi (y)) \le \beta \).
The following stability theorem for Gaussian heat kernel estimates follows from the work of Delmotte [5]; see also [15, Theorem 3.3.5].
Theorem 4.19
Let G and \(G'\) be \((\alpha ,\beta )\)-roughly isometric graphs for some positive \(\alpha ,\beta \), and suppose that the degrees of G and \(G'\) are bounded by \(M<\infty \) and that G satisfies \((c,c')\)-Gaussian heat kernel estimates for some positive \(c,c'\). Then there exist \(\tilde{c} = \tilde{c}(\alpha ,\beta ,M,c,c')\) and \(\tilde{c}' = \tilde{c}'(\alpha ,\beta ,M,c,c')\) such that \(G'\) satisfies \((\tilde{c}, \tilde{c}')\)-Gaussian heat kernel estimates.
Recall that a function \(h:V\rightarrow \mathbb {R}\) defined on the vertex set of a graph is said to be harmonic on a set \(A \subseteq V\) if
$$\begin{aligned} h(v)=\frac{1}{\deg (v)} \sum _{u \sim v} h(u) \end{aligned}$$
for every vertex \(v\in A\), where the sum is taken with appropriate multiplicities if there are multiple edges between u and v. The graph G is said to satisfy an elliptic Harnack inequality if for every \(\alpha >1\), there exist a constant \(c(\alpha ) \ge 1\) such that
$$\begin{aligned}c(\alpha )^{-1} \le h(v)/h(u) \le c(\alpha )\end{aligned}$$
for every two vertices u and v of G and every positive function h that is harmonic on the set
$$\begin{aligned}\left\{ w \in V : \min \{d(u,w),d(w,v)\} \le \alpha d(u,v)\right\} ,\end{aligned}$$
in which case we say that G satisfies an elliptic Harnack inequality with constants \(c(\alpha )\).
The following theorem also follows from the work of Delmotte [5], and was implicit in the earlier work of e.g. Fabes and Stroock [6]; see also [15, Theorem 3.3.5]. Note that these references all concern the parabolic Harnack inequality, which is stronger than the elliptic Harnack inequality.
Theorem 4.20
Let G be a graph. If G satisfies \((c_1,c_1')\)-Gaussian heat kernel estimates, then there exists \(c_2(\alpha )=c_2(\alpha ,c_1)\) such that G satisfies an elliptic Harnack inequality with constants \(c_2(\alpha )\).
We remark that the elliptic Harnack inequality has recently been shown to be stable under rough isometries in the breakthrough work of Barlow and Murugan [1].
Recall that a graph is said to be d-Ahlfors regular if there exists a positive constant c such that \(c^{-1} r^d \le |B(x,r)| \le cr^d\) for every \(r\ge 1\) and every \(x \in V\) (in which case we say G is d-Ahlfors regular with constant c). Ahlfors regularity is clearly preserved by rough isometry, in the sense that if G and \(G'\) are \((\alpha ,\beta )\)-rough isometric graphs for some positive \(\alpha ,\beta \), and G is d-Ahlfors regular with constant c, then there exists a constant \(c'=c'(\alpha ,\beta ,c)\) such that \(G'\) is d-Ahlfors regular with constant \(c'\).
Observe that if the graph G is d-Ahlfors regular for some \(d>2\) and satisfies a Gaussian heat kernel estimate, then summing the estimate (4.19) yields that
$$\begin{aligned} 1 \le \sum _{n\ge 0} p_n(v,v) \preceq 1 \end{aligned}$$
for every vertex v, and that
$$\begin{aligned} \mathbf {P}_u(\text {hit } v) = \frac{\sum _{n\ge 0}{p_n(u,v)}}{\sum _{n\ge 0} p_n(v,v)} \asymp \langle uv\rangle ^{-(d-2)} \end{aligned}$$
(4.20)
for all vertices u and v of G.
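In outline, the second estimate follows by the standard computation: d-Ahlfors regularity gives \(|B(x,n^{1/2})|\asymp n^{d/2}\), so summing (4.19) over n yields
$$\begin{aligned} \sum _{n\ge 1} p_n(u,v) \asymp \sum _{n \ge 1} n^{-d/2}e^{-\Theta (\langle uv \rangle ^2/n)} \asymp \langle uv \rangle ^{-(d-2)}, \end{aligned}$$
with the sum dominated by the \(\asymp \langle uv \rangle ^{2}\) values of n of order \(\langle uv\rangle ^2\), each contributing \(\asymp \langle uv \rangle ^{-d}\); combined with \(\sum _{n\ge 0}p_n(v,v)\asymp 1\), this gives (4.20).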
We now turn to the proofs of Lemmas 4.12 and 4.13. The key to both proofs is the following lemma.
Lemma 4.21
Let \(\mathbb {G}\) be a d-Ahlfors regular graph with constant \(c_0\) for some \(d>4\), let \(\mathfrak {F}\) be the uniform spanning forest of \(\mathbb {G}\), and suppose that \(\mathbb {G}\) satisfies \((c_0^{-1},c_0)\)-Gaussian heat kernel estimates. Let \(K_1,\ldots ,K_N\) be a collection of finite, disjoint sets of vertices, and let \(K = \bigcup _{i=1}^N K_i\). Let \(\{X^v : v \in K\}\) be a collection of independent simple random walks started from the vertices of K. If
$$\begin{aligned} \mathbb {P}\Bigl ( \{X^u_i : i \ge 0\} \cap \{ X^v_i : i\ge 0\} = \emptyset \text { for all }u\ne v \in K \Bigr ) \ge \varepsilon > 0, \end{aligned}$$
(4.21)
then there exist constants \(c=c(\mathbb {G},H,\varepsilon ,|K|,c_0)\) and \(C=C(\mathbb {G},H,\varepsilon ,|K|,c_0)\) such that
$$\begin{aligned}&\log _2\mathbb {P}\left( \begin{array}{l} \mathscr {F}(K_i \cup K_j) \text { if and only if }i=j\text { , and each two points in }K_i\text { are connected}\\ \text {by a path in }\mathfrak {F}\text { of diameter at most }C {\text {diam}}(K)\text { for each }1 \le i \le N \end{array} \right) \nonumber \\&\quad \ge -(d-4)(|K| -N)\log _2 {\text {diam}}(K) + c.\end{aligned}$$
(4.22)
Here we are referring to the diameter of the path considered as a subset of \(\mathbb {G}\).
Before proving Lemma 4.21, let us see how it implies Lemmas 4.12 and 4.13.
Proof of Lemma 4.12 given Lemma 4.21
Let \(r'\) be a large constant to be chosen later. By definition of \(R_\mathbb {G}\) and Lemma 4.10, there exists \(\varepsilon =\varepsilon (|A|)>0\) such that for each set \(B \subseteq A\), there exists a set \(\{\xi _{(B,b)} : b \in B\} \subset \mathbb {V}\) of diameter at most \(R_\mathbb {G}(|B|)\) such that if \(\{X^{(B,b)}:b\in B\}\) are independent simple random walks then
$$\begin{aligned} \mathbb {P}\left( \{X^{(B,b)}_i:i\ge 0\} \cap \{X^{(B,b')}_i:i\ge 0\} = \emptyset \text { for every }b\ne b' \in B\right) \ge (2\varepsilon )^{2^{-|A|}}. \end{aligned}$$
Take such a set for each B in such a way that the set \(\{\xi _{(B,b)} : (B,b) \in \mathcal {P}_\bullet (A)\}\) is contained in the ball of radius \(r'\) around x, and for each distinct \(B,B' \subseteq A\), the sets \(\{\xi _{(B,b)} : b\in B\}\) and \(\{\xi _{(B',b)} : b\in B'\}\) have distance at least \(r'/2\) from each other. Clearly this is possible for sufficiently large \(r'\). We have by independence that
$$\begin{aligned} \mathbb {P}\left( \bigcap _{B \subseteq A} \left\{ \{X^{(B,b)}_i:i\ge 0\} \cap \{X^{(B,b')}_i:i\ge 0\} = \emptyset \text { for every }b\ne b' \in B\right\} \right) \ge 2\varepsilon . \end{aligned}$$
On the other hand, it follows easily from the Green’s function estimate (2.4) that if \(r'\) is sufficiently large (depending on |A| and \(\varepsilon \)) then
$$\begin{aligned} \mathbb {P}\biggl ( \begin{array}{l} \{X^{(B,b)}_i:i\ge 0\} \cap \{X^{(B',b')}_i:i\ge 0\} \ne \emptyset \text { for}\\ \text {some }B,B' \subseteq A,\, b \in B\hbox { and }b'\in B'\hbox { with }B\ne B' \end{array}\biggr ) \le \varepsilon , \end{aligned}$$
and we deduce that
$$\begin{aligned} \mathbb {P}\left( \{X^{(B,b)}_i:i\ge 0\} \cap \{X^{(B',b')}_i:i\ge 0\} = \emptyset \text { for every distinct }(B,b),(B',b')\in \mathcal {P}_\bullet (A)\right) \ge \varepsilon \end{aligned}$$
for such \(r'\). Applying Lemma 4.21, we deduce that \(\mathbb {P}( \mathscr {A}_{Cr'}(\xi ) ) \ge c\) for some \(C=C(\mathbb {G},|A|,\varepsilon ,r')\) and \(c=c(\mathbb {G},|A|,\varepsilon )\). It follows that \((\xi _{(B,b)})_{(B,b) \in \mathcal {P}_\bullet (A)}\) is an r-good A constellation for some \(r=r(|A|)\) sufficiently large. \(\square \)
Proof of Lemma 4.13 given Lemma 4.21
Let \(\mathbb {G}\) be a d-dimensional transitive graph for some \(d>4\). Let \(x=(x_v)_{v\in \partial V}\) be such that \(\langle x_u x_v \rangle \le 2^{n-1}\) for every \(u,v \in \partial V\), let \(\xi =(\xi _e)_{e\in E} \in \Omega _x(n)\), and let \(r=r(H)\) and \((\xi _{(e,A,v)})_{e \in E, (A,v) \in \mathcal {P}_\bullet (e)}\) be as in Sect. 4.2.
For each edge e of H, write \(\mathscr {A}_e(\xi )\) for the event \(\mathscr {A}_r((\xi _{(e,A,v)})_{(A,v) \in \mathcal {P}_\bullet (e)})\), which has probability at least 1 / r by definition of the r-good constellation \((\xi _{(e,A,v)})_{(A,v) \in \mathcal {P}_\bullet (e)}\). Since the number of subtrees of a ball of radius r in \(\mathbb {G}\) is bounded by a constant, it follows that there exists a constant \(\varepsilon =\varepsilon (\mathbb {G},H)\) and a collection of disjoint subtrees \((T_{(e,v)}(\xi ))_{(e,v) \in E_\bullet }\) of \(\mathbb {G}\) such that the tree \(T_{(e,v)}(\xi )\) has diameter at most r and contains each of the vertices \(\xi _{(e,A,v)}\) with \((A,v)\in \mathcal {P}_\bullet (e)\) for every \((e,v)\in E_\bullet \), and the estimate
$$\begin{aligned} \mathbb {P}\left( \mathscr {A}_r(\hat{\xi }_e) \cap \bigcap _{v\perp e} \{T_{(e,v)}(\xi ) \subset \mathfrak {F}\}\right) \ge (2\varepsilon )^{1/|E|} \end{aligned}$$
holds for every \(e\in E\). Fix one such collection \((T_{(e,v)}(\xi ))_{(e,v) \in E_\bullet }\) for every \(\xi \in \Omega _x(n)\), and for each \(e\in E\) let \(\mathscr {B}_e(\xi )\) be the event that \(T_{(e,v)}(\xi )\) is contained in \(\mathfrak {F}\) for every \(v\perp e\). Let \(\mathscr {B}(\xi ) = \bigcap _{e\in E} \mathscr {B}_e(\xi )\). Generating \(\mathfrak {F}\) using Wilson’s algorithm, starting with random walks \(\{X^{(e,A,v)} : e \in E,\, (A,v) \in \mathcal {P}_\bullet (e)\}\) such that \(X^{(e,A,v)}_0=\xi _{(e,A,v)}\) for every \(e \in E\) and \((A,v) \in \mathcal {P}_\bullet (e)\), we observe that
$$\begin{aligned} \Big |\mathbb {P}\left( \mathscr {B}(\xi ) \right) - \prod _{e\in E} \mathbb {P}\left( \mathscr {B}_e(\xi ) \right) \Big | \le \mathbb {P}\left( \begin{array}{l} X^{(e,A,v)} \text { and } X^{(e',A',v')} \text { intersect for some distinct} \\ e,e' \in E\text { and some }(A,v) \in \mathcal {P}_\bullet (e), (A',v') \in \mathcal {P}_\bullet (e') \end{array}\right) \end{aligned}$$
(4.23)
and hence that
$$\begin{aligned} \mathbb {P}\left( \mathscr {B}(\xi ) \right) \ge \frac{1}{2} \prod _{e\in E} \mathbb {P}\left( \mathscr {B}_e(\xi ) \right) \ge \varepsilon \end{aligned}$$
(4.24)
for all n sufficiently large and \(\xi \in \Omega _x(n)\).
Let \(\mathbb {G}_\xi \) be the graph obtained by contracting the tree \(T_{(e,v)}(\xi )\) down to a single vertex for each \((e,v) \in E_\bullet \). The spatial Markov property of the USF (see e.g. [14, Section 2.2.1]) implies that the law of \(\mathfrak {F}\) given the event \(\mathscr {B}(\xi )\) is equal to the law of the union of \(\bigcup _{(e,v) \in E_\bullet } T_{(e,v)}(\xi )\) with the uniform spanning forest of \(\mathbb {G}_\xi \). Observe that \(\mathbb {G}_\xi \) and \(\mathbb {G}\) are rough isometric, with constants depending only on \(\mathbb {G}\) and H, and that \(\mathbb {G}_\xi \) has degrees bounded by a constant depending only on \(\mathbb {G}\) and H. Thus, it follows from Theorem 4.18–4.20 that \(\mathbb {G}_\xi \) is d-Ahlfors regular, satisfies Gaussian heat kernel estimates, and satisfies an elliptic Harnack inequality, each with constants depending only on H and \(\mathbb {G}\).
Let \(E_\star = E \cup \{\star \}\), and let \(K = E_\bullet \cup \{(\star ,v) : v \in \partial V\}\). For each \((e,v)\in E_\bullet \), let \(x_{(e,v)}\) be the vertex of \(\mathbb {G}_\xi \) that was formed by contracting \(T_{(e,v)}(\xi )\), and let \(x_{(\star ,v)} = x_v\) for each \(v\in \partial V\). For each vertex v of H, choose an edge \(e_0(v)\perp v\) arbitrarily from \(E_\star \), and let \(K' = K {\setminus } \{(e_0(v),v): v \in V\}\). Let \(\mathfrak {F}_\xi \) be the uniform spanning forest of \(\mathbb {G}_\xi \), and let \(\tilde{\mathscr {W}}'(x,\xi )\) be the event that for each \((e,v),(e',v') \in K\) the vertices \(x_{(e,v)}\) and \(x_{(e',v')}\) are in the same component of \(\mathfrak {F}_\xi \) if and only if \(v=v'\). The spatial Markov property and (4.24) imply that
$$\begin{aligned} \mathbb {P}\left( \tilde{\mathscr {W}}(x,\xi )\right) \ge \varepsilon \mathbb {P}\left( \tilde{\mathscr {W}}'(x,\xi )\right) \succeq \mathbb {P}\left( \tilde{\mathscr {W}}'(x,\xi )\right) \end{aligned}$$
whenever n is sufficiently large and \(\xi \in \Omega _x(n)\). Thus, applying Lemma 4.21 to \(\mathbb {G}_\xi \) by setting \(N=|V|\), enumerating \(V=\{v_1,\ldots ,v_N\}\) and setting \(K_i = \{ x_{(e,v_i)} : (e,v_i) \in K \}\) for each \(1\le i\le N\) yields that
$$\begin{aligned} \log _2 \mathbb {P}\left( \tilde{\mathscr {W}}(x,\xi )\right) \gtrsim \log _2 \mathbb {P}\left( \tilde{\mathscr {W}}'(x,\xi )\right) \gtrsim -(d-4)\left( \Delta -|V_\circ |\right) \, n, \end{aligned}$$
completing the proof. \(\square \)
We now start working towards the Proof of Lemma 4.21. We begin with the following simple estimate.
Lemma 4.22
Let G be d-Ahlfors regular with constant \(c_1\), and suppose that G satisfies \((c_2,c_2')\)-Gaussian heat kernel estimates. Then there exists a positive constant \(C=C(d,c_1,c_2,c_2')\) such that
$$\begin{aligned} C^{-1} \langle u w \rangle ^{-(d-2)} \le \mathbf {P}_u\left( \text {hit } w \text { before }\Lambda _x(n+3c,\infty ) \text {, do not hit } \Lambda _x(0,n)\right) \le C \langle u w \rangle ^{-(d-2)} \end{aligned}$$
for every \(c \ge C\), every vertex x, every \(n\ge 1\), and every \(u,w \in \Lambda _x(n+c,n+2c)\).
Proof
The upper bound follows immediately from (4.20). We now prove the lower bound. For every \(c \ge 1\) and every \(u,w \in \Lambda _x(n+c,\infty )\), we have that
$$\begin{aligned} \mathbf {P}_u(\text {hit }\Lambda _x(0,n)) = \frac{\mathbf {P}_u(\text {hit } x)}{\mathbf {P}_u(\text {hit } x \mid \text { hit }\Lambda _x(0,n))} \asymp \frac{\langle u x \rangle ^{-(d-2)}}{2^{-(d-2)n}} \preceq 2^{-(d-2)c}. \end{aligned}$$
Thus, we have that
$$\begin{aligned} \mathbf {P}_u(\text {hit } w \text { and } \Lambda _x(0,n))&\le \mathbf {P}_u(\text {hit }\Lambda _x(0,n)\hbox { after hitting }w) + \mathbf {P}_u(\text {hit }w\hbox { after hitting }\Lambda _x(0,n))\\&\preceq \langle u w \rangle ^{-(d-2)}2^{(d-2)n}\langle wx\rangle ^{-(d-2)} + 2^{(d-2)n}\langle u x \rangle ^{-(d-2)} \langle w x \rangle ^{-(d-2)}, \end{aligned}$$
where the second term is bounded by conditioning on the location at which the walk hits \(\Lambda _x(0,n)\) and then using the strong Markov property. By the triangle inequality, we must have that at least one of \(\langle u x \rangle \) or \(\langle w x \rangle \) is greater than \(\frac{1}{2}\langle u w \rangle \). This yields the bound
$$\begin{aligned} \mathbf {P}_u(\text {hit } w \text { and } \Lambda _x(0,n))&\preceq \left( 2^{(d-2)n}\langle wx\rangle ^{-(d-2)} + 2^{(d-2)n}\left( \min \left\{ \langle u x \rangle ,\, \langle w x \rangle \right\} \right) ^{-(d-2)}\right) \\&\quad \langle uw \rangle ^{-(d-2)}\\&\preceq 2^{-(d-2)c}\langle u w \rangle ^{-(d-2)}. \end{aligned}$$
On the other hand, if \(u,w \in \Lambda _x(n+c,n+2c)\) then conditioning on the location at which the walk hits \(\Lambda _x(n+3c,\infty )\) yields that
$$\begin{aligned} \mathbf {P}_u(\text {hit } w \text { after } \Lambda _x(n+3c,\infty )) \preceq 2^{-(d-2)(n+3c)} \preceq \langle uw \rangle ^{-(d-2)}. \end{aligned}$$
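Indeed, since \(u,w \in \Lambda _x(n+c,n+2c)\) we have \(\langle uw \rangle \preceq 2^{n+2c}\), so that the two error probabilities above satisfy
$$\begin{aligned} \mathbf {P}_u(\text {hit } w \text { and } \Lambda _x(0,n)) + \mathbf {P}_u(\text {hit } w \text { after } \Lambda _x(n+3c,\infty )) \preceq 2^{-(d-2)c}\langle u w \rangle ^{-(d-2)}, \end{aligned}$$
which, once c is larger than a suitable constant, is at most half of the lower bound \(\mathbf {P}_u(\text {hit } w) \succeq \langle uw \rangle ^{-(d-2)}\) provided by (4.20).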
The claim now follows easily.\(\square \)
Proof of Lemma 4.21
For each \(1 \le i \le N\), let \(x_i\) be chosen arbitrarily from the set \(K_i\). Let \((X^{x})_{x \in K}\) be a collection of independent random walks on \(\mathbb {G}\), where \(X^{x}\) is started at x for each \(x\in K\), and write \(X^i=X^{x_i}\). Let \(K'_i=K_i {\setminus } \{x_i\}\) for each \(1 \le i \le N\) and let \(K'=\bigcup _{i=1}^N K'_i\). In this proof, implicit constants will be functions of \(|K|, N, c_0,\) and d. We take n such that \(2^{n-1} \le \text {diam}(K) \le 2^{n}\).
Let \(c_1,c_2,c_3\) be constants to be determined. For each \(y=(y_{x})_{x\in K} \in (\Lambda (n+c_1,n+c_3))^{K}\), let \(\mathscr {Y}_y\) be the event
$$\begin{aligned} \mathscr {Y}_y = \{ X^{x}_{2^{2(n+c_2)}} = y_{x} \text { for each }x\in K\}. \end{aligned}$$
Let \(\mathscr {C}(c_2)\) be the event that none of the walks \(X^{x}\) intersect each other before time \(2^{2(n+c_2)}\), so that \(\mathbb {P}(\mathscr {C}(c_2)) \ge \varepsilon \) for every \(c_2 \ge 0\) by assumption. For each \(x\in K\), let \(\mathscr {D}_{x}(c_1,c_3)\) be the event that \(X^{x}_{2^{2(n+c_2)}}\) is in \(\Lambda (n+c_1,n+c_3)\) and that \(X^{x}_m \in \Lambda (n,\infty )\) for all \(m \ge 2^{2(n+c_2)}\), and let \(\mathscr {D}(c_1,c_3) = \bigcap _{x\in K} \mathscr {D}_{x}(c_1,c_3)\). It follows by an easy application of the Gaussian heat kernel estimates that we can choose \(c_2=c_2(\mathbb {G},N,\varepsilon )\) and \(c_3=c_3(\mathbb {G},N,\varepsilon )\) sufficiently large that
$$\begin{aligned} \mathbb {P}(\mathscr {D}(c_1,c_3) \mid \mathscr {Y}_y) \ge 1- \varepsilon /2 \end{aligned}$$
(4.25)
for every \(y=(y_{x})_{x\in K} \in (\Lambda (n+c_1,n+c_3))^{K}\), and in particular so that \(\mathbb {P}(\mathscr {C}(c_2) \cap \mathscr {D}(c_1,c_3)) \ge \varepsilon \). We fix some such sufficiently large \(c_1,c_2,\) and \(c_3\), and also assume that \(c_1\) is larger than the constant from Lemma 4.22. We write \(\mathscr {C}=\mathscr {C}(c_2)\), \(\mathscr {D}_{x}=\mathscr {D}_{x}(c_1,c_3)\), and \(\mathscr {D}=\mathscr {D}(c_1,c_3)\).
For each \(1 \le i \le N\) and \(x\in K'_i\), we define \(\mathscr {I}_{x}\) to be the event that the walk \(X^{x}\) hits the set
$$\begin{aligned} L^i_{\text {good}}= & {} \left\{ \textsf {LE}(X^i)_m : \textsf {LE}(X^i)_m \in \Lambda (n+2c_3, n+ 4c_3),\, \textsf {LE}(X^i)_{m'} \right. \\&\left. \in \Lambda (0, n+ 6c_3) \text { for all } 0 \le m' \le m \right\} \end{aligned}$$
before hitting \(\Lambda (n + 6c_3, \infty )\), and let \(\mathscr {I}= \bigcap _{x\in K'} \mathscr {I}_{x}\).
For each x and \(x'\) in K, we define \(\mathscr {E}_{x,x'}\) to be the event that the walks \(X^{x}\) and \(X^{x'}\) intersect, and let
$$\begin{aligned} \mathscr {E}= \bigcup \left\{ \mathscr {E}_{x,x'} : 1 \le i < j \le N,\, x \in K_i,\, x'\in K_j \right\} \cup \bigcup \left\{ \mathscr {E}_{x,x'} : x,x' \in K' \text { distinct} \right\} . \end{aligned}$$
These events have been defined so that, if we sample \(\mathfrak {F}\) using Wilson’s algorithm, beginning with the walks \(\{ X^{i} : 1 \le i \le N\}\) (in any order) and then the walks \(\{ X^{x} : x\in K\}\) (in any order), we have that
$$\begin{aligned}&\left\{ \begin{array}{l} \mathscr {F}(K_i \cup K_j) \text { if and only if }i=j\text { , and each two points in }K_i\text { are connected}\\ \text {by a path in }\mathfrak {F}\text { of diameter at most }2^{6c_3} {\text {diam}}(K)\text { for each }1 \le i \le N \end{array}\right\} \\&\quad \supseteq (\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}) {\setminus } \mathscr {E}. \end{aligned}$$
Thus, it suffices to prove that
$$\begin{aligned} \log _2 \mathbb {P}\left( \left( \mathscr {C}\cap \mathscr {D}\cap \mathscr {I}\right) {\setminus } \mathscr {E}\right) \gtrsim -(d-4)\left( |K| -N\right) \, n = -(d-4)|K'|\,n . \end{aligned}$$
We break this estimate up into the following two lemmas: one lower bounding the probability of the good event \(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}\), and the other upper bounding the probability of the bad event \(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}\cap \mathscr {E}\).
Lemma 4.23
The estimate
$$\begin{aligned} \log _2 \mathbb {P}(\mathscr {I}_{x} \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y) \gtrsim -(d-4)n \end{aligned}$$
holds for every \(x \in K'\) and \(y=(y_{x})_{x\in K} \in (\Lambda (n+c_1,n+c_3))^{K}\).
The proof uses techniques from [19] and the Proof of [2, Theorem 4.2].
Proof of Lemma 4.23
Fix \(x \in K'\), and let \(1\le i \le N\) be such that \(x\in K'_i\). Write \(Y=X^i\) and \(Z=X^{x}\). Let \(L = ( L(k) )_{k\ge 0}\) be the loop-erasure of \((Y_k)_{k\ge 0}\) and, for each \(m\ge 0\), let \(L_m= (L_m(k))_{k= 0}^{q_m}\) be the loop-erasure of \(( Y_k )_{k=0}^m\). Define
$$\begin{aligned} \tau (m) = \inf \{ 0 \le r \le q_m : L_m(r) = Y_k \text { for some }k \ge m\} \end{aligned}$$
and
$$\begin{aligned} \tau (m,\ell ) = \inf \{ 0 \le r \le q_m : L_m(r) = Z_k \text { for some }k \ge \ell \}. \end{aligned}$$
The definition of \(\tau (m)\) ensures that \(L_m(k)=L(k)\) for all \(k\le \tau (m)\). We define the indicator random variables
$$\begin{aligned} I_{m,\ell }= & {} \mathbb {1}(Y_m = Z_\ell \in \Lambda (n+2c_3,n+4c_3), \text { and } Y_{m'}, Z_{\ell '} \\&\in \Lambda (0,n+6c_3) \text { for all }m' \le m, \ell '\le \ell ) \end{aligned}$$
and
$$\begin{aligned} J_{m,\ell }&= I_{m,\ell } \, \mathbb {1} \!\big (\tau (m,\ell ) \le \tau (m)\big ). \end{aligned}$$
Observe that
$$\begin{aligned} \mathscr {I}_x \supseteq \left\{ J_{m,\ell } =1 \text { for some } m, \ell \ge 2^{2(n+c_2)} \right\} . \end{aligned}$$
Moreover, for every \(m,\ell \ge 2^{2(n+c_2)}\) and every \(y \in (\Lambda (n+c_1,n+c_3))^{K}\), the walks \((Y_k)_{k\ge m}\) and \((Z_k)_{k\ge \ell }\) have the same distribution conditional on the event
$$\begin{aligned} \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y \cap \{I_{m,\ell }=1\}. \end{aligned}$$
Thus, we deduce that
$$\begin{aligned} \mathbb {P}\left( \tau (m) \ge \tau (m,\ell ) \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y \cap \{I_{m,\ell }=1\}\right) \ge 1/2 \end{aligned}$$
whenever the event being conditioned on has positive probability, and therefore that
$$\begin{aligned} \mathbb {E}[ I_{m,\ell } \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y] \; \ge \; \mathbb {E}[J_{m,\ell }\mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y] \; \ge \; \frac{1}{2}\mathbb {E}[ I_{m,\ell } \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y]. \end{aligned}$$
Let
$$\begin{aligned} I = \sum _{\ell \ge 2^{2(n+c_2)}}\sum _{m \ge 2^{2(n+c_2)}} I_{m,\ell } \quad \text { and } \quad J = \sum _{\ell \ge 2^{2(n+c_2)}}\sum _{m \ge 2^{2(n+c_2)}} J_{m,\ell }, \end{aligned}$$
and note that the conditional distribution of I given the event \(\mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\) is the same as the conditional distribution of I given the event \(\mathscr {D}\cap \mathscr {Y}_y\). For every \(y \in (\Lambda (n+c_1,n+c_3))^{K}\), we have that, decomposing \(\mathbb {E}[I \mid \mathscr {D}\cap \mathscr {Y}_y]\) according to the location of the intersections and applying the estimate of Lemma 4.22,
$$\begin{aligned}&\mathbb {E}[J \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y] \asymp \mathbb {E}[I \mid \mathscr {D}\cap \mathscr {Y}_y] \\&\quad \succeq \sum _{w \in \tilde{\Lambda }(n+2c_3,n+4c_3)} \mathbf {P}_{y_{x_i}}(\,\text {hit } w \text { before } \Lambda (n+6c_3,\infty ) \mid \text {do not hit } \Lambda (0,n))\\&\quad \cdot \mathbf {P}_{y_{x}}(\,\text {hit } w \text { before }\Lambda (n+6c_3,\infty ) \mid \text {do not hit } \Lambda (0,n))\\&\quad \quad \succeq 2^{-2(d-2)n} | \Lambda (n+2c_3,n+4c_3)| \asymp 2^{-(d-4)n}. \end{aligned}$$
On the other hand, we have that
$$\begin{aligned} \mathbb {E}[J^2 \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y] \, \le \, \mathbb {E}[I^2 \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y] \, = \, \mathbb {E}[I^2 \mid \mathscr {D}\cap \mathscr {Y}_y] \preceq \mathbb {E}[I^2 \mid \mathscr {Y}_y]. \end{aligned}$$
Meanwhile, decomposing \(\mathbb {E}[I^2 \mid \mathscr {Y}_y]\) according to the location of the intersections and applying the Gaussian heat kernel estimates yields that
$$\begin{aligned} \mathbb {E}[I^2 \mid \mathscr {Y}_y]\preceq & {} \sum _{w,z \in \Lambda (n+2c_3,n+4c_3)} \langle y_{x_i} w \rangle ^{-(d-2)}\langle w z \rangle ^{-(d-2)}\langle y_{x} w \rangle ^{-(d-2)} \langle wz \rangle ^{-(d-2)}\\&+ \sum _{w,z \in \Lambda (n+2c_3,n+4c_3)} \langle y_{x_i} w \rangle ^{-(d-2)}\langle w z \rangle ^{-(d-2)}\langle y_{x} z \rangle ^{-(d-2)} \langle z w \rangle ^{-(d-2)}, \end{aligned}$$
where the two different terms come from whether Y and Z hit the points of intersection in the same order or not. With the possible exception of \(\langle wz \rangle \), all the distances involved in this expression are comparable to \(2^n\). Thus, we obtain that
$$\begin{aligned} \mathbb {E}[I^2 \mid \mathscr {Y}_y] \preceq 2^{-2(d-4)n} \sum _{w,z \in \Lambda (n+2c_3,n+4c_3)} \langle w z \rangle ^{-2(d-2)}. \end{aligned}$$
For each \(w \in \mathbb {V}\), considering the contributions of dyadic shells centred at w yields that, since \(d>4\),
$$\begin{aligned} \sum _{z\in \mathbb {V}} \langle w z\rangle ^{-2(d-2)} \preceq \sum _{n\ge 0}2^{dn}2^{-2(d-2)n} \le \sum _{n\ge 0} 2^{-(d-4)n} \preceq 1, \end{aligned}$$
and we deduce that
$$\begin{aligned} \mathbb {E}[I^2 \mid \mathscr {Y}_y] \preceq 2^{-2(d-4)n} |\Lambda (n+2c_3,n+4c_3)| \preceq 2^{-(d-4)n}. \end{aligned}$$
Thus, the Cauchy–Schwarz inequality implies that
$$\begin{aligned} \mathbb {P}\left( \mathscr {I}_x \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right) \ge \mathbb {P}\left( J \ge 1 \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right) \ge \frac{\mathbb {E}\left[ J \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right] ^2}{\mathbb {E}\left[ J^2 \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right] } \succeq \frac{2^{-2(d-4)n}}{2^{-(d-4)n}} = 2^{-(d-4)n} \end{aligned}$$
as claimed. \(\square \)
We next use the elliptic Harnack inequality to pass from an estimate on \(\mathscr {I}_x\) to an estimate on \(\mathscr {I}\).
Lemma 4.24
\(\log _2 \mathbb {P}(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}) \gtrsim -(d-4)|K'|\, n\)
Proof
For each \(1\le i \le N\), let \(x'_i\) be chosen arbitrarily from \(K'_i\). To deduce Lemma 4.24 from Lemma 4.23, it suffices to prove that
$$\begin{aligned} \mathbb {P}\Bigg (\bigcap _{x \in K'} \mathscr {I}_{x} \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y \Bigg ) \succeq \prod _{i=1}^N \mathbb {P}\left( \mathscr {I}_{x_i'} \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right) ^{|K_i'|} \end{aligned}$$
for every \(y=(y_{x})_{x\in K} \in (\Lambda (n+c_1,n+c_3))^{K}\).
Let \(\mathcal {X}\) be the \(\sigma \)-algebra generated by the random walks \((X^{i})_{i=1}^N\). Observe that for each \(x\in K'\) we have
$$\begin{aligned} \mathbb {P}(\mathscr {I}_{x} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)&= \frac{\mathbf {P}_{y_{x}} \left( \text {hit }L^i_{\text {good}}\text { before }\Lambda (0,n+6c_3), \text { never leave }\Lambda (n,\infty ) \right) }{\mathbf {P}_{y_{x}} \left( \text {never leave }\Lambda (n,\infty ) \right) }\\&\asymp \mathbf {P}_{y_{x}} \left( \text {hit }L^i_{\text {good}}\text { before }\Lambda (0,n+6c_3)\text {, never leave }\Lambda (n,\infty ) \right) . \end{aligned}$$
The right hand side of the second line is a positive harmonic function of \(y_{x}\) on \(\Lambda (n+c_1,n+c_3+1)\), and so the elliptic Harnack inequality implies that for every \(y,y' \in (\Lambda (n+c_1,n+c_3))^{K}\) and every \(x\in K'\), we have that
$$\begin{aligned} \mathbb {P}\left( \mathscr {I}_{x} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right) \asymp \mathbb {P}(\mathscr {I}_{x} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_{y'}). \end{aligned}$$
Furthermore, if \(y'\) is obtained from y by swapping \(y_{x}\) and \(y_{x'}\) for some \(1\le i \le N\) and \(x,x' \in K'_i\), then clearly
$$\begin{aligned} \mathbb {P}(\mathscr {I}_{x} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y) = \mathbb {P}(\mathscr {I}_{x'} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_{y'}). \end{aligned}$$
Therefore, it follows that
$$\begin{aligned} \mathbb {P}(\mathscr {I}_{x} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y) \asymp \mathbb {P}(\mathscr {I}_{x'} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_{y}) \end{aligned}$$
for all \(1\le i \le N\) and \(x,x' \in K'_i\).
Since the events \(\mathscr {I}_{x}\) are conditionally independent given the \(\sigma \)-algebra \(\mathcal {X}\) and the event \(\mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\), we deduce that
$$\begin{aligned} \mathbb {P}(\mathscr {I}\mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)&= \mathbb {E}\left[ \mathbb {P}(\mathscr {I}\mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y) \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y \right] \\&= \mathbb {E}\left[ \prod _{x\in K'}\mathbb {P}(\mathscr {I}_{x} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y) \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right] \\&\asymp \mathbb {E}\left[ \prod _{i=1}^N\mathbb {P}(\mathscr {I}_{x'_i} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)^{|K'_i|} \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right] . \end{aligned}$$
Now, the random variables \(\mathbb {P}(\mathscr {I}_{x'_i} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)^{|K_i'|}\) are independent conditional on the event \(\mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\), and so we have that
$$\begin{aligned} \mathbb {P}(\mathscr {I}\mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)&\asymp \prod _{i=1}^N \mathbb {E}\left[ \mathbb {P}(\mathscr {I}_{x'_i} \mid \mathcal {X},\, \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)^{|K'_i|} \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y\right] \\&\ge \prod _{i=1}^N \mathbb {P}(\mathscr {I}_{x'_i} \mid \mathscr {C}\cap \mathscr {D}\cap \mathscr {Y}_y)^{|K'_i|}, \end{aligned}$$
as claimed, where the second line follows from Jensen’s inequality. \(\square \)
Finally, it remains to show that the probability of getting unwanted intersections in addition to those that we do want is of lower order than the probability of just getting the intersections that we want.
Lemma 4.25
We have that
$$\begin{aligned} \log _2 \mathbb {P}(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}\cap \mathscr {E}) \lesssim - \bigl [(d-4)|K'|+2\bigr ] \, n + {|K'|^2}\log _2 n. \end{aligned}$$
Proof
For each \(w \in \mathbb {V}\) and \(x,x' \in K\), let \(\mathscr {E}_{x,x'}(w)\) be the event that \(X^{x}\) and \(X^{x'}\) both hit w. Let \(\zeta =(\zeta _{x})_{x\in K'}\) and let \(\sigma = (\sigma _i)_{i=1}^N\) be such that \(\sigma _i\) is a bijection from \(\{1,\ldots ,|K'_i|\}\) to \(K'_i\) for each \(1 \le i \le N\). We define \(\mathscr {R}_\sigma (\zeta )\) to be the event that for each \(1 \le i \le N\) the walk \(X^{i}\) passes through the points \(\{ \zeta _{x} : x \in K'_i\}\) in the order given by \(\sigma _i\) and that for each \(x\in K'\) the walk \(X^{x}\) hits the point \(\zeta _{x}\). We also define
$$\begin{aligned} R_\sigma (\zeta ) = \prod _{i=1}^N\left\langle x_i \zeta _{\sigma _i(1)} \right\rangle ^{-(d-2)}\prod _{j=2}^{|K'_i|} \left\langle \zeta _{\sigma _i(j-1)} \zeta _{\sigma _i(j)} \right\rangle ^{-(d-2)} \prod _{j=1}^{|K'_i|}\left\langle \sigma _i(j) \zeta _{\sigma _i(j)} \right\rangle ^{-(d-2)}, \end{aligned}$$
so that \(\mathbb {P}(\mathscr {R}_\sigma (\zeta )) \asymp R_\sigma (\zeta )\) for every \(\zeta \in \mathbb {V}^{K'}\).
Let \(\Lambda _\zeta = \Lambda (n+c_1,n+c_1+c_2)^{K'}\), \(\Lambda _{w,1}= \Lambda (n,n+c_2+1),\)\(\Lambda _{w,2} = \Lambda (n+c_2+1,\infty )\), and \(\Lambda _w=\Lambda _{w,1}\cup \Lambda _{w,2}\). (Note that these sets are not functions of \(\zeta \) or w, but rather are the sets from which \(\zeta \) and w will be drawn.) We also define
$$\begin{aligned} O=K^2 {\setminus } \left[ \{(x,x) : x \in K\} \cup \bigcup _{i=1}^N\left[ \{(x_i,x) : x \in K_i \} \cup \{(x,x_i) : x \in K_i \} \right] \right] . \end{aligned}$$
to be the set of pairs of points such that the event \(\mathscr {E}\) occurs if and only if \(X^{x}\) and \(X^{x'}\) intersect for some \((x,x')\in O\). Define the random variables \(M_{\sigma ,0}\), \(M_{\sigma ,1}\), and \(M_{\sigma ,2}\) to be
$$\begin{aligned} M_{\sigma ,0}&= \sum _{\zeta \in \Lambda _\zeta } \mathbb {1}\big [\mathscr {R}_\sigma (\zeta )\big ]\\ M_{\sigma ,1}&= \sum _{(x,x') \in O}\; \sum _{w \in \Lambda _{w,1}} \sum _{\zeta \in \Lambda _\zeta } \mathbb {1}\big [\mathscr {R}_\sigma (\zeta ) \cap \mathscr {E}_{x,x'}(w)\big ], \quad \text {and}\\ M_{\sigma ,2}&= \sum _{(x,x')\in O} \;\sum _{w \in \Lambda _{w,2}} \sum _{\zeta \in \Lambda _\zeta } \mathbb {1}\big [\mathscr {R}_\sigma (\zeta ) \cap \mathscr {E}_{x,x'}(w)\big ]. \end{aligned}$$
Observe that \(\sum _{\sigma }(M_{\sigma ,1} + M_{\sigma ,2}) \ge 1\) on the event \(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}\cap \mathscr {E}\), and so to prove Lemma 4.25 it suffices to prove that
$$\begin{aligned} \log _2\mathbb {E}\left[ M_{\sigma ,1}+M_{\sigma ,2}\right] \lesssim -\bigl [(d-4)|K'|+2\bigr ]\, n + |K'|^2\log _2 n \end{aligned}$$
(4.26)
for every \(\sigma \). We will require the following estimate.
Lemma 4.26
The estimate
$$\begin{aligned} \mathbb {P}\left( \mathscr {R}_\sigma (\zeta ) \cap \mathscr {E}_{x,x'}(w) \right) \preceq R_\sigma (\zeta ) \langle w \zeta _{x} \rangle ^{-(d-2)}\langle w \zeta _{x'} \rangle ^{-(d-2)}. \end{aligned}$$
(4.27)
holds for every \((x,x') \in O\), every \(\zeta \in \Lambda _\zeta \), every \(w \in \Lambda _w\), and every collection \(\sigma =(\sigma _i)_{i=1}^N\) where \(\sigma _i : \{1,\ldots ,|K'_i|\} \rightarrow K'_i\) is a bijection for each \(1\le i \le N\).
Proof
Unfortunately, this proof requires a straightforward but tedious case analysis. We will give details for the simplest case, in which both \(x,x'\in K'\). A similar proof applies in the cases that one or both of x or \(x'\) is not in \(K'\), but there is a larger number of subcases to consider according to when the intersection takes place. In the case that \(x,x' \in K'\), let \(\mathscr {E}^{-,-}(\zeta ,w)\), \(\mathscr {E}^{-,+}(\zeta ,w)\), \(\mathscr {E}^{+,-}(\zeta ,w)\) and \(\mathscr {E}^{+,+}(\zeta ,w)\) be the events defined as follows:
-
\(\mathscr {E}^{-,-}(\zeta ,w)\): The event \(\mathscr {R}_\sigma (\zeta )\) occurs, and \(X^{x}\) and \(X^{x'}\) both hit w before they hit \(\zeta _{x}\) and \(\zeta _{x'}\) respectively.
-
\(\mathscr {E}^{-,+}(\zeta ,w)\): The event \(\mathscr {R}_\sigma (\zeta )\) occurs, \(X^{x}\) hits w before hitting \(\zeta _{x}\), and \(X^{x'}\) hits w after hitting \(\zeta _{x'}\).
-
\(\mathscr {E}^{+,-}(\zeta ,w)\): The event \(\mathscr {R}_\sigma (\zeta )\) occurs, \(X^{x}\) hits w after hitting \(\zeta _{x}\), and \(X^{x'}\) hits w before hitting \(\zeta _{x'}\).
-
\(\mathscr {E}^{+,+}(\zeta ,w)\): The event \(\mathscr {R}_\sigma (\zeta )\) occurs, and \(X^{x}\) and \(X^{x'}\) both hit w after they hit \(\zeta _{x}\) and \(\zeta _{x'}\) respectively.
We have the estimates
$$\begin{aligned} \mathbb {P}(\mathscr {E}^{-,-}(\zeta ,w))&\asymp R_\sigma (\zeta ) \frac{\langle x w \rangle ^{-(d-2)}\langle w \zeta _{x} \rangle ^{-(d-2)} \langle x' w \rangle ^{-(d-2)}\langle w \zeta _{x'} \rangle ^{-(d-2)}}{\langle x \zeta _{x} \rangle ^{-(d-2)}\langle x' \zeta _{x'} \rangle ^{-(d-2)}},\\ \mathbb {P}(\mathscr {E}^{-,+}(\zeta ,w))&\asymp R_\sigma (\zeta ) \frac{\langle x w \rangle ^{-(d-2)}\langle w \zeta _{x} \rangle ^{-(d-2)}}{\langle x \zeta _{x} \rangle ^{-(d-2)}} \langle \zeta _{x'} w \rangle ^{-(d-2)}, \\ \mathbb {P}(\mathscr {E}^{+,-}(\zeta ,w))&\asymp R_\sigma (\zeta )\frac{\langle x' w \rangle ^{-(d-2)}\langle w \zeta _{x'} \rangle ^{-(d-2)}}{\langle x' \zeta _{x'} \rangle ^{-(d-2)}}\langle \zeta _{x} w \rangle ^{-(d-2)},\\ \text {and}\\ \mathbb {P}(\mathscr {E}^{+,+}(\zeta ,w))&\asymp R_\sigma (\zeta )\langle \zeta _{x} w \rangle ^{-(d-2)}\langle \zeta _{x'} w \rangle ^{-(d-2)}. \end{aligned}$$
In all cases, a bound of the desired form follows since \(\langle w x \rangle \succeq \langle \zeta _{x} x \rangle \) and \(\langle w x' \rangle \succeq \langle \zeta _{x'} x' \rangle \) for every \(x,x'\in K'\), \(\zeta \in \Lambda _\zeta \), and \(w\in \Lambda _w\), and we conclude by summing these four bounds. \(\square \)
Our aim now is to prove Eq. (4.26) by an appeal to Lemma 4.3. To do this, we will encode the combinatorics of the potential ways that the walks can intersect via hypergraphs. To this end, let \(H_\sigma \) be the finite hypergraph with boundary that has vertex set
$$\begin{aligned} V(H_\sigma ) = \left( \{1\} \times K\right) \cup \left( \{2\} \times K'\right) , \end{aligned}$$
boundary set
$$\begin{aligned} \partial V(H_\sigma ) = \left( \{1\} \times \{x_i : 1 \le i \le N \}\right) \cup \left( \{2\}\times K'\right) , \end{aligned}$$
and edge set
$$\begin{aligned} E(H_\sigma )= & {} \left\{ \left\{ (2,\sigma _i(j)), (1,\sigma _i(j)), (1,\sigma _i(j+1))\right\} : 1 \le i \le N,\, 1 \le j \le |K'_i|-1 \right\} \\&\cup \left\{ \left\{ (2,\sigma _i(|K'_i|)), (1,\sigma _i(|K'_i|))\right\} : 1 \le i \le N\right\} . \end{aligned}$$
See Fig. 8 for an illustration. Note that the isomorphism class of \(H_\sigma \) does not depend on \(\sigma \). The edge set \(E(H_\sigma )\) can be identified with \(K'\) by taking the intersection of each edge with the set \(\{2\}\times K'\). Under this identification, the definition of \(H_\sigma \) ensures that
$$\begin{aligned} R_\sigma (\zeta ) = W^{H_\sigma ,2}(x,\zeta ) \end{aligned}$$
and consequently that
$$\begin{aligned} \mathbb {E}[M_{\sigma ,0}] \preceq \mathbb {W}^{H_\sigma ,2}_x(n,n+c_1+c_2). \end{aligned}$$
We claim that
$$\begin{aligned} \eta _{d,2}(H_\sigma ') \ge \eta _{d,2}(H_\sigma )+2 \end{aligned}$$
(4.28)
for any proper coarsening \(H_\sigma '\) of \(H_\sigma \), so that
$$\begin{aligned} \hat{\eta }_{d,2}(H_\sigma ) = \eta _{d,2}(H_\sigma )&= (d-2)(3|K'|-|V|) -d|K'| -(d-2)(|K'|-|V|)\\&= (d-4)|K'|. \end{aligned}$$
and hence that
$$\begin{aligned} \log _2 \mathbb {E}[M_{\sigma ,0}] \lesssim -(d-4) |K'| \, n + |K'|^2 \log _2(n) \end{aligned}$$
(4.29)
by Lemma 4.3. Indeed, suppose that \(H_\sigma /\!\bowtie \) is a proper coarsening of \(H_\sigma \) corresponding to some equivalence relation \(\bowtie \) on \(E(H_\sigma )\), and that the edge corresponding to \(x=\sigma _i(j) \in K'\) is maximal in its equivalence class in the sense that there does not exist \(\sigma _i(j')\) in the equivalence class of \(\sigma _i(j)\) with \(j' > j\). Clearly such a maximal x must exist in every equivalence class. Moreover, for such a maximal \(x = \sigma _i(j)\) there can be at most one edge of \(H_\sigma \) that it shares a vertex with and is also in its class, namely the edge corresponding to \(\sigma _i(j-1)\). Thus, if x is maximal and its equivalence class is not a singleton, let \(H_\sigma /\!\bowtie '\) be the coarsening corresponding to the equivalence relation \(\bowtie '\) obtained from \(\bowtie \) by removing x from its equivalence class. Then we have that \(\Delta (H_\sigma /\!\bowtie ') \le \Delta (H_\sigma /\!\bowtie )+1\) and that \(|E(H_\sigma /\!\bowtie ')| = |E(H_\sigma /\!\bowtie )|+1\), so that
$$\begin{aligned} \eta _{d,2}(H_\sigma /\!\bowtie ) \ge \eta _{d,2}(H_\sigma /\!\bowtie ') + d - (d-2)= \eta _{d,2}(H_\sigma /\!\bowtie ') + 2, \end{aligned}$$
(4.30)
and the claim follows by inducting on the number of edges in non-singleton equivalence classes.
To obtain a bound on the expectation of \(M_{\sigma ,2}\), considering the contribution of each shell \(\Lambda (m,m+1)\) yields the estimate
$$\begin{aligned} \sum _{w \in \Lambda _{w,2}} \langle \zeta _{x} w \rangle ^{-(d-2)}\langle \zeta _{x'} w \rangle ^{-(d-2)}&\preceq \sum _{m \ge n+ c_2 + 1} 2^{dm} 2^{-2(d-2)m} \preceq 2^{-(d-4)n} \end{aligned}$$
for every \(\zeta \in \Lambda _\zeta \), and it follows from Lemma 4.26 and (4.29) that
$$\begin{aligned} \log _2 \mathbb {E}[M_{\sigma ,2}]&\lesssim \log _2 \mathbb {E}[M_{\sigma ,0}] - (d-4)\, n \nonumber \\&\lesssim -(d-4)(|K'|+1)\, n + |K'|^2 \log _2 n. \end{aligned}$$
(4.31)
It remains to bound the expectation of \(M_{\sigma ,1}\). For each two distinct \(x,x' \in K'\), let \(H_\sigma (x,x')\) be the hypergraph with boundary obtained from \(H_\sigma \) by adding a single vertex, \(\star \), and adding this vertex to the two edges corresponding to x and \(x'\) respectively. These hypergraphs are defined in such a way that, by Lemma 4.26,
$$\begin{aligned} \mathbb {E}[M_{\sigma ,1}] \preceq \sum _{(x,x')\in O} \mathbb {W}^{H_\sigma (x,x'),\, 2}_x(n+c_1,n+c_1+c_2) \end{aligned}$$
We claim that
$$\begin{aligned} \hat{\eta }_{d,2}(H_\sigma (x,x')) \ge \hat{\eta }_{d,2}(H_\sigma ) +2 = (d-4)|K'|+2 \end{aligned}$$
(4.32)
for every two distinct \(x,x' \in K'\). First observe that coarsenings of \(H_\sigma \) and of \(H_\sigma (x,x')\) both correspond to equivalence relations on \(K'\). Let \(\bowtie \) be an equivalence relation on \(K'\), and let \(H'_\sigma (x,x')\) and \(H_\sigma '\) be the corresponding coarsenings. Clearly \(|E(H'_\sigma (x,x'))|=|E(H_\sigma ')|\) and \(|V_\circ (H'_\sigma (x,x'))|=|V_\circ (H_\sigma ')|+1\). If x and \(x'\) are related under \(\bowtie \), then we have that \(\Delta (H'_\sigma (x,x')) = \Delta (H_\sigma ')+1\), while if x and \(x'\) are not related under \(\bowtie \), then we have that \(\Delta (H'_\sigma (x,x')) = \Delta (H_\sigma ')+2\). We deduce that
$$\begin{aligned} \eta _{d,2}(H'_\sigma (x,x')) \ge {\left\{ \begin{array}{ll} \eta _{d,2}(H_\sigma ') &{}\text { if }x \bowtie x'\\ \eta _{d,2}(H_\sigma ') + 2 &{}\text { otherwise.} \end{array}\right. } \end{aligned}$$
If \(x \bowtie x'\) then \(H_\sigma '\) must be a proper coarsening of \(H_\sigma \), and we deduce from (4.28) that the inequality \(\eta _{d,2}(H'_\sigma (x,x')) \ge \eta _{d,2}(H_\sigma ) +2\) holds for every coarsening \(H'_\sigma (x,x')\) of \(H_\sigma (x,x')\), yielding the claimed inequality (4.32). Using (4.32), we deduce from Lemma 4.3 that
$$\begin{aligned} \log _2\mathbb {E}[M_{\sigma ,1}] \lesssim -\bigl [(d-4)|K'|+2\bigr ]\, n + |K'|^2\log _2 n. \end{aligned}$$
(4.33)
Combining (4.31) and (4.33) yields the claimed estimate (4.26), completing the proof. \(\square \)
Completion of the Proof of Lemma 4.21. Since the upper bound given by Lemma 4.25 is of lower order than the lower bound given by Lemma 4.24, it follows that there exists \(n_0=n_0(|K|,N,d,c_1,c_2)\) such that
$$\begin{aligned}\mathbb {P}(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}\cap \mathscr {E}) \le \frac{1}{2}\mathbb {P}(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I})\end{aligned}$$
if \(n \ge n_0\), and hence that
$$\begin{aligned}\log _2 \mathbb {P}(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}{\setminus } \mathscr {E}) \gtrsim \log _2 \mathbb {P}(\mathscr {C}\cap \mathscr {D}\cap \mathscr {I}) \gtrsim -(d-4)|K'|\, n \end{aligned}$$
for sufficiently large n as claimed. \(\square \)