A reduction
We begin our proof with the following proposition, which shows that it suffices for us to find sparse random graphs on \(\Gamma \) that have a unique infinite connected component. We define \({\mathcal {U}}(\Gamma ) \subseteq \{0,1\}^{\Gamma \times \Gamma }\) to be the set of graphs on \(\Gamma \) that have a unique infinite connected component.
Proposition 2.1
Let \(\Gamma \) be an infinite, finitely generated group. Then
$$\begin{aligned} {\text {cost}}(\Gamma ) \le 1 + \frac{1}{2} \inf \left\{ \int _{\omega \in {\mathcal {U}}(\Gamma )} \deg _\omega (o) \hbox {d}\mu (\omega ) : \mu \in M(\Gamma ,{\mathcal {U}}(\Gamma )) \right\} . \end{aligned}$$
Proposition 2.1 can be easily deduced from the induction formula of Gaboriau [13, Proposition II.6]. We provide a direct proof for completeness.
Proof
Take a Cayley graph G corresponding to a finite symmetric generating set of \(\Gamma \). Let \(\mu \in M(\Gamma ,{\mathcal {U}}(\Gamma ))\), let \(\omega \) be a random variable with law \(\mu \), and let \(\eta _0\) be the set of vertices of its unique infinite connected component. For each \(i\ge 1\), let \(\eta _i\) be the set of vertices in G that have graph distance exactly i from \(\eta _0\) in G. Note that \(\bigcup _{i\ge 0} \eta _i = \Gamma \), and that if \(i\ge 1\) then every vertex in \(\eta _i\) has at least one neighbour in \(\eta _{i-1}\). For each \(i\ge 1\) and each vertex \(v \in \eta _i\), let \(e^\rightarrow (v)\) be chosen uniformly at random from among those oriented edges of G that begin at v and end at a vertex of \(\eta _{i-1}\), and let e(v) be the unoriented edge obtained by forgetting the orientation of \(e^\rightarrow (v)\). These choices are made independently conditional on \(\omega \). We define \(\zeta =\{e(v) : v\in V\setminus \eta _0\}\) and define \(\nu \) to be the law of \(\xi =\omega \cup \zeta \). We clearly have that \(\xi \) is in \({\mathcal {S}}(\Gamma )\) whenever \(\omega \in {\mathcal {U}}(\Gamma )\), and hence that \(\nu \in M(\Gamma ,{\mathcal {S}}(\Gamma ))\). On the other hand, the mass-transport principle (see [32, Section 8.1]) implies that, writing \(\mathbb {P}\) and \(\mathbb {E}\) for probabilities and expectations taken with respect to the joint law of \(\omega \) and \(\{e(v) : v \in V \setminus \eta _0\}\),
$$\begin{aligned} \mathbb {E}\deg _\zeta (o) = \mathbb {P}(o\notin \eta _0) + \mathbb {E}\sum _{v\in V} {1}\bigl (v\notin \eta _0,\, e^\rightarrow (v)^+\!=o\bigr ) = 2\mathbb {P}(o\notin \eta _0) \le 2, \end{aligned}$$
where \(e^\rightarrow (v)^+\) denotes the other endpoint of \(e^\rightarrow (v)\). We deduce that
$$\begin{aligned} \int _{\xi \in {\mathcal {S}}(\Gamma )} \deg _\xi (o) \hbox {d}\nu (\xi ) = \mathbb {E}\deg _\zeta (o)+ \mathbb {E}\deg _\omega (o) \le 2 + \int _{\omega \in {\mathcal {U}}(\Gamma )} \deg _\omega (o) \hbox {d}\mu (\omega ), \end{aligned}$$
and the claim follows by taking the infimum over \(\mu \in M(\Gamma ,{\mathcal {U}}(\Gamma ))\). \(\square \)
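On a finite graph, the mass-transport computation above reduces to a counting identity: each vertex outside \(\eta _0\) contributes exactly one edge to \(\zeta \), and the chosen edges are all distinct (each edge joins consecutive layers and is chosen only by its farther endpoint), so the \(\zeta \)-degrees sum to exactly twice the number of vertices outside \(\eta _0\). The following sketch illustrates this on a discrete torus; the graph, the seed set, and all variable names are illustrative choices, not taken from the text.

```python
import random
from collections import deque

# Toy finite analogue of the proof's construction: an n x n torus,
# a seed set eta0, BFS layers eta_i, and one edge e(v) chosen from
# each vertex v outside eta0 toward the previous layer.
n = 10
V = [(x, y) for x in range(n) for y in range(n)]
adj = {(x, y): [((x + 1) % n, y), ((x - 1) % n, y),
                (x, (y + 1) % n), (x, (y - 1) % n)] for (x, y) in V}

rng = random.Random(0)
eta0 = {v for v in V if rng.random() < 0.2} or {V[0]}  # arbitrary seed set

# graph distance to eta0; the layer eta_i is {v : dist[v] == i}
dist = {v: 0 for v in eta0}
queue = deque(eta0)
while queue:
    v = queue.popleft()
    for w in adj[v]:
        if w not in dist:
            dist[w] = dist[v] + 1
            queue.append(w)

# each vertex outside eta0 picks one edge toward the previous layer
zeta = set()
for v in V:
    if dist[v] >= 1:
        w = rng.choice([u for u in adj[v] if dist[u] == dist[v] - 1])
        zeta.add(frozenset((v, w)))

# finite analogue of E[deg_zeta(o)] = 2 P(o not in eta0):
# the zeta-degrees sum to 2 * #(V \ eta0)
outside = [v for v in V if v not in eta0]
deg = {v: 0 for v in V}
for e in zeta:
    for u in e:
        deg[u] += 1
```

In the transitive setting the same bookkeeping, averaged by the mass-transport principle, gives the bound \(\mathbb {E}\deg _\zeta (o) \le 2\) used above.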
Remark 2.2
An arguably more canonical way to prove Proposition 2.1 is to take the union of \(\omega \) with an independent copy of the wired uniform spanning forest (\(\mathsf {WUSF}\)) of the Cayley graph G. Indeed, it is clear that some components of \(\mathsf {WUSF}\) must intersect the infinite component of \(\omega \) a.s., and it follows by indistinguishability of trees in \(\mathsf {WUSF}\) [20] that every tree intersects the infinite component of \(\omega \) a.s., so that the union of \(\mathsf {WUSF}\) with \(\omega \) is a.s. connected. (It should also be possible to argue that this union is connected more directly, using Wilson’s algorithm [7, 41].) The result then follows since \(\mathsf {WUSF}\) has expected degree 2 in any transitive graph [7, Theorem 6.4].
This alternative construction may be of interest for the following reason: It is well known [32, Question 10.12] that an affirmative answer to Question 1.1 would follow if one could construct for every \(\varepsilon >0\) an invariant coupling \((\mathsf {FUSF},\eta )\) of the free uniform spanning forest of a Cayley graph of \(\Gamma \) with a percolation process \(\eta \) of density at most \(\varepsilon \) such that \(\mathsf {FUSF}\cup \eta \in {\mathcal {S}}(\Gamma )\) almost surely. Since Kazhdan groups have \(\beta _1=0\), their free and wired uniform spanning forests always coincide [32, Section 10.2], so that proving Theorem 1.2 via this alternative proof of Proposition 2.1 can be seen as a realization of this potentially general strategy.
A construction
We now construct an invariant measure \(\mu \in M(\Gamma ,{\mathcal {U}}(\Gamma ))\) with arbitrarily small expected degree. We will work on an arbitrary Cayley graph of the Kazhdan group \(\Gamma \), and the measure we construct will be concentrated on subgraphs of this Cayley graph. (Recall from the introduction that countable Kazhdan groups are always finitely generated.)
Let \(G=(V,E)\) be a connected, locally finite graph. For each \(\omega \in \{0,1\}^V\), the clusters of \(\omega \) are defined to be the vertex sets of the connected components of the subgraph of G induced by the vertex set \(\{v\in V: \omega (v)=1\}\) (that is, the subgraph of G with vertex set \(\{v\in V: \omega (v)=1\}\) and containing every edge of G both of whose endpoints belong to this set). Fix \(p\in (0,1)\), and let \(\mu _1\) be the law of Bernoulli-p site percolation on G. For each \(i\ge 1\), we recursively define \(\mu _{i+1}\) to be the law of the random configuration \(\omega \in \{0,1\}^V\) obtained as follows:
1. Let \(\omega _1,\omega _2\in \{0,1\}^V\) be independent random variables each with law \(\mu _i\).
2. Let \(\eta _1\) and \(\eta _2\) be obtained from \(\omega _1\) and \(\omega _2\) respectively by choosing to either delete or retain each cluster independently at random with retention probability
$$\begin{aligned} q(p) := \frac{1-\sqrt{1-p}}{p} \in \left( \frac{1}{2},1\right) . \end{aligned}$$
3. Let \(\omega \) be the union of the configurations \(\eta _1\) and \(\eta _2\).
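One step of this recursion is straightforward to simulate. The sketch below carries out steps 1–3 on a finite torus; the helper names (`clusters`, `thin`, `step`), the torus, its size, and the value of p are all illustrative choices, not from the text. It also checks empirically that the vertex density is preserved by one step, in line with Proposition 2.3 below.

```python
import random

def clusters(occupied, adj):
    """Connected components of the subgraph induced by the occupied set."""
    seen, comps = set(), []
    for v in occupied:
        if v in seen:
            continue
        comp, stack = {v}, [v]
        seen.add(v)
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in occupied and w not in seen:
                    seen.add(w)
                    comp.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def thin(occupied, adj, q, rng):
    """q-thinning: retain each cluster independently with probability q."""
    kept = set()
    for comp in clusters(occupied, adj):
        if rng.random() < q:
            kept |= comp
    return kept

def step(omega1, omega2, adj, q, rng):
    """Steps 1-3: the union of two independently q-thinned configurations."""
    return thin(omega1, adj, q, rng) | thin(omega2, adj, q, rng)

n, p = 30, 0.3
q = (1 - (1 - p) ** 0.5) / p           # q(p) = (1 - sqrt(1 - p)) / p
V = [(x, y) for x in range(n) for y in range(n)]
adj = {(x, y): [((x + 1) % n, y), ((x - 1) % n, y),
                (x, (y + 1) % n), (x, (y - 1) % n)] for (x, y) in V}

rng = random.Random(1)
densities = []
for _ in range(200):
    omega1 = {v for v in V if rng.random() < p}   # two samples from mu_1
    omega2 = {v for v in V if rng.random() < p}
    omega = step(omega1, omega2, adj, q, rng)
    densities.append(len(omega) / len(V))
mean_density = sum(densities) / len(densities)    # should be close to p
```

The per-vertex density is preserved exactly in expectation: a vertex survives in the union with probability \(1-(1-qp)^2 = \phi (p) = p\).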
It follows by induction that if G is a Cayley graph of a finitely generated group \(\Gamma \) then \(\mu _i \in M(\Gamma ,\Omega )\) for every \(i\ge 1\). More generally, for each measure \(\mu \) on \(\{0,1\}^V\) and \(q\in [0,1]\) we write \(\mu ^q\) for the q-thinned measure, which is the law of the random variable \(\eta \) obtained by taking a random variable \(\omega \) with law \(\mu \) and choosing to either delete or retain each cluster of \(\omega \) independently at random with retention probability q. (See [33, Section 6] for a more formal construction of this measure.)
We write \(\delta _V\) and \(\delta _\emptyset \) for the probability measures on \(\{0,1\}^V\) giving all their mass to the all 1 and all 0 configurations respectively.
Proposition 2.3
Let \(G=(V,E)\) be a connected, locally finite graph, let \(p\in (0,1)\) and let \((\mu _i)_{i\ge 1}\) be as above. Then \(\mu _i(\{\omega : \omega (u) =1\})=p\) for every \(i\ge 1\) and \(u\in V\), and \(\mu _i\) converges \(\hbox {weak}^*\) to the measure \(p \delta _V + (1-p) \delta _\emptyset \) as \(i\rightarrow \infty \).
Proof
It suffices to prove that for every pair of adjacent vertices \(u,v \in V\) we have that
$$\begin{aligned} \mu _i(\{\omega : \omega (u) =1\})=p \quad \hbox { for every }\ i\ge 1 \quad \text { and } \quad \lim _{i\rightarrow \infty }\mu _i\bigl (\{\omega : \omega (u)=\omega (v)\}\bigr ) = 1. \end{aligned}$$
For each \(u,v\in V\) and \(i\ge 1\) let \(p_i(u) = \mu _i(\{\omega : \omega (u) =1\})\) and let \(\sigma _i(u,v) = \mu _i(\{\omega : \omega (u) = \omega (v)=1\})\). Note that \(p_1(u)=p\) for every \(u\in V\), that \(\sigma _1(u,v) = p^2 >0\) for every \(u,v\in V\), and that \(\sigma _i(u,v) \le p_i(u)\) for every \(u,v\in V\) and \(i\ge 1\). Write \(q=q(p)\). For each \(i\ge 1\) and \(u\in V\), it follows by definition of \(\mu _{i+1}\) that
$$\begin{aligned} p_{i+1}(u) =(1-(1-q)^2) \, p_i(u)^2 + 2q \, p_i(u) \, (1-p_i(u)) = \phi \big ( p_i(u) \big ), \end{aligned}$$
(2.1)
where \(\phi :\mathbb {R}\longrightarrow \mathbb {R}\) is the polynomial
$$\begin{aligned} \phi (x) := (2q-q^2)x^2+2qx(1-x) = 2qx-q^2 x^2. \end{aligned}$$
It follows by elementary analysis that \(\phi \) is strictly increasing and concave on (0, p), with \(\phi (0)=0\) and \(\phi (p)=p\). Thus, we deduce by induction that \(p_i(u)=p\) for every \(i\ge 1\) and \(u\in V\) as claimed. Similarly, for each \(i \ge 1\) and adjacent \(u,v \in V\) we have by definition of \(\mu _{i+1}\) that
$$\begin{aligned} \sigma _{i+1}(u,v)&= (1-(1-q)^2) \, \mu _i\bigl (\omega (u) = \omega (v)=1\bigr )^2 \\&\quad + 2q \, \mu _i\bigl (\omega (u) = \omega (v) = 1\bigr ) \, \bigl (1-\mu _i(\omega (u) = \omega (v)=1)\bigr )\\&\quad +2q^2\mu _i\bigl (\omega (u) =1, \omega (v)=0\bigr ) \, \mu _i\bigl (\omega (u) =0, \omega (v)=1\bigr ) \\&= \phi (\sigma _i(u,v))+ 2q^2\mu _i\bigl (\omega (u) =1, \omega (v)=0\bigr ) \, \mu _i\bigl (\omega (u) =0, \omega (v)=1\bigr )\\&\ge \phi (\sigma _i(u,v)), \end{aligned}$$
where we have used the fact that if \(\omega (u)=\omega (v)=1\) then u and v are in the same cluster of \(\omega \). Since \(\phi \) is strictly increasing and concave on (0, p), with 0 and p as its only fixed points, and since \(\sigma _1(u,v)>0\), it follows that \(\sigma _i(u,v) \uparrow p\) as \(i\rightarrow \infty \). The claim now follows since
$$\begin{aligned} \mu _i(\omega (u) \ne \omega (v)) &= \mu _i(\omega (u) =1, \omega (v) =0) + \mu _i(\omega (u) =0, \omega (v) =1) \\ &= 2p\left( 1-\frac{\sigma _i(u,v)}{p}\right) , \end{aligned}$$
which tends to zero as \(i\rightarrow \infty \). \(\square \)
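The fixed-point analysis in this proof can be sanity-checked numerically. The sketch below is an illustrative check (not part of the proof): it verifies that \(q(p)\in (1/2,1)\), that p is a fixed point of \(\phi \), and that iterating \(\phi \) from \(\sigma _1 = p^2\) increases monotonically up to p.

```python
from math import sqrt

def q_of(p):
    """q(p) = (1 - sqrt(1 - p)) / p, the retention probability."""
    return (1 - sqrt(1 - p)) / p

def phi(x, q):
    """phi(x) = 2 q x - q^2 x^2, the one-step recursion map."""
    return 2 * q * x - q * q * x * x

checks = {}
for p in (0.1, 0.3, 0.5, 0.9):
    q = q_of(p)
    sigma = p * p                      # sigma_1(u, v) = p^2
    monotone = True
    for _ in range(5000):
        nxt = phi(sigma, q)
        # sigma_i increases toward p (tolerances absorb float rounding)
        monotone = monotone and (sigma - 1e-12 <= nxt <= p + 1e-12)
        sigma = nxt
    checks[p] = (q, abs(phi(p, q) - p), sigma, monotone)
```

Since \(\phi '(p) = 2q\sqrt{1-p} < 1\), the convergence is geometric once \(\sigma _i\) is close to p, which is why a few thousand iterations suffice even for small p.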
See Figs. 1 and 2 for simulations of the measures \(\mu _i\) on \(\mathbb {Z}^2\) and \(\mathbb {Z}^3\).
Ergodicity and condensation
On Cayley graphs of infinite Kazhdan groups, Proposition 2.3 will be useful only if we also know something about the ergodicity of the measures \(\mu _i\). To this end, we will apply some tools introduced by Lyons and Schramm [33] that give sufficient conditions for ergodicity of q-thinned processes. The first such lemma, which is proven in [33, Lemma 4.2] and is based on an argument of Burton and Keane [8], shows that every cluster of an invariant percolation process has an invariantly-defined frequency as measured by an independent random walk. Moreover, conditional on the percolation configuration, the frequency of each cluster is non-random and does not depend on the starting point of the random walk.
Lemma 2.4
(Cluster frequencies) Let \(G=(V,E)\) be a Cayley graph of an infinite, finitely generated group \(\Gamma \). There exists a Borel measurable, \(\Gamma \)-invariant function \({\text {freq}}:\{0,1\}^V \rightarrow [0,1]\) with the following property. Let \(\mu \in M(\Gamma ,\Omega )\) be an invariant site percolation, and let \(\omega \) be a random variable with law \(\mu \). Let v be a vertex of G and let \(\mathbb {P}_v\) be the law of simple random walk \(\{X_n\}_{n\ge 0}\) on G started at v. Then
$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{1}{N} \sum _{n=0}^{N-1} {1}_{\{X_n \in C\}} = {\text {freq}}(C) \qquad \hbox { for every cluster } C \hbox { of } \omega \end{aligned}$$
(2.2)
\(\mu \otimes \mathbb {P}_v\)-almost surely.
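On a finite regular graph, the analogue of Lemma 2.4 is simply the ergodic theorem for simple random walk: the stationary distribution is uniform, so the asymptotic visit frequency of a fixed set C is |C|/|V|, independently of the starting point. The sketch below illustrates this on a torus; the graph, the set C, and the step counts are hypothetical stand-ins, not the setting of the lemma.

```python
import random

# simple random walk on an n x n torus; visit frequency of a fixed set C
n = 10
V = [(x, y) for x in range(n) for y in range(n)]

def neighbours(v):
    x, y = v
    return [((x + 1) % n, y), ((x - 1) % n, y),
            (x, (y + 1) % n), (x, (y - 1) % n)]

rng = random.Random(2)
# a fixed "cluster": here simply the left half of the torus
C = {(x, y) for (x, y) in V if x < n // 2}

def visit_frequency(start, steps):
    """Cesaro average (1/N) sum_{n<N} 1{X_n in C} along a walk from start."""
    v, hits = start, 0
    for _ in range(steps):
        hits += v in C
        v = rng.choice(neighbours(v))
    return hits / steps

freq_a = visit_frequency((0, 0), 200_000)
freq_b = visit_frequency((7, 3), 200_000)   # different start, same limit
target = len(C) / len(V)                    # uniform stationary measure: 1/2
```

Lemma 2.4 is the much deeper infinite-volume statement: the limit exists \(\mu \otimes \mathbb {P}_v\)-a.s., is non-random given \(\omega \), and is an invariant function of the configuration.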
This notion of frequency is used in the next proposition, which is a slight variation on [33, Lemma 6.4]. We define \(\mathscr {F}\subseteq \{0,1\}^V\) to be the event that there exists a cluster of positive frequency. Note that the \(\Gamma \)-invariance and Borel measurability of \({\text {freq}}\) implies that \(\mathscr {F}\) is \(\Gamma \)-invariant and Borel measurable also.
Proposition 2.5
(Ergodicity of the q-thinning) Let \(G=(V,E)\) be a Cayley graph of an infinite, finitely generated group \(\Gamma \), and let \(\mu \in E(\Gamma ,\Omega )\) be an ergodic invariant site percolation such that \(\mu (\mathscr {F})=0\). Then the q-thinned measure \(\mu ^q\) is also ergodic for every \(q\in [0,1]\). Similarly, if we have k measures \(\nu _1,\ldots ,\nu _k \in E(\Gamma ,\Omega )\) such that \(\nu _i(\mathscr {F})=0\) for every \(1\le i \le k\) and \(\nu _1\otimes \dots \otimes \nu _k\) is ergodic, then \(\nu _1^q\otimes \dots \otimes \nu _k^q\) is also ergodic for every \(q\in [0,1]\).
Proof
Let \(\omega \) be a random variable with law \(\mu \in M(\Gamma ,\Omega )\). We first show that if \(\mu (\mathscr {F})=0\) then
$$\begin{aligned} \lim _{N\rightarrow \infty } \frac{1}{N} \sum _{n=0}^{N-1} \mathbb {P}_o\Big ( B(X_0,r) \longleftrightarrow B(X_n,r) \Big ) = 0\qquad \mu \,-\,\text {a.s.}, \end{aligned}$$
(2.3)
for every \(r\ge 0\), where B(v, r) is the ball of radius r around \(v\in V\), and for \(U_1,U_2\subseteq V\), we write \(\{U_1 \longleftrightarrow U_2\}\) for the event that there exist \(x_1\in U_1\) and \(x_2\in U_2\) that are in the same cluster of \(\omega \). An easy but important implication of (2.3) is that
$$\begin{aligned} \inf _{x\in V} \mu \left( B(o,r) \longleftrightarrow B(x,r) \right) = 0 \end{aligned}$$
(2.4)
for every \(r \ge 0\) and every \(\mu \in M(\Gamma ,\Omega )\) such that \(\mu (\mathscr {F})=0\). (Note that the proof of [33, Lemma 6.4] established this fact under the additional assumption that \(\mu \) is insertion tolerant.)
Condition on \(\omega \), and denote the finitely many clusters that intersect B(o, r) by \(\{C_i\}_{i=1}^m\). Taking \(\mathbb {P}_o\)-expectations in (2.2) and using the dominated convergence theorem, Lemma 2.4 implies that
$$\begin{aligned}&\lim _{N\rightarrow \infty } \frac{1}{N} \sum _{n=0}^{N-1} \mathbb {P}_o\big ( B(X_0,r) \longleftrightarrow X_n \big ) \nonumber \\&\quad = \lim _{N\rightarrow \infty } \frac{1}{N} \sum _{i=1}^m \sum _{n=0}^{N-1} \mathbb {P}_o\big ( X_n \in C_i \big ) = 0 \qquad \mu -\text {a.s.} \end{aligned}$$
(2.5)
Now notice that
$$\begin{aligned} \sum _{i=0}^r \mathbb {P}_o\Big ( B(X_0,r) \longleftrightarrow X_{n+i} \;\Big |\;B(X_0,r) \longleftrightarrow B(X_n,r) \Big ) \ge \deg (o)^{-r} \end{aligned}$$
for every \(n,r\ge 0\), and hence that
$$\begin{aligned}&\sum _{n=0}^{N-1} \mathbb {P}_o\big ( B(X_0,r) \longleftrightarrow B(X_n,r) \big ) \nonumber \\&\quad \le (r+1)\deg (o)^{r}\sum _{n=0}^{N-1+r} \mathbb {P}_o\big ( B(X_0,r) \longleftrightarrow X_n \big ) \end{aligned}$$
(2.6)
for every \(N\ge 1\) and \(r\ge 0\). Dividing by N and letting \(N\rightarrow \infty \), this inequality and (2.5) imply (2.3).
The rest of the proof of the ergodicity of \(\mu ^q\) is identical to the argument in [33, Lemma 6.4], which we recall here for the reader’s convenience. Suppose that \(\mu \) is ergodic. Denote by \(\omega ^q\) the q-thinned configuration obtained from \(\omega \), let \(\mathbb {P}^q\) denote the joint law of \((\omega ,\omega ^q)\), and let A be any invariant event for \((\omega ,\omega ^q)\). For every \(\varepsilon >0\) there exists some \(r>0\) and an event \(A_{\varepsilon ,r}\) depending only on the restriction of \((\omega ,\omega ^q)\) to B(o, r) such that \(\mathbb {P}^q\big (A \,\triangle \, A_{\varepsilon ,r}\big ) < \varepsilon \). By (2.4) we may take x such that \(\mu \big ( B(o,r) \longleftrightarrow B(x,r) \big )<\varepsilon \). Conditionally on \(D_x:=\{B(o,r) \,\,\, \not \!\!\!\longleftrightarrow B(x,r)\}\) in \(\omega \), the coin flips for the q-thinning of the clusters intersecting B(o, r) and B(x, r) are independent, hence
$$\begin{aligned} \Big | \mathbb {P}^q \big (A_{\varepsilon ,r} \cap \gamma _x A_{\varepsilon ,r} \,\big |\,\omega \big ) - \mathbb {P}^q \big (A_{\varepsilon ,r} \,\big |\,\omega \big ) \, \mathbb {P}^q \big (\gamma _x A_{\varepsilon ,r} \,\big |\,\omega \big ) \Big | \le 2 \cdot {\mathbf {1}}_{D_x}(\omega )\,, \end{aligned}$$
where \(\gamma _x\) is translation by \(x \in \Gamma \). Taking expectation with respect to \(\mu \) then letting \(\varepsilon \rightarrow 0\), we get that
$$\begin{aligned} \mathbb {E}_\mu \Big | \mathbb {P}^q(A\mid \omega ) - \mathbb {P}^q(A\mid \omega )^2 \Big | = 0\,, \end{aligned}$$
and hence that \( \mathbb {P}^q (A\mid \omega ) \in \{0,1\}\)\(\mu \)-almost surely. By the ergodicity of \(\mu \), this implies that \(\mathbb {P}^q(A) \in \{0,1\}\). It follows that \(\mathbb {P}^q\) is ergodic and hence that \(\mu ^q\) is ergodic also.
Similarly, if \(\nu _1\otimes \dots \otimes \nu _k\) is ergodic and \(\nu _i(\mathscr {F})=0\) for every \(1\le i \le k\), then we have by (2.3) that if \(\mathbf {\omega }=(\omega _1,\ldots ,\omega _k)\) is a random variable with law \(\mathbf {\nu }=\nu _1\otimes \dots \otimes \nu _k\) then
$$\begin{aligned}&\inf _{x\in V} \nu \left( B(o,r) \leftrightarrow B(x,r) \hbox { in } \omega _i \hbox { for some } 1\le i \le k \right) \\&\quad \le \lim _{N\rightarrow \infty } \frac{1}{N} \sum _{n=0}^{N-1} \sum _{i=1}^k \nu _i \otimes \mathbb {P}_o\Big (B(X_0,r) \longleftrightarrow B(X_n,r) \Big ) =0. \end{aligned}$$
The ergodicity of \(\nu _1^q\otimes \dots \otimes \nu _k^q\) then follows by a similar argument to that above. \(\square \)
Define \(i_{\mathrm {freq}}\) to be the minimal \(i\ge 1\) such that \(\mu _i(\mathscr {F})>0\), letting \(i_{\mathrm {freq}}=\infty \) if this never occurs. We want to prove, using induction and Proposition 2.5, that \(\mu _i\) is ergodic for every \(1\le i\le i_{\mathrm {freq}}\). However, it is not always true that the union of two independent ergodic percolation processes is ergodic. To circumvent this problem, we instead prove a slightly stronger statement. Recall that a measure \(\mu \in M(\Gamma ,\Omega )\) is weakly mixing if and only if the independent product \(\mu \otimes \mu \in M(\Gamma ,\Omega ^2)\) is ergodic when \(\Gamma \) acts diagonally on \(\Omega ^2\), if and only if the k-wise independent product \(\mu ^{\otimes k} \in M(\Gamma ,\Omega ^k)\) is ergodic for every \(k\ge 2\) [40, Theorem 1.24]. This can be taken as the definition of weak mixing for the purposes of this paper.
Proposition 2.6
Let G be a Cayley graph of an infinite, finitely generated group \(\Gamma \), let \(p\in (0,1)\), and let \((\mu _i)_{i\ge 1}\) be as above. Then \(\mu _i\) is weakly mixing for every \(1\le i\le i_{\mathrm {freq}}\).
Proof
We will prove the claim by induction on i. For \(i=1\), \(\mu _1\) is simply the law of Bernoulli-p percolation, which is certainly weakly mixing. Now assume that \(i<i_{\mathrm {freq}}\) and that \(\mu _i\) is weakly mixing, so that \(\mu _i^{\otimes 4}\) is ergodic. Applying Proposition 2.5 we obtain that the independent 4-wise product \((\mu ^q_i)^{\otimes 4}\) of the q-thinned percolations is again ergodic. Since \(\mu _{i+1}^{\otimes 2}\) can be realized as a factor of \((\mu _i^q)^{\otimes 4}\) by taking the unions in the first and second halves of the 4 coordinates, and since factors of ergodic processes are ergodic, it follows that \(\mu _{i+1}^{\otimes 2}\) is ergodic and hence that \(\mu _{i+1}\) is weakly mixing. \(\square \)
Since \(\mathscr {F}\) is an invariant event, Proposition 2.6 has the following immediate corollary.
Corollary 2.7
Let G be a Cayley graph of an infinite, finitely generated group \(\Gamma \), let \(p\in (0,1)\), and let \((\mu _i)_{i\ge 1}\) be as above. If \(i_{\mathrm {freq}}<\infty \) then \(\mu _{i_{\mathrm {freq}}}(\mathscr {F})=1\).
Remark 2.8
It is possible to prove by induction that the measures \(\mu _i\) are both insertion tolerant and deletion tolerant for every \(i\ge 1\). Thus, it follows from the indistinguishability theorem of Lyons and Schramm [33], which holds for all insertion tolerant invariant percolation processes, that if \(i_{\mathrm {freq}} < \infty \) then \(\mu _{i_{\mathrm {freq}}}\) is supported on configurations in which there is a unique infinite cluster; see [33, Section 4]. We will not require this result.
Next, we deduce the following from Proposition 2.6.
Corollary 2.9
(Condensation) Let G be a Cayley graph of a countably infinite Kazhdan group, let \(p\in (0,1)\) and let \((\mu _i)_{i\ge 1}\) be as above. Then \(i_{\mathrm {freq}}<\infty \).
Proof
Suppose for contradiction that \(i_{\mathrm {freq}}=\infty \). Then it follows by Proposition 2.6 that \(\mu _i\) is weakly mixing, and hence ergodic, for every \(i\ge 1\). But \(\mu _i\) converges \(\hbox {weak}^*\) to the non-ergodic measure \(p\delta _V +(1-p)\delta _\emptyset \) by Proposition 2.3, contradicting property (T): by the Glasner–Weiss theorem, the ergodic measures of a measure-preserving action of a Kazhdan group form a \(\hbox {weak}^*\)-closed set. \(\square \)
Proof of Theorem 1.2
Recall that every countable Kazhdan group is finitely generated [4, Theorem 1.3.1]. Let \(G=(V,E)\) be a Cayley graph of \(\Gamma \), let \(p\in (0,1)\), and let \((\mu _i)_{i\ge 1}\) be as above. It follows from Corollaries 2.9 and 2.7 that \(1\le i_{\mathrm {freq}}<\infty \) and that \(\mu _{i_{\mathrm {freq}}}\) is supported on \(\mathscr {F}\). Let \(\omega \in \{0,1\}^V\) be sampled from \(\mu _{i_{\mathrm {freq}}}\), so that \(\omega \in \mathscr {F}\) almost surely. Fatou’s lemma implies that the total frequency of all components of \(\omega \) is at most 1 almost surely, and consequently that \(\omega \) has at most finitely many components of maximal frequency almost surely. Let \(\omega '\) be obtained from \(\omega \) by choosing one of the maximum-frequency components of \(\omega \) uniformly at random, retaining this component, and deleting all other components of \(\omega \), so that \(\omega '\) has a unique infinite cluster almost surely. Let \(\eta \in \{0,1\}^{\Gamma \times \Gamma }\) be defined by setting \(\eta (u,v)=1\) if and only if u and v are adjacent in G and have \(\omega '(u)=\omega '(v)=1\), and let \(\nu \) be the law of \(\eta \), so that \(\nu \in M(\Gamma ,{\mathcal {U}}(\Gamma ))\). It follows by Propositions 2.1 and 2.3 that
$$\begin{aligned} {\text {cost}}(\Gamma ) &\le 1 + \frac{1}{2}\int _{{\mathcal {U}}(\Gamma )} \deg _\eta (o) \hbox {d}\nu (\eta ) \le 1 + \frac{\deg (o)}{2}\int _{\Omega } \omega (o) \hbox {d}\mu _{i_{\mathrm {freq}}}(\omega )\\ &= 1 + \frac{p \deg (o)}{2}. \end{aligned}$$
The claim now follows since \(p\in (0,1)\) was arbitrary. \(\square \)