1 Introduction

The conformal loop ensemble \({{\mathrm{CLE}}}_\kappa \) for \(\kappa \in (8/3,8)\) is the canonical conformally invariant measure on countably infinite collections of non-crossing loops in a simply connected domain \(D \subsetneq \mathbb {C}\) [12, 14]. It is the loop analogue of \({{\mathrm{SLE}}}_\kappa \), the canonical conformally invariant measure on non-crossing paths. Just as \({{\mathrm{SLE}}}_\kappa \) arises as the scaling limit of a single interface in many two-dimensional discrete models, \({{\mathrm{CLE}}}_\kappa \) is a limiting law for the joint distribution of all of the interfaces. Figures 1 and 2 show two discrete loop models believed or known to have \({{\mathrm{CLE}}}_\kappa \) as a scaling limit. Figure 3 illustrates these scaling limits for several values of \(\kappa \).

Fig. 1
figure 1

Nesting of loops in the \(O(n)\) loop model. Each \(O(n)\) loop configuration has probability proportional to \(x^{\text {total length of loops}} \times n^{\#\text { loops}}\). For a certain critical value of \(x\), the \(O(n)\) model for \(0\le n\le 2\) has a “dilute phase”, which is believed to converge to \({{\mathrm{CLE}}}_\kappa \) for \(8/3<\kappa \le 4\) with \(n=-2\cos (4\pi /\kappa )\). For \(x\) above this critical value, the \(O(n)\) loop model is in a “dense phase”, which is believed to converge to \({{\mathrm{CLE}}}_\kappa \) for \(4\le \kappa \le 8\), again with \(n=-2\cos (4\pi /\kappa )\). See [6] for further background. a Site percolation. b \(O(n)\) loop model; percolation corresponds to \(n=1\) and \(x=1\), which is in the dense phase. c Area shaded by nesting of loops

Fig. 2
figure 2

Nesting of loops separating critical Fortuin-Kasteleyn (FK) clusters from dual clusters. Each FK bond configuration has probability proportional to \((p/(1-p))^{\#\text { edges}} \times q^{\#\text { clusters}}\) [4], where there is believed to be a critical point at \(p=1/(1+1/\sqrt{q})\) (proved for \(q\ge 1\) [2]). For \(0\le q\le 4\), these loops are believed to have the same large-scale behavior as the \(O(n)\) model loops for \(n=\sqrt{q}\) in the dense phase, that is, to converge to \({{\mathrm{CLE}}}_\kappa \) for \(4\le \kappa \le 8\) (see [6, 11]). a Critical FK bond configuration; here \(q=2\). b Loops separating FK clusters from dual clusters. c Area shaded by nesting of loops

Fig. 3
figure 3

Simulations of discrete loop models which converge to (or are believed to converge to, indicated with an asterisk) \({{\mathrm{CLE}}}_\kappa \) in the fine mesh limit. For each of the \({{\mathrm{CLE}}}_\kappa \)’s, one particular nested sequence of loops is outlined. For \({{\mathrm{CLE}}}_\kappa \), almost all of the points in the domain are surrounded by an infinite nested sequence of loops, though the discrete samples shown here display only a few orders of nesting. a \({{\mathrm{CLE}}}_3\) (from the critical Ising model). b \({{\mathrm{CLE}}}_4\) (from the FK model with \(q=4\)) \(\star \). c \({{\mathrm{CLE}}}_{16/3}\) (from the FK model with \(q=2\)). d \({{\mathrm{CLE}}}_6\) (from critical bond percolation) \(\star \)

Let \(\kappa \in (8/3,8)\), let \(D \subsetneq \mathbb {C}\) be a simply connected domain, and let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) in \(D\). For each point \(z \in D\) and \(\varepsilon > 0\), we let \(\mathcal {N}_z(\varepsilon )\) be the number of loops of \(\Gamma \) which surround \(B(z,\varepsilon )\), the ball of radius \(\varepsilon \) centered at \(z\). We prove the existence and conformal invariance of the limit as \(\varepsilon \rightarrow 0\) of the random function \(z\mapsto \mathcal {N}_z(\varepsilon ) - \mathbb {E}[\mathcal {N}_z(\varepsilon )]\) (with no additional normalization) in an appropriate space of distributions (Theorem 1.1). We refer to this object as the nesting field because, roughly, its value describes the fluctuations of the nesting of \(\Gamma \) around its mean. This result also holds when the loops are assigned i.i.d. weights. More precisely, we fix a probability measure \(\mu \) on \(\mathbb {R}\) with finite second moment, define \(\Gamma _z(\varepsilon )\) to be the set of loops in \(\Gamma \) surrounding \(B(z,\varepsilon )\), and define

$$\begin{aligned} \mathcal {S}_z(\varepsilon ) = \sum _{\mathcal {L}\in \Gamma _z(\varepsilon )} \xi _\mathcal {L}, \end{aligned}$$
(1.1)

where \(\xi _\mathcal {L}\) are i.i.d. random variables with law \(\mu \). We show that \(z\mapsto \mathcal {S}_z(\varepsilon ) - \mathbb {E}[\mathcal {S}_z(\varepsilon )]\) converges as \(\varepsilon \rightarrow 0\) to a distribution we call the weighted nesting field. When \(\kappa =4\) and \(\mu \) is a signed Bernoulli distribution, the weighted nesting field is the GFF [9, 10]. Our result generalizes this construction to other values of \(\kappa \in (8/3,8)\) and weight measures \(\mu \). In Theorem 1.2, we answer a question asked in [12, Problem 8.2].
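To make definition (1.1) concrete, here is a toy computation of \(\mathcal {S}_z(\varepsilon )\) in which the loops of \(\Gamma _z(\varepsilon )\) are replaced by a hypothetical list of nested circles around \(z\), each recorded by the radius of the largest ball \(B(z,r)\) it surrounds. This illustrates only the bookkeeping in (1.1); it is not a CLE sampler.

```python
import random

def weighted_nesting_sum(loop_radii, eps, weights):
    # A loop recorded by radius r "surrounds B(z, eps)" exactly when r >= eps,
    # so S_z(eps) sums the weights of the loops at scale >= eps.
    return sum(w for r, w in zip(loop_radii, weights) if r >= eps)

# Hypothetical nested loops at geometrically shrinking scales, +/-1 weights.
radii = [2.0 ** (-k) for k in range(1, 11)]       # 1/2, 1/4, ..., 1/1024
rng = random.Random(0)
weights = [rng.choice([-1.0, 1.0]) for _ in radii]

# Only the five outermost loops surround B(z, 1/32).
print(weighted_nesting_sum(radii, 2.0 ** -5, weights))
```

As \(\varepsilon \rightarrow 0\) this sum accumulates all loops surrounding \(z\); the nesting field is the limit of its centered version.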

The weighted nesting field is a random distribution, or generalized function, on \(D\). Informally, it is too rough to be defined pointwise on \(D\), but it is still possible to integrate it against sufficiently smooth compactly supported test functions on \(D\). More precisely, we prove convergence to the nesting field in a certain local Sobolev space \(H_{{\text {loc}}}^s(D)\subset C_c^\infty (D)^{\prime }\) on \(D\), where \(C_c^\infty (D)\) is the space of compactly supported smooth functions on \(D\), \(C_c^\infty (D)^{\prime }\) is the space of distributions on \(D\), and the index \(s\in \mathbb {R}\) is a parameter characterizing how smooth the test functions need to be. We review all the relevant definitions in Sect. 5.

The nesting field gives a loop-free description of the conformal loop ensemble. For \(\kappa \le 4\) we believe that the nesting field determines the CLE, but that for \(\kappa >4\) the CLE contains more information. (See Question 2 in the open problems section.) In order to prove the existence of the nesting field, we show that the law of CLE near a point rapidly forgets loops that are far away, in a sense that we make quantitative.

Given \(h\in C_c^\infty (D)^{\prime }\) and \(f\in C_c^\infty (D)\), we denote by \(\langle h,f \rangle \) the evaluation of the linear functional \(h\) at \(f\). Recall that the pullback \(h\circ \varphi ^{-1}\) of \(h \in C_c^\infty (D)^{\prime }\) under a conformal map \(\varphi ^{-1}\) is defined by \(\langle h\circ \varphi ^{-1},f\rangle {{:}{=}}\langle h,|\varphi ^{\prime }|^2f\circ \varphi \rangle \) for \(f\in C_c^\infty (\varphi (D))\).

Theorem 1.1

Fix \(\kappa \in (8/3,8)\) and \(\delta >0\), and suppose \(\mu \) is a probability measure on \(\mathbb {R}\) with finite second moment. Let \(D \subsetneq \mathbb {C}\) be a simply connected domain. Let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) on \(D\) and \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\) be i.i.d. weights on the loops of \(\Gamma \) drawn from the distribution \(\mu \). Recall that for \(\varepsilon > 0\) and \(z \in D\), \(\mathcal {S}_z(\varepsilon )\) denotes

$$\begin{aligned} \mathcal {S}_z(\varepsilon ) = \sum _{\begin{array}{c} \mathcal {L}\in \Gamma \\ \mathcal {L}\text { surrounds }B(z,\varepsilon ) \end{array}} \xi _\mathcal {L}. \end{aligned}$$

Let

$$\begin{aligned} h_\varepsilon (z) = \mathcal {S}_z(\varepsilon ) - \mathbb {E}[ \mathcal {S}_z(\varepsilon )]. \end{aligned}$$
(1.2)

There exists an \(H_{{\text {loc}}}^{-2-\delta }(D)\)-valued random variable \(h=h(\Gamma ,(\xi _\mathcal {L}))\) such that for all \(f\in C_c^\infty (D)\), almost surely \(\lim _{\varepsilon \rightarrow 0} \langle h_{\varepsilon },f\rangle = \langle h,f\rangle \). Moreover, \(h(\Gamma ,(\xi _\mathcal {L}))\) is almost surely a deterministic conformally invariant function of the CLE \(\Gamma \) and the loop weights \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\): almost surely, for any conformal map \(\varphi \) from \(D\) to another simply connected domain, we have

$$\begin{aligned} h(\varphi (\Gamma ),(\xi _{\varphi (\mathcal {L})})_{\mathcal {L}\in \Gamma }) = h(\Gamma ,(\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }) \circ \varphi ^{-1}. \end{aligned}$$

In Theorem 6.2, we prove a stronger form of convergence, namely almost sure convergence in the norm topology of \(H^{-2-\delta }(D)\), when \(\varepsilon \) tends to \(0\) along any given geometric sequence.

We also consider the step nesting sequence, defined by

$$\begin{aligned} \mathfrak {h}_n(z) = \sum _{k=1}^n \xi _{\mathcal {L}_k(z)} - \mathbb {E}\left[ \sum _{k=1}^n \xi _{\mathcal {L}_k(z)}\right] , \quad n\in \mathbb {N}, \end{aligned}$$

where \(\mathcal {L}_k(z)\) denotes the \(k\)th outermost loop of \(\Gamma \) surrounding \(z\), and the random variables \((\xi _{\mathcal {L}})_{\mathcal {L}\in \Gamma }\) are i.i.d. with law \(\mu \). We may assume without loss of generality that \(\mu \) has zero mean, so that \(\mathfrak {h}_n(z) = \sum _{k=1}^n \xi _{\mathcal {L}_k(z)}\). We establish the following convergence result for the step nesting sequence, which parallels Theorem 1.1:

Theorem 1.2

Suppose that \(D \subsetneq \mathbb {C}\) is a proper simply connected domain and \(\delta >0\). Assume that the weight distribution \(\mu \) has a finite second moment and zero mean. There exists an \(H_{{\text {loc}}}^{-2-\delta }(D)\)-valued random variable \(\mathfrak {h}\) such that \(\lim _{n\rightarrow \infty } \mathfrak {h}_{n} = \mathfrak {h}\) almost surely in \(H_{{\text {loc}}}^{-2-\delta }(D)\). Moreover, \(\mathfrak {h}\) is almost surely determined by \(\Gamma \) and \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\).

Suppose that \(\acute{D}\) is another simply connected domain and \(\varphi :D \rightarrow \acute{D}\) is a conformal map. Let \(\acute{\mathfrak {h}}\) be the random element of \(H_{{\text {loc}}}^{-2-\delta }(\acute{D})\) associated with the \({{\mathrm{CLE}}}_\kappa \) \(\acute{\Gamma } = \varphi (\Gamma )\) on \(\acute{D}\) and the weights \((\xi _{\varphi ^{-1}(\acute{\mathcal {L}})})_{\acute{\mathcal {L}} \in \acute{\Gamma }}\). Then \(\acute{\mathfrak {h}} = \mathfrak {h} \circ \varphi ^{-1}\) almost surely.

In Proposition 7.2, we show that the step nesting field and the weighted nesting field are equal, under the assumption that \(\mu \) has zero mean.

When \(\kappa =4\), \(\sigma =\sqrt{\pi /2}\), and \(\mu =\mu _\mathrm{B}\) where \(\mu _\mathrm{B}(\{\sigma \}) = \mu _\mathrm{B}(\{-\sigma \})=1/2\) (as in Theorem 1.2 of [10]), the distribution \(h\) of Theorem 1.1 is that of a GFF on \(D\) [9]. The existence of the distributional limit for other values of \(\kappa \) was posed in [12, Problem 8.2]. Note that in this context, \(\tfrac{2}{\pi } \mathbb {E}[ \mathcal {S}_z(\varepsilon ) \mathcal {S}_w(\varepsilon )]\) is equal to the expected number of loops which surround both \(B(z,\varepsilon )\) and \(B(w,\varepsilon )\). Let \(G_D(z,w)\) be the Green’s function for the negative Dirichlet Laplacian on \(D\). Since \(\mathcal {S}_z(\varepsilon )\) converges to the GFF [9], it follows that \(\tfrac{2}{\pi } \mathbb {E}[ \mathcal {S}_z(\varepsilon ) \mathcal {S}_w(\varepsilon )]\) converges to \(\frac{2}{\pi }G_D(z,w)\) (see Section 2 in [3]). That is, the expected number of \({{\mathrm{CLE}}}_4\) loops which surround both \(z\) and \(w\) is given by \(\frac{2}{\pi }G_D(z,w)\).

One of the elements of the proof of Theorem 1.1 is an extension of this bound which holds for all \(\kappa \in (8/3,8)\). We include this as our final main theorem.

Theorem 1.3

Let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) (with \(8/3<\kappa <8\)) on a simply connected proper domain \(D\). For \(z,w \in D\) distinct, let \(\mathcal {N}_{z,w}\) be the number of loops of \(\Gamma \) which surround both \(z\) and \(w\). For each integer \(j\ge 1\), there exists a constant \(C_{\kappa ,j} \in (0,\infty )\) such that

$$\begin{aligned} \big |\mathbb {E}[\mathcal {N}_{z,w}^j] - (\nu _{\mathrm {typical}}\,2\pi \, G_D(z,w))^j\big | \le C_{\kappa ,j} (G_D(z,w)+1)^{j-1}. \end{aligned}$$
(1.3)

Outline

In Sect. 2 we review background material and establish some general CLE estimates, and in Sect. 3 we prove Theorem 1.3. Section 4 includes proofs of several technical results used in the proof of Theorem 1.1. In Sect. 5 we provide a brief overview of the necessary material on distributions and Sobolev spaces, and we establish a general result (Proposition 5.1) regarding the almost-sure convergence of a sequence of random distributions. In Sects. 6 and 7 we prove Theorems 1.1 and 1.2, respectively. We conclude by listing open questions in Sect. 8.

2 Basic CLE estimates

In this section we record some facts about CLE. We refer the reader to the preliminaries section of [10] for an introduction to CLE. We begin by reminding the reader of the Koebe distortion theorem and the Koebe quarter theorem.

Theorem 2.1

(Koebe distortion theorem) If \(f:\mathbb {D}\rightarrow \mathbb {C}\) is an injective analytic function and \(f(0)=0\), then

$$\begin{aligned} \frac{r}{(1+r)^2}|f^{\prime }(0)| \le |f(re^{i\theta })| \le \frac{r}{(1-r)^2}|f^{\prime }(0)|, \quad {\text {for}}\,\theta \in \mathbb {R}\quad {\text { and }}\, 0 \le r < 1. \end{aligned}$$

The Koebe quarter theorem, which says that \(B\left( 0,\tfrac{1}{4}|f^{\prime }(0)|\right) \subset f(\mathbb {D})\), follows from the lower bound in the distortion theorem [8, Theorem 3.17]. Combining the quarter theorem with the Schwarz lemma [8, Lemma 2.1], we obtain the following corollary.

Corollary 2.2

If \(D\subsetneq \mathbb {C}\) is a simply connected domain, \(z\in D\), and \(f:\mathbb {D}\rightarrow D\) is a conformal map sending 0 to \(z\), then the inradius \({{\mathrm{inrad}}}(z;D) {{:}{=}}\inf _{w \in \mathbb {C}{\setminus } D}|z-w|\) and the conformal radius \({{\mathrm{CR}}}(z;D) \, {{:}{=}}\, |f^{\prime }(0)|\) satisfy

$$\begin{aligned} {{\mathrm{inrad}}}(z;D) \le {{\mathrm{CR}}}(z;D) \le 4\,{{\mathrm{inrad}}}(z;D). \end{aligned}$$
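Both bounds in Corollary 2.2, and the sharpness of the constant \(4\), can be checked numerically with the Koebe function \(k(w)=w/(1-w)^2\), which maps \(\mathbb {D}\) onto \(\mathbb {C}{\setminus }(-\infty ,-1/4]\). The sketch below is an illustration only (not used in any proof): it approximates \(|k^{\prime }(0)|\) by a finite difference and the inradius by sampling boundary images.

```python
import cmath
import math

def koebe(w):
    # Koebe function: maps the unit disk onto the slit plane C \ (-inf, -1/4].
    return w / (1 - w) ** 2

# Conformal radius CR(0; k(D)) = |k'(0)| = 1, via a central finite difference.
h = 1e-6
fprime0 = (koebe(h) - koebe(-h)) / (2 * h)

# Inradius seen from 0: distance from 0 to the slit tip -1/4, approximated
# by the closest image of a point on a circle of radius nearly 1.
r = 1 - 1e-6
inrad = min(abs(koebe(r * cmath.exp(2j * math.pi * k / 10000)))
            for k in range(10000))

print(fprime0, inrad)  # CR = 1 and inrad = 1/4: the bound CR <= 4 inrad is sharp
```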

For the \({{\mathrm{CLE}}}_\kappa \) \(\Gamma \) in \(D\), \(z\in D\), and \(j \ge 1\), we define \(\mathcal {L}_z^j\) to be the \(j\)th outermost loop of \(\Gamma \) which surrounds \(z\). For \(r>0\), we define

$$\begin{aligned} J^\cap _{z,r}&{{:}{=}}\min \left\{ j \ge 1: \mathcal {L}_z^j \cap B(z,r) \ne \varnothing \right\} \end{aligned}$$
(2.1a)
$$\begin{aligned} J^\subset _{z,r}&{{:}{=}}\min \left\{ j \ge 1 : \mathcal {L}_z^j \subset B(z,r) \right\} . \end{aligned}$$
(2.1b)

Lemma 2.3

For each \(\kappa \in (8/3,8)\) there exists \(p = p(\kappa ) > 0\) such that for any proper simply connected domain \(D\) and \(z \in D\),

$$\begin{aligned} \mathbb {P}[\mathcal {L}_z^2 \subseteq B(z,{{\mathrm{dist}}}(z,\partial D))] \ge p. \end{aligned}$$

Corollary 2.4

\(J^\subset _{z,r} - J^\cap _{z,r}\) is stochastically dominated by \(2\widetilde{N}\) where \(\widetilde{N}\) is a geometric random variable with parameter \(p = p(\kappa ) > 0\) which depends only on \(\kappa \in (8/3,8)\).

Proof

See Corollary 3.5 in [10]. \(\square \)

We use the following estimate for the overshoot of a random walk at the first time it crosses a given threshold. We will apply this lemma to the random walk which tracks the negative log conformal radius of the sequence of CLE loops surrounding a given point \(z\in D\), as viewed from \(z\). See Lemma 2.8 in [10] for a proof.

Lemma 2.5

Suppose \(\{X_j\}_{j\in \mathbb {N}}\) are nonnegative i.i.d. random variables for which \(\mathbb {E}[X_1] > 0\) and \(\mathbb {E}[e^{\lambda _0 X_1}]<\infty \) for some \(\lambda _0>0\). Let \(S_n=\sum _{j=1}^n X_j\) and \(\tau _x = \inf \{n \ge 0 : S_n \ge x\}\). Then there exists \(C>0\) (depending on the law of \(X_1\) and \(\lambda _0\)) such that \(\mathbb {P}[S_{\tau _{x}} - x \ge \alpha ] \le C\exp (-\lambda _0 \alpha )\) for all \(x\ge 0\) and \(\alpha >0\).
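Lemma 2.5 can be illustrated by simulation. In the special case of \(\mathrm{Exp}(1)\) increments (a toy law, not the law of the log conformal radius increments), memorylessness makes the overshoot \(S_{\tau _x}-x\) exactly \(\mathrm{Exp}(1)\), so \(\mathbb {P}[S_{\tau _x}-x\ge \alpha ]=e^{-\alpha }\):

```python
import math
import random

# Monte Carlo illustration of the overshoot lemma for the toy increment law
# X_j ~ Exp(1): by memorylessness the overshoot S_{tau_x} - x is itself
# exactly Exp(1), so P[S_{tau_x} - x >= a] = exp(-a).
rng = random.Random(42)

def overshoot(x):
    s = 0.0
    while s < x:                     # run the walk until S_n >= x
        s += rng.expovariate(1.0)    # one i.i.d. nonnegative increment
    return s - x                     # the overshoot S_{tau_x} - x

samples = [overshoot(5.0) for _ in range(20000)]
a = 1.0
tail = sum(1 for o in samples if o >= a) / len(samples)
print(tail, math.exp(-a))  # the empirical tail is close to e^{-1}
```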

The following lemma provides a quantitative version of the statement that it is unlikely that there exists a CLE loop surrounding the inner boundary but not the outer boundary of a given small, thin annulus. We make use of a quantitative coupling between CLE in large domains and full-plane CLE, which appears as Theorem 9.1 in the Appendix.

Lemma 2.6

Let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) in \(\mathbb {D}\). There exist constants \(C>0\), \(\alpha >0\), and \(\varepsilon _0>0\) depending only on \(\kappa \) such that for \(0<\varepsilon <\varepsilon _0\) and \(0\le \delta < 1/2\),

$$\begin{aligned} \mathbb {E}[ \mathcal {N}_{0}(\varepsilon (1-\delta )) - \mathcal {N}_{0}(\varepsilon )] \le C \delta + C \varepsilon ^\alpha . \end{aligned}$$
(2.2)

Proof

We couple the \({{\mathrm{CLE}}}_\kappa \) \(\Gamma _\mathbb {D}=\Gamma \) in the disk with a whole-plane \({{\mathrm{CLE}}}_\kappa \) \(\Gamma _\mathbb {C}\) as in Theorem 9.1. Index the loops of \(\Gamma _\mathbb {C}\) surrounding \(0\) by \(\mathbb {Z}\) in such a way that \(\mathcal {L}_0^n(\Gamma _\mathbb {C})\) and \(\mathcal {L}_0^n(\Gamma _\mathbb {D})\) are exponentially close for large \(n\). For \(n\in \mathbb {N}\) define \(V^\mathbb {D}_n=-\log {{\mathrm{inrad}}}\mathcal {L}_0^n(\Gamma _\mathbb {D})\), and for \(n\in \mathbb {Z}\) define \(V^\mathbb {C}_n=-\log {{\mathrm{inrad}}}\mathcal {L}_0^n(\Gamma _\mathbb {C})\). Since whole-plane \({{\mathrm{CLE}}}_\kappa \) is scale invariant, the set \(\left\{ V^\mathbb {C}_n:n\in \mathbb {Z}\right\} \) is translation invariant. Using Corollary 2.2 to compare \(\left( V^\mathbb {C}_n\right) _{n\in \mathbb {Z}}\) to the sequence of log conformal radii of the loops of \(\Gamma _\mathbb {C}\) surrounding the origin, the translation invariance implies

$$\begin{aligned} \mathbb {E}\left[ \#\left\{ n:\,a\le V^\mathbb {C}_n <b\right\} \right] = \nu _{\mathrm {typical}}(b-a). \end{aligned}$$

Let \(\alpha \) and the term low distortion be defined as in the statement of Theorem 9.1. With probability \(1-O(\varepsilon ^\alpha )\) there is a low distortion map from \(\Gamma _\mathbb {D}|_{B(0,\varepsilon )^+}\) to \(\Gamma _\mathbb {C}|_{B(0,\varepsilon )^+}\), and on this event, we can bound

$$\begin{aligned} \#\left\{ n:\,\log \frac{1}{\varepsilon } \le V^\mathbb {D}_n < \log \frac{1}{\varepsilon (1-\delta )}\right\} \le \#\left\{ n:\, \log \frac{1}{\varepsilon }-O(\varepsilon ^\alpha ) \le V^\mathbb {C}_n < \log \frac{1}{\varepsilon (1-\delta )} +O(\varepsilon ^\alpha )\right\} . \end{aligned}$$

On the event that there is no such low distortion map, this can be detected by comparing the boundaries of \(\Gamma _\mathbb {D}|_{B(0,\varepsilon )^+}\) and \(\Gamma _\mathbb {C}|_{B(0,\varepsilon )^+}\), so that conditional on this unlikely event, \(\Gamma _\mathbb {D}|_{B(0,\varepsilon )^+}\) is still an unbiased \({{\mathrm{CLE}}}_\kappa \) conformally mapped to the region surrounded by the boundary of \(\Gamma _\mathbb {D}|_{B(0,\varepsilon )^+}\). In particular, the sequence of log-conformal radii of loops of \(\Gamma _\mathbb {D}|_{B(0,\varepsilon )^+}\) surrounding \(0\) is a renewal process, which together with the Koebe distortion theorem and the bound \(\delta \le 1/2\) imply

$$\begin{aligned} \mathbb {E}[ \mathcal {N}_{0}(\varepsilon (1-\delta )) - \mathcal {N}_{0}(\varepsilon ) \,|\, \text {no low distortion map}] \le \text {constant}. \end{aligned}$$

Combining these bounds yields (2.2). \(\square \)

Lemma 2.7

For each \(\kappa \in (8/3,8)\) and integer \(j\in \mathbb {N}\), there are constants \(C>0\), \(\alpha >0\), and \(\varepsilon _0>0\) (depending only on \(\kappa \) and \(j\)) such that whenever \(D\) is a simply connected proper domain, \(z\in D\), \(\varphi \) is a conformal transformation of \(D\), and \(0<\varepsilon <\varepsilon _0\), if \(\Gamma \) is a \({{\mathrm{CLE}}}_\kappa \) in \(D\), then

$$\begin{aligned} \mathbb {E}\bigg [ \big |\mathcal {N}_z(\varepsilon {{\mathrm{CR}}}(z;D) ;\Gamma ) - \mathcal {N}_{\varphi (z)}(\varepsilon {{\mathrm{CR}}}(\varphi (z);\varphi (D));\varphi (\Gamma )) \big |^j\bigg ] \le C \varepsilon ^\alpha . \end{aligned}$$

Proof

Observe that translating and scaling the domain \(D\) or its conformal image \(\varphi (D)\) has no effect on the loop counts, so we assume without loss of generality that \(z=0\), \(\varphi (z)=0\), \({{\mathrm{CR}}}(z;D)=1\), and \({{\mathrm{CR}}}(\varphi (z);\varphi (D))=1\). Observe also that it suffices to prove this lemma in the case that the domain \(D\) is the unit disk \(\mathbb {D}\), since a general \(\varphi \) may be expressed as the composition \(\varphi = \varphi _2 \circ \varphi _1^{-1}\) where \(\varphi _1\) and \(\varphi _2\) are conformal transformations of the unit disk with \(\varphi _i(0)=0\) and \(\varphi _i^{\prime }(0)=1\), and the desired bound follows from the triangle inequality.

Let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) on \(\mathbb {D}\), and let \(\acute{\Gamma }=\varphi (\Gamma )\). By the Koebe distortion theorem and the elementary inequality

$$\begin{aligned} 1-3r\le \frac{1}{(1+r)^2}\le \frac{1}{(1-r)^2}\le 1+3r, \quad \text {for }\, r\,\text { small enough}, \end{aligned}$$
(2.3)

we have

$$\begin{aligned} B(0,\varepsilon -3\varepsilon ^2)\subset \varphi ^{-1}(B(0,\varepsilon )) \subset B(0,\varepsilon +3\varepsilon ^2), \end{aligned}$$

for small enough \(\varepsilon \). Hence \( \mathcal {N}_0 (\varepsilon +3\varepsilon ^2; \Gamma ) \le \mathcal {N}_0 (\varepsilon ; \acute{\Gamma }) \le \mathcal {N}_0 (\varepsilon -3\varepsilon ^2; \Gamma )\), and so for

$$\begin{aligned} X\, {{:}{=}}\, \mathcal {N}_0(\varepsilon -3\varepsilon ^2;\Gamma ) - \mathcal {N}_0(\varepsilon +3\varepsilon ^2;\Gamma ) \end{aligned}$$

we have \(|\mathcal {N}_0(\varepsilon ;\acute{\Gamma }) - \mathcal {N}_0(\varepsilon ;\Gamma )| \le X\).

By Lemma 2.6 we have \(\mathbb {E}[X]=O(\varepsilon ^\alpha )\), which proves the case \(j=1\).

Notice that the conformal radius of every new loop after the first one that intersects \(B(0,\varepsilon +3\varepsilon ^2)\) has a uniformly positive probability of being less than \(\frac{1}{4}(\varepsilon -3\varepsilon ^2)\), conditioned on the previous loop. By the Koebe quarter theorem, such a loop intersects \(B(0,\varepsilon -3\varepsilon ^2)\). Thus for some \(p<1\) we have \(\mathbb {P}[X \ge k+1] \le p\,\mathbb {P}[X \ge k]\) for \(k\ge 0\). Hence

$$\begin{aligned} \mathbb {E}[X^j]&= \sum _{k=1}^\infty k^j \mathbb {P}[X=k] \le \sum _{k=1}^{\infty }k^j \mathbb {P}[X\ge k]\\&\le \sum _{k=1}^{\infty }k^j p^{k-1}\,\mathbb {P}[X\ge 1] \le \left( \sum _{k=1}^{\infty }k^j p^{k-1}\right) \mathbb {E}[X]=O(\varepsilon ^\alpha ), \end{aligned}$$

which proves the cases \(j>1\). \(\square \)

3 Co-nesting estimates

We use the following lemma in the proof of Theorem 1.3:

Lemma 3.1

Let \(\lambda _0>0\), and suppose \(\{X_j\}_{j\in \mathbb {N}}\) are nonnegative i.i.d. random variables for which \(\mathbb {E}[X_1] > 0\) and \(\mathbb {E}[e^{\lambda _0 X_1}]<\infty \). Let \(\Lambda (\lambda ) = \log \mathbb {E}[ e^{\lambda X_1}]\) and let \(S_n=\sum _{j=1}^n X_j\). For \(x > 0\), define \(\tau _x = \inf \{n \ge 0 : S_n \ge x\}\). For \(0<\lambda <\lambda _0\), let

$$\begin{aligned} M_n^\lambda = \exp (\lambda S_n - \Lambda (\lambda ) n). \end{aligned}$$

Then for \(0<\lambda <\lambda _0\) and \(x\ge 0\), the random variables \(\left\{ M_{n \wedge \tau _x}^{\lambda }\right\} _{n\in \mathbb {N}}\) are uniformly integrable.

Proof

Fix \(\beta > 1\) such that \(\beta \lambda < \lambda _0\). By Hölder’s inequality, any family of random variables which is uniformly bounded in \(L^p\) for some \(p>1\) is uniformly integrable. Therefore, it suffices to show that \(\sup _{n \ge 0}\mathbb {E}\left[ \left( M_{n \wedge \tau _x}^\lambda \right) ^\beta \right] < \infty \). We have,

$$\begin{aligned} \left( M_{n \wedge \tau _x}^\lambda \right) ^\beta&= \exp ( \beta \lambda (S_{n \wedge \tau _x} - x)) \times \exp ( \beta \lambda x - \beta \Lambda (\lambda ) (n \wedge \tau _x ))\\&\le \exp ( \beta \lambda (S_{\tau _x} - x)) \times \exp ( \beta \lambda x). \end{aligned}$$

The result follows from Lemma 2.5. \(\square \)

Proof of Theorem 1.3

Fix \(z,w \in D\) distinct and \(j \in \mathbb {N}\). Let \(\varphi :D \rightarrow \mathbb {D}\) be the conformal map which sends \(z\) to \(0\) and \(w\) to \(e^{-x} \in (0,1)\). Let \(G_D\) (resp. \(G_\mathbb {D}\)) be the Green’s function for \(-\Delta \) with Dirichlet boundary conditions on \(D\) (resp. \(\mathbb {D}\)). Explicitly,

$$\begin{aligned} G_\mathbb {D}(u,v) = \frac{1}{2\pi } \log \frac{|1-\overline{u} v|}{|u-v|} \quad \text {for }\, u,v \in \mathbb {D}. \end{aligned}$$

In particular, \(G_\mathbb {D}(0,u) = \frac{1}{2\pi }\log |u|^{-1}\) for \(u \in \mathbb {D}\). By the conformal invariance of \({{\mathrm{CLE}}}_\kappa \) and the Green’s function, i.e. \(G_D(u,v) = G_\mathbb {D}(\varphi (u),\varphi (v))\), it suffices to show that there exists a constant \(C_{j,\kappa } \in (0,\infty )\) which depends only on \(j\) and \(\kappa \in (8/3,8)\) such that

$$\begin{aligned} \big | \mathbb {E}[ (\mathcal {N}_{0,e^{-x}})^j] - (\nu _{\mathrm {typical}}x )^j \big | \le C_{j,\kappa } (x+1)^{j-1} \quad \text {for all }\,x>0. \end{aligned}$$
(3.1)

Let \(\{T_i\}_{i \in \mathbb {N}}\) be the sequence of \(\log \) conformal radii increments associated with the loops of \(\Gamma \) which surround \(0\), let \(S_k = \sum _{i=1}^k T_i\), and let \(\tau _x = \min \{k \ge 1 : S_k \ge x\}\). Recall that \(\Lambda _\kappa (\lambda )\) denotes the \(\log \) moment generating function of the law of \(T_1\). Let \(M_n = \exp (\lambda S_n - \Lambda _\kappa (\lambda ) n)\). By Lemma 3.1, \(\{M_{n \wedge \tau _x}\}_{n\in \mathbb {N}}\) is a uniformly integrable martingale for \(\lambda < 1-\tfrac{2}{\kappa } - \tfrac{3\kappa }{32}\). By Lemma 2.5, we can write \(S_{\tau _x} = x + X\) where \(\mathbb {E}[e^{\lambda X}] < \infty \). By the optional stopping theorem for uniformly integrable martingales (see [17, § A14.3]), we have that

$$\begin{aligned} 1 = \mathbb {E}[ \exp (\lambda S_{\tau _x} - \Lambda _\kappa (\lambda ) \tau _x)] = \mathbb {E}[ \exp (\lambda x + \lambda X - \Lambda _\kappa (\lambda ) \tau _x)]. \end{aligned}$$
(3.2)
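As a quick sanity check of (3.2), independent of the proof, one can verify the optional-stopping identity by Monte Carlo for a toy increment law; the choices \(X_j\sim \mathrm{Exp}(1)\) (so that \(\Lambda (\lambda )=-\log (1-\lambda )\)), \(\lambda =0.3\), and \(x=4\) below are arbitrary:

```python
import math
import random

# Monte Carlo sanity check of the optional-stopping identity (3.2) for a
# toy increment law: X_j ~ Exp(1), for which Lambda(lam) = -log(1 - lam).
# The parameter choices lam = 0.3 and x = 4 are arbitrary.
rng = random.Random(7)
lam = 0.3
Lam = -math.log(1 - lam)   # Lambda(lam) for Exp(1) increments
x = 4.0

def stopped_martingale():
    s, n = 0.0, 0
    while s < x:                    # stop at tau_x, the first n with S_n >= x
        s += rng.expovariate(1.0)
        n += 1
    return math.exp(lam * s - Lam * n)

est = sum(stopped_martingale() for _ in range(50000)) / 50000
print(est)  # close to 1, as (3.2) predicts
```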

We argue by induction on \(j\) that

$$\begin{aligned} \mathbb {E}[(\Lambda ^{\prime }_\kappa (0) \tau _x)^j] = x^j + O((x+1)^{j-1}). \end{aligned}$$
(3.3)

The base case \(j=0\) is trivial.

If we differentiate (3.2) with respect to \(\lambda \) and then evaluate at \(\lambda =0\), we obtain

$$\begin{aligned} 0 = \mathbb {E}[(x + X - \Lambda ^{\prime }_\kappa (0) \tau _x)]. \end{aligned}$$

If we instead differentiate twice, we obtain

$$\begin{aligned} 0 = \mathbb {E}[(x + X - \Lambda ^{\prime }_\kappa (0) \tau _x)^2 -\Lambda ^{\prime \prime }_\kappa (0)\tau _x]. \end{aligned}$$

Similarly, if we differentiate \(j\) times with respect to \(\lambda \) and then evaluate at \(\lambda =0\), we obtain

$$\begin{aligned} 0 = \mathbb {E}[(x + X - \Lambda ^{\prime }_\kappa (0) \tau _x)^j] + \sum _{\begin{array}{c} i\ge 0,k\ge 1\\ i+2k\le j \end{array}} A_{\kappa ,i,k} \mathbb {E}[(x + X - \Lambda ^{\prime }_\kappa (0) \tau _x)^i \tau _x^k], \end{aligned}$$
(3.4)

where the \(A_{\kappa ,i,k}\)’s are constant coefficients depending on the higher order derivatives of \(\Lambda _\kappa \) at \(0\). By our induction hypothesis, for \(h<j\) we have \(\mathbb {E}[\tau _x^{h}]=O((x+1)^{h})\). Conditional on \(\tau _x\), \(X\) has exponentially small tails, so \(\mathbb {E}\left[ \tau _x^{h} X^{\ell }\right] =O((x+1)^{h})\) as well. From this we obtain

$$\begin{aligned} 0 = \mathbb {E}\left[ (x - \Lambda ^{\prime }_\kappa (0) \tau _x)^j\right] + O((x+1)^{j-1}). \end{aligned}$$
(3.5)

Using our induction hypothesis again for \(h<j\), we obtain

$$\begin{aligned} 0 = \sum _{h=0}^{j-1} \left( {\begin{array}{c}j\\ h\end{array}}\right) (-1)^{h} x^j + \mathbb {E}\left[ (-\Lambda ^{\prime }_\kappa (0) \tau _x)^j\right] + O((x+1)^{j-1}), \end{aligned}$$
(3.6)

from which (3.3) follows, since \(\sum _{h=0}^{j-1} \left( {\begin{array}{c}j\\ h\end{array}}\right) (-1)^{h} = -(-1)^{j}\). This completes the induction.

Recall that \(J_{0,r}^{\cap }\) (resp. \(J_{0,r}^{\subset }\)) is the smallest index \(j\) such that \(\mathcal {L}_0^j\) intersects (resp. is contained in) \(B(0,r)\). It is straightforward that

$$\begin{aligned} \tau _{x-\log 4} \le J_{0,e^{-x}}^{\cap } \le \mathcal {N}_{0,e^{-x}}+1 \le J_{0,e^{-x}}^{\subset }. \end{aligned}$$

Since the \(\tau \)’s are stopping times for an i.i.d. sum, conditional on the value of \(\tau _{x-\log 4}\), the difference \(\tau _x-\tau _{x-\log 4}\) has exponentially decaying tails. Moreover, by Lemma 2.3, conditional on the value of \(\tau _x\), \(J_{0,e^{-x}}^{\subset }-\tau _x\) has exponentially decaying tails. Thus \(\mathbb {E}\left[ \mathcal {N}_{0,e^{-x}}^j\right] = \mathbb {E}\left[ \tau _x^j\right] + O((x+1)^{j-1})\). Finally, we recall that \(1/\Lambda _\kappa ^{\prime }(0) = 1/\mathbb {E}[T_1] = \nu _{\mathrm {typical}}\). \(\square \)

By combining Theorem 1.3 and Corollary 2.4, we can estimate the moments of the number of loops which surround a ball in terms of powers of \(G_D(z,w)\).

Corollary 3.2

There exists a constant \(C_{j,\kappa } \in (0,\infty )\) depending only on \(\kappa \in (8/3,8)\) and \(j\in \mathbb {N}\) such that the following is true. For each \(\varepsilon > 0\), each \(z\in D\) for which \({{\mathrm{dist}}}(z,\partial D) \ge 2\varepsilon \), and each \(\theta \in \mathbb {R}\), we have

$$\begin{aligned} \big | \mathbb {E}[ (\mathcal {N}_z(\varepsilon ))^j] - (2\pi \nu _{\mathrm {typical}}G_D(z,z+\varepsilon e^{i\theta }))^j\big | \le C_{j,\kappa }(G_D(z,z+\varepsilon e^{i\theta })+1)^{j-1}. \end{aligned}$$
(3.7)

In particular, there exists a constant \(C_\kappa \in (0,\infty )\) depending only on \(\kappa \in (8/3,8)\) such that

$$\begin{aligned} \left| \mathbb {E}[ \mathcal {N}_z(\varepsilon )] - \nu _{\mathrm {typical}}\log \frac{{{\mathrm{CR}}}(z;D)}{\varepsilon }\right| \le C_\kappa . \end{aligned}$$
(3.8)

Proof

Let \(w=z+\varepsilon e^{i\theta }\). Corollary 2.4 implies that \(|\mathcal {N}_{z,w} - \mathcal {N}_z(\varepsilon )|\) is stochastically dominated by a geometric random variable whose parameter \(p\) depends only on \(\kappa \). Thus (3.7) follows from Theorem 1.3. To see (3.8), we apply (3.7) with \(j=1\) and use that \(G_D(u,v) = \tfrac{1}{2\pi } \log |u-v|^{-1} - \psi _u(v)\), where \(\psi _u(v)\) is the harmonic extension of \(v \mapsto \tfrac{1}{2\pi } \log |u-v|^{-1}\) from \(\partial D\) to \(D\). In particular, \(\psi _z(z) = -\tfrac{1}{2\pi } \log {{\mathrm{CR}}}(z;D)\). \(\square \)
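Both Green's function facts used in this section, the explicit disk formula and conformal invariance, are easy to check numerically. In the sketch below, the evaluation points and the disk automorphism are arbitrary choices made for illustration:

```python
import math

def G_disk(u, v):
    # Green's function for the negative Dirichlet Laplacian on the unit disk.
    return math.log(abs(1 - u.conjugate() * v) / abs(u - v)) / (2 * math.pi)

def mobius(a):
    # Disk automorphism w -> (w - a) / (1 - conj(a) w), an arbitrary example map.
    return lambda w: (w - a) / (1 - a.conjugate() * w)

u, v = 0.3 + 0.1j, -0.2 + 0.4j
phi = mobius(0.5 - 0.2j)

# G_D(0, v) = (1/2pi) log(1/|v|), and G is invariant under conformal maps.
print(G_disk(0j, 0.5 + 0j), math.log(2) / (2 * math.pi))
print(G_disk(u, v), G_disk(phi(u), phi(v)))
```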

4 Regularity of the \(\varepsilon \)-ball nesting field

A key estimate that we use in the proof of Theorem 1.1 is the following bound on how much the centered nesting field \(h_\varepsilon \) depends on \(\varepsilon \). The proof of Theorem 4.1 and the remaining sections may be read in either order.

Theorem 4.1

Let \(D\) be a proper simply connected domain, and let \(h_\varepsilon (z)\) be the centered weighted nesting around the ball \(B(z,\varepsilon )\) of a \({{\mathrm{CLE}}}_\kappa \) on \(D\), defined in (1.2). Let \(K\subset D\) be a compact subset of the domain, and suppose \(\varepsilon _1,\varepsilon _2 :K\rightarrow (0,\varepsilon ]\). Then there is some \(c>0\) (depending on \(\kappa \)) and \(C_0>0\) (depending on \(\kappa \), \(D\), \(K\), and the loop weight distribution) for which

$$\begin{aligned} \iint \limits _{K\times K} \big |\mathbb {E}\big [ (h_{\varepsilon _1(z)}(z){-}h_{\varepsilon _2(z)}(z))\,(h_{\varepsilon _1(w)}(w){-}h_{\varepsilon _2(w)}(w)) \big ]\big | \,dz\,dw \le C_0 \varepsilon ^c. \end{aligned}$$
(4.1)

Proof

Let \(A\), \(B\), and \(C\) be the disjoint sets of loops for which \(A\cup B\) is the set of loops surrounding \(B(z,\varepsilon _1(z))\) or \(B(z,\varepsilon _2(z))\) but not both, and \(B\cup C\) is the set of loops surrounding \(B(w,\varepsilon _1(w))\) or \(B(w,\varepsilon _2(w))\) but not both. Letting \(\xi _\mathcal {L}\) denote the weight of loop \(\mathcal {L}\), we have

$$\begin{aligned}&\mathbb {E}[ (h_{\varepsilon _1(z)}(z) {-} h_{\varepsilon _2(z)}(z))(h_{\varepsilon _1(w)}(w) {-} h_{\varepsilon _2(w)}(w))] \nonumber \\&\quad = {{\mathrm{Cov}}}[h_{\varepsilon _1(z)}(z) {-} h_{\varepsilon _2(z)}(z),h_{\varepsilon _1(w)}(w) - h_{\varepsilon _2(w)}(w)] \nonumber \\&\quad = \pm {{\mathrm{Cov}}}\left[ \sum _{a\in A} \xi _a+\sum _{b\in B} \xi _b, \sum _{b\in B} \xi _b+\sum _{c\in C} \xi _c\right] \nonumber \\&\quad = \pm {{\mathrm{Var}}}[\xi ]\,\mathbb {E}[|B|] \, \pm \mathbb {E}[\xi ]^2{{\mathrm{Cov}}}[|A|{+}|B|,|B|{+}|C|]\nonumber \\&\quad = \pm {{\mathrm{Var}}}[\xi ]\,\mathbb {E}[|B|] \, + \mathbb {E}[\xi ]^2 {{\mathrm{Cov}}}(\mathcal {N}_z(\varepsilon _1){-}\mathcal {N}_z(\varepsilon _2), \mathcal {N}_w(\varepsilon _1){-}\mathcal {N}_w(\varepsilon _2)), \end{aligned}$$
(4.2)

where each \(\pm \) sign equals the sign of \((\varepsilon _1(z){-}\varepsilon _2(z))(\varepsilon _1(w){-}\varepsilon _2(w))\).

Let \(G_D^{\kappa ,\varepsilon }(z,w)\) denote the expected number of loops surrounding \(z\) and \(w\) but surrounding neither \(B(z,\varepsilon )\) nor \(B(w,\varepsilon )\). Then \(\mathbb {E}[|B|]\le G_D^{\kappa ,\varepsilon }(z,w)\). In Lemma 4.3 we prove

$$\begin{aligned} \iint _{K\times K} G_D^{\kappa ,\varepsilon }(z,w) \,dz\,dw \le C_1 \varepsilon ^c, \end{aligned}$$

and in Lemma 4.8 we prove

$$\begin{aligned} \iint _{K\times K} \big |{{\mathrm{Cov}}}(\mathcal {N}_z(\varepsilon _1(z)) - \mathcal {N}_z(\varepsilon _2(z)), \mathcal {N}_w(\varepsilon _1(w)) - \mathcal {N}_w(\varepsilon _2(w)))\big | \, dz\,dw \le C_2 \varepsilon ^c, \end{aligned}$$

where \(c\) depends only on \(\kappa \) and \(C_1\) and \(C_2\) depend only on \(\kappa , D\), and \(K\). Equation (4.1) follows from these bounds. \(\square \)

In the remainder of this section we prove Lemmas 4.3 and 4.8.

Lemma 4.2

For any \(\kappa \in (8/3,8)\) and \(j\in \mathbb {N}\), there is a positive constant \(c>0\) such that, whenever \(D\subsetneq \mathbb {C}\) is a simply connected proper domain, \(z\in D\), and \(0<\varepsilon <r\), the \(j\)th moment of the number of \({{\mathrm{CLE}}}_\kappa \) loops surrounding \(z\) which intersect \(B(z,\varepsilon )\) but are not contained in \(B(z,r)\) is \(O((\varepsilon /r)^c)\).

Proof

If there is a loop \(\mathcal {L}=\mathcal {L}_z^k\) surrounding \(z\) which is not contained in \(B(z,r)\) and comes within distance \(\varepsilon \) of \(z\), then \(J_{z,\varepsilon }^\cap \le k\) and \(J_{z,r}^\subset > k\), so \(J_{z,\varepsilon }^\cap < J_{z,r}^\subset \). But from Corollary 2.4, \(J_{z,r}^\subset -J_{z,r}^\cap \) is dominated by twice a geometric random variable, and by Lemma 2.8 in [10] together with the Koebe quarter theorem, \(J_{z,\varepsilon }^\cap -J_{z,r}^\cap \) is of order \(\log (r/\varepsilon )\) except with probability \(O((\varepsilon /r)^{c_1})\), for some constant \(c_1>0\) (depending on \(\kappa \)). Therefore, except with probability \(O((\varepsilon /r)^{c_2})\) (with \(c_2=c_2(\kappa )>0\)), we have \(J_{z,\varepsilon }^\cap \ge J_{z,r}^\subset \), in which case there is no loop \(\mathcal {L}\) surrounding \(z\), not contained in \(B(z,r)\), and coming within distance \(\varepsilon \) of \(z\). Finally, note that conditioned on the event that there is such a loop \(\mathcal {L}\), the number of such loops is, by Corollary 2.4, dominated by twice a geometric random variable, which yields the moment bound. \(\square \)

Lemma 4.3

For some positive constant \(c<2\) (depending only on \(\kappa \)),

$$\begin{aligned} \iint _{K \times K} G_D^{\kappa ,\varepsilon }(z,w) \,dz \,dw=O({\text {area}}(K)^{2-c/2} \varepsilon ^c). \end{aligned}$$
(4.3)

Proof

Let \(F^\varepsilon _{z,w}\) denote the number of loops surrounding both \(z\) and \(w\) but not \(B(z,\varepsilon )\) or \(B(w,\varepsilon )\). Then \(G_D^{\kappa ,\varepsilon }(z,w)=\mathbb {E}[F^\varepsilon _{z,w}]\).

Suppose \(|z-w|\le \varepsilon \). Let \(\mathcal {L}\) be the outermost loop (if any) surrounding both \(z\) and \(w\) but not \(B(z,\varepsilon )\) or \(B(w,\varepsilon )\). The number of additional such loops is \(\mathcal {N}_{z,w}(\Gamma ^{\prime })\), where \(\Gamma ^{\prime }\) is a \({{\mathrm{CLE}}}_\kappa \) in \({{\mathrm{int}}}\mathcal {L}\), and by Theorem 1.3 we have \(\mathbb {E}[\mathcal {N}_{z,w}(\Gamma ^{\prime })]\le C_1 \log (\varepsilon /|z-w|) + C_2\) for some constants \(C_1\) and \(C_2\). Integrating the logarithm, we find that

$$\begin{aligned} \iint _{\begin{array}{c} K\times K\\ |z-w|\le \varepsilon \end{array}} G_D^{\kappa ,\varepsilon }(z,w) \,dz\,dw = O({\text {area}}(K) \varepsilon ^2). \end{aligned}$$
(4.4)

Next suppose \(|z-w|>\varepsilon \). Now \(F^\varepsilon _{z,w}\) is dominated by the number of loops surrounding \(z\) which intersect \(B(z,\varepsilon )\) but are not contained in \(B(z,|z-w|)\), and Lemma 4.2 bounds the expected number of these loops by \(O((\varepsilon /|z-w|)^c)\) for some \(c>0\). We decrease \(c\) if necessary to ensure \(0<c<2\), and let \(R={\text {area}}(K)^{1/2}\). Since \((\varepsilon /|z-w|)^c\) is decreasing in \(|z-w|\), we can bound

$$\begin{aligned} \iint _{\begin{array}{c} K\times K\\ |z-w|>\varepsilon \end{array}} G_D^{\kappa ,\varepsilon }(z,w) \,dz\,dw&\le \iint _{\begin{array}{c} R\mathbb {D}\times R\mathbb {D}\\ |z-w|>\varepsilon \end{array}} O((\varepsilon /|z-w|)^c) \,dz\,dw \nonumber \\&= O({\text {area}}(K)^{2-c/2} \varepsilon ^{c}). \end{aligned}$$
(4.5)

Combining (4.4) and (4.5), and using again that \(c<2\), we obtain (4.3). \(\square \)
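The scaling in (4.5) is easy to check numerically: since the integrand depends only on \(\rho =|z-w|\), the far-regime integral is bounded by \(\pi R^2\int _\varepsilon ^{2R}(\varepsilon /\rho )^c\,2\pi \rho \,d\rho \). The sketch below (Python with numpy; the values of \(R\), \(c\), and \(\varepsilon \) are arbitrary choices of ours) evaluates this radial bound and confirms that halving \(\varepsilon \) scales it by roughly \((1/2)^c\):

```python
import numpy as np

def far_regime_bound(eps, R, c, n=200001):
    """Radial form of the bound in (4.5): the integrand (eps/|z-w|)^c depends
    only on rho = |z-w|, so the double integral over R*D x R*D is at most
    area(R*D) * int_eps^{2R} (eps/rho)^c * 2*pi*rho d(rho)."""
    rho = np.linspace(eps, 2.0 * R, n)
    integrand = (eps / rho) ** c * 2.0 * np.pi * rho
    trap = np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(rho))
    return np.pi * R**2 * trap

R, c = 1.0, 1.0
I1 = far_regime_bound(0.02, R, c)
I2 = far_regime_bound(0.01, R, c)
print(I2 / I1)  # approximately (1/2)^c = 0.5, the eps^c scaling
```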

We let \(S_{z,w}\) be the index of the outermost loop surrounding \(z\) which separates \(z\) from \(w\) in the sense that \(w\notin U_z^{S_{z,w}}\). Note that \(S_{z,w}\) is also the smallest index for which \(z\notin U_w^{S_{z,w}}\):

$$\begin{aligned} S_{z,w} {{:}{=}}\min \{k:w\notin U_z^k\} = \min \{k:z\notin U_w^k\} . \end{aligned}$$
(4.6)

We let \(\Sigma _{z,w}\) denote the \(\sigma \)-algebra

$$\begin{aligned} \Sigma _{z,w} \, {{:}{=}}\, \sigma (\{\mathcal {L}_z^{k}: 1\le k \le S_{z,w}\}\cup \{\mathcal {L}_w^{k}: 1\le k \le S_{z,w}\}). \end{aligned}$$
(4.7)

Lemma 4.4

There is a constant \(C\) (depending only on \(\kappa \)) such that if \(z,w\in D\) are distinct, then

$$\begin{aligned} -C\le \mathbb {E}\!\left[ \log \frac{{{\mathrm{CR}}}(z;U_z^{S_{z,w}})}{\min (|z-w|,{{\mathrm{CR}}}(z;D))}\right] \le C. \end{aligned}$$

Proof

Let \(r=\min (|z-w|,{{\mathrm{dist}}}(z,\partial D))\). By the Koebe distortion theorem, \({{\mathrm{CR}}}\left( z;U_z^{S_{z,w}}\right) \le 4 r\), which gives the upper bound. By [10, Lemma 3.6], there is a loop contained in \(B(z,r)\) which surrounds \(B(z,r/2^k)\), except with probability exponentially small in \(k\), which gives the lower bound. \(\square \)

Lemma 4.5

There exists a constant \(C>0\) (depending only on \(\kappa \)) such that if \(z,w \in D\) are distinct, and \(0<\varepsilon <\min (|z-w|,{{\mathrm{CR}}}(z;D))\), then on the event \(\left\{ {{\mathrm{CR}}}\left( z;U_z^{S_{z,w}}\right) \ge 8\varepsilon \right\} \),

$$\begin{aligned}&\bigg | \mathbb {E}\big [J^\cap _{z,\varepsilon } - S_{z,w} \,|\, U_{z}^{S_{z,w}} \big ] - \mathbb {E}\big [J^\cap _{z,\varepsilon } - S_{z,w} \big ] \nonumber \\&\quad -\nu _{\mathrm {typical}}\log \frac{{{\mathrm{CR}}}(z;U_z^{S_{z,w}})}{\min (|z-w|,{{\mathrm{CR}}}(z;D))} \bigg | \le C. \end{aligned}$$
(4.8)

Proof

Let \(S=S_{z,w}\). By (3.8) of Corollary 3.2, there exists \(C_1 >0\) such that on the event \(\left\{ {{\mathrm{CR}}}\left( z; U_z^S\right) \ge 8\varepsilon \right\} \) we have

$$\begin{aligned} \left| \mathbb {E}\big [J_{z,\varepsilon }^\cap - S_{z,w} \,|\, U_z^{S_{z,w}}\big ]- \nu _{\mathrm {typical}}\log \frac{{{\mathrm{CR}}}\big (z;U_z^{S_{z,w}}\big )}{\varepsilon }\right| \le C_1. \end{aligned}$$
(4.9)

We can write

$$\begin{aligned} \mathbb {E}\left[ J_{z,\varepsilon }^\cap - S \right]&=\mathbb {E}\left[ (J_{z,\varepsilon }^\cap - S)\mathbf{1}_{\{{{\mathrm{CR}}}(z;U_z^S) \ge 8\varepsilon \}} \right] + \mathbb {E}\left[ (J_{z,\varepsilon }^\cap - S)\mathbf{1}_{\{{{\mathrm{CR}}}(z;U_z^S) < 8\varepsilon \}} \right] . \end{aligned}$$
(4.10)

Applying (4.9), we can write the first term of (4.10) as

$$\begin{aligned} \mathbb {E}\left[ \left( J_{z,\varepsilon }^\cap - S\right) \mathbf{1}_{\{{{\mathrm{CR}}}(z;U_z^S) \ge 8\varepsilon \}} \right]&= \mathbb {E}\!\left[ \mathbb {E}[J_{z,\varepsilon }^\cap - S\,|\,U_z^S]\, \mathbf{1}_{\{{{\mathrm{CR}}}(z;U_z^S) \ge 8\varepsilon \}}\right] \\&= \mathbb {E}\!\left[ \left( \nu _{\mathrm {typical}}\log \frac{{{\mathrm{CR}}}(z;U_z^S)}{\varepsilon } \pm C_1\right) \mathbf{1}_{\{{{\mathrm{CR}}}(z;U_z^S) \ge 8\varepsilon \}} \right] \\&= \nu _{\mathrm {typical}}\log \frac{\min (|z-w|,{{\mathrm{CR}}}(z;D))}{\varepsilon } \pm \text {const}\\&\quad - \mathbb {E}\!\left[ \left( \nu _{\mathrm {typical}}\log \frac{{{\mathrm{CR}}}(z;U_z^S)}{\varepsilon }\right) \mathbf{1}_{\{{{\mathrm{CR}}}(z;U_z^S) < 8\varepsilon \}} \right] . \end{aligned}$$

Using [10, Lemma 3.6], there is a loop contained in \(B(z,\varepsilon )\) which surrounds \(B(z,\varepsilon /2^k)\) except with probability exponentially small in \(k\), so the last term on the right is bounded by a constant (depending on \(\kappa \)).

If \(J_{z,\varepsilon }^\cap \ge S\), then \(J_{z,\varepsilon }^\cap - S\) counts the loops in \((\mathcal {L}_z^k)_{k \in \mathbb {N}}\) whose indices come after the first loop separating \(z\) from \(w\) but before the first loop hitting \(B(z,\varepsilon )\). If \(J_{z,\varepsilon }^\cap \le S\), then \(S-J_{z,\varepsilon }^\cap \) counts the loops whose indices come after the first loop intersecting \(B(z,\varepsilon )\) but before the first loop separating \(z\) from \(w\). Consequently, by Corollary 2.4, we see that the absolute value of the second term of (4.10) is bounded by some constant \(C_2 > 0\). Putting the two terms of (4.10) together, we obtain

$$\begin{aligned} \left| \mathbb {E}\left[ J_{z,\varepsilon }^\cap - S_{z,w} \right] - \nu _{\mathrm {typical}}\log \frac{\min (|z-w|,{{\mathrm{CR}}}(z;D))}{\varepsilon } \right| \le \text {const}. \end{aligned}$$
(4.11)

Subtracting (4.11) from (4.9) and rearranging gives (4.8). \(\square \)

Lemma 4.6

Let \(\{X_j\}_{j\in \mathbb {N}}\) be non-negative i.i.d. random variables whose law has a positive density with respect to Lebesgue measure on \((0,\infty )\) and for which there exists \(\lambda _0 >0\) such that \(\mathbb {E}[e^{\lambda _0 X_1}]<\infty \). For \(a \ge 0 \), let \(S^a_n=a+\sum _{j=1}^n X_j\), and for \(a,M > 0\), let \(\tau ^a_M = \min \{n \ge 0 : S^a_n \ge M\}\). There exists a coupling between \(S^a\) and \(\widehat{S}^b\) (identically distributed to \(S^b\) but not independent of it) and constants \(C,c>0\) so that for all \(0\le a \le b \le M\), we have

$$\begin{aligned} \mathbb {P}\bigg [S^a_{\tau ^a_M} = \widehat{S}^b_{\widehat{\tau }^b_M}\bigg ] \ge 1 - Ce^{-cM}. \end{aligned}$$

Similar but non-quantitative convergence results are known for more general distributions (see, for example, [5, Chapter 3.10]). For our results we need this convergence to be exponentially fast; since we did not find a proof of this in the literature, we provide one.

Proof of Lemma 4.6

For \(M > N > 0\), we construct a coupling between \(\rho _N\) and \(\rho _M\) as follows. We take \(S_0 = 0\) and \(\widehat{S}_0=N-M\), and then take \(\{X_j\}_{j \in \mathbb {N}}\) and \(\{\widehat{X}_j\}_{j \in \mathbb {N}}\) to be two i.i.d. sequences with law as in the statement of the lemma, with the two sequences coupled with one another in a manner that we shall describe momentarily. We let \(S_n = \sum _{i=1}^n X_i\) and \(\widehat{S}_n = \widehat{S}_0+\sum _{i=1}^n \widehat{X}_i\). Define stopping times

$$\begin{aligned} \tau _N = \min \{ n \ge 0 : S_n \ge N\} \quad \quad \text {and}\quad \quad \widehat{\tau }_N = \min \{ n \ge 0 : \widehat{S}_n \ge N\}. \end{aligned}$$

Then \(S_{\tau _N} - N \sim \rho _N\) and \(\widehat{S}_{\widehat{\tau }_N} - N \sim \rho _M\). We will couple the \(X_j\)’s and \(\widehat{X}_j\)’s so that with high probability \(S_{\tau _N} = \widehat{S}_{\widehat{\tau }_N}\).

Lemma 2.5 implies that there exists a law \(\widetilde{\rho }\) on \((0,\infty )\) with exponential tails such that \(\widetilde{\rho }\) stochastically dominates \(\rho _M\) for all \(M > 0\). We choose \(\theta \) large enough that \(\widetilde{\rho }([0,2\theta ])\ge 1/2\).

We inductively define a sequence of pairs of integers \((i_k,j_k)\) for \(k\in \{0,1,2,\ldots \}\) starting with \((i_0,j_0) = (0,0)\). If \(S_{i_k}+\theta \le \widehat{S}_{j_k}\) then we set \((i_{k+1},j_{k+1})\, {{:}{=}}\, (i_k+1,j_k)\) and sample \(X_{i_{k+1}}\) independently of the previous random variables. If \(\widehat{S}_{j_k}+\theta \le S_{i_k}\), then we set \((i_{k+1},j_{k+1})\, {{:}{=}}\, (i_k,j_k+1)\) and sample \(\widehat{X}_{j_{k+1}}\) independently of the previous random variables. Otherwise, \(\big |S_{i_k}-\widehat{S}_{j_k}\big |\le \theta \). In that case, we set \((i_{k+1},j_{k+1}) \, {{:}{=}}\, (i_k+1,j_k+1)\) and sample \((X_{i_{k+1}},\widehat{X}_{j_{k+1}})\) independently of the previous random variables and coupled so as to maximize the probability that \(S_{i_{k+1}}=\widehat{S}_{j_{k+1}}\). Note that once the walks coalesce, they never separate.

We partition the set of steps into epochs. We adopt the convention that the \(k\)th step is from time \(k-1\) to time \(k\). The first epoch starts at time \(k=0\). For the epoch starting at time \(k\) (whose first step is \(k+1\)), we let

$$\begin{aligned} \ell (k) = \min \left\{ k^{\prime }\ge k : \min (S_{i_{k^{\prime }}},\widehat{S}_{j_{k^{\prime }}}) \ge \max (S_{i_{k}},\widehat{S}_{j_{k}})-\theta \right\} \!. \end{aligned}$$

Let \(E_k\) be the event

$$\begin{aligned} E_k=\{|S_{i_{\ell (k)}}-\widehat{S}_{j_{\ell (k)}}|\le \theta \}. \end{aligned}$$

By our choice of \(\theta \), \(\mathbb {P}[E_k] \ge 1/2\). If event \(E_k\) occurs, then we let \(\ell (k)+1\) be the last step of the epoch, and the next epoch starts at time \(\ell (k)+1\). Otherwise, we let \(\ell (k)\) be the last step of the epoch, and the next epoch starts at time \(\ell (k)\).

Let \(D(t)\) denote the total variation distance between the law of \(X_1\) and the law of \(t+X_1\). Since \(X_1\) has a density with respect to Lebesgue measure which is positive in \((0,\infty )\), it follows that

$$\begin{aligned} q\, {{:}{=}}\, \sup _{0\le t \le \theta } D(t) < 1. \end{aligned}$$

In particular, if the event \(E_k\) occurs, i.e., \(\big |S_{i_{\ell (k)}}-\widehat{S}_{j_{\ell (k)}}\big |\le \theta \), and the walks have not already coalesced, then \(\mathbb {P}[S_{i_{\ell (k)+1}}\ne \widehat{S}_{j_{\ell (k)+1}}] \le q\).
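As a concrete illustration of the quantity \(D(t)\) (our example, not one from the text): if \(X_1\sim \mathrm{Exp}(1)\), then \(D(t)=1-e^{-t}\), so \(q=\sup _{0\le t\le \theta }D(t)=1-e^{-\theta }<1\) for every \(\theta \). The following sketch (Python with numpy assumed) checks this closed form by quadrature:

```python
import numpy as np

def tv_exp_shift(t, x_max=40.0, n=400001):
    """Total variation distance between Exp(1) and t + Exp(1),
    via (1/2) * int |f(x) - f(x - t)| dx on a fine grid."""
    x = np.linspace(0.0, x_max, n)
    f = np.exp(-x)                                       # density of Exp(1)
    f_shifted = np.where(x >= t, np.exp(-(x - t)), 0.0)  # density of t + Exp(1)
    diff = np.abs(f - f_shifted)
    return 0.5 * np.sum((diff[1:] + diff[:-1]) / 2.0 * np.diff(x))

t = 0.7
print(tv_exp_shift(t), 1.0 - np.exp(-t))  # the two values agree closely
```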

Let \(Y_k=\max (S_{i_k},\widehat{S}_{j_k})\). For the epoch starting at time \(k\), the difference \(Y_{\ell (k)}-Y_k\) is dominated by a random variable with exponential tails, since \(\widetilde{\rho }\) has exponential tails. On the event \(E_k\) there is one more step of size \(Y_{\ell (k)+1}-Y_{\ell (k)}\) in the epoch. This step size is dominated by the maximum of two independent copies of the random variable \(X_1\) and therefore has exponential tails. Thus if \(k^{\prime }\) is the start of the next epoch, then \(Y_{k^{\prime }}-Y_k\) is dominated by a fixed distribution (depending only on the law of \(X_1\)) which has exponential tails. It follows from Cramér’s theorem that for some \(c >0\), it is exponentially unlikely that the number of epochs (before the walks overshoot \(N\)) is less than \(c N\).

For each epoch, the walks have a \((1-q)\,\mathbb {P}[E_k]>0\) chance of coalescing if they have not done so already. After \(cN\) epochs, the walks have coalesced except with probability exponentially small in \(N\), and except with exponentially small probability, these epochs all occur before the walks overshoot \(N\). \(\square \)
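The coalescence mechanism of this proof can be watched in simulation. In the sketch below (Python; every concrete choice, the \(\mathrm{Exp}(1)\) step law, \(\theta =1\), and the explicit maximal-coupling formulas, is ours, chosen because the maximal coupling of \(\mathrm{Exp}(1)\) with \(t+\mathrm{Exp}(1)\) succeeds with probability exactly \(e^{-t}\) and has explicit residual laws), only the laggard steps while the gap exceeds \(\theta \), and once the gap is at most \(\theta \) the next increments are maximally coupled:

```python
import math
import random

def coupled_walks(gap0, M, theta=1.0, rng=random):
    """Two renewal walks with Exp(1) increments, started gap0 apart.
    While the gap exceeds theta, only the laggard steps; once the gap t is
    at most theta, the next increments are maximally coupled so that the
    positions merge with probability exp(-t).  Each walk stops on first
    passage of level M; returns the pair of final (overshoot) positions."""
    a, b = 0.0, -gap0
    while a < M or b < M:
        if a >= M:                          # a has stopped; advance b alone
            b += rng.expovariate(1.0)
        elif b >= M:                        # b has stopped; advance a alone
            a += rng.expovariate(1.0)
        elif a - b > theta:                 # b lags by more than theta
            b += rng.expovariate(1.0)
        elif b - a > theta:                 # a lags by more than theta
            a += rng.expovariate(1.0)
        else:                               # gap <= theta: coupling attempt
            t = abs(a - b)
            x = rng.expovariate(1.0)
            if rng.random() < math.exp(-t):
                # success: the laggard's step is x + t, so positions merge
                if a <= b:
                    a, b = a + x + t, b + x
                else:
                    a, b = a + x, b + x + t
            else:
                # failure: the laggard's step has the residual law, an
                # Exp(1) truncated to (0, t), sampled by inverse CDF
                y = -math.log(1.0 - rng.random() * (1.0 - math.exp(-t)))
                if a <= b:
                    a, b = a + y, b + x
                else:
                    a, b = a + x, b + y
    return a, b

rng = random.Random(7)
trials = 500
coalesced = 0
for _ in range(trials):
    a, b = coupled_walks(gap0=5.0, M=50.0, rng=rng)
    if abs(a - b) < 1e-9:
        coalesced += 1
print(coalesced / trials)  # the large majority of trials coalesce
```

Both marginals remain \(\mathrm{Exp}(1)\): in the coupling attempt the leader's step is always a fresh \(\mathrm{Exp}(1)\), while the laggard's step is \(x+t\) with probability \(e^{-t}\) and a truncated \(\mathrm{Exp}(1)\) on \((0,t)\) otherwise, which mix back to \(\mathrm{Exp}(1)\).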

Lemma 4.7

There exist constants \(C_3,c>0\) (depending only on \(\kappa \)) such that if \(z,w \in D\) are distinct, and \(0<\varepsilon ^{\prime }\le \varepsilon \le r\) where \(r=\min (|z-w|,{{\mathrm{CR}}}(z;D))\), then

$$\begin{aligned} \mathbb {E}\left[ \left( \mathbb {E}\big [J^\cap _{z,\varepsilon } - J^\cap _{z,\varepsilon ^{\prime }} \,|\, U_{z}^{S_{z,w}} \big ] - \mathbb {E}[J^\cap _{z,\varepsilon } - J^\cap _{z,\varepsilon ^{\prime }}]\right) ^2\right] \le C_3 \left( \frac{\varepsilon }{r}\right) ^{c}. \end{aligned}$$
(4.12)

Proof

We construct a coupling between three \({{\mathrm{CLE}}}_\kappa \)'s, \(\Gamma \), \(\widetilde{\Gamma }\), and \(\acute{\Gamma }\), on the domain \(D\). Let \(S=S_{z,w}\), \(\widetilde{S}=\widetilde{S}_{z,w}\), and \(\acute{S}=\acute{S}_{z,w}\) denote the three corresponding stopping times. We take \(\Gamma \) and \(\acute{\Gamma }\) to be independent. On \(D {\setminus } \acute{U}_z^{\acute{S}}\), we take \(\widetilde{\Gamma }\) to be identical to \(\acute{\Gamma }\). In particular, \(\widetilde{S}=\acute{S}\) and \(\widetilde{U}_z^{\widetilde{S}} = \acute{U}_z^{\acute{S}}\). Within \(\widetilde{U}_z^{\widetilde{S}}\), we couple \(\widetilde{\Gamma }\) to \(\Gamma \) as follows. We sample so that the sequences

$$\begin{aligned} \left\{ -\log {{\mathrm{CR}}}\left( z; U_{z}^{S+k}\right) \right\} _{k\in \mathbb {N}}\quad \text {and}\quad \left\{ -\log {{\mathrm{CR}}}\left( z; \widetilde{U}_{z}^{\widetilde{S}+k}\right) \right\} _{k\in \mathbb {N}} \end{aligned}$$

are coupled as in Lemma 4.6. Define

$$\begin{aligned} K = \min \left\{ k \ge S: {{\mathrm{CR}}}\left( z; U_{z}^k\right) ={{\mathrm{CR}}}\left( z; \widetilde{U}_{z}^{\widetilde{k}}\right) \text { for some }\widetilde{k} \ge \widetilde{S}\right\} , \end{aligned}$$

and let \(\widetilde{K}\) be the value of \(\widetilde{k}\) for which the conformal radius equality is realized. Let \(\psi :U_z^K\rightarrow \widetilde{U}_z^{\widetilde{K}}\) be the unique conformal map with \(\psi (z)=z\) and \(\psi ^{\prime }(z)>0\). We take \(\widetilde{\Gamma }\) restricted to \(\widetilde{U}_z^{\widetilde{K}}\) to be given by the image under \(\psi \) of the restriction of \(\Gamma \) to \(U_z^K\).

Since \(|\log {{\mathrm{CR}}}\left( z;U_z^S\right) - \log r|\) and \(|\log {{\mathrm{CR}}}\left( z;\widetilde{U}_z^{\widetilde{S}}\right) - \log r|\) have exponential tails, and since the coupling time from Lemma 4.6 has exponential tails, each of \(K-S\), \(\widetilde{K}-\widetilde{S}\), and \(|\log {{\mathrm{CR}}}\left( z;U_z^K\right) - \log r| = |\log {{\mathrm{CR}}}\left( z;\widetilde{U}_z^{\widetilde{K}}\right) - \log r|\) has exponential tails, with parameters depending only on \(\kappa \).

Let

$$\begin{aligned} \Delta \, {{:}{=}}\, \mathbb {E}\left[ J_{z,\varepsilon }^\cap - J_{z,\varepsilon ^{\prime }}^\cap \,\,|\,U_z^S\right] - \mathbb {E}\left[ \widetilde{J}_{z,\varepsilon }^\cap - \widetilde{J}_{z,\varepsilon ^{\prime }}^\cap \,\,|\,\widetilde{U}_z^{\widetilde{S}}\right] . \end{aligned}$$

In the above coupling \(U_z^S\) and \(\widetilde{U}_z^{\widetilde{S}}\) are independent, so we have

$$\begin{aligned} \mathbb {E}\left[ J_{z,\varepsilon }^\cap - J_{z,\varepsilon ^{\prime }}^\cap \,\,|\,U_z^S\right] - \mathbb {E}\left[ J_{z,\varepsilon }^\cap - J_{z,\varepsilon ^{\prime }}^\cap \right] = \mathbb {E}\left[ \Delta \,\,|\,U_z^S\right] . \end{aligned}$$

Therefore, the left-hand side of (4.12) is equal to \(\mathbb {E}\left[ \left( \mathbb {E}\left[ \Delta |U_z^S\right] \right) ^2\right] \). Jensen’s inequality applied to the inner expectation yields

$$\begin{aligned} \mathbb {E}\left[ \left( \mathbb {E}\left[ \Delta |U_z^S\right] \right) ^2\right] \le \mathbb {E}\left[ \mathbb {E}\left[ \Delta ^2\,|\,U_z^S\right] \right] = \mathbb {E}[\Delta ^2]. \end{aligned}$$

We can also write \(\Delta \) as

$$\begin{aligned} \Delta&=\mathbb {E}\big [J_{z,\varepsilon }^\cap - J_{z,\varepsilon ^{\prime }}^\cap - \widetilde{J}_{z,\varepsilon }^\cap + \widetilde{J}_{z,\varepsilon ^{\prime }}^\cap \,\,|\,U_z^S,\widetilde{U}_z^{\widetilde{S}}\big ] \\&= \mathbb {E}\!\left[ J_{z,\varepsilon }^\cap -K - \widetilde{J}_{z,\varepsilon }^\cap +\widetilde{K} \,|\,U_z^S,\widetilde{U}_z^{\widetilde{S}}\right] - \mathbb {E}\!\left[ J_{z,\varepsilon ^{\prime }}^\cap - K - \widetilde{J}_{z,\varepsilon ^{\prime }}^\cap +\widetilde{K} \,|\,U_z^S,\widetilde{U}_z^{\widetilde{S}}\right] , \end{aligned}$$

and then use the inequality \((a+b)^2 \le 2 (a^2+b^2)\) for \(a,b\in \mathbb {R}\) to bound

$$\begin{aligned} \Delta ^2 \le 2 Y_\varepsilon + 2 Y_{\varepsilon ^{\prime }}, \end{aligned}$$

where for \(\hat{\varepsilon }\le \varepsilon \) we define

$$\begin{aligned} Y_{\hat{\varepsilon }} \, {{:}{=}}\, \mathbb {E}\!\left[ J_{z,\hat{\varepsilon }}^\cap - K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap + \widetilde{K} \,\,|\,U_z^S,\widetilde{U}_z^{\widetilde{S}}\right] ^2. \end{aligned}$$

We define the event

$$\begin{aligned} A = \{ {{\mathrm{CR}}}(z;U_z^K) \ge \sqrt{r \varepsilon }\}. \end{aligned}$$

Then

$$\begin{aligned} \mathbb {E}[Y_{\hat{\varepsilon }}\,\mathbf{1}_A]&= \mathbb {E}\!\left[ \mathbb {E}\!\left[ J_{z,\hat{\varepsilon }}^\cap -K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap +\widetilde{K} \,\,|\,U_z^S,\widetilde{U}_z^{\widetilde{S}}\right] ^2 \mathbf{1}_A \right] \\&\le \mathbb {E}\!\left[ \mathbb {E}\!\left[ \left( J_{z,\hat{\varepsilon }}^\cap - K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap +\widetilde{K}\right) ^2\,\mathbf{1}_A \,\big |\, U_z^S,\widetilde{U}_z^{\widetilde{S}} \right] \right] \\&= \mathbb {E}\!\left[ \left( J_{z,\hat{\varepsilon }}^\cap - K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap +\widetilde{K}\right) ^2\,\mathbf{1}_A\right] \\&\le \text {const}\times (\varepsilon /r)^c \end{aligned}$$

where the last inequality follows from Lemma 2.7, for some \(c>0\) and for suitably large \(r/\varepsilon \).

Next we apply Cauchy-Schwarz to find that

$$\begin{aligned} \mathbb {E}[Y_{\hat{\varepsilon }}\mathbf{1}_{A^c}] \le \sqrt{\mathbb {E}\left[ Y_{\hat{\varepsilon }}^2\right] \mathbb {P}[A^c]}. \end{aligned}$$

Lemma 4.6 and the construction of the coupling between \(\Gamma \) and \(\widetilde{\Gamma }\) imply that \(\mathbb {P}[A^c] \le \text {const} \times (\varepsilon /r)^c\) for some \(c>0\). It therefore suffices to show that \(\mathbb {E}\left[ Y_{\hat{\varepsilon }}^2\right] \le C\) for some constant \(C\) which does not depend on \(\varepsilon \) or \(\varepsilon ^{\prime }\). By Jensen’s inequality, it suffices to show that there exists \(C\) such that

$$\begin{aligned} \mathbb {E}\left[ \left( J_{z,\hat{\varepsilon }}^\cap - K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap + \widetilde{K}\right) ^4\right] \le C. \end{aligned}$$
(4.13)

To prove (4.13), we consider the event \(B=\{ {{\mathrm{CR}}}(z;U_z^K)\ge \varepsilon \}\). By Lemma 2.7,

$$\begin{aligned} \mathbb {E}\left[ \left( J_{z,\hat{\varepsilon }}^\cap - K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap + \widetilde{K}\right) ^4 \mathbf{1}_{B}\right] \le \text {const} \end{aligned}$$

where the constant depends only on \(\kappa \).

Using \((a+b)^4\le 8(a^4+b^4)\) for \(a,b\in \mathbb {R}\), and the fact that \(J_{z,\hat{\varepsilon }}^\cap - K\) and \(\widetilde{J}_{z,\hat{\varepsilon }}^\cap - \widetilde{K}\) are equidistributed, we have

$$\begin{aligned} \mathbb {E}\left[ \left( J_{z,\hat{\varepsilon }}^\cap - K - \widetilde{J}_{z,\hat{\varepsilon }}^\cap + \widetilde{K}\right) ^4 \,\mathbf{1}_{B^c}\right] \le 16\, \mathbb {E}\left[ \left( J_{z,\hat{\varepsilon }}^\cap - K\right) ^4\,\mathbf{1}_{B^c}\right] . \end{aligned}$$

On the event \(B^c\), we have \(K\ge J_{z,\hat{\varepsilon }}^\cap \). Conditional on this, \(K-J_{z,\hat{\varepsilon }}^\cap \) has exponentially decaying tails, so the above fourth moment is bounded by a constant (depending on \(\kappa \)), which completes the proof. \(\square \)

Lemma 4.8

Suppose \(0<\varepsilon _1(z)\le \varepsilon \) and \(0<\varepsilon _2(z)\le \varepsilon \) on a compact subset \(K\subset D\) of the domain \(D\). Then there is some \(c>0\) (depending on \(\kappa \)) and \(C_0>0\) (depending on \(\kappa , D\), and \(K\)) for which

$$\begin{aligned} \iint \limits _{K \times K} \left| {{\mathrm{Cov}}}(\mathcal {N}_z(\varepsilon _1(z)) - \mathcal {N}_z(\varepsilon _2(z)), \mathcal {N}_w(\varepsilon _1(w)) - \mathcal {N}_w(\varepsilon _2(w)))\right| \, dz\,dw \le C_0 \varepsilon ^c. \end{aligned}$$
(4.14)

Proof

For a random variable \(X\), we let \({{\mathop {X}\limits ^{\circ }}}\) denote

$$\begin{aligned} {{\mathop {X}\limits ^{\circ }}} = X - \mathbb {E}[X]. \end{aligned}$$
(4.15)

We let \(Y_z\) denote

$$\begin{aligned} Y_{z}\, {{:}{=}}\, {J}_{z,\,\varepsilon _1(z)}^\cap -{J}_{z,\,\varepsilon _2(z)}^\cap . \end{aligned}$$
(4.16)

Recalling that \(J_{z,r}^{\cap }=\mathcal {N}_z(r)+1\), we see that

$$\begin{aligned} \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z {{\mathop {Y}\limits ^{\circ }}}_w\right] = {{\mathrm{Cov}}}(\mathcal {N}_z(\varepsilon _1(z)) - \mathcal {N}_z(\varepsilon _2(z)), \mathcal {N}_w(\varepsilon _1(w)) - \mathcal {N}_w(\varepsilon _2(w))) , \end{aligned}$$

so we need to bound \(\left| \mathbb {E}[{{\mathop {Y}\limits ^{\circ }}}_z {{\mathop {Y}\limits ^{\circ }}}_w]\right| \).

We treat two subsets of \(K\times K\) separately: (1) the near regime \(\{(z,w): |z-w|\le \varepsilon \}\), and (2) the far regime \(\{(z,w): \varepsilon <|z-w|\}\).

For the near regime, we first write

$$\begin{aligned} Y_z = Y_{z,w}^{(1)} + Y_{z,w}^{(2)}, \end{aligned}$$

where \(Y_{z,w}^{(1)}\) counts those loops surrounding \(B(z,\min (\varepsilon _1(z),\varepsilon _2(z)))\) and intersecting \(B(z,\max (\varepsilon _1(z),\varepsilon _2(z)))\) with index smaller than \(S_{z,w}\), and \(Y_{z,w}^{(2)}\) counts those loops with index at least \(S_{z,w}\). Then \(\Sigma _{z,w}\) determines \(Y_{z,w}^{(1)}\) and \(Y_{w,z}^{(1)}\), and conditional on \(\Sigma _{z,w}\), the variables \(Y_{z,w}^{(2)}\) and \(Y_{w,z}^{(2)}\) are independent [recall that \(\Sigma _{z,w}\) was defined in (4.7)]. Thus \(Y_{z,w}^{(i)}\) and \(Y_{w,z}^{(j)}\) are conditionally independent (given \(\Sigma _{z,w}\)) for \(i,j\in \{1,2\}\).

Observe that

$$\begin{aligned} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z{{\mathop {Y}\limits ^{\circ }}}_w\right] \right| \le \sum _{i,j\in \{1,2\}} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(i)}{{\mathop {Y}\limits ^{\circ }}}_{w,z}^{(j)}\right] \right| . \end{aligned}$$
(4.17)

For \(i,j\in \{1,2\}\),

$$\begin{aligned} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(i)}{{\mathop {Y}\limits ^{\circ }}}_{w,z}^{(j)}\right] \right|&= \left| \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(i)}{{\mathop {Y}\limits ^{\circ }}}_{w,z}^{(j)}\,|\,\Sigma _{z,w}\right] \right] \right| \nonumber \\&= \left| \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(i)}\,|\,\Sigma _{z,w}\right] \, \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{w,z}^{(j)}\,|\,\Sigma _{z,w}\right] \right] \right| \nonumber \\&\le \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(i)}\,|\,\Sigma _{z,w}\right] ^2\right] ^{1/2} \, \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{w,z}^{(j)}\,|\,\Sigma _{z,w}\right] ^2\right] ^{1/2}. \end{aligned}$$
(4.18)

For the index \(i=1\), we write

$$\begin{aligned} \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(1)}\,|\,\Sigma _{z,w}\right] ^2\right] = \mathbb {E}\left[ \left( {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(1)}\right) ^2\right] \le \mathbb {E}\left[ \left( Y_{z,w}^{(1)}\right) ^2\right]&= \mathbb {E}\!\left[ \mathbb {E}\left[ \left( Y_{z,w}^{(1)}\right) ^2 \,\big |\, U_z^{J_{z,\varepsilon }^\cap }\right] \right] \!. \end{aligned}$$

But

$$\begin{aligned} Y_{z,w}^{(1)} \le 1+\mathcal {N}_{z,w}\left( \Gamma |_{U_z^{J_{z,\varepsilon }^\cap }}\right) . \end{aligned}$$

By Theorem 1.3, \(\mathbb {E}\left[ (1+\mathcal {N}_{z,w}(\Gamma |_U))^2\right] \le \text {const}+\text {const}\times G_{U}(z,w)^2\), where \(G_U\) denotes the Green’s function for the Laplacian in the domain \(U\). By the Koebe distortion theorem, the Green’s function is in turn bounded by \(G_U(z,w) \le \text {const}+\text {const}\times \max (0,\log ({{\mathrm{CR}}}(z;U)/|z-w|))\). Therefore,

$$\begin{aligned} \mathbb {E}\left[ \left( {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(1)}\right) ^2\right] \le \mathbb {E}\left[ O\left( 1+\log ^2\frac{|z-w|}{{{\mathrm{CR}}}\left( z;U_z^{J_{z,\varepsilon }^\cap }\right) }\right) \right] . \end{aligned}$$

By Lemma 2.5, \(-\log {{\mathrm{CR}}}\left( z;U_z^{J_{z,\varepsilon }^\cap }\right) = -\log \varepsilon + X\) for some random variable \(X\) with exponentially decaying tails. It follows that

$$\begin{aligned} \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(1)}\,|\,\Sigma _{z,w}\right] ^2\right] = \mathbb {E}\left[ \left( {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(1)}\right) ^2\right] = O\left( 1+\log ^2\frac{|z-w|}{\varepsilon }\right) . \end{aligned}$$
(4.19)

For the index \(i=2\), we express \(Y_{z,w}^{(2)}\) in terms of \(J_{z,\varepsilon _1(z)}^\cap \) and \(J_{z,\varepsilon _2(z)}^\cap \), use Lemma 4.5 twice (once with \(\varepsilon _1(z)\) and once with \(\varepsilon _2(z)\) playing the role of \(\varepsilon \) in the lemma statement), and subtract to write

$$\begin{aligned} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(2)}\,|\,\Sigma _{z,w}\right] \right| = \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(2)}\,|\,U_z^{S_{z,w}}\right] \right|&\le \text {const}\nonumber \\ \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_{z,w}^{(2)}\,|\,\Sigma _{z,w}\right] ^2\right]&\le C, \end{aligned}$$
(4.20)

for some constant \(C\) depending only on \(\kappa \).

Combining (4.17), (4.18), (4.19), and (4.20), we obtain

$$\begin{aligned} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z {{\mathop {Y}\limits ^{\circ }}}_w\right] \right|&\le \text {const}+\text {const}\times \log ^2\frac{\varepsilon }{|z-w|}, \end{aligned}$$

which implies

$$\begin{aligned} \iint \limits _{\begin{array}{c} K\times K\\ |z-w|\le \varepsilon \end{array}} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z {{\mathop {Y}\limits ^{\circ }}}_w\right] \right| \,dz\,dw&\le \text {const}\times {\text {area}}(K)\times \varepsilon ^2. \end{aligned}$$
(4.21)

For the far regime, we again condition on \(\Sigma _{z,w}\), the loops up to and including the first ones separating \(z\) from \(w\), and use Cauchy-Schwarz, as in (4.18), but without first expressing \(Y_z\) and \(Y_w\) as sums:

$$\begin{aligned} \left| \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z{{\mathop {Y}\limits ^{\circ }}}_w\right] \right| \le \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z\,|\,\Sigma _{z,w}\right] ^2\right] ^{1/2} \, \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_w\,|\,\Sigma _{z,w}\right] ^2\right] ^{1/2}. \end{aligned}$$
(4.22)

By Lemma 4.7, we have

$$\begin{aligned} \mathbb {E}\left[ \mathbb {E}\left[ {{\mathop {Y}\limits ^{\circ }}}_z\,|\,\Sigma _{z,w}\right] ^2\right] \le C \left( \frac{\varepsilon }{\min (|z-w|,{{\mathrm{CR}}}(z;D))}\right) ^c. \end{aligned}$$
(4.23)

Integrating (4.23) over \(\{(z,w)\in K\times K: \varepsilon <|z-w|\}\) and combining with the near-regime bound (4.21) gives (4.14). \(\square \)

5 Properties of Sobolev spaces

In this section we provide an overview of the distribution theory and Sobolev space theory required for the proof of Theorem 1.1. We refer the reader to [15] or [16] for a more detailed introduction.

Fix a positive integer \(d\). Recall that the Schwartz space \(\mathcal {S}(\mathbb {R}^d)\) is defined to be the set of smooth, complex-valued functions on \(\mathbb {R}^d\) whose derivatives of all orders decay faster than any polynomial at infinity. If \(\beta = (\beta _1,\beta _2,\ldots ,\beta _d)\) is a multi-index, then the partial differentiation operator \(\partial ^\beta \) is defined by \(\partial ^\beta =\partial _{x_1}^{\beta _1} \partial _{x_2}^{\beta _2}\ldots \partial _{x_d}^{\beta _d}\). We equip \(\mathcal {S}(\mathbb {R}^d)\) with the topology generated by the family of seminorms

$$\begin{aligned} \left\{ {\displaystyle \Vert \phi \Vert _{n,\beta } {{:}{=}}\sup _{x\in \mathbb {R}^d} |x|^n |\partial ^\beta \phi (x)|: n \ge 0, \, \beta \text { is a multi-index}} \right\} . \end{aligned}$$

The space \(\mathcal {S}^{\prime }(\mathbb {R}^d)\) of tempered distributions is defined to be the space of continuous linear functionals on \(\mathcal {S}(\mathbb {R}^d)\). We write the evaluation of \(f\in \mathcal {S}^{\prime }(\mathbb {R}^d)\) on \(\phi \in \mathcal {S}(\mathbb {R}^d)\) using the notation \(\langle f,\phi \rangle \). For any Schwartz function \(g\in \mathcal {S}(\mathbb {R}^d)\) there is an associated continuous linear functional \(\phi \mapsto \int _{\mathbb {R}^d} g(x)\phi (x)\,dx\) in \(\mathcal {S}^{\prime }(\mathbb {R}^d)\), and \(\mathcal {S}(\mathbb {R}^d)\) is a dense subset of \(\mathcal {S}^{\prime }(\mathbb {R}^d)\) with respect to the weak* topology.

For \(\phi \in \mathcal {S}(\mathbb {R}^d)\), its Fourier transform \(\widehat{\phi }\) is defined by

$$\begin{aligned} \widehat{\phi }(\xi ) = \int _{\mathbb {R}^d} e^{-2\pi i x\cdot \xi }\phi (x)\,dx \quad \text {for }\,\xi \in \mathbb {R}^d. \end{aligned}$$

Since \(\phi \in \mathcal {S}(\mathbb {R}^d)\) implies \(\widehat{\phi }\in \mathcal {S}(\mathbb {R}^d)\) [15, Section 1.13] and since \(\langle \widehat{\phi }_1,\phi _2 \rangle =\iint \phi _1(x) e^{-2\pi i x\cdot y} \phi _2(y)\,dx\,dy =\langle \phi _1,\widehat{\phi }_2 \rangle \) for all \(\phi _1,\phi _2\in \mathcal {S}(\mathbb {R}^d)\), we may define the Fourier transform \(\widehat{f}\) of a tempered distribution \(f\in \mathcal {S}^{\prime }(\mathbb {R}^d)\) by setting \(\langle \widehat{f},\phi \rangle {{:}{=}}\langle f,\widehat{\phi } \rangle \) for each \(\phi \in \mathcal {S}(\mathbb {R}^d)\).
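As a quick sanity check on this sign and normalization convention (an illustrative computation on our part, not part of the argument; the grid parameters are arbitrary), one can verify numerically that in \(d=1\) the Gaussian \(\phi (x)=e^{-\pi x^2}\) is its own Fourier transform:

```python
import numpy as np

# Riemann-sum check that phi(x) = exp(-pi x^2) satisfies phihat = phi
# under the convention phihat(xi) = \int exp(-2 pi i x xi) phi(x) dx.
# The truncation to [-10, 10] and the grid size are arbitrary choices.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
phi = np.exp(-np.pi * x**2)

def fourier(xi):
    """Approximate the Fourier transform of phi at frequency xi."""
    return np.sum(phi * np.exp(-2j * np.pi * x * xi)) * dx

for xi in (0.0, 0.5, 1.3):
    assert abs(fourier(xi) - np.exp(-np.pi * xi**2)) < 1e-8
```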

For \(x\in \mathbb {R}^d\), we define \(\langle x \rangle {{:}{=}}(1+|x|^2)^{1/2}\). For \(s\in \mathbb {R}\), define \(H^s(\mathbb {R}^d) \subset \mathcal {S}^{\prime }(\mathbb {R}^d)\) to be the set of functionals \(f\) for which there exists \(R^s_f\in L^2(\mathbb {R}^d)\) such that for all \(\phi \in \mathcal {S}(\mathbb {R}^d)\),

$$\begin{aligned} \langle \widehat{f},\phi \rangle = \int _{\mathbb {R}^d} R^s_f(\xi ) \phi (\xi ) \langle \xi \rangle ^{-s}\,d\xi . \end{aligned}$$
(5.1)

Equipped with the inner product

$$\begin{aligned} \langle f,g \rangle _{H^s(\mathbb {R}^d)} \, {{:}{=}}\, \int _{\mathbb {R}^d} R^s_f(\xi ) \overline{R^s_g(\xi )} \,d\xi , \end{aligned}$$
(5.2)

\(H^s(\mathbb {R}^d)\) is a Hilbert space. (The space \(H^s(\mathbb {R}^d)\) is the same as the Sobolev space denoted \(W^{s,2}(\mathbb {R}^d)\) in the literature.)

Recall that the support of a function \(f:\mathbb {R}^d \rightarrow \mathbb {C}\) is defined to be the closure of the set of points where \(f\) is nonzero. Define \(\mathbb {T}=[-\pi ,\pi ]\) with endpoints identified, so that \(\mathbb {T}^d\), the \(d\)-dimensional torus, is a compact manifold. If \(M\) is a manifold (such as \(\mathbb {R}^d\) or \(\mathbb {T}^d\)), we denote by \(C_c^\infty (M)\) the space of smooth, compactly supported functions on \(M\). We define the topology of \(C_c^\infty (M)\) so that \(\psi _n \rightarrow \psi \) if and only if there exists a compact set \(K\subset M\) on which each \(\psi _n\) is supported and \(\partial ^\alpha \psi _n \rightarrow \partial ^\alpha \psi \) uniformly, for all multi-indices \(\alpha \) [15]. We write \(C_c^\infty (M)^{\prime }\) for the space of continuous linear functionals on \(C_c^\infty (M)\), and we call elements of \(C_c^\infty (M)^{\prime }\) distributions on \(M\). For \(f\in C_c^\infty (\mathbb {T}^d)^{\prime }\) and \(k\in \mathbb {Z}^d\), we define the Fourier coefficient \(\widehat{f}(k)\) by evaluating \(f\) on the element \(x\mapsto e^{-ik\cdot x}\) of \(C_c^\infty (\mathbb {T}^d)\). For distributions \(f\) and \(g\) on \(\mathbb {T}^d\), we define an inner product in terms of the Fourier coefficients \(\widehat{f}(k)\) and \(\widehat{g}(k)\):

$$\begin{aligned} \langle f,g \rangle _{H^s(\mathbb {T}^d)} \, {{:}{=}}\, \sum _{k\in \mathbb {Z}^d} \langle k \rangle ^{2s}\widehat{f}(k) \overline{\widehat{g}(k)}. \end{aligned}$$
(5.3)

If \(f\in \mathcal {S}^{\prime }(\mathbb {R}^d)\) is supported in \((-\pi ,\pi )^d\), i.e. vanishes on functions which are supported in the complement of \((-\pi ,\pi )^d\), then \(f\) can be thought of as a distribution on \(\mathbb {T}^d\), and the norms corresponding to the inner products in (5.2) and (5.3) are equivalent [16] for such distributions \(f\).
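To make the conventions in (5.3) concrete (an illustrative computation on our part, with a hypothetical test function), one can compute the Fourier coefficients of a trigonometric polynomial on \(\mathbb {T}\) by Riemann sums and check Parseval's identity, which for the unnormalized coefficients \(\widehat{f}(k)=\int _{\mathbb {T}} f(x)e^{-ikx}\,dx\) reads \(\int _{\mathbb {T}}|f|^2\,dx = (2\pi )^{-1}\sum _k |\widehat{f}(k)|^2\):

```python
import numpy as np

# Fourier coefficients on the torus T = [-pi, pi) for the (hypothetical)
# smooth test function f(x) = cos(x) + 0.5*sin(2x), and the H^s(T^1)
# squared norm from (5.3).  The grid is uniform, so the Riemann sums for
# low-order trigonometric polynomials are exact up to rounding.
N = 4096
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = np.cos(x) + 0.5 * np.sin(2 * x)

def fhat(k):
    """Coefficient fhat(k): the integral of f(x) e^{-ikx} over T."""
    return np.sum(f * np.exp(-1j * k * x)) * dx

# Parseval: \int |f|^2 dx = (1/(2 pi)) * sum_k |fhat(k)|^2
l2_sq = np.sum(np.abs(f)**2) * dx
parseval = sum(abs(fhat(k))**2 for k in range(-8, 9)) / (2 * np.pi)
assert abs(l2_sq - parseval) < 1e-10

# Squared H^s(T^1) norm from (5.3), here with s = -1, so <k>^{2s} = (1+k^2)^{-1}:
hs_norm_sq = sum((1.0 + k**2)**(-1) * abs(fhat(k))**2 for k in range(-8, 9))
```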

Note that \(H^{-s}(\mathbb {R}^d)\) can be identified with the dual of \(H^s(\mathbb {R}^d)\): we associate with \(f\in H^{-s}(\mathbb {R}^d)\) the functional \(g\mapsto \langle f,g \rangle \) defined for \(g\in H^{s}(\mathbb {R}^d)\) by

$$\begin{aligned} \langle f,g \rangle \, {{:}{=}}\, \int _{\mathbb {R}^d} R^{-s}_f(\xi ) \overline{R^s_g(\xi )}\,d\xi . \end{aligned}$$

This notation is justified by the fact that when \(f\) and \(g\) are in \(L^2(\mathbb {R}^d)\), this is the same as the \(L^2(\mathbb {R}^d)\) inner product of \(f\) and \(g\). By the Cauchy-Schwarz inequality, \(g\mapsto \langle f,g \rangle \) is a bounded linear functional on \(H^s(\mathbb {R}^d)\). Observe that the operator norm topology on the dual of \(H^s(\mathbb {R}^d)\) coincides with the norm topology of \(H^{-s}(\mathbb {R}^d)\) under this identification.

It will be convenient to work with local versions of the Sobolev spaces \(H^s(\mathbb {R}^d)\). If \(h\in \mathcal {S}^{\prime }(\mathbb {R}^d)\) and \(\psi \in C_c^\infty (\mathbb {R}^d)\), we define the product \(\psi h \in \mathcal {S}^{\prime }(\mathbb {R}^d)\) by \(\langle \psi h,f \rangle = \langle h,\psi f \rangle \). Furthermore, if \(h\in H^{s}(\mathbb {R}^d)\), then \(\psi h\in H^s(\mathbb {R}^d)\) as well [1, Lemma 4.3.16]. For \(h\in C_c^\infty (D)^{\prime }\), we say that \(h \in H_{{\text {loc}}}^{s}(D)\) if \(\psi h \in H^{s}(\mathbb {R}^d)\) for every \(\psi \in C_c^\infty (D)\). We equip \(H_{{\text {loc}}}^{s}(D)\) with a topology generated by the seminorms \(\Vert \psi \cdot \Vert _{H^{s}(\mathbb {R}^d)}\), which implies that \(h_n \rightarrow h\) in \(H_{{\text {loc}}}^{s}(D)\) if and only if \(\psi h_n \rightarrow \psi h\) in \(H^{s}(\mathbb {R}^d)\) for all \(\psi \in C_c^\infty (D)\).

The following proposition provides sufficient conditions for proving almost sure convergence in \(H_{{\text {loc}}}^{-d-\delta }(\mathbb {R}^d)\).

Proposition 5.1

Let \(D\subset \mathbb {R}^d\) be an open set, let \(\delta >0\), and suppose that \((f_n)_{n\in \mathbb {N}}\) is a sequence of random measurable functions defined on \(D\). Suppose further that for every compact set \(K\subset D\),

$$\begin{aligned} \int _K \mathbb {E}\big [|f_n(x)|^2\big ]\,dx < \infty \end{aligned}$$

and there exists a summable sequence \((a_n)_{n\in \mathbb {N}}\) of positive real numbers such that for all \(n\in \mathbb {N}\), we have

$$\begin{aligned} \iint _{K\times K} \left| \mathbb {E}[(f_{n+1}(x)-f_{n}(x)) (f_{n+1}(y)-f_n(y))] \right| \,dx \,dy \le a_n^3. \end{aligned}$$
(5.4)

Then there exists a random element \(f\in H_{{\text {loc}}}^{-d-\delta }(\mathbb {R}^d)\) supported on the closure of \(D\) such that \(f_n \rightarrow f\) in \(H_{{\text {loc}}}^{-d-\delta }(D)\) almost surely.

Before proving Proposition 5.1, we prove the following lemma. Recall that a sequence \((K_n)_{n\in \mathbb {N}}\) of compact sets is called a compact exhaustion of \(D\) if \(K_n\subset K_{n+1} \subset D\) for all \(n\in \mathbb {N}\) and \(D=\bigcup _{n\in \mathbb {N}} K_n\).

Lemma 5.2

Let \(s>0\), let \(D\subset \mathbb {R}^d\) be an open set, suppose that \((K_j)_{j\in \mathbb {N}}\) is a compact exhaustion of \(D\), and let \((f_n)_{n \in \mathbb {N}}\) be a sequence of elements of \(H^{-s}(\mathbb {R}^d)\). Suppose further that \((\psi _j)_{j\in \mathbb {N}}\) satisfies \(\psi _j \in C_c^\infty (D)\) and \(\left. \psi _j\right| _{K_j}=1\) for all \(j\in \mathbb {N}\). If for every \(j\) there exists \(f^{\psi _j} \in H^{-s}(\mathbb {R}^d)\) such that \(\psi _j f_n \rightarrow f^{\psi _j}\) as \(n\rightarrow \infty \) in \(H^{-s}(\mathbb {R}^d)\), then there exists \(f\in H_{{\text {loc}}}^{-s}(D)\) such that \(f_n \rightarrow f\) in \(H_{{\text {loc}}}^{-s}(D)\).

Proof

We claim that for all \(\psi \in C_c^\infty (D)\), the sequence \(\psi f_n\) is Cauchy in \(H^{-s}(\mathbb {R}^d)\). We choose \(j\) large enough that \({{\mathrm{supp}}}\psi \subset K_j\), so that \(\psi _j\psi =\psi \). For all \(g\in H^s(\mathbb {R}^d)\),

$$\begin{aligned} |\langle \psi f_n,g \rangle - \langle \psi f_m,g \rangle | = |\langle \psi _j(f_n-f_m),\psi g \rangle |. \end{aligned}$$

By hypothesis \(\psi _j f_n\) converges in \(H^{-s}(\mathbb {R}^d)\) as \(n\rightarrow \infty \), so we may take the supremum over \(\{g: \Vert g\Vert _{H^s(\mathbb {R}^d)}\le 1\}\) of both sides to conclude \(\Vert \psi f_n-\psi f_m\Vert _{H^{-s}(\mathbb {R}^d)}\rightarrow 0\) as \(\min (m,n)\rightarrow \infty \). Since \(H^{-s}(\mathbb {R}^d)\) is complete, it follows that for every \(\psi \in C_c^\infty (D)\), there exists \(f^\psi \in H^{-s}(\mathbb {R}^d)\) such that \(\psi f_n \rightarrow f^\psi \) in \(H^{-s}(\mathbb {R}^d)\).

We define a linear functional \(f\) on \(C_c^\infty (D)\) as follows. For \(g\in C_c^\infty (D)\), set

$$\begin{aligned} \langle f,g \rangle {{:}{=}}\langle f^{\psi },g \rangle , \end{aligned}$$
(5.5)

where \(\psi \) is a smooth compactly supported function which is identically equal to 1 on the support of \(g\). To see that this definition does not depend on the choice of \(\psi \), suppose that \(\psi _1\in C_c^\infty (D)\) and \(\psi _2\in C_c^\infty (D)\) are both equal to 1 on the support of \(g\). Then we have

$$\begin{aligned} \langle f^{\psi _1},g \rangle - \langle f^{\psi _2},g \rangle = \lim _{n\rightarrow \infty }\langle (\psi _1-\psi _2)f_n,g \rangle = 0, \end{aligned}$$

as desired. From the definition in (5.5), \(f\) inherits linearity from \(f^\psi \) and thus defines a linear functional on \(C_c^\infty (D)\). Furthermore, \(f\in H_{{\text {loc}}}^{-s}(D)\) since \(\psi f = f^\psi \in H^{-s}(\mathbb {R}^d)\) for all \(\psi \in C_c^\infty (D)\). Finally, \(f_n \rightarrow f\) in \(H_{{\text {loc}}}^{-s}(D)\) since \(\psi f_n \rightarrow \psi f=f^\psi \) in \(H^{-s}(\mathbb {R}^d)\). \(\square \)

Proof of Proposition 5.1

Fix \(\psi \in C_c^\infty (D)\). Let \(D_\psi \) be a bounded open set containing the support of \(\psi \) and whose closure is contained in \(D\). Since \(D_\psi \) is bounded, we may scale and translate it so that it is contained in \((-\pi ,\pi )^d\). We will calculate the Fourier coefficients of \(\psi (f_{n+1}-f_n)\) in \((-\pi ,\pi )^d\), identifying it with \(\mathbb {T}^d\). By Fubini’s theorem, we have for all \(k\in \mathbb {Z}^d\)

$$\begin{aligned} \mathbb {E}\left| \widehat{\psi f_{n+1}-\psi f_n}(k)\right| ^2&= \mathbb {E}\left[ \left| \int _D (f_{n+1}(x)-f_{n}(x))\psi (x) e^{-ik\cdot x}\, dx \right| ^2\right] \nonumber \\&\le \Vert \psi \Vert ^2_{L^\infty (\mathbb {R}^d)} \iint \limits _{D_\psi \times D_\psi } \left| \mathbb {E}\left[ (f_{n+1}(x) - f_n(x))(f_{n+1}(y) - f_n(y))\right] \right| \, dx \,dy \nonumber \\&\le \,\Vert \psi \Vert ^2_{L^\infty (\mathbb {R}^d)}\,a_n^3, \end{aligned}$$
(5.6)

by (5.4). By Markov’s inequality, (5.6) implies

$$\begin{aligned} \mathbb {P}\left[ |\widehat{\psi f_{n+1}-\psi f_{n}}(k)| \ge a_n \langle k \rangle ^{d/2+\delta /2}\right] \le \Vert \psi \Vert ^2_{L^\infty (\mathbb {R}^d)}\,\langle k \rangle ^{-d-\delta }a_n. \end{aligned}$$

The right-hand side is summable in \(k\) and \(n\), so by the Borel-Cantelli lemma, the event on the left-hand side occurs for at most finitely many pairs \((n,k)\), almost surely. Therefore, there is a random index \(n_0\) such that this event does not occur for any \(n\ge n_0\) and \(k\in \mathbb {Z}^d\). For these values of \(n\), we have

$$\begin{aligned} \Vert \psi f_n - \psi f_{n+1}\Vert ^2_{H^{-d-\delta }(\mathbb {T}^d)}&= \sum _{k\in \mathbb {Z}^d} |\widehat{\psi f_n - \psi f_{n+1}}(k)|^2 \langle k \rangle ^{-2(d+\delta )} \\&\le \sum _{k\in \mathbb {Z}^d} a_n^2\langle k \rangle ^{d+\delta }\langle k \rangle ^{-2d-2\delta } \\&= O(a_n^2/\delta ). \end{aligned}$$
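The summability in \(k\) used in the last step, \(\sum _{k\in \mathbb {Z}^d}\langle k \rangle ^{-d-\delta }=O(1/\delta )\), is easy to check numerically in small cases (an aside on our part, not needed for the proof); for \(d=1\) and \(\delta =1\) the sum even has the classical closed form \(\sum _{k\in \mathbb {Z}}(1+k^2)^{-1}=\pi \coth \pi \):

```python
import math

# Partial sum of sum_{k in Z} (1 + k^2)^{-1}, i.e. the case d = 1,
# delta = 1 of sum_k <k>^{-d-delta}, compared with pi*coth(pi).
partial = 1.0 + 2.0 * sum(1.0 / (1 + k * k) for k in range(1, 10**6))
exact = math.pi / math.tanh(math.pi)
# The tail beyond |k| = 10^6 is at most 2 * \int_{10^6}^infty x^{-2} dx = 2e-6.
assert abs(partial - exact) < 1e-5
```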

Applying the triangle inequality, we find that for \(m,n\ge n_0\)

$$\begin{aligned} \Vert \psi f_m - \psi f_n \Vert _{H^{-d-\delta }(\mathbb {T}^d)} =O\left( \delta ^{-1/2}\sum _{j=m}^{n-1}a_j\right) . \end{aligned}$$
(5.7)

Recall that the \(H^{-d-\delta }(\mathbb {T}^d)\) and \(H^{-d-\delta }(\mathbb {R}^d)\) norms are equivalent for functions supported in \((-\pi ,\pi )^d\) (see the discussion following (5.3)). The sequence \((a_n)_{n\in \mathbb {N}}\) is summable by hypothesis, so (5.7) shows that \((\psi f_n)_{n\in \mathbb {N}}\) is almost surely Cauchy in \(H^{-d-\delta }(\mathbb {R}^d)\). Since \(H^{-d-\delta }(\mathbb {R}^d)\) is complete, this implies that with probability 1 there exists \(h^\psi \in H^{-d-\delta }(\mathbb {R}^d)\) to which \(\psi f_n\) converges in \(H^{-d-\delta }(\mathbb {R}^d)\).

By assumption \(f_n\in H_{{\text {loc}}}^0(\mathbb {R}^d)\), so \(f_n\in H^{-d-\delta }(\mathbb {R}^d)\). We may then apply Lemma 5.2 to obtain a limiting random variable \(f\in H_{{\text {loc}}}^{-d-\delta }(\mathbb {R}^d)\) to which \((f_n)_{n\in \mathbb {N}}\) converges in \(H_{{\text {loc}}}^{-d-\delta }(\mathbb {R}^d)\). \(\square \)

6 Convergence to limiting field

We have most of the ingredients in place to prove the convergence of the centered \(\varepsilon \)-nesting fields, but we need one more lemma.

Lemma 6.1

Fix \(C>0, \alpha >0\), and \(L\in \mathbb {R}\). Suppose that \(F, F_1,\) and \(F_2\) are real-valued functions on \((0,\infty )\) such that

  1. (i)

    \(F_1\) is nondecreasing on \((0,\infty )\),

  2. (ii)

    \(|F_2(x+\delta )-F_2(x)|\le C \max (\delta ^\alpha ,e^{-\alpha x})\) for all \(x>0\) and \(\delta >0\),

  3. (iii)

    \(F=F_1+F_2\), and

  4. (iv)

    for all \(\delta >0\), \(F(n\delta ) \rightarrow L\) as \(n\rightarrow \infty \) through the positive integers.

Then \(F(x) \rightarrow L\) as \(x\rightarrow \infty \).

Proof

Let \(\varepsilon >0\), and choose \(\delta >0\) so that \(C\delta ^\alpha < \varepsilon \). Choose \(x_0\) large enough that \(C e^{-\alpha x_0}<\varepsilon \) and \(|F(n\delta )-L|< \varepsilon \) for all \(n>x_0/\delta \). Fix \(x>x_0\), and define \(a=\delta \lfloor x/\delta \rfloor \). For \(u\in \{F,F_1,F_2\}\), we write \(\Delta u=u(a+\delta )-u(a)\). Observe that \(|\Delta F_2|\le \varepsilon \) by (ii). By (iii) and (iv), this implies

$$\begin{aligned} |\Delta F_1| = |\Delta F - \Delta F_2| \le |\Delta F| + |\Delta F_2| < 3\varepsilon . \end{aligned}$$

Since \(F_1\) is monotone, we get \(|F_1(x)-F_1(a)|< 3\varepsilon \). Furthermore, (ii) implies \(|F_2(x)-F_2(a)|< \varepsilon \). It follows that

$$\begin{aligned} |F(x)-L| \le |F_1(x)-F_1(a)|+|F_2(x)-F_2(a)|+|F(a)-L| < 5\varepsilon . \end{aligned}$$

Since \(x>x_0\) and \(\varepsilon >0\) were arbitrary, this concludes the proof. \(\square \)
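The hypotheses of Lemma 6.1 can be illustrated with a toy example (our own illustration, not drawn from the argument above): take \(F_1=\arctan \), which is nondecreasing, and \(F_2(x)=e^{-x}\sin x\), which satisfies (ii) with \(C=2\) and \(\alpha =1\); then \(F=F_1+F_2\) converges to \(L=\pi /2\) along every grid \(\{n\delta \}\), and the lemma upgrades this to convergence along all reals:

```python
import math

# Toy instance of Lemma 6.1: F1 = arctan (nondecreasing), and
# F2(x) = exp(-x) * sin(x), for which
#   |F2(x + d) - F2(x)| <= 2 * max(d, exp(-x)),
# so hypothesis (ii) holds with C = 2, alpha = 1.  Here L = pi/2.
L = math.pi / 2

def F(x):
    return math.atan(x) + math.exp(-x) * math.sin(x)

# Hypothesis (iv): convergence along arithmetic progressions n*delta.
for delta in (0.1, 0.7):
    assert abs(F(1000 * delta) - L) < 2e-2
# Conclusion of the lemma: convergence along arbitrary large reals.
for x in (57.3, 123.456, 999.99):
    assert abs(F(x) - L) < 2e-2
```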

Theorem 6.2

Let \(h_\varepsilon (z)\) be the centered weighted nesting of a \({{\mathrm{CLE}}}_\kappa \) around the ball \(B(z,\varepsilon )\), defined in (1.2). Suppose \(0<a<1\) and \(\delta >0\). Then \((h_{a^n})_{n\in \mathbb {N}}\) almost surely converges in \(H_{{\text {loc}}}^{-2-\delta }(D)\).

Proof

Immediate from Theorem 4.1 and Proposition 5.1. \(\square \)

Proof of Theorem 1.1

We claim that for all \(g\in C_c^\infty (D)\), we have \(\langle h_\varepsilon ,g \rangle \rightarrow \langle h,g \rangle \) almost surely. Suppose first that the loop weights are almost surely nonnegative and that \(g\in C_c^\infty (D)\) is a nonnegative test function. Define \(F(x){{:}{=}}\langle h_{e^{-x}},g \rangle , F_1(x){{:}{=}}\langle S_z(e^{-x}),g \rangle \), and \(F_2(x){{:}{=}}-\langle \mathbb {E}[S_z(e^{-x})],g \rangle \). We apply Lemma 6.1 with \(\alpha \) as given in Lemma 2.6, which implies

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \langle h_\varepsilon ,g \rangle = \langle h,g \rangle \quad \text {for} \quad g\in C_c^\infty (D), g\ge 0. \end{aligned}$$
(6.1)

For arbitrary \(g\in C_c^\infty (D)\), we choose \(\tilde{g}\in C_c^\infty (D)\) so that \(\tilde{g}\) and \(g+\tilde{g}\) are both nonnegative. Applying (6.1) to \(\tilde{g}\) and \(g+\tilde{g}\), we see that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \langle h_\varepsilon ,g \rangle = \langle h,g \rangle \quad \text {for} \quad g\in C_c^\infty (D). \end{aligned}$$
(6.2)

Finally, consider loop weights which are not necessarily nonnegative. Define loop weights \(\xi ^{\pm }_\mathcal {L}=(\xi _\mathcal {L})^{\pm }\), where \(x^+=\max (0,x)\) and \(x^-=\max (0,-x)\) denote the positive and negative parts of \(x\in \mathbb {R}\). Define \(h^{\pm }\) to be the weighted nesting fields associated with the weights \(\xi ^{\pm }_\mathcal {L}\) (associated with the same \({{\mathrm{CLE}}}\)). Then \(\langle h^{\pm }_{\varepsilon },g \rangle \rightarrow \langle h^{\pm },g \rangle \) almost surely, and

$$\begin{aligned} \langle h_\varepsilon ,g \rangle = \langle h_\varepsilon ^+,g \rangle -\langle h_\varepsilon ^-,g \rangle \rightarrow \langle h^+,g \rangle -\langle h^-,g \rangle = \langle h,g \rangle , \end{aligned}$$

which concludes the proof that \(\langle h_\varepsilon ,g \rangle \rightarrow \langle h,g \rangle \) almost surely.

To see that the field \(h\) is measurable with respect to the \(\sigma \)-algebra \(\Sigma \) generated by the \({{\mathrm{CLE}}}_\kappa \) and the weights \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\), note that there exists a countable dense subset \(\mathcal {F}\) of \(C_c^\infty (D)\) [15, Exercise 1.13.6]. Observe that \(h_{2^{-n}}\) is \(\Sigma \)-measurable and \(h\) is determined by the values \(\{\langle h_{2^{-n}},g \rangle : n\in \mathbb {N},g\in \mathcal {F}\}\). Since \(h\) is an almost sure limit of \(h_{2^{-n}}\), we conclude that \(h\) is also \(\Sigma \)-measurable.

To establish conformal invariance, let \(z\in D\) and \(\varepsilon >0\) and define the sets of loops

$$\begin{aligned} \Xi _1&= \text {loops surrounding }B(\varphi (z),\varepsilon |\varphi ^{\prime }(z)|), \\ \Xi _2&= \text {loops surrounding }\varphi (B(z,\varepsilon )),\text { and} \\ \Xi _3&= \Xi _1 \,\Delta \,\Xi _2, \end{aligned}$$

where \(\Delta \) denotes the symmetric difference of two sets. Since either \(\Xi _1\subset \Xi _2\) or \(\Xi _2\subset \Xi _1\),

$$\begin{aligned} h_\varepsilon (z) - \acute{h}_{\varepsilon |\varphi ^{\prime }(z)|}(\varphi (z)) = \pm \sum _{\mathcal {L}\in \Xi _3} \xi _\mathcal {L}. \end{aligned}$$

Multiplying by \(g\in C_c^\infty (D)\), integrating over \(D\), and taking \(\varepsilon \rightarrow 0\), we see that by Lemma 2.7 and the finiteness of \(\mathbb {E}[|\xi _\mathcal {L}|]\), the sum on the right-hand side goes to 0 in \(L^1\) and hence in probability as \(\varepsilon \rightarrow 0\). Furthermore, we claim that

$$\begin{aligned} \int _D \left[ \acute{h}_{\varepsilon |\varphi ^{\prime }(z)|}(\varphi (z)) - \acute{h}_{\varepsilon }(\varphi (z))\right] g(z)\,dz \rightarrow 0 \end{aligned}$$

in probability as \(\varepsilon \rightarrow 0\). To see this, we write the difference in square brackets as

$$\begin{aligned} \acute{h}_{\varepsilon |\varphi ^{\prime }(z)|}(\varphi (z)) - \acute{h}_{C\varepsilon }(\varphi (z)) + \acute{h}_{C\varepsilon }(\varphi (z)) - \acute{h}_{\varepsilon }(\varphi (z)), \end{aligned}$$

where \(C\) is an upper bound for \(|\varphi ^{\prime }(z)|\) as \(z\) ranges over the support of \(g\). Note that \(\int _D \left[ \acute{h}_{C\varepsilon }(\varphi (z)) - \acute{h}_{\varepsilon }(\varphi (z))\right] g(z)\,dz \rightarrow 0\) in probability because for all \(0<\varepsilon ^{\prime }<\varepsilon \) and \(\psi \in C_c^\infty (D)\), we have

$$\begin{aligned} \mathbb {E}\Vert \psi h_\varepsilon - \psi h_{\varepsilon ^{\prime }}\Vert ^2_{H^{-d-\delta }(\mathbb {T}^d)}&= \sum _{k\in \mathbb {Z}^d} \mathbb {E}|\widehat{\psi h_\varepsilon - \psi h_{\varepsilon ^{\prime }}}(k)|^2 \langle k \rangle ^{-2(d+\delta )} \\&\le \Vert \psi \Vert ^2_{L^\infty (\mathbb {R}^d)} \sum _{k\in \mathbb {Z}^d}\langle k \rangle ^{-2(d+\delta )} \\&\quad \times \iint _{D_\psi \times D_\psi } \big |\mathbb {E}\big [(h_\varepsilon (x) - h_{\varepsilon ^{\prime }}(x))(h_\varepsilon (y) - h_{\varepsilon ^{\prime }}(y))\big ]\big | \, dx \,dy \\&\le \varepsilon ^{\Omega (1)}/\delta ; \end{aligned}$$

see (5.6) for more details. The same calculation along with Theorem 4.1 shows that

$$\begin{aligned} \int _D \left[ \acute{h}_{C\varepsilon }(\varphi (z)) - \acute{h}_{\varepsilon |\varphi ^{\prime }(z)|}(\varphi (z))\right] g(z)\,dz \rightarrow 0, \end{aligned}$$

in probability. It follows that \(\langle h,g \rangle = \langle \acute{h}\circ \varphi , g \rangle \) for all \(g\in C_c^\infty (D)\), as desired. \(\square \)

7 Step nesting

In this section we prove Theorem 1.2. Suppose that \(D\) is a proper simply connected domain, and let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) in \(D\). Let \(\mu \) be a probability measure with finite second moment and zero mean, let \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\) be an i.i.d. sequence of \(\mu \)-distributed random variables indexed by the loops of \(\Gamma \), and define

$$\begin{aligned} \mathfrak {h}_n(z) = \sum _{k=1}^n \xi _{\mathcal {L}_k(z)}, \quad n\in \mathbb {N}. \end{aligned}$$

We call \((\mathfrak {h}_n)_{n\in \mathbb {N}}\) the step nesting sequence associated with \(\Gamma \) and \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\).

Lemma 7.1

For each \(\kappa \in (8/3,8)\) there are positive constants \(c_1, c_2\), and \(c_3\) (depending on \(\kappa \)) such that for any simply connected proper domain \(D\subsetneq \mathbb {C}\) and points \(z,w\in D\), for a \({{\mathrm{CLE}}}_\kappa \) in \(D\),

$$\begin{aligned} \Pr \left[ \mathcal {N}_{z,w}\ge c_1 \log \frac{{{\mathrm{CR}}}(z;D)}{|z-w|}+c_2 j+c_3\right] \le \exp [-j]. \end{aligned}$$

Proof

Let \(X, X_1, X_2, \ldots \) be i.i.d. copies of the log conformal radius distribution, and let \(T_\ell =\sum _{i=1}^\ell X_i\). Then

$$\begin{aligned} \Pr [T_\ell \le t]&\le \mathbb {E}[e^{-X}]^\ell e^{t} \\ \Pr [T_\ell \le \log ({{\mathrm{CR}}}(z;D)/|z-w|)]&\le \mathbb {E}[e^{-X}]^\ell \frac{{{\mathrm{CR}}}(z;D)}{|z-w|}. \end{aligned}$$
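The first inequality above is the standard Chernoff bound: applying Markov's inequality to \(e^{-T_\ell }\) and using the independence of the \(X_i\),

$$\begin{aligned} \Pr [T_\ell \le t] = \Pr \left[ e^{-T_\ell } \ge e^{-t}\right] \le e^{t}\,\mathbb {E}\left[ e^{-T_\ell }\right] = \mathbb {E}\left[ e^{-X}\right] ^\ell e^{t}. \end{aligned}$$

Since \(X>0\) almost surely, \(\mathbb {E}[e^{-X}]<1\), so the right-hand side decays exponentially in \(\ell \) for fixed \(t\).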

If \(T_\ell > \log ({{\mathrm{CR}}}(z;D)/|z-w|)\), then \(J_{z,|z-w|}^\cap \le \ell \). But \(\mathcal {N}_{z,w}< J_{z,|z-w|}^\subset \), and by Corollary 2.4, \(J_{z,|z-w|}^\subset -J_{z,|z-w|}^\cap \) has exponential tails. \(\square \)

Proof of Theorem 1.2

We check that (5.4) holds with \(f_n = \mathfrak {h}_n\). Writing out each difference as a sum of loop weights and using the linearity of expectation, we calculate for \(0\le m\le n\) and \(z,w\in D\),

$$\begin{aligned} \mathbb {E}[(\mathfrak {h}_m(z)-\mathfrak {h}_n(z))(\mathfrak {h}_m(w)-\mathfrak {h}_n(w))]&= \sigma ^2 \sum _{k=m+1}^n \mathbb {P}[\mathcal {N}_{z,w}\ge k]. \end{aligned}$$
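To see this, note that the weights are independent with mean zero and variance \(\sigma ^2\), so

$$\begin{aligned} \mathbb {E}\left[ \xi _{\mathcal {L}_j(z)}\,\xi _{\mathcal {L}_k(w)}\right] = \sigma ^2\,\mathbf {1}\{\mathcal {L}_j(z) = \mathcal {L}_k(w)\}. \end{aligned}$$

Since any loop containing a loop which surrounds both \(z\) and \(w\) also surrounds both points, the loops surrounding both \(z\) and \(w\) form the common initial segment \(\mathcal {L}_1(z)=\mathcal {L}_1(w), \ldots , \mathcal {L}_{\mathcal {N}_{z,w}}(z)=\mathcal {L}_{\mathcal {N}_{z,w}}(w)\) of the two nesting sequences, and hence \(\mathcal {L}_j(z) = \mathcal {L}_k(w)\) exactly when \(j=k\le \mathcal {N}_{z,w}\). Summing over \(m<j,k\le n\) gives the display above.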

Let \(\delta (z)\) be the value for which \(c_1\log ({{\mathrm{CR}}}(z;D)/\delta (z))+c_3=k\), where \(c_1\) and \(c_3\) are as in Lemma 7.1. Let \(K\) be compact, and let \(\delta =\max _{z\in K}\delta (z)\). Then

$$\begin{aligned} \delta \le \exp [-\Theta (k)] \times \sup _{z\in K} {{\mathrm{dist}}}(z,\partial D) \end{aligned}$$
(7.1)

and

$$\begin{aligned} \iint \limits _{\begin{array}{c} K\times K\\ |z-w|\ge \delta \end{array}} \Pr [\mathcal {N}_{z,w}\ge k] \,dz\,dw \le \exp (-k) \times {\text {area}}(K)^2. \end{aligned}$$
(7.2)

The integral of \(\mathbb {P}[\mathcal {N}_{z,w}\ge k]\) over \(z,w\) which are closer than \(\delta \) is controlled by virtue of the small volume of the domain of integration:

$$\begin{aligned} \iint \limits _{\begin{array}{c} K\times K\\ |z-w|\le \delta \end{array}}\mathbb {P}[\mathcal {N}_{z,w}\ge k] \,dz\,dw \le \delta ^2\times {\text {area}}(K). \end{aligned}$$
(7.3)

Putting together (7.1)–(7.3) establishes

$$\begin{aligned} \iint \limits _{K\times K}\mathbb {P}\left[ \mathcal {N}_{z,w}\ge k\right] \,dz\,dw \le \exp [-\Theta (k)]\times C_{K,D} \end{aligned}$$
(7.4)

as \(k\rightarrow \infty \).

Having proved (7.4), we may appeal to Proposition 5.1 and conclude that \(\mathfrak {h}_n\) converges almost surely to a limiting random variable \(\mathfrak {h}\) taking values in \(H_{{\text {loc}}}^{-2-\delta }(D)\).

Since each \(\mathfrak {h}_n\) is determined by \(\Gamma \) and \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\), the same is true of \(\mathfrak {h}\). Similarly, for each \(n\in \mathbb {N}\), \(\mathfrak {h}_n\) inherits conformal invariance from the underlying \({{\mathrm{CLE}}}_\kappa \). It follows that \(\mathfrak {h}\) is conformally invariant as well. \(\square \)

The following proposition shows that if the weight distribution \(\mu \) has zero mean, then the step nesting field \(\mathfrak {h}\) and the usual nesting field \(h\) are equal.

Proposition 7.2

Suppose that \(D \subsetneq \mathbb {C}\) is a simply connected domain, and let \(\mu \) be a probability measure with finite second moment and zero mean. Let \(\Gamma \) be a \({{\mathrm{CLE}}}_\kappa \) in \(D\), and let \((\xi _\mathcal {L})_{\mathcal {L}\in \Gamma }\) be an i.i.d. sequence of \(\mu \)-distributed random variables. The weighted nesting field \(h=h(\Gamma ,(\xi _\mathcal {L})_{\mathcal {L}\in \Gamma })\) from Theorem 1.1 and the step nesting field \(\mathfrak {h}=\mathfrak {h}(\Gamma ,(\xi _\mathcal {L})_{\mathcal {L}\in \Gamma })\) from Theorem 1.2 are almost surely equal.

Proof

Let \(g\in C_c^\infty (D)\), \(\varepsilon >0\), and \(n\in \mathbb {N}\). By Fubini’s theorem, we have

$$\begin{aligned}&\mathbb {E}[(\langle h_\varepsilon ,g \rangle -\langle \mathfrak {h}_n,g \rangle )^2] \nonumber \\&\quad = \iint _{D\times D} \mathbb {E}[(h_\varepsilon (z) - \mathfrak {h}_n(z))(h_\varepsilon (w) - \mathfrak {h}_n(w))]\,g(z)g(w)\,dz\,dw. \end{aligned}$$
(7.5)

Applying the same technique as in (4.2), we find that the expectation on the right-hand side of (7.5) is bounded by \(\sigma ^2\) times the expectation of the number \(N_{z,w}(n,\varepsilon )\) of loops \(\mathcal {L}\) satisfying both of the following conditions:

  1. 1.

    \(\mathcal {L}\) surrounds \(B(z,\varepsilon )\) or \(\mathcal {L}\) is among the \(n\) outermost loops surrounding \(z\), but not both.

  2. 2.

    \(\mathcal {L}\) surrounds \(B(w,\varepsilon )\) or \(\mathcal {L}\) is among the \(n\) outermost loops surrounding \(w\), but not both.

Using Fatou’s lemma and (7.5), we find that

$$\begin{aligned} \mathbb {E}[(\langle h,g \rangle -\langle \mathfrak {h},g \rangle )^2]&= \mathbb {E}\left[ \lim _{\varepsilon \rightarrow 0}\lim _{n\rightarrow \infty } (\langle h_\varepsilon ,g \rangle -\langle \mathfrak {h}_n,g \rangle )^2\right] \\&\le \liminf _{\varepsilon \rightarrow 0}\liminf _{n\rightarrow \infty } \mathbb {E}[ (\langle h_\varepsilon ,g \rangle -\langle \mathfrak {h}_n,g \rangle )^2] \\&\le \liminf _{\varepsilon \rightarrow 0}\liminf _{n\rightarrow \infty } \iint _{D\times D}\mathbb {E}[N_{z,w}(n,\varepsilon )]\,g(z)g(w) \,dz\,dw \\&\le \limsup _{\varepsilon \rightarrow 0}\limsup _{n\rightarrow \infty } \iint _{D\times D}\mathbb {E}[N_{z,w}(n,\varepsilon )]\,g(z)g(w) \,dz\,dw. \end{aligned}$$

If \(z\ne w\), then \(\mathcal {N}_{z,w}<\infty \) almost surely, so \(\mathbb {E}[N_{z,w}(n,\varepsilon )]\) tends to 0 as \(\varepsilon \rightarrow 0\) and \(n\rightarrow \infty \). Furthermore, the observation \(N_{z,w}(n,\varepsilon ) \le \mathcal {N}_{z,w}\) implies by Theorem 1.3 that \(\mathbb {E}[N_{z,w}(n,\varepsilon )]\) is bounded by \(\nu _{\mathrm {typical}}\log |z-w|^{-1}+\text {const}\) independently of \(n\) and \(\varepsilon \). Since \((z,w)\mapsto \mathbb {E}[N_{z,w}(n,\varepsilon )]g(z)g(w)\) is dominated by the integrable function \((\nu _{\mathrm {typical}}\log |z-w|^{-1}+\text {const})|g(z)g(w)|\), we may apply the reverse Fatou lemma to obtain

$$\begin{aligned} \mathbb {E}[(\langle h,g \rangle -\langle \mathfrak {h},g \rangle )^2]&\le \iint _{D\times D} \limsup _{\varepsilon \rightarrow 0}\limsup _{n\rightarrow \infty } \mathbb {E}[N_{z,w}(n,\varepsilon )]\,g(z)g(w) \,dz\,dw \\&= 0, \end{aligned}$$

which implies

$$\begin{aligned} \langle h,g \rangle =\langle \mathfrak {h},g \rangle \end{aligned}$$
(7.6)

almost surely. The space \(C_c^\infty (\mathbb {C})\) is separable [15, Exercise 1.13.6], which implies that \(C_c^\infty (D)\) is also separable. To see this, consider a given countable dense subset of \(C_c^\infty (\mathbb {C})\). Any sufficiently small neighborhood of a point in \(C_c^\infty (D)\) is open in \(C_c^\infty (\mathbb {C})\), and therefore intersects the countable dense set. Therefore, we may apply (7.6) to a countable dense subset of \(C_c^\infty (D)\) to conclude that \(h = \mathfrak {h}\) almost surely. \(\square \)

8 Further questions

Question 1

Suppose that \(h\) is the nesting field associated with a \({{\mathrm{CLE}}}_\kappa \) process and weight distribution \(\mu \). For each \(\varepsilon > 0\) and \(z \in D\), let \(A_z(\varepsilon )\) be the average of \(h\) on the disk \(B(z,\varepsilon )\). Is it true that the set of extremes of \(A_z(\varepsilon )\), i.e., points where either \(A_z(\varepsilon )\) has unusually slow or fast growth as \(\varepsilon \rightarrow 0\), is the same as that for \(\mathcal {S}_z(\varepsilon )\)?

Question 2

When \(\kappa =4\) and \(\mu \) is the Bernoulli distribution, the nesting field \(h\) is a GFF on \(D\). In this case, it follows from [9] that the underlying \({{\mathrm{CLE}}}_4\) is a deterministic function of \(h\). Does a similar statement hold for \(\kappa \in (8/3,4]\)? For \(\kappa \in (4,8)\), we do not expect this to hold because we do not believe that it is possible to determine the outermost loops of such a \({{\mathrm{CLE}}}_\kappa \) given the union of the outermost loops as a random set. Nevertheless, is the union of all loops, viewed as a subset of \(D\) and its prime ends, determined by the (weighted) nesting field?

Question 3

When \(\kappa =4\) and \(\mu \) is the Bernoulli distribution, then the nesting field is a Gaussian process (in particular, a GFF). We do not expect this to hold with the Bernoulli distribution for any \(\kappa \ne 4\). Do there exist other values of \(\kappa \in (8/3,8)\) and weight distributions \(\mu \) such that the corresponding nesting field is also Gaussian?

Question 4

Does the nesting field in general satisfy a spatial Markov property which is similar to that of the GFF? Is there a type of Markovian characterization for the nesting field which is analogous to that for \({{\mathrm{CLE}}}\) [12, 14]? The existence of a spatial Markov property for the nesting field is natural in view of the conjectured convergence of discrete models which possess a spatial Markov property to \({{\mathrm{CLE}}}_\kappa \).

Question 5

There are several discrete loop models which are known to converge to CLE. Do their nesting fields converge to the nesting field of CLE?