Random walk loop soups and conformal loop ensembles

The random walk loop soup is a Poissonian ensemble of lattice loops; it has been extensively studied because of its connections to the discrete Gaussian free field, but was originally introduced by Lawler and Trujillo Ferreras as a discrete version of the Brownian loop soup of Lawler and Werner, a conformally invariant Poissonian ensemble of planar loops with deep connections to conformal loop ensembles (CLEs) and the Schramm-Loewner evolution (SLE). Lawler and Trujillo Ferreras showed that, roughly speaking, in the continuum scaling limit, ``large'' lattice loops from the random walk loop soup converge to ``large'' loops from the Brownian loop soup. Their results, however, do not extend to clusters of loops, which are interesting because the connection between Brownian loop soup and CLE goes via cluster boundaries. In this paper, we study the scaling limit of clusters of ``large'' lattice loops, showing that they converge to Brownian loop soup clusters. In particular, our results imply that the collection of outer boundaries of outermost clusters composed of ``large'' lattice loops converges to CLE.


Introduction
Several interesting models of statistical mechanics, such as percolation and the Ising and Potts models, can be described in terms of clusters. In two dimensions and at the critical point, the scaling limit geometry of the boundaries of such clusters is known (see [6,7,8,9,23]) or conjectured (see [13,24]) to be described by some member of the one-parameter family of Schramm-Loewner evolutions (SLE κ with κ > 0) and related conformal loop ensembles (CLE κ with 8/3 < κ < 8). What makes SLEs and CLEs natural candidates is their conformal invariance, a property expected of the scaling limit of two-dimensional statistical mechanical models at the critical point. SLEs can be used to describe the scaling limit of single interfaces; CLEs are collections of loops and are therefore suitable to describe the scaling limit of the collection of all macroscopic boundaries at once. For example, the scaling limit of the critical percolation exploration path is SLE 6 [7, 23], and the scaling limit of the collection of all critical percolation interfaces in a bounded domain is CLE 6 [6,8].  For 8/3 < κ ≤ 4, CLE κ can be obtained [22] from the Brownian loop soup, introduced by Lawler and Werner [16] (see Section 2 for a definition), as we explain below. A sample of the Brownian loop soup in a bounded domain D with intensity λ > 0 is the collection of loops contained in D from a Poisson realization of a conformally invariant intensity measure λµ. When λ ≤ 1, the loop soup is composed of disjoint clusters of loops [22] (where a cluster is a maximal collection of loops that intersect each other). When λ > 1, there is a unique cluster [22] and the set of points not surrounded by a loop is totally disconnected (see [1]). Furthermore, when λ ≤ 1, the outer boundaries of the outermost loop soup clusters are distributed like conformal loop ensembles (CLE κ ) [21,22,26] with 8/3 < κ ≤ 4. 
More precisely, if 8/3 < κ ≤ 4, then 0 < (3κ − 8)(6 − κ)/2κ ≤ 1 and the collection of all outer boundaries of the outermost clusters of the Brownian loop soup with intensity λ = (3κ − 8)(6 − κ)/2κ is distributed like CLE κ [22]. For example, the continuum scaling limit of the collection of all macroscopic outer boundaries of critical Ising spin clusters is conjectured to correspond to CLE 3 and to a Brownian loop soup with λ = 1/2.
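The correspondence λ = (3κ − 8)(6 − κ)/2κ between the CLE parameter and the loop soup intensity is easy to tabulate numerically; a minimal sketch (the function name loop_soup_intensity is ours, not from the literature):

```python
def loop_soup_intensity(kappa):
    """Loop soup intensity lambda associated with CLE_kappa,
    via lambda = (3*kappa - 8) * (6 - kappa) / (2*kappa)."""
    assert 8 / 3 < kappa <= 4, "relation holds for 8/3 < kappa <= 4"
    return (3 * kappa - 8) * (6 - kappa) / (2 * kappa)

print(loop_soup_intensity(3))  # Ising case, CLE_3: 0.5
print(loop_soup_intensity(4))  # boundary case, CLE_4: 1.0
```

As a sanity check, λ increases from 0 to 1 as κ ranges over (8/3, 4], matching the regime λ ≤ 1 in which the loop soup has disjoint clusters.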
In [15] Lawler and Trujillo Ferreras introduced the random walk loop soup as a discrete version of the Brownian loop soup, and showed that, under Brownian scaling, it converges in an appropriate sense to the Brownian loop soup. The authors of [15] focused on individual loops, showing that, with probability going to 1 in the scaling limit, there is a one-to-one correspondence between "large" lattice loops from the random walk loop soup and "large" loops from the Brownian loop soup such that corresponding loops are close.
In [17] Le Jan showed that the random walk loop soup has remarkable connections with the discrete Gaussian free field, analogous to Dynkin's isomorphism [10,11] (see also [2]). Such considerations have prompted an extensive analysis of more general versions of the random walk loop soup (see e.g. [18,25]).
As explained above, the connection between the Brownian loop soup and SLE/CLE goes through its loop clusters and their boundaries. In view of this observation, it is interesting to investigate whether the random walk loop soup converges to the Brownian loop soup in terms of loop clusters and their boundaries, not just in terms of individual loops, as established by Lawler and Trujillo Ferreras [15]. This is a natural and nontrivial question, due to the complex geometry of the loops involved and of their mutual overlaps.
In this paper, we consider random walk loop soups from which the "small" loops have been removed and establish convergence of their clusters and boundaries, in the scaling limit, to the clusters and boundaries of corresponding Brownian loop soups (see Figure 1). We use tools ranging from classical Brownian motion techniques to recent loop soup results. Indeed, properties of planar Brownian motion as well as properties of CLEs play an important role in the proofs of our results.
We note that determining whether the convergence of a sequence of sets implies the convergence of their outer boundaries to the outer boundary of the limiting set is nontrivial even for a single loop or path. We establish this result of independent interest, i.e. that the outer boundary of a random walk path/loop converges to the outer boundary of a Brownian path/loop, as an important step towards the proof of our main theorem.

Definitions and main results
The first main result of this paper is the convergence of the outer boundary of a random walk path/loop to the outer boundary of a Brownian path/loop. Let S t , t ∈ {0, 1, 2, . . .}, be a simple random walk on the square lattice Z 2 , with S 0 = 0. We define S t for non-integer times t by linear interpolation. Let B t , t ≥ 0, be a planar Brownian motion started at 0. Let B loop t , 0 ≤ t ≤ 1, be a planar Brownian loop of time length 1 started at 0, i.e. a process which has the same law as the process B t − tB 1 , 0 ≤ t ≤ 1.
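For intuition, the representation B_t − tB_1 gives an immediate way to obtain a discretized Brownian loop from a discretized Brownian motion; a minimal Python sketch (the helper brownian_loop and the grid discretization are ours, not from the paper):

```python
import random

def brownian_loop(n, seed=0):
    """Sample a planar Brownian motion on a grid of n time steps,
    then form the loop B_t - t*B_1, which starts and ends at 0."""
    rng = random.Random(seed)
    dt = 1.0 / n
    b = [(0.0, 0.0)]
    for _ in range(n):
        x, y = b[-1]
        b.append((x + rng.gauss(0.0, dt ** 0.5), y + rng.gauss(0.0, dt ** 0.5)))
    bx1, by1 = b[-1]
    return [(x - (i / n) * bx1, y - (i / n) * by1) for i, (x, y) in enumerate(b)]

loop = brownian_loop(1000)
print(loop[0], loop[-1])  # both endpoints are (0.0, 0.0): the path closes up
```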
For a bounded subset A of the complex plane, we write ExtA for the exterior of A, i.e. the unique unbounded connected component of C \ A. By HullA, we denote the hull of A, which is the complement of ExtA. We write ∂ o A for the topological boundary of ExtA, called the outer boundary of A. Note that ∂A ⊃ ∂ o A = ∂ExtA = ∂HullA. For sets A, A′, the Hausdorff distance between A and A′ is given by d H (A, A′) = inf{δ > 0 : A ⊂ (A′) δ and A′ ⊂ A δ }, where A δ = ⋃ x∈A B(x; δ) with B(x; δ) = {y : |x − y| < δ}. We identify R 2 and C, and we will often identify processes and curves with their range in the complex plane.
Theorem 2.1. As N → ∞, (i) the outer boundary of (N −1 S 2N 2 t , 0 ≤ t ≤ 1) converges in distribution to the outer boundary of (B t , 0 ≤ t ≤ 1), with respect to d H , (ii) the outer boundary of (N −1 S 2N 2 t , 0 ≤ t ≤ 1), conditional on {S 2N 2 = 0}, converges in distribution to the outer boundary of (B loop t , 0 ≤ t ≤ 1), with respect to d H .
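For finite point sets the Hausdorff distance d H reduces to a maximum of two directed sup-inf distances, which is convenient for numerical experiments; a small sketch (the helper hausdorff is ours):

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A and B:
    the larger of the two directed distances sup_a inf_b |a - b|."""
    d = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

print(hausdorff([(0, 0)], [(3, 4)]))          # 5.0
print(hausdorff([(0, 0), (1, 0)], [(0, 0)]))  # 1.0
```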
Note that the scaling for the random walk in Theorem 2.1 is slightly different from the usual Brownian scaling. The factor 2 comes from the fact that a simple random walk on Z 2 has the same law as the product of two independent simple random walks on Z rotated by π/4 and scaled in space by 1/ √ 2. See also (3.1) of [15].
To state the second main result of this paper, we recall the definitions of the Brownian loop soup and the random walk loop soup. A curve γ is a continuous function γ : [0, t γ ] → C, where t γ < ∞ is the time length of γ. A loop is a curve with γ(0) = γ(t γ ). The Brownian bridge measure µ^{z,t_0} is a probability measure on loops, induced by a planar Brownian loop of time length t 0 started at z. The (rooted) Brownian loop measure µ is a measure on loops, given by µ(C) = ∫_C ∫_0^∞ (2πt_0^2)^{−1} µ^{z,t_0}(C) dt_0 dA(z), where the outer integral runs over z in the complex plane, C is a collection of loops and A denotes two-dimensional Lebesgue measure, see Remark 5.28 of [14]. For a domain D let µ D be µ restricted to loops which stay in D. The (rooted) Brownian loop soup with intensity λ ∈ (0, ∞) in D is a Poissonian realization from the measure λµ D . The Brownian loop soup introduced by Lawler and Werner [16] is obtained by forgetting the starting points (roots) of the loops. The geometric properties we study in this paper are the same for both the rooted and the unrooted version of the Brownian loop soup. Let L be a Brownian loop soup with intensity λ in a domain D, and let L t 0 be the collection of loops in L with time length at least t 0 .
The (rooted) random walk loop measure μ̃ is a measure on nearest neighbor loops in Z 2 , which we identify with loops in the complex plane by linear interpolation. For a loop γ̃ in Z 2 , we define μ̃(γ̃) = (1/t γ̃ ) 4^{−t γ̃ }, where t γ̃ is the time length of γ̃, i.e. its number of steps. The (rooted) random walk loop soup with intensity λ is a Poissonian realization from the measure λμ̃. For a domain D and positive integer N , let L̃ N be the collection of loops γ̃ N defined by γ̃ N (t) = N −1 γ̃(2N 2 t), 0 ≤ t ≤ t γ̃ /(2N 2 ), where γ̃ are the loops in a random walk loop soup with intensity λ which stay in N D. Note that the time length of γ̃ N is t γ̃ /(2N 2 ). Let L̃ t 0 N be the collection of loops in L̃ N with time length at least t 0 .
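The random walk loop measure can be probed by brute force for short loops: a toy sketch that enumerates all nearest-neighbour walks of t steps returning to the origin (each such rooted loop of t steps receives mass (1/t)4^{−t}; the helper names are ours):

```python
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def count_rooted_loops(t):
    """Number of t-step nearest-neighbour loops in Z^2 rooted at the origin."""
    count = 0
    for walk in product(STEPS, repeat=t):
        if sum(s[0] for s in walk) == 0 and sum(s[1] for s in walk) == 0:
            count += 1
    return count

# total measure of rooted loops at the origin with at most 6 steps,
# assuming mass (1/t) * 4**(-t) per rooted loop of t steps
mass = sum(count_rooted_loops(t) / t * 4 ** (-t) for t in (2, 4, 6))
print(count_rooted_loops(2), count_rooted_loops(4), count_rooted_loops(6))  # 4 36 400
```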
Let A be a collection of loops in a domain D. A chain of loops is a sequence of loops, where each loop intersects the loop which follows it in the sequence. We call C ⊂ A a subcluster of A if each pair of loops in C is connected via a finite chain of loops from C. We say that C is a finite subcluster if it contains a finite number of loops. A subcluster which is maximal in terms of inclusion is called a cluster. We will often identify a collection of curves C with the set in the plane ⋃ γ∈C γ. A cluster C of A is called outermost if there exists no cluster C′ of A such that C′ ≠ C and HullC ⊂ HullC′. The carpet of A is the set D \ ⋃ C (HullC \ ∂ o C), where the union is over all outermost clusters C of A. For collections of subsets of the plane A, A′, the induced Hausdorff distance is given by d * H (A, A′) = inf{δ > 0 : ∀A ∈ A ∃A′ ∈ A′ such that d H (A, A′) < δ, and ∀A′ ∈ A′ ∃A ∈ A such that d H (A, A′) < δ}.
Corollary 2.3. Let D be a bounded, simply connected domain, take λ ∈ (0, 1] and 16/9 < θ < 2. Let κ ∈ (8/3, 4] be such that λ = (3κ−8)(6−κ)/2κ. As N → ∞, the collection of outer boundaries of all outermost clusters of L̃ N θ−2 N (the loops in L̃ N of time length at least N θ−2 ) converges in distribution to CLE κ , with respect to d * H .
Corollary 2.3 is an immediate consequence of Theorem 2.2 and the loop soup construction of conformal loop ensembles by Sheffield and Werner [22]. Theorem 2.2 has an analogue for the random walk loop soup with killing and the massive Brownian loop soup as defined in [5]. Our proof extends to that case.
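The notion of clusters, maximal families of loops connected by chains of pairwise-intersecting loops, can be mimicked on toy data with a union-find pass; a sketch (loops modelled as sets of lattice points, with shared points standing in for intersections; all names are ours):

```python
def clusters(loops):
    """Partition loops (given as sets of points) into clusters:
    maximal groups connected by chains of pairwise-intersecting loops."""
    parent = list(range(len(loops)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(loops)):
        for j in range(i + 1, len(loops)):
            if loops[i] & loops[j]:        # the two loops intersect
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(loops)):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

toy = [{(0, 0), (1, 0)}, {(1, 0), (2, 0)}, {(5, 5)}]
print(clusters(toy))  # [[0, 1], [2]]: loops 0 and 1 chain together, loop 2 is alone
```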
We conclude this section by giving an outline of the paper and explaining the structure of the proofs of Theorems 2.1 and 2.2. In Section 3 we consider a sequence of (deterministic) curves γ N converging uniformly to a curve γ. In general, the exteriors, outer boundaries and hulls of γ N do not necessarily converge in the Hausdorff distance to the corresponding sets for γ. An example is given in Figure 2; the solid curve represents γ and the dashed curve γ N . The curves γ N and γ are close in the supremum distance, but their outer boundaries are not close in the Hausdorff distance. The reason for this behavior is that two parts of the curve γ touch while the corresponding parts of γ N are separated. We will give a precise definition of a touching, and we prove that if γ has no touchings then the exteriors of γ N converge in the Hausdorff distance to the exterior of γ, and that convergence of exteriors implies convergence of outer boundaries and hulls. In Section 4 we will prove that, almost surely, planar Brownian motion has no touchings. The results from Sections 3 and 4 imply Theorem 2.1.
Figure 3. A cluster whose exterior is not well-approximated by the exterior of any finite subcluster.
The largest part of the proof of Theorem 2.2 is to show that, for large N , with high probability, for each large cluster C of L there exists a cluster C̃ N of L̃ N θ−2 N such that d H (ExtC, ExtC̃ N ) is small. We will prove this fact in three steps.
First, let C be a large cluster of L. We choose a finite subcluster C′ of C such that d H (ExtC, ExtC′) is small. A priori, it is not clear that such a finite subcluster exists. An example is given in Figure 3; the cluster contains two infinite chains of loops which are disjoint and at Euclidean distance zero. In Section 5 we prove, using results from Section 3, that, almost surely, a finite subcluster with the desired property exists.
Second, we approximate the finite subcluster C′ by a finite subcluster C̃′ N of L̃ N θ−2 N . We use Corollary 5.4 of Lawler and Trujillo Ferreras [15], which gives that, with probability tending to 1, there is a one-to-one correspondence between loops in L̃ N θ−2 N and loops in L N θ−2 such that corresponding loops are close. It follows from results in Sections 3 and 4 that d H (ExtC′, ExtC̃′ N ) is small. Third, we let C̃ N be the cluster of L̃ N θ−2 N that contains C̃′ N . In Section 6 we will prove an estimate which implies that, with high probability, for non-intersecting loops in L N θ−2 the corresponding loops in L̃ N θ−2 N do not intersect. We deduce from this that for distinct subclusters C̃′ 1,N and C̃′ 2,N , the corresponding clusters C̃ 1,N and C̃ 2,N are distinct. We use this property to conclude that d H (ExtC, ExtC̃ N ) is small.

Topological conditions for convergence
Let γ N be a sequence of curves converging uniformly to a curve γ, i.e. d ∞ (γ N , γ) → 0 as N → ∞, where d ∞ denotes the supremum distance between curves (after a linear time change so that all curves are parametrized by the same time interval).
The distance d ∞ is a natural distance on the space of curves mentioned in Section 5.1 of [14]. We are interested in identifying topological conditions that, imposed on γ (and γ N ), will yield convergence in the Hausdorff distance of the exteriors, outer boundaries and hulls of γ N to the corresponding sets defined for γ.
Note that, in general, uniform convergence of the curves does not imply convergence of any of these sets. For example, let γ : [0, 1] → C be a loop given by γ(t) = e 2πit . Then Extγ = {x : |x| > 1}, Hullγ = {x : |x| ≤ 1} and ∂ o γ = {x : |x| = 1}. Let γ N : [0, 1] → C be a sequence of curves given by γ N (t) = γ(1/N + t(1 − 1/N )). It is clear that d ∞ (γ N , γ) → 0, but at the same time Extγ N = C \ {x : |x| = 1, 2π/N ≤ arg(x) ≤ 2π}, where arg(x) ∈ (0, 2π] is the principal argument of x. In this case, the outer boundaries converge in the Hausdorff distance but the exteriors and hulls stay at Hausdorff distance 1 from each other. The reason for this behavior is that there exists a pair of times, namely the pair (0, 1), where different parts of the curve γ touch each other while the corresponding parts of the approximating curves γ N are separated. This separation causes the exterior of γ N to reach the origin of the plane, whereas γ disconnects the origin from infinity. We call such a pair of times a touching:
Definition 3.1. We say that a curve γ has a touching (s, t) if 0 ≤ s < t ≤ t γ , γ(s) = γ(t) and there exists δ > 0 such that for all ε ∈ (0, δ), there exists a curve γ′ with , and t − , t + are defined similarly using t instead of s.
Note that in the above example, γ has only one touching. The main result of this section is the following:
Theorem 3.2. Let γ N , γ be curves such that d ∞ (γ N , γ) → 0 as N → ∞, and γ has no touchings. Then, as N → ∞, the exteriors, outer boundaries and hulls of γ N converge in the Hausdorff distance to the exterior, outer boundary and hull of γ, respectively.
To prove the main results of this paper, we will also need to deal with similar convergence issues for sets defined by collections of curves. For two collections of curves C, C′ let d * ∞ (C, C′) = inf{δ > 0 : ∀γ ∈ C ∃γ′ ∈ C′ such that d ∞ (γ, γ′) < δ, and ∀γ′ ∈ C′ ∃γ ∈ C such that d ∞ (γ, γ′) < δ}.
Definition 3.4. We say that a collection of curves has a touching if it contains a curve that has a touching or it contains a pair of distinct curves that have a mutual touching.
The next result is an analog of Theorem 3.2.
Theorem 3.5. Let C N , C be collections of curves such that d * ∞ (C N , C) → 0 as N → ∞, and C contains finitely many curves and C has no touchings. Then, as N → ∞, the exteriors, outer boundaries and hulls of C N converge in the Hausdorff distance to the exterior, outer boundary and hull of C, respectively.
Note that there exist curves γ which have touchings and such that for all sequences γ N with d ∞ (γ N , γ) → 0, the exteriors, outer boundaries and hulls of γ N converge as in Theorem 3.2. A simple example is the curve γ(t) = sin t, t ∈ [0, π], whose range is the real interval [0, 1] and where all pairs of the form (π/2 − s, π/2 + s), s ∈ (0, π/2], create a touching. In the first example of this section, we constructed a sequence γ N converging uniformly to γ, where the hulls and exteriors of γ N did not converge to the hull and exterior of γ, but the outer boundaries converged in the Hausdorff distance to the outer boundary of γ. Consider the sets A N = ∂{x : |x| < 1, arg(x) > 2π/N }. One can find a sequence γ N of curves converging uniformly and such that γ N = A N as sets in the plane. In this case, the exteriors and outer boundaries do not converge, but the hulls converge in the Hausdorff distance to the hull of the limiting curve.
The remainder of this section is devoted to proving Theorems 3.2 and 3.5. We will first identify a general condition for the convergence of exteriors, outer boundaries and hulls in the setting of arbitrary bounded subsets of the plane. Then, we will prove that if a curve does not have any touchings, then this condition is satisfied and hence Theorem 3.2 follows. At the end of the section, we will show how to obtain Theorem 3.5 using similar arguments.
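The operators Ext and Hull used throughout have direct finite analogues: for a finite set of lattice points, the exterior is the unbounded component of the complement, computable by flood fill on a padded bounding box, and the hull is its complement; a toy sketch (the helper hull_points is ours):

```python
from collections import deque

def hull_points(A):
    """Hull of a finite set A of lattice points: the bounding box minus the
    points reachable by 4-connected flood fill from outside (the exterior)."""
    xs = [p[0] for p in A]
    ys = [p[1] for p in A]
    x0, x1 = min(xs) - 1, max(xs) + 1   # pad so the box corner lies in the exterior
    y0, y1 = min(ys) - 1, max(ys) + 1
    blocked = set(A)
    exterior = set()
    queue = deque([(x0, y0)])
    while queue:
        x, y = queue.popleft()
        if (x, y) in exterior or (x, y) in blocked:
            continue
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            continue
        exterior.add((x, y))
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    box = {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}
    return box - exterior

ring = {(x, y) for x in range(3) for y in range(3) if (x, y) != (1, 1)}
print(sorted(hull_points(ring)))  # the full 3x3 square: the hole at (1, 1) is filled in
```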
Note that both in the first example of this section and in the above example in which γ N is constructed via the sets A N , the exterior of γ N extended to a non-negligible distance outside the exterior of γ for all N . The next result states that when this situation does not occur, then the desired convergence follows.
Proposition 3.6. Let A N , A be bounded subsets of the plane such that d H (A N , A) → 0 as N → ∞. Suppose that for every δ > 0 there exists N 0 such that, for all N > N 0 , ExtA N ⊂ (ExtA) δ . Then, as N → ∞, the exteriors, outer boundaries and hulls of A N converge in the Hausdorff distance to the exterior, outer boundary and hull of A, respectively.
We will first prove that one of the inclusions required for the convergence of exteriors is always satisfied under the assumption that d H (A N , A) → 0.
Lemma 3.7. Let A N , A be bounded subsets of the plane such that d H (A N , A) → 0 as N → ∞. Then for every δ > 0 there exists N 0 such that ExtA ⊂ (ExtA N ) δ for all N > N 0 .
Proof. Suppose that the desired inclusion does not hold. This means that there exists δ > 0 such that, after passing to a subsequence, ExtA ⊄ (ExtA N ) δ for all N . This is equivalent to the existence of x N ∈ ExtA, such that d E (x N , ExtA N ) ≥ δ. Since d H (A N , A) → 0 and the sets are bounded, the sequence x N is bounded and we can assume that x N → x, a point in the closure of ExtA, when N → ∞. It follows that for N large enough, d E (x, ExtA N ) > δ/2 and hence B(x; δ/2) does not intersect ExtA N . We will show that this leads to a contradiction. To this end, note that since x belongs to the closure of ExtA, there exists y ∈ ExtA such that |x − y| < δ/4. Furthermore, ExtA is an open connected subset of C, and hence it is path connected. This means that there exists a continuous path connecting y with ∞ which stays within ExtA. We denote by ℘ its range in the complex plane.
and so A N does not intersect ℘. This implies that A N does not disconnect y from ∞. Hence, y ∈ ExtA N and B(x; δ/2) intersects ExtA N for N large enough, which is a contradiction. This completes the proof.
Proof. We start with the first inclusion. From the assumption, it follows is connected and intersects both ExtA and its complement HullA. This We are left with proving the second inclusion. From the assumption, it
Proof of Proposition 3.6. It follows from Lemmas 3.7 and 3.8.
Remark 3.9. In the proof of Theorem 2.2, we will use equivalent formulations of Theorem 3.5 and Lemma 3.7 in terms of metric rather than sequential convergence. The equivalent formulation of Lemma 3.7 is as follows: For any bounded set A and δ > 0, there exists ε > 0 such that if d H (A, A′) < ε, then ExtA ⊂ (ExtA′) δ . The equivalent formulation of Theorem 3.5 is similar.
Without loss of generality, from now until the end of this section, we assume that all curves have time length 1 (this can always be achieved by a linear time change). We say that times s, t ∈ [0, 1] are δ-connected in γ if there exists an open ball B of diameter δ such that γ(s) and γ(t) are connected in γ ∩ B.
Lemma 3.11. Let γ N , γ be curves such that d ∞ (γ N , γ) → 0 as N → ∞, and γ has no touchings. Then for any δ > 0 and s, t which are δ-connected in γ, there exists N 0 such that s, t are 4δ-connected in γ N for all N > N 0 .
Proof. Fix δ > 0. If the diameter of γ is at most δ, then it is enough to Otherwise, let s, t ∈ [0, 1] be δ-connected in γ and let x be such that γ(s) and γ(t) are in the same connected component of γ ∩ B(x; δ/2). We say that Note that if [a, b] defines an excursion, then the diameter of γ[a, b] is at least δ/2. Since γ is uniformly continuous, it follows that there are only finitely many excursions.
. If s, t ∈ I i for some i, then it is enough to take N 0 such that d ∞ (γ N , γ) < δ for N > N 0 , and the claim of the lemma follows. Otherwise, using the fact that γ[I i ] are closed connected sets, one can reorder the intervals in such a way that s ∈ I 1 , t ∈ I l , and , and therefore also in γ N ∩ B(x; 2δ).
Lemma 3.12. If γ is a curve, then there exists a loop whose range is ∂ o γ and whose winding around each point of Hullγ \ ∂ o γ is equal to 2π.
Proof. Let D = {x ∈ C : |x| > 1}. By the proof of Theorem 1.5(ii) of [3], there exists a one-to-one conformal map ϕ from D onto Extγ which extends to a continuous function ϕ : D → Extγ, and such that ϕ[∂D] = ∂ o γ. Let γ r (t) = ϕ(e 2πit (1 + r)) for t ∈ [0, 1] and r ≥ 0. It follows that the range of γ 0 is ∂ o γ. Moreover, since ϕ is one-to-one, γ r is a simple curve for r > 0 and hence its winding around every point of Hullγ \ ∂ o γ is equal to 2π. Since d ∞ (γ 0 , γ r ) → 0 when r → 0, the winding of γ 0 around every point of Hullγ \ ∂ o γ is also equal to 2π.
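The winding numbers appearing in Lemma 3.12 can be checked numerically for polygonal loops: the winding around a point is the accumulated signed change of argument along the loop, and a winding of 2π corresponds to one counterclockwise turn; a small sketch (the helper winding, which returns the number of turns, is ours):

```python
import math

def winding(loop, x):
    """Number of counterclockwise turns of a closed polygonal loop around x,
    i.e. the total signed angle swept, divided by 2*pi."""
    total = 0.0
    for i in range(len(loop)):
        a = complex(*loop[i]) - complex(*x)
        b = complex(*loop[(i + 1) % len(loop)]) - complex(*x)
        w = a.conjugate() * b               # rotation taking a towards b
        total += math.atan2(w.imag, w.real)  # signed angle from a to b
    return round(total / (2 * math.pi))

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # counterclockwise square
print(winding(square, (0, 0)))  # 1: one full turn around an interior point
print(winding(square, (5, 0)))  # 0: no winding around an exterior point
```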
Lemma 3.13. Let γ N , γ be curves. Suppose that for any δ > 0 and s, t which are δ-connected in γ, there exists N 0 such that s, t are 4δ-connected in γ N for all N > N 0 . Then, for every δ > 0, there exists N 0 such that for all N > N 0 , Extγ N ⊂ (Extγ) δ .
Proof. Fix δ > 0. By Lemma 3.12, let γ 0 be a loop whose range is ∂ o γ and whose winding around each point of Hullγ \ ∂ o γ equals 2π. Let 0 = t 0 < t 1 < . . . < t l = 1 be a sequence of times satisfying |γ 0 (t) − γ 0 (t i−1 )| < δ/32 for all t ∈ [t i−1 , t i ], i = 1, . . . , l − 1, and |γ 0 (t) − γ 0 (t l−1 )| < δ/32 for all t ∈ [t l−1 , 1). This is well defined, i.e. l < ∞, since γ 0 is uniformly continuous. Note that t i and t i+1 are δ/8-connected in γ 0 . For each t i , we choose a time τ i , such that γ(τ i ) = γ 0 (t i ) and τ l = τ 0 . It follows that τ i and τ i+1 are δ/8-connected in γ. Let N i be so large that τ i and τ i+1 are δ/2-connected in γ N for all N > N i , and let M = max i N i . The existence of such N i is guaranteed by the assumption of the lemma.
Suppose, to get a contradiction, that there exist N > M and x ∈ Extγ N such that d E (x, Extγ) ≥ δ. Since Extγ N is open and connected, it is path connected and there exists a continuous path ℘ connecting x with ∞ and such that ℘ ⊂ Extγ N .
We will construct a loop γ * which is contained in C \ ℘, and which disconnects x from ∞. This will yield a contradiction. By the definition of M , for i = 0, . . . , l−1, there exists an open ball B i of diameter δ/2, such that γ N (τ i ) and γ N (τ i+1 ) are connected in γ N ∩ B i , and hence also in B i \ ℘. Since the connected components of B i \ ℘ are open, they are path connected and there exists a curve γ * i which starts at γ N (τ i ), ends at γ N (τ i+1 ), and is contained in B i \ ℘. By concatenating these curves, we construct the loop γ * , i.e.
By construction, γ * ⊂ C \ ℘. We will now show that γ * disconnects x from ∞ by proving that its winding around x equals 2π. By the definition Combining these two facts, we conclude that d ∞ (γ 0 , γ * ) < 5δ/8. Since the winding of γ 0 around every point of Hullγ \∂ o γ is equal to 2π, and since x ∈ Hullγ and d E (x, γ 0 ) ≥ δ, the winding of γ * around x is also equal to 2π. This means that γ * disconnects x from ∞, and hence ℘ ∩ γ * ≠ ∅, which is a contradiction.
Proof of Theorem 3.2. It is enough to use Proposition 3.6, Lemma 3.11 and Lemma 3.13.
Proof of Theorem 3.5. Without loss of generality, we can assume that C is connected. Otherwise, there is a minimal Euclidean distance ε > 0 between the connected components of C and if d * ∞ (C N , C) < ε/2, then we can consider each component separately. If a connected collection of curves contains a curve whose image consists of one point, then this curve creates a touching with some other curve in this collection. Hence, all curves in C have positive diameter.
We want to follow the same steps as in the proof of Theorem 3.2. In Lemmas 3.11 and 3.13, we used the fact that γ and the approximating sets γ N were given by functions defined on the same space (in this case, [0, 1]). This was used to define the notion of δ-connected times s, t ∈ [0, 1] in γ and γ N . We introduce a similar structure for C and C N . Let γ 1 , γ 2 , . . . , γ k be an enumeration of the curves in C. One can hence treat C as a function from I = {1, . . . , k} × [0, 1] to C by setting C(s) = γ i (s) for s = (i, s). One can similarly treat C N as a function from I to C by setting C N (s) = γ i N (s) for s = (i, s). We say that s, t ∈ I are δ-connected in C if there exists an open ball B of diameter δ, such that C(s) and C(t) are connected in C ∩ B. We do the same for C N .
Since d * ∞ (C N , C) → 0, we have that d H (C N , C) → 0, and by Proposition 3.6, it is enough to prove that for all δ > 0, there exists N 0 such that ExtC N ⊂ (ExtC) δ for all N > N 0 . Let C ′ N ⊂ C N be the collection of curves in C N corresponding to the curves in C (such curves exist for N large, since d * ∞ (C N , C) → 0). Since C ′ N ⊂ C N , we have that ExtC N ⊂ ExtC ′ N , and it is enough to prove that there exists N 0 such that ExtC ′ N ⊂ (ExtC) δ for all N > N 0 . With the above definition of δ-connected s, t ∈ I in C and C N , one can prove an analog of Lemma 3.11 with curves replaced by collections of curves (and using d * ∞ instead of d ∞ ). The proof is very similar since the intersection of C with a ball of radius smaller than the smallest radius of a curve in C looks like an intersection of a single curve with the ball. Moreover, if N is large enough, the intersection of the ball with C N also looks like an intersection with just one approximating curve. The arguments in the proof of Lemma 3.11 are local and are not affected by the connectivity properties of the sets considered outside the ball. Hence, the same arguments apply in the case of collections of curves (with the additional use of mutual touchings since two different parts of the intersection with the ball may come from two different curves) and we can conclude that for all s, t which are δ-connected in C, there exists N 0 such that s, t are 4δ-connected in C N for all N > N 0 .
Since C is connected and composed of finitely many curves, there exists a curve whose range is C. Hence, by Lemma 3.12, the outer boundary of C is given by a continuous curve. We finish the proof by noticing that the proof of Lemma 3.13 translates verbatim to the setting of collections of curves.

No touchings
Recall the definitions of touching, Definitions 3.1, 3.3 and 3.4. The results of this section are the following.
Theorem 4.1. Almost surely, planar Brownian motion has no touchings.
Corollary 4.2. Almost surely, the planar Brownian loop has no touchings.
Theorem 4.3. Let L be a Brownian loop soup in a bounded domain. Almost surely, L has no touchings.
We will first prove Theorem 4.1, and then we prove Corollary 4.2 and Theorem 4.3, using Theorem 4.1. We start by giving a sketch of the proof of Theorem 4.1. We define excursions of the planar Brownian motion B from the boundary of a disk which stay in the disk. Each of these excursions has, up to a rescaling in space and time, the same law as a process W which we define below. We show that the process W possesses a particular property, see Lemma 4.7 below. If B had a touching, it would follow that the excursions of B had a behavior that is incompatible with this particular property of the process W .
To define the process W , we recall some facts about the three-dimensional Bessel process and its relation with Brownian motion, see e.g. Lemma 1 of [4] and the references therein. The three-dimensional Bessel process can be defined as the modulus of a three-dimensional Brownian motion.
Lemma 4.4. Let X t be a one-dimensional Brownian motion starting at 0 and Y t a three-dimensional Bessel process starting at 0. Let 0 < a < a and define τ = T a (X) = inf{t ≥ 0 : |X t | = a}, τ = T a (X), σ = sup{t < τ : X t = 0}, ρ = T a (Y ) and ρ = T a (Y ). Then, the same law, (ii) the process (Y ρ+u , 0 ≤ u ≤ ρ − ρ) has the same law as the process The following lemma is the skew-product representation of planar Brownian motion, see e.g. Theorem 7.26 of [19]. This is a consequence of conformal invariance of planar Brownian motion, see e.g. Theorem 7.20 of [19].
Lemma 4.5 (Skew-product representation). Let B t be a planar Brownian motion starting at 1. There exist two independent one-dimensional Brownian motions X 1 t and X 2 t starting at 0 such that We define the process W t as follows. Let X t be a one-dimensional Brownian motion starting according to some distribution on [0, 2π). Let Y t be a three-dimensional Bessel process starting at 0, independent of X t . Define Let B t be a planar Brownian motion starting at 0, independent of X t and Y t , and define Note that W t starts on the unit circle, stays in the unit disk and is stopped when it hits the unit circle again.
Next we derive the property of W which we will use in the proof of Theorem 4.1. For this, we need the following property of planar Brownian motion:
Lemma 4.6. Let B be a planar Brownian motion started at 0 and stopped when it hits the unit circle. Almost surely, there exists ε > 0 such that for all curves γ with d ∞ (γ, B) < ε we have that γ disconnects ∂B(0; ε) from ∂B(0; 1).
Proof. We construct the event E ε , for 0 < ε ≤ 1/7, illustrated in Figure 4. Loosely speaking, E ε is the event that B disconnects 0 from the unit circle in a strong sense, by crossing an annulus centered at 0 and winding around twice in this annulus. Let where arg is the continuous determination of the angle. Let Define the event E ε by By construction, if E ε occurs then for all curves γ with d ∞ (γ, B) < ε we have that γ disconnects ∂B(0; 2ε) from ∂B(0; 6ε). It remains to prove that almost surely E ε occurs for some ε. By scale invariance of Brownian motion, P(E ε ) does not depend on ε, and it is obvious that P(E ε ) > 0. Furthermore, the events E 1/7 n , n ∈ N, are independent. Hence almost surely E ε occurs for some ε.
Lemma 4.7. Let γ : [0, 1] → C be a curve with |γ(0)| = |γ(1)| = 1 and |γ(t)| < 1 for all t ∈ (0, 1). Let W denote the process defined after Lemma 4.5 and assume that W 0 ∉ {γ(0), γ(1)} a.s. Then the intersection of the following two events has probability 0: Proof. The idea of the proof is as follows. We run the process W t till it hits ∂B(0; a), where a < 1 is close to 1. From that point the process is distributed as a conditional Brownian motion. We run the Brownian motion till it hits the trace of the curve γ. From that point the Brownian motion winds around such that the event (ii) cannot occur, by Lemma 4.6.
Let T a (W ) = inf{t ≥ 0 : |W t | = a} and let P be the law of W Ta(W ) . Let B t be a planar Brownian motion with starting point distributed according to the law P and stopped when it hits the unit circle. Let τ = inf{t > 0 : |W t | = 1}. By Lemma 4.4 and the skew-product representation, if a ∈ (1/2, 1), the process has the same law as where T 1 (B) = inf{t ≥ 0 : |B t | = 1}. Let E 1 , E 2 be similar to the events (i) and (ii), respectively, from the statement of the lemma, but with B instead of W , i.e.
Let T γ (W ) = inf{t ≥ 0 : W t ∈ γ} be the first time W t hits the trace of the curve γ.
The probability of the intersection of the events (i) and (ii) from the statement of the lemma is bounded above by The second term in (4.1) converges to 0 as a → 1, by the assumption that W 0 ∉ {γ(0), γ(1)} a.s. The first term in (4.1) is equal to 0. This follows from (4.2), which we prove below, using Lemma 4.6.
To prove (4.2) note that T γ (B) is a stopping time and hence, by the strong Markov property, B t , t ≥ T γ (B), is a Brownian motion. Therefore, by translation and scale invariance, we can apply Lemma 4.6 to B t started at time T γ (B) and stopped when it hits the boundary of the ball centered at B Tγ (B) with radius δ. It follows that (4.2) holds.
Proof of Theorem 4.1. Since B t ≠ B 0 for all t ∈ (0, 1] a.s., we have that (0, t) is not a touching for all t ∈ (0, 1] a.s. By time inversion, B 1 − B 1−u , 0 ≤ u ≤ 1, is a planar Brownian motion and hence (s, 1) is not a touching for all s ∈ [0, 1) a.s. For every touching (s, t) with 0 < s < t < 1 there exists δ 0 > 0 such that for all δ ≤ δ 0 we have that (s, t) is a δ-touching a.s. (A touching (s, t) that is not a δ-touching for any δ > 0 could only exist if B u = B 0 for all u ∈ [0, s] or B u = B 1 for all u ∈ [t, 1].) We prove that for every δ > 0 we have almost surely, B has no δ-touchings (s, t) with 0 < s < t < 1. (4.3)
By letting δ → 0 it follows that B has no touchings a.s. To prove (4.3), fix δ > 0 and let z ∈ C. We define excursions W^n, for n ∈ N, of the Brownian motion B as follows. Let τ_0 = inf{u ≥ 0 : |B_u − z| = 2δ/3}, and define the times ρ_n, σ_n, τ_n for n ≥ 1. Note that ρ_n < σ_n < τ_n < ρ_{n+1} and that ρ_n, σ_n, τ_n may be infinite. The reason that we take 2δ/3 instead of δ is that we will consider δ-touchings (s, t) not only with B_s = z but also with |B_s − z| < δ/3. We define the excursion W^n by W^n_u = B_u, ρ_n ≤ u ≤ τ_n.
Observe that W^n has, up to a rescaling in space and time and a translation, the same law as the process W defined after Lemma 4.5. This follows from Lemma 4.4, the skew-product representation and Brownian scaling. If B has a δ-touching (s, t) with |B_s − z| < δ/3, then there exist m ≠ n such that (i) W^m ∩ W^n ≠ ∅, and (ii) for all ε > 0 there exist curves γ_m, γ_n such that d_∞(γ_m, W^m) < ε, d_∞(γ_n, W^n) < ε and γ_m ∩ γ_n = ∅.
By Lemma 4.7, with W^m playing the role of W and W^n the role of γ, for each m, n with m ≠ n the intersection of the events (i) and (ii) has probability 0. Here we use the fact that W^m_{ρ_m} ∉ {W^n_{ρ_n}, W^n_{τ_n}} a.s. Hence B has no δ-touchings (s, t) with |B_s − z| < δ/3 a.s. We can cover the plane with a countable number of balls of radius δ/3, and hence B has no δ-touchings a.s.
Proof of Theorem 4.3. It follows from Corollary 4.2 and the fact that there are countably many loops in L that a.s. no loop in L has touchings. We prove that a.s. no pair of loops in L has mutual touchings. Discover the loops in L one by one in decreasing order of their diameter, similarly to the construction in Section 4.3 of [20]. Given a set of discovered loops γ_1, . . . , γ_{k−1}, we show that the next loop γ_k and the discovered loop γ have no mutual touchings a.s., for every γ ∈ {γ_1, . . . , γ_{k−1}}. Note that, because of the conditioning, we can treat γ as a deterministic loop. Let B^loop denote the (random) loop γ_k. Let t_γ, t_B be the time lengths of γ, B^loop, respectively.
For any u_0 ∈ (0, t_B), the laws of the processes B^loop_u, 0 ≤ u ≤ u_0, and B_u, 0 ≤ u ≤ u_0, are mutually absolutely continuous, see e.g. Exercise 1.5(b) of [19]. Hence it suffices to prove that γ, B have no mutual touchings (s, t) with 0 < s < t_γ and 0 < t < t_B a.s.
It follows from (4.4) that a.s. γ, B have no mutual touchings (s, t) with 0 < s < t_γ and 0 < t < t_B. The proof of (4.4) is similar to the proof of Theorem 4.1 and therefore we omit the details: we define excursions of γ and B and apply Lemma 4.7. Note that Lemma 4.7 is stated in terms of a deterministic curve γ and the random process W, and hence the lemma is applicable in the current setting.

Finite approximation of a Brownian loop soup cluster
Let L be a Brownian loop soup with intensity λ ∈ (0, 1] in a bounded, simply connected domain D. The following theorem is the main result of this section.
Theorem 5.1. Almost surely, for any cluster C of L, there exists a sequence of finite subclusters C_N of C such that d_H(C_N, C) → 0 and d_H(Ext C_N, Ext C) → 0 as N → ∞.

We will need the following result (Lemma 5.2).

Proof. This follows from the proof of Lemma 9.7 in [22]. Note that in [22] a cluster C is replaced by the collection of simple loops η given by the outer boundaries of the loops γ ∈ C. However, the same argument works also for C and the loops γ.
To prove Theorem 5.1, we will show that the sequence constructed in Lemma 5.2 satisfies the conditions of Lemma 3.13. Then, using Proposition 3.6 and Lemma 3.13, we obtain Theorem 5.1. We will first prove some necessary lemmas.
Lemma 5.3. Almost surely, for all γ ∈ L and all subclusters C of L such that γ does not intersect C, it holds that d E (γ, C) > 0.
Proof. Fix k and let γ_k be the loop in L with the k-th largest diameter. Using an argument similar to that in Lemma 9.2 of [22], one can prove that, conditionally on γ_k, the loops in L which do not intersect γ_k are distributed like L(D \ γ_k), i.e. a Brownian loop soup in D \ γ_k. Moreover, L(D \ γ_k) consists of a countable collection of disjoint loop soups, one for each connected component of D \ γ_k. By conformal invariance, each of these loop soups is distributed like a conformal image of a copy of L. Hence, by Lemma 9.4 of [22], almost surely each cluster of L(D \ γ_k) is at positive distance from γ_k. This implies that the unconditional probability that there exists a subcluster C such that d_E(γ_k, C) = 0 and γ_k does not intersect C is zero. Since k was arbitrary and there are countably many loops in L, the claim of the lemma follows.

Proof. This follows from Lemma 9.4 of [22], the restriction property of the Brownian loop soup, conformal invariance and the fact that we consider a countable number of balls.
Lemma 5.5. Almost surely, for every δ > 0 there exists t 0 > 0 such that every subcluster of L with diameter larger than δ contains a loop of time length larger than t 0 .
Proof. Let δ > 0 and suppose that for all t 0 > 0 there exists a subcluster of diameter larger than δ containing only loops of time length less than t 0 .
Let t_1 = 1 and let C_1 be a subcluster of diameter larger than δ containing only loops of time length less than t_1. By the definition of a subcluster there exists a finite chain of loops C′_1 which is a subcluster of C_1 and has diameter larger than δ. Let t_2 = min{t_γ : γ ∈ C′_1}, where t_γ is the time length of γ. Let C_2 be a subcluster of diameter larger than δ containing only loops of time length less than t_2. By the definition of a subcluster there exists a finite chain of loops C′_2 which is a subcluster of C_2 and has diameter larger than δ. Note that by construction the chains of loops C′_1 and C′_2 are disjoint as collections of loops, i.e. γ_1 ≠ γ_2 for all γ_1 ∈ C′_1, γ_2 ∈ C′_2. Iterating the construction gives infinitely many chains of loops C′_i which are disjoint as collections of loops and which have diameter larger than δ.
For each chain of loops C′_i take a point z_i ∈ C′_i, where C′_i is viewed as a subset of the complex plane. Since the domain is bounded, the sequence z_i has an accumulation point, say z. Let z′ have rational coordinates and let δ′ be a rational number such that |z − z′| < δ/8 and |δ − δ′| < δ/8. The annulus centered at z′ with inner radius δ′/4 and outer radius δ′/2 is crossed by infinitely many chains of loops which are disjoint as collections of loops. However, the latter event has probability 0 by Lemma 9.6 of [22] and its consequence, leading to a contradiction.
Proof of Theorem 5.1. We restrict our attention to the event of probability 1 on which the claims of Lemmas 5.2, 5.3, 5.4 and 5.5 hold true, and on which there are only finitely many loops of diameter or time length larger than any positive threshold. Fix a realization of L and a cluster C of L. Take the subclusters C_N defined for C as in Lemma 5.2. By Proposition 3.6 and Lemma 3.13, it is enough to prove the following condition: for all δ > 0 and s, t ∈ [0, 1] which are δ-connected, there exists N_0 such that s, t are 4δ-connected for all N > N_0.
To this end, take δ > 0 and s, t such that the point corresponding to s is connected to the point corresponding to t within B(x, δ/2) for some x. Take x′ with rational coordinates and δ′ rational such that B(x, δ/2) ⊂ B(x′; δ′). If C ⊂ B(x′; δ′) then the connection takes place inside C for all N and we are done. Hence, we can assume that C intersects ∂B(x′; δ′). When intersected with B(x′; δ′), each loop γ ∈ C may split into multiple connected components. We call each such component of γ ∩ B(x′; δ′) a piece of γ. In particular, if γ ⊂ B(x′; δ′), then the only piece of γ is the full loop γ. The collection of all pieces we consider is given by {℘ : ℘ is a piece of γ for some γ ∈ C}. A chain of pieces is a sequence of pieces such that each piece intersects the next piece in the sequence. Two pieces are in the same cluster of pieces if they are connected via a finite chain of pieces. We identify a collection of pieces with the set in the plane given by the union of the pieces. Note that there are only finitely many pieces of diameter larger than any positive threshold, since the number of loops of diameter larger than any positive threshold is finite and each loop is uniformly continuous.
Let C*_1, C*_2, . . . be the clusters of pieces satisfying (5.2). We will see later in the proof that the number of such clusters of pieces is finite, but we do not need this fact yet. We now prove (5.3).

To this end, suppose that (5.3) is false, and let z be a point witnessing the failure of (5.3).
First assume that z ∈ C*_i. Then, by the definition of clusters of pieces, z ∉ C*_j. It follows that C*_j contains a chain of infinitely many different pieces which has z as an accumulation point. Since there are only finitely many pieces of diameter larger than any positive threshold, the diameters of the pieces in this chain approach 0. Since d_E(z, ∂B(x′; δ′)) > δ′/2, the pieces become full loops at some point in the chain. Let γ ∈ C be such that z ∈ γ. It follows that there exists a subcluster of loops of C which does not contain γ and has z as an accumulation point. This contradicts the claim of Lemma 5.3, and therefore it cannot be the case that z ∈ C*_i. Second, assume that z ∉ C*_i and z ∉ C*_j. By the same argument as in the previous paragraph, there exist two chains of loops of C which are disjoint, contained in B(x′; δ′) and both of which have z as an accumulation point. These two chains belong to two different clusters of L restricted to B(x′; δ′). Since x′ and δ′ are rational, this contradicts the claim of Lemma 5.4, and hence it cannot be the case that z ∉ C*_i and z ∉ C*_j. This completes the proof of (5.3).
We now define a particular collection of pieces P. By Lemma 5.5, let t_0 > 0 be such that every subcluster of L of diameter larger than δ′/4 contains a loop of time length larger than t_0. Let P be the collection of pieces which have diameter larger than δ′/4 or are full loops of time length larger than t_0. Note that P is finite. Each chain of pieces which intersects both B(x′; δ′/2) and ∂B(x′; δ′) contains a piece of diameter larger than δ′/4 intersecting ∂B(x′; δ′), or contains a chain of full loops which intersects both B(x′; δ′/2) and ∂B(x′; 3δ′/4). In the latter case it contains a subcluster of L of diameter larger than δ′/4 and therefore a full loop of time length larger than t_0. Hence, each chain of pieces which intersects both B(x′; δ′/2) and ∂B(x′; δ′) contains an element of P. Since P is finite, it follows that the number of clusters of pieces C*_i satisfying (5.2) is finite.
Since the range of the parametrization is C and the number of clusters of pieces C*_i is finite, the points corresponding to s and t lie in one cluster of pieces C*_i and in no C*_j with j ≠ i. Since C_N is a finite subcluster of C, it also follows that there are finite chains of pieces G*_N(s), G*_N(t) ⊂ C*_i ∩ C_N (not necessarily distinct) which connect the points of C_N corresponding to s and t, respectively, to ∂B(x′; δ′).
Since G*_N(s), G*_N(t) intersect both B(x′; δ′/2) and ∂B(x′; δ′), they both contain an element of P. Moreover, P is finite, any two elements of C*_i are connected via a finite chain of pieces, and C_N increases to the full cluster C. Hence, all elements of C*_i ∩ P are connected to each other in C*_i ∩ C_N for N sufficiently large. It follows that G*_N(s) is connected to G*_N(t) in C*_i ∩ C_N for N sufficiently large. Hence, the points corresponding to s and t are connected in C_N ∩ B(x′; δ′) for N sufficiently large. This implies that s, t are 4δ-connected for N sufficiently large.

Distance between Brownian loops
In this section we give two estimates: on the Euclidean distance between non-intersecting loops in the Brownian loop soup, and on the overlap between intersecting loops in the Brownian loop soup. We will only use the first estimate in the proof of Theorem 2.2. As a corollary to the two estimates, we obtain a one-to-one correspondence between clusters composed of "large" loops from the random walk loop soup and clusters composed of "large" loops from the Brownian loop soup. This is an extension of Corollary 5.4 of [15]. For intersecting loops γ_1, γ_2 we define their overlap by

overlap(γ_1, γ_2) = 2 sup{ε ≥ 0 : for all loops γ′_1, γ′_2 such that d_∞(γ_1, γ′_1) ≤ ε and d_∞(γ_2, γ′_2) ≤ ε, we have that γ′_1 ∩ γ′_2 ≠ ∅}.

Proposition 6.1. Let L be a Brownian loop soup with intensity λ ∈ (0, ∞) in a bounded, simply connected domain D. Let c > 0 and 16/9 < θ < 2. For all non-intersecting loops γ, γ′ ∈ L of time length at least N^{θ−2} we have that d_E(γ, γ′) ≥ cN^{−1} log N, with probability tending to 1 as N → ∞.

Proposition 6.2. Let L be a Brownian loop soup with intensity λ ∈ (0, ∞) in a bounded, simply connected domain D. Let c > 0 and θ < 2 sufficiently close to 2. For all intersecting loops γ, γ′ ∈ L of time length at least N^{θ−2} we have that overlap(γ, γ′) ≥ cN^{−1} log N, with probability tending to 1 as N → ∞.

Corollary 6.3. Let D be a bounded, simply connected domain, take λ ∈ (0, ∞) and θ < 2 sufficiently close to 2. Let L, L^{N^{θ−2}}, L̃_N, L̃_N^{N^{θ−2}} be defined as in Section 2. For every N we can define L̃_N and L on the same probability space in such a way that the following holds with probability tending to 1 as N → ∞.
There is a one-to-one correspondence between the clusters of L̃_N^{N^{θ−2}} and the clusters of L^{N^{θ−2}} such that for corresponding clusters, C̃ ⊂ L̃_N^{N^{θ−2}} and C ⊂ L^{N^{θ−2}}, there is a one-to-one correspondence between the loops in C̃ and the loops in C such that for corresponding loops, γ̃ ∈ C̃ and γ ∈ C, we have that d_∞(γ, γ̃) ≤ cN^{−1} log N, for some constant c which does not depend on N.
Proof. Let the constant c above be two times the constant in Corollary 5.4 of [15]. Combine that corollary with Propositions 6.1 and 6.2, taking the constant c in Propositions 6.1 and 6.2 equal to six times the constant in Corollary 5.4 of [15].
In Propositions 6.1 and 6.2 and Corollary 6.3, the probability tends to 1 as a power of N; this can be seen from the proofs. We will use Proposition 6.1, but we will not use Proposition 6.2, in the proof of Theorem 2.2. Because of this, and because the proofs of Propositions 6.1 and 6.2 are based on similar techniques, we omit the proof of Proposition 6.2. To prove Proposition 6.1, we first prove two lemmas.

Lemma 6.4. Let B be a planar Brownian motion and let B^{loop,t_0} be a planar Brownian loop with time length t_0. There exist c_1, c_2 > 0 such that (6.1) and (6.2) hold for all 0 < δ < δ′ and all N ≥ 1.

Proof. First we prove (6.2). By Brownian scaling, the quantity in (6.2) can be expressed in terms of sup_{t∈[0,1]} |X^{loop}_t|, where X^{loop}_t is a one-dimensional Brownian bridge starting at 0 with time length 1. The distribution of sup_{t∈[0,1]} |X^{loop}_t| is the asymptotic distribution of the (scaled) Kolmogorov–Smirnov statistic, and we can write the required bound, see e.g. Theorem 1 of [12], for some constant c and all 0 < δ < δ′ and all N ≥ 1. This proves (6.2).
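For concreteness, the asymptotic Kolmogorov–Smirnov distribution mentioned here is explicit; this classical formula is, in our reading, what Theorem 1 of [12] provides. For x > 0,

```latex
\mathbb{P}\Bigl(\,\sup_{0\le t\le 1}\bigl\lvert X^{\mathrm{loop}}_t\bigr\rvert > x\Bigr)
\;=\; 2\sum_{k=1}^{\infty}(-1)^{k-1}e^{-2k^{2}x^{2}}
\;\le\; 2e^{-2x^{2}},
```

where the inequality holds because the alternating series is dominated by its first term. Thus the supremum of a one-dimensional Brownian bridge has Gaussian tails, which is the feature needed for a bound of the form (6.2).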
Lemma 6.5. There exist c_1, c_2 > 0 such that the following holds. Let c > 0 and 0 < δ < δ′ < 2. Let γ be a (deterministic) loop with diam γ ≥ N^{−δ′/2}. Let B^{loop,t_0} be a planar Brownian loop starting at 0 of time length t_0 ≥ N^{−δ}. Then the stated bound holds for all N > 1.

Proof. We use some ideas from the proof of Proposition 5.1 of [15]. By time reversal we obtain (6.5), where B is a planar Brownian motion starting at 0. The equality (6.5) follows from the relation between the law µ^{loop}_{0,t_0} of (B^{loop,t_0}_t, 0 ≤ t ≤ t_0) and the law µ_{0,t_0} of (B_t, 0 ≤ t ≤ t_0); see Section 5.2 of [14] and Section 3.1.1 of [16].
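Concretely, the relation alluded to here is presumably the standard heat-kernel absolute-continuity relation between the Brownian bridge (loop) and free Brownian motion: for 0 < s < t_0, and with p_t the planar heat kernel,

```latex
\frac{\mathrm{d}\mu^{\mathrm{loop}}_{0,t_0}}{\mathrm{d}\mu_{0,t_0}}\bigg|_{\mathcal{F}_s}
\;=\;
\frac{p_{t_0-s}(B_s,0)}{p_{t_0}(0,0)},
\qquad
p_t(x,y)\;=\;\frac{1}{2\pi t}\,e^{-\lvert x-y\rvert^{2}/2t}.
```

In particular, for s = (3/4)t_0 the density is at most p_{t_0/4}(B_s, 0)/p_{t_0}(0,0) ≤ 4, so an event measurable up to time (3/4)t_0 has loop probability at most 4 times its Brownian probability.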
Next we bound the probability (6.6): P(d_E(B[0, (1/2)t_0], γ) ≤ cN^{−1} log N, B[0, (3/4)t_0] ∩ γ = ∅). If the event in (6.6) occurs, then B_t hits the cN^{−1} log N neighborhood of γ before time (1/2)t_0, say at the point x. From that moment, in the next (1/4)t_0 time span, B_t either stays within a ball containing x (to be defined below) or exits this ball without touching γ. Hence, using the strong Markov property, (6.6) is bounded above by the sum in (6.7), where B^x is a planar Brownian motion starting at x and τ^x_y is the exit time of B^x from the ball B(y; (1/4)N^{−δ′/2}). To bound the second term in (6.7), recall that diam γ ≥ N^{−δ′/2}, so γ intersects both the center and the boundary of the ball B(y; (1/4)N^{−δ′/2}). Hence we can apply the Beurling estimate (see e.g. Theorem 3.76 of [14]) to obtain the upper bound (6.8) for the second term in (6.7), for some constant c_1 > 1 which in particular does not depend on the curve γ. The above reasoning leading to (6.8) holds if cN^{−1} log N < (1/4)N^{−δ′/2}, and hence for large enough N. If N is small, then the bound (6.8) is larger than 1 and holds trivially. To bound the first term in (6.7) we use Lemma 6.4, for some constants c_2, c_3 > 0.
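For reference, one common formulation of the Beurling estimate (this is the content of Theorem 3.76 of [14], up to the choice of constants) reads: there exists an absolute constant c_B < ∞ such that, for any curve γ with γ(0) = 0 and |γ(1)| = 1 and any starting point x with |x| < 1,

```latex
\mathbb{P}^{x}\bigl(B[0,\tau_{\mathbb{D}}]\cap \gamma[0,1] = \emptyset\bigr)
\;\le\; c_B\,\lvert x\rvert^{1/2},
```

where τ_𝔻 is the exit time of the unit disk. By Brownian scaling, for a ball of radius r the bound becomes c_B (d/r)^{1/2} when the motion starts at distance d from the center. Applied with r of order N^{−δ′/2} and d ≤ cN^{−1} log N, this yields a bound of order N^{(δ′/2−1)/2} (log N)^{1/2}, which is the shape of bound one expects in (6.8).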
We have (6.9). The first inequality in (6.9) follows from the Markov property of Brownian motion. The equality in (6.9) follows from the fact that B^x_{(1/4)t_0} is a two-dimensional Gaussian random vector centered at x. By combining (6.5), the bound on (6.6), and (6.9), we conclude the proof of the lemma.

Proof of Proposition 6.1. Let 2 − θ =: δ < δ′ < 2 and let X_N be the number of loops in L of time length at least N^{−δ′}. First, we give an upper bound on X_N. Note that X_N is stochastically dominated by the number of loops γ in a Brownian loop soup in the full plane C with t_γ ≥ N^{−δ′} and γ(0) ∈ D. The latter random variable has the Poisson distribution with the mean given in the display, where A denotes two-dimensional Lebesgue measure. By Chebyshev's inequality, X_N ≤ N^{δ′} log N with probability tending to 1 as N → ∞.
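Assuming the standard normalization of the Brownian loop measure in the plane (see [16]), under which the root points and time lengths of the loops form a Poisson process with intensity λ (2πt²)^{−1} dt dA(z), the mean referred to above should be

```latex
\lambda \int_{D}\int_{N^{-\delta'}}^{\infty}\frac{1}{2\pi t^{2}}\,\mathrm{d}t\,\mathrm{d}A(z)
\;=\;\frac{\lambda\,A(D)}{2\pi}\,N^{\delta'} .
```

Since a Poisson random variable has variance equal to its mean, Chebyshev's inequality then indeed gives X_N ≤ N^{δ′} log N with probability tending to 1 as N → ∞.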
Second, we bound the probability that L contains loops of large time length but small diameter. By Lemma 6.4, this probability is bounded by (6.10), for some constants c_1, c_2 > 0. The expression (6.10) converges to 0 as N → ∞.
Third, we prove the proposition. To this end, we discover the loops in L one by one in decreasing order of their time length, similarly to the construction in Section 4.3 of [20]. This exploration can be done in the following way. Let L 1 , L 2 , . . . be a sequence of independent Brownian loop soups with intensity λ in D. From L 1 take the loop γ 1 with the largest time length. From L 2 take the loop γ 2 with the largest time length smaller than t γ 1 . Iterating this procedure yields a random collection of loops {γ 1 , γ 2 , . . .}, which is such that t γ 1 > t γ 2 > · · · a.s. By properties of Poisson point processes, {γ 1 , γ 2 , . . .} is a Brownian loop soup with intensity λ in D.
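The exploration just described can be illustrated with a toy simulation. The sketch below is ours, not the paper's: we track only the time lengths of the loops, which (with the normalization of [16], ignoring the spatial component and imposing a cutoff t_min so the procedure terminates) form a Poisson process on (t_min, ∞) with intensity c/t² dt; the constant c, the cutoff and all function names are assumptions made for the illustration.

```python
import math
import random

def sample_poisson(mean, rng):
    """Poisson sample via Knuth's multiplicative method (fine for moderate means)."""
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_time_lengths(c, t_min, rng):
    """Time lengths >= t_min of a toy loop soup: a Poisson process on
    (t_min, oo) with intensity c / t**2 dt, whose total mass is c / t_min."""
    n = sample_poisson(c / t_min, rng)
    # Inverse-CDF sampling for density proportional to 1/t^2 on (t_min, oo):
    # if u is uniform on (0, 1], then t_min / u has the right law.
    return sorted((t_min / (1.0 - rng.random()) for _ in range(n)), reverse=True)

def explore(c, t_min, rng):
    """Discover loops one by one in decreasing order of time length: from the
    i-th independent soup take the largest time length smaller than the
    previously discovered one, as in the construction described in the text."""
    discovered, bound = [], float("inf")
    while True:
        soup = sample_time_lengths(c, t_min, rng)  # fresh independent soup
        candidates = [t for t in soup if t < bound]
        if not candidates:  # stopping rule imposed by the cutoff t_min
            return discovered
        bound = max(candidates)
        discovered.append(bound)

rng = random.Random(0)
ts = explore(1.0, 0.01, rng)
assert all(a > b for a, b in zip(ts, ts[1:]))  # strictly decreasing
assert all(t >= 0.01 for t in ts)              # respects the cutoff
```

By the restriction property of Poisson point processes, conditionally on the discovered value b, the points below b in a fresh independent soup have the same law as the remaining points of a single soup below b; this is why the discovered sequence has the law of the decreasingly ordered time lengths of one soup above the cutoff.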
Given a set of discovered loops γ_1, . . . , γ_{k−1}, we bound the probability that the next loop γ_k comes close to γ_i but does not intersect γ_i, for each i ∈ {1, . . . , k − 1} separately. Note that, because of the conditioning, we can treat γ_i as a deterministic loop, while γ_k is random. Therefore, to obtain such a bound, we can use Lemma 6.5 on the event that t_{γ_k} ≥ N^{−δ} and diam γ_i ≥ N^{−δ′/2}. We use the first and second steps of this proof to bound the probability that L contains more than N^{δ′} log N loops of large time length, or loops of large time length but small diameter. Thus we obtain (6.11), for some constants c_3, c_4 > 0. If δ′ < 2/9, then (6.11) converges to 0 as N → ∞; since δ = 2 − θ < 2/9, such a choice of δ′ is possible.

Proof of main results
Proof of Theorem 2.1. This follows from Theorem 3.2, Theorem 4.1 and Corollary 4.2, and the fact that there exist couplings such that almost surely the processes converge uniformly, see e.g. Corollary 3.2 of [15].
Proof of Theorem 2.2. By Corollary 5.4 of [15], for every N we can define L̃_N and L on the same probability space such that the following holds with probability tending to 1 as N → ∞: there is a one-to-one correspondence between the loops in L̃_N^{N^{θ−2}} and the loops in L^{N^{θ−2}} such that, if γ̃ ∈ L̃_N^{N^{θ−2}} and γ ∈ L^{N^{θ−2}} are paired in this correspondence, then d_∞(γ̃, γ) < cN^{−1} log N, where c is a constant which does not depend on N.
We prove that in the above coupling, for all δ, α > 0 there exists N_0 such that for all N ≥ N_0 the following holds with probability at least 1 − α: for every outermost cluster C of L there exists an outermost cluster C̃_N of L̃_N^{N^{θ−2}} such that (7.1) holds, and for every outermost cluster C̃_N of L̃_N^{N^{θ−2}} there exists an outermost cluster C of L such that (7.1) holds. By Lemma 3.8, (7.1) implies that d_H(∂_o C, ∂_o C̃_N) < 2δ and d_H(Hull C, Hull C̃_N) < 2δ. Also, (7.1) implies that the Hausdorff distance between the carpet of L and the carpet of L̃_N^{N^{θ−2}} is at most δ. Hence this proves the theorem.
Fix δ, α > 0. To simplify the presentation of the proof of (7.1), we will often use the phrase "with high probability", by which we mean with probability larger than a certain lower bound which is uniform in N . It is not difficult to check that we can choose these lower bounds in such a way that (7.1) holds with probability at least 1 − α.
First we define some constants. By Lemma 9.7 of [22], a.s. there are only finitely many clusters of L with diameter larger than any positive threshold; moreover, they are all at positive distance from each other. Let ρ ∈ (0, δ/2) be such that, with high probability, for every z ∈ D we have that z ∈ Hull C for some outermost cluster C of L with diam C ≥ δ/2, or d_E(z, C) < δ/4 for some outermost cluster C of L with ρ < diam C < δ/2. The existence of such a ρ follows from the fact that a.s. L is dense in D and that there are only finitely many clusters of L with diameter at least δ/2. We call a cluster or subcluster large (respectively, small) if its diameter is larger than (respectively, at most) ρ.
Let ε_1 > 0 be such that, with high probability, for all clusters C of L. Let ε_2 > 0 be such that, with high probability, for all distinct large clusters C_1, C_2 of L. For every large cluster C_1 of L, let ℘(C_1) be a path connecting Hull C_1 with ∞ such that, for all large clusters C_2 of L with Hull C_1 ⊄ Hull C_2, we have that ℘(C_1) ∩ Hull C_2 = ∅. Let ε_3 > 0 be such that, with high probability, for all large clusters C_1, C_2 of L such that Hull C_1 ⊄ Hull C_2. By Lemma 3.7 (and Remark 3.9) we can choose ε_4 > 0 such that, with high probability, for every large cluster C of L and for any collection of loops C̃. Let t_0 > 0 be such that, with high probability, every subcluster C of L with diam C > ρ − ε_1 contains a loop of time length larger than t_0. Such a t_0 exists by Lemma 5.5. In particular, every large subcluster of L contains a loop of time length larger than t_0. Note that the number of loops with time length larger than t_0 is a.s. finite.
From now on the proof is in six steps, and we start by giving a sketch of these steps (see Figure 6). First, we treat the large clusters. For every large cluster C of L, we choose a finite subcluster C′ of C such that d_H(C, C′) and d_H(Ext C, Ext C′) are small, using Theorem 5.1. Second, we approximate C′ by a subcluster C̃′_N of L̃_N^{N^{θ−2}} such that d_H(Ext C′, Ext C̃′_N) is small, using the one-to-one correspondence between random walk loops and Brownian loops, Theorem 3.5 and Theorem 4.3. Third, we let C̃_N be the cluster of L̃_N^{N^{θ−2}} that contains C̃′_N. Here we make sure, using Proposition 6.1, that for distinct subclusters C̃′_{1,N}, C̃′_{2,N}, the corresponding clusters C̃_{1,N}, C̃_{2,N} are distinct. It follows that d_H(C, C̃_N) and d_H(Ext C, Ext C̃_N) are small. Fourth, we show that the obtained clusters C̃_N are large. We also show that we obtain in fact all large clusters of L̃_N^{N^{θ−2}} in this way. Fifth, we prove that a large cluster C of L is outermost if and only if the corresponding large cluster C̃_N of L̃_N^{N^{θ−2}} is outermost. Sixth, we deal with the small outermost clusters.

Figure 6. Schematic diagram of the proof of Theorem 2.2. We start with a cluster C of L and, following the arrows, we construct a cluster C̃_N of L̃_N^{N^{θ−2}}. The dashed arrow indicates that C and C̃_N satisfy (7.1).
Step 1. Let C be the collection of large clusters of L. By Lemma 9.7 of [22], the collection C is finite a.s. For every C ∈ C let C′ be a finite subcluster of C such that C′ contains all loops in C which have time length larger than t_0 and such that, a.s.,

d_H(C, C′) < min{δ, ε_1, ε_2, ε_3, ε_4}/16, (7.2)
d_H(Ext C, Ext C′) < min{δ, ε_2}/16. (7.3)

This is possible by Theorem 5.1. Let C′ be the collection of these finite subclusters C′.
Step 2. For every C′ ∈ C′ let C̃′_N ⊂ L̃_N^{N^{θ−2}} be the set of random walk loops which correspond to the Brownian loops in C′, in the one-to-one correspondence from the first paragraph of this proof. This is possible for large N, with high probability, since then every loop of every C′ ∈ C′ has time length larger than N^{θ−2}. Let C̃′_N be the collection of these sets of random walk loops C̃′_N. Now we prove some properties of the elements of C̃′_N. By Theorem 4.3, the union of the subclusters C′ ∈ C′ has no touchings a.s. Hence, by Theorem 3.5 (and Remark 3.9), for large N, with high probability, (7.4) holds. Next note that almost surely, d_E(γ, γ′) > 0 for all non-intersecting loops γ, γ′ ∈ L, and overlap(γ, γ′) > 0 for all intersecting loops γ, γ′ ∈ L. Since the total number of loops in the subclusters C′ ∈ C′ is finite, we can choose η > 0 such that, with high probability, d_E(γ, γ′) > η for all non-intersecting loops γ, γ′ in this union, and overlap(γ, γ′) > η for all intersecting loops γ, γ′ in this union. For large N, cN^{−1} log N < η/2 and hence, with high probability,

(7.5) γ_1 ∩ γ_2 = ∅ if and only if γ̃_1 ∩ γ̃_2 = ∅,

for all loops γ_1, γ_2 in the union of the subclusters C′ ∈ C′, where γ̃_1, γ̃_2 are the random walk loops which correspond to the Brownian loops γ_1, γ_2, respectively. By (7.5), every C̃′_N ∈ C̃′_N is connected and hence a subcluster of L̃_N^{N^{θ−2}}. Also by (7.5), for distinct C′_1, C′_2 ∈ C′, the corresponding C̃′_{1,N}, C̃′_{2,N} ∈ C̃′_N do not intersect each other when viewed as subsets of the plane.
Step 3. For every C̃′_N ∈ C̃′_N let C̃_N be the cluster of L̃_N^{N^{θ−2}} which contains C̃′_N. Let C̃_N be the collection of these clusters C̃_N. We claim that for distinct C̃′_{1,N}, C̃′_{2,N} ∈ C̃′_N, the corresponding C̃_{1,N}, C̃_{2,N} ∈ C̃_N are distinct, for large N, with high probability. This implies that there is a one-to-one correspondence between the elements of C̃′_N and the elements of C̃_N, and hence between the elements of C, C′, C̃′_N and C̃_N.
Step 4. We prove that, for large N, with high probability, all C̃_N ∈ C̃_N are large, and that all large clusters of L̃_N^{N^{θ−2}} are elements of C̃_N. This gives that, for large N, with high probability, there is a one-to-one correspondence between large clusters C of L and large clusters C̃_N of L̃_N^{N^{θ−2}} such that (7.7) and (7.8) hold, and hence such that (7.1) holds.
First we show that, for large N, with high probability, all C̃_N ∈ C̃_N are large. By (7.7) and the definition of ε_1, for large N, with high probability, diam C̃_N > diam C − ε_1 > ρ, i.e. C̃_N is large.
Next we prove that, for large N, with high probability, all large clusters of L̃_N^{N^{θ−2}} are elements of C̃_N. Let G̃_N be a large cluster of L̃_N^{N^{θ−2}}. Let G′_N ⊂ L^{N^{θ−2}} be the set of Brownian loops which correspond to the random walk loops in G̃_N. By (7.6), G′_N is connected and hence a subcluster of L. If cN^{−1} log N < ε_1/2, then diam G′_N > ρ − ε_1. Let G be the cluster of L which contains G′_N. We have that diam G > ρ − ε_1 and hence, by the definition of ε_1, with high probability, G is large, i.e. G ∈ C. Let G̃*_N be the element of C̃_N which corresponds to G. We claim that

(7.9) G̃*_N = G̃_N,

which implies that G̃_N ∈ C̃_N.
To prove (7.9), let G′ be the element of C′ which corresponds to G. Since G′_N is a subcluster of L with diam G′_N > ρ − ε_1, G′_N contains a loop γ of time length larger than t_0. Since γ ∈ G and t_γ > t_0, by the construction of G′, we have that γ ∈ G′. Hence γ̃ ∈ G̃*_N, where γ̃ is the random walk loop corresponding to the Brownian loop γ. Since γ ∈ G′_N, by the definition of G′_N, we have that γ̃ ∈ G̃_N. It follows that γ̃ ∈ G̃_N ∩ G̃*_N, which implies that (7.9) holds.
Step 5. Let C, G be distinct large clusters of L, and let C̃_N, G̃_N be the large clusters of L̃_N^{N^{θ−2}} which correspond to C, G, respectively. We prove that, for large N, with high probability,

(7.10) Hull C ⊂ Hull G if and only if Hull C̃_N ⊂ Hull G̃_N.

It follows that Hull C̃_N ⊂ Hull G̃_N.
To prove the reverse implication of (7.10), suppose that Hull C̃_N ⊂ Hull G̃_N. There are three cases: Hull C ⊂ Hull G, Hull G ⊂ Hull C, and Hull C ∩ Hull G = ∅. We will show that the second and third cases lead to a contradiction, which implies that Hull C ⊂ Hull G. For the second case, suppose that Hull G ⊂ Hull C. Then, by the previous paragraph, Hull G̃_N ⊂ Hull C̃_N. This contradicts the fact that Hull C̃_N ⊂ Hull G̃_N and C̃_N ∩ G̃_N = ∅.
Next, by the one-to-one correspondence between the elements of C and C̃_N satisfying (7.7) and (7.8), for large N, with high probability,

(7.11) d_H(⋂_{C∈C} Ext C, ⋂_{C̃_N∈C̃_N} Ext C̃_N) < δ/4.

Let G̃_N be a small outermost cluster of L̃_N^{N^{θ−2}}; then G̃_N ⊂ ⋂_{C̃_N∈C̃_N} Ext C̃_N. By (7.11) and the fact that L is dense in D, a.s. there exists an outermost cluster C of L with diam C < δ/2 such that d_E(C, G̃_N) < δ/2. It follows that

d_H(C, G̃_N) ≤ d_E(C, G̃_N) + max{diam C, diam G̃_N} < δ,
d_H(Ext C, Ext G̃_N) ≤ (1/2) max{diam C, diam G̃_N} < δ/4.

This completes the proof.