Conformal Invariance of Boundary Touching Loops of FK Ising Model

In this article we show the convergence of a loop ensemble of interfaces in the FK Ising model at criticality, as the lattice mesh tends to zero, to a unique conformally invariant scaling limit. The discrete loop ensemble is described by a canonical tree glued from the interfaces, which is then shown to converge to a tree of branching SLEs. The loop ensemble contains unboundedly many loops; hence our result describes the joint law of infinitely many loops in terms of SLE-type processes, and it gives the full scaling limit of the FK Ising model in the sense of the random geometry of the interfaces. Further results in this article are the convergence of the exploration process of the loop ensemble (the branch of the exploration tree) to SLE(κ, κ−6) with κ = 16/3, and the convergence of a generalization of this process for 4 marked points to SLE[κ, Z] with κ = 16/3, where Z refers to a partition function.
The latter SLE process is a process that cannot be written as an SLE(κ, ρ1, ρ2, …) process; such processes are the most commonly considered generalizations of SLEs.

We will consider the FK Ising model, i.e., the FK model whose parameter value q corresponds to the Ising model, on a square lattice. The lattice we are considering is the usual square lattice Z², for convenience rotated by 45°. In Fig. 1a it is the lattice formed by the centers of the black squares. We will consider a bounded simply connected subgraph of the square lattice; this definition is made precise later. An FK configuration in the graph is illustrated in Fig. 2a and its dual configuration in Fig. 2b. The dual configuration is defined on the dual lattice, formed by the centers of the white squares in Fig. 1a, with the rule that exactly one of any (primal) edge and its dual edge (the dual edge is the unique edge on the dual lattice crossing the primal edge) is present, either in the configuration or in the dual configuration. The so-called loop representation of the FK model is defined on the so-called medial lattice, shown in Fig. 1a and formed by the corners of the squares, by taking the inner and outer boundaries of all the connected components in a configuration of edges. The loop representation can be seen as a dense collection of simple loops once we resolve the possible double vertices by the modification illustrated in Fig. 1b. (Fig. 2b shows a configuration on the dual lattice satisfying free boundary conditions.) As the name suggests, the free boundary conditions mean that the FK model is considered in its standard form, without any added weight factors. The duality of FK configurations described above gives a mapping between the set of configurations with wired boundary conditions and the set of configurations with free boundary conditions, defined on the graph and on its dual, respectively.
One can check that the FK model parameter values p and q are mapped under this involution of FK configurations to values p* and q* = q, where p* is given by

p*/(1 − p*) = q (1 − p)/p.

In this article we will consider the critical point of the model, which happens to be the self-dual value of p, that is, p = p_c when p = p*. For the FK Ising model, the parameter q = 2 and the critical parameter p_c = √2/(1 + √2). Since this is the self-dual point, the FK models in Fig. 2a, b have the same parameter values; the only difference is in the boundary conditions. The resulting loop configurations have the same law at the critical parameter in both setups, either wired boundary conditions on the primal graph or free boundary conditions on the dual graph.
The correspondence of boundary conditions is slightly more complicated for other boundary conditions than the wired and free ones.
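As a sketch of the computation of the self-dual point (assuming the standard duality relation p*/(1 − p*) = q(1 − p)/p), setting p = p* gives:

```latex
\frac{p}{1-p} = \frac{q(1-p)}{p}
\;\Longleftrightarrow\;
\left(\frac{p}{1-p}\right)^{2} = q
\;\Longleftrightarrow\;
p_c = \frac{\sqrt{q}}{1+\sqrt{q}},
```

which for q = 2 gives p_c = √2/(1 + √2), the value used throughout the article.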
1.1.1. The role of the critical parameter. The parameter is chosen to be critical for several reasons. First of all, it is expected that only for this value of the parameter will the scaling limit be non-trivial. For the other values, we get either zero or one macroscopic loop, and all the other loops will be microscopic and vanish in the scaling limit. The only macroscopic loop will be rather uninteresting, as it will closely follow the boundary: the loop fluctuates away from the boundary only to a distance which vanishes in the scaling limit. In contrast, for the critical parameter the scaling limit will consist of countably infinitely many loops, as we will show.
The second reason for selecting this value of the parameter is more technical. For that value, the observable we are defining in Sect. 4.1 is going to satisfy a relation which we can interpret as a discrete version of the Cauchy-Riemann equations. This makes it possible to pass to the limit and recover in the limit a holomorphic function solving a boundary value problem.
The third reason for the choice of the critical parameter is that at criticality we expect that the loop collection will have a conformal symmetry. This is already suggested by the existence of the holomorphic observable.
In fact, the discrete holomorphicity and related techniques allow us to control the scaling limit well, to identify it, and to show its conformal invariance. This can thus be seen as the most important reason to consider a system at criticality.

The exploration tree and the main results.
1.2.1. The exploration tree. Figure 3 illustrates the construction of the exploration tree of a loop configuration. The (chordal) exploration tree connects the fixed root vertex to any other boundary point. The construction of the branch from the root to a fixed target vertex is the following: (1) To initialize the process, cut open the loop next to the root vertex and start the process at that location on that loop. (2) Explore the current loop in the clockwise direction until either the target point is reached, or a point is reached at which it is clear that the target point cannot be reached along the current loop (meaning that the current location of the exploration is disconnected from the target in the discrete slit domain where the explored part has been removed from the original domain). (3) In the latter case, jump to the loop passing through the edge adjacent to the disconnection point and repeat Step (2).
The construction depends on the direction of the exploration, which in (2) was chosen to be clockwise. Instead of a deterministic choice, independent coin flips could be used to decide whether to follow each loop in the clockwise or counterclockwise direction. This would lead to a different process. In this article we will use the above construction, which suits our purposes well. After all, the main goal is to show convergence of the loop collection, and the above construction agrees well with our observable.

Main result.
The main theorem of this article is the following result. For its formulation, call a discrete domain admissible if it is simply connected and bounded and its boundary consists of a chain of black octagons and small squares as in Fig. 2a. More generally, introduce a lattice mesh by scaling the lattices (L•, L◦, L⋄, L♠ etc.) by a factor δ > 0, consider discrete, admissible domains with lattice mesh δ > 0, and use the notation Ω(δ) to refer explicitly to such a domain. A sequence of simply connected domains Ωn (say Ωn = Ω(δn)) converges to a domain Ω in the Carathéodory sense with respect to a point w0 ∈ Ω, if w0 ∈ Ωn for all n and the conformal onto maps ψn : D → Ωn with ψn(0) = w0 and ψn′(0) > 0 converge uniformly on compact subsets of D to a conformal onto map ψ : D → Ω with ψ(0) = w0 and ψ′(0) > 0. Notice that then the inverse maps φn = ψn⁻¹ converge uniformly on any compact subset K of Ω (K belongs to the domain of φn for large n). Theorem 1.1. Let Ωδn be a sequence of admissible domains that converges to a domain Ω in the Carathéodory sense with respect to some fixed w0 ∈ Ω, and let Θ∂,δn be the random collection of loops consisting of all loops of the loop representation of the FK Ising model on Ωδn that intersect the boundary (the boundary touching loops). Let φn be a conformal map that sends Ωn onto the unit disc and w0 to 0. Then as n → ∞, the sequence of random loop collections φn(Θ∂,δn) converges weakly to a limit Θ whose law is independent of the choices of Ω, δn, Ωδn, w0 and φn, and hence the law is invariant under all conformal isomorphisms of D.
Moreover, the law of Θ is the law of the boundary touching loops of CLE(κ) with κ = 16/3 and is given by the image of the SLE(κ, κ − 6) exploration tree with κ = 16/3 under a tree-to-loops mapping which inverts the construction in Sect. 1.2.1 and is explained in more detail in Sect. 2.3 (including definitions needed for understanding this theorem and making the statements more precise). Remark 1.2. In this article we will prove Theorem 1.1 only in the case that Ω is smooth and that each of its discrete approximations has a boundary which is close to the boundary of Ω in the sense that their distance is bounded by a uniform constant times the lattice step of the approximation. By these assumptions, we exclude the cases where the boundary forms long fjords, so as to have better estimates for the harmonic measure. The general case follows from the sequel [15] of this article on the radial exploration tree of the FK Ising model. These restrictions are technical and they are not needed, for instance, for the convergence in the so-called 4-point case (Sect. 4.4.1 and other arguments leading to the convergence of the interface to SLE[κ, Z] in Sect. 5.5).
The present article aims to provide clear details for the basic proof techniques, which include the regularity properties of trees, the derivation of the martingale observables and the corresponding martingale characterization in the boundary-touching-loop setting. In principle, one should be able to deduce the complete picture by repeatedly iterating this construction inside the holes that appear after removing the boundary touching loops, the main difficulty being the fractal boundary. Instead, in the sequel [15] we build the complete tree towards interior points, thus avoiding fractal boundaries. This requires working with a more complicated observable, and the proof in the current article better explains what follows in the sequel.
Our result in the 4-point setting is interesting in its own right. In a follow-up paper [16], we use it to show that the interface conditioned on an internal arc pattern converges towards so-called hypergeometric SLE.
A sample of FK Ising branch is illustrated in Fig. 4.

Previous results on conformally invariant scaling limits of random curves and loops.
So far, convergence of a single discrete interface to SLE(κ) has been established for only a few models: κ = 2 and κ = 8 [17], κ = 3 and κ = 16/3 [8], κ = 4 [21,22] and κ = 6 [24,25]. However, the framework for the full scaling limit, including all interfaces, is less developed: in addition to the present article, only κ = 3 [4] and κ = 6 [6] and our subsequent work [15]. A similar result on a collection of random curves is the convergence of the free arc ensemble of the Ising model in [3].

Organization of the article.
We will give further definitions in Sect. 2. In Sect. 3 we explore the regularity and tightness properties of the loop configurations and the exploration trees based on crossing estimates. This gives a priori knowledge needed in the main argument. In Sect. 4 we define the holomorphic observable and show its convergence. In Sect. 5 we combine these tools and extract information from the observable so that we can characterize the scaling limit and prove the main theorem in Sect. 6.

Graph theoretical notations and setup.
In this article the lattice L• is the square lattice Z² rotated by π/4, L◦ is its dual lattice, which itself is also a square lattice, and L⋄ is their (common) medial lattice. More specifically, we define three lattices; notice that the sites of L⋄ are the midpoints of the edges of L• and L◦. We call the vertices and edges of V(L•) black and the vertices and edges of V(L◦) white. Correspondingly, the faces of L⋄ are colored black and white depending on whether the center of that face belongs to V(L•) or V(L◦).
(Caption of Fig. 3c: The exploration process between the lower left and right corners. Notice that the process uses only some loops touching the bottom arc and only the loop arcs which are at the top as seen from the lower right corner. In this particular sample, the large orange loop happens to come fairly close to the boundary without disconnecting the small loops to its right; thus the exploration path turns and explores those boundary touching loops inside the "fjord".)
The directed version L⋄→ is defined by setting V(L⋄→) = V(L⋄) and orienting the edges around any black face in the counter-clockwise direction.
The modified medial lattice L♠, which is a square-octagon lattice, is obtained from L⋄ by replacing each site by a small square. See Fig. 1b. The oriented lattice L♠→ is obtained from L♠ by orienting the edges around the black and white octagonal faces in the counter-clockwise and clockwise directions, respectively. Definition 2.1. A simply connected, non-empty, bounded domain Ω is said to be a wired L♠→-domain (or admissible domain) if ∂Ω oriented in the counter-clockwise direction is a path in L♠→. See Fig. 5 for an example of such a domain. The wired L♠→-domains are in one-to-one correspondence with non-empty finite subgraphs of L• which are simply connected, i.e., graphs that have a unique unbounded face while all other faces are unit-size squares.

FK Ising model: notations and setup for the full scaling limit.
Let G be a simply connected subgraph of the square lattice L•. Consider the random cluster measure μ = μ¹_{p,q} of G with all-wired boundary conditions in the special case of the critical FK Ising model, that is, when q = 2 and p = √2/(1 + √2). Its dual model is again a critical FK Ising model, now with free boundary conditions on the dual graph G◦ of G, which is a (simply connected) subgraph of L◦. The loop representation is obtained as the loops forming boundaries between the open clusters and the dual open clusters, and it is defined as a collection of loops on the corresponding subgraph G♠ of the modified medial lattice L♠. The loop collection satisfies the properties of the following definition. See also Fig. 2 for an illustration of the common loop representation shared by the random cluster model and its dual model.
Let us call an (unordered) collection of loops L = (L_j)_{j=1,…,N} on G♠ a dense collection of non-intersecting loops (DCNIL) if
• each L_j ⊂ G♠ is a simple loop,
• L_j and L_k are vertex-disjoint when j ≠ k,
• for every edge e ∈ E there is a loop L_j that visits e. Here we use that E is naturally a subset of E♠.
Let the collection of all the loops in the loop representation be Θ = (θ_j)_{j=1,…,N}. Then DCNIL is exactly the support of Θ, and for any DCNIL collection C of loops

P(Θ = C) = (√2)^{ℓ(C)} / Z,

where ℓ(C) is the number of loops in C and Z is the partition function that normalizes the probability measure. We consider two boundaries of the domain: one is the boundary of the domain, and the other is shifted by one lattice step from the first towards the interior of the domain. They are both simple loops on the lattice which satisfy the same parity condition as the loops of the random cluster loop representation (the octagons on both sides have uniform color). More specifically,
• ∂G♠ is the boundary of the domain, in the usual topological sense. Call it the external boundary.
• ∂₁G♠ is the outermost (simple) loop that can be drawn in G♠. In other words, it is the outermost loop of the empty random cluster configuration with wired boundary conditions. Call it the internal boundary.
We say that ∂₁G♠ touches the boundary everywhere, and that a loop touches the boundary if it intersects ∂₁G♠. Notice that if a loop and ∂₁G♠ intersect, then they share an edge (an edge shared by two octagons).
(Figure caption: (a) A sample of the random cluster model and the corresponding loop configuration. (b) Two branches of the exploration tree from v_root ∈ V_∂ to w, w′ ∈ V_∂. Note the target independence of the process and that the branching takes place on the vertices of V_{∂,1}.)
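At the self-dual point the loop representation weights a configuration by (√2) raised to its number of loops. As a toy numerical illustration (with hypothetical loop counts, not data from the article):

```python
import math

def loop_measure(loop_counts):
    """Probabilities of DCNIL configurations with the given numbers of loops,
    each configuration C weighted by sqrt(2) ** (number of loops in C)."""
    weights = [math.sqrt(2) ** n for n in loop_counts]
    Z = sum(weights)                  # partition function normalizing the measure
    return [w / Z for w in weights]
```

Configurations with more loops carry a larger weight, e.g. `loop_measure([1, 1, 2])` assigns a strictly larger probability to the two-loop configuration than to either one-loop configuration.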
Define the collection of boundary touching loops, Θ∂ ⊂ Θ, to be simply the set of loops which intersect the internal boundary ∂₁G♠.
Recall the Carathéodory convergence from Sect. 1.2.2. Let Ω be a bounded simply connected domain in the plane, take a sequence δn ↘ 0 as n → ∞ and a sequence of simply connected graphs G•δn ⊂ δn L• which approximate Ω in the sense that, if we denote by Ωδn the bounded component of C\∂G♠δn, then Ωδn converges in the Carathéodory sense to Ω (with respect to any interior point of Ω). Fix any w0 ∈ Ω and let φδn : Ωδn → D be conformal transformations normalized in the usual way using w0, that is, φδn(w0) = 0 and φδn′(w0) > 0. Let Θ∂,δn be the collection of boundary touching loops in Ωδn and set Θ̃∂,δn = φδn(Θ∂,δn). Let us rephrase here the first half of Theorem 1.1.

Theorem 1.1 (a).
(Conformal invariance of Θ∂). As n → ∞, Θ̃∂,δn converges weakly to a random collection Θ of loops in D. The law of Θ is independent of Ω, δn, Ωδn and w0.
Notice that the independence of the law of Θ from Ω, δn, Ωδn and w0 implies that Θ is invariant under all conformal automorphisms (Möbius transformations) of D. The rotational invariance requires a separate argument using the correspondence between the exploration tree and the loop collection, and the fact that we are free to choose the root for the exploration. See Sect. 6.

The exploration tree of FK Ising model.
Let us simplify the notation and use ∂ and ∂₁ to denote ∂G♠ and ∂₁G♠. Remember that ∂ and ∂₁ are simple loops on L♠→ and that Θ∂ was the set of loops in Θ that intersect ∂₁. Next we explain the construction of the exploration tree of Θ∂. The branches of the tree will be simple paths from a root edge to a directed edge of ∂₁. More specifically, let V_target be the vertex set of ∂₁ and, for each v ∈ V_target, let f_v be the edge of ∂₁ arriving at v.
We assume that the root vertex v_root ∈ V_∂ of the exploration is fixed. For any w ∈ V_target, we are going to construct a simple path which starts from the inward pointing edge of v_root and ends on the edge f_w of w; denote this path by T_w = T_{v_root,w}. We will call the mapping from loop collections to trees the "loops-to-tree" map. We next describe the algorithm of Sect. 1.2.1 when the target point is on the boundary. Consider a loop collection L = (L_j)_{j∈J}, where J is some finite index set, and assume that L satisfies the properties of DCNIL. The index j ∈ J shouldn't be confused with the concrete sequence L_1, …, L_n chosen below. Here we consider L_j as a path in L⋄ and lift it to L♠ when needed.
(1) Set e_0 to be the inward pointing edge of ∂₁ at v_root and f_end = f_w, that is, the edge of ∂₁ arriving at w. Denote by F_w the set of edges in ∂₁ that lie between f_end and e_0, including f_end. (2) Set L_1 to be the loop going through e_0. Find the first edge of L_1 after e_0 in the orientation of L_1 (remember that all the loops are oriented in the clockwise direction) that lies in F_w. Call it f_1, and call the part of L_1 between e_0 and f_1, not including f_1, L^T_1. Notice that L_1 goes through f_end if and only if f_1 is equal to f_end. Notice also that if f_1 is not f_end, then it is the first edge that takes the loop to a component of the domain that is no longer "visible" from f_end.
(3) Suppose that f_n and L^T_1, L^T_2, …, L^T_n are known. If f_n is equal to f_end, stop and return the concatenation of L^T_1, L^T_2, …, L^T_n and f_n as the result T_w. Otherwise take the inward pointing edge e_n next to f_n (starting at the endpoint of f_n) and the loop L_{n+1} passing through e_n. Find the first edge of L_{n+1} after e_n in the orientation of L_{n+1} that lies in F_w. Call it f_{n+1}, and call the part of L_{n+1} between e_n and f_{n+1}, not including f_{n+1}, L^T_{n+1}. Repeat (3). The result T_w of the algorithm is called the exploration process from v_root to w. The collection T = (T_w)_{w∈V_target} with V_target = V_∂ is called the exploration tree of the loop collection L rooted at v_root.
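The steps above can be sketched in code. The following is a minimal sketch under assumed data structures (not from the article): each loop is a cyclic list of directed edges, F_w is the set of boundary edges between f_end and e_0, and next_entry maps a boundary edge f_n to the inward pointing edge e_n beside it.

```python
def explore_branch(loops, e0, f_end, F_w, next_entry):
    """Sketch of steps (1)-(3): concatenate loop arcs until the
    boundary edge f_end in front of the target w is reached."""
    edge_to_loop = {e: lp for lp in loops for e in lp}
    branch = []
    e = e0                                 # step (1): start at the root edge
    while True:
        lp = edge_to_loop[e]
        i = lp.index(e)
        # steps (2)-(3): follow the current loop from e to the first edge in F_w,
        # collecting the arc L^T (which excludes that edge)
        k = 0
        while lp[(i + k) % len(lp)] not in F_w:
            branch.append(lp[(i + k) % len(lp)])
            k += 1
        f = lp[(i + k) % len(lp)]
        if f == f_end:                     # target reached: finish with f_end
            branch.append(f)
            return branch
        e = next_entry(f)                  # step (3): jump to the next loop
```

For instance, with the two toy loops `['e0','a','f1','b']` and `['e1','c','fend','d']`, `F_w = {'f1','fend'}` and `next_entry` sending `'f1'` to `'e1'`, the returned branch is `['e0','a','e1','c','fend']`: two loop arcs concatenated through one branching point.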
The following result is immediate from the definition of the exploration tree. By that result, two branches coincide until the first time the branch disconnects the target points; after that the branches explore disjoint regions, which will later imply independence of these processes for the FK Ising exploration tree.

Proposition 2.3 (Target independence of exploration tree).
Suppose that v_root, w, w′ are vertices in V_∂ in counterclockwise order, where some of them may coincide. Let F_{w,w′} be the set of edges of ∂₁ that lie between the outward pointing edges of w and w′ in the counterclockwise direction, including the edge at w but excluding the edge at w′. Then T_w and T_{w′} are equal up until the first edge lying in F_{w,w′}.
Next we construct the inverse of the loops-to-tree map, which we call the "tree-to-loops" map. The structure of the tree T is the following: the branches of T are simple and follow the rule of leaving white squares on their right and black ones on their left. The branching occurs in a subset of the vertex set of ∂₁, and there is a one-to-one correspondence between DCNIL loop collections and such trees. Similar constructions work in the continuous setting; see [23] for the construction of the SLE(κ, κ − 6) exploration tree and the construction for recovering the loops.
Let us repeat here the second half of Theorem 1.1.

Theorem 1.1 (b).
The law of Θ is given by the image of the SLE(κ, κ − 6) exploration tree with κ = 16/3 under the above tree-to-loops mapping.
The proof of Theorem 1.1 is given in Sect. 6.

Tightness of Trees and Loop Collections
In this section, we establish a priori bounds for trees and loop ensembles. The setting is relatively general, although we only apply it here to the FK Ising exploration tree of the boundary touching loops.

A probability bound on multiple crossings by the tree.
An approach for establishing compactness properties of sequences of probability measures, based on probability bounds for multiple crossings of annuli by random curves, was set up in [14], extending the results of [1]. Below we use that type of result for the FK Ising exploration tree. We start with the essential definitions. For a fixed measurable space (S, F), we call a random variable X tight over a collection Σ₀ of probability measures on (S, F) if for each ε > 0 there exists a constant M > 0 such that P(|X| < M) > 1 − ε for all P ∈ Σ₀.
A crossing of an annulus A(z0, r, R) = {z ∈ C : r < |z − z0| < R} is a closed segment of a curve that intersects both connected components of C\A(z0, r, R); a minimal crossing doesn't contain any genuine subcrossings.
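As an illustration of the definition (a sketch with an assumed discretization, not from the article): for a curve given as a list of complex points, the number of crossings can be counted from the switches between the two complementary components.

```python
def count_crossings(points, z0, r, R):
    """Count crossings of the annulus A(z0, r, R) by a polyline: a crossing
    is an excursion through the annulus connecting the inner disc to the
    outer component (points lying exactly on a circle are not handled)."""
    def region(z):
        d = abs(z - z0)
        return -1 if d < r else (1 if d > R else 0)
    crossings, last = 0, None
    for z in points:
        s = region(z)
        if s == 0:
            continue                        # still inside the annulus
        if last is not None and s != last:  # switched component: one crossing
            crossings += 1
        last = s
    return crossings
```

For example, a polyline running from the inner disc of A(0, 1, 2) out past radius 2 makes one crossing, and returning inside makes two.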
Recall the general setup of [14]: we are given a collection Σ of pairs (φ, P), where the conformal map φ carries also the information about its domain of definition and P is the probability law of the FK Ising model on the discrete domain; in particular, P gives the distribution of the FK Ising exploration tree. Given the collection Σ of pairs (φ, P), we define the collection Σ_D = {φP : (φ, P) ∈ Σ}, where φP is the pushforward measure defined by (φP)(E) = P(φ⁻¹(E)).

Theorem 3.1. The following claims hold for the collection Σ_D of probability laws of FK Ising exploration trees:
• for any Δ > 0, there exist n ∈ N and K > 0 such that

P(T contains at least n disjoint crossings of A(z0, r, R)) ≤ K (r/R)^Δ    (7)

for all P ∈ Σ_D, all z0 ∈ C and all R > r > 0;
and there exist positive numbers α, α′ > 0 such that the following claims hold:
• if for each r > 0, M_r is the minimum of all m such that each branch of T can be split into m segments of diameter less than or equal to r, then there exists a random variable K(T) such that K is a tight random variable for the family Σ_D and M_r ≤ K r^{−1/α} for all r > 0;
• all branches of T can be jointly parametrized so that they are all α′-Hölder continuous and the Hölder norm is bounded by a random variable K(T) such that K is a tight random variable for the family Σ_D.
Each of the claims has its own applications below, although they are closely related; see [1].
Proof. We need to verify the first claim; the two other claims follow from it by results of [1]. More specifically, the second claim follows from the reformulated statement presented at the beginning of the proof of Theorem 1.1 of [1], and the third claim follows from Theorem 1.1 of [1]. Notice that it suffices to verify the inequality (7) for Δ = Δ_i and n = n_i, where n_i is an increasing sequence of natural numbers and Δ_i is a sequence of positive real numbers tending to infinity, since the left-hand side of the inequality in the first claim is non-increasing in n.
For the annulus A(z0, r, R) and for C > 1 big enough, apply the estimate of conformal distortion given by either Lemma A.1 or Lemma A.2, depending on the case, to show that for any m = 1, 2, …, ⌊log_C(R/r)⌋, there exists Ãm = A(zm, rm, 2rm) such that the conformal image of any crossing of A(z0, C^{m−1} r, C^m r) under φ⁻¹ is a crossing of Ãm.
By Lemmas B.1 and B.2 in Appendix B and the results of [7] (in particular, Lemma 5.7) applied to crossings of Ãm, it follows that for each ε > 0 there is n such that P(at least n disjoint segments of T cross Ãm) < ε. Thus (7) holds for this n and the constants K = ε⁻¹ and Δ = log(1/ε). Here the constant Δ tends to ∞ as n → ∞ (i.e. as ε tends to zero), and the estimates are uniform over all P ∈ Σ_D and all annuli A(z0, r, R) with R > r.
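A sketch of the arithmetic behind these constants (under the assumption, for the sketch, that the crossing events in the disjoint concentric annuli Ãm can each be estimated by ε and combined multiplicatively, with C = e so that logarithms are natural):

```latex
P\bigl(\text{at least } n \text{ disjoint crossings of } A(z_0,r,R)\bigr)
\le \varepsilon^{\lfloor \log(R/r) \rfloor}
\le \varepsilon^{-1}\,\varepsilon^{\log(R/r)}
= \varepsilon^{-1}\left(\frac{r}{R}\right)^{\log(1/\varepsilon)},
```

since $\varepsilon^{\log(R/r)} = e^{\log\varepsilon \cdot \log(R/r)} = (r/R)^{\log(1/\varepsilon)}$; this matches the constants $K = \varepsilon^{-1}$ and $\Delta = \log(1/\varepsilon)$.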

The crossing property of trees
Suppose that the branches of the tree are explored in a (random) order and that the random curves γ_k are each parametrized by [0, 1]. The chosen permutation specifies the order of exploration of the curves. More specifically, set γ(t) = γ_k(t − k) when t ∈ [k, k + 1). We call γ an explored collection of branches.
For a given domain Ω and a given simple (random) curve γ on Ω, we set Ω_τ = Ω\γ[0, τ] for each (random) time τ. Similarly, for a given domain Ω and a given finite collection of curves γ_k on Ω, we set Ω_τ = Ω\γ[0, τ] for each (random) time τ, where γ is the explored collection of branches.
We call Ω_τ the domain at time τ.
The following definition generalizes Definition 2.3 from [14]. This definition is needed in order to recognize those crossing events which have low probability.

Definition 3.2.
For a given domain (Ω, v_root) and a given order of exploration (which defines γ) of the curves γ_x, x ∈ V_target, where each curve γ_x is contained in Ω, starts from v_root and ends at the point x ∈ V_target, define, for any annulus A = A(z0, r, R),

A^f_τ = {z ∈ A : the connected component of z in Ω_τ ∩ A is crossed by any path connecting γ(τ) to x in Ω_τ}    (9)

and set A^u_τ = A \ A^f_τ. Here and in what follows we only consider allowed lattice paths when we talk about connectedness.
Recall that ∂ is the boundary of Ω and ∂₁ is the internal boundary of Ω, i.e. the outermost of all (simple) lattice paths contained in Ω. A point on ∂₁ is a branching point of the tree T if it is the last common point of two branches. In that case the edge of the primal lattice passing through the point has to be open in the random cluster configuration. See also Fig. 7: at the branching point on the right-hand side the drawn branch jumps from one loop to another, whereas on the left-hand side it keeps following the loop explored at that time. Next we write down an estimate in the form of a hypothesis analogous to the ones presented in Section 2 of [14]. The estimate is sufficient for the desired compactness properties of the exploration tree. In fact, we will present two equivalent conditions here. As we will see later, conformal invariance holds for conditions of this type, again analogously to [14]. These conditions will be verified for the FK Ising exploration tree below.
Condition G1. Let Σ be as above. Suppose there exists C > 1 such that for any (φ, P) ∈ Σ, for any stopping time 0 ≤ τ ≤ N and for any annulus A = A(z0, r, R) with R ≥ Cr, the event E_{u,f} — that T makes a crossing of A which is contained in A^u_τ, or which is contained in A^f_τ and whose first minimal crossing doesn't have branching points on both sides — satisfies

P(E_{u,f}) ≤ 1/2.    (10)

Then the family Σ is said to satisfy a geometric joint unforced–forced crossing bound. See Fig. 8 for more information about the different types of branching points. Condition G2. The family Σ is said to satisfy a geometric joint unforced–forced crossing power-law bound if there exist K > 0 and Δ > 0 such that for any (φ, P) ∈ Σ, for any stopping time 0 ≤ τ ≤ N and for any annulus A = A(z0, r, R),

LHS ≤ K (r/R)^Δ,

where LHS is the left-hand side of (10).
We want to use Condition G1 or equivalently G2 as a hypothesis for theorems. We start by verifying them for the critical FK Ising model exploration tree.

Theorem 3.4.
If Σ is the collection of pairs (φ, P) where φ satisfies the properties given in (6), its domain of definition U(φ) is a discrete domain with some lattice mesh, and P is the law of the critical FK Ising model exploration tree T on U(φ), then Σ satisfies Conditions G1 and G2.
Proof. The theorem can be proved in the same way as the result that a single interface in the FK Ising model satisfies a similar condition, which was presented in Section 4.1 of [14]. However, stronger crossing estimates are needed for the exploration tree than for a single interface. Luckily such estimates were established in Theorem 1.1 of [7]. The full argument goes as follows.
• Similarly as in [14], we bound uniformly from above the probability of crossings by the interface in an annular sector.
• As in [14], we can move the wired boundary where the crossing starts and introduce free boundary along the two sides which are parallel to the possible open crossing. However, we cannot move the free boundary or replace it by wired boundary if the boundary condition at the endpoint of the possible open crossing is indeed free. Thus we need to consider a general topological quadrilateral and not just a regular one (with an archetypical shape), which was enough in [14].
• As indicated by the considerations in Fig. 9b, we end up with wired-free-free-free or wired-free-wired-free boundary conditions after the FKG transformation. (That is, we introduce new boundary along the boundaries of the annulus of the opposite "color" to the dashed crossing of the quadrilateral, in the sense that if we consider an open crossing the new boundary is free, and if the crossing is dual open the new boundary is wired. Then we use the duality as stated above.)
• Then we use the crossing estimates of [7] to deduce a uniform lower bound. By Theorem 1.1 of [7], the lower bound (as well as the upper bound) is uniform and depends only on the extremal length of the topological quadrilateral.
The upper bound for the existence of a crossing can be improved to the form C (r/R)^α using a similar argument as in Proposition 2.6 of [14], by considering a disjoint set of concentric annuli in each of which an upper bound of the form 1 − ε holds.
As shown in [14, Proposition 2.6], bounds of this type behave well under conformal maps: we have uniform control on how the constants in Conditions G1 and G2 change when we transform the random objects conformally from one domain to another.

Theorem 3.5.
If a collection is as in Theorem 3.4, then its image collection in the domain D satisfies Conditions G1 and G2.

Topology on trees.
Let $d_{\mathrm{curve}}$ be the curve distance (defined as the infimum over all reparametrizations of the supremum norm of the difference). We define the space of trees as the space of closed subsets of the space of curves, and we endow this space with the Hausdorff metric. More explicitly, if $(T_x)_{x \in S}$ and $(\tilde T_x)_{x \in \tilde S}$ are trees, then the distance between them is given by the Hausdorff distance of the corresponding curve collections.
(Fig. 9b caption: For each annulus type, we need to establish a probability lower bound on the crossing events of connected random cluster paths indicated by the dashed lines to get the bound of Condition G2.)
Consider now the random tree $T_\delta = (T_{v_{\mathrm{root}},x})_x$ whose law is given by $P_\delta$. The finite subtree $T_\delta(I)$ is defined by the following steps:
• first take a discrete approximation $\hat S_\delta$ of the set of points $\phi^{-1}(\hat S^D)$ on the vertex set of the boundary;
• then consider the finite subtree $\tilde T_\delta = (T_{v_{\mathrm{root}},x})_{x \in \hat S_\delta}$;
• finally set $S_\delta$ to be the union of $\hat S_\delta$ and all the branching points (in the sense of the earlier definition). Here $T_{v_{\mathrm{root}},x}$ for a branching point $x$ is defined as the subpath of $(T_{v_{\mathrm{root}},x})_{x \in \hat S_\delta}$ that starts from $v_{\mathrm{root}}$ and terminates at $x$.
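For concreteness, the curve distance referred to above can be written, for curves $\gamma_1, \gamma_2$, as

```latex
d_{\mathrm{curve}}(\gamma_1, \gamma_2)
  \;=\; \inf_{f_1 \in \gamma_1,\; f_2 \in \gamma_2} \;
        \sup_{t \in [0,1]} \big| f_1(t) - f_2(t) \big|,
```

where $f \in \gamma$ denotes that $f : [0,1] \to \mathbb{C}$ is a parametrization of the (unparametrized) curve $\gamma$, i.e. the infimum runs over all reparametrizations.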
In other words, we take the subtree corresponding to $\hat S$ and then augment it by adding all its branching points and the branches ending at those branching points. Below we will call $T_\delta = T_\delta(I)$ the finite subtree corresponding to I. It is finite in a way that is uniform over the family of probability laws and domains of definition; hence the name.
Denote the image of T δ (I) under φ by T D δ (I).
as m(I) → 0 where the supremum is taken over δ > 0.
Remark 3.7. In fact, the supremum in the claim can be taken over all shapes, since the probability bounds on which the proof is based hold for any shape. Proof. Let ε > 0 and suppose that K and α are positive numbers such that $M_r \le K r^{-\alpha}$ for all r > 0 with probability greater than $1 - \varepsilon$ for any $P_\delta$. Such constants exist by (2). […] on the arc $x^- x^+$. We will show that with high probability this happens for all x simultaneously.
Fix R > 0. We will study under which circumstances $d(T_{v_{\mathrm{root}},x}, T_{v_{\mathrm{root}},x^+})$ is larger than R, or smaller than or equal to R. We can assume that m(I) is less than R/10, say. Then the diameter of $x^- x^+$ is at most R/10. First we notice that if $x_0$ is the first branching point on $I_k$, then the branches to $x$, $x^-$ and $x^+$ are all the same until they reach $x_0$.
We will consider two complementary cases: either the branches of $x$ and $x^+$ continue to be the same after $x_0$ at least for some time, or not. Suppose that $\min\{d(\ldots)\}$ […]. Then in particular the branch that we follow to continue towards $x^-$ and $x_0$ has to reach distance 3R/4 from $x^- x^+$ without making a branch point in $x v_{\mathrm{root}}$, i.e. a visit to the boundary. It therefore has to contain a subpath which has diameter at least R/2, starts at the boundary and is otherwise disjoint from the boundary. Call the collection of subpaths with this property J. Then by the bound on $M_r$ we get a bound on the number of such subpaths (using the relationship between the maximal number of disjoint segments and the minimal number of segments needed to cover a path).
If we happen to reach distance 3R/4 from $x^- x^+$ without making a branch point, then in order for $d(T_{v_{\mathrm{root}},x}, T_{v_{\mathrm{root}},x^+}) > R$ to hold, the path needs to come close to $x^- x^+$ again and branch on $x x^+$, which it will surely do. But while doing so, one of the following has to hold (Fig. 10):
• the returning path doesn't branch on $x^+ v_{\mathrm{root}}$ before branching in $x x^+$, and then the path towards $x^+$ reaches distance at least 3R/4 from $x^- x^+$ before reaching $x^+$;
• or the returning path branches in $x x^+$, and then the path towards $x$ reaches distance 3R/4 from $x^- x^+$ before reaching $x$.
By the conditions presented in the previous section, the probabilities of both of these can be made less than ε by choosing the length of $x^- x^+$ small. Now there are $\#J$ subpaths where this can happen, hence the total probability is at most $\#J \cdot \varepsilon$, which can be made arbitrarily small. This argument can be made more formal by introducing stopping times $\tau_n$ such that $\tau_0 = 0$ and, for $n \ge 1$, $\tau_n$ is the smallest $t > \tau_{n-1}$ for which there exists $s \in [\tau_{n-1}, t)$ such that the exploration at time s is on the boundary and the exploration on (s, t) is disjoint from the boundary and has diameter at least R/2. Then, if $E_n$ is the event above (described in the two bullets), the inequality (15) follows simply by using a union bound for $\bigcup_n E_n$. The other case, $x_0 \in x x^+$, is similar.
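As a sketch of this union bound (with ε denoting the common upper bound on the conditional probabilities of the events $E_n$, and assuming the covering estimate gives $\#J \le K (R/2)^{-\alpha}$ with high probability):

```latex
\mathbb{P}\Big( \bigcup_n E_n \Big)
  \;\le\; \sum_n \mathbb{P}(E_n)
  \;\le\; \#J \cdot \varepsilon
  \;\le\; K \Big(\frac{R}{2}\Big)^{-\alpha} \varepsilon ,
```

which, for fixed R, tends to zero as the length of the arc $x^- x^+$ (and hence ε) tends to zero.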

Precompactness of the loops and recovering them from the tree.
3.4.1. Topology for loops. Let $\mathbb{T}$ be the unit circle. We consider a loop to be a continuous function on $\mathbb{T}$, considered modulo orientation-preserving reparametrizations of $\mathbb{T}$. The metric on loops is given in a similar fashion as for curves: we define $d_{\mathrm{loop}}$ using the notation $f \in \gamma$ to denote that f is a parametrization of the (unparametrized) loop γ. The space of loops endowed with $d_{\mathrm{loop}}$ is a complete and separable metric space. A loop collection is a closed subset of the space of loops. On the space of loop collections we use the Hausdorff distance; denote that metric by $d_{\mathrm{LE}}$, where LE stands for loop ensemble.
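In analogy with the curve distance, the loop distance can be written as

```latex
d_{\mathrm{loop}}(\gamma_1, \gamma_2)
  \;=\; \inf \big\{ \, \| f_1 - f_2 \|_{\infty, \mathbb{T}}
        \;:\; f_1 \in \gamma_1 , \; f_2 \in \gamma_2 \, \big\},
```

where the infimum runs over all orientation-preserving parametrizations $f_i : \mathbb{T} \to \mathbb{C}$ of the two loops.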
The random loop configuration we are considering is the collection of FK Ising loops, which we orient in the clockwise direction. Proof. The claim follows directly from Lemma 2.4 and the tightness of the trees. Namely, observe that there exist a constant α > 0 and a tight random variable K > 0 such that the minimum number of segments of diameter at most R needed to cover any branch of the tree is bounded by $K R^{-\alpha}$. By the bijection between loop collections and trees, each loop is a subpath of a branch of the tree, and thus the minimum number of segments of diameter at most R needed to cover any loop in the loop collection is also bounded by $K R^{-\alpha}$.

Precompactness of the loops.
3.5. Uniform approximation of loops by the finite subtrees. By Lemma 2.4 it is possible to reconstruct the loops from the full tree, and by Theorem 3.6 we can approximate the full tree by finite subtrees. How do we recover loops approximately from the finite subtrees? Can we do it in a uniform manner? Consider a loop γ in the ensemble. We divide it into arcs which are excursions from the boundary to the boundary, otherwise disjoint from the boundary. Since the loop is oriented in the clockwise direction, exactly one of the arcs goes away from $v_{\mathrm{root}}$ in the counterclockwise orientation of the boundary, and the rest of them go towards $v_{\mathrm{root}}$. The arc of the first kind is the top arc $\gamma^T$ of the loop γ, and the rest of them are the bottom arcs; see Fig. 6. The concatenation of the bottom arcs is denoted by $\gamma^B$.
Suppose that the loop doesn't intersect a fixed neighborhood of $v^D_{\mathrm{root}}$. The diameter of the entire loop is bounded by a uniform constant (depending on the chosen neighborhood) times the diameter of the top arc, and vice versa. This means that if the loop has diameter greater than a fixed number, then the top arc is traced by the finite subtree for fine enough mesh, according to Theorem 3.6.
The bottom arcs, from right to left, are also traced until the last branching point towards the points defining the finite subtree. Again by Theorem 3.6, the part of the loop which remains to be discovered has small diameter. Hence, if we define the approximate loop to close up with just a linear segment in that place, the loop and the approximation are close in the given metric.
So we define the finite-subtree approximation of the random loop collection to be the collection of all the discovered top arcs, concatenated with their discovered bottom arcs and the linear segment needed to close the loop. By Theorem 3.6, we have the following result.
3.6. Some a priori properties of the loop ensembles. The following theorem gathers some technical estimates needed below.
as N → ∞. (b) (Small loops and branching points are dense on the boundary.) There exists […] > 0 such that for each x ∈ ∂D and R > 0, […] as r → 0. Consequently, for all c > 1, […] for all δ > 0 and R > cδ, and thus for any β ∈ (0, 1), any r > 0 and any partitioning of the boundary of the domain into connected sets $I_j$ of diameter at least r, it holds that $P_\delta[\forall j$: $I_j$ is touched by a loop with diameter at most […]] […] as r → 0. Here $x_L(\gamma)$ and $x_R(\gamma)$ are the left-most and the right-most points of γ along the boundary of the domain (as seen from the root). We call the boundary arc $x_L(\gamma) x_R(\gamma)$ the support of γ on the boundary. Furthermore, for any constant […], […] as r → 0.
Proof. The bound (18) follows from Theorem 3.6. The bound (19) follows directly from the crossing bound of annuli at x, and the bound (20) merely rephrases the previous bound, specializing to r = cδ. For the bound (21), take $x_j$ to be any point on $I_j$, say not too close to the endpoints of $I_j$, and use the bound (20) for $x = x_j$, $R = \delta^\beta$; summing over j gives an upper bound for a union of events of the type in (20), and the required bound is obtained by taking the complement. The bound (23) is shown to hold by considering the domain $U \setminus \gamma(\tau)$, where τ is a stopping time such that the top arc of a big loop γ is reaching its endpoint x(γ) at time τ, and using the crossing bound of annuli at x(γ).
The proofs of (22), (24) and (25) are similar. The exploration process discovers, as seen from the root, the top arc of any loop before the lower arcs. As noted before, the diameter of the top arc is comparable to the diameter of the whole loop. Hence it is enough to work with loops whose top arc has diameter more than R.
For (22), stop the process when the current arc exits the ball of radius R/4 around the starting point $z_0$ of the arc. By the crossing estimate, the rest of the exploration process makes, with high probability, a branching point in the annulus $A(z_0, r, R/4)$ before finishing the loop at $z_0$ (namely, the estimate (11) used in $A(z_0, r, R/4)$ implies that the path will touch both sides of the slit domain boundary inside the annulus before reaching distance r from $z_0$). This implies that the event in (22) does not occur for that loop. These segments of diameter R/4 in the exploration process are necessarily disjoint (they start on the boundary and remain after that disjoint from the boundary), and hence their number is a tight random variable by Theorem 3.1. The bound (22) follows by a simple union bound.
For (24), stop the process when the diameter of the current arc is at least R and the tip lies within distance r from the boundary. Let $z_0$ be the closest point on the boundary. These arcs of diameter R are necessarily disjoint: they start from the boundary and remain after that disjoint from the boundary. The rest of the exploration makes, with high probability, a branching point before it exits the ball of radius R centered at $z_0$. The bound (24) follows by a simple union bound.
The proof of (25) is very similar and we omit the details. For the bound (26), let $\gamma_1$ and $\gamma_2$ be two loops of diameter larger than R. Let S be the set of points of $\gamma_2$, let $\gamma_1^T$ be the top arc of $\gamma_1$ and let $\gamma_1^B$ be the rest of $\gamma_1$. Without loss of generality, we can assume that $\gamma_1^T$ separates S from $\gamma_1^B$ in D; otherwise we can exchange the roles of $\gamma_1$ and $\gamma_2$. Suppose that $d_{\mathrm{loop}}(\gamma_1, \gamma_2) < r$. Then we claim that the Hausdorff distance of $\gamma_1^T$ and $\gamma_1^B$ is less than 2r. The distance from any point of $\gamma_1^B$ to $\gamma_1^T$ is less than r: this follows when we notice that any line segment connecting $\gamma_1^B$ to S intersects $\gamma_1^T$. For topological reasons, since $\gamma_1$ and $\gamma_2$ are oriented and $d_{\mathrm{loop}}$ uses this orientation, it also holds that the distance from any point of $\gamma_1^T$ to $\gamma_1^B$ is less than 2r: namely, any point of $\gamma_1^T$ has a point of $\gamma_2$ in its r-neighborhood, which in turn has distance less than r to $\gamma_1^B$. We can now use the exploration process to explore $\gamma_1^T$. Take a point $z_0 \in \gamma_1^T$ that has distance at least $\sqrt{rR}$ to the boundary; such a point exists by (22) and (25). Now, by the crossing property used in $A(z_0, r, \sqrt{rR})$ at the stopping time when the exploration has fully explored $\gamma_1^T$, with high probability the exploration of $\gamma_1^B$ doesn't come close to $z_0$.

Some consequences.
In the discrete setting we are given a tree-loop ensemble pair. The tree and the loop ensemble are in one-to-one correspondence, as explained earlier. Recall that, given the loop ensemble, the tree is recovered by the exploration process, which follows the loops in the counterclockwise direction and jumps to the next loop at boundary points where the loop being followed turns away from the target point. Recall also that the loops are recovered from the tree by noticing that the leftmost point of a loop corresponds to a branching point of the tree, and the rest of the loop is the continuation of the branch to the point just right of that branching point.
Consider any random tree-loop ensemble pair which is a subsequential weak limit of the tree-loop ensemble pairs of the FK Ising model. Since the discrete collections of loops have a finite number of big loops with the uniform bound (18), the loop collection is almost surely at most countable also in the limit (use the Portmanteau theorem for the closed event that there are at most n loops of diameter strictly greater than R). By the properties of the loop ensemble given in Theorem 3.10, the limiting pair and the process of taking the limit have the following properties:
• The loops are distinguishable in the sense that there is no sequence of pairs of distinct loops that converge to the same loop.
• Each loop consists of a single top arc, which is disjoint from the boundary except at its endpoints, and a non-empty collection of bottom arcs. In particular, the endpoints of the top arc (the leftmost and rightmost points of the loop) are different.
From these properties we can prove the following result. The second assertion basically means that there is a way to reconstruct the loops from the tree also in the limit.
Theorem 3.11. Let the pair of loop ensemble and tree in D be the almost sure limit of the corresponding discrete pairs as n → ∞. Write the limiting ensemble as $(\theta_j)_{j \in J}$ and the discrete ensembles as $(\theta_{n,j})_{j \in J}$ (with possible repetitions) such that almost surely, for all $j \in J$, $\theta_{n,j}$ converges to $\theta_j$ as n → ∞, and then set $x_{n,j}$ to be the target point of the branch of the discrete tree that corresponds to $\theta_{n,j}$ in the above bijection (described in the beginning of Sect. 3.6.1). Then:
• Almost surely all $x_{n,j}$ converge to some points $x_j$ as n → ∞, and all the branches $T_{x^+_{n,j}}$ converge to some branches denoted by $T_{x^+_j}$ as n → ∞. Furthermore, the $x_j$ are distinct, they form a dense subset of ∂D, and T is the closure of $(T_{x^+_j})_{j \in J}$.
• On the other hand, $(T_{x^+_j})_{j \in J}$ is characterized as the subset of T that contains all the branches of T that have a double point on the boundary. Furthermore, that double point is unique and it is the target point (that is, the endpoint) of that branch.
• Any loop $\theta_j$ can be reconstructed from $(T_{x^+_j})_{j \in J}$ in the following way: the leftmost point of $\theta_j \cap \partial D$ (the point closest to the root in the counterclockwise direction) is one of the double points $x^+_j$, and the loop $\theta_j$ is the part of $T_{x^+_j}$ between its first and last visits to $x^+_j$.

Precompactness of a single branch.
We now know that the sequence of exploration trees is tight in the space of curve collections, which carries the topology of the Hausdorff distance on compact subsets of the space of curves. This enables us to choose a convergent subsequence of any subsequence. However, it turns out that we need stronger tools to characterize the limit. We will review the results of [14] that we will use. The hypothesis of [14] is similar to the conditions in Sect. 3.2. Once that hypothesis holds for a sequence of random curves, it is shown in that paper that the sequence is tight in the topology of the space of curves. Furthermore, it is established that such a sequence is also tight in the topology of uniform convergence of driving terms of Loewner evolutions, in such a way that the mapping between curves and Loewner evolutions is uniform enough that, if a sequence converges in both of the above-mentioned topologies, the limits have to be the same.
The hypothesis of [14] for the FK Ising branch has already been established, since we can see it as a special case of Theorem 3.4. Theorem 3.12 ([14]). Under a certain hypothesis, a sequence of probability laws $P_n$ of random curves in $\mathbb{H}$ has the following properties: for each ε > 0, there exists an event K such that $\inf_n P_n(K) \ge 1 - \varepsilon$, and on K ∩ {γ simple} the capacity-parametrized curves form an equicontinuous family, their driving processes form an equicontinuous family and, finally, $|\gamma(t)| \to \infty$ as t → ∞ uniformly. Moreover, the driving processes on the event K ∩ {γ simple} are β-Hölder continuous with a bounded Hölder constant for any β ∈ (0, 1/2). In addition to [14], see also Section 6.3 in [13] for this type of argument in the case of site percolation.

Precompactness of finite subtrees.
For a fixed finite number of curves, it is straightforward to generalize Theorem 3.12. In fact, the conclusions of Theorem 3.12 hold for any finite subtree that we considered in Sect. 3.5.
In the rest of the paper we use the tools available to us and aim to characterize the scaling limits of finite subtrees of the exploration tree. If we manage to establish the uniqueness of the subsequential scaling limits of those objects, then Theorem 1.1 follows from tightness and from the approximation result, Theorem 3.9.

The setup for the observable.
It is natural to generalize the setup of the previous section to domains of type illustrated in Fig. 11.
Henceforth we consider $G^\blacklozenge$ with four special boundary vertices a, b, c, d, where the boundary edges (when considered as edges of the directed graph $G^\blacklozenge_\to \subset L^\blacklozenge_\to$) at a and c point inwards and the boundary edges at b and d point outwards. Now a and d play the roles of $v_{\mathrm{root}}$ and w, respectively, in the construction of the exploration tree of the previous section. The arcs ab and cd have white boundary ($L^\circ$ wired) and bc and da have black boundary ($L^\bullet$ wired).
To get a random cluster measure on $G^\blacklozenge$ consistent with the all-wired boundary conditions and the exploration process of Sect. 2, we have to count the wired arcs bc and da as being in the same cluster. This corresponds to the external arc configuration where a and b are connected by an arc and c and d are connected by an arc; denote such an external arc pattern by (ab, cd). Later we will denote internal arc patterns similarly. In fact, we choose not to draw the external arc cd. The reason is that it is not used in the definition of the observable, and the weights for the loop configurations that we get either with or without cd are all proportional by the same $\sqrt 2$ factor; thus it doesn't change the probability distribution.
The configuration on $G^\blacklozenge$ is $(\gamma_1, \gamma_2, l_i : i = 1, 2, \ldots, N_{\mathrm{loops}})$, where $\gamma_1$ and $\gamma_2$ are the paths starting from a and c, respectively. Define $\hat\gamma$ by (27), where α is the planar curve that realizes the exterior arc ab as in Fig. 11 and where concatenation of paths is used; the first case in (27) […]. The observable is defined for any $e \in E(G_\delta) \subset E(G^\blacklozenge_\delta)$; here $W(d_\delta, e)$ is the winding from the boundary edge of $d_\delta$ to e along the reversal of $\hat\gamma$, and $\theta_\delta$ (satisfying $|\theta_\delta| = 1$) is a constant whose value we specify later. Notice that $f_\delta$ doesn't depend on the choice of α, since the winding is well-defined modulo 4π.
(Fig. 11 caption: Generalized setting for the chordal exploration tree. We add an external arc from b to a with the following interpretation. The observable contains an indicator factor for the curve starting from c and a complex factor depending on the winding of that curve. If that curve goes to b, we continue to follow it through the external arc to a, and from there to d. On the other hand, if the curve starting from c ends directly at d, then the curve connecting a to b is counted as a loop, giving an additional weight $\sqrt 2$ to the configuration. The arc $bc \subset V_{\partial,1}$ is marked with *'s. Notice that a general domain can have common parts for different boundary arcs longer than one lattice step, such as near b. In fact, we need to consider domains in this generality if we wish to explore the interfaces starting at the marked points and to use the corresponding observables conditioned on the information of the progressing exploration.)

Preholomorphicity of the observable.
Let $\lambda = e^{-i\pi/4}$. Associate to each edge e of the modified medial lattice one of the following four lines through the origin, $\mathbb{R}$, $i\mathbb{R}$, $\lambda\mathbb{R}$, $\bar\lambda\mathbb{R}$, as in Fig. 12. Denote this line by l(e).
Spin preholomorphicity. Choose the constant $\theta_\delta$ in the definition of $f_\delta$ so that the value of $f_\delta$ at the edge e belongs to the line l(e). Then $\theta_\delta \in \{\pm 1, \pm i, \pm\lambda, \pm\bar\lambda\}$. The ± sign mostly doesn't play any role, but in some situations it should be chosen consistently. The observable $f_\delta$ satisfies the relation (29) for every vertex v of the medial graph, whose four neighboring edges in counterclockwise order are called $e_N$, $e_W$, $e_S$, $e_E$. The relation is verified using the same involution among the loop configurations as in [26]. Therefore, using (29), we obtain (30), where e is the edge between v and w and $\mathrm{Proj}_e$ is the orthogonal projection onto the line l(e); that is, if $l(e) = \eta_e \mathbb{R}$, where $\eta_e$ is a complex number of unit modulus, then $\mathrm{Proj}_e$ is given by the usual projection formula. Since $f_\delta$ on $V(G_\delta)$ satisfies the relation (30), we call it spin-preholomorphic (or strongly preholomorphic). Spin-preholomorphic functions satisfy a discrete version of the Cauchy-Riemann equations [26].
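For the reader's convenience, the orthogonal projection onto the line $l(e) = \eta_e \mathbb{R}$, $|\eta_e| = 1$, can be written explicitly as

```latex
\operatorname{Proj}_e(x)
  \;=\; \eta_e \operatorname{Re}\!\big( \bar{\eta}_e \, x \big)
  \;=\; \tfrac{1}{2} \big( x + \eta_e^{2} \, \bar{x} \big),
  \qquad x \in \mathbb{C},
```

which is the standard formula used in this context (cf. [26]).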

Martingale property of the observable.
Let $\gamma = T_{a,d}$, where $T_{a,d}$ is the branch of the exploration tree constructed in Sect. 2.3. Let's parametrize γ by lattice steps so that at integer times γ is at the head of an oriented edge (in $L^\blacklozenge_\to$) between a black and a white octagon, and at half-integer times it is at the tail of such an edge. Note that the step between the times t = k − 1/2 and t = k, k = 1, 2, …, is deterministic given the information up to time t = k − 1/2. Hence, between two consecutive integer times, at most one bit of information is generated: namely, whether the curve turns left or right at an "intersection". Even this choice might be predetermined if we are visiting a boundary vertex or a vertex visited before.
Denote the loop configuration by ω; in the four-marked-point setting it consists of two arcs and a number of loops. We note that the pair (γ[0, t], ω) can be sampled in two different ways (which basically describe the conditional distributions of one given the other): (1) In the first option, we sample first ω, and then γ[0, t] is a deterministic function of ω as explained above. (2) In the second option, we sample first γ[0, t] (or rather, we keep the sample of γ[0, t] of the previous construction) and then we (re-)sample a configuration ω′ in two steps (the prime symbol is only used so that there is no confusion with ω defined above; we can rename ω′ as ω in the end): we sample ω′ in the complement of γ[0, t] using the boundary condition given by γ[0, t], and then for each visit of γ[0, t] to the arc bc we flip an independent coin $\zeta_v \in \{0, 1\}$, v ∈ bc, such that […], and then we open, for each visited v ∈ bc with $\zeta_v = 1$, the edge $e_v \in E(L^\bullet)$ in ω′ ∪ γ[0, t], and call the resulting configuration ω′. See Table 1 for details about the equivalence of these two constructions of (γ[0, t], ω). The configurations are grouped in Table 1 in pairs, so that the configurations only differ by the state of the edge at v. By construction, the right column occurs for γ surely, since the left column leads to branching at v. The relative weights of the $L^\bullet$-edge at v being open or closed are independent of the rest of the configuration and are calculated in the table, giving (32); namely, the relative weight of the pair of configurations is a constant. (The involution in which the state of the random cluster configuration is changed at v, and which is illustrated in the corresponding pair of figures, preserves the winding factor in the observable and the relative weight of the configuration.) Notice also that bc mattered only here, since when the branch hits ab or da (or even cd, if we continue the process all the way up to d) it continues to follow the same "loop".
Thus the loop-to-loop jump can only occur on bc.
For a non-negative integer t, let G ♠ t be the slit graph where we have removed γ [0, t] and let a t be the edge γ ([t − 1/2, t]).
Next we decorate the boundary arc bc with random variables $\xi_v$. Let $(F_t)_{t \in \mathbb{R}_+}$ be the filtration such that the σ-algebra $F_t$ is generated by γ[0, t] and $\xi_v$ for any v ∈ γ[0, t − 1] ∩ bc. Notice that we can couple $\zeta_v$ and $\xi_v$ in such a way that […] holds. We interpret $\zeta_v$ so that $\zeta_v = 1$ if and only if we jump at v from one boundary touching loop to the neighboring one, and $\xi_v$ so that $\xi_v = 1$ when there is no jump, and $\xi_v$ is a fair coin flip when there is a jump.
For any integer t, set $M^+_t$ […], where we can either take $b_t = b$ or let $b_t$ be the rightmost point visible from cd. The rightmost visible point means that we move $b_t$, at every branching event, to the place where we cut the next loop open; more specifically, to the edge which points towards that vertex. That is, $M^+_t$ is the probability that a is connected to d by the interface (internal arc) in the slit domain; by the proof of the next result, it can be interpreted as a conditional probability in the original domain. Lemmas 4.1 and 4.2 below are central to the proof of the main theorem: the law of the exploration process is determined by the martingale property of these quantities.

by the Markov property of the random cluster model, and hence
On the other hand, if γ(t) ∈ bc, then $M^+_{t+1} = (1 + \ldots)$ […]. Therefore $M_t$ is a martingale.
Define, for a fixed edge e, the process $(N_t)$ […], where $\theta_\delta$ is the constant of unit modulus chosen in Sect. 4.2. In words, the value of the process $(N_t)$ at time t is the value at the given edge e of the observable on the slit domain at time t. For the next property, $\theta_\delta$ needs to be chosen consistently; this can be done, for example, by requiring that the sign of the observable is always the same at d.
Proof. If γ(t) ∉ bc, then clearly $E[N_{t+1} \,|\, F_t] = N_t$. If γ(t) ∈ bc, then $N_{t+1} = N_t$, since by the observations in Table 1 the random variables $\mathbf{1}_{e \in \gamma}$ and $e^{-\frac{i}{2} W(d,e)}$ are independent of the state of the edge at γ(t), conditionally on the history up to time t. Therefore $N_t$ is a martingale.

Convergence of the observables.
In this subsection we first work on the scaling limit of the 4-point observable. In that approach we keep the points $c_\delta$ and $d_\delta$ at a macroscopically positive distance. This can be seen as a motivation for the so-called 3-point, or fused, observable, where we take the same limit but keep $c_\delta$ and $d_\delta$ within a microscopically bounded distance.

Convergence of the 4-point observable.
It is straightforward to apply the reasoning of [26] to this case. We only summarize the method here without any proof. See also the lecture notes [11].
The observable $f_\delta$ is given on the medial lattice. It is preholomorphic on the vertices of the lattice, and it has well-defined projections onto the complex lines l(e) defined on the edges of the lattice. Define a function $H_\delta$ on the square lattice by setting $H_\delta(B_0) = 1$, where $B_0$ is the black square next to d, and then extending to other squares via the increment […], where B and W are any pair of neighboring black and white squares and e is the edge of the medial lattice between them. Now $H_\delta$ is well-defined, since by the properties of $f_\delta$ the sum of the increments of $H_\delta$ along a closed loop is zero. The boundary values of $H_\delta$ are the following (see also Fig. 13):
• $H_\delta$ is equal to 0 on the arc cd.
• H δ is equal to 1 on the arcs da and bc.
• H δ is equal to 1 − β δ on the arc ab.
The relation between $f_\delta$ and $H_\delta$ becomes clearer after a small calculation. Namely, by this calculation, for neighboring black squares B, B′ with a common vertex $v \in L^\bullet$, […]. Notice that the complex number (B′ − B)/δ has modulus $\sqrt 2$. The natural interpretation is that $H_\delta(B)$ is the imaginary part of the discrete integral (counted in lattice-step units) of $f_\delta^2$ […]. Define the modification $\tilde H_\delta$ which is given by a bilinear polynomial $h(x, y) = a_1 + a_2 x + a_3 y + a_4 x y$ in each square and matches the values given at the corners of the square. Then, at each interior point, $\tilde H$ […]. Next, apply standard difference estimates to show that the preharmonic functions $\tilde H^\bullet_\delta$ and $\tilde H^\circ_\delta$ have convergent subsequences, and also, using crossing estimates, show that their boundary values approach each other. Since $0 \le \beta_\delta \le 1$, by taking a subsequence we may assume that $\beta_\delta$ converges to a number β and $H_\delta$ converges to a harmonic function on the domain with the boundary values H = 0 on cd, H = 1 on bc and da, and H = 1 − β on ab.
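For orientation, in the standard setup of [26] the increment defining such a function $H_\delta$ is of the form (recorded here as a hedged reconstruction, since the display was lost in this text):

```latex
H_\delta(B) - H_\delta(W)
  \;=\; \big| \operatorname{Proj}_{l(e)} f_\delta(e) \big|^{2} ,
```

where B and W are neighboring black and white squares and e is the medial edge between them; summing these increments along a closed loop gives zero by preholomorphicity, which is exactly the well-definedness claim above.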
As explained in [26, Section 5], we can extend the convergence of $H_\delta$ to the convergence of $f_\delta$, and hence, along the same subsequence as $H_\delta \to H$, also $\frac{1}{\sqrt{2\delta}} f_\delta$ converges to $\sqrt{\phi'}$, where φ is any holomorphic function with Im φ = H. In fact, the value of β is determined uniquely, and it depends only on the conformal type of the domain; we will give the argument in Sect. 4.5 for completeness, following the lines of Sect. 4.4.2. Suppose for now that this is the case. Then it follows that the whole sequence $\frac{1}{\sqrt{2\delta}} f_\delta$ converges.

A proof for the fused case.
Consider now the same setup, but with c and d close to each other, say at most at a bounded lattice distance from each other. Denote by $z_0$ the point in the continuum that c and d approximate. We will deal with the case of a flat boundary near $z_0$. We change the definition of H to that of 1 − H; this means that H ≡ β on the arc ab, H ≡ 0 on the arcs bc and da, and H ≡ 1 on the arc cd. Then H is superharmonic when restricted to $L^\bullet$ and subharmonic when restricted to $L^\circ$, and the inequalities in (43) are reversed.
We expect that in the fused (3-point) limit, β δ goes to zero, which we have to compensate by renormalizing the observable. In effect, the value of H on cd will go to infinity and we expect to get a Poisson kernel type singularity. Hence we say that there is a singularity at z 0 (or at c or d).
We make the following definitions:
• The discrete half-plane $\mathbb{H}^{(z_0,\theta)}_\delta$ is a discrete approximation of $\mathbb{H}^{(z_0,\theta)} := z_0 + e^{i\theta} \mathbb{H}$, where $\mathbb{H} = \{z \in \mathbb{C} : \mathrm{Im}\, z > 0\}$. Suppose that the boundary lies between two parallel lines at a distance which remains bounded (in lattice steps) as δ → 0. Assume also that the projection of a parametrization of the boundary onto one of the lines is a monotone function, at least in a sufficiently large neighborhood of $z_0$. This ensures that there are no long fjords near $z_0$.
• $H^+_\delta$, $f^+_\delta$ are the half-plane functions on $\mathbb{H}^{(z_0,\theta)}_\delta$ with the "singularity" at $z_0$; that is, $f^+_\delta$ is the unique (up to ± sign) bounded preholomorphic function on $\mathbb{H}^{(z_0,\theta)}_\delta$ with the singularity at $z_0$.
• $H_\delta$, $f_\delta$ are the functions on the domain with the "singularity" at $z_0$.
The next lemma gives the convergence of the observable in a half-plane. We will compare the other observables to this one.
The following statements hold: (i) As δ → 0, the sequence $\hat H^+_\delta$ converges uniformly on compact sets to $\mathrm{Im}\big({-}\frac{e^{i\theta}}{z - z_0}\big)$. (ii) For each sequence $\delta_n \searrow 0$ as n → ∞, there exist a subsequence $\delta_{n_k}$ and a constant $c^+ \in \mathbb{R}_{>0}$ such that, as k → ∞, the sequence $L_{\delta_{n_k}}/\delta_{n_k}$ converges to $c^+$ and the sequence $\delta_{n_k}^{-1} H^+_{\delta_{n_k}}$ converges uniformly on compact sets to $c^+ \,\mathrm{Im}\big({-}\frac{e^{i\theta}}{z - z_0}\big)$.
Proof. Let's first prove (i) and then use it to prove (ii). We need here that $\hat H^+_\delta(w_\delta) = 1$ and $\hat H^+_\delta \ge 0$. Harnack's inequality and the boundary Harnack principle [10] imply that for any r > 0, $\hat H^+_\delta \le C$ in $\mathbb{H}^{(z_0,\theta)}_\delta \setminus B(z_0, r)$. Fix a compact subset of $\mathbb{H}^{(z_0,\theta)}$ with non-empty interior. From the boundedness of $\hat H^+_\delta$ in the whole domain it follows that the corresponding f-function $\frac{1}{\sqrt\delta} \hat f^+_\delta$ remains bounded on that compact set; see [11, Section 5.1.3]. The boundary values of the restrictions of $\hat H^+_\delta$ to $L^\bullet$ and $L^\circ$ are close, and hence the harmonic extensions of the two restrictions to the interior are close. Hence (by a standard argument), along a subsequence, $\hat H^+_\delta$ converges to a function on that compact set, and the limit is harmonic at interior points. It follows that $\frac{1}{\sqrt\delta} \hat f^+_\delta$ converges uniformly on compact subsets along that subsequence, similarly as in Sect. 4.4.1.
By taking an increasing sequence of compact sets we see that the convergence takes place in the whole half-plane for a subsequence. The limit has to be the Poisson kernel of the half-plane normalized to have value 1 at the point z_0 + i e^{iθ}, because the limit is harmonic and Ĥ⁺_δ is non-negative, has zero boundary values away from z_0 and satisfies the normalization at w_δ. This claim can be proven using integration with respect to the Poisson kernel of the upper half-plane (shifted by a small ε > 0 towards the interior of the domain). Since the limit is the same for all subsequences, the whole sequence converges.
For (ii) we use the first claim, (i), and the fact that H⁺_δ = L_δ Ĥ⁺_δ. Notice that the harmonic upper and lower bounds for the restrictions of H⁺_δ to L• and L◦ can be bounded from above and from below, respectively, by a quantity of the form const · δ, just by considering the probabilities of simple random walks to exit the domain through the arc cd on these two lattices. Notice that we use here the fact that the boundary is flat around z_0 (though such bounds also hold true when the boundary is smooth and the approximating discrete boundary is well chosen; see also Remark 4.6). The best constants that we get might have a gap in between. Nevertheless, there exist constants C_1 and C_2 such that 0 < C_1 < δ^{−1} L_δ < C_2 < ∞ for small enough δ.
Thus we can take a subsequence δ_n so that δ_n^{−1} L_{δ_n} converges as n → ∞. Then the claim holds for c⁺ = lim_{n→∞} δ_n^{−1} L_{δ_n}. □

Proposition 4.4. (Here z_0 and θ are fixed, for simplicity.) For any r > 0, there exist a subsequence δ_{n_k} and constants c ∈ R_{>0} and β ∈ R_{≥0} such that for any sequence of domains Ω_{δ_n} that agree with H(z_0,θ)_{δ_n} in the r-neighborhood of z_0 and that converge to a domain Ω in the Carathéodory sense, the sequence δ_{n_k}^{−1} H_{δ_{n_k}} converges as k → ∞, uniformly on compact sets, to a harmonic function h that satisfies the boundary conditions (45). Moreover, δ_{n_k}^{−1} H⁺_{δ_{n_k}} converges uniformly on compact sets to c⁺ Im(−e^{iθ}/(z − z_0)) along the same subsequence, c = c⁺, and the convergence is uniform over all such domains Ω.
Proof. Use the same argument as in the proof of Lemma 4.3 (ii) to show that δ_{n_k}^{−1} H_{δ_{n_k}} converges uniformly on compact sets to a harmonic function h satisfying (45). Suppose, for contradiction, that c ≠ c⁺. Then, because f can be recovered from h by the formula f = √(2i ∂_z h), it is straightforward to analyze the asymptotics of the difference h̃ as z → z_0: there exists r > 0 such that h̃ is positive in H(z_0,θ) ∩ B(z_0, r).
Next we notice that h̃ is not bounded in H(z_0,θ) ∩ B(z_0, r). On the other hand, if we consider the discrete version h̃_δ of h̃, it must remain bounded on H(z_0,θ)_δ uniformly in δ because of the convergence to h̃. Thus it is bounded inside H(z_0,θ), which is a contradiction. □

Remark 4.6. This proof can be generalized to any domain with z_0 lying on a smooth boundary segment, if the domain is approximated in a nice way around z_0. This means, for instance, that the boundary near z_0 lies between two copies of the same smooth curve shifted by a bounded number of lattice steps. The upper and lower bounds for the harmonic function with a pole at a boundary vertex, evaluated at a fixed interior point, take the form const · δ, since we can bound them using hitting probabilities in regular regions such as squares or discs (or rather their discrete approximations). The constant depends on the chosen interior point as well as the continuum domain, but not on its discrete approximation. Namely, the local geometry of the boundary near the pole doesn't play a role, as we can take minima and maxima over the finite number of possibilities (when the boundary is between two curves which are closer than a finite multiple of δ). All H-functions on domains that share a given r-neighborhood of z_0 must have the same singular part. Moreover, the value at a fixed point of any fixed H-function can be used as a normalization for the other functions, and they converge to a limit.
The generalization to a non-smooth boundary would require a matching pair of upper and lower bounds for H_δ, the former for the restriction to L• and the latter for L◦. This is therefore equivalent to knowing (uniform bounds for) the leading term of the asymptotics of the exit probabilities for different lattice "boundary shapes" as δ → 0. These asymptotics are heavily influenced by the local geometry around z_0.
In the final result of this subsection, we derive a characterizing property of the constant β. Hence this constant is uniquely determined by the continuum setting and doesn't depend on the discrete approximations we are using. In fact, it only depends on the conformal type of the domain (Ω, a, b, z_0) with a prescribed length scale (derivative) at z_0.
Suppose for simplicity that Ω is a Jordan domain. Let h : Ω → R be a harmonic function; for instance, h is the scaling limit from Proposition 4.4. Suppose that z is a boundary point and h = β near z on the boundary. We define a weak version of the sign of the normal derivative ∂_n h(z) in the direction of the outer normal at a boundary point z of a (possibly non-smooth) domain.

Proof. The signs of the normal derivatives in (47) follow by the same argument as in [10], Section 6. Suppose that φ : U → H is conformal and onto and φ(c) = ∞.
Then, up to a universal multiplicative constant, the observable is determined by u = φ(a) and v = φ(b), and f_H can be written in terms of a quadratic polynomial Q. Since the coefficients of Q are real, it either has two real zeros or a pair of complex conjugate zeros (with non-zero imaginary parts). Since (50) holds and f_H is single-valued in H (being a scaling limit of a discrete observable which is single-valued), there cannot be any zeros of multiplicity one in the upper half-plane. Thus the zeros of Q are real. Let the zeros of Q be w_1 and w_2 with w_1 ≤ w_2. Then w_1 + w_2 = u + v by (51). Therefore we have to analyze four cases for the positions of w_1 and w_2 relative to u and v. We notice using (52) that only the first case is consistent with the signs of the normal derivative on R \ {u, v} given in (47). Thus we have shown that the value of β is uniquely determined: it is the only real value such that ∂_n h has the properties claimed.

Value of β for the 4-point observable.
We will determine the value of β for the 4-point observable in the same way as in Proposition 4.7. Remember that 0 ≤ β ≤ 1.
Consider f_H for the domain H with the four marked boundary points u < v < w and ∞. This means that h can be written explicitly, and hence f_H can be expressed in terms of a quadratic polynomial Q(z). Let's simplify things by setting u = 0 and w = 1, so that 0 < v < 1. Since the coefficients of Q are real, there are two options: either the zeros w_1 and w_2 form a complex conjugate pair, w̄_1 = w_2 with non-zero imaginary parts, or w_1 and w_2 are both real. Since (55) holds and f_H is single-valued in H, there cannot be any zeros in the upper half-plane. Hence the zeros are real.
Let's write also in this case the normal derivative in the direction of the outer normal, for all x ∈ R \ {0, v, 1}. By the same argument as in [10], Section 6, this normal derivative is negative on (−∞, 0) ∪ (v, 1) and positive on (0, v) ∪ (1, ∞). This is only possible if the two roots of Q are equal. Therefore, in addition to 0 < β < 1, the constant β has to satisfy a quadratic equation, and hence β = β_− or β = β_+, where the relevant expression is positive for β_+ and negative for β_− for all v ∈ (0, 1). Thus β = β_−, and we find the value of β in terms of the angle 0 < x < π/2 such that v = sin² x. Notice that the double root of Q is √(βv) and it lies in the interval (0, v).
To conclude, we state that the value of β is characterized uniquely by the formula (54), the fact that β ∈ (0, 1) and that the normal derivative of h H is negative on (−∞, u) ∪ (v, w) and positive on (u, v) ∪ (w, ∞).

4.6. A remark on crossing probabilities.
As a side remark, let's derive the probability P of the internal arc pattern under the random cluster measure in which there aren't any exterior connections, that is, the arcs γ_1 and γ_2 (as defined in Sect. 4.1) are not counted as loops in the weight of a configuration. Since √β and 1 − √β are proportional to P and √2 (1 − P), respectively, P can be solved explicitly, and by using standard trigonometric relations the result can be rewritten in a closed form. This is consistent with the result in [10].
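The explicit displays of this computation were garbled in extraction, but P can be recovered from the stated proportionality alone; here c > 0 is a bookkeeping constant of proportionality (our notation, introduced only for this derivation):

```latex
\sqrt{\beta} = c\,P,
\qquad
1-\sqrt{\beta} = c\,\sqrt{2}\,(1-P)
\;\Longrightarrow\;
\frac{P}{\sqrt{2}\,(1-P)} = \frac{\sqrt{\beta}}{1-\sqrt{\beta}}
\;\Longrightarrow\;
P = \frac{\sqrt{2}\,\sqrt{\beta}}{\,1+\big(\sqrt{2}-1\big)\sqrt{\beta}\,}.
```

Cross-multiplying the middle identity gives √2 √β (1 − P) = P(1 − √β), from which the last expression follows by collecting the terms in P.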

5. Martingales and uniform convergence with respect to the domain.
Consider the scaling limit of a single branch from a to z_0 in the domain Ω. To simplify the setting, map the discrete random curves to a reference domain D using conformal maps, so that the resulting probability law P_δ is the law of a random curve in D from −1 to 1, and P* is any subsequential scaling limit of P_δ. Consider some scheme of parametrizing the curves that works for all δ. We will mostly use the half-plane capacity parametrization. Let X be the space of continuous functions from the time interval used for the parametrization to D. We consider X as a metric space with the metric defined by the sup norm. Denote by F_t the filtration generated by the curve up to time t. After we start exploring the branch we will move automatically from the setting of two points to a setting of three points. Hence we will also consider the setups (Ω, a, b, z_0), where z_0 is the fused arc (cd), and (Ω, a, b, c, d).
Remember the two martingales introduced in Sect. 4.1. Notice that we have included the scaling by a power of δ that makes these quantities converge in the limit δ → 0, at least along subsequences. Consider one of the processes above, for instance (M_t^{(δ)})_{t≥0}. The martingale property can be formulated so that if 0 ≤ s < t and if ψ : X → R is bounded, uniformly continuous and F_s-measurable, then the corresponding expected values agree. Now, due to the uniform convergence of β^{(δ)}(Ω_t, a_t, b_t, z_0) over the domains, Proposition 4.4, the expected values on both sides converge, and we obtain the martingale property in the limit. By a similar argument, Ñ_t = lim_{n→∞} N_t^{(δ_n)} is a martingale. Consider (Ω, a, b, z_0) and map it to the upper half-plane conformally. Suppose that w is the image of z_0. Then the singularity is of the form c Im(−1/(z − w)). If we have a slit domain (Ω \ γ[0, t], γ(t), b, z_0) and we apply further the Loewner map g_t in the upper half-plane, then the singularity has to be c Im(−g_t′(w)/(g_t(z) − g_t(w))) = c Im(−1/(z − w)) + o(1) as z → w. This shows how the functions H transform. Since f² = Φ′, where Φ is holomorphic and Im Φ = H, the observable f transforms accordingly. Now if we choose to send w to ∞, then the observable is of the form (49). For Loewner chains g_t′(∞) = 1 when appropriately interpreted. As we saw in the proof of Proposition 4.7, the value of β_t can then be read off. We define the processes (M_t)_{t≥0} and (N_t)_{t≥0} of (72): the former quantity is proportional to M̃_t and the latter one to the first non-trivial coefficient in the expansion of (71) around z = ∞. Here the ± signs are constant on each excursion of V_t − U_t and distributed as independent fair (symmetric) coin flips for each excursion. Here we interpret that an excursion starts at 0, ends at 0 and is positive in between.
By the martingale properties in Sect. 4.1 and the convergence results of the observables we have the following result.
Proposition 5.1. Let P* be a subsequential limit of the sequence of laws of the FK Ising branch in discrete approximations of (Ω, a, b, z_0). Let φ : Ω → H be a conformal, onto map such that φ(z_0) = ∞. Let γ be the random curve distributed according to P* in the capacity parametrization, let U_t = φ(γ(t)) and let V_t be the image of the rightmost point of the hull of φ(γ[0, t]). Let the signs in (72) be i.i.d. fair coin flips independent of γ. Then the processes (M_t)_{t≥0} and (N_t)_{t≥0} are martingales.
In the rest of this section we consider the following martingale problem: take (M_t)_{t≥0} and (N_t)_{t≥0} as above, that is, satisfying relation (73). What is their law, given that (M_t)_{t≥0} and (N_t)_{t≥0} are martingales?
We claim that the required properties (with the above functional dependency of the processes) uniquely determine the joint law of the processes. We call the verification of this claim, together with the explicit formulation of the law, the solution of the martingale problem.
The solution is divided into two parts. In Sect. 5.3 we will show that (|M_t|^{1/α})_{t≥0}, for some α > 0, is a Bessel process. In Sect. 5.4, we will show that (V_t)_{t≥0} follows an evolution in which V_t is a sum of a term from the Loewner equation and a term whose value changes only on the random Cantor set {t : U_t = V_t}, and then we show that the latter "singular" term is in fact identically 0.

5.3. Characterization of V_t − U_t.
In this section, we show how the "martingale problem" characterizes the law of (V_t − U_t).
More concretely, we work towards the following theorem. Its proof is given in Sect. 5.3.3.
Theorem 5.2. Let X_t = V_t − U_t, where U_t and V_t are the processes followed by the marked points for the subsequential scaling limit of the FK Ising exploration process. Then (X_t)_{t≥0} is a Bessel process of dimension δ = 3/2, scaled by the constant √(16/3).
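The dimension δ = 3/2 matches the standard SLE(κ, ρ) bookkeeping; we record it here as background (a consistency check, not part of the proof below). For the driving pair of SLE(κ, ρ), the difference X_t = V_t − U_t is a time-changed Bessel process:

```latex
% Driving pair of SLE(kappa, rho); set X_t := V_t - U_t:
dV_t = \frac{2\,dt}{V_t-U_t},
\qquad
dU_t = \sqrt{\kappa}\,dB_t + \frac{\rho\,dt}{U_t-V_t}
\;\Longrightarrow\;
dX_t = -\sqrt{\kappa}\,dB_t + \frac{(\rho+2)\,dt}{X_t}.
% Hence X_t/\sqrt{\kappa} is a Bessel process of dimension d with (d-1)/2 = (\rho+2)/\kappa:
d \;=\; 1+\frac{2(\rho+2)}{\kappa}
  \;\overset{\rho=\kappa-6}{=}\; 3-\frac{8}{\kappa}
  \;\overset{\kappa=16/3}{=}\; \frac{3}{2}.
```

So for κ = 16/3 and ρ = κ − 6, X_t is √κ = √(16/3) times a Bessel process of dimension 3/2, in agreement with the theorem.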

5.3.1. Relation to Lévy's and Stroock-Varadhan martingale characterizations.
The argument which we will present can be compared to Paul Lévy's characterization of Brownian motion: the law of Brownian motion (B_t)_{t≥0} is characterized by the fact that B_t and B_t² − t are martingales. In the setting of general diffusions, the classical Stroock-Varadhan martingale problem approach describes the weak solutions X_t of stochastic differential equations of the form dX_t = √a dB_t + b dt (with coefficients a and b satisfying suitable measurability conditions) as exactly those processes for which certain integral functionals of test functions f are martingales; see Sections V.19 and V.20 in [19]. However, similarly to Lévy's theorem, there are stronger results stating that two (well-chosen) martingales are enough to characterize the law of a diffusion, see [2,27]. We take this path, using two martingales to show that the diffusion in question is the Bessel process.
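For concreteness, the Stroock-Varadhan test-function formulation alluded to above reads as follows (standard form; see [19, Sections V.19 and V.20]): X solves dX_t = √a dB_t + b dt weakly if and only if, for every f in a suitable class of test functions,

```latex
M^{f}_t \;:=\; f(X_t)-f(X_0)
  -\int_0^t \Big(\tfrac12\,a(X_s)\,f''(X_s)+b(X_s)\,f'(X_s)\Big)\,ds
\quad\text{is a martingale.}
```

The point of the stronger results cited in the text is that, for well-chosen processes, it suffices to verify this property for just two functionals rather than a whole class of test functions f.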

5.3.2. Lemmas.
We need the next two lemmas, which we write in greater generality suitable for the 4-point case.
Lemma 5.4. Suppose that T > 0 is a stopping time and suppose that ψ ∈ C² satisfies ψ(0) = 0 and ψ′(0) = 0. Suppose that A_t and C_t are continuous, predictable processes which satisfy: (1) A_t is of bounded total variation; (2) C_t is non-decreasing, differentiable and satisfies Ċ_t > 0 almost surely on [0, T).

Then any continuous martingale (M_t)_{t∈[0,T]}, with the property that the stated process is a martingale, satisfies the conclusion of the lemma.

Proof. Since (M_t)_{t∈[0,T]} is a continuous martingale, it has a quadratic variation process ⟨M⟩_t. By Itô's formula, and since ψ(M_t) and ψ′(M_t) vanish when M_t = 0, the identity (78) follows. The process 1_{M_t=0} is predictable, because it is a pointwise limit of adapted continuous processes (for instance, 1_{M_t=0} = lim_{n→∞} max{0, 1 − n|M_t|}), and hence the left-hand side of (78) is a local martingale. On the other hand, the right-hand side of (78) is a bounded variation process. Hence ∫_0^t 1_{M_s=0} Ċ_s ds = 0 almost surely. Since Ċ_s > 0, the claim follows. □
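The Itô step of the proof rests on the standard expansion (the displayed identity leading to (78) was lost in extraction; this is the generic form behind it):

```latex
\psi(M_t) \;=\; \psi(M_0)
  + \int_0^t \psi'(M_s)\, dM_s
  + \tfrac12 \int_0^t \psi''(M_s)\, d\langle M\rangle_s .
```

Since ψ(0) = ψ′(0) = 0, both ψ(M_s) and ψ′(M_s) vanish on the set {M_s = 0}, which is what allows the indicator 1_{M_s=0} to be brought inside the integrals in the argument above.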
Lemma 5.5. If (M_t)_{t∈[0,T]} is a continuous martingale with d⟨M⟩_t = σ_t² dt and σ_t > 0 for any t such that M_t ≠ 0, then (B_t)_{t∈[0,T]}, defined as the corresponding stochastic integral against M, is well-defined and continuous, and it is a one-dimensional standard (F_t)-Brownian motion.
Proof. We can approximate the set F from below by finite unions of intervals of this type and use the monotone convergence theorem to show that the two integrals agree, where the left-hand side is a Lebesgue-Stieltjes integral and the right-hand side is a Lebesgue integral defined pointwise in the randomness, almost surely. This shows that 1_F σ_s^{−1} is dM_s-integrable (belongs to the square-integrable processes with respect to the variation process of M) and that (B_t) is well-defined and continuous in t. Therefore, clearly, (B_t) is a local martingale and satisfies ⟨B⟩_t = t. Hence (B_t) is a standard Brownian motion by Lévy's characterization theorem. □

5.3.3. X_t = V_t − U_t is a Bessel process.
Proof of Theorem 5.2. The claim follows from the next theorem with M_t = ½ √(X_t), or written the other way, X_t = 4 M_t², and ψ(x) = x⁴. Notice that if C M_t⁴, where C > 0 is a constant, is a squared Bessel process of dimension δ, then (√C/4) X_t is a Bessel process of dimension δ. Here δ = 3/2 and C = 2δ = 3, which implies that X_t is a Bessel process scaled by the constant 4/√C = √(16/3). □

Theorem 5.6. Let (a, b) ⊂ R. Suppose that ψ : (a, b) → R is a twice continuously differentiable function which is convex, i.e. ψ″ ≥ 0, and such that ψ′ is strictly increasing. Let M = (M_t)_{t∈R₊} be a continuous stochastic process adapted to a filtration (F_t)_{t∈R₊} and let N = (N_t)_{t∈R₊} be the process defined by N_t = 2ψ(M_t) − t. Suppose that M and N are martingales. Then the following claims hold.
(1) The process W = (W_t)_{t∈R₊} defined in the proof below is a standard Brownian motion. (2) If for some ε > 0, ψ(x) = |x|^{2+ε}, then there exist constants C > 0 and 1 < δ < 2 such that Z_t = C ψ(M_t) is a squared Bessel process of dimension δ. More specifically, the constants are δ = 2(1+ε)/(2+ε) and C = 2δ. (3) Suppose there exist continuous functions F, F̃ such that 2F(x) = F̃(2ψ(x)) for all x; in particular, ψ is positive except possibly at the single point (there exists at most one such point) where ψ is zero. Then Z_t = 2ψ(M_t) is a solution to a stochastic differential equation driven by some standard Brownian motion (W_t)_{t∈R₊}.
Proof. (1) Similarly as in the previous section, we notice that we can do stochastic analysis with M, because M is a continuous martingale; see Chapter 2 of [12]. The same argument as above, using that (M_t)_{t∈R₊} and (N_t)_{t∈R₊} are martingales and that N_t is given in terms of M_t as N_t = 2ψ(M_t) − t, tells us that the process W is a continuous martingale with variation process ⟨W⟩_t = t. Namely, by Itô's lemma we have dN_t = (ψ″(M_t) d⟨M⟩_t − dt) + 2ψ′(M_t) dM_t, and thus by the martingale property of N_t the quantity inside the first brackets has to vanish identically, and we can apply the previous lemmas for the claim. Hence, by Lévy's characterization theorem, W is a standard Brownian motion. (2) When ψ(x) = |x|^{2+ε}, there is a constant D > 0 such that ψ′(x)² = D² ψ(x) ψ″(x).
Hence, if we choose C = 2D^{−2}, Z_t is a squared Bessel process with the parameter δ = 2D^{−2}. Here we used the fact that d⟨M⟩_t = dt/ψ″(M_t), which was established in part (1), and from which the claim follows. □
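In the application to Theorem 5.2 the exponent is ε = 2, i.e. ψ(x) = x⁴; the constants then evaluate as follows (a direct arithmetic check of the values quoted in the proof of Theorem 5.2, in the notation X_t = 4M_t²):

```latex
\delta \;=\; 2\,\frac{1+\varepsilon}{2+\varepsilon}\Big|_{\varepsilon=2}
       \;=\; 2\cdot\frac{3}{4} \;=\; \frac{3}{2},
\qquad
C \;=\; 2\delta \;=\; 3,
% so Z_t = 3 M_t^4 is a squared Bessel process of dimension 3/2, and
\sqrt{Z_t} \;=\; \sqrt{3}\,M_t^{2} \;=\; \frac{\sqrt{3}}{4}\,X_t,
\qquad
X_t \;=\; \frac{4}{\sqrt{3}}\,\sqrt{Z_t}
      \;=\; \sqrt{\tfrac{16}{3}}\cdot \mathrm{BES}(3/2).
```

Thus X_t is indeed a Bessel process of dimension 3/2 scaled by √(16/3), as stated in Theorem 5.2.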

5.4. Characterization of (U_t, V_t).
The next result, together with Theorem 5.2, gives the distribution of the pair of processes (U_t, V_t).
Theorem 5.8. Let X_t, V_t, U_t be as in Theorem 5.2. Then U_t and V_t satisfy the following pair of equations.

Remark 5.9. The equation for V_t is the Loewner equation. Notice that ∫_0^t ds/X_s is finite, since (X_t) is a Bessel process of dimension δ = 3/2. The equation for U_t is obtained from the one for V_t and the definition of X_t.
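The displayed equations of Theorem 5.8 did not survive extraction; given Remark 5.9 (the V-equation is the Loewner equation, and the U-equation follows from it together with the definition X_t = V_t − U_t), they presumably take the form:

```latex
dV_t \;=\; \frac{2\,dt}{V_t-U_t} \;=\; \frac{2\,dt}{X_t},
\qquad
U_t \;=\; V_t - X_t ,
```

with (X_t) the scaled Bessel process of Theorem 5.2; the finiteness of ∫_0^t ds/X_s noted in Remark 5.9 is what makes the first equation meaningful even though X_s hits zero.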
Proof of Theorem 5.8. Since ∫_0^t X_s^{−1} ds is finite, we have shown so far that V_t satisfies the Loewner equation up to an additional term Λ_t, where Λ_t is some non-decreasing process which is constant on any subinterval of {t : X_t > 0}. See Proposition C.2 in Appendix C for the generalized Loewner equation. The claim follows once we show that Λ_t ≡ 0. This is shown in the next proposition (Proposition 5.10), whose proof uses the following martingale: the relevant quantity is a conditional probability, given F_t, of the event γ_1 ⊂ γ̂, and hence (M_t)_{t≥0} is a martingale.
Here the ± sign is needed to extend the martingale property beyond the hitting of 0 by M_t. We can solve X_t in terms of Y_t and M_t. From the expansion of f as z → ∞, we get a second martingale. Write (100) in a form involving ψ(m) = 4 [m²/(1 − m²)]². Note that W_t is always differentiable and Y_t is differentiable outside the set of times Ξ := {t : X_t = 0} = {t : M_t = 0}. Now we have to solve the following martingale problem: for a given filtered probability space (P*, (F_t)), determine the law of (M_t, Y_t, W_t)_{t∈[0,T]} which satisfies: (1) (W_t) is strictly increasing and C¹ for all t ∈ [0, T]; (2) (Y_t) is strictly decreasing and C¹ for all t ∈ [0, T] \ Ξ; (3) Ẇ_t is given by the stated formula, and for a non-decreasing process Λ_t given by Proposition C.2 in Appendix C, the stated relation holds. It is understood that solving the martingale problem means, as before, that we claim that these properties uniquely characterize the law of (M_t), and hence also the law of (X_t), and that the law can be explicitly described.

Solving the martingale problem.
By the lemmas of Sect. 5.3.2, we can construct out of (M_t), which is a continuous martingale, a Brownian motion (B_t) and a process (σ_t), σ_t ≥ 0, both adapted to (F_t), such that M_t is the stochastic integral of σ against B.

Proof of Theorem 5.13. By comparing to a Bessel process as we did in Sect. 5.5.3, we see that ∫_0^t X_s^{−1} ds is finite. Therefore we have shown so far that V_t satisfies the generalized Loewner equation with an additional term Λ_t, where Λ_t is some non-decreasing process which is constant on any subinterval of {t : X_t > 0}. It remains to show that Λ_t ≡ 0. This then implies that (90) and (92) hold, and it finalizes the proof of Theorem 5.13.
Recall from the proof of Proposition 5.10 that Ξ = {t : X_t = 0}, and that the index of a Bessel process of dimension δ is defined as ν = δ/2 − 1. Notice that ν = −1/4 in our case. Now Λ_t ≡ 0 follows from Lemmas 5.11 and 5.16 below, which show that the Hausdorff dimension of Ξ is 1/4, and from the fact that (U_t)_{t∈[0,T]} is (1/2 − ε)-Hölder continuous for any ε > 0 by Lemma 5.15.
By [18], Exercise XI.1.25, and [5], Theorem III.15, the Hausdorff dimension of the support of the local time of a Bessel process with index ν ∈ [−1, 0] is −ν. By the Markov property of the Bessel process, the support of the local time is the entire set of times when the process is at the origin. Hence the set Ξ is sandwiched between two sets which have dimension arbitrarily close to 1/4.
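The index bookkeeping used in this step is the standard one for Bessel processes; writing Ξ = {t : X_t = 0} for the zero set of X:

```latex
\nu \;=\; \frac{\delta}{2}-1
  \;\overset{\delta=3/2}{=}\; -\frac14,
\qquad
\dim_H \Xi \;=\; 1-\frac{\delta}{2} \;=\; -\nu \;=\; \frac14 \;<\; \frac12 .
```

The inequality 1/4 < 1/2 is the crucial point: a non-decreasing process that can grow only on a set of dimension 1/4 cannot contribute, given the (1/2 − ε)-Hölder continuity of (U_t).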

Proof of Theorem 1.1
As a conclusion to this entire article we outline below the proof of Theorem 1.1.
As above, we consider a sequence of domains Ω_{δ_n} converging to a domain Ω with respect to a fixed interior point w_0. Let φ_n be the conformal map from Ω_{δ_n} to D normalized using w_0. Suppose also that we have a sequence of boundary points v_{δ_n} ∈ ∂Ω_{δ_n} such that φ_n(v_{δ_n}) converges to a point on ∂D. We use v_{δ_n} as the root point in the construction of the exploration tree.
The basic crossing estimates were established for the FK Ising exploration tree and its branches in Sect. 3.2, Theorem 3.4. Based on those estimates, the precompactness of probability laws of a single branch or a finite subtree (i.e., a subtree with a fixed number of target points) was shown in Sect. 3.7. By those results we can choose convergent subsequences. The structure of the tree is characterized by the target independence, the independence of the branches after disconnection and the martingale characterization of a single branch in Sect. 5. Here it is needed that the branches converge in the strong sense as capacity parameterized curves.
By these results, every sequence of finite subtrees of the approximating domains converges in distribution to a finite subtree of the SLE(κ, κ − 6) exploration tree with κ = 16/3. The convergence takes place, for instance, in the unit disc D after a conformal transformation and under the metric defined in Sect. 3.3.1. In fact, it is possible to extend this convergence to the original domain (without the conformal transformation). See Corollary 1.8 in [14] for such a result.
The precompactness of the probability laws of the full tree and of the loop collection were established in Theorems 3.1 and 3.8, respectively. Therefore we can choose subsequences such that both the full tree and the loop collection converge. The above together with the finite-tree approximation in Theorem 3.6 implies that the full tree has a unique limit which is the SLE(κ, κ − 6) exploration tree with κ = 16/3. Similarly using the finite-tree approximation in Theorem 3.9 we show that the loop collection has a unique limit which is characterized by the one-to-one correspondence to the exploration tree under the maps introduced in Sects. 1.2.1 and 2.3, see Theorem 3.11. This ends the outline of the proof. Notice that the convergence takes place in a topology where both the tree and the loop ensemble converge simultaneously and convergence occurs for the full objects, not just the finite tree approximations (with only a finite fixed number of target points).
The orientation of the image of the exploration tree when mapped to D depends on lim_n φ_n(v_{δ_n}), while the scaling limit of the loop collection is the same for any sequence v_{δ_n}. This gives the rotational invariance of the loop collection and thus implies the complete conformal invariance of the scaling limit.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A. Distortion of Annuli Under Conformal Maps
Let Ω be a simply connected domain and let φ : D → Ω be a conformal map. The next two lemmas establish bounds for the distortion of annuli in terms of their conformal modulus, which is a constant times log(R/r) for an annulus A = A(z_0, r, R). The proofs can be found in Section 6.3.3 of [13]. The results are needed in the proof of Theorem 3.1 above.

Proof. Let k be the number of minimal crossings of A which touch the boundary in such a way that their dual-open right-hand side touches the free boundary (i.e. the vertex sets of the dual-open path and the dual boundary intersect) and that along each of them there is at least one branching point. Then, for topological reasons, k is at most 2. Let m be the total number of crossings. Then m − k crossings are disjoint from the boundary or touch the boundary by their open left-hand sides. In particular, for these types of crossings, the crossing consists only of a part following a single loop. Thus the entire right-hand side is dual-open. Similarly as in the previous proof, we see that there are at least (m − k)/2 dual-open crossings of A.

Appendix C. Auxiliary Results on Loewner Evolutions
The results of this section are needed in Sect. 5.4 above.
For a Loewner chain (K_t)_{t∈[0,T]} in H driven by (U_t)_{t∈[0,T]}, let (V_t)_{t∈[0,T]} be the function defined by V_t = g_t(sup(K_t ∩ R)). The point V_t is thus the image of the rightmost point of the hull under the conformal map g_t. The following lemma can be extracted from the proof of Proposition 3.12 in [23].
Proof. Let ε > 0, J_t^ε = min{kε : k ∈ Z, kε > sup(K_t ∩ R)} and Ṽ_t^ε = g_t(J_t^ε). Then it follows from the monotonicity of g_t that V_t ≤ Ṽ_t^ε ≤ V_t + ε. By the Loewner equation we can write Ṽ_t^ε as in (122), where ξ_s^ε ∈ [0, ε] and ξ_s^ε ≠ 0 only when Ṽ_u^ε − U_u hits zero as u ↗ s. Thus the sum on the right-hand side is a finite sum. By Lemma C.1 and Lebesgue's dominated convergence theorem, the middle term in (122) converges to the integral in (121). Since also Ṽ_t^ε converges uniformly to V_t, it holds that the sum Σ_{s≤t} ξ_s^ε converges uniformly to some continuous function Λ_t as ε → 0. It follows that Λ_t is non-decreasing and Λ_0 = 0. The claim follows. □
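The displays (121)-(122) were lost in extraction; the structure of the approximation in this proof is (in our reading of the argument) that between the correction times Ṽ^ε evolves by the Loewner equation, while the corrections of size at most ε are collected in a sum:

```latex
\tilde V^{\varepsilon}_t
 \;=\; \tilde V^{\varepsilon}_0
 \;+\; \int_0^t \frac{2\,ds}{\tilde V^{\varepsilon}_s - U_s}
 \;+\; \sum_{s\le t} \xi^{\varepsilon}_s ,
\qquad \xi^{\varepsilon}_s \in [0,\varepsilon].
```

The three terms correspond to those of (122): as ε → 0 the integral converges to the Loewner integral of (121), and the sum converges to the non-decreasing singular term Λ_t.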

Remark C.3. Notice that in fact,
Thus ∫_0^T dt/(V_t − U_t) < ∞ follows from Lebesgue's monotone convergence theorem.