Abstract
We study \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curves, with \(\kappa \) and \(\rho \) chosen so that the curves hit the boundary. More precisely, we study the sets on which the curves collide with the boundary at a prescribed “angle” and determine the almost sure Hausdorff dimensions of these sets. This is done by studying the moments of the spatial derivatives of the conformal maps \(g_t\), employing the Girsanov theorem, and using imaginary geometry techniques to derive a correlation estimate.
Introduction
The Schramm–Loewner evolution (\({{\,\mathrm{SLE}\,}}\)) is a one-parameter family of random fractal curves that was introduced by Oded Schramm as a conformally invariant candidate for the scaling limit of two-dimensional discrete models in statistical mechanics. Consider the half-plane Loewner differential equation
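For reference, in its most common normalization (half-plane capacity \(2t\)) the chordal Loewner equation reads

```latex
\partial_t g_t(z) = \frac{2}{g_t(z)-W_t}, \qquad g_0(z)=z, \qquad z \in \mathbb{H},
```

note that Sect. 2 below instead normalizes so that \(\text {hcap}(K_t)=at\) with \(a=2/\kappa \), which replaces the numerator 2 by a.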
where the driving function, \(W_t\), is continuous and real-valued. The chordal Schramm–Loewner evolution with parameter \(\kappa >0\) (\({{\,\mathrm{SLE}\,}}_\kappa \)) is the curve with corresponding conformal maps given by the Loewner equation with \(W_t = \sqrt{\kappa } B_t\). \({{\,\mathrm{SLE}\,}}_\kappa \) exhibits interesting geometric behaviour. If \(0 < \kappa \le 4\), the curves are simple and do not intersect the real line; if \(\kappa > 4\), they have non-traversing self-intersections and collide with the real line; and if \(\kappa \ge 8\), the curves are space-filling. For \(\kappa > 0\), the almost sure Hausdorff dimension is \(d_\kappa = \min \{2,1+\frac{\kappa }{8}\}\), see e.g. [4]. For \(\kappa > 4\), the intersection of an \({{\,\mathrm{SLE}\,}}_\kappa \) curve with the real line is a random fractal of almost sure Hausdorff dimension \(\min \{2-8/\kappa ,1\}\), see [2] and [24]. In [1], Alberts, Binder and Viklund studied and computed the almost sure Hausdorff dimension spectrum of the random sets of points where the \({{\,\mathrm{SLE}\,}}_\kappa \) curve, for \(\kappa > 4\), hits the real line at a prescribed “angle”.
If we again consider the Loewner equation, but instead let \(W_t\) be the solution to the following system of SDEs
where \(x_R\) is a point on the real line, called the force point, and \(\rho > -2\) is an associated weight, then the Loewner chain is generated by a random curve, called an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve (for the definition when \(\rho \le -2\), see [20]). This two-parameter family of random fractal curves is a natural generalization of \({{\,\mathrm{SLE}\,}}_\kappa \), in which one keeps track of the force point as well as the curve. The weight \(\rho \) determines a repulsion (if \(\rho > 0\)) or attraction (if \(\rho < 0\)) of the curve from the boundary. For \(\rho = 0\), it is the ordinary \({{\,\mathrm{SLE}\,}}_\kappa \). For \(\kappa \in (0,8)\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4), \frac{\kappa }{2}-2)\) and \(x_R=0^+\), the Hausdorff dimension of the intersection of \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) with the real line is almost surely \(1 - (\rho +2)(\rho +4-\kappa /2)/\kappa \) (see [21] and [28]). What we are interested in studying in this article is the dimension spectrum studied by Alberts, Binder and Viklund in [1], but for \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\). In [3], the authors apply the main result of this paper to describe the boundary hitting behaviour of the loops of the two-valued sets of the Gaussian free field.
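As a sanity check (ours, not from the paper), the boundary dimension formula above can be tested numerically: at \(\rho = 0\) it reduces to the \({{\,\mathrm{SLE}\,}}_\kappa \) value \(2-8/\kappa \), and it vanishes at the critical weight \(\rho = \kappa /2-2\), consistent with the curve no longer hitting the boundary there.

```python
# Sanity checks (not from the paper's proofs) on the stated almost sure
# Hausdorff dimension 1 - (rho + 2)(rho + 4 - kappa/2)/kappa of the
# intersection of SLE_kappa(rho) with the real line.
def boundary_dim(kappa, rho):
    return 1 - (rho + 2) * (rho + 4 - kappa / 2) / kappa

# For rho = 0 this reduces to the SLE_kappa boundary dimension 2 - 8/kappa.
for kappa in (5.0, 6.0, 7.5):
    assert abs(boundary_dim(kappa, 0.0) - (2 - 8 / kappa)) < 1e-12

# At the critical weight rho = kappa/2 - 2 the curve stops hitting the
# boundary, and indeed the dimension degenerates to 0.
for kappa in (2.0, 4.0, 6.0):
    assert abs(boundary_dim(kappa, kappa / 2 - 2)) < 1e-12
```
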
Let \(\eta \) be the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with \(x_R = 0^+\) and let \(\tau _s(x)\) be the first time \(\eta \) is within distance \(e^{-s}\) of x, i.e.,
Let \((g_t)\) be the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) Loewner chain, let \(g_t'\) denote the spatial derivative of \(g_t\) and for \(\beta \in {\mathbb {R}}\), define
that is, \(V_\beta \) is the set of points \(x \in {\mathbb {R}}_+\) at which \(g_{\tau _s}'\) decays like \(e^{-\beta s}\). This can be viewed as a generalized hitting “angle” of the real line (see the discussion in the introduction of [1]). We shall view the decay of \(g_{\tau _s}'(x)\) as the decay of a certain harmonic measure. Indeed, let \(r_t\) be the rightmost point of \(\eta ([0,t]) \cap {\mathbb {R}}\) and let \(H_t\) be the unbounded connected component of \({\mathbb {H}}{\setminus } \eta ([0,t])\). Then the harmonic measure from infinity of \((r_t,x]\) in \(H_t\) is defined as
Assume that we have
for some \(\beta \). By using first Schwarz reflection and then the Koebe 1/4 theorem, we have
for some constant c, possibly different from the previous one; see Fig. 1. As we see in (2), however, we allow a subexponential error, as up-to-constants asymptotics would be too restrictive to require.
For fixed \(\kappa >0\), \(\rho \in {\mathbb {R}}\), let
and write \(\beta _- = \inf \{\beta :d(\beta )>0\}\) and \(\beta _+ = \sup \{\beta :d(\beta )>0\}\). The main theorem of the paper is the following.
Theorem 1.1
Let \(\kappa \in (0,4]\), \(\rho \in (-2, \frac{\kappa }{2}-2)\), \(x_R = 0^+\) and \(\beta \in [\beta _-,\beta _+]\). Then, almost surely,
The reason for restricting \(\rho \) to values smaller than \(\kappa /2 - 2\) is that this is the critical value for the curve to hit the boundary; that is, the curve will not hit the boundary if \(\rho \ge \kappa /2 - 2\).
An interesting observation is that there is a typical behaviour of the curve hidden in this theorem: there is a \(\beta \) such that \(V_\beta \) has full Hausdorff dimension, that is, such that \(\text {dim}_H V_\beta = \text {dim}_H \eta \cap {\mathbb {R}}_+\). Indeed, if \(\beta _0 = (4+2\rho )/(8-\kappa +2\rho )\), then
which was proven in [21] to be the Hausdorff dimension of the curve intersected with the real line.
Remark 1.2
If we let
then we can rephrase Theorem 1.1 as
To prove Theorem 1.1, it will be more convenient to consider the sets
and then use that \(V_\beta = V_{\beta /(1+\rho /2)}^*\).
We now give an overview of the paper. In Sect. 2, we introduce the preliminary material needed in the rest of the paper, such as \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes, the Gaussian free field and the imaginary geometry coupling. In order to make the paper more self-contained, the section on imaginary geometry is more extensive than strictly necessary. Furthermore, we use the Girsanov theorem to weight the measure with the local martingale given by the product of a time change of \(g_t'(x)^\zeta \) (where \(\zeta \) is a parameter in one-to-one correspondence with \(\beta \)) and a compensator. Under this new measure we can compute the asymptotics of \(g_{\tau _s}'(x)^\zeta \), which we use in Sect. 3 to derive a one-point estimate. It turns out to be strong enough to give the upper bound on the dimension of \(V_\beta \), so this can actually be achieved immediately after Corollary 3.2. The rest of Sect. 3 is dedicated to studying the mass concentration of the weighted measures, which we need for the correlation estimate. Section 4 is dedicated to the proof of the two-point estimate needed to prove Theorem 1.1. This is done by employing the coupling between SLE and the Gaussian free field. We finish the paper in Sect. 5 by first establishing the upper bound on the dimension of \(V_\beta \) using the one-point estimate of Sect. 3, and then constructing Frostman measures and using the two-point estimate to show that the s-dimensional energy is finite for every \(s < d(\beta )\), and hence that the Hausdorff dimension cannot be smaller than \(d(\beta )\).
We believe Theorem 1.1 to be true for all \(\kappa \) and \(\beta \in [\beta _-(\kappa ,\rho ),\beta _+(\kappa ,\rho )]\), and our upper bound is in fact valid for all parameters. Given the spectrum for \(\kappa \in (0,4]\), it seems natural to try to prove the result for \(\kappa > 4\) using SLE duality, i.e., the fact that the outer boundaries of \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curves are variants of \({{\,\mathrm{SLE}\,}}_{16/\kappa }\) curves, similar to what is done in [21]. This is not as straightforward here, however, as we are interested not in the dimension of the intersection of the curve with the real line, but in the set of points where the curve intersects the boundary with the prescribed behaviour of the derivatives of the conformal maps (or equivalently, the decay of \(\omega _\infty \)) as the curve approaches the boundary. How to do this is not clear at the moment.
However, using the method of [1] to get a two-point estimate, one can deduce that in the case \(\kappa > 4\), the theorem holds for \(\beta \in [\beta _-,\beta _0]\). This is done by considering three events which exhaust the possible geometries of the curve approaching the two points (this is possible since, for boundary interactions, the geometries are rather simple) and then estimating each of them separately. The correlation estimate that we have is actually more closely related to the one in [21].
An almost sure multifractal spectrum of SLE curves was first derived in [9], where the reverse flow of SLE was used to study the behaviour of the conformal maps close to the tip of the curve. Another result in this direction is [8], where the imaginary geometry techniques, developed and demonstrated in the articles [16,17,18,19], were used to find an almost sure bulk multifractal spectrum. In [15], Miller used the imaginary geometry techniques to compute Hausdorff dimensions for other sets related to SLE. We also mention [12], where Lawler proved the existence of the Minkowski content of an SLE curve intersected with the real line, which is related to what was done in [1] and what we do here. Lastly, we mention [5], where the authors computed an average integral means spectrum of SLE.
As for notation, we write \(f(x) \asymp g(x)\) if there is a constant C such that \(C^{-1} g(x) \le f(x) \le C g(x)\), and \(f(x) \lesssim g(x)\) if there is a constant C such that \(f(x) \le C g(x)\), where the constants do not depend on f, g or x. We say that \(\phi \) is a subpower function if \(\lim _{x \rightarrow \infty } \phi (x) x^{-\varepsilon } = 0\) for every \(\varepsilon > 0\); for example, any power of \(\log x\) is a subpower function. In the same way, we say that \(\psi \) is a subexponential function if for every \(\varepsilon > 0\), \(\lim _{x \rightarrow \infty } \psi (x) e^{-\varepsilon x} = 0\). In what follows, implicit constants, subpower functions and subexponential functions may change from line to line, without a change of notation.
Preliminaries
We begin by introducing some preliminaries on complex analysis, \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes, the Gaussian free field and imaginary geometry.
Measuring distances and sizes
Let D be a simply connected domain, \(z \in D\), and let \(\varphi : D \rightarrow {\mathbb {D}}\) be a conformal map of D onto \({\mathbb {D}}\) such that \(\varphi (z) = 0\). We define the conformal radius of D with respect to z as
It behaves well under conformal transformations; if \(f:D \rightarrow f(D)\) is a conformal transformation, then \(\text {crad}_{f(D)}(f(z)) = |f'(z)| \text {crad}_D(z)\), \(z \in D\) (that is, it is conformally covariant), and by the Schwarz lemma and the Koebe 1/4 theorem, one easily sees that
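For a concrete example (ours, not from the paper), in the unit disc everything can be computed explicitly, which also illustrates the standard comparison \({{\,\mathrm{dist}\,}}(z,\partial D) \le \text {crad}_D(z) \le 4\,{{\,\mathrm{dist}\,}}(z,\partial D)\) between conformal radius and distance to the boundary:

```python
# Illustration (our own example): in the unit disc D, the Moebius map
# phi_w(z) = (z - w)/(1 - conj(w) z) sends w to 0 with
# |phi_w'(w)| = 1/(1 - |w|^2), so crad_D(w) = 1 - |w|^2.
def crad_disc(w):
    return 1 - abs(w) ** 2

# Koebe-type comparison: dist(w, bd D) <= crad_D(w) <= 4 dist(w, bd D).
for w in (0.0, 0.3 + 0.4j, -0.9j):
    dist = 1 - abs(w)                      # distance to the unit circle
    assert dist <= crad_disc(w) <= 4 * dist

# Conformal covariance: f(z) = (z + a)/(1 + a z) with real a is a disc
# automorphism, f(0) = a and f'(0) = 1 - a^2, and indeed
# crad(f(0)) = |f'(0)| * crad(0).
a = 0.5
assert abs(crad_disc(a) - (1 - a ** 2) * crad_disc(0.0)) < 1e-12
```
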
For any domain \({\mathbb {H}}{\setminus } U\), where \(U\subset {\mathbb {H}}\) is bounded, we define the harmonic measure from infinity of \(E \subset \partial ({\mathbb {H}}{\setminus } U)\) as
where \(\omega \) denotes the usual harmonic measure. It turns out that \(\omega _\infty \) is a very convenient notion of size of subsets of \({\mathbb {H}}\). We say that A is a compact \({\mathbb {H}}\)hull if \(A = {\mathbb {H}}\cap {\overline{A}}\) and \({\mathbb {H}}{\setminus } A\) is simply connected and we let \({\mathcal {Q}}\) be the set of compact \({\mathbb {H}}\)hulls. By Proposition 3.36 of [10], there exists, for every \(A\in {\mathcal {Q}}\), a unique conformal map \(g_A:{\mathbb {H}}{\setminus } A \rightarrow {\mathbb {H}}\) which satisfies the hydrodynamic normalization, i.e.,
The function \(g_A\) is called the mappingout function of A. By (7) and the conformal invariance of \(\omega \), we have that
where \(|g_A(E)|\) is the total length of the set \(g_A(E) \subset {\mathbb {R}}\).
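A standard explicit example (ours, not taken from the paper) is the half-disc hull \(A = {\mathbb {D}}\cap {\mathbb {H}}\), whose mapping-out function is \(g_A(z) = z + 1/z\); the unit semicircle is mapped onto \([-2,2]\), so the harmonic measure from infinity of the hull boundary is proportional to the length \(|g_A(\partial A \cap {\mathbb {H}})| = 4\):

```python
import numpy as np

# Mapping-out function of the half-disc hull A = D cap H (classical example).
g_A = lambda z: z + 1 / z

# The unit semicircle is sent to the real segment [-2, 2]:
# g_A(e^{i theta}) = 2 cos(theta).
theta = np.linspace(1e-3, np.pi - 1e-3, 500)
image = g_A(np.exp(1j * theta))
assert np.max(np.abs(image.imag)) < 1e-12      # image lies on the real line
assert -2.0 <= image.real.min() and image.real.max() <= 2.0

# Hydrodynamic normalization: g_A(z) - z -> 0 as z -> infinity.
assert abs(g_A(1e6j) - 1e6j) < 1e-5
```
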
While \(\omega \) is defined on the boundary of the domain, it is convenient to speak about the harmonic measure of subsets of the domain, so we make the following extension: if \(U \subset D\), then
This extends naturally to the case of \(\omega _\infty \) as well.
\({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes
In this section we will introduce \({{\,\mathrm{SLE}\,}}_\kappa \) and \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes. As stated in the introduction, a chordal \({{\,\mathrm{SLE}\,}}_\kappa \) Loewner chain is the collection of random conformal maps \((g_t)_{t \ge 0}\), given by solving (1) with \(W_t = \sqrt{\kappa } B_t\), where \(B_t\) is a standard Brownian motion with \(B_0 = 0\) and filtration \({\mathscr {F}}_t\), satisfying (7). We define \((f_t)_{t\ge 0}\) to be the centered Loewner chain, that is,
For fixed \(z \in {\mathbb {H}}\), the solution to (1) exists until time \(T_z = \inf \{t \ge 0: f_t(z) = 0 \}\). The domain of \(g_t\) is \(H_t = {\mathbb {H}}{\setminus } K_t\), where \(K_t = \{z: T_z \le t \} \in {\mathcal {Q}}\) is the SLE hull at time t and \(g_t\) is the unique conformal map from \(H_t\) onto \({\mathbb {H}}\) such that \(\lim _{z \rightarrow \infty } (g_t(z) - z) = 0\). Rohde and Schramm proved that the family of \({{\,\mathrm{SLE}\,}}_\kappa \) hulls is almost surely generated by a curve \(\eta :[0,\infty ] \rightarrow {\overline{{\mathbb {H}}}}\), i.e., \(H_t\) is the unbounded component of \({\mathbb {H}}{\setminus } \eta ([0,t])\) (see [22]). We call \(\eta \) the \({{\,\mathrm{SLE}\,}}_\kappa \) process or \({{\,\mathrm{SLE}\,}}_\kappa \) curve and say that \(K_t\) is the filling of \(\eta ([0,t])\).
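As an aside, the forward Loewner flow is easy to discretize. The following sketch (our illustration, in the common normalization \(\partial _t g_t(z) = 2/(g_t(z)-W_t)\) with \(W_t = \sqrt{\kappa } B_t\); all step sizes are arbitrary choices) tracks one point and checks that its imaginary part decreases, as it must under the flow:

```python
import numpy as np

# Forward Euler for the chordal Loewner ODE  dg_t(z)/dt = 2/(g_t(z) - W_t)
# with W_t = sqrt(kappa) * B_t (hcap = 2t normalization); illustrative steps.
rng = np.random.default_rng(1)
kappa, dt, n = 3.0, 1e-4, 2000
W = np.concatenate(
    [[0.0], np.sqrt(kappa) * np.cumsum(np.sqrt(dt) * rng.standard_normal(n))]
)
g = 1.0 + 2.0j                    # track one point of the upper half-plane
ims = [g.imag]
for k in range(n):
    g = g + dt * 2.0 / (g - W[k])
    ims.append(g.imag)
ims = np.array(ims)

# Im g_t(z) strictly decreases along the flow, and stays positive as long
# as z has not been swallowed (T_z has not been reached).
assert np.all(np.diff(ims) < 0) and ims[-1] > 0
```

Numerically, the swallowing time \(T_z\) would show up as \(g - W\) approaching 0; the point above stays far from the hull on this short time horizon.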
Now we will define the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process. Let \({\underline{x}}_L = (x_{l,L},\ldots ,x_{1,L})\), \({\underline{x}}_R = (x_{1,R},\ldots ,x_{r,R})\), where \(x_{l,L}< \ldots< x_{1,L} \le 0 \le x_{1,R}< \ldots < x_{r,R}\). Also, let \({\underline{\rho }}_L = (\rho _{1,L},\ldots ,\rho _{l,L})\), \({\underline{\rho }}_R = (\rho _{1,R},\ldots ,\rho _{r,R})\), where \(\rho _{j,q} \in {\mathbb {R}}\), \(q \in \{L,R\}\). We call \(\rho _{j,q}\) the weight of \(x_{j,q}\). Let \(W_t\) be the solution to the system of SDEs
where \(N_L= l\) and \(N_R = r\). An \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) Loewner chain with force points \(({\underline{x}}_L;{\underline{x}}_R)\) is the family of conformal maps \((g_t)_{t \ge 0}\) obtained by solving (1) with \(W_t\) being the solution to (10). The \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) hulls, \((K_t)\), are defined analogously and they are almost surely generated by a continuous curve, \(\eta \), the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process or \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve (see Theorem 1.3 in [16]). \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) is a generalization of \({{\,\mathrm{SLE}\,}}_\kappa \) (\({{\,\mathrm{SLE}\,}}_\kappa = {{\,\mathrm{SLE}\,}}_\kappa (0;0)\)), in which one also keeps track of the force points, whose assigned weights either attract (\(\rho _{j,q} < 0\)) or repel (\(\rho _{j,q} > 0\)) the curve. If \(\eta \) is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve, we write \(\eta \sim \text {SLE}_\kappa ({\underline{\rho }})\). Exactly how the weights of the force points affect the curve \(\eta \) is explained in Lemma 2.1.
The solution to the system of SDEs (10) exists up until the continuation threshold is hit, that is, the first time t such that either
as is explained in Section 2.2 of [16]. Moreover, for every \(t>0\) before the continuation threshold, \({\mathbb {P}}(W_t=V_t^{j,q}) = 0\) for \(j \in {\mathbb {N}}\) and \(q\in \{L,R\}\).
Geometrically, hitting the continuation threshold means that the curve \(\eta \) has swallowed force points on one side whose weights sum to at most \(-2\); that is, \(\eta \) hits an interval \((x_{m+1,L},x_{m,L})\) (or \((x_{n,R},x_{n+1,R})\)) such that \(\sum _{j=1}^m \rho _{j,L} \le -2\) (resp. \(\sum _{j=1}^n \rho _{j,R} \le -2\)).
Now, we write \(\rho _{0,L} = \rho _{0,R} = 0\), \(x_{0,L} = 0^-\), \(x_{l+1,L} = -\infty \), \(x_{0,R} = 0^+\) and \(x_{r+1,R} = +\infty \), and let for \(q \in \{L,R\}\) and \(j \in {\mathbb {N}}\)
The following lemma describes the interaction of \(\eta \) with the real line. It is written down in [21], and just as they did, we refer to Remark 5.3 and Theorem 1.3 of [16] and Lemma 15 of [6] for the proof.
Lemma 2.1
Let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\), from 0 to \(\infty \), with force points \(({\underline{x}}_L;{\underline{x}}_R)\). Then,

(i)
if \({\overline{\rho }}_{k,R} \ge \frac{\kappa }{2}-2\), then \(\eta \) almost surely does not hit \((x_{k,R},x_{k+1,R})\),

(ii)
if \(\kappa \in (0,4)\) and \({\overline{\rho }}_{k,R} \in (\frac{\kappa }{2}-4,-2]\), then \(\eta \) can hit \((x_{k,R},x_{k+1,R})\), but then cannot be continued afterwards,

(iii)
if \(\kappa > 4\) and \({\overline{\rho }}_{k,R} \in (-2,\frac{\kappa }{2}-4]\), then \(\eta \) can hit \((x_{k,R},x_{k+1,R})\), be continued afterwards, and \(\eta \cap (x_{k,R},x_{k+1,R})\) is almost surely an interval,

(iv)
if \({\overline{\rho }}_{k,R} \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\), then \(\eta \) can hit and bounce off of \((x_{k,R},x_{k+1,R})\) and \(\eta \cap (x_{k,R},x_{k+1,R})\) has empty interior.
The same holds if we replace R by L and consider \((x_{k+1,L},x_{k,L})\).
Note that in (ii) in the above lemma, the curve has swallowed force points with a total weight at least as negative as \(-2\), and hence it cannot be continued. In (iii) and (iv), the total weight of the force points swallowed is greater than \(-2\), and hence the curve can be continued.
The following was proved in [16] (see also Lemma 2.2 in [21]).
Lemma 2.2
Fix \(\kappa >0\) and let \(({\underline{x}}_L^n)\) and \(({\underline{x}}_R^n)\) be sequences of vectors of numbers \(x_{l,L}^n<\dots<x_{1,L}^n<0<x_{1,R}^n<\dots <x_{r,R}^n\), converging to vectors \(({\underline{x}}_L)\) and \(({\underline{x}}_R)\) such that \(x_{1,L} = 0^-\) and \(x_{1,R} = 0^+\). For each n, denote by \((W^n,V^{n,L},V^{n,R})\) the driving processes of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L,{\underline{\rho }}_R)\) process with force points \(({\underline{x}}_L^n;{\underline{x}}_R^n)\). Then \((W^n,V^{n,L},V^{n,R})\) converges weakly in law, with respect to the local uniform topology, to the driving process \((W,V^L,V^R)\) of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) with force points \(({\underline{x}}_L;{\underline{x}}_R)\), as \(n \rightarrow \infty \).
It turns out that if we, using the Girsanov theorem, reweight an \({{\,\mathrm{SLE}\,}}_\kappa \) process by a certain martingale (how this is done is explained briefly below), then we obtain an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process at least until the first time that \(W_t = V_t^{j,q}\) for some (j, q). Let \(x_{1,L}< 0 < x_{1,R}\) and define
Then \(M_t\) is a local martingale and an \({{\,\mathrm{SLE}\,}}_\kappa \) process weighted by \(M_t\) has the law of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process with force points \(({\underline{x}}_L;{\underline{x}}_R)\) (see Theorem 6 of [23]).
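For orientation (our reconstruction from [23], not the paper's display): for a single force point \(x_R>0\), and in the \(\text {hcap}(K_t)=2t\) normalization, the weighting local martingale takes the form

```latex
M_t \;=\; g_t'(x_R)^{\frac{\rho (\rho +4-\kappa )}{4\kappa }}\,
\bigl |g_t(x_R)-W_t\bigr |^{\rho /\kappa },
```

and the general multi-force-point martingale is a product of factors of this type together with cross factors involving \(|g_t(x_{i,q})-g_t(x_{j,q'})|\).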
So far, we have only defined chordal \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes in \({\mathbb {H}}\), but we can define them in any Jordan domain, by a conformal coordinate change. More precisely, an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in a Jordan domain D, from \(z_0\) to \(z_\infty \), with force points \(({\underline{x}}_L,{\underline{x}}_R)\) is the image of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in \({\mathbb {H}}\) from 0 to \(\infty \) under a conformal map \(\varphi \) such that \(\varphi ({\mathbb {H}})=D\), \(\varphi (0) = z_0\), \(\varphi (\infty )=z_\infty \) and such that the force points of the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in \({\mathbb {H}}\) are mapped to \(({\underline{x}}_L;{\underline{x}}_R)\). We say that the constructed \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in D is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process with configuration
The configuration of the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process we defined in the beginning of the section is \(({\mathbb {H}}, 0, {\underline{x}}_L, {\underline{x}}_R, \infty )\).
The case of one force point
Let \(a=2/\kappa \). We will parametrize the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) so that \(\text {hcap}(K_t) = at\), i.e., as the solution to
with
where \(B_t\) is a one-dimensional standard Brownian motion with \(B_0 = 0\) and filtration \({\mathscr {F}}_t\). We say that the conformal maps \((g_t)_{t \ge 0}\) are driven by W. The solution to (14) exists for all \(t \ge 0\) if \(\kappa > 0\) and \(\rho > -2\), so henceforth we assume this. If \(\rho \ge \frac{\kappa }{2}-2\), then \(\eta \) almost surely does not hit the real line, and hence we are interested in the case \(\rho < \frac{\kappa }{2} - 2\). If \(\kappa > 4\) and \(\rho \in (-2,\frac{\kappa }{2}-4]\), then by Lemma 2.1, \(\eta \cap (x_R,\infty )\) is almost surely an interval, and thus we will consider \(\rho \in ((-2) \vee (\frac{\kappa }{2}-4), \frac{\kappa }{2}-2)\).
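A minimal Euler–Maruyama sketch (ours) of the one-force-point driving system may clarify the roles of W and V. Note that we use the common \(\text {hcap}=2t\) normalization \(dW_t = \sqrt{\kappa }\,dB_t + \rho \,dt/(W_t-V_t)\), \(dV_t = 2\,dt/(V_t-W_t)\), rather than the paper's \(\text {hcap}=at\) one (the two differ only by a deterministic time change), and all parameter values are illustrative:

```python
import numpy as np

# Euler-Maruyama for the one-force-point driving SDE (hcap = 2t
# normalization), with W_0 = 0 and V_0 = x_R > 0; illustrative parameters.
rng = np.random.default_rng(7)
kappa, rho, x_R, dt, n = 1.0, 2.0, 1.0, 1e-4, 100
W, V = 0.0, x_R
Ws, Vs = [W], [V]
for _ in range(n):
    dB = np.sqrt(dt) * rng.standard_normal()
    W, V = (W + np.sqrt(kappa) * dB + rho / (W - V) * dt,  # rho > 0: repulsion
            V + 2.0 / (V - W) * dt)                        # V pushed away from W
    Ws.append(W)
    Vs.append(V)
Ws, Vs = np.array(Ws), np.array(Vs)

# The force point stays to the right of the driving function ...
assert np.all(Vs - Ws > 0)
# ... and g_t(x_R) = V_t increases as long as V_t > W_t.
assert np.all(np.diff(Vs) > 0)
```

For \(x_R = 0^+\) the difference \(V-W\) starts at 0 and is a (time-changed) Bessel-type process, so the naive discretization above would need extra care near 0.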
Fix \(\kappa > 0\) and \(\rho > -2\) and let \((K_t, t \ge 0)\) be the hulls of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process with force point \(x_R\). The \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process satisfies two important properties. The first is the following scaling rule: for any \(m > 0\), \((m^{-1} K_{m^2 t},t \ge 0)\) has the same law as the hulls of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process with force point \(x_R/m\). If \(x_R = 0^+\), then the law is scale invariant. The second is the domain Markov property: for any finite stopping time \(\tau \), the curve defined by \({\hat{\eta }}(t) = f_\tau (\eta (t+\tau ))\), \(t \ge 0\), is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with force point \(V_\tau - W_\tau \), where again \(f_t =g_t-W_t\).
Note that \(f_t\) follows the SDE
Taking the spatial derivative of (15) results in an ODE that, upon solving, yields
While \(g_t\) is defined in \(H_t\), it (and hence \(f_t\) and \(g_t'\)) extends continuously to the real line and is real-valued there. For \(x \in {\mathbb {R}}_+\), \(g_t'(x) \in [0,1]\) and is decreasing in t. Due to symmetry, it is enough to consider \(x, x_R \in {\mathbb {R}}_+\). By applying Itô’s formula to \(\log f_t(z)\) and exponentiating, we see that
The same procedure, applied to \(v_t(z) :=g_t(z) - V_t = f_t(z) + W_t - V_t\), yields
Observe that, considered as functions on \({\mathbb {R}}_+\), \(f_t\), \(g_t'\) and \(v_t\) are increasing in x. We will mostly work with these functions on or close to \({\mathbb {R}}_+\).
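For the reader's convenience, here is a sketch of how representations of this kind arise (our reconstruction, under the paper's normalization \(\partial _t g_t(z) = a/(g_t(z)-W_t)\); the numbered formulas (16)–(18) should be of this form). Differentiating the Loewner equation in the spatial variable and integrating \(\partial _t \log g_t'(x)\) gives

```latex
\partial_t g_t'(x)
  = \partial_x\,\frac{a}{g_t(x)-W_t}
  = -\,\frac{a\,g_t'(x)}{(g_t(x)-W_t)^2}
  = -\,\frac{a\,g_t'(x)}{f_t(x)^2},
\qquad\text{hence}\qquad
g_t'(x) = \exp\!\left(-a\int_0^t \frac{du}{f_u(x)^2}\right).
```

Applying Itô's formula to \(\log f_t(z)\) and \(\log v_t(z)\) and exponentiating yields exponential representations of the same type for \(f_t\) and \(v_t\).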
Local martingales and weighted measures
Fix \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4), \frac{\kappa }{2} - 2)\) and let, as above, \(a = 2/\kappa \). For each such pair \((\kappa ,\rho )\), we will define a one-parameter family of local martingales which will play a major role in our analysis. Let \(\zeta \) be the variable with which we will parametrize the martingales and let
For \(-\frac{\mu _c^2}{2a}< \zeta < \infty \), define the parameters
Note that with our choice of \(\zeta \), \(\mu > \mu _c\) and \(\beta > 0\). The restriction \(\mu > \mu _c\) is necessary for a certain invariant density of a diffusion to exist, see (33) and the appendix. The parameters are related as follows:
For each \(x > 0\), we have by equations (16), (17) and (18), that
is a local martingale on \(0 \le t \le T_x\) (see Theorem 6 of [23]), such that
Note that under the measure weighted by \(M_t^\zeta \), the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process becomes an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,\mu \kappa )\) process with force points \((x_R,x)\). We shall write the local martingale in a different way, which is very convenient for our analysis. Define the random processes
and
We will often not write out the dependence on x, as it will cause no confusion. Since \(V_t \ge W_t\), we see that \(Q_t(x) \in [0,1]\) for every \(t\ge 0\). We note that if \(x_R = 0^+\), then \(Q_t(x)\) is the ratio between the harmonic measures from infinity of two sets; more precisely, if \(\eta ^R\) denotes the right side of the curve \(\eta \) and \(r_t\) is the rightmost point of \(\eta ([0,t])\cap {\mathbb {R}}\), then
With these processes, we can write
This will be very convenient, as we will relate the process \(\delta _t(x)\) to the conformal radius at the point x, and thus, it will be comparable to \({{\,\mathrm{dist}\,}}(x,\eta ([0,t]))\). With this in mind, we will then make a random time change so that the timechanged process decays deterministically, and it will give us good control over the decay of the distance between the curve and a certain point. It does not, however, make sense to talk about the conformal radius of a boundary point, so we will begin by sorting this out.
We let \({\widehat{H}}_t = H_t \cup \{z:{\overline{z}} \in H_t \} \cup \{x \in {\mathbb {R}}_+: t<T_x \}\) be the union of the reflected domain and the points on \({\mathbb {R}}_+\) that have not been swallowed at time t. Then, it makes sense to talk about \(\text {crad}_{{\widehat{H}}_t}(x)\) for \(0<x\in {\widehat{H}}_t\), and a calculation shows that if \(x_R = 0^+\), then for \(t < T_x\),
By (5), this implies that
While this only holds for \(x_R = 0^+\), similar bounds can be acquired for other \(x_R\), as the following lemma shows. See also [26].
Lemma 2.3
Let \(\kappa > 0\), \(\rho > -2\), and let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with force point \(x_R \ge 0\). Let \((g_t)_{t \ge 0}\) be the conformal maps, driven by W, and let \(T_x\) be the swallowing time of x. Let \(x > x_R\) and \(0<t<T_x\), then
Proof
Denote by \((K_t)_{t \ge 0}\) the compact hulls corresponding to the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process. Extend the maps \((g_t)_{t \ge 0}\) by Schwarz reflection to \({\mathbb {C}} {\setminus } ( K_t \cup \{ z : {\overline{z}} \in K_t \} \cup {\mathbb {R}}_- )\), let \(r_t\) be the rightmost point of \(K_t \cap {\mathbb {R}}\) and let \(O_t = g_t(r_t)\). Then, by the Koebe 1/4 theorem,
that is,
since \({{\,\mathrm{dist}\,}}(g_t(x),O_t) = g_t(x) - O_t \ge g_t(x) - V_t\), and thus the upper bound is done. Since \(V_t = g_t(x_R)\), we have that if \(x_R = 0^+\), then \(V_t = O_t\), and the proof is done. Thus, we now consider \(x_R > 0\). For \(t \ge T_{x_R}\), again \(O_t = V_t\), so it remains to consider \(t<T_{x_R}\). Let \(u \in (0,x_R]\) and define
for \(t< T_u\). Then,
Using the Loewner equation, we see that
since \(g_t(x)-g_t(u) \ge 0\) and \(g_t(u)-g_t(x_R) \le 0\). Thus, for every \(u \in (0,x_R]\) and \(t < T_u\),
Fixing t and letting \(u \rightarrow r_t\), we get
which implies that
and thus
\(\square \)
With this lemma in mind, we will proceed and make a random time change. The process \(\delta _t\) satisfies the (stochastic) ODE
so we define the process \({\tilde{t}}(s) = {\tilde{t}}_x(s)\) as the solution to the equation
Then, \({\tilde{\delta }}_s(x) :=\delta _{{\tilde{t}}(s)}(x)\) satisfies the ODE \(d{\tilde{\delta }}_s(x) = -a{\tilde{\delta }}_s(x)\, ds\), i.e.,
This time change is called the radial parametrization. Note that this time change depends on x. We let \({\tilde{g}}_s = g_{{\tilde{t}}(s)}\) etc. denote the time-changed processes. They are all adapted to the filtration \(\tilde{{\mathscr {F}}}_s = {\mathscr {F}}_{{\tilde{t}}(s)}\). By differentiating (28) to get an expression for \(\frac{d}{ds}{\tilde{t}}(s)\), and combining this with equations (16), (17) and (18), we have that in this new parametrization,
and \({\tilde{Q}}_s\) follows the SDE
where \({\tilde{B}}_s\) is a Brownian motion with respect to the filtration \(\tilde{{\mathscr {F}}}_s\). Combining (25) and (29), we see that
For each \(x>0\) and \(\zeta > -\frac{\mu _c^2}{2a}\), we define the new probability measure, which we denote by \({\mathbb {P}}^* = {\mathbb {P}}_{x,\zeta }^*\), as
for every \(A \in \tilde{{\mathscr {F}}}_s\). We denote the expectation with respect to \({\mathbb {P}}^*\) by \({\mathbb {E}}^*\). Let \({\tilde{B}}_s^*\) denote a Brownian motion with respect to \({\mathbb {P}}^*\). By the Girsanov theorem, we have that under the measure \({\mathbb {P}}^*\), \({\tilde{Q}}_s\) follows the SDE
Under \({\mathbb {P}}^*\), \({\tilde{Q}}_s\) is positive recurrent and has the invariant density
where
see Corollary A.2. Throughout, we will denote by \({\tilde{X}}_s\) a process that follows the same SDE as \({\tilde{Q}}_s\), but started according to the invariant density. For \(s \ge 0\) and \(y > 0\), short calculations show that
In Sect. 3 we will use that \({\mathbb {P}}^*({\tilde{t}}(s)<\infty ) = 1\) for every s (which is shown in Appendix A), and the following lemma.
Lemma 2.4
Fix x and let \(\tau _s\) and \({\tilde{t}}(s)\) be as above. Then
for all \(s > \max \{0,-\log (x-x_R)\}\).
Proof
Assume that \(s > \max \{0,-\log (x-x_R)\}\). Trivially, \(\tau _s \ge {\tilde{t}}(0) = 0\). By Lemma 2.3, we have that \(\delta _{\tau _s} \le 4 e^{-s}\) and thus, if \(s - \log 4 +\log (x-x_R) > 0\), then (recall that \(\delta _0 = x-x_R\))
and since \(t \mapsto \delta _t\) is decreasing, \(\tau _s \ge {\tilde{t}}\left( \left( \frac{s}{a} + \frac{1}{a} \log (x-x_R) - \frac{1}{a} \log 4\right) \vee 0\right) \).
Also, by Lemma 2.3, we get
Thus, \(\tau _s \le {\tilde{t}}\left( \frac{s}{a} + \frac{1}{a}(\log x + \log 4)\right) \), and the proof is done. \(\square \)
In what follows, x and \(x_R\) will be kept constant (time is the parameter which will change), and hence we will instead treat the inequality as
as we can choose such a \(C^* = C^*(x,x_R)\) for each \(x>1\), \(x_R < x\). The only place where we have to be careful with this is in the proof of Lemma 4.6, but we will discuss that there.
The Gaussian free field
We will now introduce and discuss the Gaussian free field (GFF); for more on the GFF, see [25]. Let \(D \subset {\mathbb {C}}\) be a Jordan domain and let \(C_c^\infty (D)\) be the set of compactly supported smooth functions on D. The Dirichlet inner product on D is defined as
and the Hilbert space closure of \(C_c^\infty (D)\) with respect to this inner product is the Sobolev space \(H(D) :=H_0^1(D)\). Let \(( \phi _n )_{n=1}^\infty \) be a \((\cdot ,\cdot )_\nabla \)-orthonormal basis of H(D). Then, the zero-boundary GFF h on D can be expressed as
where \(( \alpha _n )_{n=1}^\infty \) is a sequence of i.i.d. N(0, 1) random variables. The sum does not converge in any space of functions; however, it converges almost surely in a space of distributions. The GFF is conformally invariant, i.e., if h is a GFF on D and \(\varphi : D \rightarrow {\widetilde{D}}\) is a conformal transformation, then \(h \circ \varphi ^{-1} = \sum _n \alpha _n \phi _n \circ \varphi ^{-1}\) is a GFF on \({\widetilde{D}}\).
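To make the expansion concrete, the following sketch (our own discrete construction, with an arbitrary grid size) builds a one-dimensional zero-boundary discrete GFF from the sine eigenbasis of the discrete Dirichlet Laplacian, orthonormalized in the discrete Dirichlet inner product:

```python
import numpy as np

# Zero-boundary discrete GFF on the grid {1, ..., n-1}, using the discrete
# Dirichlet inner product
#   (f, g)_nabla = sum_j (f_{j+1} - f_j)(g_{j+1} - g_j),  f_0 = f_n = 0.
n = 16
j = np.arange(1, n)
modes = np.array([np.sin(np.pi * k * j / n) for k in range(1, n)])

def dirichlet(f, g):
    fe = np.concatenate([[0.0], f, [0.0]])   # enforce the zero boundary
    ge = np.concatenate([[0.0], g, [0.0]])
    return np.sum(np.diff(fe) * np.diff(ge))

# Normalize each sine mode in the Dirichlet inner product.
modes = np.array([m / np.sqrt(dirichlet(m, m)) for m in modes])

# The Gram matrix is the identity: the modes form a (.,.)_nabla-orthonormal
# basis, as required by the series expansion of the GFF.
gram = np.array([[dirichlet(p, q) for q in modes] for p in modes])
assert np.allclose(gram, np.eye(n - 1), atol=1e-10)

# A sample: h = sum_k alpha_k phi_k with i.i.d. N(0,1) coefficients.
rng = np.random.default_rng(0)
h = rng.standard_normal(n - 1) @ modes
```

In two dimensions the same recipe applies with the product sine basis; there the limiting object is only a distribution, matching the remark above.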
If we denote the standard \(L^2(D)\) inner product by \((\cdot ,\cdot )\) and \(U \subseteq D\) is open, then for \(f \in C_c^\infty (U)\), \(g \in C_c^\infty (D)\), we obtain by integration by parts that
that is, every function in \(C_c^\infty (D)\) which is harmonic in U is \((\cdot ,\cdot )_\nabla \)-orthogonal to every function in \(C_c^\infty (U)\). From this, one can see that H(D) can be \((\cdot ,\cdot )_\nabla \)-orthogonally decomposed as \(H(U) \oplus H^\perp (U)\), where \(H^\perp (U)=H_D^\perp (U)\) is the set of functions in H(D) which are harmonic in U and have finite Dirichlet energy. Hence, we may write
where \(( \alpha _n^U )\) and \(( \alpha _n^{U^\perp } )\) are independent sequences of i.i.d. N(0, 1) random variables and \(( \phi _n^U)\) and \(( \phi _n^{U^\perp } )\) are orthonormal bases of H(U) and \(H^\perp (U)\), respectively. Note that \(h_U\) is a GFF on U, that \(h_{U^\perp }\) is a random distribution which agrees with h on \(D {\setminus } U\) and can be viewed as a harmonic function on U, and that \(h_U\) and \(h_{U^\perp }\) are independent. Hence, the law of h restricted to U, given the values of h restricted to \(\partial U\), is that of a GFF on U plus the harmonic extension of the values of h on \(\partial U\). This is the so-called Markov property of the GFF. With this in mind, one can make sense of the GFF with non-zero boundary conditions: let \(f: \partial D \rightarrow {\mathbb {R}}\), let F be the harmonic extension of f to D and let h be a zero-boundary GFF on D; then the law of the GFF with boundary condition f is given by the law of \(h+F\).
The GFF also exhibits certain absolute continuity properties, the key (for us) of which we state now. (This is the content of Proposition 3.4 (ii) in [16], where the reader can find a proof.)
Proposition 2.5
Let \(D_1,D_2\) be simply connected domains, such that \(D_1 \cap D_2 \ne \emptyset \) and for \(j=1,2\), let \(h_j\) and \(F_j\) be a zero-boundary GFF and a harmonic function on \(D_j\), respectively. If \(U \subseteq D_1 \cap D_2\) is a bounded simply connected domain and \(U' \supset {\overline{U}}\) is such that \({\overline{D}}_1 \cap U' = {\overline{D}}_2 \cap U'\) and \(F_1-F_2\) tends to zero when approaching \(\partial D_j \cap U'\) for \(j=1,2\), then the laws of \((h_1+F_1)_U\) and \((h_2+F_2)_U\) are mutually absolutely continuous.
In other words, if \(h_1\) and \(h_2\) are GFFs on \(D_1\) and \(D_2\), whose boundary conditions agree in some set \(E \subset \partial D_1 \cap \partial D_2\), then the laws of \(h_1\) and \(h_2\) restricted to any simply connected bounded subdomain U of \(D_1\) and \(D_2\), such that \({{\,\mathrm{dist}\,}}(\partial U, (\partial D_1 \cap \partial D_2) {\setminus } E) >0\), are mutually absolutely continuous.
The result holds for unbounded domains U as well, but we shall only need the bounded case.
Imaginary geometry
In this section, we describe the coupling of SLE with the GFF. As stated in the introduction, this section will be slightly longer than necessary, in order to make this paper more selfcontained.
Suppose for now that h is a smooth, realvalued function on a Jordan domain D and fix constants \(\chi > 0\) and \(\theta \in [0,2\pi )\). A flow line of the complex vector field \(e^{i(h/\chi + \theta )}\), with initial point z, is a solution to the ordinary differential equation
If \(\eta \) is a flow line of \(e^{ih/\chi }\) and \(\psi : {\widetilde{D}} \rightarrow D\) is a conformal map, then \({\tilde{\eta }} = \psi ^{-1} \circ \eta \) is a flow line of \(e^{i{\tilde{h}}/\chi }\), where \({\tilde{h}} :=h \circ \psi - \chi \arg \psi '\) is a smooth function on \({\widetilde{D}}\). This follows by the chain rule and the fact that a reparametrization of a flow line is a flow line. Hence, the following definition makes sense: we say that an imaginary surface is an equivalence class of pairs (D, h) under the equivalence relation
We say that \(\psi \) is a conformal coordinate change of the imaginary surface.
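As a quick illustration of the flow line ODE stated above ((36) in the numbering below), \(\eta '(t) = e^{i(h(\eta (t))/\chi + \theta )}\): for a constant field \(h \equiv h_0\), the flow line is a unit-speed straight ray of angle \(h_0/\chi + \theta \), which a forward Euler scheme reproduces. The solver and parameter values here are illustrative, not taken from the paper.

```python
import cmath

# Forward Euler integration of the flow line ODE
#   eta'(t) = exp(i * (h(eta(t)) / chi + theta)),  eta(0) = z0.
def flow_line(h, chi, theta, z0, T=1.0, steps=10000):
    z, dt = z0, T / steps
    for _ in range(steps):
        z += cmath.exp(1j * (h(z) / chi + theta)) * dt
    return z

# For a constant field h = h0 the flow line is a unit-speed ray of angle
# h0/chi + theta, so eta(T) = z0 + T * exp(i * (h0/chi + theta)).
chi, theta, h0 = 0.5, 0.3, 0.2
end = flow_line(lambda z: h0, chi, theta, 0j)
expected = cmath.exp(1j * (h0 / chi + theta))
```

For non-constant smooth h the same integrator traces curved flow lines, and composing with a conformal map and subtracting \(\chi \arg \psi '\) as in (37) leaves the traced curve invariant up to reparametrization.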
The idea is that if h is a GFF, then we are interested in the flow lines of h and we want to see that these are \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curves. However, while (37) makes sense if h is a GFF, the ODE (36) does not, as h is then a random distribution and not a continuous function. Thus, the approach to defining the flow lines of the GFF will be a little less “direct”. Instead, the following characterization will be used: let h be a smooth function and \(\eta \) a smooth simple curve in \({\overline{{\mathbb {H}}}}\), with \(\eta (0) = 0\) and \(\eta (t) \in {\mathbb {H}}\) for \(t>0\), starting in the vertical direction (that is, \(\eta '(0)\) has zero real part and positive imaginary part), so that as \(t \rightarrow 0\) the winding number is \(\approx \pi /2\). Furthermore, let \((f_t)_{t\ge 0}\) be the centered Loewner chain of \(\eta \). Then, for any \(t>0\), we have two parametrizations of \(\eta _{[0,t]}\):
where \(s_-\) and \(s_+\) are the left and right images of zero under \(f_t\), respectively. Since \(\eta \) is smooth, there exist smooth, decreasing functions \(\phi _-,\phi _+:(0,\infty ) \rightarrow (0,\infty )\) such that
By differentiation and (36), it is then easy to see that \(\eta \) is a flow line of h if and only if for each \(z \in \eta ((0,t))\)
as w approaches z from the left side of \(\eta \) and
as w approaches z from the right side of \(\eta \). With this in mind, we will now introduce the coupling.
With the notation as in Sect. 2.2, the coupling of SLE and GFF is given by the following theorem (for the proof, see [16]).
Theorem 2.6
Fix \(\kappa > 0\) and a vector of weights \(({\underline{\rho }}_L;{\underline{\rho }}_R)\). Let \((K_t)\) and \((f_t)\) be the hulls and centered Loewner chain, respectively, of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process in \({\mathbb {H}}\) from 0 to \(\infty \) with force points \(({\underline{x}}_L;{\underline{x}}_R)\) and let h be a zero-boundary GFF in \({\mathbb {H}}\). Furthermore, let
let \(\Phi _t^0\) be the harmonic function in \({\mathbb {H}}\) with boundary values
and define
If \(\tau \) is any stopping time for the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process which almost surely occurs before the continuation threshold, then the conditional law of \((h+\Phi _0)_{{\mathbb {H}}{\setminus } K_\tau }\) given \(K_\tau \) is equal to the law of \(h \circ f_\tau + \Phi _\tau \).
In this coupling, \(\eta \sim \text {SLE}_\kappa ({\underline{\rho }})\) is almost surely determined by h, that is, \(\eta \) is a deterministic function of h (see Theorem 1.2 of [16]). When \(\kappa \in (0,4)\), a flow line of the GFF \(h+\Phi _0\) on \({\mathbb {H}}\) is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve, \(\eta \), coupled with \(h+\Phi _0\) as in Theorem 2.6. This definition can be extended to other simply connected domains than \({\mathbb {H}}\), using the conformal coordinate change described in (37), see Remark 2.7. If we add \(\theta \chi \) to the boundary values, i.e., replace \(h+\Phi _0\) by \(h+\Phi _0 +\theta \chi \) then the resulting flow line is called a flow line of angle \(\theta \), and we denote it by \(\eta _\theta \).
Note that if \(\kappa \in (0,4)\) then \(\chi >0\) and if \(\kappa >4\) then \(\chi <0\). If we let \(\kappa \in (0,4)\) and write \(\kappa ' = 16/\kappa \), then \(\kappa '>4\) and \(\chi (\kappa ) = -\chi (\kappa ')\). From Theorem 2.6, it is clear that the conditional law of \(h+\Phi _0\) given an \({{\,\mathrm{SLE}\,}}_\kappa \) or \({{\,\mathrm{SLE}\,}}_{\kappa '}\) curve is transformed in the same way under a conformal map, up to a sign change, which motivates the following definition. A counterflow line of the GFF \(h+\Phi _0\) is an \({{\,\mathrm{SLE}\,}}_{\kappa '}({\underline{\rho }})\) curve coupled with \(-(h+\Phi _0)\) as in Theorem 2.6. Note that the sign of the GFF is changed so that it matches the sign of \(\chi (\kappa ')\) and that in the notation of the theorem, the \(\lambda \) is replaced by \(\lambda ' = \frac{\pi }{\sqrt{\kappa '}} = \lambda - \frac{\pi }{2}\chi \) (Fig. 2).
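These relations between the \(\kappa \) and \(\kappa '\) constants can be verified numerically, using the standard normalizations \(\lambda = \pi /\sqrt{\kappa }\) and \(\chi = 2/\sqrt{\kappa } - \sqrt{\kappa }/2\) (an assumption consistent with the identities stated above; the value \(\kappa = 3\) is arbitrary). The last check below is the critical angle \(\frac{\pi \kappa }{4-\kappa } = 2\lambda '/\chi \) that appears in the flow line interactions further on.

```python
import math

# Assumed standard normalizations (consistent with the relations in the text):
#   lambda(kappa) = pi / sqrt(kappa),   chi(kappa) = 2/sqrt(kappa) - sqrt(kappa)/2.
def lam(kappa):
    return math.pi / math.sqrt(kappa)

def chi(kappa):
    return 2.0 / math.sqrt(kappa) - math.sqrt(kappa) / 2.0

kappa = 3.0             # arbitrary value in (0, 4)
kappa_p = 16.0 / kappa  # the dual parameter kappa' > 4

checks = (
    chi(4.0),                                                  # chi(4) = 0
    chi(kappa) + chi(kappa_p),                                 # chi(kappa) = -chi(kappa')
    lam(kappa_p) - (lam(kappa) - math.pi / 2.0 * chi(kappa)),  # lambda' = lambda - (pi/2) chi
    2.0 * lam(kappa_p) / chi(kappa) - math.pi * kappa / (4.0 - kappa),  # critical angle
)
```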
In the figures, we often write \(\underset{^{\scriptstyle \sim }}{a}\), where a is some real number. This is to be interpreted as a plus \(\chi \) times the winding of the curve, see Fig. 3. This makes perfect sense for piecewise smooth curves, but for fractal curves the winding is not defined pointwise. However, the harmonic extension of the winding of the curve makes sense, as we can map conformally to \({\mathbb {H}}\) with piecewise constant boundary conditions. The term \(\chi \arg f_\tau '(z)\) in Theorem 2.6 is interpreted as \(\chi \) times the harmonic extension of the winding of the curve \(\eta \).
Remark 2.7
Let D be a simply connected domain, with \(x,y \in \partial D\) distinct and let \(\psi : D \rightarrow {\mathbb {H}}\) be a conformal transformation with \(\psi (x) = 0\) and \(\psi (y) = \infty \). Let \({\underline{x}}_L\) (\({\underline{x}}_R\)) consist of l (r) marked prime ends in the clockwise (counterclockwise) segment of \(\partial D\), which are in clockwise (counterclockwise) order. The orientation of \(\partial D\) is as defined by \(\psi \). Write \(x_{0,L} = x_{0,R} = x\) and \(x_{l+1,L} = x_{r+1,R} = y\) and let \({\underline{\rho }}_L\) and \({\underline{\rho }}_R\) be vectors of weights corresponding to the points in \({\underline{x}}_L\) and \({\underline{x}}_R\) respectively. Let h be a GFF on D with boundary values given by
where \((x_{j,L},x_{j+1,L})\) denotes the clockwise segment of \(\partial D\) from \(x_{j,L}\) to \(x_{j+1,L}\) and \((x_{j,R},x_{j+1,R})\) the counterclockwise segment of \(\partial D\) from \(x_{j,R}\) to \(x_{j+1,R}\). Let \(\kappa \in (0,4)\). We say that an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve, \(\eta \), from x to y in D, coupled with h, is a flow line of h if \(\psi (\eta )\) is coupled as a flow line of the GFF \(h \circ \psi ^{-1} - \chi \arg (\psi ^{-1})'\) on \({\mathbb {H}}\).
The same statement holds for counterflow lines with \(\kappa ' \in (4,\infty )\) if we replace \(\lambda \) with \(\lambda '\) in the boundary values (but keep \(\chi = \chi (\kappa )\)).
We write the following statements for flow lines in \({\mathbb {H}}\), but they hold true for other simply connected domains as well.
Let h be a GFF in \({\mathbb {H}}\) with piecewise constant boundary values. It turns out that (Theorem 1.4, [16]) if \(\eta '\) is a counterflow line of h in \({\mathbb {H}}\), from \(\infty \) to 0, then the range of \(\eta '\) is almost surely equal to the set of points that can be reached by the flow lines in \({\mathbb {H}}\), from 0 to \(\infty \), with angles in the interval \(\left[ -\frac{\pi }{2},\frac{\pi }{2} \right] \). Also, it almost surely holds that the left boundary of \(\eta '\) is equal to the trace of the flow line of angle \(\frac{\pi }{2}\) and the right boundary is equal to the trace of the flow line of angle \(-\frac{\pi }{2}\) (seen from the viewpoint of travelling along \(\eta '\); from the flow lines’ point of view, it is the other way around). Here, we talk about counterflow lines corresponding to the parameter \(\kappa '\) and flow lines corresponding to \(\kappa \), so that they can be coupled with the same GFF (Fig. 4).
Again, let h be a GFF in \({\mathbb {H}}\) with piecewise constant boundary values. For each \(x \in {\mathbb {R}}\) and \(\theta \in {\mathbb {R}}\), we denote by \(\eta _\theta ^x\) the flow line of h from x to \(\infty \) with angle \(\theta \). Fix \(x_1,x_2 \in {\mathbb {R}}\) such that \(x_1 \ge x_2\), then the following holds (see Fig. 5 for illustrations).

(i)
If \(\theta _1 < \theta _2\), then \(\eta _{\theta _1}^{x_1}\) almost surely stays to the right of \(\eta _{\theta _2}^{x_2}\). If \(\theta _2 - \theta _1 < \frac{\pi \kappa }{4-\kappa }\), then the paths might hit and bounce off of each other, otherwise they almost surely never collide away from the starting point.

(ii)
If \(\theta _1 = \theta _2\), then \(\eta _{\theta _1}^{x_1}\) and \(\eta _{\theta _2}^{x_2}\) can intersect and if they do, they merge and never separate.

(iii)
If \(\theta _2< \theta _1 < \theta _2 + \pi \), then \(\eta _{\theta _1}^{x_1}\) and \(\eta _{\theta _2}^{x_2}\) can intersect, and if they do, they cross and never cross back. If \(\theta _1 - \theta _2 < \frac{\pi \kappa }{4-\kappa }\), then they can hit and bounce off of each other, otherwise they never intersect after crossing.
The above flow line interactions are the content of Theorem 1.5 of [16]. We shall make use of property (ii), as it is instrumental in our two-point estimate.
Level lines
The coupling is valid for \(\kappa = 4\) as well. We then interpret the resulting \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve as the level line of the GFF. Note that \(\chi (4)=0\), that is, there is no extra winding term, and hence the boundary values of the level line are constant along the curve, \(-\lambda \) on the left and \(\lambda \) on the right. As in the case of flow and counterflow lines, level lines can be defined in other domains and with different starting and ending points via conformal maps. For level lines, the terminology is a bit different: we say that \(\eta \) is a level line of height \(u\in {\mathbb {R}}\) if it is a level line of the GFF \(h+u\). The same interactions as for flow lines hold for level lines. We let \(\eta _u^x\) denote the level line of height u starting from x. Let \(x_1 \ge x_2\), then

(i)
If \(u_1 < u_2\), then \(\eta _{u_1}^{x_1}\) almost surely stays to the right of \(\eta _{u_2}^{x_2}\).

(ii)
If \(u_1 = u_2\), then \(\eta _{u_1}^{x_1}\) and \(\eta _{u_2}^{x_2}\) can intersect and if they do, they merge and never separate.
For more on the level lines of a GFF with piecewise constant boundary data, see [27].
Deterministic curves and Radon–Nikodym derivatives
We now recall some consequences of Proposition 2.5: two lemmas about flow and counterflow line behaviour (Figs. 6, 7) and two lemmas on absolute continuity, all from [21], which we will need in Sect. 4. Note that while they are proven for \(\kappa \ne 4\), the case \(\kappa = 4\) follows by the same argument as \(\kappa < 4\), when the \({{\,\mathrm{SLE}\,}}_4({\underline{\rho }})\) curves are coupled as level lines.
Lemma 2.8
(Lemma 2.3 of [21]) Fix \(\kappa > 0\) and let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with force points \(({\underline{x}}_L;{\underline{x}}_R)\) such that \(x_{1,L} = 0^-\), \(x_{1,R} = 0^+\) and \(\rho _{1,L},\rho _{1,R} > -2\). Let \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\) be a deterministic curve such that \(\gamma (0) = 0\) and \(\gamma ((0,1]) \subset {\mathbb {H}}\). Fix \(\varepsilon > 0\), write \(A(\varepsilon ) = \{ z: {{\,\mathrm{dist}\,}}(z,\gamma ([0,1])) < \varepsilon \}\) and define the stopping times
Then \({\mathbb {P}}(\sigma _1 < \sigma _2) > 0\).
Lemma 2.9
(Lemma 2.5 of [21]) Fix \(\kappa > 0\) and let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with force points \(({\underline{x}}_L;{\underline{x}}_R)\) such that \(x_{1,L} = 0^-\), \(x_{1,R} = 0^+\) and \(\rho _{1,L},\rho _{1,R} > -2\). Fix \(k \in {\mathbb {N}}\) such that \({\overline{\rho }}_{k,R} \in (\frac{\kappa }{2}-4,\frac{\kappa }{2}-2)\) and an \(\varepsilon > 0\) such that \(|x_{2,q}| \ge \varepsilon \) for \(q \in \{L,R\}\), \(x_{k+1,R}-x_{k,R} \ge \varepsilon \) and \(x_{k,R} \le \varepsilon ^{-1}\). Let \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\) be a deterministic curve with \(\gamma (0) = 0\), \(\gamma ((0,1)) \subset {\mathbb {H}}\) and \(\gamma (1) \in [x_{k,R},x_{k+1,R}]\), let \(A(\varepsilon ) = \{ z: {{\,\mathrm{dist}\,}}(z,\gamma ([0,1])) < \varepsilon \}\) and define the stopping times
Then there exists a \(p_1 = p_1(\kappa ,\max _{j,q} \rho _{j,q},{\overline{\rho }}_{k,R},\varepsilon ) > 0\), such that \({\mathbb {P}}(\sigma _1 < \sigma _2) \ge p_1\).
Next, we shall describe the Radon–Nikodym derivatives between \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes in different domains, see also [6] and [21]. The results that we need are Lemma 2.10 and Lemma 2.11, which we will use in Sect. 4. Let
be a configuration, that is, a Jordan domain D with boundary points \(z_0\), \({\underline{x}}_L\), \({\underline{x}}_R\) and \(z_\infty \), and let U be an open neighborhood of \(z_0\). Denote the law of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process with configuration c, stopped the first time \(\tau \) it exits U, by \(\mu _c^U\). Let \(H_D\) be the Poisson excursion kernel of D, that is, if \(\varphi : D \rightarrow {\mathbb {H}}\) is conformal, then
Furthermore, let \(\rho _\infty = \kappa - 6 - \sum _{j,q} \rho _{j,q}\) and
Moreover, let
where \(x_{j,q}^\tau = x_{j,q}\) if \(x_{j,q}\) is not swallowed by \(\eta \) at time \(\tau \), and otherwise \(x_{j,q}^\tau \) is the leftmost (resp. rightmost) point of \(K_\tau \cap \partial D\) on the clockwise (resp. counterclockwise) arc of \(\partial D\) if \(q=L\) (resp. \(q=R\)). Moreover, let \(\mu ^\text {loop}\) be the Brownian loop measure, a \(\sigma \)-finite measure on unrooted loops (see [13]), and write
where
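The Poisson excursion kernel defined above via a conformal map \(\varphi : D \rightarrow {\mathbb {H}}\) is only well-defined if the half-plane kernel transforms consistently under Möbius automorphisms of \({\mathbb {H}}\). Assuming the common normalization \(H_{{\mathbb {H}}}(x,y) = (y-x)^{-2}\) (the paper's normalization may differ by a constant factor), this invariance can be checked numerically:

```python
# Invariance of H_H(x, y) = (y - x)^{-2} under Mobius automorphisms of the
# upper half-plane, psi(z) = (a z + b)/(c z + d) with a d - b c = 1:
# psi'(x) psi'(y) H_H(psi(x), psi(y)) = H_H(x, y), so the definition of H_D
# does not depend on the choice of conformal map phi: D -> H.

def psi(a, b, c, d, x):
    return (a * x + b) / (c * x + d)

def psi_deriv(a, b, c, d, x):
    return (a * d - b * c) / (c * x + d) ** 2

def H(x, y):
    return (y - x) ** -2

a, b, c, d = 2.0, 1.0, 1.0, 1.0  # a d - b c = 1
x, y = 0.3, 1.7
lhs = psi_deriv(a, b, c, d, x) * psi_deriv(a, b, c, d, y) * H(psi(a, b, c, d, x), psi(a, b, c, d, y))
rhs = H(x, y)
```

The identity behind the check is \((\psi (y)-\psi (x))^2 = \psi '(x)\psi '(y)(y-x)^2\) for Möbius maps.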
Lemma 2.10
(Lemma 2.7 of [21]) Let \(c = (D,z_0,{\underline{x}}_L,{\underline{x}}_R,z_\infty )\) and \({\tilde{c}} = ({\widetilde{D}},z_0,\underline{{\tilde{x}}}_L,\underline{{\tilde{x}}}_R,{\tilde{z}}_\infty )\) be configurations and U an open neighborhood of \(z_0\) such that c and \({\tilde{c}}\) and the weights of the marked points agree in U, and the distance from U to the marked points of c and \({\tilde{c}}\) which differ is positive. Then, the probability measures \(\mu _c^U\) and \(\mu _{{\tilde{c}}}^U\) are mutually absolutely continuous and the Radon–Nikodym derivative between them is given by
Lemma 2.11
(Lemma 2.8 of [21]) Assume that we have the same setup as in Lemma 2.10, with \(D = {\mathbb {H}}\), \({\widetilde{D}} \subseteq {\mathbb {H}}\), \(U \subset {\mathbb {H}}\) bounded and \(z_0 = 0\). Fix \(\varsigma > 0\) and suppose that \({{\,\mathrm{dist}\,}}(U,{\mathbb {H}}{\setminus } {\widetilde{D}}) > \varsigma \) and that the force points which are outside U are at least at distance \(\varsigma \) from U. Then there exists a constant \(C \ge 1\), depending on U, \(\varsigma \), \(\kappa \) and the weights of the force points, such that
One-point estimates
In this section we will find first moment estimates, which will be of importance, as they give us the means to obtain good two-point estimates as well as the upper bound of the dimension of \(V_\beta ^*\). Recall that \({\tilde{g}}_s = g_{{\tilde{t}}(s)}\) is the Loewner chain under the radial time change, see Sect. 2.2.2.
Proposition 3.1
Let \(\zeta > -\mu _c^2/2a\) and \(x_R \ge 0\). For all \(x > x_R\), we have
where \(K = \frac{\Gamma (2-2a+2\mu ) \Gamma (2-4a-a\rho +\mu )}{\Gamma (2-2a+\mu ) \Gamma (2-4a-a\rho +2\mu )}\).
Proof
By (34), \(f(x) = x^{-\mu }\) is in \(L^1(\nu _{{\tilde{Q}}})\) (see Corollary A.2) and thus Corollary A.2 gives that
Thus, since \({\mathbb {P}}^*({\tilde{t}}(s) < \infty ) = 1\) for every s, we have
using (31) for the first equality, changing to the measure \({\mathbb {P}}^*\) for the third equality, using (40) in the fifth and (34) in the last equality. \(\square \)
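For illustration, the constant K of Proposition 3.1, read with Gamma arguments \(2-2a+2\mu \), \(2-4a-a\rho +\mu \), \(2-2a+\mu \) and \(2-4a-a\rho +2\mu \), can be evaluated directly; the parameter values below are placeholders chosen only so that every Gamma argument is positive, not values fixed by the paper. Note that \(K=1\) when \(\mu = 0\), since numerator and denominator then coincide.

```python
import math

# The constant K of Proposition 3.1, as a function of (a, rho, mu); the
# numerical values below are placeholders, chosen so that every Gamma
# argument is positive.
def K(a, rho, mu):
    g = math.gamma
    return (g(2 - 2 * a + 2 * mu) * g(2 - 4 * a - a * rho + mu)) / (
        g(2 - 2 * a + mu) * g(2 - 4 * a - a * rho + 2 * mu)
    )

k_at_zero = K(0.3, 0.0, 0.0)  # numerator and denominator coincide, so K = 1
k_sample = K(0.3, 0.0, 0.5)
```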
Using Proposition 3.1 and Lemma 2.4, recalling that we can choose a \(C^* = C^*(x,x_R)\) such that (35) holds, we get the following corollary.
Corollary 3.2
Suppose \(\zeta \ge 0\) and \(x_R \ge 0\). For every \(x > x_R\), there is a constant \(C=C(x,x_R)\) such that
Proof
By Lemma 2.4 and the fact that the map \(t \mapsto g_t'\) is decreasing, we have
By the previous proposition, we have
and
where the constants in O depend on x and \(x_R\) (since \(C^*\) does). Thus, the proof is done. \(\square \)
At this point, we already have what is needed for the upper bound of the dimension of \(V_\beta ^*\).
Mass concentration
In this subsection, we will see that the mass of the weighted measure \({\mathbb {P}}^*\) is concentrated on an event where the behaviour of \({\tilde{g}}_s'(x)\), for fixed x, is nice. On this event, we will show that \({\tilde{g}}_s'(x)\) satisfies a number of inequalities which will be helpful in proving the two-point estimate of the next section. The ideas here are similar to those of Section 7 of [11].
We define the process \({\tilde{L}}_s\), by recalling (30), as
As stated in Sect. 2.2.2 (and shown in the appendix), \({\tilde{Q}}_s\) has an invariant distribution under \({\mathbb {P}}^*\), with density \(p_{{\tilde{Q}}}\) (recall (33)). Therefore, by the ergodicity of \({\tilde{Q}}_s\) (Corollary A.2) and a computation,
holds \({\mathbb {P}}^*\)-almost surely, that is, the time average converges \({\mathbb {P}}^*\)-almost surely to the space average. We shall prove that, roughly speaking, as \(s \rightarrow \infty \), \({\tilde{L}}_s \approx \beta (1+\rho /2)s\), with an error of order \(\sqrt{s}\). To prove this, we need to prove the next lemma first.
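The time-average-equals-space-average principle invoked here can be illustrated with a simpler ergodic diffusion than \({\tilde{Q}}_s\) (this is a generic sketch, not the paper's process): an Ornstein–Uhlenbeck process \(dX_s = -\theta X_s\,ds + \sigma \,dB_s\) has invariant law \(N(0,\sigma ^2/(2\theta ))\), so the time average of \(X_s^2\) should converge to \(\sigma ^2/(2\theta )\).

```python
import math
import random

# Generic illustration of "time average = space average" for an ergodic
# diffusion (an Ornstein-Uhlenbeck process, NOT the paper's Q-tilde):
#   dX = -theta X dt + sigma dB  has invariant law N(0, sigma^2 / (2 theta)),
# so (1/T) int_0^T X_s^2 ds should be close to sigma^2 / (2 theta) for large T.
random.seed(7)
theta, sigma = 1.0, 1.0
dt, n = 0.01, 500_000  # Euler-Maruyama with horizon T = n * dt = 5000
sqrt_dt = math.sqrt(dt)
x, acc = 0.0, 0.0
for _ in range(n):
    x += -theta * x * dt + sigma * sqrt_dt * random.gauss(0.0, 1.0)
    acc += x * x * dt
time_avg = acc / (n * dt)
space_avg = sigma ** 2 / (2.0 * theta)
```

The agreement is up to a statistical error of order \(T^{-1/2}\) plus a small discretization bias.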
Lemma 3.3
Let \(\zeta > -\mu _c^2/2a\). There is a positive constant \(c<\infty \) such that for \(p>0\) sufficiently small, and \(t\ge 1\),
The proof idea is as follows. Observe that if we view \(\mu = \mu _c + \sqrt{\mu _c^2 + 2a\zeta }\) as a function of \(\zeta \), then \(\mu '(\zeta ) = a/\sqrt{\mu _c^2 + 2a\zeta } = \beta \). We define the process
which by Itô’s formula is seen to be a local martingale under \({\mathbb {P}}^*\). Since it is bounded from below, it is a supermartingale. Then we use that \(\mu (\zeta +\delta ) - \mu (\zeta ) = \delta \beta + O(\delta ^2)\) and that we have good control of \({\tilde{Q}}_s\).
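The elementary facts about \(\mu (\zeta )\) used in this sketch can be checked numerically (the values of a, \(\mu _c\) and \(\zeta \) below are illustrative): \(\mu (\zeta ) = \mu _c + \sqrt{\mu _c^2+2a\zeta }\) solves \(\mu ^2 - 2\mu _c\mu - 2a\zeta = 0\), its derivative equals \(\beta \), and the first-order expansion has an \(O(\delta ^2)\) error.

```python
import math

# Facts about mu(zeta) = mu_c + sqrt(mu_c^2 + 2 a zeta) used in the sketch:
# it solves mu^2 - 2 mu_c mu - 2 a zeta = 0, its derivative is
# beta = a / sqrt(mu_c^2 + 2 a zeta), and
# mu(zeta + delta) - mu(zeta) = delta * beta + O(delta^2).
a, mu_c, zeta = 0.4, 0.7, 1.3  # illustrative values, not fixed by the paper

def mu(z):
    return mu_c + math.sqrt(mu_c ** 2 + 2.0 * a * z)

beta = a / math.sqrt(mu_c ** 2 + 2.0 * a * zeta)

quad_residual = mu(zeta) ** 2 - 2.0 * mu_c * mu(zeta) - 2.0 * a * zeta
delta = 1e-3
increment_error = mu(zeta + delta) - mu(zeta) - delta * beta  # O(delta^2)
```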
Proof
We have that
since \(\mu (\zeta +\delta )-\mu (\zeta )=\delta \beta + O(\delta ^2)\). This implies that
Let \(\delta = \pm \frac{\varepsilon }{\sqrt{t}}\), where \(\varepsilon \) is small enough for \({\tilde{N}}_t\) to be well-defined. Then,
and exponentiating, we get
since \({\tilde{N}}_t\) is a supermartingale and hence \({\mathbb {E}}[{\tilde{N}}_t] \le {\mathbb {E}}[{\tilde{N}}_0] = {\tilde{Q}}_0^{\mu (\zeta +\delta )-\mu (\zeta )} = \left( \frac{x-x_R}{x}\right) ^{\mu (\zeta +\delta )-\mu (\zeta )}\). Consider the case \(\delta <0\), i.e.,
Since \({\tilde{Q}}_t \in [0,1]\) and \(\beta > 0\), we have \({\tilde{Q}}_t^{-\beta \frac{\varepsilon }{\sqrt{t}} + O(\varepsilon ^2/t)} \ge 1\) for sufficiently small \(\varepsilon \), and thus
Consider the case \(\delta >0\). We will split the expectation into the cases \({\tilde{Q}}_t \le y\) and \({\tilde{Q}}_t > y\) for some \(y \in (0,1]\). First,
which implies
for sufficiently small \(\varepsilon \). For the other part, note that since \({\tilde{L}}_t \ge 0\),
for some constant, \(c'\), where the last equality follows by Corollary A.2 and (34). If we let
then we see that both the “\({\tilde{Q}}_t \le y\)”part and the “\({\tilde{Q}}_t > y\)”part are bounded by positive constants. Thus, we are done. \(\square \)
With the previous lemma at hand, we can now prove the following.
Proposition 3.4
There exists a constant, c, such that if we fix \(t > 0\) and let \({\tilde{I}}_t^u\) be the event that for all \(0 \le s \le t\),
then, for every \(\varepsilon > 0\) there exists a \(u < \infty \) such that
for every t.
Proof
There is a constant, c, such that for any \(k \in {\mathbb {N}}\),
Thus, by splitting into subintervals of length 1 and applying Chebyshev’s inequality and Lemma 3.3 (with \(p>0\) chosen accordingly),
which is o(1) in u. \(\square \)
We shall denote both the event and the indicator function of the event as \({\tilde{I}}_t^u\), and we will more often than not drop the u in the notation and write \({\tilde{I}}_t\). Straightforward calculations, using that \({\tilde{g}}_s'(x) = e^{-a{\tilde{L}}_s}\), show that on the event of the above proposition, we have
where \(\psi _0\) is the subexponential function \(\psi _0(s) = e^{au \sqrt{s} \log (2+s) +c}\).
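That \(\psi _0\) is indeed subexponential, i.e. \(\log \psi _0(s)/s = au\log (2+s)/\sqrt{s} + c/s \rightarrow 0\) as \(s \rightarrow \infty \), is easy to check numerically (the constants below are illustrative):

```python
import math

# psi_0(s) = exp(a * u * sqrt(s) * log(2 + s) + c) is subexponential:
# log(psi_0(s)) / s -> 0 as s -> infinity.
a, u, c = 0.5, 1.0, 2.0  # illustrative constants

def log_psi0_over_s(s):
    return (a * u * math.sqrt(s) * math.log(2.0 + s) + c) / s

vals = [log_psi0_over_s(10.0 ** k) for k in range(1, 7)]
```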
Next, we want to convert these facts into the corresponding statements for \(g_{\tau _t}'(x)\). We let \(C^* = C^*(x,x_R)\) denote the constant as remarked after Lemma 2.4, that is, the constant such that, for \(t>0\), \({\tilde{t}}((t/a - C^*) \vee 0) \le \tau _t(x) \le {\tilde{t}}(t/a + C^*)\). What we will do now is to define an \({\mathscr {F}}_{\tau _t}\)-measurable version of \({\tilde{I}}_t^u\) (the indicator of the event of Proposition 3.4), and the natural way is to define this as the conditional expectation with respect to this filtration. Fix \(u>0\) and write
In the next lemma, we will see that this indeed works the same way for \(g_{\tau _t}'(x)\) as \({\tilde{I}}_t^u\) does for \({\tilde{g}}_t'(x)\). We will omit the superscript and write \(I_t = I_t^u\).
Lemma 3.5
Let \(u>0\) and \(I_t = I_t^u\) be as above. Then there is a subexponential function \(\psi \) such that for \(\max (0,\log (x-x_R)) \le s \le t\),
where the implicit constants depend on x and \(x_R\).
Proof
Fix \(u>0\) and write \(s_+ = s/a + C^*\) and \(s_- = s/a - C^*\) (where \(C^*\) is as described above). Since \(t \mapsto g_t'(x)\) is decreasing, we have
Hence, by (41)
In the same way,
and the lemma is proven. \(\square \)
Remark 3.6
We remark that we can allow a larger constant in the definition of the event in Proposition 3.4. The choice of constant c is not important, as if we let \({\tilde{I}}_t^{u,{\tilde{c}}}\) denote the event where we replace c by \({\tilde{c}}>c\), the same estimates hold with the subexponential function \(\psi _1(s) = e^{au\sqrt{s} \log (2+s)+{\tilde{c}}}\) in place of \(\psi _0\). Hence, the correct asymptotic behaviour of \({\tilde{g}}_s'(x)\) is preserved on \({\tilde{I}}_t^{u,{\tilde{c}}}\). Furthermore, \({\tilde{I}}_t^u \subset {\tilde{I}}_t^{u,{\tilde{c}}}\).
Two-point estimate
Outline
In this section, we use the imaginary geometry techniques to prove a two-point estimate that we need for the lower bound on the dimension of \(V_\beta ^*\) (and hence \(V_\beta \)). We follow the ideas of Sect. 3.2 of [21] and we will keep the notation similar. Note that we will write the proof for flow lines, i.e., \(\kappa <4\), but the merging property and every lemma that we will need hold for the level lines of the GFF as well, so the method also gives the two-point estimate in the case \(\kappa = 4\). The main idea is to use the merging of the flow lines and the approximate independence of the GFF in disjoint regions to “move the problem between scales” and separate the points when at the right scale.
We let h be a GFF in \({\mathbb {H}}\) with boundary conditions such that the flow line \(\eta \) from 0 is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process from 0 to \(\infty \). We define a sequence of random variables \(E^n(x)\), for \(x \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\), such that if \(E^n(x)>0\) for every \(n \in {\mathbb {N}}\), then \(x \in V_\beta ^*\) and we say that x is a perfect point. The idea for the construction of the random variables is as follows. Consider the event \(A_0^1(x)\), that \(\eta \) hits the ball \(B(x,\varepsilon _1)\), \(\varepsilon _1 = e^{-\alpha _1}\), and let \(E^0(x) = 1_{A_0^1(x)} I_{\alpha _1}^{u,\Lambda }\), where \(I_{\alpha _1}^{u,\Lambda }\) is the random variable of (42) but with a larger constant \(\Lambda \). That is, if \(E^0(x)>0\), then \(\eta \) gets within distance \(\varepsilon _1\) of x and the derivative \(g_{\tau _t}'(x)\) decays approximately as \(e^{-\beta (1+\rho /2)t}\) until \(\eta \) hits \(B(x,\varepsilon _1)\).
We proceed inductively. Assume that \(E^k(x)\) is defined and that \(\varepsilon _j = e^{-\sum _{l=1}^j \alpha _l}\), \(\alpha _l > 0\). Let \(\eta ^{x_{k+1}}\) be the flow line started from the point \(x_{k+1} = x-\varepsilon _{k+1}/4\). Let \(A_{k+1}^1(x)\) be the event that \(\eta ^{x_{k+1}}\) hits \(B(x,\varepsilon _{k+2})\), plus some regularity conditions. Furthermore, let \(I_{k+1}^{u,\Lambda ,k+1}\) denote the random variable corresponding to (42), but for \(\eta ^{x_{k+1}}\) until hitting \(B(x,\varepsilon _{k+2})\). Next, given that \(A_k^1(x)\) and \(A_{k+1}^1(x)\) occur, let \(A_{k+1}^2(x)\) be the event that \(\eta ^{x_k}\) hits \(\eta ^{x_{k+1}}\) plus some regularity conditions. We then set \(E^{k+1}(x) = E^k(x) 1_{A_{k+1}^1(x) \cap A_{k+1}^2(x)} I_{k+1}^{u,\Lambda ,k+1}\).
In short, we let a sequence of flow lines, on smaller and smaller scales, approach the point x, such that each flow line has the correct geometric behaviour as it approaches x. Moreover, each flow line hits and merges with the next. In this way, the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process \(\eta \) inherits its geometric behaviour from each of the flow lines. This is very convenient when deriving the two-point estimate, that is, when proving that the correlation of \(E^n(x)\) and \(E^n(y)\) is small when \(|x-y|\) is large. The key property that we use is that the flow lines started within the balls \(B(x, |x-y|/(2+\delta _0))\) and \(B(y,|x-y|/(2+\delta _0))\) are approximately independent when \(\delta _0 > 0\) (in the sense that the Radon–Nikodym derivative between the measures with and without the other set of flow lines present is bounded above and below by a constant). Moreover, the flow lines outside of those balls will also be approximately independent, in the same sense, see Lemma 4.4. Furthermore, the probability of two subsequent flow lines merging is comparable to 1, see Lemma 4.5.
Having a certain decay rate of the derivatives of the conformal maps is equivalent to having a certain decay rate of the harmonic measure from infinity of some set on the real line. This will be essential to us, as it is the tool with which we show that the perfect points actually belong to \(V_\beta ^*\). Moreover, it is important that \(\alpha _j \rightarrow \infty \), but not too quickly. If \(\alpha _j\) did not tend to \(\infty \), then the perfect points would just be points where \(\lim _{s\rightarrow \infty } \frac{1}{s} \log g_{\tau _s}'(x) \in [-\beta (1+\rho /2) - c,-\beta (1+\rho /2) + c]\).
In the next subsection, there will be parameters which at first may look redundant, but in fact play important roles in the regularity conditions. We conclude this subsection by listing them and give brief descriptions of how they are used.

\(\underline{\delta \in (0,\frac{1}{2})}\): Chosen to be very small; it makes sure that the curve \(\eta ^{x_k}\) does not hit \(B(x,\frac{1}{M}\varepsilon _k)\) or \(B(x,\varepsilon _{k+1})\) too close to the real line. This is important, as it makes sure that the probability of \(\eta ^{x_k}\) and \(\eta ^{x_{k+1}}\) merging does not decrease in k. Furthermore, it is needed in the one-point estimate, Lemma 4.6, as it gives control of a certain martingale.

\(\underline{M>0}\): Crucial in the proof that the perfect points belong to \(V_\beta ^*\). It makes sure that the probability that a Brownian motion started in \(B(x,\varepsilon _{k+1})\) exits in the interval between the rightmost point on \({\mathbb {R}}\) of \(\eta ^{x_{k+1}}\) (stopped upon hitting \(B(x,\varepsilon _{k+2})\)) and x depends mostly on \(\eta ^{x_{k+1}}\) and not on \(\eta ^{x_k}\). It is chosen to be large, so that the process \(Q^k\), under the measure \({\mathbb {P}}^*\), will be close in law to its \({\mathbb {P}}^*\)-invariant distribution when \(\eta ^{x_k}\) reaches \(B(x,\frac{1}{M}\varepsilon _k)\). Moreover, this also makes sure that the probability of \(\eta ^{x_k}\) and \(\eta ^{x_{k+1}}\) merging does not decrease in k.

\(\underline{\Lambda >0}\): Chosen large so that the event \({\tilde{I}}_t^{u,\Lambda }\) for \(\eta ^{x_k}\) contains the event \({\tilde{I}}_t^u\) for the image of \(\eta ^{x_k}\) under some map F.

\(\underline{u>0}\): Chosen large enough, so that the event \({\tilde{I}}_t^u\) has sufficiently large \({\mathbb {P}}^*\)probability.
Perfect points and the two-point estimate
Throughout this section we fix \(\kappa \in (0,4)\) and \(\rho \in (-2,\frac{\kappa }{2}-2)\) and let h be a GFF in \({\mathbb {H}}\) with boundary values \(-\lambda \) on \({\mathbb {R}}_-\) and \(\lambda (1+\rho )\) on \({\mathbb {R}}_+\), so that the flow line \(\eta \) from 0 to \(\infty \) is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with force point located at \(0^+\) (so the configuration is \(({\mathbb {H}},0,0^+,\infty )\)). Note that the interval for \(\rho \) is chosen so that \(\eta \) can hit \({\mathbb {R}}_+\). We denote the flow line from x by \(\eta ^x\) and note that for \(x>0\), \(\eta ^x\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) with configuration \(({\mathbb {H}},x,(0,x^-),x^+,\infty )\). We fix \(\delta \in (0,\frac{1}{2})\), \(M>0\) large and an increasing sequence, \(\alpha _j \rightarrow \infty \), write \({\overline{\alpha }}_k = \sum _{j=1}^k \alpha _j\) and let \(\varepsilon _k = e^{-{\overline{\alpha }}_k}\). The constants \(\delta \) and M will be chosen later. As for \(\alpha _j\), we define it as \(\alpha _j = \alpha _0+\log j\), where \(\alpha _0 = \log N\) for some large integer N. For \(x \ge 1\) and \(k \in {\mathbb {N}}\), we write
For \(U \subset {\mathbb {H}}\), we define
(when \(x=0\), we omit the superscript) and
and note that \(\sigma (B(x,\varepsilon _k))=\tau _{{\overline{\alpha }}_k}(x)\). Furthermore, let \(\sigma _{k,M}^x = \sigma ^{x_k}(B(x,\frac{1}{M}\varepsilon _k))\). We let \(\eta ^{x_k,R}\) denote the right side of the flow line \(\eta ^{x_k}\), \(r_t^k = \max \{ \eta ^{x_k}([0,t]) \cap {\mathbb {R}}\}\) and define \(Q_t^k\) by
Recall that by (24), \(Q_t(x)=Q_t^0(x)\) is the diffusion (23). For \(k \ge 0\), let \({\tilde{I}}_t^{u,\Lambda ,k}={\tilde{I}}_t^{u,\Lambda ,k}(x)\) denote the event (as well as the indicator of the event) of Proposition 3.4, with constant \(\Lambda \) (see Remark 3.6) but for the flow line \(\eta ^{x_k}\), and
as previously. The constants u and \(\Lambda \) will be chosen in Lemma 4.6. Note that the event \({\tilde{I}}_t^{u,\Lambda ,k}\) is a condition on the geometry of the curve which does not change when we rescale (it can be expressed in terms of \(Q^k(x)\), which is invariant under scaling of the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process). Moreover, if we let \(\eta _*^{x_k} = \varphi _k(\eta ^{x_k})\), where \(\varphi _k(z) = (z-x)/\varepsilon _k\) (so that \(\eta _*^{x_k}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) process with configuration \(({\mathbb {H}},-1/4,(-x/\varepsilon _k,0^-),0^+,\infty )\)) and let \((g_t^{*,k})_{t \ge 0}\) denote its Loewner chain, then on the event \(\{ I_k^{u,\Lambda ,k} > 0\}\),
for some subexponential function \(\psi =\psi _{u,\Lambda }\) (Fig. 8).
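Returning to the scales fixed at the start of the section: with \(\alpha _j = \alpha _0 + \log j\) and \(\alpha _0 = \log N\), they admit a closed form, \({\overline{\alpha }}_k = k\log N + \log k!\), so \(\varepsilon _k = e^{-{\overline{\alpha }}_k} = (N^k\,k!)^{-1}\) and \(\varepsilon _{k+1}/\varepsilon _k = e^{-\alpha _{k+1}} = (N(k+1))^{-1}\). A quick numerical sanity check of this restatement of the definitions (the function name is ours, not the paper's):

```python
import math

def eps(k: int, N: int) -> float:
    """epsilon_k = exp(-abar_k), where abar_k = sum_{j=1}^k (log N + log j)."""
    abar = sum(math.log(N) + math.log(j) for j in range(1, k + 1))
    return math.exp(-abar)

# Closed form: epsilon_k = 1 / (N**k * k!), so the scales decay
# super-exponentially and epsilon_{k+1} / epsilon_k = 1 / (N * (k + 1)).
```

In particular, consecutive scales separate faster and faster, which is what the "approximate independence" arguments below exploit.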
We let \(A_k^1(x) = A_k^1(x,\delta ,M,\alpha _0)\) be the event that

(i)
\(\sigma _k^x < \infty \),

(ii)
\(Q_{\sigma _{k,M}^x}^k(x),Q_{\sigma _k^x}^k(x) \in [\delta ,1-\delta ]\), and

(iii)
\(\sigma _k^x < \sigma ^{x_k}({\mathbb {H}}{\setminus } B(x,\frac{1}{2} \varepsilon _k))\),
that is, \(\eta ^{x_k}\) hits \(B(x,\varepsilon _{k+1})\) before exiting \(B(x,\frac{1}{2}\varepsilon _k)\) and it does not hit \(B(x,\frac{1}{M}\varepsilon _k)\) or \(B(x,\varepsilon _{k+1})\) “too far down” (the latter being due to the condition on \(Q_t^k(x)\)). Now, we set
We let \(A_k^2(x)=A_k^2(x,\delta ,M,\alpha _0)\) be the event that on \(A_k^1(x)\) and \(A_{k+1}^1(x)\),

(i)
\(\eta ^{x_{k1}}_{[\sigma _{k1}^x,\infty )}\) merges with \(\eta ^{x_k}_{[0,\sigma _k^x)}\) before exiting \(B(x,\frac{3}{2} \varepsilon _k) {\setminus } B(x,\frac{1}{M}\varepsilon _k)\),

(ii)
\(\arg (\eta ^{x_{k1}}(t)x) \ge \frac{2}{3} \min (\arg (\eta ^{x_{k1}}(\sigma _{k1}^x)x),\arg (\eta ^{x_k}(\sigma _k^x)x))\) for \(t> \sigma _{k1}^x\) but before merging with \(\eta ^{x_k}\),
that is, property (ii) makes sure that the curve does not get “too close” to \({\mathbb {R}}_+\), see Fig. 9. We let \(E_k^2(x) = 1_{A_k^2(x)}\) be the indicator of that event. Next, we let \(E_k(x) = E_k^1(x) E_k^2(x)\) and write
and \(E^n(x) = E^{1,n}(x)\).
At first sight, it may not be clear why this is the right setting and why these are the correct conditions to impose. We prove that they are in the next lemma.
Lemma 4.1
If \(E^n(x)>0\) for each \(n\in {\mathbb {N}}\), then \(x \in V_\beta ^*\).
Proof
First, note that we are considering the decay of the conformal maps at a sequence of times \({\overline{\alpha }}_k \rightarrow \infty \), rather than as the limit over a continuum. However, by the monotonicity of the map \(t \mapsto g_t'(x)\), this is sufficient. By the Koebe 1/4 theorem,
for each integer \(k \in {\mathbb {N}}\). Hence, it is enough to see that the decay rate of \(\omega _\infty \) is the correct one.
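The display invoked here is not reproduced in our copy; presumably it is the standard comparison between the spatial derivative and harmonic measure from infinity. A hedged sketch, with \(r_t\) denoting the rightmost point of the hull \(K_t\) on \({\mathbb {R}}\) and \(\omega _\infty \) normalized so that \(\omega _\infty (E,{\mathbb {H}}) = |E|/\pi \) for intervals \(E \subset {\mathbb {R}}\):

```latex
\omega_\infty\big( (r_t,x],\, \mathbb{H}\setminus K_t \big)
  = \frac{1}{\pi}\,\big| g_t\big((r_t,x]\big) \big|
  \asymp g_t'(x)\,(x - r_t),
```

the equality by conformal invariance of Brownian motion under the hydrodynamically normalized map \(g_t\), and the comparison by the Koebe 1/4 and distortion theorems. Thus the decay rate of \(g_t'(x)\) along the sequence of times \(\tau _{{\overline{\alpha }}_k}\) is governed by that of \(\omega _\infty \).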
Let \({\widehat{K}}_n\) denote the closure of the complement of the unbounded connected component of \({\mathbb {H}}{\setminus } (\eta ([0,\tau _{{\overline{\alpha }}_{n+1}}]) \cup \eta ^{x_n}([0,\sigma _n^x]))\). Clearly, on the event \(\{E^n(x)>0\}\),
since \(K_{\tau _{{\overline{\alpha }}_{n+1}}} \subset {\widehat{K}}_n\) and \((r_{\sigma _n^x}^n,x] \subset (r_{\tau _{{\overline{\alpha }}_{n+1}}},x]\). In view of \(\omega \) as the hitting probability of a Brownian motion, it is easy to see that
where the implicit constant is independent of n. Indeed, if \(L_n\) denotes the line segment \([x,\eta ^{x_n}(\sigma _n^x)]\), then a Brownian motion, started in the unbounded connected component of \({\mathbb {H}}{\setminus } ({\widehat{K}}_n \cup L_n)\), which exits \({\mathbb {H}}{\setminus } {\widehat{K}}_n\) in either of the two intervals \((r_{\tau _{{\overline{\alpha }}_{n+1}}},r_{\sigma _n^x}^n]\) and \((r_{\sigma _n^x}^n,x]\), must first hit the line segment \(L_n\). However, from any point \(z \in L_n\), \({{\,\mathrm{dist}\,}}(z,(r_{\tau _{{\overline{\alpha }}_{n+1}}},r_{\sigma _n^x}^n]) > \varepsilon _{n+1}\), \({{\,\mathrm{dist}\,}}(z,(r_{\sigma _n^x}^n,x]) \le \varepsilon _{n+1}\) and \(x-r_{\sigma _n^x}^n > \varepsilon _{n+1}\). Hence, the conditional probability of the Brownian motion exiting in \((r_{\sigma _n^x}^n,x]\), given that it will exit in \((r_{\tau _{{\overline{\alpha }}_{n+1}}},x]\), is greater than some \({\hat{p}}>0\). That the constant is independent of n follows from scale invariance. See Fig. 10 for the illustration of (45). Thus, we have proven that on the event \(\{E^n(x)>0\}\),
Finally, we shall prove that \(\omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n)\) has the correct decay rate, that is, that
We start with the upper bound. Then, since \(\eta ([0,\tau _{\alpha _1}]) \cup \bigcup _{j=1}^n \eta ^{x_j}([0,\sigma _j^x]) \subset {\widehat{K}}_n\), we have, by the Markov property for Brownian motion,
using that if \(K^1 \subset K^2 \subset {\mathbb {H}}\), then \(\omega (z,E,K^1) \ge \omega (z,E,K^2)\) for \(E \subset {\mathbb {R}}{\setminus } \partial K^2\) and \(z \in {\mathbb {H}}{\setminus } K^2\) (by removing obstacles, we allow more Brownian paths, and hence the probability of exiting in that interval increases). Next, note that for \(z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) \), we have that
where the implicit constant is independent of both z and j. This holds since \(\partial B(x,\frac{\varepsilon _{j+1}}{2})\) and \((r_{\sigma _j^x}^j,x]\) have comparable sizes, as well as comparable distances to z. Thus,
where C is independent of j. Moreover, note that on the event \(\{E^n(x) > 0\}\), the condition \(I_j^{u,\Lambda ,j}>0\) implies that (43) holds, and using the Koebe 1/4 theorem as in (44), we have
for some subexponential function \(\psi = \psi _{u,\Lambda }\). Next, we need that
where the implicit constant is independent of j. By Harnack’s inequality, we can choose an arc \(S \subset \partial B(0,\frac{1}{2})\), depending only on the parameters \(\delta \), M, \(\Lambda \) and u, such that for each \(z \in \varphi _j^{-1}(S) \subset \partial B\left( x,\frac{1}{2} \varepsilon _j\right) \),
The fact that this will hold for every j follows since the same geometric restrictions are imposed on each flow line \(\eta ^{x_j}\). We let \(\varrho \) and \(\tau _*\) denote the first exit times of \({\mathbb {H}}{\setminus } B(0,\frac{1}{2})\) and \({\mathbb {H}}{\setminus } \eta _*^{x_j}([0,\sigma _j^x])\), respectively (recall that \(\eta _*^{x_j} = \varphi _j(\eta ^{x_j})\)). Then,
where we used the fact that \(\omega _\infty (\partial B(0,\frac{1}{2}),{\mathbb {H}}{\setminus } B(0,\frac{1}{2})) \asymp \omega _\infty (S, {\mathbb {H}}{\setminus } B(0,\frac{1}{2}))\) together with (50) on the first line, the conformal invariance of Brownian motion on the third line, (50) on the fourth line and that \(\omega _\infty (S,{\mathbb {H}}{\setminus } B(0,\frac{1}{2})) \asymp 1\) on the fifth line. Thus (49) holds. Combining this with (48), we have
for some constant \({\hat{C}}\). Thus, combining this with (47)
that is,
We now turn to the lower bound. We begin by writing \({\tilde{\tau }}_0 = \inf \{ t>0: \eta (t) \in \eta ^{x_1}([0,\sigma _1^x]) \}\). Next, \(A_1^2(x)\) and (ii) of \(A_0^1(x)\) and \(A_1^1(x)\) imply that \(\eta ((\tau _{\alpha _1},{\tilde{\tau }}_0])\) is not “too close” to \({\mathbb {R}}\), in the sense that there will be a sector \(\{ z: 0<\arg (zx) < c\}\) that the curve will not enter, and the distance from \(\eta ([0,{\tilde{\tau }}_0])\) to x is of the same order as the distance from \(\eta ([0,\tau _{\alpha _1}])\) to x. Thus,
where the implicit constant depends only on \(\delta \), M and \(\Lambda \). In fact, the probability of the Brownian motion hitting \((r_{\alpha _1},x]\) and some arc \(S_1 = \{ \frac{1}{2} \varepsilon _1 e^{i\theta }: \theta \in [\theta _1(\delta ), \theta _2(\delta )], \ \theta _1(\delta ) > 0 \}\), such that \(S_1 \cap \eta ([0,{\tilde{\tau }}_0]) = \emptyset \) is proportional to the probability of the Brownian motion hitting \((r_{\alpha _1},x]\). That is, let \(\upsilon \) denote the exit time of \({\mathbb {H}}{\setminus } \eta ([0,{\tilde{\tau }}_0])\) for the Brownian motion, then
where, again, the implicit constants depend only on \(\delta \), M and \(\Lambda \). By the same reasoning, if \(S_k = \{ \frac{1}{2}\varepsilon _k e^{i\theta }: \theta \in [\theta _1(\delta ), \theta _2(\delta )], \ \theta _1(\delta ) > 0 \}\), then for every \(z \in S_{k1}\),
where \(\upsilon _k\) denotes the exit time from \({\mathbb {H}}{\setminus } {\widehat{K}}_k\) for the Brownian motion, and the implicit constant does not depend on k. Thus, by the Markov property, (51), and (52) together with (48),
Hence,
Therefore, by (46),
and consequently by (44),
Thus, if \(E^n(x)>0\) for every \(n \in {\mathbb {N}}\), then \(x \in V_\beta ^*\). \(\square \)
The two-point estimate that we will establish here is the following. We consider \(x,y\in [1,2]\) for convenience, but it will be clear that the same proof works for every compact interval.
Proposition 4.2
For each sufficiently small \(\delta \in (0,\frac{1}{2})\) there exists a subpower function \({\tilde{\Psi }}_\delta \) such that for all \(x,y \in [1,2]\) and \(m \in {\mathbb {N}}\) such that \(2\varepsilon _{m+2} \le |x-y| \le \frac{1}{2} \varepsilon _m\), we have
Remark 4.3
Noting that \(\varepsilon _{m+2} = \varepsilon _m^{1+o_m(1)}\) as \(m \rightarrow \infty \), we can write Proposition 4.2 as
where \(\Psi _\delta (s)\) is a subpower function.
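The assertion \(\varepsilon _{m+2} = \varepsilon _m^{1+o_m(1)}\) is a direct computation from the definitions \(\alpha _j = \alpha _0 + \log j\) and \({\overline{\alpha }}_k = \sum _{j=1}^k \alpha _j \ge k\alpha _0\):

```latex
\frac{\log \varepsilon_{m+2}}{\log \varepsilon_m}
 = \frac{\overline{\alpha}_{m+2}}{\overline{\alpha}_m}
 = 1 + \frac{\alpha_{m+1} + \alpha_{m+2}}{\overline{\alpha}_m}
 \le 1 + \frac{2\big(\alpha_0 + \log (m+2)\big)}{m\,\alpha_0}
 = 1 + o_m(1).
```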
The main ingredients in the proof are divided into three lemmas: the first establishes “approximate independence” between flow line interactions in different regions; the second states that merging of these flow lines happens with high enough probability; and the third is a one-point estimate.
Lemma 4.4
For every \(x \ge 1\) and \(m,n \in {\mathbb {N}}\) such that \(m \le n\), it holds that
Furthermore, if y is such that \(2\varepsilon _{m+2} \le |x-y| \le \frac{1}{2} \varepsilon _m\), then
The constants in \(\asymp \) may depend on \(\kappa \) and \(\rho \).
Proof
In order to prove the first part, it suffices to prove that
Let \(v = \eta (\tau _{{\overline{\alpha }}_{m+1}})\), denote by \(K^1\) the closure of the complement of the unbounded component of
(see Fig. 11) and let \(v_+ = \max \{K^1 \cap {\mathbb {R}}\}\). As stated above, \(\eta ^{x_{m+1}}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) curve with configuration
Thus, recalling Remark 2.7 and Fig. 3 and noting the boundary conditions illustrated in Fig. 11, the conditional law of \(\eta ^{x_{m+1}}\) given \(\eta _{[0,\tau _{{\overline{\alpha }}_{m+1}}]}\), \(\eta ^{x_m}_{[0,\sigma _m^x]}\) and \(E^m(x)\), restricted to the event \(\{E^m(x) > 0\}\) (recall that \(E^m(x)\) is determined by \(\eta _{[0,\tau _{{\overline{\alpha }}_{m+1}}]}\) and \(\eta ^{x_m}_{[0,\sigma _m^x]}\)) is that of an \({{\,\mathrm{SLE}\,}}_\kappa (2,\rho ,-2-\rho ;\rho )\) process with configuration
(Note that the boundary data to the left of v on \(K^1\) is the same as on \({\mathbb {R}}_\), so that the leftmost force point is indeed v.)
Let \(U = B(x,\frac{1}{2} \varepsilon _{m+1})\) and let \(\tau = \sigma ^{x_{m+1}}({\mathbb {H}}{\setminus } U)\) be the exit time of U. Also, let \(K^2\) be the closure of the complement of the unbounded component of \({\mathbb {H}}{\setminus } \eta ^{x_{m+1}}([0,\tau ])\), \({\tilde{v}} = \eta ^{x_{m+1}}(\tau )\), \({\tilde{v}}_ = \min \{K^2 \cap {\mathbb {R}}\}\) and \({\tilde{v}}_+ = \max \{K^2 \cap {\mathbb {R}}\}\). Furthermore, let
Then, by Lemma 2.10,
Since \({{\,\mathrm{dist}\,}}(K^1,x) = \varepsilon _{m+1}\), and \(K^2 \subseteq \overline{B(x,\frac{1}{2} \varepsilon _{m+1})}\), we have that \({{\,\mathrm{dist}\,}}(K^1,K^2) \gtrsim \varepsilon _{m+1}\). Also, \(\text {diam}(U) = \varepsilon _{m+1}\) and hence
Thus, after rescaling, we are in the setting of Lemma 2.11 and thus there exists a constant \(C \ge 1\) such that
that is, the Radon–Nikodym derivative between the law of \(\eta ^{x_{m+1}}\) stopped upon exiting U, given \(\eta ([0,\tau _{{\overline{\alpha }}_{m+1}}])\), \(\eta ^{x_m}([0,\sigma _m^x])\) and \(E^m(x)\), restricted to the event \(\{E^m(x) > 0\}\), and the law of \(\eta ^{x_{m+1}}\) stopped upon exiting U is bounded above and below by constants. Moreover, by (53), the constant C is independent of m. Hence,
and the case \(n = m+1\) is proven. Now, suppose that \(n \ge m+2\). Obviously, the Radon–Nikodym derivative between the law of \(\eta ^{x_n}\) stopped upon leaving the component of \(B(x,\frac{1}{2}\varepsilon _n) {\setminus } \eta ^{x_{m+1}}([0,\tau ])\) in which it starts growing and the law where we condition on \(K^1\) and \(E^m(x)\), restricted to \(\{E^m(x) > 0\}\), as well, is bounded above and below by a constant, by the very same argument as above. (Note that the distances have the same lower bound and that U is unchanged.) Furthermore, conditional on \(\eta ^{x_{m+1}}([0,\sigma ^{x_{m+1}}(B(x,\varepsilon _{n+1}))])\) and \(\eta ^{x_n}([0,\sigma _n^x])\) merging, the joint laws of \(\eta ^{x_j}_{[0,\sigma _j^x]}\) for \(j = 1,\dots ,m\) and \(\eta ^{x_k}_{[0,\sigma _k^x]}\) for \(k=m+2,\dots ,n-1\) are independent, hence the proof of the first part is done.
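The way the two-sided Radon–Nikodym bounds are used here (and repeatedly below) is the following elementary comparison principle: if two laws agree up to a two-sided density bound on a \(\sigma \)-algebra \({\mathscr {G}}\), then their expectations of nonnegative \({\mathscr {G}}\)-measurable quantities are comparable with the same constant:

```latex
C^{-1} \le \frac{d\mathbb{P}_1}{d\mathbb{P}_2}\bigg|_{\mathscr{G}} \le C
\quad\Longrightarrow\quad
C^{-1}\,\mathbb{E}_2[X] \le \mathbb{E}_1[X] \le C\,\mathbb{E}_2[X]
\quad \text{for all } \mathscr{G}\text{-measurable } X \ge 0.
```

Applied with X a product of the indicators \(E_k(x)\), this turns the conditional density bounds into the \(\asymp \) statements of the lemma.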
The second part is proven in the same way, noting that if \(\tau ^y = \sigma ^{y_{m+2}}({\mathbb {H}}{\setminus } B(y,\frac{1}{2}\varepsilon _{m+2}))\), \({\tilde{\tau }}^y = \inf \{ t>0: \eta ^{y_n}(t) \notin B(y,\frac{1}{2}\varepsilon _n) {\setminus } \eta ^{y_{m+2}}([0,\tau ^y]) \}\) and \(K^3\) is the closure of the complement of the unbounded connected component of \({\mathbb {H}}{\setminus } \left( \eta ^{y_{m+2}}_{[0,\tau ^y]} \cup \eta ^{y_n}_{[0,{\tilde{\tau }}^y]} \right) \), then \({{\,\mathrm{dist}\,}}(K^1 \cup K^2,K^3) \gtrsim \varepsilon _{m+2}\) and \(\text {diam}(B(y,\frac{1}{2}\varepsilon _{m+2})) = \varepsilon _{m+2}\). Thus, as above, we can rescale and apply Lemma 2.11, to see that
that is,
Repeating the above argument, noting that \({{\,\mathrm{dist}\,}}(K^1,K^2)/\text {diam}(B(x,\frac{1}{2} \varepsilon _{m+2})) \gtrsim 1\), we have that
and the proof is done. \(\square \)
Lemma 4.5
For each \(x \ge 1\) and \(m,n \in {\mathbb {N}}\) such that \(m \le n\), it holds that
where the constants can depend on \(\kappa \), \(\rho \), \(\delta \) and M.
Proof
We begin by noting that by the first part of Lemma 4.4,
Thus, it remains to show that \({\mathbb {E}}[E^n(x)] \gtrsim {\mathbb {E}}[E^m(x)] {\mathbb {E}}[E^{m,n}(x)]\). In order to prove this, it suffices to prove that
that is, that with positive probability (which is independent of m and n), \(\eta ^{x_m}_{[\sigma _m^x,\infty )}\) will merge with \(\eta ^{x_{m+1}}_{[0,\sigma _{m+1}^x)}\) before exiting \(B(x,\frac{3}{2} \varepsilon _{m+1}){\setminus } B(x,\frac{1}{M}\varepsilon _{m+1})\) and not come too close to \({\mathbb {R}}\) (in the sense of property (ii) of \(A_{m+1}^2(x)\)). For the remainder of the proof, assume that \(E^m(x),E^{m,n}(x)>0\) (if not, we are done). Let \(K^1\) and \(K^2\) be the closures of the complements of the unbounded connected components of \({\mathbb {H}}{\setminus } (\eta ([0,\tau _{{\overline{\alpha }}_{m+1}}]) \cup \eta ^{x_m}([0,\sigma _m^x]))\) and \({\mathbb {H}}{\setminus }( \eta ^{x_{m+1}}([0,{\hat{\tau }}]) \cup \eta ^{x_n}([0,\sigma _n^x]))\), respectively, where \({\hat{\tau }} = \sigma ^{x_{m+1}}(B(x,\varepsilon _{n+1}))\), and let \(K = K^1 \cup K^2\). Let \(K^{1,L}\) and \(K^{1,R}\) denote the boundaries of \(K^1\) to the left and right of \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) and let \(K_M^{2,L} = \eta ^{x_{m+1}}([0,{\hat{\tau }}]) \cap \partial K^2\) be the part of \(K^2\) that \(\eta ^{x_m}\) should hit and merge with.
Before going on with the proof, we discuss the strategy. Note that the law of the flow line from \(\eta (\tau _{{\overline{\alpha }}_{m+1}}) = \eta ^{x_m}(\sigma _m^x)\), given \(K^1\) and \(K^2\), is that of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-2-\rho ,2,\rho )\) process with configuration \(({\mathbb {H}}{\setminus } K,\eta (\tau _{{\overline{\alpha }}_{m+1}}),(r_{\sigma _m^x}^m,l_{{\hat{\tau }}}^{m+1},\eta ^{x_{m+1}}({\hat{\tau }}),r_{\sigma _n^x}^n),\infty )\), where \(l_t^{m+1} = \min \{\eta ^{x_{m+1}}([0,t]) \cap {\mathbb {R}}\}\). The idea of the proof is to map \({\mathbb {H}}{\setminus } K\) to \({\mathbb {H}}\), so that the image of the flow line is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-2-\rho ,2,\rho )\) process in \({\mathbb {H}}\) (see Fig. 12), and use Lemma 2.9 to see that the merging, as well as the geometric restriction of property (ii) of \(A_{m+1}^2(x)\), occurs with positive probability, independent of m. (Note that if the conditions of Lemma 2.9 are satisfied, then we can just choose some suitable deterministic curve, such that the properties are satisfied when the flow line follows that curve.) Let \(g_K\) denote the mapping-out function of \(K = K^1 \cup K^2\) (recall Sect. 2.1). The image of the flow line from \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) under \(g_K\) is then an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-2-\rho ,2,\rho )\) with configuration \(({\mathbb {H}}, g_K(\eta (\tau _{{\overline{\alpha }}_{m+1}})),(g_K(r_{\sigma _m^x}^m),g_K(l_{{\hat{\tau }}}^{m+1}),g_K(\eta ^{x_{m+1}}({\hat{\tau }})),g_K(r_{\sigma _n^x}^n)),\infty )\). However, the images of the boundary sets under the mapping-out function decrease in length as m increases, and thus we need to rescale to be able to use Lemma 2.9. That is, we want to show that we can find some scaling factor k, which will depend on m, so that the images of the boundary sets under \({\hat{g}}_K :=k g_K\) are of appropriate sizes.
More precisely, recall that we want the flow line from \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) to hit \(\eta ^{x_{m+1}}([0,\sigma _{m+1,M}^x])\), that is, we want the flow line from \({\hat{g}}_K(\eta (\tau _{{\overline{\alpha }}_{m+1}}))\) to hit \({\hat{g}}_K(K_M^{2,L}) =({\hat{g}}_K(l_{{\hat{\tau }}}^{m+1}),{\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x)))\). Then, to be able to use Lemma 2.9, we need to check that we can fix some \({\tilde{\varepsilon }}>0\), which may depend on \(\delta \) and M, but is independent of m, such that
Note that since there is no force point to the left of \({\hat{g}}_K(\eta (\tau _{{\overline{\alpha }}_{m+1}}))\), we do not need to care about the force point \(x_{2,L}\) of the lemma. Furthermore, while the lemma concerns hitting the interval between two force points, it is still applicable to the interval \(({\hat{g}}_K(l_{{\hat{\tau }}}^{m+1}),{\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x)))\), as we can just consider \({\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x))\) as a force point with weight 0.
Now, we want to estimate the length of intervals which are images of boundary sets, under \(g_K\). Recalling (8), it is natural to consider the harmonic measure from infinity of the boundary sets. Rephrasing the above, in terms of harmonic measure from infinity, we need to check that there exists some \({\hat{\varepsilon }}>0\), independent of m, such that
if k is chosen properly. We shall show that the three harmonic measures are actually proportional, with proportionality constants independent of m, and thus, letting \(k = \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K)^{-1}\), we are in the setting of Lemma 2.9 and the result follows.
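Since the displays are not reproduced in our copy, we recall the normalization of harmonic measure from infinity that we assume is fixed in (8), together with its scaling property, which is what makes this choice of k work:

```latex
\omega_\infty(E,\, \mathbb{H}\setminus K)
  := \lim_{R\to\infty} R\,\mathbb{P}^{iR}\big( B_{\tau} \in E \big),
\qquad
\omega_\infty(kE,\, \mathbb{H}\setminus kK) = k\,\omega_\infty(E,\, \mathbb{H}\setminus K),
```

where \(\tau \) is the exit time of \({\mathbb {H}}\setminus K\) for the Brownian motion B. With \(k = \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K)^{-1}\), the image \({\hat{g}}_K(K^{1,R})\) then has harmonic measure (hence, by (8), length) of order 1, and the proportionality statements give the same for the other two boundary sets.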
We now turn to proving that the above harmonic measures are proportional. Arguing as in the proof of Lemma 4.1 (i.e., a Brownian motion must first hit the arc of \(\partial B(x,\varepsilon _{m+1})\) with endpoints \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) and \(x+\varepsilon _{m+1}\); from there, the probabilities of hitting the different sets are proportional), we see that
and that
and the implicit constants are independent of m. Condition (ii) of \(A_m^1(x)\) states that
and consequently
Thus,
and hence, by (54),
Next, we note that
where the first inequality holds since the lefthand side is the harmonic measure from infinity of a larger set, with fewer obstacles for the Brownian paths. By (56), (57) and (58),
Hence, rescaling by letting \(k = \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K)^{-1}\), we can use Lemma 2.9, and the proof is done. \(\square \)
Lemma 4.6
For each \(\delta \in (0,\frac{1}{2})\), sufficiently small, there exist a constant \({\tilde{c}}(\delta )>0\) and a subexponential function \(\psi \) such that for each \(x \ge 1\),
Proof
By Lemma 4.4,
so we need to show that there exist a constant \({\tilde{c}}(\delta )\) and a subexponential function \(\psi \) such that
However, (60) follows from the very same argument as Lemma 4.5, so we will now concern ourselves with (59). Since \(1_{A_k^1}\) is \({\mathscr {F}}_{\sigma _k^x}\)-measurable, we have
As stated above, \(\eta ^{x_k}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) curve with configuration \(({\mathbb {H}},x_k,(0,x_k^-),x_k^+,\infty )\) and by Lemma 2.11, the Radon–Nikodym derivative between the law of \(\eta ^{x_k}\) and an \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho )\) curve with configuration \(({\mathbb {H}},x_k,x_k^-,x_k^+,\infty )\), both stopped upon exiting \(B(x,\frac{1}{2}\varepsilon _k)\), is bounded above and below by constants; hence we can (and will) instead consider the latter. Also, we can translate and rescale the process, so that we consider an \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho )\) curve, \({\hat{\eta }}\), started from 0, with the point that we want the curve to approach being 1. Then, since \({{\,\mathrm{dist}\,}}(x_k,x) = \frac{1}{4} \varepsilon _k\), the event \(\{ \sigma _k^x < \sigma ^{x_k}({\mathbb {H}}{\setminus } B(x,\frac{1}{2}\varepsilon _k)) \}\) turns into the event \(\{ {\hat{\eta }} \text { hits } B(1,e^{-\alpha _{k+1}}) \text { before leaving } B(1,2) \}\) and the event \(\{ Q_{\sigma _{k,M}^x}^k, Q_{\sigma _k^x}^k \in [\delta ,1-\delta ] \}\) remains roughly the same. More precisely, let \({\hat{Q}}_t\) denote the process defined by (24) (but with \({\hat{\eta }}\) in place of \(\eta \)), \(\sigma _M = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}([0,t]),1) < 4/M \}\) and \(\sigma _k = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}([0,t]),1) < e^{-\alpha _{k+1}} \}\); then (by translation and scaling invariance of \(Q_t^k\)), \(\{ {\hat{Q}}_{\sigma _k}, {\hat{Q}}_{\sigma _M} \in [\delta ,1-\delta ] \} = \{ Q_{\sigma _{k,M}^x}^k, Q_{\sigma _k^x}^k \in [\delta ,1-\delta ] \}\). We denote by \((g_t)\) the Loewner chain corresponding to \({\hat{\eta }}\) and weight the probability measure \({\mathbb {P}}\) with the local martingale (recall (11))
and denote the resulting measure by \({\mathbb {P}}^*\). Note that it is under \({\mathbb {P}}^*\) that we can choose u such that \({\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}\) has probability arbitrarily close to 1 (in the case with no force point to the left). We note that Lemma 3.5 implies that on \(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}\)
for some subexponential function \(\psi \). Furthermore, \({\hat{Q}}_{\sigma _k} \asymp 1\) and, by Lemma 2.3, \(\delta _{\sigma _k} \asymp e^{-\alpha _{k+1}}\), that is, \(\delta _{\sigma _k}^{\mu (1+\rho /2)} \asymp e^{-\alpha _{k+1}\mu (1+\rho /2)}\). Moreover, we see that \(g_{\sigma _k}(1) - V_{\sigma _k}^L \asymp 1\), since it is the harmonic measure from infinity of the left side of \({\hat{\eta }}\), so that it is bounded above by \(\omega _\infty (B(1,2),{\mathbb {H}})\), which is finite, and bounded below by \(\omega _\infty ([0,1/2],{\mathbb {H}})=1/2\pi \). Since \({\mathbb {P}}^*(A) = {\mathbb {E}}[M_{\sigma _k}^\zeta (1)1_A]\) and
we have that
Now we need only show that \({\mathbb {P}}^*(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}) \asymp 1\). Note that under \({\mathbb {P}}^*\), \({\hat{\eta }}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho ,-\mu \kappa )\) curve with configuration \(({\mathbb {H}},0,0^-,(0^+,1),\infty )\). We shall begin by reducing this to the case of an \({{\,\mathrm{SLE}\,}}\) process with no force points to the left of 0. The following procedure is illustrated in Fig. 13. Let \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\) be a deterministic curve starting at 0 and remaining in \({\mathbb {H}}\) after that, and \({\hat{\varepsilon }} > 0\) be such that if \({\hat{\eta }}\) comes within distance \({\hat{\varepsilon }}\) of the tip \(\gamma (1)\) before exiting the \({\hat{\varepsilon }}\)-neighborhood of \(\gamma \), then
where
\({\hat{\sigma }}_1 = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}([0,t]),\gamma (1)) < {\hat{\varepsilon }}\}\), and \(({\hat{f}}_t)\) and \(({\hat{K}}_t)\) are the centered Loewner chain and hulls respectively. Then, the curve \({\overline{\eta }}(t) = F({\hat{\eta }}({\hat{\sigma }}_1+t))\) has the law of a time-changed \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho ,-\mu \kappa )\) curve with configuration \(({\mathbb {H}},0,x_L,(x_R,1),\infty )\), where \(x_L < -2\) and \(x_R \in [0^+,1)\). Note that we may choose \(\gamma \) and \({\hat{\varepsilon }}\) so that \(1-x_R > {\hat{\delta }}\) for some \({\hat{\delta }} > 0\) (to get a bound on the constant \(C^*\), chosen as remarked after Lemma 2.4). By Lemma 2.8, the above happens with positive probability, say \(p_0\). By Lemma 2.11, the Radon–Nikodym derivative between the law of \({\overline{\eta }}\) and the law of a correspondingly time-changed \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) curve with force points \((x_R,1)\), is bounded above and below by some constants. Thus we may consider such an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) process. Note that, if \(({\hat{g}}_t)\) and \(({\overline{g}}_t)\) are the Loewner chains of \({\hat{\eta }}\) and \({\overline{\eta }}\), respectively, then
Thus, we can choose \(\Lambda \) to be sufficiently large, so that
where \({\hat{\sigma }}_2 = \inf \{t\ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}(t),\gamma ([0,1])) > {\hat{\varepsilon }}\}\) and \({\tilde{I}}_{t}^{u}({\overline{\eta }})\) is the event of Proposition 3.4 for \({\overline{\eta }}\). We have lower bounded the probability of \(\{ {\hat{\sigma }}_1 \le {\hat{\sigma }}_2 \}\) and next, we prove that \(\{ {\overline{\eta }} \text { hits } 1 \text { before } \partial B(1,2) \} \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u}({\overline{\eta }}) \cap \{ {\hat{Q}}_{\sigma _k}, {\hat{Q}}_{\sigma _M} \in [\delta ,1-\delta ] \}\) has positive probability, which completes the proof of the lemma. (Note that we do not need \({\overline{\eta }}\) to hit 1 before exiting B(1, 2), but rather that it hits some small set, separating 1 from \(\infty \), before exiting B(1, 2), which is of course weaker.)
We begin by lower bounding \({\mathbb {P}}^*({\overline{\eta }} \text { hits } 1 \text { before } \partial B(1,2))\). We now make a conformal coordinate change with the Möbius transformation
The image of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) curve in \({\mathbb {H}}\) from 0 to \(\infty \) is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho _L;\rho )\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with force points \(\varphi (\infty ) = -1\) and \(\varphi (x_R) = \frac{x_R}{1-x_R}\), where \(\rho _L = \kappa -6-(\rho -\mu \kappa ) = \kappa (1+\mu )-6-\rho \) (see [23]). Furthermore, 1 is mapped to \(\infty \) and \(\varphi (\partial B(1,2)) = \partial B(-1,\frac{1}{2})\), and thus the event of hitting 1 before exiting B(1, 2) turns into the event that \(\varphi ({\overline{\eta }})\) does not hit \(B(-1,\frac{1}{2})\). We have that \(\kappa \le 4\) and \(\rho _L > \frac{\kappa }{2}-2\), so the probability of \(\varphi ({\overline{\eta }})\) avoiding \(B(-1,\frac{1}{2})\) is positive.
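The Möbius transformation itself is not reproduced in our copy. Assuming it is \(\varphi (z) = z/(1-z)\) (a guess consistent with the data above: it fixes 0, sends 1 to \(\infty \), \(\infty \) to \(-1\), and \(x_R\) to \(x_R/(1-x_R)\)), the claimed image of the circle can be checked numerically:

```python
import cmath

def phi(z: complex) -> complex:
    """Assumed Moebius coordinate change: fixes 0 and sends 1 to infinity."""
    return z / (1 - z)

# phi should map the circle |z - 1| = 2 onto the circle |w + 1| = 1/2, so
# "hitting 1 before exiting B(1,2)" becomes "escaping to infinity without
# entering B(-1, 1/2)" in the new coordinates.
points = [1 + 2 * cmath.exp(2j * cmath.pi * t / 8) for t in range(8)]
assert all(abs(abs(phi(z) + 1) - 0.5) < 1e-9 for z in points)
```

The check samples eight points of \(\partial B(1,2)\) and verifies they land on \(\partial B(-1,\frac{1}{2})\); since Möbius maps send circles to circles, this determines the image circle.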
Next, choosing \(\delta \) to be sufficiently small and M to be large enough, we can (by Corollary A.2) guarantee that the \({\mathbb {P}}^*\)-probability of the event \(\{ {\hat{Q}}_{\sigma _k}, {\hat{Q}}_{\sigma _M} \in [\delta ,1-\delta ] \}\) is as close to 1 as we want. This holds, since as \({\hat{\eta }}\) comes closer to 1, a time change \({\hat{Q}}_{{\tilde{t}}(s)}\) of \({\hat{Q}}\) (as in Sect. 2.2.2) converges to its invariant distribution \({\tilde{X}}_s\), so by choosing M large, we can make sure that \({\hat{Q}}_{{\tilde{t}}(s)}\) will be as close to \({\tilde{X}}_s\) as necessary, in the sense of (76). Moreover, letting \(\delta \) be sufficiently small, we have that, for each s, \({\mathbb {P}}^*({\tilde{X}}_s \in [\delta ,1-\delta ])\) is sufficiently close to 1 and hence the same follows for \({\hat{Q}}_{\sigma _k}\) and \({\hat{Q}}_{\sigma _M}\).
By then choosing \(u>0\) sufficiently large, we have that the \({\mathbb {P}}^*\)-probability of \({\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}\) is arbitrarily close to 1, and hence that \({\mathbb {P}}^*(A_k^1 \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}) \gtrsim 1\). Thus (59) is proven, which gives the result. \(\square \)
Remark 4.7
In the above proof, we actually proved an upper bound as well:
Proof of Proposition 4.2
It holds that
where we used Lemma 4.4 in the second inequality, Lemma 4.5 in the fourth and Lemma 4.6 and (61) in the fifth. Thus, we can find a subpower function \({\tilde{\Psi }}_\delta \) as in the statement of the proposition, and we are done. \(\square \)
Dimension spectrum
In this section, we compute the almost sure Hausdorff dimension of the random sets (4). We will, however, compute the almost sure dimension of the sets
and note that this is sufficient, due to the monotonicity of \(t \mapsto g_t'\). The theorem that we prove in this section is the following.
Theorem 5.1
Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4), \frac{\kappa }{2}-2)\), \(x_R = 0^+\) and write \(a = 2/\kappa \). Define
and let \(\beta _-^* = \inf \{\beta :d^*(\beta )>0\}\) and \(\beta _+^* = \sup \{\beta :d^*(\beta )>0\}\). Then, almost surely, if \(\kappa \in (0,4]\)
With this theorem at hand, noting that \(V_\beta = V_{\beta /(1+\rho /2)}^*\) gives the dimension \(\text {dim}_H V_\beta = \text {dim}_H V_{\beta /(1+\rho /2)}^* = d^*(\beta /(1+\rho /2))\), we immediately get Theorem 1.1.
We start by showing that \(\text {dim}_H V_\beta ^*\) is an almost surely constant quantity. The proof is contained in [1], but we repeat it here for completeness.
Lemma 5.2
Let \(x_R = 0^+\). For each \(\beta \), \(\text {dim}_H V_\beta ^*\) is almost surely constant.
Proof
Let \(x>0\) and write \(S_x = V_\beta ^* \cap (0,x)\). Since \(x_R = 0^+\), the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process is scaling invariant and hence the law of \(S_x\) is identical to the law of \(xS_1\). However, since \(\text {dim}_H xS_1 = \text {dim}_H S_1\) (due to the invariance of Hausdorff dimension under linear scaling), we see that the law of \(\text {dim}_H S_x\) does not depend on x. The sets \(S_x\) are decreasing as \(x \rightarrow 0^+\), and hence \(\text {dim}_H S_x\) has an almost sure limit (as \(x \rightarrow 0^+\)) which is measurable with respect to \({\mathscr {F}}_{0^+}\), since \(S_x\) is measurable with respect to \({\mathscr {F}}_{T_x}\). By Blumenthal’s 0–1 law, the limit must be constant and the same for every \(x>0\). \(\square \)
In the following two sections, we prove the upper and lower bounds on the dimension, Theorem 5.3 and Theorem 5.6, and together they imply Theorem 1.1.
Upper bound
We define the random sets
and note that for \(\beta _1< \beta < \beta _2\), we have \(V_\beta ^* \subset {\underline{V}}_{\beta _1}\) and \(V_\beta ^* \subset {\overline{V}}_{\beta _2}\). Thus, in this subsection, we find a suitable cover of these sets and bound their Hausdorff measure using the Minkowski content of the covers. Using this, we prove the following theorem, which gives the upper bound on the dimension. We let \(d^*(\beta ) = 1 + (\zeta \beta - \mu )(1+\rho /2)\) and write \(\beta _-^*\) and \(\beta _+^*\) for its left and right zero, respectively. For \(\beta _0^* = \beta (0) = \frac{2a}{4a-1+a\rho }\) (where \(\beta (0)\) means \(\beta (\zeta )\), evaluated at \(\zeta =0\)), we have \((d^*)'(\beta _0^*)=0\), and hence \(d^*(\beta )\) is increasing for \(\beta \in [\beta _-^*,\beta _0^*]\) and decreasing for \(\beta \in [\beta _0^*, \beta _+^*]\).
Theorem 5.3
Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\) and \(x_R = 0^+\). Then, the following hold:

(i)
if \(\beta \in [ \beta _-^*, \beta _0^*)\), then \(\text {dim}_H {\overline{V}}_\beta \le d^*(\beta )\) almost surely,

(ii)
if \(\beta < \beta _-^*\), then \({\overline{V}}_\beta = \emptyset \) almost surely,

(iii)
if \(\beta \in ( \beta _0^*, \beta _+^*]\), then \(\text {dim}_H {\underline{V}}_\beta \le d^*(\beta )\) almost surely,

(iv)
if \(\beta > \beta _+^*\), then \({\underline{V}}_\beta = \emptyset \) almost surely.
Together, they imply that if \(\beta \in [\beta _-^*,\beta _+^*]\), then
almost surely.
Now, we construct the cover. It suffices to prove the above theorem for the sets intersected with every closed subinterval of \({\mathbb {R}}_+\). As in [1], we do this for the interval [1, 2], but it will be clear that the very same construction works for any other closed interval. We start by constructing the cover for \({\overline{V}}_\beta \cap [1,2]\). For every \(n \ge 1\), let
Then every interval has length \(e^{-n}/2\). We denote by \(x_{j,n}\) the midpoint of the interval \(J_{j,n}\) and write \({\mathscr {J}}_n :=\{J_{j,n} \}\). By distortion estimates, we have that there is some constant \(c' > 0\), such that if
then,
We also have that
since the curve must hit the ball of radius \(e^{-(n-2)}\), centered at \(x_{j,n}\), before it hits the ball of radius \(e^{-n}\), centered at any point x in \(J_{j,n}\), as the former ball contains the latter for every \(x \in J_{j,n}\). Combining this with the fact that \(t \mapsto g_t'(x)\) is decreasing for every fixed x and with (64), and writing \(c = c'e^{-2\beta (1+\rho /2)}\), shows that (63) implies that
We let \({\mathscr {I}}_n^-(\beta ) = \left\{ j \in \{ 0,1,\ldots , \lceil 2e^n \rceil -1 \} : g_{\tau _{n-2}(x_{j,n})}'(x_{j,n}) \ge ce^{-\beta (1+\rho /2)(n-2)} \right\} \), and define \(J_{n,-}(\beta )\) by
Then \(\left\{ x \in [1,2]: g_{\tau _n(x)}'(x) \ge e^{-\beta (1+\rho /2) n} \right\} \subset J_{n,-}(\beta )\), and thus, for every positive integer m,
i.e., for every positive integer m, \(\bigcup _{n \ge m} J_{n,-}(\beta )\) is a cover of \({\overline{V}}_\beta \cap [1,2]\). We write
that is, \(\overline{{\mathcal {N}}}_n(\beta )\) is the number of intervals \(J_{j,n}\) that make up \(J_{n,-}(\beta )\).
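The counting scheme above (cover at scale \(e^{-n}\), count the intervals that survive the derivative condition, and read a dimension bound off the growth rate of the count) can be illustrated on a deterministic fractal. The following sketch, in plain Python with the middle-thirds Cantor set standing in for the random set, illustrates only the box-counting principle, not the SLE construction itself.

```python
import math

def cantor_cells(level):
    """Left endpoints k (in units of 3**-level) of the 2**level intervals
    in the level-`level` middle-thirds Cantor construction."""
    cells = [0]
    for l in range(level):
        step = 3 ** (level - l - 1)
        cells = [k for c in cells for k in (c, c + 2 * step)]
    return cells

def box_count(cells, level, n):
    """Number of triadic intervals of length 3**-n that meet the set."""
    return len({k // 3 ** (level - n) for k in cells})

level = 12
cells = cantor_cells(level)
# N_n grows like 3**(n*d) with d = log 2 / log 3, the box-counting dimension.
for n in range(2, 8):
    assert box_count(cells, level, n) == 2 ** n
print(math.log(box_count(cells, level, 7)) / (7 * math.log(3)))  # = log 2 / log 3
```

In the proof, the role of the deterministic count \(2^n\) is played by the expected count \({\mathbb {E}}[\overline{{\mathcal {N}}}_n(\beta )]\), which is controlled by the derivative moment estimates.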
Lemma 5.4
Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\) and \(\zeta > 0\), that is, \(\beta < \beta _0^* = \frac{2a}{4a-1+a\rho }\). Then,
Proof
By (65), we get
using Chebyshev’s inequality in the fourth row and Corollary 3.2 in the last. \(\square \)
Next, we construct the cover for \({\underline{V}}_\beta \cap [1,2]\). Let \(x \in {\underline{V}}_\beta \cap [1,2]\) and \(n > 0\) be such that \(g_{\tau _n}'(x) \le e^{-\beta (1+\rho /2) n}\). By Lemma 2.4, there is a constant \(C^*\), such that
where \(s_n = \frac{n}{a} + C^*\) (here, \(C^*\) can be chosen so that the above holds for every \(x \in [1,2]\)). By the distortion principle, there is a smallest nonnegative integer k such that, if J is the unique interval in \({\mathscr {J}}_{n+k}\) such that \(x \in J\), then for every \(z \in J\),
where the second subscript denotes the point with respect to which the time change \({\tilde{t}}(s)\) is made (if no second subscript is written out, the time change corresponds to the point at which we evaluate the function). Let \(x_J\) denote the midpoint of J. Then \({\tilde{g}}_{s_n,x}'(x_J) \lesssim e^{-\beta (1+\rho /2) n}\). Since \({{\,\mathrm{dist}\,}}(x,x_J) \le e^{-(n+k)}/4\), we have, for geometric reasons and by Lemma 2.4, that
that is, there is a constant \(c_1\) such that
Therefore, there is a constant \(c_2\) such that \({\tilde{g}}_{s_n'}'(x_J) \le c_2 e^{-\beta (1+\rho /2) n}\). The constants above can be chosen to be universal. Let us recap what we have done above: we concluded that there are universal constants k, \(c_1\), \(c_2\) such that every \(x \in {\underline{V}}_\beta \) is contained in an interval J in \({\mathscr {J}}_{n+k}\) with \({\tilde{g}}_{\frac{n}{a} + c_1}'(x_J) \le c_2 e^{-\beta (1+\rho /2) n}\), where \(x_J\) is the midpoint of J. Therefore, choosing universal constants \(C_1\) and \(C_2\), we have that if
where \({\mathscr {I}}_n^+(\beta ) = \left\{ j \in \{ 0,1,\ldots , \lceil 2e^n \rceil -1 \} : {\tilde{g}}_{\frac{n}{a} + C_1}'(x_{j,n}) \le C_2e^{-\beta (1+\rho /2) n} \right\} \), then
for every m. We let \(\underline{{\mathcal {N}}}_n(\beta )\) denote the number of intervals \(J_{j,n}\) that make up \(J_{n,+}(\beta )\), i.e.,
Lemma 5.5
Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\) and \(\zeta < 0\), that is, \(\beta > \beta _0^* = \frac{2a}{4a-1+a\rho }\). Then,
Proof
Applying Chebyshev’s inequality and Proposition 3.1 in the same way as in the previous lemma gives the result. \(\square \)
With these, we will now prove Theorem 5.3.
Proof of Theorem 5.3
We begin with (i). Lemma 5.4 implies, for \(s > d^*(\beta )\) and n sufficiently large, that
Since \({\overline{V}}_\beta \cap [1,2] \subset \cup _{n \ge m} J_{n,-}(\beta )\) for every m, the t-dimensional Hausdorff measure of \({\overline{V}}_\beta \cap [1,2]\), \({\mathcal {H}}^t({\overline{V}}_\beta \cap [1,2])\), is bounded by a constant times the t-dimensional lower Minkowski content of \(\cup _{n \ge m} J_{n,-}(\beta )\) (see Section 5.5 of [14]), which implies that
for \(t>s\). Since such a construction works for every interval, we thus have that \(\text {dim}_H{\overline{V}}_\beta \le d^*(\beta )\) almost surely. Similarly, using Lemma 5.5, \(\text {dim}_H {\underline{V}}_\beta \le d^*(\beta )\) almost surely. For (ii), note that \(1\{ {\overline{V}}_\beta \cap [1,2] \ne \emptyset \} \le \sum _{n=m}^\infty \overline{{\mathcal {N}}}_n(\beta )\), and thus
since \(\beta < \beta _-^*\) implies that \(d^*(\beta ) = 1+ (\zeta \beta - \mu )(1+\rho /2) < 0\), so \({\overline{V}}_\beta = \emptyset \) for \(\beta < \beta _-^*\). The same argument shows that \({\underline{V}}_\beta = \emptyset \) almost surely for \(\beta > \beta _+^*\).
What remains is to use (i) and (iii) to prove that \(\text {dim}_H V_\beta ^* \le d^*(\beta )\) almost surely for \(\beta \in [\beta _-^*,\beta _+^*]\). First, let \(\beta \in [\beta _-^*,\beta _0^*)\) and \(\varepsilon > 0\) be such that \(\beta + \varepsilon \in (\beta _-^*, \beta _0^*)\). Since \(V_\beta ^* \subset {\overline{V}}_{\beta +\varepsilon }\), (i) gives that \(\text {dim}_H V_\beta ^* \le \text {dim}_H {\overline{V}}_{\beta +\varepsilon } \le d^*(\beta +\varepsilon )\). Letting \(\varepsilon \rightarrow 0^+\) and using that \(d^*(\beta )\) is increasing on \([\beta _-^*,\beta _0^*]\) gives the upper bound for the chosen \(\beta \). In the same way, let \(\beta \in (\beta _0^*,\beta _+^*]\) and \(\varepsilon > 0\) be such that \(\beta - \varepsilon \in (\beta _0^*,\beta _+^*)\); then, since \(V_\beta ^* \subset {\underline{V}}_{\beta -\varepsilon }\), (iii) implies that \(\text {dim}_H V_\beta ^* \le \text {dim}_H {\underline{V}}_{\beta -\varepsilon } \le d^*(\beta -\varepsilon )\). Again, letting \(\varepsilon \rightarrow 0^+\) and noting that \(d^*(\beta )\) is decreasing on \([\beta _0^*,\beta _+^*]\) gives the desired upper bound. Finally, we comment on the case \(\beta = \beta _0^*\). In this case, \(\text {dim}_H V_{\beta _0^*}^*\) is trivially bounded by \(d^*(\beta _0^*)\), as that is equal to the dimension of the intersection of the curve with \({\mathbb {R}}_+\), a set which clearly contains \(V_{\beta _0^*}^*\). While this has been proven before, we remark that the upper bound follows immediately from Corollary 3.2 with \(\zeta = 0\) and a less involved covering argument than the above. Hence, the proof of the upper bound on the dimension is done. \(\square \)
Lower bound
We shall prove the lower bound using Frostman's lemma, that is, we let \(E_s(\nu )\) be the s-dimensional energy of the measure \(\nu \), i.e.,
We then construct a Frostman measure on \(V_\beta ^*\) and show that it has finite s-dimensional energy for every \(s < d^* = d^*(\beta )\), which implies that the s-dimensional Hausdorff measure of \(V_\beta ^*\) is infinite (see Theorem 8.9 of [14]), and thus that the Hausdorff dimension must be greater than or equal to s. Just like in the previous section, we do this for \(V_\beta ^*\) intersected with the interval [1, 2], but again it will be clear that this can be done for any closed interval to the right of 0. In the following, we will construct a family of Frostman measures and show that it gives the correct lower bound on the dimension of \(V_\beta ^*\).
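The energy method can be illustrated numerically: for the natural measure on the middle-thirds Cantor set (dimension \(\log 2/\log 3\)), the discretised s-energy stabilises as the discretisation is refined when s is below the dimension, and blows up when s is above it. A minimal sketch (illustration only; the sets and measures here are not those of the paper):

```python
import math

def cantor_midpoints(level):
    """Midpoints of the 2**level construction intervals; each carries mass
    2**-level, a discretisation of the natural measure on the Cantor set."""
    pts = [0.5]
    for _ in range(level):
        pts = [q for p in pts for q in (p / 3, p / 3 + 2 / 3)]
    return pts

def energy(level, s):
    """Discretised s-energy  sum_{i != j} m^2 |x_i - x_j|^{-s}."""
    pts = cantor_midpoints(level)
    m2 = 4.0 ** (-level)
    return 2 * sum(m2 * abs(x - y) ** (-s)
                   for i, x in enumerate(pts) for y in pts[i + 1:])

d = math.log(2) / math.log(3)                 # ≈ 0.6309
e_sub = [energy(L, 0.5) for L in (5, 7, 9)]   # s < d: increments shrink (finite energy)
e_sup = [energy(L, 0.8) for L in (5, 7, 9)]   # s > d: increments grow (energy blows up)
assert e_sub[2] - e_sub[1] < e_sub[1] - e_sub[0]
assert e_sup[2] - e_sup[1] > e_sup[1] - e_sup[0]
```

The dyadic scale analysis behind these assertions is the same one used in the text: pairs at separation scale \(3^{-k}\) contribute on the order of \((3^s/2)^k\) to the energy, a geometric series that converges exactly when \(s < \log 2/\log 3\).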
Theorem 5.6
Let \(\kappa \in (0,4]\), \(\rho \in (-2,\frac{\kappa }{2}-2)\) and \(x_R = 0^+\). Then, for every \(\varsigma >0\),
Proof
Fix \(\kappa \in (0,4)\), \(\rho \in (-2,\frac{\kappa }{2}-2)\) and \(\delta \in (0,\frac{1}{2})\) small and \(u,\Lambda ,M>0\) large enough for Proposition 4.2 to hold. We fix \(n \in {\mathbb {N}}\), divide [1, 2] into \(\varepsilon _n^{-1}\) intervals of length \(\varepsilon _n\), and let \(x_{j,n} = 1+(j-\frac{1}{2})\varepsilon _n\) be the midpoint of the jth of these intervals. Let \({\mathscr {D}}_n=\{x_{j,n}:j=1,\ldots ,\varepsilon _n^{-1}\}\), let \({\mathscr {C}}_n = \{ x \in {\mathscr {D}}_n: E^n(x) > 0 \}\) and let \(J_n(x) = [x-\frac{\varepsilon _n}{2},x+\frac{\varepsilon _n}{2}]\). Then,
For each \(n \ge 1\), we define the measure \(\nu _n\) by
for Borel sets \(A \subset [1,2]\). We want to take a subsequential limit of the sequence of measures \((\nu _n)\), which we will prove converges to the Frostman measure on \(V_\beta ^*\). To see that this limit exists, we need that the event on which we want to take the subsequential limit has positive probability and that the support of the limit is contained in \(V_\beta ^*\). That the support of the limit is contained in \(V_\beta ^*\) is obvious by construction, so we turn to proving that the event has positive probability. Clearly \({\mathbb {E}}[\nu _n([1,2])] = 1\). We need to show that there is some constant \(c>0\) such that \({\mathbb {E}}[\nu _n([1,2])^2]<c\). Then, by the Cauchy–Schwarz inequality,
which implies that the event on which we want to take a subsequence has positive probability. We have that
and we will bound the diagonal and the off-diagonal parts separately below. For the diagonal terms, we have (since \(E^n(x)^2 \le E^n(x)\))
for large enough \(\alpha _0\) and n, since \(\psi \) is a subpower function and \(o_n(1)\) tends to 0 as \(n \rightarrow \infty \). For the off-diagonal terms, we have, again for large enough \(\alpha _0\) and n, by Proposition 4.2,
since \(\Psi _\delta \) is a subpower function. Thus, \({\mathbb {E}}[\nu _n([1,2])^2]\) is finite, and \({\mathbb {P}}(\nu _n([1,2])>0) > 0\). What is left to do is to show that \(E_{d^*-\varsigma }(\nu _n)\) is almost surely finite for each n and every \(\varsigma > 0\). To do this, it suffices to bound \({\mathbb {E}}[E_{d^*-\varsigma }(\nu _n)]\) for each n and every \(\varsigma > 0\). We have that
In order to bound \({\mathbb {E}}[E_{d^*-\varsigma }(\nu _n)]\) we will use the following:
We first consider the diagonal terms. By (66),
Clearly, the sum is uniformly bounded in n for large enough \(\alpha _0\). For the off-diagonal terms we have, by Proposition 4.2 and (67), that
The implicit constants do not depend on n, and hence we have that for every \(\varsigma >0\), the limiting measure \(\nu \) satisfies \(E_{d^*-\varsigma }(\nu )<\infty \) on an event of positive probability. By Lemma 5.2, the Hausdorff dimension is almost surely constant, so a lower bound with positive probability is an almost sure lower bound. Thus, the proof is done. \(\square \)
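The positivity step in the proof above is the classical second-moment (Cauchy–Schwarz, or Paley–Zygmund) bound \({\mathbb {P}}(X>0) \ge {\mathbb {E}}[X]^2/{\mathbb {E}}[X^2]\) for a nonnegative random variable X. A minimal numerical check on a toy distribution (the distribution is illustrative, not derived from the paper):

```python
def second_moment_bound(dist):
    """For a nonnegative r.v. X given as (value, prob) pairs, return the
    Cauchy-Schwarz bound E[X]^2 / E[X^2] together with the true P(X > 0)."""
    ex = sum(v * p for v, p in dist)
    ex2 = sum(v * v * p for v, p in dist)
    p_pos = sum(p for v, p in dist if v > 0)
    return ex * ex / ex2, p_pos

# A toy stand-in for nu_n([1,2]): zero with large probability, positive otherwise.
bound, p_pos = second_moment_bound([(0.0, 0.7), (0.5, 0.2), (3.0, 0.1)])
assert bound <= p_pos      # P(X > 0) >= E[X]^2 / E[X^2]
print(bound, p_pos)
```

In the proof, \({\mathbb {E}}[\nu _n([1,2])] = 1\) and \({\mathbb {E}}[\nu _n([1,2])^2] < c\) give a positive-probability event, uniform in n, on which the subsequential limit can be taken.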
Now Theorem 5.1 follows from Theorem 5.3 and Theorem 5.6 and thus, as remarked, Theorem 1.1 is proven.
References
Alberts, T., Binder, I., Johansson Viklund, F.: A dimension spectrum for SLE boundary collisions. Commun. Math. Phys. 343, 273 (2016)
Alberts, T., Sheffield, S.: Hausdorff dimension of the SLE curve intersected with the real line. Electron. J. Probab. 13(40), 1166–1188 (2008)
Aru, J., Sepúlveda, A.: Two-valued local sets of the 2D continuum Gaussian free field: connectivity, labels and induced metrics. Electron. J. Probab. 23(61), 35 (2018)
Beffara, V.: The dimension of SLE curves. Ann. Probab. 36(4), 1421–1452 (2008)
Beliaev, D., Smirnov, S.: Harmonic measure and SLE. Commun. Math. Phys. 290(2), 577–595 (2009)
Dubédat, J.: Duality of Schramm–Loewner evolutions. Ann. Sci. Éc. Norm. Supér. (4) 42(5), 697–724 (2009)
Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Higher Transcendental Functions, Vol. II. McGraw-Hill Book Company, Inc., New York-Toronto-London (1953). Based, in part, on notes left by Harry Bateman
Gwynne, E., Miller, J., Sun, X.: Almost sure multifractal spectrum of Schramm–Loewner evolution. Duke Math. J. 167(6), 1099–1237 (2018)
Johansson Viklund, F., Lawler, G.F.: Almost sure multifractal spectrum for the tip of an SLE curve. Acta Math. 209(2), 265–322 (2012)
Lawler, G.F.: Conformally Invariant Processes in the Plane. Mathematical Surveys and Monographs, vol. 114. American Mathematical Society, Providence (2005)
Lawler, G.F.: Multifractal analysis for the reverse flow for the Schramm–Loewner evolution. In: Bandt, C., Mörters, P., Zähle, M. (eds.) Fractal Geometry and Stochastics IV, pp. 73–107. Birkhäuser, Basel (2009)
Lawler, G.F.: Minkowski content of the intersection of a Schramm–Loewner evolution (SLE) curve with the real line. J. Math. Soc. Jpn. 67(4), 1631–1669 (2015)
Lawler, G.F., Werner, W.: The Brownian loop soup. Probab. Theory Relat. Fields 128(4), 565–588 (2004)
Mattila, P.: Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability. Cambridge University Press, Cambridge (1995)
Miller, J.: Dimension of the \(\operatorname{SLE}\) light cone, the \(\operatorname{SLE}\) fan, and \(\operatorname{SLE}_\kappa (\rho )\) for \(\kappa \in (0,4)\) and \(\rho \in [\frac{\kappa }{2}-4,-2)\). Commun. Math. Phys. 360, 1083 (2018)
Miller, J., Sheffield, S.: Imaginary geometry I: interacting \(\operatorname{SLE}\)s. Probab. Theory Relat. Fields 164, 553 (2016)
Miller, J., Sheffield, S.: Imaginary geometry II: reversibility of \(\operatorname{SLE}_{\kappa }(\rho _{1};\rho _{2})\) for \(\kappa \in (0,4)\). Ann. Probab. 44(3), 1647–1722 (2016)
Miller, J., Sheffield, S.: Imaginary geometry III: reversibility of \(\operatorname{SLE}_\kappa \) for \(\kappa \in (4,8)\). Ann. of Math. (2) 184(2), 455–486 (2016)
Miller, J., Sheffield, S.: Imaginary geometry IV: interior rays, wholeplane reversibility, and spacefilling trees. Probab. Theory Relat. Fields 169, 729 (2017)
Miller, J., Sheffield, S.: Gaussian free field light cones and \(\operatorname{SLE}_\kappa (\rho )\). Ann. Probab. 47(6), 3606–3648 (2019)
Miller, J., Wu, H.: Intersections of SLE paths: the double and cut point dimension of SLE. Probab. Theory Relat. Fields 167, 45 (2017)
Rohde, S., Schramm, O.: Basic properties of SLE. Ann. Math. (2) 161(2), 883–924 (2005)
Schramm, O., Wilson, D.: SLE coordinate changes. N. Y. J. Math. 11, 659–669 (2005)
Schramm, O., Zhou, W.: Boundary proximity of SLE. Probab. Theory Relat. Fields 146, 435 (2010)
Sheffield, S.: Gaussian free fields for mathematicians. Probab. Theory Relat. Fields 139, 521 (2007)
Wang, M., Wu, H.: Remarks on the intersection of \(\text{SLE}_\kappa (\rho )\) curve with the real line (2015). arXiv:1507.00218
Wang, M., Wu, H.: Level lines of the Gaussian free field I: zero-boundary GFF. Stoch. Process. Appl. 127(4), 1045–1124 (2017)
Werner, W., Wu, H.: From \(\text{CLE}_\kappa \) to \(\text{SLE}(\kappa ;\rho )\)'s. Electron. J. Probab. 18(36), 20 (2013)
Zhan, D.: Ergodicity of the tip of an SLE curve. Probab. Theory Relat. Fields 164, 333 (2016)
Acknowledgements
We thank Fredrik Viklund, Jason Miller and Ewain Gwynne for helpful discussions. We also thank Nathanaël Berestycki and an anonymous referee for good comments on an earlier version. The research was conducted at KTH Royal Institute of Technology, Stockholm, as part of the author’s PhD thesis and was supported by the Knut and Alice Wallenberg Foundation.
The diffusion \({\tilde{Q}}_s\)
In this appendix we will state and prove some of the main properties of the diffusion \({\tilde{Q}}_s\), given by (32) (we will consider it under the measure \({\mathbb {P}}^*\)) and \({\tilde{Q}}_0 = \frac{x-x_R}{x}\). First, we note that if we let \(Z_s\) be the stochastic process, taking values in \([0,\pi ]\), such that
then by Itô’s formula we have that
Thus, \(Z_s\) is asymptotically Bessel-\(\left( \frac{3}{2}-4a-a\rho +2\mu \right) \) at the origin and asymptotically Bessel-\(\left( \frac{1}{2}-2a-a\rho \right) \) at \(\pi \) (recall that if \(dX_s = b(X_s)ds + dB_s\), then we say that \(X_s\) is asymptotically Bessel-\(\alpha \) at the origin if \(|b(x) - \frac{\alpha }{x}| \le cx\) and \(|b'(x) + \frac{\alpha }{x^2}| \le c\) for some constant c and all \(x \in (0,\frac{\pi }{2}]\), and we say that \(X_s\) is asymptotically Bessel-\(\alpha \) at \(\pi \) if \(|b(x) + \frac{\alpha }{\pi -x}| \le c(\pi -x)\) and \(|b'(x) + \frac{\alpha }{(\pi -x)^2}| \le c\) for \(x \in [\frac{\pi }{2},\pi )\)). Since
a standard result on Bessel processes implies that \(Z_s\) will almost surely not reach 0 in finite time, hence, \({\tilde{Q}}_s\) will almost surely not reach 0 in finite time, and thus, \({\mathbb {P}}^*({\tilde{t}}(s) < \infty ) = 1\) for every s.
The rest of this appendix is devoted to showing that \({\tilde{Q}}_s\) has an invariant distribution, to which it converges exponentially fast. We prove this via the eigenvalue method. This was done in [29], Appendix B, and most of this section follows almost verbatim from that, but we choose to include it here for completeness. We consider a transformation of \({\tilde{Q}}_s\), as the rather standard form of the eigenvalue problem for the transformed process makes it easier to solve. Let \(Y_t\) be the diffusion process \(Y_t = 2{\tilde{Q}}_t-1\), that is, \(Y_t\) follows the SDE
where \(\delta _+ = 4-8a-2a\rho +4\mu >0\), \(\delta _- = 4a+2a\rho >0\) and \(Y_0 \in (-1,1]\). Note that this diffusion can be defined for all positive time (through reflection at 1). First, we assume that Y has a smooth transition density. It is then given by the Kolmogorov backward equation,
Assuming that both sides equal \({\tilde{\lambda }} p\), we arrive at the following differential equation
which has a solution if and only if \({\tilde{\lambda }} = {\tilde{\lambda }}_n = - \frac{n}{2}(n+ \frac{\delta _+}{2} + \frac{\delta _-}{2} -1 )\), where n is a nonnegative integer. Then, for each n, the solution is given by the Jacobi polynomial \(P_n^{(\frac{\delta _+}{2}-1,\frac{\delta _-}{2}-1)}(x)\) (see [7]), that is, the orthogonal polynomials on \((-1,1)\) with respect to the inner product
Hence
solves (69) for \(t>0\) and \(1<x<1\). Thus, the candidate for the transition density of Y is
where \(\Vert f \Vert _{\delta _+,\delta _-}^2 = \langle f,f \rangle _{\delta _+,\delta _-}\). First, we need that (70) is absolutely convergent for \(t>0\). This holds, since
and
Next, we shall check that
It is sufficient to show this for polynomials. Let q(x) be a polynomial; then
is zero for all but finitely many n and
Next, letting
we note that \({\tilde{q}}(t,x)\) solves (69) and that \({\tilde{q}}(0,x) = q(x)\). We fix \(t_0 > 0\) and let \(M_t = {\tilde{q}}(t_0-t,Y_t)\) for \(0<t \le t_0\). Then \(M_t\) is bounded and by Itô's formula,
(where \('\) denotes the spatial derivative) and hence \(M_t\) is a bounded martingale. Furthermore, we have that \(\lim _{t \rightarrow t_0} M_t = q(Y_{t_0})\), so by the optional stopping theorem, \({\mathbb {E}}^{y_0}[q(Y_{t_0})] = M_0 = {\tilde{q}}(t_0,y_0)\), that is,
Thus, \(p_Y(t,x,y)\) is the transition density of Y and sending t to infinity, we see that Y has a unique stationary distribution with density given by
that is, the \(n=0\) term in (70); we therefore see that there is a constant C, such that
Thus, \(p_Y(t,x,y) \rightarrow p_Y(y)\) uniformly in \(x,y \in [-1,1]\) as \(t \rightarrow \infty \). In fact, a stronger statement is true. Note that, writing \({\hat{c}} = (\int _{-1}^1 (1-x)^{\frac{\delta _+}{2}-1} (1+x)^{\frac{\delta _-}{2}-1} dx)^{-1}\), we have
Next, we note that by (71) and (72), uniformly in \(x,y \in [-1,1]\),
Noting that the sum on the second line is bounded for all \(t>0\) and decreasing in t, we have that for an arbitrary \(\varepsilon > 0\),
for \(t>\varepsilon \), where the implicit constant depends only on \(\varepsilon \). Thus, if we denote by \({\mathcal {Y}}\) the process satisfying the SDE (68), started according to the invariant density (73), then for \(y_0 >0\) and \(t \ge 1\),
We have thus proven the following.
Lemma A.1
The transition density \(p_Y(t,x,y)\) of Y is given by (70). Furthermore, Y has a unique invariant density, \(p_Y(y)\), given by (73), (75) holds and Y is ergodic.
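Lemma A.1 can be sanity-checked numerically. The sketch below builds a truncation of the spectral expansion (70) with SciPy's Jacobi polynomials, for illustrative (assumed) values of \(\delta _\pm \) (the paper's values are determined by a, \(\rho \) and \(\mu \)), and verifies that the candidate density has total mass one and converges to the \(n=0\) term as \(t \rightarrow \infty \):

```python
import math
from scipy.integrate import quad
from scipy.special import eval_jacobi

# Illustrative parameter choice (any delta_+, delta_- > 0 works the same way).
dp, dm = 4.0, 3.0
al, be = dp / 2 - 1, dm / 2 - 1                         # Jacobi indices
w = lambda y: (1 - y) ** al * (1 + y) ** be             # weight of the inner product
lam = [-(n / 2) * (n + dp / 2 + dm / 2 - 1) for n in range(12)]
norm2 = [quad(lambda y: eval_jacobi(n, al, be, y) ** 2 * w(y), -1, 1)[0]
         for n in range(12)]

def p_Y(t, x, y, N=12):
    """Truncation of the spectral expansion (70) of the transition density."""
    return sum(math.exp(lam[n] * t) * eval_jacobi(n, al, be, x)
               * eval_jacobi(n, al, be, y) * w(y) / norm2[n] for n in range(N))

p_inv = lambda y: w(y) / quad(w, -1, 1)[0]              # n = 0 term: density (73)

mass = quad(lambda y: p_Y(1.0, 0.3, y), -1, 1)[0]
assert abs(mass - 1.0) < 1e-6                           # integrates to 1
assert abs(p_Y(25.0, 0.3, 0.1) - p_inv(0.1)) < 1e-8     # converges to (73)
```

The mass check works for any t because, by orthogonality to \(P_0 = 1\), all terms with \(n \ge 1\) integrate to zero; the convergence check reflects the exponential rate \(e^{{\tilde{\lambda }}_1 t}\) appearing in (75).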
As a corollary, we have the following (noting that \(\frac{\delta _+ + \delta _-}{4} = 1-a+\mu \) and letting \({\mathbb {E}}^*\) denote the expectation under \({\mathbb {P}}^*\), as in Sects. 2 and 3).
Corollary A.2
The transition density of \({\tilde{Q}}_s\) is given by \(p_{{\tilde{Q}}}(s,x,y) = 2p_Y(s,2x-1,2y-1)\). Furthermore, it has invariant density \(p_{{\tilde{Q}}}(y) = 2p_Y(2y-1)\) and if \({\tilde{X}}\) follows the same SDE as \({\tilde{Q}}\) and is started according to the invariant density, then for each \(f \in L^1(\nu _{{\tilde{Q}}})\), where \(\nu _{{\tilde{Q}}}(dx) = p_{{\tilde{Q}}}(x)dx\), we have, for \(s \ge 1\), that
for each value of \({\tilde{Q}}_0 \in (0,1]\). Moreover, \({\tilde{Q}}\) is ergodic.
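As a quick consistency check on the constants, with \(\delta _+ = 4-8a-2a\rho +4\mu \) and \(\delta _- = 4a+2a\rho \) as above, the identity quoted before Corollary A.2 follows directly:

```latex
\frac{\delta_+ + \delta_-}{4}
  = \frac{(4 - 8a - 2a\rho + 4\mu) + (4a + 2a\rho)}{4}
  = \frac{4 - 4a + 4\mu}{4}
  = 1 - a + \mu .
```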
Cite this article
Schoug, L. A multifractal boundary spectrum for \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\). Probab. Theory Relat. Fields 178, 173–233 (2020). https://doi.org/10.1007/s00440-020-00975-w
Mathematics Subject Classification
 Primary 60J67
 Secondary 60D05