1 Introduction

The Schramm–Loewner evolution (\({{\,\mathrm{SLE}\,}}\)) is a one-parameter family of random fractal curves, introduced by Oded Schramm as a conformally invariant candidate for the scaling limit of two-dimensional discrete models in statistical mechanics. Consider the half-plane Loewner differential equation

$$\begin{aligned} \partial _t g_t(z) = \frac{2}{g_t(z)-W_t}, \qquad g_0(z) = z, \end{aligned}$$
(1)

where the driving function, \(W_t\), is continuous and real-valued. The chordal Schramm–Loewner evolution with parameter \(\kappa >0\) (\({{\,\mathrm{SLE}\,}}_\kappa \)) is the curve whose corresponding conformal maps are given by the Loewner equation with \(W_t = \sqrt{\kappa } B_t\). \({{\,\mathrm{SLE}\,}}_\kappa \) exhibits interesting geometric behaviour: if \(0 < \kappa \le 4\), the curves are simple and do not intersect the real line; if \(\kappa > 4\), they have non-traversing self-intersections and collide with the real line; and if \(\kappa \ge 8\), the curves are space-filling. For \(\kappa > 0\), the almost sure Hausdorff dimension of the curve is \(d_\kappa = \min \{2,1+\frac{\kappa }{8}\}\), see e.g. [4]. For \(\kappa > 4\), the intersection of an \({{\,\mathrm{SLE}\,}}_\kappa \) curve with the real line is a random fractal of almost sure Hausdorff dimension \(\min \{2-8/\kappa ,1\}\), see [2] and [24]. In [1], Alberts, Binder and Viklund studied and computed the almost sure Hausdorff dimension spectrum of the random sets of points where the \({{\,\mathrm{SLE}\,}}_\kappa \) curve, for \(\kappa > 4\), hits the real line at a prescribed “angle”.
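As a quick numerical illustration of these dimension formulas (a sketch only; the sample values of \(\kappa \) are arbitrary):

```python
def curve_dim(kappa):
    # a.s. Hausdorff dimension of the SLE_kappa trace, min{2, 1 + kappa/8}
    return min(2.0, 1.0 + kappa / 8.0)

def boundary_dim(kappa):
    # a.s. dimension of the intersection with the real line, min{2 - 8/kappa, 1};
    # for kappa <= 4 the curve does not hit the real line, so we return 0
    return min(2.0 - 8.0 / kappa, 1.0) if kappa > 4 else 0.0

for kappa in (2.0, 4.0, 6.0, 8.0):
    print(kappa, curve_dim(kappa), boundary_dim(kappa))
```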

If we again consider the Loewner equation, but instead let \(W_t\) be the solution to the following system of SDEs

$$\begin{aligned} dW_t&= \sqrt{\kappa } dB_t + \frac{\rho dt}{W_t - V_t}, \quad W_0 = 0; \\ dV_t&= \frac{2dt}{V_t - W_t}, \quad V_0 = x_R, \end{aligned}$$

where \(x_R\) is a point on the real line, called the force point, and \(\rho > -2\) is an associated weight, then the Loewner chain is generated by a random curve, called an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve (for the definition when \(\rho \le -2\), see [20]). This two-parameter family of random fractal curves is a natural generalization of \({{\,\mathrm{SLE}\,}}_\kappa \), in which one keeps track of the force point as well as the curve. The weight \(\rho \) determines a repulsion (if \(\rho > 0\)) or attraction (if \(\rho < 0\)) of the curve from the boundary; for \(\rho = 0\), it is the ordinary \({{\,\mathrm{SLE}\,}}_\kappa \). For \(\kappa \in (0,8)\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4), \frac{\kappa }{2}-2)\) and \(x_R=0^+\), the Hausdorff dimension of the intersection of \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) with the real line is almost surely \(1 - (\rho +2)(\rho +4-\kappa /2)/\kappa \) (see [21] and [28]). In this article, we study the dimension spectrum considered by Alberts, Binder and Viklund in [1], but for \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\). In [3], the authors apply the main result of this paper to describe the boundary hitting behaviour of the loops of the two-valued sets of the Gaussian free field.

Let \(\eta \) be the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with \(x_R = 0^+\) and let \(\tau _s(x)\) be the first time \(\eta \) is within distance \(e^{-s}\) of x, i.e.,

$$\begin{aligned} \tau _s(x) = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}(x,\eta ([0,t])) \le e^{-s}\}. \end{aligned}$$

Let \((g_t)\) be the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) Loewner chain, let \(g_t'\) denote the spatial derivative of \(g_t\) and for \(\beta \in {\mathbb {R}}\), define

$$\begin{aligned} V_\beta = \left\{ x> 0: \lim _{s \rightarrow \infty } \frac{1}{s} \log g_{\tau _s}'(x) = -\beta , \ \tau _s = \tau _s(x) < \infty \ \forall s>0 \right\} , \end{aligned}$$
(2)

that is, \(V_\beta \) is the set of points x in \({\mathbb {R}}_+\), at which \(g_{\tau _s}'\) decays like \(e^{-\beta s}\). This can be viewed as a generalized hitting “angle” of the real line (see the discussion in the introduction in [1]). We shall view the decay of \(g_{\tau _s}'(x)\) as a decay of a certain harmonic measure. Indeed, let \(r_t\) be the rightmost point of \(\eta ([0,t]) \cap {\mathbb {R}}\) and let \(H_t\) be the unbounded connected component of \({\mathbb {H}}{\setminus } \eta ([0,t])\). Then the harmonic measure from infinity of \((r_t,x]\) in \(H_t\) is defined as

$$\begin{aligned} \omega _\infty ((r_t,x],H_t) = \lim _{y \rightarrow \infty } y \omega (iy,(r_t,x],H_t). \end{aligned}$$

Assume that we have

$$\begin{aligned} c^{-1} e^{-\beta s} \le g_{\tau _s}'(x) \le c e^{-\beta s} \end{aligned}$$

for some \(\beta \). By using first Schwarz reflection and then the Koebe 1/4 theorem, we have

$$\begin{aligned} c^{-1} e^{-\alpha s} \le \omega _\infty ((r_{\tau _s},x],H_{\tau _s}) \le c e^{-\alpha s}, \quad \alpha = \beta +1, \end{aligned}$$

for some constant c, possibly different from the previous one, see Fig. 1. As (2) shows, however, we allow a subexponential error, as up-to-constants asymptotics are too restrictive to require.
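Schematically, using that the harmonic measure from infinity of a boundary interval is the normalized length of its image under the mapping-out function (made precise in (8) below), the two decay rates are related by

$$\begin{aligned} \omega _\infty ((r_{\tau _s},x],H_{\tau _s}) \asymp g_{\tau _s}'(x) {{\,\mathrm{dist}\,}}(x,\eta ([0,\tau _s])) \asymp e^{-\beta s} e^{-s} = e^{-(\beta +1)s}, \end{aligned}$$

where the first comparison comes from the Koebe 1/4 theorem applied to the map extended by Schwarz reflection (cf. the proof of Lemma 2.3).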

Fig. 1 Having a certain rate of decay of \(g_{\tau _s}'(x)\) as \(s\rightarrow \infty \) is equivalent to having a related decay rate of the harmonic measure from infinity of the set \((r_{\tau _s},x]\) (the green interval in the figure) (color figure online)

For fixed \(\kappa >0\), \(\rho \in {\mathbb {R}}\), let

$$\begin{aligned} d(\beta ) = 1 - \frac{\beta }{\kappa } \left( \frac{\kappa -2\rho }{4} - \frac{1+\rho /2 + 2\beta }{\beta } \right) ^2, \end{aligned}$$
(3)

and write \(\beta _- = \inf \{\beta :d(\beta )>0\}\) and \(\beta _+ = \sup \{\beta :d(\beta )>0\}\). The main theorem of the paper is the following.

Theorem 1.1

Let \(\kappa \in (0,4]\), \(\rho \in (-2, \frac{\kappa }{2}-2)\), \(x_R = 0^+\) and \(\beta \in [\beta _-,\beta _+]\). Then, almost surely,

$$\begin{aligned} \text {dim}_H V_\beta = d(\beta ). \end{aligned}$$

The reason for restricting \(\rho \) to values smaller than \(\kappa /2 - 2\) is that this is the critical value for the curve to hit the boundary; that is, the curve almost surely does not hit the boundary if \(\rho \ge \kappa /2 -2\).

An interesting observation is that a typical behaviour of the curve is hidden in this theorem: there is a \(\beta \) such that \(V_\beta \) has full Hausdorff dimension, that is, such that \(\text {dim}_H V_\beta = \text {dim}_H (\eta \cap {\mathbb {R}}_+)\). Indeed, if \(\beta _0 = (4+2\rho )/(8-\kappa +2\rho )\), then

$$\begin{aligned} d(\beta _0) = 1 - \frac{(\rho +2)(\rho +4-\kappa /2)}{\kappa }, \end{aligned}$$

which was proven in [21] to be the Hausdorff dimension of the curve intersected with the real line.
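This identity is elementary but slightly tedious to check by hand; a symbolic sanity check (a sketch using sympy, not part of the argument) is the following:

```python
import sympy as sp

kappa, rho, beta = sp.symbols('kappa rho beta', positive=True)

# d(beta) as in (3)
d = 1 - (beta / kappa) * ((kappa - 2*rho) / 4 - (1 + rho/2 + 2*beta) / beta)**2
beta0 = (4 + 2*rho) / (8 - kappa + 2*rho)

# prints 0, confirming d(beta0) = 1 - (rho+2)(rho+4-kappa/2)/kappa
print(sp.simplify(d.subs(beta, beta0) - (1 - (rho + 2)*(rho + 4 - kappa/2) / kappa)))
```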

Remark 1.2

If we let

$$\begin{aligned} \Omega _\alpha = \left\{ x>0: \lim _{s \rightarrow \infty } \frac{1}{s} \log \omega _\infty ((r_{\tau _s},x],H_{\tau _s}) = -\alpha , \ \tau _s = \tau _s(x) < \infty \ \forall s>0 \right\} , \end{aligned}$$

then we can rephrase Theorem 1.1 as

$$\begin{aligned} \text {dim}_H \Omega _\alpha = 1-\frac{\alpha -1}{\kappa } \left( \frac{\kappa -2\rho }{4} - \frac{2\alpha -1 +\rho /2}{\alpha -1}\right) ^2. \end{aligned}$$

To prove Theorem 1.1, it will be more convenient to consider the sets

$$\begin{aligned} V_\beta ^* = \left\{ x> 0: \lim _{s \rightarrow \infty } \frac{1}{s} \log g_{\tau _s}'(x) = -\beta (1+\rho /2), \ \tau _s = \tau _s(x) < \infty \ \forall s>0 \right\} , \end{aligned}$$
(4)

and then use that \(V_\beta = V_{\beta /(1+\rho /2)}^*\).

We now give an overview of the paper. In Sect. 2, we introduce the preliminary material needed in the rest of the paper, such as \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes, the Gaussian free field and the imaginary geometry coupling. In order to make the paper more self-contained, the section on imaginary geometry is more extensive than strictly necessary. Furthermore, we use the Girsanov theorem to weight the measure with the local martingale given by the product of a time change of \(g_t'(x)^\zeta \) (where \(\zeta \) is a parameter in one-to-one correspondence with \(\beta \)) and a compensator. Under this new measure we can compute the asymptotics of \(g_{\tau _s}'(x)^\zeta \), which we use in Sect. 3 to derive a one-point estimate. It turns out to be strong enough to give the upper bound on the dimension of \(V_\beta \), so this can actually be achieved immediately after Corollary 3.2. The rest of Sect. 3 is dedicated to studying the mass concentration of the weighted measures, which we need for the correlation estimate. Section 4 is dedicated to the proof of the two-point estimate needed to prove Theorem 1.1. This is done by employing the coupling between SLE and the Gaussian free field. We finish the paper in Sect. 5 by first establishing the upper bound on the dimension of \(V_\beta \), using the one-point estimate of Sect. 3, and then constructing Frostman measures and using the two-point estimate to show that the s-dimensional energy is finite for every \(s < d(\beta )\), and hence that the Hausdorff dimension cannot be smaller than \(d(\beta )\).

We believe Theorem 1.1 to be true for all \(\kappa \) and \(\beta \in [\beta _-(\kappa ,\rho ),\beta _+(\kappa ,\rho )]\), and our upper bound is actually valid for all parameters. Given the spectrum for \(\kappa \in (0,4]\), it seems natural to try to prove the result for \(\kappa > 4\) using SLE duality, i.e., that the outer boundaries of \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curves are variants of \({{\,\mathrm{SLE}\,}}_{16/\kappa }\) curves, similar to what is done in [21]. This is not as straightforward here, however, as what we are interested in is not the dimension of the intersection of the curve with the real line, but the set of points where the curve intersects the boundary with the prescribed behaviour of the derivatives of the conformal maps (or equivalently, the decay of \(\omega _\infty \)) as the curve approaches the boundary. How to do this is not clear at the moment.

However, using the method of [1] to get a two-point estimate, one can deduce that in the case \(\kappa > 4\), the theorem holds for \(\beta \in [\beta _-,\beta _0]\). This is done by considering three events which exhaust the possible geometries of the curve approaching the two points (this is possible as for boundary interactions, the geometries are rather simple) and then separately estimating each of them. The correlation estimate that we have is actually more closely related to the one in [21].

An almost sure multifractal spectrum of SLE curves was first derived in [9], where the reverse flow of SLE was used to study the behaviour of the conformal maps close to the tip of the curve. Another result in this direction is [8], where the imaginary geometry techniques, developed and demonstrated in the articles [16,17,18,19], were used to find an almost sure bulk multifractal spectrum. In [15], Miller used the imaginary geometry techniques to compute Hausdorff dimensions for other sets related to SLE. We also mention [12], where Lawler proved the existence of the Minkowski content of an SLE curve intersected with the real line, which is related to what was done in [1] and what we do here. Lastly, we mention [5], where the authors computed an average integral means spectrum of SLE.

As for notation, we write \(f(x) \asymp g(x)\) if there is a constant C such that \(C^{-1} g(x) \le f(x) \le C g(x)\), and \(f(x) \lesssim g(x)\) if there is a constant C such that \(f(x) \le C g(x)\), where the constants do not depend on f, g or x. We say that \(\phi \) is a subpower function if \(\lim _{x \rightarrow \infty } \phi (x) x^{-\varepsilon } = 0\) for every \(\varepsilon > 0\). In the same way, we say that \(\psi \) is a subexponential function if for every \(\varepsilon > 0\), \(\lim _{x \rightarrow \infty } \psi (x) e^{-\varepsilon x} = 0\). In what follows, implicit constants, subpower functions and subexponential functions may change from line to line, without a change of notation.

2 Preliminaries

We begin by introducing some preliminaries on complex analysis, \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes, the Gaussian free field and imaginary geometry.

2.1 Measuring distances and sizes

Let D be a simply connected domain, \(z \in D\), and let \(\varphi : D \rightarrow {\mathbb {D}}\) be a conformal map of D onto \({\mathbb {D}}\) such that \(\varphi (z) = 0\). We define the conformal radius of D with respect to z as

$$\begin{aligned} \text {crad}_D(z) = \frac{1}{|\varphi '(z)|}. \end{aligned}$$

It behaves well under conformal transformations; if \(f:D \rightarrow f(D)\) is a conformal transformation, then \(\text {crad}_{f(D)}(f(z)) = |f'(z)| \text {crad}_D(z)\), \(z \in D\), (that is, it is conformally covariant) and by the Schwarz lemma and the Koebe 1/4 theorem, one easily sees that

$$\begin{aligned} {{\,\mathrm{dist}\,}}(z,\partial D) \le \text {crad}_D(z) \le 4 {{\,\mathrm{dist}\,}}(z,\partial D). \end{aligned}$$
(5)
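As a simple example, take \(D = {\mathbb {H}}\) and \(z = x+iy\): the map \(\varphi (w) = (w-z)/(w-{\overline{z}})\) sends \({\mathbb {H}}\) onto \({\mathbb {D}}\) with \(\varphi (z) = 0\) and \(|\varphi '(z)| = 1/(2y)\), so that

$$\begin{aligned} \text {crad}_{{\mathbb {H}}}(x+iy) = 2y = 2{{\,\mathrm{dist}\,}}(z,\partial {\mathbb {H}}), \end{aligned}$$

which is consistent with (5).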

For any domain \({\mathbb {H}}{\setminus } U\), where \(U\subset {\mathbb {H}}\) is bounded, we define the harmonic measure from infinity of \(E \subset \partial ({\mathbb {H}}{\setminus } U)\) as

$$\begin{aligned} \omega _\infty (E,{\mathbb {H}}{\setminus } U) = \lim _{y \rightarrow \infty } y \omega (iy,E,{\mathbb {H}}{\setminus } U), \end{aligned}$$
(6)

where \(\omega \) denotes the usual harmonic measure. It turns out that \(\omega _\infty \) is a very convenient notion of size of subsets of \({\mathbb {H}}\). We say that A is a compact \({\mathbb {H}}\)-hull if \(A = {\mathbb {H}}\cap {\overline{A}}\) and \({\mathbb {H}}{\setminus } A\) is simply connected and we let \({\mathcal {Q}}\) be the set of compact \({\mathbb {H}}\)-hulls. By Proposition 3.36 of [10], there exists, for every \(A\in {\mathcal {Q}}\), a unique conformal map \(g_A:{\mathbb {H}}{\setminus } A \rightarrow {\mathbb {H}}\) which satisfies the hydrodynamic normalization, i.e.,

$$\begin{aligned} \lim _{z \rightarrow \infty } |g_A(z)-z| = 0. \end{aligned}$$
(7)

The function \(g_A\) is called the mapping-out function of A. By (7) and the conformal invariance of \(\omega \), we have that

$$\begin{aligned} \omega _\infty (E,{\mathbb {H}}{\setminus } A)&= \lim _{y\rightarrow \infty } y\omega (g_A(iy),g_A(E),{\mathbb {H}}) \nonumber \\&= \lim _{y \rightarrow \infty } \int _{g_A(E)} \frac{y \text {Im}(g_A(iy))}{(t-\text {Re}(g_A(iy)))^2 + \text {Im}(g_A(iy))^2} \frac{dt}{\pi } \nonumber \\&= \int _{g_A(E)} \frac{dt}{\pi } = \frac{|g_A(E)|}{\pi }, \end{aligned}$$
(8)

where \(|g_A(E)|\) is the total length of the set \(g_A(E) \subset {\mathbb {R}}\).
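For example, if \(A = \overline{{\mathbb {D}}} \cap {\mathbb {H}}\), then the mapping-out function is \(g_A(z) = z+1/z\), and (8) gives, for \(x > 1\),

$$\begin{aligned} \omega _\infty ((1,x],{\mathbb {H}}{\setminus } A) = \frac{|g_A((1,x])|}{\pi } = \frac{x+1/x-2}{\pi }. \end{aligned}$$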

While \(\omega \) is defined on the boundary of the domain, it is convenient to speak about the harmonic measure of subsets of the domain, so we make the following extension: if \(U \subset D\), then

$$\begin{aligned} \omega (z,U,D) :=\omega (z,D \cap \partial U, D{\setminus } U). \end{aligned}$$
(9)

This extends naturally to the case of \(\omega _\infty \) as well.

2.2 \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes

In this section we will introduce \({{\,\mathrm{SLE}\,}}_\kappa \) and \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes. As stated in the introduction, a chordal \({{\,\mathrm{SLE}\,}}_\kappa \) Loewner chain is the collection of random conformal maps \((g_t)_{t \ge 0}\), satisfying (7), given by solving (1) with \(W_t = \sqrt{\kappa } B_t\), where \(B_t\) is a standard Brownian motion with \(B_0 = 0\) and filtration \({\mathscr {F}}_t\). We define \((f_t)_{t\ge 0}\) to be the centered Loewner chain, that is,

$$\begin{aligned} f_t(z) = g_t(z) - W_t. \end{aligned}$$

For fixed \(z \in {\mathbb {H}}\), the solution to (1) exists until time \(T_z = \inf \{t \ge 0: f_t(z) = 0 \}\). The domain of \(g_t\) is \(H_t = {\mathbb {H}}{\setminus } K_t\) where \(K_t = \{z: T_z \le t \} \in {\mathcal {Q}}\) is the SLE hull at time t and \(g_t\) is the unique conformal map from \(H_t\) onto \({\mathbb {H}}\) such that \(\lim _{z \rightarrow \infty } |g_t(z) - z| = 0\). Rohde and Schramm proved that the family of \({{\,\mathrm{SLE}\,}}_\kappa \) hulls is almost surely generated by a curve \(\eta :[0,\infty ] \rightarrow {\overline{{\mathbb {H}}}}\), i.e., \(H_t\) is the unbounded component of \({\mathbb {H}}{\setminus } \eta ([0,t])\) (see [22]). We call \(\eta \) the \({{\,\mathrm{SLE}\,}}_\kappa \) process or \({{\,\mathrm{SLE}\,}}_\kappa \) curve and say that \(K_t\) is the filling of \(\eta ([0,t])\).

Now we will define the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process. Let \({\underline{x}}_L = (x_{l,L},\ldots ,x_{1,L})\), \({\underline{x}}_R = (x_{1,R},\ldots ,x_{r,R})\), where \(x_{l,L}< \ldots< x_{1,L} \le 0 \le x_{1,R}< \ldots < x_{r,R}\). Also, let \({\underline{\rho }}_L = (\rho _{1,L},\ldots ,\rho _{l,L})\), \({\underline{\rho }}_R = (\rho _{1,R},\ldots ,\rho _{r,R})\), where \(\rho _{j,q} \in {\mathbb {R}}\), \(q \in \{L,R\}\). We call \(\rho _{j,q}\) the weight of \(x_{j,q}\). Let \(W_t\) be the solution to the system of SDEs

$$\begin{aligned} dW_t&= \sum _{j=1}^l \frac{\rho _{j,L}}{W_t - V_t^{j,L}} dt + \sum _{j=1}^r \frac{\rho _{j,R}}{W_t-V_t^{j,R}} dt + \sqrt{\kappa }dB_t, \nonumber \\ dV_t^{j,q}&= \frac{2}{V_t^{j,q}-W_t} dt, \quad V_0^{j,q} = x_{j,q}, \quad j =1,\dots ,N_q, \quad q \in \{L,R\}, \end{aligned}$$
(10)

where \(N_L= l\) and \(N_R = r\). An \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) Loewner chain with force points \(({\underline{x}}_L;{\underline{x}}_R)\) is the family of conformal maps \((g_t)_{t \ge 0}\) obtained by solving (1) with \(W_t\) being the solution to (10). The \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) hulls, \((K_t)\), are defined analogously and they are almost surely generated by a continuous curve, \(\eta \), the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process or \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve (see Theorem 1.3 in [16]). \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) is a generalization of \({{\,\mathrm{SLE}\,}}_\kappa \) (\({{\,\mathrm{SLE}\,}}_\kappa \) = \({{\,\mathrm{SLE}\,}}_\kappa (0;0)\)), in which one also keeps track of the force points, whose assigned weights either attract (\(\rho _{j,q} < 0\)) or repel (\(\rho _{j,q} > 0\)) the curve. If \(\eta \) is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve, we write \(\eta \sim \text {SLE}_\kappa ({\underline{\rho }})\). Exactly how the weights of the force points affect the curve \(\eta \) is explained in Lemma 2.1.

The solution to the system of SDEs (10) exists up until the continuation threshold is hit, that is, the first time t such that either

$$\begin{aligned} \sum _{j:V_t^{j,L}=W_t} \rho _{j,L} \le -2 \quad \text {or} \quad \sum _{j:V_t^{j,R}=W_t} \rho _{j,R} \le -2, \end{aligned}$$

as is explained in Section 2.2 of [16]. Moreover, for every \(t>0\) before the continuation threshold, \({\mathbb {P}}(W_t=V_t^{j,q}) = 0\) for \(j \in {\mathbb {N}}\) and \(q\in \{L,R\}\).

Geometrically, hitting the continuation threshold means that the curve \(\eta \) swallows force points on one side whose weights sum to at most \(-2\), that is, \(\eta \) hits an interval \((x_{m+1,L},x_{m,L})\) (or \((x_{n,R},x_{n+1,R})\)) such that \(\sum _{j=1}^m \rho _{j,L} \le -2\) (resp. \(\sum _{j=1}^n \rho _{j,R} \le -2\)).

Now, we write \(\rho _{0,L} = \rho _{0,R} = 0\), \(x_{0,L} = 0^-\), \(x_{l+1,L} = -\infty \), \(x_{0,R} = 0^+\) and \(x_{r+1,R} = +\infty \), and let for \(q \in \{L,R\}\) and \(j \in {\mathbb {N}}\)

$$\begin{aligned} {\overline{\rho }}_{j,q} = \sum _{k=0}^j \rho _{k,q}. \end{aligned}$$

The following lemma describes the interaction of \(\eta \) with the real line. It is written down in [21], and just as there, we refer to Remark 5.3 and Theorem 1.3 of [16] and Lemma 15 of [6] for the proof.

Lemma 2.1

Let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\), from 0 to \(\infty \), with force points \(({\underline{x}}_L;{\underline{x}}_R)\). Then,

  1. (i)

    if \({\overline{\rho }}_{k,R} \ge \frac{\kappa }{2}-2\), then \(\eta \) almost surely does not hit \((x_{k,R},x_{k+1,R})\),

  2. (ii)

    if \(\kappa \in (0,4)\) and \({\overline{\rho }}_{k,R} \in (\frac{\kappa }{2}-4,-2]\), then \(\eta \) can hit \((x_{k,R},x_{k+1,R})\), but cannot be continued afterwards,

  3. (iii)

    if \(\kappa > 4\) and \({\overline{\rho }}_{k,R} \in (-2,\frac{\kappa }{2}-4]\), then \(\eta \) can hit \((x_{k,R},x_{k+1,R})\), be continued afterwards and \(\eta \cap (x_{k,R},x_{k+1,R})\) is almost surely an interval,

  4. (iv)

    if \({\overline{\rho }}_{k,R} \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\), then \(\eta \) can hit and bounce off of \((x_{k,R},x_{k+1,R})\) and \(\eta \cap (x_{k,R},x_{k+1,R})\) has empty interior.

The same holds if we replace R by L and consider \((x_{k+1,L},x_{k,L})\).

Note that in (ii) in the above lemma, the curve has swallowed force points with a total weight at least as negative as \(-2\), and hence it cannot be continued. In (iii) and (iv), the total weight of the force points swallowed is greater than \(-2\), and hence the curve can be continued.

The following was proved in [16] (see also Lemma 2.2 in [21]).

Lemma 2.2

Fix \(\kappa >0\) and let \(({\underline{x}}_L^n)\) and \(({\underline{x}}_R^n)\) be sequences of vectors of numbers \(x_{l,L}^n<\dots<x_{1,L}^n<0<x_{1,R}^n<\dots <x_{r,R}^n\), converging to vectors \(({\underline{x}}_L)\) and \(({\underline{x}}_R)\) such that \(x_{1,L} = 0^-\) and \(x_{1,R} = 0^+\). For each n, denote by \((W^n,V^{n,L},V^{n,R})\) the driving processes of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L,{\underline{\rho }}_R)\) process with force points \(({\underline{x}}_L^n;{\underline{x}}_R^n)\). Then \((W^n,V^{n,L},V^{n,R})\) converges weakly in law, with respect to the local uniform topology, to the driving process \((W,V^L,V^R)\) of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) with force points \(({\underline{x}}_L;{\underline{x}}_R)\), as \(n \rightarrow \infty \).

It turns out that if we reweight an \({{\,\mathrm{SLE}\,}}_\kappa \) process by a certain local martingale, using the Girsanov theorem (how this is done is explained briefly below), then we obtain an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process, at least until the first time that \(W_t = V_t^{j,q}\) for some \((j,q)\). Let \(x_{1,L}< 0 < x_{1,R}\) and define

$$\begin{aligned} M_t&= \prod _{j,q} |g_t'(x_{j,q})|^{\frac{(4-\kappa +\rho _{j,q})\rho _{j,q}}{4\kappa }} \prod _{j,q} |W_t-V_t^{j,q}|^{\frac{\rho _{j,q}}{\kappa }} \nonumber \\&\quad \times \prod _{(j_1,q_1) \ne (j_2,q_2)} |V_t^{j_1,q_1}-V_t^{j_2,q_2}|^{\frac{\rho _{j_1,q_1}\rho _{j_2,q_2}}{2\kappa }}. \end{aligned}$$
(11)

Then \(M_t\) is a local martingale and an \({{\,\mathrm{SLE}\,}}_\kappa \) process weighted by \(M_t\) has the law of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process with force points \(({\underline{x}}_L;{\underline{x}}_R)\) (see Theorem 6 of [23]).

So far, we have only defined chordal \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes in \({\mathbb {H}}\), but we can define them in any Jordan domain, by a conformal coordinate change. More precisely, an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in a Jordan domain D, from \(z_0\) to \(z_\infty \), with force points \(({\underline{x}}_L,{\underline{x}}_R)\) is the image of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in \({\mathbb {H}}\) from 0 to \(\infty \) under a conformal map \(\varphi \) such that \(\varphi ({\mathbb {H}})=D\), \(\varphi (0) = z_0\), \(\varphi (\infty )=z_\infty \) and such that the force points of the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in \({\mathbb {H}}\) are mapped to \(({\underline{x}}_L;{\underline{x}}_R)\). We say that the constructed \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) in D is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process with configuration

$$\begin{aligned} c = (D, z_0, {\underline{x}}_L, {\underline{x}}_R, z_\infty ). \end{aligned}$$
(12)

The configuration of the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process we defined in the beginning of the section is \(({\mathbb {H}}, 0, {\underline{x}}_L, {\underline{x}}_R, \infty )\).

2.2.1 The case of one force point

Let \(a=2/\kappa \). We will parametrize the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) so that \(\text {hcap}(K_t) = at\), i.e., as the solution to

$$\begin{aligned} \partial _t g_t(z) = \frac{a}{g_t(z) - W_t}, \qquad g_0(z) = z, \end{aligned}$$
(13)

with

$$\begin{aligned} dW_t = dB_t + \frac{a\rho /2}{W_t - V_t}dt, \ W_0 = 0; \ dV_t = \frac{a}{V_t - W_t}dt, \ V_0 = x_R, \end{aligned}$$
(14)

where \(B_t\) is a one-dimensional standard Brownian motion with \(B_0 = 0\) and filtration \({\mathscr {F}}_t\). We say that the conformal maps \((g_t)_{t \ge 0}\) are driven by W. The solution to (14) exists for all \(t \ge 0\) if \(\kappa > 0\) and \(\rho > -2\), so henceforth we assume this. If \(\rho \ge \frac{\kappa }{2}-2\), then \(\eta \) almost surely does not hit the real line, and hence we are interested in the case \(\rho < \frac{\kappa }{2} -2\). If \(\kappa > 4\) and \(\rho \in (-2,\frac{\kappa }{2}-4]\), then by Lemma 2.1, \(\eta \cap (x_R,\infty )\) is almost surely an interval, and thus we will consider \(\rho \in ((-2) \vee (\frac{\kappa }{2}-4), \frac{\kappa }{2}-2)\).
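For intuition, the driving pair (14) is easy to simulate; the following Euler–Maruyama sketch is purely illustrative (the parameter values, the small positive \(x_R\) and the regularization of the singular drift are our own assumptions, not part of any argument):

```python
import numpy as np

def simulate_driving(kappa=3.0, rho=-1.0, x_R=0.01, T=1.0, n=100_000, seed=1):
    """Euler scheme for the driving pair (W, V) of (14). The drift is
    singular when V - W is small, so we crudely cap the denominator."""
    a = 2.0 / kappa
    dt = T / n
    rng = np.random.default_rng(seed)
    W, V = 0.0, x_R
    Ws, Vs = np.empty(n + 1), np.empty(n + 1)
    Ws[0], Vs[0] = W, V
    for k in range(1, n + 1):
        d = max(V - W, 1e-6)                     # regularized V_t - W_t
        W += -(a * rho / 2.0) / d * dt + rng.normal(scale=np.sqrt(dt))
        V += (a / d) * dt
        V = max(V, W)                            # V stays to the right of W
        Ws[k], Vs[k] = W, V
    return Ws, Vs
```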

Fix \(\kappa > 0\) and \(\rho > -2\) and let \((K_t, t \ge 0)\) be the hulls of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process with force point \(x_R\). The \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process satisfies two important properties. The first is the following scaling rule: for any \(m > 0\), \((m^{-1} K_{m^2 t},t \ge 0)\) has the same law as the hulls of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process with force point \(x_R/m\); if \(x_R = 0^+\), the process is thus scale invariant. The second is the domain Markov property: for any finite stopping time \(\tau \), the curve \(({\hat{\eta }}(t) = f_\tau (\eta (t+\tau )), t \ge 0)\) is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with force point \(V_\tau - W_\tau \), where again \(f_t =g_t-W_t\).

Note that \(f_t\) follows the SDE

$$\begin{aligned} df_t(z) = \frac{a}{f_t(z)}dt - dW_t = \left[ \frac{a}{f_t(z)} + \frac{a\rho /2}{V_t - W_t} \right] dt - dB_t. \end{aligned}$$
(15)

Taking the spatial derivative of (15) results in an ODE that, upon solving, yields

$$\begin{aligned} g_t'(z) = f_t'(z) = \exp \left\{ -\int _0^t \frac{a}{f_s(z)^2}ds \right\} . \end{aligned}$$
(16)

While \(g_t\) is defined in \(H_t\), it (and hence \(f_t\) and \(g_t'\)) extends continuously to the real line and is real-valued there. For \(x \in {\mathbb {R}}_+\), \(g_t'(x) \in [0,1]\) and is decreasing in t. Due to symmetry, it is enough to consider \(x, x_R \in {\mathbb {R}}_+\). By applying Itô’s formula to \(\log f_t(z)\) and exponentiating, we see that

$$\begin{aligned} f_t(z) = z\exp \left\{ \int _0^t \frac{a-1/2}{f_s(z)^2}ds - \int _0^t \frac{a\rho /2}{f_s(z)(W_s-V_s)}ds - \int _0^t \frac{dB_s}{f_s(z)} \right\} . \end{aligned}$$
(17)
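Indeed, since \(d\langle f_\cdot (z)\rangle _t = dt\) by (15), Itô's formula gives

$$\begin{aligned} d\log f_t(z) = \frac{df_t(z)}{f_t(z)} - \frac{dt}{2f_t(z)^2} = \left[ \frac{a-1/2}{f_t(z)^2} - \frac{a\rho /2}{f_t(z)(W_t-V_t)} \right] dt - \frac{dB_t}{f_t(z)}, \end{aligned}$$

and integrating from 0 to t and exponentiating yields (17).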

The same procedure, applied to \(v_t(z) :=g_t(z) - V_t = f_t(z) + W_t - V_t\), yields

$$\begin{aligned} v_t(z) = (z-x_R)\exp \left\{ \int _0^t \frac{a}{f_s(z)(W_s-V_s)}ds \right\} . \end{aligned}$$
(18)

Observe that, considered as functions on \({\mathbb {R}}_+\), \(f_t\), \(g_t'\) and \(v_t\) are increasing in x. We will mostly work with these functions on or close to \({\mathbb {R}}_+\).

2.2.2 Local martingales and weighted measures

Fix \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4), \frac{\kappa }{2} - 2)\) and let, as above, \(a = 2/\kappa \). For each such pair \((\kappa ,\rho )\), we will define a one-parameter family of local martingales which will play a major role in our analysis. Let \(\zeta \) be the variable with which we will parametrize the martingales and let

$$\begin{aligned} \mu _c = 2a - \frac{1}{2} + \frac{a\rho }{2}. \end{aligned}$$
(19)

For \(-\frac{\mu _c^2}{2a}< \zeta < \infty \), define the parameters

$$\begin{aligned} \mu = \mu _c + \sqrt{\mu _c^2 + 2a\zeta }, \quad \beta = \frac{a}{\sqrt{\mu _c^2 + 2a\zeta }}. \end{aligned}$$
(20)

Note that with our choice of \(\zeta \), \(\mu > \mu _c\) and \(\beta > 0\). The restriction \(\mu > \mu _c\) is necessary for a certain invariant density of a diffusion to exist, see (33) and the appendix. The parameters are related as follows:

$$\begin{aligned} \mu _c< \mu< \infty&: \zeta = \frac{\mu }{2a}(\mu -2\mu _c), \quad \beta = \frac{a}{\mu -\mu _c}, \\ 0< \beta < \infty&: \mu = \frac{a}{\beta } + \mu _c, \quad \zeta = \frac{1}{2a}\left( \frac{a}{\beta }+\mu _c \right) \left( \frac{a}{\beta }-\mu _c \right) . \end{aligned}$$
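These relations are easily verified; for instance, a quick symbolic check (a sketch, not part of the argument):

```python
import sympy as sp

a, zeta = sp.symbols('a zeta', positive=True)
mu_c = sp.Symbol('mu_c', real=True)

mu = mu_c + sp.sqrt(mu_c**2 + 2*a*zeta)            # as in (20)
beta = a / sp.sqrt(mu_c**2 + 2*a*zeta)

print(sp.simplify(mu*(mu - 2*mu_c)/(2*a) - zeta))  # 0: recovers zeta from mu
print(sp.simplify(a/(mu - mu_c) - beta))           # 0: recovers beta from mu
```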

For each \(x > 0\), we have by equations (16), (17) and (18), that

$$\begin{aligned} M_t^\zeta (x)&= g_t'(x)^{\zeta + \mu (1+\rho /2)} f_t(x)^{-\mu } v_t(x)^{-\mu \rho /2} \nonumber \\&= x^{-\mu } (x-x_R)^{-\mu \rho /2} \exp \left\{ \mu \int _0^t \frac{dB_s}{f_s(x)} - \frac{\mu ^2}{2} \int _0^t \frac{ds}{f_s(x)^2} \right\} \end{aligned}$$
(21)

is a local martingale on \(0 \le t \le T_x\) (see Theorem 6 of [23]), such that

$$\begin{aligned} \frac{dM_t^\zeta (x)}{M_t^\zeta (x)} = \frac{\mu }{f_t(x)} dB_t, \quad M_0^\zeta (x) = x^{-\mu } (x-x_R)^{-\mu \rho /2}. \end{aligned}$$

Note that under the measure weighted by \(M_t^\zeta \), the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process becomes an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) process with force points \((x_R,x)\). We shall write the local martingale in a different way, which is very convenient for our analysis. Define the random processes

$$\begin{aligned} \delta _t (x) = \frac{v_t(x)}{g_t'(x)} = \frac{g_t(x)-V_t}{g_t'(x)} \end{aligned}$$
(22)

and

$$\begin{aligned} Q_t(x) = \frac{v_t(x)}{f_t(x)} = \frac{g_t(x)-V_t}{g_t(x)-W_t}. \end{aligned}$$
(23)

We will often not write out the dependence on x, as it will cause no confusion. Since \(V_t \ge W_t\), we see that \(Q_t(x) \in [0,1]\) for every \(t\ge 0\). We note that if \(x_R = 0^+\), then \(Q_t(x)\) is the ratio of the harmonic measures from infinity of two sets; more precisely, if \(\eta ^R\) denotes the right side of the curve \(\eta \) and \(r_t\) is the rightmost point of \(\eta ([0,t])\cap {\mathbb {R}}\), then

$$\begin{aligned} Q_t(x) =\frac{\omega _\infty ([r_t,x],{\mathbb {H}}{\setminus } \eta ([0,t]))}{\omega _\infty (\eta ^R([0,t])\cup [r_t,x],{\mathbb {H}}{\setminus }\eta ([0,t]))}. \end{aligned}$$
(24)

With these processes, we can write

$$\begin{aligned} M_t^\zeta (x) = g_t'(x)^\zeta Q_t(x)^\mu \delta _t(x)^{-\mu (1+\frac{\rho }{2})}. \end{aligned}$$
(25)
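To see that (25) agrees with the first expression in (21), note that by (22) and (23),

$$\begin{aligned} g_t'(x)^\zeta Q_t(x)^\mu \delta _t(x)^{-\mu (1+\frac{\rho }{2})} = g_t'(x)^\zeta \frac{v_t(x)^\mu }{f_t(x)^\mu } \cdot \frac{g_t'(x)^{\mu (1+\frac{\rho }{2})}}{v_t(x)^{\mu (1+\frac{\rho }{2})}} = g_t'(x)^{\zeta +\mu (1+\frac{\rho }{2})} f_t(x)^{-\mu } v_t(x)^{-\mu \rho /2}. \end{aligned}$$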

This will be very convenient, as we will relate the process \(\delta _t(x)\) to the conformal radius at the point x, and thus, it will be comparable to \({{\,\mathrm{dist}\,}}(x,\eta ([0,t]))\). With this in mind, we will then make a random time change so that the time-changed process decays deterministically, and it will give us good control over the decay of the distance between the curve and a certain point. It does not, however, make sense to talk about the conformal radius of a boundary point, so we will begin by sorting this out.

We let \({\widehat{H}}_t = H_t \cup \{z:{\overline{z}} \in H_t \} \cup \{x \in {\mathbb {R}}_+: t<T_x \}\) be the union of the reflected domain and the points on \({\mathbb {R}}_+\) that have not been swallowed at time t. Then, it makes sense to talk about \(\text {crad}_{{\widehat{H}}_t}(x)\) for \(0<x\in {\widehat{H}}_t\), and a calculation shows that if \(x_R = 0^+\), then for \(t < T_x\),

$$\begin{aligned} \delta _t(x) = \frac{1}{4} \text {crad}_{{\widehat{H}}_t}(x). \end{aligned}$$

By (5), this implies that

$$\begin{aligned} \frac{1}{4} {{\,\mathrm{dist}\,}}(x,\eta ([0,t])) \le \delta _t(x) \le {{\,\mathrm{dist}\,}}(x,\eta ([0,t])). \end{aligned}$$
(26)

While this only holds for \(x_R = 0^+\), similar bounds can be obtained for other \(x_R\), as the following lemma shows. See also [26].

Lemma 2.3

Let \(\kappa > 0\), \(\rho > -2\), and let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with force point \(x_R \ge 0\). Let \((g_t)_{t \ge 0}\) be the conformal maps, driven by W, and let \(T_x\) be the swallowing time of x. Let \(x > x_R\) and \(0<t<T_x\), then

$$\begin{aligned} \frac{x-x_R}{4x}{{\,\mathrm{dist}\,}}(x,\eta ([0,t])) \le \delta _t(x) \le 4 {{\,\mathrm{dist}\,}}(x,\eta ([0,t])). \end{aligned}$$

Proof

Denote by \((K_t)_{t \ge 0}\) the compact hulls corresponding to the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process. Extend the maps \((g_t)_{t \ge 0}\) by Schwarz reflection to \({\mathbb {C}} {\setminus } ( K_t \cup \{ z : {\overline{z}} \in K_t \} \cup {\mathbb {R}}_-)\), let \(r_t\) be the rightmost point of \(K_t \cap {\mathbb {R}}\) and let \(O_t = g_t(r_t)\). Then, by the Koebe 1/4 theorem,

$$\begin{aligned} \frac{1}{4} \frac{{{\,\mathrm{dist}\,}}(g_t(x),O_t)}{{{\,\mathrm{dist}\,}}(x,\eta ([0,t]))} \le g_t'(x) \le 4\frac{{{\,\mathrm{dist}\,}}(g_t(x),O_t)}{{{\,\mathrm{dist}\,}}(x,\eta ([0,t]))}, \end{aligned}$$

that is,

$$\begin{aligned} \frac{1}{4} {{\,\mathrm{dist}\,}}(x,\eta ([0,t])) \le \frac{g_t(x)-O_t}{g_t'(x)} \le 4 {{\,\mathrm{dist}\,}}(x,\eta ([0,t])), \end{aligned}$$

since \({{\,\mathrm{dist}\,}}(g_t(x),O_t) = g_t(x) - O_t \ge g_t(x) - V_t\), and thus the upper bound is done. Since \(V_t = g_t(x_R)\), we have that if \(x_R = 0^+\), then \(V_t = O_t\), and the proof is done. Thus, we now consider \(x_R > 0\). For \(t \ge T_{x_R}\), again \(O_t = V_t\), so it remains to consider \(t<T_{x_R}\). Let \(u \in (0,x_R]\) and define

$$\begin{aligned} G_u(t) = \frac{g_t(x)-g_t(u)}{g_t(x)-g_t(x_R)} \end{aligned}$$

for \(t< T_u\). Then,

$$\begin{aligned} G_u(0) = \frac{x-u}{x-x_R}. \end{aligned}$$

Using the Loewner equation, we see that

$$\begin{aligned} G_u'(t) = \frac{a(g_t(x)-g_t(u))(g_t(u)-g_t(x_R))}{(g_t(x)-g_t(x_R))(g_t(x)-W_t)(g_t(x_R)-W_t)(g_t(u)-W_t)} \le 0, \end{aligned}$$

since \(g_t(x)-g_t(u) \ge 0\) and \(g_t(u)-g_t(x_R) \le 0\). Thus, for every \(u \in (0,x_R]\) and \(t < T_u\),

$$\begin{aligned} \frac{g_t(x)-g_t(u)}{g_t(x)-g_t(x_R)} \le \frac{x-u}{x-x_R}. \end{aligned}$$

Fixing t and letting \(u \rightarrow r_t\), we get

$$\begin{aligned} \frac{g_t(x)-O_t}{g_t(x)-V_t} = \frac{g_t(x)-g_t(r_t)}{g_t(x)-g_t(x_R)} \le \frac{x-r_t}{x-x_R} \le \frac{x}{x-x_R}, \end{aligned}$$

which implies that

$$\begin{aligned} \frac{1}{4} {{\,\mathrm{dist}\,}}(x,\eta ([0,t])) \le \frac{g_t(x) - O_t}{g_t'(x)} \le \frac{x}{x-x_R} \frac{g_t(x)-V_t}{g_t'(x)}, \end{aligned}$$

and thus

$$\begin{aligned} \frac{x-x_R}{4x}{{\,\mathrm{dist}\,}}(x,\eta ([0,t])) \le \frac{g_t(x) - V_t}{g_t'(x)} \le 4 {{\,\mathrm{dist}\,}}(x,\eta ([0,t])). \end{aligned}$$

\(\square \)

With this lemma in mind, we proceed to make a random time change. The process \(\delta _t\) satisfies the (stochastic) ODE

$$\begin{aligned} d\delta _t(x) = - a \delta _t(x) \frac{v_t(x)}{f_t(x)^2(V_t-W_t)}dt = - a \delta _t \frac{Q_t(x)}{1-Q_t(x)}\frac{dt}{f_t(x)^2}, \end{aligned}$$
(27)

so we define the process \({\tilde{t}}(s) = {\tilde{t}}_x(s)\) as the solution to the equation

$$\begin{aligned} s = \int _0^{{\tilde{t}}(s)} \frac{Q_u(x)}{1-Q_u(x)}\frac{du}{f_u(x)^2}. \end{aligned}$$
(28)

Then, \({\tilde{\delta }}_s(x) :=\delta _{{\tilde{t}}(s)}(x)\) satisfies the ODE \(d{\tilde{\delta }}_s(x) = -a{\tilde{\delta }}_s(x) ds\), i.e.,

$$\begin{aligned} {\tilde{\delta }}_s(x) = \delta _0(x) e^{-as} = (x-x_R) e^{-as}. \end{aligned}$$
(29)

This time change is called the radial parametrization. Note that the time change depends on x. We let \({\tilde{g}}_s = g_{{\tilde{t}}(s)}\), etc., denote the time-changed processes. They are all adapted to the filtration \(\tilde{{\mathscr {F}}}_s = {\mathscr {F}}_{{\tilde{t}}(s)}\). By differentiating (28) to obtain an expression for \(\frac{d}{ds}{\tilde{t}}(s)\), and combining this with equations (16), (17) and (18), we have that in this new parametrization,

$$\begin{aligned} {\tilde{g}}_s'(x) = \exp \left\{ -a \int _0^s \frac{1-{\tilde{Q}}_u(x)}{{\tilde{Q}}_u(x)} du \right\} , \end{aligned}$$
(30)

and \({\tilde{Q}}_s\) follows the SDE

$$\begin{aligned} d{\tilde{Q}}_s = ((1-2a-a\rho /2) - (1-a){\tilde{Q}}_s)ds + \sqrt{{\tilde{Q}}_s(1-{\tilde{Q}}_s)}d{\tilde{B}}_s, \end{aligned}$$

where \({\tilde{B}}_s\) is a Brownian motion with respect to the filtration \(\tilde{{\mathscr {F}}}_s\). Combining (25) and (29), we see that

$$\begin{aligned} {\tilde{M}}_s^\zeta (x) = (x-x_R)^{-\mu (1+\rho /2)} {\tilde{g}}_s'(x)^\zeta {\tilde{Q}}_s(x)^\mu e^{a\mu (1+\rho /2)s}. \end{aligned}$$
(31)

For each \(x>0\) and \(\zeta > -\frac{\mu _c^2}{2a}\), we define the new probability measure, which we denote by \({\mathbb {P}}^* = {\mathbb {P}}_{x,\zeta }^*\), as

$$\begin{aligned} {\mathbb {P}}^*(A) = {\tilde{M}}_0^\zeta (x)^{-1} {\mathbb {E}}\left[ {\tilde{M}}_s^\zeta (x)1_A \right] , \end{aligned}$$

for every \(A \in \tilde{{\mathscr {F}}}_s\). We denote the expectation with respect to \({\mathbb {P}}^*\) by \({\mathbb {E}}^*\). Let \({\tilde{B}}_s^*\) denote a Brownian motion with respect to \({\mathbb {P}}^*\). By the Girsanov theorem, we have that under the measure \({\mathbb {P}}^*\), \({\tilde{Q}}_s\) follows the SDE

$$\begin{aligned} d{\tilde{Q}}_s = \left[ (1-2a-a\rho /2+\mu ) - (1-a+\mu ){\tilde{Q}}_s \right] ds + \sqrt{{\tilde{Q}}_s(1-{\tilde{Q}}_s)} d{\tilde{B}}_s^*. \end{aligned}$$
(32)

Under \({\mathbb {P}}^*\), \({\tilde{Q}}_s\) is positive recurrent and has the invariant density

$$\begin{aligned} p_{{\tilde{Q}}}(x) = {\tilde{c}}x^{1-4a-a\rho +2\mu }(1-x)^{2a+a\rho -1}, \end{aligned}$$
(33)

where

$$\begin{aligned} {\tilde{c}} = \frac{\Gamma (2-2a+2\mu )}{\Gamma (2a+a\rho )\Gamma (2-4a-a\rho +2\mu )}, \end{aligned}$$

see Corollary A.2. Throughout, we will denote by \({\tilde{X}}_s\) a process that follows the same SDE as \({\tilde{Q}}_s\), but is started according to the invariant density. For \(s \ge 0\) and \(y > 0\), short calculations show that

$$\begin{aligned} {\mathbb {P}}^*( {\tilde{X}}_s \le y ) \lesssim y^{2\mu -4a-a\rho +2}, \quad {\mathbb {E}}^* \big [ {\tilde{X}}_s^{-\mu } \big ] = \frac{\Gamma (2-2a+2\mu ) \Gamma (2-4a-a\rho +\mu )}{\Gamma (2-2a+\mu ) \Gamma (2-4a-a\rho +2\mu )}. \end{aligned}$$
(34)
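These identities can also be checked numerically; the following Euler sketch of the SDE (32) compares the long-run empirical mean of \({\tilde{Q}}_s^{-\mu }\) with the Gamma-function expression in (34) (the parameter values, step size and the reflection at the endpoints below are illustrative assumptions):

```python
import numpy as np
from scipy.special import gamma

def check_stationary_moment(kappa=3.0, rho=-1.0, zeta=0.0,
                            T=500.0, dt=1e-3, seed=7):
    a = 2.0 / kappa
    mu_c = 2*a - 0.5 + a*rho/2                   # as in (19)
    mu = mu_c + np.sqrt(mu_c**2 + 2*a*zeta)      # as in (20)

    rng = np.random.default_rng(seed)
    n = int(T / dt)
    q, acc, cnt = 0.5, 0.0, 0
    for k in range(n):
        drift = (1 - 2*a - a*rho/2 + mu) - (1 - a + mu) * q
        q += drift * dt + np.sqrt(q * (1 - q) * dt) * rng.standard_normal()
        q = min(max(q, 1e-9), 1 - 1e-9)          # keep the scheme inside (0,1)
        if k >= n // 10:                         # discard burn-in
            acc += q ** (-mu)
            cnt += 1

    exact = (gamma(2 - 2*a + 2*mu) * gamma(2 - 4*a - a*rho + mu)
             / (gamma(2 - 2*a + mu) * gamma(2 - 4*a - a*rho + 2*mu)))
    print(f"empirical {acc / cnt:.3f} vs formula (34) {exact:.3f}")

check_stationary_moment()
```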

In Sect. 3 we will use that \({\mathbb {P}}^*({\tilde{t}}(s)<\infty ) = 1\) for every s (which is shown in Appendix A), and the following lemma.

Lemma 2.4

Fix x and let \(\tau _s\) and \({\tilde{t}}(s)\) be as above. Then

$$\begin{aligned} {\tilde{t}}\left( \left( \frac{s}{a} + \frac{1}{a} (\log (x-x_R) - \log 4)\right) \vee 0\right) \le \tau _s \le {\tilde{t}}\left( \frac{s}{a} + \frac{1}{a}(\log x + \log 4)\right) , \end{aligned}$$

for all \(s > \max \{0,-\log (x-x_R)\}\).

Proof

Assume that \(s > \max \{0,-\log (x-x_R)\}\). Trivially, \(\tau _s \ge {\tilde{t}}(0) = 0\). By Lemma 2.3, we have that \(\delta _{\tau _s} \le 4 e^{-s}\) and thus, if \(s - \log 4 +\log (x-x_R) > 0\), then (recall that \(\delta _0 = x-x_R\))

$$\begin{aligned} \delta _{\tau _s} \le 4 e^{-s} = (x-x_R) e^{-s + \log 4 -\log (x-x_R)} = \delta _{{\tilde{t}}(\frac{s}{a} - \frac{1}{a} \log 4 + \frac{1}{a} \log (x-x_R))}, \end{aligned}$$

and since \(t \mapsto \delta _t\) is decreasing, \(\tau _s \ge {\tilde{t}}((\frac{s}{a} + \frac{1}{a} \log (x-x_R) - \frac{1}{a} \log 4) \vee 0)\).

Also, by Lemma 2.3, we get

$$\begin{aligned} \delta _{\tau _s} \ge \frac{x-x_R}{4x}e^{-s} = (x-x_R) e^{-s -\log 4 -\log x} = \delta _{{\tilde{t}}(\frac{s}{a} + \frac{1}{a}\log x + \frac{1}{a} \log 4)}. \end{aligned}$$

Thus, \(\tau _s \le {\tilde{t}}\left( \frac{s}{a} + \frac{1}{a}(\log x + \log 4)\right) \), and the proof is done. \(\square \)

In what follows, x and \(x_R\) will be kept constant (time is the parameter that varies), and hence we will instead use the inequality in the form

$$\begin{aligned} {\tilde{t}}\left( \left( \frac{s}{a} -C^* \right) \vee 0\right) \le \tau _s \le {\tilde{t}}\left( \frac{s}{a} + C^*\right) , \end{aligned}$$
(35)

as we can choose such a \(C^* = C^*(x,x_R)\) for each \(x>1\), \(x_R < x\). The only place where we have to be careful with this is in the proof of Lemma 4.6, but we will discuss that there.

2.3 The Gaussian free field

We will now introduce and discuss the Gaussian free field (GFF); for more on the GFF, see [25]. Let \(D \subset {\mathbb {C}}\) be a Jordan domain and let \(C_c^\infty (D)\) be the set of compactly supported smooth functions on D. The Dirichlet inner product on D is defined as

$$\begin{aligned} (f,g)_\nabla = \frac{1}{2\pi } \int _D \nabla f(z) \cdot \nabla g(z) dz, \end{aligned}$$

and the Hilbert space closure of \(C_c^\infty (D)\) with this inner product is the Sobolev space \(H(D) :=H_0^1(D)\). Let \(( \phi _n )_{n=1}^\infty \) be a \((\cdot ,\cdot )_\nabla \)-orthonormal basis of H(D). Then, the zero-boundary GFF h on D can be expressed as

$$\begin{aligned} h = \sum _{n=1}^\infty \alpha _n \phi _n, \end{aligned}$$

where \(( \alpha _n )_{n=1}^\infty \) is a sequence of i.i.d. N(0, 1) random variables. This series does not converge in any space of functions; however, it converges almost surely in a space of distributions. The GFF is conformally invariant, i.e., if h is a GFF on D and \(\varphi : D \rightarrow {\widetilde{D}}\) is a conformal transformation, then \(h \circ \varphi ^{-1} = \sum _n \alpha _n \phi _n \circ \varphi ^{-1}\) is a GFF on \({\widetilde{D}}\).
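For intuition, the series above is easy to sample after truncation; here is a minimal sketch on the unit square (the normalizations are our own bookkeeping, derived from the Dirichlet eigenbasis as described in the comments):

```python
import numpy as np

def sample_gff_square(n=128, seed=0):
    """Truncated series h = sum alpha_n phi_n for the zero-boundary GFF on
    (0,1)^2, using the sine eigenbasis of the Dirichlet Laplacian. The
    L^2-normalized mode e_{jk}(x,y) = 2 sin(j pi x) sin(k pi y) has Dirichlet
    energy lambda_{jk}/(2 pi), so phi_{jk} = sqrt(2 pi / lambda_{jk}) e_{jk}
    is orthonormal for the inner product (.,.)_nabla."""
    rng = np.random.default_rng(seed)
    j = np.arange(1, n + 1)
    lam = np.pi**2 * (j[:, None]**2 + j[None, :]**2)   # eigenvalues of -Laplacian
    alpha = rng.standard_normal((n, n)) * np.sqrt(2.0 * np.pi / lam)
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]             # interior grid points
    S = np.sqrt(2.0) * np.sin(np.pi * np.outer(j, x))  # S[j-1, i] = sqrt(2) sin(j pi x_i)
    return S.T @ alpha @ S                             # field values on the grid
```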

If we denote the standard \(L^2(D)\) inner product by \((\cdot ,\cdot )\) and \(U \subseteq D\) is open, then for \(f \in C_c^\infty (U)\), \(g \in C_c^\infty (D)\), we obtain by integration by parts that

$$\begin{aligned} (f,g)_\nabla = -\frac{1}{2\pi } (f,\Delta g), \end{aligned}$$

that is, every function in \(C_c^\infty (D)\) which is harmonic in U is \((\cdot ,\cdot )_\nabla \)-orthogonal to every function in \(C_c^\infty (U)\). From this, one can see that H(D) can be \((\cdot ,\cdot )_\nabla \)-orthogonally decomposed as \(H(U) \oplus H^\perp (U)\), where \(H^\perp (U)=H_D^\perp (U)\) is the set of functions in H(D) which are harmonic in U and have finite Dirichlet energy. Hence, we may write

$$\begin{aligned} h = h_U + h_{U^\perp } = \sum _n \alpha _n^U \phi _n^U + \sum _n \alpha _n^{U^\perp } \phi _n^{U^\perp }, \end{aligned}$$

where \(( \alpha _n^U )\) and \(( \alpha _n^{U^\perp } )\) are independent sequences of i.i.d. N(0, 1) random variables and \(( \phi _n^U)\) and \(( \phi _n^{U^\perp } )\) are orthonormal bases of H(U) and \(H^\perp (U)\), respectively. Note that \(h_U\) is a GFF on U, that \(h_{U^\perp }\) is a random distribution which agrees with h on \(D {\setminus } U\) and can be viewed as a harmonic function on U, and that \(h_U\) and \(h_{U^\perp }\) are independent. Hence, the law of h restricted to U, given the values of h on \(\partial U\), is that of a GFF on U plus the harmonic extension of the values of h on \(\partial U\). This is the so-called Markov property of the GFF. With this in mind, one can make sense of the GFF with non-zero boundary conditions: let \(f: \partial D \rightarrow {\mathbb {R}}\), let F be the harmonic extension of f to D and let h be a zero-boundary GFF on D; then the GFF with boundary condition f is defined to have the law of \(h+F\).

The GFF also exhibits certain absolute continuity properties, the key one (for us) being the following. (This is the content of Proposition 3.4 (ii) in [16], where the reader can find a proof.)

Proposition 2.5

Let \(D_1,D_2\) be simply connected domains, such that \(D_1 \cap D_2 \ne \emptyset \) and for \(j=1,2\), let \(h_j\) and \(F_j\) be a zero-boundary GFF and a harmonic function on \(D_j\), respectively. If \(U \subseteq D_1 \cap D_2\) is a bounded simply connected domain and \(U' \supset {\overline{U}}\) is such that \({\overline{D}}_1 \cap U' = {\overline{D}}_2 \cap U'\) and \(F_1-F_2\) tends to zero when approaching \(\partial D_j \cap U'\) for \(j=1,2\), then the laws of \((h_1+F_1)|_U\) and \((h_2+F_2)|_U\) are mutually absolutely continuous.

In other words, if \(h_1\) and \(h_2\) are GFFs on \(D_1\) and \(D_2\), whose boundary conditions agree in some set \(E \subset \partial D_1 \cap \partial D_2\), then the laws of \(h_1\) and \(h_2\) restricted to any simply connected bounded subdomain U of \(D_1\) and \(D_2\), such that \({{\,\mathrm{dist}\,}}(\partial U, (\partial D_1 \cap \partial D_2) {\setminus } E) >0\), are mutually absolutely continuous.

The result holds for unbounded domains U as well, but we shall only need the bounded case.

2.4 Imaginary geometry

In this section, we describe the coupling of SLE with the GFF. As stated in the introduction, this section will be slightly longer than necessary, in order to make this paper more self-contained.

Suppose for now that h is a smooth, real-valued function on a Jordan domain D and fix constants \(\chi > 0\) and \(\theta \in [0,2\pi )\). A flow line of the complex vector field \(e^{i(h/\chi + \theta )}\), with initial point z, is a solution to the ordinary differential equation

$$\begin{aligned} \eta '(t) = e^{i(h(\eta (t))/\chi + \theta )} \quad \text {for} \quad t>0, \quad \eta (0) = z. \end{aligned}$$
(36)

If \(\eta \) is a flow line of \(e^{ih/\chi }\) and \(\psi : {\widetilde{D}} \rightarrow D\) is a conformal map, then \({\tilde{\eta }} = \psi ^{-1} \circ \eta \) is a flow line of \(e^{i{\tilde{h}}/\chi }\), where \({\tilde{h}} :=h \circ \psi - \chi \arg \psi '\) is a smooth function on \({\widetilde{D}}\). This follows by the chain rule and the fact that a reparametrization of a flow line is a flow line. Hence, the following definition makes sense: we say that an imaginary surface is an equivalence class of pairs (Dh) under the equivalence relation

$$\begin{aligned} (D,h) \sim (\psi ^{-1}(D), h \circ \psi - \chi \arg \psi ') = ({\widetilde{D}}, {\tilde{h}}). \end{aligned}$$
(37)

We say that \(\psi \) is a conformal coordinate change of the imaginary surface.

The idea is that if h is a GFF, then we are interested in the flow lines of h and we want to see that these are \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curves. However, while (37) makes sense if h is a GFF, the ODE (36) does not, as h is then a random distribution and not a continuous function. Thus, the approach to defining the flow lines of the GFF will be a little less “direct”. Instead, the following characterization will be used: let h be a smooth function and \(\eta \) a smooth simple curve in \({\overline{{\mathbb {H}}}}\), with \(\eta (0) = 0\) and \(\eta (t) \in {\mathbb {H}}\) for \(t>0\), starting in the vertical direction (that is, \(\eta '(0)\) has zero real part and positive imaginary part), so that as \(t \rightarrow 0\) the winding number is \(\approx \pi /2\). Furthermore, let \((f_t)_{t\ge 0}\) be the centered Loewner chain of \(\eta \). Then, for any \(t>0\), we have two parametrizations of \(\eta |_{[0,t]}\):

$$\begin{aligned} s \mapsto f_t^{-1}(-s)|_{(s_-,0)} \quad \text {and} \quad s \mapsto f_t^{-1}(s)|_{(0,s_+)}, \end{aligned}$$

where \(s_-\) and \(s_+\) are the left and right images of zero under \(f_t\), respectively. Since \(\eta \) is smooth, there exist smooth, decreasing functions \(\phi _-,\phi _+:(0,\infty ) \rightarrow (0,\infty )\) such that

$$\begin{aligned} \eta (s) = f_t^{-1}(-\phi _-(s)) \quad \text {and} \quad \eta (s) = f_t^{-1}(\phi _+(s)). \end{aligned}$$

By differentiation and (36), it is then easy to see that \(\eta \) is a flow line of h if and only if for each \(z \in \eta ((0,t))\)

$$\begin{aligned} \chi \arg f_t'(w) \rightarrow -h(z)-\chi \pi /2 \end{aligned}$$
(38)

as w approaches z from the left side of \(\eta \) and

$$\begin{aligned} \chi \arg f_t'(w) \rightarrow -h(z)+\chi \pi /2 \end{aligned}$$
(39)

as w approaches z from the right side of \(\eta \). With this in mind, we will now introduce the coupling.

With the notation as in Sect. 2.2, the coupling of SLE and GFF is given by the following theorem (for the proof, see [16]).

Theorem 2.6

Fix \(\kappa > 0\) and a vector of weights \(({\underline{\rho }}_L;{\underline{\rho }}_R)\). Let \((K_t)\) and \((f_t)\) be the hulls and centered Loewner chain, respectively, of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process in \({\mathbb {H}}\) from 0 to \(\infty \) with force points \(({\underline{x}}_L;{\underline{x}}_R)\) and let h be a zero-boundary GFF in \({\mathbb {H}}\). Furthermore, let

$$\begin{aligned} \chi = \frac{2}{\sqrt{\kappa }} - \frac{\sqrt{\kappa }}{2}, \quad \lambda = \frac{\pi }{\sqrt{\kappa }}, \end{aligned}$$

let \(\Phi _t^0\) be the harmonic function in \({\mathbb {H}}\) with boundary values

$$\begin{aligned} -\lambda (1+{\overline{\rho }}_{j,L}) \quad&\text {if} \quad x \in [f_t(x_{j+1,L}),f_t(x_{j,L})), \\ \lambda (1+{\overline{\rho }}_{j,R}) \quad&\text {if} \quad x \in (f_t(x_{j,R}),f_t(x_{j+1,R})], \end{aligned}$$

and define

$$\begin{aligned} \Phi _t(z) = \Phi _t^0(f_t(z))-\chi \arg f_t'(z). \end{aligned}$$

If \(\tau \) is any stopping time for the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process which almost surely occurs before the continuation threshold, then the conditional law of \((h+\Phi _0)|_{{\mathbb {H}}{\setminus } K_\tau }\) given \(K_\tau \) is equal to the law of \(h \circ f_\tau + \Phi _\tau \).

Fig. 2 Illustration of Theorem 2.6

In this coupling, \(\eta \sim \text {SLE}_\kappa ({\underline{\rho }})\) is almost surely determined by h, that is, \(\eta \) is a deterministic function of h (see Theorem 1.2 of [16]). When \(\kappa \in (0,4)\), a flow line of the GFF \(h+\Phi _0\) on \({\mathbb {H}}\) is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve, \(\eta \), coupled with \(h+\Phi _0\) as in Theorem 2.6. This definition can be extended to simply connected domains other than \({\mathbb {H}}\), using the conformal coordinate change described in (37), see Remark 2.7. If we add \(\theta \chi \) to the boundary values, i.e., replace \(h+\Phi _0\) by \(h+\Phi _0 +\theta \chi \), then the resulting flow line is called a flow line of angle \(\theta \), and we denote it by \(\eta _\theta \).

Note that if \(\kappa \in (0,4)\) then \(\chi >0\) and if \(\kappa >4\) then \(\chi <0\). If we let \(\kappa \in (0,4)\) and write \(\kappa ' = 16/\kappa \), then \(\kappa '>4\) and \(\chi (\kappa ) = -\chi (\kappa ')\). From Theorem 2.6, it is clear that the conditional law of \(h+\Phi _0\) given an \({{\,\mathrm{SLE}\,}}_\kappa \) or \({{\,\mathrm{SLE}\,}}_{\kappa '}\) curve is transformed in the same way under a conformal map, up to a sign change, which motivates the following definition. A counterflow line of the GFF \(h+\Phi _0\) is an \({{\,\mathrm{SLE}\,}}_{\kappa '}({\underline{\rho }})\) curve coupled with \(-(h+\Phi _0)\) as in Theorem 2.6. Note that the sign of the GFF is changed so that it matches the sign of \(\chi (\kappa ')\) and that in the notation of the theorem, the \(\lambda \) is replaced by \(\lambda ' = \frac{\pi }{\sqrt{\kappa '}} = \lambda - \frac{\pi }{2}\chi \) (Fig. 2).

In the figures, we often write \(\underset{^{\scriptstyle \sim }}{a}\), where a is some real number. This is to be interpreted as \(a\) plus \(\chi \) times the winding of the curve, see Fig. 3. This makes perfect sense for piecewise smooth curves, but for fractal curves the winding is not defined pointwise. However, the harmonic extension of the winding of the curve makes sense, as we can map conformally to \({\mathbb {H}}\) with piecewise constant boundary conditions. The term \(-\chi \arg f_\tau '(z)\) in Theorem 2.6 is interpreted as \(\chi \) times the harmonic extension of the winding of the curve \(\eta \).

Fig. 3 Each time the curve makes a turn to the right, the boundary values decrease by \(\chi \) times the angle, and each time it makes a turn to the left, the boundary values increase by \(\chi \) times the angle; for example, a quarter turn to the right (left) decreases (increases) the boundary values by \(\frac{\pi }{2} \chi \). We illustrate \(a + \chi \cdot \text {winding}\) as \(\underset{^{\scriptstyle \sim }}{a}\), and hence, the three pictures of this figure give the same information

Remark 2.7

Let D be a simply connected domain, with \(x,y \in \partial D\) distinct and let \(\psi : D \rightarrow {\mathbb {H}}\) be a conformal transformation with \(\psi (x) = 0\) and \(\psi (y) = \infty \). Let \({\underline{x}}_L\) (\({\underline{x}}_R\)) consist of l (r) marked prime ends in the clockwise (counterclockwise) segment of \(\partial D\), which are in clockwise (counterclockwise) order. The orientation of \(\partial D\) is as defined by \(\psi \). Write \(x_{0,L} = x_{0,R} = x\) and \(x_{l+1,L} = x_{r+1,R} = y\) and let \({\underline{\rho }}_L\) and \({\underline{\rho }}_R\) be vectors of weights corresponding to the points in \({\underline{x}}_L\) and \({\underline{x}}_R\) respectively. Let h be a GFF on D with boundary values given by

$$\begin{aligned} -&\lambda (1+{\overline{\rho }}_{j,L})-\chi \arg \psi ' \quad \text {for} \quad z' \in (x_{j,L},x_{j+1,L}) \\&\lambda (1+{\overline{\rho }}_{j,R})-\chi \arg \psi ' \quad \text {for} \quad z' \in (x_{j,R},x_{j+1,R}) \end{aligned}$$

where \((x_{j,L},x_{j+1,L})\) denotes the clockwise segment of \(\partial D\) from \(x_{j,L}\) to \(x_{j+1,L}\) and \((x_{j,R},x_{j+1,R})\) the counterclockwise segment of \(\partial D\) from \(x_{j,R}\) to \(x_{j+1,R}\). Let \(\kappa \in (0,4)\). We say that an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve, \(\eta \), from x to y in D, coupled with h, is a flow line of h if \(\psi (\eta )\) is coupled as a flow line of the GFF \(h \circ \psi ^{-1} - \chi \arg (\psi ^{-1})'\) on \({\mathbb {H}}\).

The same statement holds for counterflow lines with \(\kappa ' \in (4,\infty )\) if we replace \(\lambda \) with \(\lambda '\) in the boundary values (but keep \(\chi = \chi (\kappa )\)).

We write the following statements for flow lines in \({\mathbb {H}}\), but they hold true for other simply connected domains as well.

Let h be a GFF in \({\mathbb {H}}\) with piecewise constant boundary values. It turns out (Theorem 1.4 of [16]) that if \(\eta '\) is a counterflow line of h in \({\mathbb {H}}\), from \(\infty \) to 0, then the range of \(\eta '\) is almost surely equal to the set of points that can be reached by the flow lines in \({\mathbb {H}}\), from 0 to \(\infty \), with angles in the interval \(\left[ -\frac{\pi }{2},\frac{\pi }{2} \right] \). Also, it almost surely holds that the left boundary of \(\eta '\) is equal to the trace of the flow line of angle \(-\frac{\pi }{2}\) and the right boundary is equal to the trace of the flow line of angle \(\frac{\pi }{2}\) (as seen when travelling along \(\eta '\); from the flow lines' point of view, it is the other way around). Here, we talk about counterflow lines corresponding to the parameter \(\kappa '\) and flow lines corresponding to \(\kappa \), so that they can be coupled with the same GFF (Fig. 4).

Fig. 4 The flow lines \(\eta _{-\frac{\pi }{2}}\), \(\eta \), \(\eta _\frac{\pi }{2}\) from \(-i\) to i, and the counterflow line \(\eta '\) from i to \(-i\) in the square \([-1,1]^2\). \(\eta _{-\frac{\pi }{2}}\) and \(\eta _{\frac{\pi }{2}}\) will hit and merge with the respective sides of \(\eta '\), as they will be the outer boundary of \(\eta '\). Note that the outer boundary conditions agree

Again, let h be a GFF in \({\mathbb {H}}\) with piecewise constant boundary values. For each \(x \in {\mathbb {R}}\) and \(\theta \in {\mathbb {R}}\), we denote by \(\eta _\theta ^x\) the flow line of h from x to \(\infty \) with angle \(\theta \). Fix \(x_1,x_2 \in {\mathbb {R}}\) such that \(x_1 \ge x_2\); then the following holds (see Fig. 5 for illustrations).

  1. (i)

    If \(\theta _1 < \theta _2\), then \(\eta _{\theta _1}^{x_1}\) almost surely stays to the right of \(\eta _{\theta _2}^{x_2}\). If \(\theta _2 - \theta _1 < \frac{\pi \kappa }{4-\kappa }\), then the paths might hit and bounce off of each other, otherwise they almost surely never collide away from the starting point.

  2. (ii)

    If \(\theta _1 = \theta _2\), then \(\eta _{\theta _1}^{x_1}\) and \(\eta _{\theta _2}^{x_2}\) can intersect and if they do, they merge and never separate.

  3. (iii)

    If \(\theta _2< \theta _1 < \theta _2 + \pi \), then \(\eta _{\theta _1}^{x_1}\) and \(\eta _{\theta _2}^{x_2}\) can intersect, and if they do, they cross and never cross back. If \(\theta _1 - \theta _2 < \frac{\pi \kappa }{4-\kappa }\), then they can hit and bounce off of each other, otherwise they never intersect after crossing.

The above flow line interactions are the content of Theorem 1.5 of [16]. We shall make use of property (ii), as it is instrumental in our two-point estimate.
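To get a feel for the critical angle appearing in (i) and (iii), note that \(\frac{\pi \kappa }{4-\kappa }\) is increasing in \(\kappa \) and blows up as \(\kappa \uparrow 4\); for instance,

$$\begin{aligned} \frac{\pi \kappa }{4-\kappa } \bigg |_{\kappa = 2} = \pi , \qquad \frac{\pi \kappa }{4-\kappa } \bigg |_{\kappa = 3} = 3\pi , \qquad \frac{\pi \kappa }{4-\kappa } \rightarrow \infty \quad \text {as} \quad \kappa \uparrow 4. \end{aligned}$$

Thus, the closer \(\kappa \) is to 4, the larger the range of angle differences for which the flow lines can touch.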

Fig. 5

Flow line interactions for different pairs \((\theta _1,\theta _2)\)

2.4.1 Level lines

The coupling is valid for \(\kappa = 4\) as well. We then interpret the resulting \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) curve as the level line of the GFF. Note that \(\chi (4)=0\), that is, there is no extra winding term, and hence the boundary values of the level line are constant along the curve, \(-\lambda \) on the left and \(\lambda \) on the right. As in the case of flow and counterflow lines, level lines can be defined in other domains and with different starting and ending points via conformal maps. For level lines, the terminology is a bit different: we say that \(\eta \) is a level line of height \(u\in {\mathbb {R}}\) if it is a level line of the GFF \(h+u\). The same interactions as for flow lines hold for level lines. We let \(\eta _u^x\) denote the level line of height u starting from x. Let \(x_1 \ge x_2\), then

  1. (i)

    If \(u_1 < u_2\), then \(\eta _{u_1}^{x_1}\) almost surely stays to the right of \(\eta _{u_2}^{x_2}\).

  2. (ii)

    If \(u_1 = u_2\), then \(\eta _{u_1}^{x_1}\) and \(\eta _{u_2}^{x_2}\) can intersect and if they do, they merge and never separate.

For more on the level lines of a GFF with piecewise constant boundary data, see [27].
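In particular, unwinding the definition above, a level line of height u has constant boundary values \(-\lambda -u\) on its left side and \(\lambda -u\) on its right side, since the shifted field \(h+u\) takes the values \(-\lambda \) and \(\lambda \) there.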

2.4.2 Deterministic curves and Radon–Nikodym derivatives

We now recall some consequences of Proposition 2.5: two lemmas about flow and counterflow line behaviour (Figs. 6, 7) and two lemmas on absolute continuity, all from [21], which we will need in Sect. 4. Note that while they are proven for \(\kappa \ne 4\), the case \(\kappa = 4\) follows by the same argument as \(\kappa < 4\), when the \({{\,\mathrm{SLE}\,}}_4({\underline{\rho }})\) curves are coupled as level lines.

Fig. 6

Lemma 2.8 states that if \(\eta \) is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L,{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with \(x_{1,L} = 0^-\), \(x_{1,R}=0^+\) and \(\rho _{1,L},\rho _{1,R} > -2\), then for any fixed deterministic curve \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\), such that \(\gamma (0) = 0\) and \(\gamma ((0,1]) \subset {\mathbb {H}}\), and each \(\varepsilon >0\), the probability that \(\eta \) comes within distance \(\varepsilon \) of \(\gamma (1)\) before leaving the \(\varepsilon \)-neighborhood of \(\gamma \) is positive

Lemma 2.8

(Lemma 2.3 of [21]) Fix \(\kappa > 0\) and let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with force points \(({\underline{x}}_L;{\underline{x}}_R)\) such that \(x_{1,L} = 0^-\), \(x_{1,R} = 0^+\) and \(\rho _{1,L},\rho _{1,R} > -2\). Let \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\) be a deterministic curve such that \(\gamma (0) = 0\) and \(\gamma ((0,1]) \subset {\mathbb {H}}\). Fix \(\varepsilon > 0\), write \(A(\varepsilon ) = \{ z: {{\,\mathrm{dist}\,}}(z,\gamma ([0,1])) < \varepsilon \}\) and define the stopping times

$$\begin{aligned} \sigma _1 = \inf \{t \ge 0: |\eta (t) - \gamma (1)| \le \varepsilon \}, \quad \sigma _2 = \inf \{ t \ge 0: \eta (t) \notin A(\varepsilon ) \}. \end{aligned}$$

Then \({\mathbb {P}}(\sigma _1 < \sigma _2) > 0\).

Fig. 7

By Lemma 2.9 we have that if \(\eta \) is an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L,{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with \(x_{1,L} = 0^-\), \(x_{1,R}=0^+\) and \(\rho _{1,L},\rho _{1,R} > -2\), then for any fixed deterministic curve \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\), such that \(\gamma (0) = 0\), \(\gamma (1) \in [x_{k,R},x_{k+1,R}]\) and \(\gamma ((0,1)) \subset {\mathbb {H}}\), where k is such that \({\overline{\rho }}_{k,R} \in (\frac{\kappa }{2}-4,\frac{\kappa }{2}-2)\), and each \(\varepsilon >0\), the probability that \(\eta \) hits \([x_{k,R},x_{k+1,R}]\) before leaving the \(\varepsilon \)-neighborhood of \(\gamma \) is positive

Lemma 2.9

(Lemma 2.5 of [21]) Fix \(\kappa > 0\) and let \(\eta \) be an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with force points \(({\underline{x}}_L;{\underline{x}}_R)\) such that \(x_{1,L} = 0^-\), \(x_{1,R} = 0^+\) and \(\rho _{1,L},\rho _{1,R} > -2\). Fix \(k \in {\mathbb {N}}\) such that \({\overline{\rho }}_{k,R} \in (\frac{\kappa }{2}-4,\frac{\kappa }{2}-2)\) and an \(\varepsilon > 0\) such that \(|x_{2,q}| \ge \varepsilon \) for \(q \in \{L,R\}\), \(x_{k+1,R}-x_{k,R} \ge \varepsilon \) and \(x_{k,R} \le \varepsilon ^{-1}\). Let \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\) be a deterministic curve with \(\gamma (0) = 0\), \(\gamma ((0,1)) \subset {\mathbb {H}}\) and \(\gamma (1) \in [x_{k,R},x_{k+1,R}]\), write \(A(\varepsilon ) = \{ z: {{\,\mathrm{dist}\,}}(z,\gamma ([0,1])) < \varepsilon \}\) and define the stopping times

$$\begin{aligned} \sigma _1 = \inf \{ t\ge 0: \eta (t) \in (x_{k,R},x_{k+1,R}) \}, \quad \sigma _2 = \inf \{ t\ge 0: \eta (t) \notin A(\varepsilon ) \}. \end{aligned}$$

Then there exists a \(p_1 = p_1(\kappa ,\max _{j,q} |\rho _{j,q}|,{\overline{\rho }}_{k,R},\varepsilon ) > 0\), such that \({\mathbb {P}}(\sigma _1 < \sigma _2) \ge p_1\).

Next, we shall describe the Radon–Nikodym derivatives between \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) processes in different domains, see also [6] and [21]. The results that we need are Lemma 2.10 and Lemma 2.11, which we will use in Sect. 4. Let

$$\begin{aligned} c = (D,z_0,{\underline{x}}_L,{\underline{x}}_R,z_\infty ) \end{aligned}$$

be a configuration, that is, a Jordan domain D with marked boundary points \(z_0\), \({\underline{x}}_L\), \({\underline{x}}_R\) and \(z_\infty \), and let U be an open neighborhood of \(z_0\). Denote by \(\mu _c^U\) the law of an \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }}_L;{\underline{\rho }}_R)\) process with configuration c, stopped at the first time \(\tau \) at which it exits U. Let \(H_D\) be the Poisson excursion kernel of D, that is, if \(\varphi : D \rightarrow {\mathbb {H}}\) is conformal, then

$$\begin{aligned} H_D(x,y) = \frac{\varphi '(x)\varphi '(y)}{(\varphi (y)-\varphi (x))^2}. \end{aligned}$$
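As a sanity check, for \(D = {\mathbb {H}}\) we may take \(\varphi = \mathrm {id}\), which gives

$$\begin{aligned} H_{\mathbb {H}}(x,y) = \frac{1}{(y-x)^2}, \end{aligned}$$

and the general formula is then just the conformal covariance rule \(H_D(x,y) = \varphi '(x)\varphi '(y) H_{\mathbb {H}}(\varphi (x),\varphi (y))\), with one derivative factor for each marked point.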

Furthermore, let \(\rho _\infty = \kappa -6-\sum _{j,q} \rho _{j,q}\) and

$$\begin{aligned} Z(c)&= H_D(z_0,z_\infty )^{-\frac{\rho _\infty }{2\kappa }} \times \prod _{j,q} H_D(z_0,x_{j,q})^{-\frac{\rho _{j,q}}{2\kappa }} \\&\quad \times \prod _{(j_1,q_1) \ne (j_2,q_2)} H_D(x_{j_1,q_1},x_{j_2,q_2})^{-\frac{\rho _{j_1,q_1} \rho _{j_2,q_2}}{4\kappa }} \times \prod _{j,q} H_D(x_{j,q},z_\infty )^{-\frac{\rho _{j,q} \rho _\infty }{4\kappa }}. \end{aligned}$$

Moreover, let

$$\begin{aligned} c_\tau = (D {\setminus } K_\tau , \eta (\tau ),{\underline{x}}_L^\tau ,{\underline{x}}_R^\tau ,z_\infty ), \end{aligned}$$

where \(x_{j,q}^\tau = x_{j,q}\) if \(x_{j,q}\) has not been swallowed by \(\eta \) by time \(\tau \), and otherwise \(x_{j,q}^\tau \) is the leftmost (resp. rightmost) point of \(K_\tau \cap \partial D\) on the clockwise (resp. counterclockwise) arc of \(\partial D\) if \(q=L\) (resp. \(q=R\)). Moreover, let \(\mu ^\text {loop}\) be the Brownian loop measure, a \(\sigma \)-finite measure on unrooted loops (see [13]), and write

$$\begin{aligned} m(D;K,K') = \mu ^\text {loop}(l:l \subseteq D, l \cap K \ne \emptyset , l \cap K' \ne \emptyset ), \end{aligned}$$

Finally, set

$$\begin{aligned} \xi = \frac{(6-\kappa )(8-3\kappa )}{2\kappa }. \end{aligned}$$
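For orientation, \(\xi \) vanishes precisely at \(\kappa = 8/3\) and \(\kappa = 6\), in which case the loop terms in Lemma 2.10 below disappear; for instance,

$$\begin{aligned} \xi \big |_{\kappa = 2} = \frac{(6-2)(8-6)}{2 \cdot 2} = 2, \qquad \xi \big |_{\kappa = 8/3} = \xi \big |_{\kappa = 6} = 0. \end{aligned}$$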

Lemma 2.10

(Lemma 2.7 of [21]) Let \(c = (D,z_0,{\underline{x}}_L,{\underline{x}}_R,z_\infty )\) and \({\tilde{c}} = ({\widetilde{D}},z_0,\underline{{\tilde{x}}}_L,\underline{{\tilde{x}}}_R,{\tilde{z}}_\infty )\) be configurations and let U be an open neighborhood of \(z_0\) such that c and \({\tilde{c}}\), together with the weights of the marked points, agree in U, and such that the marked points of c and \({\tilde{c}}\) which differ lie at positive distance from U. Then the probability measures \(\mu _c^U\) and \(\mu _{{\tilde{c}}}^U\) are mutually absolutely continuous and the Radon–Nikodym derivative between them is given by

$$\begin{aligned} \frac{d\mu _{{\tilde{c}}}^U}{d\mu _{c}^U} (\eta ) = \frac{Z({\tilde{c}}_\tau )/Z({\tilde{c}})}{Z(c_\tau )/Z(c)} \exp \left( -\xi m(D;K_\tau ,D {\setminus } {\widetilde{D}}) + \xi m({\widetilde{D}};K_\tau ,{\widetilde{D}} {\setminus } D) \right) . \end{aligned}$$

Lemma 2.11

(Lemma 2.8 of [21]) Assume that we have the same setup as in Lemma 2.10, with \(D = {\mathbb {H}}\), \({\widetilde{D}} \subseteq {\mathbb {H}}\), \(U \subset {\mathbb {H}}\) bounded and \(z_0 = 0\). Fix \(\varsigma > 0\) and suppose that \({{\,\mathrm{dist}\,}}(U,{\mathbb {H}}{\setminus } {\widetilde{D}}) > \varsigma \) and that the force points which are outside U are at least at distance \(\varsigma \) from U. Then there exists a constant \(C \ge 1\), depending on U, \(\varsigma \), \(\kappa \) and the weights of the force points, such that

$$\begin{aligned} \frac{1}{C} \le \frac{d\mu _{{\tilde{c}}}^U}{d\mu _{c}^U} \le C. \end{aligned}$$

3 One-point estimates

In this section we establish first moment estimates; these will be important, as they give us the means to obtain good two-point estimates and also yield the upper bound on the dimension of \(V_\beta ^*\). Recall that \({\tilde{g}}_s = g_{{\tilde{t}}(s)}\) is the Loewner chain under the radial time change, see Sect. 2.2.2.

Proposition 3.1

Let \(\zeta > -\mu _c^2/2a\) and \(x_R \ge 0\). For all \(x > x_R\), we have

$$\begin{aligned} {\mathbb {E}}\left[ {\tilde{g}}_s'(x)^\zeta 1\{{\tilde{t}}(s)<\infty \} \right] = K \left( \frac{x-x_R}{x} \right) ^\mu e^{-a\mu (1+\rho /2)s}(1+O(e^{-(1-a+\mu ) s})), \end{aligned}$$

where \(K = \frac{\Gamma (2-2a+2\mu ) \Gamma (2-4a-a\rho +\mu )}{\Gamma (2-2a+\mu ) \Gamma (2-4a-a\rho +2\mu )}\).

Proof

By (34), \(f(x) = x^{-\mu }\) is in \(L^1(\nu _{{\tilde{Q}}})\), and thus Corollary A.2 gives that

$$\begin{aligned} {\mathbb {E}}^*[{\tilde{Q}}_s^{-\mu }] = {\mathbb {E}}^*[{\tilde{X}}_s^{-\mu }](1+O(e^{-(1-a+\mu ) s})). \end{aligned}$$
(40)

Thus, since \({\mathbb {P}}^*({\tilde{t}}(s) < \infty ) = 1\) for every s, we have

$$\begin{aligned}&{\mathbb {E}}[ {\tilde{g}}_s'(x)^\zeta 1\{ {\tilde{t}}(s)<\infty \}] = {\mathbb {E}}\left[ {\tilde{M}}_s^\mu {\tilde{Q}}_s^{-\mu } (x-x_R)^{\mu (1+\rho /2)} e^{-a\mu (1+\rho /2)s} 1\{ {\tilde{t}}(s)<\infty \} \right] \\&\quad = (x-x_R)^{\mu (1+\rho /2)} e^{-a\mu (1+\rho /2)s} {\mathbb {E}}\left[ {\tilde{M}}_s^\mu {\tilde{Q}}_s^{-\mu } 1\{ {\tilde{t}}(s) < \infty \} \right] \\&\quad = (x-x_R)^{\mu (1+\rho /2)} e^{-a\mu (1+\rho /2)s} x^{-\mu } (x-x_R)^{-\mu \rho /2} {\mathbb {E}}^*[{\tilde{Q}}_s^{-\mu }] \\&\quad = \left( \frac{x-x_R}{x} \right) ^\mu e^{-a\mu (1+\rho /2)s} {\mathbb {E}}^*[{\tilde{Q}}_s^{-\mu }] \\&\quad = \left( \frac{x-x_R}{x} \right) ^\mu e^{-a\mu (1+\rho /2)s} {\mathbb {E}}^*[{\tilde{X}}_s^{-\mu }] (1+O(e^{-(1-a+\mu ) s})) \\&\quad = K \left( \frac{x-x_R}{x} \right) ^\mu e^{-a\mu (1+\rho /2)s} (1+O(e^{-(1-a+\mu ) s})), \end{aligned}$$

using (31) for the first equality, changing to the measure \({\mathbb {P}}^*\) for the third equality, using (40) in the fifth and (34) in the last equality. \(\square \)

Using Proposition 3.1 and Lemma 2.4, recalling that we can choose a \(C^* = C^*(x,x_R)\) such that (35) holds, we get the following corollary.

Corollary 3.2

Suppose \(\zeta \ge 0\) and \(x_R \ge 0\). For every \(x > x_R\), there is a constant \(C=C(x,x_R)\) such that

$$\begin{aligned} \frac{1}{C} e^{-\mu (1+\rho /2)s} \le {\mathbb {E}}\Big [ g_{\tau _s}'(x)^\zeta 1\{ \tau _s < \infty \} \Big ] \le C e^{-\mu (1+\rho /2)s}. \end{aligned}$$

Proof

By Lemma 2.4 and the fact that the map \(t \mapsto g_t'(x)\) is decreasing, we have

$$\begin{aligned} {\mathbb {E}}\left[ {\tilde{g}}_{\frac{s}{a}+C^*}'(x)^\zeta 1\{{\tilde{t}}(s/a +C^*)<\infty \} \right]&\le {\mathbb {E}}\Big [ g_{\tau _s}'(x)^\zeta 1\{ \tau _s< \infty \} \Big ] \\&\le {\mathbb {E}}\left[ {\tilde{g}}_{\frac{s}{a} -C^*}'(x)^\zeta 1\{{\tilde{t}}(s/a - C^*)<\infty \} \right] . \end{aligned}$$

By the previous proposition, we have

$$\begin{aligned}&{\mathbb {E}}\left[ {\tilde{g}}_{\frac{s}{a}+C^*}'(x)^\zeta 1\{{\tilde{t}}(s/a+C^*)<\infty \} \right] \\&\quad = K \left( \frac{x-x_R}{x} \right) ^\mu e^{-\mu (1+\rho /2)(s+aC^*)} (1+O(e^{-\frac{1}{a}(1-a+\mu ) s})), \end{aligned}$$

and

$$\begin{aligned}&{\mathbb {E}}\left[ {\tilde{g}}_{\frac{s}{a} -C^*}'(x)^\zeta 1\{{\tilde{t}}(s/a -C^*)<\infty \} \right] \\&\quad = K \left( \frac{x-x_R}{x} \right) ^\mu e^{-\mu (1+\rho /2)(s-aC^*)} (1+O(e^{-\frac{1}{a}(1-a+\mu ) s})), \end{aligned}$$

where the constants in O depend on x and \(x_R\) (since \(C^*\) does). Thus, the proof is done. \(\square \)

At this point, we already have what is needed for the upper bound of the dimension of \(V_\beta ^*\).

3.1 Mass concentration

In this subsection, we will see that the mass of the weighted measure \({\mathbb {P}}^*\) is concentrated on an event where the behaviour of \({\tilde{g}}_s'(x)\), for fixed x, is nice. On this event, we will show that \({\tilde{g}}_s'(x)\) satisfies a number of inequalities which will be helpful in proving the two-point estimate of the next section. The ideas here are similar to those of Section 7 of [11].

We define the process \({\tilde{L}}_s\), by recalling (30), as

$$\begin{aligned} {\tilde{L}}_s = - \frac{1}{a} \log {\tilde{g}}_s'(x) = \int _0^s {\tilde{Q}}_u^{-1} (1-{\tilde{Q}}_u)du. \end{aligned}$$

As stated in Sect. 2.2.2 (and shown in the appendix), \({\tilde{Q}}_s\) has an invariant distribution under \({\mathbb {P}}^*\), with density \(p_{{\tilde{Q}}}\) (recall (33)). Therefore, by the ergodicity of \({\tilde{Q}}_s\) (Corollary A.2) and a computation,

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{{\tilde{L}}_s}{s} = \int _0^1 y^{-1}(1-y) p_{{\tilde{Q}}}(y) dy = \beta (1+\rho /2), \end{aligned}$$

holds \({\mathbb {P}}^*\)-almost surely, that is, the time average converges \({\mathbb {P}}^*\)-almost surely to the space average. We shall prove that, roughly speaking, as \(s \rightarrow \infty \), \({\tilde{L}}_s \approx \beta (1+\rho /2)s\), with an error of order \(\sqrt{s}\). To this end, we first need the following lemma.

Lemma 3.3

Let \(\zeta > -\mu _c^2/2a\). There is a positive constant \(c<\infty \) such that for \(p>0\) sufficiently small, and \(t\ge 1\),

$$\begin{aligned} {\mathbb {E}}^* \left[ \exp \left\{ p \frac{|{\tilde{L}}_t - \beta (1+\rho /2)t|}{\sqrt{t}} \right\} \right] \le c. \end{aligned}$$

The proof idea is as follows. Observe that if we view \(\mu = \mu _c + \sqrt{\mu _c^2 + 2a\zeta }\) as a function of \(\zeta \), then \(\mu '(\zeta ) = a/\sqrt{\mu _c^2 + 2a\zeta } = \beta \). We define the process

$$\begin{aligned} {\tilde{N}}_s = e^{-a\delta {\tilde{L}}_s} e^{a(1+\rho /2)(\mu (\zeta +\delta ) - \mu (\zeta ))s} {\tilde{Q}}_s^{\mu (\zeta +\delta ) - \mu (\zeta )}, \end{aligned}$$

which by Itô’s formula is seen to be a local martingale under \({\mathbb {P}}^*\). Since it is bounded from below, it is a supermartingale. Then we use that \(\mu (\zeta +\delta ) - \mu (\zeta ) = \delta \beta + O(\delta ^2)\) and that we have good control of \({\tilde{Q}}_s\).

Proof

We have that

$$\begin{aligned} \log {\tilde{N}}_t -(\mu (\zeta +\delta )-\mu (\zeta )) \log {\tilde{Q}}_t&= -a\delta {\tilde{L}}_t + at(1+\rho /2)(\mu (\zeta +\delta ) - \mu (\zeta )) \\&= -a\delta ({\tilde{L}}_t - \beta (1+\rho /2)t) + O(\delta ^2 t), \end{aligned}$$

since \(\mu (\zeta +\delta )-\mu (\zeta )=\delta \beta + O(\delta ^2)\). This implies that

$$\begin{aligned} \log {\tilde{N}}_t = -a\delta ({\tilde{L}}_t - \beta (1+\rho /2)t) + O(\delta ^2 t) + (\delta \beta + O(\delta ^2)) \log {\tilde{Q}}_t. \end{aligned}$$

Let \(\delta = \pm \frac{\varepsilon }{\sqrt{t}}\), where \(\varepsilon \) is small enough for \({\tilde{N}}_t\) to be well-defined. Then,

$$\begin{aligned} \log {\tilde{N}}_t = \mp a\frac{\varepsilon }{\sqrt{t}}({\tilde{L}}_t - \beta (1+\rho /2)t) + O(\varepsilon ^2) + (\beta \frac{\pm \varepsilon }{\sqrt{t}} + O(\varepsilon ^2/t))\log {\tilde{Q}}_t, \end{aligned}$$

and exponentiating, we get

$$\begin{aligned} {\mathbb {E}}^*\left[ \exp \left\{ \mp a\varepsilon \frac{{\tilde{L}}_t-\beta (1+\rho /2)t}{\sqrt{t}} \right\} {\tilde{Q}}_t^{\beta \frac{\pm \varepsilon }{\sqrt{t}}+O(\varepsilon ^2/t)} \right] \le c, \end{aligned}$$

since \({\tilde{N}}_t\) is a supermartingale under \({\mathbb {P}}^*\) and hence \({\mathbb {E}}^*[{\tilde{N}}_t] \le {\mathbb {E}}^*[{\tilde{N}}_0] = {\tilde{Q}}_0^{\mu (\zeta +\delta )-\mu (\zeta )} = \left( \frac{x-x_R}{x}\right) ^{\mu (\zeta +\delta )-\mu (\zeta )}\). Consider the case \(\delta <0\), i.e.,

$$\begin{aligned} {\mathbb {E}}^*\left[ \exp \left\{ a\varepsilon \frac{{\tilde{L}}_t-\beta (1+\rho /2)t}{\sqrt{t}} \right\} {\tilde{Q}}_t^{-\beta \frac{\varepsilon }{\sqrt{t}}+O(\varepsilon ^2/t)} \right] \le c. \end{aligned}$$

Since \({\tilde{Q}}_t \in [0,1]\) and \(\beta > 0\), we have \({\tilde{Q}}_t^{-\beta \frac{\varepsilon }{\sqrt{t}} + O(\varepsilon ^2/t)} \ge 1\) for sufficiently small \(\varepsilon \), and thus

$$\begin{aligned} {\mathbb {E}}^*\left[ \exp \left\{ a\varepsilon \frac{{\tilde{L}}_t-\beta (1+\rho /2)t}{\sqrt{t}} \right\} \right] \le c. \end{aligned}$$

Consider the case \(\delta >0\). We will split the expectation into the cases \({\tilde{Q}}_t \le y\) and \({\tilde{Q}}_t > y\) for some \(y \in (0,1]\). First,

$$\begin{aligned} c&\ge {\mathbb {E}}^*\left[ \exp \left\{ -a\varepsilon \frac{{\tilde{L}}_t-\beta (1+\rho /2)t}{\sqrt{t}} \right\} {\tilde{Q}}_t^{\beta \frac{\varepsilon }{\sqrt{t}}+O(\varepsilon ^2/t)} 1\left\{ {\tilde{Q}}_t> y \right\} \right] \\&\ge {\mathbb {E}}^*\left[ \exp \left\{ -a\varepsilon \frac{{\tilde{L}}_t-\beta (1+\rho /2)t}{\sqrt{t}} \right\} y^{\beta \frac{\varepsilon }{\sqrt{t}}+O(\varepsilon ^2/t)} 1\left\{ {\tilde{Q}}_t > y \right\} \right] , \end{aligned}$$

which implies

$$\begin{aligned} {\mathbb {E}}^*\left[ \exp \left\{ -a\varepsilon \frac{{\tilde{L}}_t-\beta (1+\rho /2)t}{\sqrt{t}} \right\} 1\left\{ {\tilde{Q}}_t > y \right\} \right] \le cy^{-\beta \frac{\varepsilon }{\sqrt{t}} + O(\varepsilon ^2/t)} \le cy^{-2\beta \frac{\varepsilon }{\sqrt{t}}} \end{aligned}$$

for sufficiently small \(\varepsilon \). For the other part, note that since \({\tilde{L}}_t \ge 0\),

$$\begin{aligned}&{\mathbb {E}}^* \left[ \exp \left\{ -a\varepsilon \frac{{\tilde{L}}_t - \beta (1+\rho /2)t}{\sqrt{t}} \right\} 1\left\{ {\tilde{Q}}_t \le y \right\} \right] \le {\mathbb {E}}^* \left[ e^{a\varepsilon \beta (1+\rho /2)\sqrt{t}} 1\left\{ {\tilde{Q}}_t \le y \right\} \right] \\&\quad = e^{a\varepsilon \beta (1+\rho /2)\sqrt{t}} {\mathbb {P}}^*({\tilde{Q}}_t \le y) \le c' e^{a\varepsilon \beta (1+\rho /2)\sqrt{t}} y^{2\mu -4a-a\rho +2}, \end{aligned}$$

for some constant \(c'\), where the last inequality follows by Corollary A.2 and (34). If we let

$$\begin{aligned} y = \exp \left\{ -\frac{a\varepsilon \beta (1+\rho /2)}{2\mu -4a-a\rho +2}\sqrt{t} \right\} , \end{aligned}$$

then the “\({\tilde{Q}}_t \le y\)”-part is bounded by \(c'\) (the two exponential factors cancel exactly), while the “\({\tilde{Q}}_t > y\)”-part is bounded by \(c \exp \left\{ \frac{2a\beta ^2\varepsilon ^2(1+\rho /2)}{2\mu -4a-a\rho +2} \right\} \); in particular, both parts are bounded by positive constants. Thus, we are done. \(\square \)

With the previous lemma at hand, we can now prove the following.

Proposition 3.4

There exists a constant, c, such that if we fix \(t > 0\) and let \({\tilde{I}}_t^u\) be the event that for all \(0 \le s \le t\),

$$\begin{aligned} |{\tilde{L}}_s-\beta (1+\rho /2)s| \le u\sqrt{s} \log (2+s) + c, \end{aligned}$$

then, for every \(\varepsilon > 0\) there exists a \(u < \infty \) such that

$$\begin{aligned} {\mathbb {P}}^*({\tilde{I}}_t^u) \ge 1-\varepsilon \end{aligned}$$

for every t.

Proof

There is a constant, c, such that for any \(k \in {\mathbb {N}}\),

$$\begin{aligned}&{\mathbb {P}}^* \left( |{\tilde{L}}_s - \beta (1+\rho /2)s|> u \sqrt{s} \log (2+s) + c \ \text {for some} \ s \in [k,k+1] \right) \\&\quad \le {\mathbb {P}}^* \left( |{\tilde{L}}_{k+1} - \beta (1+\rho /2)(k+1)| > u \sqrt{k+1} \log (2+(k+1)) \right) . \end{aligned}$$

Thus, by splitting into subintervals of length 1 and using Chebyshev’s inequality and Lemma 3.3 (with \(p>0\) chosen accordingly),

$$\begin{aligned}&{\mathbb {P}}^* \left( |{\tilde{L}}_s - \beta (1+\rho /2)s|> u \sqrt{s} \log (2+s) + c \ \text {for some} \ s \ge 0 \right) \\&\quad \le \sum _{k=1}^\infty {\mathbb {P}}^* \left( |{\tilde{L}}_k - \beta (1+\rho /2)k|> u \sqrt{k} \log (2+k) \right) \\&\quad = \sum _{k=1}^\infty {\mathbb {P}}^* \left( p\frac{|{\tilde{L}}_k - \beta (1+\rho /2)k|}{\sqrt{k}} > pu \log (2+k) \right) \\&\quad \le \sum _{k=1}^\infty {\mathbb {E}}^* \left[ \exp \left\{ p \frac{|{\tilde{L}}_k-\beta (1+\rho /2)k|}{\sqrt{k}} \right\} \right] (2+k)^{-pu} \\&\quad \le \sum _{k=1}^\infty c_0 (2+k)^{-pu}, \end{aligned}$$

which tends to 0 as \(u \rightarrow \infty \). \(\square \)

We shall denote both the event and its indicator function by \({\tilde{I}}_t^u\), and we will more often than not drop the u from the notation and write \({\tilde{I}}_t\). Straightforward calculations, using that \({\tilde{g}}_s'(x) = e^{-a{\tilde{L}}_s}\), show that on the event of the above proposition, we have

$$\begin{aligned} \psi _0(s)^{-1} e^{-a\beta (1+\rho /2)s} \le {\tilde{g}}_s'(x) \le \psi _0(s) e^{-a\beta (1+\rho /2)s} \end{aligned}$$
(41)

where \(\psi _0\) is the subexponential function \(\psi _0(s) = e^{au \sqrt{s} \log (2+s) +c}\).
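For the reader’s convenience, here is the short computation behind (41): since \({\tilde{g}}_s'(x) = e^{-a{\tilde{L}}_s}\), the bound \(|{\tilde{L}}_s-\beta (1+\rho /2)s| \le u\sqrt{s} \log (2+s) + c\) defining \({\tilde{I}}_t^u\) gives, for \(0 \le s \le t\),

$$\begin{aligned} e^{-a(u\sqrt{s}\log (2+s)+c)} e^{-a\beta (1+\rho /2)s} \le {\tilde{g}}_s'(x) \le e^{a(u\sqrt{s}\log (2+s)+c)} e^{-a\beta (1+\rho /2)s}, \end{aligned}$$

which is (41) after absorbing the constant into \(\psi _0\).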

Next, we want to convert these facts into the corresponding statements for \(g_{\tau _t}'(x)\). We let \(C^* = C^*(x,x_R)\) denote the constant from the remark after Lemma 2.4, that is, the constant such that, for \(t>0\), \({\tilde{t}}((t/a -C^*) \vee 0) \le \tau _t(x) \le {\tilde{t}}(t/a + C^*)\). We now define an \({\mathscr {F}}_{\tau _t}\)-measurable version of \({\tilde{I}}_t^u\) (the indicator of the event of Proposition 3.4); the natural way to do this is as a conditional expectation with respect to this filtration. Fix \(u>0\) and write

$$\begin{aligned} I_t^u = {\mathbb {E}}\left[ {\tilde{I}}_{\frac{t}{a} + C^*}^u \bigg | {\mathscr {F}}_{\tau _t} \right] . \end{aligned}$$
(42)

In the next lemma, we will see that this indeed works the same way for \(g_{\tau _t}'(x)\) as \({\tilde{I}}_t^u\) does for \({\tilde{g}}_t'(x)\). We will omit the superscript and write \(I_t = I_t^u\).

Lemma 3.5

Let \(u>0\) and \(I_t = I_t^u\) be as above. Then there is a subexponential function \(\psi \) such that for \(\max (0,-\log (x-x_R)) \le s \le t\),

$$\begin{aligned} \psi (s)^{-1} e^{-\beta (1+\rho /2)s} I_t \le g_{\tau _s}'(x) I_t \le \psi (s) e^{-\beta (1+\rho /2)s} I_t, \end{aligned}$$

where the implicit constants depend on x and \(x_R\).

Proof

Fix \(u>0\) and write \(s_+ = s/a + C^*\) and \(s_- = s/a -C^*\), and similarly \(t_+ = t/a + C^*\) (where \(C^*\) is as described above). Since \(t \mapsto g_t'(x)\) is decreasing, we have

$$\begin{aligned} {\tilde{g}}_{s_+}'(x) \le g_{\tau _s}'(x) \le {\tilde{g}}_{s_-}'(x). \end{aligned}$$

Hence, by (41)

$$\begin{aligned} g_{\tau _s}'(x) I_t&= {\mathbb {E}}\left[ g_{\tau _s}'(x) {\tilde{I}}_{t_+} \bigg | {\mathscr {F}}_{\tau _t} \right] \le {\mathbb {E}}\left[ {\tilde{g}}_{s_-}'(x) {\tilde{I}}_{t_+} \bigg | {\mathscr {F}}_{\tau _t} \right] \le \psi _0(s_-) e^{-a\beta (1+\rho /2)s_-} I_t \\&=\psi _0(s_-) e^{C^* a\beta (1+\rho /2)} e^{-\beta (1+\rho /2)s} I_t. \end{aligned}$$

In the same way,

$$\begin{aligned} g_{\tau _s}'(x) I_t \ge {\mathbb {E}}\left[ {\tilde{g}}_{s_+}'(x) {\tilde{I}}_{t_+} \bigg | {\mathscr {F}}_{\tau _t} \right] \ge \psi _0(s_+)^{-1} e^{-C^* a\beta (1+\rho /2)} e^{-\beta (1+\rho /2)s} I_t, \end{aligned}$$

and the lemma is proven. \(\square \)

Remark 3.6

We remark that we can allow a larger constant in the definition of the event in Proposition 3.4. The choice of the constant c is not important, since if we let \({\tilde{I}}_t^{u,{\tilde{c}}}\) denote the event where we replace c by \({\tilde{c}}>c\), the same estimates hold with the subexponential function \(\psi _1(s) = e^{au\sqrt{s} \log (2+s)+{\tilde{c}}}\) in place of \(\psi _0\). Hence, the correct asymptotic behaviour of \({\tilde{g}}_s'(x)\) is preserved on \({\tilde{I}}_t^{u,{\tilde{c}}}\). Furthermore, \({\tilde{I}}_t^u \subset {\tilde{I}}_t^{u,{\tilde{c}}}\).

4 Two-point estimate

4.1 Outline

In this section, we use imaginary geometry techniques to prove the two-point estimate that we need for the lower bound on the dimension of \(V_\beta ^*\) (and hence \(V_\beta \)). We follow the ideas of Sect. 3.2 of [21] and keep the notation similar. Note that we write the proof for flow lines, i.e., \(\kappa <4\), but the merging property and every lemma that we need hold for the level lines of the GFF as well, so the method also gives the two-point estimate in the case \(\kappa = 4\). The main idea is to use the merging of the flow lines and the approximate independence of the GFF in disjoint regions to “move the problem between scales” and to separate the points when at the right scale.

We let h be a GFF in \({\mathbb {H}}\) with boundary conditions such that the flow line \(\eta \) from 0 is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process from 0 to \(\infty \). We define a sequence of random variables \(E^n(x)\), for \(x \in {\mathbb {R}}\) and \(n \in {\mathbb {N}}\), such that if \(E^n(x)>0\) for every \(n \in {\mathbb {N}}\), then \(x \in V_\beta ^*\); in that case we say that x is a perfect point. The idea for the construction of the random variables is as follows. Consider the event \(A_0^1(x)\) that \(\eta \) hits the ball \(B(x,\varepsilon _1)\), where \(\varepsilon _1 = e^{-\alpha _1}\), and let \(E^0(x) = 1_{A_0^1(x)} I_{\alpha _1}^{u,\Lambda }\), where \(I_{\alpha _1}^{u,\Lambda }\) is the random variable of (42) but with a larger constant \(\Lambda \). That is, if \(E^0(x)>0\), then \(\eta \) gets within distance \(\varepsilon _1\) of x and the derivative \(g_{\tau _t}'(x)\) decays approximately as \(e^{-\beta (1+\rho /2)t}\) until \(\eta \) hits \(B(x,\varepsilon _1)\).

We proceed inductively. Assume that \(E^k(x)\) is defined and that \(\varepsilon _j = e^{-\sum _{l=1}^j \alpha _l}\), \(\alpha _l > 0\). Let \(\eta ^{x_{k+1}}\) be the flow line started from the point \(x_{k+1} = x-\varepsilon _{k+1}/4\). Let \(A_{k+1}^1(x)\) be the event that \(\eta ^{x_{k+1}}\) hits \(B(x,\varepsilon _{k+2})\), plus some regularity conditions. Furthermore, let \(I_{k+1}^{u,\Lambda ,k+1}\) denote the random variable corresponding to (42), but for \(\eta ^{x_{k+1}}\) until hitting \(B(x,\varepsilon _{k+2})\). Next, given that \(A_k^1(x)\) and \(A_{k+1}^1(x)\) occur, let \(A_{k+1}^2(x)\) be the event that \(\eta ^{x_k}\) hits \(\eta ^{x_{k+1}}\) plus some regularity conditions. We then set \(E^{k+1}(x) = E^k(x) 1_{A_{k+1}^1(x) \cap A_{k+1}^2(x)} I_{k+1}^{u,\Lambda ,k+1}\).

In short, we let a sequence of flow lines, on smaller and smaller scales, approach the point x, such that each flow line has the correct geometric behaviour as it approaches x. Moreover, each flow line hits and merges with the next. In this way, the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process \(\eta \) inherits its geometric behaviour from each of the flow lines. This is very convenient when deriving the two-point estimate, that is, when proving that the correlation of \(E^n(x)\) and \(E^n(y)\) is small when \(|x-y|\) is large. The key property that we use is that the flow lines started within the balls \(B(x, |x-y|/(2+\delta _0))\) and \(B(y,|x-y|/(2+\delta _0))\) are approximately independent when \(\delta _0 > 0\) (in the sense that the Radon–Nikodym derivative between the measures with and without the other set of flow lines present is bounded above and below by a constant). Moreover, the flow lines outside of those balls will also be approximately independent, in the same sense, see Lemma 4.4. Furthermore, the probability of two subsequent flow lines merging is bounded below by a positive constant, see Lemma 4.5.

Having a certain decay rate of the derivatives of the conformal maps is equivalent to having a certain decay rate of the harmonic measure from infinity of some set on the real line. This will be essential to us, as it is the tool with which we show that the perfect points actually belong to \(V_\beta ^*\). Moreover, it is important that \(\alpha _j \rightarrow \infty \), but not too quickly. If \(\alpha _j\) did not tend to \(\infty \), then the perfect points would just be points where \(\lim _{s\rightarrow \infty } \frac{1}{s} \log g_{\tau _s}'(x) \in [-\beta (1+\rho /2) - c,-\beta (1+\rho /2) + c]\).

In the next subsection, there will be parameters which at first may look redundant, but which in fact play important roles in the regularity conditions. We conclude this subsection by listing them and giving brief descriptions of how they are used.

  • \(\underline{\delta \in (0,\frac{1}{2})}\): Chosen to be very small; it makes sure that the curve \(\eta ^{x_k}\) does not hit \(B(x,\frac{1}{M}\varepsilon _k)\) or \(B(x,\varepsilon _{k+1})\) too close to the real line. This is important, as it ensures that the probability of \(\eta ^{x_k}\) and \(\eta ^{x_{k+1}}\) merging does not decrease in k. Furthermore, it is needed in the one-point estimate, Lemma 4.6, as it gives control of a certain martingale.

  • \(\underline{M>0}\): Crucial in the proof that the perfect points belong to \(V_\beta ^*\). It makes sure that the probability that a Brownian motion started in \(B(x,\varepsilon _{k+1})\) exits in the interval between x and the rightmost point on \({\mathbb {R}}\) of \(\eta ^{x_{k+1}}\) (stopped upon hitting \(B(x,\varepsilon _{k+2})\)) depends mostly on \(\eta ^{x_{k+1}}\) and not on \(\eta ^{x_k}\). It is chosen to be large, so that the process \(Q^k\), under the measure \({\mathbb {P}}^*\), will be close in law to its \({\mathbb {P}}^*\)-invariant distribution when \(\eta ^{x_k}\) reaches \(B(x,\frac{1}{M}\varepsilon _k)\). Moreover, this also makes sure that the probability of \(\eta ^{x_k}\) and \(\eta ^{x_{k+1}}\) merging does not decrease in k.

  • \(\underline{\Lambda >0}\): Chosen large so that the event \({\tilde{I}}_t^{u,\Lambda }\) for \(\eta ^{x_k}\) contains the event \({\tilde{I}}_t^u\) for the image of \(\eta ^{x_k}\) under some map F.

  • \(\underline{u>0}\): Chosen large enough, so that the event \({\tilde{I}}_t^u\) has sufficiently large \({\mathbb {P}}^*\)-probability.

4.2 Perfect points and the two-point estimate

Throughout this section we fix \(\kappa \in (0,4)\) and \(\rho \in (-2,\frac{\kappa }{2}-2)\) and let h be a GFF in \({\mathbb {H}}\) with boundary values \(-\lambda \) on \({\mathbb {R}}_-\) and \(\lambda (1+\rho )\) on \({\mathbb {R}}_+\), so that the flow line \(\eta \) from 0 to \(\infty \) is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) curve with force point located at \(0^+\) (so the configuration is \(({\mathbb {H}},0,0^+,\infty )\)). Note that the interval for \(\rho \) is chosen so that \(\eta \) can hit \({\mathbb {R}}_+\). We denote the flow line from x by \(\eta ^x\) and note that for \(x>0\), \(\eta ^x\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) with configuration \(({\mathbb {H}},x,(0,x^-),x^+,\infty )\). We fix \(\delta \in (0,\frac{1}{2})\), \(M>0\) large and an increasing sequence, \(\alpha _j \rightarrow \infty \), write \({\overline{\alpha }}_k = \sum _{j=1}^k \alpha _j\) and let \(\varepsilon _k = e^{-{\overline{\alpha }}_k}\). The constants \(\delta \) and M will be chosen later. As for \(\alpha _j\), we define it as \(\alpha _j = \alpha _0+\log j\), where \(\alpha _0 = \log N\) for some large integer N. For \(x \ge 1\) and \(k \in {\mathbb {N}}\), we write

$$\begin{aligned} x_k = {\left\{ \begin{array}{ll} x-\frac{1}{4}\varepsilon _k &{} \text {if } k \ge 1,\\ 0 &{} \text {if } k = 0. \end{array}\right. } \end{aligned}$$

For \(U \subset {\mathbb {H}}\), we define

$$\begin{aligned} \sigma ^x(U) = \inf \{ t\ge 0: \eta ^x(t) \in {\overline{U}} \} \end{aligned}$$

(when \(x=0\), we omit the superscript) and

$$\begin{aligned} \sigma _k^x = \sigma ^{x_k}(B(x,\varepsilon _{k+1})), \end{aligned}$$

and note that \(\sigma (B(x,\varepsilon _k))=\tau _{{\overline{\alpha }}_k}(x)\). Furthermore, let \(\sigma _{k,M}^x = \sigma ^{x_k}(B(x,\frac{1}{M}\varepsilon _k))\). We let \(\eta ^{x_k,R}\) denote the right side of the flow line \(\eta ^{x_k}\), \(r_t^k = \max \{ \eta ^{x_k}([0,t]) \cap {\mathbb {R}}\}\) and define \(Q_t^k\) by

$$\begin{aligned} Q_t^k(x) = \frac{\omega _\infty ((r_t^k,x],{\mathbb {H}}{\setminus } \eta ^{x_k}([0,t]))}{\omega _\infty (\eta ^{x_k,R}([0,t]) \cup (r_t^k,x],{\mathbb {H}}{\setminus } \eta ^{x_k}([0,t]))}. \end{aligned}$$

Recall that by (24), \(Q_t(x)=Q_t^0(x)\) is the diffusion (23). For \(k \ge 0\), let \({\tilde{I}}_t^{u,\Lambda ,k}={\tilde{I}}_t^{u,\Lambda ,k}(x)\) denote the event (as well as the indicator of the event) of Proposition 3.4, with constant \(\Lambda \) (see Remark 3.6) but for the flow line \(\eta ^{x_k}\), and

$$\begin{aligned} I_k^{u,\Lambda ,k} = {\mathbb {E}}\left[ {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k} \bigg | {\mathscr {F}}_{\sigma _k^x} \right] , \end{aligned}$$

as previously. The constants u and \(\Lambda \) will be chosen in Lemma 4.6. Note that the event \({\tilde{I}}_t^{u,\Lambda ,k}\) is a condition on the geometry of the curve which does not change when we rescale (it can be expressed in terms of \(Q^k(x)\), which is invariant under scaling of the \({{\,\mathrm{SLE}\,}}_\kappa ({\underline{\rho }})\) process). Moreover, if we let \(\eta _*^{x_k} = \varphi _k(\eta ^{x_k})\), where \(\varphi _k(z) = (z-x)/\varepsilon _k\) (so that \(\eta _*^{x_k}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) process with configuration \(({\mathbb {H}},-1/4,(-x/\varepsilon _k,0^-),0^+,\infty )\)) and let \((g_t^{*,k})_{t \ge 0}\) denote its Loewner chain, then on the event \(\{ I_k^{u,\Lambda ,k} > 0\}\),

$$\begin{aligned} \psi (\alpha _{k+1})^{-1} e^{-\beta (1+\rho /2)\alpha _{k+1}} \le (g_{\sigma _k^x}^{*,k})'(0) \le \psi (\alpha _{k+1}) e^{-\beta (1+\rho /2)\alpha _{k+1}}, \end{aligned}$$
(43)

for some subexponential function \(\psi =\psi _{u,\Lambda }\) (Fig. 8).
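The rescaling costs nothing at the level of derivatives; we record the standard scaling rule as a brief sketch. If K is a hull with (hydrodynamically normalized) mapping-out function \(g_K\) and \(\varphi _k(z) = (z-x)/\varepsilon _k\), then the mapping-out function of \(\varphi _k(K)\) is

$$\begin{aligned} g_{\varphi _k(K)}(z) = \frac{g_K(\varepsilon _k z + x)-x}{\varepsilon _k}, \qquad \text {so that} \qquad g_{\varphi _k(K)}'(\varphi _k(w)) = g_K'(w). \end{aligned}$$

In particular, derivatives of the Loewner maps are unchanged under \(\varphi _k\), which is why the bounds of Lemma 3.5, applied to \(\eta ^{x_k}\), transfer directly to \(\eta _*^{x_k}\) and give (43).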

Fig. 8

If \(E_k^1(x)>0\), then \(\eta ^{x_{k-1}}\) hits \(B(x,\varepsilon _k)\), \(Q_{\sigma _{k,M}^x}^k,Q_{\sigma _k^x}^k \in [\delta , 1-\delta ]\) and the derivatives of the Loewner chain for \(\eta ^{x_{k-1}}\) behave as we want. Furthermore, given that \(E_k^1(x) > 0\), we have that if \(E_k^2(x) = 1\), then \(\eta ^{x_{k-1}}\) merges with \(\eta ^{x_k}\) before exiting \(B(x,\frac{1}{2} \varepsilon _k) {\setminus } B(x,\frac{1}{M}\varepsilon _k)\)

We let \(A_k^1(x) = A_k^1(x,\delta ,M,\alpha _0)\) be the event that

  1. (i)

    \(\sigma _k^x < \infty \),

  2. (ii)

    \(Q_{\sigma _{k,M}^x}^k(x),Q_{\sigma _k^x}^k(x) \in [\delta ,1-\delta ]\), and

  3. (iii)

    \(\sigma _k^x < \sigma ^{x_k}({\mathbb {H}}{\setminus } B(x,\frac{1}{2} \varepsilon _k))\),

that is, \(\eta ^{x_k}\) hits \(B(x,\varepsilon _{k+1})\) before exiting \(B(x,\frac{1}{2}\varepsilon _k)\) and it does not hit \(B(x,\frac{1}{M}\varepsilon _k)\) or \(B(x,\varepsilon _{k+1})\) “too far down” (the latter being due to the condition on \(Q_t^k(x)\)). Now, we set

$$\begin{aligned} E_k^1(x) = 1_{A_k^1(x)} I_k^{u,\Lambda ,k}. \end{aligned}$$

We let \(A_k^2(x)=A_k^2(x,\delta ,M,\alpha _0)\) be the event that, on \(A_{k-1}^1(x)\) and \(A_k^1(x)\),

  1. (i)

    \(\eta ^{x_{k-1}}|_{[\sigma _{k-1}^x,\infty )}\) merges with \(\eta ^{x_k}|_{[0,\sigma _k^x)}\) before exiting \(B(x,\frac{3}{2} \varepsilon _k) {\setminus } B(x,\frac{1}{M}\varepsilon _k)\),

  2. (ii)

    \(\arg (\eta ^{x_{k-1}}(t)-x) \ge \frac{2}{3} \min (\arg (\eta ^{x_{k-1}}(\sigma _{k-1}^x)-x),\arg (\eta ^{x_k}(\sigma _k^x)-x))\) for \(t> \sigma _{k-1}^x\) but before merging with \(\eta ^{x_k}\),

that is, property (ii) makes sure that the curve does not get “too close” to \({\mathbb {R}}_+\), see Fig. 9. We let \(E_k^2(x) = 1_{A_k^2(x)}\) be the indicator of that event. Next, we let \(E_k(x) = E_k^1(x) E_k^2(x)\) and write

$$\begin{aligned} E^{m,n}(x) = E_{m+1}^1(x) \prod _{k=m+2}^n E_k(x), \end{aligned}$$

and \(E^n(x) = E^{-1,n}(x)\).

Fig. 9

Condition (ii) on \(A_k^2(x)\) ensures that the situation in the above figure does not occur; instead, there will be some sector which the flow lines will not enter

Why this is the right setting and why these conditions are the correct ones to impose might not be clear at first sight; this we prove in the next lemma.

Lemma 4.1

If \(E^n(x)>0\) for each \(n\in {\mathbb {N}}\), then \(x \in V_\beta ^*\).

Proof

First, note that we are considering the decay of the conformal maps at a sequence of times \({\overline{\alpha }}_k \rightarrow \infty \), rather than as the limit over a continuum. However, by the monotonicity of the map \(t \mapsto g_t'(x)\), this is sufficient. By the Koebe 1/4 theorem,

$$\begin{aligned} \frac{e^{-{\overline{\alpha }}_k}}{4}g_{\tau _{{\overline{\alpha }}_k}}'(x) \le \omega _\infty ((r_{{\overline{\alpha }}_k},x],{\mathbb {H}}{\setminus } K_{{\overline{\alpha }}_k}) \le 4 e^{-{\overline{\alpha }}_k} g_{\tau _{{\overline{\alpha }}_k}}'(x) \end{aligned}$$
(44)

for each integer \(k \in {\mathbb {N}}\). Hence, it is enough to see that the decay rate of \(\omega _\infty \) is the correct one.
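For completeness, (44) is the standard Koebe-type estimate: the harmonic measure from infinity of a boundary interval is comparable to the length of its image under the mapping-out function \(g_{\tau _{{\overline{\alpha }}_k}}\), and by the Koebe distortion theorem this length is comparable to \({{\,\mathrm{dist}\,}}(x,K_{\tau _{{\overline{\alpha }}_k}})\, g_{\tau _{{\overline{\alpha }}_k}}'(x) \asymp e^{-{\overline{\alpha }}_k} g_{\tau _{{\overline{\alpha }}_k}}'(x)\).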

Let \({\widehat{K}}_n\) denote the closure of the complement of the unbounded connected component of \({\mathbb {H}}{\setminus } (\eta ([0,\tau _{{\overline{\alpha }}_{n+1}}]) \cup \eta ^{x_n}([0,\sigma _n^x]))\). Clearly, on the event \(\{E^n(x)>0\}\),

$$\begin{aligned} \omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) \le \omega _\infty ((r_{\tau _{{\overline{\alpha }}_{n+1}}},x],{\mathbb {H}}{\setminus } K_{\tau _{{\overline{\alpha }}_{n+1}}}), \end{aligned}$$

since \(K_{\tau _{{\overline{\alpha }}_{n+1}}} \subset {\widehat{K}}_n\) and \((r_{\sigma _n^x}^n,x] \subset (r_{\tau _{{\overline{\alpha }}_{n+1}}},x]\). Viewing \(\omega \) as the hitting probability of a Brownian motion, it is easy to see that

$$\begin{aligned} \omega _\infty ((r_{\sigma _n^x}^n,x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) \gtrsim \omega _\infty ((r_{\tau _{{\overline{\alpha }}_{n+1}}},x],{\mathbb {H}}{\setminus } K_{\tau _{{\overline{\alpha }}_{n+1}}}), \end{aligned}$$
(45)

where the implicit constant is independent of n. Indeed, if \(L_n\) denotes the line segment \([x,\eta ^{x_n}(\sigma _n^x)]\), then a Brownian motion, started in the unbounded connected component of \({\mathbb {H}}{\setminus } ({\widehat{K}}_n \cup L_n)\), which exits \({\mathbb {H}}{\setminus } {\widehat{K}}_n\) in either of the two intervals \((r_{\tau _{{\overline{\alpha }}_{n+1}}},r_{\sigma _n^x}^n]\) and \((r_{\sigma _n^x}^n,x]\), must first hit the line segment \(L_n\). However, for any point \(z \in L_n\), we have \({{\,\mathrm{dist}\,}}(z,(r_{\tau _{{\overline{\alpha }}_{n+1}}},r_{\sigma _n^x}^n]) > \varepsilon _{n+1}\), \({{\,\mathrm{dist}\,}}(z,(r_{\sigma _n^x}^n,x]) \le \varepsilon _{n+1}\) and \(x-r_{\sigma _n^x}^n > \varepsilon _{n+1}\). Hence, the conditional probability that the Brownian motion exits in \((r_{\sigma _n^x}^n,x]\), given that it exits in \((r_{\tau _{{\overline{\alpha }}_{n+1}}},x]\), is greater than some \({\hat{p}}>0\). That the constant is independent of n follows from scale invariance. See Fig. 10 for an illustration of (45). Thus, we have proven that on the event \(\{E^n(x)>0\}\),

$$\begin{aligned} \omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) \asymp \omega _\infty ((r_{\tau _{{\overline{\alpha }}_{n+1}}},x],{\mathbb {H}}{\setminus } K_{\tau _{{\overline{\alpha }}_{n+1}}}). \end{aligned}$$
(46)

Finally, we shall prove that \(\omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n)\) has the correct decay rate, that is, that

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{{\overline{\alpha }}_{n+1}} \log \omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) = -1-\beta (1+\rho /2). \end{aligned}$$

We start with the upper bound. Since \(\eta ([0,\tau _{\alpha _1}]) \cup \bigcup _{j=1}^n \eta ^{x_j}([0,\sigma _j^x]) \subset {\widehat{K}}_n\), we have, by the Markov property for Brownian motion,

$$\begin{aligned} \omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n)&\le \omega _\infty \Big ((r_{\sigma _n^x}^n,x],{\mathbb {H}}{\setminus } (\eta ([0,\tau _{\alpha _1}]) \cup \bigcup _{j=1}^n \eta ^{x_j}([0,\sigma _j^x])) \Big ) \nonumber \\&\le \omega _\infty \left( \partial B\left( x,\frac{\varepsilon _1}{2} \right) ,{\mathbb {H}}{\setminus }\eta ([0,\tau _{\alpha _1}])\right) \nonumber \\&\quad \times \prod _{j=1}^{n-1} \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _{j}\right) } \omega \left( z,\partial B\left( x,\frac{\varepsilon _{j+1}}{2} \right) , {\mathbb {H}}{\setminus }\eta ^{x_j}([0,\sigma _j^x])\right) \nonumber \\&\quad \times \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _n\right) } \omega \left( z,(r_{\sigma _n^x}^n,x], {\mathbb {H}}{\setminus }\eta ^{x_n}([0,\sigma _n^x])\right) , \end{aligned}$$
(47)

using that if \(K^1 \subset K^2 \subset {\mathbb {H}}\) are hulls, then \(\omega (z,E,{\mathbb {H}}{\setminus } K^1) \ge \omega (z,E,{\mathbb {H}}{\setminus } K^2)\) for \(E \subset {\mathbb {R}}{\setminus } \partial K^2\) and \(z \in {\mathbb {H}}{\setminus } K^2\) (by removing obstacles, we allow more Brownian paths, and hence the probability of exiting in that interval increases). Next, note that for \(z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) \), we have that

$$\begin{aligned} \omega \left( z,\partial B\left( x,\frac{\varepsilon _{j+1}}{2} \right) , {\mathbb {H}}{\setminus }\eta ^{x_j}([0,\sigma _j^x])\right) \asymp \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])), \end{aligned}$$

where the implicit constant is independent of both z and j. This holds since the sizes of \(\partial B(x,\frac{\varepsilon _{j+1}}{2})\) and \((r_{\sigma _j^x}^j,x]\), as well as their distances to z, are of the same order. Thus,

$$\begin{aligned}&\sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega \left( z,\partial B\left( x,\frac{\varepsilon _{j+1}}{2} \right) , {\mathbb {H}}{\setminus }\eta ^{x_j}([0,\sigma _j^x])\right) \\&\qquad \le C \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])), \end{aligned}$$

where C is independent of j. Moreover, note that on the event \(\{E^n(x) > 0\}\), the condition \(I_j^{u,\Lambda ,j}>0\) implies that (43) holds, and using the Koebe 1/4 theorem as in (44), we have

$$\begin{aligned} \psi (\alpha _{j+1})^{-1} e^{-\alpha _{j+1}(1+\beta (1+\rho /2))}&\le \omega _\infty ((\varphi _j(r_{\sigma _j^x}^j),0],{\mathbb {H}}{\setminus }\eta _*^{x_j}([0,\sigma _j^x])) \nonumber \\&\le \psi (\alpha _{j+1}) e^{-\alpha _{j+1}(1+\beta (1+\rho /2))} \end{aligned}$$
(48)

for some subexponential function \(\psi = \psi _{u,\Lambda }\). Next, we need that

$$\begin{aligned} \omega _\infty ((\varphi _j(r_{\sigma _j^x}^j),0],{\mathbb {H}}{\setminus }\eta _*^{x_j}([0,\sigma _j^x])) \asymp \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])), \end{aligned}$$
(49)

where the implicit constant is independent of j. By Harnack’s inequality, we can choose an arc \(S \subset \partial B(0,\frac{1}{2})\), depending only on the parameters \(\delta \), M, \(\Lambda \) and u, such that for each \(z \in \varphi _j^{-1}(S) \subset \partial B\left( x,\frac{1}{2} \varepsilon _j\right) \),

$$\begin{aligned} \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])) \asymp \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])). \end{aligned}$$
(50)

The fact that this will hold for every j follows since the same geometric restrictions are imposed on each flow line \(\eta ^{x_j}\). We let \(\varrho \) and \(\tau _*\) denote the first exit time of \({\mathbb {H}}{\setminus } B(0,\frac{1}{2})\) and \({\mathbb {H}}{\setminus } \eta _*^{x_j}([0,\sigma _j^x])\), respectively (recall that \(\eta _*^{x_j} = \varphi _j(\eta ^{x_j})\)). Then,

$$\begin{aligned}&\omega _\infty ((\varphi _j(r_{\sigma _j^x}^j),0],{\mathbb {H}}{\setminus }\eta _*^{x_j}([0,\sigma _j^x])) \\&\quad \asymp \lim _{y \rightarrow \infty } y{\mathbb {P}}^{iy}(B_\varrho \in S, B_{\tau _*} \in (\varphi _j(r_{\sigma _j^x}^j),0]) \\&\quad = \lim _{y \rightarrow \infty } y\int _S {\mathbb {P}}(B_{\tau _*} \in (\varphi _j(r_{\sigma _j^x}^j),0] | B_\varrho = z) d{\mathbb {P}}^{iy}(B_\varrho = z) \\&\quad = \lim _{y \rightarrow \infty } y\int _S \omega (\varphi _j^{-1}(z),(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])) d{\mathbb {P}}^{iy}(B_\varrho = z) \\&\quad \asymp \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])) \lim _{y \rightarrow \infty } y{\mathbb {P}}^{iy}(B_\varrho \in S) \\&\quad \asymp \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])) \end{aligned}$$

where we used the fact that \(\omega _\infty (\partial B(0,\frac{1}{2}),{\mathbb {H}}{\setminus } B(0,\frac{1}{2})) \asymp \omega _\infty (S, {\mathbb {H}}{\setminus } B(0,\frac{1}{2}))\) together with (50) on the first line, the conformal invariance of Brownian motion on the third line, (50) on the fourth line and that \(\omega _\infty (S,{\mathbb {H}}{\setminus } B(0,\frac{1}{2})) \asymp 1\) on the fifth line. Thus (49) holds. Combining this with (48), we have

$$\begin{aligned} \sup _{z \in \partial B\left( x,\frac{1}{2} \varepsilon _j\right) } \omega (z,(r_{\sigma _j^x}^j,x],{\mathbb {H}}{\setminus } \eta ^{x_j}([0,\sigma _j^x])) \le {\hat{C}} \psi (\alpha _{j+1}) e^{-\alpha _{j+1}(1+\beta (1+\rho /2))}, \end{aligned}$$

for some constant \({\hat{C}}\). Thus, combining this with (47)

$$\begin{aligned} \omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) \le {\tilde{C}}^{n+1} \left( \prod _{j=1}^{n+1} \psi (\alpha _j) \right) e^{-{\overline{\alpha }}_{n+1} (1+\beta (1+\rho /2))}, \end{aligned}$$

that is,

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{1}{{\overline{\alpha }}_{n+1}} \log \omega _\infty ((r_{\sigma _n^x}^n,x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) \\&\quad \le \lim _{n \rightarrow \infty } \frac{(n+1) \log {\tilde{C}}}{{\overline{\alpha }}_{n+1}} + \frac{\sum _{j=1}^{n+1} \log \psi (\alpha _j)}{{\overline{\alpha }}_{n+1}} - 1-\beta (1+\rho /2) \\&\quad = -1-\beta (1+\rho /2). \end{aligned}$$
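Here the error terms indeed vanish in the limit: by Remark 3.6, we may take \(\log \psi (s) = au\sqrt{s}\log (2+s) + \Lambda \), and since \(\alpha _j = \alpha _0 + \log j\) while \({\overline{\alpha }}_{n+1} = (n+1)\alpha _0 + \log ((n+1)!)\) grows like \(n \log n\), we get

$$\begin{aligned} \frac{(n+1)\log {\tilde{C}}}{{\overline{\alpha }}_{n+1}} = O\left( \frac{1}{\log n} \right) \quad \text {and} \quad \frac{\sum _{j=1}^{n+1} \log \psi (\alpha _j)}{{\overline{\alpha }}_{n+1}} = O\left( \frac{\sqrt{\log n}\,\log \log n}{\log n} \right) , \end{aligned}$$

both of which tend to 0. The same computation applies to the lower bound below.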

We now turn to the lower bound. We begin by writing \({\tilde{\tau }}_0 = \inf \{ t>0: \eta (t) \in \eta ^{x_1}([0,\sigma _1^x]) \}\). Next, \(A_1^2(x)\) and property (ii) of \(A_0^1(x)\) and \(A_1^1(x)\) imply that \(\eta ((\tau _{\alpha _1},{\tilde{\tau }}_0])\) does not come “too close” to \({\mathbb {R}}\), in the sense that there will be a sector \(\{ z: 0<\arg (z-x) < c\}\) that the curve will not enter, and that the distance from \(\eta ([0,{\tilde{\tau }}_0])\) to x is of the same order as the distance from \(\eta ([0,\tau _{\alpha _1}])\) to x. Thus,

$$\begin{aligned} \omega _\infty ((r_{\tau _{\alpha _1}},x],{\mathbb {H}}{\setminus } \eta ([0,{\tilde{\tau }}_0])) \asymp \omega _\infty ((r_{\tau _{\alpha _1}},x],{\mathbb {H}}{\setminus } \eta ([0,\tau _{\alpha _1}])), \end{aligned}$$

where the implicit constant depends only on \(\delta \), M and \(\Lambda \). In fact, the probability of the Brownian motion hitting \((r_{\tau _{\alpha _1}},x]\) as well as some arc \(S_1 = \{ \frac{1}{2} \varepsilon _1 e^{i\theta }: \theta \in [\theta _1(\delta ), \theta _2(\delta )], \ \theta _1(\delta ) > 0 \}\) with \(S_1 \cap \eta ([0,{\tilde{\tau }}_0]) = \emptyset \) is proportional to the probability of the Brownian motion hitting \((r_{\tau _{\alpha _1}},x]\). That is, if \(\upsilon \) denotes the exit time of \({\mathbb {H}}{\setminus } \eta ([0,{\tilde{\tau }}_0])\) for the Brownian motion, then

$$\begin{aligned} \lim _{y \rightarrow \infty } y{\mathbb {P}}^{iy}( B_\upsilon \in (r_{\tau _{\alpha _1}},x], \ B_{[0,\upsilon ]} \cap S_1 \ne \emptyset ) \asymp \omega _\infty ((r_{\tau _{\alpha _1}},x],{\mathbb {H}}{\setminus } \eta ([0,\tau _{\alpha _1}])), \end{aligned}$$
(51)

where, again, the implicit constants depend only on \(\delta \), M and \(\Lambda \). By the same reasoning, if \(S_k = \{ \frac{1}{2}\varepsilon _k e^{i\theta }: \theta \in [\theta _1(\delta ), \theta _2(\delta )], \ \theta _1(\delta ) > 0 \}\), then for every \(z \in S_{k-1}\),

$$\begin{aligned} {\mathbb {P}}^z(B_{\upsilon _k} \in (r_{\sigma _k^x}^k,x], B_{[0,\upsilon _k]} \cap S_k \ne \emptyset ) \asymp \omega (z,(r_{\sigma _k^x}^k,x],{\mathbb {H}}{\setminus } \eta ^{x_k}([0,\sigma _k^x])), \end{aligned}$$
(52)

where \(\upsilon _k\) denotes the exit time from \({\mathbb {H}}{\setminus } {\widehat{K}}_k\) for the Brownian motion, and the implicit constant does not depend on k. Thus, by the Markov property, (51), and (52) together with (48),

$$\begin{aligned} \omega _\infty ((r_{\sigma _n^x}^{n},x],{\mathbb {H}}{\setminus } {\widehat{K}}_n)&\ge \lim _{y \rightarrow \infty } y{\mathbb {P}}^{iy}( B_\upsilon \in (r_{\tau _{\alpha _1}},x], \ B_{[0,\upsilon ]} \cap S_1 \ne \emptyset ) \\&\quad \times \prod _{j=1}^n \inf _{z \in S_j} {\mathbb {P}}^z(B_{\upsilon _j} \in (r_{\sigma _j^x}^j,x], B_{[0,\upsilon _j]} \cap S_j \ne \emptyset ) \\&\ge {\tilde{c}}^{n+1} \left( \prod _{j=1}^{n+1} \psi (\alpha _j)^{-1} \right) e^{-{\overline{\alpha }}_{n+1} (1+\beta (1+\rho /2))}. \end{aligned}$$

Hence,

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{1}{{\overline{\alpha }}_{n+1}} \log \omega _\infty ((r_{\sigma _n^x}^n,x],{\mathbb {H}}{\setminus } {\widehat{K}}_n) \\&\quad \ge \lim _{n \rightarrow \infty } \frac{(n+1) \log {\tilde{c}}}{{\overline{\alpha }}_{n+1}} - \frac{\sum _{j=1}^{n+1} \log \psi (\alpha _j)}{{\overline{\alpha }}_{n+1}} - 1-\beta (1+\rho /2) \\&\quad = -1-\beta (1+\rho /2). \end{aligned}$$

Therefore, by (46),

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{{\overline{\alpha }}_{n+1}} \log \omega _\infty ((r_{\tau _{{\overline{\alpha }}_{n+1}}},x],{\mathbb {H}}{\setminus } K_{\tau _{{\overline{\alpha }}_{n+1}}}) = -1-\beta (1+\rho /2), \end{aligned}$$

and consequently by (44),

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{{\overline{\alpha }}_n} \log g_{\tau _{{\overline{\alpha }}_n}}'(x) = -\beta (1+\rho /2). \end{aligned}$$

Thus, if \(E^n(x)>0\) for every \(n \in {\mathbb {N}}\), then \(x \in V_\beta ^*\). \(\square \)

Fig. 10

A Brownian motion started from the dash-dotted line will hit the green interval with probability proportional to that of hitting the union of the green and the blue intervals (color figure online)

The two-point estimate which we will obtain here is the following. We consider \(x,y\in [1,2]\) for convenience, but it will be clear that the same proof works for every compact interval.

Proposition 4.2

For each sufficiently small \(\delta \in (0,\frac{1}{2})\) there exists a subpower function \({\tilde{\Psi }}_\delta \) such that for all \(x,y \in [1,2]\) and \(m \in {\mathbb {N}}\) such that \(2\varepsilon _{m+2} \le |x-y| \le \frac{1}{2} \varepsilon _m\), we have

$$\begin{aligned} {\mathbb {E}}[E^n(x)E^n(y)] \le {\tilde{\Psi }}_\delta (\varepsilon _{m+2}^{-1}) \varepsilon _{m+2}^{(\zeta \beta -\mu )(1+\rho /2)} {\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)]. \end{aligned}$$

Remark 4.3

Noting that \(\varepsilon _{m+2} = \varepsilon _m^{1+o_m(1)}\) as \(m \rightarrow \infty \), we can write Proposition 4.2 as

$$\begin{aligned} {\mathbb {E}}[E^n(x)E^n(y)] \le \Psi _\delta (1/|x-y|) |x-y|^{(\zeta \beta -\mu )(1+\rho /2)} {\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)], \end{aligned}$$

where \(\Psi _\delta (s)\) is a subpower function.
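Indeed, with \(\alpha _j = \alpha _0 + \log j\) as above,

$$\begin{aligned} {\overline{\alpha }}_{m+2} = {\overline{\alpha }}_m + 2\alpha _0 + \log ((m+1)(m+2)) = (1+o_m(1)){\overline{\alpha }}_m, \end{aligned}$$

since \({\overline{\alpha }}_m\) grows like \(m \log m\), and hence \(\varepsilon _{m+2} = e^{-{\overline{\alpha }}_{m+2}} = \varepsilon _m^{{\overline{\alpha }}_{m+2}/{\overline{\alpha }}_m} = \varepsilon _m^{1+o_m(1)}\).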

The main ingredients in the proof are divided into three lemmas: the first establishes “approximate independence” between flow line interactions in different regions; the second states that merging of these flow lines happens with high enough probability; and the third is a one-point estimate.

Fig. 11

\(K^1\), shown in purple, and \(K^2\), shown in green, are the closures of the complement of the unbounded connected component of \({\mathbb {H}}{\setminus } (\eta ([0,\tau _{{\overline{\alpha }}_{m+1}}]) \cup \eta ^{x_m}([0,\sigma _m^x]))\) and \({\mathbb {H}}{\setminus } \eta ^{x_{m+1}}([0,\tau ])\) respectively, where \(\tau \) is the first exit time of \(U = B(x,\frac{1}{2} \varepsilon _{m+1})\) for \(\eta ^{x_{m+1}}\) (color figure online)

Lemma 4.4

For every \(x \ge 1\) and \(m,n \in {\mathbb {N}}\) such that \(m \le n\), it holds that

$$\begin{aligned} {\mathbb {E}}[E^m(x) E^{m,n}(x)] \asymp {\mathbb {E}}[E^m(x)]{\mathbb {E}}[E^{m,n}(x)]. \end{aligned}$$

Furthermore, if y is such that \(2\varepsilon _{m+2} \le |x-y| \le \frac{1}{2} \varepsilon _m\), then

$$\begin{aligned} {\mathbb {E}}[E^{m-1}(x) E^{m+1,n}(x) E^{m+1,n}(y)] \asymp {\mathbb {E}}[E^{m-1}(x)]{\mathbb {E}}[E^{m+1,n}(x)]{\mathbb {E}}[E^{m+1,n}(y)]. \end{aligned}$$

The constants in \(\asymp \) may depend on \(\kappa \) and \(\rho \).

Proof

In order to prove the first part, it suffices to prove that

$$\begin{aligned} {\mathbb {E}}[E^{m,n}(x)|E^m(x)] 1_{\{E^m(x)>0\}} \asymp {\mathbb {E}}[E^{m,n}(x)] 1_{\{E^m(x)>0\}}. \end{aligned}$$

Let \(v = \eta (\tau _{{\overline{\alpha }}_{m+1}})\), denote by \(K^1\) the closure of the complement of the unbounded component of

$$\begin{aligned} {\mathbb {H}}{\setminus } (\eta ([0, \tau _{{\overline{\alpha }}_{m+1}}])\cup \eta ^{x_m}([0,\sigma _m^x])) \end{aligned}$$

(see Fig. 11) and let \(v_+ = \max \{K^1 \cap {\mathbb {R}}\}\). As stated above, \(\eta ^{x_{m+1}}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) curve with configuration

$$\begin{aligned} c = ({\mathbb {H}},x_{m+1},(0,x_{m+1}^-),x_{m+1}^+,\infty ). \end{aligned}$$

Thus, recalling Remark 2.7 and Fig. 3 and noting the boundary conditions illustrated in Fig. 11, the conditional law of \(\eta ^{x_{m+1}}\) given \(\eta |_{[0,\tau _{{\overline{\alpha }}_{m+1}}]}\), \(\eta ^{x_m}|_{[0,\sigma _m^x]}\) and \(E^m(x)\), restricted to the event \(\{E^m(x) > 0\}\) (recall that \(E^m(x)\) is determined by \(\eta |_{[0,\tau _{{\overline{\alpha }}_{m+1}}]}\) and \(\eta ^{x_m}|_{[0,\sigma _m^x]}\)) is that of an \({{\,\mathrm{SLE}\,}}_\kappa (2,\rho ,-2-\rho ;\rho )\) process with configuration

$$\begin{aligned} {\tilde{c}} = ({\mathbb {H}}{\setminus } K^1, x_{m+1}, (v,v_+,x_{m+1}^-),x_{m+1}^+,\infty ). \end{aligned}$$

(Note that the boundary data to the left of v on \(K^1\) is the same as on \({\mathbb {R}}_-\), so that the leftmost force point is indeed v.)

Let \(U = B(x,\frac{1}{2} \varepsilon _{m+1})\) and let \(\tau = \sigma ^{x_{m+1}}({\mathbb {H}}{\setminus } U)\) be the exit time of U. Also, let \(K^2\) be the closure of the complement of the unbounded component of \({\mathbb {H}}{\setminus } \eta ^{x_{m+1}}([0,\tau ])\), \({\tilde{v}} = \eta ^{x_{m+1}}(\tau )\), \({\tilde{v}}_- = \min \{K^2 \cap {\mathbb {R}}\}\) and \({\tilde{v}}_+ = \max \{K^2 \cap {\mathbb {R}}\}\). Furthermore, let

$$\begin{aligned} c_\tau&= ({\mathbb {H}}{\setminus } K^2, {\tilde{v}},(0,{\tilde{v}}_-),{\tilde{v}}_+,\infty ), \\ {\tilde{c}}_\tau&= ({\mathbb {H}}{\setminus } (K^1 \cap K^2), {\tilde{v}},(v,v_+,{\tilde{v}}_-),{\tilde{v}}_+,\infty ). \end{aligned}$$

Then, by Lemma 2.10,

$$\begin{aligned} \frac{d\mu _{{\tilde{c}}}^U}{d\mu _c^U} = \frac{Z({\tilde{c}}_\tau )/Z({\tilde{c}})}{Z(c_\tau )/Z(c)} \exp \left( -\xi m({\mathbb {H}},K^1,K^2) \right) . \end{aligned}$$

Since \({{\,\mathrm{dist}\,}}(K^1,x) = \varepsilon _{m+1}\) and \(K^2 \subseteq \overline{B(x,\frac{1}{2} \varepsilon _{m+1})}\), we have that \({{\,\mathrm{dist}\,}}(K^1,K^2) \gtrsim \varepsilon _{m+1}\). Also, \(\text {diam}(U) = \varepsilon _{m+1}\) and hence

$$\begin{aligned} \frac{{{\,\mathrm{dist}\,}}(K^1,K^2)}{\text {diam}(U)} \gtrsim 1. \end{aligned}$$
(53)

Thus, after rescaling, we are in the setting of Lemma 2.11, so there exists a constant \(C \ge 1\) such that

$$\begin{aligned} \frac{1}{C} \le \frac{d\mu _{{\tilde{c}}}^U}{d\mu _c^U} \le C, \end{aligned}$$

that is, the Radon–Nikodym derivative between the law of \(\eta ^{x_{m+1}}\) stopped upon exiting U, given \(\eta ([0,\tau _{{\overline{\alpha }}_{m+1}}])\), \(\eta ^{x_m}([0,\sigma _m^x])\) and \(E^m(x)\), restricted to the event \(\{E^m(x) > 0\}\), and the law of \(\eta ^{x_{m+1}}\) stopped upon exiting U is bounded above and below by constants. Moreover, by (53), the constant C is independent of m. Hence,

$$\begin{aligned} {\mathbb {E}}[E^{m,m+1}(x)|E^m(x)] 1_{\{E^m(x)>0\}} \asymp {\mathbb {E}}[E^{m,m+1}(x)] 1_{\{E^m(x)>0\}}, \end{aligned}$$

and the case \(n = m+1\) is proven. Now, suppose that \(n \ge m+2\). By the very same argument as above, the Radon–Nikodym derivative between the law of \(\eta ^{x_n}\), stopped upon leaving the component of \(B(x,\frac{1}{2}\varepsilon _n) {\setminus } \eta ^{x_{m+1}}([0,\tau ])\) in which it starts growing, and the law where we in addition condition on \(K^1\) and \(E^m(x)\), restricted to \(\{E^m(x) > 0\}\), is bounded above and below by constants. (Note that the distances have the same lower bound and that U is unchanged.) Furthermore, conditional on \(\eta ^{x_{m+1}}([0,\sigma ^{x_{m+1}}(B(x,\varepsilon _{n+1}))])\) and \(\eta ^{x_n}([0,\sigma _n^x])\) merging, the joint laws of \(\eta ^{x_j}|_{[0,\sigma _j^x]}\) for \(j = 1,\dots ,m\) and \(\eta ^{x_k}|_{[0,\sigma _k^x]}\) for \(k=m+2,\dots ,n-1\) are independent, and hence the proof of the first part is done.

The second part is proven in the same way, noting that if \(\tau ^y = \sigma ^{y_{m+2}}({\mathbb {H}}{\setminus } B(y,\frac{1}{2}\varepsilon _{m+2}))\), \({\tilde{\tau }}^y = \inf \{ t>0: \eta ^{y_n}(t) \notin B(y,\frac{1}{2}\varepsilon _n) {\setminus } \eta ^{y_{m+2}}([0,\tau ^y]) \}\) and \(K^3\) is the closure of the complement of the unbounded connected component of \({\mathbb {H}}{\setminus } \left( \eta ^{y_{m+2}}|_{[0,\tau ^y]} \cup \eta ^{y_n}|_{[0,{\tilde{\tau }}^y]} \right) \), then \({{\,\mathrm{dist}\,}}(K^1 \cup K^2,K^3) \gtrsim \varepsilon _{m+2}\) and \(\text {diam}(B(y,\frac{1}{2}\varepsilon _{m+2})) = \varepsilon _{m+2}\). Thus, as above, we can rescale and apply Lemma 2.11, to see that

$$\begin{aligned}&{\mathbb {E}}[E^{m+1,n}(y)|E^{m-1}(x),E^{m+1,n}(x)] 1_{\{E^{m-1}(x)>0, E^{m+1,n}(x)>0\}} \\&\quad \asymp {\mathbb {E}}[E^{m+1,n}(y)] 1_{\{E^{m-1}(x)>0, E^{m+1,n}(x)>0\}}, \end{aligned}$$

that is,

$$\begin{aligned} {\mathbb {E}}[E^{m-1}(x) E^{m+1,n}(x) E^{m+1,n}(y)] \asymp {\mathbb {E}}[E^{m-1}(x)E^{m+1,n}(x)]{\mathbb {E}}[E^{m+1,n}(y)]. \end{aligned}$$

Repeating the above argument, noting that \({{\,\mathrm{dist}\,}}(K^1,K^2)/\text {diam}(B(x,\frac{1}{2} \varepsilon _{m+2})) \gtrsim 1\), we have that

$$\begin{aligned} {\mathbb {E}}[E^{m-1}(x)E^{m+1,n}(x)] \asymp {\mathbb {E}}[E^{m-1}(x)] {\mathbb {E}}[E^{m+1,n}(x)], \end{aligned}$$

and the proof is done. \(\square \)

Lemma 4.5

For each \(x \ge 1\) and \(m,n \in {\mathbb {N}}\) such that \(m \le n\), it holds that

$$\begin{aligned} {\mathbb {E}}[E^n(x)] \asymp {\mathbb {E}}[E^m(x)] {\mathbb {E}}[E^{m,n}(x)], \end{aligned}$$

where the constants can depend on \(\kappa \), \(\rho \), \(\delta \) and M.

Proof

We begin by noting that by the first part of Lemma 4.4,

$$\begin{aligned} {\mathbb {E}}[E^n(x)] \le {\mathbb {E}}[E^m(x) E^{m,n}(x)] \lesssim {\mathbb {E}}[E^m(x)] {\mathbb {E}}[E^{m,n}(x)]. \end{aligned}$$

Thus, it remains to show that \({\mathbb {E}}[E^n(x)] \gtrsim {\mathbb {E}}[E^m(x)] {\mathbb {E}}[E^{m,n}(x)]\). In order to prove this, it suffices to prove that

$$\begin{aligned} {\mathbb {E}}[E_{m+1}^2(x) | E^m(x),E^{m,n}(x)] 1_{\{E^m(x)>0,E^{m,n}(x)>0\}} \asymp 1_{\{E^m(x)>0,E^{m,n}(x)>0\}}, \end{aligned}$$

that is, that with positive probability (which is independent of m and n), \(\eta ^{x_m}|_{[\sigma _m^x,\infty )}\) will merge with \(\eta ^{x_{m+1}}|_{[0,\sigma _{m+1}^x)}\) before exiting \(B(x,\frac{3}{2} \varepsilon _{m+1}){\setminus } B(x,\frac{1}{M}\varepsilon _{m+1})\) and not come too close to \({\mathbb {R}}\) (in the sense of property (ii) of \(A_{m+1}^2(x)\)). For the remainder of the proof, assume that \(E^m(x),E^{m,n}(x)>0\) (if not, we are done). Let \(K^1\) and \(K^2\) be the closures of the complements of the unbounded connected components of \({\mathbb {H}}{\setminus } (\eta ([0,\tau _{{\overline{\alpha }}_{m+1}}]) \cup \eta ^{x_m}([0,\sigma _m^x]))\) and \({\mathbb {H}}{\setminus }( \eta ^{x_{m+1}}([0,{\hat{\tau }}]) \cup \eta ^{x_n}([0,\sigma _n^x]))\), respectively, where \({\hat{\tau }} = \sigma ^{x_{m+1}}(B(x,\varepsilon _{n+1}))\), and let \(K = K^1 \cup K^2\). Let \(K^{1,L}\) and \(K^{1,R}\) denote the boundaries of \(K^1\) to the left and right of \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) and let \(K_M^{2,L} = \eta ^{x_{m+1}}([0,{\hat{\tau }}]) \cap \partial K^2\) be the part of \(K^2\) that \(\eta ^{x_m}\) should hit and merge with.

Fig. 12

\(K^1\), shown in purple, and \(K^2\), shown in green, are the closures of the complements of the unbounded connected components of \({\mathbb {H}}{\setminus } (\eta ([0,\tau _{{\overline{\alpha }}_{m+1}}]) \cup \eta ^{x_m}([0,\sigma _m^x]))\) and \({\mathbb {H}}{\setminus } (\eta ^{x_{m+1}}([0,{\hat{\tau }}])\cup \eta ^{x_n}([0,\sigma _n^x]))\), respectively. The boundary data on the left is as in Fig. 11. On the right, the images under \(g_K\) of the boundary sets on the left are shown, along with their boundary data (color figure online)

Before going on with the proof, we discuss the strategy. Note that the law of the flow line from \(\eta (\tau _{{\overline{\alpha }}_{m+1}}) = \eta ^{x_m}(\sigma _m^x)\), given \(K^1\) and \(K^2\), is that of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-2-\rho ,2,\rho )\) process with configuration \(({\mathbb {H}}{\setminus } K,\eta (\tau _{{\overline{\alpha }}_{m+1}}),(r_{\sigma _m^x}^m,l_{{\hat{\tau }}}^{m+1},\eta ^{x_{m+1}}({\hat{\tau }}),r_{\sigma _n^x}^n),\infty )\), where \(l_t^{m+1} = \min \{\eta ^{x_{m+1}}([0,t]) \cap {\mathbb {R}}\}\). The idea of the proof is to map \({\mathbb {H}}{\setminus } K\) to \({\mathbb {H}}\), so that the image of the flow line is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-2-\rho ,2,\rho )\) process in \({\mathbb {H}}\) (see Fig. 12), and use Lemma 2.9 to see that the merging, as well as the geometric restriction of property (ii) of \(A_{m+1}^2(x)\), occurs with positive probability, independent of m. (Note that if the conditions of Lemma 2.9 are satisfied, then we can just choose some suitable deterministic curve, such that the properties are satisfied when the flow line follows that curve.) Let \(g_K\) denote the mapping-out function of \(K = K^1 \cup K^2\) (recall Sect. 2.1). The image of the flow line from \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) under \(g_K\) is then an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-2-\rho ,2,\rho )\) with configuration \(({\mathbb {H}}, g_K(\eta (\tau _{{\overline{\alpha }}_{m+1}})),(g_K(r_{\sigma _m^x}^m),g_K(l_{{\hat{\tau }}}^{m+1}),g_K(\eta ^{x_{m+1}}({\hat{\tau }})),g_K(r_{\sigma _n^x}^n)),\infty )\). However, the images of the boundary sets under the mapping-out function decrease in length as m increases, and thus we need to rescale to be able to use Lemma 2.9. Hence, we want to show that we can find some scaling factor k, which will depend on m, such that the images of the boundary sets under \({\hat{g}}_K :=k g_K\) are of appropriate sizes. More precisely, recall that we want the flow line from \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) to hit \(\eta ^{x_{m+1}}([0,\sigma _{m+1,M}^x])\), that is, we want the flow line from \({\hat{g}}_K(\eta (\tau _{{\overline{\alpha }}_{m+1}}))\) to hit \({\hat{g}}_K(K_M^{2,L}) =({\hat{g}}_K(l_{{\hat{\tau }}}^{m+1}),{\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x)))\). Then, to be able to use Lemma 2.9, we need to check that we can fix some \({\tilde{\varepsilon }}>0\), which may depend on \(\delta \) and M, but is independent of m, such that

$$\begin{aligned} {\hat{g}}_K(r_{\sigma _m^x}^m)-{\hat{g}}_K(\eta (\tau _{{\overline{\alpha }}_{m+1}}))&\ge {\tilde{\varepsilon }} \\ {\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x))-{\hat{g}}_K(l_{{\hat{\tau }}}^{m+1})&\ge {\tilde{\varepsilon }}, \text { and} \\ {\hat{g}}_K(l_{{\hat{\tau }}}^{m+1}) -{\hat{g}}_K(\eta (\tau _{{\overline{\alpha }}_{m+1}}))&\le {\tilde{\varepsilon }}^{-1}. \end{aligned}$$

Note that since there is no force point to the left of \({\hat{g}}_K(\eta (\tau _{{\overline{\alpha }}_{m+1}}))\), we do not need to care about the force point \(x_{2,L}\) of the lemma. Furthermore, while the lemma concerns hitting the interval between two force points, it is still applicable to the interval \(({\hat{g}}_K(l_{{\hat{\tau }}}^{m+1}),{\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x)))\), as we can just consider \({\hat{g}}_K(\eta ^{x_{m+1}}(\sigma _{m+1,M}^x))\) as a force point with weight 0.

Now, we want to estimate the lengths of the intervals which are the images of the boundary sets under \(g_K\). Recalling (8), it is natural to consider the harmonic measure from infinity of the boundary sets. Rephrasing the above in terms of harmonic measure from infinity, we need to check that there exists some \({\hat{\varepsilon }}>0\), independent of m, such that

$$\begin{aligned} k \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K)&\ge {\hat{\varepsilon }}, \\ k \omega _\infty (K_M^{2,L},{\mathbb {H}}{\setminus } K)&\ge {\hat{\varepsilon }}, \text { and } \\ k \omega _\infty (K^{1,R} \cup (r_{\sigma _m^x}^m,l_{{\hat{\tau }}}^{m+1}), {\mathbb {H}}{\setminus } K)&\le {\hat{\varepsilon }}^{-1}, \end{aligned}$$

if k is chosen properly. We shall show that the three harmonic measures are actually proportional, with proportionality constants independent of m, and thus, letting \(k = \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K)^{-1}\), we are in the setting of Lemma 2.9 and the result follows.

We now turn to proving that the above harmonic measures are proportional. Arguing as in the proof of Lemma 4.1 (i.e., a Brownian motion must first hit the arc of \(\partial B(x,\varepsilon _{m+1})\) with endpoints \(\eta (\tau _{{\overline{\alpha }}_{m+1}})\) and \(x+\varepsilon _{m+1}\); from there, the probabilities of hitting the different sets are proportional), we see that

$$\begin{aligned} \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K) \asymp \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K^1) \gtrsim \omega _\infty ((r_{\sigma _m^x}^m,x],{\mathbb {H}}{\setminus } K^1), \end{aligned}$$
(54)

and that

$$\begin{aligned} \omega _\infty (K_M^{2,L},{\mathbb {H}}{\setminus } K) \asymp \omega _\infty (\eta ^{x_{m+1}}([0,\sigma _{m+1}^x]),{\mathbb {H}}{\setminus } K) \asymp \omega _\infty ((r_{\sigma _m^x}^m,x],{\mathbb {H}}{\setminus } K^1), \end{aligned}$$
(55)

and the implicit constants are independent of m. Condition (ii) of \(A_m^1(x)\) states that

$$\begin{aligned} \omega _\infty (\eta ^{x_m,R}([0,\sigma _m^x]) \cup (r_{\sigma _m^x}^m,x], {\mathbb {H}}{\setminus } \eta ^{x_m}([0,\sigma _m^x])) \asymp \omega _\infty ((r_{\sigma _m^x}^m,x], {\mathbb {H}}{\setminus } \eta ^{x_m}([0,\sigma _m^x])), \end{aligned}$$

and consequently

$$\begin{aligned} \omega _\infty (K^{1,R} \cup (r_{\sigma _m^x}^m,x], {\mathbb {H}}{\setminus } K^1) \asymp \omega _\infty ((r_{\sigma _m^x}^m,x],{\mathbb {H}}{\setminus } K^1). \end{aligned}$$
(56)

Thus,

$$\begin{aligned} \omega _\infty ((r_{\sigma _m^x}^m,x],{\mathbb {H}}{\setminus } K^1) \gtrsim \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K^1), \end{aligned}$$

and hence, by (54),

$$\begin{aligned} \omega _\infty (K_M^{2,L},{\mathbb {H}}{\setminus } K) \asymp \omega _\infty ((r_{\sigma _m^x}^m,x],{\mathbb {H}}{\setminus } K^1) \asymp \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K). \end{aligned}$$
(57)

Next, we note that

$$\begin{aligned} \omega _\infty (K^{1,R} \cup (r_{\sigma _m^x}^m,x], {\mathbb {H}}{\setminus } K^1)&\ge \omega _\infty (K^{1,R} \cup (r_{\sigma _m^x}^m,l_{{\hat{\tau }}}^{m+1}), {\mathbb {H}}{\setminus } K) \nonumber \\&\ge \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K) \end{aligned}$$
(58)

where the first inequality holds since the left-hand side is the harmonic measure from infinity of a larger set, with fewer obstacles for the Brownian paths. By (56), (57) and (58),

$$\begin{aligned} \omega _\infty (K_M^{2,L},{\mathbb {H}}{\setminus } K) \asymp \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K) \asymp \omega _\infty (K^{1,R} \cup (r_{\sigma _m^x}^m,l_{{\hat{\tau }}}^{m+1}), {\mathbb {H}}{\setminus } K). \end{aligned}$$

Hence, rescaling by letting \(k = \omega _\infty (K^{1,R},{\mathbb {H}}{\setminus } K)^{-1}\), we can use Lemma 2.9, and the proof is done. \(\square \)

Lemma 4.6

For each sufficiently small \(\delta \in (0,\frac{1}{2})\), there exist a constant \({\tilde{c}}(\delta )>0\) and a subexponential function \(\psi \) such that for each \(x \ge 1\),

$$\begin{aligned} {\mathbb {E}}[E^m(x)] \ge {\tilde{c}}(\delta )^{m+1} \left( \prod _{j=1}^{m+1}\psi (\alpha _j)^{-|\zeta |}\right) e^{{\overline{\alpha }}_{m+1}(\zeta \beta -\mu )(1+\rho /2)}. \end{aligned}$$

Proof

By Lemma 4.4,

$$\begin{aligned} {\mathbb {E}}[E_k^1(x)|E^{k-1}(x)] 1_{\{E^{k-1}(x)>0\}} \asymp {\mathbb {E}}[E_k^1(x)] 1_{\{E^{k-1}(x)>0\}}, \end{aligned}$$

so we need to show that there exist a constant \({\tilde{c}}(\delta )\) and a subexponential function \(\psi \) such that

$$\begin{aligned}&{\mathbb {E}}[E_k^1(x)] \ge {\tilde{c}}(\delta ) \psi (\alpha _{k+1})^{-|\zeta |} e^{\alpha _{k+1}(\zeta \beta -\mu )(1+\rho /2)}, \end{aligned}$$
(59)
$$\begin{aligned}&{\mathbb {E}}[E_k^2(x)|E^{k-1}(x),E_k^1(x)] 1_{\{E^{k-1}(x)>0,E_k^1(x)>0\}} \asymp 1_{\{E^{k-1}(x)>0,E_k^1(x)>0\}}. \end{aligned}$$
(60)

However, (60) follows from the very same argument as Lemma 4.5, so we will now concern ourselves with (59). Since \(1_{A_k^1}\) is \({\mathscr {F}}_{\sigma _k^x}\)-measurable, we have

$$\begin{aligned} {\mathbb {E}}[E_k^1(x)]&= {\mathbb {E}}\left[ 1_{A_k^1} I_k^{u,\Lambda ,k}\right] = {\mathbb {E}}\left[ 1_{A_k^1} {\mathbb {E}}\left[ {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k} \bigg | {\mathscr {F}}_{\sigma _k^x} \right] \right] \\&= {\mathbb {P}}\left( A_k^1 \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}\right) . \end{aligned}$$

As stated above, \(\eta ^{x_k}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (2+\rho ,-2-\rho ;\rho )\) curve with configuration \(({\mathbb {H}},x_k,(0,x_k^-),x_k^+,\infty )\) and, by Lemma 2.11, the Radon–Nikodym derivative between the law of \(\eta ^{x_k}\) and that of an \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho )\) curve with configuration \(({\mathbb {H}},x_k,x_k^-,x_k^+,\infty )\), both stopped upon exiting \(B(x,\frac{1}{2}\varepsilon _k)\), is bounded above and below by constants, and hence we can (and will) instead consider the latter. Also, we can translate and rescale the process so that we consider an \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho )\) curve, \({\hat{\eta }}\), started from 0, with the point that we want the curve to get close to being 1. Then, since \({{\,\mathrm{dist}\,}}(x_k,x) = \frac{1}{4} \varepsilon _k\), the event \(\{ \sigma _k^x < \sigma ^{x_k}({\mathbb {H}}{\setminus } B(x,\frac{1}{2}\varepsilon _k)) \}\) turns into the event \(\{ {\hat{\eta }} \text { hits } B(1,e^{-\alpha _{k+1}}) \text { before leaving } B(1,2) \}\) and the event \(\{ Q_{\sigma _{k,M}^x}^k, Q_{\sigma _k^x}^k \in [\delta ,1-\delta ] \}\) remains roughly the same. More precisely, let \({\hat{Q}}_t\) denote the process defined by (24) (but with \({\hat{\eta }}\) in place of \(\eta \)), \(\sigma _M = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}([0,t]),1) < 4/M \}\) and \(\sigma _k = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}([0,t]),1) < e^{-\alpha _{k+1}} \}\); then (by translation and scaling invariance of \(Q_t^k\)), \(\{ {\hat{Q}}_{\sigma _k}, {\hat{Q}}_{\sigma _M} \in [\delta ,1-\delta ] \} = \{ Q_{\sigma _{k,M}^x}^k, Q_{\sigma _k^x}^k \in [\delta ,1-\delta ] \}\). We denote by \((g_t)\) the Loewner chain corresponding to \({\hat{\eta }}\) and weight the probability measure \({\mathbb {P}}\) with the local martingale (recall (11))

$$\begin{aligned} M_t^\zeta (1) = g_t'(1)^\zeta {\hat{Q}}_t^\mu \delta _t^{-\mu (1+\rho /2)}(g_t(1)-V_t^L)^{\mu (1+\rho /2)}, \end{aligned}$$

and denote the resulting measure by \({\mathbb {P}}^*\). Note that it is under \({\mathbb {P}}^*\) that we can choose u such that \({\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}\) has probability arbitrarily close to 1 (in the case with no force point to the left). Moreover, Lemma 3.5 implies that on \(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}\),

$$\begin{aligned} \psi (\alpha _{k+1})^{-|\zeta |} e^{-\alpha _{k+1} \zeta \beta (1+\rho /2)} \le g_{\sigma _k}'(x)^\zeta \le \psi (\alpha _{k+1})^{|\zeta |} e^{-\alpha _{k+1} \zeta \beta (1+\rho /2)}, \end{aligned}$$

for some subexponential function \(\psi \). Furthermore, \({\hat{Q}}_{\sigma _k} \asymp 1\) and, by Lemma 2.3, \(\delta _{\sigma _k} \asymp e^{-\alpha _{k+1}}\), that is, \(\delta _{\sigma _k}^{-\mu (1+\rho /2)} \asymp e^{\alpha _{k+1}\mu (1+\rho /2)}\). Moreover, we see that \(g_{\sigma _k}(1) - V_{\sigma _k}^L \asymp 1\), since it is the harmonic measure from infinity of the left side of \({\hat{\eta }}\), so that it is upper bounded by \(\omega _\infty (B(1,2),{\mathbb {H}})\), which is finite, and lower bounded by \(\omega _\infty ([0,1/2],{\mathbb {H}})=1/2\pi \). Since \({\mathbb {P}}^*(A) = {\mathbb {E}}[M_{\sigma _k}^\zeta (1)1_A]\) and

$$\begin{aligned} \psi (\alpha _{k+1})^{-|\zeta |} e^{-\alpha _{k+1}(\zeta \beta -\mu )(1+\rho /2)} \lesssim M_{\sigma _k}^\zeta (1) \lesssim \psi (\alpha _{k+1})^{|\zeta |} e^{-\alpha _{k+1}(\zeta \beta -\mu )(1+\rho /2)}, \end{aligned}$$

we have that

$$\begin{aligned}&\psi (\alpha _{k+1})^{-|\zeta |} e^{\alpha _{k+1}(\zeta \beta -\mu )(1+\rho /2)} \\&\quad {\mathbb {P}}^*(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}) \\&\quad \lesssim {\mathbb {P}}(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}) \\&\quad \lesssim \psi (\alpha _{k+1})^{|\zeta |} e^{\alpha _{k+1}(\zeta \beta -\mu )(1+\rho /2)} {\mathbb {P}}^*(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}). \end{aligned}$$
Fig. 13

We may fix a curve \(\gamma \) (black) such that if the curve \({\hat{\eta }}\) (purple) does not leave the \({\hat{\varepsilon }}\)-neighborhood of \(\gamma \) before coming close to the tip, then after mapping back to \({\mathbb {H}}\), the points are as indicated in the picture (color figure online)

Now we need only show that \({\mathbb {P}}^*(A_k^1(x) \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k}) \asymp 1\). Note that under \({\mathbb {P}}^*\), \({\hat{\eta }}\) is an \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho ,-\mu \kappa )\) curve with configuration \(({\mathbb {H}},0,0^-,(0^+,1),\infty )\). We shall begin by reducing this to the case of an \({{\,\mathrm{SLE}\,}}\) process with no force points to the left of 0. The following procedure is illustrated in Fig. 13. Let \(\gamma :[0,1] \rightarrow {\overline{{\mathbb {H}}}}\) be a deterministic curve starting at 0 and remaining in \({\mathbb {H}}\) after that, and let \({\hat{\varepsilon }} > 0\) be such that if \({\hat{\eta }}\) comes within distance \({\hat{\varepsilon }}\) of the tip \(\gamma (1)\) before exiting the \({\hat{\varepsilon }}\)-neighborhood of \(\gamma \), then

$$\begin{aligned} {{\,\mathrm{dist}\,}}(1,F(\partial B(1,2) \cap {\mathbb {H}})) \ge 2 \text { and } F(\min \{{\hat{K}}_{{\hat{\sigma }}_1} \cap {\mathbb {R}}\}) < -2, \end{aligned}$$

where

$$\begin{aligned} F(z) = \frac{{\hat{f}}_{{\hat{\sigma }}_1}(z)}{{\hat{f}}_{{\hat{\sigma }}_1}(1)}, \end{aligned}$$

\({\hat{\sigma }}_1 = \inf \{ t \ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}([0,t]),\gamma (1)) < {\hat{\varepsilon }}\}\), and \(({\hat{f}}_t)\) and \(({\hat{K}}_t)\) are the centered Loewner chain and hulls respectively. Then, the curve \({\overline{\eta }}(t) = F({\hat{\eta }}({\hat{\sigma }}_1+t))\) has the law of a time-changed \({{\,\mathrm{SLE}\,}}_\kappa (-2-\rho ;\rho ,-\mu \kappa )\) curve with configuration \(({\mathbb {H}},0,x_L,(x_R,1),\infty )\), where \(x_L < -2\) and \(x_R \in [0^+,1)\). Note that we may choose \(\gamma \) and \({\hat{\varepsilon }}\) so that \(1-x_R > {\hat{\delta }}\) for some \({\hat{\delta }} > 0\) (to get a bound on the constant \(C^*\), chosen as remarked after Lemma 2.4). By Lemma 2.8, the above happens with positive probability, say \(p_0\). By Lemma 2.11, the Radon–Nikodym derivative between the law of \({\overline{\eta }}\) and the law of a correspondingly time-changed \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) curve with force points \((x_R,1)\), is bounded above and below by some constants. Thus we may consider such an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) process. Note that, if \(({\hat{g}}_t)\) and \(({\overline{g}}_t)\) are the Loewner chains of \({\hat{\eta }}\) and \({\overline{\eta }}\), respectively, then

$$\begin{aligned} {\hat{g}}_{{\hat{\sigma }}_1+t}'(1) = \frac{1}{{\hat{f}}_{{\hat{\sigma }}_1}(1)} {\hat{g}}_{{\hat{\sigma }}_1}'(1) {\overline{g}}_t'(1). \end{aligned}$$

Thus, we can choose \(\Lambda \) to be sufficiently large, so that

$$\begin{aligned} {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u,\Lambda ,k} \supset \{ {\hat{\sigma }}_1 \le {\hat{\sigma }}_2 \} \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u}({\overline{\eta }}) \end{aligned}$$

where \({\hat{\sigma }}_2 = \inf \{t\ge 0: {{\,\mathrm{dist}\,}}({\hat{\eta }}(t),\gamma ([0,1])) > {\hat{\varepsilon }}\}\) and \({\tilde{I}}_{t}^{u}({\overline{\eta }})\) is the event of Proposition 3.4 for \({\overline{\eta }}\). We have lower bounded the probability of \(\{ {\hat{\sigma }}_1 \le {\hat{\sigma }}_2 \}\); next, we prove that \(\{ {\overline{\eta }} \text { hits } 1 \text { before } \partial B(1,2) \} \cap {\tilde{I}}_{\frac{\alpha _{k+1}}{a}+C_k^*(x)}^{u}({\overline{\eta }}) \cap \{ {\hat{Q}}_{\sigma _k}, {\hat{Q}}_{\sigma _M} \in [\delta ,1-\delta ] \}\) has positive probability, which completes the proof of the lemma. (Note that we do not need \({\overline{\eta }}\) to hit 1 before exiting B(1, 2), but rather that it hits some small set separating 1 from \(\infty \) before exiting B(1, 2), which is of course weaker.)

We begin by lower bounding \({\mathbb {P}}^*({\overline{\eta }} \text { hits } 1 \text { before } \partial B(1,2))\). We now make a conformal coordinate change with the Möbius transformation

$$\begin{aligned} \varphi (z) = \frac{z}{1-z}. \end{aligned}$$

The image of an \({{\,\mathrm{SLE}\,}}_\kappa (\rho ,-\mu \kappa )\) curve in \({\mathbb {H}}\) from 0 to \(\infty \) is an \({{\,\mathrm{SLE}\,}}_\kappa (\rho _L;\rho )\) curve in \({\mathbb {H}}\) from 0 to \(\infty \), with force points \(\varphi (\infty ) = -1\) and \(\varphi (x_R) = \frac{x_R}{1-x_R}\), where \(\rho _L = \kappa -6-(\rho -\mu \kappa ) = \kappa (1+\mu )-6-\rho \) (see [23]). Furthermore, 1 is mapped to \(\infty \) and \(\varphi (\partial B(1,2)) = \partial B(-1,\frac{1}{2})\), and thus the event of hitting 1 before exiting B(1, 2) turns into the event that \(\varphi ({\overline{\eta }})\) does not hit \(B(-1,\frac{1}{2})\). We have that \(\kappa \le 4\) and \(\rho _L > \frac{\kappa }{2}-2\), so the probability of \(\varphi ({\overline{\eta }})\) avoiding \(B(-1,\frac{1}{2})\) is positive.
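For the reader's convenience, the elementary mapping properties of \(\varphi \) used here can be checked directly: \(\varphi (0) = 0\), \(\varphi (1) = \infty \), \(\varphi (\infty ) = -1\), and the circle \(\partial B(1,2)\) meets \({\mathbb {R}}\) at \(-1\) and 3, with

$$\begin{aligned} \varphi (-1) = -\frac{1}{2}, \qquad \varphi (3) = -\frac{3}{2}. \end{aligned}$$

Since Möbius transformations map circles to circles and \(\varphi \) preserves \({\mathbb {R}}\), the image \(\varphi (\partial B(1,2))\) is the circle having the segment \([-\frac{3}{2},-\frac{1}{2}]\) as a diameter, that is, \(\partial B(-1,\frac{1}{2})\).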

Next, choosing \(\delta \) to be sufficiently small and M to be large enough, we can (by Corollary A.2) guarantee that the \({\mathbb {P}}^*\)-probability of the event \(\{ {\hat{Q}}_{\sigma _k}, {\hat{Q}}_{\sigma _M} \in [\delta ,1-\delta ] \}\) is as close to 1 as we want. This holds, since as \({\hat{\eta }}\) comes closer to 1, a time change \({\hat{Q}}_{{\tilde{t}}(s)}\) of \({\hat{Q}}\) (as in Sect. 2.2.2) converges to its invariant distribution \({\tilde{X}}_s\), so by choosing M large, we can make sure that \({\hat{Q}}_{{\tilde{t}}(s)}\) will be as close to \({\tilde{X}}_s\) as necessary, in the sense of (76). Moreover, letting \(\delta \) be sufficiently small, we have that, for each s, \({\mathbb {P}}^*({\tilde{X}}_s \in [\delta ,1-\delta ])\) is sufficiently close to 1 and hence the same follows for \({\hat{Q}}_{\sigma _k}\) and \({\hat{Q}}_{\sigma _M}\).

By then choosing \(u>0\) sufficiently large, we have that the \({\mathbb {P}}^*\)-probability of \({\tilde{I}}_{\frac{k}{a}+C_k^*(x)}^{u,k}\) is arbitrarily close to 1, and hence that \({\mathbb {P}}^*(A_k^1 \cap {\tilde{I}}_{\frac{k}{a}+C_k^*(x)}^{u,k}) \gtrsim 1\). Thus (59) is proven, which gives the result. \(\square \)

Remark 4.7

In the above proof, we actually proved an upper bound as well:

$$\begin{aligned} {\mathbb {E}}[E^m(x)] \le {\tilde{C}}(\delta )^{m+1} \left( \prod _{j=1}^{m+1} \psi (\alpha _j)^{|\zeta |} \right) e^{{\overline{\alpha }}_{m+1}(\zeta \beta -\mu )(1+\rho /2)}. \end{aligned}$$
(61)

Proof of Proposition 4.2

It holds that

$$\begin{aligned}&{\mathbb {E}}[E^n(x) E^n(y)] \\&\quad \le {\mathbb {E}}[E^{m-1}(x) E^{m+1,n}(x) E^{m+1,n}(y)] \\&\quad \lesssim {\mathbb {E}}[E^{m-1}(x)]{\mathbb {E}}[E^{m+1,n}(x)]{\mathbb {E}}[E^{m+1,n}(y)] \\&\quad = \frac{{\mathbb {E}}[E^{m-1}(x)]}{{\mathbb {E}}[E^{m+1}(x)]{\mathbb {E}}[E^{m+1}(y)]} \\&\qquad \times {\mathbb {E}}[E^{m+1}(x)]{\mathbb {E}}[E^{m+1,n}(x)] {\mathbb {E}}[E^{m+1}(y)]{\mathbb {E}}[E^{m+1,n}(y)] \\&\quad \lesssim \frac{{\mathbb {E}}[E^{m-1}(x)]}{{\mathbb {E}}[E^{m+1}(x)]{\mathbb {E}}[E^{m+1}(y)]} {\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)] \\&\quad \le \frac{{\tilde{C}}(\delta )^m \left( \prod _{j=1}^m \psi (\alpha _j)^{|\zeta |}\right) e^{{\overline{\alpha }}_m (\zeta \beta -\mu )(1+\rho /2)}}{{\tilde{c}}(\delta )^{2m+4} \left( \prod _{j=1}^{m+2} \psi (\alpha _j)^{-|\zeta |}\right) ^2 e^{2{\overline{\alpha }}_{m+2} (\zeta \beta -\mu )(1+\rho /2)}} {\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)] \\&\quad \le c(\delta )^{m+2} \left( \prod _{j=1}^{m+2} \psi (\alpha _j)^{3|\zeta |}\right) e^{-({\overline{\alpha }}_{m+2} + \alpha _{m+1} + \alpha _{m+2})(\zeta \beta -\mu )(1+\rho /2)} {\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)] \\&\quad = c(\delta )^{m+2} \left( \prod _{j=1}^{m+2} \psi (\alpha _j)^{3|\zeta |}\right) \varepsilon _{m+2}^{(1+o_m(1))(\zeta \beta -\mu )(1+\rho /2)} {\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)] \end{aligned}$$

where we used Lemma 4.4 in the second step, Lemma 4.5 in the fourth, Lemma 4.6 together with (61) in the fifth, and the identity \({\overline{\alpha }}_m - 2{\overline{\alpha }}_{m+2} = -({\overline{\alpha }}_{m+2} + \alpha _{m+1} + \alpha _{m+2})\) in the sixth. Thus, we can find a subpower function \({\tilde{\Psi }}_\delta \) as in the statement of the proposition, and we are done. \(\square \)

5 Dimension spectrum

In this section, we compute the almost sure Hausdorff dimension of the random sets (4). We will, however, compute the almost sure dimension of the sets

$$\begin{aligned} V_\beta ^* = \left\{ x>0: \lim _{n \rightarrow \infty } \frac{1}{n} \log g_{\tau _n}'(x) = -\beta (1+\rho /2), \ \tau _n = \tau _n(x) < \infty \ \forall n>0 \right\} \end{aligned}$$
(62)

and note that this is sufficient, due to the monotonicity of \(t \mapsto g_t'\). The theorem that we prove in this section is the following.

Theorem 5.1

Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4), \frac{\kappa }{2}-2)\), \(x_R = 0^+\) and write \(a = 2/\kappa \). Define

$$\begin{aligned} d^*(\beta ) :=1 - \frac{a\beta }{2}\left( \frac{1-a\rho }{2a} - \frac{1+2\beta }{\beta } \right) ^2\left( 1+\frac{\rho }{2}\right) \end{aligned}$$

and let \(\beta _-^* = \inf \{\beta :d^*(\beta )>0\}\) and \(\beta _+^* = \sup \{\beta :d^*(\beta )>0\}\). Then, almost surely, if \(\kappa \in (0,4]\)

$$\begin{aligned} \text {dim}_H V_\beta ^* = d^*(\beta ) \text { for } \beta \in [\beta _-^*,\beta _+^*]. \end{aligned}$$

With this theorem at hand, noting that \(V_\beta = V_{\beta /(1+\rho /2)}^*\), so that \(\text {dim}_H V_\beta = \text {dim}_H V_{\beta /(1+\rho /2)}^* = d^*(\beta /(1+\rho /2))\), we immediately get Theorem 1.1.

We start by showing that \(\text {dim}_H V_\beta ^*\) is an almost surely constant quantity. The proof of this is contained in [1], but we repeat it here for completeness.

Lemma 5.2

Let \(x_R = 0^+\). For each \(\beta \), \(\text {dim}_H V_\beta ^*\) is almost surely constant.

Proof

Let \(x>0\) and write \(S_x = V_\beta ^* \cap (0,x)\). Since \(x_R = 0^+\), the \({{\,\mathrm{SLE}\,}}_\kappa (\rho )\) process is scaling invariant and hence the law of \(S_x\) is identical to the law of \(xS_1\). However, since \(\text {dim}_H xS_1 = \text {dim}_H S_1\) (due to the invariance of the Hausdorff dimension under linear scaling), we see that the law of \(\text {dim}_H S_x\) does not depend on x. The sets \(S_x\) are decreasing as \(x \rightarrow 0^+\), and hence \(\text {dim}_H S_x\) has an almost sure limit (as \(x \rightarrow 0^+\)) which is measurable with respect to \({\mathscr {F}}_{0^+}\), since \(S_x\) is measurable with respect to \({\mathscr {F}}_{T_x}\). By Blumenthal’s 0-1 law, the limit must be constant and the same for every \(x>0\). \(\square \)

In the following two sections, we prove the upper and lower bounds on the dimension, Theorem 5.3 and Theorem 5.6, and together they imply Theorem 1.1.

5.1 Upper bound

We define the random sets

$$\begin{aligned} {\overline{V}}_\beta&= \left\{ x \in {\mathbb {R}}_+: g_{\tau _n}'(x) \ge e^{-\beta (1+\rho /2) n}, \ \tau _n(x)< \infty \ \text {i.o. in} \ n \right\} , \\ {\underline{V}}_\beta&= \left\{ x \in {\mathbb {R}}_+: g_{\tau _n}'(x) \le e^{-\beta (1+\rho /2) n}, \ \tau _n(x) < \infty \ \text {i.o. in} \ n \right\} , \end{aligned}$$

and note that for \(\beta _1< \beta < \beta _2\), we have \(V_\beta ^* \subset {\underline{V}}_{\beta _1}\) and \(V_\beta ^* \subset {\overline{V}}_{\beta _2}\). Thus, in this subsection, we will find suitable covers of the above sets and bound their Hausdorff measure using the Minkowski content of the covers. Using this, we prove the following theorem, which gives the upper bound on the dimension. We let \(d^*(\beta ) = 1 + (\zeta \beta - \mu )(1+\rho /2)\) and write \(\beta _-^*\) and \(\beta _+^*\) for its left and right zero, respectively. For \(\beta _0^* = \beta (0) = \frac{2a}{4a-1+a\rho }\) (where \(\beta (0)\) means \(\beta (\zeta )\), evaluated at \(\zeta =0\)), we have \((d^*)'(\beta _0^*)=0\), and hence \(d^*(\beta )\) is increasing for \(\beta \in [\beta _-^*,\beta _0^*]\) and decreasing for \(\beta \in [\beta _0^*, \beta _+^*]\).
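As a quick consistency check, using only the closed form of \(d^*\) from Theorem 5.1 and the value of \(\beta _0^*\), the maximum of the spectrum recovers the dimension of the intersection of the curve with \({\mathbb {R}}_+\) (as remarked at the end of the proof of Theorem 5.3): since

$$\begin{aligned} \frac{1-a\rho }{2a} - \frac{1+2\beta _0^*}{\beta _0^*} = \frac{1-a\rho }{2a} - \frac{8a-1+a\rho }{2a} = -\frac{4a-1+a\rho }{a} \quad \text {and} \quad \frac{a\beta _0^*}{2} = \frac{a^2}{4a-1+a\rho }, \end{aligned}$$

we get

$$\begin{aligned} d^*(\beta _0^*) = 1 - (4a-1+a\rho )\left( 1+\frac{\rho }{2}\right) = 1 - \frac{(\rho +2)(\rho +4-\kappa /2)}{\kappa }, \end{aligned}$$

where the last equality uses \(a = 2/\kappa \).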

Theorem 5.3

Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\) and \(x_R = 0^+\). Then, the following hold:

  1. (i)

    if \(\beta \in [ \beta _-^*, \beta _0^*)\), then \(\text {dim}_H {\overline{V}}_\beta \le d^*(\beta )\) almost surely,

  2. (ii)

    if \(\beta < \beta _-^*\), then \({\overline{V}}_\beta = \emptyset \) almost surely,

  3. (iii)

    if \(\beta \in ( \beta _0^*, \beta _+^*]\), then \(\text {dim}_H {\underline{V}}_\beta \le d^*(\beta )\) almost surely,

  4. (iv)

    if \(\beta > \beta _+^*\), then \({\underline{V}}_\beta = \emptyset \) almost surely.

Together, they imply that if \(\beta \in [\beta _-^*,\beta _+^*]\), then

$$\begin{aligned} \text {dim}_H V_\beta ^* \le d^*(\beta ), \end{aligned}$$

almost surely.

Now, we will construct the cover. It is sufficient to prove the above theorem for the sets intersected with every closed subinterval of \({\mathbb {R}}_+\). As in [1], we will do this for the interval [1, 2], but it will be clear that the very same construction works for any other closed interval. We start by constructing the cover for \({\overline{V}}_\beta \cap [1,2]\). For every \(n \ge 1\), let

$$\begin{aligned} J_{j,n} = [1+je^{-n}/2, 1+(j+1)e^{-n}/2), \ j = 0,1,\dots , \lceil 2e^n \rceil -1. \end{aligned}$$

Then every interval has length \(e^{-n}/2\). We denote by \(x_{j,n}\) the midpoint of the interval \(J_{j,n}\) and write \({\mathscr {J}}_n :=\{J_{j,n} \}\). By distortion estimates, we have that there is some constant \(c' > 0\), such that if

$$\begin{aligned} g_{\tau _n(x)}'(x) \ge e^{-\beta (1+\rho /2) n} \ \text {for some} \ x \in J_{j,n}, \end{aligned}$$
(63)

then,

$$\begin{aligned} g_{\tau _n(x)}'(x_{j,n}) \ge c'e^{-\beta (1+\rho /2) n}. \end{aligned}$$
(64)

We also have that

$$\begin{aligned} \tau _{n-2}(x_{j,n}) \le \inf _{x \in J_{j,n}} \tau _n(x), \end{aligned}$$

since the curve must hit the ball of radius \(e^{-(n-2)}\) centered at \(x_{j,n}\) before it hits the ball of radius \(e^{-n}\) centered at any point x in \(J_{j,n}\), as the former ball contains the latter for any \(x \in J_{j,n}\). Combining this with the fact that \(t \mapsto g_t'(x)\) is decreasing for every fixed x and with (64), and writing \(c = c'e^{-2\beta (1+\rho /2)}\), shows that (63) implies that

$$\begin{aligned} g_{\tau _{n-2}(x_{j,n})}'(x_{j,n}) \ge ce^{-\beta (1+\rho /2)(n-2)}. \end{aligned}$$
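Spelled out, this combines \(\tau _{n-2}(x_{j,n}) \le \tau _n(x)\), the monotonicity of \(t \mapsto g_t'(x_{j,n})\) and (64):

$$\begin{aligned} g_{\tau _{n-2}(x_{j,n})}'(x_{j,n}) \ge g_{\tau _n(x)}'(x_{j,n}) \ge c'e^{-\beta (1+\rho /2) n} = ce^{-\beta (1+\rho /2)(n-2)}. \end{aligned}$$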

We let \({\mathscr {I}}_n^-(\beta ) = \left\{ j \in \{ 0,1,..., \lceil 2e^n \rceil -1 \} : g_{\tau _{n-2}(x_{j,n})}'(x_{j,n}) \ge ce^{-\beta (1+\rho /2)(n-2)} \right\} \), and define \(J_{n,-}(\beta )\) by

$$\begin{aligned} J_{n,-}(\beta ) = \bigcup _{j \in {\mathscr {I}}_n^-(\beta )} J_{j,n}. \end{aligned}$$

Then \(\left\{ x \in [1,2]: g_{\tau _n(x)}'(x) \ge e^{-\beta (1+\rho /2) n} \right\} \subset J_{n,-}(\beta )\), and thus, for every positive integer m,

$$\begin{aligned} {\overline{V}}_\beta \cap [1,2] \subset \bigcup _{n \ge m} J_{n,-}(\beta ), \end{aligned}$$

i.e., for every positive integer m, \(\bigcup _{n \ge m} J_{n,-}(\beta )\) is a cover of \({\overline{V}}_\beta \cap [1,2]\). We write

$$\begin{aligned} \overline{{\mathcal {N}}}_n(\beta ) = \sum _{j=0}^{\lceil 2e^n \rceil -1} 1\left\{ g_{\tau _{(n-2)}(x_{j,n})}'(x_{j,n}) \ge ce^{-\beta (1+\rho /2)(n-2)} \right\} , \end{aligned}$$
(65)

that is, \(\overline{{\mathcal {N}}}_n(\beta )\) is the number of intervals \(J_{j,n}\) that make up \(J_{n,-}(\beta )\).

Lemma 5.4

Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\) and \(\zeta > 0\), that is, \(\beta < \beta _0^* = \frac{2a}{4a-1+a\rho }\). Then,

$$\begin{aligned} {\mathbb {E}}[\overline{{\mathcal {N}}}_n(\beta )] \lesssim e^{n(1+(\zeta \beta - \mu )(1+\rho /2))}. \end{aligned}$$

Proof

By (65), we get

$$\begin{aligned} {\mathbb {E}}\left[ \overline{{\mathcal {N}}}_n(\beta )\right]&= \sum _{j=0}^{\lceil 2e^n \rceil -1} {\mathbb {P}}\left( g'_{\tau _{n-2}}(x_{j,n}) \ge c e^{-\beta (1+\rho /2)(n-2)} \right) \\&= \sum _{j=0}^{\lceil 2e^n \rceil -1} {\mathbb {P}}\left( g'_{\tau _{n-2}}(x_{j,n})^\zeta \ge c^\zeta e^{-\zeta \beta (1+\rho /2) (n-2)} ; \tau _{n-2}< \infty \right) \\&\lesssim \sum _{j=0}^{\lceil 2e^n \rceil -1} e^{\zeta \beta (1+\rho /2) (n-2)} {\mathbb {E}}\left[ g_{\tau _{n-2}}'(x_{j,n})^\zeta 1\{\tau _{n-2} < \infty \} \right] \\&\lesssim e^{n(1+(\zeta \beta - \mu )(1+\rho /2))}, \end{aligned}$$

using Chebyshev’s inequality in the third line and Corollary 3.2 in the last. \(\square \)

Next, we construct the cover for \({\underline{V}}_\beta \cap [1,2]\). Let \(x \in {\underline{V}}_\beta \cap [1,2]\) and \(n > 0\) be such that \(g_{\tau _n}'(x) \le e^{-\beta (1+\rho /2) n}\). By Lemma 2.4, there is a constant \(C^*\) such that

$$\begin{aligned} g_{{\tilde{t}}(s_n)}'(x) = {\tilde{g}}_{s_n}'(x) \le e^{-\beta (1+\rho /2) n}, \end{aligned}$$

where \(s_n = \frac{n}{a} + C^*\) (here, \(C^*\) can be chosen so that the above holds for every \(x \in [1,2]\)). By the distortion principle, there is a smallest nonnegative integer k such that, if J is the unique interval in \({\mathscr {J}}_{n+k}\) such that \(x \in J\), then for every \(z \in J\),

$$\begin{aligned} {\tilde{g}}_{s_n,x}'(z) \asymp {\tilde{g}}_{s_n}'(x), \end{aligned}$$

where the second subscript denotes the point with respect to which the time change \({\tilde{t}}(s)\) is made (if no second subscript is written out, the time change corresponds to the point at which we evaluate the function). Let \(x_J\) denote the midpoint of J. Then \({\tilde{g}}_{s_n,x}'(x_J) \lesssim e^{-\beta (1+\rho /2) n}\). Since \({{\,\mathrm{dist}\,}}(x,x_J) \le e^{-(n+k)}/4\), we have, for geometric reasons and by Lemma 2.4, that

$$\begin{aligned} {\tilde{t}}_x \left( \frac{n}{a} + C^* \right) \le \tau _{n+2aC^*}(x) \le \tau _{n+2aC^*+1}(x_J) \le {\tilde{t}}_{x_J} \left( \frac{n+1}{a} + 3C^* \right) , \end{aligned}$$

that is, there is a constant \(c_1\) such that

$$\begin{aligned} {\tilde{t}}_x \left( s_n \right) \le {\tilde{t}}_{x_J} \left( \frac{n}{a} + c_1 \right) = {\tilde{t}}_{x_J}(s_n'). \end{aligned}$$

Therefore, there is a constant \(c_2\) such that \({\tilde{g}}_{s_n'}'(x_J) \le c_2 e^{-\beta (1+\rho /2) n}\). The constants above can be chosen to be universal. Let us recap what we have done above; we concluded that there are universal constants k, \(c_1\), \(c_2\) such that every \(x \in {\underline{V}}_\beta \) is contained in an interval J in \({\mathscr {J}}_{n+k}\) and \({\tilde{g}}_{\frac{n}{a} + c_1}'(x_J) \le c_2 e^{-\beta (1+\rho /2) n}\), where \(x_J\) is the midpoint of J. Therefore, choosing universal constants \(C_1\) and \(C_2\), we have that if

$$\begin{aligned} J_{n,+}(\beta ) = \bigcup _{j \in {\mathscr {I}}_n^+(\beta )} J_{j,n}, \end{aligned}$$

where \({\mathscr {I}}_n^+(\beta ) = \left\{ j \in \{ 0,1,..., \lceil 2e^n \rceil -1 \} : {\tilde{g}}_{\frac{n}{a} + C_1}'(x_{j,n}) \le C_2e^{-\beta (1+\rho /2) n} \right\} \), then

$$\begin{aligned} {\underline{V}}_\beta \cap [1,2] \subset \bigcup _{n \ge m} J_{n,+}(\beta ) \end{aligned}$$

for every m. We let \(\underline{{\mathcal {N}}}_n(\beta )\) denote the number of intervals \(J_{j,n}\) that make up \(J_{n,+}(\beta )\), i.e.,

$$\begin{aligned} \underline{{\mathcal {N}}}_n(\beta ) = \sum _{j=0}^{\lceil 2e^n \rceil -1} 1\left\{ {\tilde{g}}_{\frac{n}{a} + C_1}'(x_{j,n}) \le C_2 e^{-\beta (1+\rho /2) n} \right\} . \end{aligned}$$

Lemma 5.5

Let \(\kappa > 0\), \(\rho \in ((-2)\vee (\frac{\kappa }{2}-4),\frac{\kappa }{2}-2)\) and \(\zeta < 0\), that is, \(\beta > \beta _0^* = \frac{2a}{4a-1+a\rho }\). Then,

$$\begin{aligned} {\mathbb {E}}\left[ \underline{{\mathcal {N}}}_n(\beta )\right] \lesssim e^{n(1+(\zeta \beta - \mu )(1+\rho /2))}. \end{aligned}$$

Proof

Applying Chebyshev’s inequality and Proposition 3.1 in the same way as in the previous lemma gives the result. \(\square \)

With these, we will now prove Theorem 5.3.

Proof of Theorem 5.3

We begin with (i). Lemma 5.4 implies, for \(s > d^*(\beta )\) and n sufficiently large, that

$$\begin{aligned} {\mathbb {E}}\left[ \overline{{\mathcal {N}}}_n(\beta )\right] \lesssim e^{ns}. \end{aligned}$$

Since \({\overline{V}}_\beta \cap [1,2] \subset \cup _{n \ge m} J_{n,-}(\beta )\) for every m, the t-dimensional Hausdorff measure of \({\overline{V}}_\beta \cap [1,2]\), \({\mathcal {H}}^t({\overline{V}}_\beta \cap [1,2])\), is bounded by a constant times the t-dimensional lower Minkowski content of \(\cup _{n \ge m} J_{n,-}(\beta )\) (see Section 5.5 of [14]), which implies that

$$\begin{aligned} {\mathbb {E}}\left[ {\mathcal {H}}^t({\overline{V}}_\beta \cap [1,2])\right] \le \lim _{m \rightarrow \infty } c \sum _{n=m}^\infty {\mathbb {E}}\left[ \overline{{\mathcal {N}}}_n(\beta )\right] e^{-nt} = 0 \end{aligned}$$

for \(t>s\), since the geometric tail \(\sum _{n \ge m} e^{-n(t-s)}\) vanishes as \(m \rightarrow \infty \). Since such a construction works for every interval, we have that \(\text {dim}_H{\overline{V}}_\beta \le d^*(\beta )\) almost surely. Similarly, using Lemma 5.5, \(\text {dim}_H {\underline{V}}_\beta \le d^*(\beta )\) almost surely. For (ii), note that \(1\{ {\overline{V}}_\beta \cap [1,2] \ne \emptyset \} \le \sum _{n=m}^\infty \overline{{\mathcal {N}}}_n(\beta )\), and thus

$$\begin{aligned} {\mathbb {P}}\left( {\overline{V}}_\beta \cap [1,2] \ne \emptyset \right) \le \lim _{m \rightarrow \infty } \sum _{n=m}^\infty {\mathbb {E}}\left[ \overline{{\mathcal {N}}}_n(\beta )\right] = 0, \end{aligned}$$

since \(\beta < \beta _-^*\) implies that \(d^*(\beta ) = 1+ (\zeta \beta - \mu )(1+\rho /2) < 0\). Hence \({\overline{V}}_\beta = \emptyset \) almost surely for \(\beta < \beta _-^*\). The same argument shows that \({\underline{V}}_\beta = \emptyset \) almost surely for \(\beta > \beta _+^*\).

What is left is to use (i) and (iii) to prove that \(\text {dim}_H V_\beta ^* \le d^*(\beta )\) almost surely for \(\beta \in [\beta _-^*,\beta _+^*]\). First, let \(\beta \in [\beta _-^*,\beta _0^*)\) and \(\varepsilon > 0\) be such that \(\beta + \varepsilon \in (\beta _-^*, \beta _0^*)\). Since \(V_\beta ^* \subset {\overline{V}}_{\beta +\varepsilon }\), (i) gives that \(\text {dim}_H V_\beta ^* \le \text {dim}_H {\overline{V}}_{\beta +\varepsilon } \le d^*(\beta +\varepsilon )\). Letting \(\varepsilon \rightarrow 0^+\) and using the fact that \(d^*(\beta )\) is increasing on \([\beta _-^*,\beta _0^*]\) gives the upper bound for the chosen \(\beta \). In the same way, letting \(\beta \in (\beta _0^*,\beta _+^*]\) and \(\varepsilon > 0\) be such that \(\beta - \varepsilon \in (\beta _0^*,\beta _+^*)\), then since \(V_\beta ^* \subset {\underline{V}}_{\beta -\varepsilon }\), (iii) implies that \(\text {dim}_H V_\beta ^* \le \text {dim}_H {\underline{V}}_{\beta -\varepsilon } \le d^*(\beta -\varepsilon )\). Again, letting \(\varepsilon \rightarrow 0^+\) and noting that \(d^*(\beta )\) is decreasing on \([\beta _0^*,\beta _+^*]\) gives the desired upper bound. Finally, we comment on the case \(\beta = \beta _0^*\). In this case, \(\text {dim}_H V_{\beta _0^*}^*\) is trivially bounded by \(d^*(\beta _0^*)\), as that is equal to the dimension of the intersection of the curve with \({\mathbb {R}}_+\), a set which clearly contains \(V_{\beta _0^*}^*\). While this was proven previously, we remark that the upper bound follows immediately from Corollary 3.2 with \(\zeta = 0\), and a less involved covering argument than the above. Hence, the proof of the upper bound on the dimension is done. \(\square \)

5.2 Lower bound

We shall prove the lower bound using Frostman’s lemma. To this end, we let \(E_s(\nu )\) denote the s-dimensional energy of the measure \(\nu \), i.e.,

$$\begin{aligned} E_s(\nu ) = \iint \frac{d\nu (x) d\nu (y)}{|x-y|^s}. \end{aligned}$$

We then construct a Frostman measure on \(V_\beta ^*\) and show that it has finite s-dimensional energy for every \(s < d^* = d^*(\beta )\), which implies that the s-dimensional Hausdorff measure of \(V_\beta ^*\) is infinite (see Theorem 8.9 of [14]), and thus that the Hausdorff dimension must be greater than or equal to s. Just like in the previous section, we do this for \(V_\beta ^*\) intersected with the interval [1, 2], but again it will be clear that this can be done for any closed interval to the right of 0. In the following, we will construct a family of Frostman measures and show that it gives the correct lower bound on the dimension of \(V_\beta ^*\).

Theorem 5.6

Let \(\kappa \in (0,4]\), \(\rho \in (-2,\frac{\kappa }{2}-2)\) and \(x_R = 0^+\). Then, for every \(\varsigma >0\),

$$\begin{aligned} {\mathbb {P}}(\text {dim}_H V_\beta ^* \ge d^*(\beta )-\varsigma ) = 1. \end{aligned}$$

Proof

Fix \(\kappa \in (0,4)\), \(\rho \in (-2,\frac{\kappa }{2}-2)\) and \(\delta \in (0,\frac{1}{2})\) small and \(u,\Lambda ,M>0\) large enough for Proposition 4.2 to hold. We fix \(n \in {\mathbb {N}}\), divide [1, 2] into \(\varepsilon _n^{-1}\) intervals of length \(\varepsilon _n\), and let \(x_{j,n} = 1+(j-\frac{1}{2})\varepsilon _n\) be the midpoint of the jth of these intervals. Let \({\mathscr {D}}_n=\{x_{j,n}:j=1,...,\varepsilon _n^{-1}\}\), let \({\mathscr {C}}_n = \{ x \in {\mathscr {D}}_n: E^n(x) > 0 \}\) and let \(J_n(x) = [x-\frac{\varepsilon _n}{2},x+\frac{\varepsilon _n}{2}]\). Then,

$$\begin{aligned} {\mathscr {C}} = \bigcap _{k \ge 1} \overline{\bigcup _{n \ge k} \bigcup _{x \in {\mathscr {C}}_n} J_n(x)} \subseteq V_\beta ^*. \end{aligned}$$

For each \(n \ge 1\), we define the measure \(\nu _n\) by

$$\begin{aligned} \nu _n(A) = \int _A \sum _{x \in {\mathscr {D}}_n} \frac{E^n(x)}{{\mathbb {E}}[E^n(x)]} 1_{J_n(x)}(t) dt, \end{aligned}$$

for Borel sets \(A \subset [1,2]\). We want to take a subsequential limit of the sequence of measures \((\nu _n)\), which we will prove converges to a Frostman measure on \(V_\beta ^*\). To see that this limit exists, we need that the event on which we want to take the subsequential limit has positive probability and that the support of the limit is contained in \(V_\beta ^*\). That the support of the limit is contained in \(V_\beta ^*\) is obvious by construction, so we turn to proving that the event has positive probability. Clearly \({\mathbb {E}}[\nu _n([1,2])] = 1\). We need to show that there is some constant \(c>0\) such that \({\mathbb {E}}[\nu _n([1,2])^2]<c\). Then, since \({\mathbb {E}}[\nu _n([1,2])] = {\mathbb {E}}[\nu _n([1,2]) 1_{\{\nu _n([1,2])>0\}}]\), the Cauchy–Schwarz inequality gives

$$\begin{aligned} {\mathbb {P}}(\nu _n([1,2])>0) \ge \frac{{\mathbb {E}}[\nu _n([1,2])]^2}{{\mathbb {E}}[\nu _n([1,2])^2]} \ge \frac{1}{c}, \end{aligned}$$

which implies that the event on which we want to take a subsequence has positive probability. We have that

$$\begin{aligned} {\mathbb {E}}[\nu _n([1,2])^2] = \varepsilon _n^2 \sum _{x,y \in {\mathscr {D}}_n} \frac{{\mathbb {E}}[E^n(x)E^n(y)]}{{\mathbb {E}}[E^n(x)]{\mathbb {E}}[E^n(y)]}, \end{aligned}$$

and we will bound the diagonal and the off-diagonal parts separately below. For the diagonal terms, we have (since \(E^n(x)^2 \le E^n(x)\))

$$\begin{aligned} \varepsilon _n^2 \sum _{j=1}^{\varepsilon _n^{-1}} \frac{{\mathbb {E}}[E^n(x_{j,n})^2]}{{\mathbb {E}}[E^n(x_{j,n})]^2}&\le \varepsilon _n^2 \sum _{j=1}^{\varepsilon _n^{-1}} \frac{1}{{\mathbb {E}}[E^n(x_{j,n})]} \lesssim \left( \prod _{j=1}^{n+1} \psi (\alpha _j)^{|\zeta |}\right) \varepsilon _n^{d^*+o_n(1)} \lesssim 1, \end{aligned}$$

for large enough \(\alpha _0\) and n, since \(\psi \) is a subpower function and \(o_n(1)\) tends to 0 as \(n \rightarrow \infty \). For the off-diagonal terms, we have, again for large enough \(\alpha _0\) and n, by Proposition 4.2,

$$\begin{aligned} \varepsilon _n^2 \sum _{j \ne k}&\frac{{\mathbb {E}}[E^n(x_{j,n})E^n(x_{k,n})]}{{\mathbb {E}}[E^n(x_{j,n})] {\mathbb {E}}[E^n(x_{k,n})]} \lesssim \varepsilon _n^2 \sum _{j \ne k} \Psi _\delta \left( 1/|x_{j,n}-x_{k,n}|\right) |x_{j,n}-x_{k,n}|^{d^*-1} \lesssim 1, \end{aligned}$$

since \(\Psi _\delta \) is a subpower function. Thus, \({\mathbb {E}}[\nu _n([1,2])^2]\) is bounded uniformly in n, and \({\mathbb {P}}(\nu _n([1,2])>0) \ge 1/c > 0\). What is left to do is to show that \(E_{d^*-\varsigma }(\nu _n)\) is almost surely finite for each n and every \(\varsigma > 0\). To do this, it suffices to bound \({\mathbb {E}}[E_{d^*-\varsigma }(\nu _n)]\) uniformly in n, for every \(\varsigma > 0\). We have that

$$\begin{aligned} {\mathbb {E}}[E_{d^*-\varsigma }(\nu _n)] = \sum _{x,y \in {\mathscr {D}}_n} \frac{{\mathbb {E}}[E^n(x) E^n(y)]}{{\mathbb {E}}[E^n(x)] {\mathbb {E}}[E^n(y)]} \iint _{J_n(x) \times J_n(y)} \frac{1}{|u-v|^{d^*-\varsigma }} du dv. \end{aligned}$$

In order to bound \({\mathbb {E}}[E_{d^*-\varsigma }(\nu _n)]\) we will use the following:

$$\begin{aligned} \iint _{J_n(x_{j,n}) \times J_n(x_{j,n})} \frac{du dv}{|u-v|^s}&= \frac{2}{(2-s)(1-s)}\varepsilon _n^{2-s}, \end{aligned}$$
(66)
$$\begin{aligned} \iint _{J_n(x_{j,n}) \times J_n(x_{k,n})} \frac{du dv}{|u-v|^{s}}&\lesssim \varepsilon _n^2 \frac{1}{|x_{j,n}-x_{k,n}|^s}. \end{aligned}$$
(67)
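For completeness, (66) is a direct computation, valid for \(0<s<1\) (as is the case here, since \(s = d^*-\varsigma < 1\)): by symmetry,

$$\begin{aligned} \iint _{J_n(x_{j,n}) \times J_n(x_{j,n})} \frac{du dv}{|u-v|^s} = 2\int _0^{\varepsilon _n} \int _0^u (u-v)^{-s} dv du = \frac{2}{1-s}\int _0^{\varepsilon _n} u^{1-s} du = \frac{2}{(2-s)(1-s)}\varepsilon _n^{2-s}. \end{aligned}$$

As for (67), if \(|j-k| \ge 2\), then \(|u-v| \asymp |x_{j,n}-x_{k,n}|\) on \(J_n(x_{j,n}) \times J_n(x_{k,n})\), while for adjacent intervals the bound follows from the computation above, applied to the union \(J_n(x_{j,n}) \cup J_n(x_{k,n})\).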

We first consider the diagonal terms. By (66),

$$\begin{aligned}&\sum _{j=1}^{\varepsilon _n^{-1}} \frac{1}{{\mathbb {E}}[E^n(x_{j,n})]} \iint _{J_n(x_{j,n}) \times J_n(x_{j,n})} \frac{1}{|u-v|^{d^*-\varsigma }} du dv \\&\quad = C \sum _{j=1}^{\varepsilon _n^{-1}} \frac{1}{{\mathbb {E}}[E^n(x_{j,n})]} \varepsilon _n^{(2-d^*+\varsigma )} \lesssim \left( \prod _{j=1}^{n+1} \psi (\alpha _j)^{|\zeta |} \right) \varepsilon _n^{\varsigma + o_n(1)}. \end{aligned}$$

Clearly, the sum is uniformly bounded in n for large enough \(\alpha _0\). For the off-diagonal terms we have, by Proposition 4.2 and (67), that

$$\begin{aligned}&\sum _{j \ne k} \frac{{\mathbb {E}}[E^n(x_{j,n}) E^n(x_{k,n})]}{{\mathbb {E}}[E^n(x_{j,n})] {\mathbb {E}}[E^n(x_{k,n})]} \iint _{J_n(x_{j,n}) \times J_n(x_{k,n})} \frac{1}{|u-v|^{d^*-\varsigma }} du dv \\&\quad \lesssim \varepsilon _n^2 \sum _{j \ne k} \Psi _\delta \left( 1/|x_{j,n}-x_{k,n}| \right) |x_{j,n}-x_{k,n}|^{\varsigma - 1} \lesssim 1. \end{aligned}$$
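The last bound uses the subpower property of \(\Psi _\delta \) quantitatively: \(\Psi _\delta (x) \lesssim x^{\varsigma /2}\) for large x, so writing \(|x_{j,n}-x_{k,n}| = d\varepsilon _n\) with \(1 \le d \le \varepsilon _n^{-1}\),

$$\begin{aligned} \varepsilon _n^2 \sum _{j \ne k} \Psi _\delta \left( \frac{1}{|x_{j,n}-x_{k,n}|}\right) |x_{j,n}-x_{k,n}|^{\varsigma -1} \lesssim \varepsilon _n^2 \sum _{j=1}^{\varepsilon _n^{-1}} \sum _{d=1}^{\varepsilon _n^{-1}} (d\varepsilon _n)^{\varsigma /2-1} \asymp \varepsilon _n^2 \cdot \varepsilon _n^{-1} \cdot \varepsilon _n^{-1} = 1. \end{aligned}$$

An analogous computation, with \(\varsigma -1\) replaced by \(d^*-1\), justifies the corresponding off-diagonal bound in the second moment estimate above.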

The implicit constants do not depend on n, and hence we have that for every \(\varsigma >0\), the limiting measure \(\nu \) satisfies \(E_{d^*-\varsigma }(\nu )<\infty \) on an event of positive probability. By Lemma 5.2, the Hausdorff dimension is almost surely constant, so a lower bound with positive probability is an almost sure lower bound. Thus, the proof is done. \(\square \)

Now Theorem 5.1 follows from Theorem 5.3 and Theorem 5.6 and thus, as remarked, Theorem 1.1 is proven.