1 Introduction

Schramm–Loewner evolution (SLE) processes are universal scaling limits, as the lattice mesh tends to zero, of interfaces in critical planar lattice models. SLE with parameter \(\kappa > 0\) is a random continuous curve constructed using Loewner’s differential equation driven by Brownian motion with speed \(\kappa \). Solving the Loewner equation gives a continuous family of conformal maps, and the SLE curve is obtained by tracking the image of the singularity of the equation under the inverse maps. Various geometric observables play an important role in SLE theory. To name a few examples, the SLE Green’s function, i.e., the renormalized probability that the interface passes near a given point, is important in connection with the Minkowski content parametrization [28]; Smirnov proved Cardy’s formula for the probability of a crossing event in critical percolation, which in turn entails conformal invariance [37]; left or right passage probabilities known as Schramm formulae [36] were recently used in connection with finite Loewner energy curves [39]; and observables involving derivative moments of the SLE conformal maps are important in the study of fractal and regularity properties, see, e.g., [10, 29]. By the Markovian property of SLE, such observables give rise to martingales with respect to the natural SLE filtration and, conversely, it is sometimes possible to construct martingales carrying specific geometric information about the SLE.

Assuming sufficient regularity, SLE martingale observables satisfy differential equations which can be derived using Itô calculus. If a solution of these differential equations with the correct boundary values can be found, it is sometimes possible to apply a probabilistic argument using the solution’s boundary behavior to show that the solution actually represents the desired quantity. In the simplest case, the differential equation is an ODE, but generically, in the case of multipoint correlations, it is a semi-elliptic PDE in several variables and it is difficult to construct solutions with the desired boundary data. (But see [10, 15, 24].) Seeking new ways to construct explicit solutions and methods for extracting information from them therefore seems to be worthwhile.

It was observed early on that the differential equations that arise in this way in SLE theory also arise in conformal field theory (CFT) as so-called level two degeneracy equations satisfied by certain correlation functions, see [6, 7, 9, 11, 16]. As a consequence, a clear probabilistic and geometric interpretation of the degeneracy equations is obtained via SLE theory. On the other hand, CFT is a source of ideas and methods for how to systematically construct solutions of these equations, cf. [7, 8, 22]. Thus CFT provides a natural setting for the construction of martingale observables for SLE processes.

In [22], a rigorous Coulomb gas framework was developed in which CFTs are modeled using families of fields built from the Gaussian free field (GFF). SLE martingales for any \(\kappa \) can then be represented as GFF correlation functions involving special fields inserted in the bulk or on the boundary. By making additional, carefully chosen, field insertions, the scaling behavior at the insertion points can in some cases be prescribed. In this way many chordal SLE martingale observables were recovered in [22] as CFT correlation functions.

Multiple SLE arises, e.g., when considering scaling limits of models with alternating boundary conditions forcing the existence of several interacting interfaces. See [14] for several examples and results closely related to those of the present paper. Many single-path observables generalize to this setting, but when considering several paths, additional boundary points need to be marked, thus increasing the dimensionality of the problem. One purpose of this paper is to suggest and explore a method for the explicit construction of at least some martingale observables for multiple SLE, starting from single-path observables and exploiting ideas based on CFT considerations. Boundary insertions are easier to handle than insertions in the bulk, so multiple SLE provides a natural first arena in which to consider the extension of one-point functions to multipoint correlations.

The method involves three steps:

  1. The first step uses screening techniques and ideas from CFT to generate a non-rigorous prediction for the observable [4, 13, 22] (see also [12, 24]). The prediction is expressed in terms of a contour integral with an explicit integrand. We refer to these integrals as Dotsenko-Fateev integrals (after [13]) or sometimes simply as screening integrals. The main difficulty is to choose the appropriate integration contour, but this choice can be simplified by considering suitable limits.

  2. The second step is to prove that the prediction from Step 1 satisfies the correct boundary conditions. This technical step involves the computation of rather complicated integral asymptotics. In a separate paper [32], we present an approach for computing such asymptotics and carry out the estimates required for this paper.

  3. The last step is to use probabilistic methods together with the estimates of Step 2 to rigorously establish that the prediction from Step 1 gives the correct quantity.

Remark

Step 1 can be viewed as a way to “add” a commuting SLE curve to a known observable by first inserting an appropriate marked boundary point and then employing screening to readjust the boundary conditions.

Remark

We stress that we do not need to use a priori information on the regularity of the observables under consideration, as would be the case, e.g., if one were to work directly with the differential equations.

1.1 Two Examples

We illustrate the method by presenting two examples in detail. Both examples involve a system of two SLEs aiming for infinity with one marked point in the interior, but we will indicate how the arguments may be generalized to more complicated configurations.

The first example concerns the probability that the system of SLEs passes to the right of a given interior point; that is, the analogue of Schramm’s formula [36]. This probability obviously depends only on the behavior of the leftmost curve. (So it is really an \(\hbox {SLE}_{\kappa }(2)\) quantity.) The main difficulty in this case lies in implementing Steps 1 and 2; the latter step is discussed in detail in [32].

The second example concerns the limiting renormalized probability that the system of SLEs passes near a given point, that is, the Green’s function. We first check that this Green’s function actually exists as a limit. The main step is to verify existence in the case when only one of the two curves grows. We complete this step by establishing the existence of the \(\hbox {SLE}_\kappa (\rho )\) Green’s function when the force point lies on the boundary and \(\rho \) belongs to a certain interval. The proof gives a representation formula in terms of an expectation for two-sided radial SLE stopped at its target point; the formula is similar to that obtained in the main result of [2]. In Step 1, we make a prediction for the observable by choosing an appropriate linear combination of the screening integrals such that the leading order terms in the asymptotics cancel (thereby matching the asymptotics we expect). In Step 2, which is detailed in [32], we carefully analyze the candidate solution and estimate its boundary behavior. Lastly, given these estimates, we show that the candidate observable enjoys the same probabilistic representation as the Green’s function defined as a limit—so they must agree.

For both examples, the asymptotic analysis of the screening integrals in Step 2 is quite involved. The integrals are natural generalizations of hypergeometric functions and belong to a class sometimes referred to as Dotsenko-Fateev integrals. Even though Dotsenko-Fateev integrals and other generalized hypergeometric functions have been considered before in related contexts (see, e.g., [13, 15, 20, 23, 24]), we have not been able to find the required analytic estimates in the literature. We discuss these issues in a separate paper [32] which also includes details of the precise estimates needed here.

1.2 Fusion

By letting the seed points of the SLEs collapse, we obtain rigorous proofs of fused formulas as corollaries. One can verify by direct computation that the limiting one-point observables satisfy specific third-order ODEs which can be alternatively obtained from the non-rigorous fusion rules of CFT, cf. [12]. In fact, in the case of the Schramm probability, the formulas we prove here were predicted using fusion in [18]. The formulas we derive for the fused Green’s functions appear to be new.

The interpretation of fusion in SLE theory as the successive merging of seeds was described in [16]. In [16], the difficult fact that fused one-point observables actually do satisfy higher order ODEs was also established. The ODEs for the Schramm formula for several fused paths were derived rigorously in [16] and the two-path formula in the special case \(\kappa =8/3\) (also allowing for two interior points) was established in [5]. We do not need to use any of these results in this paper.

Regarding the solution of the equation corresponding to Schramm’s formula for two SLE curves started from two distinct points \(x_1,x_2 \in {\mathbb {R}}\), it was noted in [18] that “explicit analytic progress is only feasible in the limit when \(\delta = x_2 - x_1 \rightarrow 0\)”, that is, in the fusion limit. It is only by applying the screening techniques mentioned above that we are able to obtain explicit expressions in this paper for Schramm’s formula in the case of two distinct points \(x_1 \ne x_2\).

1.3 Outline of the Paper

The main results of the paper are stated in Sect. 2, while we review some aspects of \(\hbox {SLE}_\kappa \) and \(\hbox {SLE}_\kappa (\rho )\) processes in Sect. 3.

In Sect. 4, we review and use ideas from CFT to generate predictions for Schramm’s formula and Green’s function for multiple SLEs with two curves growing toward infinity in terms of screening integrals.

In Sect. 5, we prove rigorously that the predicted Schramm’s formula indeed gives the probability that a given point lies to the left of both curves. The proof relies on a number of technical asymptotic estimates; proofs of these estimates are given in [32].

In Sect. 6, we give a rigorous proof that the predicted Green’s function equals the renormalized probability that the system passes near a given point. The proof relies both on pure SLE estimates (established in Sects. 6 and 7) and on asymptotic estimates for contour integrals (established in [32]).

In Sect. 7, we prove a lemma which expresses the fact that it is very unlikely that both curves in a commuting system get near a given point.

In Sect. 8, we consider the special case of two fused curves, i.e., the case when both curves in the commuting system start at the same point. In particular, this provides rigorous proofs of some predictions for Schramm’s formula due to Gamsa and Cardy [18].

In the appendix, we consider the Green’s function when \(8/\kappa \) is an integer and derive explicit formulas in terms of elementary functions in a few cases.

2 Main Results

Before stating the main results, we briefly review some relevant definitions.

Consider first a system of two SLE paths \(\{\gamma _j\}_1^2\) in the upper half-plane \(\mathbb {H} := \{{{\mathrm{Im}}\,}z > 0\}\) growing toward infinity. Let \(0 < \kappa \leqslant 4\). Let \((\xi ^1, \xi ^2) \in {\mathbb {R}}^2\) with \(\xi ^1 \ne \xi ^2\). The Loewner equation corresponding to two growing curves is

$$\begin{aligned} dg_t(z) = \frac{\lambda _1dt}{g_t(z) - \xi ^{1}_t} + \frac{\lambda _2 dt}{g_t(z) - \xi ^{2}_t}, \quad g_0(z) = z, \end{aligned}$$
(2.1)

where \(\xi _t^1\) and \(\xi _t^2\), \(t \geqslant 0\), are the driving terms for the two curves and the growth speeds \(\lambda _j \) satisfy \(\lambda _j \geqslant 0\). The solution of (2.1) is a family of conformal maps \((g_t(z))_{t \geqslant 0}\) called the Loewner chain of \((\xi _t^1, \xi _t^2)_{t \geqslant 0}\). The multiple SLE system started from \((\xi ^1,\xi ^2)\) is obtained by taking \(\xi _t^1\) and \(\xi _t^2\) as solutions of the system of SDEs

$$\begin{aligned} {\left\{ \begin{array}{ll} d\xi _t^{1} = \frac{\lambda _1 +\lambda _2 }{\xi _t^{1} - \xi _t^{2}} dt + \sqrt{\frac{\kappa }{2} \lambda _1 } dB_t^1, &{}\quad \xi _0^1 = \xi ^1,\\ d\xi _t^{2} = \frac{\lambda _1 +\lambda _2 }{\xi _t^{2} - \xi _t^{1}} dt + \sqrt{\frac{\kappa }{2} \lambda _2 } dB_t^2, &{}\quad \xi _0^2 = \xi ^2, \end{array}\right. } \end{aligned}$$
(2.2)

where \(B_t^{1}\) and \(B_t^2\) are independent standard Brownian motions with respect to some measure \(\mathbf {P}= \mathbf {P}_{\xi ^1, \xi ^2}\). The paths are defined by

$$\begin{aligned} \gamma _j(t) = \lim _{y \downarrow 0} g^{-1}_t(\xi _t^j + iy), \quad \gamma _{j,t}: = \gamma _j[0,t], \quad j = 1,2. \end{aligned}$$
(2.3)

For \(j = 1,2\), \(\gamma _{j,\infty }\) is a continuous curve from \(\xi ^j\) to \(\infty \) in \(\mathbb {H}\). It can be shown that the system (2.1) is commuting in the sense that the order in which the two curves are grown does not matter [15]. Since our theorems only concern the distribution of the fully grown curves \(\gamma _{1,\infty }\) and \(\gamma _{2,\infty }\), the choice of growth speeds is irrelevant. When growing a single curve we will often choose the growth speed to equal \(a:=2/\kappa \).
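
For concreteness, the driving SDEs (2.2) are straightforward to sample numerically. The following Python sketch (illustrative only and not used in any proof; the function name is ad hoc) runs a simple Euler–Maruyama scheme with equal growth speeds \(\lambda _1 = \lambda _2 = a\); the drift makes the two driving terms repel, and the guard below only catches discretization artifacts.

```python
import numpy as np

def sample_driving_terms(kappa, xi1, xi2, T=1.0, n=10_000, seed=0):
    """Euler-Maruyama sketch of the driving SDEs (2.2) with equal growth
    speeds lambda_1 = lambda_2 = a = 2/kappa (illustrative only)."""
    rng = np.random.default_rng(seed)
    a = 2.0 / kappa
    lam1 = lam2 = a
    dt = T / n
    x1 = np.empty(n + 1)
    x2 = np.empty(n + 1)
    x1[0], x2[0] = xi1, xi2
    for k in range(n):
        dB1, dB2 = np.sqrt(dt) * rng.standard_normal(2)
        drift = (lam1 + lam2) / (x1[k] - x2[k])      # negative: the seeds repel
        x1[k + 1] = x1[k] + drift * dt + np.sqrt(0.5 * kappa * lam1) * dB1
        x2[k + 1] = x2[k] - drift * dt + np.sqrt(0.5 * kappa * lam2) * dB2
        if x2[k + 1] <= x1[k + 1]:                   # discretization safeguard
            raise RuntimeError("driving terms crossed; decrease dt")
    return x1, x2

x1, x2 = sample_driving_terms(kappa=3.0, xi1=0.0, xi2=1.0)
```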

Let us also recall the definition of (chordal) \(\hbox {SLE}_\kappa (\rho )\) for a single path \(\gamma _1\) in \(\mathbb {H}\) growing toward infinity. Let \(0< \kappa < 8\), \(\rho \in {\mathbb {R}}\), and let \((\xi ^1, \xi ^2)\in {\mathbb {R}}^2\) with \(\xi ^1 \ne \xi ^2\). Let \(W_t\) be a standard Brownian motion with respect to some measure \(\mathbf {P}^{\rho }\). Then \({\textit{SLE}}_\kappa (\rho )\) started from \((\xi ^1,\xi ^2)\) is defined by the equations

$$\begin{aligned} \partial _t g_t(z)&= \frac{2/\kappa }{g_t(z) - \xi _t^1}, \quad g_0(z) = z, \end{aligned}$$
(2.4a)
$$\begin{aligned} d\xi ^1_t&= \frac{\rho /\kappa }{\xi ^1_t-g_t(\xi ^2)}dt + dW_t, \quad \xi _0^1 = \xi ^1. \end{aligned}$$
(2.4b)

Depending on the choice of parameters, the solution may exist only up to some finite time. When referring to \(\hbox {SLE}_\kappa (\rho )\) started from \((\xi ^1,\xi ^2)\), we always assume that the curve starts from the first point of the tuple \((\xi ^1, \xi ^2)\) while the second point (in this case \(\xi ^2\)) is the force point. The \(\hbox {SLE}_\kappa (\rho )\) path \(\gamma _1\) is defined as in (2.3), assuming existence of the limit. In general, \(\hbox {SLE}_\kappa (\rho )\) need not be generated by a curve, but it is in all the cases considered in this paper.

The marginal law of either of the SLEs in a commuting system is that of an \(\hbox {SLE}_\kappa (\rho ), \, \rho =2,\) with the force point at the seed of the other curve. A similar statement also holds for stopped portions of the curve(s), see [15].

2.1 Schramm’s Formula

Our first result provides an explicit expression for the probability that an \(\hbox {SLE}_\kappa (2)\) path passes to the right of a given point. (See below for a precise definition of this event.) The probability is expressed in terms of the function \(\mathcal {M}(z, \xi )\) defined for \(z = x + iy \in \mathbb {H}\) and \(\xi >0\) by

$$\begin{aligned} \mathcal {M}(z, \xi ) =&\; y^{\alpha - 2} z^{-\frac{\alpha }{2}}(z- \xi )^{-\frac{\alpha }{2}}\bar{z}^{1-\frac{\alpha }{2}}(\bar{z} - \xi )^{1-\frac{\alpha }{2}}\nonumber \\&\times \int _{\bar{z}}^z(u-z)^{\alpha }(u- \bar{z})^{\alpha - 2} u^{-\frac{\alpha }{2}}(u - \xi )^{-\frac{\alpha }{2}} du, \end{aligned}$$
(2.5)

where \(\alpha = 8/\kappa > 1\) and the integration contour from \(\bar{z}\) to z passes to the right of \(\xi \), see Fig. 1. (Unless otherwise stated, we always consider complex powers defined using the principal branch of the complex logarithm.)

Fig. 1

The integration contour used in the definition (2.5) of \(\mathcal {M}(z, \xi )\) is a path from \(\bar{z}\) to z which passes to the right of \(\xi \)

Theorem 2.1

(Schramm’s formula for \(\hbox {SLE}_\kappa (2)\)) Let \(0 < \kappa \leqslant 4\). Let \(\xi > 0\) and consider chordal \(\hbox {SLE}_\kappa (2)\) started from \((0,\xi )\). Then the probability \(P(z, \xi )\) that a given point \(z = x + iy \in \mathbb {H}\) lies to the left of the curve is given by

$$\begin{aligned} P(z, \xi ) = \frac{1}{c_\alpha } \int _x^\infty {{\mathrm{Re}}\,}\mathcal {M}(x' +i y, \xi ) dx', \quad z \in \mathbb {H}, \quad \xi > 0, \end{aligned}$$
(2.6)

where the normalization constant \(c_\alpha \in {\mathbb {R}}\) is defined by

$$\begin{aligned} c_\alpha&= -\frac{2 \pi ^{3/2} \Gamma \left( \frac{\alpha -1}{2}\right) \Gamma \left( \frac{3 \alpha }{2}-1\right) }{\Gamma \left( \frac{\alpha }{2}\right) ^2 \Gamma (\alpha )}. \end{aligned}$$
(2.7)

The proof of Theorem 2.1 will be given in Sect. 5. The formula (2.6) for \(P(z,\xi )\) is motivated by the CFT and screening considerations of Sect. 4.
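
To evaluate (2.6) numerically, the contour in (2.5) may be realized, for instance, as the two straight segments from \(\bar{z}\) to a real point \(p > \xi \) and from p to z; along such a path the principal branches are continuous. The following Python sketch (illustrative only, with ad hoc names, a crude quadrature, and a truncated \(x'\)-integral) is not used anywhere in the proofs.

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def M_cal(z, xi, alpha, n=4001):
    """The function in (2.5); the contour is realized as the two straight
    segments zbar -> p -> z with p real, p > xi (principal branches)."""
    z = complex(z)
    zb = z.conjugate()
    p = xi + 1.0 + abs(z)
    t = np.linspace(1e-12, 1.0 - 1e-12, n)    # avoid evaluating exactly at zbar and z
    def f(u):
        return (u - z)**alpha * (u - zb)**(alpha - 2) * u**(-alpha/2) * (u - xi)**(-alpha/2)
    integral = 0j
    for a_, b_ in ((zb, p), (p, z)):
        fu = f(a_ + t*(b_ - a_))
        integral += (b_ - a_) * np.sum(0.5*(fu[:-1] + fu[1:])*np.diff(t))
    pref = z.imag**(alpha - 2) * z**(-alpha/2) * (z - xi)**(-alpha/2) \
           * zb**(1 - alpha/2) * (zb - xi)**(1 - alpha/2)
    return pref * integral

def schramm_P(z, xi, kappa, cutoff=200.0):
    """Crude evaluation of (2.6)-(2.7); the x'-integral is truncated at `cutoff`."""
    alpha = 8.0 / kappa
    c_alpha = -2*np.pi**1.5*gamma((alpha - 1)/2)*gamma(1.5*alpha - 1) \
              / (gamma(alpha/2)**2 * gamma(alpha))
    val, _ = quad(lambda xp: M_cal(complex(xp, z.imag), xi, alpha).real,
                  z.real, z.real + cutoff, limit=200)
    return val / c_alpha

print(schramm_P(-1.0 + 1.0j, xi=1.0, kappa=3.0))   # should lie in [0, 1]
```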

A point \(z \in \mathbb {H}\) lies to the left of both curves in a commuting system iff it lies to the left of the leftmost curve. Since each of the two curves of a commuting process has the distribution of an \(\hbox {SLE}_\kappa (2)\) (see Sect. 3.1.2), Theorem 2.1 can be interpreted as the following result for multiple SLE.

Corollary 2.2

(Schramm’s formula for multiple SLE) Let \(0 < \kappa \leqslant 4\). Let \(\xi > 0\) and consider a multiple \(\hbox {SLE}_\kappa \) system in \(\mathbb {H}\) started from \((0,\xi )\) and growing toward infinity. Then the probability \(P(z, \xi )\) that a given point \(z = x + iy \in \mathbb {H}\) lies to the left of both curves is given by (2.6).

Corollary 2.2 together with translation invariance immediately yields an expression for the probability that a point z lies to the left of a system of two SLEs started from two arbitrary points \((\xi ^1, \xi ^2)\) in \({\mathbb {R}}\). The probabilities that z lies to the right of or between the two curves then follow by symmetry. For completeness, we formulate this as another corollary.

Corollary 2.3

Let \(0 < \kappa \leqslant 4\). Suppose \(-\infty< \xi ^1< \xi ^2 < \infty \) and consider a multiple \(\hbox {SLE}_\kappa \) system in \(\mathbb {H}\) started from \((\xi ^1, \xi ^2)\) and growing toward infinity. Let \(P(z, \xi )\) denote the function in (2.6). Then the probability \(P_{left}(z, \xi ^1, \xi ^2)\) that a given point \(z = x + iy \in \mathbb {H}\) lies to the left of both curves is given by

$$\begin{aligned} P_{left}(z, \xi ^1, \xi ^2) = P(z - \xi ^1, \xi ^2 - \xi ^1); \end{aligned}$$

the probability \(P_{right}(z, \xi ^1, \xi ^2)\) that a point \(z \in \mathbb {H}\) lies to the right of both curves is

$$\begin{aligned} P_{right}(z, \xi ^1, \xi ^2) = P(-\bar{z} + \xi ^2, \xi ^2 - \xi ^1); \end{aligned}$$

and the probability \(P_{middle}(z, \xi ^1, \xi ^2)\) that z lies between the two curves is given by

$$\begin{aligned} P_{middle}(z, \xi ^1, \xi ^2) = 1 - P_{left}(z, \xi ^1, \xi ^2)- P_{right}(z, \xi ^1, \xi ^2). \end{aligned}$$

By letting \(\xi \rightarrow 0+\) in (2.6), we obtain proofs of formulas for “fused” paths. See Sect. 8 for a derivation of the following corollary.

Corollary 2.4

Let \(0 < \kappa \leqslant 4 \) and define \(P_{fusion}(z) = \lim _{\xi \downarrow 0} P(z,\xi )\), where \(P(z,\xi )\) is as in (2.6). Then

$$\begin{aligned} P_{fusion}(z) = \frac{\Gamma \left( \frac{\alpha }{2}\right) \Gamma (\alpha )}{2^{2 - \alpha }\pi \Gamma \left( \frac{3\alpha }{2} - 1\right) } \int _{\frac{x}{y}}^\infty S(t') dt', \end{aligned}$$
(2.8)

where the real-valued function S(t) is defined by

$$\begin{aligned} S(t)&= (1 + t^2)^{1- \alpha } \bigg \{{}_2F_1\bigg (\frac{1}{2} + \frac{\alpha }{2}, 1 - \frac{\alpha }{2}, \frac{1}{2}; - t^2\bigg )\\&\quad - \frac{2\Gamma \left( 1 +\frac{\alpha }{2}\right) \Gamma \left( \frac{\alpha }{2}\right) t}{\Gamma \left( \frac{1}{2} + \frac{\alpha }{2}\right) \Gamma \left( - \frac{1}{2} + \frac{\alpha }{2}\right) } {}_2F_1\bigg (1 + \frac{\alpha }{2}, \frac{3}{2} - \frac{\alpha }{2}, \frac{3}{2}; -t^2\bigg )\bigg \}, \quad t \in {\mathbb {R}}. \end{aligned}$$

Corollary 2.4 provides a proof of the predictions of [18] where the formula (2.8) was derived by solving a third order ODE obtained from so-called fusion rules. (We prove the result for \(\kappa \leqslant 4\) but the formulas match those from [18] in general.) We note that even given the explicit predictions of [18], it is not clear how to proceed to verify them rigorously without additional non-trivial information. Indeed, as soon as the evolution starts, the tips of the curves are separated and the system leaves the fused state. However, [16] provides a different rigorous approach by exploiting the hypoellipticity of the PDEs to show that the fused observables satisfy the higher order ODEs. In the special case \(\kappa =8/3\), the formula for \(P_{fusion}(z)\) was proved in [5] using Cardy and Simmons’ prediction [38] for a two-point Schramm formula.
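
The fused formula is also convenient for numerical evaluation, since \({}_2F_1\) is available in standard libraries. The following sketch (illustrative only; the helper names are ad hoc) evaluates S(t) and \(P_{fusion}\); the values should lie in [0, 1] and tend to 1 far to the left of the seed and to 0 far to the right.

```python
import numpy as np
from scipy.special import gamma, hyp2f1
from scipy.integrate import quad

def S(t, alpha):
    """The function S(t) of Corollary 2.4."""
    c = 2*gamma(1 + alpha/2)*gamma(alpha/2) / (gamma(0.5 + alpha/2)*gamma(alpha/2 - 0.5))
    return (1 + t**2)**(1 - alpha) * (
        hyp2f1(0.5 + alpha/2, 1 - alpha/2, 0.5, -t**2)
        - c*t*hyp2f1(1 + alpha/2, 1.5 - alpha/2, 1.5, -t**2))

def P_fusion(z, kappa):
    """Numerical evaluation of (2.8)."""
    alpha = 8.0 / kappa
    const = gamma(alpha/2)*gamma(alpha) / (2**(2 - alpha)*np.pi*gamma(1.5*alpha - 1))
    val, _ = quad(S, z.real/z.imag, np.inf, args=(alpha,))
    return const*val

for x in (-20.0, 0.0, 20.0):
    print(x, P_fusion(complex(x, 1.0), kappa=3.0))   # ~1 far left, ~0 far right
```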

2.2 The Green’s Function

Our second main result provides an explicit expression for the Green’s function for \(\hbox {SLE}_\kappa (2)\).

Let \(\alpha = 8/\kappa \). Define the function \(I(z,\xi ^1, \xi ^2)\) for \(z \in \mathbb {H}\) and \(-\infty< \xi ^1< \xi ^2 < \infty \) by

$$\begin{aligned} I(z,\xi ^1, \xi ^2) = \int _A^{(z+,\xi ^2+,z-,\xi ^2-)} (u - z)^{\alpha -1} (u - \bar{z})^{\alpha -1} (u - \xi ^1)^{-\frac{\alpha }{2}} (\xi ^2 - u)^{-\frac{\alpha }{2}} du, \end{aligned}$$
(2.9)

where \(A = (z + \xi ^2)/2\) is a basepoint and the Pochhammer integration contour is displayed in Fig. 2. More precisely, the integration contour begins at the base point A, encircles the point z once in the positive (counterclockwise) sense, returns to A, encircles \(\xi ^2\) once in the positive sense, returns to A, and so on. The points \(\bar{z}\) and \(\xi ^1\) are exterior to all loops. The factors in the integrand take their principal values at the starting point and are then analytically continued along the contour. The use of a Pochhammer contour ensures that the integrand is analytic everywhere on the contour despite the fact that the integrand involves (multiple-valued) non-integer powers.

Fig. 2

The Pochhammer integration contour in (2.9) is the composition of four loops based at the point \(A = (z + \xi ^2)/2\) halfway between z and \(\xi ^2\). The loop denoted by 1 is traversed first, then the loop denoted by 2 is traversed, and so on

For \(\alpha \in (1, \infty ) \smallsetminus {\mathbb {Z}}\), we define the function \({\mathcal {G}}(z, \xi ^1, \xi ^2)\) for \(z = x+iy \in \mathbb {H}\) and \(\xi ^1 < \xi ^2\) by

$$\begin{aligned} {\mathcal {G}}(z, \xi ^1, \xi ^2) = \frac{1}{\hat{c}}y^{\alpha + \frac{1}{\alpha } - 2} |z - \xi ^1|^{1 - \alpha } |z - \xi ^2|^{1 - \alpha } {\text {Im}} \big (e^{-i\pi \alpha } I(z,\xi ^1,\xi ^2) \big ), \end{aligned}$$
(2.10)

where the constant \(\hat{c} = \hat{c}(\kappa )\) is given by

$$\begin{aligned} \hat{c} = \frac{4 \sin ^2\left( \frac{\pi \alpha }{2}\right) \sin (\pi \alpha ) \Gamma \left( 1-\frac{\alpha }{2}\right) \Gamma \left( \frac{3 \alpha }{2}-1\right) }{\Gamma (\alpha )} \quad \text {with} \quad \alpha = \frac{8}{\kappa }. \end{aligned}$$
(2.11)

We extend this definition of \({\mathcal {G}}(z,\xi ^1,\xi ^2)\) to all \(\alpha > 1\) by continuity. The following lemma shows that even though \(\hat{c}\) vanishes as \(\alpha \) approaches an integer, the function \({\mathcal {G}}(z,\xi ^1,\xi ^2)\) has a continuous extension to integer values of \(\alpha \).
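
Although all subsequent arguments are analytic, formula (2.10) can also be evaluated numerically, which provides useful sanity checks. The sketch below (illustrative only) realizes the Pochhammer contour as four circular loops through A traversed in the order \(z+\), \(\xi ^2+\), \(z-\), \(\xi ^2-\) (our reading of Fig. 2; the radii keep \(\bar{z}\) and \(\xi ^1\) exterior for the sample point used), continues the branches by unwrapping arguments along the discretized path, and then checks the expected scale covariance \({\mathcal {G}}(sz, s\xi ^1, s\xi ^2) = s^{d-2}\, {\mathcal {G}}(z, \xi ^1, \xi ^2)\) with \(d = 1 + \kappa /8\), cf. (2.14) below.

```python
import numpy as np
from scipy.special import gamma

def pochhammer_path(z, xi2, n=20000):
    """Double loop based at A = (z + xi2)/2: positive loops around z and xi2,
    then the same two loops with negative orientation (our reading of Fig. 2)."""
    A = 0.5*(z + xi2)
    th = np.linspace(0.0, 2*np.pi, n)
    return np.concatenate([z + (A - z)*np.exp(1j*th),
                           xi2 + (A - xi2)*np.exp(1j*th),
                           z + (A - z)*np.exp(-1j*th),
                           xi2 + (A - xi2)*np.exp(-1j*th)])

def green_G(z, xi1, xi2, kappa, n=20000):
    """Rough numerical evaluation of (2.9)-(2.11) for non-integer alpha."""
    alpha = 8.0/kappa
    u = pochhammer_path(z, xi2, n)
    def cont_pow(base, expo):
        # analytic continuation along the path: principal value at the base point,
        # argument unwrapped so that it varies continuously
        return np.exp(expo*(np.log(np.abs(base)) + 1j*np.unwrap(np.angle(base))))
    f = (cont_pow(u - z, alpha - 1)*cont_pow(u - np.conj(z), alpha - 1)
         * cont_pow(u - xi1, -alpha/2)*cont_pow(xi2 - u, -alpha/2))
    I = np.sum(0.5*(f[:-1] + f[1:])*np.diff(u))      # trapezoid rule along the contour
    chat = 4*np.sin(np.pi*alpha/2)**2*np.sin(np.pi*alpha) \
           * gamma(1 - alpha/2)*gamma(1.5*alpha - 1)/gamma(alpha)
    return (z.imag**(alpha + 1/alpha - 2)*abs(z - xi1)**(1 - alpha)
            * abs(z - xi2)**(1 - alpha)*np.imag(np.exp(-1j*np.pi*alpha)*I)/chat)

z, xi1, xi2, kappa = 0.3 + 2.0j, -1.0, 1.0, 3.0
d = 1 + kappa/8
g1, g2 = green_G(z, xi1, xi2, kappa), green_G(2*z, 2*xi1, 2*xi2, kappa)
print(g1, g2/g1, 2.0**(d - 2))   # the last two numbers should agree
```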

Lemma 2.5

For each \(z \in \mathbb {H}\) and each \((\xi ^1, \xi ^2) \in {\mathbb {R}}^2\) with \(\xi ^1 < \xi ^2\), \({\mathcal {G}}(z,\xi ^1,\xi ^2)\) can be extended to a continuous function of \(\alpha \in (1, \infty )\).

Proof

See Appendix A. \(\square \)

The CFT and screening considerations described in Sect. 4 suggest that \({\mathcal {G}}\) is the Green’s function for \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^1, \xi ^2)\); that is, that \({\mathcal {G}}(z, \xi ^1, \xi ^2)\) provides the normalized probability that an \(\hbox {SLE}_\kappa (2)\) path originating from \(\xi ^1\) with force point \(\xi ^2\) passes through an infinitesimal neighborhood of z. Our next theorem establishes this rigorously.

In the following statements, \(\Upsilon _\infty (z)\) denotes 1/2 times the conformal radius seen from z of the complement in \(\mathbb {H}\) of the curve(s) under consideration (as indicated by the probability measure). For example, in the case of two paths the conformal radius is with respect to the component of \(\mathbb {H} \smallsetminus (\gamma _{1,\infty } \cup \gamma _{2,\infty })\) containing z.

Theorem 2.6

(Green’s function for \(\hbox {SLE}_\kappa (2)\)) Let \(0< \kappa \leqslant 4\). Let \(-\infty< \xi ^1< \xi ^2 < \infty \) and consider chordal \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^1,\xi ^2)\). Then, for each \(z = x + iy \in \mathbb {H}\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon ^{d-2}\mathbf {P}^2_{\xi ^1, \xi ^2} \left( \Upsilon _\infty (z) \leqslant \epsilon \right) = c_* {\mathcal {G}}(z,\xi ^1, \xi ^2), \end{aligned}$$
(2.12)

where \(d=1+\kappa /8\), \(\mathbf {P}^2\) is the \(\hbox {SLE}_\kappa (2)\) measure, the function \({\mathcal {G}}\) is defined in (2.10), and the constant \(c_* = c_*(\kappa )\) is defined by

$$\begin{aligned} c_* = \frac{2}{\int _0^\pi \sin ^{4a} x dx} = \frac{2\Gamma \left( 1+2a\right) }{\sqrt{\pi } \Gamma \left( \frac{1}{2}+2a\right) } \quad \text {with} \quad a = \frac{2}{\kappa }. \end{aligned}$$
(2.13)

The proof of Theorem 2.6 will be presented in Sect. 6.
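
The two expressions for \(c_*\) in (2.13) agree by a standard Wallis-type integral identity; this is immediate to confirm numerically (illustrative only):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a = 2.0/3.0                                  # a = 2/kappa with kappa = 3
integral, _ = quad(lambda x: np.sin(x)**(4*a), 0.0, np.pi)
print(2.0/integral, 2*gamma(1 + 2*a)/(np.sqrt(np.pi)*gamma(0.5 + 2*a)))   # equal
```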

Remark

The function \({\mathcal {G}}(z, \xi ^1, \xi ^2)\) can be written as

$$\begin{aligned} {\mathcal {G}}(z, \xi ^1, \xi ^2) = ({\text {Im}} z)^{d-2} h(\theta ^1, \theta ^2), \quad z \in \mathbb {H}, \quad -\infty< \xi ^1< \xi ^2 < \infty , \end{aligned}$$
(2.14)

where h is a function of \(\theta ^1 = \arg (z - \xi ^1)\) and \(\theta ^2 = \arg (z - \xi ^2)\). This is consistent with the expected translation invariance and scale covariance of the Green’s function.

In the appendix, we derive formulas for \({\mathcal {G}}(z, \xi ^1, \xi ^2)\) when \(\alpha \) is an integer. In particular, we obtain a proof of the following proposition which provides explicit formulas for the \(\hbox {SLE}_\kappa (2)\) Green’s function in the case of \(\kappa = 4\), \(\kappa = 8/3\), and \(\kappa = 2\).

Proposition 2.7

For \(\kappa = 4\), \(\kappa = 8/3\), and \(\kappa = 2\) (i.e. for \(\alpha = 2,3,4\)), the \(\hbox {SLE}_\kappa (2)\) Green’s function is given by Eq. (2.14) where \(h(\theta ^1, \theta ^2)\) is given explicitly by

$$\begin{aligned} h(\theta ^1, \theta ^2) =&\; \frac{1}{4 \pi \sin (\theta ^1-\theta ^2)}\big \{\sin (2 \theta ^1-2\theta ^2)+2 \theta ^1(1-\cos {2 \theta ^2})+2 \theta ^2 (\cos {2 \theta ^1} -1)\nonumber \\&-\sin {2 \theta ^1} +\sin {2 \theta ^2}\big \}, \quad \kappa = 4, \end{aligned}$$
(2.15)
$$\begin{aligned} h(\theta ^1, \theta ^2) =&\; \frac{1}{30 \pi (\cos (\theta ^1-\theta ^2)+1)}\bigg \{\sqrt{\sin \theta ^1 \sin \theta ^2}\bigg [-6 \cos \bigg (\frac{\theta ^1-3\theta ^2}{2}\bigg )\nonumber \\&+\,\cos \bigg (\frac{3 \theta ^1- 5 \theta ^2}{2}\bigg ) +\cos \bigg (\frac{5 \theta ^1 - 3 \theta ^2}{2}\bigg ) -6 \cos \bigg (\frac{3 \theta ^1 - \theta ^2}{2}\bigg )\nonumber \\&-\,38 \cos \bigg (\frac{\theta ^1+\theta ^2}{2}\bigg ) +20 \cos \bigg (\frac{3 \theta ^1+ 3\theta ^2}{2}\bigg )\nonumber \\&+\,14 \cos \bigg (\frac{5 \theta ^1+\theta ^2}{2}\bigg ) +14 \cos \bigg (\frac{\theta ^1+5 \theta ^2}{2}\bigg )\bigg ]-\,2 \cos ^2\left( \frac{\theta ^1-\theta ^2}{2}\right) \nonumber \\&\times \, \big [-9 \sin {2 \theta ^1} \sin {2 \theta ^2}+ (7 \cos {2\theta ^2}+8)\cos {2 \theta ^1}+8 \cos 2 \theta ^2- 23\big ] \nonumber \\&\times \, \arg \left( \cos \left( \frac{\theta ^1+\theta ^2}{2}\right) +i \sqrt{\sin \theta ^1 \sin \theta ^2}\right) \bigg \}, \quad \kappa = 8/3, \end{aligned}$$
(2.16)

and

$$\begin{aligned}&h(\theta ^1, \theta ^2) \nonumber \\&\quad = \frac{1}{192 \pi }\bigg \{\frac{72 \sin ^5(\theta ^1) \cos (\theta ^2) \cos (\theta ^1-3\theta ^2)}{\sin ^3(\theta ^1-\theta ^2)} +\frac{\sin ^3(\theta ^2)}{(\cot \theta ^1 -\cot \theta ^2)^3}\nonumber \\&\qquad \times \, \bigg [ 96\frac{(3 \theta ^1 \cot \theta ^2 +2)\cot \theta ^1 +\theta ^1(3-2 \csc ^2\theta ^1)-3 \cot \theta ^2}{\sin \theta ^1}\nonumber \\&\qquad +\,\csc ^6(\theta ^2)\big [3 \big (16 \theta ^2 (3 \sin (\theta ^1-2 \theta ^2)+\sin \theta ^1) \sin \theta ^1 + 5 \sin {2\theta ^2} -4 \sin {4 \theta ^2}\big ) \sin \theta ^1\nonumber \\&\qquad +\,6 \cos (\theta ^1-6\theta ^2)-\cos (3\theta ^1- 6\theta ^2) +(75 \cos {2 \theta ^2} - 30 \cos {4 \theta ^2} -33)\cos \theta ^1\nonumber \\&\qquad -\, 17 \cos {3 \theta ^1}\big ]\bigg ]\bigg \}, \quad \kappa = 2. \end{aligned}$$
(2.17)

It is possible to derive an explicit expression for the Green’s function for a system of two multiple SLEs as a consequence of Theorem 2.6. To this end, we need a correlation estimate which expresses the intuitive fact that it is very unlikely that both curves pass near a given point \(z \in \mathbb {H}\).

Lemma 2.8

Let \(0< \kappa \leqslant 4\). Then,

$$\begin{aligned} \lim _{\epsilon \downarrow 0} \epsilon ^{d-2}\mathbf {P}_{\xi ^1, \xi ^2} \left( \Upsilon _\infty (z) \leqslant \epsilon \right) = \lim _{\epsilon \downarrow 0} \epsilon ^{d-2}\Big [\mathbf {P}_{\xi ^1, \xi ^2}^2 \left( \Upsilon _\infty (z) \leqslant \epsilon \right) + \mathbf {P}_{\xi ^2, \xi ^1}^2 \left( \Upsilon _\infty (z) \leqslant \epsilon \right) \Big ], \end{aligned}$$

where \(\mathbf {P}_{\xi ^1, \xi ^2}\) denotes the law of a system of two multiple \(\hbox {SLE}_\kappa \) in \(\mathbb {H}\) started from \((\xi ^1, \xi ^2)\) and aiming for \(\infty \), and \(\mathbf {P}^2_{\xi ^1, \xi ^2}\) denotes the law of chordal \(\hbox {SLE}_\kappa (2)\) in \(\mathbb {H}\) started from \((\xi ^1, \xi ^2)\).

The proof of Lemma 2.8 will be given in Sect. 7.

Assuming Lemma 2.8, it follows immediately from Theorem 2.6 that the Green’s function for a system of multiple SLEs started from \((-\xi , \xi )\) is given by

$$\begin{aligned} G_\xi (z) = {\mathcal {G}}(z,-\xi ,\xi ) + {\mathcal {G}}(-\bar{z},-\xi ,\xi ), \quad z \in \mathbb {H}, \quad \xi > 0. \end{aligned}$$

In other words, given a system of two multiple \(\hbox {SLE}_\kappa \) paths started from \(-\xi \) and \(\xi \) respectively, \(G_\xi (z)\) provides the normalized probability that at least one of the two curves passes through an infinitesimal neighborhood of z. We formulate this as a corollary.

Corollary 2.9

(Green’s function for multiple SLE) Let \(0< \kappa \leqslant 4\). Let \(\xi > 0\) and consider a system of two multiple \(\hbox {SLE}_\kappa \) paths in \(\mathbb {H}\) started from \((-\xi , \xi )\) and growing towards \(\infty \). Then, for each \(z = x + iy \in \mathbb {H}\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon ^{d-2}\mathbf {P}_{-\xi , \xi } \left( \Upsilon _\infty (z) \leqslant \epsilon \right) = c_* G_{\xi }(z), \end{aligned}$$
(2.18)

where \(d=1+\kappa /8\), the constant \(c_* = c_*(\kappa )\) is given by (2.13), and the function \(G_\xi \) is defined for \(z \in \mathbb {H}\) and \(\xi > 0\) by

$$\begin{aligned} G_{\xi }(z) = \frac{1}{\hat{c}} y^{\alpha + \frac{1}{\alpha } - 2} |z + \xi |^{1 - \alpha } |z - \xi |^{1 - \alpha } {\text {Im}} \big [e^{-i\pi \alpha }(I(z,-\xi ,\xi ) + I(-\bar{z},-\xi ,\xi ))\big ]. \end{aligned}$$

Remark 2.10

If the system is started from two arbitrary points \((\xi ^1, \xi ^2) \in {\mathbb {R}}^2\) with \(\xi ^1 < \xi ^2\), then it follows immediately from (2.18) and translation invariance that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon ^{d-2}\mathbf {P}_{\xi ^1, \xi ^2} \left( \Upsilon _\infty \leqslant \epsilon \right) = c_* G_{\frac{\xi ^2 - \xi ^1}{2}}\bigg (z - \frac{\xi ^1 + \xi ^2}{2}\bigg ). \end{aligned}$$

We will prove Theorem 2.6 by establishing two independent propositions which, when combined, imply the theorem. The first of these propositions (Proposition 2.11) establishes existence of a Green’s function for \(\hbox {SLE}_\kappa (\rho )\) and provides a representation for this Green’s function in terms of an expectation with respect to two-sided radial \(\hbox {SLE}_\kappa \). For the proof of Theorem 2.6, we only need this proposition for \(\rho = 2\). However, since it is no more difficult to state and prove it for a suitable range of positive \(\rho \), we consider the general case.

Proposition 2.11

(Existence and representation of Green’s function for \(\hbox {SLE}_\kappa (\rho )\)) Let \(0 < \kappa \leqslant 4\) and \(0 \leqslant \rho < 8-\kappa \). Given two points \(\xi ^1, \xi ^2 \in {\mathbb {R}}\) with \(\xi ^1 < \xi ^2\), consider chordal \(\hbox {SLE}_\kappa (\rho )\) started from \((\xi ^1, \xi ^2)\). Then, for each \(z \in \mathbb {H}\),

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\epsilon ^{d-2} \mathbf {P}^{\rho }_{\xi ^1, \xi ^2} \left( \Upsilon _\infty \leqslant \epsilon \right) = c_* G^{\rho }(z, \xi ^1, \xi ^2), \end{aligned}$$

where the \(\hbox {SLE}_\kappa (\rho )\) Green’s function \(G^{\rho }\) is given by

$$\begin{aligned} G^{\rho }(z, \xi ^1, \xi ^2) = G(z-\xi ^1) \, \mathbf {E}^*_{\xi ^1,z} \left[ M_T^{(\rho )} \right] . \end{aligned}$$
(2.19)

Here \(G(z) = ({\text {Im}} z)^{d-2} \sin ^{4a-1}( \arg \, z)\) is the Green’s function for chordal \(\hbox {SLE}_\kappa \) in \(\mathbb {H}\) from 0 to \(\infty \), the martingale \(M^{(\rho )}\) is defined in (3.8), \(\mathbf {E}^*_{\xi ^1,z}\) denotes expectation with respect to two-sided radial \(\hbox {SLE}_\kappa \) from \(\xi ^1\) through z, stopped at T, the hitting time of z, and the constant \(c_*\) is given by (2.13).

We expect that the analogous result is true for \(\kappa \in (0,8)\) and for a wider range of \(\rho \). We restrict to the stated ranges because these assumptions simplify some of the arguments, e.g., due to the relationship between the boundary exponent and the dimension of the path.

The next result (Proposition 2.12) shows that the function \({\mathcal {G}}(z, \xi ^1, \xi ^2)\) predicted by CFT and defined in (2.10) can be represented in terms of an expectation with respect to two-sided radial \(\hbox {SLE}_\kappa \). Since this representation coincides with the representation in (2.19), Theorem 2.6 will follow immediately once we establish Propositions 2.11 and 2.12.

Proposition 2.12

(Representation of \({\mathcal {G}}\)) Let \(0 < \kappa \leqslant 4\) and let \(\xi ^1, \xi ^2 \in {\mathbb {R}}\) with \(\xi ^1 < \xi ^2\). The function \({\mathcal {G}}(z,\xi ^1,\xi ^2)\) defined in (2.10) satisfies

$$\begin{aligned} {\mathcal {G}}(z,\xi ^1,\xi ^2) = G(z-\xi ^1) \, \mathbf {E}^*_{\xi ^1,z} \left[ M_T^{(2)} \right] , \quad z \in \mathbb {H}, \quad \xi ^1 < \xi ^2, \end{aligned}$$
(2.20)

where \(G(z) = ({\text {Im}} z)^{d-2} \sin ^{4a-1}( \arg \, z)\) is the Green’s function for chordal \(\hbox {SLE}_\kappa \) in \(\mathbb {H}\) from 0 to \(\infty \) and \(\mathbf {E}^*_{\xi ^1,z}\) denotes expectation with respect to two-sided radial \(\hbox {SLE}_\kappa \) from \(\xi ^1\) through z, stopped at T, the hitting time of z.

Remark

Note that Eq. (2.20) gives a formula for the two-sided radial SLE observable,

$$\begin{aligned} \mathbf {E}^*_{\xi ^1,z} \left[ M_T^{(2)} \right] = \frac{{\mathcal {G}}(z,\xi ^1,\xi ^2)}{G(z-\xi ^1)}, \end{aligned}$$

and as a consequence we obtain smoothness and the fact that it satisfies the expected PDE.

The proofs of Propositions 2.11 and 2.12 are presented in Sects. 6.1 and  6.2, respectively.

In Sect. 8.2, we obtain fusion formulas by letting \(\xi \rightarrow 0+\). The formulas simplify for some values of \(\kappa \). In particular, we will prove the following result.

Proposition 2.13

(Fused Green’s functions) Suppose \(\kappa = 4, 8/3\), or 2. Consider a system of two fused multiple \(\hbox {SLE}_\kappa \) paths in \(\mathbb {H}\) started from 0 and growing toward \(\infty \). Then, for each \(z = x + iy \in \mathbb {H}\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon ^{d-2}\mathbf {P}_{0,0+} \left( \Upsilon _\infty (z) \leqslant \epsilon \right) = c_* ({\mathcal {G}}_f(z) + {\mathcal {G}}_f(-\bar{z})), \end{aligned}$$

where \(d=1+\kappa /8\), the constant \(c_* = c_*(\kappa )\) is given by (2.13), and the function \({\mathcal {G}}_f\) is defined by

$$\begin{aligned} {\mathcal {G}}_f(x+iy) = y^{d-2} h_{f}(\theta ) \end{aligned}$$

where \(\theta = \arg z\) and \(h_f(\theta )\) is given explicitly by

$$\begin{aligned} h_f(\theta ) = {\left\{ \begin{array}{ll} \frac{2}{\pi }(\sin \theta - \theta \cos \theta ) \sin \theta , &{} \kappa = 4,\\ \frac{8}{15 \pi } (4 \theta -3 \sin {2 \theta } +2 \theta \cos {2 \theta })\sin ^2\theta , &{} \kappa = 8/3,\\ \frac{1}{12 \pi }(27 \sin \theta + 11 \sin {3 \theta } -6 \theta (9 \cos \theta + \cos {3 \theta })) \sin ^3\theta , &{} \kappa = 2, \end{array}\right. } \ 0< \theta < \pi . \end{aligned}$$
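
As a quick consistency check between Proposition 2.7 and Proposition 2.13, the \(\kappa = 4\) formula (2.15) evaluated at nearly equal angles \(\theta ^1 \approx \theta ^2 \approx \theta \) should reproduce \(h_f(\theta )\) above, as one expects from the fusion procedure of Sect. 8.2. A small illustrative snippet:

```python
import numpy as np

def h_kappa4(t1, t2):
    """h(theta^1, theta^2) for kappa = 4, transcribed from (2.15)."""
    num = (np.sin(2*t1 - 2*t2) + 2*t1*(1 - np.cos(2*t2))
           + 2*t2*(np.cos(2*t1) - 1) - np.sin(2*t1) + np.sin(2*t2))
    return num / (4*np.pi*np.sin(t1 - t2))

def hf_kappa4(t):
    """h_f(theta) for kappa = 4, from Proposition 2.13."""
    return 2/np.pi*(np.sin(t) - t*np.cos(t))*np.sin(t)

theta, eps = 2.0, 1e-4
print(h_kappa4(theta - eps, theta + eps), hf_kappa4(theta))   # nearly equal
```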

2.3 Remarks

We end this section by making a few remarks.

  • We believe the method used in this paper will generalize to produce analogous results for observables for \(N \geqslant 3\) multiple SLE paths depending on one interior point. This would require \(N-1\) screening insertions, and the integrals will then be \((N-1)\)-fold iterated contour integrals.

  • In [23, 24] screening integrals for SLE boundary observables (such as the ordered multipoint boundary Green’s function) are given and shown to be closely related to a particular quantum group. In fact, this algebraic structure is used to systematically make the difficult choices of integration contours. It seems reasonable to expect that a similar connection exists in our setting as well, allowing for an efficient generalization to several commuting SLE curves, but we will not pursue this here.

  • Another way of viewing the system of two multiple SLEs growing towards \(\infty \) is as one SLE path conditioned to hit a boundary point, also known as two-sided chordal SLE. Indeed, the extra \(\rho =2\) at the second seed point forces a \(\rho = \kappa -8\) at \(\infty \).

  • Suppose one has an \(\hbox {SLE}_{\kappa }\) martingale and wants to construct a similar martingale for \(\hbox {SLE}_{\kappa }(\rho )\). The first idea that comes to mind is to try to “compensate” the \(\hbox {SLE}_\kappa \) martingale by multiplying by a differentiable process. In the cases we consider this method does not give the correct observables (the boundary behavior is not correct), but rather corresponds to a change of coordinates moving the target point at \(\infty \).

3 Preliminaries

Unless specified otherwise, all complex powers are defined using the principal branch of the logarithm, that is, \(z^\alpha = e^{\alpha (\ln |z| + i {{\mathrm{Arg}}\,}z)}\) where \({{\mathrm{Arg}}\,}z \in (-\pi , \pi ]\). We write \(z = x + iy\) and let

$$\begin{aligned} \partial = \frac{1}{2}\bigg ( \frac{\partial }{\partial x} - i \frac{\partial }{\partial y}\bigg ), \quad \bar{\partial } = \frac{1}{2}\bigg ( \frac{\partial }{\partial x} + i \frac{\partial }{\partial y}\bigg ). \end{aligned}$$

We let \(\mathbb {H} = \{z \in {\mathbb {C}}: {{\mathrm{Im}}\,}z > 0\}\) and \(\mathbb {D} = \{z \in {\mathbb {C}}: |z| < 1\}\) denote the open upper half-plane and the open unit disk, respectively. The open disk of radius \(\epsilon > 0\) centered at \(z \in {\mathbb {C}}\) will be denoted by \(\mathcal {B}_\epsilon (z) = \{w \in {\mathbb {C}}: |w-z| < \epsilon \}\). Throughout the paper, \(c > 0\) and \(C > 0\) will denote generic constants which may change within a computation.

Let D be a simply connected domain with two distinct boundary points p, q (prime ends). There is a conformal transformation \(f:D \rightarrow \mathbb {H}\) taking p to 0 and q to \(\infty \); in fact, f is determined only up to a final scaling. We choose one such f, but the quantities we define do not depend on the choice. Given \(z \in D\), we define the conformal radius \(r_D(z)\) of D seen from z by letting

$$\begin{aligned} \Upsilon _D(z) = \frac{{\text {Im}} f(z)}{|f'(z)|}, \quad r_D(z) = 2 \Upsilon _D(z). \end{aligned}$$

Schwarz’ lemma and Koebe’s 1/4 theorem give the distortion estimates

$$\begin{aligned} {\text {dist}}(z,\partial D)/2 \leqslant \Upsilon _D(z) \leqslant 2 {\text {dist}}(z,\partial D). \end{aligned}$$
(3.1)
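
As a concrete example (standard, and not needed later): for the unit disk \(D = \mathbb {D}\) with \(p = -1\) and \(q = 1\), one may take \(f(w) = i(1+w)/(1-w)\), and the definition gives \(\Upsilon _{\mathbb {D}}(z) = (1-|z|^2)/2\), so \(r_{\mathbb {D}}(z) = 1-|z|^2\) is the familiar conformal radius of the disk; the bounds (3.1) are then easy to check numerically (illustrative sketch):

```python
import numpy as np

def upsilon_disk(z):
    """Upsilon_D(z) for D the unit disk, computed from the definition with
    f(w) = i(1+w)/(1-w), which maps D onto H with f(-1) = 0 and f(1) = infinity."""
    f = 1j*(1 + z)/(1 - z)
    fprime = 2j/(1 - z)**2
    return f.imag/abs(fprime)

for z in (0.0 + 0.0j, 0.5 + 0.3j, -0.8 + 0.1j):
    ups, dist = upsilon_disk(z), 1 - abs(z)        # dist(z, boundary of D) = 1 - |z|
    print(ups, (1 - abs(z)**2)/2, dist/2 <= ups <= 2*dist)   # (3.1) holds
```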

We define

$$\begin{aligned} S_{D,p,q}(z) = \sin [\arg f(z) ], \quad S(z) = S_{\mathbb {H}, 0, \infty }(z), \end{aligned}$$

and note that this is a conformal invariant. Suppose D is a Jordan domain and that \(J_-,J_+\) are the boundary arcs \(f^{-1}(\mathbb {R}_-)\) and \(f^{-1}(\mathbb {R}_+)\), respectively. Let \(\omega _D(z, E)\) denote the harmonic measure of \(E \subset \partial D\) in D from z. Then it is easy to see that

$$\begin{aligned} S_{D,p,q}(z) \asymp \min \{\omega _D(z, J_-), \, \omega _D(z, J_+) \}, \end{aligned}$$
(3.2)

with the implicit constants universal. By conformal invariance an analogous statement holds for any simply connected domain different from \(\mathbb {C}\). We will use this relation several times without explicitly saying so in order to estimate \(S_{D,p,q}\).

In many places we will estimate harmonic measure using the following lemma often referred to as the Beurling estimate. It is derived from Beurling’s projection theorem, see for example Theorem 9.2 and Corollary 9.3 of [19].

Lemma 3.1

(Beurling estimate) There is a constant \(C < \infty \) such that the following holds. Suppose K is a connected set in \(\overline{\mathbb {D}}\) such that \(K \cap \partial \mathbb {D} \ne \emptyset \). Then

$$\begin{aligned} \omega _{\mathbb {D} \smallsetminus K}(0, \partial \mathbb {D}) \leqslant C \, ({\text {dist}}(0, K))^{1/2}. \end{aligned}$$

3.1 Schramm–Loewner Evolution

Let \(0< \kappa < 8\). Throughout the paper we will use the following parameters:

$$\begin{aligned} a = \frac{2}{\kappa }, \quad r=r_{\kappa }(\rho )=\frac{\rho }{\kappa } = \frac{\rho a}{2}, \quad d=1+\frac{1}{4a}, \quad \beta = 4a-1. \end{aligned}$$

We will also sometimes write \(\alpha = 4a\). The assumption \(\kappa = 2/a< 8\) implies that \(\alpha > 1\).

We will work with the \(\kappa \)-dependent Loewner equation

$$\begin{aligned} \partial _t g_t(z) = \frac{a}{g_t(z) - \xi _t}, \quad g_0(z) = z, \end{aligned}$$
(3.3)

where \(\xi _t\), \(t \geqslant 0\), is the (continuous) Loewner driving term. The solution is a family of conformal maps \((g_t(z))_{t \geqslant 0}\) called the Loewner chain of \((\xi _t)_{t \geqslant 0}\). The \(\hbox {SLE}_\kappa \) Loewner chain is obtained by taking the driving term to be a standard Brownian motion and \(a=2/\kappa \). The chordal \(\hbox {SLE}_\kappa \) path is the continuous curve connecting 0 with \(\infty \) in \(\mathbb {H}\) defined by

$$\begin{aligned} \gamma (t) = \lim _{y \downarrow 0} g^{-1}_t(\xi _t + iy), \quad \gamma _t: = \gamma [0,t]. \end{aligned}$$
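
For visualization, the trace can be approximated by sampling the driving Brownian motion on a grid, treating it as constant on each small time interval, and composing the inverse maps of the corresponding slit maps. A minimal illustrative sketch, using the normalization (3.3):

```python
import numpy as np

def sle_trace(kappa, T=1.0, n=1000, seed=1):
    """Crude approximation of the chordal SLE_kappa trace, gamma(t_k) = g_{t_k}^{-1}(xi_{t_k}),
    for the Loewner equation (3.3) with a piecewise-constant driving term."""
    rng = np.random.default_rng(seed)
    a = 2.0 / kappa
    dt = T / n
    xi = np.concatenate(([0.0], np.cumsum(np.sqrt(dt)*rng.standard_normal(n))))
    trace = np.zeros(n + 1, dtype=complex)
    for k in range(1, n + 1):
        w = complex(xi[k])
        for j in range(k, 0, -1):
            # inverse of the one-step map g(w) = u + sqrt((w - u)^2 + 2*a*dt)
            u = xi[j]
            w = u + 1j*np.sqrt(2*a*dt - (w - u)**2)
        trace[k] = w
    return trace

gamma_path = sle_trace(kappa=3.0)
print(gamma_path[-1])   # a point of the approximate trace in the upper half-plane
```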

We write \(H_t\) for the simply connected domain given by taking the unbounded component of \(\mathbb {H}\smallsetminus \gamma _t\). Given a simply connected domain D with distinct boundary points pq, we define chordal \(\hbox {SLE}_\kappa \) in D from p to q by conformal invariance. We write

$$\begin{aligned} S_t(z) = S_{H_t, \gamma (t), \infty }(z), \quad \Upsilon _t(z) = \Upsilon _{H_t}(z) = \frac{{\text {Im}} g_t(z)}{|g'_t(z)|}. \end{aligned}$$
(3.4)

We will make use of the following sharp one-point estimate which also defines the Green’s function for chordal \(\hbox {SLE}_\kappa \), see Lemma 2.10 of [30].

Lemma 3.2

(Green’s function for chordal \(\hbox {SLE}_\kappa \)) Suppose \(0< \kappa < 8\). There exists a constant \(c > 0\) such that the following holds. Let \(\gamma \) be \(\hbox {SLE}_\kappa \) in D from p to q, where D is a simply connected domain with distinct boundary points (prime ends) p, q. As \(\epsilon \rightarrow 0\) the following estimate holds uniformly with respect to all \(z \in D\) satisfying \({\text {dist}}(z,\partial D) \geqslant 2\epsilon \):

$$\begin{aligned} \mathbf {P}\left( \Upsilon _\infty (z) \leqslant \epsilon \right) = c_{*}\epsilon ^{2-d}G_D(z;p,q)\left[ 1+O(\epsilon ^c)\right] , \end{aligned}$$

where, by definition,

$$\begin{aligned} G_D(z;p,q) = \Upsilon _D(z)^{d-2} S_{D,p,q}(z)^\beta \end{aligned}$$

is the Green’s function for \(\hbox {SLE}_\kappa \) from p to q in D, and \(c_*\) is the constant defined in (2.13).

Remark

Using the relation between \(\Upsilon _\infty (z)\) and Euclidean distance, Lemma 3.2 shows that \(\mathbf {P}\left( {\text {dist}}(z, \gamma ) \leqslant \epsilon \right) \leqslant C \epsilon ^{2-d}G_D(z;p,q)\). In fact, the statement of Lemma 3.2 holds with \(\Upsilon _\infty (z)\) replaced by Euclidean distance, with another constant in place of \(c_{*}\).

We also need to use a boundary estimate for SLE which is convenient to express in terms of extremal distance, see Chap. IV of [19] for the definition and basic properties we use here. For a domain D with \(E,F \subset \overline{D}\), we write \(d_D(E,F)\) for the conformally invariant extremal distance between E and F in D. Recall that if \(D = \{z: \, r< |z| < R\}\) is the round annulus and E, F are the two boundary components, then \(d_D(E,F) = \ln (R/r)/(2\pi )\). A crosscut \(\eta \) of D is an open Jordan arc in D with the property that the closure of \(\eta \) equals \(\eta \cup \{x,y\}\), where \(x,y \in \partial D\), and x, y may coincide.

Fig. 3

The domain D and the crosscut \(\eta \) of Lemma 3.3

Lemma 3.3

Let \(0< \kappa < 8\). Suppose D is a simply connected Jordan domain and let \(p,q \in \partial D\) be two distinct boundary points. Write \(J_{-}, J_{+}\) for the boundary arcs connecting q with p and p with q in the counterclockwise direction, respectively. Suppose \(\eta \) is a crosscut of D starting and ending on \(J_{+}\), see Fig. 3. Then, if \(\gamma \) is chordal \(\hbox {SLE}_{\kappa }\) in D from p to q,

$$\begin{aligned} \mathbf {P}\left( \gamma \cap \eta \ne \emptyset \right) \leqslant C \, e^{- \beta \pi d_{D }(J_-, \eta )}, \end{aligned}$$
(3.5)

where \(\beta = 4a-1\) and the constant \(C \in (0, \infty )\) is independent of D, p, q, and \(\eta \).

Proof

By conformal invariance, we may assume \(D = \mathbb {H}, p=0, q = \infty \) and that \(\eta \) separates 1 from \(\infty \). Set \(\delta = \max \{|z-1|: z \in \eta \}\) and let \(w \in \eta \) be a point such that \(|w-1| = \delta \). Then the unbounded annulus A whose boundary components are \({\mathbb {R}}_-\) and \(\eta \cup \bar{\eta }\) separates w and 1 from \({\mathbb {R}}_-\). Hence, first using the symmetry rule for extremal distance and then Teichmüller’s module theorem (see Chap. II.1.3 and Eq. II.2.10 of [31]) and the relation between extremal distance and module

$$\begin{aligned} d_{\mathbb {H} }({\mathbb {R}}_-, \eta ) = 2 d_{\mathbb {C}}({\mathbb {R}}_-, \eta \cup \bar{\eta }) < \frac{2}{\pi }\ln \left( 4/\sqrt{ \delta /(1+\delta ) } \right) \leqslant \frac{1}{\pi } \ln (1/(\delta \wedge 1)) + C. \end{aligned}$$

(Note that [31] defines the module of a round annulus by \(\ln (R/r)\), i.e., without the \(1/2\pi \) factor.) Hence

$$\begin{aligned} \delta \wedge 1 \leqslant C e^{-\pi d_{\mathbb {H} }({\mathbb {R}}_-, \eta )}. \end{aligned}$$
(3.6)

On the other hand, comparing with a half-disk of radius \(\delta \) about 1, the standard boundary estimate for \(\hbox {SLE}_{\kappa }\) in the upper half-plane [1, Theorem 1.1] implies that \(\mathbf {P}(\gamma \cap \eta \ne \emptyset ) \leqslant C \,\delta ^{\beta }\), which together with (3.6) gives the desired bound. \(\square \)

3.1.1 \(\hbox {SLE}_{\kappa }(\rho )\)

Let \(0< \kappa < 8\). We will work with \(\hbox {SLE}_\kappa (\rho )\), for \(\rho \in \mathbb {R}\) chosen appropriately, as defined by weighting \(\hbox {SLE}_\kappa \) by a local martingale via Girsanov’s theorem; we will explain this below. Let \((\xi ^1, \xi ^2) \in {\mathbb {R}}^2\) be given with \( \xi ^1 < \xi ^2\). Suppose \((\xi ^1_t)_{t \geqslant 0}\) is Brownian motion started from \(\xi ^1\) under the measure \(\mathbf {P}\), with filtration \(\mathcal {F}_t\). We refer to \(\mathbf {P}\) as the \(\hbox {SLE}_\kappa \) measure. Let \((g_t)_{t \geqslant 0}\) be the \(\hbox {SLE}_\kappa \) Loewner flow defined by Eq. (3.3) with \(\xi _t = \xi _t^1\) and set

$$\begin{aligned} \xi _t^2:=g_t(\xi ^2). \end{aligned}$$
(3.7)

We call \(\xi ^2\) the force point. Define

$$\begin{aligned} \lambda (r) = \frac{r}{2 a}\left( r - \beta \right) , \quad \zeta (r)=\lambda (-r)-r = \frac{r}{2a} \left( r+2a-1\right) . \end{aligned}$$

Note that \(\zeta \geqslant 0\) whenever \(0< \kappa \leqslant 4\) and \(r \geqslant 0\). Itô’s formula shows that

$$\begin{aligned} M_t^{(\rho )} = \left( \frac{\xi ^2_t-\xi ^1_t}{\xi ^2-\xi ^1}\right) ^{r} g'_t(\xi ^2)^{\zeta (r)} , \quad t \geqslant 0, \end{aligned}$$
(3.8)

is a local \(\mathbf {P}\)-martingale for any \(\rho \in {\mathbb {R}}\), where \(r=\rho /\kappa \). In fact,

$$\begin{aligned} \frac{dM_t^{(\rho )}}{M_t^{(\rho )}}=\frac{-r}{\xi ^2_t-\xi ^1_t}d\xi ^1_t. \end{aligned}$$
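
That the drift of \(M^{(\rho )}\) vanishes, and that its logarithmic derivative takes the stated form, can be confirmed by a short symbolic computation. In the illustrative sketch below we write \(Y = \xi ^2_t - \xi ^1_t\) and \(D = g_t'(\xi ^2)\) and omit the constant factor \((\xi ^2 - \xi ^1)^{-r}\), which affects neither statement.

```python
import sympy as sp

r, a, Y, D = sp.symbols('r a Y D', positive=True)
zeta = r*(r + 2*a - 1)/(2*a)            # zeta(r) as defined above
M = Y**r * D**zeta                      # M^(rho), up to the constant (xi^2 - xi^1)^(-r)

# Under P: d(xi^1) = dB, so Y satisfies dY = (a/Y) dt - dB with (dY)^2 = dt,
# and D satisfies dD = -(a*D/Y**2) dt.  Ito's formula gives the drift of M:
drift = sp.diff(M, Y)*(a/Y) - sp.diff(M, D)*a*D/Y**2 + sp.Rational(1, 2)*sp.diff(M, Y, 2)
print(sp.simplify(drift))               # 0, so M^(rho) is a local P-martingale
print(sp.simplify(-sp.diff(M, Y)/M))    # -r/Y, i.e. dM/M = -(r/(xi^2 - xi^1)) d(xi^1)
```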

The \(\hbox {SLE}_\kappa (\rho )\) measure \(\mathbf {P}^{\rho } = \mathbf {P}_{\xi ^1, \xi ^2}^{\rho }\) is defined by weighting \(\mathbf {P}\) by the martingale \(M^{(\rho )}\), that is,

$$\begin{aligned} \mathbf {P}^{\rho } \left( V \right) = \mathbf {E}[M_t^{(\rho )} 1_V] \quad \text {for } V \in \mathcal {F}_t. \end{aligned}$$
(3.9)

Then, using Girsanov’s theorem, the equation for \((\xi ^1_t)_{t \geqslant 0}\) changes to

$$\begin{aligned} d\xi ^1_t = \frac{r}{\xi ^1_t-\xi ^2_t}dt + dW_t, \end{aligned}$$
(3.10)

where \((W_t)_{t \geqslant 0}\) is \(\mathbf {P}^{\rho }\)-Brownian motion. This is the defining equation for the driving term of \(\hbox {SLE}_\kappa (\rho )\). (Since \(M^{(\rho )}\) is a local martingale we need to stop the process before \(M^{(\rho )}\) blows up; we will not always be explicit about this. We will not need to consider \(\hbox {SLE}_\kappa (\rho )\) after the time the path hits or swallows the force point.) We refer to the Loewner chain driven by \((\xi ^1_t)_{t \geqslant 0}\) under \(\mathbf {P}^{\rho }\) as \({\textit{SLE}}_\kappa (\rho )\) started from \((\xi ^1, \xi ^2)\). If \(\rho \) is sufficiently negative, the \(\hbox {SLE}_{\kappa }(\rho )\) path will almost surely hit the force point. In this case it can be useful to reparametrize so that the quantity

$$\begin{aligned} C_{t} = C_{t}(\xi ^{2}) = \frac{\xi _{t}^{2} - O_{t}}{g'_{t}(\xi ^{2})}, \end{aligned}$$
(3.11)

decays deterministically; this is called the radial parametrization in this context. Here \(O_t\) is defined as the image under \(g_t\) of the rightmost point in the hull at time t; in particular, \(O_t = g_{t}(\xi ^1+)\) if \(0 < \kappa \leqslant 4\), see, e.g., [3]. Geometrically \(C_{t}\) equals (1/4 times) the conformal radius seen from \(\xi ^{2}\) in \(H_{t}\) after Schwarz reflection. We define a time-change s(t) so that \(\hat{C}_{t}:=C_{s(t)} = e^{-at}\). A computation shows that if

$$\begin{aligned} A_{t} = \frac{\xi ^{2}_{t}-O_{t}}{\xi ^{2}_{t}-\xi ^{1}_{t}} \end{aligned}$$

then \(s'(t) = (\hat{\xi }^2_t-\hat{\xi }^1_t)^2(\hat{A}_t^{-1}-1)\), where \(\hat{A}_t = A_{s(t)}\), see, e.g., Sect. 2.2 of [3]. An important fact is that \((\hat{A}_t)_{t \geqslant 0}\) is positive recurrent with respect to \(\hbox {SLE}_{\kappa }(\rho )\) if \(\rho \) is chosen appropriately.

Lemma 3.4

Suppose \(0< \kappa < 8\) and \( \rho < \kappa /2 - 4\). Consider \(\hbox {SLE}_{\kappa }(\rho )\) started from (0, 1). Then \(\hat{A}_{t}\) is positive recurrent with invariant density

$$\begin{aligned} \pi _{A}(x) = c'\, x^{-\beta -a\rho }(1-x)^{2a-1}, \quad c' = \frac{\Gamma (2-2a - a\rho )}{\Gamma (2a) \Gamma (2-4a - a\rho )}. \end{aligned}$$

In fact, there is \(\alpha _0 > 0\) such that if f is integrable with respect to the density \(\pi _A\), then as \(t \rightarrow \infty \),

$$\begin{aligned} \mathbf {E}\left[ f(\hat{A}_t) \right] = \int _0^1 f(x) \, \pi _A(x) \, dx \left( 1+O(e^{-\alpha _0 t}) \right) . \end{aligned}$$

Proof

See, e.g., Sect. 5.3 of [26]. \(\square \)

3.1.2 Relationship Between Multiple SLE and \(\hbox {SLE}_\kappa (\rho )\)

Suppose \(\kappa \leqslant 4\) and consider a system of two multiple chordal SLE curves started from \((\xi ^1, \xi ^2)\), both aiming at \(\infty \); recall (2.1) and (2.2). Suppose we first grow \(\gamma _2\) up to a fixed capacity time t. The conditional law of \(g_t \circ \gamma _1\) is then an \(\hbox {SLE}_\kappa (2)\) in \(\mathbb {H}\) started from \((\xi ^1_t, \xi ^2_t)\). In particular, the marginal law of \(\gamma _1\) is that of an \(\hbox {SLE}_\kappa (2)\) started from \(\xi ^1\) with force point \(\xi ^2\). Indeed, if we choose the particular growth speeds \(\lambda _1 = a\) and \(\lambda _2 = 0\), then the defining Eqs. (2.1) and (2.2) reduce to

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t g_t(z) = \frac{a}{g_t(z) - \xi ^{1}_t}, &{} g_0(z) = z,\\ d\xi _t^{1} = \frac{a}{\xi _t^{1} - \xi _t^{2}} dt + dB_t^{1}, &{} \xi _0^1 = \xi ^1,\\ d\xi _t^{2} = \frac{a}{\xi _t^2 - \xi _t^1} dt, &{} \xi _0^2 = \xi ^2, \end{array}\right. } \end{aligned}$$
(3.12)

where \((B_t^1)_{t \geqslant 0}\) is \(\mathbf {P}\)-Brownian motion. Evaluating the equation for \(g_t(z)\) at \(z = \xi ^2\) we infer that \(\xi _t^2 = g_t(\xi ^2)\). Comparing this with Eqs. (3.7) and (3.10) defining \(\hbox {SLE}_\kappa (\rho )\), we conclude that \(\gamma _1\) has the same distribution under the multiple \(\hbox {SLE}_\kappa \) measure \(\mathbf {P}\) as it has under the \(\hbox {SLE}_\kappa (2)\)-measure \(\mathbf {P}^2\) started from \((\xi ^1, \xi ^2)\).

3.1.3 Two-Sided Radial SLE and Radial Parametrization

Recall that if \(z \in \mathbb {H}\) is fixed then the \(\hbox {SLE}_{\kappa }\) Green’s function in \(H_{t}\) equals

$$\begin{aligned} G_t = G_t(z)= \Upsilon _t^{d-2}(z) S_t(z)^\beta , \end{aligned}$$
(3.13)

which is a covariant \(\mathbf {P}\)-martingale. Two-sided radial SLE in \(\mathbb {H}\) through z is the process obtained by weighting chordal \(\hbox {SLE}_\kappa \) by G. (This is the same as \(\hbox {SLE}_{\kappa }(\kappa -8)\) with force point \(z \in \mathbb {H}\).) Since two-sided radial SLE approaches its target point, it is natural to parametrize so that the conformal radius (seen from z) decays deterministically. More precisely, we change time so that \(\Upsilon _{s(t)}(z) = e^{-2at}\); this parametrization depends on z. The Loewner equation implies

$$\begin{aligned} d\ln \Upsilon _{t} = - 2a\frac{y_{t}^{2}}{|z_{t}|^{4}} dt, \quad z_{t}=x_{t}+ iy_{t}=g_{t}(z) - \xi ^{1}_{t}. \end{aligned}$$

Hence \(s'(t) = |\tilde{z}_{t}|^{4}/\tilde{y}_{t}^{2}\), where \(\tilde{S}_{t} = S_{s(t)}\), \(\tilde{z}_{t} = z_{s(t)}\), etc., denote the time-changed processes. If \(\Theta _t = \arg z_t\), then using that

$$\begin{aligned} d\Theta _t = (1-2a)\frac{x_ty_t}{|z_t|^4}dt + \frac{y_t}{|z_t|^2}d\xi _t^1, \end{aligned}$$

we find that \(\tilde{\Theta }_{t} = \Theta _{s(t)}\) satisfies

$$\begin{aligned} d \tilde{\Theta }_{t} = (1-2a) \cot \tilde{\Theta }_{t}\, dt + d\tilde{W}_{t}, \end{aligned}$$

where \((\tilde{W}_{t})_{t \geqslant 0}\) is standard \(\mathbf {P}\)-Brownian motion. The time-changed martingale can be written

$$\begin{aligned} \tilde{G}_{t} = e^{-2a(d-2) t}\tilde{S}_{t}^\beta . \end{aligned}$$
(3.14)

The measure \(\mathbf {P}^{*} = \mathbf {P}_z^{*}\) is defined by weighting chordal \(\hbox {SLE}_{\kappa }\) by \(\tilde{G}\), that is,

$$\begin{aligned} \mathbf {P}^* \left( V \right) = \tilde{G}_0^{-1}\mathbf {E}[\tilde{G}_t 1_V], \quad V \in \tilde{\mathcal {F}}_t. \end{aligned}$$
(3.15)

This produces two-sided radial \(\hbox {SLE}_{\kappa }\) in the radial parametrization.

Since \(d\tilde{G}_t = \beta \tilde{G}_t \cot (\tilde{\Theta }_t) d\tilde{W}_t\), Girsanov’s theorem implies that the equation for \(\tilde{\Theta }_{t}\) changes to the radial Bessel equation under the new measure \(\mathbf {P}^{*}\):

$$\begin{aligned} d \tilde{\Theta }_{t} = 2a \cot \tilde{\Theta }_{t} \, dt + d\tilde{B}_{t}, \end{aligned}$$

where \(\tilde{B}_{t}\) is \(\mathbf {P}^{*}\)-standard Brownian motion.

We will use the following lemma about the radial Bessel equation, see, e.g., Sect. 3 of [27].

Lemma 3.5

Let \(0< \kappa < 8, \, a = 2/\kappa \) and suppose the process \((\Theta _t)_{t \geqslant 0}\) is a solution to the SDE

$$\begin{aligned} d \Theta _t = 2a \cot \Theta _t \, dt+ dB_t, \quad \Theta _0 =\Theta . \end{aligned}$$
(3.16)

Then \((\Theta _t)_{t \geqslant 0}\) is positive recurrent with invariant density

$$\begin{aligned} \psi (x) = \frac{c_*}{2} \sin ^{4a} x, \end{aligned}$$

where \(c_*\) is the constant in (2.13). In fact, there is \(\alpha > 0\) such that if f is integrable with respect to the density \(\psi \), then as \(t \rightarrow \infty \),

$$\begin{aligned} \mathbf {E}\left[ f(\Theta _t) \right] = \int _0^\pi f(x) \, \psi (x) \, dx \, (1+O(e^{-\alpha t})), \end{aligned}$$

where the error term does not depend on \(\Theta _0\).
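As an illustration of Lemma 3.5, the radial Bessel equation (3.16) can be simulated and the long-time empirical distribution compared with the density \(\psi \propto \sin ^{4a}\). The following Python sketch is ours (Euler steps with a crude clamping at the endpoints; the parameter values and the test event are arbitrary choices).

```python
import numpy as np

def radial_bessel_check(kappa=3.0, T=500.0, dt=1e-3, burn=50.0, seed=1):
    """Euler scheme for d Theta = 2a cot(Theta) dt + dB on (0, pi),
    a = 2/kappa, followed by a comparison of the empirical probability
    of {pi/4 < Theta < 3pi/4} with the value predicted by the invariant
    density psi(x) proportional to sin(x)^{4a}."""
    rng = np.random.default_rng(seed)
    a = 2.0 / kappa
    n = int(T / dt)
    theta = np.pi / 2
    hits = total = 0
    for i in range(n):
        theta += 2 * a / np.tan(theta) * dt + rng.normal(scale=np.sqrt(dt))
        theta = min(max(theta, 1e-6), np.pi - 1e-6)   # crude endpoint clamp
        if i * dt > burn:
            total += 1
            hits += (np.pi / 4 < theta < 3 * np.pi / 4)
    xs = np.linspace(1e-6, np.pi - 1e-6, 4001)
    w = np.sin(xs) ** (4 * a)
    predicted = w[(xs > np.pi / 4) & (xs < 3 * np.pi / 4)].sum() / w.sum()
    return hits / total, predicted

print(radial_bessel_check())   # the two numbers should be close
```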

4 Martingale Observables as CFT Correlation Functions

4.1 Screening

The CFT framework of Kang and Makarov [22] can be used to generate SLE martingale observables, see in particular Lecture 14 of [22]. The ideas of [22] have been extended to incorporate several multiple SLEs started from different points in [4]. (See also, e.g., [8, 18] for related work.) We will also make use of the screening method [13] which produces observables in the form of contour integrals, which we call Dotsenko-Fateev integrals. From the CFT perspective (in the sense of [22]), one starts from a CFT correlation function with appropriate field insertions giving a corresponding (known) \(\hbox {SLE}_{\kappa }\) martingale. Adding additional paths means inserting additional boundary fields. This will create observables for the system of SLEs. But in the cases we consider, the extra fields change the boundary behavior so that the new observable does not encode the desired geometric information anymore. To remedy this, carefully chosen auxiliary fields are inserted and then integrated out along integration contours. (The mismatching “charges” are “screened” by the contours.) The correct choices of insertions and integration contours depend on the particular problem, and different choices correspond to solutions with different boundary behavior.

Remark

We mention in passing that from a different point of view, it is known that the Gaussian free field with suitable boundary data can be coupled with SLE paths as local sets for the field [33]. Jumps in boundary conditions for the GFF are implemented by vertex operator insertions on the boundary. By the nature of the coupling, correlation functions for the field will give rise to SLE martingales.

In what follows, we briefly summarize how we used these ideas to arrive at the explicit expressions (2.6) and (2.10) for the Schramm probability \(P(z, \xi )\) and the Green’s function \({\mathcal {G}}(z, \xi ^1, \xi ^2)\), respectively. We refer to [4, 22] for an introduction to the underlying CFT framework and we will use notation from these references. Since the discussion is purely motivational, we make no attempt in this section to be complete or rigorous. This is in contrast to the other sections of the paper which are rigorous. Indeed, we shall only use the results of this section as guesses for solutions to be studied more closely later on.

Consider a system of two multiple SLEs started from \((\xi ^1, \xi ^2) \in {\mathbb {R}}^2\). If \(\lambda _1\) and \(\lambda _2\) denote the growth speeds of the two curves, the evolution of the system is described by equations (2.1) and (2.2). In the language of [22], the presence of two multiple SLE curves in \(\mathbb {H}\) started from \(\xi ^1\) and \(\xi ^2\) corresponds to the insertion of the operator

$$\begin{aligned} \mathcal {O}(\xi ^1, \xi ^2) = V_{\star , (b)}^{i\sqrt{a}}(\xi ^1)V_{\star , (b)}^{i\sqrt{a}}(\xi ^2), \end{aligned}$$

where \(V_{\star ,(b)}^{i\sigma }(z)\) denotes a rooted vertex field inserted at z (see [22], p. 96) and the parameter b satisfies the relation

$$\begin{aligned} 2\sqrt{a}(\sqrt{a} + b) =1, \quad a=2/\kappa . \end{aligned}$$
(4.1)

Notice that we define \(a=2/\kappa \) while [22] defines “a” by \(\sqrt{2/\kappa }\). The framework of [22] (or rather an extension of this framework to the case of multiple curves [4]) suggests that if \(\{z_j\}_1^n \subset {\mathbb {C}}\) are points and \(\{X_j\}_1^n\) are fields satisfying certain properties, then the correlation function

$$\begin{aligned} M_t^{(z_1, \dots , z_n)} = \hat{\mathbf {E}}_{\mathcal {O}(\xi _t^1, \xi _t^2)}^{\mathbb {H}}[(X_1||g_t^{-1})(z_1) \cdots (X_n|| g_t^{-1})(z_n)] \end{aligned}$$
(4.2)

is a (local) martingale observable for the system when evaluated in the “Loewner charts” \((g_t)\), where for each t the map \(g_t\) “removes” the whole system of curves which is growing at constant capacity speed. It turns out that the observables relevant for Schramm’s formula and for the Green’s function belong to a class of correlation functions of the form

$$\begin{aligned} M_t^{(z,u)} = \hat{\mathbf {E}}_{\mathcal {O}(\varvec{\xi }_t)}[(V_{\star , (b)}^{i\sigma _1}||g_t^{-1})(z)\overline{(V_{\star , (b)}^{i\sigma _2}||g_t^{-1})(z)}(V_{\star , (b)}^{is}||g_t^{-1})(u)], \end{aligned}$$
(4.3)

where \(z \in \mathbb {H}\), \(u \in {\mathbb {C}}\), and \(\sigma _1, \sigma _2, s \in {\mathbb {R}}\) are real constants. We will later integrate out the variable u, but it is essential to include the screening field \((V_{\star , (b)}^{is}||g_t^{-1})(u)\) in the definition (4.3) in order to arrive at observables with the appropriate conformal dimensions at z and at infinity. The observable \(M_t^{(z,u)}\) can be written as

$$\begin{aligned} M_t^{(z,u)} =&\; g_t'(z)^{\frac{\sigma _1^2}{2} -\sigma _1 b} \overline{g_t'(z)}^{\frac{\sigma _2^2}{2}-\sigma _2 b} g_t'(u)^{\frac{s^2}{2}-s b} A(Z_t, \xi _t^1, \xi _t^2, U_t), \end{aligned}$$
(4.4)

where \(Z_t = g_t(z)\), \(U_t = g_t(u)\), and the function \(A(z,\xi ^1, \xi ^2, u)\) is defined by

$$\begin{aligned} A(z,\xi ^1, \xi ^2, u) =&\; (z - \bar{z})^{\sigma _1\sigma _2} \big [(z - \xi ^1)(z - \xi ^2)\big ]^{\sigma _1 \sqrt{a}} \big [(\bar{z} - \xi ^1)(\bar{z} - \xi ^2)\big ]^{\sigma _2 \sqrt{a}}\nonumber \\&\times (z - u)^{\sigma _1s} (\bar{z} - u)^{\sigma _2 s} \big [(u - \xi ^1)(u - \xi ^2)\big ]^{s \sqrt{a}}. \end{aligned}$$
(4.5)

Itô’s formula implies that the CFT-generated observable \(M_t^{(z,u)}\) is indeed a local martingale for any choice of \(z,u \in \mathbb {H}\) and \(\sigma _1,\sigma _2,s \in {\mathbb {R}}\).

Since (4.4) is a local martingale for each value of the screening variable u, and the observable transforms as a one-form in u, we expect the integrated observable

$$\begin{aligned} \mathcal {M}_t^{(z)} = \int _\gamma M_t^{(z,u)} du \end{aligned}$$
(4.6)

to be a local martingale for any choice of \(z \in \mathbb {H}\), \(\sigma _1,\sigma _2,s \in {\mathbb {R}}\), and of the integration contour \(\gamma \), at least as long as the integral in (4.6) converges and the branches of the complex powers in (4.5) are consistently defined. The integral in (4.6) is referred to as a “screening” integral.

By choosing \(\lambda _2 = 0\), we expect the observable \(\mathcal {M}_t^{(z)}\) defined in (4.6) to be a local martingale for \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^1, \xi ^2)\). We later check these facts in the cases of interest by direct computation, see Propositions 5.2 and 6.4. We next describe how the martingales relevant for Schramm’s formula and for the Green’s function for \(\hbox {SLE}_\kappa (2)\) arise as special cases of \(\mathcal {M}_t^{(z)}\) corresponding to particular choices of \(\sigma _1,\sigma _2,s \in {\mathbb {R}}\) and of the contour \(\gamma \).

4.2 Prediction of Schramm’s Formula

In order to obtain the local martingale relevant for Schramm’s formula we choose the following values for the parameters (“charges”) in (4.4):

$$\begin{aligned} \sigma _1 = -2\sqrt{a}, \quad \sigma _2 = 2b, \quad s = -2\sqrt{a}. \end{aligned}$$
(4.7)

The choice (4.7) can be motivated as follows. First of all, by choosing \(s = -2\sqrt{a}\) we ensure that \(s^2/2 -s b = 1\) (see (4.1)). This implies that \(\mathcal {M}_t^{(z)}\) involves the one-form \(g_t'(u)^{s^2/2-s b}du = g_t'(u) du\). After integration with respect to du this leads to a conformally invariant screening integral. To motivate the choices of \(\sigma _1\) and \(\sigma _2\), let \(P(z, \xi ^1, \xi ^2)\) denote the probability that the point \(z\in \mathbb {H}\) lies to the left of an \(\hbox {SLE}_\kappa (2)\)-path started from \((\xi ^1,\xi ^2)\). Then we expect \(\partial _zP\) to be a martingale observable with conformal dimensions

$$\begin{aligned} \lambda (z) = 1, \quad \lambda _*(z) = 0, \quad \lambda _\infty = 0. \end{aligned}$$
(4.8)

The parameters in (4.7) are chosen so that the observable \(\mathcal {M}_t^{(z)}\) in (4.6) has the conformal dimensions in (4.8). We emphasize that it is the inclusion of the screening field in (4.3) that makes it possible to obtain these dimensions. In particular, by including it we can have \(\lambda _{\infty } = 0\). We have considered the derivative \(\partial _zP\) instead of P because then we are able to construct a nontrivial martingale with the correct dimensions.
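To spell out the dimension count at \(z\) (this verification is ours and uses only (4.1) and (4.4)): with the choice (4.7),

$$\begin{aligned} \frac{\sigma _1^2}{2} - \sigma _1 b = 2a + 2\sqrt{a}\, b = 2\sqrt{a}(\sqrt{a} + b) = 1, \qquad \frac{\sigma _2^2}{2} - \sigma _2 b = 2b^2 - 2b^2 = 0, \end{aligned}$$

so the exponents of \(g_t'(z)\) and \(\overline{g_t'(z)}\) in (4.4) are indeed \(\lambda (z) = 1\) and \(\lambda _*(z) = 0\), as required by (4.8).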

In the special case when the parameters \(\sigma _1,\sigma _2,s\) are given by (4.7), the local martingale (4.6) takes the form

$$\begin{aligned} \mathcal {M}_t^{(z)} =&\; g_t'(z) (Z_t - \bar{Z}_t)^{\alpha - 2}(Z_t - \xi _t^1)^{-\frac{\alpha }{2}}(Z_t - \xi _t^2)^{-\frac{\alpha }{2}} (\bar{Z}_t - \xi _t^1)^{1-\frac{\alpha }{2}}(\bar{Z}_t - \xi _t^2)^{1-\frac{\alpha }{2}}\nonumber \\&\times \int _\gamma (Z_t - U_t)^\alpha (\bar{Z}_t - U_t)^{\alpha -2} \big [(U_t - \xi _t^1)(U_t - \xi _t^2)\big ]^{-\frac{\alpha }{2}} g_t'(u) du. \end{aligned}$$
(4.9)

We expect from the above discussion that there exists an appropriate choice of the integration contour \(\gamma \) in (4.6) such that \(\partial _z P(z, \xi ^1, \xi ^2) = \text {const} \times \mathcal {M}_0^{(z)}\), that is, we expect

$$\begin{aligned} \partial _zP(z, \xi ^1, \xi ^2)=&\; c(\kappa ) y^{\alpha - 2} (z - \xi ^1)^{-\frac{\alpha }{2}}(z- \xi ^2)^{-\frac{\alpha }{2}}(\bar{z} - \xi ^1)^{1-\frac{\alpha }{2}}(\bar{z} - \xi ^2)^{1-\frac{\alpha }{2}}\\&\times \int _\gamma (u-z)^\alpha (u-\bar{z})^{\alpha -2} (u - \xi ^1)^{-\frac{\alpha }{2}}(u - \xi ^2)^{-\frac{\alpha }{2}} du, \end{aligned}$$

where \(c(\kappa )\) is a complex constant and \(y = {{\mathrm{Im}}\,}z\). Setting \(\xi ^1= 0\) and \(\xi ^2 = \xi \) in this formula, we arrive at the prediction (2.6) for the Schramm probability \(P(z,\xi )\). Indeed, the integration with respect to x in (2.6) recovers P from \(\partial _z P\) and ensures that P tends to zero as \({{\mathrm{Re}}\,}z \rightarrow \infty \). On the other hand, the choice of the integration contour from \(\bar{z}\) to z in (2.5) is mandated by the requirement that \(P(z, \xi )\) should satisfy the correct boundary conditions as z approaches the real axis. See [22, Lecture 15] (see also, e.g., [21] and the references therein). Moreover, \(P(z,\xi )\) must be a real-valued function tending to 1 as \({{\mathrm{Re}}\,}z \rightarrow -\infty \); this fixes the constant \(c(\kappa )\).

4.3 Prediction of the Green’s Function

In order to obtain the local martingale relevant for the \(\hbox {SLE}_\kappa (2)\) Green’s function, we choose the following values for the parameters in (4.4):

$$\begin{aligned} \sigma _1 = b - \sqrt{a}, \quad \sigma _2 = b - \sqrt{a}, \quad s = -2\sqrt{a}. \end{aligned}$$
(4.10)

As in the case of Schramm’s formula, the choice \(s = -2\sqrt{a}\) ensures that \(\mathcal {M}_t^{(z)}\) involves the one-form \(g_t'(u) du\). Moreover, if we let \({\mathcal {G}}(z, \xi ^1, \xi ^2)\) denote the Green’s function for \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^1,\xi ^2)\), then we expect \({\mathcal {G}}\) to have the conformal dimensions (cf. page 124 in [22])

$$\begin{aligned} \lambda (z)= \lambda _*(z) = \frac{2-d}{2}, \quad \lambda _\infty = 0. \end{aligned}$$
(4.11)

The parameters \(\sigma _1\) and \(\sigma _2\) in (4.10) are determined so that the observable \(\mathcal {M}_t^{(z)}\) in (4.6) has the conformal dimensions in (4.11). For example, a generalization of Proposition 15.5 in [22] to the case of two curves implies that \(\lambda _\infty = (2\sqrt{a}-b)\Sigma + \frac{\Sigma ^2}{2} = 0\) where \(\Sigma = \sigma _1 + \sigma _2 - 2\sqrt{a}\).
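For the exponent at \(z\), a short computation of ours (using \(b = \frac{1}{2\sqrt{a}} - \sqrt{a}\) from (4.1) and \(d = 1 + \frac{\kappa }{8} = 1 + \frac{1}{4a}\)) gives, with \(\sigma _1 = \sigma _2 = b - \sqrt{a}\),

$$\begin{aligned} \frac{\sigma _1^2}{2} - \sigma _1 b = \frac{(b - \sqrt{a})^2}{2} - (b - \sqrt{a})b = \frac{a - b^2}{2} = \frac{1}{2}\left( 1 - \frac{1}{4a}\right) = \frac{2-d}{2}, \end{aligned}$$

so the exponents of \(g_t'(z)\) and \(\overline{g_t'(z)}\) in (4.4) both equal \(\frac{2-d}{2}\); this is how (4.12) acquires the factor \(|g_t'(z)|^{2-d}\).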

Remark

We can see here that the choice \(\rho =2\) is special: there are only two possible ways to add one screening field, corresponding to \(s=-2\sqrt{a}\) or \(s=1/\sqrt{a}\). But the extra \(\rho = 2\) corresponds to additional charges \(\sigma = \sigma _* = 2/\sqrt{8 \kappa }\) (we are using \(\sigma = \rho /\sqrt{8 \kappa }\)), so at infinity we have an additional charge \(\sigma + \sigma _* = 2 \sqrt{a}\). Consequently, the \(\rho = 2\) charge can be screened by a single screening field. If more \(\rho \) insertions are added, they can be screened by one screening field provided their charges sum up to \(2\sqrt{a}\). This suggests that every \(\hbox {SLE}_\kappa \) observable with \(\lambda _q=0\) gives an \(\hbox {SLE}_\kappa (2)\) observable with \(\lambda _q=0\) after screening. Similarly, since adding n further insertions with \(\rho _{j} =2\) gives an additional charge of \(2n\sqrt{a}\) at \(\infty \), one could expect that a martingale for a system of n SLEs can be constructed by adding n screening charges.

In the special case when the parameters \(\sigma _1,\sigma _2,s\) are given by (4.10), the local martingale (4.6) takes the form

$$\begin{aligned} \mathcal {M}_t^{(z)} =&\; |g_t'(z)|^{2-d} \int _\gamma \mathcal {A}(Z_t,\xi _t^1, \xi _t^2, g_t(u)) g_t'(u)du, \end{aligned}$$
(4.12)

where

$$\begin{aligned} \mathcal {A}(z,\xi ^1, \xi ^2, u) =&\; (z - \bar{z})^{\alpha + \frac{1}{\alpha } - 2} |z - \xi ^1|^{-\beta } |z - \xi ^2|^{-\beta }\nonumber \\&\times (z - u)^\beta (\bar{z} - u)^{\beta } \big [(u - \xi ^1)(u - \xi ^2)\big ]^{-\frac{\alpha }{2}}. \end{aligned}$$
(4.13)

We expect from the above discussion that there exists an appropriate choice of the integration contour \(\gamma \) in (4.6) such that \({\mathcal {G}}(z, \xi ^1, \xi ^2) = \text {const} \times \mathcal {M}_0^{(z)}\), that is, we expect

$$\begin{aligned} {\mathcal {G}}(z, \xi ^1, \xi ^2) =&\; c(\kappa ) y^{\alpha + \frac{1}{\alpha } - 2} |z - \xi ^1|^{-\beta } |z - \xi ^2|^{-\beta } J(z,\xi ^1, \xi ^2), \end{aligned}$$
(4.14)

where

$$\begin{aligned} J(z,\xi ^1, \xi ^2) = \int _\gamma (u-z)^\beta (u-\bar{z})^{\beta } (u - \xi ^1)^{-\frac{\alpha }{2}} (\xi ^2-u)^{-\frac{\alpha }{2}} du \end{aligned}$$

and \(c(\kappa )\) is a complex constant. By requiring that \({\mathcal {G}}\) satisfy the correct boundary conditions, we arrive at the prediction (2.10) for the Green’s function for \(\hbox {SLE}_\kappa (2)\). The trickiest step is the determination of the appropriate screening contour \(\gamma \). This contour must be chosen so that the Green’s function satisfies the appropriate boundary conditions as \((z,\xi ^1, \xi ^2)\) approaches the boundary of the domain \(\mathbb {H} \times \{-\infty< \xi ^1< \xi ^2 < \infty \}\). The complete verification that the Pochhammer integration contour in (2.9) leads to the correct boundary behavior is presented in Lemma 6.3 and relies on a complicated analysis of integral asymptotics. We first arrived at the Pochhammer contour in (2.9) via the following simpler argument.

Let \({\mathcal {G}}_\xi (z) = {\mathcal {G}}(z, -\xi ,\xi )\), \(J_\xi (z) = J(z, -\xi ,\xi )\). Let also \(I_\xi (z) = I(z, -\xi ,\xi )\) where I is the function defined in (2.9), i.e.,

$$\begin{aligned} I_\xi (z) = \int _A^{(z+,\xi +,z-,\xi -)} (u - z)^{\alpha -1} (u - \bar{z})^{\alpha -1} (\xi + u)^{-\frac{\alpha }{2}} (\xi - u)^{-\frac{\alpha }{2}} du. \end{aligned}$$
(4.15)

We make the ansatz that

$$\begin{aligned} J_\xi (z) = \sum _{i=1}^4 c_i(\kappa ) \int _{\gamma _i} (u - z)^{\alpha -1} (u - \bar{z})^{\alpha -1} (\xi + u)^{-\frac{\alpha }{2}} (\xi - u)^{-\frac{\alpha }{2}} du, \end{aligned}$$
(4.16)

where the contours \(\{\gamma _i\}_1^4\) are Pochhammer contours surrounding the pairs \((\xi , z)\), \((\xi , \bar{z})\), \((-\xi , z)\), and \((-\xi , \bar{z})\), respectively. The integral involving the pair \((\xi , z)\) is \(I_\xi (z)\). The integrals involving the pairs \((\pm \xi , z)\) are related via complex conjugation to the integrals involving the pairs \((\pm \xi , \bar{z})\). Moreover, by performing the change of variables \(u \rightarrow -\bar{u}\), we see that the integral involving the pair \((-\xi , z)\) can be expressed in terms of \(I(-\bar{z})\). Thus, using the requirement that \(J(z,\xi )\) be real-valued, we can without loss of generality assume that \(J(z,\xi )\) is a real linear combination of the real and imaginary parts of \(I_\xi (z)\) and \(I_\xi (-\bar{z})\).

At this stage it is convenient, for simplicity, to assume \(4< \kappa < 8\) so that \(1< \alpha < 2\). Then we can collapse the contour in the definition (4.15) of \(I_\xi (z)\) onto a curve from \(\xi \) to z; this gives

$$\begin{aligned} I_\xi (z) = (1-e^{2 i \pi \alpha }+e^{i \pi \alpha }-e^{-i \pi \alpha }) \hat{I}_\xi (z), \end{aligned}$$

where \(\hat{I}_\xi (z)\) is defined by

$$\begin{aligned} \hat{I}_\xi (z) = \int _\xi ^z (u - z)^{\alpha -1} (u - \bar{z})^{\alpha -1} (\xi + u)^{-\frac{\alpha }{2}} (\xi - u)^{-\frac{\alpha }{2}} du. \end{aligned}$$

Since \(\hat{I}\) obeys the symmetry \({{\mathrm{Im}}\,}\hat{I}_\xi (z) = {{\mathrm{Im}}\,}\hat{I}_\xi (-\bar{z})\), our ansatz takes the form

$$\begin{aligned} J_\xi (z) = A_1 {{\mathrm{Re}}\,}\hat{I}_\xi (z) + A_2{{\mathrm{Re}}\,}\hat{I}_\xi (-\bar{z}) + A_3 {{\mathrm{Im}}\,}\hat{I}_\xi (z), \end{aligned}$$
(4.17)

where \(A_j = A_j(\kappa )\), \(j = 1,2,3\), are real constants.

Remark

It is not necessary to include further contours surrounding pairs such as \((z, \bar{z})\) and \((-\xi , \xi )\) in the ansatz (4.16) for \(J_\xi (z)\), because the contributions from such pairs can be obtained as linear combinations of the contributions from the four pairs already included. This is most easily seen in the case \(1< \alpha < 2\) where each Pochhammer contour can be collapsed to a single curve connecting the two points in the pair.

Up to factors which are independent of y, we expect the Green’s function \({\mathcal {G}}_\xi (z)\) to satisfy

$$\begin{aligned}&{\mathcal {G}}_\xi (x+iy) \sim y^{d-2} = y^{\frac{1}{\alpha } - 1}, \quad y \rightarrow \infty , \quad x \; \text {fixed}, \end{aligned}$$
(4.18a)
$$\begin{aligned}&{\mathcal {G}}_\xi (\xi + iy) \sim y^{d-2}y^{\beta + 2a} = y^{\frac{1}{\alpha } + \frac{3\alpha }{2} -2}, \quad y \downarrow 0. \end{aligned}$$
(4.18b)

Indeed, since the influence of the force point \(\xi ^2\) goes to zero as \({{\mathrm{Im}}\,}\gamma (t)\) becomes large, the first relation follows by comparison with \(\hbox {SLE}_\kappa \). The second relation can be motivated by noticing that the boundary exponent for \(\hbox {SLE}_\kappa (\rho )\) at the force point \(\xi ^2\) is \(\beta + \rho a\), see Lemma 7.1. In terms of \(J_\xi (z)\), the estimates (4.18) translate into

$$\begin{aligned}&J_\xi (x + iy) \sim y^{\alpha - 1}, \quad y \rightarrow \infty , \ x \; \text {fixed}, \end{aligned}$$
(4.19a)
$$\begin{aligned}&J_\xi (\xi + iy) \sim y^{\frac{3\alpha }{2}-1}, \quad y \downarrow 0. \end{aligned}$$
(4.19b)

We will use these conditions to fix the values of the \(A_j\)’s.

We obtain one constraint on the \(A_j\)’s by considering the asymptotics of \(J_\xi (iy)\) as \(y \rightarrow \infty \). Indeed, for \(x = 0\) we have

$$\begin{aligned} \hat{I}_\xi (iy) =&\int _\xi ^{iy} (u^2 + y^2)^{\alpha -1} (\xi ^2 - u^2)^{-\frac{\alpha }{2}} du\\ =&\; \frac{i}{2} \sqrt{\pi } \xi ^{-\alpha } y^{2 \alpha -2} \bigg \{\frac{y \Gamma (\alpha )}{\Gamma \left( \alpha +\frac{1}{2}\right) } \, _2F_1\left( \frac{1}{2},\frac{\alpha }{2},\alpha +\frac{1}{2},-\frac{y^2}{\xi ^2}\right) \\&+\frac{i \xi \Gamma \left( 1-\frac{\alpha }{2}\right) }{\Gamma \left( \frac{3}{2}-\frac{\alpha }{2}\right) } \, _2F_1\left( \frac{1}{2},1-\alpha ,\frac{3}{2}-\frac{\alpha }{2},-\frac{\xi ^2}{y^2}\right) \bigg \}, \end{aligned}$$

where \({}_2F_1\) denotes the standard hypergeometric function. This implies

$$\begin{aligned} \hat{I}_\xi (iy) =&\; y^{\alpha -1 } \left( \frac{i \Gamma \left( \frac{1}{2}-\frac{\alpha }{2}\right) \Gamma (\alpha )}{2 \Gamma \left( \frac{\alpha +1}{2}\right) }+O\left( \frac{1}{y^2}\right) \right) \\&+ y^{2(\alpha - 1)} \left( -\frac{\pi ^{3/2} \xi ^{1-\alpha } \left( \csc \left( \frac{\pi \alpha }{2}\right) +i \sec \left( \frac{\pi \alpha }{2}\right) \right) }{2 \left( \Gamma \left( \frac{3}{2}-\frac{\alpha }{2}\right) \Gamma \left( \frac{\alpha }{2}\right) \right) }+O\left( \frac{1}{y^2}\right) \right) , \quad y \rightarrow \infty . \end{aligned}$$

Substituting this expansion into (4.17), we find an expression for \(J_\xi (iy)\) involving two terms which are proportional to \(y^{2(\alpha - 1)}\) and \(y^{\alpha -1}\), respectively, as \(y \rightarrow \infty \). In order to satisfy the condition (4.19a), we must choose the \(A_j\) so that the coefficient of the larger term involving \(y^{2(\alpha - 1)}\) vanishes. This leads to the relation

$$\begin{aligned} \frac{A_1 + A_2}{A_3} = - \tan \frac{\pi \alpha }{2}. \end{aligned}$$
(4.20)

We obtain a second constraint on the \(A_j\)’s by considering the asymptotics of \(J_\xi (\xi + iy)\) as \(y \downarrow 0\). Indeed, for \(x = \xi \) we have

$$\begin{aligned} \hat{I}_\xi (\xi + iy) = e^{\frac{i\pi }{2}\left( 1 + \frac{\alpha }{2}\right) } \int _0^y (y^2 - s^2)^{\alpha -1} (2\xi + is)^{\alpha -1} s^{-\frac{\alpha }{2}} ds. \end{aligned}$$

Hence

$$\begin{aligned} \hat{I}_\xi (\xi + iy)&\sim e^{\frac{i\pi }{2}\left( 1 + \frac{\alpha }{2}\right) } (2\xi )^{-\frac{\alpha }{2}} \int _0^y (y^2 - s^2)^{\alpha -1} s^{-\frac{\alpha }{2}} ds \nonumber \\&= \frac{2^{-\frac{\alpha }{2}-1} e^{\frac{1}{4} i \pi (\alpha +2)} \xi ^{-\alpha /2} \Gamma \left( \frac{1}{2}-\frac{\alpha }{4}\right) \Gamma (\alpha )}{\Gamma \left( \frac{3 \alpha }{4}+\frac{1}{2}\right) } y^{\frac{3 \alpha }{2}-1}, \quad y \downarrow 0, \quad \xi > 0. \end{aligned}$$
(4.21)
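For completeness, the constant in (4.21) (which reappears in (4.22) below) comes from the following elementary evaluation, obtained by substituting \(w = (s/y)^2\) and recognizing a beta integral (our computation):

$$\begin{aligned} \int _0^y (y^2 - s^2)^{\alpha -1} s^{-\frac{\alpha }{2}}\, ds = \frac{y^{\frac{3\alpha }{2}-1}}{2} \int _0^1 (1-w)^{\alpha -1} w^{-\frac{\alpha }{4}-\frac{1}{2}}\, dw = \frac{\Gamma \left( \frac{1}{2}-\frac{\alpha }{4}\right) \Gamma (\alpha )}{2\, \Gamma \left( \frac{3 \alpha }{4}+\frac{1}{2}\right) }\, y^{\frac{3\alpha }{2}-1}. \end{aligned}$$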

Similarly, for \(x = -\xi \), we have

$$\begin{aligned} \hat{I}(-\xi + iy, \xi )=&\; \int _{\xi }^{-\xi }((u+\xi )^2 + y^2)^{\alpha -1} (\xi + u)^{-\frac{\alpha }{2}} (\xi - u)^{-\frac{\alpha }{2}} du\\&+\, \int _0^y(i(s-y))^{\alpha -1} (i(s+y))^{\alpha -1} (is)^{-\frac{\alpha }{2}} (2\xi - is)^{-\frac{\alpha }{2}} i ds. \end{aligned}$$

Hence

$$\begin{aligned}&\hat{I}(-\xi + iy, \xi ) \nonumber \\&\quad \sim -\int _{-\xi }^{\xi }(u+\xi )^{\frac{3\alpha }{2} -2} (\xi - u)^{-\frac{\alpha }{2}} du + e^{\frac{i\pi }{2}\left( 1 - \frac{\alpha }{2}\right) } (2\xi )^{-\frac{\alpha }{2}} \int _0^y (y^2 - s^2)^{\alpha -1} s^{-\frac{\alpha }{2}} ds\nonumber \\&\quad = 2 \xi ^{\alpha -1} \left( \frac{\, _2F_1\left( 1,2-\frac{3 \alpha }{2},2-\frac{\alpha }{2};-1\right) }{\alpha -2}+\frac{\, _2F_1\left( 1,\frac{\alpha }{2},\frac{3 \alpha }{2};-1\right) }{2-3 \alpha }\right) \nonumber \\&\qquad +\,\frac{i 2^{-\frac{\alpha }{2}-1} e^{-\frac{1}{4} i \pi \alpha } \xi ^{-\alpha /2} \Gamma \left( \frac{1}{2}-\frac{\alpha }{4}\right) \Gamma (\alpha )}{\Gamma \left( \frac{3 \alpha }{4}+\frac{1}{2}\right) } y^{\frac{3 \alpha }{2}-1}, \quad y \downarrow 0, \quad \xi > 0. \end{aligned}$$
(4.22)

Substituting the expansions (4.21) and (4.22) into (4.17), we find an expression for \(J_\xi (\xi + iy)\) involving two terms which are of order \(O(y^{\frac{3 \alpha }{2}-1})\) and O(1), respectively, as \(y \rightarrow 0\). In order to satisfy the condition (4.19b), we must choose the \(A_j\) so that the coefficient of the larger term of O(1) vanishes. This implies

$$\begin{aligned} A_2 = 0. \end{aligned}$$
(4.23)

Using the constraints (4.20) and (4.23), the expression (4.17) becomes

$$\begin{aligned} J_\xi (z) = B_1 {{\mathrm{Im}}\,}\big (e^{-\frac{i\pi \alpha }{2}}\hat{I}_\xi (z)\big ) = B_2 {{\mathrm{Im}}\,}\big (e^{-i\pi \alpha }I_\xi (z)\big ), \end{aligned}$$

where \(B_j = B_j(\kappa )\), \(j = 1,2\), are real constants. Recalling (4.14), this gives the following expression for \({\mathcal {G}}_\xi (z) = {\mathcal {G}}(z, -\xi ,\xi )\):

$$\begin{aligned} {\mathcal {G}}(z, -\xi ,\xi ) = \frac{1}{\hat{c}}y^{\alpha + \frac{1}{\alpha } - 2} |z + \xi |^{1 - \alpha } |z - \xi |^{1 - \alpha } {\text {Im}} \big (e^{-i\pi \alpha } I(z,-\xi ,\xi ) \big ), \quad z \in \mathbb {H}, \quad \xi > 0, \end{aligned}$$

where \(\hat{c}(\kappa )\) is an overall real constant yet to be determined. Using translation invariance to extend this expression to an arbitrary starting point \((\xi ^1, \xi ^2)\), we find (2.10). The derivation here used that \(4< \kappa < 8\), but by analytic continuation we expect the same formula to hold for \(0 < \kappa \leqslant 4\).

Remark

We remark here that the non-screened martingale obtained via Girsanov has the conformal dimensions

$$\begin{aligned} \lambda (z)= \lambda _*(z) = \frac{2-d}{2}, \quad \lambda _\infty = -\beta . \end{aligned}$$
(4.24)

5 Schramm’s Formula

This section proves Theorem 2.1. The strategy is the same as in Schramm’s original argument [36]. Assume \(0 < \kappa \leqslant 4\), i.e., \(\alpha = 8/\kappa \geqslant 2\). We write the function \(\mathcal {M}(z, \xi )\) defined in (2.5) as

$$\begin{aligned} \mathcal {M}(z, \xi ) =&\; y^{\alpha - 2} z^{-\frac{\alpha }{2}}(z- \xi )^{-\frac{\alpha }{2}}\bar{z}^{1-\frac{\alpha }{2}}(\bar{z} - \xi )^{1-\frac{\alpha }{2}}J(z, \xi ), \quad z \in \mathbb {H}, \quad \xi > 0, \end{aligned}$$
(5.1)

where \(z = x+iy\) and \(J(z, \xi )\) is defined by

$$\begin{aligned} J(z, \xi ) = \int _{\bar{z}}^z(u-z)^{\alpha }(u- \bar{z})^{\alpha - 2} u^{-\frac{\alpha }{2}}(u - \xi )^{-\frac{\alpha }{2}} du, \quad z \in \mathbb {H}, \quad \xi > 0, \end{aligned}$$
(5.2)

and the contour from \(\bar{z}\) to z passes to the right of \(\xi \) as in Fig. 1. We want to prove that the probability that the system started from \((0,\xi )\) passes to the right of \(z = x+iy\) is given by

$$\begin{aligned} P(z, \xi ) = \frac{1}{c_\alpha } \int _x^\infty {{\mathrm{Re}}\,}\mathcal {M}(x' +i y, \xi ) dx', \quad x \in {\mathbb {R}}, \quad y> 0, \quad \xi > 0. \end{aligned}$$

The idea is to apply Itô’s formula and a stopping time argument to prove that the prediction is correct. Once we have proved Theorem 2.1, we easily obtain fusion formulas by simply collapsing the seeds.

5.1 Proof of Theorem 2.1

In [32], we carefully analyze the function \(P(z,\xi )\) and show that it is well-defined, smooth, and fulfills the correct boundary conditions. We summarize these facts here and then use them to give the short proof of Theorem 2.1.

Lemma 5.1

The function \(P(z, \xi )\) defined in (2.6) is a well-defined smooth function of \((z, \xi ) \in \mathbb {H} \times (0, \infty )\) which satisfies

$$\begin{aligned} |P(z, \xi )|&\leqslant C(\arg z)^{\alpha -1}, \quad z \in \mathbb {H}, \quad \xi > 0, \end{aligned}$$
(5.3a)
$$\begin{aligned} |P(z, \xi ) - 1|&\leqslant C(\pi - \arg z)^{\alpha -1}, \quad z \in \mathbb {H}, \quad \xi > 0. \end{aligned}$$
(5.3b)

Proof

See [32]. \(\square \)

Proposition 5.2

(PDE for Schramm’s formula) Let \(\alpha >1\). The function \(\tilde{\mathcal {M}}\) defined by

$$\begin{aligned} \tilde{\mathcal {M}}(x,y,\xi ^1, \xi ^2) = \mathcal {M}(x - \xi ^1 + iy, \xi ^2 - \xi ^1), \end{aligned}$$

where \(\mathcal {M}\) is given by (2.5), satisfies the two linear PDEs

$$\begin{aligned} \left( \mathcal {A}_j - \frac{2}{(x+iy-\xi ^j)^2} \right) \tilde{\mathcal {M}} = 0, \quad j = 1,2, \end{aligned}$$
(5.4)

where the differential operators \(\mathcal {A}_j\) are defined by

$$\begin{aligned} \mathcal {A}_j =&\; \frac{4}{\alpha } \partial _{\xi ^j}^2 + \frac{2(x-\xi ^j)}{y^2 + (x-\xi ^j)^2} \partial _x - \frac{2y}{y^2 + (x-\xi ^j)^2} \partial _y \nonumber \\&+ \frac{2}{\xi ^1 - \xi ^2} \partial _{\xi ^1} + \frac{2}{\xi ^2 - \xi ^1} \partial _{\xi ^2}, \quad j = 1,2. \end{aligned}$$
(5.5)

Moreover, the function \(\tilde{P}\) defined by

$$\begin{aligned} \tilde{P}(x,y,\xi ^1,\xi ^2)=P(x- \xi ^1 + iy, \xi ^2-\xi ^1), \end{aligned}$$

where \(P(z,\xi )\) is defined by (2.6), satisfies the linear PDEs

$$\begin{aligned} \mathcal {A}_j\tilde{P}=0, \quad j=1,2. \end{aligned}$$

Proof

Let \(z = x+iy\) and \(\bar{z} = x-iy\). We have

$$\begin{aligned} \tilde{\mathcal {M}}(x,y,\xi ^1, \xi ^2) = \int _{\bar{z}}^{z} m(x,y, \xi ^1, \xi ^2,u) du, \end{aligned}$$
(5.6)

where the integrand m is given by

$$\begin{aligned} m(x,y,\xi ^1, \xi ^2, u) =&\; y^{\alpha - 2} (z-\xi ^1)^{-\frac{\alpha }{2}}(z- \xi ^2)^{-\frac{\alpha }{2}}(\bar{z}-\xi ^1)^{1-\frac{\alpha }{2}} (\bar{z} - \xi ^2)^{1-\frac{\alpha }{2}}\\&\times (u-z)^{\alpha }(u-\bar{z})^{\alpha - 2} (u-\xi ^1)^{-\frac{\alpha }{2}}(u - \xi ^2)^{-\frac{\alpha }{2}}. \end{aligned}$$

Let

$$\begin{aligned} \mathcal {B}_j=\mathcal {A}_j - \frac{2}{(x+iy-\xi ^j)^2}, \quad j=1,2. \end{aligned}$$

A long but straightforward computation shows that m obeys the equations

$$\begin{aligned} \mathcal {B}_jm + \frac{2}{u-\xi ^j} \partial _u m - \frac{2}{(u-\xi ^j)^2} m = 0, \quad j = 1,2. \end{aligned}$$
(5.7)

Suppose first that \(\alpha > 2\). Then we can take the differential operator \(\mathcal {B}_j\) inside the integral when computing \(\mathcal {B}_j\tilde{\mathcal {M}}\) without any extra terms being generated by the variable endpoints. Hence (5.7) implies

$$\begin{aligned} \mathcal {B}_j\tilde{\mathcal {M}} = - \int _{\bar{z}}^{z} \bigg (\frac{2}{u-\xi ^j} \partial _u m - \frac{2}{(u-\xi ^j)^2} m\bigg ) du, \quad j = 1,2. \end{aligned}$$

An integration by parts with respect to u shows that the integral on the right-hand side vanishes. This shows (5.4) for \(\alpha > 2\). The equations in (5.4) follow in the same way for \(\alpha \in (1,2)\) if we first replace the contour from \(\bar{z}\) to z in (5.6) by a Pochhammer contour:

$$\begin{aligned} \tilde{\mathcal {M}}(x,y,\xi ^1, \xi ^2) = \frac{1}{(1 - e^{2\pi i \alpha })^2}\int _A^{(z+, \bar{z}+,z-,\bar{z}-)} m(x,y, \xi ^1, \xi ^2,u) du, \quad \alpha \notin {\mathbb {Z}}. \end{aligned}$$

If \(\alpha = 2\), then

$$\begin{aligned} m(x,y,\xi ^1, \xi ^2, u) = \frac{(u-z)^2}{(z-\xi ^1)(z-\xi ^2)(u-\xi ^1)(u-\xi ^2)} \end{aligned}$$

and (5.4) can be verified by a direct computation.

It remains to check the last assertion. We have

$$\begin{aligned} \tilde{P}(x,y,\xi ^1,\xi ^2) = \int _x^\infty \tilde{m}(x',y,\xi ^1, \xi ^2)dx', \end{aligned}$$

where \(\tilde{m} = \frac{1}{c_\alpha } {\text {Re}} \tilde{\mathcal {M}}\). Write

$$\begin{aligned} \mathcal {A}_j(x) = \mathcal {D}_j + f_j(x)\partial _x + g_j(x) \partial _y, \end{aligned}$$
(5.8)

where

$$\begin{aligned} \mathcal {D}_j&= \frac{4}{\alpha } \partial _{\xi ^j}^2 + \frac{2}{\xi ^1 - \xi ^2} \partial _{\xi ^1} + \frac{2}{\xi ^2 - \xi ^1} \partial _{\xi ^2},\\ f_j(x)&= \frac{2(x-\xi ^j)}{y^2 + (x-\xi ^j)^2}, \quad g_j(x) = - \frac{2y}{y^2 + (x-\xi ^j)^2}, \end{aligned}$$

and we have only indicated the dependence on x explicitly. Since \(c_\alpha \in {\mathbb {R}}\) and \(\mathcal {A}_j\) has real coefficients, we have

$$\begin{aligned} (\mathcal {A}_j \tilde{P})(x) = \frac{1}{c_\alpha } {\text {Re}} \mathcal {A}_j(x) \int _x^\infty \tilde{\mathcal {M}}(x',y,\xi ^1, \xi ^2)dx'. \end{aligned}$$

Employing (5.8) twice, we find

$$\begin{aligned} \mathcal {A}_j(x) \int _x^\infty \tilde{\mathcal {M}}(x')dx' =&\; \int _x^\infty \mathcal {D}_j\tilde{\mathcal {M}}(x')dx' - f_j(x)\tilde{\mathcal {M}}(x) + g_j(x) \int _x^\infty \partial _y\tilde{\mathcal {M}}(x')dx'\nonumber \\ =&\; \int _x^\infty (\mathcal {A}_j(x') - f_j(x')\partial _{x'} - g_j(x') \partial _y)\tilde{\mathcal {M}}(x')dx'\nonumber \\&- f_j(x) \tilde{\mathcal {M}}(x) + g_j(x) \int _x^\infty \partial _y\tilde{\mathcal {M}}(x')dx'. \end{aligned}$$
(5.9)

Using (5.4) to replace \(\mathcal {A}_j(x')\) and integrating by parts in the term involving \(f_j(x')\), it follows that the right-hand side of (5.9) equals

$$\begin{aligned}&\int _x^\infty \bigg (\frac{2}{(x'+iy-\xi ^j)^2} - g_j(x') \partial _y\bigg )\tilde{\mathcal {M}}(x')dx'\\&\quad = \int _x^\infty \bigg (\frac{2}{(x'+iy-\xi ^j)^2} + \partial _{x'}f_j(x') + (g_j(x) - g_j(x')) \partial _y\bigg )\tilde{\mathcal {M}}(x')dx'. \end{aligned}$$

Since

$$\begin{aligned} \frac{2}{(x'+iy-\xi ^j)^2} + \partial _{x'}f_j(x') = -\frac{4iy(x'-\xi ^j)}{((x'-\xi ^j)^2 + y^2)^2} \end{aligned}$$

is purely imaginary and \(g_j(x)\) is real-valued, this yields

$$\begin{aligned} \mathcal {A}_j \tilde{P}=&\; \frac{1}{c_\alpha } {\text {Re}} \int _x^\infty \bigg (\frac{2}{(x'+iy-\xi ^j)^2} + \partial _{x'}f_j(x') + (g_j(x) - g_j(x')) \partial _y\bigg )\tilde{\mathcal {M}}(x')dx'\nonumber \\ =&\; \frac{1}{c_\alpha } \int _x^\infty \frac{4y(x'-\xi ^j)}{((x'-\xi ^j)^2 + y^2)^2} {{\mathrm{Im}}\,}\tilde{\mathcal {M}}(x')dx'\nonumber \\&+ \frac{1}{c_\alpha } \int _x^\infty (g_j(x) - g_j(x')) \partial _y{\text {Re}} \tilde{\mathcal {M}}(x')dx'. \end{aligned}$$
(5.10)

Since \(\partial _y{\text {Re}} \tilde{\mathcal {M}}(x') = -\partial _{x'} {\text {Im}} \tilde{\mathcal {M}}(x')\) (see Lemma 7.8 in [32]), we can integrate by parts again to see that

$$\begin{aligned} \int _x^\infty (g_j(x) - g_j(x')) \partial _y{\text {Re}} \tilde{\mathcal {M}}(x')dx' = - \int _x^\infty \frac{4y(x'-\xi ^j)}{(y^2 + (x'-\xi ^j)^2)^2} {\text {Im}} \tilde{\mathcal {M}}(x')dx'. \end{aligned}$$
(5.11)

Combining (5.10) and (5.11), we conclude that \(\mathcal {A}_j \tilde{P}=0 \). \(\square \)
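The “long but straightforward computation” behind (5.7) can also be spot-checked by machine. The following Python/sympy sketch is ours: it only evaluates the left-hand side of (5.7) at a few generic points (chosen with \(y > 0\) and \(\xi ^1< 0< \xi ^2 < u\), so that no branch cuts are crossed) and reports the largest relative residual observed; a tiny value corroborates the identity.

```python
import numpy as np
import sympy as sp

def check_eq_5_7(alpha_val=2.5, trials=5, seed=0):
    """Evaluate B_j m + 2/(u - xi_j) d_u m - 2/(u - xi_j)^2 m, j = 1, 2,
    where m is the integrand from the proof of Proposition 5.2 and
    B_j = A_j - 2/(x + i y - xi_j)^2 with A_j as in (5.5)."""
    x, y, u, xi1, xi2, al = sp.symbols('x y u xi1 xi2 alpha')
    z, zb = x + sp.I * y, x - sp.I * y
    m = (y**(al - 2) * (z - xi1)**(-al/2) * (z - xi2)**(-al/2)
         * (zb - xi1)**(1 - al/2) * (zb - xi2)**(1 - al/2)
         * (u - z)**al * (u - zb)**(al - 2)
         * (u - xi1)**(-al/2) * (u - xi2)**(-al/2))
    rng = np.random.default_rng(seed)
    worst = 0.0
    for xij in (xi1, xi2):
        Ajm = (4/al * sp.diff(m, xij, 2)
               + 2*(x - xij)/(y**2 + (x - xij)**2) * sp.diff(m, x)
               - 2*y/(y**2 + (x - xij)**2) * sp.diff(m, y)
               + 2/(xi1 - xi2) * sp.diff(m, xi1)
               + 2/(xi2 - xi1) * sp.diff(m, xi2))
        lhs = (Ajm - 2/(z - xij)**2 * m
               + 2/(u - xij) * sp.diff(m, u) - 2/(u - xij)**2 * m)
        f = sp.lambdify((x, y, u, xi1, xi2, al), lhs)
        g = sp.lambdify((x, y, u, xi1, xi2, al), m)
        for _ in range(trials):
            p = (rng.uniform(-1, 1), rng.uniform(0.5, 2), rng.uniform(3, 4),
                 rng.uniform(-2, -1), rng.uniform(1, 2), alpha_val)
            worst = max(worst, abs(f(*p)) / abs(g(*p)))
    return worst

print(check_eq_5_7())   # a tiny number corroborates (5.7)
```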

Consider a system of multiple SLEs in \(\mathbb {H}\) started from 0 and \(\xi > 0\), respectively. Write \(\xi _t^1\) and \(\xi _t^2\) for the Loewner driving terms of the system and let \(g_t\) denote the solution of (2.1) which uniformizes the whole system at capacity t. Then \(\xi _t^1\) and \(\xi _t^2\) are the images of the tips of the two curves under the conformal map \(g_t\). Given a point \(z \in \mathbb {H}\), let \(Z_t = g_t(z) \) and let \(\tau (z)\) denote the time that \({\text {Im}} g_t(z)\) first reaches 0.

A point \(z \in \mathbb {H}\) lies to the left of both curves iff it lies to the left of the leftmost curve \(\gamma _1\) started from 0. Moreover, since the system is commuting, its distribution is independent of the order in which the two curves are grown. Hence we may assume that the growth speeds \(\lambda _1\) and \(\lambda _2\) are given by \(\lambda _1 = 1\) and \(\lambda _2 =0\), but this assumption is not essential. We are therefore now in the setting of \(\hbox {SLE}_\kappa (2)\) started from \(\xi ^1\) with force point at \(\xi ^2\).

Lemma 5.3

Let \(z \in \mathbb {H}\). Define \(P_t(z)\) by

$$\begin{aligned} P_t(z) = P(Z_t - \xi _t^1, \xi _t^2 - \xi _t^1), \quad 0 \leqslant t < \tau (z). \end{aligned}$$

Then \(P_t(z)\) is an \(\hbox {SLE}_\kappa (2)\) martingale.

Proof

Itô’s formula combined with Proposition 5.2 immediately implies that \(P_t\) is a local martingale for the \(\hbox {SLE}_\kappa (2)\) flow; the drift term vanishes. Since P is bounded by Lemma 5.1, it follows that \(P_t\) is actually a martingale. \(\square \)

Lemma 5.4

Let \(z \in \mathbb {H}\), and \(\Theta _t^1 = \arg (Z_t - \xi _t^1)\). Then,

$$\begin{aligned} \lim _{t \uparrow \tau (z)} \Theta _t^1 = 0 \quad \bigg (\text {resp. } \lim _{t \uparrow \tau (z)} \Theta _t^1 = \pi \bigg ), \end{aligned}$$

if and only if z lies to the right (resp. left) of the curve \(\gamma _1\) starting at 0.

Proof

See the proof of Lemma 3 in [36]. \(\square \)

Lemma 5.5

Let \(\tilde{P}(z, \xi )\) be the probability that the point \(z \in \mathbb {H}\) lies to the left of the two curves starting at 0 and \(\xi > 0\), respectively. Then \(\tilde{P}(z,\xi ) = P(z,\xi )\), where \(P(z,\xi )\) is the function defined in (2.6).

Proof

By Lemma 5.4, the angle \(\Theta _t^1 = \arg (Z_t - \xi _t^1)\) approaches \(\pi \) as \(t \uparrow \tau (z)\) on the event that \(z \in \mathbb {H}\) lies to the left of both curves. But (5.3b) shows that

$$\begin{aligned} |P_t(z) - 1| = |P(Z_t - \xi _t^1, \xi _t^2 - \xi _t^1) - 1| \leqslant C(\pi - \Theta _t^1)^{\alpha -1}, \quad z \in \mathbb {H}, \ t \in [0, \tau (z)). \end{aligned}$$

Consequently, on the event that z lies to the left of both curves, \(P_t(z) \rightarrow 1\) as \(t \uparrow \tau (z)\). A similar argument relying on (5.3a) shows that, on the event that \(z \in \mathbb {H}\) lies between or to the right of the two curves, \(P_t(z) \rightarrow 0\) as \(t \uparrow \tau (z)\).

Let \(\tau _n(z)\) be the stopping time defined by

$$\begin{aligned} \tau _n(z) = \inf \bigg \{ t \geqslant 0\, : \, \sin \Theta _t^1 \leqslant \frac{1}{n}\bigg \}. \end{aligned}$$

Since \(P_t(z)\) is a martingale, we have

$$\begin{aligned} P_0(z) = \mathbf {E}\left[ P_{\tau _n(z)}(z) \right] , \quad z \in \mathbb {H}, \quad n = 1, 2, \dots . \end{aligned}$$

By the dominated convergence theorem and the limits established above,

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbf {E}\left[ P_{\tau _n(z)}(z) \right] =\tilde{P}(z, \xi ). \end{aligned}$$

Since \(P_0(z) = P(z, \xi )\), this concludes the proof of the lemma and of Theorem 2.1. \(\square \)

If \(\alpha = \frac{8}{\kappa } > 1\) is an integer, the integral (5.2) defining \(J(z, \xi )\) can be computed explicitly. However, the formulas quickly get very complicated as \(\alpha \) increases. We consider here the simplest case of \(\alpha =2\) (i.e. \(\kappa = 4\)). We remark that this case is particularly simple for one curve as well; indeed, the probability that an \(\hbox {SLE}_4\) path passes to the right of z equals \((\arg z)/\pi \).

Proposition 5.6

Let \(\kappa = 4\). Then the function \(P(z,\xi )\) in (2.6) is given explicitly by

$$\begin{aligned} P(z, \xi ) =&\; \frac{1}{4 \pi ^2 \xi }\bigg \{-2 \arctan \left( \frac{x}{y}\right) \left( \pi \xi -2 \xi \arctan \left( \frac{x-\xi }{y}\right) +2 y\right) \nonumber \\&+ \pi ^2 \xi +(4 y-2 \pi \xi ) \arctan \left( \frac{x-\xi }{y}\right) \bigg \}, \quad z = x+iy \in \mathbb {H}, \quad \xi > 0. \end{aligned}$$
(5.12)

Proof

Let \(\alpha = 2\). Then \(c_\alpha = -2\pi ^2\) and an explicit evaluation of the integral in (5.2) gives

$$\begin{aligned} J(z, \xi ) = 2iy + \frac{2i}{\xi }\big ((z - \xi )^2\arg (z - \xi ) - z^2\arg z\big ), \quad z \in \mathbb {H}, \quad \xi > 0. \end{aligned}$$

Using that

$$\begin{aligned} \arg z = \frac{\pi }{2} - \arctan \frac{x}{y} \quad \text {and} \quad \arg (z -\xi ) = \frac{\pi }{2} - \arctan \frac{x - \xi }{y}, \end{aligned}$$

it follows that the function \(\mathcal {M}\) in (2.5) can be expressed as

$$\begin{aligned}&\mathcal {M}(z, \xi ) = \frac{2i}{z(z-\xi )\xi } \bigg \{(z - \xi )^2\bigg (\frac{\pi }{2} - \arctan \frac{x - \xi }{y}\bigg ) - z^2\bigg (\frac{\pi }{2} - \arctan \frac{x}{y}\bigg ) + \xi y\bigg \} \end{aligned}$$

for \(z = x + iy \in \mathbb {H}\) and \(\xi > 0\). Taking the real part of this expression and integrating with respect to x, we find that the function \(P(z,\xi )\) in (2.6) is given by

$$\begin{aligned} P(z, \xi ) =&\; \frac{1}{c_\alpha } \int _x^\infty {{\mathrm{Re}}\,}\mathcal {M}(x' +i y, \xi ) dx'\\ =&\; \frac{1}{2 \pi ^2\xi }\bigg \{(2 y-\pi \xi ) \left( \frac{\pi }{2}-\arctan \frac{x'-\xi }{y}\right) -(\pi \xi +2 y) \left( \frac{\pi }{2}-\arctan \frac{x'}{y}\right) \\&-2 \xi \arctan \Big (\frac{x'}{y}\Big ) \arctan \Big (\frac{x'-\xi }{y}\Big )\bigg \}\bigg |_{x'=x}^\infty . \end{aligned}$$

The expression (5.12) follows. \(\square \)
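Since everything is explicit at \(\kappa = 4\), formula (5.12) is also easy to test numerically. The Python sketch below is ours (the rectangular path, the test point, and all names are our choices; such a path may be used because the integrand in (5.2) is meromorphic at \(\alpha = 2\), so any contour from \(\bar{z}\) to z crossing the real axis once to the right of \(\xi \) gives the same value). It computes \(\mathcal {M}(z,\xi )\) from (5.1)–(5.2) by quadrature and compares \({\text {Re}}\,\mathcal {M}\) with \(2\pi ^2 \partial _x P\) computed from (5.12); the two should agree since \(c_\alpha = -2\pi ^2\) and \(P = \frac{1}{c_\alpha }\int _x^\infty {{\mathrm{Re}}\,}\mathcal {M}\, dx'\).

```python
import numpy as np

def P_closed_form(x, y, xi):
    """Closed-form expression (5.12) for P(z, xi) at kappa = 4."""
    a1, a2 = np.arctan(x / y), np.arctan((x - xi) / y)
    return (-2 * a1 * (np.pi * xi - 2 * xi * a2 + 2 * y)
            + np.pi**2 * xi + (4 * y - 2 * np.pi * xi) * a2) / (4 * np.pi**2 * xi)

def M_by_quadrature(z, xi, n=20000):
    """M(z, xi) from (5.1)-(5.2) at alpha = 2; J is integrated along the
    rectangular path zbar -> R - iy -> R + iy -> z, which crosses the
    real axis at R, to the right of xi."""
    x, y = z.real, z.imag
    R = max(x, xi) + 5.0
    f = lambda u: (u - z) ** 2 / (u * (u - xi))
    J = 0j
    for ua, ub in [(np.conj(z), R - 1j * y), (R - 1j * y, R + 1j * y), (R + 1j * y, z)]:
        u = ua + np.linspace(0.0, 1.0, n) * (ub - ua)
        fu = f(u)
        J += np.sum((fu[1:] + fu[:-1]) / 2 * np.diff(u))   # complex trapezoid rule
    return J / (z * (z - xi))

z, xi, h = 0.3 + 0.7j, 1.2, 1e-5
lhs = M_by_quadrature(z, xi).real
rhs = 2 * np.pi**2 * (P_closed_form(z.real + h, z.imag, xi)
                      - P_closed_form(z.real - h, z.imag, xi)) / (2 * h)
print(lhs, rhs)   # should agree to several digits
```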

Remark

In the fusion limit, Eq. (5.12) is consistent with the results of [18]. Indeed, in the limit \(\xi \downarrow 0\) the expression (5.12) for \(P(z, \xi )\) reduces to

$$\begin{aligned} P(z, 0+) = \frac{1}{4}-\frac{1}{\pi ^2 (1+t^2)}-\frac{\arctan {t}}{\pi } + \frac{(\arctan t)^2}{\pi ^2}, \quad t := \frac{x}{y}, \end{aligned}$$

which is Eq. (25) in [18].
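The consistency claimed in this remark is also easy to check numerically; the short sketch below (ours) evaluates (5.12) at a small \(\xi \) and compares it with the stated limit.

```python
import numpy as np

def P_kappa4(x, y, xi):
    """Formula (5.12)."""
    a1, a2 = np.arctan(x / y), np.arctan((x - xi) / y)
    return (-2 * a1 * (np.pi * xi - 2 * xi * a2 + 2 * y)
            + np.pi**2 * xi + (4 * y - 2 * np.pi * xi) * a2) / (4 * np.pi**2 * xi)

def P_fused(t):
    """The xi -> 0 limit quoted above (Eq. (25) of [18]), with t = x/y."""
    at = np.arctan(t)
    return 0.25 - 1.0 / (np.pi**2 * (1 + t**2)) - at / np.pi + at**2 / np.pi**2

x, y = 0.4, 1.0
print(P_kappa4(x, y, 1e-4), P_fused(x / y))   # should be close for small xi
```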

6 The Green’s Function

In this section we prove Theorem 2.6. We recall from the discussion in Sect. 2 that the proof splits into proving Propositions 2.11 and 2.12. Proposition 2.11 establishes the existence of a Green’s function for \(\hbox {SLE}_\kappa (\rho )\) and provides a representation of this Green’s function in terms of an expectation with respect to two-sided radial SLE. Proposition 2.12 then shows that the CFT prediction \({\mathcal {G}}_\xi (z)\) defined in (2.10) obeys this representation in the case \(\rho = 2\).

6.1 Existence of the Green’s Function: Proof of Proposition 2.11

The basic idea of the proof of existence is similar to the “standard” one for the Green’s function for \(\hbox {SLE}_\kappa \) in \(\mathbb {H}\), which we now briefly recall. (See, e.g., [28] for further discussion.) Consider chordal \(\hbox {SLE}_\kappa \) from 0 to \(\infty \) in \(\mathbb {H}\). If \(\tau = \tau _\epsilon = \inf \{t\geqslant 0 : \Upsilon _t(z) \leqslant \epsilon \}\), we have

$$\begin{aligned} \lim _{\epsilon \downarrow 0 } \epsilon ^{d-2}\mathbf {P}(\tau< \infty ) = G(z) \lim _{\epsilon \downarrow 0 }\mathbf {E}^*[1_{\tau < \infty } S_\tau (z)^{-\beta }] = G(z) \lim _{\epsilon \downarrow 0 } \mathbf {E}^*[S_\tau (z)^{-\beta }] = c_* G(z). \end{aligned}$$

Here \(G(z) = \Upsilon (z)^{d-2} S(z)^\beta \) is the \(\hbox {SLE}_\kappa \) Green’s function and \(\mathbf {E}^*\) refers to the two-sided radial SLE measure with marked point \(z \in \mathbb {H}\). The computation uses that \(\mathbf {P}^*(\tau < \infty ) = 1\) and knowledge of the invariant distribution of S under \(\mathbf {P}^*\).

In the present setting, there are several complications. The \(\hbox {SLE}_\kappa \) measure \(\mathbf {P}\) is now weighted by a local martingale (6.1) to accommodate the boundary force point, and one of the main hurdles is to control the magnitude of this weight. Moreover, the expression corresponding to \(\mathbf {E}^*[S_\tau (z)^{-\beta }]\) involves an extra weight. Proving convergence is therefore significantly harder and our argument requires control of crossing events for paths near the marked interior point (Lemma 6.1).

Now let us proceed to the proof. Let \(0 < \kappa \leqslant 4\) and \(0 \leqslant \rho < 8-\kappa \) and consider \(\hbox {SLE}_\kappa (\rho )\) started from \((\xi ^1, \xi ^2)\) with \(\xi ^1 < \xi ^2\). We recall our parameters

$$\begin{aligned} a=2/\kappa , \quad r=\rho a/2=\rho /\kappa , \quad \zeta (r)=\frac{r}{2a} \left( r+2a-1\right) , \end{aligned}$$

and the normalized local martingale

$$\begin{aligned} M_t^{(\rho )} = \left( \frac{\xi ^2_t-\xi ^1_t}{\xi ^2-\xi ^1}\right) ^{r} g'_t(\xi ^2)^{\zeta (r)} \end{aligned}$$
(6.1)

by which we can weight \(\hbox {SLE}_{\kappa }\) in order to obtain \(\hbox {SLE}_{\kappa }(\rho )\), see Sect. 3. Let us first derive a few simple estimates on \(M_t^{(\rho )}\). Note that for our parameter choices we have \(r, \zeta \geqslant 0\). The identity

$$\begin{aligned} g_t'(\xi ^2) = \exp \bigg (-\int _0^t \frac{ads}{(\xi _s^2 - \xi _s^1)^2}\bigg ) \end{aligned}$$
(6.2)

which follows from (2.4a) shows that

$$\begin{aligned} 0 \leqslant g_t'(\xi ^2) \leqslant 1, \quad t \geqslant 0. \end{aligned}$$
(6.3)

Moreover, if \(E_t\) denotes the interval \(E_t = (\xi _t^1, \xi _t^2) \subset {\mathbb {R}}\) and \(\gamma \) the curve generating the Loewner chain \((g_t)_{t \geqslant 0}\), then conformal invariance of harmonic measure gives

$$\begin{aligned} \lim _{s\rightarrow \infty } s \pi \omega _{\mathbb {H}\smallsetminus \gamma [0,t]}(is, g_t^{-1}(E_t)) = \lim _{s\rightarrow \infty } s \pi \omega _{\mathbb {H}}(g_t(is), E_t) = |\xi _t^2 - \xi _t^1|. \end{aligned}$$

Since the left-hand side is bounded above by a constant times \(1 + {\text {diam}}(\gamma [0,t])\), this gives the estimate

$$\begin{aligned} |\xi _t^2 - \xi _t^1| \leqslant C (1 + {\text {diam}}(\gamma [0,t])), \quad t \geqslant 0, \end{aligned}$$
(6.4)

which together with (6.3) shows that \(M_t^{(\rho )}\) only gets large when \({\text {diam}}(\gamma [0,t])\) gets large.

We will also need a geometric regularity estimate. In order to state it, let \(z \in \mathbb {H}\) and \(0<\epsilon _1< \epsilon _2 < {\text {Im}} z\). Let \(\gamma : (0,1] \rightarrow \mathbb {H}\) be a simple curve such that

$$\begin{aligned} \gamma (0+) = 0, \quad |\gamma (1)-z| = \epsilon _1, \quad |\gamma (t) - z| > \epsilon _1, \quad t \in [0,1). \end{aligned}$$

Write \(H = \mathbb {H}\smallsetminus \gamma \) where \(\gamma = \gamma [0,1]\). For \(\epsilon > 0\) let \(\mathcal {B}_\epsilon = \mathcal {B}_\epsilon (z)\) be the disk of radius \(\epsilon \) about z and let U be the connected component containing z of \(\mathcal {B}_{\epsilon _2} \cap H\). The set \(\partial \mathcal {B}_{\epsilon _2} \cap \partial U\) consists of crosscuts of H. There is a unique outermost one which separates z from \(\infty \) in H and we denote this crosscut

$$\begin{aligned} \ell = \ell (z, \gamma , \epsilon _2). \end{aligned}$$
(6.5)

Outermost means that \(\ell \) separates z and any other such crosscut from \(\infty \). See Fig. 4.

Lemma 6.1

Let \(0 < \kappa \leqslant 4\). There exists \(C < \infty \) such that the following holds. Let \(z \in \mathbb {H}\) and \(0< \epsilon _1< \epsilon _2 < {\text {Im}} z\). For \(\epsilon >0\) define the stopping times

$$\begin{aligned} \tau _\epsilon = \inf \{t \geqslant 0 : \Upsilon _t(z) \leqslant \epsilon \}, \quad \tau _\epsilon '= \inf \{t \geqslant 0 : |\gamma (t) - z| \leqslant \epsilon \}. \end{aligned}$$
(6.6)

If

$$\begin{aligned} \lambda = \lambda _{\epsilon _1,\epsilon _2}=\inf \{t \geqslant \tau _{\epsilon _1}' : \gamma (t) \in \ell \}, \end{aligned}$$
(6.7)

where

$$\begin{aligned} \ell = \ell (z, \gamma _{\tau _{\epsilon _1}'}, \epsilon _2) \end{aligned}$$
(6.8)

is as in (6.5), then for \(0< 10\epsilon < \epsilon _1\), on the event \(\{\tau _{\epsilon _1}' < \infty \}\),

$$\begin{aligned} \mathbf {P}\left( \lambda< \tau _\epsilon < \infty \mid \gamma _{\tau _{\epsilon _1}'} \right) \leqslant C \left( \frac{\epsilon }{\epsilon _1} \right) ^{2-d} \left( \frac{\epsilon _1}{\epsilon _2} \right) ^{\beta /2}, \end{aligned}$$
(6.9)

where \(\beta = 4a -1\).

Fig. 4

Schematic picture of the curve \(\gamma \) (solid), the open set V (shaded), and the crosscut \(\ell \) defined in (6.5). If the path reenters V and hits \(\mathcal {B}_\epsilon (z)\), the “bad” event \(\{\lambda< \tau < \infty \}\) occurs; the probability of this event is estimated in Lemma 6.1. The path \(\gamma '=\gamma [\sigma , \lambda ]\) is a crosscut of V that may or may not separate z from \(\partial V \smallsetminus \ell \). In either case, the function \(S_{\lambda }(z)\) defined in (3.4) can be estimated by the Beurling estimate, since there is only one “side” of the curve facing z inside V.

Proof

Let \(\mathcal {B}_1 = \mathcal {B}_{\epsilon _1}(z)\) and \(\mathcal {B}_2 = \mathcal {B}_{\epsilon _2}(z)\). Given \(\gamma _{\tau _{\epsilon _1}'}\) we consider the outermost separating crosscut \(\ell = \ell (z, \gamma _{\tau _{\epsilon _1}'}, \epsilon _2)\). Let \(\sigma = \max \{t \leqslant \tau _{\epsilon _1}': \gamma (t) \in \ell \}\); this is not a stopping time, but almost surely \(\ell \) is a crosscut of \(H_\sigma \) which separates z from \(\infty \). Write V for the simply connected component containing z of \(H_{\sigma } \smallsetminus \ell \). Because one of the endpoints of \(\ell \) is the tip \(\gamma (\sigma )\), \(g_\sigma (\partial V \smallsetminus \ell ) - W_\sigma \) is a bounded open interval I contained in either the positive or negative real axis. (This also uses that \(\ell \) is a crosscut.) Almost surely, the curve \(\gamma '=\gamma [\sigma , \lambda ]\) is a crosscut of V starting and ending in \(\ell \). Note that \(g_\sigma (\gamma ') -W_\sigma \) is a curve in \(\mathbb {H}\) connecting 0 with \(g_\sigma (\ell )-W_\sigma \), the latter being a crosscut of \(\mathbb {H}\) separating I and the point \(g_\sigma (z) - W_\sigma \) from \(\infty \) in \(\mathbb {H}\). Therefore, there is one “side” (i.e., one of \(g_\sigma ^{-1}(W_\sigma \pm \mathbb {R}_+)\)) of the curve \(\gamma \) such that any curve connecting z with it must intersect \(\ell \). If we now write \(\delta := {\text {dist}}(\gamma _\lambda , z) \leqslant \epsilon _1\), we can use (3.2), the maximum principle, and then the Beurling estimate (Lemma 3.1) to see that, on the event \(\{\tau _{\epsilon _1}' < \infty \}\) (see Fig. 4),

$$\begin{aligned} S_{\lambda }(z) \leqslant C \omega _{V \smallsetminus \gamma '}(z, \ell ) \leqslant C \omega _{(\mathcal {B}_2 \cap V) \smallsetminus \gamma '}(z,\partial \mathcal {B}_2) \leqslant C \left( \frac{\delta }{\epsilon _2} \right) ^{1/2}. \end{aligned}$$

Consequently, on the event that \(\tau _{\epsilon _1}' < \infty \) and \(\delta \geqslant 2 \epsilon \), the one-point estimate Lemma 3.2 and the distortion estimate (3.1) show that

$$\begin{aligned} \mathbf {P}\left( \tau _\epsilon < \infty \mid \gamma _{\lambda } \right) \leqslant C \left( \frac{\epsilon }{\Upsilon _\lambda (z)}\right) ^{2-d} S_\lambda (z)^\beta \leqslant C \left( \frac{\epsilon }{\delta }\right) ^{2-d} \left( \frac{\delta }{\epsilon _2} \right) ^{\beta /2}. \end{aligned}$$

Since \(\delta \leqslant \epsilon _1\) and

$$\begin{aligned} \beta /2 - (2-d) \geqslant 0 \quad \text {for} \quad \kappa \leqslant 4, \end{aligned}$$
(6.10)

the right-hand side gets larger if \(\delta \) is replaced by \(\epsilon _1\). Thus, on the event that \(\tau _{\epsilon _1}' < \infty \) and \(\delta \geqslant 2 \epsilon \),

$$\begin{aligned} \mathbf {P}\left( \tau _\epsilon < \infty \mid \gamma _{\lambda } \right) \leqslant C \left( \frac{\epsilon }{\epsilon _1}\right) ^{2-d} \left( \frac{\epsilon _1}{\epsilon _2} \right) ^{\beta /2}, \end{aligned}$$

which proves (6.9) for all curves with \(\delta \geqslant 2 \epsilon \).
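For the record, (6.10) can be verified directly (our computation, using \(\beta = 4a-1\) and \(2-d = 1 - \frac{\kappa }{8} = 1 - \frac{1}{4a}\)):

$$\begin{aligned} \frac{\beta }{2} - (2-d) = 2a + \frac{1}{4a} - \frac{3}{2} = \frac{(4a-1)(2a-1)}{4a}, \end{aligned}$$

which is nonnegative for \(a \geqslant \frac{1}{2}\), that is, for \(\kappa \leqslant 4\).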

On the event that \(\tau _{\epsilon _1}' < \infty \) and \(\epsilon < \delta \leqslant 2\epsilon \), we can use the boundary estimate of Lemma 3.3 as follows. We may view \(\partial \mathcal {B}_\delta (z)\) as a crosscut of \(\mathbb {H}\smallsetminus \gamma [0, \lambda ]\) possibly considering only a subarc. We use the extension rule (see Chap. IV of [19]) to estimate from below the extremal distance between \(g^{-1}_{\lambda }(W_\lambda + \mathbb {R}_+)\) (or \(g^{-1}_{\lambda }(W_\lambda - \mathbb {R}_+)\), whichever does not intersect \(\partial \mathcal {B}_\delta (z)\) viewed as a crosscut) and \(\mathcal {B}_\delta (z)\) in \(\mathbb {H} \smallsetminus \gamma [0, \lambda ]\) by the extremal distance between \(\partial \mathcal {B}_2\) and \(\mathcal {B}_\delta (z)\) in \(\mathcal {B}_2 \cap V\). By comparing with the round annulus, the latter is at least \(\ln (\epsilon _2/\epsilon )/(2\pi )\). Therefore, Lemma 3.3 gives

$$\begin{aligned} \mathbf {P}\left( \tau _\epsilon < \infty \mid \gamma _{\lambda }\right) \leqslant C e^{-\beta \pi \frac{\ln (\epsilon _2/\epsilon )}{2\pi }} = C \, \left( \frac{\epsilon }{\epsilon _2} \right) ^{\beta /2}. \end{aligned}$$

It follows from (6.10) that, on the event that \(\tau _{\epsilon _1}' < \infty \) and \(\epsilon < \delta \leqslant 2\epsilon \),

$$\begin{aligned} \mathbf {P}\left( \tau _\epsilon < \infty \mid \gamma _{\lambda }\right) \leqslant C \left( \frac{\epsilon }{\epsilon _1}\right) ^{2-d} \left( \frac{\epsilon _1}{\epsilon _2} \right) ^{\beta /2}, \end{aligned}$$

which proves (6.9) also for curves with \(\epsilon < \delta \leqslant 2\epsilon \). \(\square \)

6.1.1 Proof of Proposition 2.11

We may without loss of generality assume \(\xi ^{1}=0\) and \(|z| = 1\). Constants are allowed to depend on z and \(\xi ^2\) as well as on \(\kappa \) and \(\rho \).

We will apply Lemma 6.1 with

$$\begin{aligned} \epsilon _1 = \epsilon ^{1/2} \quad \text {and} \quad \epsilon _2 = \epsilon ^{1/4}. \end{aligned}$$

By choosing \(\epsilon \) sufficiently small, we may assume that \(\epsilon _2 = \epsilon ^{1/4} < {\text {Im}} z\). For \(\epsilon > 0\), let \(\tau = \tau _\epsilon \), \(\tau ' = \tau _\epsilon '\), and \(\lambda = \lambda _{\epsilon _1, \epsilon _2}\) be defined by (6.6) and (6.7). Let \(\ell = \ell (z, \gamma _{\tau '_{\epsilon _1}} , \epsilon _2)\) denote the separating crosscut in (6.8). Let \( p \in (0, 1/4)\) be a constant (to be chosen later) and set \(\Sigma =\Sigma _{\epsilon } = \inf \{t \geqslant 0: |\gamma (t)| \geqslant \epsilon ^{-p}\}\).

We define a “good” event \(E = E_{\epsilon }\) by

$$\begin{aligned} E = E_{\epsilon } = E_1 \cap E_2 \end{aligned}$$

where

$$\begin{aligned} E_1 = \{\tau< \lambda \}, \quad E_2 = \{ \tau < \Sigma \}. \end{aligned}$$

We claim that

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\epsilon ^{d-2}\mathbf {P}^{\rho } \left( \tau < \infty , E^{c} \right) = 0, \end{aligned}$$
(6.11)

where \(E^c\) denotes the complement of E. To prove (6.11), it is clearly enough to show that

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\epsilon ^{d-2}\mathbf {P}^{\rho } \left( \tau < \infty , E_1^{c} \right) = 0 \end{aligned}$$
(6.12)

and

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\epsilon ^{d-2}\mathbf {P}^{\rho } \left( \tau < \infty , E_2^{c} \right) = 0. \end{aligned}$$
(6.13)

Let \(\sigma _{0}=0\) and define \(\sigma _k\) and \(U_k\) for \(k=1,2,\ldots ,\) by

$$\begin{aligned} \sigma _k = \inf \{t \geqslant 0 : |\gamma (t)| \geqslant 2^k \}, \quad U_k = \{\sigma _{k-1} \leqslant \tau< \sigma _k < \infty \}. \end{aligned}$$

Using (3.8), (6.3), and (6.4), we obtain the estimates

$$\begin{aligned} \mathbf {P}^\rho \left( \tau< \infty , \, E_i^c \right)&= \mathbf {E}\left[ M_\tau ^{(\rho )} 1_{\tau< \infty } 1_{E_i^c} \right] \leqslant C \mathbf {E}\left[ (1 + {\text {diam}}(\gamma [0,\tau ])^r) 1_{\tau< \infty } 1_{E_i^c} \right] \nonumber \\&\leqslant C \left( \mathbf {P}(\tau <\infty , E_i^c) + \sum _{k=1}^\infty 2^{kr} \mathbf {P}\left( E_i^c, \, U_k \right) \right) , \quad i = 1,2. \end{aligned}$$
(6.14)

We will use (6.14) to prove (6.12) and (6.13).

We first prove (6.12). Lemma 6.1 and Lemma 3.2 yield

$$\begin{aligned} \mathbf {P}(\tau<\infty , E_1^c)&\leqslant \mathbf {P}\left( \lambda< \tau< \infty \mid \gamma _{\tau _{\epsilon _1}'} \right) \mathbf {P}(\tau _{\epsilon _1}'<\infty )\nonumber \\&\leqslant C \left( \frac{\epsilon }{\epsilon _1} \right) ^{2-d} \left( \frac{\epsilon _1}{\epsilon _2} \right) ^{\beta /2} \bigg (\frac{\epsilon _1}{\Upsilon _{\mathbb {H}}(z)}\bigg )^{2-d} = o(\epsilon ^{2-d}). \end{aligned}$$
(6.15)

We next estimate the series on the right-hand side of (6.14). We first consider \(i=1\). For this, suppose \(j = -1, \ldots , J := \lfloor \log _2(\epsilon ^{-1}) \rfloor +1\) and define

$$\begin{aligned} V_j^k = \{ \tau '_{\epsilon 2^{j}} \leqslant \sigma _{k-1} < \tau '_{\epsilon 2^{j-1}}\}. \end{aligned}$$

The distortion estimate (3.1) implies \(\tau \leqslant \tau _{\epsilon 2^{-1}}'\). Also, \(\tau '_{\epsilon 2^J} = 0\) because \(|z| = 1\). Hence, \(U_k = \cup _{j=-1}^J (U_k \cap V_j^k)\) and so

$$\begin{aligned} \mathbf {P}\left( E_1^c, \, U_k \right) = \sum _{j =-1}^{J}\mathbf {P}\left( E_1^c, \, U_k, \, V_j^k\right) . \end{aligned}$$
(6.16)

Let us first assume \(-1 \leqslant j \leqslant \lfloor \frac{1}{2}\log _2 \epsilon ^{-1} \rfloor \). We claim that, on the event \(V_{j}^k \cap \{\sigma _{k-1} < \infty \}\),

$$\begin{aligned} \mathbf {P}\left( \tau< \infty , E_1^c \mid \gamma _{\sigma _{k-1}} \right) \leqslant \mathbf {P}\left( \tau < \infty \mid \gamma _{\sigma _{k-1}} \right) \leqslant C \bigg (\frac{2^j \epsilon }{2^{2k}}\bigg )^{\beta /2} \bigg (\frac{\epsilon }{2^j \epsilon }\bigg )^{2-d}. \end{aligned}$$
(6.17)

Indeed, the first estimate in (6.17) is trivial and the second follows from Lemma 3.2 as follows. The curve \(\gamma _{\sigma _{k-1}}\) is a crosscut of \(D:= 2^{k-1} \mathbb {D} \cap \mathbb {H}\) and so partitions D into exactly two components, one of which contains z. Consequently, using (3.2) and the maximum principle, we have \(S_{\sigma _{k-1}}(z) \leqslant C \omega _{D \smallsetminus \gamma [0, \sigma _{k-1}]}(z, \partial D)\), that is, we can estimate \(S_{\sigma _{k-1}}(z) \) by the probability of a Brownian motion from z to reach distance \(2^{k-1}\) from 0 before hitting the real line or the curve \(\gamma [0, \sigma _{k-1}]\). Thus, given the path up to time \(\sigma _{k-1}\), on \(V^k_j\), the Beurling estimate (Lemma 3.1) shows that the probability that a Brownian motion starting at z reaches the circle of radius \(2 {\text {Im}} z\) about z without exiting \(H_{\sigma _{k-1}}\) is \(O((\epsilon 2^j / {\text {Im}} z)^{1/2})\). Given this, we claim that the probability to reach distance \(2^{k-1}\) from 0 is \(O({\text {Im}} z/2^k)\). Indeed, this follows easily, e.g., from the fact that \(\omega _{\mathbb {D}\cap \mathbb {H}}(z, \partial \mathbb {D})=2(\pi - \nu (z))/\pi \), where \(\nu (z)\) is the obtuse angle \(\angle (-1, z, 1)\). Hence, since \({\text {Im}} z \leqslant 1,\) we see that \(S_{\sigma _{k-1}}(z)^\beta \leqslant C \,(\epsilon 2^{j}/2^{2k})^{\beta /2}\). Moreover, by the distortion estimate (3.1), we have \(\Upsilon _{H_{\sigma _{k-1}}}(z) \asymp 2^j \epsilon \) on \(V_j^k\). Hence, the one-point estimate (Lemma 3.2) gives (6.17).

Lemma 3.2 also shows that

$$\begin{aligned} \mathbf {P}\left( V_{j}^k ,\, \sigma _{k-1} < \infty \right) \leqslant C (2^{j-1}\epsilon )^{2-d} S_0^\beta , \end{aligned}$$

which combined with (6.17) gives

$$\begin{aligned} \mathbf {P}\left( E_1^c, \, U_k, \, V_j^k\right) \leqslant C \bigg (\frac{2^j \epsilon }{2^{2k}}\bigg )^{\beta /2} \bigg (\frac{\epsilon }{2^j \epsilon }\bigg )^{2-d} (2^{j-1}\epsilon )^{2-d} \leqslant C \epsilon ^{2-d+ \frac{\beta }{2}} 2^{j\left( \frac{\beta }{2}-(2-d)\right) -k\beta }. \end{aligned}$$

Summing over j from \(-1\) to \(\lfloor \frac{1}{2}\log _2 \epsilon ^{-1} \rfloor \) and recalling (6.10), we find

$$\begin{aligned} \sum _{j=-1}^{\lfloor \frac{1}{2}\log _2 \epsilon ^{-1} \rfloor } \mathbf {P}\left( E_1^c, \, U_k, \, V_j^k\right) \leqslant C \epsilon ^{2-d + \frac{\beta }{2}} \epsilon ^{-\frac{\beta }{4} + \frac{2-d}{2}} 2^{-k \beta } \leqslant C \, \epsilon ^{2-d + \frac{\beta }{4}} 2^{-k \beta }. \end{aligned}$$
(6.18)

Suppose now that \(\lfloor \frac{1}{2}\log _2 \epsilon ^{-1} \rfloor + 1 \leqslant j \leqslant J\). Lemma 6.1 with \(\epsilon _1 = \epsilon ^{1/2}\) and \(\epsilon _2 = \epsilon ^{1/4}\) implies that, on the event \(\{\tau '_{\epsilon _1} < \infty \}\),

$$\begin{aligned} \mathbf {P}\left( \tau < \infty , \, E_1^c \, \big | \, \gamma _{\tau '_{\epsilon _1}}\right) \leqslant C \left( \frac{\epsilon }{\epsilon _1} \right) ^{2-d} \left( \frac{\epsilon _1}{\epsilon _2} \right) ^{\beta /2} = C \, \epsilon ^{\frac{2-d}{2} + \frac{\beta }{8}} \end{aligned}$$
(6.19)

for all sufficiently small \(\epsilon \). Moreover, on the event \(V_j^k \cap \{ \sigma _{k-1} < \infty \}\), we can estimate \(S_{\sigma _{k-1}}\) as when proving (6.17) and use Lemma 3.2 to see that

$$\begin{aligned} \mathbf {P}\left( \tau '_{\epsilon _1} < \infty \mid \gamma _{\sigma _{k-1}}\right) \leqslant C \, \left( \frac{2^j \epsilon }{2^{2k}} \right) ^{\beta /2} \left( \frac{\epsilon _1}{2^j \epsilon } \right) ^{2-d}. \end{aligned}$$
(6.20)

We conclude from (6.19) and (6.20) that

$$\begin{aligned} \mathbf {P}\left( E_1^c, \, U_k, \, V_j^k \right) \leqslant C \, \epsilon ^{2-d +\frac{\beta }{8}} (2^{j}\epsilon )^{\frac{\beta }{2}-(2-d)} 2^{-k \beta }. \end{aligned}$$

Summing over \(j = \lfloor \frac{1}{2}\log _2 \epsilon ^{-1} \rfloor +1, \ldots , J\) and using that \(\beta /2-(2-d) \geqslant 0\) with equality only if \(\kappa = 4\) (see (6.10)), we infer that

$$\begin{aligned} \sum _{j= \lfloor \frac{1}{2}\log _2 \epsilon ^{-1} \rfloor +1}^J \mathbf {P}\left( E_1^c, \, U_k, \, V_j^k\right) \leqslant C \epsilon ^{2-d + \frac{\beta }{8}} \log _2(\epsilon ^{-1}) \epsilon ^{\frac{\beta }{4} - \frac{2-d}{2}} 2^{-k \beta } = o(\epsilon ^{2-d}) 2^{-k \beta }, \end{aligned}$$

where the factor \(\log _2(\epsilon ^{-1})\) needs to be included only when \(\kappa = 4\). Together with (6.16) and (6.18), this gives

$$\begin{aligned} 2^{rk} \mathbf {P}\left( E_1^c, \, U_k \right) \leqslant 2^{(r - \beta )k}o(\epsilon ^{2-d}) . \end{aligned}$$
(6.21)

Since \(r-\beta < 0\) (this is equivalent to the condition \(\rho < 8-\kappa \)), we can sum the right-hand side of (6.21) over all integers \(k \geqslant 1\) and the result is \(o(\epsilon ^{2-d})\). In view of (6.14) and (6.15), this proves (6.12).

We next prove (6.13). Note that \(E^c_2 \subset \cup _{k = \lfloor p \log _2 (\epsilon ^{-1}) \rfloor }^\infty U_k\). Given \(\gamma _{\sigma _{k-1}}\), we can estimate harmonic measure as when proving (6.17) to find \(S_{\sigma _{k-1}}(z)^\beta \leqslant C \Upsilon _{H_{\sigma _{k-1}}}^{\beta /2}2^{-\beta k}\) on \(U_k\). Therefore, estimating as in (6.14) with the help of (6.4), and then using the one-point estimate (see Lemma 3.2) and the fact that \(\beta /2-(2-d) \geqslant 0\), we obtain

$$\begin{aligned} \mathbf {P}^{\rho }(U_k) \leqslant C 2^{rk}\mathbf {P}(U_k) \leqslant C 2^{rk} \bigg (\frac{\epsilon }{\Upsilon _{H_{\sigma _{k-1}}}(z)}\bigg )^{2-d} S_{\sigma _{k-1}}(z)^\beta \leqslant C 2^{(r-\beta ) k} \epsilon ^{2-d}. \end{aligned}$$
(6.22)

Since \(r-\beta < 0\), summing (6.22) over all integers k with \(k \geqslant p \log _2 (\epsilon ^{-1})\) gives at most \(C \epsilon ^{2-d} \epsilon ^{p(\beta - r)} = o(\epsilon ^{2-d})\). We obtain \(\lim _{\epsilon \downarrow 0} \epsilon ^{d-2}\mathbf {P}^\rho (\tau < \infty , E_2^c) = 0\) for any \(p > 0\). This proves (6.13) and hence completes the proof of (6.11).

In view of (6.11), it only remains to prove that

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\epsilon ^{d-2}\mathbf {P}^\rho \left( \tau < \infty , \, E\right) = c_*G^{\rho }(z, \xi ^1, \xi ^2). \end{aligned}$$

According to Eq. (3.15), we have

$$\begin{aligned} \mathbf {E}^*[f] = \tilde{G}_0^{-1}\mathbf {E}[\tilde{G}_t f], \quad t \geqslant 0, \end{aligned}$$

whenever \(f \in L^1(\mathbf {P}^*)\) is measurable with respect to \(\tilde{\mathcal {F}}_t\). We change to the radial time parametrization and set \(t = -(\ln \epsilon )/(2a)\), so that \(\epsilon = e^{-2at}\) and \(s(t) = \tau =\tau _\epsilon \). Then \(\tilde{G}_t = G_{\tau _\epsilon }\) and the function \(M_{\tau _\epsilon }^{(\rho )} 1_{\tau _\epsilon < \infty } 1_E\) is measurable with respect to \(\tilde{\mathcal {F}}_t = \mathcal {F}_{\tau _\epsilon }\), so we find

$$\begin{aligned} \mathbf {P}^\rho \left( \tau< \infty , \, E\right) = \mathbf {E}\left[ M_{\tau }^{(\rho )} 1_{\tau< \infty } 1_E \right] = G_0 \, \mathbf {E}^*\left[ G_{\tau }^{-1} M_{\tau }^{(\rho )} 1_{\tau < \infty } 1_E \right] , \end{aligned}$$
(6.23)

where \(G_{0}\) is the \(\hbox {SLE}_{\kappa }\) Green’s function. Thanks to the boundary conditions of the martingale \(G_t\), we have \(G_{s(t)} = G_\infty = 0\) on the event \(\tau = \infty \). This means that \(\mathbf {E}^*[1_{\tau = \infty }] = G_0^{-1}\mathbf {E}[G_\infty 1_{\tau = \infty }] = 0\). Hence we can remove the factor \(1_{\tau < \infty }\) from the right-hand side of (6.23). Thus, using the definition (3.13) of G,

$$\begin{aligned} \mathbf {P}^\rho \left( \tau < \infty , \, E \right) = \epsilon ^{2-d} \, G_0 \, \mathbf {E}^*\left[ M_{\tau }^{(\rho )} S_{\tau }^{-\beta } 1_E \right] , \end{aligned}$$

where \(S_{\tau } = S_{\tau }(z)\). We need to show that

$$\begin{aligned} \lim _{\epsilon \downarrow 0} \mathbf {E}^*\left[ M_{\tau }^{(\rho )} S_{\tau }^{-\beta } 1_{E} \right] = c_* \mathbf {E}^* \left[ M_T^{(\rho )} \right] , \quad \tau =\tau _\epsilon , \end{aligned}$$
(6.24)

where T is the time at which the path reaches z. Let \(\tau '' = \tau _{\epsilon ^{1/2}/4}\). Then \(\tau '_{\epsilon ^{1/2}} \leqslant \tau '' \leqslant \tau \) if \(\epsilon \) is small enough. We claim that we can choose p so that

$$\begin{aligned} \left| M_{\tau }^{(\rho )} - M_{\tau ''}^{(\rho )} \right| 1_E = o(1). \end{aligned}$$
(6.25)

Indeed, suppose we are on the event \(E=E_1 \cap E_2\). Then, by the definition of \(E_1\), \({\text {diam}}\gamma [\tau '', \tau ] \leqslant 2\epsilon ^{1/4}\) if \(\epsilon \) is small enough. Moreover, if \(R:={\text {diam}}\gamma [0, \tau ]\), then \(R \leqslant \epsilon ^{-p}\) by the definition of \(E_2\). Hence \(\epsilon ^{1/4}R \leqslant \delta \), where \(\delta : = \epsilon ^{1/4 -p}\). Combining these bounds with Proposition 3.82 of [25], we see that \({\text {diam}}g_{s}(\gamma [s, \tau ]) \leqslant C \delta ^{1/2}\) uniformly for \(s \in [\tau '', \tau ]\). Using that \({\text {hcap}}K \leqslant C \, ({\text {diam}}K)^2\) for a half-plane hull K, it follows that

$$\begin{aligned} \tau -\tau '' \leqslant C \, \delta \quad \text {and} \quad \Delta _{s} = \Delta _{\tau ''} + O(\delta ^{1/2})\text { for }s \in [ \tau '', \tau ], \end{aligned}$$
(6.26)

where \(\Delta _t: = \xi ^2_t - \xi ^1_t\). Therefore, using also (6.2) and (6.3),

$$\begin{aligned} \Delta _\tau ^r g_\tau '(\xi ^2)^{\zeta }-\Delta _{\tau ''}^r g_{\tau ''}'(\xi ^2)^{\zeta }&= (\Delta _\tau ^r - \Delta _{\tau ''}^r)g_\tau '(\xi ^2)^{\zeta } + \Delta _{\tau ''}^r(g_\tau '(\xi ^2)^{\zeta } - g_{\tau ''}'(\xi ^2)^{\zeta })\\&= \Delta _{\tau ''}^r g_{\tau ''}'(\xi ^2)^{\zeta }\left[ \exp \left( -\zeta \int _{\tau ''}^\tau a/\Delta _s^2 ds\right) - 1\right] + O(\delta ^{1/2}). \end{aligned}$$

On the event \(\Delta _{\tau ''} \leqslant \delta ^{1/4}\), the right-hand side is o(1) by (6.3). By (6.4) we may therefore assume \( \delta ^{1/4} < \Delta _{\tau ''} \leqslant CR\). Now choose any \(p \in (0,1/4)\) so that \(\epsilon ^{(1-r)p +1/4} = o(1)\). Then by Taylor expansion, using (6.4) and (6.26),

$$\begin{aligned} \Delta _{\tau ''}^r\left[ \exp \left( -\zeta \int _{\tau ''}^\tau a/\Delta _s^2 ds\right) - 1\right] \leqslant C \Delta _{\tau ''}^r \int _{\tau ''}^\tau \frac{a}{\Delta _s^2}ds \leqslant C \, \Delta _{\tau ''}^{r-2}|\tau -\tau ''|= o(1), \end{aligned}$$

where the last step is seen as follows: if \(r \geqslant 2\), then \(\Delta _{\tau ''}^{r-2}|\tau -\tau ''| \leqslant C \epsilon ^{-p(r-2)}\delta = C\epsilon ^{(1-r)p + 1/4} = o(1)\) by the choice of p, while if \(0 \leqslant r < 2\), then \(\Delta _{\tau ''}^{r-2}|\tau -\tau ''| \leqslant C\delta ^{(r-2)/4}\delta = C\delta ^{(r+2)/4} = o(1)\) since \(p \in (0,1/4)\). This concludes the proof of (6.25). We fix p so that (6.25) holds for the remainder of the proof.

Using the invariant distribution (see Lemma 3.5 with \(f(x) = \sin ^{-\beta }(x)\)) we have that \(\mathbf {E}^*\left[ S_{\tau }^{-\beta } \right] = O(1)\), so

$$\begin{aligned} \left| \mathbf {E}^*\left[ \left( M_{\tau }^{(\rho )} - M_{\tau ''}^{(\rho )} \right) S_\tau ^{-\beta } 1_E\right] \right| \leqslant o(1) \, \mathbf {E}^*\left[ S_{\tau }^{-\beta } \right] = o(1). \end{aligned}$$
(6.27)

On the other hand, since \(M^{(\rho )}_{\tau ''}1_{U_k} \leqslant C 2^{kr}\), the same argument that proved (6.11) shows that

$$\begin{aligned} \mathbf {E}^*\left[ M_{\tau ''}^{(\rho )} S_\tau ^{-\beta } 1_{E^c} \right] = o(1) \end{aligned}$$
(6.28)

as \(\epsilon \rightarrow 0\). It follows from (6.27) and (6.28) that

$$\begin{aligned} \mathbf {E}^*\left[ M_{\tau }^{(\rho )} S_\tau ^{-\beta } 1_{E} \right] = \mathbf {E}^*\left[ M_{\tau ''}^{(\rho )} S_\tau ^{-\beta } \right] + o(1). \end{aligned}$$

Moreover,

$$\begin{aligned} \mathbf {E}^*\left[ M_{\tau ''}^{(\rho )} S_\tau ^{-\beta } \right] = \mathbf {E}^*\left[ M_{\tau ''}^{(\rho )} \, \mathbf {E}^*\left[ S_\tau ^{-\beta } \mid \mathcal {F}_{\tau ''} \right] \right] . \end{aligned}$$

Using Lemma 3.5 we see that there is \(\alpha > 0\) such that

$$\begin{aligned} \mathbf {E}^*\left[ S_\tau ^{-\beta } \mid \mathcal {F}_{\tau ''} \right] = \frac{c_*}{2} \int _0^{\pi } \sin \theta \, d\theta \, \left( 1+O(\epsilon ^\alpha )\right) = c_*(1+O(\epsilon ^\alpha )). \end{aligned}$$

To complete the proof of Proposition 2.11 it only remains to show that

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\mathbf {E}^*\left[ M_\tau ^{(\rho )} \right] = \mathbf {E}^*\left[ M_T^{(\rho )} \right] . \end{aligned}$$

For this we check that the family \(\{M_\tau ^{(\rho )}\}\) is uniformly integrable as \(\epsilon \downarrow 0\). Since the only way in which \(M_\tau ^{(\rho )}\) can become large is for the path to reach a large diameter, uniform integrability follows almost immediately from what we have done. Since we will need this fact below (for \(\rho =2\)), we formulate it as a separate lemma and give the proof, keeping the notation introduced in this subsection.

Lemma 6.2

Let \(\rho \in [0, 8-\kappa )\). Then the family \(\{M_\tau ^{(\rho )}\}\), where \(\tau =\tau _\epsilon \), is uniformly integrable as \(\epsilon \downarrow 0\).

Proof

We want to show that for every \(\epsilon _0 > 0\), there exists an \(R > 0\) such that

$$\begin{aligned} \mathbf {E}^*\Big [M_{\tau }^{(\rho )} 1_{\{M_{\tau }^{(\rho )} > R\}}\Big ] < \epsilon _0 \end{aligned}$$
(6.29)

for all small \(\epsilon > 0\). Let us prove (6.29).

The estimate (6.4) yields

$$\begin{aligned} |\xi _{\tau }^2 - \xi _{\tau }^1| \leqslant C\, 2^{k} \quad \text {on} \quad U_k, \quad k \geqslant 0, \end{aligned}$$

so, in view of (6.3), there exists a constant \(C_0>0\) such that

$$\begin{aligned} |M_{\tau }^{(\rho )}| \leqslant C_0 2^{kr} \quad \text {on} \quad U_k, \quad k \geqslant 0. \end{aligned}$$

Suppose \(R > 0\) and let \(N := \lfloor r^{-1}\log _2(R/C_0) \rfloor \). Then

$$\begin{aligned} |M_{\tau }^{(\rho )}| \leqslant C_0 2^{kr} \leqslant R \quad \text {on} \quad U_k, \quad 0 \leqslant k \leqslant N-1. \end{aligned}$$

Hence

$$\begin{aligned} \mathbf {E}^*[M_{\tau }^{(\rho )} 1_{\{M_{\tau }^{(\rho )} > R\}}] \leqslant \mathbf {E}^*[M_{\tau }^{(\rho )} 1_{\cup _{k= N}^\infty U_k}] \leqslant C_0 \sum _{k=N}^\infty 2^{kr} \mathbf {P}^*\big (U_k\big ). \end{aligned}$$
(6.30)

Since \(U_k\) is \(\mathcal {F}_{\tau }\)-measurable, (6.46) gives

$$\begin{aligned} \mathbf {P}^*\big ( U_k\big ) = \mathbf {E}^*\big [ 1_{U_k} \big ] = G_0^{-1} \mathbf {E}[G_{\tau } 1_{U_k}] \leqslant C \epsilon ^{d-2} \mathbf {E}[ 1_{U_k}], \end{aligned}$$

where we have used the following estimate in the last step:

$$\begin{aligned} |G_{\tau }| = (\Upsilon _0(z) \epsilon )^{d-2} S_\tau (z)^\beta \leqslant C \epsilon ^{d-2}. \end{aligned}$$

It follows from (6.22) that

$$\begin{aligned} \mathbf {P}\left( U_k\right) \leqslant C \epsilon ^{2-d} 2^{-k\beta } \quad \text {for} \quad k > \log _2(4|z|). \end{aligned}$$
(6.31)

Therefore, we find

$$\begin{aligned} \mathbf {P}^*\left( U_k \right) \leqslant C 2^{-k\beta }, \quad k > \log _2(4|z|). \end{aligned}$$

Employing this estimate in (6.30) we obtain

$$\begin{aligned} \mathbf {E}^*[M_{\tau }^{(\rho )} 1_{\{M_{\tau }^{(\rho )} > R\}}] \leqslant C \sum _{k=N}^\infty 2^{-k(\beta -r)} \leqslant C 2^{-N(\beta -r)}. \end{aligned}$$
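Since \(N = \lfloor r^{-1}\log _2(R/C_0) \rfloor \geqslant r^{-1}\log _2(R/C_0) - 1\), the right-hand side is bounded by an explicit negative power of R:

$$\begin{aligned} 2^{-N(\beta -r)} \leqslant 2^{\beta -r}\, (R/C_0)^{-\frac{\beta -r}{r}}. \end{aligned}$$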

The condition \(N > \log _2(4|z|)\) is fulfilled for all sufficiently large R. Hence, since \(\beta - r > 0\), given \(\epsilon _0 > 0\) we can choose R large enough that \(\mathbf {E}^*[M_{\tau }^{(\rho )} 1_{\{M_{\tau }^{(\rho )} > R\}}] < \epsilon _0\) for all \(\epsilon > 0\), which implies (6.29) and concludes the proof of Lemma 6.2. \(\square \)

Given Lemma 6.2, the proof of Proposition 2.11 is now complete. \(\square \)

6.2 Probabilistic Representation for \(\mathcal {G}\): Proof of Proposition 2.12

Let \(0 < \kappa \leqslant 4\). Our goal is to show that

$$\begin{aligned} {\mathcal {G}}(z, \xi ^1, \xi ^2) = ({{\mathrm{Im}}\,}z)^{d-2} \sin ^{\beta }(\arg (z-\xi ^1)) \mathbf {E}^*[M_T^{(2)}], \quad z \in \mathbb {H}, \quad \xi ^1 < \xi ^2, \end{aligned}$$
(6.32)

where \(\mathbf {E}^*\) denotes expectation with respect to two-sided radial \(\hbox {SLE}_\kappa \) from \(\xi ^1\) through z, stopped at the hitting time T of z and \(\mathcal {G}\) is our prediction for the Green’s function. Our first step is to use scale and translation invariance to reduce the relation (6.32), which depends on the four real variables \(x={{\mathrm{Re}}\,}z, y={{\mathrm{Im}}\,}z, \xi ^1, \xi ^2\), to an equation involving only two independent variables.

6.2.1 The Function \(h(\theta ^1, \theta ^2)\)

It follows from (2.9) and (2.10) that \({\mathcal {G}}\) satisfies the scaling behavior

$$\begin{aligned} {\mathcal {G}}(\lambda z, \lambda \xi ^1, \lambda \xi ^2) = \lambda ^{d-2} {\mathcal {G}}(z, \xi ^1, \xi ^2), \quad \lambda > 0. \end{aligned}$$

Hence we can write

$$\begin{aligned} {\mathcal {G}}(z,\xi ^1, \xi ^2) = y^{d-2} \mathcal {H}(z,\xi ^1, \xi ^2), \end{aligned}$$

where the function \(\mathcal {H}\) is homogeneous and translation invariant:

$$\begin{aligned} \mathcal {H}(\lambda z, \lambda \xi ^1, \lambda \xi ^2)&= \mathcal {H}(z,\xi ^1, \xi ^2), \quad \lambda > 0, \end{aligned}$$
(6.33a)
$$\begin{aligned} \mathcal {H}(z,\xi ^1, \xi ^2)&= \mathcal {H}(x + \lambda , y, \xi ^1 + \lambda , \xi ^2 + \lambda ), \quad \lambda \in {\mathbb {R}}. \end{aligned}$$
(6.33b)

It follows that the value of \(\mathcal {H}(x,y, \xi ^1, \xi ^2)\) only depends on the two angles \(\theta ^1\) and \(\theta ^2\) defined by

$$\begin{aligned} \theta ^1 = \arg (z - \xi ^1), \quad \theta ^2 = \arg (z - \xi ^2). \end{aligned}$$
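Explicitly,

$$\begin{aligned} \cot \theta ^j = \frac{x - \xi ^j}{y}, \quad j = 1,2, \end{aligned}$$

and these ratios are unchanged under \((z, \xi ^1, \xi ^2) \mapsto (\lambda z + c, \lambda \xi ^1 + c, \lambda \xi ^2 + c)\) with \(\lambda > 0\) and \(c \in {\mathbb {R}}\), which is precisely the invariance expressed in (6.33).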

In particular, if we let \(\Delta \) denote the triangular domain

$$\begin{aligned} \Delta = \{(\theta ^1, \theta ^2) \in {\mathbb {R}}^2\, | \, 0< \theta ^1< \theta ^2 < \pi \}, \end{aligned}$$

then we can define a function \(h:\Delta \rightarrow {\mathbb {R}}\) for \(\alpha \in (1, \infty ) \smallsetminus {\mathbb {Z}}\) by the equation

$$\begin{aligned} {\mathcal {G}}(z,\xi ^1, \xi ^2) = y^{d-2} h(\theta ^1, \theta ^2), \quad z \in \mathbb {H}, \; -\infty< \xi ^1< \xi ^2 < \infty . \end{aligned}$$
(6.34)

Using Lemma A.2, we can extend the definition of h to all \(\alpha \in (1, \infty )\) by continuity. We write \(h(\theta ^1, \theta ^2; \alpha )\) if we want to indicate the \(\alpha \)-dependence of \(h(\theta ^1, \theta ^2)\) explicitly. In terms of h, we can then reformulate Eq. (6.32) as follows:

$$\begin{aligned} h(\theta ^1, \theta ^2; \alpha ) = (\sin ^{\beta } \theta ^1 )\mathbf {E}^*[M_T^{(2)}], \quad (\theta ^1, \theta ^2) \in \Delta , \quad \beta \geqslant 1. \end{aligned}$$
(6.35)

The following lemma, which is crucial for the proof of (6.35), describes the behavior of h near the boundary of \(\Delta \). In particular, it shows that \(h(\theta ^1, \theta ^2)\) vanishes as \(\theta ^1\) approaches 0 or \(\pi \), and that the restriction of h to the top edge \(\theta ^2 = \pi \) of \(\Delta \) equals \(\sin ^{\beta } \theta ^1\). In other words, the lemma verifies that \({\mathcal {G}}(z,\xi ^1, \xi ^2)\) satisfies the appropriate boundary conditions.

Lemma 6.3

(Boundary behavior of h) Let \(\alpha \geqslant 2\). Then the function \(h(\theta ^1, \theta ^2)\) defined in (6.34) is a smooth function of \((\theta ^1, \theta ^2) \in \Delta \) and has a continuous extension to the closure \(\bar{\Delta }\) of \(\Delta \). This extension satisfies

$$\begin{aligned} h(\theta ^1, \pi )&= \sin ^\beta {\theta ^1}, \quad \theta ^1 \in [0, \pi ], \end{aligned}$$
(6.36)
$$\begin{aligned} h(\theta , \theta )&= h_f(\theta ), \quad \theta \in (0, \pi ), \end{aligned}$$
(6.37)

where \(h_f(\theta )\) is defined in (8.8). Moreover, there exists a constant \(C > 0\) such that

$$\begin{aligned} 0 \leqslant h(\theta ^1, \theta ^2) \leqslant C \sin ^{\beta }\theta ^1, \quad (\theta ^1, \theta ^2) \in \bar{\Delta }, \end{aligned}$$
(6.38)

and

$$\begin{aligned} \frac{|h(\theta ^1, \theta ^2) - h(\theta ^1, \pi )|}{\sin ^{\beta }\theta ^1} \leqslant C\frac{|\pi - \theta ^2|}{\sin \theta ^1}, \quad (\theta ^1, \theta ^2) \in \Delta . \end{aligned}$$
(6.39)

Proof

The rather technical proof involves asymptotic estimates of the integral in (2.9) and is given in [32]. \(\square \)

Proposition 6.4

(PDE for Green’s function) Suppose \(\alpha \geqslant 2\). The function \(\mathcal {G}(x + iy, \xi ^1, \xi ^2)\) defined in (2.10) satisfies the two linear PDEs

$$\begin{aligned} \left( \mathcal {A}_j+\frac{2(\alpha -1)(y^2 - (x-\xi ^j)^2)}{\alpha (y^2 + (x-\xi ^j)^2)^2} \right) \mathcal {G} = 0, \quad j =1,2, \end{aligned}$$

where the differential operators \(\mathcal {A}_j\) were defined in (5.5).

Proof

We have

$$\begin{aligned} \mathcal {G}(x + iy, \xi ^1, \xi ^2) = {{\mathrm{Im}}\,}\int _{\gamma } g(x,y,\xi ^1, \xi ^2, u) du, \end{aligned}$$

where \(\gamma \) denotes the Pochhammer contour in (2.9) and the integrand g is given by

$$\begin{aligned} g(x,y,\xi ^1, \xi ^2, u) =&\; \hat{c}^{-1} y^{\alpha + \frac{1}{\alpha } - 2} |z - \xi ^1|^{1 - \alpha } |z - \xi ^2|^{1 - \alpha } e^{-i\pi \alpha }\\&\times (u - z)^{\alpha -1} (u - \bar{z})^{\alpha -1} (u - \xi ^1)^{-\frac{\alpha }{2}} (\xi ^2 - u)^{-\frac{\alpha }{2}}. \end{aligned}$$

A long but straightforward computation shows that g obeys the equations

$$\begin{aligned} \mathcal {A}_j g + \frac{2}{u-\xi ^j} \partial _u g - \frac{2}{(u-\xi ^j)^2} g = 0, \quad j = 1,2. \end{aligned}$$

It follows that

$$\begin{aligned} \mathcal {A}_j \mathcal {G} = -{{\mathrm{Im}}\,}\int _{\gamma }\bigg (\frac{2}{u-\xi ^j} \partial _u g - \frac{2}{(u-\xi ^j)^2} g\bigg ) du, \quad j = 1,2. \end{aligned}$$

The proposition follows because an integration by parts with respect to u shows that the integral on the right-hand side vanishes. \(\square \)

The derivation of formula (6.35) relies on an application of the optional stopping theorem to the martingale observable associated with \({\mathcal {G}}\). The following lemma gives an expression for this local martingale in terms of h.

Lemma 6.5

(Martingale observable for \(\hbox {SLE}_\kappa (2)\)) Let \(\theta _t^j = \arg (g_t(z) - \xi _t^j)\), \(j = 1,2\). Then

$$\begin{aligned} \mathcal {M}_t = \Upsilon _t(z)^{d-2}h(\theta _t^1, \theta _t^2) \end{aligned}$$
(6.40)

is a local martingale for \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^1, \xi ^2)\).

Proof

The proof follows from localization and a direct computation using Itô’s formula together with Proposition 6.4. In fact, since

$$\begin{aligned} \Upsilon _t^{d-2}h(\theta _t^1, \theta _t^2) = |g_t'(z)|^{2-d} {\mathcal {G}}(Z_t,\xi _t^1, \xi _t^2), \end{aligned}$$

we see that \(\mathcal {M}_t\) is the martingale observable relevant for the Green’s function found in Sect. 4 (cf. equation (4.12)). \(\square \)

Let \(z = x + iy \in \mathbb {H}\) and consider \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^1, \xi ^2)\) with \(\xi ^1 < \xi ^2\). We can without loss of generality assume that \(|z| \leqslant 1\). For each \(\epsilon > 0\), we define the stopping time \(\tau _\epsilon \) by

$$\begin{aligned} \tau _\epsilon = \inf \{t \geqslant 0: \, \Upsilon _t \leqslant \epsilon \Upsilon _0\}, \end{aligned}$$

where \(\Upsilon _t = \Upsilon _t(z)\). Let \(\epsilon > 0\) and \(n \geqslant 1\). Then, since \(\Upsilon _t\) is a nonincreasing function of t and \(\Upsilon _0 = y\), we have

$$\begin{aligned} y \geqslant \Upsilon _{t \wedge n \wedge \tau _\epsilon } \geqslant \Upsilon _{\tau _\epsilon } = \epsilon y, \quad t \geqslant 0. \end{aligned}$$
(6.41)

Hence, in view of the boundedness (6.38) of h, Lemma 6.5 implies that \((\mathcal {M}_{t \wedge \tau _\epsilon \wedge n})_{t \geqslant 0}\) is a true martingale for \(\hbox {SLE}_\kappa (2)\). The optional stopping theorem therefore shows that

$$\begin{aligned} h(\theta ^1, \theta ^2) = \Upsilon _0^{2-d} \mathbf {E}^2\big [\Upsilon _{n \wedge \tau _\epsilon }^{d-2}h(\theta _{n \wedge \tau _\epsilon }^1, \theta _{n \wedge \tau _\epsilon }^2)\big ]. \end{aligned}$$
(6.42)

Recall that \(\mathbf {P}\) and \(\mathbf {P}^2\) denote the \(\hbox {SLE}_\kappa \) and \(\hbox {SLE}_\kappa (2)\) measures respectively, and that \(\mathbf {E}\) and \(\mathbf {E}^2\) denote expectations with respect to these measures. Equations (3.8) and (3.9) imply

$$\begin{aligned} \mathbf {P}^{2}(V) = \mathbf {E}\big [M_t^{(2)} 1_V\big ] \quad \text {for }V \in \mathcal {F}_t, \end{aligned}$$

where

$$\begin{aligned} M_t^{(2)} = \left( \frac{\xi ^2_t-\xi ^1_t}{\xi ^2-\xi ^1}\right) ^a g'_t(\xi ^2)^{\frac{3a-1}{2}}. \end{aligned}$$

In particular,

$$\begin{aligned} \mathbf {E}^{2}[f] = \mathbf {E}\big [M_{n \wedge \tau _\epsilon }^{(2)} f\big ], \end{aligned}$$
(6.43)

whenever \(M_{n \wedge \tau _\epsilon }^{(2)} f\) is an \(\mathcal {F}_{n \wedge \tau _\epsilon }\)-measurable \(L^1(\mathbf {P})\) random variable.

Lemma 6.6

For each \(t \geqslant 0\), we have \(M_t^{(2)} \in L^1(\mathbf {P})\).

Proof

Since \(\xi _t^1\) is the driving function for the Loewner chain \(g_t\), we have (see, e.g., Lemma 4.13 in [25])

$$\begin{aligned} {\text {diam}}(\gamma _t) \leqslant C \max \left\{ \sqrt{t}, \sup _{0 \leqslant s \leqslant t} |\xi _s^1|\right\} , \quad t \geqslant 0. \end{aligned}$$

Combining this with (6.3) and (6.4), we find

$$\begin{aligned} |M_t^{(2)}| \leqslant C\, |\xi ^2_t-\xi ^1_t|^a \leqslant C (1 + {\text {diam}}(\gamma _t))^a \leqslant C \left( 1 + \max \left\{ \sqrt{t}, \sup _{0 \leqslant s \leqslant t} |\xi _s^1|\right\} \right) ^a, \quad t \geqslant 0. \end{aligned}$$

Since \(\xi _t^1\) is a \(\mathbf {P}\)-Brownian motion, whose running maximum up to time t has Gaussian tails, the right-hand side is integrable, and it follows that \(M_t^{(2)} \in L^1(\mathbf {P})\) for each \(t \geqslant 0\). \(\square \)

As a consequence of (6.38), (6.41), and Lemma 6.6, the random variable

$$\begin{aligned} M_{n \wedge \tau _\epsilon }^{(2)} \Upsilon _{n \wedge \tau _\epsilon }^{d-2}h(\theta _{n \wedge \tau _\epsilon }^1, \theta _{n \wedge \tau _\epsilon }^2) \end{aligned}$$

is \(\mathcal {F}_{n \wedge \tau _\epsilon }\)-measurable and belongs to \(L^1(\mathbf {P})\). Thus, we can use (6.43) to rewrite (6.42) as

$$\begin{aligned} h(\theta ^1, \theta ^2) = \Upsilon _0^{2-d} \mathbf {E}\big [M_{n \wedge \tau _\epsilon }^{(2)} \Upsilon _{n \wedge \tau _\epsilon }^{d-2}h(\theta _{n \wedge \tau _\epsilon }^1, \theta _{n \wedge \tau _\epsilon }^2)\big ]. \end{aligned}$$

We split this into two terms depending on whether \(\tau _\epsilon \leqslant n\) or \(\tau _\epsilon > n\) as follows:

$$\begin{aligned} h(\theta ^1, \theta ^2) = \Upsilon _0^{2-d} \mathbf {E}\big [M_{\tau _\epsilon }^{(2)} \Upsilon _{\tau _\epsilon }^{d-2}h(\theta _{\tau _\epsilon }^1, \theta _{\tau _\epsilon }^2)1_{\tau _\epsilon \leqslant n}\big ] + F_{\epsilon , n}(\theta ^1, \theta ^2), \end{aligned}$$
(6.44)

where

$$\begin{aligned} F_{\epsilon , n}(\theta ^1, \theta ^2) = \Upsilon _0^{2-d} \mathbf {E}\big [M_n^{(2)} \Upsilon _n^{d-2}h(\theta _n^1, \theta _n^2)1_{\tau _\epsilon > n}\big ]. \end{aligned}$$

We prove in Lemma 6.7 below that \(F_{\epsilon , n}(\theta ^1, \theta ^2) \rightarrow 0\) as \(n \rightarrow \infty \) for each fixed \(\epsilon > 0\). Assuming this, we conclude from (6.44) that

$$\begin{aligned} h(\theta ^1, \theta ^2) = \Upsilon _0^{2-d} \lim _{n\rightarrow \infty } \mathbf {E}\big [M_{\tau _\epsilon }^{(2)} \Upsilon _{\tau _\epsilon }^{d-2}h(\theta _{\tau _\epsilon }^1, \theta _{\tau _\epsilon }^2) 1_{\tau _\epsilon \leqslant n}\big ]. \end{aligned}$$
(6.45)

Equations (3.13) and (3.15) imply

$$\begin{aligned} \mathbf {P}^*\left( V\right) = G_0^{-1}\mathbf {E}[G_t 1_V] \quad \text {for }V \in \mathcal {F}_t, \end{aligned}$$
(6.46)

where \(G_t = \Upsilon _t^{d-2} \sin ^\beta \theta ^1_t\). In particular,

$$\begin{aligned} \mathbf {E}\big [M_{\tau _\epsilon }^{(2)} f\big ] = G_0 \mathbf {E}^*\big [G_{\tau _\epsilon }^{-1} M_{\tau _\epsilon }^{(2)} f\big ], \end{aligned}$$

whenever \(M_{\tau _\epsilon }^{(2)} f\) is an \(\mathcal {F}_{\tau _\epsilon }\)-measurable random variable in \(L^1(\mathbf {P})\). Using Lemma 6.6 again, we see that the random variable \(M_{\tau _\epsilon }^{(2)} \Upsilon _{\tau _\epsilon }^{d-2}h(\theta _{\tau _\epsilon }^1, \theta _{\tau _\epsilon }^2) 1_{\tau _\epsilon \leqslant n}\) is \(\mathcal {F}_{\tau _\epsilon }\)-measurable and belongs to \(L^1(\mathbf {P})\) for \(n \geqslant 1\) and \(\epsilon > 0\). Thus (6.45) can be expressed in terms of an expectation for two-sided radial \(\hbox {SLE}_\kappa \) through z as follows:

$$\begin{aligned} h(\theta ^1, \theta ^2)&= \Upsilon _0^{2-d} G_0 \lim _{n\rightarrow \infty } \mathbf {E}^*\left[ G_{\tau _\epsilon }^{-1} M_{\tau _\epsilon }^{(2)} \Upsilon _{\tau _\epsilon }^{d-2}h(\theta _{\tau _\epsilon }^1, \theta _{\tau _\epsilon }^2) 1_{\tau _\epsilon \leqslant n}\right] \nonumber \\&= \Upsilon _0^{2-d} G_0 \mathbf {E}^*\left[ G_{\tau _\epsilon }^{-1} M_{\tau _\epsilon }^{(2)} \Upsilon _{\tau _\epsilon }^{d-2}h(\theta _{\tau _\epsilon }^1, \theta _{\tau _\epsilon }^2)\right] , \end{aligned}$$
(6.47)

where the second equality is a consequence of dominated convergence and the fact that \(\mathbf {E}^*[1_{\tau _\epsilon < \infty }] = 1\). Using that \(G_t = \Upsilon _t^{d-2} \sin ^\beta \theta ^1_t\), we arrive at

$$\begin{aligned} h(\theta ^1, \theta ^2) = \sin ^{\beta }\theta ^1 \mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)}\frac{ h(\theta ^1_{\tau _\epsilon }, \theta ^2_{\tau _\epsilon })}{\sin ^{\beta }\theta _{\tau _\epsilon }^1}\right] . \end{aligned}$$
(6.48)

As \(\epsilon \downarrow 0\) we have \(\tau _\epsilon \rightarrow T\), where T denotes the hitting time of z, and hence \(\theta _{\tau _\epsilon }^2 \rightarrow \theta _T^2 = \pi \). We can therefore use (6.36) to write (6.48) as

$$\begin{aligned} h(\theta ^1, \theta ^2)&= \sin ^{\beta }\theta ^1 \mathbf {E}^*\big [M_{\tau _\epsilon }^{(2)}\big ] + E(\theta ^1, \theta ^2), \end{aligned}$$

where

$$\begin{aligned} E(\theta ^1, \theta ^2) =&\; \sin ^{\beta }\theta ^1 \mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)} \frac{ h(\theta ^1_{\tau _\epsilon }, \theta ^2_{\tau _\epsilon }) - h(\theta ^1_{\tau _\epsilon }, \pi )}{\sin ^{\beta }\theta _{\tau _\epsilon }^1} \right] . \end{aligned}$$

But the estimate (6.39) implies

$$\begin{aligned} |E(\theta ^1, \theta ^2)|&\leqslant c \, \mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)} |\pi - \theta ^2_{\tau _\epsilon }| (\sin \theta ^1_{\tau _\epsilon })^{-1}\right] \end{aligned}$$
(6.49)

and we will show in Lemma 6.8 below that the right-hand side of (6.49) tends to zero as \(\epsilon \downarrow 0\), whereas, by Lemma 6.2, \(\mathbf {E}^*[M_{\tau _\epsilon }^{(2)} ] \rightarrow \mathbf {E}^*[M_T^{(2)}]\). Equation (6.35) therefore follows from Lemmas 6.7 and 6.8, which we now prove.

Lemma 6.7

Let

$$\begin{aligned} F_{\epsilon , n}(\theta ^1, \theta ^2) = \Upsilon _0^{2-d} \mathbf {E}\big [M_n^{(2)} \Upsilon _n^{d-2}h(\theta _n^1, \theta _n^2) 1_{\tau _\epsilon > n}\big ]. \end{aligned}$$

For each \(\epsilon > 0\),

$$\begin{aligned} \lim _{n \rightarrow \infty }F_{\epsilon , n}(\theta ^1, \theta ^2) = 0. \end{aligned}$$

Proof

Recall that \(|z| \leqslant 1\) and that all constants are allowed to depend on z. Setting \(\rho = 2\) in (3.8), we find

$$\begin{aligned} M_t^{(2)} = \left( \frac{\xi ^2_t-\xi ^1_t}{\xi ^2-\xi ^1}\right) ^a g'_t(\xi ^2)^{\frac{3a-1}{2}}. \end{aligned}$$

Since \( |g'_t(\xi ^2)|^{\frac{3a-1}{2}} \leqslant 1\), \(\Upsilon _n^{d-2} 1_{\tau _\epsilon > n} \leqslant C\epsilon ^{d-2}\), and \(h(\theta _n^1, \theta _n^2) \leqslant C \sin ^\beta \theta _n^{1}\), it is enough to prove that

$$\begin{aligned} \lim _{n \rightarrow \infty } \mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} \right] =0. \end{aligned}$$

For \(k=0,1,2,\ldots ,\) let \(U_k\) be the event that \(2^{k} \sqrt{2a n} \leqslant \mathrm{rad}\, \gamma [0,n] \leqslant 2^{k+1} \sqrt{2 a n}\), where \(\mathrm{rad}\, K = \sup \{|z|: z \in K\}\). Since our parametrization is such that \({\text {hcap}}\gamma [0,t] = at\), we have \(\mathbf {P}(\cup _{k \geqslant 0} U_k) =1\). Fix \(0< p < 1/4\). For each integer \(j \geqslant 0 \), let \(V_j\) be the event that \(2^{j}n^{p} \leqslant |\gamma (n)| \leqslant 2^{j+1}n^{p}\) and let \(V_{-1}\) be the event that \(|\gamma (n)| < n^{p}\). (We could phrase these events in terms of stopping times.) We have

$$\begin{aligned} \mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} \right] \leqslant \sum _{k \geqslant 0}\mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{U_k}\right] . \end{aligned}$$

For each k, we will estimate

$$\begin{aligned} \mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{U_k}\right] \leqslant \sum _{j=-1}^{J}\mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{V_j} 1_{U_k}\right] , \end{aligned}$$

where \( J=\lceil (\frac{1}{2}-p)\log _2 (n) + k + \frac{1}{2}\log _2(2a)\rceil .\) By Theorem 1.1 of [17] we have for \(j=-1,0,1, \ldots ,\)

$$\begin{aligned} \mathbf {P}(U_k \cap V_j) \leqslant C (n^{p-1/2}2^{j-k})^\beta \mathbf {P}(U_k) \leqslant C (n^{p-1/2}2^{j-k})^\beta . \end{aligned}$$
(6.50)

On the event \(V_{-1} \cap U_k\) we then use the trivial estimate \( \sin \theta _n^{1} \leqslant 1\) to find

$$\begin{aligned} \mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{V_{-1}} 1_{U_k}\right] \leqslant C (2^k n^{1/2})^a \mathbf {P}(U_k \cap V_{-1}) \leqslant C (2^k n^{1/2})^a (n^{p-1/2}2^{-k})^\beta . \end{aligned}$$

Since \(a-\beta < 0\) for \(a > 1/3\) this is summable over k, and we see that

$$\begin{aligned} \mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{V_{-1}} \right] \leqslant C n^{-((1/2-p)\beta -a/2)}. \end{aligned}$$
(6.51)

When \(j\geqslant 0\), we can estimate harmonic measure (see the proof of Proposition 2.11) to find

$$\begin{aligned} \sin ^\beta \theta _n^{1} 1_{V_j} \leqslant C (n^{-p} 2^{-j})^\beta 1_{V_j}, \end{aligned}$$

and therefore

$$\begin{aligned} |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{V_{j}} 1_{U_k} \leqslant C (n^{1/2}2^k)^a(n^{-p} 2^{-j})^\beta 1_{V_{j}} 1_{U_k}. \end{aligned}$$

So using (6.50),

$$\begin{aligned} \sum _{j=0}^{J}\mathbf {E}\left[ |\xi ^2_{n}-\xi ^1_{n}|^a \sin ^\beta \theta _n^{1} 1_{V_j} 1_{U_k}\right] \leqslant C_p 2^{-k(\beta -a)} n^{-(\beta /2-a/2)} (\log _2 n + k). \end{aligned}$$
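Indeed, the j-dependence cancels,

$$\begin{aligned} (n^{1/2}2^k)^a\, (n^{-p} 2^{-j})^{\beta }\, (n^{p-1/2}2^{j-k})^{\beta } = 2^{-k(\beta -a)}\, n^{-(\beta -a)/2}, \end{aligned}$$

and the number of terms in the sum is \(J+1 \leqslant C_p(\log _2 n + k)\).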

This is summable over k when \(a>1/3\) for any choice of p and the sum is o(1) as \(n\rightarrow \infty \). Since \(0<p < 1/4\), the exponent in (6.51) is strictly negative whenever \(a \geqslant 1/2\). The proof is complete. \(\square \)

Lemma 6.8

For any \((\theta ^1, \theta ^2) \in \Delta \), it holds that

$$\begin{aligned} \lim _{\epsilon \downarrow 0} \mathbf {E}^* \left[ M_{\tau _\epsilon }^{(2)} \right] = \mathbf {E}^*\left[ M_T^{(2)}\right] \end{aligned}$$
(6.52)

and

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)} |\pi - \theta ^2_{\tau _\epsilon }| (\sin \theta ^1_{\tau _\epsilon })^{-1}\right] = 0. \end{aligned}$$
(6.53)

Proof

For (6.52) it is enough to show that the family \(\{M_{\tau _\epsilon }^{(2)}\}\) is uniformly integrable. This follows from Lemma 6.2 in the special case \(\rho = 2\) which is in the interval \([0, 8-\kappa )\) since \(\kappa \leqslant 4\).

It remains to prove (6.53). Note that there is a constant c (depending on z) such that

$$\begin{aligned} \mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)} |\pi - \theta ^2_{\tau _\epsilon }| (\sin \theta ^1_{\tau _\epsilon })^{-1}\right] \leqslant c \, \epsilon ^{1/2} \, \mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)} (\sin \theta ^1_{\tau _\epsilon })^{-1}\right] . \end{aligned}$$

Indeed, \(|\pi - \theta ^2_{\tau _\epsilon }|\) is bounded above by a constant times the harmonic measure from z in \(H_{\tau _\epsilon }\) of \([\xi ^2, \infty )\), which by the Beurling estimate (Lemma 3.1) is \(O(\epsilon ^{1/2})\). On the other hand, recalling the definition of the measure \(\mathbf {P}^*\) and that \(\beta - 1 \geqslant 0\) when \(\kappa \leqslant 4\), we see that

$$\begin{aligned} \mathbf {E}^*\left[ M_{\tau _\epsilon }^{(2)} (\sin \theta ^1_{\tau _\epsilon })^{-1}\right] = \frac{\epsilon ^{d-2}}{\sin ^\beta \theta ^1} \mathbf {E}\left[ M_{\tau _\epsilon }^{(2)} S_{\tau _\epsilon }^{\beta -1} 1_{{\tau _\epsilon }< \infty } \right] \leqslant \frac{\epsilon ^{d-2}}{\sin ^\beta \theta ^1} \mathbf {E}\left[ M_{\tau _\epsilon }^{(2)}1_{{\tau _\epsilon } < \infty } \right] . \end{aligned}$$

Using Proposition 2.11 we see that the last term converges, and in particular is bounded as \(\epsilon \rightarrow 0\). This completes the proof. \(\square \)

7 Two Paths Near the Same Point: Proof of Lemma 2.8

This section proves the correlation estimate Lemma 2.8, which will complete the proof of Theorem 2.6.

A slightly different version of the next lemma could be quoted from Theorem 1.8 of [34], but since our proof is short and somewhat different, we give it here.

Lemma 7.1

Suppose \(0 < \kappa \leqslant 4\) and \(\rho > \max \{-2, \kappa /2-4\}\) and consider an \(\hbox {SLE}_{\kappa }(\rho )\) curve \(\gamma \) started from (0, 1). Let \(C_\infty \) denote the function \(C_t\) defined in (3.11) evaluated at \(t = \infty \). Then there exists a \(q > 0\) such that

$$\begin{aligned} \mathbf {P}^{\rho }_{0,1} \left( C_{\infty }(1) \leqslant \epsilon \right) = \tilde{c} \, \epsilon ^{\beta + \rho a}\left( 1+O(\epsilon ^{q})\right) , \quad \epsilon \downarrow 0, \end{aligned}$$
(7.1)

where the constant \(\tilde{c} = \tilde{c}(\kappa , \rho )\) is given by

$$\begin{aligned} \tilde{c}=\frac{\Gamma (6a + a\rho )}{2a\Gamma (2a) \Gamma (4a + a\rho )}. \end{aligned}$$

In particular, there is a constant \(C< \infty \) such that if \(\eta \) is a crosscut separating 1 from 0 in \(\mathbb {H}\), then

$$\begin{aligned} \mathbf {P}_{0, 1}^\rho \left( \gamma _\infty \cap \eta \ne \emptyset \right) \leqslant C\, e^{-\pi {(\beta + \rho a)}d_\mathbb {H}(\mathbb {R}_-, \eta )} \end{aligned}$$

Proof

Write \(\xi ^1_t\) for the driving term of \(\gamma \) and let \(\xi ^2_t = g_t(1)\), where \(g_t\) is the Loewner chain of \(\gamma \). We get \(\hbox {SLE}_\kappa (\rho )\) started from (0, 1) by weighting \(\hbox {SLE}_\kappa \) by the local martingale (see (3.8))

$$\begin{aligned} M_t^{(\rho )} =(\xi ^2_t-\xi ^1_t)^{r} g'_t(1)^{\zeta (r)}, \quad r = \rho a/2. \end{aligned}$$

Let

$$\begin{aligned} N_{t} = C_{t}(1)^{-(\beta + a \rho )}A_{t}^{\beta + a \rho }, \quad A_{t} = \frac{\xi _{t}^{2} - O_{t}}{\xi _{t}^{2} - \xi _{t}^{1}}. \end{aligned}$$

Direct computation shows that \(N_{t}\) is a local martingale for \(\hbox {SLE}_{\kappa }(\rho )\) started from (0, 1), which satisfies \(N_{0}=1\). Moreover,

$$\begin{aligned} M_{t}^{(\rho )}N_{t} = M_{t}^{(\kappa -8-\rho )}, \end{aligned}$$

where \(M_{t}^{(\kappa -8-\rho )}\) is the local \(\hbox {SLE}_{\kappa }\) martingale corresponding to the choice \(r=r_{\kappa }(\kappa -8-\rho ) = -\beta -\rho a/2\). We set

$$\begin{aligned} s(t) = \inf \{s \geqslant 0: C_{s}(1) =e^{-at} \} \end{aligned}$$

and write \(\hat{M}_{t}^{\rho } = M_{s(t)}^{(\rho )}\), etc. for the time-changed processes. We have

$$\begin{aligned} \mathbf {P}_{0,1}^{\rho }\left( s(t)< \infty \right)&= \mathbf {E}\left[ \hat{M}_{t}^{\rho } 1_{s(t)< \infty }\right] \\&= \mathbf {E}\left[ \hat{M}_{t}^{\rho } \hat{N}_{t} \hat{N}_{t}^{-1} 1_{s(t)< \infty }\right] \\&= e^{-a(\beta +a\rho )t}\mathbf {E}\left[ \hat{M}_{t}^{\kappa -8-\rho } \hat{A}_{t}^{-(\beta +a\rho )} 1_{s(t)< \infty }\right] \\&= e^{-a(\beta +a\rho )t}{\tilde{\mathbf {E}}}^{*}\left[ \hat{A}_{t}^{-(\beta +a\rho )}1_{s(t) < \infty }\right] , \end{aligned}$$

where \({\tilde{\mathbf {E}}}^{*}\) refers to expectation with respect to \(\hbox {SLE}_{\kappa }(\tilde{\rho })\) started from (0, 1), where \(\tilde{\rho } := \kappa -8 - \rho \). If \(\rho > \kappa /2-4\), then \(\beta +a\rho > 0\). The key observation is that under the measure \({\tilde{\mathbf {P}}}^{*}\) we have that \(s(t) < \infty \) almost surely and that \(\hat{A}_{t}\) is positive recurrent and converges to an invariant distribution. This uses that \(\rho > \kappa /2-4\) so that \(\tilde{\rho } < \kappa /2-4\); see [26, Sect. 5.3]. Indeed, if we apply Lemma 3.4 to the \(\hbox {SLE}_{\kappa }(\tilde{\rho })\) process from (0, 1) and note that \(-\beta - a\tilde{\rho } = \beta + a\rho \), we find the following formula for the limiting distribution:

$$\begin{aligned} \pi (x) = c' \, x^{\beta + a \rho }(1-x)^{2a-1}, \quad c'=\frac{\Gamma (6a + a\rho )}{\Gamma (2a) \Gamma (4a + a\rho )}. \end{aligned}$$

It follows that there exists a \(q > 0\) such that

$$\begin{aligned} {\tilde{\mathbf {E}}}^{*}\left[ \hat{A}_{t}^{-(\beta +a\rho )}1_{s(t) < \infty }\right] = c' \int _{0}^{1}(1-x)^{2a-1}\, dx \left( 1 + O(e^{-qt}) \right) , \end{aligned}$$
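and the remaining integral is elementary:

$$\begin{aligned} c' \int _{0}^{1}(1-x)^{2a-1}\, dx = \frac{c'}{2a} = \frac{\Gamma (6a + a\rho )}{2a\,\Gamma (2a) \Gamma (4a + a\rho )} = \tilde{c}, \end{aligned}$$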

which gives (7.1). By (7.1) and the distortion estimates (3.1), if \(\tau _{\epsilon } = \inf \{t\geqslant 0 : {\text {dist}}(\gamma _{t},1) \leqslant \epsilon \}\), then

$$\begin{aligned} \mathbf {P}_{0,1}^{\rho }\left( \tau _{\epsilon } < \infty \right) \asymp \epsilon ^{\beta + a\rho }. \end{aligned}$$

The last assertion follows as in Lemma 3.3. \(\square \)

Lemma 7.2

There is a constant \(0< C_1 < \infty \) such that the following holds. Let D be a simply connected domain containing 0 and with three marked boundary points u, v, w. Suppose \(\gamma _u, \gamma _v\) are crosscuts of D which are disjoint except at w, and which connect w with u and w with v, respectively, and such that neither crosscut disconnects 0 from the other. Write \(D_u\) and \(D_v\) for the components containing 0 of \(D \smallsetminus \gamma _u\) and \(D\smallsetminus \gamma _v\), respectively, and let \(D_{u, v} = D_u \cap D_v\). For all \(\epsilon > 0\) small enough it holds that if

$$\begin{aligned} \epsilon \, \left( 1+C_1 \sqrt{\epsilon } \right) < \Upsilon _{D_u}(0) \leqslant 4\epsilon \quad \mathrm{and} \quad \Upsilon _{D_v}(0) \geqslant \sqrt{\epsilon }, \end{aligned}$$
(7.2)

then

$$\begin{aligned} \Upsilon _{D_{u,v}}(0) > \epsilon . \end{aligned}$$

Proof

Set \(\tilde{\epsilon } := \sqrt{\epsilon }/2\). Let \(\phi _u: D_u \rightarrow \mathbb {D}\) be the conformal map with \(\phi _u(0)=0, \phi _u'(0) > 0\). Note that \(\phi _u(\gamma _v)\) is a crosscut of \(\mathbb {D}\). By the distortion estimates (3.1) and the inequalities in (7.2), \({\text {dist}}(0,\partial D_{u}) \asymp \Upsilon _{D_u}(0) \asymp \epsilon \) while \({\text {dist}}(0, \gamma _v) \geqslant \Upsilon _{D_v}(0)/2 \geqslant \tilde{\epsilon }\). Therefore conformal invariance, the maximum principle, and the Beurling estimate (Lemma 3.1) show that

$$\begin{aligned} \omega _{\mathbb {D}}(0, \phi _u(\gamma _v)) = \omega _{D_u}(0, \gamma _v) \leqslant \omega _{\tilde{\epsilon } \mathbb {D}\smallsetminus \gamma _u}(0, \tilde{\epsilon } \partial \mathbb {D}) \leqslant C (\epsilon /\tilde{\epsilon })^{1/2}. \end{aligned}$$
(7.3)

We next show that

$$\begin{aligned} {\text {diam}}( \phi _u(\gamma _v)) \leqslant C \omega _{\mathbb {D}}(0, \phi _u(\gamma _v)). \end{aligned}$$
(7.4)

To prove (7.4), we may assume \(\phi _u(\gamma _v)\) separates 1 from 0 and that \( {\text {diam}}(\phi _u(\gamma _v))\) is small. We map \(\mathbb {D}\) conformally to \(\mathbb {H}\) in such a way that 0 maps to i and 1 to 0. Write \(\eta \) for the image of \(\phi _u(\gamma _v)\) in \(\mathbb {H}\). Let \(w \in \eta \) be a point of maximal modulus. There is a curve \(\eta _0\) from 0 to w in \(\mathbb {H}\) that is separated from \(\infty \) by \(\eta \) in \(\mathbb {H}\) and by conformal invariance and the separation property, \(\omega _{\mathbb {D}}(0, \phi _u(\gamma _v)) = \omega _{\mathbb {H}}(i, \eta ) \geqslant \omega _{\mathbb {H}}(i, \eta _0)\). Hall’s lemma (see Chap. III of [19]) yields \(\omega _{\mathbb {H}}(i, \eta _0) \geqslant (2/3) \omega _{\mathbb {H}}(i, \eta _0^*)\), where \(\eta _0^*=\{|z|: z \in \eta _0\}\). Since \(\omega _{\mathbb {H}}(i, \eta _0^*) \geqslant c{\text {diam}}(\eta _0^*) \geqslant c {\text {diam}}(\eta _0) \geqslant c{\text {diam}}( \phi _u(\gamma _v))\), this proves (7.4). By (7.3) and (7.4), we have

$$\begin{aligned} {\text {diam}}( \phi _u(\gamma _v) ) \leqslant C \, (\epsilon /\tilde{\epsilon })^{1/2}. \end{aligned}$$
(7.5)

Write \(D'\) for the component containing 0 of \(\mathbb {D} \smallsetminus \phi _u(\gamma _v)\). Let \(\psi : D' \rightarrow \mathbb {D}\) be the conformal map with \(\psi (0)=0, \, \psi '(0)>0\) (since \(D' \subset \mathbb {D}\), we actually have \(\psi '(0) \geqslant 1\)). Then the normalized Riemann map of \(D_{u,v}\) is \(\psi \circ \phi _u\) and so we have

$$\begin{aligned} \Upsilon _{D_{u,v}}(0) = \Upsilon _{D_u}(0) \cdot \psi '(0)^{-1}. \end{aligned}$$
(7.6)
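In more detail, writing \(\Phi := \psi \circ \phi _u\) for the normalized Riemann map of \(D_{u,v}\) (a notation introduced only for this remark), the conformal radius at 0 is inversely proportional to the derivative at 0 of the normalized map, so the chain rule gives

$$\begin{aligned} \frac{\Upsilon _{D_{u,v}}(0)}{\Upsilon _{D_u}(0)} = \frac{\phi _u'(0)}{\Phi '(0)} = \frac{\phi _u'(0)}{\psi '(0)\, \phi _u'(0)} = \psi '(0)^{-1}, \end{aligned}$$

which is (7.6).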

Using (7.5), there is \(C_0 < \infty \) such that if \(\epsilon > 0\) is small enough, then

$$\begin{aligned} 1 \leqslant \psi '(0) \leqslant 1 + C_0 \, \epsilon /\tilde{\epsilon }. \end{aligned}$$
(7.7)

Indeed, this follows, e.g., using Proposition 3.58 of [25] and the fact that the half-plane capacity of a hull of diameter d is \(O(d^2)\). Since \(\tilde{\epsilon }=\sqrt{\epsilon }/2\), (7.6) and (7.7) imply

$$\begin{aligned} \Upsilon _{D_{u,v}}(0) \geqslant \Upsilon _{D_u}(0) \frac{1}{1 + 2C_0\sqrt{\epsilon }}. \end{aligned}$$

Consequently, if \(\Upsilon _{D_u}(0) > \epsilon \left( 1+C_1 \sqrt{\epsilon } \right) \) and \(C_1 > 2C_0\), then if \(\epsilon \) is small enough,

$$\begin{aligned} \Upsilon _{D_{u,v}}(0) > \epsilon , \end{aligned}$$

which is the desired estimate. \(\square \)

Lemma 7.3

Consider commuting \(\hbox {SLE}_\kappa \) in \(\mathbb {H}\) started from (0, 1). Let \(\gamma _1\) and \(\gamma _2\) be the curves starting from 0 and 1, respectively. There exist constants \(0<c,C < \infty \) such that if \(0< \epsilon < c {\text {Im}} z\), then

$$\begin{aligned} \mathbf {P}_{0,1}\left( \Upsilon ^1_\infty (z) \leqslant \epsilon , \, \Upsilon ^2_\infty (z) \leqslant \epsilon \right) \leqslant C\, ( \epsilon /{\text {Im}} z)^{2-d + \beta /2 + a}, \end{aligned}$$
(7.8)

where we write \(\Upsilon ^1_\infty (z)\) and \(\Upsilon ^2_\infty (z)\) for one half of the conformal radius of z in \(\mathbb {H}\smallsetminus \gamma _{1,\infty }\) and \(\mathbb {H}\smallsetminus \gamma _{2,\infty }\), respectively.

Remark

Notice that if \(a \geqslant 1/2\), i.e., \(\kappa \leqslant 4\), then \(\beta /2 \geqslant 2-d\), i.e., half the boundary exponent is at least the bulk exponent, with strict inequality if \(\kappa < 4\). Therefore, (7.8) implies that for every \(\kappa \leqslant 4\),

$$\begin{aligned} \mathbf {P}_{0,1}\left( \Upsilon ^1_\infty (z) \leqslant \sqrt{\epsilon }, \, \Upsilon ^2_\infty (z) \leqslant \sqrt{\epsilon } \right) = o(\epsilon ^{2-d}). \end{aligned}$$
(7.9)

Proof of Lemma 7.3

Let \(z = x+iy \in \mathbb {H}\). We will write \(\mathbf {P}\) for \(\mathbf {P}_{0,1}\). We first grow \(\gamma _1\) starting from \(\xi ^1=0\). The distribution is that of an \(\hbox {SLE}_\kappa (2)\) started from (0, 1). Let \(\bar{\mathcal {B}}_r\) denote the closure of the ball \(\mathcal {B}_r = \mathcal {B}_r(z)\). If \(\Upsilon ^j_\infty (z) \leqslant \epsilon \), \(j = 1,2\), then \(\gamma _{j,\infty }\) intersects \(\bar{\mathcal {B}}_{2\epsilon }\) by (3.1). Let \(\tau _\epsilon \) be the first time \(\gamma _1\) hits \(\bar{\mathcal {B}}_{2\epsilon }\) and assume \(\epsilon \leqslant y/10\) say. By Proposition 2.11 we have

$$\begin{aligned} \mathbf {P}\left( \tau _\epsilon < \infty \right) \leqslant C\, (\epsilon /y)^{2-d}. \end{aligned}$$
(7.10)

On the event that \(\tau _\epsilon <\infty \), we stop \(\gamma _1\) at \(\tau _\epsilon \) and write \(H^1_\epsilon = \mathbb {H} \smallsetminus \gamma _{1,\tau _\epsilon }\). Then almost surely, \(\eta :=\partial \mathcal {B}_{2\epsilon } \cap H^1_{\epsilon }\) is a crosscut of \(H^1_{\epsilon }\). We have the following estimate on the extremal distance (using the extension rule)

$$\begin{aligned} d_{H^1_\epsilon }([1, \infty ), \eta ) \geqslant d_{\mathbb {H}}(\mathbb {R}, \mathcal {B}_{2\epsilon }) \geqslant d_{\mathcal {B}_y}(\partial \mathcal {B}_y, \mathcal {B}_{2\epsilon }) = \frac{1}{2\pi }\log (y/(2\epsilon )), \end{aligned}$$
(7.11)

on the event that \(\tau _\epsilon < \infty \). Conditioned on \(\gamma _{1, \tau _\epsilon }\) (after uniformizing \(H^1_\epsilon \)) the distribution of \(\gamma _2\) is that of \(\hbox {SLE}_\kappa (2)\) started from \((\xi ^2_{\tau _\epsilon }, \xi ^1_{\tau _\epsilon })\). We claim that on the event that \(\tau _\epsilon < \infty \)

$$\begin{aligned} \mathbf {P}\left( \gamma _{2, \infty } \cap \eta \ne \emptyset \mid \gamma _{1,\tau _\epsilon }\right) \leqslant C\, (\epsilon /y)^{\beta /2 + a}. \end{aligned}$$
(7.12)

Indeed, note that \(g_{\tau _\epsilon }(\eta )\) is a crosscut of \(\mathbb {H}\) separating \(\xi ^1_{\tau _\epsilon }\) from \(\xi ^2_{\tau _\epsilon }\) and, by conformal invariance and (7.11),

$$\begin{aligned} d_{\mathbb {H}}([ \xi ^2_{\tau _\epsilon }, \infty ), g_{\tau _\epsilon }(\eta )) = d_{H^1_\epsilon }([1, \infty ), \eta ) \geqslant \frac{1}{2\pi } \log (y/(2\epsilon )) . \end{aligned}$$

The estimate (7.12) now follows from Lemma 7.1 with \(\rho =2\). We conclude the proof by combining (7.12) with (7.10). \(\square \)

We can now give the proof of Lemma 2.8.

Proof of Lemma 2.8

Without loss of generality, consider a system of multiple SLEs started from (0, 1) with corresponding measure \(\mathbf {P}=\mathbf {P}_{0,1}\). We want to prove that

$$\begin{aligned} \lim _{\epsilon \downarrow 0}\epsilon ^{d-2} \mathbf {P}\left( \Upsilon _\infty (z) \leqslant \epsilon \right) = \lim _{\epsilon \downarrow 0}\epsilon ^{d-2}\mathbf {P}\left( \Upsilon ^1_\infty (z) \leqslant \epsilon \right) + \lim _{\epsilon \downarrow 0}\epsilon ^{d-2}\mathbf {P}\left( \Upsilon ^2_\infty (z) \leqslant \epsilon \right) . \end{aligned}$$

We can write

$$\begin{aligned} \mathbf {P}\left( \Upsilon _\infty (z) \leqslant \epsilon \right) =&\, \mathbf {P}\left( \Upsilon ^1_\infty (z) \leqslant \epsilon \right) + \mathbf {P}\left( \Upsilon ^2_\infty (z) \leqslant \epsilon \right) \\&- \mathbf {P}\left( \Upsilon ^1_\infty (z) \leqslant \epsilon , \, \Upsilon ^2_\infty (z) \leqslant \epsilon \right) \\&+ \mathbf {P}\left( \Upsilon ^1_\infty (z)> \epsilon , \, \Upsilon ^2_\infty (z) > \epsilon , \, \Upsilon _\infty (z) \leqslant \epsilon \right) . \end{aligned}$$

We know from Proposition 2.11 that the renormalized limits of the first two terms on the right exist. We will show that the remaining terms are \(o(\epsilon ^{2-d})\), and this will prove the lemma. For the third term the required estimate follows immediately from Lemma 7.3, so it remains to estimate the last term.

By (3.1), \(\Upsilon _\infty \leqslant \epsilon \) implies \({\text {dist}}(\gamma _{1,\infty } \cup \gamma _{2,\infty }, z ) \leqslant 2\epsilon \). We assume that \({\text {dist}}(\gamma _{1,\infty }, z) \leqslant 2\epsilon \); the other case is handled in the same way. This in turn implies \(\Upsilon ^1_\infty (z) \leqslant 4\epsilon \). Using Lemma 7.2 we see that there is a constant \(C_1\) such that if \(\epsilon \, (1+C_1\sqrt{\epsilon })< \Upsilon ^1_\infty (z) \leqslant 4\epsilon \) and \(\Upsilon ^2_\infty (z) \geqslant \sqrt{\epsilon }\), then \(\Upsilon _\infty (z) > \epsilon \) for all \(\epsilon \) small enough. Hence we can estimate as follows:

$$\begin{aligned}&\mathbf {P}(\Upsilon ^1_\infty (z)> \epsilon , \, \Upsilon ^2_\infty (z) > \epsilon , \, \Upsilon _\infty (z) \leqslant \epsilon , \, {\text {dist}}(\gamma _{1,\infty }, z) \leqslant 2\epsilon )\\&\quad \leqslant \mathbf {P}\left( \epsilon \, (1+C_1\sqrt{\epsilon })< \Upsilon ^1_\infty (z) \leqslant 4\epsilon , \, \Upsilon ^2_\infty (z) < \sqrt{\epsilon } \right) + \mathbf {P}\left( \epsilon \leqslant \Upsilon ^1_\infty (z) \leqslant \epsilon \, (1+C_1\sqrt{\epsilon }) \right) \\&\quad \leqslant \mathbf {P}\left( \Upsilon ^1_\infty (z) \leqslant \sqrt{\epsilon }, \, \Upsilon ^2_\infty (z) \leqslant \sqrt{\epsilon } \right) + \mathbf {P}\left( \epsilon \leqslant \Upsilon ^1_\infty (z) \leqslant \epsilon (1+C_1\sqrt{\epsilon }) \right) . \end{aligned}$$

By (7.9),

$$\begin{aligned} \mathbf {P}\left( \Upsilon ^1_\infty (z) \leqslant \sqrt{\epsilon }, \, \Upsilon ^2_\infty (z) \leqslant \sqrt{\epsilon } \right) = o(\epsilon ^{2-d}). \end{aligned}$$

On the other hand, we can use Proposition 2.11 to see that

$$\begin{aligned} \mathbf {P}\left( \epsilon \leqslant \Upsilon ^1_\infty (z) \leqslant \epsilon (1+C_1\sqrt{\epsilon }) \right)&= c_*G^{2}(z, 0, 1)\epsilon ^{2-d}\left[ (1+C_1 \sqrt{\epsilon })^{2-d}-1 \right] + o(\epsilon ^{2-d})\\&= o(\epsilon ^{2-d}). \end{aligned}$$

This completes the proof. \(\square \)

8 Fusion

8.1 Schramm’s Formula

The function \(P(z, \xi )\) in (2.6) extends continuously to \(\xi = 0\); hence we obtain an expression for Schramm’s formula in the fusion limit by simply setting \(\xi = 0\) in the formulas of Theorem 2.1. In this way, we recover the formula of [18] and can give a rigorous proof of it.

Theorem 8.1

(Schramm’s formula for fused \(\hbox {SLE}_\kappa (2)\)) Let \(0 < \kappa \leqslant 4\). Consider chordal \(\hbox {SLE}_\kappa (2)\) started from \((0,0+)\). Then the probability \(P_f(z)\) that a given point \(z = x + iy \in \mathbb {H}\) lies to the left of the curve is given by

$$\begin{aligned} P_f(z) = \frac{1}{c_\alpha } \int _x^\infty {{\mathrm{Re}}\,}\mathcal {M}_f(x' +i y) dx', \end{aligned}$$
(8.1)

where \(c_\alpha \in {\mathbb {R}}\) is the normalization constant in (2.7) and

$$\begin{aligned} \mathcal {M}_f(z) =&\; y^{\alpha - 2} z^{-\alpha }\bar{z}^{2-\alpha } \int _{\bar{z}}^z(u-z)^{\alpha }(u- \bar{z})^{\alpha - 2} u^{-\alpha } du, \quad z \in \mathbb {H}, \end{aligned}$$

with the contour passing to the right of the origin. The function \(P_f(z)\) can be alternatively expressed as

$$\begin{aligned} P_f(z) = \frac{\Gamma \left( \frac{\alpha }{2}\right) \Gamma (\alpha )}{2^{2 - \alpha }\pi \Gamma \left( \frac{3\alpha }{2} - 1\right) } \int _{\frac{x}{y}}^\infty S(t') dt', \end{aligned}$$
(8.2)

where the real-valued function S(t) is defined by

$$\begin{aligned} S(t) =&\; (1 + t^2)^{1- \alpha } \bigg \{{}_2F_1\bigg (\frac{1}{2} + \frac{\alpha }{2}, 1 - \frac{\alpha }{2}, \frac{1}{2}; - t^2\bigg )\\&- \frac{2\Gamma \left( 1 + \frac{\alpha }{2}\right) \Gamma \left( \frac{\alpha }{2}\right) t}{\Gamma \left( \frac{1}{2} + \frac{\alpha }{2}\right) \Gamma \left( - \frac{1}{2} + \frac{\alpha }{2}\right) } {}_2F_1\bigg (1 + \frac{\alpha }{2}, \frac{3}{2} - \frac{\alpha }{2}, \frac{3}{2}; -t^2\bigg )\bigg \}, \quad t \in {\mathbb {R}}. \end{aligned}$$

Remark

Formula (8.2) for \(P_f(z)\) coincides with Eq. (15) of [18].
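The representation (8.2) is explicit enough to evaluate numerically. The following short script is a minimal sketch of such an evaluation; it is not part of the original text, the function names are our own, and it assumes \(\alpha = 8/\kappa \) as above, using the mpmath library for the Gamma and hypergeometric functions.

```python
# Numerical sketch (not from the paper): evaluate the fused Schramm
# probability P_f(z) via formula (8.2) of Theorem 8.1.
from mpmath import mp, mpf, gamma, hyp2f1, quad, pi, inf

mp.dps = 25  # working precision (decimal digits)

def S(t, alpha):
    # The real-valued integrand S(t) defined in Theorem 8.1.
    t, alpha = mpf(t), mpf(alpha)
    f1 = hyp2f1(mpf(1)/2 + alpha/2, 1 - alpha/2, mpf(1)/2, -t**2)
    coeff = (2*gamma(1 + alpha/2)*gamma(alpha/2)*t
             / (gamma(mpf(1)/2 + alpha/2)*gamma(alpha/2 - mpf(1)/2)))
    f2 = hyp2f1(1 + alpha/2, mpf(3)/2 - alpha/2, mpf(3)/2, -t**2)
    return (1 + t**2)**(1 - alpha)*(f1 - coeff*f2)

def P_f(x, y, kappa):
    # Probability (8.2) that z = x + i y lies to the left of the fused curves.
    alpha = mpf(8)/kappa
    pref = gamma(alpha/2)*gamma(alpha)/(2**(2 - alpha)*pi*gamma(3*alpha/2 - 1))
    return pref*quad(lambda t: S(t, alpha), [mpf(x)/y, inf])

# Sanity check: the values should decrease from 1 to 0 as x increases.
for x in (-20, -1, 0, 1, 20):
    print(x, P_f(x, 1, 3))
```

For \(0 < \kappa \leqslant 4\) one expects the printed values to decrease from 1 to 0 as x increases, reflecting the interpretation of \(P_f\) as a left-passage probability.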

Proof of Theorem 8.1

The expression (8.1) for \(P_f(z)\) follows immediately by letting \(\xi \rightarrow 0\) in (2.6). Since the right-hand side of (8.2) vanishes as \(x\rightarrow \infty \), the representation (8.2) will follow if we can prove that

$$\begin{aligned} \frac{\Gamma \left( \frac{\alpha }{2}\right) \Gamma (\alpha )}{2^{2 - \alpha }\pi \Gamma \left( \frac{3\alpha }{2} - 1\right) } S(x/y) = \frac{y}{c_\alpha }{{\mathrm{Re}}\,}\mathcal {M}(x+iy, 0), \quad x \in {\mathbb {R}}, \quad y> 0, \quad \alpha > 1. \end{aligned}$$
(8.3)

In order to prove (8.3), we write \(\mathcal {M}_f = y^{\alpha - 2} z^{-\alpha }\bar{z}^{2-\alpha }J_f(z),\) where \(J_f(z)\) denotes the function \(J(z, \xi )\) defined in (5.2) evaluated at \(\xi = 0\), that is,

$$\begin{aligned} J_f(z) = \int _{\bar{z}}^z(u-z)^{\alpha }(u- \bar{z})^{\alpha - 2} u^{-\alpha } du, \end{aligned}$$
(8.4)

where the contour passes to the right of the origin. Let us first assume that \(x > 0\). Then we can choose the vertical segment \([\bar{z},z]\) as contour in (8.4). The change of variables \(v = \frac{u-z}{\bar{z} - z}\), which maps the segment \([z, \bar{z}]\) to the interval [0, 1], yields

$$\begin{aligned} J_f(z) =&\; e^{-i\pi \alpha }(z - \bar{z})^{2\alpha - 1}z^{-\alpha } \int _0^1 v^{\alpha } (1-v)^{\alpha - 2} \Big (1 - v\frac{z-\bar{z}}{z}\Big )^{-\alpha } dv, \quad x > 0, \end{aligned}$$

where we have used that \((z - v(z-\bar{z}))^{-\alpha } = z^{-\alpha }(1 - v\frac{z-\bar{z}}{z})^{-\alpha }\) for \(v \in [0,1]\) and \(x > 0\). The hypergeometric function \({}_2F_1\) can be defined for \(w \in {\mathbb {C}}\smallsetminus [0, \infty )\) and \(0< b < c\) by

$$\begin{aligned} {}_2F_1(a,b,c;w) = \frac{\Gamma (c)}{\Gamma (b)\Gamma (c-b)} \int _0^1 v^{b-1} (1-wv)^{-a} (1-v)^{c-b-1}dv. \end{aligned}$$

This gives, for \(x > 0\),

$$\begin{aligned} \mathcal {M}_f(z) = -iy^{3\alpha - 3} z^{-2\alpha }\bar{z}^{2-\alpha }2^{2\alpha -1} \frac{\Gamma (\alpha + 1)\Gamma (\alpha -1)}{\Gamma (2\alpha )} {}_2F_1\Big (\alpha , \alpha + 1, 2\alpha ;1 - \frac{\bar{z}}{z}\Big ). \end{aligned}$$
(8.5)

The argument \(w = 1 - \frac{\bar{z}}{z}\) of \({}_2F_1\) in (8.5) crosses the branch cut \([1, \infty )\) for \(x = 0\). Therefore, to extend the formula to \(x \leqslant 0\), we need to find the analytic continuation of \({}_2F_1\). This can be achieved as follows. Using the general identities

$$\begin{aligned} {}_2F_1(a,b,c; w) = {}_2F_1(b,a,c; w) \end{aligned}$$

and (see [35, Eq. 15.8.13])

$$\begin{aligned} {}_2F_1(a,b,2b; w) = \left( 1-\frac{w}{2}\right) ^{-a} {}_2F_1\left( \frac{a}{2},\frac{a+1}{2},b +\frac{1}{2}; \left( \frac{w}{2-w}\right) ^2\right) , \end{aligned}$$

we can write the hypergeometric function in (8.5) as

$$\begin{aligned} {}_2F_1\Big (\alpha , \alpha + 1, 2\alpha ;1 - \frac{\bar{z}}{z}\Big ) = \Big (\frac{x}{z}\Big )^{-\alpha -1} {}_2F_1\Big (\frac{\alpha + 1}{2},\frac{\alpha }{2} + 1,\alpha +\frac{1}{2}; -t^{-2}\Big ), \quad x > 0, \end{aligned}$$
(8.6)

where \(t = x/y\). Using the identity (see [35, Eq. 15.8.2])

$$\begin{aligned}&\frac{\sin (\pi (b-a))}{\pi } \frac{{}_2F_1(a,b,c;w)}{\Gamma (c)} \\&\quad = \frac{(-w)^{-a}}{\Gamma (b)\Gamma (c-a)} \frac{{}_2F_1(a,a-c+1,a-b+1;\frac{1}{w})}{\Gamma (a-b+1)}\\&\qquad -\,\frac{(-w)^{-b}}{\Gamma (a)\Gamma (c-b)}\frac{{}_2F_1(b,b-c+1,b-a+1; \frac{1}{w})}{\Gamma (b-a+1)}, \quad w \in {\mathbb {C}}\smallsetminus [0, \infty ), \end{aligned}$$

with \(w = -t^{-2}\) to rewrite the right-hand side of (8.6), and substituting the resulting expression into (8.5), we find after simplification

$$\begin{aligned} \mathcal {M}_f(z) =&-\frac{i \sqrt{\pi } 2^{\alpha -1} \bar{z} \Gamma \left( \frac{\alpha -1}{2}\right) }{y^2 \Gamma \left( \frac{\alpha }{2}\right) } S(t). \end{aligned}$$
(8.7)

We have derived (8.7) under the assumption that \(x > 0\), but since the hypergeometric functions in the definition of S(t) are evaluated at the point \(-t^2\), which avoids the branch cut for all \(z \in \mathbb {H}\), equation (8.7) remains valid also for \(x \leqslant 0\). Equation (8.3) is the real part of (8.7). \(\square \)

We obtain Schramm’s formula for multiple SLE in the fusion limit as a corollary.

Corollary 8.2

(Schramm’s formula for two fused SLEs) Let \(0 < \kappa \leqslant 4\). Consider two fused multiple \(\hbox {SLE}_\kappa \) paths in \(\mathbb {H}\) started from 0 and growing toward infinity. Then the probability \(P_f(z)\) that a given point \(z = x + iy \in \mathbb {H}\) lies to the left of both curves is given by (8.1).

Remark

We remark that the method adopted in [18] was based on exploiting so-called fusion rules, which produce a third-order ODE for \(P_f\) that can then be solved to give the prediction in (8.2). However, even given the prediction (8.2) for \(P_f\), it is not clear how to proceed to give a proof that it is correct. As soon as the evolution starts, the tips of the curves are separated and the system leaves the fused state, so it seems difficult to apply a stopping time argument in this case. However, as was pointed out to us by one of the referees, if one knows a priori that the three probabilities that the point is to the left of, to the right of, or between the two curves satisfy this third-order ODE, and if one is given three linearly independent solutions of this ODE, adding up to 1 and with the correct boundary behaviors, then it is possible that this information is enough to deduce the statement of Corollary 8.2. (See [16].)

8.2 Green’s Function

In this subsection, we derive an expression for the Green’s function for \(\hbox {SLE}_\kappa (2)\) started from \((0, 0+)\). Let \(\alpha = 8/\kappa \). For \(\alpha \in (1, \infty ) \smallsetminus {\mathbb {Z}}\), we define the ‘fused’ function \(h_f(\theta ) \equiv h_f(\theta ; \alpha )\) for \(0< \theta < \pi \) by

$$\begin{aligned} h_f(\theta ) = \frac{\pi 2^{\alpha +1}}{\hat{c}} \sin \left( \frac{\pi \alpha }{2}\right) \sin ^{2 \alpha -2}(\theta ) {\text {Re}} \left[ e^{-\frac{1}{2} i \pi \alpha } \, _2F_1\left( 1-\alpha ,\alpha , 1;\frac{1}{2} (1-i \cot (\theta ))\right) \right] , \end{aligned}$$
(8.8)

where the constant \(\hat{c} \equiv \hat{c}(\kappa )\) is defined in (2.11). This definition is motivated by Lemma 6.3, which shows that \(h_f(\theta )\) is the limiting value of \(h(\theta ^1, \theta ^2)\) in the fusion limit \((\theta ^1, \theta ^2) \rightarrow (\theta ,\theta )\). The next lemma shows that this definition of \(h_f\) can be extended by continuity to all \(\alpha > 1\).

Given \(A \in [0,1]\) and \(\epsilon > 0\) small, we let \(L_A^j \equiv L_A^j(\epsilon )\), \(j = 1, \dots , 4\), denote the contours

$$\begin{aligned}&L_A^1 = [A, i\epsilon ] \cup [i\epsilon , -\epsilon ],&L_A^2 = [-\epsilon , -i\epsilon ] \cup [-i\epsilon , A], \nonumber \\&L_A^3 = [A, 1-i\epsilon ] \cup [1-i\epsilon , 1+\epsilon ],&L_A^4 = [1+\epsilon , 1+ i\epsilon ] \cup [1+i\epsilon , A], \end{aligned}$$
(8.9)

oriented so that \(\sum _1^4 L_A^j\) is a counterclockwise contour enclosing 0 and 1, see Fig. 5.

Fig. 5. The contours \(L_A^j\), \(j = 1, \dots , 4\)
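The orientation claim can be verified numerically by computing winding numbers of the concatenated contour. The following Python sketch does this with a plain midpoint-rule quadrature; the chosen values of \(A\) and \(\epsilon \) are illustrative only.

```python
# Consistency check of the orientation in (8.9): the closed polygonal contour
# L_A^1 + L_A^2 + L_A^3 + L_A^4 should wind once (counterclockwise) around 0 and around 1.
import numpy as np

A, eps = 0.5, 0.1   # illustrative values; any A in (0, 1) and small eps > 0 will do

# Vertices of the concatenated contour, in the order given in (8.9).
vertices = [A, 1j * eps, -eps,        # L_A^1
            -1j * eps, A,             # L_A^2
            1 - 1j * eps, 1 + eps,    # L_A^3
            1 + 1j * eps, A]          # L_A^4

def winding_number(w, n_pts=4000):
    """(1 / 2*pi*i) * integral of dv/(v - w), midpoint rule on each straight segment."""
    total = 0j
    for a, b in zip(vertices[:-1], vertices[1:]):
        s = (np.arange(n_pts) + 0.5) / n_pts
        v = a + (b - a) * s
        total += np.sum((b - a) / n_pts / (v - w))
    return total / (2j * np.pi)

print(f"{winding_number(0).real:.4f}")   # ~ 1.0000
print(f"{winding_number(1).real:.4f}")   # ~ 1.0000
print(f"{winding_number(3).real:.4f}")   # ~ 0.0000 (a point outside the contour)
```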

Lemma 8.3

For each \(\theta \in (0,\pi )\), the function \(h_f(\theta ; \alpha )\) defined in (8.8) extends to a continuous function of \(\alpha \in (1, \infty )\) satisfying

$$\begin{aligned} h_f(\theta ; n)&= 2^{n-3} h_n \sin ^{2n-2}\theta \times {\left\{ \begin{array}{ll} {{\mathrm{Re}}\,}\big [2Y_2 - i\pi Y_1\big ], &{} n = 2,4, \dots ,\\ \frac{2}{\pi } {{\mathrm{Re}}\,}\big [2i Y_2 + \pi Y_1 \big ], &{} n = 3,5, \dots , \end{array}\right. } \end{aligned}$$
(8.10)

where the constant \(h_n \in {\mathbb {C}}\) is defined in (A.6) and the coefficients \(Y_j \equiv Y_j(\theta ; n)\), \(j = 1,2\), are defined as follows: Introduce \(y_j \equiv y_j(v,\theta ; n)\), \(j = 0,1\), by

$$\begin{aligned} y_0&= v^{n-1} (1-vz)^{n-1}(1-v)^{-n},\\ y_1&= v^{n-1} (1-vz)^{n-1}(1-v)^{-n}\big (\ln v + \ln (1-vz) - \ln (1-v)\big ), \end{aligned}$$

where \(z = \frac{1 - i\cot \theta }{2}\). Then

$$\begin{aligned} Y_1 = (2\pi i)^2 \underset{v = 1}{{{\mathrm{Res}}\,}} y_0(v,\theta ; n) \end{aligned}$$
(8.11)

and

$$\begin{aligned} Y_2 = 2\pi i \int _{L_A^1 + L_A^2 + L_A^3 + L_A^4} y_1 dv + 2\pi ^2 \int _{L_A^1 - L_A^2 - L_A^3 + L_A^4} y_0 dv, \end{aligned}$$
(8.12)

where \(1/z\) lies exterior to the contours and the principal branch is used for the complex powers throughout all integrations.

Proof

Let \(n \geqslant 2\) be an integer. The standard hypergeometric function \({}_2F_1\) admits the contour integral representation (see [35, Eq. 15.6.5])

$$\begin{aligned} {}_2F_1(a,b,c;z) =&\; \frac{e^{-c\pi i}\Gamma (c)\Gamma (1-b)\Gamma (1+b-c)}{4\pi ^2}\nonumber \\&\times \int _A^{(0+,1+,0-,1-)} v^{b-1} (1 - vz)^{-a} (1-v)^{c-b-1} dv, \end{aligned}$$
(8.13)

where \(A \in (0,1)\), \(z \in {\mathbb {C}}\smallsetminus [1, \infty )\), \(b, c-b \ne 1,2,3, \dots \), and \(1/z\) lies exterior to the contour. Hence, for \(\alpha \notin {\mathbb {Z}}\) and \(z \in {\mathbb {C}}\smallsetminus [1,\infty )\),

$$\begin{aligned} {}_2F_1(1-\alpha ,\alpha , 1; z) = -\frac{1}{4\pi \sin (\pi \alpha )} Y(z; \alpha ), \end{aligned}$$
(8.14)

where

$$\begin{aligned} Y(z; \alpha ) = \int _A^{(0+,1+,0-,1-)} v^{\alpha -1} (1 - vz)^{\alpha -1} (1-v)^{-\alpha } dv. \end{aligned}$$

We first show that the function Y admits the expansion

$$\begin{aligned} Y(z; \alpha ) = (\alpha - n)Y_1 + (\alpha - n)^2Y_2 + O((\alpha - n)^3), \quad \alpha \rightarrow n, \end{aligned}$$
(8.15)

where \(Y_1\) and \(Y_2\) are given by (8.11) and (8.12). Let \(A \in (0,1)\). Then

$$\begin{aligned} Y(z) =&\bigg \{ (1-e^{-2\pi i \alpha }) \int _{L_A^1} + e^{2\pi i(\alpha -1)} (1-e^{-2\pi i \alpha }) \int _{L_A^2} + (e^{2\pi i(\alpha -1)} -1) \int _{L_A^3}\\&+e^{-2\pi i \alpha } (e^{2\pi i(\alpha -1)} -1) \int _{L_A^4} \bigg \} v^{\alpha -1} (1-vz)^{\alpha -1}(1-v)^{-\alpha } dv. \end{aligned}$$

Expanding around \(\alpha = n\) (cf. the proof of (A.9)) gives Eq. (8.15) with \(Y_2\) given by (8.12) and

$$\begin{aligned} Y_1 = 2\pi i \int _{L_A^1 + L_A^2 + L_A^3 + L_A^4} y_0 dv. \end{aligned}$$

Since \(y_0\) is analytic at \(v = 0\) and has a pole at \(v = 1\), we see that \(Y_1\) can be expressed as in (8.11). This proves (8.15).

Equations (8.8) and (8.14) give

$$\begin{aligned} h_f(\theta ) = -\frac{\pi 2^{\alpha +1}}{\hat{c}} \sin \left( \frac{\pi \alpha }{2}\right) \sin ^{2 \alpha -2}(\theta ) {\text {Re}} \left[ \frac{e^{-\frac{\pi i\alpha }{2}}}{4\pi \sin (\pi \alpha )}Y(z; \alpha )\right] . \end{aligned}$$
(8.16)

As \(\alpha \rightarrow n\), we have

$$\begin{aligned}&\hat{c}^{-1} = {\left\{ \begin{array}{ll}-\frac{h_n}{(\alpha -n)^2} + \frac{a_n}{\alpha - n} + O(1), &{} n = 2,4, \dots , \\ -\frac{h_n}{\alpha -n} + b_n + O(\alpha -n), &{} n=3,5, \dots , \end{array}\right. } \end{aligned}$$

where \(a_n, b_n \in {\mathbb {R}}\) are constants. We also have

$$\begin{aligned}&2^{\alpha +1} = 2^{n+1}\big (1 + (\alpha -n)\ln 2 + O((\alpha -n)^2)\big ),\\&\sin \left( \frac{\pi \alpha }{2}\right) = {\left\{ \begin{array}{ll} \frac{(-1)^{\frac{n}{2}}\pi }{2}(\alpha -n) + O((\alpha -n)^3), &{} n = 2,4, \dots ,\\ (-1)^{\frac{n-1}{2}} + O((\alpha -n)^2), &{} n=3,5, \dots , \end{array}\right. }\\&\sin ^{2\alpha -2}\theta = \sin ^{2n-2}(\theta )\big (1 + 2\ln (\sin \theta )(\alpha - n) + O((\alpha -n)^2)\big ),\\&e^{-\frac{\pi i\alpha }{2}} = e^{-\frac{\pi i n}{2}}\Big (1 - \frac{\pi i}{2}(\alpha -n) + O((\alpha -n)^2)\Big ),\\&\frac{1}{4\pi \sin (\pi \alpha )} = \frac{(-1)^{n}}{4\pi ^2(\alpha -n)} + O(\alpha -n). \end{aligned}$$
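These elementary expansions can be double-checked symbolically; the following short Python (SymPy) sketch, included only as a convenience check and playing no role in the argument, prints the corresponding series around \(\alpha = n\) for the first few values of \(n\).

```python
# Convenience check of the elementary expansions around alpha = n used above.
import sympy as sp

a = sp.symbols('alpha')

for n in (2, 3, 4, 5):
    print('n =', n)
    print(' ', sp.series(2 ** (a + 1), a, n, 2))                         # 2^(n+1)(1 + (a-n) ln 2 + ...)
    print(' ', sp.series(sp.sin(sp.pi * a / 2), a, n, 3))                # leading behaviour depends on parity of n
    print(' ', sp.series(1 / (4 * sp.pi * sp.sin(sp.pi * a)), a, n, 2))  # simple pole with residue (-1)^n / (4 pi^2)
```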

Substituting the above expansions into (8.16) and using (8.15), we obtain, if \(n \geqslant 2\) is even,

$$\begin{aligned} h_f(\theta ) =&\; \frac{2^{n-2}h_n\sin ^{2n-2}(\theta ) {{\mathrm{Re}}\,}Y_1}{\alpha - n} + 2^{n-3} h_n \sin ^{2n-2}(\theta )\nonumber \\&\times {{\mathrm{Re}}\,}\bigg [2 Y_2 -i\pi Y_1 + 2 \Big ( \ln 2 - \frac{a_n}{h_n}+ 2\ln (\sin \theta )\Big )Y_1\bigg ] + O(\alpha -n), \end{aligned}$$
(8.17)

while, if \(n \geqslant 3\) is odd,

$$\begin{aligned} h_f(\theta ) =&-\frac{2^{n-1}h_n\sin ^{2n-2}(\theta ) {{\mathrm{Im}}\,}Y_1}{\pi (\alpha - n)}+ \frac{2^{n-2} h_n \sin ^{2n-2}(\theta )}{\pi }\nonumber \\&\times {{\mathrm{Re}}\,}\bigg [2i Y_2 + \pi Y_1 + 2i \Big ( \ln 2 - \frac{b_n}{h_n} + 2\ln (\sin \theta )\Big )Y_1\bigg ] + O(\alpha -n). \end{aligned}$$
(8.18)

In order to establish (8.10), it is therefore enough to show that \({{\mathrm{Re}}\,}Y_1 = 0\) for even n and that \({{\mathrm{Im}}\,}Y_1 = 0\) for odd n.

Consider the function J(z) defined by

$$\begin{aligned} J(z) = \int _{|v-1|=\epsilon } v^{n-1} (1-vz)^{n-1}(1-v)^{-n}dv, \quad z \in {\mathbb {C}}\smallsetminus \{1\}, \end{aligned}$$

where \(\epsilon > 0\) is so small that \(1/z\) lies outside the contour. Then, by (A.14),

$$\begin{aligned} \overline{J(z)} = -\int _{|v-1|=\epsilon } v^{n-1} (1-v\bar{z})^{n-1}(1-v)^{-n}dv = -J(\bar{z}), \quad z \in {\mathbb {C}}\smallsetminus \{1\}. \end{aligned}$$

Letting \(u = 1-v\), we can express J(z) as

$$\begin{aligned} J(z) = -\int _{|u|=\epsilon } (1-u)^{n-1} (1-(1-u)z)^{n-1}u^{-n}du. \end{aligned}$$

The change of variables \(u= \frac{z - 1}{z}\tilde{u}\) then yields

$$\begin{aligned} J(z)= & {} (-1)^{n} \int _{|\tilde{u}|=\epsilon } (z - z\tilde{u} + \tilde{u})^{n-1} (1-\tilde{u})^{n-1} \tilde{u}^{-n} d\tilde{u} \\= & {} (-1)^{n-1} J(1-z), \quad z \in {\mathbb {C}}\smallsetminus \{0,1\}. \end{aligned}$$

Hence, if \({{\mathrm{Re}}\,}z = 1/2\),

$$\begin{aligned} \overline{J(z)} = -J(\bar{z}) = -J(1-z) = (-1)^n J(z). \end{aligned}$$

Since

$$\begin{aligned} Y_1(\theta ; n) = 2\pi i J\left( \frac{1 - i\cot \theta }{2}\right) , \end{aligned}$$

it follows that \({{\mathrm{Re}}\,}Y_1 = 0\) (\({{\mathrm{Im}}\,}Y_1 = 0\)) for even (odd) n. This completes the proof of the lemma. \(\square \)
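The symmetry exploited at the end of the proof can also be confirmed directly from the residue formula (8.11). The following Python (SymPy) sketch, with ad hoc helper names, computes \(Y_1\) for a sample value of \(\theta \) and the first few values of \(n\); the real part vanishes for even \(n\) and the imaginary part vanishes for odd \(n\).

```python
# Check of the symmetry used above: with z = (1 - i*cot(theta))/2, the quantity
# Y_1 = (2*pi*i)^2 * Res_{v=1} y_0 is purely imaginary for even n and purely real for odd n.
import sympy as sp

v, theta = sp.symbols('v theta')

def Y1(n, theta_val):
    z = (1 - sp.I * sp.cot(theta)) / 2
    # y_0 = v^(n-1) (1 - v z)^(n-1) (1 - v)^(-n) has a pole of order n at v = 1.
    # Writing (1 - v)^(-n) = (-1)^n (v - 1)^(-n), the standard residue formula gives
    # Res_{v=1} y_0 = (-1)^n * g^(n-1)(1) / (n-1)!  with  g(v) = v^(n-1) (1 - v z)^(n-1).
    g = v ** (n - 1) * (1 - v * z) ** (n - 1)
    res = (-1) ** n * sp.diff(g, v, n - 1).subs(v, 1) / sp.factorial(n - 1)
    return ((2 * sp.pi * sp.I) ** 2 * res).subs(theta, theta_val)

for n in range(2, 7):
    val = complex(Y1(n, sp.Rational(2, 5)))   # sample theta = 2/5 lies in (0, pi)
    print(n, val)                             # even n: Re ~ 0;  odd n: Im ~ 0
```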

Taking \(\xi \rightarrow 0+\) in Theorem 2.6, we obtain the following result for \(\hbox {SLE}_\kappa (2)\) in the fusion limit.

Theorem 8.4

(Green’s function for fused \(\hbox {SLE}_\kappa (2)\)) Let \(0< \kappa \leqslant 4\) and consider chordal \(\hbox {SLE}_\kappa (2)\) started from \((0,0+)\). Then, for each \(z = x + iy \in \mathbb {H}\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon ^{d-2}\mathbf {P}^2 \left( \Upsilon _\infty (z) \leqslant \epsilon \right) = c_* {\mathcal {G}}_f(z), \end{aligned}$$
(8.19)

where \(\mathbf {P}^2\) is the \(\hbox {SLE}_\kappa (2)\) measure, the function \({\mathcal {G}}_f\) is defined by

$$\begin{aligned} {\mathcal {G}}_f(z) = ({\text {Im}} z)^{d-2}h_f(\arg z), \quad z \in \mathbb {H}, \end{aligned}$$
(8.20)

and the constant \(c_* = c_*(\kappa )\) is given by (2.13).

For any given integer \(n \geqslant 2\), we can compute the integrals in (8.11) and (8.12) defining \(Y_1\) and \(Y_2\) explicitly by taking the limit \(\epsilon \rightarrow 0\). For the simplest cases \(n=2,3,4\), this leads to the expressions for the fused \(\hbox {SLE}_\kappa (2)\) Green's function presented in the following proposition.

Proposition 8.5

For \(\alpha = 2,3,4\) (corresponding to \(\kappa = 4, 8/3, 2\), respectively), the function \(h_f(\theta )\) in (8.20) is given explicitly by

$$\begin{aligned} h_f(\theta ) = {\left\{ \begin{array}{ll} \frac{2}{\pi }(\sin \theta - \theta \cos \theta ) \sin \theta , &{} \alpha = 2,\\ \frac{8}{15 \pi } (4 \theta -3 \sin {2 \theta } +2 \theta \cos {2 \theta })\sin ^2\theta , &{} \alpha = 3,\\ \frac{1}{12 \pi }(27 \sin \theta + 11 \sin {3 \theta } -6 \theta (9 \cos \theta + \cos {3 \theta })) \sin ^3\theta , &{} \alpha = 4, \end{array}\right. } \ 0< \theta < \pi . \end{aligned}$$

Proof

The proof relies on long but straightforward computations and is similar to that of Proposition 2.7. \(\square \)

Remark

The formulas in Proposition 8.5 can also be obtained by taking the limit \(\theta ^2 \downarrow \theta ^1\) in the formulas of Proposition 2.7.

In view of Lemma 2.8, it follows from Theorem 2.6 that the Green’s function for two fused multiple SLEs started from 0 is given by the symmetrized expression \({\mathcal {G}}_f(z) + {\mathcal {G}}_f(-\bar{z})\). We formulate this as a corollary.

Corollary 8.6

(Green’s function for two fused SLEs) Let \(0< \kappa \leqslant 4\). Consider a system of two fused multiple \(\hbox {SLE}_\kappa \) paths in \(\mathbb {H}\) started from 0 and growing towards \(\infty \). Then, for each \(z = x + iy \in \mathbb {H}\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \epsilon ^{d-2}\mathbf {P}\left( \Upsilon _\infty (z) \leqslant \epsilon \right) = c_* ({\mathcal {G}}_f(z) + {\mathcal {G}}_f(-\bar{z})), \end{aligned}$$

where \(d=1+\kappa /8\), the constant \(c_* = c_*(\kappa )\) is given by (2.13), and the function \({\mathcal {G}}_f\) is defined by (8.20).
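For concreteness, the following minimal Python sketch evaluates the symmetrized expression of Corollary 8.6 for \(\kappa = 4\) (so \(\alpha = 2\) and \(d = 3/2\)), using the explicit formula for \(h_f\) from Proposition 8.5; the constant \(c_*\) is omitted and the function names are ours and purely illustrative.

```python
# Evaluate the symmetrized fused Green's function G_f(z) + G_f(-conj(z)) for kappa = 4,
# using h_f(theta) = (2/pi) (sin(theta) - theta*cos(theta)) sin(theta)   (alpha = 2 case)
# and G_f(z) = (Im z)^(d-2) h_f(arg z) with d = 1 + kappa/8 = 3/2, cf. (8.20).
import cmath
import math

KAPPA = 4.0
D = 1.0 + KAPPA / 8.0          # d = 3/2

def h_f(theta: float) -> float:
    # alpha = 2 entry of Proposition 8.5
    return (2.0 / math.pi) * (math.sin(theta) - theta * math.cos(theta)) * math.sin(theta)

def G_f(z: complex) -> float:
    return z.imag ** (D - 2.0) * h_f(cmath.phase(z))

def G_fused(z: complex) -> float:
    # symmetrized expression of Corollary 8.6 (up to the constant c_*)
    return G_f(z) + G_f(-z.conjugate())

z = 0.4 + 0.9j
print(G_fused(z))
print(G_fused(-z.conjugate()))   # equals the previous value, by construction
```

By construction, the output is invariant under \(z \mapsto -\bar{z}\), reflecting the left–right symmetry of the two-curve system.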