1 Introduction

The stochastic Loewner evolution (SLE) introduced by Oded Schramm [1] describes random fractal curves in plane domains that satisfy conformal invariance and the domain Markov property. These two properties make SLE the most suitable candidate for the scaling limits of many two-dimensional lattice models at criticality. These models are proved or conjectured to converge to SLE with various parameters (e.g., [27]). For the basics of SLE, the reader may refer to [8] and [9].

There are several different versions of SLE, among which chordal SLE and radial SLE are the best known. A chordal or radial SLE trace is a random fractal curve that grows in a simply connected plane domain from a boundary point. A chordal SLE trace ends at another boundary point, and a radial SLE trace ends at an interior point. The behavior of both depends on a positive parameter \(\kappa \). When \(\kappa \in (0,4]\), both traces are simple curves, and all points on the trace other than the initial and final points lie inside the domain. When \(\kappa >4\), the traces have self-intersections.

A stochastic coupling technique was introduced in [10] to prove that, for \(\kappa \in (0,4]\), chordal SLE\(_\kappa \) satisfies reversibility, which means that if \(\beta \) is a chordal SLE\(_\kappa \) trace in a domain \(D\) from \(a\) to \(b\), then after a time-change, the time-reversal of \(\beta \) becomes a chordal SLE\(_\kappa \) trace in \(D\) from \(b\) to \(a\). The technique was later used [11, 12] to prove Duplantier’s duality conjecture, which says that, for \(\kappa >4\), the boundary of the hull generated by a chordal SLE\(_\kappa \) trace looks locally like an SLE\(_{16/\kappa }\) trace. The technique was also used to prove that the radial or chordal SLE\(_2\) can be obtained by erasing the loops of a planar Brownian motion [13], and that the chordal SLE\((\kappa ,\rho )\) introduced in [2] also satisfies reversibility for \(\kappa \in (0,4]\) and \(\rho \ge \kappa /2-2\) [14].

Since the initial point and the final point of a radial SLE are topologically different, the time-reversal of a radial SLE trace cannot be a radial SLE trace. However, we may consider whole-plane SLE instead, which describes a random fractal curve in the Riemann sphere \(\widehat{\mathbb {C}}=\mathbb {C}\cup \{\infty \}\) that grows from one interior point to another interior point. Whole-plane SLE is related to radial SLE as follows: conditioned on an initial part of a whole-plane SLE\(_\kappa \) trace, the remaining part of the trace has the distribution of a radial SLE\(_\kappa \) trace that grows in the complementary domain of that initial part. The main result of this paper is the following theorem.

Theorem 1.1

Whole-plane SLE\(_\kappa \) satisfies reversibility for \(\kappa \in (0,4]\).

The theorem in the case \(\kappa =2\) has been proved in [15]. The proof used the reversibility of the loop-erased random walk (LERW for short, see [16]) and the convergence of LERW to whole-plane SLE\(_2\). In this paper we will obtain a slightly more general result: the reversibility of the whole-plane SLE\((\kappa ,s)\) processes, which are defined by adding a constant drift to the driving function of the whole-plane SLE\(_\kappa \) process. This is the statement of Theorem 7.1.

To get some idea of the proof, let’s first review the proof of the reversibility of chordal SLE\(_\kappa \) in [10]. We constructed a pair of chordal SLE\(_\kappa \) traces \(\gamma _1\) and \(\gamma _2\) in a simply connected domain \(D\), where \(\gamma _1\) grows from a boundary point \(a_1\) to another boundary point \(a_2\), \(\gamma _2\) grows from \(a_2\) to \(a_1\), and these two traces commute in the following sense. Fix \(j\ne k\in \{1,2\}\). If \(T_k\) is a stopping time for \(\gamma _k\), then conditioned on \(\gamma _k(t)\), \(t\le T_k\), the part of \(\gamma _j\) before hitting \(\gamma _k((0,T_k])\) has the distribution of a chordal SLE\(_\kappa \) trace that grows from \(a_j\) to \(\gamma _k(T_k)\) in \(D_k(T_k)\), which is a component of \(D{\setminus }\gamma _k((0,T_k])\). In the case \(\kappa \le 4\), a.s. \(\gamma _j\) hits \(\gamma _k((0,T_k])\) exactly at \(\gamma _k(T_k)\), so \(\gamma _j\) visits \(\gamma _k(T_k)\) before any \(\gamma _k(t)\), \(t<T_k\). Since this holds for any stopping time \(T_k\) for \(\gamma _k\), the two traces a.s. overlap, which implies the reversibility.

To prove the reversibility of whole-plane SLE\(_\kappa \), we want to construct two whole-plane SLE\(_\kappa \) traces in \(D=\widehat{\mathbb {C}}\), one being \(\gamma _1\) from \(a_1\) to \(a_2\), the other being \(\gamma _2\) from \(a_2\) to \(a_1\), so that \(\gamma _1\) and \(\gamma _2\) commute. Here we cannot expect that they commute in exactly the same sense as in the above paragraph. Note that, conditioned on \(\gamma _k(t)\), \(t\le T_k\), the part of \(\gamma _j\) before hitting \(\gamma _k((0,T_k])\) cannot have the distribution of a whole-plane SLE\(_\kappa \) trace in \(D_k(T_k)\) from \(a_j\) to \(\gamma _k(T_k)\), because now the complementary domain \(D_k(T_k)\) is topologically different from \(\widehat{\mathbb {C}}\), while whole-plane SLEs are only defined in \(\widehat{\mathbb {C}}\). Since the conditional curve grows from an interior point to a boundary point, it is neither a radial SLE trace nor a chordal SLE trace.

Thus, we need to define SLE traces in simply connected domains that grow from an interior point to a boundary point. We use the idea of defining whole-plane SLE using radial SLE. The situation here is a little different: after a positive initial part, the remaining part of the curve grows in a doubly connected domain. Another difference is that there is a marked point on the boundary of the initial domain. In this paper, we use the annulus Loewner equation introduced in [17] together with an annulus drift function \(\Lambda =\Lambda (t,x)\) to define the so-called annulus SLE\((\kappa ,\Lambda )\) process in a doubly connected domain \(D\), which starts from a point \(a\in \partial D\), and whose growth is affected by a marked point \(b\in \partial D\). In the case when \(a\) and \(b\) lie on different boundary components, by shrinking the boundary component containing \(a\) to a single point, we get the so-called disc SLE\((\kappa ,\Lambda )\), which describes a random curve that grows in a simply connected domain and starts from an interior point.

We find that if \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\), where \(\Gamma \) is a positive differentiable function defined on \((0,\infty )\times \mathbb {R}\) that solves a linear PDE and satisfies a certain periodicity condition [see (4.1) and (4.2)], then using the coupling technique we can construct a coupling of two whole-plane SLE\(_\kappa \) traces \(\gamma _1\) and \(\gamma _2\), which commute in the sense that, conditioned on one curve up to a finite stopping time \(T\), the other curve is a disc SLE\((\kappa ,\Lambda )\) trace in the remaining domain, and its marked point is the tip point of the first curve at \(T\).

The main new idea in the current paper is an application of a Feynman–Kac representation, which is used to obtain a formal solution of the PDE for \(\Gamma \) in the case \(\kappa \in (0,4]\). Using Fubini’s theorem, Itô’s formula, and some estimates, we prove that the formal solution \(\Gamma _\kappa \) is smooth and solves the PDE. We then find that \(\Lambda _\kappa {:=}\kappa \frac{\Gamma _\kappa '}{\Gamma _\kappa }\) has the property that the marked point of an annulus or disc SLE\((\kappa ,\Lambda _\kappa )\) process is a subsequential limit point of the trace. This property implies that, if two whole-plane SLE\(_\kappa \) traces commute as in the previous paragraph, then they must overlap, which proves the main theorem. Moreover, from the relation between whole-plane SLE\(_\kappa \) and radial SLE\(_\kappa \), we conclude that, for \(\kappa \in (0,4]\), the time-reversal of a radial SLE\(_\kappa \) trace is a disc SLE\((\kappa ,\Lambda _\kappa )\) trace.

The marked point and the initial point of an annulus SLE\((\kappa ,\Lambda )\) process could also lie on the same boundary component. In this case, if \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\), and \(\Gamma \) satisfies a similar linear PDE [see (4.48)], then for a doubly connected domain \(D\) with two boundary points \(a_1\) and \(a_2\) on the same boundary component, we can construct a pair of annulus SLE\((\kappa ,\Lambda )\) traces \(\gamma _1\) and \(\gamma _2\) in \(D\), which commute with each other. If an SLE process in a doubly connected domain is the scaling limit of some random lattice path that satisfies reversibility at the discrete level, then such an SLE should satisfy reversibility. We hope that the work in this paper will shed some light on the study of these processes.

The study of the commutation relations of SLE in doubly connected domains continues the work in [18] by Dubédat, who used tools from Lie algebra to obtain commutation conditions for SLE in simply connected domains. The annulus SLE\((\kappa ,\Lambda _{\kappa })\) process used to prove the reversibility of whole-plane SLE\(_\kappa \) was later studied in [19]. When \(\kappa =8/3\), the process satisfies the restriction property, which is similar to the restriction property of chordal SLE\(_{8/3}\) (see [2]). For \(\kappa \in (0,4]{\setminus }\{8/3\}\), it satisfies some “weak” restriction property.

Lawler [20] used a different method to define annulus SLE\(_\kappa \) processes for \(\kappa \in (0,4]\), which agree with our annulus SLE\((\kappa ,\Lambda _{\kappa })\) processes. His construction uses Brownian loop measures. The “strong” (\(\kappa =8/3\)) and “weak” (\(\kappa \ne 8/3\)) restriction properties of Lawler’s annulus SLE processes are immediate from the definition, and the reversibility of these processes follows from the chordal reversibility. However, the reversibility of whole-plane SLE is not proved in [20]; to get the whole-plane reversibility, some additional work based on Lawler’s construction is required. In this paper, the reversibility of annulus SLE\((\kappa ,\Lambda _{\kappa })\) and the reversibility of whole-plane SLE\(_\kappa \) are proved separately, and the coupling technique is applied in both proofs, which are similar to each other.

Miller and Sheffield [21] recently proved the reversibility of whole-plane SLE for all \(\kappa \in [0,8]\). Their proof uses the imaginary geometry of the Gaussian free field developed in their earlier papers (cf. [22]).

This paper is organized as follows. In Sect. 2, we introduce some symbols and notation. In Sect. 3, we review several versions of Loewner equations. In Sect. 3.4, we define annulus SLE\((\kappa ,\Lambda )\) and disc SLE\((\kappa ,\Lambda )\) processes, whose growth is affected by one marked boundary point. In Sect. 4 we prove that when \(\Gamma \) solves (4.1) or (4.48), there is a commutation coupling of two annulus SLE\((\kappa ,\Lambda )\) processes, where \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\). In Sect. 5, we construct a coupling of two whole-plane SLE processes, which is similar to the coupling in the previous section. In Sect. 6, we solve the PDE (4.1) using a Feynman–Kac expression, and the solution is then used in Sect. 7 to prove the reversibility of the whole-plane SLE\(_\kappa \) process. In fact, we obtain a slightly more general result: the reversibility of skew whole-plane SLE\(_\kappa \) processes for \(\kappa \in (0,4]\). In the last section, we find some solutions to the PDEs for \(\Gamma \) and \(\Lambda \) when \(\kappa \in \{0,2,3,4,16/3\}\), which can be expressed in terms of well-known special functions.

2 Preliminary

2.1 Symbols

Throughout this paper, we will use the following symbols. Let \(\widehat{\mathbb {C}}=\mathbb {C}\cup \{\infty \}\), \(\mathbb {D}=\{z\in \mathbb {C}:|z|<1\}\), \(\mathbb {T}=\{z\in \mathbb {C}:|z|=1\}\), and \(\mathbb {H}=\{z\in \mathbb {C}:{{\mathrm{Im }}}z>0\}\). For \(p>0\), let \(\mathbb {A}_p=\{z\in \mathbb {C}:1>|z|> e^{-p}\}\) and \(\mathbb {S}_p=\{z\in \mathbb {C}:0< {{\mathrm{Im }}}z<p\}\). For \(p\in \mathbb {R}\), let \(\mathbb {T}_p=\{z\in \mathbb {C}:|z|=e^{-p}\}\) and \(\mathbb {R}_p=\{z\in \mathbb {C}:{{\mathrm{Im }}}z=p\}\). Then \(\partial \mathbb {D}=\mathbb {T}\), \(\partial \mathbb {H}=\mathbb {R}\), \(\partial \mathbb {A}_p=\mathbb {T}\cup \mathbb {T}_p\), and \(\partial \mathbb {S}_p=\mathbb {R}\cup \mathbb {R}_p\). Let \(e^i\) denote the map \(z\mapsto e^{iz}\). Then \(e^i\) is a covering map from \(\mathbb {H}\) onto \(\mathbb {D}\), and from \(\mathbb {S}_p\) onto \(\mathbb {A}_p\); and it maps \(\mathbb {R}\) onto \(\mathbb {T}\) and maps \(\mathbb {R}_p\) onto \(\mathbb {T}_p\). For a doubly connected domain \(D\), we use \({{\mathrm{mod}}}(D)\) to denote its modulus. For example, \({{\mathrm{mod}}}(\mathbb {A}_p)=p\).

A conformal map in this paper is a univalent analytic function. A conjugate conformal map is defined to be the complex conjugate of a conformal map. Let \(I_0(z)=1/\overline{z}\) be the reflection w.r.t. \(\mathbb {T}\). Then \(I_0\) is a conjugate conformal map from \(\widehat{\mathbb {C}} \) onto itself, fixes \(\mathbb {T}\), and interchanges \(0\) and \(\infty \). Let \(\widetilde{I}_0(z)=\overline{z}\) be the reflection w.r.t. \(\mathbb {R}\). Then \(\widetilde{I}_0\) is a conjugate conformal map from \(\mathbb {C}\) onto itself and satisfies \(e^i\circ \widetilde{I}_0=I_0\circ e^i\). For \(p>0\), let \(I_p(z){:=}e^{-p}/\overline{z}\) and \(\widetilde{I}_p(z)=ip+\overline{z}\). Then \(I_p\) and \(\widetilde{I}_p\) are conjugate conformal automorphisms of \(\mathbb {A}_p\) and \(\mathbb {S}_p\), respectively. Moreover, \(I_p\) interchanges \(\mathbb {T}_p\) and \(\mathbb {T}\), \(\widetilde{I}_p\) interchanges \(\mathbb {R}_p\) and \(\mathbb {R}\), and \(I_p\circ e^i =e^i\circ \widetilde{I}_p\).

We will frequently use functions \(\cot (z/2)\), \(\tan (z/2)\), \(\coth (z/2)\), \(\tanh (z/2)\), \(\sin (z/2)\), \(\cos (z/2)\), \(\sinh (z/2)\), and \(\cosh (z/2)\). For simplicity, we write \(2\) as a subscript. For example, \(\cot _2(z)\) means \(\cot (z/2)\), and we have \(\cot _2'(z)=-\frac{1}{2}\sin _2^{-2}(z)\).

An increasing function in this paper will always be strictly increasing. For a real interval \(J\), we use \(C(J)\) to denote the space of real continuous functions on \(J\). The maximal solution to an ODE or SDE with a given initial value is the solution with the largest domain of definition.

Many functions in this paper depend on two variables. In some of these functions, the first variable represents time or modulus, and the second variable does not. In this case, we use \(\partial _t\) and \(\partial _t^{n}\) to denote the partial derivatives w.r.t. the first variable, and use \('\), \(''\), and the superscript \((h)\) to denote the partial derivatives w.r.t. the second variable. For such a function, we say that it has period \(r\) (resp. is even or odd) if it has period \(r\) (resp. is even or odd) in the second variable when the first variable is fixed. Some functions in Sects. 4 and 5 depend on two variables \(t_1\) and \(t_2\), which both represent time. In this case we use \(\partial _j\) to denote the partial derivative w.r.t. the \(j\)-th variable, \(j=1,2\).

In this paper, a domain is a connected open subset of \(\widehat{\mathbb {C}}\), and a continuum is a connected compact subset of \(\widehat{\mathbb {C}}\) that contains more than one point. A continuum \(K\) is called a hull in \(\mathbb {C}\) if \(K\subset \mathbb {C}\) and \(\widehat{\mathbb {C}}{\setminus }\! K\) is connected. In this case, there is a unique conformal map \(f_K\) from \(\widehat{\mathbb {C}}{\setminus }\overline{\mathbb {D}}\) onto \(\widehat{\mathbb {C}}{\setminus }\! K\) that satisfies \(\lim _{z\rightarrow \infty } f_K(z)/z=a_K\) for some positive number \(a_K\). Then \(a_K\) is called the capacity of \(K\), and is denoted by \({{\mathrm{cap}}}(K)\).

A doubly connected domain in this paper is a domain whose complement is a disjoint union of two continua. Let \(D\) be a doubly connected domain. If \(K\) is a relatively closed subset of \(D\) that has positive distance from one boundary component of \(D\), and if \(D{\setminus }K\) is also doubly connected, then we call \(K\) a hull in \(D\), call the number \({{\mathrm{mod}}}(D)-{{\mathrm{mod}}}(D{\setminus }K)\) the capacity of \(K\) in \(D\), and denote it by \({{\mathrm{cap}}}_{D}(K)\).

2.2 Brownian motions

Throughout this paper, a Brownian motion means a standard one-dimensional Brownian motion, and \(B(t)\), \(0\le t<\infty \), will always be used to denote a Brownian motion. This means that \(B(t)\) is continuous, \(B(0)=0\), and \(B(t)\) has independent increments with \(B(t)-B(s)\sim N(0,t-s)\) for \(t\ge s\ge 0\). For \(\kappa \ge 0\), the rescaled Brownian motion \(\sqrt{\kappa }B(t)\) will be used to define annulus SLE\(_\kappa \). The symbols \(B_*(t)\), \(\widehat{B}_*(t)\), or \(\widetilde{B}_*(t)\) will also be used to denote Brownian motions, where \(*\) stands for a subscript. Let \(({\mathcal {F}}_t)_{t\ge 0}\) be a filtration. By saying that \(B(t)\) is an \(({\mathcal {F}}_t)\)-Brownian motion, we mean that \((B(t))\) is \(({\mathcal {F}}_t)\)-adapted, and for any fixed \(t_0\ge 0\), \(B(t_0+t)-B(t_0)\), \(t\ge 0\), is a Brownian motion independent of \({\mathcal {F}}_{t_0}\).

Definition 2.1

Let \(\kappa >0\) and \(({\mathcal {F}}_t)_{t\in \mathbb {R}}\) be a right-continuous filtration. A process \(B^{(\kappa )}(t)\), \(t\in \mathbb {R}\), is called a pre-\(({\mathcal {F}}_t)\)-\((\mathbb {T};\kappa )\)-Brownian motion if \((e^i(B^{(\kappa )}(t)))\) is \(({\mathcal {F}}_t)\)-adapted, and for any \(t_0\in \mathbb {R}\),

$$\begin{aligned} B_{t_0}(t){:=}\frac{1}{\sqrt{\kappa }}\Big (B^{(\kappa )}(t_0+t)-B^{(\kappa )}(t_0)\Big ),\quad 0\le t<\infty , \end{aligned}$$
(2.1)

is an \(({\mathcal {F}}_{t_0+t})\)-Brownian motion. If \(({\mathcal {F}}_t)\) is generated by \((e^i(B^{(\kappa )}(t)))\), then we simply call \((B^{(\kappa )}(t))\) a pre-\((\mathbb {T};\kappa )\)-Brownian motion.

Remark

The name of the pre-\((\mathbb {T};\kappa )\)-Brownian motion comes from the fact that \(B_{\mathbb {T}}(t):=e^i(B^{(\kappa )}(t))\), \(t\in \mathbb {R}\), is a Brownian motion on \(\mathbb {T}\) with speed \(\kappa \): for every \(t_0\in \mathbb {R}\), \(B_{\mathbb {T}}(t_0)\) is uniformly distributed on \(\mathbb {T}\); and \(B_{\mathbb {T}}(t_0+t)/B_{\mathbb {T}}(t_0)\), \(t\ge 0\), has the distribution of \(e^i(\sqrt{\kappa }B(t))\), \(t\ge 0\), and is independent of \(B_{\mathbb {T}} (t)\), \(t\le t_0\). One may construct \(B^{(\kappa )}(t)\) as follows. Let \(B_+(t)\) and \(B_-(t)\), \(t\ge 0\), be two independent Brownian motions. Let \(\mathbf{x}\) be a random variable uniformly distributed on \([0,2\pi )\), which is independent of \((B_\pm (t))\). Let \(B^{(\kappa )}(t) =\mathbf{x}+\sqrt{\kappa }B_{{{\mathrm{sign}}}(t)}(|t|)\) for \(t\in \mathbb {R}\). Then \(B^{(\kappa )}(t)\), \(t\in \mathbb {R}\), is a pre-\((\mathbb {T};\kappa )\)-Brownian motion.
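
The construction in this remark is straightforward to simulate. The following is a minimal numerical sketch (Python/NumPy), not part of the paper's argument, which samples \(B^{(\kappa )}(t)\) on a uniform grid; the horizon \(T\), the step size \(dt\), and the function name are arbitrary choices.

```python
import numpy as np

def pre_T_kappa_bm(kappa, T=5.0, dt=1e-3, rng=None):
    """Sample B^(kappa)(t) on the grid {k*dt : |k| <= T/dt} following the remark:
    B^(kappa)(t) = x + sqrt(kappa)*B_{sign(t)}(|t|), where x is uniform on [0, 2*pi)
    and B_+, B_- are two independent standard Brownian motions on [0, T]."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(round(T / dt))
    B_plus = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    B_minus = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))
    x = rng.uniform(0.0, 2 * np.pi)
    ks = np.arange(-n, n + 1)
    vals = x + np.sqrt(kappa) * np.where(ks >= 0, B_plus[np.abs(ks)], B_minus[np.abs(ks)])
    return ks * dt, vals

ts, B_kappa = pre_T_kappa_bm(kappa=2.0)   # such a path serves as a driving function in Sect. 3.1
```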

Definition 2.2

Let \(B^{(\kappa )}(t)\), \(t\in \mathbb {R}\), be a pre-\(({\mathcal {F}}_t)\)-\((\mathbb {T};\kappa )\)-Brownian motion, where \(({\mathcal {F}}_t)\) is right-continuous, and every \({\mathcal {F}}_t\) contains all negligible events w.r.t. the process \((e^i(B^{(\kappa )}(t)))\). Suppose \(T\) is an \(({\mathcal {F}}_t)\)-stopping time, and \(T>t_0\) for a deterministic number \(t_0\in \mathbb {R}\). We say that \(X(t)\) satisfies the \(({\mathcal {F}}_t)\)-adapted SDE

$$\begin{aligned} dX(t)=a(t) dB^{(\kappa )}(t)+b(t)dt, \quad -\infty <t<T, \end{aligned}$$

if \(e^i(X(t))\), \(a(t)\), and \(b(t)\) are continuous and \(({\mathcal {F}}_t)\)-adapted, and if for any deterministic number \(t_0\) with \(t_0<T\), \(X_{t_0}(t) \,{:=}\,X(t_0+t)-X(t_0)\) satisfies the following \(({\mathcal {F}}_{t_0+t})_{t\ge 0}\)-adapted SDE with the traditional meaning (c.f. Chapter IV, Section 3 of [23]):

$$\begin{aligned} dX_{t_0}(t)=a_{t_0}(t)\sqrt{\kappa }dB_{t_0}(t)+b_{t_0}(t) dt,\quad 0\le t<T-{t_0}, \end{aligned}$$

where \(B_{t_0}(t)\) is given by (2.1), \(a_{t_0}(t)\,{:=}\,a({t_0}\!+\!t)\), and \(b_{t_0}(t)\,{:=}\,b({t_0}\!+\!t)\). Note that \(B_{t_0}(t)\) is an \(({\mathcal {F}}_{{t_0}\!+\!t})_{t\ge 0}\)-Brownian motion, and that \(X_{t_0}(t)\), \(a_{t_0}(t)\), and \(b_{t_0}(t)\) are all \(({\mathcal {F}}_{{t_0}\!+\!t})_{t\ge 0}\)-adapted.

2.3 Special functions

We now introduce some functions that will be used to define annulus Loewner equations. For \(t>0\), define

$$\begin{aligned} \mathbf{S}(t,z)&= \lim _{M\rightarrow \infty }\sum _{k=-M}^M \frac{e^{2kt}+z}{e^{2kt}-z} ={{\mathrm{P.V.}}}\sum _{2\mid n} \frac{e^{nt}+z}{e^{nt}-z},\\ \mathbf{H}(t,z)&= -i\mathbf{S}(t,e^i(z))=-i{{\mathrm{P.V.}}}\sum _{2\mid n} \frac{e^{nt}+e^{iz}}{e^{nt}-e^{iz}}={{\mathrm{P.V.}}}\sum _{2\mid n}\cot _2(z-int). \end{aligned}$$

Then \(\mathbf{H}(t,\cdot )\) is a meromorphic function in \(\mathbb {C}\), whose poles are \(\{2m\pi +i2kt:m,k\in \mathbb {Z}\}\), which are all simple poles with residue \(2\). Moreover, \(\mathbf{H}(t,\cdot )\) is an odd function and takes real values on \(\mathbb {R}{\setminus }\{\text{ poles }\}\); \({{\mathrm{Im }}}\mathbf{H}(t,\cdot )\equiv -1\) on \(\mathbb {R}_t\); \(\mathbf{H}(t,z+2\pi )=\mathbf{H}(t,z)\) and \(\mathbf{H}(t,z+i2t)=\mathbf{H}(t,z)-2i\) for any \(z\in \mathbb {C}{\setminus }\!\{\text{ poles }\}\).

The power series expansion of \(\mathbf{H}(t,\cdot )\) near \(0\) is

$$\begin{aligned} \mathbf{H}(t,z)=\frac{2}{z}+ \mathbf{r}(t)z+O(z^3), \end{aligned}$$
(2.2)

where \( \mathbf{r}(t)=\sum _{k=1}^\infty \sinh ^{-2}(kt)-\frac{1}{6}\). As \(t\rightarrow \infty \), \(\mathbf{S}(t,z)\rightarrow \frac{1+z}{1-z}\), \(\mathbf{H}(t,z)\rightarrow \cot _2(z)\), and \(\mathbf{r}(t)\rightarrow -\frac{1}{6}\). So we define \(\mathbf{S}(\infty ,z)=\frac{1+z}{1-z}\), \(\mathbf{H}(\infty ,z)=\cot _2(z)\), and \(\mathbf{r}(\infty )=-\frac{1}{6}\). Then \(\mathbf{r}\) is continuous on \((0,\infty ]\), and (2.2) still holds when \(t=\infty \). In fact, we have \(\mathbf{r}(t)-\mathbf{r}(\infty ) =O(e^{-t})\) as \(t\rightarrow \infty \), so we may define \(\mathbf{R}\) on \((0,\infty ]\) by \( \mathbf{R}(t) =-\int _t^\infty (\mathbf{r}(s)-\mathbf{r}(\infty ))ds\). Then \(\mathbf{R}\) is continuous on \((0,\infty ]\), \(\mathbf{R}(t)=O(e^{-t})\) as \(t\rightarrow \infty \), and for \(0<t<\infty \),

$$\begin{aligned} \mathbf{R}'(t)=\mathbf{r}(t)-\mathbf{r}(\infty ). \end{aligned}$$
(2.3)

Let \(\mathbf{S}_I(t,z)=\mathbf{S}(t,e^{-t}z)-1\) and \(\mathbf{H}_I(t,z)=-i\mathbf{S}_I(t,e^{iz})=\mathbf{H}(t,z+it)+i\). It is easy to check:

$$\begin{aligned} \mathbf{S}_I(t,z)={{\mathrm{P.V.}}}\sum _{2\not \mid n} \frac{e^{nt}+z}{e^{nt}-z}, \quad \mathbf{H}_I(t,z) ={{\mathrm{P.V.}}}\sum _{2\not \mid n} \cot _2(z-int). \end{aligned}$$
(2.4)

So \(\mathbf{H}_I(t,\cdot )\) is a meromorphic function in \(\mathbb {C}\) with poles \(\{2m\pi +i(2k+1)t:m,k\in \mathbb {Z}\}\), which are all simple poles with residue \(2\); \(\mathbf{H}_I(t,\cdot )\) is an odd function and takes real values on \(\mathbb {R}\); and \(\mathbf{H}_I(t,z+2\pi )=\mathbf{H}_I(t,z)\), \(\mathbf{H}_I(t,z+i2t)=\mathbf{H}_I(t,z)-2i\) for any \(z\in \mathbb {C}{\setminus }\!\{\text{ poles }\}\).
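
The principal-value sums above can be evaluated numerically by truncating them symmetrically. The following is a minimal sketch (Python/NumPy), meant only as an illustration, which evaluates \(\mathbf{H}\) and \(\mathbf{H}_I\) in this way and checks the relation \(\mathbf{H}_I(t,z)=\mathbf{H}(t,z+it)+i\) and the expansion (2.2); the truncation level \(M\) and the sample points are arbitrary choices.

```python
import numpy as np

def H(t, z, M=50):
    """Symmetric truncation of the principal-value sum defining H(t,z) (even n = 2k, |k| <= M)."""
    q = np.exp(2 * np.arange(-M, M + 1) * t)
    w = np.exp(1j * z)
    return np.sum(-1j * (q + w) / (q - w))

def H_I(t, z, M=50):
    """Symmetric truncation of the sum (2.4) for H_I(t,z) (odd n, |n| <= 2M - 1)."""
    n = np.arange(1, 2 * M, 2)
    q = np.exp(np.concatenate([n, -n]) * t)
    w = np.exp(1j * z)
    return np.sum(-1j * (q + w) / (q - w))

t = 1.3
print(abs(H_I(t, 0.7 + 0.2j) - (H(t, 0.7 + 0.2j + 1j * t) + 1j)))   # ~ 0

# expansion (2.2): (H(t, eps) - 2/eps)/eps should be close to r(t)
r = np.sum(np.sinh(np.arange(1, 200) * t) ** (-2.0)) - 1 / 6
eps = 1e-3
print(np.real((H(t, eps) - 2 / eps) / eps), r)   # nearly equal
```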

It is possible to express \(\mathbf{H}\) and \(\mathbf{H}_I\) using classical functions. Let \(\theta (\nu ,\tau )\) and \(\theta _k(\nu ,\tau )\), \(k=1,2,3\), be the Jacobi theta functions defined in Chapter V, Section 3 of [24]. Define \(\Theta (t,z)=\theta \left( \frac{z}{2\pi },\frac{it}{\pi }\right) \) and \(\Theta _I(t,z)=\theta _2\left( \frac{z}{2\pi },\frac{it}{\pi }\right) \). Then \(\Theta _I\) has period \(2\pi \), \(\Theta \) has antiperiod \(2\pi \), and

$$\begin{aligned} \mathbf{H}=2\,\frac{\Theta '}{\Theta },\quad \mathbf{H}_I=2\,\frac{\Theta _I'}{\Theta _I}. \end{aligned}$$
(2.5)

These follow from the product representations of \(\Theta \) and \(\Theta _I\). For example,

$$\begin{aligned} \Theta _I(t,z)=\prod _{m=1}^\infty (1-e^{-2mt})\left( 1-e^{-(2m-1)t}e^{iz}\right) \left( 1-e^{-(2m-1)t}e^{-iz}\right) . \end{aligned}$$
(2.6)
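
The identity (2.5) can also be illustrated numerically from the product (2.6). The sketch below, an illustration only, compares \(2\Theta _I'/\Theta _I\), with the \(z\)-derivative taken by a central difference, against the series (2.4) for \(\mathbf{H}_I\); the truncation levels, the increment \(h\), and the sample point are arbitrary choices.

```python
import numpy as np

def Theta_I(t, z, M=60):
    """Truncated product (2.6) for Theta_I(t, z)."""
    m = np.arange(1, M + 1)
    q = np.exp(-(2 * m - 1) * t)
    return np.prod(1 - np.exp(-2 * m * t)) * np.prod((1 - q * np.exp(1j * z)) * (1 - q * np.exp(-1j * z)))

def H_I(t, z, M=60):
    """Symmetric truncation of the principal-value sum (2.4) for H_I(t, z)."""
    n = np.arange(1, 2 * M, 2)
    q = np.exp(np.concatenate([n, -n]) * t)
    w = np.exp(1j * z)
    return np.sum(-1j * (q + w) / (q - w))

t, z, h = 0.8, 0.4 + 0.3j, 1e-6
lhs = H_I(t, z)
rhs = 2 * (Theta_I(t, z + h) - Theta_I(t, z - h)) / (2 * h * Theta_I(t, z))
print(abs(lhs - rhs))   # small (finite-difference error only)
```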

Both \(\Theta \) and \(\Theta _I\) solve the heat equation

$$\begin{aligned} \partial _t\Theta =\Theta '',\quad \partial _t\Theta _I=\Theta _I''. \end{aligned}$$
(2.7)

So \(\mathbf{H}\) and \(\mathbf{H}_I\) solve the PDE:

$$\begin{aligned} \partial _t\mathbf{H}=\mathbf{H}''+\mathbf{H}'\mathbf{H},\quad \partial _t\mathbf{H}_I=\mathbf{H}_I''+\mathbf{H}_I'\mathbf{H}_I. \end{aligned}$$
(2.8)

We rescale the functions \(\mathbf{H}\) and \(\mathbf{H}_I\) as follows. For \(t>0\) and \(z\in \mathbb {C}\), let

$$\begin{aligned} \widehat{\mathbf{H}}(t,z)=\frac{\pi }{t}\mathbf{H}\left( \frac{\pi ^2}{t},\frac{\pi }{t} z\right) +\frac{z}{t},\quad \widehat{\mathbf{H}}_I(t,z)=\frac{\pi }{t}\mathbf{H}_I\left( \frac{\pi ^2}{t},\frac{\pi }{t} z\right) +\frac{z}{t}.\quad \quad \end{aligned}$$
(2.9)

Since \(\mathbf{H}\) and \(\mathbf{H}_I\) have period \(2\pi \), it follows from (2.9) that

$$\begin{aligned} \widehat{\mathbf{H}}(t,z+2k t)=\widehat{\mathbf{H}}(t,z)+2k,\quad \widehat{\mathbf{H}}_I(t,z+2k t)=\widehat{\mathbf{H}}_I(t,z)+2k,\quad k\in \mathbb {Z}.\quad \quad \end{aligned}$$
(2.10)

From the identities for \(\theta \) in [24] or formula (3) in [25], we see \(\mathbf{H}(t,z)=i\frac{\pi }{t}\mathbf{H}\left( \frac{\pi ^2}{t},i\frac{\pi }{t} z\right) -\frac{z}{t}\). So

$$\begin{aligned} \widehat{\mathbf{H}}(t,z)=-i\mathbf{H}(t,-iz)={{\mathrm{P.V.}}}\sum _{2\mid n}\coth _2(z-nt). \end{aligned}$$
(2.11)

Since \(\mathbf{H}_I(t,z)=\mathbf{H}(t,z+it)+i\),

$$\begin{aligned} \widehat{\mathbf{H}}_I(t,z)=\widehat{\mathbf{H}}(t,z+\pi i) ={{\mathrm{P.V.}}}\sum _{2\mid n}\tanh _2(z-nt). \end{aligned}$$
(2.12)

From (2.8) and (2.9) we may check that

$$\begin{aligned} -\partial _t\widehat{\mathbf{H}}=\widehat{\mathbf{H}}''+\widehat{\mathbf{H}}'\widehat{\mathbf{H}},\quad -\partial _t\widehat{\mathbf{H}}_I=\widehat{\mathbf{H}}_I''+\widehat{\mathbf{H}}_I'\widehat{\mathbf{H}}_I. \end{aligned}$$
(2.13)

From (2.11) and (2.12) we see that \(\widehat{\mathbf{H}}(t,\cdot )\rightarrow \coth _2\) and \(\widehat{\mathbf{H}}_I(t,\cdot )\rightarrow \tanh _2\) as \(t\rightarrow \infty \).
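
The identity (2.11) can be tested numerically as well. The sketch below, an illustration only, compares the definition (2.9) of \(\widehat{\mathbf{H}}\) with the expression \(-i\mathbf{H}(t,-iz)\); the truncation level and the sample point are arbitrary choices.

```python
import numpy as np

def H(t, z, M=20):
    """Symmetric truncation of the principal-value sum defining H(t, z)."""
    q = np.exp(2 * np.arange(-M, M + 1) * t)
    w = np.exp(1j * z)
    return np.sum(-1j * (q + w) / (q - w))

t, z = 1.0, 0.3 + 0.2j
lhs = (np.pi / t) * H(np.pi ** 2 / t, (np.pi / t) * z) + z / t   # definition (2.9)
rhs = -1j * H(t, -1j * z)                                        # identity (2.11)
print(abs(lhs - rhs))   # ~ 0
```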

From (2.4) we see that, as \(t\rightarrow \infty \), \(\mathbf{H}_I(t,z)\rightarrow 0\), so its partial derivatives with respect to \(z\) also tend to \(0\). The following lemma gives quantitative estimates on the rate of these convergences.

Lemma 2.1

If \(|{{\mathrm{Im }}}z|<t\), then

$$\begin{aligned} |\mathbf{H}_I(t,z)|\le \frac{4e^{|{{\mathrm{Im }}}z|-t}}{(1-e^{|{{\mathrm{Im }}}z|-t})^2(1-e^{2(|{{\mathrm{Im }}}z|-t)})}. \end{aligned}$$
(2.14)

If \(t\ge |{{\mathrm{Im }}}z|+2\), then \(|\mathbf{H}_I(t,z)|< 5.5 e^{|{{\mathrm{Im }}}z|-t}\). For any \(h\in \mathbb {N}\), if \(t\ge |{{\mathrm{Im }}}z|+h+2\), then \(|\mathbf{H}_I^{(h)}(t,z)|< 15\sqrt{h} e^{|{{\mathrm{Im }}}z|-t}\).

Proof

From (2.4), if \(|{{\mathrm{Im }}}z|<t\), then

$$\begin{aligned} |\mathbf{H}_I(t,z)|&= \left| \sum _{k=0}^{\infty }\left( \frac{e^{(2k+1)t}+e^{iz}}{e^{(2k+1)t}-e^{iz}} +\frac{e^{-(2k+1)t}+e^{iz}}{e^{-(2k+1)t}-e^{iz}}\right) \right| \nonumber \\&= \left| \sum _{k=0}^\infty \frac{2\sin (z)}{\cosh ((2k+1)t)-\cos (z)}\right| \nonumber \\&\le \sum _{k=0}^\infty \frac{2e^{|{{\mathrm{Im }}}z|}}{\cosh ((2k+1)t)-\cosh (|{{\mathrm{Im }}}z|)}. \end{aligned}$$
(2.15)

Here we use the facts that \(|\sin (z)| \le e^{|{{\mathrm{Im }}}z|}\) and \(|\cos (z)|\le \cosh (|{{\mathrm{Im }}}z|)<\cosh (t)\). Let \(h_0=t-|{{\mathrm{Im }}}z|>0\). Then for \(k\ge 0\),

$$\begin{aligned}&\cosh ((2k+1)t)-\cosh (|{{\mathrm{Im }}}z|)\\&\quad = 2\sinh _2((2k+1)t+|{{\mathrm{Im }}}z|)\sinh _2((2k+1)t-|{{\mathrm{Im }}}z|)\\&\quad =\frac{1}{2} e^{((2k+1)t+|{{\mathrm{Im }}}z|)/2}(1-e^{-(2k+1)t-|{{\mathrm{Im }}}z|})e^{((2k+1)t-|{{\mathrm{Im }}}z|)/2}(1-e^{-(2k+1)t+|{{\mathrm{Im }}}z|})\\&\quad \ge \frac{1}{2} e^{((2k+1)t+|{{\mathrm{Im }}}z|)/2}e^{((2k+1)t-|{{\mathrm{Im }}}z|)/2}\left( 1-e^{-h_0}\right) ^2=\frac{1}{2} e^{(2k+1)t}\left( 1-e^{-h_0}\right) ^2. \end{aligned}$$

So the RHS of (2.15) is not bigger than

$$\begin{aligned} \sum _{k=0}^\infty \frac{4e^{|{{\mathrm{Im }}}z|}e^{-(2k+1)t}}{(1-e^{-h_0})^2}=\frac{4e^{|{{\mathrm{Im }}}z|-t}}{(1-e^{-h_0})^2(1-e^{-2t})}\le \frac{4e^{-h_0}}{(1-e^{-h_0})^2(1-e^{-2h_0})}. \end{aligned}$$

So we proved (2.14).

If \(t\ge |{{\mathrm{Im }}}z|+2\), then \(4/({(1-e^{|{{\mathrm{Im }}}z|-t})^2(1-e^{2(|{{\mathrm{Im }}}z|-t)})})\le 4/((1-e^{-2})^2(1-e^{-4}))<5.5\). From (2.14) we have \(|\mathbf{H}_I(t,z)|< 5.5 e^{|{{\mathrm{Im }}}z|-t}\). Now we assume \(h\in \mathbb {N}\) and \(t\ge |{{\mathrm{Im }}}z|+h+2\). Then for any \(w\in \mathbb {C}\) with \(|w-z|=h\), we have \(t\ge |{{\mathrm{Im }}}w|+2\), so \(|\mathbf{H}_I(t,w)|< 5.5 e^{|{{\mathrm{Im }}}w|-t}\le 5.5 e^he^{|{{\mathrm{Im }}}z|-t}\). From Cauchy’s integral formula and Stirling’s formula, we have

$$\begin{aligned} |\mathbf{H}_I^{(h)}(t,z)|\le 5.5\frac{h!e^h}{h^h} e^{|{{\mathrm{Im }}}z|-t}\le 5.5\sqrt{2\pi h}e^{1/(12h)} e^{|{{\mathrm{Im }}}z|-t}<15\sqrt{h} e^{|{{\mathrm{Im }}}z|-t}. \end{aligned}$$

\(\square \)
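
As a quick numerical sanity check, not part of the argument, the bound \(|\mathbf{H}_I(t,z)|< 5.5 e^{|{{\mathrm{Im }}}z|-t}\) for \(t\ge |{{\mathrm{Im }}}z|+2\) can be tested at random sample points with a truncated version of (2.4); the number of samples and the truncation level below are arbitrary choices.

```python
import numpy as np

def H_I(t, z, M=20):
    """Symmetric truncation of the principal-value sum (2.4) for H_I(t, z)."""
    n = np.arange(1, 2 * M, 2)
    q = np.exp(np.concatenate([n, -n]) * t)
    w = np.exp(1j * z)
    return np.sum(-1j * (q + w) / (q - w))

rng = np.random.default_rng(0)
for _ in range(10):
    y = rng.uniform(-2, 2)                 # Im z
    t = abs(y) + 2 + rng.uniform(0, 3)     # ensures t >= |Im z| + 2
    z = rng.uniform(0, 2 * np.pi) + 1j * y
    assert abs(H_I(t, z)) < 5.5 * np.exp(abs(y) - t)
print("the bound of Lemma 2.1 holds at all sampled points")
```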

3 Loewner equations

3.1 Whole-plane Loewner equation

To motivate the definition of the whole-plane Loewner equation, let’s start with the well-known radial Loewner equation with reflection about the unit circle \(\mathbb {T}\). Let \(T\in (0,\infty ]\). Let \(\beta _I:[0,T)\rightarrow \mathbb {C}\) be a simple curve with \(\beta _I(0)\in \mathbb {T}\) and \(\beta _I(t)\in \mathbb {C}{\setminus }\overline{\mathbb {D}}\) for \(t\in (0,T)\). Let \(K_I(t)=\overline{\mathbb {D}}\cup \beta _I((0,t])\), \(0\le t<T\). Then each \(K_I(t)\) is a hull in \(\mathbb {C}\), and the capacity increases continuously in \(t\). After a time-change, we may assume that \({{\mathrm{cap}}}(K_I(t))=e^t\), \(0\le t<T\). Let \(g_I(t,\cdot )\) be the unique conformal map from \(\mathbb {C}{\setminus }K_I(t)\) onto \(\mathbb {C}{\setminus }\overline{\mathbb {D}}\) with normalization \(\lim _{z\rightarrow \infty } z/g_I(t,z)=e^t\). It turns out that there is \(\xi \in C([0,T))\) such that \(g_I(t,\cdot )\) satisfies the radial Loewner equation

$$\begin{aligned} \partial _t g_I(t,z)= g_I(t,z)\,\frac{e^{i\xi (t)}+ g_I(t,z)}{e^{i\xi (t)}- g_I(t,z)} \end{aligned}$$
(3.1)

with initial value \(g_I(0,z)=z\). In fact, each \(g_I(t,\cdot )^{-1}\) extends continuously to \(\mathbb {T}\), and maps \(e^{i\xi (t)}\) to \(\beta _I(t)\), and the function \(\xi \) is determined by \(\beta _I\) up to an integer multiple of \(2\pi \).

Let \(a\in \mathbb {R}\) and \(T\in (a,\infty ]\). Now suppose a simple curve \(\beta _I:[a,T)\rightarrow \mathbb {C}\) satisfies \(\beta _I(a)\in e^a\mathbb {T}\) and \(\beta _I(t)\in \mathbb {C}{\setminus }e^a\overline{\mathbb {D}}\) for \(t\in (a,T)\). Let \(K_I(t)=e^a\overline{\mathbb {D}}\cup \beta _I((a,t])\), \(a\le t<T\). Assume that \({{\mathrm{cap}}}(K_I(t))=e^t\), \(a\le t<T\). Then the mappings \(g_I(t,\cdot )\) determined by \(K_I(t)\) also satisfy (3.1) for some \(\xi \in C([a,T))\), and the initial value now is \(g_I(a,z)=e^{-a}z\). Letting \(a\) tend to \(-\infty \), the initial point of \(\beta _I\) approaches \(0\). So let’s consider a simple curve \(\beta _I:[-\infty ,T)\rightarrow \mathbb {C}\) with \(\beta _I(-\infty )=0\). Let \(K_I(t)=\beta _I([-\infty ,t])\), \(-\infty <t< T\). Assume that \({{\mathrm{cap}}}(K_I(t))=e^t\), \(-\infty < t<T\). Then the mappings \(g_I(t,\cdot )\) determined by \(K_I(t)\) still satisfy (3.1) for some \(\xi \in C((-\infty ,T))\), and they have an asymptotic initial value at \(t=-\infty \):

$$\begin{aligned} \lim _{t\rightarrow -\infty } e^t g_I(t,z)=z,\quad z\in \mathbb {C}{\setminus }\{0\}. \end{aligned}$$
(3.2)

For this reason, we also call (3.1) the whole-plane Loewner equation.

We now reverse the above process. Let \(T\in (-\infty ,\infty ]\) and \(\xi \in C((-\infty ,T))\). Let \(g_I(t,\cdot )\), \(-\infty <t<T\), be the solution of the whole-plane Loewner equation (3.1) with the asymptotic initial value (3.2). Note that for each fixed \(z\), (3.1) is an ODE in \(t\). For each \(t\in (-\infty ,T)\), let \(K_I(t)\) be the set of \(z\in \mathbb {C}\) at which \(g_I(t,\cdot )\) is not defined. Then \(K_I(t)\) and \(g_I(t,\cdot )\), \(-\infty <t<T\), are called the whole-plane Loewner hulls and maps driven by \(\xi \).

Remark

Since the asymptotic initial value is used, the existence and uniqueness of the solution are not trivial. From Proposition 4.21 in [8] we know that \(K_I(t)\) and \(g_I(t,\cdot )\) exist and are determined by \(e^{i\xi (s)}\), \(-\infty <s\le t\). Moreover, each \(g_I(t,\cdot )\) maps \(\widehat{\mathbb {C}}{\setminus }K_I(t)\) conformally onto \(\widehat{\mathbb {C}}{\setminus }\overline{\mathbb {D}}\) and fixes \(\infty \), and \(g_I(t,z)=e^{-t}z+O(1)\) near \(\infty \). So each \(K_I(t)\) is a hull in \(\mathbb {C}\) with \({{\mathrm{cap}}}(K_I(t))=e^t\). The whole-plane Loewner equation can be viewed as a mapping which takes the driving function \(\xi \) to a family of hulls \((K_I(t))\) or conformal maps \((g_I(t,\cdot ))\). The family \((K_I(t))\) increases in \(t\), but the hulls need not be generated by simple curves.
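
Numerically, the asymptotic initial value (3.2) can be handled by the approximation described above: start at a very negative time \(a\) with \(g_I(a,z)=e^{-a}z\) and integrate (3.1) forward. The following is a rough sketch (Python), an illustration only; the starting time, the step size, and the crude blow-up test are arbitrary choices.

```python
import numpy as np

def whole_plane_map(xi, z, a=-10.0, T=0.0, dt=1e-3):
    """Approximate the whole-plane Loewner map g_I(T, z) driven by xi by starting
    from g_I(a, z) = exp(-a)*z at a very negative time a (cf. (3.2)) and applying
    forward Euler to (3.1). Returns nan if z appears to be swallowed by the hull."""
    g, t = np.exp(-a) * complex(z), a
    while t < T:
        u = np.exp(1j * xi(t))
        if abs(g - u) < 10 * dt:      # crude blow-up test: z has entered K_I(t)
            return np.nan
        g = g + dt * g * (u + g) / (u - g)
        t += dt
    return g

# example with the constant driving function xi = 0; by symmetry the hull is a real interval
print(whole_plane_map(lambda t: 0.0, 2.0 + 1.0j))
```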

We say that \(\xi \) generates a whole-plane Loewner trace \(\beta _I\) if

$$\begin{aligned} \beta _I(t)\,{:=}\lim _{|z|>1,z\rightarrow e^{i\xi (t)}} g_I(t,\cdot )^{-1}(z) \end{aligned}$$

exists for \(t\in (-\infty ,T)\), and if \(\beta _I(t)\), \(-\infty \le t<T\), is a continuous curve in \(\mathbb {C}\). Such a trace, if it exists, starts from \(0\), i.e., \(\beta _I(-\infty )\,{:=}\,\lim _{t\rightarrow -\infty } \beta _I(t)=0\). The trace is called simple if \(\beta _I(t)\), \(-\infty \le t<T\), has no self-intersection. If \(\xi \) generates a whole-plane Loewner trace \(\beta _I\), then for each \(t\), \(\mathbb {C}{\setminus }K_I(t)\) is the unbounded component of \(\mathbb {C}{\setminus }\beta _I([-\infty ,t])\). In particular, if \(\beta _I\) is simple, then \(K_I(t)=\beta _I([-\infty ,t])\) for each \(t\), and we recover an earlier picture.

Let \(\kappa >0\). A pre-\((\mathbb {T};\kappa )\)-Brownian motion a.s. generates a whole-plane Loewner trace, which is called a standard whole-plane SLE\(_\kappa \) trace. The trace goes from \(0\) to \(\infty \), i.e., \(\lim _{t\rightarrow \infty }\beta _I(t)=\infty \). If \(\kappa \in (0,4]\), the trace is simple. If the driving function is the sum of a pre-\((\mathbb {T};\kappa )\)-Brownian motion and \(s_0t\) for some constant \(s_0\in \mathbb {R}\), then we also get a whole-plane Loewner trace, which is called a standard whole-plane SLE\((\kappa ,s_0)\) trace. The trace still goes from \(0\) to \(\infty \) as \(t\rightarrow \infty \), and is simple when \(\kappa \le 4\). For any \(z_1\ne z_2\in \widehat{\mathbb {C}}\), we may define whole-plane SLE\(_\kappa \) and SLE\((\kappa ,s_0)\) traces from \(z_1\) to \(z_2\) via Möbius transformations.

Remark

Whole-plane SLE\(_\kappa \) is related to radial SLE in the way that, if \(T\in \mathbb {R}\) is fixed, then conditioned on \(K_I(t)\), \(-\infty <t\le T\), the curve \(\beta _I(T+t)\), \(t\ge 0\), is the radial SLE\(_\kappa \) trace in \(\widehat{\mathbb {C}}{\setminus }K_I(T)\) from \(\beta _I(T)\) to \(\infty \). Whole-plane SLE\((\kappa ,s_0)\) is related to radial SLE\((\kappa ,-s_0)\) (the radial Loewner process driven by \(\sqrt{\kappa }B(t)-s_0t\)) in a similar way. The additional negative sign is due to the reflection about \(\mathbb {T}\).

We will need the following inverted whole-plane Loewner process, which grows from \(\infty \). For \(-\infty <t<T\), let \(K(t)=I_0(K_I(t))\) and \(g(t,\cdot )=I_0\circ g_{I}(t,\cdot )\circ I_0\). Then for each \(t\), \(g(t,\cdot )\) maps \(\widehat{\mathbb {C}}{\setminus }K(t)\) conformally onto \(\mathbb {D}\) and fixes \(0\). Moreover, \(g(t,\cdot )\) satisfies (3.1) with some initial value at \(-\infty \). We call \(K(t)\) and \(g(t,\cdot )\) the inverted whole-plane Loewner hulls and maps driven by \(\xi \). If \(\xi \) generates a whole-plane Loewner trace \(\beta _I\), then \(\beta (t)\,{:=}\,I_0\circ \beta _I(t)\) is a continuous curve in \(\widehat{\mathbb {C}}\) that satisfies \(\beta (-\infty )=\infty \) and \(\beta (t)=\lim _{\mathbb {D}\ni z\rightarrow e^{i\xi (t)}} g(t,\cdot )^{-1}(z)\), \(-\infty <t<T\). We call \(\beta \) the inverted whole-plane Loewner trace driven by \(\xi \).

Let \(K_I(t)\) and \(g_I(t,\cdot )\), \(-\infty <t<T\), be as before. Let \(\widetilde{K}_I(t)=(e^i)^{-1}(K_I(t))\), \(-\infty <t<T\). It is easy to see that there exists a unique family \(\widetilde{g}_I(t,\cdot )\), \(-\infty <t<T\), such that, \(\widetilde{g}_I(t,\cdot )\) maps \(\mathbb {C}{\setminus }\widetilde{K}_I(t)\) conformally onto \(-\mathbb {H}\), \(e^i\circ \widetilde{g}_I(t,\cdot )=g_I(t,\cdot )\circ e^i\), and \(\widetilde{g}_I\) satisfies

$$\begin{aligned} \partial _t{\widetilde{g}_I}(t,z)=\cot _2(\widetilde{g}_I(t,z)-\xi (t)),\quad -\infty <t<T, \end{aligned}$$
(3.3)

and the initial value at \(-\infty \):

$$\begin{aligned} \lim _{t\rightarrow -\infty } (\widetilde{g}_I(t,z)-it)=z. \end{aligned}$$

Then we call \(\widetilde{K}_I(t)\) and \(\widetilde{g}_I(t,\cdot )\) the covering whole-plane Loewner hulls and maps driven by \(\xi \).

For \(-\infty <t<T\), let \(\widetilde{K}(t)=\widetilde{I}_0(\widetilde{K}_I(t))\) and \(\widetilde{g}(t,\cdot )=\widetilde{I}_0\circ \widetilde{g}_I(t,\cdot )\circ \widetilde{I}_0\). Then \(\widetilde{K}(t)=(e^i)^{-1}(K(t))\) and \(e^i\circ \widetilde{g}(t,\cdot )=g(t,\cdot )\circ e^i\). We call \(\widetilde{K}(t)\) and \(\widetilde{g}(t,\cdot )\) the inverted covering whole-plane Loewner hulls and maps driven by \(\xi \). Then for each \(t\in (-\infty ,T)\), \(\widetilde{g}(t,\cdot )\) maps \(\mathbb {C}{\setminus }\widetilde{K}(t)\) conformally onto \(\mathbb {H}\), and satisfies (3.3) for \(t\in (-\infty ,T)\) and the initial value at \(-\infty \):

$$\begin{aligned} \lim _{t\rightarrow -\infty } (\widetilde{g}(t,z)+it)=z. \end{aligned}$$
(3.4)

3.2 Annulus Loewner equation

The annulus Loewner equation was introduced in [17] to describe curves in a doubly connected domain. Let \(p\in (0,\infty )\). To motivate the definition, we consider a simple curve \(\beta (t)\), \(0\le t<T\), with \(\beta (0)\in \mathbb {T}\) and \(\beta (t)\in \mathbb {A}_p\), \(0<t<T\). Let \(K(t)=\beta ((0,t])\), \(0\le t<T\). Then each \(K(t)\) is a hull in \(\mathbb {A}_p\), and \({{\mathrm{cap}}}_{\mathbb {A}_p}(K(t))\) increases continuously. After a time-change, we may assume that \({{\mathrm{cap}}}_{\mathbb {A}_p}(K(t))=t\) for all \(t\). For each \(t\), there exists \(g(t,\cdot )\), which maps \(\mathbb {A}_p{\setminus }K(t)\) conformally onto \(\mathbb {A}_{p-t}\), and maps \(\mathbb {T}_p\) onto \(\mathbb {T}_{p-t}\). Such \(g(t,\cdot )\) is unique only up to a rotation. There are different ways to make \(g(t,\cdot )\) unique. For example, we may fix a point \(w_0\in \mathbb {T}_p\) and require that \(e^{-t} g(t,\cdot )\) fixes \(w_0\). The normalization used here does not have a clear geometric meaning. The work in [17] shows that the \(g(t,\cdot )\) can be chosen to satisfy the annulus Loewner equation of modulus \(p\) for some \(\xi \in C([0,T))\):

$$\begin{aligned} \partial _t g(t,z)= g(t,z) \mathbf{S}(p-t, g(t,z)/e^{i\xi (t)}),\quad 0\le t<T,\quad g(0,z)=z. \end{aligned}$$
(3.5)

We now reverse the above process. Let \(\xi \in C([0,T))\), where \(0<T\le p\). Let \(g(t,\cdot )\) be the solution of the ODE (3.5). For \(0\le t<T\), let \(K(t)\) denote the set of \(z\in \mathbb {A}_p\) such that the solution \( g(s,z)\) blows up before or at time \(t\). We call \(K(t)\) and \( g(t,\cdot )\), \(0\le t<T\), the annulus Loewner hulls and maps of modulus \(p\) driven by \(\xi \). It turns out that, for each \(t\), \(K(t)\) is a hull in \(\mathbb {A}_p\) with \({{\mathrm{cap}}}_{\mathbb {A}_p}(K(t))=t\), and \(g(t,\cdot )\) maps \(\mathbb {A}_p{\setminus }K(t)\) conformally onto \(\mathbb {A}_{p-t}\) and maps \(\mathbb {T}_p\) onto \(\mathbb {T}_{p-t}\). To see that \(g(t,\cdot )\) maps \(\mathbb {T}_p\) onto \(\mathbb {T}_{p-t}\), one may note that (3.5) implies that \(\partial _t \ln |g(t,z)|={{\mathrm{Re }}}\mathbf{S}(p-t, g(t,z)/e^{i\xi (t)})\), and \({{\mathrm{Re }}}\mathbf{S}(r,\cdot )\equiv 1\) on \(\mathbb {T}_{r}\) because \({{\mathrm{Im }}}\mathbf{H}(r,\cdot )\equiv -1\) on \(\mathbb {R}_{r}\) and \(\mathbf{H}(r,z)=-i\mathbf{S}(r,e^i(z))\).
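
For completeness, here is an analogous numerical sketch (an illustration only) for the annulus Loewner equation (3.5): \(\mathbf{S}\) is evaluated by a symmetric truncation of its defining sum, and (3.5) is integrated by forward Euler. The truncation level, step size, and blow-up test are arbitrary choices; an annulus SLE\(_\kappa \) would use a sampled \(\sqrt{\kappa }B(t)\) as driving function (see the remark below).

```python
import numpy as np

def S(t, z, M=50):
    """Symmetric truncation of the principal-value sum defining S(t, z)."""
    q = np.exp(2 * np.arange(-M, M + 1) * t)
    return np.sum((q + z) / (q - z))

def annulus_map(xi, p, T, z, dt=1e-4):
    """Forward Euler for the annulus Loewner equation (3.5) of modulus p, driven by
    xi on [0, T] with T < p; returns g(T, z), or nan if z is swallowed by the hull."""
    g, t = complex(z), 0.0
    while t < T:
        u = np.exp(1j * xi(t))
        if abs(g - u) < 10 * dt:      # crude blow-up test
            return np.nan
        g = g + dt * g * S(p - t, g / u)
        t += dt
    return g

print(annulus_map(lambda t: 0.0, p=1.0, T=0.5, z=0.5 * np.exp(0.3j)))
```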

We say that \(\xi \) generates an annulus Loewner trace \(\beta \) of modulus \(p\) if

$$\begin{aligned} \beta (t)\,{:=}\lim _{\mathbb {A}_{p-t}\ni z\rightarrow e^{i\xi (t)}} g(t,\cdot )^{-1}(z) \end{aligned}$$
(3.6)

exists for all \(t\in [0,T)\), and if \(\beta (t)\), \(0\le t<T\), is a continuous curve. The curve lies in \(\mathbb {A}_p\cup \mathbb {T}\) and starts from \(e^{i\xi (0)}\in \mathbb {T}\). The trace is called simple if \(\beta \) has no self-intersection and stays inside \(\mathbb {A}_p\) for \(t>0\). In that case, we have \(K(t)=\beta ((0,t])\) for each \(t\), and recover the picture at the beginning of this subsection.

Remark

  1. If \(\xi \) generates an annulus Loewner trace \(\beta \), then for each \(t\), \(\mathbb {A}_p{\setminus }K(t)\) is the component of \(\mathbb {A}_p{\setminus }\beta ((0,t])\) whose boundary contains \(\mathbb {T}_p\). If the trace is simple, then \(K(t)=\beta ((0,t])\) for each \(t\).

  2. Let \(\beta (t)\), \(0\le t<T\), be a simple curve with \(\beta (0)\in \mathbb {T}\) and \(\beta (t)\in \mathbb {A}_p\) for \(0<t<T\). If it is parameterized by capacity in \(\mathbb {A}_p\), i.e., \({{\mathrm{cap}}}_{\mathbb {A}_p}(\beta ((0,t]))=t\) for each \(t\), then it is an annulus Loewner trace of modulus \(p\). In the general case, let \(u(t)={{\mathrm{cap}}}_{\mathbb {A}_p}(\beta ((0,t]))\). Then \(\beta (u^{-1}(t))\) is an annulus Loewner trace of modulus \(p\).

  3. If \(\xi (t)=\sqrt{\kappa }B(t)\), \(0\le t<p\), then a.s. \(\xi \) generates an annulus Loewner trace. If \(\kappa \in (0,4]\), the trace is simple. By Girsanov's theorem, the above still holds if \(\xi \) is a semimartingale whose stochastic part is \(\sqrt{\kappa }B(t)\) and whose drift part is a continuously differentiable function.

The covering annulus Loewner equation of modulus \(p\) driven by the above \(\xi \) is

$$\begin{aligned} \partial _t{\widetilde{g}}(t,z)=\mathbf{H}(p-t,\widetilde{g}(t,z)-\xi (t)),\quad \widetilde{g}(0,z)=z. \end{aligned}$$
(3.7)

For \(0\le t<T\), let \(\widetilde{K}(t)\) denote the set of \(z\in \mathbb {S}_p\) such that the solution \(\widetilde{g}(s,z)\) blows up before or at time \(t\). Then for \(0\le t<T\), \(\widetilde{g}(t,\cdot )\) maps \(\mathbb {S}_p{\setminus }\widetilde{K}(t)\) conformally onto \(\mathbb {S}_{p-t}\) and maps \(\mathbb {R}_p\) onto \(\mathbb {R}_{p-t}\). We call \(\widetilde{K}(t)\) and \(\widetilde{g}(t,\cdot )\), \(0\le t<T\), the covering annulus Loewner hulls and maps of modulus \(p\) driven by \(\xi \). Let \(K(t)\) and \( g(t,\cdot )\) be as defined above. Then we have \(\widetilde{K}(t)=(e^i)^{-1}(K(t))\) and \(e^i\circ \widetilde{g}(t,\cdot ) = g(t,\cdot )\circ e^i\) for \(0\le t<T\).

Since \(\widetilde{g}(t,\cdot )\) maps \(\mathbb {R}_p\) onto \(\mathbb {R}_{p-t}\), we have \(\widetilde{g}(t,z)={{\mathrm{Re }}}\widetilde{g}(t,z)+(p-t)i\) for \(z\in \mathbb {R}_p\); since \(\mathbf{H}_I(t,z)=\mathbf{H}(t,z+it)+i\), taking real parts in (3.7) then gives

$$\begin{aligned} \partial _t {{\mathrm{Re }}}\widetilde{g}(t,z)=\mathbf{H}_I(p-t,{{\mathrm{Re }}}\widetilde{g}(t,z)-\xi (t)),\quad z\in \mathbb {R}_p. \end{aligned}$$

Differentiating the above formula w.r.t. \(z\), we obtain

$$\begin{aligned} \partial _t {\widetilde{g}}'(t,z)=\widetilde{g}'(t,z)\mathbf{H}_I'(p-t,{{\mathrm{Re }}}\widetilde{g}(t,z)-\xi (t)),\quad z\in \mathbb {R}_p. \end{aligned}$$
(3.8)

If \(\xi \) generates an annulus Loewner trace \(\beta \) of modulus \(p\), then a.s.

$$\begin{aligned} \widetilde{\beta }(t)\,{:=}\lim _{\mathbb {S}_{p-t}\ni z\rightarrow \xi (t)} \widetilde{g}(t,\cdot )^{-1}(z) \end{aligned}$$

exists for \(0\le t<T\), and \(\widetilde{\beta }(t)\), \(0\le t<T\), is a continuous curve in \(\mathbb {S}_p\cup \mathbb {R}\) started from \(\widetilde{\beta }(0)=\xi (0)\in \mathbb {R}\). Such \(\widetilde{\beta }\) is called the covering annulus Loewner trace of modulus \(p\) driven by \(\xi \). And we have \(\beta =e^i\circ \widetilde{\beta }\). If \(\beta \) is a simple trace, then \(\widetilde{\beta }\) has no self-intersection, stays inside \(\mathbb {S}_p\) for \(t>0\), and \(\widetilde{K}(t)=\widetilde{\beta }((0,t])+2\pi \mathbb {Z}\) for each \(t\).

Let \(K_I(t)=I_p(K(t))\), \(g_I(t,\cdot )=I_{p-t}\circ g(t,\cdot )\circ I_p\), \(\widetilde{K}_I(t)=\widetilde{I}_p(\widetilde{K}(t))\), and \(\widetilde{g}_I(t,\cdot )=\widetilde{I}_{p-t}\circ \widetilde{g}(t,\cdot )\circ \widetilde{I}_p\). Then \(K_I(t)\) is a hull in \(\mathbb {A}_p\) with \({{\mathrm{cap}}}_{\mathbb {A}_p}(K_I(t))=t\), and \( g_I(t,\cdot )\) maps \(\mathbb {A}_p{\setminus }K_I(t)\) conformally onto \(\mathbb {A}_{p-t}\) and maps \(\mathbb {T}\) onto \(\mathbb {T}\). Moreover, \(\widetilde{K}_I(t)=(e^i)^{-1}(K_I(t))\), \(\widetilde{g}_I(t,\cdot )\) maps \(\mathbb {S}_p{\setminus }\widetilde{K}_I(t)\) conformally onto \(\mathbb {S}_{p-t}\), maps \(\mathbb {R}\) onto \(\mathbb {R}\), satisfies \(e^i\circ \widetilde{g}_I(t,\cdot )=g_I(t,\cdot )\circ e^i\), and the equation

$$\begin{aligned} \partial _t{\widetilde{g}}_I(t,z)=\mathbf{H}_I(p-t,\widetilde{g}_I(t,z)-\xi (t)),\quad \widetilde{g}_I(0,z)=z. \end{aligned}$$
(3.9)

We call \(K_I(t)\) and \( g_I(t,\cdot )\) (resp. \(\widetilde{K}_I(t)\) and \(\widetilde{g}_I(t,\cdot )\)) the inverted annulus (resp. inverted covering annulus) Loewner hulls and maps of modulus \(p\) driven by \(\xi \). The inverted hulls grow from the smaller circle \(\mathbb {T}_p\) instead of the unit circle \(\mathbb {T}\).

3.3 Disc Loewner equation

We now review the definition of the disc Loewner equation, which is used to describe a simple curve in a simply connected domain started from an interior point. The relation between the disc Loewner equation and the annulus Loewner equation is similar to that between the whole-plane Loewner equation and the radial Loewner equation. Intuitively, one considers the inverted annulus Loewner equations of modulus \(p\) so that the hulls start from \(\mathbb {T}_p\), and then lets \(p\rightarrow \infty \).

Let \(T\in (-\infty ,0]\) and \(\xi \in C((-\infty ,T))\). Let \(g_I(t,\cdot )\), \(-\infty <t<T\), be the solution of

$$\begin{aligned} \partial _t g_I(t,z)&= g_I(t,z)\mathbf{S}_I\left( -t, g_I(t,z)/e^{i\xi (t)}\right) ,\quad -\infty <t<T;\nonumber \\ \lim _{t\rightarrow -\infty } g_I(t,z)&= z,\quad \forall z\in \overline{\mathbb {D}}{\setminus }\{0\}. \end{aligned}$$
(3.10)

For each \(t\in (-\infty ,T)\), let \(K_I(t)\) be the set of \(z\in \mathbb {D}\) at which \(g_I(t,\cdot )\) is not defined. Then \(K_I(t)\) and \(g_I(t,\cdot )\), \(-\infty <t<T\), are called the disc Loewner hulls and maps driven by \(\xi \).

Remark

From Propositions 4.1 and 4.2 in [17] we know that \(K_I(t)\) and \(g_I(t,\cdot )\) exist and are determined by \(e^{i\xi (s)}\), \(-\infty <s\le t\). Moreover, each \(g_I(t,\cdot )\) maps \(\mathbb {D}{\setminus }K_I(t)\) conformally onto \(\mathbb {A}_{-t}\) and maps \(\mathbb {T}\) onto \(\mathbb {T}\).

We say that \(\xi \) generates a disc Loewner trace \(\beta _I\) if

$$\begin{aligned} \beta _I(t)\,{:=}\lim _{\mathbb {A}_{-t}\ni z\rightarrow e^{t+i\xi (t)}} g_I(t,\cdot )^{-1}(z) \end{aligned}$$

exists for every \(t\in (-\infty ,T)\), and if \(\beta _I(t)\), \(-\infty \le t<T\), is a continuous curve in \(\mathbb {D}\) with \(\beta _I(-\infty )=0\). The trace is called simple if it has no self-intersection. If \(\xi \) generates a disc Loewner trace \(\beta _I\), then for each \(t\), \(\mathbb {C}{\setminus }K_I(t)\) is the unbounded component of \(\mathbb {C}{\setminus }\beta _I([-\infty ,t])\). In particular, if \(\beta _I\) is simple, then \(K_I(t)=\beta _I([-\infty ,t])\) for each \(t\).

Let \(\beta _I(t)\), \(-\infty \le t<T\), be a simple curve in \(\mathbb {D}\) with \(\beta _I(-\infty )=0\). If it is parameterized by capacity in \(\mathbb {D}\), i.e., \({{\mathrm{mod}}}(\mathbb {D}{\setminus }\beta _I([-\infty ,t]))=-t\) for each \(t\), then \(\beta _I\) is a disc Loewner trace. In the general case, let \(u(t)=-{{\mathrm{mod}}}(\mathbb {D}{\setminus }\beta _I([-\infty ,t]))\), then \(\beta _I(u^{-1}(t))\) is a disc Loewner trace.

We will need the following inverted disc Loewner process, which grows from \(\infty \). For \(-\infty <t<T\), let \(K(t)=I_0(K_I(t))\) and \(g(t,\cdot )=I_{-t}\circ g_I(t,\cdot )\circ I_0\). Then each \(g(t,\cdot )\) maps \((\widehat{\mathbb {C}}{\setminus }\overline{\mathbb {D}}){\setminus }K(t)\) conformally onto \(\mathbb {A}_{-t}\) and maps \(\mathbb {T}\) onto \(\mathbb {T}_{-t}\). Moreover, \(g(t,\cdot )\) satisfies (3.10) with \(\mathbf{S}_I\) replaced by \(\mathbf{S}\). We call \(K(t)\) and \(g(t,\cdot )\), \(-\infty <t<T\), the inverted disc Loewner hulls and maps driven by \(\xi \). If \(\xi \) generates a disc Loewner trace \(\beta _I\), then \(\beta {:=}I_0\circ \beta _I\) is called the inverted disc Loewner trace driven by \(\xi \).

The covering disc Loewner hulls and maps are defined as follows. Let \(\widetilde{K}_I(t)=(e^i)^{-1}(K_I(t))\), \(-\infty <t<T\). There is a unique family \(\widetilde{g}_I(t,\cdot )\), \(-\infty <t<T\), such that, for each \(t\), \(\widetilde{g}_I(t,\cdot )\) maps \(\mathbb {H}{\setminus }\widetilde{K}_I(t)\) conformally onto \(\mathbb {S}_{-t}\) and maps \(\mathbb {R}\) onto \(\mathbb {R}\), \(e^i\circ \widetilde{g}_I(t,\cdot )=g_I(t,\cdot )\circ e^i\), and the following hold:

$$\begin{aligned} \partial _t{\widetilde{g}}_I(t,z)&= \mathbf{H}_I(-t,\widetilde{g}_I(t,z)-\xi (t)); \end{aligned}$$
(3.11)
$$\begin{aligned} \lim _{t\rightarrow -\infty } \widetilde{g}_I(t,z)&= z. \end{aligned}$$
(3.12)

We call \(\widetilde{K}_I(t)\) and \(\widetilde{g}_I(t,\cdot )\) the covering disc Loewner hulls and maps driven by \(\xi \). Let \(\widetilde{K}(t)=\widetilde{I}_0(\widetilde{K}_I(t))\) and \(\widetilde{g}(t,\cdot )=\widetilde{I}_{-t}\circ \widetilde{g}_I(t,\cdot )\circ \widetilde{I}_0\). Then \(\widetilde{g}(t,\cdot )\) maps \(-\mathbb {H}{\setminus }\widetilde{K}(t)\) conformally onto \(\mathbb {S}_{-t}\), maps \(\mathbb {R}\) onto \(\mathbb {R}_{-t}\), \(e^i\circ \widetilde{g}(t,\cdot )=g(t,\cdot )\circ e^i\), and satisfies \(\partial _t{\widetilde{g}}(t,z)=\mathbf{H}(-t,\widetilde{g}(t,z)-\xi (t))\). We call \(\widetilde{K}(t)\) and \(\widetilde{g}(t,\cdot )\) the inverted covering disc Loewner hulls and maps driven by \(\xi \).

Remark

Now we summarize the conformal maps that appear in this section so far. The relations between a (inverted) whole-plane, annulus, or disc Loewner map \(g(t,\cdot )\) or \(g_I(t,\cdot )\) and its corresponding covering map \(\widetilde{g}(t,\cdot )\) or \(\widetilde{g}_I(t,\cdot )\) are \(g(t,\cdot )\circ e^i=e^i\circ \widetilde{g}(t,\cdot )\) and \(g_I(t,\cdot )\circ e^i=e^i\circ \widetilde{g}_I(t,\cdot )\). The relation between the inverted pair \(\widetilde{g}(t,\cdot )\) and \(\widetilde{g}_I(t,\cdot )\) depends on the three cases. For the whole-plane Loewner maps,

$$\begin{aligned} \widetilde{g}_I(t,\cdot ):\mathbb {C}{\setminus }\widetilde{K}_I(t)\mathop {\twoheadrightarrow }\limits ^\mathrm{Conf}-\mathbb {H},\quad \widetilde{g}(t,\cdot ):\mathbb {C}{\setminus }\widetilde{K}(t)\mathop {\twoheadrightarrow }\limits ^\mathrm{Conf}\mathbb {H},\quad \widetilde{I}_0\circ \widetilde{g}(t,\cdot )=\widetilde{g}_I(t,\cdot )\circ \widetilde{I}_0. \end{aligned}$$

For the annulus Loewner maps of modulus \(p\),

$$\begin{aligned}&\widetilde{g}(t,\cdot ):(\mathbb {S}_p{\setminus }\widetilde{K}(t);\mathbb {R}_p)\mathop {\twoheadrightarrow }\limits ^\mathrm{Conf}(\mathbb {S}_{p-t};\mathbb {R}_{p-t}),\quad \widetilde{g}_I(t,\cdot ):(\mathbb {S}_p{\setminus }\widetilde{K}_I(t);\mathbb {R})\mathop {\twoheadrightarrow }\limits ^\mathrm{Conf}(\mathbb {S}_{p-t};\mathbb {R}),\\&\widetilde{I}_{p-t}\circ \widetilde{g}_I(t,\cdot )=\widetilde{g}(t,\cdot )\circ \widetilde{I}_p,\quad t\in [0,p). \end{aligned}$$

For the disc Loewner maps,

$$\begin{aligned}&\widetilde{g}_I(t,\cdot ):(\mathbb {H}{\setminus }\widetilde{K}_I(t);\mathbb {R})\mathop {\twoheadrightarrow }\limits ^\mathrm{Conf}(\mathbb {S}_{-t};\mathbb {R}),\quad \widetilde{g}(t,\cdot ):(-\mathbb {H}{\setminus }\widetilde{K}(t);\mathbb {R})\mathop {\twoheadrightarrow }\limits ^\mathrm{Conf}(\mathbb {S}_{-t};\mathbb {R}_{-t}),\\&\quad \widetilde{I}_{-t}\circ \widetilde{g}(t,\cdot )=\widetilde{g}_I(t,\cdot )\circ \widetilde{I}_0,\quad t\in (-\infty ,0). \end{aligned}$$

The relation between \(g(t,\cdot )\) and \(g_I(t,\cdot )\) depends on the three cases in a similar way.

3.4 SLE with marked points

Definition 3.1

A covering crossing annulus drift function is a real valued \(C^{0,1}\) differentiable function defined on \((0,\infty )\times \mathbb {R}\). A covering crossing annulus drift function with period \(2\pi \) is called a crossing annulus drift function.

Definition 3.2

Suppose \(\Lambda \) is a covering crossing annulus drift function. Let \(\kappa > 0\), \(p>0\), and \(x_0,y_0\in \mathbb {R}\). Let \(\xi (t)\), \(0\le t< p\), be the maximal solution to the SDE

$$\begin{aligned} d \xi (t)= \sqrt{\kappa }dB(t)+\Lambda (p-t,\xi (t)-{{\mathrm{Re }}}\widetilde{g}(t,y_0+ pi))dt,\quad \xi (0)=x_0, \end{aligned}$$
(3.13)

where \(\widetilde{g}(t,\cdot )\), \(0\le t<p\), are the covering annulus Loewner maps of modulus \(p\) driven by \(\xi \). Then the covering annulus Loewner trace of modulus \(p\) driven by \(\xi \) is called the covering (crossing) annulus SLE\((\kappa ,\Lambda )\) trace in \(\mathbb {S}_p\) started from \(x_0\) with marked point \(y_0+p i\).

Definition 3.3

Suppose \(\Lambda \) is a crossing annulus drift function. Let \(\kappa > 0\), \(p>0\), \(a\in \mathbb {T}\) and \(b\in \mathbb {T}_p\). Choose \(x_0,y_0\in \mathbb {R}\) such that \(a=e^{ix_0}\) and \(b=e^{-p+iy_0}\). Let \(\xi (t)\), \(0\le t< p\), be the maximal solution to (3.13). The annulus Loewner trace \(\beta \) driven by \(\xi \) is called the (crossing) annulus SLE\((\kappa ,\Lambda )\) trace in \(\mathbb {A}_p\) started from \(a\) with marked point \(b\).

The above definition does not depend on the choices of \(x_0\) and \(y_0\) because \(\Lambda \) has period \(2\pi \), and for any \(n\in \mathbb {Z}\), the annulus Loewner objects driven by \(\xi (t)+2n\pi \) agree with those driven by \(\xi (t)\).
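
The driving function of Definition 3.2 can be generated numerically by an Euler–Maruyama scheme: evolve \(\xi \) by (3.13) and, simultaneously, the real quantity \({{\mathrm{Re }}}\widetilde{g}(t,y_0+pi)\) by the equation displayed before (3.8). The sketch below is an illustration only, using a truncated series for \(\mathbf{H}_I\); the step size, the truncation level, and the trivial example drift are arbitrary choices.

```python
import numpy as np

def H_I(t, x, M=50):
    """Symmetric truncation of the sum (2.4) for H_I(t, x), x real (the value is real)."""
    n = np.arange(1, 2 * M, 2)
    q = np.exp(np.concatenate([n, -n]) * t)
    w = np.exp(1j * x)
    return float(np.real(np.sum(-1j * (q + w) / (q - w))))

def annulus_sle_driver(kappa, Lam, p, x0, y0, T, dt=1e-4, rng=None):
    """Euler-Maruyama sketch for the SDE (3.13): xi is the driving function and
    y tracks Re g~(t, y0 + p*i), which satisfies dy = H_I(p - t, y - xi) dt."""
    rng = np.random.default_rng() if rng is None else rng
    xi, y, t = x0, y0, 0.0
    path = [(t, xi)]
    while t < T:
        drift_xi = Lam(p - t, xi - y)
        drift_y = H_I(p - t, y - xi)
        xi += np.sqrt(kappa) * rng.normal(0.0, np.sqrt(dt)) + drift_xi * dt
        y += drift_y * dt
        t += dt
        path.append((t, xi))
    return np.array(path)

# example: with the trivial drift Lam = 0, xi is simply sqrt(kappa)*B(t)
path = annulus_sle_driver(kappa=2.0, Lam=lambda t, x: 0.0, p=1.0, x0=0.0, y0=0.0, T=0.5)
```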

A covering chordal-type annulus drift function is a real valued \(C^{0,1}\) differentiable function defined on \((0,\infty )\times (\mathbb {R}{\setminus }2\pi \mathbb {Z})\). The word “covering” is omitted if the function has period \(2\pi \). If \(\Lambda \) is a chordal-type annulus drift function, using the same idea, we may define the annulus SLE\((\kappa ,\Lambda )\) processes, where the initial point \(a=e^{ix_0}\) and marked point \(b=e^{iy_0}\) both lie on \(\mathbb {T}\) and are distinct. The driving function \(\xi \) is the solution to (3.13) with \({{\mathrm{Re }}}\widetilde{g}(t,y_0+ pi)\) replaced by \(\widetilde{g}(t,y_0)\).

Via conformal maps, we can then define the annulus SLE\((\kappa ,\Lambda )\) process and trace in any doubly connected domain started from one boundary point with another boundary point being marked. Here \(\Lambda \) is a chordal-type or crossing annulus drift function depending on whether or not the initial point and the marked point lie on the same boundary component. Let \(\Lambda _I(p,x)=-\Lambda (p,-x)\); then \(\Lambda _I\) is called the dual of \(\Lambda \). If \(W\) is a conjugate conformal map of \(\mathbb {A}_p\), and \(K(t)\), \(0\le t<T\), are the hulls of an annulus SLE\((\kappa ,\Lambda )\) process in \(\mathbb {A}_p\) started from \(a\) with marked point \(b\), then \((W(K(t)))\) is an annulus SLE\((\kappa ,\Lambda _I)\) process in \(W(\mathbb {A}_p)\) started from \(W(a)\) with marked point \(W(b)\).

Definition 3.4

Let \(\kappa \ge 0\), \(b\in \mathbb {T}\), and \(\Lambda \) be a crossing annulus drift function. Choose \(y_0\in \mathbb {R}\) such that \(e^{iy_0}=b\). Let \(B_*^{(\kappa )}(t)\), \(t\in \mathbb {R}\), be a pre-\((\mathbb {T};\kappa )\)-Brownian motion. Suppose \(\xi (t)\), \(-\infty <t<0\), satisfies the following SDE with the meaning in Definition 2.2:

$$\begin{aligned} d\xi (t)=dB_*^{(\kappa )}(t)+\Lambda (-t,\xi (t)-\widetilde{g}_I(t,y_0))dt, \quad -\infty <t<0, \end{aligned}$$

where \(\widetilde{g}_I(t,\cdot )\) are the covering disc Loewner maps driven by \(\xi \). Then we call the disc Loewner trace driven by \(\xi \) the disc SLE\((\kappa ,\Lambda )\) trace in \(\mathbb {D}\) started from \(0\) with marked point \(b\).

Via conformal maps, we can define disc SLE\((\kappa ,\Lambda )\) trace in any simply connected domain started from an interior point with a marked boundary point.

4 Coupling of two annulus SLE traces

The goal of this section is to prove Theorem 4.1 below, which says that when a certain PDE is satisfied, we may couple two annulus SLE\((\kappa ,\Lambda )\) processes so that they commute with each other. Although this result will not be used directly in the proof of the whole-plane reversibility, we prove this theorem because, on the one hand, the result may be useful in the future, and, on the other hand, the proof will serve as a reference for the more complicated proof of the theorem about coupling two whole-plane SLE processes.

After some preparation in Sect. 4.1, the construction formally starts in Sect. 4.2, which resembles Section 3 of [10]. The extra complexity comes from the appearance of covering maps and inverted maps. Then we construct a two-dimensional local martingale \(M\) in Sect. 4.3, which resembles Section 4 of [10]. In the same subsection, we derive the boundedness of \(M\) when the two processes are stopped at certain exit times. In Sect. 4.4, we first construct local commutation couplings using \(M\), then construct the global commutation coupling using the coupling technique, and finish the proof.

Theorem 4.1

Let \(\kappa >0\) and \(s_0\in \mathbb {R}\). Suppose \(\Gamma \) is a positive \(C^{1,2}\) differentiable function on \((0,\infty )\times \mathbb {R}\) that satisfies

$$\begin{aligned} \partial _t\Gamma&= \frac{\kappa }{2}\Gamma ''+\mathbf{H}_I\Gamma '+\left( \frac{3}{\kappa }-\frac{1}{2}\right) \mathbf{H}_I'\Gamma ;\end{aligned}$$
(4.1)
$$\begin{aligned} \Gamma (t,x+2\pi )&= e^{\frac{2\pi s_0}{\kappa }}\Gamma (t,x),\quad t>0,\,x\in \mathbb {R}. \end{aligned}$$
(4.2)

We call \(\Gamma \) a partition function following Gregory Lawler’s terminology in [20]. Let \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\). Then \(\Lambda \) is a crossing annulus drift function. Let \(\Lambda _1=\Lambda \) and \(\Lambda _2\) be the dual of \(\Lambda \). Then for any \(p>0\), \(a_{1},a_{2}\in \mathbb {T}\), there is a coupling of two curves: \(\beta _{1}(t)\), \(0\le t<p\), and \(\beta _{2}(t)\), \(0\le t<p\), such that for \(j\ne k\in \{1,2\}\), the following hold.

  1. (i)

    \(\beta _j\) is an annulus SLE\((\kappa ,\Lambda _j)\) trace in \(\mathbb {A}_p\) started from \(a_{j}\) with marked point \(a_{I,k}{:=}I_p(a_{k})\).

  2. (ii)

    If \(t_k<p\) is a stopping time w.r.t. \((\beta _{k}(t))\), then conditioned on \(\beta _k(t)\), \(0\le t\le t_k\), after a time-change, \(\beta _j(t)\), \(0\le t<T_j(t_k)\), is the annulus SLE\((\kappa ,\Lambda _j)\) process in a connected component of \(\mathbb {A}_p{\setminus }I_p(\beta _k((0,t_k]))\) started from \(a_j\) with marked point \(I_p(\beta _k(t_k))\), where \(T_j(t_k)\) is the first time that \(\beta _j\) visits \(I_p\circ \beta _k((0,t_k])\), which is set to be \(p\) if such a time does not exist.

Remark

  1. 1.

    This \(\Lambda \) satisfies the following PDE (a derivation sketch is given after this remark):

    $$\begin{aligned} \partial _t \Lambda =\frac{\kappa }{2} \Lambda ''+\Big (3-\frac{\kappa }{2}\Big )\mathbf{H}_I''+\Lambda \mathbf{H}_I'+\mathbf{H}_I\Lambda '+\Lambda \Lambda '. \end{aligned}$$
    (4.3)

    On the other hand, if \(\Lambda \) satisfies (4.3), then there is \(\Gamma \) such that \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\) and (4.1) holds.

  2. 2.

    The theorem also holds for \(\kappa =0\) if \(\Lambda \) satisfies (4.3) with \(\kappa =0\).
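To make the first item of the above remark self-contained, here is a sketch of the computation behind both directions; it uses only (4.1), (4.3), and the relation \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\). Write \(u=\frac{\Gamma '}{\Gamma }=\frac{\Lambda }{\kappa }\), so that \(\frac{\Gamma ''}{\Gamma }=u'+u^2\) and \(\frac{\Gamma '''}{\Gamma }=u''+3uu'+u^3\). Using (4.1) to expand both \(\partial _t\Gamma \) and \((\partial _t\Gamma )'\), we get

$$\begin{aligned} \frac{\partial _t\Lambda }{\kappa }=\frac{(\partial _t\Gamma )'}{\Gamma }-\frac{\Gamma '}{\Gamma }\cdot \frac{\partial _t\Gamma }{\Gamma } =\frac{\kappa }{2}u''+\kappa uu'+\mathbf{H}_I'u+\mathbf{H}_Iu'+\Big (\frac{3}{\kappa }-\frac{1}{2}\Big )\mathbf{H}_I'', \end{aligned}$$

and substituting \(u=\frac{\Lambda }{\kappa }\) gives (4.3). Conversely, if \(\Lambda \) satisfies (4.3), one may set \(\Gamma (t,x)=\exp (\frac{1}{\kappa }\int _0^x\Lambda (t,y)dy+c(t))\); then \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\), and the above computation shows that the difference of the two sides of (4.1) divided by \(\Gamma \) does not depend on \(x\), so it can be removed by a suitable choice of \(c(t)\).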

4.1 Transformations of PDE

Lemma 4.1

Let \(\sigma ,s_0\in \mathbb {R}\). Suppose \(\Gamma \), \(\Psi \), and \(\Psi _{s_0}\) are functions defined on \((0,\infty )\times \mathbb {R}\), which satisfy \(\Psi =\Gamma \Theta _I^{\frac{2}{\kappa }}\) and \(\Psi _{s_0}(t,x)=e^{-\frac{s_0x}{\kappa }-\frac{s_0^2t}{2\kappa }}\Psi (t,x)\). Then the following PDEs are equivalent:

$$\begin{aligned} \partial _t\Gamma&= \frac{\kappa }{2}\Gamma ''+\mathbf{H}_I\Gamma '+\Big (\sigma -\frac{1}{\kappa }+\frac{1}{2}\Big )\mathbf{H}_I'\Gamma ;\end{aligned}$$
(4.4)
$$\begin{aligned} \partial _t \Psi&= \frac{\kappa }{2} \Psi '' + \sigma \mathbf{H}_I' \Psi ;\end{aligned}$$
(4.5)
$$\begin{aligned} \partial _t \Psi _{s_0}&= \frac{\kappa }{2} \Psi _{s_0}'' +s_0\Psi _{s_0}'+ \sigma \mathbf{H}_I' \Psi _{s_0}. \end{aligned}$$
(4.6)

Proof

This follows from (2.5), (2.7), and some straightforward computations. \(\square \)
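For instance, the equivalence of (4.5) and (4.6) can be checked directly from the definition of \(\Psi _{s_0}\) (the step relating (4.4) and (4.5) is similar, using (2.5) and (2.7)). With \(a=\frac{s_0}{\kappa }\) and \(b=\frac{s_0^2}{2\kappa }\), we have

$$\begin{aligned} \partial _t\Psi _{s_0}=e^{-ax-bt}(\partial _t\Psi -b\Psi ),\quad \Psi _{s_0}'=e^{-ax-bt}(\Psi '-a\Psi ),\quad \Psi _{s_0}''=e^{-ax-bt}(\Psi ''-2a\Psi '+a^2\Psi ), \end{aligned}$$

so

$$\begin{aligned} \frac{\kappa }{2}\Psi _{s_0}''+s_0\Psi _{s_0}'+\sigma \mathbf{H}_I'\Psi _{s_0} =e^{-ax-bt}\Big (\frac{\kappa }{2}\Psi ''+(s_0-\kappa a)\Psi '+\Big (\frac{\kappa a^2}{2}-s_0a\Big )\Psi +\sigma \mathbf{H}_I'\Psi \Big ). \end{aligned}$$

Since \(s_0-\kappa a=0\) and \(\frac{\kappa a^2}{2}-s_0a=-b\), the right-hand side equals \(e^{-ax-bt}(\frac{\kappa }{2}\Psi ''+\sigma \mathbf{H}_I'\Psi -b\Psi )\), so (4.6) holds for \(\Psi _{s_0}\) if and only if (4.5) holds for \(\Psi \).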

Remark

When \(\sigma =\frac{4}{\kappa }-1\), (4.4) agrees with (4.1).

Lemma 4.2

Let \(\sigma ,s_0\in \mathbb {R}\). Suppose \(\Psi _{s_0}\) is positive, has period \(2\pi \), and solves (4.6) in \((0,\infty )\times \mathbb {R}\). Then \(\Psi _{s_0}(t,x)\rightarrow C\) as \(t\rightarrow \infty \) for some constant \(C>0\), uniformly in \(x\in \mathbb {R}\).

Proof

Fix \(t_0>0\) and \(x_0\in \mathbb {R}\). For \(0\le t<t_0\), let \(X_{x_0}(t)=x_0+\sqrt{\kappa }B(t)+s_0t\) and

$$\begin{aligned} M(t)= \Psi _{s_0}(t_0-t,X_{x_0}(t))\exp \left( \sigma \int \limits _0^t \mathbf{H}_I'(t_0-r,X_{x_0}(r))dr\right) . \end{aligned}$$

From (4.6) and Itô’s formula, \(M(t)\), \(0\le t<t_0\), is a local martingale. Since \(\Psi _{s_0}\) and \(\mathbf{H}_I'\) are continuous on \((0,\infty )\times \mathbb {R}\) and have period \(2\pi \), we see that, for any \(t_1\in (0,t_0]\), \(M(t)\), \(0\le t\le t_0-t_1\), is uniformly bounded, so it is a bounded martingale. Thus,

$$\begin{aligned} \Psi _{s_0}(t_0,x_0)=M(0)= \mathbf{E}\,\!\left[ \Psi _{s_0}(t_1,X_{x_0}(t_0-t_1))\exp \left( \sigma \int \limits _0^{t_0-t_1} \mathbf{H}_I'(t_0-r,X_{x_0}(r))dr\right) \right] . \end{aligned}$$
(4.7)

Now suppose \(t_0>t_1\ge 3\). From Lemma 2.1, we see that,

$$\begin{aligned} \int \limits _0^{t_0-t_1}|\mathbf{H}_I'(t_0-r,X_{x_0}(r))|dr\le \int \limits _0^{t_0-t_1} 15 e^{r-t_0}dr\le 15 e^{-t_1}. \end{aligned}$$
(4.8)

Let \(\varepsilon >0\). Choose \(t_1\ge 3\) such that \(15|\sigma | e^{-t_1}<\varepsilon /3\). For \(t\in [t_1,\infty )\) and \(x\in \mathbb {R}\), define

$$\begin{aligned} \Psi _{s_0,t_1}(t,x)=\mathbf{E}\,[\Psi _{s_0}(t_1,X_x(t-t_1))]. \end{aligned}$$

As \(t\rightarrow \infty \), the distribution of \(e^i(X_x(t-t_1))\) tends to the uniform distribution on \(\mathbb {T}\). Since \(\Psi _{s_0}\) is positive, continuous, and has period \(2\pi \), we see that \(\Psi _{s_0,t_1}(t,x)\rightarrow \frac{1}{2\pi } \int _0^{2\pi } \Psi _{s_0}(t_1,y)dy>0\) as \(t\rightarrow \infty \), uniformly in \(x\in \mathbb {R}\). Thus, \(\ln (\Psi _{s_0,t_1}(t,x))\) converges as \(t\rightarrow \infty \), uniformly in \(x\in \mathbb {R}\). So there is \(t_2>t_1\) such that if \(t_a,t_b\ge t_2\) and \(x_a,x_b\in \mathbb {R}\), then \(|\ln (\Psi _{s_0,t_1}(t_a,x_a))-\ln (\Psi _{s_0,t_1}(t_b,x_b))|<\varepsilon /3\). From (4.7) and (4.8) we see that

$$\begin{aligned} |\ln (\Psi _{s_0}(t,x))-\ln (\Psi _{s_0,t_1}(t,x))|\le 15|\sigma | e^{-t_1}<\varepsilon /3,\quad t\ge t_1,\,x\in \mathbb {R}. \end{aligned}$$

Thus, \(|\ln (\Psi _{s_0}(t_a,x_a))-\ln (\Psi _{s_0}(t_b,x_b))|<\varepsilon \) if \(t_a,t_b\ge t_2\) and \(x_a,x_b\in \mathbb {R}\). So \(\ln (\Psi _{s_0}(t,x))\) converges as \(t\rightarrow \infty \), uniformly in \(x\in \mathbb {R}\), which implies the conclusion of the lemma. \(\square \)
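The key step in the above proof, namely that \(\Psi _{s_0,t_1}(t,\cdot )\) flattens to the constant \(\frac{1}{2\pi }\int _0^{2\pi }\Psi _{s_0}(t_1,y)dy\), only uses the fact that the law of the drifted Brownian motion \(X_x(t-t_1)\), wrapped on the circle, tends to the uniform distribution. As a purely numerical illustration (not part of the proof), one may average an arbitrary positive \(2\pi \)-periodic test function, used below as a stand-in for \(\Psi _{s_0}(t_1,\cdot )\), over such samples; all names and parameter values are hypothetical.

import numpy as np

def averaged(psi, x, t, kappa=2.0, s0=0.3, n_samples=200000, rng=None):
    # Monte Carlo estimate of E[psi(X_x(t))], where X_x(t) = x + sqrt(kappa) B(t) + s0 t.
    rng = np.random.default_rng(0) if rng is None else rng
    X = x + np.sqrt(kappa * t) * rng.standard_normal(n_samples) + s0 * t
    return psi(X).mean()

psi = lambda x: 2.0 + np.cos(x) + 0.5 * np.sin(2 * x)   # positive, period 2*pi
mean_value = 2.0                                        # (1/(2*pi)) * integral of psi over a period
for t in [1.0, 5.0, 20.0]:
    print(t, [round(averaged(psi, x, t) - mean_value, 4) for x in (0.0, 1.0, 2.5)])
# The deviations shrink towards 0 (up to Monte Carlo noise), uniformly in the starting point x.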

Lemma 4.3

Let \(s_0\in \mathbb {R}\). Suppose \(\Gamma \) is positive, satisfies (4.2), and solves (4.4). Then there is \(C>0\) such that \(\Gamma _{s_0}(t,x){:=}C^{-1}e^{-\frac{s_0x}{\kappa }-\frac{s_0^2t}{2\kappa }}\Gamma (t,x)\) has period \(2\pi \) and satisfies \(\lim _{t\rightarrow \infty }\Gamma _{s_0}(t,x)= 1\), uniformly in \(x\in \mathbb {R}\).

Proof

Let \(\Psi _{s_0}\) be given by Lemma 4.1. Since \(\Theta _I>0\), \(\Psi _{s_0}\) is positive and solves (4.6). Since \(\Gamma \) satisfies (4.2) and \(\Theta _I\) has period \(2\pi \), \(\Psi _{s_0}\) also has period \(2\pi \). From Lemma 4.2, there is \(C>0\) such that \(\Psi _{s_0}\rightarrow C\) as \(t\rightarrow \infty \), uniformly in \(x\in \mathbb {R}\). Let \(\Gamma _{s_0}(t,x){:=}C^{-1}e^{-\frac{s_0x}{\kappa }-\frac{s_0^2t}{2\kappa }}\Gamma (t,x)\). Then \(\Gamma _{s_0}=C^{-1}\Psi _{s_0}\Theta _I(t,x)^{-\frac{2}{\kappa }}\). From (2.6), \(\Theta _I\rightarrow 1\) as \(t\rightarrow \infty \), uniformly in \(x\in \mathbb {R}\). Since \(\Theta _I\) has period \(2\pi \), we get the desired conclusion. \(\square \)

4.2 Ensemble

Let \(p>0\) and \(\xi _1,\xi _2\in C([0,p))\). For \(j=1,2\), let \(g_j(t,\cdot )\) (resp. \(g_{I,j}(t,\cdot )\)), \(0\le t<p\), be the annulus (resp. inverted annulus) Loewner maps of modulus \(p\) driven by \(\xi _j\). Let \(\widetilde{g}_j(t,\cdot )\) and \(\widetilde{g}_{I,j}(t,\cdot )\), \(0\le t<p\), \(j=1,2\), be the corresponding covering Loewner maps. Suppose \(\xi _j\) generates a simple annulus Loewner trace of modulus \(p\): \(\beta _j\), \(j=1,2\). Let \(\beta _{I,j}=I_p\circ \beta _j\), \(j=1,2\), be the inverted trace. Define

$$\begin{aligned} {\mathcal {D}}&= \{(t_1,t_2):\beta _{1}((0,t_1])\cap \beta _{I,2}((0,t_2])=\emptyset \}\nonumber \\&= \{(t_1,t_2): \beta _{I,1}((0,t_1])\cap \beta _{2}((0,t_2]) =\emptyset \}. \end{aligned}$$
(4.9)

For \((t_1,t_2)\in {\mathcal {D}}\), we define

$$\begin{aligned} {{\mathrm{m}}}(t_1,t_2)&= {{\mathrm{mod}}}(\mathbb {A}_p{\setminus }\beta _1([0,t_1]){\setminus }\beta _{I,2}([0,t_2]))\nonumber \\&= {{\mathrm{mod}}}(\mathbb {A}_p{\setminus }\beta _{I,1}([0,t_1]){\setminus }\beta _{2}([0,t_2])). \end{aligned}$$
(4.10)

Fix any \(j\ne k\in \{1,2\}\) and \(t_k\in [0,p)\). Let \(T_j(t_k)\) be the maximal number such that for any \(t_j<T_j(t_k)\), we have \((t_1,t_2)\in {\mathcal {D}}\). As \(t_j\rightarrow T_j(t_k)^-\), the spherical distance between \(\beta _j((0,t_j])\) and \(\beta _{I,k}((0,t_k])\) tends to \(0\), so \({{\mathrm{m}}}(t_1,t_2)\rightarrow 0\). For \(0\le t_j<T_j(t_k)\), let \(\beta _{j,t_k}(t_j)= g_{I,k}(t_k,\beta _j(t_j))\). Then \(\beta _{j,t_k}(t_j)\), \(0\le t_j<T_j(t_k)\), is a simple curve that starts from \(g_{I,k}(t_k,e^{i\xi _j(0)})\in \mathbb {T}\), and stays inside \(\mathbb {A}_{p-t_k}\) for \(t_j>0\). Let

$$\begin{aligned} v_{j,t_k}(t_j)={{\mathrm{cap}}}_{\mathbb {A}_{p-t_k}}(\beta _{j,t_k}((0,t_j]))=p-t_k-{{\mathrm{m}}}(t_1,t_2). \end{aligned}$$
(4.11)

Then \(v_{j,t_k}\) is continuous and increasing and maps \([0,T_j(t_k))\) onto \([0,S_{j,t_k})\) for some \(S_{j,t_k}\in (0,p-t_k]\). Since \({{\mathrm{m}}}\rightarrow 0\) as \(t_j\rightarrow T_j(t_k)\), \(S_{j,t_k}=p-t_k\). Then \(\gamma _{j,t_k}(t){:=} \beta _{j,t_k}(v_{j,t_k}^{-1}(t))\), \(0\le t<p-t_k\), is the annulus Loewner trace of modulus \(p-t_k\) driven by some \(\zeta _{j,t_k}\in C([0,p-t_k))\). Let \(\gamma _{I,j,t_k}(t)\) be the corresponding inverted annulus Loewner trace. Let \( h_{j,t_k}(t,\cdot )\) and \(h_{I,j,t_k}(t,\cdot )\) be the corresponding annulus and inverted annulus Loewner maps. Let \(\widetilde{h}_{j,t_k}(t,\cdot )\) and \(\widetilde{h}_{I,j,t_k}(t,\cdot )\) be the corresponding covering maps.

For \(0\le t_j<T_j(t_k)\), let \(\xi _{j,t_k}(t_j)\), \(\beta _{I,j,t_k}(t_j)\), \(g_{j,t_k}(t_j,\cdot )\), \(g_{I,j,t_k}(t_j,\cdot )\), \(\widetilde{g}_{j,t_k}(t_j,\cdot )\), and \(\widetilde{g}_{I,j,t_k}(t_j,\cdot )\) be the time-changes of \(\zeta _{j,t_k}(t)\), \(\gamma _{I,j,t_k}(t)\), \( h_{j,t_k}(t,\cdot )\), \(h_{I,j,t_k}(t,\cdot )\), \(\widetilde{h}_{j,t_k}(t,\cdot )\), and \(\widetilde{h}_{I,j,t_k}(t,\cdot )\), respectively, via the map \(v_{j,t_k}\). For example, this means that \(\xi _{j,t_k}(t_j)=\zeta _{j,t_k}(v_{j,t_k}(t_j))\) and \(g_{j,t_k}(t_j,\cdot )=h_{j,t_k}(v_{j,t_k}(t_j),\cdot )\).

For \(0\le t_j<T_j(t_k)\), let

$$\begin{aligned} G_{I,k,t_k}(t_j,\cdot )&= g_{j,t_k}(t_j,\cdot )\circ g_{I,k}(t_k,\cdot )\circ g_j(t_j,\cdot )^{-1},\end{aligned}$$
(4.12)
$$\begin{aligned} \widetilde{G}_{I,k,t_k}(t_j,\cdot )&= \widetilde{g}_{j,t_k}(t_j,\cdot )\circ \widetilde{g}_{I,k}(t_k,\cdot )\circ \widetilde{g}_j(t_j,\cdot )^{-1}. \end{aligned}$$
(4.13)

Then \(G_{I,k,t_k}(t_j,\cdot )\) maps \(\mathbb {A}_{p-t_j}{\setminus }g_j(t_j,\beta _{I,k}((0,t_k]))\) conformally onto \( \mathbb {A}_{{{\mathrm{m}}}(t_1,t_2)}\) and maps \(\mathbb {T}\) onto \(\mathbb {T}\); \(e^i\circ \widetilde{G}_{I,k,t_k}(t_j,\cdot )=G_{I,k,t_k}(t_j,\cdot )\circ e^i\); and \(\widetilde{G}_{I,k,t_k}(t_j,\cdot )\) maps \(\mathbb {R}\) onto \(\mathbb {R}\). Since \(\gamma _{j,t_k}(t)= \beta _{j,t_k}(v_{j,t_k}^{-1}(t))\), from (3.6) and a similar formula for \(\gamma \), we find that \(e^{i\xi _{j,t_k}(t_j)}=G_{I,k,t_k}(t_j,e^{i\xi _j(t_j)})\) for \(0\le t_j<T_j(t_k)\). So there is \(n\in \mathbb {Z}\) such that \(\widetilde{G}_{I,k,t_k}(t_j,\xi _j(t_j))=\xi _{j,t_k}(t_j)+2n\pi \) for \(0\le t_j<T_j(t_k)\). Since \(\zeta _{j,t_k}+2n\pi \) generates the same annulus Loewner hulls as \(\zeta _{j,t_k}\), we may choose \(\zeta _{j,t_k}\) such that for \(0\le t_j<T_j(t_k)\),

$$\begin{aligned} \xi _{j,t_k}(t_j)=\widetilde{G}_{I,k,t_k}(t_j,\xi _j(t_j)). \end{aligned}$$
(4.14)

For \(0\le t_j<T_j(t_k)\), let

$$\begin{aligned} A_{j,h}(t_1,t_2)&= \widetilde{G}_{I,k,t_k}^{(h)}(t_j,\xi _j(t_j)),\quad h=1,2,3, \end{aligned}$$
(4.15)
$$\begin{aligned} A_{j,S}(t_1,t_2)&= \frac{A_{j,3}(t_1,t_2)}{A_{j,1}(t_1,t_2)}-\frac{3}{2}\left( \frac{A_{j,2}(t_1,t_2)}{A_{j,1}(t_1,t_2)}\right) ^2. \end{aligned}$$
(4.16)

Then \(A_{j,S}(t_1,t_2)\) is the Schwarzian derivative of \(\widetilde{G}_{I,k,t_k}(t_j,\cdot )\) at \(\xi _j(t_j)\). A standard argument using Lemma 2.1 in [17] shows that, for \(0\le t_j<T_j(t_k)\),

$$\begin{aligned} v_{j,t_k}'(t_j)=|G_{I,k,t_k}'(t_j,\xi _j(t_j))|^2=\widetilde{G}_{I,k,t_k}'(t_j,\xi _j(t_j))^2=A_{j,1}(t_1,t_2)^2,\quad \quad \end{aligned}$$
(4.17)

so from (4.11) we have

$$\begin{aligned} \partial _j {{\mathrm{m}}}=-A_{j,1}^2. \end{aligned}$$
(4.18)
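The quantities in (4.15)–(4.16) are the first three derivatives of \(\widetilde{G}_{I,k,t_k}(t_j,\cdot )\) at the driving point, combined into a Schwarzian derivative. As a side illustration only (it plays no role in the proofs), this combination is easy to evaluate from derivative data; the sketch below does so by central finite differences for smooth real functions and recovers the standard facts that Möbius maps have vanishing Schwarzian and that \(S\tan =2\). The function name and step size are hypothetical.

import numpy as np

def schwarzian(f, x, h=1e-3):
    # Central finite differences for f', f'', f''', then S f = f'''/f' - 1.5*(f''/f')^2,
    # mirroring the combination of A_{j,1}, A_{j,2}, A_{j,3} in (4.16).
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    d3 = (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)
    return d3 / d1 - 1.5 * (d2 / d1) ** 2

mobius = lambda x: (2 * x + 1) / (x + 3)   # Moebius maps have Schwarzian 0
print(schwarzian(mobius, 0.7))             # approximately 0
print(schwarzian(np.tan, 0.3))             # approximately 2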

Moreover, for \(0\le t_j<T_j(t_k)\),

$$\begin{aligned} \partial _t {\widetilde{g}}_{j,t_k}(t_j,z)&= A_{j,1}(t_1,t_2)^2\mathbf{H}({{\mathrm{m}}}(t_1,t_2),\widetilde{g}_{j,t_k}(t_j,z)-\xi _{j,t_k}(t_j));\quad \quad \end{aligned}$$
(4.19)
$$\begin{aligned} \partial _t {\widetilde{g}}_{I,j,t_k}(t_j,z)&= A_{j,1}(t_1,t_2)^2\mathbf{H}_I({{\mathrm{m}}}(t_1,t_2),\widetilde{g}_{I,j,t_k}(t_j,z)-\xi _{j,t_k}(t_j)).\quad \quad \end{aligned}$$
(4.20)

From (4.13) we have

$$\begin{aligned} \widetilde{G}_{I,k,t_k}(t_j,\cdot )\circ \widetilde{g}_j(t_j,z)=\widetilde{g}_{j,t_k}(t_j,\cdot )\circ \widetilde{g}_{I,k}(t_k,z). \end{aligned}$$
(4.21)

Differentiate (4.21) w.r.t. \(t_j\). Let \(w=\widetilde{g}_j(t_j,z)\rightarrow \xi _j(t_j)\). From (3.7), (4.14), (4.19), and (2.2) we get

$$\begin{aligned} \partial _t{\widetilde{G}}_{I,k,t_k}(t_j,\xi _j(t_j))=-3\widetilde{G}_{I,k,t_k}''(t_j,\xi _j(t_j))=-3A_{j,2}(t_1,t_2). \end{aligned}$$
(4.22)

Differentiate (4.21) w.r.t. \(t_j\) and \(z\), and let \(w=\widetilde{g}_j(t_j,z)\rightarrow \xi _j(t_j)\). Then we get

$$\begin{aligned} \frac{\partial _t{\widetilde{G}}'_{I,k,t_k}(t_j,\xi _j(t_j))}{{\widetilde{G}}'_{I,k,t_k}(t_j,\xi _j(t_j))}= \frac{1}{2}\cdot \left( \frac{A_{j,2}}{A_{j,1}}\right) ^2 -\frac{4}{3}\cdot \frac{A_{j,3}}{A_{j,1}} +A_{j,1}^2 \mathbf{r}({{\mathrm{m}}})-\mathbf{r}(p-t_j).\quad \quad \quad \end{aligned}$$
(4.23)

Note that both \(G_{I,k,t_k}(t_j,\cdot )\) and \(g_{I,k,t_j}(t_k,\cdot )\) map \(\mathbb {A}_{p-t_j}{\setminus }\beta _{I,k,t_j}((0,t_k])\) conformally onto \(\mathbb {A}_{{{\mathrm{m}}}(t_1,t_2)}\) and map \(\mathbb {T}\) onto \(\mathbb {T}\). So they differ by a multiplicative constant of modulus \(1\). Thus, there is \(C_k(t_1,t_2)\in \mathbb {R}\) such that

$$\begin{aligned} \widetilde{G}_{I,k,t_k}(t_j,\cdot )=\widetilde{g}_{I,k,t_j}(t_k,\cdot )+C_k(t_1,t_2). \end{aligned}$$
(4.24)

Interchanging \(j\) and \(k\) in (4.24), we see that there is \(C_j(t_1,t_2)\in \mathbb {R}\) such that

$$\begin{aligned} \widetilde{G}_{I,j,t_j}(t_k,\cdot )=\widetilde{g}_{I,j,t_k}(t_j,\cdot )+C_j(t_1,t_2). \end{aligned}$$
(4.25)

From (4.13) we have

$$\begin{aligned} \widetilde{g}_{I,j,t_k}(t_j,\cdot )\circ \widetilde{g}_k(t_k,\cdot )+C_j&= \widetilde{g}_{k,t_j}(t_k,\cdot )\circ \widetilde{g}_{I,j}(t_j,\cdot ),\end{aligned}$$
(4.26)
$$\begin{aligned} \widetilde{g}_{I,k,t_j}(t_k,\cdot )\circ \widetilde{g}_j(t_j,\cdot )+C_k&= \widetilde{g}_{j,t_k}(t_j,\cdot )\circ \widetilde{g}_{I,k}(t_k,\cdot ). \end{aligned}$$
(4.27)

From the definition of inverted annulus Loewner maps, we have

$$\begin{aligned} \widetilde{g}_{j,t_k}(t_j,\cdot )&= \widetilde{I}_{{{\mathrm{m}}}(t_1,t_2)}\circ \widetilde{g}_{I,j,t_k}(t_j,\cdot )\circ \widetilde{I}_{p-t_k},\quad \widetilde{g}_{j}(t_j,\cdot )=\widetilde{I}_{p-t_j}\circ \widetilde{g}_{I,j}(t_j,\cdot )\circ \widetilde{I}_p;\\ \widetilde{g}_{I,k,t_j}(t_k,\cdot )&= \widetilde{I}_{{{\mathrm{m}}}(t_1,t_2)}\circ \widetilde{g}_{k,t_j}(t_k,\cdot )\circ \widetilde{I}_{p-t_j},\quad \widetilde{g}_{I,k}(t_k,\cdot )=\widetilde{I}_{p-t_k}\circ \widetilde{g}_k(t_k,\cdot )\circ \widetilde{I}_p. \end{aligned}$$

From (4.27) and the above formulas, we get \(\widetilde{g}_{k,t_j}(t_k,\cdot )\circ \widetilde{g}_{I,j}(t_j,\cdot )+C_k=\widetilde{g}_{I,j,t_k}(t_j,\cdot )\circ \widetilde{g}_k(t_k,\cdot )\). Comparing this formula with (4.26), we see that \( C_1+C_2\equiv 0\). Now we define \(X_1\) and \(X_2\) on \({\mathcal {D}}\) by

$$\begin{aligned} X_j(t_1,t_2)&= \xi _{j,t_k}(t_j)-\widetilde{g}_{I,j,t_k}(t_j,\xi _k(t_k))\nonumber \\&= \widetilde{G}_{I,k,t_k}(t_j,\xi _j(t_j))-\widetilde{g}_{I,j,t_k}(t_j,\xi _k(t_k)). \end{aligned}$$
(4.28)

From (4.24), (4.25), and \(C_1+C_2\equiv 0\), we have

$$\begin{aligned} X_1+X_2\equiv 0. \end{aligned}$$
(4.29)

Since \(\mathbf{H}_I'''\) is even, we may define \(Q\) on \({\mathcal {D}}\) by

$$\begin{aligned} Q=\mathbf{H}_I'''({{\mathrm{m}}},X_1)=\mathbf{H}_I'''({{\mathrm{m}}},X_2). \end{aligned}$$
(4.30)

Differentiate (4.20) w.r.t. \(z\) twice. We get

$$\begin{aligned} \frac{\partial _t{\widetilde{g}'}_{I,j,t_k}(t_j,z)}{\widetilde{g}_{I,j,t_k}'(t_j,z)}&= A_{j,1}^2 \mathbf{H}_I'({{\mathrm{m}}},\widetilde{g}_{I,j,t_k}(t_j,z)-\xi _{j,t_k}(t_j)). \end{aligned}$$
(4.31)
$$\begin{aligned} \partial _{t}\left( \frac{\widetilde{g}_{I,j,t_k}''(t_j,z) }{\widetilde{g}_{I,j,t_k}'(t_j,z) }\right)&= A_{j,1}^2\mathbf{H}_I''({{\mathrm{m}}},\widetilde{g}_{I,j,t_k}(t_j,z)-\xi _{j,t_k}(t_j))\widetilde{g}_{I,j,t_k}'(t_j,z). \end{aligned}$$
(4.32)

Let \(z=\xi _k(t_k)\) in (4.20), (4.31), and (4.32). Since \(\mathbf{H}_I\) and \(\mathbf{H}_I''\) are odd and \(\mathbf{H}_I'\) is even, from (4.25) and (4.28) we have

$$\begin{aligned} \partial _j \widetilde{g}_{I,j,t_k}(t_j,\xi _k(t_k))&= -A_{j,1}^2\mathbf{H}_I({{\mathrm{m}}},X_j).\end{aligned}$$
(4.33)
$$\begin{aligned} \frac{\partial _j A_{k,1}}{ A_{k,1}}&= A_{j,1}^2 \mathbf{H}_I'({{\mathrm{m}}},X_j).\end{aligned}$$
(4.34)
$$\begin{aligned} \partial _j\left( \frac{A_{k,2}}{A_{k,1}}\right)&= -A_{j,1}^2\mathbf{H}_I''({{\mathrm{m}}},X_j)A_{k,1}. \end{aligned}$$
(4.35)

Differentiate (4.32) w.r.t. \(z\) again, and let \(z=\xi _k(t_k)\). Since \(\mathbf{H}_I'''\) is even, we get

$$\begin{aligned} \partial _{j}\left( \frac{A_{k,3}}{A_{k,1} } -\left( \frac{A_{k,2} }{A_{k,1} }\right) ^2\right) =A_{j,1}^2[\mathbf{H}_I'''({{\mathrm{m}}},X_j)A_{k,1} ^2 -\mathbf{H}_I''({{\mathrm{m}}},X_j)A_{k,2}], \end{aligned}$$

which together with (4.30) and (4.35) implies that

$$\begin{aligned} \partial _j A_{k,S}= A_{j,1}^2 A_{k,1}^2 Q. \end{aligned}$$
(4.36)

Define \(F\) on \({\mathcal {D}}\) by

$$\begin{aligned} F(t_1,t_2)=\exp \left( \,\int \limits _0^{t_2}\!\int \limits _0^{t_1} A_{1,1}(s_1,s_2)^2 A_{2,1}(s_1,s_2)^2 Q(s_1,s_2)ds_1ds_2\right) . \end{aligned}$$
(4.37)

Since \(\widetilde{g}_{I,j,t_k}(0,\cdot )=\widetilde{h}_{I,j,t_k}(0,\cdot )={{\mathrm{id}}}\), when \(t_j=0\), we have \(A_{k,1}=1\), \(A_{k,2}=A_{k,3}=0\), hence \(A_{k,S}=0\). From (4.36), we see that

$$\begin{aligned} \frac{\partial _{j} F}{F }=A_{j,S}. \end{aligned}$$
(4.38)

Remark

There is an explanation of \(F\) in terms of Brownian loop measure. If \(R\) is a function on \((0,\infty )\) that satisfies \(R'(t)=\mathbf{r}(t)+\frac{1}{t}\), then

$$\begin{aligned} -\frac{1}{3}\ln F(t_1,t_2)-R({{\mathrm{m}}}(t_1,t_2))+R({{\mathrm{m}}}(t_1,0))+R({{\mathrm{m}}}(0,t_2))-R({{\mathrm{m}}}(0,0)) \end{aligned}$$

is the Brownian loop measure of the loops in \(\mathbb {A}_p\) that intersect both \(\beta _1([0,t_1])\) and \(\beta _{I,2}([0,t_2])\).

4.3 Martingales in two time variables

Let \(a_{1},a_{2}\in \mathbb {T}\) be as in Theorem 4.1. Let \(a_{I,j}=I_p(a_{j})\in \mathbb {T}_p\), \(j=1,2\). Choose \(x_1,x_2\in \mathbb {R}\) such that \(a_j=e^{ix_j}\), \(j=1,2\). Let \(B_1(t)\) and \(B_2(t)\) be two independent Brownian motions. For \(j=1,2\), let \(({\mathcal {F}}^j_t)\) be the complete filtration generated by \((B_j(t))\). Let \(\Gamma \), \(\Lambda \), \(\Lambda _1\), and \(\Lambda _2\) be as in Theorem 4.1. Since \(\Gamma \) satisfies (4.2), \(\Lambda _j\), \(j=1,2\), has period \(2\pi \), which implies that they are annulus drift functions. For \(j=1,2\), let \(\xi _j(t_j)\), \(0\le t_j<p\), be the solution to the SDE:

$$\begin{aligned} d \xi _j(t_j)=\sqrt{\kappa }d B_j(t_j)+\Lambda _{j}(p-t_j,\xi _j(t_j)-\widetilde{g}_{I,j}(t_j,x_{3-j}))d t_j, \quad \xi _j(0)=x_j.\nonumber \\ \end{aligned}$$
(4.39)

Then \((\xi _1)\) and \((\xi _2)\) are independent. For simplicity, suppose \(\kappa \in (0,4]\) (for the case \(\kappa >4\), we may work with Loewner chains and apply Proposition 2.1 in [17]). Then for \(j=1,2\), a.s. \((\xi _j)\) generates a simple annulus Loewner trace \(\beta _j\), which is an annulus SLE\((\kappa ,\Lambda _j)\) trace in \(\mathbb {A}_p\) started from \(a_{j}\) with marked point \(a_{I,3-j}\). We may apply the results in the prior subsection.

As the annulus Loewner objects driven by \(\xi _j\), \(\beta _j\), \(\beta _{I,j}=I_p\circ \beta _j\), \((g_{I,j}(t_j,\cdot ))\), \((\widetilde{g}_j(t_j,\cdot ))\), and \((\widetilde{g}_{I,j}(t_j,\cdot ))\) are all \(({\mathcal {F}}^j_{t_j})\)-adapted. Fix \(j\ne k\in \{1,2\}\). Since \(\beta _j\) is \(({\mathcal {F}}^j_{t_j})\)-adapted and \((g_{I,k}(t_k,\cdot ))\) is \(({\mathcal {F}}^k_{t_k})\)-adapted, we see that \((t_1,t_2)\mapsto \beta _{j,t_k}(t_j)=g_{I,k}(t_k,\beta _j(t_j))\) defined on \({\mathcal {D}}\) is \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted. Since \(\widetilde{g}_{j,t_k}(t_j,\cdot )\) and \(\widetilde{g}_{I,j,t_k}(t_j,\cdot )\) are determined by \(\beta _{j,t_k}(s_j)\), \(0\le s_j\le t_j\), they are \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted. From (4.13), \((\widetilde{G}_{I, k,t_k}(t_j,\cdot ))\) is \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted. From (4.14), \((\xi _{j,t_k}(t_j))\) is also \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted. From (4.10), (4.28), (4.15), and (4.16), we see that \(({{\mathrm{m}}})\), \((X_j)\), \((A_{j,h})\), \(h=1,2,3\), and \((A_{j,S})\) are all \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted.

Fix \(j\ne k\in \{1,2\}\) and any \(({\mathcal {F}}^k_t)\)-stopping time \(t_k\in [0,p)\). Let \({\mathcal {F}}^{j,t_k}_{t_j}={\mathcal {F}}^j_{t_j}\times {\mathcal {F}}^k_{t_k}\), \(0\le t_j<p\). Then \(({\mathcal {F}}^{j,t_k}_{t_j})_{0\le t_j<p}\) is a filtration. Since \((B_j(t_j))\) is independent of \({\mathcal {F}}^k_{t_k}\), it is also an \(({\mathcal {F}}^{j,t_k}_{t_j})\)-Brownian motion. Thus, (4.39) is an \(({\mathcal {F}}^{j,t_k}_{t_j})\)-adapted SDE. From now on, we will apply Itô's formula repeatedly; all SDEs will be \(({\mathcal {F}}^{j,t_k}_{t_j})\)-adapted, and \(t_j\) ranges in \([0,T_j(t_k))\).

From (4.22), (4.28), (4.15), and (4.33), we see that \(X_j\) satisfies

$$\begin{aligned} \partial _j X_j =A_{j,1}\partial \xi _j(t_j)+\left( \frac{\kappa }{2}-3\right) A_{j,2}\partial t_j+A_{j,1}^2 \mathbf{H}_I({{\mathrm{m}}},X_j)\partial t_j. \end{aligned}$$
(4.40)

Let \(\Gamma _{1}=\Gamma \) and \(\Gamma _{2}(t,x)=\Gamma (t,-x)\). Then for \(j=1,2\), \(\Lambda _j=\kappa \frac{\Gamma _j'}{\Gamma _j}\) and \(\Gamma _{j}\) satisfies (4.1). From (4.29), we may define \(Y\) on \({\mathcal {D}}\) by

$$\begin{aligned} Y=\Gamma _{1}({{\mathrm{m}}},X_1)=\Gamma _{2}({{\mathrm{m}}},X_2). \end{aligned}$$
(4.41)

From (4.1), (4.18), (4.40), and (4.41), we have

$$\begin{aligned} \frac{\partial _j Y}{Y}&= \frac{1}{\kappa }\Lambda _{j} ({{\mathrm{m}}},X_j) A_{j,1}{\partial \xi _j(t_j)}\nonumber \\&-\left( \frac{3}{\kappa }-\frac{1}{2}\right) \left( A_{j,1}^2\mathbf{H}_I'({{\mathrm{m}}},X_j)+\Lambda _{j}({{\mathrm{m}}},X_j)A_{j,2}\right) \partial t_j. \end{aligned}$$
(4.42)

From (4.23) we have

$$\begin{aligned} \frac{\partial _j A_{j,1} }{ A_{j,1} }&= \frac{A_{j,2}}{A_{j,1}}\cdot \partial \xi _j(t_j) + \left( \frac{1}{2}\cdot \left( \frac{A_{j,2} }{A_{j,1} }\right) ^2+\left( \frac{\kappa }{2}-\frac{4}{3}\right) \cdot \frac{A_{j,3} }{A_{j,1}}\right) \partial t_j\\&+ A_{j,1}^2 \mathbf{r}({{\mathrm{m}}})\partial t_j- \mathbf{r}(p-t_j)\partial t_j. \end{aligned}$$

Let

$$\begin{aligned} \alpha =\frac{6-\kappa }{2\kappa },\qquad c=\frac{(8-3\kappa )(\kappa -6)}{2\kappa }. \end{aligned}$$

In fact, \(c\) is the central charge for SLE\(_\kappa \); for instance, \(\kappa =2\) gives \(c=-2\) and \(\kappa =8/3\) gives \(c=0\). Then we compute

$$\begin{aligned} \frac{\partial _j A_{j,1}^{\alpha }}{ A_{j,1}^{\alpha }}=\alpha \cdot \frac{A_{j,2}}{A_{j,1}}\cdot \partial \xi _j(t_j) +\frac{c}{6} A_{j,S}\partial t_j +\alpha A_{j,1}^2 \mathbf{r}({{\mathrm{m}}})\partial t_j-\alpha \mathbf{r}(p-t_j)\partial t_j.\nonumber \\ \end{aligned}$$
(4.43)

Recall the \(\mathbf{R}\) defined in Sect. 2.3. Define \(\widehat{M}\) on \({\mathcal {D}}\) by

$$\begin{aligned} \widehat{M}=A_{1,1}^\alpha A_{2,1}^\alpha F^{-\frac{c}{6}}Y\exp (\alpha \mathbf{R}({{\mathrm{m}}})). \end{aligned}$$
(4.44)

Then \(\widehat{M}\) is positive. From (2.3), (4.18), (4.34), (4.38), (4.42), and (4.43), we have

$$\begin{aligned} \frac{\partial _j\widehat{M}}{\widehat{M}}&= \alpha \frac{A_{j,2}}{A_{j,1}}\partial \xi _j(t_j)+ \frac{A_{j,1}}{\kappa }\Lambda _{j}({{\mathrm{m}}},X_j) {\partial \xi _j(t_j)}\nonumber \\&-\alpha \mathbf{r}(p-t_j)\partial t_j+\alpha A_{j,1}^2\mathbf{r}(\infty )\partial t_j. \end{aligned}$$
(4.45)

When \(t_k=0\), we have \(A_{j,1}=1\), \(A_{j,2}=0\), \({{\mathrm{m}}}=p-t_j\), and \(X_j=\xi _j(t_j)-\widetilde{g}_{I,j}(t_j,x_k)\), so the RHS of (4.45) becomes \(\frac{1}{\kappa }\Lambda _{j}(p-t_j,\xi _j(t_j)-\widetilde{g}_{I,j}(t_j,x_k))\partial \xi _j(t_j)\). Define \(M\) on \({\mathcal {D}}\) by

$$\begin{aligned} M(t_1,t_2)=\frac{\widehat{M}(t_1,t_2) \widehat{M}(0,0)}{\widehat{M}(t_1,0)\widehat{M}(0,t_2)}. \end{aligned}$$
(4.46)

Then \(M\) is also positive, and \(M(\cdot ,0)\equiv M(0,\cdot )\equiv 1\). From (4.39) and (4.45) we have

$$\begin{aligned}&\frac{\partial _j M}{ M}=\left[ \left( 3-\frac{\kappa }{2}\right) \frac{A_{j,2}}{A_{j,1}}+\Lambda _{j} ({{\mathrm{m}}},X_j) A_{j,1} -\Lambda _{j}(p-t_j,\xi _j(t_j)-\widetilde{g}_{I,j}(t_j,x_k)) \right] \frac{\partial B_j(t_j)}{\sqrt{\kappa }}.\nonumber \\ \end{aligned}$$
(4.47)

So when \(t_k\in [0,p)\) is a fixed \(({\mathcal {F}}^k_t)\)-stopping time, \(M\) is a local martingale in \(t_j\).

Let \(\mathcal {J}\) denote the set of Jordan curves in \(\mathbb {A}_p\) that separate \(\mathbb {T}\) and \(\mathbb {T}_p\). For \(J\in \mathcal {J}\) and \(j=1,2\), let \(T_j(J)\) be the first time that \(\beta _j\) visits \(J\). It is also the first time that \(\beta _{I,j}\) visits \(I_p(J)\). Let \({{\mathrm{JP}}}\) denote the set of pairs \((J_1,J_2)\in {\mathcal {J}}^2\) such that \(I_p(J_1)\cap J_2=\emptyset \) and \(I_p(J_1)\) is surrounded by \(J_2\). This is equivalent to saying that \(I_p(J_2)\cap J_1=\emptyset \) and \(I_p(J_2)\) is surrounded by \(J_1\). Then for every \((J_1,J_2)\in {{\mathrm{JP}}}\), \(\beta _{I,1}((0,t_1])\cap \beta _{2}((0,t_2])=\emptyset \) when \(t_1\le T_{1}(J_1)\) and \(t_2\le T_2(J_2)\), so \([0,T_{1}(J_1)]\times [0,T_{2}(J_2)]\subset {\mathcal {D}}\).

Lemma 4.4

There are positive continuous functions \(N_L(p)\) and \(N_S(p)\) defined on \((0,\infty )\) that satisfy \(N_L(p),N_S(p)=O(pe^{-p})\) as \(p\rightarrow \infty \) and have the following properties. Suppose \(K\) is an interior hull in \(\mathbb {D}\) containing \(0\), \(g\) maps \(\mathbb {D}{\setminus }K\) conformally onto \(\mathbb {A}_p\) for some \(p\in (0,\infty )\) and maps \(\mathbb {T}\) onto \(\mathbb {T}\), and \(\widetilde{g}\) is an analytic function that satisfies \(e^i\circ \widetilde{g}=g\circ e^i\). Then for any \(x\in \mathbb {R}\), \(|\ln (\widetilde{g}'(x))|\le N_L(p)\) and \(|S\widetilde{g}(x)|\le N_S(p)\), where \(S\widetilde{g}(x)\) is the Schwarzian derivative of \(\widetilde{g}\) at \(x\), i.e., \(S\widetilde{g}(x)=\widetilde{g}'''(x)/\widetilde{g}'(x)-\frac{3}{2} (\widetilde{g}''(x)/\widetilde{g}'(x))^2\).

Proof

Let \(f=g^{-1}\) and \(\widetilde{f}=\widetilde{g}^{-1}\). Then \(e^i\circ \widetilde{f}=f\circ e^i\). Since \(\widetilde{f}'(\widetilde{g}(x))=1/\widetilde{g}'(x)\) and \(S\widetilde{f}(\widetilde{g}(x))=-S\widetilde{g}(x)/\widetilde{g}'(x)^2\), it suffices to prove the lemma for \(\widetilde{f}\). Let \(P(p,z)=-{{\mathrm{Re }}}\mathbf{S}_I(p,z)-\ln |z|/p\) and \(\widetilde{P}(p,z)=P(p,e^{iz})={{\mathrm{Im }}}\mathbf{H}_I(p,z)+{{\mathrm{Im }}}z/p\). Then \(P(p,\cdot )\) vanishes on \(\mathbb {T}\) and \(\mathbb {T}_p{\setminus }\{e^{-p}\}\) and is harmonic inside \(\mathbb {A}_p\). Moreover, when \(z\in \mathbb {A}_p\) is near \(e^{-p}\), \(P(p,z)\) behaves like \(-{{\mathrm{Re }}}(\frac{e^{-p}+z}{e^{-p}-z})+O(1)\). Thus, \(-P(p,\cdot )\) is a renormalized Poisson kernel in \(\mathbb {A}_p\) with the pole at \(e^{-p}\). Since \(\ln |f|\) is negative and harmonic in \(\mathbb {A}_p\) and vanishes on \(\mathbb {T}\), there is a positive measure \(\mu _K\) on \([0,2\pi )\) such that

$$\begin{aligned} \ln |f(z)|=-\int P(p,z/e^{i\xi })d\mu _K(\xi ),\quad z\in \mathbb {A}_p, \end{aligned}$$

which implies that

$$\begin{aligned} {{\mathrm{Im }}}\widetilde{f}(z)=\int P(p,e^{iz}/e^{i\xi })d\mu _K(\xi )=\int \widetilde{P}(p,z-\xi )d\mu _K(\xi ),\quad z\in \mathbb {S}_p. \end{aligned}$$

So for any \(x\in \mathbb {R}\) and \(h=1,2,3\), \(\widetilde{f}^{(h)}(x)=\int \frac{\partial ^{h}}{\partial x^{h-1}\partial y}\widetilde{P}(p,x-\xi )d\mu _K(\xi )\). Let

$$\begin{aligned} m_p&= \inf _{x\in \mathbb {R}} \frac{\partial }{\partial y}\widetilde{P}(p,x),\quad M_p=\sup _{x\in \mathbb {R}} \frac{\partial }{\partial y}\widetilde{P}(p,x) ,\\ M_p^{(h)}&= \sup _{x\in \mathbb {R}} \left| \frac{\partial ^{h}}{\partial x^{h-1}\partial y}\widetilde{P}(p,x)\right| , \quad h=2,3. \end{aligned}$$

We have \(0<m_p<M_p<\infty \) and \(m_p|\mu _K|\le \widetilde{f}'\le M_p |\mu _K|\) on \(\mathbb {R}\). Since \(\widetilde{f}(2\pi )=\widetilde{f}(0)+2\pi \), we get \(1/M_p\le |\mu _K|\le 1/m_p\). Thus, \(m_p/M_p\le \widetilde{f}'\le M_p/m_p\) and \(|\widetilde{f}^{(h)}|\le M^{(h)}_p/m_p\), \(h=2,3\), from which it follows that \(|S\widetilde{f}| \le \frac{M_p^{(3)}M_p}{m_p^2}+\frac{3}{2}\left( \frac{M_p^{(2)}M_p}{m_p^2}\right) ^2\) on \(\mathbb {R}\). Since \(\widetilde{P}(p,z)={{\mathrm{Im }}}\mathbf{H}_I(p,z)+{{\mathrm{Im }}}z/p\), we see that \( \frac{\partial }{\partial y}\widetilde{P}(p,x)=\mathbf{H}_I'(p,x)+\frac{1}{p}\) and \(\frac{\partial ^h}{\partial x^{h-1}\partial y}\widetilde{P}(p,x)=\mathbf{H}_I^{(h)}(p,x)\), \(h=2,3\). From Lemma 2.1, \(M_p,m_p=\frac{1}{p}+O(e^{-p})\) and \(M_p^{(h)}=O(e^{-p})\), \(h=2,3\), as \(p\rightarrow \infty \). So we have \(\ln (M_p/m_p)=O(pe^{-p})\) and \(\frac{M_p^{(3)}M_p}{m_p^2}+\frac{3}{2}\left( \frac{M_p^{(2)}M_p}{m_p^2}\right) ^2=O(pe^{-p})\). \(\square \)

Proposition 4.1

(Boundedness) Fix \((J_1,J_2)\in {{\mathrm{JP}}}\). Then \(|\ln (M)|\) is bounded on \([0,T_1(J_1)]\times [0,T_2(J_2)]\) by a constant depending only on \(J_1\) and \(J_2\).

Proof

In this proof, we say a function is uniformly bounded if its values on \([0,T_1(J_1)]\times [0,T_2(J_2)]\) are bounded in absolute value by a constant depending only on \(p\), \(J_1\), and \(J_2\). If there is no ambiguity, let \(\Omega (A,B)\) denote the domain bounded by sets \(A\) and \(B\), and let \({{\mathrm{mod}}}(A,B)\) denote the modulus of this domain if it is doubly connected. Let \(J_{I,2}=I_p(J_2)\). Let \(p_0={{\mathrm{mod}}}(J_1,J_{I,2})>0\). If \(t_1\le T_1(J_1)\) and \(t_2\le T_2(J_2)\), since \(\Omega (J_1,J_{I,2})\) disconnects \(K_1(t_1)\) and \(K_{I,2}(t_2)\) in \(\mathbb {A}_p\), \({{\mathrm{m}}}(t_1,t_2)\ge p_0\). Since \({{\mathrm{m}}}\le p\) always holds, \({{\mathrm{m}}}\in [p_0,p]\) on \([0,T_{1}(J_1)]\times [0,T_{2}(J_2)]\). Since \(\mathbf{R}\) is continuous on \((0,\infty )\), \(\mathbf{R}({{\mathrm{m}}})\) is uniformly bounded. Since \(Q=\mathbf{H}_I'''({{\mathrm{m}}},X_1)\) and \(\mathbf{H}_I'''\) is continuous and has period \(2\pi \), \(Q\) is uniformly bounded. From Lemma 4.4, for \(j=1,2\), \(|\ln (A_{j,1})|\le N_L({{\mathrm{m}}})\), so it is uniformly bounded. From (4.38), \(\ln (F)\) is uniformly bounded. Let \(s_0\in \mathbb {R}\) be as in Theorem 4.1. Let \(\Gamma _{s_0}>0\) be defined by Lemma 4.3, and \(Y_{s_0}=\Gamma _{s_0}({{\mathrm{m}}},X_1)\). Then \(\Gamma _{s_0}\) has period \(2\pi \). So \(\ln (Y_{s_0})\) is uniformly bounded. Define \(\widehat{M}_{s_0}\) and \(M_{s_0}\) using (4.44) and (4.46) with \(Y\) and \(\widehat{M}\) replaced by \(Y_{s_0}\) and \(\widehat{M}_{s_0}\), respectively. Then \(\ln (\widehat{M}_{s_0})\) and \(\ln (M_{s_0})\) are uniformly bounded because their factors are. Now it suffices to show that \(\ln (M)-\ln (M_{s_0})\) is uniformly bounded. We have

$$\begin{aligned} \ln (M(t_1,t_2))-\ln (M_{s_0}(t_1,t_2))&= \frac{s_0}{\kappa }(X_1(t_1,t_2)-X_1(t_1,0)-X_1(0,t_2)+X_1(0,0))\\&+\frac{s_0^2}{2\kappa }({{\mathrm{m}}}(t_1,t_2)-{{\mathrm{m}}}(t_1,0)-{{\mathrm{m}}}(0,t_2)+{{\mathrm{m}}}(0,0)). \end{aligned}$$

The second term on the RHS of the above formula is uniformly bounded because \({{\mathrm{m}}}\in [p_0,p]\). So it suffices to show that \( X_1(t_1,t_2)-X_1(t_1,0)-X_1(0,t_2)+X_1(0,0)\) is uniformly bounded. Let

$$\begin{aligned} \widetilde{G}(t_1,t_2)=\widetilde{G}_{I,2,t_2}(t_1,\xi _1(t_1)),\quad \widetilde{g}(t_1,t_2)=\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2)). \end{aligned}$$

From (4.28) we have \(X_1=\widetilde{G}-\widetilde{g}\). So it suffices to show that \( \widetilde{G}(t_1,t_2)-\widetilde{G}(t_1,0)-\widetilde{G}(0,t_2)+\widetilde{G}(0,0)\) and \( \widetilde{g}(t_1,t_2)-\widetilde{g}(t_1,0)-\widetilde{g}(0,t_2)+\widetilde{g}(0,0)\) are both uniformly bounded. From (4.20) we have \(\partial _1 \widetilde{g}(t_1,t_2)=A_{1,1}^2\mathbf{H}_I({{\mathrm{m}}}(t_1,t_2),\widetilde{g}(t_1,t_2)-\xi _{1,t_2}(t_1))\). Since \(A_{1,1}^2\) is uniformly bounded, \({{\mathrm{m}}}\in [p_0,p]\), and \(\mathbf{H}_I\) is continuous and has period \(2\pi \), \(\widetilde{g}(t_1,t_2)-\widetilde{g}(0,t_2)\) is uniformly bounded. Thus, \( \widetilde{g}(t_1,t_2)-\widetilde{g}(t_1,0)-\widetilde{g}(0,t_2)+\widetilde{g}(0,0)\) is uniformly bounded. Let \(\widetilde{G}_d(t_1,t_2)=\widetilde{G}(t_1,t_2)-\xi _1(t_1)\). Then \( \widetilde{G}(t_1,t_2)-\widetilde{G}(t_1,0)-\widetilde{G}(0,t_2)+\widetilde{G}(0,0)= \widetilde{G}_d(t_1,t_2)-\widetilde{G}_d(t_1,0)-\widetilde{G}_d(0,t_2)+\widetilde{G}_d(0,0)\). To finish the proof it suffices to show that \(\widetilde{G}_d\) is uniformly bounded.

Let \(J\) be a Jordan curve which is disjoint from \(J_1\) and \(I_p(J_2)\), and separates these two curves. Let \(\widetilde{J}=(e^i)^{-1}(J)\). Since \(\widetilde{G}_d(t_1,t_2)=\widetilde{G}_{I,2,t_2}(t_1,\xi _1(t_1))-\xi _1(t_1)\), from the maximum principle, it suffices to show that \(\sup _{z\in \widetilde{g}_1(t_1,\widetilde{J})} (\widetilde{G}_{I,2,t_2}(t_1,z)-z)\) is uniformly bounded. Recall from (4.13) that \(\widetilde{G}_{I,2,t_2}(t_1,\cdot )=\widetilde{g}_{1,t_2}(t_1,\cdot )\circ \widetilde{g}_{I,2}(t_2,\cdot )\circ \widetilde{g}_1(t_1,\cdot )^{-1}\). So it suffices to show that the following three quantities are uniformly bounded:

$$\begin{aligned} \sup _{z\in \widetilde{J}}|\widetilde{g}_1(t_1,z)-z|,\quad \sup _{z\in \widetilde{J}}|\widetilde{g}_{I,2}(t_2,z)-z|,\quad \sup _{z\in \widetilde{g}_{I,2}(t_2,\widetilde{J})} |\widetilde{g}_{1,t_2}(t_1,z)-z|. \end{aligned}$$

The uniform boundedness of these quantities follows from similar arguments. We only work on the last one since it is the hardest. From (4.19) we have

$$\begin{aligned} \widetilde{g}_{1,t_2}(t_1,z)-z=\int \limits _0^{t_1} A_{1,1}(s,t_2)^2\mathbf{H}({{\mathrm{m}}}(s,t_2),\widetilde{g}_{1,t_2}(s,z)-\xi _{1,t_2}(s))ds. \end{aligned}$$

Since \(\int _0^{t_1}A_{1,1}(s,t_2)^2ds={{\mathrm{m}}}(0,t_2)-{{\mathrm{m}}}(t_1,t_2)\) is uniformly bounded, it suffices to show that

$$\begin{aligned} \sup _{z\in \widetilde{g}_{I,2}(t_2,\widetilde{J})}|\mathbf{H}({{\mathrm{m}}}(t_1,t_2),\widetilde{g}_{1,t_2}(t_1,z)-\xi _{1,t_2}(t_1))| \end{aligned}$$

is uniformly bounded. From the properties of \(\mathbf{H}\), it suffices to show that there is a constant \(h>0\) such that \({{\mathrm{Im }}}\widetilde{g}_{1,t_2}(t_1,\cdot )\circ \widetilde{g}_{I,2}(t_2,z)\ge h\) for any \(z\in \widetilde{J}\). This is equivalent to saying that \(|g_{1,t_2}(t_1,\cdot )\circ g_{I,2}(t_2,z)|\le e^{-h}\) for any \(z\in J\). It suffices to show that the extremal distance (c.f. [26]) between \(\mathbb {T}\) and \(g_{1,t_2}(t_1,\cdot )\circ g_{I,2}(t_2,J)\) is bounded below by a positive constant depending only on \(p\), \(J\), \(J_1\) and \(J_2\). From conformal invariance, that is equal to the extremal distance between \(J\) and \(\mathbb {T}_p\cup \beta _{I,2}((0,t_2])\), which is not smaller than the extremal distance between \(J\) and \(I_p(J_2)\) since \(I_p(J_2)\) separates \(J\) from \(\mathbb {T}_p\cup \beta _{I,2}((0,t_2])\). So we are done. \(\square \)

4.4 Local couplings and global coupling

Let \(\mu _j\) denote the distribution of \((\xi _j)\), \(j=1,2\). Let \(\mu =\mu _1\times \mu _2\). Then \(\mu \) is the joint distribution of \((\xi _1)\) and \((\xi _2)\), since \(\xi _1\) and \(\xi _2\) are independent. Fix \((J_1,J_2)\in {{\mathrm{JP}}}\). From the local martingale property of \(M\) and Proposition 4.1, we have \( \mathbf{E}\,_\mu [M(T_1(J_1),T_2(J_2))]=M(0,0)=1\). Define \(\nu _{J_1,J_2}\) by \(d\nu _{J_1,J_2}/d\mu =M(T_1(J_1),T_2(J_2))\). Then \(\nu _{J_1,J_2}\) is a probability measure. Let \(\nu _1\) and \(\nu _2\) be the two marginal measures of \(\nu _{J_1,J_2}\). Then \(d\nu _1/d\mu _1=M(T_1(J_1),0)=1\) and \(d\nu _2/d\mu _2=M(0,T_2(J_2))=1\), so \(\nu _j=\mu _j\), \(j=1,2\). Suppose temporarily that the joint distribution of \((\xi _1)\) and \((\xi _2)\) is \(\nu _{J_1,J_2}\) instead of \(\mu \). Then the distribution of each \((\xi _j)\) is still \(\mu _j\).
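The passage from \(\mu \) to \(\nu _{J_1,J_2}\) is an ordinary change of measure by the bounded positive density \(M(T_1(J_1),T_2(J_2))\). As a purely schematic illustration (this is not how the coupling is constructed in the proof, which works with the measures directly), one may think of it as importance reweighting of independent samples; the arrays and the function name below are hypothetical placeholders.

import numpy as np

def local_coupling_samples(xi1_samples, xi2_samples, weights, n_out, rng=None):
    # Resample independent pairs (xi1, xi2) ~ mu with probabilities proportional to
    # the density M(T_1(J_1), T_2(J_2)) evaluated on each pair, approximating nu_{J_1,J_2}.
    # weights: sampled values of M(T_1(J_1), T_2(J_2)); under mu their mean is 1.
    rng = np.random.default_rng() if rng is None else rng
    prob = np.asarray(weights, dtype=float)
    prob = prob / prob.sum()                  # self-normalized importance weights
    idx = rng.choice(len(prob), size=n_out, p=prob)
    return xi1_samples[idx], xi2_samples[idx]

Because \(M(\cdot ,0)\equiv M(0,\cdot )\equiv 1\), the marginals of \(\nu _{J_1,J_2}\) are still \(\mu _1\) and \(\mu _2\), so each resampled coordinate is (approximately) distributed as the corresponding original driving process.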

Fix an \(({\mathcal {F}}^2_t)\)-stopping time \(t_2\le T_2(J_2)\). From (4.39), (4.47), and Girsanov theorem (c.f. Chapter VIII, Section 1 of [23]), under the probability measure \(\nu _{J_1,J_2}\), there is an \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})_{t_1\ge 0}\)-Brownian motion \(\widetilde{B}_{1,t_2}(t_1)\) such that \(\xi _1(t_1)\), \(0\le t_1\le T_1(J_1)\), satisfies the \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{ t_2})_{t_1\ge 0}\)-adapted SDE:

$$\begin{aligned} d\xi _1(t_1)=\sqrt{\kappa }d \widetilde{B}_{1,t_2}(t_1)+\left( 3-\frac{\kappa }{2}\right) \frac{A_{1,2} }{A_{1,1} }dt_1+\Lambda _{1}({{\mathrm{m}}},X_1 )A_{1,1} dt_1, \end{aligned}$$

which together with (4.14), (4.22), and Itô’s formula implies that

$$\begin{aligned} d\xi _{1,t_2}(t_1)=A_{1,1} \sqrt{\kappa }d\widetilde{B}_{1,t_2}(t_1) +A_{1,1} ^2\Lambda _{1}({{\mathrm{m}}},\xi _{1,t_2}(t_1) -\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))) dt_1. \end{aligned}$$

Recall that \(\zeta _{1,t_2}(s_1)=\xi _{1,t_2}(v_{1,t_2}^{-1}(s_1))\) and \(\widetilde{h}_{I,1,t_2}(s_1,\cdot )=\widetilde{g}_{I,1,t_2}(v_{1,t_2}^{-1}(s_1),\cdot )\). So from (4.11) and (4.17), there is another Brownian motion \(\widehat{B}_{1,t_2}(s_1)\) such that for \(0\le s_1\le v_{1,t_2}(T_1(J_1))\),

$$\begin{aligned} d\zeta _{1,t_2}(s_1)=\sqrt{\kappa }d\widehat{B}_{1,t_2}(s_1)+\Lambda _{1}(p-t_2-s_1,\zeta _{1,t_2}(s_1) -\widetilde{h}_{I,1,t_2}(s_1,\xi _2(t_2)))ds_1. \end{aligned}$$

Moreover, the initial value is \( \zeta _{1,t_2}(0)=\xi _{1,t_2}(0)=\widetilde{G}_{I,2,t_2}(0,x_1)=\widetilde{g}_{I,2}(t_2,x_1)\). Thus, after a time-change, \(g_{I,2}(t_2,\beta _{1}(t_1))\), \(0\le t_1\le T_1(J_1)\), is a partial annulus SLE\((\kappa ,\Lambda _1)\) trace in \(\mathbb {A}_{p-t_2}\) started from \(g_{I,2}(t_2,a_{1})\) with marked point \(I_{p-t_2}\circ e^i(\xi _2(t_2))\). This means that, conditioned on \({\mathcal {F}}^2_{t_2}\), after a time-change, \(\beta _1(t_1)\), \(0\le t_1\le T_1(J_1)\), is a partial annulus SLE\((\kappa ,\Lambda _1)\) trace in \(\mathbb {A}_p{\setminus }\beta _{I,2}((0,t_2])\) started from \(a_1\) with marked point \(\beta _{I,2}(t_2)\). Similarly, the above statement holds true if the subscripts “\(1\)” and “\(2\)” are exchanged.

The joint distribution \(\nu _{J_1,J_2}\) is a local coupling such that the desired properties in the statement of Theorem 4.1 hold true up to the stopping times \(T_1(J_1)\) and \(T_2(J_2)\). Then we can apply the coupling technique developed in Section 7 of [10] to construct a global coupling using the local couplings for different pairs \((J_1,J_2)\).

The coupling technique is composed of several steps. First, let \(\{(J_1^k,J_2^k):k\in \mathbb {N}\}\) denote the set of all pairs in \({{\mathrm{JP}}}\) such that each \(J_j^k\), \(j=1,2\), \(k\in \mathbb {N}\), is a polygonal curve whose vertices have rational coordinates. Second, for every \(n\in \mathbb {N}\), one may find a coupling of \(\beta _1\) and \(\beta _2\) such that, for every \(1\le k\le n\), if \(\beta _1\) is stopped at \(\tau _{J_1^k}\) and \(\beta _2\) is stopped at \(\tau _{J_2^k}\), then the joint distribution is \(\nu _{J_1^k,J_2^k}\). To construct such a coupling, we work on the two-dimensional random process \(M\). One may prove that there is a process \(M_{n}\) defined on \([0,p]^2\), which satisfies the following properties:

  1. 1.

    \(M_{n}\) is a martingale in one variable, when the other variable is fixed;

  2. 2.

    \(M_{n}=1\) when either variable is \(0\);

  3. 3.

    \(M_{n}=M\) on \([0,\tau _{J_1^k}]\times [0,\tau _{J_2^k}]\), \(1\le k\le n\).

To construct \(M_{n}\), we use vertical lines \(\{t_1=\tau _{J_1^k}\}\) and horizontal lines \(\{t_2=\tau _{J_2^k}\}\), \(1\le k\le n\), to divide the square \([0,p]^2\) into smaller rectangles. First define \(M_{n}\) on

$$\begin{aligned} \bigcup _{k=1}^n [0,\tau _{J_1^k}]\times [0,\tau _{J_2^k}]\cup (\{0\}\times [0,p])\cup ([0,p]\times \{0\}) \end{aligned}$$

according to 2 and 3. Then we extend \(M_{n}\) to the other smaller rectangles one by one in such a way that in each rectangle \(R\) not contained in any \([0,\tau _{J_1^k}]\times [0,\tau _{J_2^k}]\), \(M_{n}(t_1,t_2)=f_R(t_1)g_R(t_2)\) for some suitable functions \(f_R\) and \(g_R\). Such an extension exists, is unique, and satisfies the desired properties. The reader is referred to Theorem 6.1 in [10] for details. The measure \(\nu _{n}\) is then defined by \(d\nu _{n}/d\mu =M_{n}(p,p)\). Finally, the global coupling measure \(\nu \) is any subsequential weak limit of the sequence \((\nu _{n})\) in some suitable topology.

4.5 Other results

Here we state without proofs some other results which can be proved using the idea in the proof of Theorem 4.1.

Theorem 4.2

Let \(\kappa >0\). Suppose \(\Gamma \) is a \(C^{1,2}\) differentiable function on \((0,\infty )\times (\mathbb {R}{\setminus }2\pi \mathbb {Z})\) that satisfies

$$\begin{aligned} {\partial _t{ \Gamma }}=\frac{\kappa }{2} { \Gamma ''} +\mathbf{H}{ \Gamma '}+\left( \frac{3}{\kappa }-\frac{1}{2}\right) \mathbf{H}' \Gamma . \end{aligned}$$
(4.48)

Let \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\), \(\Lambda _1=\Lambda \), and \(\Lambda _2\) be the dual of \(\Lambda \). Then for any \(p>0\) and \(a_1\ne a_2\in \mathbb {T}\), there is a coupling of two curves: \(\beta _1(t)\), \(0\le t<T_1\), and \(\beta _{2}(t)\), \(0\le t<T_2\), such that for \(j\ne k\in \{1,2\}\) the following hold.

  1. (i)

    \(\beta _j \) is an annulus SLE\((\kappa ,\Lambda _j)\) trace in \(\mathbb {A}_p\) started from \(a_j\) with marked point \(a_k\).

  2. (ii)

    If \(t_k\in [0,T_k)\) is a stopping time w.r.t. \((K_k(t))\), then conditioned on \(\beta _{k}(t)\), \(0\le t\le t_k\), after a time-change, \(\beta _j(t)\), \(0\le t<T_j(t_k)\), is a partial annulus SLE\((\kappa ,\Lambda _j)\) process in a component of \(\mathbb {A}_p{\setminus }\beta _k((0,t_k])\) started from \(a_j\) with marked point \(\beta _k(t_k)\), where \(T_j(t_k)\) is the first time that \(\beta _j\) hits \(\beta _k([0,t_k])\), and is set to be \(T_j\) if such a time does not exist. If \(\kappa \in (0,4]\), the word “partial” could be removed.

Remark

  1. 1.

    The \(\Lambda \) in the theorem satisfies the following partial differential equation:

    $$\begin{aligned} \partial _t \Lambda =\frac{\kappa }{2} \Lambda ''+\left( 3-\frac{\kappa }{2}\right) \mathbf{H}''+\Lambda \mathbf{H}'+\mathbf{H}\Lambda '+\Lambda \Lambda '. \end{aligned}$$
    (4.49)

    On the other hand, if \(\Lambda \) satisfies (4.49), then there is \(\Gamma \), which satisfies \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\) and (4.48).

  2. 2.

    Theorem 4.2 also holds for \(\kappa =0\) if \(\Lambda \) satisfies (4.49) with \(\kappa =0\).

  3. 3.

    We may also derive similar results for the radial SLE\((\kappa ,\Lambda )\) process and the strip SLE\((\kappa ,\Lambda )\) process. In these two cases, \(\Gamma \) and \(\Lambda \) are functions of a single variable, and \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\). If \(\Lambda =\frac{\rho }{2}\cot _2\) or \(\Lambda =\frac{\rho }{2}\coth _2\) in these two cases, then we get the radial SLE\((\kappa ,\rho )\) and strip SLE\((\kappa ,\rho )\) processes, respectively (c.f. [27]). For the radial SLE\((\kappa ,\Lambda )\) process, to have the commutation relation, we need \(\Gamma \) to solve the ODE

    $$\begin{aligned} 0=\frac{\kappa }{2} { \Gamma ''}+\cot _2{ \Gamma '}+\left( \frac{3}{\kappa }-\frac{1}{2}\right) \cot _2' \Gamma +C\Gamma , \end{aligned}$$
    (4.50)

    where \(C\) is a constant. For the strip SLE\((\kappa ,\Lambda )\) process, \(\Gamma \) must solve (4.50) with \(\cot _2\) replaced by \(\coth _2\) to guarantee the existence of the commutation coupling.

5 Coupling of whole-plane SLE

The goal of this section is to prove Theorem 5.1 below, which is about commutation couplings of two whole-plane SLE processes. This result will later be used to prove the whole-plane reversibility. Since the proof is similar to the proof of Theorem 4.1, we will frequently quote the arguments in the previous section.

Theorem 5.1

Let \(\kappa >0\) and \(s_0\in \mathbb {R}\). Suppose \(\Gamma \) is a positive \(C^{1,2}\) differentiable function on \((0,\infty )\times \mathbb {R}\) that satisfies (4.1) and (4.2). We also call \(\Gamma \) a partition function. Let \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\), \(\Lambda _1=\Lambda \), and \(\Lambda _2\) be the dual of \(\Lambda _1\). Let \(s_1=s_0\) and \(s_2=-s_0\). Then there is a coupling of two curves \(\beta _{I,1}(t)\), \(-\infty <t<\infty \), and \(\beta _{I,2}(t)\), \(-\infty <t<\infty \), such that for \(j\ne k\in \{1,2\}\), the following hold.

  1. (i)

    \(\beta _{I,j}\) is a whole-plane SLE\((\kappa ,s_j)\) trace in \(\widehat{\mathbb {C}}\) from \(0\) to \(\infty \);

  2. (ii)

    Let \(t_k\) be a finite stopping time w.r.t. \((K_{I,k}(t))\). Then conditioned on \(\beta _{I,k}(s)\), \(-\infty <s\le t_k\), after a time-change, the curve \(\beta _{I,j}(t_j)\), \(-\infty <t_j<T_j( t_k)\), is a disc SLE\((\kappa ,\Lambda _j)\) process in a connected component of \(\widehat{\mathbb {C}}{\setminus }I_0(\beta _{I,k}([-\infty ,t_k]))\) started from \(0\) with marked point \(I_0(\beta _{I,k}(t_k))\), where \(T_j(t_k)\) is the first time that \(\beta _{I,j}\) hits \(I_0(\beta _{I,k}([-\infty ,t_k]))\), or \(\infty \) if such a time does not exist.

5.1 Estimations on Loewner maps

Let \(\widetilde{g}(t,\cdot )\), \(t\in \mathbb {R}\), be the inverted covering whole-plane Loewner maps driven by some \(\xi \in C(\mathbb {R})\). Let \(z\in \mathbb {C}\) and \(h(t)={{\mathrm{Im }}}\widetilde{g}(t,z)>0\) for \(t\in (-\infty ,\tau _z)\), the interval on which \(\widetilde{g}(t,z)\) is defined. From (3.3) we have \(-\tanh _2(h(t))\ge h'(t)\ge -\coth _2(h(t))\), and

$$\begin{aligned} |\partial _t\widetilde{g}(t,z)+i|\le \frac{2}{e^{{{\mathrm{Im }}}\widetilde{g}(t,z)}-1}=\frac{2}{e^{h(t)}-1},\quad t\in (-\infty ,\tau _z). \end{aligned}$$
(5.1)

So \(h(t)\) decreases, and \(\frac{d}{dt} \ln (\cosh _2(h(t)))\ge -1/2\), which together with (3.4) and integration implies that \( \cosh _2(h(t))\ge \frac{1}{2} e^{\frac{{{\mathrm{Im }}}z}{2}-\frac{t}{2}}\). Then we have \(e^{h(t)}\ge e^{{{\mathrm{Im }}}z-t}-3\). From (5.1) we see that, if \(t<{{\mathrm{Im }}}z-\ln (8)\), then \(|\partial _t\widetilde{g}(t,z)+i|\le \frac{2}{e^{{{\mathrm{Im }}}z-t}-4}\le 4e^{t-{{\mathrm{Im }}}z}\). From (3.4) and integration we have

$$\begin{aligned} |\widetilde{g}(t,z)+it-z|\le 4 e^{t-{{\mathrm{Im }}}z}\le 1/2,\quad \text{ if } t\le {{\mathrm{Im }}}z-\ln (8). \end{aligned}$$
(5.2)

If \(\widetilde{g}_I(t,\cdot )\) are the covering whole-plane Loewner maps, then from \(\widetilde{g}(t,\cdot )=\widetilde{I}_0\circ \widetilde{g}_I(t,\cdot )\circ \widetilde{I}_0\), we have

$$\begin{aligned} |\widetilde{g}_I(t,z)-it-z|\le 4 e^{t+{{\mathrm{Im }}}z}\le 1/2,\quad \text{ if } t\le -{{\mathrm{Im }}}z- \ln (8). \end{aligned}$$
(5.3)

Let \(\widetilde{g}_I(t,\cdot )\), \(-\infty <t<0\), be the covering disc Loewner maps driven by some \(\xi \in C((-\infty ,0))\). Let \(z\in \mathbb {H}\) and \(h(t)={{\mathrm{Im }}}\widetilde{g}_I(t,z)>0\) for \(t\in (-\infty ,\tau _z)\). From Lemma 2.1 and (3.11) we see that if \(-t\ge h(t)+2\) then \(|h'(t)|\le 5.5e^{h(t)+t}\), so \(|\frac{d}{dt} e^{-h(t)}|\le 5.5 e^{t}\). From (3.12) we see that \(-t\ge h(t)+2\) when \(t\) is close to \(-\infty \). Suppose that \(-t\ge h(t)+2\) does not hold for all \(t\in (-\infty ,\tau _z)\), and let \(t_0\) be the first \(t\) such that \(-t=h(t)+2\). Then \(|\frac{d}{dt} e^{-h(t)}|\le 5.5 e^{t}\) on \((-\infty ,t_0]\). From (3.12) and integration we have \(e^{t_0+2}=e^{-h(t_0)}\ge e^{-{{\mathrm{Im }}}z}-5.5e^{t_0}\), which implies that \(e^{-{{\mathrm{Im }}}z}\le (e^2+5.5)e^{t_0}<13e^{t_0}\). Thus, if \(t\le -{{\mathrm{Im }}}z-\ln (13)\) then \(-t\ge h(t)+2\), so \(|h'(t)|\le 5.5e^{h(t)+t}\). From (3.12) and integration, we see that, if \(t\le -{{\mathrm{Im }}}z-\ln (13)\), then \(e^{-{{\mathrm{Im }}}\widetilde{g}_I(t,z)} \ge \frac{7.5}{13}e^{-{{\mathrm{Im }}}z}\), which implies that \({{\mathrm{Im }}}\widetilde{g}_I(t,z)\le {{\mathrm{Im }}}z+\ln (13/7.5)<-t-2\), which, together with Lemma 2.1, implies that \(|\mathbf{H}_I(-t,\widetilde{g}_I(t,z)-\xi (t))| \le 5.5\frac{13}{7.5} e^{{{\mathrm{Im }}}z+t}<10e^{{{\mathrm{Im }}}z+t}\). From (3.11), (3.12) and integration we have \( |\widetilde{g}_I(t,z)-z|\le 10 e^{{{\mathrm{Im }}}z+t}\), if \(t\le -{{\mathrm{Im }}}z- \ln (13)\). If \(\widetilde{g}(t,\cdot )\) are the inverted covering disc Loewner maps, then from \(\widetilde{g}(t,\cdot )=\widetilde{I}_{-t}\circ \widetilde{g}_I(t,\cdot )\circ \widetilde{I}_0\), we have

$$\begin{aligned} |\widetilde{g}(t,z)+it-z|\le 10 e^{-{{\mathrm{Im }}}z+t}\le 10/13,\quad \text{ if } t\le {{\mathrm{Im }}}z- \ln (13). \end{aligned}$$
(5.4)

5.2 Ensemble

The argument in this subsection is parallel to that in Sect. 4.2. Let \(\xi _1,\xi _2\in C(\mathbb {R})\). For \(j=1,2\), let \(g_{I,j}(t,\cdot )\) (resp. \(g_{j}(t,\cdot )\)), \(t\in \mathbb {R}\), be the whole-plane (resp. inverted whole-plane) Loewner maps driven by \(\xi _j\). Let \(\widetilde{g}_{I,j}(t,\cdot )\) and \(\widetilde{g}_{j}(t,\cdot )\), \(t\in \mathbb {R}\), \(j=1,2\), be the corresponding covering Loewner maps. Suppose \(\xi _j\) generates a simple whole-plane Loewner trace: \(\beta _{I,j}\), \(j=1,2\). Let \(\beta _{j}=I_0\circ \beta _{I,j}\), \(j=1,2\), be the inverted trace. Let \(K_j(t)\) and \(K_{I,j}(t)\) be the corresponding hulls. Define \({\mathcal {D}}\) and \({{\mathrm{m}}}\) using (4.9) and (4.10) with \(0\) replaced by \(-\infty \) and \(\mathbb {A}_p\) replaced by \(\widehat{\mathbb {C}}\). Fix any \(j\ne k\in \{1,2\}\) and \(t_k\in \mathbb {R}\). Let \(T_j(t_k)\) be defined as before. Then for any \(t_j<T_j(t_k)\), we have \((t_1,t_2)\in {\mathcal {D}}\). Moreover, as \(t_j\rightarrow T_j(t_k)^-\), \({{\mathrm{m}}}(t_1,t_2)\rightarrow 0\).

For \(-\infty \le t_j<T_j(t_k)\), let \(\beta _{I,j,t_k}(t_j)= g_{k}(t_k,\beta _{I,j}(t_j))\). Then \(\beta _{I,j,t_k}\) is a simple curve in \(\mathbb {D}\) that starts from \(0\). For \(-\infty <t_j<T_j(t_k)\), let \(v_{j,t_k}(t_j)=-{{\mathrm{mod}}}(\mathbb {D}{\setminus } \beta _{I,j, t_k}([-\infty ,t_j]))=-{{\mathrm{m}}}(t_1,t_2)\). Then \(v_{j, t_k}\) is continuous and increasing and maps \((-\infty ,T_j(t_k))\) onto \((-\infty ,0)\). Let \(\gamma _{I,j,t_k}(t)=\beta _{I,j,t_k}(v_{j,t_k}^{-1}(t))\), \(-\infty \le t<0\). Then \(\gamma _{I,j,t_k}\) is the disc Loewner trace driven by some \(\zeta _{j,t_k}\in C((-\infty ,0))\). Let \(\gamma _{j,t_k}\) be the corresponding inverted disc Loewner trace. Let \(h_{I,j,t_k}(t,\cdot )\) and \(h_{j,t_k}(t,\cdot )\) be the corresponding disc and inverted disc Loewner maps. Let \(\widetilde{h}_{I,j,t_k}(t,\cdot )\) and \(\widetilde{h}_{j,t_k}(t,\cdot )\) be the corresponding covering Loewner maps.

For \(-\infty <t_j<T_j(t_k)\), let \(\xi _{j,t_k}(t_j)\), \(\beta _{j,t_k}(t_j)\), \(g_{I,j,t_k}(t_j,\cdot )\), \(g_{j,t_k}(t_j,\cdot )\), \(\widetilde{g}_{I,j,t_k}(t_j,\cdot )\), and \(\widetilde{g}_{j,t_k}(t_j,\cdot )\) be the time-changes of \(\zeta _{j,t_k}(t)\), \(\gamma _{j,t_k}(t)\), \(h_{I,j,t_k}(t,\cdot )\), \( h_{j,t_k}(t,\cdot )\), \( \widetilde{h}_{I,j,t_k}(t,\cdot )\), and \( \widetilde{h}_{j,t_k}(t,\cdot )\), respectively, via \(v_{j,t_k}\).

Define \(G_{I,k,t_k}(t_j,\cdot )\) and \(\widetilde{G}_{I,k,t_k}(t_j,\cdot )\) by (4.12) and (4.13). Then we could choose the driving function \(\zeta _{j,t_k}\) such that (4.14) holds. Define \(A_{j,h}\) and \(A_{j,S}\) using (4.15) and (4.16). A standard argument using Lemma 2.1 in [17] shows (4.17) and (4.18) hold here. From the definition of \(\widetilde{G}_{I,k,t_k}(t_j,\cdot )\), we get (4.21), which can be differentiated to conclude that (4.22) holds here, and (4.23) holds with \(p-t_j\) replaced by \(\infty \). Let \(X_j\) be defined by (4.28). Then (4.29) holds. Let \(Q\) be defined by (4.30). Then (4.31)–(4.36) still hold.

From Lemma 2.1, we have

$$\begin{aligned} Q=O(e^{-{{\mathrm{m}}}}),\quad \text{ as } {{\mathrm{m}}}\rightarrow \infty . \end{aligned}$$
(5.5)

From Lemma 4.4, we see that, for \(j=1,2\),

$$\begin{aligned} \ln (A_{j,1}),A_{j,S}=O({{\mathrm{m}}}e^{-{{\mathrm{m}}}}), \quad \text{ as } {{\mathrm{m}}}\rightarrow \infty . \end{aligned}$$
(5.6)

Since \(e^{t_j}\) is the capacity of \(K_{I,j}(t_j)\), which contains \(0\), we have \(K_{I,j}(t_j)\subset \{|z|\le 4e^{t_j}\}\). This then implies that \(K_j(t_j)\subset \{|z|\ge e^{-t_j}/4\}\), \(\widetilde{K}_{I,j}(t_j)\subset \{{{\mathrm{Im }}}z\ge -t_j-\ln (4)\}\), and \(\widetilde{K}_j(t_j)\subset \{{{\mathrm{Im }}}z\le \ln (4)+t_j\}\). Thus,

$$\begin{aligned}&\{(t_1,t_2)\in \mathbb {R}^2:t_1+t_2 <-\ln (16)\}\subset {\mathcal {D}},\end{aligned}$$
(5.7)
$$\begin{aligned}&{{\mathrm{m}}}(t_1,t_2)\ge -t_1-t_2-\ln (16),\quad \text{ if } (t_1,t_2)\in {\mathcal {D}}. \end{aligned}$$
(5.8)
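In more detail, (5.7) and (5.8) follow from these inclusions: if \(t_1+t_2<-\ln (16)\), then \(4e^{t_2}<e^{-t_1}/4\), so \(K_{I,2}(t_2)\subset \{|z|\le 4e^{t_2}\}\) and \(K_1(t_1)\subset \{|z|\ge e^{-t_1}/4\}\) are disjoint, which gives (5.7). Moreover, the round annulus \(\{4e^{t_2}<|z|<e^{-t_1}/4\}\), whose modulus is \(-t_1-t_2-\ln (16)\), then separates \(\beta _{I,2}([-\infty ,t_2])\) from \(\beta _1([-\infty ,t_1])\), so by the monotonicity of the modulus, \({{\mathrm{m}}}(t_1,t_2)\ge -t_1-t_2-\ln (16)\). When \(t_1+t_2\ge -\ln (16)\), the bound in (5.8) is non-positive, hence trivial.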

From (5.5)–(5.8), we see that \( A_{1,1}^2 A_{2,1}^2 Q=O(e^{t_1+t_2})\) as \(t_1+t_2\rightarrow -\infty \). Define \(F \) on \({\mathcal {D}}\) using (4.37) with the lower bounds \(0\) replaced by \(-\infty \). From Lemma 4.4, \(A_{k,S}\rightarrow 0\) as \(t_j\rightarrow -\infty \). Thus, (4.38) still holds here, and \(\ln (F(t_1,t_2))=\int _{-\infty }^{t_1}\frac{A_{1,S}(s_1,t_2)}{A_{1,1}(s_1,t_2)^2} \cdot A_{1,1}(s_1,t_2)^2ds_1\). Changing variables with \(x(s_1)={{\mathrm{m}}}(s_1,t_2)\), and using (4.18) and (5.6), we conclude that

$$\begin{aligned} \ln (F)=O({{\mathrm{m}}}e^{-{{\mathrm{m}}}}),\quad \text{ as } {{\mathrm{m}}}\rightarrow \infty . \end{aligned}$$
(5.9)

5.3 Martingales in two time variables

The argument in this subsection is parallel to that in Sect. 4.3. Let \((B^{(\kappa )}_1(t))\) and \((B^{(\kappa )}_2(t))\) be two independent pre-\((\mathbb {T};\kappa )\)-Brownian motions. Let \(\xi _j(t)=B^{(\kappa )}_j(t)+s_jt\), \(t\in \mathbb {R}\), \(j=1,2\). For simplicity, suppose \(\kappa \in (0,4]\). Then for \(j=1,2\), a.s. \((\xi _j)\) generates a simple whole-plane Loewner trace \(\beta _{I,j}\), which is a whole-plane SLE\((\kappa ,s_j)\) trace in \(\widehat{\mathbb {C}}\) from \(0\) to \(\infty \). We may apply the results in the prior subsection. For \(j=1,2\), let \(({\mathcal {F}}^j_t)_{t\in \mathbb {R}}\) be the complete filtration generated by \(e^i(\xi _j)\). The whole-plane Loewner objects driven by \(\xi _j\) are all \(({\mathcal {F}}^j_t)\)-adapted, because they are all determined by \((e^i(\xi _j(t)))\). It is easy to check that for \(j\ne k\in \{1,2\}\), the processes \((\beta _{I,j,t_k})\), \((\widetilde{g}_{I,j,t_k}(t_j,\cdot ))\), \((A_{j,h})\), \(h=1,2,3\), \((A_{j,S})\), \((G_{I,j,t_j}(t_k,\cdot ))\), \((\widetilde{G}_{I,j,t_j}(t_k,\cdot ))\), \((e^i(\xi _{j,t_k}))\), \((e^i(X_j))\), \(({{\mathrm{m}}})\), \((\mathbf{H}_I^{(h)}({{\mathrm{m}}},X_1))\), \((\Gamma _{j}({{\mathrm{m}}},X_j))\), \((\Lambda _{j}({{\mathrm{m}}},X_j))\), \((Q)\) and \((F)\) defined on \({\mathcal {D}}\) are all \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted. This is not true for \((\xi _{j,t_k}(t_j))\) and \((X_j)\), but is true for their images under the map \(e^i\). Define \(Y\) using (4.41). Then \((Y)\) is also \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted since \(\Gamma _j\) has period \(2\pi \).

In this section we work with SDEs in the sense of Definition 2.2: the stochastic part contains pre-\((\mathbb {T};\kappa )\)-Brownian motions, and the time intervals start from \(-\infty \). The traditional Itô formula works only for time intervals that start from \(0\) or a finite number. To derive the results in this section, we may truncate the interval “\((-\infty ,T)\)” at an arbitrary real number \(c\) close to \(-\infty \) and work on the interval \([c,T)\). Fix \(j\ne k\in \{1,2\}\) and any \(({\mathcal {F}}^k_t)\)-stopping time \(t_k\in \mathbb {R}\). Let \({\mathcal {F}}^{j,t_k}_{t_j}={\mathcal {F}}^j_{t_j}\times {\mathcal {F}}^k_{t_k}\), \(t_j\in \mathbb {R}\). From now on, all SDE will be \(({\mathcal {F}}^{j,t_k}_{t_j})\)-adapted (with the meaning as in Definition 2.2), and \(t_j\) ranges in \((-\infty ,T_j(t_k))\).

First, we find that (4.40) still holds here, which then implies (4.42). From the modified (4.23), we see that (4.43) holds here with \(p-t_j\) replaced by \(\infty \). Let \(\widehat{M}\) be defined by (4.44). Then (4.45) holds with \(p-t_j\) replaced by \(\infty \).

Define \(M\) on \({\mathcal {D}}\) by

$$\begin{aligned} M=\widehat{M} \exp \left( \alpha \mathbf{r}(\infty ) ({{\mathrm{m}}}+t_1+t_2)+\sum _{j=1,2}-\frac{s_j}{\kappa }\xi _j(t_j)+\frac{s_j^2}{2\kappa }t_j\right) . \end{aligned}$$
(5.10)

Then \(M\) is \(({\mathcal {F}}^1_{t_1}\times {\mathcal {F}}^2_{t_2})\)-adapted. From the modified (4.45) and that \(\xi _j(t_j)=B^{(\kappa )}_j(t_j)+s_jt_j\), we compute

$$\begin{aligned} \frac{\partial _j M}{ M}=\left[ \left( 3-\frac{\kappa }{2}\right) \frac{A_{j,2}}{A_{j,1}}+ {A_{j,1}} \Lambda _{j}({{\mathrm{m}}},X_j)-s_j\right] \frac{\partial B^{(\kappa )}_j(t_j)}{\kappa }. \end{aligned}$$
(5.11)

So \(M\) is a local martingale in \(t_j\) when \(t_k\) is a finite stopping time.

Let \({\mathcal {J}}\) denote the set of Jordan curves in \(\mathbb {C}{\setminus }\{0\}\) that surround \(0\). For \(J\in \mathcal {J}\) and \(j=1,2\), let \(T_j(J)\) denote the first time that \(\beta _j\) hits \(J\). Then \(T_j(J)\) is also the first time that \(\beta _{I,j}\) hits \(I_0(J)\). Let \(H_J\) denote the closure of the domain bounded by \(I_0(J)\), and let \(C_J\) denote the capacity of \(H_J\). If \(K_{I,j}(t)\subset H_J\), then \(e^t\le C_J\). So we have \(T_j(J)\le \ln (C_J)\).

Let \({{\mathrm{JP}}}\) denote the set of pairs \((J_1,J_2)\in {\mathcal {J}}^2\) such that \(I_0(J_1)\cap J_2=\emptyset \) and \(I_0(J_1)\) is surrounded by \(J_2\). This is equivalent to the condition that \(I_0(J_2)\cap J_1=\emptyset \) and \(I_0(J_2)\) is surrounded by \(J_1\). Then for every \((J_1,J_2)\in {{\mathrm{JP}}}\), \(\beta _{I,1}(t_1)\ne \beta _{2}(t_2)\) when \(t_1\le T_{1}(J_1)\) and \(t_2\le T_2(J_2)\), so \((-\infty ,T_{1}(J_1)]\times (-\infty ,T_{2}(J_2)]\subset {\mathcal {D}}\).

Proposition 5.1

(Boundedness) Fix \((J_1,J_2)\in {{\mathrm{JP}}}\). (i) \(|\ln (M)|\) is bounded on \((-\infty ,T_{1}(J_1)]\times (-\infty ,T_2(J_2)]\) by a constant depending only on \(J_1\) and \(J_2\). (ii) Fix any \(j\ne k\in \{1,2\}\). Then \(\ln (M)\rightarrow 0\) as \(t_j\rightarrow -\infty \) uniformly in \(t_k\in (-\infty ,T_k(J_k)]\).

Proof

Let \(\Gamma _{s_0}\) be given by Lemma 4.3. Let \(\Gamma _{s_0,1}=\Gamma _{s_0}\), and \(\Gamma _{s_0,2}(t,x)=\Gamma _{s_0}(t,-x)\). Define \(Y_{s_0}\) on \({\mathcal {D}}\) by \(Y_{s_0}=\Gamma _{s_0,1}({{\mathrm{m}}},X_1)=\Gamma _{s_0,2}({{\mathrm{m}}},X_2)\). Then \(Y_{s_0}=Y\exp (-\frac{s_0}{\kappa }X_1-\frac{s_0^2{{\mathrm{m}}}}{2\kappa })\). From Lemma 4.3,

$$\begin{aligned} \ln (Y_{s_0})=o({{\mathrm{m}}})\quad \text{ as } {{\mathrm{m}}}\rightarrow \infty . \end{aligned}$$
(5.12)

Define \(M_{s_0}\) using (4.44) with \(Y\) replaced by \(Y_{s_0}\). From (5.10) we have

$$\begin{aligned} M=M_{s_0}\exp \left( \left( \alpha \mathbf{r}(\infty )+\frac{s_0^2}{2\kappa }\right) ({{\mathrm{m}}}+t_1+t_2)+\frac{s_0}{\kappa }(X_1-\xi _1(t_1)+\xi _2(t_2))\right) .\nonumber \\ \end{aligned}$$
(5.13)

From (5.6), (5.9), (5.12), and that \(\mathbf{R}(p)=O(e^{-p})\) as \(p\rightarrow \infty \), we see that there is a positive continuous function \(f\) on \((0,\infty )\) with \(\lim _{x\rightarrow \infty } f(x)= 0\) such that

$$\begin{aligned} |\ln (M_{s_0}(t_1,t_2))|\le f({{\mathrm{m}}}(t_1,t_2)). \end{aligned}$$
(5.14)

Let \(\Omega (I_0(J_1),J_2)\) denote the doubly connected domain bounded by \(I_0(J_1)\) and \(J_2\). Let \(p_0>0\) denote its modulus. For \((t_1,t_2)\in (-\infty ,T_{1}(J_1)]\times (-\infty ,T_{2}(J_2)]\), since \(\Omega (I_0(J_1),J_2)\) disconnects \(K_{I,1}(t_1)\) from \(K_2(t_2)\), we have \({{\mathrm{m}}}(t_1,t_2)\ge p_0\). On the other hand, \({{\mathrm{m}}}\le p\). From (5.14) we see that \(\ln (M_{s_0})\) is uniformly bounded. From (5.7), (5.8), (5.14), and that \(T_k(J_k)\le C_{J_k}<\infty \), we see that \(\ln (M)\rightarrow 0\) as \(t_j\rightarrow -\infty \) uniformly in \(t_k\in (-\infty ,T_k(J_k)]\). The rest of the proof follows from (5.13) and the following proposition. \(\square \)

Proposition 5.2

Fix \((J_1,J_2)\in {{\mathrm{JP}}}\). (i) \(|X_1-\xi _1+\xi _2|\) and \(|{{\mathrm{m}}}+t_1+t_2|\) are bounded on \((-\infty ,T_1(J_1)]\times (-\infty ,T_2(J_2)]\) by constants depending only on \(J_1\) and \(J_2\). (ii) For any \(j\ne k\in \{1,2\}\), \(X_1-\xi _1+\xi _2\rightarrow 0\) and \({{\mathrm{m}}}+t_1+t_2\rightarrow 0\) as \(t_j\rightarrow -\infty \), uniformly in \(t_k\in (-\infty ,T_k(J_k)]\).

Proof

Recall that \(T_j(J_j)\le C_{J_j}<\infty \) for \(j=1,2\), and \({{\mathrm{m}}}\ge p_0>0\) on \((-\infty ,T_1(J_1)]\times (-\infty ,T_2(J_2)]\). If there is no ambiguity, let \(\Omega (A,B)\) denote the domain bounded by sets \(A\) and \(B\), and let \({{\mathrm{mod}}}(A,B)\) denote the modulus of this domain if it is doubly connected.

From (4.28) we have \(X_1(t_1,t_2)=\widetilde{G}_{I,2,t_2}(t_1,\xi _1(t_1))-\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))\). So

$$\begin{aligned} |X_1(t_1,t_2)-\xi _1(t_1)+\xi _2(t_2)|&\le |\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))-\xi _2(t_2)|\nonumber \\&+|\widetilde{G}_{I,2,t_2}(t_1,\xi _1(t_1))-\xi _1(t_1)|. \end{aligned}$$
(5.15)

From (3.12) we have \(\lim _{t_1\rightarrow -\infty } \widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))=\xi _2(t_2)\). From (4.18), (4.20), and Lemma 2.1, we see that there is a deterministic positive decreasing function \(f(x)\) with \(\lim _{x\rightarrow \infty }f(x)= 0\) such that \(|\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))-\xi _2(t_2)|\le f({{\mathrm{m}}}(t_1,t_2))\). Since \({{\mathrm{m}}}\ge p_0\) on \((-\infty ,T_{1}(J_1)]\times (-\infty ,T_2(J_2)]\), \(|\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))-\xi _2(t_2)|\) is uniformly bounded by \(f(p_0)\). From (5.8) and that \(T_2(J_2)\le C_{J_2}\), we see that \(\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))-\xi _2(t_2)\rightarrow 0\) as \(t_1\rightarrow -\infty \), uniformly in \(t_2\in (-\infty ,T_2(J_2)]\).

Let \(J\) be a Jordan curve separating \(J_1\) and \(J_{I,2}\). Let \(p_1={{\mathrm{mod}}}(J,J_1)\) and \(p_2={{\mathrm{mod}}}(J,J_{I,2})\). Let \(\widetilde{J}=(e^i)^{-1}(J)\). Let \(h_m=\inf \{{{\mathrm{Im }}}z:z\in \widetilde{J}\}\) and \(h_M=\sup \{{{\mathrm{Im }}}z:z\in \widetilde{J}\}\). Then both \(h_m\) and \(h_M\) are finite. For \(j=1,2\), there is \(h_j>0\) depending only on \(p_j\), such that, if \(K\subset \mathbb {D}\) is an interior hull with \(0\in K\) and \({{\mathrm{mod}}}(\mathbb {D}{\setminus }K)\ge p_j\), then \(K\subset \{|z|\le e^{-h_j}\}\). If \(t_1\le T_1(J_1)\), then \(J_1\) disconnects \(J\) from \(K_1(t_1)\), so \({{\mathrm{mod}}}(J,K_1(t_1))\ge p_1\). Since \(\Omega (J,K_1(t_1))\) is mapped by \(g_1(t_1,\cdot )\) conformally onto \(\Omega (g_1(t_1,J),\mathbb {T})\subset \mathbb {D}\), \({{\mathrm{mod}}}(g_1(t_1,J),\mathbb {T})\ge p_1\). Since \(g_1(t_1,J)\) surrounds \(0\), \(g_1(t_1,J)\subset \{|z|\le e^{-h_1}\}\). Since \(\widetilde{g}_1(t_1,\widetilde{J})=(e^i)^{-1}(g_1(t_1,J))\), \(\widetilde{g}_1(t_1,\widetilde{J})\subset \{{{\mathrm{Im }}}z\ge h_1\}\). Similarly, if \(t_2\le T_2(J_2)\), then \(\widetilde{g}_{I,2}(t_2,\widetilde{J})\subset \{{{\mathrm{Im }}}z\le -h_2\}\). If \(t_1\le T_1(J_1)\) and \(t_2\le T_2(J_2)\), then \(g_{1,t_2}(t_1,\cdot )\circ g_{I,2}(t_2,\cdot )\) maps \(\mathbb {C}{\setminus }K_1(t_1){\setminus }K_{I,2}(t_2)\) conformally onto \(\mathbb {A}_{{{\mathrm{m}}}}\). A similar argument shows that the image of \(J\) under this map lies in \(\{e^{-{{\mathrm{m}}}+h_2}\le |z|\le e^{-h_1}\}\). Thus, \(\widetilde{g}_{1,t_2}(t_1,\widetilde{g}_{I,2}(t_2,\widetilde{J}))\subset \{h_1\le {{\mathrm{Im }}}z\le {{\mathrm{m}}}-h_2\}\), if \(t_1\le T_1(J_1)\) and \(t_2\le T_2(J_2)\).

Let \(z_0\in \mathbb {C}{\setminus }\widetilde{K}_1(t_1){\setminus }\widetilde{K}_{I,2}(t_2)\), \(w_1=\widetilde{g}_1(t_1,z_0)\), \(w_2=\widetilde{g}_{I,2}(t_2,z_0)\), and \(w_3=\widetilde{g}_{1,t_2}(t_1,w_2)\). From (5.2), (5.3), and (5.4) we see that

$$\begin{aligned}&|w_1-(z_0-it_1)|\le 4e^{t_1-{{\mathrm{Im }}}z_0}\le 1/2,\quad \text{ if } t_1\le {{\mathrm{Im }}}z_0-\ln (8);\quad \end{aligned}$$
(5.16)
$$\begin{aligned}&|w_2-(z_0+it_2)|\le 4e^{t_2+{{\mathrm{Im }}}z_0}\le 1/2,\quad \text{ if } t_2\le -{{\mathrm{Im }}}z_0-\ln (8);\quad \end{aligned}$$
(5.17)
$$\begin{aligned}&|w_3-(w_2+i{{\mathrm{m}}})|\le 10e^{-{{\mathrm{m}}}-{{\mathrm{Im }}}w_2}<1,\quad \text{ if } {{\mathrm{Im }}}w_2+{{\mathrm{m}}}\ge \ln (13).\quad \quad \end{aligned}$$
(5.18)

Now let \(z_0\in \widetilde{J}\). From the prior paragraph, \({{\mathrm{Im }}}\widetilde{g}_1(s,z_0)\ge h_1\) for \(s\le t_1\), \({{\mathrm{Im }}}\widetilde{g}_{I,2}(s,z_0)\le -h_2\) for \(s\le t_2\), and \({{\mathrm{m}}}(s,t_2)-h_2\ge {{\mathrm{Im }}}\widetilde{g}_{1,t_2}(s,w_2)\ge h_1\) for \(s\le t_1\). From (5.1) we have \(|\partial _t\widetilde{g}_1(s,z_0)+i|\le \frac{2}{e^{h_1}-1}\), for \(s\le t_1\). Similarly, \(|\partial _t\widetilde{g}_{I,2}(s,z_0)-i|\le \frac{2}{e^{h_2}-1}\) for \(s\le t_2\). If \(t_1\le {{\mathrm{Im }}}z_0-\ln (8)\), then from (5.16), \(|w_1-(z_0-it_1)|\le 1/2\). If \(t_1>{{\mathrm{Im }}}z_0-\ln (8)\), we let \(t_1'={{\mathrm{Im }}}z_0-\ln (8)\), and \(w_1'=\widetilde{g}_1(t_1',z_0)\). Then we have \(|w_1'-(z_0-it_1')|\le 1/2\). From the bound on \(|\partial _t\widetilde{g}_1(s,z_0)+i|\), we see that

$$\begin{aligned} |(w_1+it_1)-(w_1'+it_1')|&\le \frac{2(t_1-t_1')}{e^{h_1}-1}\le \frac{2C_{J_1}-2({{\mathrm{Im }}}z_0-\ln (8))}{e^{h_1}-1}\\&\le \frac{2C_{J_1}+2\ln (8)-2h_m}{e^{h_1}-1}. \end{aligned}$$

Let \(A_1=\frac{1}{2} +\max \left\{ 0,\frac{2C_{J_1}+2\ln (8)-2h_m}{e^{h_1}-1} \right\} \). Then in all cases we have

$$\begin{aligned} |w_1-(z_0-it_1)|\le A_1. \end{aligned}$$
(5.19)

Similarly, let \(A_2=\frac{1}{2} +\max \left\{ 0,\frac{2C_{J_2}+2\ln (8)+2h_M}{e^{h_2}-1} \right\} \). Then we always have

$$\begin{aligned} |w_2-(z_0+it_2)|\le A_2. \end{aligned}$$
(5.20)

Since \({{\mathrm{Im }}}z_0\ge h_m\), we have

$$\begin{aligned} t_2-{{\mathrm{Im }}}w_2\le A_2-{{\mathrm{Im }}}z_0\le A_2-h_m. \end{aligned}$$
(5.21)

If \({{\mathrm{Im }}}w_2+{{\mathrm{m}}}(t_1,t_2)\ge \ln (13)\), from (5.18), we have \(|w_3-(w_2+i{{\mathrm{m}}}(t_1,t_2))|<1\). Now suppose that \({{\mathrm{Im }}}w_2+{{\mathrm{m}}}(t_1,t_2)< \ln (13)\). We may choose \(t_1'<t_1\) such that \({{\mathrm{Im }}}w_2+{{\mathrm{m}}}(t_1',t_2)=\ln (13)\). Let \(w_3'=\widetilde{g}_{1,t_2}(t_1',w_2)\). Then we have \(|w_3'-(w_2+i{{\mathrm{m}}}(t_1',t_2))|<1\). For \(s\le t_1\), since \(h_1\le {{\mathrm{Im }}}\widetilde{g}_{1,t_2}(s,w_2)\le {{\mathrm{m}}}(s,t_2)-h_2\), from Lemma 2.1 we have

$$\begin{aligned} |\mathbf{H}_I({{\mathrm{m}}}(s,t_2),i{{\mathrm{m}}}(s,t_2)-\widetilde{g}_{1,t_2}(s,w_2)+\xi _{1,t_2}(s))|\le \frac{4e^{-h_1}}{(1-e^{-h_1})^3}. \end{aligned}$$

Since \(\mathbf{H}({{\mathrm{m}}},z)+i=\mathbf{H}_I({{\mathrm{m}}},z-i{{\mathrm{m}}})=-\mathbf{H}_I({{\mathrm{m}}},i{{\mathrm{m}}}-z)\), we have

$$\begin{aligned} |\mathbf{H}({{\mathrm{m}}}(s,t_2),\widetilde{g}_{1,t_2}(s,w_2)-\xi _{1,t_2}(s))+i|\le \frac{4e^{-h_1}}{(1-e^{-h_1})^3},\quad \text{ if } s\le t_1. \end{aligned}$$

Let \(C_0=\frac{4e^{-h_1}}{(1-e^{-h_1})^3}\). From (4.18), (4.20), (5.8), (5.21), and the above inequality, we have

$$\begin{aligned} |(w_3-i{{\mathrm{m}}}(t_1,t_2))-(w_3'-i{{\mathrm{m}}}(t_1',t_2))|&\le C_0({{\mathrm{m}}}(t_1',t_2)-{{\mathrm{m}}}(t_1,t_2))\\&\le C_0(\ln (13)-{{\mathrm{Im }}}w_2+t_1+t_2+\ln (16))\\&\le C_0(\ln (13)+\ln (16)+C_{J_1}+A_2-h_m). \end{aligned}$$

Let \(A_3=1+\max \{0,C_0(\ln (13)+\ln (16)+C_{J_1}+A_2-h_m)\}\). Then \( |w_3-(w_2+i{{\mathrm{m}}})|\le A_3 \) always holds, which together with (5.19) and (5.20) implies that, for any \(t_1\le T_1(J_1)\) and \(t_2\le T_2(J_2)\),

$$\begin{aligned} |\widetilde{G}_{I,2,t_2}(t_1,w_1)-w_1-i({{\mathrm{m}}}+t_1+t_2)|\le A_1+A_2+A_3,\quad w_1\in \widetilde{g}_1(t_1,\widetilde{J}).\quad \quad \quad \end{aligned}$$
(5.22)

Now \(\widetilde{g}_1(t_1,\widetilde{J})\) is a curve with period \(2\pi \) lying above \(\mathbb {R}\); the function \(w\mapsto \widetilde{G}_{I,2,t_2}(t_1,w)-w\) has period \(2\pi \), is analytic in \(\Omega (\widetilde{g}_1(t_1,\widetilde{J}),\mathbb {R})\), and its imaginary part vanishes on \(\mathbb {R}\). Applying the maximum principle to the real part of this function, and using (5.22), we conclude that

$$\begin{aligned} |\widetilde{G}_{I,2,t_2}(t_1,\xi _1(t_1))-\xi _1(t_1)|\le A_1+A_2+A_3,\quad \text{ if } t_1\le T_1(J_1) \text{ and } t_2\le T_2(J_2). \end{aligned}$$

This together with (5.15) and the estimation of \(|\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))-\xi _2(t_2)|\) implies that \(|X_1-\xi _1+\xi _2|\) is uniformly bounded on \((-\infty ,T_1(J_1)]\times (-\infty ,T_2(J_2)]\).

Since \(G_{I,2,t_2}(t_1,\cdot )\) maps \(\mathbb {T}\) onto \(\mathbb {T}\), and is conformal in the domain that contains the region between \(g_1(t_1,J)\) and \(\mathbb {T}\), there must exist \(z_1\in g_1(t_1,J)\) such that \(|G_{I,2,t_2}(t_1,z_1)|=|z_1|\). Choose \(w_1\in \widetilde{g}_1(t_1,\widetilde{J})\) such that \(e^i(w_1)=z_1\). Then \({{\mathrm{Im }}}\widetilde{G}_{I,2,t_2}(t_1,w_1)={{\mathrm{Im }}}w_1\). From (5.22) we get \(|{{\mathrm{m}}}+t_1+t_2|\le A_1+A_2+A_3\), if \(t_1\le T_1(J_1)\) and \(t_2\le T_2(J_2)\), which finishes the proof of (i).

Now suppose \(t_1+t_2\le -1-2\ln (13)-2\ln (16)\) and \({{\mathrm{Im }}}z_0=\frac{t_1-t_2}{2}\). Then

$$\begin{aligned} {{\mathrm{Im }}}z_0-t_1=-{{\mathrm{Im }}}z_0-t_2=-\frac{t_1+t_2}{2}\ge \frac{1}{2}+\ln (13)+\ln (16)\ge \ln (8). \end{aligned}$$

Since \(\widetilde{K}_1(t_1)\subset \{{{\mathrm{Im }}}z\le \ln (4)+t_1\}\) and \(\widetilde{K}_{I,2}(t_2)\subset \{{{\mathrm{Im }}}z\ge -t_2-\ln (4)\}\), we have \(z_0\in \mathbb {C}{\setminus }\widetilde{K}_1(t_1){\setminus }\widetilde{K}_{I,2}(t_2)\). From (5.16) and (5.17) we have

$$\begin{aligned} |w_1-(z_0-it_1)|,|w_2-(z_0+it_2)|\le 4e^{\frac{t_1+t_2}{2}}\le 1/2. \end{aligned}$$
(5.23)

From (5.8), (5.23), and the upper bound of \(t_1+t_2\), we have

$$\begin{aligned} {{\mathrm{Im }}}w_2+{{\mathrm{m}}}\ge {{\mathrm{Im }}}z_0+t_2-\frac{1}{2}-t_1-t_2-\ln (16)=-\frac{t_1+t_2\!+1}{2}-\ln (16)\ge \ln (13). \end{aligned}$$

Thus, from (5.18) and the above inequality we have

$$\begin{aligned} |\widetilde{g}_{1,t_2}(t_1,w_2)-(w_2+i{{\mathrm{m}}})|\le 10e^{-{{\mathrm{m}}}-{{\mathrm{Im }}}w_2}\le 264e^{\frac{t_1+t_2}{2}}. \end{aligned}$$
(5.24)

From (4.13), (5.23), and (5.24) we see that if \(t_1+t_2\le -1-2\ln (13)-2\ln (16)\), then

$$\begin{aligned} |\widetilde{G}_{I,2,t_2}(t_1,w_1)-w_1-i({{\mathrm{m}}}+t_1+t_2)|\le 272 e^{\frac{t_1+t_2}{2}},\quad w_1\in \widetilde{g}_1(t_1,\mathbb {R}_{({t_1-t_2})/2}). \end{aligned}$$

The argument between (5.22) and the end of part (i) can be used here to show that, if \(t_1+t_2\le -1-2\ln (13)-2\ln (16)\), then \(|\widetilde{G}_{I,2,t_2}(t_1,\xi _1(t_1))-\xi _1(t_1)|\le 272 e^{\frac{t_1+t_2}{2}}\) and \(|{{\mathrm{m}}}+t_1+t_2|\le 272 e^{\frac{t_1+t_2}{2}}\). These inequalities together with the uniform limit of \(\widetilde{g}_{I,1,t_2}(t_1,\xi _2(t_2))-\xi _2(t_2)\) and the fact that \(T_2(J_2)\le C_{J_2}\) imply that (ii) holds for \(j=1\) and \(k=2\). Interchanging \(t_1\) and \(t_2\), we find that \({{\mathrm{m}}}+t_1+t_2\rightarrow 0\) and \(X_2-\xi _2+\xi _1\rightarrow 0\) as \(t_2\rightarrow -\infty \), uniformly in \(t_1\in (-\infty ,T_1(J_1)]\). From (4.29) we see that \(X_2-\xi _2+\xi _1=-(X_1-\xi _1+\xi _2)\), so we have \(X_1-\xi _1+\xi _2\rightarrow 0\) as \(t_2\rightarrow -\infty \), uniformly in \(t_1\in (-\infty ,T_1(J_1)]\). This completes the proof of part (ii). \(\square \)

Let \(\widehat{\mathcal {D}}={\mathcal {D}}\cup \{(t_1,-\infty ):t_1\in [-\infty ,\infty )\}\cup \{(-\infty ,t_2):t_2\in [-\infty ,\infty )\}\), and extend \(M\) to \(\widehat{\mathcal {D}}\) such that \(M=1\) if \(t_1\) or \(t_2\) equals \(-\infty \). From Proposition 5.1, we see that \(M\) is positive and continuous on \(\widehat{\mathcal {D}}\). So for any fixed \(j\ne k\in \{1,2\}\) and any \(({\mathcal {F}}^k_t)\)-stopping time \(t_k\) which is uniformly bounded above, \(M\) is a local martingale in \(t_j\in [-\infty ,T_j(t_k))\).

5.4 Local coupling and global coupling

Let \(\mu _j\) denote the distribution of \((\xi _j)\), \(j\!=\!1,2\). Let \(\mu \!=\!\mu _1\times \mu _2\). Then \(\mu \) is the joint distribution of \((\xi _1)\) and \((\xi _2)\), since \(\xi _1\) and \(\xi _2\) are independent. Fix \((J_1,J_2)\in {{\mathrm{JP}}}\). From the local martingale property of \(M\) and Proposition 5.1, we have \(\mathbf{E}\,_\mu [M(T_1(J_1),T_{2}(J_2))]\!=\!M(-\infty ,-\infty )\!=\!1\). Define \(\nu _{J_1,J_2}\) by \(d\nu _{J_1,J_2}\!=\!M(T_1(J_1),T_{2}(J_2)) d\mu \). Then \(\nu _{J_1,J_2}\) is a probability measure. Let \(\nu _1\) and \(\nu _2\) be the two marginal measures of \(\nu _{J_1,J_2}\). Then \(d\nu _1/d\mu _1\!=\!M(T_1(J_1),-\infty )\!=\!1\) and \(d\nu _2/d\mu _2\!=\!M(-\infty ,T_2(J_2))\!=\!1\), so \(\nu _j\!=\!\mu _j\), \(j\!=\!1,2\). Suppose temporarily that the distribution of \((\xi _1,\xi _2)\) is \(\nu _{J_1,J_2}\) instead of \(\mu \). Then the distribution of each \((\xi _j)\) is still \(\mu _j\).

We may now use the argument in Sect. 4.4 with a few changes. Here \(M(t_1,t_2)\) satisfies (5.11) instead of (4.47); \(\xi _j(t_j)\) does not satisfy (4.39), but is a pre-\((\mathbb {T};\kappa )\)-Brownian motion with drift \(s_j\cdot t\). The traditional Girsanov theorem needs to be modified to work for the current setting. Eventually, we can conclude that, under the probability measure \(\nu _{J_1,J_2}\), for any \(j\ne k\in \{1,2\}\), if \(t_k\) is a fixed \(({\mathcal {F}}^k_t)\)-stopping time with \(t_k\le T_k(J_k)\), and \(g_k(t,\cdot )\), \(-\infty <t<\infty \), are the inverted whole-plane Loewner maps driven by \(\xi _k\), then conditioned on \({\mathcal {F}}^k_{t_k}\), after a time-change, \(g_k(t_k,K_{I,j}(t_j))\), \(-\infty < t_j\le T_j(J_j)\), is a partial disc SLE\((\kappa ,\Lambda _{j})\) process in \(\mathbb {D}\) started from \(0\) with marked point \(e^i(\xi _k(t_k))\).

The proof of Theorem 5.1 can now be completed by applying the coupling technique.

6 Partial differential equations

With Theorem 5.1 at hand, to prove the main theorem we need to find particular solutions to (4.1) that satisfy certain properties. This section serves this purpose. From Lemma 4.1 we see that solving (4.1) is equivalent to solving (4.5) with \(\sigma =\frac{4}{\kappa }-1\). Throughout this section, we assume that \(\kappa >0\) and \(\sigma \in [0,\frac{4}{\kappa })\), and will find solutions to (4.5) in these cases. In particular, we will obtain solutions to (4.1) when \(\kappa \in (0,4]\).

The solutions to (4.5) are obtained by construction. We will transform (4.5) into a similar PDE (6.26), where \(\mathbf{H}_I\) is replaced by \(\widehat{\mathbf{H}}_I\). We know that as \(t\rightarrow \infty \), \(\widehat{\mathbf{H}}_I(t,\cdot )\rightarrow \tanh _2\), and PDE (6.26) tends to another PDE (6.27), which has a simple solution \(\widehat{\Psi }_\infty \) given by (6.28). Then we let \(\widehat{\Psi }_q=\widehat{\Psi }/\widehat{\Psi }_\infty \), and find that \(\widehat{\Psi }\) solves (6.26) if and only if \(\widehat{\Psi }_q\) solves PDE (6.29). A formal solution to (6.29) is expressed by a Feynman–Kac formula (6.30), which involves diffusion processes. Such diffusion processes are introduced and studied in Sect. 6.1. In Sect. 6.2 we describe how close \(\widehat{\mathbf{H}}_I(t,\cdot )\) is to \(\tanh _2\) when \(t\) is big. In Sect. 6.3 we transform the PDE (4.5) for \(\Psi \) into the PDE (6.29) for \(\widehat{\Psi }_q\), and give an intuitive reason why the formula (6.30) gives a solution to (6.29). In Sect. 6.4 we prove that the \(\widehat{\Psi }_q\) given by (6.30) is smooth and solves (6.29). So we obtain a solution \(\Gamma \) to (4.1). However, such a \(\Gamma \) does not satisfy (4.2). To remedy this, note that (4.1) is a linear PDE and \(\mathbf{H}_I\) has period \(2\pi \), so any translation of \(\Gamma \) by an integer multiple of \(2\pi \) also solves (4.1). The solutions to (4.1) which also satisfy (4.2) will be obtained by summing over all translations of \(\Gamma \) with suitable weights.

The following symbols will be used in this section. For any \(n,j\in \mathbb {N}\), we call a \(j\)-tuple \(\lambda =(\lambda _1,\dots ,\lambda _j)\in \mathbb {N}^j\) a partition of \(n\) if \(\lambda _1\ge \dots \ge \lambda _j\) and \(\sum _{k=1}^j \lambda _k=n\). The length of such a partition is denoted by \(l(\lambda )=j\). Let \(\mathcal{P}_{n}\) denote the set of all partitions of \(n\). For example, \((n)\) is the only element in \(\mathcal{P}_{n}\) with length \(1\). Let \(\mathcal{P}_{\mathbb {N}}=\bigcup _{n\in \mathbb {N}}\mathcal{P}_{n}\) denote the set of all partitions.

6.1 Diffusion processes

Fix \(\tau \le 0\). For \(x\in \mathbb {R}\), let \(u(t,x)\), \(t\ge 0\), be the solution to

$$\begin{aligned} \partial _t u(t,x)= \tau \tanh _2(u(t,x)+\sqrt{\kappa }B(t));\quad u(0,x)=x. \end{aligned}$$
(6.1)

Then \(X_x(t){:=}u(t,x)+\sqrt{\kappa }B(t)\) satisfies the SDE

$$\begin{aligned} dX_x(t)=\sqrt{\kappa }dB(t)+ \tau \tanh _2(X_x(t))dt,\quad X_x(0)=x. \end{aligned}$$
(6.2)
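
Indeed, for each fixed \(x\), \(t\mapsto u(t,x)\) solves the pathwise ODE (6.1) and is therefore \(C^1\) in \(t\), so

$$\begin{aligned} dX_x(t)=\partial _tu(t,x)\,dt+\sqrt{\kappa }\,dB(t)=\tau \tanh _2(X_x(t))\,dt+\sqrt{\kappa }\,dB(t). \end{aligned}$$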

Lemma 6.1

For any \(x\in \mathbb {R}\), we have a.s. \(\int _0^\infty \tanh _2'(X_x(t))dt=\infty \) and

$$\begin{aligned} \limsup _{t\rightarrow \infty } X_x(t)=+\infty , \qquad \liminf _{t\rightarrow \infty }X_x(t)=-\infty . \end{aligned}$$
(6.3)

Proof

Fix \(x\in \mathbb {R}\). Let \(X(t)=X_x(t)\). Define \(f(t)=\int _0^t \cosh _2(s)^{-\frac{4}{\kappa }\tau }ds,\quad t\in \mathbb {R}\). Then \(f\) is a differentiable increasing odd function and satisfies \(\frac{\kappa }{2} f''+\tau \tanh _2f'=0\). Let \(Y(t)=f(X(t))\). From (6.2) and Itô's formula, we have \(dY(t)=f'(X(t))\sqrt{\kappa }dB(t)\). Define a time-change function \( u(t)=\int _0^t \kappa f'(X(s))^2 ds\). Since \(\tau \le 0\), \(f'(t)\ge 1\), \(t\in \mathbb {R}\). Thus, \(u(t)\ge \kappa t\) for all \(t\ge 0\). So \(u\) maps \([0,\infty )\) onto \([0,\infty )\), and \(Y(u^{-1}(t))\), \(0\le t<\infty \), has the distribution of a Brownian motion. Thus, (6.3) holds with \(X\) replaced by \(Y\), which then implies (6.3). Since \(X\) is recurrent, and \(\tanh _2'>0\) on \(\mathbb {R}\), we immediately have a.s. \(\int _0^\infty \tanh _2'(X_x(t))dt=\infty \). \(\square \)
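
For the reader's convenience, here are the two computations used in the proof, written out with the convention \(\cosh _2(x)=\cosh (x/2)\), \(\sinh _2(x)=\sinh (x/2)\), \(\tanh _2(x)=\tanh (x/2)\) (consistent with \(\tanh _2'=\frac{1}{2}\cosh _2^{-2}\), as used in the proof of Lemma 6.3 below). First,

$$\begin{aligned} f'(t)=\cosh _2(t)^{-\frac{4}{\kappa }\tau },\qquad f''(t)=-\frac{4\tau }{\kappa }\cosh _2(t)^{-\frac{4}{\kappa }\tau -1}\cdot \frac{1}{2}\sinh _2(t)=-\frac{2\tau }{\kappa }\tanh _2(t)f'(t), \end{aligned}$$

so \(\frac{\kappa }{2}f''+\tau \tanh _2f'=0\). Second, by Itô's formula,

$$\begin{aligned} dY(t)=f'(X(t))\,dX(t)+\frac{\kappa }{2}f''(X(t))\,dt =f'(X(t))\sqrt{\kappa }\,dB(t)+\Big (\tau \tanh _2(X(t))f'(X(t))+\frac{\kappa }{2}f''(X(t))\Big )dt, \end{aligned}$$

and the drift vanishes by the ODE for \(f\).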

Lemma 6.2

For any \(b,c>0\) and \(x\in \mathbb {R}\),

$$\begin{aligned} \mathbb {P}[\exists t\ge 0,|X_x(t)|>ct+b]\le 2e^{\frac{2c}{\kappa }(|x|-b)}. \end{aligned}$$
(6.4)

Proof

First, it is well known that (6.4) holds with \(X_x(t)\) replaced by \(x+\sqrt{\kappa }B(t)\). So it suffices to show that \((|X_x(t)|)\) is bounded above by a process that has the distribution of \((|x+\sqrt{\kappa }B(t)|)\). This can be proved by using Theorem 4.1 in [28]. Here we give a direct proof.
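
For the Brownian motion estimate, recall the standard exponential bound \(\mathbb {P}[\exists t\ge 0:B(t)-\mu t\ge a]=e^{-2\mu a}\) for \(\mu ,a>0\). If \(b>|x|\), then \(|x+\sqrt{\kappa }B(t)|>ct+b\) implies \(\sqrt{\kappa }|B(t)|>ct+b-|x|\), so

$$\begin{aligned} \mathbb {P}[\exists t\ge 0:|x+\sqrt{\kappa }B(t)|>ct+b]\le 2\,\mathbb {P}\Big [\exists t\ge 0:B(t)-\tfrac{c}{\sqrt{\kappa }}\,t\ge \tfrac{b-|x|}{\sqrt{\kappa }}\Big ]=2e^{\frac{2c}{\kappa }(|x|-b)}; \end{aligned}$$

if \(b\le |x|\), the right-hand side of (6.4) is at least \(2\), so the bound is trivial.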

Let \(Y(t)=|X_x(t)|\). From (6.2) and the Tanaka–Itô formula, we have

$$\begin{aligned} Y(t)=|x|+\sqrt{\kappa }B_0(t)+ \tau \int \limits _0^t \tanh _2(Y(s))ds+L(t),\quad t\ge 0, \end{aligned}$$
(6.5)

where \(B_0(t)\) is a Brownian motion and \(L(t)\) is a non-decreasing process, which satisfies \(L(0)=0\) and is constant on every connected component of \(\{t:Y(t)>0\}\).

Fix \(t_0\ge 0\). There is \(t_0'\in [0,t_0]\) such that \(L(t)\) is constant on \([t_0',t_0]\). We may assume \(t_0'\) is the smallest such number. There are two cases. Case 1: \(t_0'=0\). Then \(L(t_0)=L(t_0')=L(0)=0\). Since \(\tau \le 0\), from (6.5), \(Y(t_0)\le |x|+\sqrt{\kappa }B_0(t_0)\). Case 2: \(t_0'>0\). Then \(Y(t_0')=0\). Since \(\tau \le 0\), from (6.5),

$$\begin{aligned} Y(t_0)-|x|-\sqrt{\kappa }B_0(t_0)\le Y(t_0')-|x|-\sqrt{\kappa }B_0(t_0')=-|x|-\sqrt{\kappa }B_0(t_0'). \end{aligned}$$

Thus, in either case, we have

$$\begin{aligned} Y(t_0)\le |x|+\sqrt{\kappa }B_0(t_0)+\max \left\{ 0,\sup _{0\le s\le t_0}\{-|x|-\sqrt{\kappa }B_0(s)\}\right\} . \end{aligned}$$

The RHS of the above inequality defines a process that has the distribution of \(|x+\sqrt{\kappa }B(t_0)|\), \(t_0\ge 0\) (c.f. Chapter VI, Section 2 of [23]), so the proof is completed. \(\square \)

Lemma 6.3

There are \(C_{n}>0\), \(n\in \mathbb {N}\), with \(C_1=1\), such that

$$\begin{aligned} \left| \tanh _2^{(n)}(x)\right| \le C_{n}\tanh _2'(x)\le \frac{C_{n}}{2},\quad x\in \mathbb {R},\, n\in \mathbb {N}. \end{aligned}$$

Proof

Note that \(\tanh _2'(x)=\frac{1}{2}\cosh _2^{-2}(x)\in (0,1/2]\). So the second “\(\le \)” holds. By induction, one can prove that for every \(n\), there are \(a^{(n)}_j\in \mathbb {R}\), \(0\le j\le n-1\), such that

$$\begin{aligned} \tanh _2^{(n)}(x)=\sum _{j=0}^{n-1}a^{(n)}_j \cosh _2^{-2-j}(x)\sinh _2^{j}(x)= \sum _{j=0}^{n-1} a^{(n)}_j \cosh _2^{-2}(x)\tanh _2^{j}(x). \end{aligned}$$

Since \( \left| \tanh _2^j(x)\right| \le 1\) and \(\cosh _2^{-2}=2\tanh _2'\), we may choose \(C_{n}=2\sum _j \left| a^{(n)}_j\right| \). \(\square \)
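
The induction step can be written out explicitly: using \(\tanh _2'=\frac{1}{2}\cosh _2^{-2}\), \((\cosh _2^{-2})'=-\cosh _2^{-2}\tanh _2\), and \(\cosh _2^{-2}=1-\tanh _2^2\),

$$\begin{aligned} \frac{d}{dx}\Big (\cosh _2^{-2}(x)\tanh _2^{j}(x)\Big ) =-\cosh _2^{-2}(x)\tanh _2^{j+1}(x)+\frac{j}{2}\cosh _2^{-2}(x)\big (1-\tanh _2^{2}(x)\big )\tanh _2^{j-1}(x), \end{aligned}$$

so differentiating the displayed formula for \(\tanh _2^{(n)}\) produces an expression of the same form for \(\tanh _2^{(n+1)}\), with powers of \(\tanh _2\) at most \(n\).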

Lemma 6.4

For every \(m\in \mathbb {N}\), there is a polynomial \(P_m(t)\) of degree \(m-1\) such that for any \(t>0\) and \(x\in \mathbb {R}\), \(|\frac{\partial ^m}{\partial x^m} X_x(t)|\le P_m(t)\).

Proof

Since \(X_x(t)=u(t,x)+\sqrt{\kappa }B(t)\), \(\frac{\partial ^n}{\partial x^n} X_x(t)=u^{(n)}(t,x)\). It suffices to show that for every \(m\in \mathbb {N}\), there is some polynomial \(P_m(t)\) of degree \(m-1\), such that

$$\begin{aligned} |u^{(m)}(t,x)|\le P_m(t), \quad t>0,\,x\in \mathbb {R}. \end{aligned}$$
(6.6)

Let \(f_x(t)= \tau \tanh _2'(X_x(t))\). Since \(\tau \le 0\) and \(\tanh _2'>0\), \(f_x(t)\le 0\). Differentiating (6.1) w.r.t. \(x\) and using \(u'(0,x)=1\), we get \( u'(t,x)=\exp (\int _0^t f_x(s)ds)\in (0,1]\). Thus, (6.6) holds in the case \(m=1\) with \(P_1(t)\equiv 1\).

Let \(n\in \mathbb {N}\), \(n\ge 2\). Suppose that (6.6) holds for any \(m\le n-1\). Differentiating (6.1) \(n\) times, by induction we find that there are \(b_{n}(\lambda )\in \mathbb {R}\) for \(\lambda \in \mathcal{P}_{n}\) with \(b_{n}((n))=\tau \) such that

$$\begin{aligned} \partial _t u^{(n)}(t,x)\!=\!\sum _{\lambda \in \mathcal{P}_{n}} b_{n}(\lambda )\tanh _2^{(l(\lambda ))}(X_x(t)) \prod _{k=1}^{l(\lambda )} u^{(\lambda _k)}(t,x),\quad u^{(n)}(0,x)\!=\!0.\quad \end{aligned}$$
(6.7)

Observe that the term \(u^{(n)}(t,x)\) appears only once in (6.7), i.e., in the case \(\lambda =(n)\), and the coefficient is \(\tau \tanh _2'(X_x(t))=f_x(t)\). Let \(g_x(t)\) denote the sum of the remaining terms. From Lemma 6.3 and the induction hypothesis, \(|g_x(t)|\) is bounded by a polynomial in \(t\) of degree \(n-2\) that does not depend on \(x\), and

$$\begin{aligned} \partial _t u^{(n)}(t,x)=f_x(t)u^{(n)}(t,x)+g_x(t),\quad u^{(n)}(0,x)=0. \end{aligned}$$

Solving this linear ODE and using the fact that \(f_x(t)\le 0\), we can conclude that (6.6) holds in the case \(m=n\), which finishes the proof. \(\square \)
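
In the last step, the bound follows from the variation-of-constants formula: since \(u^{(n)}(0,x)=0\) and \(f_x\le 0\),

$$\begin{aligned} u^{(n)}(t,x)=\int \limits _0^t\exp \Big (\int \limits _s^tf_x(r)\,dr\Big )g_x(s)\,ds,\qquad |u^{(n)}(t,x)|\le \int \limits _0^t|g_x(s)|\,ds, \end{aligned}$$

and the right-hand side is bounded by a polynomial in \(t\) of degree \(n-1\) that does not depend on \(x\), which is (6.6) for \(m=n\).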

6.2 Some estimations

We will need some estimations about the limits of \(\widehat{\mathbf{H}}_{I}-\tanh _2\) as \(t\rightarrow \infty \). Let

$$\begin{aligned} \widehat{\mathbf{H}}_{I,q}(t,z)=\widehat{\mathbf{H}}_I(t,z)-\tanh _2(z). \end{aligned}$$
(6.8)

From (2.12) we have

$$\begin{aligned} \widehat{\mathbf{H}}_{I,q}'(t,x)=\sum _{2|n\ne 0}\tanh _2'(x-nt)=\sum _{2|n\ne 0} \frac{1}{2}\cosh _2^{-2}(x-nt)>0. \end{aligned}$$
(6.9)

Lemma 6.5

Let \(C_{n}\), \(n\in \mathbb {N}\), be as in Lemma 6.3. Note that \(C_1=1\). Then

$$\begin{aligned} |\widehat{\mathbf{H}}_{I,q}(t,x)|&\le \frac{|x|}{t}+3+\frac{2e^{-t}}{1-e^{-2t}},\quad t>0,\,x\in \mathbb {R}.\end{aligned}$$
(6.10)
$$\begin{aligned} \left| \widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)\right|&\le C_{n}\left( \frac{1}{2} + \frac{4e^{-t}}{1-e^{-2t}}\right) ,\quad t>0,\,x\in \mathbb {R},\,n\in \mathbb {N}. \end{aligned}$$
(6.11)

Moreover, for any \(c>0\),

$$\begin{aligned} |\widehat{\mathbf{H}}_{I,q}(t,x)|&\le \frac{2e^{(c-2)t}}{1-e^{-2t}},\quad \text{ if } t>0,\,x\in \mathbb {R},\,|x|\le ct.\end{aligned}$$
(6.12)
$$\begin{aligned} \left| \widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)\right|&\le C_{n}\frac{4e^{(c-2)t}}{1-e^{-2t}},\quad \text{ if } \quad t>0,\,x\in \mathbb {R},\,|x|\le ct,\,n\in \mathbb {N}. \end{aligned}$$
(6.13)

Proof

We first show (6.12). From (2.12) and (6.8) we have

$$\begin{aligned} \widehat{\mathbf{H}}_{I,q}(t,x)&= \sum _{m=1}^{\infty }(\tanh _2(x-2mt)+\tanh _2(x+2mt))\\&= \sum _{m=1}^{\infty }\left( -\frac{e^{2mt}-e^x}{e^{2mt}+e^x}+\frac{e^{2mt}-e^{-x}}{e^{2mt}+e^{-x}}\right) = \sum _{m=1}^{\infty }\frac{2(e^x-e^{-x})}{e^{2mt}+e^{-2mt}+e^x+e^{-x}}. \end{aligned}$$

Thus,

$$\begin{aligned} |\widehat{\mathbf{H}}_{I,q}(t,x)|\le \sum _{m=1}^{\infty }\frac{2e^{|x|}}{e^{2mt}}=\frac{2e^{|x| -2t}}{1-e^{-2t}}. \end{aligned}$$
(6.14)

Then (6.12) is a direct consequence of this inequality.

Secondly, we show (6.10). Since \(|\tanh _2(x)|\le 1\), from (6.8) it suffices to show

$$\begin{aligned} |\widehat{\mathbf{H}}_{I}(t,x)|\le \frac{|x|}{t}+2+\frac{2e^{-t}}{1-e^{-2t}}. \end{aligned}$$
(6.15)

We first consider the case \(|x|\le t\). From (6.14) we have

$$\begin{aligned} |\widehat{\mathbf{H}}_I(t,x)|\le |\tanh _2(x)|+|\widehat{\mathbf{H}}_{I,q}(t,x)|\le 1+\frac{2e^{|x| -2t}}{1-e^{-2t}}\le 1+\frac{2e^{-t}}{1-e^{-2t}}.\quad \quad \quad \end{aligned}$$
(6.16)

Thus, (6.15) holds in this case.

Then we consider the case \(|x|\ge t\). There exists \(m\in \mathbb {N}\) such that \((2m-1)t\le |x|\le (2m+1)t\). Since \(\widehat{\mathbf{H}}_{I}\) is odd, we only need to consider the case that \((2m-1)t\le x\le (2m+1)t\). Let \(x_0=x-2mt\). Then \(|x_0|\le t\). From (2.10) we have \(\widehat{\mathbf{H}}_I(t,x)=2m+\widehat{\mathbf{H}}_I(t,x_0)\). From (6.16) with \(x=x_0\) we have

$$\begin{aligned} |\widehat{\mathbf{H}}_I(t,x)|\le 2m+|\widehat{\mathbf{H}}_I(t,x_0)|\le 2m+1+\frac{2e^{-t}}{1-e^{-2t}}\le \frac{|x|}{t}+2+\frac{2e^{-t}}{1-e^{-2t}}, \end{aligned}$$

where the last inequality uses \(\frac{|x|}{t}\ge 2m-1\). So we have (6.15) and (6.10).

Now we prove (6.11) and (6.13). From (6.9) we have

$$\begin{aligned} 0<\widehat{\mathbf{H}}_{I,q}'(t,x)&= \sum _{2|n\ne 0} \frac{1}{2}\cosh _2^{-2}(|nt-x|)\le \sum _{2|n\ne 0} \frac{1}{2}\cosh _2^{-2}(|n|t-|x|)\nonumber \\&= \sum _{m=1}^\infty \cosh _2^{-2}(2mt-|x|)\le 4\sum _{m=1}^\infty e^{|x|-2mt}=\frac{4e^{|x|-2t}}{1-e^{-2t}}.\quad \quad \quad \quad \quad \end{aligned}$$
(6.17)

which implies (6.13) in the case \(n=1\). From (6.17) we have \(\widehat{\mathbf{H}}_I'(t,x) <\frac{1}{2}+ \frac{4e^{-t}}{1-e^{-2t}}\) if \(|x|\le t\). Since \(\widehat{\mathbf{H}}_I'\) has period \(2t\), this inequality holds for all \(x\in \mathbb {R}\). Since \(\widehat{\mathbf{H}}_{I,q}'<\widehat{\mathbf{H}}_I'\), (6.11) holds in the case \(n=1\). From (6.9) and Lemma 6.3 we have \( |\widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)|\le C_{n} \widehat{\mathbf{H}}_{I,q}'(t,x)\). So (6.11) and (6.13) in the case \(n\ge 2\) follow from those in the case \(n=1\). \(\square \)

Lemma 6.6

For every \(n\in \mathbb {N}\cup \{0\}\), there is a constant \(D_{n}>0\) such that for any \(j\in \{1,2\}\), \(t>0\), and \(x\in \mathbb {R}\),

$$\begin{aligned} \left| \partial _t^j\widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)\right| \le D_{n}\left( \frac{|x|}{t}+3+\frac{2e^{-t}}{1-e^{-2t}}\right) ^j\left( \frac{1}{2} + \frac{4e^{-t}}{1-e^{-2t}}\right) .\quad \end{aligned}$$
(6.18)

Moreover, for any \(n\in \mathbb {N}\cup \{0\}\) and \(c>0\), there is a constant \(D_{n}>0\), such that

$$\begin{aligned} \left| \partial _t^j\widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)\right| \le D_{n}\left( \frac{2e^{(c-2)t}}{1-e^{-2t}}\right) ^{j+1},\quad \text{ if } t>0,\,x\in \mathbb {R},\,|x|\le ct;\quad \end{aligned}$$
(6.19)

Proof

Let \(A(t,x)=\frac{|x|}{t}+3+\frac{2e^{-t}}{1-e^{-2t}}\), \(B(t,x)=\frac{1}{2} + \frac{4e^{-t}}{1-e^{-2t}}\), and \(C_c(t,x)=\frac{2e^{(c-2)t}}{1-e^{-2t}}\). In this proof, by \(X\lesssim Y\) we mean that there is a constant \(C\) such that \(X\le C Y\). Here \(C\) may depend on \(n\) if \(X\) depends on \(n\). From (6.10), (6.11), (6.12), and (6.13), we see that

$$\begin{aligned}&|\widehat{\mathbf{H}}_{I,q}(t,x)|\lesssim A(t,x),\qquad \left| \widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)\right| \lesssim B(t,x)\lesssim A(t,x),\quad x\in \mathbb {R},\, n\in \mathbb {N}.\nonumber \\\end{aligned}$$
(6.20)
$$\begin{aligned}&\left| \widehat{\mathbf{H}}_{I,q}^{(n)}(t,x)\right| \lesssim C_c(t,x),\quad \text{ if } x\in \mathbb {R},\,|x|\le ct,\, n\in \mathbb {N}\cup \{0\}.\quad \quad \end{aligned}$$
(6.21)

As \(t\rightarrow \infty \), \(\widehat{\mathbf{H}}_I\rightarrow \tanh _2\). Then (2.13) becomes \( 0=\tanh _2''+\tanh _2'\tanh _2\), which can be proved directly. From (2.13), (6.8), and the above equation, we get

$$\begin{aligned} \partial _t \widehat{\mathbf{H}}_{I,q}=\widehat{\mathbf{H}}_{I,q}'' +\widehat{\mathbf{H}}_{I,q}'\widehat{\mathbf{H}}_{I,q}+\tanh _2'\widehat{\mathbf{H}}_{I,q}+\widehat{\mathbf{H}}_{I,q}'\tanh _2. \end{aligned}$$
(6.22)
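
The limiting identity \(0=\tanh _2''+\tanh _2'\tanh _2\) used above can be checked directly:

$$\begin{aligned} \tanh _2''=\Big (\tfrac{1}{2}\cosh _2^{-2}\Big )'=-\tfrac{1}{2}\cosh _2^{-2}\tanh _2=-\tanh _2'\,\tanh _2. \end{aligned}$$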

Then (6.18) and (6.19) in the case \(j=1\) and \(n=0\) follow from (6.20), (6.21), (6.22), and Lemma 6.3.

Differentiating (2.13) w.r.t. \(x\) twice, we get

$$\begin{aligned} \partial _t \widehat{\mathbf{H}}_I'&= \widehat{\mathbf{H}}_I'''+\widehat{\mathbf{H}}_I''\widehat{\mathbf{H}}_I+(\widehat{\mathbf{H}}_I')^2.\\ \partial _t \widehat{\mathbf{H}}_I''&= \widehat{\mathbf{H}}_I^{(4)}+\widehat{\mathbf{H}}_I'''\widehat{\mathbf{H}}_I+3\widehat{\mathbf{H}}_I''\widehat{\mathbf{H}}_I'. \end{aligned}$$

Differentiating (2.13) w.r.t. \(t\) and using the above two displayed formulas, we obtain

$$\begin{aligned} \partial _t^2 \widehat{\mathbf{H}}_I=\widehat{\mathbf{H}}_I^{(4)}+2\widehat{\mathbf{H}}_I'''\widehat{\mathbf{H}}_I+ 4\widehat{\mathbf{H}}_I''\widehat{\mathbf{H}}_I'+\widehat{\mathbf{H}}_I''(\widehat{\mathbf{H}}_I)^2+2(\widehat{\mathbf{H}}_I')^2\widehat{\mathbf{H}}_I. \end{aligned}$$

As \(t\rightarrow \infty \), this equation tends to the following equation, which can also be checked directly.

$$\begin{aligned} 0=\tanh _2^{(4)}\,+\,2\tanh _2'''\tanh _2\,+\,4\tanh _2''\tanh _2'\,+\,\tanh _2''\tanh _2^2\,+\,2(\tanh _2')^2\tanh _2. \end{aligned}$$
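
The last displayed identity can be checked directly by repeatedly using \(\tanh _2''=-\tanh _2'\tanh _2\): writing \(u=\tanh _2\),

$$\begin{aligned} u''=-u'u,\quad u'''=u'u^2-(u')^2,\quad u^{(4)}=-u'u^3+4(u')^2u, \end{aligned}$$

and substituting these into the right-hand side gives \((-1+2-1)\,u'u^3+(4-2-4+2)\,(u')^2u=0\).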

From (6.8), and the above two equations, we compute

$$\begin{aligned} \partial _t^2 \widehat{\mathbf{H}}_{I,q}&= \widehat{\mathbf{H}}_{I,q}^{(4)} +2\widehat{\mathbf{H}}_{I,q}'''\widehat{\mathbf{H}}_{I,q}+2\tanh _2'''\widehat{\mathbf{H}}_{I,q}\nonumber \\&+2\widehat{\mathbf{H}}_{I,q}'''\widehat{\mathbf{H}}_{I,q}+4\widehat{\mathbf{H}}_{I,q}''\widehat{\mathbf{H}}_{I,q}'\nonumber \\&+4\widehat{\mathbf{H}}_{I,q}''\tanh _2'+\tanh _2''(\widehat{\mathbf{H}}_{I,q})^2+2\widehat{\mathbf{H}}_{I,q}''\widehat{\mathbf{H}}_{I,q}\tanh _2+2\tanh _2''\widehat{\mathbf{H}}_{I,q}\tanh _2\nonumber \\&+\widehat{\mathbf{H}}_{I,q}''(\tanh _2)^2+ \widehat{\mathbf{H}}_{I,q}''(\widehat{\mathbf{H}}_{I,q})^2+2(\widehat{\mathbf{H}}_{I,q}')^2\widehat{\mathbf{H}}_{I,q}+4\widehat{\mathbf{H}}_{I,q}'\tanh _2'\widehat{\mathbf{H}}_{I,q}\nonumber \\&+2(\tanh _2')^2\widehat{\mathbf{H}}_{I,q}+2(\widehat{\mathbf{H}}_{I,q}')^2\tanh _2+4\widehat{\mathbf{H}}_{I,q}'\tanh _2'\tanh _2. \end{aligned}$$
(6.23)

Then (6.18) and (6.19) in the case \(j=2\) and \(n=0\) follow from (6.20), (6.21), (6.23), and Lemma 6.3.

Differentiate (6.22) and (6.23) \(n\) times w.r.t. \(x\). We see that \(\partial _t \widehat{\mathbf{H}}_{I,q}^{(n)}\) can be expressed as a sum of finitely many terms, whose factors are \(\widehat{\mathbf{H}}_{I,q}^{(k)}\) or \(\tanh _2^{(k)}\), \(k\in \mathbb {N}\cup \{0\}\). In every term, the factors of the kind \(\widehat{\mathbf{H}}_{I,q}^{(k)}\) appear at most twice, and the factor \(\widehat{\mathbf{H}}_{I,q}\) appears at most once. So we derive (6.18) and (6.19) in the case \(j=1\) and \(n\in \mathbb {N}\) from (6.20), (6.21), and Lemma 6.3. We see that \(\partial _t^2 \widehat{\mathbf{H}}_{I,q}^{(n)}\) can be expressed as a sum of finitely many terms, whose factors are constant, \(\widehat{\mathbf{H}}_{I,q}^{(k)}\), or \(\tanh _2^{(k)}\). In every term, the factors of the kind \(\widehat{\mathbf{H}}_{I,q}^{(k)}\) appear at most three times, and the factor \(\widehat{\mathbf{H}}_{I,q}\) appears at most twice. So we derive (6.18) and (6.19) in the case \(j=2\) and \(n\in \mathbb {N}\) from (6.20), (6.21), and Lemma 6.3. \(\square \)

6.3 Feynman–Kac expression

We begin with a lemma, which can be proved directly. Recall the definition of \(\widehat{\mathbf{H}}_I\) in (2.9).

Lemma 6.7

Let \(\Psi \) and \(\widehat{\Psi }\) be functions defined on \((0,\infty )\times \mathbb {R}\). The following expressions are equivalent:

$$\begin{aligned} \widehat{\Psi }(t,x)&= e^{\frac{x^2}{2\kappa t}} \left( \frac{\pi }{t}\right) ^{\sigma +\frac{1}{2}}\Psi \left( \frac{\pi ^2}{t},\quad \frac{\pi }{t} x\right) \!.\end{aligned}$$
(6.24)
$$\begin{aligned} \Psi (t,x)&= e^{-\frac{x^2}{2\kappa t}} \left( \frac{\pi }{t}\right) ^{\sigma +\frac{1}{2}}\widehat{\Psi }\left( \frac{\pi ^2}{t},\quad \frac{\pi }{t} x\right) \!. \end{aligned}$$
(6.25)

If the above two equalities hold, then \(\Psi \) satisfies (4.5) if and only if \(\widehat{\Psi }\) satisfies

$$\begin{aligned} -\partial _t {\widehat{\Psi }} = \frac{\kappa }{2} \widehat{\Psi }'' + \sigma \widehat{\mathbf{H}}_I' \widehat{\Psi }. \end{aligned}$$
(6.26)

As \(t\rightarrow \infty \), \(\widehat{\mathbf{H}}_I'\rightarrow \tanh _2'\), so (6.26) tends to

$$\begin{aligned} -\partial _t {\widehat{\Psi }}_\infty = \frac{\kappa }{2} \widehat{\Psi }_\infty '' + \sigma \tanh _2'(x) \widehat{\Psi }_\infty . \end{aligned}$$
(6.27)

Let \(\tau \) be the non-positive root of the equation \(\frac{\tau ^2}{2\kappa }=\frac{\tau }{4} +\frac{\sigma }{2}\), i.e., \(\tau =\kappa /4-\sqrt{{\kappa ^2}/{16}+\kappa \sigma }\). Then \(\tau =\frac{\kappa }{2}-2\) when \(\sigma =\frac{4}{\kappa }-1\). It is easy to check that (6.27) has a simple solution:

$$\begin{aligned} \widehat{\Psi }_\infty (t,x)=e^{-\frac{\tau ^2t}{2\kappa }}\cosh _2^{\frac{2}{\kappa }\tau }(x). \end{aligned}$$
(6.28)
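
To verify this, note that \(\widehat{\Psi }_\infty '=\frac{\tau }{\kappa }\tanh _2\,\widehat{\Psi }_\infty \), so that, using \(\tanh _2'=\frac{1}{2}(1-\tanh _2^2)\) and the defining relation \(\frac{\tau ^2}{2\kappa }=\frac{\tau }{4}+\frac{\sigma }{2}\),

$$\begin{aligned} \frac{\kappa }{2}\widehat{\Psi }_\infty ''+\sigma \tanh _2'\widehat{\Psi }_\infty &=\Big [\Big (\frac{\tau }{2}+\sigma \Big )\tanh _2'+\frac{\tau ^2}{2\kappa }\tanh _2^2\Big ]\widehat{\Psi }_\infty \\ &=\Big [\Big (\frac{\tau }{4}+\frac{\sigma }{2}\Big )(1-\tanh _2^2)+\frac{\tau ^2}{2\kappa }\tanh _2^2\Big ]\widehat{\Psi }_\infty =\frac{\tau ^2}{2\kappa }\widehat{\Psi }_\infty =-\partial _t\widehat{\Psi }_\infty . \end{aligned}$$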

Recall the \(\widehat{\mathbf{H}}_{I,q}\) defined in (6.8). The proof of the following lemma is straightforward.

Lemma 6.8

Let \(\widehat{\Psi }\) and \(\widehat{\Psi }_q\) be defined on \((0,\infty )\times \mathbb {R}\), and satisfy \(\widehat{\Psi }=\widehat{\Psi }_\infty \widehat{\Psi }_q\), where \(\widehat{\Psi }_\infty \) is defined by (6.28). Then \(\widehat{\Psi }\) satisfies (6.26) if and only if \(\widehat{\Psi }_q\) satisfies

$$\begin{aligned} -\partial _t{\widehat{\Psi }}_q=\frac{\kappa }{2} \widehat{\Psi }_q''+\tau \tanh _2 \widehat{\Psi }_q'+\sigma \widehat{\mathbf{H}}_{I,q}'\widehat{\Psi }_q. \end{aligned}$$
(6.29)

Suppose \(\widehat{\Psi }_q\) solves (6.29). Let \(X_{x_0}(t)\) be as in (6.2). Fix \(t_0>0\) and \(x_0\in \mathbb {R}\). Let

$$\begin{aligned} M(t)= \widehat{\Psi }_q(t_0+t,X_{x_0}(t))\exp \left( \sigma \int \limits _{0}^{t} \widehat{\mathbf{H}}_{I,q}'(t_0+s,X_{x_0}(s))ds\right) \!. \end{aligned}$$

From (6.2), (6.29), and Itô’s formula, we see that \(M(t)\) is a local martingale. If \(M(t)\) is a martingale on \([0,\infty ]\), and \(\widehat{\Psi }_q\rightarrow 1\) as \(t\rightarrow \infty \), then from \(M_0=\widehat{\Psi }_q(t_0,x_0)\) we have

$$\begin{aligned} \widehat{\Psi }_q(t_0,x_0)=\mathbf{E}\,\left[ \exp \left( \sigma \int \limits _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t_0+s,X_{x_0}(s))ds\right) \right] \!. \end{aligned}$$
(6.30)

This Feynman–Kac formula can be justified under various additional assumptions; we do not try to prove it here. Instead, we now define \(\widehat{\Psi }_q\) by (6.30). We will prove that \(\widehat{\Psi }_q\) is finite and differentiable, and solves (6.29).
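
For the local martingale claim above, the Itô computation is the following. Write \(E(t)=\exp (\sigma \int _0^t\widehat{\mathbf{H}}_{I,q}'(t_0+s,X_{x_0}(s))ds)\), so that \(M(t)=\widehat{\Psi }_q(t_0+t,X_{x_0}(t))E(t)\). Since \(E\) has finite variation, (6.2) and Itô's formula give

$$\begin{aligned} dM(t)=E(t)\Big [\partial _t\widehat{\Psi }_q+\tau \tanh _2(X_{x_0}(t))\,\widehat{\Psi }_q'+\frac{\kappa }{2}\widehat{\Psi }_q''+\sigma \widehat{\mathbf{H}}_{I,q}'(t_0+t,X_{x_0}(t))\,\widehat{\Psi }_q\Big ]dt+E(t)\sqrt{\kappa }\,\widehat{\Psi }_q'\,dB(t), \end{aligned}$$

where \(\widehat{\Psi }_q\) and its derivatives are evaluated at \((t_0+t,X_{x_0}(t))\); by (6.29) the drift term vanishes, so \(M\) is a local martingale.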

6.4 Regularity

Fix \(c_0\in (1+\frac{\kappa }{4}\sigma ,2)\). This is possible because \(\sigma \in [0,\frac{4}{\kappa })\). Then we have

$$\begin{aligned} \exp \left( {\frac{\sigma }{2(c_0-1)}-\frac{2}{\kappa }}\right) <1. \end{aligned}$$
(6.31)
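
Indeed, the choice of \(c_0\) gives, when \(\sigma >0\),

$$\begin{aligned} c_0-1>\frac{\kappa }{4}\sigma \quad \Longrightarrow \quad \frac{\sigma }{2(c_0-1)}<\frac{\sigma }{2\cdot \frac{\kappa }{4}\sigma }=\frac{2}{\kappa }, \end{aligned}$$

and when \(\sigma =0\) the left-hand side of (6.31) is \(e^{-2/\kappa }<1\) directly; in both cases the exponent in (6.31) is negative.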

Throughout this subsection, we use \(C\) to denote a positive constant, which depends only on \(\kappa ,\sigma ,c_0\), and could change between lines. The symbol \(X\lesssim Y\) means that \(X\le CY\) for some \(C\). Let \(\alpha (t)=\frac{4}{1-e^{-2t}}\). Then \(t^{-1}+1\lesssim \alpha (t)\lesssim t^{-1}+1\). For \(m\in \mathbb {N}\cup \{0\}\), let \({\mathcal {E}}_m\) denote the event that \(|X_x(s)|\le s+m\) for all \(s\ge 0\). From (6.4) we have

$$\begin{aligned} \mathbb {P}[{\mathcal {E}}_m^c]\le 2e^{\frac{2}{\kappa }(|x|-m)},\quad m\in \mathbb {N}\cup \{0\}. \end{aligned}$$
(6.32)

Proposition 6.1

\(\widehat{\Psi }_q\) is finite and satisfies

$$\begin{aligned} 1\le \widehat{\Psi }_q(t,x)\le \exp \left( C(t^{-1}+1) e^{(c_0-2)t}\right) (1+Ce^{\frac{2}{\kappa }|x|-\frac{2}{\kappa }c_0t}). \end{aligned}$$
(6.33)

Proof

Fix \(t>0\) and \(x\in \mathbb {R}\). Assume that \({\mathcal {E}}_m\) occurs for some \(m\in \mathbb {N}\cup \{0\}\). If \(s\ge \frac{m-c_0t}{c_0-1}\) then \(|X_x(s)|\le s+m\le c_0(s+t)\), so from (6.13) with \(C_1=1\) we have

$$\begin{aligned} \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s)) \le \frac{4e^{(c_0-2)(s+t)}}{1-e^{-2(s+t)}}\le \alpha (t) e^{(c_0-2)(s+t)}. \end{aligned}$$

If \(0\le s\le \frac{m-c_0t}{c_0-1}\), from \(-1\le c_0-2\) and (6.11) with \(C_1=1\), we have

$$\begin{aligned} \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s)) < \frac{1}{2}+\frac{4e^{-(s+t)}}{1-e^{-2(s+t)}}\le \frac{1}{2}+ \alpha (t) e^{(c_0-2)(s+t)}, \end{aligned}$$

Since \(c_0-2<0\), on the event \({\mathcal {E}}_m\),

$$\begin{aligned} \int \limits _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s))ds\le \frac{1}{2}\cdot \frac{(m-c_0t)\vee 0}{c_0-1}+\frac{\alpha (t)e^{(c_0-2)t}}{2-c_0}; \end{aligned}$$
(6.34)

Let \(H(t)=\exp (\sigma \int _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s))ds )\). From (6.32) and (6.34) we have

$$\begin{aligned} \widehat{\Psi }_q(t,x)&= \mathbf{E}\,[1_{{\mathcal {E}}_{\lfloor c_0t\rfloor }} H(t) ]+\sum _{m=\lfloor c_0t\rfloor }^\infty \mathbf{E}\,[1_{{\mathcal {E}}_{m+1}{\setminus }{\mathcal {E}}_{m}}H(t) ]\nonumber \\&\le \exp \left( \frac{\sigma \alpha (t)e^{(c_0-2)t}}{2-c_0}\right) \nonumber \\&+\sum _{m=\lfloor c_0t\rfloor }^\infty 2e^{\frac{2}{\kappa }(|x|-m)} \exp \left( \frac{1}{2} \frac{\sigma (m+1-\lfloor c_0t\rfloor )}{c_0-1}+\frac{\sigma \alpha (t)e^{(c_0-2)t}}{2-c_0} \right) \!.\nonumber \\ \end{aligned}$$
(6.35)

Change index using \(m=l+\lfloor c_0t\rfloor \). The second term of the RHS of (6.35) equals

$$\begin{aligned} 2\exp \left( \frac{2|x|}{\kappa }-\frac{2 \lfloor c_0t\rfloor }{\kappa }+\frac{\sigma }{2(c_0-1)}+\frac{\sigma \alpha (t)e^{(c_0-2)t}}{2-c_0} \right) \sum _{l=0}^\infty \exp \left( \frac{\sigma }{2(c_0-1)}-\frac{2}{\kappa }\right) ^l.\nonumber \\ \end{aligned}$$
(6.36)

From (6.31), the infinite sum is finite. Thus, from \(\widehat{\mathbf{H}}_{I,q}'>0\), \(\sigma \ge 0\), and (6.35), we have

$$\begin{aligned} 1\le \widehat{\Psi }_q(t,x)\le \exp \left( \frac{\sigma \alpha (t)e^{(c_0-2)t}}{2-c_0}\right) \left( 1+Ce^{\frac{2|x|}{\kappa }- \frac{2c_0t}{\kappa }}\right) . \end{aligned}$$

Then (6.33) follows from this formula and that \(\alpha (t)\lesssim t^{-1}+1\). \(\square \)

Let \(n\in \mathbb {N}\). Formally differentiate (6.30) \(n\) times w.r.t. \(x\). If the differentiation commutes with the integration and the expectation at each step, then we should have

$$\begin{aligned} \widehat{\Psi }_q^{(n)}(t,x)=\mathbf{E}\,\left[ \exp \left( \sigma \int \limits _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s))ds\right) \cdot \mathbf{Q}_{0,n}(v_{0,k,\lambda }(t,x))\right] \!,\quad \quad \quad \end{aligned}$$
(6.37)

where \(\mathbf{Q}_{0,n}\) is a polynomial of degree \(\le n\) without constant term in the following variables:

$$\begin{aligned} v_{0,k,\lambda }(t,x){:=}\int \limits _0^\infty \widehat{\mathbf{H}}_{I,q}^{(k)}(t+s,X_x(s))\prod _{r=1}^{l(\lambda )} \frac{\partial ^{\lambda _r}}{\partial x^{\lambda _r}} X_x(s)ds,\quad k\in \mathbb {N},\,\lambda \in \mathcal{P}_{\mathbb {N}}.\quad \quad \quad \end{aligned}$$
(6.38)

With \(\mathbf{Q}_{0,0}\equiv 1\), (6.37) becomes (6.30). Let \(n\in \mathbb {N}\cup \{0\}\). Formally differentiate (6.37) w.r.t. \(t\). If the differentiation commutes with the integration and expectation, then we should have

$$\begin{aligned} \partial _t{\widehat{\Psi }}_q^{(n)}(t,x)\!=\! \mathbf{E}\,\!\left[ \exp \left( \sigma \int \limits _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t\!+\!s,X_{x}(s))ds\right) \cdot \mathbf{Q}_{1,n}(v_{0,k,\lambda },v_{1,k,\lambda })\right] \!,\quad \quad \quad \end{aligned}$$
(6.39)

where \(\mathbf{Q}_{1,n}\) is a polynomial of degree \(\le n+1\) without constant term in the variables \(v_{0,k,\lambda }\) defined by (6.38) and

$$\begin{aligned} v_{1,k,\lambda }(t,x){:=}\int \limits _0^\infty \partial _t \widehat{\mathbf{H}}_{I,q}^{(k)}(t+s,X_x(s))\prod _{r=1}^{l(\lambda )} \frac{\partial ^{\lambda _r}}{\partial x^{\lambda _r}} X_x(s)ds,\quad k\in \mathbb {N},\lambda \in \mathcal{P}_{\mathbb {N}}\cup \{\mathbb {N}^0\}.\nonumber \\ \end{aligned}$$
(6.40)

Here by \(\lambda \in \mathbb {N}^0\) we mean that the factor \(\prod \frac{\partial ^{\lambda _r}}{\partial x^{\lambda _r}} X_x(s)\) disappears. Moreover, in every term of \(\mathbf{Q}_{1,n}\), factors \(v_{1,k,\lambda }\) appear at most once.

Formally differentiate (6.39) w.r.t. \(t\). If the differentiation commutes with the integration and expectation, then we should have

$$\begin{aligned} \partial _t^2 {\widehat{\Psi }}_q^{(n)}(t,x)= \mathbf{E}\,\!\left[ \exp \left( \sigma \int \limits _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t\!+\!s,X_{x}(s))ds\right) \cdot \mathbf{Q}_{2,n}(v_{0,k,\lambda },v_{1,k,\lambda },v_{2,k,\lambda })\right] \!,\!\!\!\nonumber \\ \end{aligned}$$
(6.41)

where \(\mathbf{Q}_{2,n}\) is a polynomial of degree \(\le n+2\) without constant term in the variables \(v_{0,k,\lambda }\) defined by (6.38), \(v_{1,k,\lambda }\) defined by (6.40), and

$$\begin{aligned} v_{2,k,\lambda }(t,x){:=}\int \limits _0^\infty \partial _t^2 \widehat{\mathbf{H}}_{I,q}^{(k)}(t+s,X_x(s))\prod _{j=1}^{l(\lambda )} \frac{\partial ^{\lambda _j}}{\partial x^{\lambda _j}} X_x(s)ds,\quad k\in \mathbb {N},\lambda \in \mathcal{P}_{\mathbb {N}}\cup \{\mathbb {N}^0\}. \end{aligned}$$

Moreover, in every term of \(\mathbf{Q}_{2,n}\), factors \(v_{2,k,\lambda }\) appear at most once; when a factor \(v_{2,k,\lambda }\) appears, no factor \(v_{1,k,\lambda }\) appears; and when no factor \(v_{2,k,\lambda }\) appears, factors \(v_{1,k,\lambda }\) appear at most twice.

Now we suppose \({\mathcal {E}}_m\) occurs for some \(m\in \mathbb {N}\cup \{0\}\). Using (6.11), (6.13), (6.38), Lemma 6.4, and the argument in (6.34), we conclude that, for any \(k\in \mathbb {N}\) and \(\lambda \in \mathcal{P}_{\mathbb {N}}\), there is a polynomial \(P_{k,\lambda }\) with no constant term such that

$$\begin{aligned} |v_{0,k,\lambda }(t,x)|\le P_{k,\lambda }((m-c_0t)\vee 0)+C \alpha (t)e^{(c_0-2)t}. \end{aligned}$$
(6.42)

Let \(j\in \{1,2\}\) and \(n\in \mathbb {N}\cup \{0\}\). If \(s\ge \frac{m-c_0t}{c_0-1}\) then \(|X_x(s)|\le s+m\le c_0(s+t)\), so from (6.19) we have

$$\begin{aligned} |\partial _t^j\widehat{\mathbf{H}}_{I,q}^{(n)}(t+s,X_x(s))|\le D_{n}\left( \frac{2e^{(c_0-2)(t+s)}}{1-e^{-2t}}\right) ^{j+1}\lesssim \alpha (t)^{j+1}e^{(c_0-2)(t+s)}. \end{aligned}$$

If \(m\ge c_0t\) and \(0\le s\le \frac{m-c_0t}{c_0-1}\), from (6.18) and the definition of \({\mathcal {E}}_m\), we see that for \(j=1,2\),

$$\begin{aligned} |\partial _t^j\widehat{\mathbf{H}}_{I,q}^{(n)}(t+s,X_x(s))|&\le D_{n}\left( \frac{|X_x(s)|}{t+s}+3+\frac{2e^{-t}}{1-e^{-2t}}\right) ^j\left( \frac{1}{2} + \frac{4e^{-t}}{1-e^{-2t}}\right) \\&\le D_{n}\left( \frac{m-c_0t}{t}+c_0+4+\frac{2e^{-t}}{1-e^{-2t}}\right) ^j\left( \frac{1}{2} + \frac{4e^{-t}}{1-e^{-2t}}\right) \\&\lesssim ((m-c_0t)^j+1)\alpha (t)^{j+1}. \end{aligned}$$

Thus, from Lemma 6.4, for \(k\in \mathbb {N}\) and \(\lambda \in \mathcal{P}_{\mathbb {N}}\cup \{\mathbb {N}^0\}\),

$$\begin{aligned} |v_{j,k,\lambda }(t,x)|\lesssim \alpha (t)^{j+1}( e^{(c_0-2)t}+P_{j,k,\lambda }((m-c_0t)\vee 0)),\quad j=1,2, \end{aligned}$$
(6.43)

where \(P_{j,k,\lambda }\) is a polynomial with no constant term.

Let \((j,n)\in \big (\{0,1,2\}\times (\mathbb {N}\cup \{0\})\big ){\setminus }\{(0,0)\}\). From (6.42), (6.43), and the properties of \(\mathbf{Q}_{0,n}\), \(n\in \mathbb {N}\), \(\mathbf{Q}_{1,n}\) and \(\mathbf{Q}_{2,n}\), \(n\in \mathbb {N}\cup \{0\}\), we see that, on the event \({\mathcal {E}}_m\),

$$\begin{aligned} |\mathbf{Q}_{j,n}|\lesssim \alpha (t)^{2j} [P_{j,n}((m-c_0t)\vee 0)+Q_{j,n}((m-c_0t)\vee 0)\alpha (t)^ne^{(c_0-2)t}],\quad \quad \quad \end{aligned}$$
(6.44)

where \(P_{j,n}\) and \(Q_{j,n}\) are polynomials, and \(P_{j,n}(0)=0\).

Proposition 6.2

For \((j,n)\in \big (\{0,1,2\}\times (\mathbb {N}\cup \{0\})\big ){\setminus }\{(0,0)\}\),

$$\begin{aligned}&\mathbf{E}\,\left[ \exp \left( \sigma \int \limits _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s))ds\right) \cdot |\mathbf{Q}_{j,n}|\right] \nonumber \\&\quad \lesssim \exp \left( C(t^{-1}+1)e^{(c_0-2)t}\right) (t^{-n-2j}+1)(e^{(c_0-2)t}+e^{\frac{2|x|}{\kappa }-\frac{2c_0t}{\kappa }}),\quad \quad \quad \end{aligned}$$
(6.45)

Proof

Let \(H_{j,n}(t)=\exp \left( \sigma \int _{0}^{\infty } \widehat{\mathbf{H}}_{I,q}'(t+s,X_{x}(s))ds\right) \cdot |\mathbf{Q}_{j,n}|\). Recall that (6.34) and (6.44) hold on the event \({\mathcal {E}}_m\). Using (6.32) and the argument in (6.35) and (6.36), we see that

$$\begin{aligned}&\mathbf{E}\,[H_{j,n}(t)]=\mathbf{E}\,[1_{{\mathcal {E}}_{\lfloor c_0t\rfloor }}H_{j,n}(t)]+\sum _{m=\lfloor c_0t\rfloor }^\infty \mathbf{E}\,[ 1_{{\mathcal {E}}_{m+1}{\setminus }{\mathcal {E}}_m} H_{j,n}(t)]\\&\quad \lesssim \exp \left( C\alpha (t)e^{(c_0-2)t}\right) \alpha (t)^{2j+n} e^{(c_0-2)t}+ \exp \left( \frac{2|x|}{\kappa }-\frac{2c_0t}{\kappa }+C\alpha (t)e^{(c_0-2)t}\right) \cdot \\&\quad \cdot \sum _{l=0}^\infty \alpha (t)^{2j}(P_{j,n}(l+1)+Q_{j,n}(l+1)\alpha (t)^ne^{(c_0-2)t})\exp \left( \frac{\sigma }{2(c_0-1)}-\frac{2}{\kappa }\right) ^l. \end{aligned}$$

Then (6.45) follows from (6.31) and that \(\alpha (t)\lesssim t^{-1}+1\) and \(\alpha (t)^{2j+n} \lesssim t^{-n-2j}+1\). \(\square \)

Theorem 6.1

The function \(\widehat{\Psi }_q\) is \(C^{\infty ,\infty }\) differentiable and solves (6.29). Moreover, for \(j\in \{0,1,2\}\), \(n\in \mathbb {N}\cup \{0\}\), there is a positive continuous function \(c_{j,n}(t)\) on \((0,\infty )\) such that for any \(t\in (0,\infty )\) and \(x\in \mathbb {R}\), \(|\partial _t^j \widehat{\Psi }_q^{(n)}(t,x)|\le c_{j,n}(t)e^{\frac{2}{\kappa }|x|}\).

Proof

For \(n\in \mathbb {N}\cup \{0\}\), define \(\widehat{\Psi }_q^{[0,n]}\), \(\widehat{\Psi }_q^{[ 1,n]}(t,x)\) and \(\widehat{\Psi }_q^{[ 2,n]}(t,x)\) to be equal to the RHS of (6.37), (6.39) and (6.41), respectively. From the above two propositions, these functions are well defined, and there are positive continuous functions \(c_{j,n}(t)\) on \((0,\infty )\) such that

$$\begin{aligned} |\widehat{\Psi }_q^{[j,n]}(t,x)|\le c_{j,n}(t)e^{\frac{2}{\kappa }|x|},\quad j=0,1,2,\quad n\in \mathbb {N}\cup \{0\}. \end{aligned}$$
(6.46)

Let \(n\in \mathbb {N}\cup \{0\}\), \(j\in \{0,1,2\}\), \(t\in (0,\infty )\), and \(x_1<x_2\in \mathbb {R}\). Since \(|\widehat{\Psi }_q^{[ j,n+1 ]}|\) satisfies (6.46), from Fubini’s Theorem, we have

$$\begin{aligned} \int \limits _{x_1}^{x_2} \widehat{\Psi }_q^{[ j, n+1 ]} (t,x)dx=\widehat{\Psi }_q^{[ j,n ]}(t,x_2)-\widehat{\Psi }_q^{[ j,n ]}(t,x_1). \end{aligned}$$
(6.47)

Thus, \(\widehat{\Psi }_q^{[ j, n ]}\) is absolutely continuous in \(x\) when \(t\) is fixed, and its partial derivative w.r.t. \(x\) is a.s. equal to \(\widehat{\Psi }_q^{[ j,n+1 ]}\). Since \(\widehat{\Psi }_q^{[ j,n+1 ]}\) is continuous in \(x\) for fixed \(t\), we see that \(\widehat{\Psi }_q^{[ j, n ]}\) is continuously differentiable in \(x\), and the partial derivative exactly equals \(\widehat{\Psi }_q^{[ j,n+1 ]}\). The above holds for any \(n\in \mathbb {N}\cup \{0\}\), so \(\widehat{\Psi }_q^{[ j,0 ]} \) is \(C^\infty \) differentiable in \(x\) when \(t\) is fixed, and \(\widehat{\Psi }_q^{[j, n]}\) is its \(n\)-th partial derivative w.r.t. \(x\). In particular, since \(\widehat{\Psi }_q=\widehat{\Psi }_q^{[0,0]}\), we see that \(\widehat{\Psi }_q\) is \(C^\infty \) differentiable in \(x\) when \(t\) is fixed, and \(\widehat{\Psi }_q^{[0,n]}\) is its \(n\)-th partial derivative w.r.t. \(x\).

A similar argument using Fubini’s Theorem shows that, for any \(n\in \mathbb {N}\cup \{0\}\), \(j\in \{0,1\}\), \(\widehat{\Psi }_q^{[j, n ]}\) is absolutely continuous in \(t\) when \(x\) is fixed, and its partial derivative w.r.t. \(t\) is a.s. equal to \(\widehat{\Psi }_q^{[j+1,n ]}\). So \(\widehat{\Psi }_q^{[ 0, n ]}\) is continuously differentiable in \(t\) when \(x\) is fixed, and the partial derivative exactly equals \(\widehat{\Psi }_q^{[1, n ]}\). From (6.46) and (6.47), we see that \(\widehat{\Psi }_q^{[j,n]}\) is locally uniformly Lipschitz continuous in \(x\). We have seen that \(\widehat{\Psi }_q^{[ j, n ]}\) is continuous in \(t\) for every fixed \(x\). So \(\widehat{\Psi }_q^{[ j, n ]}\) is continuous in both \(t\) and \(x\). Thus, \(\widehat{\Psi }_q=\widehat{\Psi }_q^{[0,0]}\) is \(C^{1,\infty }\) differentiable.

Fix \(t_0\in (0,\infty )\) and \(x_0\in \mathbb {R}\). Let \( M(t)=\mathbf{E}\,\Big [\exp \left( \sigma \int _{0}^\infty \widehat{\mathbf{H}}_{I,q}'(t_0+s,X_{x_0}(s))ds\right) \Big |{\mathcal {F}}_t\Big ]\), \(t\ge 0\). Then \(M(t)\), \(0\le t<\infty \), is a uniformly integrable martingale. From (6.30) and the Markov property of \((X_{x_0}(t))\), we have

$$\begin{aligned} M(t)=\widehat{\Psi }_q(t_0+t,X_{x_0}(t))\exp \left( \sigma \int \limits _{0}^{t} \widehat{\mathbf{H}}_{I,q}'(t_0+s,X_{x_0}(s))ds\right) . \end{aligned}$$
(6.48)

From (6.2), Itô’s formula, and the differentiability of \(\widehat{\Psi }_q\), we see that \(\widehat{\Psi }_q\) solves (6.29) for \(t\ge t_0\). Since this is true for any \(t_0\in (0,\infty )\), \(\widehat{\Psi }_q\) solves (6.29).

Since \(\widehat{\Psi }_q\) is \(C^{1,\infty }\) differentiable, the same is true for the RHS of (6.29). Thus, \(\partial _t\widehat{\Psi }_q\) is also \(C^{1,\infty }\) differentiable. So \(\widehat{\Psi }_q\) is \(C^{2,\infty }\) differentiable. Iterating this argument, we conclude that \(\widehat{\Psi }_q\) is \(C^{\infty ,\infty }\) differentiable. The previous argument shows that \(\partial _t^j\widehat{\Psi }_q^{(n)}=\widehat{\Psi }_q^{[j,n]}\) for any \(j\in \{0,1,2\}\) and \(n\in \mathbb {N}\cup \{0\}\). The bounds of \(|\partial _t^j\widehat{\Psi }_q^{(n)}|\) then follow from (6.46). \(\square \)

Theorem 6.2

Let \(\widehat{\Psi }_0=\widehat{\Psi }_q\cdot \widehat{\Psi }_\infty \), where \(\widehat{\Psi }_\infty \) is defined by (6.28). Then \(\widehat{\Psi }_0\) is a positive \(C^{\infty ,\infty }\) differentiable function on \((0,\infty )\times \mathbb {R}\) and solves (6.26). Moreover, for \(j\in \{0,1,2\}\) and \(n\in \mathbb {N}\cup \{0\}\), there is a positive continuous function \(c_{j,n}(t)\) on \((0,\infty )\) such that, for any \(t\in (0,\infty )\) and \(x\in \mathbb {R}\), \(|\partial _t^j \widehat{\Psi }_0^{(n)}(t,x)|\le c_{j,n}(t)e^{\frac{2}{\kappa }|x|}\).

Proof

Since \(\widehat{\Psi }_q\) and \(\widehat{\Psi }_\infty \) are both positive and \(C^{\infty ,\infty }\) differentiable, the same is true for \(\widehat{\Psi }_0=\widehat{\Psi }_q\cdot \widehat{\Psi }_\infty \). Since \(\widehat{\Psi }_q\) solves (6.29), from Lemma 6.8, \(\widehat{\Psi }_0\) solves (6.26). From Lemma 6.3, (6.28), and that \(\tau \le 0\), we see that for any \(j,n\in \mathbb {N}\cup \{0\}\), \(|\partial _t^j\widehat{\Psi }_\infty ^{(n)}(t,x)|\) is bounded by a positive continuous function in \(t\), which, together with Theorem 6.1, implies the upper bounds of \(|\partial _t^j \widehat{\Psi }_0^{(n)}(t,x)|\). \(\square \)

Theorem 6.3

Let \(\Psi _0\) be the transformation of the above \(\widehat{\Psi }_0\) via (6.25) (with \(\widehat{\Psi }\) replaced by \(\widehat{\Psi }_0\)). Then \(\Psi _0\) is a \(C^{\infty ,\infty }\) differentiable positive function on \((0,\infty )\times \mathbb {R}\) and solves (4.5). Moreover, for \(j\in \{0,1,2\}\), \(n\in \mathbb {N}\cup \{0\}\), there is a function \(h_{j,n}(t,|x|)\), which is a polynomial in \(|x|\) for any fixed \(t\), and every coefficient is a positive continuous function in \(t\), such that for any \(t\in (0,\infty )\) and \(x\in \mathbb {R}\), \(|\partial _t^j \Psi _0^{(n)}(t,x)|\le h_{j,n}(t,|x|)e^{-\frac{x^2}{2\kappa t}+\frac{2\pi |x|}{\kappa t}}\).

Proof

Since \(\widehat{\Psi }_0>0\), we also have \(\Psi _0>0\). The differentiability of \(\Psi _0\) is clear from (6.25). Since \(\widehat{\Psi }_0\) solves (6.26), from Lemma 6.7, \(\Psi _0\) solves (4.5). Let \(\Psi _a(t,x)=\widehat{\Psi }_0(\frac{\pi ^2}{t},\frac{\pi }{t} x)\). From Theorem 6.2, it is straightforward to check that for every \(j\in \{0,1,2\}\), \(n\in \mathbb {N}\cup \{0\}\), there is a function \(f_{j,n}(t,|x|)\), which is a polynomial in \(|x|\) of degree \(j\) for each fixed \(t\) whose coefficients are positive continuous functions of \(t\), such that

$$\begin{aligned} \left| \partial _t^j \Psi ^{(n)}_a(t,x)\right| \le f_{j,n}(t,|x|) e^{\frac{2}{\kappa }\frac{\pi }{t}|x|},\quad t>0,x\in \mathbb {R}. \end{aligned}$$
(6.49)

It is easy to verify that for every \(j\in \{0,1,2\}\), \(n\in \mathbb {N}\cup \{0\}\), there is a function \(g_{j,n}(t,|x|)\), which is a polynomial in \(|x|\) whose coefficients are positive continuous functions of \(t\), such that

$$\begin{aligned} \left| \partial _t^j \partial _x^n \left( e^{-\frac{x^2}{2\kappa t}}\left( \frac{\pi }{t}\right) ^{\sigma +\frac{1}{2}}\right) \right| \le g_{j,n}(t,|x|) e^{-\frac{x^2}{2\kappa t}},\quad t>0, x\in \mathbb {R}. \end{aligned}$$
(6.50)

From (6.25), \(\Psi _0(t,x)=e^{-\frac{x^2}{2\kappa t}}(\frac{\pi }{t})^{\sigma +\frac{1}{2}}\Psi _a(t,x)\). So the upper bounds on \(|\partial _t^j \Psi _0^{(n)}(t,x)|\) follow from (6.49), (6.50), and the Leibniz rule, as spelled out below. \(\square \)
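In the last step, the derivatives of the product in (6.25) are controlled by the Leibniz rule: for \(j\in \{0,1,2\}\) and \(n\in \mathbb {N}\cup \{0\}\),

$$\begin{aligned} \left| \partial _t^j\partial _x^n\Psi _0(t,x)\right| \le \sum _{j_1=0}^{j}\sum _{n_1=0}^{n}\binom{j}{j_1}\binom{n}{n_1} \left| \partial _t^{j_1}\partial _x^{n_1}\Big (e^{-\frac{x^2}{2\kappa t}}\Big (\frac{\pi }{t}\Big )^{\sigma +\frac{1}{2}}\Big )\right| \cdot \left| \partial _t^{j-j_1}\partial _x^{n-n_1}\Psi _a(t,x)\right| , \end{aligned}$$

and by (6.50) and (6.49) each summand is at most \(g_{j_1,n_1}(t,|x|)f_{j-j_1,n-n_1}(t,|x|)e^{-\frac{x^2}{2\kappa t}+\frac{2\pi |x|}{\kappa t}}\); summing the polynomial factors over \(j_1,n_1\) gives a bound of the stated form \(h_{j,n}(t,|x|)e^{-\frac{x^2}{2\kappa t}+\frac{2\pi |x|}{\kappa t}}\).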

Theorem 6.4

Let \(\Psi _0\) be as in the above theorem. Let \(\Gamma _0=\Psi _0\Theta _I^{-\frac{2}{\kappa }}\) and \(\Gamma _m(t,x)=\Gamma _0(t,x-2m\pi )\), \(m\in \mathbb {Z}\). For \(s_0\in \mathbb {R}\), let \(\Gamma _{\langle s_0\rangle }=\sum _{m\in \mathbb {Z}}e^{\frac{2\pi }{\kappa }m s_0}\Gamma _m\). Then \(\Gamma _{\langle s_0\rangle }\) is a \(C^{\infty ,\infty }\) differentiable positive function on \((0,\infty )\times \mathbb {R}\), satisfies (4.2), and solves (4.4).

Proof

Let \(\Psi _m(t,x)=\Psi _0(t,x-2m\pi )\) for \(m\in \mathbb {Z}\) and \(\Psi _{\langle s_0\rangle }=\sum _{m\in \mathbb {Z}}e^{\frac{2\pi }{\kappa }m s_0}\Psi _m\). Since \(\Theta _I\) has period \(2\pi \), we have \(\Gamma _{\langle s_0\rangle }=\Psi _{\langle s_0\rangle }\Theta _I^{-\frac{2}{\kappa }}\). Since \(\Theta _I\) is a \(C^{\infty ,\infty }\) differentiable positive function with period \(2\pi \), from Lemma 4.1 it suffices to show that \(\Psi _{\langle s_0\rangle }\) is a \(C^{\infty ,\infty }\) differentiable positive function, satisfies (4.2), and solves (4.5). It is clear from the definition that \(\Psi _{\langle s_0\rangle }\) satisfies (4.2). Since \(\Psi _0\) is a \(C^{\infty ,\infty }\) differentiable positive function that solves (4.5), and \(\mathbf{H}_I\) has period \(2\pi \), every \(\Psi _m\) also satisfies these properties. So \(\Psi _{\langle s_0\rangle }\) is positive. The upper bounds on \(|\partial _t^j \Psi _0^{(n)}(t,x)|\) imply that \(\Psi _{\langle s_0\rangle }\) is finite, and that the series \(\sum _{m\in \mathbb {Z}}e^{\frac{2\pi }{\kappa }m s_0}\partial _t^j\Psi _m^{(n)}\) converges locally uniformly for every \(j,n\ge 0\). Fubini’s Theorem implies that \(\Psi _{\langle s_0\rangle }\) is \(C^{\infty ,\infty }\) differentiable and \(\partial _t^j\Psi _{\langle s_0\rangle }^{(n)}=\sum _{m\in \mathbb {Z}}e^{\frac{2\pi }{\kappa }m s_0}\partial _t^j\Psi _m^{(n)}\). Thus, \(\Psi _{\langle s_0\rangle }\) also solves (4.5). \(\square \)

6.5 Distributions

Proposition 6.3

Let \(p>0\), \(s_0\in \mathbb {R}\), and \(x_0,y_0\in \mathbb {R}\). Let \(\Gamma _m\), \(m\in \mathbb {Z}\), and \(\Gamma _{\langle s_0\rangle }\) be as in Theorem 6.4. Let \(\Lambda _*=\kappa \frac{\Gamma _*'}{\Gamma _*}\) for \(*\in \{m,\langle s_0\rangle \}\). For \(m\in \mathbb {Z}\), let \(\widetilde{\beta }_m\) be the covering annulus SLE\((\kappa ,\Lambda _0)\) trace in \(\mathbb {S}_p\) started from \(x_0\) with marked point \(y_0+2m\pi +p i\). Let \(\widetilde{\beta }_{\langle s_0\rangle }\) be the covering annulus SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) trace in \(\mathbb {S}_p\) started from \(x_0\) with marked point \(y_0+p i\). Let \(\mathbb {P}_{\widetilde{\beta },m}\), \(m\in \mathbb {Z}\), and \(\mathbb {P}_{\widetilde{\beta },\langle s_0\rangle }\) denote the distributions of \(\widetilde{\beta }_m\), \(m\in \mathbb {Z}\), and \(\widetilde{\beta }_{\langle s_0\rangle }\), respectively. Then

$$\begin{aligned} \mathbb {P}_{\widetilde{\beta },\langle s_0\rangle }=\sum _{m\in \mathbb {Z}} e^{\frac{2\pi }{\kappa }m s_0}\frac{\Gamma _m(p,x_0-y_0)}{\Gamma _{\langle s_0\rangle }(p,x_0-y_0)}\,\mathbb {P}_{\widetilde{\beta },m}. \end{aligned}$$
(6.51)

Proof

For \(m\in \mathbb {Z}\), let \(\xi _m(t)\), \(0\le t<p\), be the solution to (3.13) with \(\Lambda =\Lambda _0\) and \(y_0\) replaced by \(y_0+2m\pi \). Let \(\xi _{\langle s_0\rangle }(t)\) be the solution to (3.13) with \(\Lambda =\Lambda _{\langle s_0\rangle }\). Then the covering annulus Loewner traces of modulus \(p\) driven by \(\xi _m\), \(m\in \mathbb {Z}\), and \(\xi _{\langle s_0\rangle }\) have distributions \(\mathbb {P}_{\widetilde{\beta },m}\), \(m\in \mathbb {Z}\), and \(\mathbb {P}_{\widetilde{\beta },\langle s_0\rangle }\), respectively. Let \(X_m(t)=\xi _m(t)-{{\mathrm{Re }}}\widetilde{g}^{\xi _m}(t,y_0+2m\pi +pi)+2m\pi \), \(m\in \mathbb {Z}\), and \(X_{\langle s_0\rangle }(t)=\xi _{\langle s_0\rangle }(t)-{{\mathrm{Re }}}\widetilde{g}^{\xi _{\langle s_0\rangle }}(t,y_0+pi)\). Since \(\Gamma _m(t,x)=\Gamma _0(t,x-2m\pi )\), we have \(\Lambda _m(t,x)=\Lambda _0(t,x-2m\pi )\). Since \({{\mathrm{Re }}}g(t,y+pi)=\widetilde{g}_I(t,y)\) for \(y\in \mathbb {R}\), and \(\mathbf{H}_I\) is odd and has period \(2\pi \), from (3.9), we find that, for \(*\in \{m,\langle s_0\rangle \}\), with \(\Phi _*{:=}\Lambda _*+\mathbf{H}_I\), \(X_*(t)\) satisfies

$$\begin{aligned} dX_*(t)=\sqrt{\kappa }dB(t)+\Phi _*(p-t,X_*(t))dt,\quad X_*(0)=x_0-y_0. \end{aligned}$$

Let \(\mathbb {P}_{X,*}\) denote the distribution of \((X_*(t))\) for \(*\in \{m,\langle s_0\rangle \}\). Since \(\xi _{*}(t)= X_{*}(t)+y_0-\int _0^t \mathbf{H}_I( p-r, X_{*}(r))dr\), \(0\le t<p\), it suffices to show that (6.51) holds with the subscripts “\(\widetilde{\beta }\)” replaced by “\(X\)”. The rest of the proof is a standard application of the Girsanov theorem. One may check that for every \(m\in \mathbb {Z}\), \(M_m(t){:=}e^{\frac{2\pi }{\kappa }m s_0}\frac{\Gamma _m(p-t,X_{\langle s_0\rangle }(t))}{\Gamma _{\langle s_0\rangle }(p-t,X_{\langle s_0\rangle }(t))}\) is a nonnegative martingale w.r.t. \(\mathbb {P}_{X,\langle s_0\rangle }\) which satisfies \(\frac{dM_m(t)}{M_m(t)}=(\Lambda _m-\Lambda _{\langle s_0\rangle })\frac{dB(t)}{\sqrt{\kappa }}\) and \(\sum _{m\in \mathbb {Z}} M_m(t)=1\); and we have \(\frac{d\mathbb {P}_{X,m}}{d\mathbb {P}_{X,\langle s_0\rangle }}=\frac{M_m(\infty )}{M_m(0)}\). \(\square \)
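Spelled out, the listed facts combine as follows. Since \(X_{\langle s_0\rangle }(0)=x_0-y_0\), the weights in (6.51) are exactly \(M_m(0)=e^{\frac{2\pi }{\kappa }m s_0}\frac{\Gamma _m(p,x_0-y_0)}{\Gamma _{\langle s_0\rangle }(p,x_0-y_0)}\), and for every event \(A\),

$$\begin{aligned} \sum _{m\in \mathbb {Z}}M_m(0)\,\mathbb {P}_{X,m}[A] =\sum _{m\in \mathbb {Z}}\mathbf{E}_{\langle s_0\rangle }\big [M_m(\infty )\mathbf{1}_A\big ] =\mathbf{E}_{\langle s_0\rangle }\Big [\sum _{m\in \mathbb {Z}}M_m(\infty )\mathbf{1}_A\Big ]=\mathbb {P}_{X,\langle s_0\rangle }[A], \end{aligned}$$

where \(\mathbf{E}_{\langle s_0\rangle }\) denotes expectation under \(\mathbb {P}_{X,\langle s_0\rangle }\) and \(M_m(\infty )\) denotes the terminal value of \(M_m\); the first equality uses \(\frac{d\mathbb {P}_{X,m}}{d\mathbb {P}_{X,\langle s_0\rangle }}=\frac{M_m(\infty )}{M_m(0)}\), the interchange of sum and expectation is Fubini’s theorem for nonnegative terms, and the last equality uses \(\sum _{m}M_m=1\).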

Remark

Since \(\Gamma _{\langle s_0\rangle }\) satisfies (4.2), \(\Lambda _{\langle s_0\rangle }\) has period \(2\pi \). So \(\Lambda _{\langle s_0\rangle }\) is a crossing annulus drift function, and we may define the annulus SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) process. By contrast, none of the \(\Lambda _m\) has period \(2\pi \), so it only makes sense to define the covering annulus SLE\((\kappa ,\Lambda _m)\) processes.
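Explicitly, with the definitions of Theorem 6.4, an index shift gives

$$\begin{aligned} \Gamma _{\langle s_0\rangle }(t,x+2\pi )=\sum _{m\in \mathbb {Z}}e^{\frac{2\pi }{\kappa }m s_0}\Gamma _0(t,x-2(m-1)\pi )=e^{\frac{2\pi }{\kappa }s_0}\,\Gamma _{\langle s_0\rangle }(t,x), \end{aligned}$$

so the exponential prefactor cancels in \(\Lambda _{\langle s_0\rangle }=\kappa \frac{\Gamma _{\langle s_0\rangle }'}{\Gamma _{\langle s_0\rangle }}\), which therefore has period \(2\pi \). For an individual \(\Gamma _m\) there is no such cancellation: \(\Gamma _m(t,x+2\pi )=\Gamma _{m-1}(t,x)\), which in general is not a constant multiple of \(\Gamma _m(t,x)\).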

Proposition 6.4

Let \(p>0\) and \(x_0,y_0\in \mathbb {R}\). Let \(\Gamma _0\) be as in Theorem 6.4, and \(\Lambda _0=\kappa \frac{\Gamma _0'}{\Gamma _0}\). Let \(\widetilde{\beta }(t)\), \(0\le t<p\), be the covering annulus SLE\((\kappa ,\Lambda _0)\) trace in \(\mathbb {S}_p\) started from \(x_0\) with marked point \(y_0+pi\). Then a.s. \({{\mathrm{dist}}}(y_0+ pi,\widetilde{\beta }([0,p))+2\pi \mathbb {Z})=0\).

Proof

Let \(\xi (t)\) be the driving function, and \(\widetilde{g}(t,\cdot )\), \(0\le t<p\), be the covering Loewner maps. Then \(\widetilde{g}(t,\cdot )\) maps \(\mathbb {S}_p{\setminus }(\widetilde{\beta }([0,t])+2\pi \mathbb {Z})\) conformally onto \(\mathbb {S}_{p-t}\), and maps \(\mathbb {R}_p\) onto \(\mathbb {R}_{p-t}\). From Koebe’s \(1/4\) Theorem, it suffices to show that a.s. \(\widetilde{g}'(t,y_0+ pi)\cdot \frac{p}{p-t}\rightarrow \infty \) as \(t\rightarrow p^-\).

Let \(X(t)=\xi (t)-{{\mathrm{Re }}}\widetilde{g}(t,y_0+p i)\) and \(\Phi _0=\Lambda _0+\mathbf{H}_I\). Then \(X(t)\) satisfies the SDE:

$$\begin{aligned} dX(t)=\sqrt{\kappa }dB(t)+{\Phi _{0}(p-t, X(t))}\,dt,\quad 0\le t<p. \end{aligned}$$

From (3.8) we have \(\ln (\widetilde{g}'(t,y_0+pi)\cdot \frac{p}{p-t})=\int _0^t (\mathbf{H}_I'(p-s,X(s))+\frac{1}{p-s})ds\). Let \(\widehat{\Phi }_0=\kappa \frac{\widehat{\Psi }_0'}{\widehat{\Psi }_0}\). Since \(\Psi _0\) and \(\widehat{\Psi }_0\) satisfy (6.24), we have \(\widehat{\Phi }_0(s,z)=\frac{\pi }{s}\Phi _0(\frac{\pi ^2}{s},\frac{\pi }{s} z)+\frac{z}{s}\). Let \(\widehat{p}=\frac{\pi ^2}{p}\) and \(\widehat{X}(t)=\frac{\widehat{p}+t}{\pi }X(p-\frac{\pi ^2}{\widehat{p}+t})\), \(0\le t<\infty \). Then \(\widehat{X}(0)=\frac{\widehat{p}}{\pi }X(0)=\frac{\pi }{p}(x_0-y_0)\). Applying Itô’s formula and time-change of a semimartingale, we see that \(\widehat{X}(t)\) satisfies the SDE:

$$\begin{aligned} d\widehat{X}(t)=\sqrt{\kappa }\,d\widehat{B}(t)+\widehat{\Phi }_0(\widehat{p}+t,\widehat{X}(t))dt,\quad 0\le t<\infty , \end{aligned}$$

for some standard Brownian motion \(\widehat{B}(t)\). Changing variables using \(\widehat{s}=\frac{\pi ^2}{p-s}-\widehat{p}\), we get

$$\begin{aligned}&\int \limits _0^t \left( \mathbf{H}_I'(p-s,X(s))+\frac{1}{p-s}\right) ds\\&\quad =\int \limits _0^{\widehat{t}}\left( \mathbf{H}_I'\left( \frac{\pi ^2}{\widehat{p}+\widehat{s}},X\left( p-\frac{\pi ^2}{\widehat{p}+\widehat{s}}\right) \right) +\frac{\widehat{p}+\widehat{s}}{\pi ^2}\right) \frac{\pi ^2}{(\widehat{p}+\widehat{s})^2}\,d\widehat{s}\\&\quad =\int \limits _0^{\widehat{t}}\left( \frac{\pi ^2}{(\widehat{p}+\widehat{s})^2}\mathbf{H}_I'\left( \frac{\pi ^2}{\widehat{p}+\widehat{s}},\frac{\pi }{\widehat{p}+\widehat{s}}\widehat{X}(\widehat{s})\right) +\frac{1}{\widehat{p}+\widehat{s}} \right) d\widehat{s}\\&\quad =\int \limits _0^{\widehat{t}} \widehat{\mathbf{H}}_I'(\widehat{p}+\widehat{s},\widehat{X}(\widehat{s}))d\widehat{s}, \end{aligned}$$

where \(\widehat{t}=\frac{\pi ^2}{p-t}-\widehat{p}\), and the last equality follows from (2.9). So we have

$$\begin{aligned} \lim _{t\rightarrow p^-}\ln (\widetilde{g}'(t,y_0+ pi)\cdot \frac{p}{p-t})=\int \limits _0^\infty \widehat{\mathbf{H}}_I'(\widehat{p}+\widehat{s},\widehat{X}(\widehat{s}))d\widehat{s}\ge \int \limits _0^\infty \tanh _2'(\widehat{X}(\widehat{s}))d\widehat{s}, \end{aligned}$$

where the last inequality follows from (2.12).

From the Girsanov theorem and the fact that \(\kappa \frac{\widehat{\Psi }_0'}{\widehat{\Psi }_0}=\kappa \frac{\widehat{\Psi }_q'}{\widehat{\Psi }_q}+\tau \tanh _2\), we find that the distribution of \((\widehat{X}(t))\) is equivalent to that of \((X_{\frac{\pi }{p}(x_0-y_0)}(t))\) defined by (6.2), and the Radon–Nikodym derivative is \(M(\infty )/M(0)\), where \(M(t)\) is defined by (6.48). Since \((X_{\frac{\pi }{p}(x_0-y_0)}(t))\) is homogeneous and recurrent, it a.s. spends an infinite amount of time in any fixed neighborhood of \(0\), on which \(\tanh _2'\) is bounded below by a positive constant; so a.s. \(\int _0^\infty \tanh _2'(X_{\frac{\pi }{p}(x_0-y_0)}(t))dt=\infty \), which implies that a.s. \(\int _0^\infty \tanh _2'(\widehat{X}(\widehat{s}))d\widehat{s}=\infty \). Thus, a.s. \(\widetilde{g}'(t,y_0+ pi)\cdot \frac{p}{p-t}\rightarrow \infty \) as \(t\rightarrow p^-\). \(\square \)

Corollary 6.1

Let \(p>0\), \(s_0\in \mathbb {R}\), and \(x_0,y_0\in \mathbb {R}\). Let \(\Gamma _{\langle s_0\rangle }\) be as in Theorem 6.4, and \(\Lambda _{\langle s_0\rangle }=\kappa \frac{\Gamma _{\langle s_0\rangle }'}{\Gamma _{\langle s_0\rangle }}\). Let \(\beta (t)\), \(0\le t<p\), be the annulus SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) trace in \(\mathbb {A}_p\) started from \(e^{ix_0}\) with marked point \(e^{-p+iy_0}\). Then a.s. \({{\mathrm{dist}}}(e^{-p+iy_0},\beta ([0,p)))=0\).

Proof

This follows immediately from the above two propositions. \(\square \)

Remark

For the reader’s convenience, we now make a list of the functions defined in this section. First, \(\widehat{\Psi }_q\) is defined by the Feynman–Kac formula (6.30) and depends on \(\kappa >0\) and \(\sigma \in [0,\frac{4}{\kappa })\). Second, \(\widehat{\Psi }_0\) is defined to be the product \(\widehat{\Psi }_q\widehat{\Psi }_\infty \), where \(\widehat{\Psi }_\infty \) is a simple solution of (6.27) given by (6.28). Third, \(\Psi _0\) is the transformation of \(\widehat{\Psi }_0\) via (6.25). Fourth, the partition functions are defined by \(\Gamma _0=\Psi _0\Theta _I^{-\frac{2}{\kappa }}\), \(\Gamma _m(t,x)=\Gamma _0(t,x-2m\pi )\), and \(\Gamma _{\langle s_0\rangle }=\sum _{m\in \mathbb {Z}}e^{\frac{2\pi }{\kappa }m s_0}\Gamma _m\). Fifth, the annulus drift functions are defined by \(\Lambda _*=\kappa \frac{\Gamma _*'}{\Gamma _*}\).

7 Reversibility

The main result of this section is the theorem below, which generalizes Theorem 1.1.

Theorem 7.1

Let \(\kappa \in (0,4]\) and \(s_0\in \mathbb {R}\). If \(\beta (t)\), \(-\infty \le t<\infty \), is a whole-plane SLE\((\kappa ,s_0)\) trace in \(\widehat{\mathbb {C}}\) from \(a\) to \(b\), then the reversal of \(\beta \), up to a time-change, has the distribution of a whole-plane SLE\((\kappa ,s_0)\) trace in \(\widehat{\mathbb {C}}\) from \(b\) to \(a\).

Proof

By conformal invariance, we only need to consider the case \(a=0\) and \(b=\infty \). Let \(\Gamma _{\langle s_0\rangle }\) be given by Theorem 6.4 with \(\sigma =\frac{4}{\kappa }-1\). Then \(\Gamma _{\langle s_0\rangle }\) solves (4.1) and satisfies (4.2). We now apply Theorem 5.1 to \(\Gamma =\Gamma _{\langle s_0\rangle }\). Let \(\Lambda _j\), \(s_j\) and \(\beta _{I,j}(t)\), \(j=1,2\), be given by Theorem 5.1, and set \(\beta _2{:=}I_0(\beta _{I,2})\). Then for \(j=1,2\), \(\beta _{I,j}\) is a whole-plane SLE\((\kappa ,s_j)\) trace in \(\widehat{\mathbb {C}}\) from \(0\) to \(\infty \), and satisfies that, for any \(t_2\in \mathbb {Q}\), conditioned on \(\beta _{I,2}(s)\), \(-\infty \le s\le t_2\), after a time-change, the curve \(\beta _{I,1}(t_1)\), \(-\infty \le t_1<T_1( t_2)\), has the distribution of a disc SLE\((\kappa ,\Lambda _1)\) trace in \(\mathbb {C}{\setminus }I_0(\beta _{I,2}([-\infty ,t_2]))\) started from \(0\) with marked point \(\beta _{I,2}(t_2)\), where \(T_1(t_2)\) is the maximal number in \((-\infty ,+\infty ]\) such that \(\beta _{I,1}(t)\cap \beta _{2}([-\infty ,t_2])=\emptyset \) for \(-\infty < t<T_1(t_2)\).

Let \(\xi _2\) be the driving function for \((\beta _{I,2}(t))\), and \(g_2(t,\cdot )\), \(-\infty <t<\infty \), be the inverted whole-plane Loewner maps driven by \(\xi _2\). Then \(g_2(t_2,\cdot )\) maps \(\mathbb {C}{\setminus }I_0(\beta _{I,2}([-\infty ,t_2]))\) conformally onto \(\mathbb {D}\), fixes \(0\), and takes \(\beta _{I,2}(t_2)\) to \(e^i(\xi _2(t_2))\). Thus, conditioned on \(\beta _{I,2}(s)\), \(-\infty \le s\le t_2\), \(g_2(t_2,\beta _{I,1}(t_1))\), \(-\infty \le t_1<T_1(t_2)\), is a time-change of a disc SLE\((\kappa ,\Lambda _1)\) trace in \(\mathbb {D}\) started from \(0\) with marked point \(e^i(\xi _2(t_2))\). Since \(\Lambda _1=\Lambda =\kappa \frac{\Gamma _{\langle s_0\rangle }'}{\Gamma _{\langle s_0\rangle }}\), from Corollary 6.1 and the relation between the disc SLE\((\kappa ,\Lambda )\) process and the annulus SLE\((\kappa ,\Lambda )\) process, we conclude that a.s. \(e^i(\xi _2(t_2))\) is a subsequential limit of \(g_2(t_2,\beta _{I,1}(t))\) as \(t\rightarrow T_1(t_2)^-\). Thus, \(\beta _2(t_2)\) is a subsequential limit of \(\beta _{I,1}(t)\) as \(t\rightarrow T_1(t_2)^-\). If \(T_1(t_2)=\infty \), then \(\lim _{t\rightarrow T_1(t_2)^-} \beta _{I,1}(t)=\infty =\beta _2(-\infty )\ne \beta _2(t_2)\), which a.s. gives a contradiction. So a.s. \(T_1(t_2)<\infty \) and \(\beta _{I,1}(T_1(t_2))=\lim _{t\rightarrow T_1(t_2)^-} \beta _{I,1}(t)=\beta _2(t_2)\). Since \(\mathbb {Q}\) is countable, we conclude that a.s. \(\beta _{I,1}(T_1(t_2))=\beta _2(t_2)\) for every \(t_2\in \mathbb {Q}\), which implies that a.s. \(\beta _2(\mathbb {R})\subset \beta _{I,1}(\mathbb {R})\). Since both \(\beta _{I,1}\) and \(\beta _2\) are simple, and the initial (resp. final) point of \(\beta _{I,1}\) agrees with the final (resp. initial) point of \(\beta _2\), we see that \(\beta _2\) is a reversal of \(\beta _{I,1}\). Now \(\beta _{I,1}\) is a whole-plane SLE\((\kappa ,s_0)\) trace in \(\widehat{\mathbb {C}}\) from \(0\) to \(\infty \), and \(\beta _{I,2}\) is a whole-plane SLE\((\kappa ,-s_0)\) trace in \(\widehat{\mathbb {C}}\) from \(0\) to \(\infty \). Since \(I_0\) is conjugate conformal, \(\beta _2=I_0(\beta _{I,2})\) is a whole-plane SLE\((\kappa ,s_0)\) trace in \(\widehat{\mathbb {C}}\) from \(\infty \) to \(0\). This proves the theorem in the case \(a=0\) and \(b=\infty \). \(\square \)

Theorem 7.2

If \(\beta (t)\), \(0\le t<\infty \), is a radial SLE\((\kappa ,-s_0)\) trace in a simply connected domain \(D\) from \(a\) to \(b\), then a.s. \(\lim _{t\rightarrow \infty }\beta (t)=b\), and after a time-change, the reversal of \(\beta \) becomes a disc SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) trace in \(D\) started from \(b\) with marked point \(a\).

Proof

This follows from the coupling constructed in the proof of Theorem 7.1 and the relation between whole-plane SLE\((\kappa ,s_0)\) and radial SLE\((\kappa ,-s_0)\). \(\square \)

Theorem 7.3

Let \(D\) be a doubly connected domain with two boundary points \(a,b\) lying on different boundary components. If \(\beta (t)\), \(0\le t<p\), is an annulus SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) trace in \(D\) started from \(a\) with marked point \(b\), then a.s. \(\lim _{t\rightarrow p}\beta (t)=b\), and after a time-change, the reversal of \(\beta \) becomes an annulus SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) trace in \(D\) started from \(b\) with marked point \(a\).

Proof

This follows from the coupling constructed in the proof of Theorem 7.1 and the relation between disc SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\) and annulus SLE\((\kappa ,\Lambda _{\langle s_0\rangle })\). \(\square \)

Remark

For \(\kappa \in (0,6)\) and \(\sigma =\frac{1}{2}+\frac{1}{\kappa }\in [0,\frac{4}{\kappa })\), the \(\Lambda _{\langle 0\rangle }\) given by Proposition 6.3 can be used to decompose an annulus SLE\(_\kappa \) process (without marked point). The statement is similar to Lemma 3.1 in [12].

8 Some particular solutions

In this section, for \(\kappa \in \{4,2,3,0,16/3\}\), we will find solutions to the PDEs for \(\Lambda \) ((4.3) and (4.49)) and for \(\Gamma \) ((4.1) and (4.48)) that can be expressed in terms of \(\mathbf{H}\) and \(\mathbf{H}_I\). Since \(\Lambda =\kappa \frac{\Gamma '}{\Gamma }\), multiplying \(\Gamma \) by a function of \(t\) does not change the value of \(\Lambda \). So we may as well consider the following PDEs for \(\Gamma \), where \(C(t)\) is some real-valued continuous function depending only on \(t\):

$$\begin{aligned} {\partial _t{\Gamma }}&= \frac{\kappa }{2} {\Gamma ''} +\mathbf{H}_I{\Gamma '}+\left( \frac{3}{\kappa }-\frac{1}{2}\right) \mathbf{H}_I'\Gamma + C(t){\Gamma }.\end{aligned}$$
(8.1)
$$\begin{aligned} {\partial _t{ \Gamma }}&= \frac{\kappa }{2} { \Gamma ''} +\mathbf{H}{ \Gamma '}+\left( \frac{3}{\kappa }-\frac{1}{2}\right) \mathbf{H}' \Gamma + C(t){ \Gamma }. \end{aligned}$$
(8.2)

8.1 \(\kappa =4\)

Let \(\kappa =4\). From Lemma 4.1 we see that if \(\Psi \) solves

$$\begin{aligned} \partial _t\Psi =2\Psi '', \end{aligned}$$
(8.3)

then \(\Gamma =\Psi \Theta _I^{-2/\kappa }\) solves (4.1). Similarly, \(\Gamma =\Psi \Theta ^{-2/\kappa }\) solves (4.48) if \(\Psi \) solves (8.3). The solutions to (8.3) are well-known. For example, we have the following solutions: \(e^{2c^2t+cx}\), \(\frac{1}{\sqrt{8\pi t}}e^{-\frac{(x-c)^2}{8t}}\), \(e^{-t/2}\sin _2(x-c)\), \(\Theta (2t,x-c)\), and \(\Theta _I(2t,x-c)\). The function \(\Theta _I(2t,x-\pi )\) corresponds to the solution \(\Gamma (t,x)=\Theta _I(2t,x-\pi )\Theta _I(t,x)^{-1/2}\) of (8.1), which agrees with the solution given by Sect. 6.3 for \(\kappa =4\) and \(\sigma =\frac{4}{\kappa }-1=0\). Some of these solutions are related to the Gaussian free field ([5]) in doubly connected domains.
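For instance, the first two solutions can be checked by direct differentiation:

$$\begin{aligned} \partial _t e^{2c^2t+cx}=2c^2e^{2c^2t+cx}=2\,\partial _x^2 e^{2c^2t+cx},\qquad \partial _t\Big (\frac{e^{-\frac{(x-c)^2}{8t}}}{\sqrt{8\pi t}}\Big )=\Big (\frac{(x-c)^2}{8t^2}-\frac{1}{2t}\Big )\frac{e^{-\frac{(x-c)^2}{8t}}}{\sqrt{8\pi t}}=2\,\partial _x^2\Big (\frac{e^{-\frac{(x-c)^2}{8t}}}{\sqrt{8\pi t}}\Big ). \end{aligned}$$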

8.2 \(\kappa =2\)

Let \(\kappa =2\). In this case if \(\Xi \) on \((0,\infty )\times \mathbb {R}\) solves

$$\begin{aligned} \partial _t \Xi =\Xi ''+\Xi '\mathbf{H}_I+C(t)\Xi \end{aligned}$$
(8.4)

then \(\Gamma {:=}\Xi '\) solves (8.1). Similarly, if \(\Xi \) on \((0,\infty )\times (\mathbb {R}{\setminus }\{2n\pi :n\in \mathbb {Z}\})\) solves

$$\begin{aligned} \partial _t \Xi =\Xi ''+\Xi '\mathbf{H}+C(t)\Xi . \end{aligned}$$
(8.5)

then \(\Gamma {:=}\Xi '\) solves (8.2).

From (2.8) we see that \(\Xi _1=\mathbf{H}_I\) solves (8.4) and \(\Xi _2=\mathbf{H}\) solves (8.5) with \(C(t)= 0\). It is also easy to check that \(\Xi _3(t,x)=t\mathbf{H}_I(t,x)+x\) solves (8.4) and \(\Xi _4(t,x)=t\mathbf{H}(t,x)+x\) solves (8.5) with \(C(t)=0\). The function \(\Xi _3\) corresponds to the solution \(\Gamma (t,x)=t\mathbf{H}_I'(t,x)+1\), which agrees with the solution given by Sect. 6.3 for \(\kappa =2\) and \(\sigma =\frac{4}{\kappa }-1=1\). This \(\Gamma \) is also the density function of the distribution of the limit point of an annulus SLE\(_2\) trace.
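For instance, since \(\Xi _1=\mathbf{H}_I\) solves (8.4) with \(C(t)=0\), i.e., \(\partial _t\mathbf{H}_I=\mathbf{H}_I''+\mathbf{H}_I'\mathbf{H}_I\), the verification for \(\Xi _3\) takes one line:

$$\begin{aligned} \partial _t\Xi _3=\mathbf{H}_I+t\,\partial _t\mathbf{H}_I=t\mathbf{H}_I''+(t\mathbf{H}_I'+1)\mathbf{H}_I=\Xi _3''+\Xi _3'\mathbf{H}_I. \end{aligned}$$

The same computation with \(\mathbf{H}\) in place of \(\mathbf{H}_I\) handles \(\Xi _4\).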

We now derive more solutions. Fix \(t>0\). Let \(L_t=\{2n\pi +i2kt:n,k\in \mathbb {Z}\}\). Let \(F_{1,t}\) denote the set of odd analytic functions \(f\) on \(\mathbb {C}{\setminus }L_t\) such that each \(z\in L_t\) is a simple pole of \(f\), \(2\pi \) is a period of \(f\), and \(i2t\) is an antiperiod of \(f\), i.e., \(f(z+i2t)=-f(z)\). Let \(F_{2,t}\) denote the set of odd analytic functions \(f\) on \(\mathbb {C}{\setminus }L_t\) such that each \(z\in L_t\) is a simple pole of \(f\), \(2\pi \) is an antiperiod of \(f\), and \(i2t\) is a period of \(f\). Let \(F_{3,t}\) denote the set of odd analytic functions \(f\) on \(\mathbb {C}{\setminus }L_t\) such that each \(z\in L_t\) is a simple pole of \(f\), and both \(2\pi \) and \(i2t\) are antiperiods of \(f\). Define

$$\begin{aligned} \Xi _1(t,z)&= \mathbf{H}(2t,z)-\mathbf{H}_I(2t,z), \quad \Xi _2(t,z)=\frac{1}{2}\mathbf{H}\left( \frac{t}{2},\frac{z}{2}\right) -\frac{1}{2}\mathbf{H}\left( \frac{t}{2},\frac{z}{2}+\pi \right) ,\\ \Xi _3(t,z)&= \frac{1}{2}\mathbf{H}\left( t,\frac{z}{2}\right) -\frac{1}{2}\mathbf{H}_I\left( t,\frac{z}{2}\right) -\frac{1}{2}\mathbf{H}\left( t,\frac{z}{2}+\pi \right) +\frac{1}{2}\mathbf{H}_I\left( t,\frac{z}{2}+\pi \right) . \end{aligned}$$

From the properties of \(\mathbf{H}\) and \(\mathbf{H}_I\), it is easy to check that \(F_{j,t}\) is the linear space spanned by \(\Xi _j(t,\cdot )\) for \(j=1,2,3\). For \(j=1,2,3\), define

$$\begin{aligned} J_j=\partial _t\Xi _j-\Xi _j''-\Xi _j'\mathbf{H},\quad C_j(t)=\frac{1}{2}{{{\mathrm{Res}}}_{z=0} J_j(t,\cdot )}. \end{aligned}$$

Fix \(t>0\). Note that \(0\) is a simple pole of both \(\mathbf{H}(t,\cdot )\) and \(\Xi _1(t,\cdot )\) of residue \(2\). It is easy to conclude that \(0\) is also a simple pole of \(J_1(t,\cdot )\); see the computation below. From the facts that \(\Xi _1(t,\cdot )\in F_{1,t}\), that \(\mathbf{H}(t,\cdot )\) has period \(2\pi \), and that \(\mathbf{H}(t,z+i2t)=\mathbf{H}(t,z)-2i\), it is easy to check that \(J_1(t,\cdot )\in F_{1,t}\) as well. So \(J_1(t,\cdot )=C_1(t)\Xi _1(t,\cdot )\). Thus, \(\Xi _1\) solves (8.5). Similarly, \(\Xi _2\) and \(\Xi _3\) both solve (8.5).
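To spell out the pole computation at \(0\): \(\Xi _1(t,\cdot )\in F_{1,t}\) is odd, and \(\mathbf{H}(t,\cdot )\) is odd as well (this is also used in Sect. 8.4, where \(\Lambda _1=\mathbf{H}-2\mathbf{H}_2\) is odd), so near \(z=0\) we may write \(\mathbf{H}(t,z)=\frac{2}{z}+b(t)z+O(z^3)\) and \(\Xi _1(t,z)=\frac{2}{z}+a(t)z+O(z^3)\), where \(a(t),b(t)\) are names introduced only for this computation. Then

$$\begin{aligned} \Xi _1''=\frac{4}{z^3}+O(z),\qquad \Xi _1'\mathbf{H}=\Big (-\frac{2}{z^2}+a+O(z^2)\Big )\Big (\frac{2}{z}+bz+O(z^3)\Big )=-\frac{4}{z^3}+\frac{2a-2b}{z}+O(z), \end{aligned}$$

while \(\partial _t\Xi _1=O(z)\), since the singular part \(\frac{2}{z}\) does not depend on \(t\). Hence \(J_1=\frac{2(b-a)}{z}+O(z)\), so \(0\) is (at most) a simple pole of \(J_1(t,\cdot )\), and \(C_1(t)=b(t)-a(t)\).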

8.3 \(\kappa =3\)

Let \(\kappa =3\). Let \(\Xi _j\), \(j=1,2,3\), be the functions defined by the displayed formulas in the previous subsection. For \(j=1,2,3\), let \(\Gamma _j=\Xi _j\), and define

$$\begin{aligned} H_j=\partial _t{\Gamma _j}-\frac{3}{2}\Gamma _j''-\mathbf{H}\Gamma _j' -\frac{1}{2}\mathbf{H}'\Gamma _j,\quad C_j(t)=\frac{1}{2}{{{\mathrm{Res}}}_{z=0} H_j(t,\cdot )}. \end{aligned}$$

Using the argument in the last subsection, we find that \(H_j(t,\cdot )\in F_{j,t}\) for any \(t>0\). So \(H_j(t,\cdot )=C_j(t)\Gamma _j(t,\cdot )\). Thus, \(\Gamma _1,\Gamma _2,\Gamma _3\) solve (8.2). For \(j=4,5,6\), let \(\Gamma _j(t,z)=\Gamma _{j-3}(t,z+it)\). Since \(\mathbf{H}_I(t,z)=\mathbf{H}(t,z+it)+i\), \(\Gamma _4,\Gamma _5,\Gamma _6\) solve (8.1).

For \(j=2,3\), \(\Gamma _j\) takes positive real values on \((0,2\pi )+4\pi \mathbb {Z}\), takes negative real values on \((-2\pi ,0)+4\pi \mathbb {Z}\), and has antiperiod \(2\pi \). So \(\Lambda _j{:=}3\frac{\Gamma _j'}{\Gamma _j}\) is a chordal-type annulus drift function that solves (4.49) for \(\kappa =3\). It is worth mentioning that the annulus SLE\((\kappa ;\Lambda _j)\) process admits the following local martingale, which resembles the \(G(\Omega ,a,b,z)\) in Proposition 11 of [7]. The proof uses the fact that \(\Gamma _j\) solves (8.2) for \(z\in \mathbb {C}\) away from the poles.

Proposition 8.1

Let \(j\in \{2,3\}\) and \(p>0\). Let \(x_0\in \mathbb {R}\) and \(z_0\in \mathbb {R}{\setminus }(x_0+2\pi \mathbb {Z})\). Let \(\xi (t)\), \(0\le t<T\), be the driving function for the covering annulus SLE\((\kappa ;\Lambda _j)\) process in \(\mathbb {S}_p\) started from \(x_0\) with marked point \(z_0\). Let \(\widetilde{g}_t\), \(0\le t<T\), be the covering annulus Loewner maps of modulus \(p\) driven by \(\xi \). Then for every \(z\in \mathbb {S}_p\),

$$\begin{aligned} M_t(z){:=}\frac{\Gamma _j(p-t,\widetilde{g}_t(z)-\xi (t))}{\Gamma _j(p-t,\widetilde{g}_t(z_0)-\xi (t))} \cdot \frac{\widetilde{g}_t'(z)^{1/2}}{\widetilde{g}_t'(z_0)^{1/2}} \end{aligned}$$

is a local martingale for \(0\le t<T\).

For \(j=1\), \(\Gamma _1(t,\cdot )\) takes nonzero purely imaginary values on \(\mathbb {R}_t\); the related function \(\Gamma _4\) agrees with the solution given by Sect. 6.3 for \(\kappa =3\) and \(\sigma =\frac{4}{\kappa }-1=\frac{1}{3}\) up to a purely imaginary multiplicative constant; and \(\Lambda _4{:=}3\frac{\Gamma _4'}{\Gamma _4}\) is a crossing annulus drift function that solves (4.3) for \(\kappa =3\). The annulus SLE\((\kappa ;\Lambda _4)\) process also admits a local martingale. In fact, Proposition 8.1 holds with \(z_0\in \mathbb {R}_p\), \(\Lambda _j\) replaced by \(\Lambda _4\), and \(\Gamma _j\) replaced by \(\Gamma _1\).

8.4 \(\kappa =0\)

Let \(\kappa =0\). Let \(L_t\) be as in Sect. 8.2. Let \(\mathbf{H}_2(t,z)=\mathbf{H}(t,z/2)\). From (2.8) we have

$$\begin{aligned} \partial _t \mathbf{H}_2=4\mathbf{H}_2''+2\mathbf{H}_2'\mathbf{H}_2. \end{aligned}$$
(8.6)

Let \(\Lambda _1=\mathbf{H}-2\mathbf{H}_2\). Then for each \(t>0\), \(\Lambda _1(t,\cdot )\) is an odd analytic function on \(\mathbb {C}{\setminus }L_t\), and each \(z\in L_t\) is a simple pole of \(\Lambda _1\). From \(\mathbf{H}(t,z+2\pi )=\mathbf{H}(t,z)\) and \(\mathbf{H}(t,z+i2t)=\mathbf{H}(t,z)-2i\) we see that both \(4\pi \) and \(i4t\) are periods of \(\Lambda _1(t,\cdot )\). Fix \(t>0\), and define

$$\begin{aligned} J(z)=\frac{\Lambda _1(t,z)^2}{2}-2\Lambda _1'(t,z)+3\mathbf{H}'(t,z). \end{aligned}$$

Then \(J\) is an even analytic function on \(\mathbb {C}{\setminus }L_t\) and has periods \(4\pi \) and \(i4t\). Fix any \(z_0=2n_0\pi +i2k_0t\in L_t\), \(n_0,k_0\in \mathbb {Z}\). Then \(2 z_0\) is a period of \(J\), so \(J\) is symmetric about \(z_0\): \(J(z_0+w)=J(-z_0-w)=J(z_0-w)\). Thus, \({{\mathrm{Res}}}_{z=z_0} J(z)=0\). The order of the pole of \(J\) at \(z_0\) is at most \(2\), so the principal part of \(J\) at \(z_0\) is \(\frac{C(z_0)}{(z-z_0)^2}\) for some \(C(z_0)\in \mathbb {C}\). Note that \({{\mathrm{Res}}}_{z_0} \mathbf{H}(t,z)=2\) and \({{\mathrm{Res}}}_{z_0}\Lambda _1(t,z)=-6\) or \(2\). In either case, \(C(z_0)=0\) (the computation is carried out after (8.7)). Thus, every \(z_0\in L_t\) is a removable singularity of \(J\), which, together with the periods \(4\pi \) and \(i4t\) and Liouville’s theorem, implies that \(J\) is a constant depending only on \(t\). Differentiating \(J\) w.r.t. \(z\), we conclude that

$$\begin{aligned} 2\Lambda _1''=\Lambda _1'\Lambda _1+3\mathbf{H}''. \end{aligned}$$
(8.7)
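Here is the computation of \(C(z_0)\). Write \(r\) for the residue of \(\Lambda _1(t,\cdot )\) at \(z_0\), so \(r\in \{-6,2\}\). Collecting the coefficients of \((z-z_0)^{-2}\) coming from \(\frac{\Lambda _1^2}{2}\), from \(-2\Lambda _1'\) (differentiating a simple pole of residue \(r\) produces \(-r(z-z_0)^{-2}\)), and from \(3\mathbf{H}'\) (where \(\mathbf{H}\) has residue \(2\)) gives

$$\begin{aligned} C(z_0)=\frac{r^2}{2}+2r-6, \end{aligned}$$

and both \(r=2\) and \(r=-6\) are roots of \(\frac{r^2}{2}+2r-6=0\), so \(C(z_0)=0\) in either case.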

From \(\Lambda _1=\mathbf{H}-2\mathbf{H}_2\) we have \(2\mathbf{H}_2=\mathbf{H}-\Lambda _1\). So from (8.6) and (8.7), we have

$$\begin{aligned} \partial _t\mathbf{H}-\partial _t \Lambda _1&= 2\partial _t\mathbf{H}_2=8\mathbf{H}_2''+4\mathbf{H}_2'\mathbf{H}_2=4\mathbf{H}''-4\Lambda _1''+(\mathbf{H}'-\Lambda _1')(\mathbf{H}-\Lambda _1)\\&= 4\mathbf{H}''-2(\Lambda _1'\Lambda _1+3\mathbf{H}'')+(\mathbf{H}'-\Lambda _1')(\mathbf{H}-\Lambda _1)\\&= -2\mathbf{H}''-\Lambda _1'\Lambda _1+\mathbf{H}'\mathbf{H}-\Lambda _1'\mathbf{H}-\mathbf{H}'\Lambda _1. \end{aligned}$$

From the above formula and (2.8), we have

$$\begin{aligned} \partial _t \Lambda _1=3\mathbf{H}'' +\Lambda _1'\Lambda _1+\mathbf{H}'\Lambda _1+\Lambda _1'\mathbf{H}. \end{aligned}$$
(8.8)

Thus, \(\Lambda _1\) solves (4.49). Note that \(\mathbf{H}_I(t,z/2)\) also satisfies (8.6). Let \(\Lambda _2(t,z):=\mathbf{H}(t,z)-\mathbf{H}_I(t,\frac{z}{2})\). Then \(\Lambda _2(t,\cdot )\) is also an odd analytic function on \(\mathbb {C}{\setminus }L_t\) and has periods \(4\pi \) and \(i4t\). The principal part of \(\Lambda _2(t,\cdot )\) at every \(z_0\in L_t\) is also either \(\frac{-6}{z-z_0}\) or \(\frac{2}{z-z_0}\). Using a similar argument, we conclude that \(\Lambda _2\) also solves (4.49).

8.5 \(\kappa =16/3\)

Let \(\kappa =16/3\). Let \(\Lambda _1\) and \(\Lambda _2\) be as in the last subsection. Let \(\Lambda _3=-\Lambda _1/3\). From (8.7) we have

$$\begin{aligned} 0=\frac{8}{3}\Lambda _3''+4\Lambda _3'\Lambda _3+\frac{4}{3}\mathbf{H}''. \end{aligned}$$

From (8.8) we have

$$\begin{aligned} \partial _t \Lambda _3=-\mathbf{H}''-3\Lambda _3'\Lambda _3+\mathbf{H}'\Lambda _3+\Lambda _3'\mathbf{H}. \end{aligned}$$

Summing up the above two equalities, we get

$$\begin{aligned} \partial _t \Lambda _3=\frac{8}{3} \Lambda _3''+\frac{1}{3} \mathbf{H}''+\mathbf{H}'\Lambda _3+\Lambda _3'\mathbf{H}+\Lambda _3'\Lambda _3. \end{aligned}$$

Thus, \(\Lambda _3\) solves (4.49). Similarly, \(\Lambda _4{:=}-\Lambda _2/3\) also solves (4.49). Here \(\Lambda _3\) and \(\Lambda _4\) have period \(4\pi \) instead of \(2\pi \). If we want a solution to (4.3) with period \(2\pi \), we may first restrict \(\Lambda _3\) or \(\Lambda _4\) to the interval \((0,2\pi )\) or \((-2\pi ,0)\), and then extend it to \(\mathbb {R}{\setminus }\{2n\pi :n\in \mathbb {Z}\}\) so that the function has period \(2\pi \).