1 Introduction and outline of proof

1.1 Introduction

In this paper we consider loop-erased random walk, LERW, on a square grid. This measure on self-avoiding paths is obtained by running a simple random walk and successively erasing loops as they form. We work with a chordal version in a small mesh lattice approximation of a simply connected domain: given two boundary vertices, we chronologically erase the loops of a random walk started at one of the vertices conditioned to take its first step into the domain (along a prescribed edge) and then exit at the other vertex (along a prescribed edge). By linear interpolation this gives a random continuous curve—the LERW path. It is known that the LERW path has a conformally invariant scaling limit in the sense that it converges in law as a curve up to reparameterization to the chordal SLE\(_2\) path as the mesh size goes to zero. For details, see [20]. We will not use any results about SLE in this paper.

The main theorem of this paper is a different conformal invariance result which does not follow from the convergence of LERW to SLE. We are interested in the probability that the LERW passes through a given edge of a grid approximation of a simply connected domain D and we call this probability the LERW (edge) Green’s function in D. We show that for edges away from the boundary, this probability, when normalized by the inverse mesh size to the power 3/4, converges as the mesh size gets smaller to an explicit (up to an unknown lattice-dependent constant) conformally covariant function which coincides with the SLE\(_2\) Green’s function, \(G_D(z; a,b)\). This function is defined as the limit as \(\epsilon \rightarrow 0\) of \(\epsilon ^{-3/4}\) times the probability that the chordal SLE\(_2\) path in D between \(a\in \partial D\) and \(b\in \partial D\) visits the ball of radius \(\epsilon \) around z. As is shown in [23], a formula for \(G_D\) can be written using a covariance rule and the fact that \(G_{{\mathbb {D}}}(0; e^{2i \theta _a}, e^{2i\theta _b})\) equals \(|\sin ^3( \theta _a - \theta _b)|\) up to a constant. Several related results have been obtained previously; see below for further discussion.
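
For concreteness, the covariance rule just mentioned can be stated as follows (this is a restatement of the formula from [23], with \(\hat{c}\) denoting the unspecified constant): if \(\varphi : D \rightarrow D'\) is a conformal map, then

$$\begin{aligned} G_D(z; a,b) = |\varphi '(z)|^{3/4} \, G_{D'}\left( \varphi (z); \varphi (a), \varphi (b)\right) , \qquad G_{{\mathbb {D}}}\left( 0; e^{2i\theta _a}, e^{2i\theta _b}\right) = \hat{c} \, \left| \sin ^3(\theta _a - \theta _b)\right| . \end{aligned}$$

Here the exponent \(3/4 = 2 - 5/4\), where \(5/4\) is the dimension of the SLE\(_2\) path.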

Let us be more precise. Let D be a simply connected bounded Jordan domain containing 0. Write \(r_D\) for the conformal radius of D seen from 0. Let \(D_n \subset D\) be an approximating simply connected domain obtained by taking a largest union of squares of side-length 1/n centered at vertices of \(n^{-1} {\mathbb {Z}}^2\) (see Sect. 1.2 for details). Given suitable boundary points \(a_n, b_n \in \partial D_n\) tending to a, b as n tends to \(\infty \), we let \(\eta _n\) be a LERW in \(D_n\) from \(a_n\) to \(b_n\) (these points are chosen so that there is a unique edge of \(n^{-1}{\mathbb {Z}}^2\) which contains them) and write \(e=e_n\) for the edge [0, 1/n]. Our main result may then be stated as follows:

Theorem 1.1

There exists \(0< c_0< \infty \) such that for all D, a, b as above there exists a sequence of approximating domains \(D_n \uparrow D\) with boundary points \(a_n \rightarrow a\), \(b_n \rightarrow b\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } c_0\, n^{3/4}\, \mathbf{P } \left( e \subset \eta _n \right) = r_D^{-3/4}\,\sin ^3\left( \pi {\text {hm}}_D\left( 0, (ab) \right) \right) , \end{aligned}$$

where \(r_D\) is the conformal radius of D from 0, \({\text {hm}}\) denotes harmonic measure, and \((ab) \subset \partial D\) is either of the subarcs from a to b.

The convergence of the domains \(D_n \subset D\) is in the Carathéodory sense. We do not determine the value of the lattice dependent constant \(c_0\). We do give bounds on the rate of convergence, but it will be easier to describe them in terms of the discrete result of Theorem 1.2. There are two sources of error. For the discrete approximation \(D_n\) there is an error in the LERW probability compared to the SLE\(_2\) Green’s function for \(D_n\); we give a uniform bound on this error. There is also an error coming from the approximation of D by \(D_n\); this error depends on the domain D. If \(\partial D\) is nice, say piecewise analytic (analytic close to a and b), the first error term is the larger of the two.

Several authors have studied the LERW Green’s function (or “intensity” as it is sometimes called) and the closely related growth exponent, that is, the polynomial growth rate exponent as \(n \rightarrow \infty \) of the expected number of steps of a LERW of diameter n. Lawler computed these exponents in dimensions \(d \geqslant 4\) in [15], where they turn out to be the same as for simple random walk with a logarithmic correction in \(d=4\). Kenyon estimated the asymptotics (up to subpower corrections) of the LERW Green’s function in the half-plane setting, thus proving that the growth exponent equals 5/4, see [9]. We will only discuss the planar case in the rest of the paper. Masson gave a different proof of Kenyon’s result using the convergence to SLE\(_2\) and known results on SLE exponents [19] and obtained second moment estimates in collaboration with Barlow [1, 25]. Kenyon and Wilson computed several exact numeric values for the Green’s function of the whole-plane LERW on \({\mathbb {Z}}^2\) in the vicinity of the starting point, see [10]. In [16] Lawler recently estimated up to constants the decay rate of the Green’s function for a chordal LERW in a square domain and the main result of this paper is obtained by refining the arguments of that paper. Our use of a branch cut is based on an idea of Kenyon’s [9], as discussed in Section 5.7 of [10].

The present paper is, to our knowledge, the first that treats general simply connected domains and obtains asymptotics. This is critical for the principal application we have in mind, see below. Some of the quantities we consider (and the scaling limit result itself) are related to ones appearing in the analysis of the Ising model, see, e.g., the papers by Hongler and Smirnov and Chelkak and Izyurov [4, 8], but we will not use discrete complex analysis techniques here.

The LERW path is known to converge to the SLE\(_2\) path when parameterized by capacity, a parameterization which is natural from the point of view of conformal geometry. An important question is whether the LERW path also converges when parameterized in the natural Euclidean sense so that, roughly speaking, it takes the same number of steps in each unit of time. The conjecture is that one has convergence in law of LERW to SLE\(_2\) with a particular parameterization, the Natural Parameterization, which can be given as a multiple of the 5 / 4-dimensional Minkowski content of the SLE\(_2\) curve. See [18] and the references therein. One motivation for studying the problem of the present paper is that we believe it to be a critical step in the proof of this conjecture. See also [7] for some results for the corresponding question in the case of percolation interfaces converging to SLE\(_6\).

The starting point of our proof is a combinatorial identity that factors the LERW Green’s function, just as in [16]. We give here a new proof using Fomin’s identity [6] which makes more explicit the connection with determinantal formulas. We actually prove a generalization which considers a LERW path containing as a subset a prescribed self-avoiding walk (SAW) away from the boundary. (A given edge is clearly a special case of such a SAW.) From this it follows that there are two factors whose asymptotics need to be understood. The first is the squared exponential of the random walk loop measure of loops of odd winding number about a dual vertex next to the marked edge. We obtain asymptotics by comparing this quantity with the corresponding conformally invariant Brownian loop measure quantity which can be computed explicitly. The second factor can be written in terms of a “signed” random walk hitting probability or alternatively as an expectation for a random walk on a branched double cover of the domain (the branch point is the dual vertex mentioned above). After some preliminary reductions the required estimates are proved using coupling techniques that include the KMT strong approximation (see [11]) and results from [3, 13]. Some of the auxiliary results in this paper may be of independent interest. For instance, we compare various discrete boundary Poisson kernels and Green’s functions (near the boundary) with their continuous counterparts in slit square domains and we obtain sharp asymptotics for Beurling-type escape probabilities for random walk started near the slit.

1.2 Notation and set-up

The proof of Theorem 1.1 has three principal building blocks. Although we formulated the theorem for a fixed domain being approximated with a grid of small mesh size we prefer to work with discrete domains in \({\mathbb {Z}}^2\) and let the inner radius from 0 tend to infinity. Let us set some notation.

  • We write the planar integer lattice \({\mathbb {Z}}^2\) as \({\mathbb {Z}}\times i {\mathbb {Z}}\subset \mathbb {C}\). Throughout this paper we fix

    $$\begin{aligned} w_0 = \frac{1}{2} - \frac{i}{2} , \end{aligned}$$

    and note that the dual lattice to \({\mathbb {Z}}^2\) is \({\mathbb {Z}}^2 + w_0\).

  • A subset \(A \subset {\mathbb {Z}}^2\) is called simply connected if both A and \({\mathbb {Z}}^2 {\setminus } A\) are connected subgraphs of \({\mathbb {Z}}^2\). Let \({\mathcal {A}}\) denote the set of simply connected, finite subsets A of \({\mathbb {Z}}^2 \) that contain the origin.

  • Let \(\overrightarrow{\mathcal {E}}=\left\{ [z,w]: \, z,w \in {\mathbb {Z}}^2,\ |z-w|=1 \right\} \) be the directed edge set of the graph \({\mathbb {Z}}^2 = {\mathbb {Z}}+ i {\mathbb {Z}}\).

  • Let \(\partial _e A\) denote the edge boundary of A, that is, the set of ordered pairs \([a_-,a_+]\) of lattice points with \(a_- \in A, a_+ \in {\mathbb {Z}}^2 {\setminus } A, |a_- - a_+| = 1\). We sometimes write \(\partial A\) for the set of such \(a_+\) and \(\overline{A} = A \cup \partial A\). We will use the symbol a both for the point \((a_-+a_+)/2\in \partial D_A \) and for the edge \([a_-,a_+]\). It will be clear from context which of the two is meant.

  • For each \(z \in {\mathbb {Z}}^2\), let \({\mathcal {S}}_z\) denote the closed square region of side length one centered at z,

    $$\begin{aligned} {\mathcal {S}}_z =\left\{ z + (x+iy) \in {\mathbb {C}}: 0 \leqslant |x|,|y| \leqslant \frac{1}{2} \right\} . \end{aligned}$$

    Note that the corners of \({\mathcal {S}}_z\) are on the dual lattice \({\mathbb {Z}}^2 + w_0\).

  • If \(A \in {\mathcal {A}}\), let \(D_A \subset {\mathbb {C}}\) be the simply connected domain

    $$\begin{aligned} D_A = \mathrm{int} \left[ \bigcup _{z \in A} {\mathcal {S}}_z \right] . \end{aligned}$$

    This is a Jordan domain such that \(A \subset D_A\) and \(\partial D_A\) is a subset of the edge set of the dual lattice \({\mathbb {Z}}^2 + w_0\). Note that (the midpoint of) each such dual edge determines an edge of \(\partial _e A\); indeed, the midpoint of the dual edge is also the midpoint of a unique edge in \(\partial _e A\).

  • Let \(f = f_A \) denote the unique conformal map \( f: D_A \rightarrow {\mathbb {D}}\) with

    $$\begin{aligned} f (w_0) = 0 , \quad f'(w_0) > 0 . \end{aligned}$$
  • For \(a \in \partial D_A\), we define \(\theta _a \in [0,\pi )\) by

    $$\begin{aligned} f_A(a) = e^{i 2 \theta _a}, \end{aligned}$$

    which can be defined by extension by continuity, since \(D_A\) is a Jordan domain. Note the factor of 2 in the definition, which is included in order to make later formulas cleaner.

  • Let

    $$\begin{aligned} r_A = r_A(w_0)= f '(w_0)^{-1} \end{aligned}$$

    be the conformal radius of \(D_A\) with respect to \(w_0\). If \(r_A(0)\) denotes the conformal radius from 0, then one can use Koebe’s 1/4 theorem and the distortion theorem (see [14]) to verify that \(r_A(0)=r_A \; [1+O(r_A^{-1})]\).

  • We write

    $$\begin{aligned} \omega = [\omega _0,\ldots ,\omega _\tau ] \end{aligned}$$

    for nearest neighbor walks in \({\mathbb {Z}}^2\) and simply call them walks or paths. We write \(|\omega | = \tau \) for the length of the path and \(p(\omega ) = 4^{-|\omega |}\) for the simple random walk probability of \(\omega \).

  • We write \(\oplus \) for concatenation of paths. That is, if \(\omega ^1 = [\omega ^1_0,\ldots ,\omega ^1_k], \omega ^2=[\omega ^2_0,\ldots ,\omega ^2_j]\), the concatenation \(\omega ^1 \oplus \omega ^2\) is defined if \(\omega ^1_k = \omega ^2_0\), in which case

    $$\begin{aligned} \omega ^1 \oplus \omega ^2 = \left[ \omega ^1_0,\ldots ,\omega ^1_k, \omega ^2_1,\ldots ,\omega ^2_j\right] . \end{aligned}$$
  • If ab are distinct elements of \(\partial _e A\), we let

    $$\begin{aligned} {\mathcal {W}}= {\mathcal {W}}(A;a, b) \end{aligned}$$

    be the set of walks

    $$\begin{aligned} \omega = [\omega _0,\ldots ,\omega _\tau ] , \end{aligned}$$

    with \([ \omega _0,\omega _1] =[a_+,a_-] , [\omega _{\tau -1},\omega _\tau ] = [b_-,b_+]\), and \(\omega _1, \ldots ,\omega _{\tau -1} \in A\).

  • We sometimes write

    $$\begin{aligned} \omega : x \rightarrow y, \end{aligned}$$

    where \(\omega \) is a walk and where x and y can be edges or vertices, to mean that \(\omega \) is a walk starting at x, ending at y.

  • For \(a,b\in \partial _e A\), we write

    $$\begin{aligned} H_{\partial A}(a,b) = \sum _{\omega \in {\mathcal {W}}} p(\omega ) \end{aligned}$$

    for the corresponding (boundary) Poisson kernel. If \(x \in {\mathbb {Z}}^2 {\setminus } A\) and \({\text {dist}}(x,A) = 1\), then we will similarly write \(H_{\partial A}(x,b)=\sum _{a:a_+ =x} H_{\partial A}(a,b)\).

  • If \(\omega \in {\mathcal {W}}(A;a, b)\) with \(|\omega | = \tau \), we will also write \(\omega (t), \frac{1}{2} \leqslant t \leqslant \tau - \frac{1}{2},\) for the continuous path of time duration \(\tau -1\) that starts at a and goes to b along the edges of \(\omega \) at speed one. Note that \(\omega (t) \in D_A\) for \(\frac{1}{2} < t < \tau - \frac{1}{2} .\)

  • Let \({\mathcal {W}_{\text {SAW}}}= {\mathcal {W}_{\text {SAW}}}(A;a,b)\) denote the set of walks \(\eta \in {\mathcal {W}}\) that are self-avoiding walks, that is, such that \(\eta (s)\ne \eta (t)\) for \(s<t\). Note that a path \(\omega \) is self-avoiding if and only if its continuous version \(\omega (t)\) is a simple curve.

  • For each \(\omega \in {\mathcal {W}}\) there exists a unique \(\eta = L(\omega ) \in {\mathcal {W}_{\text {SAW}}}\) obtained by chronological loop-erasing. (But \(L^{-1}(\eta )\) may have many elements.) See Sect. 2.

  • Let \({\mathcal {W}_{\text {SAW}}}^+\) (resp., \({\mathcal {W}_{\text {SAW}}}^-\)) denote the set of \(\eta \in {\mathcal {W}_{\text {SAW}}}\) that include the ordered edge [0, 1] (resp., [1, 0]) and \({\mathcal {W}_{\text {SAW}}}^* = {\mathcal {W}_{\text {SAW}}}^+ \cup {\mathcal {W}_{\text {SAW}}}^-\). Let \({\mathcal {W}}^*\) be the set of \(\omega \in {\mathcal {W}}\) such that \(L(\omega ) \in {\mathcal {W}_{\text {SAW}}}^*\). Set

    $$\begin{aligned} H_{\partial A}^*(a,b) := \sum _{\omega \in {\mathcal {W}}^* } p(\omega ). \end{aligned}$$

    If e is the unordered edge [0, 1] and \(\eta \) is a LERW from a to b in A, then we can write

    $$\begin{aligned} P(a,b;A) := \mathbf{P } \left( e \subset \eta \right) = \frac{H_{\partial A}^*(a,b) }{ H_{\partial A} (a,b) }; \end{aligned}$$
    (1.1)

    this is the LERW Green’s function at the edge e.
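
In computational terms, the loop erasure \(L\) and the LERW measure on \({\mathcal {W}_{\text {SAW}}}(A;a,b)\) can be realized as follows. This Python sketch is ours and is only meant to make the definitions concrete; the domain and edges chosen at the end are arbitrary illustrative choices.

```python
import random

def loop_erase(walk):
    """Chronological loop erasure L(omega): scan the walk and, each
    time a vertex repeats, delete the loop that was just closed."""
    eta = []
    for v in walk:
        if v in eta:
            del eta[eta.index(v) + 1:]  # erase the loop just formed
        else:
            eta.append(v)
    return eta

def lerw_sample(A, a, b, rng=random):
    """One chordal LERW path in A from edge a = [a_-, a_+] to edge
    b = [b_-, b_+], by rejection sampling: force the first step
    a_+ -> a_-, run simple random walk until it leaves A, and accept
    the walk if it exits through [b_-, b_+]; then erase its loops."""
    (a_minus, a_plus), (b_minus, b_plus) = a, b
    steps = (1, -1, 1j, -1j)
    while True:
        walk, z = [a_plus, a_minus], a_minus
        while z in A:
            z += rng.choice(steps)
            walk.append(z)
        if walk[-2] == b_minus and walk[-1] == b_plus:
            return loop_erase(walk)

# A small rectangular domain A containing the edge [0, 1] (our choice):
A = {x + y * 1j for x in range(-2, 4) for y in range(-2, 3)}
```

For example, `lerw_sample(A, (-2+0j, -3+0j), (3+0j, 4+0j))` produces a self-avoiding path from the boundary edge on the left to the boundary edge on the right; the empirical frequency of the event \(e \subset \eta \) then estimates \(P(a,b;A)\) in (1.1).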

Our main result is a consequence of the following theorem.

Theorem 1.2

There exist \(u > 0\) and \(0 < c_0 < \infty \) such that the following holds. Suppose \(A \in {\mathcal {A}}\) and suppose \(a,b \in \partial _e A\) with \(\left| \sin ( \theta _{a } - \theta _{b} ) \right| \geqslant r_A^{-u}\). Then,

$$\begin{aligned} P(a,b;A) = c_0 \, r_A^{-3/4} |\sin ^3\left( \theta _{a } - \theta _{b}\right) | \left[ 1 + O\left( r_A^{-u} \, \left| \sin \left( \theta _{a } - \theta _{b}\right) \right| ^{-1} \right) \right] . \end{aligned}$$

Let us explain how to derive Theorem 1.1 from Theorem 1.2. Along the way we will comment on convergence rate bounds. Suppose D is a Jordan domain containing the origin with \({\text {dist}}(0,\partial D) = 1\). For each n, let \(A_n\) be the largest simply connected subset of \({\mathbb {Z}}^2\) containing the origin such that \(\overline{D_{A_n}} \subset n D\). Let \(D_n = n^{-1} D_{A_n}\), \(w_n = n^{-1} w_0 = n^{-1}(1/2- i/2)\). Let \(f_{A_n}\) be the corresponding uniformizing conformal map as above and \(F_n(z) = f_{A_n}(nz)\). Then \(F_n: D_n \rightarrow {\mathbb {D}}\) with \(F_n(w_n) = 0, F_{n}'(w_n) >0\). Let \(F:D \rightarrow {\mathbb {D}}\) be the conformal transformation with \(F(0) = 0, F'(0) >0\). Since D is a Jordan domain, F extends to a homeomorphism, \(F: \overline{D} \rightarrow \overline{{\mathbb {D}}}\). Note that \(D_n \subset D\) and for each n we can write

$$\begin{aligned} F_n(z) = M_{n} \circ \psi _n \circ F(z), \quad z \in \overline{D_n}, \end{aligned}$$

where \(\psi _n : F(D_n) \rightarrow {\mathbb {D}}\) with \(\psi _n(0)=0, \psi _n'(0)>0\) and \(M_n(z) = k_n(z-u_n)/(1-\overline{u_n}z)\) is the Möbius transformation of \({\mathbb {D}}\) taking \(u_n=\psi _n \circ F(w_n) = O(1/n)\) to 0 with \(k_n\in \partial {\mathbb {D}}\) chosen so that \([M_{n} \circ \psi _n \circ F]'(w_n) > 0\). We have \([\psi _n\circ F]'(w_n) = \psi _n'(0) F'(0)(1+O(1/n))\) and consequently \(|k_n-1|=O(1/n)\). It follows that if \(|z| > c\) for some constant c, then \(M_n(z) = z(1+O(1/n))\), where the error depends on c.
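
The effect of the normalizing Möbius factor can be checked directly. In the sketch below (ours), u and k play the roles of \(u_n\) and \(k_n\): for \(|u| = O(1/n)\) and k close to 1, points of the disk move by \(O(1/n)\).

```python
def mobius(z, u, k=1.0):
    """The disk automorphism M(z) = k (z - u) / (1 - conj(u) z),
    which sends u to 0.  For small |u| and k near 1, M(z) is a
    small perturbation of the identity on the disk."""
    u = complex(u)
    return k * (z - u) / (1 - u.conjugate() * z)
```

With \(|u| \approx 10^{-3}\), the displacement \(|M(z) - z|\) is of the same order, uniformly over the disk.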

If \(z \in \partial D_n\), let w be a point in \(\partial D\) such that \(|z-w|={\text {dist}}(z,\partial D)\). Since z is contained in a closed square of side length \(n^{-1}\) that intersects \(\partial D\), we have \(|z-w| \leqslant \sqrt{2}/n\). By the Beurling estimate (see, e.g., [14]), we can see that there is a universal constant c such that \(|F(z) - F(w)| \leqslant c \, n^{-1/2}.\) In other words, there exists \(c_1\) such that

$$\begin{aligned} F(\partial D_n) \subset \left\{ z \in \overline{{\mathbb {D}}}: |z| \geqslant 1 - c_1n^{-1/2}\right\} . \end{aligned}$$
(1.2)

Also, by the Beurling estimate, if e is an edge on the boundary of \(A_n\) (and \({\text {diam}}e = 1\)), then \({\text {diam}}F(n^{-1}e) = O(n^{-1/2})\). Let \(a \in \partial D\) be arbitrary. We will choose a particular point \(a_n \in \partial D_{n}\) so that \(|F(a) - F_n(a_n)|\) is small. Let \(v_n \in {\mathbb {D}}\) be a point on \(F(\partial D_n)\) with the same argument as F(a) and with minimal radius. Then as we showed above \(|v_n| \geqslant 1- c_1 n^{-1/2}\) and there exists an edge \(e \subset \partial D_{A_n}\) (this is an edge of the dual to \({\mathbb {Z}}^2\)) such that \(v_n \in F(n^{-1} e)\). We take \(a_n \in \partial D_n\) to be the midpoint of \(n^{-1} e\). Note that \(na_n\) then determines an element of \(\partial _e A_n\) by virtue of being its midpoint. We have \(|F(a_n)-F(a)| = O(n^{-1/2})\). It is not hard to show that if \(c_1\) is as in (1.2) then for z with \(|z| \leqslant 1-2 c_1 n^{-1/2}\),

$$\begin{aligned} |\psi _n(z) - z| \leqslant c_2 n^{-1/2} \log n, \end{aligned}$$

where \(c_2\) is universal. See, e.g., Section 3.5 of [14]. Using this and the Beurling estimate we see that \(|\psi _n \circ F(a_n) - F(a)| = O(n^{-1/5})\). (We are not attempting to optimize exponents here.) Using the estimate on \(M_n\) it follows that

$$\begin{aligned}|F_n(a_n) - F(a)| = O(n^{-1/5}).\end{aligned}$$

Also, \(r_D = n^{-1} r_{A_n} [1 + O(n^{-1/2})]\). Hence, given \(a,b \in \partial D\), we would like to choose \(a_n, b_n \in \partial D_n\) to approximate ab, and then apply Theorem 1.2 to \(A_n\) with boundary edges determined by \(n a_n, n b_n\), to get a uniform error term in Theorem 1.1.

Unfortunately, although there is a uniform bound on \(|F(a) - F(a_n)|\), there is no uniform bound on \(|a - a_n|\) without additional assumptions on the regularity of \(\partial D\). However, since \(|F(a) - F(a_n)| \leqslant c_2 \, n^{-1/2}\), we certainly have

$$\begin{aligned} |a - a_n| \leqslant \delta _n:= \sup \left\{ |F^{-1}(z) - F^{-1}(w)| : |z-w| \leqslant c_2 \, n^{-1/2} \right\} . \end{aligned}$$

Since D is a Jordan domain, \(F^{-1}\) is uniformly continuous, and hence \(\delta _n \rightarrow 0\) as \(n \rightarrow \infty \). Thus \(a_n \rightarrow a\) and \(b_n \rightarrow b\), which is all we need for Theorem 1.1 without a convergence rate estimate.

If \(\partial D\) is, e.g., locally analytic at a and b, or more generally, if the map F is bi-Lipschitz in neighborhoods of a and b, then one can improve these estimates giving \(|a-a_n| = O(n^{-1}), |F(a) - F(a_n)| = O(n^{-1})\). Analogous estimates under weaker conditions on \(\partial D\) can also be given. The conclusion is that for sufficiently “nice” domains the biggest error term in our result comes from the discrete result, Theorem 1.2.

1.3 Outline of proof and an important idea

The first step of the proof of Theorem 1.2 rewrites (1.1) as a product of three factors which will then be estimated in the remainder of the paper. Before stating the main estimates we will introduce an idea which is further discussed in Sect. 5.1.

Suppose \(\omega (t), t \in [0,T],\) is a curve in \(\overline{D_A}\) that avoids \(w_0\). Let \(t \mapsto \varTheta _t = \arg [f(\omega (t))]\) be a continuous version of the argument. Define

$$\begin{aligned} J_t&= \left\lfloor \frac{\varTheta _t}{2 \pi } \right\rfloor - \left\lfloor \frac{\varTheta _0}{2 \pi }\right\rfloor ;\\ Q_t&= Q(\omega [0,t]) = (-1)^{J_t}. \end{aligned}$$

Although the argument is only defined up to an integer multiple of \(2 \pi \), the values of \(J_t\), and hence of \(Q_t\), are independent of the choice of \(\varTheta _0\). If \(\omega \) has time duration \(\tau \), we write \( Q(\omega ) = Q_{\tau }\). Note that if \(\omega = \omega _1 \oplus \omega _2\), then

$$\begin{aligned} J(\omega ) = J(\omega _1) + J(\omega _2), \quad Q(\omega ) = Q(\omega _1) \, Q(\omega _2) . \end{aligned}$$
(1.3)

In particular, if \(\omega = [\omega _0,\ldots ,\omega _\tau ]\) is a path lying in A, then

$$\begin{aligned} Q(\omega ) = \prod _{j=1}^\tau Q(e_j) , \quad e_j = [\omega _{j-1}, \omega _j] . \end{aligned}$$

Roughly speaking, \(Q(e) = -1 \) if and only if the edge e crosses the branch cut \(\beta :=f_A^{-1}([0,1])\). We note the following:

  • \(Q(\overrightarrow{e})\) is a function of the undirected edge e.

  • If \(e = [0,1]\), then \(Q (e ) = 1\) assuming \(r_A\) is sufficiently large. We will assume this throughout the paper.

  • If \(\ell \) is a loop in A, then \(Q(\ell ) = -1\) if and only if the winding number of \(\ell \) about \(w_0\) is odd.
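
The last observation can be made concrete: for a lattice loop, \(Q\) is computable from the total argument increment about \(w_0\). The following Python sketch is ours.

```python
import cmath

W0 = 0.5 - 0.5j  # the dual vertex about which winding is measured

def winding_number(loop):
    """Winding number of a closed lattice path about W0: sum the
    argument increments along the edges and divide by 2*pi.  Each
    unit edge subtends an angle at most pi/2 as seen from W0, so
    taking the principal branch of each increment is safe."""
    total = sum(cmath.phase((w - W0) / (z - W0))
                for z, w in zip(loop, loop[1:]))
    return round(total / (2 * cmath.pi))

def Q(loop):
    """Q(loop) = -1 iff the loop winds an odd number of times about W0."""
    return -1 if winding_number(loop) % 2 else 1
```

For instance, the unit square through 0, 1, 1-i, -i encloses \(w_0\) and gets sign \(-1\), while a back-and-forth step or a square not enclosing \(w_0\) gets sign \(+1\).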

We define (signed) weights by

$$\begin{aligned} q(e) = p(e) \, Q(e) = \frac{1}{4}Q(e), \quad (e \text { edge}), \end{aligned}$$

and if \(\omega \) is a walk as above,

$$\begin{aligned} q(\omega ) = p(\omega ) \, Q(\omega ) = \prod _{j=1}^\tau q(e_j) . \end{aligned}$$

Let \(S_j\) be a simple random walk starting in A, and let \(\tau = \tau _A=\inf \{k\geqslant 0: S(k)\not \in A\}\) be the first time that the walk leaves A. As a slight abuse of notation, we write \(S_{\tau }= a\) to mean that the walk exits A through the ordered edge \([a_-,a_+]\). If \(S_{\tau } = a\), we associate to the random walk path the continuous path in \(\overline{D_A} \) of time duration \(\tau - \frac{1}{2}\) ending at \(a \in \partial D_A\).

Let

$$\begin{aligned} I_a=1\{S[1,\tau ] \cap \{0,1\} = \emptyset ; S_{\tau }=a \} \end{aligned}$$
(1.4)

be the indicator of the event that S leaves A at the boundary edge a and never visits the points 0, 1 before leaving A. Let

$$\begin{aligned} R_A(z,a)&= \mathbf{E}^z\left[ (-1)^{J(S[0,\tau -\frac{1}{2}])} I_a \right] = \mathbf{E}^z\left[ Q\left( S\left[ 0,\tau -\frac{1}{2}\right] \right) \,I_a \right] ,\\ \varPhi _A(a,b)&= \frac{|R_A(0,a) R_A(1,b) - R_A(0,b) R_A(1,a)|}{H_{\partial A}(a,b)}. \end{aligned}$$

Let \(G^q_A(z,w)\) denote the random walk Green’s function in A using the signed weight q,

$$\begin{aligned} G^q_A(z,w) = \sum _{\mathop {\omega \subset A}\limits ^{\omega : z \rightarrow w}} q(\omega ) . \end{aligned}$$

Here the sum is over all walks in A from z to w. From the definition, we can write

$$\begin{aligned}&G^q_A(0,0) = \sum _{j=0}^\infty \mathbf{E}^0\left[ Q\left( S[0,j] \right) ;S_j = 0 ; j < \tau _A\right] ,\\&G^q_{A {\setminus } \{0\}}(1,1) = \sum _{j=0}^\infty \mathbf{E}^1\left[ Q\left( S[0,j] \right) ;S_j = 1 ; j < \tau _{A {\setminus } \{0\}} \right] . \end{aligned}$$

We define

$$\begin{aligned} \bar{q}_A = \frac{1}{4} \, G^q_A(0,0) \, G^q_{A {\setminus } \{0\}}(1,1). \end{aligned}$$

We can also interpret \(4 \, \bar{q}_A\) as the random walk loop measure using the weight q of loops in A that intersect \(\{0,1\}\).
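
The quantities \(G^q_A(0,0)\), \(G^q_{A {\setminus } \{0\}}(1,1)\), and \(\bar{q}_A\) can be computed exactly on small domains. The following Python sketch is our illustration, not the paper's construction: since only closed walks contribute, \(Q\) may be evaluated with any branch cut from \(w_0\) to the boundary, and we take the vertical ray from \(w_0\) pointing downward, so an edge has weight \(-1/4\) precisely when it is a horizontal edge joining real parts 0 and 1 at height \(\leqslant -1\).

```python
def q_weight(z, w):
    """Signed step weight q([z, w]) = +-1/4 for the downward branch
    cut from w0 = 1/2 - i/2 (our choice of cut; for closed walks the
    result does not depend on this choice)."""
    horizontal = (z.imag == w.imag)
    crosses = horizontal and {z.real, w.real} == {0.0, 1.0} and z.imag <= -1
    return -0.25 if crosses else 0.25

def signed_green(A, z0, iters=600):
    """G^q_A(z0, z0): iterate x = e_{z0} + M x, where M has entries
    q([z, w]) for neighboring z, w in A (walks must stay in A)."""
    x = {z: 0.0 for z in A}
    for _ in range(iters):
        x = {z: (1.0 if z == z0 else 0.0)
                + sum(q_weight(z, z + s) * x.get(z + s, 0.0)
                      for s in (1, -1, 1j, -1j))
             for z in x}
    return x[z0]

# A small rectangular domain containing the edge [0, 1] (our choice):
A = {x + y * 1j for x in range(-2, 4) for y in range(-2, 3)}
qbar_A = 0.25 * signed_green(A, 0) * signed_green(A - {0}, 1 + 0j)
```

For this box \(\bar{q}_A\) comes out positive, consistent with the existence of \(\bar{q} \in (0,\infty )\) in (1.6) below.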

We write \({\mathcal {J}}_A\) for the set of unrooted random walk loops \(\ell \subset A\) with \(Q(\ell )=-1\). (See Sect. 2 for precise definitions.) The following is the combinatorial identity central to our proof:

Theorem 1.3

Let \(A \in {\mathcal {A}}\) and \(a,b \in \partial _e A\). Then,

$$\begin{aligned} P(a,b; A) = \bar{q}_A \, \exp \left\{ 2 m( {\mathcal {J}}_A)\right\} \, \varPhi _A(a,b), \end{aligned}$$
(1.5)

where m is the random walk loop measure and \({\mathcal {J}}_A\) is the set of unrooted random walk loops \(\ell \subset A\) with \(Q(\ell )=-1\).

Proof

See Sect. 3. \(\square \)

It is not hard (see [16, Section 2]) to see that there exist \(\bar{q} \in (0,\infty )\) and \(u > 0\) such that

$$\begin{aligned} \bar{q}_A = \bar{q} + O(r_A^{-u}) . \end{aligned}$$
(1.6)

To obtain Theorem 1.2, the remaining work is then to estimate the other two factors on the right-hand side of (1.5). In Sect. 4 we compare the random walk loop measure with the Brownian loop measure to prove the following. Our proof does not yield the value of the lattice-dependent constant \(c_1\).

Theorem 1.4

There exist \(u>0\) and \(0<c_1 < \infty \) such that if \(A \in {\mathcal {A}}\),

$$\begin{aligned} \exp \{2m({\mathcal {J}}_A)\} = c_1 \, r_A^{1/4} \, \left[ 1 + O\left( r_A^{-u} \right) \right] . \end{aligned}$$

Proof

See Sect. 4. \(\square \)

The last factor in (1.5) is estimated using the following result, the proof of which is the main technical hurdle of the paper. For \(z\in A\) and \(b\in \partial _e A\), let

$$\begin{aligned} H_{A}(z,b) = \mathbf{P}^z(S_{\tau -1}=b_-, S_{\tau }=b_+) \end{aligned}$$
(1.7)

be the discrete Poisson kernel and

$$\begin{aligned} \varLambda _A(z,a) = \frac{R_A(z,a)}{H_{ A}(z,a)}. \end{aligned}$$

Theorem 1.5

There exist \( u > 0\) and \(0 < c_2 < \infty \) such that if \(A \in {\mathcal {A}}\),

$$\begin{aligned} \varLambda _A(0,a)&= c_2\, r_A^{-1/2}\, \left[ \sin \theta _a + O\left( r_A^{-u}\right) \right] ; \end{aligned}$$
(1.8)
$$\begin{aligned} \varLambda _A(1,a)&= c_2 \, r_A^{-1/2} \,\left[ \cos \theta _a + O\left( r_A^{-u}\right) \right] . \end{aligned}$$
(1.9)

Proof

See Sect. 5. \(\square \)

Proof of Theorem 1.2 assuming Theorems 1.3, 1.4, and 1.5

By Theorem 1.5, with \(c_3 = c_2^2\),

$$\begin{aligned}&|\varLambda _A(0,a)\varLambda _A(1,b) - \varLambda _A(1,a)\varLambda _A(0,b)|\\&\quad = c_3 r_A^{-1}\left| \sin (\theta _a)\cos (\theta _b) - \cos (\theta _a)\sin (\theta _b) \right| + O\left( r_A^{-1-u}\right) \\&\quad = c_3 r_A^{-1} \left| \sin \left( \theta _a -\theta _b \right) \right| \left[ 1 + O\left( \left| \sin \left( \theta _a -\theta _b \right) \right| ^{-1}\, r_A^{-u}\right) \right] . \end{aligned}$$

We can then use Theorem 1.1 of [13] which implies that if \(|\sin ( \theta _a -\theta _b)| \geqslant r_A^{-1/20}\), then

$$\begin{aligned} H_{ A}(0,a)\, H_A(0,b) = \frac{2}{\pi } \, \sin ^2(\theta _a - \theta _b)\, H_{\partial A} (a,b)\, \left[ 1 + O(r_A^{-u})\right] . \end{aligned}$$

But a difference estimate for discrete harmonic functions (see for instance Theorem 1.7.1 in [15]) shows that \(H_A(0,a) = H_A(1,a)(1+O(r_A^{-1}))\) and so by combining these estimates we see that

$$\begin{aligned} \varPhi _A(a,b)&= |\varLambda _A(0,a)\varLambda _A(1,b) - \varLambda _A(1,a)\varLambda _A(0,b)| \, \frac{H_A(0,a) \, H_A(0,b)}{H_{\partial A}(a,b) }\, \left[ 1 + O(r_A^{-1}) \right] \\&=c_4 r_A^{-1} |\sin ^3( \theta _a - \theta _b)|\left[ 1 + O\left( \left| \sin \left( \theta _a -\theta _b \right) \right| ^{-1}\, r_A^{-u} \right) \right] , \end{aligned}$$

and consequently Theorem 1.2 follows from Theorem 1.3 combined with (1.6), Theorem 1.4, and the last equation. \(\square \)
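
Schematically, the constants and powers of \(r_A\) in the proof above combine as follows, with \(c_0 = \bar{q}\, c_1 c_4\):

$$\begin{aligned} P(a,b;A)&= \underbrace{\bar{q}}_{(1.6)} \cdot \underbrace{c_1\, r_A^{1/4}}_{\text {Thm. 1.4}} \cdot \underbrace{c_4\, r_A^{-1}\, |\sin ^3(\theta _a - \theta _b)|}_{\varPhi _A} \cdot \left[ 1 + O\left( \left| \sin (\theta _a - \theta _b)\right| ^{-1} r_A^{-u}\right) \right] \\&= c_0\, r_A^{-3/4}\, |\sin ^3(\theta _a - \theta _b)| \left[ 1 + O\left( \left| \sin (\theta _a - \theta _b)\right| ^{-1} r_A^{-u}\right) \right] . \end{aligned}$$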

2 Preliminaries

This section sets more notation and collects some background material primarily on loop measures and loop-erased walks.

2.1 Green’s functions and Poisson kernels

We summarize here some definitions and facts about discrete and continuum Green’s functions and Poisson kernels.

  • The (Dirichlet) Green’s function (or Green’s function for Brownian motion) in a simply connected domain \(D \subset \mathbb {C}\), with pole at \(w \in D\), is the positive symmetric function \(g_D(z,w)\) such that \(z \mapsto g_D(z,w) + \log |z-w|\) is harmonic in D and \(g_D(z,w) \rightarrow 0\) as \(z \rightarrow \partial D\).

  • For a domain \(D\subset {\mathbb {C}}\), if \(w\in D\), \(z\in \partial D\), and \(\partial D\) is locally analytic at z, the Poisson kernel \(h_D(w,z)\) is the density of harmonic measure with respect to Lebesgue measure and can be given as a normal derivative of \(g_D\). In particular, for any piecewise locally analytic arc \(F\subset \partial D\),

    $$\begin{aligned} \mathbf{P}^w(B(T)\in F) = \int _F h_D(w,z) \, d|z|, \end{aligned}$$

    where B is a Brownian motion started at w and T is its first exit time from D.
  • If \(w, z \in \partial D\) and \(\partial D\) is locally analytic at both w and z, then it is useful to define the excursion Poisson kernel

    $$\begin{aligned} h_{\partial D}(w,z) =\lim _{\epsilon \rightarrow 0^+}\epsilon ^{-1} \, h_D(w+\epsilon \mathbf {n}_w,z), \end{aligned}$$

    where \(\mathbf {n}_w\) is the unit vector normal to \(\partial D\) at w, pointing into D. Note that \(h_{\partial D}\) can be directly defined as a constant times the repeated normal derivatives in both variables of the Green’s function, and we see that it is symmetric and conformally covariant.

  • One can define the excursion Poisson kernel at points where \(\partial D\) is not locally analytic, such as at the tip of the slit of (smoothly) slit domains. The slit we consider in this paper is the positive real half-line and so the tip is the origin. Applying a conformal map to the unit disk, say, we can see that at such points, the derivative grows exactly like the inverse of the square root of the distance to the tip. So for such slit domains we may define

    $$\begin{aligned} h_{\partial D}(0,z) =\lim _{\epsilon \rightarrow 0^+}\epsilon ^{-1/2}h_D(-\epsilon ,z). \end{aligned}$$
  • Similar objects are useful in the discrete setting. Let \(A \subsetneq {\mathbb {Z}}^2\) be connected. Recall the definition for \(w,z\in \bar{A}\) of the random walk Green’s function:

    $$\begin{aligned} G_A(z,w) = \sum _{ \begin{array}{c} \omega : z \rightarrow w, \\ \omega \subset A \end{array}} p(\omega ), \quad z,w \in A, \end{aligned}$$

    and \(G_A(z,w)=0\) if \(z\in \partial A\) or \(w\in \partial A\), as well as the random walk q-Green’s function of A, given in Sect. 1.3:

    $$\begin{aligned} G^q_A(z,w) = \sum _{ \begin{array}{c} \omega : z \rightarrow w, \\ \omega \subset A \end{array}} q(\omega ), \quad z,w \in A, \end{aligned}$$

    and \(G^q_A(z,w)=0\) if \(z\in \partial A\) or \(w\in \partial A\), where the sums are over walks starting at z and ending at w and staying in A.

  • Let \(A \in {\mathcal {A}}\) be given. Another way to define the Green’s function is to let \(S_j\) be a simple random walk in A, \(\tau = \tau _A = \min \{j \geqslant 0: S_j \not \in A \}\), and

    $$\begin{aligned} G_A(z,w) = \mathbf{E}^z \left[ \sum _{j=0}^{\tau -1} 1\{S_j = w \} \right] ,\quad z,w \in A. \end{aligned}$$

    Using a last-exit decomposition, one can see that the Green’s function is related to the Poisson kernel defined in (1.7) as follows:

    $$\begin{aligned} H_A(z,a) = \frac{1}{4} \, G_A(z,a_-) . \end{aligned}$$
    (2.1)

    It is known [13, Theorem 1.2] that there exist a lattice-dependent constant \(C_0\) and a constant \(c < \infty \) such that

    $$\begin{aligned} \left| G_A(0,0) - \frac{2}{\pi }\, \log r_A - C_0 \right| \leqslant c \, \frac{\log r_A}{r_A^{1/3} }. \end{aligned}$$

    For our purposes, it will suffice to use

    $$\begin{aligned} G_A(0,0) = \frac{2}{\pi }\,\log r_A + O(1) , \end{aligned}$$

    where, as before, \(r_A\) is the conformal radius of \(D_A\).
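
As a numerical illustration (not part of the paper's argument), the relation \(G_A(z,w) = \mathbf{1}\{z = w\} + \frac{1}{4}\sum _{w' \sim z} G_A(w',w)\) can be solved by value iteration on a small discrete ball, and the logarithmic growth \(G_A(0,0) = \frac{2}{\pi }\log r_A + O(1)\) checked directly. In the Python sketch below the radii 6 and 12 are arbitrary choices; doubling the radius should increase \(G_A(0,0)\) by roughly \((2/\pi )\log 2\):

```python
import math

def green_at_origin(n, iters):
    """G_{B(n)}(0,0): expected number of visits to 0 by simple random walk
    from 0 before leaving the discrete ball B(n).  Solves
    g(w) = 1{w=0} + (1/4) sum_{w'~w} g(w'), with g = 0 outside B(n),
    by value iteration; g(0,0) converges to G_{B(n)}(0,0)."""
    pts = {(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)
           if x * x + y * y < n * n}
    nbrs = {p: [q for q in ((p[0] + 1, p[1]), (p[0] - 1, p[1]),
                            (p[0], p[1] + 1), (p[0], p[1] - 1)) if q in pts]
            for p in pts}
    g = {p: 0.0 for p in pts}
    for _ in range(iters):
        g = {p: (1.0 if p == (0, 0) else 0.0)
                + 0.25 * sum(g[q] for q in nbrs[p])
             for p in pts}
    return g[(0, 0)]

g6 = green_at_origin(6, 1000)
g12 = green_at_origin(12, 2500)
# Doubling the radius should increase G by about (2/pi) log 2 = 0.441...
print(g12 - g6, (2 / math.pi) * math.log(2))
```

The boundary contributes an \(O(1/n)\) error at each radius, so the two numbers agree only approximately at these small sizes.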

  • We will need the following result which follows from [13, (40)–(41)]. An explicit u can be deduced from these estimates, but it is small and not optimal so we will not give it here.

Theorem 2.1

There exists \(u > 0\) such that if \(z \in A\) with \(|f(z)| \leqslant 1 - r_A^{-u}\), then

$$\begin{aligned} H_A(z,a) = {H_A(0,a)} \, \frac{1-|f(z)|^2}{|f(z)- e^{i2\theta _a}|^2} \, \left[ 1 + O\left( r_A^{-u}\right) \right] , \end{aligned}$$

and hence

$$\begin{aligned} \frac{h_A(z,a)}{h_A(0,a)} = \frac{H_A(z,a)}{H_A(0,a)} \, \left[ 1 + O\left( r_A^{-u}\right) \right] . \end{aligned}$$
(2.2)

Moreover, if \(|\theta _a - \theta _b| \geqslant r_A^{-1/20}\),

$$\begin{aligned} H_{\partial A}(a,b) = \frac{\pi }{2} \,\frac{ H_A(0,a) \, H_A(0,b) }{\sin ^2(\theta _a - \theta _b)} \, \left[ 1 + O \left( r_A^{-u}\right) \right] . \end{aligned}$$

2.2 Loops and loop measures

We will consider oriented rooted loops on \({\mathbb {Z}}^2\), that is, walks starting and ending at the same vertex:

$$\begin{aligned}\ell =[\ell _0, \ell _1, \ldots , \ell _{\tau }], \quad \ell _0=\ell _\tau .\end{aligned}$$

(Unless otherwise stated, we will simply say “loop”.) The length of a loop, \(|\ell |=\tau \), is an even integer, since \({\mathbb {Z}}^2\) is bipartite. The vertex \(\ell _0\) is the root of \(\ell \). The edge representation of a rooted loop is the sequence of directed edges

$$\begin{aligned} e(\ell )=[\overrightarrow{e}_1, \ldots , \overrightarrow{e}_\tau ], \quad \overrightarrow{e}_j = [\ell _{j-1}, \ell _j]. \end{aligned}$$

Note that each vertex in \(\ell \) occurs in an even number of edges in \(e(\ell )\).

Let \({\mathcal {L}}_{*}(A)\) be the set of discrete (rooted) loops contained in A and write \({\mathcal {L}}_*={\mathcal {L}}_*({{\mathbb {Z}}^2})\). The rooted loop measures \(m_*\) and \(m_{*}^q\), associated with \(p\) and \(q\) respectively, are measures on rooted loops defined by \(m _*(\ell ) = m^q_*(\ell ) = 0\) if \(|\ell | = 0\), and otherwise

$$\begin{aligned}&m_{*}(\ell ) = \frac{p(\ell )}{|\ell |}, \quad \ell \in \mathcal {L}_{*}.\\&m_{*}^q(\ell ) = \frac{q(\ell )}{|\ell |} = \frac{p(\ell )\, Q(\ell )}{|\ell |}, \quad \ell \in \mathcal {L}_{*}. \end{aligned}$$

An unrooted loop \([\ell ]\) is an equivalence class of rooted loops where two rooted loops are equivalent if and only if the edge representation of one can be obtained from that of the other by a cyclic permutation of indices. Clearly the lengths of two equivalent loops are the same, so we can write \(|[\ell ]|\) for this number. We write \(\#[\ell ]\) for the number of equivalent rooted loops in \([\ell ]\); this is always an integer dividing \(|[\ell ]|\). Moreover, the weights of all equivalent loops in a given class are the same so we may write \(p([\ell ])\) and \(q([\ell ])\).
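
For illustration (this example is not from the original text), \(\#[\ell ]\) can be computed mechanically from the edge representation. The sketch below shows a back-and-forth loop of length 4 with only 2 distinct cyclic shifts, so \(\#[\ell ] = 2\) divides \(|[\ell ]| = 4\), and a plaquette loop with \(\#[\ell ] = |[\ell ]| = 4\):

```python
def edge_rep(loop):
    """Edge representation e(l): list of directed edges [l_{j-1}, l_j]."""
    return [(loop[j - 1], loop[j]) for j in range(1, len(loop))]

def num_rooted_representatives(loop):
    """#[l]: number of distinct rooted loops in the unrooted class [l],
    i.e. distinct cyclic shifts of the edge representation."""
    e = edge_rep(loop)
    return len({tuple(e[k:] + e[:k]) for k in range(len(e))})

back_forth = [(0, 0), (1, 0), (0, 0), (1, 0), (0, 0)]   # #[l] = 2, |[l]| = 4
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]       # #[l] = 4, |[l]| = 4
print(num_rooted_representatives(back_forth), num_rooted_representatives(square))
```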

We write \(\mathcal {L}(A)\) for the set of unrooted loops whose representatives are in \(\mathcal {L}_*(A)\) and \(\mathcal {L}=\mathcal {L}({\mathbb {Z}}^2)\). The loop measures, \(m,m^q\), are the measures on unrooted loops induced by \(m_*,m_{*}^q\):

$$\begin{aligned} m ([\ell ])= & {} \sum _{\ell \in [\ell ]}m_{*} (\ell ) = \frac{\#[\ell ]}{|\ell |}p(\ell ), \quad [\ell ] \in \mathcal {L},\\ m^q([\ell ])= & {} \sum _{\ell \in [\ell ]}m_{*}^q(\ell ) = \frac{\#[\ell ]}{|\ell |}q(\ell ), \quad [\ell ] \in \mathcal {L}, \end{aligned}$$

where the sums are over the representatives of \([\ell ]\). If \(\ell \) is a rooted loop, we write \(m(\ell ), m^q(\ell )\) for \(m([\ell ]),m^q([\ell ])\).

Suppose that \(V \subset A \subset {\mathbb {Z}}^2\), where A is finite. Consider the set of loops contained in A that meet V:

$$\begin{aligned} \mathcal {L}(V;A) = \{[\ell ] \in \mathcal {L}(A) : \ell \cap V \ne \emptyset \}, \end{aligned}$$

and define \(\mathcal {L}_*(V;A)\) similarly using rooted loops. Let

$$\begin{aligned} F(V;A)= & {} \exp \left\{ m\left[ \mathcal {L}(V;A) \right] \right\} = \exp \left\{ m_*\left[ \mathcal {L}_*(V;A) \right] \right\} ,\\ F^q(V;A)= & {} \exp \left\{ m^q\left[ \mathcal {L}(V;A) \right] \right\} = \exp \left\{ m^q_*\left[ \mathcal {L}_*(V;A) \right] \right\} . \end{aligned}$$

It follows from the second equality that if \(V=\cup _{i=1}^k V_i\), with the \(V_i\) disjoint and \(A_j = A {\setminus } (\cup _{i=1}^j V_i)\), then

$$\begin{aligned} F^q(V;A) = F^q(V_1;A) F^q(V_2; A_1) \cdots F^q(V_k; A_{k-1}), \end{aligned}$$
(2.3)

and similarly for \(F(V;A)\). In particular, the right-hand side of (2.3) is independent of the order of the \(V_i\) partitioning V.

If \(V = \{z\}\) is a singleton set, we write just \(\mathcal {L}(z;A),\) \( \mathcal {L}_*(z;A), F(z;A)\), and \(F^q(z;A)\).
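
The order-independence in (2.3) can be checked numerically for \(V = \{z_1\} \cup \{z_2\}\): by Lemma 2.1 below, \(F(z;A) = G_A(z,z)\), so (2.3) asserts \(G_A(z_1,z_1)\, G_{A{\setminus }\{z_1\}}(z_2,z_2) = G_A(z_2,z_2)\, G_{A{\setminus }\{z_2\}}(z_1,z_1)\). A Python sketch (the box and the points are arbitrary choices, not from the text):

```python
def green_diag(A, z, iters=1500):
    """G_A(z,z) by value iteration on g(w) = 1{w=z} + (1/4) sum_{w'~w} g(w'),
    with g = 0 off A."""
    A = set(A)
    nbrs = {p: [q for q in ((p[0] + 1, p[1]), (p[0] - 1, p[1]),
                            (p[0], p[1] + 1), (p[0], p[1] - 1)) if q in A]
            for p in A}
    g = {p: 0.0 for p in A}
    for _ in range(iters):
        g = {p: (1.0 if p == z else 0.0)
                + 0.25 * sum(g[q] for q in nbrs[p])
             for p in A}
    return g[z]

A = {(x, y) for x in range(-4, 5) for y in range(-4, 5)}
z1, z2 = (0, 0), (1, 0)
lhs = green_diag(A, z1) * green_diag(A - {z1}, z2)
rhs = green_diag(A, z2) * green_diag(A - {z2}, z1)
print(lhs, rhs)  # equal: the order of removing z1, z2 does not matter
```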

Lemma 2.1

Suppose that \(z \in A \subsetneq {\mathbb {Z}}^2\). Then

$$\begin{aligned} G_A(z,z)= & {} \sum _{\ell \in \mathcal {L}_*(A) : \, \ell _0=z} p(\ell )=e^{ m\left[ \mathcal {L}(z;A) \right] }=F(z;A),\\ G_A^q(z,z)= & {} \sum _{\ell \in \mathcal {L}_*(A) : \, \ell _0=z} q(\ell )=e^{ m^q\left[ \mathcal {L}(z;A) \right] }=F^q(z;A). \end{aligned}$$

Proof

See [17, Lemma 9.3.2]. Although that book only studies positive measures, the proof is entirely algebraic and holds for the weight q as well. We sketch the generalization here.

Suppose L is a set of nontrivial loops rooted at a point \(z\in A\) with the property that if \(\ell _1,\ell _2 \in L\), then \(\ell _1 \oplus \ell _2 \in L\). Let \(L_1\) be the set of elementary loops, that is, the set of loops in L that cannot be written as \(\ell _1 \oplus \ell _2\) with \(\ell _1,\ell _2 \in L\). Let \(L_k\) denote the set of loops of the form

$$\begin{aligned} \ell = \ell _1 \oplus \cdots \oplus \ell _k, \quad \ell _j \in L_1 . \end{aligned}$$

Then \(L = \bigcup _{k=1}^\infty L_k\). Suppose now, as is true in the case we will be considering, that every loop in L admits a unique (up to translation) such decomposition into concatenated elementary loops. In this case there is a unique k such that \(\ell \in L_k\). We may then consider the measure \(\lambda \) on loops in L that assigns measure \(k^{-1} q(\ell ) \) to \(\ell \in L_k\). Let \(L'\) denote the set of unrooted loops that have at least one representative in L. Then the measure \(\lambda \) viewed as a measure on unrooted loops is the same as the loop measure \(m^q\) restricted to \(L'\). Indeed, this measure assigns measure \( (j/k) \, q([\ell ])\) to \([\ell ]\) where j is the number of distinct loops among

$$\begin{aligned} \ell _1 \oplus \ell _2 \oplus \cdots \oplus \ell _k,\;\; \ell _2 \oplus \ell _3 \oplus \cdots \oplus \ell _k \oplus \ell _1, \ldots , \ell _{k} \oplus \ell _1 \oplus \cdots \oplus \ell _{k-1}. \end{aligned}$$

If \(|q(L_1)| < 1\), as in our particular case, then

$$\begin{aligned} m^q[L'] = \sum _{k=1}^\infty \frac{1}{k} \, q(L_k) = \sum _{k=1}^\infty \frac{1}{k} \, q(L_1)^k = -\log \left[ 1 - q(L_1) \right] . \end{aligned}$$
(2.4)

In our particular case, if \(L_1\) is the set of nontrivial loops \(\ell \) starting and ending at z, staying in A and otherwise not visiting z, then it is not hard to see that

$$\begin{aligned} G_A^q(z,z) = \frac{1}{1 - q(L_1)}. \end{aligned}$$
(2.5)

Combining (2.4) and (2.5) concludes the proof. \(\square \)
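
As a sanity check of (2.5) (here with the positive weight p in place of q; the same renewal argument applies), one can estimate both \(G_A(0,0)\) and \(p(L_1)\), the probability of returning to 0 before leaving A, by direct simulation. The box size and sample size below are arbitrary choices, not from the text:

```python
import random

random.seed(12345)
N = 6                                # A = {(x, y) : |x| <= N, |y| <= N}
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))
trials = 40000
total_visits = 0
returned = 0
for _ in range(trials):
    x, y = 0, 0
    visits = 0
    while abs(x) <= N and abs(y) <= N:
        if (x, y) == (0, 0):
            visits += 1
        dx, dy = random.choice(STEPS)
        x, y = x + dx, y + dy
    total_visits += visits
    returned += (visits >= 2)        # at least one elementary loop at 0 in A

G_hat = total_visits / trials        # estimates G_A(0,0)
p_L1 = returned / trials             # estimates p(L_1)
print(G_hat, 1 / (1 - p_L1))         # the two estimates should agree
```

The agreement reflects that the number of visits to 0 before exiting is geometric with success probability \(1 - p(L_1)\), which is exactly the renewal structure behind (2.5).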

2.3 Loop-erased random walk

Let \(\omega = [\omega _0, \ldots , \omega _\tau ]\) be a walk with \(\tau < \infty \). We say that \(\omega \) is self-avoiding if \(i \ne j\) implies that \(\omega _i \ne \omega _j\). The loop-erasure of \(\omega \), \({\text {LE}}[\omega ]\), is a self-avoiding walk defined as follows:

  • If \(\omega \) is self-avoiding, set \({\text {LE}}[\omega ] = \omega \).

  • Otherwise, set \(s_0 = \max \{ j \leqslant \tau : \omega _j=\omega _0 \}\) and let \({\text {LE}}[\omega ]_0 = \omega _{s_0}\);

  • For \(i \geqslant 0\), if \(s_i < \tau \), set \(s_{i+1} = \max \{j \leqslant \tau : \omega _j = \omega _{s_i+1}\}\) and let \({\text {LE}}[\omega ]_{i+1} = \omega _{s_{i+1}}\).
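
The chronological loop erasure described above translates directly into code: one repeatedly jumps to the last visit of the current vertex and then steps forward. The sketch below is illustrative and not from the paper:

```python
def loop_erase(w):
    """Chronological loop erasure LE[w]: jump to the last visit time of the
    current vertex, record it, then advance one step."""
    eta = []
    i = 0
    while True:
        # s = last time the walk visits the current vertex w[i]
        s = max(j for j in range(len(w)) if w[j] == w[i])
        eta.append(w[s])
        if s == len(w) - 1:
            return eta
        i = s + 1

# erasing the loops 1-2-1 and then 0-1-0 chronologically leaves [0, 3]
print(loop_erase([0, 1, 2, 1, 0, 3]))  # -> [0, 3]
print(loop_erase([0, 1, 2, 3]))        # self-avoiding walks are unchanged
```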

We can now define the “loop-erased q-measure” of walks \(\eta \) staying in A:

$$\begin{aligned} \hat{q}(\eta ;A) := \sum _{\omega \subset A : \, {\text {LE}}[\omega ] = \eta } q(\omega ) = q(\eta ) F^q(\eta ; A), \end{aligned}$$
(2.6)

and we define \(\hat{p}\) in the same manner, replacing q by p. (We will often omit writing out A explicitly so that \(\hat{q}(\eta )=\hat{q}(\eta ;A)\) where no confusion is possible.) Note that this quantity is zero if \(\eta \) is not self-avoiding. The second identity in (2.6) is proved in [17, Proposition 9.5.1] by observing that one can write any walk \(\omega \) as a concatenation of the loops erased by the loop-erasing algorithm and the self-avoiding segments of \({\text {LE}}[\omega ]\) “between” the loops, and then using (2.3) and Lemma 2.1. Again, although [17] deals with positive weights, the proof is equally valid when using the weight q.

2.4 Fomin’s identity

What we call Fomin’s identity is a generalization for LERW of a well-known result of Karlin–McGregor. It is a combinatorial identity that, informally speaking, allows one to express a loop-erased quantity as a determinant of random walk quantities, the latter being easier to estimate. See [6] or Chapter 9.6 of [17] for more information and additional references. We state here the particular case of Fomin’s identity which we will need. Let \(A \in \mathcal {A}\) with two marked edges \(a,b \in \partial _e A\), and set \(K = A {\setminus } [0,1]\). The idea of the proof of the lemma below is to construct a bijection between the set of (pairs of) walks that run from b to 1 and intersect the loop-erasure of a walk from a to 0 and the set of (pairs of) walks that run from b to 0 and intersect the loop-erasure of a walk from a to 1. The existence of this bijection implies that the corresponding terms cancel in the expression on the right-hand side of (2.7) and the result is the expression on the left-hand side.

Lemma 2.2

If \(A \in \mathcal {A}, a,b \in \partial _e A\), and \(K = A {\setminus } [0,1]\), then for walks \(\omega ^1, \omega ^2\) that start and end on the boundary of K and otherwise stay in K, we have

$$\begin{aligned}&\sum _{\omega ^1: a {\rightarrow } 0 } \sum _{\begin{array}{c} \omega ^2: b {\rightarrow } 1 \\ \omega ^2 \cap {\text {LE}}[\omega ^1] = \emptyset \end{array}} p(\omega ^1)p(\omega ^2) - \sum _{\omega ^1: a {\rightarrow } 1 } \sum _{\begin{array}{c} \omega ^2: b {\rightarrow } 0 \\ \omega ^2 \cap {\text {LE}}[\omega ^1] = \emptyset \end{array}}p(\omega ^1)p(\omega ^2) \nonumber \\&\quad = \sum _{\omega ^1: a {\rightarrow } 0 }\sum _{\omega ^2: b {\rightarrow } 1 }p(\omega ^1)p(\omega ^2) - \sum _{\omega ^1: a {\rightarrow } 1 }\sum _{\omega ^2: b {\rightarrow } 0 }p(\omega ^1)p(\omega ^2). \end{aligned}$$
(2.7)

2.5 Brownian loop measure

The random walk loop measure m defined in a previous section has a conformally invariant scaling limit, the Brownian loop measure \(\mu \). It is a sigma-finite measure on equivalence classes of continuous loops \(\omega : [0, t_\omega ] \rightarrow \mathbb {C}, \, \omega (0)=\omega (t_\omega )\), with the equivalence relation given by \(\omega _1 \sim \omega _2\) if \(t_{\omega _1} = t_{\omega _2}\) and there is s such that \(\omega _1(t) = \omega _2(t+s)\) for all t (with addition modulo \(t_{\omega _2}\)); see [22]. One can construct \(\mu \) via the Brownian bubble measure \(\mu ^{\text {bub}}\): a bubble in a domain D, rooted at \(a \in \partial D\), is a continuous function \(\omega : [0, t_{\omega }] \rightarrow {\mathbb {C}}\) with \(\omega (0) = \omega (t_{\omega }) =a \in \partial D\) and \(\omega (0, t_{\omega }) \subset D\). The bubble measure is conformally covariant (with scaling exponent 2, see (2.10)) so it is enough to specify the scaling rule and give the definition in one reference domain, say \({\mathbb {D}}\). Let

$$\begin{aligned} h_{\mathbb {D}}(z,a)=\frac{1}{2\pi } \frac{1 - |z|^2}{|z-a|^2} \end{aligned}$$
(2.8)

be the Poisson kernel of \({\mathbb {D}}\). Note that the Poisson kernel is conformally covariant (which is easily checked, but see Section 2.3 of [14] for a proof). Let \(\mathbf{P}^{z,a}\) be the law of an h-process derived from Brownian motion and the harmonic function \(h_{\mathbb {D}}(z,a)\) (see below); informally, this h-process is a Brownian motion from z conditioned to exit \({\mathbb {D}}\) at a. We define

$$\begin{aligned} \mu ^{\text {bub}}_{\mathbb {D}}(1) = \pi \lim _{\epsilon \rightarrow 0} \epsilon ^{-1} h_{\mathbb {D}}(1-\epsilon ,1) \mathbf{P}^{1-\epsilon ,1}. \end{aligned}$$
(2.9)

The \(\pi \) factor is present to match the normalization of [22] and is chosen so that the measure of bubbles in \(\mathbb {H}\) rooted at 0 that intersect the unit circle equals 1. See also Chapter 5 of [14] for a discussion of the suitable metric spaces on which these measures are defined. Suppose \(\varphi : {\mathbb {D}}\rightarrow D\) is a conformal map and that \(\partial D\) is locally analytic at \(\varphi (1)\). Then if we write

$$\begin{aligned} \varphi \circ \mu ^{\text {bub}}_{\mathbb {D}}(1)[A] := \mu ^{\text {bub}}_{\mathbb {D}}(1)[ \{\omega : \varphi \circ \omega \in A \} ], \end{aligned}$$

we have the following scaling rule

$$\begin{aligned} \varphi \circ \mu ^{\text {bub}}_{\mathbb {D}}(1) = |\varphi '(1)|^2\mu ^{\text {bub}}_{D}(\varphi (1)). \end{aligned}$$
(2.10)

We can now define the Brownian loop measure restricted to loops in \({\mathbb {D}}\) as the measure on unrooted loops induced by

$$\begin{aligned} \mu = \frac{1}{\pi } \int _0^{2\pi } \int _0^1 \mu ^{\text {bub}}_{r {\mathbb {D}}}(r e^{i\theta }) r dr d\theta . \end{aligned}$$
(2.11)

The Brownian loop measure in other domains can then be defined by conformal invariance.
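
As a quick numerical check of (2.8) (illustrative, not from the paper): since \(h_{\mathbb {D}}(z,\cdot )\) is the exit density of Brownian motion started at z, it should integrate to 1 over the unit circle, for any \(z \in {\mathbb {D}}\):

```python
import cmath, math

def h_disk(z, a):
    """The Poisson kernel (2.8) of the unit disk; z inside, a on the circle."""
    return (1 - abs(z) ** 2) / (2 * math.pi * abs(z - a) ** 2)

# Riemann sum of h_D(z, e^{it}) dt over [0, 2*pi); the choice of z is arbitrary
z = 0.3 + 0.4j
M = 4000
total = (2 * math.pi / M) * sum(
    h_disk(z, cmath.exp(2j * math.pi * k / M)) for k in range(M))
print(total)  # close to 1
```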

The next lemma makes precise that the random walk and Brownian loop measures are close on large enough scales.

Lemma 2.3

There exist constants \(\theta > 0\) and \(c_1 < \infty \) such that for all sufficiently large n there is a coupling of the Brownian and random walk loop measures, \(\mu \) and m, respectively, with the following property. There is a set \(\mathcal {E}\) whose complement has measure at most \(e^{-\theta n}\), and on \(\mathcal {E}\) all coupled pairs of loops \((\omega , \ell )\) (\(\omega \) a Brownian loop and \(\ell \) a random walk loop) with

$$\begin{aligned} {\text {diam}}\omega \geqslant e^{n(1-2\theta )} \quad \text {or} \quad {\text {diam}}\ell \geqslant e^{n(1-2\theta )} \end{aligned}$$

satisfy

$$\begin{aligned} ||\omega -\ell || \leqslant c_1 n, \end{aligned}$$

where \(||\omega - \ell || = \inf _\alpha ||\omega \circ \alpha - \ell ||_\infty \) with the infimum taken over increasing reparameterizations.

This result can be derived from the main theorem of [21]. However, let us sketch the argument. Another way to construct the Brownian loop measure is by the following rooted measure. Suppose \(\omega :[0,t_\omega ]\rightarrow {\mathbb {C}}\) is a loop, that is, a continuous function with \(\omega (0) = \omega (t_\omega )\). We can describe any loop \(\omega \) as a triple \((z,t_\omega ,\tilde{\omega })\) where \(z \in {\mathbb {C}}, t_\omega > 0\) and \(\tilde{\omega }:[0,t_\omega ] \rightarrow {\mathbb {C}}\) is a loop with \(\tilde{\omega }(0) = \tilde{\omega }(t_\omega ) = 0\). The loop \(\omega \) is obtained from \((z,t_\omega ,\tilde{\omega })\) by translation. We consider the measure on \((z,t_\omega ,\tilde{\omega })\) given by

$$\begin{aligned} \mathrm{area} \times (2\pi t^2)^{-1} \, dt \times (\mathrm{bridge}_t) \end{aligned}$$
(2.12)

where \(\mathrm{bridge}_t\) means the probability measure associated to a two-dimensional Brownian motion \(B_s, 0 \leqslant s\leqslant t\), conditioned so that \(B_0 = B_t = 0\). The factor \((2 \pi t^2)^{-1}\) can be considered as \(t^{-1} \, p_t(0,0)\), where \(p_t\) is the transition kernel for a two-dimensional Brownian motion. This measure, considered as a measure on unrooted loops, is the same as the measure

$$\begin{aligned} \mu = \frac{1}{\pi } \int _0^{2\pi } \int _0^\infty \mu ^{\text {bub}}_{r {\mathbb {D}}}(r e^{i\theta }) r dr d\theta . \end{aligned}$$

The expression for \(\mu \) associates to each unrooted loop the rooted loop obtained by choosing the point farthest from the origin. The expression (2.12) chooses the root using the uniform distribution on \([0,t_\omega ]\).

Similarly, a rooted random walk loop can be written as a triple \((z, 2n, \ell )\) where \(\ell \) is a loop with \(\ell (0) = 0\) and \(|\ell | = 2n\). Then the measure on such triples is

$$\begin{aligned} (\mathrm{counting\ measure}) \times (2n)^{-1}\, \mathbf{P}\{S_{2n} = 0 \} \times (\mathrm{bridge}_n). \end{aligned}$$

Here \(S_n\) is a simple random walk starting at the origin, and \(\mathrm{bridge}_n\) denotes the probability measure on \([S_0,S_1,\ldots ,S_{2n}]\) conditioned that \( S_{2n}=0\). Using the relation \(\mathbf{P}\{S_{2n} = 0 \} = (\pi n)^{-1} + O(n^{-2}) \), we can now describe our coupling of the two components. For the first component, the root, we couple Brownian loops rooted at \({\mathcal {S}}_z\) with random walk loops rooted at z. We couple Brownian loops with time duration \(n-\frac{1}{2} \leqslant t_\omega < n + \frac{1}{2}\) with random walk loops of time duration 2n. Then we use a version of the KMT coupling (see Theorem 2.2) of the random walk and Brownian loops to couple the paths. One can then check that this coupling has the desired properties.
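
The relation \(\mathbf{P}\{S_{2n} = 0 \} = (\pi n)^{-1} + O(n^{-2})\) is elementary and can be checked exactly; the sketch below (not from the paper) uses the standard rotation trick for the planar walk:

```python
import math

def p_return(n):
    """P{S_{2n} = 0} for 2d SRW: the coordinates (x+y) and (x-y) evolve as
    independent 1d simple random walks, so the probability factors as the
    square of the 1d return probability C(2n,n) 2^{-2n}."""
    return (math.comb(2 * n, n) / 4.0 ** n) ** 2

n = 50
print(p_return(n), 1 / (math.pi * n))  # difference is O(n^{-2})
```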

2.6 KMT coupling

We will use in a number of places the Komlós, Major, and Tusnády (KMT) coupling of random walk and Brownian motion. For a proof of the one-dimensional case, see [12] or [17]; the two-dimensional case follows using a standard trick [17, Theorem 7.6.1].

Theorem 2.2

There exists a coupling of planar Brownian motion B and two-dimensional simple random walk S with \(B_0=S_0\), and a constant \(c>0\) such that for every \(\lambda >0\), every \(n\in {\mathbb {R}}_+\),

$$\begin{aligned} \mathbf {P}\left( \sup _{0\leqslant t\leqslant n \vee T_n \vee \tau _n}|S_{2t}-B_t| > c(\lambda + 1)\log n\right) \leqslant cn^{-\lambda }, \end{aligned}$$

where \(T_n = \min \{t: |S_{2t} | \geqslant n \}\) and \(\tau _n = \min \{t: |B_t| \geqslant n \} \).

3 The combinatorial identity: proof of Theorem 1.3

This section states and proves Theorem 3.1 which is a more general version of Theorem 1.3. For the statement of the theorem some more notation is needed.

Fix a discrete domain \(A\in {\mathcal {A}}\). Recall the definitions of \(J\), \(Q\), and \(q\) from Sect. 1.3 and that our branch cut is \(\beta = f_A^{-1}([0,1])\) which runs from \(w_0\) to \(\partial D_A\) in \(D_A\). We will assume that \(r_A\) is sufficiently large so that \([0,1] \cap \beta = \emptyset \). This is possible, since \(f_A'(w_0)>0\).

Let

$$\begin{aligned} \lambda =[x_0, \ldots , x_k] \subset A \end{aligned}$$

be a self-avoiding walk (SAW) containing the ordered edge [0, 1] with \({\text {dist}}(0,\partial D) \geqslant 2 {\text {diam}}\lambda \). Given \(\lambda \) write

$$\begin{aligned} \lambda ^R = [x_k, \ldots , x_0] \end{aligned}$$

for its time-reversal. Note that \(\lambda ^R\) contains the ordered edge [1, 0]. For given \(a=[a_-,a_+], b=[b_-,b_+] \in \partial _e A\) we make the following definitions.

  • Let \({\mathcal {W}_{\text {SAW}}}(\lambda )^+={\mathcal {W}_{\text {SAW}}}^+(a, b; \lambda , A)\) be the set of SAWs from a to b in A that contain the walk \(\lambda \). That is, \({\mathcal {W}_{\text {SAW}}}(\lambda )^+\) consists of walks \(\eta \in {\mathcal {W}_{\text {SAW}}}^+\) that can be written as \(\eta ^1 \oplus \lambda \oplus \eta ^2\), where \(\eta ^1, \eta ^2\) are SAWs connecting a with \(x_0\) and \(x_k\) with b, respectively.

  • Let \({\mathcal {W}_{\text {SAW}}}(\lambda )^-={\mathcal {W}_{\text {SAW}}}^+(a, b; \lambda ^R, A)\) be the set of SAWs from a to b in A that contain the reversal of \(\lambda \).

  • Let \({\mathcal {W}_{\text {SAW}}}(\lambda )={\mathcal {W}_{\text {SAW}}}(a,b;\lambda ,A) = {\mathcal {W}_{\text {SAW}}}(\lambda )^+ \cup {\mathcal {W}_{\text {SAW}}}(\lambda )^-\).

We will sometimes suppress the dependence on \(\lambda \) and write just \({\mathcal {W}_{\text {SAW}}}^+, {\mathcal {W}_{\text {SAW}}}^-\), and \({\mathcal {W}_{\text {SAW}}}\) for \({\mathcal {W}_{\text {SAW}}}^+(\lambda ), {\mathcal {W}_{\text {SAW}}}^-(\lambda )\), and \({\mathcal {W}_{\text {SAW}}}(\lambda )\); this should not cause confusion. For topological reasons (see [16] for a detailed argument), every self-avoiding path \(\eta \) from a to b traversing the ordered edge [0, 1] yields the same value of \(Q(\eta )\). Moreover, if \(\eta '\) is another SAW from a to b traversing [1, 0], then \(Q(\eta ') = - Q(\eta )\). Indeed, let \(\zeta \) be any boundary arc connecting a to b. Then one of the loops \(\eta \oplus \zeta \) and \(\eta ' \oplus \zeta \) winds around \(w_0\) exactly once and the other does not, so \(Q(\eta \oplus \zeta )+Q(\eta ' \oplus \zeta )=0\), implying (see (1.3)) that \(Q(\zeta )(Q(\eta )+Q(\eta '))=0\). Without loss of generality, we will assume that a, b are labelled in such a way that

$$\begin{aligned} \eta \in {\mathcal {W}_{\text {SAW}}}(\lambda )^+ \Longrightarrow Q(\eta ) = +1 ; \quad \eta \in {\mathcal {W}_{\text {SAW}}}(\lambda )^- \Longrightarrow Q(\eta ) = -1. \end{aligned}$$

Recall that \(\mathcal {J}_A\) is the set of unrooted random walk loops in A with odd winding number about \(w_0\).
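
For illustration (not part of the argument), the winding number of a lattice loop about \(w_0\), and hence membership in \(\mathcal {J}_A\), can be computed by accumulating signed turning angles along the loop:

```python
import math

def winding_number(loop, w0=(0.0, 0.0)):
    """Winding number of a closed path about w0 (w0 not on the path),
    computed by summing the signed angle increments of successive steps."""
    total = 0.0
    for j in range(len(loop) - 1):
        ax, ay = loop[j][0] - w0[0], loop[j][1] - w0[1]
        bx, by = loop[j + 1][0] - w0[0], loop[j + 1][1] - w0[1]
        total += math.atan2(ax * by - ay * bx, ax * bx + ay * by)
    return round(total / (2 * math.pi))

# nearest-neighbour loop encircling the origin once (odd winding number:
# with w0 = 0 and the loop contained in A, it would belong to J_A),
# and a small loop that does not surround the origin
ring = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0)]
far = [(2, 0), (3, 0), (3, 1), (2, 1), (2, 0)]
print(winding_number(ring), winding_number(far))  # -> 1 0
```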

Set \(K=A {\setminus } \lambda \) and define

$$\begin{aligned} \varDelta _{K}\left( x_0 \rightarrow a, x_k \rightarrow b\right)&= H_{\partial K}(x_0, a)H_{\partial K}(x_k, b) - H_{\partial K}(x_0, b)H_{\partial K}(x_k, a)\\ \varDelta ^q_{K}\left( x_0 \rightarrow a, x_k \rightarrow b\right)&= H_{\partial K}^q(x_0, a)H_{\partial K}^q(x_k, b) - H_{\partial K}^q(x_0, b)H_{\partial K}^q(x_k, a), \end{aligned}$$

where

$$\begin{aligned} H_{\partial K}(x, a) = \sum _{\begin{array}{c} \omega : x {\rightarrow } a \\ \omega \subset K \end{array}}p(\omega ), \quad H_{\partial K}^q(x,a) = \sum _{\begin{array}{c} \omega : x {\rightarrow } a \\ \omega \subset K \end{array}}q(\omega ), \end{aligned}$$

are the boundary Poisson kernels with the sums taken over walks started from the vertex x, taking the first step into K and then exiting K using the edge \(a \in \partial _e A\). Notice that \(\varDelta _K, \varDelta ^q_K\) can be written as determinants.
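
Explicitly, the last remark reads

```latex
\varDelta _{K}\left( x_0 \rightarrow a,\, x_k \rightarrow b\right)
  = \det \begin{pmatrix}
      H_{\partial K}(x_0, a) & H_{\partial K}(x_0, b) \\
      H_{\partial K}(x_k, a) & H_{\partial K}(x_k, b)
    \end{pmatrix},
```

with the analogous formula for \(\varDelta ^q_{K}\) using \(H^q_{\partial K}\).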

Theorem 3.1

Under the assumptions above,

$$\begin{aligned} \sum _{\eta \in {\mathcal {W}_{\text {SAW}}}} \hat{p}(\eta ) = p(\lambda ) e^{2m[\mathcal {J}_A]}F^q(\lambda ; A)\left| \varDelta ^q_K\left( x_0 \rightarrow a, x_k \rightarrow b\right) \right| , \end{aligned}$$

where \({\mathcal {W}_{\text {SAW}}}= {\mathcal {W}_{\text {SAW}}}(\lambda )\). In fact,

$$\begin{aligned} \sum _{\eta \in {\mathcal {W}_{\text {SAW}}}^{+}} \hat{p}(\eta )= & {} \frac{p(\lambda )}{2} \left[ e^{2m[\mathcal {J}_A]}F^q(\lambda ; A)|\varDelta ^q_K\left( x_0 \rightarrow a, x_k \rightarrow b\right) |\right. \\&\left. + F(\lambda ;A)\varDelta _K\left( x_0 \rightarrow a, x_k \rightarrow b\right) \right] \end{aligned}$$

and

$$\begin{aligned} \sum _{\eta \in {\mathcal {W}_{\text {SAW}}}^{-}} \hat{p}(\eta )= & {} \frac{p(\lambda )}{2} \left[ e^{2m[\mathcal {J}_A]}F^q(\lambda ; A)|\varDelta ^q_K\left( x_0 \rightarrow a, x_k \rightarrow b\right) |\right. \\&\left. - F(\lambda ;A)\varDelta _K\left( x_0 \rightarrow a, x_k \rightarrow b\right) \right] , \end{aligned}$$

where \({\mathcal {W}_{\text {SAW}}}^+ = {\mathcal {W}_{\text {SAW}}}^+(\lambda )\) and \({\mathcal {W}_{\text {SAW}}}^-={\mathcal {W}_{\text {SAW}}}^-(\lambda )\).

We will use this theorem only in the special case when \(\lambda = [0,1]\) where in the notation of the introduction the theorem gives

$$\begin{aligned} \sum _{\eta \in {\mathcal {W}_{\text {SAW}}}}\hat{p}(\eta ) = \frac{1}{4}F^q([0,1]; A)\, e^{2m[\mathcal {J}_A]}\, \left| R_A(0,a)R_A(1,b) - R_A(0,b)R_A(1,a) \right| . \end{aligned}$$

If we divide both sides of this equation by \(H_{\partial A}(a,b)\) we get (1.5) as stated in the introduction.

Before proving Theorem 3.1 we need a lemma.

Lemma 3.1

Let \(\eta : a {\rightarrow } b \), \(a,b \in \partial _e A\), be a SAW in A containing the (unordered) edge [0, 1]. Then

$$\begin{aligned} F^q(\eta ;A) = F(\eta ;A) \exp \{-2m[ \mathcal {J}_A ] \}, \end{aligned}$$

where \(\mathcal {J}_A\) is the set of unrooted loops in A with odd winding number about \(w_0\).

Proof

For a random walk loop \(\ell \), let \({\text {wind}}(\ell )\) denote its winding number about \(w_0\). Then the definition of q implies that

$$\begin{aligned} m^q\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset \right\} \right) =&\, m\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset ,\, {\text {wind}}(\ell ) \text { even} \right\} \right) \\&- m\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset ,\, {\text {wind}}(\ell ) \text { odd} \right\} \right) \\ =&\, m\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset \right\} \right) \\&- 2m\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset ,\, {\text {wind}}(\ell ) \text { odd} \right\} \right) . \end{aligned}$$

But any loop with odd winding number must separate \(w_0\) from \(\partial A\), and so intersect every SAW \(\eta : a {\rightarrow } b \) containing [0, 1]. This implies that

$$\begin{aligned} m^q\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset \right\} \right) = m\left( \left\{ \ell \subset A : \, \ell \cap \eta \ne \emptyset \right\} \right) - 2 m\left( \left\{ \ell \subset A : \, {\text {wind}}(\ell ) \text { odd} \right\} \right) . \end{aligned}$$

By exponentiating both sides we get the lemma. \(\square \)

Proof of Theorem 3.1

Fix \(\lambda \) as in the statement for the rest of the proof. We write \(\mathcal {W},\mathcal {W}^{\pm }\) for \({\mathcal {W}_{\text {SAW}}}(\lambda ),{\mathcal {W}_{\text {SAW}}}^{\pm }(\lambda )\). The idea is to write the sums \(\sum _{\eta \in \mathcal {W}^+} \hat{p}(\eta )\) and \(\sum _{\eta \in \mathcal {W}^-} \hat{p}(\eta )\) in terms of both random walk and q-random walk quantities via the formulas (see (2.6) and Lemma 3.1)

$$\begin{aligned} \hat{p}(\eta )=p(\eta )F(\eta ;A), \quad F(\eta ;A)=e^{2m(\mathcal {J}_A)}F^q(\eta ;A), \end{aligned}$$

and the facts that \(p(\eta )=\pm q(\eta ), \eta \in \mathcal {W}^{\pm }\). After resummation, when we add and subtract the resulting expressions, a determinant identity due to Fomin (see Sect. 2.4) can be used to write the expressions in terms of random walk determinants \(\varDelta _{A {\setminus } \lambda }\) and \(\varDelta ^q_{A {\setminus } \lambda }\) that do not involve loop-erased walk quantities.

Now we turn to the details. Write \(K = A {\setminus } \lambda , \varDelta = \varDelta _K, \varDelta ^q = \varDelta _K^q.\) First observe that any \(\eta \in \mathcal {W}^+\) can be written as

$$\begin{aligned} \eta = (\eta ^1)^R \oplus \lambda \oplus \eta ^2, \end{aligned}$$

where \(\eta ^1, \eta ^2\) are nonintersecting SAWs in K connecting \(x_0\) with a and \(x_k\) with b, respectively. Note that any loop intersecting \(\eta \) either intersects \(\lambda \subset \eta \) or it does not. Consequently, by (2.3) and (2.6), we can write

$$\begin{aligned} \hat{p}(\eta ) = p(\eta )F(\eta ; A) = p(\eta )F(\lambda ;A) F(\eta ; K). \end{aligned}$$

Using this and the above decomposition, we see that

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^+} \hat{p}(\eta )&= \sum _{\eta \in \mathcal {W}^+}p(\eta ) F(\eta ; A) \nonumber \\&= p(\lambda )F(\lambda ;A)\sum _{\begin{array}{c} {\eta ^1: x_0 {\rightarrow } a } \\ {\eta ^2: x_k {\rightarrow } b } \\ \eta ^1 \cap \eta ^2 = \emptyset \end{array}}p(\eta ^1)p(\eta ^2)F(\eta ^1 \cup \eta ^2; K), \end{aligned}$$
(3.1)

where the sum is over all pairs of nonintersecting SAWs \(\eta ^1: x_0 {\rightarrow } a \) and \(\eta ^2: x_k {\rightarrow } b \) in K. Similarly, any \(\eta \in \mathcal {W}^-\) can be decomposed

$$\begin{aligned} \eta = (\eta ^2)^R \oplus (\lambda )^R \oplus \eta ^1, \end{aligned}$$

where \(\eta ^2: x_k {\rightarrow } a \) and \(\eta ^1: x_0 {\rightarrow } b \) are nonintersecting SAWs in K. We see that

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^-} \hat{p}(\eta ) = p(\lambda )F(\lambda ;A)\sum _{\begin{array}{c} \eta ^1: x_0 {\rightarrow } b \\ \eta ^2: x_k {\rightarrow } a \\ \eta ^1 \cap \eta ^2 = \emptyset \end{array} }p(\eta ^1)p(\eta ^2)F(\eta ^1 \cup \eta ^2; K). \end{aligned}$$
(3.2)

(We are only summing over paths in K.) Let us now consider the sum on the right-hand side of (3.1). Then using (2.3) we have

$$\begin{aligned} \sum _{\begin{array}{c} \eta ^1: x_0 {\rightarrow } a \\ \eta ^2: x_k {\rightarrow } b \\ \eta ^1 \cap \eta ^2 = \emptyset \end{array}}p(\eta ^1)p(\eta ^2)F(\eta ^1 \cup \eta ^2; K)&= \sum _{\begin{array}{c} \eta ^1: x_0 {\rightarrow } a \\ \eta ^2: x_k {\rightarrow } b \\ \eta ^1 \cap \eta ^2 = \emptyset \end{array}}p(\eta ^1)F(\eta ^1; K ) p(\eta ^2) F(\eta ^2; K {\setminus } \eta ^1) \\&=\sum _{\omega ^1: x_0 {\rightarrow } a } \sum _{\begin{array}{c} \omega ^2: x_k {\rightarrow } b \\ \omega ^2 \cap {\text {LE}}[\omega ^1] = \emptyset \end{array}}p(\omega ^1)p(\omega ^2), \end{aligned}$$

where the sums in the last expression are over walks \(\omega ^1: x_0 \rightarrow a\) and \(\omega ^2: x_k \rightarrow b\) in K, not necessarily self-avoiding. An identical argument proves the corresponding identity (interchanging \(x_0\) and \(x_k\)) starting from the sum in the right-hand side of (3.2). If we take the difference of the two expressions, Fomin’s identity implies that we may drop the non-intersection condition:

$$\begin{aligned}&\sum _{\omega ^1: x_0 {\rightarrow } a } \sum _{\begin{array}{c} \omega ^2: x_k {\rightarrow } b \\ \omega ^2 \cap {\text {LE}}[\omega ^1] = \emptyset \end{array}} p(\omega ^1)p(\omega ^2) - \sum _{\omega ^1: x_k {\rightarrow } a } \sum _{\begin{array}{c} \omega ^2: x_0 {\rightarrow } b \\ \omega ^2 \cap {\text {LE}}[\omega ^1] = \emptyset \end{array}}p(\omega ^1)p(\omega ^2) \\&\quad = \sum _{\begin{array}{c} \omega ^1: x_0 {\rightarrow } a \\ \omega ^2: x_k {\rightarrow } b \end{array}}p(\omega ^1)p(\omega ^2) - \sum _{\begin{array}{c} \omega ^1: x_k {\rightarrow } a \\ \omega ^2: x_0 {\rightarrow } b \end{array}}p(\omega ^1)p(\omega ^2) \\&\quad = H_{\partial K}(x_0, a)H_{\partial K}(x_k, b) - H_{\partial K}(x_k, a)H_{\partial K}(x_0, b)\\&\quad = \varDelta \left( x_0 \rightarrow a, x_k \rightarrow b\right) . \end{aligned}$$

(Again, we are only considering paths in K.) In other words, subtracting (3.2) from (3.1) gives

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^+}\hat{p}(\eta ) - \sum _{\eta \in \mathcal {W}^-}\hat{p}(\eta ) = p(\lambda )\, F(\lambda ;A)\, \varDelta \left( x_0 \rightarrow a, x_k \rightarrow b\right) \end{aligned}$$
(3.3)

and the right-hand side involves only random walk quantities with no non-intersection conditions. Up to now we have not used the signed weights. The idea is to express the sum \(\sum _{\eta \in \mathcal {W}^+}\hat{p}(\eta ) + \sum _{\eta \in \mathcal {W}^-}\hat{p}(\eta )\) as a difference involving \(\hat{q}\) to which we can apply the Fomin argument.

We first claim that

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^+}\hat{p}(\eta )=\varGamma _A\sum _{\eta \in \mathcal {W}^+} \hat{q}(\eta ), \quad \text{ where } \varGamma _A = \exp \{2m[ \mathcal {J}_A ] \}. \end{aligned}$$
(3.4)

To see this, recall that \(\hat{q}(\eta ) = q(\eta ) F^q(\eta ; A)\). We already noted that \(q(\eta ) = p(\eta )\) for \(\eta \in \mathcal {W}^+\), so Lemma 3.1 gives (3.4). Using that \(p(\eta ) = -q(\eta )\) for \(\eta \in \mathcal {W}^-\), a similar argument shows that

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^-}\hat{p}(\eta )=-\varGamma _A\sum _{\eta \in \mathcal {W}^-} \hat{q}(\eta ). \end{aligned}$$
(3.5)

Hence, adding (3.4) and (3.5) gives

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^+}\hat{p}(\eta )+\sum _{\eta \in \mathcal {W}^-}\hat{p}(\eta )=\varGamma _A\left( \sum _{\eta \in \mathcal {W}^+}\hat{q}(\eta )-\sum _{\eta \in \mathcal {W}^-} \hat{q}(\eta ) \right) . \end{aligned}$$

We can now argue exactly as in the proof of (3.3), with p replaced by q throughout; the argument is unchanged, and in this way we get

$$\begin{aligned} \sum _{\eta \in \mathcal {W}^+}\hat{p}(\eta )+\sum _{\eta \in \mathcal {W}^-}\hat{p}(\eta )&=q(\lambda )\varGamma _A\, F^q(\lambda ;A)\, \varDelta ^q(x_0 \rightarrow a, x_k \rightarrow b)\nonumber \\&= p(\lambda )\, \varGamma _A \, F^q(\lambda ;A)\, \left| \varDelta ^q(x_0 \rightarrow a, x_k \rightarrow b) \right| , \end{aligned}$$
(3.6)

where the last step uses that the left-hand side of (3.6) is positive and that \(|q(\lambda )| = p(\lambda )\). The theorem follows by adding and subtracting (3.3) and (3.6). \(\square \)

4 Comparison of loop measures: proof of Theorem 1.4

In this section we prove the main estimate on the random walk loop measure by comparing it with the corresponding quantity for the Brownian loop measure.

We recall some notation. Given \(A \in \mathcal {A}\), \(\mathcal {J}_A\) is the set of unrooted random walk loops in A with odd winding number about \(w_0\). Given a simply connected domain \(D \ni 0\) we write \(\tilde{{\mathcal {J}}}_D\) for the set of unrooted Brownian loops with odd winding number about 0. Let \(\psi _D : {\mathbb {D}}\rightarrow D\) be the conformal map with \(\psi _D(0)=0\) and \(\psi '_D(0) > 0\); then \(r_D := \psi '_D(0)\) is the conformal radius of D from 0. Given a lattice domain \(A \in \mathcal {A}\) with corresponding \(D=D_A\) we define \(r_A=r_{D_{A}}\). For \(R > 0\), set

$$\begin{aligned} {\mathcal {B}}(R) = \{z \in {\mathbb {C}}: |z| < R \}, \quad B(R) = \{z \in {\mathbb {Z}}^2 : |z| < R\}. \end{aligned}$$

Theorem 4.1

There exist \(0<u, c_0,c < \infty \) such that the following holds. Let \(A \in \mathcal {A}\) and let \(D_A\) be the associated simply connected domain. Then

$$\begin{aligned} \left| m \left( {\mathcal {J}}_A\right) - \frac{1}{8}\log r_A - c_0 \right| \leqslant c \, r_A^{-u} . \end{aligned}$$

Our proof does not determine the value of \(c_0\). Before giving the proof we need a few lemmas.

Lemma 4.1

There exists \( c < \infty \) such that if D is a simply connected domain containing the origin with conformal radius \(r \geqslant 5\), then

$$\begin{aligned} \left| \mu \left( \tilde{{\mathcal {J}}}_{D} {\setminus } \tilde{{\mathcal {J}}}_{{\mathbb {D}}} \right) - \frac{\log r}{8} \right| \leqslant c \, r^{-1}. \end{aligned}$$

Proof

For \(t > 1\), let \(\phi (t) = \mu \left( \tilde{{\mathcal {J}}}_{t{\mathbb {D}}} {\setminus } \tilde{{\mathcal {J}}}_{{\mathbb {D}}} \right) \). Note that if \(1 < t < s\), then conformal invariance of \(\mu \) implies that \( \phi (s) = \phi (t) + \phi (s/t)\), that is, \(\phi (t) = \alpha \log t\) for some \(\alpha \). The constant \(\alpha \) can be computed; see Proposition 4.1 of [16]:

$$\begin{aligned} \mu \left( \tilde{{\mathcal {J}}}_{t{\mathbb {D}}} {\setminus } \tilde{{\mathcal {J}}}_{{\mathbb {D}}}\right) = \frac{1}{8} \log t. \end{aligned}$$
(4.1)

Distortion estimates imply that there is a universal c such that

$$\begin{aligned} \psi _D\left( {\mathcal {B}}(r^{-1}-cr^{-2})\right) \subset {\mathbb {D}}\subset \psi _D\left( {\mathcal {B}}(r^{-1} +cr^{-2} )\right) . \end{aligned}$$

Hence, by conformal invariance and (4.1),

$$\begin{aligned} \mu \left( \tilde{{\mathcal {J}}}_{D} {\setminus } \tilde{{\mathcal {J}}}_{{\mathbb {D}}} \right) = \frac{1}{8} \, \log \left[ r \pm c \right] = \frac{1}{8}\log r + O(r^{-1}). \end{aligned}$$

\(\square \)

Given Lemma 4.1 and the fact that the conformal radii of \(D_A\) with respect to 0 and with respect to \(w_0\) agree up to a multiplicative error of magnitude \(1 + O(r_A^{-1})\), we see that to prove Theorem 4.1 it suffices to prove that there exists \(c_0\) such that

$$\begin{aligned} m({\mathcal {J}}_A) = \mu \left( \tilde{{\mathcal {J}}}_{D} {\setminus } \tilde{{\mathcal {J}}}_{{\mathbb {D}}} \right) + c_0 + O(r_A^{-u}) , \quad D=D_A. \end{aligned}$$

If we let k be the largest integer such that \(e^{k+1} {\mathbb {D}}\subset D\), then we can write

$$\begin{aligned}&m({\mathcal {J}}_A) - \mu \left( \tilde{{\mathcal {J}}}_{D} {\setminus } \tilde{{\mathcal {J}}}_{{\mathbb {D}}} \right) \\&\quad =\left[ m({\mathcal {J}}_A {\setminus } {\mathcal {J}}_{B^k}) - \mu (\tilde{{\mathcal {J}}}_D {\setminus } \tilde{{\mathcal {J}}}_{{\mathcal {B}}^k}) \right] + \sum _{j=1}^k\left[ m({\mathcal {J}}_{B^{j}} {\setminus } {\mathcal {J}}_{B^{j-1}}) - \mu (\tilde{{\mathcal {J}}}_{{\mathcal {B}}^{j}} {\setminus } \tilde{{\mathcal {J}}}_{{\mathcal {B}}^{j-1}} )\right] , \end{aligned}$$

where \(B^j = B(e^{j}), {\mathcal {B}}^j = {\mathcal {B}}(e^{j}).\) (There are no random walk loops of odd winding number which stay in \({\mathbb {D}}\).) Write \(r=r_D\). The Koebe 1/4 theorem implies that \(({\mathbb {C}}{\setminus } D) \cap \{|z| = r\}\) is nonempty. Note that this implies that any loop in D (either random walk or Brownian motion) with odd winding number must intersect \(r{\mathbb {D}}\).

The theorem then follows from the following estimate. The phrasing of the lemma is rather technical, but the basic idea is that the measure of the set of random walk loops and the measure of the set of Brownian loops with odd winding number that lie in D and are not contained in a smaller disk \(\delta r {\mathbb {D}}\) are almost the same.

We use the coupling of random walk and Brownian loops and note that a pair of coupled loops will have these properties unless one of the following possibilities occurs, each of which will be shown to have small measure.

  • The Brownian loop and the random walk loop are not very close. The loops are coupled in such a way that this happens with small measure.

  • One of the loops (in a coupled pair) is contained in D but the other is not. If the loops are close this would require the loop that is inside D to be close to the boundary without intersecting it. The measure of such loops can be controlled using Beurling-type estimates.

  • Similarly, one loop can intersect \(r {\mathbb {D}}\) or be contained in \(\delta r {\mathbb {D}}\) while the other is not. Again, this requires one of the loops to be near a circle without intersecting the circle.

  • The final “bad” possibility is that the loops are close but so close to the origin that their winding numbers can differ. The measure of loops that come close to the origin is sufficiently large that we cannot simply ignore this term. However, if a loop (random walk or Brownian motion) gets close to the origin, it is almost equally likely to have an odd as an even winding number. This allows us to show that the random walk and Brownian loop measures of such loops are nearly the same.

Lemma 4.2

There exist \( u > 0, c < \infty \) such that the following holds for all \(\delta \geqslant 1/10\). Suppose D is a simply connected domain containing the origin and let \(r = r_D\). Let \(\mu \) denote the Brownian loop measure and m the random walk loop measure.

  • Let \(I(\ell )\) (resp., \(\tilde{I}(\omega )\)) be the indicator function of the event that a random walk loop \(\ell \) (resp., a Brownian loop \(\omega \)) is a subset of D, intersects \(r {\mathbb {D}}\), and is not a subset of \(\delta r {\mathbb {D}}\).

  • Let \(U(\ell )\) (resp., \(\tilde{U}(\omega )\)) denote the indicator function that the winding number of \(\ell \) (resp., \(\omega \)) about \(w_0\) (resp., 0) is odd.

Then,

$$\begin{aligned} \left| \mu [\tilde{I} (\omega ) \, \tilde{U}(\omega )]- m[I(\ell ) \, U(\ell ) ] \right| \leqslant c \, r^{-u} . \end{aligned}$$

Here we are writing \(\mu [\cdot ]\) for the integral with respect to \(\mu \) and similarly for \(m[\cdot ]\) and \(\nu [\cdot ]\) in the proof below.
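Since the definitions of U and \(\tilde{U}\) turn on winding-number parity, we record how the winding number of a lattice loop about a point can be computed in practice; this is our illustrative sketch (the complex-number encoding of lattice points is our convention, not the paper's):

```python
import math
import cmath

def winding_number(loop, z0):
    """Winding number of a closed path about z0.

    `loop` is a list of complex points with loop[0] == loop[-1], none equal
    to z0, and consecutive points subtending an angle < pi at z0 (true for
    nearest-neighbor lattice steps when z0 is not on the path).  Summing
    the turning angles seen from z0 gives 2*pi times the winding number.
    """
    total = 0.0
    for a, b in zip(loop, loop[1:]):
        total += cmath.phase((b - z0) / (a - z0))
    return round(total / (2 * math.pi))
```

The indicator \(U(\ell )\) of the lemma then corresponds to `winding_number(loop, w0) % 2 == 1`.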

Fig. 1

Proof of Lemma 4.2. Left Proof of Claim 1. If \(\ell \) is a good loop starting from \(z_j\), then it must first exit the disk of radius \(2|z_j|\) without hitting \(V_{j-1}\) (represented by the smallest disk), get to \(\partial r \mathbb {D}\), exit \(r^{1+u} \mathbb {D}\), then get back to \(r \mathbb {D}\), all without exiting D, and finally return to \(z_j\) avoiding \(V_{j-1}\). Right Proof of Claim 2. A random walk and Brownian loop are coupled and close, inside \(r^{1+u} \mathbb {D}\), but \(|I-\tilde{I}| > 0\). One of three ways the latter can happen is that exactly one of the loops exits D. In that case the other loop must get near \(\partial D\) without exiting D

Proof

We will be doing detailed estimates for the random walk loop measure; the Brownian loop measure estimates are done similarly. We will, however, prove one estimate for the Brownian loop measure in order to explicitly illustrate this point. The proof of this lemma will complete the proof of Theorem 4.1.

It will be useful to fix an enumeration \(\{z_0,z_1,\ldots \}\) of \({\mathbb {Z}}^2\) such that \(|z_j|\) is nondecreasing and we let \(V_j = \{z_0,\ldots ,z_j\}\). In particular, \(z_0 = 0\). We will first consider loops that do not lie in \(r^{1+u} {\mathbb {D}}\) for some \(u > 0\). As already noted, any loop in D with odd winding number must intersect \(r {\mathbb {D}}\).

  • Claim 1 The random walk and Brownian loop measures of the set of loops that intersect both \(r{\mathbb {D}}\) and the circle of radius \(r^{1+u}\) are \(O(r^{-u})\).

We will prove this for the random walk measure; the Brownian loop estimate is done similarly (Fig. 1). Let L denote the set of random walk loops in D that intersect both \(r{\mathbb {D}}\) and the circle of radius \(r^{1+u}\). Then

$$\begin{aligned} L = \bigcup _{|z_j| < r} L_j^* , \end{aligned}$$

where \(L_j^*\) denotes the set of loops \([\ell ] \in L\) such that \(z_j \in \ell \) and \(\ell \cap V_{j-1} = \emptyset \). For any \([\ell ] \in L_j^*\) we call a rooted representative

$$\begin{aligned} \ell = [\ell _0,\ell _1,\ldots ,\ell _{2n} ] \end{aligned}$$

good if \(\ell _0 = z_j\) and if we define k to be the first index with \(|\ell _k| \geqslant r^{1+u}\), then \(\ell _s \ne z_j, 1 \leqslant s \leqslant k\). Let \(L_j^i\) denote the set of unrooted loops that have i good representatives. Then \(L_j^1\) is a set of elementary loops as in the proof of Lemma 2.1. In particular,

$$\begin{aligned} m(L^*_j) = - \log \left[ 1 - p(L_j^1) \right] . \end{aligned}$$

For \(j = 0\), a good loop must start from 0, then reach the circle of radius r without returning to 0. This has probability \(O(1/\log r)\). Given this, the Beurling estimate implies that the probability of reaching the circle of radius \(r^{1+u}\) without leaving D is \(O(r^{-u/2})\). Given this, the probability to return to the disk of radius r without leaving D is \(O(r^{-u/2})\). Given this, the expected number of returns to the origin before leaving D is O(1). Combining all of these estimates, we see that \(p(L_0^1) = O(r^{-u}/\log r)\) and hence \(m(L^*_0) = O(r^{-u}/\log r).\)

Let us now consider elementary loops for \(j > 0\). Let \(x = |z_j|\). Using the gambler’s ruin estimate, the probability that the random walk reaches the circle of radius 2x without hitting \(V_{j-1}\) is O(1 / x). Given this, the probability that the walk reaches the circle of radius r without hitting \(V_{j-1}\) is \(O(1 \wedge [\log (r/x)]^{-1})\). Given this, as above, the probability to reach the circle of radius \(r^{1+u}\) and return to the circle of radius r without leaving D is \(O(r^{-u})\). Given this, the probability to reach within distance 2x of \(z_j\) is \(O(1 \wedge [\log (r/x)]^{-1})\). Given this, the probability that the next visit to \(V_j\) is at \(z_j\) is \(O(x^{-1})\). Given this, the expected number of visits to \(z_j\) before leaving \(D{\setminus } V_{j-1}\) is O(1). Combining all of these estimates, we see that

$$\begin{aligned} m(L_j^*) = p(L_j^1) \, \left[ 1 + O(p(L_j^1) ) \right] \leqslant \frac{c}{|z_j|^2\,r^u} \, \left[ 1 \wedge \frac{1}{\log ^2(r/|z_j|)} \right] . \end{aligned}$$
(4.2)

By summing (4.2) over \(0 < |z_j| \leqslant r\) and adding the \(j=0\) term, we get \(m(L) = O(r^{-u})\), which establishes Claim 1.
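A sketch of this summation: there are \(O(x)\) lattice points \(z_j\) with \(|z_j| \in [x,x+1)\), so (4.2) gives

$$\begin{aligned} \sum _{0< |z_j| \leqslant r} m(L_j^*) \leqslant \frac{c}{r^u} \sum _{x=1}^{r} \frac{1}{x} \left[ 1 \wedge \frac{1}{\log ^2(r/x)} \right] = O(r^{-u}), \end{aligned}$$

since the substitution \(x = re^{-t}\) turns the sum into \(\int _0^{\log r} (1 \wedge t^{-2})\, dt = O(1)\); the \(j=0\) term was bounded separately above.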

We will now use the coupling as described in Lemma 2.3. Let us write \((\omega ,\ell )\) for a Brownian motion/random walk loop pair. This coupling defines \((\omega ,\ell )\) on the same measure space \((M,\nu )\) such that

  • The marginal measure on \(\omega \) restricted to nontrivial loops is \(\mu \).

  • The marginal measure on \(\ell \) restricted to nontrivial loops is m.

  • Let E denote the set of \((\omega ,\ell )\) such that at least one of the paths has diameter greater than \(r^{1-u}\) and is contained in the disk of radius \(r^{1+u}\) and such that \(\Vert \omega - \ell \Vert _\infty \geqslant c_0 \, \log r\). (Here by \(\Vert \cdot \Vert _\infty \) we mean the infimum of the supremum norm over all parametrizations.) Then \(\nu (E) \leqslant O(r^{-u})\).

Given Claim 1, we see that it suffices to show that \( |\nu ( \tilde{I} \, \tilde{U} - I \, U)|= O(r^{-u}).\) We will write E for \(1_E\). Let \(K (\ell ) \) (resp., \(\tilde{K}(\omega )\)) denote the indicator function that \({\text {dist}}(0, \ell ) \leqslant r^{1/2}\) (resp., \({\text {dist}}(0,\omega ) \leqslant r^{1/2}\)). Note that

$$\begin{aligned} UI - \tilde{U} \tilde{I} = U I K - \tilde{U} \tilde{I} \tilde{K} + \tilde{U} \tilde{I}\, (\tilde{K} - K) + (UI -\tilde{U}\, \tilde{I}) \, (1 - K). \end{aligned}$$

Note that if \(K = 0\) and \(\Vert \omega - \ell \Vert _\infty \leqslant c_0 \, \log r\), then (for r sufficiently large) \(U = \tilde{U} \). Therefore,

$$\begin{aligned} |\nu (\tilde{I} \, \tilde{U} - I \, U)|\leqslant & {} |\nu (\tilde{I} \tilde{U} \tilde{K} ) - \nu ( I UK) | + \nu [ \tilde{I} \, \tilde{U} \, |\tilde{K} - K| ] \\&+ \nu [ |\tilde{I} - I | \, (\tilde{U} + U) ] + \nu (E). \end{aligned}$$

Therefore it suffices to establish

  • Claim 2

    $$\begin{aligned} \nu [ (\tilde{I} + I) \, |\tilde{K} - K| ] + \nu [ |\tilde{I} - I | ] \leqslant O(r^{-u}), \end{aligned}$$

and

  • Claim 3

    $$\begin{aligned} |\nu (\tilde{I} \tilde{U} \tilde{K} ) - \nu ( I UK) | \leqslant O(r^{-u}). \end{aligned}$$

To prove Claim 2, we first note that if the loops are coupled, and \(IU(\ell ) \ne 0\), then \({\text {diam}}(\ell ) \geqslant \delta r\) and similarly for the Brownian motion loops (Fig. 1). Also, if \((\omega , \ell )\) are in the disk of radius \(r^{1+u}\) with \(E(\omega ,\ell ) =0\) and \(I \ne \tilde{I}\), then either the random walk loop or the Brownian loop does one of the following:

  • gets within distance \(c_0 \log r\) of \(\partial D\) without leaving D

  • gets within distance \(c_0 \log r \) of the circle of radius r without hitting the circle

  • gets within distance \(c_0 \log r\) of the circle of radius \(\delta r\) without hitting the circle.

We can estimate the measure of loops that satisfy this, as well as \({\text {diam}}\geqslant \delta r\), by using the rooted loop measure. We will do the random walk case for the first bullet; the Brownian motion case and the other two bullets are done similarly. Let \(\epsilon \) be any positive number. Note that the root must be in the disk of radius \(r^{1+u}\) and hence there are \(O(r^{2(1+u)})\) choices for the root. Using standard large deviations estimates, except for a set of loops of measure \(o(r^{-5})\), these loops must have time duration at least \(r^{2-\epsilon }\). We consider the probability that a random walk returns to the origin at time 2n after getting within distance \(O(\log r)\) of \(\partial D\) but not leaving D. We claim that this is \(O(\log r/n^{5/4})\). To see this, first note that by considering the reversal of the walk, we can bound this above by the probability that the random walk gets within distance \(O(\log r)\) of \(\partial D\) in the first n steps, stays in D, and then returns to the original point at time 2n. Given that the walk is within distance \(O(\log r)\) of \(\partial D\), the Beurling estimate implies that the probability of not leaving D in the next n / 2 steps is \(O(\log r/n^{1/4})\). Given this, the probability of being at the origin at time 2n is \(O(n^{-1})\). The rooted loop measure puts in an extra factor of \((2n)^{-1}\). Therefore the rooted loop measure of loops rooted at z of time duration \(2n \geqslant r^{2-\epsilon }\) that have diameter at least \(\delta r\), and get within distance \(c_0 \log r\) of \(\partial D\) but stay in D is \(O(\log r/n^{9/4})\). By summing over \(2n \geqslant r^{2-\epsilon } \), we see that the rooted loop measure of loops rooted at z that have diameter at least \(\delta r\), get within distance \(c_0 \log r\) of \(\partial D\) but stay in D is \(O(r^{-(2- 2 \epsilon )5/4})\). 
If we sum over \(|z| \leqslant r^{1+u}\), we get that the loop measure (rooted or unrooted) of such loops is \(O(r^{2u + \frac{5\epsilon }{2} - \frac{1}{2} }) \leqslant O(r^{-1/4})\) for u and \(\epsilon \) sufficiently small. This gives the upper bound on \(\nu [ |\tilde{I} - I | ] \).
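In more detail, the arithmetic behind these exponents is

$$\begin{aligned} \sum _{2n \geqslant r^{2-\epsilon }} \frac{c \log r}{n^{9/4}} \leqslant c' (\log r)\, r^{-(2-\epsilon )5/4} \leqslant c''\, r^{-(2-2\epsilon )5/4}, \end{aligned}$$

absorbing the logarithm into \(r^{5\epsilon /4}\); summing over the \(O(r^{2(1+u)})\) possible roots then gives the exponent \(2(1+u) - (2-2\epsilon )\tfrac{5}{4} = 2u + \tfrac{5\epsilon }{2} - \tfrac{1}{2}\).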

Similarly, if \(E(\omega ,\ell ) =0\), \(K(\ell ) \ne \tilde{K}(\omega )\), and \(I(\ell ) +\tilde{I}(\omega ) \geqslant 1\), then

$$\begin{aligned} r^{1/2} - c_0 \, \log r \leqslant {\text {dist}}(0,\ell ) \leqslant r^{1/2} + c_0 \, \log r, \end{aligned}$$

and for r sufficiently large,

$$\begin{aligned} {\text {diam}}(\ell ) \geqslant \frac{\delta r}{3}. \end{aligned}$$

Therefore \(\nu [(I +\tilde{I}) |K-\tilde{K}| \, (1-E)]\) is bounded above by twice the m measure of the set of loops \(\ell \) that satisfy these conditions. This can be estimated as in the previous paragraph (or using an unrooted loop measure estimate as in the beginning of this proof); we omit the details since we have already written out analogous estimates. This finishes the proof of Claim 2.

For the final claim, first note that

$$\begin{aligned} |\nu (\tilde{I} \tilde{K} ) - \nu ( I K) | \leqslant \nu [ (\tilde{I} + I) \, |\tilde{K} - K| ] + \nu [ |\tilde{I} - I | ] , \end{aligned}$$

and hence by Claim 2,

$$\begin{aligned} |\nu (\tilde{I} \tilde{K} ) - \nu ( I K) |= O(r^{-u}). \end{aligned}$$

Therefore to prove Claim 3 it suffices to show that

$$\begin{aligned} |\nu (\tilde{I} \tilde{U} \tilde{K} ) - \nu ( I UK) | \leqslant \frac{1}{2} \, |\nu (\tilde{I} \tilde{K} ) - \nu ( I K) | + O(r^{-u}). \end{aligned}$$

We will prove that the Brownian loop measures of loops of odd and of even winding number, respectively, that intersect \(r^{1/2} \mathbb {D}\) and \(\delta r {\mathbb {T}}\) (we write \({\mathbb {T}}= \partial {\mathbb {D}}\)) are the same up to a small error (Fig. 2). (This estimate for Brownian loops can be done by explicit computation, but we give an argument that also works for random walk.) Recalling (2.11) and (2.9), we see that it will be enough to prove this for the Brownian bubble measure of bubbles in \(\varDelta _s=\{|z| > s\}\) that are attached at the point s, for \(0 < s \leqslant r^{1/2}\), and intersect \( \delta r{\mathbb {T}}\); it will be enough to do the argument for \(s=r^{1/2}\). Choose \(\zeta \in \delta r {\mathbb {T}}\) arbitrarily. Consider a Brownian bubble in \(\varDelta _{r^{1/2}}\) attached at \(r^{1/2}\) that intersects \(\delta r {\mathbb {T}}\) for the first time at \(\zeta \). The initial part of the bubble, the part which connects \(r^{1/2}\) with \(\zeta \), has the distribution of a Brownian excursion between these points in the annulus \(\delta r{\mathbb {D}}{\setminus } r^{1/2} {\mathbb {D}}\). We will show that this path is about as likely to have odd as even winding number. For \(k=1,2,\ldots ,\lfloor \log r^{1/2} -2 \rfloor \), let \(A_k\) be the annulus with boundary components \(\delta r 2^{-(k+1)} {\mathbb {T}}\) and \(\delta r 2^{-k} {\mathbb {T}}\). Note that the probability that an excursion as above separates the two boundary components of \(A_k\) between its first hitting times of the boundary components is uniformly bounded away from 0, independently of k and r. (This follows from a harmonic measure estimate for Brownian motion and for the excursion by comparing Poisson kernels.) 
This means the excursion has positive probability (independent of r and k) to change winding number parity when crossing each annulus \(A_k\) and that the hitting distributions of the outer boundary of \(A_k\) for excursions of odd and even winding number (up to that hitting time) are uniformly comparable. We may therefore couple two excursions from \(r^{1/2}\) in such a way that with probability \(1-O(r^{-u})\) they have different winding number parity when arriving at \(\zeta \), e.g., as follows: first couple the two sequences of annulus hitting points (in decreasing k order). This can be done so that the sequences eventually agree with large probability. (See [24, Section 1.5].) Then, given the hitting points, sample the subpaths connecting these hitting points. The paths couple (and run together) after the point at which the hitting points agree and the winding number parities (at the hitting time) are different. Since the constants are independent of r and k, the excursions couple with probability \(c>0\) (independent of r and k) in each of the annuli, and so the paths couple with probability \(1-O((1-c)^{\log r^{1/2}}) = 1-O(r^{-u})\) for some \(u=u(c) > 0\). This shows that the probability of the loop attached at \(r^{1/2}\) having odd winding number equals the probability of the loop having even winding number up to an error of \(O(r^{-u})\). The analogous argument works for the random walk loops, and so we have established Claim 3, which completes the proof of the lemma as well as of Theorem 4.1. \(\square \)

Fig. 2

Proof of Claim 3 of Lemma 4.2. We consider loops that get close to 0, that is, enter \(r^{1/2} \mathbb {D}\) and that are not contained in \(\delta r \mathbb {D}\). Starting from inside \(r^{1/2} \mathbb {D}\), the first part of such a loop (which is the only part shown in the diagram) can be written as an h-process aiming at \(\zeta \in \delta r \partial \mathbb {D}\). In each dyadic annulus the probability to change parity of the winding number is uniformly bounded from below. Hence the exit distributions (at the annulus boundary component of larger radius) of conditioned random walks/Brownian h-processes of odd and even winding number are uniformly comparable. This implies that the paths of odd/even winding number can be coupled with positive probability in each annulus

5 Asymptotics of \(\varLambda \): proof of Theorem 1.5

In this section we study the asymptotics of \( \varLambda _A(z,a) = {R_A(z,a)}/{H_A(z,a)} \) as \(r_A \rightarrow \infty \). We recall that \(R_A(z,a) = \mathbf {E}^z[Q(S[0,\tau ]) I_a]\), where \(S=S^z\) is simple random walk from z and \(I_a\) is as defined in (1.4), and that \(H_A(z,a) = \mathbf {P}^z\left( S_{\tau } = a \right) \) is discrete harmonic measure. Consequently we have

$$\begin{aligned} \varLambda _A(z,a) =\frac{R_A(z,a)}{H_A(z,a)} = \mathbf {E}^{z,a}[Q(S[0,\tau ]) I], \quad z \in A, \end{aligned}$$
(5.1)

where \(I=1\{S[1,\tau ] \cap \{0,1\} = \emptyset \}\) and \(\mathbf {E}^{z,a}\) denotes expectation with respect to the measure under which S is a simple random walk from z conditioned to exit A at a.

5.1 Continuum functions

Given (5.1) and the fact that simple random walk converges to Brownian motion, we would expect any scaling limit of \(\varLambda _A(0,a)\) to be the corresponding quantity with random walk replaced by an h-transformed Brownian motion conditioned to exit D at \(a \in \partial D\). Here we describe the quantity in the continuum (Fig. 3).

Fig. 3

The construction of the continuum observable \(\lambda (z,a)\) in \(\mathbb {D}\). Left We consider a Brownian h-process W in \(\mathbb {D}\) from z conditioned to exit \(\mathbb {D}\) at a. Right W is lifted to a continuous process \(\hat{W}\) in \(\mathbb {H}\) by the multivalued function \(F(w)=-i \log w\) starting from \(\hat{z}\), the point in F(z) with real part in \([0,2\pi )\). The random variable Q equals \(+1\) if \(\hat{W}\) exits \(\mathbb {H}\) in \(\{\hat{a} + 4 k \pi , \, k \in \mathbb {Z}\}\) and \(-1\) otherwise and \(\lambda \) is the expected value of Q with respect to the law of W. By symmetry \(\lambda (z,a)=0\) for \(z \in \alpha \), the antipodal line relative to a. For paths avoiding \(\alpha \), the value of Q depends only on whether \(\alpha \cup \beta \) separates z from a in \(\mathbb {D}\)

We will do the construction in the unit disk \({\mathbb {D}}\); we can use conformal invariance to define the functions in other simply connected domains. Let \(\beta \) denote the line segment \([0,1) \subset {\mathbb {D}}\). Let \(a = e^{2i\theta _a}, 0 < \theta _a < \pi \), and let \(\alpha =\alpha (a)=\{w: w=re^{i(2\theta _a + \pi )}, \, r \in (0,1) \}\) be the antipodal radius relative to a. Let \(\mathbf {P}^{z,a}\) be a probability measure under which the process \(W_t, \, t \in [0, T],\) is a Brownian h-process in \({\mathbb {D}}\) started from z conditioned to exit \({\mathbb {D}}\) at a. Here T is the hitting time of \(\partial {\mathbb {D}}\). For each realization of the process W we, roughly speaking, let Q equal \(\pm 1\) depending on whether the path W[0, T] intersects \(\beta \) an even or an odd number of times. This is a bit imprecise since there may be infinitely many intersections. One way to make it precise is by lifting by the multivalued logarithm \(F(z) = -i\log z\). The image of \(\beta \), \(F(\beta )\), is the union of the \(2\pi \)-translates of the positive imaginary axis. If we choose a particular image of z, say \(\hat{z}\), then there is a corresponding image \(\hat{a}\) of a such that \(\hat{a}\) is on the boundary of the connected component of \({\mathbb {H}}{\setminus } F(\beta )\) that contains \(\hat{z}\). Once \(\hat{z}\) is given, the h-process is mapped to an h-process \(\hat{W}_t\) in \({\mathbb {H}}\) conditioned to leave \({\mathbb {H}}\) at \(F(a) = \{\hat{a} + 2\pi k: k \in {\mathbb {Z}}\}\). Then \(Q=+1\) if \(\hat{W}\) exits \({\mathbb {H}}\) at \(\{\hat{a} + 4\pi k: k \in {\mathbb {Z}}\}\), and \(Q=-1\) if \(\hat{W}\) exits \({\mathbb {H}}\) at \(\{\hat{a} + 4\pi (k+1/2): k \in {\mathbb {Z}}\}\). For \(z\in {\mathbb {D}}, a\in \partial {\mathbb {D}}\), let

$$\begin{aligned} \lambda (z,a ) = \lambda _{{\mathbb {D}}}(z,a) = \mathbf{E}^{z,a}[Q ]. \end{aligned}$$
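Concretely, for a discretized realization of W the sign Q can be evaluated by tracking the continuous argument along the lift; the following sketch (the function name and the step-size assumption are ours) implements the description above for the lift started with \({\text {Re}}\,\hat{z} \in [0,2\pi )\), so that \(\hat{a} = 2\theta _a\):

```python
import math
import cmath

def parity_Q(path, theta_a):
    """Evaluate Q for a discretized path in the punctured disk.

    `path` is a list of nonzero complex points from z to a = exp(2j*theta_a),
    with 0 < theta_a < pi, and consecutive points subtending an angle < pi
    at the origin.  We lift by F(w) = -i*log(w): the real part of the lift
    is the continuous argument, started in [0, 2*pi) so that a-hat = 2*theta_a.
    """
    theta = cmath.phase(path[0]) % (2 * math.pi)
    for w0, w1 in zip(path, path[1:]):
        theta += cmath.phase(w1 / w0)   # continuous argument increment
    m = round((theta - 2 * theta_a) / (2 * math.pi))
    return 1 if m % 2 == 0 else -1      # exits at a-hat + 4*pi*k  <=>  m even
```

A direct path from z to a picks up no extra winding (Q = +1), while inserting a loop around the origin flips the sign.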

For the simply connected domain \(D_A\), we define for \(z\in D_A\) and \(a\in \partial D_A\)

$$\begin{aligned} \lambda _A(z, a ) = \lambda (f_A(z), f_A(a)), \end{aligned}$$
(5.2)

where we recall \(f_A: D_A \rightarrow {\mathbb {D}}\) with \(f_A(w_0) = 0, f_A'(w_0) > 0\).

Remark 5.1

We can also equivalently consider a function \(\hat{\lambda }\) living on the Riemann surface given by the branched two-cover of \({\mathbb {D}}\) with branch cut \(\beta \). We lift the h-process in \({\mathbb {D}}\) so that it starts on the top sheet of the two-cover. The observable is the expectation of the random variable giving \(+1\) if the h-process reaches the boundary on the top sheet and \(-1\) if it reaches it on the bottom sheet. Then \(\hat{\lambda }\) only changes sign when evaluated at the two different points in the fiber of \(z \in {\mathbb {D}}\) and so could in physics language be called a “spinor”. See [5] for a similar construction in a related context.

Note that symmetry implies that \(\lambda (z,a ) = 0\) for \(z \in \alpha \). Let \(\tau = \inf \{t: W_t \in \alpha \}\). If \(\tau > T\), then the value of Q is determined: it is \(+1\) if z and a are in the same component of \({\mathbb {D}}{\setminus } (\alpha \cup \beta )\) and \(-1\) if they are in different components. Therefore,

$$\begin{aligned} |\lambda (z,a)| = | \mathbf{E}^{z,a}[Q; \tau > T]| = \mathbf {P}^{z,a}\left( W[0,T] \cap \alpha = \emptyset \right) = \frac{h_{{\mathbb {D}}{\setminus } \alpha }(z,a)}{h_{{\mathbb {D}}}(z,a)}, \end{aligned}$$
(5.3)

the last expression being a quotient of Poisson kernels.

Lemma 5.1

If \(0< \theta _a < \pi \), then as \(\epsilon \downarrow 0\),

$$\begin{aligned} \lambda (-\epsilon ,a) = 2 \, \epsilon ^{1/2} \, \sin \theta _a + O(\epsilon ). \end{aligned}$$

Proof

The map

$$\begin{aligned} \phi (z) = \frac{2 \sqrt{z}}{z+1} , \quad \phi '(z) = \frac{1 - z}{\sqrt{z} \, (z+1)^2} \end{aligned}$$

is a conformal transformation of \({\mathbb {D}}{\setminus } [0,1)\) onto the upper half plane \({\mathbb H}\) with

$$\begin{aligned} \phi (e^{2i\theta }) = \frac{1}{\cos \theta }, \quad | \phi '(e^{2i\theta })| = \frac{\sin \theta }{2\cos ^2 \theta } , \quad \phi (\epsilon e^{2i\mu }) = 2 \, \epsilon ^{1/2} \, e^{i\mu } \, [1 + O(\epsilon )]. \end{aligned}$$
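These identities can be checked numerically; the following snippet (our sketch, using the principal branch of the square root on the slit disk) verifies all three displayed formulas:

```python
import math
import cmath

# Numerical sanity check of the identities for phi(z) = 2*sqrt(z)/(z+1)
# used in the proof of Lemma 5.1; principal branch of the square root.
def phi(z):
    return 2 * cmath.sqrt(z) / (z + 1)

def dphi(z):
    return (1 - z) / (cmath.sqrt(z) * (z + 1) ** 2)

theta = 0.7                      # any 0 < theta < pi/2 for this check
b = cmath.exp(2j * theta)        # boundary point e^{2 i theta}
assert abs(phi(b) - 1 / math.cos(theta)) < 1e-12
assert abs(abs(dphi(b)) - math.sin(theta) / (2 * math.cos(theta) ** 2)) < 1e-12

# phi(eps * e^{2 i mu}) = 2 sqrt(eps) e^{i mu} (1 + O(eps)):
eps, mu = 1e-6, 0.3
w = phi(eps * cmath.exp(2j * mu))
assert abs(w - 2 * math.sqrt(eps) * cmath.exp(1j * mu)) < 10 * eps
```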

The scaling rule for the Poisson kernel implies that

$$\begin{aligned} \pi \, h_{{\mathbb {D}}{\setminus } [0,1)}(z, e^{i2\theta })&= \pi \, | \phi '(e^{i 2 \theta })| \, h_{\mathbb H}( \phi (z), \phi ( e^{i2\theta })) \\&= \frac{ | \phi '(e^{i 2 \theta })| \, {\text {Im}}[ \phi (z)]}{ [ \phi ( e^{i2\theta }) - {\text {Re}}\phi (z)]^2 + [{\text {Im}}\phi (z)]^2 } . \end{aligned}$$

In particular,

$$\begin{aligned} 2\pi \, h_{{\mathbb {D}}{\setminus } [0,1)}(\epsilon e^{i 2\mu }, e^{i2\theta }) = { 2\epsilon ^{1/2} \,\sin \mu \sin \theta } + O(\epsilon ). \end{aligned}$$

Therefore, by rotational symmetry, with \(a=e^{2i\theta _a}\),

$$\begin{aligned} 2\pi h_{ {\mathbb {D}}{\setminus } \alpha }(-\epsilon ,a) = 2\pi h_{{\mathbb {D}}{\setminus } [0,1)}(\epsilon e^{i2\theta _a},-1)=2\epsilon ^{1/2}\sin \theta _a + O\left( \epsilon \right) . \end{aligned}$$

Using (5.3) together with the facts that \(\lambda (-\epsilon ,a)>0\) for any \(a\in \partial {\mathbb {D}}\) and that \(2\pi h_{{\mathbb {D}}}(-\epsilon ,e^{i2\theta _a})=1+O\left( \epsilon \right) \) concludes the proof. \(\square \)

5.2 Asymptotics of \(\varLambda _A(0,a)\)

Our construction will be somewhat neater if we start with the observation in the next lemma.

Lemma 5.2

There exists \(\epsilon > 0\) such that the following holds. Suppose D is a simply connected domain containing the origin and \(f: {\mathbb {D}}\rightarrow D\) is a conformal transformation with \(f(0) = 0\). Let \(S = \{ x+ iy: |x|,|y| < |f'(0)| \epsilon \}\) be a square centered at the origin and for \(0\leqslant t < 1\), let \(\gamma (t) = f(t)\). Let \(\sigma = \inf \{t: \gamma (t) \in \partial S\}\). Then \(\gamma [\sigma ,1) \cap S = \emptyset \).

Proof

This can be proved using standard distortion estimates for conformal maps. \(\square \)

We now set some notation.

  • Let \(U_n =\{(x,y)\in {\mathbb {Z}}^2:|x|<n, |y|<n\}\) denote the discrete square centered at 0 of side length 2n and \( U_n^- = U_n {\setminus } \{0, \ldots , n\}\) be the slit discrete square.

  • Let \(V = V_n, V^- = V_n^-\) be the corresponding continuum domains

    $$\begin{aligned} V_n = \{x+iy: |x|, |y| < n \},\quad V_n^- = V_n {\setminus } (0,n]. \end{aligned}$$
  • Let \(\epsilon \) be a fixed positive number as in Lemma 5.2. For every A, let \(n_A = \lfloor \epsilon r_A \rfloor = \epsilon \, r_A + O(1)\) and let \(U_A, U_A^-,V_A,V_A^-\) denote \(U_{n_A}, U_{n_A}^-,V_{n_A},V_{n_A}^-\), respectively.

  • Let \(\beta ^* =(0,w_0] \cup f_A^{-1}(\beta ) \). The curve \(\beta ^*\) goes from 0 to \(f^{-1}_A(1) \in \partial D_A\). By Lemma 5.2 we can see that after the curve \(\beta ^*\) hits \(\partial V_A\) it does not return to \(V_A\). We will also write \(\beta ^*\) for the arc \(\beta ^* \cap \overline{V_A}\).

  • For \(z,w \in \overline{V^-_A} {\setminus } \beta ^* \) we define \(Q^{z,w} = -1\) if \(\beta ^* \) separates z from w in \(V^-_A\) and \(+1\) otherwise.

  • We use \(S_j\) and \(B_t\), respectively, to denote random walk and Brownian motion, as well as the h-processes defined from them. The probability measure will always make it clear whether we are dealing with the unconditioned or the conditioned process. We use \(\mathbf{P}^{z,a}, \mathbf{E}^{z,a}\) to denote probabilities and expectations with respect to an h-process conditioned to leave the domain at a. We will use this notation for both \(S_j\) and \(B_t\). This should not cause confusion. All h-processes in this paper will be those given by boundary points, that is, where the harmonic function is the Poisson kernel. We recall that

    $$\begin{aligned} \mathbf{P}^{z,a} \{ (S _0, \ldots S _k)=(\omega _0, \ldots ,\omega _k) \} = \frac{H_A (\omega _k,a)}{H_A (z,a)}\, \mathbf{P}^{z} \{ (S _0, \ldots S _k)=(\omega _0, \ldots ,\omega _k) \} , \end{aligned}$$
    (5.4)

    with a similar formula for the Brownian h-process.
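The h-transform in (5.4) can be illustrated by a short computation. The following sketch (the small square domain and the marked boundary vertex are arbitrary demo choices, not objects from the paper) solves the discrete Dirichlet problem for \(H\) and checks that the h-process transition probabilities \(\frac{1}{4}H(w,a)/H(z,a)\) sum to 1 at an interior vertex, which is exactly the discrete harmonicity of \(H\):

```python
# A minimal numerical illustration of (5.4) (not from the paper): on a small
# discrete square, H(z, a) is the probability that simple random walk from z
# exits at a marked boundary vertex a, and the h-process steps from z to a
# neighbor w with probability (1/4) H(w, a) / H(z, a).
N = 8
a = (0, N // 2)  # marked boundary vertex (arbitrary demo choice)

def boundary(z):
    x, y = z
    return x in (0, N) or y in (0, N)

def neighbors(z):
    x, y = z
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Discrete Dirichlet problem: H = 1 at a, H = 0 on the rest of the boundary,
# H discrete harmonic inside; solved by Jacobi iteration.
H = {(x, y): 0.0 for x in range(N + 1) for y in range(N + 1)}
H[a] = 1.0
for _ in range(2000):
    H = {z: (H[z] if boundary(z) else
             0.25 * sum(H[w] for w in neighbors(z)))
         for z in H}

# Harmonicity of H is exactly what makes the h-process transition
# probabilities sum to 1 at every interior vertex.
z = (N // 2, N // 2)
row_sum = sum(0.25 * H[w] / H[z] for w in neighbors(z))
assert abs(row_sum - 1.0) < 1e-12
```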

Lemma 5.3

We have

$$\begin{aligned} \varLambda _A(0,a)&=\sum _{w\in \partial {U}_A} H_{\partial U^-_A}(0,w) \, \frac{H_A(w,a)}{H_A(0,a)}\, Q^{0,w}\varLambda _A(w,a). \end{aligned}$$
Fig. 4

Proofs of Lemma 5.3 and Lemma 5.4. Left Paths that hit the horizontal slit before the boundary of the square can be reflected and coupled. Since these paths have winding number of different parity, the total contribution to \(\varLambda _A\) of these paths is 0. Right Only paths reaching \(\partial V\) before the horizontal slit contribute to \(\varLambda _A\), which can then be written as a sum/integral over the first visited point w of the boundary of the square. The asymptotics of \(\varLambda _A(0,a)\) can therefore be found by separately considering the probability of a random walk from 0 reaching \(\partial V\) before hitting the horizontal slit (Theorem 5.1), and by using strong approximation to compare the discrete with the continuum Poisson kernels and \(\varLambda \) with \(\lambda \) on \(\partial V\) (Theorem 2.1 and Proposition 5.1, respectively)

Proof

We write \(U, U^-\) for \(U_A, U_A^-\). Let \(\sigma = \min \{k \geqslant 1: S_k\in {\mathbb {N}}\cup \{0\}\}\) (Fig. 4). Since \(S_{\tau \wedge \sigma }\in \{0,1\}\) implies that \(I=0\), using (5.1) and the strong Markov property applied at the stopping time \(\rho =\inf \{k\geqslant 0:S_k\in \partial U^-\}\) gives

$$\begin{aligned} \varLambda _A(0,a)= & {} \mathbf {E}^{0,a}[Q(S[0,\rho ])\, I\, \varLambda _A(S_\rho ,a)] \nonumber \\= & {} \mathbf {E}^{0,a}[Q(S[0,\rho ]) \, \varLambda _A(S_{\rho },a); S_{\rho }\in \{2, \ldots , n\}] \nonumber \\&+ \mathbf {E}^{0,a}[Q(S[0,\rho ])\,\varLambda _A(S_{\rho },a)\,;\,S_{\rho }\in \partial {U}]. \end{aligned}$$
(5.5)

Suppose \(\omega \) is a nearest-neighbor path from 0 to \(w\in \{2, \ldots , n\}\) that otherwise stays in \({U}^-\), and let \(\bar{\omega }\) be the reflection of \(\omega \) about the real axis. Then \(\omega \oplus \bar{\omega }\) is a loop rooted at 0 that intersects \({\mathbb {N}}\) exactly once and is symmetric about the real axis, and hence has odd winding number about \(w_0\); this gives \(Q(\omega ) = - Q(\bar{\omega })\). Moreover, by (5.4), \(\omega \) and \(\bar{\omega }\) have the same probability. Since the (reflected) paths can be coupled after the first visit to \(\{2, \ldots , n\}\), we conclude that the first expectation in (5.5) vanishes.

If there exists a nearest neighbor path in \(\bar{{U}}\) from 0 to \(w\in \partial {U}\) that does not intersect \(\beta ^*\cup \{0, \ldots , n\}\) except at its starting and endpoint, then any path from 0 to w that avoids \(\{0, \ldots , n\}\) will intersect \(\beta ^*\) an even number of times, yielding a value of \(Q^{0,w}=1\). Similarly, for all other \(w\in \partial {U}\), \(Q^{0,w}=-1\). We then see that,

$$\begin{aligned} \varLambda _A(0,a)&= \mathbf {E}^{0,a}[Q(S[0,\rho ])\,\varLambda _A(S_{\rho }, a);S_{\rho }\in \partial {U}]\\&=\sum _{w\in \partial {U}} \mathbf {P}^{0,a}(S_\rho =w)\,Q^{0,w}\varLambda _A(w,a) \\&= \sum _{w\in \partial {U}} H_{\partial U^-}(0,w) \,\frac{H_A(w,a)}{H_A(0,a)}Q^{0,w}\,\varLambda _A(w,a). \end{aligned}$$

\(\square \)
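The reflection step in the proof above can be seen concretely on a toy example. The sketch below (the sample path and the reference point are demo choices, not the paper's \(w_0\)) concatenates a lattice path with its reflection about the real axis and checks that the resulting loop, which crosses the positive integers exactly once, has odd winding number:

```python
import cmath

# Toy illustration (not from the paper) of the reflection argument in the
# proof of Lemma 5.3: a path omega from 0 to a point of {2,...,n},
# concatenated with its reflection about the real axis, forms a loop
# crossing the positive integers exactly once, so it has odd winding
# number about a point w0 off the real axis.
omega = [0, 1j, 1 + 1j, 2 + 1j, 3 + 1j, 3]      # a sample path 0 -> 3
omega_bar = [z.conjugate() for z in omega]       # its reflection
loop = omega + list(reversed(omega_bar))[1:]     # closed loop rooted at 0

w0 = 1.5 + 0.25j  # demo choice of reference point

# winding number = total continuous angle swept around w0, over 2*pi;
# each unit step subtends an angle of modulus < pi, so summing principal
# arguments of successive ratios gives the exact total.
total = sum(cmath.phase((b - w0) / (a - w0))
            for a, b in zip(loop, loop[1:]))
winding = round(total / (2 * cmath.pi))
assert winding % 2 == 1  # odd winding forces Q(omega) = -Q(omega_bar)
```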

Lemma 5.4

If \(z = z_n = -n^{1/2}\), then

$$\begin{aligned} \lambda _{A} (z,a)&= O(n^{-3/4} ) + \int _{\partial V_A} h_{V^-_A}(z,w) \frac{h_{A}(w,a)}{h_{A}(z,a)} Q^{z,w} \lambda _{A}(w,a) \, |dw| \\&= O(n^{-3/4} ) + \sum _{w \in \partial U_A} h_{V^-_A}(z,w) \frac{h_{A}(w,a)}{h_{A}(z,a)} Q^{z,w} \lambda _{A} (w,a) . \end{aligned}$$

Proof

The proof of the first expression is similar to that of the last lemma (Fig. 4). Note, however, that since the winding is measured around \(w_0\), when we concatenate a path from z to the positive real axis with its reflection about the real axis we may obtain a loop whose winding number about \(w_0\) is even. By topology, this can only happen if the path hits \( \eta := [0,w_0] \cup [0,\overline{w_0}]\) before hitting \([0,\infty )\). By the Beurling estimate, the probability of hitting \(\eta \) before hitting the positive real axis is \(O(|z|^{-1/2}) = O(n^{-1/4})\), and the value of \(\lambda \) on \(\eta \) is \(O(n^{-1/2})\). Therefore, this term produces an error of size \(O(n^{-3/4})\), which yields the first equality of the lemma.

For the second estimate we first note that the Beurling estimate and the covariance rule for the Poisson kernel show that for all \(w \in \partial V\),

$$\begin{aligned} |h_{V^-}(z,w)\, \frac{h_A(w,a)}{h_A(z,a)} \,Q^{z,w} \, \lambda _{A} (w,a)| \leqslant c |h_{V^-}(z,w)|\leqslant c' |z|^{1/2} n^{-3/2}. \end{aligned}$$

Let E be the set obtained by removing from \(\partial V\) its intersection with the six balls of radius \(n^{1/2}\) centered at the four corners of V, the point at which the slit meets \(\partial V\), and the point at which \(\beta ^*\) meets \(\partial V\). Then by the last estimate

$$\begin{aligned}&\int _{\partial V} h_{V^-}(z,w) \frac{h_A(w,a)}{h_A(z,a)} Q^{z,w} \lambda _{A} (w,a) \, |dw| \\&\quad = \int _{E} h_{V^-}(z,w) \frac{h_A(w,a)}{h_A(z,a)} Q^{z,w} \lambda _{A} (w,a) |dw| + O(|z|^{1/2} n^{-1}). \end{aligned}$$

Notice that \(Q^{z, w}\) is constant on each component of E. Derivative estimates for positive harmonic functions show that if \(u,v\) are in the same component of E and \(|u-v|\leqslant 1\) then

$$\begin{aligned} \frac{h_A(u,a)}{h_A(z,a)}&=\frac{h_A(v,a)}{h_A(z,a)}(1+O(n^{-1})). \end{aligned}$$

Finally, since each point on E is distance at least \(n^{1/2}\) from a corner one can map to the unit disk and compare the Poisson kernels there (using, e.g., Schwarz reflection and the distortion theorem to compare the derivatives) to see that

$$\begin{aligned} h_{V^-}(z,u)=h_{V^-}(z,v)(1+O(n^{-1/2})). \end{aligned}$$

These estimates imply the lemma. \(\square \)

We proceed to show that each of the factors in the expression for \(\varLambda _A(0,a)\) in Lemma 5.3 is close to its continuum counterpart in the decomposition of Lemma 5.4 and then appeal to Lemma 5.4 to go back to the continuum function. The estimates naturally split into two parts. The first deals with fine asymptotics close to the tip of the slit in the slit square and the second with what happens between the boundary of the slit square and the boundary of A. We state the results here, but postpone the proofs to the two subsequent subsections. We then combine them to prove Theorem 1.5.

We define

$$\begin{aligned} \bar{H}_n(0,b) = \frac{H_{\partial U_n^{-}}(0,b)}{ \sum _{z \in \partial U_n} H_{\partial U_n^{-}}(0,z)}, \quad b \in \partial U_n . \end{aligned}$$

This is the conditional probability that an excursion starting at 0 in \(U_n^- \) exits \(\partial U_n^-\) at b given that it exits at a point in \(\partial U_n\). We also define the analogous quantity for Brownian motion,

$$\begin{aligned} \bar{h}_n(-1,b) = \frac{h_{ V_n^{-}}(-1,b)}{ \int _{\partial V_n} h_{V_n^{-}}(-1,w) \, |dw| }, \quad b \in \partial V_n. \end{aligned}$$

Theorem 5.1

If \(b \in \partial U_n\),

$$\begin{aligned} \bar{H}_n(0,b) = \bar{h}_n(-1,b) \, \left[ 1 + O(n^{-1/20})\right] . \end{aligned}$$

Proof

See Sect. 5.3. \(\square \)

We do not believe that this error term is optimal, but this suffices for our needs and is all that we will prove. We will also need the following corollary, which in particular implies the sharpness of the Beurling estimate.

Corollary 5.1

There exist \(c,u>0\) such that

$$\begin{aligned} \sum _{b \in \partial U_n} H_{\partial U_n^{-}}(0,b) = c \, n^{-1/2} \, [1 + O(n^{-u})]. \end{aligned}$$

Proof

Let

$$\begin{aligned}&K(n) = \sum _{b \in \partial U_n} H_{\partial U_n^{-}}(0,b) , \\&\tilde{K}(n) = \int _{\partial V_n} h_{V_n^{-}}(-1,w) \, |dw|. \end{aligned}$$

and note that for \(r \geqslant 1\) (we write \(U_{rn} = U_{\lfloor rn \rfloor }\))

$$\begin{aligned}&K(rn) = K(n) \sum _{b \in \partial U_n} \overline{H}_{n}(0,b)\, H_{U_{rn}^-}(b,\partial U_{rn}) ,\\&\tilde{K}(rn) = \tilde{K}(n) \int _{\partial V_n} \overline{h}_{n}(-1,w) \, {\text {hm}}_{V_{rn}^-}(w, \partial V_{rn})\, |dw|. \end{aligned}$$

Using the previous theorem and strong approximation, we can see that for \(1 \leqslant r \leqslant 2\),

$$\begin{aligned} \frac{K(rn)}{K(n)} = \frac{\tilde{K}(rn)}{\tilde{K}(n)} + O(n^{-u}). \end{aligned}$$

By direct calculation using conformal mapping,

$$\begin{aligned} \frac{\tilde{K}(rn)}{\tilde{K}(n)} = r^{-1/2} \, +O(n^{-u}). \end{aligned}$$

Therefore, allowing a different u,

$$\begin{aligned} \frac{K(rn)}{K(n)} = r^{-1/2} \, +O(n^{-u}), \end{aligned}$$

and the result easily follows from this. \(\square \)
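As an illustration only (not part of the proof), the \(n^{-1/2}\) decay can be checked numerically: the quantity computed below is, up to normalization, the probability that an excursion from 0 reaches \(\partial U_n\) before the slit, obtained by solving the discrete Dirichlet problem with Jacobi iteration, and the ratio for doubled n should be close to \(2^{-1/2}\) already for modest sizes (the tolerance is generous to allow the \(O(n^{-u})\) correction):

```python
import numpy as np

# Finite-n illustration (not part of the proof) of K(n) ~ c n^{-1/2}:
# u solves the discrete Dirichlet problem u = 1 on the outer boundary of
# U_n, u = 0 on the slit {0,...,n-1} (the point (n,0) lies on the outer
# boundary), and K(n) is proportional to the average of u over the
# neighbors of the origin.
def escape_prob(n, iters=25000):
    size = 2 * n + 1
    u = np.zeros((size, size))           # u[x+n, y+n] for (x, y) in U_n
    fixed = np.zeros((size, size), dtype=bool)
    u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1.0   # outer boundary: value 1
    fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
    fixed[n:2 * n, n] = True             # slit x = 0..n-1, y = 0: value 0
    for _ in range(iters):               # Jacobi iteration
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(fixed, u, avg)
    # excursion from 0: average over its four neighbors; the step to (1, 0)
    # lands on the slit and contributes 0.
    return 0.25 * (u[n - 1, n] + u[n, n + 1] + u[n, n - 1])

p16, p32 = escape_prob(16), escape_prob(32)
assert 0.0 < p32 < p16 < 1.0
assert abs(p32 / p16 - 2 ** -0.5) < 0.1   # ratio close to 2^{-1/2}
```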

Given the corollary we can restate Theorem 5.1 as: there exist \(c, u > 0\) such that

$$\begin{aligned} H_{\partial U_n^-}(0,b) =c\, h_{V_n^-}(-1,b) \, \left[ 1 + O(n^{-u})\right] . \end{aligned}$$
(5.6)

We will also need that

$$\begin{aligned} \frac{H_A(w,a)}{H_A(0,a)}= \frac{h_{A}(w,a)}{h_{A}(0,a)} + O(n^{-u}) ,\quad w \in \partial U_n^- , \quad a \in \partial A \end{aligned}$$
(5.7)

which is a direct consequence of Theorem 2.1. This handles the first part of the argument of Theorem 1.5. The second part is handled in the next proposition. Note that for \(w \in \partial U_A\) the quantities \(\varLambda _A(w,a)\) and \(\lambda _A(w,a)\) are “macroscopic” quantities for which one would expect Brownian motion and random walk to give almost the same value.

Proposition 5.1

There exist \(u>0, c < \infty \) such that if \(A\in \mathcal {A}\), \(a \in \partial A\), and \(w\in \partial {U}_A\),

$$\begin{aligned} |\varLambda _A(w,a)-\lambda _{A}(w,a)|\leqslant c \, r_A^{-u}. \end{aligned}$$
(5.8)

Proof

See Sect. 5.4. \(\square \)

Proof of Theorem 1.5 assuming Theorem 5.1 and Proposition 5.1

We will first show (1.8), that is, there is a constant \(0 < c_2 < \infty \) such that

$$\begin{aligned} \varLambda _A(0,a) = c_2 r_A^{-1/2}\, \left[ \sin \theta _a + O\left( r_A^{-u} \right) \right] . \end{aligned}$$

We write \(U,U^-,V,V^-,\bar{H}, \bar{h}\) for \(U_A, U^-_A,V_A,V^-_A, \bar{H}_{n_A}, \bar{h}_{n_A}\). Using Lemma 5.3, (5.6), (5.7), and (5.8) we see that there is a constant \(0 < c_0 < \infty \) such that

$$\begin{aligned} \varLambda _A(0,a)&=\sum _{w\in \partial {U}} H_{\partial U^-}(0,w) \frac{H_A(w,a)}{H_A(0,a)} Q^{0,w}\varLambda _A(w,a) \\&= c_0 \sum _{w\in \partial {U}} h_{V^-}(-1,w) \, \frac{h_{A}(w,a)}{h_{A}(0,a)}\, Q^{0,w}\, \lambda _A(w,a)\left[ 1+O(r_A^{-u}) \right] . \end{aligned}$$

We note that if \(f_A: D_A \rightarrow {\mathbb {D}}\) with \(f_A(w_0) = 0, f_A'(w_0) > 0\), then for z with \(|z|\leqslant n_A^{1/2}\),

$$\begin{aligned} f_A(z) = r_A^{-1} \, z + O(r_A^{-1}), \end{aligned}$$

and hence, by (5.2) and Lemma 5.1,

$$\begin{aligned} \lambda _A(-n_A^{1/2},a) = 2\, r_A^{-1/2} \, n_A^{1/4} \, \sin \theta _a + O(r_A^{-1/2}) \end{aligned}$$
(5.9)

and there exists some \(c>0\) such that for all z with \(|z|\leqslant n_A^{1/2}\),

$$\begin{aligned} \lambda _A(z,a) \leqslant c\, r_A^{-1/2} \, |z|^{1/2}. \end{aligned}$$
(5.10)

A simple argument, say using conformal mapping, shows that

$$\begin{aligned} h_{V^-} (-n_A^{ 1/2},w) = n_A^{1/4} \, h_{V^-}(-1,w) \, \left[ 1+O(n^{-1/4})\right] . \end{aligned}$$

We now use Lemma 5.4 and (5.9) to conclude that

$$\begin{aligned} n_A^{1/4} \, c_0^{-1} \, \varLambda _A(0,a)= & {} O(r_A^{-3/4}) + \lambda _A(-n_A^{1/2},a) \, \left[ 1 + O(n^{-u})\right] \\= & {} 2\, r_A^{-1/2}n_A^{1/4} \, \left[ \sin \theta _a + O(n^{-u})\right] . \end{aligned}$$

Since \(n_A = \epsilon \, r_A +O(1)\), we get (1.8). We will give a symmetry argument here to deduce (1.9) from (1.8). Suppose we replace \(J_t\) with

$$\begin{aligned} \tilde{J}_t = \left\lfloor \frac{\varTheta _t + \pi }{2 \pi } \right\rfloor - \left\lfloor \frac{\varTheta _0 + \pi }{2 \pi }\right\rfloor . \end{aligned}$$

In other words, we place the discontinuity of the argument at \( f^{-1}_A((-1,0]) \) rather than at \( f^{-1}_A([0,1))\). Then we can see that if \(S_\tau = a \), then \(\tilde{Q}(\omega [0,\tau ]) = \pm Q(\omega [0,\tau ])\) with the minus sign appearing if and only if \(\pi /2 \leqslant \theta _a < \pi \).

Now given A, consider its reflection along the line \(\{{\text {Re}}(z) = 1/2\}\), that is, let \(\rho (z) = 1 - \overline{z}\), \(A' = \rho (A)\). Let \(a' = \rho (a)\) and define \(\theta '\) by \(f_{A'}(a') = e^{i2\theta '}\). Note that \( \rho (D_{A'}) = D_A\), \(f_{A'}(z) = -\overline{f_A(\rho (z))}\), \(f_A^{-1}([0,1)) = \rho \left[ f_{A'}^{-1}((-1,0])\right] ,\) and

$$\begin{aligned} f_A(a) = - \overline{f_{A'}(a')} = -\overline{e^{2i\theta '}} = \exp \left\{ 2i\left( \frac{\pi }{ 2} - \theta '\right) \right\} . \end{aligned}$$

In other words, \(\theta _{a} = \frac{\pi }{2} - \theta ' \, (\mathrm{mod} \,\pi )\). If \(\theta _a,\theta ' \in [0,\pi )\), then \(\theta _a = \frac{\pi }{2} - \theta '\) if \(0 \leqslant \theta ' \leqslant \frac{\pi }{2}\) and \(\theta _a = \frac{3\pi }{2} - \theta '\) if \(\frac{\pi }{2} < \theta ' < \pi \), and hence

$$\begin{aligned} \cos \theta _a = \left\{ \begin{array}{ll} \sin \theta ' &{} 0 \leqslant \theta ' \leqslant \pi /2 \\ -\sin \theta ' &{} \pi /2 < \theta ' < \pi . \end{array} \right. \end{aligned}$$

From the previous paragraph and (1.8), we see that

$$\begin{aligned}&\varLambda (1,a) = \varLambda _{A'}(0,a') = c_2 r_A^{-1/2}\, \left[ \sin \theta ' + O\left( r_A^{-u}\right) \right] , \quad 0 \leqslant \theta ' \leqslant \frac{\pi }{2} ,\\&\varLambda (1,a) = -\varLambda _{A'}(0,a') = c_2 r_A^{-1/2}\, \left[ -\sin \theta ' + O\left( r_A^{-u}\right) \right] , \quad \frac{\pi }{2} < \theta ' < \pi . \end{aligned}$$

\(\square \)

5.3 Poisson kernel convergence in the slit square: proof of Theorem 5.1

The rate of convergence of the discrete Poisson kernel to the continuous kernel is very fast in the case of rectangles aligned with the coordinate axes.

Lemma 5.5

There exists \( c < \infty \) such that if \(n/10 \leqslant m \leqslant 10 n\), and

$$\begin{aligned}&A = A(n,m) = \{j+ik : 1 \leqslant j \leqslant n - 1, 1 \leqslant k \leqslant m-1\} ,\\&R=R(n,m) = \{x+iy: 0 < x < n, 0 < y < m\}, \end{aligned}$$

then

$$\begin{aligned} \left| h_{\partial R}(ik, n + i k') - 4\, H_{\partial A} (ik, n+i k')\right| \leqslant c \, \frac{\min \{k, m-k\}\min \{k', m-k'\}}{{n^5} }. \end{aligned}$$

In particular,

$$\begin{aligned} H_{\partial A} (ik, n+i k') = \frac{1}{4}\, h_{\partial R}(ik, n + i k') \left( 1+O\left( n^{-2}\right) \right) . \end{aligned}$$

Proof

We write \( v=ik, w= n + i k'\), and \(d= \min \{k, m-k\}, d' = \min \{k', m-k'\}\). Using separation of variables (see Section 6 of [2] or [17, Chapter 8] for more details), one can find \(h_R(1+ ik, w)\) exactly as an infinite series

$$\begin{aligned} h_R(1+ik,w) = \frac{2}{m} \sum _{l=1}^\infty \frac{\sinh (l \pi /m)}{\sinh ( l\pi n/m) }\, \sin \left( \frac{lk\pi }{m} \right) \, \sin \left( \frac{l k' \pi }{m} \right) . \end{aligned}$$

Similarly one can find the discrete Poisson kernel by separation of variables as a finite Fourier series; more specifically,

$$\begin{aligned} H_{A}(1+ ik, w) = \frac{2}{m}\sum _{l = 1}^{m-1} \frac{\sinh (\alpha _l \pi /n)}{\sinh (\alpha _l \pi ) }\, \sin \left( \frac{lk\pi }{m} \right) \, \sin \left( \frac{l k' \pi }{m} \right) , \end{aligned}$$

where \(\alpha _l\) is the solution to

$$\begin{aligned} \cosh \left( \frac{\alpha _l \pi }{n} \right) + \cos \left( \frac{l \pi }{m} \right) = 2 . \end{aligned}$$

Note that Lemma 6.1 in [2] implies that

$$\begin{aligned} \alpha _l = \frac{ln}{m} \, \left[ 1 + O(l^2/n^2) \right] . \end{aligned}$$
(5.11)

One can find \(c_1>0\) and \(c_2>0\) such that for all \(m, n\) as in the statement of the lemma,

$$\begin{aligned} \frac{\sinh (l \pi /m)}{\sinh ( l\pi n/m) }\leqslant c_1e^{-c_2 l} \text { and } \frac{\sinh (\alpha _l \pi /n)}{\sinh (\alpha _l \pi ) }\leqslant c_1e^{-c_2 l} . \end{aligned}$$

Using this and the inequality \(|\sin x| \leqslant |x|\), we can see that there exists \(c < \infty \) such that

$$\begin{aligned}&|h_R(1 + ik,w) - H_A(1 + ik,w) | \\&\quad \leqslant c \left[ \frac{1}{n^6} +\frac{d d'}{n^3} \sum _{l \leqslant c \, \log n} l^2 \left| \frac{\sinh (l \pi /m)}{\sinh ( l\pi n/m) } - \frac{\sinh (\alpha _l \pi /n)}{\sinh (\alpha _l \pi ) }\right| \right] . \end{aligned}$$

Note that the 6 is arbitrary and can be made arbitrarily large by increasing c. Using (5.11) we can see that for \(l \leqslant c \log n\),

$$\begin{aligned} \frac{\sinh (\alpha _l \pi /n)}{\sinh (\alpha _l \pi ) } = \frac{\sinh (l \pi /m)}{\sinh ( l\pi n/m) }\, \left[ 1 + O(l^3/n^2) \right] , \end{aligned}$$

and hence the sum is \(O(n^{-3})\) giving

$$\begin{aligned} |h_R(1 + ik,w) - H_A(1 + ik,w) | \leqslant \frac{c dd'}{n^6} . \end{aligned}$$

This implies that

$$\begin{aligned} H_{\partial A}(ik, w) = \frac{1}{4} \, H_A(1 + ik, w) = \frac{1}{4} \, h_{R}(1+ik,w) + O(dd'n^{-6}) . \end{aligned}$$

We now assume that \(k\leqslant m/2\). We can extend the function \(h(z) = h_{R}(z,w)\) to \(\{ x+iy: -n < x < n, -m < y < m \}\) by Schwarz reflection. (If \(k > m/2\), we instead extend the function to \(\{ x+iy: -n< x < n, 0 < y < 2m \}\).) Note that on \(\{z: |z - ik| \leqslant \min \{m,n\}/4 \},\)

$$\begin{aligned} |h(z) | \leqslant c \, dd'/n^3. \end{aligned}$$

Since \(\min \{m,n\} \asymp n\), estimates for derivatives of harmonic functions (see, for instance, Section 2.3 in [14]) then imply that

$$\begin{aligned} |\partial _{x}^jh(z)| \leqslant c \, dd'/ n^{3+j} . \end{aligned}$$
(5.12)

Using the definition of the boundary Poisson kernel (see Sect. 2.3) and (5.12) in a Taylor expansion of h at ik, we see that

$$\begin{aligned} h_R(ik + 1,w) = h_{\partial R}(ik,w) + O(dd'/n^5). \end{aligned}$$

\(\square \)
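The two series and the asymptotics (5.11) are easy to check numerically. The following sketch (with small demo values of n, m, k, k') computes \(\alpha _l\) from the transcendental equation, evaluates the finite Fourier series for \(H_{A}(1+ik, n+ik')\), and compares it with the hitting probability obtained from a direct solve of the discrete Dirichlet problem:

```python
import math

# Numerical check (illustrative, not part of the proof): the finite Fourier
# series for H_A(1+ik, n+ik'), with alpha_l solving
# cosh(alpha_l*pi/n) + cos(l*pi/m) = 2, should reproduce the probability,
# computed directly, that the walk from 1+ik exits the rectangle at n+ik'.
n, m = 12, 12
k, kp = 5, 7          # demo choices of heights

def alpha(l):
    # exact solution of the transcendental equation defining alpha_l
    return (n / math.pi) * math.acosh(2.0 - math.cos(l * math.pi / m))

series = (2.0 / m) * sum(
    math.sinh(alpha(l) * math.pi / n) / math.sinh(alpha(l) * math.pi)
    * math.sin(l * k * math.pi / m) * math.sin(l * kp * math.pi / m)
    for l in range(1, m))

# Direct computation: discrete Dirichlet problem with boundary value 1 at
# w = n + i*kp and 0 elsewhere, solved by Jacobi iteration.
H = [[0.0] * (m + 1) for _ in range(n + 1)]
H[n][kp] = 1.0
for _ in range(2000):
    new = [row[:] for row in H]
    for x in range(1, n):
        for y in range(1, m):
            new[x][y] = 0.25 * (H[x+1][y] + H[x-1][y] + H[x][y+1] + H[x][y-1])
    H = new

assert abs(alpha(1) * m / n - 1.0) < 0.02   # consistent with (5.11)
assert abs(series - H[1][k]) < 1e-9         # series matches the direct solve
```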

Let us abuse notation slightly and write

$$\begin{aligned} G_{U_n^-}(0,\zeta ) = \frac{1}{4} \sum _{|e| = 1} G_{U_n^-}(e,\zeta ). \end{aligned}$$

Lemma 5.6

For every n, there exists \(c_n\) such that for all \(|\zeta | \geqslant n/2\),

$$\begin{aligned} G_{U_n^-}(0,\zeta ) = c_n \, g_{V_n^-}(-1,\zeta ) \, \left[ 1 + O(n^{-1/20})\right] . \end{aligned}$$

Moreover, the constant \(c_n\) is uniformly bounded away from 0 and \(\infty \).

Proof

Let \(z_0 = - \lfloor n/8 \rfloor \) and \(F_n: V_n^- \rightarrow {\mathbb {D}}\) be the conformal transformation with \(F_n(z_0)=0, F_n(0) = 1\). Note that for all \(z\in V_n^-, F_n(z) = F(z/n)\) where \(F = F_1\). It is easy to see that \(|F_n(\zeta ) -1| = |F(\zeta /n) - 1|\) is uniformly bounded away from 0 for \(|\zeta | \geqslant n/2\).

Let \(H_n\) be the restriction to \(V_n^-\) of the Schwarz–Christoffel transformation from \(V_n\) to \(n{\mathbb {D}}\), that sends the origin to the origin and (n, 0) to (n, 0). Then the image of \(H_n\) is \(n{\mathbb {D}}{\setminus } [0,n]\). We can see, e.g., from the explicit form of \(H_n\) that \(H_n(z_0)=-cn(1+O\left( 1/n\right) )\) for some \(0<c<1\). Moreover, \(H_n(-1)=-H_n'(0)(1+O\left( 1/n\right) )\) and \(H'_n(-1)=c'(1+O\left( 1/n\right) )\) for some \(c'>0\).

We can then write \(F_n = G_n\circ H_n\), where \(G_n(z)=\frac{(1-z_a)(\sqrt{z/n}+\sqrt{n/z})+(1+z_a)i}{(1-z_a)(\sqrt{z/n}+\sqrt{n/z})-(1+z_a)i}\) (\(z_a\) is some real in [0, 1] that depends on \(H_n(z_0)\) and can be computed explicitly) maps \(n{\mathbb {D}}{\setminus } [0,n]\) to \({\mathbb {D}}\), \(H_n(z_0)\) to 0 and 0 to 1. Then \(G_n'(H_n(-1))= c'' \, n^{-1/2} \, [1 + O(n^{-1/2})]\) for some \(c''\), so that the chain rule implies that \(F_n'(-1)=c_0 \, n^{-1/2} \, [1 + O(n^{-1/2})]\) for some constant \(c_0\).

Using the explicit form of the Green’s function and Poisson kernel in the disk, we can see that

$$\begin{aligned} g_{V^-}(-1, \zeta )= & {} g_{\mathbb {D}}(F_n(-1), F_n(\zeta )) \\= & {} g_{\mathbb {D}}(1 - c_0 \, n^{-1/2}(1 + O(n^{-1/2})) , F(\zeta /n))\, \\= & {} 2\pi \,c_0 \, n^{-1/2} \, h_{{\mathbb {D}}}(F(\zeta /n),1) \, [1 + O(n^{-1/2})] \\= & {} \frac{2 \pi \, c_0 \, [1 - |F(\zeta /n)|^2]}{n^{1/2} \,|F(\zeta /n) - 1|^2 } \, [1 + O(n^{-1/2})] . \end{aligned}$$

By equation (40) of [13], we can see that

$$\begin{aligned} G_{U^-}(0,\zeta ) = G_{U^-}(0,z_0) \frac{ 1 -| F(\zeta /n)|^2}{ |F(\zeta /n) - 1|^2 } \, [1 + O(n^{-1/20})]. \end{aligned}$$

Here we are using the uniform bound on \(|F(\zeta /n) - 1|\). All one needs now is that \(G_{U^-}(0,z_0) \asymp n^{-1/2}\), which follows from Proposition 1.5.9 and (2.40) in [15]. \(\square \)
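The proof above used the approximation \(g_{{\mathbb {D}}}(1-\epsilon , w) = 2\pi \epsilon \, h_{{\mathbb {D}}}(w,1)\,[1+O(\epsilon )]\). This is a standard fact, and it can be checked numerically with the explicit unit-disk formulas (which we assume here match the normalizations of Sect. 2.3):

```python
import cmath, math

# Quick check (illustrative) that g_D(1 - eps, w) ~ 2*pi*eps*h_D(w, 1)
# in the unit disk D, with the standard normalizations.
def g_disk(z, w):  # Green's function of the unit disk
    return -math.log(abs((z - w) / (1 - z.conjugate() * w)))

def h_disk(z, b):  # Poisson kernel of the unit disk, |b| = 1
    return (1 - abs(z) ** 2) / (2 * math.pi * abs(z - b) ** 2)

w = 0.3 * cmath.exp(2.1j)   # arbitrary interior point (demo choice)
for eps in (1e-3, 1e-5):
    ratio = g_disk(1 - eps, w) / (2 * math.pi * eps * h_disk(w, 1))
    assert abs(ratio - 1.0) < 10 * eps   # relative error of order eps
```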

Proof of Theorem 5.1

Let us first consider the case \(b = n + ik'\) with \(0 < k' < n\). Let \(m = \lfloor 3n/4 \rfloor \) and let \(R= R_n\) be the rectangle in the top right corner of U:

$$\begin{aligned} R = \{x +iy: m < x < n, 0 < y < n \}. \end{aligned}$$

As an abuse of notation we will also write R for \(R \cap {\mathbb {Z}}^2\). A last-exit decomposition shows that

$$\begin{aligned} H_{\partial U^-} (0,b) = \sum _{j=1}^{n-1} G_{U_n^-}(0,m+ji) \, H_{\partial R}(m+ji,b) . \end{aligned}$$

A similar decomposition shows that

$$\begin{aligned} h_{V^-}(-1,b) = \frac{1}{2 \pi } \int _0^n g_{V^-}(-1,m+i\zeta ) \, h_{\partial R}(m+i\zeta , b) \, |d\zeta |. \end{aligned}$$

This latter equality is perhaps better seen by writing

$$\begin{aligned} h_{V^-}(-1,b) = \lim _{\epsilon \downarrow 0} \frac{1}{2\pi \epsilon } g_{V^-}(-1, b - \epsilon ) = \lim _{\epsilon \downarrow 0} \frac{1}{2\pi \epsilon } g_{V^-}( b - \epsilon , -1), \end{aligned}$$

and using the strong Markov property. Lemma 5.5 shows that

$$\begin{aligned} H_{\partial R}(m+ji,b) = \frac{h_{\partial R}(m+ji, b)}{4} + O(d/n^4) , \end{aligned}$$
(5.13)

where \(d = \min \{k', n-k'\}\). For \(B\subset {\mathbb {Z}}^2\), we let \(\tau _B = \inf \{k\geqslant 1: S(k)\in B\}\) and write \(\tau _x\) when x is just a point. Then, using again Proposition 1.5.9 and (2.40) in [15], for \(1\leqslant j\leqslant n-1\),

$$\begin{aligned} G_{U^-}(0,m+ij)= & {} \mathbf{P}^0(\tau _{m+ij}<\tau _{\partial U^-})G_{U^-}(m+ij,m+ij)\\\leqslant & {} cn^{-1/2}G_{U^-}(m+1+ij,m+ij) \leqslant cn^{-1/2}, \end{aligned}$$

so

$$\begin{aligned} \sum _{j = 1}^{n-1} G_{U^-} (0,m+ji) \leqslant cn^{1/2}. \end{aligned}$$

Therefore, using (5.13),

$$\begin{aligned} H_{\partial U^-} (0,b)= & {} \sum _{j=1}^{n-1} G_{U_n^-}(0,m+ji) \,\left[ H_{\partial R}(m+ji,b) - \frac{1}{4} h_{\partial R}(m+ji,b) \right] \\&+ \frac{1}{4} \sum _{j=1}^{n-1} G_{U_n^-}(0,m+ji) \, h_{\partial R}(m+ji,b)\\= & {} O(d/n^{7/2}) + \frac{1}{4} \sum _{j=1}^{n-1} G_{U_n^-}(0,m+ji) \, h_{\partial R}(m+ji,b) . \end{aligned}$$

By Lemma 5.6, we can write

$$\begin{aligned} H_{\partial U^-} (0,b) = O(d/n^{7/2}) + [1 + O(n^{-1/20})] \sum _{j=1}^{n-1} c_n \, g_{V_n^-}(-1,m+ji) \, h_{\partial R}(m+ji,b) , \end{aligned}$$

where this \(c_n\) is 1 / 4 times the value in that lemma. The sum above is greater than a constant times \(d/n^{3/2}\). Indeed, the probability that a Brownian motion from \(-1\) reaches a point in (say) the middle half of the left side of R before exiting \(V_n\) is at least \(cn^{-1/2}\). Moreover, the function \(h_{\partial {R}}(\cdot , b)\) at such points is at least c d / n. (Recall that \(d = \min \{k', n-k'\}\) and \(b=n+ik'\).) Hence we can write this as

$$\begin{aligned} H_{\partial U^-} (0,b) = [1 + O(n^{-1/20})] \sum _{j=1}^{n-1} c_n \, g_{V_n^-}(-1,m+ji) \, h_{\partial R}(m+ji,b) . \end{aligned}$$

Routine estimates allow us to approximate the sum by an integral,

$$\begin{aligned} H_{\partial U^-} (0,b)= & {} [1 + O(n^{-1/20})] \int _0^{n} c_n \, g_{V_n^-}(-1,m+i\zeta ) \, h_{\partial R}(m+i\zeta ,b) \, |d\zeta |\\= & {} 2\pi \, c_n \, h_{V^-}(-1,b) \, [1 + O(n^{-1/20})]. \end{aligned}$$

We assumed that \(b = n +i k'\) with \(k ' > 0\). There are four other cases. For example, if \(b = k + ni\) with \(k > 0\) we replace the rectangle R with the rectangle

$$\begin{aligned} \{x+iy: - n < x < n, m < y < n \} . \end{aligned}$$

We do the other three cases similarly. In all cases we choose \(z_0 = - \lfloor n/8 \rfloor \). The same argument gives us

$$\begin{aligned} H_{\partial U^-} (0,b) = 2\pi \, c_n \, h_{V^-}(-1,b) \, [1 + O(n^{-1/20})], \end{aligned}$$

with the same value of \(c_n\). From this we conclude that

$$\begin{aligned} \frac{H_{\partial U^-}(0,b)}{\sum _{y \in \partial U} H_{\partial U^-}( 0, y)} = \frac{h_{V^-}(-1,b)}{\int _{\partial V}h_{V^-}(-1,y) |dy|} \left[ 1+O(n^{-1/20}) \right] . \end{aligned}$$

\(\square \)

5.4 From the slit square to \(\partial A\): proof of Proposition 5.1

To prove Proposition 5.1, we need a version of Theorem 2.2 for the h-processes. We will however use slightly different stopping times. We will write f for \(f_A\) and for \(u>0\) consider Brownian and random walk paths B and S with measure either \(\mathbf{P}^z\) or \(\mathbf{P}^{z,a}\) (for the unconditioned and conditioned paths, respectively). We let

$$\begin{aligned}&\displaystyle \tau _u = \inf \{t\geqslant 0: d(f(S_{2t}),\partial {\mathbb {D}})\leqslant n^{-u}\} , \;T_u = \inf \{t\geqslant 0:d(f(B_t),\partial {\mathbb {D}})\leqslant n^{-u}\},\\&\displaystyle \sigma _u = \tau _{u} \wedge T_{u}. \end{aligned}$$

Let \(\tau , T\) be the times that the paths reach \(\partial {\mathbb {D}}\) (of course, under the measure \(\mathbf{P}^{z,a}\), this is the time at which they reach a).

Intuitively, knowing that S and B will exit at the same point a should only make it easier under the measure \(\mathbf{P}^{z,a}\) than under \(\mathbf{P}^z\) to find a coupling that ensures that B and S are close with high probability. Indeed, this is the case. We will prove the following result which does not give the optimal bounds.

Theorem 5.2

There exist \(u, u' >0, c < \infty \) such that if \(n \leqslant {\text {dist}}(0, \partial A) \leqslant n+1\), \(a\in \partial A\), and \(z \in A\) with \( |z| \leqslant 3n/4 \), then one can construct a probability space containing two processes B and S such that the probability of the event that

$$\begin{aligned} \sup _{0\leqslant t \leqslant \sigma _u }|S_{2t}-B_t| \geqslant c\log n, \end{aligned}$$

or

$$\begin{aligned} {\text {diam}}\left( f \circ S[\sigma _u, \tau ]\right) + {\text {diam}}\left( f\circ B[\sigma _u,T] \right) \geqslant c \, n^{-u'} \end{aligned}$$

is bounded above by \(cn^{-u'}\). Here B and S have the distribution of a Brownian, respectively random walk h-process started at z and conditioned to leave \(D_A\) at a.

Proof

We use the KMT coupling of Theorem 2.2 to put the unconditioned paths B and S on a probability space in such a way that if \(\mathcal {K} = \{\sup _{0\leqslant t \leqslant \sigma _u }|S_{2t}-B_t| \leqslant c\log n\}\), we have

$$\begin{aligned} \mathbf{P}^z(\mathcal {K}^c)\leqslant cn^{-2}. \end{aligned}$$

To obtain the corresponding result for the h-processes, we note that the fact that there is a point v with \(|v|=1-n^{-u}\) and \(d(v,f(B(\sigma _u)))\leqslant c\log n\), together with the distortion theorem, implies that there exists a constant c such that \(d(f(B(\sigma _u)),\partial {\mathbb {D}})\in [c^{-1}n^{-u},cn^{-u}]\), which, using the explicit form of the Poisson kernel in the unit disk in (2.8), implies that \(h(B(\sigma _u),a)\in [c^{-1}n^{-u},cn^{u}]\). Similarly, using Theorem 4.1 in [3] and (40) in [13], we see that, with a possibly different constant \(c\), \(H(S(\sigma _u),a)\in [c^{-1}n^{-u},cn^{u}]\). Moreover, by Harnack estimates \(h(z,a) \asymp 1\) and \(H(z,a)/H(0,a) \asymp 1\) for z with \(|z| \leqslant 3n/4\). The fact that the measure for the Brownian h-process is obtained by weighting the Brownian paths by a factor of \(h(B(\sigma _u),a)/h(z,a)\) and that the measure for the random walk h-process is obtained by weighting the random walk paths by a factor of \(H(S(\sigma _u),a)/H(z,a)\) now implies that

$$\begin{aligned} \mathbf{P}^{z,a}(\mathcal {K}^c)\leqslant cn^{-2+u}. \end{aligned}$$

It remains to show that there is \(u' > 0\) such that

$$\begin{aligned} \mathbf{P}^{z,a}({\text {diam}}\left( f \circ S[\sigma _u, \tau ]\right) + {\text {diam}}\left( f \circ B[\sigma _u,T]\right) \geqslant c \, n^{-u'})\leqslant cn^{-u'}. \end{aligned}$$

In order to split this into separate estimates for S and B, we have to deal with the technical issue that the joint process (S, B) does not satisfy the strong Markov property in the coupling. To get around this, one can use a standard tool (see, for instance, the proof of Theorem 3.1 in [3]) which consists in introducing stopping times \(\tau _u\) for S and \(T_u\) for B such that

$$\begin{aligned} \max \{\tau _u, T_u\}\leqslant \sigma _u \end{aligned}$$

and

$$\begin{aligned} f(S(\tau _u))\in A_{c,u}, \end{aligned}$$
(5.14)

where \(A_{c,u} = \{z\in {\mathbb {D}}: |z|\in [1-cn^{-u},1-c^{-1}n^{-u}]\}\) and, similarly, \(f(B(T_u))\in A_{c,u}\), and applying the strong Markov property at those times. Let us just consider the h-process S, as conformal invariance makes the estimate for the h-process B considerably simpler. Write \(r=n^{-u}\). For simplicity we assume without loss of generality that \(f_A(a) = 1\).

Using the expression for the Poisson kernel in the disk in (2.8) we see that for \(R_1\geqslant 1\), \(h((1-r)e^{iR_1r},1)=O\left( R_1^{-2}r^{-1}\right) \). We can use this and (2.2) to see that if w is a lattice point within one unit of \(f^{-1}((1-c'r)e^{iR_1r})\) with \(c'\in [c^{-1},c]\) and c as in (5.14), and if \(|z|\leqslant 3n/4\),

$$\begin{aligned} \frac{H(w,a)}{H(z,a)}=O\left( R_1^{-1}\right) . \end{aligned}$$

Therefore,

$$\begin{aligned} \mathbf{P}(F)&:=\mathbf{P}^{z,a}(|{\text {Arg}}(f(S(\tau _u)))|\geqslant R_1) \\&=O\left( R_1^{-1}\right) \mathbf{P}^{z}(|{\text {Arg}}(f(S(\tau _u)))|\geqslant R_1) \\&=O\left( R_1^{-1}\right) . \end{aligned}$$

Now let \(R_1=r^{-1/4}\) and \(R_2 = r^{-3/4}\) and note that \(r^{-1}\geqslant R_2\geqslant R_1\geqslant 1\).

$$\begin{aligned} \mathbf{P}^{z,a}({\text {diam}}\left( f \circ S[\tau _u, \tau ]\right) \geqslant R_2r)&\leqslant \sup _{v,w}\frac{H(v,a)}{H(w,a)} \cdot \mathbf{P}^{z}(F^c;{\text {diam}}\left( f \circ S[\tau _u, \tau ]\right) \geqslant R_2r) \\&\quad +O\left( R_1^{-1}\right) \\&\leqslant C\frac{1}{R_2}\frac{R_1}{R_2r}+O\left( R_1^{-1}\right) = O\left( R_1^{-1}\right) , \end{aligned}$$

where the sup is over \(w\in f^{-1}(A_{c,u}\cap D(1,R_1 r))\) and \(v\not \in f^{-1}(D(1,R_2r))\) and we used in the first inequality (40) and (41) of [13] and in the second inequality Proposition 3.1 in [3], letting a point in \(A_{c,u}\) play the role of the origin in that Proposition. Noting that we can let \(u'=u/4\) concludes the more difficult part of the proof. \(\square \)

Proof of Proposition 5.1

We choose constants \(c,u,u'\) so that the conclusion of Theorem 5.2 holds. We couple the h-processes \(B^{a}\) and \(S^{a}\), started at a point \(z\in \partial U_A\), using the coupling of that theorem and let \(\mathcal {K}\) be the event that

$$\begin{aligned} \sup _{0\leqslant t \leqslant \sigma _u }|S^{a}_{2t}-B^{a}_t| \leqslant c\log n, \end{aligned}$$

and

$$\begin{aligned} {\text {diam}}\left( f \circ S^a[\sigma _u, \tau ]\right) + {\text {diam}}\left( f \circ B^a[\sigma _u,T]\right) \leqslant c \, n^{-u'}, \end{aligned}$$

so that

$$\begin{aligned} P(\mathcal {K}^c)\leqslant cn^{-u'}. \end{aligned}$$
(5.15)

We define

$$\begin{aligned} \xi _b = \inf \{t: |B_t| \leqslant n^{1/2}+bc\log n\}, \quad \zeta _b = \inf \{t: |S_{2t}| \leqslant n^{1/2}+bc\log n\}, \end{aligned}$$

where c is the same constant as in \(\mathcal {K}\) and write \(\xi \) for \(\xi _0\) and \(\zeta \) for \(\zeta _0\). Let \(Q^B = Q(B^a[0,T]), Q^S = Q(S^a[0,\tau ]).\) Note that \(Q^B = Q^S\, I_a\) provided that \(\xi > T, \zeta > \tau \), and \(\mathcal {K}\) holds. Therefore, if \(z \in \partial U\), the fact that \(|Q^B - Q^S\, I_a|\leqslant 2\) implies that

$$\begin{aligned}&\left| \mathbf{E}^{z,a}\left[ Q^B - Q^S \, I_a\right] \right| \leqslant \left| \mathbf{E}^{z,a}\left[ Q^B; \xi < T\right] \right| + \left| \mathbf{E}^{z,a}\left[ Q^S \,I_a; \zeta < \tau \right] \right| \\&\quad + 2[\mathbf{P}^{z,a}(\xi < T; \tau < \zeta ; \mathcal {K}) + \mathbf{P}^{z,a}(\zeta < \tau ; T < \xi ; \mathcal {K}) + \mathbf{P}(\mathcal {K}^c)]. \end{aligned}$$

Since we know from (5.10) that \(|\lambda _A(z,a)| \leqslant c \, n^{-1/4}\) for \(|z| \leqslant n^{1/2}\), we can use the strong Markov property to see that

$$\begin{aligned} \left| \mathbf{E}^{z,a}\left[ Q^B; \xi < T\right] \right| \leqslant \left| \mathbf{E}^{z,a}\left[ Q^B \mid \xi < T\right] \right| \leqslant cn^{-1/4}, \end{aligned}$$
(5.16)

and similarly,

$$\begin{aligned} \left| \mathbf{E}^{z,a}\left[ Q^S \,I_a; \zeta < \tau \right] \right| \leqslant cn^{-1/4}. \end{aligned}$$
(5.17)

If we let \(\sigma =\inf \{t\geqslant \zeta _1: |S_{2t}|\geqslant 2n^{1/2}\}\), we see that

$$\begin{aligned} \mathbf{P}^{z,a}(\xi <T; \zeta >\tau ; \mathcal {K}) \leqslant \mathbf{P}^{z,a}(\zeta _1<\tau <\zeta ) \leqslant c\log n/n^{1/2}, \end{aligned}$$
(5.18)

by the strong Markov property and the planar gambler’s ruin estimate (the gambler’s ruin estimate is for simple random walk, but this close to the origin the h-process is mutually absolutely continuous with respect to the simple walk.) We can show in the same way that

$$\begin{aligned} \mathbf{P}^{z,a}(\zeta < \tau ; T < \xi ; \mathcal {K})\leqslant c\log n/n^{1/2} \end{aligned}$$
(5.19)

Combining (5.15)–(5.19) completes the proof. \(\square \)