1 Introduction

It is well known that there are close connections, based on the reflection principle, between non-intersecting processes in one dimension and random matrices. There is a generalisation of the reflection principle for more general processes, due to Fomin [8], in which the non-intersection condition is replaced by one involving loop-erased paths. In the context of independent Brownian motions in suitable planar domains, this also has close connections to random matrices, specifically Cauchy-type ensembles. An example of this was first observed by Sato and Katori [34]. We will present further examples, in particular based on some domains which were discussed in Fomin’s original paper. We will also consider the circular setting, with periodic boundary conditions; for this we extend Fomin’s identity to the affine setting and show that, by considering independent Brownian motions in an annulus, we obtain a novel interpretation of the Circular Orthogonal Ensemble of random matrix theory.

1.1 Determinant Formulas for Loop-Erased Walks and Affine Generalisations

Determinant formulas for the total weight of one-dimensional non-intersecting processes have many variations, both in continuous and discrete settings. They are also known as the Karlin–McGregor formula for Markov processes [14, 16, 17, 19], or the Lindström–Gessel–Viennot lemma in enumerative combinatorics [11, 13, 31, 36]. Roughly speaking, the argument behind all these determinant formulas is the classical reflection principle, which allows the construction of a particular one-to-one ‘path-switching’ map from a set of intersecting paths onto itself, such that the map is its own inverse (see Sect. 2.1).

For two-dimensional state space processes, it is not clear how to perform the classical reflection principle, since the paths under consideration are allowed to have self-intersections (or loops). However, there is a generalisation of the reflection principle for more general (e.g. planar) paths, due to Fomin [8], in which the non-intersecting condition is replaced by one involving loop-erased paths. Then it is possible to obtain a determinant formula (Theorem 2.2) for the total weight of discrete planar processes which satisfy Fomin’s non-intersection condition, here stated in the context of Markov chains:

Fomin’s identity. Consider a time-homogeneous Markov chain whose state space is a discrete subset V of a simply connected domain \(\Omega \). Assume that the transitions of the chain are determined by a (weighted) planar directed graph (with vertex set V). Multiple loops are allowed. Distinguish a subset \(\partial \Gamma \subset V\) of boundary vertices and assume they all lie on the topological boundary \(\partial \Omega \). Assume that the vertices \(a_{n},\ldots ,a_{1}\in V\) and \(b_{1},\ldots ,b_{n}\in \partial \Gamma \) lie on the boundary \(\partial \Omega \) and are ordered counterclockwise (along \(\partial \Omega \)), as in Fig. 4. Then, if

$$\begin{aligned}h(a_{i},b_{j}),\quad 1\le i,j\le n,\end{aligned}$$

denotes the probability (or hitting probability) that the Markov chain, starting at \(a_{i}\), will first hit the boundary \(\partial \Gamma \) at the vertex \(b_{j}\) (if \(a_{i}\in \partial \Gamma \), the chain is required to enter \(V\setminus \partial \Gamma \) before reaching \(b_{j}\)), then the \(n\times n\) determinant

$$\begin{aligned} \det (h(a_{i},b_{j}))_{i,j=1}^{n}, \end{aligned}$$
(1.1)

is equal to the probability that n independent trajectories of the Markov chain \(X_{1},\ldots ,X_{n}\), starting at \(a_{1},\ldots ,a_{n}\), respectively, will first hit the boundary \(\partial \Gamma \) at locations \(b_{1},\ldots ,b_{n}\), respectively, and furthermore the trajectory \(X_{j}\) will never intersect the loop-erasure \(LE(X_{i})\) of \(X_{i}\), for all \(i<j\), that is,

$$\begin{aligned} X_{j}\cap LE(X_{i})=\emptyset ,\quad \text {for all}\,\,1\le i<j\le n. \end{aligned}$$
(1.2)

The above identity is the non-acyclic analogue of the Karlin–McGregor/Gessel–Viennot determinant formula for non-intersecting one-dimensional processes. In this respect, the following details are worth noting: because of the nature of the underlying graph, trajectories of the Markov chain are allowed to have loops and therefore, for a given trajectory, we can properly define its loop-erasure as the self-avoiding path resulting from erasing its loops chronologically. Moreover, the determinant (1.1) gives the locations of the hitting points \(b_{1},\ldots ,b_{n}\) along the boundary \(\partial \Gamma \), and the condition on the trajectories is given by (1.2), which forces the loop-erased paths to repel each other (see Sect. 2.2). The counterclockwise arrangement of paths is just a particular case of the more general combinatorial identity given by Fomin [8], which can be applied to a wide range of configurations of n distinct paths, depending on the location of the initial and final vertices and the topology of the planar domain \(\Omega \).
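
To make the determinant (1.1) concrete, here is a minimal numerical sketch (our own illustration in Python, with an arbitrarily chosen grid and boundary points, not a computation taken from [8]): for a simple random walk on a small rectangular grid whose top row plays the role of the absorbing boundary \(\partial \Gamma \), the hitting probabilities \(h(a_{i},b_{j})\) solve a discrete Dirichlet problem, and the determinant (1.1) can then be assembled directly; for this geometry the counterclockwise ordering and crossing hypotheses of Sect. 2.2 hold, so the determinant equals the probability described above.

import numpy as np

W, H = 7, 5                      # grid {0..W-1} x {0..H-1}; the top row y = H-1 is the absorbing boundary
interior = [(x, y) for x in range(W) for y in range(H - 1)]
index = {v: i for i, v in enumerate(interior)}

def neighbours(v):
    x, y = v
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < W and 0 <= y + dy < H]

def hitting_probability(target):
    # solve f(v) = sum_u p(v, u) f(u) for v off the boundary, with f = 1_{target} on the top row
    A, rhs = np.eye(len(interior)), np.zeros(len(interior))
    for v in interior:
        nbrs = neighbours(v)
        for u in nbrs:
            if u[1] == H - 1:
                rhs[index[v]] += (1.0 if u == target else 0.0) / len(nbrs)
            else:
                A[index[v], index[u]] -= 1.0 / len(nbrs)
    return np.linalg.solve(A, rhs)

# counterclockwise ordering along the boundary: a_2, a_1 left to right on the bottom row,
# b_1, b_2 right to left on the top row
a = [(4, 0), (1, 0)]
b = [(5, H - 1), (1, H - 1)]
h = np.array([[hitting_probability(bj)[index[ai]] for bj in b] for ai in a])
print(np.linalg.det(h))          # probability that X_1 ends at b_1, X_2 ends at b_2 and X_2 avoids LE(X_1)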

Section 3 is a first step towards the extension of the previous framework to non-simply connected domains of the complex plane. There, we state and prove an affine (circular) version of Fomin’s identity (Proposition 3.3), which can be seen as an extension of Fomin’s identity to the setting of the affine symmetric group \({\tilde{A}}_{n}\). In Sect. 5 we relate this affine version with the Circular Orthogonal Ensemble (COE) of random matrix theory. In the context of Markov chains, our affine version of Fomin’s identity can be stated as follows:

An affine version of Fomin’s identity. Consider a time-homogeneous Markov chain whose transitions are determined by the (directed) lattice strip \(G={\mathbb {Z}}\times \{0,1,\ldots ,N\}\). Assume that the transition probabilities are space-invariant with respect to a fixed horizontal translation \({\mathcal {S}}:G\rightarrow G\). If vertices \(a_{n},\ldots ,a_{1},b_{1},\ldots ,b_{n}\) are ordered counterclockwise along the boundary (as in Fig. 5), then the \(n\times n\) determinant

$$\begin{aligned} \det \left( \sum _{k\in {\mathbb {Z}}}\zeta ^{k}h(a_{i},{\mathcal {S}}^{k}b_{j})\right) _{i,j=1}^{n}, \end{aligned}$$
(1.3)

of hitting probabilities \(h(a_{i},{\mathcal {S}}^{k}b_{j})\), where \({\mathcal {S}}^{k}={\mathcal {S}}\circ {\mathcal {S}}^{k-1}\), \(k\in {\mathbb {Z}}\), and

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll} 1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}} \\ -1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

is equal to the probability that n independent trajectories of the Markov chain \(X_{1},\ldots ,X_{n}\), starting at \(a_{1},\ldots ,a_{n}\), respectively, will first hit the upper boundary \(\partial \Gamma ={\mathbb {Z}}\times \{N\}\) at any of the n cyclic permutations of the vertices \(b_{1},\ldots ,b_{n}\), shifted also by all possible horizontal translations by \({\mathcal {S}}^{k}\), \(k\in {\mathbb {Z}}\), and furthermore the trajectories are constrained to satisfy

$$\begin{aligned} X_{j}\cap LE(X_{j-1})=\emptyset ,\quad 1< j\le n,\quad \text {and}\quad X_{1}\cap LE({\mathcal {S}}X_{n})=\emptyset . \end{aligned}$$

It is important to note that the non-intersection condition above is related to the one between trajectories in a cylindrical lattice (or annulus on the complex plane), see Fig. 1 and the introduction of Sect. 3. In an acyclic graph, the above affine case agrees with the Gessel–Zeilberger formula for counting paths in alcoves [12] (see Sect. 3.4).

1.2 Scaling Limits

Our interest in the above determinant formulas stems from their applicability in the context of suitable scaling limits of Fomin’s identity and its affine version. It is well known that two-dimensional Brownian motion B is the scaling limit of simple random walks on different planar graphs [6]. Moreover, the loop-erasure of those random walks converges (in a certain sense) to a random self-avoiding continuous path in the complex plane called SLE(2), which belongs to the family of Schramm–Loewner evolutions, or SLE(k), \(k\ge 0\), for short [26, 35, 39]. As one might expect, the SLE(2) path is, in fact, a loop-erasure LE(B) of the Brownian motion B in a sense which can be made precise [40]. The previous considerations offer the possibility of interpreting, at least informally, the scaling limit of Fomin’s identity and its affine version in terms of two-dimensional Brownian motions in suitable complex domains. For example, since the determinants (1.1) and (1.3) involve hitting probabilities for a single Markov chain, they continue to make sense when h(a, b) is the Poisson kernel (or hitting density) of two-dimensional Brownian motion in suitable simply connected domains \(\Omega \) with smooth boundaries. One might expect that determinants of hitting densities are the scaling limits of the corresponding determinants of hitting probabilities for simple random walks, in square grid approximations of \(\Omega \), and, moreover, that the former determinants express non-crossing probabilities between Brownian paths and SLE(2) paths. This scaling limit has been established rigorously in the case of \(n=2\) paths [23,24,25], while ongoing work related to the general case \(n>2\) is linked to the theory of (local and global) multiple SLE [4, 20, 21, 24, 25].

Fig. 1: Affine setting

Our contribution in the previous context is the connection with random matrix theory that emerges from the following setting: assume that \(\Omega \) is a suitable complex (connected) domain with smooth boundary and \(h(z_{0},y)\) is the (hitting) density of the harmonic measure

$$\begin{aligned} \mu _{z_{0},\Omega }(A)={\mathbb {P}}^{z_{0}}(B_{T}\in A),\quad A\subset \partial \Omega , \end{aligned}$$

with respect to one-dimensional Lebesgue measure (length), where B under \({\mathbb {P}}^{z_{0}}\) denotes a two-dimensional Brownian motion starting at \(z_{0}\in \Omega \), and \(T=\inf \{t>0:B_{t}\notin \Omega \}\) is the first exit time of \(\Omega \) (see Sect. 4.1). More generally, \(h(z_{0},y)\) can be the hitting density of a diffusion in a suitable complex domain, with absorbing and normally reflecting boundary conditions (this idea is originally discussed in [8]). Then, for \(m\in {\mathbb {R}}\) and appropriately chosen (parametrized) positions \(x_{1},\ldots ,x_{n}\) and \(y_{1},\ldots ,y_{n}\) along the boundary \(\partial \Omega \), the determinants of hitting densities

$$\begin{aligned} H(x,y)=\det \left( h(x_{i},y_{j})\right) _{i,j=1}^{n}dy_{1}\cdots dy_{n}, \end{aligned}$$
(1.4)

and

$$\begin{aligned} H(x,y)=\det \left( \sum _{k\in {\mathbb {Z}}}\zeta ^{k}h(x_{i},y_{j}+mk)\right) _{i,j=1}^{n}dy_{1}\cdots dy_{n}, \end{aligned}$$
(1.5)

where

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll} 1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}} \\ -1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

can be interpreted, informally, as the probability that n independent ‘Brownian motions’ \(B_{i}\), \(i=1,\ldots ,n\), starting at positions \(x_{1},x_{2},\ldots ,x_{n}\), respectively, will first hit an absorbing boundary \(\partial \Gamma \subset \partial \Omega \) at (parametrized) positions in the intervals \((y_{i},y_{i}+dy_{i})\), \(i=1,\ldots ,n\), and whose trajectories are constrained to satisfy the condition

$$\begin{aligned} B_{j}\cap LE(B_{i})=\emptyset ,\quad \text {for all}\,\,1\le i<j\le n, \end{aligned}$$

in (1.4), or

$$\begin{aligned} B_{j}\cap LE(B_{j-1})=\emptyset ,\quad 1< j\le n,\quad \text {and}\quad B_{1}\cap LE(m+B_{n})=\emptyset , \end{aligned}$$

in the affine case (1.5). We remark that in the affine case, we assume \(\Omega \) to be invariant under a fixed (horizontal) translation by \(m\in {\mathbb {R}}\), so that \(m+B_{n}\) is the horizontal translation by m of the Brownian path \(B_{n}\), see Fig. 1. Note also that some hitting densities h(x, y) can be calculated explicitly for a number of important domains, such as disks and half-planes, and many others can be deduced from these by reflection and conformal invariance of two-dimensional Brownian motion. We consider examples of determinants of hitting densities in Sects. 4 and 5.
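
As a concrete instance of a kernel deduced by reflection and conformal invariance, consider the horizontal strip \({\mathbb {R}}\times (0,\pi )\) with normally reflecting lower edge and absorbing upper edge: for Brownian motion started at (x, 0), a standard computation (mapping the half-plane Poisson kernel conformally and unfolding the reflection; we use it here only as an illustration, up to the normalising constant) gives the hitting density \(h(x,y)=\left( 2\pi \cosh ((y-x)/2)\right) ^{-1}\) on the upper edge. The short Python sketch below, with arbitrarily chosen positions, assembles the corresponding determinant of the form (1.4) for \(n=3\).

import numpy as np

def strip_kernel(x, y):
    # hitting density on the absorbing upper edge of the strip R x (0, pi) for a Brownian motion
    # started at (x, 0) on the normally reflecting lower edge (constant chosen so it integrates to 1)
    return 1.0 / (2.0 * np.pi * np.cosh((y - x) / 2.0))

# positions ordered as in the text: x_1 > x_2 > x_3 on the lower edge, y_1 > y_2 > y_3 on the upper edge
x = np.array([1.5, 0.0, -1.5])
y = np.array([2.0, 0.3, -1.0])
Hxy = np.array([[strip_kernel(xi, yj) for yj in y] for xi in x])
print(np.linalg.det(Hxy))   # non-negative, consistent with the non-crossing interpretation of (1.4)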

Finally, if \(\partial \Gamma =\partial \Omega \), so that the whole boundary \(\partial \Omega \) is absorbing, we require a different notion of hitting density h(x, y) (since the paths need to ‘walk’ into the interior \(\Omega ^{\circ }=\Omega \setminus \partial \Omega \) before reaching their destination). In this case, in order to study determinants of the form (1.4) and (1.5), we consider the so-called excursion Poisson kernel. In this context, an example of a similar interpretation of determinants of hitting densities of the form (1.4) was first observed by Sato and Katori [34] (see Sect. 4.6).

1.3 Connections to Random Matrix Theory

Non-intersecting processes in one dimension have long been an integral part of random matrix theory, at least since the pioneering work of Dyson [7] in the 1960s. For example, it is well known that, if one considers n independent one-dimensional Brownian particles, started at the origin and conditioned not to intersect up to a fixed time T (see Sect. 4.2 for details), then the locations of the particles at time T have the same distribution as the eigenvalues of a random real symmetric \(n\times n\) matrix with independent centered Gaussian entries, with variance T on the diagonal and T / 2 above the diagonal (this is known as the Gaussian Orthogonal Ensemble (GOE)). Similar statements hold for the circular ensembles, see for example [16] or Proposition 5.4 below.
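
The GOE just described is straightforward to sample directly; the following sketch (our own) builds a real symmetric matrix with independent centred Gaussian entries, variance T on the diagonal and T/2 above it, and returns its ordered eigenvalues, which play the role of the particle positions at time T.

import numpy as np

def goe_eigenvalues(n, T, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=np.sqrt(T / 2.0), size=(n, n))
    M = (A + A.T) / np.sqrt(2.0)      # symmetric; variance T on the diagonal, T/2 above it
    return np.linalg.eigvalsh(M)      # ordered eigenvalues

print(goe_eigenvalues(5, T=1.0))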

In two dimensions, we can consider appropriate limits of the form

$$\begin{aligned} \lim _{\begin{array}{c} (x_{1},\ldots ,\,x_{n})\in C\\ x_{i}\rightarrow x\in \partial \Omega \end{array}}{\tilde{H}}(x,y),\quad (y_{1},\ldots ,y_{n})\in C, \end{aligned}$$
(1.6)

where \({\tilde{H}}(x,y)\) is an appropriate normalisation of the determinants H(x, y) in (1.4) and (1.5), and the positions \(x_{1},\ldots ,x_{n}\), \(y_{1},\ldots ,y_{n}\) are determined by chambers (alcoves) C of \({\mathbb {R}}^{n}\). These limits give the locations of the n hitting points \(y_{1},\ldots ,y_{n}\) along the absorbing boundary \(\partial \Gamma \), when the processes start at a single common point \(x\in \partial \Omega \). In a way, this is the two-dimensional analogue of the model described in the preceding paragraph. In Sect. 4 we show that the limits (1.6) agree with eigenvalue densities of Cauchy-type random matrix ensembles, for determinants of the form (1.4) (see [34] and Sect. 4.6 for similar asymptotic considerations regarding excursion Poisson kernels). For determinants of the form (1.5), in Sect. 5 we show that, by considering the hitting density of two-dimensional Brownian motion in an annulus in the complex plane, a certain limit of the form (1.6) agrees with the Circular Orthogonal Ensemble (COE) of random matrix theory (Proposition 5.5).

1.4 Organisation of the Paper

The paper is structured into two parts that can be read (essentially) independently. The first part (Sects. 2 and 3) is mainly concerned with the combinatorial results of Sect. 1.1. In Sect. 2 we give some background on the reflection principle and Fomin’s generalisation for loop-erased walks in discrete lattice models. In Sect. 3, we present the affine version of Fomin’s identity. The second part (Sects. 4 and 5) presents calculations and limits for determinants of hitting densities of the form (1.4) and (1.5). In Sect. 4, we show that for suitable simply connected domains, the determinants associated with Fomin’s identity converge, in a certain sense, to some known ensembles of random matrix theory. In Sect. 5 we consider the affine setting and, after revisiting the model of non-intersecting one-dimensional Brownian motions on the circle [16], we show that a determinant of the form (1.5), in the context of independent Brownian motions in an annulus, converges in a suitable limit to the Circular Orthogonal Ensemble.

2 The Reflection Principle and Fomin’s Generalisation

In this section, we consider the discrete versions of some of the determinant formulas discussed in the Introduction. This combinatorial approach has some advantages and will be particularly convenient in Sect. 2.2, where some of the main concepts are defined for discrete paths. Let \(G=(V,E,\omega )\) be a directed graph with no multiple edges, countable vertex set V and edge set \(E\subset V\times V\). The graph G need not be acyclic, so paths are allowed to contain loops. The set \(\omega \) is a family of pairwise distinct formal indeterminates \(\{\omega (e)\}_{e\in E}\) that we will call the weights of the edges. The imposed restriction on edge multiplicity is not essential, but most of the applications we have in mind share this condition.

Let us introduce the notation and terminology we will use throughout the following sections. A directed edge e from vertex \(a\in V\) to vertex \(b\in V\) will be denoted by \(a{\mathop {\rightarrow }\limits ^{e}}b\), and a path or walk P will mean a finite sequence of (directed) edges and vertices

$$\begin{aligned} P:a_{0}{\mathop {\rightarrow }\limits ^{e_{1}}}a_{1}{\mathop {\rightarrow }\limits ^{e_{2}}}a_{2}{\mathop {\rightarrow }\limits ^{e_{3}}}\cdots {\mathop {\rightarrow }\limits ^{e_{n}}}a_{n}. \end{aligned}$$

In this case, we say that P is a path from \(a_{0}\) to \(a_{n}\) of length n. For any pair of vertices \(a,b\in V\), we denote the set of all paths in G from a to b by \({\mathcal {H}}(a,b)\), and, if \(\mathbf{a}=(a_{1},\ldots ,a_{n})\) and \(\mathbf{b}=(b_{1},\ldots ,b_{n})\) are two n-tuples of vertices, then \({\mathcal {H}}(\mathbf{a},\mathbf{b})\) will denote the set of n-tuples of paths

$$\begin{aligned} {\mathcal {H}}(\mathbf{a},\mathbf{b})=\{\mathbf{P}=(P_{1},\ldots ,P_{n}): P_{i}\in {\mathcal {H}}(a_{i},b_{i}),\,\,\,\text {for}\,\,\, 1\le i\le n\}. \end{aligned}$$

The weight \(\omega (P)\) of a path P is defined as the product of its edge weights

$$\begin{aligned} \omega (P)=\prod _{i=1}^{n}\omega (e_{i}), \end{aligned}$$

if P is given as before. Analogously, the weight of an n-tuple \(\mathbf{P}=(P_{1},\ldots ,P_{n})\) is the product of the corresponding path weights \(\omega (\mathbf{P})=\prod _{i=1}^{n}\omega (P_{i})\). A quantity of interest will be the generating function

$$\begin{aligned} h(a,b)=\sum _{P\in {\mathcal {H}}(a,b)}\omega (P),\quad a,b\in V, \end{aligned}$$

which encodes all paths \(P\in {\mathcal {H}}(a,b)\) according to their weight. This expression should be understood as a formal power series in the independent variables \(\{\omega (e)\}_{e\in E}\).

Finally, two paths \(P_{1}\) and \(P_{2}\) in G intersect if they share at least one vertex (in their vertex-sequence definitions), and we will write this as \(P_{1}\cap P_{2}\not =\emptyset \). A family of paths \(\mathbf{P}\in {\mathcal {H}}(\mathbf{a},\mathbf{b})\) is intersecting if some two of them intersect. We will say that P is self-avoiding or has no loops if it does not visit the same vertex more than once, that is, if \(a_{i}\not = a_{j}\) in the vertex-sequence definition of P, for all \(0\le i<j\le n\).
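
On a small acyclic example, the generating function h(a, b) can be computed by brute-force enumeration of \({\mathcal {H}}(a,b)\); the sketch below uses a toy graph of our own choosing, with numerical weights standing in for the formal indeterminates. (When the graph has cycles the series is infinite; if the weights are transition probabilities of a Markov chain, it can instead be evaluated by solving a linear system, cf. the remark after Corollary 2.3.)

# a small acyclic weighted directed graph: edge (u, v) -> weight omega(u, v)
edges = {('a', 'u'): 0.5, ('a', 'v'): 0.5, ('u', 'v'): 0.3, ('u', 'b'): 0.7, ('v', 'b'): 1.0}
out = {}
for (u, v), w in edges.items():
    out.setdefault(u, []).append((v, w))

def h(a, b):
    # sum of omega(P) over all paths P from a to b (a finite sum here, since the graph is acyclic)
    if a == b:
        return 1.0
    return sum(w * h(v, b) for v, w in out.get(a, []))

print(h('a', 'b'))   # 0.5*0.7 + 0.5*0.3*1.0 + 0.5*1.0 = 1.0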

2.1 The Classical Reflection Principle

Fig. 2: The association \((P_1,P_2)\overset{\varphi }{\mapsto }({\tilde{P}}_1,{\tilde{P}}_2)\) is an involution, \(\varphi ^{2}=id\)

If the graph \(G=(V,E,\omega )\) is acyclic (loops are not allowed), the reflection principle relies upon the following property. Consider two paths \(P_{1}\) and \(P_{2}\) in G, and assume that the two paths intersect (see Fig. 2). Fix a total order on the set of vertices V and let \(A=\{v_{\alpha }:\alpha \in I\}\) be the (finite) set of intersection vertices between \(P_{1}\) and \(P_{2}\). Among all intersection vertices, let \(v_{\alpha _{0}}\) be the minimal one with respect to the given order, and split the paths \(P_{1}\) and \(P_{2}\) at the vertex \(v_{\alpha _{0}}\) into the corresponding subpaths:

$$\begin{aligned} P_{1}:a_{0}&{\mathop {\longrightarrow }\limits ^{P_{1}'}}v_{\alpha _{0}}{\mathop {\longrightarrow }\limits ^{P_{1}''}}a_{n}\\ P_{2}: a'_{0}&{\mathop {\longrightarrow }\limits ^{P_{2}'}}v_{\alpha _{0}}{\mathop {\longrightarrow }\limits ^{P_{2}''}}a'_{m}. \end{aligned}$$

Now interchange the parts \(P_{1}''\) and \(P_{2}''\) above. This procedure creates two new paths \({\tilde{P}}_{1}\) and \({\tilde{P}}_{2}\) given by

$$\begin{aligned} {{\tilde{P}}}_{1}:a_{0}&{\mathop {\longrightarrow }\limits ^{P_{1}'}}v_{\alpha _{0}}{\mathop {\longrightarrow }\limits ^{P_{2}''}}a'_{m}\\ {{\tilde{P}}}_{2}: a'_{0}&{\mathop {\longrightarrow }\limits ^{P_{2}'}}v_{\alpha _{0}}{\mathop {\longrightarrow }\limits ^{P_{1}''}}a_{n}. \end{aligned}$$

The paths \({\tilde{P}}_{1}\) and \({\tilde{P}}_{2}\) also intersect (in particular, \(v_{\alpha _{0}}\) is an intersection vertex) and, more importantly, their set of intersection vertices is also \(A=\{v_{\alpha }:\alpha \in I\}\). This means that the intersection vertices are invariant under the map \((P_{1},P_{2})\mapsto ({\tilde{P}}_{1},{\tilde{P}}_{2})\) and hence so is the minimum vertex \(v_{\alpha _{0}}\). Therefore, if we perform the same procedure to the paths \({\tilde{P}}_{1}\) and \({\tilde{P}}_{2}\), we recover the original paths \(P_{1}\) and \(P_{2}\). In other words, the map \((P_{1},P_{2})\mapsto ({\tilde{P}}_{1},{\tilde{P}}_{2})\) is an involution. Moreover, the weights are also invariant under this operation: \(\omega (P_{1})\omega (P_{2})=\omega ({\tilde{P}}_{1})\omega ({\tilde{P}}_{2})\).
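
A minimal sketch of this path-switching map for acyclic lattice paths, represented as vertex lists (the total order on V is taken to be lexicographic, and the example paths are our own):

def switch(P1, P2):
    # swap the tails of two intersecting paths at their minimal common vertex
    common = set(P1) & set(P2)
    if not common:
        return P1, P2                      # the paths do not intersect: nothing to do
    v = min(common)                        # minimal intersection vertex in the chosen total order
    i, j = P1.index(v), P2.index(v)        # in an acyclic graph each path visits v at most once
    return P1[:i] + P2[j:], P2[:j] + P1[i:]

# two up-right lattice paths intersecting at (2, 1) and (2, 2)
P1 = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (2, 3)]
P2 = [(2, 0), (2, 1), (2, 2), (3, 2), (3, 3)]
Q1, Q2 = switch(P1, P2)
assert switch(Q1, Q2) == (P1, P2)          # the map is an involution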

A careful application of the above argument leads to the following enumeration formula for non-intersecting paths by Karlin and McGregor [19] (in the context of Markov chains) and Lindström [31] (further developed by Gessel–Viennot [13]):

Theorem 2.1

In an acyclic graph, let \(\partial \Gamma \subset V\) be the distinguished set of vertices:

$$\begin{aligned} \partial \Gamma =\{a\in V:\not \exists \,a{\mathop {\rightarrow }\limits ^{e}}b\}. \end{aligned}$$

For arbitrary sets \(A=\{a_{1},\ldots ,a_{n}\}\subset V\) and \(B=\{b_{1},\ldots ,b_{n}\}\subset \partial \Gamma \), we have

$$\begin{aligned} \sum _{\sigma \in S_{n}}\mathrm{sgn}(\sigma )\sum _{\begin{array}{c} \mathbf{P}\in {\mathcal {H}}(\mathbf{a},\mathbf{b}_{\sigma })\\ P_{i}\cap P_{j}=\emptyset ,\,\,i\not =j \end{array}}\omega (\mathbf{P})=\det \left( h(a_{i},b_{j})\right) _{i,j=1}^{n}, \end{aligned}$$

where \(\mathbf{b}_{\sigma }=(b_{\sigma (1)},\ldots ,b_{\sigma (n)})\).
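
For a small acyclic graph, both sides of Theorem 2.1 can be checked by brute force. The sketch below uses a toy example of our own: up and up-right steps on a tiny grid, all weights equal to 1, so that h(a, b) is simply a path count; it enumerates the signed sum on the left hand side and compares it with the determinant.

import itertools
import numpy as np

W, H = 4, 3                              # vertices (x, y), 0 <= x < W, 0 <= y < H; the top row has no outgoing edges

def paths(a, b):
    # all directed paths from a to b using the steps (0, 1) and (1, 1) (an acyclic graph)
    if a == b:
        return [[a]]
    if a[1] >= b[1]:
        return []
    out = []
    for dx, dy in ((0, 1), (1, 1)):
        u = (a[0] + dx, a[1] + dy)
        if u[0] < W and u[1] < H:
            out += [[a] + p for p in paths(u, b)]
    return out

A = [(2, 0), (0, 0)]                     # a_1, a_2
B = [(3, 2), (1, 2)]                     # b_1, b_2 in the boundary row y = H - 1
h = np.array([[len(paths(ai, bj)) for bj in B] for ai in A])

signed_sum = 0
for sigma, sign in (((0, 1), 1), ((1, 0), -1)):
    for P1, P2 in itertools.product(paths(A[0], B[sigma[0]]), paths(A[1], B[sigma[1]])):
        if not set(P1) & set(P2):        # keep vertex-disjoint pairs only
            signed_sum += sign
print(signed_sum, "=", round(np.linalg.det(h)))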

2.2 Loop-Erased Walks and Fomin’s Identity

If the graph \(G=(V,E,\omega )\) is not acyclic, and the paths \(P_{1}\) and \(P_{2}\) intersect, then the invariance of the intersection vertices described in the previous section is no longer guaranteed, since an intersection vertex can be part of a loop. However, there is a modification of the reflection principle for general graphs, due to Fomin [8], which we describe below.

We briefly present the key concept of loop-erased walks introduced by Lawler [28].

Definition 1

For each path P in \(G=(V,E,\omega )\) of the form

$$\begin{aligned} a_{0}{\mathop {\rightarrow }\limits ^{e_{1}}}a_{1}{\mathop {\rightarrow }\limits ^{e_{2}}}a_{2}{\mathop {\rightarrow }\limits ^{e_{3}}}\cdots {\mathop {\rightarrow }\limits ^{e_{n}}}a_{n}, \end{aligned}$$

the loop-erasure of P, denoted LE(P), is the self-avoiding path obtained by chronological loop-erasure of P, as follows:

  • Let \(j_{0}=\max \{j:a_{j}=a_{0}\}\),

  • recursively, if \(j_{k}<n\), then \(j_{k+1}=\max \{j:a_{j}=a_{j_{k}+1}\}\),

  • if \(j_{k}=n\), then LE(P) is the path

    $$\begin{aligned} a_{j_{0}}{\mathop {\longrightarrow }\limits ^{e_{j_{0}+1}}}a_{j_{1}}{\mathop {\longrightarrow }\limits ^{e_{j_{1}+1}}}a_{j_{2}}{\mathop {\longrightarrow }\limits ^{e_{j_{2}+1}}}\cdots {\mathop {\longrightarrow }\limits ^{e_{j_{k-1}+1}}}a_{j_{k}}. \end{aligned}$$

This procedure erases loops in P in the order they appear, and the operation is iterated until no loop remains. Note in particular that LE(P) is a subpath of the original path P, with the same starting and end points \(a_{0}\) and \(a_{n}\), respectively.
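
Definition 1 translates directly into code; the following sketch (ours, with paths represented as vertex lists) performs the chronological loop-erasure.

def loop_erase(path):
    # chronological loop-erasure of a path given as a list of vertices (Definition 1)
    le, j = [], 0
    while True:
        j = max(k for k, u in enumerate(path) if u == path[j])   # last visit to the current vertex
        le.append(path[j])
        if j == len(path) - 1:
            return le
        j += 1

# the walk below traverses the loop 2 -> 3 -> 4 -> 2, which gets erased
print(loop_erase([1, 2, 3, 4, 2, 5]))   # [1, 2, 5]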

Using the above procedure, Fomin [8] introduced the so-called loop-erased switching for paths that are allowed to self-intersect. The loop-erased switching is as follows: consider two paths \(P_{1}:a_{0}=x_{1}{\mathop {\rightarrow }\limits ^{}}\cdots {\mathop {\rightarrow }\limits ^{}}a_{n}=y_{2}\) and \(P_{2}:a'_{0}=x_{2}{\mathop {\rightarrow }\limits ^{}}\cdots {\mathop {\rightarrow }\limits ^{}}a'_{m}=y_{1}\) in the graph G, starting from different vertices \(a_{0}\not =a'_{0}\), and assume \(P_{2}\) and \(LE(P_{1})\) intersect at least at one common vertex, that is, \(P_{2}\cap LE(P_{1})\not =\emptyset \) (see Fig. 3). Among all such intersection vertices, let \(v=a_{j_{i}}\) be the one with minimal index along the vertex sequence of \(LE(P_{1})\) (see Definition 1), and split the path \(P_{1}\) at the end of the edge \(a_{j_{i-1}}{\mathop {\longrightarrow }\limits ^{e_{j_{i-1}+1}}}v\) into two subpaths:

Fig. 3: The loop-erased switching. The path \(P_{2}\) intersects the loop-erased part of \(P_{1}\) and v is the ‘first intersection’ (left). Interchanging the paths at v, the new paths \({\tilde{P}}_{1}\) (black) and \({\tilde{P}}_{2}\) (gray) satisfy the same property, that is, \({\tilde{P}}_{2}\) intersects the loop-erased part of \({\tilde{P}}_{1}\) and v is the ‘first intersection’ (right)

$$\begin{aligned} P_{1}:\,a_{0}{\mathop {\longrightarrow }\limits ^{P_{1}'(v)}}v{\mathop {\longrightarrow }\limits ^{P_{1}''(v)}}a_{n}. \end{aligned}$$

This partition ensures that all possible loops of \(P_{1}\) ‘rooted’ at v are part of \(P_{1}''(v)\), so that \(P_{1}''(v)\) does not intersect the path \(LE(P_{1}'(v))\) at any vertex different from v. Now, if we split the path \(P_{2}\) at its first visit to v

$$\begin{aligned} P_{2}:\,a'_{0}{\mathop {\longrightarrow }\limits ^{P_{2}'(v)}}v{\mathop {\longrightarrow }\limits ^{P_{2}''(v)}}a'_{m}, \end{aligned}$$

then, by construction of v, \(P_{2}''(v)\) does not visit any other vertex of \(LE(P_{1}'(v))\), except for v, so it shares the same property as \(P_{1}''(v)\). The latter common condition allows us to interchange the parts \(P_{1}''(v)\) and \(P_{2}''(v)\) at the vertex v, and create new paths

$$\begin{aligned} {{\tilde{P}}}_{1}:a_{0}&{\mathop {\longrightarrow }\limits ^{P_{1}'(v)}}v{\mathop {\longrightarrow }\limits ^{P_{2}''(v)}}a'_{m}\\ {{\tilde{P}}}_{2}: a'_{0}&{\mathop {\longrightarrow }\limits ^{P_{2}'(v)}}v{\mathop {\longrightarrow }\limits ^{P_{1}''(v)}}a_{n}. \end{aligned}$$

Note that the new paths \({\tilde{P}}_{2}\) and \(LE({\tilde{P}}_{1})\) also intersect (v is an intersection vertex), and therefore \({{\tilde{P}}}_{2}\cap LE({{\tilde{P}}}_{1})\not =\emptyset \). These conditions ensure that the map \((P_{1},P_{2})\mapsto ({{\tilde{P}}}_{1},{{\tilde{P}}}_{2})\) is an involution, and the ‘minimality’ of the intersection vertex v is preserved, exactly as in Sect. 2.1. We also have \(\omega (P_{1})\omega (P_{2})=\omega ({\tilde{P}}_{1})\omega ({\tilde{P}}_{2})\).
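
The loop-erased switching itself can be transcribed as follows (a sketch of ours, with paths as vertex lists): \(P_{1}\) is cut at the step with which its loop-erasure enters v, so that all loops of \(P_{1}\) rooted at v fall into the second part, and \(P_{2}\) is cut at its first visit to v.

def loop_erase_indices(path):
    # indices j_0 < j_1 < ... of Definition 1, so that [path[j] for j in idx] is LE(path)
    idx, j = [], 0
    while True:
        j = max(k for k, u in enumerate(path) if u == path[j])
        idx.append(j)
        if j == len(path) - 1:
            return idx
        j += 1

def loop_erased_switch(P1, P2):
    # Fomin's switching of P1 and P2 at the first vertex of LE(P1) visited by P2
    idx = loop_erase_indices(P1)
    hits = [i for i, j in enumerate(idx) if P1[j] in set(P2)]
    if not hits:
        return P1, P2                            # P2 does not meet LE(P1): nothing to switch
    i = hits[0]
    v = P1[idx[i]]                               # the 'first intersection' along LE(P1)
    s1 = idx[i - 1] + 1 if i > 0 else 0          # P1 is split at the arrival of the edge entering v
    s2 = P2.index(v)                             # P2 is split at its first visit to v
    return P1[:s1] + P2[s2:], P2[:s2] + P1[s1:]

# example: P1 has a loop at the vertex 3, LE(P1) = [1, 2, 3, 6], and P2 meets LE(P1) at 3
P1 = [1, 2, 3, 4, 5, 3, 6]
P2 = [7, 8, 3, 9]
Q1, Q2 = loop_erased_switch(P1, P2)
print(Q1, Q2)                                    # [1, 2, 3, 9] and [7, 8, 3, 4, 5, 3, 6]
assert loop_erased_switch(Q1, Q2) == (P1, P2)    # the switching is an involution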

The following theorem (Theorem 7.1 in [8]) is an application of the above loop-erased switching procedure. Fix a distinguished subset of vertices \(\partial \Gamma \subset V\) and call it the absorbing boundary. For \(a\in V\) and \(b\in \partial \Gamma \), denote by \({\mathcal {H}}^{+}(a,b)\subset {\mathcal {H}}(a,b)\) the set of all paths of positive length

$$\begin{aligned} a{\mathop {\rightarrow }\limits ^{e_{1}}}a_{1}{\mathop {\rightarrow }\limits ^{e_{2}}}a_{2}{\mathop {\rightarrow }\limits ^{e_{3}}}\cdots {\mathop {\rightarrow }\limits ^{e_{n}}}b, \end{aligned}$$

such that all the internal vertices \(a_{1},\ldots ,a_{n-1}\) lie in \(V\setminus \partial \Gamma \). If \(a\in \partial \Gamma \), we assume \(n\ge 2\), so that the path walks into \(V\setminus \partial \Gamma \) before reaching the vertex b. Analogously, define \({\mathcal {H}}^{+}(\mathbf{a},\mathbf{b})\) for n-tuples of paths \(\mathbf{P}=(P_{1},\ldots ,P_{n})\) as at the beginning of Sect. 2, that is

$$\begin{aligned} {\mathcal {H}}^{+}(\mathbf{a},\mathbf{b})=\{\mathbf{P}=(P_{1},\ldots ,P_{n}): P_{i}\in {\mathcal {H}}^{+}(a_{i},b_{i}),\,\,\,\text {for}\,\,\, 1\le i\le n\}. \end{aligned}$$

Theorem 2.2

(Fomin’s identity) Let \(G=(V,E,\omega )\) be a graph satisfying the above assumptions and let \(\partial \Gamma \subset V\). Let \(A=\{a_{1},\ldots ,a_{n}\}\subset V\) and \(B=\{b_{1},\ldots ,b_{n}\}\subset \partial \Gamma \) be two labelled sets of distinct vertices. Then

$$\begin{aligned} \sum _{\sigma \in S_{n}}\mathrm{sgn}(\sigma )\sum _{\begin{array}{c} \mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},\mathbf{b}_{\sigma })\\ P_{j}\cap LE(P_{i})=\emptyset ,\,\,i<j \end{array}}\omega (\mathbf{P})=\det (h(a_{i},b_{j}))_{i,j=1}^{n}, \end{aligned}$$
(2.1)

where \(\mathbf{b}_{\sigma }=(b_{\sigma (1)},\ldots ,b_{\sigma (n)})\) and

$$\begin{aligned} h(a,b)=\sum _{P\in {\mathcal {H}}^{+}(a,b)}\omega (P),\quad a\in V, b\in \partial \Gamma . \end{aligned}$$

Remark

Note that the above theorem agrees with Theorem 2.1 if the graph under consideration is acyclic. Also, note that Theorem 2.2 does not give the total weight of families of non-intersecting paths in G connecting A and B (in the strict sense of non-intersection), but the paths are constrained to satisfy

$$\begin{aligned} P_{j}\cap LE(P_{i})=\emptyset ,\quad \text {for all}\,\,\,i<j, \end{aligned}$$

which forces the corresponding loop-erased parts to repel each other.

Corollary 2.3

Assume that G is planar and embedded into a connected planar domain \(\Omega \) in such a way that the vertices in the absorbing boundary \(\partial \Gamma \) lie on the topological boundary \(\partial \Omega \). Let \(A\subset V\) and \(B\subset \partial \Gamma \) be as in Theorem 2.2, and assume that, whenever \(i>i'\) and \(j<j'\), every path \(P\in {\mathcal {H}}^{+}(a_{i},b_{j})\) intersects every path \(P'\in {\mathcal {H}}^{+}(a_{i'},b_{j'})\) at a vertex in \(V\setminus \partial \Gamma \) (see Fig. 4). In this case, the only allowable permutation in (2.1) is the identity permutation, and therefore

$$\begin{aligned} \sum _{\begin{array}{c} \mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},\mathbf{b})\\ P_{j}\cap LE(P_{j-1})=\emptyset ,\,\,1<j\le n \end{array}}\omega (\mathbf{P})=\det (h(a_{i},b_{j}))_{i,j=1}^{n}. \end{aligned}$$
(2.2)

In particular, if the weight function \(\omega \) is non-negative, then the right hand side of (2.2) is non-negative.

Fig. 4: The graph G is embedded into \(\Omega \) and the vertices \(a_{n},\ldots ,a_{1},b_{1},\ldots ,b_{n}\) are ordered counterclockwise along \(\partial \Omega \)

Remark

Assume that the vertex set V is the state space of a time-homogeneous Markov chain X and that the possible transitions between states are determined by the planar graph G. That is, the transition probabilities p(a, b) are positive if and only if there is an edge \(a{\mathop {\rightarrow }\limits ^{e}}b\), in which case \(\omega (e)=p(a,b)\). Then the assertion of Corollary 2.3 has the following probabilistic interpretation: the generating function

$$\begin{aligned} h(a,b)=\sum _{P\in {\mathcal {H}}^{+}(a,b)}\omega (P),\quad a\in V, b\in \partial \Gamma , \end{aligned}$$

is the hitting probability \({\mathbb {P}}_{a}(X_{T}=b, T<\infty )\), where T is the first time the chain X hits the boundary \(\partial \Gamma \) (if \(a\in \partial \Gamma \), the Markov chain is required to enter \(V\setminus \partial \Gamma \) before reaching \(\partial \Gamma \)). Then the left hand side of (2.2) is equal to the probability that n independent trajectories \(X^{1},\ldots ,X^{n}\) of the Markov process X, starting at locations \(a_{1},a_{2},\ldots ,a_{n}\), respectively, will hit the boundary \(\partial \Gamma \) for the first time at the points \(b_{1},b_{2},\ldots ,b_{n}\), respectively, and furthermore the trajectory \(X^{j}\) will not intersect the loop-erased path \(LE(X^{i})\) at any vertex in \(V\setminus \partial \Gamma \), for all \(i<j\), that is,

$$\begin{aligned} X^{j}\cap LE(X^{i})=\emptyset ,\quad \text {for all}\,\,\,1\le i<j\le n. \end{aligned}$$
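
The probabilistic interpretation above can be checked numerically by combining a loop-erasure routine with straightforward simulation. The sketch below (our own illustration: simple random walk on a small grid with absorbing top row, n = 2, arbitrarily chosen points) estimates the left hand side of (2.2) by Monte Carlo; the estimate can be compared with \(\det (h(a_{i},b_{j}))\) computed, for the same grid and endpoints, by the linear-system sketch of Sect. 1.1.

import numpy as np
rng = np.random.default_rng(1)

W, H = 7, 5                              # grid {0..W-1} x {0..H-1}; the top row is the absorbing boundary
a = [(4, 0), (1, 0)]                     # a_1, a_2
b = [(5, H - 1), (1, H - 1)]             # b_1, b_2 (counterclockwise ordering: a_2, a_1, b_1, b_2)

def neighbours(v):
    x, y = v
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < W and 0 <= y + dy < H]

def walk(start):                         # run a simple random walk until it hits the top row
    path, v = [start], start
    while v[1] < H - 1:
        nbrs = neighbours(v)
        v = nbrs[rng.integers(len(nbrs))]
        path.append(v)
    return path

def loop_erase(path):                    # chronological loop-erasure (Definition 1)
    le, j = [], 0
    while True:
        j = max(k for k, u in enumerate(path) if u == path[j])
        le.append(path[j])
        if j == len(path) - 1:
            return le
        j += 1

trials, hits = 20000, 0
for _ in range(trials):
    X1, X2 = walk(a[0]), walk(a[1])
    if X1[-1] == b[0] and X2[-1] == b[1] and not set(X2) & set(loop_erase(X1)):
        hits += 1
print(hits / trials)                     # Monte Carlo estimate of det(h(a_i, b_j))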

Fig. 5: The lattice strip G with vertex set \(V={\mathbb {Z}}\times \{0,1,\ldots ,N\}\)

3 Affine Version of Fomin’s Identity

In this section, we extend Fomin’s identity to the setting of the affine symmetric group (Theorem 3.1) and consider its natural projection onto the cylindrical lattice (Proposition 3.3). Our main motivation is to present a preliminary extension of the framework considered in Sect. 4 to non-simply connected domains, and show, in Sect. 5, an interesting connection with circular ensembles of random matrix theory.

As we discussed in Sect. 2.2, the interaction between n paths \(P_{1},P_{2},\ldots ,P_{n}\) imposed in Fomin’s identity (Theorem 2.2) is given by the condition

$$\begin{aligned} P_{j}\cap LE(P_{i})=\emptyset ,\quad \text {for all}\,\,i<j. \end{aligned}$$
(3.1)

In particular, restricted to the lattice strip G of Fig. 5, the above condition ensures a type of ‘repulsion’ between consecutive paths from left to right, that is, every path \(P_{j}\) will not intersect the loop-erased part \(LE(P_{j-1})\) of the path to its right. Theorem 3.1 below is an extension of Fomin’s identity in the sense that we consider families of paths \(P_{1},P_{2},\ldots ,P_{n}\) subject to (3.1) and also subject to an extra non-intersection condition between the path \(P_{1}\) and the translation to the right of \(P_{n}\), given by a fixed translation \({\mathcal {S}}\) of the graph G (see Fig. 6), that is

$$\begin{aligned} P_{1}\cap LE({\mathcal {S}}P_{n})=\emptyset . \end{aligned}$$
(3.2)

This type of interaction is helpful when the lattice strip G is projected onto the cylindrical lattice\({{\tilde{G}}}\), modulo the translation \({\mathcal {S}}\) (or affine setting, see Fig. 7). In this case, the conditions (3.1) and (3.2) will jointly ensure that the projected paths \({\tilde{P}}_{1},{\tilde{P}}_{2},\ldots ,{\tilde{P}}_{n}\), in \({\tilde{G}}\), also satisfy the analogous ‘left to right’ non-intersection condition

$$\begin{aligned} {\tilde{P}}_{j}\cap LE({\tilde{P}}_{j-1})=\emptyset ,\quad 1< j\le n,\quad \text {and}\quad {\tilde{P}}_{1}\cap LE({\tilde{P}}_{n})=\emptyset , \end{aligned}$$

(see Sects. 3.2 and 3.3).

3.1 Affine Version of Fomin’s Identity

Consider the lattice strip \(G=(V,E,\omega )\), given by the vertex set \(V={\mathbb {Z}}\times \{0,1,\ldots ,N\}\) and connected by directed horizontal and vertical edges, in both positive and negative directions. We also assume that the weights \(\{\omega (e)\}_{e\in E}\) are invariant under horizontal translations by \(v=(M,0)\), for some positive \(M\in {\mathbb {Z}}\). Let \(\partial \Gamma =\{(i,N): i\in {\mathbb {Z}}\}\) denote the upper boundary of the lattice strip and consider the set \({\mathcal {H}}^{+}(\mathbf{a},\mathbf{b})\) for \(\mathbf{a}=(a_{1},\ldots ,a_{n})\), \(\mathbf{b}=(b_{1},\ldots ,b_{n})\) vectors of vertices, as defined in Sect. 2.2. We have the following.

Fig. 6: Loop-erased switching over the paths \(P_1\) and \({\mathcal {S}}P_n\)

Fig. 7: A path in the cylindrical lattice \({\tilde{G}}\) with winding number \(k=1\)

Theorem 3.1

Consider integers \(i_{n}<i_{n-1}<\cdots<i_{1}<i_{n}+M\) and \(j_{n}<j_{n-1}<\cdots<j_{1}<j_{n}+M\). Define the following two n-tuples of vertices in G

$$\begin{aligned} a_{k}:=(i_{k},0),\quad \quad b_{k}:=(j_{k},N),\quad 1\le k\le n. \end{aligned}$$

Note that if \({\mathcal {S}}:G\rightarrow G\) is the horizontal translation by (M, 0), then the \(2(n+1)\) vertices \(a_{n},\ldots ,a_{1},{\mathcal {S}}a_{n}\), \({\mathcal {S}}b_{n},b_{1},\ldots ,b_{n}\) are ordered counterclockwise along the topological boundary of the lattice strip G (see Fig. 5). We then have

$$\begin{aligned} \sum _{\begin{array}{c} \mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},\mathbf{b}) \\ P_{j}\cap \text {LE}(P_{j-1})=\emptyset ,\quad 1< j\le n \\ P_{1}\cap \text {LE}({\mathcal {S}}P_{n})=\emptyset \end{array}}\omega (\mathbf{P})=\sum _{\sigma \in S_{n}}\sum _{\begin{array}{c} k_{i}\in {\mathbb {Z}}\\ k_{1}+k_{2}+\cdots +k_{n}=0 \end{array}}\mathrm{sgn}(\sigma )\sum _{\mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a}, {\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma })}\omega (\mathbf{P}), \end{aligned}$$
(3.3)

where \({\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma }=({\mathcal {S}}^{k_{1}}b_{\sigma (1)},\ldots ,{\mathcal {S}}^{k_{n}}b_{\sigma (n)})\) and \({\mathcal {S}}^{k_{i}}={\mathcal {S}}\circ {\mathcal {S}}^{k_{i}-1}\), \(k_{i}\ge 2\). If, as before, \(h(a,b)=\sum _{P\in {\mathcal {H}}^{+}(a,b)}\omega (P)\), then the right hand side of (3.3) takes the form

$$\begin{aligned} \sum _{\sigma \in S_{n}}\sum _{\begin{array}{c} k_{i}\in {\mathbb {Z}}\\ k_{1}+k_{2}+\cdots +k_{n}=0 \end{array}}\mathrm{sgn}(\sigma )\prod _{i=1}^{n}h(a_{i},{\mathcal {S}}^{k_{i}}b_{\sigma (i)}). \end{aligned}$$

Remark

Unlike Fomin’s identity, the extra condition \(P_{1}\cap LE({\mathcal {S}}P_{n})=\emptyset \) in (3.3) forces us to consider families of paths where the n end vertices are permutations and translations of the originals \((b_{1},\ldots ,b_{n})\), see the proof below. In particular, the end vertices should vary among n-tuples \( {\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma }=({\mathcal {S}}^{k_{1}}b_{\sigma (1)},\ldots ,{\mathcal {S}}^{k_{n}}b_{\sigma (n)})\), with \(\sigma \in S_{n}\) and \(k_{1}+\cdots +k_{n}=0\), \(k_{i}\in {\mathbb {Z}}\). This can be thought of as the action of the (infinite) affine symmetric group \({\tilde{A}}_{n}\) on the vertices \((b_{1},\ldots ,b_{n})\).

Remark

In the acyclic case, the above theorem agrees with the Gessel–Zeilberger formula for counting paths in alcoves [12].

Proof of Theorem 3.1

We will follow the strategy of proof of Fomin’s identity (Theorem 6.1 in [8]), that is, we will give a sign-reversing involution on the set of summands on the right hand side of (3.3) which violate the condition

$$\begin{aligned} P_{j}\cap&LE(P_{i})=\emptyset ,\quad \text {for all}\quad 1\le i<j\le n,\quad \text {and}\\ \nonumber P_{j}\cap&LE({\mathcal {S}}P_{n})=\emptyset ,\quad \text {for all}\quad 1\le j\le n. \end{aligned}$$
(3.4)

As a consequence, the sum of all of the latter terms will vanish, and the sum of the remaining terms, those which satisfy (3.4), simplifies to the desired expression on the left hand side of (3.3).

The sign-reversing involution is as follows. For \(n\ge 2\), let \(\sigma \in S_{n}\) and let \(k_{i}\), \(1\le i\le n\), be integers such that \(k_{1}+k_{2}+\cdots +k_{n}=0\). Consider a family of paths \(\mathbf{P}\in {\mathcal {H}}(\mathbf{a},{\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma })\) which violates the condition (3.4). We will construct a new family of paths \(\tilde{\mathbf{P}}\in {\mathcal {H}}(\mathbf{a},{\mathcal {S}}^{{{\tilde{\mathbf{k}}}}}{} \mathbf{b}_{{\tilde{\sigma }}})\), with \({\tilde{\sigma }}\in S_{n}\) and \({{\tilde{k}}}_{1}+{{\tilde{k}}}_{2}+\cdots +{{\tilde{k}}}_{n}=0\), that also violates the condition (3.4), and satisfies \(\omega (\tilde{\mathbf{P}})=\omega (\mathbf{P})\) and sgn\(({\tilde{\sigma }})=-\)sgn\((\sigma )\). The construction of the new family \(\tilde{\mathbf{P}}\) is essentially an application of Fomin’s loop-erased switching (Sect. 2.2) to the paths

$$\begin{aligned} P_{n},P_{n-1},\ldots ,P_{1},{\mathcal {S}}P_{n}. \end{aligned}$$

This construction will also ensure that the correspondence \(\mathbf{P}\mapsto \tilde{\mathbf{P}}\) is one-to-one, as desired.

To simplify notation, let us write \(a_{0}:={\mathcal {S}}a_{n}\) and denote the corresponding path starting at \({\mathcal {S}}a_{n}\) by \(P_{0}:={\mathcal {S}}P_{n}\). Choose indices \(i'\) and \(j'\) as follows. Since the family \(\mathbf{P}\in {\mathcal {H}}(\mathbf{a},{\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma })\) violates (3.4), the set of indices \(0\le i<j\le n\) such that \(P_{j}\cap LE(P_{i})\not =\emptyset \) is not empty, so we can choose \(i'\), \(0\le i'< n\), to be the minimum among those indices and consider the path LE\((P_{i'})\). Along the latter path, choose a vertex \(v'\) and an index \(j'\) in the following order:

  • Along the vertex sequence of the path LE\((P_{i'})\), choose \(v'\) as the ‘closest’ (that is, with minimal index) intersection vertex to the starting vertex \(a_{i'}\).

  • Now consider the set of indices \(\{j:1\le i'<j\le n\}\) such that \(P_{j}\) intersects LE\((P_{i'})\) at \(v'\) (in other words, \(v'\in P_{j}\cap \text {LE}(P_{i'})\)), and let \(j'\) be the minimum of this set.

We have two different scenarios, depending on whether \(P_{i'}\) is the path \({\mathcal {S}}P_{n}\) or not. If \(i'\not =0\) (so that \(P_{i'}\) is not the path \({\mathcal {S}}P_{n}\)), we perform the usual loop-erased switching (Sect. 2.2) on the paths \(P_{i'}\) and \(P_{j'}\) at the vertex \(v'\), that is, we define new paths

$$\begin{aligned} {{\tilde{P}}}_{i'}: a_{i'}&\xrightarrow {P_{i'}'}v'\xrightarrow {P_{j'}''}{\mathcal {S}}^{k_{j'}}b_{\sigma (j')}\\ {{\tilde{P}}}_{j'}: a_{j'}&\xrightarrow {P_{j'}'}v'\xrightarrow {P_{i'}''}{\mathcal {S}}^{k_{i'}}b_{\sigma (i')}. \end{aligned}$$

For the remaining paths, \(i\notin \{i',j'\}\), define \({{\tilde{P}}}_{i}:=P_{i}\). The original family \(\mathbf{P}\in {\mathcal {H}}(\mathbf{a},{\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma })\) is then mapped to a new family of paths \(\tilde{\mathbf{P}}\in {\mathcal {H}}(\mathbf{a},{\mathcal {S}}^{{{\tilde{\mathbf{k}}}}}{} \mathbf{b}_{{\tilde{\sigma }}})\), where \(\tilde{\mathbf{k}}=(k_{1},\ldots ,k_{j'},\ldots ,k_{i'},\ldots ,k_{n})\) and \({\tilde{\sigma }}=\sigma \circ (i',j')\in S_{n}\) are the vector \(\mathbf{k}\) and permutation \(\sigma \), with the entries \(i'\) and \(j'\) interchanged, respectively. Note that the sum of the entries of \(\tilde{\mathbf{k}}\) is zero, as desired, and \(\mathrm{sgn}({\tilde{\sigma }})=-\mathrm{sgn}(\sigma )\). Moreover, the family \(\tilde{\mathbf{P}}\) also violates the condition (3.4) since the paths \({{\tilde{P}}}_{i'}\) and \({{\tilde{P}}}_{j'}\) share the vertex \(v'\). Note that the weights are also preserved: \(\omega (\tilde{\mathbf{P}})=\omega (\mathbf{P})\).

In the second case, when \(i'=0\), a more careful selection of paths is needed: we perform the loop-erased switching over the paths \({\mathcal {S}}P_{n}\) and \(P_{j'}\):

$$\begin{aligned} {\mathcal {S}}P_{n}&: {\mathcal {S}}a_{n}\xrightarrow {({\mathcal {S}}P_{n})'}v'\xrightarrow {({\mathcal {S}}P_{n})''}{\mathcal {S}}{\mathcal {S}}^{k_{n}}b_{\sigma (n)}\\ P_{j'}&: a_{j'}\xrightarrow {P_{j'}'}v'\xrightarrow {P_{j'}''}{\mathcal {S}}^{k_{j'}}b_{\sigma (j')}, \end{aligned}$$

and create the two new paths

$$\begin{aligned} {{\tilde{P}}}_{n}&: a_{n}\xrightarrow {P_{n}'}{\mathcal {S}}^{-1}v'\xrightarrow {{\mathcal {S}}^{-1}P_{j'}''}{\mathcal {S}}^{-1}{\mathcal {S}}^{k_{j'}}b_{\sigma (j')}\\ {{\tilde{P}}}_{j'}&: a_{j'}\xrightarrow {P_{j'}'}v'\xrightarrow {({\mathcal {S}}P_{n})''}{\mathcal {S}}{\mathcal {S}}^{k_{n}}b_{\sigma (n)}, \end{aligned}$$

(see Fig. 6). The rest of the paths remain invariant, \({{\tilde{P}}}_{i}:=P_{i}\) for \(i\notin \{n,j'\}\). Thus, the new family \(\tilde{\mathbf{P}}=({\tilde{P}}_1,\ldots ,{\tilde{P}}_n)\) satisfies \(\tilde{\mathbf{P}}\in {\mathcal {H}}(\mathbf{a},{\mathcal {S}}^{{{\tilde{\mathbf{k}}}}}{} \mathbf{b}_{{\tilde{\sigma }}})\), with \(\tilde{\mathbf{k}}=(k_{1},\ldots ,k_{n}+1,\ldots ,k_{n-1},k_{j'}-1)\) and \({\tilde{\sigma }}=\sigma \circ (j',n)\in S_{n}\). Note again that the sum of the entries of \(\tilde{\mathbf{k}}\) is zero and \(\mathrm{sgn}({\tilde{\sigma }})=-\mathrm{sgn}(\sigma )\). Moreover, since the weight \(\omega \) is invariant under horizontal translations, we have \(\omega (\tilde{\mathbf{P}})=\omega (\mathbf{P})\). We only need to show that the family \(\tilde{\mathbf{P}}\) violates the condition (3.4) as well, but this is clearly the case since the paths \({{\tilde{P}}}_{j'}\) and \(LE({\mathcal {S}}{{\tilde{P}}}_{n})\) intersect at the vertex \(v'\), that is

$$\begin{aligned} {{\tilde{P}}}_{j'}\cap LE({\mathcal {S}}{{\tilde{P}}}_{n})\not =\emptyset . \end{aligned}$$

Therefore, in both cases, applying the loop-erased switching to the family \(\tilde{\mathbf{P}}\), we recover the original family \(\mathbf{P}\), so the corresponding map from \(\mathbf{P}\) to \(\tilde{\mathbf{P}}\) is an involution on the set of paths that violate (3.4). Moreover, since \(\mathrm{sgn}(\sigma )\omega (\mathbf{P})=-\mathrm{sgn}({\tilde{\sigma }})\omega (\tilde{\mathbf{P}})\), the sum of all these terms vanishes on the right hand side of (3.3), and therefore the total sum is

$$\begin{aligned} \sum _{\sigma \in S_{n}}\sum _{\begin{array}{c} k_{i}\in {\mathbb {Z}}\\ k_{1}+k_{2}+\cdots +k_{n}=0 \end{array}}\mathrm{sgn}(\sigma )\sum _{\begin{array}{c} \mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},{\mathcal {S}}^\mathbf{k}{} \mathbf{b}_{\sigma }) \\ \mathbf{P}\,\,\text {satisfies}\,\,(3.4) \end{array}}\omega (\mathbf{P}). \end{aligned}$$
(3.5)

Finally, in the expression above, if a family \(\mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},{\mathcal {S}}^\mathbf{k}\mathbf{b}_{\sigma })\) satisfies (3.4), the loop-erased parts \(LE(P_{j})\), \(1\le j\le n\), are pairwise disjoint and then \(\sigma \) must be the identity permutation and \(k_{1}=k_{2}=\cdots =k_{n}=0\), as required. In this case, the condition (3.4) on paths can be simplified to the one in the left hand side of (3.3). \(\square \)

3.2 Projections onto the Cylinder

As described in the introduction of Sect. 3, a useful application of Theorem 3.1 arises when we consider the projection of the lattice strip G onto the cylindrical lattice, modulo a translation \({\mathcal {S}}\) (see Fig. 7). Intuitively, a family of n (loop-erased) paths can wind around the cylinder several times (equivalently, the end vertices are translated by \({\mathcal {S}}^{m}={\mathcal {S}}\circ {\mathcal {S}}^{m-1}\), \(m\in {\mathbb {Z}}\), in the strip) before reaching their destinations. Moreover, there are exactly n different ways in which the n paths can reach their destinations without intersecting, given by the n ‘cyclic permutations’ of the end vertices.

Corollary 3.2 and Proposition 3.3 make the above considerations precise. These considerations give a more tractable form of Theorem 3.1, first as a sum of n determinants in Corollary 3.2 and then as a single determinant in Proposition 3.3.

Corollary 3.2

In the context of Theorem 3.1, by summing up in (3.3) over all the weights of all families of paths \(\mathbf{P}=(P_{1},\ldots ,P_{n})\) starting at \(\mathbf{a}=(a_{1},\ldots ,a_{n})\), and ending at all possible translations of \(\mathbf{b}=(b_{1},\ldots ,b_{n})\) by \({\mathcal {S}}^{m}\), \(m\in {\mathbb {Z}}\), we obtain

$$\begin{aligned} \sum _{\begin{array}{c} \mathbf{P}\in \bigcup _{m\in {\mathbb {Z}}}{\mathcal {H}}^{+}(\mathbf{a},{\mathcal {S}}^{\mathbf{m}}{} \mathbf{b}) \\ P_{j}\cap \text {LE}(P_{j-1})=\emptyset ,\quad 1< j\le n \\ P_{1}\cap \text {LE}({\mathcal {S}}P_{n})=\emptyset \end{array}}\omega (\mathbf{P})=\sum _{\sigma \in S_{n}}\sum _{\begin{array}{c} k_{1}+k_{2}+\cdots +k_{n}=0\\ \mathrm{mod}\,\, n \end{array}}\mathrm{sgn}(\sigma )\sum _{\mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},{\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma })}\omega (\mathbf{P}), \end{aligned}$$
(3.6)

where \({\mathcal {S}}^{\mathbf{m}}{} \mathbf{b}=({\mathcal {S}}^{m}b_{1},\ldots ,{\mathcal {S}}^{m}b_{n})\). Moreover, if \(\eta =e^{i\frac{2\pi }{n}}\) is a complex root of unity, then the right hand side above can be expressed as the sum of n determinants:

$$\begin{aligned} \frac{1}{n}\sum _{u=0}^{n-1}\det \left( \sum _{k\in {\mathbb {Z}}}\eta ^{uk}h\big (a_{i},{\mathcal {S}}^{k}b_j\big )\right) _{i,j=1}^{n}, \end{aligned}$$
(3.7)

where \(h(a,b)=\sum _{P\in {\mathcal {H}}^{+}(a,b)}\omega (P)\), \(a\in V, b\in \partial \Gamma \).

Proof

Using the identity (3.3) and summing up over all the weights as indicated in the statement of the corollary, the left hand side of (3.6) takes the form

$$\begin{aligned} \sum _{m\in {\mathbb {Z}}}\sum _{\sigma \in S_{n}}\sum _{\begin{array}{c} k_{i}\in {\mathbb {Z}}\\ k_{1}+k_{2}+\cdots +k_{n}=0 \end{array}}\mathrm{sgn}(\sigma )\sum _{\mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a},{\mathcal {S}}^{\mathbf{m+k}}{} \mathbf{b}_{\sigma })}\omega (\mathbf{P}), \end{aligned}$$

which, in turn, can be easily simplified to the desired expression on the right-hand side. For the second part, note that, if \(\eta =e^{i\frac{2\pi }{n}}\) is a complex root of unity, then we can eliminate the condition \(\sum _{i=1}^{n} k_{i}=0\) mod n by using the identity

$$\begin{aligned} \frac{1}{n}\sum _{u=0}^{n-1}\eta ^{u\sum _{i=1}^{n} k_{i}}=\left\{ \begin{array}{ll} 1 &{}\quad {\text {if}}\, \sum _{i=1}^{n}k_{i}=0,\,\,\text {mod}\,n \\ 0 &{}\quad {\text {otherwise}}. \end{array} \right. \end{aligned}$$

Then, the right-hand side of (3.6) can be written as

$$\begin{aligned} \frac{1}{n}\sum _{u=0}^{n-1}\sum _{\sigma \in S_{n}}\mathrm{sgn}(\sigma )\sum _{k_{1},\ldots ,k_{n}\in {\mathbb {Z}}}\eta ^{u\sum _{i=1}^{n} k_{i}}\sum _{\mathbf{P}\in {\mathcal {H}}^{+}(\mathbf{a}, {\mathcal {S}}^{\mathbf{k}}{} \mathbf{b}_{\sigma })}\omega (\mathbf{P}), \end{aligned}$$

and the latter as

$$\begin{aligned} \frac{1}{n}\sum _{u=0}^{n-1}\sum _{\sigma \in S_{n}}\mathrm{sgn}(\sigma )\prod _{i=1}^{n}\sum _{k\in {\mathbb {Z}}}\eta ^{uk}\sum _{P\in {\mathcal {H}}^{+}(a_{i},{\mathcal {S}}^{k}b_{\sigma (i)})}\omega (P). \end{aligned}$$

The above expression is (3.7). \(\square \)

Proposition 3.3

Denote by \([\ell ]\in S_n\) the cyclic permutation shifted by \(\ell =0,1,\ldots ,n-1\):

$$\begin{aligned}{}[\ell ](k)=k-\ell ,\,\,\mod n,\,\,\,\mathrm{in}\,\,\,\{1,\ldots ,n\}. \end{aligned}$$

Let \(\mathbf{a}=(a_{1},\ldots ,a_{n})\) and \(\mathbf{b}=(b_1,b_2,\ldots ,b_n)\) be the vectors of vertices of Theorem 3.1. For each \([\ell ]\in S_{n}\), \(\ell =0,\ldots ,n-1\), define the n-tuple:

$$\begin{aligned} {\mathbf k_{\ell }}=\left( \underbrace{1,\ldots ,1,}_{\ell \,\,\mathrm{times}}\underbrace{0,\ldots ,0}_{n-\ell \,\,\mathrm{times}}\right) . \end{aligned}$$
(3.8)

We have the following

$$\begin{aligned} \sum _{\begin{array}{c} [\ell ]\in S_{n}\\ \ell =0,\ldots ,n-1 \end{array}}\sum _{\begin{array}{c} \mathbf{P}\in \bigcup _{m\in {\mathbb {Z}}}{\mathcal {H}}^{+}(\mathbf{a}, {\mathcal {S}}^{\mathbf{m+k}_{\ell }}{} \mathbf{b}_{[\ell ]}) \\ P_{j}\cap \text {LE}(P_{j-1})=\emptyset ,\quad 1< j\le n \\ P_{1}\cap \text {LE}({\mathcal {S}}P_{n})=\emptyset \end{array}}\omega (\mathbf{P})=\det \left( \sum _{k\in {\mathbb {Z}}}\zeta ^{k}h(a_{i},{\mathcal {S}}^{k}b_{j})\right) _{i,j=1}^{n}, \end{aligned}$$
(3.9)

where

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll} 1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}} \\ -1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

and \(h(a,b)=\sum _{P\in {\mathcal {H}}^{+}(a,b)}\omega (P)\). In particular, if the weight function \(\omega \) is non-negative, then the above determinant is non-negative.

Proof

Let \({\mathcal {G}}(\mathbf{a},\mathbf{b})\) denote the left-hand side of (3.6). Using Corollary 3.2, a simple calculation shows that for each \(\ell =0,\ldots ,n-1\):

$$\begin{aligned} {\mathcal {G}}(\mathbf{a},{\mathcal {S}}^{\mathbf{k}_{\ell }}{} \mathbf{b}_{[\ell ]})=\frac{1}{n}\sum _{u=0}^{n-1}\eta ^{-\ell u}\mathrm{sgn}([\ell ])\det \left( \sum _{k\in {\mathbb {Z}}}\eta ^{uk}h(a_{i},{\mathcal {S}}^{k}b_j)\right) . \end{aligned}$$

Therefore, the left hand side of (3.9) can be expressed as

$$\begin{aligned} \sum _{\ell =0}^{n-1}{\mathcal {G}}(\mathbf{a},{\mathcal {S}}^{\mathbf{k}_{\ell }}{} \mathbf{b}_{[\ell ]})=\sum _{u=0}^{n-1}\left( \frac{1}{n}\sum _{\ell =0}^{n-1}\eta ^{-\ell u}\mathrm{sgn}([\ell ])\right) \det \left( \sum _{k\in {\mathbb {Z}}}\eta ^{uk}h(a_{i},{\mathcal {S}}^{k}b_j)\right) , \end{aligned}$$

which is a sum of n determinants.

Case 1. If n is odd, sgn\(([\ell ])=1\) for all \(\ell =0,\ldots ,n-1\) and then

$$\begin{aligned} \frac{1}{n}\sum _{\ell =0}^{n-1}\eta ^{-\ell u}=\left\{ \begin{array}{ll} 1 &{}\quad {\text {if }} -u=0,\,\,\text {mod}\,n \\ 0 &{}\quad {\text {otherwise}}, \end{array} \right. \end{aligned}$$

therefore, the only remaining determinant is the one corresponding to \(u=0\), and so \(\zeta =\eta ^{u}=1\).

Case 2. If n is even, sgn\(([\ell ])=(-1)^{\ell }\) for all \(\ell =0,\ldots ,n-1\) and

$$\begin{aligned} \frac{1}{n}\sum _{\ell =0}^{n-1}(-1)^{\ell }\eta ^{-\ell u}=\frac{1}{n}\sum _{\ell =0}^{n-1}\eta ^{\ell (\frac{n}{2}-u)}. \end{aligned}$$

The above sum is 1 if and only if \(\frac{n}{2}-u=0\) mod n, and zero otherwise. The only remaining determinant is then \(u=\frac{n}{2}\), and therefore \(\zeta =\eta ^{u}=-1\), which concludes the proof. \(\square \)

Remark

Assume that the vertex set \(V={\mathbb {Z}}\times \{0,1,\ldots ,N\}\) is the state space of a time-homogeneous Markov chain X and that the possible transitions between states are determined by the lattice strip G introduced at the beginning of the section. In other words, the transition probabilities p(u, v) are positive if and only if there is an edge \(u{\mathop {\rightarrow }\limits ^{e}}v\), in which case \(\omega (e)=p(u,v)\). Assume that the transition probabilities are space-invariant with respect to a fixed horizontal translation \(\mathcal {S}:G\rightarrow G\). Then, the assertion of Proposition 3.3 has the following probabilistic interpretation: if the vertices \(a_{n},\ldots ,a_{1},b_{1},\ldots ,b_{n}\) are ordered counterclockwise along the boundary (as in Fig. 6), then the \(n\times n\) determinant

$$\begin{aligned} \det \left( \sum _{k\in {\mathbb {Z}}}\zeta ^{k}h(a_{i},{\mathcal {S}}^{k}b_{j})\right) _{i,j=1}^{n}, \end{aligned}$$

of hitting probabilities \(h(a_{i},{\mathcal {S}}^{k}b_{j})\), where \({\mathcal {S}}^{k}={\mathcal {S}}\circ {\mathcal {S}}^{k-1}\), \(k\in {\mathbb {Z}}\), and

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll} 1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}} \\ -1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

is equal to the probability that n independent trajectories of the Markov chain \(X_{1},\ldots ,X_{n}\), starting at \(a_{1},\ldots ,a_{n}\), respectively, will first hit the upper boundary \(\partial \Gamma ={\mathbb {Z}}\times \{N\}\) at any of the n cyclic permutations of the vertices \(b_{1},\ldots ,b_{n}\), shifted also by all possible horizontal translations by \({\mathcal {S}}^{k}\), \(k\in {\mathbb {Z}}\), and furthermore the trajectories are constrained to satisfy

$$\begin{aligned} X_{j}\cap LE(X_{j-1})=\emptyset ,\quad 1< j\le n,\quad \text {and}\quad X_{1}\cap LE({\mathcal {S}}X_{n})=\emptyset . \end{aligned}$$
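
As a numerical illustration (our own, with arbitrarily chosen sizes), the determinant above can be evaluated for a simple random walk by truncating the infinite strip to a long finite piece, solving the discrete Dirichlet problem for each target \({\mathcal {S}}^{k}b_{j}\), and assembling the \(\zeta \)-twisted kernel; since \(h(a,{\mathcal {S}}^{k}b)\) decays rapidly in |k|, truncating the strip and the k-sum introduces only a small error.

import numpy as np

M, N, n = 5, 4, 2                        # period M, strip height N (row y = N is absorbing), n walkers
zeta = 1 if n % 2 == 1 else -1
copies = 7                               # truncate the strip to this many periods, keeping everything near the centre
W = copies * M
interior = [(x, y) for x in range(W) for y in range(N)]
index = {v: i for i, v in enumerate(interior)}

def neighbours(v):
    x, y = v
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < W and 0 <= y + dy <= N]

def hitting_probability(target):
    A, rhs = np.eye(len(interior)), np.zeros(len(interior))
    for v in interior:
        nbrs = neighbours(v)
        for u in nbrs:
            if u[1] == N:
                rhs[index[v]] += (1.0 if u == target else 0.0) / len(nbrs)
            else:
                A[index[v], index[u]] -= 1.0 / len(nbrs)
    return np.linalg.solve(A, rhs)

centre = (copies // 2) * M               # x-offset of the central copy of the fundamental domain
a = [(centre + 3, 0), (centre + 1, 0)]   # a_1, a_2 with i_2 < i_1 < i_2 + M
b_x = [3, 1]                             # x-offsets of b_1, b_2 within one period, with j_2 < j_1 < j_2 + M
K = np.zeros((n, n))
for j in range(n):
    for k in range(-(copies // 2), copies // 2 + 1):      # truncated sum over the translations S^k
        f = hitting_probability((centre + b_x[j] + k * M, N))
        for i in range(n):
            K[i, j] += zeta ** k * f[index[a[i]]]
print(np.linalg.det(K))                  # non-negative (up to truncation error), in line with Proposition 3.3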

3.3 A Remark on Loop-Erased Paths in a Cylinder

In this section, we consider families of paths defined in the directed cylindrical lattice \({{\tilde{G}}}\) of Fig. 7 and review some properties regarding their loop-erasures. As in the previous sections, the cylindrical lattice need not be acyclic (loops are allowed) and, if the number of paths is odd, we can obtain a variant of Proposition 3.3 by applying Fomin’s identity directly (see Proposition 3.4 below). However, there is a slight difference between these two approaches, since the loop-erasure of a path in \({{\tilde{G}}}\) may differ from the projection of the loop-erasure of the corresponding path in the lattice strip G (see the remark just after Proposition 3.4).

Define the (directed) cylindrical lattice (or, simply, cylinder) \({\tilde{G}}=({\tilde{V}},{\tilde{E}})\) as the directed graph with vertex set \({\tilde{V}}={\mathbb {Z}}_{M}\times \{0,1,\ldots ,N\}\), connected by edges in both positive and negative directions. Here, we consider the canonical representation of \({\mathbb {Z}}_{M}\) as \({\mathbb {Z}}/M{\mathbb {Z}}=\{[0],\ldots ,[M-1]\}\). Distinguish the set of (boundary) vertices \(\partial {\tilde{G}}={\mathbb {Z}}_{M}\times \{N\}\). If we consider the lattice strip \(G=(V, E, \omega )\) of Theorem 3.1 and the notation thereof, there is a natural correspondence between paths in the cylinder \({\tilde{G}}\) and paths in the strip G. In particular, every path \({\tilde{P}}\) in \({\tilde{G}}\) starting at \({\tilde{a}}=([i],0)\) and ending at \({\tilde{b}}=([j],N)\), for \(i,j\in \{0,\ldots ,M-1\}\), and with all internal vertices lying in \({\tilde{G}}\setminus \partial {\tilde{G}}\), can be seen as the image of a path \(P^{\ell }\) in G of the form

$$\begin{aligned} P^{\ell }\in {\mathcal {H}}^{+}({\mathcal {S}}^{\ell }a,{\mathcal {S}}^{\ell +k}b),\quad \ell \in {\mathbb {Z}}, \end{aligned}$$

with \(a=(i,0)\in V\), \(b=(j,N)\in V\), and a unique \(k\in {\mathbb {Z}}\). The integer k is usually called the winding number of the path \({\tilde{P}}\) (see Fig. 7). Since the weight function \(\omega \) defined on the strip G is invariant under the translation \({\mathcal {S}}\) by (M, 0), the graph \({\tilde{G}}\) inherits canonically a weight function \(\tilde{\omega }\) on \({\tilde{E}}\) and the path \({\tilde{P}}\) inherits the weight \(\tilde{\omega }({\tilde{P}}):=\omega (P)\), whenever \(P\in {\mathcal {H}}^{+}(a,{\mathcal {S}}^{k}b)\) is the projection of \({\tilde{P}}\).
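
To make the difference between loop-erasure on the cylinder and loop-erasure on the strip concrete, the following minimal Python sketch (our own toy illustration, not taken from [8] or from Fig. 8; the period M is an arbitrary choice) implements chronological loop-erasure and applies it to a path which winds once around the cylinder: erasing loops after projecting removes the winding, whereas projecting the loop-erasure of the strip path does not.

```python
# Chronological loop-erasure: each time a vertex is revisited, the loop created
# since its first visit is erased.  A path is a list of vertices (x, y).
def loop_erase(path):
    out = []
    for v in path:
        if v in out:
            out = out[: out.index(v) + 1]   # erase the loop back to the first visit of v
        else:
            out.append(v)
    return out

M = 4                                       # period of the horizontal translation S (illustrative)

def project(path):                          # projection of a strip path onto the cylinder Z_M x {0,...,N}
    return [(x % M, y) for (x, y) in path]

# A strip path that winds once around the cylinder before moving up.
P = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (4, 1)]

print(project(loop_erase(P)))  # [(0,0), (1,0), (2,0), (3,0), (0,0), (0,1)]: the projected loop survives
print(loop_erase(project(P)))  # [(0,0), (0,1)]: the loop is erased on the cylinder
```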

Let \({\mathcal {C}}^{+}({\tilde{a}},{\tilde{b}})\) be the set of all paths in the cylinder \({\tilde{G}}\) of positive length, starting at \({\tilde{a}}\in {\tilde{G}}\) and ending at \({\tilde{b}}\in \partial {\tilde{G}}\), with all internal vertices in \({\tilde{G}}\setminus \partial {\tilde{G}}\). Similarly, define \({\mathcal {C}}^{+}(\tilde{\mathbf{a}},\tilde{\mathbf{b}})\) for families of paths \({\tilde{P}}_{1},\ldots ,{\tilde{P}}_{n}\), starting at \(\tilde{\mathbf{a}}=({\tilde{a}}_1,\ldots ,{\tilde{a}}_n)\) and ending at \(\tilde{\mathbf{b}}=({\tilde{b}}_1,\ldots ,{\tilde{b}}_n)\). We have the following:

Proposition 3.4

Consider integers \(0\le i_{n}<i_{n-1}<\cdots<i_{1}<M\) and \(0\le j_{n}<j_{n-1}<\cdots<j_{1}<M\). Define two n-tuples of vertices in the cylinder \({\tilde{G}}\) as

$$\begin{aligned} {\tilde{a}}_{k}:=([i_{k}],0),\quad \quad {\tilde{b}}_{k}:=([j_{k}],N),\quad 1\le k\le n. \end{aligned}$$

If n is odd and we consider the n cyclic permutations defined in Proposition 3.3, we obtain

$$\begin{aligned} \sum _{\sigma \,\,\mathrm{cyclic}}\sum _{\begin{array}{c} \tilde{\mathbf{P}}\in {\mathcal {C}}^{+}(\tilde{\mathbf{a}},\tilde{\mathbf{b}}_{\sigma })\\ {\tilde{P}}_{j}\cap LE({\tilde{P}}_{i})=\emptyset ,\,\,i<j \end{array}}\tilde{\omega }(\tilde{\mathbf{P}})=\det \left( \sum _{k\in {\mathbb {Z}}}h(a_{i},{\mathcal {S}}^{k}b_{j})\right) _{i,j=1}^{n}, \end{aligned}$$
(3.10)

where \(h(a,b)=\sum _{P\in {\mathcal {H}}^{+}(a,b)}\omega (P)\) and \(\tilde{\mathbf{b}}_{\sigma }=({\tilde{b}}_{\sigma (1)},\ldots ,{\tilde{b}}_{\sigma (n)})\).

Proof

Note that the right hand side of (3.10) can be written as the determinant

$$\begin{aligned} \det \left( \sum _{{\tilde{P}}\in {\mathcal {C}}^{+}({\tilde{a}}_{i},{\tilde{b}}_{j})}\tilde{\omega }({\tilde{P}})\right) _{i,j=1}^{n}, \end{aligned}$$

and, since \(\mathrm{sgn}(\sigma )=1\) for every cyclic \(\sigma \) when n is odd, the equality (3.10) is a direct application of Fomin’s identity (Theorem 2.2), applied with the weight function \(\tilde{\omega }\) on \({\tilde{G}}\). \(\square \)
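
For a concrete chain the right-hand side of (3.10) is a finite linear-algebra computation. The following Python sketch (our own illustration; the symmetric nearest-neighbour walk and the sizes M, N, n are assumptions, not part of the statement) computes the cylinder hitting probabilities by solving the harmonic-measure linear system and then evaluates the determinant for n = 3.

```python
import numpy as np

# Illustrative chain: symmetric nearest-neighbour walk on the cylinder Z_M x {0,...,N},
# absorbed on the top row {y = N}.
M, N = 6, 4

def idx(x, y):                       # enumerate the non-absorbed states (y < N)
    return y * M + x

n_states = M * N
P = np.zeros((n_states, n_states))   # transitions among non-absorbed states
R = np.zeros((n_states, M))          # transitions into the absorbing row y = N

for x in range(M):
    for y in range(N):
        nbrs = [((x - 1) % M, y), ((x + 1) % M, y), (x, y + 1)]
        if y > 0:
            nbrs.append((x, y - 1))
        for (xx, yy) in nbrs:
            if yy == N:
                R[idx(x, y), xx] += 1.0 / len(nbrs)
            else:
                P[idx(x, y), idx(xx, yy)] += 1.0 / len(nbrs)

# Hitting probabilities: H[a, b] = P^a(first visit to the top row occurs at (b, N)),
# obtained by solving (I - P) H = R.
H = np.linalg.solve(np.eye(n_states) - P, R)

a = [idx(4, 0), idx(2, 0), idx(0, 0)]    # starting vertices \tilde a_1, \tilde a_2, \tilde a_3
b = [5, 3, 1]                            # ending columns  \tilde b_1, \tilde b_2, \tilde b_3
print(np.linalg.det(H[np.ix_(a, b)]))    # right-hand side of (3.10) for this chain
```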

Remark

The right hand side of (3.10) agrees with the right hand side of identity (3.9), for n odd. This implies that the left hand sides of (3.10) and (3.9) are equal, which is not immediately obvious from the definitions. For example, we can consider the paths \(P_{1}\), \(P_{2}\) and \(P_{3}\) in the lattice strip of Fig. 8. There, we have that the corresponding projections onto the cylinder satisfy \({\tilde{P}}_{j}\cap LE({\tilde{P}}_{i})=\emptyset \), for \(i<j\), and, in particular \({\tilde{P}}_{3}\cap LE({\tilde{P}}_{1})=\emptyset \). However, \(P_{1}\cap LE({\mathcal {S}}P_{3})\not =\emptyset \), and then \((P_{1},P_{2},P_{3})\) is not considered in the left hand side of (3.9). It would be interesting to have a direct combinatorial proof of this identity.

Remark

In the acyclic case, one can obtain determinant formulas for an even number of non-intersecting walks on a cylindrical lattice by introducing modified weights which keep track of windings [10, 30]. However, in the general case, we do not see how to adopt this approach and the only way we know how to study the case of an even number of particles is via the affine version of Fomin’s identity introduced in Theorem 3.1.

3.4 More General Lattices, Gessel–Zeilberger Formula

The results of this section were formulated for the square lattice but are equally valid for more general periodic planar graphs, for example, the hexagonal lattice shown in Fig. 9.

In the acyclic case, Theorem 3.1 agrees with the Gessel–Zeilberger formula [12] (see also [11]). We note that in this context, the identity (3.9) gives a direct connection between the Gessel–Zeilberger formula, for counting paths in alcoves, and the Karlin–McGregor formula [19] (Lindström–Gessel–Viennot lemma [13]) for counting non-intersecting paths on a cylinder; this answers positively a question of Fulmek [10], where the problem of finding such a direct connection was posed as an open question. Moreover, in the continuous case, it also shows that the Karlin–McGregor (for n odd) and Liechty–Wang (for n even) formulas [19, 30] for the transition probability density of n (indistinguishable) non-intersecting Brownian motions on the circle can be obtained directly from the (labelled) model of Hobson–Werner [16], which is a continuous version of the Gessel–Zeilberger formula in the case of the affine symmetric group \({\tilde{A}}_{n}\) (we review this in Sect. 5.1 below).

Fig. 8 Three paths, and their projections onto the cylinder

Fig. 9 Hexagonal lattice

4 Connections to Random Matrix Theory

As explained in Sect. 1.2 of the introduction, there is a natural way to consider diffusion scaling limits of both Fomin’s identity (Corollary 2.3) and its affine version (Proposition 3.3). Regarding Fomin’s identity, this idea is originally discussed in [8], where some examples for two-dimensional Brownian motion are described in detail. For our purposes, the connection with random matrix theory emerges from the following considerations: assume that \(\Omega \) is a suitable complex (connected) domain with smooth boundary and \(h(z_{0},y)\) is the (hitting) density of the harmonic measure

$$\begin{aligned} \mu _{z_{0},\Omega }(A)={\mathbb {P}}^{z_{0}}(B_{T}\in A),\quad A\subset \partial \Omega , \end{aligned}$$

with respect to one-dimensional Lebesgue measure (length), where B under \({\mathbb {P}}^{z_{0}}\) denotes a two-dimensional Brownian motion starting at \(z_{0}\in \Omega \), and \(T=\inf \{t>0:B_{t}\notin \Omega \}\) is the first exit time of \(\Omega \) (see Sect. 4.1). Then, for \(m\in {\mathbb {R}}\) and appropriately chosen (parametrized) positions \(x_{1},\ldots ,x_{n}\) and \(y_{1},\ldots ,y_{n}\) along the boundary \(\partial \Omega \), the determinants of hitting densities

$$\begin{aligned} H(x,y)=\det \left( h(x_{i},y_{j})\right) _{i,j=1}^{n}dy_{1}\cdots dy_{n}, \end{aligned}$$
(4.1)

and

$$\begin{aligned} H(x,y)=\det \left( \sum _{k\in {\mathbb {Z}}}\zeta ^{k}h(x_{i},y_{j}+mk)\right) _{i,j=1}^{n}dy_{1}\cdots dy_{n}, \end{aligned}$$
(4.2)

where

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll} 1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}} \\ -1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

can be interpreted, informally, as the probability that n independent ‘Brownian motions’ \(B_{i}\), \(i=1,\ldots ,n\), starting at positions \(x_{1},x_{2},\ldots ,x_{n}\), respectively, will first hit an absorbing boundary \(\partial \Gamma \subset \partial \Omega \) at (parametrized) positions in the intervals \((y_{i},y_{i}+dy_{i})\), \(i=1,\ldots ,n\), and whose trajectories are constrained to satisfy the condition

$$\begin{aligned} B_{j}\cap LE(B_{i})=\emptyset ,\quad \text {for all}\,\,1\le i<j\le n, \end{aligned}$$

in (4.1), or

$$\begin{aligned} B_{j}\cap LE(B_{j-1})=\emptyset ,\quad 1< j\le n,\quad \text {and}\quad B_{1}\cap LE(m+B_{n})=\emptyset , \end{aligned}$$

in the affine case (4.2). We remark again that, in the affine case, we assume \(\Omega \) to be invariant under a fixed (horizontal) translation by \(m\in {\mathbb {R}}\), and therefore \(m+B_{n}\) is the horizontal translation by m of the Brownian path \(B_{n}\).

Our main interest is the determination of the behaviour of the n hitting points \(y_{1},\ldots ,y_{n}\) along the boundary, when the starting points \(x_{1},\ldots ,x_{n}\) merge into a single common point in \(\partial \Omega \). In other words, for determinants of the form (4.1), this section considers certain limits

$$\begin{aligned} \lim _{\begin{array}{c} (x_{1},\ldots ,\,x_{n})\in C\\ x_{i}\rightarrow x\in \partial \Omega \end{array}}{\tilde{H}}(x,y),\quad (y_{1},\ldots ,y_{n})\in C, \end{aligned}$$

where \({\tilde{H}}(x,y)\) is an appropriate normalisation of \(H(x,y)\) and the positions \(x_{1},\ldots ,x_{n}\), \(y_{1},\ldots ,y_{n}\) are determined by chambers C of \({\mathbb {R}}^{n}\). Determinants of the affine form (4.2) are considered in Sect. 5.

In Sects. 4.3, 4.4 and 4.5, we revisit the examples considered in [8] (see Fig. 10). We will see that the consideration of the above limits reveals some natural connections to random matrices, particularly Cauchy-type ensembles [38]. An example of this connection was first observed by Sato and Katori [34], in the context of excursion Poisson kernel determinants, and we discuss this in Sect. 4.6. Section 5 considers the affine (circular) case and shows that it is also related in a natural way to circular ensembles of random matrix theory.

As a warm-up, in Sect. 4.2 we recall a well-known connection between non-intersecting one-dimensional Brownian motions and the Gaussian Orthogonal Ensemble (GOE) of random matrix theory.

4.1 A Brief Review on Conformal Invariance of Brownian Motion

The Riemann mapping theorem asserts that any two proper simply connected domains of \({\mathbb {C}}\) can be conformally mapped onto each other. More precisely, if \(\Omega \subset {\mathbb {C}}\) and \(\Omega '\subset {\mathbb {C}}\) are two proper simply connected domains with \(z_{0}\in \Omega \) and \(z_{0}'\in \Omega '\), then there exists a unique conformal (analytic with non-vanishing derivative) map \(f:\Omega \rightarrow \Omega '\) such that \(f(z_{0})=z_{0}'\) and \(f'(z_{0})>0\). In addition, it is well known that the two-dimensional Brownian motion is invariant under conformal transformations [5]:

Proposition 4.1

If B is a two-dimensional Brownian motion starting at \(z_{0}\in \Omega \) and \(T=\inf \{t>0: B_{t}\notin \Omega \}\) is the exit time of the domain \(\Omega \), then there exists a (random) time change \(\sigma :[0,T']\rightarrow [0,T]\) such that the process

$$\begin{aligned} (f(B_{\sigma (t)}),0\le t<T') \end{aligned}$$

is again a two-dimensional Brownian motion, starting at \(f(z_{0})\in \Omega '\) and stopped at its first exit time \(T'\) from \(\Omega '\).

Fig. 10 Simply connected domains in the complex plane \({\mathbb {C}}\)

These properties ensure that, under mild conditions on \(\partial \Omega \) (for example, if \(\partial \Omega \) is determined by a Jordan curve; see also Sect. 2.3 of [26] for more general conditions), we have that for all \(A\subset \partial \Omega \)

$$\begin{aligned} {\mathbb {P}}^{z_{0}}(B_{T}\in A)={\mathbb {P}}^{f(z_{0})}(f(B_{T})\in f(A))={\mathbb {P}}^{z_{0}'}(B'_{T'}\in f(A)), \end{aligned}$$
(4.3)

where \(B'\) is another two-dimensional Brownian motion. If we set \(\mu _{z_{0}, \Omega }(A)={\mathbb {P}}^{z_{0}}(B_{T}\in A)\), \(A\subset \partial \Omega ,\) then \(\mu _{z_{0}, \Omega }\) defines a measure on \(\partial \Omega \), which is called the harmonic measure or hitting measure on \(\partial \Omega \). Therefore, identity (4.3) becomes

$$\begin{aligned} \mu _{z_{0}, \Omega }(A)=\mu _{z_{0}', \Omega '}(f(A)),\quad \text {for all}\,\, A\subset \partial \Omega . \end{aligned}$$
(4.4)

If both measures are absolutely continuous with respect to one-dimensional Lebesgue measure, or length (which is the case in all the examples considered in this paper), then, from (4.4) we obtain (see also [25, 26]):

Proposition 4.2

Let \(\Omega \) and \(\Omega '\) be two simply connected domains with \(z_{0}\in \Omega \). Let \(f:\Omega \rightarrow \Omega '\) be a conformal map and set \(z'_{0}=f(z_{0})\). Assume that we can define harmonic measures \(\mu _{z_{0}, \Omega }\) and \(\mu _{z_{0}',\Omega '}\) and that both are absolutely continuous with respect to one-dimensional Lebesgue measure (length). Then

$$\begin{aligned} h_{\Omega }(z_{0}, y)=|f'(y)|h_{\Omega '}(f(z_{0}),f(y)), \end{aligned}$$
(4.5)

where \(h_{\Omega }(z_{0},\cdot )\) and \(h_{\Omega '}(z_{0}',\cdot )\) are the corresponding densities of \(\mu _{z_{0}, \Omega }\) and \(\mu _{z_{0}',\Omega '}\), respectively.

Definition 2

When the harmonic measure \(\mu _{z_{0}, \Omega }\) has a density \(h_{\Omega }(z_{0},y)\) with respect to one-dimensional Lebesgue measure (length), we call this density the hitting density or Poisson kernel of \(\Omega \).

We often drop the suffix \(\Omega \) in the definition above and simply write \(h=h_{\Omega }\). In practice, the explicit computation of the harmonic measure (or its density) for an arbitrary simply connected domain \(\Omega \) is not an easy task, but there are some examples where this computation can be easily performed. In Sects. 4.3, 4.4 and 4.5 we consider the positive quadrant \(\Omega _{1}\), the infinite strip \(\Omega _{2}\) and the upper half disk \(\Omega _{3}\), respectively (see Fig. 10):

$$\begin{aligned} \Omega _{1}&=\{z\in {\mathbb {C}}: \mathfrak {Re}(z)>0, \mathfrak {Im}(z)>0\},\\ \Omega _{2}&=\{z\in {\mathbb {C}}: 0<\mathfrak {Im}(z)< t\},\quad t>0,\\ \Omega _{3}&=\{z\in {\mathbb {C}}: |z|<1, \mathfrak {Im}(z)>0\}. \end{aligned}$$

4.2 Non-intersecting Brownian Motions and the GOE

Consider a system of n independent one-dimensional Brownian motions conditioned not to intersect up to a fixed time \(t>0\), starting at positions \(x_{n}<x_{n-1}<\cdots <x_{1}\), respectively. This is the n-dimensional Brownian motion starting at \(x=(x_{1},\ldots ,x_{n})\in {\mathbb {R}}^{n}\) and conditioned to stay in the chamber \(C=\{y\in {\mathbb {R}}^{n}: y_{n}<y_{n-1}<\cdots <y_{1}\}\) up to time \(t>0\). Since the n-dimensional Brownian motion is a strong Markov process with continuous paths, the Karlin-McGregor formula [19] gives the (unnormalised) density of the positions of the process at time t:

$$\begin{aligned} {\hat{p}}_{t}(x,y)=\det \left[ p_{t}(x_{i},y_{j})\right] _{i,j=1}^{n},\quad x,y\in C, \end{aligned}$$
(4.6)

where

$$\begin{aligned}p_{t}(x,y)=\frac{1}{\sqrt{2\pi t}}e^{-\frac{(x-y)^{2}}{2t}}.\end{aligned}$$

Let \(M_{t,x}\) be the normalisation constant for (4.6), that is,

$$\begin{aligned}M_{t,x}=\int _{C}{\hat{p}}_{t}(x,y)dy.\end{aligned}$$

We have the following:

Proposition 4.3

The joint density of the positions at time \(t>0\) of n independent one-dimensional Brownian motions, started at the origin and conditioned not to intersect up to time t, is given by

$$\begin{aligned} \lim _{\begin{array}{c} x\in C\\ x\rightarrow 0 \end{array}}\frac{1}{M_{t,x}}{\hat{p}}_{t}(x,y)=\frac{1}{M'_{t}}e^{-\frac{1}{2t}\sum _{i=1}^{n}y_{i}^{2}}\prod _{1\le i<j\le n}(y_{j}-y_{i}), \end{aligned}$$

where \(M_{t}'\) is the corresponding normalisation constant.

Remark

The above expression agrees with the joint density of the eigenvalues of an \(n\times n\) GOE random matrix with variance parameter t [9, 32].

Proof

A simple calculation shows that

$$\begin{aligned} \frac{1}{M_{t,x}}{\hat{p}}_{t}(x,y)&=\frac{1}{M_{t,x}}\frac{1}{(2\pi t)^{n/2}}e^{-\frac{1}{2t}\sum _{i=1}^{n}(x_{i}^{2}+y_{i}^{2})}\det \left( e^{\frac{1}{t}x_{i}y_{j}}\right) _{i,j=1}^{n}, \end{aligned}$$

and, dividing both numerator and denominator by the Vandermonde determinant

$$\begin{aligned}\Delta (x)=\prod _{1\le i<j\le n}(x_{j}-x_{i}),\end{aligned}$$

we can use Lemma A.1 to compute the limit in the proposition as follows:

$$\begin{aligned} \lim _{\begin{array}{c} x\in C\\ x\rightarrow 0 \end{array}}\frac{1}{M_{t,x}}{\hat{p}}_{t}(x,y)&=\frac{1}{M'_{t}}t^{\frac{(n-1)n}{2}}e^{-\frac{1}{2t}\sum _{i=1}^{n}y_{i}^{2}}\det \left( \left( \frac{y_{j}}{t}\right) ^{i-1}\right) _{i,j=1}^{n}\\&=\frac{1}{M'_{t}}e^{-\frac{1}{2t}\sum _{i=1}^{n}y_{i}^{2}}\det \left( y_{j}^{i-1}\right) _{i,j=1}^{n}\\&=\frac{1}{M'_{t}}e^{-\frac{1}{2t}\sum _{i=1}^{n}y_{i}^{2}}\prod _{1\le i<j\le n}(y_{j}-y_{i}), \end{aligned}$$

where

$$\begin{aligned} M'_{t}=\int _{C}e^{-\frac{1}{2t}\sum _{i=1}^{n}y_{i}^{2}}\prod _{1\le i<j\le n}(y_{j}-y_{i})dy. \end{aligned}$$

\(\square \)
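
The limit in Proposition 4.3 is easy to test numerically: all normalisation constants cancel from ratios, so the Karlin–McGregor determinant (4.6) evaluated at nearly coincident starting points should reproduce ratios of the GOE-type density. A minimal Python sketch (the values of t, the merging parameter and the sample configurations are arbitrary assumptions made for illustration):

```python
import numpy as np
from itertools import combinations

t, eps = 1.0, 1e-3

def p(x, y):                              # heat kernel p_t(x, y)
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def km_det(x, y):                         # Karlin-McGregor determinant (4.6)
    return np.linalg.det(np.array([[p(xi, yj) for yj in y] for xi in x]))

def goe_shape(y):                         # e^{-sum y^2/(2t)} * prod_{i<j} (y_j - y_i)
    vdm = np.prod([yj - yi for yi, yj in combinations(y, 2)])
    return np.exp(-sum(yi ** 2 for yi in y) / (2 * t)) * vdm

x = [eps, 0.0, -eps]                      # starting points merging at the origin, ordered as in C
y1, y2 = [2.0, 0.5, -1.0], [1.5, 0.3, -0.7]

# Normalisation constants cancel from ratios, so the two numbers should nearly agree.
print(km_det(x, y1) / km_det(x, y2))
print(goe_shape(y1) / goe_shape(y2))
```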

4.3 Brownian Motion in the Positive Quadrant

Let us identify the two-dimensional Euclidean space \({\mathbb {R}}^{2}\) with the complex plane \({\mathbb {C}}\). The positive quadrant is the simply connected domain given by

$$\begin{aligned} \Omega _{1}&=\{z\in {\mathbb {C}}: \mathfrak {Re}(z)>0, \mathfrak {Im}(z)>0\}. \end{aligned}$$

For the two-dimensional Brownian motion B, starting at a point x on the positive x-axis, the density of the first hitting point \(iy\), \(y\in {\mathbb {R}}\), on the y-axis is given by the Cauchy density (see [5], Sect. 1.9):

$$\begin{aligned} h'(x,y)=\frac{1}{\pi }\frac{x}{x^{2}+y^{2}},\quad y\in {\mathbb {R}}. \end{aligned}$$

Therefore, we can consider the two-dimensional ‘Brownian motion’ \(B'\) in the positive quadrant \({\overline{\Omega }}_{1}\) starting at the point x, with normal reflection on the positive x-axis, and with the positive y-axis acting as absorbing boundary. The process \(B'\) first hits the positive y-axis at a point \(iy\), \(y>0\), with density

$$\begin{aligned} h(x,y)=h'(x,y)+h'(x,-y)=\frac{2}{\pi }\frac{x}{x^{2}+y^{2}},\quad y>0. \end{aligned}$$
(4.7)
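
Formula (4.7) can be cross-checked against the half-disk hitting density of Sect. 4.5 below, using the conformal covariance of Proposition 4.2 and the map \(f(z)=\frac{z-1}{z+1}\). The following Python sketch (a pointwise numerical check at a few assumed values of x and y) verifies \(h(x,y)=|f'(iy)|\,h_{\Omega _{3}}(f(x),\theta )\), where \(h_{\Omega _{3}}\) denotes the half-disk density of Sect. 4.5 and \(e^{i\theta }=f(iy)\):

```python
import numpy as np

def h_quadrant(x, y):          # hitting density (4.7) on the positive y-axis
    return (2.0 / np.pi) * x / (x ** 2 + y ** 2)

def h_half_disk(u, theta):     # hitting density of the upper half unit disk (Sect. 4.5)
    return (1.0 / np.pi) * (1 - u ** 2) / (1 - 2 * u * np.cos(theta) + u ** 2)

f = lambda z: (z - 1) / (z + 1)      # conformal map from the quadrant onto the half disk
df = lambda z: 2 / (z + 1) ** 2

for x in (0.5, 1.0, 2.0):
    for y in (0.3, 1.0, 4.0):
        theta = np.angle(f(1j * y))                      # boundary point e^{i theta} = f(iy)
        lhs = h_quadrant(x, y)
        rhs = abs(df(1j * y)) * h_half_disk(f(x), theta)
        print(x, y, round(lhs, 8), round(rhs, 8))        # the two columns agree
```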

Consider the determinant of hitting densities

$$\begin{aligned} H(x,y)=\det \left( h(x_{i},y_{j})\right) _{i,j=1}^{n},\qquad x,y\in D, \end{aligned}$$

where

$$\begin{aligned}D=\{x\in {\mathbb {R}}^{n}: 0<x_{n}<x_{n-1}<\cdots <x_{1}\}.\end{aligned}$$

Proposition 4.4

For any \(y\in D\), and \(t>0\),

$$\begin{aligned} \lim _{\begin{array}{c} x\in D\\ x\rightarrow t \end{array}}{{\tilde{H}}}(x,y)&=\frac{1}{M_{t}}\prod _{1\le j\le n}(t^{2}+y_{j}^{2})^{-n}\prod _{1\le i<j\le n}(y_{i}^{2}-y_{j}^{2}), \end{aligned}$$

where

$$\begin{aligned}{\tilde{H}}(x,y)=\left( \int _{D}H(x,y)dy\right) ^{-1}H(x,y),\end{aligned}$$

and \(M_t\) is the corresponding normalisation constant.

Remark

In particular, when \(t=1\), the above density takes the form

$$\begin{aligned} \frac{1}{M_1}\prod _{1\le j\le n}(1+y_{j}^{2})^{-n}\prod _{1\le i<j\le n}(y_{i}^{2}-y_{j}^{2}), \end{aligned}$$

which is a Cauchy-type ensemble on the positive half-line [9, 38].

Proof

For all \(x,y\in D\), the function \(H(x,y)\) is positive (see [18]) and is given by a Cauchy-type determinant evaluation. Therefore

$$\begin{aligned} H(x,y)=\left( \frac{2}{\pi }\right) ^{n}\prod _{i=1}^{n}x_{i}\prod _{1\le i,j\le n}(x_{i}^{2}+y_{j}^{2})^{-1}\prod _{1\le i<j\le n}(x_{i}^{2}-x_{j}^{2})(y_{i}^{2}-y_{j}^{2}). \end{aligned}$$

Since the prefactor \((2/\pi )^{n}\prod _{i}x_{i}\prod _{i<j}(x_{i}^{2}-x_{j}^{2})\) depends only on x and cancels upon normalisation, regarding \(H(x,\cdot )\) as an (unnormalised) density on D we can consider the normalised density

$$\begin{aligned} {{\tilde{H}}}(x,y)=\left( \int _{D}F_{x}(y)dy\right) ^{-1}F_{x}(y), \end{aligned}$$
(4.8)

where

$$\begin{aligned}F_{x}(y)=\prod _{1\le i,j\le n}(x_{i}^{2}+y_{j}^{2})^{-1}\prod _{1\le i<j\le n}(y_{i}^{2}-y_{j}^{2}).\end{aligned}$$

Fix a real number \(t>0\) and let \(0<\varepsilon <t\). Then, for all \(x\in D\) such that \(|x_{i}-t|<\varepsilon \), \(1\le i\le n\), and for all \(y\in D\),

$$\begin{aligned} |F_{x}(y)|\le \prod _{1\le j\le n}(T^{2}+y_{j}^{2})^{-n}\prod _{1\le i<j\le n}(y_{i}^{2}-y_{j}^{2}),\quad T=t-\varepsilon >0. \end{aligned}$$

The function on the right hand side is integrable over D, which can be verified by using the relation

$$\begin{aligned} e^{i\theta }=\frac{iy-T}{iy+T},\quad 0<\theta <\pi , \end{aligned}$$
(4.9)

so that

$$\begin{aligned} \prod _{1\le j\le n}(T^{2}+y_{j}^{2})^{-n}\prod _{1\le i<j\le n}(y_{i}^{2}-y_{j}^{2})dy\propto \prod _{1\le i<j\le n}|e^{i\theta _{i}}-e^{i\theta _{j}}||e^{i\theta _{i}}-e^{-i\theta _{j}}|d\theta , \end{aligned}$$

and the latter function is integrable over the bounded domain (see Sect. 4.5):

$$\begin{aligned} \{\theta \in {\mathbb {R}}^{n}:0<\theta _{1}<\theta _{2}<\cdots<\theta _{n}<\pi \}. \end{aligned}$$

Therefore, by the dominated convergence theorem, when the n starting points \(x_{1},x_{2},\ldots ,x_{n}\) approach the common point \(t>0\) along the positive x-axis, the limit in the proposition can be computed from the expression (4.8) as

$$\begin{aligned} \lim _{\begin{array}{c} x\in D\\ x\rightarrow t \end{array}}{{\tilde{H}}}(x,y)&=\frac{1}{M_{t}}\prod _{1\le j\le n}(t^{2}+y_{j}^{2})^{-n}\prod _{1\le i<j\le n}(y_{i}^{2}-y_{j}^{2}), \end{aligned}$$

where \(M_t\) is the corresponding normalisation constant. \(\square \)
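
The Cauchy-type determinant evaluation used at the start of the proof can be confirmed numerically; a short Python sketch (with arbitrarily chosen points of D):

```python
import numpy as np
from itertools import combinations

x = np.array([3.0, 1.7, 0.4])     # points of D: 0 < x_3 < x_2 < x_1
y = np.array([2.5, 1.1, 0.2])

det_H = np.linalg.det((2 / np.pi) * x[:, None] / (x[:, None] ** 2 + y[None, :] ** 2))

prod = (2 / np.pi) ** len(x) * np.prod(x)
prod /= np.prod([xi ** 2 + yj ** 2 for xi in x for yj in y])
prod *= np.prod([xi ** 2 - xj ** 2 for xi, xj in combinations(x, 2)])
prod *= np.prod([yi ** 2 - yj ** 2 for yi, yj in combinations(y, 2)])

print(det_H, prod)                # the two numbers agree
```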

4.4 Brownian Motion in a Strip

Consider the infinite strip given by

$$\begin{aligned} \Omega _{2}=\{z\in {\mathbb {C}}: 0<\mathfrak {Im}(z)< t\}. \end{aligned}$$

By conformal invariance of the two-dimensional Brownian motion (the function \(f(z)=e^{\pi z/2t}\) maps \(\Omega _{2}\) onto the positive quadrant \(\Omega _{1}\)), we can also consider a ‘Brownian motion’ constrained to live in the strip \({\overline{\Omega }}_{2}\), starting at a point \(x\in {\mathbb {R}}\) on the x-axis (which is normally reflecting) and stopped once it hits the (absorbing) boundary line \(\mathfrak {Im}(z)=t\). If the process starts at a point \(x\in {\mathbb {R}}\) and first hits the absorbing boundary at the point \(y+it\), \(y\in {\mathbb {R}}\), then by conformal invariance (that is, using Proposition 4.2 and formula (4.7) above) we obtain a formula for the hitting density \(h_{t}(x,y)\) of \(\Omega _{2}\):

$$\begin{aligned} h_{t}(x,y)&=\frac{\pi }{2t}|ie^{\pi y/2t}|h\left( e^{\pi x/2t},e^{\pi y/2t}\right) =\frac{1}{t}\frac{1}{e^{\pi (y-x)/2t}+e^{-\pi (y-x)/2t}}. \end{aligned}$$

In terms of hyperbolic functions, the above expression can be written as

$$\begin{aligned} h_{t}(x,y)=\frac{1}{2t}\mathrm{sech}\left( \frac{\pi }{2t}(y-x)\right) ,\quad y\in {\mathbb {R}}. \end{aligned}$$
(4.10)
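
Both the conformal covariance step above and the fact that (4.10) is a probability density can be verified directly; a short Python sketch (the value of t and the boundary points are arbitrary assumptions):

```python
import numpy as np

t = 1.3

def h_quadrant(x, y):              # density (4.7) in the positive quadrant
    return (2 / np.pi) * x / (x ** 2 + y ** 2)

def h_strip(x, y):                 # density (4.10) in the strip of height t
    return 1 / (2 * t) / np.cosh(np.pi * (y - x) / (2 * t))

# h_t(x, y) = (pi / 2t) e^{pi y / 2t} h(e^{pi x / 2t}, e^{pi y / 2t}), cf. the display above (4.10)
for x, y in [(-1.0, 0.5), (0.0, 2.0), (2.0, -3.0)]:
    lhs = h_strip(x, y)
    rhs = (np.pi / (2 * t)) * np.exp(np.pi * y / (2 * t)) \
        * h_quadrant(np.exp(np.pi * x / (2 * t)), np.exp(np.pi * y / (2 * t)))
    print(lhs, rhs)                # the two columns agree

# Total mass one: a Riemann sum of h_t(0, .) over a wide grid is approximately 1.
grid = np.linspace(-60 * t, 60 * t, 200001)
print(np.sum(h_strip(0.0, grid)) * (grid[1] - grid[0]))
```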

Define the determinant of hitting densities

$$\begin{aligned} H_{t}(x,y)=\det \left( h_{t}(x_{i},y_{j})\right) _{i,j=1}^{n},\quad x,y\in C, \end{aligned}$$
(4.11)

where

$$\begin{aligned}C=\{x\in {\mathbb {R}}^{n}: x_{n}<x_{n-1}<\cdots <x_{1}\}.\end{aligned}$$

Proposition 4.5

For any \(y\in C\), and \(t>0\),

$$\begin{aligned} \lim _{\begin{array}{c} x\in C\\ x\rightarrow 0 \end{array}}{{\tilde{H}}}_{t}(x,y)=\frac{1}{M_t}\prod _{j=1}^{n}{\mathrm{sech}}\left( \frac{\pi }{2t}y_{j}\right) \prod _{1\le i<j\le n}\left( \tanh \left( \frac{\pi }{2t}y_{i}\right) -\tanh \left( \frac{\pi }{2t}y_{j}\right) \right) , \end{aligned}$$

where

$$\begin{aligned} {\tilde{H}}_{t}(x,y)=\left( \int _{C}H_{t}(x,y)dy\right) ^{-1}H_{t}(x,y), \end{aligned}$$

and \(M_t\) is the corresponding normalisation constant.

Proof

For all \(x,y\in C\), the function \(H_{t}(x,y)\) is positive (see [18]), and the following explicit expression for \(H_{t}(x,y)\) can be obtained

$$\begin{aligned} H_{t}(x,y)=\frac{1}{(2t)^{n}}\prod _{1\le i,j\le n}\mathrm{sech}\left( \frac{\pi }{2t}(y_{j}-x_{i})\right) \prod _{1\le i<j\le n}\mathrm{sinh}\left( \frac{\pi }{2t}(x_{i}-x_{j})\right) \mathrm{sinh}\left( \frac{\pi }{2t}(y_{i}-y_{j})\right) . \end{aligned}$$

Consider the normalised density

$$\begin{aligned} {{\tilde{H}}}_{t}(x,y)=\left( \int _{C}F_{x}(t,y)dy\right) ^{-1}F_{x}(t,y), \end{aligned}$$

where

$$\begin{aligned} F_{x}(t,y)=\prod _{1\le i,j\le n}\mathrm{sech}\left( \frac{\pi }{2t}(y_{j}-x_{i})\right) \prod _{1\le i<j\le n}\mathrm{sinh}\left( \frac{\pi }{2t}(y_{i}-y_{j})\right) . \end{aligned}$$

Let \(\varepsilon >0\). Then, for all \(x\in C\) such that \(|x_{i}|<\varepsilon \), \(1\le i\le n\), and for all \(y\in C\),

$$\begin{aligned} |F_{x}(t,y)|\le \prod _{j=1}^{n}\left( \frac{2c}{e^{\frac{\pi }{2t}y_{j}}+c^{-2}e^{-\frac{\pi }{2t}y_{j}}}\right) ^{n}\prod _{1\le i<j\le n}\mathrm{sinh}\left( \frac{\pi }{2t}(y_{i}-y_{j})\right) , \end{aligned}$$

where \(c=e^{\pi \varepsilon /2t}\). It can be verified that the function on the right hand side above is integrable over C. Therefore, by the dominated convergence theorem, for any \(t>0\) and \(y\in C\), it holds

$$\begin{aligned} \lim _{\begin{array}{c} x\in C\\ x\rightarrow 0 \end{array}}{{\tilde{H}}}_{t}(x,y)&=\frac{1}{M_t}\prod _{1\le i,j\le n}\mathrm{sech}\left( \frac{\pi }{2t}y_{j}\right) \prod _{1\le i<j\le n}\mathrm{sinh}\left( \frac{\pi }{2t}(y_{i}-y_{j})\right) \\&=\frac{1}{M_t}\prod _{j=1}^{n}{\mathrm{sech}}\left( \frac{\pi }{2t}y_{j}\right) \prod _{1\le i<j\le n}\left( \tanh \left( \frac{\pi }{2t}y_{i}\right) -\tanh \left( \frac{\pi }{2t}y_{j}\right) \right) , \end{aligned}$$

where \(M_t\) is the corresponding normalisation constant. \(\square \)
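
The explicit formula for \(H_{t}(x,y)\) used at the start of the proof can again be tested numerically (arbitrary sample points and an assumed value of t):

```python
import numpy as np
from itertools import combinations

t = 0.7
x = np.array([1.2, 0.3, -0.8])     # points of the chamber C (decreasing)
y = np.array([2.0, 0.1, -1.5])
a = np.pi / (2 * t)

det_H = np.linalg.det(1 / (2 * t) / np.cosh(a * (y[None, :] - x[:, None])))

rhs = (1 / (2 * t)) ** len(x)
rhs *= np.prod([1 / np.cosh(a * (yj - xi)) for xi in x for yj in y])
rhs *= np.prod([np.sinh(a * (xi - xj)) for xi, xj in combinations(x, 2)])
rhs *= np.prod([np.sinh(a * (yi - yj)) for yi, yj in combinations(y, 2)])

print(det_H, rhs)                  # the two numbers agree
```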

4.5 Brownian Motion in the Half Unit Disk

The image of the positive quadrant \(\Omega _{1}\) under the conformal map \(f(z)=\frac{z-1}{z+1}\) is the upper half unit disk

$$\begin{aligned} \Omega _{3}=\{z\in {\mathbb {C}}: |z|<1, \mathfrak {Im}(z)>0 \}. \end{aligned}$$

The ‘Brownian motion’ B in \({\overline{\Omega }}_{3}\) is reflected on the x-axis and stopped once it reaches the boundary \(|z|=1\). For this process, started at a point \(x\in {\mathbb {R}}\), \(|x|<1\), the density of the first hitting point \(z=e^{i\theta }\), \(0<\theta <\pi \), on the unit circle is given by the well-known formula (see [5], Sect. 1.10):

$$\begin{aligned} h(x,\theta )&=\frac{1}{\pi }\frac{1-x^{2}}{1-2x\cos \theta +x^{2}},\quad 0<\theta <\pi . \end{aligned}$$

As before, consider the determinant of hitting densities

$$\begin{aligned} H(x,\theta )=\det (h(x_{i},\theta _{j}))_{i,j=1}^{n},\quad x\in N, \theta \in \Theta , \end{aligned}$$

where

$$\begin{aligned} N=&\{x\in {\mathbb {R}}^{n} : -1<x_{n}<x_{n-1}<\cdots<x_{1}<1\}\quad \mathrm{and}\\ \Theta =&\{\theta \in {\mathbb {R}}^{n}:0<\theta _{1}<\theta _{2}<\cdots<\theta _{n}<\pi \}. \end{aligned}$$

Proposition 4.6

For any \(\theta \in \Theta \)

$$\begin{aligned} \lim _{\begin{array}{c} x\in N\\ x\rightarrow 0 \end{array}}{{\tilde{H}}}(x,\theta )=\frac{1}{M}\prod _{1\le i<j\le n}|e^{i\theta _{i}}-e^{i\theta _{j}}||e^{i\theta _{i}}-e^{-i\theta _{j}}|, \end{aligned}$$

where

$$\begin{aligned} {\tilde{H}}(x,\theta )=\left( \int _{\Theta }H(x,\theta )d\theta \right) ^{-1}H(x,\theta ), \end{aligned}$$

and M is the corresponding normalisation constant.

Remark

The above density can be thought of as the \(\beta =1\) version of the eigenvalue density of a random matrix in SO(2n), the group of \(2n\times 2n\) real orthogonal matrices with determinant one (see [3]).

Proof

The function \(H(x,\theta )\) can be expressed explicitly as

$$\begin{aligned} H(x,\theta )=H'(x)\prod _{1\le i,j\le n}(1-2x_{i}\cos \theta _{j}+x_{i}^{2})^{-1}\prod _{1\le i<j\le n}2(\cos \theta _{i}-\cos \theta _{j}), \end{aligned}$$

where

$$\begin{aligned} H'(x)=\frac{1}{\pi ^{n}}\prod _{i=1}^{n}\left( 1-x_{i}^{2}\right) \prod _{1\le i<j\le n}\left( x_{i}-x_{j}\right) \left( 1-x_{i}x_{j}\right) . \end{aligned}$$

For all \(x\in N\), \(\theta \in \Theta \), the function \(H(x,\theta )\) is then positive. Consider the normalised density

$$\begin{aligned} {\tilde{H}}(x,\theta )=\left( \int _{\Theta }F_{x}(\theta )d\theta \right) ^{-1}F_{x}(\theta ), \end{aligned}$$

where

$$\begin{aligned} F_{x}(\theta )=\frac{H(x,\theta )}{H'(x)}=\prod _{1\le i,j\le n}(1-2x_{i}\cos \theta _{j}+x_{i}^{2})^{-1}\prod _{1\le i<j\le n}2(\cos \theta _{i}-\cos \theta _{j}). \end{aligned}$$

Let \(0<\varepsilon <1\), and assume that \(|x_{i}|<\varepsilon \), for all \(1\le i\le n\). We have that \(|1-2x_{i}\cos \theta _{j}+x_{i}^{2}|>(1-\varepsilon )^{2}\) and therefore

$$\begin{aligned} |F_{x}(\theta )|\le 2^{n(n-1)}(1-\varepsilon )^{-2n^{2}},\quad \text {for all}\,\,\theta \in \Theta . \end{aligned}$$

Since \(\Theta \) is a bounded set, by the bounded convergence theorem it follows that for any \(\theta \in \Theta \)

$$\begin{aligned} \lim _{\begin{array}{c} x\in N\\ x\rightarrow 0 \end{array}}{{\tilde{H}}}(x,\theta )&=\frac{1}{M}\prod _{1\le i<j\le n}2(\cos \theta _{i}-\cos \theta _{j})\\&=\frac{1}{M}\prod _{1\le i<j\le n}|e^{i\theta _{i}}-e^{i\theta _{j}}||e^{i\theta _{i}}-e^{-i\theta _{j}}|,\nonumber \end{aligned}$$
(4.12)

where

$$\begin{aligned} M=\int _{\Theta }\prod _{1\le i<j\le n}2(\cos \theta _{i}-\cos \theta _{j})d\theta \end{aligned}$$

is the normalisation constant. \(\square \)
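
The last step of (4.12) uses the elementary identity \(2(\cos \theta _{i}-\cos \theta _{j})=|e^{i\theta _{i}}-e^{i\theta _{j}}||e^{i\theta _{i}}-e^{-i\theta _{j}}|\), valid for \(0<\theta _{i}<\theta _{j}<\pi \); a two-line numerical check (arbitrary angles):

```python
import numpy as np

a, b = 0.4, 2.1      # any angles with 0 < a < b < pi
lhs = 2 * (np.cos(a) - np.cos(b))
rhs = abs(np.exp(1j * a) - np.exp(1j * b)) * abs(np.exp(1j * a) - np.exp(-1j * b))
print(lhs, rhs)      # the two numbers agree
```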

4.6 A Note on Excursion Poisson Kernel Determinants

In all the examples of Sects. 4.3, 4.4 and 4.5, we have imposed both absorbing and normal reflecting boundary conditions on the domains under consideration. If, on the other hand, the whole boundary \(\partial \Omega \) is absorbing, then we require a different notion of hitting density \(h(x,y)\) (since the paths need to ‘walk’ into the interior \(\Omega ^{\circ }=\Omega \setminus \partial \Omega \) before reaching their destination). Therefore, in order to study determinants of the form (4.1) and (4.2), we consider the so-called excursion Poisson kernel \(h_{\partial \Omega }(x,y)\), which can be defined as the limit

$$\begin{aligned} h_{\partial \Omega }(x,y)=\lim _{\varepsilon \rightarrow 0}\frac{1}{\varepsilon }h(x+\varepsilon \mathbf{n}_{x},y),\quad x,y\in \partial \Omega , \end{aligned}$$

where h(zy), \(z\in \Omega \), is the usual hitting density (Definition 2), and \(\mathbf{n}_{x}\) is the unit normal at x pointing into \(\Omega \) (see [26] for details). As we said before, intuitively, the excursion Poisson kernel requires the path to ‘walk’ into \(\Omega \) before reaching \(\partial \Omega \), and it is the scaling limit of simple random walk excursion probabilities [25, 26].

It can be shown that, similarly to Proposition 4.2, the excursion Poisson kernel satisfies a conformal covariance property:

$$\begin{aligned} h_{\partial \Omega }(x,y)=|f'(x)||f'(y)|h_{\partial \Omega '}(f(x),f(y)), \end{aligned}$$

where \(f:\Omega \rightarrow \Omega '\) is any conformal transformation. This implies that the determinant of excursion Poisson kernels:

$$\begin{aligned} \frac{\det (h_{\partial \Omega }(x_{i},y_{j}))_{i,j=1}^{n}}{\prod _{i=1}^{n}h_{\partial \Omega }(x_{i},y_{i})}, \end{aligned}$$
(4.13)

is a conformal invariant (see [25]). In particular, if \(\Omega \) is the half unit disk of Sect. 4.5, standard calculations show that the excursion Poisson kernel is given by

$$\begin{aligned} h_{\partial \Omega }(x,\theta )=\frac{2}{\pi }\frac{(1-x^{2})\sin \theta }{(1-2x\cos \theta +x^{2})^{2}},\quad 0<\theta <\pi , \end{aligned}$$

for \(x\in {\mathbb {R}}\), \(|x|<1\). The next proposition is the excursion Poisson kernel analogue of Proposition 4.6.

Proposition 4.7

As in Proposition 4.6, let \(\Theta \) be the set

$$\begin{aligned} \Theta =\{\theta \in {\mathbb {R}}^{n}:0<\theta _{1}<\theta _{2}<\cdots<\theta _{n}<\pi \}. \end{aligned}$$

Then

$$\begin{aligned} \lim _{x_{1},\ldots ,x_{n}\rightarrow 0}\frac{\det (h_{\partial \Omega }(x_{i},\theta _{j}))_{i,j=1}^{n}}{\int _{\Theta }\det (h_{\partial \Omega }(x_{i},\theta _{j}))_{i,j=1}^{n}d\theta }=\frac{1}{M}\prod _{j=1}^{n}\sin \theta _{j}\prod _{1\le i<j\le n}(\cos \theta _{i}-\cos \theta _{j}), \end{aligned}$$

where the limit is taken over points \(-1<x_{n}<x_{n-1}<\cdots<x_{1}<1\) and M is the corresponding normalisation constant.

Proof

Note that

$$\begin{aligned} \det (h_{\partial \Omega }(x_{i},\theta _{j}))_{i,j=1}^{n}=\left( \frac{2}{\pi }\right) ^{n}\prod _{i=1}^{n}(1-x_{i}^{2})\prod _{j=1}^{n}\sin \theta _{j}\,\det \left( B\right) , \end{aligned}$$

where \(B=(b_{i,j})\) is the \(n\times n\) matrix with positive entries

$$\begin{aligned} b_{i,j}=\frac{1}{(1-2x_{i}\cos \theta _{j}+x_{i}^{2})^{2}}. \end{aligned}$$

The determinant \(\det (B)\) can be expressed as the product (see [2]):

$$\begin{aligned} \det (B)=\det \left( \frac{1}{1-2x_{i}\cos \theta _{j}+x_{i}^{2}}\right) _{i,j=1}^{n}\mathrm{per}\left( \frac{1}{1-2x_{i}\cos \theta _{j}+x_{i}^{2}}\right) _{i,j}^{n}, \end{aligned}$$
(4.14)

where the permanent of a square matrix is defined as

$$\begin{aligned} \mathrm{per}(a_{i,j})_{i,j=1}^{n}=\sum _{\sigma \in S_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}. \end{aligned}$$

The determinant in the right hand side of (4.14) was considered in Sect. 4.5 and therefore we can conclude that

$$\begin{aligned} \det (h_{\partial \Omega }(x_{i},\theta _{j}))_{i,j=1}^{n}=G(x)P(x,\theta )\prod _{j=1}^{n}\sin \theta _{j}\prod _{1\le i<j\le n}2(\cos \theta _{i}-\cos \theta _{j}), \end{aligned}$$
(4.15)

where G(x) and \(P(x,\theta )\) are given by

$$\begin{aligned} G(x)&=\left( \frac{2}{\pi }\right) ^{n}\prod _{i=1}^{n}(1-x_{i}^{2})\prod _{1\le i<j\le n}(x_{i}-x_{j})(1-x_{i}x_{j}),\\ P(x,\theta )&=\prod _{1\le i,j\le n}(1-2x_{i}\cos \theta _{j}+x_{i}^{2})^{-1}\mathrm{per}\left( \frac{1}{1-2x_{i}\cos \theta _{j}+x_{i}^{2}}\right) _{i,j}^{n}. \end{aligned}$$

For all \(-1<x_{n}<x_{n-1}<\cdots<x_{1}<1\) and \(\theta \in \Theta \), the determinant (4.15) is then positive and, since the term G(x) depends only on the variables \(x_{1},\ldots ,x_{n}\), it holds that

$$\begin{aligned} \frac{\det (h_{\partial \Omega }(x_{i},\theta _{j}))_{i,j=1}^{n}}{\int _{\Theta }\det (h_{\partial \Omega }(x_{i},\theta _{j}))_{i,j=1}^{n}d\theta }=\left( \int _{\Theta }Q_{x}(\theta )d\theta \right) ^{-1}Q_{x}(\theta ), \end{aligned}$$

where

$$\begin{aligned} Q_{x}(\theta )=P(x,\theta )\prod _{j=1}^{n}\sin \theta _{j}\prod _{1\le i<j\le n}2(\cos \theta _{i}-\cos \theta _{j}). \end{aligned}$$

Finally, note that for each \(\theta \in \Theta \), \(\lim P(x,\theta )=n!\) when \(x_{i}\rightarrow 0\), \(1\le i\le n\), and

$$\begin{aligned} |Q_{x}(\theta )|\le n!\,2^{n(n-1)}(1-\varepsilon )^{-2(n^{2}+n)},\quad \text {for all}\,\,\theta \in \Theta , \end{aligned}$$

whenever \(|x_{i}|<\varepsilon \), \(0<\varepsilon <1\), for all \(1\le i\le n\). Since \(\Theta \) is bounded, the desired result follows from the bounded convergence theorem. \(\square \)
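
Identity (4.14) is a Cauchy-matrix instance of Borchardt's identity (cf. [2]) and can be checked by brute force for small n; the following Python sketch (arbitrary points, permanent computed by direct summation over permutations):

```python
import numpy as np
from itertools import permutations

def permanent(A):
    n = len(A)
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

x = np.array([0.2, 0.5, 0.8])                 # distinct points with |x_i| < 1
theta = np.array([0.7, 1.4, 2.5])             # distinct angles in (0, pi)

C = 1.0 / (1 - 2 * x[:, None] * np.cos(theta)[None, :] + x[:, None] ** 2)
print(np.linalg.det(C ** 2))                  # left-hand side of (4.14)
print(np.linalg.det(C) * permanent(C))        # right-hand side of (4.14)
```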

Proposition 4.7 agrees with certain asymptotics of an excursion Poisson kernel determinant in [34], in the context of rectangular domains of the complex plane.

5 Circular Ensembles

In this section we consider limits of determinants of hitting densities of the (affine) form (4.2)

$$\begin{aligned} H(x,y)=\det \left( \sum _{k\in {\mathbb {Z}}}\zeta ^{k}h(x_{i},y_{j}+mk)\right) _{i,j=1}^{n}dy_{1}\cdots dy_{n}, \end{aligned}$$
(5.1)

where

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll} 1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}} \\ -1 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

and reveal some natural connections with circular ensembles of random matrix theory, similar to the connections we described in Sect. 4 with Cauchy-type ensembles. In particular, by considering the hitting density of the two-dimensional Brownian motion in an annulus on the complex plane, we obtain a novel interpretation of the Circular Orthogonal Ensemble (COE) (see Sect. 5.2). Another example is given in Sect. 5.1, where we review the well-known model of n non-intersecting (one-dimensional) Brownian motions on the circle [16] and detail its connection with the Circular Orthogonal Ensemble. An interesting consequence is Proposition 5.3, which recovers the Karlin–McGregor (for n odd) and Liechty–Wang (for n even) determinant formulas [19, 30], for the transition density of n indistinguishable non-intersecting Brownian motions on the circle, from the one in [16].

5.1 Brownian Motion on the Unit Circle

As a warm-up before Sect. 5.2, we describe the model of n non-intersecting Brownian motions on the unit circle, originally studied by Hobson and Werner [16]. Here, the Brownian motions on \({\mathbb {T}}=\{e^{i\theta }:-\pi \le \theta <\pi \}\) are given by

$$\begin{aligned} \beta _{k}:=e^{i B_{k}},\quad 1\le k\le n, \end{aligned}$$

where \(B_{1},B_{2},\ldots ,B_{n}\) are n independent one-dimensional Brownian motions and we assume \(n\ge 2\). The following proposition shows that the above model can be studied by considering the exit time of the n-dimensional Brownian motion \(B=(B_{1},B_{2},\ldots ,B_{n})\) from the domain

$$\begin{aligned} {\tilde{A}}_{n}:=\{\nu \in {\mathbb {R}}^{n} : \nu _{n}<\nu _{n-1}<\cdots<\nu _{2}<\nu _{1}<\nu _{n}+2\pi \}. \end{aligned}$$

Proposition 5.1

(Hobson and Werner [16]) Let B and \({\tilde{A}}_{n}\) be as above. The transition density of the Brownian motion B killed at its first exit from \({\tilde{A}}_{n}\) is given by

$$\begin{aligned} q_{t}(\theta , \nu )=\sum _{\sigma \in S_{n}}\sum _{k_{1}+k_{2}+\cdots +k_{n}=0}\mathrm{sgn}(\sigma )\prod _{i=1}^{n}p_{t}(\theta _{i},\nu _{\sigma (i)}+2\pi k_{i}),\quad t>0, \end{aligned}$$
(5.2)

where \(\theta =(\theta _{1},\ldots ,\theta _{n})\in {\tilde{A}}_{n}\), \(\nu =(\nu _{1},\ldots ,\nu _{n})\in {\tilde{A}}_{n}\), and

$$\begin{aligned} p_{t}(x,y)=\frac{1}{\sqrt{2\pi t}}e^{-\frac{(x-y)^{2}}{2t}} \end{aligned}$$

is the normal density with mean x and variance t.

The method of proof of the last proposition is by a path-switching argument, similar to the one of Theorem 3.1. The following corollary is a restatement of part (i) of the main theorem in [16] and describes the transition density for n labelled particles in Brownian motion on the circle, constrained not to intersect until a fixed positive time.

Corollary 5.2

The (unnormalised) transition density of n non-intersecting Brownian motions \((\beta _{1},\ldots ,\beta _{n})\) on the circle is

$$\begin{aligned} q^{*}_{t}(e^{i\theta },e^{i\nu })=\sum _{\sigma \in S_{n}}\sum _{\begin{array}{c} k_{1}+k_{2}+\cdots +k_{n}=0\\ \mathrm{mod}\,n \end{array}}\mathrm{sgn}(\sigma )\prod _{i=1}^{n}p_{t}(\theta _{i},\nu _{\sigma (i)}+2\pi k_{i}),\quad t>0 \end{aligned}$$

where \(e^{i\theta }=(e^{i\theta _{1}},\ldots ,e^{i\theta _{n}})\in {\mathbb {T}}^{n}\), \(e^{i\nu }=(e^{i\nu _{1}},\ldots ,e^{i\nu _{n}})\in {\mathbb {T}}^{n}\), and

$$\begin{aligned} \theta ,\nu \in C={\tilde{A}}_{n}\cap \{\nu \in {\mathbb {R}}^{n}: -\pi \le \nu _{n}< \pi \}. \end{aligned}$$

Moreover, \(q^{*}_{t}(e^{i\theta },e^{i\nu })\) can be expressed as the sum of n determinants:

$$\begin{aligned} q^{*}_{t}\left( e^{i\theta },e^{i\nu }\right) =\frac{1}{n}\sum _{u=0}^{n-1}\det \left( \sum _{k\in {\mathbb {Z}}}\eta ^{uk}p_{t}(\theta _{i},\nu _{j}+2\pi k)\right) _{i,j=1}^{n}. \end{aligned}$$
(5.3)

Proof

Since any point in the circle is the projection of an infinite set of points in the real line modulo \(2\pi \), the first part follows immediately by summing up in (5.2) over all the images of \(\nu =(\nu _{1},\ldots ,\nu _{n})\in {\tilde{A}}_{n}\) under translations of \(2\pi \), that is

$$\begin{aligned} q^{*}_{t}(e^{i\theta },e^{i\nu })=\sum _{\ell \in {\mathbb {Z}}}q_{t}(\theta ,\nu +2\pi \ell (1,\ldots ,1)). \end{aligned}$$

For the second part, if \(\eta =e^{i\frac{2\pi }{n}}\) is a primitive n-th root of unity, we can eliminate the condition \(k_1+k_2+\cdots +k_n=0\), mod n, by using the identity

$$\begin{aligned} \frac{1}{n}\sum _{u=0}^{n-1}\eta ^{u\sum _{i=1}^{n} k_{i}}=\left\{ \begin{array}{ll} 1 &{}\quad {\text {if }}\, \sum _{i=1}^{n}k_{i}=0,\,\,\text {mod}\,n \\ 0 &{}\quad {\text {otherwise}}, \end{array} \right. \end{aligned}$$

and (5.3) follows. \(\square \)
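
The root-of-unity averaging in the proof can be checked by brute force for small n; the following Python sketch (with assumed values n = 3, t = 0.5 and a truncation of all lattice sums at \(|k_i|\le K\), which is harmless for this t) compares the double sum in the statement of Corollary 5.2 with the right-hand side of (5.3).

```python
import numpy as np
from itertools import permutations, product

n, t, K = 3, 0.5, 4
theta = np.array([2.0, 0.3, -1.9])            # a point of the chamber C
nu = np.array([2.5, 1.0, -0.4])

def p(x, y):                                  # heat kernel p_t(x, y)
    return np.exp(-(x - y) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def sgn(s):                                   # sign of a permutation, by counting inversions
    return (-1) ** sum(s[i] > s[j] for i in range(len(s)) for j in range(i + 1, len(s)))

# Definition of q_t^*: sum over permutations and over k with k_1 + ... + k_n = 0 mod n.
lhs = sum(sgn(s) * np.prod([p(theta[i], nu[s[i]] + 2 * np.pi * k[i]) for i in range(n)])
          for s in permutations(range(n))
          for k in product(range(-K, K + 1), repeat=n) if sum(k) % n == 0)

# Right-hand side of (5.3): an average of n determinants with root-of-unity weights.
eta = np.exp(2j * np.pi / n)
rhs = sum(np.linalg.det(np.array([[sum(eta ** (u * k) * p(theta[i], nu[j] + 2 * np.pi * k)
                                       for k in range(-K, K + 1))
                                   for j in range(n)] for i in range(n)]))
          for u in range(n)) / n

print(lhs, rhs.real)                          # the two numbers agree
```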

Interestingly, if we do not label the n Brownian particles in Corollary 5.2 (and therefore the locations at time \(t>0\) are given by any of the n cyclic permutations of the vector \((e^{i\nu _{1}},\ldots ,e^{i\nu _{n}})\) along the circle), then the corresponding transition density becomes a single determinant:

Proposition 5.3

The (unnormalised) transition density of n ‘indistinguishable’ non-intersecting Brownian motions on the circle is given by

$$\begin{aligned} H_{t}\left( e^{i\theta },e^{i\nu }\right) =\det \left( \sum _{k\in {\mathbb {Z}}}e^{i2\pi xk} p_{t}(\theta _{i},\nu _{j}+2\pi k)\right) _{i,j=1}^{n},\quad \theta ,\nu \in C, \end{aligned}$$
(5.4)

where

$$\begin{aligned} x= \left\{ \begin{array}{ll} 0 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}}, \\ \frac{1}{2} &{} \quad {\text {if}}\,\, { n}\,\, {\text {is even}}. \end{array} \right. \end{aligned}$$

Remark

In particular, Proposition 5.3 recovers the Karlin–McGregor (for n odd) and Liechty–Wang (for n even) determinant formulas [19, 30], for the transition density of n indistinguishable non-intersecting Brownian motions on the circle.

Remark

Using modular transformations for Jacobi theta functions, the entries of the matrix in (5.4) can be written in terms of theta functions as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} \frac{1}{2\pi }\,\theta _{3}\left( -\frac{(\nu _{j}-\theta _{i})}{2},e^{-t/2}\right) &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}}, \\ \frac{1}{2\pi }\,\theta _{2}\left( -\frac{(\nu _{j}-\theta _{i})}{2},e^{-t/2}\right) &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}, \end{array} \right. \end{aligned}$$

where \(\theta _{k}(z,q)\) is the k-Jacobi theta function, \(z\in {\mathbb {C}}\), \(|q|<1\) (see [33]).

Proof of Proposition 5.3

The method of proof is by summing up, in (5.3), over the n different destinations of the labelled process of Corollary 5.2. Fix \(\theta \in C\) and \(\nu \in C\). If \([\ell ]\in S_{n}\) is the shift by \(\ell =0,1,\ldots ,n-1\), let \(\nu _{[\ell ]}\) be the unique representative of \((\nu _{[\ell ](1)},\ldots ,\nu _{[\ell ](n)})\) in C. Then, the n different ‘cyclic permutations’ of the vector \(e^{i\nu }=(e^{i\nu _{1}},\ldots ,e^{i\nu _{n}})\in {\mathbb {T}}^{n}\) along the unit circle are given by

$$\begin{aligned} e^{i\nu _{[\ell ]}},\quad \ell =0,1,\ldots ,n-1. \end{aligned}$$

With the notation of Corollary 5.2, it holds that

$$\begin{aligned} q_{t}^{*}\left( e^{i\theta },e^{i\nu _{[\ell ]}}\right) =\frac{1}{n}\sum _{u=0}^{n-1}\eta ^{-\ell u}\mathrm{sgn}([\ell ])\det \left( \sum _{k\in {\mathbb {Z}}}\eta ^{uk}p_{t}(\theta _{i},\nu _{j}+2\pi k)\right) _{i,j=1}^{n}. \end{aligned}$$

Finally, following the same argument as in the proof of Proposition 3.3, we obtain

$$\begin{aligned} \sum _{\ell =0}^{n-1}q_{t}^{*}(e^{i\theta },e^{i\nu _{[\ell ]}})=\det \left( \sum _{k\in {\mathbb {Z}}}e^{i2\pi xk} p_{t}\left( \theta _{i},\nu _{j}+2\pi k\right) \right) _{i,j=1}^{n}, \end{aligned}$$

where

$$\begin{aligned} x= \left\{ \begin{array}{ll} 0 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}}, \\ \frac{1}{2} &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}. \end{array} \right. \end{aligned}$$

\(\square \)

Following the notation of Proposition 5.3, consider now the normalised density

$$\begin{aligned} {\tilde{H}}_{t}(e^{i\theta },e^{i\nu })=\frac{1}{M_{t,\theta }}H_{t}(e^{i\theta },e^{i\nu }),\quad \theta ,\nu \in C, \end{aligned}$$

where

$$\begin{aligned} M_{t,\theta }=\int _{C}H_{t}(e^{i\theta },e^{i\nu })d\nu . \end{aligned}$$

The following proposition is essentially a reformulation of (ii) and (iii) of the main theorem in [16], here stated in the case of n indistinguishable non-intersecting Brownian motions on the circle. We also take into consideration the corresponding normalisation constants.

Proposition 5.4

For any \(\theta ,\nu \in C\),

$$\begin{aligned} \lim _{t\rightarrow \infty }{\tilde{H}}_{t}(e^{i\theta },e^{i\nu })&=\frac{1}{M}\prod _{1\le i<j\le n}|e^{i\nu _{j}}-e^{i\nu _{i}}|, \end{aligned}$$
(5.5)

where

$$\begin{aligned}\lim _{t\rightarrow \infty }M_{t,\theta }=M,\end{aligned}$$

and M is the corresponding normalisation constant in the right hand side of (5.5).

Remark

The above limit agrees with the eigenvalue density of the Circular Orthogonal Ensemble (COE), defined on \(C={\tilde{A}}_{n}\cap \{\nu \in {\mathbb {R}}^{n}: -\pi \le \nu _{n}< \pi \}\).

Proof of Proposition 5.4

Using the Poisson summation formula for each entry of the matrix array in (5.4), we have

$$\begin{aligned} \sum _{k\in {\mathbb {Z}}}e^{i2\pi xk} p_{t}(\theta _{i},\nu _{j}+2\pi k)=\frac{1}{2\pi }\sum _{k\in {\mathbb {Z}}}e^{-i(\nu _{j}-\theta _{i})(x+k)}e^{-\frac{t}{2}(x+k)^{2}}. \end{aligned}$$

Alternatively, the above can be seen as a direct consequence of the definitions by infinite series of the Jacobi’s theta functions \(\theta _{3}\) and \(\theta _{4}\) (see second remark after Proposition 5.3). Now, by standard properties of determinants we obtain

$$\begin{aligned} H_{t}(e^{i\theta },e^{i\nu })&=\frac{1}{(2\pi )^{n}}\sum _{\begin{array}{c} k_{1}<k_{2}<\cdots <k_{n}\\ k_{i}\in {\mathbb {Z}} \end{array}}\det \left( e^{-i\nu _{j}(x+k_{i})}\right) \det \left( e^{i\theta _{j}(x+k_{i})}\right) g_{t}(\mathbf{k}), \end{aligned}$$

where \(\mathbf{k}=(k_{1},\ldots ,k_{n})\) and

$$\begin{aligned} g_{t}(\mathbf{k})=\exp \left( -\frac{t}{2}\sum _{i=1}^{n}(x+k_{i})^{2}\right) . \end{aligned}$$

Remember from (5.4) that \(x=0\) if n is odd and \(x=1/2\) if n is even. Regarding the term \(g_{t}(\mathbf{k})\), note that over all sequences of integers \(k_{1}<k_{2}<\cdots <k_{n}\), we have

$$\begin{aligned} \min _{\begin{array}{c} k_{i}\in {\mathbb {Z}}\\ k_{1}<\cdots <k_{n} \end{array}} \frac{1}{2}\sum _{i=1}^{n}(x+k_{i})^{2}=\frac{n(n-1)(n+1)}{24}, \end{aligned}$$

and the minimum is attained uniquely at \(k_{i}=k'_{i}\), \(1\le i\le n\), where

$$\begin{aligned} x+k'_{i}=i-\frac{n+1}{2},\quad i=1,2,\ldots ,n. \end{aligned}$$

Therefore

$$\begin{aligned} H_{t}(e^{i\theta },e^{i\nu })=\frac{g_{t}(\mathbf{k}')}{(2\pi )^{n}}\left( \det (e^{i\theta _{j}(x+k'_{i})})_{i,j=1}^{n} \det (e^{-i\nu _{j}(x+k'_{i})})_{i,j=1}^{n}+Q_{t}(\theta ,\nu )\right) , \end{aligned}$$

where \(Q_{t}(\theta ,\nu )\) satisfies

$$\begin{aligned} |Q_{t}(\theta ,\nu )|&\le (n!)^2 \sum _{\begin{array}{c} k_{1}<k_{2}<\cdots<k_{n}\\ \mathbf{k}\not =\mathbf{k}' \end{array}}\frac{g_{t}(\mathbf{k})}{g_{t}(\mathbf{k}')}\\&= (n!)^2 \sum _{\begin{array}{c} k_{1}<k_{2}<\cdots <k_{n}\\ \mathbf{k}\not =\mathbf{k}' \end{array}}e^{-t\left( \frac{1}{2}\sum _{i=1}^{n}(x+k_{i})^{2}-\frac{n(n-1)(n+1)}{24}\right) }. \end{aligned}$$

It is not difficult to check that \(Q_{t}=o(1)\) uniformly in \(\theta \) and \(\nu \), as \(t\rightarrow \infty \). Furthermore, the normalised density \({\tilde{H}}_{t}(e^{i\theta },e^{i\nu })\) can be written as

$$\begin{aligned} {\tilde{H}}_{t}(e^{i\theta },e^{i\nu })=\left( \int _{C}F_{t}^{\theta }(\nu )d\nu \right) ^{-1}F_{t}^{\theta }(\nu ), \end{aligned}$$

where

$$\begin{aligned} F_{t}^{\theta }(\nu )=\det (e^{i\theta _{j}(x+k'_{i})})_{i,j=1}^{n} \det (e^{-i\nu _{j}(x+k'_{i})})_{i,j=1}^{n}+Q_{t}(\theta ,\nu ). \end{aligned}$$

For each fixed \(\theta \in C\),

$$\begin{aligned} \lim _{t\rightarrow \infty }F_{t}^{\theta }(\nu )=\det (e^{i\theta _{j}(x+k'_{i})})_{i,j=1}^{n} \det (e^{-i\nu _{j}(x+k'_{i})})_{i,j=1}^{n},\quad \forall \,\nu \in C, \end{aligned}$$

and, moreover, for all \(t>T\), \(T>0\) sufficiently large, we have

$$\begin{aligned} |F_{t}^{\theta }(\nu )|\le (n!)^{2}(1+K),\quad \forall \,\nu \in C, \end{aligned}$$

where K is a positive constant. Therefore, by the bounded convergence theorem, for any \(\theta ,\nu \in C\), it holds that

$$\begin{aligned} \lim _{t\rightarrow \infty }{\tilde{H}}_{t}(e^{i\theta },e^{i\nu })&=\frac{1}{\int _{C}\det (e^{-i\nu _{j}(x+k'_{i})})d\nu }\det (e^{-i\nu _{j}(x+k'_{i})})_{i,j=1}^{n}\\&=\frac{1}{M}\prod _{1\le i<j\le n}|e^{i\nu _{j}}-e^{i\nu _{i}}|, \end{aligned}$$

where M is the corresponding normalisation. For the last equality, see [32, p. 208]. \(\square \)
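
The Poisson summation formula used at the beginning of the preceding proof is also easy to verify numerically; a short Python sketch (assumed t, a single pair \((\theta _{i},\nu _{j})\), truncated sums) checks it for both \(x=0\) and \(x=\tfrac{1}{2}\):

```python
import numpy as np

t, K = 0.8, 60
th, nu = 0.7, -1.3                    # a single pair (theta_i, nu_j)

def p(a, b):                          # heat kernel p_t(a, b)
    return np.exp(-(a - b) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

for x in (0.0, 0.5):                  # x = 0 (n odd) and x = 1/2 (n even)
    lhs = sum(np.exp(2j * np.pi * x * k) * p(th, nu + 2 * np.pi * k) for k in range(-K, K + 1))
    rhs = sum(np.exp(-1j * (nu - th) * (x + k) - t * (x + k) ** 2 / 2)
              for k in range(-K, K + 1)) / (2 * np.pi)
    print(x, lhs.real, rhs.real)      # the two sides agree; imaginary parts vanish
```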

5.2 Brownian Motion in an Annulus

Let \(0<r<1\) and \(\Omega \) be the annulus centered at the origin defined by

$$\begin{aligned} \Omega =\{z\in {\mathbb {C}}: r<|z|< 1\}. \end{aligned}$$

Consider the ‘Brownian motion’ B in \({\overline{\Omega }}\), with normal reflection on the inner circle (of radius r), and stopped once it first hits the unit circle. The conformal invariance of the two-dimensional Brownian motion allows us to see the trajectories of B as the conformal image of a ‘Brownian motion’ \(\beta \) in the horizontal strip

$$\begin{aligned} \Omega '=\{z\in {\mathbb {C}}: 0<\mathfrak {Im}(z)< |\log r|\}, \end{aligned}$$

with normal reflection on the real axis and absorbing boundary \(\mathfrak {Im}(z)=|\log r|\), see Fig. 11. From Sect. 4.4, we know that if the process \(\beta \) starts at a point \(\theta \in {\mathbb {R}}\), then the distribution of its first hitting point at \(\mathfrak {Im}(z)=|\log r|\) has the density

$$\begin{aligned} h(\theta ,\nu )=\frac{1}{2|\log r|}{\mathrm{sech}}\left( \frac{\pi }{2|\log r|}(\nu -\theta )\right) ,\quad \nu \in {\mathbb {R}}. \end{aligned}$$

Fig. 11 Mapping the strip onto the annulus

Consider the bounded set

$$\begin{aligned} C={\tilde{A}}_{n}\cap \{\nu \in {\mathbb {R}}^{n}: -\pi \le \nu _{n}< \pi \}, \end{aligned}$$

where

$$\begin{aligned} {\tilde{A}}_{n}:=\{\nu \in {\mathbb {R}}^{n} : \nu _{n}<\nu _{n-1}<\cdots<\nu _{2}<\nu _{1}<\nu _{n}+2\pi \}. \end{aligned}$$

Definition 3

Let \(\eta =e^{i\frac{2\pi }{n}}\) be the n-th root of unity. Define, for \(\theta ,\nu \in C\),

$$\begin{aligned} H^{a}_{r}(e^{i\theta },e^{i\nu })=\det \left( \sum _{k\in {\mathbb {Z}}}e^{i2\pi xk}h(\theta _{i},\nu _{j}+2\pi k)\right) _{i,j=1}^{n}, \end{aligned}$$
(5.6)

where

$$\begin{aligned} x= \left\{ \begin{array}{ll} 0 &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is odd}}, \\ \frac{1}{2} &{} \quad {\text {if}}\,\, {\textit{n}}\,\, {\text {is even}}. \end{array} \right. \end{aligned}$$

Remark

The strip \(\Omega '\subset {\mathbb {C}}\) is clearly invariant under horizontal translations by \(2\pi k\), \(k\in {\mathbb {Z}}\), and therefore the determinant (5.6) is a determinant of hitting densities of the affine form (4.2), described at the beginning of Sect. 4. Since (5.6) is defined as a natural continuous analogue of the determinant in Proposition 3.3, we expect the determinant \(H_{r}^{a}(e^{i\theta },e^{i\nu })\) to be positive and to be interpreted (informally) as the probability that n independent trajectories \(B_{1},\ldots ,B_{n}\) of the process B in the annulus \({\overline{\Omega }}\), starting at positions

$$\begin{aligned} re^{i\theta _{j}},\quad j=1,\ldots ,n, \end{aligned}$$

will first hit the unit circle at the points

$$\begin{aligned} e^{i\nu _{j}},\quad j=1,\ldots ,n, \end{aligned}$$

with an angle in each of the intervals \((\nu _{j},\nu _{j}+d\nu _{j})\), \(j=1,\ldots ,n\), and whose trajectories are constrained to satisfy

$$\begin{aligned} B_{j}\cap LE(B_{j-1})=\emptyset ,\quad 1< j\le n,\quad \text {and}\quad B_{1}\cap LE(B_{n})=\emptyset . \end{aligned}$$

Note that we do not require that the trajectory which started at point \(re^{i\theta _{j}}\) hits the unit circle at the corresponding point \(e^{i\nu _{j}}\).

Remark

If n is odd, the entries of the matrix in (5.6) can be written as

$$\begin{aligned} \frac{1}{2\pi }\,\theta _{3}(0,r)\theta _{2}(0,r)\frac{\theta _{3}\left( \frac{i\pi }{2|\log r|}(\nu _{j}-\theta _{i}),r\right) }{\theta _{2}\left( \frac{i\pi }{2|\log r|}(\nu _{j}-\theta _{i}),r\right) }, \end{aligned}$$

where \(\theta _{k}(z,q)\) is the k-Jacobi theta function, \(z\in {\mathbb {C}}\), \(|q|<1\) (see [33]).

Consider the normalised density

$$\begin{aligned} {\tilde{H}}^{a}_{r}(e^{i\theta },e^{i\nu })=\frac{1}{M_{r,\theta }}H^{a}_{r}(e^{i\theta },e^{i\nu }), \end{aligned}$$

where

$$\begin{aligned} M_{r,\theta }=\int _{C}H^{a}_{r}(e^{i\theta },e^{i\nu })d\nu . \end{aligned}$$

The following proposition gives the limit of \({\tilde{H}}^{a}_{r}(e^{i\theta },e^{i\nu })\) as the inner radius r goes to zero. This models the situation where the n Brownian motions start at the origin of the complex plane.

Proposition 5.5

For any \(\theta ,\nu \in C\),

$$\begin{aligned} \lim _{r\rightarrow 0}{\tilde{H}}^{a}_{r}(e^{i\theta },e^{i\nu })=\frac{1}{M}\prod _{1\le i<j\le n}|e^{i\nu _{i}}-e^{i\nu _{j}}|, \end{aligned}$$
(5.7)

where

$$\begin{aligned}\lim _{r\rightarrow 0}M_{r,\theta }=M,\end{aligned}$$

and M is the corresponding normalisation constant in the right hand side of (5.7).

Remark

The above limit agrees with the eigenvalue density of a random matrix belonging to the Circular Orthogonal Ensemble (COE), defined on C [9, 32].

Proof of Proposition 5.5

By Lemma A.2 and standard properties of determinants, we can express (5.6) as the sum

$$\begin{aligned} H^{a}_{r}(e^{i\theta },e^{i\nu })&=\frac{1}{(2\pi )^{n}}\sum _{\begin{array}{c} k_{1}<k_{2}<\cdots <k_{n}\\ k_{i}\in {\mathbb {Z}} \end{array}}\det (e^{-i\nu _{j}(x+k_{i})})\det (e^{i\theta _{j}(x+k_{i})})g_{r}(\mathbf{k}), \end{aligned}$$

where \(\mathbf{k}=(k_{1},\ldots ,k_{n})\) and

$$\begin{aligned} g_{r}(\mathbf{k})=\prod _{i=1}^{n}{\mathrm{sech}}(|\log r|(x+k_{i})). \end{aligned}$$

Here \(x=0\) if n is odd and \(x=1/2\) if n is even. The terms \(g_{r}(\mathbf{k})\) are always positive and

$$\begin{aligned} g_{r}(\mathbf{k})\le 2^{n}r^{\sum _{i=1}^{n}|x+k_{i}|}. \end{aligned}$$

If we minimise \(\sum _{i=1}^{n}|x+k_{i}|\) over all sequences of integers \(k_{1}<k_{2}<\cdots <k_{n}\), we obtain

$$\begin{aligned} \min _{\begin{array}{c} k_{i}\in {\mathbb {Z}}\\ k_{1}<\cdots <k_{n} \end{array}}\sum _{i=1}^{n}|x+k_{i}|=\frac{n^{2}-[n]}{4},\quad n\equiv [n]\in \{0,1\} \,\,\,\text {mod}\,2, \end{aligned}$$

and the minimum is attained uniquely at \(k'_{i}=k_{i}\), \(1\le i\le n\), where

$$\begin{aligned} x+k'_{i}=i-\frac{n+1}{2},\quad i=1,2,\ldots ,n. \end{aligned}$$

Hence, the function \(H^{a}_{r}(e^{i\theta },e^{i\nu })\) can be expressed as

$$\begin{aligned} H_{r}^{a}(e^{i\theta },e^{i\nu })=\frac{g_{r}(\mathbf{k}')}{(2\pi )^{n}}\left( \det (e^{i\theta _{j}(x+k'_{i})})_{i,j=1}^{n}\det (e^{-i\nu _{j}(x+k'_{i})})_{i,j=1}^{n}+Q'_{r}(\theta ,\nu )\right) , \end{aligned}$$

where \(Q'_{r}(\theta ,\nu )\) satisfies

$$\begin{aligned} |Q'_{r}(\theta ,\nu )|&\le (n!)^{2}\sum _{\begin{array}{c} k_{1}<k_{2}<\cdots<k_{n}\\ \mathbf{k}\not =\mathbf{k}' \end{array}}\frac{g_{r}(\mathbf{k})}{g_{r}(\mathbf{k}')}\\&\le (n!)^{2}\,2^{n}\sum _{\begin{array}{c} k_{1}<k_{2}<\cdots <k_{n}\\ \mathbf{k}\not =\mathbf{k}' \end{array}}r^{\left( \sum _{i=1}^{n}|x+k_{i}|-\frac{n^{2}-[n]}{4}\right) }. \end{aligned}$$

One can check that \(Q'_{r}=o(1)\) uniformly in \(\theta \) and \(\nu \), as \(r\rightarrow 0\). Then, similarly to the proof of Proposition 5.4, the bounded convergence theorem implies that, for each \(\theta ,\nu \in C\),

$$\begin{aligned} \lim _{r\rightarrow 0}{\tilde{H}}_{r}^{a}(e^{i\theta },e^{i\nu })&=\frac{1}{\int _{C}\det (e^{-i\nu _{j}(x+k'_{i})})d\nu }\det (e^{-i\nu _{j}(x+k'_{i})})_{i,j=1}^{n}\\&=\frac{1}{M}\prod _{1\le i<j\le n}|e^{i\nu _{j}}-e^{i\nu _{i}}|, \end{aligned}$$

which concludes the proof. For the last identity, see [32, p. 208]. \(\square \)
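
The convergence in Proposition 5.5 can be observed numerically: the normalisation cancels from ratios, so a ratio of two values of \(H^{a}_{r}\) at small r should be close to the corresponding ratio of the COE density. A Python sketch (n = 3, an assumed inner radius r and a truncation of the k-sums; for n even one would insert the extra factor \((-1)^{k}\)):

```python
import numpy as np
from itertools import combinations

n, r, K = 3, 1e-5, 400
L = abs(np.log(r))
theta = np.array([2.2, 0.5, -1.7])     # starting angles, ordered as in C

def h(a, b):                           # strip hitting density of Sect. 4.4 with height L = |log r|
    return 1 / (2 * L) / np.cosh(np.pi * (b - a) / (2 * L))

ks = np.arange(-K, K + 1)

def H_annulus(nu):                     # determinant (5.6) for n odd (x = 0), k-sum truncated
    M = np.array([[np.sum(h(ti, nj + 2 * np.pi * ks)) for nj in nu] for ti in theta])
    return np.linalg.det(M)

def coe(nu):                           # prod_{i<j} |e^{i nu_i} - e^{i nu_j}|
    return np.prod([abs(np.exp(1j * a) - np.exp(1j * b)) for a, b in combinations(nu, 2)])

nu1 = np.array([2.0, 0.1, -2.3])       # two configurations in C
nu2 = np.array([2.8, 1.0, -0.5])
print(H_annulus(nu1) / H_annulus(nu2))  # approximately equal to ...
print(coe(nu1) / coe(nu2))              # ... the ratio of COE densities
```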

6 Conclusions

We have developed connections between loop-erased walks in two dimensions and random matrices, based on an identity of Fomin [8]. This complements earlier work of Sato and Katori [34], where an example of this type of connection was exhibited in a slightly different context, as explained in Sects. 1.3 and 4.6. These connections resemble the well-known relations between non-intersecting processes in one dimension and random matrices. For two-dimensional Brownian motions in suitable simply connected domains, conditioned (in an appropriate sense) to satisfy a certain non-intersection condition, we obtain, in particular scaling limits, eigenvalue densities of Cauchy type.

As a first step towards the consideration of non-simply connected domains, we have formulated and proved an affine (circular) version of Fomin’s identity. Applying this in the context of independent Brownian motions in an annulus, conditioned to satisfy a circular version of Fomin’s non-intersection condition, we obtain, in a particular scaling limit, the circular orthogonal ensemble of random matrix theory.

Exploring relations between random matrices, SLE and related combinatorial models seems to be an interesting direction for future research. We hope that our preliminary findings will motivate further developments in this direction.