1 Introduction

This paper is about the exact solution of certain interacting particle systems connected to random matrices. We begin with the simplest motivating example, which, although set in discrete space, is very closely related: the totally asymmetric simple exclusion process (TASEP), see [93]. The study of TASEP has received a lot of attention in the last few decades, arguably culminating, at least for the purposes we are interested in, in the exact solution (in a sense to be discussed later) for arbitrary initial condition in [77]. This led, in the 1:2:3 scaling, to the construction of the KPZ fixed point [77], the central object in the KPZ universality class [31]. The continuous-space analogue of TASEP is the model of Brownian motions with one-sided collisions or reflections [101], also called Brownian TASEP, which is also equivalent to Brownian last passage percolation [32, 80]. It was first discovered in [27, 80], and will be discussed in a more general setting in this paper, that this particle system is intimately related to Hermitian Brownian motion [40]. The Brownian model has also recently been solved for general initial condition and shown to converge to the KPZ fixed point [78]. More generally, particle systems of this type, at least in the discrete setting, have been intensely studied and we give a more detailed literature review in Sect. 1.2.

A different type of interacting particle system of significant interest is that of non-colliding (also called non-intersecting) diffusions. Such systems were first studied because they arise as eigenvalue evolutions of Hermitian matrix valued diffusions [53]. The quintessential example is that of non-intersecting Brownian motions, also called Dyson’s Brownian motion, which arises as the eigenvalue evolution of Brownian motion on Hermitian matrices. This model has been studied from many different points of view for decades [6, 40, 41, 59, 69, 94, 98]. Our interest here is in exact solvability, for arbitrary deterministic initial condition, in the sense of obtaining explicit formulae for the space-time correlations of the model. These turn out to be given in terms of determinants of an explicit correlation kernel. For fixed time this result goes back to the works of Johansson [59] and Brezin and Hikami [28], and for multiple times it is due to Katori and Tanemura [69]. Another such system which can be solved exactly in this sense is that of non-intersecting squared Bessel processes [70] and we give a more detailed literature review in Sects. 1.2 and 2.

In this paper we study one-dimensional diffusions with polynomial drift and diffusion coefficients which interact via one-sided collisions, namely they solve a simple system of stochastic differential equations with reflection, see equations (3) and (5). We note that the individual one-dimensional diffusions are called Pearson diffusions by virtue of their relation to the important family of Pearson distributions [46]. They were first considered by Kolmogorov in 1931 [72], revisited by Wong in the 1960s [103], and in the past decades they have been much studied in statistics and mathematical finance [46]. Moreover, we study the model of non-colliding Pearson diffusions, which also solve a system of interacting stochastic differential equations, see equation (25). This model includes as special cases all the eigenvalue evolutions of matrix processes related to the classical ensembles of random matrices [47], see Sect. 2 for more details.

Our main results, Theorems 1.3, 1.5 and 1.6 below, on Pearson diffusions with one-sided collisions, starting from arbitrary deterministic initial condition, give a formula for the finite-dimensional distributions of this particle system, at a fixed time, in terms of a Fredholm determinant of an explicit kernel. Our main result, Theorem 1.9 below, on non-colliding Pearson diffusions starting from arbitrary initial condition determines the space-time correlations of the induced point process in terms of determinants of an explicit kernel. All our theorems are new for every Pearson diffusion except the Brownian case and, for the non-colliding model, also the squared Bessel case.

The contribution of this paper is two-fold. First, in the case of TASEP-like particle systems, both in discrete and continuous space, we solve exactly (in the sense of Theorems 1.3 and 1.5) for the first time, for general initial condition, models for which the motion of particles depends in a non-trivial way on their spatial location (in previous works the motion of particles was translation invariant, see Sect. 1.2). Second, we show how the backward in time diffusion flow applied to certain families of polynomials can be used as a key tool to solve both diffusions with one-sided reflections and non-colliding diffusions in a uniform way (within each model).

Armed with the explicit Fredholm determinant formulae we obtain in this paper, it would be possible to investigate scaling limits and connections to integrable systems for the interacting particle systems we consider, see the discussion in Sects. 1.4 and 1.5. We will pursue this in the future. Moreover, it is possible to consider, in the discrete setting, space-inhomogeneous particle systems with pushing and blocking mechanisms. The ideas presented here, if adapted appropriately, should allow one to solve such discrete models exactly as well. We leave this for future work.

Finally, we note that the backward in time diffusion flow in the case of Brownian motion appeared recently in the proof of Newman’s conjecture from number theory [89], in a remarkable conjecture on deforming the characteristic polynomial of the Ginibre random matrix ensemble to that of the Gaussian unitary ensemble [56], in statistical mechanics [64] and finite free probability [74]. Whether analogous applications exist for other Pearson diffusions is not clear but would be interesting to find out.

1.1 Models and main results

We fix \(N\in {\mathbb {N}}\) and an open interval \((l,r)\subseteq {\mathbb {R}}\) once and for all throughout the paper. We consider the following differential operator:

$$\begin{aligned} {\textsf{L}}={\textsf{a}}(x)\frac{ d^2}{dx^2}+{\textsf{b}}(x)\frac{d}{dx}, \end{aligned}$$

where the functions \({\textsf{a}}\) and \({\textsf{b}}\) are given by the polynomials:

$$\begin{aligned} {\textsf{a}}(x)=a_2x^2+a_1x+a_0, \ \ {\textsf{b}}(x)=b_1x+b_0. \end{aligned}$$
(1)

Moreover, define the polynomials for each \(k=1,\dots , N\),

$$\begin{aligned} {\textsf{b}}^{(k)}(x)={\textsf{b}}(x)+(N-k){\textsf{a}}'(x) \end{aligned}$$
(2)

and consider the differential operators, for \(k=1,\dots ,N\),

$$\begin{aligned} {\textsf{L}}^{(k)}={\textsf{a}}(x)\frac{ d^2}{dx^2}+{\textsf{b}}^{(k)}(x)\frac{d}{dx}, \end{aligned}$$

so that in particular \({\textsf{L}}^{(N)}\equiv {\textsf{L}}\). Note that given \({\textsf{L}}\) and N, the operators \({\textsf{L}}^{(k)}\), for \(k=1,\dots ,N\), are completely determined. The following is the standing, and basically only, assumption throughout the paper (and will not be recalled in every single result statement).

Definition 1.1

(Standing assumption) We assume that for each \(k=1,\dots ,N\) the differential operator \({\textsf{L}}^{(k)}\), with \({\textsf{a}}\) and \({\textsf{b}}\) as in (1) and (2), with \({\textsf{a}}(x)>0\) for all \(x\in (l,r)\), is the generator of a one-dimensional diffusion process in \((l,r)\) with each of the boundary points l, r being either natural or entrance, see [25, 42, 58, 65] for details on this terminology. In particular, the boundary points are inaccessible and the diffusion associated to \({\textsf{L}}^{(k)}\) (which acts on a suitable domain of functions, see [25, 42, 58, 65]) is completely determined by \({\textsf{a}}\) and \({\textsf{b}}\) without needing to specify boundary conditions at l or r.

There are concrete integral conditions due to Feller, involving the functions \({\textsf{a}}\) and \({\textsf{b}}\), for when a boundary point is natural or entrance, see for example [25, 42, 58, 65]. These have already been worked out for the diffusions we consider and we will give references in the sequel. We call the diffusion with generator \({\textsf{L}}^{(k)}\) the \({\textsf{L}}^{(k)}\)-diffusion (similarly for \({\textsf{L}}\)). We write \(\left( e^{t{\textsf{L}}^{(k)}};t\ge 0\right) \) for the associated semigroup and, abusing notation, \(e^{t{\textsf{L}}^{(k)}}(x,y)\) for its transition density with respect to the Lebesgue measure in \((l,r)\), and analogously for \({\textsf{L}}\). All these transition densities can be written explicitly in terms of hypergeometric functions, see for example [8, 14, 103], but we will not make use of such formulae in this paper. We only need some basic qualitative properties. By standard results, see for example [95], \(e^{t{\textsf{L}}^{(k)}}(x,y)\) is smooth in \((x,y)\in (l,r)^2\) and \(y\mapsto \left| \partial _x^ie^{t{\textsf{L}}^{(k)}}(x,y)\right| \), \(i\in {\mathbb {N}}\), integrates polynomials in \((l,r)\). We moreover note that using the spectral expansion [58] of the transition density in terms of hypergeometric functions [8, 14, 103], \(z\mapsto e^{t{\textsf{L}}^{(k)}}(z,y)\), for \(y\in (l,r)\), can be extended to an analytic function in a complex neighbourhood of any compact subinterval of \((l,r)\).

Finally, from the stochastic analysis point of view, by our assumption above (the form of \({\textsf{a}}\) and \({\textsf{b}}\)) and the Yamada-Watanabe theorem [57, 87] the \({\textsf{L}}^{(k)}\)-diffusion is the unique strong solution \(\left( {\textsf{x}}(t);t\ge 0\right) \) to the stochastic differential equation (SDE) in (lr):

$$\begin{aligned} d{\textsf{x}}(t)=\sqrt{2{\textsf{a}}({\textsf{x}}(t))}d{\textsf{w}}(t)+{\textsf{b}}^{(k)}({\textsf{x}}(t))dt, \end{aligned}$$

with \(\left( {\textsf{w}}(t);t\ge 0\right) \) a standard Brownian motion.
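For intuition (this is our own illustrative sketch, not part of the paper), the SDE above is straightforward to simulate by an Euler scheme. Below we take the Ornstein–Uhlenbeck member of the Pearson class, \({\textsf{a}}(x)\equiv \frac{1}{2}\), \({\textsf{b}}(x)=-x\), on \((l,r)=(-\infty ,\infty )\), whose stationary distribution is Gaussian with mean 0 and variance \(\frac{1}{2}\):

```python
import numpy as np

def pearson_euler(x0, t, n_steps, n_paths, a, b, rng):
    """Euler scheme for d x = sqrt(2 a(x)) dw + b(x) dt.

    a and b are callables; here we use the Ornstein-Uhlenbeck case
    a(x) = 1/2, b(x) = -x, a Pearson diffusion on the whole line."""
    dt = t / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x += np.sqrt(2.0 * a(x)) * dw + b(x) * dt
    return x

rng = np.random.default_rng(0)
samples = pearson_euler(0.0, 6.0, 600, 20000,
                        lambda x: 0.5 * np.ones_like(x), lambda x: -x, rng)
```

With \(t=6\) the law of \({\textsf{x}}(t)\) is essentially stationary, so the empirical mean and variance of the samples should be close to 0 and \(\frac{1}{2}\) respectively.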

Diffusions With One-Sided Collisions Our interest in this paper is in \({\textsf{L}}^{(k)}\)-diffusions interacting by one-sided reflections (also called collisions). Towards that end, define the Weyl chambers \({\mathbb {W}}_N^\uparrow \) and \({\mathbb {W}}_N^\downarrow \), corresponding to the interval \((l,r)\):

$$\begin{aligned} {\mathbb {W}}_N^{\uparrow }&=\left\{ x=(x_1,\dots ,x_N)\in (l,r)^N:x_1\le \cdots \le x_N \right\} , \\ {\mathbb {W}}_N^{\downarrow }&=\left\{ x=(x_1,\dots ,x_N)\in (l,r)^N:x_1\ge \cdots \ge x_N \right\} . \end{aligned}$$

We write \({\mathbb {W}}_N^{\uparrow ,\circ },{\mathbb {W}}_N^{\downarrow ,\circ }\) for the interiors (when the inequalities are strict) of \({\mathbb {W}}_N^{\uparrow },{\mathbb {W}}_N^{\downarrow }\) respectively.

We consider the following system of SDEs with reflection [57, 87] in the chamber \({\mathbb {W}}_N^{\uparrow }\):

$$\begin{aligned} d{\textsf{x}}^\uparrow _k(t)=\sqrt{2{\textsf{a}}\left( {\textsf{x}}^{\uparrow }_{k}(t)\right) }d{\textsf{w}}_k(t) +{\textsf{b}}^{(k)}\left( {\textsf{x}}^\uparrow _k(t)\right) dt+\frac{1}{2}d{\mathfrak {l}}_k^{\uparrow }(t), \end{aligned}$$
(3)

with the \({\textsf{w}}_k\) being independent standard Brownian motions and where the finite variation terms \({\mathfrak {l}}_k^{\uparrow }\), which only increase when particles collide to keep them ordered (in other words in \({\mathbb {W}}_N^{\uparrow }\)), can each be identified with a semimartingale local time:

$$\begin{aligned} {\mathfrak {l}}_k^{\uparrow }= \text {sem. loc. time of } {\textsf{x}}^\uparrow _k-{\textsf{x}}^\uparrow _{k-1} \text { at } 0, \end{aligned}$$
(4)

with \({\mathfrak {l}}_1^{\uparrow } \equiv 0\). These SDEs have a unique strong solution in \({\mathbb {W}}_N^{\uparrow }\), see [13] (by virtue of the form of \({\textsf{a}}\) and \({\textsf{b}}\) the Yamada-Watanabe condition therein is satisfied). Write \(\left( {\textsf{S}}_t^{\uparrow , (N)};t \ge 0\right) \) for the semigroup of the corresponding Markov process; remarkably this has an explicit expression, see Proposition 4.2, which is the starting point of our analysis. In words, the dynamics are as follows: for each k the k-th particle evolves as an independent \({\textsf{L}}^{(k)}\)-diffusion and when it collides with the \((k-1)\)-th particle it receives an infinitesimal push \(\frac{1}{2}{\mathfrak {l}}_k^{\uparrow }\) (which is the only form of interaction between the particles) to keep the ordering. Such particle systems are known as diffusions with one-sided collisions or one-sided reflections. The most famous particle system of this type is when the \({\textsf{L}}^{(k)}\)-diffusion is a Brownian motion in which case it is also called Brownian TASEP [78, 101]. Since the Brownian local time can be written as a running maximum, see [87], it becomes equivalent to so-called Brownian last passage percolation [80].
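To make the dynamics concrete, here is a minimal discretization sketch (our own illustration, not from the paper) of the reflected system (3) in the Brownian TASEP case \({\textsf{a}}\equiv \frac{1}{2}\), \({\textsf{b}}\equiv 0\): each particle takes a free Brownian step and is then pushed up by its left neighbour, a crude surrogate for the local time push \(\frac{1}{2}d{\mathfrak {l}}_k^{\uparrow }(t)\):

```python
import numpy as np

def brownian_tasep_up(x0, t, n_steps, rng):
    """Euler scheme for system (3) with a = 1/2, b = 0 (Brownian TASEP).

    Particle 1 is a free Brownian motion; particle k takes a free step
    and is then reflected upward off particle k-1 so that the vector
    stays in the chamber x_1 <= ... <= x_N."""
    x = np.array(x0, dtype=float)
    dt = t / n_steps
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=len(x))
        x[0] += dw[0]
        for k in range(1, len(x)):
            x[k] = max(x[k] + dw[k], x[k - 1])  # one-sided push to the right
        path.append(x.copy())
    return np.array(path)

rng = np.random.default_rng(1)
path = brownian_tasep_up([0.0, 0.0, 0.0, 0.0], 1.0, 2000, rng)
```

The recorded trajectory stays ordered at every time step, mirroring the fact that the reflected SDE lives in \({\mathbb {W}}_N^{\uparrow }\).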

Similarly, consider the following system of SDEs with reflection (now to the left) in the chamber \({\mathbb {W}}_N^{\downarrow }\):

$$\begin{aligned} d{\textsf{x}}^\downarrow _k(t)=\sqrt{2{\textsf{a}}\left( {\textsf{x}}^\downarrow _{k}(t)\right) }d{\textsf{w}}_k(t) +{\textsf{b}}^{(k)}\left( {\textsf{x}}^\downarrow _k(t)\right) dt-\frac{1}{2}d{\mathfrak {l}}_k^{\downarrow }(t), \end{aligned}$$
(5)

with the \({\textsf{w}}_k\) being independent standard Brownian motions and where again the finite variation terms \({\mathfrak {l}}_k^{\downarrow }\) can be identified with a local time:

$$\begin{aligned} {\mathfrak {l}}_k^{\downarrow }= \text {sem. loc. time of } {\textsf{x}}^\downarrow _k-{\textsf{x}}^\downarrow _{k-1} \text { at } 0, \end{aligned}$$
(6)

with \({\mathfrak {l}}_1^{\downarrow } \equiv 0\). Again, these equations have a unique strong solution in \({\mathbb {W}}_N^{\downarrow }\), see [13] (by virtue of the form of \({\textsf{a}}\) and \({\textsf{b}}\)). We write \(\left( {\textsf{S}}_t^{\downarrow , (N)};t \ge 0\right) \) for the semigroup of the corresponding Markov process. This again has an explicit expression, see Proposition 4.3. The dynamics have an analogous intuitive description as the one above for (3).

To state our main results, Theorems 1.3 and 1.5 below, on the particle systems (3) and (5) we need to introduce some basic ingredients. We write \({\textbf{1}}_{({\mathcal {A}})}\) for the indicator function of a set \({\mathcal {A}}\).

Definition 1.2

Let \(x=(x_1,\dots ,x_N)\in (l,r)^N\). For \(n=1,\dots ,N\) and \(k=0,\dots ,n-1\) we define the polynomial \({\textsf{q}}_k^{(n)}(z)={\textsf{q}}_k^{(n)}(z;x)\) of degree k by requiring, for \(i=0,\dots ,n-1\),

$$\begin{aligned} \partial _z^{i}{\textsf{q}}_k^{(n)}(z;x)\big |_{z=x_{n-i}} =(-1)^k{\textbf{1}}_{(i=k)}. \end{aligned}$$
(7)

Clearly, we only require condition (7) to hold for \(i=0,\dots ,k\) to define \({\textsf{q}}_k^{(n)}\) uniquely. Moreover, using (7) we can write a triangular system of equations for the coefficients of \({\textsf{q}}_k^{(n)}\) which in particular can be solved to give a complicated explicit expression for them. Alternatively, we have the following rather neat integral expression (where we perform each nested integral \(\int _{y_j}^z f(y_{j+1})dy_{j+1}\) consecutively assuming \(y_j<z\)) for \({\textsf{q}}_k^{(n)}(z)\), which however we will not make use of in this paper,

$$\begin{aligned} {\textsf{q}}_k^{(n)}(z)={\textsf{q}}_k^{(n)}(z;x)=\int ^{x_n}_z\int ^{x_{n-1}}_{y_1} \cdots \int ^{x_{n-k+2}}_{y_{k-2}}\int ^{x_{n-k+1}}_{y_{k-1}}dy_k dy_{k-1}\cdots dy_1. \end{aligned}$$
(8)

These polynomials appear implicitly in [78]. They seem quite natural but we do not know whether they have been studied before for different purposes. We finally record the simplest possible example of such polynomials. If all the coordinates of x are equal, namely \(x=(x_*,x_*,\dots ,x_*)\) then we observe that \({\textsf{q}}^{(n)}_k\) is given by

$$\begin{aligned} {\textsf{q}}^{(n)}_k(z)=(-1)^k\frac{(z-x_*)^k}{k!}. \end{aligned}$$
(9)
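For illustration (this computation is ours and is not used in the paper), the polynomials \({\textsf{q}}_k^{(n)}\) can be computed symbolically by solving the triangular linear system coming from conditions (7); the sketch below also verifies the equal-coordinates formula (9) and the example \({\textsf{q}}_1^{(2)}(z)=x_2-z\) obtained from the integral formula (8):

```python
import sympy as sp

z = sp.symbols('z')

def q_poly(n, k, x):
    """q_k^{(n)}(z; x): the degree-k polynomial determined by (7),
    i.e. the i-th derivative at z = x_{n-i} equals (-1)^k 1_{i=k},
    for i = 0, ..., k (the tuple x is 1-indexed as in the text)."""
    c = sp.symbols(f'c0:{k + 1}')
    q = sum(c[j] * z**j for j in range(k + 1))
    eqs = []
    for i in range(k + 1):
        target = (-1)**k if i == k else 0
        eqs.append(sp.Eq(sp.diff(q, z, i).subs(z, x[n - i - 1]), target))
    sol = sp.solve(eqs, c)
    return sp.expand(q.subs(sol))
```

For instance, `q_poly(2, 1, (1, 3))` returns \(3-z\), matching \(\int _z^{x_2}dy_1=x_2-z\), and with all coordinates equal the output reduces to \((-1)^k(z-x_*)^k/k!\) as in (9).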

At several places throughout the paper we will need to apply the diffusion flow corresponding to \({\textsf{L}}\) (or more generally \({\textsf{L}}^{(k)}\)) backward in time to certain families of polynomials. In general, solving the diffusion equation backward in time is nonsensical, but it is well-defined on polynomials p(z) as a power series, for any \(t\in {\mathbb {C}}\),

$$\begin{aligned} e^{t{\textsf{L}}}p(z)=\sum _{j=0}^\infty \frac{t^j}{j!}{\textsf{L}}^jp(z). \end{aligned}$$
(10)

The fact that this makes sense and matches, as it should, for \(t>0\), the action of the semigroup \(e^{t{\textsf{L}}}\) on p will be discussed in Sect. 3. Moreover, for a multivariate function \(g(z_1,\dots ,z_m)\) which is a polynomial in the variable \(z_j\) we denote by \(e^{t{\textsf{L}}_{z_j}}g(z_1,\dots ,z_m)\), for \(t\in {\mathbb {C}}\), the application of \(e^{t{\textsf{L}}}\) to the polynomial \(z_j\mapsto g(z_1,\dots ,z_m)\).
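Concretely, since \({\textsf{L}}\) maps polynomials of degree at most d to polynomials of degree at most d (as \({\textsf{a}}\) has degree at most 2 and \({\textsf{b}}\) degree at most 1), the series (10) reduces to a matrix exponential on the finite-dimensional coefficient space, well-defined for any \(t\in {\mathbb {C}}\). A minimal sketch of ours:

```python
import numpy as np
from scipy.linalg import expm

def L_matrix(d, a2, a1, a0, b1, b0):
    """Matrix of L = a(x) d^2/dx^2 + b(x) d/dx on the monomial basis
    1, z, ..., z^d (column j holds the coefficients of L z^j)."""
    M = np.zeros((d + 1, d + 1))
    for j in range(d + 1):
        M[j, j] += j * b1 + j * (j - 1) * a2
        if j >= 1:
            M[j - 1, j] += j * b0 + j * (j - 1) * a1
        if j >= 2:
            M[j - 2, j] += j * (j - 1) * a0
    return M

def flow(t, coeffs, a2, a1, a0, b1, b0):
    """Monomial coefficients of e^{tL} p, for p given by its coefficients;
    t may be negative (the backward flow) or even complex."""
    d = len(coeffs) - 1
    return expm(t * L_matrix(d, a2, a1, a0, b1, b0)) @ np.array(coeffs, float)
```

For the heat case \({\textsf{a}}\equiv 1\), \({\textsf{b}}\equiv 0\), one gets \(e^{t{\textsf{L}}}z^2=z^2+2t\), and composing `flow` with time \(t\) and then \(-t\) recovers the original polynomial, illustrating that the backward flow inverts the forward one on polynomials.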

We write \(\partial ^{-1}\) for the operator, acting on suitably integrable functions f (which integrate polynomials in \((l,r)\), for example; all functions to which we will apply \(\partial ^{-1}\) in the sequel will be such),

$$\begin{aligned} \partial ^{-1}f(x)=\int _l^xf(y)dy. \end{aligned}$$
(11)

It is then easy to see that, for such f, for any \(m\in {\mathbb {N}}\),

$$\begin{aligned} \partial ^{-m}f(x)=\underbrace{\partial ^{-1}\cdots \partial ^{-1}}_{\text {m times}}f(x) =\int _l^x\frac{(x-y)^{m-1}}{(m-1)!}f(y)dy. \end{aligned}$$
(12)

Abusing notation we will also write \(\partial ^{-m}(x,y)\) for the corresponding integral kernel:

$$\begin{aligned} \partial ^{-m}\left( x,y\right) =\frac{(x-y)^{m-1}}{(m-1)!}{\textbf{1}}_{(y<x)}. \end{aligned}$$
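A quick symbolic check (ours) of the closed form (12), comparing two nested applications of (11) with the kernel formula for \(m=2\) and \(l=0\):

```python
import sympy as sp

x, y, u = sp.symbols('x y u', positive=True)
f = y**2  # any polynomial test function

# iterate the operator (11) twice, with left endpoint l = 0
once = sp.integrate(f, (y, 0, u))        # the first integration, evaluated at u
twice = sp.integrate(once, (u, 0, x))    # the second integration, evaluated at x
# closed form (12) with m = 2: integrate (x - y) f(y) over (0, x)
closed = sp.integrate((x - y) * f, (y, 0, x))
```

Both routes give \(x^4/12\), as predicted by (12).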

We also define the constants \({\textsf{c}}^{(k)}={\textsf{c}}^{(k)}({\textsf{L}})\), for \(k=1,\dots ,N\),

$$\begin{aligned} {\textsf{c}}^{(k)}=2(N-k-1)a_2+b_1. \end{aligned}$$

Finally, for a fixed vector \(z=(z_1,\dots ,z_M)\in (l,r)^M\) and indices \(n_1<\cdots <n_M\) we introduce the multiplication operators

$$\begin{aligned} \chi _z^+\left( n_j,w\right) ={\textbf{1}}_{(w>z_j)}, \ \ \chi _z^-\left( n_j,w\right) ={\textbf{1}}_{(w<z_j)}. \end{aligned}$$
(13)

Our first two main results, Theorems 1.3 and 1.5 below, give a formula for the distribution of the interacting particle systems (3) and (5) in terms of a Fredholm determinant of an explicit kernel, see for example [92] for background on Fredholm determinants.

Theorem 1.3

Under the standing assumption in Definition 1.1, consider the interacting particle system \(\left( \left( {\textsf{x}}_1^\uparrow (t),\dots ,{\textsf{x}}_N^\uparrow (t)\right) ;t\ge 0\right) \) evolving according to the dynamics (3) with initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\uparrow \). For any \(t>0\), indices \(1\le n_1<n_2<\cdots <n_M \le N\) and locations \(z=(z_1,\dots ,z_M)\in (l,r)^M\), we have

$$\begin{aligned} {\mathbb {P}}\left( {\textsf{x}}^\uparrow _{n_j}(t)\le z_j, j=1,\dots ,M\right) = \det \left( {\textbf{I}}-\chi _z^+{\mathfrak {K}}_t\chi _z^+\right) _{L^2\left( \left\{ n_1,\dots , n_M\right\} \times (l,r)\right) }, \end{aligned}$$
(14)

where \(\det \) is the Fredholm determinant (\({\textbf{I}}\) is the identity operator), with

$$\begin{aligned} {\mathfrak {K}}_t\left[ \left( n_1,y_1\right) ;\left( n_2,y_2\right) \right]&=-\frac{\left( y_1-y_2\right) ^{n_2-n_1-1}}{(n_2-n_1-1)!}{\textbf{1}}_{(y_2<y_1)}{\textbf{1}}_{(n_2>n_1)} +\partial _{y_1}^{n_1}{\mathfrak {G}}_{n_2}\left( y_1,y_2\right) e^{-t{\textsf{L}}^{(n_2)}_{y_2}}, \end{aligned}$$
(15)

where \({\mathfrak {G}}_{n}(y_1,y_2)\) is given by (note this is where the initial condition \(x=(x_1,\dots ,x_N)\) appears)

$$\begin{aligned} {\mathfrak {G}}_n(y_1,y_2)=\sum _{k=1}^{n}e^{t\sum _{j=k}^{n-1}{\textsf{c}}^{(j)}} \partial _{y_1}^{-k}e^{t{\textsf{L}}^{(k)}}\left( x_k,y_1\right) {\textsf{q}}_{n-k}^{(n)}(y_2;x). \end{aligned}$$
(16)

Remark 1.4

Note that the notation \(\partial _{y_1}^{n_1}{\mathfrak {G}}_{n_2}\left( y_1,y_2\right) e^{-t{\textsf{L}}^{(n_2)}_{y_2}}\) in (15) is shorthand for

$$\begin{aligned} \partial _{y_1}^{n_1}\sum _{k=1}^{n_2}e^{t\sum _{j=k}^{n_2-1}{\textsf{c}}^{(j)}} \partial _{y_1}^{-k}e^{t{\textsf{L}}^{(k)}}\left( x_k,y_1\right) \left[ e^{-t{\textsf{L}}^{(n_2)}}{\textsf{q}}_{n_2-k}^{(n_2)}\right] (y_2;x). \end{aligned}$$
(17)

Theorem 1.5

Under the standing assumption in Definition 1.1, consider the interacting particle system \(\left( \left( {\textsf{x}}_1^\downarrow (t),\dots ,{\textsf{x}}_N^\downarrow (t)\right) ;t\ge 0\right) \) evolving according to the dynamics (5) with initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\downarrow \). For any \(t>0\), indices \(1\le n_1<n_2<\cdots <n_M \le N\) and locations \(z=(z_1,\dots ,z_M)\in (l,r)^M\), we have

$$\begin{aligned} {\mathbb {P}}\left( {\textsf{x}}^\downarrow _{n_j}(t)\ge z_j, j=1,\dots ,M\right) = \det \left( {\textbf{I}}-\chi _z^-{\mathfrak {K}}_t\chi _z^-\right) _{L^2\left( \left\{ n_1,\dots , n_M\right\} \times (l,r)\right) }, \end{aligned}$$
(18)

where \({\mathfrak {K}}_t\) is constructed as in (15) and (16), but with \(x\in {\mathbb {W}}_N^\downarrow \) instead.

We now give a probabilistic representation, in terms of a discrete time random walk with exponentially distributed steps, for the kernel \({\mathfrak {K}}_t\) in the case of interacting squared Bessel diffusions. Writing out the SDE (5) in \((0,\infty )\) explicitly in the squared Bessel case:

$$\begin{aligned} d{\textsf{x}}^\downarrow _k(t)=2\sqrt{{\textsf{x}}^\downarrow _{k}(t)}d{\textsf{w}}_k(t) +\left( \theta +2N-2k\right) dt-\frac{1}{2}d{\mathfrak {l}}_k^{\downarrow }(t), \end{aligned}$$
(19)

where we need \(\theta \ge 2\) in order to satisfy our standing assumption (since \(\theta +2N-2k\ge 2\) the point 0 is an entrance boundary point, see [87], while \(\infty \) is always natural for \({\textsf{L}}^{(k)}\), for \(k=1,\dots ,N\)). Now, for \(\theta \in {\mathbb {R}}\), write \({\mathcal {B}}^{(\theta )}=2x\frac{d^2}{dx^2}+\theta \frac{d}{dx}\) for the generator of a squared Bessel diffusion process in \((0,\infty )\) with dimension \(\theta \) killed when (if) it hits the origin and moreover write \(e^{t{\mathcal {B}}^{(\theta )}}(x,y)\) for its transition density. Note that, for \(\theta \ge 2\) the origin is almost surely never reached while for \(\theta <2\) this happens almost surely, see [52, 87]. In particular the transition density \(e^{t{\mathcal {B}}^{(\theta )}}(x,y)\) is sub-Markovian (integrates to less than 1) for \(\theta <2\). It is given explicitly, for \(\theta \ge 2\), by

$$\begin{aligned} e^{t{\mathcal {B}}^{(\theta )}}(x,y)=\frac{1}{2t}\left( \frac{y}{x}\right) ^{\frac{\theta -2}{4}}e^{ -\frac{(x+y)}{2t}}I_{\frac{\theta }{2}-1}\left( \frac{\sqrt{xy}}{t}\right) , \end{aligned}$$
(20)

where \(I_{\frac{\theta }{2}-1}\) is the modified Bessel function of the first kind of index \(\frac{\theta }{2}-1\), and for \(\theta <2\) it is obtained by the symmetry property \(e^{t{\mathcal {B}}^{(\theta )}}(x,y)=e^{t{\mathcal {B}}^{(4-\theta )}}(y,x)\), see [52, 87]. The result then reads as follows.

Theorem 1.6

Let \(\theta \ge 2\). Consider the squared Bessel diffusions interacting according to the dynamics (19) with initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\downarrow \). Then, for any \(t>0\), indices \(1\le n_1<n_2<\cdots <n_M \le N\) and locations \(z=(z_1,\dots ,z_M)\in (l,r)^M\), we have

$$\begin{aligned} {\mathbb {P}}\left( {\textsf{x}}^\downarrow _{n_j}(t)\ge z_j, j=1,\dots ,M\right) = \det \left( {\textbf{I}}-\chi _z^-{\mathfrak {B}}^{(\theta )}_t\chi _z^-\right) _{L^2\left( \left\{ n_1,\dots , n_M\right\} \times (l,r)\right) }, \end{aligned}$$
(21)

where the kernel \({\mathfrak {B}}_t^{(\theta )}\) is given by

$$\begin{aligned} {\mathfrak {B}}_t^{(\theta )}\left[ \left( n_1,y_1\right) ;\left( n_2,y_2\right) \right]&=-\frac{\left( y_1-y_2\right) ^{n_2-n_1-1}}{(n_2-n_1-1)!}{\textbf{1}}_{(y_2<y_1)}{\textbf{1}}_{(n_2>n_1)}\\&+\partial _{y_1}^{n_1}e^{t{\mathcal {B}}_{y_1}^{(4-\theta -2N)}}{\textbf{E}}_{{\textsf{R}}_0=y_1} \left[ e^{y_1-{\textsf{R}}_\tau }\frac{\left( {\textsf{R}}_\tau -y_2\right) ^{n_2-\tau -1}}{(n_2-\tau -1)!} {\textbf{1}}_{(\tau <n_2)}\right] e^{-t{\mathcal {B}}_{y_2}^{(\theta +2N-2n_2)}}, \end{aligned}$$

where \(\left( {\textsf{R}}_k;k\ge 0\right) \) is a discrete-time random walk whose steps are to the left and exponentially distributed with parameter 1, \(\tau =\tau (x)=\min \left\{ k\ge 0:{\textsf{R}}_k\ge x_{k+1}\right\} \) is an integer-valued stopping time and \({\textbf{E}}\) denotes expectation with respect to this random walk. Moreover, this kernel can be written as

$$\begin{aligned}{} & {} {\mathfrak {B}}_t^{(\theta )}\left[ \left( n_1,y_1\right) ;\left( n_2,y_2\right) \right] \nonumber \\{} & {} =-\partial ^{-(n_2-n_1)}(y_1,y_2){\textbf{1}}_{(n_2>n_1)}+\partial _{y_1}^{-(n_2-n_1)} {\mathfrak {B}}_t^{(\theta )}\left[ \left( n_2,y_1\right) ;\left( n_2,y_2\right) \right] . \end{aligned}$$
(22)

Remark 1.7

As in Theorems 1.3 and 1.5, the second term in the sum in the definition of \({\mathfrak {B}}_t^{(\theta )}\) is shorthand for

$$\begin{aligned} \partial _{y_1}^{n_1}e^{t{\mathcal {B}}_{y_1}^{(4-\theta -2N)}}{\textbf{E}}_{{\textsf{R}}_0=y_1} \left[ e^{y_1-{\textsf{R}}_\tau }\frac{1}{(n_2-\tau -1)!}e^{-t{\mathcal {B}}_{y_2}^{(\theta +2N-2n_2)}} \left( {\textsf{R}}_\tau -y_2\right) ^{n_2-\tau -1}{\textbf{1}}_{(\tau <n_2)}\right] .\nonumber \\ \end{aligned}$$
(23)

The formula in the theorem above is analogous to, and motivated by, the representations in terms of random walks in [76,77,78] for the Brownian case and for models in the discrete setting. It is natural to ask if analogous representations also exist for the more general diffusions considered in this paper and we discuss this briefly in Remark 4.17.
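As a numerical aside (our own sketch, not from the paper), one can sanity-check the squared Bessel transition density (20): writing the Bessel index as \(\theta /2-1\), the standard convention for the generator \(2x\frac{d^2}{dx^2}+\theta \frac{d}{dx}\), the density integrates to one over \((0,\infty )\) whenever \(\theta \ge 2\), consistent with the origin being inaccessible:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

def besq_density(x, y, t, theta):
    """Transition density for the squared Bessel generator
    2x d^2/dx^2 + theta d/dx, with Bessel index nu = theta/2 - 1."""
    nu = theta / 2.0 - 1.0
    return (1.0 / (2.0 * t)) * (y / x) ** (nu / 2.0) \
        * np.exp(-(x + y) / (2.0 * t)) * iv(nu, np.sqrt(x * y) / t)

# total mass started from x = 1 at time t = 0.7, dimension theta = 3 >= 2
total, _ = quad(lambda y: besq_density(1.0, y, 0.7, 3.0), 0.0, np.inf)
```

For \(\theta <2\) the same integral would come out strictly less than one, reflecting the sub-Markovian (killed) character of the density.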

Non-colliding diffusions We now go on to discuss our results on non-colliding diffusions. Towards this end, we denote by \(\mathsf {\Delta }_N(x)\), for \(x\in {\mathbb {W}}_N^\uparrow \), the Vandermonde determinant:

$$\begin{aligned} \mathsf {\Delta }_N(x)=\prod _{1\le i<j\le N}(x_j-x_i). \end{aligned}$$

Then, as we explain in Sect. 5 it is possible to consider the following Markov semigroup \(\left( {\textsf{P}}_t^{(N)};t\ge 0\right) \) in \({\mathbb {W}}_N^{\uparrow }\), given by its explicit transition kernel

$$\begin{aligned} {\textsf{P}}_t^{(N)}(x,dy)=e^{-t\lambda _N}\frac{\mathsf {\Delta }_N(y)}{\mathsf {\Delta }_N(x)}\det \left( e^{t{\textsf{L}}}(x_i,y_j)\right) _{i,j=1}^Ndy, \end{aligned}$$
(24)

with \(\lambda _N=\frac{1}{6}N(N-1)(2a_2(N-2)+3b_1)\). This is the Doob h-transform [36, 83, 87], by the Vandermonde determinant \(\mathsf {\Delta }_N\), of N independent \({\textsf{L}}\)-diffusions killed when they intersect (equivalently, the diffusion with generator \(\sum _{i=1}^N {\textsf{L}}_{x_i}\) with Dirichlet boundary conditions in \({\mathbb {W}}_N^\uparrow \)). The formula (24) is initially defined for \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) and then extended by L’Hôpital’s rule to general \(x\in {\mathbb {W}}_N^\uparrow \), by virtue of the smoothness of \(z\mapsto e^{t{\textsf{L}}}(z,y)\) (weak continuity of the probability measures \(x\mapsto {\textsf{P}}_t\left( x,dy\right) \) will be discussed in more detail in the proof of Proposition 5.3).
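As a numerical sanity check (again our own sketch), in the Brownian case \({\textsf{a}}\equiv \frac{1}{2}\), \({\textsf{b}}\equiv 0\) (so that \(a_2=b_1=0\) and \(\lambda _N=0\)) with \(N=2\), the h-transformed Karlin–McGregor determinant in (24) integrates to one over the chamber, confirming that (24) defines a Markov kernel:

```python
import numpy as np
from scipy.integrate import dblquad

def heat(t, x, y):
    """Transition density of L = (1/2) d^2/dx^2 (standard heat kernel)."""
    return np.exp(-(y - x) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def doob_density(t, x, y):
    """Density of P_t^{(2)}(x, dy) from (24) in the Brownian case:
    [Delta(y)/Delta(x)] det(heat(t, x_i, y_j)), with lambda_2 = 0."""
    h = (y[1] - y[0]) / (x[1] - x[0])
    km = heat(t, x[0], y[0]) * heat(t, x[1], y[1]) \
        - heat(t, x[0], y[1]) * heat(t, x[1], y[0])
    return h * km

x = (0.0, 1.0)
t = 0.5
# integrate over the chamber y_1 <= y_2 (truncated far into the Gaussian tails)
mass, _ = dblquad(lambda y1, y2: doob_density(t, x, (y1, y2)),
                  -8.0, 9.0, lambda y2: -8.0, lambda y2: y2)
```

The total mass comes out equal to one (up to quadrature error), which reflects the harmonicity of \(\mathsf {\Delta }_N\) for the killed process.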

By standard results, see [83, 87], on how the generator of a diffusion process transforms under a Doob’s h-transform [36] the semigroup (24) corresponds to the dynamics given by the system of SDEs in \({\mathbb {W}}_N^{\uparrow }\):

$$\begin{aligned} d{\textsf{z}}_i(t)=\sqrt{2{\textsf{a}}({\textsf{z}}_i(t))}d{\textsf{w}}_i(t) +\left( {\textsf{b}}({\textsf{z}}_i(t))+2{\textsf{a}}({\textsf{z}}_i(t))\sum _{j\ne i}\frac{1}{{\textsf{z}}_i(t)-{\textsf{z}}_j(t)}\right) dt, \end{aligned}$$
(25)

where the \({\textsf{w}}_i\) are independent standard Brownian motions. For initial conditions \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) it is easy to show, using a generic argument relying on the Doob h-transform structure, see for example [12], that these SDEs have a unique strong solution with almost surely no collisions; in particular, a.s. for all \(t>0\), \({\textsf{z}}(t)=\left( {\textsf{z}}_1(t),\dots ,{\textsf{z}}_N(t)\right) \in {\mathbb {W}}_N^{\uparrow ,\circ }\). This remains true for general initial conditions \(x\in {\mathbb {W}}_N^{\uparrow }\) (despite initially coinciding coordinates) and can be shown using the results of [54]. We discuss this in Sect. 5.

Interacting particle systems of the form (25) arise as the eigenvalue processes of matrix valued diffusions. In the most classical case of Brownian motion, \({\textsf{a}}(x)\equiv \frac{1}{2}, {\textsf{b}}(x)\equiv 0\), this is the much-studied Dyson Brownian motion [6, 40, 41], the eigenvalue process of Brownian motion on the space of Hermitian matrices. We discuss more examples in Sect. 2.
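This correspondence is easy to observe numerically; the following sketch (ours, for illustration) discretizes Brownian motion on Hermitian matrices and tracks its eigenvalues, which remain strictly ordered, in line with the \({\textsf{a}}\equiv \frac{1}{2}\), \({\textsf{b}}\equiv 0\) case of (25):

```python
import numpy as np

def hermitian_bm_eigenvalues(N, t, n_steps, rng):
    """Eigenvalues along a discretized Hermitian Brownian motion:
    diagonal entries are real BMs of variance t, off-diagonal entries
    complex BMs whose real and imaginary parts each have variance t/2."""
    dt = t / n_steps
    H = np.zeros((N, N), dtype=complex)
    eigs = []
    for _ in range(n_steps):
        A = rng.normal(size=(N, N))
        B = rng.normal(size=(N, N))
        M = A + 1j * B
        H += (M + M.conj().T) / 2.0 * np.sqrt(dt)  # Hermitian increment
        eigs.append(np.linalg.eigvalsh(H))          # ascending eigenvalues
    return np.array(eigs)

rng = np.random.default_rng(2)
eigs = hermitian_bm_eigenvalues(3, 1.0, 400, rng)
```

At every recorded time the eigenvalue gaps are strictly positive, the discrete shadow of the almost sure non-collision of the limiting Dyson Brownian motion.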

Our next main result, Theorem 1.9 below, says that the correlations in space and time of the interacting particle system (25) can be computed explicitly. In order to state this result precisely we need to recall the definition of a determinantal point process as suitable for the setting of this paper, see [17, 61] for generalities.

Consider the space \({\mathfrak {X}}=D\times (l,r)\) where D is some discrete set and endow \({\mathfrak {X}}\) with the measure \(\mu =\textsf{Count}\times \textsf{Leb}\), the product of counting measure on D with Lebesgue measure on \((l,r)\). A point configuration in \({\mathfrak {X}}\) is a locally finite collection of points (also referred to as particles) in \({\mathfrak {X}}\), which we assume are pairwise distinct. We denote the set of all point configurations in \({\mathfrak {X}}\) by \(\textsf{Conf}({\mathfrak {X}})\). There is a natural way to equip \(\textsf{Conf}({\mathfrak {X}})\) with a Borel structure, see [17]. A random point process on \({\mathfrak {X}}\) is a probability measure \({\mathfrak {M}}\) on \(\textsf{Conf}({\mathfrak {X}})\). More generally, abusing terminology (strictly speaking, when there is no positivity we cannot speak about randomness), a “signed random point process” is a (possibly) signed measure on \(\textsf{Conf}({\mathfrak {X}})\) of total mass one (this will be relevant for the proofs of Theorems 1.3 and 1.5). We define the correlation functions \(\{\rho _n\}_{n=1}^\infty \) (with respect to \(\mu \)) of the point process associated to the measure \({\mathfrak {M}}\) on \(\textsf{Conf}({\mathfrak {X}})\), if they exist, as follows. For any \(n\ge 1\) and any compactly supported bounded Borel function f on \({\mathfrak {X}}^n\) we have:

$$\begin{aligned}&\int _{{\mathfrak {X}}^n}f({\mathfrak {z}}_1,\dots ,{\mathfrak {z}}_n)\rho _n({\mathfrak {z}}_1,\dots ,{\mathfrak {z}}_n)d\mu ({\mathfrak {z}}_1)\cdots d\mu ({\mathfrak {z}}_n)\nonumber \\&= \int _{\textsf{Conf}({\mathfrak {X}})}\sum _{{\mathfrak {y}}_{i_1},\dots ,{\mathfrak {y}}_{i_n}\in Y} f({\mathfrak {y}}_{i_1},\dots ,{\mathfrak {y}}_{i_n})d{\mathfrak {M}}(Y), \end{aligned}$$
(26)

where the sum is taken over all n-tuples \({\mathfrak {y}}_{i_1},\dots ,{\mathfrak {y}}_{i_n}\) of pairwise distinct points of the point configuration Y.

Definition 1.8

A “signed random point process” on \({\mathfrak {X}}\) given by the (possibly signed) measure \({\mathfrak {M}}\) (of total mass 1) on \(\textsf{Conf}({\mathfrak {X}})\) is called a (“signed”) determinantal point process if there exists a function \(K:{\mathfrak {X}}\times {\mathfrak {X}}\rightarrow {\mathbb {C}}\) such that all the correlation functions \(\{\rho _n\}_{n=1}^\infty \) of the point process (with respect to \(\mu =\textsf{Count}\times \textsf{Leb}\)) defined by (26) are given by:

$$\begin{aligned} \rho _n({\mathfrak {z}}_1,\dots ,{\mathfrak {z}}_n)= \det \left( K({\mathfrak {z}}_i,{\mathfrak {z}}_j)\right) _{i,j=1}^n, \ \ \text {for} \ n=1,2,\dots , \end{aligned}$$
(27)

in which case the function K is called the correlation kernel.
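Definition 1.8 can be illustrated by a discrete toy computation (ours; the setting of the paper is of course continuous): on a finite ground set, an L-ensemble \({\mathbb {P}}(X=S)\propto \det (L_S)\) with \(L=K({\textbf{I}}-K)^{-1}\) is determinantal with correlation kernel K, and the correlation functions can be checked against principal minors of K by brute-force enumeration:

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(3)
n = 5
G = rng.normal(size=(n, n))
w, V = np.linalg.eigh(G @ G.T)
K = (V * (w / (1.0 + w))) @ V.T            # symmetric, eigenvalues in [0, 1)
L = K @ np.linalg.inv(np.eye(n) - K)       # the associated L-ensemble kernel

def minor(M, S):
    """Principal minor of M indexed by the subset S (empty set gives 1)."""
    S = list(S)
    return float(np.linalg.det(M[np.ix_(S, S)])) if S else 1.0

subsets = list(chain.from_iterable(combinations(range(n), r)
                                   for r in range(n + 1)))
Z = sum(minor(L, S) for S in subsets)      # normalization, equals det(I + L)
prob = {S: minor(L, S) / Z for S in subsets}

def rho(T):
    """Correlation function of the L-ensemble: probability that T <= X."""
    return sum(p for S, p in prob.items() if set(T) <= set(S))
```

Here `rho(T)` agrees with \(\det (K_T)\) for every subset T, the discrete analogue of (27).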

Our main result on the interacting particle system (25) is then the following.

Theorem 1.9

Under the standing assumption in Definition 1.1, consider the dynamics (25), with semigroup (24), starting from \(x=\left( x_1,\dots ,x_N\right) \in {\mathbb {W}}_N^\uparrow \). For any times \(0<t_1<t_2<\cdots <t_M\) these dynamics give rise in a natural way to a random point process on \(\{t_1,\dots ,t_M\}\times (l,r)\). This point process is then determinantal with correlation kernel given by (with the first variable corresponding to time and the second space),

$$\begin{aligned} {\textsf{K}}\left[ (s,y_1);(t,y_2)\right]&=-e^{(s-t){\textsf{L}}}(y_2,y_1){\textbf{1}}_{(t<s)}\nonumber \\&\quad +\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }^{(N)}}e^{s{\textsf{L}}}(z,y_1)e^{ -t{\textsf{L}}_{y_2}}(y_2-z)^{N-1}\prod _{m=1}^N\frac{1}{z-x_m}dz, \end{aligned}$$
(28)

where \(e^{-t{\textsf{L}}_{y_2}}(y_2-z)^{N-1}\) denotes the application of \(e^{-t{\textsf{L}}}\) to \((y_2-z)^{N-1}\) as a polynomial in the \(y_2\) variable and \(\mathsf {\Gamma }^{(N)}\) is a positively oriented contour, encircling all the points \(x_1,\dots ,x_N\), in a complex neighbourhood of \((x_1,x_N)\).
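As a numerical sanity check on the structure of (28), the following sketch (with a hypothetical entire function g standing in for the z-dependent factors of the integrand, and made-up points \(x_m\)) confirms that an integral of the form \(\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }^{(N)}}g(z)\prod _{m=1}^N(z-x_m)^{-1}dz\) is computed by the residue sum \(\sum _m g(x_m)\prod _{k\ne m}(x_m-x_k)^{-1}\), which is precisely the divided difference of g at \(x_1,\dots ,x_N\):

```python
import numpy as np

# Hypothetical data: N = 3 pairwise distinct "initial positions" and an entire
# function g standing in for e^{sL}(z, y1) e^{-tL_{y2}} (y2 - z)^{N-1}.
x = np.array([0.3, 1.1, 2.0])
g = lambda z: np.exp(-0.5 * z) * (1.7 - z) ** 2

# Left side: quadrature of the contour integral over a positively oriented
# circle of center 1.2 and radius 1.5, which encloses all the x_m.
n = 4000
theta = np.linspace(0.0, 2.0 * np.pi, n + 1)[:-1]
z = 1.2 + 1.5 * np.exp(1j * theta)
dz = 1.5j * np.exp(1j * theta) * (2.0 * np.pi / n)
integrand = g(z) / np.prod(z[:, None] - x[None, :], axis=1)
contour_value = np.sum(integrand * dz) / (2.0j * np.pi)

# Right side: residue sum = divided difference of g at x_1, ..., x_N.
residues = sum(g(xm) / np.prod([xm - xk for xk in x if xk != xm]) for xm in x)

assert abs(contour_value - residues) < 1e-8
```

The trapezoidal rule converges spectrally fast for such periodic analytic integrands, so even a modest number of quadrature nodes matches the residue sum to high precision.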

Remark 1.10

As mentioned earlier, it is possible to give a formula for \({\textsf{K}}\) that does not make use of complex analysis, involving divided differences [33], see the proofs in Sect. 5 for more details.

Although the interacting particle systems (3), (5) and (25) might seem unrelated and of a very different nature, it is actually possible to couple them, for special initial conditions, in a larger interacting particle system taking values in an interlacing array. We discuss this in Sect. 6, which, although independent of the rest of the paper, is rather important conceptually. In the Brownian case this type of result is very well-known and there are a couple of ways of constructing this coupling, see for example [80, 100]. Moreover, it is possible to give a direct connection between the semigroups \({\textsf{S}}_t^{\uparrow ,(N)}\), \({\textsf{S}}_t^{\downarrow ,(N)}\) and \({\textsf{P}}_t^{(N)}\). We again explain this in Sect. 6. Although there is a partial connection between these two types of interacting particle systems, we currently do not have a direct connection between Theorems 1.3, 1.5 and Theorem 1.9. A key feature of the formulae appearing in all these theorems is the application of the backward in time diffusion flow to different families of polynomials, but beyond that a more precise connection is elusive. It would be very interesting if one exists. In particular, is it possible to relate directly the correlation kernel \({\mathfrak {K}}_t\) from Theorems 1.3 and 1.5 to the correlation kernel \({\textsf{K}}\) from Theorem 1.9?

1.2 Relation to previous results

Results of the form of Theorems 1.3 and 1.5 first appeared in the discrete space setting, for special initial conditions, for a number of TASEP-like systems in [19, 21, 22, 90]. Then, in the continuous space setting, the model of Brownian motions with one-sided collisions was solved exactly for some special initial conditions, which led to Airy processes in the limit, in [44, 45, 101]. In the last few years, in a breakthrough work [77], the exact solution of TASEP from general initial condition was found. Making use of this, it is not hard to solve the Brownian model from arbitrary initial condition as well, see [78]. Beyond this, the exact solution of particle systems (3) and (5) for all other diffusions considered here is new.

Since the appearance of [77] there have been a number of works [7, 16, 75, 76, 79] which find exact solutions to some interacting particle systems, the most general setup being that of [76] (except in the case of discrete-time TASEP when a time-inhomogeneous version can be considered as well using the RSK correspondence, see [16]). In [76] a nice theory is developed to solve exactly certain discrete particle systems, which include for example the ones studied in [35]. We note that all of these papers consider models where the motion of individual particles (without the interactions) does not depend on their spatial location. In this work we consider and solve, for the first time, models for which the motion of particles depends in a non-trivial way on their spatial location. In particular, our results would not follow even after adapting the general theory of [76] to the continuous space setting.

Regarding the particle system (25), an exact computation of the correlation kernel \({\textsf{K}}\) (in equivalent form) in the Brownian case, for fixed time \(s=t\), first appeared in the work of Brezin and Hikami [28] and Johansson [59] on universality for random Wigner matrices. The computation of the kernel for any times s, t for both the Brownian and squared Bessel cases appeared in the works of Katori and Tanemura [69, 70]. Again, this is obtained there in an equivalent form, as it is possible to give \(e^{-t{\textsf{L}}}p\), with p a polynomial and \(t>0\), as an integral expression in these cases. All other cases considered in this paper are new.

The work of Katori on determinantal martingales [67], used to solve explicitly (in the sense of Theorem 1.9) some non-colliding diffusions, is also worth mentioning. The so-called fundamental martingale polynomials \(m_n(t,x)\) therein, which are a key ingredient in this theory, are essentially given by the simple formula \(e^{-t{\textsf{L}}}x^n\) (in the Brownian and squared Bessel cases, which are the ones considered there), see Proposition 3.5, also Sect. 4.3. Using determinantal martingales it is also possible to solve exactly some elliptic versions of space-time determinantal processes and non-colliding Brownian motions on the unit circle, see [67, 68]. Although these models do not directly fall within our framework, it is conceivable that a backward in time diffusion flow on the unit circle, applied to some sort of trigonometric polynomials, see [64], can be used to solve these models as well.

1.3 On the proofs

The proof of Theorems 1.3 and 1.5 is in two steps: first we prove there is an underlying “signed” determinantal process structure and then obtain an explicit formula for the correlation kernel by solving a biorthogonalization problem. Our starting point is explicit formulae for the transition semigroups \({\textsf{S}}_t^{\uparrow ,(N)}\), \({\textsf{S}}_t^{\downarrow ,(N)}\) obtained in [13]. Then, the fact that there is a determinantal structure in this non-translation invariant in space setting is non-trivial and relies on a key relation between the transition densities \(e^{t{\textsf{L}}^{(k)}}(x,y)\) and \(e^{t{\textsf{L}}^{(k+1)}}(x,y)\).

To then solve the biorthogonalization problem we first obtain, using this relation, two different representations for the functions that appear as data in the problem. Finally, finding the biorthogonal functions (polynomials) and proving the required orthogonality relation is very clean using the backward in time diffusion flow technology. To prove Theorem 1.6 we use some rather special properties of the squared Bessel diffusion. A key ingredient is Lemma 4.15, which gives a kind of commutation relation between the \(\partial ^{-1}\) operator and squared Bessel diffusion semigroups. We note here that \(\partial ^{-1}\) and the semigroup of a squared Bessel diffusion do not commute exactly; this is an important difference from, and extra complication compared to, the works [76,77,78], where the corresponding operators actually commute. Analogues possibly exist for other Pearson diffusions as well, see the discussion in Remark 4.17. Now, to prove Theorem 1.9 we again follow the same two-step strategy. The fact that there is an underlying determinantal structure is basically an immediate consequence of the Karlin-McGregor formula for non-intersecting paths [65] and the Eynard-Mehta theorem [24]. The non-trivial part of the proof is to compute the kernel explicitly. In order to do this we again need to find certain biorthogonal functions. Although the proof is actually presented in a somewhat different way, the essence of the key result in Sect. 5, Proposition 5.3, is to find such biorthogonal functions, see the discussion preceding that proposition for more details. Using the backward in time diffusion flow, finding these biorthogonal functions can again be done in a uniform and clean way.

1.4 Scaling limits

Using the Fredholm determinant formula in Theorems 1.3 and 1.5 we can consider various scaling limits of the interacting diffusions considered here. Such results were proven for special initial conditions for a number of different models in the past two decades [19,20,21, 44, 45, 101]. Then in [77], TASEP was exactly solved for general initial condition and this was used to construct and give a formula for the transition probabilities of the KPZ fixed point \({\mathfrak {h}}(t,x)\), by showing convergence of the Fredholm determinants corresponding to TASEP. This convergence is proven by showing trace class convergence of the rescaled kernels. This can be very technically demanding and has been worked out in detail for general initial conditions only in the case of TASEP. Convergence has also been shown for general initial conditions for the Brownian model in [78] but some results from [32] are used which shorten the technical arguments. Now, at a kind of soft edge scaling we would expect that the interacting squared Bessel diffusions (19) should also converge to the KPZ fixed point.

At least for the initial condition \(x=(0,\dots ,0)\), which corresponds to looking at consecutive submatrices of a Laguerre unitary ensemble matrix [47], convergence of the correlation kernels, in an equivalent form involving Laguerre polynomials, see Sect. 4.3, to the extended Airy kernel follows from [48]. It should be possible to prove this for general initial conditions using the probabilistic representation in Theorem 1.6, but this would be long and technical to perform rigorously and will be done elsewhere. A more robust way to show convergence to the KPZ fixed point, at the appropriate scaling, would be to generalize the recent breakthrough work [86] to the continuous state space setting. There, convergence to the KPZ fixed point is shown for finite range exclusion processes (and the KPZ equation by taking yet another limit) by comparing their transition probabilities to the ones for TASEP using energy estimates. One would expect that the role of TASEP should be played by the Brownian model. In any case, our primary interest in these models is not showing convergence to the KPZ fixed point (which would be interesting) but rather to show convergence to some novel limiting objects. In particular, we believe that at least in the squared Bessel case, in the hard edge scaling, new behaviour should emerge. We will pursue this in the future.

Now, regarding the non-colliding diffusions (25), the asymptotics of the correlation kernel, in certain regimes, have been computed in the Brownian and squared Bessel cases in [69, 70]. This has been done using an equivalent expression, which somehow writes \(e^{-t{\textsf{L}}}p\), for \(t>0\), as an integral (it is unclear whether this can be done in general). Returning to the general case, we note that although \(e^{t{\textsf{L}}}(x,y)\) is explicit [8, 14, 103] for any \({\textsf{L}}\) satisfying our standing assumption, the expression can be rather involved. Fortunately, working at the level of generators makes things much clearer conceptually, although not fully rigorous when taking limits. In particular, after a gauge transformation and rescaling of the kernel, and some non-trivial formal computations, it is possible to see that, at least in some examples beyond the Brownian and squared Bessel cases, the kernel \({\textsf{K}}\) (recall this depends on N) should converge as \(N\rightarrow \infty \) to a limiting kernel \({\textsf{K}}_\infty \). The following type of problem, imprecisely stated, becomes central to making this investigation rigorous. Suppose \(f_N\) is a polynomial and that, in a suitable sense, \(f_N \rightarrow f_\infty \) for some not necessarily polynomial but nice function \(f_\infty \). Moreover, assume that the second order differential operator \({\mathfrak {A}}_N\), which is some transformed version of \({\textsf{L}}^{(N)}={\textsf{L}}\) so that \(e^{-t{\mathfrak {A}}_N}f_N\) makes sense for any \(t\in {\mathbb {R}}\), satisfies \({\mathfrak {A}}_N \rightarrow {\mathfrak {A}}_\infty \), in an appropriate sense, for some limiting second order differential operator \({\mathfrak {A}}_\infty \). Then one would hope that, subject to certain conditions (finding the right conditions is part of the actual problem),

$$\begin{aligned} e^{-t{\mathfrak {A}}_N}f_N \overset{N \rightarrow \infty }{\longrightarrow } e^{-t {\mathfrak {A}}_\infty }f_\infty , \ \ t\in {\mathbb {R}}, \end{aligned}$$

and moreover to obtain strong enough estimates to prove convergence of the kernels in trace class. To prove this sort of result would involve some non-trivial analysis with Schrödinger semigroups and we will pursue it elsewhere.

1.5 Connections to integrable systems

There is a long history of connections between random matrices and integrable systems, see for example [47]. This extends to the level of interacting particle systems, as we now discuss. We first consider one-point distributions, i.e. the distribution of a single particle. We note that from Corollary 6.8, with \(\mu \) a probability measure supported on \({\mathbb {W}}_N^{\uparrow ,\circ }\) and the Markov kernels \(\mathsf {\Lambda }{\textsf{E}}_\uparrow \) and \(\mathsf {\Lambda }{\textsf{E}}_\downarrow \) defined in Sect. 6, we have:

$$\begin{aligned} {\mathbb {P}}_{\mu \mathsf {\Lambda }{\textsf{E}}_\uparrow }\left( {\textsf{x}}_N^\uparrow (t)\le w\right)&={\mathbb {P}}_{\mu }\left( {\textsf{z}}_N(t)\le w\right) , \\ {\mathbb {P}}_{\mu \mathsf {\Lambda }{\textsf{E}}_\downarrow }\left( {\textsf{x}}_N^\downarrow (t)\ge w\right)&={\mathbb {P}}_{\mu }\left( {\textsf{z}}_1(t)\ge w\right) , \ \ \forall t\ge 0, \ w\in (l,r). \end{aligned}$$

Then, using the connection between \(\left( {\textsf{z}}(t);t\ge 0\right) \) and eigenvalues of matrix diffusions, we obtain that, for certain special initial conditions, these probabilities (which are thus gap probabilities for eigenvalues of random matrices) can be represented in terms of Painlevé equations; there is a well-developed theory on gap probabilities of eigenvalues of random matrices and integrable systems, see for example [3, 18, 47, 50, 51, 97, 102].

It is rather remarkable, and far from well-understood, that more sophisticated connections to integrable systems exist for the multipoint and multitime distributions, at least for some related models that we now survey. For the KPZ fixed point \({\mathfrak {h}}(t,x)\), with general initial condition \({\mathfrak {h}}_0\), it was shown in [85] that the multipoint distributions:

$$\begin{aligned} {\mathbb {P}}_{{\mathfrak {h}}_0}\left( {\mathfrak {h}}(t,x_1)\le w_1, {\mathfrak {h}}(t,x_2)\le w_2,\dots ,{\mathfrak {h}}(t,x_m)\le w_m\right) , \end{aligned}$$

after an appropriate transformation, satisfy the matrix KP equation, see [85] for the precise statement. It is natural then to ask if something analogous happens for pre-limit models (multipoint in this case would mean looking at distributions of several particles at the same time). At least for TASEP, as the authors mention in [75], it appears that the transition probabilities do not solve some classical completely integrable equation. However, it was very recently shown in [75] that the multipoint distributions of another discrete model, the polynuclear growth model (PNG), after an appropriate transformation, satisfy the non-abelian 2D Toda equation. It would be very interesting if such results exist for the interacting particle systems (3) and (5). We hope to investigate this in the future.

In a different direction, for the non-colliding diffusions (25), in the case of Ornstein-Uhlenbeck and squared radial Ornstein-Uhlenbeck processes (see Sect. 2), starting from the invariant measure \({\textsf{M}}_N\), see Proposition 2.1, it has been shown in [4, 97] that the multitime distributions of the top-curve \(({\textsf{z}}_N(t);t\ge 0)\):

$$\begin{aligned} {\mathbb {P}}_{{\textsf{M}}_N}\left( {\textsf{z}}_N(t_1)\le w_1, {\textsf{z}}_N(t_2)\le w_2,\dots , {\textsf{z}}_N(t_m)\le w_m\right) , \end{aligned}$$
(29)

solve a system of nonlinear partial differential equations. It is reasonable to expect that analogous results should hold for the other diffusions considered in this paper, when starting from their invariant measure. Finally, as far as we can tell, connections to integrable systems for non-colliding diffusions starting from arbitrary deterministic initial condition have not been investigated yet.

Organisation of the paper The paper is organised as follows. In Sect. 2 we discuss examples of diffusions that fall within our framework and whether and where the corresponding interacting particle systems (3), (5), (25) have been studied before. In Sect. 3 we study the diffusion flow corresponding to \({\textsf{L}}\) (or \({\textsf{L}}^{(k)}\)) on polynomials backward in time. In Sect. 4 we prove Theorems 1.3, 1.5 and 1.6. In Sect. 4.1 we prove the existence of some “signed” determinantal point process structure underlying the particle systems (3) and (5). In Sect. 4.2 we obtain the explicit form of the kernel \({\mathfrak {K}}_t\) in Theorems 1.3 and 1.5. In Sect. 4.4 we prove Theorem 1.6. In Sect. 5 we prove Theorem 1.9. In Sect. 6 we discuss the connection between the particle systems and their transition kernels.

2 Some Examples of Diffusions and Invariant Measures

In this section we give examples of diffusions that fall within our framework (satisfying our standing assumption in Definition 1.1), in particular to which Theorems 1.3, 1.5 and 1.9 can all be applied. We also give references to where the explicit form of the transition densities \(e^{t{\textsf{L}}}(x,y)\) can be found, along with other relevant facts on these diffusions, such as invariant measures. Before discussing the explicit diffusion examples we record a small result, to which we refer back below, on the invariant measure of the non-colliding SDEs (25).

Proposition 2.1

Consider the measure on \((l,r)\) with density

$$\begin{aligned} {\textsf{m}}(y)=\frac{1}{Z_1} \frac{1}{\left( a_2y^2+a_1y+a_0\right) }\exp \left( \int ^y\frac{b_1z+b_0}{a_2z^2+a_1z+a_0}dz\right) , \end{aligned}$$
(30)

where the indefinite integral \(\int ^yf(z)dz\) denotes an anti-derivative of f and \(Z_1\) is a normalization constant, which we assume is finite and so that \(\int _l^r{\textsf{m}}(y)dy=1\) (in particular \({\textsf{m}}\) is unambiguously defined). Moreover, assume that this probability measure has moments of order at least \(2N-2\), namely \(\int _l^r |y|^{2N-2}{\textsf{m}}(y)dy<\infty \). Then, the unique invariant probability measure of the dynamics (25) with semigroup (24) in \({\mathbb {W}}_N^\uparrow \) is given by:

$$\begin{aligned} {\textsf{M}}_N(dx)=\frac{1}{Z_N}\prod _{i=1}^N{\textsf{m}}(x_i)\times \mathsf {\Delta }_N^2(x){\textbf{1}}_{\left( x\in {\mathbb {W}}_N^\uparrow \right) }dx, \end{aligned}$$
(31)

for some normalization constant \(Z_N\).

Proof

The proof is standard. The integrability assumption above gives that \({\textsf{M}}_N\) in (31) is a well-defined probability measure (since the highest degree of the monomials appearing in \(\mathsf {\Delta }_N^2(x)\) is \(2N-2\)). Note that (30) is the formula for the density of the so-called speed measure of the \({\textsf{L}}\)-diffusion, with respect to which \({\textsf{L}}\) is reversible, see for example [25, 58, 66]. Then, the rest of the argument is word for word the same as the proof of Proposition 4.4 of [11], to which we refer the reader (there a special \({\textsf{L}}\)-diffusion, the Hua-Pickrell diffusion below, is considered but the argument is completely generic). \(\square \)

Remark 2.2

The measure (31) is exactly an orthogonal polynomial ensemble with weight \({\textsf{m}}\), see [47]. The corresponding point process on \((l,r)\) is determinantal with kernel given in terms of the orthogonal polynomials corresponding to \({\textsf{m}}\), see [47].
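As a sketch of how (30) works in a concrete case (the Ornstein-Uhlenbeck diffusion of Sect. 2, with \(a_2=a_1=0\), \(a_0=1/2\), \(b_1=-\gamma \), \(b_0=0\)), the following numerically confirms that the resulting density is the stationary Gaussian with variance \(1/(2\gamma )\), and that \(\int {\textsf{m}}(y)\,({\textsf{L}}f)(y)\,dy=0\) for polynomial test functions f, as invariance requires:

```python
import numpy as np

# Ornstein-Uhlenbeck case: L = (1/2) d^2/dy^2 - gamma*y d/dy on (-inf, inf).
# Formula (30) gives m(y) proportional to 2*exp(-gamma*y^2).
gamma = 0.7
y = np.linspace(-12.0, 12.0, 200001)
dy = y[1] - y[0]

m = np.exp(-gamma * y**2)
m /= m.sum() * dy                      # normalise so that m integrates to 1

# The normalised density is the Gaussian with variance 1/(2*gamma).
assert abs((y**2 * m).sum() * dy - 1.0 / (2.0 * gamma)) < 1e-6

# Invariance: the integral of m * (L f) vanishes for f = y, y^2, y^3.
for f1, f2 in [(np.ones_like(y), np.zeros_like(y)),   # f = y
               (2.0 * y, 2.0 * np.ones_like(y)),      # f = y^2
               (3.0 * y**2, 6.0 * y)]:                # f = y^3
    Lf = 0.5 * f2 - gamma * y * f1
    assert abs((m * Lf).sum() * dy) < 1e-6
```

The same quadrature check applies verbatim to the other diffusions listed below, with the appropriate coefficients \(a_2,a_1,a_0,b_1,b_0\) and interval \((l,r)\).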

Brownian motion We have the generator \({\textsf{L}}=\frac{1}{2}\frac{d^2}{dx^2}\) in \((-\infty ,\infty )\). Both \(-\infty \) and \(\infty \) are natural boundary points, see [25]. The transition kernel is simply the heat kernel and there is no invariant probability measure. As mentioned, the model of Brownian motions with one-sided collisions (3), (5), in its many equivalent forms, has been heavily studied in the last two decades, see [32, 78, 80, 100, 101]. The non-colliding particle system (25) in this case is called Dyson’s Brownian motion, it arises as the eigenvalue evolution of Brownian motion on Hermitian matrices and has been intensely studied from many points of view for decades [6, 40, 41, 59, 69, 94, 98].
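As a purely illustrative sketch (not part of the paper's arguments), one can simulate a Hermitian Brownian motion and observe that its ordered eigenvalues, i.e. the dynamics (25) in the Brownian case, never collide:

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt = 5, 400, 1e-3

def hermitian_increment(n):
    """Hermitian Gaussian increment; Brownian scaling sqrt(dt) is applied by the caller."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2.0

H = np.zeros((N, N), dtype=complex)
min_gaps = []
for _ in range(steps):
    H = H + np.sqrt(dt) * hermitian_increment(N)
    ev = np.linalg.eigvalsh(H)           # eigenvalues, returned in increasing order
    min_gaps.append(np.diff(ev).min())

# For t > 0 the eigenvalues are almost surely pairwise distinct and stay ordered.
assert min(min_gaps) > 0.0
```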

Ornstein-Uhlenbeck process This generalises the Brownian motion. We have the generator \({\textsf{L}}=\frac{1}{2}\frac{d^2}{dx^2}-\gamma \frac{d}{dx}\) in \((-\infty ,\infty )\). Both \(-\infty \) and \(\infty \) are natural boundary points for any \(\gamma \), see [25]. The transition kernel can be written explicitly in terms of a series involving Hermite polynomials [103] or as a closed expression using Mehler’s formula, see [25]. For \(\gamma >0\) the invariant measure is given by the Gaussian distribution and, by virtue of Proposition 2.1 for example, the invariant measure for the non-colliding diffusions in (25) is given by the law of the eigenvalues of the Gaussian unitary ensemble, see [47]. The non-colliding particle system (25) arises as the eigenvalues of a Hermitian Ornstein-Uhlenbeck process and has been studied as much as the standard Brownian model [40, 41, 60].

Geometric Brownian motion We have the generator \({\textsf{L}}=\sigma ^2x^2\frac{d^2}{dx^2}+\beta x \frac{d}{dx}\) in \((0,\infty )\). Both 0 and \(\infty \) are natural boundary points for any \(\sigma ,\beta \), see [25]. The transition kernel can easily be obtained from the heat kernel and there is no invariant probability measure, see [25]. The non-colliding particle system (25) in this case was briefly discussed in [13] but, as far as we can tell, a canonical matrix diffusion has not been considered anywhere. We note that the non-colliding geometric Brownian motions of (25) do not arise by simply exponentiating Dyson Brownian motion (as can be checked by applying Itô’s formula).

Squared Bessel process We have the generator \({\textsf{L}}=2x\frac{d^2}{dx^2}+\gamma \frac{d}{dx}\) in \((0,\infty )\). For \(\gamma \ge 2\), the point 0 is an entrance boundary point while \(\infty \) is always natural for any \(\gamma \), see [25]. The transition kernel of \({\textsf{L}}\) we have already given explicitly in (20) and there is no invariant probability measure, see [25]. The non-colliding diffusions (25) arise as the eigenvalues of the so-called Laguerre matrix process on positive definite Hermitian matrices introduced in [73] and further studied in [34]. The corresponding diffusion on real symmetric positive definite matrices appeared earlier and is known as the Wishart process [29].

Squared radial Ornstein-Uhlenbeck process This generalises the squared Bessel process. We have the generator \({\textsf{L}}=2x\frac{d^2}{dx^2}+\left( -\gamma _1 x+\gamma _2\right) \frac{d}{dx}\) in \((0,\infty )\). For \(\gamma _2 \ge 2\), the point 0 is an entrance boundary point while \(\infty \) is always natural for any \(\gamma _1,\gamma _2\), see [25]. The transition kernel can be written explicitly in terms of a series involving Laguerre polynomials [103] or as a closed expression involving Bessel functions, see [25]. For \(\gamma _1>0\) the invariant measure is given by the Gamma distribution and, by virtue of Proposition 2.1 for example, the invariant measure for (25) is given by the law of the eigenvalues of the Laguerre unitary ensemble, see [47]. The non-colliding diffusions (25) arise as the eigenvalues of the Ornstein-Uhlenbeck analogue of the Laguerre matrix process on positive definite Hermitian matrices.

Jacobi process We have the generator \({\textsf{L}}=2x(1-x)\frac{d^2}{dx^2}+\left( -(\gamma _1+\gamma _2)x+\gamma _2\right) \frac{d}{dx}\) in (0, 1). For \(\gamma _1,\gamma _2 \ge 2\) both the points 0 and 1 are entrance boundary points, see [5]. The transition kernel can be written explicitly as a series involving Jacobi polynomials, see [103]. For \(\gamma _1,\gamma _2>0\) the invariant measure is given by the beta distribution and, by virtue of Proposition 2.1 for example, the invariant measure for (25) is given by the law of the eigenvalues of the Jacobi unitary ensemble, see [47]. The non-colliding particle system (25) arises as the evolution of eigenvalues of the so-called matrix Jacobi diffusion introduced in [37].

Inverse gamma diffusion This is also known as inhomogeneous geometric Brownian motion. We have the generator \({\textsf{L}}=2x^2\frac{d^2}{dx^2}+\left( \gamma _1x+\gamma _2\right) \frac{d}{dx}\) in \((0,\infty )\). For \(\gamma _2>0\) the point 0 is an entrance boundary while \(\infty \) is always a natural boundary point for any \(\gamma _1,\gamma _2\), see [99]. An explicit formula for the transition kernel in terms of the Bessel orthogonal polynomials and hypergeometric functions can be found in [103]. For \(\gamma _1<2\) (still assuming \(\gamma _2> 0\)) the invariant measure is the inverse Gamma distribution and, by virtue of Proposition 2.1 for example (we need a stricter restriction on the parameters to get finite moments), the invariant measure for (25) is given by the law of the eigenvalues of the inverse Laguerre ensemble, see [47]. As far as we can tell, the non-colliding diffusions (25) for this particular diffusion, and the corresponding matrix process, first appeared in the work of Rider and Valkó [88] in relation to a matrix extension of Dufresne’s identity [38].

Hua-Pickrell diffusion We have the generator \({\textsf{L}}=(1+x^2)\frac{d^2}{dx^2}+\left( \gamma _1x+\gamma _2\right) \frac{d}{dx}\) in \((-\infty ,\infty )\). Both \(-\infty \) and \(\infty \) are always natural boundary points for any \(\gamma _1,\gamma _2\), see [11]. An explicit formula for the transition kernel in terms of the Romanovski orthogonal polynomials and hypergeometric functions can be found in [8, 103]. For \(\gamma _1<1\) the invariant measure is the Pearson IV distribution (the Cauchy distribution is the special case \(\gamma _1=\gamma _2=0\)), see [11], and by virtue of Proposition 2.1 for example (we need a stricter restriction on the parameters to get finite moments), the invariant measure for (25) is given by the law of eigenvalues of the so-called Hua-Pickrell or Cauchy unitary ensemble, see [11, 23, 47, 102]. We introduced and studied the non-colliding diffusions (25), and the associated matrix process, in [11] motivated by the construction of infinite-dimensional dynamics associated to a certain determinantal point process with infinitely many points and a matrix extension [9] of Bougerol’s identity [26].

Fisher-Snedecor diffusion We have the generator \({\textsf{L}}=2x(1+x)\frac{d^2}{dx^2}+\left( \gamma _1x+\gamma _2\right) \frac{d}{dx}\) in \((0,\infty )\). For \(\gamma _2 \ge 2\) the point 0 is an entrance boundary point while \(\infty \) is always natural for any \(\gamma _1,\gamma _2\). An explicit formula for the transition kernel in terms of the Fisher-Snedecor orthogonal polynomials and hypergeometric functions can be found in [14]. For \(\gamma _1,\gamma _2>0\) the invariant measure of the diffusion is the beta prime distribution and, by virtue of Proposition 2.1 (we need a stricter restriction on the parameters to get finite moments), the invariant measure of the non-colliding diffusions (25) is the law of the eigenvalues of the Hermitian matrix beta prime distribution [55]. As far as we can tell, the non-colliding particle system (25) for this diffusion had not been considered in the literature before and a corresponding matrix diffusion has not yet been introduced.

3 Backward in Time Diffusion Flow on Polynomials

In this section we show that the definition in (10) of \(e^{t{\textsf{L}}}p\) as a power series makes sense and is consistent, for \(t>0\), with integration against the transition kernel of the \({\textsf{L}}\)-diffusion. We also prove some other facts about the backward in time diffusion flow on polynomials which, although not strictly necessary for subsequent developments, might be of independent interest. Write \([z^i]p(z)\) for the coefficient of the term \(z^i\) in a polynomial p(z).

Proposition 3.1

Let \(M\ge 1\) and \(p \in {\mathfrak {P}}_M\), where \({\mathfrak {P}}_M\) is the set of polynomials (with complex coefficients) of degree M. Let \(t\in {\mathbb {C}}\). Then, the linear map

$$\begin{aligned} e^{t{\textsf{L}}}:{\mathfrak {P}}_M \longrightarrow {\mathfrak {P}}_M, \end{aligned}$$

given by (10) is well-defined, with the series (10) converging uniformly on compact sets in \((t,z)\in {\mathbb {C}}^2\). Moreover, we have:

$$\begin{aligned}{}[z^M]e^{t{\textsf{L}}}p(z)=e^{tM(b_1+a_2(M-1))}[z^M]p(z). \end{aligned}$$
(32)

Proof

Consider a polynomial \(p(z)=\sum _{i=0}^M \epsilon _i z^i \in {\mathfrak {P}}_M\). Differentiating we get

$$\begin{aligned} \frac{d}{dz}p(z)=\sum _{i=0}^{M-1}(i+1)\epsilon _{i+1}z^i, \ \ \frac{d^2}{dz^2}p(z)=\sum _{i=0}^{M-2}(i+1)(i+2)\epsilon _{i+2} z^i. \end{aligned}$$

Define \(p_j(z)={\textsf{L}}^jp(z)=\sum _{i=0}^M\epsilon _i^{(j)}z^i\) and observe that this is a polynomial of degree at most M. We have the following recurrence for the coefficients \(\epsilon _i^{(j)}\) in j:

$$\begin{aligned} \epsilon _i^{(j+1)}&=a_2 i(i-1)\epsilon _i^{(j)}+a_1(i+1)i\epsilon _{i+1}^{(j)}+a_0 (i+1)(i+2) \epsilon _{i+2}^{(j)}+b_1 i \epsilon _i^{(j)}+b_0(i+1)\epsilon _{i+1}^{(j)}\\&=\epsilon _i^{(j)}[a_2i(i-1)+b_1i]+\epsilon _{i+1}^{(j)}[a_1i(i+1)+b_0(i+1)]+a_0(i+1)(i+2)\epsilon _{i+2}^{(j)}. \end{aligned}$$

In particular, there is some finite constant \(\eta \), depending only on \(a_0, a_1, a_2, b_0, b_1\) and i (recall that \(i=0,\dots , M\) is fixed) such that:

$$\begin{aligned} \left| \epsilon _i^{(j+1)}\right| \le \eta \left( \left| \epsilon _i^{(j)}\right| +\left| \epsilon _{i+1}^{(j)}\right| +\left| \epsilon _{i+2}^{(j)}\right| \right) . \end{aligned}$$

By summing this relation over i we get that, for a different constant \({\tilde{\eta }}\), independent of j:

$$\begin{aligned} \sum _{i=0}^M\left| \epsilon _i^{(j+1)}\right| \le {\tilde{\eta }} \sum _{i=0}^M\left| \epsilon _i^{(j)}\right| . \end{aligned}$$

By iterating, we finally get that, for any \(i=0,\dots ,M\), for some constant \(\eta ^*\) independent of j, \(\left| \epsilon _i^{(j)}\right| \le \sum _{m=0}^M \left| \epsilon _m^{(j)}\right| \le (\eta ^*)^j\). Suppose now that \((t,z)\in {\mathcal {V}}\), where \({\mathcal {V}}\) is an arbitrary compact subset of \({\mathbb {C}}^2\). Then, we have the following bound for any j:

$$\begin{aligned} \left| \frac{t^j{\textsf{L}}^j}{j!}p(z) \right| \le \frac{|t|^j}{{j!}}\sum _{i=0}^M\left| \epsilon _i^{(j)}\right| |z|^i \le \frac{|t|^j\left( \eta ^*\right) ^j}{{j!}}\sum _{i=0}^M|z|^i \le \frac{C_{{\mathcal {V}},\eta ^*}^j}{j!}, \end{aligned}$$

for some finite constant \(C_{{\mathcal {V}},\eta ^*}\). By the Weierstrass M-test this gives that \(e^{t{\textsf{L}}}p(z)\) is well-defined, being a polynomial of degree at most M, with the series (10) converging uniformly in \((t,z)\) on compact sets in \({\mathbb {C}}^2\). Finally, using the recurrence for \(\epsilon _M^{(j)}\) we obtain that for \(p\in {\mathfrak {P}}_M\):

$$\begin{aligned} {}[z^M]{\textsf{L}}^jp(z)=\left( b_1 M+a_2M(M-1)\right) ^j[z^M]p(z), \end{aligned}$$

from which (32) follows, as does the claim of the proposition that the degree of \(e^{t{\textsf{L}}}p\) is exactly M (since by (32) we have \([z^M]e^{t{\textsf{L}}}p(z)\ne 0\)). \(\square \)

Remark 3.2

The operator \(e^{t{\textsf{L}}}\) is well-defined as a power series not only on polynomials but more generally on analytic functions with certain growth conditions. Since we do not need this in this paper we do not pursue it further.
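Since \({\textsf{L}}\) preserves \({\mathfrak {P}}_M\), the flow \(e^{t{\textsf{L}}}p\) can be computed numerically by applying the truncated power series (10) to the coefficient vector of p. The sketch below (with made-up values of \(a_2,a_1,a_0,b_1,b_0\)) does this and verifies the leading-coefficient identity (32):

```python
import numpy as np

# Hypothetical coefficients of L = (a2 x^2 + a1 x + a0) d^2/dx^2 + (b1 x + b0) d/dx.
a2, a1, a0, b1, b0 = 0.3, -0.2, 0.5, 0.4, -1.0
M = 4

# Matrix of L acting on the monomial basis (1, z, ..., z^M): column i holds L z^i.
A = np.zeros((M + 1, M + 1))
for i in range(M + 1):
    A[i, i] += a2 * i * (i - 1) + b1 * i                 # z^i terms
    if i >= 1:
        A[i - 1, i] += a1 * i * (i - 1) + b0 * i         # z^{i-1} terms
    if i >= 2:
        A[i - 2, i] += a0 * i * (i - 1)                  # z^{i-2} terms

def exp_tL(t, coeffs, terms=80):
    """e^{tL} p via the truncated power series (10) on coefficient vectors."""
    out, term = coeffs.astype(float).copy(), coeffs.astype(float).copy()
    for j in range(1, terms):
        term = (t / j) * (A @ term)
        out = out + term
    return out

t = 0.8
p = np.array([1.0, -2.0, 0.0, 3.0, 5.0])    # p(z) = 1 - 2z + 3z^3 + 5z^4
q = exp_tL(t, p)

# (32): the top coefficient is multiplied by e^{t M (b1 + a2 (M-1))}.
assert abs(q[M] / p[M] - np.exp(t * M * (b1 + a2 * (M - 1)))) < 1e-10
```

Note that A is upper triangular, so the top coefficient evolves autonomously, which is exactly the mechanism behind (32); the same routine with negative t computes the backward flow \(e^{-t{\textsf{L}}}p\).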

Lemma 3.3

Let p be a polynomial. Then, for \(t>0\) and \(x\in (l,r)\), the definition of \(e^{t{\textsf{L}}}p(x)\) in (10) coincides with

$$\begin{aligned} \int _{l}^r e^{t{\textsf{L}}}(x,y)p(y)dy, \end{aligned}$$
(33)

where recall that \(e^{t{\textsf{L}}}(x,y)\) is the transition density, with respect to the Lebesgue measure, of an \({\textsf{L}}\)-diffusion in \((l,r)\).

Proof

We first claim that the classical solution to the partial differential equation with \((t,x)\in [0,\infty ) \times (l,r)\):

$$\begin{aligned} \partial _t u(t,x)={\textsf{L}}_xu(t,x), \ \ u(0,x)=p(x), \end{aligned}$$
(34)

subject to the bound, for any \(T>0\), \(|\partial _x u(t,x)|\le h_T(x)\) uniformly in \(t\in [0,T]\) for some non-negative polynomial \(h_T\), is unique and given by the probabilistic representation \(u(t,x)={\mathbb {E}}_x\left[ p({\textsf{x}}(t))\right] \), where \(({\textsf{x}}(t);t\ge 0)\) is an \({\textsf{L}}\)-diffusion starting from x, which is exactly the expression in (33). The claim can be proven as follows. Let u be an arbitrary solution of (34) subject to the above conditions. Applying Itô’s formula [87] we see that, for any t, the process \(\left( s\mapsto u(t-s,{\textsf{x}}(s));0\le s \le t\right) \) is a local martingale by virtue of (34). Moreover, by virtue of the polynomial bound in the condition above and the fact that the \({\textsf{L}}\)-diffusion has polynomial moments we obtain that the quadratic variation of \(\left( s\mapsto u(t-s,{\textsf{x}}(s));0\le s \le t\right) \) is integrable and hence it is a true martingale [87]. The optional stopping theorem [87] gives the desired claim.

Finally, it is easily seen that \(e^{t{\textsf{L}}}p(x)\) defined in (10) also solves (34) and satisfies the required bound; we can justify differentiating term by term by adapting the proof of Proposition 3.1. The conclusion follows. \(\square \)

Lemma 3.4

Let p be a polynomial and \(s,t\in {\mathbb {C}}\). We have \(e^{s{\textsf{L}}}e^{t{\textsf{L}}}p=e^{(s+t){\textsf{L}}}p\).

Proof

This follows from the power series definition in (10). \(\square \)

Proposition 3.5

Let p be a polynomial and \(\left( {\textsf{x}}(t);t\ge 0\right) \) be a realisation of the \({\textsf{L}}\)-diffusion, with \({\textsf{x}}(0)=x\) deterministic, with natural filtration \(\left( {\mathcal {F}}_s\right) _{s\ge 0}\). Then, \(\Big (\left[ e^{-t{\textsf{L}}}p\right] ({\textsf{x}}(t))\Big )_{t\ge 0}\) is a martingale.

Proof

First note that, since the \({\textsf{L}}\)-diffusion has polynomial moments, \({\mathbb {E}}_x\left[ \left| e^{-t{\textsf{L}}}p\left( {\textsf{x}}(t)\right) \right| \right] <\infty \). Then, for \(s\le t\), by the Markov property and Lemma 3.4 we have:

$$\begin{aligned} {\mathbb {E}}\left[ e^{-t{\textsf{L}}}p({\textsf{x}}(t))\big |{\mathcal {F}}_s\right] =\left[ e^{(t-s) {\textsf{L}}}e^{-t{\textsf{L}}}p\right] ({\textsf{x}}(s))=\left[ e^{-s{\textsf{L}}}p\right] ({\textsf{x}}(s)), \end{aligned}$$

and the conclusion follows. \(\square \)

Remark 3.6

Suppose that p is a polynomial eigenfunction of \({\textsf{L}}\) with eigenvalue \(\lambda \), \({\textsf{L}}p(x)=\lambda p(x)\). Then, it is easy to see from (10), that for any \(t\in {\mathbb {C}}\), \(e^{t{\textsf{L}}}p(x)=e^{\lambda t}p(x)\).
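As a concrete sanity check of Proposition 3.5 combined with Remark 3.6 (not part of the argument), take the Ornstein–Uhlenbeck generator \({\textsf{L}}=\frac{1}{2}\partial _x^2-x\partial _x\) on \((l,r)=(-\infty ,\infty )\), an illustrative choice: \(p(x)=x^2-\frac{1}{2}\) satisfies \({\textsf{L}}p=-2p\), so \(e^{-t{\textsf{L}}}p=e^{2t}p\) and \(e^{2t}\big ({\textsf{x}}(t)^2-\frac{1}{2}\big )\) should have constant expectation. A Monte Carlo sketch using the exact Gaussian OU transition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck diffusion dX = -X dt + dW, generator L = (1/2) d^2 - x d;
# an illustrative L-diffusion on (l, r) = (-inf, inf).  p(x) = x^2 - 1/2
# satisfies Lp = -2p, so e^{-tL} p = e^{2t} p (Remark 3.6) and
# e^{2t} (X_t^2 - 1/2) should be a martingale (Proposition 3.5).
x0, n_paths = 1.3, 400_000

def ou_step(x, dt):
    # exact OU transition: X_{t+dt} ~ N(x e^{-dt}, (1 - e^{-2 dt}) / 2)
    sd = np.sqrt((1.0 - np.exp(-2.0 * dt)) / 2.0)
    return x * np.exp(-dt) + sd * rng.standard_normal(x.shape)

x = np.full(n_paths, x0)
means = []
for t in (0.0, 0.25, 0.5):
    means.append(np.exp(2.0 * t) * np.mean(x**2 - 0.5))
    x = ou_step(x, 0.25)  # advance all paths by dt = 0.25

# each entry of means should be close to p(x0) = x0^2 - 1/2
```

Checking constancy of the expectation at a few fixed times is of course weaker than the martingale property, but it already exercises the identity \(e^{(t-s){\textsf{L}}}e^{-t{\textsf{L}}}p=e^{-s{\textsf{L}}}p\).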

We finally record here the equations of motion for the zeroes of \(e^{t{\textsf{L}}}p(z)\). This is of interest to us since applying the diffusion flow backward in time to the polynomial \(\prod _{i=1}^N(z-x_i)\), namely the polynomial with roots given by the initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^{\uparrow }\) of the SDEs (25), plays a key role in Theorem 1.9. Observe how similar these equations are to the interacting SDEs (25): essentially we remove the noise and reverse time.

Proposition 3.7

Let \(p \in {\mathfrak {P}}_M\). Let \({\mathfrak {z}}_1(t), \dots , {\mathfrak {z}}_M(t)\) be the zeroes of \(e^{t{\textsf{L}}}p(z)\). Suppose that for some \(t_0\in {\mathbb {C}}\) the zeroes \({\mathfrak {z}}_1(t_0), \dots , {\mathfrak {z}}_M(t_0)\) are distinct. Then, for all \(t\in {\mathbb {C}}\) in a neighbourhood of \(t_0\), each \({\mathfrak {z}}_i(t)\) depends holomorphically on t and we have the following equations of motion, for \(i=1,\dots ,M\):

$$\begin{aligned} \partial _t {\mathfrak {z}}_i(t)=-{\textsf{b}}({\mathfrak {z}}_i(t))+2{\textsf{a}}({\mathfrak {z}}_i(t)) \sum _{j\ne i} \frac{1}{{\mathfrak {z}}_j(t)-{\mathfrak {z}}_i(t)}. \end{aligned}$$
(35)

Proof

The proof is a straightforward adaptation of the argument in [56, 96], given there for the heat kernel/Brownian case; see in particular Proposition 2.7 in [56] for more details. Local holomorphicity of the roots \({\mathfrak {z}}_i(t)\) in t is a consequence of the (holomorphic) implicit function theorem: since by assumption the \({\mathfrak {z}}_i(t_0)\) are distinct, \(\partial _z e^{t_0{\textsf{L}}}p\) is non-zero at each \({\mathfrak {z}}_i(t_0)\). Hence, implicitly differentiating in t, in a neighbourhood of \(t_0\), the equation \(e^{t{\textsf{L}}}p\left( {\mathfrak {z}}_i(t)\right) =0\), and using the fact that \(\partial _t e^{t{\textsf{L}}}p(z)={\textsf{L}}_z e^{t{\textsf{L}}}p(z)\) for any \(z\in {\mathbb {C}}\), we get the equation:

$$\begin{aligned} {\textsf{L}}_z e^{t{\textsf{L}}}p\left( {\mathfrak {z}}_i(t)\right) +\partial _t{\mathfrak {z}}_i(t)\partial _z e^{t{\textsf{L}}}p\left( {\mathfrak {z}}_i(t)\right) =0. \end{aligned}$$

After some computations, see [56, 96] for more details, we obtain (35). \(\square \)
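Equation (35) is easy to verify numerically in the heat case \({\textsf{a}}\equiv \frac{1}{2}\), \({\textsf{b}}\equiv 0\) (an illustrative specialization, not the general setting of the paper): for a cubic p the series (10) terminates, \(e^{t{\textsf{L}}}p=p+\frac{t}{2}p''\), and (35) reduces to \(\partial _t{\mathfrak {z}}_i=\sum _{j\ne i}\left( {\mathfrak {z}}_j-{\mathfrak {z}}_i\right) ^{-1}\). A sketch comparing a finite-difference derivative of the roots with this right-hand side:

```python
import numpy as np

# Heat-flow specialization a(z) = 1/2, b(z) = 0 (an illustrative choice).
# For p(z) = (z-1)(z-2)(z-4) = z^3 - 7z^2 + 14z - 8 the series terminates:
# e^{tL} p = p + (t/2) p'' has coefficients [1, -7, 14 + 3t, -(8 + 7t)].
def evolved_roots(t):
    roots = np.roots([1.0, -7.0, 14.0 + 3.0 * t, -(8.0 + 7.0 * t)])
    return np.sort(roots.real)  # roots stay real and distinct for small t

t0, h = 0.1, 1e-6
z = evolved_roots(t0)
dz_num = (evolved_roots(t0 + h) - evolved_roots(t0 - h)) / (2.0 * h)

# right-hand side of the equations of motion with b = 0 and 2a = 1:
# -b(z_i) + 2a(z_i) * sum_{j != i} 1/(z_j - z_i)
dz_eom = np.array([sum(1.0 / (z[j] - z[i]) for j in range(3) if j != i)
                   for i in range(3)])
```

Note the sign: the roots attract each other under the flow, consistent with the time reversal noted above.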

4 Proof of Theorems 1.3, 1.5 and 1.6

4.1 Existence of determinantal structure

In this subsection we prove that there is a hidden “signed” determinantal point process structure underlying the interacting particle systems (3) and (5). We need some additional notation. Recall that we defined the operator \(\partial ^{-1}\) in (11) by \(\partial ^{-1}f(x)=\int _{l}^x f(y)dy\). Moreover, define the operator \({\hat{\partial }}^{-1}\), on functions f in \((l,r)\) which integrate polynomials, by:

$$\begin{aligned} {\hat{\partial }}^{-1}f(x)=-\int _{x}^rf(y)dy. \end{aligned}$$
(36)

It is easy to see that, for any \(m\in {\mathbb {N}}\),

$$\begin{aligned} {\hat{\partial }}^{-m}f(x)=\underbrace{{\hat{\partial }}^{-1}\cdots {\hat{\partial }}^{-1}}_{\text {m times}}f(x)=-\int _x^r\frac{(x-y)^{m-1}}{(m-1)!}f(y)dy. \end{aligned}$$
(37)

For \(n\in {\mathbb {N}}\), we use the convention \({\hat{\partial }}^n=\partial ^n\).
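Formula (37) can be checked symbolically; the sketch below iterates \({\hat{\partial }}^{-1}\) three times and compares with the single-integral kernel form, on the assumed interval \((l,r)=(0,1)\) with the test function \(f(y)=y^3\):

```python
import sympy as sp

x = sp.symbols('x', real=True)
r = 1        # right endpoint; (l, r) = (0, 1) is an assumption for this check
f = x**3     # test integrand
m = 3

def dhat_inv(expr):
    # hat-partial^{-1} g(x) = -int_x^r g(u) du
    u = sp.Dummy('u', real=True)
    return -sp.integrate(expr.subs(x, u), (u, x, r))

iterated = f
for _ in range(m):
    iterated = dhat_inv(iterated)

# single-integral kernel form (37): -int_x^r (x-u)^{m-1}/(m-1)! f(u) du
u = sp.Dummy('u', real=True)
kernel_form = -sp.integrate(
    (x - u) ** (m - 1) / sp.factorial(m - 1) * f.subs(x, u), (u, x, r))

difference = sp.simplify(iterated - kernel_form)  # should be 0
```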

In what follows we will make frequent use of the following observations.

Lemma 4.1

Let \(m,n \in {\mathbb {N}}\). Suppose f is a smooth function which integrates polynomials in \((l,r)\), namely its integral against any polynomial in \((l,r)\) is finite. Then, we have

$$\begin{aligned} \partial ^m\partial ^n f&=\partial ^{n}\partial ^{m}f=\partial ^{m+n}f,\\ \partial ^{-m}\partial ^{-n}f&=\partial ^{-n}\partial ^{-m}f=\partial ^{-n-m}f,\\ \partial ^m\partial ^{-n}f&=\partial ^{m-n}f. \end{aligned}$$

Moreover, for \(k=1,\dots ,N\) and \(t>0\), we have

$$\begin{aligned} \partial _y^{m}\partial _x^n e^{t{\textsf{L}}^{(k)}}(x,y)&=\partial _x^n\partial _y^m e^{t{\textsf{L}}^{(k)}}(x,y), \end{aligned}$$
(38)
$$\begin{aligned} \partial _y^{-m}\partial _x^n e^{t{\textsf{L}}^{(k)}}(x,y)&=\partial _x^n\partial _y^{-m} e^{t{\textsf{L}}^{(k)}}(x,y). \end{aligned}$$
(39)

All the relations above also hold with \(\partial \) replaced by \({\hat{\partial }}\).

Proof

The first three relations are immediate from the definitions. Relation (38) follows from smoothness of \(e^{t{\textsf{L}}^{(k)}}(x,y)\) in \((x,y)\in (l,r)^2\) for \(t>0\), while (39) is valid by virtue of the dominated convergence theorem: by standard estimates [95] (see [11], where this is worked out in detail in the Hua-Pickrell case) we get that, for a small neighbourhood \({\mathcal {U}}_x\subset (l,r)\) of x, the function \(\sup _{\xi \in {\mathcal {U}}_x}\big |\partial _\xi ^n e^{t{\textsf{L}}^{(k)}}(\xi ,\cdot )\big |\) integrates polynomials. Finally, it is easy to see that \(\partial \) can be replaced by \({\hat{\partial }}\) in all the above. \(\square \)

We note that, for \(m,n \in {\mathbb {N}}\), \(\partial ^{-m} \partial ^n f\) is in general not equal to \(\partial ^{-m+n}f=\partial ^n\partial ^{-m}f\). We would need to enforce boundary conditions at l (respectively r for \({\hat{\partial }}\)) for f for this to be true, for example \(\partial ^{-1}\partial f(x)=f(x)-f(l)\), \({\hat{\partial }}^{-1}{\hat{\partial }}f(x)=f(x)-f(r)\). However, we will not need to make use of this.

The starting point of our analysis is the following pair of explicit expressions, obtained in [13], for the semigroups \({\textsf{S}}_t^{\uparrow ,(N)}\) and \({\textsf{S}}_t^{\downarrow ,(N)}\). These are usually called Schütz-type formulae after the seminal work of Schütz [91] on the transition probabilities of TASEP.

Proposition 4.2

Under the standing assumption in Definition 1.1, the transition kernel of the interacting particle system (3) in \({\mathbb {W}}_N^{\uparrow }\) is given by, with \(x=(x_1,\dots ,x_N),y=(y_1,\dots ,y_N)\in {\mathbb {W}}_N^\uparrow \),

$$\begin{aligned} {\textsf{S}}_t^{\uparrow ,(N)}(x,dy)=\det \left( \left[ \partial ^{j-i}_{\cdot }e^{t{\textsf{L}}^{(i)}}(x_i,\cdot )\right] (y_j)\right) _{i,j=1}^Ndy. \end{aligned}$$
(40)

Proposition 4.3

Under the standing assumption in Definition 1.1, the transition kernel of the interacting particle system (5) in \({\mathbb {W}}_N^{\downarrow }\) is given by, with \(x=(x_1,\dots ,x_N),y=(y_1,\dots ,y_N)\in {\mathbb {W}}_N^\downarrow \),

$$\begin{aligned} {\textsf{S}}_t^{\downarrow ,(N)}(x,dy)=\det \left( \left[ {\hat{\partial }}^{j-i}_{\cdot }e^{t{\textsf{L}}^{(i)}} (x_i,\cdot )\right] (y_j)\right) _{i,j=1}^Ndy. \end{aligned}$$
(41)

The following identities, proved in Section 13.4 of [13] and used to obtain the explicit form of the transition kernels in Propositions 4.2 and 4.3, will be important below:

$$\begin{aligned} e^{t{\textsf{L}}^{(j)}}(x,y)&=-e^{-t{\textsf{c}}^{(j)}}\partial _y^{-1}\partial _xe^{t{\textsf{L}}^{(j+1)}}(x,y), \end{aligned}$$
(42)
$$\begin{aligned} e^{t{\textsf{L}}^{(j)}}(x,y)&=-e^{-t{\textsf{c}}^{(j)}}{\hat{\partial }}_y^{-1}\partial _xe^{t{\textsf{L}}^{(j+1)}}(x,y). \end{aligned}$$
(43)

Observe that, since \(e^{t{\textsf{L}}^{(j+1)}}\) is a bona fide Markov semigroup (\(e^{t{\textsf{L}}^{(j+1)}}{\textbf{1}}={\textbf{1}}\)), the two identities are actually equivalent.

The proposition below is the key step in the proof of existence of a determinantal structure for the interacting particle system (3). In the translation invariant (in space) setting the argument has been employed several times [19,20,21,22] and is known as Sasamoto’s trick [90]. The fact that something analogous can be done in the non-translation invariant setting we consider here is non-trivial and crucially relies on the equation (42) above. It is worth noting that the set \({\mathcal {D}}_N\) in (46) below is very closely related to the notion of interlacing arrays \(\mathbb{I}\mathbb{A}_N\) defined in (90) as we explain in Remark 4.5. Moreover, the measure in (45) below actually coincides, for special initial conditions, with the distribution at time t of certain dynamics (91) in interlacing arrays, see Proposition 6.1 and Remark 6.3. Finally, a more subtle connection between the signed measure (45) and the dynamics (91) exists beyond these special initial conditions, see the discussion around Proposition 6.7 (which also gives an alternative approach to Proposition 4.4).

Fig. 1

A visualisation of an element \((z_i^{(n)})_{1\le i \le n \le N}\) in \({\mathcal {D}}_N\). The n-th row corresponds to the vector \((z_1^{(n)},\dots ,z_n^{(n)})\in (l,r)^n\), which by virtue of the inequalities in (46) actually belongs to \({\mathbb {W}}_n^{\uparrow ,\circ }\). We observe from the inequalities in (46) that the coordinates strictly increase as we go down the diagonals starting from the top row and going to the right edge while they increase (not necessarily strictly) as we go up the diagonals starting from the left edge and going towards the top row as illustrated in the figure

Proposition 4.4

Let \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^{\uparrow }\). The distribution in \({\mathbb {W}}_N^{\uparrow }\) at time t of the interacting particle system (3) starting from x, namely,

$$\begin{aligned} \det \left( \left[ \partial ^{j-i}_{\cdot }e^{t{\textsf{L}}^{(i)}}(x_i,\cdot )\right] \left( y^{(j)}_j\right) \right) _{i,j=1}^Ndy_1^{(1)}dy_2^{(2)}\cdots dy_N^{(N)} \end{aligned}$$
(44)

is the marginal in the \((y_1^{(1)},y_2^{(2)},\dots ,y_N^{(N)})\) variables of the signed measure

$$\begin{aligned} \left( -1\right) ^{\frac{N(N-1)}{2}}e^{-t\sum _{k=1}^{N-1}k{\textsf{c}}^{(k)}}\det \left( \partial _{x_i}^{N-i} e^{t{\textsf{L}}}\left( x_i,y_j^{(N)}\right) \right) _{i,j=1}^N {\textbf{1}}_{\left( (y^{(1)},\dots ,y^{(N)})\in {\mathcal {D}}_N\right) }\prod _{1\le i \le n \le N}dy_i^{(n)}, \end{aligned}$$
(45)

where the set \({\mathcal {D}}_N\) is given by, see Fig. 1 for an illustration,

$$\begin{aligned} {\mathcal {D}}_N=\left\{ z_i^{(n)}\in (l,r); i=1,\dots ,n; n=1,\dots ,N:z_i^{(n+1)}<z_i^{(n)}\le z_{i+1}^{(n+1)}\right\} . \end{aligned}$$
(46)

Proof

Looking at the m-th column of the matrix in the determinant in (44) we claim that the i-th entry can be written as (where the second equality is simply writing the \(\partial ^{-1}\) notation out):

$$\begin{aligned}&\partial _{y_m^{(m)}}^{m-i}e^{t{\textsf{L}}^{(i)}}\left( x_i,y_m^{(m)}\right) =(-1)^{N-i} e^{-t\sum _{k=i}^{N-1}{\textsf{c}}^{(k)}}\partial ^{-(N-m)}_{y^{(m)}_m}\partial _{x_i}^{N-i} e^{t{\textsf{L}}^{(N)}}\left( x_i,y_m^{(m)}\right) \\&=(-1)^{N-i}e^{-t\sum _{k=i}^{N-1}{\textsf{c}}^{(k)}}\int _l^{y_{m}^{(m)}}\int _l^{y_m^{(m+1)}}\cdots \int _l^{y_m^{(N-1)}}\partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_m^{(N)}\right) dy_m^{(N)}\cdots dy_m^{(m+1)}, \end{aligned}$$

for some dummy variables \(y_m^{(m+1)},\dots ,y_m^{(N)}\) so that \(y_m^{(N)}\le y_m^{(N-1)}\le \cdots \le y_m^{(m+1)}\le y_m^{(m)}\). This can be seen as follows. By repeated use of the identity (42) and Lemma 4.1, we have

$$\begin{aligned} e^{t{\textsf{L}}^{(i)}}\left( x_i,y_m^{(m)}\right)&=-e^{-{\textsf{c}}^{(i)}t}\partial _{y_m^{(m)}}^{-1} \partial _{x_i}e^{t{\textsf{L}}^{(i+1)}}\left( x_i,y_m^{(m)}\right) =\cdots \\&=(-1)^{N-i}e^{-t\sum _{k=i}^{N-1} {\textsf{c}}^{(k)}}\partial _{y_m^{(m)}}^{-(N-i)}\partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_m^{(m)}\right) . \end{aligned}$$

Then, applying \(\partial _{y_m^{(m)}}^{m-i}\) to both sides of the identity above gives the desired claim (observe that \(-(N-i)\le 0\) and hence \(\partial ^{m-i}\partial ^{-(N-i)}=\partial ^{-(N-m)}\) from Lemma 4.1).

Hence, by multilinearity of the determinant, we can take the multiple integrals outside the determinant and we can write display (44) (suppressing \(\prod _{n=1}^Ndy_n^{(n)}\)) as follows:

$$\begin{aligned} \left( -1\right) ^{\frac{N(N-1)}{2}}e^{-t\sum _{k=1}^{N-1}k{\textsf{c}}^{(k)}}\int _{\widehat{{\mathcal {D}}}_N }\det \left( \partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_j^{(N)}\right) \right) _{i,j=1}^N \prod _{1\le i<n \le N}dy_i^{(n)}, \end{aligned}$$
(47)

where \(\widehat{{\mathcal {D}}}_N=\widehat{{\mathcal {D}}}_N\left( y_1^{(1)},\dots ,y_N^{(N)}\right) \) is given by

$$\begin{aligned} \widehat{{\mathcal {D}}}_N=\left\{ z_i^{(n)}\in (l,r): z_n^{(n)}=y_n^{(n)}; z_i^{(n+1)}\le z_i^{(n)}\right\} . \end{aligned}$$

Since the determinant is antisymmetric in the \(y_j^{(N)}\)-variables we can then use Lemma 5.6 in [101] to write display (47) as

$$\begin{aligned} \left( -1\right) ^{\frac{N(N-1)}{2}}e^{-t\sum _{k=1}^{N-1}k{\textsf{c}}^{(k)}}\int _{\widetilde{{\mathcal {D}}}_N }\det \left( \partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_j^{(N)}\right) \right) _{i,j=1}^N \prod _{1\le i<n \le N}dy_i^{(n)}, \end{aligned}$$

where \(\widetilde{{\mathcal {D}}}_N=\widetilde{{\mathcal {D}}}_N\left( y_1^{(1)},\dots ,y_N^{(N)}\right) \) is given by

$$\begin{aligned} \widetilde{{\mathcal {D}}}_N=\left\{ z_i^{(n)}\in (l,r): z_n^{(n)}=y_n^{(n)}; z_i^{(n+1)}\le z_i^{(n)}<z_{i+1}^{(n+1)}\right\} . \end{aligned}$$

Finally, since we are integrating a continuous function, the integral remains unchanged if we change a strict inequality “<” for an inequality “\(\le \)” which, after recalling that \({\textsf{L}}^{(N)}\equiv {\textsf{L}}\), concludes the proof. \(\square \)

Remark 4.5

We note that the set \({\mathcal {D}}_N\) is very closely related to the notion of interlacing arrays \(\mathbb{I}\mathbb{A}_N\) defined in (90). The only difference is that for \({\mathcal {D}}_N\) we require some of the inequalities, the ones appearing as we move down the diagonals from the top row to the right edge, as depicted in Fig. 1, to be strict (while for interlacing arrays they do not need to be strict). Since the measures we study have continuous densities, considering such measures over \({\mathcal {D}}_N\) or over interlacing arrays (90) is essentially one and the same. The main reason for using \({\mathcal {D}}_N\) is that its indicator function can be written as a product of determinants of special form (and we need the strict inequalities for this to be true), see Lemma 4.7 below, which will allow us to apply the Eynard-Mehta theorem [17, 24, 61] in the sequel.

An analogous result holds for the particle system (5), making use of the identity (43) now instead.

Proposition 4.6

Let \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^{\downarrow }\). The distribution in \({\mathbb {W}}_N^{\downarrow }\) at time t of the interacting particle system (5) starting from x, namely,

$$\begin{aligned} \det \left( \left[ {\hat{\partial }}^{j-i}_{\cdot }e^{t{\textsf{L}}^{(i)}}(x_i,\cdot )\right] \left( y^{(j)}_1\right) \right) _{i,j=1}^Ndy_1^{(1)}dy_1^{(2)}\cdots dy_1^{(N)} \end{aligned}$$
(48)

is the marginal in the \((y_1^{(1)},y_1^{(2)},\dots ,y_1^{(N)})\) variables of the signed measure in (45), with \(x\in {\mathbb {W}}_N^{\downarrow }\) instead.

Proof

We argue as in the proof of Proposition 4.4. Looking at the m-th column of the matrix in the determinant in (48) we claim that the i-th entry can be written as:

$$\begin{aligned} {\hat{\partial }}_{y_m^{(m)}}^{m-i}e^{t{\textsf{L}}^{(i)}}\left( x_i,y_m^{(m)}\right)&=(-1)^{N-i} e^{-t\sum _{k=i}^{N-1}{\textsf{c}}^{(k)}}{\hat{\partial }}^{-(N-m)}_{y^{(m)}_m}\partial _{x_i}^{N-i} e^{t{\textsf{L}}^{(N)}}\left( x_i,y_m^{(m)}\right) \\&=(-1)^{i+m}e^{-t\sum _{k=i}^{N-1}{\textsf{c}}^{(k)}}\int _{y_1^{(m)}}^r \cdots \int _{y_{N-m}^{(N-1)}}^{r} \partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_{N-m+1}^{(N)}\right) dy_{N-m+1}^{(N)}\cdots dy_2^{(m+1)}, \end{aligned}$$

for some dummy variables \(y_{2}^{(m+1)},\dots ,y_{N-m+1}^{(N)}\) so that \(y_1^{(m)}\le y_2^{(m+1)}\le \cdots \le y_{N-m}^{(N-1)} \le y_{N-m+1}^{(N)}\). As before, the claim can be seen by repeatedly using the identity (43) and Lemma 4.1:

$$\begin{aligned} e^{t{\textsf{L}}^{(i)}}\left( x_i,y_m^{(m)}\right)&=-e^{-{\textsf{c}}^{(i)}t}{\hat{\partial }}_{y_m^{(m)}}^{-1} \partial _{x_i}e^{t{\textsf{L}}^{(i+1)}}\left( x_i,y_m^{(m)}\right) \\&=\cdots =(-1)^{N-i}e^{-t\sum _{k=i}^{N-1} {\textsf{c}}^{(k)}}{\hat{\partial }}_{y_m^{(m)}}^{-(N-i)}\partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_m^{(m)}\right) , \end{aligned}$$

and then applying \({\hat{\partial }}_{y_m^{(m)}}^{m-i}\) to both sides. Thus, by multilinearity of the determinant, we can write display (48) (suppressing \(\prod _{n=1}^Ndy_n^{(n)}\)) as follows:

$$\begin{aligned} e^{-t\sum _{k=1}^{N-1}k{\textsf{c}}^{(k)}}\int _{\overline{{\mathcal {D}}}_N }\det \left( \partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_{N-j+1}^{(N)}\right) \right) _{i,j=1}^N \prod _{2\le i\le n \le N}dy_i^{(n)}\end{aligned}$$
(49)
$$\begin{aligned} =\left( -1\right) ^{\frac{N(N-1)}{2}}e^{-t\sum _{k=1}^{N-1}k{\textsf{c}}^{(k)}}\int _{\overline{{\mathcal {D}}}_N }\det \left( \partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}\left( x_i,y_j^{(N)}\right) \right) _{i,j=1}^N \prod _{2\le i\le n \le N}dy_i^{(n)}, \end{aligned}$$
(50)

where \(\overline{{\mathcal {D}}}_N=\overline{{\mathcal {D}}}_N\left( y_1^{(1)},\dots ,y_1^{(N)}\right) \) (not to be confused with the closure of \({\mathcal {D}}_N\)) is given by

$$\begin{aligned} \overline{{\mathcal {D}}}_N=\left\{ z_i^{(n)}\in (l,r): z_1^{(n)}=y_1^{(n)}; z_i^{(n)}\le z_{i+1}^{(n+1)}\right\} . \end{aligned}$$

Finally, arguing as in the proof of Proposition 4.4, using Lemma 5.6 of [101], we obtain the conclusion. \(\square \)

We will shortly need the following well-known lemma.

Lemma 4.7

We have the representation

$$\begin{aligned} {\textbf{1}}_{\left( (y^{(1)},\dots ,y^{(N)})\in {\mathcal {D}}_N\right) }=\prod _{n=1}^{N}\det \left( \partial ^{-1} \left( y_i^{(n-1)},y_j^{(n)}\right) \right) _{i,j=1}^n, \end{aligned}$$

with the convention that the \(y_n^{(n-1)}\) are some “virtual” variables so that by definition \(\partial ^{-1}(y_n^{(n-1)},z)\equiv 1\).

Proof

This follows from the well-known linear algebraic fact,

$$\begin{aligned} {\textbf{1}}_{(y_1<x_1\le y_2<x_2\le \cdots \le y_n<x_n\le y_{n+1})}=\det \left( \partial ^{-1}\left( x_i,y_j\right) \right) _{i,j=1}^{n+1}, \end{aligned}$$

with the convention \(\partial ^{-1}(x_{n+1},z)\equiv 1\), for any z. \(\square \)
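This linear algebraic fact is easy to test numerically: with the kernel \(\partial ^{-1}(x,y)={\textbf{1}}_{y\le x}\) of \(\partial ^{-1}\) and the last row of the matrix set identically to 1 (the virtual-variable convention), the determinant should reproduce the interlacing indicator for generic (tie-free) ordered points. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def det_indicator(x, y):
    # entries partial^{-1}(x_i, y_j) = 1_{y_j <= x_i}; the last, "virtual",
    # row is identically 1, following the convention of Lemma 4.7
    n = len(x)
    M = np.ones((n + 1, n + 1))
    for i in range(n):
        M[i] = (y <= x[i])
    return np.linalg.det(M)

def interlaced(x, y):
    # indicator of y_1 < x_1 < y_2 < ... < x_n < y_{n+1}; for generic
    # (tie-free) points strict vs non-strict inequalities do not matter
    return float(all(y[i] < x[i] < y[i + 1] for i in range(len(x))))

n, ok = 3, True
for _ in range(500):
    y = np.sort(rng.uniform(0.0, 1.0, n + 1))
    x = np.sort(rng.uniform(0.0, 1.0, n))
    ok = ok and np.isclose(det_indicator(x, y), interlaced(x, y))

# a guaranteed interlacing configuration, where the determinant equals 1
hit_one = np.isclose(det_indicator(np.array([0.1, 0.4, 0.8]),
                                   np.array([0.0, 0.3, 0.6, 1.0])), 1.0)
```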

Remark 4.8

We note that we also have the following representation

$$\begin{aligned} {\textbf{1}}_{\left( (y^{(1)},\dots ,y^{(N)})\in {\mathcal {D}}_N\right) }=\prod _{n=1}^{N}\det \left( {\hat{\partial }}^{-1}\left( y_i^{(n-1)},y_j^{(n)}\right) \right) _{i,j=1}^n, \end{aligned}$$

where \({\hat{\partial }}^{-1}\left( x,y\right) =-{\textbf{1}}_{x\le y}\) is the integral kernel of (36) and by definition \({\hat{\partial }}^{-1}\left( y_n^{(n-1)},z\right) \equiv 1\).

To proceed we require a special case of (one of the many variants of) the Eynard-Mehta theorem, see for example Lemma 3.5 in [101]. For the convenience of the reader we give the statement from [101] explicitly.

Proposition 4.9

Assume we have a signed measure (of total mass one) on \((l,r)\times (l,r)^2 \times \cdots \times (l,r)^N\) of the form

$$\begin{aligned} \frac{1}{Z} \prod _{n=1}^N\det \left( \phi \left( y_i^{(n-1)},y_j^{(n)}\right) \right) _{i,j=1}^n \det \left( \Psi _{N-i}^{(N)}\left( y_j^{(N)}\right) \right) _{i,j=1}^N, \end{aligned}$$
(51)

where the \(y_{n}^{(n-1)}\) are some “virtual” variables (in particular we assume \(\phi \left( y_n^{(n-1)},z\right) \) has been defined and is independent of n) and Z is a non-zero normalization constant. Define, for \(n \in {\mathbb {N}}\):

$$\begin{aligned} \phi ^{(n)}(y_1,y_2)=\left( \phi *\cdots *\phi \right) (y_1,y_2), \end{aligned}$$

where \(\phi \) appears n times in the convolutions \(\left( \phi *\psi \right) (y_1,y_2)=\int _l^r\phi (y_1,z)\psi (z,y_2)dz\). Moreover, define for \(n=1,\dots , N-1\), the functions

$$\begin{aligned} \Psi _{n-j}^{(n)}(z)=\left( \phi ^{(N-n)}*\Psi _{N-j}^{(N)}\right) (z), \ \ j=1,\dots ,N. \end{aligned}$$
(52)

For each \(n=1,\dots ,N\), the functions

$$\begin{aligned} \left\{ \phi ^{(n)}\left( y_1^{(0)},z\right) ,\phi ^{(n-1)}\left( y_2^{(1)},z\right) ,\dots ,\phi \left( y_n^{(n-1)},z\right) \right\} \end{aligned}$$
(53)

are linearly independent and generate some n-dimensional space \(V_n\). Moreover, define (uniquely) functions \(\Phi _{n-j}^{(n)}(z)\), \(j=1,\dots ,n\) spanning \(V_n\), with

$$\begin{aligned} \int _l^r\Phi _{n-i}^{(n)}(z)\Psi _{n-j}^{(n)}(z)dz={\textbf{1}}_{(i=j)}, \end{aligned}$$
(54)

for \(1\le i,j\le n\). Finally, assume that \(\phi \left( y_n^{(n-1)},z\right) =c_n\Phi _{0}^{(n)}(z)\), for some \(c_n\ne 0\), for \(n=1,\dots ,N\). Then, the correlation functions of the “signed random point process” induced by (51) are determinantal and moreover the correlation kernel is given by

$$\begin{aligned} K\left[ \left( n_1,y_1\right) ;\left( n_2,y_2\right) \right] =-\phi ^{(n_2-n_1)}\left( y_1,y_2\right) {\textbf{1}}_{(n_2>n_1)}+\sum _{k=1}^{n_2}\Psi _{n_1-k}^{(n_1)}(y_1)\Phi _{n_2-k}^{(n_2)}(y_2). \end{aligned}$$
(55)

The following result proves that there is some determinantal structure associated to the interacting particle systems (3) and (5). However, the correlation kernel at this stage is still given implicitly in terms of the solution of a biorthogonalisation problem which will be solved explicitly in the next subsection.

Theorem 4.10

Consider the interacting particle system \(\left( \left( {\textsf{x}}_1^\uparrow (t),\dots ,{\textsf{x}}_N^\uparrow (t)\right) ;t\ge 0\right) \) evolving according to the dynamics (3) with initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\uparrow \). For any \(t>0\), indices \(1\le n_1<n_2<\cdots <n_M \le N\) and locations \(z=(z_1,\dots ,z_M)\in (l,r)^M\), we have

$$\begin{aligned} {\mathbb {P}}\left( {\textsf{x}}^\uparrow _{n_j}(t)\le z_j, j=1,\dots ,M\right) = \det \left( {\textbf{I}} -\chi _z^+K\chi _z^+\right) _{L^2\left( \left\{ n_1,\dots , n_M\right\} \times (l,r)\right) }, \end{aligned}$$
(56)

where \(\det \) is the Fredholm determinant, with

$$\begin{aligned} K\left[ \left( n_1,y_1\right) ;\left( n_2,y_2\right) \right]&=-\partial ^{-(n_2-n_1)}\left( y_1,y_2\right) {\textbf{1}}_{(n_2>n_1)}+\sum _{k=1}^{n_2}\mathsf {\Psi }_{n_1-k}^{(n_1)}(y_1)\Phi _{n_2-k}^{(n_2)}(y_2), \end{aligned}$$
(57)

where the functions \(\mathsf {\Psi }\) are given by:

$$\begin{aligned} \mathsf {\Psi }_{N-i}^{(N)}(w)&=\partial _{x_i}^{N-i}e^{t{\textsf{L}}}(x_i,w) \ \text {and} \ \mathsf {\Psi }_{n-i}^{(n)}(w)=\left( \partial ^{-(N-n)}\mathsf {\Psi }_{N-i}^{(N)}\right) (w),\nonumber \\ \ n&=1,\dots , N-1, i=1,\dots , N, \end{aligned}$$
(58)

and the functions \(\Phi _i^{(n)}\), \(i=0,\dots ,n-1\), are uniquely determined by the following conditions:

  1. \(\text {span}\left\{ \Phi _0^{(n)}(w),\dots ,\Phi _{n-1}^{(n)}(w) \right\} ={\mathfrak {P}}_{\le n-1}\), the space of polynomials of degree at most \(n-1\),

  2. The biorthogonality relations \(\int _l^r\Phi _i^{(n)}(w)\mathsf {\Psi }_{j}^{(n)}(w)dw={\textbf{1}}_{i=j}\), \(0\le i,j\le n-1\), hold.

Moreover, if we consider the interacting particle system \(\left( \left( {\textsf{x}}_1^\downarrow (t),\dots ,{\textsf{x}}_N^\downarrow (t)\right) ;t\ge 0\right) \) evolving according to the dynamics (5) with initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\downarrow \), we have

$$\begin{aligned} {\mathbb {P}}\left( {\textsf{x}}^\downarrow _{n_j}(t)\ge z_j, j=1,\dots ,M\right) = \det \left( {\textbf{I}}-\chi _z^-K\chi _z^-\right) _{L^2\left( \left\{ n_1,\dots , n_M\right\} \times (l,r)\right) }, \end{aligned}$$
(59)

where K is constructed as above, with \(x\in {\mathbb {W}}_N^\downarrow \) instead.

Proof

The proof is standard given all the preliminary results. We apply Proposition 4.9, noting Lemma 4.7, to the measure (45) (which after symmetrization is exactly of the form (51)). By Propositions 4.4 and 4.6, this measure has the distributions at fixed time \(t>0\) of the interacting particle systems (3) and (5) as marginals, in the right-most \((y_1^{(1)},y_2^{(2)},\dots ,y_N^{(N)})\) or left-most \((y_1^{(1)},y_1^{(2)},\dots ,y_1^{(N)})\) coordinates respectively.

In applying Proposition 4.9 we first identify the \(\phi (y_1,y_2)\) function with \(\partial ^{-1}(y_1,y_2)\) so that in particular \(\phi ^{(n)}(y_1,y_2)=\partial ^{-n}(y_1,y_2)\), with the convention that \(\phi (y_n^{(n-1)},y)\equiv 1\). Then, we can identify the \(\Psi ^{(n)}_{n-i}\) functions with the functions \(\mathsf {\Psi }_{n-i}^{(n)}\) given in (58) by virtue of (52). Moreover, as in [44, 45, 78, 101] (we have the same \(\phi \) function) the space \(V_n\) from Proposition 4.9 is given by

$$\begin{aligned} V_n=\text {span}\left\{ y^{n-1},\dots ,y,1\right\} ={\mathfrak {P}}_{\le n-1}. \end{aligned}$$
(60)

Now, we note that, by virtue of display (62) in Proposition 4.12 below and the fact that \(e^{t{\textsf{L}}^{(i)}}(x_i,\cdot )\) integrates to 1 in \((l,r)\), we have (since we can take the \(\partial ^{k}_{x_i}\) derivatives outside the integral, which is identically 1):

$$\begin{aligned} \int _{l}^r\mathsf {\Psi }_0^{(n)}(w)dw=(-1)^{N-n}e^{t\sum _{j=n}^{N-1}{\textsf{c}}^{(j)}}, \ \ \int _l^r \mathsf {\Psi }_j^{(n)}(w)dw=0, \ \ \text { for } j=1,\dots ,n-1, \end{aligned}$$

which together with (54) leads to \(\Phi _0^{(n)}(y)=(-1)^{n-N}e^{-t\sum _{j=n}^{N-1}{\textsf{c}}^{(j)}}\). Hence, the final assumption \(\phi (y_n^{(n-1)},y)=c_n\Phi _{0}^{(n)}(y)\) of Proposition 4.9 is satisfied.

We can thus write down the correlation kernel associated to the signed measure (45) as in (57), with the \(\mathsf {\Psi }^{(n)}_{i}(z)\) functions given by (58) and the \(\Phi _j^{(n)}(z)\) functions determined by the conditions in the statement of the proposition. Finally, the Fredholm determinant identity (56) is a standard consequence of looking at an edge marginal (either right-most or left-most particles) of a determinantal point process, see for example [17, 61, 101]. To conclude, the argument for the interacting particle system (5) and identity (59) is completely analogous by using Proposition 4.6 now instead. \(\square \)

Remark 4.11

Observe that we could have used in the proof above the representation for the \(\phi \)-function from Remark 4.8, which would have given a slightly different correlation kernel (giving rise, of course, to the same “signed” point process).

4.2 Biorthogonalization and proof of Theorems 1.3 and 1.5

We begin by giving two representations of \(\mathsf {\Psi }_{n-i}^{(n)}\): one valid for any \(i=1,\dots ,N\), which we have already used in the proof of Theorem 4.10 above and will also use in computing the formula for the correlation kernel \({\mathfrak {K}}_t\), and one valid only for \(i=1,\dots ,n\), which is key in finding the biorthogonal polynomials below.

Proposition 4.12

Let \(n=1,\dots ,N\) be arbitrary. Then, we have the following representations for the \(\mathsf {\Psi }\)-functions from (58),

$$\begin{aligned} \mathsf {\Psi }_{n-i}^{(n)}(z)&=(-1)^{N-n}e^{t\sum _{j=n}^{N-1}{\textsf{c}}^{(j)}}\partial _{x_i}^{n-i}e^{t{\textsf{L}}^{(n)}}(x_i,z), \ \ \text { for } i=1,\dots ,n, \end{aligned}$$
(61)
$$\begin{aligned} \mathsf {\Psi }_{n-i}^{(n)}(z)&=(-1)^{N-i}e^{t\sum _{j=i}^{N-1}{\textsf{c}}^{(j)}}\partial _z^{n-i}e^{t{\textsf{L}}^{(i)}}(x_i,z), \ \ \text { for } i=1,\dots ,N. \end{aligned}$$
(62)

Proof

Recall that by definition \(\mathsf {\Psi }_{n-i}^{(n)}(z)=\left( \partial ^{-(N-n)}\mathsf {\Psi }_{N-i}^{(N)}\right) (z)\). Now, observe that for any choice of \(i,n=1,\dots , N\), since \(-(N-i)\le 0\), we have \(\partial _{z}^{-(N-n)}=\partial _z^{n-i}\partial _z^{-(N-i)}\) from Lemma 4.1. Hence, by making repeated use of equation (42) and Lemma 4.1:

$$\begin{aligned} \mathsf {\Psi }_{n-i}^{(n)}(z)=\partial _z^{-(N-n)}\partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}(x_i,z)&=\partial _z^{n-i}\partial _z^{-(N-i)}\partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}(x_i,z)\\&=(-1)^{N-i}e^{t\sum _{j=i}^{N-1}{\textsf{c}}^{(j)}}\partial _{z}^{n-i}e^{t{\textsf{L}}^{(i)}}(x_i,z), \end{aligned}$$

giving (62). To prove the other identity, suppose \(i=1,\dots ,n\), so that in particular \(\partial _{x_i}^{N-i}=\partial _{x_i}^{n-i}\partial _{x_i}^{N-n}\) from Lemma 4.1. Then, again by repeated use of equation (42) and Lemma 4.1 we can compute

$$\begin{aligned} \mathsf {\Psi }_{n-i}^{(n)}(z)&=\partial _z^{-(N-n)}\partial _{x_i}^{N-i}e^{t{\textsf{L}}^{(N)}}(x_i,z) =\partial _z^{-(N-n)}\partial _{x_i}^{n-i}\partial _{x_i}^{N-n}e^{t{\textsf{L}}^{(N)}}(x_i,z)\\&=\partial _{x_i}^{n-i}\partial _{z}^{-(N-n)}\partial _{x_i}^{N-n}e^{t{\textsf{L}}^{(N)}}(x_i,z) =(-1)^{N-n}e^{t\sum _{j=n}^{N-1}{\textsf{c}}^{(j)}}\partial _{x_i}^{n-i}e^{t{\textsf{L}}^{(n)}}(x_i,z), \end{aligned}$$

giving (61). \(\square \)

We are now led to the definition of certain polynomials \(\mathsf {\Phi }_i^{(n)}(z)\). The choice of notation is clearly suggestive. We will shortly identify \(\mathsf {\Phi }_i^{(n)}\) with the polynomials \(\Phi _i^{(n)}\) from Theorem 4.10. The computation in Proposition 4.14 below, albeit very simple, is the crux of the argument.

Definition 4.13

Let \(x=(x_1,\dots ,x_N) \in (l,r)^N\). For \(n=1,\dots ,N\) and \(i=0,1,\dots ,n-1\) define the following polynomials (we drop dependence on x in the notation):

$$\begin{aligned} \mathsf {\Phi }_i^{(n)}(z)=(-1)^{i+n-N}e^{-t\sum _{j=n}^{N-1}{\textsf{c}}^{(j)}}e^{-t{\textsf{L}}^{(n)}}{\textsf{q}}_i^{(n)}(z;x), \end{aligned}$$
(63)

where recall that the polynomial \({\textsf{q}}_i^{(n)}(z)={\textsf{q}}_i^{(n)}(z;x)\) was defined in Definition 1.2. Observe that \(\mathsf {\Phi }_0^{(n)}(z)\equiv (-1)^{n-N}e^{-t\sum _{j=n}^{N-1}{\textsf{c}}^{(j)}}\).

Proposition 4.14

Let \(n=1,\dots ,N\). Then, the functions \(\mathsf {\Psi }_i^{(n)}\) and polynomials \(\mathsf {\Phi }_j^{(n)}\) are biorthogonal with respect to the Lebesgue measure on \((l,r)\):

$$\begin{aligned} \int _{l}^r \mathsf {\Psi }_i^{(n)}(z)\mathsf {\Phi }_j^{(n)}(z)dz={\textbf{1}}_{(i=j)}, \end{aligned}$$
(64)

for \(i,j=0,1,\dots ,n-1\).

Proof

Using the representation (61), we can compute by virtue of Lemma 3.4, after taking the derivative \(\partial _{x_{n-i}}^i\) outside the integral,

$$\begin{aligned} \int _{l}^r \mathsf {\Psi }_i^{(n)}(z)\mathsf {\Phi }_j^{(n)}(z)dz&=\int _l^r\partial _{x_{n-i}}^i e^{t{\textsf{L}}^{(n)}}(x_{n-i},z)(-1)^ie^{-t{\textsf{L}}^{(n)}}{\textsf{q}}_j^{(n)}(z;x)dz\\&=(-1)^i\partial _z^{i}{\textsf{q}}_j^{(n)}(z;x)\big |_{z=x_{n-i}}={\textbf{1}}_{(i=j)}, \end{aligned}$$

for any \(i,j=0,1,\dots ,n-1\). \(\square \)
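As a purely illustrative numerical sanity check (not part of the proof), consider the simplest Brownian specialisation: all \({\textsf{L}}^{(k)}=\frac{1}{2}\partial ^2\), \({\textsf{c}}^{(k)}\equiv 0\), \(t=1\) and \(x=(0,\dots ,0)\). In this case, as recalled in Sect. 4.3 below, \({\textsf{q}}_k^{(n)}(z;x)=(-1)^k\frac{z^k}{k!}\) and, up to signs \((-1)^{N-n}\) and \((-1)^{n-N}\) that cancel in the product, one can check that \(\mathsf {\Psi }_i^{(n)}(z)=He_i(z)\varphi (z)\) and \(\mathsf {\Phi }_j^{(n)}(z)=He_j(z)/j!\), with \(He_n\) the probabilists' Hermite polynomials and \(\varphi \) the standard Gaussian density. The sketch below verifies (64) by quadrature under these assumptions:

```python
import math

def hermite_he(n, x):
    # probabilists' Hermite polynomials via the recurrence He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)
    a, b = 1.0, x
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, x * b - k * a
    return b

def biorthogonal_inner_product(i, j, h=1e-3, cutoff=12.0):
    # midpoint-rule quadrature of Psi_i(z)*Phi_j(z) over [-cutoff, cutoff], where in this
    # special case Psi_i(z) = He_i(z)*phi(z) and Phi_j(z) = He_j(z)/j! (phi: standard Gaussian)
    n_steps = int(2 * cutoff / h)
    total = 0.0
    for m in range(n_steps):
        z = -cutoff + (m + 0.5) * h
        phi = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
        total += hermite_he(i, z) * phi * hermite_he(j, z) / math.factorial(j) * h
    return total

# the matrix of inner products is, numerically, the identity as in (64)
for i in range(4):
    for j in range(4):
        assert abs(biorthogonal_inner_product(i, j) - (1.0 if i == j else 0.0)) < 1e-4
```

The check reduces to the classical orthogonality \(\int He_iHe_j\varphi =j!{\textbf{1}}_{(i=j)}\).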

We can finally prove our main results.

Proof of Theorem 1.3

We apply Theorem 4.10. Observe that, since by Proposition 3.1, \(\mathsf {\Phi }_j^{(n)}\) is a polynomial of degree j, we have

$$\begin{aligned} \text {span}\left\{ \mathsf {\Phi }_0^{(n)}(y),\dots ,\mathsf {\Phi }_{n-1}^{(n)}(y)\right\} ={\mathfrak {P}}_{\le n-1}, \end{aligned}$$

and moreover by virtue of the biorthogonality relation (64) in Proposition 4.14 we can then identify the \(\Phi _j^{(n)}\) functions in Theorem 4.10 with \(\mathsf {\Phi }_j^{(n)}\). Let us denote the correlation kernel K from Theorem 4.10 by \({\mathfrak {K}}_t\). It remains to simplify the sum term in (57) as follows, recalling the representation (62) for \(\mathsf {\Psi }_{n-i}^{(n)}\), by observing that

$$\begin{aligned} \sum _{k=1}^{n_2}\mathsf {\Psi }_{n_1-k}^{(n_1)}(y_1)\mathsf {\Phi }_{n_1-k}^{(n_2)}(y_2) =\partial _{y_1}^{n_1}\left[ \sum _{k=1}^{n_2}e^{t\sum _{j=k}^{n_2-1}{\textsf{c}}^{(j)}} \partial _{y_1}^{-k}e^{t{\textsf{L}}^{(k)}}(x_k,y_1){\textsf{q}}_{n_2-k}^{(n_2)}(y_2;x)\right] e^{-t{\textsf{L}}_{y_2}^{(n_2)}}, \end{aligned}$$

which gives the form of the function \({\mathfrak {G}}_{n_2}(y_1,y_2)\) and concludes the proof. \(\square \)

Proof of Theorem 1.5

The proof is word for word the same as that of Theorem 1.3 above. \(\square \)

4.3 Connection to orthogonal polynomials

We now briefly explain how one can recover from \({\mathfrak {K}}_t\) the standard forms (directly related to random matrices as we now point out) of the correlation kernel for the most classical initial condition \(x=(0,\dots ,0)\) in the Brownian and squared Bessel cases. Using the results of Section 6 it can be shown that the correlation kernel \({\mathfrak {K}}_t\) equivalently governs the distribution, at a fixed time \(t>0\), of some stochastic dynamics (91) in an interlacing array (the measure (45) coincides with the measure (93) in Proposition 6.1 as \(x\rightarrow (0,\dots ,0)\), see also Remark 6.3). This is known, see Remark 6.4, to match the distribution of the eigenvalues of consecutive principal submatrices of the Gaussian and Laguerre unitary ensembles respectively, see [13, 100]. The correlation kernels can be computed [2, 43, 49, 62, 63] in terms of Hermite and Laguerre polynomials respectively.

To obtain these formulae from \({\mathfrak {K}}_t\) in the squared Bessel/Laguerre case one needs the following ingredients. Recall that, as observed in (9), for \(x=(0,\dots ,0)\), \({\textsf{q}}_k^{(n)}(z;x)=(-1)^k\frac{z^k}{k!}\). We then have the following identity (obtained by direct computation or see [15]):

$$\begin{aligned} e^{-t{\mathcal {B}}^{(\theta )}}z^n=n!(-2t)^nL_n^{(\frac{\theta }{2}-1)}\left( \frac{z}{2t}\right) , \end{aligned}$$
(65)

where \(L_n^{(\alpha )}(z)=\sum _{k=0}^n(-1)^k \left( {\begin{array}{c}n+\alpha \\ n-k\end{array}}\right) \frac{z^k}{k!}\) is the Laguerre polynomial. Moreover, we have the identity:

$$\begin{aligned} \partial _z^ne^{t{\mathcal {B}}^{(\theta )}}(z,y)\big |_{z=0}=\frac{n!(-1)^n\Gamma \left( \frac{\theta }{2}\right) }{(2t)^n\Gamma \left( \frac{\theta }{2}+n\right) }L_n^{\left( \frac{\theta }{2}-1\right) }\left( \frac{y}{2t}\right) e^{t{\mathcal {B}}^{(\theta )}}(0,y). \end{aligned}$$
(66)

This can be proven as follows. We make use of the formula

$$\begin{aligned} \partial _z^n e^{t{\mathcal {B}}^{(\theta )}}(z,y)=\frac{1}{(2t)^n}\sum _{k=0}^n e^{t{\mathcal {B}}^{(\theta +2k)}}(z,y)\left( {\begin{array}{c}n\\ k\end{array}}\right) (-1)^{n-k}, \end{aligned}$$

obtained by iterating the following relation (itself proven by direct computation using the explicit formula (20) and basic properties of derivatives of Bessel functions):

$$\begin{aligned} \partial _z e^{t{\mathcal {B}}^{(\theta )}}(z,y)=\frac{1}{2t}\left[ e^{t{\mathcal {B}}^{(\theta +2)}}(z,y) -e^{t{\mathcal {B}}^{(\theta )}}(z,y)\right] \end{aligned}$$

and recalling that, see for example [87],

$$\begin{aligned} e^{t{\mathcal {B}}^{(\theta )}}(0,y)=\frac{1}{(2t)^\frac{\theta }{2}\Gamma \left( \frac{\theta }{2}\right) } y^{\frac{\theta }{2}-1}e^{-\frac{y}{2t}}. \end{aligned}$$
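Both the one-step derivative relation and this boundary formula can be sanity-checked numerically, assuming the standard form of the squared Bessel transition density \(e^{t{\mathcal {B}}^{(\theta )}}(z,y)=\frac{1}{2t}(y/z)^{\frac{\theta }{4}-\frac{1}{2}}e^{-(z+y)/2t}I_{\theta /2-1}(\sqrt{zy}/t)\) (our reading of (20); an assumption of this sketch). The modified Bessel function \(I_\nu \) is implemented via its power series:

```python
import math

def bessel_iv(nu, x, terms=40):
    # modified Bessel function I_nu via its power series (adequate for moderate x)
    return sum((x / 2.0) ** (2 * k + nu) / (math.factorial(k) * math.gamma(k + nu + 1.0))
               for k in range(terms))

def besq_kernel(theta, t, z, y):
    # standard squared Bessel transition density, our reading of e^{t B^(theta)}(z, y)
    nu = theta / 2.0 - 1.0
    return (1.0 / (2.0 * t)) * (y / z) ** (nu / 2.0) * \
        math.exp(-(z + y) / (2.0 * t)) * bessel_iv(nu, math.sqrt(z * y) / t)

theta, t, z, y, h = 3.0, 0.7, 1.3, 2.1, 1e-5

# one-step relation: d/dz e^{tB^(theta)}(z,y) = (1/2t)[e^{tB^(theta+2)} - e^{tB^(theta)}](z,y)
lhs = (besq_kernel(theta, t, z + h, y) - besq_kernel(theta, t, z - h, y)) / (2.0 * h)
rhs = (besq_kernel(theta + 2.0, t, z, y) - besq_kernel(theta, t, z, y)) / (2.0 * t)
assert abs(lhs - rhs) < 1e-6 * abs(rhs)

# boundary formula: e^{tB^(theta)}(0, y) = y^{theta/2-1} e^{-y/(2t)} / ((2t)^{theta/2} Gamma(theta/2))
entrance = y ** (theta / 2.0 - 1.0) * math.exp(-y / (2.0 * t)) / \
    ((2.0 * t) ** (theta / 2.0) * math.gamma(theta / 2.0))
assert abs(besq_kernel(theta, t, 1e-9, y) - entrance) < 1e-4 * entrance
```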

Putting everything together gives the explicit form of the kernel in terms of Laguerre polynomials. The situation is very similar in the Brownian case using classical identities, analogues of (65), (66), that relate the heat kernel, \(e^{t\frac{1}{2}\partial ^2}(z,y)\), to the Hermite polynomials:

$$\begin{aligned} \partial _z^{n}e^{t\frac{1}{2}\partial ^2}(z,y)=t^{-\frac{n}{2}}H_n\left( \frac{y-z}{\sqrt{t}}\right) e^{t\frac{1}{2}\partial ^2}(z,y), \ \ e^{-t\frac{1}{2}\partial ^2}z^n=t^{\frac{n}{2}}H_n\left( \frac{z}{\sqrt{t}}\right) , \end{aligned}$$

where \(H_n(z)=n!\sum _{m=0}^{\lfloor \frac{n}{2} \rfloor }\frac{(-1)^m}{m!(n-2m)!}\frac{z^{n-2m}}{2^m}\) is the Hermite polynomial.
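The first of these identities can be checked numerically against finite differences of the Gaussian heat kernel; the sketch below (an illustration only) implements \(H_n\) via the explicit sum above and cross-checks it against the three-term recurrence \(H_{n+1}(z)=zH_n(z)-nH_{n-1}(z)\):

```python
import math

def hermite_sum(n, z):
    # explicit formula H_n(z) = n! * sum_m (-1)^m / (m! (n-2m)! 2^m) z^{n-2m}
    return math.factorial(n) * sum(
        (-1) ** m / (math.factorial(m) * math.factorial(n - 2 * m) * 2 ** m) * z ** (n - 2 * m)
        for m in range(n // 2 + 1))

def heat_kernel(t, z, y):
    return math.exp(-(y - z) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

# explicit sum agrees with the recurrence H_{n+1}(z) = z H_n(z) - n H_{n-1}(z)
assert abs(hermite_sum(4, 1.7) - (1.7 * hermite_sum(3, 1.7) - 3 * hermite_sum(2, 1.7))) < 1e-9

t, z, y, h = 0.8, 0.4, 1.1, 1e-3
u = (y - z) / math.sqrt(t)

# d_z^n p_t(z,y) = t^{-n/2} H_n((y-z)/sqrt(t)) p_t(z,y), checked for n = 1, 2 by finite differences
d1 = (heat_kernel(t, z + h, y) - heat_kernel(t, z - h, y)) / (2 * h)
d2 = (heat_kernel(t, z + h, y) - 2 * heat_kernel(t, z, y) + heat_kernel(t, z - h, y)) / h ** 2
assert abs(d1 - hermite_sum(1, u) / t ** 0.5 * heat_kernel(t, z, y)) < 1e-5
assert abs(d2 - hermite_sum(2, u) / t * heat_kernel(t, z, y)) < 1e-4
```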

4.4 Proof of Theorem 1.6

In this subsection we prove Theorem 1.6. The lemma below is the key ingredient in the proof.

Lemma 4.15

Let \(\theta \ge 2\) and let \(\delta _x\) denote the delta function at x. Then, for any \(k \in {\mathbb {N}}\) and \(x,z\in (0,\infty )\), we have

$$\begin{aligned} \partial _z^{-k}e^{t{\mathcal {B}}^{(\theta )}}(x,z)=\left[ e^{t{\mathcal {B}}^{(4-\theta -2k)}}\partial ^{-k}\delta _x\right] (z). \end{aligned}$$
(67)

Proof

We begin with the following symmetry identity. For \(\gamma \ge 2\) and \(x,z\in (0,\infty )\) we have:

$$\begin{aligned} e^{t{\mathcal {B}}^{(\gamma )}}(x,z)=e^{t{\mathcal {B}}^{(4-\gamma )}}(z,x), \end{aligned}$$
(68)

see Proposition 3 in [52]. Moreover, we have the following identity, for \(x,z\in (0,\infty )\),

$$\begin{aligned} \int _0^z e^{t{\mathcal {B}}^{(\gamma )}}(x,y)dy=\int _x^\infty e^{t{\mathcal {B}}^{(\gamma +2)}}(y,z)dy, \end{aligned}$$
(69)

which can be seen as follows. First note that the identity (42) in the squared Bessel case can be written as \(e^{t{\mathcal {B}}^{(\gamma +2)}}(y,z)=-\partial _z^{-1}\partial _y e^{t{\mathcal {B}}^{(\gamma )}}(y,z)\); then integrate in y from x to \(\infty \) (note that the boundary term at \(\infty \) on the right hand side vanishes). The \(k=1\) case of the proposition then easily follows by using (68) and (69) in the third and second equalities below respectively:

$$\begin{aligned} \partial _z^{-1}e^{t{\mathcal {B}}^{(\theta )}}(x,z)=\int _0^ze^{t{\mathcal {B}}^{(\theta )}}(x,y)dy=\int _{x}^\infty e^{t{\mathcal {B}}^{(\theta +2)}}(y,z)dy&=\int _{x}^\infty e^{t{\mathcal {B}}^{(2-\theta )}}(z,y)dy\\&=\left[ e^{t{\mathcal {B}}^{(2-\theta )}}\partial ^{-1}\delta _x\right] (z). \end{aligned}$$

The general case is analogous, by making repeated use of (69) in the second equality below and then (68) we obtain:

$$\begin{aligned} \partial _z^{-k}e^{t{\mathcal {B}}^{(\theta )}}(x,z)&=\int _0^z\int _0^{y_k}\cdots \int _0^{y_2}e^{t{\mathcal {B}}^{(\theta )}}(x,y_1)dy_1\cdots dy_k\\&=\int _x^\infty \int _{y_k}^{\infty }\cdots \int _{y_2}^{\infty }e^{t{\mathcal {B}}^{(\theta +2k)}}(y_1,z)dy_1\cdots dy_k\\&=\int _x^\infty \int _{y_k}^{\infty }\cdots \int _{y_2}^{\infty }e^{t{\mathcal {B}}^{(4-2k-\theta )}}(z,y_1)dy_1\cdots dy_k\\&=\int _x^{\infty }e^{t{\mathcal {B}}^{(4-\theta -2k)}}(z,y)\frac{(y-x)^{k-1}}{(k-1)!}dy= \left[ e^{t{\mathcal {B}}^{(4-\theta -2k)}}\partial ^{-k}\delta _x\right] (z), \end{aligned}$$

since we note that for a non-negative function f

$$\begin{aligned} \int _x^{\infty }\int _{y_k}^{\infty } \cdots \int _{y_2}^{\infty } f(y_1)dy_1\cdots dy_k= \int _x^{\infty } f(y)\frac{(y-x)^{k-1}}{(k-1)!}dy, \end{aligned}$$

and this completes the proof. \(\square \)
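The last identity is the Cauchy formula for repeated (tail) integration; a quick numerical sketch with \(f(y)=e^{-y}\), for which every iterated tail integral equals \(e^{-x}\), illustrates it (choices of f, grid and cutoff are ours, for illustration):

```python
import math

def iterated_tail_integral(f, x, k, upper=40.0, h=1e-3):
    # k-fold iterated tail integral int_x^inf int_{y_k}^inf ... int_{y_2}^inf f(y_1) dy_1...dy_k,
    # computed on a grid by k successive reverse cumulative (midpoint) sums
    n = int((upper - x) / h)
    vals = [f(x + (m + 0.5) * h) for m in range(n)]
    for _ in range(k):
        running, out = 0.0, [0.0] * n
        for m in range(n - 1, -1, -1):
            running += vals[m] * h
            out[m] = running
        vals = out
    return vals[0]

def cauchy_form(f, x, k, upper=40.0, h=1e-3):
    # the one-dimensional form int_x^inf f(y) (y - x)^(k-1) / (k-1)! dy
    n = int((upper - x) / h)
    total = 0.0
    for m in range(n):
        y = x + (m + 0.5) * h
        total += f(y) * (y - x) ** (k - 1) * h
    return total / math.factorial(k - 1)

for k in (1, 2, 3):
    lhs = iterated_tail_integral(lambda y: math.exp(-y), 0.5, k)
    rhs = cauchy_form(lambda y: math.exp(-y), 0.5, k)
    # for f(y) = e^{-y} every iterated tail integral equals e^{-x}
    assert abs(lhs - rhs) < 5e-3 and abs(rhs - math.exp(-0.5)) < 1e-4
```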

We will also need the following important lemma from [78], see Proposition 5.7 therein, which we have translated into our notation. Analogous results in the discrete setting have appeared in [76, 77].

Lemma 4.16

Let \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^{\downarrow }\) and \(n=1,\dots ,N\). Let \(\left( {\textsf{R}}_k;k\ge 0\right) \) be a random walk whose steps to the left are independent exponential random variables with parameter 1, and denote by \({\textbf{E}}\) expectation with respect to it. Let \(\tau =\min \left\{ k\ge 0:{\textsf{R}}_k\ge x_{k+1}\right\} \). Then, for any \(y_1,y_2\), we have

$$\begin{aligned} \sum _{k=0}^{n-1}\left[ \partial ^{-(n-k)}\delta _{x_{n-k}}\right] (y_1){\textsf{q}}^{(n)}_{k}(y_2;x) ={\textbf{E}}_{{\textsf{R}}_0=y_1}\left[ e^{y_1-{\textsf{R}}_\tau }\frac{\left( {\textsf{R}}_\tau -y_2\right) ^{n-\tau -1}}{(n-\tau -1)!}{\textbf{1}}_{(\tau <n)}\right] . \end{aligned}$$
(70)

We can now prove Theorem 1.6.

Proof of Theorem 1.6

We apply Theorem 1.5. Observe that we have \({\textsf{L}}^{(k)}={\mathcal {B}}^{(\theta +2N-2k)}\) and \({\textsf{c}}^{(k)}\equiv 0\). Moreover, since \(\theta \ge 2\) the left boundary 0 is an entrance boundary point while \(\infty \) is always natural for all \(k=1,\dots ,N\), so that in particular our standing assumption is satisfied. We now rewrite the function \({\mathfrak {G}}_{n_2}(y_1,y_2)\) in the form that appears in the definition of the kernel \({\mathfrak {B}}_t^{(\theta )}\). We compute

$$\begin{aligned} {\mathfrak {G}}_{n_2}(y_1,y_2)&=\sum _{k=0}^{n_2-1}\partial _{y_1}^{-(n_2-k)}e^{t{\textsf{L}}^{(n_2-k)}} (x_{n_2-k},y_1){\textsf{q}}_k^{(n_2)}(y_2;x)\\&=\sum _{k=0}^{n_2-1}\partial _{y_1}^{-(n_2-k)}e^{t{\mathcal {B}}^{(\theta +2N-2n_2+2k)}}(x_{n_2-k},y_1){\textsf{q}}_k^{(n_2)}(y_2;x)\\&=e^{t{\mathcal {B}}_{y_1}^{(4-\theta -2N)}}\sum _{k=0}^{n_2-1}\left[ \partial ^{-(n_2-k)}\delta _{x_{n_2-k}}\right] (y_1){\textsf{q}}^{(n_2)}_{k}(y_2;x)\\&=e^{t{\mathcal {B}}_{y_1}^{(4-\theta -2N)}}{\textbf{E}}_{{\textsf{R}}_0=y_1}\left[ e^{y_1-{\textsf{R}}_\tau } \frac{\left( {\textsf{R}}_\tau -y_2\right) ^{n_2-\tau -1}}{(n_2-\tau -1)!}{\textbf{1}}_{(\tau <n_2)}\right] , \end{aligned}$$

where we have used \(\partial _z^{-(n_2-k)}e^{t{\mathcal {B}}^{(\theta +2N-2n_2+2k)}}(x_{n_2-k},y_1)=\left[ e^{t{\mathcal {B}}^{(4-\theta -2N)}}\partial ^{-(n_2-k)}\delta _{x_{n_2-k}}\right] (y_1)\) by virtue of Lemma 4.15 (note that the semigroup on the right hand side is independent of k and thus can be taken out of the sum) and then Lemma 4.16. This gives the first expression for \({\mathfrak {B}}_t^{(\theta )}\).

It remains to establish the equivalent form of \({\mathfrak {B}}_t^{(\theta )}\) given in (22). Observe that for \(n_1\ge n_2\) the equality is obvious since \(\partial ^{-(n_2-n_1)}\partial ^{n_2}=\partial ^{n_1}\) in this case. For \(n_2>n_1\), we claim that for \(n_2=0,1,\dots , N\), we have for any \(z\in (l,r)\)

$$\begin{aligned} \partial _{y_1}^{n_2}e^{t{\mathcal {B}}^{(4-\theta -2N)}}(y_1,z)\big |_{y_1=0}=0, \end{aligned}$$
(71)

from which the desired result follows by virtue of the fact that for \(0\le k \le m\), and f satisfying \(\partial ^if(l)=0\), for \(i=0,\dots ,m-1\), we have \(\partial ^{-k}\partial ^{m} f=\partial ^{m-k}f\). This fact can easily be proven by induction: \(\partial ^{-k}\partial ^m f=\partial ^{-(k-1)} \partial ^{-1}\partial \partial ^{m-1}f=\partial ^{-(k-1)}\partial ^{m-1}f\), since \(\partial ^{m-1}f(l)=0\), and so on. Now, equality (71) can be seen as follows. From the symmetry property (68) we have:

$$\begin{aligned} \partial _{y_1}^{n_2}e^{t{\mathcal {B}}^{(4-\theta -2N)}}(y_1,z)= \partial _{y_1}^{n_2}e^{t{\mathcal {B}}^{(2N+\theta )}}(z,y_1). \end{aligned}$$

Then, observe that by iterating the equality (proven by direct computation using properties of Bessel functions)

$$\begin{aligned} \partial _y e^{t{\mathcal {B}}^{(\theta )}}(z,y)=\frac{1}{2t}\left( -e^{t{\mathcal {B}}^{(\theta )}}(z,y) +e^{t{\mathcal {B}}^{(\theta -2)}}(z,y)\right) \end{aligned}$$

we obtain the formula

$$\begin{aligned} \partial _{y_1}^{n_2}e^{t{\mathcal {B}}^{(2N+\theta )}}(z,y_1) = \frac{1}{(2t)^{n_2}} \sum _{j=0}^{n_2}(-1)^{n_2-j}\left( {\begin{array}{c}n_2\\ j\end{array}}\right) e^{t{\mathcal {B}}^{(2N+\theta -2j)}}(z,y_1). \end{aligned}$$

The claim follows since \(e^{t{\mathcal {B}}^{(2N+\theta -2j)}}(z,0)=0\), for \(j=0,1,\dots ,N\). \(\square \)

Remark 4.17

Some formal computations (one needs to be careful with boundary conditions) show that it is possible to find an analogue of Lemma 4.15 for general \({\textsf{L}}^{(k)}\)-diffusions as in our standing assumption. Further computations then lead to an expression of the form

$$\begin{aligned} \sum _{k=0}^{n-1}g(k)\left[ \partial ^{-(n-k)}\delta _{x_{n-k}}\right] (y_1){\textsf{q}}^{(n)}_{k}(y_2;x), \end{aligned}$$
(72)

appearing in the formula for the correlation kernel, for a certain function g(k) depending on \({\textsf{L}}^{(k)}\) (in the squared Bessel and Brownian cases \(g(k)\equiv 1\)). Following the working in [76, 77, 78] it is not hard to give an intermediate probabilistic representation for (72) in terms of an exponential random walk \(\left( {\textsf{R}}^*_k;k\ge 0\right) \) with steps only to the right; after conjugation \(\left[ \partial ^{-(n-k)}\delta _{x_{n-k}}\right] (y_1)\) can be written as a transition probability for \({\textsf{R}}^*_k\) and \({\textsf{q}}_k^{(n)}(y_2)\) in terms of a related stopping time. However, in order to obtain an expression as in Lemma 4.16, one needs to reverse time (the direction of the walk) and, with non-constant g(k), this involves a last exit time for the walk \(\left( {\textsf{R}}_k;k\ge 0\right) \), which is not a Markov time (we do not have the strong Markov property starting from a last exit time) and is thus not conducive to the rest of the argument. It might still be possible to give a somewhat different (but still potentially useful for asymptotics) random walk representation analogous to Lemma 4.16 but we do not pursue this further here.

5 Proof of Theorem 1.9

We begin with a brief discussion on why the semigroup \(\left( {\textsf{P}}_t^{(N)};t\ge 0\right) \) from (24) is well-defined and on well-posedness of the SDE (25), even from points with multiple coinciding coordinates. The following lemma is an immediate consequence of the differentiation formulae for \(\mathsf {\Delta }_N(x)\) in [37], by virtue of the polynomial form of \({\textsf{a}}\) and \({\textsf{b}}\).

Lemma 5.1

Let \(N\in {\mathbb {N}}\). Consider \({\textsf{L}}\) with \({\textsf{a}}\) and \({\textsf{b}}\) as in (1). We have the following relation, with \(x=(x_1,\dots ,x_N)\in (l,r)^N\),

$$\begin{aligned} \left[ \sum _{i=1}^N {\textsf{L}}_{x_i}\right] \mathsf {\Delta }_N(x)=\lambda _N \mathsf {\Delta }_N(x), \ \ \text { with } \lambda _N=\frac{1}{6}N(N-1)(2a_2(N-2)+3b_1). \end{aligned}$$

Thus, \(\mathsf {\Delta }_N(x)\) is a strictly positive eigenfunction in \({\mathbb {W}}_N^{\uparrow ,\circ }\) of the generator \(\sum _{i=1}^N {\textsf{L}}_{x_i}\) of N independent \({\textsf{L}}\)-diffusions in \({\mathbb {W}}_N^{\uparrow }\) killed when they intersect (namely with Dirichlet boundary conditions on the boundary of \({\mathbb {W}}_N^{\uparrow }\)). We can then define the Doob h-transform [36, 83, 87] by \(\mathsf {\Delta }_N\) of the corresponding sub-Markov semigroup, with transition density with respect to the Lebesgue measure given by the Karlin-McGregor formula [65] as \(\det \left( e^{t{\textsf{L}}}(x_i,y_j)\right) _{i,j=1}^N\), which gives the definition of \({\textsf{P}}_t^{(N)}\) in (24).
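Lemma 5.1 is easy to test numerically: using \(\partial _{x_i}\mathsf {\Delta }_N=\mathsf {\Delta }_N\sum _{j\ne i}(x_i-x_j)^{-1}\) and \(\partial _{x_i}^2\mathsf {\Delta }_N=\mathsf {\Delta }_N\big [\big (\sum _{j\ne i}(x_i-x_j)^{-1}\big )^2-\sum _{j\ne i}(x_i-x_j)^{-2}\big ]\), the following sketch (with illustrative coefficients and points of our choosing) checks the eigenvalue relation for small N:

```python
import itertools

def eigen_relation_sides(xs, a2, a1, a0, b1, b0):
    # returns ([sum_i L_{x_i}] Delta_N, lambda_N * Delta_N) for
    # L f = a(x) f'' + b(x) f',  a(x) = a2 x^2 + a1 x + a0,  b(x) = b1 x + b0,
    # using d_i Delta = Delta*S_i and d_i^2 Delta = Delta*(S_i^2 - T_i) with
    # S_i = sum_{j != i} 1/(x_i - x_j),  T_i = sum_{j != i} 1/(x_i - x_j)^2
    N = len(xs)
    lam = N * (N - 1) * (2.0 * a2 * (N - 2) + 3.0 * b1) / 6.0
    delta = 1.0
    for i, j in itertools.combinations(range(N), 2):
        delta *= xs[j] - xs[i]
    total = 0.0
    for i in range(N):
        S = sum(1.0 / (xs[i] - xs[j]) for j in range(N) if j != i)
        T = sum(1.0 / (xs[i] - xs[j]) ** 2 for j in range(N) if j != i)
        a = a2 * xs[i] ** 2 + a1 * xs[i] + a0
        b = b1 * xs[i] + b0
        total += a * delta * (S * S - T) + b * delta * S   # L_{x_i} applied to Delta_N
    return total, lam * delta

for N in (2, 3, 4):
    lhs, rhs = eigen_relation_sides([-1.3, -0.2, 0.7, 1.9][:N], 0.5, -0.3, 1.1, 0.8, -0.4)
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(rhs))
```

Note that, as the lemma asserts, only \(a_2\) and \(b_1\) enter \(\lambda _N\); the contributions of \(a_1, a_0, b_0\) cancel.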

We also have the following result on the SDEs. As mentioned, for initial conditions \({\textsf{z}}(0)\in {\mathbb {W}}_N^{\uparrow ,\circ }\) a direct generic argument is available that uses the Doob h-transform structure, see Appendix A in [12]. Instead, for \({\textsf{z}}(0)\in {\mathbb {W}}_N^{\uparrow }\) we apply the general results of [54] by checking the conditions therein. These conditions have already been checked in previous works [11, 54] for some generators of the form \({\textsf{L}}\) and the general case is analogous.

Proposition 5.2

Let \({\textsf{z}}(0)\in {\mathbb {W}}_N^{\uparrow }\). Then, the system of SDEs (25) has a unique strong solution with no collisions for positive time, namely almost surely, for all \(t>0\), \({\textsf{z}}(t)\in {\mathbb {W}}_N^{\uparrow ,\circ }\).

Proof

We apply Theorem 2.2 in [54]. The Yamada-Watanabe conditions (C1)-(C2) therein hold by our standing assumption (namely the form of the polynomial functions \({\textsf{a}}\) and \({\textsf{b}}\)). Moreover, note that we can write the interacting drift term in the SDE (25) as follows, for \(i=1,\dots ,N\):

$$\begin{aligned} 2{\textsf{a}}(x_i)\sum _{j\ne i}\frac{1}{x_i-x_j}=\sum _{j\ne i}\frac{2a_2x_ix_j+a_1(x_i+x_j)+2a_0}{x_i-x_j}+2(N-1)a_2x_i+(N-1)a_1. \end{aligned}$$

We can thus take the function \(H_{ij}(x,y)\equiv H(x,y)\) in [54] to be equal to

$$\begin{aligned} H(x,y)=2a_2xy+a_1(x+y)+2a_0. \end{aligned}$$

If we write

$$\begin{aligned} H_1(x,y)=2a_2xy, \ H_2(x,y)=a_1(x+y), \ H_3(x,y)=2a_0, \end{aligned}$$

we note that conditions (A1), (A2), (A3) in [54] have already been checked for each of \(H_1, H_2, H_3\) in [11, 54] from which the corresponding condition for H follows simply by adding up the individual inequalities in those conditions. Finally, the sets in condition (A4) in [54] are empty since, by our standing assumption, \({\textsf{a}}(x)>0\) for all \(x\in (l,r)\) and moreover condition (A5) is vacuous since the drift \({\textsf{b}}\) does not depend on i. The conclusion of the proposition follows from the aforementioned theorem. \(\square \)
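The algebraic rewriting of the interacting drift term used above can be checked directly; the following sketch (ours, for illustration) verifies the identity at arbitrary distinct points:

```python
def drift_identity_gap(xs, a2, a1, a0):
    # max over i of | 2 a(x_i) sum_{j != i} 1/(x_i - x_j)
    #   - sum_{j != i} (2 a2 x_i x_j + a1 (x_i + x_j) + 2 a0)/(x_i - x_j)
    #   - 2 (N-1) a2 x_i - (N-1) a1 |,  with a(x) = a2 x^2 + a1 x + a0
    N = len(xs)
    worst = 0.0
    for i in range(N):
        a = a2 * xs[i] ** 2 + a1 * xs[i] + a0
        lhs = 2.0 * a * sum(1.0 / (xs[i] - xs[j]) for j in range(N) if j != i)
        rhs = sum((2 * a2 * xs[i] * xs[j] + a1 * (xs[i] + xs[j]) + 2 * a0) / (xs[i] - xs[j])
                  for j in range(N) if j != i) + 2 * (N - 1) * a2 * xs[i] + (N - 1) * a1
        worst = max(worst, abs(lhs - rhs))
    return worst

assert drift_identity_gap([-1.7, -0.4, 0.6, 1.5, 2.8], 1.2, -0.7, 0.3) < 1e-10
```

Each summand differs from its counterpart by exactly \(2a_2x_i+a_1\), which accounts for the two extra terms.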

Moving on towards proving Theorem 1.9, the following proposition plays a key role. In the Brownian and squared Bessel cases the proposition below specialises, in an equivalent form (as in these cases it is possible to write \(e^{-t{\textsf{L}}}p\), with p a polynomial and \(t>0\), as a certain integral expression, see [69, 70]), to some key results from [69, 70]. The main idea is to rewrite the measure from (77), (78) below in an equivalent form so that the functions appearing in the first and last determinants in (73) are biorthogonal with respect to the transition density \(e^{(t_M-t_1){\textsf{L}}}(y_1,y_2)\) as we show later on. This working is basically equivalent (but better adapted to the present setting) to finding biorthogonal functions as we did in the previous section and will allow for an application of yet another variant of the Eynard-Mehta theorem [17, 24, 61].

Proposition 5.3

Let \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^{\uparrow }\) and \(M\in {\mathbb {N}}\) be arbitrary. The distribution of the dynamics (25), starting from initial condition x, at times \(0<t_1< t_2< \cdots < t_M\), can be written as follows, with respect to Lebesgue measure on the M-fold product \({\mathbb {W}}_N^\uparrow \times \cdots \times {\mathbb {W}}_N^\uparrow \):

$$\begin{aligned} \frac{1}{Z_{N,t_M}} \det \left( {\mathcal {Q}}_i^{(t_1)}\left( y^{(1)}_j;x\right) \right) _{i,j=1}^N \det \left( e^{(t_2-t_1){\textsf{L}}}\left( y^{(1)}_i,y^{(2)}_j\right) \right) _{i,j=1}^N\times \cdots \nonumber \\ \times \det \left( e^{(t_M-t_{M-1}){\textsf{L}}}\left( y^{(M-1)}_i,y^{(M)}_j\right) \right) _{i,j=1}^N \det \left( {\mathcal {P}}_{i}^{(t_M)}\left( y_j^{(M)};x\right) \right) _{i,j=1}^N, \end{aligned}$$
(73)

for some normalization constant \(Z_{N,t_M}\) independent of the x variables and the functions \({\mathcal {Q}}_i^{(t)}\left( \cdot \right) ={\mathcal {Q}}_i^{(t)}\left( \cdot ;x\right) \) and polynomials \({\mathcal {P}}_i^{(t)}\left( \cdot \right) ={\mathcal {P}}_i^{(t)}\left( \cdot ;x\right) \), for \(i=1,\dots ,N\), are given by

$$\begin{aligned} {\mathcal {Q}}_i^{(t)}(y;x)&=\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }_i^{(N)}}e^{t{\textsf{L}}}(z,y)\frac{1}{\prod _{k=1}^i(z-x_k)}dz, \end{aligned}$$
(74)
$$\begin{aligned} {\mathcal {P}}_{i}^{(t)}(y;x)&=e^{-t{\textsf{L}}_y}\prod _{k=1}^{i-1}(y-x_k), \end{aligned}$$
(75)

with the convention \({\mathcal {P}}_1^{(t)}\equiv 1\) and where \(\mathsf {\Gamma }_i^{(N)}\) is a positively oriented contour around \(x_1,\dots ,x_i\) in a complex neighbourhood of \((x_1,x_N)\). Alternatively, \({\mathcal {Q}}_i^{(t)}(y;x)\) can be written as

$$\begin{aligned} {\mathcal {Q}}_i^{(t)}(y;x)= e^{t{\textsf{L}}}(\cdot ,y)[x_1,\dots ,x_i], \end{aligned}$$
(76)

where \(f[x_1,\dots ,x_i]\) denotes the divided difference [33] of a function f on \((l,r)\) evaluated at the points \((x_1,\dots ,x_i)\).

Proof

Observe that, by the Markov property the distribution of the dynamics (25), starting from initial condition \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\uparrow \), at times \(0<t_1< t_2< \cdots < t_M\) is given by:

$$\begin{aligned} {\textsf{P}}_{t_1}^{(N)}\left( x,dy^{(1)}\right) {\textsf{P}}_{t_2-t_1}^{(N)}\left( y^{(1)},dy^{(2)}\right) \cdots {\textsf{P}}_{t_M-t_{M-1}}^{(N)}\left( y^{(M-1)},dy^{(M)}\right) . \end{aligned}$$
(77)

Consider \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) first. From the explicit formula in (24) the above display (disregarding the \(dy^{(i)}\) terms to ease notation) can be written as:

$$\begin{aligned} \frac{1}{Z_{N,t_M}}\frac{1}{\mathsf {\Delta }_N(x)}\det \left( e^{t_1{\textsf{L}}} \left( x_i,y^{(1)}_j\right) \right) _{i,j=1}^N\det \left( e^{(t_2-t_1){\textsf{L}}}\left( y^{(1)}_i,y^{(2)}_j\right) \right) _{i,j=1}^N \times \cdots \nonumber \\ \times \det \left( e^{(t_M-t_{M-1}){\textsf{L}}}\left( y^{(M-1)}_i,y^{(M)}_j\right) \right) _{i,j=1}^N \mathsf {\Delta }_N\left( y^{(M)}\right) . \end{aligned}$$
(78)

Here, \(Z_{N,t_M}\) is a normalization constant which may change from line to line. We concentrate on the first two factors. Using the identity \(\mathsf {\Delta }_N(x)=\prod _{i=2}^N \prod _{m=1}^{i-1}(x_i-x_m)\) we can write

$$\begin{aligned} \frac{1}{\mathsf {\Delta }_N(x)}\det \left( e^{t_1{\textsf{L}}}\left( x_i,y^{(1)}_j\right) \right) _{i,j=1}^N =\det \left( e^{t_1{\textsf{L}}}\left( x_i,y_j^{(1)}\right) \frac{1}{\prod _{m=1}^{i-1}(x_i-x_m)}\right) _{i,j=1}^N. \end{aligned}$$

Then, by row operations we get that this is equal to:

$$\begin{aligned} \det \left( \sum _{m=1}^ie^{t_1{\textsf{L}}}(x_m,y^{(1)}_j)\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}\right) _{i,j=1}^N. \end{aligned}$$

By the residue theorem (recall that \(z\mapsto e^{t{\textsf{L}}}(z,y)\) is analytic in a complex neighbourhood of \((x_1,x_N)\)), we can write:

$$\begin{aligned} \sum _{m=1}^ie^{t_1{\textsf{L}}}(x_m,y)\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}=\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }_i^{(N)}}e^{t_1{\textsf{L}}}(z,y)\frac{1}{\prod _{k=1}^i(z-x_k)}dz. \end{aligned}$$
(79)

This gives the complex variables representation in (74) for the \({\mathcal {Q}}\)-functions, while in (82) below we will see the divided differences representation (76).

Now, we turn to the last factor \(\mathsf {\Delta }_N\left( y^{(M)}\right) \). Recall that from Proposition 3.1, we have that \(e^{-t_M{\textsf{L}}_y}\prod _{k=1}^{i-1}(y-x_k)\) is a polynomial of degree \(i-1\) in y, with leading order coefficient \(e^{t_M(i-1)(b_1+a_2(i-2))}\). Thus, by row operations \(\mathsf {\Delta }_N\left( y^{(M)}\right) \) is equal, up to a multiplicative constant depending only on \(t_M, b_1, a_2, N\), to

$$\begin{aligned} \det \left( e^{-t_M{\textsf{L}}_{y_{j}^{(M)}}}\prod _{k=1}^{i-1}\left( y_j^{(M)}-x_k\right) \right) _{i,j=1}^N. \end{aligned}$$
(80)

This gives the required representation (75) for the \({\mathcal {P}}\)-polynomials.

We now extend the statement to general \(x\in {\mathbb {W}}_N^{\uparrow }\) by continuity as follows. We show that the measure on \({\mathbb {W}}_N^\uparrow \times \cdots \times {\mathbb {W}}_N^\uparrow \) in (77), given by its density (78) for \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\), which we have equivalently rewritten in (73), is weakly continuous in the initial condition x. We first show pointwise continuity in x for the density as written in (73). Towards this end, observe that the expression for \({\mathcal {P}}_i^{(t)}(\cdot ;x)\) in (80) is easily seen to be continuous in \(x=(x_1,\dots ,x_N)\in {\mathbb {W}}_N^\uparrow \) since we can write by linearity

$$\begin{aligned} e^{-t_M{\textsf{L}}_y}\prod _{k=1}^{i-1}(y-x_k) = \sum _{m=0}^{i-1} (-1)^{1-i-m}{\mathfrak {e}}_{i-1-m}(x_1,\dots ,x_{i-1}) e^{-t_M{\textsf{L}}_y}y^m, \end{aligned}$$
(81)

where \({\mathfrak {e}}_k(x_1,\dots ,x_{i-1})\) is the k-th elementary symmetric polynomial, which is continuous in \(x\in {\mathbb {W}}_N^{\uparrow }\). Showing that the left hand side of (79) is continuous in \(x\in {\mathbb {W}}_N^\uparrow \) is slightly more involved (from the contour integral expression on the right hand side of (79) this is straightforward, but the real variable arguments below will also be used at the end of the proof) and we need some preliminaries. We define the divided difference \(f[y_1,\dots ,y_n]\) of a smooth function f on \((l,r)\) at (distinct) points \((y_1,\dots ,y_n)\in {\mathbb {W}}_n^{\uparrow ,\circ }\) by the following procedure:

$$\begin{aligned} f[y_1,y_2]=\frac{f(y_2)-f(y_1)}{y_2-y_1}, \ \ f[y_1,y_2,y_3]=\frac{f[y_2,y_3]-f[y_1,y_2]}{y_3-y_1}, \end{aligned}$$

and so on until

$$\begin{aligned} f[y_1,\dots ,y_n]=\frac{f[y_2,\dots ,y_n]-f[y_1,\dots ,y_{n-1}]}{y_n-y_1}. \end{aligned}$$

The key observation, see for example [33] for the general formula, is that for \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\), for \(i=1,\dots , N\), we have (with the obvious convention \(f[y_1]=f(y_1)\)):

$$\begin{aligned} \sum _{m=1}^i e^{t_1{\textsf{L}}}(x_m,y)\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}= e^{t_1{\textsf{L}}}(\cdot ,y)[x_1,\dots ,x_i]. \end{aligned}$$
(82)

Then, using the fact that divided differences are continuous for smooth functions, even at points with coinciding coordinates, see [33], we can extend the left hand side of (79), namely \({\mathcal {Q}}_i^{(t)}\left( \cdot ;x\right) \), for \(i=1,\dots ,N\), by continuity to general \(x\in {\mathbb {W}}_N^\uparrow \). This gives pointwise continuity of the densities.
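Both representations of the divided difference (the recursive definition above and the partial fraction sum on the left hand side of (82)), as well as the continuity at coalescing points just invoked, can be illustrated numerically; the sketch below (an illustration only, with f and points of our choosing) also shows that a divided difference at three nearly coincident points approaches \(f''(x)/2!\):

```python
import math

def divided_difference(f, xs):
    # recursive divided differences f[x_1, ..., x_n]
    vals = [f(x) for x in xs]
    for level in range(1, len(xs)):
        vals = [(vals[m + 1] - vals[m]) / (xs[m + level] - xs[m])
                for m in range(len(xs) - level)]
    return vals[0]

def partial_fraction_sum(f, xs):
    # sum_m f(x_m) / prod_{k != m} (x_m - x_k), as on the left hand side of (82)
    total = 0.0
    for m, xm in enumerate(xs):
        denom = 1.0
        for k, xk in enumerate(xs):
            if k != m:
                denom *= xm - xk
        total += f(xm) / denom
    return total

pts = [0.1, 0.5, 1.2, 2.0]
assert abs(divided_difference(math.exp, pts) - partial_fraction_sum(math.exp, pts)) < 1e-10
# at nearly coincident points the divided difference approaches f''(x)/2! (here f = exp, x = 1)
assert abs(divided_difference(math.exp, [1.0, 1.0 + 1e-5, 1.0 + 2e-5]) - math.e / 2.0) < 1e-3
```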

Now, consider points \(x^{(n)}\in {\mathbb {W}}_N^{\uparrow ,\circ }\) converging to \(x\in {\mathbb {W}}_N^\uparrow \) as \(n\rightarrow \infty \). To complete the proof we need to dominate the density given in (73), where we plug in \(x=x^{(n)}\) in the formula, independently of n by some integrable function on \({\mathbb {W}}_N^\uparrow \times \cdots \times {\mathbb {W}}_N^\uparrow \) (recall that for weak convergence we are testing against some continuous bounded function \({\textsf{F}}\)). By taking absolute values and expanding the determinants it suffices to dominate, uniformly in n, each of the following terms,

$$\begin{aligned}&\prod _{i=1}^N \left| {\mathcal {Q}}_i^{(t_1)}\left( y_{\sigma _1(i)}^{(1)};x^{(n)}\right) \right| \prod _{i=1}^N e^{(t_2-t_1){\textsf{L}}}\left( y_i^{(1)},y_{\sigma _2(i)}^{(2)}\right) \times \cdots \\&\cdots \times \prod _{i=1}^N e^{(t_M-t_{M-1}){\textsf{L}}}\left( y_i^{(M-1)},y_{\sigma _M(i)}^{(M)}\right) \prod _{i=1}^N \left| {\mathcal {P}}_i^{(t_M)}\left( y_{\sigma _{M+1}(i)}^{(M)};x^{(n)}\right) \right| , \end{aligned}$$

by an integrable function on \({\mathbb {W}}_N^\uparrow \times \cdots \times {\mathbb {W}}_N^\uparrow \), where \(\sigma _1, \sigma _2, \dots ,\sigma _{M+1}\) are arbitrary permutations of \(\left\{ 1,2,\dots ,N \right\} \). Moreover, it suffices to bound, uniformly in n, each factor of the form, for any \(i,j=1,\dots ,N\),

$$\begin{aligned} \left| {\mathcal {Q}}_i^{(t_1)}\left( y_1;x^{(n)}\right) \right| e^{(t_2-t_1){\textsf{L}}}\left( y_1,y_2\right) \cdots e^{(t_M-t_{M-1}){\textsf{L}}}\left( y_{M-1},y_{M}\right) \left| {\mathcal {P}}_j^{(t_M)}\left( y_M;x^{(n)}\right) \right| \end{aligned}$$
(83)

by an integrable function of \(y=(y_1,\dots ,y_M) \in (l,r)^M\). Towards this end, note that \(|e^{-t_M{\textsf{L}}_{y_M}}y_M^m|\) is dominated by a non-negative polynomial \(h_m\) of degree \(2\left\lfloor \frac{m+1}{2} \right\rfloor \). Hence, from (81) the sequence \(\left\{ \left| {\mathcal {P}}_j^{(t)}\left( y_M;x^{(n)}\right) \right| \right\} _{n=1}^\infty \) is dominated by the polynomial:

$$\begin{aligned} \sum _{m=0}^{j-1} \sup _{n\ge 1}\left| {\mathfrak {e}}_{j-1-m}\left( x^{(n)}_1,\dots ,x^{(n)}_{j-1}\right) \right| h_m(y_M). \end{aligned}$$
(84)

Now, using the mean value theorem for divided differences, see [33], there exists some \(\xi ^{(n)}_i \in \left( x^{(n)}_1,x^{(n)}_i\right) \) such that:

$$\begin{aligned} {\mathcal {Q}}_i^{(t_1)}\left( y_1;x^{(n)}\right) =e^{t_1{\textsf{L}}}(\cdot ,y_1) \left[ x^{(n)}_1,\dots ,x^{(n)}_i\right] =\frac{1}{(i-1)!}\partial _{\xi ^{(n)}_i}^{i-1}e^{t_1{\textsf{L}}}\left( \xi ^{(n)}_i,y_1\right) . \end{aligned}$$
(85)

In particular, the sequence \(\left\{ \left| {\mathcal {Q}}_i^{(t_1)}\left( y_1;x^{(n)}\right) \right| \right\} _{n=1}^\infty \) is dominated by \(\sup _{\xi \in {\mathcal {U}}_i} \frac{1}{(i-1)!}\left| \partial _{\xi }^{i-1}e^{t_1 {\textsf{L}}}(\xi ,y_1)\right| \), with \({\mathcal {U}}_i=\left( \inf _{n\ge 1}x_1^{(n)},\sup _{n\ge 1}x_i^{(n)}\right) \). Putting everything together it thus suffices to show that

$$\begin{aligned} \sup _{\xi \in {\mathcal {U}}_i} \left| \partial _{\xi }^{i-1}e^{t_1 {\textsf{L}}}(\xi ,y_1)\right| e^{(t_2-t_1){\textsf{L}}}\left( y_1,y_2\right) \cdots e^{(t_M-t_{M-1}){\textsf{L}}}\left( y_{M-1},y_{M}\right) h_j\left( y_M\right) \end{aligned}$$
(86)

is integrable on \((l,r)^M\). Then, using Tonelli’s theorem (observe that all factors are non-negative), noting that,

$$\begin{aligned} \int _l^r \cdots \int _l^r e^{(t_2-t_1){\textsf{L}}}(y_1,y_2)\cdots e^{(t_M-t_{M-1}){\textsf{L}}}(y_{M-1},y_M)h_j(y_M)dy_2\cdots dy_M\nonumber \\ =e^{(t_M-t_1){\textsf{L}}}h_j(y_1) \end{aligned}$$
(87)

is a polynomial and the fact that, by standard estimates [95], see [11] where this is worked out in detail in the Hua-Pickrell case, \(\sup _{\xi \in {\mathcal {U}}_i} \left| \partial _{\xi }^{i-1}e^{t_1 {\textsf{L}}}(\xi ,\cdot )\right| \) integrates polynomials in \((l,r)\), we get that (86) is integrable. The dominated convergence theorem gives the desired weak convergence (and we have already shown convergence of the densities to the desired formula), extending the result to general \(x\in {\mathbb {W}}_N^\uparrow \) and this completes the proof. \(\square \)

The following two lemmas, along with the version of the Eynard-Mehta theorem stated in Proposition 5.6, essentially explain why we wrote the distribution of the dynamics at different times in the form given in Proposition 5.3.

Lemma 5.4

Let \(t>0\) and \(x\in {\mathbb {W}}_N^{\uparrow }\). The functions \({\mathcal {Q}}_i^{(t)}(\cdot ;x)\) and polynomials \({\mathcal {P}}_j^{(t)}(\cdot ;x)\) are biorthogonal with respect to the Lebesgue measure on \((l,r)\):

$$\begin{aligned} \int _{l}^r {\mathcal {Q}}_i^{(t)}(y;x){\mathcal {P}}_j^{(t)}(y;x)dy={\textbf{1}}_{(i=j)}, \end{aligned}$$

for \(i,j=1,\dots ,N\).

Proof

We first assume \(x \in {\mathbb {W}}_N^{\uparrow ,\circ }\). We use the representation of \({\mathcal {Q}}\) as a sum on the left hand side of (79). We thus have, using Lemmas 3.3 and 3.4,

$$\begin{aligned} \int _{l}^r {\mathcal {Q}}_i^{(t)}(y;x){\mathcal {P}}_j^{(t)}(y;x)dy&=\int _l^r \sum _{m=1}^ie^{t{\textsf{L}}}(x_m,y)\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}e^{-t{\textsf{L}}_y}\prod _{k=1}^{j-1}(y-x_k)dy\\&= \sum _{m=1}^i\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}\int _{l}^r e^{t{\textsf{L}}}(x_m,y)e^{-t{\textsf{L}}_y}\prod _{k=1}^{j-1}(y-x_k)dy\\&=\sum _{m=1}^i\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}\prod _{k=1}^{j-1}(x_m-x_k)\\&=\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }_i^{(N)}}\frac{\prod _{k=1}^{j-1}(z-x_k)}{\prod _{k=1}^{i}(z-x_k)}dz={\textbf{1}}_{(i=j)}, \end{aligned}$$

the last equality being a simple fact from complex analysis. We finally extend the result to general \(x\in {\mathbb {W}}_N^{\uparrow }\) by continuity, arguing using the dominated convergence theorem by virtue of (84) and (85), as in the proof of Proposition 5.3. \(\square \)
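As a sanity check on this last step (the sum over m is precisely the sum of residues of the contour integral), the following sketch verifies that the sum equals \({\textbf{1}}_{(i=j)}\) for an arbitrary, purely illustrative choice of distinct points \(x_1<\dots <x_N\); exact rational arithmetic avoids round-off.

```python
from itertools import product
from fractions import Fraction

# Illustrative choice of pairwise distinct points x_1 < ... < x_N.
x = [Fraction(v) for v in (-3, -1, 2, 5)]
N = len(x)

def S(i, j):
    # sum_{m=1}^i prod_{k=1}^{j-1}(x_m - x_k) / prod_{k<=i, k != m}(x_m - x_k)
    total = Fraction(0)
    for m in range(i):
        num = Fraction(1)
        for k in range(j - 1):
            num *= x[m] - x[k]
        den = Fraction(1)
        for k in range(i):
            if k != m:
                den *= x[m] - x[k]
        total += num / den
    return total

# For j > i every term vanishes (the numerator contains the factor x_m - x_m);
# for j <= i the sum is a divided difference of a monic polynomial of degree j-1.
for i, j in product(range(1, N + 1), repeat=2):
    assert S(i, j) == (1 if i == j else 0)
print("biorthogonality check passed")
```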

Lemma 5.5

Let \(x\in {\mathbb {W}}_N^\uparrow \). Let \(0\le s\le t\) and \(i=1,\dots ,N\). Then, we have

$$\begin{aligned} {\mathcal {Q}}_i^{(s)}e^{(t-s){\textsf{L}}}(y;x)={\mathcal {Q}}_i^{(t)}(y;x), \ \ e^{(t-s){\textsf{L}}}{\mathcal {P}}_i^{(t)}(y;x)={\mathcal {P}}_i^{(s)}(y;x). \end{aligned}$$

Proof

We first prove the identity for \({\mathcal {Q}}\) and assume \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\). We use the representation of \({\mathcal {Q}}\) as a sum on the left hand side of (79) to compute, using the semigroup property,

$$\begin{aligned}&\int _l^r{\mathcal {Q}}_i^{(s)}(w)e^{(t-s){\textsf{L}}}(w,y)dw\\ {}&=\sum _{m=1}^i\frac{1}{\prod _{\begin{array}{c} k=1\\ k\ne m \end{array}}^i(x_m-x_k)}\int _l^{r}e^{s{\textsf{L}}}(x_m,w)e^{(t-s){\textsf{L}}}(w,y)dw ={\mathcal {Q}}_i^{(t)}(y). \end{aligned}$$

We then extend to general \(x\in {\mathbb {W}}_N^\uparrow \) by continuity as in the proof of Proposition 5.3. Finally, the relation \(e^{(t-s){\textsf{L}}}{\mathcal {P}}_i^{(t)}={\mathcal {P}}_i^{(s)}\), for any \(x\in {\mathbb {W}}_N^\uparrow \), is an immediate consequence of Lemma 3.4. \(\square \)

We require the following special case (yet another variation) of the Eynard-Mehta theorem, see for example [17, 24, 61].

Proposition 5.6

Consider a probability measure on the M-fold product \((l,r)^N\times \cdots \times (l,r)^N\) of the form

$$\begin{aligned} \frac{1}{Z}\det \left( {\mathcal {H}}_i\left( y_j^{(1)}\right) \right) _{i,j=1}^N\prod _{r=1}^{M-1}\det \left( \phi _{r,r+1}\left( y_i^{(r)},y_j^{(r+1)}\right) \right) _{i,j=1}^N \det \left( \widetilde{{\mathcal {H}}}_i\left( y_j^{(M)}\right) \right) _{i,j=1}^N, \end{aligned}$$
(88)

with Z a non-zero normalization constant. For \(n>m\), define

$$\begin{aligned} \phi _{m,n}(y_1,y_2)=\left( \phi _{m,m+1}*\cdots *\phi _{n-1,n}\right) (y_1,y_2), \end{aligned}$$

where \(\left( \phi *\psi \right) (y_1,y_2)=\int _l^r\phi (y_1,z)\psi (z,y_2)dz\). Suppose we have, for \(i,j=1,\dots ,N\),

$$\begin{aligned} \int _l^r \int _l^r{\mathcal {H}}_i(y_1)\phi _{1,M}(y_1,y_2)\widetilde{{\mathcal {H}}}_j(y_2)dy_1dy_2={\textbf{1}}_{(i=j)}. \end{aligned}$$

Finally, we define:

$$\begin{aligned} {\mathcal {H}}_i^{(r)}(y)=\left( {\mathcal {H}}_i*\phi _{1,r}\right) (y), \ \ \widetilde{{\mathcal {H}}}_i^{(r)}(y)=\left( \phi _{r,M}*\widetilde{{\mathcal {H}}}_i\right) (y). \end{aligned}$$

Then, the point process on \(\{1,\dots ,M\}\times (l,r)\) induced by (88) is determinantal with correlation kernel:

$$\begin{aligned} K\left[ (m,y_1);(n,y_2)\right] =\sum _{i=1}^N{\mathcal {H}}_i^{(m)}(y_1)\widetilde{{\mathcal {H}}}_i^{(n)}(y_2) -\phi _{n,m}(y_2,y_1){\textbf{1}}_{(n<m)}. \end{aligned}$$
(89)
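As an illustration of the mechanism behind Proposition 5.6, the following sketch checks a discrete, single-time (\(M=1\)) analogue: for a measure proportional to \(\det ({\mathcal {H}}_i(y_j))\det (\widetilde{{\mathcal {H}}}_i(y_j))\) built from biorthonormal families, brute-force one-point functions agree with the diagonal of the kernel (89). The grid size, particle number and random choice of \({\mathcal {H}}\) are illustrative, not taken from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
G, n = 6, 2                             # grid size and particle number (illustrative)
H = rng.standard_normal((G, n))
Ht = H @ np.linalg.inv(H.T @ H)         # enforces sum_y H[y,i]*Ht[y,j] = 1_(i=j)

def weight(ys):
    ys = list(ys)
    return np.linalg.det(H[ys, :]) * np.linalg.det(Ht[ys, :])

configs = list(product(range(G), repeat=n))    # repeated points get weight ~ 0
Z = sum(weight(ys) for ys in configs)          # equals n! by Cauchy-Binet
K = H @ Ht.T                                   # kernel K(y, y') of (89) with M = 1

for y in range(G):
    # brute-force one-point function: expected number of particles at y
    rho1 = sum(weight(ys) for ys in configs if y in ys) / Z
    assert abs(rho1 - K[y, y]) < 1e-10
print("one-point functions match the determinantal prediction")
```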

We are now in a position to prove Theorem 1.9.

Proof of Theorem 1.9

We apply Proposition 5.6 with the obvious identifications (observe that after symmetrization the probability measure (73) is exactly of the form (88)); the required biorthogonality holds since, by virtue of Proposition 5.3 and Lemmas 5.4 and 5.5, we have:

$$\begin{aligned} \int _{l}^r \int _l^r {\mathcal {Q}}_i^{(t_1)}(y_1;x)e^{(t_M-t_1){\textsf{L}}}(y_1,y_2){\mathcal {P}}_j^{(t_M)}(y_2;x) dy_1 dy_2 ={\textbf{1}}_{(i=j)}. \end{aligned}$$

To obtain the desired form of the kernel it only remains to simplify the sum as follows:

$$\begin{aligned} \sum _{i=1}^N{\mathcal {Q}}_i^{(s)}\left( y_1;x\right) {\mathcal {P}}_i^{(t)}\left( y_2;x\right)&=\sum _{i=1}^{N}\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }_i^{(N)}}e^{s{\textsf{L}}}(z,y_1)\\&\quad \frac{1}{\prod _{m=1}^{i}(z-x_m)}dz \times e^{-t{\textsf{L}}_{y_2}}\prod _{m=1}^{i-1}(y_2-x_m)\\&=\frac{1}{2\pi \text {i}}\oint _{\mathsf {\Gamma }^{(N)}}e^{s{\textsf{L}}}(z,y_1)e^{-t{\textsf{L}}_{y_2}} \sum _{i=1}^N\frac{\prod _{m=1}^{i-1}(y_2-x_m)}{\prod _{m=1}^{i}(z-x_m)}dz, \end{aligned}$$

since we can enlarge the contour from \(\mathsf {\Gamma }_i^{(N)}\) to \(\mathsf {\Gamma }^{(N)}=\mathsf {\Gamma }_N^{(N)}\) without encountering any poles and thus can bring the sum over i inside the integral. Then, using the identity

$$\begin{aligned} \sum _{i=1}^{N}\frac{\prod _{m=1}^{i-1}(w-x_m)}{\prod _{m=1}^{i}(z-x_m)} =\left( \prod _{m=1}^N\frac{w-x_m}{z-x_m}-1\right) \frac{1}{w-z}, \end{aligned}$$

we get the conclusion. Finally, we note that in the sum above we could instead have used the representation of \({\mathcal {Q}}_i^{(s)}\left( y;x\right) \) as a divided difference from (76), which avoids the use of complex variables, but the resulting expression is possibly less amenable to asymptotic analysis. \(\square \)
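The telescoping identity used in the last step can be confirmed symbolically; a small sketch for the illustrative value \(N=4\):

```python
import sympy as sp

# Symbolic check of the telescoping identity
#   sum_{i=1}^N prod_{m<i}(w - x_m) / prod_{m<=i}(z - x_m)
#     = (prod_m (w - x_m)/(z - x_m) - 1) / (w - z),
# for the illustrative value N = 4.
N = 4
w, z = sp.symbols('w z')
x = sp.symbols('x1:%d' % (N + 1))

lhs = sum(sp.prod([w - x[m] for m in range(i)]) /
          sp.prod([z - x[m] for m in range(i + 1)]) for i in range(N))
rhs = (sp.prod([(w - x[m]) / (z - x[m]) for m in range(N)]) - 1) / (w - z)

assert sp.cancel(lhs - rhs) == 0
print("telescoping identity verified for N =", N)
```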

6 A Partial Connection Between the Interacting Particle Systems

In this section we explain a rather partial, but still non-trivial, connection between the interacting particle systems with one-sided collisions (3), (5) and the non-colliding SDEs (25). We need some notation and terminology. We say that \(x \in {\mathbb {W}}_n^{\uparrow }\) and \(y\in {\mathbb {W}}_{n+1}^{\uparrow }\) interlace, and write \(x\prec y\), if the following inequalities hold:

$$\begin{aligned} y_1 \le x_1 \le y_2 \le \cdots \le y_n \le x_n \le y_{n+1}. \end{aligned}$$

We then define interlacing arrays (of length N) by:

$$\begin{aligned} \mathbb{I}\mathbb{A}_{N}=\left\{ X^{(N)}=(x^{(1)},x^{(2)},\dots , x^{(N)})\in {\mathbb {W}}_1^{\uparrow }\times {\mathbb {W}}_2^{\uparrow }\times \cdots \times {\mathbb {W}}_N^{\uparrow }:x^{(1)}\prec x^{(2)} \prec \cdots \prec x^{(N)}\right\} .\nonumber \\ \end{aligned}$$
(90)

We observe that an object similar to interlacing arrays, the set \({\mathcal {D}}_N\), has appeared before in (46); see Remark 4.5 for more details on this connection. We call the \(x^{(i)}\)’s the rows of the array. We consider the following system of SDEs with reflection in \(\mathbb{I}\mathbb{A}_N\):

$$\begin{aligned} d{\textsf{x}}_i^{(k)}(t)=\sqrt{2{\textsf{a}}\left( {\textsf{x}}_i^{(k)}(t)\right) } d{\textsf{w}}_i^{(k)}(t)+{\textsf{b}}^{(k)}\left( {\textsf{x}}_i^{(k)}(t)\right) dt+\frac{1}{2} d{\mathfrak {l}}_i^{\uparrow ,(k)}(t)-\frac{1}{2}d{\mathfrak {l}}_i^{\downarrow ,(k)}(t),\nonumber \\ \end{aligned}$$
(91)

with the \({\textsf{w}}_i^{(k)}\) being independent standard Brownian motions, while the finite variation terms \({\mathfrak {l}}_i^{\uparrow ,(k)}, {\mathfrak {l}}_i^{\downarrow ,(k)}\), which increase only when particles collide in order to preserve the interlacing, can be identified with the semimartingale local times:

$$\begin{aligned} {\mathfrak {l}}_i^{\uparrow ,(k)}&=\text {sem. loc. time of } {\textsf{x}}_i^{(k)}-{\textsf{x}}_{i-1}^{(k-1)} \text { at } 0,\\ {\mathfrak {l}}_i^{\downarrow ,(k)}&=\text {sem. loc. time of } {\textsf{x}}_i^{(k)}-{\textsf{x}}_{i}^{(k-1)} \text { at } 0. \end{aligned}$$

For indices which overflow or underflow, the corresponding terms are identically zero. See Fig. 2 for an illustration of the dynamics (91). These SDEs have a unique strong solution (by virtue of the polynomial form of \({\textsf{a}}\) and \({\textsf{b}}\)), see [13], up until the stopping time

$$\begin{aligned} \tau _{\text {col}}=\inf \left\{ t\ge 0: \exists \ (n,i,j), \ 2 \le n \le N-1, \ 1 \le i <j \le n, \text { such that } {\textsf{x}}_i^{(n)}(t)={\textsf{x}}_j^{(n)}(t) \right\} , \end{aligned}$$

which corresponds to the problematic situation when a particle \({\textsf{x}}_i^{(k)}\) gets trapped between \({\textsf{x}}_{i-1}^{(k-1)}\) and \({\textsf{x}}_i^{(k-1)}\) and gets pushed in opposing directions. Under the initial conditions we will consider, see equation (92) below, \(\tau _{\text {col}}=\infty \) almost surely and thus this situation does not arise.

Fig. 2

The figure gives a cartoon description of the dynamics (91) in \(\mathbb{I}\mathbb{A}_N\). The particles at row k evolve as independent \({\textsf{L}}^{(k)}\)-diffusions modulo the following interactions. Particle \({\textsf{x}}_i^{(k)}\) is autonomous except for its interaction with its two nearest neighbours in row \((k-1)\), namely \({\textsf{x}}_{i-1}^{(k-1)}\) and \({\textsf{x}}_{i}^{(k-1)}\): it receives an infinitesimal push, corresponding to the local times \({\mathfrak {l}}_i^{\uparrow ,(k)}\) and \({\mathfrak {l}}_i^{\downarrow ,(k)}\) respectively, only when it collides with them, denoted in the figure by the incoming arrows, in order for the interlacing to remain true. The projection maps \({\textsf{E}}_{\uparrow }\) and \({\textsf{E}}_{\downarrow }\) defined later on in (94) and (95) simply return the coordinates at the right and left edges of the array respectively as encircled in the figure. It is clear that the corresponding particle systems \(({\textsf{x}}_1^{(1)}(t),\dots ,{\textsf{x}}_N^{(N)}(t))\) and \(({\textsf{x}}_1^{(1)}(t),\dots ,{\textsf{x}}_1^{(N)}(t))\) on the right and left edges of the array only interact among themselves (observe that there are no incoming arrows from other coordinates) and thus their evolution is Markovian and upon relabelling governed by the equations (3) and (5)

The connection of the dynamics (91) to the interacting particle systems (3) and (5) is clear. They are simply the projections to the autonomous coordinates on the right and left edges of the array respectively, see Fig. 2 for an illustration. A connection to the particle system (25) on the other hand is not obvious at all. The next result explains it.

Proposition 6.1

Let \(\mu \) be a probability measure supported on \({\mathbb {W}}_N^{\uparrow ,\circ }\). Under the standing assumption in Definition 1.1, suppose the SDEs (91) in \(\mathbb{I}\mathbb{A}_N\) are initialized according to:

$$\begin{aligned} \mu \left( dx^{(N)}\right) \frac{\prod _{j=1}^{N-1}j!}{\mathsf {\Delta }_N\left( x^{(N)}\right) }{\textbf{1}}_{\left( x^{(1)}\prec \cdots \prec x^{(N)}\right) } dx^{(1)}\cdots dx^{(N-1)}. \end{aligned}$$
(92)

Then, \(\tau _{\text {col}}=\infty \) almost surely, the projection on the top row \(\left( {\textsf{x}}^{(N)}(t);t\ge 0\right) \) is distributed as a Markov process with semigroup \(\left( {\textsf{P}}_t^{(N)};t\ge 0\right) \) from (25) and for fixed time \(T\ge 0\) the distribution of \(\left( {\textsf{x}}^{(1)}(T),\dots ,{\textsf{x}}^{(N)}(T)\right) \) in \(\mathbb{I}\mathbb{A}_N\) is given by:

$$\begin{aligned} \left[ \mu {\textsf{P}}_T^{(N)}\right] \left( dx^{(N)}\right) \frac{\prod _{j=1}^{N-1}j!}{\mathsf {\Delta }_N\left( x^{(N)}\right) }{\textbf{1}}_{\left( x^{(1)}\prec \cdots \prec x^{(N)}\right) } dx^{(1)}\cdots dx^{(N-1)}. \end{aligned}$$
(93)

Proof

We apply Proposition 13.9 in [13]; see Section 13.3.1 there for the general setup. We take the generator \(L_n\) in display (13.31) therein to be that of our \({\textsf{L}}^{(n)}\)-diffusion. Then, the so-called dual or conjugate diffusion \(\widehat{{\textsf{L}}^{(n)}}\) has generator:

$$\begin{aligned} \widehat{{\textsf{L}}^{(n)}}={\textsf{a}}(x)\frac{d^2}{dx^2}+\left[ {\textsf{a}}'(x) -{\textsf{b}}^{(n)}(x)\right] \frac{d}{dx}={\textsf{a}}(x)\frac{d^2}{dx^2} -\left[ {\textsf{b}}(x)+(N-n-1){\textsf{a}}'(x)\right] \frac{d}{dx}, \end{aligned}$$

and note that if a boundary point is natural for \({\textsf{L}}^{(n)}\) it remains natural for \(\widehat{{\textsf{L}}^{(n)}}\), while if it is entrance for \({\textsf{L}}^{(n)}\) it is exit for \(\widehat{{\textsf{L}}^{(n)}}\) (see [25, 42, 58, 66] for this terminology and [13] for justifications). Associated to the diffusion \(\widehat{{\textsf{L}}^{(n)}}\) is a certain positive measure, called the speed measure (see also Proposition 2.1 and its proof, where this is also discussed), whose density with respect to the Lebesgue measure on \((l,r)\) is given by (this is denoted by \(\widehat{m^n}\) in the notation of [13]):

$$\begin{aligned} \widehat{{\textsf{m}}^{(n)}}(x)=\exp \left( -\int _{\zeta }^x\frac{{\textsf{b}}^{(n)}(y)}{{\textsf{a}}(y)}dy\right) . \end{aligned}$$

Here, \(\zeta \in (l,r)\) is an arbitrary (fixed) point; changing \(\zeta \) amounts to changing \(\widehat{{\textsf{m}}^{(n)}}\) by a multiplicative constant but this plays no role in what follows. We then pick the functions \(g_n, G_n\) and constants \(c_n\) in Proposition 13.9 (more precisely in Section 13.3.1 therein) of [13] as follows:

$$\begin{aligned} g_n(x_1,\dots ,x_n)&=\prod _{i=1}^n\left( \widehat{{\textsf{m}}^{(n+1)}}(x_i)\right) ^{-1}\mathsf {\Delta }_n(x),\\ G_n(x_1,\dots ,x_n)&=\frac{1}{(n-1)!}\mathsf {\Delta }_n(x),\\ c_n&=a_2\frac{(n+1)n(n-1)}{3}+(b_1+2a_2(N-n-1))\frac{(n+1)n}{2}. \end{aligned}$$

Observe that we have

$$\begin{aligned} \mathsf {\Delta }_n(x)=(n-1)!\int _{y\prec x} \prod _{i=1}^{n-1} \widehat{{\textsf{m}}^{(n)}}(y_i) \prod _{i=1}^{n-1} \left( \widehat{{\textsf{m}}^{(n)}}(y_i)\right) ^{-1} \mathsf {\Delta }_{n-1}(y)dy_1\cdots dy_{n-1} \end{aligned}$$

and hence the relation (13.33) in [13] holds. Moreover, a simple computation shows that \(\left( \widehat{{\textsf{m}}^{(n+1)}}\right) ^{-1}\) is an eigenfunction of \(\widehat{{\textsf{L}}^{(n+1)}}\) with eigenvalue \({\textsf{c}}^{(n)}=2(N-n-1)a_2+b_1\), and the corresponding Doob h-transform [36, 83, 87] by this function gives an \({\textsf{L}}^{(n)}\)-diffusion: \(\widehat{{\textsf{m}}^{(n+1)}}\circ \widehat{{\textsf{L}}^{(n+1)}}\circ \left( \widehat{{\textsf{m}}^{(n+1)}}\right) ^{-1}-{\textsf{c}}^{(n)}={\textsf{L}}^{(n)}\). At the level of transition densities in \((l,r)\) we then have:

$$\begin{aligned} e^{-{\textsf{c}}^{(n)}t}\frac{\widehat{{\textsf{m}}^{(n+1)}}(x)}{\widehat{{\textsf{m}}^{(n+1)}}(y)} e^{t\widehat{{\textsf{L}}^{(n+1)}}}(x,y)&=e^{t{\textsf{L}}^{(n)}}(x,y). \end{aligned}$$
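The eigenfunction computation behind this relation can be checked symbolically. The sketch below assumes, consistently with the constants \(a_2, b_1\) appearing in \(c_n\), that \({\textsf{a}}\) is quadratic, \({\textsf{b}}\) is affine and \({\textsf{b}}^{(k)}={\textsf{b}}+(N-k){\textsf{a}}'\) (read off from the displayed form of \(\widehat{{\textsf{L}}^{(n)}}\)); these are assumptions for illustration only.

```python
import sympy as sp

# Check that (m-hat^{(n+1)})^{-1} is an eigenfunction of the dual generator
# L-hat^{(n+1)} = a d^2/dx^2 + (a' - b^{(n+1)}) d/dx with constant eigenvalue
# c^{(n)} = 2(N-n-1)a2 + b1.  Assumed forms (illustrative): a quadratic,
# b affine, b^{(k)} = b + (N-k) a'.
x, a2, a1, a0, b1, b0, N, n = sp.symbols('x a2 a1 a0 b1 b0 N n')

a = a2 * x**2 + a1 * x + a0
b = b1 * x + b0
bk = lambda k: b + (N - k) * sp.diff(a, x)   # assumed drift of the L^(k)-diffusion

# h = (m-hat^{(n+1)})^{-1} = exp(int b^{(n+1)}/a); work with h'/h = b^{(n+1)}/a
# to avoid the explicit integral, using h''/h = (h'/h)' + (h'/h)**2.
logh_prime = bk(n + 1) / a
ratio = (a * (sp.diff(logh_prime, x) + logh_prime**2)
         - (bk(n + 1) - sp.diff(a, x)) * logh_prime)   # (L-hat h)/h

assert sp.simplify(ratio - (2 * (N - n - 1) * a2 + b1)) == 0
print("eigenfunction relation verified")
```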

We note that if a boundary point is exit for \(\widehat{{\textsf{L}}^{(n+1)}}\) its transition kernel has an atom there. This is disregarded when we consider the transition density \(e^{t\widehat{{\textsf{L}}^{(n+1)}}}(x,y)\) in \((l,r)\), which equivalently can be thought of as the transition kernel of an \(\widehat{{\textsf{L}}^{(n+1)}}\)-diffusion killed (instead of absorbed) when it reaches an exit boundary point. Thus, we readily check that

$$\begin{aligned}&e^{-c_{n-1}t}\frac{\mathsf {\Delta }_n(y)}{\mathsf {\Delta }_{n}(x)}\det \left( e^{t{\textsf{L}}^{(n)}}(x_i,y_j)\right) _{i,j=1}^n\\ {}&=e^{-c_n t}\frac{\prod _{i=1}^n\widehat{{\textsf{m}}^{(n+1)}}(x_i)\mathsf {\Delta }_n(y)}{\prod _{i=1}^n\widehat{{\textsf{m}}^{(n+1)}}(y_i)\mathsf {\Delta }_{n}(x)}\det \left( e^{t\widehat{{\textsf{L}}^{(n+1)}}}(x_i,y_j)\right) _{i,j=1}^n, \end{aligned}$$

and so display (13.34) in [13] holds. Finally, the assumptions (R), (BC+) and (YW) in Proposition 13.9 in [13] all hold by our standing assumption in Definition 1.1. We can thus apply Proposition 13.9 of [13], from which the conclusion follows by noting that

$$\begin{aligned} \prod _{n=1}^{N-1}{\mathfrak {L}}_n^{n+1}\left( x^{(n+1)},dx^{(n)}\right) = \frac{\prod _{j=1}^{N-1}j!}{\mathsf {\Delta }_{N}\left( x^{(N)}\right) }{\textbf{1}}_{\left( x^{(1)}\prec \cdots \prec x^{(N)}\right) }dx^{(1)}\cdots dx^{(N-1)}, \end{aligned}$$

where the Markov kernel \({\mathfrak {L}}^{n}_{n-1}\) from \({\mathbb {W}}_n^{\uparrow }\) to \({\mathbb {W}}^{\uparrow }_{n-1}\) is given by the following formula (for \(x\in {\mathbb {W}}_n^{\uparrow ,\circ }\), but it can be extended to general \(x\in {\mathbb {W}}_n^{\uparrow }\) by continuity, see for example [11]):

$$\begin{aligned} {\mathfrak {L}}_{n-1}^n(x,dy)=\frac{\prod _{i=1}^{n-1}\widehat{{\textsf{m}}^{(n)}}(y_i)g_{n-1}(y_1,\dots ,y_{n-1})}{G_{n}(x_1,\dots ,x_n)}{\textbf{1}}_{(y\prec x)}dy= \frac{(n-1)!\mathsf {\Delta }_{n-1}(y)}{\mathsf {\Delta }_n(x)}{\textbf{1}}_{(y\prec x)}dy. \end{aligned}$$

\(\square \)
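The volume identity behind relation (13.33), after the cancellation of the \(\widehat{{\textsf{m}}^{(n)}}\) factors, namely \(\mathsf {\Delta }_n(x)=(n-1)!\int _{y\prec x}\mathsf {\Delta }_{n-1}(y)dy\) (equivalently, that \({\mathfrak {L}}^n_{n-1}\) is a Markov kernel), can be checked directly for small n; a sketch for \(n=3\):

```python
import sympy as sp

# Symbolic check, for n = 3, of the identity
#   Delta_n(x) = (n-1)! * int_{y interlacing x} Delta_{n-1}(y) dy,
# where the interlacing region is x1 <= y1 <= x2 <= y2 <= x3.
x1, x2, x3, y1, y2 = sp.symbols('x1 x2 x3 y1 y2')

inner = sp.integrate(sp.integrate(y2 - y1, (y2, x2, x3)), (y1, x1, x2))
vandermonde3 = (x2 - x1) * (x3 - x1) * (x3 - x2)

assert sp.expand(sp.factorial(2) * inner - vandermonde3) == 0
print("interlacing volume identity verified for n = 3")
```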

Remark 6.2

Proposition 6.1 first appeared in the case of Brownian motion in the work of Warren [100]. Further examples (the Ornstein-Uhlenbeck, squared Bessel, radial Ornstein-Uhlenbeck and Jacobi cases) were discussed in [13].

Remark 6.3

The dynamics (91) can also be studied from singular initial conditions, when some of the coordinates of the top row coincide, using an entrance law (see [87] for the terminology) built from an entrance law for the dynamics of the top row \(\left( {\textsf{P}}_t^{(N)};t\ge 0\right) \); see [12, 13] for more details. In the particular case where all the coordinates of the top row are initially equal to some \(x_*\in (l,r)\), the resulting measure in (93) is seen to coincide (after a little computation), subject to identifying the set \({\mathcal {D}}_N\) from (46) with \(\mathbb{I}\mathbb{A}_N\) as discussed in Remark 4.5, with the measure in (45) with \(x_i\equiv x_*\). For other initial conditions these two measures are different. Nevertheless, they are still in some sense related and we explain this in more detail around Proposition 6.7 below.

Remark 6.4

By virtue of Proposition 6.1, the projection of the dynamics (91) in \(\mathbb{I}\mathbb{A}_N\) on the top row matches the evolution of eigenvalues of certain Hermitian matrix valued diffusions, or equivalently the dynamics (25), see Sect. 2 for references (we also note that, although the result is stated only for the top row, analogous results hold for the n-th row upon replacing \( {\textsf{L}}\) by \({\textsf{L}}^{(n)}\)). Moreover, it can be shown (and has been shown in the cases mentioned in Remark 6.2) that the measure (93) is also the distribution of the eigenvalues of consecutive sub-matrices of such a matrix valued diffusion at a fixed time \(T\ge 0\). This is certainly true in the general setting of this paper (assuming one introduces the right Hermitian matrix valued diffusion corresponding to \({\textsf{L}}\)) but we will not pursue it further here. Finally, we note that the joint dynamics of the eigenvalues of consecutive sub-matrices are different from (91), even in the case of Hermitian Brownian motion, see [1].

We now give a more direct connection between the semigroups of the interacting particle systems in (3), (5) and (25). Define the projections \({\textsf{E}}_{\uparrow }:\mathbb{I}\mathbb{A}_N \rightarrow {\mathbb {W}}_N^{\uparrow }\) and \({\textsf{E}}_{\downarrow }:\mathbb{I}\mathbb{A}_N \rightarrow {\mathbb {W}}_N^{\downarrow }\) as follows, see Fig. 2 for an illustration,

$$\begin{aligned} {\textsf{E}}_{\uparrow }\left[ (x^{(1)},x^{(2)},\dots ,x^{(N)})\right]&=(x_1^{(1)},x_2^{(2)},\dots ,x_N^{(N)}), \end{aligned}$$
(94)
$$\begin{aligned} {\textsf{E}}_{\downarrow }\left[ (x^{(1)},x^{(2)},\dots ,x^{(N)})\right]&=(x_1^{(1)},x_1^{(2)},\dots ,x_1^{(N)}). \end{aligned}$$
(95)

Observe that we can also think of \({\textsf{E}}_{\uparrow }, {\textsf{E}}_{\downarrow }\) as Markov kernels. Define the Markov kernel \(\mathsf {\Lambda }\) from \({\mathbb {W}}_N^{\uparrow ,\circ }\) to \(\mathbb{I}\mathbb{A}_N\) by, with \(Y^{(N)}=(y^{(1)},\dots ,y^{(N)})\):

$$\begin{aligned} \mathsf {\Lambda }\left( x,dY^{(N)}\right) =\frac{\prod _{j=1}^{N-1}j!}{\mathsf {\Delta }_N(x)} {\textbf{1}}_{\left( y^{(N)}=x,Y^{(N)}\in \mathbb{I}\mathbb{A}_N\right) }dy^{(1)}\cdots dy^{(N-1)}dy^{(N)}. \end{aligned}$$
(96)

Note that \(\mathsf {\Lambda }\left( x,\cdot \right) \) is simply the distribution of a uniformly random interlacing array in \(\mathbb{I}\mathbb{A}_N\) with top row x. A key role is played by the kernels \(\mathsf {\Lambda }{\textsf{E}}_{\uparrow }\) and \(\mathsf {\Lambda }{\textsf{E}}_{\downarrow }\) from \({\mathbb {W}}_N^{\uparrow ,\circ }\) to \({\mathbb {W}}_N^{\uparrow }\) and to \({\mathbb {W}}_N^{\downarrow }\) respectively: we simply pick a random array with fixed top row and then project to either edge. We have the following intertwining relations.

Proposition 6.5

Let \(t \ge 0\). Then, under the standing assumption in Definition 1.1, we have the intertwinings

$$\begin{aligned} {\textsf{P}}_t^{(N)}\mathsf {\Lambda }{\textsf{E}}_{\downarrow }=\mathsf {\Lambda }{\textsf{E}}_{\downarrow } {\textsf{S}}_t^{\downarrow ,(N)}, \end{aligned}$$
(97)
$$\begin{aligned} {\textsf{P}}_t^{(N)}\mathsf {\Lambda }{\textsf{E}}_{\uparrow }=\mathsf {\Lambda }{\textsf{E}}_{\uparrow } {\textsf{S}}_t^{\uparrow ,(N)}. \end{aligned}$$
(98)

Proof

Let \(\left( {\textsf{A}}_t^{(N)};t\ge 0\right) \) be the semigroup of the process \(({\textsf{X}}^{(N)}(t)=({\textsf{x}}^{(1)}(t),\dots ,{\textsf{x}}^{(N)}(t));t\ge 0)\) following the dynamics (91) in the array \(\mathbb{I}\mathbb{A}_N\). Observe that, since the projections \(\left( {\textsf{E}}_{\uparrow }\left( {\textsf{X}}^{(N)}(t)\right) ;t\ge 0\right) \) and \(\left( {\textsf{E}}_{\downarrow }\left( {\textsf{X}}^{(N)}(t)\right) ;t\ge 0\right) \) are autonomous and given by the interacting particle systems (3) and (5) respectively, by Dynkin’s criterion [39] we have the intertwinings, for any \(t\ge 0\):

$$\begin{aligned} {\textsf{A}}_t^{(N)}{\textsf{E}}_{\downarrow }&={\textsf{E}}_{\downarrow }{\textsf{S}}_t^{\downarrow ,(N)}, \end{aligned}$$
(99)
$$\begin{aligned} {\textsf{A}}_t^{(N)}{\textsf{E}}_{\uparrow }&={\textsf{E}}_{\uparrow }{\textsf{S}}_t^{\uparrow ,(N)}. \end{aligned}$$
(100)

We now exhibit an intertwining between \({\textsf{A}}_t^{(N)}\) and \({\textsf{P}}_t^{(N)}\) which is less obvious. We follow the filtering framework for intertwinings presented in [30]. Suppose we are in the setting of Proposition 6.1, with the dynamics (91) initialized according to a probability measure of the form (92). Consider the natural filtrations \(\left( {\mathcal {F}}_t\right) _{t\ge 0}\) and \(\left( {\mathcal {G}}_t\right) _{t\ge 0}\) of \({\textsf{X}}^{(N)}\) and \({\textsf{x}}^{(N)}\):

$$\begin{aligned} {\mathcal {F}}_t=\sigma \left( {\textsf{X}}^{(N)}(s)|s\le t\right) , \ \ {\mathcal {G}}_t=\sigma \left( {\textsf{x}}^{(N)}(s)|s\le t\right) . \end{aligned}$$

By Proposition 6.1, \(\left( {\textsf{x}}^{(N)}(t);t\ge 0\right) \) is Markov with respect to its natural filtration \(\left( {\mathcal {G}}_t\right) _{t\ge 0}\) with semigroup \(\left( {\textsf{P}}^{(N)}_t;t\ge 0\right) \). Moreover, by construction, see Corollary 13.1 and the argument for the proof of Proposition 13.9 in [13], we have that the conditional distribution of \({\textsf{X}}^{(N)}(t)\) given \({\mathcal {G}}_t\), namely the history of \(({\textsf{x}}^{(N)}(s);s\le t)\), is uniform on arrays in \(\mathbb{I}\mathbb{A}_N\) with top row \({\textsf{x}}^{(N)}(t)\) (the same holds if we only condition on \(\widetilde{{\mathcal {G}}}_t=\sigma \left( {\textsf{x}}^{(N)}(t)\right) \) and this would also give the result below). More formally, for any \(t\ge 0\) and Borel function \(f:\mathbb{I}\mathbb{A}_N \rightarrow {\mathbb {R}}_+\) we have:

$$\begin{aligned} {\mathbb {E}}\left[ f\left( {\textsf{X}}^{(N)}(t)\right) \big | {\mathcal {G}}_t\right] =\mathsf {\Lambda }f\left( {\textsf{x}}^{(N)}(t)\right) . \end{aligned}$$

Let \({\textsf{x}}^{(N)}(0)=x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) and a Borel function \(f:\mathbb{I}\mathbb{A}_N \rightarrow {\mathbb {R}}_+\) be arbitrary. Then, we can compute the following expectation in two ways:

$$\begin{aligned} {\mathbb {E}}\left[ f\left( {\textsf{X}}^{(N)}(t)\right) \right]&=\left[ \mathsf {\Lambda }{\textsf{A}}_t^{(N)}f\right] (x),\\ {\mathbb {E}}\left[ f\left( {\textsf{X}}^{(N)}(t)\right) \right]&={\mathbb {E}}\left[ {\mathbb {E}}\left[ f\left( {\textsf{X}}^{(N)}(t)\right) \big |{\mathcal {G}}_t\right] \right] ={\mathbb {E}}\left[ \mathsf {\Lambda }f\left( {\textsf{x}}^{(N)}(t)\right) \right] =\left[ {\textsf{P}}_t^{(N)}\mathsf {\Lambda }f\right] (x). \end{aligned}$$

Hence, since \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) and \(f:\mathbb{I}\mathbb{A}_N \rightarrow {\mathbb {R}}_+\) were arbitrary we have the intertwining, for \(t\ge 0\):

$$\begin{aligned} {\textsf{P}}_t^{(N)}\mathsf {\Lambda }=\mathsf {\Lambda }{\textsf{A}}_t^{(N)}. \end{aligned}$$
(101)

By putting (99) and (100) together with (101) we get the statement of the proposition. \(\square \)

We now discuss an alternative route to Proposition 4.4. For concreteness we focus on the right edge dynamics \({\textsf{S}}_t^{\uparrow ,(N)}\) but the situation for \({\textsf{S}}_t^{\downarrow ,(N)}\) is completely analogous. Define the operator \(\left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}\) acting on sufficiently smooth functions g on \({\mathbb {W}}_N^{\uparrow ,\circ }\) as follows:

$$\begin{aligned} \left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}g(x)=\frac{1}{\prod _{j=1}^{N-1}j!} \prod _{i=1}^{N-1}\left( -\partial _{x_i}\right) ^{N-i}\left[ g(x)\mathsf {\Delta }_N(x)\right] . \end{aligned}$$
(102)

Then, \(\left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}\) is a left inverse to the Markov kernel \(\mathsf {\Lambda }{\textsf{E}}_\uparrow \) as shown below.

Lemma 6.6

Let f be a continuous function on \({\mathbb {W}}_N^\uparrow \). Then, with \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\), we have

$$\begin{aligned} \left[ \left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}\mathsf {\Lambda }{\textsf{E}}_\uparrow f\right] (x) =f(x). \end{aligned}$$

Proof

We prove this by induction on N. Observe that we can write explicitly:

$$\begin{aligned} \mathsf {\Lambda }{\textsf{E}}_\uparrow f(x)=\frac{\prod _{j=1}^{N-1}j!}{\mathsf {\Delta }_N(x)}\int _{x^{(1)}\prec x^{(2)} \prec \cdots \prec x^{(N)}=x}f\left( x_1^{(1)},x_2^{(2)},\dots ,x_N^{(N)}\right) dx^{(1)}dx^{(2)}\cdots dx^{(N-1)}.\nonumber \\ \end{aligned}$$
(103)

Note that the above integral is \(2^{-1}N(N-1)\)-dimensional. The base case of the induction, \(N=2\), is then equivalent to the identity

$$\begin{aligned} -\partial _{x_1}\int _{x_1}^{x_2}f(y,x_2)dy=f(x_1,x_2). \end{aligned}$$

For the inductive step we first apply \(\left( -\partial _{x_1}\right) ^{N-1}\) to the iterated integral in (103) by sequentially applying \((N-1)\) times the operator \(-\partial _{x_1}\) (also observe that the Vandermonde determinants cancel out). This annihilates the integrals in the variables \(x_1^{(N-1)},x_1^{(N-2)},\dots ,x_1^{(1)}\) and plugs in \(x_1\) for \(x_1^{(1)}\) in f. Then, after appropriate relabelling of the variables, we are left with a \(2^{-1}(N-1)(N-2)\)-dimensional integral which is exactly in the form required to apply the inductive hypothesis (we are basically looking at the interlacing array of length \(N-1\) and top row \(\big (x_2^{(N)},\dots ,x_N^{(N)}\big )=(x_2,\dots ,x_N)\) obtained by removing the left edge coordinates). This completes the proof. \(\square \)
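The base case, and the way the inverse operator acts, can be checked symbolically for \(N=2\) with an arbitrary smooth test function:

```python
import sympy as sp

# Check of the N = 2 case of Lemma 6.6: here Lambda E_up f (x1,x2) is the
# average of f(y, x2) over y in [x1, x2], and the inverse operator (102) is
# g -> -d/dx1 [ g * (x2 - x1) ].  The test function f is an arbitrary choice.
x1, x2, y = sp.symbols('x1 x2 y')
f = sp.exp(x1) * sp.sin(x2) + x1**2 * x2       # illustrative smooth function

g = sp.integrate(f.subs(x1, y), (y, x1, x2)) / (x2 - x1)   # Lambda E_up f
recovered = -sp.diff(g * (x2 - x1), x1)                     # (Lambda E_up)^{-1} g

assert sp.simplify(recovered - f) == 0
print("N = 2 inverse relation verified")
```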

Hence, by applying \(\left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}\) to both sides of the relevant intertwining in Proposition 6.5, we obtain

$$\begin{aligned} {\textsf{S}}_t^{\uparrow ,(N)}=\left( \mathsf {\Lambda }{\textsf{E}}_{\uparrow }\right) ^{-1} {\textsf{P}}_t^{(N)}\mathsf {\Lambda }{\textsf{E}}_{\uparrow }. \end{aligned}$$
(104)

This gives an alternative expression to the one in (40) for \({\textsf{S}}_t^{\uparrow ,(N)}\) (by combining Propositions 4.4 and 6.7 below it follows directly that the right hand side of (104) coincides with the expression in (40)). We note that the argument just used to obtain (104) appears to have first been used in [35] to study the transition kernels of some discrete interacting particle systems related to the RSK algorithm. Our interest in expression (104) is that it provides a rather clear path to Proposition 4.4. Namely, consider the following measure on \(\mathbb{I}\mathbb{A}_N\), with \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) (this extends to \(x\in {\mathbb {W}}_N^\uparrow \) as the Vandermonde determinants in the formula cancel out):

$$\begin{aligned} \left( \mathsf {\Lambda }{\textsf{E}}_{\uparrow }\right) ^{-1}{\textsf{P}}_t^{(N)}\mathsf {\Lambda }\left( x,\cdot \right) . \end{aligned}$$
(105)

Now, by virtue of (104), \({\textsf{S}}_t^{\uparrow ,(N)}(x,\cdot )\) is seen to be the right edge marginal of (105) in \(\mathbb{I}\mathbb{A}_N\). The following proposition then gives an alternative, if rather roundabout (given all the preliminaries we need), way of obtaining Proposition 4.4, which is maybe conceptually more satisfying.

Proposition 6.7

The measure (45), subject to identifying \({\mathcal {D}}_N\) from (46) with \(\mathbb{I}\mathbb{A}_N\), coincides with the measure (105).

Proof

This follows by simply writing out the explicit expressions for \({\textsf{P}}_t^{(N)}\), \(\mathsf {\Lambda }\) and \(\left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}\) given in (24), (96) and (102) respectively, noting that

$$\begin{aligned} \prod _{i=1}^{N-1}\left( -\partial _{x_i}\right) ^{N-i}=(-1)^{\frac{N(N-1)}{2}}\prod _{i=1}^{N-1}\partial _{x_i}^{N-i} \ \text { and } \ e^{-t\sum _{k=1}^{N-1}k{\textsf{c}}^{(k)}}= e^{-t\lambda _N}, \end{aligned}$$

and comparing with the explicit expression in (45). \(\square \)

Finally, observe that for a deterministic initial condition \(x\in {\mathbb {W}}_N^{\uparrow ,\circ }\) for the top row, the measure (92) on \(\mathbb{I}\mathbb{A}_N\) can be written as \(\mathsf {\Lambda }(x,\cdot )\), and by Proposition 6.1 its evolution under the dynamics (91) after time t is given by \({\textsf{P}}_t^{(N)}\mathsf {\Lambda }(x,\cdot )\), which is related (by virtue of the above) to the signed measure (45) by an application of \(\left( \mathsf {\Lambda }{\textsf{E}}_\uparrow \right) ^{-1}\).

We conclude with the following immediate corollary of Proposition 6.1. Of course, this was known for the diffusions discussed in Remark 6.2. In the Brownian case it goes back even earlier, via certain path transformations related to the RSK algorithm, see [27, 81].

Corollary 6.8

Let \(\mu \) be a probability measure supported on \({\mathbb {W}}_N^{\uparrow ,\circ }\). Suppose the particle system \(\left( {\textsf{z}}(t);t\ge 0\right) \) following (25) is initialized according to \(\mu \) and the particle systems \(\left( {\textsf{x}}^\downarrow (t);t \ge 0\right) \) and \(\left( {\textsf{x}}^\uparrow (t);t \ge 0\right) \) following (3) and (5) respectively are initialized according to \(\mu \mathsf {\Lambda }{\textsf{E}}_{\uparrow }\) and \(\mu \mathsf {\Lambda }{\textsf{E}}_{\downarrow }\). Then, we have the following equality in distribution at the process level:

$$\begin{aligned} \left( {\textsf{z}}_1(t);t\ge 0\right)&\overset{\text {d}}{=} \left( {\textsf{x}}_N^\downarrow (t);t \ge 0\right) ,\\ \left( {\textsf{z}}_N(t);t\ge 0\right)&\overset{\text {d}}{=} \left( {\textsf{x}}_N^\uparrow (t);t \ge 0\right) . \end{aligned}$$

Remark 6.9

For completeness we also record explicitly the probabilistic interpretation of the intertwinings (99) and (100). These relations imply that the projections under \({\textsf{E}}_{\downarrow }\) and \({\textsf{E}}_{\uparrow }\) of the full interlacing Markov process (91) are Markovian with semigroups \({\textsf{S}}_t^{\downarrow ,(N)}\) and \({\textsf{S}}_t^{\uparrow ,(N)}\) respectively (this of course can also be observed directly from the SDEs, as we have done already). This is an instance of Dynkin’s criterion [39] for when a process given as a function (in this case \({\textsf{E}}_\downarrow \) or \({\textsf{E}}_{\uparrow }\)) of a Markov process is Markov itself.