1 Introduction

By classical results, the empirical distributions of \(\beta \)-Hermite, \(\beta \)-Laguerre, and \(\beta \)-Jacobi ensembles of dimension N tend for \(N\rightarrow \infty \), after suitable normalizations, to semicircle, Marchenko–Pastur, Kesten–McKay, and Wachter distributions; see e.g. [5, 12, 21, 29, 41]. Moreover, in the Hermite and Laguerre cases, there are dynamical versions of these results in terms of Bessel processes \((X_t^N)_{t\ge 0}\) of dimension N for the root systems of types A and B; see [7, 31] for the background on these processes. Namely, let \(\mu \) be some starting distribution on \({\mathbb {R}}\) or \([0,\infty [\), and, for \(N\in {\mathbb {N}}\), let \(x_N\) be starting vectors in \({\mathbb {R}}^N\) such that the empirical distributions of the components of the \(x_N\) tend to \(\mu \). If we consider the Bessel processes \((X_t^N)_{t\ge 0}\) started at these points \(x_N\), then under mild additional conditions and with an appropriate scaling, the empirical distributions of the components of the \(X_t^N\) converge for \(N\rightarrow \infty \) almost surely weakly to measures \(\mu _t\) for all \(t\ge 0\). In the \(\beta \)-Hermite case, i.e. for Bessel processes of type A, one has \(\mu _t=\mu \boxplus \mu _{sc, 2\sqrt{t}}\), where \(\mu _{sc, 2\sqrt{t}}\) is the semicircle distribution with radius \(2\sqrt{t}\) and \(\boxplus \) the usual additive free convolution; see Section 4.3 of [2] and [39] for different approaches. Moreover, for the \(\beta \)-Laguerre case, i.e. for Bessel processes of type B, there are corresponding results in terms of Marchenko–Pastur distributions and a construction in [39] which is related to the rectangular free convolutions in [3, 4]. These results for Bessel processes also hold for stationary Ornstein–Uhlenbeck-type versions, as indicated at the end of Section 2 of [39]. For the background on stochastic analysis and free probability we recommend [2, 25, 26, 30].

In this paper, we show that such limit results also appear, for \(N\rightarrow \infty \), for the N-dimensional Jacobi processes on \([-1,1]^N\) which were introduced and studied in [9, 10, 14, 27, 28, 32, 33, 38]. These processes depend on three parameters and may be described in several ways. For instance, in the compact case, motivated by the theory of special functions associated with root systems of Heckman and Opdam [17, 18], one can define these processes as diffusions on the alcoves

$$\begin{aligned} {\tilde{A}}_N:= \{\theta \in [0,\pi ]^N: \> 0\le \theta _N\le \ldots \le \theta _1\le \pi \} \end{aligned}$$

with the Heckman–Opdam Laplacians

$$\begin{aligned} L_\mathrm{{trig}, k}f(\theta ):= & {} \Delta f(\theta ) + \sum _{i=1}^N\Biggl (k_1 \cot (\theta _i/2)+ 2k_2 \cot (\theta _i)\\{} & {} +k_3\sum _{j: j\ne i} \Bigl ({\cot }(\frac{\theta _i-\theta _j}{2})+{\cot }(\frac{\theta _i+\theta _j}{2})\Bigr )\Biggr )f_{\theta _i}(\theta ) \end{aligned}$$

of type BC as generators, with multiplicities \(k_1\in \mathbb R\), \(k_2\ge 0\), \(k_3>0\) satisfying \(k_1+k_2\ge 0\), and with reflecting boundaries. It is convenient to transform these objects from this trigonometric form into an algebraic form as in [10, 38] via \(x_i:=\cos \theta _i\) (\(i=1,\ldots ,N\)). We then obtain diffusions on the alcoves

$$\begin{aligned} A_N:=\{x\in {\mathbb {R}}^N: \> -1\le x_1\le \ldots \le x_N\le 1\} \end{aligned}$$

with the algebraic Heckman–Opdam Laplacians

$$\begin{aligned} L_kf(x){} & {} := \sum _{i=1}^N (1-x_i^2)f_{x_i,x_i}(x)\nonumber \\{} & {} \quad \ + \sum _{i=1}^N\Biggl ( -k_1-(1+k_1+2k_2)x_i +2k_3\sum _{j: j\ne i} \frac{1-x_i^2}{x_i-x_j}\Biggr )f_{x_i}(x). \end{aligned}$$
(1.1)

as generators with reflecting boundaries. The eigenfunctions of the \(L_k\) are Heckman–Opdam Jacobi polynomials, and the transition probabilities can be expressed via these polynomials; see [10, 27, 28]. On the other hand, these processes \((X_t=(X_{t,1}, \ldots ,X_{t,N}))_{t\ge 0}\) can be described as unique strong solutions of the stochastic differential equations (SDEs)

$$\begin{aligned} \textrm{d}X_{t,i}{} & {} = \sqrt{2(1-X_{t,i}^2)}\> \textrm{d}B_{t,i}\nonumber \\{} & {} \quad \ + \Bigl ( -k_1-(1+k_1+2k_2)X_{t,i} +2k_3\sum _{j: j\ne i} \frac{1-X_{t,i}^2}{X_{t,i}-X_{t,j}}\Bigr )\textrm{d}t \end{aligned}$$
(1.2)

for \(i=1,\ldots ,N\) with an N-dimensional Brownian motion \((B_t)_{t\ge 0}\) and with paths which are reflected on \(\partial A_N\); see [10, 16, 32, 33]. Following [10], we also transform the parameters via

$$\begin{aligned}{} & {} \kappa :=k_3>0, \quad q:= N-1+\frac{1+2k_1+2k_2}{2k_3}>N-1, \nonumber \\{} & {} p:= N-1+\frac{1+2k_2}{2k_3}>N-1, \end{aligned}$$
(1.3)

and rewrite (1.2) as

$$\begin{aligned} \textrm{d}X_{t,i}{} & {} = \sqrt{2(1-X_{t,i}^2)}\> \textrm{d}B_{t,i}\nonumber \\{} & {} \quad \ +\kappa \Bigl ((p-q) -(p+q)X_{t,i} + 2\sum _{j: \> j\ne i}\frac{1-X_{t,i}X_{t,j}}{X_{t,i}-X_{t,j}}\Bigr )\textrm{d}t \end{aligned}$$
(1.4)

for \(i=1,\ldots ,N\) and \(t>0\). It is known (see e.g. [10, 14]) that for \(\kappa \ge 1\) and \(p,q\ge N-1+2/\kappa \), the process does not meet \(\partial A_N\) almost surely.
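Since the drift in (1.4) is explicit, the SDE is easy to explore numerically. The following is a minimal Euler–Maruyama sketch, not the reflecting construction of [10, 16, 32, 33]: the clipping step is only a crude stand-in for the reflecting boundary, and the parameters \(N=3\), \(\kappa =2\), \(p=q=10\) are illustrative choices, not taken from the text.

```python
import numpy as np

def jacobi_sde_euler(x0, kappa, p, q, dt=1e-4, steps=1000, rng=None):
    """Euler-Maruyama sketch of SDE (1.4). The final clip crudely
    replaces the reflecting boundary behaviour on [-1,1]^N."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    N = len(x)
    for _ in range(steps):
        drift = np.empty(N)
        for i in range(N):
            inter = sum((1 - x[i] * x[j]) / (x[i] - x[j])
                        for j in range(N) if j != i)
            drift[i] = kappa * ((p - q) - (p + q) * x[i] + 2 * inter)
        # diffusion coefficient sqrt(2(1 - x_i^2)), clamped at 0
        noise = np.sqrt(np.maximum(2 * (1 - x**2), 0.0)) \
            * rng.standard_normal(N) * np.sqrt(dt)
        x = np.clip(x + drift * dt + noise, -1.0, 1.0)
    return np.sort(x)

x = jacobi_sde_euler([-0.5, 0.0, 0.5], kappa=2.0, p=10.0, q=10.0)
```

For parameters with \(\kappa \ge 1\) and \(p,q\ge N-1+2/\kappa \), the true paths stay in the interior of \(A_N\), so for small step sizes the clipping should essentially never be active.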

It is also useful to consider the transformed processes \((\tilde{X}_{t}:=X_{t/\kappa })_{t\ge 0}\), which satisfy

$$\begin{aligned} \textrm{d}{\tilde{X}}_{t,i}{} & {} = \frac{\sqrt{2}}{\sqrt{\kappa }} \sqrt{1-\tilde{X}_{t,i}^2}\> \textrm{d}{\tilde{B}}_{t,i} \nonumber \\{} & {} \quad \ +\Bigl ((p-q) -(p+q){\tilde{X}}_{t,i} + 2\sum _{j:\> j\ne i}\frac{1-{\tilde{X}}_{t,i}{\tilde{X}}_{t,j}}{\tilde{X}_{t,i}-{\tilde{X}}_{t,j}}\Bigr )\textrm{d}t \end{aligned}$$
(1.5)

for \( i=1,\ldots ,N\). For \(\kappa =\infty \) and \(p,q>N-1\), these SDEs degenerate into the ODE

$$\begin{aligned} \frac{d}{\textrm{d}t}x_i(t) =(p-q)-(p+q)x_i(t) +2\sum _{j:\> j\ne i}\frac{1-x_i(t) x_j(t)}{x_i(t)-x_j(t)}\quad ( i=1,\dots ,N)\nonumber \\ \end{aligned}$$
(1.6)

which is of interest in its own right and closely related to the classical Jacobi polynomials \((P_N^{(\alpha ,\beta )})_{N\ge 0}\) on \([-1,1]\) with the parameters

$$\begin{aligned} \alpha :=q-N>-1, \quad \beta :=p-N>-1. \end{aligned}$$

These polynomials are orthogonal w.r.t. the weights \((1-x)^\alpha (1+x)^\beta \) as usual; see Ch. 4 of [37]. The ODE (1.6) has the following properties, which will be proved in the Appendix:

Theorem 1.1

Let \(N\in {\mathbb {N}}\) and \(p,q> N-1\). Then, for each \(x_0\in A_N\), (1.6) has a unique solution x(t) for \(t\ge 0\) in the sense that there is a unique continuous function \(x:[0,\infty )\rightarrow A_N\) with \(x(0)=x_0\) such that for \(t>0\), \(x(t)\in A_N{\setminus }\partial A_N\) holds and satisfies (1.6).

Moreover, this solution satisfies \(\lim _{t\rightarrow \infty }x(t)=z\in A_N{\setminus }\partial A_N\) where the coordinates of z are the ordered roots of \(P_N^{(q-N,p-N)}\). This z is the only stationary solution of (1.6) in \(A_N\).
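The freezing statement of Theorem 1.1 can be checked numerically by integrating (1.6) with the forward Euler method. In the sketch below the parameters \(N=3\), \(p=q=10\) are illustrative; for this symmetric case \(P_3^{(7,7)}\) is proportional to \(x(x^2-3/19)\) (via its Gegenbauer form), so the ordered roots are \(-\sqrt{3/19},\,0,\,\sqrt{3/19}\).

```python
import numpy as np

def jacobi_ode_flow(x0, p, q, dt=1e-3, steps=3000):
    """Forward-Euler integration of the ODE (1.6)."""
    x = np.array(x0, dtype=float)
    N = len(x)
    for _ in range(steps):
        drift = np.array([
            (p - q) - (p + q) * x[i]
            + 2 * sum((1 - x[i] * x[j]) / (x[i] - x[j])
                      for j in range(N) if j != i)
            for i in range(N)])
        x = x + dt * drift
    return x

# start at an arbitrary interior point of A_3 and run until frozen
x_inf = jacobi_ode_flow([-0.5, 0.1, 0.6], p=10.0, q=10.0)
roots = np.array([-np.sqrt(3 / 19), 0.0, np.sqrt(3 / 19)])
```

Since the fixed points of the Euler map are exactly the zeros of the drift, the iteration converges to the roots of \(P_N^{(q-N,p-N)}\) up to numerical precision.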

The stationary solution \(z\in A_N\) of (1.6) is the freezing limit, for \(\kappa \rightarrow \infty \) with fixed p, q, of the stationary distributions of the corresponding Jacobi processes; these stationary distributions belong to the corresponding \(\beta \)-Jacobi (or \(\beta \)-MANOVA) ensembles with the densities

$$\begin{aligned} c(k_1,k_2,k_3)\cdot \prod _{i=1}^N (1-x_i)^{k_1+k_2-1/2}(1+x_i)^{k_2-1/2} \cdot \prod _{i<j}|x_i-x_j|^{2k_3} \end{aligned}$$
(1.7)

with known Selberg-type norming constants \(c(k_1,k_2,k_3)\). We recapitulate that, possibly after some obvious transformation, these measures appear as the distributions of the ordered eigenvalues of the tridiagonal models in [22, 23] and in log gas models on \([-1,1]\); see [15]. Moreover, for certain parameters, these distributions and the Jacobi processes have an interpretation as invariant distributions and Brownian motions on compact Grassmann manifolds of rank N over \({\mathbb {F}} ={\mathbb {R}}, {\mathbb {C}}\) and the quaternions, via the connection between the Heckman–Opdam theory and spherical functions; see [10, 17, 18, 27, 28]. Even more generally, for some parameters these objects appear as the ordered eigenvalues of the matrices \(B^*B\) for upper left blocks B of size \(M\times N\) of Haar-distributed random matrices and Brownian motions in the unitary group \(U(R,{\mathbb {F}})\), respectively, with the dimension parameters \(R> M> N\); see [10, 14].

We now turn to the main content of this paper. Following [39], we derive several almost sure limit theorems as \(N\rightarrow \infty \) for the empirical distributions of the Jacobi processes \((\tilde{X}_t^N)_{t\ge 0}\) and of their deterministic freezing limits for \(\kappa =\infty \) satisfying (1.6), which are related to the mean field limits of [34]. Concerning the parameters \(p,q,\kappa \), it will turn out that the limits depend on \(\kappa \) in a trivial way, while the parameters p, q, which depend on N, lead after suitable affine-linear transformations to different limit distributions. The different cases are motivated by the stationary deterministic case, where the empirical distributions of the roots of the classical Jacobi polynomials appear; in this setting, several limiting regimes with semicircle, Marchenko–Pastur and Wachter distributions were derived in [13]. We follow this decomposition into cases and start with the deterministic case. However, compared to the stationary case, we obtain not one but several limit results by using different time scalings. For this we derive recurrence relations for the moments and PDEs for the Cauchy and R-transforms of the empirical distributions of the solutions in a general setting in Sect. 2; see in particular Eq. (2.7). This PDE can then be applied to the different regimes. In Sect. 3, we shall do so for two regimes where semicircle and Marchenko–Pastur distributions appear. In the semicircle case we obtain the following result, where \(\mu _{sc,\tau }\) is the semicircle law with support \([-\tau ,\tau ]\) for \(\tau \ge 0\) and \(\mu _{sc,0}=\delta _0\):

Theorem 1.2

Consider sequences \((p_N)_{N\in {\mathbb {N}}},(q_N)_{N\in \mathbb N}\subset ]0,\infty [\) with \(\lim _{N\rightarrow \infty }p_N/N=\infty \) and \(\lim _{N\rightarrow \infty }q_N/N=\infty \) such that \(C:=\lim _{N\rightarrow \infty }p_N/q_N\ge 0\) exists. Define

$$\begin{aligned} a_N:=\frac{q_N}{\sqrt{Np_N}}, \quad b_N:=\frac{p_N-q_N}{p_N+q_N} \quad (N\in {\mathbb {N}}). \end{aligned}$$

Let \(\mu \in M^1({\mathbb {R}})\) be a probability measure satisfying some moment condition (see Theorem 3.1 for the details), and let \((x_N=(x_{1}^N,\dots ,x_{N}^N))_{N\in {\mathbb {N}}}\) be starting vectors \(x_N\in A_N\) such that all moments of the empirical measures

$$\begin{aligned} \mu _{N,0}:=\frac{1}{N}\sum _{i=1}^N\delta _{a_N( x_i^N-b_N)} \end{aligned}$$

tend to those of \(\mu \) for \(N\rightarrow \infty \). Let \(x_N(t)\) be the solutions of the ODEs (1.6) with \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}\). Then, for all \(t>0\), all moments of the empirical measures

$$\begin{aligned} \mu _{N,t/(p_N+q_N)} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/(p_N+q_N))-b_N)}\end{aligned}$$

tend to those of the probability measures

$$\begin{aligned} \mu _t:= (e^{-t}\mu ) \boxplus \left( \sqrt{1-e^{-2t}}\mu _{sc,4(1+C)^{-3/2}}\right) . \end{aligned}$$

This in particular implies that the \(\mu _{N,t/(p_N+q_N)}\) tend weakly to the \(\mu _t\).

Theorem 1.2 is a local limit theorem for our particle systems around the points \(b_N\in ]-1,1[\) for small times on the space scale \(1/a_N\). We point out that this local result preserves the asymptotic stationarity of the global systems. In fact, there are local limit results on different time and space scales in Sect. 3 where this asymptotic stationarity does not appear; see e.g. Theorem 3.4. We also derive Marchenko–Pastur-type local limit results in neighbourhoods of the boundary points \(\pm 1\) in Sect. 3; see e.g. Theorem 3.7. In the proof of this theorem we again solve the associated PDE for the R-transforms; a modification of this PDE also appears in [6].

There are further limit regimes involving Kesten–McKay and Wachter distributions, which are also motivated by [13] and by limit results for \(\beta \)-Jacobi ensembles in [12, 41]. In these cases it can be shown that, under corresponding prerequisites on the initial conditions, the empirical measures \(\mu _{N,t/(p_N+q_N)}\) also converge to some probability measures \(\mu _t\) for \(t\ge 0\). However, the details of the limits are more involved here. In contrast to the results presented in this work, these limits cannot easily be described by free convolutions; instead, one canonically uses free projections; see the remark at the end of Sect. 4.

The deterministic results of Sect. 3 will be extended in Sect. 4 to almost sure versions for Jacobi processes with fixed parameter \(\kappa \) in the compact setting. We mention that for \(\kappa \in \{1/2,1\}\), these results can also be derived via the approach in [24, 36] for Jacobi matrix models by verifying the conditions of Theorem 2.1 in [36]. However, our direct approach has the advantage that we obtain appropriate scalings in a more obvious way, as by our freezing technique all information on the limiting behaviour for \(N\rightarrow \infty \) is already encoded in the deterministic dynamical systems. This provides a simple unifying approach without matrix models.

In Sect. 5, we transfer some of our results from Sects. 2, 3 and 4 to a noncompact setting. For some parameters, these results have interpretations in terms of Brownian motions on noncompact Grassmann manifolds. It will turn out that in these cases, some results remain valid up to some time inversion. However, it seems that no analogue of the stationary results like Theorem 1.2 holds here, as the initial conditions do not fit the prerequisites on the parameters \(p_N,q_N, a_N, b_N\). Finally, we prove Theorem 1.1 and its noncompact analogue in the Appendix.

2 Moments of the Empirical Distributions in the Deterministic Case

In this section, we study the solutions \(x^N(t)\) of the ODEs (1.6) for suitable initial conditions \(x_0^N\in A_N\) and suitable \(p=p_N, q=q_N>N-1\) in the limit \(N\rightarrow \infty \); note that the condition \(p_N, q_N>N-1\) then forces \(p_N, q_N\rightarrow \infty \). There are several limit regimes for the empirical measures

$$\begin{aligned} \frac{1}{N}(\delta _{x^N_1(t)}+\ldots +\delta _{x^N_N(t)})\in M^1([-1,1]) \end{aligned}$$

for \(N\rightarrow \infty \) and \(t\ge 0\), under the condition that a corresponding limit holds at time \(t=0\). For some of these limits we must transform the data in all coordinates in an affine-linear way depending on N. For this, we introduce suitable sequences \((a_N)_{N\in {\mathbb {N}} }\subset ]0,\infty [\) and \((b_N)_{N\in {\mathbb {N}}}\subset {\mathbb {R}} \), which will be specified later for the specific situations. We consider the transformed solutions \(\tilde{x}^N(t)=(\tilde{x}^N_{1}(t),\ldots , \tilde{x}^N_{N}(t))\) with

$$\begin{aligned} \tilde{x}^N_{i}(t):=a_N(x^N_{i}(t)-b_N) \quad (1\le i\le N) \end{aligned}$$

as well as the transformed empirical distributions

$$\begin{aligned} \mu _{N,t}:=\frac{1}{N}(\delta _{\tilde{x}^N_1(t)}+\ldots +\delta _{{\tilde{x}}^N_N(t)})= \frac{1}{N}(\delta _{a_N(x_1^N(t)-b_N)}+\ldots +\delta _{a_N(x_N^N(t)-b_N)}).\nonumber \\ \end{aligned}$$
(2.1)

In order to determine possible weak limits of the measures \(\mu _{N,t}\), we shall study the moments

$$\begin{aligned} S_{N,l}(t):=\int _{[-1,1]}y^l\,d\mu _{N,t}(y) =\frac{a_N^{l}}{N}\sum _{i=1}^N(x^N_{i}(t)-b_N)^l =\frac{1}{N}\sum _{i=1}^N\tilde{x}^N_{i}(t)^l, \end{aligned}$$
(2.2)

of these measures for \(l\in {\mathbb {N}}_0\), \(t\ge 0\), and \(N\in \mathbb N\).

Proposition 2.1

The moments \(S_{N,l}(t)\) of \(\mu _{N,t}\) satisfy the ODEs

$$\begin{aligned} \frac{d}{\textrm{d}t}S_{N,0}\equiv 0,\quad \quad \frac{d}{\textrm{d}t}S_{N,1}=-(p+q)S_{N,1}+a_N(p-q-b_N(p+q)), \end{aligned}$$
(2.3)

and

$$\begin{aligned} \frac{d}{\textrm{d}t}S_{N,l} =l&\Bigl [-(p+q-(l-1))S_{N,l}+a_N(p-q-b_N(p+q-2(l-1)))S_{N,l-1}\nonumber \\&-a_N^2(1-b_N^2)(l-1)S_{N,l-2} +Na_N^2(1-b_N^2)\sum _{k=0}^{l-2}S_{N,k}S_{N,l-2-k}\nonumber \\ {}&-N\sum _{k=0}^{l-2}S_{N,k+1}S_{N,l-1-k} -2a_Nb_N N\sum _{k=0}^{l-2}S_{N,k}S_{N,l-1-k}\Bigr ]\,,\;l\ge 2\,, \end{aligned}$$
(2.4)

where we used the shorthand \(p=p_N,q=q_N\).

Proof

We rewrite the ODE (1.6) as an ODE for \(\tilde{x}^N_{i}\) by

$$\begin{aligned} \frac{d}{\textrm{d}t}\tilde{x}_i^N(t) =&a_N(p-q-b_N(p+q))-(p+q)\tilde{x}_i^N(t)\nonumber \\&+2\sum _{j: j\ne i}\frac{a_N^2(1-b_N^2) -a_Nb_N(\tilde{x}_i^N(t)+\tilde{x}^N_j(t)) -\tilde{x}_i^N(t)\tilde{x}^N_j(t)}{\tilde{x}_i^N(t)-\tilde{x}^N_j(t)}\,, \end{aligned}$$
(2.5)

where we agree that a summation \(j: j\ne i\) means that we sum over all \(j\ne i\) from 1 to N. We shall also suppress the dependence of \(S_{N,l}\) and \(\tilde{x}^N\) on t. (2.5) yields the following ODEs for \(l\in {\mathbb {N}}\):

$$\begin{aligned} \frac{d}{\textrm{d}t}S_{N,l}&=\frac{l }{N}\sum _{i=1}^N\left( \tilde{x}_i^N\right) ^{l-1} \left( \frac{d}{\textrm{d}t}\tilde{x}_i^N\right) \\&=\frac{l}{N} \Biggl [ a_N(p-q-b_N(p+q))N \cdot S_{N,l-1} -(p+q)N\cdot S_{N,l} \\&\quad \quad +2\sum _{i,j: \> i\ne j} \frac{(a_N^2(1-b_N^2)-\tilde{x}_i^N\tilde{x}^N_j)\left( \tilde{x}_i^N\right) ^{l-1} - b_Na_N(\left( \tilde{x}_i^N\right) ^l+\left( \tilde{x}_i^N\right) ^{l-1}\tilde{x}^N_j)}{\tilde{x}_i^N-\tilde{x}^N_j}\Biggr ]\,. \end{aligned}$$

Hence, for \(l=1\),

$$\begin{aligned} \frac{d}{\textrm{d}t}S_{N,1}=a_N(p-q-b_N(p+q))-(p+q)S_{N,1}.\end{aligned}$$

Moreover, for \(l\ge 2\) we first observe that

$$\begin{aligned} 2&\sum _{i,j: \> i\ne j}(a_N^2(1-b_N^2)-\tilde{x}_i^N\tilde{x}_j^N) \frac{\left( \tilde{x}_i^N\right) ^{l-1}}{\tilde{x}_i^N-\tilde{x}_j^N} \\&=2\sum _{i,j: \> i< j}(a_N^2(1-b_N^2)-\tilde{x}_i^N\tilde{x}_j^N) \frac{\left( \tilde{x}_i^N\right) ^{l-1}-\left( \tilde{x}_j^N\right) ^{l-1}}{\tilde{x}_i^N-\tilde{x}_j^N}\\&=\sum _{k=0}^{l-2}\sum _{i,j: \> i\ne j}(a_N^2(1-b_N^2)-\tilde{x}_i^N\tilde{x}_j^N)\left( \tilde{x}_i^N\right) ^k\left( \tilde{x}^N_j\right) ^{l-2-k}\\&=a_N^2(1-b_N^2)\Biggl (N^2\sum _{k=0}^{l-2}S_{N,k}S_{N,l-2-k}-(l-1)NS_{N,l-2}\Biggr )\\&\quad \quad -N^2\sum _{k=0}^{l-2}S_{N,k+1}S_{N,l-1-k}+(l-1)NS_{N,l}\,. \end{aligned}$$

Similarly it holds that

$$\begin{aligned} \sum _{i,j: \> i\ne j}\frac{\left( \tilde{x}_i^N\right) ^l+\left( \tilde{x}_i^N\right) ^{l-1}\tilde{x}_j^N}{\tilde{x}_i^N-\tilde{x}_j^N}= N^2\sum _{k=0}^{l-2}S_{N,k}S_{N,l-1-k}-N(l-1)S_{N,l-1}. \end{aligned}$$

Combining these equations yields (2.4). \(\square \)
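Proposition 2.1 is an algebraic identity in the particle positions, so it can be sanity-checked in floating point: for distinct positions one compares \(\frac{d}{\textrm{d}t}S_{N,l}=\frac{l}{N}\sum _i (\tilde{x}_i^N)^{l-1}\frac{d}{\textrm{d}t}\tilde{x}_i^N\), computed directly from the drift of (1.6), with the right-hand side of (2.4). A sketch for the untransformed case \(a_N=1\), \(b_N=0\); the choices \(N=5\), \(p=8\), \(q=6\) and the random configuration are arbitrary:

```python
import numpy as np

def drift(x, p, q):
    # right-hand side of the ODE (1.6)
    N = len(x)
    return np.array([
        (p - q) - (p + q) * x[i]
        + 2 * sum((1 - x[i] * x[j]) / (x[i] - x[j])
                  for j in range(N) if j != i)
        for i in range(N)])

def moment_rhs(S, l, N, p, q):
    # right-hand side of (2.4) with a_N = 1, b_N = 0
    conv2 = sum(S[k] * S[l - 2 - k] for k in range(l - 1))
    conv1 = sum(S[k + 1] * S[l - 1 - k] for k in range(l - 1))
    return l * (-(p + q - (l - 1)) * S[l] + (p - q) * S[l - 1]
                - (l - 1) * S[l - 2] + N * conv2 - N * conv1)

rng = np.random.default_rng(1)
N, p, q = 5, 8.0, 6.0
x = np.sort(rng.uniform(-0.9, 0.9, N))
f = drift(x, p, q)
S = [np.mean(x**l) for l in range(8)]
# d/dt S_{N,l} computed directly from the particle ODE:
direct = [l * np.mean(x**(l - 1) * f) for l in range(2, 8)]
via_prop = [moment_rhs(S, l, N, p, q) for l in range(2, 8)]
```

Both computations agree up to rounding, as the symmetrization argument in the proof is exact.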

We next state Proposition 2.1 in terms of the Cauchy transform of \(\mu _{N,t}\). Recall that the Cauchy transform of \(\mu \in M^1({\mathbb {R}})\) is defined as

$$\begin{aligned} G_{\mu }(z):=\int _{{\mathbb {R}}}\frac{1}{z-x}\,d\mu (x) \quad \quad (z\in \{z\in {\mathbb {C}}:\,\Im (z)>0\}). \end{aligned}$$

We now set \(G^N(t,z):=G_{\mu _{N,t}}(z)\). For \(|z|\) sufficiently large we write \(G^N\) as

$$\begin{aligned} G^N(t,z)=\sum _{l=0}^{\infty }z^{-(l+1)}S_{N,l}(t). \end{aligned}$$
(2.6)

The partial derivatives \(G_t^N(t,z):=\partial _t G^N(t,z)\), \(G_z^N(t,z):=\partial _z G^N(t,z)\) and \(G_{zz}^N(t,z):=\partial _z^2 G^N(t,z)\) are related as follows:

Proposition 2.2

The Cauchy transforms \(G^N(t,z)\) of the measures \(\mu _{N,t}\), \(t\ge 0\), satisfy the PDE (where we abbreviate \(a:=a_N\), \(b:=b_N\))

$$\begin{aligned} G_t^N&(t,z)\nonumber \\ =&(p+q)zG^N_z(t,z)+(p+q)G^N(t,z)+\partial _{zz}\left( z^2G^N(t,z)\right) \nonumber \\&-a(p-q-b(p+q))G^N_z(t,z)\nonumber \\&+2ab\partial _{zz}\left( zG^N(t,z)\right) -(1-b^2)a^2G_{zz}^N(t,z)-2Na^2(1-b^2)G^N(t,z)G^N_z(t,z)\nonumber \\&+2N\left[ z^2G^N(t,z)G^N_z(t,z)+z(G^N(t,z))^2-zG^N_z(t,z)-G^N(t,z)\right] \nonumber \\&+2bNa((G^N)^2+2zG_z^NG^N-G_z^N)\,. \end{aligned}$$
(2.7)

This PDE can be used to derive limit theorems for the \(\mu _{N,t}\) under different assumptions on the parameters \(p=p_N,q=q_N, a_N, b_N\) for \(N\rightarrow \infty \) and \(t\ge 0\). We present such limit results in the next section; roughly speaking, the limits are free convolutions of the limiting initial distributions with Wigner and Marchenko–Pastur distributions.

Proof

(2.6) gives

$$\begin{aligned} G^N_t(t,z)=\sum _{l=0}^{\infty }z^{-(l+1)}\frac{d}{\textrm{d}t}S_{N,l}(t) =\sum _{l=1}^{\infty }z^{-(l+1)}\frac{d}{\textrm{d}t}S_{N,l}(t). \end{aligned}$$
(2.8)

We now calculate this series by using (2.3) and (2.4). For this we use the following equations:

$$\begin{aligned}&-\sum _{l=1}^{\infty }z^{-(l+1)}l(p+q-(l-1))S_{N,l}\nonumber \\&=-(p+q)\sum _{l=1}^{\infty }z^{-(l+1)}lS_{N,l}+\sum _{l=1}^{\infty }z^{-(l+1)}l(l-1)S_{N,l}\nonumber \\&=(p+q)zG^N_z(t,z)+(p+q)G^N(t,z)+\partial _{zz}\left( z^2G^N(t,z)\right) \,, \end{aligned}$$
(2.9)
$$\begin{aligned}&\sum _{l=1}^{\infty }lz^{-(l+1)}a_N(p-q-b_N((p+q)-2(l-1)))S_{N,l-1}\nonumber \\&=-a_N(p-q-b_N(p+q))G^N_z(t,z)+2a_Nb_N\partial _{zz}\left( zG^N(t,z)\right) \,, \end{aligned}$$
(2.10)
$$\begin{aligned}&-\sum _{l=2}^{\infty }z^{-(l+1)}l(l-1)S_{N,l-2}=-G_{zz}^N(t,z), \nonumber \\&\sum _{l=2}^{\infty }z^{-(l+1)}lN\sum _{k=0}^{l-2}S_{N,k}S_{N,l-2-k}=-2NG^N(t,z)G^N_z(t,z)\,, \end{aligned}$$
(2.11)
$$\begin{aligned}&-\sum _{l=2}^{\infty }z^{-(l+1)}lN\sum _{k=0}^{l-2}S_{N,k+1}S_{N,l-1-k}=2N(z^2G^N(t,z)G^N_z(t,z)\nonumber \\&\quad +z(G^N(t,z))^2-zG^N_z(t,z)-G^N(t,z))\,, \end{aligned}$$
(2.12)

and

$$\begin{aligned} -\sum _{l=2}^{\infty }z^{-(l+1)}l\sum _{k=0}^{l-2}S_{N,k}S_{N,l-1-k} =(G^N)^2+2zG_z^NG^N-G_z^N. \end{aligned}$$
(2.13)

If we combine (2.9)–(2.13) with (2.3), (2.4) and (2.8), we obtain the PDE (2.7). \(\square \)

3 Wigner- and Marchenko–Pastur-Type Limit Theorems in the Deterministic Case

We now study conditions on the parameters \(p_N,q_N, a_N, b_N\) above which lead to limits for the measures \(\mu _{N,t}\) that involve semicircle and Marchenko–Pastur distributions. In both cases, we consider \(a_N\rightarrow \infty \), so we possibly must work with measures with noncompact supports and thus need moment conditions. We recapitulate from [1] that a measure \(\mu \in M^1({\mathbb {R}})\) satisfies the Carleman condition if the moments \(c_l=\int _{{\mathbb {R}}}x^l\,d\mu (x)\) (\(l\in \mathbb {N}\)) of \(\mu \) satisfy

$$\begin{aligned} \sum _{l=1}^{\infty }c_{2l}^{-\frac{1}{2l}}=\infty . \end{aligned}$$
(3.1)

By [1], a probability measure satisfying the Carleman condition is determined uniquely by its moments.

We also use the following modified moment condition:

$$\begin{aligned} \text {There exists some } \gamma >0\text { with }|c_l|\le (\gamma l)^l \text { for all }l\in {\mathbb {N}}. \end{aligned}$$
(3.2)

It can be easily checked that (3.2) implies the Carleman condition.

Further, we recapitulate from [2] that the R-transform of \(\mu \in M^1({\mathbb {R}})\) is \(R_{\mu }(z):=\sum _{n=0}^{\infty }k_{n+1}(\mu )z^n\), where \(k_n(\mu )\) denotes the n-th free cumulant of \(\mu \). It is related to the Cauchy transform by

$$\begin{aligned} R_{\mu }(G_{\mu }(z))=z-1/G_{\mu }(z). \end{aligned}$$
(3.3)

Moreover, the R-transform satisfies \(R_{\mu \boxplus \nu }=R_{\mu }+R_{\nu }\) for \(\mu ,\nu \in M^1(\mathbb {R})\) for the free additive convolution \(\boxplus \).

Additionally, we use the following notation: We denote the image of \(\mu \in M^1({\mathbb {R}})\) under a continuous mapping f by \(f(\mu )\). We use this notation in particular for the maps \(x\mapsto |x|\) and \(x\mapsto x^2\) and write \(|\mu |\) and \(\mu ^2\). Furthermore, for a constant \(v\in {\mathbb {R}}\setminus \{0\}\), let \(v\mu \) be the image of \(\mu \) under \(x\mapsto vx\). Finally, for a probability measure \(\mu \) on \([0,\infty [\), let \(\mu _\textrm{even}\) be the unique even probability measure on \({\mathbb {R}}\) with \(|\mu _\textrm{even}|=\mu \). With these notations we have \(G_{v\mu }(z)=v^{-1}G_{\mu }\left( z/v\right) \) and thus, by (3.3),

$$\begin{aligned} R_{v\mu }(z)=vR_{\mu }(vz). \end{aligned}$$
(3.4)

We now turn to the first limit case, in which semicircle laws appear. We recapitulate that the semicircle (Wigner) law \(\mu _{sc,\lambda }\in M^1({\mathbb {R}})\) with radius \(\lambda >0\) has the Lebesgue density

$$\begin{aligned} \frac{2}{\pi \lambda ^2}\sqrt{\lambda ^2-x^2}{} \textbf{1}_{[-\lambda ,\lambda ]}(x). \end{aligned}$$

It is well-known that \(R_{\mu _{sc,\lambda }}(z)=\frac{\lambda ^2}{4}z\); see Section 5.3 of [2].
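Both this formula and the identity (3.3) can be checked directly for the semicircle law: its Cauchy transform is \(G(z)=2\bigl (z-\sqrt{z^2-\lambda ^2}\bigr )/\lambda ^2\), with the branch for which \(G(z)\sim 1/z\) at infinity, and plugging this into (3.3) with \(R(w)=\frac{\lambda ^2}{4}w\) gives an exact identity. A small numerical sketch (the evaluation point \(z=3+i\) is arbitrary):

```python
import cmath

def cauchy_semicircle(z, lam):
    # G_{mu_{sc,lam}}(z) = 2 (z - sqrt(z^2 - lam^2)) / lam^2;
    # pick the square-root branch that makes G(z) ~ 1/z at infinity
    s = cmath.sqrt(z * z - lam * lam)
    if (s / z).real < 0:
        s = -s
    return 2 * (z - s) / lam**2

lam = 2.0
z = 3.0 + 1.0j
G = cauchy_semicircle(z, lam)
lhs = (lam**2 / 4) * G    # R(G(z)) with R(w) = lam^2 w / 4
rhs = z - 1 / G           # right-hand side of (3.3)
```

Since G solves the quadratic \(\frac{\lambda ^2}{4}G^2-zG+1=0\), the two sides agree up to rounding.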

We have the following first result:

Theorem 3.1

Consider sequences \((p_N)_{N\in {\mathbb {N}}},(q_N)_{N\in \mathbb N}\subset ]0,\infty [\) with \(\lim _{N\rightarrow \infty }p_N/N=\infty \) and \(\lim _{N\rightarrow \infty }q_N/N=\infty \) such that \(C:=\lim _{N\rightarrow \infty }p_N/q_N\ge 0\) exists. Define

$$\begin{aligned} a_N:=\frac{q_N}{\sqrt{Np_N}}, \quad b_N:=\frac{p_N-q_N}{p_N+q_N} \quad (N\in {\mathbb {N}}). \end{aligned}$$

Let \(\mu \in M^1({\mathbb {R}})\) satisfy (3.2). Moreover, let \((x_N)_{N\in {\mathbb {N}}}=((x_{1}^N,\dots ,x_{N}^N))_{N\in \mathbb {N}}\) be a sequence of starting vectors \(x_N\in A_N\) such that all moments of the empirical measures

$$\begin{aligned} \mu _{N,0}:=\frac{1}{N}\sum _{i=1}^N\delta _{a_N( x_i^N-b_N)} \end{aligned}$$

tend to those of \(\mu \) for \(N\rightarrow \infty \). Let \(x_N(t)\) be the solutions of the ODEs (1.6) with \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}\). Then, for all \(t>0\), all moments of the empirical measures

$$\begin{aligned} \mu _{N,t/(p_N+q_N)} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/(p_N+q_N))-b_N)}\end{aligned}$$

tend to those of the probability measures

$$\begin{aligned} \mu _t:= (e^{-t}\mu ) \boxplus \left( \sqrt{1-e^{-2t}}\mu _{sc,4(1+C)^{-3/2}}\right) . \end{aligned}$$
(3.5)

Proof

Using the recurrence relations (2.4), (2.3) together with the initial conditions for \(t=0\) and our choice of \(b_N\), we see that the moments \(\tilde{S}_{N,l}(t):=S_{N,l}(t/(p_N+q_N))\) of \(\mu _{N,t/(p_N+q_N)}\) satisfy

$$\begin{aligned} {\tilde{S}}_{N,0}\equiv 1, \quad {\tilde{S}}_{N,1}(t)=e^{-t}S_{N,1}(0) \end{aligned}$$

and, for \(l\ge 2\),

$$\begin{aligned} {\tilde{S}}_{N,l}(t)&= \exp \Bigl (\Bigl (-l+\frac{l(l-1)}{p_N+q_N}\Bigr )t\Bigr )\Biggl [S_{N,l}(0)\nonumber \\&\quad +\frac{l}{p_N+q_N}\int _0^t \exp \Bigl (\Bigl (l-\frac{l(l-1)}{p_N+q_N}\Bigr )s\Bigr )\Biggl ( 2a_Nb_N(l-1) {\tilde{S}}_{N,l-1}(s)\nonumber \\&\quad -(1-b_N^2)a_N^2(l-1) {\tilde{S}}_{N,l-2}(s) +Na_N^2(1-b_N^2)\sum _{k=0}^{l-2} {\tilde{S}}_{N,k}(s) {\tilde{S}}_{N,l-2-k}(s)\nonumber \\&\quad - N\sum _{k=0}^{l-2} {\tilde{S}}_{N,k+1}(s) {\tilde{S}}_{N,l-1-k}(s) -2b_NNa_N\sum _{k=0}^{l-2} {\tilde{S}}_{N,k}(s) {\tilde{S}}_{N,l-1-k}(s) \Biggr )\textrm{d}s\Biggr ]. \end{aligned}$$
(3.6)

As the \(S_{N,l}(0)\) (\(l\ge 0\)) tend to the corresponding moments of \(\mu \), we conclude by induction on l that the \({\tilde{S}}_{N,l}(t)\) converge to some functions \(S_l(t)\) for \(l\ge 0\) and \(t\ge 0\). Moreover, these limits satisfy

$$\begin{aligned}{} & {} S_0\equiv 1,\quad S_1(t)=S_1(0)e^{-t}, \nonumber \\{} & {} S_l(t)=e^{-lt}\left( S_l(0)+4l(1+C)^{-3}\int _0^te^{ls}\sum _{k=0}^{l-2}S_k(s)S_{l-2-k}(s) \,\textrm{d}s\right) \end{aligned}$$
(3.7)

for \(l\ge 2\). We now prove that the \(S_l(t)\) satisfy (3.2) with some constant R (instead of \(\gamma \)) and thus the Carleman condition (3.1) for \(t>0\) so that there are unique \(\mu _t\in M^1({\mathbb {R}})\) with \((S_l(t))_l\) as moment sequences. For this we first notice that (3.2) for \(l\in \{0,1\}\) holds for R sufficiently large. Moreover, by induction we have for \(l\ge 2\) and \(t\ge 0\) that

$$\begin{aligned} \begin{aligned} |S_l(t)|&\le e^{-lt}|S_l(0)|+ e^{-lt}4l(1+C)^{-3}\int _0^t e^{ls} \sum _{k=0}^{l-2}|S_k(s)|\,|S_{l-2-k}(s)|\,\textrm{d}s\\&= e^{-lt}|S_l(0)|+ 4l(1+C)^{-3}\int _0^t e^{-ls} \sum _{k=0}^{l-2}|S_k(t-s)|\,|S_{l-2-k}(t-s)|\,\textrm{d}s\\&\le (\gamma l)^l+4(1+C)^{-3}(Rl)^{l-2} \le (\gamma l)^l+R^{l-2}l^l. \end{aligned} \end{aligned}$$
(3.8)

For R large enough (depending on \(\gamma \)) we can bound the RHS of (3.8) by \((Rl)^l\) as claimed. We thus conclude that the measures \(\mu _{N,t/(p_N+q_N)}\) tend weakly to some probability measures \(\mu _t\).

To identify the \(\mu _t\) we employ a PDE for the corresponding Cauchy and R-transforms. We set

$$\begin{aligned} G(t,z):=G_{\mu _t}(z) =\lim _{N\rightarrow \infty }G_{\mu _{N,t/(p_N+q_N)}}(z). \end{aligned}$$

We now use the PDEs (2.7) and interchange derivatives w.r.t. t and z with the limits \(N\rightarrow \infty \). This interchangeability can be proved via the Laurent series for \(G,G^N\) as in Proposition 2.9 of [39]. In this way, we obtain that G satisfies the PDE

$$\begin{aligned} G_t(t,z)=zG_z(t,z)+G(t,z)-8(1+C)^{-3}G(t,z)G_z(t,z),\quad G(0,z)=G_{\mu }(z). \end{aligned}$$

Using

$$\begin{aligned} R(t,G(t,z))&=z-1/G(t,z)\nonumber \\ R_z(t,G(t,z))&=1/G_z(t,z)+1/G^2(t,z)\nonumber \\ R_t(t,G(t,z))&=-G_t(t,z)/G_z(t,z)\,. \end{aligned}$$
(3.9)

for the R-transforms \(R(t,z):=R_{\mu _t}(z)\), we see that

$$\begin{aligned} R_t(t,z)=-R(t,z)+8(1+C)^{-3}z-R_z(t,z)z,\quad R(0,z)=R_{\mu }(z). \end{aligned}$$
(3.10)

As \( R(t,z)=e^{-t}R_{\mu }(ze^{-t})+4(1+C)^{-3}(1-e^{-2t})z\) solves (3.10), it follows from (3.4) and the properties of the R-transform above that \( \mu _t =(e^{-t}\mu )\boxplus \left( \sqrt{1-e^{-2t}} \mu _{sc,4(1+C)^{-3/2}}\right) \,\) as claimed. \(\square \)

Remark 3.2

Exchanging \(p_N\) and \(q_N\) in our dynamical systems corresponds to a sign change (and thus a reverse numbering) of all particles in \([-1,1]\). In this way we may assume w.l.o.g. that \(C:=\lim _{N\rightarrow \infty }p_N/q_N\in [0,1]\) holds in Theorem 3.1. Moreover, the degenerate case \(C=\infty \) corresponds to the degenerate case \(C=0\) and is thus in principle also included in Theorem 3.1.

In order to understand the meaning of Theorem 3.1, consider the following example:

Example 3.3

The easiest possible scaling in Theorem 3.1 is the choice \(p_N=q_N=N^{\delta }\) for \(\delta >1\), \(a_N=N^{(\delta -1)/2}\), \(b_N=0\), \(x_N=0\). Then \(\mu _{N,0}=\delta _0\) for all N and \(\mu _0=\delta _0\). The theorem now states that the limiting measures \(\mu _t\) from (3.5) are simply the rescaled semicircle laws \(\sqrt{1-e^{-2t}}\mu _{sc,\sqrt{2}}\) for \(t>0\).

More generally: Let \(p_N,q_N, a_N, b_N\) be given as in Theorem 3.1, and take \(x_i^N:=b_N\) for all i, N. In this case, the measures \(\mu _t\) from (3.5) are the semicircle laws \(\sqrt{1-e^{-2t}}\mu _{sc,4(1+C)^{-3/2}}\) for \(t>0\).

These measures describe the deviation of the particles \(x_i^N(t)\) at time \(t/(p_N+q_N)\) from the numbers \(b_N\in ]-1,1[\) locally w.r.t. the space scalings \(a_N\). Notice that this even makes sense in the degenerate case \(C=0\), where \(\lim _{N\rightarrow \infty }b_N=-1\) holds.

Theorem 3.1 is a local limit result which describes the behaviour of the system around the numbers \(b_N\) for small times. It is therefore astonishing that in (3.5) a stationary behaviour appears which is available on the global scale on \([-1,1]\). This picture also appears in the degenerate case \(C=0\) in Theorem 3.4(1). However, this stationarity disappears if we use scalings in space and time of higher orders; see Theorem 3.4(2) and (3).
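In the situation of Example 3.3 with \(\mu =\delta _0\), the recursion (3.7) from the proof of Theorem 3.1 is equivalent to the ODE system \(\frac{d}{\textrm{d}t}S_l=-lS_l+4l(1+C)^{-3}\sum _{k=0}^{l-2}S_kS_{l-2-k}\), and integrating it numerically reproduces the Catalan moments \(m_{2k}=C_k\,\sigma _t^{2k}\) with \(\sigma _t^2=4(1+C)^{-3}(1-e^{-2t})\) of the rescaled semicircle laws in (3.5). A sketch, with the arbitrary choices \(t=0.7\), \(C=1\):

```python
import numpy as np

def limit_moments(t, C=1.0, lmax=6, steps=40000):
    """Forward-Euler integration of the ODE form of (3.7),
    started at the moments of delta_0."""
    beta = 4.0 * (1 + C) ** (-3)
    S = np.zeros(lmax + 1)
    S[0] = 1.0
    dt = t / steps
    for _ in range(steps):
        dS = np.zeros(lmax + 1)
        for l in range(2, lmax + 1):
            conv = sum(S[k] * S[l - 2 - k] for k in range(l - 1))
            dS[l] = -l * S[l] + l * beta * conv
        S = S + dt * dS
    return S

t, C = 0.7, 1.0
S = limit_moments(t, C)
# moments of sqrt(1-e^{-2t}) * mu_{sc, 4(1+C)^{-3/2}}:
sigma2 = (1 - np.exp(-2 * t)) * 4 * (1 + C) ** (-3)   # variance
expected = {2: sigma2, 4: 2 * sigma2**2, 6: 5 * sigma2**3}
```

The odd moments stay zero, as they must for an even limit law, and the even moments match the semicircle moments up to the Euler discretization error.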

Theorem 3.4

Let \(\mu \in M^1({\mathbb {R}})\) satisfy (3.2), and let \((x_N)_{N\in {\mathbb {N}}}=((x_{1}^N,\dots ,x_{N}^N))_{N\in \mathbb {N}}\) be starting vectors \(x_N\in A_N\) such that all moments of the empirical measures

$$\begin{aligned} \mu _{N,0}:=\frac{1}{N}\sum _{i=1}^N\delta _{a_N( x_i^N-b_N)} \end{aligned}$$

tend to those of \(\mu \) for \(N\rightarrow \infty \). Let \(x_N(t)\) be the solutions of (1.6) with \(x_N(0)=x_N\) for \(N\in \mathbb N\). For some given sequence \((s_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) consider the empirical measures

$$\begin{aligned} \mu _{N,t/s_N} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/s_N)-b_N)}. \end{aligned}$$
(1)

    If \(\lim _{N\rightarrow \infty }p_N/N=\lim _{N\rightarrow \infty }q_N/N=\infty \) and \(C:=\lim _{N\rightarrow \infty }p_N/q_N=0\), and if we put \(a_N:=\frac{\sqrt{q_N}}{\sqrt{N}}\), \(b_N:=\frac{p_N-q_N}{p_N+q_N}\), \(s_N:=p_N+q_N\), then for all \(t>0\), all moments of \(\mu _{N,t/s_N}\) tend to those of \(e^{-t}\mu \).

(2)

    Let \(p_N,q_N>N-1\) for \(N\in {\mathbb {N}}\) and \((b_N)_{N\in {\mathbb {N}}}\subset ]-1,1[\) such that \(B:=\lim b_N\in [-1,1]\) exists. Let \((s_N)_{N\in \mathbb N}\subset ]0,\infty [\) such that \(\lim _{N\rightarrow \infty }\frac{p_N+q_N}{\sqrt{Ns_N}}=0\). Let \( a_N:=\sqrt{s_N/N}\). Then, for \(t>0\), all moments of \(\mu _{N,t/s_N}\) tend to those of \(\mu \boxplus \mu _{sc,2\sqrt{2(1-B^2)t}}\).

(3)

    Assume that \(\lim _{N\rightarrow \infty }(p_N+q_N)/N=\infty \). Let \((b_N)_{N\in {\mathbb {N}}}\subset ]-1,1[\) such that \(B:=\lim b_N\in [-1,1]\) exists, and let \((s_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) such that the limit \(\lim _{N\rightarrow \infty }\frac{p_N+q_N}{\sqrt{Ns_N}}\in [0,\infty [\) exists. Put \( a_N:=\sqrt{s_N/N}\) and assume that \(c:=\lim _{N\rightarrow \infty }a_N\left( p_N-q_N-b_N(p_N+q_N)\right) /s_N\) exists. Then, for \(t>0\), all moments of \(\mu _{N,t/s_N}\) tend to those of \(\mu \boxplus \mu _{sc,2\sqrt{2(1-B^2)t}}\boxplus \delta _{ct}\).

Proof

The proof is analogous to that of Theorem 3.1, where the limits \(S_l(t):=\lim _{N\rightarrow \infty }S_{N,l}(t/s_N)\) now satisfy different recurrence relations, which lead to slightly different PDEs for the Cauchy and R-transforms. We only remark the following:

The limit in (1) can be interpreted as \((e^{-t}\mu )\boxplus (\sqrt{1-e^{-2t}}\mu _{sc,0})\) where the semicircle law degenerates into \(\mu _{sc,0}=\delta _0\). In (2) we have

$$\begin{aligned}{} & {} S_0\equiv 1,\quad S_1(t)=S_1(0),\nonumber \\{} & {} S_l(t)=S_l(0)+l(1-B^2)\int _0^t\sum _{k=0}^{l-2}S_k(s)S_{l-2-k}(s) \,\textrm{d}s \quad (l\ge 2), \end{aligned}$$
(3.11)

so that the computations in Section 2 of [39] (see the proofs of Lemma 2.4 and Theorem 2.10 there) lead to the claim. Part (3) can be obtained in the same way. \(\square \)
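For the special case \(\mu =\delta _0\) in part (2), the recursion (3.11) can be iterated in closed form, since each \(S_l(t)\) is then a polynomial in t. The following sketch (an illustration added here, not part of the original argument; all names are ad hoc) checks the result against the Catalan moments \(m_{2k}=C_k\,(2(1-B^2)t)^k\) of \(\mu _{sc,2\sqrt{2(1-B^2)t}}\):

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def moment_polys(lmax, B):
    # Iterate (3.11) with start mu = delta_0, i.e. S_l(0) = 0 for l >= 1;
    # each S_l(t) is then a polynomial in t, stored as a coefficient list.
    gamma = 1.0 - B * B
    S = [[1.0], [0.0]]
    for l in range(2, lmax + 1):
        integrand = [0.0] * l               # the integrand has degree < l
        for k in range(l - 1):
            for i, a in enumerate(S[k]):
                for j, b in enumerate(S[l - 2 - k]):
                    integrand[i + j] += a * b
        # S_l(t) = l * (1 - B^2) * int_0^t integrand(s) ds
        S.append([0.0] + [l * gamma * c / (i + 1) for i, c in enumerate(integrand)])
    return S

def eval_poly(p, t):
    return sum(c * t ** i for i, c in enumerate(p))
```

The even moments then match the Catalan values exactly, and the odd moments vanish, in agreement with the semicircle limit.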

Remark 3.5

The limits in Theorem 3.4(2) and (3) correspond to results for the Bessel processes and their frozen versions in Sections 2 and 3 of [39], though here the conditions on the parameters \(p_N,q_N, b_N\) are more flexible. We also remark that this result admits an analogue for Jacobi processes on noncompact spaces; see Sect. 5.

In the next step, we combine the ideas of the proof of Theorem 3.1 with Theorem 1.1, which says that the vectors whose components are the ordered zeros of the corresponding Jacobi polynomials form stationary solutions of (1.6). This leads to the following result, which was derived in [13] by different methods:

Theorem 3.6

Let \((p_N)_{N\in {\mathbb {N}}},(q_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) with \(\lim _{N\rightarrow \infty }p_N/N=\infty \) and \(\lim _{N\rightarrow \infty }q_N/N=\infty \) such that \(C:=\lim _{N\rightarrow \infty }p_N/q_N\ge 0\) exists. Define

$$\begin{aligned} a_N:=\frac{q_N}{\sqrt{Np_N}}, \quad b_N:=\frac{p_N-q_N}{p_N+q_N} \quad (N\in {\mathbb {N}}). \end{aligned}$$

Let \(-1<z_1^N<\ldots<z_N^N<1\) be the ordered zeros of the Jacobi polynomials \(P_N^{(q_N-N,p_N-N)}\). Then, all moments of

$$\begin{aligned} {\tilde{\mu }}_{N}:=\frac{1}{N}\sum _{i=1}^N \delta _{a_N (z_i^N-b_N)} \end{aligned}$$

tend to those of \(\mu _{sc,4(1+C)^{-3/2}}\). In particular, the \({\tilde{\mu }}_{N}\) tend weakly to \(\mu _{sc,4(1+C)^{-3/2}}\).

Proof

Consider the solutions of the ODEs (1.6) as in Theorem 3.1 with the initial conditions \(x_N:=(b_N,\ldots ,b_N)\in A_N\), i.e. with \(\mu =\delta _0\) and \( S_{N,l}(0)=0\) for \(l\ge 1\). We show that for the moments \(\tilde{S}_{N,l}(t)\) from the proof of Theorem 3.1 the limits \({\tilde{S}}_{N,l}(\infty ):=\lim _{t\rightarrow \infty }{\tilde{S}}_{N,l}(t)\) exist. In fact, this is clear for \(l=0,1\), and (3.6) and dominated convergence show inductively for \(l\ge 2\) that

$$\begin{aligned} {\tilde{S}}_{N,l}(\infty )&=\frac{l}{p_N+q_N}\lim _{t\rightarrow \infty }\int _0^t \exp \Bigl (-\Bigl (l-\frac{l(l-1)}{p_N+q_N}\Bigr )(t-s)\Bigr ) H_{N,l}(s) \> \textrm{d}s\nonumber \\&=\frac{l}{p_N+q_N}\lim _{t\rightarrow \infty }\int _0^t \exp \Bigl (-\Bigl (l-\frac{l(l-1)}{p_N+q_N}\Bigr )s\Bigr ) H_{N,l}(t-s) \> \textrm{d}s\nonumber \\&=\frac{1}{p_N+q_N-l+1}\Biggl ( 2a_Nb_N(l-1) {\tilde{S}}_{N,l-1}(\infty ) \nonumber \\&\quad - (1-b_N^2)a_N^2(l-1) \tilde{S}_{N,l-2}(\infty ) +Na_N^2(1-b_N^2)\sum _{k=0}^{l-2} \tilde{S}_{N,k}(\infty ) {\tilde{S}}_{N,l-2-k}(\infty ) \nonumber \\&\quad - N\sum _{k=0}^{l-2} \tilde{S}_{N,k+1}(\infty ) {\tilde{S}}_{N,l-1-k}(\infty ) -2b_NNa_N\sum _{k=0}^{l-2} {\tilde{S}}_{N,k}(\infty ) \tilde{S}_{N,l-1-k}(\infty )\Biggr ) \end{aligned}$$
(3.12)

where \(H_{N,l}(s)\) is the term in the big brackets in the last 3 lines of (3.6). On the other hand, Theorem 1.1 yields that the \({\tilde{S}}_{N,l}(\infty )\) are the moments of the measures \({\tilde{\mu }}_{N}\). Furthermore, similar to (3.7), we see that for all l the limits \(S_l(\infty ):=\lim _{N\rightarrow \infty } \tilde{S}_{N,l}(\infty ) \) exist with \(S_0(\infty )=1\), \(S_1(\infty )=0\), and

$$\begin{aligned} S_l(\infty )= \frac{4}{(1+C)^3}\sum _{k=0}^{l-2}S_k(\infty )S_{l-2-k}(\infty ) \quad (l\ge 2). \end{aligned}$$

As this is just the recurrence for the Catalan numbers up to some rescaling (see e.g. Section 2.1.1 of [2]), it follows readily that the \(S_l(\infty )\) are the moments of \(\mu _{sc,4(1+C)^{-3/2}}\). \(\square \)
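Theorem 3.6 can also be probed numerically. The zeros of \(P_N^{(q_N-N,p_N-N)}\) are the eigenvalues of the symmetric tridiagonal (Golub–Welsch) matrix built from the classical three-term recurrence coefficients for the Jacobi weight \((1-x)^{q_N-N}(1+x)^{p_N-N}\); these standard formulas, and the hypothetical parameter choice \(p_N=2N^2\), \(q_N=N^2\) (so that \(C=2\)), are illustrations added here, not taken from the paper:

```python
import numpy as np

def jacobi_zeros(N, alpha, beta):
    # Zeros of P_N^{(alpha,beta)} as eigenvalues of the symmetric tridiagonal
    # Golub-Welsch matrix for the weight (1-x)^alpha (1+x)^beta.
    n = np.arange(N, dtype=float)
    s = 2 * n + alpha + beta
    diag = (beta - alpha) * (beta + alpha) / (s * (s + 2))
    m = np.arange(1, N, dtype=float)
    sm = 2 * m + alpha + beta
    off2 = (4 * m * (m + alpha) * (m + beta) * (m + alpha + beta)
            / (sm ** 2 * (sm + 1) * (sm - 1)))
    J = np.diag(diag) + np.diag(np.sqrt(off2), 1) + np.diag(np.sqrt(off2), -1)
    return np.linalg.eigvalsh(J)

# hypothetical parameter choice with C = lim p_N/q_N = 2
N = 150
p, q = 2 * N ** 2, N ** 2
a_N, b_N = q / np.sqrt(N * p), (p - q) / (p + q)
z = jacobi_zeros(N, q - N, p - N)
y = a_N * (z - b_N)
m1, m2 = y.mean(), (y ** 2).mean()
```

Already for moderate N, the first rescaled moment vanishes (the trace identity makes it exact) and the second one is close to \((2(1+C)^{-3/2})^2=4/27\), the second moment of \(\mu _{sc,4(1+C)^{-3/2}}\).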

We next turn to the case of Marchenko–Pastur distributions, which is motivated by Corollary 2.5 of [13]. We here assume that the \(p_N, q_N\) satisfy

$$\begin{aligned} \lim _{N\rightarrow \infty } p_N/N=:{\hat{p}}\in [1,\infty [,\quad \lim _{N\rightarrow \infty }q_N/N=\infty \end{aligned}$$
(3.13)

and use the norming constants

$$\begin{aligned} b_N:=-1,\quad a_N:=q_N/N. \end{aligned}$$
(3.14)

We then obtain limits which involve Marchenko–Pastur distributions \(\mu _{MP,c,t}\in M^1([0,\infty [)\) which, for \(c\ge 0\) and \(t>0\), are the probability measures with \(\mu _{MP,c,t}=\tilde{\mu }\) for \(c\ge 1\) and \(\mu _{MP,c,t}=(1-c)\delta _0+c \tilde{\mu }\) for \(0\le c < 1\), where, with \(x_\pm :=t(\sqrt{c}\pm 1)^2\), the measure \(\tilde{\mu }\) on \(]x_-,x_+[\) has the density

$$\begin{aligned} \frac{1}{2\pi x t}\sqrt{(x_+-x)(x-x_-)}.\end{aligned}$$
(3.15)

We recall (see Exercise 5.3.27 of [2]) that the R-transforms of the \(\mu _{MP,c,t}\) are given by

$$\begin{aligned} R_{MP,c,t}(z)=\frac{ct}{1-tz}. \end{aligned}$$
(3.16)

This in particular implies the well-known relation

$$\begin{aligned} \mu _{MP,a,t}\boxplus \mu _{MP,b,t}=\mu _{MP,a+b,t} \quad (a,b,t>0).\end{aligned}$$
(3.17)

The following local limit theorem of stationary type corresponds to Theorem 3.1.

Theorem 3.7

Let \(p_N,q_N,a_N,b_N\) be as in (3.13) and (3.14), and let \(\mu \in M^1([0,\infty [)\) satisfy (3.2). Let \((x_N)_{N\in \mathbb {N}}=((x_{1}^N,\dots ,x_{N}^N))_{N\in {\mathbb {N}}}\) be associated starting vectors with \(x_N\in A_N\) as in Theorem 3.1.

Let \(x_N(t)\) be the solutions of (1.6) with \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}, t\ge 0\). Then, for \(t>0\), all moments of the measures \( \mu _{N,t/(p_N+q_N)} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/(p_N+q_N))-b_N)}\) tend to those of

$$\begin{aligned} \mu (t):=\left( \mu _{SC,2\sqrt{2(1-e^{-t})}} \boxplus \left( \sqrt{e^{-t}\mu }\right) _{\text {even}}\right) ^2 \boxplus \mu _{MP,{\hat{p}}-1,2(1-e^{-t})}. \end{aligned}$$
(3.18)

Proof

As in the proof of Theorem 3.1, induction on l shows that the \(S_{N,l}(t/(p_N+q_N))\) converge to some functions \(S_l(t)\) for \(l\ge 0\), \(t\ge 0\). These limits satisfy

$$\begin{aligned} \begin{gathered} S_0\equiv 1,\quad S_1(t)=e^{-t}\left( S_1(0)-2{\hat{p}}\right) +2{\hat{p}},\\ S_l(t) =e^{-lt}\left( S_l(0)+2l\int _0^te^{ls}\left( {\hat{p}}S_{l-1}(s)+ \sum _{k=0}^{l-2}S_k(s)S_{l-1-k}(s)\right) \,\textrm{d}s\right) ,\;l\ge 2. \end{gathered}\nonumber \\ \end{aligned}$$
(3.19)

Once more, by the same arguments as in the proof of Theorem 3.1, we see that the \(S_l(t)\) satisfy the Carleman condition (3.1) for \(t>0\). Thus, by the moment convergence theorem there exist unique \(\mu _t\in M^1({\mathbb {R}})\) with \((S_l(t))_l\) as sequences of moments.

To identify the \(\mu _t\) we again derive a PDE for the Cauchy and R-transforms of the \(\mu _t\). We set

$$\begin{aligned} G(t,z):=G_{\mu _t}(z) =\lim _{N\rightarrow \infty }G_{\mu _{N,t/(p_N+q_N)}}(z). \end{aligned}$$

The PDEs (2.7) here lead to the PDE

$$\begin{aligned} G_t(t,z)=&zG_z(t,z)+G(t,z)-2(G(t,z)^2+2zG(t,z)G_z(t,z) -G_z(t,z))-2{\hat{p}}G_z(t,z)\nonumber \\ =&(z-2({\hat{p}}-1)-4zG(t,z))G_z(t,z)+G(t,z)-2G(t,z)^2\,. \end{aligned}$$
(3.20)

Using (3.9), we obtain

$$\begin{aligned} \begin{aligned} -R_t(t,G(t,z))&=\frac{G_t(t,z)}{G_z(t,z)}\\&=(R(t,G(t,z))+\frac{1}{G(t,z)})(1-4G(t,z))-2({\hat{p}}-1)\\&\quad +(G(t,z)-2G(t,z)^2)(R_z(t,G(t,z))-\frac{1}{G(t,z)^2})\\&=-4G(t,z)R(t,G(t,z))-2({\hat{p}}-1)-2\\&\quad +(G(t,z)-2G(t,z)^2)R_z(t,G(t,z))+R(t,G(t,z)) \end{aligned} \end{aligned}$$

and thus

$$\begin{aligned} 0=R_t(t,z)-(2z^2-z)R_z(t,z)-(4z-1)R(t,z)-2{\hat{p}}. \end{aligned}$$

Setting \(\phi (z):=R(0,z)\), the method of characteristics (see e.g. [35]) leads to the solution

$$\begin{aligned} R(t,z)&=e^{-t}(1-2z(1-e^{-t}))^{-2}\phi (e^{-t}z(1-2z(1-e^{-t}))^{-1}) \nonumber \\&\quad +\frac{2(1-e^{-t})}{1-2(1-e^{-t})z} +\frac{2({\hat{p}}-1)(1-e^{-t})}{1-2(1-e^{-t})z}. \end{aligned}$$
(3.21)

The third summand on the RHS of this equation corresponds to the second \(\boxplus \)-summand in (3.18). We thus only have to investigate the first two summands on the RHS of (3.21). For this we fix \(s>0\) and define the function \({\widehat{\phi }}(z):=e^{-s}\phi (e^{-s}z)\). We also define

$$\begin{aligned} f(t,z):=(1-tz)^{-2}{\widehat{\phi }}\left( \frac{z}{1-tz}\right) +\frac{t}{1-tz}\quad (z\in \mathbb C\setminus {\mathbb {R}}, t>0). \end{aligned}$$

One can check that f solves the PDE

$$\begin{aligned} f_{t}(t,z)=1+2zf(t,z)+z^2f_z(t,z), \quad f(0,z)=R_{\exp (-s)\mu }(z). \end{aligned}$$
(3.22)

Theorem 4.8 in [39] and (3.4) now imply that

$$\begin{aligned} f(t,z) =R_{\left( \mu _{SC,2\sqrt{t}} \boxplus \left( \sqrt{\exp (-s)\mu }\right) _{\text {even}}\right) ^2}(z) \quad \text {for}\,\,t>0. \end{aligned}$$

This and the formula \(R_{\mu \boxplus \nu }=R_{\mu }+R_{\nu }\) for the R-transform now complete the proof. \(\square \)

Remark 3.8

If we take the starting distribution \(\mu =\mu _{MP,r,s}\) for \(r\ge 0, s>0\), then, with the notations above, \({\widehat{\phi }}(z)=R_{\mu _{MP,r,s}}(z)=\frac{rs}{1-sz}\). A partial fraction decomposition here leads to

$$\begin{aligned} f(t,z)= & {} (1-tz)^{-2}{\widehat{\phi }}\left( \frac{z}{1-tz}\right) +\frac{t}{1-tz} \\= & {} \frac{rs}{(1-tz)(1-(t+s)z)}+\frac{t}{1-tz}=\frac{r(t+s)}{1-(t+s)z}+\frac{(1-r)t}{1-tz}. \end{aligned}$$

Hence,

$$\begin{aligned} R_{\left( \mu _{SC,2\sqrt{t}}\boxplus \left( \sqrt{\mu _{MP,r,s}}\right) _{\text {even}}\right) ^2}(z) =\frac{r(t+s)}{1-(t+s)z}+\frac{(1-r)t}{1-tz},\quad r,\,s,\,t\ge 0, \end{aligned}$$

which generalizes (4.14) in [39] slightly.
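The partial fraction decomposition in Remark 3.8 is elementary but easy to get wrong; a quick numerical sanity check at a few complex points (an added illustration, with ad hoc function names) confirms it:

```python
def f_lhs(r, s, t, z):
    # f(t,z) = (1-tz)^{-2} * phi_hat(z/(1-tz)) + t/(1-tz),
    # with phi_hat(w) = r*s/(1 - s*w) the R-transform of mu_{MP,r,s}
    w = z / (1 - t * z)
    return (r * s / (1 - s * w)) / (1 - t * z) ** 2 + t / (1 - t * z)

def f_rhs(r, s, t, z):
    # claimed partial-fraction form
    return r * (t + s) / (1 - (t + s) * z) + (1 - r) * t / (1 - t * z)
```

Both expressions agree wherever the denominators do not vanish.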

We now consider a variant of Theorem 3.7 with a different scaling in space and time, where the limit loses its stationary behaviour and instead corresponds to the results for the Bessel processes of type B and their frozen versions in Sections 4 and 5 of [39].

Theorem 3.9

Let \((p_N)_{N\in {\mathbb {N}}},(q_N)_{N\in {\mathbb {N}}}\) with \(p_N,q_N>N-1\) for \(N\ge 1\) and \(\lim _{N\rightarrow \infty }p_N/N={\hat{p}}\). Let \((s_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) be time scalings with \(\lim _{N\rightarrow \infty }(p_N+q_N)/s_N=0\). Define the space scalings \(a_N:=s_N/N\), \(b_N:=-1\) (\(N\in {\mathbb {N}}\)). Let \(\mu \in M^1([0,\infty [)\) satisfy (3.2), and let \((x_N)_{N\in {\mathbb {N}}}\) be starting vectors as in Theorem 3.7. Let \(x_N(t)\) be the solutions of the ODEs (1.6) with \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}\). Then, for all \(t>0\), all moments of the empirical measures

$$\begin{aligned} \mu _{N,t/s_N} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/s_N)-b_N)} \end{aligned}$$

tend to those of \(\left( \mu _{sc,2\sqrt{2t}} \boxplus \left( \sqrt{\mu }\right) _{\text {even}}\right) ^2 \boxplus \mu _{MP,{\hat{p}}-1,2t}\).

Proof

As in the proof of Theorem 3.1, our starting conditions and induction show that the \({\tilde{S}}_{N,l}(t)\) tend to some functions \(S_l(t)\) with

$$\begin{aligned} S_0\equiv & {} 1,\quad S_1(t)=S_1(0)+2{\hat{p}}t,\nonumber \\ S_l(t)= & {} S_l(0)+2l\int _0^t\left( {\hat{p}}S_{l-1}(s)+\sum _{k=0}^{l-2}S_k(s)S_{l-1-k}(s) \right) ds \end{aligned}$$
(3.23)

for \(l\ge 2\) and \(t\ge 0\). The computations in Section 4 of [39] (see the proofs of Lemma 4.3 and Theorem 4.8 there) then yield the claim. \(\square \)
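For the start \(\mu =\delta _0\), the system (3.23) can be iterated in closed form, and at time t the resulting moments should be those of \(\left( \mu _{sc,2\sqrt{2t}}\right) ^2 \boxplus \mu _{MP,{\hat{p}}-1,2t}=\mu _{MP,{\hat{p}},2t}\), using the classical fact that the square of a semicircle law is a free Poisson law together with (3.17). The following sketch (an added illustration; the Narayana formula for the MP moments is classical, not from the paper) checks this:

```python
from math import comb

def poly_mul(p, q, deg):
    out = [0.0] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                out[i + j] += a * b
    return out

def moment_polys_323(lmax, phat):
    # Iterate (3.23) with start mu = delta_0 (S_l(0) = 0 for l >= 1);
    # S_l(t) is then a polynomial in t of degree l.
    deg = lmax
    S = [[0.0] * (deg + 1) for _ in range(lmax + 1)]
    S[0][0] = 1.0
    S[1][1] = 2.0 * phat                    # S_1(t) = 2*phat*t
    for l in range(2, lmax + 1):
        integrand = [phat * c for c in S[l - 1]]
        for k in range(l - 1):
            prod = poly_mul(S[k], S[l - 1 - k], deg)
            integrand = [u + v for u, v in zip(integrand, prod)]
        for i in range(deg):                # S_l(t) = 2l * int_0^t integrand(s) ds
            S[l][i + 1] = 2 * l * integrand[i] / (i + 1)
    return S

def mp_moment(c, t, n):
    # classical Narayana-number formula for the moments of mu_{MP,c,t}
    if n == 0:
        return 1.0
    return t ** n * sum((comb(n, k) * comb(n, k - 1) // n) * c ** k
                        for k in range(1, n + 1))
```

The first moments agree exactly with those of \(\mu _{MP,{\hat{p}},2t}\).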

A slight modification of the proof of Theorem 3.7 in combination with the assertion about the stationary case in Theorem 1.1 leads to the following limit result on the zeros of the Jacobi polynomials; see also [13]. As the proof is analogous to that of Theorem 3.6, we omit it.

Theorem 3.10

Consider \(p_N,q_N\) with \( \lim _{N\rightarrow \infty } p_N/N=:{\hat{p}}\in [1,\infty [\) and \(\lim _{N\rightarrow \infty }q_N/N=\infty \), and define the norming constants \( b_N:=-1,\quad a_N:=q_N/N\).

Let \(-1<z_1^N<\ldots<z_N^N<1\) be the ordered zeros of the \(P_N^{(q_N-N,p_N-N)}\). Then, all moments of the empirical measures \({\tilde{\mu }}_{N}:=\frac{1}{N}\sum _{i=1}^N \delta _{a_N (z_i^N-b_N)}\) tend to those of

$$\begin{aligned} \left( \mu _{SC,2\sqrt{2}}\right) ^2 \boxplus \mu _{MP,{\hat{p}}-1,2}= \mu _{MP,{\hat{p}},2}. \end{aligned}$$
(3.24)

In particular, the \({\tilde{\mu }}_{N}\) tend weakly to \(\mu _{MP,{\hat{p}},2}\).

4 Almost Sure Limit Theorems for Jacobi Processes

In this section, we study the empirical measures of the renormalized Jacobi processes \((\tilde{X}_t)_{t\ge 0}\) on \(A_N\) from the introduction. Recall that these processes satisfy

$$\begin{aligned} d\tilde{X}_{t,i}= & {} \frac{\sqrt{2}}{\sqrt{\kappa }}\sqrt{(1-\tilde{X}_{t,i}^2)}\,\textrm{d}B_{t,i} \nonumber \\{} & {} +\left( (p_N-q_N)-(p_N+q_N)\tilde{X}_{t,i} +2\sum _{j:j\ne i}\frac{1-\tilde{X}_{t,i}\tilde{X}_{t,j}}{\tilde{X}_{t,i}-\tilde{X}_{t,j}}\right) \,\textrm{d}t \end{aligned}$$
(4.1)

for \(i=1,\dots ,N\) with fixed \(\kappa >0\).

Let \((a_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) and \((b_N)_{N\in {\mathbb {N}}}\subset {\mathbb {R}}\) be sequences of scaling constants. As in Sect. 3 we investigate the empirical measures

$$\begin{aligned} \mu _{N,t}:=\frac{1}{N}\sum _{i=1}^N\delta _{a_N(\tilde{X}_{t/s_N,i}-b_N)} \end{aligned}$$

for appropriate scalings \(a_N,b_N, s_N\). We begin with the following a.s. version of Theorem 3.1:

Theorem 4.1

Assume that \(\lim _{N\rightarrow \infty }p_N/N=\infty \) and \(\lim _{N\rightarrow \infty }q_N/N=\infty \) such that \(C:=\lim _{N\rightarrow \infty }p_N/q_N\ge 0\) exists. Define

$$\begin{aligned} a_N:=\frac{q_N}{\sqrt{Np_N}}, \quad b_N:=\frac{p_N-q_N}{p_N+q_N} \quad (N\in {\mathbb {N}}). \end{aligned}$$

Let \(\mu \in M^1({\mathbb {R}})\) satisfy (3.2), and let \((x^N)_{N\in {\mathbb {N}}}=((x_{1}^N,\dots ,x_{N}^N))_{N\in \mathbb {N}}\) be starting vectors \(x^N\in A_N\) such that all moments of the measures

$$\begin{aligned} \mu _{N,0}:=\frac{1}{N}\sum _{i=1}^N\delta _{a_N( x_i^N-b_N)} \end{aligned}$$

tend to those of \(\mu \) for \(N\rightarrow \infty \). Let \((\tilde{X}^N_{t})_{t\ge 0}\) be the solutions of the SDEs (4.1) with start in \(\tilde{X}^N(0)=x^N\) for \(N\in {\mathbb {N}}\), \(t\ge 0\). Then, for all \(t>0\), all moments of the empirical measures

$$\begin{aligned} \mu _{N,t/(p_N+q_N)} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N(\tilde{X}_{t/(p_N+q_N),i}^N-b_N)}\end{aligned}$$

tend to those of the probability measures \((e^{-t}\mu ) \boxplus \left( \sqrt{1-e^{-2t}}\mu _{sc,4(1+C)^{-3/2}}\right) \) almost surely.

Before proving this theorem with the specific scaling there, we proceed as in Sect. 2 and first investigate arbitrary affine shifts of \(\tilde{X}_t\). For this, define \(Y_t:=a_N(\tilde{X}_{t/(p_N+q_N)}-b_N)\) and

$$\begin{aligned} \mu _{N,t}=\frac{1}{N}\sum _{i=1}^N\delta _{Y_{t,i}},\quad S_{N,l}(t)=\frac{1}{N}\sum _{i=1}^NY_{t,i}^l\, \end{aligned}$$

which fits to the notation in our theorem. For abbreviation, we suppress the dependence of \(p,q,a,b\) on N. Then, by Itô’s formula,

$$\begin{aligned} \begin{aligned}&dY_{t,i} =\sqrt{\frac{2}{\kappa (p+q)}} \sqrt{a^2-(Y_{t,i}+ab)^2}\,\textrm{d}B_{t,i}\\&\quad +\left[ a\left( \frac{p-q}{p+q}-b\right) -Y_{t,i} +\frac{2}{p+q} \sum _{j:j\ne i}\frac{a^2(1-b^2)-Y_{t,i}Y_{t,j}-ab(Y_{t,i}+Y_{t,j})}{Y_{t,i}-Y_{t,j}}\right] \,\textrm{d}t. \end{aligned}\nonumber \\ \end{aligned}$$
(4.2)

Furthermore, for \(l\in {\mathbb {N}}\) we define

$$\begin{aligned} M_{l,t}:=\frac{l}{N}\sqrt{\frac{2}{\kappa (p+q)}} \int _0^t\sum _{i=1}^NY_{s,i}^{l-1}\sqrt{a^2-(Y_{s,i}+ab)^2}\,\textrm{d}B_{s,i}. \end{aligned}$$
(4.3)

Note that \(|Y_{t,i}|\le a(1+|b|)\) for all \(i,t\); this ensures that the \((M_{l,t})_{t\ge 0}\) are continuous martingales (w.r.t. the usual filtration). The first empirical moment now satisfies

$$\begin{aligned} S_{N,1}(t)-S_{N,1}(0) =\int _0^t\left( -S_{N,1}(s)+a\left( \frac{p-q}{p+q}-b\right) \right) \,\textrm{d}s+M_{1,t}. \end{aligned}$$

This is a linear stochastic differential equation of the form

$$\begin{aligned} f(t)-f(0)=\int _0^t(\lambda f(s)+g(s))\,\textrm{d}s+h(t),\end{aligned}$$
(4.4)

with \(\lambda =-1\), \(f(t)=S_{N,1}(t)\), \(g(t)=a\left( \frac{p-q}{p+q}-b\right) \), and \(h(t)=M_{1,t}\). As the solution of (4.4) is

$$\begin{aligned} f(t) = e^{\lambda t}\left( f(0)+\int _0^te^{-\lambda s}\left( g(s)+\lambda h(s)\right) \,\textrm{d}s \right) +h(t), \end{aligned}$$
(4.5)

we have

$$\begin{aligned} S_{N,1}(t) =e^{-t}\left( S_{N,1}(0)+\int _0^te^{s}\left( a\left( \frac{p-q}{p+q}-b\right) -M_{1,s} \right) \,\textrm{d}s\right) +M_{1,t}. \end{aligned}$$
(4.6)
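As a sanity check (an added illustration, not part of the derivation), the variation-of-constants formula (4.5) can be verified numerically for a toy choice of \(\lambda ,g,h\) with \(h(0)=0\), as is the case for the martingales \(M_{l,t}\):

```python
import numpy as np

lam, f0 = -1.0, 0.5
t = np.linspace(0.0, 1.0, 2001)
g, h = np.sin(t), t ** 2                    # arbitrary g; h with h(0) = 0

def cumtrapz(y, t):
    # cumulative trapezoidal integral with value 0 at t[0]
    inc = 0.5 * (y[1:] + y[:-1]) * np.diff(t)
    return np.concatenate(([0.0], np.cumsum(inc)))

inner = cumtrapz(np.exp(-lam * t) * (g + lam * h), t)
f = np.exp(lam * t) * (f0 + inner) + h      # formula (4.5)
residual = f - f0 - cumtrapz(lam * f + g, t) - h   # (4.4) requires this to vanish
```

Up to quadrature error, the residual of the integral equation (4.4) vanishes on the whole grid.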

By another application of Itô’s formula the higher empirical moments satisfy

$$\begin{aligned}&S_{N,l}(t)-S_{N,l}(0)={}\frac{l}{N}\sqrt{\frac{2}{\kappa (p+q)}} \sum _{i=1}^N\int _{0}^tY_{s,i}^{l-1}\sqrt{a^2-(Y_{s,i}+ab)^2}\,\textrm{d}B_{s,i}\nonumber \\&+\frac{l}{N}\sum _{i=1}^N\int _0^tY_{s,i}^{l-1}\nonumber \\&\quad \left[ a\left( \frac{p-q}{p+q}-b\right) -Y_{s,i} +\frac{2}{p+q}\sum _{j:j\ne i}\frac{a^2(1-b^2)-Y_{s,i}Y_{s,j}-ab(Y_{s,i}+Y_{s,j})}{Y_{s,i}-Y_{s,j}}\right] \textrm{d}s\nonumber \\&+\frac{l(l-1)}{N}\sum _{i=1}^N\frac{2}{\kappa (p+q)} \int _{0}^tY_{s,i}^{l-2}\left( a^2-(Y_{s,i}+ab)^2\right) \,\textrm{d}s\nonumber \\ ={}&M_{l,t}+\int _0^t\left( C_l S_{N,l}(s)+f_l(S_{N,1}(s),\dots ,S_{N,l-1}(s))\right) \,\textrm{d}s\,, \end{aligned}$$
(4.7)

with

$$\begin{aligned} C_l:=-l\left( 1+\frac{l-1}{p+q}\left( \frac{2}{\kappa }-1\right) \right) \end{aligned}$$
(4.8)

and, using the calculations leading to (2.4) and (2.3),

$$\begin{aligned} \begin{aligned}&f_l(S_{N,1},\dots ,S_{N,l-1})\\&:={}-l\left( -a\left( \frac{p-q}{p+q} -b\left( 1+\frac{2(l-1)}{p+q}\left( \frac{2}{\kappa }-1\right) \right) \right) S_{N,l-1} \right. \\&\quad \left. -\frac{a^2(1-b^2)(l-1)}{p+q}\left( \frac{2}{\kappa }-1\right) S_{N,l-2}\right. \\&\left. -\frac{Na^2(1-b^2)}{p+q}\sum _{k=0}^{l-2}S_{N,k}S_{N,l-2-k} +\frac{N}{p+q}\sum _{k=0}^{l-2}S_{N,k+1}S_{N,l-1-k} \right. \\&\quad \left. +\frac{2bNa}{p+q}\sum _{k=0}^{l-2}S_{N,k}S_{N,l-1-k}\right) . \end{aligned} \end{aligned}$$

Hence, by (4.5),

$$\begin{aligned} S_{N,l}(t){} & {} =e^{C_lt}\left( S_{N,l}(0)+\int _0^te^{-C_ls} \left( f_l(S_{N,1}(s),\dots ,S_{N,l-1}(s))+C_lM_{l,s}\right) \,\textrm{d}s \right) \nonumber \\{} & {} \quad \ +M_{l,t}. \end{aligned}$$
(4.9)

For the proof of Theorem 4.1 and further limit theorems the following observation is crucial.

Lemma 4.2

Let \(T>0\), and let \(p_N,q_N,a_N,b_N\) be as in Theorem 4.1 or Theorem 4.3 below. Assume that \(\lim _{N\rightarrow \infty }S_{N,l}(0)\) exists for all \(l\in {\mathbb {N}}\). Then, for all \(l\in {\mathbb {N}}\), the martingales \((M_{l,t})_{t\ge 0}\) from (4.3) converge uniformly to 0 on [0, T] a.s.

Proof

In a first step we show that the sequence \((E[|S_{N,l}(t)|])_{N\in {\mathbb {N}}}\) is uniformly bounded on [0, T]. Here we first study the case \(l\in 2{\mathbb {N}}\). By (4.9) and our assumptions on \(p,q,a,b\), there are non-negative bounded sequences \(d_1(N),\dots ,d_5(N)\) of numbers such that

$$\begin{aligned} \begin{aligned}&E(S_{N,l}(t))\\&\le {} e^{C_lt}\left( S_{N,l}(0) +\int _0^te^{-C_ls}\left( d_1E\left[ |S_{N,l-1}(s)|\right] +d_2E\left[ |S_{N,l-2}(s)|\right] \right. \right. \\&\quad \left. \left. +d_3\sum _{k=0}^{l-2}E\left[ |S_{N,k}(s)S_{N,l-2-k}(s)|\right] \right. \right. \\&\quad \left. \left. +d_4\sum _{k=0}^{l-2}E\left[ |S_{N,k+1}(s)S_{N,l-1-k}(s)|\right] +d_5\sum _{k=0}^{l-2}E\left[ |S_{N,k}(s)S_{N,l-1-k}(s)|\right] \right) \,\textrm{d}s\right) . \end{aligned} \end{aligned}$$

Moreover, by the triangle inequality and Jensen’s inequality,

$$\begin{aligned} |S_{N,l-1}(s)|\le \frac{1}{N}\sum _{i=1}^N|Y_{s,i}|^{l-1} \le \left( \frac{1}{N}\sum _{i=1}^N Y_{s,i}^l\right) ^{\frac{l-1}{l}} \le 1+S_{N,l}(s). \end{aligned}$$
(4.10)

By the same reasons, we also have

$$\begin{aligned} |S_{N,k}(s)S_{N,l-1-k}(s)|\le \left( \frac{1}{N}\sum _{i=1}^N|Y_{s,i}|^{l-1}\right) ^{\frac{k}{l-1}} \left( \frac{1}{N}\sum _{i=1}^N|Y_{s,i}|^{l-1} \right) ^{\frac{l-1-k}{l-1}} \le 1+ S_{N,l}(s), \end{aligned}$$

\(|S_{N,k}(s)S_{N,l-2-k}(s)|\le S_{N,l-2}(s)\le 1+S_{N,l}(s)\) and \(|S_{N,k+1}(s)S_{N,l-1-k}(s)|\le S_{N,l}(s)\). Thus, there exist non-negative bounded sequences \(\tilde{d}_1(N),\tilde{d}_2(N)\) of numbers such that

$$\begin{aligned} e^{-C_lt}E[S_{N,l}(t)] \le S_{N,l}(0) +\int _0^te^{-C_ls}\left( \tilde{d}_1+\tilde{d}_2E[S_{N,l}(s)]\right) \,\textrm{d}s. \end{aligned}$$

By Gronwall’s inequality we conclude that

$$\begin{aligned} e^{-C_{l}t}E[S_{N,l}(t)] \le \left( S_{N,l}(0) +\int _0^t\tilde{d}_1e^{-C_ls}\,\textrm{d}s\right) \cdot \exp \left( \tilde{d}_2t\right) \end{aligned}$$

where the \(C_l\) from (4.8) remain bounded. Thus \((E[S_{N,l}(t)])_{N\in {\mathbb {N}}}\) remains uniformly bounded for \(t\in [0,T]\) in the case of even l. Finally, by (4.10) this also holds for l odd.

In a second step we now show the claim of the lemma. The quadratic variation of \(M_{l,t}\) is given by

$$\begin{aligned}{}[M_{l}]_t =\frac{2}{N^2\kappa (p+q)}\sum _{i=1}^N \int _0^tl^2Y_{s,i}^{2l-2}\left( a^2-(Y_{s,i}+ab)^2\right) \,\textrm{d}s. \end{aligned}$$

By the Chebyshev inequality and the Burkholder-Davis-Gundy inequality there is a constant \(c>0\) independent from N such that

$$\begin{aligned} \begin{aligned}&P\left( \sup _{0\le t\le T}|M_{l,t}|>\epsilon \right) \\&\le {}\frac{1}{\epsilon ^2}E\left[ \sup _{0\le t\le T}|M_{l,t}|^2\right] \le {}\frac{c}{\epsilon ^2}E\left[ [M_{l}]_T\right] \\&={}\frac{2cl^2}{N^2\kappa (p+q)}\sum _{i=1}^N \int _0^TE\left[ Y_{s,i}^{2l-2}\left( a^2-(Y_{s,i}+ab)^2\right) \right] \,\textrm{d}s\\&\le {}\frac{2cl^2(a^2(1-b^2)+2a)}{N\kappa (p+q)} \int _0^TE\left[ 1+S_{N,2l-2}(s)\right] \,\textrm{d}s. \end{aligned} \end{aligned}$$

If we choose \(p,q,a,b\) as in Theorem 4.1 or Theorem 4.3, we have \(\frac{a^2(1-b^2)+2a}{N(p+q)}\in {\mathcal {O}}(N^{-2})\). By the first part of the proof we thus conclude that \(P\left( \sup _{0\le t\le T}|M_{l,t}|>\epsilon \right) \in {\mathcal {O}}(N^{-2})\) for each \(\epsilon >0\). The claim now follows by the Borel–Cantelli lemma. \(\square \)

We now turn to the specific scaling in Theorem 4.1:

Proof of Theorem 4.1

We again suppress the dependence of \(p,q,a,b\) on N. We define

$$\begin{aligned} \mu _t:=(e^{-t}\mu )\boxplus \left( \sqrt{1-e^{-2t}}\mu _{sc,4(1+C)^{-3/2}}\right) \end{aligned}$$

with the moments \(c_l(t):=\int _{{\mathbb {R}}}x^l\,d\mu _t(x)\). By the proof of Theorem 3.1, we have \(c_1(t)=e^{-t}c_1(0)\) and

$$\begin{aligned} c_l(t) =e^{-lt}\left( c_l(0)+4l(1+C)^{-3}\int _0^te^{ls}\sum _{k=0}^{l-2}c_k(s)c_{l-2-k}(s)\, \textrm{d}s\right) , \quad l\ge 2. \end{aligned}$$

By induction we will show that the limits \(S_l(t):=\lim _{N\rightarrow \infty }\int _{{\mathbb {R}}}x^l\,d\mu _{N,t/(p+q)}(x)\), \(l\in {\mathbb {N}}\), exist and satisfy the same recursion as the \(c_l(t)\).

Let \(l=1\). By (4.6), our choice of \(b_N\) and Lemma 4.2, we have \(S_1(t):=\lim _{N\rightarrow \infty }S_{N,1}(t)=e^{-t}c_1(0)\) locally uniformly in t a.s.

Let \(l\ge 2\). Note that \(C_l\) in (4.9) converges to \(-l\). We now calculate the limit of \(f_l(S_{N,1}(t),\dots ,S_{N,l-1}(t))\). For this note that

$$\begin{aligned}{} & {} \lim _{N\rightarrow \infty }\frac{4l(l-1)ab}{\kappa (p+q)}=0,\; \lim _{N\rightarrow \infty }\frac{2l(l-1)a^2}{\kappa (p+q)}=0,\\{} & {} \lim _{N\rightarrow \infty }a\left( \frac{p-q}{p+q}-b\left( 1+\frac{2(l-1)}{p+q} \left( \frac{2}{\kappa }-1\right) \right) \right) =0 ,\\{} & {} \lim _{N\rightarrow \infty }\frac{(1-b^2)a^2(l-1)}{p+q}\left( \frac{2}{\kappa }-1\right) =0,\; \lim _{N\rightarrow \infty }N/(p+q)=0,\; \lim _{N\rightarrow \infty }\frac{2bNa}{p+q}=0,\\{} & {} \lim _{N\rightarrow \infty }\frac{Na^2(1-b^2)}{p+q}=4(1+C)^{-3}. \end{aligned}$$

Hence, by our induction assumption, we have a.s. locally uniformly in t that

$$\begin{aligned} \lim _{N\rightarrow \infty }f_l(S_{N,1}(t),\dots ,S_{N,l-1}(t)) =4l(1+C)^{-3}\sum _{k=0}^{l-2}S_k(t)S_{l-2-k}(t). \end{aligned}$$

Thus by (4.9) and Lemma 4.2, the limit \(S_l(t)=\lim _{N\rightarrow \infty }S_{N,l}(t)\) exists and satisfies

$$\begin{aligned} S_{l}(t) =e^{-lt}\left( S_l(0)+4l(1+C)^{-3}\int _0^te^{ls}\sum _{k=0}^{l-2}S_{k}(s)S_{l-2-k}(s)\,\textrm{d}s \right) \;\text {a.s.}, \end{aligned}$$

so that the \(S_l(t)\) satisfy the same recursion as the \(c_l(t)\).

This proves the claim in the same way as in the proof of Theorem 3.1. \(\square \)
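Differentiating the moment recursion above gives the ODE system \(c_l'=-l\,c_l+4l(1+C)^{-3}\sum _{k=0}^{l-2}c_kc_{l-2-k}\), whose stationary point consists of the moments of \(\mu _{sc,4(1+C)^{-3/2}}\). A short numerical integration (an added illustration; the value \(C=1\) and the starting law \(\mu =\frac{1}{2}(\delta _{-1}+\delta _1)\) are hypothetical choices) shows the convergence to stationarity:

```python
import numpy as np

def rhs(c, A):
    # c_l' = -l*c_l + A*l*sum_{k=0}^{l-2} c_k c_{l-2-k},  A = 4*(1+C)^(-3)
    d = np.zeros_like(c)
    for l in range(1, len(c)):
        conv = sum(c[k] * c[l - 2 - k] for k in range(l - 1))
        d[l] = -l * c[l] + A * l * conv
    return d

C_par = 1.0                       # hypothetical value of C
A = 4.0 / (1.0 + C_par) ** 3      # equals (r/2)^2 for r = 4*(1+C)^(-3/2)
c = np.zeros(7)
c[0] = 1.0
c[2] = c[4] = c[6] = 1.0          # moments of mu = (delta_{-1} + delta_1)/2
dt, T = 0.01, 8.0
for _ in range(int(T / dt)):      # classical RK4 steps
    k1 = rhs(c, A)
    k2 = rhs(c + 0.5 * dt * k1, A)
    k3 = rhs(c + 0.5 * dt * k2, A)
    k4 = rhs(c + dt * k3, A)
    c = c + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For large t, the even moments approach the Catalan moments \(C_k A^k\) of the semicircle law, while the odd moments stay at zero, independently of the symmetric start \(\mu \).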

By using the same technique, we also readily obtain the following stochastic version of Theorem 3.7; notice that Lemma 4.2 is also available here.

Theorem 4.3

Consider \(p_N,q_N,a_N,b_N\) as in (3.13) and (3.14). Let \(\mu \in M^1([0,\infty [)\) satisfy (3.2). Moreover, let \((x^N)_{N\in \mathbb {N}}=((x_{1}^N,\dots ,x_{N}^N))_{N\in {\mathbb {N}}}\) be an associated sequence of starting vectors \(x^N\in A_N\) as in the preceding results.

Let \(\tilde{X}^N_{t}\) be the solutions of the SDEs (4.1) with start in \(\tilde{X}^N(0)=x^N\) for \(N\in {\mathbb {N}}\), \(t\ge 0\). Then, for all \(t>0\), all moments of the empirical measures

$$\begin{aligned} \mu _{N,t/(p_N+q_N)} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N(\tilde{X}_{t/(p_N+q_N),i}^N-b_N)}\end{aligned}$$

tend almost surely to those of the probability measures

$$\begin{aligned} \left( \mu _{SC,2\sqrt{2(1-e^{-t})}} \boxplus \left( \sqrt{e^{-t}\mu }\right) _{\text {even}}\right) ^2 \boxplus \mu _{MP,{\hat{p}}-1,2(1-e^{-t})},\quad t>0. \end{aligned}$$
(4.11)

Remark 4.4

The methods of the proof above also lead to stochastic versions of Theorems 3.4 and 3.9. This means that in these theorems the moment convergence holds a.s. for the rescaled Jacobi process \(\tilde{X}_t\) instead of the solution x(t) of (1.6).

For some parameters \(\kappa ,p,q\), the solutions \((\tilde{X}_t)_{t\ge 0}\) of the SDEs (4.1) admit interpretations in terms of dynamic versions of MANOVA ensembles over the fields \({\mathbb {F}}={\mathbb {R}},{\mathbb {C}}\) by Doumerc [14] as follows. Let \(d=1,2\) be the real dimension of \({\mathbb {F}}\). Consider Brownian motions \((Z_t^n)_{t\ge 0}\) on the compact groups \(SU(n,{\mathbb {F}})\) with some suitable time scalings. Now take positive integers \(N, p\) with \(N\le p\le n\), and denote the \(N\times p\)-block of a square matrix A of size n by \(\pi _{N,p}(A)\). Moreover, let \(\sigma (B)\) be the ordered spectrum of some positive semidefinite matrix B. It is shown in [14] that then

$$\begin{aligned} \Bigl ({\tilde{X}}_t:= 2\cdot \sigma \Bigl ( \pi _{N,p}(Z_t^n) \pi _{N,p}(Z_t^n)^*\Bigr )-1\Bigr )_{t\ge 0} \end{aligned}$$

is a diffusion on \(A_N\) satisfying the SDE (4.1) with the parameters \(p\ge N\), \(q:=n-p\), and \(\kappa =d/2\). Clearly, all of the preceding limit results in Sect. 4 can be applied in this case for suitable sequences \(p_N, n_N \) of dimension parameters depending on N. Moreover the limiting regime \(a_N,b_N,p_N/N,q_N/N\sim \text {const.}\) has been studied in this context by free probability methods using projections of free unitary Brownian motion; see e.g. [8, 11].
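For \({\mathbb {F}}={\mathbb {C}}\), the stationary regime of this matrix model can be sampled directly, since the Brownian motion on the unitary group equilibrates to Haar measure. The following sketch is an added illustration with hypothetical sizes; the QR-based Haar sampling recipe is standard, and it samples \(U(n)\) rather than \(SU(n,{\mathbb {C}})\), which is immaterial for the block spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, p = 240, 60, 120                      # hypothetical sizes with N <= p <= n
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
Q, R = np.linalg.qr(G)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))   # phase fix: U is Haar on U(n)
B = U[:N, :p]                               # the N x p block pi_{N,p}(U)
lam = np.linalg.eigvalsh(B @ B.conj().T)    # spectrum of pi(U) pi(U)^*, in [0, 1]
X = 2.0 * lam - 1.0                         # the map used in the text
```

The resulting points lie in \([-1,1]\), and since each diagonal entry of \(BB^*\) has expectation p/n, the empirical mean of X concentrates near \(2p/n-1\).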

This geometric interpretation includes the interpretation for \(n=p+N\), i.e. \(q=N\), where the Jacobi processes are suitable projections of Brownian motions on the compact Grassmann manifolds with the dimension parameters \(N, p\) over \({\mathbb {F}}\). We also remark that this even works for the field of quaternions with \(\kappa =d/2=2\); see [18] for the analytical background.

5 Limit Theorems in the Noncompact Case

The Jacobi processes on compact alcoves in the preceding section admit analogues in a noncompact setting, namely the so-called Heckman–Opdam Markov processes associated with root systems of type BC introduced in [32, 33]. Due to the close connections with the Jacobi processes on compact alcoves above, we call these processes Jacobi processes in a noncompact setting. For some parameters, these processes are related to Brownian motions on noncompact Grassmann manifolds over \({\mathbb {R}}, {\mathbb {C}}\), and the quaternions similar to the comments above. For the general background we refer to the monographs [17, 18] and references therein.

We here derive analogues of the main results of Sects. 2, 3 and 4 in this noncompact setting. For this, we first introduce these processes in a similar way as in the compact case. We fix some dimension \(N\ge 2\) and parameters \(k_1,k_2\in {\mathbb {R}}\) and \(k_3>0\) with \(k_2\ge 0\) and \(k_1+k_2\ge 0\). We define the (noncompact) Heckman–Opdam Laplacians of type BC on the Weyl chambers

$$\begin{aligned} {\tilde{C}}_N:=\{w\in {\mathbb {R}}^N: \> 0\le w_1\le \ldots \le w_N\} \end{aligned}$$

of type B by

$$\begin{aligned} L_{trig,k}f(w)&:=\Delta f(w) + \sum _{i=1}^N\Biggl ( k_1 \textrm{coth}\>(w_i/2)+ 2k_2 \textrm{coth}\>(w_i)\nonumber \\&+k_3\sum _{j: j\ne i} \Bigl (\textrm{coth}\>(\frac{w_i-w_j}{2})+\textrm{coth}\>(\frac{w_i+w_j}{2})\Bigr )\Biggr )f_{w_i}(w) \end{aligned}$$
(5.1)

for functions \(f\in C^2({\mathbb {R}}^N)\) which are invariant under the associated Weyl group. By [32, 33], the \(L_{trig,k}\) are the generators of Feller diffusions \((W_t)_{t\ge 0}\) on \({\tilde{C}}_N\) where the paths are reflected on the boundary. We next use the transformation \(x_i:=\cosh w_i\) (\(i=1,\ldots ,N\)) with

$$\begin{aligned} x\in C_N:=\{x\in {\mathbb {R}}^N: \> 1\le x_1\le \ldots \le x_N\}. \end{aligned}$$

The diffusions \((W_t)_{t\ge 0}\) on \({\tilde{C}}_N\) then are transformed into Feller diffusions \((X_t)_{t\ge 0}\) on \( C_N\) with reflecting boundaries and, by some elementary calculus, with the generators

$$\begin{aligned} L_kf(x):= & {} \sum _{i=1}^N (x_i^2-1)f_{x_ix_i}(x)\nonumber \\{} & {} + \sum _{i=1}^N\Biggl ( (k_1+2k_2+ 2k_3(N-1)+1)x_i+k_1 +2k_3\sum _{j: j\ne i} \frac{x_ix_j-1}{x_i-x_j}\Biggr )f_{x_i}(x).\nonumber \\ \end{aligned}$$
(5.2)

As in the introduction, we redefine the parameters by

$$\begin{aligned} \kappa :=k_3>0, \quad q:= N-1+\frac{1+2k_1+2k_2}{2k_3}, \quad p:= N-1+\frac{1+2k_2}{2k_3} \end{aligned}$$
(5.3)

with \(p, q>N-1\) and rewrite (5.2) as

$$\begin{aligned} L_kf(x){} & {} :=\sum _{i=1}^N (x_i^2-1)f_{x_ix_i}(x)\nonumber \\{} & {} \quad \ + \kappa \sum _{i=1}^N\Biggl ((q-p) + (q+p)x_i +2\sum _{j: j\ne i} \frac{x_ix_j-1}{x_i-x_j}\Biggr )f_{x_i}(x). \end{aligned}$$
(5.4)

Moreover, we also consider the transformed processes \((\tilde{X}_{t}:=X_{t/\kappa })_{t\ge 0}\) with the generators \(\frac{1}{\kappa } L_k\) which then are the unique strong solutions of the SDEs

$$\begin{aligned} \textrm{d}{\tilde{X}}_{t,i} =\frac{\sqrt{2}}{\sqrt{\kappa }} \sqrt{\tilde{X}_{t,i}^2-1}\> \textrm{d}B_{t,i} +\Bigl ((q-p) +(q+p){\tilde{X}}_{t,i} + 2\sum _{j: j\ne i}\frac{{\tilde{X}}_{t,i}{\tilde{X}}_{t,j}-1}{\tilde{X}_{t,i}-{\tilde{X}}_{t,j}}\Bigr )\textrm{d}t \nonumber \\ \end{aligned}$$
(5.5)

for \( i=1,\ldots ,N\), and starting points \(x_0\) in the interior of \(C_N\).

For \(\kappa =\infty \) and \(p,q>N-1\), these SDEs degenerate to the ODEs

$$\begin{aligned} \frac{d}{\textrm{d}t}x_i(t) =(q-p)+(q+p)x_i(t) +2\sum _{j: j\ne i}\frac{x_i(t) x_j(t)-1}{x_i(t)-x_j(t)} \quad (i=1,\dots ,N).\nonumber \\ \end{aligned}$$
(5.6)

The RHS of (5.6) is the negative of the RHS of (1.6), where the solutions now exist on a different, “complementary” domain. Theorem 1.1 takes the following form here; see the Appendix.

Theorem 5.1

Let \(N\in {\mathbb {N}}\) and \(p,q> N-1\). Then, for each \(x_0\in C_N\), the ODE (5.6) has a unique solution \(x(t)\) for \(t\ge 0\), i.e. there is a unique continuous function \(x:[0,\infty )\rightarrow C_N\) with \(x(0)=x_0\) such that for \(t>0\), \(x(t)\) lies in the interior of \( C_N\) and satisfies (5.6).
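
Theorem 5.1 can be illustrated numerically: integrating (5.6) with a classical Runge–Kutta scheme from a point in the interior of \(C_N\), the trajectory stays strictly ordered with all components above 1. The parameters below are illustrative choices:

```python
# Runge-Kutta (RK4) integration sketch of the ODE (5.6).
import numpy as np

def rhs(x, p, q):
    """Right-hand side of (5.6)."""
    N = len(x)
    out = np.empty(N)
    for i in range(N):
        inter = sum((x[i]*x[j] - 1.0)/(x[i] - x[j]) for j in range(N) if j != i)
        out[i] = (q - p) + (q + p)*x[i] + 2.0*inter
    return out

def rk4(x0, p, q, dt, steps):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = rhs(x, p, q)
        k2 = rhs(x + 0.5*dt*k1, p, q)
        k3 = rhs(x + 0.5*dt*k2, p, q)
        k4 = rhs(x + dt*k3, p, q)
        x = x + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return x

# N = 4 particles, p = q = 10 > N - 1, integrated up to t = 0.05
x = rk4([1.2, 1.6, 2.0, 2.4], p=10.0, q=10.0, dt=1e-4, steps=500)
```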

For the solutions of (5.6), we have the following local Wigner-type limit theorem, which is completely analogous to Theorem 3.4 (2).

Theorem 5.2

Consider \((p_N)_{N\in {\mathbb {N}}},(q_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) with \(p_N,q_N>N-1\) for \(N\ge 1\). Let \((b_N)_{N\in {\mathbb {N}}}\subset ]1,\infty [\) be such that \(B:=\lim _{N\rightarrow \infty } b_N\in [1,\infty ]\) exists, and let \((s_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) be time scalings with

$$\begin{aligned} \lim _{N\rightarrow \infty }\frac{p_N+q_N}{\sqrt{Ns_N}}=0. \end{aligned}$$

Define the space scalings \( a_N:=\sqrt{s_N/N}\).

Let \(\mu \in M^1({\mathbb {R}})\) satisfy (3.2), and let \((x_N)_{N\in {\mathbb {N}}}\) be associated starting vectors with \(x_N\in C_N\) as in the preceding limit results.

Let \(x_N(t)\) be the solutions of the ODEs (5.6) with \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}\). Then, for \(t>0\), all moments of the measures \( \mu _{N,t/s_N} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/s_N)-b_N)}\) tend to those of \( \mu \boxplus \mu _{sc,2\sqrt{(B^2-1)t}}\).

Proof

As the RHSs of (5.6) and (1.6) are equal up to a sign, the computations in Sect. 2 and in the proof of Theorem 3.4 (2) imply that for \(l\ge 0\) and \(t\ge 0\), the moments \({\tilde{S}}_{N,l}(t)\) of the empirical measures \(\mu _{N,t/s_N}\) converge for \(N\rightarrow \infty \) to functions \(S_l(t)\) which satisfy

$$\begin{aligned} &S_0\equiv 1,\quad S_1(t)=S_1(0),\\ &S_l(t)=S_l(0)+l(B^2-1)\int _0^t\sum _{k=0}^{l-2}S_k(s)S_{l-2-k}(s) \,\textrm{d}s \quad (l\ge 2). \end{aligned}$$
(5.7)

The claim now follows in the same way as in Theorem 3.4 (2). \(\square \)
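
The recursion (5.7) is easy to implement, since each \(S_l(t)\) is a polynomial in \(t\). The following sketch (our own illustration) represents the \(S_l\) by coefficient lists and starts, for simplicity, from \(\mu =\delta _0\), i.e. \(S_0\equiv 1\) and \(S_l(0)=0\) for \(l\ge 1\); the recursion then yields \(S_2(t)=2(B^2-1)t\), \(S_3\equiv 0\) and \(S_4(t)=8(B^2-1)^2t^2\) by direct computation:

```python
# Moment recursion (5.7), with polynomials in t stored as coefficient lists.
import numpy as np

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0.0]*(n - len(a)); b = b + [0.0]*(n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    return list(np.convolve(a, b))

def poly_int(a):
    # integrate sum_k a_k s^k from 0 to t  ->  sum_k a_k t^(k+1)/(k+1)
    return [0.0] + [c/(k + 1) for k, c in enumerate(a)]

def poly_eval(a, t):
    return sum(c * t**k for k, c in enumerate(a))

def moments(lmax, c, init):
    """Solve (5.7) with c = B^2 - 1 and initial moments init[l] = S_l(0)."""
    S = [[1.0], [float(init[1])]]        # S_0 == 1, S_1(t) = S_1(0)
    for l in range(2, lmax + 1):
        conv = [0.0]
        for k in range(l - 1):           # k = 0, ..., l-2
            conv = poly_add(conv, poly_mul(S[k], S[l - 2 - k]))
        S.append(poly_add([float(init[l])], [l*c*x for x in poly_int(conv)]))
    return S

S = moments(4, c=1.0, init=[1.0, 0.0, 0.0, 0.0, 0.0])   # mu = delta_0, B^2 - 1 = 1
```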

The stationary local limit theorem, Theorem 3.1, does not seem to have a meaningful analogue in the noncompact setting, as the assumptions on the \(p_N, q_N, a_N, b_N\) in Theorem 3.1 imply \(b_N\in ]-1,1[\) for all N, so that the rescaled empirical measures for \(t=0\) in the assumptions of Theorem 3.1 cannot converge. On the other hand, we have the following variants of Theorems 3.7 and 3.9, which involve Marchenko–Pastur distributions. Due to the time inversion, the analogue of Theorem 3.7 is now non-stationary:

Theorem 5.3

Consider sequences \((p_N)_{N\in {\mathbb {N}}},(q_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) with

$$\begin{aligned} \lim _{N\rightarrow \infty }p_N/N=\infty \quad \text { and}\quad \lim _{N\rightarrow \infty }q_N/N={\hat{q}}. \end{aligned}$$

Define \(a_N:=p_N/N\) and \(b_N:=1\) (\(N\in {\mathbb {N}}\)). Let \(\mu \in M^1([0,\infty [)\) satisfy (3.2), and let \((x_N)_{N\in {\mathbb {N}}}\) be associated starting vectors \(x_N\in C_N\) as before. Let \(x_N(t)\) be the solutions of the ODEs (5.6) with start in \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}\) and \(t\ge 0\). Then, for \(t>0\), all moments of the measures

$$\begin{aligned} \mu _{N,t/(p_N+q_N)} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/(p_N+q_N))-b_N)}\end{aligned}$$

tend to those of the measures

$$\begin{aligned} \mu (t):=\left( \mu _{sc,2\sqrt{2(e^t-1)}} \boxplus \left( \sqrt{e^{t}\mu }\right) _{\text {even}}\right) ^2 \boxplus \mu _{MP,{\hat{q}}-1,2(e^t-1)},\quad t>0. \end{aligned}$$
(5.8)

Proof

The proof is analogous to the one of Theorem 3.7. We just give the main steps.

The moments \(S_{N,l}(t/(p_N+q_N))\) of the empirical measures \(\mu _{N,t/(p_N+q_N)}\) tend to functions \(S_l(t)\) which satisfy

$$\begin{aligned} \begin{gathered} S_0\equiv 1,\quad S_1(t)=e^{t}\left( S_1(0)+2{\hat{q}}\right) -2{\hat{q}},\\ S_l(t) =e^{lt}\left( S_l(0)+2l\int _0^te^{-ls}\left( {\hat{q}}S_{l-1}(s)+ \sum _{k=0}^{l-2}S_k(s)S_{l-1-k}(s)\right) \,\textrm{d}s\right) ,\quad l\ge 2. \end{gathered} \end{aligned}$$
(5.9)

One then can deduce that the corresponding R-transform satisfies

$$\begin{aligned} 0=R_t(t,z)-(z+2z^2)R_z(t,z)-2{\hat{q}}-(4z+1)R(t,z),\;R(0,z)=:\phi (z), \end{aligned}$$

which is solved by

$$\begin{aligned} R(t,z)=&\ e^{t}(1-2z(e^t-1))^{-2}\,\phi \bigl (e^{t}z(1-2z(e^t-1))^{-1}\bigr ) \\ &+\frac{2(e^t-1)}{1-2z(e^t-1)} +\frac{2({\hat{q}}-1)(e^t-1)}{1-2z(e^t-1)}. \end{aligned}$$
(5.10)

Finally one concludes as in the proof of Theorem 3.7 by using the time change \(s\mapsto -s\) in the definition of \(\hat{\phi }\). \(\square \)
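
That (5.10) indeed solves the partial differential equation for the R-transform stated in the proof can also be checked symbolically. The following sketch (our own sanity check, not part of the proof) verifies this with sympy for the illustrative initial condition \(\phi (z)=z\):

```python
# Symbolic check: the closed form (5.10) solves the R-transform PDE
#   0 = R_t - (z + 2z^2) R_z - 2*qhat - (4z + 1) R
# for the illustrative initial condition phi(z) = z.
import sympy as sp

t, z, qh = sp.symbols('t z qhat', positive=True)
E = sp.exp(t)
D = 1 - 2*z*(E - 1)                 # the recurring factor 1 - 2z(e^t - 1)
phi = lambda w: w                   # illustrative initial condition R(0, z) = z

# closed form (5.10)
R = (E/D**2)*phi(E*z/D) + 2*(E - 1)/D + 2*(qh - 1)*(E - 1)/D

# residual of the PDE; it should simplify to zero
residual = sp.diff(R, t) - (z + 2*z**2)*sp.diff(R, z) - 2*qh - (4*z + 1)*R
```

Evaluating (5.10) at \(z=0\) also shows that the mean grows as \(e^t\phi (0)+2{\hat{q}}(e^t-1)\), consistent with the first moment in (5.9).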

The following result is obtained in the same way, by the methods of the proof of Theorem 3.9.

Theorem 5.4

Let \(p_N,q_N>N-1\) for \(N\ge 1\) with \(\lim _{N\rightarrow \infty }q_N/N={\hat{q}}\in [1,\infty [\). Let \((s_N)_{N\in {\mathbb {N}}}\subset ]0,\infty [\) be time scalings with \(\lim _{N\rightarrow \infty }(p_N+q_N)/s_N=0\). Define the space scalings \(a_N:=s_N/N\), \(b_N:=1\) (\(N\in {\mathbb {N}}\)). Let \(\mu \in M^1([0,\infty [)\) satisfy (3.2), and let \((x_N)_{N\in {\mathbb {N}}}\) be associated starting vectors as before. Let \(x_N(t)\) be the solutions of the ODEs (5.6) with \(x_N(0)=x_N\) for \(N\in {\mathbb {N}}\). Then, for \(t>0\), all moments of the empirical measures

$$\begin{aligned} \mu _{N,t/s_N} =\frac{1}{N}\sum _{i=1}^N \delta _{a_N( x_i^N(t/s_N)-b_N)} \end{aligned}$$

tend to those of \(\left( \mu _{sc,2\sqrt{2t}} \boxplus \left( \sqrt{\mu }\right) _{\text {even}}\right) ^2 \boxplus \mu _{MP,{\hat{q}}-1,2t}\).

The corresponding stochastic limit results from Sect. 4 can also be transferred to the noncompact setting; we skip the details.