1 Introduction

The aim of this short note is to establish intertwining relations between the semigroups of general \(\beta \)-Laguerre and \(\beta \)-Jacobi processes, in analogy to the ones obtained for general \(\beta \)-Dyson Brownian motion in [20] (see also [13]). These also generalize the relations obtained for \(\beta =2\) in [3], where the transition kernels of these semigroups are given explicitly in terms of h-transforms of Karlin–McGregor determinants.

We begin by introducing the stochastic processes we will be dealing with. Consider the unique strong solution, with values in \([0,\infty )^n\), to the following system of SDEs for \(i=1,\ldots ,n\),

$$\begin{aligned} \hbox {d}X_i^{(n)}(t)=2\sqrt{X_i^{(n)}(t)}\hbox {d}B_i^{(n)}(t)+\beta \left( \frac{d}{2}+\sum _{1\le j \le n, j \ne i}^{}\frac{2X_i^{(n)}(t)}{X_i^{(n)}(t)-X_j^{(n)}(t)}\right) \hbox {d}t, \end{aligned}$$
(1)

where the \(B_i^{(n)}\), \(i=1,\ldots , n,\) are independent standard Brownian motions. This process was introduced and studied by Demni [7] in relation to Dunkl processes (see, for example, [22]), where it is referred to as the \(\beta \)-Laguerre process, since its distribution at time 1, if started from the origin, is given by the \(\beta \)-Laguerre ensemble (see Sect. 5 of [7]). We could, equally well, have called this the \(\beta \)-squared Bessel process, since for \(\beta =2\) it consists exactly of \(n\) BESQ(d) diffusion processes conditioned to never collide, as first proven in [15], but we stick to the terminology of [7]. Similarly, consider the unique strong solution to the following system of SDEs in \([0,1]^n\),

$$\begin{aligned} \hbox {d}X_i^{(n)}(t)= & {} 2\sqrt{X_i^{(n)}(t)(1-X_i^{(n)}(t))}\hbox {d}B_i^{(n)}(t)\nonumber \\&+\,\beta \left( a-(a+b)X_i^{(n)}(t)+\sum _{1\le j \le n, j \ne i}^{}\frac{2X_i^{(n)}(t)(1-X_i^{(n)}(t))}{X_i^{(n)}(t)-X_j^{(n)}(t)}\right) \hbox {d}t,\nonumber \\ \end{aligned}$$
(2)

where again, the \(B_i^{(n)}\), \(i=1,\ldots , n,\) are independent standard Brownian motions. We call this solution the \(\beta \)-Jacobi process. It was first introduced and studied in [6] as a generalization of the eigenvalue evolutions of matrix Jacobi processes; its stationary distribution is given by the \(\beta \)-Jacobi ensemble (see Sect. 4 of [6]):

$$\begin{aligned} {\mathcal {M}}^{Jac,n}_{a,b,\beta }(\hbox {d}x)=C_{n,a,b,\beta }^{-1}\prod _{i=1}^{n}x^{\frac{\beta }{2}a-1}_{i}(1-x_i)^{\frac{\beta }{2}b-1}\prod _{1\le i < j \le n}^{}|x_j-x_i|^{\beta }\hbox {d}x, \end{aligned}$$
(3)

for some normalization constant \(C_{n,a,b,\beta }\).
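Although the note does not use its explicit value, for small \(n\) the constant \(C_{n,a,b,\beta }\) can be evaluated in closed form via Selberg's integral (the integral Anderson studied in [1]). The following Python sketch is our illustration, not part of the paper; the parameterization of Selberg's formula (\(\alpha =\theta a\), \(\gamma =\theta \)) is standard but is an assumption here, and the function name is ours.

```python
import math
from scipy.integrate import dblquad

def selberg_constant(n, a, b, beta):
    """Candidate closed form for C_{n,a,b,beta} in (3), via Selberg's integral
    with alpha = theta*a, beta' = theta*b, gamma = theta = beta/2."""
    theta = beta / 2.0
    val = 1.0
    for j in range(n):
        val *= (math.gamma(theta * a + j * theta)
                * math.gamma(theta * b + j * theta)
                * math.gamma(1 + (j + 1) * theta)
                / (math.gamma(theta * (a + b) + (n + j - 1) * theta)
                   * math.gamma(1 + theta)))
    return val

# Cross-check by direct quadrature for n = 2, beta = 2, a = b = 2:
# the integrand x1 x2 (1-x1)(1-x2)(x2-x1)^2 integrates to 1/360 over [0,1]^2.
num, _ = dblquad(lambda y, x: x * y * (1 - x) * (1 - y) * (y - x) ** 2,
                 0.0, 1.0, lambda x: 0.0, lambda x: 1.0)
print(num, selberg_constant(2, 2, 2, 2))  # both ≈ 1/360 ≈ 0.0027778
```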

We now give sufficient conditions that guarantee the well-posedness of the SDEs above. For \(\beta \ge 1\), \(d\ge 0\) and \(a,b\ge 0\), (1) and (2) have a unique strong solution with no collisions and no explosions, and with instant diffraction if started from a degenerate point (i.e., one at which some of the coordinates coincide); see Corollaries 6.5 and 6.7, respectively, of [14]. In particular, the coordinates of \(X^{(n)}\) stay ordered. Thus, if

$$\begin{aligned} X^{(n)}_1(0) \le \cdots \le X^{(n)}_n(0), \end{aligned}$$

then with probability one,

$$\begin{aligned} X^{(n)}_1(t)< \cdots < X^{(n)}_n(t),\quad \ \forall \ t>0. \end{aligned}$$

From now on, we restrict to those parameter values.
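Although no numerics appear in the argument below, (1) is straightforward to discretize, and a simulation makes the ordering and drift structure concrete. The following Python sketch (ours, not from the paper) uses a naive Euler–Maruyama scheme; the function name and all numerical parameters are our choices, and the scheme treats neither the square-root degeneracy at the origin nor the drift singularity at collisions rigorously.

```python
import numpy as np

def simulate_beta_laguerre(x0, beta, d, t, dt=5e-4, n_paths=4000, seed=0):
    """Naive Euler-Maruyama sketch of the beta-Laguerre SDE (1).

    Illustration only: no rigorous handling of the boundary at 0 or of
    near-collisions of particles.
    """
    rng = np.random.default_rng(seed)
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    x = np.tile(x0, (n_paths, 1))
    idx = np.arange(n)
    for _ in range(int(round(t / dt))):
        # pairwise repulsion: sum_{j != i} 2 x_i / (x_i - x_j)
        diff = x[:, :, None] - x[:, None, :]
        diff[:, idx, idx] = np.inf          # kill the diagonal terms
        repulsion = (2.0 * x[:, :, None] / diff).sum(axis=2)
        drift = beta * (d / 2.0 + repulsion)
        x = x + drift * dt + 2.0 * np.sqrt(np.maximum(x, 0.0) * dt) \
            * rng.standard_normal((n_paths, n))
        x = np.maximum(x, 0.0)              # clip at 0 (crude hard-edge handling)
    return x

# Summing (1) over i, the interaction drift collapses to beta*n*(n-1), so
# E[||X(t)||_1] = ||X(0)||_1 + beta*(d*n/2 + n*(n-1))*t; here 6 + 21*0.1 = 8.1.
x = simulate_beta_laguerre([1.0, 2.0, 3.0], beta=2.0, d=3.0, t=0.1)
print(x.sum(axis=1).mean())  # ≈ 8.1
```

The mean of the total mass matches the squared Bessel dimension computed in Step 2 of the proof.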

It will be convenient to define \(\theta =\frac{\beta }{2}\). We write \(P^{(n)}_{d,\theta }(t)\) for the Markov semigroup associated with the solution of (1). Similarly, write \(Q^{(n)}_{a,b,\theta }(t)\) for the Markov semigroup associated with the solution of (2). Furthermore, denote by \({\mathcal {L}}^{(n)}_{d,\theta }\) and \({\mathcal {A}}^{(n)}_{a,b,\theta }\) the formal infinitesimal generators for (1) and (2), respectively, given by,

$$\begin{aligned} {\mathcal {L}}^{(n)}_{d,\theta }&=\sum _{i=1}^{n}2z_i\frac{\partial ^2}{\partial z^2_i}+2 \theta \sum _{i=1}^{n}\left( \frac{d}{2}+\sum _{1\le j \le n, j \ne i}^{}\frac{2z_i}{z_i-z_j}\right) \frac{\partial }{\partial z_i},\end{aligned}$$
(4)
$$\begin{aligned} {\mathcal {A}}^{(n)}_{a,b,\theta }&=\sum _{i=1}^{n}2z_i(1-z_i)\frac{\partial ^2}{\partial z^2_i}+2 \theta \sum _{i=1}^{n}\left( a-(a+b)z_i+\sum _{1\le j \le n, j \ne i}^{}\frac{2z_i(1-z_i)}{z_i-z_j}\right) \frac{\partial }{\partial z_i}. \end{aligned}$$
(5)

With I denoting either \([0,\infty )\) or [0, 1], define the chamber,

$$\begin{aligned} W^n(I)=\{x=(x_1,\ldots ,x_n)\in I^n:x_1\le \cdots \le x_n\}. \end{aligned}$$

Moreover, for \(x\in W^{n+1}(I)\) define the set of \(y \in W^{n}(I)\) that interlace with x by,

$$\begin{aligned} W^{n,n+1}(x)=\{y=(y_1,\ldots ,y_n)\in I^n: x_1\le y_1 \le x_2 \le \cdots \le y_n \le x_{n+1}\}. \end{aligned}$$

For \(x\in W^{n+1}\) and \(y\in W^{n,n+1}(x)\), define the Dixon–Anderson conditional probability density on \(W^{n,n+1}(x)\) (originally introduced by Dixon at the beginning of the last century in [9] and independently rediscovered by Anderson in his study of the Selberg integral in [1]) by,

$$\begin{aligned} \lambda ^{\theta }_{n,n+1}(x,y)=\frac{\Gamma (\theta (n+1))}{\Gamma (\theta )^{n+1}}\prod _{1\le i<j \le n+1}^{}(x_j-x_i)^{1-2\theta }\prod _{1\le i <j \le n}^{}(y_j-y_i)\prod _{i=1}^{n}\prod _{j=1}^{n+1}|y_i-x_j|^{\theta -1}. \end{aligned}$$
(6)

Denote by \(\Lambda ^{\theta }_{n,n+1}\), the integral operator with kernel \(\lambda ^{\theta }_{n,n+1}\), i.e.,

$$\begin{aligned} \left( \Lambda ^{\theta }_{n,n+1}f\right) (x)=\int _{y\in W^{n,n+1}(x)}^{}\lambda ^{\theta }_{n,n+1}(x,y)f(y)\hbox {d}y. \end{aligned}$$
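As a sanity check that (6) really defines a conditional probability density, one can integrate it numerically in the simplest case \(n=1\), where it reduces to a Beta\((\theta ,\theta )\) density rescaled to \([x_1,x_2]\). A Python sketch (ours; assumes SciPy):

```python
import math
from scipy.integrate import quad

def dixon_anderson_density(x, theta):
    """Dixon-Anderson density (6) for n = 1: x = (x1, x2), y in [x1, x2]."""
    x1, x2 = x
    pref = (math.gamma(2 * theta) / math.gamma(theta) ** 2
            * (x2 - x1) ** (1 - 2 * theta))
    return lambda y: pref * ((y - x1) * (x2 - y)) ** (theta - 1)

# lambda^theta_{1,2}(x, .) should integrate to 1 over W^{1,2}(x) = [x1, x2].
dens = dixon_anderson_density((0.3, 1.7), theta=1.5)
total, _ = quad(dens, 0.3, 1.7)
print(total)  # ≈ 1.0
```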

Then, our goal is to prove the following theorem, which should be considered as a generalization to the other two classical \(\beta \)-ensembles, the Laguerre and Jacobi, of the result of [20] for the Gaussian ensemble.

Theorem 1.1

Let \(\beta \ge 1\), \(d\ge 2\) and \(a,b \ge 1\). Then, with \(\theta =\frac{\beta }{2}\), we have the following equalities of Markov kernels, \(\forall t \ge 0\),

$$\begin{aligned} P^{(n+1)}_{d-2,\theta }(t)\Lambda ^{\theta }_{n,n+1}&=\Lambda ^{\theta }_{n,n+1}P^{(n)}_{d,\theta }(t), \end{aligned}$$
(7)
$$\begin{aligned} Q^{(n+1)}_{a-1,b-1,\theta }(t)\Lambda ^{\theta }_{n,n+1}&=\Lambda ^{\theta }_{n,n+1}Q^{(n)}_{a,b,\theta }(t) . \end{aligned}$$
(8)

Remark 1.2

For \(\beta =2\), this result was already obtained in [3]; see in particular Sects. 3.7 and 3.8 therein, respectively.

Remark 1.3

The general theory of intertwining diffusions (see [19]) suggests that there should be a way to realize these intertwining relations by coupling these n and \(n+1\) particle processes, so that they interlace. In the Laguerre case (the Jacobi case is analogous), the resulting process \(Z=(X,Y)\), with Y evolving according to \(P^{(n)}_{d,\theta }(t)\) and X in its own filtration according to \(P^{(n+1)}_{d-2,\theta }(t)\), should (conjecturally) have generator given by,

$$\begin{aligned} {\mathcal {L}}^{n,n+1}_{\beta ,d}= & {} \sum _{j=1}^{n}2y_j\partial ^2_{y_j}+\beta \sum _{j=1}^{n}\left( \frac{d}{2}+\sum _{i\ne j}^{}\frac{2y_j}{y_j-y_i}\right) \partial _{y_j}\\&+\,\sum _{j=1}^{n+1}2x_j\partial ^2_{x_j}+\beta \sum _{j=1}^{n+1}\left( \frac{d-2}{2}+\sum _{i\ne j}^{}\frac{2x_j}{x_j-x_i}\right) \partial _{x_j}\\&+\,(1-\beta )\sum _{j=1}^{n+1}\sum _{i \ne j}^{}\frac{4x_j}{x_i-x_j}\partial _{x_j}+\left( \frac{\beta }{2}-1\right) \sum _{j=1}^{n+1}\sum _{i=1}^{n}\frac{4x_j}{x_j-y_i}\partial _{x_j}, \end{aligned}$$

with reflecting boundary conditions of the X components on the Y particles (in case they do collide). For a rigorous construction of the analogous coupled process in the case of Dyson Brownian motions with \(\beta >2\), see Sect. 4 of [13]. In fact, for certain values of the parameters, the construction of the process with the generator above can be reduced to the results of [13] and a more detailed account will appear as part of the author’s Ph.D. thesis [2].

As just mentioned, such a coupling was constructed for Dyson Brownian motion with \(\beta > 2\) in [13], and in [3] (see also [23]) for copies of general one-dimensional diffusion processes, which in particular includes the squared Bessel (corresponding to the Laguerre process of this note) and Jacobi cases for \(\beta =2\), when the interaction between the two levels consists entirely of local hard reflection and the transition kernels are explicit. Given such 2-level couplings, one can then iterate to construct a multilevel process in a Gelfand–Tsetlin pattern, as in [25], which initiated this program (see also [3, 13, 19]). A different type of coupling, for \(\beta =2\) Dyson Brownian motion, preceded [15] and is related to the Robinson–Schensted correspondence; see [16, 17] and the related work [5].

Using Theorem 1.1, the fact that \({\mathcal {M}}^{Jac,n}_{a,b,\beta }\) is the unique stationary measure of (2) (which follows from the smoothness and positivity of the transition density \(p^{n,\beta ,a,b}_t(x,y)\) of \(Q^{(n)}_{a,b,\theta }(t)\) with respect to Lebesgue measure; see Proposition 4.1 of [6], for which we further need to restrict to \(a,b > \frac{1}{\beta }\)) and the fact that two distinct ergodic measures must be mutually singular (see [24]), we immediately get:

Corollary 1.4

For \(\beta \ge 1\) and \(a,b > 1\) and with \(\theta =\frac{\beta }{2}\),

$$\begin{aligned} {\mathcal {M}}^{Jac,n+1}_{a-1,b-1,\beta }\Lambda ^{\theta }_{n,n+1}={\mathcal {M}}^{Jac,n}_{a,b,\beta }. \end{aligned}$$
(9)

Proof

From (8), applied to \({\mathcal {M}}^{Jac,n+1}_{a-1,b-1,\beta }\) together with its stationarity under \(Q^{(n+1)}_{a-1,b-1,\theta }(t)\), we obtain that \({\mathcal {M}}^{Jac,n+1}_{a-1,b-1,\beta }\Lambda ^{\theta }_{n,n+1}\) is a stationary measure of \(Q^{(n)}_{a,b,\theta }(t)\); by uniqueness, it must coincide with \({\mathcal {M}}^{Jac,n}_{a,b,\beta }\).

\(\square \)

Before closing this introduction, we remark that in order to establish Theorem 1.1 we will follow the strategy of [20]: namely, we rely on the explicit action of the generators and of the integral kernel on the class of Jack polynomials, which, along with an exponential moment estimate, allows us to apply the moment method. We note that, although the \(\beta \)-Laguerre and \(\beta \)-Jacobi diffusions look more complicated than \(\beta \)-Dyson Brownian motion, the main computation, performed in Step 1 of the proof below, is actually simpler than the one in [20].

2 Preliminaries on Jack Polynomials

We collect some facts about the Jack polynomials \(J_{\lambda }(z;\theta )\), which, as already mentioned, will play a key role in obtaining these intertwining relations. We mainly follow [20], which in turn follows [4] (note that there is a misprint in [20]: a factor of \(\frac{1}{2}\) is missing from Eq. (2.7) therein, cf. Eq. (2.13d) in [4]). The \(J_{\lambda }(z;\theta )\) are defined to be the (unique up to normalization) symmetric polynomial eigenfunctions in n variables of the differential operator \({\mathcal {D}}^{(n),\theta }\),

$$\begin{aligned} {\mathcal {D}}^{(n),\theta }=\sum _{i=1}^{n}z^2_i\frac{\partial ^2}{\partial z^2_i}+2 \theta \sum _{i=1}^{n}\sum _{1\le j \le n, j \ne i}^{}\frac{z^2_i}{z_i-z_j}\frac{\partial }{\partial z_i}, \end{aligned}$$
(10)

indexed by partitions \(\lambda =(\lambda _1 \ge \lambda _2\ge \cdots )\) of length l with eigenvalue \(\hbox {eval}(\lambda ,n,\theta )=2B(\lambda ')-2\theta B(\lambda )+2\theta (n-1)|\lambda |\) where \(B(\lambda )=\sum (i-1)\lambda _i=\sum \left( {\begin{array}{c}\lambda '_i\\ 2\end{array}}\right) \) and \(\lambda '\) is the conjugate partition. With \(1_n\) denoting a row vector of n 1s, we have the normalization,

$$\begin{aligned} J_{\lambda }(1_n;\theta )=\theta ^{-|\lambda |}\prod _{i=1}^{l}\frac{\Gamma \left( \left( n+1-i\right) \theta +\lambda _i\right) }{\Gamma \left( \left( n+1-i\right) \theta \right) }. \end{aligned}$$
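The normalization is easy to evaluate numerically; e.g., for \(\lambda =(1)\) the formula collapses to \(J_{(1)}(1_n;\theta )=\theta ^{-1}\cdot n\theta =n\). A short Python sketch (function name ours, not from the paper):

```python
import math

def jack_at_ones(lam, n, theta):
    """J_lambda(1_n; theta) from the displayed normalization formula."""
    out = theta ** (-sum(lam))
    for i, part in enumerate(lam, start=1):
        out *= math.gamma((n + 1 - i) * theta + part) \
               / math.gamma((n + 1 - i) * theta)
    return out

print(jack_at_ones((1,), 5, 0.75))  # ≈ 5.0, i.e. J_(1)(1_5) = 5
```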

Define the following differential operators,

$$\begin{aligned} {\mathcal {B}}_1^{(n)}&= \sum _{i=1}^{n}\frac{\partial }{\partial z_i}, \end{aligned}$$
(11)
$$\begin{aligned} {\mathcal {B}}_2^{(n),\theta }&=\sum _{i=1}^{n}z_i\frac{\partial ^2}{\partial z^2_i}+2 \theta \sum _{i=1}^{n}\sum _{1\le j \le n, j \ne i}^{}\frac{z_i}{z_i-z_j}\frac{\partial }{\partial z_i}, \end{aligned}$$
(12)
$$\begin{aligned} {\mathcal {B}}_3^{(n)}&= \sum _{i=1}^{n}z_i\frac{\partial }{\partial z_i}. \end{aligned}$$
(13)

Then, the action of these operators on the \(J_{\lambda }(z;\theta )\)’s is given explicitly by (see [4] Eqs. (2.13a), (2.13d) and (2.13b), respectively),

$$\begin{aligned} {\mathcal {B}}_1^{(n)}J_{\lambda }(z;\theta )&=J_{\lambda }(1_n;\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }\frac{J_{\lambda _{(i)}}(z;\theta )}{J_{\lambda _{(i)}}(1_n;\theta )}, \end{aligned}$$
(14)
$$\begin{aligned} {\mathcal {B}}_2^{(n),\theta }J_{\lambda }(z;\theta )&=J_{\lambda }(1_n;\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }(\lambda _i-1+(n-i)\theta )\frac{J_{\lambda _{(i)}}(z;\theta )}{J_{\lambda _{(i)}}(1_n;\theta )}, \end{aligned}$$
(15)
$$\begin{aligned} {\mathcal {B}}_3^{(n)}J_{\lambda }(z;\theta )&=|\lambda |J_{\lambda }(z;\theta ), \end{aligned}$$
(16)

where \(\lambda _{(i)}\) is the sequence given by \(\lambda _{(i)}=(\lambda _1,\ldots ,\lambda _{i-1},\lambda _i-1,\lambda _{i+1},\ldots )\) (in case \(i=l\) and \(\lambda _i=1\), we drop \(\lambda _l\) from \(\lambda \)) and the combinatorial coefficients \(\left( {\begin{array}{c}\lambda \\ \rho \end{array}}\right) _{\theta }\) are defined by the following expansion (we set \(\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }=0\) in case \(\lambda _{(i)}\) is no longer a partition, i.e., a non-increasing sequence of nonnegative integers),

$$\begin{aligned} \frac{J_{\lambda }(1_n+z;\theta )}{J_{\lambda }(1_n;\theta )}=\sum _{m=0}^{|\lambda |}\sum _{|\rho |=m}^{}\left( {\begin{array}{c}\lambda \\ \rho \end{array}}\right) _{\theta }\frac{J_{\rho }(z;\theta )}{J_{\rho }(1_n;\theta )}, \end{aligned}$$

but whose exact values will not be required in what follows. Finally, we need the following fact about the action of \(\Lambda ^{\theta }_{n,n+1}\) on \(J_{\lambda }(\cdot ;\theta )\) (see [18] Sect. 6),

$$\begin{aligned} \int _{W^{n,n+1}(x)}^{}\lambda ^{\theta }_{n,n+1}(x,y)J_{\lambda }(y;\theta )\hbox {d}y=J_{\lambda }(x;\theta )c(\lambda ,n,\theta ) , \end{aligned}$$
(17)

where

$$\begin{aligned} c(\lambda ,n,\theta )=\frac{\Gamma ((n+1)\theta )}{\Gamma (\theta )}\prod _{i=1}^{n}\frac{\Gamma \left( \left( n+1-i\right) \theta +\lambda _i\right) }{\Gamma \left( \left( n+2-i\right) \theta +\lambda _i\right) }. \end{aligned}$$
(18)
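Relation (17) can be verified numerically in the simplest case \(n=1\), \(\lambda =(1)\), where \(J_{(1)}(y;\theta )=y\), \(J_{(1)}(x;\theta )=x_1+x_2\) and (18) gives \(c((1),1,\theta )=\frac{1}{2}\). A Python sketch (ours; assumes SciPy):

```python
import math
from scipy.integrate import quad

theta = 1.5

def c_coeff(lam, n, theta):
    """c(lambda, n, theta) from (18); parts beyond len(lam) are 0."""
    out = math.gamma((n + 1) * theta) / math.gamma(theta)
    for i in range(1, n + 1):
        part = lam[i - 1] if i <= len(lam) else 0
        out *= (math.gamma((n + 1 - i) * theta + part)
                / math.gamma((n + 2 - i) * theta + part))
    return out

# Check (17) for n = 1, lambda = (1): the Dixon-Anderson kernel averages y
# over [x1, x2] to c((1), 1, theta) * (x1 + x2) = (x1 + x2)/2.
x1, x2 = 0.3, 1.7
pref = (math.gamma(2 * theta) / math.gamma(theta) ** 2
        * (x2 - x1) ** (1 - 2 * theta))
lhs, _ = quad(lambda y: pref * ((y - x1) * (x2 - y)) ** (theta - 1) * y, x1, x2)
print(lhs, c_coeff((1,), 1, theta) * (x1 + x2))  # both ≈ 1.0
```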

3 Proof

We split the proof into four steps, following the strategy laid out in [20].

Proof of Theorem 1.1

First, note that we can write the operators \({\mathcal {L}}^{(n)}_{d,\theta }\) and \({\mathcal {A}}^{(n)}_{a,b,\theta }\) as follows,

$$\begin{aligned} {\mathcal {L}}^{(n)}_{d,\theta }&=2{\mathcal {B}}_2^{(n),\theta }+\theta d {\mathcal {B}}_1^{(n)}, \end{aligned}$$
(19)
$$\begin{aligned} {\mathcal {A}}^{(n)}_{a,b,\theta }&=2{\mathcal {B}}_2^{(n),\theta }-2{\mathcal {D}}^{(n),\theta }+2\theta a {\mathcal {B}}_1^{(n)}-2\theta (a+b){\mathcal {B}}_3^{(n)}. \end{aligned}$$
(20)

Step 1 The aim of this step is to show the intertwining relation at the level of the infinitesimal generators acting on the Jack polynomials. Namely, that

$$\begin{aligned} {\mathcal {L}}^{(n+1)}_{d-2,\theta }\Lambda ^{\theta }_{n,n+1}J_{\lambda }(\cdot ;\theta )&=\Lambda ^{\theta }_{n,n+1}{\mathcal {L}}^{(n)}_{d,\theta }J_{\lambda }(\cdot ;\theta ), \end{aligned}$$
(21)
$$\begin{aligned} {\mathcal {A}}^{(n+1)}_{a-1,b-1,\theta }\Lambda ^{\theta }_{n,n+1}J_{\lambda }(\cdot ;\theta )&=\Lambda ^{\theta }_{n,n+1}{\mathcal {A}}^{(n)}_{a,b,\theta }J_{\lambda }(\cdot ;\theta ). \end{aligned}$$
(22)

We will show relation (22) for the Jacobi case and at the end of Step 1 indicate how to obtain (21).

$$\begin{aligned} (\mathrm{LHS})= & {} {\mathcal {A}}^{(n+1)}_{a-1,b-1,\theta }J_{\lambda }(x;\theta )c(\lambda ,n,\theta )\\= & {} c(\lambda ,n,\theta )\left( 2{\mathcal {B}}_2^{(n+1),\theta }-2{\mathcal {D}}^{(n+1),\theta }+2\theta (a-1) {\mathcal {B}}_1^{(n+1)}-2\theta (a+b-2){\mathcal {B}}_3^{(n+1)}\right) J_{\lambda }(x;\theta )\\= & {} c(\lambda ,n,\theta )\bigg [2J_{\lambda }(1_{n+1};\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }(\lambda _i-1+(n+1-i)\theta )\frac{J_{\lambda _{(i)}}(x;\theta )}{J_{\lambda _{(i)}}(1_{n+1};\theta )}\\&-\,2\hbox {eval}(\lambda ,n+1,\theta )J_{\lambda }(x;\theta )\\&+\,2\theta (a-1)J_{\lambda }(1_{n+1};\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }\frac{J_{\lambda _{(i)}}(x;\theta )}{J_{\lambda _{(i)}}(1_{n+1};\theta )}-2\theta (a+b-2)|\lambda |J_{\lambda }(x;\theta )\bigg ]. \end{aligned}$$

(RHS): We start by computing \({\mathcal {A}}^{(n)}_{a,b,\theta }J_{\lambda }(y;\theta )\).

$$\begin{aligned} {\mathcal {A}}^{(n)}_{a,b,\theta }J_{\lambda }(y;\theta )&=\left( 2{\mathcal {B}}_2^{(n),\theta }-2{\mathcal {D}}^{(n),\theta }+2\theta a {\mathcal {B}}_1^{(n)}-2\theta (a+b){\mathcal {B}}_3^{(n)}\right) J_{\lambda }(y;\theta ) \nonumber \\&=\bigg [2J_{\lambda }(1_{n};\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }(\lambda _i-1+(n-i)\theta )\frac{J_{\lambda _{(i)}}(y;\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}\nonumber \\&\quad -2\hbox {eval}(\lambda ,n,\theta )J_{\lambda }(y;\theta )\nonumber \\&\quad +\,2\theta a J_{\lambda }(1_{n};\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }\frac{J_{\lambda _{(i)}}(y;\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}-2\theta (a+b)|\lambda |J_{\lambda }(y;\theta )\bigg ] . \end{aligned}$$
(23)

Now, apply \(\Lambda ^{\theta }_{n,n+1}\) to obtain that

$$\begin{aligned} (\mathrm{RHS})= & {} 2J_{\lambda }(1_{n};\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }(\lambda _i-1+(n-i)\theta )c(\lambda _{(i)},n,\theta )\frac{J_{\lambda _{(i)}}(x;\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}\\&-\,2c(\lambda ,n,\theta )\hbox {eval}(\lambda ,n,\theta )J_{\lambda }(x;\theta )\\&+\,2\theta a J_{\lambda }(1_{n};\theta )\sum _{i=1}^{l}\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }c(\lambda _{(i)},n,\theta )\frac{J_{\lambda _{(i)}}(x;\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}\\&-2\theta (a+b)|\lambda |c(\lambda ,n,\theta )J_{\lambda }(x;\theta ). \end{aligned}$$

Now, in order to check \((\hbox {LHS})=(\hbox {RHS})\), we check that the coefficients of \(J_{\lambda }\) and of each \(J_{\lambda _{(i)}}\) coincide on both sides.

  • First, the coefficients of \(J_{\lambda }(x;\theta )\):

    (LHS): \(-\,2c(\lambda ,n,\theta )\hbox {eval}(\lambda ,n+1,\theta )-c(\lambda ,n,\theta )|\lambda |2 \theta (a+b-2)\).

    (RHS): \(-\,2c(\lambda ,n,\theta )\hbox {eval}(\lambda ,n,\theta )-c(\lambda ,n,\theta )|\lambda |2 \theta (a+b)\).

    These are equal iff:

    $$\begin{aligned} \frac{-\,2\hbox {eval}(\lambda ,n,\theta )+2\hbox {eval}(\lambda ,n+1,\theta )}{4\theta |\lambda |}=1, \end{aligned}$$

    which is easily checked from the explicit expression of \(\hbox {eval}(\lambda ,n,\theta )\).

  • Now, for the coefficients of \(J_{\lambda _{(i)}}(x;\theta )\):

(LHS):

$$\begin{aligned}&2J_{\lambda }(1_{n+1};\theta )\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }(\lambda _i-1+(n+1-i)\theta )\frac{c(\lambda ,n,\theta )}{J_{\lambda _{(i)}}(1_{n+1};\theta )}\\&\quad +\,2\theta (a-1)J_{\lambda }(1_{n+1};\theta )\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }\frac{c(\lambda ,n,\theta )}{J_{\lambda _{(i)}}(1_{n+1};\theta )}. \end{aligned}$$

(RHS):

$$\begin{aligned}&2J_{\lambda }(1_{n};\theta )\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }(\lambda _i-1+(n-i)\theta )\frac{c(\lambda _{(i)},n,\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}\\&\quad +\,2\theta aJ_{\lambda }(1_{n};\theta )\left( {\begin{array}{c}\lambda \\ \lambda _{(i)}\end{array}}\right) _{\theta }\frac{c(\lambda _{(i)},n,\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}. \end{aligned}$$

These are equal iff:

$$\begin{aligned} a-1= & {} \frac{J_{\lambda }(1_{n};\theta )c(\lambda _{(i)},n,\theta )J_{\lambda _{(i)}}(1_{n+1};\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )c(\lambda ,n,\theta )J_{\lambda }(1_{n+1};\theta )}a\\&+\,\frac{1}{\theta }\frac{J_{\lambda }(1_{n};\theta )c(\lambda _{(i)},n,\theta )J_{\lambda _{(i)}}(1_{n+1};\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )c(\lambda ,n,\theta )J_{\lambda }(1_{n+1};\theta )}(\lambda _i-1+(n-i)\theta )\\&-\,\frac{1}{\theta }(\lambda _i-1+(n+1-i)\theta ). \end{aligned}$$

We first claim that

$$\begin{aligned} \frac{J_{\lambda }(1_{n};\theta )c(\lambda _{(i)},n,\theta )J_{\lambda _{(i)}}(1_{n+1};\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )c(\lambda ,n,\theta )J_{\lambda }(1_{n+1};\theta )}=1. \end{aligned}$$

This immediately follows from

$$\begin{aligned} \frac{J_{\lambda }(1_{n};\theta )}{J_{\lambda _{(i)}}(1_{n};\theta )}&=\theta ^{-1}\frac{\Gamma \left( \left( n+1-i\right) \theta +\lambda _i\right) }{\Gamma \left( \left( n+1-i\right) \theta +\lambda _i-1\right) },\\ \frac{J_{\lambda _{(i)}}(1_{n+1};\theta )}{J_{\lambda }(1_{n+1};\theta )}&=\theta \frac{\Gamma \left( \left( n+2-i\right) \theta +\lambda _i-1\right) }{\Gamma \left( \left( n+2-i\right) \theta +\lambda _i\right) },\\ \frac{c(\lambda _{(i)},n,\theta )}{c(\lambda ,n,\theta )}&=\frac{\Gamma \left( \left( n+1-i\right) \theta +\lambda _i-1\right) \Gamma \left( \left( n+2-i\right) \theta +\lambda _i\right) }{\Gamma \left( \left( n+1-i\right) \theta +\lambda _i\right) \Gamma \left( \left( n+2-i\right) \theta +\lambda _i-1\right) }. \end{aligned}$$

Hence, we need to check that the following is true,

$$\begin{aligned} a-1=a+\frac{1}{\theta }(\lambda _i-1+(n-i)\theta )-\frac{1}{\theta }(\lambda _i-1+(n-i+1)\theta ), \end{aligned}$$

which is obvious.
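Both bullet-point verifications above are mechanical enough to check by machine. The following Python sketch (ours, not part of the paper) implements \(\hbox {eval}(\lambda ,n,\theta )\), \(J_{\lambda }(1_n;\theta )\) and \(c(\lambda ,n,\theta )\) directly from their displayed formulas, and confirms on a sample partition that \(\hbox {eval}(\lambda ,n+1,\theta )-\hbox {eval}(\lambda ,n,\theta )=2\theta |\lambda |\) and that the product of Gamma ratios equals 1:

```python
import math

def conjugate(lam):
    """Conjugate partition lambda'."""
    return [sum(1 for p in lam if p >= k)
            for k in range(1, (max(lam) if lam else 0) + 1)]

def B(lam):
    # B(lambda) = sum_i (i - 1) * lambda_i (0-based enumeration)
    return sum(i * p for i, p in enumerate(lam))

def eval_(lam, n, theta):
    return 2 * B(conjugate(lam)) - 2 * theta * B(lam) \
        + 2 * theta * (n - 1) * sum(lam)

def jack_at_ones(lam, n, theta):
    out = theta ** (-sum(lam))
    for i, p in enumerate(lam, 1):
        out *= math.gamma((n + 1 - i) * theta + p) / math.gamma((n + 1 - i) * theta)
    return out

def c_coeff(lam, n, theta):
    out = math.gamma((n + 1) * theta) / math.gamma(theta)
    for i in range(1, n + 1):
        p = lam[i - 1] if i <= len(lam) else 0
        out *= math.gamma((n + 1 - i) * theta + p) / math.gamma((n + 2 - i) * theta + p)
    return out

lam, n, theta = (4, 2, 1), 5, 0.75
# Coefficient of J_lambda: eval(lambda, n+1) - eval(lambda, n) = 2 theta |lambda|.
print(eval_(lam, n + 1, theta) - eval_(lam, n, theta), 2 * theta * sum(lam))  # 10.5 10.5
# Coefficient of J_{lambda_(i)} for i = 1: the product of Gamma ratios is 1.
lam_i = (3, 2, 1)   # lambda_(1)
ratio = (jack_at_ones(lam, n, theta) * c_coeff(lam_i, n, theta)
         * jack_at_ones(lam_i, n + 1, theta)
         / (jack_at_ones(lam_i, n, theta) * c_coeff(lam, n, theta)
            * jack_at_ones(lam, n + 1, theta)))
print(ratio)  # ≈ 1.0
```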

Now, in order to obtain (21), we only need to consider the coefficients of the \(J_{\lambda _{(i)}}\)'s (since the operators \({\mathcal {D}}^{(n),\theta }\) and \({\mathcal {B}}_3^{(n)}\), which produce \(J_{\lambda }\)'s, are absent) and replace \(a\) by \(\frac{d}{2}\).

To prove the analogous result for \(\beta \)-Dyson Brownian motions, one needs to observe, as done in [20], that the generator of n particle \(\beta \)-Dyson Brownian motion \(L^{(n)}_{\theta }\) can be written as a commutator, namely \(L^{(n)}_{\theta }=[{\mathcal {B}}_1^{(n)},{\mathcal {B}}_2^{(n),\theta }]={\mathcal {B}}_1^{(n)}{\mathcal {B}}_2^{(n),\theta }-{\mathcal {B}}_2^{(n),\theta }{\mathcal {B}}_1^{(n)}\).

Step 2 We obtain an exponential moment estimate, namely a bound on \({\mathbb {E}}_{x}[\hbox {e}^{\epsilon \Vert X^{(n)}(t)\Vert }]\). In the Jacobi case, this is obviously finite by compactness of \([0,1]^n\). In the Laguerre case, we proceed as follows. Writing \(X^{(n)}\) for the solution to (1), letting \(\Vert \cdot \Vert \) denote the \(l_1\) norm and recalling that all entries of \(X^{(n)}\) are nonnegative, we obtain

$$\begin{aligned} \hbox {d}\Vert X^{(n)}(t)\Vert =\sum _{i=1}^{n}2\sqrt{X_i^{(n)}(t)}\hbox {d}B_i^{(n)}(t)+\beta \left( \frac{d}{2}n+\sum _{i=1}^{n}\sum _{1 \le j \le n, j\ne i}^{}\frac{2X_i^{(n)}(t)}{X_i^{(n)}(t)-X_j^{(n)}(t)}\right) \hbox {d}t. \end{aligned}$$

Note that

$$\begin{aligned} \sum _{i=1}^{n}\sum _{1 \le j \le n, j\ne i}^{}\frac{2X_i^{(n)}(t)}{X_i^{(n)}(t)-X_j^{(n)}(t)}=2\left( {\begin{array}{c}n\\ 2\end{array}}\right) \end{aligned}$$

and that, by Lévy's characterization, the local martingale \((M(t),t\ge 0)\) defined by

$$\begin{aligned} \hbox {d}M(t)=\frac{1}{\sqrt{\Vert X^{(n)}(t)\Vert }}\sum _{i=1}^{n}\sqrt{X^{(n)}_i(t)}\hbox {d}B^{(n)}_i(t) \end{aligned}$$

is equal to a standard Brownian motion \((W(t),t\ge 0)\) and so we obtain

$$\begin{aligned} \hbox {d}\Vert X^{(n)}(t)\Vert =2\sqrt{\Vert X^{(n)}(t)\Vert }\hbox {d}W(t)+\beta \left( \frac{d}{2}n+2\left( {\begin{array}{c}n\\ 2\end{array}}\right) \right) \hbox {d}t. \end{aligned}$$

Thus, \(\Vert X^{(n)}(t)\Vert \) is a squared Bessel process of dimension \(\hbox {dim}_{\beta ,n,d}=\beta \left( \frac{d}{2}n+2\left( {\begin{array}{c}n\\ 2\end{array}}\right) \right) \). Hence, from standard estimates (see [21] Chapter IX.1 or Proposition 2.1 of [10]; in case \(\hbox {dim}_{\beta ,n,d}\) is an integer, the result is an immediate consequence of Fernique's theorem [11], since \(\Vert X^{(n)}(t)\Vert \) is then the squared norm of a Gaussian process), it follows that, for \(\epsilon >0\) small enough, \({\mathbb {E}}_{x}[\hbox {e}^{\epsilon \Vert X^{(n)}(t)\Vert }]<\infty \).
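The algebraic identity used above, that the interaction drift sums to \(2\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) over the particles, is a one-line pairwise cancellation: each unordered pair \(\{i,j\}\) contributes \(2(x_i-x_j)/(x_i-x_j)=2\). A quick numerical confirmation in Python (illustration only):

```python
import numpy as np

# sum_{i} sum_{j != i} 2 x_i / (x_i - x_j) = n (n - 1), independently of x.
rng = np.random.default_rng(1)
for n in (2, 5, 9):
    x = np.sort(rng.uniform(0.0, 10.0, size=n))
    total = sum(2 * x[i] / (x[i] - x[j])
                for i in range(n) for j in range(n) if j != i)
    print(n, round(total, 10))  # prints n and n*(n-1)
```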

Step 3 We now lift the intertwining relation to the semigroups acting on the Jack polynomials, namely

$$\begin{aligned} P^{(n+1)}_{d-2,\theta }(t)\Lambda ^{\theta }_{n,n+1}J_{\lambda }(\cdot ;\theta )&=\Lambda ^{\theta }_{n,n+1}P^{(n)}_{d,\theta }(t)J_{\lambda }(\cdot ;\theta ),\\ Q^{(n+1)}_{a-1,b-1,\theta }(t)\Lambda ^{\theta }_{n,n+1}J_{\lambda }(\cdot ;\theta )&=\Lambda ^{\theta }_{n,n+1}Q^{(n)}_{a,b,\theta }(t)J_{\lambda }(\cdot ;\theta ). \end{aligned}$$

The proof follows almost word for word the elegant argument given in [20]. We reproduce it here for the convenience of the reader, elaborating a bit on some parts and considering only the Laguerre case for concreteness. We begin by applying Itô's formula to \(J_{\lambda }(X^{(n)}(t);\theta )\) and taking expectations (note that the stochastic integral term is a true martingale, since its expected quadratic variation is finite by the exponential estimate of Step 2), to obtain

$$\begin{aligned} P^{(n)}_{d,\theta }(t)J_{\lambda }(\cdot ;\theta )=J_{\lambda }(\cdot ;\theta )+\int _{0}^{t}P^{(n)}_{d,\theta }(s){\mathcal {L}}^{(n)}_{d,\theta }J_{\lambda }(\cdot ;\theta )\hbox {d}s. \end{aligned}$$
(24)

Now, note that by (14) and (15), \({\mathcal {L}}^{(n)}_{d,\theta }J_{\lambda }(\cdot ;\theta )\) is given by a linear combination of Jack polynomials \(J_{\kappa }(\cdot ;\theta )\) for partitions \(\kappa \) with \(\kappa _i\le \lambda _i\) for all \(i \le l\); we write \(\kappa \le \lambda \) if this holds. We denote by the matrix \(M_2\) the action of \({\mathcal {L}}^{(n)}_{d,\theta }\) on the finite-dimensional vector space spanned by the Jack polynomials indexed by partitions \(\kappa \le \lambda \).

Moreover, each \(J_{\kappa }(\cdot ;\theta )\) for \(\kappa \le \lambda \) obeys (24), and thus, we obtain the following system of integral equations, with \(f_{\kappa }(t)=P^{(n)}_{d,\theta }(t)J_{\kappa }(\cdot ;\theta )\),

$$\begin{aligned} f_{\kappa }(t)=f_{\kappa }(0)+\sum _{\nu \le \lambda }^{}M_2(\kappa ,\nu )\int _{0}^{t}f_{\nu }(s)\hbox {d}s, \end{aligned}$$

whose unique solution is given by the matrix exponential,

$$\begin{aligned} f_{\kappa }(t)=\sum _{\nu \le \lambda }^{}e^{tM_2}(\kappa ,\nu )f_{\nu }(0). \end{aligned}$$
(25)

Now, observe that by (17) the Markov kernel \(\Lambda ^{\theta }_{n,n+1}\) also acts on the aforementioned finite-dimensional vector space of Jack polynomials as a matrix, which we denote by \(M_1\). We will also denote by a matrix \(M_3\) the action of \({\mathcal {L}}^{(n+1)}_{d-2,\theta }\), and note that the intertwining relation (21) can be written in terms of matrices as follows: \(M_3M_1=M_1M_2\). Thus, making use of the following elementary fact about finite-dimensional square matrices,

$$\begin{aligned} M_3M_1=M_1M_2 \implies \hbox {e}^{tM_3}M_1=M_1\hbox {e}^{tM_2}\quad \hbox {for} \ t \ge 0, \end{aligned}$$

and display (25), along with its analog with \(M_2\) replaced by \(M_3\), we get that

$$\begin{aligned} P^{(n+1)}_{d-2,\theta }(t)\Lambda ^{\theta }_{n,n+1}J_{\lambda }(\cdot ;\theta )&=\Lambda ^{\theta }_{n,n+1}P^{(n)}_{d,\theta }(t)J_{\lambda }(\cdot ;\theta ). \end{aligned}$$
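The elementary matrix fact is just uniqueness for the linear ODE \(F'(t)=M_3F(t)\), \(F(0)=M_1\), which both \(\hbox {e}^{tM_3}M_1\) and \(M_1\hbox {e}^{tM_2}\) solve when \(M_3M_1=M_1M_2\). A numerical illustration (our construction, assuming SciPy; \(M_3\) is manufactured from random \(M_1,M_2\) so that the hypothesis holds):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
k, t = 6, 0.7
M1 = rng.standard_normal((k, k)) + k * np.eye(k)   # well-conditioned, invertible
M2 = rng.standard_normal((k, k))
M3 = M1 @ M2 @ np.linalg.inv(M1)                   # enforce M3 M1 = M1 M2

lhs = expm(t * M3) @ M1
rhs = M1 @ expm(t * M2)
print(np.allclose(lhs, rhs))  # True
```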

Step 4 We again follow [20]. Recall (see [20] and the references therein) that we can write any symmetric polynomial p in n variables as a finite linear combination of Jack polynomials in n variables. Hence, for any such p,

$$\begin{aligned} P^{(n+1)}_{d-2,\theta }(t)\Lambda ^{\theta }_{n,n+1}p(\cdot )&=\Lambda ^{\theta }_{n,n+1}P^{(n)}_{d,\theta }(t)p(\cdot ), \end{aligned}$$
(26)
$$\begin{aligned} Q^{(n+1)}_{a-1,b-1,\theta }(t)\Lambda ^{\theta }_{n,n+1}p(\cdot )&=\Lambda ^{\theta }_{n,n+1}Q^{(n)}_{a,b,\theta }(t)p(\cdot ). \end{aligned}$$
(27)

Now, any probability measure \(\mu \) on \(W^n(I)\) will give rise to a symmetrized probability measure \(\mu ^\mathrm{symm}\) on \(I^n\) as follows,

$$\begin{aligned} \mu ^\mathrm{symm}(\hbox {d}z_1,\ldots ,\hbox {d}z_n)=\frac{1}{n!}\mu (\hbox {d}z_{(1)},\ldots ,\hbox {d}z_{(n)}), \end{aligned}$$

where \(z_{(1)}\le z_{(2)}\le \cdots \le z_{(n)}\) are the order statistics of \((z_1,z_2,\ldots ,z_n)\). Moreover, for every (not necessarily symmetric) polynomial q in n variables, with \(S_n\) denoting the symmetric group on n symbols, we have

$$\begin{aligned} \int _{I^n}^{}q(z)\hbox {d}\mu ^\mathrm{symm}(z)= & {} \int _{I^n}^{}\frac{1}{n!}\sum _{\sigma \in S_n}^{}q(z_{\sigma (1)},\ldots ,z_{\sigma (n)})\hbox {d}\mu ^\mathrm{symm}(z)\\= & {} \int _{W^{n}(I)}^{}\frac{1}{n!}\sum _{\sigma \in S_n}^{}q(z_{\sigma (1)},\ldots ,z_{\sigma (n)})\hbox {d}\mu (z). \end{aligned}$$

Note that now \(p(z)=\frac{1}{n!}\sum _{\sigma \in S_n}^{}q(z_{\sigma (1)},\ldots ,z_{\sigma (n)})\) is a symmetric polynomial (in n variables). Thus, from (26) and (27), all moments of the symmetrized versions of both sides of (7) and (8) coincide, where, for each \(x\in W^{n+1}\) and \(t\ge 0\), we view \(P^{(n+1)}_{d-2,\theta }(t)\Lambda ^{\theta }_{n,n+1}\) and \(\Lambda ^{\theta }_{n,n+1}P^{(n)}_{d,\theta }(t)\) as probability measures on \(W^n\). Hence, by Theorem 1.3 of [8] (and the discussion following it), along with the fact that \((\Lambda ^{\theta }_{n,n+1}f)(z)\le \hbox {e}^{\epsilon \Vert z\Vert _1}\) where \(f(y)=\hbox {e}^{\epsilon \Vert y\Vert _1}\) (since all coordinates are positive) and our exponential moment estimate from Step 2, we obtain that the symmetrized versions of both sides of (7) and (8) coincide. In fact, by the discussion after Theorem 1.3 of [8], since we work in \([0,\infty )^n\) and not the full space \({\mathbb {R}}^n\), we need not require that the symmetrized versions of these measures have exponential moments; it suffices that they integrate \(\hbox {e}^{\epsilon \sqrt{\Vert z\Vert }}\). The theorem is now proven. \(\square \)