## 1 Introduction

Let $${\mathbb {U}}(N)$$ be the group of $$N\times N$$ unitary matrices and consider $$\mathbf {V}\in {\mathbb {U}}(N)$$. Writing $$e^{\text {i} \theta _1},\dots , e^{\text {i}\theta _N}$$ for the eigenvalues of $$\mathbf {V}$$, introduce the quantities:

\begin{aligned} \begin{aligned} \Phi _{\mathbf {V}}(u) := \prod _{j=1}^N\left( 1-e^{\text {i}(u-\theta _j)}\right) , \quad \Psi _{\mathbf {V}}(u) := e^{\frac{1}{2}\text {i}\left( \sum _{j=1}^N \theta _j - N(u+\pi )\right) }\Phi _\mathbf {V}(u). \end{aligned} \end{aligned}
(1)

Namely, $$\Phi _{\mathbf {V}}$$ is the characteristic polynomial of $$\mathbf {V}$$ and $$\Psi _{\mathbf {V}}$$ is chosen to satisfy $$\left| \Psi _{\mathbf {V}}(u) \right| = |\Phi _\mathbf {V}(u)|$$ and $$\Psi _\mathbf {V}(u) \in {\mathbb {R}}$$ whenever $$u\in {\mathbb {R}}$$. Let $$d\mathsf {Haar}$$ denote the Haar probability measure on $${\mathbb {U}}(N)$$ and consider the following joint moments, which exist for $$-\frac{1}{2}<h<s+\frac{1}{2}$$:

\begin{aligned} F_{N}(s,h) := \int _{{\mathbb {U}}(N)} \left| \Psi _\mathbf {V}(0)\right| ^{2s-2h} \left| \frac{d}{du}\Psi _\mathbf {V}(u)\bigg |_{u=0}\right| ^{2h}d\mathsf {Haar}(\mathbf {V}). \end{aligned}
(2)
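To make (2) concrete, here is a minimal Monte Carlo sketch (ours, not from the paper) estimating $$F_N(s,h)$$ in the case $$\beta =2$$. It samples Haar unitaries via QR decomposition of complex Ginibre matrices and uses $$\left| \Psi _\mathbf {V}(0)\right| =\prod _{j}2\left| \sin (\theta _j/2)\right|$$ together with the identity $$\Psi _{\varvec{\theta }}'(0)/\Psi _{\varvec{\theta }}(0)=-\frac{1}{2}\sum _j \cot (\theta _j/2)$$ recalled in Sect. 2; all function names are our own.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample a Haar-distributed n x n unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # fix the phase ambiguity of QR, column by column

def joint_moment_mc(n, s, h, samples, seed=0):
    """Monte Carlo estimate of F_N(s, h) in (2), for beta = 2."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        theta = np.angle(np.linalg.eigvals(haar_unitary(n, rng)))
        psi0 = np.prod(2.0 * np.abs(np.sin(theta / 2.0)))            # |Psi(0)|
        dpsi0 = psi0 * 0.5 * abs(np.sum(1.0 / np.tan(theta / 2.0)))  # |Psi'(0)|
        total += psi0 ** (2 * s - 2 * h) * dpsi0 ** (2 * h)
    return total / samples

# At s = 1, h = 0 the estimate can be compared with the classical
# Keating-Snaith evaluation F_N(1, 0) = N + 1 for beta = 2.
est = joint_moment_mc(4, s=1, h=0, samples=20000)
print(est)
```

For $$N=4$$ the estimate should be close to 5, up to Monte Carlo error of order $$10^{-1}$$ at this sample size.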

There has been significant interest in the problem of the asymptotics of $$F_N(s,h)$$, as $$N\rightarrow \infty$$, for the past twenty years, beginning with the thesis of Hughes from 2001 [27]. See [4, 5, 6, 8, 14, 16, 17, 27, 40] for a number of different approaches (of analytic, combinatorial or probabilistic nature) to this problem that have been developed through the years. Part of Hughes's initial motivation for studying the asymptotics of these moments comes from a remarkable conjectural connection to the joint moments of Hardy's function from analytic number theory, see [13, 25, 27]. More recently, a connection to random unitarily invariant infinite Hermitian matrices was understood in [4], see also [5]. Finally, these moments are also closely related to the theory of integrable systems, in particular Painlevé equations, see [4, 5, 6, 8, 23].

Before continuing, we note that the study of these moments turns out to be equivalent to the study of the moments of the sum of points of certain determinantal point processes (equivalently, of the trace of certain unitarily invariant random Hermitian matrices), see [20] for the definitions. Such problems have their own intrinsic interest and have been studied for a long time [20]. For example, it has been shown that for special cases of determinantal processes the Laplace transform of the sum of points is connected to integrable systems [7, 12]. It can be instructive to think of this quantity in analogy with the gap probability of a determinantal process [20]: both are given by expectations of very simple multiplicative functionals and, in particular, as Fredholm determinants.

Now, returning to our problem, it is well-known, see [20], that the distribution of the eigenangles $$\varvec{\theta }=(\theta _1,\dots ,\theta _N)$$ of a Haar distributed matrix from $${\mathbb {U}}(N)$$ is explicit and given by the probability measure on $$[0,2\pi ]^N$$:

\begin{aligned} \frac{1}{(2\pi )^NN!}\prod _{1\le j< k \le N}\left| e^{\text {i} \theta _j}-e^{\text {i} \theta _k}\right| ^2 d\theta _1 \cdots d\theta _N. \end{aligned}

This measure is also called the Circular Unitary Ensemble (CUE). There is a two-parameter generalization (the CUE is the special case $$\beta =2$$, $$\delta =0$$) of this measure, called the Circular Jacobi $$\beta$$-Ensemble, depending on parameters $$\beta >0$$ and $$\Re (\delta )>-\frac{1}{2}$$. This is the most natural extension of CUE for which explicit formulae for various quantities and connections to integrable systems exist, see [11, 20, 22]. Its special case $$\delta =0$$ is called the Circular $$\beta$$-Ensemble ($${\mathsf {C}}\beta {\mathsf {E}}_N$$) and is arguably the most well-known example of a beta ensemble from random matrix theory, see for example [20]. The Circular Jacobi $$\beta$$-Ensemble is then the following probability measure on $$[0,2\pi ]^N$$ that we denote by $${\mathsf {C}}{\mathsf {J}}\beta {\mathsf {E}}_{N,\delta }$$:

\begin{aligned} {\mathsf {C}}{\mathsf {J}}\beta {\mathsf {E}}_{N,\delta }(d\varvec{\theta })&=\frac{1}{c_{N,\beta ,\delta }}\prod _{1\le j< k \le N}\left| e^{\text {i} \theta _j}-e^{\text {i} \theta _k}\right| ^\beta \prod _{j=1}^N \left( 1-e^{-\text {i}\theta _j}\right) ^\delta \\&\quad \times \left( 1-e^{\text {i}\theta _j}\right) ^{{\overline{\delta }}} d\theta _1 \cdots d\theta _N, \end{aligned}

where $$c_{N,\beta ,\delta }$$ is chosen so that this is a probability measure, see [11, 20]. In the distinguished cases $$\beta =\{1,2,4\}, \delta =0$$ this is the law of the eigenangles of a random matrix that can be constructed by a natural transformation from a Haar distributed unitary matrix, see [20]. The cases $$\beta =1$$ and $$\beta =4$$ (and $$\delta =0$$) are called the Circular Orthogonal (COE) and Circular Symplectic Ensembles (CSE) respectively. Matrix models also exist for all values of $$\beta >0$$ and $$\Re (\delta )>-\frac{1}{2}$$, see [11, 30]. Finally, observe that for $$\delta \in \frac{\beta }{2}{\mathbb {N}}$$, $${\mathsf {C}}{\mathsf {J}}\beta {\mathsf {E}}_{N,\delta }$$ coincides with $${\mathsf {C}}\beta {\mathsf {E}}_N$$ conditioned to have eigenvalues at 1 (equivalently eigenangles at 0).

Associated to $$\varvec{\theta }=(\theta _1,\dots ,\theta _N)\in [0,2\pi ]^N$$ we define $$\Phi _{\varvec{\theta }}$$ and $$\Psi _{\varvec{\theta }}$$ as in (1) and denote the expectation with respect to $${\mathsf {C}}{\mathsf {J}}\beta {\mathsf {E}}_{N,\delta }$$ by $$\mathbf {E}_{N,\beta ,\delta }$$. It is then natural to consider the joint moments corresponding to the $${\mathsf {C}}{\mathsf {J}}\beta {\mathsf {E}}_{N,\delta }$$:

\begin{aligned} F_{N,\beta ,\delta }(s,h) := \mathbf {E}_{N,\beta ,\delta }\left[ \left| \Psi _{\varvec{\theta }}(0)\right| ^{2s-2h} \left| \frac{d}{du}\Psi _{\varvec{\theta }}(u)\bigg |_{u=0}\right| ^{2h}\right] . \end{aligned}
(3)

We note that these moments exist whenever $$\beta >0$$, $$\Re (\delta )>-\frac{1}{2}$$ and $$-\frac{1}{2}< h < \Re (\delta )+s+\frac{1}{2}$$. The problem of the large N asymptotics of $$F_{N,\beta ,0}(s,h)$$ for $$\beta \ne 2$$, namely for $${\mathsf {C}}\beta {\mathsf {E}}_N$$, was first considered in a recent paper by Forrester [21]. As we will indicate at several places in the sequel, when one leaves the world of random unitary matrices ($$\beta =2$$, $$\delta =0$$) several structures break down. Nevertheless, the author in [21] was able, using explicit computations with generalised hypergeometric functions, to prove convergence of the rescaled joint moments $$F_{N,\beta ,0}(s,h)$$ and obtain an explicit combinatorial expression for the limit (that we recall in Theorem 2.4 below) for integer s and real h.

In this paper, using a different approach, exploiting results on consistent probability measures on interlacing arrays, we prove convergence for general complex $$\delta \ne 0$$ and positive real exponents s and h and give a probabilistic representation for the limit. Using the techniques presented in the sequel we also extend Forrester's explicit formula for the limit to real s and $$\delta$$ and integer h. Our more general goal in this paper is to provide a framework for the study of moments of the sum of points in rows of consistent random interlacing arrays, see Sect. 3. Then, we apply (by virtue of Proposition 2.2) this general theory to the problem of the joint moments of $${\mathsf {C}}{\mathsf {J}}\beta {\mathsf {E}}_{N,\delta }$$ characteristic polynomials to obtain our main result, Theorem 1.1 below. Moreover, we also find an application in the study of the moments of the logarithmic derivative of the characteristic polynomial of the Laguerre $$\beta$$-ensemble, see Sect. 5.

To state our main result we require some notation. In what follows, $${\mathbb {Y}}$$ denotes the set of all integer partitions (Young diagrams). Given $$\kappa = (\kappa _1,\kappa _2,\kappa _3,\ldots ) \in {\mathbb {Y}}$$, we write $$|\kappa | := \kappa _1+\kappa _2 +\kappa _3+ \cdots$$; recall also that $$(x)_k:=\prod _{j=0}^{k-1}(x+j)$$ is the Pochhammer symbol, and $$[x]_\kappa ^{(\alpha )} := \prod _j \left( x - \frac{1}{\alpha }(j-1)\right) _{\kappa _j}$$ is the generalised Pochhammer symbol associated to the partition $$\kappa$$. Given a box $$\Box \in \kappa$$, we let $$\alpha (\Box )$$ denote the arm length (the number of boxes to the right of $$\Box$$) and $$\ell (\Box )$$ denote the leg length (the number of boxes below $$\Box$$). We also define the co-arm length $$\alpha ^\prime (\Box )$$ as the number of boxes to the left of $$\Box$$, and the co-leg length $$\ell ^\prime (\Box )$$ as the number of boxes above $$\Box$$. (An example Young diagram, together with tableaux of its arm, leg, co-arm and co-leg lengths, appeared here as figures.)
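As a concrete example, the combinatorial quantities just defined can be computed directly; the following sketch (ours, not from the paper) also implements the two Pochhammer symbols in exact rational arithmetic, with a hypothetical example partition $$\kappa =(3,2)$$.

```python
from fractions import Fraction

def arm(kappa, i, j):    # boxes to the right of box (i, j); rows/columns are 1-based
    return kappa[i - 1] - j

def leg(kappa, i, j):    # boxes below box (i, j)
    return sum(1 for r in kappa[i:] if r >= j)

def coarm(kappa, i, j):  # boxes to the left of box (i, j)
    return j - 1

def coleg(kappa, i, j):  # boxes above box (i, j)
    return i - 1

def poch(x, k):
    """Pochhammer symbol (x)_k = x (x+1) ... (x+k-1)."""
    out = Fraction(1)
    for j in range(k):
        out *= x + j
    return out

def gen_poch(x, kappa, alpha):
    """Generalised Pochhammer symbol [x]_kappa^(alpha)."""
    out = Fraction(1)
    for j, part in enumerate(kappa, start=1):
        out *= poch(x - Fraction(j - 1) / alpha, part)
    return out

kappa = [3, 2]  # hypothetical example partition
print(arm(kappa, 1, 1), leg(kappa, 1, 1), coarm(kappa, 2, 2), coleg(kappa, 2, 2))
```

For instance, the box in the first row and first column of $$\kappa =(3,2)$$ has arm length 2 and leg length 1.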

Following [15], we also introduce the function:

\begin{aligned} \Upsilon _\beta (z)&:= \frac{\beta }{2}\log G\left( 1+\frac{2z}{\beta }\right) -\left( z-\frac{1}{2}\right) \log \Gamma \left( 1+\frac{2z}{\beta }\right) \nonumber \\&\quad +\int _0^\infty \left( \frac{1}{2x}-\frac{1}{x^2}+\frac{1}{x(e^x-1)} \right) \frac{e^{-xz}-1}{e^{x\beta /2}-1}dx + \frac{z^2}{\beta }+\frac{z}{2}, \end{aligned}
(4)

where G(z) denotes the Barnes G-function. We note that for special values of $$\beta$$ and z, $$\Upsilon _\beta (z)$$ takes an even more explicit form, see for example Lemma 7.1 in [15]. Finally, the standard notation $${\mathbb {E}}\left[ {\mathsf {Z}}\right]$$ denotes the expectation of a random variable $${\mathsf {Z}}$$ (unless we have introduced specific notation for the underlying probability measure in which case we use that instead). Our main result is:

### Theorem 1.1

Let $$\beta >0$$. Let $$s,h \in {\mathbb {R}}$$ and $$\delta \in {\mathbb {C}}$$ be such that $$\Re (\delta )>-\frac{1}{3}$$, $$s>-\frac{1}{3}$$, $$s+\Re (\delta )>0$$ and $$0\le h <s+\Re (\delta )+\frac{1}{2}$$. Then, there exists a family of real random variables $$\left\{ {\mathsf {X}}_{\beta }(\tau )\right\} _{\tau \in {\mathbb {C}},\Re (\tau )>0}$$ such that the following limit exists:

\begin{aligned}&\lim \limits _{N \rightarrow \infty } \frac{1}{N^{2s^2/\beta + 2h}}F_{N,\beta ,\delta } (s,h)\overset{\text {def}}{=}{F}_{\beta ,\delta }(s,h) ={F}_{\beta ,\delta }(s,0) 2^{-2h} {\mathbb {E}}\left[ \left| {\mathsf {X}}_\beta (s +\delta )\right| ^{2h}\right] <\infty ,\nonumber \\ \end{aligned}
(5)

where $$F_{\beta ,\delta }(s,0)$$ is given explicitly as:

\begin{aligned}&F_{\beta ,\delta }(s,0)\nonumber \\&\quad = e^{2s\frac{\delta +{\overline{\delta }}}{\beta } +\Upsilon _\beta \left( 1+\delta -\beta /2\right) - \Upsilon _\beta \left( 1+\delta +\frac{s}{2}-\beta /2\right) + \Upsilon _\beta \left( 1+{\overline{\delta }}-\frac{\beta }{2}\right) - \Upsilon _\beta \left( 1+\delta +{\overline{\delta }}-\frac{\beta }{2}\right) - \Upsilon _\beta \left( 1+{\overline{\delta }}+\frac{s}{2}-\beta /2\right) + \Upsilon _\beta \left( 1+\delta +{\overline{\delta }}+s-\beta /2\right) }.\nonumber \\ \end{aligned}
(6)

When $$\tau$$ is real, $${\mathsf {X}}_{\beta }(\tau )$$ is symmetric about the origin in law. Moreover, for $$h \in {\mathbb {N}} \cup \{0\}$$ and $$\tau >h-\frac{1}{2}$$ we have the explicit formula:

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {X}}_\beta (\tau )^{2h}\right]&= (-1)^h \sum _{|\kappa |\le 2h}\frac{(-2h)_{|\kappa |} 2^{|\kappa |}}{[4\tau /\beta ]_\kappa ^{(\beta /2)}}\nonumber \\ {}&\quad \times \prod _{\Box \in \kappa }\frac{\frac{\beta }{2} \alpha ^\prime (\Box ) + \tau - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) }. \end{aligned}
(7)

### Corollary 1.2

For $$\beta>0,\delta >-\frac{1}{3}$$, $$s>-\frac{1}{3}$$, $$s+\delta >\frac{1}{2}$$ and $$0\le h <s+\delta +\frac{1}{2}$$ we have $$F_{\beta ,\delta }(s,h)>0$$.

### Proof

For $$\tau >\frac{1}{2}$$, evaluating (7) at $$h=1$$ gives, after further algebraic simplification, the following formula for the 2nd moment of $${\mathsf {X}}_\beta (\tau )$$:

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {X}}_\beta (\tau )^{2}\right] =\frac{\beta }{(2\tau -1)(4\tau +\beta )}, \end{aligned}

which does not vanish for any $$\beta>0, \tau >\frac{1}{2}$$. In particular, for these values of $$\tau$$ and $$\beta$$ the random variable $${\mathsf {X}}_\beta (\tau )$$ is not almost surely zero. Combining this with (5) gives the desired result. $$\square$$
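As a check on (7) and on the simplification used above, the following sketch (ours, not the authors') evaluates the right-hand side of (7) in exact rational arithmetic, enumerating the partitions with $$|\kappa |\le 2h$$, and compares the case $$h=1$$ with the closed form $$\beta /((2\tau -1)(4\tau +\beta ))$$.

```python
from fractions import Fraction

def partitions_upto(n):
    """All partitions kappa with |kappa| <= n, as weakly decreasing lists."""
    result = [[]]
    def rec(remaining, max_part, prefix):
        for part in range(min(remaining, max_part), 0, -1):
            result.append(prefix + [part])
            rec(remaining - part, part, prefix + [part])
    rec(n, n, [])
    return result

def poch(x, k):  # Pochhammer (x)_k
    out = Fraction(1)
    for j in range(k):
        out *= x + j
    return out

def gen_poch(x, kappa, alpha):  # generalised Pochhammer [x]_kappa^(alpha)
    out = Fraction(1)
    for j, part in enumerate(kappa, start=1):
        out *= poch(x - Fraction(j - 1) / alpha, part)
    return out

def moment_2h(beta, tau, h):
    """Right-hand side of (7): E[ X_beta(tau)^(2h) ], exactly."""
    b, t = Fraction(beta), Fraction(tau)
    total = Fraction(0)
    for kappa in partitions_upto(2 * h):
        k = sum(kappa)
        term = poch(Fraction(-2 * h), k) * Fraction(2) ** k \
               / gen_poch(4 * t / b, kappa, b / 2)
        for i, row in enumerate(kappa, start=1):
            for j in range(1, row + 1):
                a = row - j                              # arm length
                l = sum(1 for r in kappa[i:] if r >= j)  # leg length
                num = b / 2 * (j - 1) + t - (i - 1)      # co-arm and co-leg lengths
                den = (b / 2 * (a + 1) + l) * (b / 2 * a + l + 1)
                term *= num / den
        total += term
    return (-1) ** h * total

# h = 1 reproduces the second-moment formula beta / ((2 tau - 1)(4 tau + beta)):
print(moment_2h(2, 1, 1), moment_2h(Fraction(3), Fraction(5, 2), 1))
```

For instance, at $$\beta =2$$, $$\tau =1$$ the sum over the four partitions $$\emptyset , (1), (2), (1,1)$$ yields $$\frac{1}{3}$$, in agreement with the displayed formula.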

We expect the convergence statement in Theorem 1.1 to extend to the full range of parameters for which the joint moments exist but this would require some new ideas to establish. In fact, for $$\beta =2$$ and $$\delta =0$$ it is possible to go slightly beyond the parameter range for s in Theorem 1.1 above, see [4]. However, the proof in [4] uses in an essential way the underlying determinantal point process structure which is absent for $$\beta \ne 2$$. Finally, there are a few results that have been proven for $${\mathsf {X}}_2(\tau )$$ in the literature, see Remarks 1.5, 1.6 and 1.7 for more details, but the proofs of all of these again rely heavily on the determinantal point process structure and so they do not easily generalise to $${\mathsf {X}}_\beta (\tau )$$.

### Remark 1.3

In a revised version of the manuscript [21], posted on arXiv a week before the present paper first appeared, Forrester proves an explicit formula which is essentially equivalent to (7). The two proofs were obtained independently and are different. In the initial version of [21] the explicit formula for $$F_{\beta ,0}(s,h)$$ is only obtained for integer s. In the revised version of [21] additional explicit computations are performed that give a combinatorial formula for $$F_{N,\beta ,0}(s,h)/F_{N,\beta ,0}(s,0)$$ for real s and integer h, and then the large N limit is taken. The main point of our proof of (7) is that the extension to real s can be done already in the limit, using only the explicit formula of Forrester for $$F_{\beta ,0}(s,h)$$ for integer s and h, and further explicit computations are not required. The approach we take is not an obvious one, and we use some general results developed in Sect. 3 in order to carry it out.

### Remark 1.4

For $$h=0$$ and general $$\beta ,s>0,\Re (\delta )>-\frac{1}{3}$$ (in this case $$F_{N,\beta ,\delta }(s,0)$$ is completely explicit using the Selberg integral, see [29]) a proof of the asymptotics was first given, as far as we are aware, in [15], see Proposition 2.3 below. The asymptotics in the case $$\delta =0$$ and $$s\in {\mathbb {N}}\cup \{0\}$$ go back earlier and were first considered in [29].

### Remark 1.5

For $$\beta =2$$ and real $$\tau$$ the random variable $${\mathsf {X}}_2(\tau )$$ has a representation in terms of the principal value sum of points of a determinantal point process as proven by Qiu [37]. For general $$\beta >0$$ we expect an analogous representation in terms of the principal value sum of the eigenvalues of a certain stochastic operator, see [31, 39]. Although, as far as we are aware, this has not been worked out explicitly, it might be possible to obtain using the techniques of [31, 39].

### Remark 1.6

We expect that for general $$\tau$$, the random variable $${\mathsf {X}}_\beta (\tau )$$ is not almost surely zero, so that in particular $$F_{\beta ,\delta }(s,h)>0$$. Establishing this for the full range of parameters turns out to be tricky, even in the case $$\beta =2$$ and real $$\tau$$ which was proven in [4]. This was done using the representation mentioned in Remark 1.5 above. In fact, we expect a much stronger result: that the law of $${\mathsf {X}}_\beta (\tau )$$ has a density with respect to the Lebesgue measure. Remarkably, when $$\tau \in {\mathbb {N}}\cup \left\{ 0\right\}$$ this density is completely explicit. This was first obtained in the case $$\beta =2$$ in [5] and for general $$\beta >0$$, by a different method, in [21].

### Remark 1.7

For $$\beta =2$$ and any real $$\tau >-\frac{1}{2}$$, the characteristic function $$t\mapsto {\mathbb {E}}\left[ e^{\text {i}\frac{t}{2}{\mathsf {X}}_2(\tau )}\right]$$ is a tau-function of a special case of the $$\sigma$$-Painlevé III' equation, which depends on the parameter $$\tau$$, see [5] for more details. It is not clear whether such connections to integrable systems extend beyond $$\beta =2$$ (even the simpler determinantal case of general complex $$\tau$$ for $$\beta =2$$ is still open). It would be interesting to investigate this.

Let us say a word about the strategy of proof. Our starting point is an observation alluded to earlier, that was first made in [4], and also used in [5], in the setting of $$\beta =2$$ and $$\delta =0$$, which connects $$F_{N,\beta ,\delta }(s,h)$$ to the moments of the trace of the Hermitian Hua–Pickrell matrix ensemble (also known as Cauchy ensemble, see [9, 11, 20, 26, 32, 36, 41]). An analogous connection also exists for general $$\beta >0$$ and $$\delta =0$$, as established in [21]. The further extension to $$\delta \ne 0$$ is straightforward and we present it in Sect. 2. Modulo this common start, our approach is completely different from the one of Forrester in [21]. It is probabilistic in nature and makes heavy use of some hidden exchangeable structure, a feature shared with the approach in [4]. A key ingredient in [4] is the fact that one can correctly define a unitarily invariant Hua–Pickrell measure on infinite Hermitian matrices. Here instead we make use of some general results from [3] on consistent (this will be made precise in the sequel) distributions, depending on a parameter $$\beta$$, on infinite interlacing arrays and further develop a little theory using some arguments based on exchangeability, see Sect. 3.1. Although the appearance of random infinite interlacing arrays might seem slightly unmotivated at first sight, this setting is, in some sense, the natural general $$\beta$$ analogue of random unitarily invariant infinite Hermitian matrices, see Sect. 3.1 and Remark 3.3 in particular for more details.

Finally, using the framework developed here we prove in Sect. 5 an analogous result to Theorem 1.1 for the moments of the logarithmic derivative of the characteristic polynomial of the Laguerre $$\beta$$-ensemble, see Proposition 5.1. The random variables appearing in Proposition 5.1 also appear in the asymptotics of the joint moments of characteristic polynomials from the classical compact groups. This is established, using different methods, in work in preparation by one of us [24] and gives the conjectural asymptotics of joint moments over various L-function families, as the conductor of the family tends to infinity.

## 2 Preliminaries

We need to introduce some notation and definitions and state some previous results. Define the Weyl chamber, for $$N\in {\mathbb {N}}$$, by:

\begin{aligned} {\mathbb {W}}_N=\left\{ \mathbf {x}=\left( x_1,\ldots ,x_N\right) \in {\mathbb {R}}^N:x_1\ge \ldots \ge x_N \right\} . \end{aligned}

We will make frequent use throughout the paper, without explicit mention, of the following fact: if f is a symmetric function on $${\mathbb {R}}^N$$, then $$\int _{{\mathbb {W}}_N}f(\mathbf {x})d\mathbf {x}=\frac{1}{N!}\int _{{\mathbb {R}}^N}f(\mathbf {x})d\mathbf {x}.$$

### Definition 2.1

For $$\beta > 0$$ and $$\Re (\tau )>-\frac{1}{2}$$, we introduce the probability measure on $${\mathbb {W}}_N$$:

\begin{aligned} {\mathfrak {m}}_{N,\beta }^{(\tau )}(d \mathbf {x}):= & {} \frac{N!}{{\mathcal {C}}_{N,\beta }^{(\tau )}}\prod _{j=1}^N(1 +\text {i}x_j)^{-\tau -\beta (N-1)/2-1}\nonumber \\&(1-\text {i}x_j)^{-{\overline{\tau }}-\beta (N-1)/2-1}\left| \Delta (\mathbf {x})\right| ^\beta d\mathbf {x}, \end{aligned}
(8)

where $$\Delta (\mathbf {x}) := \prod _{1 \le i < j \le N} (x_i-x_j)$$ is the Vandermonde determinant, and $${\mathcal {C}}_{N,\beta }^{(\tau )}$$ is the normalisation constant, see [20]. We will need its explicit value only for real values of $$\tau$$, in which case it is given by, see [20]:

\begin{aligned} {\mathcal {C}}_{N,\beta }^{(\tau )} = 2^{-\beta N(N-1)/2 -2N\tau }\pi ^N \prod _{j=0}^{N-1}\frac{\Gamma \left( \frac{\beta }{2}j + 2\tau + 1\right) \Gamma \left( \frac{\beta }{2}(j+1)+1\right) }{\Gamma \left( \frac{\beta }{2}j +\tau +1\right) ^2 \Gamma \left( \frac{\beta }{2}+1\right) }. \end{aligned}
(9)

These are known as the general-$$\beta$$ Hua–Pickrell measures (also known as the Cauchy $$\beta$$-ensemble, see [20]). Throughout this paper we use $${\mathbb {E}}_{N,\beta }^{(\tau )}$$ to denote expectations taken with respect to the measure $${\mathfrak {m}}_{N,\beta }^{(\tau )}$$, and in the context of taking these expectations $$(x_1^{(N)},\ldots ,x_N^{(N)})$$ denotes a point in $${\mathbb {W}}_N$$ distributed according to $${\mathfrak {m}}_{N,\beta }^{(\tau )}$$. One of the key propositions in [21] links the joint moments $$F_{N,\beta ,0}(s,h)$$ to moments taken against the Hua–Pickrell measures. Following a similar method, we obtain the following generalization of [21, Proposition 2.1] to $$\delta \ne 0$$.
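As a quick sanity check on (9) (ours, not in the paper): for $$N=1$$ and real $$\tau$$ the measure (8) reduces to $$(1+x^2)^{-\tau -1}dx$$ up to normalisation, the $$\Gamma \left( \frac{\beta }{2}+1\right)$$ factors cancel so $$\beta$$ drops out, and (9) becomes $$2^{-2\tau }\pi \,\Gamma (2\tau +1)/\Gamma (\tau +1)^2$$. The sketch below compares this with numerical quadrature under the substitution $$x=\tan t$$.

```python
import numpy as np
from math import gamma, pi

tau = 1.0
# Formula (9) at N = 1 (beta-dependent factors cancel):
c_formula = 2.0 ** (-2 * tau) * pi * gamma(2 * tau + 1) / gamma(tau + 1) ** 2

# Numerical value of int_R (1 + x^2)^(-tau - 1) dx via x = tan t:
# the integrand transforms to (1 + tan^2 t)^(-tau - 1) sec^2 t = cos^(2 tau) t.
t = np.linspace(-pi / 2, pi / 2, 200001)
f = np.cos(t) ** (2 * tau)
dt = t[1] - t[0]
c_numeric = dt * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule

print(c_formula, c_numeric)
```

At $$\tau =1$$ both quantities equal $$\pi /2$$, consistent with the Gamma duplication formula.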

### Proposition 2.2

For all $$\beta > 0$$, $$\Re (\delta )>-\frac{1}{2}$$, $$s+\Re (\delta )>-\frac{1}{2}$$ and $$-\frac{1}{2}< h < \Re (\delta )+s + \frac{1}{2}$$, we have:

\begin{aligned} F_{N,\beta ,\delta }(s,h) = F_{N,\beta ,\delta }(s,0) 2^{-2h} {\mathbb {E}}_{N,\beta }^{(s+\delta )}\left[ \left| x_1^{(N)} +\cdots +x_N^{(N)}\right| ^{2h}\right] . \end{aligned}
(10)

### Proof

We argue as in [21, Proposition 2.1]. Firstly, observe that:

\begin{aligned} \frac{\Psi _{\varvec{\theta }}'(0)}{\Psi _{\varvec{\theta }}(0)}=-\frac{1}{2}\sum _{j=1}^N \cot \left( \frac{\theta _j}{2}\right) . \end{aligned}
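To see this (a short derivation sketch of ours, using only that $$\Psi _{\varvec{\theta }}$$ is real-valued on $${\mathbb {R}}$$ with $$\left| \Psi _{\varvec{\theta }}\right| =\left| \Phi _{\varvec{\theta }}\right|$$): since $$1-e^{\text {i}x}=-2\text {i}e^{\text {i}x/2}\sin (x/2)$$, we have $$\left| \Phi _{\varvec{\theta }}(u)\right| =\prod _{j=1}^N 2\left| \sin \left( \frac{\theta _j-u}{2}\right) \right|$$, so up to a global sign $$\Psi _{\varvec{\theta }}(u)=\pm \prod _{j=1}^N 2\sin \left( \frac{\theta _j-u}{2}\right)$$, and logarithmic differentiation gives:

\begin{aligned} \frac{\Psi _{\varvec{\theta }}'(u)}{\Psi _{\varvec{\theta }}(u)}=\frac{d}{du}\sum _{j=1}^N \log \left( 2\sin \left( \frac{\theta _j-u}{2}\right) \right) =-\frac{1}{2}\sum _{j=1}^N\cot \left( \frac{\theta _j-u}{2}\right) , \end{aligned}

which at $$u=0$$ is the identity above.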

Thus, making the transformation $$x_j=\cot \left( \frac{\theta _j}{2}\right)$$ gives the equality:

\begin{aligned} F_{N,\beta ,\delta }(s,h)= K_{N,\beta ,s,\delta } 2^{-2h} {\mathbb {E}}_{N,\beta }^{(s+\delta )}\left[ \left| x_1^{(N)} +\cdots +x_N^{(N)}\right| ^{2h}\right] \end{aligned}

where $$K_{N,\beta ,s,\delta }$$ is a constant that only depends on $$N,\beta ,s$$ and $$\delta$$. Then, evaluating both sides at $$h=0$$ gives the statement of the proposition. $$\square$$

We will also use the following corollary of [15, Lemma 4.14]:

### Proposition 2.3

[15, Lemma 4.14] Let $$\beta >0$$ and $$\Re (\delta )>-\frac{1}{3}$$. For $$s > -\frac{1}{3}$$, we have:

\begin{aligned} \log F_{N, \beta ,\delta }(s,0)&=\frac{2s^2}{\beta }\log (N)+2s\frac{\delta +{\overline{\delta }}}{\beta }+\Upsilon _\beta \left( 1+\delta -\beta /2\right) \\&\quad - \Upsilon _\beta \left( 1+\delta +\frac{s}{2}-\beta /2\right) \\&\quad + \Upsilon _\beta \left( 1+{\overline{\delta }}-\frac{\beta }{2}\right) - \Upsilon _\beta \left( 1+\delta +{\overline{\delta }}-\frac{\beta }{2}\right) \\&\quad - \Upsilon _\beta \left( 1+{\overline{\delta }}+\frac{s}{2}-\beta /2\right) \\&\quad + \Upsilon _\beta \left( 1+\delta +{\overline{\delta }}+s-\beta /2\right) + o(1), \end{aligned}

as $$N \rightarrow \infty$$.

Thus, an immediate corollary of Proposition 2.3 is the explicit evaluation (6).

Our final main ingredient is the following result of [21], giving an explicit evaluation of $$F_{\beta ,0}(s,h)$$ for $$s \in {\mathbb {N}} \cup \{0\}$$, $$h \in {\mathbb {C}}$$, $$-\frac{1}{2}< \Re (h) < s + \frac{1}{2}$$. We will use this formula to derive (7); that is, the explicit evaluation for $$h \in {\mathbb {N}} \cup \{0\}$$ and $$\tau > h - \frac{1}{2}$$.

### Theorem 2.4

[21, Theorem 1.1] For $$\beta > 0$$, $$s \in {\mathbb {N}} \cup \{0\},$$ and $$-\frac{1}{2}< \Re (h) < s + \frac{1}{2}$$, we have the explicit evaluation:

\begin{aligned} F_{\beta ,0}(s,h)= & {} \prod _{j=1}^s \frac{\Gamma (2j/\beta )}{\Gamma (2(s+j)/\beta )} \times \frac{1}{2^{2h} \cos (\pi h)}\sum _{\kappa }\frac{(-2h)_{|\kappa |} 2^{|\kappa |}}{[4s/\beta ]_\kappa ^{(\beta /2)}} \nonumber \\&\times \prod _{\Box \in \kappa }\frac{\frac{\beta }{2} \alpha ^\prime (\Box ) + s - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) }. \end{aligned}
(11)

Here the sum over $$\kappa$$ ranges over all partitions with at most s parts.

## 3 Proof of the Convergence Statement in Theorem 1.1

### 3.1 A Moments Convergence Result for Rows of Random Interlacing Arrays

In this section we prove some results on the moments of averages of the points in rows of consistent (this will be made precise shortly) random infinite interlacing arrays. As we explain in Remarks 3.1 and 3.3 below, such random arrays, for $$\beta =1,2,4$$, arise by looking at the eigenvalues of consecutive principal submatrices of an infinite conjugation-invariant random matrix. The definition of a consistent random infinite interlacing array makes sense for all $$\beta >0$$ however. The main observation behind this section is that instead of studying the sum of eigenvalues of the matrix (namely the sum of points in rows of the array), for $$\beta =1,2,4$$, it is easier to study the sum of diagonal entries of the matrix (of course these two sums are equal) because of their exchangeable structure. There is a natural generalisation of the notion of diagonal entries of a matrix to that of diagonal entries corresponding to an infinite interlacing array for any $$\beta >0$$. These still have an exchangeable structure and this is what makes everything work. Key inputs to our approach are the results from [3], which generalise the theory developed for the case $$\beta =2$$ in [34] and [9]. The arguments in this section are based on exchangeability and this seems to be the most general setting to which they apply in a random matrix context.

We need some notation and definitions. For $$\mathbf {x}\in {\mathbb {W}}_N$$ and $$\mathbf {y}\in {\mathbb {W}}_{N+1}$$, we write $$\mathbf {x}\prec \mathbf {y}$$ if the following inequalities hold:

\begin{aligned} y_1\ge x_1 \ge y_2 \ge \cdots \ge y_N \ge x_N \ge y_{N+1}. \end{aligned}

For any $$\beta >0$$ and $$N\ge 1$$, consider the following Markov kernel $${\Lambda }^{(\beta )}_{N+1,N}$$ from $${\mathbb {W}}_{N+1}$$ to $${\mathbb {W}}_N$$ given by the explicit formula, for $$\mathbf {y}\in {\mathbb {W}}^{\circ }_{N+1}$$ (the interior of $${\mathbb {W}}_{N+1}$$):

\begin{aligned} {\Lambda }^{(\beta )}_{N+1,N}\left( \mathbf {y},d\mathbf {x}\right)&=\frac{\Gamma \left( \frac{\beta }{2}(N+1)\right) }{\Gamma \left( \frac{\beta }{2}\right) ^{N+1}}\prod _{1\le i<j \le N+1} (y_i-y_j)^{1-\beta }\prod _{1\le i<j\le N} (x_i-x_j)\\&\quad \times \prod _{i=1}^N \prod _{j=1}^{N+1}|x_i-y_j|^{\frac{\beta }{2}-1}\mathbf {1}_{\mathbf {x}\prec \mathbf {y}}d\mathbf {x}, \end{aligned}

where $$\mathbf {1}_{{\mathcal {A}}}$$ is the indicator function of the set $${\mathcal {A}}$$. The fact that this correctly integrates to one is a result of Dixon and Anderson, see [1, 18]. We note that this formula extends continuously to arbitrary $$\mathbf {y}\in {\mathbb {W}}_{N+1}$$. This is evident from an equivalent definition of the kernel $${\Lambda }^{(\beta )}_{N+1,N}$$ in terms of the distribution of the roots of a certain random polynomial associated to $$\mathbf {y}\in {\mathbb {W}}_{N+1}$$, see Definition 1.1 and Proposition 1.2 in [3].
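The normalisation can be checked numerically in the simplest case (a sketch of ours, not from the paper): for $$N=1$$ the kernel is a Beta-type density on $$[y_2,y_1]$$ and the Dixon–Anderson constant makes it integrate to one; we take $$\beta =3$$ and $$\mathbf {y}=(2,0)$$ as an arbitrary test case.

```python
import numpy as np
from math import gamma

beta, y1, y2 = 3.0, 2.0, 0.0

# Lambda_{2,1}(y, dx) density on (y2, y1) for N = 1:
x = np.linspace(y2, y1, 400001)
density = (gamma(beta) / gamma(beta / 2) ** 2
           * (y1 - y2) ** (1 - beta)
           * (y1 - x) ** (beta / 2 - 1) * (x - y2) ** (beta / 2 - 1))

dx = x[1] - x[0]
mass = dx * (density.sum() - 0.5 * (density[0] + density[-1]))  # trapezoid rule
print(mass)
```

The computed total mass should be 1 up to quadrature error, in line with the Dixon–Anderson integral.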

### Remark 3.1

This Markov kernel might seem to come out of thin air but for the values $$\beta =1,2,4$$ (which provide the motivation) $${\Lambda }^{(\beta )}_{N+1,N}\left( \mathbf {y},\cdot \right)$$ corresponds to the conditional distribution of the eigenvalues of the principal $$N\times N$$ submatrix of a random conjugation-invariant $$(N+1)\times (N+1)$$ self-adjoint matrix, with either real ($$\beta =1$$) or complex ($$\beta =2$$) or quaternion ($$\beta =4$$) entries, with given spectrum $$\mathbf {y}=\left( y_1,\dots ,y_{N+1}\right)$$, see [3] and Remark 3.3 below.

The following definition will be important in what follows.

### Definition 3.2

Let $$\beta >0$$ and $$N\ge 1$$. A consistent random family of interlacing arrays with parameter $$\beta$$ and length N is a family of random sequences $$\left( \mathbf {x}^{(i)}\right) _{i=1}^N$$ such that $$\mathbf {x}^{(i)}\in {\mathbb {W}}_i$$ with:

\begin{aligned} \mathbf {x}^{(1)}\prec \mathbf {x}^{(2)} \prec \cdots \prec \mathbf {x}^{(N-1)} \prec \mathbf {x}^{(N)} \end{aligned}

and the joint distribution $${\mathcal {N}}$$ of the sequence $$\left( \mathbf {x}^{(i)}\right) _{i=1}^N$$ satisfies:

\begin{aligned} {\mathcal {N}}\left( d\mathbf {x}^{(1)},\ldots ,d\mathbf {x}^{(N)}\right) =\nu _N(d\mathbf {x}^{(N)}){\Lambda }_{N,N-1}^{(\beta )}\left( \mathbf {x}^{(N) },d\mathbf {x}^{(N-1)}\right) \cdots {\Lambda }^{(\beta )}_{2,1}\left( \mathbf {x}^{(2)},d\mathbf {x}^{(1)}\right) , \end{aligned}

where $$\nu _N$$ is the distribution of the top row $$\mathbf {x}^{(N)}$$ in the sequence. A consistent random family of infinite interlacing arrays with parameter $$\beta$$ is a family of random infinite sequences $$\left( \mathbf {x}^{(i)}\right) _{i=1}^{\infty }$$ such that the above holds for all $$N\ge 1$$.

### Remark 3.3

Consistent random families of interlacing arrays are very closely related to random conjugation-invariant matrices for $$\beta =1, 2, 4$$. Let $$\mathbf {H}$$ be a random infinite self-adjoint matrix with real (for $$\beta =1$$), complex (for $$\beta =2$$) or quaternion (for $$\beta =4$$) entries, so that the law of all its finite top-left submatrices is invariant under orthogonal (for $$\beta =1$$), unitary (for $$\beta =2$$) or symplectic (for $$\beta =4$$) conjugation. Then, the eigenvalues of the consecutive top-left submatrices of $$\mathbf {H}$$ form a consistent family of infinite interlacing arrays for parameter $$\beta$$. Conversely any consistent random family of infinite interlacing arrays for $$\beta \in \{ 1, 2, 4\}$$ comes from such an infinite conjugation-invariant random matrix $$\mathbf {H}$$, see Proposition 1.7 in [3] for a proof.

We now define the so-called diagonal entries $$\left( {\mathsf {d}}_1,{\mathsf {d}}_2,{\mathsf {d}}_3,\ldots \right)$$ of an interlacing array $$\left( \mathbf {x}^{(i)}\right) _{i=1}^{\infty }$$ by $${\mathsf {d}}_1=\mathbf {x}^{(1)}$$ and for $$i\ge 1$$:

\begin{aligned} {\mathsf {d}}_{i+1}=\sum _{j=1}^{i+1}x_j^{(i+1)}-\sum _{j=1}^{i}x_j^{(i)}. \end{aligned}

### Remark 3.4

This terminology comes from the fact that it coincides with the usual notion of diagonal entries of matrices for $$\beta \in \{1, 2, 4 \}$$, see Remark 3.3.
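For $$\beta =2$$ this coincidence can be checked directly (a sketch of ours): the trace of the k-th principal submatrix of a Hermitian matrix equals the sum of its first k diagonal entries, so the differences of consecutive row sums of the eigenvalue array recover the matrix diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (a + a.conj().T) / 2  # random Hermitian matrix (beta = 2 case)

# Row i of the interlacing array: eigenvalues of the top-left i x i submatrix.
row_sums = [np.linalg.eigvalsh(H[:i, :i]).sum() for i in range(1, n + 1)]

# d_1 = x^(1), and d_(i+1) = (sum of row i+1) - (sum of row i):
d = np.diff([0.0] + row_sums)

print(np.round(d, 6))
print(np.round(np.diag(H).real, 6))
```

The two printed vectors agree, illustrating Remark 3.4.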

We have the following result for the diagonal entries of consistent random infinite interlacing arrays.

### Proposition 3.5

Let $$\beta >0$$. Let $${\mathfrak {M}}$$ be the law of a consistent random infinite interlacing array with parameter $$\beta$$. Then, the random infinite sequence of diagonal entries $$\left( {\mathsf {d}}_1,{\mathsf {d}}_2,{\mathsf {d}}_3,\ldots \right)$$ associated to the array with law $${\mathfrak {M}}$$ is exchangeable.

### Proof

Fix $$\beta >0$$ and let $$N\ge 1$$ be arbitrary. For any consistent distribution/law $${\mathfrak {M}}$$ on infinite interlacing arrays (with parameter $$\beta$$) we write $$\mathsf {Law}_{{\mathfrak {M}}}\left( {\mathsf {d}}_1,\dots ,{\mathsf {d}}_N\right)$$ for the joint law of the diagonal elements $${\mathsf {d}}_1,\dots ,{\mathsf {d}}_N$$. The extremal consistent distributions on infinite interlacing arrays (namely the ones that cannot be written as a convex combination of other consistent distributions) have been classified in [3]. They are parametrised by an infinite-dimensional space $$\Omega$$ endowed with a certain topology, see Definition 1.9 in [3]. For any $$\omega \in \Omega$$ we denote by $${\mathfrak {N}}_{\omega }^{(\beta )}$$ the corresponding extremal consistent distribution. Then, under $${\mathfrak {N}}_\omega ^{(\beta )}$$ the diagonal elements are i.i.d., see Theorem 1.13 in [3], namely

\begin{aligned} \mathsf {Law}_{{\mathfrak {N}}_\omega ^{(\beta )}}\left( {\mathsf {d}}_1, \dots ,{\mathsf {d}}_N\right) ={\mathfrak {n}}_{\omega }^{(\beta )}(dx_1)\cdots {\mathfrak {n}}_{\omega }^{(\beta )}(dx_N), \end{aligned}

where the probability measure $${\mathfrak {n}}_\omega ^{(\beta )}$$ on $${\mathbb {R}}$$ is explicit, see Theorem 1.13 in [3]. Moreover, from Theorem 1.16 in [3] we obtain that for any consistent distribution $${\mathfrak {M}}$$ there exists a unique Borel probability measure $$\nu ^{{\mathfrak {M}}}$$ on $$\Omega$$ such that

\begin{aligned} \mathsf {Law}_{{\mathfrak {M}}}\left( {\mathsf {d}}_1,\dots ,{\mathsf {d}}_N \right)= & {} \int _{\Omega }\nu ^{{\mathfrak {M}}}(d\omega )\mathsf {Law}_{{\mathfrak {N} }_\omega ^{(\beta )}}\left( {\mathsf {d}}_1,\dots ,{\mathsf {d}}_N\right) \\= & {} \int _{\Omega }\nu ^{{\mathfrak {M}}}(d\omega ){\mathfrak {n}}_{\omega }^{(\beta )} (dx_1)\cdots {\mathfrak {n}}_{\omega }^{(\beta )}(dx_N). \end{aligned}

Now, let $$\sigma$$ be an arbitrary permutation from the symmetric group on N symbols. Then,

\begin{aligned} \mathsf {Law}_{{\mathfrak {M}}}\left( {\mathsf {d}}_{\sigma (1)}, \dots ,{\mathsf {d}}_{\sigma (N)}\right)= & {} \int _{\Omega }\nu ^{{\mathfrak {M}}} (d\omega ){\mathfrak {n}}_{\omega }^{(\beta )}(dx_{\sigma (1)})\cdots {\mathfrak {n}}_{\omega }^{(\beta )}(dx_{\sigma (N)})\\= & {} \mathsf {Law}_{{\mathfrak {M}}}\left( {\mathsf {d}}_1,\dots ,{\mathsf {d}}_N\right) \end{aligned}

and since $$N\ge 1$$ is arbitrary the desired conclusion follows. $$\square$$

We also need the following notion of consistent sequences of probability measures on the Weyl chambers. Intuitively it can be understood as looking at the individual rows of a consistent random infinite interlacing array.

### Definition 3.6

We say that a sequence of probability measures $$\{\mu _N\}_{N=1}^{\infty }$$ on $$\{{\mathbb {W}}_N \}_{N\ge 1}$$ is consistent for a parameter $$\beta$$ if:

\begin{aligned} \mu _{N+1}{\Lambda }_{N+1,N}^{(\beta )}=\mu _N, \ \ \forall N\ge 1. \end{aligned}
(12)
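In finite-state toy form (all numbers below are hypothetical), condition (12) says that the law of row N+1 pushed through the Markov kernel $$\Lambda _{N+1,N}^{(\beta )}$$ returns the law of row N; a minimal sketch:

```python
# Toy sketch of mu_{N+1} Lambda_{N+1,N} = mu_N: a probability vector pushed
# through a stochastic kernel (each row of Lam sums to 1). The actual kernels
# act on Weyl chambers; the two-state numbers here are purely illustrative.
mu2 = [0.25, 0.75]               # toy law of "row 2"
Lam = [[0.6, 0.4], [0.2, 0.8]]   # toy stochastic kernel
mu1 = [sum(mu2[i] * Lam[i][j] for i in range(2)) for j in range(2)]
print([round(p, 10) for p in mu1])  # [0.3, 0.7]
```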

Using Kolmogorov’s extension theorem we see that consistent distributions of parameter $$\beta$$ on infinite interlacing arrays are in canonical bijection (under which the law of the Nth row of the array is given by $$\mu _N$$) with consistent sequences of probability measures for parameter $$\beta$$ defined above.

We have the following general convergence result from [3] for the sample mean of the elements on the Nth row.

### Proposition 3.7

Let $$\beta >0$$. Let $$\{\mu _N \}_{N=1}^{\infty }$$ be a consistent sequence of probability measures for the parameter $$\beta$$. Let $${\mathfrak {M}}$$ be the law of the corresponding consistent random infinite interlacing array. Denote by $$\left( {\mathsf {x}}^{(N)}_1,\ldots , {\mathsf {x}}^{(N)}_N\right)$$ the elements of the Nth row (having law $$\mu _N$$) and consider the average $${\mathsf {T}}_N=N^{-1}\left( {\mathsf {x}}_1^{(N)}+\cdots +{\mathsf {x}}_N^{(N)}\right)$$. Then, $${\mathsf {T}}_N$$ converges $${\mathfrak {M}}$$-almost surely as $$N\rightarrow \infty$$ to some random variable $${\mathsf {T}}_\infty$$.

### Proof

This is a consequence of Theorem 3.6 in [3]. The O-V condition (see Definitions 2.1 and 2.2 in [3]) for the parameter $$\gamma _1^{(N)}$$ therein is exactly the convergence of $${\mathsf {T}}_N$$ (as $$\gamma _1^{(N)}={\mathsf {T}}_N$$), and it holds almost surely for any consistent distribution $${\mathfrak {M}}$$ with parameter $$\beta$$ on infinite interlacing arrays. $$\square$$

We now move on to upgrading this convergence to convergence of the moments. When $${\mathsf {T}}_1={\mathsf {x}}_1^{(1)}={\mathsf {d}}_1$$ is an $$L^1$$ random variable this can be achieved in a rather neat way. One could in principle use Proposition 3.7 above together with a uniform integrability argument, which can be established using exchangeability; this approach was taken in [4]. Here instead we give a somewhat different argument. Below we denote by $${\mathbb {E}}_{\mu }$$ the expectation with respect to a probability measure $$\mu$$.

### Proposition 3.8

In the setting of Proposition 3.7 above, suppose $$r\ge 1$$ and $${\mathbb {E}}_{\mu _1}\left[ \left| {\mathsf {T}}_1\right| ^r\right] <\infty$$. Then, for any $$0\le t\le r$$ we have

\begin{aligned} {\mathbb {E}}_{\mu _N}\left[ \left| {\mathsf {T}}_N\right| ^t\right] \overset{N\rightarrow \infty }{\longrightarrow } {\mathbb {E}}_{{\mathfrak {M}}}\left[ \left| {\mathsf {T}}_\infty \right| ^t\right] <\infty . \end{aligned}

### Proof

Clearly, we have $${\mathsf {T}}_N=N^{-1}\left( {\mathsf {d}}_1+\cdots +{\mathsf {d}}_N\right)$$. It is then standard that the sequence $$\left( {\mathsf {T}}_{-N}\right) _{N\le -1}=\left( {\mathsf {T}}_N\right) _{N\ge 1}$$ forms a backward martingale with respect to the exchangeable filtration, see [28]. Then, since $$r\ge 1$$ and $${\mathbb {E}}_{\mu _1}\left[ \left| {\mathsf {T}}_1\right| ^r\right] <\infty$$, from the backward martingale convergence theorem, see [28], we have convergence in $$L^r$$:

\begin{aligned} {\mathbb {E}}_{{\mathfrak {M}}}\left[ \left| {\mathsf {T}}_N -{\mathsf {T}}_\infty \right| ^{r}\right] \overset{N \rightarrow \infty }{\longrightarrow } 0. \end{aligned}

The extension to $$0\le t \le r$$ follows from Jensen’s inequality. $$\square$$

Finally, we have the following explicit formula for the positive integer r-moments of the abstract random variable $${\mathsf {T}}_\infty$$ in terms of the r-moments of the sums of elements of the rows 1 up to r of the array (which can be analysed in a concrete way in many cases). The proof relies in an essential way on exchangeability and is an abstraction, to what seems to be the most general setting, of an argument given in [4].

### Proposition 3.9

In the setting of Proposition 3.7, let $$r\in {\mathbb {N}}$$ and assume $${\mathbb {E}}_{\mu _1}\left[ \left| {\mathsf {T}}_1\right| ^{r}\right] <\infty$$. Then, we have the following formula:

\begin{aligned} {\mathbb {E}}_{{\mathfrak {M}}}\left[ {\mathsf {T}}_{\infty }^{r}\right] =\frac{1}{r!} \sum _{k=1}^{r} (-1)^{r-k} \left( {\begin{array}{c}r\\ k\end{array}}\right) {\mathbb {E}}_{\mu _k}\left[ \left( {\mathsf {x}}_1^{(k)}+\cdots +{\mathsf {x}}^{(k)}_k\right) ^{r}\right] . \end{aligned}
(13)

### Proof

We prove the following more general statement. Let $$\left( {\mathsf {Y}}_1,{\mathsf {Y}}_2,{\mathsf {Y}}_3,\ldots \right)$$ be any sequence of exchangeable real random variables with $${\mathbb {E}}\left[ \left| {\mathsf {Y}}_1\right| ^r\right] <\infty$$, where $$r\in {\mathbb {N}}$$, and consider $${\mathsf {Y}}_\infty =\lim _{N\rightarrow \infty } N^{-1}\sum _{i=1}^N {\mathsf {Y}}_i$$. This limit exists by the law of large numbers for exchangeable sequences (by assumption the $${\mathsf {Y}}_i$$ are integrable), see [28]. Then, we have the following formula (by assumption all moments involved are finite):

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {Y}}_{\infty }^{r}\right] =\frac{1}{r!} \sum _{k=1}^{r} (-1)^{r-k} \left( {\begin{array}{c}r\\ k\end{array}}\right) {\mathbb {E}}\left[ \left( {\mathsf {Y}}_1+\cdots +{\mathsf {Y}}_k\right) ^{r}\right] , \end{aligned}
(14)

which we will prove shortly. The statement of the proposition is then a consequence of (14) and the simple observation:

\begin{aligned} {\mathbb {E}}_{{\mathfrak {M}}}\left[ \left( {\mathsf {d}}_1+\cdots +{\mathsf {d}}_k\right) ^{r}\right] ={\mathbb {E}}_{\mu _k} \left[ \left( {\mathsf {x}}_1^{(k)}+\cdots +{\mathsf {x}}_k^{(k)}\right) ^{r}\right] . \end{aligned}

We now turn to the proof of (14). We claim that

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {Y}}_\infty ^{r}\right] ={\mathbb {E}} \left[ {\mathsf {Y}}_1{\mathsf {Y}}_2\cdots {\mathsf {Y}}_{r}\right] . \end{aligned}
(15)

Then, (14) is a consequence of (15) along with the formula

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {Y}}_1{\mathsf {Y}}_2\cdots {\mathsf {Y}}_{r}\right] =\frac{1}{r!}\sum _{k=1}^{r}(-1)^{r-k} \left( {\begin{array}{c}r\\ k\end{array}}\right) {\mathbb {E}}\left[ \left( {\mathsf {Y}}_1+\cdots +{\mathsf {Y}}_k\right) ^{r}\right] , \end{aligned}

which follows from exchangeability and the elementary identity, see [3] for a proof:

\begin{aligned} w_1\cdots w_{r}=\frac{1}{r!}\sum _{k=1}^{r}(-1)^{r-k}\sum _{1\le i_1<i_2<\cdots < i_k\le r}\left( w_{i_1}+w_{i_2}+\cdots + w_{i_{k}}\right) ^{r}. \end{aligned}
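This identity is easy to verify numerically; a quick sketch (the test values for $$w_1,\dots ,w_r$$ are arbitrary):

```python
from itertools import combinations
from math import factorial

def product_via_identity(w):
    # (1/r!) * sum_{k=1}^{r} (-1)^{r-k} * sum over k-subsets
    # {i_1 < ... < i_k} of (w_{i_1} + ... + w_{i_k})^r
    r = len(w)
    total = 0
    for k in range(1, r + 1):
        total += (-1) ** (r - k) * sum(sum(sub) ** r for sub in combinations(w, k))
    return total / factorial(r)

print(product_via_identity([1, 2, 3]))  # 6.0, equal to 1*2*3
```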

It remains to prove (15). We recall a couple of classical facts about infinite sequences of exchangeable random variables, see [28]. Let $${\mathcal {E}}$$ denote the exchangeable sigma algebra. Then, by the de Finetti–Hewitt–Savage theorem the $$\left( {\mathsf {Y}}_i\right) _{i=1}^{\infty }$$ are conditionally i.i.d. given $${\mathcal {E}}$$ and moreover (since the $${\mathsf {Y}}_i$$ are integrable) by the backward martingale convergence theorem $${\mathsf {Y}}_\infty ={\mathbb {E}}\left[ {\mathsf {Y}}_i|{\mathcal {E}}\right]$$. Hence, we can compute using the tower property:

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {Y}}_1{\mathsf {Y}}_2\cdots {\mathsf {Y}}_{r}\right] ={\mathbb {E}}\left[ {\mathbb {E}}\left[ {\mathsf {Y}}_1 \cdots {\mathsf {Y}}_{r}|{\mathcal {E}}\right] \right] ={\mathbb {E}} \left[ {\mathbb {E}}\left[ {\mathsf {Y}}_1|{\mathcal {E}}\right] \cdots {\mathbb {E}}\left[ {\mathsf {Y}}_{r}|{\mathcal {E}}\right] \right] ={\mathbb {E}}\left[ {\mathsf {Y}}_\infty ^{r}\right] . \end{aligned}

$$\square$$

### 3.2 Application to the Hua–Pickrell Measures

We now specialize the previous results to the case of the Hua–Pickrell measures from Definition 2.1. We begin with the following lemma, which says that the theory of Sect. 3.1 is indeed applicable.

### Lemma 3.10

Let $$\beta >0$$ and $$\Re (\tau )>-\frac{1}{2}$$. The Hua–Pickrell measures $$\left\{ {\mathfrak {m}}_{N,\beta }^{(\tau )} \right\} _{N=1}^\infty$$ are consistent with parameter $$\beta$$.

### Proof

The required multidimensional integral (12) that needs to be checked follows from Lemma 2.2 of [33]. $$\square$$

We denote by $${\mathsf {X}}_\beta (\tau )$$ the limiting random variable $${\mathsf {T}}_\infty$$ from Proposition 3.7 corresponding to $$\left\{ {\mathfrak {m}}_{N,\beta }^{(\tau )} \right\} _{N=1}^\infty$$. We note that the law of $${\mathsf {X}}_\beta (\tau )$$ is symmetric about the origin whenever $$\tau \in {\mathbb {R}}$$, as this holds for each pre-limit random variable $${\mathsf {T}}_N$$ using the symmetry of $${\mathfrak {m}}_{N,\beta }^{(\tau )}$$ for $$\tau \in {\mathbb {R}}$$.

### Proposition 3.11

Let $$\beta ,\Re (\tau )>0$$ and $$0\le h <\Re (\tau )+\frac{1}{2}$$. Then,

\begin{aligned} \lim \limits _{N \rightarrow \infty } \frac{1}{N^{2h}}{\mathbb {E}}_{N,\beta }^{(\tau )}\left[ \left| x_1^{(N)} +\cdots +x_N^{(N)}\right| ^{2h}\right] ={\mathbb {E}}\left[ \left| {\mathsf {X}}_\beta (\tau )\right| ^{2h}\right] <\infty . \end{aligned}

### Proof

This is an application of Proposition 3.8 by virtue of Lemma 3.10 and the fact that

\begin{aligned} {\mathbb {E}}_{1,\beta }^{(\tau )}\left[ \left| x_1^{(1)}\right| ^r\right]&=\frac{1}{{\mathcal {C}}_{1,\beta }^{(\tau )}}\int _{-\infty }^\infty \left| x\right| ^r\left( 1+\text {i}x\right) ^{-\tau -1}\left( 1 -\text {i}x\right) ^{-{\bar{\tau }}-1}dx\\&=\frac{1}{{\mathcal {C}}_{1,\beta }^{(\tau )}}\int _{-\infty }^\infty \left| x\right| ^r(1+x^2)^{-\Re (\tau )-1}e^{2\Im (\tau )\arctan (x)}dx<\infty , \end{aligned}

for any $$0\le r<1+2\Re (\tau )$$; since $$\Re (\tau )>0$$, such $$r$$ can be chosen with $$r\ge 1$$. $$\square$$
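The pointwise identity between the two forms of the one-dimensional Hua–Pickrell weight used in this computation can be sanity-checked numerically (a sketch, not part of the proof; the sample values of $$x$$ and $$\tau$$ are arbitrary):

```python
import cmath
import math

def weight_complex(x, tau):
    # (1 + ix)^{-tau-1} (1 - ix)^{-conj(tau)-1}, principal branch
    return (1 + 1j * x) ** (-tau - 1) * (1 - 1j * x) ** (-tau.conjugate() - 1)

def weight_real(x, tau):
    # (1 + x^2)^{-Re(tau)-1} * exp(2 Im(tau) arctan(x))
    return (1 + x * x) ** (-tau.real - 1) * math.exp(2 * tau.imag * math.atan(x))

tau = 0.8 + 0.3j
checks = [cmath.isclose(weight_complex(x, tau), weight_real(x, tau))
          for x in (-2.0, 0.5, 3.0)]
print(all(checks))  # True
```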

### Proof of the convergence statement in Theorem 1.1

This follows immediately by combining Propositions 2.2 and 2.3 with Proposition 3.11 above. $$\square$$

We finally record the following formula for the even moments of $${\mathsf {X}}_\beta (\tau )$$ in terms of the moments of the Hua–Pickrell ensembles. This will be a key ingredient in proving the explicit formula (7) by extending Forrester’s result, Theorem 2.4. Using this we can readily study the moments $${\mathbb {E}}\left[ {\mathsf {X}}_\beta (\tau )^{2h}\right]$$ as functions of $$\beta$$ and $$\tau$$ and obtain certain analytic properties and bounds. Proving these properties directly, by approximation from the integral formula for finite N, appears difficult (one needs certain uniform in N estimates which are not straightforward to obtain).

### Proposition 3.12

Let $$\beta > 0$$, let $$h \in {\mathbb {N}} \cup \{0\}$$, and let $$\Re (\tau ) > h - \frac{1}{2}$$. Then,

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {X}}_\beta (\tau )^{2h}\right] =\frac{1}{(2h)!}\sum _{k=1}^{2h} (-1)^{2h-k} {2h \atopwithdelims ()k} {\mathbb {E}}_{k,\beta }^{(\tau )}\left[ \left( x_1^{(k)}+\cdots +x_k^{(k)}\right) ^{2h}\right] . \end{aligned}
(16)

### Proof

This is an application of Proposition 3.9 (the case $$h=0$$ is a notational convention).

$$\square$$

## 4 Proof of the Explicit Formula in Theorem 1.1

In this section we consider the case when $$\tau \in {\mathbb {R}}$$, and prove the explicit formula (7) for the even moments of $${\mathsf {X}}_\beta (\tau )$$ by making use of Theorem 2.4, Proposition 3.12 and some estimates proven below. We begin by defining $$\tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )}$$, for $$\Re (\tau )>-\frac{1}{2}$$, by

\begin{aligned} \tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )} = 2^{-\beta N(N-1)/2 -2N\tau }\pi ^N \prod _{j=0}^{N-1}\frac{\Gamma \left( \frac{\beta }{2}j + 2\tau + 1\right) \Gamma \left( \frac{\beta }{2}(j+1)+1\right) }{\Gamma \left( \frac{\beta }{2}j+\tau +1\right) ^2 \Gamma \left( \frac{\beta }{2}+1\right) } . \end{aligned}
(17)

We note that for real $$\tau$$ this is precisely the normalization constant $${\mathcal {C}}_{N,\beta }^{(\tau )}$$, from (9), for the Hua–Pickrell $$\beta$$-ensemble, but for complex $$\tau$$ these constants differ. In a similar spirit, for $$\Re (\tau )>-\frac{1}{2}$$, we define the linear operator $${\mathfrak {E}}_{N,\beta }^{(\tau )}$$, acting on symmetric functions f on $${\mathbb {R}}^N$$, by

\begin{aligned} {\mathfrak {E}}_{N,\beta }^{(\tau )}\left[ f\right] =\frac{1}{\tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )}} \int _{{\mathbb {R}}^N} \frac{f(x_1,\ldots ,x_N)\left| \Delta (\mathbf {x})\right| ^\beta }{\prod _{j=1}^N (1+x_j^2)^{\beta (N-1)/2+1+\tau }}d\mathbf {x}, \end{aligned}
(18)

whenever the integral exists. Similar to before, we note that $${\mathfrak {E}}_{N,\beta }^{(\tau )}\left[ f\right]$$ coincides with the expectation $${\mathbb {E}}^{(\tau )}_{N,\beta }[f(x_1^{(N)},\dots ,x_N^{(N)})]$$, taken with respect to the Hua–Pickrell measure $${\mathfrak {m}}^{(\tau )}_{N,\beta }$$ whenever $$\tau$$ is real, but they differ for general complex $$\tau$$.

### Lemma 4.1

Fix $$h \in {\mathbb {N}} \cup \{0\}$$. Whenever $$\beta$$ is an even integer:

\begin{aligned} \tau \mapsto {\mathfrak {E}}_{N,\beta }^{(\tau )}\left[ \left( x_1+\cdots +x_N\right) ^{2h}\right] \end{aligned}
(19)

is a rational function.

### Proof

We proceed as in [4, Proposition 1.4]. Since $$\beta$$ is an even integer, we expand the factor $$\left| \Delta (\mathbf {x})\right| ^\beta$$ as a polynomial in the variables $$x_1, \ldots ,x_N$$. We find that $${\mathfrak {E}}_{N,\beta }^{(\tau )}\left[ \left( x_1+\cdots +x_N\right) ^{2h}\right]$$ is expressible as a linear combination, with coefficients not dependent on $$\tau$$, of terms of the form:

\begin{aligned} \frac{1}{\tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )}} \int _{{\mathbb {R}}^N} \prod _{j=1}^N \frac{x_j^{2m_j}}{(1+x_j^2)^{\beta (N-1)/2 + 1 + \tau }} d \mathbf {x} \end{aligned}
(20)

(No odd exponents appear, by symmetry about the origin). Recalling the standard evaluation

\begin{aligned} \int _{-\infty }^\infty \frac{x^{2m}}{(1+x^2)^{\beta (N-1)/2 + 1 + \tau }} dx = \frac{\Gamma \left( m+\frac{1}{2}\right) \Gamma \left( \frac{\beta }{2}(N-1) + \tau -m+\frac{1}{2}\right) }{\Gamma \left( \frac{\beta }{2}(N-1) + 1 + \tau \right) }, \end{aligned}
(21)

we are required to show that

\begin{aligned} 2^{2{\tau } N}\prod _{j=1}^{N}\frac{\Gamma \left( \frac{\beta }{2}j + \tau +1\right) ^2\Gamma \left( \frac{\beta }{2}(N-1) + \tau -m_j+\frac{1}{2}\right) }{\Gamma \left( \frac{\beta }{2}j +2\tau +1\right) \Gamma \left( \frac{\beta }{2}(N-1) + 1 + \tau \right) } \end{aligned}
(22)

is a rational expression in $$\tau$$. Since $$\beta$$ is an even integer, we note that

\begin{aligned} \frac{\Gamma \left( \frac{\beta }{2}j+\tau +1\right) }{\Gamma \left( \frac{\beta }{2}(N-1)+1+\tau \right) } = \left( \frac{\beta }{2}j+\tau +1\right) _{\frac{\beta }{2}(N-1-j)}^{-1}, \end{aligned}
(23)

which is evidently rational in $$\tau$$. Thus we are reduced to showing that

\begin{aligned} 2^{2\tau N}\prod _{j=1}^{N}\frac{\Gamma \left( \frac{\beta }{2}j + \tau +1\right) \Gamma \left( \frac{\beta }{2}(N-1) + \tau -m_j+\frac{1}{2}\right) }{\Gamma \left( \frac{\beta }{2}j+2\tau +1\right) } \end{aligned}
(24)

is rational in $$\tau$$. By Legendre’s duplication formula,

\begin{aligned} \Gamma \left( \frac{\beta }{2}j+2\tau +1\right) = \text {const} \times 2^{2\tau } \Gamma \left( \tau +\frac{1}{2}+\frac{\beta }{4}j\right) \Gamma \left( \tau +1+\frac{\beta }{4}j\right) , \end{aligned}
(25)

where the constant is independent of $$\tau$$. Therefore, (24) equals the following, up to a constant factor independent of $$\tau$$:

\begin{aligned} \prod _{j=1}^{N}\frac{\Gamma \left( \frac{\beta }{2}j + \tau +1\right) \Gamma \left( \frac{\beta }{2}(N-1) + \tau -m_j+\frac{1}{2}\right) }{\Gamma \left( \frac{\beta }{4}j+\tau +1\right) \Gamma \left( \frac{\beta }{4}j+\tau +\frac{1}{2}\right) }. \end{aligned}
(26)

Finally, we note that the expression

\begin{aligned} \frac{\Gamma \left( \frac{\beta }{2}j + \tau +1\right) \Gamma \left( \frac{\beta }{2}(N-1) + \tau -m_j+\frac{1}{2}\right) }{\Gamma \left( \frac{\beta }{4}j+\tau +1\right) \Gamma \left( \frac{\beta }{4}j+\tau +\frac{1}{2}\right) } \end{aligned}
(27)

is rational in $$\tau$$ since, as $$\beta$$ is an even integer, one of $$\frac{\beta }{4}j+1$$ and $$\frac{\beta }{4}j+\frac{1}{2}$$ is an integer and the other is a half-integer, so each Gamma factor in the numerator differs from one in the denominator by an integer shift of the argument. $$\square$$
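The standard evaluation (21) invoked in the proof above can also be verified numerically; a sketch using the substitution $$x=\tan \theta$$ and a midpoint rule (the parameter values are arbitrary):

```python
from math import gamma, sin, cos, pi, isclose

def lhs_integral(m, s, n=100000):
    # integral over R of x^{2m} (1+x^2)^{-s} dx; with x = tan(t) the
    # integrand becomes sin(t)^{2m} cos(t)^{2(s-m-1)} on (-pi/2, pi/2)
    h = pi / n
    return sum(sin(-pi / 2 + (i + 0.5) * h) ** (2 * m)
               * cos(-pi / 2 + (i + 0.5) * h) ** (2 * (s - m - 1))
               for i in range(n)) * h

def rhs_gamma(m, s):
    # Gamma(m + 1/2) Gamma(s - m - 1/2) / Gamma(s); this is (21) with
    # s playing the role of beta*(N-1)/2 + 1 + tau
    return gamma(m + 0.5) * gamma(s - m - 0.5) / gamma(s)

print(isclose(lhs_integral(2, 5.25), rhs_gamma(2, 5.25), rel_tol=1e-6))  # True
```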

### Lemma 4.2

Let $$\tau >-\frac{1}{2}$$ and $$h \in {\mathbb {N}} \cup \{0\}$$ with $$h<\tau +\frac{1}{2}$$. The function given by

\begin{aligned} g_N^{(\tau )}:\beta \mapsto {\mathfrak {E}}_{N,\beta }^{(\tau )}\left[ \left( x_1+\cdots +x_N\right) ^{2h}\right] \end{aligned}
(28)

is holomorphic in $$\Re (\beta ) \ge 0$$, and belongs to the exponential class: there are constants $$C>0$$ and $$\mu > 0$$ such that $$|g_N^{(\tau )}(\beta )| \le C e^{\mu |\beta |}$$ whenever $$\Re (\beta ) \ge 0$$. Moreover, it is bounded by a polynomial on the line $$\Re (\beta )=0$$.

### Proof

That $$g_N^{(\tau )}(\beta )$$ is analytic follows from a standard argument involving Fubini’s and Morera’s theorems. Using the bound $$|\Delta (\mathbf {x})| \le N! \prod _{i=1}^N(1+x_i^2)^{(N-1)/2}$$, see Lemma 3.1 in [40], we obtain that:

\begin{aligned} \begin{aligned} \left| g_N^{(\tau )}(\beta )\right| \le \frac{\left( N!\right) ^{\Re (\beta )}}{\left| \tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )}\right| } \int _{{\mathbb {R}}^N} \frac{(x_1+\cdots +x_N)^{2h}}{\prod _{j=1}^N (1+x_j^2)^{1+\tau }}d\mathbf {x} \ll _{\tau ,h,N}\frac{\left( N!\right) ^{\Re (\beta )}}{\left| \tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )}\right| }. \end{aligned} \end{aligned}
(29)

Using Stirling’s formula in the form $$\log \Gamma (z+1) = z\log (z) - z + \frac{1}{2}\log (z) + O(1)$$, and that $$\log (z+1) - \log (z) = 1/z + O(1/z^2)$$, we see that as a function of $$\beta$$:

\begin{aligned}&\log \frac{\Gamma \left( \frac{\beta }{2}j+\tau +1\right) ^2 \Gamma \left( \frac{\beta }{2}+1\right) }{\Gamma \left( \frac{\beta }{2}j + 2\tau + 1\right) \Gamma \left( \frac{\beta }{2}(j+1)+1\right) } \nonumber \\&\quad = \frac{1}{2}\log (\beta )-\frac{\beta }{2} \log (j) - \frac{\beta }{2}(j+1)\log \left( 1+\frac{1}{j}\right) + O_{\tau ,j}(1), \end{aligned}
(30)

in particular:

\begin{aligned} \left| \frac{\Gamma \left( \frac{\beta }{2}j+\tau +1\right) ^2 \Gamma \left( \frac{\beta }{2}+1\right) }{\Gamma \left( \frac{\beta }{2}j + 2\tau + 1\right) \Gamma \left( \frac{\beta }{2}(j+1)+1\right) }\right| \ll _{\tau ,j} |\beta |^{\frac{1}{2}}j^{-\Re (\beta )/2}e^{-\Re (\beta )/2}, \end{aligned}
(31)

so that:

\begin{aligned} \left( N!\right) ^{\Re (\beta )}\left| \tilde{{\mathcal {C}}}_{N,\beta }^{(\tau )} \right| ^{-1} \ll _{\tau ,N} 2^{\Re (\beta ) N(N-1)/2}e^{-\Re (\beta )N/2}(N!)^{\Re (\beta )/2}|\beta |^{\frac{N}{2}}. \end{aligned}
(32)

Applying this to (29) gives the required bounds for $$\Re (\beta ) > 0$$ and $$\Re (\beta ) = 0$$. $$\square$$

We can now prove the explicit formula in Theorem 1.1.

### Proof of the explicit formula in Theorem 1.1

Combining Proposition 2.2 with the convergence result (Proposition 3.11), and then using Proposition 3.12, we obtain the following chain of equalities, valid for all $$\beta > 0$$, $$h \in {\mathbb {N}} \cup \{0\}$$, and $$\tau \in {\mathbb {R}}$$ with $$\tau > h - \frac{1}{2}$$:

\begin{aligned} \begin{aligned} F_{\beta ,0}(\tau ,h)&= F_{\beta ,0}(\tau ,0) 2^{-2h} {\mathbb {E}} \left[ {\mathsf {X}}_\beta (\tau )^{2h}\right] \\&= F_{\beta ,0}(\tau ,0) \cdot \frac{2^{-2h}}{(2h)!}\sum _{k=1}^{2h} (-1)^{2h-k} {2h \atopwithdelims ()k} {\mathbb {E}}_{k,\beta }^{(\tau )} \left[ \left( x_1^{(k)}+\cdots +x_k^{(k)}\right) ^{2h}\right] . \end{aligned} \end{aligned}
(33)

Since $$F_{\beta ,0}(\tau ,0) = \prod _{j=1}^\tau \frac{\Gamma (2j/\beta )}{\Gamma (2(\tau +j)/\beta )}$$ whenever $$\tau \in {\mathbb {N}} \cup \{0\}$$ (as follows from evaluation of (11) at $$h=0$$), it follows from Theorem 2.4 that, whenever $$\beta > 0$$, $$\tau \in {\mathbb {N}} \cup \{0\}$$, and $$h \in {\mathbb {N}}\cup \{0\}, \ h \le \tau$$, we have the equality, after recalling that $${\mathfrak {E}}^{(\tau )}_{k,\beta }$$ and $${\mathbb {E}}^{(\tau )}_{k,\beta }$$ coincide for real $$\tau$$:

\begin{aligned}&\frac{1}{(2h)!}\sum _{k=1}^{2h} (-1)^{2h-k} {2h \atopwithdelims ()k} {\mathfrak {E}}_{k,\beta }^{(\tau )}\left[ \left( x_1^{(k)}+\cdots +x_k^{(k)}\right) ^{2h}\right] \nonumber \\&\quad = (-1)^h \sum _{|\kappa |\le 2h}\frac{(-2h)_{|\kappa |} 2^{|\kappa |}}{[4\tau /\beta ]_\kappa ^{(\beta /2)}}\nonumber \\&\qquad \times \prod _{\Box \in \kappa }\frac{\frac{\beta }{2} \alpha ^\prime (\Box ) + \tau - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) }. \end{aligned}
(34)

Now fix $$\beta$$ as an even integer. By Lemma 4.1, the left-hand side of (34) is a rational function in $$\tau$$. We observe that the right-hand side of (34) is also rational in $$\tau$$. Therefore the equality (34) holds for all $$\beta \in 2 {\mathbb {N}}$$, $$h \in {\mathbb {N}} \cup \{0\}$$, and $$\tau \in {\mathbb {R}}, \tau > h - \frac{1}{2}$$. Now, fix $$\tau \in {\mathbb {R}}, \tau > h - \frac{1}{2}$$. By Lemma 4.2, the left-hand side of (34) satisfies the conditions of Carlson’s theorem as a function of $$\beta$$; the right-hand side of (34) also satisfies the conditions of Carlson’s theorem, being a rational expression in $$\beta$$. Therefore, we conclude that (34) holds for all $$\beta > 0$$, $$h \in {\mathbb {N}} \cup \{0\}$$ and all $$\tau > h - \frac{1}{2}$$. Combining this with (33), and recalling that $${\mathfrak {E}}^{(\tau )}_{k,\beta }$$ and $${\mathbb {E}}^{(\tau )}_{k,\beta }$$ coincide for real $$\tau$$, we get the desired explicit formula. $$\square$$

## 5 On the Moments of the Logarithmic Derivative of the Laguerre $$\beta$$-Ensemble Characteristic Polynomial

In this section we study the moments of the logarithmic derivative of the characteristic polynomial of the Laguerre $$\beta$$-ensemble, using the theory in Sect. 3.1, and prove an analogue of our main result. For $$\mathbf {x}\in {\mathbb {W}}_N$$ consider the following polynomial:

\begin{aligned} P_{\mathbf {x}}(t)=\prod _{i=1}^N\left( t-x_i\right) . \end{aligned}

For $$\beta >0$$ and $$\nu >-1$$ introduce the probability measure on $${\mathbb {W}}^+_N={\mathbb {W}}_N\cap [0,\infty )^N$$:

\begin{aligned} {\mathfrak {L}}_{N,\beta }^{(\nu )}(d\mathbf {x})= \frac{N!}{{\mathfrak {l}}_{N,\beta }^{(\nu )}}\prod _{j=1}^N x_j^\nu e^{-x_j} \left| \Delta (\mathbf {x})\right| ^\beta \mathbf {1}_{\mathbf {x}\in {\mathbb {W}}^+_N} d\mathbf {x}, \end{aligned}
(35)

where the normalisation constant $${\mathfrak {l}}_{N,\beta }^{(\nu )}$$, see [20], is given by

\begin{aligned} {\mathfrak {l}}_{N,\beta }^{(\nu )} = \prod _{j=0}^{N-1} \frac{\Gamma \left( \nu + 1 + \frac{\beta }{2}j\right) \Gamma \left( 1+\frac{\beta }{2}(j+1)\right) }{\Gamma \left( 1+\frac{\beta }{2}\right) }. \end{aligned}

This is called the Laguerre $$\beta$$-ensemble. For $$\beta =1,2,4$$ it corresponds to the law of eigenvalues of a certain self-adjoint random matrix with real, complex or quaternion entries respectively, see [20]. Tridiagonal models for which (35) is the law of the eigenvalues exist for all values of $$\beta >0$$, see [19]. Under the transformation $$x\mapsto \frac{2}{x}$$, $${\mathfrak {L}}_{N,\beta }^{(\nu )}$$ transforms to the measure:

\begin{aligned} \mathfrak {IL}_{N,\beta }^{(\nu )}(d\mathbf {x})= \frac{N! }{{\mathfrak {l}}_{N,\beta }^{(\nu )}}2^{N \nu + \beta N(N-1)/2 + N} \prod _{j=1}^N x_j^{-\nu -(N-1)\beta -2} e^{-2/x_j} \left| \Delta (\mathbf {x})\right| ^\beta \mathbf {1}_{\mathbf {x}\in {\mathbb {W}}^+_N} d\mathbf {x}. \end{aligned}

We denote expectation with respect to $${\mathfrak {L}}_{N,\beta }^{(\nu )}$$ by $${\mathcal {E}}_{N,\beta }^{(\nu )}$$ and with respect to $$\mathfrak {IL}_{N,\beta }^{(\nu )}$$ by $$\hat{{\mathcal {E}}}_{N,\beta }^{(\nu )}$$. We then consider the moments of the logarithmic derivative of the Laguerre $$\beta$$-ensemble characteristic polynomial:

\begin{aligned} G_{N,\beta }(\nu ,r)={\mathcal {E}}_{N,\beta }^{(\nu )}\left[ \left| \frac{d}{dt}\log P_{\mathbf {x}}(t)\bigg |_{t=0}\right| ^{r}\right] . \end{aligned}

We note that these moments are finite for $$\beta >0$$, $$\nu >-1$$ and $$r<\nu +1$$. We then have the following analogue of our main result:

### Proposition 5.1

Let $$\beta ,\nu >0$$ and $$0\le r < \nu +1$$. Then, there exists a family of non-negative random variables $$\left\{ {\mathsf {Y}}_\beta (\nu )\right\} _{\nu >0}$$ such that

\begin{aligned} \lim \limits _{N \rightarrow \infty } \frac{1}{N^{r}}G_{N,\beta }(\nu ,r)= {\mathbb {E}}\left[ {\mathsf {Y}}_\beta (\nu )^{r}\right] <\infty . \end{aligned}

Moreover, for $$r\in {\mathbb {N}}\cup \{0\}$$ and $$\nu >r-1$$ we have the explicit formula

\begin{aligned} {\mathbb {E}}\left[ {\mathsf {Y}}_\beta (\nu )^{r}\right]&= \frac{r!}{\beta ^r} \sum _{|\kappa | = r} \prod _{\Box \in \kappa }\frac{1}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) } \nonumber \\&\quad \prod _{j=0}^{\ell (\kappa ) - 1} \frac{\Gamma \left( \nu + \frac{\beta }{2}j + 1 - \kappa _j\right) }{\Gamma \left( \nu + \frac{\beta }{2}j + 1\right) }. \end{aligned}
(36)

### Remark 5.2

It follows from the results in [2], see also Sect. 3 in [5], that $${\mathsf {Y}}_2(\nu )$$ is equal to the sum of the (random) inverse points of the Bessel determinantal point process, see [20]. Moreover, for general $$\beta$$ it follows from the results of [38] that $${\mathsf {Y}}_\beta (\nu )$$ is the trace of the random integral operator in (1.4) in [38] (after multiplication by $$\beta /2$$ and the correspondence of parameters $$\nu =\beta (a+1)/2-1$$). This random operator is the inverse operator to the generator of a diffusion process with random scale function and speed measure, see (1.3) in [38]. It also follows that $${\mathsf {Y}}_\beta (\nu )$$ is in fact almost surely strictly positive.

### Remark 5.3

For $$\beta =2$$ and any $$\nu >-1$$, the Laplace transform $$t\mapsto {\mathbb {E}}\left[ e^{-4t{\mathsf {Y}}_2(\nu )}\right]$$ is a tau-function of a special case, which is different from the one appearing in the case of $${\mathsf {X}}_2(\tau )$$, of the $$\sigma$$-Painlevé III’ equation, depending on the parameter $$\nu$$, see Sect. 3 in [5].

We apply the results of Sect. 3.1 and begin with the following:

### Lemma 5.4

Let $$\beta >0$$ and $$\nu >-1$$. The inverse Laguerre measures $$\left\{ \mathfrak {IL}_{N,\beta }^{(\nu )}\right\} _{N=1}^\infty$$ are consistent with parameter $$\beta$$.

### Proof

The required multidimensional integral (12) that needs to be checked follows from Variant A right after Lemma 2.2 of [33]. $$\square$$

The following formula is an easy consequence of the results of Forrester [21].

### Proposition 5.5

Let $$\beta >0$$, $$r \in {\mathbb {N}}\cup \{0\}$$ and $$\nu >r-1$$. Then, we have

\begin{aligned}&\hat{{\mathcal {E}}}_{N,\beta }^{(\nu )}\left[ \left( x_1^{(N)} +\cdots +x_N^{(N)}\right) ^r\right] \nonumber \\&\quad = \frac{r!}{\beta ^r} \sum _{|\kappa | = r} \prod _{\Box \in \kappa }\frac{\frac{\beta }{2}\alpha ^\prime (\Box ) + N - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) } \nonumber \\&\qquad \prod _{j=0}^{\ell (\kappa ) - 1} \frac{\Gamma \left( \nu + \frac{\beta }{2}j + 1 - \kappa _j\right) }{\Gamma \left( \nu + \frac{\beta }{2}j + 1\right) }. \end{aligned}
(37)

### Proof

By Proposition 4.1 of [21], we have the exact evaluation (we have corrected here a small misprint from [21])

\begin{aligned}&\frac{1}{S_N(\nu ,\mu ,\beta )}\int _{[0,1]^N}\left( \sum _{i=1}^N\frac{1}{x_i}\right) ^r \prod _{j=1}^Nx_j^\nu (1-x_j)^\mu \left| \Delta (\mathbf {x})\right| ^\beta d \mathbf {x} \nonumber \\&\quad = \left( \frac{2}{\beta }\right) ^r r!\sum _{|\kappa | = r} \prod _{\Box \in \kappa }\frac{\frac{\beta }{2} \alpha ^\prime (\Box ) + N - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) } \nonumber \\&\qquad \times \prod _{j=0}^{\ell (\kappa ) - 1} \frac{\Gamma \left( \nu + \frac{\beta }{2}j + 1 - \kappa _j\right) }{\Gamma \left( \nu + \frac{\beta }{2}j + 1\right) } \cdot \frac{\Gamma \left( \nu + \mu + \frac{\beta }{2}(N+j-1) + 2 \right) }{\Gamma \left( \nu + \mu + \frac{\beta }{2}(N+j-1) + 2-\kappa _j \right) },\nonumber \\ \end{aligned}
(38)

where $$S_N(\nu ,\mu ,\beta )$$ is the Selberg normalisation:

\begin{aligned} S_N(\nu ,\mu ,\beta ) = \prod _{j=0}^{N-1}\frac{\Gamma \left( \nu +1 +\frac{\beta }{2}j\right) \Gamma \left( \mu +1+\frac{\beta }{2}j\right) \Gamma \left( 1+\frac{\beta }{2}(j+1)\right) }{\Gamma \left( \nu +\mu +2+\frac{\beta }{2}(N+j-1)\right) \Gamma \left( 1+\frac{\beta }{2}\right) }. \end{aligned}

Now, making the substitution $$x_i \mapsto \mu x_i$$ in (38) gives

\begin{aligned}&\frac{1}{S_N(\nu ,\mu ,\beta )}\frac{1}{\mu ^{N+\nu N + \beta N(N-1)/2}}\int _{[0,\mu ]^N}\left( \sum _{i=1}^N\frac{1}{x_i}\right) ^r \prod _{j=1}^Nx_j^\nu \left( 1-\frac{x_j}{\mu }\right) ^\mu \left| \Delta (\mathbf {x})\right| ^\beta d \mathbf {x} \nonumber \\&\quad = \left( \frac{2}{\beta }\right) ^r r!\sum _{|\kappa | = r} \prod _{\Box \in \kappa }\frac{\frac{\beta }{2} \alpha ^\prime (\Box ) + N - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) } \nonumber \\&\qquad \times \prod _{j=0}^{\ell (\kappa ) - 1} \frac{\Gamma \left( \nu + \frac{\beta }{2}j + 1 - \kappa _j\right) }{\Gamma \left( \nu + \frac{\beta }{2}j + 1\right) } \cdot \frac{\Gamma \left( \nu + \mu + \frac{\beta }{2}(N+j-1) + 2\right) }{\mu ^{\kappa _j}\Gamma \left( \nu + \mu + \frac{\beta }{2}(N+j-1) + 2- \kappa _j\right) }.\nonumber \\ \end{aligned}
(39)

Hence, taking the limit $$\mu \rightarrow \infty$$ in (39) (using the dominated convergence theorem) yields

\begin{aligned}&\frac{1}{{\mathfrak {l}}_{N,\beta }^{(\nu )}}\int _{[0,\infty )^N} \left( \sum _{i=1}^N\frac{1}{x_i}\right) ^r \prod _{j=1}^Nx_j^\nu e^{-x_j} \left| \Delta (\mathbf {x})\right| ^\beta d \mathbf {x} \nonumber \\&\quad = \left( \frac{2}{\beta }\right) ^r r!\sum _{|\kappa | = r} \prod _{\Box \in \kappa }\frac{\frac{\beta }{2} \alpha ^\prime (\Box ) + N - \ell ^\prime (\Box )}{\left( \frac{\beta }{2}(\alpha (\Box )+1)+ \ell (\Box )\right) \left( \frac{\beta }{2}\alpha (\Box )+ \ell (\Box ) + 1\right) } \nonumber \\&\qquad \times \prod _{j=0}^{\ell (\kappa ) - 1} \frac{\Gamma \left( \nu + \frac{\beta }{2}j + 1 - \kappa _j\right) }{\Gamma \left( \nu + \frac{\beta }{2}j + 1\right) }. \end{aligned}
(40)

Thus, by making the substitution $$x_i \mapsto 2/x_i$$ in (40), we arrive at (37). $$\square$$

### Proof of Proposition 5.1

First, observe that

\begin{aligned} G_{N,\beta }(\nu ,r)=\hat{{\mathcal {E}}}_{N,\beta }^{(\nu )} \left[ \left( x_1^{(N)}+\cdots +x_N^{(N)}\right) ^r\right] , \end{aligned}

since all the points are non-negative. The convergence statement is then a consequence of Proposition 3.8 (recall all points are non-negative) by virtue of Lemma 5.4 and the fact that $$\hat{{\mathcal {E}}}_{N,\beta }^{(\nu )}\left[ |x_1^{(1)}|^r\right] <\infty$$ for any $$0\le r <\nu +1$$ (which, since $$\nu >0$$, can be chosen so that $$r\ge 1$$); here we denote by $${\mathsf {Y}}_\beta (\nu )$$ the limiting random variable $${\mathsf {T}}_\infty$$.

To obtain the explicit formula (36) we substitute the evaluation (37) into the right-hand side of (13) and use the following fact. For any partition $$\kappa$$ with $$|\kappa | = r$$, we have

\begin{aligned} 1= \frac{1}{r!} \sum _{k=0}^r (-1)^{r-k}\left( {\begin{array}{c}r\\ k\end{array}}\right) \prod _{\Box \in \kappa } \left( \frac{\beta }{2} \alpha ^\prime (\Box ) + k - \ell ^\prime (\Box ) \right) . \end{aligned}
(41)

This is a special case of the identity $$1 = \frac{1}{r!}\sum _{k=0}^r (-1)^{r-k} \left( {\begin{array}{c}r\\ k\end{array}}\right) p(k)$$, valid for any monic polynomial p with $$\deg p = r$$, which can be seen as follows. For a polynomial p, we define $$(\Delta p)(x) := p(x+1) - p(x)$$. Note that $$(\Delta ^np)(x) = \sum _{k=0}^n (-1)^{n-k} \left( {\begin{array}{c}n\\ k\end{array}}\right) p(x+k)$$, and if p(x) is monic with $$\deg p = r$$, then $$(\Delta p)(x) = rx^{r-1} + \text {lower order}$$. Hence, $$(\Delta ^r p)(0) = r!$$ and the desired identity follows. Alternatively, the explicit formula (36) follows by dividing both sides of (37) by $$N^r$$ and taking $$N \rightarrow \infty$$. $$\square$$
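The finite-difference fact underlying (41) is easy to check numerically; a sketch with a hypothetical monic polynomial:

```python
from math import comb, factorial

def rth_difference_at_zero(p, r):
    # (Delta^r p)(0) = sum_{k=0}^{r} (-1)^{r-k} C(r,k) p(k)
    return sum((-1) ** (r - k) * comb(r, k) * p(k) for k in range(r + 1))

# For any monic polynomial of degree r, the r-th finite difference equals r!.
p = lambda x: x ** 4 + 7 * x ** 3 - 2 * x + 5  # monic of degree 4, arbitrary
print(rth_difference_at_zero(p, 4) == factorial(4))  # True
```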