1 Introduction

1.1 The Main Result

Let \(\mathbb {T}\) be the unit circle and consider for \(\beta >0\) the following probability measure on \(\mathbb {T}^N\), with \(\theta _j\in [0,2\pi )\):

$$\begin{aligned} \frac{\Gamma \left( 1+\frac{\beta }{2}\right) ^N}{(2\pi )^N\Gamma \left( 1+N\frac{\beta }{2}\right) }\prod _{1\le j<k \le N}\big |e^{i\theta _j} -e^{i\theta _k}\big |^{\beta }d\theta _1d\theta _2\cdots d\theta _N. \end{aligned}$$
(1)

This probability measure is called the \(C\beta E_{N}\) ensemble and we denote the expectation with respect to it by \(\mathbb {E}_N^{(\beta )}\). For \(\beta =2\) this measure is the law of the eigenvalues of a random Haar distributed \(N\times N\) unitary matrix and it is called the CUE (unitary) ensemble. The \(\beta =1\) and \(\beta =4\) cases are also distinguished and are called the COE (orthogonal) and CSE (symplectic) ensembles respectively. In the orthogonal case one can obtain such a random matrix by taking \(\mathbf {U}^T\mathbf {U}\) where \(\mathbf {U}\) is an \(N\times N\) Haar distributed unitary matrix, while the symplectic case is slightly more complicated, see [32, 35]. Matrix models having the \(C\beta E_N\) ensemble as the induced law of eigenvalues also exist for general \(\beta >0\), see for example [32], but we will not need to use this fact here.

We now define the characteristic polynomial of the \(C\beta E_N\) ensemble, with \(\mathbf {z}=(z_1,\ldots ,z_N) \) where the \(z_j=e^{i\theta _j}\), \(j=1,\ldots ,N\) are distributed according to (1):

$$\begin{aligned} {\Psi }^{(N)}_{\mathbf {z}}(t)=\prod _{j=1}^N\left( 1-e^{-it} z_j\right) =\prod _{j=1}^N\left( 1-e^{i(\theta _j-t)} \right) , \ t \in [0,2\pi ]. \end{aligned}$$
(2)

We call \(\log {\Psi }_{\mathbf {z}}^{(N)}(\cdot )\) the \(C\beta E_{N}\) field. Borrowing the statistical mechanics terminology from [23, 24] we also define the partition function:

$$\begin{aligned} \mathcal {Z}_\mathbf {z}^{(N)}(q)=\frac{1}{2\pi }\int _0^{2\pi }e^{2q\Re \log {\Psi }_{\mathbf {z}}^{(N)}(t)}dt=\frac{1}{2\pi }\int _0^{2\pi } \big |{\Psi }^{(N)}_{\mathbf {z}}(t)\big |^{2q}dt. \end{aligned}$$
(3)

Finally, we define the moments of the partition function of the \(C\beta E_{N}\) field, or, using the terminology of [2, 3, 5, 6] (from which we also borrow the notation), the moments of moments of the characteristic polynomial of the \(C\beta E_{N}\) ensemble:

$$\begin{aligned} \text {MoM}_N^{(\beta )}(k;q)=\mathbb {E}_N^{(\beta )}\left[ \left( \mathcal {Z}_\mathbf {z}^{(N)}(q)\right) ^k\right] =\mathbb {E}_N^{(\beta )}\left[ \left( \frac{1}{2\pi }\int _0^{2\pi }\big |{\Psi }^{(N)}_{\mathbf {z}}(t)\big |^{2q}dt\right) ^k\right] . \end{aligned}$$
(4)
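
As a concrete illustration of definitions (2)–(4), the following minimal Python sketch (our own, not part of the analysis) estimates \(\text {MoM}_N^{(\beta )}(k;q)\) by Monte Carlo in the CUE case \(\beta =2\), where Haar unitary matrices can be sampled directly; it assumes numpy and scipy are available, and the function name mom_monte_carlo is ours.

```python
import numpy as np
from scipy.stats import unitary_group

def mom_monte_carlo(N, k, q, n_samples=2000, seed=0):
    """Monte Carlo estimate of MoM_N^{(2)}(k;q) directly from (2)-(4), for beta = 2 (CUE) only."""
    np.random.seed(seed)
    # |Psi(t)|^{2q} is a trigonometric polynomial of degree qN in e^{it}, so a uniform grid
    # with more than 2qN points computes the integral in (3) exactly
    n_grid = 4 * q * N + 1
    t = 2.0 * np.pi * np.arange(n_grid) / n_grid
    acc = 0.0
    for _ in range(n_samples):
        z = np.linalg.eigvals(unitary_group.rvs(N))           # the eigenvalues e^{i theta_j}
        psi_sq = np.prod(np.abs(1.0 - np.exp(-1j * t)[:, None] * z[None, :]) ** 2, axis=1)
        acc += np.mean(psi_sq ** q) ** k                      # (Z_z^{(N)}(q))^k for this sample
    return acc / n_samples

# e.g. mom_monte_carlo(8, 2, 1) can be compared with the exact combinatorial value of Proposition 2.8
```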

In this paper we give a combinatorial formula for \(\text {MoM}_N^{(\beta )}(k;q)\) for \(k,q\in \mathbb {N}\) and general \(\beta >0\) in Proposition 2.8 below. We then use this formula to establish the large N asymptotics of these moments in the so-called “moment-supercritical” regime. This terminology comes from a connection to log-correlated Gaussian fields and multiplicative chaos that we briefly recall below, see [31] for more details.

Theorem 1.1

Let \(k,q\in \mathbb {N}\) and \(\beta >0\). If moreover \(\beta \) satisfies:

  • \(\beta <4q^2\), for \(k=2\),

  • \(\beta \le 2\), for \(k\ge 3\),

then we have the following asymptotics:

$$\begin{aligned} \lim _{N\rightarrow \infty }\frac{1}{N^{\frac{2}{\beta }(kq)^2-(k-1)}}\text {MoM}_N^{(\beta )}(k;q)=\mathfrak {c}^{(\beta )}(k;q), \end{aligned}$$
(5)

where the coefficient \(\mathfrak {c}^{(\beta )}(k;q)\) is finite and strictly positive and is given as an integral of an explicit non-negative weight over continuous interlacing arrays with constraints, see (15) for the precise definition.

As far as we are aware, the result, even the precise order of the asymptotic in N, is new for parameters satisfying both \(\beta \ne 2\) and \(k\ne 1\) and we elaborate on the history and approaches to this problem below. The restriction to \(\beta \le 2\) for \(k\ge 3\) is a technical one and we expect the statement of the theorem to hold for all \(\beta <2kq^2\) when \(k>1\). Observe that, for \(\beta <2kq^2\) the exponent of N in (5) is strictly greater than one. When \(\beta =2kq^2\) this exponent becomes one, however it is expected when \(k>1\), from the connection to multiplicative chaos [31] recalled shortly, that there should be a phase transition and that, up to a multiplicative constant, \(\text {MoM}_N^{(2kq^2)}(k;q)\) should grow to leading order like \(N\log N\).

The weight mentioned in Theorem 1.1 is identically 1 when \(\beta =2\) and \(\mathfrak {c}^{(2)}(k;q)\) recovers the volume of the set \(\mathsf {I}_c(k;q)\) from Definition 3.1 below. For general \(\beta >0\) it is very closely related to the orbital beta process, a certain probability distribution on continuous interlacing arrays, see [4, 16, 25]. We note that while the integral expression (15) for \(\mathfrak {c}^{(\beta )}(k;q)\) is unambiguously defined for all \(\beta >0\), it is infinite when \(k\ge 2\) and \(\beta \) is large enough (for reasons that we explain in Sect. 3.2). This motivates the definition, for any fixed \(k,q\in \mathbb {N}\), of the following subset of \((0,\infty )\):

$$\begin{aligned} \mathcal {A}(k;q)=\big \{\beta >0: \mathfrak {c}^{(\beta )}(k;q)<\infty \big \}. \end{aligned}$$
(6)

Then, Theorem 1.1 is a consequence of the following two results, which use as a starting point the combinatorial formula for \(\text {MoM}_N^{(\beta )}(k;q)\) from Proposition 2.8. Proposition 1.2 is proven in Sect. 3.1 while Proposition 1.3 is proven in Sect. 3.2 along with a number of results on the leading order coefficient \(\mathfrak {c}^{(\beta )}(k;q)\), including more explicit expressions for \(k=1\) and \(k=2\). In the special case \(\beta =2\) corresponding to the CUE, \(\mathfrak {c}^{(2)}(2;q)\) is known to have connections to integrable systems, in particular the Painlevé V equation and we discuss this briefly in Sect. 3.3.

Proposition 1.2

Let \(k,q\in \mathbb {N}\) and \(\beta \in \mathcal {A}(k;q)\). Then, the asymptotics (5) hold.

Proposition 1.3

Let \(k,q \in \mathbb {N}\). Then, we have:

  • \(\mathcal {A}(1;q)=(0,\infty )\),

  • \(\mathcal {A}(2;q)=(0,4q^2)\),

  • For \(k\ge 3\), \((0,2] \subset \mathcal {A}(k;q)\) and moreover \([2kq^2,\infty )\cap \mathcal {A}(k;q)=\emptyset \).

The fact that \([2kq^2,\infty )\cap \mathcal {A}(k;q)=\emptyset \) for \(k>1\) is consistent with heuristics, explained right after, based on a connection to the theory of Gaussian multiplicative chaos, see [31] for the details. We expect that \(\mathcal {A}(k;q)=(0,2kq^2)\) for all \(k>1\) and we also give some brief heuristic arguments in support of this in Sect. 3.2.

1.2 Predictions from the Connection to Gaussian Log-Correlated Fields

We now briefly recall the connection between \({\Psi }^{(N)}_{\mathbf {z}}\) and Gaussian log-correlated fields and multiplicative chaos, see the introduction of [31] for more details and precise statements and also [13, 26, 27, 28, 33, 37, 43] for more on this topic. This connection begins with the following result, see [13, 26, 27, 28, 33] for the precise convergence statement:

$$\begin{aligned} \log \big |{\Psi }^{(N)}_{\mathbf {z}}(\cdot )\big | \overset{\text {d}}{\longrightarrow } \frac{1}{\sqrt{\beta }}\mathsf {G}(\cdot ), \ \ \text { as } N\rightarrow \infty , \end{aligned}$$
(7)

where \(\mathsf {G}\) is the Gaussian free field on the unit circle \(\mathbb {T}\) with covariance:

$$\begin{aligned} \mathbb {E}\left[ \mathsf {G}(t)\mathsf {G}(s)\right] =-\log \big |e^{it} -e^{is}\big |, \ \ \forall s,t\in [0,2\pi ). \end{aligned}$$

Now, it is possible to define, for a parameter \(\gamma \) with \(\gamma ^2<2\), a non-trivial random measure on \(\mathbb {T}\) called the Gaussian multiplicative chaos (GMC) associated to \(\mathsf {G}\), which is written formally as (see for example [9] for the details):

$$\begin{aligned} \text {GMC}_{\gamma }(dt)= \frac{e^{\gamma \mathsf {G}(t)}}{\mathbb {E}\left[ e^{\gamma \mathsf {G}(t)}\right] }dt=e^{\gamma \mathsf {G}(t)-\frac{\gamma ^2}{2}\mathbb {E}\left[ \mathsf {G}^2(t)\right] } dt. \end{aligned}$$

From (7) one might expect that we have the following convergence in law with respect to the topology of weak convergence of measures on \(\mathbb {T}\), where \(\gamma =\frac{2q}{\sqrt{\beta }}\):

$$\begin{aligned} \frac{\big |{\Psi }^{(N)}_{\mathbf {z}}(t)\big |^{2q}}{\mathbb {E}_N^{(\beta )}\left[ \big |{\Psi }^{(N)}_{\mathbf {z}}(t)\big |^{2q}\right] } dt \xrightarrow {N\rightarrow \infty } \text {GMC}_{\gamma }(dt). \end{aligned}$$
(8)

This convergence has been proven for \(\beta =2\) in [37, 43]. It is a very interesting and challenging task to extend this result beyond \(\beta =2\) but, as far as we are aware, this is still an open problem for any \(\beta \ne 2\). However, see [33] for the analogous result for a small mesoscopic regularisation of \({\Psi }^{(N)}_{\mathbf {z}}\), where the so-called “freezing transition” for the partition function \(\mathcal {Z}_\mathbf {z}^{(N)}(q)\) is also proven.

The total mass of the chaos \(\text {GMC}_{\gamma }\left( \mathbb {T}\right) \) is known to be an explicit random variable by the work of Remy [41], which establishes a conjecture of Fyodorov and Bouchaud [22]. Its moments, when they exist, are also completely explicit and given in terms of Gamma functions, see [22, 41]. If k is sufficiently small, or equivalently \(\beta \) is sufficiently large, so that \(2kq^2<\beta \) and hence the k-th moment of \(\text {GMC}_{\gamma }\left( \mathbb {T}\right) \) exists, one might expect that (8) can be extended to a convergence of the k-th moment of the total mass of the left hand side of (8) to \(\mathbb {E}\left[ \text {GMC}_{\gamma }\left( \mathbb {T}\right) ^k\right] \). Using rotation invariance of the \(C\beta E_N\), this conjectural convergence of moments would imply that, in this “moment-subcritical” regime, \(\text {MoM}_N^{(\beta )}(k;q)\) should grow to leading order like \(N^{\frac{2}{\beta }kq^2}\) times an explicit constant, see [31] for more details. Although not written out explicitly there, for \(\beta =2\) and a restricted range of parameters k, q (see [31] for the details) this convergence of moments is a direct consequence of the proofs in [37, 43]. At “moment-criticality”, namely \(\beta =2kq^2\) with \(k>1\), a more refined GMC heuristic developed in [31] leads to the conjecture that the moments of moments should grow to leading order like \(N\log N\) times an explicit constant, albeit one that is more involved than in the moment-subcritical case. For \(\beta =2\) and \(k\in \mathbb {N}\) this conjecture was proven in the same paper [31].

In the “moment-supercritical” regime \(\beta <2kq^2\) that we study in this paper the connection to the moments of the GMC breaks down and we get a completely different asymptotic behaviour which is much less understood. In fact, some more involved GMC heuristics, see [31], can still predict the correct power of N in the asymptotics but not the leading order coefficient. Although the leading order coefficient in this regime is in general not as explicit as in the moment-subcritical and critical regimes (and in fact we do not expect it to be), it has some non-trivial structure which leads to a representation, in the simplest possible case \(\beta =2\) and \(k=2\), in terms of a Painlevé V transcendent. We believe that more intricate connections to the theory of integrable systems should exist beyond this simplest possible case and the results of this paper could be used as a starting point for investigating this. It is interesting that the combinatorial formula for \(\text {MoM}_N^{(\beta )}(k;q)\) we give in Proposition 2.8 is equally valid for parameters in the moment-critical and moment-subcritical regimes. This is a new feature of the \(C\beta E_N\) for general \(\beta \) that is not present in the special case of the CUE studied previously. In particular, it might be possible to use this formula to access the asymptotics of \(\text {MoM}_N^{(2kq^2)}(k;q)\) in the critical case \(\beta =2kq^2\) with \(k,q\in \mathbb {N}\) and \(k>1\). However, the arguments would be different, and more involved, than the ones presented here and so we do not pursue this further in this paper.

1.3 Some History and Motivation

For \(k=1\) and general \(\beta >0\), Theorem 1.1 is originally due to Keating and Snaith from [29] using the Selberg integral and asymptotics for the Barnes G-function. The leading order coefficient is in fact completely explicit in this case:

$$\begin{aligned} \mathfrak {c}^{(\beta )}(1;q)=\prod _{i=1}^q\frac{\Gamma \left( \frac{2}{\beta }i\right) }{\Gamma \left( \frac{2}{\beta }(q+i)\right) }. \end{aligned}$$
(9)

An alternative proof using symmetric function theory is due to Matsumoto from [35]. Yet another proof making use of spectral theory and orthogonal polynomials on the unit circle can be obtained from [10]. As we will see in Lemma 3.5 below \(\mathfrak {c}^{(\beta )}(1;q)\) also arises as an integral of a special weight over a continuous interlacing array.

In the special case of the CUE, namely \(\beta =2\), which has a lot of extra structure, there has been a great deal of work on the problem of asymptotics of moments of moments. In particular, for \(k=2\) and real q the theorem above is a consequence of the results of Claeys and Krasovsky [14] on asymptotics of Toeplitz determinants with merging singularities, who also go on to prove that \(\mathfrak {c}^{(2)}(2;q)\) has a representation in terms of the Painlevé V equation. This is achieved using the Riemann–Hilbert problem method. The Riemann–Hilbert problem analysis is also the key technical tool in proving the convergence (8) to GMC for \(\beta =2\) [37, 43] and in establishing the convergence of moments of moments (again for \(\beta =2\)) in the moment-subcritical [37, 43] and moment-critical [31] regimes. Later, two alternative proofs of the asymptotics for \(k=2\) and \(q\in \mathbb {N}\) were given in [30]: one using multiple contour integrals from [15] and the other using symmetric function theory, based on results of Bump and Gamburd [11]; in this way two different expressions for \(\mathfrak {c}^{(2)}(2;q)\) were obtained. Using the expression for \(\mathfrak {c}^{(2)}(2;q)\) coming from symmetric function theory, a different proof of the connection to Painlevé V was then given in [8]. The complex analytic approach using multiple contour integrals was extended in [5] to establish the asymptotics for general \(k,q\in \mathbb {N}\). Afterwards, the combinatorial approach involving symmetric functions was also extended to \(k,q\in \mathbb {N}\) in [3] and a different, geometric expression for \(\mathfrak {c}^{(2)}(k;q)\), given as a volume, was obtained. The combinatorial approach to this problem essentially boils down to counting lattice points in certain complicated regions. This approach was then adapted in [2] to deal with the corresponding question of asymptotics of moments of moments in the more involved case of Haar distributed Sp(2N) and SO(2N) matrices. Recently, Fahs [19], using Riemann–Hilbert problem techniques, was able to establish the order of the asymptotics in N for general \(k\in \mathbb {N}\) and real q; however, an expression for \(\mathfrak {c}^{(2)}(k;q)\) for \(k\ge 3\) and non-integer q is still lacking in the moment-supercritical regime. As far as we know, no rigorous results are available at present in the moment-supercritical regime when we also allow for non-integer values of k.

Now, for general \(\beta \ne 2\), as far as we are aware, there were no rigorous results (including for the moment-subcritical and moment-critical regimes) for \(k\ne 1\) prior to the present work. In terms of available approaches to this question, the Riemann–Hilbert problem techniques do not apply since we are no longer in the determinantal setting of \(\beta =2\), and similarly we are not aware of any multiple contour integral formulae for general \(\beta \ne 2\). Moreover, an approach using random orthogonal polynomials on the unit circle as in [10] does not seem well-adapted to this problem. However, an approach based on symmetric function theory and combinatorics does work. Our starting point is a result of Matsumoto [35] which connects expectations of products of characteristic polynomials from the C\(\beta \)E to the Jack symmetric polynomials. The main difference compared to the \(\beta =2\) case from [3] is that now, instead of simply counting lattice points, we also include a weight which comes from the combinatorial formula for Jack polynomials and makes the analysis more complicated. Even with this extra complication the proof is still relatively short, and this can be viewed as a testament to the efficiency of the method. We should also point out that this is of course not the first time Jack polynomials have been used in answering asymptotic questions related to the C\(\beta \)E, see for example [13, 27, 36]. However, both the questions and the way Jack polynomials were used to answer them in these works are different from what we do here. It would be very interesting to extend the asymptotics established in this paper beyond positive integer values of k and q but this would most likely require new ideas.

Finally, it is important to mention that the moments of moments are closely related to conjectures by Fyodorov, Hiary and Keating on the extreme value theory of the \(C\beta E_{N}\) field \(\log {\Psi }^{(N)}_{\mathbf {z}}\), see [5, 23, 24] for more details and [1, 12, 40] for rigorous progress on these conjectures. Moreover, for \(\beta =2\) they are connected to the corresponding moments of moments of the Riemann zeta function, see [6].

Regarding future directions, the method followed here should also work for the \(\beta \)-ensemble versions of Sp(2N) and SO(2N) random matrices. Instead of the Jack polynomials one uses the Heckman–Opdam Jacobi polynomials, see Sect. 5 of [35]. However, both the combinatorics, as can be seen from the \(\beta =2\) case in [2], and also the weight are significantly more complicated and we do not pursue this further in this paper. We hope to return to this question in future work. It is worth mentioning that connections to Gaussian log-correlated fields and multiplicative chaos measures also exist in the setting of Sp(2N) and SO(2N) random matrices, see [20, 31] for more details. We expect that these results should extend to the corresponding general \(\beta \)-ensemble versions but such questions have not been explored yet. Finally, it would also be possible to apply the method presented in this paper, with more involved computations, at a higher level of symmetric functions, for the Macdonald weight, which involves two parameters; but in this case the definition of the moments of moments needs to be modified accordingly.

2 A Combinatorial Representation for the Moments for Finite N

We first need a number of preliminaries from symmetric function theory. We write \(\mathbb {Z}_+=\{0,1,2,3,\ldots \}\). We define the space of non-negative signatures of length \(M\in \mathbb {N}\) by:

$$\begin{aligned} \mathsf {S}_+^{(M)}=\big \{(\lambda _1,\ldots ,\lambda _M)\in \mathbb {Z}_+^M:\lambda _1\ge \lambda _2 \ge \cdots \ge \lambda _M \big \}. \end{aligned}$$

For \(\lambda \in \mathsf {S}_+^{(M)}\) we write \(|\lambda |=\sum _{i=1}^M \lambda _i\). We say that \(\mu \in \mathsf {S}_+^{(M)}\) and \(\lambda \in \mathsf {S}_+^{(M+1)}\) interlace and write \(\mu \prec \lambda \) if:

$$\begin{aligned} \lambda _1\ge \mu _1 \ge \lambda _2\ge \cdots \ge \lambda _M\ge \mu _M\ge \lambda _{M+1}. \end{aligned}$$
(10)

We also call a sequence of signatures which interlace a discrete interlacing array. For \(\mu \in \mathsf {S}_+^{(M)}, \lambda \in \mathsf {S}_+^{(M+1)}\) that interlace we define the following non-negative weight:

$$\begin{aligned} \psi _{\lambda /\mu }^{(\delta )}=\prod _{1\le i \le j \le M} \frac{(\mu _i-\mu _j+\delta (j-i)+\delta )_{\mu _j-\lambda _{j+1}}(\lambda _i-\mu _j+\delta (j-i)+1)_{\mu _j-\lambda _{j+1}}}{(\mu _i-\mu _j+\delta (j-i)+1)_{\mu _j-\lambda _{j+1}}(\lambda _i-\mu _j+\delta (j-i)+\delta )_{\mu _j-\lambda _{j+1}}} , \end{aligned}$$
(11)

where \((t)_m=t(t+1)\cdots (t+m-1)=\frac{\Gamma (t+m)}{\Gamma (t)}\) is the Pochhammer symbol. Also, observe that \(\psi _{\lambda /\mu }^{(1)}\equiv 1\).
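
The weight (11) is straightforward to evaluate numerically. The following minimal Python sketch (our own illustration; the helper names poch and psi are ours) implements it directly from the definition and checks that \(\psi _{\lambda /\mu }^{(1)}\equiv 1\) on an example.

```python
def poch(t, m):
    """Pochhammer symbol (t)_m = t (t + 1) ... (t + m - 1)."""
    out = 1.0
    for r in range(m):
        out *= t + r
    return out

def psi(lam, mu, delta):
    """The weight psi_{lam/mu}^{(delta)} of (11); mu and lam interlace, len(lam) = len(mu) + 1."""
    M = len(mu)
    out = 1.0
    for i in range(1, M + 1):
        for j in range(i, M + 1):
            m = mu[j - 1] - lam[j]                        # mu_j - lam_{j+1}
            a = mu[i - 1] - mu[j - 1] + delta * (j - i)   # mu_i - mu_j + delta (j - i)
            b = lam[i - 1] - mu[j - 1] + delta * (j - i)  # lam_i - mu_j + delta (j - i)
            out *= poch(a + delta, m) * poch(b + 1, m) / (poch(a + 1, m) * poch(b + delta, m))
    return out

assert abs(psi([4, 2, 1], [3, 1], 1.0) - 1.0) < 1e-12   # delta = 1: the weight is identically 1
print(psi([4, 2, 1], [3, 1], 0.5))                      # a non-trivial value at delta = 1/2 (beta = 4)
```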

Definition 2.1

Let \(\delta >0\) and \(\lambda \in \mathsf {S}_+^{(M)}\). Then, we define the Jack polynomial indexed by \(\lambda \) by the combinatorial formula

$$\begin{aligned} \mathcal {P}_{\lambda }\left( x_1,x_2,\ldots ,x_M;\delta \right) =\sum _{\lambda ^{(1)}\prec \lambda ^{(2)}\prec \cdots \prec \lambda ^{(M-1)}\prec \lambda ^{(M)}=\lambda } \psi ^{(\delta )}_{\lambda /\lambda ^{(M-1)}}\psi ^{(\delta )}_{\lambda ^{(M-1)}/\lambda ^{(M-2)}}\cdots \psi ^{(\delta )}_{\lambda ^{(2)}/\lambda ^{(1)}} \times \nonumber \\ x^{|\lambda |-|\lambda ^{(M-1)}|}_M x_{M-1}^{|\lambda ^{(M-1)}|-|\lambda ^{(M-2)}|} \cdots x_2^{|\lambda ^{(2)}|-|\lambda ^{(1)}|} x_1^{|\lambda ^{(1)}|}. \end{aligned}$$
(12)

Remark 2.2

Here we use the notation conventions of Okounkov and Olshanski from [38], with our fixed parameter \(\delta =\theta \) from their paper. Often in the literature, see [34, 42], the inverse parameter \(\alpha =1/\theta \) is used. The combinatorial definition above is a consequence of the branching rule for Jack polynomials, see Sect. 2.3 in [38]. The Jack polynomials are in fact symmetric (although this is not immediately evident from the definition above) and are orthogonal with respect to the C\(\beta \)E weight, see [34, 35, 42] for more details. Finally, in the special case \(\delta =1\) the Jack polynomials specialise to the Schur polynomials: \(\mathcal {P}_\lambda (\cdot ;1)=\mathsf {s}_{\lambda }(\cdot )\).
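
As a small check of Definition 2.1 and of the specialisations mentioned above, the following sketch (our own illustration; it reuses the poch and psi helpers from the sketch following (11), and the names interlacing_below, jack and schur are ours) evaluates \(\mathcal {P}_{\lambda }\) by enumerating the interlacing arrays in (12) and compares it, for \(\delta =1\), with the bialternant formula for the Schur polynomial; it also checks symmetry in the variables at a generic \(\delta \).

```python
import numpy as np
from itertools import product

def interlacing_below(lam):
    """All signatures mu with mu interlacing below lam (one coordinate shorter), cf. (10)."""
    ranges = [range(lam[i + 1], lam[i] + 1) for i in range(len(lam) - 1)]
    return [list(mu) for mu in product(*ranges)]

def jack(lam, xs, delta):
    """P_lam(x_1, ..., x_M; delta) via the branching rule underlying (12); needs len(xs) == len(lam)."""
    if len(lam) == 1:
        return xs[0] ** lam[0]
    return sum(psi(lam, mu, delta) * xs[-1] ** (sum(lam) - sum(mu)) * jack(mu, xs[:-1], delta)
               for mu in interlacing_below(lam))

def schur(lam, xs):
    """Bialternant formula s_lam = det(x_i^{lam_j + n - j}) / det(x_i^{n - j})."""
    n = len(xs)
    num = np.linalg.det(np.array([[x ** (lam[j] + n - 1 - j) for j in range(n)] for x in xs]))
    den = np.linalg.det(np.array([[x ** (n - 1 - j) for j in range(n)] for x in xs]))
    return num / den

xs = [0.9, 0.6, 0.3]
assert abs(jack([2, 1, 0], xs, 1.0) - schur([2, 1, 0], xs)) < 1e-10                    # delta = 1 gives Schur
assert abs(jack([2, 1, 0], xs, 0.5) - jack([2, 1, 0], [0.3, 0.9, 0.6], 0.5)) < 1e-10   # symmetry in the variables
```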

The following proposition due to Matsumoto, which is a special case of the results of Sect. 4.2 in [35], will be our starting point. This proposition is the C\(\beta \)E generalization of the results of Bump and Gamburd [11] on the CUE (\(\beta =2\)) characteristic polynomial which formed the starting point of the investigation in [3].

Proposition 2.3

(Matsumoto [35]) Let \(\beta >0\) and \(N,k,q\in \mathbb {N}\). Then, we have:

$$\begin{aligned}&\mathbb {E}_N^{(\beta )}\left[ \big |{\Psi }^{(N)}_{\mathbf {z}}(t_1)\big |^{2q}\cdots \big |{\Psi }^{(N)}_{\mathbf {z}}(t_k)\big |^{2q}\right] \\&=\frac{\mathcal {P}_{(N,\ldots ,N,0,\ldots ,0)}\left( e^{it_1} ,\ldots , e^{it_1},e^{it_2},\ldots ,e^{it_2}, \ldots , e^{it_k},\ldots , e^{it_k};\frac{2}{\beta }\right) }{\prod _{j=1}^k e^{iNqt_j}} \end{aligned}$$

where each variable \(e^{it_j}\), for \(j=1,\ldots ,k\) appears 2q times and \((N,\ldots ,N,0,\ldots ,0)\in \mathsf {S}_+^{(2kq)}\) consists of kq N’s and kq 0’s.

We will first obtain a preliminary combinatorial representation for \(\text {MoM}_N^{(\beta )}(k;q)\). Towards this end we begin by defining the following set \(\mathsf {J}_N(k;q)\). It consists of tuples of non-negative signatures \({\Lambda }=\left( \lambda ^{(1)},\lambda ^{(2)},\lambda ^{(3)},\ldots , \lambda ^{(2kq-1)},\lambda ^{(2kq)}\right) \) so that: \(\lambda ^{(i)}\in \mathsf {S}_+^{(i)}\),

$$\begin{aligned} \lambda ^{(1)}\prec \lambda ^{(2)}\prec \lambda ^{(3)}\prec \cdots \prec \lambda ^{(2kq-1)}\prec \lambda ^{(2kq)}, \end{aligned}$$

\(\lambda ^{(2kq)}=(N,\ldots ,N,0,\ldots ,0)\), where both the N’s and the 0’s appear kq times, and moreover the following \((k-1)\) sum constraints are satisfied:

$$\begin{aligned} \big |\lambda ^{(2jq)}\big |=\sum _{i=1}^{2jq}\lambda _i^{(2jq)}=Njq, \ \ j=1, \ldots , k-1. \end{aligned}$$

Proposition 2.4

Let \(\beta >0\) and \(N,k,q\in \mathbb {N}\). Then, we have:

$$\begin{aligned} \text {MoM}_N^{(\beta )}(k;q)=\sum _{{\Lambda }\in \mathsf {J}_N(k;q)}\psi _{\lambda ^{(2kq)}/\lambda ^{(2kq-1)}}^{\left( \frac{2}{\beta }\right) }\psi _{\lambda ^{(2kq-1)}/\lambda ^{(2kq-2)}}^{\left( \frac{2}{\beta }\right) }\cdots \psi _{\lambda ^{(3)}/\lambda ^{(2)}}^{\left( \frac{2}{\beta }\right) } \psi _{\lambda ^{(2)}/\lambda ^{(1)}}^{\left( \frac{2}{\beta }\right) }. \end{aligned}$$
(13)

Proof

We first apply Fubini’s theorem:

$$\begin{aligned} \text {MoM}_N^{(\beta )}(k;q)=\left( \frac{1}{2\pi }\right) ^{k}\int _0^{2\pi }\cdots \int _0^{2\pi }\mathbb {E}_N^{(\beta )}\left[ \big |{\Psi }^{(N)}_{\mathbf {z}}(t_1)\big |^{2q}\cdots \big |{\Psi }^{(N)}_{\mathbf {z}}(t_k)\big |^{2q}\right] dt_1\cdots dt_k. \end{aligned}$$

Then, we use Proposition 2.3 and Definition 2.1. The result then easily follows from the fact that, with \(c\in \mathbb {Z}\):

$$\begin{aligned} \frac{1}{2\pi }\int _0^{2\pi }e^{ict} dt={\left\{ \begin{array}{ll}1, \ \ c=0,\\ 0, \ \ \text {otherwise}. \end{array}\right. } \end{aligned}$$

\(\square \)
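
For small parameters the right-hand side of (13) can be evaluated by brute force, which gives a concrete check of the proposition. The sketch below (our own illustration; it reuses psi from the sketch following (11) and interlacing_below from the sketch following Remark 2.2, and the name mom_combinatorial is ours) enumerates \(\mathsf {J}_N(k;q)\) recursively starting from the fixed top row. For \(\beta =2\) and \(k=q=1\) the weight is identically 1 and the sum simply counts the \(N+1\) elements of \(\mathsf {J}_N(1;1)\), in agreement with the well-known CUE value \(\mathbb {E}_N^{(2)}\big [\big |{\Psi }^{(N)}_{\mathbf {z}}(t)\big |^{2}\big ]=N+1\).

```python
def mom_combinatorial(N, k, q, beta):
    """Evaluate the right-hand side of (13) by enumerating J_N(k;q) from the top row down."""
    delta = 2.0 / beta
    top = [N] * (k * q) + [0] * (k * q)
    # row length -> required sum, encoding the (k - 1) constraints |lam^{(2jq)}| = N j q
    constrained = {2 * j * q: N * j * q for j in range(1, k)}
    total = 0.0

    def descend(lam, weight):
        nonlocal total
        if len(lam) == 1:
            total += weight
            return
        for mu in interlacing_below(lam):
            if len(mu) in constrained and sum(mu) != constrained[len(mu)]:
                continue
            descend(mu, weight * psi(lam, mu, delta))

    descend(top, 1.0)
    return total

assert abs(mom_combinatorial(5, 1, 1, 2.0) - 6.0) < 1e-9   # beta = 2, k = q = 1: equals N + 1 = 6
print(mom_combinatorial(3, 2, 1, 1.0))                      # a small beta = 1, k = 2, q = 1 value
```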

We will now obtain an alternative combinatorial representation for \(\text {MoM}_N^{(\beta )}(k;q)\) which is well-suited for taking the large N limit. The following definition is a key ingredient.

Definition 2.5

Let \(N,k,q \in \mathbb {N}\). We define the set \(\mathsf {I}_N(k;q)\) as follows. It consists of tuples of non-negative signatures \({\Lambda }=\left( \lambda ^{(1)},\lambda ^{(2)},\ldots ,\lambda ^{(kq-1)},\lambda ^{(kq)}=\tilde{\lambda }^{(kq)},\tilde{\lambda }^{(kq-1)},\ldots , \tilde{\lambda }^{(2)},\tilde{\lambda }^{(1)}\right) \) so that: \(\lambda ^{(i)},\tilde{\lambda }^{(i)}\in \mathsf {S}_+^{(i)}\),

$$\begin{aligned} \lambda ^{(1)}\prec \lambda ^{(2)} \prec \lambda ^{(3)}\prec \cdots \prec \lambda ^{(kq)}=\tilde{\lambda }^{(kq)}\succ \cdots \succ \tilde{\lambda }^{(3)} \succ \tilde{\lambda }^{(2)} \succ \tilde{\lambda }^{(1)}, \end{aligned}$$

\(\lambda _i^{(j)},\tilde{\lambda }_i^{(j)}\in \llbracket 0,N \rrbracket =\{0,1,\ldots ,N\}\) and finally the following \((k-1)\) sum constraints are satisfied:

$$\begin{aligned} \big |\lambda ^{(2jq)}\big |&=\sum _{i=1}^{2jq}\lambda _i^{(2jq)}=Njq, \ \ j=1,\ldots ,\big \lfloor \frac{k}{2}\big \rfloor ,\\ \big |\tilde{\lambda }^{(2jq)}\big |&=\sum _{i=1}^{2jq}\tilde{\lambda }_i^{(2jq)}=Njq, \ \ j=1,\ldots ,\big \lfloor \frac{k}{2}\big \rfloor . \end{aligned}$$

Observe that \(\mathsf {I}_N(k;q)\) has \((kq)^2-(k-1)\) free (non-fixed) coordinates.

Fig. 1: A figure showing the shaded triangles of fixed coordinates of 0’s and N’s. This results in two discrete interlacing arrays joined at the top row as shown in the figure in green. The solid red lines correspond to the sum constraints in \(\mathsf {I}_N(k;q)\), while their continuations involving the dashed red part correspond to the sum constraints in \(\mathsf {J}_N(k;q)\); in the figure \(k=5\) so we have 4 such constraints (Color figure online).

Lemma 2.6

Let \(N,k,q\in \mathbb {N}\). Then, there exists a bijection \(\mathcal {S}\) between \(\mathsf {J}_N(k;q)\) and \(\mathsf {I}_N(k;q)\)

$$\begin{aligned} \mathcal {S}:\mathsf {J}_N(k;q)&\longrightarrow \mathsf {I}_N(k;q), \end{aligned}$$

given in terms of the coordinates by:

$$\begin{aligned} \lambda ^{(i)}&\mapsto \lambda ^{(i)}, \ \ i=1,\ldots , kq-1\\ \lambda ^{(kq)}&\mapsto \lambda ^{(kq)}=\tilde{\lambda }^{(kq)}, \\ \left( \lambda ^{(i)}_{i-kq+1},\ldots ,\lambda ^{(i)}_{kq} \right)&\mapsto \left( \tilde{\lambda }_1^{(2kq-i)}, \ldots , \tilde{\lambda }_{2kq-i}^{(2kq-i)}\right) , \ \ i=kq+1, \ldots , 2kq-1. \end{aligned}$$

Proof

The key observation is that the interlacing constraints and the form of the top row \(\lambda ^{(2kq)}=(N,\ldots ,N,0,\ldots ,0)\) fix two triangles of coordinates, one of them filled with 0’s and the other one with N’s, see Fig. 1 for an illustration. This gives two discrete interlacing arrays joined at their respective top rows. The rest of the proof is essentially a relabelling of the free coordinates. Finally, it is easily seen that the sum constraints transform in the desired way. \(\square \)

Now we would like to understand how the weights \(\psi \) transform under the bijection \(\mathcal {S}\). For this it will be convenient to introduce some more notation. For any \(M\in \mathbb {N}\) and \(\lambda \in \mathsf {S}_+^{(M)}\) with \(\lambda _1\le N\) we define:

$$\begin{aligned} \mathfrak {e}_N(\lambda )=(N,\lambda _1,\lambda _2,\ldots ,\lambda _M,0)\in \mathsf {S}_+^{(M+2)}. \end{aligned}$$

By convention \(\mathfrak {e}_N(\emptyset )=(N,0)\). Also, observe that if \(\lambda \in \mathsf {S}^{(M)}_+\), \(\nu \in \mathsf {S}_+^{(M+1)}\) with \(\lambda \prec \nu \) and moreover \(\lambda _1,\nu _1\le N\) then \(\nu \prec \mathfrak {e}_N(\lambda )\).

Lemma 2.7

Let \(\beta >0\) and \(N,k,q\in \mathbb {N}\). Then, under the bijection \(\mathcal {S}\) between \(\mathsf {J}_N(k;q)\) and \(\mathsf {I}_N(k;q)\) we have

$$\begin{aligned} \psi ^{\left( \frac{2}{\beta }\right) }_{\lambda ^{(2kq-i)}/\lambda ^{(2kq-i-1)}}=\psi ^{\left( \frac{2}{\beta }\right) }_{\mathfrak {e}_N\left( \tilde{\lambda }^{(i)}\right) /\tilde{\lambda }^{(i+1)}} , \ \ i=0,1,\ldots ,kq-1, \end{aligned}$$

where we use the convention \(\tilde{\lambda }^{(0)}=\emptyset \).

Proof

Direct computation by noting that when \(\mu _i=\lambda _i\) or \(\mu _j=\lambda _{j+1}\) then the factor corresponding to the indices \((i,j)\) in the product definition (11) of \(\psi _{\lambda /\mu }^{(\delta )}\) is identically 1. \(\square \)

Proposition 2.8

Let \(\beta >0\) and \(N,k,q\in \mathbb {N}\). Then, we have:

$$\begin{aligned}&\text {MoM}_N^{(\beta )}(k;q)\nonumber \\&=\sum _{{\Lambda }\in \mathsf {I}_N(k;q)}\psi _{\mathfrak {e}_N(\emptyset )/\tilde{\lambda }^{(1)}}^{\left( \frac{2}{\beta }\right) }\psi _{\mathfrak {e}_N(\tilde{\lambda }^{(1)})/\tilde{\lambda }^{(2)}}^{\left( \frac{2}{\beta }\right) }\cdots \psi _{\mathfrak {e}_N(\tilde{\lambda }^{(kq-1)})/\lambda ^{(kq)}}^{\left( \frac{2}{\beta }\right) }\psi _{\lambda ^{(kq)}/\lambda ^{(kq-1)}}^{\left( \frac{2}{\beta }\right) }\cdots \psi _{\lambda ^{(2)}/\lambda ^{(1)}}^{\left( \frac{2}{\beta }\right) }. \end{aligned}$$
(14)

Proof

This follows by combining Proposition 2.4 along with Lemmas 2.6 and 2.7. \(\square \)

3 The Large N Limit

3.1 Proof of Proposition 1.2

We begin with some preliminaries. We need to introduce the continuous analogues of the notions and objects we saw in the discrete setting of Sect. 2. For any \(M\in \mathbb {N}\), we define the Weyl chamber with non-negative coordinates:

$$\begin{aligned} \mathsf {W}_+^{(M)}=\big \{(x_1,\ldots ,x_M)\in \mathbb {R}_+^M:x_1\ge x_2 \ge \cdots \ge x_M \big \}. \end{aligned}$$

For \(x\in \mathsf {W}_+^{(M)}\) we write \(|x|=\sum _{i=1}^M x_i\). We say that \(y\in \mathsf {W}_+^{(M)}\) and \(x\in \mathsf {W}_+^{(M+1)}\) interlace and still denote this by \(y \prec x\) if the same inequalities (10) as in the discrete case hold. Also, similarly to the discrete setting we call a sequence of configurations which interlace a continuous interlacing array. We now define the continuous analogue of \(\mathsf {I}_N(k;q)\).

Definition 3.1

Let \(k,q \in \mathbb {N}\). We define the set \(\mathsf {I}_c(k;q)\) as follows. It consists of tuples \(\mathsf {X}=\left( x^{(1)},x^{(2)},\ldots ,x^{(kq-1)},x^{(kq)}=\tilde{x}^{(kq)},\tilde{x}^{(kq-1)},\ldots , \tilde{x}^{(2)},\tilde{x}^{(1)}\right) \) so that: \(x^{(i)},\tilde{x}^{(i)}\in \mathsf {W}_+^{(i)}\),

$$\begin{aligned} x^{(1)}\prec x^{(2)} \prec x^{(3)}\prec \cdots \prec x^{(kq)}=\tilde{x}^{(kq)}\succ \cdots \succ \tilde{x}^{(3)} \succ \tilde{x}^{(2)} \succ \tilde{x}^{(1)}, \end{aligned}$$

\(x_i^{(j)},\tilde{x}_i^{(j)}\in [0,1]\) and finally the following \((k-1)\) sum constraints are satisfied:

$$\begin{aligned} \big |x^{(2jq)}\big |&=\sum _{i=1}^{2jq}x_i^{(2jq)}=jq, \ \ j=1,\ldots ,\left\lfloor \frac{k}{2}\right\rfloor ,\\ \big |\tilde{x}^{(2jq)}\big |&=\sum _{i=1}^{2jq}\tilde{x}_i^{(2jq)}=jq, \ \ j=1,\ldots ,\left\lfloor \frac{k}{2}\right\rfloor . \end{aligned}$$

Observe that, as in the discrete setting, \(\mathsf {I}_c(k;q)\) has \((kq)^2-(k-1)\) free (non-fixed) coordinates. We also define the continuous counterpart of \(\mathfrak {e}_N\). For any \(M\in \mathbb {N}\) and \(x \in \mathsf {W}_+^{(M)}\) with \(x_1\le 1\) we define:

$$\begin{aligned} \mathfrak {e}_c(x)=(1,x_1,x_2,\ldots ,x_M,0)\in \mathsf {W}_+^{(M+2)}. \end{aligned}$$

By convention \(\mathfrak {e}_c(\emptyset )=(1,0)\). As in the discrete case, observe that if \(y \in \mathsf {W}^{(M)}_+\), \(x \in \mathsf {W}_+^{(M+1)}\) with \(y \prec x\) and moreover \(x_1,y_1\le 1\) then \(x \prec \mathfrak {e}_c(y)\).

For \(y\in \mathsf {W}_+^{(M)}, x\in \mathsf {W}_+^{(M+1)}\) which interlace we define the following non-negative weight, which is the continuous analogue of \(\psi _{\lambda /\mu }^{(\delta )}\):

$$\begin{aligned}&\phi _{M,M+1}^{(\delta )}(y,x)\\&=\frac{1}{\Gamma (\delta )^M}\prod _{1\le i<j \le M}(y_i-x_{j+1})^{\delta -1}(y_i-y_j)^{1-\delta }(x_i-y_j)^{\delta -1}(x_i-x_{j+1})^{1-\delta }\\&\times \prod _{i=1}^M(y_i-x_{i+1})^{\delta -1}(x_i-y_i)^{\delta -1}(x_i-x_{i+1})^{1-\delta }\\&=\frac{1}{\Gamma (\delta )^M}\prod _{i=1}^{M+1}\prod _{j=1}^M|x_i-y_j|^{\delta -1}\prod _{1\le i<j\le M+1}(x_i-x_j)^{1-\delta }\prod _{1\le i<j \le M}(y_i-y_j)^{1-\delta }. \end{aligned}$$
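
The following small Python sketch (our own illustration; the names phi_first and phi_second are ours, and numpy is assumed) implements \(\phi _{M,M+1}^{(\delta )}\) in both of the product forms displayed above and checks numerically, on a random interlacing pair, that they agree and that \(\phi _{M,M+1}^{(1)}\equiv 1\).

```python
import numpy as np
from math import gamma

def phi_first(y, x, delta):
    """First displayed product formula for phi^{(delta)}_{M,M+1}(y, x); len(x) = len(y) + 1, y interlaces x."""
    M = len(y)
    out = 1.0 / gamma(delta) ** M
    for i in range(1, M + 1):
        for j in range(i + 1, M + 1):
            out *= ((y[i - 1] - x[j]) * (x[i - 1] - y[j - 1])) ** (delta - 1)
            out *= ((y[i - 1] - y[j - 1]) * (x[i - 1] - x[j])) ** (1 - delta)
        out *= ((y[i - 1] - x[i]) * (x[i - 1] - y[i - 1])) ** (delta - 1) * (x[i - 1] - x[i]) ** (1 - delta)
    return out

def phi_second(y, x, delta):
    """Second displayed formula: all cross factors |x_i - y_j| and the two same-row products."""
    M = len(y)
    out = 1.0 / gamma(delta) ** M
    for xi in x:
        for yj in y:
            out *= abs(xi - yj) ** (delta - 1)
    for i in range(M + 1):
        for j in range(i + 1, M + 1):
            out *= (x[i] - x[j]) ** (1 - delta)
    for i in range(M):
        for j in range(i + 1, M):
            out *= (y[i] - y[j]) ** (1 - delta)
    return out

rng = np.random.default_rng(0)
x = np.sort(rng.random(4))[::-1]                                  # a point of the Weyl chamber W_+^{(4)}
y = np.array([rng.uniform(x[i + 1], x[i]) for i in range(3)])     # a point interlacing below x
for delta in (0.5, 1.0, 2.0):
    assert np.isclose(phi_first(y, x, delta), phi_second(y, x, delta), rtol=1e-9)
assert abs(phi_second(y, x, 1.0) - 1.0) < 1e-12                   # delta = 1 gives the constant weight 1
```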

Finally, we define the constant \(\mathfrak {c}^{(\beta )}(k;q)\), which turns out to be the leading order coefficient in the asymptotics of \(\text {MoM}_N^{(\beta )}(k;q)\), by the following integral expression:

$$\begin{aligned} \mathfrak {c}^{(\beta )}(k;q)=\int _{\mathsf {X}\in \mathsf {I}_c(k;q)} \prod _{M=1}^{kq-1}\phi _{M,M+1}^{\left( \frac{2}{\beta }\right) }\left( x^{(M)},x^{(M+1)}\right) \prod _{M=0}^{kq-1}\phi _{M+1,M+2}^{\left( \frac{2}{\beta }\right) }\left( \tilde{x}^{(M+1)},\mathfrak {e}_c\left( \tilde{x}^{(M)}\right) \right) d\mathsf {X}, \nonumber \\ \end{aligned}$$
(15)

where we have used the convention \(\tilde{x}^{(0)}=\emptyset \). It is important to note, as alluded to in the introduction and proven below, that when \(k\ge 2\) and \(\beta \) is large enough the integral defining \(\mathfrak {c}^{(\beta )}(k;q)\) is infinite. Nevertheless, since the integrand is non-negative, it is unambiguously defined for all \(\beta >0\). We also recall the definition of the set \(\mathcal {A}(k;q)\) given by, for any fixed \(k,q\in \mathbb {N}\):

$$\begin{aligned} \mathcal {A}(k;q)=\big \{\beta >0: \mathfrak {c}^{(\beta )}(k;q)<\infty \big \}. \end{aligned}$$

We are now ready to prove Proposition 1.2.

Proof

The idea is simple, namely a discrete to continuous scaling limit in going from a Riemann sum to an integral.

Using Proposition 2.8 we can write:

$$\begin{aligned}&\text {MoM}_N^{(\beta )}(k;q)\nonumber \\&=N^{\frac{2}{\beta }(kq)^2-(k-1)}\frac{1}{N^{(kq)^2-(k-1)}}N^{-\left( \frac{2}{\beta }-1\right) (kq)^2} \sum _{{\Lambda }\in \mathsf {I}_N(k;q)}\psi _{\mathfrak {e}_N(\emptyset )/\tilde{\lambda }^{(1)}}^{\left( \frac{2}{\beta }\right) }\psi _{\mathfrak {e}_N(\tilde{\lambda }^{(1)})/\tilde{\lambda }^{(2)}}^{\left( \frac{2}{\beta }\right) }\times \cdots \\&\qquad \cdots \times \psi _{\mathfrak {e}_N(\tilde{\lambda }^{(kq-1)})/\lambda ^{(kq)}}^{\left( \frac{2}{\beta }\right) }\psi _{\lambda ^{(kq)}/\lambda ^{(kq-1)}}^{\left( \frac{2}{\beta }\right) }\cdots \psi _{\lambda ^{(2)}/\lambda ^{(1)}}^{\left( \frac{2}{\beta }\right) }\\&=N^{\frac{2}{\beta }(kq)^2-(k-1)}\frac{1}{N^{(kq)^2-(k-1)}}\sum _{{\Lambda }\in \mathsf {I}_N(k;q)}N^{-\left( \frac{2}{\beta }-1\right) }\psi _{\mathfrak {e}_N(\emptyset )/\tilde{\lambda }^{(1)}}^{\left( \frac{2}{\beta }\right) }N^{-2\left( \frac{2}{\beta }-1\right) }\psi _{\mathfrak {e}_N(\tilde{\lambda }^{(1)})/\tilde{\lambda }^{(2)}}^{\left( \frac{2}{\beta }\right) }\times \cdots \\&\qquad \cdots \times N^{-kq\left( \frac{2}{\beta }-1\right) }\psi _{\mathfrak {e}_N(\tilde{\lambda }^{(kq-1)})/\lambda ^{(kq)}}^{\left( \frac{2}{\beta }\right) }N^{-(kq-1)\left( \frac{2}{\beta }-1\right) }\psi _{\lambda ^{(kq)}/\lambda ^{(kq-1)}}^{\left( \frac{2}{\beta }\right) }\cdots N^{-\left( \frac{2}{\beta }-1\right) }\psi _{\lambda ^{(2)}/\lambda ^{(1)}}^{\left( \frac{2}{\beta }\right) }. \end{aligned}$$

Now using \((t)_m=\frac{\Gamma (t+m)}{\Gamma (t)}\) we can rewrite the weight \(\psi _{\lambda /\mu }^{(\delta )}\), for \(\mu \in \mathsf {S}_+^{(M)}, \lambda \in \mathsf {S}_+^{(M+1)}\) that interlace, in the following suggestive way:

$$\begin{aligned} \psi _{\lambda /\mu }^{(\delta )}=\frac{1}{\Gamma (\delta )^M} \prod _{1\le i<j\le M}\frac{\Gamma (\mu _i-\lambda _{j+1}+\delta (j-i)+\delta )\Gamma (\mu _i-\mu _j+\delta (j-i)+1)}{\Gamma (\mu _i-\lambda _{j+1}+\delta (j-i)+1)\Gamma (\mu _i-\mu _j+\delta (j-i)+\delta )}\\ \times \frac{\Gamma (\lambda _i-\lambda _{j+1}+\delta (j-i)+1)\Gamma (\lambda _i-\mu _j+\delta (j-i)+\delta )}{\Gamma (\lambda _i-\lambda _{j+1}+\delta (j-i)+\delta )\Gamma (\lambda _i-\mu _j+\delta (j-i)+1)} \\\times \prod _{i=1}^M\frac{\Gamma (\mu _i-\lambda _{i+1}+\delta )\Gamma (\lambda _i-\lambda _{i+1}+1)\Gamma (\lambda _i-\mu _i+\delta )}{\Gamma (\mu _i-\lambda _{i+1}+1)\Gamma (\lambda _i-\lambda _{i+1}+\delta )\Gamma (\lambda _i-\mu _i+1)}. \end{aligned}$$

Then, making use of the standard approximation:

$$\begin{aligned} \frac{\Gamma (z+c)}{\Gamma (z+d)}=z^{c-d}+\mathcal {O}_{c,d}\left( z^{c-d-1}\right) , \end{aligned}$$

we obtain the following:

$$\begin{aligned} \text {MoM}_N^{(\beta )}(k;q)\sim N^{\frac{2}{\beta }(kq)^2-(k-1)}\frac{1}{N^{(kq)^2-(k-1)}}\sum _{{\Lambda }\in \mathsf {I}_N(k;q)} \prod _{M=1}^{kq-1}\phi _{M,M+1}^{\left( \frac{2}{\beta }\right) }\left( \frac{\lambda ^{(M)}}{N},\frac{\lambda ^{(M+1)}}{N}\right) \\\prod _{M=0}^{kq-1}\phi _{M+1,M+2}^{\left( \frac{2}{\beta }\right) }\left( \frac{\tilde{\lambda }^{(M+1)}}{N},\frac{\mathfrak {e}_N\left( \tilde{\lambda }^{(M)}\right) }{N}\right) . \end{aligned}$$

Then, using the Riemann sum approximation of an integral, which by assumption is finite since \(\beta \in \mathcal {A}(k;q)\), we finally obtain:

$$\begin{aligned}&\text {MoM}_N^{(\beta )}(k;q)\\&\sim N^{\frac{2}{\beta }(kq)^2-(k-1)}\int _{\mathsf {X}\in \mathsf {I}_c(k;q)} \prod _{M=1}^{kq-1}\phi _{M,M+1}^{\left( \frac{2}{\beta }\right) }\left( x^{(M)},x^{(M+1)}\right) \prod _{M=0}^{kq-1}\phi _{M+1,M+2}^{\left( \frac{2}{\beta }\right) }\left( \tilde{x}^{(M+1)},\mathfrak {e}_c\left( \tilde{x}^{(M)}\right) \right) d\mathsf {X}\\&=N^{\frac{2}{\beta }(kq)^2-(k-1)}\mathfrak {c}^{(\beta )}(k;q). \end{aligned}$$

To conclude we observe that \(\mathfrak {c}^{(\beta )}(k;q)\) is strictly positive since \(\mathsf {I}_c(k;q)\) has non-empty interior and the integrand is continuous and strictly positive when restricted there. \(\square \)
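
As a numerical illustration of the proposition (our own check, using only the Python standard library; the name mom_11 is ours): specialising (13) to \(k=q=1\), only the single row \(\lambda ^{(1)}=(m)\) with \(0\le m\le N\) is free and, by (11), \(\text {MoM}_N^{(\beta )}(1;1)=\sum _{m=0}^{N}\frac{(\delta )_m\,(N-m+1)_m}{(1)_m\,(N-m+\delta )_m}\) with \(\delta =\frac{2}{\beta }\). The sketch below evaluates this sum and compares \(N^{-\frac{2}{\beta }}\text {MoM}_N^{(\beta )}(1;1)\) with the limit \(\Gamma \left( \frac{2}{\beta }\right) /\Gamma \left( \frac{4}{\beta }\right) \) given by (5) and (9).

```python
from math import lgamma, exp, gamma

def mom_11(N, beta):
    """MoM_N^{(beta)}(1;1): the k = q = 1 case of (13), a single sum over the free coordinate m."""
    d = 2.0 / beta
    total = 0.0
    for m in range(N + 1):
        log_term = (lgamma(d + m) - lgamma(d)                 # times (d)_m
                    + lgamma(N + 1) - lgamma(N - m + 1)       # times (N - m + 1)_m
                    - lgamma(m + 1)                           # divided by (1)_m = m!
                    - lgamma(N + d) + lgamma(N - m + d))      # divided by (N - m + d)_m
        total += exp(log_term)
    return total

N = 2000
for beta in (1.0, 2.0, 4.0):
    d = 2.0 / beta
    print(beta, mom_11(N, beta) / N ** d, gamma(d) / gamma(2 * d))   # the two columns should be close
```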

3.2 Proof of Proposition 1.3

Before proving Proposition 1.3 we briefly comment on why the integral defining \(\mathfrak {c}^{(\beta )}(k;q)\) could be infinite when \(k\ge 2\) and \(\beta \) is large enough, while it is always finite for \(k=1\). Observe that, while the integrands appearing in the definitions of \(\mathfrak {c}^{(\beta )}(k;q)\) and \(\mathfrak {c}^{(\beta )}(1;kq)\) are identical, the corresponding integrals are over \(\mathsf {I}_c(k;q)\) and \(\mathsf {I}_c(1;kq)\), which are \((kq)^2-(k-1)\) and \((kq)^2\) dimensional respectively. Thus, for a fixed \(\beta >0\), a singularity of the integrand of a certain order can be integrable over \(\mathsf {I}_c(1;kq)\) (in fact this holds for all \(\beta >0\)) while it is not integrable over \(\mathsf {I}_c(k;q)\) for \(k\ge 2\). This can only happen when \(\beta \) is large enough, since when \(\beta \le 2\) the integrand is uniformly bounded, as we show below.

We now prove a number of results which combined give Proposition 1.3. We begin with the following lemma on the dependence of the weight \(\phi _{M,M+1}^{(\delta )}(y,x)\) on \(\delta \).

Lemma 3.2

Let \(M\in \mathbb {N}\) and \(\delta '\le \delta \). Then, for any \(y\in \mathsf {W}_+^{(M)}\cap [0,1]^{M}, x\in \mathsf {W}_+^{(M+1)}\cap [0,1]^{M+1}\) which interlace we have:

$$\begin{aligned} \Gamma (\delta )^M\phi _{M,M+1}^{(\delta )}(y,x)\le \Gamma (\delta ')^M\phi _{M,M+1}^{(\delta ')}(y,x). \end{aligned}$$

In particular, for \(\delta \ge 1\) since \(\phi ^{(1)}_{M,M+1}(y,x)\equiv 1\) we have:

$$\begin{aligned} \phi _{M,M+1}^{(\delta )}(y,x)\le \frac{1}{\Gamma (\delta )^M}. \end{aligned}$$

Proof

Observe that we can write:

$$\begin{aligned} \Gamma (\delta )^M\phi _{M,M+1}^{(\delta )}(y,x)=\left[ \phi _{M,M+1}(y,x)\right] ^{\delta -1}, \end{aligned}$$

where \(\phi _{M,M+1}\) is given by:

$$\begin{aligned} \phi _{M,M+1}(y,x)&= \prod _{1\le i<j \le M}(y_i-x_{j+1})(y_i-y_j)^{-1}(x_i-y_j)(x_i-x_{j+1})^{-1}\\ {}&\quad \times \prod _{i=1}^M(y_i-x_{i+1})(x_i-y_i)(x_i-x_{i+1})^{-1}. \end{aligned}$$

We show that, for any \(y\in \mathsf {W}_+^{(M)}\cap [0,1]^{M}, x\in \mathsf {W}_+^{(M+1)}\cap [0,1]^{M+1}\) which interlace, \(\phi _{M,M+1}(y,x)\le 1\) which suffices to establish the lemma. We have that:

$$\begin{aligned}&\phi _{M,M+1}(y,x) \\&= \prod _{1\le i< j\le M}(y_i-x_{j+1})(y_i-y_j)^{-1} \left( \frac{x_i-y_j}{x_i-x_{j+1}}\right) \prod _{i=1}^M(y_i-x_{i+1})\left( \frac{x_i-y_i}{x_i-x_{i+1}}\right) \\&\le \prod _{1\le i< j\le M}(y_i-x_{j+1})(y_i-y_j)^{-1}\prod _{i=1}^M(y_i-x_{i+1}), \end{aligned}$$

since \(x_{j+1}\le y_j\) because of the interlacing. We rewrite the last line as follows:

$$\begin{aligned} \underbrace{\prod _{1\le i<j\le M}(y_i-x_{j+1})\prod _{\begin{array}{c} 1\le i<j \le M \\ j\ne i+1 \end{array}}(y_i-y_j)^{-1}}_\text {(I)}\underbrace{\prod _{i=1}^{M-1}(y_i-y_{i+1})^{-1} \prod _{i=1}^M(y_i-x_{i+1})}_\text {(II)}. \end{aligned}$$

The second term (II) can be bounded as follows:

$$\begin{aligned} \text {(II)}=\prod _{i=1}^{M-1}(y_i-y_{i+1})^{-1} \prod _{i=1}^M(y_i-x_{i+1})= (y_M-x_{M+1}) \prod _{i=1}^{M-1}\left( \frac{y_i-x_{i+1}}{y_i-y_{i+1}}\right) \le 1, \end{aligned}$$

since \(y_{i+1}\le x_{i+1}\) and moreover \(y_M\le 1\) and \(x_{M+1}\ge 0\). While the first term (I) satisfies:

$$\begin{aligned} \text {(I)}&=\prod _{1\le i<j\le M}(y_i-x_{j+1})\prod _{\begin{array}{c} 1\le i<j \le M \\ j\ne i+1 \end{array}}(y_i-y_j)^{-1} =\prod _{i=1}^M\left[ \prod _{j=i+1}^M (y_i-x_{j+1})^{} \prod _{j=i+2}^M(y_i-y_j)^{-1} \right] \\&=\prod _{i=1}^M\left[ \left( \frac{y_i-x_{i+2}}{y_i-y_{i+2}}\right) \cdots \left( \frac{y_i-x_M}{y_i-y_M}\right) (y_i-x_{M+1})\right] \le 1, \end{aligned}$$

again since \(y_j\le x_j\) and also \(y_i\le 1\) and \(x_{M+1}\ge 0\). The conclusion follows. \(\square \)

The lemma above gives us the following corollary on the dependence on \(\beta \) of the leading order coefficient in the asymptotics.

Corollary 3.3

Let \(k,q\in \mathbb {N}\) and \(\beta \le \beta '\). Then, we have:

$$\begin{aligned} \mathfrak {c}^{(\beta )}(k;q)\le \left[ \frac{\Gamma \left( \frac{2}{\beta '}\right) }{\Gamma \left( \frac{2}{\beta }\right) }\right] ^{(kq)^2}\mathfrak {c}^{(\beta ')}(k;q). \end{aligned}$$

In particular, if \(\beta \le 2\) then:

$$\begin{aligned} \mathfrak {c}^{(\beta )}(k;q)\le \frac{1}{\Gamma \left( \frac{2}{\beta }\right) ^{(kq)^2}}\mathfrak {c}^{(2)}(k;q)=\frac{1}{\Gamma \left( \frac{2}{\beta }\right) ^{(kq)^2}} \text {volume}\left( \mathsf {I}_c(k;q)\right) <\infty . \end{aligned}$$

Remark 3.4

Observe that for \(k=1\) using the explicit formula (9) for \(\mathfrak {c}^{(\beta )}(1;q)\), see also Lemma 3.5, the corollary above is equivalent to:

$$\begin{aligned} t\mapsto \Gamma (t)^{q^2}\prod _{j=1}^q\frac{\Gamma \left( tj\right) }{\Gamma \left( t\left( q+j\right) \right) } \ \ \text { is non-increasing on } (0,\infty ). \end{aligned}$$
(16)

This elementary statement can alternatively be proven directly as follows. It suffices to show that the logarithm of (16) is non-increasing on \((0,\infty )\). The logarithmic derivative of (16) is equal to, using the notation \(\mathsf {D}(x)=\frac{d}{dx}\log \Gamma (x)\):

$$\begin{aligned} q^2\mathsf {D}(t)&+\sum _{j=1}^q \left[ j\mathsf {D}\left( jt\right) -(q+j)\mathsf {D}\left( t(q+j)\right) \right] \\&\quad \le q^2\mathsf {D}(t)+\sum _{j=1}^q \left[ j\mathsf {D}\left( jt\right) -(q+j)\mathsf {D}\left( jt\right) \right] \\&\quad =q^2\mathsf {D}(t)-q\sum _{j=1}^q\mathsf {D}(jt)\le q^2\mathsf {D}(t)-q^2\mathsf {D}(t)=0, \end{aligned}$$

where we have used the well-known fact that \(\mathsf {D}(\cdot )\) is non-decreasing on \((0,\infty )\). Then, (16) follows.
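
A quick numerical sanity check of (16) (our own illustration; it assumes scipy, and the name log_map_16 is ours):

```python
import numpy as np
from scipy.special import gammaln

def log_map_16(t, q):
    """Logarithm of the map in (16): q^2 log Gamma(t) + sum_j [log Gamma(t j) - log Gamma(t (q + j))]."""
    j = np.arange(1, q + 1)
    return q * q * gammaln(t) + np.sum(gammaln(t * j) - gammaln(t * (q + j)))

ts = np.linspace(0.05, 6.0, 200)
for q in (1, 2, 3):
    vals = np.array([log_map_16(t, q) for t in ts])
    assert np.all(np.diff(vals) <= 1e-9), "the map in (16) should be non-increasing on (0, infinity)"
```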

In the special cases \(k=1\) and \(k=2\) it is possible to give simpler integral expressions for \(\mathfrak {c}^{(\beta )}(1;q)\) and \(\mathfrak {c}^{(\beta )}(2;q)\) by computing the intermediate integrals over the interlacing arrays, except for the row at which the arrays are joined. In the case of \(\mathfrak {c}^{(\beta )}(1;q)\) the resulting integral, which is a special case of the Selberg integral, see for example [21], can be computed explicitly and gives (9).

Lemma 3.5

Let \(q\in \mathbb {N}\) and \(\beta >0\). Then, we have the following expressions:

$$\begin{aligned} \mathfrak {c}^{(\beta )}(1;q)&=\prod _{M=1}^{q}\frac{\Gamma \left( \frac{2}{\beta }\right) }{\Gamma \left( M\frac{2}{\beta }\right) ^2}\int _{\mathsf {W}_+^{(q)}\cap [0,1]^{q}}\prod _{1\le i<j \le q}(x_i-x_j)^{\frac{4}{\beta }}\prod _{i=1}^{q}\left[ x_i\left( 1-x_i\right) \right] ^{\frac{2}{\beta }-1}dx\nonumber \\ {}&=\prod _{i=1}^q\frac{\Gamma \left( \frac{2}{\beta }i\right) }{\Gamma \left( \frac{2}{\beta }(q+i)\right) }, \end{aligned}$$
(17)
$$\begin{aligned} \mathfrak {c}^{(\beta )}(2;q)&=\prod _{M=1}^{2q}\frac{\Gamma \left( \frac{2}{\beta }\right) }{\Gamma \left( M\frac{2}{\beta }\right) ^2}\int _{\mathsf {W}_+^{(2q)}\cap [0,1]^{2q},\ \sum _{i=1}^{2q}x_i=q}\prod _{1\le i<j \le 2q}(x_i-x_j)^{\frac{4}{\beta }}\prod _{i=1}^{2q}\left[ x_i\left( 1-x_i\right) \right] ^{\frac{2}{\beta }-1}dx. \end{aligned}$$
(18)

Proof

Observe that we have:

$$\begin{aligned}&\prod _{M=1}^{kq-1}\phi _{M,M+1}^{(\delta )}\left( x^{(M)},x^{(M+1)}\right) \nonumber \\&\quad =\prod _{M=1}^{kq-1}\frac{1}{\Gamma (\delta )^M}\prod _{M=1}^{kq-1}\prod _{i=1}^{M+1}\prod _{j=1}^M\big |x_i^{(M+1)}-x_j^{(M)}\big |^{\delta -1}\nonumber \\&\qquad \times \prod _{M=2}^{kq-1}\prod _{1\le i<j\le M}\left( x_i^{(M)}-x_j^{(M)}\right) ^{2-2\delta } \prod _{1\le i<j\le kq}\left( x_i^{(kq)}-x_j^{(kq)}\right) ^{1-\delta } \end{aligned}$$
(19)

and similarly (recall that \(x^{(kq)}=\tilde{x}^{(kq)}\)):

$$\begin{aligned}&\prod _{M=0}^{kq-1}\phi _{M+1,M+2}^{(\delta )}\left( \tilde{x}^{(M+1)},\mathfrak {e}_c\left( \tilde{x}^{(M)}\right) \right) \nonumber \\&\quad =\prod _{M=1}^{kq}\frac{1}{\Gamma (\delta )^M}\prod _{M=1}^{kq-1}\prod _{i=1}^{M+1}\prod _{j=1}^M\big |\tilde{x}_i^{(M+1)}-\tilde{x}_j^{(M)}\big |^{\delta -1}\prod _{M=2}^{kq-1}\prod _{1\le i<j\le M}\left( \tilde{x}_i^{(M)}-\tilde{x}_j^{(M)}\right) ^{2-2\delta } \nonumber \\&\qquad \times \prod _{1\le i<j\le kq}\left( x_i^{(kq)}-x_j^{(kq)}\right) ^{1-\delta }\prod _{i=1}^{kq}\left[ x_i^{(kq)}\left( 1-x_i^{(kq)}\right) \right] ^{\delta -1}. \end{aligned}$$
(20)

We notice that these weights are, up to a factor involving only \(x^{(kq)}\), given by the orbital beta probability distribution on continuous interlacing arrays with fixed top row \(x^{(kq)}\), see Definition 1.3 in [25], also [4, 16].

We now argue as follows. For \(k=1\) and \(k=2\) we fix the centre row \(x^{(kq)}\) (note that for \(k=2\) there is a single sum constraint only on this row) and perform the integrations over the two individual arrays with fixed top row \(x^{(kq)}\). This choice of the order of integration is possible by Tonelli’s theorem since the integrand is positive. Then, the corresponding integrals over the interlacing arrays with fixed top row \(x^{(kq)}\) are known to have an explicit evaluation given as follows:

$$\begin{aligned} \int _{x^{(1)}\prec x^{(2)}\prec x^{(3)}\prec \cdots \prec x^{(kq-1)}\prec x^{(kq)}} \prod _{M=1}^{kq-1}\phi _{M,M+1}^{(\delta )}\left( x^{(M)},x^{(M+1)}\right) dx^{(1)}dx^{(2)}dx^{(3)}\cdots dx^{(kq-1)}\\ =\Gamma (\delta )^{kq}\prod _{M=1}^{kq}\frac{1}{\Gamma (M\delta )}\prod _{1\le i<j\le kq}\left( x_i^{(kq)}-x_j^{(kq)}\right) ^\delta ,\\ \int _{\tilde{x}^{(1)}\prec \tilde{x}^{(2)}\prec \tilde{x}^{(3)}\prec \cdots \prec \tilde{x}^{(kq-1)}\prec \tilde{x}^{(kq)}=x^{(kq)}} \prod _{M=0}^{kq-1}\phi _{M+1,M+2}^{(\delta )}\left( \tilde{x}^{(M+1)},\mathfrak {e}_c\left( \tilde{x}^{(M)}\right) \right) d\tilde{x}^{(1)}d\tilde{x}^{(2)}d\tilde{x}^{(3)}\cdots d\tilde{x}^{(kq-1)}\\ =\prod _{M=1}^{kq}\frac{1}{\Gamma (M\delta )}\prod _{1\le i<j\le kq}\left( x_i^{(kq)}-x_j^{(kq)}\right) ^\delta \prod _{i=1}^{kq}\left[ x_i^{(kq)}\left( 1-x_i^{(kq)}\right) \right] ^{\delta -1}. \end{aligned}$$

This is exactly the evaluation of the normalisation constant for the orbital beta probability distribution given in displays (8) and (9) of Definition 1.3 in [25], see also Remark 1.4 therein. We note that the argument above would not work for \(k\ge 3\), as then the sum constraints involve rows other than the one at which the arrays are joined.

This readily gives the integral expressions for \(\mathfrak {c}^{(\beta )}(1;q)\) and \(\mathfrak {c}^{(\beta )}(2;q)\). The final expression for \(\mathfrak {c}^{(\beta )}(1;q)\) is then an immediate consequence of the explicit evaluation of Selberg’s integral, see [21]. As far as we can tell the integral expression for \(\mathfrak {c}^{(\beta )}(2;q)\) is not known to have an explicit evaluation and we leave it in the form (18). \(\square \)
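
The Selberg evaluation in (17) can also be checked numerically. The following Monte Carlo sketch (our own illustration; it assumes numpy, the function names are ours, and it is restricted to \(\beta \le 2\) so that the integrand is bounded) estimates the integral in (17) over the cube, divides by q! to pass to the Weyl chamber, and compares the result with the Gamma product on the right-hand side.

```python
import numpy as np
from math import gamma, factorial

def c1q_exact(beta, q):
    """Right-hand side of (9)/(17): prod_i Gamma(2i/beta) / Gamma(2(q+i)/beta)."""
    d = 2.0 / beta
    out = 1.0
    for i in range(1, q + 1):
        out *= gamma(d * i) / gamma(d * (q + i))
    return out

def c1q_monte_carlo(beta, q, n_samples=400_000, seed=1):
    """Monte Carlo estimate of the middle expression in (17); intended for beta <= 2 only."""
    d = 2.0 / beta
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, q))
    iu, ju = np.triu_indices(q, k=1)
    vdm = np.prod(np.abs(x[:, iu] - x[:, ju]) ** (2.0 * d), axis=1)     # prod_{i<j} |x_i - x_j|^{4/beta}
    weight = np.prod((x * (1.0 - x)) ** (d - 1.0), axis=1)              # prod_i [x_i (1 - x_i)]^{2/beta - 1}
    integral = np.mean(vdm * weight) / factorial(q)                     # integral over the ordered chamber
    prefactor = np.prod([gamma(d) / gamma(M * d) ** 2 for M in range(1, q + 1)])
    return prefactor * integral

for beta in (1.0, 2.0):
    print(beta, c1q_monte_carlo(beta, 2), c1q_exact(beta, 2))   # the two columns should agree closely
```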

Using formula (18) it is relatively straightforward to show that \(\mathcal {A}(2;q)=(0,4q^2)\).

Lemma 3.6

Let \(q \in \mathbb {N}\). Then, \(\mathcal {A}(2;q)=(0,4q^2)\).

Proof

We show that the integral (18) is finite if and only if \(\beta <4q^2\) (recall that by definition \(\beta >0\)). Observe that the integral is \((2q-1)\)-dimensional, after eliminating one of the variables using the sum constraint, for example \(x_{2q}=q-\sum _{i=1}^{2q-1}x_i\). We first claim, and justify at the end of the proof, that “singular manifolds” which are not full-dimensional, i.e. singularities over a number of variables \(\mathsf {n}<2q-1\), are always integrable for any \(\beta >0\). Hence, for now we restrict attention to the full-dimensional case. Observe that the only singularities occur when the coordinates are either 0 or 1 and each such singularity is of order \(\left( 1-\frac{2}{\beta }\right) \). The most singular points are then the ones all of whose coordinates are either 0 or 1, which by the sum constraint and the ordering of the coordinates uniquely identifies a single point \(\mathfrak {x}_{\star }(q)\) given by:

$$\begin{aligned} \mathfrak {x}_{\star }(q)=(\underbrace{1,\ldots ,1}_\text {{ q}},\underbrace{0,\ldots ,0}_\text {{ q}}). \end{aligned}$$

If any of the coordinates of a point are not 0 or 1 then the singularity at that point is automatically integrable for any \(\beta >0\). Finally, we need to take into account that for each pair of coordinates which coalesce there is a factor vanishing at order \(\frac{4}{\beta }\) in the integrand. Thus, putting everything together we obtain that the singularity at \(\mathfrak {x}_{\star }(q)\) is of order:

$$\begin{aligned} 2q\left( 1-\frac{2}{\beta }\right) +\left[ \left( {\begin{array}{c}q\\ 2\end{array}}\right) +\left( {\begin{array}{c}q\\ 2\end{array}}\right) \right] \left( -\frac{4}{\beta }\right) =2q-\frac{4}{\beta }q^2. \end{aligned}$$

This is integrable if and only if:

$$\begin{aligned} 2q-\frac{4}{\beta }q^2 < 2q-1, \end{aligned}$$

which gives \(\beta <4q^2\).

Returning to the claim we made earlier, suppose that we are looking at a singularity over a number of variables \(\mathsf {n}<2q-1\). Suppose \(\mathsf {n}_0\) of these variables are 0 and \(\mathsf {n}_1\) are 1, so that \(\mathsf {n}_0+\mathsf {n}_1\le \mathsf {n}\) (we must also have \(\mathsf {n}_1\le q\) due to the sum constraint but we will not need to use this extra restriction). Then, by completely analogous considerations to the ones above for \(\mathfrak {x}_\star (q)\), the order of the singularity at such a point is given by (note that when \(\mathsf {n}=2q-1\) and in the particular case of \(\mathfrak {x}_\star (q)\) above we also picked up the singularity coming from the fixed coordinate \(x_{2q}\)):

$$\begin{aligned} \left( \mathsf {n}_0+\mathsf {n}_1\right) \left( 1-\frac{2}{\beta }\right) + \left[ \left( {\begin{array}{c}\mathsf {n}_0\\ 2\end{array}}\right) +\left( {\begin{array}{c}\mathsf {n}_1\\ 2\end{array}}\right) \right] \left( -\frac{4}{\beta }\right) = \mathsf {n}_0+\mathsf {n}_1-\left( \mathsf {n}_0^2+\mathsf {n}_1^2\right) \frac{2}{\beta }. \end{aligned}$$

This singularity is then integrable for any \(\beta >0\), since:

$$\begin{aligned} \mathsf {n}_0+\mathsf {n}_1-\left( \mathsf {n}_0^2+\mathsf {n}_1^2\right) \frac{2}{\beta } <\mathsf {n} \end{aligned}$$

and this concludes the proof. \(\square \)

Fig. 2: The figures depict the restricted set of variables that we are integrating over in the proof of Lemma 3.7, shown shaded in grey. The variables we have removed (are not integrating over) are depicted as green particles. As shown in the figure, these correspond to the k squares defined in the proof of Lemma 3.7. The rows with the sum constraints are depicted as solid red lines. The figures also depict the definition of the point \(\mathfrak {x}(k;q)\), half of whose coordinates are 0’s and the other half 1’s as shown here. Finally, observe that in the special case \(k=2\) the point \(\mathfrak {x}(2;q)\) corresponds to the point \(\mathfrak {x}_{\star }(q)\) from the proof of Lemma 3.6 (Color figure online).

Fig. 3: By inspecting formulae (19), (20) for the integrand we make the following observations (where without loss of generality we assume that \(\beta >2\), since the integrand is uniformly bounded for \(\beta \le 2\) from Lemma 3.2). We have singularities, each of order \(\left( 1-\frac{2}{\beta }\right) \), when any of the coordinates of the centre row (where the two interlacing arrays are joined) are either 0 or 1, depicted as blue particles in the figure. Moreover, for any pair of coordinates on the same row which coalesce, depicted as dashed edges in the figure, we have a factor vanishing at order \(2\left( 1-\frac{2}{\beta }\right) \). Finally, for any pair of coordinates on two consecutive rows which coalesce, depicted as solid edges in the figure, we have a singularity of order \(\left( 1-\frac{2}{\beta }\right) \). Then, to compute the order of the singularity at a point we simply add the number of blue particles to the number of solid edges, subtract twice the number of dashed edges, and finally multiply the result by \(\left( 1-\frac{2}{\beta }\right) \). Thus, the order of the singularity at the point \(\mathfrak {x}(3;1)\) is \(\left[ 2\cdot 1+4\cdot 1\right] \left( 1-\frac{2}{\beta }\right) = 6 \left( 1-\frac{2}{\beta }\right) \) and similarly the order of the singularity at \(\mathfrak {x}(4;1)\) is \(\left[ 12\cdot 1+4\cdot 1-2\cdot 2\right] \left( 1-\frac{2}{\beta }\right) = 12 \left( 1-\frac{2}{\beta }\right) \) (Color figure online).

We now move on to prove the following result on the finiteness of \(\mathfrak {c}^{(\beta )}(k;q)\) when \(k\ge 2\); clearly for \(k=2\) this is a consequence of Lemma 3.6. Unfortunately for \(k\ge 3\), as far as we are aware, there is no simplification analogous to that of Lemma 3.5 and we need to analyse the integral over the whole interlacing array with constraints. To prove this result we simply exhibit a singularity of the integrand which is not integrable if \(\beta \ge 2kq^2\). This choice of singularity might seem to come out of thin air, but we give some intuition for it after the proof. We expect that this singularity is in fact the optimal one, in the sense that it gives the strictest restriction on \(\beta \), which would show that \(\mathcal {A}(k;q)=(0,2kq^2)\). We give some brief heuristics in support of this claim after the proof of the result.

Finally, the reader is advised to study Figs. 2 and 3, while reading the proof of Lemma 3.7, which help elucidate the argument.

Lemma 3.7

Let \(k,q \in \mathbb {N}\) with \(k\ge 2\). Then, \([2kq^2,\infty )\cap \mathcal {A}(k;q)=\emptyset \).

Proof

We will show that the integral over a restricted set of variables, that we define next, is infinite when \(\beta \ge 2kq^2\). We remove (from the set of all variables), namely we do not integrate over, the variables corresponding to k squares of coordinates, as shown in Fig. 2, each of side q. The i-th square is uniquely determined by two of its diagonal vertices which are given by the mid-coordinate of the first row above (the row corresponding to) the \((i-1)\)-th sum constraint and the mid-coordinate of the last row below (the row corresponding to) the i-th sum constraint, see Fig. 2. For the extreme cases \(i=1\) and \(i=k\) we take as the lower and upper vertices of the corresponding squares the coordinates \(x_{1}^{(1)}\) and \(\tilde{x}_1^{(1)}\) respectively, see Fig. 2. We have refrained from giving here the definition in terms of the indices of coordinates since it is too cumbersome and somewhat obscures the simple geometric picture. Observe that the corresponding integral over the restricted set of variables just defined is \((kq)^2-kq^2-(k-1)=(kq^2-1)(k-1)\) dimensional, since we have removed \(kq^2\) coordinates and we include all of the \((k-1)\) sum constraints which fix a variable each.

Restricting to the set of variables just described we then consider the point \(\mathfrak {x}(k;q)\) whose coordinates consist entirely of 0’s and 1’s, see Fig. 2 for an illustration. Note that this description uniquely identifies \(\mathfrak {x}(k;q)\): the rows involving the sum constraints are uniquely determined (half of the coordinates are 0’s and the other half 1’s and they are also ordered) and then the rest of the coordinates of \(\mathfrak {x}(k;q)\) are determined by the interlacing, see Fig. 2.

Moreover, a direct but tedious combinatorial computation using formulae (19), (20) (see Fig. 3 for some explanations and for the computation in simple cases; the general case is analogous but notationally cumbersome) gives that the singularity at the point \(\mathfrak {x}(k;q)\) is of order \(kq^2(k-1)\left( 1-\frac{2}{\beta }\right) \). This singularity is then not integrable if:

$$\begin{aligned} kq^2(k-1)\left( 1-\frac{2}{\beta }\right) \ge (kq^2-1)(k-1), \end{aligned}$$

which gives that for \(\beta \ge 2kq^2\) the integral is infinite. \(\square \)

We now give some informal heuristics in support of \(\mathcal {A}(k;q)=(0,2kq^2)\) and some intuition behind the definition of \(\mathfrak {x}(k;q)\). Observe that one of the main complications in studying the finiteness of (15) compared to (18), in addition to the obvious difficulty of having to analyse an integral over the whole array instead of a single row, is that singularities can arise not only when the coordinates of a point are 0’s and 1’s. Nevertheless, 0 and 1 are distinguished points in that we have some extra singular factors there corresponding to the coordinates of the centre row, see for example Fig. 3. Intuitively then, in order to obtain as strict a restriction on \(\beta \) as possible, one would like to look at singularities at points having many 0’s and 1’s among their coordinates, and indeed checking what happens in a number of different cases indicates that it is actually best to use only 0’s and 1’s. However, the fact that one needs to take the sum constraints into account complicates things; proving this claim rigorously appears quite messy and we do not pursue it further in this paper.

Assuming this unproven heuristic, it would then be possible to argue by direct computations that it is better (in that we get a stricter restriction for \(\beta \)) to look at a singularity at a point that involves all of the \((k-1)\) sum constraints. Due to the ordering of the coordinates on individual rows and the interlacing this uniquely identifies \(\mathfrak {x}(k;q)\) as the point with the minimal number of variables which achieves this. Finally, by some more calculations it is possible to prove that looking at a singularity at a point involving any additional coordinates to the ones already defining \(\mathfrak {x}(k;q)\) will not give a stricter restriction on \(\beta \), which would then imply that \(\mathcal {A}(k;q)=(0,2kq^2)\).

To conclude, simply putting everything together establishes Proposition 1.3.

Proof of Proposition 1.3

The statement follows by combining Corollary 3.3 and Lemmas 3.5, 3.6 and 3.7 above. \(\square \)

3.3 On \(\mathfrak {c}^{(\beta )}(2;q)\) and Integrable Systems

We discuss in some more detail the integral expression of the leading order coefficient in the asymptotics for the special case \(k=2\). By using Lemma 3.5 and writing the sum constraint as a Fourier integral we have:

$$\begin{aligned} \mathfrak {c}^{(\beta )}(2;q)&=\prod _{M=1}^{2q}\frac{\Gamma \left( \frac{2}{\beta }\right) }{\Gamma \left( M\frac{2}{\beta }\right) ^2}\int _{\mathsf {W}_+^{(2q)}\cap [0,1]^{2q},\ \sum _{i=1}^{2q}x_i=q}\prod _{1\le i<j \le 2q}(x_i-x_j)^{\frac{4}{\beta }}\prod _{i=1}^{2q}\left[ x_i\left( 1-x_i\right) \right] ^{\frac{2}{\beta }-1}dx\\&=\prod _{M=1}^{2q}\frac{\Gamma \left( \frac{2}{\beta }\right) }{\Gamma \left( M\frac{2}{\beta }\right) ^2}\int _{-\infty }^{\infty }dse^{2\pi i sq} \int _{\mathsf {W}_+^{(2q)}\cap [0,1]^{2q}}\prod _{1\le i<j \le 2q}(x_i-x_j)^{\frac{4}{\beta }}\\&\quad \prod _{i=1}^{2q}e^{-2\pi i s x_i} \left[ x_i\left( 1-x_i\right) \right] ^{\frac{2}{\beta }-1}dx. \end{aligned}$$

For \(\beta =2\), by the Andréief identity, the inner 2q-dimensional integral is a Hankel determinant corresponding to a certain special weight and is known to have a representation in terms of a particular case of the Painlevé V equation:

$$\begin{aligned} \left( t\frac{d^2}{dt^2}\sigma _{2q}(t)\right) ^2&=\left( \sigma _{2q}(t)+\left( 4q-t\right) \frac{d}{dt}\sigma _{2q}(t)\right) ^2\\&\quad -4\left( \frac{d}{dt}\sigma _{2q}(t)\right) ^2\left( (2q)^2-\sigma _{2q}(t)+t\frac{d}{dt}\sigma _{2q}(t)\right) , \end{aligned}$$

see [3, 7, 8] for the precise statement.

It is plausible that a connection to integrable systems exists for other values of \(\beta \) as well, especially for the COE and CSE cases, namely for \(\beta =1\) and \(\beta =4\) (from Lemma 3.6 we need to restrict to \(q>1\) for \(\mathfrak {c}^{(4)}(2;q)\) to be finite), and the formula we give above could be used as a starting point for such an investigation (as the special case of this formula for \(\beta =2\) was used in [8]). In particular, for \(\beta =4\) we see that the inner integral can in fact be written as a Pfaffian by making use of de Bruijn’s formula, see [18].

Finally, for \(\beta =2\) and \(k\ge 3\) the integral expression for \(\mathfrak {c}^{(2)}(k;q)\) can be somewhat simplified since it is possible to compute the intermediate integrals between two consecutive sum constraints in terms of spline functions [17, 39], see Sect. 4 of [3] for more details. It is unclear whether an analogous simplification exists for general \(\beta \).