Averages of products and ratios of characteristic polynomials in polynomial ensembles

Polynomial ensembles are a sub-class of probability measures within determinantal point processes. Examples include products of independent random matrices, with applications to Lyapunov exponents, and random matrices with an external field, which may serve as schematic models of quantum field theories with temperature. We first analyse expectation values of ratios of an equal number of characteristic polynomials in general polynomial ensembles. Using Schur polynomials, we show that polynomial ensembles constitute Giambelli compatible point processes, leading to a determinant formula for such ratios as in classical ensembles of random matrices. In the second part we introduce invertible polynomial ensembles, given, e.g., by random matrices with an external field. Expectation values of arbitrary ratios of characteristic polynomials are expressed in terms of multiple contour integrals. This generalises previous findings by one of the authors for a single ratio, in the context of eigenvector statistics in the complex Ginibre ensemble.


Introduction
In this paper we study correlation functions of characteristic polynomials in a sub-class of determinantal random point processes. They are called polynomial ensembles [39] and belong to biorthogonal ensembles in the sense of Borodin [10]. Polynomial ensembles are characterised by the fact that one of the two determinants in the joint density of points is given by a Vandermonde determinant, while the other one is kept general. Thus they generalise the classical ensembles of Gaussian random matrices [41]. Polynomial ensembles appear in various contexts as the joint distribution of eigenvalues (or singular values) of random matrices, see [27,15,19,20,3]. They enjoy many invariance properties at the level of the joint density, kernel and bi-orthogonal functions [38,35], and provide examples for realisations of multiple orthogonal polynomials, see e.g. [39,20,8], and of Muttalib-Borodin ensembles [42,10].
Random matrices enjoy many different applications in physics and beyond, see [2] and references therein. Polynomial ensembles in particular are relevant in the following contexts: Ensembles with an external field have been introduced as a tool to count intersection numbers of moduli spaces on Riemann surfaces [17]. In the application to the quantum field theory of the strong interactions, quantum chromodynamics (QCD), they have been used as a schematic model to study the influence of temperature on the chiral phase transition [29]. Detailed computations of Dirac operator eigenvalues [28,45] within this class of models have so far been restricted to supersymmetric techniques; they can now be addressed in the framework of biorthogonal ensembles.
Recently, sums and products of random matrices have been shown to lead to polynomial ensembles [3,12,36]; see [4] for a review. This has important consequences for the spectrum of Lyapunov exponents, relating this multiplicative process to the additive process of Dyson's Brownian motion [6]. Last but not least, polynomial ensembles of Pólya type have led to a deeper understanding of the relation between singular values and complex eigenvalues [34,35], where a bijection between the respective point processes was constructed.
In this paper we consider expectation values of products and ratios of characteristic polynomials within the class of polynomial ensembles. While these can be used to generate multi-point resolvents and thus arbitrary k-point density correlation functions, as well as the kernel of bi-orthogonal polynomials, they are of interest in their own right as well. Examples of applications include the partition function of QCD with an arbitrary number of fermionic flavours [46]. In mathematics, the Montgomery conjecture in conjunction with moments of the Riemann zeta-function has led to important insights [32].
Mathematical properties of ratios of characteristic polynomials have equally received attention, and we will not be able to do full justice to the existing literature. Based on earlier works such as [22] and [7], the determinantal structure of the expectation value of ratios of characteristic polynomials in orthogonal polynomial ensembles was expressed in several equivalent forms, given in terms of orthogonal polynomials, their Cauchy transforms, or their respective kernels. This structure was generalised to products of characteristic polynomials in [1], as well as to all symmetry classes [33]. The universality of such ratios has been studied in several works [47,14], in particular their relation to the sine- and Airy-kernels [11]. New critical behaviours have been found from such ensembles as well [16], and universality was discussed in [9].
Moving to polynomial ensembles, expectation values of products are easy to evaluate by including them into the Vandermonde determinant, just as for orthogonal polynomial ensembles. Determinantal formulas for expectation values of characteristic polynomials and their inverses have been derived, see e.g. [19,21,8]. A duality in the number of products and matrix dimension, which is well known for the classical ensembles, holds also in this external field model [18]. The kernel for general polynomial ensembles has been expressed in terms of the residue of a single ratio of characteristic polynomials in [19], see also [11,26]. Most recently, the study of eigenvector statistics of random matrices has seen a revival, and in this context expectation values of ratios of characteristic polynomials in polynomial ensembles arise as well [23,24]. This has been one of the starting points of the present work.
The outline of the paper is as follows. In Section 2 we introduce polynomial ensembles, provide several examples, and state the main results of the present paper. In particular, Theorem 2.2 says that any polynomial ensemble is a Giambelli compatible point process in the sense of Borodin, Olshanski, and Strahov [13]. This leads to Theorem 2.3, expressing the expectation value of the ratio of an equal number of characteristic polynomials as a determinant of a single ratio, generalising [7, Theorem 3.3] to polynomial ensembles. In Section 2.3 we introduce a more restricted class of polynomial ensembles which we call invertible. Here, we give a nested multiple complex contour integral representation for general ratios of characteristic polynomials in Theorem 2.9. The number of integrals only depends on the number of characteristic polynomials, but not on the number of points N of the point process. This generalises the results of [24, Theorem 5.1] to rectangular random matrices, in the presence of an arbitrary number of characteristic polynomials. Several examples are given that belong to the class of invertible polynomial ensembles, including the external field models. Sections 3 and 4 are devoted to the proofs of the results stated in Section 2. Section 5 contains some special cases, and a comparison with the work of Fyodorov, Grela, and Strahov [24]. Finally, Appendix A collects properties of the Vandermonde determinant, when adding or removing factors.

Definitions and statement of results
2.1. Polynomial ensembles. We introduce polynomial ensembles following [39]. They are defined by a probability density function on I^N, where I ⊆ R is an interval:

P(x_1, . . . , x_N) = (1/Z_N) ∆_N(x_1, . . . , x_N) det[ϕ_l(x_k)]_{k,l=1,...,N} ,   (2.1)

where ∆_N(x_1, . . . , x_N) = ∏_{1≤i<j≤N} (x_j − x_i) is the Vandermonde determinant of N variables. The ϕ_1, . . . , ϕ_N are certain integrable real-valued functions on I, such that the normalisation constant Z_N exists and is non-zero. The constant Z_N is also called the partition function in the physics literature. Polynomial ensembles are formed by eigenvalues (or singular values) of certain N × N random matrices H, see the examples below. The resulting probability density function (2.1) is then positive and normalised to unity. The normalisation constant can be written as

Z_N = ∫_{I^N} ∆_N(x_1, . . . , x_N) det[ϕ_l(x_k)]_{k,l=1,...,N} dx_1 · · · dx_N = N! det[g_{k,l}]_{k,l=1,...,N} .   (2.2)

Here, the matrix G = (g_{k,l})_{k,l=1,...,N} is the invertible generalised moment matrix with entries

g_{k,l} = ∫_I x^{k−1} ϕ_l(x) dx .   (2.3)

The second equality in (2.2) follows using (A.1) and the Andréief integral formula,

∫_{I^N} det[ψ_k(x_l)]_{k,l=1,...,N} det[φ_k(x_l)]_{k,l=1,...,N} dx_1 · · · dx_N = N! det[ ∫_I ψ_k(x) φ_l(x) dx ]_{k,l=1,...,N} ,   (2.4)

valid for any two sets of integrable functions ψ_k and φ_l. We will now give some explicit realisations of polynomial ensembles in terms of random matrices. The simplest example of a polynomial ensemble is given by the eigenvalues of N × N complex Hermitian random matrices H from the Gaussian Unitary Ensemble (GUE), defined by the probability measure

P(H) dH ∝ exp(−Tr H²) dH .   (2.5)

The probability density function of the real eigenvalues x_1, . . . , x_N of H reads [41]

P(x_1, . . . , x_N) = (1/Z_N) ∆_N(x_1, . . . , x_N)² ∏_{j=1}^N e^{−x_j²} .   (2.6)

This is a polynomial ensemble where the resulting ϕ-functions, ϕ_k(x) = x^{N−k} e^{−x²}, are obtained after multiplying the exponential factors into one of the Vandermonde determinants. Note that the GUE is an orthogonal polynomial ensemble.
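The Andréief formula can be checked numerically. The following sketch uses a measure supported on a finite grid, for which the formula reduces to the discrete Cauchy-Binet identity; the grid, the size N and the two families of test functions are hypothetical choices.

```python
import itertools
import math

import numpy as np

# Discrete check of the Andreief formula: for a measure on a finite grid,
#   sum_{x_1,...,x_N in grid} det[psi_k(x_l)] det[phi_k(x_l)]
#     = N! * det[ sum_x psi_k(x) phi_l(x) ].
N = 3
grid = np.array([0.3, 0.7, 1.1, 1.9, 2.4])

psi = [lambda x, k=k: x ** k for k in range(N)]          # hypothetical family 1
phi = [lambda x, l=l: (x + 1.0) ** l for l in range(N)]  # hypothetical family 2

def det_on_points(fs, xs):
    # determinant det[f_k(x_l)] for the given functions and points
    return np.linalg.det(np.array([[f(x) for x in xs] for f in fs]))

lhs = sum(det_on_points(psi, xs) * det_on_points(phi, xs)
          for xs in itertools.product(grid, repeat=N))
gram = np.array([[sum(p(x) * q(x) for x in grid) for q in phi] for p in psi])
rhs = math.factorial(N) * np.linalg.det(gram)
print(abs(lhs - rhs) / abs(rhs))
```

For a continuous measure the same identity holds with the sums over the grid replaced by integrals over I.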
The GUE with an external source or field [27,15] contains an additional constant, deterministic Hermitian matrix A of size N × N, which without loss of generality we choose to be diagonal, A = diag(a_1, . . . , a_N) with a_j ∈ R for j = 1, . . . , N. It will constitute our first main example and is defined by the probability measure

P(H) dH ∝ exp(−Tr(H − A)²) dH ,   (2.7)

with the probability density function

P(x_1, . . . , x_N) = (1/Z_N) ∆_N(x_1, . . . , x_N) det[e^{−(x_k − a_l)²}]_{k,l=1,...,N} .   (2.8)

The resulting ϕ_k(x) = e^{−(x−a_k)²} follow from the Harish-Chandra-Itzykson-Zuber integral [30,31] and from multiplying the Gaussian factor into the determinant. We refer to [15] for the derivation. Notice that the second determinant in (2.8) cannot be reduced to a Vandermonde determinant in general.
Our second main example is the chiral GUE with an external source, cf. [19]. It is defined in terms of a complex non-Hermitian N × (N + ν) dimensional random matrix X and a deterministic matrix A of equal size, with ν ≥ 0. Again without loss of generality we can choose AA† = diag(a_1, . . . , a_N), with elements a_j ∈ R_+ for j = 1, . . . , N. The ensemble is defined by

P(X) dX ∝ exp(−Tr(X − A)(X − A)†) dX .   (2.9)

At vanishing A it reduces to the chiral GUE, also called the Wishart or complex Laguerre ensemble. The probability density function of the real positive eigenvalues x_1, . . . , x_N of XX† reads

P(x_1, . . . , x_N) = (1/Z_N) ∆_N(x_1, . . . , x_N) det[x_k^{ν/2} e^{−(x_k + a_l)} I_ν(2√(a_l x_k))]_{k,l=1,...,N} .   (2.10)
The modified Bessel function of the first kind I_ν, inside ϕ_k(x) = x^{ν/2} e^{−(x+a_k)} I_ν(2√(a_k x)), follows from the Berezin-Karpelevich integral formula, cf. [44]. In principle we may also allow the parameter ν > −1 to take real values.
In the application to QCD at finite temperature, the density (2.9) is typically endowed with N_f extra determinant factors, P(X) → P(X) ∏_{f=1}^{N_f} det(XX† + m_f²), with masses m_f ∈ R that correspond to N_f fermion flavours, see e.g. [45], which also motivates the present study. We would like to mention that the expectation value of the ratio of two characteristic polynomials studied in [24] follows from the above ensemble, when setting ν = 0 and letting m_f → 0 for all f = 1, . . . , N_f. This leads to the polynomial ensemble with ϕ_k(x) = x^L e^{−(x+a_k)} I_0(2√(a_k x)) of [24], with L = N_f. Further examples have been given already in the introduction, including the singular values of products of independent random matrices, see [4] for a review, where ϕ_k(x) is given by a special function, the Meijer G-function, and more generally Pólya ensembles [34,35]. Notice that when the Vandermonde determinant in (2.1) is also replaced by a general determinant, as in the Andréief integration formula (2.4), we are back to biorthogonal ensembles [10]; an explicit example can be found in [5]. For this class our methods below will not apply in general.

2.2. Polynomial ensembles as Giambelli compatible point processes. In this section we adopt notation and definitions from Macdonald [40]. Let Λ be the algebra of symmetric functions. The Schur functions s_λ indexed by Young diagrams λ form an orthonormal basis in Λ. Recall that Young diagrams can be written in the Frobenius notation, namely λ = (p_1, . . . , p_d | q_1, . . . , q_d), where d equals the number of boxes on the diagonal of λ, p_j with j = 1, . . . , d denotes the number of boxes in the j-th row of λ to the right of the diagonal, and q_l with l = 1, . . . , d denotes the number of boxes in the l-th column of λ below the diagonal. The Schur functions satisfy the Giambelli formula

s_λ = det[ s_{(p_i | q_j)} ]_{i,j=1,...,d} .

The Schur polynomial s_λ(x_1, . . . , x_N) is the specialisation of s_λ to the variables x_1, . . . , x_N. The Schur polynomial s_λ(x_1, . . . , x_N) corresponding to the Young diagram λ with l(λ) ≤ N rows of lengths λ_1 ≥ . . . ≥ λ_{l(λ)} > 0 can be defined by

s_λ(x_1, . . . , x_N) = det[ x_k^{λ_l + N − l} ]_{k,l=1,...,N} / ∆_N(x_1, . . . , x_N) ,   (2.12)

where λ_l = 0 for l > l(λ). The Giambelli compatible point processes form a class of point processes for which different probabilistic quantities of interest can be studied using the Schur symmetric functions. This class of point processes was introduced by Borodin, Olshanski, and Strahov [13] to prove determinantal identities for averages of analogues of characteristic polynomials for ensembles originating from Random Matrix Theory, the theory of random partitions, and the representation theory of the infinite symmetric group. In the context of random point processes formed by N-point random configurations on a subset of R, the Giambelli compatible point processes can be defined as follows.

Definition 2.1. A point process formed by x_1, . . . , x_N is called Giambelli compatible if the Giambelli formula remains valid under averaging, i.e. if for every Young diagram λ = (p_1, . . . , p_d | q_1, . . . , q_d) we have

E[ s_λ(x_1, . . . , x_N) ] = det[ E[ s_{(p_i | q_j)}(x_1, . . . , x_N) ] ]_{i,j=1,...,d} .
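The Giambelli formula itself can be illustrated numerically. The following hypothetical sketch verifies it for the diagram λ = (2,2), which is (1,0 | 1,0) in Frobenius notation, evaluating Schur polynomials via the bialternant formula (2.12) at random points; N = 3 variables is an arbitrary choice.

```python
import numpy as np

# Check of the Giambelli formula for lambda = (2,2) = (1,0 | 1,0):
#   s_(2,2) = det[[ s_(2,1), s_(2) ], [ s_(1,1), s_(1) ]],
# since the hooks are (1|1)=(2,1), (1|0)=(2), (0|1)=(1,1), (0|0)=(1).
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, size=3)  # three random, pairwise distinct points

def schur(lam, x):
    # bialternant formula: det[x_k^(lam_l + N - l)] / Vandermonde determinant
    N = len(x)
    lam = list(lam) + [0] * (N - len(lam))
    num = np.linalg.det(np.array([[xk ** (lam[l] + N - 1 - l) for l in range(N)]
                                  for xk in x]))
    return num / np.linalg.det(np.vander(x))

lhs = schur((2, 2), x)
rhs = schur((2, 1), x) * schur((1,), x) - schur((2,), x) * schur((1, 1), x)
print(abs(lhs - rhs))
```

Giambelli compatibility in the sense of Definition 2.1 is the statement that the same determinantal identity survives when each Schur polynomial is replaced by its expectation over the point process.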
In the present paper we show that the polynomial ensembles introduced in Section 2.1 can be understood as Giambelli compatible point processes. Namely, the following Theorem holds true.
Theorem 2.2. Any polynomial ensemble in the sense of Section 2.1 is a Giambelli compatible point process.
As explained in Borodin, Olshanski, and Strahov [13], the Giambelli compatibility of point processes implies determinantal formulas for averages of ratios of characteristic polynomials. Namely, we obtain

Theorem 2.3. Assume that x_1, . . . , x_N form a polynomial ensemble. Let u_1, . . . , u_M ∈ C\R and z_1, . . . , z_M ∈ C, for any M ∈ N, be pairwise distinct variables. Then

E[ ∏_{m=1}^M D_N(z_m)/D_N(u_m) ] = det[ E[ D_N(z_k)/D_N(u_l) ] / (z_k − u_l) ]_{k,l=1,...,M} / det[ 1/(z_k − u_l) ]_{k,l=1,...,M} ,

where D_N(z) = ∏_{n=1}^N (z − x_n) denotes the characteristic polynomial associated with the random variables x_1, . . . , x_N.

2.3. Averages of arbitrary ratios of characteristic polynomials in invertible ensembles. In this section we present our results for arbitrary ratios of characteristic polynomials,

E[ ∏_{j=1}^M D_N(z_j) / ∏_{l=1}^L D_N(y_l) ] ,

allowing the numbers of characteristic polynomials in the numerator, M, and in the denominator, L ≤ N, to differ. As before, we will assume the parameters y_1, . . . , y_L ∈ C\R and z_1, . . . , z_M ∈ C to be pairwise distinct. We will not consider the most general polynomial ensembles (2.1) here, but rather functions ϕ_j(x) that satisfy certain conditions to be specified below.
Definition 2.4. Consider a polynomial ensemble defined by the probability density function (2.1). Assume that ϕ_l(x) = ϕ(a_l, x) for l = 1, . . . , N (where a_1, . . . , a_N are real parameters), and that there exists a family {π_k}_{k=0}^∞ of monic polynomials, with π_k of degree k, such that each π_k can be represented as

π_k(a) = (1/c(a)) ∫_I x^k ϕ(a, x) dx ,  k = 0, 1, . . . ,   (2.17)

for some function c(a) ≠ 0. In addition, assume that equation (2.17) is invertible, i.e. there exists a function F : I′ × C → C such that

∫_{I′} F(t, y) π_k(t) dt = y^k ,  k = 0, 1, . . . ,   (2.18)

where I′ is a certain contour in the complex plane. Then we will refer to such a polynomial ensemble as an invertible ensemble.
Remark 2.5. Condition (2.17) together with (2.2) immediately implies that for invertible polynomial ensembles the normalising partition function simplifies as follows:

Z_N = N! det[ c(a_l) π_{k−1}(a_l) ]_{k,l=1,...,N} = N! ∏_{l=1}^N c(a_l) ∆_N(a_1, . . . , a_N) .   (2.19)

Here we use that in (A.1) the determinant of monomials equals that of arbitrary monic polynomials.
We will now present two examples for polynomial ensembles of invertible type according to Definition 2.4, before commenting on the general class of such ensembles.
Example 2.6. Our first example is given by the GUE with external field (2.8). Here the eigenvalues take real values, I = R, and the functions ϕ_l(x) can be chosen as

ϕ_l(x) = e^{−(x − a_l)²} = ϕ(a_l, x) .

From [25, 8.951] we know the following representation of the standard Hermite polynomials H_n(t) of degree n, which can be made monic as follows: 2^{−n} H_n(t) = t^n + O(t^{n−2}). This leads to the integral

∫_R x^k e^{−(x−a)²} dx = √π (2i)^{−k} H_k(ia) ,

from which we can read off c(a) = √π and π_k(a) = (2i)^{−k} H_k(ia) for k = 0, 1, . . ., which is again monic.
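The Gaussian moment formula underlying this example can be verified numerically; the parameter value, integration window and tolerances below are illustrative choices (the Riemann sum is accurate here because the integrand decays like a Gaussian at both ends).

```python
import numpy as np
from numpy.polynomial import hermite as Hpoly

# Check of  int_R x^k exp(-(x-a)^2) dx = sqrt(pi) * (2i)^(-k) * H_k(ia)
# for k = 0,...,5, with H_k the physicists' Hermite polynomial.
a = 0.7
xs = np.linspace(-12.0, 12.0, 200001)
dx = xs[1] - xs[0]

errors = []
for k in range(6):
    integral = np.sum(xs ** k * np.exp(-(xs - a) ** 2)) * dx
    # H_k via its coefficient vector in the Hermite basis, evaluated at ia
    pi_k = Hpoly.hermval(1j * a, [0] * k + [1]) / (2j) ** k
    errors.append(abs(integral - np.sqrt(np.pi) * pi_k.real))
print(max(errors))
```

The quantity `pi_k` is (2i)^(-k) H_k(ia), i.e. the monic polynomial π_k(a) of the example; e.g. π_1(a) = a and π_2(a) = a² + 1/2.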
Renaming y = iz and x = is, the same Gaussian integral also provides the inversion (2.18), so that the GUE with external field is indeed invertible.

Example 2.7. More generally, consider functions of the form ϕ_l(x) = f(x − a_l), such that f is (N − 1)-times differentiable on R and the moments of its derivatives exist. It immediately follows that the generalised moment matrix leads to polynomials, upon shifting the integration variable, and thus (2.17) is satisfied. It is not too difficult to show, using the Fourier transformation of f, that condition (2.18) of Definition 2.4 is also satisfied, and thus these ensembles are invertible.
Example 2.8. Our second example is the chiral GUE with external field (2.10), with I = R_+; the functions ϕ_l(x) can be chosen as

ϕ_l(x) = x^{ν/2} e^{−(x + a_l)} I_ν(2√(a_l x)) = ϕ(a_l, x) ,

where the a_l are positive real numbers. The following integral is known, see e.g. [25, 6.631.10] after analytic continuation:

∫_0^∞ x^{n+ν/2} e^{−(x+a)} I_ν(2√(ax)) dx = n! a^{ν/2} L_n^ν(−a) .

Here, L_n^ν(y) is the standard generalised Laguerre polynomial of degree n, which is made monic as follows: n! L_n^ν(−x) = x^n + O(x^{n−1}). Then the first condition (2.17) is satisfied, with c(a) = a^{ν/2} and π_k(a) = k! L_k^ν(−a) for k = 0, 1, . . . .
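This Laguerre integral can also be checked numerically. The sketch below implements I_ν and L_n^ν from their power series; the values of n, ν, a and the integration grid are illustrative choices.

```python
import math

import numpy as np

# Check of  int_0^inf x^(n+nu/2) e^(-(x+a)) I_nu(2 sqrt(ax)) dx
#             = n! a^(nu/2) L_n^nu(-a).
def bessel_i(nu, z, terms=60):
    # modified Bessel function of the first kind, from its power series
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    for m in range(terms):
        out += (z / 2.0) ** (2 * m + nu) / (math.gamma(m + 1) * math.gamma(m + nu + 1))
    return out

def gen_laguerre(n, nu, x):
    # L_n^nu(x) = sum_k (-1)^k binom(n+nu, n-k) x^k / k!
    return sum((-1) ** k * math.gamma(n + nu + 1)
               / (math.gamma(n - k + 1) * math.gamma(nu + k + 1) * math.factorial(k))
               * x ** k for k in range(n + 1))

n, nu, a = 2, 1.5, 0.8
xs = np.linspace(1e-8, 60.0, 24001)
dx = xs[1] - xs[0]
integrand = xs ** (n + nu / 2) * np.exp(-(xs + a)) * bessel_i(nu, 2.0 * np.sqrt(a * xs))
integral = np.sum(integrand) * dx
rhs = math.factorial(n) * a ** (nu / 2) * gen_laguerre(n, nu, -a)
print(abs(integral - rhs) / abs(rhs))
```

Monicity of π_n(a) = n! L_n^ν(−a) can be seen from the k = n term of the series, whose coefficient is exactly one.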
For the second condition (2.18) we consider the following integral, see [25, 7.421.6], which is also called a Hankel transform:

∫_0^∞ e^{−t} t^{n+ν/2} J_ν(2√(yt)) dt = n! y^{ν/2} e^{−y} L_n^ν(y) .

Bringing factors to the other side and making the substitution t = −s, so that the same monic polynomials n! L_n^ν(−s) as above appear in the integrand, we obtain the required inversion formula (2.18) for a suitable contour I′.

Now we state the second main result of the present paper, which gives a formula for averages of products and ratios of characteristic polynomials in the case of invertible ensembles.
Theorem 2.9. Consider a polynomial ensemble (2.1) formed by x_1, . . . , x_N, and assume that this ensemble is invertible in the sense of Definition 2.4. Then the average E[∏_{j=1}^M D_N(z_j) / ∏_{l=1}^L D_N(y_l)] admits the nested multiple contour integral representation (2.33), whose number of integrations depends only on L and M. Here D_N(z) = ∏_{n=1}^N (z − x_n) denotes the characteristic polynomial associated with the random variables x_1, . . . , x_N, the parameters y_1, . . . , y_L ∈ C\R and z_1, . . . , z_M ∈ C are pairwise distinct, and all contours encircle the points a_1, . . . , a_N counter-clockwise.
We note that Theorem 2.9 generalises Theorem 5.1 in [24], derived for the ratio of two characteristic polynomials in the polynomial ensemble with ϕ(a, x) = x^L e^{−x} I_0(2√(ax)), to general ratios in invertible polynomial ensembles. Clearly it is well suited for asymptotic analysis as N → ∞, since the number of integrations does not depend on N.

2.4. A formula for the correlation kernel for invertible ensembles. It is well known that each polynomial ensemble is a determinantal point process. For invertible polynomial ensembles (see Definition 2.4), Theorem 2.9 enables us to deduce a double contour integral formula for the correlation kernel.
Proof. We use the following fact, valid for any polynomial ensemble formed by x_1, . . . , x_N on I ⊆ R, see [19]. Assume that the average of a single ratio of characteristic polynomials is given in terms of a function Φ_N, where the function v → Φ_N(x, v) is analytic at y, for x, y ∈ I. Then the correlation kernel of the determinantal process formed by x_1, . . . , x_N is given in terms of Φ_N. In our case Theorem 2.9 provides the required double contour integral expression for this single ratio.

Proof of Theorem 2.2

Our aim is to show that the average of s_λ satisfies the Giambelli formula, i.e.

E[ s_λ(x_1, . . . , x_N) ] = det[ E[ s_{(p_i|q_j)}(x_1, . . . , x_N) ] ]_{i,j=1,...,d} ,   (3.2)

where λ is an arbitrary Young diagram, λ = (p_1, . . . , p_d | q_1, . . . , q_d) in Frobenius coordinates. According to Definition 2.1, this will mean that the polynomial ensemble under consideration is a Giambelli compatible point process.
The proof of equation (3.2) below is based on the following general fact, see Macdonald [40], Example I.3.21.
Proposition 3.1. Let {h_{r,s}}, with integer r ∈ Z and non-negative integer s ∈ Z_{≥0}, be a collection of commuting indeterminates such that

h_{0,s} = 1 for all s ∈ Z_{≥0} , and h_{r,s} = 0 for all r < 0 ,   (3.3)

and set

s_λ = det[ h_{λ_i − i + j, j−1} ]_{i,j=1,...,k} ,   (3.4)

where k is any number such that k ≥ l(λ). Then we have

s_λ = det[ s_{(p_i | q_j)} ]_{i,j=1,...,d} ,   (3.5)

where λ is an arbitrary Young diagram, λ = (p_1, . . . , p_d | q_1, . . . , q_d) in Frobenius coordinates.
Clearly, in order to apply Proposition 3.1 to s_λ defined by equation (3.1), we need to construct a collection of indeterminates {h_{r,s}} such that

E[ s_λ(x_1, . . . , x_N) ] = det[ h_{λ_i − i + j, j−1} ]_{i,j=1,...,k}   (3.6)

will hold true for an arbitrary Young diagram λ and for an arbitrary k ≥ l(λ), and such that condition (3.3) is satisfied. By Andréief's integration formula (2.4) and the expression (2.2) for the normalisation constant Z_N, we can write E[s_λ(x_1, . . . , x_N)] as a determinant of one-dimensional integrals (3.7), where we used (A.1). Notice that at this point it matters that we consider polynomial ensembles and not more general bi-orthogonal ensembles. In the latter case the Vandermonde determinant in the denominator of the Schur function (2.12) would not cancel, the Andréief formula would not apply, and we would not know how to compute such expectation values. Set

g̃_{i,j} = ∫_I dx x^{N−i} ϕ_j(x) ,

and denote by Q = (Q_{i,j})_{i,j=1,...,N} the inverse of G̃ = (g̃_{i,j})_{i,j=1,...,N}. With this notation we can rewrite equation (3.7) as (3.9). Since Q is the inverse of G̃, we have the orthogonality relation (3.11). The following Proposition will imply Theorem 2.2.
Proposition 3.2. Let {h_{r,s}}, with integer r ∈ Z and non-negative integer s ∈ Z_{≥0}, be the collection of indeterminates defined by equation (3.12). The collection {h_{r,s}} satisfies condition (3.3). Moreover, with this collection of indeterminates formula (3.6) holds true for an arbitrary Young diagram λ, and for an arbitrary k ≥ l(λ).
Proof. Clearly, the collection of indeterminates {h r,s } defined by (3.12) satisfies condition (3.3). It remains to prove that equation (3.6) holds true for an arbitrary Young diagram λ, and for an arbitrary k ≥ l(λ). We divide the proof of this fact into several steps.
Step 1. First, we want to show that

det[ h_{λ_i − i + j, j−1} ]_{i,j=1,...,k} = det[ h_{λ_i − i + j, j−1} ]_{i,j=1,...,l(λ)} .   (3.13)

Let λ be an arbitrary Young diagram, and assume that k > l(λ). Consider the diagonal entries of the k × k matrix (h_{λ_i − i + j, j−1})_{i,j=1,...,k} for i = j ∈ {l(λ) + 1, . . . , k}. By definition of the h_{r,s}, these entries are h_{0, i−1} = 1. Since h_{r,s} = 0 for r < 0 (see equation (3.12)), the matrix (h_{λ_i − i + j, j−1})_{i,j=1,...,k} is block triangular: in the rows with index i > l(λ), all entries to the left of the diagonal vanish, while the diagonal entries equal one. Here, the first such row of zeros has the label l(λ) + 1, and the first column with ones on the diagonal has the label l(λ) + 1. The determinant of such a block matrix reduces to the product of the determinants of the diagonal blocks, which gives relation (3.13).
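The block-determinant fact used in this step is elementary and can be illustrated numerically; the matrix sizes and random entries below are arbitrary choices.

```python
import numpy as np

# For a block-triangular matrix [[A, B], [0, C]] with C unit upper triangular,
# det = det(A) * det(C) = det(A), as used to drop the rows i > l(lambda).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = np.eye(2) + np.triu(rng.standard_normal((2, 2)), k=1)  # ones on the diagonal
M = np.block([[A, B], [np.zeros((2, 3)), C]])
print(abs(np.linalg.det(M) - np.linalg.det(A)))
```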
Step 2. Assume now that l(λ) > N. Then it trivially holds that E[s_λ(x_1, . . . , x_N)] = 0, by the very definition of the Schur polynomials. Here, we would like to show that the determinant det[h_{λ_i − i + j, j−1}]_{i,j=1,...,l(λ)} equally vanishes. We have h_{r,s} = δ_{r,0} for s ≥ N and r ≥ 0. This implies that in the columns with j > N the entries are δ_{λ_i − i + j, 0}; these vanish for i ≤ N, since then λ_i − i + j > 0, and for i > N they can be non-zero only strictly below the diagonal, at j = i − λ_i < i. Thus we can again apply the formula for determinants of block matrices: the lower-right block is strictly lower triangular and its determinant vanishes, so the full determinant vanishes. This is true for any l(λ) > N, and therefore condition (3.6) is satisfied in this case.
Step 3. Now we wish to prove that (3.14) is valid for any Young diagram with l(λ) ≤ N, and for 1 ≤ i, j ≤ N. Assume that λ_i − i + j ≥ 0. Then (3.14) turns into the first equation in (3.12) with r = λ_i − i + j, where we have used equation (3.11). Also, if λ_i − i + j < 0 and 1 ≤ i, j ≤ N, then h_{λ_i − i + j, j−1} = 0, as follows from equation (3.12). We conclude that (3.14) holds true for λ_i − i + j < 0 as well. Finally, the results obtained in Steps 1 to 3, together with formula (3.9), give the desired formula (3.6).

Proof of Theorem 2.9
Denoting by S_K the symmetric group of a set of K variables, with its elements being the permutations of these, we will utilise the following Lemma, which was proven in [22].

Lemma 4.1. Let L ≤ N, and let x_1, . . . , x_N and y_1, . . . , y_L denote two sets of parameters that are pairwise distinct. Then the following identity holds on the coset of the corresponding permutation groups.
As shown in [22], this follows from the Cauchy-Littlewood formula and the determinantal formula (2.12) for the Schur polynomials. We can use this identity to reduce the number of variables in the inverse characteristic polynomials from N to L. Applying it to the average of products and ratios of characteristic polynomials, we use the fact that each term in the sum over permutations gives the same contribution to the expectation. Hence we can undo the permutations under the sum by a change of variables, and replace the sum over S_N/(S_{N−L} × S_L) by the dimension N!/((N−L)! L!) of the coset space. Next we expand the determinant det[ϕ_l(x_k)]_{k,l=1,...,N}, and then separate the integration over the first L variables x_1, . . . , x_L from that over the remaining N − L variables x_{L+1}, . . . , x_N, by also splitting the characteristic polynomials accordingly. Because we are aiming at an expression that will be amenable to taking the large-N limit, we now focus on the integrals over the last N − L variables, which we denote by J. Here we make use of one of the properties of the Vandermonde determinant, namely the absorption of the M characteristic polynomials in J into a larger Vandermonde determinant, see (A.2). We then use the representation (A.1), pull the integrations ∫_I dx_k ϕ_{σ(k)}(x_k) into the corresponding columns, and use the definition (2.3) of the generalised moment matrix.
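The absorption property of the Vandermonde determinant invoked here can be checked numerically; the sizes N, M and the sample points below are illustrative choices.

```python
import numpy as np

# With the convention Delta(v) = prod_{i<j} (v_j - v_i):
#   Delta_N(x) * prod_{j<=M} prod_{k<=N} (z_j - x_k)
#     = Delta_{N+M}(x_1,...,x_N, z_1,...,z_M) / Delta_M(z).
def vandermonde(v):
    return np.prod([v[j] - v[i]
                    for i in range(len(v)) for j in range(i + 1, len(v))])

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=4)  # N = 4
z = rng.uniform(2.0, 3.0, size=2)   # M = 2
lhs = vandermonde(x) * np.prod([zj - xk for zj in z for xk in x])
rhs = vandermonde(np.concatenate([x, z])) / vandermonde(z)
print(abs(lhs - rhs) / abs(rhs))
```

The identity is immediate from the product form of the Vandermonde determinant: the cross factors (z_j − x_k) are exactly the factors of the larger determinant that belong to mixed pairs.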

Property (2.17) of invertible polynomial ensembles enables us to rewrite J as a determinant involving the monic polynomials π_k evaluated at the parameters a_1, . . . , a_N. Property (2.18) then allows us to trade these polynomials for integrals over the contour I′ with kernel F; we insert what we have derived for J above. This gives

(4.4)
The integrals are now in a form suitable for applying the following Lemma, which will allow us to simplify (and eventually remove) the sum over permutations. Proof. Recall that if G is a finite group and H is a subgroup of G, then there are transversal elements t_1, . . . , t_k ∈ G for the left cosets of H, such that G = t_1 H ⊎ . . . ⊎ t_k H, where ⊎ denotes disjoint union. It follows that if F is a function on G with the property F(gh) = F(g) for any g ∈ G and any h ∈ H, then the sum of F over G equals |H| times the sum over the transversal elements. In this way we obtain the formula (4.8), valid for any antisymmetric function Φ(x_1, . . . , x_L), and for any function f(x, y) such that the integrals in the equation above exist. Formula (4.8) enables us to rewrite equation (4.4) as follows.

(4.9)
We note that due to (A.7) the following identity holds.
In addition, we apply (2.19) to eliminate Z_N, cancel signs, and see that the strict ordering of the indices l_1 < l_2 < . . . < l_L can be relaxed.
Finally, we see that the sum in formula (4.9) can be written in terms of contour integrals, because of the residue formula

∑_{l=1}^N f(a_l) ∏_{n=1, n≠l}^N (a_l − a_n)^{−1} = (1/(2πi)) ∮_C f(s) ∏_{n=1}^N (s − a_n)^{−1} ds ,

valid for any function f analytic in a neighbourhood of the points a_1, . . . , a_N, where the contour C encircles these points counter-clockwise. This leads to the formula in the statement of Theorem 2.9.
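The residue formula that converts the sum over l into a contour integral can be checked numerically; the points a_n, the test function, the contour and its discretisation below are illustrative choices.

```python
import numpy as np

# Check of  sum_l f(a_l) / prod_{n != l} (a_l - a_n)
#             = (1 / 2 pi i) oint_C f(s) / prod_n (s - a_n) ds,
# for the entire test function f = exp and a circle C enclosing all a_n.
a = np.array([0.2, 0.9, 1.7])

residue_sum = sum(np.exp(a[l]) / np.prod([a[l] - a[n]
                                          for n in range(len(a)) if n != l])
                  for l in range(len(a)))

m = 2000  # trapezoidal rule on a periodic parametrisation is spectrally accurate
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
s = 1.0 + 3.0 * np.exp(1j * theta)            # circle of radius 3 around 1
ds = 3j * np.exp(1j * theta) * (2.0 * np.pi / m)
integral = np.sum(np.exp(s) / np.prod(s[:, None] - a[None, :], axis=1) * ds)
integral /= 2j * np.pi
print(abs(integral - residue_sum))
```

Each term of the sum is precisely the residue of f(s)/∏_n(s − a_n) at the simple pole s = a_l, so the identity is the residue theorem for this integrand.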

Special cases
In Proposition 2.10 we have used equation (2.33) in the case M = L = 1. Another case of interest is that of products of characteristic polynomials. In this case L = 0, and only the first set of integrals remains in (2.33); the result is obtained after pulling the M integrations over the s_j's into the Vandermonde determinant of size M. This result could also have been computed directly using Lemma A.2.
As a final special case of interest we look at the ratio of M + 1 characteristic polynomials over a single one, at L = 1. This object is needed in the application to finite temperature QCD, cf. [45]. Theorem 2.9 gives the following expression.

(5.3)
Following [19], we may use the Lagrange extrapolation formula

p(u) = ∑_{l=1}^N p(a_l) ∏_{n=1, n≠l}^N (u − a_n)/(a_l − a_n) ,

valid for any polynomial p of degree at most N − 1. This leads to the following rewriting of (5.3), see (5.6), where we have defined in (5.7) the function A(z, u) containing the product ∏_n (s − a_n).
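The Lagrange formula is easily verified numerically; the polynomial, the nodes and the evaluation point below are illustrative choices.

```python
import numpy as np

# Check of  p(u) = sum_l p(a_l) prod_{n != l} (u - a_n) / (a_l - a_n)
# for a polynomial p with deg p <= N - 1 and pairwise distinct nodes a_n.
a = np.array([0.1, 0.5, 1.2, 2.0])                    # N = 4 nodes
p = np.polynomial.Polynomial([1.0, -2.0, 0.5, 3.0])   # degree 3 <= N - 1

u = 0.77
interp = sum(p(al) * np.prod([(u - an) / (al - an) for an in a if an != al])
             for al in a)
print(abs(interp - p(u)))
```

Both sides are polynomials of degree at most N − 1 in u that agree at the N nodes, hence they agree identically.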
In the second step in (5.6) we have first pulled all the s-integrals, except the one over s_m, into the Vandermonde determinant. We then recognise that the sum is a Laplace expansion, with respect to the first row, of a determinant of size M + 1 containing the matrix elements A(z_j, u) of (5.7). This reveals the determinantal form of the corresponding kernel.
Fyodorov, Grela, and Strahov [24] considered the probability density P_N^L on R_+^N defined by the weights ϕ_k(x) = x^L e^{−(x+a_k)} I_0(2√(a_k x)).³ Note that this polynomial ensemble is invertible only for L = 0, as follows from equations (2.30) and (2.32). However, computations of different averages with respect to P_N^L can be reduced to those with respect to P_N^{L=0}, i.e. with respect to an invertible ensemble. Indeed, the two expectations are related by a simple identity, valid for any function f(x_1, . . . , x_N) such that the expectations exist.

³We use a different convention than [24], where the factor e^{−a_l} is part of the normalisation, cf. (2.10).
In particular, we can reproduce the results of [24] for the expectation value of a single characteristic polynomial, its inverse, or a single ratio. Without going into much detail, we need two ingredients for this check. First, in order to perform the limit of vanishing arguments in equation (5.9), it is useful to antisymmetrise the product of the first L functions F(t_j, z_j) using the Vandermonde determinant ∆_{L+1}(t_1, . . . , t_{L+1}) in (2.33). We are then led to consider (5.10), after first taking the limit of degenerate arguments, which is then sent to zero. Obviously, we first separate the remaining non-vanishing argument z_{L+1} from the Vandermonde determinant by ∆_L(z_1, . . . , z_L) ∏_{l=1}^L (z_l − z_{L+1}) = ∆_{L+1}(z_1, . . . , z_{L+1}). Second, we need an equivalent formulation of Propositions 3.1 and 3.5 employed in [24], which are due to [21,19], respectively.
Proposition 5.1.
The two representations (5.11) hold, where C is the inverse of the N × N moment matrix G, and c_{i,j} are the matrix elements of C^T.
Proof. Eqs. (5.11) were stated in [24] following [21,19], without the factors of (u/y)^{N−1}. The equivalence of the two statements can be seen as follows. Expanding the geometric series inside the determinant without these factors, and performing the integrals in the last row, we obtain infinite series over generalised moments g_{k,l}, the first N − 1 of which can be removed by subtracting the upper N − 1 rows. Rewriting the last row as integrals and resumming the series, we arrive at the first line of (5.11). The second line in (5.11) is obtained as follows. Using that det[c_{i,j}]_{i,j=1,...,N} = 1/Z_N, and then multiplying the matrix inside the determinant from the right by C, we obtain an identity matrix except for the last row, as C is the inverse of the finite, N × N dimensional matrix G. Laplace expanding with respect to the last column leads to the desired result.
Employing Proposition 5.1, it is not difficult to see that from our Theorem 2.9 together with (5.10) we obtain an equivalent form of the corresponding results in [24].

Appendix A. Properties of the Vandermonde determinant

In this appendix we first define the Vandermonde determinant in various equivalent ways. We then collect several of its properties when extending the number of variables by multiplication, and when reducing it by division through the corresponding factors.
Definition A.1. The Vandermonde determinant of N pairwise distinct variables x_1, . . . , x_N is denoted by ∆_N(x_1, . . . , x_N) and can be represented in the following equivalent ways:

∆_N(x_1, . . . , x_N) = ∏_{1≤i<j≤N} (x_j − x_i) = det[ x_k^{l−1} ]_{k,l=1,...,N} = det[ p_{l−1}(x_k) ]_{k,l=1,...,N} ,   (A.1)

where p_0, p_1, . . . , p_{N−1} are arbitrary monic polynomials with deg p_k = k. For N = 1 it follows that ∆_1(x_1) = 1, and we also formally define ∆_0 = 1 in the absence of parameters.
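The equivalence of the three representations can be illustrated numerically; the sample points and the monic family p_k(x) = (x − 0.3)^k below are hypothetical choices.

```python
import numpy as np

# Check that the product formula, the monomial determinant, and the
# determinant of an arbitrary monic family all give the same value.
x = np.array([0.4, 1.1, 2.5, 3.2])
N = len(x)

product_form = np.prod([x[j] - x[i]
                        for i in range(N) for j in range(i + 1, N)])
monomial_det = np.linalg.det(np.vander(x, increasing=True))
monic_det = np.linalg.det(np.array([[(xk - 0.3) ** l for l in range(N)]
                                    for xk in x]))
print(product_form, monomial_det, monic_det)
```

The monic-family version follows from the monomial one by column operations, which leave the determinant unchanged.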
The Vandermonde determinant can be extended from N to N + M variables in the following way, when multiplied by M characteristic polynomials:

∆_N(x_1, . . . , x_N) ∏_{j=1}^M ∏_{k=1}^N (z_j − x_k) = ∆_{N+M}(x_1, . . . , x_N, z_1, . . . , z_M) / ∆_M(z_1, . . . , z_M) .   (A.2)

Conversely, the Vandermonde determinant can be reduced by removing L of its variables, dividing out the corresponding factors; in the resulting reduced Vandermonde determinant the parameters x_j with j = l_1, . . . , l_L are absent. From Definition A.1 we obtain that for L = N both sides are equal to unity. We obtain the reduced Vandermonde determinant by the following formula.