1 Introduction

Recently an interesting property of matrix models called superintegrability was brought to attention, see [1] for a rather extensive summary and references therein. It appears that in a wide variety of matrix models there are explicit formulae for averages of an appropriately chosen basis in the space of gauge invariant operators. In each situation these operators correspond to some symmetric polynomials. Usually, these polynomials correspond to characters of some algebra, as in the case of Schur functions or Q-functions, or to some appropriate generalization of characters, as in this paper. Remarkably, expectation values of these polynomials not only have an explicit expression, but are also expressed in terms of the same polynomials again, now evaluated at specific loci. The name superintegrability refers to a rather stretched analogy with classical mechanics, where some systems have extra integrals of motion, which allows one to reduce the problem to algebra and present an explicit solution.

The superintegrability property is simplest in such models as the Hermitian Gaussian and complex matrix models [2], but it is remarkable in the sense that it allows one to guess generalizations to other cases. One just needs to guess the appropriate substitute for the polynomials. One such interesting generalization is the \(\beta \)-deformation [3]. The corresponding matrix model is also referred to as the \(\beta \)-ensemble. Such \(\beta \)-deformed integrals are interesting in two ways. First of all, on their own they represent an eigenvalue model in which the quantum measure is deformed and a lot of familiar structures break down. This is an important deformation direction, as such models find applications in a variety of problems, such as supersymmetric localization [4, 5], categorification of knot invariants [6,7,8,9] and the AGT correspondence [10]. On the other hand, completely understanding the \(\beta \)-deformation is necessary to move to the more general cases of the (q,t) and elliptic (q,t) matrix models [11].


The appropriate symmetric functions for the \(\beta \)-deformed model are the so-called Jack polynomials [12], for which one has [4, 13]:

$$\begin{aligned} \langle J_{R}(H)\rangle = \beta ^{|R|}\frac{J_R(N)}{J_R(\delta _{k,1})}\frac{J_R(\delta _{k,2})}{\Vert J_R\Vert ^2 }, \end{aligned}$$
(1)

which can be verified in several ways. However, there is still some mystery about proving formulae of this type. In this paper we present a method of solving matrix models which naturally produces polynomial averages. We apply it to the \(\beta \)-deformed model, thus providing the long-awaited proof of (1).

Other features of matrix models which are important in our discussion are ordinary KP/Toda integrability [14], the Virasoro constraints and the W-representation [15, 16]. These are matrix model analogues of conservation laws and equations of motion. Whether this analogy could be developed further is an intriguing question. Nevertheless, these three structures are crucial for our construction.

The former is the idea that matrix model partition functions are solutions of integrable systems. This is manifested in bilinear equations satisfied by matrix model partition functions. From the algebraic point of view it means that partition functions are certain matrix elements of the \(GL(\infty )\) group. From this perspective it is natural to expect the character expansion of such matrix models to be in terms of \(GL(\infty )\) characters – Schur functions [17].

On the other hand there are Virasoro constraints (called Ward identities in QFT), which are linear differential equations annihilating the partition function. They reflect the invariance of the integral under arbitrary reparametrizations of the integration variable and play the role of the equations of motion for the path integral. Namely, the full set of Virasoro constraints completely determines the partition function. One could wish to be able to solve matrix models – obtain full partition functions – by solving the Virasoro equations. Lately it was found that this is possible (at least up to a choice of integration contours). The answer is typically given by the W-representation [18] – an evolution operator that generates the partition function from the trivial one:

$$\begin{aligned} Z=e^{W}\cdot 1 \end{aligned}$$
(2)

However, such an answer is still unsatisfactory. Although one can write out the explicit W-operator, it is complicated enough that it is not immediately clear how to expand this expression and obtain explicit formulas. In this paper we explain how to promote (2) to an explicit expansion and hence incorporate superintegrability. The main idea is that the W-operator acts naturally on characters:

$$\begin{aligned} W \chi _R = \sum _{R'} c_{RR'} \chi _{R'} \end{aligned}$$
(3)

where the sums are restricted to additions of just a few boxes to the original Young diagram R, and the coefficients \(c_{RR'}\) factorize into a combinatorial piece and the content factors \((j-i+N)\) or their proper deformations, with (i,j) denoting the coordinates of these boxes. This property of W-operators is one manifestation of them being special elements of the \({\mathcal {W}}_{\infty }\) algebra. The example that we treat here demonstrates that these algebraic properties survive the \(\beta \)-deformation [3]. On the other hand, KP-like integrability seems to break down: there are no determinant formulae or bilinear identities, and the substitute for \(GL(\infty )\) is unknown. However, the W-representation is deformed nicely and formulae like (3) are still there, with appropriate substitutes for the characters [13].

We will explain how the \(\beta \)-deformed W-operator acts on Jack polynomials and how this action allows us to calculate character averages (1) without any integration (see similar constructions in [19, 20]).

We describe this method for the special case of the \(\beta \)-deformed Gaussian model; however, we keep in mind that it is applicable in a number of other cases. Only minor adaptations are required in cases where contour ambiguities are absent and the relevant functions are Schur functions or Jack polynomials. This includes the Gaussian model, the complex matrix model, the model with logarithmic potential and its \(\beta \)-deformation. Furthermore, it is also applicable to the Kontsevich models, where the characters are Schur Q-functions.

This paper is organized as follows. In Sect. 2 we illustrate the proof of superintegrability on the example of the undeformed Hermitian Gaussian matrix model and Schur polynomials. Next, we prove superintegrability for the \(\beta \)-deformation in Sect. 3. The connection between Calogero–Sutherland Hamiltonians and W-operators is discussed in Sect. 4. Finally, we briefly discuss our results and further directions in Sect. 5.

At the moment of finalizing this paper we became aware that a very similar consideration has just appeared in a wonderful paper [21].

2 Undeformed case

Let us start with the standard Hermitian Gaussian matrix model and recall how one obtains explicit expressions for averages in this case. The partition function of the model is:

$$\begin{aligned} Z(p_k)=\int dH \exp \left( -\frac{1}{2} {\text {Tr}}H^2 + \sum _k p_k {\text {Tr}}H^k \right) \end{aligned}$$
(4)

This model belongs to the class of so-called eigenvalue models, i.e. one can integrate out its angular part. After the transition to the eigenvalues \(\lambda _i\) of H the generating function becomes

$$\begin{aligned} Z(p_k)= & {} \int d\lambda _1 ... d\lambda _N\, \Delta ^{2} (\lambda ) \exp \left[ -\frac{1}{2}\sum _{i}\lambda _i^2\right] \nonumber \\&\times \exp \left[ \sum _{k}p_k\sum _{i}\lambda _i^k\right] \end{aligned}$$
(5)

where \(\Delta \) is the Vandermonde determinant: \( \Delta =\prod _{i<j}(\lambda _i-\lambda _j) \).

An important property of the partition function is its expansion in terms of characters. Recall the Cauchy identity:

$$\begin{aligned} \exp \left( \sum _{k} \dfrac{p_k {\bar{p}}_k}{k} \right) = \sum _R S_R(p_k)S_R({\bar{p}}_k). \end{aligned}$$
(6)

Applying it to the potential of the matrix model one obtains:

$$\begin{aligned} Z= & {} \sum _{R}S_{R}(p_k)\int dH \exp \left( -\frac{1}{2}{\text {Tr}}H^2 \right) S_{R}(H)\nonumber \\= & {} \sum _{R}S_{R}(p_k)\left\langle S_{R}(H)\right\rangle \end{aligned}$$
(7)

Hence, knowing all character averages means we have an explicit perturbative solution of the matrix model. Clearly, since Schur polynomials form a basis in the space of all symmetric functions, we can calculate the expectation value of any other gauge invariant operator provided we know how it expands in Schur polynomials.
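Both the Cauchy identity (6) and the resulting character expansion are easy to check at low degree with computer algebra. Below is a minimal sympy sketch verifying (6) up to total degree 2 in the \(p_k\); the power-sum expansions of the first Schur functions used here are standard and are not written out in the text, so they are supplied explicitly.

```python
# Degree-2 check of the Cauchy identity (6): expand both sides up to
# total degree 2 in the p_k; the q_k play the role of the barred times.
# Standard power-sum expansions: S_[1]=p1, S_[2]=(p1^2+p2)/2, S_[1,1]=(p1^2-p2)/2.
from sympy import symbols, expand

p1, p2, q1, q2 = symbols('p1 p2 q1 q2')

# Left-hand side: exp(sum_k p_k q_k / k), keeping terms of p-degree <= 2
lhs = 1 + p1*q1 + p2*q2/2 + (p1*q1)**2/2

# Right-hand side: sum over partitions of size <= 2, including the empty one
S1, S2_, S11 = p1, (p1**2 + p2)/2, (p1**2 - p2)/2
Q1, Q2_, Q11 = q1, (q1**2 + q2)/2, (q1**2 - q2)/2
rhs = 1 + S1*Q1 + S2_*Q2_ + S11*Q11

residual = expand(lhs - rhs)   # vanishes identically
```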

The partition function satisfies a set of differential equations called Virasoro constraints. These reflect the invariance of the integral under changes of the integration variables. To derive the constraints one changes the integration variables \(\lambda _i \rightarrow \lambda _i+\epsilon \lambda _i^{n+1}\) and expands in powers of \(\epsilon \) [14, 16]. At first order one gets the equations:

$$\begin{aligned} L_n Z=0 \quad \, n \ge -1 \end{aligned}$$
(8)

where \(L_n\) are Virasoro operators:

$$\begin{aligned} L_n= & {} \left( 2N\frac{\partial }{\partial p_{n}}+\sum _{k=1}^{\infty }k p_k\frac{\partial }{\partial p_{k+n}}+\sum _{r=1}^{n-1}\frac{\partial ^2}{\partial p_{r}\partial p_{n-r}}\right. \nonumber \\&\left. +N^2\delta _{n,0}+p_1N\delta _{n,-1}-\frac{\partial }{\partial p_{n+2}}\right) \end{aligned}$$
(9)

Before explaining how we suggest solving the Virasoro constraints, let us briefly review a few rather traditional ways of solving the matrix model, namely obtaining explicit Schur averages. We do this because all of these methods break down in the \(\beta \)-deformed case, which explains why proving (1) is not simple. The method explained later in this section survives the \(\beta \)-deformation.

  • First of all, there is, of course, Wick’s theorem [22]. The key idea is to represent arbitrary correlators in terms of the symmetric group:

    (10)

    where the sum goes over permutations \(\gamma \) with a fixed cycle type \([2^m]\), which then allows one to represent the correlator of monomials in terms of symmetric group characters:

    $$\begin{aligned} \left\langle \prod _{p=1}^{l_{\Lambda }}{\text {Tr}}H^{m_{p}}\right\rangle =\sum _{R \vdash m} \varphi _{R}\left( \left[ 2^{m}\right] \right) \cdot D_{R}(N) \cdot \psi _{R}(\sigma )\nonumber \\ \end{aligned}$$
    (11)

    Here \(\psi _R (\sigma )\) and \(\varphi _R (\sigma )\) are differently normalized symmetric group characters for representation R and cycle type \(\sigma \), while \(D_R(N)\) is the dimension of the corresponding GL(N) representation and is equal to \(S_R(N)\). As described in [22], one can use symmetric group character orthogonality to further construct Schur averages and explicitly obtain the formula

    $$\begin{aligned} \langle S_{R}(H)\rangle =\frac{S_R(N)}{S_R(\delta _{k,1})} S_R(\delta _{k,2}) \end{aligned}$$
    (12)
  • On the other hand, one can just do explicit angular integration over the unitary group [23]. Consider the integral:

    $$\begin{aligned} Z(Y)= & {} \int dH \exp \left( -\frac{1}{2} {\text {Tr}}H^2 + {\text {Tr}}HY \right) \nonumber \\= & {} \int dH \exp \left( -\frac{1}{2} {\text {Tr}}H^2 \right) \int [dU]\exp \left( {\text {Tr}}UHU^{\dagger }Y \right) \nonumber \\= & {} \sum _{|R|\leqslant N}\frac{S_R(\delta _{k,1})S_R(Y)}{S_{R}(N)}\int dH \exp \left( -\frac{1}{2} {\text {Tr}}H^2 \right) S_R(H)\nonumber \\= & {} \sum _{|R|\leqslant N} \frac{S_R(\delta _{k,1})S_R(Y)}{S_{R}(N)}\left\langle S_R(H) \right\rangle . \end{aligned}$$
    (13)

    Here the transition between the second and third line is the character expansion of the Itzykson–Zuber integral [24]. Alternatively, one can just take the Gaussian integral and apply the Cauchy formula:

    $$\begin{aligned} Z(Y)= & {} \int dH \exp \left( -\frac{1}{2} {\text {Tr}}H^2 + {\text {Tr}}HY \right) \nonumber \\= & {} e^{\frac{1}{2}{\text {Tr}}Y^2}\nonumber \\= & {} \sum _{R}S_R(Y)S_R(\delta _{k,2}) \end{aligned}$$
    (14)

    Comparing the two expressions we immediately obtain (12).

  • Lastly, we could use integrability properties of the partition function [25]. Namely, one can represent the partition function as a determinant of the moment matrix given by

    (15)

    In terms of averages of Schur functions this means:

    (16)

    According to the general idea mentioned in the introduction, we could use the lowest Virasoro constraint \(L_{-1}\), also called the string equation. In terms of (16) it is written as

    $$\begin{aligned} \sum _{\square } c_{R+\square }=\sum _{\square }\left( N-i_{\square }+j_{\square }\right) c_{R-\square }. \end{aligned}$$
    (17)

    Solving for symmetric representations we determine the moments:

    (18)

    which correctly reproduces the moments of the Gaussian measure. Finally, inserting the moments back into (16), after a few algebraic manipulations, which are explained in detail in [18], we obtain (12).

The idea of this paper is to solve the system of equations written above explicitly. It turns out that the system (8) is equivalent to a single equation [25], which is the sum of the Virasoro constraints:

$$\begin{aligned} \sum _{n\ge 1} p_n L_{n-2}Z=0 \end{aligned}$$
(19)

The last equation can be written as

$$\begin{aligned} (l_0-2 W_{-2})Z=0 \end{aligned}$$
(20)

where the operator \(W_{-2}\) has degree 2 and \( l_0=\sum np_n\frac{\partial }{\partial p_n} \) is nothing but the grading operator. This equation is a special case of the equation

$$\begin{aligned} (l_0-k{\widehat{O}}^{(k)})\Psi =0 \end{aligned}$$
(21)

where \( {\widehat{O}}^{(k)} \) is an operator of degree k, i.e. \(\left[ l_0, {\hat{O}}^{(k)} \right] =k {\hat{O}}^{(k)}\). The solution of this equation is

$$\begin{aligned} \Psi =e^{{\widehat{O}}^{(k)}} \cdot 1 \end{aligned}$$
(22)

Therefore, we obtain the partition function of the Hermitian matrix model:

$$\begin{aligned} Z= e^{W_{-2}} \cdot 1 \end{aligned}$$
(23)

Not only does one have an explicit solution, but one can also naturally recover the character expansion. For this, notice that Schur polynomials are exactly the “natural” functions for the W-operator to act on. It acts on them by adding two boxes to the representation with some weight:

$$\begin{aligned} W_{-2}S_R= & {} \frac{1}{2}\sum _{R'=R+\square _1+\square _2}(j_{\square _1} -i_{\square _1}+N) \nonumber \\&\times (j_{\square _2}- i_{\square _2}+N) C_{RR'}S_{R'} \end{aligned}$$
(24)
figure b

where \(i_1\), \(i_2\), \(j_1\) and \(j_2\) are the coordinates of the positions of the boxes added to the initial diagram (in the picture the painted box has coordinates \((i,j)=(2,1)\)). The quantity \(j-i\) is sometimes called the content of the box in a Young diagram. The coefficients \( C_{RR'} \) come from the expansion of \( p_2 S_R \) in Schur polynomials:

$$\begin{aligned} p_2 S_R=\sum _{R'=R+\square +\square }C_{RR'}S_{R'} \end{aligned}$$
(25)

and in this case vanish unless \(R'\) differs from R by a piece of the form [2] or [1, 1], in other words the skew-diagram \(R'/R\) is a horizontal or vertical strip of size 2, and then \(C_{RR'}=\pm 1\). For example:

$$\begin{aligned} p_2 S_{[2]}= & {} S_{[4]}+0 \cdot S_{[3,1]}+ S_{[2,2]}-S_{[2,1,1]} \\ W_{-2}S_{[2]}= & {} \frac{1}{2}\left[ (2+N)(3+N)S_{[4]}+0 \cdot S_{[3,1]}\right. \\&\left. +(N-1)N S_{[2,2]}-(N-2)(N-1)S_{[2,1,1]} \right] \end{aligned}$$
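The first of these expansions can be verified by a few lines of computer algebra. A minimal sympy check; the power-sum expansions of the degree-4 Schur functions used below are the standard ones (obtained from symmetric-group characters) and are not taken from the text:

```python
# Verify p_2 S_[2] = S_[4] + 0*S_[3,1] + S_[2,2] - S_[2,1,1] in power sums.
from sympy import symbols, expand

p1, p2, p3, p4 = symbols('p1 p2 p3 p4')

S2   = (p1**2 + p2)/2
S4   = p1**4/24 + p1**2*p2/4 + p2**2/8 + p1*p3/3 + p4/4
S31  = p1**4/8  + p1**2*p2/4 - p2**2/8 - p4/4
S22  = p1**4/12 + p2**2/4 - p1*p3/3
S211 = p1**4/8  - p1**2*p2/4 - p2**2/8 + p4/4

residual = expand(p2*S2 - (S4 + 0*S31 + S22 - S211))   # vanishes identically
```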

Thus it is convenient to rewrite (23) in terms of Schur functions:

$$\begin{aligned} Z=e^{W_{-2}}\cdot S_{\varnothing }(p_k) \end{aligned}$$
(26)

where \( S_{\varnothing }(p_k)=1 \) is the Schur function of the empty Young diagram. Thus, knowing the expression (24) and acting iteratively on Schur functions, one obtains the character expansion of the partition function. From (26) one can extract superintegrability for Schur functions (12). We will describe the procedure in more detail in the next section, directly for the Jack polynomials.

3 \( \beta \)-Deformation

Now we are ready to present the main result of the paper. Namely, we prove the superintegrability of the \(\beta \)-deformation of the Gaussian Hermitian matrix model.

This deformation is introduced in the form of an eigenvalue integral (for a matrix integral representation of this model see [26, 27]). In general one can consider integrals not only over Hermitian matrices but over orthogonal or symplectic ones. The eigenvalue representations of both of these models differ from the Hermitian one in the power of the Vandermonde determinant in (5): one for orthogonal and four for symplectic matrices, respectively. Hence it is only natural to study eigenvalue integrals with the power of the determinant being a parameter taking any value. Thus, we define the following partition function:

$$\begin{aligned} Z_\beta (p_k)= & {} \int d\lambda _1 ... d\lambda _N \, \Delta ^{2\beta }(\lambda ) \exp \left[ -\frac{1}{2}\sum _{i}\lambda _i^2\right] \nonumber \\&\times \exp \left[ \sum _{k}\beta p_k\sum _{i}\lambda _i^k\right] \end{aligned}$$
(27)

One can still expand the partition function in terms of characters. However, it is now well known that in this case the proper basis functions are the so-called Jack polynomials. They are symmetric polynomials, orthogonal with respect to a certain scalar product, which reduce to Schur functions at \(\beta =1\) [12]. We list some of the simplest Jack polynomials for illustration:

$$\begin{aligned} J_{[1]}= & {} p_1 \\ J_{[2]}= & {} \frac{1}{\beta +1}(\beta p_1^2+p_2) \\ J_{[1,1]}= & {} \frac{1}{2}(p_1^2-p_2) \end{aligned}$$
$$\begin{aligned} J_{[3]}= & {} \frac{1}{(\beta +1)(\beta +2)}(\beta ^2 p_1^3+3\beta p_1 p_2+2p_3) \nonumber \\ J_{[2,1]}= & {} \frac{1}{2\beta +1}(\beta p_1^3+(1-\beta )p_1 p_2-p_3)\nonumber \\ J_{[1,1,1]}= & {} \frac{1}{6}(p_1^3-3p_1p_2+2p_3) \end{aligned}$$
(28)
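The stated reduction to Schur functions at \(\beta =1\) is straightforward to confirm with sympy; the power-sum expansions of the Schur functions below are standard and assumed here, not taken from the text:

```python
# Check that the Jack polynomials listed above reduce to Schur functions
# at beta = 1.
from sympy import symbols, expand

p1, p2, p3, b = symbols('p1 p2 p3 beta')

J = {
    '[2]':     (b*p1**2 + p2)/(b + 1),
    '[1,1]':   (p1**2 - p2)/2,
    '[3]':     (b**2*p1**3 + 3*b*p1*p2 + 2*p3)/((b + 1)*(b + 2)),
    '[2,1]':   (b*p1**3 + (1 - b)*p1*p2 - p3)/(2*b + 1),
    '[1,1,1]': (p1**3 - 3*p1*p2 + 2*p3)/6,
}
S = {
    '[2]':     (p1**2 + p2)/2,
    '[1,1]':   (p1**2 - p2)/2,
    '[3]':     (p1**3 + 3*p1*p2 + 2*p3)/6,
    '[2,1]':   (p1**3 - p3)/3,
    '[1,1,1]': (p1**3 - 3*p1*p2 + 2*p3)/6,
}

checks = {k: expand(J[k].subs(b, 1) - S[k]) == 0 for k in J}
```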

The Cauchy identity has a \(\beta \)-deformation as well; for Jack polynomials we have:

$$\begin{aligned} \sum _R\frac{x^{|R|}}{\Vert J_R\Vert ^2}J_R(p)J_R({\bar{p}})=\exp \left[ \sum _k \beta x^k \frac{p_k {\bar{p}}_k}{k}\right] \end{aligned}$$
(29)

where \(||J_R||^2\) is the norm of the Jack polynomial. By the proper basis functions we mean that averages of Jack polynomials constitute a direct \(\beta \)-deformation of (12). The expectation value of Jack polynomials in the \(\beta \)-deformed Hermitian Gaussian model is given by

$$\begin{aligned} \langle J_{R}(H)\rangle =\frac{J_R(N)}{J_R(\delta _{k,1})}\frac{J_R(\delta _{k,2})}{\Vert J_R\Vert ^2 }\beta ^{|R|} \end{aligned}$$
(30)

A key difference is that, as we have mentioned, it seems harder to prove this formula. Clearly, we cannot efficiently use Wick’s theorem. Standard KP/Toda integrability breaks down, i.e. no determinant-like representation is known for the partition function. However, we can still solve the model using the Virasoro constraints. The Virasoro and W-operators are obtained in the same way as in the undeformed case:

$$\begin{aligned} L^{(\beta )}_n= & {} ((n+1)(1-\beta )+2N\beta )\frac{\partial }{\partial p_{n}}+\beta \sum _{k=1}^{\infty }k p_k\frac{\partial }{\partial p_{k+n}}\nonumber \\&+\beta ^2 \sum _{r=1}^{n-1}\frac{\partial ^2}{\partial p_{r}\partial p_{n-r}}+((1-\beta )+N\beta )\beta N\delta _{n,0}\nonumber \\&+p_1 \beta ^2 N\delta _{n,-1}-\frac{\partial }{\partial p_{n+2}} \end{aligned}$$
(31)

rewriting

$$\begin{aligned} \sum _{n\ge 1} p_n L^{(\beta )}_{n-2}Z_\beta (p_k)=0 \end{aligned}$$
(32)

as

$$\begin{aligned} (l_0-2 W_{-2}^{(\beta )})Z_{\beta }(p_k)=0 \end{aligned}$$
(33)

one obtains the \(\beta \)-deformed W-operator:

$$\begin{aligned} W_{-2}^{(\beta )}= & {} \sum _{n=1}^{\infty } \left( \frac{(n+1) (1-\beta )}{2}+N\beta \right) p_{n+2} \frac{\partial }{\partial p_{n}}\nonumber \\&+\frac{\beta }{2} \sum _{k,n=1}^{\infty } (n+k-2)k p_{n}p_k \frac{\partial }{\partial p_{k+n-2}} \nonumber \\&+\frac{1}{2}\sum _{k,n=1}^{\infty }n k p_{k+n+2}\frac{\partial ^2}{\partial p_{k}\partial p_{n}}\nonumber \\&+\frac{((1-\beta )+N\beta )}{2}\beta N p_2+\frac{1}{2} \beta ^2 p_1^2N \end{aligned}$$
(34)

It turns out that Jack polynomials are the “natural” functions for the \(\beta \)-deformed W-operator too:

$$\begin{aligned} W_{-2}^{(\beta )}J_R= & {} \frac{1}{2}\sum _{R'=R+\square _1+\square _2}(j_{\square _1}+\beta (N-i_{\square _1})) \nonumber \\&\qquad \quad \times (j_{\square _2} +\beta (N-i_{\square _2}))C_{RR'}J_{R'} \end{aligned}$$
(35)

where \( C_{RR'} \) are the coefficients of the expansion of \( p_2 J_R \) in terms of Jack polynomials:

$$\begin{aligned} p_2 J_R=\sum _{R'=R+\square +\square }C_{RR'}J_{R'} \end{aligned}$$
(36)

As in the undeformed case, the important property is that the action of the W-operator differs from the Pieri rule (36) only by the box content factors. For example:

$$\begin{aligned} p_2 J_{[2]}= & {} J_{[4]}-\frac{2(\beta -1)\beta }{(\beta +1)(\beta +3)} J_{[3,1]}\nonumber \\&+\frac{4(1+2\beta )}{(1+\beta )^2(2+\beta )} J_{[2,2]}\nonumber \\&-\frac{2\beta (1+3\beta )}{(1+\beta )^3}J_{[2,1,1]} \end{aligned}$$
(37)

while

$$\begin{aligned} \begin{aligned} W_{-2}^{(\beta )}J_{[2]}&=\frac{1}{2}[(2+N\beta )(3+N\beta )J_{[4]}-\beta (N-1)(N\beta +2)\\&\quad \times \frac{2(\beta -1)\beta }{(\beta +1)(\beta +3)} J_{[3,1]}+\\&\quad +\beta (N-1)((N-1)\beta +1)\\&\quad \times \frac{4(1+2\beta )}{(1+\beta )^2(2+\beta )} J_{[2,2]}-\beta (N-2)\beta (N-1)\\&\quad \times \frac{2\beta (1+3\beta )}{(1+\beta )^3}J_{[2,1,1]} \end{aligned} \end{aligned}$$
(38)

Now we would like to describe in detail how an iterative application of formula (35) leads to an explicit expression for the expectation values of Jack polynomials. By iterated application of (35) and (36) one obtains

$$\begin{aligned} (W_{-2}^{(\beta )})^{n}\cdot J_{\varnothing }= & {} \frac{1}{2^n}\sum _{R_n:|R_n|=2n}\nonumber \\&\times \left( \prod _{ \left( i_\square ,j_\square \right) \in R_n} \left( j_\square +\beta (N-i_\square )\right) \right) \nonumber \\&\times \sum _{\left\{ {R_1,\ldots ,R_{n-1}} \right\} }D_{\varnothing ,R_1,R_2,...,R_n}J_{R_n} \end{aligned}$$
(39)
$$\begin{aligned} p_2^n \cdot J_{\varnothing }= & {} \sum _{R_n:|R_n|=2n}\sum _{\left\{ {R_1,\ldots ,R_{n-1}} \right\} }\nonumber \\&\times D_{\varnothing ,R_1,R_2,...,R_n}J_{R_n} \end{aligned}$$
(40)

Here \( D_{\varnothing ,R_1,R_2,...,R_n}=C_{\varnothing ,R_1}C_{R_1,R_2}...C_{R_{n-1},R_n} \), with \( |R_i|+2=|R_{i+1}| \), are combinatorial coefficients, which correspond to a certain pattern in which one obtains the Young diagram R from the empty one. The sum is taken over all sequences \(\left\{ {R_1,\ldots ,R_{n-1}} \right\} \) in which every next partition is obtained from the previous one by adding two boxes, with the corresponding coefficient \(C_{R_i,R_{i+1}}\). In other words, it is a sum over “paths” in the set of Young diagrams, where each “path” comes with a certain weight governed by formula (40).

As an example, the representation [3, 1] can be obtained in two ways (illustrated in the image below): (1) on the first step adding boxes with coordinates (0, 0) and (0, 1), on the second step (0, 2) and (1, 0); (2) on the first step adding boxes with coordinates (0, 0) and (1, 0), on the second step (0, 1) and (0, 2). The paths \( D_{\varnothing ,[2],[3,1]}\) and \( D_{\varnothing ,[1,1],[3,1]}\) correspond to these two ways, respectively.

figure c

A key observation is that the piece \( \prod \limits _{(i_\square ,j_\square ) \in R_n} (j_\square + \beta (N- i_\square )) \) factors out from the sum over “paths” because it does not depend on the order in which the boxes are added, but only on their contents. The expression for this factor is a version of the hook-content product formula, see formula (10.25) of [12]:

$$\begin{aligned} \frac{J_R(N)}{J_R(\delta _{k,1})}=\beta ^{-|R|} \prod _{ \left( i_\square ,j_\square \right) \in R} (j_\square +\beta (N-i_\square )) \end{aligned}$$
(41)

A typical example is:

$$\begin{aligned}&\frac{J_{[3,2]}(N)}{J_{[3,2]}(\delta _{k,1})}\\&\quad =\frac{\beta (N-1) \beta N (N\beta +1)(N\beta +2)(1+\beta (N-1))}{\beta ^5}. \end{aligned}$$
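Formula (41) can also be checked directly for all the partitions listed in (28), e.g. with the following sympy sketch (boxes are counted from zero, as in the path example above):

```python
# Check the hook-content formula (41) for the Jack polynomials of (28):
# evaluate J_R at p_k = N and at p_k = delta_{k,1}, and compare the ratio
# with the content product over the boxes of R.
from sympy import symbols, simplify

p1, p2, p3, b, N = symbols('p1 p2 p3 beta N')

J = {
    (2,):      (b*p1**2 + p2)/(b + 1),
    (1, 1):    (p1**2 - p2)/2,
    (3,):      (b**2*p1**3 + 3*b*p1*p2 + 2*p3)/((b + 1)*(b + 2)),
    (2, 1):    (b*p1**3 + (1 - b)*p1*p2 - p3)/(2*b + 1),
    (1, 1, 1): (p1**3 - 3*p1*p2 + 2*p3)/6,
}

def content_side(R):
    # beta^{-|R|} * prod over boxes (i,j) of (j + beta*(N - i)), 0-indexed
    prod = 1
    for i, row_len in enumerate(R):
        for j in range(row_len):
            prod *= j + b*(N - i)
    return prod / b**sum(R)

checks = {}
for R, JR in J.items():
    ratio = JR.subs({p1: N, p2: N, p3: N}) / JR.subs({p1: 1, p2: 0, p3: 0})
    checks[R] = simplify(ratio - content_side(R)) == 0
```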

Now, let us return to the evaluation of the combinatorial sum. We do not need to know each term. The trick here is to use the fact that this sum originates in the Pieri-like formula (40). From the Cauchy identity it follows that

$$\begin{aligned} e^{p_2} =\sum _{R}\frac{J_R(p_k) J_R(\delta _{k,2})}{\Vert J_R\Vert ^2} 2^{|R|/2} \end{aligned}$$

Rewriting it in the same manner as the iterative action of the W-operator and using relation (40) we obtain:

$$\begin{aligned}&\sum _{R}\frac{J_R(p_k)J_R(\delta _{k,2})}{\Vert J_R\Vert ^2}2^{|R|/2}=e^{p_2} \cdot 1\nonumber \\&\quad \quad =\sum _{R:|R|-even}J_R(p_k)\sum _{\left\{ {R_1,\ldots ,R_{n-1}} \right\} }D_{\varnothing ,R_1,R_2,...,R_n}\frac{1}{(|R|/2)!}\nonumber \\ \end{aligned}$$
(42)

Thus we obtain

$$\begin{aligned} \frac{\Vert J_R\Vert ^2}{2^{|R|/2}(|R|/2)!}\sum _{\left\{ {R_1,\ldots ,R_{n-1}} \right\} }D_{\varnothing ,R_1,R_2,...,R_n} = J_R(\delta _{k,2}) \end{aligned}$$
(43)

Let us illustrate how this works:

$$\begin{aligned} \frac{J_{[3,1]}(\delta _{k,2})}{\Vert J_{[3,1]}\Vert ^2}= & {} \frac{1}{2}[D_{\varnothing ,[2],[3,1]}+D_{\varnothing ,[1,1],[3,1]}]\\= & {} \frac{1}{2}[C_{\varnothing ,[2]}C_{[2],[3,1]}+C_{\varnothing ,[1,1]}C_{[1,1],[3,1]}]= \\= & {} \frac{1}{2}\left[ -\frac{2\beta (\beta -1)}{(\beta +1)(\beta +3)}-\frac{2\beta }{1+\beta }\right] =-\frac{2\beta }{\beta +3} \end{aligned}$$
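The algebra in this example is a one-line sympy check, taking the two path weights as printed above:

```python
# Verify that the two weighted paths to [3,1] sum to -2*beta/(beta+3).
from sympy import symbols, simplify, Rational

b = symbols('beta')
D_via_2  = -2*b*(b - 1)/((b + 1)*(b + 3))   # D_{empty,[2],[3,1]}
D_via_11 = -2*b/(1 + b)                     # D_{empty,[1,1],[3,1]}

total = simplify(Rational(1, 2)*(D_via_2 + D_via_11) + 2*b/(b + 3))
```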

Hence, using formula (43) we obtain the partition function:

$$\begin{aligned} Z_\beta= & {} e^{W_{-2}^{(\beta )}}\cdot J_{\varnothing }(p_k)=\sum _{R:|R|-even}J_R(p_k)\frac{1}{(|R|/2)!}\nonumber \\&\times \sum \limits _{\left\{ {R_1,\ldots ,R_{n-1}} \right\} } D_{\varnothing ,R_1,R_2,...,R_n} \prod \limits _{(i_\square ,j_\square ) \in R_n}(j_\square +\beta (N-i_\square ))\nonumber \\= & {} \sum _{R:|R|-even}J_R(p_k)\frac{J_R(N)}{J_R(\delta _{k,1})}\frac{J_R(\delta _{k,2})\beta ^{|R|}}{\Vert J_R\Vert ^2 } \end{aligned}$$
(44)

4 Constructing W-operators from Hamiltonians

Formula (35) is the key to our construction; however, we did not provide an explicit proof. Here we are going to sketch a general idea of where such operators come from and how to prove that they act on characters in the mentioned way. The construction is rather similar to the ones considered in [28] and [29]. We postpone a complete analysis, which would also involve the (q,t)-deformed case, to a separate paper.

Suppose that instead of \(W_{-2}^{(\beta )}\) (35) we had a simpler operator \(W_{-1}^{(\beta )}\), which acts on Jack polynomials as

$$\begin{aligned} W_{-1}^{(\beta )}J_R=\sum _{R'=R+\square }(\beta (N-i_{\square })+j_{\square })C_{RR'}J_{R'} \end{aligned}$$
(45)

Here \(C_{RR'}\) are the coefficients of the expansion of \(p_1J_R\) in Jack polynomials:

$$\begin{aligned} p_1J_R=\sum _{R'=R+\square } C_{RR'}J_{R'} \end{aligned}$$
(46)

To prove (45) we notice that it can be constructed by commuting the multiplication operator \(p_1\) with a diagonal operator:

$$\begin{aligned} H^{(\beta )}_{1} J_R = \sum _{( i_\square , j_\square ) \in R} \left( \beta \left( N-i_\square \right) + j_\square \right) J_R \end{aligned}$$
(47)

Such operators, diagonal in the Jack polynomial basis, are nothing but Calogero–Sutherland Hamiltonians. We need their expression in terms of the time variables. A description of these operators can be found in [29,30,31], but for our current goal only one of them is needed. In our normalisation it reads

$$\begin{aligned} H^{(\beta )}_1= & {} \dfrac{1}{2}\sum _{n,m\ge 1}\nonumber \\&\times \left( n m p_{n+m}\frac{\partial ^2}{\partial p_n\partial p_m}+\beta (n+m)p_n p_m\frac{\partial }{\partial p_{n+m}}\right) \nonumber \\&+ \dfrac{1}{2}\sum _{n\ge 1}((n+1)(1-\beta )-2\beta N)n p_n\frac{\partial }{\partial p_n} \end{aligned}$$
(48)

Finally, we can find the expression for \(W_{-1}^{(\beta )}\) in terms of the time variables:

$$\begin{aligned} \begin{aligned} W_{-1}^{(\beta )}=[H^{(\beta )}_1,p_1] = \sum _{n}n p_{n+1}\frac{\partial }{\partial p_{n}}+p_1( 1-\beta - \beta N ) \end{aligned}\nonumber \\ \end{aligned}$$
(49)
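This commutator can be verified symbolically. A sketch with sympy, truncating the sums in (48) at a finite index (exact when acting on a fixed low-degree polynomial) and comparing \([H^{(\beta )}_1,p_1]\) with the right-hand side of (49):

```python
# Check W_{-1}^{(beta)} = [H_1^{(beta)}, p_1] on a generic test polynomial.
from sympy import symbols, diff, expand, Rational

M = 10
p = symbols('p1:11')                    # p[i] stands for p_{i+1}
b, N = symbols('beta N')

def H(f):
    # eq. (48), sums truncated at index M
    res = 0
    for n in range(1, M + 1):
        res += Rational(1, 2)*((n + 1)*(1 - b) - 2*b*N)*n*p[n - 1]*diff(f, p[n - 1])
        for m in range(1, M + 1):
            if n + m <= M:
                res += Rational(1, 2)*n*m*p[n + m - 1]*diff(f, p[n - 1], p[m - 1])
                res += Rational(1, 2)*b*(n + m)*p[n - 1]*p[m - 1]*diff(f, p[n + m - 1])
    return res

def W(f):
    # eq. (49)
    res = p[0]*(1 - b - b*N)*f
    for n in range(1, M):
        res += n*p[n]*diff(f, p[n - 1])
    return res

f = p[0]**2*p[2] + 3*p[1]*p[3] + p[0]   # arbitrary low-degree polynomial
residual = expand(H(p[0]*f) - p[0]*H(f) - W(f))   # vanishes identically
```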

This procedure can be generalized and applied to proving relations similar to (45). In particular, to prove (35) one should construct \(W_{-2}\) from \(H_2\) and \(p_2\) in addition to \(H_1\) and \(p_1\).

5 Discussion

The main technical result of this paper is the proof of formula (1), expressing averages of Jack polynomials in terms of the same functions evaluated at special points. At a more conceptual level, we have developed a method of solving the Virasoro equations explicitly in terms of the W-representation, which is applicable when the usual methods of integration do not work. The algebraic side of the picture involves a representation of the W-operator in the space of characters. As we have seen, the construction survives the \(\beta \)-deformation. From the discussion in Sect. 4 it is clear that there should be an immediate generalization to the case of the (q,t)-deformation with the appropriate Macdonald Hamiltonians, and further to elliptic models and possibly even further, involving Kerov functions [32] (or non-Kerov deformations of Macdonald polynomials [33]). As we can see, out of all possible operators of the form (49), matrix models select some specific ones. It would be interesting to distinguish matrix models from “all models” from this point of view.

A lot of other intriguing directions of generalization immediately come to mind. The first is the case of non-Gaussian models and, in general, models with boundary conditions or non-trivial contour choices. It is distinguished by the fact that the Virasoro constraints are not enough to fully specify the partition function, hence it seems that something should break in our method. On the other hand, for specific choices of integration contours or boundary conditions superintegrability still holds, and we could expect some kind of W-representation too [34, 35]. The second interesting direction is the generalized Kontsevich model. Here the situation is the opposite. The correct generalization of the W-operators is known [36], but the appropriate characters are not (for attempts, see [37]). It seems that, to obtain an answer in this case, a deeper understanding of the relation between the algebra of W-operators and characters is needed.