1 Introduction

We continue to implement the large program of the concrete approach to quantum field theories. This program consists in the simple-to-complex study of ever more complicated QFT setups, each time in full generality, with the focus on non-perturbative phenomena and the regime of finite (neither infinitesimal nor infinite) coupling constants. The hope is that the essential complications that arise are untangled this way and can be dealt with one by one.

Our main focus is the bilinear superintegrability structure [1] – a generalization of the usual, linear, superintegrability. Linear superintegrability itself was recently realized to be a convenient language for the non-perturbative, finite-N description of a wide range of matrix models in different regimes (phases) [2]. The bilinear superintegrability, perhaps even more importantly, sheds light on the previously obscure origins of the celebrated Nekrasov calculus [3]: the most fruitful concrete approach to the non-perturbative physics of supersymmetric gauge theories [4].

Specifically, we explain that bilinear superintegrability is not restricted to just Gaussian and logarithmic (Penner-like) models, but is more universal and, in particular, straightforwardly generalizes to the wide class of monomial matrix models in the pure phase [2]. This is a wide class of models indeed, since any observable of a polynomial model can be expanded near a suitable monomial point in a convergent power series, as opposed to the usual asymptotic power series of perturbation theory near the Gaussian (quadratic) point. The main statements are presented in Sect. 2. The central role is played by the relevant monomial deformation of the box-factor-inserting operator \({\mathcal {O}}\) (see (6)), which is gradually becoming one of the key objects in the modern MM framework [5,6,7,8,9], being developed as the adequate language for understanding the recently proposed WLZZ models [10, 11] and their various natural generalizations. These concrete observations about the structure of bilinear superintegrable averages in the pure phase of monomial matrix models constitute the main result of the present paper.

The bilinear superintegrability most famously appears in (generalized) Kadell integrals (see eqns. (5.1)–(5.3) in [2]), where the bilinear average of two Schur functions, one of them of shifted argument, in a Dotsenko–Fateev (DF) type logarithmic model equals a manifestly factorized expression.

However, the de-log ( \(v\rightarrow \infty \), \(\log (1 - v X^r) \sim - X^r\) ) limit of this formula, which restores the usual monomial potential, destroys the bilinearity of the correlator – the shift becomes infinite. So, naively, a bilinear superintegrability formula in non-logarithmic monomial matrix models does not exist. However, if one believes that structures persist when taking simplification limits (and de-log is such a simplification), then bilinear superintegrability should exist in this case as well.

From this point of view, our formula (8) is the long-awaited answer to this apparent puzzle: in the limit the “shifted” Schur polynomial becomes the “associated” \(K_\Delta \) polynomial (whose explicit formula (5) features a kind of shift operator in the time-variables), and a non-trivial (anomaly-like) permutation operation \(\pi (\Delta )\) appears.

Further, the simple form of the single- and double-\(K_\Delta \) averages (9) and (10) is reminiscent of the structure of CFT correlators. Therefore, in Sect. 5 we study the structure of triple-K averages. It turns out to be more complicated than the naive expectation from the CFT analogy, so the naive motto

$$\begin{aligned} \text {Monomial MM in}\ K_\Delta \ \text {basis} \equiv \text {some CFT} \end{aligned}$$
(1)

is wrong. Still, the appearing non-factorizability seems tame enough (at most quadratic factors appear in the studied examples) to deserve further intensive investigation.

Finally, in Sect. 6 we summarize our proof attempts. It turns out that, while the single-average formula (9) and the implication \((8) \rightarrow (10)\) are quite straightforward, an equally concise explanation for (8) itself is so far missing. This, of course, makes the existence of (8) even more valuable and non-trivial.

In this paper, as has become customary for papers about monomial matrix models, we freely use the language related to the quotient division of a partition by an integer r: r-cores, r-quotients, r-signatures, rim-hooks and so on. We refer the reader to Appendix A of [2], as well as to the original Macdonald book [12].

2 Main statements

The monomial matrix model in the pure phase can be defined directly through its normalized Schur polynomial average

$$\begin{aligned} \left\langle S_R \right\rangle =&\ S_R \{\delta _{k,r}\} \cdot \Lambda _{r,a}^R (N), \end{aligned}$$
(2)

where \(S_R \{\delta _{k,r}\}\) is the Schur polynomial evaluated at the special point \(p_k = \delta _{k,r}\), and

\(\Lambda _{r,a}^R (N)\) is a peculiar product over the boxes of the diagram R

$$\begin{aligned} \Lambda _{r,a}^R (N) =&\ \prod _{(i,j)\in R} \left[ \left[ N-i+j \right] \right] _{r,0} \left[ \left[ N-i+j \right] \right] _{r,a}, \text { with } \\ \nonumber \left[ \left[ f(i,j) \right] \right] _{r,x} =&\ f(i,j) \text { if } (f(i,j)-x) \text { mod } r = 0 \text { else } 1, \end{aligned}$$
(3)

that will frequently reappear in our presentation; here r is an integer \(\ge 2\), and the parameter a runs from 0 to \(r-1\). The emergent additional parameter \(b = N \text { mod } r\) can be equal to 0 or a.
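For concreteness, here is a minimal computational sketch (ours, not part of the original presentation; the helper names boxes and Lambda are hypothetical) of the box-product (3). It assumes the convention just stated, namely that a bracketed factor failing its congruence is simply omitted, and it fixes the residue \(b = N \text { mod } r\) by hand while keeping N symbolic:

```python
# Minimal sketch (ours) of the box-product (3): a bracketed factor is kept
# only when its congruence holds and is omitted otherwise.  The residue
# b = N mod r is fixed by hand, while N itself stays symbolic.
from sympy import symbols, factor

N = symbols('N')

def boxes(partition):
    """Yield the 1-based (i, j) coordinates of the boxes of a Young diagram."""
    for i, row_len in enumerate(partition, start=1):
        for j in range(1, row_len + 1):
            yield i, j

def Lambda(partition, r, a, b):
    """Box product Lambda^R_{r,a}(N) of eq. (3), assuming N = b (mod r)."""
    result = 1
    for i, j in boxes(partition):
        if (b - i + j) % r == 0:        # the [[N-i+j]]_{r,0} factor
            result = result * (N - i + j)
        if (b - i + j - a) % r == 0:    # the [[N-i+j]]_{r,a} factor
            result = result * (N - i + j)
    return result

print(factor(Lambda([2, 2, 1, 1], r=3, a=1, b=0)))
```

Run as is, this reproduces the factor \(N^2(N+1)(N-2)(N-3)\) that reappears in (17) below.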

Indeed, given (2), the normalized correlator of any other symmetric polynomial can be calculated as a linear combination of these basis ones.

Motivated by the numerous papers on WLZZ models [5,6,7,8,9], we also frequently use the shorthand notation

$$\begin{aligned} \xi _R := \Lambda _{r,a}^R (N), \end{aligned}$$
(4)

keeping in mind that in our case the \(\xi \)-factor depends on N, r, a (and b).

For the relation to the usual matrix model definition through repeated integration, see [2] and the more recent development [13].

Now consider the auxiliary (associated) polynomials \(K_\Delta \), which are related to Schur polynomials by a manifest triangular change of variables

$$\begin{aligned} K_\Delta = {\mathcal {O}}^{-1} \exp \left( (- 1) \frac{\partial }{\partial p_r}\right) {\mathcal {O}} S_\Delta , \end{aligned}$$
(5)

where the \({\mathcal {O}}\)-operator (resp. the \({\mathcal {O}}^{-1}\)-operator) is the operator that multiplies (resp. divides) each Schur function by the corresponding box-product (3)

$$\begin{aligned} {\mathcal {O}} S_R = \Lambda _{r,a}^R (N) \cdot S_R \end{aligned}$$
(6)

and the differential operator \(r \frac{\partial }{\partial p_r}\) acts in the Schur basis in a manifest way

$$\begin{aligned} r \frac{\partial }{\partial p_r} S_R = (-1)^r \sum _{R' = R-\text {rim hook}} \frac{\sigma _r(R)}{\sigma _r(R')} S_{R'}, \end{aligned}$$
(7)

at least when R has a trivial r-core. Here \(\sigma _r(R)\) is the r-signature of the diagram R.
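Since the exponential of the derivative entering (5) is used repeatedly below (see Sect. 6), note that it is nothing but the finite shift \(p_r \rightarrow p_r - 1\). A minimal symbolic sketch (ours, not part of the original text) illustrating this fact on an arbitrary low-degree test polynomial:

```python
# Minimal check (ours) that exp(-d/dp_r) in (5) acts as the finite shift
# p_r -> p_r - 1; here r = 3 and an arbitrary test polynomial are assumed.
from sympy import symbols, diff, factorial, simplify

p1, p3 = symbols('p1 p3')
f = p3**2 + p1*p3 + 1                     # test polynomial in p_1 and p_3

# truncated Taylor series of exp(-d/dp3) acting on f (higher terms vanish)
shifted = sum((-1)**n / factorial(n) * diff(f, p3, n) for n in range(5))

assert simplify(shifted - f.subs(p3, p3 - 1)) == 0
```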

With these definitions, one can check that a number of notable properties hold:

  • The average of \(K_\Delta \) with a Schur function \(S_R\) is equal to

    $$\begin{aligned} \boxed { \left\langle K_\Delta S_R \right\rangle = (-1)^{\pi _{r,a,b}(\Delta )} S_{R/\pi _{r,a,b}(\Delta )} \{\delta _{k, r}\} \cdot \Lambda ^R_{r,a}(N) } \end{aligned}$$
    (8)

    Here \(S_{R/Q}\) is the skew Schur polynomial, which we again evaluate at the special point \(p_k = \delta _{k,r}\). The permutation operation \(\pi _{r,a,b}(\Delta )\) is a certain permutation on the space of partitions that is important to the story (it appears in several places, see below) and which we describe in detail in Sect. 3. The \((-1)^{\pi _{r,a,b}(\Delta )}\) is a certain sign associated with the permutation \(\pi _{r,a,b}\), which we also describe in Sect. 3.

  • As an elementary corollary of the previous property, the single average of a K-polynomial is trivial unless this polynomial corresponds to the empty partition

    $$\begin{aligned} \left\langle K_{\Delta } \right\rangle \equiv \left\langle K_{\Delta } S_\emptyset \right\rangle = \delta _{\Delta ,\emptyset } \end{aligned}$$
    (9)
  • The double-average of two K-polynomials \(K_{\Delta _1}\) and \(K_{\Delta _2}\) is equally concise and manifest

    $$\begin{aligned} \boxed { \left\langle K_{\Delta _1} K_{\Delta _2} \right\rangle = \delta _{\Delta _1,\pi _{r,a,b}(\Delta _2)} \cdot (-1)^{\pi _{r,a,b}(\Delta _2)} \cdot \Lambda _{r,a}^{\Delta _1} } \end{aligned}$$
    (10)

    in case both \(\Delta _1\) and \(\Delta _2\) have trivial r-cores. The permutation operation \(\pi _{r,a,b}(\Delta )\) is such that \(\Lambda _{r,a}^{\Delta _1} = \Lambda _{r,a}^{\Delta _2}\), so it does not matter which one to use. In particular, when the numbers of boxes are not equal, \(|\Delta _1| \ne |\Delta _2|\), the bilinear K-average is always zero – a feature that we originally used to calculate the \(K_\Delta \) polynomials recursively, before we understood the simple general formula (5). If only one of the r-cores is non-trivial, the average is zero. On the other hand, when both r-cores are non-trivial, there is a non-trivial interaction structure, which even relaxes the selection rule \(|\Delta _1| = |\Delta _2|\). For instance, for \(r=3\) the partitions [2, 2, 1, 1] and [3, 2, 2, 1, 1] are both their own (non-trivial) r-cores. At the same time, for \(a=1\), \(b=0\) we have

    $$\begin{aligned} \left\langle K_{[3,2,2,1,1]} K_{[2,2,1,1]} \right\rangle \ne 0 \end{aligned}$$
    (11)

    We present more examples of this non-trivial interaction in Sect. 4, but the general picture is, so far, missing.

3 Permutation operation \(\pi _{r,a,b}\)

The permutation operation \(\pi _{r,a,b}\) is manifestly given by the following construction.

For any partition \(\Delta \) with trivial r-core, consider its r-quotients \(\Delta _i\), \(i=0,\dots ,r-1\). \(\pi _{r,a,b}\) rearranges the r-quotients \(\Delta _i\) according to the rule

$$\begin{aligned} \Delta _i \longrightarrow \Delta _{r-1 - i + a - 2 b \text { mod } r} \end{aligned}$$
(12)

and then the partition \(\Delta ^{'} = \pi _{r,a,b}(\Delta )\) is reassembled from the shuffled quotients.

For instance, for \(r=5\), \(a=1\), \(b=0\) the partition [2, 2, 2, 2, 2] has 5-quotients \((\emptyset ,\emptyset ,\emptyset ,[1],[1])\). The reshuffling of the quotients according to prescription (12) yields \((\emptyset ,[1],\emptyset ,\emptyset ,[1])\), which is the 5-quotient representation of the partition [4, 2, 2, 1, 1]. Therefore, under \(\pi _{5,1,0}\) we have

$$\begin{aligned}{}[2,2,2,2,2] \longleftrightarrow [4,2,2,1,1] \end{aligned}$$
(13)

In the Gaussian case \(r=2\) the effect of the \(\pi _{r,a,b}\) operation is not observed: for every r, a, b one of the \(\Delta _i\)’s always stays in its place, and for \(r=2\) so does the only other one.

The sign of the operation, \((-1)^{\pi _{r,a,b}(\Delta )}\), is actually nothing but the product of the corresponding r-signatures:

$$\begin{aligned} (-1)^{\pi _{r,a,b}(\Delta )} = \sigma _r(\Delta ) \sigma _r(\pi _{r,a,b}(\Delta )) \end{aligned}$$
(14)

Note that, while this formula is a conjecture, as we will see in Sect. 6, the respective signs in formulas (8) and (10) should coincide (so the same \((-1)^{\pi _{r,a,b}(\Delta )}\) enters both formulas).

4 Double-K average in case of non-trivial cores

The formula (5) can be applied equally well when \(\Delta \) has a trivial or a non-trivial r-core. When the partition is its own r-core (denote it \(\Delta _{oc}\)), the corresponding Schur polynomial does not depend on \(p_r\), and therefore the K-polynomial is equal to the Schur polynomial

$$\begin{aligned} K_{\Delta _{oc}} = S_{\Delta _{oc}} \end{aligned}$$
(15)

The structure of pair correlators of such partitions is much less obvious than the simple formula (10); here we list some more-or-less astonishing examples:

  • Some polynomials are “vanishing” vectors – orthogonal to every \(K_R\) with the same number of boxes, including itself. For instance, for \(r=3,\ a=1,\ b=0\):

    $$\begin{aligned} \left\langle K_{[2,2,1,1]} K_{[2,2,1,1]} \right\rangle =&\ 0, \nonumber \\ \left\langle K_{[2,2,1,1]} K_{R} \right\rangle =&\ 0 \quad \text { for } |R|=6 \end{aligned}$$
    (16)
  • At the same time, the average between partitions with different r-cores and different numbers of boxes is non-vanishing

    $$\begin{aligned} \left\langle K_{[2,2,1,1]} K_{[3,2,2,1,1]} \right\rangle = N^2 (N+1)(N-2)(N-3) \end{aligned}$$
    (17)

    Note that the N-dependent factor is equal to \(\Lambda _{3,0}^{[2,2,1,1]}(N)=\Lambda _{3,0}^{[3,2,2,1,1]}(N)\), that is, it looks like

    $$\begin{aligned}&\text {Non-trivial are the pair correlators between}\nonumber \\&\quad K\text {-polynomials that have}\ {coincident}\ \Lambda \text {-factors.} \end{aligned}$$
    (18)

    Whether this is actually true or not remains to be seen in a separate thorough study.

  • Furthermore, the non-vanishing correlators get even more complicated. For instance, both quadratic (i.e. same \(\Delta \))

    $$\begin{aligned}&\left\langle K_{[7,2]} K_{[7,2]} \right\rangle = \frac{(-1)}{9}\left( N^2 + 10 N + 33\right) \Lambda _{3,0}^{[7,2]}(N) \nonumber \\&\left\langle K_{[4,2,2,1]} K_{[4,2,2,1]} \right\rangle = \frac{(-1)}{9} (N-3)(N-2) \Lambda _{3,0}^{[4,2,2,1]}(N) \end{aligned}$$
    (19)

    and bilinear correlators

    $$\begin{aligned} \left\langle K_{[4,2,2,1]} K_{[4,2,1,1,1]} \right\rangle = \frac{1}{9}\left( N^2 - 5 N + 15\right) \Lambda _{3,0}^{[4,2,2,1]}(N) \end{aligned}$$
    (20)

    can have extra, often non-factorizable, factors (in addition to being divisible by the usual \(\Lambda \)-factor).

It remains to be seen whether these extra (non-factorizable) factors can be cured by some clever redefinition of the K-polynomials in the case of non-trivial cores or whether, perhaps, some more general formula can be invented that takes these more complicated cases into account as is.

5 Triple-K averages

The single- and double-K averages are reminiscent of the averages in conformal field theory, where for primary operators one has

$$\begin{aligned} \left\langle {\mathcal {O}}(x) \right\rangle \sim&\ \delta _{\Delta ,0} \nonumber \\ \left\langle {\mathcal {O}}_1(x) {\mathcal {O}}_2(y) \right\rangle \sim&\ \frac{\delta _{\Delta _1,\Delta _2}}{(x-y)^{\Delta _1 + \Delta _2}}, \end{aligned}$$
(21)

where \(\Lambda _{r,a}^{\Delta _{1,2}}(N)\) in (10) can, perhaps, be thought of as a “discrete” analog of \((x-y)^{-\Delta _1-\Delta _2}\).

In this logic, the simple form of the three-point average in conformal field theory

$$\begin{aligned}&\left\langle {\mathcal {O}}_1(x) {\mathcal {O}}_2(y) {\mathcal {O}}_3(z) \right\rangle \nonumber \\&\quad = \frac{C_{\Delta _1,\Delta _2,\Delta _3}}{ (x-y)^{\Delta _1 + \Delta _2 - \Delta _3} (y-z)^{\Delta _2 + \Delta _3 - \Delta _1} (z-x)^{\Delta _3 + \Delta _1 - \Delta _2}} \end{aligned}$$
(22)

should imply, on our matrix model side, a comparably simple, fully factorized triple-K average, whose N-dependence is built from peculiar combinations of the \(\Lambda _{r,a}^{\Delta _{1,2,3}}(N)\)-factors.

This naive hope is, however, overoptimistic. For some small diagrams the average is indeed factorizable and simple; for instance, for \(r=3, a=1, b=0\)

$$\begin{aligned} \left\langle K_{[3]} K_{[3]} K_{[3]} \right\rangle = (-2) N (N+1)(N+2). \end{aligned}$$
(23)

For other diagrams, however, the average stops being factorizable

$$\begin{aligned}&\left\langle K_{[6]} K_{[6]} K_{[5,1]} \right\rangle = (-2) N (N + 1) (N + 4) (N + 3)\nonumber \\&\quad (N^2 + 16 N + 57) \end{aligned}$$
(24)

The non-factorization, however, seems at the moment to be mild: in the examples we analyzed, at most a quadratic non-factorized polynomial was observed. Therefore, it may yet turn out that the three-point K-average is always a sum of at most two fully factorized expressions. For instance, for the above example a plausible “split” could look like

$$\begin{aligned}&\left\langle K_{[6]} K_{[6]} K_{[5,1]} \right\rangle = (-2) N^2 (N + 1) (N + 4) (N + 3)^2 \nonumber \\&\quad + (2 \cdot 19) (N-3) N (N + 1) (N + 4) (N + 3), \end{aligned}$$
(25)

where one now needs to explain the origin of the two summands.

Further intensive studies are needed to discern between several alternatives, which are equally probable at the moment:

  • the non-factorizability of the triple K-polynomial average is, indeed, at most quadratic, and some hidden structure (perhaps an analog of the KZ equation or similar) controls this simplification;

  • the proper matrix model analogs of primary operators are not just \(K_\Delta \) polynomials with trivial-core \(\Delta \), but \(K_\Delta \)’s with some additional condition/requirement. The triple averages of such, “truly primary”, \(K_\Delta \)’s are factorizable, while the averages of “descendant” \(K_\Delta \)’s, in general, do not factorize;

  • the triple K-polynomial averages are fully non-factorizable and generic, and no hidden structure exists.

6 Towards proofs

The experimentally observed bilinear superintegrability formulas (8), (9) and (10) are crisp and concise. One may, therefore, be tempted to think that their proof is equally crisp and simple, and follows from ready generalizations of certain MM/representation-theoretic constructions to the monomial case.

At least at the moment this does not seem to be the case: several attempts (listed below) to find such auxiliary generalized structures that would help in the proof fail. This, of course, makes the bilinear superintegrability formulas (8), (9) and (10) all the more interesting and valuable: true examples of emergent structure, which cannot be naively reduced to, or explained by, more fundamental observations.

6.1 The first encouraging successes

  • The single K-average (9) is, quite naturally, simpler than the bilinear ones (8) and (10), so one may hope to prove it first.

    And indeed, utilizing the formula for the expansion of a Schur polynomial of shifted arguments in terms of skew Schur functions [12] (a small symbolic check of this identity on a tiny example is given after the derivations below)

    $$\begin{aligned} \sum _{\Delta \in R} S_{R/\Delta } (p) S_{\Delta }(g) = S_R(p+g) \end{aligned}$$
    (26)

    we can write (with the help of formulas (5)–(7), (2), and also keeping in mind that \(\exp \left( (- 1) \frac{\partial }{\partial p_r} \right) \) is nothing but the shift operator \(p_k \rightarrow p_k - \delta _{k,r}\))

    $$\begin{aligned} \left\langle K_\Delta \right\rangle&= \ \left\langle {\mathcal {O}}^{-1} \exp \left( (- 1) \frac{\partial }{\partial p_r} \right) {\mathcal {O}} S_\Delta \right\rangle \nonumber \\&\quad \mathop {=}_{(5), (6)} \left\langle {\mathcal {O}}^{-1} \xi _\Delta S_\Delta \{p_k - \delta _{k,r}\} \right\rangle \nonumber \\&\quad \mathop {=}_{(26),(2)} \sum _{\nabla \in \Delta } \frac{1}{\xi _\nabla } \xi _\Delta S_{\Delta /\nabla }\{-\delta _{k,r}\} \cdot \xi _\nabla S_\nabla \{\delta _{k,r}\} \mathop {=}_{(26)}\nonumber \\&\qquad S_\Delta \{p_k=0\} = \delta _{\Delta ,\emptyset } \end{aligned}$$
    (27)
  • Similarly, the implication \((8) \rightarrow (10)\) is easy to prove. Indeed, expanding the definition we obtain

    $$\begin{aligned}&\left\langle K_\Delta K_{\Delta ^{'}} \right\rangle = \ \sum _{\nabla \in \Delta ^{'}} S_{\Delta ^{'}/\nabla }\{-\delta _{k,r}\} \frac{\xi _{\Delta ^{'}}}{\xi _\nabla } \left\langle K_\Delta S_\nabla \right\rangle \nonumber \\&\quad \mathop {=}_{(8)} \xi _{\Delta ^{'}} \left[ \sum _{\nabla \in \Delta ^{'}} S_{\Delta ^{'}/\nabla }\{-\delta _{k,r}\} \frac{1}{\xi _\nabla } \cdot \xi _\nabla \cdot (-1)^{\pi (\Delta )}\right. \nonumber \\&\qquad \left. S_{\nabla /\pi (\Delta )}\{\delta _{k,r}\} \right] . \end{aligned}$$
    (28)

Immediately one can see that the sum in brackets is independent of N, so the N-dependence of the whole formula (10) is correctly reproduced by the \(\xi _{\Delta ^{'}}\)-factor.

The sign and selection rule require a bit more care. First, the skew Schur functions are non-zero only when (keep in mind that \(|\pi (\Delta )| = |\Delta |\))

$$\begin{aligned} |\Delta | \le |\nabla | \le |\Delta ^{'}| \end{aligned}$$
(29)

But because of the symmetry w.r.t. the interchange \(\Delta \leftrightarrow \Delta ^{'}\) we actually have

$$\begin{aligned} |\Delta | = |\nabla | = |\Delta ^{'}|, \end{aligned}$$
(30)

therefore, we should have \(\nabla = \Delta ^{'} = \pi (\Delta )\), and the \(\nabla \)-sum trivializes to give the desired sign \((-1)^{\pi (\Delta )}\).
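Here is the small symbolic check of the expansion formula (26) promised above: a sketch (ours, not from the paper) for R = [2, 1], with the handful of Schur and skew Schur polynomials it needs written out by hand in power-sum variables.

```python
# Small symbolic check (ours) of (26) for R = [2,1].  By hand:
# S_[1] = p1, S_[2] = (p1^2+p2)/2, S_[1,1] = (p1^2-p2)/2, S_[2,1] = (p1^3-p3)/3,
# and the skew pieces S_{[2,1]/[1]} = S_[2] + S_[1,1],
# S_{[2,1]/[2]} = S_{[2,1]/[1,1]} = S_[1].
from sympy import symbols, Rational, simplify

p1, p2, p3, g1, g2, g3 = symbols('p1 p2 p3 g1 g2 g3')

S1  = lambda a1: a1
S2  = lambda a1, a2: Rational(1, 2) * (a1**2 + a2)
S11 = lambda a1, a2: Rational(1, 2) * (a1**2 - a2)
S21 = lambda a1, a2, a3: Rational(1, 3) * (a1**3 - a3)

lhs = (S21(p1, p2, p3)                           # Delta = empty
       + (S2(p1, p2) + S11(p1, p2)) * S1(g1)     # Delta = [1]
       + S1(p1) * S2(g1, g2)                     # Delta = [2]
       + S1(p1) * S11(g1, g2)                    # Delta = [1,1]
       + S21(g1, g2, g3))                        # Delta = [2,1]
rhs = S21(p1 + g1, p2 + g2, p3 + g3)             # S_[2,1](p + g)

assert simplify(lhs - rhs) == 0
```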

Writing down the bilinear average in a similar manner

$$\begin{aligned} \left\langle K_\Delta S_R \right\rangle = \xi _R \left( \sum _{\begin{array}{c} \nabla \in \Delta \\ P \end{array}} S_{\Delta /\nabla }\{-\delta _{k,r}\} \cdot {\mathcal {N}}_{\nabla R}^P \cdot \frac{\xi _\Delta \xi _P}{\xi _\nabla \xi _R} \cdot S_P \{\delta _{k,r}\} \right) , \end{aligned}$$
(31)

where \({\mathcal {N}}_{\nabla R}^P\) are the Littlewood–Richardson coefficients, we see that the goal is to prove, firstly, that the sum in brackets is N-independent and, secondly, that the peculiar permutation operation \(\pi _{r,a,b}\) emerges. How to do this is, however, not at all obvious at the moment: for illustration we present here a couple of proof ideas that fail (i.e. the emergent structure (8) is not decomposable into, or explained by, these simpler, putative sub-structures).

6.2 No Cauchy-like summation

Given the expansion formula for Schur functions of shifted arguments (26), the hope could be that, if one considers the generating function for Eq. (8), summed with auxiliary Schur functions, the l.h.s. sum

$$\begin{aligned} P_{r,a,b}(R) := \sum _{\Delta \in R} K_{\pi _{r,a,b}(\Delta )} (p) \cdot (-1)^{\pi _{r,a,b}(\Delta )} \cdot S_{\Delta }(g) \end{aligned}$$
(32)

actually evaluates to something nice and concise.

This, however, turns out not to be the case, as the first few examples

$$\begin{aligned}&P_{3,1,0}([3]) = \frac{1}{54} \left( -3 p_1^3 - N^2 - N + 3 p_3 \right) g_1^3 \nonumber \\&\quad - \frac{1}{18} g_2 \left( 3 p_1^3 + N^2 + N - 3 p_3 \right) g_1 - \frac{1}{27} g_3 N^2 \nonumber \\&\quad - \frac{1}{27} g_3 N + 1 + \frac{1}{54} \left( -6 p_1^3 + 6 p_3 \right) g_3 \nonumber \\&P_{3,1,0}([2,1]) = \frac{1}{54} \left( -3 p_1^3 + 2 N^2 - 9 p_1 p_2 + 2 N - 6 p_3 \right) g_1^3\nonumber \\&\quad + 1 + \frac{1}{54} \left( 3 p_1^3 - 2 N^2 + 9 p_1 p_2 - 2 N + 6 p_3 \right) g_3 \nonumber \\&P_{3,1,0}([1,1,1]) = \ \frac{1}{108} \left( 3 p_1^3-2 N^2-9 p_1 p_2 + 4 N + 6 p_3 \right) g_1^3\nonumber \\&\quad +\frac{1}{18} \left( -\frac{3}{2} p_1^3 + N^2 + \frac{9}{2} p_1 p_2 - 2 N - 3 p_3 \right) g_2 g_1 \nonumber \\&\quad + \frac{1}{18} g_3 p_1^3 - \frac{1}{27} g_3 N^2 - \frac{1}{6} g_3 p_1 p_2 + \frac{2}{27} g_3 N\nonumber \\&\quad + \frac{1}{9} g_3 p_3 + 1 \end{aligned}$$
(33)

reveal no apparent structure.

6.3 No Littlewood–Richardson structure

Another approach would be to go via the orbifoldization construction of [2, eqn. (4.33)]. From that point of view the single Schur average turns out to be a product over the r-quotients

$$\begin{aligned} \left\langle S_R \right\rangle _N \sim \prod _{i=0}^{r-1} \left\langle S_{R^{(i)}} \right\rangle _{n_i, u_i}, \end{aligned}$$
(34)

where \(R^{(i)}\) are the r-quotients of R and the correlators on the r.h.s. are evaluated in the simpler logarithmic model.

For the proof along these lines to go through, two crucial things need to happen. First, the expression for the \(K_\Delta \) polynomial should be reasonably simple in this language of r-quotients.

Second, Schur polynomial multiplication (i.e. the Littlewood–Richardson coefficients), at least in the trivial r-core case, should be “consistent” with the r-quotient language: the result should be expressed through the individual r-quotients in a reasonable way.

The first crucial thing is, indeed, true. On the one hand, the \({\mathcal {O}}\)-operator eigenvalue \(\xi _R\) is (analogously to the orbifoldization construction) expressed through Schur functions of the respective r-quotients. On the other hand, the shift operator \(\exp \left( (- 1) \partial /\partial p_r\right) \) acts by removing r-rim-hooks in all possible ways, which in the language of r-quotients is nothing but removing boxes in all possible ways (with suitable signs).

However, the second crucial thing seems not to be the case. For instance, multiplying the two partitions [3] and [2, 1], which in the language of 3-quotients are equal to \(([1], \emptyset , \emptyset )\) and \((\emptyset , [1], \emptyset )\), one gets

$$\begin{aligned}{}[3] \otimes [2,1] =&\ [5,1] + [4,2] + [4,1,1] + [3,2,1] \nonumber \\ ([1], \emptyset , \emptyset ) \otimes (\emptyset , [1], \emptyset ) =&\ (\emptyset , [2], \emptyset ) + (\emptyset , \emptyset , \emptyset )_{[4,2]}\nonumber \\&+ (\emptyset , \emptyset , [2]) + ([1], \emptyset , [1]), \end{aligned}$$
(35)

i.e. (even setting aside the appearance of diagrams with non-trivial r-core, which vanish later in the correlator) boxes are merged and shuffled in obscure ways. This gets even more complicated for bigger partitions.
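For the reader's convenience, here is a minimal Pieri-rule sketch (ours, not from the paper): since [3] is a single-row diagram, its product with any Schur function reduces to adding horizontal 3-strips, which reproduces the first line of (35).

```python
# Minimal Pieri-rule sketch (ours): multiplying by the one-row Schur function
# S_[k] adds a horizontal k-strip in all possible ways, i.e. lists partitions
# lam with lam_1 >= mu_1 >= lam_2 >= mu_2 >= ...  and |lam| = |mu| + k.
def add_horizontal_strip(mu, k):
    mu = list(mu) + [0]                   # allow one new row at the bottom
    results = []

    def rec(i, remaining, acc):
        if i == len(mu):
            if remaining == 0:
                results.append(tuple(x for x in acc if x > 0))
            return
        upper = mu[i - 1] if i > 0 else mu[0] + remaining   # interlacing bound
        upper = min(upper, mu[i] + remaining)
        for lam_i in range(mu[i], upper + 1):
            rec(i + 1, remaining - (lam_i - mu[i]), acc + [lam_i])

    rec(0, k, [])
    return results

# S_[3] * S_[2,1]: reproduces the four terms of the first line of (35)
print(sorted(add_horizontal_strip([2, 1], 3), reverse=True))
# -> [(5, 1), (4, 2), (4, 1, 1), (3, 2, 1)]
```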

Other plausible, but equally barren, proof strategies (for instance, the study of the interplay between the \({\mathcal {O}}\)-operator and the Littlewood–Richardson coefficients) are possible, but we do not list them here. In any case, what is desired is not a technical proof, but rather a conceptual explanation of why the bilinear superintegrability formula (8) is true.

7 Conclusion

In this paper we studied to what extent the recently proposed bilinear superintegrability [1] persists in the case of matrix models in the pure phase [2, 13].

We found that, in the case of trivial r-cores, it generalizes simply and naturally, according to formula (8). Moreover, the associated \(K_\Delta \) polynomials are obtained with the help of the triangular change of variables (5), whose central ingredient (the \({\mathcal {O}}\)-operator) is itself a natural monomial generalization of the Gaussian one.

The key prominent feature of bilinear superintegrability in the monomial case is the appearance of the non-trivial permutation operation \(\pi _{r,a,b}\) (see Sect. 3), which trivializes in the Gaussian case but is in general expressed in the language of Young diagram r-quotients. This non-trivial permutation operation is, arguably, the reason why the bilinear superintegrability formula for monomial non-(q,t)-deformed models was not found in the earlier attempts [2, 14,15,16].

Finally, the explicit and simple form of bilinear superintegrability in the language of \(K_\Delta \) polynomials allowed us, in Sect. 5, to pose some questions about a general analogy between matrix models and conformal field theories, beyond the well-known AGT conjecture and in the spirit of the recent attempt to generalize Nekrasov calculus beyond AGT [3]. We performed just a few naive comparison attempts; they show that this matrix model conformalization program is not straightforward and immediate, yet it is not immediately ruled out either. We hope to study the situation in detail in the future.

A few immediate concrete questions seem natural in the context of the present paper:

  • What is the manifest expression of the operator \({\mathcal {O}}\) in terms of the time variables \(p_k\)? Naive symbolic experiments show that \({\mathcal {O}}(p)\) is likely of infinite degree w.r.t. the derivatives in \(p_k\).

  • How does the story generalize to the exotic sector? Both in the “strong” sense of [13], where the role of the normalization constant is played not by the partition function \(Z = \left\langle 1\right\rangle \), and in the “weak” sense of Sect. 4, where non-trivial-core partitions interact on the trivial-core “background” of the basis Schur correlators. Is there any similarity at all between the descriptions of these “strong” and “weak” exotic sectors?

  • What is the proper \(q\)- and \(\beta \)-deformation of the associated K-polynomials, and what shape does their bilinear superintegrability take? How does it relate to the long-known formula for the double-Schur/Jack correlator in these models (which does not seem to have a \(q\rightarrow 1, \ \beta \rightarrow 1\) limit)?

  • Is the appearance of at most quadratic non-factorizable polynomials a general feature of multiple-K averages in monomial matrix models, or is it just an artifact of partitions with a small number of boxes?

All these intriguing questions will hopefully be studied in the future.