On bilinear superintegrability for monomial matrix models in pure phase

We argue that the recently discovered bilinear superintegrability (arXiv:2206.02045) generalizes, in a non-trivial way, to monomial matrix models in pure phase. The structure is much richer: for trivial-core Schur functions the required modifications are minor, and the only new ingredient is a certain (contour-dependent) permutation matrix; for non-trivial-core Schur functions, in both bilinear and trilinear averages, the deformation is more complicated: the averages acquire extra N-dependent factors, and the selection rules are less straightforward to infer.


Introduction
We continue to implement the large program of a concrete approach to quantum field theories. This program consists in the simple-to-complex study of ever more complicated QFT setups, but each time in full generality, with focus on non-perturbative phenomena and the regime of finite (neither infinitesimal nor infinite) coupling constants. The hope is that the essential complications that arise are this way untangled and can be dealt with one by one.
Our main focus is the bilinear superintegrability structure [1], a generalization of the usual, linear, superintegrability. Linear superintegrability itself was recently realized to be a convenient language for the non-perturbative, finite-N description of a wide range of matrix models in different regimes (phases) [2]. And bilinear superintegrability, perhaps even more importantly, sheds light on the previously obscure origins of the celebrated Nekrasov calculus [3]: the most fruitful concrete approach to the non-perturbative physics of supersymmetric gauge theories [4].
Specifically, we explain that bilinear superintegrability is not restricted to just Gaussian and logarithmic (Penner-like) models, but is more universal and, in particular, straightforwardly generalizes to the wide class of monomial matrix models in pure phase [2]. This is a wide class of models indeed, since any observable of a polynomial model can be expanded near a suitable monomial point in a convergent power series, as opposed to the usual asymptotic power series of perturbation theory near the Gaussian (quadratic) point. The main statements are presented in Section 2. The central role is played by the relevant monomial deformation of the box-factor-inserting operator O (see (6)), which gradually seems to become one of the key objects in the modern MM framework [5-9], developed as the adequate language for understanding the recently proposed WLZZ models [10,11] and their various natural generalizations. These concrete observations about the structure of bilinear superintegrable averages in the pure phase of monomial matrix models constitute the main result of the present paper.
However, the de-log limit of this formula (v → ∞, log(1 − vX^r) ∼ −X^r), which restores the usual monomial potential, destroys the bilinearity of the correlator: the shift becomes infinite. So, naively, the bilinear superintegrability formula does not exist in non-logarithmic monomial matrix models. However, if one believes that structures persist under simplification limits (and de-log is a certain simplification), then bilinear superintegrability should exist in this case as well.
From this point of view, our formula (8) is the long-awaited answer to this apparent puzzle: in the limit, the "shifted" Schur polynomial becomes the "associated" polynomial K_∆ (whose explicit formula (5) features a kind of shift operator in the time-variables), and a non-trivial (anomaly-like) permutation operation π(∆) appears.
Further, the simple form of the single- and double-K_∆ averages (9) and (10) is reminiscent of the structure of CFT correlators. Therefore, in Section 5 we study the structure of triple-K averages. It turns out to be more complicated than the naive expectation from the CFT analogy, so the naive motto

Monomial MM in K_∆ basis ≡ some CFT    (1)

is wrong. Still, the appearing non-factorizability seems tame enough (at most quadratic factors appear in the studied examples) to deserve further intensive investigation. Finally, in Section 6 we summarize our proof attempts. It turns out that, while the single-average formula (9) and the implication (8) → (10) are quite straightforward, an equally concise explanation for (8) itself is so far missing. This, of course, makes the existence of (8) even more valuable and non-trivial.
In this paper, as has become customary for papers about monomial matrix models, we freely use the language related to the division of a partition by an integer r: r-cores, r-quotients, r-signatures, rim-hooks and so on. We refer the reader to Appendix A of [2], as well as to the original Macdonald book [12].
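Since these notions are used throughout, a minimal computational sketch may help. The following snippet (an illustration of ours, not part of the original exposition) computes the r-core and r-quotient of a partition via beta-numbers; note that the ordering of the quotient components depends on a padding convention, and other references (including, possibly, the convention behind b = N mod r here) may differ by a relabeling of the components.

```python
def r_core_quotient(lam, r):
    """r-core and r-quotient of a partition via beta-numbers.

    Convention note: the ordering of the quotient components depends on the
    chosen padding; other conventions differ by a cyclic relabeling.
    """
    lam = list(lam)
    while len(lam) % r:          # pad with zero rows to a multiple of r
        lam.append(0)
    n = len(lam)
    beta = [lam[i] + n - 1 - i for i in range(n)]   # strictly decreasing
    # split the beta-numbers by residue mod r
    classes = [sorted((b - c) // r for b in beta if b % r == c)
               for c in range(r)]
    # quotient: each residue class, read as a beta-set, gives one partition
    quotient = []
    for ms in classes:
        part = sorted((m - j for j, m in enumerate(ms)), reverse=True)
        quotient.append([p for p in part if p > 0])
    # core: slide each class down to the minimal beta-numbers c, c+r, ...
    core_beta = sorted((r * j + c for c in range(r)
                        for j in range(len(classes[c]))), reverse=True)
    core = [core_beta[i] - (n - 1 - i) for i in range(n)]
    return [p for p in core if p > 0], quotient
```

For instance, [2,1] has trivial 3-core and 3-quotient consisting of a single box, while [4,2] is its own 3-core with empty 3-quotient.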

Main statements
A monomial matrix model in pure phase can be defined directly through its normalized Schur polynomial averages (2), where S_R{δ_{k,r}} is the Schur polynomial evaluated at the special point p_k = δ_{k,r}, and

[[f(i,j)]]_{r,x} = f(i,j) if f(i,j) ≡ x (mod r), and 0 otherwise

is a bracket that will frequently reappear in our presentation; r is an integer ≥ 2 and the parameter a runs from 0 to r − 1. The emergent additional parameter b = N mod r can be equal to 0 or a. Indeed, given (2), the normalized correlator of any other symmetric polynomial can be calculated as a linear combination of these basis ones. Motivated by the numerous papers on WLZZ models [5-9], we also frequently use a shorthand notation, keeping in mind that in our case the ξ-factor depends on N, r, a (and b).
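The bracket defined above is a direct transcription of the stated rule; as a sanity check, a one-line sketch (ours, for illustration):

```python
def bracket(f, r, x):
    """[[f]]_{r,x}: equals f when f ≡ x (mod r), and 0 otherwise."""
    return f if (f - x) % r == 0 else 0
```

For example, [[5]]_{3,2} = 5 while [[4]]_{3,2} = 0.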
For the relation to the usual matrix model definition through repeated integration, see [2] and the more recent development [13]. Now consider the auxiliary (associated) polynomials K_∆, which are related to Schur polynomials by the manifest triangular change of variables (5), where the O-operator (resp. O^{-1}-operator) multiplies (resp. divides) each Schur function by the corresponding box-product (3), and the differential operator r ∂/∂p_r acts in the Schur basis in a manifest way, at least when R has trivial r-core. Here σ_r(R) is the r-signature of the diagram R.
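The explicit box-product (3) is not reproduced in this excerpt; assuming only that it is a product over the boxes of R of some per-box factor, the O-action eigenvalue can be sketched as follows (an illustration of ours; the placeholder f stands for the paper's actual box factor):

```python
def box_product(R, f):
    """Product of a per-box factor f(i, j) over the boxes of a Young diagram R.

    The concrete box factor (3) of the paper is not reproduced in this
    excerpt; f is a placeholder for it (a per-box form is assumed).
    """
    prod = 1
    for i, row_len in enumerate(R, start=1):      # 1-based row index
        for j in range(1, row_len + 1):            # 1-based column index
            prod *= f(i, j)
    return prod
```

With a content-type factor such as f(i, j) = j − i + N, this reproduces the familiar Gaussian-style box products.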
With these definitions, one can check that a number of notable properties hold. Here S_{R/Q} is the skew Schur polynomial, which we again evaluate at the special point p_k = δ_{k,r}. The permutation operation π_{r,a,b}(∆) is a certain permutation on the space of partitions, which is important to the story (it appears in several places, see below) and which we describe in detail in Section 3. The (−1)^{π_{r,a,b}(∆)} is a certain sign related to the permutation π_{r,a,b}, which we also describe in Section 3.
• As an elementary corollary of the previous property, the single average of a K-polynomial is trivial unless this polynomial corresponds to the empty partition.
• The double average of two K-polynomials K_{∆1} and K_{∆2} is equally concise and manifest when both ∆_1 and ∆_2 have trivial r-cores. The permutation operation π_{r,a,b}(∆) is such that Λ^{∆1}_{r,a} = Λ^{∆2}_{r,a}, so it does not matter which one to use. In particular, when the numbers of boxes are not equal, |∆_1| ≠ |∆_2|, the bilinear K-average is always zero; this is the feature that we originally used to calculate the K_∆ polynomials recursively, before we understood the simple general formula (5).
In case only one of the two r-cores is non-trivial, the average is zero.
On the other hand, when both r-cores are non-trivial, there is also a non-trivial interaction structure, which even relaxes the selection rule, for instance when both partitions are their own (non-trivial) r-cores. At the same time, for a = 1, b = 0 we find explicit examples of such interaction. We present more examples of this non-trivial interaction in Section 4, but the general picture is, so far, missing.
Permutation operation π_{r,a,b}

The permutation operation π_{r,a,b} is manifestly given by the following construction.
For any partition ∆ with trivial r-core, consider its r-quotients ∆_i, i = 0, …, r − 1. π_{r,a,b} rearranges the r-quotients ∆_i according to a fixed rule, and then the partition ∆′ = π_{r,a,b}(∆) is reassembled from the shuffled parts.
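The reassembly step can be made concrete. A minimal sketch (ours, not the paper's; the shuffle rule of π_{r,a,b} itself is not implemented here, only the inverse of the beta-number decomposition, under the standard convention):

```python
def reassemble(quotient, r):
    """Partition with trivial r-core whose r-quotient is `quotient`.

    Inverse of the beta-number decomposition; the component-ordering
    convention is an assumption (references differ by a relabeling).
    """
    k = max(len(q) for q in quotient) + 1      # rows kept per residue class
    beta = []
    for c, q in enumerate(quotient):
        parts = list(q) + [0] * (k - len(q))   # pad each component to k rows
        # beta-set of component c, re-embedded into residue class c mod r
        beta += [r * (parts[j] + k - 1 - j) + c for j in range(k)]
    beta.sort(reverse=True)
    n = r * k
    return [p for p in (beta[i] - (n - 1 - i) for i in range(n)) if p > 0]
```

Applying π_{r,a,b} then amounts to permuting the entries of `quotient` before reassembling.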
The sign of the operation, (−1)^{π_{r,a,b}(∆)}, is calculated as follows. The overall sign is the product of the signs associated to elementary transpositions. Every pair ∆_i and ∆_{i′} interchanged by π_{r,a,b} consists of quotients that are either equal or different, and each case contributes its own sign.

Double-K average in case of non-trivial cores

The formula (5) can be equally well applied when ∆ has trivial or non-trivial r-core. When a partition is its own r-core (denote it ∆_oc), the corresponding Schur polynomial does not depend on p_r, and therefore the K-polynomial is equal to the Schur polynomial. The structure of pair correlators of such partitions is much less obvious than the simple formula (10); here we list some more-or-less astonishing examples:
• Some polynomials are "vanishing" vectors: orthogonal to every partition with the same number of boxes, including themselves. For instance, for r = 3, a = 1, b = 0.
• At the same time, the average between partitions with different r-cores and different numbers of boxes can be non-vanishing. Note that the N-dependent factor is equal to a Λ-factor; that is, it looks like the non-trivial pair correlators are those between K-polynomials that have coincident Λ-factors. Whether this is actually true in general or not remains to be seen in a separate thorough study.
• Furthermore, the non-vanishing correlators get even more complicated. For instance, both quadratic (i.e. same ∆) and bilinear correlators can have extra, often non-factorizable, factors (in addition to being divisible by the usual Λ-factor).
It remains to be seen whether these extra (non-factorizable) factors can be amended by some clever redefinition of the K-polynomials in the case of non-trivial cores; or, perhaps, some more general clever formula can be invented that takes these more complicated cases into account as is.

Triple-K averages
The single- and double-K averages are reminiscent of the averages in conformal field theory, where the two- and three-point functions of primary operators are completely fixed; in particular, Λ^{∆1,2}_{r,a}(N) in (10) can perhaps be thought of as a "discrete" analog of (x − y)^{−∆1−∆2}. In this logic, the simple form of the three-point average in conformal field theory should imply, on our matrix model side, a comparably simple, fully factorized triple-K average, where the N-dependence is built from peculiar combinations of the Λ^{∆1,2,3}_{r,a}(N)-factors. This naive hope is, however, overoptimistic. For some small diagrams the average is indeed factorizable and simple, for instance for r = 3, a = 1, b = 0; for other diagrams the average stops being factorizable. The non-factorization, however, seems at the moment to be mild: in the examples we analyzed, at most a quadratic non-factorized polynomial was observed. Therefore, it can yet turn out that the three-point K-average is always a sum of at most two fully factorized expressions. For instance, in the above example the plausible "split" could look like a sum of two terms, where one now needs to explain the origin of the two summands.
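For reference, and as an assumption about which CFT formulas are meant in the analogy, the standard forms fixed by conformal invariance for (quasi-)primary operators are

```latex
\langle \mathcal{O}_{\Delta_1}(x)\,\mathcal{O}_{\Delta_2}(y)\rangle
  = \frac{\delta_{\Delta_1,\Delta_2}\,C_{12}}{(x-y)^{2\Delta_1}}\,,
\qquad
\langle \mathcal{O}_{\Delta_1}(x_1)\,\mathcal{O}_{\Delta_2}(x_2)\,\mathcal{O}_{\Delta_3}(x_3)\rangle
  = \frac{C_{123}}
  {x_{12}^{\Delta_1+\Delta_2-\Delta_3}\,
   x_{23}^{\Delta_2+\Delta_3-\Delta_1}\,
   x_{13}^{\Delta_3+\Delta_1-\Delta_2}}\,,
\qquad x_{ij} \equiv x_i - x_j\,,
```

so the naive expectation is a triple-K average built entirely from pairwise combinations of Λ-factors.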
Further intensive studies are needed to discern between several alternatives, which are equally probable at the moment:
• the non-factorizability of the triple K-polynomial average is, indeed, at most quadratic, and some hidden structure (perhaps an analog of the KZ-equation or similar) controls this simplification;
• the proper matrix model analogs of primary operators are not just K_∆ polynomials with trivial-core ∆, but K_∆'s with some additional condition/requirement; the triple averages of such "truly primary" K_∆'s are factorizable, while averages of "descendant" K_∆'s, in general, do not factorize;
• the triple K-polynomial averages are fully non-factorizable and generic, and no hidden structure exists.

Towards proofs
The experimentally observed bilinear superintegrability formulas (8), (9) and (10) are crisp and concise. One may, therefore, be tempted to think that their proof is equally crisp and simple, and follows from ready generalizations of certain matrix-model/representation-theoretic constructions to the monomial case.
At least at the moment, this does not seem to be the case: several attempts (listed below) to find such auxiliary generalized structures that would help in the proof fail. This, of course, makes the bilinear superintegrability formulas (8), (9) and (10) all the more interesting and valuable: true examples of an emergent structure, which cannot be naively reduced to, or explained by, more fundamental observations.

The first encouraging successes
• The single K-average (9) is, quite naturally, simpler than the bilinear (8) and (10), so one may hope to prove it first.
And indeed it can be proven, where ∈_r means summation over diagrams obtained from ∆ by removing some r-rim-hooks, and Q(∆ − ∇) is the number of ways to obtain ∇ from ∆ by doing so.
• Similarly, the implication (8) → (10) is easy to prove. Indeed, expanding the definition, one finds that the sum in brackets is independent of N, and with more combinatorial massaging of the skew Schur functions, analogous to (27), we prove the sign and a selection rule.
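The rim-hook combinatorics entering the ∈_r summation can be made explicit. A sketch of ours (the exact normalization and sign conventions for Q(∆ − ∇) in the paper are not reproduced here; we simply count removal sequences), using the standard beta-number criterion for removable hooks:

```python
def remove_one_r_hook(lam, r):
    """All partitions obtained from lam by removing a single r-rim-hook.

    Beta-number criterion: a removable r-hook corresponds to an occupied
    beta-number b such that b - r is non-negative and unoccupied.
    """
    n = len(lam)
    beta = {lam[i] + n - 1 - i for i in range(n)}
    results = []
    for b in beta:
        if b - r >= 0 and b - r not in beta:
            new_beta = sorted((beta - {b}) | {b - r}, reverse=True)
            mu = [new_beta[i] - (n - 1 - i) for i in range(n)]
            results.append([p for p in mu if p > 0])
    return results

def removal_ways(lam, nabla, r):
    """Count sequences of single r-hook removals leading from lam to nabla
    (an illustration of the multiplicity Q(Delta - Nabla); whether the
    paper counts ordered sequences or weights them with signs is not
    fixed by this excerpt)."""
    if lam == nabla:
        return 1
    return sum(removal_ways(mu, nabla, r) for mu in remove_one_r_hook(lam, r))
```

For example, [3,2,1] can be emptied by removing two 3-hooks in two distinct orders, while [4,2] (its own 3-core) admits no 3-hook removal at all.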
Writing down the bilinear average in a similar manner, where N^P_{∇R} are the Littlewood-Richardson coefficients, we see that the goal is, firstly, to prove that the sum in brackets is N-independent and, secondly, that the peculiar permutation operator π_{r,a,b} emerges. How to do this is, however, at the moment not at all obvious: for illustration we present here a couple of proof ideas that fail (i.e. the emergent structure (8) is not decomposable into, or explained by, these simpler putative sub-structures).

No Cauchy-like summation
There is the following formula for the summation of skew Schur functions [12], which simplifies the r.h.s. of (8), provided one moves the permutation π_{r,a,b} and the sign to the left-hand side.
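For concreteness: the summation formula in question is presumably the skew Cauchy identity from [12], which reads

```latex
\sum_{\lambda} s_{\lambda/\mu}(x)\, s_{\lambda/\nu}(y)
  \;=\; \prod_{i,j} \frac{1}{1 - x_i y_j}\;
  \sum_{\tau} s_{\nu/\tau}(x)\, s_{\mu/\tau}(y)\,,
```

to be specialized at the points p_k = δ_{k,r} relevant here.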
Then the hope would be that the corresponding l.h.s. sum actually evaluates to something nice and concise.
This, however, turns out not to be the case, as the first few examples reveal no apparent structure.

No Littlewood-Richardson structure
Another approach would be to go via the orbifoldization construction of [2], eqn. (4.33). From that point of view the single Schur average turns out to be a product over the r-quotients, where R^{(i)} are the r-quotients of R and the correlators on the l.h.s. are evaluated in the simpler logarithmic model.
For a proof along these lines to go through, two crucial things need to happen. First, the expression for the K_∆ polynomial should be reasonably simple in this language of r-quotients.
Second, Schur polynomial multiplication (i.e. the Littlewood-Richardson coefficients), at least in the trivial r-core case, should be "consistent" with the r-quotient language: the result should be expressible through individual r-quotients in a reasonable way.
The first crucial thing is indeed true. On one hand, the O-operator eigenvalue ξ_R is (analogously to the orbifoldization construction) expressed through Schur functions of the respective r-quotients. On the other hand, the shift operator exp(−r ∂/∂p_r) acts by removing r-rim-hooks in all possible ways, which in the language of r-quotients is nothing but removing boxes in all possible ways (with suitable signs).
However, the second crucial thing seems not to be the case. For instance, multiplying the two partitions [3] and [2,1], which in the language of r = 3-quotients are equal to ([1], ∅, ∅) and (∅, [1], ∅), one gets

S_[3] · S_[2,1] = S_[5,1] + S_[4,2] + S_[4,1,1] + S_[3,2,1],

i.e. (even omitting the appearance of non-trivial r-core diagrams, which vanish later in the correlator) the boxes are merged and shuffled between the quotients in obscure ways. This gets even more complicated for bigger partitions.
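This product can be checked directly: since [3] is a single row, the Pieri rule (a special case of Littlewood-Richardson multiplication) suffices. A sketch of ours, enumerating horizontal-strip additions:

```python
def pieri(lam, k):
    """Multiply the Schur function s_lam by the one-row s_[k] (Pieri rule):
    list all partitions obtained by adding a horizontal k-strip to lam."""
    rows = list(lam) + [0]            # allow one new row at the bottom
    results = []

    def place(i, remaining, new_shape):
        if i == len(rows):
            if remaining == 0:
                results.append([p for p in new_shape if p > 0])
            return
        # horizontal strip: row i may grow up to the *old* length of row i-1
        cap = rows[i - 1] if i > 0 else rows[0] + remaining
        for add in range(min(remaining, cap - rows[i]) + 1):
            place(i + 1, remaining - add, new_shape + [rows[i] + add])

    place(0, k, [])
    return results
```

Running it on [2,1] with k = 3 reproduces exactly the four summands quoted above.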
Other plausible, but equally barren, proof strategies are possible (for instance, studying the interplay between the O-operator and the Littlewood-Richardson coefficients), but we do not list them here. In any case, what is desired is not a technical proof, but rather a conceptual explanation of why the bilinear superintegrability formula (8) is true.

Conclusion
In this paper we studied to what extent the recently proposed bilinear superintegrability [1] persists in the case of matrix models in pure phase [2,13].
We found that in the case of trivial r-cores it generalizes simply and naturally, according to formula (8). Moreover, the associated K_∆ polynomials are obtained with the help of the triangular change of variables (5), whose central ingredient (the O-operator) is likewise a natural monomial generalization of its Gaussian counterpart.
The key prominent feature of bilinear superintegrability in the monomial case is the appearance of the non-trivial permutation operation π_{r,a,b} (see Section 3), which trivializes in the Gaussian case but is in general expressed in the language of Young diagram r-quotients. This non-trivial permutation operation is, arguably, the reason why the bilinear superintegrability formula for monomial non-(q,t)-deformed models was not found in earlier attempts [2,14-16].
Finally, the explicit and simple form of bilinear superintegrability in the language of K_∆ polynomials allowed us, in Section 5, to pose some questions about a general analogy between matrix models and conformal field theories, beyond the well-known AGT conjecture and in the spirit of the recent attempt to generalize Nekrasov calculus beyond AGT [3]. We performed just a few naive comparison attempts, and they show that this matrix model conformalization program is not straightforward and immediate; yet, it is not immediately ruled out either. We hope to study the situation in detail in the future.
A few immediate concrete questions seem natural in the context of the present paper:
• What is the manifest expression of the operator O in terms of the time variables p_k? Naive symbolic experiments show that O(p) is likely of infinite degree w.r.t. the derivatives in p_k.
• How does the story generalize to the exotic sector? Both in the "strong" sense of [13], where the role of the normalization constant is played not by the partition function Z = ⟨1⟩, and in the "weak" sense of Section 4, where non-vanishing core partitions interact on the trivial-core "background" of the basis Schur correlators. Is there any similarity at all between the descriptions of these "strong" and "weak" exotic sectors?
• What is the proper q- and β-deformation of the associated K-polynomials, and what shape does their bilinear superintegrability take? How does it relate to the long-known formula for the double-Schur/Jack correlator in these models (which does not seem to have a q → 1, β → 1 limit)?
• Is the appearance of at most quadratic non-factorizable polynomials a general feature of multiple-K averages in monomial matrix models, or is it just an artifact of partitions with a small number of boxes?
All these intriguing questions will hopefully be studied in the future.