On generalized Macdonald polynomials

Generalized Macdonald polynomials (GMP) are eigenfunctions of specifically-deformed Ruijsenaars Hamiltonians and are built as triangular polylinear combinations of Macdonald polynomials. They are orthogonal with respect to a modified scalar product, which can be constructed with the help of an increasingly important triangular perturbation theory, showing up in a variety of applications. A peculiar feature of the GMP is that the denominators in this expansion are fully factorized, which is a consequence of a hidden symmetry resulting from the special choice of the Hamiltonian deformation. We also introduce a simplified but deformed version of the GMP, which we call generalized Schur functions. Our basic examples are bilinear in Macdonald polynomials.


Introduction
Macdonald polynomials are getting increasingly important for practical calculations in string theory. The reason is their role in the representation theory of the Ding-Iohara-Miki symmetry, which underlies the dynamics of background brane networks in 6d super-Yang-Mills models. At the same time, their properties remain underinvestigated, especially for the purposes of physical applications. This paper is one in a series of recent attempts to cure this situation. This time we concentrate on the subject of generalized Macdonald polynomials (GMP).
Macdonald polynomials M_R labelled by Young diagrams R are [1] graded symmetric polynomials of variables x_i, i = 1, …, N (in this case we use the notation M_R(x_i)), or graded polynomials of the time-variables p_k := ∑_{i=1}^N x_i^k (in this case we use the notation M_R{p_k}). In this paper, we will be mostly interested in the second point of view. They can be defined in many different ways, in particular, within the context of quantum toroidal algebras [2] (see also [3] and references therein). There are two other natural definitions: using their triangular structure and an orthogonality condition w.r.t. a scalar product, or using the Hamiltonian structure behind them. We briefly review both of these possibilities in s.2 in order to demonstrate that these definitions are really effective. An essential feature is that the Gaussian averages of Macdonald polynomials are Macdonald dimensions: this is the basic property

⟨ character ⟩ = character (1)

of matrix and tensor models, presumably related to their superintegrability, see [4] and [5] for related references. However, in application to conformal (Dotsenko-Fateev) matrix models [6], relevant [7,8] to the Nekrasov counting [9] and the AGT relations [10], there is a significant "detail": the Macdonald averages, though nicely factorized (in this case (1) is known as the Selberg-Kadell identities), do not coincide with the Nekrasov functions [11]. The resolution of this problem is that the Macdonald polynomials should be modified to the generalized Macdonald polynomials [12] in such a way that the Selberg-Kadell formulas remain true, but the Nekrasov formulas are properly reproduced, see [13,14] for details. Unfortunately, despite their undisputable importance, these GMP can be effectively described only using the quantum toroidal algebra behind them [12,15].
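For instance, in the Gaussian Hermitian one-matrix model, property (1) takes the concrete form (quoted from the superintegrability literature [4,5], in our conventions rather than from a formula of this text):

```latex
\Big\langle\, S_R\{p_k = \operatorname{Tr} M^k\} \,\Big\rangle
  \;=\; \frac{S_R\{p_k = N\}\; S_R\{p_k = \delta_{k,2}\}}{S_R\{p_k = \delta_{k,1}\}}
```

and the (q,t)-deformation of this formula replaces the Schur functions by Macdonald polynomials, with the r.h.s. given by the corresponding Macdonald dimensions.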
That is, the GMP are defined as common eigenfunctions of a deformed Calogero-Ruijsenaars Hamiltonian (the generalized cut-and-join operators of [16]) that is read off from the quantum toroidal algebra [12,15]. The goal of the present paper is to look for a definition independent of this algebraic approach. We discuss two possibilities: how the deformed Hamiltonian can be independently obtained, and how the GMP can be defined through the Gauss decomposition of an appropriate scalar product.

Macdonald polynomials

Triangular structure
Let us fix the scalar product (2) on the time-variables and continue it to arbitrary polynomials by linearity. From now on, we use the notation {x} := x − x^{-1}. Here the Young diagram Δ = {δ_1 ≥ δ_2 ≥ … ≥ δ_{l_Δ}}, and p_Δ = ∏_{i=1}^{l_Δ} p_{δ_i}. The combinatorial factor z_Δ is best defined in the dual parametrization of the Young diagram, Δ = […, 2^{m_2}, 1^{m_1}]; then z_Δ = ∏_k k^{m_k} · m_k!.
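For reference, the balanced form of this scalar product, standard in this context (our reconstruction of (2), to be compared with the original; the notation is the one just introduced):

```latex
\left\langle p_{\Delta}\,\Big|\,p_{\Delta'}\right\rangle
  \;=\; z_{\Delta}\,\delta_{\Delta,\Delta'}\,
  \prod_{i=1}^{l_{\Delta}}\frac{\{q^{\delta_i}\}}{\{t^{\delta_i}\}},
\qquad
\{x\} := x - x^{-1},
\qquad
z_{\Delta} = \prod_k k^{m_k}\, m_k!
```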
Then, the ordinary Macdonald polynomials can be defined as a lower-triangular combination of the Schur polynomials S_R{p} orthogonal w.r.t. this scalar product (2). The Young diagrams are ordered lexicographically: R > R′ if r_1 > r′_1, or if r_1 = r′_1 but r_2 > r′_2, or if r_1 = r′_1 and r_2 = r′_2 but r_3 > r′_3, and so on (5). This ordering is not consistent with the transposition of Young diagrams: R > R′ is not always the same as R′^∨ > R^∨. The first discrepancy appears at level |R| = 6: it is the pair [3,1,1,1] > [2,2,2], for which [3,1,1,1]^∨ = [4,1,1] > [2,2,2]^∨ = [3,3]. However, this does not lead to an ambiguity, since the Kostka-Macdonald coefficients K_{R,R′}(q,t) vanish for this pair of diagrams and in all similar cases [1]. Note that the normalization of M_R is already fixed by the choice of unit diagonal coefficient (the first term) in (3).

The Ruijsenaars Hamiltonians. The Ruijsenaars Hamiltonians are formulated in terms of the symmetric variables x_i, which are associated with the particle coordinates in the integrable Ruijsenaars-Schneider system. One can write down a set of N integrable Ruijsenaars Hamiltonians in these variables as difference operators acting on functions of the N variables x_i.

The first Hamiltonian in terms of time-variables. The Ruijsenaars Hamiltonians can be rewritten in terms of the time-variables. The result reads as follows (we slightly redefine the Ruijsenaars Hamiltonians here to make the formulas simpler): one introduces an auxiliary operator, in terms of which the first Hamiltonian Ĥ_1 is written, with the eigenvalue λ_R^{(1)} when acting on the eigenfunction M_R. In fact, in order to construct the Macdonald polynomials, one does not need to know all the Hamiltonians: it is sufficient to use only the first one, since its graded polynomial solutions are unambiguously determined. The reason is the integrable structure behind the system. Nevertheless, in order to have the complete picture, we construct all the higher Hamiltonians now.
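The ordering claim is easy to check by direct enumeration; a small Python sketch (the helper functions are ours, not from the paper):

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def transpose(R):
    """Transposed Young diagram: column lengths of the original."""
    return tuple(sum(1 for r in R if r > j) for j in range(R[0])) if R else ()

def discrepancies(level):
    """Pairs R > R' (lexicographically) whose transposes violate R'^v > R^v."""
    ps = sorted(partitions(level), reverse=True)  # decreasing lexicographic order
    return [(R, Rp)
            for i, R in enumerate(ps)
            for Rp in ps[i + 1:]
            if not transpose(Rp) > transpose(R)]

print(discrepancies(6))
```

The search confirms that nothing goes wrong at levels 1 through 5, while at level 6 it also returns the transposed partner pair [4,1,1] > [3,3], which exhibits the same discrepancy.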
Higher Hamiltonians. One can similarly write down the second and the next Hamiltonians, as well as the general Hamiltonian Ĥ_k; symbolically, the latter can be rewritten in terms of a vertex operator V̂(z). This expression becomes non-symbolic in the case q = t: in this case, the pair product in the expression is a result of normal ordering, so that Ĥ_k can be rewritten through the normal-ordered k-th power of V̂(z). This is a reflection of the fact that any ∮_0 dz/z · V̂^k(z) is then a Hamiltonian. This is not surprising, since the eigenfunctions in this case do not depend on t: these are the Schur functions. Hence, t is a free parameter, and so is t^k: an expansion in any of these parameters generates the corresponding Hamiltonians having the Schur functions as their eigenfunctions. Unfortunately, in the general case of different q and t, the factor coming from the normal ordering is different from that in (15)-(16).

Eigenvalues. It is convenient to choose another normalization of the Hamiltonians, in which the eigenvalues in the symmetric representations [n], and, more generally, in the representations [n_1, n_2], take an explicit form, both in the symmetric variables and in the time-variables.

Another construction of Hamiltonians. There is another construction of the Hamiltonians that can be conveniently extended to the GMP. We borrow the formulas of this paragraph from [19]. Let us define an operator P̂_m := ∮_{z=0} dz z^{m+1} V̂_1(z) (26). Then the new first Hamiltonian coincides with the old one, Ĥ_1 ∼ P̂_0; the second one is the commutator H̃_2 ∼ [P̂_{-1}, P̂_1], and all the remaining Hamiltonians are repeated commutators, where we introduce the notation g_k := {q^k}{t^{-k}}{t^k/q^k}, and a non-unit pre-factor is chosen to simplify further formulas.
The eigenvalues of these Hamiltonians can be written in closed form, and the generating function of these eigenvalues admits a compact expression. The Hamiltonians (15) are related to these by a triangular transformation. One can also make another triangular transformation in order to construct new Hamiltonians H̃_k; the eigenvalues of these new Hamiltonians are L^{(k)}_k, which means that they enjoy a simple spectral property. Note that one can choose various generating functions for the Hamiltonians (27), e.g. one of the exponential type; such a generating function does not include Ĥ_1 ∼ P̂_0.

Macdonald polynomials: summary
Thus, we explained that the Macdonald polynomials can be unambiguously defined either by the triangular expansion (3) with the requirement of orthogonality w.r.t. the scalar product (2), or by the requirement that they are eigenfunctions of the Hamiltonian (11), which comes from the Ruijsenaars-Schneider integrable system. However, triangularity is not at all apparent from the shape of the Hamiltonian, and this relation requires better understanding; see [18] for a possible approach to this problem.
Because of integrability, there are also higher Hamiltonians in this system. However, no canonical way to construct them from first principles is available so far. We mention just three possible approaches.
One of the possibilities is the Hamiltonians (15), which have their eigenvalues (23) expressed through linear combinations of the antisymmetric Schur polynomials (or, what is the same, of the antisymmetric Macdonald polynomials), the latter being symmetric polynomials of the variables (24). Let us note that the shifted (or interpolating) antisymmetric Macdonald polynomials [21] are linear combinations very similar to (23): the difference is only an additional product in the denominator of the summand. Another way of choosing the Hamiltonians is (27); they have the eigenvalues (28), which are just antisymmetric Schur or Macdonald polynomials (without the shift). These polynomials are functions of four sets of time-variables (29). Such time-variables typically emerge in issues related to the refined topological vertex [22, formula (52)]. Moreover, the structure of expressions like the r.h.s. of (33) is much similar to that of the Hopf hyperpolynomial [22, formula (39)], though that expression depends only on two sets of time-variables [22, formula (32)]. Here R̄ denotes the representation of SL_N conjugate to R. The third approach is mentioned in [18]: the first Hamiltonian converts Schur functions into bilinear sums restricted to single-hook Young diagrams X and Y of the same size. In the higher Hamiltonians, the restriction is weakened to k-hook diagrams. This approach has chances of revealing the origins of triangularity, and could allow a generalization from the Schur polynomials not only to the Macdonald ones, but also to the Kerov functions, but there is a long way to go to work all this out.

Introduction to GMP
Let us now discuss whether one can define the GMP using one of the two approaches that we discussed in the previous section for the ordinary Macdonald polynomials.

Triangular structure of GMP
As reviewed in detail in the recent [24], the basic property of all symmetric polynomials naturally emerging in the study of non-perturbative quantum field theory is that they are triangular combinations of Schur functions. This property is a bonus from the existence of the natural lexicographic ordering on Young diagrams; it actually breaks down when one goes up from ordinary to plane partitions [25], but it remains true for finite sets of Young diagrams, and thus for the generalized Macdonald and Kerov functions. The GMP M_{R_1,…,R_n}{p^{(1)}_k, …, p^{(n)}_k | A_1, …, A_n} depends on an ordered sequence of n Young diagrams, on n infinite sets of time-variables, and on n "deformation" parameters A_i, sometimes additionally constrained by the condition ∏_{i=1}^n A_i = 1. Since nothing especially interesting depends on n, we mostly consider the case n = 2 in order to simplify formulas, with a single parameter A (usually substituted by Q = A^2) such that A_1 = A, A_2 = A^{-1}, and denote the second set of times by p̄_k, i.e. our typical notation will be M^{(Q)}_{R_1,R_2}{p, p̄}. Similarly to the ordinary Macdonald case, we present the GMP as a triangular linear combination of products of the corresponding Schur functions. Let us discuss the ordering in this sum. At each level L = |R_1| + |R_2|, the pairs of Young diagrams are ordered in such a way that R_2 is compared first: the ordering starts from diagrams of smaller sizes and is lexicographic for diagrams of the same size. Then one similarly orders R_1 within the set of pairs with coinciding R_2. For instance, at level 2 the pairs are ordered as ([1,1], ∅), ([2], ∅), ([1], [1]), (∅, [1,1]), (∅, [2]). In the Kerov generalizations [23], a role is played by a transposed lexicographic ordering (see [24] for details), but, in the Macdonald case, the difference between these orderings is inessential, as we explained in the previous section, and this remains true for the GMP. Moreover, it is sufficient to consider the partial ordering: in all cases when the diagrams cannot be ordered, the corresponding Kostka-Macdonald coefficients (40) vanish.
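A minimal Python sketch of this pair ordering (the key function encodes our reading of the rules above: ascending order, R_2 compared first, smaller sizes lower):

```python
def partitions(n, max_part=None):
    """All partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def pair_key(pair):
    """R2 is compared first: by size, then lexicographically; ties broken by R1."""
    R1, R2 = pair
    return (sum(R2), R2, sum(R1), R1)

def ordered_pairs(level):
    """All pairs (R1, R2) with |R1| + |R2| = level, in ascending order."""
    return sorted(
        ((R1, R2)
         for k in range(level + 1)
         for R1 in partitions(level - k)
         for R2 in partitions(k)),
        key=pair_key)

print(ordered_pairs(2))
```

With this convention the pairs (R, ∅), on which the GMP reduce to ordinary Macdonald polynomials, sit at the bottom of the sequence, matching their role as the un-deformed eigenfunctions below.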
Note that, due to the triangular structure of the usual Macdonald polynomials, there is also an expansion of the GMP in products of Macdonald polynomials. For the purposes of the present paper, this expansion in the Macdonald polynomials, not in the Schur ones, is the most convenient. The GMP are defined so that, at the beginning of the lexicographic sequence, when R_2 = ∅, they do not depend on Q = A^2 and coincide with the ordinary Macdonald polynomials (note that the "complementary" M^{(Q)}_{∅,R}{p, p̄} is a full-fledged function of both sets of time-variables and of A, and has not much to do with M_R{p̄}). Another reduction appears in the un-deformed case, at A = ∞.

The scalar product. Now one has to specify the transformation, i.e. to explain how the coefficients C are defined.
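Schematically, the triangular expansion and the boundary conditions just described can be summarized as (our notation for the coefficients; the ordering ≤ on pairs is the one defined above):

```latex
M^{(Q)}_{R_1,R_2}\{p,\bar p\}
  \;=\; \sum_{(P_1,P_2)\,\le\,(R_1,R_2)} C^{\,P_1,P_2}_{\,R_1,R_2}(Q)\;
        M_{P_1}\{p\}\, M_{P_2}\{\bar p\},
\qquad
C^{\,R_1,R_2}_{\,R_1,R_2} = 1,
\qquad
M^{(Q)}_{R,\emptyset}\{p,\bar p\} = M_R\{p\}.
```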
In [24], for the ordinary Macdonald and Kerov functions, we imposed an orthogonality condition, but for the GMP this is not immediate: one should know what the relevant scalar product is. Our goal is to construct such a scalar product, and thus to put the theory of the GMP into the general context of [24], but this requires some work and insight. Before doing this, let us note that there is a standard scalar product for the GMP; however, the system of GMP's is not orthogonal, but bi-orthogonal w.r.t. it: one has to introduce a dual set of GMP. The standard product is an independent product of the scalar products (2) for p_k and p̄_k, and the ordinary Macdonald functions are orthogonal in this scalar product. The problem is that the freedom is not fixed by such an orthogonality requirement, and there is no guiding principle to select the needed triangular transformation. Let us note that the transformation from the Schur to the Macdonald polynomials is lower triangular both in the normal and in the dual sectors, while the further transformation to the GMP continues to be lower triangular in the normal sector, but becomes upper triangular in the dual one. Thus the net transformation from the Schur polynomials to the dual GMP's is not really triangular.

GMP as eigenfunctions of deformed Hamiltonians
Since the GMP are not orthogonal in the standard metric, the Hamiltonians that have the GMP's as their eigenfunctions are not Hermitian in the standard metric, though the deformation is triangular. The typical deformation term has the form p ∂/∂p̄, so that the ordinary Macdonald polynomials M_{R,∅}{p, p̄} = M_R{p} are not affected. In fact, it is important to add some additional Hermitian pieces, and they bring the deformation parameter(s) into the game. For example, a simple deformed operator of this kind has eigenfunctions p and p̄ − η/(u−ū) · p with eigenvalues u and ū; it becomes Hermitian if the scalar product is suitably deformed. In general, u is a coefficient in front of ∑_n n p_n ∂/∂p_n (or higher Macdonald Hamiltonians), and this term just shifts the eigenvalue of M_R{p}, while the mixing term ∼ ǫ_n · n² p_n ∂/∂p̄_n annihilates it. However, the p̄-dependent eigenfunctions are no longer just M_{R_1}{p} M_{R_2}{p̄} with R_2 ≠ ∅: they get non-trivially ǫ-deformed into the GMP.
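A two-dimensional toy model of this mechanism, in the span of p and p̄. The matrix form and the sample parameter values are ours; the operator u p ∂/∂p + ū p̄ ∂/∂p̄ + η p ∂/∂p̄ is an assumption, chosen to match the stated eigenfunctions and eigenvalues:

```python
import numpy as np

u, ubar, eta = 2.0, 1.0, 0.5          # sample parameter values (our choice)
H = np.array([[u, eta],
              [0.0, ubar]])           # action on the coefficients of (p, pbar)

# Eigenfunctions: p itself, and pbar - eta/(u - ubar) * p
v1 = np.array([1.0, 0.0])
v2 = np.array([-eta / (u - ubar), 1.0])
assert np.allclose(H @ v1, u * v1)
assert np.allclose(H @ v2, ubar * v2)

# Deform the scalar product: a symmetric Gram matrix G with G H = H^T G
# makes H self-adjoint w.r.t. <x, y>_G = x^T G y.
eps = eta / (u - ubar)
G = np.array([[1.0, eps],
              [eps, 1.0 + eps ** 2]])  # one convenient solution
assert np.allclose(G @ H, H.T @ G)     # H is Hermitian in the deformed metric
assert np.isclose(v1 @ G @ v2, 0.0)    # the eigenfunctions become orthogonal
```

The off-diagonal entry of G is fixed uniquely (up to overall normalizations) by the Hermiticity condition, illustrating how the deformed scalar product is determined by the deformed Hamiltonian.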
In fact, it is sufficient to consider only one Hamiltonian: it fully determines the deformation, and all the higher Hamiltonians are deformed in a consistent way.
Perturbation theory. The main point is that perturbation theory simplifies greatly for triangular perturbations. Let the deformed Hamiltonian be Ĥ + V̂ with Ĥψ_i = λ_i ψ_i. The perturbative sums for the corrected eigenvalues and eigenfunctions are triangular if such is the perturbation matrix: V^k_i ≠ 0 only for k < i. In this case, the eigenvalues do not change at all, and the eigenfunctions are finite sums, with V entering the expression for the i-th eigenfunction at most in the i-th power.
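A quick numerical illustration of this claim (dimension, eigenvalues and the random perturbation are our choices): a strictly triangular perturbation leaves the spectrum untouched, and the i-th perturbed eigenvector involves only the unperturbed states from the i-th one downwards.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([3.0, 1.5, -0.5, 2.25])      # distinct unperturbed eigenvalues
H = np.diag(lam)
V = np.tril(rng.normal(size=(4, 4)), k=-1)  # strictly lower-triangular perturbation

# The spectrum of a triangular matrix is its diagonal: eigenvalues are unchanged.
eigvals, eigvecs = np.linalg.eig(H + V)
assert np.allclose(sorted(eigvals), sorted(lam))

# Each perturbed eigenvector is a finite triangular combination of the
# unperturbed basis: the eigenvector for lam[i] has no components above i.
for i, l in enumerate(lam):
    v = eigvecs[:, np.argmin(abs(eigvals - l))]
    v = v / v[i]                            # normalize the diagonal coefficient to 1
    assert np.allclose(v[:i], 0.0)
```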
In application to the GMP, the role of ψ_i is played by the products M_R{p} M_Q{p̄}, and the operator V̂, written symbolically as V̂ = ∑_k V_k p_k ∂/∂p̄_k, decreases the size of Q and enlarges that of R, which brings the pair (R, Q) down in the ordering. The new eigenfunctions, i.e. exactly the GMP, are then given by the finite perturbative series, while the eigenvalues retain their un-deformed form. The perturbed eigenfunctions are no longer orthogonal in the original scalar product, where ⟨ψ_i|ψ_j⟩ = δ_{i,j}. However, one can introduce a new one, in which they are, and triangularity allows one to reformulate it as a simple recursion relation. In particular, the lowest eigenfunction ψ_0 remains un-deformed. This expansion is very similar to the one above for the eigenfunction itself; the only differences are the alternating signs and the "common eigenvalue" in the denominators: it is λ_i for the eigenfunctions and λ_0 for the deformed scalar product. In the case of the GMP, the role of ψ_0 is played by any of the ordinary p̄-independent Macdonald functions M_R{p}.

GMP Hamiltonian. The actual Hamiltonian in the case of the GMP is the deformed operator (63), with the perturbation generated by the ǫ_n-term, ǫ_n := 1 − (t/q)^{2n}. Explicit shifts of the arguments can be substituted by the action of shift operators. The eigenvalue of M_{R,S}{p, p̄} remains un-deformed, i.e. it is the same as it was for M_R{p} · M_S{p̄} with the un-deformed Hamiltonian (cf. (12)).

Higher Hamiltonians. Similarly to the case of the ordinary Macdonald polynomials, the GMP's are unambiguously defined as graded polynomial eigenfunctions of the first Hamiltonian (63). This clearly implies an underlying integrable structure. It simultaneously implies that there exist higher Hamiltonians, and we again borrow formulas from [19] (see also [27, Appendix B]). In fact, one can use the form (27) of the Macdonald Hamiltonians in order to directly extend them to the GMP case: one just needs to substitute everywhere P̂_m with P̂^{(2)}_m := ∮_{z=0} dz z^{m+1} V̂^{(2)}(z) (67), with the same ǫ_k := 1 − t^{2k}/q^{2k} as above. Then the Hamiltonians for the GMP are given by the same formulas (27), and the eigenvalues are given by formulas similar to (28), but depending on two Young diagrams and expressed through the eigenvalues (28). The generating function for these eigenvalues takes a form similar to the Macdonald one. An extension from the bilinear to the multilinear GMP is evident. One can again introduce the Hamiltonians Ĥ^{(2)}_k with an analogous generating function and the corresponding eigenvalues L. One more notable point is the emergence of the triple of parameters (q_1, q_2, q_3) = (q, t^{-1}, t q^{-1}), which is not seen at the level of the ordinary Macdonald functions. Thus the GMP are better suited for studying the apparent triality of the DIM algebra [26].

GMP: summary
Thus, we studied the features of the Macdonald polynomials that could be used for their definition and that can be extended to the GMP case. In particular, there is still a triangular structure of the expansion (40) in the Schur polynomials. Moreover, in this case, there is also a triangular expansion (43) in the Macdonald polynomials. However, no scalar product is known so far that could fix the triangular expansion coefficients from an orthogonality condition.
There is a Hamiltonian (63) whose eigenfunctions are the GMP's, and it defines them unambiguously. However, at the moment this Hamiltonian comes only from the additional algebraic set-up of quantum toroidal algebras [12,15]. Moreover, in complete analogy with the standard Macdonald case, one can construct higher Hamiltonians (69) with eigenvalues given by the same antisymmetric Schur or Macdonald polynomials. However, they are now polynomials of a sum of two sets of time-variables (71), each of them associated with its own Young diagram, in accordance with the same Macdonald-case formula (29).
Note that such a sum of times corresponding to two different Young diagrams emerges within the same context of the refined topological vertex and the Hopf hyperpolynomial, as we discussed in s.2.3, however, in the case of composite representations (see [22,28], in particular, formula (32) of [22]).
In the next sections, we are going to modify the Macdonald scalar product so that the Q-deformed Hamiltonians become Hermitian. With this scalar product, the generalized Macdonald functions are themselves orthogonal: there is no need for a dual set of functions. If one could define (select) this deformed product by some appealing and simply formulated ansatz, this would fix the triangular transformation and thus define the generalized Macdonald polynomials without any direct reference to the deformed Hamiltonians. We will look for this scalar product based on the known set of GMP's, and then reverse the logic: we postulate it as a new definition of the GMP.
Our other goal in the forthcoming sections is to clarify the reasons why the GMP Hamiltonian (63) is distinguished, without reference to the algebraic set-up. On this way, we construct another set of symmetric functions of two sets of variables, which we call generalized Schur functions.

Looking for a scalar product
Let us discuss what could be the scalar product that gives rise to the correct GMP from the orthogonality relations. Let us denote the matrix of pairings by G_{R_1,R_2|S_1,S_2}. Now we require the orthogonality of the GMP, in particular, for all R and S at all levels. Let us proceed at the first two levels. At the first level, there is only one non-diagonal item of the symmetric matrix G_{R_1,R_2|S_1,S_2}, and requiring it to vanish defines the scalar product at this level completely.
At level 2, there are more relations, but many more non-diagonal matrix elements of G_{R_1,R_2|S_1,S_2}. The ugly-looking underlined terms can be eliminated or improved by choosing appropriate deformations of the norms. Then there are two natural versions of the remaining elements of the scalar product matrix G_{R_1,R_2|S_1,S_2}.
Version A: assume that the norms of the un-deformed polynomials M_R{p} also remain un-deformed. Then ⟨M_{[1,1]}{p}|M_{[1,1]}{p}⟩ keeps its Macdonald value; however, the other components get very complicated. There is another choice, which provides simply-looking expressions for all matrix elements. Now we describe it.

Understanding GMP Hamiltonians
We now discuss a specific property of the Hamiltonian (63) that makes its eigenfunctions much simpler; in particular, it leads to a cancellation of many denominators that would generally be expected to emerge. One can say that the Hamiltonian is actually distinguished by this property. Two questions immediately arise:
• Why does the cancellation of bad denominators (poles) in the GMP guarantee that they are canceled also in the scalar product?
• Is this the same symmetry that allows one to easily adjust all the higher Hamiltonians, i.e. how is it related to integrability?
The Hamiltonian (90) at t = q unambiguously defines the GSP's; however, the higher Hamiltonians are no longer given by the construction of s.3.2.

Cauchy formula vs triangular expansion
The Cauchy formula. The Cauchy formula is always correct and is a direct corollary of the orthogonality relation. In this Appendix, we demonstrate how it works for the GMP. We start, however, with the usual Macdonald polynomials, for which the Cauchy formula has a standard form. In order to prove this formula, one has to act with the appropriate operator on both sides of the formula and then put all p_k = 0. Then the r.h.s. is trivially equal to M_Q{p′}, while, on the l.h.s., one has to use the orthogonality condition (4) and realize the scalar product (2) as a differential operation in order to obtain M_Q{p′} as well. Now one can use the transposition formula for the Macdonald polynomials in order to obtain the transposed version of the Cauchy formula. The superscript ∨ here denotes the transposition of the Young diagram.
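For reference, the Macdonald Cauchy kernel in the balanced notation (our reconstruction; ||M_R||² denotes the norm squared w.r.t. the scalar product (2)):

```latex
\sum_R \frac{M_R\{p\}\, M_R\{p'\}}{\big\| M_R \big\|^2}
  \;=\; \exp\!\left( \sum_{k\ge 1} \frac{1}{k}\,
        \frac{\{t^k\}}{\{q^k\}}\; p_k\, p'_k \right)
```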
The relevant bilinear form is diagonalized by the Kostka-Macdonald matrix itself. Let us denote interchanging q and t in a matrix by a bar, and introduce the transposition matrix Λ_{R,R'} := δ_{R,R'^∨}, so that the latter identity acquires a compact matrix form. Then one obtains from (105)-(108) exactly (101). This consideration is directly extended to the case of the GMP and of the more general Kerov functions. To better understand the reason for the additional matrices Λ emerging in (109), and the somewhat strange-looking formula (101), it is instructive to look at a straightforward generalization of the Macdonald polynomials, the Kerov functions [23,24].
Kerov functions. One can consider an extension of the scalar product (2) which induces the Kerov functions [23,24,30,31]: they are defined as triangular transforms of the Schur functions, orthogonal w.r.t. the scalar product (110). The Young diagrams are again ordered lexicographically. However, in this case, there are two systems of symmetric functions, since the Kostka-Kerov matrices K^{(g)}_{R,R'} between diagrams placed differently in the two different orderings are non-zero. Since the orderings begin to differ at level 6, the two systems of Kerov functions become different at level 6. The Cauchy formula in this case looks similar, since the rule that relates the Kerov functions of transposed Young diagrams (a counterpart of (94)) is deformed accordingly. The identity (101) becomes, in the Kerov case, formula (114). Notice that the definition of K̄ in (111) differs from K. In the Macdonald case, K̄ is related to K by conjugation with the matrix Λ, which explains the emergence of the latter in (109). Formula (114) is proved in the same way as in s.7, taking into account the norms ||Ker^{(g)}_R||².
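For reference, the extension of (2) inducing the Kerov functions (our reconstruction of (110), with arbitrary parameters g_k):

```latex
\left\langle p_\Delta \,\Big|\, p_{\Delta'} \right\rangle_{g}
  \;=\; z_\Delta\, \delta_{\Delta,\Delta'} \prod_{i=1}^{l_\Delta} g_{\delta_i},
\qquad
\text{Macdonald case: } g_k = \frac{\{q^k\}}{\{t^k\}}.
```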

Conclusion
The goal of this paper was to understand the notion of generalized symmetric polynomials. In the literature, they are actually known in just one particular example, the generalized Macdonald polynomials, which appear in the theory of the AGT relations, where the q,t-deformed Selberg-Kadell integrals of the GMP presumably reproduce the Nekrasov functions. The question, however, is to find a more algebraic and, most importantly, generalizable definition.
Usually, one uses two approaches, which we described and discussed in the text. Both are based on the triangularity property, which is now understood to be basic for the entire theory of symmetric functions and their applications to integrability in general and to supersymmetric gauge theories in particular. All the relevant special functions in these fields are triangular combinations of Schur functions: triangular with respect to one or another kind of lexicographic ordering of Young diagrams. However, triangularity per se is not enough. The two approaches consist in imposing orthogonality conditions either directly or through the requirement that the functions be eigenfunctions of Hermitian Hamiltonians.
We discussed the advantages and drawbacks of these two approaches in application to the GMP. The problem with straightforward orthogonality is the complexity (and ambiguity) of the relevant scalar product, which we did not even manage to construct in full form. For naive scalar products, the GMP's are not orthogonal: one needs to introduce a complementary set of "dual" GMP, and then the question is how to impose relations between the dual and the original GMP.
The Hamiltonian approach in the case of the GMP is more successful, and we advanced it further to an explicit description of the higher Hamiltonians. However, even the triangularity property is somewhat obscure in this case, even for the ordinary Macdonald functions, and this makes the construction of the Hamiltonians and the very fact of their existence a kind of mysterious art, preventing integrability from becoming a clear deductive piece of science. This is a serious drawback if one looks for generalizations, which are desperately needed, because most partition functions in quantum field theory are related to still unknown integrable systems. Moreover, even in pure mathematics, Hamiltonians are not yet known for the immediate generalization from the Macdonald polynomials to the Kerov functions.
We hope that our detailed presentation of these issues, of the many problems which remain unsolved, and of the many "miracles" which can provide clues for their resolution, will attract attention and help to advance the subject in the near future.