Strong factorization property of Macdonald polynomials and higher-order Macdonald’s positivity conjecture

We prove a strong factorization property of interpolation Macdonald polynomials when q tends to 1. As a consequence, we show that Macdonald polynomials have a strong factorization property when q tends to 1, which was posed as an open question in our previous paper with Féray. Furthermore, we introduce multivariate q, t-Kostka numbers and we show that they are polynomials in q, t with integer coefficients by using the strong factorization property of Macdonald polynomials. We conjecture that multivariate q, t-Kostka numbers are in fact polynomials in q, t with nonnegative integer coefficients, which generalizes the celebrated Macdonald’s positivity conjecture.


Macdonald polynomials
In 1988, Macdonald [21,22] introduced a new family of symmetric polynomials depending on two parameters $q, t$. They were immediately hailed as a breakthrough in symmetric function theory as well as in special functions, since they contain most of the previously studied families of symmetric functions, such as Schur polynomials, Jack polynomials, Hall-Littlewood polynomials and Askey-Wilson polynomials, as special cases. They also satisfy many exciting properties, among which we mention just one, which led to a remarkable relation between Macdonald polynomials, representation theory and algebraic geometry. This property, called Macdonald's positivity conjecture [21], states that the coefficients $K^{(q,t)}_{\mu,\lambda}$ in the expansion of $J^{(q,t)}_\lambda(x)$ into the "plethystic Schur" basis $s_\mu[X(1-t)]$ (for readers not familiar with the plethystic notation, we refer to [22, Chapter VI.8]) are polynomials in $q, t$ with nonnegative integer coefficients. Garsia and Haiman [7] refined this conjecture, giving a representation-theoretic interpretation of the coefficients in terms of Garsia-Haiman modules, an interpretation which was finally proved almost ten years later by Haiman [12], who connected the problem to the study of the Hilbert scheme of $N$ points in the plane from algebraic geometry. Macdonald polynomials have since found applications in special function theory, representation theory, algebraic geometry, group theory, statistics, quantum mechanics and much more [10]. Moreover, their fascinating and rich combinatorial structure is one of the most important objects of interest in contemporary algebraic combinatorics.

Strong factorization property of interpolation Macdonald polynomials
The main goal of this paper is to state and partially prove a generalization of the celebrated Macdonald's positivity conjecture. We do so by proving that Macdonald polynomials have a strong factorization property when $q \to 1$, which also resolves the problem posed by the author and Féray in our recent joint paper [2, Conjecture 1.5].
In order to explain the notion of the strong factorization property, let us introduce some notation. If $\lambda$ and $\mu$ are partitions, we denote by $\lambda \oplus \mu := (\lambda_1 + \mu_1, \lambda_2 + \mu_2, \ldots)$ their entry-wise sum; see Sect. 2.2. If $\lambda^1, \ldots, \lambda^r$ are partitions and $I$ is a subset of $[r] := \{1, \ldots, r\}$, then we denote $\lambda_I := \bigoplus_{i \in I} \lambda^i$. Moreover, we use a standard notation: Definition 1.1 For $r \in R$, where $R$ is a ring, and $f, g \in R(q)$, we write $f = O_r(g)$ if the rational function $f(q)/g(q)$ has no pole at $q = r$. Then, we prove the following theorem: As in our previous paper [2], let us unpack the notation for small values of $r$ in order to explain the terminology strong factorization property.
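The condition in Definition 1.1 can be tested mechanically with a computer algebra system. The following sketch (using sympy; the rational functions `f`, `g`, `h` are made-up illustrations, not taken from the paper) checks whether $f/g$ has a pole at $q = 1$:

```python
import sympy as sp

q = sp.symbols('q')

def is_O_at(f, g, r):
    """Return True if f = O_r(g), i.e., the reduced fraction f/g has no pole at q = r."""
    ratio = sp.cancel(f / g)            # reduce the fraction to lowest terms
    return sp.denom(ratio).subs(q, r) != 0

# Illustrative (made-up) rational functions:
f = (q - 1)**2 * (q + 3)
g = (q - 1) * (q + 2)
h = q + 5

print(is_O_at(f, g, 1))   # True: f/g = (q-1)(q+3)/(q+2) is regular at q = 1
print(is_O_at(h, g, 1))   # False: h/g = (q+5)/((q-1)(q+2)) has a pole at q = 1
```

Reducing the fraction first matters: without `cancel`, the common factor $(q-1)$ would produce a spurious pole.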
Here, the sum is taken over set partitions $\pi$ of $[r]$, and $\#\pi$ denotes the number of parts of $\pi$; see Sect. 2.1 for details. An equivalent form of Theorem 1.2 in terms of cumulants is as follows: Theorem 1.3 For any partitions $\lambda^1, \ldots, \lambda^r$, Macdonald polynomials have a small cumulant property when $q \to 1$. Instead of proving Theorem 1.3 directly, we prove the stronger result that interpolation Macdonald polynomials have a small cumulant property when $q \to 1$, from which Theorem 1.3 follows as a special case. To make this section complete, let us introduce interpolation Macdonald polynomials.
Interpolation Macdonald polynomials are characterized by certain vanishing conditions. Sahi [24] proved that for each partition $\lambda$ of length $\ell(\lambda) \le N$, there exists a unique (inhomogeneous) symmetric polynomial $J^{(q,t)}_\lambda(x)$ of degree $|\lambda|$, where $x = (x_1, \ldots, x_N)$, with the following properties: This symmetric polynomial is called the interpolation Macdonald polynomial, and it has a remarkable property which explains its name: its top-degree part is equal to the Macdonald polynomial. Our main result is the following theorem: Theorem 1.4 Let $\lambda^1, \ldots, \lambda^r$ be partitions. Then, we have the following small cumulant property when $q \to 1$: where $\kappa^J(\lambda^1, \ldots, \lambda^r)$ is a cumulant of interpolation Macdonald polynomials.

Higher-order Macdonald's positivity conjecture
As we already mentioned, the purpose of this paper is to generalize $q,t$-Kostka numbers and to prove that they are polynomials in $q, t$ with integer coefficients. Before we define the multivariate $q,t$-Kostka numbers, we mention that, strictly from their definition, $q,t$-Kostka numbers are elements of $\mathbb{Q}(q,t)$, and it took six or seven years after Macdonald formulated his conjecture to prove that they are in fact polynomials in $q, t$ with integer coefficients, which was done independently by several authors [9,11,14,15,20,24]. This result will be important in proving the integrality of the multivariate $q,t$-Kostka numbers.
We recall that Macdonald's positivity conjecture is a well-established theorem nowadays, since Haiman proved it in 2001 [12]. We ran computer simulations which suggested that multivariate $q,t$-Kostka numbers are also polynomials with nonnegative coefficients. Unfortunately, we are not able to prove this, since the techniques of our proof of Theorem 1.4 do not seem to be applicable to this problem, and we state it in this paper as a conjecture. Conjecture 1.6 Let $\lambda^1, \ldots, \lambda^r$ be partitions. Then, for any partition $\mu$, the multivariate $q,t$-Kostka number $K^{(q,t)}_{\mu; \lambda^1, \ldots, \lambda^r}$ is a polynomial in $q, t$ with nonnegative integer coefficients.

Related problems
We finish this section by mentioning some related problems. First, we recall that one of the most typical applications of cumulants is to show that a certain family of random variables is asymptotically Gaussian. Especially when one deals with discrete structures, the main technique is to show that the family has a certain small cumulant property, in the same spirit as our Theorem 1.3; see [4-6,26]. It is therefore natural to ask for a probabilistic interpretation of Theorem 1.3. In particular, does it lead to some kind of central limit theorem? The most natural framework in which to investigate this problem seems to be the Macdonald processes introduced by Borodin and Corwin [1], or the representation-theoretic interpretation of Macdonald polynomials given by Haiman [12].
A second problem is related to the combinatorics of Jack polynomials, which are special cases of Macdonald polynomials. In fact, Theorem 1.3 was posed as an open question in our previous paper joint with Féray [2], where we proved that Jack polynomials have a strong factorization property when $\alpha \to 0$, where $\alpha$ is the Jack deformation parameter. In the same paper, we used this result as a key tool to prove the polynomiality part of the so-called $b$-conjecture, stated by Goulden and Jackson [8]. This conjecture says that a certain multivariate generating function involving Jack symmetric functions expressed in the power-sum basis gives rise to the multivariate generating function of bipartite maps (bipartite graphs embedded into some surface), where the exponent of $\beta := \alpha - 1$ has an interpretation as some mysterious "measure of nonorientability" of the associated map. The conjecture is still open, although some special cases have been solved [3,8,17,18]. It is very tempting to build a $q,t$-framework which would generalize the $b$-conjecture. Although we can simply replace Jack polynomials by Macdonald polynomials in the definition of the multivariate generating function given by Goulden and Jackson, and use the same techniques as in [2] to prove that expanding it in a properly normalized power-sum basis yields polynomials in $q, t$, we obtain neither positive nor integer coefficients. Therefore, we leave wide open the question of building a proper framework which generalizes the $b$-conjecture to two parameters in a way that relates it to counting combinatorial objects.

Organization of the paper
We describe all necessary definitions and background in Sect. 2. Section 3 gives the proof of Theorem 1.4, preceded by an explanation of the main idea of the proof. In Sect. 4, we discuss cumulants and their relation with the strong factorization property, and we investigate a relation between cumulants and derivatives that is at the heart of the proof of Theorem 1.4. Finally, Sect. 5 is devoted to the proof of two intermediate steps of the proof of Theorem 1.4.

Set partitions lattice
The combinatorics of set partitions is central in the theory of cumulants and will be important in this article. We recall here some well-known facts about them.
A set partition of a set S is a (non-ordered) family of non-empty disjoint subsets of S (called parts of the partition), whose union is S. In the following, we always assume that S is finite.
Denote by $\mathcal{P}(S)$ the set of set partitions of a given set $S$. Then, $\mathcal{P}(S)$ may be endowed with a natural partial order: the refinement order. We say that $\pi$ is finer than $\pi'$ (or $\pi'$ coarser than $\pi$) if every part of $\pi$ is included in a part of $\pi'$. We denote this by $\pi \le \pi'$. Endowed with this order, $\mathcal{P}(S)$ is a complete lattice, which means that each family $F$ of set partitions admits a join (the finest set partition which is coarser than all set partitions in $F$; we denote the join operator by $\vee$) and a meet (the coarsest set partition which is finer than all set partitions in $F$; we denote the meet operator by $\wedge$). In particular, the lattice $\mathcal{P}(S)$ has a maximum $\{S\}$ (the partition into only one part) and a minimum $\{\{x\} : x \in S\}$ (the partition into singletons).
Lastly, denote by $\mu$ the Möbius function of the partition lattice $\mathcal{P}(S)$. Then, for any pair $\pi \le \sigma$ of set partitions, the value of the Möbius function has a product form: $\mu(\pi, \sigma) = \prod_{B' \in \sigma} \mu\big(\{B \in \pi : B \subseteq B'\}, \{B'\}\big)$, where the product is taken over all blocks of the partition $\sigma$, and for a given block $B' \in \sigma$ the expression $\mu\big(\{B \in \pi : B \subseteq B'\}, \{B'\}\big)$ denotes the Möbius function of the interval in the lattice $\mathcal{P}(B')$ between the partition $\{B \in \pi : B \subseteq B'\}$ and the maximal element $\{B'\}$. This function is given by the explicit formula $\mu(\pi, \{S\}) = (-1)^{\#\pi - 1} (\#\pi - 1)!$, where $\#\pi$ denotes the number of parts of $\pi$. We finish this section by stating a well-known result on computing Möbius functions of lattices. Proposition 2.1 ([27]) For any $\pi < \tau \le \sigma$ in a lattice $L$, we have $\sum_{\pi \le \omega \le \sigma:\ \omega \vee \tau = \sigma} \mu(\pi, \omega) = 0$.
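As a sanity check, one can enumerate $\mathcal{P}(S)$ for a small set and verify that the explicit value $\mu(\pi, \{S\}) = (-1)^{\#\pi-1}(\#\pi-1)!$ satisfies the defining property of the Möbius function. The helper names in this sketch are ours, not the paper's:

```python
from math import factorial

def set_partitions(s):
    """All set partitions of the list s, as lists of frozensets."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in set_partitions(rest):
        # put `first` into an existing block ...
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block | {first}] + smaller[i + 1:]
        # ... or into its own new block
        yield [frozenset([first])] + smaller

def coarser(sigma, pi):
    """True if sigma >= pi in the refinement order (sigma is coarser)."""
    return all(any(b <= c for c in sigma) for b in pi)

def mobius_to_top(pi):
    """mu(pi, {S}) = (-1)^{#pi - 1} (#pi - 1)!."""
    return (-1) ** (len(pi) - 1) * factorial(len(pi) - 1)

parts = list(set_partitions([1, 2, 3, 4]))   # the 15 set partitions of [4]
for pi in parts:
    # Defining property of mu: summing mu(sigma, {S}) over all sigma >= pi
    # gives 1 when pi is the maximum and 0 otherwise.
    total = sum(mobius_to_top(s) for s in parts if coarser(s, pi))
    assert total == (1 if len(pi) == 1 else 0)
print("Möbius identity verified on P([4])")
```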

Partitions
We call $\lambda := (\lambda_1, \lambda_2, \ldots, \lambda_l)$ a partition of $n$ if it is a weakly decreasing sequence of positive integers such that $\lambda_1 + \lambda_2 + \cdots + \lambda_l = n$. Then, $n$ is called the size of $\lambda$, while $l$ is its length. As usual, we use the notation $\lambda \vdash n$, or $|\lambda| = n$, and $\ell(\lambda) = l$. We denote the set of partitions of $n$ by $\mathbb{Y}_n$, and we define a partial order on $\mathbb{Y}_n$, called the dominance order, by setting $\lambda \le \mu$ if and only if $\lambda_1 + \cdots + \lambda_i \le \mu_1 + \cdots + \mu_i$ for all $i \ge 1$. Then, we extend the dominance order to the set of partitions of arbitrary size by saying that $\lambda \preceq \mu \iff |\lambda| < |\mu|$, or $|\lambda| = |\mu|$ and $\lambda \le \mu$.
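Assuming the standard partial-sum characterization of the dominance order, the comparison is a one-liner to implement (function name is ours, for illustration):

```python
from itertools import accumulate

def dominates(mu, lam):
    """Dominance order on partitions of the same size:
       mu >= lam iff every partial sum of mu is >= the corresponding one of lam."""
    n = max(len(mu), len(lam))
    sums_mu = list(accumulate(list(mu) + [0] * (n - len(mu))))
    sums_lam = list(accumulate(list(lam) + [0] * (n - len(lam))))
    return all(a >= b for a, b in zip(sums_mu, sums_lam))

print(dominates((3, 1), (2, 2)))  # True:  3 >= 2 and 3+1 >= 2+2
print(dominates((2, 2), (3, 1)))  # False: 2 < 3 already fails
```

Note that $(3,1)$ and $(2,2)$ are comparable here, but e.g. $(3,1,1,1)$ and $(2,2,2)$ of size 6 are not: the dominance order is only a partial order.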
For any two partitions $\lambda \in \mathbb{Y}_n$ and $\mu \in \mathbb{Y}_m$, we can construct a new partition $\lambda \oplus \mu \in \mathbb{Y}_{n+m}$ by setting $\lambda \oplus \mu := (\lambda_1 + \mu_1, \lambda_2 + \mu_2, \ldots)$. Moreover, there exists a canonical involution on the set $\mathbb{Y}_n$ which associates with a partition $\lambda$ its conjugate partition $\lambda^t$. By definition, the $j$th part $\lambda^t_j$ of the conjugate partition is the number of positive integers $i$ such that $\lambda_i \ge j$. A partition $\lambda$ is identified with a geometric object, called a Young diagram, that can be defined as the set of boxes $\{(i, j) : 1 \le i \le \ell(\lambda),\ 1 \le j \le \lambda_i\}$. Finally, we define two combinatorial quantities associated with partitions that we will use extensively throughout this paper: the $(q,t)$-hook polynomial $h^{(q,t)}(\lambda)$, defined by the following equation, and a partition binomial, given by
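The two constructions $\lambda \oplus \mu$ and $\lambda^t$ are easy to compute directly from their definitions; a short sketch (helper names ours):

```python
def oplus(lam, mu):
    """Entry-wise sum of two partitions, padding the shorter one with zeros."""
    n = max(len(lam), len(mu))
    lam = lam + (0,) * (n - len(lam))
    mu = mu + (0,) * (n - len(mu))
    return tuple(a + b for a, b in zip(lam, mu))

def conjugate(lam):
    """Conjugate partition: the j-th part counts the indices i with lam[i] >= j."""
    if not lam:
        return ()
    return tuple(sum(1 for part in lam if part >= j) for j in range(1, lam[0] + 1))

print(oplus((3, 1), (2, 2, 1)))   # (5, 3, 1)
print(conjugate((4, 2, 1)))       # (3, 2, 1, 1)
print(conjugate(conjugate((4, 2, 1))))  # (4, 2, 1): conjugation is an involution
```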

Interpolation Macdonald polynomials as eigenfunctions
We already defined interpolation Macdonald polynomials in Sect. 1.2, but we are going to introduce another, equivalent definition that is more convenient in the framework of the present paper. Since this is now a well-established theory, the results of this section are given without proofs but with explicit references to the literature (mostly to Macdonald's book [22] and Sahi's paper [24]). First, consider the vector space $\mathrm{Sym}_N$ of symmetric polynomials in $N$ variables over $\mathbb{Q}(q,t)$. Let $T_{q,x_i}$ be the "$q$-shift operator" defined by $(T_{q,x_i} f)(x_1, \ldots, x_N) := f(x_1, \ldots, q x_i, \ldots, x_N)$.

Let us define an operator
Proposition 2.2 There exists a unique family $J^{(q,t)}_\lambda$ (indexed by partitions $\lambda$ of length at most $N$) in $\mathrm{Sym}_N$ that satisfies: These polynomials are called interpolation Macdonald polynomials. This is a result of Sahi [24]. His original definition requires only that the coefficients $a_{\lambda\nu}$ are rational functions in $q, t$ with rational coefficients, but in the same paper Sahi proved that they are in fact polynomials in $q$, $t$, $t^{-1}$ (and even in $q, t$ when $|\nu| = |\lambda|$) with integer coefficients, which will be important for us later. We add, for completeness of the presentation, that we are using a different notation and normalization than Sahi: the function $R_\lambda(x; q^{-1}, t^{-1})$ from Sahi's paper [24] is equal to $h^{(q,t)}(\lambda)^{-1} J^{(q,t)}_\lambda(x)$ in our notation, and $c_\lambda(q,t)$ from Sahi's paper is the same as $h^{(q,t)}(\lambda)$ in our notation.
The above definition shows that the interpolation Macdonald polynomial $J^{(q,t)}_\lambda$ depends on the parameter $N$, that is, the number of variables. However, one can show that it satisfies a compatibility relation between different numbers of variables, so that $J^{(q,t)}_\lambda$ can be seen as a symmetric function. In the sequel, when working with differential operators, we sometimes confuse a symmetric function $f$ with its restriction to $N$ variables. It was shown by Macdonald [22, Chapter VI, (3.9)-(3.10)] that Moreover, it is easy to show (see for example [24, Lemma 3.3]) that Plugging this into Eq. (5), we observe that: Note that we can expand the operator $D$ around $q = 1$ as a linear combination of differential operators in the following form: As a consequence, we have the following identity: where $\partial_q$ is the partial derivative with respect to $q$, $b^N_j(\lambda)$ is given by Eq. (4), and $d_{\lambda\nu} \in \mathbb{Z}[t]$. Corollary 2.3 Let $f \in \mathrm{Sym}$ be a symmetric function with an expansion in the monomial basis of the following form: where $\lambda$ is a fixed partition and $d_\mu \in \mathbb{Q}(t)$. If, for any number $N$ of variables,

Strong factorization property of interpolation Macdonald polynomials
In this section, we prove Theorem 1.4. Its proof involves many intermediate results which can be considered independently of Theorem 1.4, and presenting them all before the proof of the main result might discourage the reader. We have therefore decided to explain the main idea of the proof of Theorem 1.4 first, then give the proof with all the details, and finally present the proofs of the remaining intermediate results in separate sections.
Proof of Theorem 1.4 We recall that we need to prove that for any positive integer $r$ and any partitions $\lambda^1, \ldots, \lambda^r$ we have the following bound for the cumulant: The proof will be given by induction on $r$. The fact that interpolation Macdonald polynomials $J^{(q,t)}_\lambda$ have no singularity at $q = 1$ is straightforward from the result of Sahi presented in Proposition 2.2. That covers the case $r = 1$. Now, notice that for any ring $R$ and any rational function $f \in R(q)$, the following conditions are equivalent Thus, we are going to prove that From now on, until the end of this proof, $\kappa^J(\lambda^1, \ldots, \lambda^r)$ denotes the cumulant with parameters $q^{-1}, t^{-1}$.
Let $R$ be a ring, and let $f \in R[q, q^{-1}]$ be a Laurent polynomial in $q$. We introduce the following notation: for any nonnegative integer $k$, the coefficient $[(q-1)^k] f \in R$ is defined by the expansion $q^{\deg(f)} f(q) = \sum_{k \ge 0} \big([(q-1)^k] f\big)\, (q-1)^k$, where $\deg(f)$ is the smallest possible nonnegative integer such that $q^{\deg(f)} f(q) \in R[q]$. It is clear that for two Laurent polynomials $f, g \in R[q, q^{-1}]$ and a nonnegative integer $k$ one has the following identity: $[(q-1)^k](fg) = \sum_{0 \le j \le k} [(q-1)^j] f \cdot [(q-1)^{k-j}] g$. With the above notation, we have to prove that for any integer $0 \le k \le r - 2$ the following equality holds true: Notice now that the expansion of $f$ into the monomial basis involves only the monomials $m_\mu$ indexed by partitions $\mu \prec \lambda_{[r]}$, which is ensured by Proposition 4.8. Thus, if we are able to show that the following equation holds true: then $f = 0$ by Corollary 2.3, and the proof is over. So our goal is to prove Eq. (8).
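Extracting the coefficient of $(q-1)^k$ amounts to a Taylor expansion at $q = 1$. A sympy sketch (the helper and the sample Laurent polynomials are our own illustrations), which also exercises the multiplicativity of coefficient extraction:

```python
import sympy as sp

q, u = sp.symbols('q u')

def coeff_q_minus_1(f, k):
    """Coefficient of (q - 1)^k in the Taylor expansion of f at q = 1."""
    ser = sp.series(f.subs(q, u + 1), u, 0, k + 1).removeO()
    return sp.expand(ser).coeff(u, k)

f = q**2 + q**-1            # a sample Laurent polynomial in q
g = q - 2

print(coeff_q_minus_1(f, 0))  # f(1) = 2
print(coeff_q_minus_1(f, 1))  # f'(1) = 2*1 - 1 = 1

# Multiplicativity: [(q-1)^k](f*g) = sum_j [(q-1)^j]f * [(q-1)^{k-j}]g
lhs = coeff_q_minus_1(f * g, 1)
rhs = sum(coeff_q_minus_1(f, j) * coeff_q_minus_1(g, 1 - j) for j in range(2))
assert lhs == rhs
```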
In order to do that, we make the following observation: an interpolation Macdonald polynomial $J^{(q^{-1},t^{-1})}_\lambda$ is an eigenfunction of the operator $D$. Since the cumulant is a linear combination of products of interpolation Macdonald polynomials, it would be very convenient if the action of $D$ on such a product were given by the Leibniz rule. Unfortunately, this is not the case. However, the trick is to decompose $D\kappa^J(\lambda^1, \ldots, \lambda^r)$ into two parts: the first part is given by "forcing" the Leibniz rule for the action of $D$ on the product of interpolation Macdonald polynomials, and the second part is given by the difference between the proper action of $D$ on the cumulant and the forced version. This decomposition turns out to be crucial. Indeed, Lemma 5.2 ensures that the first part can be expressed as a linear combination of products of cumulants of fewer than $r$ elements; thus, we can use the induction hypothesis to analyze it. Similarly, Lemma 5.3 states that the second part is given by an expression involving products of cumulants of fewer than $r$ elements, and again, the induction hypothesis can be used in its analysis. Then, comparing the coefficient of $(q-1)^k$ on the left-hand side of Eq. (9) with the coefficient of $(q-1)^k$ on the right-hand side of Eq. (9), we obtain Eq. (8). Let us go into the details. Expanding the operator $D$ around $q = 1$ [see Eq. (6)], we have that Moreover, applying Lemma 5.2, we have that where $\mathrm{InEx}_j(\lambda_B : B \in \sigma)$ is a certain polynomial in $t$ with integer coefficients [at this stage we do not need to know its explicit form, but for the interested reader it is given by Eq. (22)]. Finally, applying Lemma 5.3, we obtain the following identity Here, $\mathbb{N}_+^\pi$ denotes the set of functions $\alpha : \pi \to \mathbb{N}_+$ with positive integer values, and the symbol $|\alpha|$ is defined as $|\alpha| := \sum_{B \in \pi} \alpha(B)$. We recall that the right-hand side (RHS for short) of Eq.
(10) is equal to the sum of the right-hand sides of Eqs. (11) and (12). Let $k = 1$. Then the RHS of Eq. (10) is equal to the RHS of Eq. (11), and the RHS of Eq. (12) vanishes. Thus, we have shown that Eq. (8) holds true for $f = (q-1)^0 \kappa^J(\lambda^1, \ldots, \lambda^r)$, which implies that $f = 0$. Now, we fix $K \le r - 2$, and we assume that Eq. (8) holds true for $f = (q-1)^m \kappa^J(\lambda^1, \ldots, \lambda^r)$ for all $0 \le m < K$. We are going to show that Eq. (8) holds true for $f = (q-1)^K \kappa^J(\lambda^1, \ldots, \lambda^r)$. First, note that for $k = K + 1$ the RHS of Eq. (10) simplifies to

Moreover, from the induction hypothesis for each subset
Thus, for any set partition $\pi \in \mathcal{P}([r])$ which has at least two parts, one has where the $j_B$ are any nonnegative integers ($D_i^0 = \mathrm{Id}$ by convention). This implies that the RHS of Eq. (12) vanishes. Finally, again by the induction hypothesis, all the elements of the form $(q-1)^{k-j} \prod_{B \in \sigma} \kappa^J(\lambda^i : i \in B)$ that appear in the RHS of Eq. (11) vanish, except $(q-1)^K \kappa^J(\lambda^1, \ldots, \lambda^r) = f$. Thus, the RHS of Eq. (11) simplifies to which proves that Eq. (8) holds true. The proof is completed.

Cumulants
In this section, we introduce cumulants and we investigate an action of derivations on them, which is crucial in the proof of Theorem 1.4. We also explain the connection between the strong factorization property and the small cumulant property, and we present some applications of it relevant for our work. We begin with some definitions.

Partial cumulants
Definition 4.1 Let $(u_I)_{I \subseteq J}$ be a family of elements in a field, indexed by subsets of a finite set $J$. Then, its partial cumulants are defined as follows: for any non-empty subset $H$ of $J$, $\kappa_H(u) := \sum_{\pi \in \mathcal{P}(H)} \mu(\pi, \{H\}) \prod_{B \in \pi} u_B$, where $\mu$ is the Möbius function of the set partition lattice; see Sect. 2.1. The terminology comes from probability theory. Let $J = [r]$, and let $X_1, \ldots, X_r$ be random variables with finite moments defined on the same probability space. Then, define $u_I = \mathbb{E}\big(\prod_{i \in I} X_i\big)$, where $\mathbb{E}$ denotes the expected value. The quantity $\kappa_{[r]}(u)$ as defined above is known as the joint (or mixed) cumulant of the random variables $X_1, \ldots, X_r$. Also, $\kappa_H(u)$ is the joint/mixed cumulant of the smaller family $\{X_h : h \in H\}$.
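For instance, for two random variables the Möbius sum over $\mathcal{P}(\{1,2\})$ reduces to $\kappa_{\{1,2\}}(u) = \mathbb{E}(X_1 X_2) - \mathbb{E}(X_1)\mathbb{E}(X_2)$, the covariance, which vanishes for independent variables. A self-contained sketch over a toy distribution (all names are ours):

```python
from math import factorial, prod

def set_partitions(s):
    """All set partitions of the list s, as lists of frozensets."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [block | {first}] + smaller[i + 1:]
        yield [frozenset([first])] + smaller

# Toy joint distribution of (X1, X2): two independent fair 0/1 coins.
dist = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def moment(indices):
    """u_I = E[prod_{i in I} X_i] for the toy distribution."""
    return sum(p * prod(x[i - 1] for i in indices) for x, p in dist.items())

def cumulant(H):
    """kappa_H(u) = sum over set partitions pi of H of
       (-1)^{#pi - 1} (#pi - 1)! * prod_{B in pi} u_B."""
    total = 0.0
    for pi in set_partitions(list(H)):
        term = (-1) ** (len(pi) - 1) * factorial(len(pi) - 1)
        for B in pi:
            term *= moment(B)
        total += term
    return total

print(cumulant([1]))      # E[X1] = 0.5
print(cumulant([1, 2]))   # covariance: 0.0, since X1 and X2 are independent
```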
Joint/mixed cumulants have been studied by Leonov and Shiryaev in [19] (see also an older note of Schützenberger [25], where they are introduced under the French name déviation d'indépendance). They now appear in random graph theory [13,Chapter 6] and have inspired a lot of work in non-commutative probability theory [23].
A classical result (see, e.g., [13, Proposition 6.16 (vi)]) is that Eq. (13) can be inverted as follows: for any non-empty subset $H$ of $J$, $u_H = \sum_{\pi \in \mathcal{P}(H)} \prod_{B \in \pi} \kappa_B(u)$.

Derivations and cumulants
Let $K$ be a field. We define the $K$-module of derivations $\mathrm{Der}_K$, which consists of linear maps $D : K \to K$ satisfying the Leibniz rule $D(fg) = D(f)\, g + f\, D(g)$. For any positive integers $r, k$ and any elements $f_1, \ldots, f_r \in K$ we define Let $D \in \mathrm{Der}_K$ be a derivation. Then, for any family $u = (u_I)_{I \subseteq [r]}$ of elements in the field $K$ we define the following deformed action of $D^k$ on the cumulant: The following lemma will be crucial to prove our main result.

Lemma 4.2 For any positive integers $r, k$, any family $u = (u_I)_{I \subseteq [r]}$ of elements in a field $K$, and any derivation $D \in \mathrm{Der}_K$, the following identity holds true: (15) Here, $\mathbb{N}_+^\pi$ denotes the set of functions $\alpha : \pi \to \mathbb{N}_+$, and the symbol $|\alpha|$ is defined as $|\alpha| := \sum_{B \in \pi} \alpha(B)$. Proof First of all, notice that for any elements $f_1, \ldots, f_r \in K$ and any positive integer $k$ the following generalized Leibniz rule holds true: $D^k(f_1 \cdots f_r) = \sum_{\alpha_1 + \cdots + \alpha_r = k} \binom{k}{\alpha_1, \ldots, \alpha_r} D^{\alpha_1}(f_1) \cdots D^{\alpha_r}(f_r)$, which is easy to prove by induction ($D^0 := \mathrm{Id}$ by convention). Notice now that both sides of Eq. (15) are linear combinations of elements of the form $\prod_{B \in \pi} D^{\alpha(B)}(u_B)$, where $\pi \in \mathcal{P}([r])$ and $\alpha \in \mathbb{N}^\pi$ is a composition of $k$. Let us denote by RHS the right-hand side of Eq. (15), and analogously by LHS the left-hand side of Eq. (15). Let us fix a set partition $\pi \in \mathcal{P}([r])$ and a composition $\alpha \in \mathbb{N}^\pi$ of $k$. We would like to show that the coefficients of $\prod_{B \in \pi} D^{\alpha(B)}(u_B)$ in LHS and in RHS coincide. We define the support $\mathrm{supp}(\alpha)$ of $\alpha$ in the standard way: $\mathrm{supp}(\alpha) := \{B \in \pi : \alpha(B) \neq 0\}$. We now analyze this coefficient. We can see that the nonzero contributions come from elements of the following form: $\sigma \ge \pi$, and for each block $B' \in \sigma$ there exists a block $B \in \mathrm{supp}(\alpha)$ such that $B \subseteq B'$. In other terms, $\sigma$ is a partition with the property that $\sigma \ge \pi$ and $\sigma \ge \tau$, where the partition $\tau$ is constructed from $\pi$ by merging all its blocks lying in the support of $\alpha$. Using the definition of cumulants, Eqs. (13) and (16), we can compute the coefficient. Plugging it into Eq. (15), we obtain that where $\tau$ is the partition given by Eq. (18). Here, the last equality is a consequence of Eq. (2) for the Möbius function $\mu(\pi, \sigma)$. Now, notice that the partition $\tau$ is constructed in such a way that $\tau \ge \pi$, and the inequality is strict whenever $\#\mathrm{supp}(\alpha) > 1$. Thus, we can apply Proposition 2.1 to get Comparing this with Eq. (17), we can see that the two coefficients agree, which finishes the proof.
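The generalized Leibniz rule used at the start of the proof can be verified symbolically for the ordinary derivative. A sketch for $r = 3$ factors and $k = 4$ (our choice of test functions is purely illustrative):

```python
import sympy as sp
from itertools import product

x = sp.symbols('x')
f = [sp.sin(x), sp.exp(x), x**3]   # three factors f1, f2, f3
k = 4

# D^k(f1 f2 f3) = sum over compositions a1 + a2 + a3 = k of
# k!/(a1! a2! a3!) * D^{a1} f1 * D^{a2} f2 * D^{a3} f3
lhs = sp.diff(f[0] * f[1] * f[2], x, k)
rhs = sp.Integer(0)
for a in product(range(k + 1), repeat=3):
    if sum(a) != k:
        continue
    coef = sp.factorial(k) / (sp.factorial(a[0]) * sp.factorial(a[1]) * sp.factorial(a[2]))
    rhs += coef * sp.diff(f[0], x, a[0]) * sp.diff(f[1], x, a[1]) * sp.diff(f[2], x, a[2])

assert sp.simplify(lhs - rhs) == 0
print("generalized Leibniz rule verified for r = 3, k =", k)
```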

A multiplicative criterion for small cumulants
Let R be a ring and q a formal parameter. We consider a family u = (u I ) I ⊆[r ] of elements of R(q) indexed by subsets of [r ]. Throughout this section, we also assume that these elements are nonzero and u ∅ = 1.
In addition to partial cumulants, we also define the cumulative factorization error terms $T_H(u)$ of the family $u$. The quantities $(T_H(u))_{H \subseteq [r], |H| \ge 2}$ are inductively defined as follows: for any subset $G$ of $[r]$ of size at least 2, $u_G = \Big(\prod_{g \in G} u_{\{g\}}\Big) \prod_{H \subseteq G,\ |H| \ge 2} (1 + T_H(u))$.
Using the inclusion-exclusion principle, a direct equivalent definition is the following: for any subset $H$ of $[r]$ of size at least 2, set Féray (using a different framework) [5] proved the following statement, which was reproved in our recent joint paper with Féray [2, Proposition 2.3] using the framework of the current paper: Remark In fact, the above proposition was proved in the case $r = 0$, but it is enough to shift the indeterminate $q \mapsto q - r$ to obtain the general result.
A first consequence of this multiplicative criterion for small cumulants is the following stability result. Proof This is trivial for the strong factorization property, and the small cumulant property is equivalent to it.
Here is another consequence:

Hook cumulants
We use the multiplicative criterion above to prove that families constructed from the hook polynomial defined by Eq.
Proof It is enough to prove the statement for H = K . Indeed, the case of a general set H follows by considering the same family restricted to subsets of H .
Define $R_{\mathrm{ev}}$ (resp. $R_{\mathrm{odd}}$) as where the product runs over subsets of $K$ of even (resp. odd) size. Without loss of generality, we can assume that $|K|$ is even (the case when $|K|$ is odd is analogous). With this notation, It is clear that Let us fix a positive integer $l < |K|$. Expanding the product in the definition of $R_{\mathrm{ev}}$ in the basis $\{(q-1)^j\}_{j \ge 0}$ and using the binomial formula, one gets The index set of the second summation symbol is the set of lists of $i$ distinct (but not necessarily disjoint) subsets of $K$ of even size. The factor $\frac{1}{i!}$ in the above formula comes from the fact that we should sum over sets of $i$ distinct subsets of $K$ instead of lists, which is the same as summing over lists of $i$ distinct subsets of $K$ and dividing by the number of permutations of $[i]$. Strictly from this formula, it is clear that $[(q-1)^l] R_{\mathrm{ev}}$ is a symmetric polynomial in $(c_i : i \in K)$ of degree at most $l$. Of course, a similar formula with subsets of odd size holds for $[(q-1)^l] R_{\mathrm{odd}}$, which shows that it is a symmetric polynomial in $(c_i : i \in K)$ of degree at most $l$ as well. For any positive integers $n, k$, we define a set $\mathcal{Y}(n,k)$ of sequences of $n$ nonnegative, non-increasing integers of the following form: It is well known (see for example [16, Theorem 2.1]) that if $f, g$ are two symmetric polynomials of degree at most $k$ in $n$ indeterminates, then Thus, in order to show that $[(q-1)^l] R_{\mathrm{ev}} = [(q-1)^l] R_{\mathrm{odd}}$, it is enough to show that this equality holds for all $(c_i)_{i \in K} \in \mathcal{Y}(|K|, l)$. Note that since $l < |K|$, the value $c_k$ is necessarily equal to $0$, where $k$ is the largest element of $K$. It means that the function $f : (K)_{\mathrm{ev}} := \{\delta \subseteq K : \delta \text{ has even size}\} \to (K)_{\mathrm{odd}} := \{\delta \subseteq K : \delta \text{ has odd size}\}$ given by $f(\delta) := \delta \,\nabla\, \{k\}$, where $\nabla$ is the symmetric difference operator, is a bijection which preserves the statistic: $|\delta|_c = |f(\delta)|_c$.
Thus, one has Since $l < |K|$ was an arbitrary positive integer, we have shown that which finishes the proof. Proof Fix some subset $I = \{i_1, \ldots, i_t\}$ of $[r]$ with $i_1 < \cdots < i_t$. Observe that the Young diagram $\lambda_I$ can be constructed by sorting the columns of the diagrams $\lambda^{i_1}, \ldots, \lambda^{i_t}$ in decreasing order. When several columns have the same length, we put first the columns of $\lambda^{i_1}$, then those of $\lambda^{i_2}$, and so on; see Fig. 2 (for the moment, disregard the symbols in the boxes). This gives a way to identify boxes of $\lambda_I$ with boxes of the diagrams $\lambda^{i_s}$ ($1 \le s \le t$) that we shall use below. With this identification, if $b = (c, r)$ is a box in $\lambda^g$ for some $g \in I$, its leg-length in $\lambda_I$ is the same as in $\lambda^g$. We denote it by $\ell(b)$.
However, the arm-length of $b$ in $\lambda_I$ may be bigger than the one in $\lambda^g$. We denote these two quantities by $a_I(b)$ and $a_g(b)$. Let us also define $a_i(b)$ for $i \neq g$ in $I$, as follows: Looking at Fig. 2, it is easy to see that Therefore, for $G \subseteq [r]$, one has: From the definition of $T_{[r]}(u)$, given by Eq. (19), we get: The expression inside the bracket corresponds to Plugging Eq. (20) into the definition of $v^b_I$, we observe that $v^b_I$ is as in Lemma 4.6 with the following values of the parameters: $K = [r] \setminus \{g\}$, $C = t^{\ell(b)+1}$, $c = 1$, and $c_i = a_i(b)$ for $i \neq g$. Therefore, we conclude that Going back to Eq. (21), we have: which completes the proof.
We finish this section by presenting an important corollary of the above result: for any partitions $\lambda^1, \ldots, \lambda^r$, the cumulant $\kappa^J(\lambda^1, \ldots, \lambda^r)$ has a monomial expansion of the following form

Differential operator and cumulant of interpolation Macdonald polynomials
Let us fix partitions $\lambda^1, \ldots, \lambda^r$, and for any subset $I \subseteq [r]$ define $u_I := J^{(q^{-1},t^{-1})}_{\lambda_I}$. The purpose of this section is an analysis of the action of the differential operator $D$ (defined in Eq. (5)) on the cumulant $\kappa^J(\lambda^1, \ldots, \lambda^r) = \kappa_{[r]}(u)$ with parameters $q^{-1}$ and $t^{-1}$. In particular, this analysis leads to the proofs of two crucial lemmas used in the proof of Theorem 1.4.

Analysis of the decomposition
For any positive integer $r$ and any partitions $\lambda^1, \ldots, \lambda^r$, we define $\mathrm{InEx}_j(\lambda^1, \ldots, \lambda^r)$ by the following formula: where $b^N_j(\lambda_I)$ is given by Eq. (4).

Proposition 5.1
Let $r > j \ge 1$ be positive integers. Then, for any partitions $\lambda^1, \ldots, \lambda^r$ one has: Proof Expanding the definition and completing partitions with zeros, we have: In particular, we have to prove that the summand corresponding to any given $1 \le i \le N$ is equal to $0$. In other terms, we have to show that a certain polynomial vanishes, where $x = (x_1, \ldots, x_r)$ and $x_I := \sum_{i \in I} x_i$.
Note that it is a symmetric polynomial in $x$ without constant term of degree at most $j$; thus, it is enough to show that the coefficient of $x^\mu := x_1^{\mu_1} \cdots x_r^{\mu_r}$ is equal to zero for all non-empty partitions $\mu$ of size at most $j$. This coefficient is given by: where $s(j,k)$ is the Stirling number of the first kind, i.e., Since $\ell(\mu) \le |\mu| \le j < r$, the alternating sum over subsets $I$ weighted by $(-1)^{r-|I|}$ vanishes, which finishes the proof.
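The vanishing used here is an instance of the classical finite-difference identity: the $r$-th finite difference of a polynomial of degree $j < r$ is zero, i.e., $\sum_{i=0}^{r} (-1)^{r-i} \binom{r}{i} i^j = 0$ for $j < r$ (and equals $r!$ for $j = r$). A quick numerical check (helper name is ours):

```python
from math import comb, factorial

def finite_difference_sum(r, j):
    """sum_{i=0}^{r} (-1)^{r-i} * C(r, i) * i**j
       -- equals 0 for j < r and r! for j = r."""
    return sum((-1) ** (r - i) * comb(r, i) * i ** j for i in range(r + 1))

for r in range(1, 6):
    for j in range(r):
        assert finite_difference_sum(r, j) == 0   # vanishes whenever j < r
    assert finite_difference_sum(r, r) == factorial(r)
print(finite_difference_sum(4, 4))  # 4! = 24
```

Summing over subsets $I \subseteq [r]$ with sign $(-1)^{r-|I|}$ and a weight depending only on $|I|$ reduces to exactly this sum, since there are $\binom{r}{i}$ subsets of each size $i$.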
We recall that Lemma 5.2 For any positive integer $r \ge 2$ and any partitions $\lambda^1, \ldots, \lambda^r$, the following equality holds true: Proof Note that strictly from the definition of interpolation Macdonald polynomials given by Proposition 2.2, we know that for any set partition $\pi \in \mathcal{P}([r])$ the following identity holds:
If we substitute this into the definition of $D\kappa^J(\lambda^1, \ldots, \lambda^r)$, we have that Fix a set partition $\sigma \in \mathcal{P}([r])$. We claim that the expression in the bracket in the above equation is given by a sum over set partitions $\pi \in \mathcal{P}([r])$ with $\pi \ge \sigma$, weighted by $\mu(\pi, \{[r]\})$, which finishes the proof, since the right-hand side of Eq. (23) vanishes for all set partitions $\sigma$ such that $\#\sigma > j$, which is ensured by Proposition 5.1. Let us order the blocks of $\sigma$ in some way: $\sigma = \{B_1, \ldots, B_{\#\sigma}\}$. The partitions $\pi$ coarser than $\sigma$ are in bijection with partitions of the blocks of $\sigma$, that is, with partitions of $[\#\sigma]$. Therefore, the left-hand side of Eq. (23) can be rewritten as: