1 Introduction

1.1 Macdonald polynomials

In 1988, Macdonald [21, 22] introduced a new family of symmetric functions \(J_\lambda ^{(q,t)}({\varvec{x}})\) depending on a partition \(\lambda \), a set of variables \({\varvec{x}} = \{x_1,\dots ,x_N\}\) and two parameters q and t. They were immediately hailed as a breakthrough in symmetric function theory as well as in special functions, as they contain most of the previously studied families of symmetric functions, such as Schur polynomials, Jack polynomials, Hall–Littlewood polynomials and Askey–Wilson polynomials, as special cases. They also satisfy many remarkable properties, of which we mention just one, which led to a striking relation between Macdonald polynomials, representation theory and algebraic geometry. This property, called Macdonald’s positivity conjecture [21], states that the coefficients \(K^{(q,t)}_{\mu ,\lambda }\) in the expansion of \(J_\lambda ^{(q,t)}({\varvec{x}})\) into the “plethystic Schur” basis \(s_\mu [{\varvec{X}}(1-t)]\) (for readers not familiar with plethystic notation, we refer to [22, Chapter VI.8]) are polynomials in q and t with nonnegative integer coefficients. Garsia and Haiman [7] refined this conjecture, giving a representation-theoretic interpretation of the coefficients in terms of Garsia–Haiman modules, an interpretation which was finally proved almost ten years later by Haiman [12], who connected the problem to the study of the Hilbert scheme of N points in the plane from algebraic geometry. Macdonald polynomials have since found applications in special function theory, representation theory, algebraic geometry, group theory, statistics, quantum mechanics and much more [10]. Moreover, their fascinating and rich combinatorial structure is one of the most important objects of interest in contemporary algebraic combinatorics.

1.2 Strong factorization property of interpolation Macdonald polynomials

The main goal of this paper is to state and partially prove a generalization of the celebrated Macdonald positivity conjecture. We do so by proving that Macdonald polynomials have a strong factorization property when \(q \rightarrow 1\), which also resolves a problem posed by the author and Féray in our recent joint paper [2, Conjecture 1.5].

In order to explain the notion of strong factorization property, let us introduce some notation. If \(\lambda \) and \(\mu \) are partitions, we denote by \(\lambda \oplus \mu := (\lambda _1+\mu _1,\lambda _2+\mu _2,\ldots )\) their entry-wise sum; see Sect. 2.2. If \(\lambda ^1,\ldots ,\lambda ^r\) are partitions and I is a subset of \([r]:=\{1,\ldots ,r\}\), then we denote

$$\begin{aligned} \lambda ^I:= \bigoplus _{i \in I} \lambda ^i. \end{aligned}$$
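Since all the constructions below are combinatorial, the entry-wise sum is easy to experiment with on small examples. A minimal Python sketch (the helper names `oplus` and `oplus_many` are ours, not the paper's):

```python
# Minimal sketch of the entry-wise sum of partitions; helper names are ours.
from itertools import zip_longest

def oplus(lam, mu):
    """Entry-wise sum lambda (+) mu of two partitions (tuples)."""
    return tuple(a + b for a, b in zip_longest(lam, mu, fillvalue=0))

def oplus_many(parts, I):
    """lambda^I = (+)_{i in I} lambda^i for a 1-based index set I within [r]."""
    total = ()
    for i in sorted(I):
        total = oplus(total, parts[i - 1])
    return total

print(oplus((3, 1), (2, 2, 1)))                    # (5, 3, 1)
print(oplus_many([(2,), (1, 1), (3, 2)], {1, 3}))  # (5, 2)
```

Note that the entry-wise sum of two weakly decreasing sequences is again weakly decreasing, so \(\lambda \oplus \mu \) is indeed a partition.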

Moreover, we use a standard notation:

Definition 1.1

For \(r \in R\), where R is a ring and \(f,g\in R(q)\), we write \(f=O_r(g)\) if the rational function \(\frac{f(q)}{g(q)}\) has no pole at \(q = r\).
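For concrete rational functions, the condition \(f = O_r(g)\) can be checked mechanically: reduce \(f/g\) to lowest terms and test whether the denominator vanishes at \(q = r\). A small sketch using sympy (the function name `is_O_at` is our own):

```python
# Checking f = O_r(g): cancel f/g and test the denominator at q = r.
# Illustrative sketch; `is_O_at` is our name, not the paper's.
import sympy as sp

q = sp.symbols('q')

def is_O_at(f, g, r):
    ratio = sp.cancel(sp.sympify(f) / sp.sympify(g))  # reduced fraction
    _, den = sp.fraction(ratio)
    return den.subs(q, r) != 0

print(is_O_at((q - 1)**2, q - 1, 1))  # True: the ratio q - 1 is finite at q = 1
print(is_O_at(1, q - 1, 1))           # False: 1/(q - 1) has a pole at q = 1
```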

Then, we prove the following theorem:

Theorem 1.2

For any partitions \(\lambda ^1,\dots ,\lambda ^r\), Macdonald polynomials have the strong factorization property when \(q \rightarrow 1\), i.e.,

$$\begin{aligned} \prod _{I \subset [r]} \left( J_{\lambda ^I}^{(q,t)}\right) ^{(-1)^{|I|}} =1+O_1\left( (q-1)^{r-1}\right) . \end{aligned}$$
(1)

As in our previous paper [2], let us unpack the notation for small values of r in order to explain the terminology strong factorization property.

  • For \(r=2\), Eq. (1) reads

    $$\begin{aligned} \frac{J^{(q,t)}_{\lambda ^1 \oplus \lambda ^2}}{J^{(q,t)}_{\lambda ^1}J^{(q,t)}_{\lambda ^2}} = 1+O_1\left( q-1\right) . \end{aligned}$$

    In other words, this means that for \(q=1\), one has the factorization \(J^{(1,t)}_{\lambda ^1 \oplus \lambda ^2}=J^{(1,t)}_{\lambda ^1}J^{(1,t)}_{\lambda ^2}\). This is indeed true and follows from an explicit expression for \(J^{(1,t)}_{\lambda }\) given in [22, Chapter VI, Remark (8.4)-(iii)]. Thus, in this case, our theorem does not give anything new.

  • For \(r=3\), Eq. (1) reads

    $$\begin{aligned} \frac{J^{(q,t)}_{\lambda ^1 \oplus \lambda ^2 \oplus \lambda ^3} \, J^{(q,t)}_{\lambda ^1} \, J^{(q,t)}_{\lambda ^2} J^{(q,t)}_{\lambda ^3}}{J^{(q,t)}_{\lambda ^1 \oplus \lambda ^2} J^{(q,t)}_{\lambda ^1 \oplus \lambda ^3} J^{(q,t)}_{\lambda ^2 \oplus \lambda ^3}} = 1+O_1\left( (q-1)^2\right) . \end{aligned}$$

    Using the case \(r=2\), it is easily seen that the left-hand side is \(1+O_1\left( q-1\right) \). But our theorem says more: it asserts that the left-hand side is \(1+O_1\left( (q-1)^2\right) \), which is not at all trivial.

Theorem 1.2 has an equivalent form that uses the notion of cumulants of Macdonald polynomials (see Sect. 4 for comments on the terminology). For partitions \(\lambda ^1,\ldots ,\lambda ^r\), we denote

$$\begin{aligned} \kappa ^J(\lambda ^1,\ldots ,\lambda ^r) := \sum _{\pi \in {\mathcal {P}}([r])} (-1)^{\#\pi -1}\,(\#\pi -1)!\prod _{B \in \pi } J^{(q,t)}_{\lambda ^B}. \end{aligned}$$

Here, the sum is taken over set partitions \(\pi \) of [r] and \(\#\pi \) denotes the number of parts of \(\pi \); see Sect. 2.1 for details. For example,

$$\begin{aligned} \kappa ^J(\lambda ^1,\lambda ^2)&= J^{(q,t)}_{\lambda ^1\oplus \lambda ^2}- J^{(q,t)}_{\lambda ^1}J^{(q,t)}_{\lambda ^2}, \\ \kappa ^J(\lambda ^1,\lambda ^2,\lambda ^3)&= J^{(q,t)}_{\lambda ^1\oplus \lambda ^2\oplus \lambda ^3}- J^{(q,t)}_{\lambda ^1}J^{(q,t)}_{\lambda ^2\oplus \lambda ^3} -J^{(q,t)}_{\lambda ^2}J^{(q,t)}_{\lambda ^1\oplus \lambda ^3}\\ &\quad - J^{(q,t)}_{\lambda ^3}J^{(q,t)}_{\lambda ^1\oplus \lambda ^2} + 2J^{(q,t)}_{\lambda ^1}J^{(q,t)}_{\lambda ^2}J^{(q,t)}_{\lambda ^3}. \end{aligned}$$
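These expansions are easy to verify by machine. The sketch below builds the cumulant of formal symbols by summing over set partitions with the Möbius coefficient \((-1)^{\#\pi -1}(\#\pi -1)!\) of the partition lattice, and reproduces the two examples above (all names are ours; sympy symbols stand in for the \(J^{(q,t)}_{\lambda ^B}\)):

```python
# Cumulant of formal symbols, summed over set partitions of [r] with the
# Moebius coefficient (-1)^(#pi - 1) (#pi - 1)!.  Illustrative; names are ours.
import sympy as sp
from sympy.utilities.iterables import multiset_partitions
from math import factorial

def J(block):
    # formal symbol standing for J^{(q,t)}_{lambda^B}
    return sp.Symbol('J_' + ''.join(str(i) for i in sorted(block)))

def kappa(r):
    total = sp.Integer(0)
    for pi in multiset_partitions(list(range(1, r + 1))):
        coeff = (-1) ** (len(pi) - 1) * factorial(len(pi) - 1)
        total += coeff * sp.Mul(*[J(B) for B in pi])
    return sp.expand(total)

print(kappa(2))
print(kappa(3))
```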

An equivalent form of Theorem 1.2 in terms of cumulants is as follows:

Theorem 1.3

For any partitions \(\lambda ^1, \dots , \lambda ^r\), Macdonald polynomials have a small cumulant property when \(q \rightarrow 1\), that is

$$\begin{aligned} \kappa ^J(\lambda ^1,\ldots ,\lambda ^r)=O_1\left( (q-1)^{r-1} \right) . \end{aligned}$$

Instead of proving Theorem 1.3, we prove the stronger result that interpolation Macdonald polynomials have a small cumulant property when \(q \rightarrow 1\), from which Theorem 1.3 follows as a special case. To make this section complete, let us introduce interpolation Macdonald polynomials.

Interpolation polynomials are characterized by certain vanishing conditions. Sahi [24] proved that for each partition \(\lambda \) of length \(\ell (\lambda ) \le N\), there exists a unique (inhomogeneous) symmetric polynomial \({\mathcal {J}}^{(q,t)}_\lambda ({\varvec{x}})\) of degree \(|\lambda |\), where \({\varvec{x}} = (x_1,\dots ,x_N)\), with the following properties:

  • in the monomial basis expansion, the coefficient \([m_\lambda ]{\mathcal {J}}^{(q,t)}_\lambda ({\varvec{x}})\) is the same as \( [m_\lambda ]J^{(q,t)}_\lambda ({\varvec{x}})\);

  • for all partitions \(\mu \ne \lambda \) with \(|\mu | \le |\lambda |\), the evaluation \({\mathcal {J}}^{(q,t)}_\lambda ({\widetilde{\mu }})\) vanishes, where

    $$\begin{aligned} {\widetilde{\mu }} := \left( q^{\mu _1}t^{N-1},q^{\mu _2}t^{N-2},\dots ,q^{\mu _N}t^{0}\right) . \end{aligned}$$

This symmetric polynomial is called the interpolation Macdonald polynomial, and it has a remarkable property which explains its name: its top-degree part is equal to the Macdonald polynomial \(J^{(q,t)}_\lambda ({\varvec{x}})\).

Our main result is the following theorem:

Theorem 1.4

Let \(\lambda ^1,\ldots ,\lambda ^r\) be partitions. Then, we have the following small cumulant property when \(q \rightarrow 1\):

$$\begin{aligned} \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\ldots ,\lambda ^r\right) = O_1\left( (q-1)^{r-1} \right) , \end{aligned}$$

where \(\kappa ^{{\mathcal {J}}}(\lambda ^1,\ldots ,\lambda ^r)\) is a cumulant of interpolation Macdonald polynomials.

Since the top-degree part of \(\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\) is equal to \(\kappa ^J(\lambda ^1,\dots ,\lambda ^r)\), Theorem 1.3 follows.

1.3 Higher-order Macdonald’s positivity conjecture

As we already mentioned, the purpose of this paper is to generalize q, t-Kostka numbers and to prove that they are polynomials in q and t with integer coefficients. Before we define the multivariate q, t-Kostka numbers, let us mention that, directly from their definition, q, t-Kostka numbers are elements of \({\mathbb {Q}}(q,t)\), and it took six or seven years after Macdonald formulated his conjecture to prove that they are in fact polynomials in q and t with integer coefficients, a result obtained independently by several authors [9, 11, 14, 15, 20, 24]. This result will be important in proving the integrality of the multivariate q, t-Kostka numbers.

Let \(\lambda ^1,\ldots ,\lambda ^r\) be partitions. We define the multivariate q, t-Kostka numbers \(K^{(q,t)}_{\mu ; \lambda ^1,\dots ,\lambda ^r}\) by the following equation

$$\begin{aligned} \kappa ^J(\lambda ^1,\dots ,\lambda ^r) = (q-1)^{r-1} \sum _{\mu \ \vdash \left| \lambda ^{[r]}\right| }K^{(q,t)}_{\mu ;\lambda ^1,\dots ,\lambda ^r}s_\mu [{\varvec{X}}(1-t)]. \end{aligned}$$

Note that when \(r=1\), the multivariate q, t-Kostka number \(K^{(q,t)}_{\mu ; \lambda ^1}\) is equal to the ordinary q, t-Kostka number \(K^{(q,t)}_{\mu ,\lambda }\) with \(\lambda ^1 = \lambda \).

In particular, the integrality of Littlewood–Richardson coefficients together with the integrality result on q, t-Kostka numbers implies that

$$\begin{aligned} (q-1)^{r-1} K^{(q,t)}_{\mu ;\lambda ^1,\dots ,\lambda ^r} \in {\mathbb {Z}}[q,t]. \end{aligned}$$

Thus, applying Theorem 1.3 to the above result, we immediately obtain the following theorem:

Theorem 1.5

Let \(\lambda ^1,\dots ,\lambda ^r\) be partitions. Then, for any partition \(\mu \), the multivariate q, t-Kostka number \(K^{(q,t)}_{\mu ; \lambda ^1,\dots ,\lambda ^r}\) is a polynomial in q and t with integer coefficients.

We recall that Macdonald’s positivity conjecture is nowadays a well-established theorem, since Haiman proved it in 2001 [12]. We ran computer simulations which suggested that the multivariate q, t-Kostka numbers are also polynomials with positive coefficients. Unfortunately, we are not able to prove this, since our techniques from the proof of Theorem 1.4 do not seem applicable to this problem, and we state it in this paper as a conjecture.

Conjecture 1.6

Let \(\lambda ^1,\dots ,\lambda ^r\) be partitions. Then, for any partition \(\mu \), the multivariate q, t-Kostka number \(K^{(q,t)}_{\mu ;\lambda ^1,\dots ,\lambda ^r}\) is a polynomial in q and t with positive integer coefficients.

1.4 Related problems

We finish this section by mentioning some similar or related problems. First, we recall that one of the most typical applications of cumulants is to show that a certain family of random variables is asymptotically Gaussian. In particular, when one deals with discrete structures, the main technique is to show that the cumulants satisfy a certain small cumulant property, in the same spirit as our Theorem 1.3; see [4,5,6, 26]. It is therefore natural to ask for a probabilistic interpretation of Theorem 1.3. In particular, does it lead to some kind of central limit theorem? The most natural framework for investigating this problem seems to be related to the Macdonald processes introduced by Borodin and Corwin [1], or to the representation-theoretic interpretation of Macdonald polynomials given by Haiman [12].

A second problem is related to the combinatorics of Jack polynomials, which are special cases of Macdonald polynomials. In fact, Theorem 1.3 was posed as an open question in our previous paper joint with Féray [2], where we proved that Jack polynomials have a strong factorization property when \(\alpha \rightarrow 0\), where \(\alpha \) is the Jack-deformation parameter. In the same paper, we used this result as a key tool to prove the polynomiality part of the so-called b-conjecture, stated by Goulden and Jackson [8]. This conjecture says that a certain multivariate generating function involving Jack symmetric functions expressed in the power-sum basis gives rise to the multivariate generating function of bipartite maps (bipartite graphs embedded into some surface), where the exponent of \(\beta := \alpha - 1\) has an interpretation as some mysterious “measure of non-orientability” of the associated map. The conjecture is still open, while some special cases have been solved [3, 8, 17, 18]. It is very tempting to build a q, t-framework generalizing the b-conjecture. Although we can simply replace Jack polynomials by Macdonald polynomials in the definition of the multivariate generating function given by Goulden and Jackson and use the same techniques as in [2] to prove that, expanding it in a properly normalized power-sum basis, we obtain polynomials in q and t, these polynomials have coefficients that are neither positive nor integral. Therefore, we leave open the question of building a proper framework which generalizes the b-conjecture to two parameters in a way that relates it to counting some combinatorial objects.

1.5 Organization of the paper

We describe all necessary definitions and background in Sect. 2. Section 3 gives the proof of Theorem 1.4, preceded by an explanation of the main idea of the proof. In Sect. 4, we discuss cumulants and their relation to the strong factorization property, and we investigate a relation between cumulants and derivatives that is at the heart of the proof of Theorem 1.4. Finally, Sect. 5 is devoted to the proof of two intermediate steps of the proof of Theorem 1.4.

2 Preliminaries

2.1 Set partitions lattice

The combinatorics of set partitions is central in the theory of cumulants and will be important in this article. We recall here some well-known facts about them.

A set partition of a set S is a (non-ordered) family of non-empty disjoint subsets of S (called parts of the partition), whose union is S. In the following, we always assume that S is finite.

Denote by \({\mathcal {P}}(S)\) the set of set partitions of a given set S. Then, \({\mathcal {P}}(S)\) may be endowed with a natural partial order: the refinement order. We say that \(\pi \) is finer than \(\pi '\) (or \(\pi '\) coarser than \(\pi \)) if every part of \(\pi \) is included in a part of \(\pi '\). We denote this by \(\pi \le \pi '\).

Endowed with this order, \({\mathcal {P}}(S)\) is a complete lattice, which means that each family F of set partitions admits a join (the finest set partition which is coarser than all set partitions in F; we denote the join operator by \(\vee \)) and a meet (the coarsest set partition which is finer than all set partitions in F; we denote the meet operator by \(\wedge \)). In particular, the lattice \({\mathcal {P}}(S)\) has a maximum \(\{S\}\) (the partition into only one part) and a minimum \(\{ \{x\}: x \in S\}\) (the partition into singletons).
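The refinement order, meet and join can be implemented directly on set partitions represented as sets of frozensets: the meet consists of the non-empty pairwise intersections of blocks, and the join is obtained by repeatedly merging overlapping blocks. A minimal sketch (names ours):

```python
# Set partitions as sets of frozensets; refinement order, meet and join.
# Illustrative sketch; names are ours.

def is_finer(pi, sigma):
    """pi <= sigma: every block of pi is contained in a block of sigma."""
    return all(any(B <= C for C in sigma) for B in pi)

def meet(pi, sigma):
    """Coarsest common refinement: non-empty pairwise intersections of blocks."""
    return {frozenset(B & C) for B in pi for C in sigma if B & C}

def join(pi, sigma):
    """Finest common coarsening: repeatedly merge overlapping blocks."""
    blocks = [set(B) for B in list(pi) + list(sigma)]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(B) for B in blocks}

pi    = {frozenset({1, 2}), frozenset({3}), frozenset({4})}
sigma = {frozenset({1}), frozenset({2, 3}), frozenset({4})}
print(meet(pi, sigma))  # the four singletons
print(join(pi, sigma))  # {{1, 2, 3}, {4}}
```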

Lastly, denote by \(\mu \) the Möbius function of the partition lattice \({\mathcal {P}}(S)\). Then, for any pair \(\pi \le \sigma \) of set partitions, the value of the Möbius function has a product form:

$$\begin{aligned} \mu (\pi , \sigma )=\prod _{B' \in \sigma }\mu \left( \{B \in \pi : B \subset B'\}, \{B'\}\right) , \end{aligned}$$
(2)

where the product is taken over all blocks of \(\sigma \), and for a given block \(B' \in \sigma \) the expression \(\mu \left( \{B \in \pi : B \subset B'\}, \{B'\}\right) \) denotes the Möbius function of the lattice \({\mathcal {P}}(B')\), evaluated on the interval between the partition \(\{B \in \pi : B \subset B'\}\) and the maximal element \(\{B'\}\). This function is given by the explicit formula

$$\begin{aligned} \mu \left( \{B \in \pi : B \subset B'\}, \{B'\}\right) = (-1)^{\#\{B \in \pi : B\subset B'\}-1} \left( \#\{B \in \pi : B\subset B'\}-1\right) !, \end{aligned}$$

where \(\#\pi \) denotes the number of parts of \(\pi \).
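The product formula (2), combined with the factorial formula above, gives a direct way to evaluate \(\mu (\pi , \sigma )\); a small illustrative implementation (function names are ours):

```python
# Moebius function of the partition lattice via the product formula (2)
# and the factorial formula.  Illustrative sketch; names are ours.
from math import factorial

def mobius(pi, sigma):
    """mu(pi, sigma) for pi <= sigma (blocks represented as frozensets)."""
    result = 1
    for Bp in sigma:
        k = sum(1 for B in pi if B <= Bp)   # blocks of pi inside B'
        result *= (-1) ** (k - 1) * factorial(k - 1)
    return result

bottom = {frozenset({1}), frozenset({2}), frozenset({3})}
top = {frozenset({1, 2, 3})}
print(mobius(bottom, top))  # (-1)^2 * 2! = 2
```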

We finish this section by stating a well-known result on computing the Möbius function of a lattice.

Proposition 2.1

(Weisner’s Theorem [27]) For any \(\pi < \tau \le \sigma \) in a lattice L, we have

$$\begin{aligned} \sum _{\begin{array}{c} \pi \le \omega \le \sigma : \\ \omega \vee \tau =\sigma \end{array}}\mu (\pi , \omega ) = 0. \end{aligned}$$

2.2 Partitions

We call \(\lambda := (\lambda _1, \lambda _2, \dots , \lambda _l)\) a partition of n if it is a weakly decreasing sequence of positive integers such that \(\lambda _1+\lambda _2+\cdots +\lambda _l = n\). Then, n is called the size of \(\lambda \), while l is its length. As usual, we use the notation \(\lambda \vdash n\), or \(|\lambda | = n\), and \(\ell (\lambda ) = l\). We denote the set of partitions of n by \({\mathbb {Y}}_n\), and we define a partial order on \({\mathbb {Y}}_n\), called dominance order, in the following way:

$$\begin{aligned} \lambda \le \mu \iff \sum _{i\le j}\lambda _i \le \sum _{i\le j}\mu _i \text { for any positive integer } j. \end{aligned}$$

Then, we extend the notion of dominance order on the set of partitions of arbitrary size by saying that

$$\begin{aligned} \lambda \preceq \mu \iff |\lambda | < |\mu |, \text { or } |\lambda | = |\mu | \text { and } \lambda \le \mu . \end{aligned}$$
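Both orders are straightforward to code via prefix sums; a minimal sketch (names ours):

```python
# Dominance order via prefix sums, and its extension across sizes.
# Illustrative sketch; names are ours.

def dominance_le(lam, mu):
    """lam <= mu in dominance order (requires |lam| = |mu|)."""
    if sum(lam) != sum(mu):
        return False
    s_lam = s_mu = 0
    for j in range(max(len(lam), len(mu))):
        s_lam += lam[j] if j < len(lam) else 0
        s_mu += mu[j] if j < len(mu) else 0
        if s_lam > s_mu:
            return False
    return True

def precedes(lam, mu):
    """lam <= mu in the extended order: smaller size, or equal size and dominance."""
    return sum(lam) < sum(mu) or dominance_le(lam, mu)

print(dominance_le((1, 1, 1, 1), (2, 2)))  # True
print(dominance_le((3, 1), (2, 2)))        # False
print(precedes((2, 1), (2, 2)))            # True (smaller size)
```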

For any two partitions \(\lambda \in {\mathbb {Y}}_n\) and \(\mu \in {\mathbb {Y}}_m\), we can construct a new partition \(\lambda \oplus \mu \in {\mathbb {Y}}_{n+m}\) by setting \(\lambda \oplus \mu := (\lambda _1+\mu _1,\lambda _2+\mu _2,\dots )\). Moreover, there exists a canonical involution on the set \({\mathbb {Y}}_n\), which associates with a partition \(\lambda \) its conjugate partition \(\lambda ^t\). By definition, the jth part \(\lambda _j^t\) of the conjugate partition is the number of positive integers i such that \(\lambda _i \ge j\). A partition \(\lambda \) is identified with a geometric object, called its Young diagram, defined as follows:

$$\begin{aligned} \lambda = \{(i, j):1 \le i \le \lambda _j, 1 \le j \le \ell (\lambda ) \}. \end{aligned}$$

For any box \(\square := (i,j) \in \lambda \) of the Young diagram, we define its arm-length by \(a(\square ) := \lambda _j-i\) and its leg-length by \(\ell (\square ) := \lambda _i^t-j\) (the same definitions as in [22, Chapter I]); see Fig. 1.

Fig. 1

Arm- and leg-length of boxes in Young diagrams
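With the convention above (i is the column index and j the row index of a box), conjugation and arm-/leg-lengths can be computed as follows; a short illustrative sketch (names ours):

```python
# Conjugate partition and arm-/leg-lengths, with i the column index and
# j the row index of a box, as in the text.  Illustrative; names are ours.

def conjugate(lam):
    """lam^t_j = #{i : lam_i >= j}."""
    if not lam:
        return ()
    return tuple(sum(1 for part in lam if part >= j) for j in range(1, lam[0] + 1))

def arm(lam, i, j):
    """a(box) = lam_j - i."""
    return lam[j - 1] - i

def leg(lam, i, j):
    """l(box) = lam^t_i - j."""
    return conjugate(lam)[i - 1] - j

lam = (4, 3, 1)
print(conjugate(lam))                  # (3, 2, 2, 1)
print(arm(lam, 1, 1), leg(lam, 1, 1))  # 3 2
```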

Finally, we define two combinatorial quantities associated with partitions that we will use extensively throughout this paper. First, we define the (q, t)-hook polynomial \(h_{(q,t)}(\lambda )\) by the following equation:

$$\begin{aligned} h_{(q,t)}(\lambda )&:= \prod _{\square \in \lambda }\left( 1 - q^{a(\square )}t^{\ell (\square ) +1} \right) . \end{aligned}$$
(3)

We also introduce a partition binomial given by

$$\begin{aligned} b^N_j(\lambda ) := \sum _{1 \le i \le N}\left( {\begin{array}{c}\lambda _i\\ j\end{array}}\right) t^{N-i}. \end{aligned}$$
(4)
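Equations (3) and (4) translate directly into code; the following sympy sketch computes both quantities for small partitions (function names are ours):

```python
# (q,t)-hook polynomial, Eq. (3), and partition binomial, Eq. (4).
# Illustrative sympy sketch; names are ours.
import sympy as sp
from math import comb

q, t = sp.symbols('q t')

def conjugate(lam):
    return tuple(sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1)) if lam else ()

def hook_poly(lam):
    """h_{(q,t)}(lam) = prod over boxes (i,j) of (1 - q^a t^(l+1))."""
    lam_t = conjugate(lam)
    result = sp.Integer(1)
    for j in range(1, len(lam) + 1):        # j = row index
        for i in range(1, lam[j - 1] + 1):  # i = column index
            a = lam[j - 1] - i              # arm-length
            l = lam_t[i - 1] - j            # leg-length
            result *= 1 - q**a * t**(l + 1)
    return result

def b(lam, j, N):
    """b^N_j(lam) = sum_{1<=i<=N} C(lam_i, j) t^(N-i), with lam_i = 0 for i > len(lam)."""
    return sum(comb(lam[i - 1] if i <= len(lam) else 0, j) * t**(N - i)
               for i in range(1, N + 1))

print(sp.expand(hook_poly((1,))))  # 1 - t
print(sp.expand(b((2, 1), 1, 3)))  # 2*t**2 + t
```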

2.3 Interpolation Macdonald polynomials as eigenfunctions

We already defined interpolation Macdonald polynomials in Sect. 1.2, but we are going to introduce another, equivalent definition that is more convenient for the purposes of this paper. Since this is now a well-established theory, the results of this section are given without proofs but with explicit references to the literature (mostly to Macdonald’s book [22] and Sahi’s paper [24]).

First, consider the vector space \({{\mathrm{Sym}}}_N\) of symmetric polynomials in N variables over \({\mathbb {Q}}(q,t)\). Let \(T_{q,x_i}\) be the “q-shift operator” defined by

$$\begin{aligned} T_{q,x_i} f(x_1,\dots ,x_N) := f(x_1,\dots ,qx_i,\dots ,x_N), \end{aligned}$$

and

$$\begin{aligned} A_i({\varvec{x}};t)f(x_1,\dots ,x_N) := \left( \prod _{j\ne i} \frac{tx_i-x_j}{x_i-x_j}\right) f(x_1,\dots ,x_N). \end{aligned}$$

Let us define an operator

$$\begin{aligned} D := \sum _i A_i({\varvec{x}};t)\left( 1-x_i^{-1}\right) \left( T_{q,x_i}-1\right) . \end{aligned}$$
(5)

Proposition 2.2

There exists a unique family \({\mathcal {J}}_\lambda ^{(q,t)}\) (indexed by partitions \(\lambda \) of length at most N) in \({{\mathrm{Sym}}}_N\) that satisfies:

  1. (C1)

    \({\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_\lambda ({\varvec{x}})\) is an eigenvector of D with eigenvalue

    $$\begin{aligned} \text {ev}(\lambda ) := \sum _{1 \le i \le N} \left( q^{\lambda _i}-1\right) t^{N-i} = \sum _{j \ge 1}(q-1)^j b^N_j(\lambda ); \end{aligned}$$
  2. (C2)

    the monomial expansion of \({\mathcal {J}}_\lambda ^{(q,t)}\) is given by

    $$\begin{aligned} {\mathcal {J}}^{(q,t)}_\lambda = h_{(q,t)}(\lambda ) m_\lambda + \sum _{\nu \prec \lambda }a^{\lambda }_\nu m_\nu , \text { where } a^{\lambda }_\nu \in {\left\{ \begin{array}{ll} {\mathbb {Z}}[q,t] &{}\text { for } |\nu | = |\lambda |,\\ {\mathbb {Z}}[q,t^{-1},t] &{}\text { for } |\nu | < |\lambda |.\end{array}\right. } \end{aligned}$$

These polynomials are called interpolation Macdonald polynomials.

This is a result of Sahi [24]. His original definition only requires the coefficients \(a^{\lambda }_\nu \) to be rational functions in q and t with rational coefficients, but in the same paper Sahi proved that they are in fact polynomials in \(q,t^{-1},t\) (and even in q and t when \(|\nu | = |\lambda |\)) with integer coefficients, which will be important for us later. We add, for completeness of the presentation, that we use a different notation and normalization than Sahi: the function \(R_\lambda (x;q^{-1},t^{-1})\) from Sahi’s paper [24] is equal to \(\left( h_{(q,t)}(\lambda )\right) ^{-1}{\mathcal {J}}^{(q,t)}_\lambda ({\varvec{x}})\) in our notation, and \(c_\lambda (q,t)\) from Sahi’s paper is the same as our \(h_{(q,t)}(\lambda )\).
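The second expression for the eigenvalue \(\text {ev}(\lambda )\) in (C1) follows from the binomial expansion \(q^m - 1 = \sum _{j \ge 1}\binom{m}{j}(q-1)^j\). The sketch below verifies the identity \(\sum _i (q^{\lambda _i}-1)t^{N-i} = \sum _j (q-1)^j b^N_j(\lambda )\) for a sample partition (names ours):

```python
# Check of the two expressions for the eigenvalue ev(lambda) in (C1),
# using q^m - 1 = sum_j C(m, j) (q-1)^j.  Illustrative; names are ours.
import sympy as sp
from math import comb

q, t = sp.symbols('q t')

def ev(lam, N):
    """ev(lambda) = sum_{1<=i<=N} (q^{lambda_i} - 1) t^(N-i)."""
    return sum((q**(lam[i - 1] if i <= len(lam) else 0) - 1) * t**(N - i)
               for i in range(1, N + 1))

def b(lam, j, N):
    """Partition binomial b^N_j(lambda) of Eq. (4)."""
    return sum(comb(lam[i - 1] if i <= len(lam) else 0, j) * t**(N - i)
               for i in range(1, N + 1))

lam, N = (3, 1), 3
lhs = ev(lam, N)
rhs = sum((q - 1)**j * b(lam, j, N) for j in range(1, max(lam) + 1))
print(sp.expand(lhs - rhs))  # 0
```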

The above definition shows that the interpolation Macdonald polynomial \({\mathcal {J}}_\lambda ^{(q,t)}\) depends on the parameter N, that is, the number of variables. However, one can show that it satisfies the compatibility relation \({\mathcal {J}}_\lambda ^{(q,t)}(x_1,\dots ,x_N,0)={\mathcal {J}}_\lambda ^{(q,t)}(x_1,\dots ,x_N)\), and thus \({\mathcal {J}}_\lambda ^{(q,t)}\) can be seen as a symmetric function. In the sequel, when working with differential operators, we sometimes identify a symmetric function f with its restriction \(f(x_1,\dots ,x_N,0,0,\dots )\) to N variables.

It was shown by Macdonald [22, Chapter VI, (3.9)–(3.10)] that

$$\begin{aligned} \left( \sum _{1 \le i \le N}A_i({\varvec{x}};t)T_{q,x_i}\right) m_\lambda = \left( \sum _{1 \le i \le N}q^{\lambda _i}t^{N-i}\right) m_\lambda + \sum _{\nu < \lambda }b^{\lambda }_\nu m_\nu , \end{aligned}$$

where \(b^{\lambda }_\nu \in {\mathbb {Z}}[q,t]\). Moreover, it is easy to show (see for example [24, Lemma 3.3]) that

$$\begin{aligned} \sum _i A_i({\varvec{x}};t) = \sum _i t^{N-i}. \end{aligned}$$
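This identity states that the rational functions \(A_i({\varvec{x}};t)\) sum to the polynomial \(1 + t + \dots + t^{N-1}\); a quick sympy check for \(N = 3\) (an illustrative sketch, not from the paper):

```python
# Check that sum_i A_i(x; t) = 1 + t + t^2 for N = 3.
# Illustrative sympy sketch, not from the paper.
import sympy as sp

t = sp.symbols('t')
N = 3
x = sp.symbols('x1:4')  # x1, x2, x3

def A(i):
    """A_i(x; t) = prod_{j != i} (t*x_i - x_j)/(x_i - x_j)."""
    expr = sp.Integer(1)
    for j in range(N):
        if j != i:
            expr *= (t * x[i] - x[j]) / (x[i] - x[j])
    return expr

diff = sp.cancel(sum(A(i) for i in range(N)) - (1 + t + t**2))
print(diff)  # 0
```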

Plugging this into Eq. (5), we observe that:

$$\begin{aligned} D\ m_\lambda = \text {ev}(\lambda )\ m_\lambda + \sum _{\nu \prec \lambda }c^{\lambda }_\nu \ m_\nu , \text { where } c^{\lambda }_\nu \in {\mathbb {Z}}[q,t]. \end{aligned}$$

Note that we can expand the operator D around \(q=1\) as a linear combination of differential operators in the following form:

$$\begin{aligned} D = \sum _{j \ge 1}\frac{(q-1)^{j}}{j!}\sum _i \left( A_i({\varvec{x}};t)\left( x_i^j-x_i^{j-1}\right) D^j_i\right) , \end{aligned}$$
(6)

where \(D^j_i := \frac{\partial ^j}{\partial x_i^j}\). As a consequence, we have the following identity:

$$\begin{aligned} \sum _{1 \le i \le N}\left( A_i({\varvec{x}};t)\left( x_i^j-x_i^{j-1}\right) D^j_i \right) m_\lambda &= \partial _q^{j}\big (\text {ev}(\lambda )\big )_{q=1}m_\lambda + \sum _{\nu \prec \lambda }\partial ^{j}_q \left( c^{\lambda }_\nu \right) _{q=1} m_\nu \\ &= j!\ b^N_j(\lambda )\ m_\lambda + \sum _{\nu \prec \lambda }d^{\lambda }_\nu m_\nu , \end{aligned}$$
(7)

where \(\partial _q\) is a partial derivative with respect to q, \(b^N_j(\lambda )\) is given by Eq. (4), and \(d^{\lambda }_\nu \in {\mathbb {Z}}[t]\).
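Equation (6) is just the Taylor expansion \(f(qx_i) = \sum _j \frac{((q-1)x_i)^j}{j!}\,\partial _{x_i}^j f\) applied inside Eq. (5); the following one-variable sympy check makes this explicit for a sample polynomial (an illustrative sketch):

```python
# Eq. (6) in one variable: (1 - 1/x)(f(qx) - f(x)) equals the Taylor sum
# sum_j (q-1)^j / j! * (x^j - x^(j-1)) * f^(j)(x).  Illustrative sketch.
import sympy as sp

q, x = sp.symbols('q x')
f = x**3 + 2 * x**2 + 5  # any sample polynomial of degree 3

lhs = (1 - 1 / x) * (f.subs(x, q * x) - f)
rhs = sum((q - 1)**j / sp.factorial(j) * (x**j - x**(j - 1)) * sp.diff(f, x, j)
          for j in range(1, 4))
diff = sp.cancel(lhs - rhs)
print(diff)  # 0
```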

Corollary 2.3

Let \(f \in {{\mathrm{Sym}}}\) be a symmetric function with an expansion in the monomial basis of the following form:

$$\begin{aligned} f = \sum _{\mu \prec \lambda } d_\mu m_\mu , \end{aligned}$$

where \(\lambda \) is a fixed partition, and \(d_\mu \in {\mathbb {Q}}(t)\). If, for any number N of variables, \(\sum _{1 \le i \le N}\left( A_i({\varvec{x}};t)(x_i-1)D^1_i \right) f = b^N_1(\lambda ) f\), then \(f = 0\).

Proof

This is immediate from Eq. (7), since \(b^N_1(\lambda ) = b^N_1(\mu )\) implies \(\lambda = \mu \). \(\square \)

3 Strong factorization property of interpolation Macdonald polynomials

In this section, we prove Theorem 1.4. Since its proof involves many intermediate results which can be considered independently of Theorem 1.4, presenting them all before the proof of the main result might discourage the reader. We therefore explain the main idea of the proof of Theorem 1.4 first, then give the proof with all the details, and finally present the remaining proofs of the intermediate results in separate sections.

Proof of Theorem 1.4

We recall that we need to prove that for any positive integer r, and for any partitions \(\lambda ^1,\dots ,\lambda ^r\) we have the following bound for the cumulant:

$$\begin{aligned} \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) = O_1\left( (q-1)^{r-1}\right) . \end{aligned}$$

The proof is by induction on r. The fact that the interpolation Macdonald polynomial \({\mathcal {J}}^{(q,t)}_\lambda \) has no singularity at \(q=1\) is straightforward from the result of Sahi presented in Proposition 2.2. That covers the case \(r=1\).

Now, notice that for any ring R and any rational function \(f \in R(q)\), the following conditions are equivalent:

$$\begin{aligned} f(q) = O_1\left( (q-1)^r\right) \iff f\left( q^{-1}\right) = O_1\left( (q-1)^r\right) . \end{aligned}$$

Thus, we are going to prove that

$$\begin{aligned} \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) = O_1\left( (q-1)^r\right) , \end{aligned}$$

where, from now on and until the end of this proof, \(\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\) denotes the cumulant with parameters \(q^{-1},t^{-1}\).

Let R be a ring, and let \(f \in R[q,q^{-1}]\) be a Laurent polynomial in q. We introduce the following notation: For any nonnegative integer k, the coefficient \([(q-1)^k]f \in R\) is defined by the following expansion:

$$\begin{aligned} q^{\deg (f)}f = \sum _{k \ge 0}\left( \left[ (q-1)^k\right] f\right) \ (q-1)^k, \end{aligned}$$

where \(\deg (f)\) is the smallest possible nonnegative integer such that

$$\begin{aligned} q^{\deg (f)}f \in R[q]. \end{aligned}$$

It is clear that for two Laurent polynomials \(f,g \in R[q,q^{-1}]\) and a nonnegative integer k, one has the following identity:

$$\begin{aligned} \left[ (q-1)^k\right] (fg) = \sum _{0 \le j \le k}\left( \left[ (q-1)^j\right] f\right) \cdot \left( \left[ (q-1)^{k-j}\right] g\right) . \end{aligned}$$
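For honest polynomials in q (where \(\deg (f) = 0\) and no clearing of negative powers is needed), this identity is the plain Cauchy product of the \((q-1)\)-expansions. A sympy sketch (names ours; we restrict to the polynomial case for simplicity):

```python
# (q-1)-expansion coefficients of polynomials in q, and the Cauchy-product
# identity for [(q-1)^k](f*g).  Illustrative; names are ours.
import sympy as sp

q, u, t = sp.symbols('q u t')

def qm1_coeffs(f, kmax):
    """[(q-1)^k] f for k = 0..kmax, via the substitution u = q - 1."""
    shifted = sp.expand(f.subs(q, u + 1))
    return [shifted.coeff(u, k) for k in range(kmax + 1)]

f = q**2 + t * q
g = q - t
cf = qm1_coeffs(f, 4)
cg = qm1_coeffs(g, 4)
cfg = qm1_coeffs(sp.expand(f * g), 4)

# [(q-1)^k](f g) = sum_{0<=j<=k} [(q-1)^j]f * [(q-1)^(k-j)]g
for k in range(5):
    conv = sum(cf[j] * cg[k - j] for j in range(k + 1))
    assert sp.expand(conv - cfg[k]) == 0
print(cf)  # coefficients of f in powers of (q - 1)
```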

With the above notation, we have to prove that for any integer \(0 \le k \le r-2\) the following equality holds true:

$$\begin{aligned} f := \left[ (q-1)^k\right] \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r) = 0. \end{aligned}$$

Notice now that the expansion of f into the monomial basis involves only the monomials \(m_\mu \) indexed by partitions \(\mu \prec \lambda ^{[r]}\), which is ensured by Proposition 4.8. Thus, if we are able to show that the following equation holds true:

$$\begin{aligned} \sum _{1 \le i \le N}A_i({\varvec{x}};t)(x_i - 1)D^1_i f = b^N_1\left( \lambda ^{[r]}\right) f, \end{aligned}$$
(8)

then \(f=0\) by Corollary 2.3, and the proof is complete. So our goal is to prove Eq. (8). In order to do that, we make the following observation: the interpolation Macdonald polynomial \({\mathcal {J}}^{(q^{-1},t^{-1})}_\lambda \) is an eigenfunction of the operator D. Since the cumulant is a linear combination of products of interpolation Macdonald polynomials

$$\begin{aligned} \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r) := \sum _{\pi \in {\mathcal {P}}([r])} (-1)^{\#\pi -1}\,(\#\pi -1)! \prod _{B \in \pi } {\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^B}, \end{aligned}$$

it would be very convenient if the action of D on such a product were given by the Leibniz rule, that is,

$$\begin{aligned} D\left( {\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_{\lambda ^1}\cdots {\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_{\lambda ^r}\right) = \sum _{1 \le k \le r}{\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_{\lambda ^1}\cdots \left( D {\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_{\lambda ^k} \right) \cdots {\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_{\lambda ^r}. \end{aligned}$$

Unfortunately, this is not the case. However, the trick is to decompose \(D\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\) into two parts: the first part is given by “forcing” the Leibniz rule for the action of D on the product of interpolation Macdonald polynomials, and the second part is the difference between the proper action of D on the cumulant and this forced version. More precisely,

$$\begin{aligned} D\kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) = \underbrace{{\widetilde{D}}\kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) }_{\text {first part}} + \underbrace{D\kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) - {\widetilde{D}}\kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) }_{\text {second part}}, \end{aligned}$$
(9)

where

$$\begin{aligned} {\widetilde{D}}\kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) := \sum _{\pi \in {\mathcal {P}}([r])} (-1)^{\#\pi -1}\,(\#\pi -1)!\ {\widetilde{D}}\left( {\mathcal {J}}^{\left( q^{-1},t^{-1}\right) }_{\lambda ^B}: B \in \pi \right) , \end{aligned}$$

and

$$\begin{aligned} {\widetilde{D}}(f_1,\dots ,f_r) := \sum _{1 \le k \le r}f_1\cdots \left( D f_k \right) \cdots f_r. \end{aligned}$$

This decomposition turns out to be crucial. Indeed, Lemma 5.2 ensures that the first part can be expressed as a linear combination of products of cumulants of fewer than r elements; thus, we can use the induction hypothesis to analyze it. Similarly, Lemma 5.3 states that the second part is given by an expression involving products of cumulants of fewer than r elements, and again the induction hypothesis can be used in its analysis. Then, comparing the coefficient of \((q-1)^k\) on the left-hand side of Eq. (9) with the coefficient of \((q-1)^k\) on the right-hand side of Eq. (9), we obtain Eq. (8). Let us go into the details. Expanding the operator D around \(q=1\) [see Eq. (6)], we have that

$$\begin{aligned} \left[ (q-1)^k\right] D \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) = \sum _{j \ge 1}\sum _{1 \le i \le N}A_i({\varvec{x}};t)\,\frac{x^j_i - x^{j-1}_i}{j!}\, D^j_i\left( \left[ (q-1)^{k-j}\right] \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) \right) . \end{aligned}$$
(10)

Moreover, applying Lemma 5.2, we have that

$$\begin{aligned} \left[ (q-1)^k\right] \left( {\widetilde{D}}\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\right) = \sum _{j \ge 1}\sum _{\begin{array}{c} \sigma \in {\mathcal {P}}([r])\\ \#\sigma \le j \end{array}} {{\mathrm{InEx}}}_j\left( \lambda ^B: B \in \sigma \right) \left[ (q-1)^{k-j}\right] \left( \prod _{B \in \sigma }\kappa ^{{\mathcal {J}}}\left( \lambda ^i: i \in B\right) \right) , \end{aligned}$$
(11)

where \({{\mathrm{InEx}}}_j\left( \lambda ^B: B \in \sigma \right) \) is a certain polynomial in t with integer coefficients [at this stage we do not need its explicit form; for the interested reader it is given by Eq. (22)], which in the simplest case takes the form:

$$\begin{aligned} {{\mathrm{InEx}}}_1(\lambda ) = b^N_1(\lambda ). \end{aligned}$$

Finally, applying Lemma 5.3, we obtain the following identity

$$\begin{aligned}&\left[ (q-1)^k\right] \left( D \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r) - {\widetilde{D}}\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\right) \nonumber \\&\quad = -\sum _{j \ge 2}\sum _{1 \le i \le N}A_i({\varvec{x}};t)\frac{x_i^j-x_i^{j-1}}{j!} \sum _{\begin{array}{c} \pi \in {\mathcal {P}}([r]), \\ 2 \le \#\pi \le j \end{array} }\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}_+^\pi , \\ |\alpha | = j \end{array} } \left( {\begin{array}{c}j\\ \alpha \end{array}}\right) \nonumber \\&\,\qquad \ \cdot \left[ (q-1)^{k-j}\right] \left( \prod _{B \in \pi } D_i^{\alpha (B)}\kappa ^{{\mathcal {J}}}(\lambda ^b: b \in B)\right) . \end{aligned}$$
(12)

Here, \({\mathbb {N}}_+^\pi \) denotes the set of functions \(\alpha : \pi \rightarrow {\mathbb {N}}_+\) with positive integer values, the symbol \(|\alpha |\) is defined as

$$\begin{aligned} |\alpha | := \sum _{B \in \pi }\alpha (B), \end{aligned}$$

and

$$\begin{aligned} \left( {\begin{array}{c}j\\ \alpha \end{array}}\right) := \frac{j!}{\prod _{B \in \pi }\alpha (B)!}. \end{aligned}$$

We recall that the right-hand side (RHS for short) of Eq. (10) is equal to the sum of the right-hand sides of Eqs. (11) and (12). Let \(k=1\). Then, the RHS of Eq. (10) is equal to

$$\begin{aligned} \sum _{1 \le i \le N}A_i({\varvec{x}};t)(x_i - 1)D^1_i f, \end{aligned}$$

the RHS of Eq. (11) is equal to

$$\begin{aligned} {{\mathrm{InEx}}}_1\left( \lambda ^{[r]}\right) f = b^N_1(\lambda ^{[r]})f, \end{aligned}$$

where \(f = \left[ (q-1)^0\right] \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\), and the RHS of Eq. (12) vanishes. Thus, we have shown that Eq. (8) holds true for \(f = \left[ (q-1)^0\right] \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\), which implies that \(f=0\). Now, we fix \(K \le r-2\), and we assume that

$$\begin{aligned} \left[ (q-1)^m\right] \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right) = 0 \end{aligned}$$

holds true for all \( 0 \le m < K\). We are going to show that Eq. (8) holds true for \(f = \left[ (q-1)^K\right] \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\). First, note that for \(k=K+1\) the RHS of Eq. (10) simplifies to

$$\begin{aligned} \sum _{1 \le i \le N}A_i({\varvec{x}};t)(x_i - 1)D^1_i f. \end{aligned}$$

Moreover, from the induction hypothesis, for each subset I with \(\emptyset \subsetneq I \subsetneq [r]\), one has \(\kappa ^{{\mathcal {J}}}(\lambda ^i: i \in I) = O\left( (q-1)^{|I|-1}\right) \). Thus, for any set partition \(\pi \in {\mathcal {P}}([r])\) which has at least two parts, one has

$$\begin{aligned} \prod _{B \in \pi } \left( D^{j_B}_i\kappa ^{{\mathcal {J}}}(\lambda ^b: b\in B) \right) = O\left( (q-1)^{r-\#\pi }\right) , \end{aligned}$$

where the \(j_B\) are any nonnegative integers (\(D^0_i = {{\mathrm{Id}}}\) by convention). It implies that the RHS of Eq. (12) vanishes. Finally, again by the induction hypothesis, all the elements of the form \(\left[ (q-1)^{k-j}\right] \left( \prod _{B \in \sigma }\kappa ^{{\mathcal {J}}}(\lambda ^i: i \in B)\right) \) that appear in the RHS of Eq. (11) vanish, except \(\left[ (q-1)^K\right] \kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r) = f\). Thus, the RHS of Eq. (11) simplifies to

$$\begin{aligned} {{\mathrm{InEx}}}_1\left( \lambda ^{[r]}\right) f = b^N_1(\lambda ^{[r]})f, \end{aligned}$$

which proves that Eq. (8) holds true. The proof is completed. \(\square \)

4 Cumulants

In this section, we introduce cumulants and we investigate an action of derivations on them, which is crucial in the proof of Theorem 1.4. We also explain the connection between the strong factorization property and the small cumulant property, and we present some applications of it relevant for our work. We begin with some definitions.

4.1 Partial cumulants

Definition 4.1

Let \((u_I)_{I \subseteq J}\) be a family of elements in a field, indexed by subsets of a finite set J. Then, its partial cumulant is defined as follows. For any non-empty subset H of J, set

$$\begin{aligned} \kappa _H({\varvec{u}}) = \sum _{\begin{array}{c} \pi \in {\mathcal {P}}(H) \end{array} } \mu (\pi , \{H\}) \prod _{B \in \pi } u_B, \end{aligned}$$
(13)

where \(\mu \) is the Möbius function of the set partition lattice; see Section 2.1.

The terminology comes from probability theory. Let \(J = [r]\), and let \(X_1,\dots ,X_r\) be random variables with finite moments defined on the same probability space. Then, define \(u_I={\mathbb {E}}(\prod _{i \in I} X_i)\), where \({\mathbb {E}}\) denotes the expected value. The quantity \(\kappa _{[r]}({\varvec{u}})\) as defined above is known as the joint (or mixed) cumulant of the random variables \(X_1,\dots ,X_r\). Also, \(\kappa _{H}({\varvec{u}})\) is the joint/mixed cumulant of the smaller family \(\{X_h, h \in H\}\).

Joint/mixed cumulants have been studied by Leonov and Shiryaev in [19] (see also an older note of Schützenberger [25], where they are introduced under the French name déviation d’indépendance). They now appear in random graph theory [13, Chapter 6] and have inspired a lot of work in non-commutative probability theory [23].

A classical result (see, e.g., [13, Proposition 6.16 (vi)]) is that Eq. (13) can be inverted as follows: for any non-empty subset H of J,

$$\begin{aligned} u_H = \sum _{\begin{array}{c} \pi \in {\mathcal {P}}(H) \end{array} } \prod _{B \in \pi } \kappa _B({\varvec{u}}). \end{aligned}$$
(14)
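These relations are easy to experiment with on a computer. The sketch below (plain Python; all helper names are ours) computes partial cumulants via Eq. (13), using the standard formula \(\mu (\pi ,\{H\}) = (-1)^{\#\pi -1}(\#\pi -1)!\) for the Möbius function of the set partition lattice recalled in Section 2.1, and checks the inversion formula (14) on a toy family with \(r=3\).

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations
from math import factorial

def set_partitions(s):
    """Yield all set partitions of the list s, as lists of blocks."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for p in set_partitions(rest):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p

def cumulant(u, H):
    """Partial cumulant kappa_H(u) of Eq. (13); u maps frozensets to scalars."""
    total = Fraction(0)
    for pi in set_partitions(sorted(H)):
        mobius = (-1) ** (len(pi) - 1) * factorial(len(pi) - 1)
        total += mobius * reduce(lambda a, B: a * u[frozenset(B)], pi, Fraction(1))
    return total

# a toy family (u_I), playing the role of joint moments
u = {frozenset(I): Fraction(1, 1 + sum(I))
     for k in range(4) for I in combinations(range(3), k)}

# Eq. (14): u_H is the sum over set partitions of products of cumulants
H = [0, 1, 2]
rhs = sum(reduce(lambda a, B: a * cumulant(u, B), pi, Fraction(1))
          for pi in set_partitions(H))
assert rhs == u[frozenset(H)]
```

For \(H\) of size 2 this recovers the covariance-type expression \(\kappa _{H}({\varvec{u}}) = u_{H} - u_{\{h_1\}}u_{\{h_2\}}\).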

4.2 Derivations and cumulants

Let R be a ring. We define the R-module of derivations \({{\mathrm{Der}}}_R\), which consists of linear maps \(D: R \rightarrow R\) satisfying the following Leibniz rule:

$$\begin{aligned} D(f\cdot g) = (Df)\cdot g + f \cdot (D g). \end{aligned}$$

For any positive integers r, k and for any elements \(f_1,\dots ,f_r \in R\), we define

$$\begin{aligned} {\widetilde{D^k}}(f_1,\dots ,f_r) := \sum _{1 \le i \le r}f_1\cdots \left( D^k f_i \right) \cdots f_r. \end{aligned}$$

Let K be a field, and let \(D \in {{\mathrm{Der}}}_K\) be a derivation. Then, for any family \({\varvec{u}}= (u_I)_{I \subseteq [r]}\) of elements of K, we define the following deformed action of \(D^k\) on the cumulant:

$$\begin{aligned} {\widetilde{D^k}}\ \kappa _{[r]}({\varvec{u}}) := \sum _{\begin{array}{c} \pi \in {\mathcal {P}}([r]) \end{array} } \mu (\pi , \{[r]\})\ {\widetilde{D^k}}(u_B: B \in \pi ). \end{aligned}$$

The following lemma will be crucial to prove our main result.

Lemma 4.2

For any positive integers r, k, for any family \({\varvec{u}}= (u_I)_{I \subseteq [r]}\) of elements in a field K and for any derivation \(D \in {{\mathrm{Der}}}_K\), the following identity holds true:

$$\begin{aligned} {\widetilde{D^k}}\ \kappa _{[r]}({\varvec{u}}) = \sum _{\begin{array}{c} \pi \in {\mathcal {P}}([r]), \\ \#\pi \le k \end{array} }\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}_+^\pi , \\ |\alpha | = k \end{array} } \left( {\begin{array}{c}k\\ \alpha \end{array}}\right) \prod _{B \in \pi } \left( D^{\alpha (B)}\kappa _{B}({\varvec{u}})\right) . \end{aligned}$$
(15)

Here, \({\mathbb {N}}_+^\pi \) denotes the set of functions \(\alpha : \pi \rightarrow {\mathbb {N}}_+\), the symbol \(|\alpha |\) is defined as

$$\begin{aligned} |\alpha | := \sum _{B \in \pi }\alpha (B), \end{aligned}$$

and

$$\begin{aligned} \left( {\begin{array}{c}k\\ \alpha \end{array}}\right) := \frac{k!}{\prod _{B \in \pi }\alpha (B)!}. \end{aligned}$$

Proof

First of all, notice that for any elements \(f_1,\dots ,f_r \in K\) and for any positive integer k, the following generalized Leibniz rule holds true:

$$\begin{aligned} D^k(f_1\cdots f_r) = \sum _{\begin{array}{c} \alpha \in {\mathbb {N}}^{[r]}, \\ |\alpha | = k \end{array} } \left( {\begin{array}{c}k\\ \alpha \end{array}}\right) \left( D^{\alpha (1)}f_1\right) \cdots \left( D^{\alpha (r)}f_r\right) , \end{aligned}$$
(16)

which is easy to prove by induction (\(D^0 := {{\mathrm{Id}}}\) by convention).

Notice now that both sides of Eq. (15) are linear combinations of elements of the form

$$\begin{aligned} \prod _{B \in \pi }D^{\alpha (B)}u_B, \end{aligned}$$

where \(\pi \in {\mathcal {P}}([r])\) and \(\alpha \in {\mathbb {N}}^\pi \) is a composition of k. Let us write \({{\mathrm{RHS}}}\) for the right-hand side of Eq. (15), and \({{\mathrm{LHS}}}\) for its left-hand side. Let us fix a set partition \(\pi \in {\mathcal {P}}([r])\) and a composition \(\alpha \in {\mathbb {N}}^\pi \) of k. We would like to show that

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{LHS}}}= \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{RHS}}}. \end{aligned}$$

We define the support \({{\mathrm{supp}}}(\alpha )\) of \(\alpha \) in the standard way:

$$\begin{aligned} {{\mathrm{supp}}}(\alpha ) := \{B \in \pi : \alpha (B) \ne 0\}. \end{aligned}$$

Then, it is clear from the definition of \({\widetilde{D^k}}\ \kappa _{[r]}({\varvec{u}})\) that

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{LHS}}}= {\left\{ \begin{array}{ll}\mu (\pi ,\{[r]\}) &{}\text { if } \#{{\mathrm{supp}}}(\alpha ) = 1,\\ 0 &{}\text { otherwise. }\end{array}\right. } \end{aligned}$$
(17)

We now analyze the coefficient

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{RHS}}}. \end{aligned}$$

We can see that the nonzero contributions come from elements of the following form:

$$\begin{aligned} \prod _{B' \in \sigma } D^{\alpha '(B')}\kappa _{B'}({\varvec{u}}), \end{aligned}$$

where

$$\begin{aligned} \alpha '(B') := \sum _{\begin{array}{c} B \in \pi :\\ B \subset B' \end{array}}\alpha (B), \end{aligned}$$

\(\sigma \ge \pi \), and for each element \(B' \in \sigma \), there exists an element \(B \in {{\mathrm{supp}}}(\alpha )\) such that \(B\subset B'\). In other words, \(\sigma \) is a partition with the property that \(\sigma \ge \pi \) and

$$\begin{aligned} \sigma \vee \tau = \{[r]\}, \end{aligned}$$

where the partition \(\tau \) is constructed from \(\pi \) by merging all its blocks lying in the support of \(\alpha \), i.e.,

$$\begin{aligned} \tau := \left\{ \bigcup {{\mathrm{supp}}}(\alpha )\right\} \cup \left( \pi {\setminus }{{\mathrm{supp}}}(\alpha )\right) . \end{aligned}$$
(18)

Using the definition of cumulants, Eq. (13), together with Eq. (16), we can compute the coefficient

$$\begin{aligned}&\left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] \prod _{B' \in \sigma } D^{\alpha '(B')}\kappa _{B'}({\varvec{u}}) \\&\quad = \prod _{B' \in \sigma }\left( {\begin{array}{c}\alpha '(B')\\ \alpha (B): B \in \pi , B \subset B'\end{array}}\right) \mu \left( \{B \in \pi : B \subset B'\}, \{B'\}\right) . \end{aligned}$$

Plugging it into Eq. (15), we obtain that

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{RHS}}}= & {} \sum _{\begin{array}{c} \sigma \ge \pi ,\\ \sigma \vee \tau = \{[r]\} \end{array}}\left( {\begin{array}{c}k\\ \alpha '(B'): B' \in \sigma \end{array}}\right) \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] \prod _{B' \in \sigma } D^{\alpha '(B')}\kappa _{B'}({\varvec{u}})\\= & {} \left( {\begin{array}{c}k\\ \alpha \end{array}}\right) \sum _{\begin{array}{c} \sigma \ge \pi ,\\ \sigma \vee \tau = \{[r]\} \end{array}}\prod _{B' \in \sigma }\mu \left( \{B \in \pi : B \subset B'\}, \{B'\}\right) \\= & {} \left( {\begin{array}{c}k\\ \alpha \end{array}}\right) \sum _{\begin{array}{c} \sigma \ge \pi ,\\ \sigma \vee \tau = \{[r]\} \end{array}}\mu (\pi ,\sigma ), \end{aligned}$$

where \(\tau \) is the partition given by Eq. (18). Here, the last equality is a consequence of Eq. (2) for the Möbius function \(\mu (\pi ,\sigma )\). Now, notice that the partition \(\tau \) is constructed in such a way that \(\tau \ge \pi \), and this inequality is strict whenever \(\#{{\mathrm{supp}}}(\alpha )>1\). Thus, we can apply Proposition 2.1 to get

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{RHS}}}= {\left\{ \begin{array}{ll}\left( {\begin{array}{c}k\\ \alpha \end{array}}\right) \sum _{\begin{array}{c} \sigma \ge \pi ,\\ \sigma \vee \tau = \{[r]\} \end{array}}\mu (\pi ,\sigma ) &{}\text { if } \#{{\mathrm{supp}}}(\alpha ) = 1,\\ 0 &{}\text { otherwise. }\end{array}\right. } \end{aligned}$$

But if \(\#{{\mathrm{supp}}}(\alpha ) = 1\), then \(\left( {\begin{array}{c}k\\ \alpha \end{array}}\right) = 1\) and \(\tau = \pi \); thus \(\sigma \vee \tau = \sigma \) (since \(\sigma \ge \pi = \tau \)). So the condition \(\sigma \vee \tau = \{[r]\}\) implies that \(\sigma = \{[r]\}\), which gives

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{RHS}}}= {\left\{ \begin{array}{ll} \mu (\pi ,\{[r]\}) &{}\text { if } \#{{\mathrm{supp}}}(\alpha ) = 1,\\ 0 &{}\text { otherwise. }\end{array}\right. } \end{aligned}$$

Comparing it with Eq. (17), we can see that

$$\begin{aligned} \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{LHS}}}= \left[ \prod _{B \in \pi }D^{\alpha (B)}u_B\right] {{\mathrm{RHS}}}, \end{aligned}$$

which finishes the proof. \(\square \)
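The identity of Lemma 4.2 can be spot-checked by computer in the smallest nontrivial case \(r = k = 2\), say with \(D = d/dx\) acting on polynomials. There \({{\mathrm{LHS}}}= D^2 u_{12} - (D^2 u_1)u_2 - u_1 D^2 u_2\) and \({{\mathrm{RHS}}}= D^2\kappa _{12} + 2(D\kappa _1)(D\kappa _2)\) with \(\kappa _{12} = u_{12} - u_1 u_2\). In the sketch below (plain Python; polynomials are coefficient lists, lowest degree first, and all names are ours), the two sides agree.

```python
from fractions import Fraction

def pmul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def neg(a):
    return [-c for c in a]

def deriv(a, k=1):
    """k-th derivative d^k/dx^k of a coefficient list."""
    for _ in range(k):
        a = [i * c for i, c in enumerate(a)][1:] or [Fraction(0)]
    return a

def norm(a):
    """Strip trailing zero coefficients, for comparison."""
    while a and a[-1] == 0:
        a = a[:-1]
    return a

# an arbitrary family u_1, u_2, u_{12} of polynomials
u1, u2, u12 = [1, 2, 3], [0, 1, 1], [2, 0, 0, 5]

# LHS of Eq. (15): mu({{1,2}}, 1) = 1 and mu({{1},{2}}, 1) = -1
lhs = padd(deriv(u12, 2),
           neg(padd(pmul(deriv(u1, 2), u2), pmul(u1, deriv(u2, 2)))))

# RHS of Eq. (15): kappa_{12} = u_{12} - u_1 u_2; the multinomial (2 choose 1,1) = 2
k12 = padd(u12, neg(pmul(u1, u2)))
rhs = padd(deriv(k12, 2), [2 * c for c in pmul(deriv(u1), deriv(u2))])

assert norm(lhs) == norm(rhs)
```

The same helpers extend to larger r and k by enumerating set partitions and compositions \(\alpha \).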

4.3 A multiplicative criterion for small cumulants

Let R be a ring and q a formal parameter. We consider a family \({\varvec{u}}=(u_I)_{I\subseteq [r]}\) of elements of R(q) indexed by subsets of \([r]\). Throughout this section, we also assume that these elements are nonzero and \(u_\emptyset =1\).

In addition to partial cumulants, we also define the cumulative factorization error terms \(T_H({\varvec{u}})\) of the family \({\varvec{u}}\). The quantities \(T_H({\varvec{u}})_{H \subseteq [r],|H| \ge 2}\) are inductively defined as follows: For any subset G of \([r]\) of size at least 2,

$$\begin{aligned} u_G = \prod _{g \in G} u_{\{g\}} \cdot \mathop {\mathop {\prod }\limits _{H \subseteq G}}\limits _{|H| \ge 2} (1 + T_H({\varvec{u}})). \end{aligned}$$

Using the inclusion–exclusion principle, a direct equivalent definition is the following: For any subset H of \([r]\) of size at least 2, set

$$\begin{aligned} T_H({\varvec{u}}) = \left( \prod _{G \subseteq H} u_G^{(-1)^{|H|-|G|}}\right) - 1. \end{aligned}$$
(19)
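The equivalence of the two definitions can be sanity-checked numerically. The sketch below (plain Python; names ours) computes \(T_H({\varvec{u}})\) from the closed formula (19) for an arbitrary nonzero family over \({\mathbb {Q}}\) with \(u_\emptyset = 1\), and verifies the defining product relation for every subset G of size at least 2.

```python
from fractions import Fraction
from itertools import combinations

def subsets(S, minsize=0):
    S = sorted(S)
    for k in range(minsize, len(S) + 1):
        yield from combinations(S, k)

r = 3
# an arbitrary nonzero family (u_I); note u_() = 1 automatically
u = {S: Fraction(1 + sum(S), 1 + len(S)) for S in subsets(range(r))}

def T(H):
    """Cumulative factorization error term, via the closed formula (19)."""
    prod = Fraction(1)
    for G in subsets(H):
        prod *= u[G] ** ((-1) ** (len(H) - len(G)))
    return prod - 1

# the inductive definition: u_G = (product of singletons) * product of (1 + T_H)
for G in subsets(range(r), 2):
    val = Fraction(1)
    for g in G:
        val *= u[(g,)]
    for H in subsets(G, 2):
        val *= 1 + T(H)
    assert val == u[G]
```

For \(|H| = 2\) the formula reduces to \(T_{\{g,h\}}({\varvec{u}}) = u_{\{g,h\}}/(u_{\{g\}}u_{\{h\}}) - 1\), the usual multiplicative error of a pair.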

Féray [5] proved the following statement (using a different framework); it was reproved in our recent joint paper with Féray [2, Proposition 2.3] using the framework of the current paper:

Proposition 4.3

The following statements are equivalent:

  I.

    Strong factorization property when \(q \rightarrow r\): for any subset \(H \subseteq [r]\) of size at least 2, one has

    $$\begin{aligned} T_{H}({\varvec{u}}) = O_r\left( (q-r)^{|H|-1}\right) . \end{aligned}$$
  II.

    Small cumulant property when \(q \rightarrow r\): for any subset \(H \subseteq [r]\) of size at least 2, one has

    $$\begin{aligned} \kappa _H({\varvec{u}}) = \left( \prod _{h \in H} u_h \right) O_r\left( (q-r)^{|H|-1}\right) . \end{aligned}$$

Remark

In fact, the above proposition was proved in the case \(r=0\), but it is enough to shift the indeterminate \(q \mapsto q-r\) to obtain the general result.

A first consequence of this multiplicative criterion for small cumulants is the following stability result.

Corollary 4.4

Consider two families \((u_I)_{I \subseteq [r]}\) and \((v_I)_{I \subseteq [r]}\) with the small cumulant property when \(q \rightarrow r\). Then, their entry-wise product \((u_I v_I)_{I \subseteq [r]}\) and quotient \((u_I/v_I)_{I \subseteq [r]}\) also have the small cumulant property when \(q \rightarrow r\).

Proof

This is immediate for the strong factorization property, which is equivalent to the small cumulant property by Proposition 4.3. \(\square \)

Here is another consequence:

Corollary 4.5

Theorem 1.2 is equivalent to Theorem 1.3.

Proof

Let us fix a positive integer r and partitions \(\lambda ^1,\dots ,\lambda ^r\). For any subset \(I \subseteq [r]\), define \(u_I := J^{(q,t)}_{\lambda ^I}\). Then, [22, Chapter VI, Remark (8.4)-(iii)] states that the Macdonald polynomial \(J^{(q,t)}_{\lambda ^I}\) has a nonzero limit as \(q \rightarrow 1\); thus \(u_I, \left( u_I\right) ^{-1} = O_1(1)\), and the statement is an immediate consequence of Proposition 4.3. \(\square \)

4.4 Hook cumulants

We use the multiplicative criterion above to prove that families constructed from the hook polynomial defined by Eq. (3) have the small cumulant property when \(q \rightarrow 1\). This result is an important ingredient in the proof of the main result.

Lemma 4.6

Fix a positive integer r and a subset K of [r]. Let \(c \in {\mathbb {N}}\), let \((c_i)_{i \in K}\) be a family of nonnegative integers, and let \(C \in R\) with \(C \ne 1\). For a subset I of K, we define

$$\begin{aligned} v_I=1-C\cdot q^{c + \sum _{i \in I}c_i}. \end{aligned}$$

Then we have, for any subset H of K,

$$\begin{aligned} T_H({\varvec{v}}) = O_1\left( (q-1)^{|H|}\right) . \end{aligned}$$

Proof

It is enough to prove the statement for \(H=K\). Indeed, the case of a general set H follows by considering the same family restricted to subsets of H.

Define \(R_\text {ev}\) (resp. \(R_\text {odd}\)) as

$$\begin{aligned} \prod _\delta \left( 1-C \cdot q^{c + \sum _{i \in \delta }c_i} \right) , \end{aligned}$$

where the product runs over subsets of K of even (resp. odd) size. Without loss of generality, we can assume that |K| is even (the case when |K| is odd is analogous). With this notation, \(T_K({\varvec{v}})=R_\text {ev}/R_\text {odd}-1=(R_\text {ev}-R_\text {odd})/R_\text {odd}\). Since \(R_\text {odd}^{-1}=O_1(1)\) (each term in the product is \(O_1(1)\), as well as its inverse), it is enough to show that \(R_\text {ev}-R_\text {odd}=O_1\left( (q-1)^{|K|}\right) \).

It is clear that

$$\begin{aligned} R_\text {ev}\big |_{q=1} = R_\text {odd}\big |_{q=1} = (1-C)^{2^{|K|-1}}. \end{aligned}$$

Let us fix a positive integer \(l < |K|\). Expanding the product in the definition of \(R_\text {ev}\) in the basis \(\{(q-1)^j\}_{j \ge 0}\), and using the binomial formula, one gets

$$\begin{aligned} \left[ (q-1)^l\right] R_\text {ev}= \sum _{1 \le i \le l}(1-C)^{2^{|K|-1}-i}C^i \ \frac{1}{i!} \sum _{\delta _1,\dots ,\delta _i} \ \ \sum _{\begin{array}{c} j_1+\cdots + j_i = l \\ j_1,\dots ,j_i \ge 1 \end{array}} \prod _{1 \le m \le i}\left( {\begin{array}{c}|\delta _m|_c\\ j_m\end{array}}\right) . \end{aligned}$$

The second summation runs over lists \(\delta _1,\dots ,\delta _i\) of i distinct (but not necessarily disjoint) subsets of K of even size, and

$$\begin{aligned} |\delta |_c := c + \sum _{i \in \delta }c_i. \end{aligned}$$

The factor \(\frac{1}{i!}\) in the above formula comes from the fact that we should sum over sets of i distinct subsets of K rather than over lists; summing over lists of i distinct subsets of K and dividing by the number \(i!\) of permutations of [i] gives the same result. From this formula, it is clear that \([(q-1)^l]R_\text {ev}\) is a symmetric polynomial in \((c_i)_{i \in K}\) of degree at most l. Of course, a similar formula with subsets of odd size holds for \([(q-1)^l]R_\text {odd}\), which shows that it is a symmetric polynomial in \((c_i)_{i \in K}\) of degree at most l as well. For any positive integers n, k, we define the set \({\mathbb {Y}}(n,k)\) of non-increasing sequences of n nonnegative integers of the following form:

$$\begin{aligned} {\mathbb {Y}}(n,k) = \{(\lambda ,0^{n-\ell (\lambda )}):\lambda \in {\mathbb {Y}}_k,\ \ell (\lambda ) \le n\}. \end{aligned}$$

It is well known (see for example [16, Theorem 2.1]) that if fg are two symmetric polynomials of degree at most k in n indeterminates, then

$$\begin{aligned} f = g \iff \forall {\varvec{x}} \in {\mathbb {Y}}(n,k)\ \ f({\varvec{x}}) = g({\varvec{x}}). \end{aligned}$$

Thus, in order to show that \([(q-1)^l]R_\text {ev}= [(q-1)^l]R_\text {odd}\), it is enough to show that this equality holds for all \((c_i)_{i \in K} \in {\mathbb {Y}}(|K|,l)\). Note that since \(l < |K|\), we necessarily have \(c_k = 0\), where k denotes the largest element of K. It means that the function

$$\begin{aligned} f: (K)_{\text {ev}}:=\{\delta \subset K: \delta \text { has even size }\} \rightarrow (K)_{\text {odd}}:=\{\delta \subset K: \delta \text { has odd size }\} \end{aligned}$$

given by \(f(\delta ) := \delta \nabla \{k\}\), where \(\nabla \) denotes the symmetric difference operator, is a bijection which preserves the statistic \(|\cdot |_c\), that is, \(|\delta |_c = |f(\delta )|_c\).

Thus, one has

$$\begin{aligned} \left[ (q-1)^l\right] R_\text {ev}= & {} \sum _{1 \le i \le l}(1-C)^{2^{|K|-1}-i}C^i \ \frac{1}{i!} \sum _{\delta _1,\dots ,\delta _i \in (K)_{\text {ev}}} \ \ \sum _{\begin{array}{c} j_1+\cdots + j_i = l \\ j_1,\dots ,j_i \ge 1 \end{array}} \prod _{1 \le m \le i}\left( {\begin{array}{c}|\delta _m|_c\\ j_m\end{array}}\right) \\= & {} \sum _{1 \le i \le l}(1-C)^{2^{|K|-1}-i}C^i \ \frac{1}{i!} \sum _{\delta _1,\dots ,\delta _i \in (K)_{\text {ev}}} \ \ \sum _{\begin{array}{c} j_1+\cdots + j_i = l \\ j_1,\dots ,j_i \ge 1 \end{array}} \prod _{1 \le m \le i}\left( {\begin{array}{c}|f(\delta _m)|_c\\ j_m\end{array}}\right) \\= & {} \sum _{1 \le i \le l}(1-C)^{2^{|K|-1}-i}C^i \ \frac{1}{i!} \sum _{\delta _1,\dots ,\delta _i \in (K)_{\text {odd}}} \ \ \sum _{\begin{array}{c} j_1+\cdots + j_i = l \\ j_1,\dots ,j_i \ge 1 \end{array}} \prod _{1 \le m \le i}\left( {\begin{array}{c}|\delta _m|_c\\ j_m\end{array}}\right) \\= & {} \left[ (q-1)^l\right] R_\text {odd}. \end{aligned}$$

Since \(l < |K|\) was an arbitrary positive integer, we have shown that

$$\begin{aligned} R_\text {ev}-R_\text {odd}=O_1\left( (q-1)^{|K|}\right) , \end{aligned}$$

which finishes the proof. \(\square \)
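The key estimate \(R_\text {ev}-R_\text {odd}=O_1\left( (q-1)^{|K|}\right) \) can also be verified by machine for concrete parameter values. The sketch below (plain Python; all names and the parameter choices, e.g. \(C = 2/5\), are ours) expands both products as truncated series in \(\varepsilon = q-1\) and checks that the first |K| coefficients of the difference vanish.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def pmul(a, b):
    """Multiply truncated polynomials in eps = q - 1 (coefficient lists)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def factor(e, C, deg):
    """1 - C*q**e with q = 1 + eps, truncated at eps**deg."""
    return [Fraction(1) - C] + [-C * comb(e, m) for m in range(1, deg + 1)]

K = [0, 1, 2, 3]
c, cs = 2, {0: 1, 1: 3, 2: 0, 3: 2}       # arbitrary nonnegative parameters
C = Fraction(2, 5)                        # any C != 1
deg = len(K)

R = {0: [Fraction(1)], 1: [Fraction(1)]}  # parity of |delta| -> truncated product
for k in range(len(K) + 1):
    for delta in combinations(K, k):
        e = c + sum(cs[i] for i in delta)
        R[k % 2] = pmul(R[k % 2], factor(e, C, deg))[: deg + 1]

diff = [a - b for a, b in zip(R[0], R[1])]
assert all(x == 0 for x in diff[: len(K)])  # R_ev - R_odd = O((q-1)^{|K|})
```

At \(\varepsilon = 0\) both products equal \((1-C)^{2^{|K|-1}}\), since there are \(2^{|K|-1}\) subsets of each parity.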

Proposition 4.7

Fix some partitions \(\lambda ^1, \dots , \lambda ^r\) and for a subset I of [r] set \(u_I=h_{(q,t)}\left( \lambda ^I\right) \). The family \((u_I)\) has the strong factorization and hence the small cumulant properties when \(q \rightarrow 1\).

Proof

Fix some subset \(I=\{i_1,\dots ,i_t\}\) of [r] with \(i_1<\cdots <i_t\). Observe that the Young diagram \(\lambda ^I\) can be constructed by sorting the columns of the diagrams \(\lambda ^{i_1}\), ..., \(\lambda ^{i_t}\) in decreasing order. When several columns have the same length, we put first the columns of \(\lambda ^{i_1}\), then those of \(\lambda ^{i_2}\) and so on; see Fig. 2 (at the moment, please disregard symbols in boxes). This gives a way to identify boxes of \(\lambda ^I\) with boxes of the diagrams \(\lambda ^{i_s}\) (\(1 \le s \le t\)) that we shall use below.

With this identification, if \(b=(c,r)\) is a box in \(\lambda ^{g}\) for some \(g \in I\), its leg-length in \(\lambda ^I\) is the same as in \(\lambda ^{g}\). We denote it by \(\ell (b)\).

However, the arm-length of b in \(\lambda ^I\) may be bigger than the one in \(\lambda ^{g}\). We denote these two quantities by \(a_I(b)\) and \(a_{g}(b)\). Let us also define \(a_i(b)\) for \(i \ne g\) in I, as follows:

  • for \(i<g\), \(a_i(b)\) is the number of boxes \(b'\) in the r-th row of \(\lambda ^i\) such that the size of the column of \(b'\) is smaller than the size of the column of b (e.g., in Fig. 2, for \(i=1\), these are boxes with a diamond);

  • for \(i>g\), \(a_i(b)\) is the number of boxes \(b'\) in the r-th row of \(\lambda ^i\) such that the size of the column of \(b'\) is at most the size of the column of b (e.g., in Fig. 2, for \(i=3\), these are boxes with an asterisk).

Looking at Fig. 2, it is easy to see that

$$\begin{aligned} a_I(b)= \sum _{i \in I} a_i(b). \end{aligned}$$
(20)
Fig. 2

Diagram of an entry-wise sum of partitions

Therefore, for \(G \subseteq [r]\), one has:

$$\begin{aligned} u_G=h_{(q,t)}\left( \bigoplus _{g \in G} \lambda ^g \right) = \prod _{g \in G} \prod _{b \in \lambda ^g} \left( 1-q^{a_G(b)}t^{\ell (b)+1}\right) . \end{aligned}$$

From the definition of \(T_{[r]}({\varvec{u}})\), given by Eq. (19), we get:

$$\begin{aligned} 1 + T_{[r]}({\varvec{u}})= & {} \prod _{G \subseteq [r]} \left( \prod _{g \in G} \prod _{b \in \lambda ^g} \left( 1-q^{a_G(b)}t^{\ell (b)+1}\right) \right) ^{(-1)^{r-|G|}} \nonumber \\= & {} \prod _{g \in [r]} \prod _{b \in \lambda ^g} \left( \mathop {\mathop {\prod }\limits _{G \subseteq [r]}}\limits _{G \ni g} \left( 1-q^{a_G(b)}t^{\ell (b)+1}\right) ^{(-1)^{r-|G|}} \right) . \end{aligned}$$
(21)

The expression inside the bracket corresponds to \(1+T_{[r] {\setminus } \{g\}}({\varvec{v}}^b)\), where \({\varvec{v}}^b\) is defined as follows: If I is a subset of \([r] {\setminus } \{g\}\), then

$$\begin{aligned} v^b_I = \left( 1-q^{a_{I\cup \{g\}}(b)}t^{\ell (b)+1}\right) . \end{aligned}$$

Plugging Eq. (20) into the definition of \(v^b_I\), we observe that \(v^b_I\) is as in Lemma 4.6 with the following values of the parameters: \(K=[r] {\setminus } \{g\}\), \(C=t^{\ell (b)+1}\), \(c = a_g(b)\), and \(c_i = a_i(b)\) for \(i \ne g\). Therefore, we conclude that

$$\begin{aligned} T_{[r] {\setminus } \{g\}}({\varvec{v}}^b)= O_1\left( (q-1)^{r-1}\right) . \end{aligned}$$

Going back to Eq. (21), we have:

$$\begin{aligned} 1 + T_{[r]}({\varvec{u}}) =\prod _{g \in [r]} \prod _{b \in \lambda ^g} \left( 1 + T_{[r] {\setminus } \{g\}}({\varvec{v}}^b)\right) = 1+O_1\left( (q-1)^{r-1}\right) , \end{aligned}$$

which completes the proof. \(\square \)
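The additivity (20) of arm-lengths that drives this proof can be spot-checked by computer. In the sketch below (plain Python; all names and the column-height encoding are ours), partitions are given by row lengths, the columns of \(\lambda ^I\) are sorted by decreasing height with ties broken by the order of the summands (as in Fig. 2), and the arm-length of every box computed directly in the merged diagram is compared with the sum \(\sum _i a_i(b)\).

```python
def cols(lam):
    """Column heights (the conjugate) of a partition given by row lengths."""
    width = lam[0] if lam else 0
    return [sum(1 for row in lam if row >= j) for j in range(1, width + 1)]

parts = [(3, 1), (2, 2, 1), (4,)]        # lambda^1, lambda^2, lambda^3

# columns of lambda^I sorted by decreasing height; Python's sort is stable,
# so equal heights keep the order lambda^1, lambda^2, ... as in the text
merged = sorted(((h, s, j) for s, lam in enumerate(parts)
                 for j, h in enumerate(cols(lam))), key=lambda t: -t[0])

for g, lam in enumerate(parts):
    heights = cols(lam)
    for j, h in enumerate(heights):
        for rho in range(1, h + 1):      # box b in row rho of column j of lambda^g
            p = merged.index((h, g, j))
            a_I = sum(1 for hh, _, _ in merged[p + 1:] if hh >= rho)
            a_g = sum(1 for hh in heights[j + 1:] if hh >= rho)
            a_rest = sum(1 for i, other in enumerate(parts) if i != g
                         for hh in cols(other)
                         if rho <= hh and (hh < h if i < g else hh <= h))
            assert a_I == a_g + a_rest   # Eq. (20)
```

The two counts in `a_rest` mirror the two bullet points above: strict inequality of column heights for summands before \(\lambda ^g\), weak inequality for those after.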

We finish this section by presenting an important corollary of the above result.

Proposition 4.8

For any partitions \(\lambda ^1, \dots , \lambda ^r\), the cumulant \(\kappa ^{\mathcal {J}}(\lambda ^1,\dots ,\lambda ^r)\) has a monomial expansion of the following form

$$\begin{aligned} \kappa ^{\mathcal {J}}(\lambda ^1,\dots ,\lambda ^r) = \sum _{\mu \prec \lambda ^{[r]}} c^{\lambda ^1,\dots ,\lambda ^r}_\mu m_\mu + O_1\left( (q-1)^{r-1}\right) , \end{aligned}$$

where

$$\begin{aligned} c^{\lambda ^1,\dots ,\lambda ^r}_\mu \in {\left\{ \begin{array}{ll} {\mathbb {Z}}[q,t] &{}\text { for } |\mu | = |\lambda ^{[r]}|,\\ {\mathbb {Z}}[q,t^{-1},t] &{}\text { for } |\mu | < |\lambda ^{[r]}|. \end{array}\right. } \end{aligned}$$

Proof

First, observe that for any partitions \(\nu ^1\) and \(\nu ^2\), one has

$$\begin{aligned} m_{\nu ^1}m_{\nu ^2} = m_{\nu ^1 \oplus \nu ^2} + \sum _{\mu < \nu ^1 \oplus \nu ^2}b^{\nu ^1,\nu ^2}_\mu m_\mu , \end{aligned}$$

for some integers \(b^{\nu ^1,\nu ^2}_\mu \).

Fix partitions \(\lambda ^1, \dots , \lambda ^r\) and a set partition \(\pi =\{\pi _1,\ldots ,\pi _s\} \in {\mathcal {P}}([r])\). Note that \(\lambda ^{\pi _1} \oplus \cdots \oplus \lambda ^{\pi _s}=\lambda ^{[r]}\). Thanks to Proposition 2.2 and the above observation on products of monomials, there exist some coefficients

$$\begin{aligned} d^{\lambda ^{\pi _1},\ldots ,\lambda ^{\pi _s}}_\mu \in {\left\{ \begin{array}{ll} {\mathbb {Z}}[q,t] &{}\text { for } |\mu | = |\lambda ^{[r]}|,\\ {\mathbb {Z}}[q,t^{-1},t] &{}\text { for } |\mu | < |\lambda ^{[r]}|\end{array}\right. } \end{aligned}$$

such that:

$$\begin{aligned} {\mathcal {J}}_{\lambda ^{\pi _1}}\cdots {\mathcal {J}}_{\lambda ^{\pi _s}} = h_{(q,t)}(\lambda ^{\pi _1})\cdots h_{(q,t)}(\lambda ^{\pi _s}) m_{\lambda ^{[r]}} + \sum _{\mu \prec \lambda ^{[r]}}d^{\lambda ^{\pi _1},\ldots ,\lambda ^{\pi _s}}_\mu m_\mu . \end{aligned}$$

As a consequence, there exist some coefficients

$$\begin{aligned} c^{\lambda ^1,\dots ,\lambda ^r}_\mu \in {\left\{ \begin{array}{ll} {\mathbb {Z}}[q,t] &{}\text { for } |\mu | = |\lambda ^{[r]}|,\\ {\mathbb {Z}}[q,t^{-1},t] &{}\text { for } |\mu | < |\lambda ^{[r]}|. \end{array}\right. } \end{aligned}$$

such that

$$\begin{aligned} \kappa ^{\mathcal {J}}(\lambda ^1,\dots ,\lambda ^r) = \kappa _{[r]}({\varvec{v}})m_{\lambda ^{[r]}} + \sum _{\mu < \lambda ^{[r]}} c^{\lambda ^1,\dots ,\lambda ^r}_\mu m_\mu , \end{aligned}$$

where \(v_I=h_{(q,t)}\left( \lambda ^{I} \right) \). Proposition 4.7 completes the proof. \(\square \)

5 Differential operator and cumulant of interpolation Macdonald polynomials

Let us fix partitions \(\lambda ^1,\dots ,\lambda ^r\), and for any subset \(I \subseteq [r]\), we define \(u_I := {\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^I}\). The purpose of this section is an analysis of the action of the differential operator D, defined in Eq. (5), on the cumulant \(\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r) = \kappa _{[r]}({\varvec{u}})\) with parameters \(q^{-1}\) and \(t^{-1}\). In particular, this analysis leads to the proofs of two crucial lemmas used in the proof of Theorem 1.4.

5.1 Analysis of the decomposition

For any positive integer r and for any partitions \(\lambda ^1, \dots , \lambda ^r\), we define

$$\begin{aligned} {{\mathrm{InEx}}}_j\left( \lambda ^1,\dots ,\lambda ^r\right) := \sum _{I \subseteq [r]}(-1)^{r-|I|}b^N_j\left( \lambda ^I\right) , \end{aligned}$$
(22)

where \(b^N_j\left( \lambda ^I\right) \) is given by Eq. (4).

Proposition 5.1

Let \(r > j \ge 1\) be positive integers. Then, for any partitions \(\lambda ^1, \dots , \lambda ^r\) one has:

$$\begin{aligned} {{\mathrm{InEx}}}_j\left( \lambda ^1,\dots , \lambda ^r\right) = 0. \end{aligned}$$

Proof

Expanding the definition and completing partitions with zeros, we have:

$$\begin{aligned} {{\mathrm{InEx}}}_j(\lambda ^1,\dots ,\lambda ^r) = \sum _{1 \le i \le N} \sum _{I \subseteq [r]}(-1)^{r-|I|} \left( {\begin{array}{c}\lambda ^I_i\\ j\end{array}}\right) t^{N-i}. \end{aligned}$$

In particular, we have to prove that the summand corresponding to any given \(1 \le i \le N\) is equal to 0. In other words, we have to show that

$$\begin{aligned} \sum _{I \subseteq [r]}(-1)^{r-|I|} \left( {\begin{array}{c}{\varvec{x}}_I\\ j\end{array}}\right) = 0, \end{aligned}$$

where \({\varvec{x}} = (x_1,\dots ,x_r)\), and \({\varvec{x}}_I := \sum _{i \in I}x_i\).

Note that it is a symmetric polynomial in \({\varvec{x}}\) of degree at most j without constant term; thus, it is enough to show that the coefficient of \({\varvec{x}}^\mu := x_1^{\mu _1}\cdots x_r^{\mu _r}\) is equal to zero for all non-empty partitions \(\mu \) of size at most j. This coefficient is given by:

$$\begin{aligned} \left( {\begin{array}{c}|\mu |\\ \mu _1,\dots ,\mu _r\end{array}}\right) \frac{s(j,|\mu |)}{j!}\sum _{[\ell (\mu )] \subseteq I \subseteq [r]}(-1)^{r-|I|}, \end{aligned}$$

where \(s(j,k)\) is the Stirling number of the first kind, i.e.,

$$\begin{aligned} (x)_j := x(x-1)\cdots (x-j+1) = \sum _{0 \le k \le j}s(j,k)x^{k}. \end{aligned}$$
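As a small illustration (plain Python; names ours), the signed Stirling numbers can be read off by expanding the falling factorial; for \(j=4\) one gets \((x)_4 = x^4 - 6x^3 + 11x^2 - 6x\).

```python
def pmul(a, b):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

j = 4
falling = [1]                         # coefficients of (x)_j
for m in range(j):
    falling = pmul(falling, [-m, 1])  # multiply by (x - m)

# falling[k] = s(j, k)
assert falling == [0, -6, 11, -6, 1]
```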

Since \(\ell (\mu ) \le |\mu | \le j < r\), we have that

$$\begin{aligned} \sum _{[\ell (\mu )] \subseteq I \subseteq [r]}(-1)^{r-|I|} = 0, \end{aligned}$$

which finishes the proof. \(\square \)
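Since the vanishing above is a polynomial identity, it can be spot-checked at arbitrary integer points. The sketch below (plain Python; names and the sample values are ours) evaluates \(\sum _{I \subseteq [r]}(-1)^{r-|I|}\left( {\begin{array}{c}{\varvec{x}}_I\\ j\end{array}}\right) \) for \(r=4\) and every \(1 \le j < r\).

```python
from fractions import Fraction
from itertools import combinations
from math import factorial

def gbinom(x, j):
    """Generalized binomial coefficient x(x-1)...(x-j+1)/j!."""
    num = 1
    for m in range(j):
        num *= x - m
    return Fraction(num, factorial(j))

r = 4
xs = [5, 2, 7, 3]            # arbitrary integer values of x_1, ..., x_r
for j in range(1, r):        # the identity requires j < r
    total = sum((-1) ** (r - len(I)) * gbinom(sum(xs[i] for i in I), j)
                for k in range(r + 1) for I in combinations(range(r), k))
    assert total == 0
```

For \(j \ge r\) the alternating sum is generally nonzero, so the restriction \(j < r\) in Proposition 5.1 is essential.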

We recall that

$$\begin{aligned} {\widetilde{D}}\kappa _{[r]}({\varvec{u}}) = {\widetilde{D}}\kappa ^{\mathcal {J}}\left( \lambda ^1,\dots ,\lambda ^r\right) = \sum _{\pi \in {\mathcal {P}}([r])}\mu \big (\pi ,\{[r]\}\big ){\widetilde{D}}\left( {\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^B} : B \in \pi \right) , \end{aligned}$$

where

$$\begin{aligned} {\widetilde{D}}\left( {\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^B} : B \in \pi \right) = \sum _{B \in \pi }\left( D {\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^B}\ \cdot \prod _{B' \in \pi {\setminus }\{B\}}{\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^{B'}}\right) . \end{aligned}$$

Lemma 5.2

For any positive integer \(r \ge 2\) and any partitions \(\lambda ^1, \dots , \lambda ^r\), the following equality holds true:

$$\begin{aligned} {\widetilde{D}}\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r) = \sum _{j \ge 1}(q-1)^j\sum _{\begin{array}{c} \sigma \in {\mathcal {P}}([r])\\ \#\sigma \le j \end{array}} {{\mathrm{InEx}}}_j\left( \lambda ^B: B \in \sigma \right) \left( \prod _{B \in \sigma }\kappa ^{{\mathcal {J}}}(\lambda ^i: i \in B)\right) . \end{aligned}$$

Proof

Note that directly from the definition of interpolation Macdonald polynomials given by Proposition 2.2, we know that for any set partition \(\pi \in {\mathcal {P}}([r])\) the following identity holds:

$$\begin{aligned} \sum _{B \in \pi }\left( D {\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^B}\ \cdot \prod _{B' \in \pi {\setminus }\{B\}}{\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^{B'}}\right)= & {} \left( \sum _{B \in \pi }\text {ev}\left( {\lambda ^B}\right) \right) \prod _{B \in \pi }{\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^{B}}\\= & {} \sum _{j \ge 1}(q-1)^j\left( \sum _{B \in \pi }b^N_j\left( \lambda ^{B}\right) \right) \prod _{B \in \pi }{\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^{B}}. \end{aligned}$$

If we substitute it into the definition of \({\widetilde{D}}\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)\), we have that

$$\begin{aligned} {\widetilde{D}}\kappa ^{{\mathcal {J}}}(\lambda ^1,\dots ,\lambda ^r)&= \sum _{j \ge 1}(q-1)^j \sum _{\pi \in {\mathcal {P}}([r])} \left( \mu (\pi ,\{[r]\}) \left( \sum _{B \in \pi }b^N_j(\lambda ^B) \right) \prod _{B \in \pi }{\mathcal {J}}^{(q^{-1},t^{-1})}_{\lambda ^B} \right) \\&=\sum _{j \ge 1}(q-1)^j \sum _{\pi \in {\mathcal {P}}([r])} \left( \mu (\pi ,\{[r]\}) \left( \sum _{B \in \pi }b^N_j(\lambda ^B) \right) \prod _{B \in \pi }u_B \right) . \end{aligned}$$

Thanks to Eq. (14), we can replace each occurrence of \(u_B\) in the above equation by \(\sum _{\begin{array}{c} \pi \in {\mathcal {P}}(B) \end{array} } \prod _{B' \in \pi } \kappa _{B'}({\varvec{u}})\) to obtain the following identity:

$$\begin{aligned} {\widetilde{D}}\kappa _{[r]}({\varvec{u}})= & {} \sum _{j \ge 1}(q-1)^j\\&\cdot \sum _{\sigma \in {\mathcal {P}}([r])}\left( \sum _{\pi \in {\mathcal {P}}([r]); \pi \ge \sigma } \mu (\pi ,\{[r]\}) \left( \sum _{B \in \pi }b^N_j(\lambda ^B)\right) \right) \prod _{B \in \sigma }\kappa _B({\varvec{u}}). \end{aligned}$$
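The substitution above relies on the moment-cumulant relation behind Eq. (14): the quantities \(u_B\) are recovered from the cumulants \(\kappa _B({\varvec{u}})\) by summing products over set partitions. A stdlib-only sanity check on a hypothetical small instance (the values \(u_B\) below are arbitrary random numbers, and \(\kappa\) is defined by Möbius inversion on the partition lattice, with \(\mu (\pi ,\hat{1}) = (-1)^{\#\pi -1}(\#\pi -1)!\)):

```python
# Check: with kappa_B := sum over pi in P(B) of
#   (-1)^(#pi-1) (#pi-1)! * prod over blocks B' of u_{B'},
# one recovers u_B = sum over pi in P(B) of prod over blocks of kappa_{B'}.
from itertools import combinations
from math import factorial, prod
import random

def set_partitions(elems):
    """Yield all set partitions of a collection as lists of blocks."""
    elems = list(elems)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

random.seed(0)
r = 4
subsets = [frozenset(c) for k in range(1, r + 1)
           for c in combinations(range(1, r + 1), k)]
u = {B: random.uniform(1.0, 2.0) for B in subsets}

# Cumulants via Moebius inversion on the partition lattice.
kappa = {B: sum((-1) ** (len(p) - 1) * factorial(len(p) - 1)
                * prod(u[frozenset(Bp)] for Bp in p)
                for p in set_partitions(B))
         for B in subsets}

# Moment-cumulant relation: u_B is the sum over partitions of products of kappas.
for B in subsets:
    rhs = sum(prod(kappa[frozenset(Bp)] for Bp in p) for p in set_partitions(B))
    assert abs(u[B] - rhs) < 1e-9
```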

Fix a set partition \(\sigma \in {\mathcal {P}}([r])\). We claim that the bracketed expression in the above equation is given by the following formula:

$$\begin{aligned} \sum _{\pi \in {\mathcal {P}}([r]); \pi \ge \sigma } \mu (\pi ,\{[r]\}) \left( \sum _{B \in \pi }b^N_j(\lambda ^B)\right) = {{\mathrm{InEx}}}_j\left( \lambda ^B: B \in \sigma \right) , \end{aligned}$$
(23)

which finishes the proof, since by Proposition 5.1 the right-hand side of Eq. (23) vanishes for all set partitions \(\sigma \) with \(\#\sigma > j\).

Let us order the blocks of \(\sigma \) in some way, say \(\sigma = \{B_1,\dots ,B_{\#\sigma }\}\). The set partitions \(\pi \) coarser than \(\sigma \) are in bijection with set partitions of the blocks of \(\sigma \), that is, with set partitions of \([\#\sigma ]\). Therefore, the left-hand side of Eq. (23) can be rewritten as:

$$\begin{aligned}&\sum _{\pi \in {\mathcal {P}}([r]); \pi \ge \sigma } \mu (\pi ,\{[r]\}) \left( \sum _{B \in \pi }b^N_j(\lambda ^B)\right) \\&\quad =\sum _{\rho \in {\mathcal {P}}([\#\sigma ])} \mu (\rho ,\{[\#\sigma ]\}) \left( \sum _{C \in \rho }b^N_j\left( \bigoplus _{i \in C} \lambda ^{B_i} \right) \right) . \end{aligned}$$

Fix a subset \(C\) of \([\#\sigma ]\). The coefficient of \(b^N_j\left( \bigoplus _{i \in C} \lambda ^{B_i} \right) \) in the above sum is equal to

$$\begin{aligned} a_C:=\sum _{\begin{array}{c} \rho \in {\mathcal {P}}([\#\sigma ])\\ C \in \rho \end{array}} \mu (\rho ,\{[\#\sigma ]\}). \end{aligned}$$

The set partitions \(\rho \) of \([\#\sigma ]\) that contain \(C\) as a block can be written uniquely as \(\{C\} \cup \rho '\), where \(\rho '\) is a set partition of \([\#\sigma ]{\setminus } C\). Thus,

$$\begin{aligned} a_C= & {} \sum _{\rho ' \in {\mathcal {P}}([\#\sigma ]{\setminus } C)} \mu \big (\{C\} \cup \rho ', \{[\#\sigma ]\}\big ) \\= & {} \sum _{0 \le i \le \#\sigma -|C|} S(\#\sigma -|C|,i)\, i!\, (-1)^i = (-1)^{\#\sigma -|C|}, \end{aligned}$$

where \(S(n,k)\) is the Stirling number of the second kind and the last equality comes from the relation

$$\begin{aligned} \sum _{0 \le k \le n}S(n,k)(x)_k = x^n \end{aligned}$$

evaluated at \(x=-1\) (here, \((x)_k := x(x - 1)\cdots (x-k+1)\) denotes the falling factorial). This finishes the proof of Eq. (23) and also completes the proof of the lemma. \(\square \)
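The Stirling-number evaluation above is easy to confirm numerically. A minimal sketch (stdlib only; the recursive implementation of \(S(n,k)\) is our own) checking \(\sum _k S(n,k)\, k!\, (-1)^k = (-1)^n\), which is the identity \(\sum _k S(n,k)(x)_k = x^n\) at \(x = -1\):

```python
# Verify sum_k S(n,k) * k! * (-1)^k == (-1)^n for small n,
# using the standard recurrence S(n,k) = k*S(n-1,k) + S(n-1,k-1).
from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind S(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

for n in range(8):
    assert sum(stirling2(n, k) * factorial(k) * (-1) ** k
               for k in range(n + 1)) == (-1) ** n
```

The sign \((-1)^k k!\) is exactly the falling factorial \((-1)_k = (-1)(-2)\cdots (-k)\), matching the substitution \(x = -1\) in the displayed relation.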

Lemma 5.3

For any positive integer r and any partitions \(\lambda ^1, \dots , \lambda ^r\), the following equality holds true

$$\begin{aligned} {\widetilde{D}}\ \kappa ^{{\mathcal {J}}}\left( \lambda ^1,\dots ,\lambda ^r\right)= & {} {\widetilde{D}}\ \kappa _{[r]}({\varvec{u}}) \nonumber \\= & {} \sum _j \frac{(q-1)^j}{j!}\sum _{1 \le i \le N}A_i({\varvec{x}};t)\left( x_i^j-x_i^{j-1}\right) \nonumber \\&\quad \sum _{\begin{array}{c} \pi \in {\mathcal {P}}([r]), \\ \#\pi \le j \end{array} }\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}_+^\pi , \\ |\alpha | = j \end{array} } \left( {\begin{array}{c}j\\ \alpha \end{array}}\right) \prod _{B \in \pi } \left( D_i^{\alpha (B)}\kappa _{B}({\varvec{u}})\right) . \end{aligned}$$
(24)

Proof

We recall that

$$\begin{aligned} D = \sum _{j \ge 1}\frac{(q-1)^{j}}{j!}\sum _iA_i({\varvec{x}};t)(x_i^j-x_i^{j-1}) D^j_i \end{aligned}$$

(which is Eq. (6)), where \(D^j_i := \frac{\partial ^j}{\partial x_i^j}\). Since \(\frac{\partial }{\partial x_i} \in {{\mathrm{Der}}}_{{{\mathrm{Sym}}}_N}\) is a derivation, we have the following formula:

$$\begin{aligned} {\widetilde{D}}\ \kappa _{[r]}({\varvec{u}}) = \sum _{j \ge 1}\frac{(q-1)^{j}}{j!}\sum _i A_i({\varvec{x}};t)\left( x_i^j-x_i^{j-1}\right) \left( {\widetilde{D^j_i}}\ \kappa _{[r]}({\varvec{u}}) \right) , \end{aligned}$$

and one can apply Lemma 4.2 and substitute the following identity

$$\begin{aligned} {\widetilde{D^j_i}}\ \kappa _{[r]}({\varvec{u}}) = \sum _{\begin{array}{c} \pi \in {\mathcal {P}}([r]), \\ \#\pi \le j \end{array} }\sum _{\begin{array}{c} \alpha \in {\mathbb {N}}_+^\pi , \\ |\alpha | = j \end{array} } \left( {\begin{array}{c}j\\ \alpha \end{array}}\right) \prod _{B \in \pi } \left( D_i^{\alpha (B)}\kappa _{B}({\varvec{u}})\right) , \end{aligned}$$

which immediately gives Eq. (24). \(\square \)
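The combinatorial shape of the right-hand side of Eq. (24) ultimately comes from iterating the Leibniz rule for the derivation \(\frac{\partial }{\partial x_i}\). While verifying Lemma 4.2 itself would require the operator \({\widetilde{D}}\), the underlying multinomial Leibniz rule \(\frac{d^j}{dx^j}\left( f_1 \cdots f_r\right) = \sum _{|\alpha | = j} \left( {\begin{array}{c}j\\ \alpha \end{array}}\right) \prod _i f_i^{(\alpha _i)}\) can be checked on polynomials; the sketch below is stdlib only, and the factors and the order \(j\) are arbitrary choices:

```python
# Check the multinomial Leibniz rule for the j-th derivative of a product,
# with polynomials represented as coefficient lists.
from math import factorial
from itertools import product

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_diff(p, n=1):
    """n-th derivative of a polynomial given as a coefficient list."""
    for _ in range(n):
        p = [i * c for i, c in enumerate(p)][1:] or [0]
    return p

def pad(p, length):
    return p + [0] * (length - len(p))

fs = [[1, 2], [0, 1, 3], [2, 0, 1]]   # 1+2x, x+3x^2, 2+x^2 (arbitrary)
j = 3

# Left-hand side: j-th derivative of the full product.
full = [1]
for f in fs:
    full = poly_mul(full, f)
lhs = poly_diff(full, j)

# Right-hand side: sum over alpha with |alpha| = j of the multinomial
# coefficient times the product of the alpha_i-th derivatives.
rhs = [0]
for alpha in product(range(j + 1), repeat=len(fs)):
    if sum(alpha) != j:
        continue
    coeff = factorial(j)
    term = [1]
    for a, f in zip(alpha, fs):
        coeff //= factorial(a)
        term = poly_mul(term, poly_diff(f, a))
    length = max(len(rhs), len(term))
    rhs = [x + coeff * y for x, y in zip(pad(rhs, length), pad(term, length))]

length = max(len(lhs), len(rhs))
assert pad(lhs, length) == pad(rhs, length)
```

Grouping the factors by the blocks of a set partition \(\pi \) and discarding the terms where some block receives a zero derivative is what produces the restriction to \(\alpha \in {\mathbb {N}}_+^\pi \) with \(\#\pi \le j\) in Eq. (24).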