1 Introduction

The theory of Hall–Littlewood, Jack, and Macdonald polynomials is one of the most interesting subjects in the modern theory of symmetric functions. It is well known that combinatorial properties of symmetric functions can be explained by lifting them to larger algebras (the so-called combinatorial Hopf algebras), the simplest examples being Sym (noncommutative symmetric functions [3]) and its dual QSym (quasi-symmetric functions [5]).

There have been several attempts to lift Hall–Littlewood and Macdonald polynomials to Sym and QSym [1, 7, 8, 11, 12]. The analogues defined by Bergeron and Zabrocki in [1] were similar to, though different from, those defined by Hivert–Lascoux–Thibon in [8]. The latter admitted multiple parameters \(q_i\) and \(t_i\) which, however, could not be specialized to recover the version of Bergeron–Zabrocki.

The aim of this article is to show that many more parameters can be introduced in the definition of such bases. Actually, one can have a pair of n×n matrices \((Q_n,T_n)\) for each degree n. The main properties established in [1] and [8] remain true in this general context, and one recovers the BZ and HLT polynomials for appropriate specializations of the matrices.

In the last section, another possibility involving quasideterminants is explored. One can then define bases involving two almost-triangular matrices of parameters in each degree. We shall see on some examples that if these matrices are chosen so as to give a special basis for the row and column compositions, special properties arise for hook compositions \((n-k,1^k)\). For example, one can obtain a basis whose commutative image reduces to the Macdonald P-polynomials for hook compositions.

One should not expect that constructions at the level of Sym and QSym could lead to general results on ordinary Macdonald polynomials. Even for Schur functions, one has to work in the algebra of standard tableaux, FSym, to understand the Littlewood–Richardson rule. However, the analogues of Macdonald polynomials which can be defined in Sym and QSym have enough in common with the ordinary ones to suggest interesting ideas. The most startling one is that the usual Macdonald polynomials could be specializations of a family of symmetric functions with many more parameters.

2 Notations

Our notations for noncommutative symmetric functions will be as in [3, 10]. Here is a brief reminder.

The Hopf algebra of noncommutative symmetric functions is denoted by Sym, or by Sym(A) if we consider the realization in terms of an auxiliary alphabet. Bases of \(\mathbf{Sym}_n\) are labeled by compositions I of n. The noncommutative complete and elementary functions are denoted by \(S_n\) and \({\varLambda}_n\), and the notation \(S^I\) means \(S_{i_{1}}\cdots S_{i_{r}}\). The ribbon basis is denoted by \(R_I\). The notation \(I\vDash n\) means that I is a composition of n. The conjugate composition is denoted by \(I^\sim\).

The graded dual of Sym is QSym (quasi-symmetric functions). The dual basis of \((S^I)\) is \((M_I)\) (monomial), and that of \((R_I)\) is \((F_I)\). The descent set of \(I=(i_1,\ldots,i_r)\) is \(\mathrm{Des\,}(I) = \{ i_{1},\ i_{1}+i_{2}, \ldots, i_{1}+\cdots+i_{r-1}\}\).

Finally, let us recall two operations on compositions: if \(I=(i_1,\ldots,i_r)\) and \(J=(j_1,\ldots,j_s)\), the composition \(I\cdot J\) is \((i_1,\ldots,i_r,j_1,\ldots,j_s)\) and \(I\triangleright J\) is \((i_1,\ldots,i_{r-1},\,i_r+j_1,\,j_2,\ldots,j_s)\).

3 \(\mathbf{Sym}_n\) as a Grassmann algebra

Since for n>0, \(\mathbf{Sym}_n\) has dimension \(2^{n-1}\), it can be identified (as a vector space) with a Grassmann algebra on n−1 generators \(\eta_1,\ldots,\eta_{n-1}\) (that is, \(\eta_i\eta_j=-\eta_j\eta_i\), so that in particular \(\eta_{i}^{2}=0\)). This identification is meaningful, for example, in the context of the representation theory of the 0-Hecke algebras \(H_n(0)\). Indeed (see [2]), the quiver of \(H_n(0)\) admits a simple description in terms of this identification. Of course, this identification is not an isomorphism of algebras.

If I is a composition of n with descent set \(D=\{d_1<\cdots<d_k\}\), we make the identification

$$ R_I \longleftrightarrow \eta_D:=\eta_{d_1} \eta_{d_2}\cdots \eta_{d_k}. $$
(1)

For example, \(R_{213}\longleftrightarrow \eta_2\eta_3\). We then have

$$ S^I \longleftrightarrow (1+\eta_{d_1}) (1+\eta_{d_2})\cdots(1+\eta_{d_k}) $$
(2)

and

$$ {\varLambda}^I \longleftrightarrow \prod_{i=1}^{n-1} \theta_i(I), $$
(3)

where \(\theta_i(I)=\eta_i\) if \(i\not\in D\) and \(\theta_i(I)=1+\eta_i\) otherwise. For example, \(S^{21}\longleftrightarrow 1+\eta_2\longleftrightarrow R_3+R_{21}\), and \({\varLambda}^{21}\longleftrightarrow \eta_1(1+\eta_2)\longleftrightarrow R_{12}+R_{111}\). Other families of noncommutative symmetric functions have simple expressions under this identification, e.g., the primitive elements \({\varPsi}_n\) and \({\varPhi}_n\) of [3]

$$ {\varPsi}_n \longleftrightarrow 1-\eta_1+\eta_1 \eta_2 - \cdots + (-1)^{n-1} \eta_1\cdots \eta_{n-1}, $$
(4)

and

$$ {\varPhi}_n \longleftrightarrow \sum_{k=0}^{n-1} \frac{(-1)^k}{\binom{n-1}{k}} E_k, $$
(5)

where \(E_{k} = \sum_{j_{1}<\cdots<j_{k}} \eta_{j_{1}} \cdots \eta_{j_{k}}\). The q-Klyachko element [3]

$$ K_n(q) = \sum_{I\vDash n} q^{\mathrm{maj}(I)} R_I, $$
(6)

is

$$ (1+q\eta_1) \bigl(1+q^2\eta_2\bigr) \cdots \bigl(1+q^{n-1}\eta_{n-1}\bigr), $$
(7)

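As a check on this factorized form, for n=3 expanding the product gives \(K_3(q)=1+q\eta_1+q^2\eta_2+q^3\eta_1\eta_2\), that is, \(K_3(q)=R_3+qR_{12}+q^2R_{21}+q^3R_{111}\), in agreement with \(\mathrm{maj}(12)=1\), \(\mathrm{maj}(21)=2\) and \(\mathrm{maj}(111)=3\).
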
Hivert’s Hall–Littlewood basis [7] is

$$ H_I(q) := (\eta_{d_1}+q) \bigl(\eta_{d_2}+q^2\bigr) \cdots \bigl(\eta_{d_{r-1}}+q^{r-1}\bigr). $$
(8)

3.1 Structure on the Grassmann algebra

Let ∗ be the anti-involution given by \(\eta_{i}^{*}=(-1)^{i}\eta_{i}\). Recall that the Grassmann integral of any function f is defined by

$$ \int f\,d\eta := f_{12\ldots n-1},\quad \hbox{where}\ f=\sum _{k}\sum_{i_1<\cdots<i_k}f_{i_1\dots i_k} \eta_{i_1}\cdots\eta_{i_k}. $$
(9)

We define a bilinear form on Sym n by

$$ (f,g)=\int f^* g\,d\eta. $$
(10)

Then,

$$ (R_I,R_J)=(-1)^{\ell(I)-1} \delta_{I,\bar{J}^\sim} $$
(11)

so that this is (up to an inessential sign) the Bergeron–Zabrocki scalar product [1, Eq. (4)]. Indeed, if \(\mathrm{Des\,}(I)=\{d_{1},\ldots,d_{r}\}\) and \(\mathrm{Des\,}(J)=\{e_{1},\ldots,e_{s}\}\), then

$$ R_I^* R_J=(-1)^{d_r}\eta_{d_r}\cdots (-1)^{d_1}\eta_{d_1} \,\eta_{e_1}\cdots \eta_{e_s} $$
(12)

and the coefficient of \(\eta_1\cdots\eta_{n-1}\) in this product is zero if \(\mathrm{Des\,}(I)\) and \(\mathrm{Des\,}(J)\) are not complementary subsets of [n−1]. When this is the case, moving \(\eta_{d_{k}}\) to its place in the middle of the \(\eta_{e_i}\) produces a sign \((-1)^{d_{k}-1}\), which together with the factor \((-1)^{d_{k}}\) results in a single factor (−1). Hence the final sign is \((-1)^r=(-1)^{\ell(I)-1}\).
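
For instance, for n=3, \((R_{21},R_{12})=\int \eta_2^*\,\eta_1\,d\eta = \int \eta_2\eta_1\,d\eta = -1 = (-1)^{\ell(21)-1}\), in accordance with (11), since \(\bar{J}^\sim=(21)\) for J=(12).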

3.2 Factorized elements in the Grassmann algebra

Now, for a sequence of parameters \(Z=(z_1,\ldots,z_{n-1})\), let

$$ K_n(Z)=(1+z_1\eta_1) (1+z_2\eta_2) \cdots (1+z_{n-1}\eta_{n-1}). $$
(13)

Note that this is equivalent to defining, as was already done in [8, Eq. (18)],

$$ K_n(A;Z) = \sum_{|I|=n} \biggl(\prod_{d\in \mathrm{Des\,}(I)} z_d \biggr) R_I(A). $$
(14)

For example, with n=4, if one orders compositions as usual by reverse lexicographic order, the coefficients of the expansion of \(K_n(Z)\) on the basis \((R_I)\) are

$$ 1,\quad z_3,\quad z_2,\quad z_2z_3,\quad z_1,\quad z_1z_3,\quad z_1z_2,\quad z_1z_2z_3. $$
(15)

We then have

Lemma 3.1

$$ \bigl(K_n(X),K_n(Y)\bigr) = \prod_{i=1}^{n-1}(y_i-x_i). $$
(16)

Proof

By induction. For n=1, the scalar product is 1. For n>1, writing \(K_n(X)=K_{n-1}(X)(1+x_{n-1}\eta_{n-1})\) and using \((1+x_{n-1}\eta_{n-1})^*=1+(-1)^{n-1}x_{n-1}\eta_{n-1}\), extraction of the coefficient of \(\eta_{n-1}\) yields

$$ \bigl(K_n(X),K_n(Y)\bigr) = (y_{n-1}-x_{n-1}) \bigl(K_{n-1}(X),K_{n-1}(Y)\bigr). $$
(17)

 □

3.3 Bases of Sym

We shall investigate bases of Sym n of the form

$$ {\tilde{\mathrm{H}}}_I = K_n(Z_I) = \sum_J \tilde{\mathbf{k}}_{IJ}R_J, $$
(18)

where \(Z_I\) is a sequence of parameters depending on the composition I of n.

The bases defined in [8] and [1] are of the previous form and for both of them, the determinant of the Kostka matrix \(\mathcal{K}=(\tilde{\mathbf{k}}_{IJ})\) is a product of linear factors (as for ordinary Macdonald polynomials). This is explained by the fact that these matrices have the form

$$ \left( \matrix{ A & xA \cr B & yB \cr} \right) $$
(19)

where A and B have a similar structure, and so on recursively. Indeed, for such matrices,

Lemma 3.2

Let A,B be two m×m matrices. Then,

$$ \left \vert \matrix{ A & xA \cr B & yB \cr} \right\vert = (y-x)^m \det A\cdot \det B. $$
(20)
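
For m=1, this is simply \(\left\vert \matrix{ a & xa \cr b & yb \cr}\right\vert = (y-x)ab\). In general, subtracting x times the first block column from the second leaves \(\left(\matrix{ A & 0 \cr B & (y-x)B \cr}\right)\), whose determinant is \((y-x)^m \det A\cdot \det B\).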

3.4 Duality

Similarly, the dual vector space \(\mathit{QSym}_{n}=\mathbf{Sym}_{n}^{*}\) can be identified with a Grassmann algebra on another set of generators \(\xi_1,\ldots,\xi_{n-1}\). Encoding the fundamental basis \(F_I\) of Gessel [5] by

$$ \xi_D := \xi_{d_1}\xi_{d_2}\cdots \xi_{d_k}, $$
(21)

the usual duality pairing such that the \(F_I\) are dual to the \(R_I\) is given in this setting by

$$ \langle \xi_D,\eta_E\rangle =\delta_{DE}. $$
(22)

Let

$$ L_n(Z)=(z_1-\xi_1)\cdots (z_{n-1}-\xi_{n-1}). $$
(23)

Then, as above, we have a factorization identity:

Lemma 3.3

$$ \bigl \langle L_n(X),K_n(Y)\bigr \rangle = \prod_{i=1}^{n-1}(x_i-y_i). $$
(24)

Proof

By definition

$$ L_n(X) = \sum_{D\subseteq [n-1]}(-1)^{|D|} \xi_D \prod_{e\not\in D}x_e $$
(25)

and

$$ K_n(Y) = \sum_{D\subseteq[n-1]}\eta_D \prod_{d\in D}y_d, $$
(26)

so that

$$ \bigl \langle L_n(X),K_n(Y)\bigr \rangle = \sum_{D\subseteq[n-1]}(-1)^{|D|} \prod_{e\not\in D}x_e \prod_{d\in D}y_d = \prod_{i=1}^{n-1}(x_i-y_i). $$
(27)

 □

Note that alternatively, assuming that the \(\xi_i\) and the \(\eta_j\) commute with each other and that \(\xi_i\eta_i=1\), one can define 〈f,g〉 as the constant term in the product fg.

Using this formalism, one can, for example, find the following expression for the dual basis \({\varPhi}_J^*\) of \({\varPhi}^J\):

$$ {\varPhi}_J^* \longleftrightarrow \prod_{i=1}^r \frac{1}{j_i} \prod_{k\not\in \mathrm{Des\,}(J)}^\rightarrow (1- \xi_k)\, f_{r-1}(\xi_{d_1},\ldots,\xi_{d_{r-1}}), $$
(28)

where \({\varPhi}^{*}_{1^{r}} \longleftrightarrow f_{r-1}(\xi_{1},\ldots,\xi_{r-1})\), which is simpler than the description of [3, Proposition 4.29]. Moreover, one can show that

$$ {\varPhi}_{1^n}^*(X) = F_n(X{\mathbb{E}}) = \sum_{I\vDash n} R_I({\mathbb{E}}) F_I(X), $$
(29)

where \({\mathbb{E}}\) is the exponential alphabet defined by \(S_{n}({\mathbb{E}}) = 1/n!\), so that

$$ f_{r-1}(\xi_1,\ldots,\xi_{r-1}) = \sum_{D\subseteq [r-1]} a_D \xi_D, $$
(30)

where \(a_D\) is the number of permutations of \(\mathfrak{S}_{r}\) with descent set D.
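
For instance, for r=3, the numbers of permutations of \(\mathfrak{S}_3\) with descent sets ∅, {1}, {2}, {1,2} are 1, 2, 2, 1, so that

$$ f_2(\xi_1,\xi_2) = 1 + 2\xi_1 + 2\xi_2 + \xi_1\xi_2. $$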

4 Bases associated with paths in a binary tree

The most general possibility to build bases whose Kostka matrix can be recursively decomposed into blocks of the form (19) is as follows. Let \(y=\{y_u\}\) be a family of indeterminates indexed by all boolean words of length at most n−1. For example, for n=3, we have the six parameters \(y_0\), \(y_1\), \(y_{00}\), \(y_{01}\), \(y_{10}\), \(y_{11}\).

We can encode a composition I with descent set D by the boolean word \(u=(u_1,\ldots,u_{n-1})\) such that \(u_i=1\) if \(i\in D\) and \(u_i=0\) otherwise.

Let us denote by u mp the sequence u m u m+1u p and define

$$ P_I := (1+ y_{u_1} \eta_1)(1+y_{u_{1\ldots2}} \eta_2) \cdots (1+y_{u_{1\ldots n-1}} \eta_{n-1}) $$
(31)

or, equivalently,

$$ P_I := K_n(Y_I)\quad \hbox{with}\ Y_I=( y_{u_1},y_{u_{1\ldots2}}, \ldots,y_{u}) =: \bigl(y_k(I)\bigr). $$
(32)

Similarly, let

$$ Q_I := (y_{w_1} - \xi_1) (y_{w_{1\dots2}}- \xi_2) \cdots (y_{w_{1\dots n-1}}- \xi_{n-1}) =: L_n\bigl(Y^I\bigr) $$
(33)

where \(w_{1\ldots k}=u_{1}\cdots u_{k-1} \overline{u_{k}}\), with \(\overline{u_{k}}=1-u_{k}\), so that

$$ Y^I := (y_{w_1},y_{w_{1\ldots2}},\ldots,y_{w_{1\dots n-1}})=:\bigl(y^k(I)\bigr). $$
(34)

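For instance, for n=3 and I=(21), the boolean word is u=(0,1), so that \(w_1=\overline{u_1}=1\) and \(w_{1\ldots2}=u_1\overline{u_2}=00\), whence

$$ P_{21} = (1+y_0\eta_1) (1+y_{01}\eta_2), \qquad Q_{21} = (y_1-\xi_1) (y_{00}-\xi_2). $$
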
For n=4, we have the following tables:

(35)

4.1 Kostka matrices

The Kostka matrix, which is defined as the transpose of the transition matrix from the \(P_I\) to the \(R_J\), is, for n=4:

$$ \left( \matrix{ 1 & y_{000} & y_{00} & y_{00}y_{000} & y_0 & y_0y_{000} & y_0y_{00} & y_0y_{00}y_{000} \cr 1 & y_{001} & y_{00} & y_{00}y_{001} & y_0 & y_0y_{001} & y_0y_{00} & y_0y_{00}y_{001} \cr 1 & y_{010} & y_{01} & y_{01}y_{010} & y_0 & y_0y_{010} & y_0y_{01} & y_0y_{01}y_{010} \cr 1 & y_{011} & y_{01} & y_{01}y_{011} & y_0 & y_0y_{011} & y_0y_{01} & y_0y_{01}y_{011} \cr 1 & y_{100} & y_{10} & y_{10}y_{100} & y_1 & y_1y_{100} & y_1y_{10} & y_1y_{10}y_{100} \cr 1 & y_{101} & y_{10} & y_{10}y_{101} & y_1 & y_1y_{101} & y_1y_{10} & y_1y_{10}y_{101} \cr 1 & y_{110} & y_{11} & y_{11}y_{110} & y_1 & y_1y_{110} & y_1y_{11} & y_1y_{11}y_{110} \cr 1 & y_{111} & y_{11} & y_{11}y_{111} & y_1 & y_1y_{111} & y_1y_{11} & y_1y_{11}y_{111} \cr} \right). $$
(36)

For example,

(37)

Note that this matrix is recursively of the form of Eq. (19). Thus, its determinant is

$$ \det \mathcal{K}_4 = (y_1-y_0)^4 (y_{01}-y_{00})^2 (y_{11}-y_{10})^2 (y_{001}-y_{000}) (y_{011}-y_{010}) (y_{101}-y_{100}) (y_{111}-y_{110}). $$
(38)

This has the consequence that, given a specialization of the \(y_w\), one can easily check whether the \(P_I\) form a linear basis of \(\mathbf{Sym}_n\).

Proposition 4.1

The bases \((P_I)\) and \((Q_I)\) are adjoint to each other, up to normalization:

$$ \langle Q_I,P_J\rangle = \bigl \langle L_n(Y^I),K_n(Y_J)\bigr \rangle = \prod_{k=1}^{n-1} \bigl(y^k(I)-y_k(J)\bigr), $$
(39)

which is indeed zero unless I=J.

Proof

If I=J, then \(y^{k}(I)\not=y_{k}(I)\) for all k, by definition. If \(I\not=J\), let d be the smallest integer which is a descent of either I or J but not both. Then the length-d prefix \(w_{1\ldots d}\) attached to I coincides with the length-d prefix \(u_{1\ldots d}\) attached to J, so that \(y^d(I)=y_d(J)\) and \(\langle Q_I,P_J\rangle=0\). □
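
For instance, for n=3, I=(21) and J=(12), one has \(Y^I=(y_1,y_{00})\) while \(Y_J=(y_1,y_{10})\): here d=1, and the first factor \(y^1(I)-y_1(J)=y_1-y_1\) already vanishes.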

From this, it is easy to derive a product formula for the basis \(P_I\). Note that we are considering the usual product from \(\mathbf{Sym}_n\times\mathbf{Sym}_m\) to \(\mathbf{Sym}_{n+m}\) and not the product of the Grassmann algebra.

Proposition 4.2

Let I and J be two compositions of respective sizes n and m. The product \(P_IP_J\) is a sum over an interval of the lattice of compositions

$$ P_I P_J = \sum_{K\in [I\triangleright (m),I\cdot (1^m)]} c_{IJ}^K P_K $$
(40)

where

$$ c_{IJ}^K=\frac{\langle L_{n+m}(Y^K),K_{n+m}(Y_I\cdot 1\cdot Y_J)\rangle }{\langle Q_K,P_K \rangle }, $$
(41)

where \(Y_I\cdot 1\cdot Y_J\) stands for the sequence \((y_1(I),\ldots,y_{n-1}(I),\ 1,\ y_1(J),\ldots,y_{m-1}(J))\).

Proof

The usual product from \(\mathbf{Sym}_n\times\mathbf{Sym}_m\) to \(\mathbf{Sym}_{n+m}\) can be expressed as

$$ f_n\times g_m = f_n(\eta_1,\ldots,\eta_{n-1}) (1+\eta_n)g_m(\eta_{n+1},\ldots,\eta_{n+m-1}). $$
(42)

Indeed, this formula is clearly satisfied for \(R_IR_J\). Then,

(43)

Then, for any K, the coefficient of \(P_K\) is given by Formula (41). Thanks to Proposition 4.1, it is 0 if the boolean vector corresponding to I is not a prefix of the boolean vector corresponding to K, that is, if K is not in the interval \([I\triangleright(m),I\cdot(1^m)]\). □

For example,

(44)
(45)

4.2 The quasi-symmetric side

As we have seen before, the \((Q_I)\) being dual to the \((P_I)\), the inverse Kostka matrix is given by the following simple construction:

Proposition 4.3

The inverse of the Kostka matrix is given by

$$ \bigl(\mathcal{K}_n^{-1}\bigr)_{IJ} =(-1)^{\ell(I)-1} \prod_{d\in \mathrm{Des\,}(\bar{I}^\sim)} y^d(J) \prod_{p=1}^{n-1} \frac{1}{y^p(J)-y_p(J)}. $$
(46)

Proof

This follows from Proposition 4.1. □

One can check the answer on the table (35) giving, for n=4, the \(Q_I\) in terms of the \(F_I\).

4.3 Some specializations

Let us now consider the specialization sending \(y_w\) to 1 for all words w ending with a 1, and denote by \(\mathcal{K}'\) the matrix obtained by this specialization. Then, as in [8, p. 10],

Proposition 4.4

Let n be an integer. Then

$$ S_n = \mathcal{K}_n {\mathcal{K}'_n}^{-1} $$
(47)

is lower triangular. More precisely, let \(Y'_J\) be the image of \(Y_J\) under the previous specialization and define \(Y'^J\) in the same way. Then the coefficient \(s_{IJ}\) indexed by (I,J) is

$$ s_{IJ} = \prod_{k=1}^{n-1} \frac{y_k(I)-y'^k(J)}{y'_k(J)-y'^k(J)}. $$
(48)

Proof

This follows from the explicit form of \(\mathcal{K}_{n}\) and \({\mathcal{K}'_{n}}^{-1}\) (see Proposition 4.3). □

Similarly, the specialization sending \(y_w\) to 1 for all words w ending with a 0 leads to an upper triangular matrix.

These properties can be regarded as analogues of the characterization of Macdonald polynomials given in [6, Proposition 2.6] (see also [8, p. 10]).

5 The two-matrix family

5.1 A specialization of the paths in a binary tree

The above bases can now be specialized to bases \({\tilde{\mathrm{H}}}(A;Q,T)\), depending on two infinite matrices of parameters. Label the cells of the ribbon diagram of a composition I of n with their matrix coordinates as follows:

(49)

We associate a variable \(z_{ij}\) with each cell other than (1,1) by setting \(z_{ij}:=q_{i,j-1}\) if (i,j) has a cell on its left, and \(z_{ij}:=t_{i-1,j}\) if (i,j) has a cell on its top. The alphabet \(Z(I)=(z_j(I))\) is the sequence of the \(z_{ij}\) in their natural order. For example, for I=(4,1,2,1),

$$ Z(I) = (q_{11},\ q_{12},\ q_{13},\ t_{14},\ t_{24},\ q_{34},\ t_{35}). $$
(50)

Next, if J is a composition of the same integer n, form the monomial

$$ \tilde{\mathbf{k}}_{IJ}(Q,T)=\prod_{d\in \mathrm{Des\,}(J)}z_d(I). $$
(51)

For example, with I=(4,1,2,1) and J=(2,1,1,2,2), we have \(\mathrm{Des\,}(J)=\{2,3,4,6\}\) and \(\tilde{\mathbf{k}}_{IJ}=q_{12}q_{13}t_{14}q_{34}\).

Definition 5.1

Let \(Q=(q_{ij})\) and \(T=(t_{ij})\), \(i,j\ge1\), be two infinite matrices of commuting indeterminates. For a composition I of n, the noncommutative (Q,T)-Macdonald polynomial \({\tilde{\mathrm{H}}}_{I}(A;Q,T)\) is

$$ {\tilde{\mathrm{H}}}_I(A;Q,T)= K_n\bigl(A;Z(I)\bigr) = \sum_{J\vDash n}\tilde{\mathbf{k}}_{IJ}(Q,T)R_J(A). $$
(52)

Note that \({\tilde{\mathrm{H}}}_{I}\) depends only on the \(q_{ij}\) and \(t_{ij}\) with \(i+j\le n\).

Note 5.2

Since the kth element of Z(I) depends only on the prefix of size k of the boolean vector associated with I, the \({\tilde{\mathrm{H}}}_{I}\) are specializations of the \(P_I\) defined in Eq. (32). More precisely, if u=w0 is a binary word ending with 0, the specialization is \(y_{u}= q_{|w|_{1}+1,|w|_{0}+1}\), and if u=w1, we have \(y_{u}=t_{|w|_{1}+1,|w|_{0}+1}\). Note also that, for any binary word w, \(y_{w0}\) and \(y_{w1}\) specialize to different parameters (one is a q, the other is a t), so that the determinant of this specialized Kostka matrix is generically nonzero. Finally, since these \({\tilde{\mathrm{H}}}\) are specializations of the \(P_I\), the product formula detailed in Proposition 4.2 gives a simple generic product formula for the \({\tilde{\mathrm{H}}}\).
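
For instance, in degree 3, the specialization reads \(y_0=q_{11}\), \(y_1=t_{11}\), \(y_{00}=q_{12}\), \(y_{01}=t_{12}\), \(y_{10}=q_{21}\), \(y_{11}=t_{21}\).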

For example, translating Eqs. (44) and (45), one gets

(53)
(54)

5.2 (Q,T)-Kostka matrices

Here are the (Q,T)-Kostka matrices for n=3,4 (compositions are as usual in reverse lexicographic order):

(55)
(56)

The factorization property of the determinant of the (Q,T)-Kostka matrix, which is valid for the usual Macdonald polynomials as well as for the noncommutative analogues of [8] and [1], still holds since the \({\tilde{\mathrm{H}}}_{I}\) are specializations of the \(P_I\). More precisely,

Theorem 5.3

Let n be an integer. Then

$$ \det \mathcal{K}_n = \prod_{i+j\le n}(q_{ij}-t_{ij})^{e(i,j)}, $$
(57)

where \(e(i,j)=\binom{i+j-2}{i-1}\, 2^{n-i-j}\).

Proof

The matrix \(\mathcal{K}_{n}(Q,T)\) is of the form

$$ \mathcal{K}_n =\left( \matrix{ A&q_{11}A \cr B&t_{11}B \cr} \right) $$
(58)

where A and B are obtained from \(\mathcal{K}_{n-1}\) by the replacements \(q_{ij}\to q_{i,j+1}\), \(t_{ij}\to t_{i,j+1}\), and \(q_{ij}\to q_{i+1,j}\), \(t_{ij}\to t_{i+1,j}\), respectively. So the result follows from Lemma 3.2. □

For example, with n=4,

$$ \det \mathcal{K}_4 = (q_{11}-t_{11})^4 (q_{12}-t_{12})^2 (q_{21}-t_{21})^2 (q_{13}-t_{13}) (q_{22}-t_{22})^2 (q_{31}-t_{31}). $$
(59)

5.3 Specializations

For appropriate specializations, we recover (up to indexation) the Bergeron–Zabrocki polynomials \({\tilde{\mathrm{H}}}_{I}^{\mathrm{BZ}}\) of [1] and the multiparameter Macdonald functions \({\tilde{\mathrm{H}}}_{I}^{\mathrm{HLT}}\) of [8]:

Proposition 5.4

Let \((q_i)\), \((t_i)\), i≥1 be two sequences of indeterminates. For a composition I of n,

  1. (i)

Let ν be the anti-involution of Sym defined by \(\nu(S_n)=S_n\). Under the specialization \(q_{ij}=q_{i+j-1}\), \(t_{ij}=t_{n+1-i-j}\), \({\tilde{\mathrm{H}}}_{I}(Q,T)\) becomes a multiparameter version of \(\nu({\tilde{\mathrm{H}}}_{I}^{\mathrm{BZ}})\), to which it reduces under the further specialization \(q_i=q^i\) and \(t_i=t^i\).

  2. (ii)

Under the specialization \(q_{ij}=q_j\), \(t_{ij}=t_i\), \({\tilde{\mathrm{H}}}_{I}(Q,T)\) reduces to \({\tilde{\mathrm{H}}}_{I}^{\mathrm{HLT}}\).

Proof

Equation (52) gives directly [1, Eq. (36)] under the specialization (i) and [8, Eqs. (2), (6)] under the specialization (ii). □

5.4 The quasi-symmetric side

Families of (Q,T)-quasi-symmetric functions can now be defined by duality, by specializing the \((Q_I)\) defined in the general case. The dual basis of \(({\tilde{\mathrm{H}}}_{J})\) in QSym will be denoted by \((\tilde{\mathrm{G}}_{I})\). We have

$$ \tilde{\mathrm{G}}_I(X;Q,T) = \sum_J \tilde{\mathbf{g}}_{IJ}(Q,T) F_J(X) $$
(60)

where the coefficients are given by the transposed inverse of the Kostka matrix: \((\tilde{\mathbf{g}}_{IJ})={}^{t}(\tilde{\mathbf{k}}_{IJ})^{-1}\).

Let \(Z'(I)(Q,T)=Z(I)(T,Q)=Z(\bar{I}^{\sim})(Q,T)\). Then, thanks to Proposition 4.3 and to the fact that changing the last bit of a binary word amounts to changing a q into a t, we have

Proposition 5.5

The inverse of the (Q,T)-Kostka matrix is given by

$$ \bigl(\mathcal{K}_n^{-1}\bigr)_{IJ} = (-1)^{\ell(I)-1} \prod_{d\in \mathrm{Des\,}(\bar{I}^\sim)} z'_d(J) \prod_{p=1}^{n-1}\frac{1}{z_p(J)-z'_p(J)}. $$
(61)

Note that, as in the more general case of parameters indexed by binary words (see Proposition 4.4), if one specializes all t (resp., all q) to 1, one then gets lower (resp., upper) triangular matrices with explicit coefficients, hence generalizing the observation of [8].

6 Multivariate BZ polynomials

In this section, we restrict our attention to the multiparameter version of the Bergeron–Zabrocki polynomials, obtained by setting \(q_{ij}=q_{i+j-1}\) and \(t_{ij}=t_{n+1-i-j}\) in degree n.

6.1 Multivariate BZ polynomials

As in the case of the two matrices of parameters Q and T, one can deduce the product in the \({\tilde{\mathrm{H}}}\) basis by some sort of specialization of the general case. However, since \(t_{ij}\) specializes to a parameter whose index involves n, one has to be a little more cautious to get the correct answer.

Theorem 6.1

Let I and J be two compositions of respective sizes p and r. Set \(K=I\cdot\bar{J}^{\sim}\) and \(n=|K|=p+r\). Then

(62)

where the sum is computed as follows. Let I′ and J′ be the compositions such that |I′|=|I| and either K′=I′⋅J′ or K′=I′▷J′. If I′ is not coarser than I or if J′ is not finer than J, then \({\tilde{\mathrm{H}}}_{K'}\) has coefficient 0. Otherwise, \(z_k(K')=q_k\) if k is a descent of K′ and \(z_k(K')=t_{n-k}\) otherwise. Finally, \(z'_k(K')\) does not depend on K′ and is given by the sequence \((Z(I),1,Z(J))\).

For example, with I=J=(2), we have K=(211). Note that \(Z^{\mathrm{BZ}}(K)=[q_1,t_2,t_1]\) and \(Z^{\mathrm{BZ}}(\bar{K}^{\sim})=[t_3,q_2,q_3]\). The set of compositions K′ having a nonzero coefficient is (4), (31), (22), (211). Here are the (modified) Z and Z′ restricted to the descents of K for these four compositions.

$$ \begin{array}{l@{\quad}l@{\quad}l@{\quad}l@{\quad}l} & (4) & (31) & (22) & (211) \\ Z & [t_2,t_1] & [t_2,q_3] & [q_2,t_1] & [q_2,q_3] \\ Z' & [1,q_1] & [1,q_1] & [1,q_1] & [1,q_1] \\ \end{array} $$
(63)
(64)
(65)

6.2 The ∇ operator

The ∇ operator of [1] can be extended by

$$ \nabla{ \tilde{\mathrm{H}}}_I = \Biggl(\prod_{d=1}^{n-1} z_d(I) \Biggr) {\tilde{\mathrm{H}}}_I. $$
(66)

Then,

Proposition 6.2

The action ofon the ribbon basis is given by

(67)

Proof

This is a direct adaptation of the proof of [1]. Lemma 21, Corollary 22, and Lemma 23 of [1] remain valid if one interprets \(q^i\) and \(t^j\) as \(q_i\) and \(t_j\). In particular, Eq. (66) reduces to [1, (54)] under the specialization \(q_i=q^i\), \(t_i=t^i\). □

Note also that if \(I=(1^n)\), one has

$$ \nabla {\varLambda}_n=\sum_{J\vDash n} \prod_{j\in \mathrm{Des\,}(J)}(q_j+t_{n-j}) R_J = \sum_{J\vDash n} \prod_{j\not\in \mathrm{Des\,}(J)}(q_j+t_{n-j}-1) {\varLambda}^J. $$
(68)

As a positive sum of ribbons, this is the multigraded characteristic of a projective module of the 0-Hecke algebra. Its dimension is the number of packed words of length n (called preference functions in [1]). Let us recall that a packed word is a word w over {1,2,…} such that if i>1 appears in w, then i−1 also appears in w. The set of all packed words of size n is denoted by \(\mathrm{PW}_n\).

Then the multigraded dimension of the previous module is

$$ W_n(\mathbf{q},\mathbf{t}) = \bigl \langle \nabla{\varLambda}_n, F_1^n\bigr \rangle = \sum_{w\in \mathrm{PW}_n}\phi(w) $$
(69)

where the statistic ϕ(w) is obtained as follows.

Let \(\sigma_{w}=\overline{\mathrm{std}(\overline{w})}\), where \(\overline{w}\) denotes the mirror image of w. Then

$$ \phi(w) = \prod_{i\in \mathrm{Des\,}(\sigma_w^{-1})}x_i $$
(70)

where \(x_i=q_i\) if \(w^{\uparrow}_{i}=w^{\uparrow}_{i+1}\) and \(x_i=t_{n-i}\) otherwise, \(w^{\uparrow}\) being the nondecreasing reordering of w.

For example, with w=22135411, \(\sigma_w=54368721\), \(w^{\uparrow}=11122345\), the recoils of \(\sigma_w\) are 1, 2, 3, 4, 7, and \(\phi(w)=q_1q_2t_5q_4t_1\).

Actually, we have the following slightly stronger result.

Proposition 6.3

Denote by \(d_I\) the number of permutations σ with descent composition \(C(\sigma)=I\). Then,

$$ \nabla{\varLambda}_n = \sum_{w\in \mathrm{PW}_n} \frac{\phi(w)}{d_{C(\sigma_w^{-1})}} R_{C(\sigma_w^{-1})}. $$
(71)

Proof

If σ is any permutation such that \(C(\sigma^{-1})=I\), the coefficient of \(R_I\) in the r.h.s. of (71) can be rewritten as

$$ \sum_{w\in \mathrm{PW}_n} \frac{\phi(w)}{d_{C(\sigma_w^{-1})}} = \sum_{\sigma_w=\sigma}\phi(w). $$
(72)

The words \(w\in \mathrm{PW}_n\) such that \(\sigma_w=\sigma\) are obtained by the following construction. For \(i=\sigma^{-1}(1)\), we have \(w_i=1\). Next, if k+1 is to the right of k in σ, we must have \(w_{\sigma^{-1}(k+1)}=w_{\sigma^{-1}(k)}+1\). Otherwise, we have two choices: \(w_{\sigma^{-1}(k+1)}=w_{\sigma^{-1}(k)}\) or \(w_{\sigma^{-1}(k+1)}=w_{\sigma^{-1}(k)}+1\). These choices are independent, and each contributes a factor \((q_i+t_{n-i})\) to the sum (72). □
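
For instance, for n=2, the three packed words 11, 12, 21 have \(\sigma_w=21, 12, 21\), respectively, and \(\phi(w)=q_1, 1, t_1\), so that (71) gives \(\nabla{\varLambda}_2 = R_2 + (q_1+t_1)R_{11}\), in agreement with (68), and \(W_2(\mathbf{q},\mathbf{t})=1+q_1+t_1\).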

This can again be generalized:

Theorem 6.4

For any composition I of n,

$$ \nabla R_I=(-1)^{|I|+\ell(I)}\,\theta(\sigma) \sum_{w\in \mathrm{PW}_n;\,\mathrm{ev}(w)\le I} \frac{R_{C(\sigma_w^{-1})}}{d_{C(\sigma_w^{-1})}}, $$
(73)

where σ is any permutation such that \(C(\sigma^{-1})=\bar{I}^{\sim}\), and

$$ \theta(\sigma) = \prod_{d\in \mathrm{Des\,}(\bar{I}^\sim)}t_d. $$
(74)

Proof

First, if ev(w)≤I, then \(C(\sigma_{w}^{-1})\ge \bar{I}^{\sim}\). For any \(J\ge \bar{I}^{\sim}\), the coefficient of \(R_J\) in (73) is, for any permutation τ such that \(C(\tau^{-1})=J\),

$$ \sum_{\sigma_w = \tau;\ \mathrm{ev}(w)\le I}\phi(w). $$
(75)

For a packed word w such that \(\sigma_w=\tau\), we have

$$ \phi(w) = \prod_{j\in \mathrm{Des\,}(J)}x_j = \prod_{j\in \mathrm{Des\,}(J)\cap \mathrm{Des\,}(I)}x_j \prod_{j\in \mathrm{Des\,}(J)\backslash \mathrm{Des\,}(I)}x_j. $$
(76)

In the second product, one always has \(x_j=q_j\), since \(w^{\uparrow}_{j}=w^{\uparrow}_{j+1}\). In the first one, there are, as before, two possible independent choices for each j. □

The behavior of the multiparameter BZ polynomials with respect to the scalar product

$$ [R_I,R_J] := (-1)^{|I|+\ell(I)}\delta_{I,\bar{J}^\sim} $$
(77)

is the natural generalization of [1, Proposition 1.7]:

$$ [{\tilde{\mathrm{H}}}_I,{\tilde{\mathrm{H}}}_J] = (-1)^{|I|+\ell(I)} \delta_{I,\bar{J}^\sim} \prod_{i=1}^{n-1}(q_i-t_{n-i}). $$
(78)

7 Quasideterminantal bases

Another way to introduce two matrices of parameters is via the quasi-determinantal expressions of some bases. This method leads to a different kind of deformation, yielding for example noncommutative analogues of the Macdonald P-polynomials.

7.1 Quasideterminants of almost triangular matrices

Quasideterminants [4] are noncommutative analogs of the ratio of a determinant by one of its principal minors. Thus, the quasideterminants of a generic matrix are not polynomials, but complicated rational expressions living in the free field generated by the coefficients. However, for an almost triangular matrix, i.e., one such that \(a_{ij}=0\) for i>j+1, all quasideterminants are polynomials, with a simple explicit expression. We shall only need the formula (see [3, Proposition 2.6]):

(79)

Recall that the quasideterminant \(|A|_{pq}\) is invariant under scaling of the rows of index different from p and of the columns of index different from q. It is homogeneous of degree 1 with respect to row p and column q. Also, the quasideterminant is invariant under permutations of rows and columns.

The quasideterminant (79) coincides with the row-ordered expansion of an ordinary determinant

$$ \mathrm{rdet}(A) := \sum_{\sigma\in{ \mathfrak{S}_n}} \varepsilon(\sigma) a_{1\sigma(1)} a_{2\sigma(2)}\cdots a_{n\sigma(n)} $$
(80)

which will be denoted as an ordinary determinant in the sequel.
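
For instance, for n=2, \(\mathrm{rdet}(A)=a_{11}a_{22}-a_{12}a_{21}\), the entries of each monomial being multiplied in row order, which matters since the entries need not commute.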

7.2 Quasideterminantal bases of Sym

Many interesting families of noncommutative symmetric functions can be expressed as quasi-determinants of the form

$$ H(W,G) = \left \vert \matrix{ w_{11}G_1 & w_{12}G_2 & \ldots & w_{1\, n-1}G_{n-1} & {\small \,\Box}{w_{1n}G_n} \cr w_{21} & w_{22}G_1 & \ldots & w_{2\, n-1}G_{n-2} & w_{2n}G_{n-1} \cr 0 & w_{32} & \ldots & w_{3\, n-1}G_{n-3} & w_{3n}G_{n-2} \cr \vdots & \vdots & \ddots & \vdots & \vdots \cr 0 & 0 & \ldots & w_{n\, n-1} & w_{nn}G_1 \cr}\right \vert $$
(81)

(or of the transposed form), where \(G_k\) is some sequence of free generators of Sym, and W an almost-triangular (\(w_{ij}=0\) for i>j+1) scalar matrix. For example (see [3, (37)–(41)]),

$$ S_n = (-1)^{n-1} \left \vert \matrix{ {\varLambda}_1 & {\varLambda}_2 & \ldots & {\varLambda}_{n-1} &{\small \,\Box}{{\varLambda}_n}\cr {\varLambda}_0 & {\varLambda}_1 & \ldots & {\varLambda}_{n-2} &{\varLambda}_{n-1} \cr 0 & {\varLambda}_0 & \ldots & {\varLambda}_{n-3} &{\varLambda}_{n-2} \cr \vdots & \vdots & \ddots & \vdots &\vdots \cr 0 & 0 & \ldots & {\varLambda}_0 &{\varLambda}_1 \cr}\right \vert, $$
(82)
$$ n S_n = \left \vert \matrix{ {\varPsi}_1 & {\varPsi}_2 & \ldots & {\varPsi}_{n-1} &{\small \,\Box}{{\varPsi}_n}\cr -1 & {\varPsi}_1 & \ldots & {\varPsi}_{n-2} &{\varPsi}_{n-1} \cr 0 & -2 & \ldots & {\varPsi}_{n-3} &{\varPsi}_{n-2} \cr \vdots & \vdots & \ddots & \vdots &\vdots \cr 0 & 0 & \ldots & -n+1 &{\varPsi}_1 \cr}\right \vert, $$
(83)

or (see [10, Eq. (78)])

$$ [n]_q S_n(A) = \left \vert \matrix{ {\varTheta}_1(q) & {\varTheta}_2(q) & \ldots & {\varTheta}_{n-1}(q) & {\small \,\Box}{{\varTheta}_n(q)} \cr -[1]_q & q\,{\varTheta}_1(q) & \ldots & q {\varTheta}_{n-2}(q) & q {\varTheta}_{n-1}(q)\cr 0 & - [2]_q & \ldots & q^2 {\varTheta}_{n-3}(q) & q^2 {\varTheta}_{n-2}(q) \cr \vdots & \vdots & \vdots & \vdots & \vdots \cr 0 & 0 & \ldots & -[n-1]_q & q^{n-1} {\varTheta}_1(q)\cr}\right \vert, $$
(84)

where \({\varTheta}_n(q)=(1-q)^{-1} S_n((1-q)A)\). These examples illustrate relations between sequences of free generators. Quasi-determinantal expressions for some linear bases can be recast in this form as well. For example, the formula for ribbons

$$ (-1)^{n-1}R_I = \left \vert \matrix{ S_{i_1} & S_{i_1+i_2} & S_{i_1+i_2+i_3} & \ldots & {\small \,\Box}{S_{i_1+\cdots+i_n}} \cr S_0 & S_{i_2} & S_{i_2+i_3} & \ldots & S_{i_2+\cdots+i_n} \cr 0 & S_0 & S_{i_3} & \ldots & S_{i_3+\cdots+i_n} \cr \vdots & \vdots & \vdots & \ddots & \vdots \cr 0 & 0 & 0 & \ldots & S_{i_n} \cr}\right \vert $$
(85)

can be rewritten as follows. Let U and V be the n×n almost-triangular matrices

(86)

Given the pair (U,V), define, for each composition I of n, a matrix W(I) by

(87)

and set

$$ H_I(U,V; A) := H\bigl(W(I),S(A)\bigr). $$
(88)

Then,

$$ (-1)^{\ell(I)-1} R_I = H_I(U,V). $$
(89)

Indeed, \(H_I(U,V;A)\) is obtained by substituting in (79)

(90)

This yields

(91)

For example,

$$ R_{211} = \left \vert \matrix{ S_1 & S_2 & S_3 & {\small \,\Box}{S_4} \cr -1 & 0 & 0 & 0 \cr 0 & -1 & -S_1 & -S_2 \cr 0 & 0 & -1 & -S_1 \cr}\right \vert = S_4 - S_{31} - S_{22} + S_{211}. $$
(92)

For a generic pair of almost-triangular matrices (U,V), the \(H_I\) form a basis of \(\mathbf{Sym}_n\). Without loss of generality, we may assume that \(u_{1j}=v_{1j}=1\) for all j. Then, the transition matrix M expressing the \(H_I\) on the \(S^J\), where \(J=(j_1,\ldots,j_p)\), satisfies

$$ M_{J,I} = x_{1 j_1-1} x_{j_1 j_2-1} \cdots x_{j_p n}, $$
(93)

where \(x_{ij}=u_{ij}\) if i−1 is not a descent of I, and \(x_{ij}=v_{ij}\) otherwise.

As we shall sometimes need different normalizations, we also define for arbitrary almost triangular matrices U,V

$$ H'(W,G)= \mathrm{rdet} \left[ \matrix{ w_{11}G_1 & w_{12}G_2 & \ldots & w_{1\, n-1}G_{n-1} & {w_{1n}G_n} \cr w_{21} & w_{22}G_1 & \ldots & w_{2\, n-1}G_{n-2} & w_{2n}G_{n-1} \cr 0 & w_{32} & \ldots & w_{3\, n-1}G_{n-3} & w_{3n}G_{n-2} \cr \vdots & \vdots & \ddots & \vdots & \vdots \cr 0 & 0 & \ldots & w_{n\, n-1} & w_{nn}G_1 \cr}\right] $$
(94)

and

$$ H'_I(U,V) = H'\bigl(W(I),S(A)\bigr). $$
(95)

7.3 Expansion on the basis \((S^I)\)

For a composition \(I=(i_1,\ldots,i_r)\) of n, let \(I^\sharp\) be the integer vector of length n obtained by replacing each entry k of I by the sequence (k,0,…,0) (k−1 zeros):

$$ I^\sharp = \bigl(i_10^{i_1-1}i_20^{i_2-1}\cdots i_r0^{i_r-1}\bigr), $$
(96)

e.g., for compositions of 3,

$$ \begin{array}{l} (3)^\sharp=(300),\qquad (21)^\sharp=(201), \\ (12)^\sharp=(120),\qquad (111)^{\sharp}=(111). \end{array} $$
(97)

Adding (componentwise) the sequence (0,1,2,…,n−1) to \(I^\sharp\), we obtain a permutation \(\sigma_I\). For example,

$$ \sigma_{(3)}=(312),\qquad \sigma_{(21)}=(213),\qquad \sigma_{(12)}=(132),\qquad \sigma_{(111)}=(123). $$
(98)

Proposition 7.1

The expansion of H′(W,S) on the S-basis is given by

$$ H'(W,S) = \sum_{I\vDash n}\varepsilon(\sigma_I) w_{1\sigma_I(1)}\cdots w_{n\sigma_I(n)} S^I. $$
(99)

Thus, for n=3,

$$ H'(W,S) = w_{13}w_{21}w_{32} S^{3} - w_{12}w_{21}w_{33} S^{21} - w_{11}w_{23}w_{32} S^{12} + w_{11}w_{22}w_{33} S^{111}. $$
(100)

7.4 Expansion on the basis \((R_I)\)

Proposition 7.2

For a composition \(I=(i_1,\ldots,i_r)\) of n, denote by \(W_I\) the product of the diagonal minors of the matrix W taken over the first \(i_1\) rows and columns, then the next \(i_2\) ones, and so on. Then,

$$ H'(W,S) = \sum_{I\vDash n}W_I R_I. $$
(101)
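
For instance, for n=2, Proposition 7.1 gives \(H'(W,S)=w_{11}w_{22}S^{11}-w_{12}w_{21}S^{2}\); since \(S^{11}=R_2+R_{11}\), this equals \((w_{11}w_{22}-w_{12}w_{21})R_2 + w_{11}w_{22}R_{11}\), whose coefficients are indeed \(W_{(2)}=\det W\) and \(W_{(11)}=w_{11}w_{22}\).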

7.5 Examples

We shall now describe a class of pairs (U,V), depending on sequences \((q_i)\), \((u_i)\) and extra parameters x,y,a,b, for which the coefficients on the ribbon basis are products of binomials. For special values of the parameters, the resulting bases can be regarded as analogues of Schur functions of (1−t)X/(1−q) or of Macdonald P-polynomials, which are indeed obtained for hook compositions.

7.5.1 A family with factoring coefficients

Theorem 7.3

Let U and V be defined by

(102)
(103)

Then the coefficients \(W_J\) of the expansion of \(H'_{I}(U,V)\) on the ribbon basis all factor as products of binomials.

Proof

Observe first that the substitutions \(b=a^{-1}\) and \(u_{n+1-i}=q_{i-1}^{-1}\) change the determinant of V into \(c_n\det(U)\) with \(c_n=a^{n-1}q_1\cdots q_{n-1}\). Expanding det(U) along its first column yields a two-term recurrence that easily implies the factorized expression

$$ \det(U)=(x-y)\prod_{i=1}^{n-1}(x-aq_iy), $$
(104)

so that as well

$$ \det(V)=(x-y)\prod_{i=1}^{n-1}(y-bu_{n-i}x). $$
(105)

Now, all the matrices W(I) built from U and V, and all their diagonal minors have the same structure, and their determinants factor similarly. □

The formula for the coefficient of \(R_n\) is simple enough: if one orders the factors of det(U) and det(V) as

$$ Z_n = (x-aq_1y, x-aq_2y,\ldots,x-aq_{n-1}y) $$
(106)

and

$$ Z'_n=(y-bu_{n-1}x,y-bu_{n-2}x,\ldots,y-bu_1x), $$
(107)

then the coefficient of \(R_n\) in \(H'_{I}(U,V)\) is

$$ (x-y)\prod_{d\in \mathrm{Des\,}(I)}z'_d \prod_{e\not\in \mathrm{Des\,}(I)}z_e. $$
(108)

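In particular, for I=(n) the descent set is empty, and this coefficient is \((x-y)\prod_{e=1}^{n-1}(x-aq_ey)=\det(U)\), in agreement with (104).
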
For example,

(109)
(110)
(111)
(112)

A more careful analysis allows one to compute directly the coefficient of \(R_J\) in \(H'_{I}\): denote by u the boolean vector of I and by v the boolean vector of J, and consider the biword \(w=\binom{u}{v}\). Start with \(c_{IJ}=c:=1\). First, there are factors coming from the boundaries of the biword:

  • If \(w_{1}=\binom{0}{0}\) then \(c:=(x-q_1y)c\).

  • If \(w_{1}=\binom{1}{0}\) then \(c:=(x-u_{n-1}y)c\).

  • If \(w_{n-1}=\binom{0}{1}\) then \(c:=(x-q_{n-1}y)c\).

  • If \(w_{n-1}=\binom{1}{1}\) then \(c:=(x-u_1y)c\).

Then, for any i∈[1,n−2], the two biletters \(w_iw_{i+1}\) can take the following values:

  • if \(w_{i} w_{i+1}=\binom{.\ 0}{0\ 0}\) then \(c:=(x-q_{i+1}y)c\),

  • if \(w_{i} w_{i+1}=\binom{.\ 1}{0\ 0}\) then \(c:=(x-u_{n-i-1}y)c\),

  • if \(w_{i} w_{i+1}=\binom{0\ .}{1\ 1}\) then \(c:=(x-q_iy)c\),

  • if \(w_{i} w_{i+1}=\binom{1\ .}{1\ 1}\) then \(c:=(x-u_{n-i}y)c\),

  • if \(w_{i} w_{i+1}=\binom{0\ 0}{1\ 0}\) then \(c:=(x-q_iq_{i+1}y)(x-y)c\),

  • if \(w_{i} w_{i+1}=\binom{0\ 1}{1\ 0}\) then \(c:=(x-q_iu_{n-i-1}y)(x-y)c\),

  • if \(w_{i} w_{i+1}=\binom{1\ 0}{1\ 0}\) then \(c:=(x-q_{i+1}u_{n-i}y)(x-y)c\),

  • if \(w_{i} w_{i+1}=\binom{1\ 1}{1\ 0}\) then \(c:=(x-u_{n-i-1}u_{n-i}y)(x-y)c\),

where the dot indicates any possible value. Note that if \(v_i=0\) and \(v_{i+1}=1\), no factor is added to \(c_{IJ}\).

7.5.2 An analogue of the (1−t)/(1−q) transform

Recall that for commutative symmetric functions, the (1−t)/(1−q) transform is defined in terms of the power-sums by

$$ p_n \biggl(\frac{1-t}{1-q}X \biggr) = \frac{1-t^n}{1-q^n}p_n(X). $$
(113)

There exist several noncommutative analogues of this transformation. One can define it on a sequence of generators and require that it be an algebra morphism. This is the case for the versions obtained by chaining internal products on the right by \(\sigma_1((1-t)A)\) and \(\sigma_1(A/(1-q))\) (the order matters). Taking internal products on the left instead, one obtains linear maps which are not algebra morphisms but still lift the commutative transform.

With the specialization x=1, y=t, \(q_i=q^i\), \(u_i=1\), a=b=1, one obtains a basis such that, for a hook composition \(I=(n-k,1^k)\), the commutative image of \(H'_I(U,V)\) becomes the (1−t)/(1−q) transform of the Schur function \(s_{n-k,1^{k}}\).

7.5.3 An analogue of the Macdonald P-basis

With the specialization x=1, y=t, \(q_i=q^i\), \(u_i=t^i\), a=b=1, one obtains an analogue of the Macdonald P-basis, in the sense that for hook compositions \(I=(n-k,1^k)\), the commutative image of \(H'_{I}\) is proportional to the Macdonald polynomial \(P_{n-k,1^{k}}(q,t;X)\).

For example, the following determinant

$$ \left \vert \matrix{ ( 1-t ) h_1 & (1-{t}^{2}) {h}_{2} & ( 1-{t}^{3}) {h}_{3} & (1-{t}^{4}) {h}_{4} & (1-{t}^{5}) {h}_{5}\cr q-1 & (q-t) h_1 & (q-{t}^{2}) {h}_{2} & (q-{t}^{3}) {h}_{3} & (q-{t}^{4}) {h}_{4} \cr 0 & q^{2}-1 & ({q}^{2}-t) h_1 & ({q}^{2}-{t}^{2}) {h}_{2} & ({q}^{2}-{t}^{3}) {h}_{3} \cr 0 & 0 & 1-{t}^{2} & (1-{t}^{3}) h_1 & (1-{t}^{4}) {h}_{2} \cr 0&0&0&1-t& \bigl( 1-{t}^{2} \bigr) h_1 \cr} \right \vert $$
(114)

is equal to

$$ \bigl((1-q) \bigl(1-q^2\bigr)\bigr)^2 \bigl(1-q^5\bigr) P_{311}(q,t;X). $$
(115)

In general, the commutative image of \(H'_{n-k,1^{k}}(U,V)\) is

$$ [k]_q! [n-k-1]_q! \bigl(1-q^n\bigr) P_{n-k,1^k}(q,t;X). $$
(116)