Journal of Algebraic Combinatorics, Volume 37, Issue 4, pp 683–715

A new “dinv” arising from the two part case of the shuffle conjecture

  • A. Duane
  • A. M. Garsia
  • M. Zabrocki


For a symmetric function F, the eigen-operator \(\Delta_F\) acts on the modified Macdonald basis of the ring of symmetric functions by \(\Delta_{F} \tilde{H}_{\mu}= F[B_{\mu}] \tilde{H}_{\mu}\). In a recent paper (Int. Math. Res. Not. 11:525–560, 2004), J. Haglund showed that the expression \(\langle\Delta_{h_{J}} E_{n,k}, e_{n}\rangle\) q,t-enumerates, by \(t^{\operatorname{area}} q^{\operatorname{dinv}}\), the parking functions whose diagonal word is in the shuffle \(12\cdots J \,{\cup\!\cup}\, J+1\cdots J+n\) with k of the cars J+1,…,J+n in the main diagonal, including car J+n in the cell (1,1).

In view of some recent conjectures of Haglund–Morse–Zabrocki (Can. J. Math., doi: 10.4153/CJM-2011-078-4, 2011), it is natural to conjecture that replacing \(E_{n,k}\) by the modified Hall–Littlewood functions \(\mathbf{C}_{p_{1}}\mathbf{C}_{p_{2}}\cdots\mathbf{C}_{p_{k}} 1\) would yield a polynomial that enumerates the same collection of parking functions, but now restricted by the requirement that the Dyck path supporting the parking function touches the diagonal according to the composition \(p=(p_1,p_2,\ldots,p_k)\). We prove this conjecture by deriving a recursion for the polynomial \(\langle\Delta_{h_{J}} \mathbf{C}_{p_{1}}\mathbf{C}_{p_{2}}\cdots \mathbf{C}_{p_{k}} 1 , e_{n}\rangle \), using this recursion to construct a new \(\operatorname{dinv}\) statistic (which we denote \(\operatorname{ndinv}\)), then showing that this polynomial enumerates the latter parking functions by \(t^{\operatorname{area}} q^{\operatorname{ndinv}}\).


Symmetric functions · Macdonald polynomials · Parking functions

1 Introduction

Parking functions are endowed with a colorful history and jargon (see for instance [9]) that is very helpful in dealing with them both combinatorially and analytically. Here we will represent them interchangeably as two-line arrays or as tableaux. A single example of this correspondence should be sufficient for our purposes. In the figure below we have on the left the two-line array, with the list of cars \(V=(v_1,v_2,\ldots,v_n)\) on top and their diagonal numbers \(U=(u_1,u_2,\ldots,u_n)\) on the bottom. In the corresponding n×n tableau of lattice cells we have shaded the main diagonal (or 0-diagonal) and drawn the supporting Dyck path. The component \(u_i\) gives the number of lattice cells EAST of the ith NORTH step and WEST of the main diagonal. The cells adjacent to the NORTH steps of the path are filled with the corresponding cars from bottom to top. The resulting tableau uniquely represents a parking function if and only if the cars increase up the columns.
A necessary and sufficient condition for the vector U to give a Dyck path is that
$$ u_1=0\quad\hbox{and}\quad 0\le u_i \le u_{i-1}+1. $$
This given, the column-increasing property of the corresponding tableau is ensured by the requirement that \(V=(v_1,v_2,\ldots,v_n)\) is a permutation in \(S_n\) satisfying
$$ u_i=u_{i-1}+1\quad\Longrightarrow\quad v_i>v_{i-1}. $$
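These two conditions are easy to check mechanically. The following is a minimal sketch (our own code, with our own function names, not from the paper) that tests whether a two-line array \([V;U]\) represents a parking function:

```python
# Check conditions (1.1)-(1.2): U must be the area sequence of a Dyck
# path, and V a permutation increasing along each column run of U.

def is_dyck_area_sequence(U):
    """u_1 = 0 and 0 <= u_i <= u_{i-1} + 1 for all i."""
    if not U or U[0] != 0:
        return False
    return all(0 <= U[i] <= U[i - 1] + 1 for i in range(1, len(U)))

def is_parking_function(U, V):
    """The two-line array [V; U] represents a parking function iff U
    supports a Dyck path and u_i = u_{i-1} + 1 forces v_i > v_{i-1}."""
    n = len(U)
    if len(V) != n or sorted(V) != list(range(1, n + 1)):
        return False
    if not is_dyck_area_sequence(U):
        return False
    return all(V[i] > V[i - 1] for i in range(1, n) if U[i] == U[i - 1] + 1)

print(is_parking_function([0, 1, 1, 0], [2, 3, 1, 4]))   # True
```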
We should mention that the component \(u_i\) may also be viewed as the order of the diagonal supporting car \(v_i\). In the example above, car 3 is in the third diagonal, 1 and 8 are in the second diagonal, 5, 7 and 6 are in the first diagonal, and 2 and 4 are in the main diagonal. We have purposely listed the cars by diagonals from right to left, starting with the highest diagonal. This gives the diagonal word of PF, which we will denote σ(PF). It is easily seen that σ(PF) can also be obtained directly from the two-line array by successive right-to-left readings of the components of the vector \(V=(v_1,v_2,\ldots,v_n)\) according to decreasing values of \(u_1,u_2,\ldots,u_n\). In previous work, each parking function is assigned a weight
$$ w(\mathit{PF})= t^{\operatorname{area}(\mathit{PF})} q^{\operatorname{dinv}(\mathit{PF})} $$
$$ \operatorname{area}(\mathit{PF})=u_1+u_2+\cdots+u_n $$
$$ \operatorname{dinv}(\mathit{PF})= \sum_{1\le i<j\le n} \bigl( \chi( u_i=u_j\ \&\ v_i<v_j) + \chi( u_i=u_j+1\ \&\ v_i>v_j) \bigr). $$
It is clear from this imagery that the sum in (1.5) gives the total number of cells between the supporting Dyck path and the main diagonal. We also see that two cars in the same diagonal with the car on the left smaller than the car on the right will contribute a unit to \(\operatorname{dinv}(\mathit{PF})\). The same holds true when a car on the left is bigger than a car on the right with the latter in the adjacent lower diagonal. Thus in the present example we have
$$\operatorname{area}(\mathit{PF})=10, \qquad \operatorname{dinv}(\mathit{PF})=4, \qquad \sigma(\mathit{PF})=31857624, $$
$$w(\mathit{PF})= t^{10}q^4 . $$
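The three statistics can be computed directly from the two-line array. The sketch below is our own code on a small hypothetical example (the paper's 8-car example depends on its figure, which is not reproduced here):

```python
# area, dinv and diagonal word of a parking function given as the
# two-line array [V; U] (1-indexed cars, 0-indexed lists).

def area(U):
    """area(PF) = u_1 + u_2 + ... + u_n."""
    return sum(U)

def dinv(U, V):
    """Classical dinv: same-diagonal pairs increasing left to right,
    plus adjacent-diagonal pairs decreasing left to right."""
    n = len(U)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (U[i] == U[j] and V[i] < V[j])
               or (U[i] == U[j] + 1 and V[i] > V[j]))

def diagonal_word(U, V):
    """Read cars right to left by diagonals, highest diagonal first."""
    order = sorted(range(len(U)), key=lambda i: (-U[i], -i))
    return [V[i] for i in order]

U, V = [0, 1, 1, 0], [2, 3, 1, 4]                  # a small toy example
print(area(U), dinv(U, V), diagonal_word(U, V))    # 2 1 [1, 3, 4, 2]
```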
Here and after, the vectors U and V in the two-line representation will also be referred to as U(PF) and V(PF). It will also be convenient to denote by \(\mathcal{PF}_{n}\) the collection of parking functions in the n×n lattice square.
The shuffle conjecture [10] states that for any partition \(\mu=(\mu_1,\mu_2,\ldots,\mu_\ell)\vdash n\) we have the identity
$$ \langle\nabla e_n, h_{\mu_1}h_{\mu_2}\cdots h_{\mu_\ell}\rangle=\sum_{\mathit{PF}\in\mathcal{PF}_n} t^{\operatorname{area}(\mathit{PF})}q^{\operatorname{dinv}(\mathit{PF})} \chi\bigl(\sigma(\mathit{PF})\in\mathcal{E}_1 {\cup\!\cup}\mathcal{E}_2 {\cup\!\cup}\cdots {\cup\!\cup}\mathcal{E}_\ell\bigr) $$
where ∇ is the Macdonald eigen-operator introduced in [1], \(e_n\) is the familiar elementary symmetric function, \(h_{\mu_{1}}h_{\mu_{2}}\cdots h_{\mu_{\ell}}\) is the homogeneous symmetric function basis element indexed by μ, \(\mathcal{E}_{1},\mathcal{E}_{2},\ldots,\mathcal{E}_{\ell}\) are successive segments of the word 1234⋯n of respective lengths \(\mu_1,\mu_2,\ldots,\mu_\ell\), and the symbol \(\chi(\sigma(\mathit{PF})\in \mathcal{E}_{1}{\cup\!\cup}\mathcal{E}_{2}{\cup\!\cup}\cdots {\cup\!\cup}\mathcal{E}_{\ell})\) indicates that the sum is to be carried out over parking functions in \(\mathcal{PF}_{n}\) whose diagonal word is a shuffle of the words \(\mathcal{E}_{1},\mathcal{E}_{2},\ldots, \mathcal{E}_{\ell}\). In [8] Haglund proved the ℓ=2 case of (1.7). By a remarkable sequence of identities it is shown in [8] that this case is a consequence of the more refined identity
$$ \langle\Delta_{h_J} E_{n,k}, e_n\rangle=\sum_{\mathit{PF}\in\mathcal{PF}_{n+J}(k)} t^{\operatorname{area}(\mathit{PF})}q^{\operatorname{dinv}(\mathit{PF})} $$
with \(\mathcal{E}_{J}=12\cdots J\), \(\mathcal{E}_{n-J}=J+1\cdots J+n\), and the sum over the collection \(\mathcal{PF}_{n+J}(k)\) of parking functions in the (n+J)×(n+J) lattice square that have k of the cars J+1,…,J+n in the main diagonal, including car J+n in the cell (1,1). Here the \(E_{n,k}\) are certain ubiquitous symmetric functions introduced in [3] with sum
$$ E_{n,1}+E_{n,2}+\cdots+ E_{n,n}=e_n $$
and \(\Delta_{h_{j}}\) is the linear operator defined by setting, for the modified Macdonald basis of [4, 14],
$$ \Delta_{h_j} \tilde{H}_\mu[X;q,t] = h_j \biggl[ \sum_{(i,j)\in\mu}t^{i-1}q^{j-1} \biggr] \tilde{H}_\mu[X;q,t]. $$
More recently, J. Haglund, J. Morse and M. Zabrocki [11] formulated a variety of new conjectures yielding surprising refinements of the shuffle conjecture. In [11] they introduce a new ingredient in the theory of parking functions: the diagonal composition of a parking function, which we denote by p(PF), is simply the composition giving the positions of the zeros in the vector \(U=(u_1,u_2,\ldots,u_n)\), or equivalently the lengths of the segments of the main diagonal between successive hits of its supporting Dyck path. One of their conjectures is an identity valid for all \(p\models n\) and \(\mu\vdash n\). Here, for each integer a, \(\mathbf{C}_a\) is the operator whose action on a symmetric function F[X], in plethystic notation, can simply be expressed in the form
$$ \mathbf{C}_a F[X]= \biggl(-{{ 1\over q}} \biggr)^{a-1} F \biggl[X-{1-1/q \over z} \biggr]\displaystyle\sum _{m\ge0}z^m h_{m}[X] \bigg|_{z^a}. $$
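As a quick illustration (our computation, not displayed in the paper): taking F=1 in this formula, the plethystic substitution leaves the constant 1 unchanged, so only the coefficient of \(z^a\) in the generating-function factor survives:

```latex
\mathbf{C}_a\, 1
  \;=\; \Bigl(-\tfrac{1}{q}\Bigr)^{a-1}
        \sum_{m\ge 0} z^m h_m[X]\,\Big|_{z^a}
  \;=\; \Bigl(-\tfrac{1}{q}\Bigr)^{a-1} h_a[X].
```

In particular a single \(\mathbf{C}_a\) applied to 1 produces, up to the sign and power of q, a complete homogeneous function.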
Remarkably, the operators in (1.11) appear to control the shape of the supporting Dyck paths. Since in [11] it is shown that we also have the identity
$$ E_{n,k}=\sum_{p_1+p_2+\cdots+p_k=n} \mathbf{C}_{p_1} \mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k} 1 $$
a natural question to ask is what becomes of Haglund’s identity (1.8) when \(E_{n,k}\) is replaced by one of the symmetric polynomials \(\mathbf{C}_{p_{1}}\mathbf{C}_{p_{2}}\cdots\mathbf{C}_{p_{k}} 1\). Note, however, that since the k in (1.8), under the action of \(\Delta_{h_{J}}\), controls the number of big cars on the main diagonal, it is natural to suspect that the combination of \(\Delta_{h_{J}}\) and \(\mathbf{C}_{p_{1}}\mathbf{C}_{p_{2}}\cdots \mathbf{C}_{p_{k}} 1\) would result in forcing k of the big cars to hit the diagonal according to the composition \(p=(p_1,p_2,\ldots,p_k)\). Miraculous as this might appear to be, computer data beautifully confirm this mechanism … but only up to a point. In fact, following this line of reasoning, one might conjecture an identity in which \(p(\operatorname{big}(\mathit{PF}))\) refers to the diagonal composition of the big cars, but otherwise the sum is over the same parking functions occurring in (1.8). Now that turned out to be false. Yet computer data revealed that a certain (q-reduced) version of (1.8) is actually true. This circumstance led to the conjecture that (1.14) could be made true by replacing the classical parking function “dinv” by a new dinv more focused on the positions of the big cars.

The main result of this paper is a proof of this conjecture. Banking on the intuition gained from previous work [7], and using some of the identities developed there for the \(\mathbf{C}_a\) and \(\mathbf{B}_b\) operators, we are able to derive the following basic recursion.

Theorem 1.1

For all compositions \(p=(p_1,p_2,\ldots,p_k)\) we have
$$ \langle\Delta_{h_j}\mathbf{C}_{p_1}\mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k}1 , e_n\rangle = t^{p_1-1} \langle\Delta_{h_{j-1}}\mathbf{B}_{p_1}\mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k}1 , e_n\rangle +\chi(p_1=1) \langle\Delta_{h_{j}}\mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k}1 , e_{n-1}\rangle $$
with \(\mathbf{B}_{a}=\omega\widetilde{\mathbf{B}}_{a}\omega\) where, for any symmetric function F[X],
$$ \widetilde{\mathbf{B}}_a F[X]= F \biggl[ X- {1-q \over z} \biggr]{\varOmega}[ zX] \bigg|_{z^a}. $$
Now the Haglund–Morse–Zabrocki conjectures also assert that replacing the C operators by the B operators in (1.11) has the effect of allowing the controlled Dyck paths to hit the diagonal everywhere, including the points forced by the composition p. This led us to interpret the first polynomial on the right hand side of (1.8) as a weighted enumeration of the collection of parking functions, with diagonal word a shuffle of 12⋯(J−1) by J(J+1)⋯(n+J−1), whose big cars hit the main diagonal according to the collection of compositions obtained by concatenating \((p_2,\ldots,p_k)\) with an arbitrary composition of \(p_1\). Guided by this interpretation, by means of (1.16) we obtained a recursive construction of the appropriate new dinv and proved the desired identity. To carry out all this we need a collection of identities from Macdonald polynomial theory already used in previous work. These identities and the corresponding notational conventions are collected in the first section, with references to the original sources for their proofs. The second section is dedicated to the proof of Theorem 1.1. All the corresponding combinatorial reasoning, including the construction of the new dinv, is given in the third section, where our “ndinv” is also given an equivalent, somewhat less recursive construction, with the hope that it may be conducive to the discovery of a direct formula for the new dinv which, as in the case of the classical dinv, is closely related to the geometry of the corresponding parking function diagram.

2 Auxiliary identities from the Theory of Macdonald polynomials

The space of symmetric polynomials will be denoted Λ. The subspace of homogeneous symmetric polynomials of degree m will be denoted by \(\Lambda^{=m}\). We will seldom work with symmetric polynomials expressed in terms of variables but rather express them in terms of one of the six classical symmetric function bases
  1. “power” \(\{p_\mu\}_\mu\),
  2. “monomial” \(\{m_\mu\}_\mu\),
  3. “homogeneous” \(\{h_\mu\}_\mu\),
  4. “elementary” \(\{e_\mu\}_\mu\),
  5. “forgotten” \(\{f_\mu\}_\mu\), and
  6. “Schur” \(\{s_\mu\}_\mu\).

We recall that the fundamental involution ω may be defined by setting, for the power basis indexed by \(\mu=(\mu_1,\mu_2,\ldots,\mu_k)\vdash n\),
$$ \omega p_\mu=(-1)^{n-k}p_\mu= (-1)^{|\mu|-l(\mu)}p_\mu $$
where for any vector \(v=(v_1,v_2,\ldots,v_k)\) we set \(|v|=\sum_{i=1}^{k} v_{i}\) and \(l(v)=k\).
In dealing with symmetric function identities, especially those arising in the theory of Macdonald polynomials, we find it convenient and often indispensable to use plethystic notation. This device has a straightforward definition which can be verbatim implemented in MAPLE or MATHEMATICA for computer experimentation. We simply set, for any expression \(E=E(t_1,t_2,\ldots)\) and any power symmetric function \(p_k\),
$$ p_k[E]= E \bigl( t_1^k,t_2^k, \ldots\bigr). $$
This given, for any symmetric function F we set
$$ F[E]= Q_F(p_1,p_2, \ldots) |_{p_k{\rightarrow}E( t_1^k,t_2^k,\ldots)} $$
where \(Q_F\) is the polynomial yielding the expansion of F in terms of the power basis. Note that in writing \(E(t_1,t_2,\ldots)\) we are tacitly assuming that \(t_1,t_2,t_3,\ldots\) are all the variables appearing in E, and in writing \(E(t_{1}^{k},t_{2}^{k},\ldots)\) we intend that all the variables appearing in E have been raised to their kth power.
A paradoxical but necessary property of plethystic substitutions is that (2.2) requires
$$ p_k[-E]=-p_k[E]. $$
This notwithstanding, we will still need to carry out ordinary changes of signs. To distinguish it from the plethystic minus sign, we will carry out the ordinary sign change by prepending our expressions with a superscripted minus sign, or, as the case may be, by means of a new variable ϵ which outside of the plethystic bracket is simply replaced by −1. For instance, these conventions give, for \(X_n=x_1+x_2+\cdots+x_n\),
$$p_k \bigl[-^-X_n \bigr]= (-1)^{k-1} \sum _{i=1}^nx_i^k $$
or, equivalently
$$p_k[ -\epsilon X_n]= -\epsilon^k\sum _{i=1}^nx_i^k= (-1)^{k-1} \sum_{i=1}^nx_i^k. $$
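These substitution rules are easy to experiment with numerically. Below is a minimal sketch (our own code; the toy expression E is ours) in which an expression is encoded as a function of its variables, so that \(p_k[E]\) simply raises every variable to its kth power:

```python
from fractions import Fraction

# p_k[E]: replace every variable of E by its k-th power.  This makes the
# plethystic rule p_k[-E] = -p_k[E] automatic, and the epsilon device
# recovers the ordinary sign change, as in the displays above.

def p_k(E, k):
    return lambda *xs: E(*[x ** k for x in xs])

E = lambda t1, t2: t1 + t2 - t1 * t2        # a toy expression E(t_1, t_2)
negE = lambda t1, t2: -E(t1, t2)            # plethystic -E

q, t = Fraction(1, 2), Fraction(1, 3)
print(p_k(E, 3)(q, t))                      # E(t_1^3, t_2^3) = 17/108
print(p_k(negE, 3)(q, t) == -p_k(E, 3)(q, t))   # True: p_k[-E] = -p_k[E]

# epsilon device: p_3[-eps X_2] = -eps^3 (x_1^3 + x_2^3); at eps = -1
# this equals (-1)^{3-1} (x_1^3 + x_2^3).
Xeps = lambda eps, x1, x2: -eps * (x1 + x2)
print(p_k(Xeps, 3)(Fraction(-1), q, t) == q ** 3 + t ** 3)   # True
```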
In particular we get, for \(X=x_1+x_2+x_3+\cdots\),
$$\omega p_k[X]= p_k \bigl[-^-X \bigr]. $$
Thus for any symmetric function \(F\in\Lambda\) and any expression E we have
$$ \omega F[E]= F \bigl[-^-E \bigr]= F[-\epsilon E]. $$
In particular, if \(F\in\Lambda^{=k}\) we may also rewrite this as
$$ F[-E]= \omega F \bigl[^-E \bigr]=(-1)^k \omega F[ E]. $$
The formal power series
$$ {\varOmega}=\exp \biggl(\sum_{k\ge1}{p_k \over k} \biggr) $$
combined with plethystic substitutions will provide a powerful way of dealing with the many generating functions occurring in our manipulations.
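For \(X=x_1+x_2+\cdots\) the exponential can be expanded explicitly (a standard computation, recorded here for convenience):

```latex
\Omega[X]
  \;=\; \exp\Bigl(\sum_{k\ge 1}\frac{p_k[X]}{k}\Bigr)
  \;=\; \exp\Bigl(\sum_{i}\sum_{k\ge 1}\frac{x_i^{k}}{k}\Bigr)
  \;=\; \prod_{i}\frac{1}{1-x_i}
  \;=\; \sum_{m\ge 0} h_m[X].
```

In particular \(\Omega[zX]=\sum_{m\ge0}z^m h_m[X]\), which is exactly the generating-function factor occurring in the definitions of the \(\mathbf{C}_a\) and \(\widetilde{\mathbf{B}}_a\) operators above.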
Here and after it will be convenient to identify partitions with their (French) Ferrers diagram. Given a partition μ and a cell cμ, Macdonald introduces four parameters l=l μ (c), \(l'=l'_{\mu}(c)\), a=a μ (c) and \(a'=a'_{\mu}(c)\) called leg, coleg, arm and coarm, which give the number of lattice cells of μ strictly NORTH, SOUTH, EAST and WEST of c (see attached figure). Following Macdonald we will set
$$ n(\mu)=\sum_{c\in\mu} l_\mu(c), \qquad n\bigl(\mu'\bigr)=\sum_{c\in\mu} a_\mu(c). $$
Denoting by μ′ the conjugate of μ, the basic ingredients playing a role in the theory of Macdonald polynomials are
$$ \begin{aligned} &T_\mu=t^{n(\mu)}q^{n(\mu')} , \qquad B_\mu(q,t) =\sum_{c\in\mu}t^{l'_\mu(c)}q^{a'_\mu(c)} , \\ &\varPi_\mu(q,t)= \prod_{c\in\mu;c\neq(0,0)} \bigl(1-t^{l'_\mu(c)}q^{a'_\mu(c)} \bigr) , \qquad M=(1-t) (1-q) \\ & D_\mu(q,t)= MB_\mu(q,t)- 1, \qquad \\ & w_\mu(q,t)=\prod_{c\in\mu} \bigl(q^{a _\mu(c)} -t^{l _\mu(c)+1} \bigr) \bigl(t^{l _\mu(c)} -q^{a _\mu(c)+1} \bigr), \end{aligned} $$
together with a deformation of the Hall scalar product, which we call the star scalar product, defined by setting for the power basis
$$\langle p_\lambda, p_\mu\rangle_*= (-1)^{|\mu|-l(\mu)} \prod _i \bigl(1-t^{\mu_i} \bigr) \bigl(1-q^{\mu_i} \bigr) z_\mu\chi(\lambda=\mu) , $$
where z μ gives the order of the stabilizer of a permutation with cycle structure μ.
This given, the modified Macdonald Polynomials we will deal with here are the unique symmetric function basis \(\{\tilde{H}_{\mu}(X;q,t) \}_{\mu}\) which is upper triangularly related to the basis \(\{s_{\lambda}[{X\over t-1}]\}_{\lambda}\) and satisfies the orthogonality condition
$$ \bigl\langle\tilde{H}_\lambda,\tilde{H}_\mu\bigr\rangle_*=\chi( \lambda=\mu) w_\mu(q,t). $$
In this writing we will make intensive use of the operator ∇ defined by setting for all partitions μ
$$\nabla\tilde{H}_\mu= T_\mu\tilde{H}_\mu. $$
A closely related family of symmetric function operators is obtained by setting for a symmetric function F[X]
$$\Delta_F \tilde{H}_\mu= F[B_\mu] \tilde{H}_\mu. $$
It is good to keep in mind that, because of the relation e n [B μ ]=T μ for μn, the operator ∇ itself reduces to \(\Delta_{e_{n}}\) when acting on symmetric polynomials that are homogeneous of degree n.
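The relation \(e_n[B_\mu]=T_\mu\) is easy to verify numerically: \(B_\mu\) is an alphabet of n letters \(t^{l'}q^{a'}\), and \(e_n\) of an n-letter alphabet is simply the product of the letters. A sketch (our own code, with our own helper names):

```python
from fractions import Fraction
from math import prod

# Numeric sanity check of e_n[B_mu] = T_mu for a few partitions mu,
# with mu given as a weakly decreasing tuple of parts.

def cells(mu):
    """(coleg l', coarm a') of each cell of the French diagram of mu."""
    return [(r, c) for r, part in enumerate(mu) for c in range(part)]

def T(mu, q, t):
    n_mu = sum(lp for lp, _ in cells(mu))    # n(mu):  sum of colegs = sum of legs
    n_conj = sum(ap for _, ap in cells(mu))  # n(mu'): sum of coarms = sum of arms
    return t ** n_mu * q ** n_conj

def e_n_of_B(mu, q, t):
    """e_n on the n letters t^{l'} q^{a'} of B_mu = product of the letters."""
    return prod(t ** lp * q ** ap for lp, ap in cells(mu))

q, t = Fraction(2, 3), Fraction(5, 7)
for mu in [(1,), (2,), (2, 1), (3, 1, 1), (2, 2)]:
    assert e_n_of_B(mu, q, t) == T(mu, q, t)
print("e_n[B_mu] = T_mu verified")
```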
Recall that for our version of the Macdonald polynomials the Macdonald Reciprocity formula states that
$$ {\tilde{H}_\alpha[1+u D_\beta] \over\prod _{c\in\alpha} \bigl(1-u t^{l'}q^{a'} \bigr)} = { \tilde{H}_\beta[1+u D_\alpha] \over\prod _{c\in\beta} \bigl(1-u t^{l'}q^{a'} \bigr)} \quad\hbox{(for\ all\ pairs}\ \alpha,\beta). $$
We will use here several special evaluations of (2.11). To begin, canceling the common factor (1−u) out of the denominators on both sides of (2.11) then setting u=1 gives
$$ {\tilde{H}_\alpha[M B_\beta] \over\varPi_\alpha} = { \tilde{H}_\beta[ M B_\alpha] \over\varPi_\beta} \quad\hbox{(for\ all\ pairs}\ \alpha,\beta). $$
On the other hand replacing u by 1/u and letting u=0 in (2.11) gives
$$ (-1)^{|\alpha|}{\tilde{H}_\alpha[D_\beta] \over T_\alpha} = (-1)^{|\beta|}{\tilde{H}_\beta[ D_\alpha] \over T_\beta} \quad\hbox{(for\ all\ pairs}\ \alpha,\beta). $$
Since for β the empty partition we can take \(\tilde{H}_{\beta}=1\) and D β =−1, (2.11) in this case for α=μ reduces to
$$ \tilde{H}_\mu[1-u ]=\prod_{c\in\mu} \bigl(1-ut^{l'}q^{a'} \bigr)=(1-u)\sum _{r=0}^{n-1} (-u)^r e_r[B_\mu-1]. $$
This identity yields the coefficients of hook Schur functions in the expansion
$$ \tilde{H}_\mu[X;q,t]=\sum_{\lambda\vdash|\mu|}s_\lambda[X] \tilde{K}_{\lambda\mu}(q,t). $$
Recall that the addition formula for Schur functions gives
$$ s_\mu[1-u]= \begin{cases} (-u)^r(1-u) & \hbox{if}\ \mu=(n-r,1^r),\\ 0 & \mbox{otherwise.} \end{cases} $$
Thus (2.15), with X=1−u, combined with (2.14) gives for μn
$$\bigl\langle\tilde{H}_\mu, s_{(n-r,1^r)}\bigr\rangle= e_r[B_\mu-1] $$
and the identity \(e_{r}h_{n-r}=s_{(n-r,1^{r})}+s_{(n-r-1,1^{r-1})}\) gives
$$ \bigl\langle\tilde{H}_\mu, e_rh_{n-r}\bigr\rangle= e_r[B_\mu]. $$
Since for β=(1) we have \(\tilde{H}_{\beta}=1\) and Π β =1, formula (2.12) reduces to the surprisingly simple identity
$$ \tilde{H}_\alpha[M]= MB_\alpha\varPi_\alpha. $$
Last but not least we must also recall that we have the Pieri formulas
$$ (\mathrm{a})\quad e_1\tilde{H}_\nu=\sum _{\mu{\leftarrow}\nu}d_{\mu\nu}\tilde{H}_\mu, \qquad (\mathrm{b}) \quad e_1^\perp\tilde{H}_\mu=\sum _{\nu{\rightarrow}\mu}c_{\mu\nu} \tilde{H}_\nu, $$
and their corresponding summation formulas (see [2, 6, 15]). Here \(\nu{\rightarrow}\mu\) simply means that the sum is over ν’s obtained from μ by removing a corner cell, and \(\mu{\leftarrow}\nu\) means that the sum is over μ’s obtained from ν by adding a corner cell.
It will also be useful to know that these two Pieri coefficients are related by the identity
$$ d_{\mu\nu}= M c_{\mu\nu} {w_\nu\over w_\mu}. $$
Recall that the Hall scalar product in the theory of symmetric functions may be defined by setting, for the power basis
$$ \langle p_\lambda, p_\mu\rangle= z_\mu\chi( \lambda=\mu) . $$
It follows from this that the ∗-scalar product is simply related to the Hall scalar product by setting for all pairs of symmetric functions f,g
$$ \langle f, g\rangle_* =\langle f,\omega\phi g\rangle, $$
where it has been customary to let ϕ be the operator defined by setting for any symmetric function f
$$ \phi f[X]= f[MX]. $$
Note that the inverse of ϕ is usually written in the form
$$ f^*[X]= f[X/M]. $$
In particular we also have for all symmetric functions f,g
$$ \langle f, g\rangle=\bigl\langle f, \omega g^* \bigr\rangle_* . $$
The orthogonality relations in (2.10) yield the Cauchy identity for our Macdonald polynomials in the form
$$ {\varOmega} \biggl[-\epsilon {XY\over M} \biggr]=\sum _{\mu} {\tilde{H}_\mu[X]\tilde{H}_\mu[Y] \over w_\mu} $$
which restricted to its homogeneous component of degree n in X and Y reduces to
$$ e_n \biggl[ {XY\over M} \biggr]=\sum _{\mu\vdash n} {\tilde{H}_\mu[X]\tilde{H}_\mu[Y] \over w_\mu}. $$

Note that the orthogonality relations in (2.10) yield the following Macdonald polynomial expansions:

Proposition 2.1

For all n≥1 we have
Finally it is good to keep in mind, for future use, that we have for all partitions μ
$$ T_\mu\omega\tilde{H}_\mu[X;1/q,1/t]= \tilde{H}_\mu[X;q,t] . $$

Remark 2.2

It was conjectured in [5] and proved in [12] that the bigraded Frobenius characteristic of the diagonal harmonics of S n is given by the symmetric function
$$ \mathit{DH}_n[X;q,t]=\sum_{\mu\vdash n}{T_\mu \tilde{H}_\mu(X;q,t)M B_\mu(q,t) \varPi_\mu(q,t) \over w_\mu(q,t)}. $$
Surprisingly the intricate rational function on the right hand side is none other than ∇e n . To see this we simply combine the relation in (2.18) with the degree n restricted Macdonald–Cauchy formula (2.29) obtaining
$$ e_n[X]=e_n \biggl[{XM\over M} \biggr]=\sum _{\mu\vdash n} {\tilde{H}_\mu[X]MB_\mu \varPi_\mu\over w_\mu}. $$
This is perhaps the simplest way to prove (2.30)(f). This discovery is precisely what led to the introduction of ∇ in the first place.

3 Proof of the basic recursion

To establish Theorem 1.1 we need some preliminary observations. To begin we have the following reduction.

Theorem 3.1

For all \(p=(p_1,p_2,\ldots,p_k)\models n\) and j≥0 we have
$$ \langle\Delta_{h_j}\mathbf{C}_{p_1}\mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k}1 , e_n\rangle = t^{p_1-1} \langle\Delta_{h_{j-1}}\mathbf{B}_{p_1}\mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k}1 , e_n\rangle +\chi(p_1=1) \langle\Delta_{h_{j}}\mathbf{C}_{p_2}\cdots\mathbf{C}_{p_k}1 , e_{n-1}\rangle $$
if and only if, with \(\mathbf{C}_{a}^{*}\) and \(\mathbf{B}_{a}^{*}\) the ∗-scalar product duals of \(\mathbf{C}_a\) and \(\mathbf{B}_a\), we have
$$ \mathbf{C}^*_a\Delta_{h_j}h_n \biggl[{X \over M} \biggr] = t^{a-1}\mathbf{B}^*_{a} \Delta_{h_{j-1}}h_{n} \biggl[{X\over M} \biggr] +\chi(a=1) \Delta_{h_j}h_{n-1} \biggl[{X\over M} \biggr] $$
for all j≥0 and 1≤an.

We will give a proof of (3.1) first, and then in the following pages we will establish (3.2) after developing a few necessary identities.

Proof of (3.1)

It is shown in [11] that the operators C a and B b satisfy the commutativity relations
$$q\mathbf{C}_a\mathbf{B}_b=\mathbf{B}_b \mathbf{C}_a \quad(\hbox{for\ all}\ a,b\ge1). $$
Using these identities we can rewrite (3.1) so that only the leading operator differs on the two sides. Passing to ∗-scalar products, and using the identity in (2.27), we can next rewrite (3.1) in the form
$$ \bigl\langle\Delta_{h_j}\mathbf{C}_{p_1}F[X], h_n^* \bigr\rangle_* = t^{p_1-1} \bigl\langle \Delta_{h_{j-1}}\mathbf{B}_{p_1} F[X], h_n^* \bigr \rangle_* +\chi(p_1=1)\bigl\langle\Delta_{h_{j}}F[X] , h_{n-1}^* \bigr\rangle_* $$
and the validity of this identity for every symmetric function F[X] that is homogeneous of degree \(n-p_1\) is equivalent to (3.1), since when \(p_2,\ldots,p_k\) are the parts of a partition the polynomials \(\mathbf{C}_{p_{2}}\cdots\mathbf{C}_{p_{k}} 1 \) are essentially elements of the Hall–Littlewood basis. Now, since all the operators Δ F are self-adjoint with respect to the ∗-scalar product, (3.3) in turn can be rewritten in the form
$$\bigl\langle F[X], \mathbf{C}_{p_1}^* \Delta_{h_j}h_n^* \bigr\rangle_* = t^{p_1-1} \bigl\langle F[X], \mathbf{B}_{p_1}^* \Delta_{h_{j-1}} h_n^* \bigr\rangle_* +\chi(p_1=1) \bigl\langle F[X], \Delta_{h_{j}} h_{n-1}^* \bigr\rangle_* $$
and this identity (for all \(p_1\ge1\)) is equivalent to (3.2) due to the arbitrariness of F[X]. This completes our proof. □

Our next goal is to prove (3.2). To begin we have the following auxiliary identity.

Proposition 3.2

$$ \Delta_{h_j}h_n \biggl[{X\over M} \biggr] =\sum _{s=0}^jh_{j-s} \biggl[{1 \over M} \biggr](-1)^{s} \sum_{\nu\vdash s}{T_\nu^2 \over w_\nu} h_n \biggl[X \biggl({1\over M}-B_\nu\biggr) \biggr]. $$


Proof

Using (2.30)(c) and the definition of the operator \(\Delta _{h_{j}}\) we get an expansion which yields (3.4), since \(e_{n} [ X(B_{\nu}-{1\over M}) ]=(-1)^{n} h_{n} [X({1\over M}-B_{\nu}) ]\). □
It is good to keep in mind that in particular we also have
$$ \Delta_{h_{j}}h_{n-1} \biggl[{X\over M} \biggr]= \sum _{s=0}^j(-1)^s h_{j-s} \biggl[{1\over M} \biggr] \sum _{\nu\vdash s}{T_\nu^2\over w_\nu} h_{n-1} \biggl[X \biggl({1\over M}-B_\nu\biggr) \biggr] $$
and, replacing j by j−1 in (3.4),
$$ \Delta_{h_{j-1}}h_{n} \biggl[{X\over M} \biggr]= \sum _{s=0}^{j-1}(-1)^sh_{j-1-s} \biggl[{1\over M} \biggr] \sum_{\nu\vdash s}{T_\nu^2 \over w_\nu} h_n \biggl[X \biggl({1\over M}-B_\nu\biggr) \biggr]. $$

Next we have

Proposition 3.3


Proof

The identity in (3.4) gives
$$ \mathbf{C}^*_a\Delta_{h_j}h_n \biggl[{X \over M} \biggr] =\sum_{s=0}^j h_{j-s} \biggl[{1\over M} \biggr](-1)^{s} \sum _{\nu\vdash s}{T_\nu^2\over w_\nu} \mathbf{C}^*_ah_n \biggl[X \biggl({1\over M}-B_\nu\biggr) \biggr]. $$
Now it was shown in [7] that for all P[X]∈Λ we have
$$\mathbf{C}_a^* P[X]= \biggl({ -1\over q} \biggr)^{a-1} P \biggl[X-{\epsilon M\over z} \biggr]{\varOmega} \biggl [{-\epsilon zX \over q(1-t)} \biggr] \bigg|_{z^{-a}}. $$
Applying this to \(h_n [X ({1\over M}-B_\nu) ]\) gives
$$\mathbf{C}^*_a h_n \biggl[X \biggl({1\over M}-B_\nu\biggr) \biggr] =-\sum_{r=a}^n h_{n-r} \biggl[ X \biggl({1\over M}-B_\nu\biggr) \biggr] {1\over q^{r-1}}h_r [ M B_\nu-1 ] h_{r-a} \biggl[{ - X\over1-t} \biggr] $$
and the last sum in (3.8) can be rewritten accordingly. Using the summation identity in (2.20) in the form
$$h_{r} [ MB_\nu-1 ]= (tq)^{r-1} \sum _{\tau{\rightarrow}\nu}Mc_{\nu\tau} \biggl({T_\nu\over T_\tau} \biggr)^{r-1} -\chi(r=1) $$
we get a further reduction. Using (2.22) and the fact that there are no partitions of size −1, (3.8) simplifies, and (3.5) accounts for one of the resulting terms. Since \(B_{\nu}=B_{\tau}+{T_{\nu}\over T_{\tau}}\) we derive an expression to which we may apply the summation formula in (2.21) in the form
$$\sum_{\mu\leftarrow\nu}d_{\mu\nu}(q,t) \biggl(\frac{T_\mu}{T_\nu}\biggr)^k= \begin{cases} h_{k-1} [1-MB_\nu] &\hbox{if}\ k\geq1,\\ 1 & \mbox{if}\ k=0 \end{cases} $$
together with the fact that \(v\ge0\) and that in (3.10) we have \(r\ge a\ge1\), to obtain
$$\sum_{\nu{\leftarrow}\tau}d_{\nu\tau} \biggl( {T_\nu\over T_\tau} \biggr)^{v+1} = h_{v} [1-MB_\tau] $$
and (3.10) reduces to an expression which, suitably rewritten, proves (3.7) and completes our proof of Proposition 3.3. □
Let us now work on \(\mathbf{B}^{*}_{a} \Delta_{h_{j-1}}h_{n}[{X\over M}] \). Here we use the identity in (3.6), that is,
$$\Delta_{h_{j-1}}h_n \biggl[{X\over M} \biggr] =\sum _{s=0}^{j-1}h_{j-1-s} \biggl[{1 \over M} \biggr](-1)^{s} \sum_{\nu\vdash s}{T_\nu^2 \over w_\nu} h_n \biggl[X \biggl({1\over M}-B_\nu\biggr) \biggr] $$
and the identity
$$\mathbf{B}_a^* P[X]= P \biggl[X+{M\over z} \biggr]{\varOmega} \biggl[ {-zX\over1-t} \biggr] \bigg|_{z^{-a}} $$
(established in [7]) which gives the action of the operators \(\mathbf{B}_{a}^{*}\). Carrying out this computation and comparing with the right hand side of (3.7), we see that we have established the identity
$$t^{a-1} \mathbf{B}_a^* \Delta_{h_{j-1}}h_{n} \biggl[{X\over M} \biggr] =\mathbf{C}^*_a\Delta_{h_j}h_n \biggl[{X\over M} \biggr] -\chi(a=1)\Delta_{h_j}h_{n-1} \biggl[{X\over M} \biggr]. $$
This completes our proof of (3.2) and consequently also the proof of Theorem 1.1.

4 The construction of the new dinv

Let \(\mathcal{PF}(J,n)\) denote the collection of Parking functions on the (J+n)×(J+n) lattice square whose diagonal word is a shuffle of the two words \(\mathcal{E}_{J}=12\cdots J\) and \(\mathcal{E}_{J,n}=J+1\cdots J+n\) with car J+n in the (1,1) lattice square. In symbols
$$ \mathcal{PF}(J,n)= \bigl\{\mathit{PF}\in\mathcal{PF}_{J+n} : \sigma(\mathit{PF})\in \mathcal{E}_J{\cup\!\cup}\mathcal{E}_{J,n}\ \mbox{\&}\ J+n\in(1,1) \bigr\}. $$

Before we can proceed with our construction of the new dinv, we need some preliminary observations about this family of parking functions. To begin we should note that the condition that the diagonal word be a shuffle of 12⋯J with J+1⋯J+n, together with the column-increasing property of parking functions, forces the columns of the Dyck path supporting a \(\mathit{PF}\in\mathcal{PF}(J,n)\) to be of length at most 2 (where by the length of column i of a Dyck path D we refer to the number of NORTH steps of D of abscissa i). The reason for this is simple: as we read the cars of PF to obtain σ(PF), from right to left by diagonals starting from the highest and ending with the lowest, the big cars (J+1,…,J+n) as well as the small cars (1,2,…,J) will be increasing. Thus we will never see a big car on top of a big car, nor a small car on top of a small car. So the only possibility is a big car on top of a small car, i.e. columns of length at most 2, as we asserted.

This yields an algorithm for constructing all the elements of the family \(\mathcal{PF}(J,n)\). Let us denote by “\(\operatorname{red}(\mathit{PF})\)”, and call it the “reduced tableau” of PF, the configuration obtained by replacing in a \(\mathit{PF}\in\mathcal{PF}(J,n)\) all big cars by a 2 and all small cars by a 1. We can simply obtain all the reduced tableaux of elements of \(\mathcal{PF}(J,n)\) by constructing first the family \(\mathcal{D}_{J,n}\) of Dyck paths of length n+J with no more than J columns of length 2 and all remaining columns of length 1. Then for each Dyck path \(D\in\mathcal{D}_{J,n}\) fill the cells adjacent to the NORTH steps of each column of length 2 by a 1 under a 2, then fill the columns of length 1 by a 1 or a 2 for a total of J ones and n twos.

Clearly each \(\mathit{PF}\in\mathcal{PF}(J,n)\) can be uniquely reconstructed from its reduced tableau by replacing all the ones by 1,2,…,J and all the twos by J+1,…,J+n by diagonals from right to left starting from the highest and ending with the lowest. It will also be clear that we need only work with reduced tableaux to construct our new dinv. However, being able to refer to the original cars will turn out to be more convenient in some of our proofs. For this reason we will work with a PF or its \(\operatorname{red}(\mathit{PF})\) interchangeably depending on the context.

This given, we have the following basic fact.

Proposition 4.1

For any
$$\mathit{PF}= \begin{bmatrix} v_1 & v_2 & \ldots& v_n \\ u_1 & u_2 & \ldots& u_n \\ \end{bmatrix} \in\mathcal{PF}(J,n) $$
if we set {i 1<i 2<⋯<i k }={i∈[1,J+n]:v i >J} then the vector
$$U_B(\mathit{PF}) =(u_{i_1},u_{i_2}, \ldots,u_{i_k}) $$
gives the area sequence of a Dyck path, which here and after will be referred to as the Dyck path “supporting” the big cars of PF.


Proof

Since car J+n is in the (1,1) lattice square it follows that \(u_{i_{1}}=0\). Thus we need only show that
$$u_{i_s}\le u_{i_{s-1}}+1\quad\hbox{for all}\ 2\le s\le k. $$
By definition \(v_{i_{s-1}}\) and \(v_{i_{s}}\) are two successive big cars in PF, which means that for i s−1<j<i s the car v j is small and thus, except perhaps for j=i s −1, the car v j must be in a column of length 1. In particular we see that we must have \(u_{j}\le u_{i_{s-1}}\), for the first violation of this inequality would put a small car above a big car (for j=i s−1+1) or a small car above a small car (for j>i s−1+1). This gives \(u_{i_{s}} \le u_{i_{s-1}}+1\) as desired, with equality only if car \(v_{i_{s}}\) is at the top of a column of length 2 and all the small cars in between \(v_{i_{s-1}}\) and \(v_{i_{s}}\) are in the same diagonal as \(v_{i_{s-1}}\). □
In view of this result we are now going to focus on the subfamilies \(\mathcal{PF}_{J}(p)\) of \(\mathcal{PF}(J,n)\) consisting of its elements whose big cars have a supporting Dyck path which hits the diagonal according to a given composition \(p=(p_1,p_2,\ldots,p_k)\models n\). Our goal is to construct a statistic “ndinv” which yields the equality
$$ \langle\Delta_{h_J}\mathbf{C}_{p_1}\mathbf{C}_{p_2} \cdots\mathbf{C}_{p_k} 1, e_n\rangle= \sum _{\mathit{PF}\in\mathcal{PF}_{J}(p)} t^{\operatorname{area}(\mathit{PF})} q^{\operatorname{ndinv}(\mathit{PF})}. $$
But before we do this it may be good to experiment a little by constructing some of these families.
By reversing the argument we used to prove Proposition 4.1, we can start by constructing all the Dyck paths with the given diagonal composition and then add all the cars as required by the definition of the family. This is best illustrated by examples. Say we start with p=(3,2). In this case there are only two possible Dyck paths, as given below on the left. On the right we added the 2’s and their corresponding diagonal numbers. Now the least number of 1’s we need to add to get a legal reduced diagram is 3 for the first and 2 for the second, as shown below. Now a MAPLE computation yields the polynomials
$$ \langle\Delta_{h_2}\mathbf{C}_3\mathbf{C}_2 1, e_5 \rangle= t^3 q^4 $$
$$ \langle\Delta_{h_3}\mathbf{C}_3\mathbf{C}_2 1, e_5 \rangle= t^3 \bigl(q^4+q^5+q^6+q^7+q^8 \bigr)+t^4 \bigl(q^4+q^5+q^6 \bigr)+t^5 q^4. $$
To compute the classical weight t area q dinv it is better to have a look at the non-reduced versions of the two tableaux above. Now in the first PF, the pairs (3,7), (1,5) and (6,2) are the only ones contributing to the dinv and the sum of the area numbers is 5, so its classical weight is t 5 q 3. Similarly, the pairs contributing to the dinv of the PF on the right are (2,6), (5,1) and (4,1) and the area numbers add to 3, so its classical weight is t 3 q 3. The latter is not the same as what comes out of (4.3). The area is OK but the dinv is not. The calculation in (4.4) thus asserts that the “new dinv” should be 4. Similarly, as we will show in a moment, the calculation in (4.5) yields the result that the new dinv of the PF on the left of (4.6) should be 4 again. In fact, it turns out that none of the 8 parking functions we obtain by inserting an extra 1 in the reduced tableau on the right of (4.3) have area 5, thus the last term in (4.5) can only be produced by the PF on the left of (4.6). We give below the 8 above mentioned reduced tableaux with the extra 1 shaded. Therefore the reduced tableaux of the family \(\mathcal{PF}_{3} ([3,2])\) are nine altogether, as predicted by (4.5), namely the eight above together with the tableau on the left of (4.3). Computing their classical weights and summing gives
$$\sum_{\mathit{PF}\in\mathcal{PF}_{3} ([3,2])} t^{\operatorname{area}(\mathit{PF})}q^{\operatorname{dinv}(\mathit{PF})} = t^3 \bigl(q^4+2q^5+2q^6 \bigr)+t^4 \bigl(q^3+q^4+q^5 \bigr)+t^5 q^3 $$
the first eight terms from (4.7) and the last from the left tableau in (4.3). As we see, this is not quite the same polynomial as in (4.5). Note that the area again works but the classical dinv does not!
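For reference, the classical statistics can be computed directly from the two line array; the sketch below is ours, using the standard convention (rows indexed bottom to top, with a pair of rows i<j contributing to dinv when their diagonal numbers are equal and the cars increase upward, or when the lower diagonal number exceeds the upper by 1 and the cars decrease). The arrays in the example are illustrative, not taken from the tableaux above.

```python
from itertools import combinations

def area(U):
    """area = sum of the diagonal numbers u_1, ..., u_n."""
    return sum(U)

def dinv(V, U):
    """Classical dinv of a parking function given as a two line array:
    cars V and diagonal numbers U, both read bottom to top."""
    count = 0
    for i, j in combinations(range(len(V)), 2):  # row i below row j
        if U[i] == U[j] and V[i] < V[j]:
            count += 1
        elif U[i] == U[j] + 1 and V[i] > V[j]:
            count += 1
    return count

# two cars side by side on the main diagonal, increasing upward
print(area((1, 2)), dinv((1, 2), (0, 0)))
```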

For a while in our investigation this appeared to be a challenging puzzle. The discovery of the recursion of Theorem 1.1 completely solved this puzzle but, as we shall see, it created another puzzle.

Let us have a closer look at the identity in (1.16). Setting, for a composition p=(p 1,p 2,…,p k )⊨n,
$$\varPi_J(p)=\sum_{\mathit{PF}\in\mathcal{PF}_J(p)}t^{\operatorname{area}(\mathit{PF})}q^{\operatorname{ndinv}(\mathit{PF})} $$
our conjecture, together with the identity
$$\mathbf{B}_{p_1} 1= e_{p_1}= \sum _{(q_1,q_2,\ldots,q_\ell)\models p_1} \mathbf{C}_{q_1}\mathbf{C}_{q_2} \cdots\mathbf{C}_{q_\ell} 1 $$
proved in [10], translates (4.8) into a recursion in which the symbol [p 2,…,p k ,r] represents the concatenation of the compositions (p 2,…,p k ) and r. This strongly suggests what should happen, recursively, to the new weight of our parking functions upon the removal of a single (appropriate) car. That is, if the chosen car is small there should be a loss of area of p 1−1 and a loss of ndinv of k−1, and if the chosen car is big there should be no loss of any kind.
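As an illustrative check of the displayed identity (ours, for the smallest nontrivial case p 1=2), a direct computation with the C a operators gives

$$\mathbf{C}_2 1= -\frac{1}{q}\, h_2= -\frac{1}{q}\, s_2,\qquad \mathbf{C}_1\mathbf{C}_1 1= s_{11}+\frac{1}{q}\, s_2, $$

so that indeed \(\mathbf{C}_2 1+\mathbf{C}_1\mathbf{C}_1 1= s_{11}= e_2= \mathbf{B}_2 1\).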

This observation, and a further closer analysis of (4.8), led us to the following recursive algorithm for constructing “ndinv”.

This is best described by working with the corresponding reduced tableaux. To begin it will be convenient to start by decomposing each \(\operatorname{red}(\mathit{PF})\) into sections corresponding to the parts of the given composition. To be more precise, it is best to view our two line arrays as unions of vertical dominos. For instance the \(\operatorname{red}(\mathit{PF})\) below, which is none other than the minimal one obtained from the Dyck path on the right, will be viewed as the sequence of dominos. Thus the corresponding PF belongs to the family \(\mathcal{PF}_{5}([3,3,2])\) and as such will be divided into 3 sections, one for each part of [3,3,2]. To do this we simply cut the sequence in (4.10) before each domino \(\bigl[{2\atop0}\bigr]\), obtaining the three sections
$$ \everymath{\displaystyle} \begin{array}{@{}l} \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\2 \end{array} \right] \right], \qquad\left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\2 \end{array} \right] \right], \\[4mm] \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right] \right] \end{array} $$
Since here p 1=3>1, (4.8) suggests that we should remove a 1 from the first section, then process it somewhat to cause a loss of dinv of 2 (=k−1) and a loss of area of 2 (=p 1−1). Taking a cue from the classical dinv, we can see that the first small car in the corresponding PF would contribute a unit to the classical dinv with the big cars to its right in the main diagonal. The latter of course correspond to the dominoes \(\bigl[{2\atop0}\bigr]\) that begin each of the following sections. Thus the desired loss of dinv can be simply obtained by bodily moving the first section to the end and removing the \(\bigl[{1\atop0}\bigr]\), obtaining
$$ \everymath{\displaystyle} \begin{array}{@{}l} \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\2 \end{array} \right] \right], \qquad\left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right] \right], \\[4mm] \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\2 \end{array} \right] \right] \end{array} $$
We may thus consider that the removed domino contributed a unit to dinv for each domino \(\bigl[{2\atop0}\bigr]\) to its right. But we still have not accounted for the loss of area and, worse yet, we now have a big car on top of a big car. Since (4.8) tells us that the loss of area should be p 1−1, it must be equal to the number of big cars in the moved section minus one. This means that we can fix both problems by making the domino replacements \(\bigl[{2\atop1}\bigr] {\rightarrow}\bigl[{2\atop0}\bigr]\) and \(\bigl[{2\atop2}\bigr] {\rightarrow} \bigl[{2\atop1}\bigr]\), obtaining
$$ \everymath{\displaystyle} \begin{array}{@{}l} \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\2 \end{array} \right] \right], \qquad\left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right] \right], \\[4mm] \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right] \right] \end{array} $$
but that creates a new problem, since the succession \(\bigl[{2\atop 0}\bigr],\bigl[{1\atop1}\bigr]\) would put a small car on top of a big car. We will fix this final problem by simply switching the 1 with the 2 obtaining
$$ \everymath{\displaystyle} \begin{array}{@{}l} \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 2\\2 \end{array} \right] \right], \qquad\left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right] \right], \\[4mm] \left[ \left[ \begin{array}{c} 2\\0 \end{array} \right], \left[ \begin{array}{c} 1\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right] \right] \end{array} $$
which gives the domino sequence of the \(\operatorname{red}(\mathit{PF})\) below, on the right of which we have depicted the Dyck path supporting the big cars.

In the case that p 1=1 there will be only one big car in the first section and if there are small cars they all must be on the main diagonal. In this case we can process the first section as we did for p 1>1. If there are no small cars then the first section consists of the single domino \(\bigl[{2\atop0}\bigr]\) and (4.8) suggests that we should simply remove it with no further ado.

To carry out our definition of “ndinv” rigorously and in full generality, we will break our argument into three separate steps. In the first step we use the ideas stemming from the above example to construct a bijection Φ. In the second step we define “ndinv” by setting for each \(\mathit{PF}\in \mathcal{PF}_{J}([p_{1},p_{2},\ldots, p_{k}])\)
$$ \operatorname{ndinv}(\mathit{PF})= \begin{cases} k-1+\operatorname{ndinv}(\varPhi(\mathit{PF})) &\hbox{if}\ J>0,\\ 0 &\hbox{if}\ J=0. \end{cases} $$
From step 1 and step 2 it will follow that the polynomials Π J ([p 1,p 2,…,p k ]) satisfy the same recursion as the polynomials \(\langle\Delta_{h_{J}}\mathbf{C}_{p_{1}}\mathbf{C}_{p_{2}}\cdots \mathbf{C}_{p_{k}}1, e_{n}\rangle\).

In the third step we establish the equality in (4.2) by verifying the equality in the base cases.

In our first step, starting with a \(\mathit{PF}\in\mathcal{PF}_{J}([p_{1},p_{2},\ldots, p_{k}])\) we construct Φ(PF) by the following procedure.
  • Cut the domino sequence of \(\operatorname{red}(\mathit{PF})\) into sections starting at the dominos \(\bigl[{2\atop0}\bigr]\)

  (1) If the first section does not contain a domino \(\bigl[{1\atop0}\bigr]\)
    • remove its only domino \(\bigl[{2\atop0}\bigr]\) from the sequence of dominos

  (2) If the first section contains a domino \(\bigl[{1\atop0}\bigr]\), work on the first section as follows:
    • remove its first domino \(\bigl[{1\atop0}\bigr]\)

    • for each (but the first) domino \(\bigl[{2\atop a}\bigr]\) make the replacement \(\bigl[{2\atop a}\bigr] {\rightarrow}\bigl[{2\atop a-1}\bigr]\)

    • if adjacent pairs \(\bigl[{2\atop a-1}\bigr] \bigl[{1\atop a}\bigr]\) are created make the replacements \(\bigl[{2\atop a-1}\bigr] \bigl[{1\atop a}\bigr] {\rightarrow}\allowbreak \bigl[{1\atop a-1}\bigr] \bigl[{2\atop a}\bigr]\)

    • cycle the modified first section to the end of the sequence of dominos

In any case we let PF′ be the parking function corresponding to the resulting domino sequence and set
$$\varPhi(\mathit{PF})=\mathit{PF}'. $$
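The procedure above, together with the weights suggested by the recursion (a small-car removal costs k−1 units of ndinv, a big-car removal costs nothing), can be turned into a short program. The sketch below is our own illustration, not the authors' code: dominos are encoded as pairs (car, area) with car 1 = small and 2 = big, and case (1) uses the fact, noted earlier, that a first section with no \(\bigl[{1\atop0}\bigr]\) consists of the single domino \(\bigl[{2\atop0}\bigr]\).

```python
# Dominos are pairs (car, area): car 1 = small, 2 = big, listed bottom to top.

def sections(seq):
    """Cut the domino sequence into sections starting at each (2, 0)."""
    cuts = [i for i, d in enumerate(seq) if d == (2, 0)]
    return [seq[i:j] for i, j in zip(cuts, cuts[1:] + [len(seq)])]

def phi(seq):
    """One application of the map Phi described in the bullets above."""
    secs = sections(seq)
    first, rest = list(secs[0]), [d for sec in secs[1:] for d in sec]
    if (1, 0) not in first:
        return rest                      # first section was a lone (2, 0)
    first.remove((1, 0))                 # remove the first (1, 0)
    # lower the area of every big-car domino except the leading (2, 0)
    first = [first[0]] + [(c, a - 1) if c == 2 else (c, a)
                          for (c, a) in first[1:]]
    # repair adjacent pairs (2, a-1)(1, a) -> (1, a-1)(2, a)
    for i in range(len(first) - 1):
        (c1, a1), (c2, a2) = first[i], first[i + 1]
        if c1 == 2 and c2 == 1 and a2 == a1 + 1:
            first[i], first[i + 1] = (1, a1), (2, a2)
    return rest + first                  # cycle the first section to the end

def ndinv(seq):
    """Recursive ndinv: each small-car removal contributes (#sections - 1)."""
    total = 0
    while seq:
        if (1, 0) in sections(seq)[0]:
            total += len(sections(seq)) - 1
        seq = phi(seq)
    return total

# the three sections of the reduced domino sequence of the [3,3,2] example
A = [(2, 0), (1, 0), (2, 1), (1, 1), (2, 2)]
B = [(2, 0), (1, 0), (2, 1)]
```

On the sequence A + A + B this reproduces both the worked application of Φ above and the value ndinv = 14 obtained for the same example later in the paper.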

It is clear that Φ maps the left hand side of (4.15) into the right hand side. To show that Φ is a bijection we need only show that the procedure above can be reversed to reconstruct PF from PF′ for any PF′ in the right hand side of (4.15). We will outline the salient steps of the reversed procedure.

Note first that since our target PF=Φ −1(PF′) is to be in \(\mathcal{PF}_{J}([p_{1},p_{2},\ldots, p_{k}])\) we already know the diagonal composition of the Dyck path of the big cars of PF. Thus we can proceed as follows:
  (1) Say \(\mathit{PF}'\in\mathcal{PF}_{J}([p_{2},\ldots, p_{k}])\) (which will only occur when p 1=1)
    • Then PF is the parking function obtained by prepending \(\bigl[{2\atop0}\bigr]\) to the domino sequence of PF′.

  (2) Say \(\mathit{PF}'\in\mathcal{PF}_{J-1}([p_{2},\ldots, p_{k},1])\) (which will only occur when p 1=1)
    • Then PF is the parking function obtained by inserting \(\bigl[{1\atop0}\bigr]\) immediately after the first \(\bigl[{2\atop0}\bigr]\) in the last section of PF′, then cycling the last section back to be the first in the domino sequence.

  (3) Say \(\mathit{PF}'\in\mathcal{PF}_{J-1}([p_{2},\ldots, p_{k},q])\) for a q⊨p 1−1>0
    • Let \(\operatorname{last}(\mathit{PF}')\) be the domino sequence obtained by removing the first k−1 sections from the domino sequence of PF.

    • Modify \(\operatorname{last}(\mathit{PF}')\) by inserting a \(\bigl[{1\atop0}\bigr] \) immediately after its first \(\bigl[{2\atop0}\bigr]\) .

    • For a≥1 replace, in \(\operatorname{last}(\mathit{PF}')\) , each pair \(\bigl[{1\atop a-1}\bigr] \bigl[{2\atop a}\bigr]\) by the pair \(\bigl[{2\atop a}\bigr] \bigl[{1\atop a}\bigr]\) .

      (note that for this to put a big car on top of a big car we must have a \(\bigl[{2\atop a-1}\bigr]\) preceding the \(\bigl[{1\atop a-1}\bigr]\), but that \(\bigl[{2\atop a-1}\bigr]\) will also be replaced either by this step or by the next steps)

    • For a≥1 replace each \(\bigl[{2\atop a}\bigr]\) preceded by a \(\bigl[{1\atop a}\bigr]\) in \(\operatorname{last}(\mathit{PF}')\) by \(\bigl[{2\atop a+1}\bigr]\)

    • Replace each \(\bigl [{2\atop0}\bigr]\) , except the first by a \(\bigl[{2\atop1}\bigr]\)

      (note if a replaced \(\bigl[{2\atop0}\bigr]\) is preceded by a \(\bigl[{2\atop0}\bigr]\) then that \(\bigl[{2\atop0}\bigr]\) itself will also be replaced by \(\bigl[{2\atop1}\bigr]\))

    • The modified \(\operatorname{last}(\mathit{PF}')\) followed by the first k−1 sections of PF′ then gives the domino sequence of our target PF.

This completes our proof that Φ is bijective.

Since Φ moves EAST, by one cell, p 1−1 big cars, it causes a loss of area equal to p 1−1. Thus the definition in (4.16) combined with the bijectivity of Φ proves the recursion in (4.9).

It remains to show equality in the base cases which, in view of the definition in (4.16), should be characterized by the absence of small cars.

Now it is easily seen, combinatorially, that \(\mathcal{PF}_{0}([p])\) is an empty family except when all the components of p are equal to 1. To see this, note that it is only the presence of small cars that allows the supporting Dyck path of one of our PFs to have columns of length 2. But if all the columns are of length 1, the “area” statistic is 0 and the area sequence of the Dyck path supporting the big cars can only be a string of 0’s. In this case the family reduces to a single parking function which consists of cars 1,2,…,n placed on the main diagonal from top to bottom. Thus it follows from our definition of Π J (p) and (4.16) that
$$\varPi_0 \bigl([p_1,p_2, \ldots,p_k] \bigr)= \begin{cases} 0 &\hbox{if\ some}\ p_i>1,\\ 1 &\hbox{if\ all}\ p_i=1. \end{cases} $$
Since by definition \(\Delta_{h_{0}}\) reduces to the identity operator, the equality for the base cases results from the following fact.

Theorem 4.2

For p=(p 1,p 2,…,p k )⊨n we have
$$ \langle\mathbf{C}_{p_1}\mathbf{C}_{p_2}\cdots \mathbf{C}_{p_k} 1, e_n\rangle= \begin{cases} 0 &\hbox{\textit{if\ some}}\ p_i>1,\\ 1 &\hbox{\textit{if\ all}}\ p_i=1. \end{cases} $$


Recall from (1.2) that for any symmetric function F[X] we have
$$ \mathbf{C}_a F[X]= \biggl(-{{ 1\over q}} \biggr)^{a-1} F \biggl[X-{1-1/q \over z} \biggr]\displaystyle\sum _{m\ge0}z^m h_{m}[X] |_{z^a}. $$
In particular it follows that for any Schur function s λ we have
$$\mathbf{C}_a s_\lambda[X] = \biggl(-{{ 1\over q}} \biggr)^{a-1} \sum_{\mu\subseteq\lambda}s_{\lambda/\mu}[X]s_\mu[1/q-1] \displaystyle h_{a+|\mu|}[X] . $$
This gives for a+|λ|=n
$$ \bigl\langle\mathbf{C}_a s_\lambda[X] , e_n \bigr\rangle= \biggl(-{{ 1\over q}} \biggr)^{a-1} \sum _{\mu\subseteq\lambda }s_\mu[1/q-1] \langle s_{\lambda/\mu} h_{a+|\mu|} , e_n\rangle $$
and the Littlewood–Richardson rule gives
$$\langle s_{\lambda/\mu} h_{a+|\mu|} , e_n\rangle=\bigl \langle s_{\lambda/\mu} , h_{a+|\mu|}^\perp e_n\bigr \rangle=0 $$
unless a+|μ|=1. Thus for a≥1, (4.19) reduces to
$$ \bigl\langle\mathbf{C}_a s_\lambda[X] , e_n \bigr\rangle= \biggl(-{{ 1\over q}} \biggr)^{a-1} \langle s_{\lambda} h_{a} , e_n\rangle= \begin{cases} 1 &\hbox{if}\ a=1\hbox{\ and}\ \lambda=1^{n-a},\\ 0 &\hbox{otherwise.} \end{cases} $$
On the other hand, since
$$\mathbf{C}_a 1= \biggl(-{{ 1\over q}} \biggr)^{a-1} h_a, $$
the first case of (4.17) follows immediately from (4.20). Even when all the p i are equal to 1, in successive applications of C 1 only the term corresponding to \(s_{1^{m}}\) in the Schur function expansion of \(\mathbf{C}_{1}^{m} 1\) will survive in the scalar product
$$\bigl\langle\mathbf{C}_1 \mathbf{C}_1^m 1, e_{m+1} \bigr\rangle. $$
Since from (4.20) it follows that \(\mathbf{C}_{1}^{m} 1 |_{s_{1^{m}}}=1\), the second case of (4.17) is also a consequence of (4.20).
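For instance, for m=2 (our own check), repeated use of the expansion of C a gives

$$\mathbf{C}_1 1= h_1= s_1,\qquad \mathbf{C}_1 s_1= s_1 h_1+\Bigl(\frac{1}{q}-1\Bigr) h_2= s_{11}+\frac{1}{q}\, s_2, $$

so that \(\mathbf{C}_1^2 1 |_{s_{11}}=1\) and \(\langle\mathbf{C}_1\mathbf{C}_1 1, e_2\rangle=1\), in agreement with (4.17).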

This completes the proof of (4.17). This was the last fact we needed to establish the equality in (1.17). □

Remark 4.3

As we already mentioned, our definition of ndinv creates another puzzle. Indeed, the classical dinv can be immediately computed from the geometry of the parking function or directly from (1.6), which expresses it explicitly in terms of the two line array representation. For this reason we made a particular effort to obtain a non-recursive construction of ndinv and, in the best scenario, to derive from it an explicit formula similar to (1.6). However, our efforts yielded only a partially non-recursive construction. In our original plan of writing we decided to include this further result, even though in the end it yields a more complex algorithm for computing ndinv than the original recursion, in the hope that our final construction might be conducive to the discovery of an explicit formula. As it happens, during the preparation of this manuscript a new and better reason emerged for the inclusion of our final construction. It turns out that Angela Hicks and Yeonkyung Kim have very recently succeeded in discovering the desired explicit formula by a careful analysis of the combinatorial identities we are about to present. The results of Hicks–Kim will appear in a separate publication [13].

For our less recursive construction of ndinv it will be convenient to make a few changes in the domino sequences. To begin, we shall use the actual car numbers at the top of the dominos rather than 1 or 2. We do this so that we may refer to individual dominos by their car number as the corresponding area number on the bottom is being changed. But now, to distinguish big cars from small cars we must in each case specify the number J of small cars. Secondly, we will have sections end with a big car, rather than begin with a big car. This only requires moving the initial big car to the end of the domino sequence. For example, the parking function below, whose domino sequence was given in (4.10), has J=5; thus cars 1,2,3,4,5 are small and 6,7,…,13 are big.
$$ \left\vert \begin{array}{c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c@{\ \ }c} .&.&.&.&.&.&.&.&.&.&.&8&o\\[2pt] .&.&.&.&.&.&.&.&.&.&.&3&.\\[2pt] .&.&.&.&.&.&.&.&.&.&11&.&.\\[2pt] .&.&.&.&.&.&.&6&o&o&.&.&.\\[2pt] .&.&.&.&.&.&.&1&o&.&.&.&.\\[2pt] .&.&.&.&.&.&9&o&.&.&.&.&.\\[2pt] .&.&.&.&.&.&4&.&.&.&.&.&.\\[2pt] .&.&.&.&.&12&.&.&.&.&.&.&.\\[2pt] .&.&7&o&o&.&.&.&.&.&.&.&.\\[2pt] .&.&2&o&.&.&.&.&.&.&.&.&.\\[2pt] .&10&o&.&.&.&.&.&.&.&.&.&.\\[2pt] .&5&.&.&.&.&.&.&.&.&.&.&.\\[2pt] 13&.&.&.&.&.&.&.&.&.&.&.&.\\[2pt] \end{array} \right\vert $$
Its domino sequence is now
$$ \everymath{\displaystyle} \begin{array}[b]{@{}l} \left[ \left[ \begin{array}{c} 5\\0 \end{array} \right], \left[ \begin{array}{c} 10\\1 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 7\\2 \end{array} \right], \left[ \begin{array}{c} 12\\0 \end{array} \right], \left[ \begin{array}{c} 4\\0 \end{array} \right], \left[ \begin{array}{c} 9\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 6\\2 \end{array} \right],\right. \\[6mm] \quad\left. \left[ \begin{array}{c} 11\\0 \end{array} \right], \left[ \begin{array}{c} 3\\0 \end{array} \right], \left[ \begin{array}{c} 8\\1 \end{array} \right], \left[ \begin{array}{c} 13\\0 \end{array} \right] \right] \end{array} $$
and its decomposition into sections is as shown below
$$ \everymath{\displaystyle} \begin{array}{@{}l} \left[ \left[ \begin{array}{c} 5\\0 \end{array} \right], \left[ \begin{array}{c} 10\\1 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 7\\2 \end{array} \right], \left[ \begin{array}{c} 12\\0 \end{array} \right] \right], \qquad\left[ \left[ \begin{array}{c} 4\\0 \end{array} \right], \left[ \begin{array}{c} 9\\1 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 6\\2 \end{array} \right], \left[ \begin{array}{c} 11\\0 \end{array} \right] \right], \\[5mm] \left[ \left[ \begin{array}{c} 3\\0 \end{array} \right], \left[ \begin{array}{c} 8\\1 \end{array} \right], \left[ \begin{array}{c} 13\\0 \end{array} \right] \right] \end{array} $$
For convenience we will now need to use the symbols “\(\bigl[{b\atop a}\bigr]\)” and “\(\bigl[{s\atop a}\bigr]\)” to represent “big” and “small” car dominos, respectively. To motivate our second construction of ndinv we will begin by modifying our first construction to adapt to these new domino sequences.
  (1) The recursive construction will now consist of as many steps as there are dominos in the domino sequence

  (2) At each step the first domino of the first section is removed
    (a) when we remove an \(\bigl[{s\atop0}\bigr]\), the section is cycled to the end after it is processed as before

    (b) when we remove a \(\bigl[{b\atop0}\bigr]\), it is because the section consisted of a single big car domino.

  (3) The removal of an \(\bigl[{s\atop0}\bigr]\) contributes to ndinv the number of \(\bigl[{b\atop0}\bigr]\)’s to its right minus one.


There are a few observations to be made about the effect of the cycling process. To begin note that when the domino sequence consists of a single section, no visible cycling occurs. However, even in this case, for accounting purposes, it is convenient to consider all of its dominos to have been cycled. With this provision, each domino in the original domino sequence will have an associated cycling number c that counts the number of times it has been cycled before it is removed.

Based on these observations, a step by step study of our recursive construction of ndinv led us to the following somewhat less recursive algorithm. It consists of two stages. In the first stage, the domino sequence is doctored and wrapped around a circle to be used in the second stage. The second stage uses circular motion to mimic the cycling of sections that takes place in the recursive procedure. To facilitate the understanding of the resulting algorithm we will illustrate each stage by applying it to the parking function in (4.21). More precisely we work as follows:

Stage I
  • Move each \(\bigl[{s\atop a}\bigr]\) in the domino sequence a places to its left and increase the area number by 1 of each domino \(\bigl[{b\atop a}\bigr]\) that is being by-passed. For instance the domino sections in (4.23) become
    $$ \everymath{\displaystyle} \begin{array}{@{}l} \left[ \left[ \begin{array}{c} 5\\0 \end{array} \right], \left[ \begin{array}{c} 2\\1 \end{array} \right], \left[ \begin{array}{c} 10\\2 \end{array} \right], \left[ \begin{array}{c} 7\\2 \end{array} \right], \left[ \begin{array}{c} 12\\0 \end{array} \right] \right], \qquad\left[ \left[ \begin{array}{c} 4\\0 \end{array} \right], \left[ \begin{array}{c} 1\\1 \end{array} \right], \left[ \begin{array}{c} 9\\2 \end{array} \right], \left[ \begin{array}{c} 6\\2 \end{array} \right], \left[ \begin{array}{c} 11\\0 \end{array} \right] \right], \\[4mm] \left[ \left[ \begin{array}{c} 3\\0 \end{array} \right], \left[ \begin{array}{c} 8\\1 \end{array} \right], \left[ \begin{array}{c} 13\\0 \end{array} \right] \right]. \end{array} $$
  • Next wrap the resulting sequence clockwise around a circle with positions marked by a “∘”

(we also place a bar “|” to separate the beginning and ending dominos; the ∘’s will be successively changed to •’s during the second stage)
Stage II
  • Set ndinv=0 and set the auxiliary parameter c to 1.

  • Mark the first domino by changing its “∘” to a “•”.

  • Cycling clockwise from the first domino to the bar find the first \(\bigl[{b\atop0}\bigr]\) , call it “endsec”.

  • Cycling clockwise from endsec to the bar add 1 to ndinv each time we meet a \(\bigl[{b\atop0}\bigr]\) .

(On the right in (4.25) we have darkly boxed the first domino and the endsec and lightly boxed the two ndinv contributing big car dominos.)
While there is a domino that has not been marked repeat the following steps:
  • cycling clockwise from the last endsec mark the first unmarked domino

  • if in so doing the bar is crossed add 1 to c.

If the domino is a \(\bigl[{s\atop a}\bigr]\) then clockwise from it find the first \(\bigl[{b\atop a}\bigr]\) with a<c, call it “endsec” then cycle clockwise from endsec back to this \(\bigl[{s\atop a}\bigr]\)
  • for each encountered unmarked \(\bigl[{b\atop a}\bigr]\) add 1 to ndinv provided a<c if the bar is not crossed or a<c+1 after the bar is crossed

(the desired value of ndinv is reached after all the small car dominos are marked).
The successive configurations obtained after the marking of small car dominos are displayed below with the same conventions used on the right of (4.25): (at this point the c value increases to 2 and we obtain) (thus, in this case, ndinv=14, which is the total number of lightly boxed dominos in the previous five configurations).

Remark 4.4

We will not include a proof of the validity of this second algorithm, since A.S. Hicks and Y. Kim, using their discoveries, are able to provide in [13] a much simpler and more revealing validity argument than we can offer with our present tools. Here it should be sufficient to acknowledge that the auxiliary domino sequence resulting from Stage I, together with the c statistic constructed in Stage II, have ultimately been put to such beautiful use in subsequent work.

Before closing we should note that our ndinv may have an extension that can be used in more general settings than the present one. To see this, let us recall that the 2 part case of the Shuffle Conjecture, proved by J. Haglund in [8], may be stated as follows:
Now replacing n by n+1−J in (1.17), for (p 1,p 2,…,p k )⊨n+1−J we get (4.29). This given, since it was shown in [11] that we may write
$$e_{n+1}=\sum_{p\models n+1} \mathbf{C}_{p_1} \mathbf{C}_{p_2}\cdots\mathbf{C}_{p_{l(p)}}1 $$
it follows, by summing (4.29) over all compositions of n+1, that we also have (4.30), where the “(∗)” is to signify that the sum is over all parking functions in the (n+1)×(n+1) lattice square which have the biggest car n+1 in the cell (1,1). But it was also shown in [8] that we have
$$\langle\Delta_{h_J}e_{n+1-J}, e_{n+1-J}\rangle= \langle\nabla e_{n} , h_Jh_{n-J} \rangle. $$
Thus (4.30) may also be rewritten in a form which gives another parking function interpretation to this remarkable polynomial. It is natural then to ask if this kind of result, involving the same ndinv or a suitable extension of it, may give a new parking function interpretation to any of the polynomials occurring on the left hand side of (1.7). If that were the case then that would provide an alternate form of the Shuffle Conjecture. It is interesting to note that computer exploration has led us to conjecture that for p=(p 1,p 2,…,p k )⊨n the polynomials
$$\langle\Delta_{h_{J_1}e_{J_2}} \mathbf{C}_{p_1} \mathbf {C}_{p_2}\cdots\mathbf{C}_{p_k} 1, e_n\rangle $$
have non-negative integer coefficients. This yields yet another avenue by which the results of this paper can be extended. It should be worthwhile to pursue these avenues in further investigations of the connections between Parking Functions and the Theory of Macdonald Polynomials.


  1. Bergeron, F., Garsia, A.M.: Science fiction and Macdonald’s polynomials. In: Algebraic Methods and q-Special Functions, Montréal, QC, 1996. CRM Proc. Lecture Notes, vol. 22, pp. 1–52. Am. Math. Soc., Providence (1999)
  2. Bergeron, F., Garsia, A.M., Haiman, M., Tesler, G.: Identities and positivity conjectures for some remarkable operators in the theory of symmetric functions. Methods Appl. Anal. 6, 363–420 (1999)
  3. Garsia, A.M., Haglund, J.: A proof of the q,t-Catalan positivity conjecture. Discrete Math. 256, 677–717 (2002)
  4. Garsia, A., Haiman, M.: Some natural bigraded modules and the q,t-Kostka coefficients. Electron. J. Comb. 3, Res. Paper 24 (1996)
  5. Garsia, A.M., Haiman, M.: A remarkable q,t-Catalan sequence and q-Lagrange inversion. J. Algebr. Comb. 5(3), 191–244 (1996)
  6. Garsia, A., Haiman, M., Tesler, G.: Explicit plethystic formulas for the Macdonald q,t-Kostka coefficients. Séminaire Lotharingien de Combinatoire B42m (1999), 45 pp.
  7. Garsia, A.M., Xin, G., Zabrocki, M.: Hall–Littlewood operators in the theory of parking functions and diagonal harmonics. Int. Math. Res. Not. 11 (2011)
  8. Haglund, J.: A proof of the q,t-Schröder conjecture. Int. Math. Res. Not. 11, 525–560 (2004)
  9. Haglund, J.: The q,t-Catalan Numbers and the Space of Diagonal Harmonics. AMS University Lecture Series, vol. 41 (2008), 167 pp.
  10. Haglund, J., Haiman, M., Loehr, N., Remmel, J.B., Ulyanov, A.: A combinatorial formula for the character of the diagonal coinvariants. Duke Math. J. 126, 195–232 (2005)
  11. Haglund, J., Morse, J., Zabrocki, M.: A compositional shuffle conjecture specifying touch points of the Dyck path. Can. J. Math. 64(4), 822–844 (2012). doi:10.4153/CJM-2011-078-4
  12. Haiman, M.: Hilbert schemes, polygraphs, and the Macdonald positivity conjecture. J. Am. Math. Soc. 14, 941–1006 (2001)
  13. Hicks, A.S., Kim, Y.: An explicit formula for the new “dinv” statistic for compositional “2-shuffle” parking functions, to appear
  14. Macdonald, I.G.: Symmetric Functions and Hall Polynomials, 2nd edn. Oxford Mathematical Monographs. The Clarendon Press/Oxford University Press, New York (1995)
  15. Zabrocki, M.: UCSD Advancement to Candidacy Lecture Notes.

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  1. Mathematics Department, University of California, San Diego, La Jolla, USA
  2. Mathematics and Statistics, York University, Toronto, Canada
