Berlekamp [1, 2] raised a question concerning the entries of certain matrices of determinant 1. (Originally, Berlekamp was interested only in the entries modulo 2.) Carlitz, Roselle, and Scoville [3] gave a combinatorial interpretation of the entries (over the integers, not just modulo 2) in terms of lattice paths. Here we will generalize the result of Carlitz, Roselle, and Scoville in two ways: (a) we will refine the matrix entries so that they are multivariate polynomials, and (b) we will compute not just the determinant of these matrices but, more strongly, their Smith normal form (SNF). A priori our matrices need not have a Smith normal form, since they are not defined over a principal ideal domain, but the existence of SNF will follow from its explicit computation. A special case is a determinant of \(q\)-Catalan numbers. It will be more convenient for us to state our results in terms of partitions rather than lattice paths.

Let \(\lambda \) be a partition, identified with its Young diagram regarded as a set of squares; we fix \(\lambda \) for all that follows. Adjoin to \(\lambda \) a border strip extending from the end of the first row to the end of the first column of \(\lambda \), yielding an extended partition \(\lambda ^*\). Let \((r,s)\) denote the square in the \(r\)th row and \(s\)th column of \(\lambda ^*\). If \((r,s)\in \lambda ^*\), then let \(\lambda (r,s)\) be the partition whose diagram consists of all squares \((u,v)\) of \(\lambda \) satisfying \(u\ge r\) and \(v\ge s\). Thus, \(\lambda (1,1)=\lambda \), while \(\lambda (r,s)=\emptyset \) (the empty partition) if \((r,s)\in \lambda ^*{\setminus }\lambda \). Associate with the square \((i,j)\) of \(\lambda \) an indeterminate \(x_{ij}\). Now for each square \((r,s)\) of \(\lambda ^*\), associate a polynomial \(P_{rs}\) in the variables \(x_{ij}\), defined as follows:

$$\begin{aligned} P_{rs} = \sum _{\mu \subseteq \lambda (r,s)}\prod _{(i,j)\in \lambda (r,s){\setminus }\mu } x_{ij}, \end{aligned}$$
(1)

where \(\mu \) runs over all partitions contained in \(\lambda (r,s)\). In particular, if \((r,s)\in \lambda ^*{\setminus }\lambda \) then \(P_{rs}=1\). Thus, for \((r,s)\in \lambda \), \(P_{rs}\) may be regarded as a generating function for the squares of all skew diagrams \(\lambda (r,s){\setminus }\mu \). For instance, if \(\lambda =(3,2)\) and we set \(x_{11}=a\), \(x_{12}=b\), \(x_{13}=c\), \(x_{21}=d\), and \(x_{22}=e\), then Fig. 1 shows the extended diagram \(\lambda ^*\) with the polynomial \(P_{rs}\) placed in the square \((r,s)\).

Fig. 1. The polynomials \(P_{rs}\) for \(\lambda =(3,2)\)

Write

$$\begin{aligned} A_{rs}=\prod _{(i,j)\in \lambda (r,s)} x_{ij}. \end{aligned}$$

Note that \(A_{rs}\) is simply the leading term of \(P_{rs}\). Thus, for \(\lambda =(3,2)\) as in Fig. 1 we have \(A_{11}=abcde\), \(A_{12}=bce\), \(A_{13}=c\), \(A_{21}=de\), and \(A_{22}=e\).
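Both definitions can be checked by brute force. The following Python sketch (an aside on our part, assuming the sympy library; the helper names are ours, not from the text) computes \(P_{rs}\) of Eq. (1) by enumerating all partitions \(\mu \subseteq \lambda (r,s)\), together with \(A_{rs}\), and reproduces the entries of Fig. 1:

```python
# Brute-force sketch (assuming sympy): P_{rs} and A_{rs} for lambda = (3,2).
import sympy as sp

def subpartitions(lam, cap=None):
    """Yield all partitions mu contained in lam, as tuples of the same
    length as lam (padded with zeros), with mu_1 >= mu_2 >= ..."""
    if cap is None:
        cap = lam[0] if lam else 0
    if not lam:
        yield ()
        return
    for first in range(min(cap, lam[0]) + 1):
        for rest in subpartitions(lam[1:], first):
            yield (first,) + rest

def subshape(lam, r, s):
    """The partition lambda(r,s): squares (u,v) of lambda with u >= r, v >= s."""
    return tuple(lam[u] - s + 1 for u in range(r - 1, len(lam)) if lam[u] >= s)

def P(lam, r, s, x):
    """P_{rs} of Eq. (1): sum over mu inside lambda(r,s) of the product of
    x_{ij} over the squares of the skew shape lambda(r,s) \\ mu."""
    shape = subshape(lam, r, s)
    total = sp.Integer(0)
    for mu in subpartitions(shape):
        term = sp.Integer(1)
        for k, row in enumerate(shape):
            for col in range(mu[k], row):     # 0-indexed columns of the complement
                term *= x[(r + k, s + col)]   # shift back to coordinates in lambda
        total += term
    return sp.expand(total)

def A(lam, r, s, x):
    """A_{rs}: the product of x_{ij} over all squares of lambda(r,s)."""
    shape = subshape(lam, r, s)
    return sp.prod([x[(r + k, s + col)] for k, row in enumerate(shape)
                    for col in range(row)])

lam = (3, 2)
x = {(1, 1): sp.Symbol('a'), (1, 2): sp.Symbol('b'), (1, 3): sp.Symbol('c'),
     (2, 1): sp.Symbol('d'), (2, 2): sp.Symbol('e')}
for rs in [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)]:
    print(rs, A(lam, *rs, x), P(lam, *rs, x))
# e.g. (2,2) -> e, e + 1, and (1,1) -> a*b*c*d*e together with the
# 9-term polynomial P_{11}, exactly as in Fig. 1.
```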

For each square \((i,j)\in \lambda ^*\) there will be a unique subset of the squares of \(\lambda ^*\) forming an \(m\times m\) square \(S(i,j)\) for some \(m\ge 1\), such that the upper left-hand corner of \(S(i,j)\) is \((i,j)\), and the lower right-hand corner of \(S(i,j)\) lies in \(\lambda ^*{\setminus }\lambda \). In fact, if \(\rho _{ij}\) denotes the rank of \(\lambda (i,j)\) (the number of squares on the main diagonal, or equivalently, the largest \(k\) for which \(\lambda (i,j)_k\ge k\)), then \(m=\rho _{ij}+1\). Let \(M(i,j)\) denote the matrix obtained by inserting in each square \((r,s)\) of \(S(i,j)\) the polynomial \(P_{rs}\). For instance, for the partition \(\lambda =(3,2)\) of Fig. 1, the matrix \(M(1,1)\) is given by

$$\begin{aligned} M(1,1) = \left[ \begin{array}{c@{\quad }c@{\quad }c} P_{11} &{} bce+ce+c+e+1 &{} c+1\\ de+e+1 &{} e+1 &{} 1\\ 1 &{} 1 &{} 1 \end{array} \right] , \end{aligned}$$

where \(P_{11}=abcde+bcde+bce+cde+ce+de+c+e+1\). Note that for this example we have

$$\begin{aligned} \det M(1,1)= A_{11}A_{22}A_{33}=abcde\cdot e\cdot 1=abcde^2. \end{aligned}$$

If \(R\) is a commutative ring (with identity), and \(M\) an \(m\times n\) matrix over \(R\), then we say that \(M\) has a Smith normal form (SNF) over \(R\) if there exist matrices \(P\in {{\mathrm{GL}}}(m,R)\) (the set of \(m\times m\) matrices over \(R\) which have an inverse whose entries also lie in \(R\), so that \(\det P\) is a unit in \(R\)) and \(Q\in {{\mathrm{GL}}}(n,R)\) such that \(PMQ\) has the form (without loss of generality \(m\le n\) here; the other case is dual)

$$\begin{aligned} \begin{array}{r@{\quad }c@{\quad }l} PMQ &{} = &{} \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \mathbf{0} &{} d_1 d_2 \cdots d_m &{} &{} &{} \\ \mathbf{0} &{} &{} d_1 d_2 \cdots d_{m-1} &{} &{} \\ \vdots &{} &{} &{} \ddots &{} \\ \mathbf{0} &{} &{} &{} &{} d_1 \end{array} \right] \\ &{}=&{} (\mathbf{0}, \mathrm {diag}(d_1 d_2 \cdots d_m, d_1 d_2 \cdots d_{m-1},\dots , d_1))\,, \end{array} \end{aligned}$$

where each \(d_i\in R\). If \(R\) is an integral domain and \(M\) has an SNF, then it is unique up to multiplication of the diagonal entries by units. If \(R\) is a principal ideal domain then the SNF always exists, but not over more general rings. We will be working over the polynomial ring

$$\begin{aligned} R={\mathbb {Z}}[x_{ij}: (i,j)\in \lambda ]. \end{aligned}$$
(2)
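For readers who wish to experiment, the SNF over a principal ideal domain can be computed mechanically; a minimal sketch, assuming a recent version of sympy (which provides smith_normal_form over \({\mathbb {Z}}\)):

```python
# Illustration over the PID Z (assuming sympy's
# sympy.matrices.normalforms.smith_normal_form, available in recent versions):
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

print(smith_normal_form(Matrix([[2, 4], [6, 8]]), domain=ZZ))
# Matrix([[2, 0], [0, 4]]): sympy orders the diagonal so that d_1 | d_2,
# whereas the display above writes the same divisibility chain from the
# other end.  No such routine applies verbatim over the multivariate
# polynomial ring R of (2), which is not a PID.
```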

Our main result asserts that \(M(i,j)\) has a Smith normal form over \(R\), which we describe explicitly. In particular, the entries on the main diagonal are monomials. It is stated below for \(M(1,1)\), but it applies to any \(M(i,j)\) by replacing \(\lambda \) with \(\lambda (i,j)\). Note also that the transforming matrices are particularly nice as they are triangular matrices with 1’s on the diagonal.

Theorem 1

There are an upper unitriangular matrix \(P\) and a lower unitriangular matrix \(Q\) in \({{\mathrm{SL}}}(\rho +1,R)\) such that

$$\begin{aligned} P\cdot M(1,1)\cdot Q = \mathrm {diag}(A_{11},A_{22},\dots ,A_{\rho +1,\rho +1}). \end{aligned}$$

In particular, \(\det M(1,1)=A_{11}A_{22}\cdots A_{\rho \rho }\) (since \(A_{\rho +1,\rho +1}=1\)).

For instance, in the example of Fig. 1 we have

$$\begin{aligned} P\cdot M(1,1)\cdot Q=\mathrm {diag}(abcde,e,1)\,. \end{aligned}$$
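This claim can be tested independently via determinantal divisors: the gcd \(D_k\) of all \(k\times k\) minors must equal the product of the \(k\) smallest diagonal entries of any SNF. A sympy sketch (our own consistency check, with the entries of Fig. 1 hardcoded):

```python
# Determinantal-divisor check (assuming sympy): for SNF diag(abcde, e, 1)
# one must have D_1 = 1, D_2 = 1*e, D_3 = 1*e*abcde = abcde^2.
from functools import reduce
import sympy as sp

a, b, c, d, e = sp.symbols('a b c d e')
P11 = a*b*c*d*e + b*c*d*e + b*c*e + c*d*e + c*e + d*e + c + e + 1
M = sp.Matrix([[P11,         b*c*e + c*e + c + e + 1, c + 1],
               [d*e + e + 1, e + 1,                   1],
               [1,           1,                       1]])

D1 = reduce(sp.gcd, list(M))                               # gcd of all entries
D2 = reduce(sp.gcd, [M.minor(i, j) for i in range(3)
                                   for j in range(3)])     # gcd of 2x2 minors
D3 = sp.factor(M.det())
print(D1, sp.factor(D2), D3)   # 1  e  a*b*c*d*e**2
```

This check determines the invariant factors only up to units, so it is weaker than Theorem 1 (which in addition produces unitriangular \(P\) and \(Q\)), but it is independent of the two proofs given below.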

We will give two proofs for Theorem 1.

The main tool used for the first proof is a recurrence for the polynomials \(P_{rs}\). To state this recurrence we need some definitions. Let \(\rho =\mathrm {rank}(\lambda )\), the size of the main diagonal of \(\lambda \). For \(1\le i\le \rho \), define the rectangular array

$$\begin{aligned} R_i = \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} (1,2) &{} (1,3) &{} \cdots &{} (1,\lambda _i-i+1)\\ (2,3) &{} (2,4) &{} \cdots &{} (2,\lambda _i-i+2)\\ &{} &{} \vdots \\ (i,i+1) &{} (i,i+2) &{} \cdots &{} (i,\lambda _i) \end{array} \right] \,. \end{aligned}$$

Note that if \(\lambda _\rho =\rho \), then \(R_\rho \) has no columns, i.e., \(R_\rho =\emptyset \). Let \(X_i\) denote the set of all subarrays of \(R_i\) whose shapes form a vertical reflection of a Young diagram, justified into the upper right-hand corner of \(R_i\). Define

$$\begin{aligned} \Omega _i =\sum _{\alpha \in X_i}\prod _{(a,b)\in \alpha } x_{ab}. \end{aligned}$$

For instance, if \(\lambda _2=4\), then

$$\begin{aligned} R_2 = \left[ \begin{array}{c@{\quad }c} (1,2) &{} (1,3)\\ (2,3) &{} (2,4) \end{array} \right] , \end{aligned}$$

so

$$\begin{aligned} \Omega _2 = 1+x_{13}+x_{12}x_{13}+x_{13}x_{24} +x_{12}x_{13}x_{24}+x_{12}x_{13}x_{23}x_{24}. \end{aligned}$$

In general, \(R_i\) will have \(i\) rows and \(\lambda _i-i\) columns, so \(\Omega _i\) will have \(\left( {\begin{array}{c}\lambda _i\\ i\end{array}}\right) \) terms. We also set \(\Omega _0=1\), which is consistent with regarding \(R_0\) to be an empty array.

Next set \(S_0=\emptyset \) and define for \(1\le i\le \rho \),

$$\begin{aligned} S_i = \{ (a,b)\in \lambda :1\le a\le i,\ \ \lambda _i-i+a< b\le \lambda _a\}. \end{aligned}$$

In particular \(S_1=\emptyset \). In general, \(S_i\) consists of those squares of \(\lambda \) that lie in the same row as, and strictly to the right of, the squares appearing as entries of \(R_i\); when \(\lambda _\rho =\rho \), the set \(S_\rho \) consists of all squares strictly to the right of the main diagonal of \(\lambda \). Set

$$\begin{aligned} \tau _i = \Omega _i\cdot \prod _{(a,b)\in S_i} x_{ab}, \end{aligned}$$

where as usual an empty product is equal to 1. In particular, \(\tau _0=1\).
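Coded directly from these definitions, a sympy sketch (our own helper, with the variable names of Fig. 1) produces \(\tau _0=1\), \(\tau _1=1+c+bc\), and \(\tau _2=bc\) for \(\lambda =(3,2)\); these are exactly the coefficients appearing in the examples after Theorem 2:

```python
# tau_i from the definitions (assuming sympy), for lambda = (3,2).
from itertools import product
import sympy as sp

lam = (3, 2)
x = {(1, 1): sp.Symbol('a'), (1, 2): sp.Symbol('b'), (1, 3): sp.Symbol('c'),
     (2, 1): sp.Symbol('d'), (2, 2): sp.Symbol('e')}

def tau(i):
    if i == 0:
        return sp.Integer(1)
    w = lam[i - 1] - i                       # R_i has i rows and w columns
    omega = sp.Integer(0)
    # X_i: row a of R_i contributes its rightmost c_a entries, where the
    # upper-right justified (vertically reflected) shape has c_1 >= ... >= c_i
    for cs in product(range(w + 1), repeat=i):
        if any(cs[k] < cs[k + 1] for k in range(i - 1)):
            continue
        term = sp.Integer(1)
        for a, ca in enumerate(cs, start=1):
            # row a of R_i holds the squares (a, a+1), ..., (a, lam_i - i + a)
            for col in range(lam[i - 1] - i + a - ca + 1, lam[i - 1] - i + a + 1):
                term *= x[(a, col)]
        omega += term
    # S_i: squares (a, b) of lambda with 1 <= a <= i, lam_i - i + a < b <= lam_a
    sprod = sp.prod([x[(a, col)] for a in range(1, i + 1)
                     for col in range(lam[i - 1] - i + a + 1, lam[a - 1] + 1)])
    return sp.expand(omega * sprod)

print([tau(i) for i in range(3)])   # [1, b*c + c + 1, b*c]
```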

We can now state the recurrence relation for \(P_{rs}\).

FormalPara Theorem 2

Let \(2\le j\le \rho +1=\mathrm {rank}(\lambda )+1\). Then

$$\begin{aligned} \tau _0 P_{1j}-\tau _1 P_{2j} + \cdots + (-1)^\rho \tau _\rho P_{\rho +1, j}=0. \end{aligned}$$

Moreover, for \(j=1\), we have

$$\begin{aligned} \tau _0 P_{11}-\tau _1 P_{21} + \cdots + (-1)^\rho \tau _\rho P_{\rho +1, 1}=A_{11}. \end{aligned}$$

Before presenting the proof, we first give a couple of examples. Let \(\lambda =(3,2)\), with \(x_{11}=a\), \(x_{12}=b\), \(x_{13}=c\), \(x_{21}=d\), and \(x_{22}=e\) as in Fig. 1. For \(j=1\) we obtain the identity

$$\begin{aligned}&(1+c+e+ce+de+bce+cde+bcde+abcde)\\&\quad -(1+c+bc)(1+e+de)+bc= abcde. \end{aligned}$$

For \(j=2\) we have

$$\begin{aligned} (1+c+e+ce+bce)-(1+c+bc)(1+e)+bc = 0, \end{aligned}$$

while for \(j=3\) we get

$$\begin{aligned} (1+c)-(1+c+bc)+bc=0. \end{aligned}$$
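All three identities are easily confirmed by machine; a short sympy check (an aside, with the polynomials of Fig. 1 hardcoded):

```python
# Expanding the three identities for lambda = (3,2) (assuming sympy):
import sympy as sp
a, b, c, d, e = sp.symbols('a b c d e')

t1, t2 = 1 + c + b*c, b*c                    # tau_1, tau_2 (tau_0 = 1)
P1 = [a*b*c*d*e + b*c*d*e + b*c*e + c*d*e + c*e + d*e + c + e + 1,
      b*c*e + c*e + c + e + 1, c + 1]        # P_{11}, P_{12}, P_{13}
P2 = [d*e + e + 1, e + 1, 1]                 # P_{21}, P_{22}, P_{23}
P3 = [1, 1, 1]                               # P_{31}, P_{32}, P_{33}

assert sp.expand(P1[0] - t1*P2[0] + t2*P3[0]) == a*b*c*d*e   # j = 1
assert sp.expand(P1[1] - t1*P2[1] + t2*P3[1]) == 0           # j = 2
assert sp.expand(P1[2] - t1*P2[2] + t2*P3[2]) == 0           # j = 3
```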

For a further example, let \(\lambda =(5,4,1)\), with the variables \(x_{ij}\) replaced by the letters \(a,b,\dots ,j\) as shown in Fig. 2.

For \(j=1\), we get

$$\begin{aligned}&P_{11} -(1+e+de+cde+bcde)(1+i+j+ij+hi+hij+ghi+ghij\\&\quad +fghij)+de(1+c+bc+ci+bci+bchi)(1+j) = abcdefghij, \end{aligned}$$

where \(P_{11}=1+e+i+j+\cdots +abcdefghij\), a polynomial with 34 terms. For \(j=2\), we get

$$\begin{aligned}&(1+e+i+ei+hi+dei+ehi+ghi+dehi+eghi+cdehi+deghi\\&\quad +\,cdeghi+ bcdeghi)-(1+e+de+cde+bcde)(1+i+hi+ghi)\\&\quad +de(1+c+bc+ci+bci+bchi)=0. \end{aligned}$$

For \(j=3\) we have

$$\begin{aligned}&(1+e+i+ei+hi+dei+ehi+dehi+cdehi)\\&\quad -\,(1+e+de+cde+bcde)(1+i+hi)\\&\quad +\,de(1+c+bc+ci+bci+bchi)=0. \end{aligned}$$
Fig. 2. The variables for \(\lambda =(5,4,1)\)

Proof of Theorem 2

First suppose that \(j\ge 2\). We will prove the result in the form

$$\begin{aligned} \tau _0 P_{1j}= \tau _1 P_{2j} - \cdots + (-1)^{\rho -1}\tau _\rho P_{\rho +1,j} \end{aligned}$$

by an Inclusion–Exclusion argument. Since \(\tau _0=1\), the left-hand side is the generating function for all skew diagrams \(\lambda (1,j){\setminus }\mu \), as defined by Eq. (1). If we take a skew diagram \(\lambda (2,j){\setminus }\sigma \) and append to it some element of \(X_1\) (that is, some squares on the first row forming a contiguous strip up to the last square \((1,\lambda _1)\)), then we will include every skew diagram \(\lambda (1,j){\setminus }\mu \). However, some additional diagrams \(\delta \) will also be included. These will have the property that the first row begins strictly to the left of the second. We obtain the first two rows of such a diagram \(\delta \) by choosing an element of \(X_2\) and adjoining to it the set \(S_2\). The remainder of the diagram \(\delta \) is a skew shape \(\lambda (3,j){\setminus }\zeta \). Thus, we cancel out the unwanted terms of \(\tau _1 P_{2j}\) by subtracting \(\tau _2 P_{3j}\). However, the product \(\tau _2 P_{3j}\) has some additional terms that need to be added back in. These terms correspond to diagrams \(\eta \) with the property that the first row begins strictly to the left of the second, and the second begins strictly to the left of the third. We obtain the first three rows of such a diagram \(\eta \) by choosing an element of \(X_3\) and adjoining to it the set \(S_3\). The remainder of the diagram \(\eta \) is a skew shape \(\lambda (4,j){\setminus }\xi \). Thus, we cancel out the unwanted terms of \(\tau _2 P_{3j}\) by adding \(\tau _3 P_{4j}\). This Inclusion–Exclusion process comes to an end when we reach the term \(\tau _\rho P_{\rho +1,j}\), since we cannot have \(\rho +1\) rows, each beginning strictly to the left of the one below. This proves the theorem for \(j\ge 2\).

When \(j=1\), the Inclusion–Exclusion process works exactly as before, except that the term \(A_{11}\) is never canceled from \(\tau _0 P_{11} = P_{11}\). Hence the theorem is also true for \(j=1\). \(\square \)

With this result at hand, we can now embark on the proof of Theorem 1. This is done by induction on \(\rho \), the result being trivial for \(\rho =0\) (so \(\lambda =\emptyset \)). Assume the assertion holds for partitions of rank less than \(\rho \), and let \(\mathrm {rank}(\lambda )=\rho \). For each \(1\le i\le \rho \), multiply row \(i+1\) of \(M(1,1)\) by \((-1)^i\tau _i\) and add it to the first row. By Theorem 2 we get a matrix \(M'\) whose first row is \([A_{11},0,0,\dots ,0]\). By symmetry we can perform the analogous operations on the columns of \(M'\), using the \(\tau _i\) computed from the conjugate partition. We then get a matrix in the block diagonal form \(\left[ \begin{array}{c@{\quad }c} A_{11} &{} 0\\ 0 &{} M(2,2) \end{array} \right] \). The row and column operations that we have performed are equivalent to computing \(P'MQ'\) for upper and lower unitriangular matrices \(P',Q'\in {{\mathrm{SL}}}(\rho +1,R)\), respectively. The proof now follows by induction. \(\square \)
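For \(\lambda =(3,2)\), the two rounds of this elimination can be written out explicitly. In the sympy sketch below (our illustration; the column multipliers \(1+d\) and \(d\) are the \(\tau _i\) of the conjugate partition, as one can check), the products \(P_2P_1\) and \(Q_1Q_2\) play the roles of \(P\) and \(Q\) in Theorem 1:

```python
# The elimination of the proof for lambda = (3,2), with sympy.
# P1, P2 are upper and Q1, Q2 lower unitriangular.
import sympy as sp
a, b, c, d, e = sp.symbols('a b c d e')

P11 = a*b*c*d*e + b*c*d*e + b*c*e + c*d*e + c*e + d*e + c + e + 1
M = sp.Matrix([[P11, b*c*e + c*e + c + e + 1, c + 1],
               [d*e + e + 1, e + 1, 1],
               [1, 1, 1]])

P1 = sp.Matrix([[1, -(1 + c + b*c), b*c], [0, 1, 0], [0, 0, 1]])  # rows: tau_1, tau_2
Q1 = sp.Matrix([[1, 0, 0], [-(1 + d), 1, 0], [d, 0, 1]])          # columns: conjugate tau's
P2 = sp.Matrix([[1, 0, 0], [0, 1, -1], [0, 0, 1]])                # second round, rows
Q2 = sp.Matrix([[1, 0, 0], [0, 1, 0], [0, -1, 1]])                # second round, columns

print((P2 * P1 * M * Q1 * Q2).expand())
# Matrix([[a*b*c*d*e, 0, 0], [0, e, 0], [0, 0, 1]])
```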

Note. The determinant above can also easily be evaluated by the Lindström–Wilf–Gessel–Viennot method of nonintersecting lattice paths, but it seems impossible to extend this method to a computation of SNF.

We now come to the second approach to the SNF, which does not use Theorem 2. Indeed, we will prove a more general version in which the weight matrix need not be square; although we do not expound this here, the previous proof may also be extended easily to any rectangular subarray (regarded as a matrix) of \(\lambda ^*\) whose lower right-hand corner lies in \(\lambda ^*{\setminus }\lambda \). The inductive proof below does not involve Inclusion–Exclusion arguments; again, suitable transformation matrices are computed explicitly step by step along the way.

Given our partition \(\lambda \), let \(F\) be a rectangle of size \(d\times e\) in \(\lambda ^*\), with top left corner at \((1,1)\), such that its lower right-hand corner \((d,e)\) is a square in the added border strip \(\lambda ^*{\setminus }\lambda \); thus \(A_{de}=1\) and \(P_{de}=1\). We denote the corresponding matrix of weights by

$$\begin{aligned} W_F=(P_{ij})_{(i,j)\in F}\,. \end{aligned}$$
Theorem 3

Let \(F\) be a rectangle in \(\lambda ^*\) as above, of size \(d\times e\); assume \(d\le e\) (the other case is dual). Then there are an upper unitriangular matrix \(P \in {{\mathrm{SL}}}(d,R)\) and a lower unitriangular matrix \(Q \in {{\mathrm{SL}}}(e,R)\) such that

$$\begin{aligned} P\cdot W_F \cdot Q =(\mathbf{0}, {{\mathrm{diag}}}(A_{1,1+e-d},A_{2,2+e-d},\ldots , A_{d,e})) \,. \end{aligned}$$

In particular, when \(\lambda \) is of rank \(\rho \) and \(F\) is the Durfee square in \(\lambda ^*\), we have the result in Theorem 1 for \(W_F=M(1,1)\).

Proof

We note that the claim clearly holds for \(1\times e\) rectangles, as then \(A_{1,e}=1=P_{1,e}\). We use induction on the size of \(\lambda \). For \(\lambda =\emptyset \), \(\lambda ^*=(1)\) and \(F\) can only be a \(1\times 1\) rectangle. We now assume that the result holds for all rectangles as above in partitions of \(n\). We take a partition \(\lambda '\) of \(n+1\) and a rectangle \(F'\) with its corner on the rim \(\lambda '^*{\setminus }\lambda '\), where we may assume that \(F'\) has at least two rows; we will produce the required transformation matrices inductively along the way.

First we assume that we can remove a square \(s=(a,b)\) from \(\lambda '\) and obtain a partition \(\lambda =\lambda '{\setminus }{s}\) such that \(F'\subseteq \lambda ^* \). By induction, we thus have the result for \(\lambda \) and \(F=F'\subset \lambda ^*\); say \(F\) is a rectangle with corner \((d,e)\), where \(d\le e\). Then either \(a<d\) and \(b\ge e\), or \(a\ge d\) and \(b<e\). We discuss the first case; the other case is analogous. Set \(z=x_{ab}\).

Let \(t\in \lambda \subset \lambda '\); denote the weights of \(t\) with respect to \(\lambda \) by \(A_t\), \(P_t\), and with respect to \(\lambda '\) by \(A'_t\), \(P'_t\). Let \(W=W_F=(P_{ij})_{(i,j)\in F}\) and \(W'=(P'_{ij})_{(i,j)\in F}\) be the corresponding weight matrices. We clearly have

$$\begin{aligned} A'_{j,j+e-d} = \left\{ \begin{array}{r@{\quad }l} z A_{j,j+e-d} &{} \text { for } 1\le j\le a \\ A_{j,j+e-d} &{} \text { for } a < j \le d \end{array} \right. \,. \end{aligned}$$

When we compute the weight of \(t=(i,j)\in F\) with \(i\le a\) with respect to \(\lambda '\), we get two types of contributions to \(P_{ij}'\). For a partition \(\mu \subseteq \lambda '(i,j)\) with \(s\not \in \mu \), i.e., \(\mu \subseteq \lambda (i,j)\), the corresponding weight summand in \(P_{ij}\) is multiplied by \(z\), and hence the total contribution from all such \(\mu \) is exactly \(zP_{ij}\). On the other hand, if the partition \(\mu \subseteq \lambda '(i,j)\) contains \(s\), then it contains the full rectangle \({\mathcal {R}}\) spanned by the corners \(t\) and \(s\), that is, running over rows \(i\) to \(a\) and columns \(j\) to \(b\); \(\mu \) arises from \({\mathcal {R}}\) by gluing suitable (possibly empty) partitions \(\alpha \), \(\beta \) to its right and bottom side, respectively, and

$$\begin{aligned} \lambda '(i,j) {\setminus }\mu = (\lambda (i,b+1){\setminus }\alpha ) \cup (\lambda (a+1,j){\setminus } \beta ) \,. \end{aligned}$$

Summing the terms for all such \(\mu \), we get the contribution \(P_{i,b+1}\cdot P_{a+1,j}\). Clearly, when \(i>a\), then the square \(s\) has no effect on the weight of \(t\). Hence

$$\begin{aligned} P_{ij}' = \left\{ \begin{array}{c@{\quad }l} z P_{ij} + P_{i,b+1}\cdot P_{a+1,j} &{} \text { for } 1\le i\le a \\ P_{ij} &{} \text { for } a < i\le d \end{array} \right. \,. \end{aligned}$$
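As a concrete instance of this formula (our verification, not part of the proof): take \(\lambda '=(3,2)\) and remove \(s=(1,3)\), so that \(\lambda =(2,2)\), \(z=c\), and \((a,b)=(1,3)\); for \(t=(1,1)\) the first case reads \(P'_{11}=cP_{11}+P_{1,4}P_{2,1}\), which sympy confirms (the polynomials are computed from Eq. (1)):

```python
# P'_{11} = z*P_{11} + P_{1,b+1}*P_{a+1,1} for lambda' = (3,2), s = (1,3):
import sympy as sp
a, b, c, d, e = sp.symbols('a b c d e')

Pp11 = a*b*c*d*e + b*c*d*e + b*c*e + c*d*e + c*e + d*e + c + e + 1  # for lambda'
P11 = a*b*d*e + b*d*e + b*e + d*e + e + 1   # Eq. (1) for lambda = (2,2)
P14 = 1                                     # lambda(1,4) is empty
P21 = d*e + e + 1                           # lambda(2,1) = (2), variables d, e
assert sp.expand(Pp11 - (c*P11 + P14*P21)) == 0
```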

We now transform \(W'\). Multiplying the \((a+1)\)-th row of \(W'\) by \(P_{i,b+1}\) and subtracting this from row \(i\), for all \(i\le a\), corresponds to multiplication from the left with an upper unitriangular matrix in \({{\mathrm{SL}}}(d,R)\) and gives

$$\begin{aligned} W'_1= \left[ \begin{array}{c@{\quad }c@{\quad }c} zP_{11} &{} \cdots &{} zP_{1 e}\\ \vdots &{} \cdots &{} \vdots \\ zP_{a1} &{} \cdots &{} zP_{a e }\\ P_{a+1,1} &{} \cdots &{} P_{a+1,e }\\ \vdots &{} \cdots &{} \vdots \\ P_{d 1} &{} \cdots &{} P_{d e } \end{array} \right] \,. \end{aligned}$$

By induction, we know that there are upper and lower unitriangular matrices \(U=(u_{ij})_{1\le i,j\le d },V=(v_{ij})_{1\le i,j\le e }\), respectively, defined over \(R\), such that \(UWV=( \mathbf{0},{{\mathrm{diag}}}(A_{1,1+e-d},\ldots ,A_{d,e }))\). We then define an upper unitriangular matrix \(U'=(u_{ij}')_{1\le i,j\le d }\in {{\mathrm{SL}}}(d,R)\) by setting

$$\begin{aligned}u'_{ij} = \left\{ \begin{array}{r@{\quad }l} z u_{ij} &{} \text { for } i\le a <j \\ u_{ij} &{} \text { otherwise } \end{array} \,. \right. \end{aligned}$$

With \(UW = (\tilde{P}_{ij})\), we then have

$$\begin{aligned} U'W_1'= \left[ \begin{array}{c@{\quad }c@{\quad }c} z\tilde{P}_{11} &{} \cdots &{} z\tilde{P}_{1 e }\\ \vdots &{} \cdots &{} \vdots \\ z\tilde{P}_{a1} &{} \cdots &{} z\tilde{P}_{a e }\\ \tilde{P}_{a+1,1} &{} \cdots &{} \tilde{P}_{a+1,e }\\ \vdots &{} \cdots &{} \vdots \\ \tilde{P}_{d 1} &{} \cdots &{} \tilde{P}_{d e } \end{array} \right] , \end{aligned}$$

and hence we obtain the desired form via

$$\begin{aligned} U'W'_1V=( \mathbf{0},{{\mathrm{diag}}}(zA_{1,1+e-d},\ldots ,zA_{a,a+e-d},A_{a+1,a+1+e-d},\ldots , A_{d e }))\,. \end{aligned}$$

Next we deal with the case where we cannot remove a square from \(\lambda '\) such that the rectangle \(F'\) is still contained in the extension of the smaller partition \(\lambda \) of \(n\). This is exactly the case when \(\lambda '\) is a rectangle, with corner square \(s=(d,e)\) (say), and \(F'=\lambda '^*\) is the rectangle with its corner at \((d+1,e+1)\). Then, \(s\) is the only square that can be removed from \(\lambda '\); for \(\lambda =\lambda '{\setminus }s\), we have \(\lambda ^*=\lambda '^*{\setminus }(d+1,e+1)\). We now use the induction hypothesis for the partition \(\lambda \) of \(n\) and the rectangle \(F=\lambda '\subset \lambda '^*=F'\).

We keep the notation for the weights of a square \(t=(i,j)\) with respect to \(\lambda \), \(\lambda '\), and set \(z=x_s=x_{de}\). Clearly, for the monomial weights with respect to \(\lambda '\) we have

$$\begin{aligned} A'_{j,j+e-d} = \left\{ \begin{array}{l@{\quad }l} z A_{j,j+e-d} &{} \text { for } j\le d \\ 1 &{} \text { for } j = d +1 \end{array} \right. \,. \end{aligned}$$

Now we consider how to compute the matrix \(W'=(P'_{ij})_{(i,j)\in F'}\) from the values \(P_{ij}\), \((i,j)\in F\). Arguing analogously as before, we obtain

$$\begin{aligned} P_{ij}' = \left\{ \begin{array}{l@{\quad }l} z P_{ij} + 1 &{} \text { for } 1\le i\le d \text { and } 1\le j \le e \\ 1 &{} \text { for } i=d+1 \text { or } j=e+1 \end{array} \right. \,. \end{aligned}$$

As a first simplification on \(W'\), we subtract the \((d +1)\)-th row of \(W'\) from row \(i\), for all \(i\le d \) (corresponding to a multiplication from the left with a suitable upper unitriangular matrix), and we obtain the matrix

$$\begin{aligned} W'_1= \left[ \begin{array}{c@{\quad }c@{\quad }c@{\quad }c} zP_{11} &{} \ldots &{} zP_{1 e } &{} 0\\ \vdots &{} \ldots &{} \vdots &{} \vdots \\ zP_{d 1} &{} \ldots &{} zP_{d e } &{} 0 \\ 1 &{} \ldots &{} 1 &{} 1 \end{array} \right] \,. \end{aligned}$$

Subtracting the last column from column \(j\), for all \(j\le e \), transforms this (via postmultiplication with a lower unitriangular matrix as required) into \(W'_2= \left[ \begin{array}{c@{\quad }c} zW &{} \mathbf{0}\\ \mathbf{0} &{} 1 \end{array} \right] \). By induction we have an upper and a lower unitriangular matrix \(U\in {{\mathrm{SL}}}(d,R)\), \(V\in {{\mathrm{SL}}}(e,R)\), respectively, with \(UWV=( \mathbf{0},{{\mathrm{diag}}}(A_{1,1+e-d}, \ldots , A_{d e }))\). Then

$$\begin{aligned} \left[ \begin{array}{c@{\quad }c} U &{} \mathbf{0}\\ \mathbf{0} &{} 1 \end{array} \right] \left[ \begin{array}{c@{\quad }c} zW &{} \mathbf{0}\\ \mathbf{0} &{} 1 \end{array} \right] \left[ \begin{array}{c@{\quad }c} V &{} \mathbf{0}\\ \mathbf{0} &{} 1 \end{array} \right] = \left[ \begin{array}{c@{\quad }c@{\quad }c} \mathbf{0} &{} {{\mathrm{diag}}}(zA_{1,1+e-d}, \ldots , zA_{d e}) &{} \mathbf{0} \\ \mathbf{0} &{} \mathbf{0}&{} 1 \end{array} \right] \,, \end{aligned}$$

and we have the assertion as claimed. \(\square \)

We conclude with an interesting special case: when \(\lambda \) is the “staircase” \((n-1,n-2,\dots ,1)\) and each \(x_{ij}=q\), the polynomial \(P_{ij}\) becomes the \(q\)-Catalan number denoted \(\tilde{C}_{n+2-i-j}(q)\) by Fürlinger and Hofbauer [6] (see also [7, Exer. 6.34(a)]). For instance, \(\tilde{C}_3(q)=1+2q+q^2+q^3\). Theorem 1 gives that the matrix

$$\begin{aligned} M_{2m-1}=[\tilde{C}_{2m+1-i-j}(q)]_{i,j=1}^m \end{aligned}$$

has SNF \(\mathrm {diag}(q^{\left( {\begin{array}{c}2m-1\\ 2\end{array}}\right) },q^{\left( {\begin{array}{c}2m-3\\ 2\end{array}}\right) },q^{\left( {\begin{array}{c}2m-5\\ 2\end{array}}\right) }, \dots ,q^3,1)\), and the matrix

$$\begin{aligned} M_{2m} =[\tilde{C}_{2m+2-i-j}(q)]_{i,j=1}^{m+1} \end{aligned}$$

has SNF \(\mathrm {diag}(q^{\left( {\begin{array}{c}2m\\ 2\end{array}}\right) },q^{\left( {\begin{array}{c}2m-2\\ 2\end{array}}\right) }, q^{\left( {\begin{array}{c}2m-4\\ 2\end{array}}\right) },\dots ,q,1)\). The determinants of the matrices \(M_n\) were already known (e.g., [4], [5, p. 7]), but their Smith normal form is new.
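These staircase SNFs can also be sanity-checked by machine. The sketch below (assuming sympy; the helpers qcat and snf_diag are ours) builds \(\tilde{C}_n(q)\) as the staircase specialization of Eq. (1) and recovers the stated diagonals for \(m=2\) via determinantal divisors:

```python
# q-Catalan check (assuming sympy): C~_n(q), then the SNF diagonals of
# M_3 (2 x 2, m = 2) and M_4 (3 x 3, m = 2) via determinantal divisors.
from functools import reduce
from itertools import combinations
import sympy as sp
q = sp.Symbol('q')

def qcat(n):
    """C~_n(q): sum of q^(|delta| - |mu|) over partitions mu inside the
    staircase delta = (n-1, n-2, ..., 1)."""
    delta = tuple(range(n - 1, 0, -1))
    def sizes(bounds, cap):
        if not bounds:
            yield 0
            return
        for first in range(min(cap, bounds[0]) + 1):
            for rest in sizes(bounds[1:], first):
                yield first + rest
    total = sum(delta)
    return sp.expand(sum(q**(total - m) for m in sizes(delta, total)))

assert qcat(3) == sp.expand(1 + 2*q + q**2 + q**3)

def snf_diag(M):
    """Quotients D_k / D_{k-1} of determinantal divisors, largest first."""
    n = M.rows
    D = [sp.Integer(1)]
    for k in range(1, n + 1):
        minors = [M[list(r), list(cset)].det()
                  for r in combinations(range(n), k)
                  for cset in combinations(range(n), k)]
        D.append(reduce(sp.gcd, minors))
    return [sp.factor(D[k] / D[k - 1]) for k in range(n, 0, -1)]

M3 = sp.Matrix(2, 2, lambda i, j: qcat(3 - i - j))   # [C~_{5-i-j}], 0-indexed
M4 = sp.Matrix(3, 3, lambda i, j: qcat(4 - i - j))   # [C~_{6-i-j}], 0-indexed
print(snf_diag(M3))   # [q**3, 1]
print(snf_diag(M4))   # [q**6, q, 1]
```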