## 1 Introduction

Let $\mathbb{N}$ denote the set of natural numbers $\{0,1,2,\ldots\}$ and $\mathbb{P}$ the set of positive integers $\{1,2,\ldots\}$. We say that $\gamma=(\gamma_1,\gamma_2,\ldots,\gamma_n)$ is a weak composition of $m$ into $n$ parts if each $\gamma_i\in\mathbb{N}$ and $\sum_{i=1}^{n}\gamma_i=m$. Letting $|\gamma|=\sum_i \gamma_i$, the (column) diagram of $\gamma$ is the figure $dg'(\gamma)$ consisting of $|\gamma|$ cells arranged into columns so that the $i$th column contains $\gamma_i$ cells. For example, the diagram of $\gamma=(2,0,1,0,3)$ is pictured in Fig. 1. The augmented diagram of $\gamma$, denoted by $\widehat{dg}(\gamma)$, consists of the diagram of $\gamma$ together with an extra row of $n$ cells attached below. These extra cells are referred to as the basement of the augmented diagram. We let $\lambda(\gamma)$ be the partition that results by taking the weakly decreasing rearrangement of the parts of $\gamma$. Thus if $\gamma=(2,0,1,0,3)$, then $\lambda(\gamma)=(3,2,1,0,0)$.

Macdonald [7] defined a famous family of symmetric polynomials $P_\lambda(x_1,x_2,\ldots,x_n;q,t)$, which have important applications to a variety of areas. In [6], Macdonald showed that many of the properties of the $P_\lambda$, such as satisfying a multivariate orthogonality condition, are shared by a family of nonsymmetric polynomials $E_\gamma(x_1,\ldots,x_n;q,t)$, where $\gamma$ is a weak composition with $n$ parts. Haglund, Haiman and Loehr [1] obtained a combinatorial formula for $E_\gamma(x_1,\ldots,x_n;q,t)$ in terms of fillings of $\widehat{dg}(\gamma)$ by positive integers satisfying certain constraints. It will be simpler for us to phrase things in terms of a transformed version of the $E_\gamma$ studied by Marshall [8], which we denote by $\widehat{E}_\gamma(x_1,\ldots,x_n;q,t)$. The $\widehat{E}_\gamma$ can be obtained from the $E_\gamma$ by sending $q\to 1/q$, $t\to 1/t$, reversing the $x$-variables, and reversing the parts of $\gamma$. The corresponding combinatorial expression for $\widehat{E}_\gamma(x_1,\ldots,x_n;0,0)$ from [1] involves what the second author [9, 10] later called semi-standard augmented fillings. It was previously known that $\widehat{E}_\gamma(x_1,\ldots,x_n;0,0)$ (hereafter denoted more simply by $\widehat{E}_\gamma(x_1,\ldots,x_n)$) equals the "standard bases" of Lascoux and Schützenberger [5], which are also referred to as Demazure atoms. The second author introduced a generalization of the RSK insertion algorithm involving semi-standard augmented fillings, and used this to give combinatorial proofs of several results involving Demazure atoms. For example, this generalized RSK insertion algorithm gives a bijective proof that for any partition $\beta$,

$$s_{\beta}(x_1,\ldots, x_n) = \sum_{ \gamma\atop{ \lambda(\gamma )=\beta} } \widehat{E}_{\gamma}(x_1, \ldots, x_n).$$
(1)

This extended Robinson–Schensted–Knuth insertion algorithm is also instrumental in work of Haglund, Luoto, Mason, and van Willigenburg, who developed the theory of a new basis for the ring of quasisymmetric functions called quasisymmetric Schur functions [3, 4]. In particular, these authors use it in proving a generalization of the Littlewood–Richardson rule, where the product of a Schur function and a Demazure atom (Demazure character, quasisymmetric Schur function) is expanded in terms of Demazure atoms (Demazure characters, quasisymmetric Schur functions), respectively, with positive coefficients.

Let $\epsilon_n$ denote the identity $1\,2\cdots n$ in $S_n$ and $\bar{\epsilon}_n$ the reverse of the identity, $n\,(n-1)\cdots 1$. In [9, 10] and in [3, 4], the basements of the diagrams $\widehat{dg}(\gamma)$ are always filled by either $\epsilon_n$ (i.e., $i$ is in the $i$th column of the basement) or by $\bar{\epsilon}_n$. In this article, we show that many of the nice properties of the extended RSK insertion algorithm hold with the basement consisting of an arbitrary permutation $\sigma\in S_n$. In particular, we define a weight-preserving bijection which shows

$$s_\beta(x_1,\ldots,x_n) = \sum_{\gamma} \widehat{E}^{\sigma}_{\gamma}(x_1, \ldots, x_n)$$
(2)

where the sum is over all weak compositions $\gamma$ such that $\lambda(\gamma)=\beta$ and $\gamma_i\geq\gamma_j$ whenever $i<j$ and $\sigma_i>\sigma_j$. Here $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n)$ is the version of $\widehat{E}_{\gamma}(x_1,\ldots,x_n)$ with basement $\sigma$, which we call a generalized Demazure atom. In the special case when $\sigma=\bar{\epsilon}_n$, there is only one term in the sum above, so that $s_{\beta} = \widehat{E}^{\bar{\epsilon}_n}_{\beta}$, while if $\sigma$ is $\epsilon_n$ then (2) reduces to (1).
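The index set of the sum in (2) is easy to enumerate directly. The following Python sketch (the function name `admissible_shapes` is ours) lists the weak compositions $\gamma$ appearing on the right-hand side for a given $\sigma$ and $\beta$; it also illustrates the two special cases just mentioned.

```python
from itertools import permutations

def admissible_shapes(beta, sigma):
    """Weak compositions gamma with lambda(gamma) = beta such that
    gamma_i >= gamma_j whenever i < j and sigma_i > sigma_j."""
    n = len(sigma)
    padded = tuple(beta) + (0,) * (n - len(beta))
    shapes = set(permutations(padded))          # all rearrangements of beta's parts
    return sorted(g for g in shapes
                  if all(g[i] >= g[j]
                         for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j]))

# With the reverse basement there is a single term (gamma = beta itself),
# while the identity basement allows every rearrangement, recovering (1):
print(admissible_shapes((2, 1), (2, 1)))   # [(2, 1)]
print(admissible_shapes((2, 1), (1, 2)))   # [(1, 2), (2, 1)]
```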

Part of our motivation for studying the $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n)$ is an unpublished result of M. Haiman and the first author, which can be described briefly as follows. Let $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n;q,t)$ denote the polynomial obtained by starting with the combinatorial formula from [1] for $\widehat{E}_{\gamma}(x_1,\ldots,x_n;q,t)$ involving sums over non-attacking fillings, replacing the basement $\epsilon_n$ by $\sigma_1\sigma_2\cdots\sigma_n$, and keeping other aspects of the formula the same. Then if $i+1$ occurs to the left of $i$ in the basement $\sigma_1\sigma_2\cdots\sigma_n$, we have

(3)

Here $A$ equals one if the height of the column of $\widehat{dg}(\gamma)$ above $i+1$ in the basement is greater than or equal to the height of the column above $i$ in the basement, and equals zero otherwise. Also, $\sigma'$ is the permutation obtained by interchanging $i$ and $i+1$ in $\sigma$. The $T_i$ are generators for the affine Hecke algebra which act on monomials in the $x$ variables by

with $x^{\alpha_i} = x_i/x_{i+1}$. See [1] for a more detailed description of the $T_i$ and their relevance to nonsymmetric Macdonald polynomials. Our $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n)$ can be obtained by setting $q=t=0$ in $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n;q,t)$, and hence is a natural generalization of the $\widehat{E}_{\gamma}(x_1,\ldots,x_n)$ to investigate. If we set $q=t=0$ in the Hecke operator $T_i$, it reduces to a divided difference operator similar to those appearing in the definition of Schubert polynomials. By (3), $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n)$ can be expressed (up to a power of $t$) as a sequence of divided difference operators applied to the Demazure character $\widehat{E}^{\bar{\epsilon}_n}_{\gamma}(x_1,\ldots,x_n)$.

As with the extended insertion algorithm, we shall see that our insertion algorithm with general basements also commutes in a natural way with the RSK insertion algorithm. This useful fact will allow us to extend the results of the second author to our more general setup. Moreover, we shall give a precise characterization of how the results of our insertion algorithm vary as the basement σ varies. If $$\sigma= {\bar {\epsilon}_{n}}$$ our algorithm becomes essentially equivalent to the ordinary RSK row insertion algorithm, while if σ=ϵ n , it reduces to the extended insertion algorithm.

The outline of this paper is as follows. In Sect. 2, we formally define the objects we will be working with, namely permuted basement semi-standard augmented fillings relative to a permutation $\sigma$ ($\operatorname{PBF}$s). In Sects. 3 and 4, we describe our insertion algorithm for $\operatorname{PBF}$s and derive its general properties. In Sect. 5, we use it to prove analogues of the Pieri rules for the product of a homogeneous symmetric function $h_n(x_1,\ldots,x_n)$ times an $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n)$ and the product of an elementary symmetric function $e_n(x_1,\ldots,x_n)$ times an $\widehat{E}^{\sigma}_{\gamma}(x_1,\ldots,x_n)$. In Sect. 6, we define a generalization of the RSK correspondence between $\mathbb{N}$-valued matrices and pairs of column strict tableaux for permuted basement fillings and prove several of its basic properties. Finally, in Sect. 7, we study the analogue of evacuation for $\operatorname{PBF}$s.

## 2 Permuted basement semi-standard augmented fillings

The positive integer $n$ is fixed throughout, while $\gamma$ will always denote a weak composition into $n$ parts and $\sigma$ a permutation in $S_n$. We let $(i,j)$ denote the cell in the $i$th column, reading from left to right, and the $j$th row, reading from bottom to top, of $\widehat{dg}(\gamma)$. The basement cells of $\widehat{dg}(\gamma)$ are considered to be in row 0, so that $\widehat{dg}(\gamma)=dg'(\gamma)\cup\{(i,0) : 1\le i\le n\}$. The reading order of the cells of $\widehat{dg}(\gamma)$ is obtained by reading the cells in rows from left to right, beginning at the highest row and reading from top to bottom. Thus a cell $a=(i,j)$ is less than a cell $b=(i',j')$ in the reading order if either $j>j'$, or $j=j'$ and $i<i'$. For example, if $\gamma=(0,2,0,3,1,2,0,0,1)$, then $\widehat{dg}(\gamma)$ is pictured in Fig. 2, where we have placed the number $i$ in the $i$th cell in reading order. An augmented filling $F$ of an augmented diagram $\widehat{dg}(\gamma)$ is a function $F: \widehat{dg}(\gamma)\rightarrow\mathbb{P}$, which we picture as an assignment of positive integers to the cells of $\widehat{dg}(\gamma)$. We let $F(i,j)$ denote the entry in cell $(i,j)$ of $F$. The reading word of $F$, $\mathrm{read}(F)$, is obtained by recording the entries of $F$ in the reading order of $dg'(\gamma)$. The content of $F$ is the multiset of entries which appear in the filling $F$. Throughout this article, we will only be interested in fillings $F$ such that the entries in each column are weakly increasing reading from top to bottom and the basement entries form a permutation in the symmetric group $S_n$.
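The reading order is straightforward to compute. The Python sketch below (the function name `reading_order` is ours) lists the cells of $\widehat{dg}(\gamma)$ in reading order for the example $\gamma=(0,2,0,3,1,2,0,0,1)$ above; cells are represented as (column, row) pairs with row 0 the basement.

```python
def reading_order(gamma):
    """Cells (column, row) of the augmented diagram of gamma in reading order:
    rows from top to bottom, left to right within each row.
    Columns are 1-indexed; row 0 is the basement."""
    cells = [(i + 1, j) for i, g in enumerate(gamma) for j in range(g + 1)]
    return sorted(cells, key=lambda c: (-c[1], c[0]))

cells = reading_order((0, 2, 0, 3, 1, 2, 0, 0, 1))
print(len(cells))   # 18 cells: |gamma| = 9 plus a basement of n = 9 cells
print(cells[:4])    # [(4, 3), (2, 2), (4, 2), (6, 2)]
```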

Next we define type A and B triples as in [9]. A type A triple in an augmented diagram of shape $\gamma$ is a set of three cells $a,b,c$ of the form $(i,k),(j,k),(i,k-1)$ for some pair of columns $i<j$ of the diagram and some row $k>0$, where $\gamma_i\geq\gamma_j$. A type B triple is a set of three cells $a,b,c$ of the form $(j,k+1),(i,k),(j,k)$ for some pair of columns $i<j$ of the diagram and some row $k\geq 0$, where $\gamma_i<\gamma_j$. Note that basement cells can be elements of triples. As noted above, in this article our fillings $F$ have weakly increasing column entries reading from top to bottom, so we always have the entry values satisfying $F(a)\leq F(c)$. We say that a triple of either type is an inversion triple if the relative order of the entries is either $F(b)<F(a)\leq F(c)$ or $F(a)\leq F(c)<F(b)$. Otherwise, i.e., if $F(a)\leq F(b)\leq F(c)$, we say that the triple is a coinversion triple. Figure 3 pictures type A and B triples.

A semi-standard augmented filling is a filling of an augmented diagram with positive integer entries so that (i) the column entries are weakly increasing from top to bottom, (ii) the basement entries form a permutation of $1,2,\ldots,n$, where $n$ is the number of cells in the basement, and (iii) every type A or B triple is an inversion triple. We say that cells $c_1=(x_1,y_1)$ and $c_2=(x_2,y_2)$ are attacking if either $c_1$ and $c_2$ lie in the same row, i.e., $y_1=y_2$, or $c_1$ lies strictly to the left of and one row below $c_2$, i.e., $x_1<x_2$ and $y_2=y_1+1$. We say that a filling $F$ is non-attacking if $F(c_1)\neq F(c_2)$ whenever $c_1$ and $c_2$ are attacking. It is easy to see from our definition of inversion triples that a semi-standard augmented filling $F$ must be non-attacking. A superscript $\sigma$ on a filling $F$, as in $F^\sigma$, means the basement entries form the permutation $\sigma$.

We say that a filling $F^\sigma$ is a permuted basement semi-standard augmented filling ($\operatorname{PBF}$) of shape $\gamma$ with basement permutation $\sigma$ if

1. (I) $F^\sigma$ is a semi-standard augmented filling of $\widehat{dg}(\gamma)$,

2. (II) $F^\sigma((i,0))=\sigma_i$ for $i=1,\ldots,n$, and

3. (III) for all cells $a=(i_2,j)$ and $b=(i_1,j-1)$ such that $i_1<i_2$ and $\gamma_{i_1}<\gamma_{i_2}$, we have $F^\sigma(b)<F^\sigma(a)$.

We shall call condition (III) the B-increasing condition, as pictured in Fig. 4.

We note that the fact that a $\operatorname{PBF}$ $F^\sigma$ has weakly increasing columns, reading from top to bottom, and satisfies the B-increasing condition automatically implies that every type B triple in $F^\sigma$ is an inversion triple. That is, suppose that $\gamma_i<\gamma_j$ where $i<j$ and that $a=(j,k+1)$, $b=(i,k)$, $c=(j,k)$ is a type B triple. Then $F^\sigma(b)<F^\sigma(a)\leq F^\sigma(c)$, since the B-increasing condition forces $F^\sigma(b)<F^\sigma(a)$ and the weakly increasing column condition forces $F^\sigma(a)\leq F^\sigma(c)$. Thus $\{a,b,c\}$ is an inversion triple.
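These conditions are easy to verify by machine. The following Python sketch (names ours) checks conditions (I)–(III) for a filling given as a dictionary from cells $(i,j)$ to entries; by the observation just made, it suffices to check the B-increasing condition in place of the type B inversion condition.

```python
def is_pbf(filling, sigma):
    """Check whether `filling` (a dict (column, row) -> entry for rows >= 1;
    the basement row 0 is supplied from sigma) is a PBF with basement sigma."""
    n = len(sigma)
    F = dict(filling)
    for i in range(1, n + 1):
        F.setdefault((i, 0), sigma[i - 1])
    if any(F[(i, 0)] != sigma[i - 1] for i in range(1, n + 1)):
        return False                                  # (II) basement is sigma
    g = [max((j for (c, j) in F if c == i), default=0) for i in range(1, n + 1)]
    if any(j > 1 and (i, j - 1) not in F for (i, j) in F):
        return False                                  # columns have no gaps
    if any(j >= 1 and F[(i, j)] > F[(i, j - 1)] for (i, j) in F):
        return False                                  # columns weakly increase downward
    for i in range(1, n + 1):
        for m in range(i + 1, n + 1):
            if g[i - 1] >= g[m - 1]:
                # type A triples a=(i,k), b=(m,k), c=(i,k-1): forbid F(a)<=F(b)<=F(c)
                for k in range(1, g[m - 1] + 1):
                    if F[(i, k)] <= F[(m, k)] <= F[(i, k - 1)]:
                        return False
            else:
                # (III) B-increasing: F(i,k-1) < F(m,k) whenever both cells exist
                for k in range(1, g[m - 1] + 1):
                    if k - 1 <= g[i - 1] and F[(i, k - 1)] >= F[(m, k)]:
                        return False
    return True

# A PBF of shape (2,0,1) with basement 3 1 2, and the same filling with a
# basement that breaks the weakly-increasing-column condition:
print(is_pbf({(1, 1): 3, (1, 2): 2, (3, 1): 2}, (3, 1, 2)))   # True
print(is_pbf({(1, 1): 3, (1, 2): 2, (3, 1): 2}, (1, 3, 2)))   # False
```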

Given a $\operatorname{PBF}$ $F^\sigma$ of shape $\gamma$, we define the weight of $F^\sigma$, $W(F^\sigma)$, to be

$$W\bigl(F^\sigma \bigr) = \prod _{(i,j) \in dg^{\prime}(\gamma)} x_{F^\sigma (i,j)}.$$
(4)

We let $\mathcal{PBF}(\gamma,\sigma)$ denote the set of all $\operatorname{PBF}$s $F^\sigma$ of shape $\gamma$ with basement $\sigma$. We then define

$$\widehat{E}_{\gamma}^\sigma (x_1, x_2, \ldots, x_n ) = \sum_{F^\sigma \in\mathcal{PBF}(\gamma,\sigma )} W\bigl(F^\sigma \bigr).$$
(5)
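For very small shapes, (5) can be computed by brute force: enumerate all fillings of $dg'(\gamma)$ and keep those satisfying the $\operatorname{PBF}$ conditions. The self-contained Python sketch below (names ours) does this, recording each weight as the sorted tuple of entries; it illustrates both (1) and the fact that a reverse basement on a partition shape yields a single Schur function.

```python
from itertools import product

def pbf_weights(gamma, sigma):
    """Brute-force sketch: weights (as sorted entry tuples) of all PBFs of
    shape gamma with basement sigma.  Feasible only for tiny shapes."""
    n = len(sigma)
    cells = [(i + 1, j) for i, g in enumerate(gamma) for j in range(1, g + 1)]
    g = list(gamma)

    def ok(F):
        # columns must weakly increase reading top to bottom
        if any(F[(i, j)] > F[(i, j - 1)] for (i, j) in cells):
            return False
        for i in range(1, n + 1):
            for m in range(i + 1, n + 1):
                if g[i - 1] >= g[m - 1]:   # type A triples must be inversions
                    if any(F[(i, k)] <= F[(m, k)] <= F[(i, k - 1)]
                           for k in range(1, g[m - 1] + 1)):
                        return False
                else:                      # B-increasing condition (III)
                    if any(k - 1 <= g[i - 1] and F[(i, k - 1)] >= F[(m, k)]
                           for k in range(1, g[m - 1] + 1)):
                        return False
        return True

    weights = []
    for vals in product(range(1, n + 1), repeat=len(cells)):
        F = {c: v for c, v in zip(cells, vals)}
        for i in range(1, n + 1):
            F[(i, 0)] = sigma[i - 1]
        if ok(F):
            weights.append(tuple(sorted(vals)))
    return sorted(weights)

# With basement 2 1 (the reverse of the identity in S_2) and partition shape
# (2,0), the weights give x1^2 + x1*x2 + x2^2 = s_(2)(x1,x2), while the
# identity basement splits the same Schur function between (2,0) and (0,2):
print(pbf_weights((2, 0), (2, 1)))                              # [(1, 1), (1, 2), (2, 2)]
print(pbf_weights((2, 0), (1, 2)) + pbf_weights((0, 2), (1, 2)))
```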

The following fact about $$\operatorname {PBF}$$s will be used frequently in the sequel.

### Lemma 1

Let $F^\sigma$ be a $\operatorname{PBF}$ of shape $\gamma$ and assume that $i<m$.

1. (i) Suppose that $F^\sigma(i,j)<F^\sigma(m,j)$ for some $j>0$. Then $F^\sigma(i,j-1)<F^\sigma(m,j)$. Moreover, for all $0\le k<j$, $F^\sigma(i,k)<F^\sigma(m,k+1)\le F^\sigma(m,k)$.

2. (ii) Suppose that $F^\sigma(i,j)>F^\sigma(m,j)$ for some $j\ge 0$. Then $\gamma_i\ge\gamma_m$ and, for all $j\le k\le\gamma_m$, $F^\sigma(i,k)>F^\sigma(m,k)$.

### Proof

For (i), we consider two cases. First, if $\gamma_i<\gamma_m$, then the B-increasing condition forces $F^\sigma(i,j-1)<F^\sigma(m,j)$. Second, if $\gamma_i\ge\gamma_m$, then consider the A-triple $a=(i,j)$, $b=(m,j)$, and $c=(i,j-1)$. As we are assuming that $F^\sigma(a)<F^\sigma(b)$, it must be the case that $F^\sigma(i,j-1)=F^\sigma(c)<F^\sigma(b)=F^\sigma(m,j)$, since otherwise $\{a,b,c\}$ would be a coinversion triple in $F^\sigma$. Thus it is always the case that $F^\sigma(i,j-1)<F^\sigma(m,j)$. But then we know that $F^\sigma(i,j-1)<F^\sigma(m,j)\le F^\sigma(m,j-1)$, so that $F^\sigma(i,j-1)<F^\sigma(m,j-1)$. Thus we can repeat our argument to show that for all $0\le k<j$, $F^\sigma(i,k)<F^\sigma(m,k+1)\le F^\sigma(m,k)$.

For (ii), suppose that $F^\sigma(i,j)>F^\sigma(m,j)$. Then we claim that it cannot be the case that $\gamma_i<\gamma_m$, since otherwise $(m,j+1)$ would be a cell in $F^\sigma$, which would mean that $F^\sigma(i,j)>F^\sigma(m,j)\ge F^\sigma(m,j+1)$. But then $a=(m,j+1)$ and $b=(i,j)$ would violate the B-increasing condition. Thus it must be the case that $\gamma_i\ge\gamma_m$. We claim that it also must be the case that $F^\sigma(i,k)>F^\sigma(m,k)$ for all $j<k\le\gamma_m$. If this is not the case, then let $k$ be the smallest such that $k>j$ and $F^\sigma(i,k)\le F^\sigma(m,k)$. This implies that the triple $\{(i,k),(m,k),(i,k-1)\}$ is a type A coinversion triple, since

$$F^{\sigma}(i,k) \le F^{\sigma}(m,k) \le F^{\sigma}(m,k-1) < F^{\sigma}(i,k-1).$$

Since we are assuming that F σ has no type A coinversion triples, there can be no such k. □

Note that part (ii) of Lemma 1 tells us that the basement permutation $\sigma$ restricts the possible shapes of a $\operatorname{PBF}$ $F^\sigma$ with basement $\sigma$. That is, if $\sigma_i>\sigma_m$ for some $i<m$, then the height of column $i$ in $F^\sigma$ must be greater than or equal to the height of column $m$ in $F^\sigma$.

We end this section by considering the two special cases of $\operatorname{PBF}$s where the basement is either the identity or the reverse of the identity. In the special case where the basement permutation $\sigma=\epsilon_n$, a $\operatorname{PBF}$ is a semi-standard augmented filling as defined in [9]. Next consider the case where $F^{\bar{\epsilon}_n}$ is a $\operatorname{PBF}$ of shape $\gamma=(\gamma_1,\ldots,\gamma_n)$ with basement $\bar{\epsilon}_n$. In that case, Lemma 1 implies that $\gamma_1\ge\gamma_2\ge\cdots\ge\gamma_n$ and that $F^{\bar{\epsilon}_n}$ must be strictly decreasing in rows, reading from left to right. Since the entries of $F^{\bar{\epsilon}_n}$ must weakly decrease in columns reading from bottom to top, we see that $F^{\bar{\epsilon}_n}$ is what could be called a reverse row strict tableau with basement $\bar{\epsilon}_n$ attached. It follows that for $\gamma$ a partition, $\widehat{E}^{\bar{\epsilon}_n}_{\gamma}(x_1,x_2,\ldots,x_n)$ is equal to the Schur function $s_\gamma(x_1,x_2,\ldots,x_n)$.

## 3 An analogue of Schensted insertion

In [9], the second author defined a procedure $k\to F$ to insert a positive integer $k$ into a semi-skyline augmented filling, which is a $\operatorname{PBF}$ with basement permutation equal to the identity. In this section, we shall describe an extension of this insertion procedure which inserts a positive integer into a $\operatorname{PBF}$ with an arbitrary basement permutation.

Let $F^\sigma$ be a $\operatorname{PBF}$ with basement permutation $\sigma\in S_n$. We shall define a procedure $k\to F^\sigma$ to insert a positive integer $k$ into $F^\sigma$. Let $\bar{F}^\sigma$ be the extension of $F^\sigma$ obtained by first extending the basement permutation $\sigma$, adding $j$ in cell $(j,0)$ for $n<j\le k$, and then adding a cell containing a 0 on top of each column. Let $(x_1,y_1),(x_2,y_2),\ldots$ be the cells of this extended diagram listed in reading order. Formally, we define the insertion $k\to(x_1,y_1),(x_2,y_2),\ldots$ of $k$ into the sequence of cells $(x_1,y_1),(x_2,y_2),\ldots$ as follows.

Let $k_0=k$ and look for the first $i$ such that $\bar{F}^\sigma(x_i,y_i)<k_0\le\bar{F}^\sigma(x_i,y_i-1)$. Then there are two cases.

Case 1. If $\bar{F}^\sigma(x_i,y_i)=0$, then place $k_0$ in cell $(x_i,y_i)$ and terminate the procedure.

Case 2. If $\bar{F}^\sigma(x_i,y_i)\neq 0$, then place $k_0$ in cell $(x_i,y_i)$, replace $k_0$ by the entry previously in that cell, and repeat the procedure by inserting the new $k_0$ into the sequence of cells $(x_{i+1},y_{i+1}),(x_{i+2},y_{i+2}),\ldots$. In such a situation, we say that $\bar{F}^\sigma(x_i,y_i)$ was bumped in the insertion $k\to F^\sigma$.

The output of $k\to F^\sigma$ is the filling that keeps only the cells that are filled with positive integers. That is, we remove any cells of $\bar{F}^\sigma$ that still contain a 0.

The sequence of cells containing elements that were bumped in the insertion $k\to F^\sigma$, together with the final cell added when the procedure terminates, will be called the bumping path of the insertion. For example, Fig. 5 shows an extended diagram of a $\operatorname{PBF}$ with basement permutation equal to $6\,1\,3\,4\,2\,5$. If we insert 5 into this $\operatorname{PBF}$, then it is easy to see that the first element bumped is the 4 in column 1. Thus that 4 is replaced by 5, and we insert 4 into the remaining sequence of cells. The first element that 4 can bump is the 2 in column 4. Thus 4 replaces the 2 in column 4, and 2 is inserted into the remaining cells. But then that 2 bumps the 0 in column 5, so the procedure terminates. The circled elements in Fig. 5 correspond to the bumping path of this insertion. Clearly, the entries of $\bar{F}^\sigma$ in the bumping path must strictly decrease as we proceed in reading order.

We note that if we try to insert 8 into the $\operatorname{PBF}$ pictured in Fig. 5, the 8 would have no place to go unless we created extra columns with basement entries 7 and 8. Thus, in our case, it is easy to see that inserting 8 into the $\operatorname{PBF}$ of Fig. 5 gives the $\operatorname{PBF}$ pictured in Fig. 6. For the rest of this paper, when we consider an insertion $k\to F^\sigma$, we will assume that $\sigma\in S_n$ where $n$ is greater than or equal to $k$ and to all of the entries in $F^\sigma$.
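The procedure above can be prototyped directly. In the Python sketch below (the function name `insert` and the dictionary representation are ours), a filling maps cells (column, row) to entries, with row 0 the basement, and we assume, as just stated, that $n$ is at least $k$ and at least every entry of $F^\sigma$, so no basement extension is needed.

```python
def insert(k, filling, sigma):
    """The insertion k -> F^sigma.  `filling` maps (column, row) -> entry,
    with row 0 the basement; assumes len(sigma) >= k and >= all entries.
    Returns a new filling; the input is left unmodified."""
    n = len(sigma)
    F = dict(filling)
    for i in range(1, n + 1):
        F[(i, 0)] = sigma[i - 1]
    heights = [max((j for (c, j) in F if c == i), default=0)
               for i in range(1, n + 1)]
    for i in range(1, n + 1):
        F[(i, heights[i - 1] + 1)] = 0          # a temporary 0 atop every column
    # Non-basement cells in reading order: top row first, left to right.
    cells = sorted(((i, j) for (i, j) in F if j >= 1), key=lambda c: (-c[1], c[0]))
    k0, start = k, 0
    while True:
        for idx in range(start, len(cells)):
            i, j = cells[idx]
            if F[(i, j)] < k0 <= F[(i, j - 1)]:
                bumped = F[(i, j)]
                F[(i, j)] = k0
                if bumped == 0:                 # Case 1: empty cell, terminate
                    return {c: v for c, v in F.items() if v > 0}
                k0, start = bumped, idx + 1     # Case 2: keep inserting the bumped entry
                break
        else:
            raise ValueError("insertion failed; need n >= k and >= all entries")

# Inserting 3 into a PBF of shape (2,0,1) with basement 3 1 2: the 3 bumps
# the 2 in cell (1,2), and that 2 then lands in the new cell (3,2).
result = insert(3, {(1, 1): 3, (1, 2): 2, (3, 1): 2}, (3, 1, 2))
print(result[(1, 2)], result[(3, 2)])   # 3 2
```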

The following lemmas are needed in order to prove that the insertion procedure terminates and the result is a $$\operatorname {PBF}$$.

### Lemma 2

Let $c_1=(i_1,j_1)$ and $c_2=(i_2,j_2)$ be two cells in a $\operatorname{PBF}$ $F^\sigma$ such that $F^\sigma(c_1)=F^\sigma(c_2)=a$, assume $c_1$ appears before $c_2$ in reading order, and assume no cell between $c_1$ and $c_2$ in reading order contains the entry $a$. Let $c_1'=(i_1',j_1')$ and $c_2'=(i_2',j_2')$ be the cells in $k\to F^\sigma$ containing the entries from $c_1$ and $c_2$, respectively. Then $j_1'>j_2'$.

### Proof

Consider the cell $\underline{c_1}=(i_1,j_1-1)$ immediately below $c_1$ in the diagram of $F^\sigma$. Note that $c_1$ attacks all cells of $F^\sigma$ to its right that lie in the same row, as well as all cells to its left that lie one row below the row of $c_1$. Since the entries in cells which are attacked by $c_1$ must be different from $F^\sigma(c_1)$, it follows that $c_2$ must appear weakly after $\underline{c_1}$ in reading order. If $c_2=\underline{c_1}=(i_1,j_1-1)$, then the entry in cell $c_1$ cannot be bumped, because that would require $F^\sigma(i_1,j_1)<k_0\le F^\sigma(i_1,j_1-1)$ while $F^\sigma(i_1,j_1)=F^\sigma(i_1,j_1-1)$. Thus either $c_2$ is not bumped, in which case the lemma automatically holds, or $c_2$ is bumped, in which case its entry ends up in a cell which is later in reading order, so that $j_1=j_1'>j_1-1\ge j_2'$.

Thus we may assume that $F^\sigma(\underline{c_1})>F^\sigma(c_1)$ and that $c_2$ follows $\underline{c_1}$ in reading order. This means that the cell $\overline{c_2}=(i_2,j_2+1)$, which lies immediately above $c_2$, follows $c_1$ in reading order, and the entry in cell $\overline{c_2}$ must be strictly less than $a$ by our choice of $c_2$. If the entry in $c_1$ is not bumped, then again we can conclude as above that the entry in $c_2$ will end up in a cell which follows $\underline{c_1}$ in reading order, so that again $j_1=j_1'>j_1-1\ge j_2'$. Finally, suppose that the entry $a$ in cell $c_1$ is bumped. Since $F^\sigma(\overline{c_2})<a=F^\sigma(c_2)$, it follows that $F^\sigma(\overline{c_2})$ is a candidate to be bumped by $a$. Thus the $a$ that was bumped out of cell $c_1$ must end up in a cell which weakly precedes $\overline{c_2}$ in reading order, and hence it ends up in a row which is higher than the row of $c_2$. Since the elements in a bumping path strictly decrease, the $a$ in cell $c_2$ cannot be part of the bumping path. Thus the lemma holds. □

### Lemma 3

Suppose that $F^\sigma$ is a $\operatorname{PBF}$ and $k$ is a positive integer. Then every type A triple in $k\to F^\sigma$ is an inversion triple.

### Proof

Suppose that $F^\sigma$ is of shape $\gamma=(\gamma_1,\ldots,\gamma_n)$ where $n\ge k$. Consider an arbitrary type A triple $\{a=(x_1,y_1), b=(x_2,y_1), c=(x_1,y_1-1)\}$ in $\tilde{F}^\sigma:=k\to F^\sigma$. Suppose for a contradiction that $\{a,b,c\}$ is a coinversion triple, so that $\tilde{F}^\sigma(a)\le\tilde{F}^\sigma(b)\le\tilde{F}^\sigma(c)$. Since the entries in the bumping path of the insertion $k\to F^\sigma$ form a strictly decreasing sequence when read in reading order, only one of $\{F^\sigma(a),F^\sigma(b),F^\sigma(c)\}$ can be bumped by the insertion procedure $k\to F^\sigma$. Let $\bar{F}^\sigma$ be the extended diagram corresponding to $F^\sigma$ as defined in our definition of the insertion $k\to F^\sigma$. We claim that the triple conditions for $F^\sigma$ imply that either $\bar{F}^\sigma(b)<\bar{F}^\sigma(a)\le\bar{F}^\sigma(c)$ or $\bar{F}^\sigma(a)\le\bar{F}^\sigma(c)<\bar{F}^\sigma(b)$. This follows from the fact that $F^\sigma$ is a $\operatorname{PBF}$ if $a$, $b$, $c$ are all cells in $F^\sigma$. Since the shape of $\tilde{F}^\sigma$ arises from $\gamma$ by adding a single cell on the outside of $\gamma$, we know that $c$ is a cell in $F^\sigma$. However, it is possible that exactly one of $a$ or $b$ is not in $F^\sigma$ and is filled with a 0 in $\bar{F}^\sigma$. If it is $b$, then we automatically have $\bar{F}^\sigma(b)<\bar{F}^\sigma(a)\le\bar{F}^\sigma(c)$. If it is $a$, then the column that contains $a$ is strictly shorter in $F^\sigma$ than the column that contains $b$, because in $\tilde{F}^\sigma$ the height of column $x_1$ must be greater than or equal to the height of column $x_2$, since $\{a,b,c\}$ is a type A triple in $\tilde{F}^\sigma$. But then the B-increasing condition for $F^\sigma$ forces $\bar{F}^\sigma(c)<\bar{F}^\sigma(b)$ and, hence, $\bar{F}^\sigma(a)\le\bar{F}^\sigma(c)<\bar{F}^\sigma(b)$ must hold.

We now consider two cases.

Case 1. $\bar{F}^\sigma(b)<\bar{F}^\sigma(a)\le\bar{F}^\sigma(c)$.

Note that in this case $0<\bar{F}^\sigma(a)$, so that $a$ is a cell in $F^\sigma$. Moreover, the entries in $a$ and $c$ cannot be bumped in the insertion $k\to F^\sigma$, since their replacement by a larger value would not produce the desired ordering $\tilde{F}^\sigma(a)\le\tilde{F}^\sigma(b)\le\tilde{F}^\sigma(c)$. Thus it must be the case that the entry in cell $b$ was bumped in the insertion $k\to F^\sigma$, so that $\tilde{F}^\sigma(b)$ is the value that bumped $\bar{F}^\sigma(b)$. We now consider two subcases.

Subcase 1(a). $\bar{F}^\sigma(a)<\tilde{F}^\sigma(b)$.

We know that $\tilde{F}^\sigma(b)$ bumps $\bar{F}^\sigma(b)$. We wish to determine where $\tilde{F}^\sigma(b)$ came from in the insertion process $k\to F^\sigma$. It cannot be that $\tilde{F}^\sigma(b)=k$ or that it was bumped from a cell that comes before $a$ in the reading order, since it would then meet the conditions to bump the entry $F^\sigma(a)$ in cell $a$, as $F^\sigma(a)<\tilde{F}^\sigma(b)\le F^\sigma(c)$. Thus it must have been bumped from a cell after $a$ but before $b$ in reading order. That is, $\tilde{F}^\sigma(b)=F^\sigma(d)$ where $d=(x_3,y_1)$ and $x_1<x_3<x_2$. Thus we have the situation pictured in Fig. 7.

However, this is not possible, since if $\gamma_{x_1}\ge\gamma_{x_3}$, then the entries in cells $a$, $d$, and $c$ would violate the A-triple condition for $F^\sigma$, and if $\gamma_{x_1}<\gamma_{x_3}$, then the entries in cells $c$ and $d$ would violate the B-increasing condition on $F^\sigma$.

Subcase 1(b). $\bar{F}^\sigma(a)=\tilde{F}^\sigma(b)$.

Again we must determine where $\tilde{F}^\sigma(b)$ came from in the insertion process $k\to F^\sigma$. To this end, let $r$ be the least row such that $r>y_1$ and $\bar{F}^\sigma(x_1,r)<\bar{F}^\sigma(x_1,r-1)$. Then we have the situation pictured in Fig. 8, where $d$ is the cell in column $x_1$ and row $r$. Thus all the entries of $F^\sigma$ in the cells in column $x_1$ between $a$ and $d$ are equal to $F^\sigma(a)$.

Now the region of shaded cells pictured in Fig. 8 consists of cells which attack or are attacked by some cell whose entry equals $F^\sigma(a)$, and hence their entries in $F^\sigma$ must all be different from $F^\sigma(a)$. Hence $\tilde{F}^\sigma(b)$ cannot have come from any of these cells, since we are assuming that $\bar{F}^\sigma(a)=\tilde{F}^\sigma(b)$. Thus $\tilde{F}^\sigma(b)$ must have come from a cell before $d$ in reading order. But this is also impossible, because $\tilde{F}^\sigma(b)$ would then meet the conditions to bump $\bar{F}^\sigma(d)$, which would violate our assumption that it bumps $F^\sigma(b)$.

Case 2. $\bar{F}^\sigma(a)\le\bar{F}^\sigma(c)<\bar{F}^\sigma(b)$.

The entry in cell $c$ is the only entry which could be bumped in the insertion $k\to F^\sigma$ if we are to end up with the relative ordering $\tilde{F}^\sigma(a)\le\tilde{F}^\sigma(b)\le\tilde{F}^\sigma(c)$. Since $F^\sigma(c)$ is bumped, $c$ is not in the basement. But if we bump neither $a$ nor $b$ in the insertion $k\to F^\sigma$ and $a$ and $b$ are cells in $\tilde{F}^\sigma$, it must be the case that $a$ and $b$ are cells in $F^\sigma$ and that there is no change in the heights of columns $x_1$ and $x_2$. Thus $\gamma_{x_1}\ge\gamma_{x_2}$. Let $\underline{c}$ be the cell immediately below $c$ and $\underline{b}$ the cell immediately below $b$. Then we must have $F^\sigma(c)<\tilde{F}^\sigma(c)\le F^\sigma(\underline{c})$. We now consider two subcases.

Subcase 2(a). $\tilde{F}^\sigma(c)=F^\sigma(b)$.

Let $r$ be the least row such that $r>y_1$ and $\bar{F}^\sigma(x_2,r)<\bar{F}^\sigma(x_2,r-1)$. Then we have the situation pictured in Fig. 9, where $d$ is the cell in column $x_2$ and row $r$. Thus all the entries of $F^\sigma$ in the cells in column $x_2$ between $b$ and $d$ are equal to $F^\sigma(b)$.

Now the region of shaded cells pictured in Fig. 9 consists of cells which attack or are attacked by some cell whose entry equals $F^\sigma(b)$, and hence their entries in $F^\sigma$ must all be different from $F^\sigma(b)$. Thus $\tilde{F}^\sigma(c)$ cannot have come from any of these cells, since we are assuming that $F^\sigma(b)=\tilde{F}^\sigma(c)$. Hence $\tilde{F}^\sigma(c)$ must have come from a cell before $d$ in reading order. But this is also impossible, because $\tilde{F}^\sigma(c)$ would then meet the conditions to bump $\bar{F}^\sigma(d)$, which would violate our assumption that it bumps $F^\sigma(c)$.

Subcase 2(b). $F^\sigma(b)<\tilde{F}^\sigma(c)$.

First, consider the A-triple $c,\underline{c},\underline{b}$ in $F^\sigma$. We cannot have $F^\sigma(\underline{b})<F^\sigma(c)\le F^\sigma(\underline{c})$, since that would imply $F^\sigma(b)\le F^\sigma(\underline{b})<F^\sigma(c)$, which would violate our assumption that $F^\sigma(a)\le F^\sigma(c)<F^\sigma(b)$. Thus it must be the case that $F^\sigma(c)\le F^\sigma(\underline{c})<F^\sigma(\underline{b})$. But then we would have $F^\sigma(b)<\tilde{F}^\sigma(c)\le F^\sigma(\underline{c})<F^\sigma(\underline{b})$, which would mean that $\tilde{F}^\sigma(c)$ satisfies the conditions to bump $F^\sigma(b)$. Since it does not bump $F^\sigma(b)$, it must be the case that $\tilde{F}^\sigma(c)$ came from a cell which is after $b$ in the reading order. We now consider two more subcases.

Subcase 2(bi). $\tilde{F}^\sigma(c)$ comes from the same row as $F^\sigma(b)$.

Assume that $\tilde{F}^\sigma(c)=F^\sigma(d)$ where $d=(x_3,y_1)$ and $x_2<x_3$. It cannot be that $\gamma_{x_2}<\gamma_{x_3}$, since then the B-increasing condition would force $F^\sigma(\underline{b})<F^\sigma(d)=\tilde{F}^\sigma(c)$. But that would mean that $F^\sigma(\underline{b})<\tilde{F}^\sigma(c)\le F^\sigma(\underline{c})$, which violates the fact that $F^\sigma(c)\le F^\sigma(\underline{c})<F^\sigma(\underline{b})$. Thus it must be the case that $\gamma_{x_2}\ge\gamma_{x_3}$ and, hence, $b,\underline{b},d$ is a type A triple. As we cannot have $F^\sigma(\underline{b})<F^\sigma(d)=\tilde{F}^\sigma(c)$, it must be the case that $\tilde{F}^\sigma(c)=F^\sigma(d)<F^\sigma(b)\le F^\sigma(\underline{b})$. But this is also impossible, because we are assuming that $F^\sigma(b)<\tilde{F}^\sigma(c)$.

Subcase 2(bii). $\tilde{F}^\sigma(c)$ comes from the same row as $F^\sigma(c)$.

In this case, let $e_1,\ldots,e_s,e_{s+1}=c$ be the cells in the bumping path of the insertion $k\to F^\sigma$ in row $y_1-1$, reading from left to right. Thus we are assuming that $\tilde{F}^\sigma(c)=F^\sigma(e_s)$. For each $e_i$, we let $\underline{e}_i$ be the cell directly below $e_i$ and $\overline{e}_i$ the cell directly above $e_i$. Thus we have the picture in Fig. 10, where we are assuming that $s=3$ and we have circled the elements in the bumping path.

Since the elements in the bumping path strictly decrease, we have that F σ(e 1)>⋯>F σ(e s )>F σ(e s+1)=F σ(c) and that for each i, $$F^{\sigma }(e_{i+1}) < F^{\sigma }(e_{i}) \leq F^{\sigma }(\underline{e}_{i+1})$$. Let e j =(z j ,y 1−1). Thus z s+1=x 1. By Lemma 1, we must have $$\gamma_{z_{1}} \geq\cdots\geq\gamma_{z_{s}} \geq\gamma_{x_{1}}$$. This means that the $$\overline{e}_{i}$$s are cells in F σ so that Lemma 1 also implies that $$F^{\sigma }(\overline{e}_{1}) > \cdots> F^{\sigma }(\overline{e}_{s})$$. Note that in this case, we have $$\gamma_{x_{1}} \geq\gamma_{x_{2}}$$ so that we know that $$\gamma_{z_{1}} \geq\cdots\geq\gamma_{z_{s}} \geq\gamma_{x_{1}} \geq\gamma_{x_{2}}$$. Now consider the A triples $$\{e_{i}, \underline{e}_{i},\underline{b}\}$$. We are assuming that $$F^{\sigma }(c) = F^{\sigma }(e_{s+1}) \leq F^{\sigma }(\underline{e}_{s+1}) = F^{\sigma }(\underline{c}) < F^{\sigma }(\underline{b})$$. But since $$F^{\sigma }(e_{s+1}) < F^{\sigma }(e_{s}) \leq F^{\sigma }(\underline{e}_{s+1})$$, the $$\{e_{s},\underline{e}_{s},\underline{b}\}$$ A-triple condition must be that $$F^{\sigma }(e_{s}) \leq F^{\sigma }(\underline{e}_{s}) < F^{\sigma }(\underline{b})$$. Now if e s−1 exists, then we know that $$F^{\sigma }(e_{s}) < F^{\sigma }(e_{s-1}) \leq F^{\sigma }(\underline{e}_{s})$$ and, hence, the $$\{e_{s-1},\underline{e}_{s-1},\underline{b}\}$$ A-triple condition must also be that $$F^{\sigma }(e_{s-1}) \leq F^{\sigma }(\underline{e}_{s-1}) < F^{\sigma }(\underline{b})$$. If e s−2 exists, then we know that $$F^{\sigma }(e_{s-1}) < F^{\sigma }(e_{s-2}) \leq F^{\sigma }(\underline{e}_{s-1})$$ and, hence, the $$\{e_{s-2},\underline{e}_{s-2},\underline{b}\}$$ A-triple condition must also be that $$F^{\sigma }(e_{s-2}) \leq F^{\sigma }(\underline{e}_{s-2}) < F^{\sigma }(\underline{b})$$. Continuing on in this way, we can conclude that for all j, $$F^{\sigma }(e_{j}) \leq F^{\sigma }(\underline{e}_{j}) < F^{\sigma }(\underline{b})$$. 
Next consider the $$\overline{e}_{i}, e_{i}, b$$ A-triple conditions. We are assuming that $$F^{\sigma }(b) < \tilde{F}^{\sigma}(c) = F^{\sigma }(e_{s})$$. Thus it must be the case that $$F^{\sigma }(b) < F^{\sigma }(\overline{e}_{s}) \leq F^{\sigma }({e}_{s})$$. Since $$F^{\sigma }(\overline{e}_{s}) < F^{\sigma }(\overline{e}_{s-1}) < \cdots< F^{\sigma }(\overline{e}_{1})$$, it must be the case that for all j, $$F^{\sigma }(b) < F^{\sigma }(\overline{e}_{j}) \leq F^{\sigma }(e_{j})$$.

Thus in this case, we must have $$F^{\sigma }(b) < F^{\sigma }(\overline{e}_{1}) \leq F^{\sigma }(e_{1}) \leq F^{\sigma }(\underline{e}_{1}) < F^{\sigma }(\underline{b})$$. Now the question is where the element z which bumps F σ(e 1) can come from. We claim that z cannot equal k or come from a cell before b in reading order since it satisfies the condition to bump b and b is not bumped. Thus it must have come from a cell d=(x 3,y 1) which lies in the same row as b but comes after b in reading order. In that case, we must have $$F^{\sigma }(e_{1}) < F^{\sigma }(d) \leq F^{\sigma }(\underline{e}_{1}) < F^{\sigma }(\underline{b})$$. Thus it cannot be that $$\gamma_{x_{2}} < \gamma_{x_{3}}$$ since the B-increasing condition would force $$F^{\sigma }(\underline{b}) < F^{\sigma }(d)$$. Thus $$\gamma_{x_{2}} \geq\gamma_{x_{3}}$$. But in that case, we would have $$F^{\sigma }(b) < F^{\sigma }(d) < F^{\sigma }(\underline{b})$$ which would give a coinversion A triple in F σ.

Thus we have shown that in Subcase 2, c could not have been bumped and, hence, there can be no coinversion A triples in k→F σ. □

It is obvious that our insertion algorithm ensures that the columns of k→F σ are weakly increasing when read from top to bottom. Thus if we can show that k→F σ satisfies the B-increasing condition, we know that all B triples in k→F σ will be inversion triples.
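For concreteness, the column condition just mentioned can be checked by a short predicate. The following is an illustrative sketch of ours (not from the paper), assuming each column is stored as a list of its entries from bottom to top:

```python
def columns_weakly_increasing_top_to_bottom(cols):
    """Check that every column, read top to bottom, is weakly increasing.

    cols: list of columns, each a list of entries from bottom (row 1) to top.
    Reading top to bottom is weakly increasing exactly when the
    bottom-to-top reading is weakly decreasing.
    """
    return all(all(col[j] >= col[j + 1] for j in range(len(col) - 1))
               for col in cols)
```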

### Lemma 4

If F σ is a PBF, then $$\tilde{F}^{\sigma }= k \rightarrow F^{\sigma}$$ satisfies the B-increasing condition.

### Proof

Suppose that F σ is of shape γ=(γ 1,…,γ n ) where n≥k. Suppose that $$\tilde{F}^{\sigma}$$ does not satisfy the B-increasing condition. Thus there must be a type B triple {b=(x 1,y 1),a=(x 2,y 1+1),c=(x 2,y 1)} in $$\tilde{F}^{\sigma} := k \rightarrow F^{\sigma}$$ as depicted in Fig. 11 such that $$\tilde{F}^{\sigma}(b) \geq \tilde{F}^{\sigma}(a)$$. Assume that we have picked a and b so that b is as far left as possible. Let $$\overline{b}$$ denote the cell immediately above b and $$\underline{b}$$ denote the cell immediately below b. Then there are two possibilities, namely, it could be that $$\gamma_{x_{1}} < \gamma_{x_{2}}$$ so that {a,b,c} forms a type B triple in F σ or it could be that $$\gamma_{x_{1}} = \gamma_{x_{2}}$$ and we added an element on the top of column x 2 during the insertion k→F σ so that in $$\tilde{F}^{\sigma}$$, the height of column x 1 is strictly less than the height of column x 2.

Case 1. $$\gamma_{x_{1}} < \gamma_{x_{2}}$$.

In this case, the B-increasing condition for F σ implies that F σ(b)<F σ(a) and $$F^{\sigma }(\underline{b})< F^{\sigma }(c)$$. As the elements in the bumping path strictly decrease, it must be the case that F σ(b) is bumped and F σ(a) is not bumped. Thus we must have that $$F^{\sigma }(\underline{b})\geq \tilde{F}^{\sigma}(b) > F^{\sigma }(b)$$.

First, we claim that it cannot be the case that $$\tilde{F}^{\sigma}(b) = F^{\sigma }(a)$$. Otherwise, let r be the least row such that r>y 1+1 and $$\bar{F}^{\sigma }(x_{2},r) < \bar{F}^{\sigma }(x_{2},r-1)$$. Then we will have the situation pictured in Fig. 12 where d is the cell in column x 2 and row r. Thus all the entries of F σ in the cells in column x 2 between a and d are equal to F σ(a). Now the region of shaded cells pictured in Fig. 12 consists of cells which attack or are attacked by some cell whose entry equals F σ(a), and hence their entries in F σ must all be different from F σ(a). Hence $$\tilde{F}^{\sigma}(b)$$ cannot have come from any of these cells since we are assuming that $$F^{\sigma }(a) = \tilde{F}^{\sigma}(b)$$. Thus $$\tilde{F}^{\sigma}(b)$$ must either equal k or have come from a cell in F σ which precedes d in reading order. But this is also impossible because $$\tilde{F}^{\sigma}(b)$$ would then meet the conditions to bump $$\bar{F}^{\sigma }(d)$$ which would violate our assumption that it bumps F σ(b).

Thus we can assume that F σ(a)<F σ(b). Now the question is where $$\tilde{F}^{\sigma}(b)$$ came from.

First, it cannot be that $$\tilde{F}^{\sigma}(b)$$ was either equal to k or was equal to F σ(d) where d comes before a in reading order since then we have that

$$F^{\sigma }(a) < F^{\sigma }(b) < \tilde{F}^{\sigma}(b) \leq F^{\sigma }( \underline{b}) < F^{\sigma }(c).$$

But this would mean that $$\tilde{F}^{\sigma}(b)$$ meets the condition to bump F σ(a) which would violate our assumption that $$\tilde{F}^{\sigma}(b)$$ bumps F σ(b).

Similarly, it cannot be the case that $$\tilde{F}^{\sigma}(b) = F^{\sigma }(d)$$ where d is a cell to the right of a and in the same row as a. That is, if d=(x 3,y 1+1) where x 2<x 3, then either (i) $$\gamma_{x_{2}} < \gamma_{x_{3}}$$ in which case the fact that $$F^{\sigma }(c) > F^{\sigma }(d) = \tilde{F}^{\sigma}(b)$$ would mean that cells d and c violate the B-increasing condition for F σ or (ii) $$\gamma_{x_{2}} \geq \gamma_{x_{3}}$$ in which case the triple {a,c,d} would be a type A coinversion triple in F σ.

Thus it must be the case that $$\tilde{F}^{\sigma}(b)$$ came from a cell to the left of b and in the same row as b in F σ. So let e 1,…,e s ,e s+1=b be the cells in the bumping path of the insertion k→F σ in row y 1, reading from left to right. Thus we are assuming that $$\tilde{F}^{\sigma}(b) =F^{\sigma }(e_{s})$$. For each e i , we let $$\underline{e}_{i}$$ be the cell directly below e i . Thus we have the situation pictured in Fig. 13 where we are assuming that s=3 and we have circled the elements in the bumping path.

Since the elements in the bumping path strictly decrease, we have that F σ(e 1)>⋯>F σ(e s )>F σ(e s+1)=F σ(b)>F σ(a). Moreover, for each 1≤i≤s, we have $$F^{\sigma }(e_{i+1}) < F^{\sigma }(e_{i}) \leq F^{\sigma }(\underline{e}_{i+1})$$. Let e j =(z j ,y 1) for j=1,…,s+1. Thus z s+1=x 1. By Lemma 1, we must have $$\gamma_{z_{1}} \geq\cdots\geq\gamma_{z_{s}} \geq\gamma_{x_{1}}$$. Note that the fact that we chose b to be as far left as possible means that it must be the case that $$\gamma_{z_{j}} \geq\gamma_{x_{2}}$$ for 1≤j≤s. That is, if $$\gamma_{z_{j}} < \gamma_{x_{2}}$$ for some 1≤j≤s, then the entries in cells a and e j would violate the B-increasing condition in F σ which would violate our choice of b. Thus $$\{e_{j},\underline{e}_{j},c\}$$ is a type A triple for 1≤j≤s. Since $$F^{\sigma }(c)> F^{\sigma }(\underline{b}) = F^{\sigma }(\underline{e}_{s+1}) \geq F^{\sigma }(e_{s})$$, it must be the case that the $$\{c,e_{s},\underline{e}_{s}\}$$ A-triple condition is $$F^{\sigma }(e_{s}) \leq F^{\sigma }(\underline{e}_{s}) < F^{\sigma }(c)$$. Now assume by induction that we have shown that $$F^{\sigma }(e_{j}) \leq F^{\sigma }(\underline{e}_{j}) < F^{\sigma }(c)$$. Then since $$F^{\sigma }(e_{j-1}) \leq F^{\sigma }(\underline{e}_{j})$$, the $$\{c,e_{j-1},\underline{e}_{j-1}\}$$ A-triple condition must be that $$F^{\sigma }(e_{j-1}) \leq F^{\sigma }(\underline{e}_{j-1}) < F^{\sigma }(c)$$. It thus follows that $$F^{\sigma }(e_{1}) \leq F^{\sigma }(\underline{e}_{1}) < F^{\sigma }(c)$$.

Now the question is where $$\tilde{F}^{\sigma}(e_{1})$$ came from. Note that we have shown that

$$F^\sigma (a)<F^\sigma (e_1) < \tilde{F}^{\sigma}(e_1) \leq F^\sigma (\underline{e}_1)< F^\sigma (c).$$

Thus it cannot be that $$\tilde{F}^{\sigma}(e_{1})$$ is equal to k or is equal to F σ(d) for some cell d which precedes a in reading order since then $$\tilde{F}^{\sigma}(e_{1})$$ would bump F σ(a). By our choice of e 1, the only other possibility is that $$\tilde{F}^{\sigma}(e_{1}) = F^{\sigma }(d)$$ for some cell d to the right of a and in the same row as a. Say d=(x 3,y 1+1) where x 2<x 3. Then it cannot be that $$\gamma_{x_{2}}< \gamma_{x_{3}}$$ since then the cells d and c would violate the B-increasing condition in F σ and it cannot be that $$\gamma_{x_{2}}\geq \gamma_{x_{3}}$$ since then the triple {a,c,d} would be a type A coinversion triple in F σ.

Thus we have shown that $$\gamma_{x_{1}} < \gamma_{x_{2}}$$ is impossible.

Case 2. $$\gamma_{x_{1}} = \gamma_{x_{2}} =y$$.

Thus we must have added an element on the top of column x 2 during the insertion k→F σ so that in $$\tilde{F}^{\sigma}$$, the height of column x 1 is strictly less than the height of column x 2. In this case, neither b nor c was involved in the bumping path of k→F σ so that $$F^{\sigma }(b) = \tilde{F}^{\sigma }(b)$$ and $$F^{\sigma }(c) = \tilde{F}^{\sigma }(c)$$. We claim that it must be the case that $$\tilde{F}^{\sigma }(x_{1},y) \geq\tilde{F}^{\sigma }(x_{2},y+1)$$. That is, if y=y 1, then $$\tilde{F}^{\sigma }(x_{1},y) = \tilde{F}^{\sigma }(b) \geq \tilde{F}^{\sigma }(a) = \tilde{F}^{\sigma }(x_{2},y+1)$$ since we are assuming that $$\tilde{F}^{\sigma }(b) \geq \tilde{F}^{\sigma }(a)$$. If y>y 1, then the triple $$\{ \overline{b},a,b \}$$ is a type A triple in F σ and $$F^{\sigma }(a) = \tilde{F}^{\sigma }(a)$$. We now have two possibilities, namely, either (i) $$F^{\sigma }(a) < F^{\sigma }(\overline{b}) \leq F^{\sigma }(b)$$ or (ii) $$F^{\sigma }(\overline{b}) \leq F^{\sigma }(b) < F^{\sigma }(a)$$. Note that (ii) is inconsistent with our assumption that $$\tilde{F}^{\sigma }(b) \geq \tilde {F}^{\sigma }(a)$$ so that it must be the case that $$F^{\sigma }(\overline{b}) > F^{\sigma }(a)$$. But then we know by part (ii) of Lemma 1 that F σ(x 1,y)>F σ(x 2,y). Our insertion algorithm ensures that $$F^{\sigma }(x_{2},y) \geq\tilde{F}^{\sigma }(x_{2},y+1)$$ so that $$F^{\sigma }(x_{1},y) > \tilde{F}^{\sigma }(x_{2},y+1)$$ in this case.

Now consider the question of where $$\tilde{F}^{\sigma }(x_{2},y+1)$$ came from in the bumping process. It cannot be the case that $$\tilde{F}^{\sigma }(x_{2},y+1) = k$$ or was bumped from a cell before (x 1,y+1) in the reading order because then $$\tilde{F}^{\sigma }(x_{2},y+1)$$ could be placed on top of F σ(x 1,y), since $$\bar{F}^{\sigma }(x_{1},y+1)=0$$ in this case. Thus $$\tilde{F}^{\sigma }(x_{2},y+1)$$ must have been bumped from some cell d between (x 1,y+1) and (x 2,y+1) in reading order. But this is impossible since $$F^{\sigma }(x_{1},y) \geq F^{\sigma }(d) = \tilde{F}^{\sigma }(x_{2},y+1)$$ would mean that (x 1,y) and d do not satisfy the B-increasing condition in F σ. Thus we have shown that the assumption that $$\tilde{F}^{\sigma}(b) \geq \tilde{F}^{\sigma}(a)$$ leads to a contradiction in all cases and, hence, $$\tilde{F}^{\sigma}$$ must satisfy the B-increasing condition. □

### Proposition 5

The insertion procedure k→F σ is well-defined and produces a $$\operatorname {PBF}$$.

### Proof

Let F σ be an arbitrary $$\operatorname {PBF}$$ of shape γ and basement σ∈S n and let k be an arbitrary positive integer less than or equal to n. We must show that the procedure k→F σ terminates and that the resulting filling is indeed a $$\operatorname {PBF}$$. Lemma 2 implies that at most one occurrence of any given value will be bumped to the first row. Therefore each entry i in the first row will be inserted into a column at or before the column σ −1(i). This means that the insertion procedure terminates and hence is well-defined.

Lemmas 3 and 4 imply that k→F σ is a semi-standard augmented filling which satisfies the B-increasing condition. Thus k→F σ is a $$\operatorname {PBF}$$. □

Before proceeding, we make two remarks. Our first remark concerns the process of inverting our insertion procedure. That is, the last cell, or terminal cell, in the bumping path of k→F σ must be a cell that originally contained 0 in $$\bar {F}^{\sigma }$$. Such a cell was not in F σ so that the shape of $$\tilde{F}^{\sigma}$$ is the result of adding one new cell c on the top of some column of the shape of F σ. However, there are restrictions as to where this new cell may be placed. That is, we have the following proposition which says that if c is the top cell of a column in a sequence of columns which have the same height in k→F σ, then c must be in the rightmost of those columns.

### Proposition 6

Suppose that σ∈S n , F σ is a $$\operatorname {PBF}$$ with basement σ, and k≤n. Suppose that F σ has shape γ=(γ 1,…,γ n ), k→F σ has shape δ=(δ 1,…,δ n ), and (x,y) is the cell in δ/γ. Then it must be the case that if x<n, then 1+γ x ≠γ x+j for 1≤j≤n−x. In particular, if x<n, then δ x ≠δ x+j for 1≤j≤n−x.

### Proof

Arguing for a contradiction, suppose that x<n and 1+γ x =γ x+j =y for some j such that 1≤j≤n−x. Let G σ=k→F σ and let $$\bar{F}^{\sigma }$$ and $$\bar{G}^{\sigma }$$ be the fillings which result by placing 0s on top of the columns of F σ and G σ, respectively. Thus we would have the situation pictured in Fig. 14 for the end of the bumping path in the insertion k→F σ.

Hence b is at the top of column x+j in both F σ and G σ and neither F σ(b) nor F σ(c) is bumped during the insertion k→F σ. Note that the B-increasing condition in F σ forces F σ(c)<F σ(b). Thus the {a,b,c} A-triple condition in G σ must be that

$$G^\sigma (a) \leq G^\sigma (c) < G^\sigma (b).$$

Now consider the question of where G σ(a) came from in the bumping path of the insertion k→F σ. It cannot be that G σ(a)=k or that G σ(a) was bumped from a cell before (x+j,y+1) because the fact that G σ(a)<G σ(b)=F σ(b) would allow G σ(a) to be inserted on top of cell b. Thus either (i) G σ(a)=F σ(z,y+1) for some z>x+j or (ii) G σ(a)=F σ(z,y) for some z<x. Case (i) is impossible since then we would have γ x+j <γ z and the B-increasing condition in F σ would force G σ(b)=F σ(b)<F σ(z,y+1)=G σ(a).

If case (ii) holds, let e 1,…,e s ,e s+1=(x,y) be the cells in row y of the bumping path of the insertion k→F σ, reading from left to right. Thus we are assuming that G σ(a)=F σ(e s ). For each e i , we let $$\underline{e}_{i}$$ be the cell directly below e i . Thus we have the picture in Fig. 15 where we are assuming that s=3 and we have circled the elements in the bumping path.

Since the elements in the bumping path strictly decrease, we have that F σ(e 1)>⋯>F σ(e s )=G σ(a) and that for each i, $$F^{\sigma }(e_{i+1}) < F^{\sigma }(e_{i}) \leq F^{\sigma }(\underline{e}_{i+1})$$. Let e j =(z j ,y). Thus z s+1=x. It follows from Lemma 1 that $$\gamma_{z_{1}} \geq\cdots\geq\gamma_{z_{s}} > \gamma_{x}$$.

Now consider the A-triples $$\{e_{i},\underline{e}_{i},(x+j,y)\}$$ for i=1,…,s in F σ. We have established that F σ(e s )=G σ(a)≤G σ(c)<G σ(b)=F σ(x+j,y). Thus it follows from the $$\{e_{s},\underline{e}_{s},(x+j,y)\}$$ A-triple condition that $$F^{\sigma }(e_{s}) \leq F^{\sigma }(\underline{e}_{s}) < F^{\sigma }(b)$$. But then $$F^{\sigma }(e_{s}) < F^{\sigma }(e_{s-1})\leq F^{\sigma }(\underline{e}_{s})$$ so that the $$\{e_{s-1},\underline{e}_{s-1},(x+j,y)\}$$ A-triple condition also implies that $$F^{\sigma }(e_{s-1}) \leq F^{\sigma }(\underline{e}_{s-1}) < F^{\sigma }(b)$$. Continuing on in this way, we can conclude from the $$\{e_{i},\underline{e}_{i},(x+j,y)\}$$ A-triple condition that $$F^{\sigma }(e_{i}) \leq F^{\sigma }(\underline{e}_{i}) < F^{\sigma }(b)$$ for i=1,…,s.

Now consider the element z that bumps F σ(e 1) in the insertion k→F σ. We must have $$F^{\sigma }(e_{1}) < z \leq F^{\sigma }(\underline{e}_{1})< F^{\sigma }(b)$$. Thus it cannot be that z=k or z=F σ(d) for some cell d which precedes (x+j,y+1) in reading order because that would mean that z meets the conditions to be placed on top of b. Thus it must be that z=F σ(d) for some cell d which follows (x+j,y+1) in reading order. Suppose that d=(t,y+1) where t>x+j. But we are assuming that (x+j,y) is the top cell in column x+j. Thus it must be the case that γ x+j <γ t . But then the B-increasing condition in F σ would force F σ(b)<F σ(d)=z which is a contradiction. Thus case (ii) cannot hold either, which implies 1+γ x ≠γ x+j . □
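The shape constraint of Proposition 6 is easy to state as a predicate on the pair of shapes. The following is an illustrative sketch of ours (the function name and representation are not from the paper), with column heights stored as tuples and columns 0-indexed:

```python
def valid_new_cell(gamma, delta):
    """Check the conclusion of Proposition 6: delta adds exactly one cell
    to gamma, and the new top cell lies in the rightmost column among all
    columns of its height, i.e. 1 + gamma_x differs from every gamma_{x+j},
    equivalently delta_x differs from every delta_{x+j}, for j >= 1."""
    diff = [i for i in range(len(gamma)) if delta[i] != gamma[i]]
    if len(diff) != 1:          # exactly one column grew
        return False
    x = diff[0]
    if delta[x] != gamma[x] + 1:  # and it grew by exactly one cell
        return False
    return all(delta[x] != delta[x + j] for j in range(1, len(delta) - x))
```

For example, with γ=(2,0,1,0,3), adding a cell on top of the third column gives δ=(2,0,2,0,3), which the predicate accepts, while adding a cell on top of the first column gives δ=(3,0,1,0,3), which it rejects because the fifth column also has height 3.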

Except for the restrictions determined by Proposition 6, we can invert the insertion procedure. That is, to invert the procedure k→F σ, begin with the entry r j contained in the new cell appended to F σ and read backward through the reading order beginning with this cell until an entry is found which is greater than r j and immediately below an entry less than or equal to r j . Let this entry be r j−1, and repeat. When the first cell of k→F σ is passed, the resulting entry is r 1=k and the procedure has been inverted.

Our second remark concerns the special case where $$\sigma = \bar{\epsilon}_{n}$$ and k≤n. In that case, we claim that our insertion procedure is just a twisted version of the usual RSK row insertion algorithm. That is, we know that F σ must be of shape γ=(γ 1,…,γ n ) where γ 1≥γ 2≥⋯≥γ n and that F σ is weakly decreasing in columns, reading from bottom to top, and is strictly decreasing in rows, reading from left to right. Now if k≤F σ(1,γ 1), then we just add k to the top of column 1 to form k→F σ. Otherwise suppose that F σ(1,y 1)≥k>F σ(1,y 1+1). Then all the elements in $$\bar{F}^{\sigma }$$ that lie weakly above row y 1+1 and strictly to the right of column 1 must be less than or equal to F σ(1,y 1+1). Thus the first place that we can insert k is in cell (1,y 1+1), so it will be the case that k bumps F σ(1,y 1+1). Since elements in the bumping path are decreasing and all the elements in column 1 below row y 1+1 are strictly larger than F σ(1,y 1+1), it follows that none of them can be involved in the bumping path of the insertion k→F σ. It is then easy to check that since F σ(1,y 1+1)≤n−1, the result of the insertion k→F σ is the same as the result of inserting F σ(1,y 1+1) into the $$\operatorname {PBF}$$ formed from F σ by removing the first column and then adding back column 1 of F σ with F σ(1,y 1+1) replaced by k. Thus our insertion process satisfies the usual recursive definition of the RSK row insertion algorithm. Hence, in the special case where the basement permutation is $$\bar{\epsilon}_{n}$$ and k≤n, our insertion algorithm is just the usual RSK row insertion algorithm subject to the condition that we have weakly decreasing columns and strictly decreasing rows.

## 4 General properties of the insertion algorithm

In this section, we shall prove several fundamental properties of the insertion algorithm k→F σ. In particular, our results in this section will allow us to prove that our insertion algorithm can be factored through the twisted version of RSK row insertion described in the previous section.

For any permutation σ, let E σ be the empty filling which just consists of the basement whose entries are σ 1,…,σ n reading from left to right. Let s i denote the transposition (i,i+1) so that if σ=σ 1⋯σ n , then

$$s_i \sigma =\sigma _1 \cdots \sigma _{i-1} \sigma _{i+1} \sigma _i \sigma _{i+2} \cdots \sigma _n.$$
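In code, the action of s i on a basement permutation is just a swap of two adjacent letters; a minimal sketch:

```python
def adjacent_transposition(sigma, i):
    """Return s_i applied to sigma = (sigma_1, ..., sigma_n): the letters
    in positions i and i+1 are exchanged (positions are 1-indexed, as in
    the text)."""
    tau = list(sigma)
    tau[i - 1], tau[i] = tau[i], tau[i - 1]
    return tuple(tau)
```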

Our next result will describe the difference between inserting a word w into E σ versus inserting w into $$E^{s_{i}\sigma }$$. If w=w 1⋯w t , then let

$$w \rightarrow E^\sigma = w_t \rightarrow\bigl( \ldots \bigl(w_2 \rightarrow\bigl(w_1 \rightarrow E^\sigma \bigr)\bigr)\ldots\bigr).$$
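The displayed convention says that w→E σ is a left fold: w 1 is inserted first and w t last. A sketch, where `insert_one` stands in for the single-letter insertion k→F (which is not implemented here):

```python
from functools import reduce

def insert_word(word, empty_filling, insert_one):
    """Compute w -> E^sigma as a left fold of single-letter insertions:
    w_1 is applied innermost (first), w_t outermost (last)."""
    return reduce(lambda filling, k: insert_one(k, filling),
                  word, empty_filling)
```

Instantiating `insert_one` with a function that merely records its argument, `insert_word([1, 2, 3], [], lambda k, F: F + [k])` evaluates to `[1, 2, 3]`, confirming the left-to-right order of insertion.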

### Theorem 7

Let w be an arbitrary word whose letters are less than or equal to n and suppose that σ=σ 1⋯σ n is a permutation in S n such that σ i <σ i+1. Let F σ=w→E σ, let γ=(γ 1,…,γ n ) be the shape of F σ, let $$F^{s_{i} \sigma} = w \rightarrow E^{s_{i} \sigma}$$, and let δ=(δ 1,…,δ n ) be the shape of $$F^{s_{i} \sigma }$$. Then

1. 1.

{γ i ,γ i+1}={δ i ,δ i+1} and δ i ≥δ i+1,

2. 2.

$$F^{s_{i} \sigma} (i,j) > F^{s_{i} \sigma} ({i+1},j)$$ for j≤δ i , where we let $$F^{s_{i} \sigma}(i+1,j)=0$$ if (i+1,j) is not a cell in $$F^{s_{i} \sigma}$$.

3. 3.

$$F^{s_{i} \sigma}(j,k) = F^{\sigma}(j,k)$$ for j≠i,i+1 so that γ j =δ j for all j≠i,i+1,

4. 4.

For all j, $$\{F^{s_{i} \sigma} (i,j) , F^{s_{i} \sigma} ({i+1},j)\} = \{ F^{\sigma} (i,j) , F^{\sigma} ({i+1},j) \}$$.

### Proof

Note that since (s i σ) i =σ i+1>σ i =(s i σ) i+1, Lemma 1 implies claims 1 and 2. Thus we need only prove claims 3 and 4.

We proceed by induction on the length of w. The theorem clearly holds when w is the empty word. Now suppose that the theorem holds for all words of length less than t. Then let G=w 1⋯w t−1→E σ and $$H = w_{1} \ldots w_{t-1} \rightarrow E^{s_{i}\sigma }$$ and suppose G has shape α=(α 1,…,α n ) and H has shape β=(β 1,…,β n ). Let $$\bar{G}$$ and $$\bar{H}$$ be the fillings with 0s added to the tops of the columns of G and H, respectively. Let $$\tilde{G} = w_{t} \rightarrow G$$ and $$\tilde{H} = w_{t} \rightarrow H$$ and suppose that $$\tilde{G}$$ has shape γ=(γ 1,…,γ n ) and $$\tilde{H}$$ has shape δ=(δ 1,…,δ n ). We compare the bumping path of w t →H to the bumping path of w t →G. That is, in the insertion process w t →H, suppose we come to a point where we are inserting some element c, which is either w t or some element bumped earlier in the insertion w t →H, into the cells (i,j) and (i+1,j). Assume by a second inner reverse induction on the size of i that the insertion w t →G will also insert c into the cells (i,j) and (i+1,j). This will certainly be true the first time the bumping paths interact with elements in columns i and i+1 since our induction assumption ensures that $$\bar{G}$$ restricted to columns 1,…,i−1 equals $$\bar{H}$$ restricted to columns 1,…,i−1. Let $$x = \bar{H}(i,j)$$, $$y = \bar{H}(i+1,j)$$, $$\underline{x} = \bar{H}(i,j-1)$$, and $$\underline{y} = \bar {H}(i+1,j-1)$$ (see Fig. 16). Our inductive assumption implies that if x>0, then x>y, and if $$\underline{x} >0$$, then $$\underline{x} > \underline{y}$$. Our goal is to analyze how the insertion of c interacts with the elements in cells (i,j) and (i+1,j) during the insertions w t →H and w t →G. We will show that either

($$\mathbb{A}$$) The bumping path does not interact with cells (i,j) and (i+1,j) during either of the insertions w t →H and w t →G,

($$\mathbb{B}$$) The insertion of c into cells (i,j) and (i+1,j) results in inserting some c′ into the next cell in reading order after (i+1,j) in both w t →H and w t →G, or

(ℂ) Both insertions end up terminating in one of (i,j) or (i+1,j).

This will ensure that w t →H and w t →G are identical outside of columns i and i+1, thus proving condition 3 of the theorem, and that {H(i,j),H(i+1,j)}={G(i,j),G(i+1,j)}, which will prove condition 4 of the theorem.

Now suppose that the entries of $$\bar{H}$$ in cells (i,j), (i+1,j), (i,j−1), and (i+1,j−1) are x, y, $$\underline{x}$$, and $$\underline{y}$$, respectively, as pictured on the left in Fig. 16. If $$\bar{G}$$ and $$\bar{H}$$ agree on those cells, then there is nothing to prove. Thus we have to consider the three cases (I), (II), and (III) for the entries of $$\bar{G}$$ in those cells which are pictured on the right in Fig. 16. We can assume that $$\underline{x} \neq0$$. Now if $$y = \underline{y} =0$$, then it is easy to see that one of ($$\mathbb{A}$$), ($$\mathbb{B}$$), or (ℂ) will hold since the insertion procedure sees the same elements, possibly in different columns. Thus we can assume that $$\underline{y} \neq0$$ and, hence, $$\underline{x} > \underline{y}$$.

We now consider several cases.

Case A. x=y=0.

This means that x and y sit on top of columns i and i+1, respectively, in H.

First, suppose that $$c \leq\underline{x}$$. Then in w t →H, the insertion will terminate by putting c on top of $$\underline{x}$$. In case (I), the insertion w t →G will terminate by placing c on top of $$\underline{x}$$ and in cases (II) and (III), the insertion w t →G will terminate by placing c on top of $$\underline{y}$$ if $$c \leq\underline{y}$$, or by placing c on top of $$\underline{x}$$ if $$\underline{y} < c \leq\underline{x}$$. In either situation, (ℂ) holds.

Next suppose that $$\underline{x} < c$$. Then in w t →H, c will not be placed in either cell (i,j) or (i+1,j) so that the result is that c will end up being inserted in the next cell in reading order after (i+1,j). But then in cases (I), (II), and (III), c will not be placed in either cell (i,j) or (i+1,j) in the insertion w t →G so that the result is that c will end up being inserted in the next cell in reading order after (i+1,j). Thus ($$\mathbb{B}$$) holds in all cases.

Case B. x>0 and y=0.

Note that case (I) is impossible since then x and $$\underline{x}$$ would violate the B-increasing condition in G.

First, consider the case where c does not bump x in the insertion w t →H, and the insertion terminates with c being placed on top of $$\underline{y}$$. Then it must be the case that $$c \leq\underline{y}$$. Moreover, it is the case that x>c by Lemma 1. Hence in case (II), c will not bump x and instead will be placed on top of $$\underline{x}$$ since $$c \leq\underline{y} < \underline{x}$$, and in case (III), c will be put on top of $$\underline{y}$$. However, this is not possible because then the insertion would violate Proposition 6. Thus, we know that condition (ℂ) holds.

Next consider the case where in the insertion w t →H, c bumps x and x terminates the insertion by being placed on top of $$\underline{y}$$. Thus we know that $$x < c \leq\underline{x}$$ and $$x \leq\underline{y}$$. This rules out case (III) since then x and $$\underline{y}$$ would violate the B-increasing condition and case (I) since then x and $$\underline{x}$$ would violate the B-increasing condition. Now in case (II), c will bump x and x will be placed on top of $$\underline{x}$$ if $$c \leq\underline{y}$$. If $$c > \underline{y}$$, then c will not bump x and c will be placed on top of $$\underline{x}$$. In either situation, condition (ℂ) holds.

Next consider the case where in the insertion w t →H, c bumps x and x cannot be placed on top of $$\underline{y}$$ so that x is inserted in the next cell in reading order after (i+1,j). Then we must have $$\underline{y} < x < c \leq\underline{x}$$. This rules out cases (I) and (II) since x cannot sit on top of $$\underline{y}$$. In case (III), c cannot sit on top of $$\underline{y}$$ so c will bump x. Thus condition ($$\mathbb{B}$$) holds in this case.

Finally, consider the case where c does not bump x and c cannot be placed on top of $$\underline{y}$$ in the insertion w t →H so that c is inserted in the next cell in reading order after (i+1,j). The fact that c does not bump x means that either $$c > \underline{x}$$ or c≤x. The fact that c cannot be placed on top of $$\underline{y}$$ means that $$c > \underline{y}$$. If $$c > \underline{x} > \underline{y}$$, then in cases (II) and (III), c does not meet the conditions for the entries in cells (i,j) and (i+1,j) to change so that the result is that c will be inserted in the next cell in reading order after (i+1,j). If c≤x, then we know that $$\underline{y} < c \leq x$$. This rules out cases (I) and (II) since x cannot sit on $$\underline{y}$$. In case (III), c cannot be placed on $$\underline{y}$$ and c cannot bump x, so that condition ($$\mathbb{A}$$) holds in either situation.

Case C. $$x,\underline{x},y, \underline{y} >0$$.

Then we know that x>y and $$\underline{x} > \underline{y}$$.

First, suppose that in the insertion w t →H, c bumps x, but x does not bump y so that the result is that x will be inserted into the cell following (i+1,j) in reading order. Since y<x, the reason that x does not bump y must be that $$x > \underline{y}$$. Thus it must be the case that $$\underline{x} \geq x > \underline{y} \geq y$$. This means that cases (I) and (II) are impossible since x cannot sit on top of $$\underline{y}$$ in G. But then $$c > x > \underline{y}$$ so that in the insertion w t →G, c cannot bump y in case (III). Thus in case (III), c will bump x so that the result is that x will be inserted into the cell following (i+1,j) in reading order as desired. Hence, condition ($$\mathbb{B}$$) holds in this case.

Next consider the case where c does not bump x but c bumps y. Since c does not bump x, we either have (i) $$c> \underline{x}$$ or (ii) c≤x. If (i) holds, then $$c > \underline{x} > \underline{y}$$ which means that c cannot bump y. Thus (ii) must hold. Since c bumps y, $$y < c \leq\underline{y}$$. Thus we have two possibilities, namely, $$y < c \leq\underline{y} < x$$ or $$y < c \leq x \leq\underline{y}$$. First suppose that $$y < c \leq\underline{y} < x$$. Then cases (I) and (II) are impossible since x cannot sit on top of $$\underline{y}$$. In case (III), c will bump y but y cannot bump x since y<x so that y is inserted in the next cell after (i+1,j). Next suppose that $$y < c \leq x \leq\underline{y}$$. Then in case (I), c will bump y but y cannot bump x since y<x so that y is inserted in the next cell after (i+1,j). In case (II), c does not bump x since c≤x so that c will bump y and y will be inserted in the next cell after (i+1,j). In case (III), c will bump y but y cannot bump x since y<x so that y is inserted in the next cell after (i+1,j). Thus in every case, y will be inserted in the next cell after (i+1,j). Hence, condition ($$\mathbb{B}$$) holds in this case.

Next consider the case where in the insertion w t →H, c bumps x and then x bumps y so that the result is that y will be inserted into the cell following (i+1,j) in reading order. In this case, we must have $$y < x \leq \underline{y} < \underline{x}$$ and $$x < c \leq \underline{x}$$. In case (I), it is easy to see that in the insertion w t →G, c will bump y since $$y < x < c \leq\underline{x}$$, but y will not bump x so that the result is that y will be inserted into the cell following (i+1,j) in reading order. In case (II), c will bump x and then x will bump y if $$c \leq\underline{y}$$. However, if $$c > \underline{y}$$, then c will not bump x but it will bump y. Thus in either situation, the result is that y will be inserted into the cell following (i+1,j) in reading order. Finally, consider case (III). If $$c \leq\underline{y}$$, then c will bump y but y will not bump x so that again the result is that y will be inserted into the cell following (i+1,j) in reading order. Now if $$c > \underline{y}$$, then we must have that $$y < x \leq\underline{y} < c \leq\underline{x}$$. We claim that this is impossible. Recall that α i and α i+1 are the heights of column i and i+1 in G, respectively. Now if α i ≥α i+1, then $$\{G(i,j) =y, G(i,j-1) = \underline{y}, G(i+1,j)=x\}$$ would be a type A coinversion triple in G and if α i <α i+1, then $$\{G(i,j-1) = \underline{y}, G(i+1,j)=x, G(i+1,j-1) =\underline {x}\}$$ would be a type B coinversion triple in G.

Finally, consider the case where c does not bump either x or y in the insertion w t →H so that c is inserted into the cell following (i+1,j) in reading order. Then either c≤y<x, so that c cannot bump either x or y in cases (I)–(III), or $$c > \underline{x} > \underline{y}$$, so again c cannot bump either x or y in cases (I)–(III). Thus in all cases, the result is that c will be inserted into the cell following (i+1,j) in reading order.

Thus we have shown that in every case one of the conditions ($$\mathbb{A}$$), ($$\mathbb{B}$$), and (ℂ) holds which, in turn, implies that conditions 3 and 4 always hold. □

Before we proceed, we pause to make one technical remark which will be important for our results in Sect. 5. That is, a careful check of the proof of Theorem 7 will show that we actually proved the following.

### Corollary 8

Suppose that σ=σ 1 ⋯σ n ∈S n , σ i <σ i+1, and w=w 1 ⋯w t ∈{1,…,n} t . For j=1,…,t, let $$F^{\sigma }_{j} = w_{1} \ldots w_{j} \rightarrow E^{\sigma }$$ and $$F^{s_{i}\sigma }_{j} = w_{1} \ldots w_{j} \rightarrow E^{s_{i}\sigma }$$. Let $$F^{\sigma }_{0} = E^{\sigma }$$ and $$F^{s_{i}\sigma }_{0} = E^{s_{i}\sigma }$$. Let α (j) be the shape of $$F^{\sigma }_{j}$$ and β (j) be the shape of $$F^{s_{i}\sigma }_{j}$$. Then for all j≥1, the cells in α (j)/α (j−1) and β (j)/β (j−1) lie in the same row.

### Proof

It is easy to prove the corollary by induction on t. The corollary is clearly true for t=1 since inserting w 1 into either E σ or $$E^{s_{i}\sigma }$$ will create a new cell in the first row. Then it is easy to check that our proof of Theorem 7 establishing properties ($$\mathbb{A}$$), ($$\mathbb{B}$$), and (ℂ) for the insertions $$w_{t} \rightarrow F^{\sigma }_{t-1}$$ and $$w_{t} \rightarrow F^{s_{i}\sigma }_{t-1}$$ implies that the cells in α (t)/α (t−1) and β (t)/β (t−1) must lie in the same row. □

For any alphabet A, we let A ∗ denote the set of all words over the alphabet A. If w∈{1,…,n}∗, then let P σ(w)=w→E σ, which we call the σ-insertion tableau of w, and let $$\gamma^{\sigma }(w) = (\gamma^{\sigma }_{1}(w), \ldots,\gamma^{\sigma }_{n}(w))$$ be the composition corresponding to the shape of P σ(w). Theorem 7 has a number of fundamental consequences about the set of σ-insertion tableaux of w as σ varies over the symmetric group S n . Note that $$P^{\bar{\epsilon}_{n}}(w)$$ arises from w by performing a twisted version of the RSK row insertion algorithm. Hence $$\gamma^{\bar{\epsilon}_{n}}(w)$$ is always a partition. Then we have the following corollary.
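The insertion procedure w→E σ itself is defined in Sect. 3 and is not reproduced in this excerpt, but the bumping rule that the proofs above rely on (an entry x with entry $$\underline{x}$$ directly below it is bumped by c exactly when $$x < c \leq\underline{x}$$, and c comes to rest on the first empty cell whose entry below is at least c) suggests the following Python sketch. The column-list encoding and the top-to-bottom, left-to-right scan are assumptions of this sketch, not notation from the paper.

```python
def pbf_insert(cols, basement, c):
    """Insert the letter c into a PBF, sketched from the bumping rule
    x < c <= (entry below x).

    cols[i] lists the entries of column i from bottom to top, with the
    basement excluded; basement[i] sits below column i.  Cells are scanned
    in reading order (highest row first, left to right), and the empty cell
    on top of each column is treated as holding 0."""
    top = max(len(col) for col in cols) + 1
    for y in range(top, 0, -1):                  # rows, highest first
        for i in range(len(cols)):               # columns, left to right
            if y > len(cols[i]) + 1:
                continue                         # column i has no cell in row y
            below = cols[i][y - 2] if y >= 2 else basement[i]
            if y == len(cols[i]) + 1:            # empty cell on top of column i
                if c <= below:                   # c comes to rest here
                    cols[i].append(c)
                    return cols
            elif cols[i][y - 1] < c <= below:    # c bumps the entry in cell (i, y)
                cols[i][y - 1], c = c, cols[i][y - 1]
    raise ValueError("insertion did not terminate")
```

For instance, inserting the letters of w=1 2 into the empty $$\operatorname {PBF}$$ with basement $$\bar{\epsilon}_{3}=3~2~1$$ first places 1 on top of the basement cell 3 and then lets 2 bump it into the second column.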

### Corollary 9

Suppose that w∈{1,…,n}∗.

1. 1.

P σ(w) is completely determined by $$P^{\epsilon_{n}}(w)$$.

2. 2.

For all σ∈S n , γ σ(w) is a rearrangement of $$\gamma^{\bar{\epsilon}_{n}}(w)$$.

3. 3.

For all σ∈S n , the set of elements that lie in row j of P σ(w) equals the set of elements that lie in row j of $$P^{\bar{\epsilon}_{n}}(w)$$ for all j≥1.

### Proof

For claim 1, note that if σ=σ 1 …σ n where σ i <σ i+1, then Theorem 7 tells us that P σ(w) completely determines $$P^{s_{i}\sigma }(w)$$. That is, to obtain $$P^{s_{i}\sigma }(w)$$ from P σ(w), Lemma 1 tells us that we need only ensure that when both (i,j) and (i+1,j) are cells in P σ(w), then the elements in those two cells in P σ(w) must be arranged in decreasing order in $$P^{s_{i}\sigma }(w)$$. If only one of the cells (i,j) and (i+1,j) is in P σ(w), then the element in the cell that is occupied in P σ(w) must be placed in cell (i,j) in $$P^{s_{i}\sigma }(w)$$. Since we can get from ϵ n to any σ∈S n by applying a sequence of adjacent transpositions where we increase the number of inversions at each step, it follows that P σ(w) is completely determined by $$P^{\epsilon_{n}}(w)$$.

For claims 2 and 3, note that for any σ∈S n , we can get from σ to $$\bar{\epsilon}_{n}$$ by applying a sequence of adjacent transpositions where we increase the number of inversions at each step. Thus it follows from Theorem 7 that the set of column heights in $$\gamma^{\bar{\epsilon}_{n}}(w)$$ must be a rearrangement of the set of column heights of γ σ(w).

Moreover, it also follows that the set of elements in row j of $$P^{\bar{\epsilon}_{n}}(w)$$ must be the same as the set of elements in row j of P σ(w). Note that all the elements in a row of a $$\operatorname {PBF}$$ must be distinct by the non-attacking properties of a $$\operatorname {PBF}$$. □

The second author [9] introduced a shift map ρ which takes any $$\operatorname {PBF}$$ F with basement equal to ϵ n to a reverse row strict tableau ρ(F) by simply putting the elements which appear in row j of F (where j≥1) in decreasing order in row j of ρ(F), reading from left to right. This map is pictured at the top of Fig. 17. We can then add a basement below ρ(F) which contains the permutation $$\bar{\epsilon}_{n}$$ to obtain a $$\operatorname {PBF}$$ with basement equal to $$\bar{\epsilon}_{n}$$.

We can extend this map to $$\operatorname {PBF}$$s with an arbitrary basement σ. That is, if F σ is a $$\operatorname {PBF}$$ with basement σS n , let ρ σ (F σ) be the $$\operatorname {PBF}$$ with basement $$\bar{\epsilon}_{n}$$ by simply putting the elements which appear in row j of F σ in decreasing order in row j of ρ σ (F σ) for j≥0, reading from left to right. This map is pictured at the bottom of Fig. 17. To see that ρ σ (F σ) is a reverse row strict tableau, we need only check that ρ σ (F σ) is weakly decreasing in columns from bottom to top. But this property is an immediate consequence of the fact that every element in row j of F σ where j≥1 is less than or equal to the element it sits on top of in F σ.
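In the column encoding of a $$\operatorname {PBF}$$ (a hypothetical representation used only for this sketch, with cols[i] listing the entries of column i from bottom to top, basement excluded), the map ρ σ simply regroups the entries by row and sorts each row into decreasing order:

```python
def rho_sigma(cols):
    """Sort the entries of each row (basement excluded) into decreasing order.

    cols[i] lists the entries of column i from bottom to top, so cols[i][y-1]
    lies in row y.  Returns the rows of the resulting reverse row strict
    tableau, bottom row first."""
    rows = {}
    for col in cols:
        for y, entry in enumerate(col, start=1):
            rows.setdefault(y, []).append(entry)
    return [sorted(rows[y], reverse=True) for y in sorted(rows)]
```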

The second author [9] showed that for any reverse row strict tableau T, there is a unique $$\operatorname {PBF}$$ F T with basement equal to ϵ n such that ρ(F T )=T. Thus ρ −1 is uniquely defined. In fact, there is a natural procedure for constructing ρ −1(T). That is, assume that T has k rows and that P i is the set of elements of T that lie in row i.

### Definition of ρ−1 [9]

Inductively assume that the first i−1 rows of T, {P 1,…,P i−1}, have been mapped to a $$\operatorname {PBF}$$ F (i−1) with basement ϵ n in such a way that the elements in row j of F (i−1) are equal to P j for j=1,…,i−1. Let $$P_{i} =\{\alpha_{1} > \alpha_{2} > \cdots> \alpha_{s_{i}}\}$$. There exists an element greater than or equal to α 1 in row i−1 since α 1 sits on top of some element in T. Place α 1 on top of the left-most such element in row i−1 of F (i−1). Next assume that we have placed α 1,…,α k−1. Then there are at least k elements of P i−1 that are greater than or equal to α k since each of α 1,…,α k sits on top of some element in row i−1 of T. Place α k on top of the left-most element in row i−1 of F (i−1) which is greater than or equal to α k and which does not have one of α 1,…,α k−1 on top of it.

Now suppose that w∈{1,…,n}∗. We let rr(w) be the word that results by reading the cells of $$P^{\bar{\epsilon}_{n}}(w)$$ in reverse reading order excluding the cells in the basement. Thus rr(w) is just the word which consists of the elements in the first row of $$P^{\bar{\epsilon}_{n}}(w)$$ in increasing order, followed by the elements in the second row of $$P^{\bar{\epsilon}_{n}}(w)$$ in increasing order, etc. For example, if w=1 3 2 4 3 2 1 4, then $$P^{\bar{\epsilon}_{4}}(w)$$ is pictured in Fig. 18 so that rr(w)=1 2 3 4 3 4 1 2 2.
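The reverse reading word is computable from the same hypothetical column encoding used above (cols[i] holding the entries of column i from bottom to top, basement excluded) by sorting each row increasingly, bottom row first; a minimal sketch:

```python
def rr_word(cols):
    """Reverse reading word of a PBF in column encoding: the entries of
    row 1 in increasing order, then those of row 2, and so on."""
    height = max((len(col) for col in cols), default=0)
    return [entry
            for y in range(1, height + 1)
            for entry in sorted(col[y - 1] for col in cols if len(col) >= y)]
```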

Since our insertion algorithm for basement $$\bar{\epsilon}_{n}$$ is just a twisted version of the RSK row insertion algorithm, it is easy to see that $$rr(w) \rightarrow E^{\bar{\epsilon}_{n}} = P^{\bar{\epsilon}_{n}}(w)$$. But then we know by part 3 of Corollary 9 that for all j≥1, the set of elements in the jth row of $$P^{\epsilon_{n}}(rr(w)) = rr(w) \rightarrow E^{\epsilon_{n}}$$ equals the set of elements in the jth row of $$P^{\epsilon_{n}}(w)$$ since both sets are equal to the set of elements in the jth row of $$P^{\bar{\epsilon}_{n}}(w) = P^{\bar{\epsilon}_{n}}(rr(w))$$. Thus

$$\rho\bigl(P^{\epsilon_n}(w)\bigr) =\rho\bigl(P^{\epsilon_n}\bigl(rr(w)\bigr) \bigr) = P^{\bar{\epsilon}_n}(w) = P^{\bar{\epsilon}_n}\bigl(rr(w)\bigr).$$

Since there is a unique $$\operatorname {PBF}$$ F with basement ϵ n such that $$\rho(F) = P^{\bar{\epsilon}_{n}}(w)$$, we can conclude that $$P^{\epsilon_{n}}(w) = P^{\epsilon_{n}}(rr(w))$$. But then by part 1 of Corollary 9, it must be the case that P σ(w)=P σ(rr(w)) for all σ∈S n . Thus we have the following theorem.

### Theorem 10

1. 1.

If u,w∈{1,…,n}∗ and $$P^{\bar{\epsilon}_{n}}(w) = P^{\bar{\epsilon}_{n}}(u)$$, then P σ(u)=P σ(w) for all σ∈S n .

2. 2.

For any $$\operatorname {PBF}$$ T with basement equal to $$\bar{\epsilon}_{n}$$ and any σ∈S n , there is a unique $$\operatorname {PBF}$$ F σ with basement σ such that ρ σ (F σ)=T.

Theorem 10 says that we can construct P σ(w)=w→E σ by first constructing $$P^{\bar{\epsilon}_{n}}(w) = w \rightarrow E^{\bar{\epsilon}_{n}}$$ by our twisted version of RSK row insertion, then finding rr(w), the reverse reading word of $$P^{\bar{\epsilon}_{n}}(w)$$, and then computing rr(w)→E σ. However, it is easy to construct $$rr(w) \rightarrow E^{\sigma }= \rho_{\sigma }^{-1}(P^{\bar {\epsilon}_{n}}(w))$$ directly. That is, suppose that w 1 w 2 …w s is the strictly increasing word that results by reading the first row of $$P^{\bar{\epsilon}_{n}}(w)$$ in reverse reading order. Now consider inserting w 1 w 2 …w s into E σ. It is easy to see that w s will end up sitting on top of σ i in the basement where i is the least j such that σ j ≥w s . Next consider the entry w s−1. Before the insertion of w s , w s−1 sat on top of σ a in the basement where a is the least b such that σ b ≥w s−1. Now if a equals i, then w s will bump w s−1 and w s−1 will move to σ c where c is the least d>i such that σ d ≥w s−1. Thus once w s is placed, w s−1 will be placed on top of σ a in the basement where a is the least b such that σ b ≥w s−1 and w s is not on top of σ b . We continue this reasoning and show that w 1 w 2 …w s →E σ can be constructed inductively as follows.

Procedure to construct $$\rho_{\sigma }^{-1}(P^{\bar{\epsilon}_{n}}(w)) = rr(w) \rightarrow E^{\sigma }$$.

Step 1. Let w 1 …w s be the first row of $$P^{\bar{\epsilon}_{n}}(w)$$ in increasing order. First, place w s on top of σ i in the basement where i is the least j such that σ j ≥w s . Then having placed w s ,…,w r+1, place w r on top of σ u in the basement where u is the least v such that σ v ≥w r and none of w s ,…,w r+1 are on top of σ v .

Step i>1. Inductively assume that the first i−1 rows of $$P^{\bar{\epsilon}_{n}}(w)$$, {P 1,…,P i−1}, have been mapped to a $$\operatorname {PBF}$$ F (i−1) with basement σ in such a way that the elements in row j of F (i−1) are equal to P j for j=1,…,i−1. Let $$P_{i} =\{\alpha_{1} > \alpha_{2} > \cdots> \alpha_{s_{i}}\}$$ be the ith row of $$P^{\bar{\epsilon}_{n}}(w)$$. There exists an element greater than or equal to α 1 in row i−1 since α 1 sits on top of some element in $$P^{\bar{\epsilon}_{n}}(w)$$. Place α 1 on top of the left-most such element in row i−1 of F (i−1). Next assume that we have placed α 1,…,α k−1. Then there are at least k elements of P i−1 that are greater than or equal to α k since each of α 1,…,α k sits on top of some element in row i−1 of $$P^{\bar{\epsilon}_{n}}(w)$$. Place α k on top of the left-most element in row i−1 of F (i−1) which is greater than or equal to α k and which does not have one of α 1,…,α k−1 on top of it.
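The placement rule above (and the basement-ϵ n rule defining ρ −1 earlier) is a greedy left-most admissible placement, which can be sketched as follows; the column-list encoding is again an assumption of the sketch, not notation from the paper:

```python
def rho_sigma_inverse(rows, basement):
    """Greedy placement sketching the inverse of the shift map rho_sigma.

    rows[i-1] holds the set of entries in row i of the reverse row strict
    tableau; basement is the permutation sigma.  Each entry, taken in
    decreasing order, goes on top of the left-most column that reaches
    exactly row i-1 and whose top entry is >= it."""
    cols = [[b] for b in basement]          # include the basement as row 0
    for i, row in enumerate(rows, start=1):
        for a in sorted(row, reverse=True):
            for col in cols:
                if len(col) == i and col[-1] >= a:
                    col.append(a)           # left-most admissible column
                    break
            else:
                raise ValueError(f"no admissible column for {a}")
    return [col[1:] for col in cols]        # strip the basement
```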

We then have the following theorem which shows that the insertion wE σ can be factored through the twisted RSK row insertion algorithm used to construct $$w \rightarrow E^{\bar{\epsilon}_{n}}$$.

### Theorem 11

If w∈{1,…,n} and σS n , then P σ(w)=wE σ equals $$\rho_{\sigma }^{-1}(P^{\bar{\epsilon}_{n}}(w))$$ where $$P^{\bar{\epsilon}_{n}}(w) = w \rightarrow E^{\bar{\epsilon}_{n}}$$.

There are several important consequences of Theorem 11. First, we will show that our insertion algorithm satisfies many of the properties that the usual RSK row insertion algorithm satisfies. Consider the usual Knuth equivalence relations for row insertion. Suppose that u,v∈{1,2,…}∗ and x,y,z∈{1,2,…}. The two types of Knuth relations are

1. 1.

uyxzv∼uyzxv if x<y≤z and

2. 2.

uxzyv∼uzxyv if x≤y<z.

We say that two words w,w′∈{1,2,…}∗ are Knuth equivalent, written w∼w′, if w can be transformed into w′ by repeated use of relations 1 and 2. If w∼w′, then w and w′ give us the same insertion tableau under row insertion. In our twisted version of row insertion the two types of Knuth relations become

1′:

uyxzv∼′uyzxv if z≤y<x and

2′:

uxzyv∼′uzxyv if z<y≤x.

Then we say that two words w,w′∈{1,2,…,n}∗ are twisted Knuth equivalent, written w∼′w′, if w can be transformed to w′ by repeated use of relations 1′ and 2′. Therefore, if w∼′w′, then $$P^{\bar{\epsilon}_{n}}(w) = P^{\bar{\epsilon}_{n}}(w^{\prime})$$. Then Theorem 11 immediately implies the following.
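A brute-force check of twisted Knuth equivalence closes a word under the two twisted relations above; the following sketch does this by breadth-first search over triples of adjacent letters:

```python
from collections import deque

def twisted_knuth_class(word):
    """All words reachable from `word` by the twisted Knuth relations
    1': u y x z v ~ u y z x v  if z <= y < x,
    2': u x z y v ~ u z x y v  if z <  y <= x."""
    seen = {tuple(word)}
    queue = deque(seen)
    while queue:
        w = queue.popleft()
        for i in range(len(w) - 2):
            a, b, c = w[i], w[i + 1], w[i + 2]
            moves = []
            # relation 1' swaps the last two letters of the triple (both directions)
            if c <= a < b or b <= a < c:
                moves.append(w[:i] + (a, c, b) + w[i + 3:])
            # relation 2' swaps the first two letters of the triple (both directions)
            if b < c <= a or a < c <= b:
                moves.append(w[:i] + (b, a, c) + w[i + 3:])
            for v in moves:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen
```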

### Theorem 12

Suppose that w,w′∈{1,2,…,n}∗ and w∼′w′. Then for all σ∈S n , P σ(w)=P σ(w′).

It also follows from Theorem 11 that for every partition γ, the map $$\rho_{\sigma }^{-1}$$ gives a one-to-one correspondence between the set of reverse row strict tableaux of shape γ whose entries are less than or equal to n and the set of $$\operatorname {PBF}$$s with basement σ whose entries are less than or equal to n and whose shape (δ 1,…,δ n ) is a rearrangement of γ compatible with basement σ. That is, we say that a weak composition δ=(δ 1,…,δ n ) is compatible with basement σ=σ 1 …σ n ∈S n if δ i ≥δ j whenever σ i >σ j and i<j. Note that Lemma 1 implies that any $$\operatorname {PBF}$$ F σ with entries from {1,…,n} and basement σ must have a shape which is a weak composition compatible with basement σ. Then we have the following theorem.
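The compatibility condition is a finite set of inequalities and is easy to test; the sketch below (indices are 0-based in the code) also recovers, for basement σ=312, the two rearrangements of (2,1,0) that appear in the expansion of s (2,1,0)(x 1,x 2,x 3) below:

```python
from itertools import permutations

def is_compatible(delta, sigma):
    """delta is compatible with basement sigma iff delta[i] >= delta[j]
    whenever sigma[i] > sigma[j] and i < j (0-indexed here)."""
    n = len(sigma)
    return all(delta[i] >= delta[j]
               for i in range(n) for j in range(i + 1, n)
               if sigma[i] > sigma[j])

# rearrangements of (2, 1, 0) compatible with basement 312
atoms = sorted(d for d in set(permutations((2, 1, 0)))
               if is_compatible(d, (3, 1, 2)))
```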

### Theorem 13

Let λ=(λ 1,…,λ n ) be a partition of n. Then

$$s_\lambda(x_1, \ldots, x_n) = \sum_{ \delta\atop{\lambda(\delta) = \lambda}} \widehat{E}_{\delta}^{\sigma }(x_1, \ldots ,x_n)$$
(6)

where the sum runs over all weak compositions δ=(δ 1,…,δ n ) which are rearrangements of λ that are compatible with basement σ.

For example, consider s (2,1,0)(x 1,x 2,x 3). In Fig. 19, we have listed the eight $$\operatorname {PBF}$$s with basement $$\bar{\epsilon}_{3} = 3~2~1$$ over the alphabet {1,2,3}. Below each of these $$\operatorname {PBF}$$s G, we have pictured $$\rho^{-1}_{123}(G)$$, $$\rho^{-1}_{132}(G)$$ and $$\rho^{-1}_{312}(G)$$. One can see that

$$s_{(2,1,0)}(x_1,x_2,x_3) = \widehat{E}_{(2,1,0)}^{123}(x_1,x_2,x_3) + \widehat{E}_{(2,0,1)}^{123}(x_1,x_2,x_3) + \widehat{E}_{(1,2,0)}^{123}(x_1,x_2,x_3) + \widehat{E}_{(1,0,2)}^{123}(x_1,x_2,x_3) + \widehat{E}_{(0,2,1)}^{123}(x_1,x_2,x_3) + \widehat{E}_{(0,1,2)}^{123}(x_1,x_2,x_3),$$

$$s_{(2,1,0)}(x_1,x_2,x_3) = \widehat{E}_{(2,1,0)}^{132}(x_1,x_2,x_3) + \widehat{E}_{(1,2,0)}^{132}(x_1,x_2,x_3) + \widehat{E}_{(0,2,1)}^{132}(x_1,x_2,x_3),$$

and

$$s_{(2,1,0)}(x_1,x_2,x_3) = \widehat{E}_{(2,1,0)}^{312}(x_1,x_2,x_3) + \widehat {E}_{(2,0,1)}^{312}(x_1,x_2,x_3).$$

In fact, if we fix a basement permutation σS n and a partition λ of n, then we can view the set of generalized Demazure atoms

$$\bigl\{\widehat{E}^\sigma _\gamma(x_1, \ldots, x_n)\mid \lambda(\gamma) = \lambda\ \mbox{and}\ \gamma\ \mbox{is compatible with basement \sigma }\bigr\}$$

as inducing a set partition of the reverse row strict tableaux of shape λ. That is, let RRT(λ) denote the set of reverse row strict tableaux of shape λ with entries from {1,…,n}. Then if λ(γ)=λ and γ is compatible with basement σ, we can identify $$\widehat{E}^{\sigma }_{\gamma}(x_{1}, \ldots, x_{n})$$ with the set

$$\mathcal{E}^\sigma _\gamma= \bigl\{\rho_\sigma (P): P \ \mbox{is a \operatorname {PBF} of shape}\ \gamma\bigr\}.$$

The fact that there is a unique $$\operatorname {PBF}$$ P with basement σ such that ρ σ (P)=T for any reverse row strict tableau T of shape λ with entries from {1,…,n} implies that

$$Spt_\lambda^\sigma : = \bigl\{\mathcal{E}^\sigma _\gamma: \lambda(\gamma) = \lambda \ \mbox{and} \ \gamma\ \mbox{is compatible with basement \sigma }\bigr\}$$

is a set partition of RRT(λ). For example, if T 1,…,T 8 are the $$\operatorname {PBF}$$s with basement 321 pictured at the top of Fig. 19, reading from left to right, then

Then the collection of such set partitions $$\mathcal{STP}_{\lambda}= \{Spt^{\sigma }_{\lambda}: \sigma \in S_{n}\}$$ can be partially ordered by refinement. We can show that if σ< L τ in the (left) weak Bruhat order on S n , then $$Spt^{\sigma }_{\lambda}$$ is a refinement of $$Spt^{\tau}_{\lambda}$$. Moreover, if λ has n distinct parts, then $$\mathcal{STP}_{\lambda}$$ under refinement is isomorphic to the (left) weak Bruhat order on S n . These results will appear in a subsequent paper [2].

We end this section with a simple characterization for when $$\widehat{E}^{\sigma }_{\alpha}(x_{1}, \ldots, x_{n}) \neq0$$.

### Proposition 14

Suppose that α=(α 1,…,α n ) is a weak composition of length n and σ=σ 1 …σ n ∈S n . Then $$\widehat{E}^{\sigma }_{\alpha}(x_{1}, \ldots, x_{n}) \neq0$$ if and only if α is compatible with basement σ.

### Proof

If $$\widehat{E}^{\sigma }_{\alpha}(x_{1}, \ldots, x_{n}) \neq0$$, then there must be a $$\operatorname {PBF}$$ F σ of shape α with basement σ. Then Lemma 1 tells us that if 1≤i<j≤n and σ i >σ j , then α i ≥α j so that α is compatible with basement σ.

Conversely, suppose that α is compatible with basement σ. Then let F σ be the filling of $$\hat{dg}(\alpha)$$ such that the elements in column i are all equal to σ i . The elements of F σ are weakly increasing in columns, reading from top to bottom. If 1≤i<j≤n and α i <α j , then we know that σ i <σ j so that F σ automatically satisfies the B-increasing condition. Finally, if α i ≥α j and a=(i,y), b=(j,y) and c=(i,y−1) form a type A triple, then we cannot have F σ(a)≤F σ(b)≤F σ(c), so every type A triple in F σ will be an inversion triple. Thus F σ is a $$\operatorname {PBF}$$ of shape α with basement σ so that $$\widehat{E}^{\sigma }_{\alpha}(x_{1}, \ldots, x_{n}) \neq0$$. □

## 5 Pieri rules

The homogeneous symmetric function h k (x 1,…,x n ) and the elementary symmetric function e k (x 1,…,x n ) are defined by

$$h_k(x_1, \ldots, x_n) = \sum_{1 \leq i_1 \leq\cdots\leq i_k \leq n} x_{i_1} \cdots x_{i_k} \quad\mbox{and}\quad e_k(x_1, \ldots, x_n) = \sum_{1 \leq i_1 < \cdots< i_k \leq n} x_{i_1} \cdots x_{i_k}.$$
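Equivalently, h k sums the monomials $$x_{i_{1}} \cdots x_{i_{k}}$$ over weakly increasing index sequences and e k sums them over strictly increasing ones, which gives a quick numerical check:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs."""
    return sum(prod(t) for t in combinations_with_replacement(xs, k))

def e(k, xs):
    """Elementary symmetric polynomial e_k evaluated at xs."""
    return sum(prod(t) for t in combinations(xs, k))
```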

The Pieri rules for Schur functions state that

$$h_k(x_1, \ldots x_n) s_\mu(x_1, \ldots, x_n) = \sum_{\mu\subseteq\lambda} s_\lambda(x_1, \ldots, x_n)$$

where the sum runs over all partitions λ of k+|μ| such that μλ and λ/μ does not contain two elements in the same column and that

$$e_k(x_1, \ldots x_n) s_\mu(x_1, \ldots, x_n) = \sum_{\mu\subseteq\lambda} s_\lambda(x_1, \ldots, x_n)$$

where the sum runs over all partitions λ of k+|μ| such that μλ and λ/μ does not contain two elements in the same row. In our case, we think of the Schur function s μ (x 1,…,x n ) as $$\widehat{E}^{\bar{e}_{n}}_{\mu}(x_{1}, \ldots, x_{n})$$. Since we work with reverse row strict tableaux T, μ corresponds to the column heights of the $$\operatorname {PBF}$$ $$T^{\bar{\epsilon}_{n}}$$. Thus we say that λ/μ is a transposed skew row of size k if dg′(μ)⊆dg′(λ), |λ|=k+|μ| and no two elements in dg′(λ/μ) lie in the same column. Similarly, we say that λ/μ is a transposed skew column of size k if dg′(μ)⊆dg′(λ), |λ|=k+|μ| and no two elements in dg′(λ/μ) lie in the same row. Thus in this language, the Pieri rules become

$$h_k(x_1, \ldots x_n) \widehat{E}^{\bar{\epsilon}_n}_\mu(x_1, \ldots, x_n) = \sum_{\mu\subseteq\lambda} \widehat{E}^{\bar{\epsilon}_n}_\lambda (x_1, \ldots, x_n)$$
(7)

where the sum runs over all partitions λ such that μλ and λ/μ is a transposed skew row of size k and

$$e_k(x_1, \ldots x_n) \widehat{E}^{\bar{\epsilon}_n}_\mu(x_1, \ldots, x_n) = \sum_{\mu\subseteq\lambda} \widehat{E}^{\bar{\epsilon}_n}_\lambda (x_1, \ldots, x_n)$$
(8)

where the sum runs over λ such that μλ and λ/μ is a transposed skew column of size k.
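Both conditions are easy to test in column-height coordinates (μ and λ given as tuples of column heights, zero-padded to a common length); a sketch:

```python
def is_transposed_skew_row(mu, lam):
    """No two cells of lam/mu in the same column: each column grows by <= 1."""
    return all(m <= l <= m + 1 for m, l in zip(mu, lam))

def is_transposed_skew_column(mu, lam):
    """No two cells of lam/mu in the same row: each row gains <= 1 cell."""
    if any(l < m for m, l in zip(mu, lam)):
        return False
    height = max(lam, default=0)
    return all(sum(1 for m, l in zip(mu, lam) if m < y <= l) <= 1
               for y in range(1, height + 1))
```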

The main goal of this section is to prove an analogue of the Pieri rules (7) and (8) for the products $$h_{k}(x_{1}, \ldots x_{n}) \widehat{E}^{\sigma }_{\gamma}(x_{1}, \ldots, x_{n})$$ and $$e_{k}(x_{1}, \ldots x_{n}) \widehat{E}^{\sigma }_{\gamma}(x_{1}, \ldots, x_{n})$$.

We start with a simple lemma about the effect of inserting two letters into a $$\operatorname {PBF}$$. If α and β are weak compositions of length n and dg′(α)⊆dg′(β), then dg′(β/α) will denote the cells of dg′(β) which are not in dg′(α).

### Lemma 15

Suppose that F σ is a $$\operatorname {PBF}$$, G σ=k→F σ and H σ=k′→G σ. Suppose F σ is of shape α, G σ is of shape β, H σ is of shape γ, T is the cell in dg′(β/α), and T′ is the cell in dg′(γ/β). Then

1. 1.

If k≥k′, then T is strictly below T′ and

2. 2.

If k<k′, then T appears before T′ in reading order.

### Proof

There is no loss in generality in assuming that σ=σ 1σ n S n where n≥max(k,k′). Assume $$\bar{F}^{\sigma }$$ is the diagram that results by adding 0s on top of the cells of F σ as in the definition of the insertion kF σ. Let c 1,c 2,… be the cells in reading order that are in $$\bar{F}^{\sigma }$$ but not in the basement. We will prove this result by induction on the number of cells p in the list c 1,c 2,….

First, suppose that p=0 so that F σ just consists of the basement permutation σ and thus the cells c 1,c 2,… are simply the zero entries on top of the basement in $$\bar{F}^{\sigma }$$. Then k will be inserted in cell (i,1) where i is the least j such that k≤σ j . Now if k′≤k, then it is easy to see that k′ will be inserted on top of k in the insertion k′→G σ so that T will be strictly below T′.

If k′>k, then suppose that k is in cell (i,1) in G σ. Then it is clear that k′ cannot be placed in any of the cells (j,1) with j<i since k could not be placed in any of those cells. Hence k′ either bumps k or is placed in the first row in some cell to the right of k. In either case, T precedes T′ in reading order.

Now if p>0, there are two cases.

Case 1. k is placed in cell c i which is either equal to c 1 or in the same row as c 1.

In either case, c i is a cell on top of a column in $$\overline{F}^{\sigma }$$. Let $$\overline{c_{i}}$$ be the cell immediately above c i . Then cell $$\overline{c_{i}}$$ will be the first cell in reading order in $$\bar{G}^{\sigma }$$. If k′≤k, then k′ will be placed in $$\overline{c_{i}}$$ so that c i =T and $$\overline{c_{i}} =T^{\prime}$$. Thus T will occur below T′.

If k<k′, then k′ cannot be placed in $$\overline{c_{i}}$$. Moreover, k′ cannot bump any of the entries in cells c 1,…,c i−1 since k does not bump any of those elements. That is, for 1≤j<i, let $$\underline{c}_{j}$$ be the cell immediately below c j . Then the reason that k does not bump F σ(c j ) was that $$k > F^{\sigma }(\underline{c}_{j})$$ in which case $$k' > F^{\sigma }(\underline{c}_{j})$$ since the entries in cells c 1,…,c i−1 are all zero. Thus either k′ bumps k in cell c i or it is inserted into cells of $$\bar{F}^{\sigma }$$ after c i . In either case, it is easy to see that T′ must follow c i =T in reading order.

Case 2. k is placed in a cell c i which is not in the same row as c 1.

Let c j be the first cell in our list which is not in the same row as cell c 1. If k′ is not placed in any of the cells c 1,…,c j−1, then we are inserting k followed by k′ into the sequence $$\bar{F}^{\sigma }(c_{j}), \bar{F}^{\sigma }(c_{j}+1), \ldots$$ so the result follows by induction. However, the only way that k′ can be placed in a cell c i in the same row as c 1 is if k′<k in which case T′=c i . In that case, T lies in a row below the row of c 1 so that T lies strictly below T′. □

Suppose that γ and δ are weak compositions such that dg′(γ)⊆dg′(δ) and dg′(δ/γ) consists of a single cell c=(x,y). Then we say that c is a removable cell from δ if there is no j such that x<j≤ℓ(δ) and δ j ≥y, where ℓ(δ) denotes the number of parts of δ; that is, no cell of dg′(δ) lies strictly to the right of c in row y. The idea is that if F σ is a $$\operatorname {PBF}$$ of shape γ and basement σ, and δ is the shape of k→F σ, then Proposition 6 tells us that the cell c in dg′(δ/γ) must be a removable cell.

Now suppose that γ and δ are weak compositions of length m such that dg′(γ) is contained in dg′(δ) and σ=σ 1σ m is a permutation in S m . Let c 1=(x 1,y 1),…,c k =(x k ,y k ) be the cells of dg′(δ/γ) listed in reverse reading order. Let dg′(δ (i)) consist of the diagram of γ plus the cells c 1,…,c i . Then we say that δ/γ is a γ-transposed skew row relative to basement σ if

1. 1.

y 1<y 2<⋯<y k ,

2. 2.

For i=1,…,k, dg′(δ (i)) is the diagram of weak composition δ (i) which is compatible with basement σ,

3. 3.

dg′(γ)⊂dg′(δ (1))⊂dg′(δ (2))⊂⋯⊂dg′(δ (k)), and

4. 4.

c i is a removable square from δ (i) for i=1,…,k.

Next suppose that γ and ϵ are weak compositions of length m such that dg′(γ) is contained in dg′(ϵ). Let d 1=(x 1,y 1),…,d k =(x k ,y k ) be the cells of dg′(ϵ/γ) listed in reading order. Let dg′(ϵ (i)) consist of the diagram of γ plus the cells d 1,…,d i . We say that ϵ/γ is a γ-transposed skew column relative to basement σ if

1. 1.

For i=1,…,k, dg′(ϵ (i)) is the diagram of weak composition ϵ (i) which is compatible with basement σ,

2. 2.

dg′(γ)⊂dg′(ϵ (1))⊂dg′(ϵ (2))⊂⋯⊂dg′(ϵ (k)), and

3. 3.

d i is a removable square from ϵ (i) for i=1,…,k.

For example, if m=9, σ=127346589, and γ=(2,0,3,1,1,3,1,0,0), then, in Fig. 20, we have pictured a γ-transposed skew row relative to basement σ in the top left and a γ-transposed skew column relative to basement σ in the bottom left. The diagram on the top right is not a γ-transposed skew row relative to basement σ since c 3 is not a removable cell from δ (3)=(2,0,3,3,1,3,1,1,0), and the diagram on the bottom right is not a γ-transposed skew column relative to basement σ since the diagram consisting of γ plus cells d 1 and d 2 does not correspond to the diagram of a weak composition. It is easy to check that if $$\sigma = \bar{\epsilon}_{n}$$ and γ is of partition shape, then a γ-transposed skew row δ/γ relative to basement $$\bar{\epsilon}_{n}$$ implies that δ is a partition containing γ such that no two cells of δ/γ can lie in the same column. In this case, the removable cell condition is automatic since there are no cells to the right of any cell in δ/γ. Similarly, it is easy to check that a γ-transposed skew column ϵ/γ relative to basement $$\bar{\epsilon}_{n}$$ is just a transposed skew column.

### Theorem 16

Let γ=(γ 1,…,γ n ) be a weak composition of p and σ∈S n . Then

$$h_k(x_1, \ldots x_n) \widehat{E}^{\sigma }_\gamma(x_1, \ldots, x_n) = \sum_{\delta} \widehat{E}^{\sigma }_\delta(x_1, \ldots, x_n),$$
(9)

where the sum runs over all weak compositions δ=(δ 1,…,δ n ) of size p+k such that dg′(γ)⊆dg′(δ) and δ/γ is a γ-transposed skew row relative to basement σ.

$$e_k(x_1, \ldots x_n) \widehat{E}^{\sigma }_\gamma(x_1, \ldots, x_n) = \sum_{\epsilon} \widehat{E}^{\sigma }_\epsilon(x_1, \ldots, x_n),$$
(10)

where the sum runs over all weak compositions ϵ=(ϵ 1,…,ϵ n ) of size p+k such that dg′(γ)⊆dg′(ϵ) and ϵ/γ is a γ-transposed skew column relative to basement σ.

### Proof

The left hand side of (9) can be interpreted as the weight of the set of pairs (w,F σ) where w=w 1 …w k and n≥w 1≥⋯≥w k ≥1, F σ is a $$\operatorname {PBF}$$ of shape γ with basement σ, and the weight W(w,F σ) of the pair (w,F σ) is equal to $$W(F^{\sigma })\prod_{i=1}^{k} x_{w_{i}}$$. The right hand side of (9) can be interpreted as the sum of the weights of all $$\operatorname {PBF}$$s G σ with basement σ such that G σ has shape δ=(δ 1,…,δ n ) for some δ which is a weak composition of size p+k such that δ/γ is a γ-transposed skew row relative to basement σ.

Now consider the map Θ which takes such a pair (w,F σ) to

$$\varTheta\bigl(w,F^\sigma \bigr)= w \rightarrow F^\sigma = G^\sigma .$$

Let $$G^{\sigma }_{i} = w_{1} \ldots w_{i} \rightarrow F^{\sigma }$$ for i=1,…,k and let $$G^{\sigma }_{0} =F^{\sigma }$$. Let δ (i) be the shape of $$G^{\sigma }_{i}$$. It follows that each δ (i) is a weak composition of size p+i which is compatible with basement σ. Then let c i be the cell in dg′(δ (i)/δ (i−1)) for i=1,…,k. By Lemma 15, we know that c i+1 must be strictly above c i for i=1,…,k−1. Also, by Proposition 6, we know that c i must be a removable cell for δ (i). It follows that G σ is a $$\operatorname {PBF}$$ of some shape δ such that δ/γ is a γ-transposed skew row relative to basement σ. Moreover, it is clear that W(G σ)=W(w,F σ). Since our insertion procedure can be reversed, it is easy to see that Θ is one-to-one.

To see that Θ is a bijection between the pairs (w,F σ) contributing to the left hand side of (9) and the $$\operatorname {PBF}$$s G σ contributing to the right hand side of (9), we must show that for each G σ contributing to the right hand side of (9), there is a pair (w,F σ) contributing to the left hand side of (9) such that w→F σ=G σ. Suppose that G σ is a $$\operatorname {PBF}$$ with basement σ such that G σ has shape δ=(δ 1,…,δ n ) where δ/γ is a γ-transposed skew row relative to basement σ. Let c k ,…,c 1 be the cells of dg′(δ/γ) reading from top to bottom. Because c k is a removable cell for δ, it follows from our remarks following Proposition 6 that we can reverse the insertion procedure starting at cell c k . Similarly, c k−1 is a removable cell for the shape consisting of δ with c k removed, and, in general, c i must be a removable cell for the shape of δ with c k ,…,c i+1 removed. Thus we can first reverse the insertion process for the element in cell c k in G σ to produce a $$\operatorname {PBF}$$ $$F^{\sigma }_{k-1}$$ with basement σ and shape δ with cell c k removed and a letter w k such that $$w_{k} \rightarrow F^{\sigma }_{k-1} = G^{\sigma }$$. Then we can reverse our insertion process for the element in cell c k−1 of $$F^{\sigma }_{k-1}$$ to produce a $$\operatorname {PBF}$$ $$F^{\sigma }_{k-2}$$ with basement σ and shape δ with cells c k and c k−1 removed and a letter w k−1 such that $$w_{k-1}w_{k} \rightarrow F^{\sigma }_{k-2} = G^{\sigma }$$. Continuing on in this manner, we can produce a sequence of $$\operatorname {PBF}$$s $$F^{\sigma }_{0}, \ldots, F^{\sigma }_{k-1}$$ with basement σ and a word w=w 1 …w k such that $$w_{i} \ldots w_{k} \rightarrow F^{\sigma }_{i-1} =G^{\sigma }$$ and the shape of $$F^{\sigma }_{i-1}$$ equals δ with the cells c i ,c i+1,…,c k removed. Thus $$F^{\sigma }_{0}$$ will be a $$\operatorname {PBF}$$ with basement σ and shape γ such that $$w \rightarrow F^{\sigma }_{0} =G^{\sigma }$$.
The only thing that we have to prove is that w 1≥⋯≥w k . But it cannot be that w i <w i+1 for some i because Lemma 15 would imply that c i appears before c i+1 in reading order which it does not. Thus Θ is a bijection which proves that (9) holds.

The left hand side of (10) can be interpreted as the weight of the set of pairs (u,H σ) where u=u 1u k and 1≤u 1<⋯<u k n, H σ is a $$\operatorname {PBF}$$ of shape γ with basement σ, and the weight W(u,H σ) of the pair (u,H σ) is equal to $$W(H^{\sigma })\prod_{i=1}^{k} x_{u_{i}}$$. The right hand side of (10) can be interpreted as the sum of the weights of all $$\operatorname {PBF}$$s K σ with basement σ such that K σ has shape ϵ=(ϵ 1,…,ϵ n ) for some weak composition ϵ of size p+k such that ϵ/γ is a γ-transposed skew column relative to basement σ.

Again consider the map Θ which takes such a pair (u,H σ) to

$$\varTheta\bigl(u,H^\sigma \bigr)= u\rightarrow H^\sigma = K^\sigma .$$

Let $$K^{\sigma }_{i} = u_{1} \ldots u_{i} \rightarrow H^{\sigma }$$ for i=1,…,k and let $$K^{\sigma }_{0} =H^{\sigma }$$. Then let ϵ (i) be the shape of $$K^{\sigma }_{i}$$ so that ϵ (i) is a weak composition of size p+i which is compatible with basement σ. Then let d i be the cell in dg′(ϵ (i)/ϵ (i−1)) for i=1,…,k. By Lemma 15, we know that d i must appear before d i+1 in reading order for i=1,…,k−1. Moreover, d i must be a removable cell from ϵ (i) by Proposition 6.

It follows that K σ is a $$\operatorname {PBF}$$ of some shape ϵ=(ϵ 1,…,ϵ n ) such that ϵ/γ is a γ-transposed skew column relative to basement σ. Moreover, it is clear that W(K σ)=W(u,H σ). Since our insertion procedure can be reversed, it is easy to see that Θ is one-to-one.

To see that Θ is a bijection between the pairs (u,H σ) contributing to the left hand side of (10) and the $$\operatorname {PBF}$$s K σ contributing to the right hand side of (10), we must show that for each K σ contributing to the right hand side of (10), there is a pair (u,H σ) contributing to the left hand side of (10) such that u→H σ=K σ. So suppose that K σ is a $$\operatorname {PBF}$$ with basement σ such that K σ has shape ϵ=(ϵ 1,…,ϵ n ) such that ϵ/γ is a γ-transposed skew column relative to basement σ of size k. Let d k ,…,d 1 be the cells of dg′(ϵ/γ) read in reverse reading order. Since d k is a removable cell for ϵ, we can reverse our insertion process starting at cell d k . Similarly, d k−1 is a removable cell for the shape consisting of ϵ with d k removed, and, in general, d i must be a removable cell for the shape of ϵ with d k ,…,d i+1 removed. This means that we can reverse our insertion process starting with cell d i after we have reversed the insertion process starting at cells d k ,…,d i+1. Then we first reverse the insertion process for the element in cell d k in K σ to produce a $$\operatorname {PBF}$$ $$H_{k-1}^{\sigma }$$ with basement σ and shape ϵ with cell d k removed and a letter u k such that $$u_{k} \rightarrow H^{\sigma }_{k-1} = K^{\sigma }$$. Then we can reverse our insertion process for the element in cell d k−1 of $$H^{\sigma }_{k-1}$$ to produce a $$\operatorname {PBF}$$ $$H^{\sigma }_{k-2}$$ with basement σ and shape ϵ with cells d k and d k−1 removed and a letter u k−1 such that $$u_{k-1}u_{k} \rightarrow H^{\sigma }_{k-2} = K^{\sigma }$$. Continuing in this manner, we can produce a sequence of $$\operatorname {PBF}$$s $$H^{\sigma }_{0}, \ldots, H^{\sigma }_{k-1}$$ with basement σ and a word u=u 1 …u k such that $$u_{i} \ldots u_{k} \rightarrow H^{\sigma }_{i-1} =K^{\sigma }$$ and the shape of $$H^{\sigma }_{i-1}$$ equals ϵ with the cells d i ,d i+1,…,d k removed.
Thus $$H^{\sigma }_{0}$$ will be a $$\operatorname {PBF}$$ with basement σ and shape γ such that $$u \rightarrow H^{\sigma }_{0} =K^{\sigma }$$. The only thing that we have to prove is that u 1<⋯<u k . But it cannot be that u i ≥u i+1 for some i because Lemma 15 would force d i+1 to appear in a row which is strictly above the row in which d i appears which would mean that d i+1 does not follow d i in reading order. Thus Θ is a bijection which proves that (10) holds. □

We can show that there is an analogue of the Littlewood–Richardson rule for the product of a Schur function s λ (x 1,…,x n ) times $$\widehat{E}^{\sigma }_{\gamma}(x_{1}, \ldots, x_{n})$$ for all γ and σ∈S n . This rule will appear in a subsequent paper.

## 6 A permuted basements analogue of the Robinson–Schensted–Knuth algorithm

We are now ready to state an analogue of the Robinson–Schensted–Knuth Algorithm for $$\operatorname {PBF}$$s.

Let A=(a i,j ) be an arbitrary n×n matrix with nonnegative integer entries and let σ=σ 1⋯σ n ∈S n . For each pair i,j such that a i,j >0, create a sequence of a i,j biletters $$\begin{array}{c} \scriptstyle i \\[-3pt] \scriptstyle j \end{array}$$. Let w A be the unique two-line array consisting of all such biletters in which the top letters are weakly increasing and, for all pairs with the same top letter, the bottom letters are weakly increasing. For example, if

then

Let u A be the word consisting of the top row of w A and v A be the word consisting of the bottom row of w A . Let $${P}^{\sigma}_{0} = Q^{\sigma}_{0} = E^{\sigma }$$ be empty $$\operatorname {PBF}$$s with basement σ. We say that $${P}^{\sigma}_{0}$$ is the initial insertion $$\operatorname {PBF}$$ and $${Q}^{\sigma}_{0}$$ is the initial recording $$\operatorname {PBF}$$ relative to σ.
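The construction of the two-line array w A from A can be sketched in a few lines of code. This is an illustrative helper only; the function name `biword` is ours, and the matrix in the usage example is a made-up one, not the paper's omitted example matrix.

```python
def biword(A):
    """Build the two-line array w_A from a nonnegative integer matrix A.

    For each entry a[i][j] > 0 we take a[i][j] copies of the biletter
    (i, j) (1-indexed).  Sorting the biletters lexicographically makes the
    top letters weakly increasing and, within a fixed top letter, the
    bottom letters weakly increasing."""
    pairs = []
    for i, row in enumerate(A, start=1):
        for j, mult in enumerate(row, start=1):
            pairs.extend([(i, j)] * mult)
    pairs.sort()
    u = [p[0] for p in pairs]  # the word u_A: top row of w_A
    v = [p[1] for p in pairs]  # the word v_A: bottom row of w_A
    return u, v

# A hypothetical 3x3 matrix (not the one pictured in the paper):
u, v = biword([[1, 0, 2],
               [0, 1, 0],
               [1, 0, 0]])
```

Here the biletters are (1,1), (1,3), (1,3), (2,2), (3,1), so u A = 1 1 1 2 3 and v A = 1 3 3 2 1.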

Now suppose that u A =i 1⋯i t and v A =j 1⋯j t . Then insert the biletters of w A into the insertion and recording $$\operatorname {PBF}$$s using the following inductive procedure. Assume that the last k biletters of w A have already been inserted and that the resulting pair of $$\operatorname {PBF}$$s is $$(P_{k}^{\sigma }, Q_{k}^{\sigma })$$, where the partitions obtained by rearranging the shapes of $$P_{k}^{\sigma }$$ and $$Q_{k}^{\sigma }$$ are the same. Insert the entry j t−k into $$P_{k}^{\sigma }$$ according to the procedure $$j_{t-k} \rightarrow P_{k}^{\sigma }$$, producing $$P^{\sigma }_{k+1}$$. Suppose the new cell created in the insertion $$j_{t-k} \rightarrow P_{k}^{\sigma }$$ lies in row r. Record the position of the new entry by placing the entry i t−k into the leftmost empty cell in row r of $$Q_{k}^{\sigma }$$ which lies immediately above a cell greater than or equal to i t−k . Note that there will be such a cell since all the elements of $$Q_{k}^{\sigma }$$ are greater than or equal to i t−k and there is at least one cell in row r which is not occupied but lies above an occupied cell in row r−1 of $$Q_{k}^{\sigma }$$, since there is such a cell in $$P_{k}^{\sigma }$$. The resulting filling is $$Q^{\sigma }_{k+1}$$. Repeat this procedure until all of the biletters from w A have been inserted. The resulting pair $$(P_{t}^{\sigma }, Q_{t}^{\sigma }):=(P^{\sigma }, Q^{\sigma })$$ is denoted by Ψ σ (A). For example, if σ=1 4 2 5 3 and A is the matrix given above, then Ψ σ (A)=(P σ,Q σ) is pictured in Fig. 21.

Next consider the special case where $$\sigma = \bar{\epsilon}_{n}$$. Note that $$P^{\bar{\epsilon}_{n}} = j_{t} j_{t-1} \ldots j_{1} \rightarrow E^{\bar {\epsilon}_{n}}$$ is constructed by a twisted version of the usual RSK row insertion algorithm. In that case, the recording $$\operatorname {PBF}$$ $$Q^{\bar{\epsilon}_{n}}$$ is constructed in the same way that the usual RSK recording tableau is constructed except that we are constructing tableaux such that columns are weakly decreasing reading from bottom to top and the rows are strictly decreasing reading from left to right. Thus $$\varPsi_{\bar {\epsilon}_{n}}$$ is just a twisted version of the usual RSK correspondence between ℕ-valued n×n-matrices and pairs of column strict tableaux of the same shape. In particular, we know that if A is an ℕ-valued n×n-matrix and A T is its transpose, then $$\varPsi_{\bar{\epsilon}_{n}}(A) = (P^{\bar{\epsilon}_{n}},Q^{\bar{\epsilon}_{n}})$$ if and only if $$\varPsi_{\bar{\epsilon}_{n}}(A^{T}) = (Q^{\bar{\epsilon }_{n}},P^{\bar{\epsilon}_{n}})$$.

### Theorem 17

Let σ=σ 1⋯σ n ∈S n . The map Ψ σ is a bijection between ℕ-valued n×n matrices and pairs $$(P^{\sigma }, Q^{\sigma })$$ of $$\operatorname {PBF}$$s with basement σ such that if α is the shape of $$P^{\sigma }$$ and β is the shape of $$Q^{\sigma }$$, then λ(α)=λ(β) and α and β are compatible with basement σ.

### Proof

Suppose that A is an ℕ-valued n×n matrix and $$\varPsi_{\sigma }(A) = (P^{\sigma },Q^{\sigma })$$. The filling $$P^{\sigma }$$ is a $$\operatorname {PBF}$$ by Lemma 5. The shape α of $$P^{\sigma }$$ satisfies α i ≥α j for all inversions i<j of σ by Lemma 1. It is also easy to see that our definition of Ψ σ ensures that λ(α)=λ(β).

We must prove that the filling $$Q^{\sigma }$$ is a $$\operatorname {PBF}$$. The columns of $$Q^{\sigma }$$ are weakly decreasing from bottom to top by construction. For any given i, the bottom elements of biletters whose top elements equal i are inserted in weakly decreasing order. It then follows from Lemma 15 that i cannot occur twice in the same row of $$Q^{\sigma }$$.

To see that every triple is an inversion triple, consider first a type A triple consisting of the cells a=(x 1,y 1), b=(x 2,y 1), and c=(x 1,y 1−1) where x 1<x 2 as depicted below.

This triple would be a type A coinversion triple only if Q σ(a)≤Q σ(b)≤Q σ(c). Since we cannot have two equal elements of Q σ in the same row, it must be that Q σ(a)<Q σ(b)≤Q σ(c). There are now two cases. First, if Q σ(b)<Q σ(c), then under the Ψ σ map, Q σ(c) was placed first in Q σ, then Q σ(b) was placed, and then Q σ(a) was placed. But this means that at the time Q σ(b) was placed, it could have been placed on top of Q σ(c), which is a contradiction since the Ψ σ map requires that Q σ(b) be placed in the left-most possible position subject to the requirement that the columns are weakly decreasing. The second case is when Q σ(b)=Q σ(c). In that case, Lemma 15 ensures that the cells created by the insertion of the bottoms of biletters whose tops equal Q σ(b) are created from bottom to top. This means that the biletter which created cell c in Q σ must have been processed before the biletter which created cell b. But this means that under the Ψ σ map, Q σ(c) was placed first in Q σ, then Q σ(b) was placed, and then Q σ(a) was placed, which we have already determined is impossible. Thus there are no type A coinversion triples in Q σ.

Now suppose that there exists a=(x 2,y), b=(x 1,y−1), and c=(x 2,y−1), where x 1<x 2, which form a type B coinversion triple in Q σ as depicted below.

We know that Q σ(b)≠Q σ(c) since we cannot have two equal elements in the same row of Q σ. Thus we must have Q σ(a)≤Q σ(b)<Q σ(c). Now if Q σ(a)<Q σ(b), then under the Ψ σ map, Q σ(c) was placed first in Q σ, then Q σ(b) was placed, and then Q σ(a) was placed. However, if Q σ(a)=Q σ(b), then Lemma 15 ensures that the cells created by the insertion of the bottoms of biletters whose tops equal Q σ(b) are created from bottom to top. This means that the biletter which created cell b in Q σ must have been processed before the biletter which created cell a. Thus in either case, under the Ψ σ map, Q σ(c) was placed first in Q σ, then Q σ(b) was placed, and then Q σ(a) was placed. The only reason that Q σ(a) was not placed on top of Q σ(b) is that there must have already existed an element e which was on top of Q σ(b) at the time Q σ(a) was placed. This means that Q σ(a)≤e since the cells in Q σ are created by adding elements in weakly decreasing order. However, since we cannot have two equal elements in the same row, we must have Q σ(a)<e. Thus we know Q σ(x 1,y)>Q σ(x 2,y). But this means that if we added an element z in cell (x 2,y+1) which sits on top of Q σ(a), then the only reason that z was not placed on top of e=Q σ(x 1,y) is that there must have already been an element in Q σ(x 1,y+1) at the time we added z. But then we can argue as above that it must be the case that Q σ(x 1,y+1)>Q σ(x 2,y+1). But then we can repeat the argument for row y+2, so that if (x 2,y+2) is a cell in Q σ, then (x 1,y+2) must have already been filled at the time we added an element to (x 2,y+2) and Q σ(x 1,y+2)>Q σ(x 2,y+2). 
Continuing on in this way, we conclude that the height of column x 1 in Q σ is greater than or equal to the height of column x 2 in Q σ. But that is a contradiction, since if {a,b,c} is a type B triple, the height of column x 1 in Q σ must be less than the height of column x 2 in Q σ. Thus there can be no type B coinversion triples in Q σ.

Note that our argument above did not really use any properties of Q σ(c), but relied only on the fact that Q σ(a)≤Q σ(b). That is, we proved that if x 1<x 2 and Q σ(x 1,y−1)≥Q σ(x 2,y), then the height of column x 1 in Q σ must be greater than or equal to the height of column x 2 in Q σ. But this means that if the height of column x 1 in Q σ is less than the height of column x 2 in Q σ, then Q σ(x 1,y−1)<Q σ(x 2,y), which is precisely the B-increasing condition. Thus Q σ is a $$\operatorname {PBF}$$.

Next consider the shape β of Q σ. We must prove that β i ≥β j for all inversions i<j of σ. Consider the shape β (1) of Q σ after the first entry has been placed into Q σ. Since this entry is placed on top of the leftmost basement entry σ k which is greater than or equal to it, the first k−1 entries of σ are less than σ k and hence the claim is satisfied after the initial insertion.

Assume that the claim is satisfied after the insertion of the last k−1 biletters of w A and consider the placement of the next recorded entry in Q σ. Let s be the index of the column into which this entry is placed. Let t be an integer less than s such that σ t >σ s . Then column t is weakly taller than column s before this placement by assumption. If column t is strictly taller, then the placement of the new entry on top of column s will not alter the relative orders of the columns. If the heights of columns t and s are equal, then the highest entry in column t was inserted before the highest entry in column s, for otherwise the columns would violate the condition immediately after the highest entry of column s was inserted. But then the new entry would be inserted on top of column t, a contradiction. Therefore, the shape β of Q σ satisfies the condition that β i ≥β j for all pairs (i,j) satisfying i<j and σ i >σ j .

Thus we know that Ψ σ maps any ℕ-valued n×n matrix A to a pair of $$\operatorname {PBF}$$s (P σ,Q σ). Now suppose that u A =i 1⋯i t , v A =j 1⋯j t , and σ=σ 1⋯σ n ∈S n where σ i <σ i+1. We would like to determine the relationship between Q σ and $$Q^{s_{i}\sigma }$$. We established in Corollary 8 that as we consider the sequence of insertions

the new cells that we created by the insertions at each stage were in the same row of E σ as in $$E^{s_{i}\sigma }$$. This implies that for all j, the elements in row j of Q σ and $$Q^{s_{i}\sigma }$$ are the same. But then it is easy to prove by induction on the number of inversions of σ that for all j, the elements in row j of Q σ and $$Q^{\bar{\epsilon}_{n}}$$ are the same. That is, $$\rho_{\sigma }(Q^{\sigma }) = Q^{\bar{\epsilon}_{n}}$$. Since there is a unique $$\operatorname {PBF}$$ Q with basement σ such that for all j, the elements in row j of Q and $$Q^{\bar{\epsilon}_{n}}$$ are the same, it follows that $$Q^{\sigma }= \rho^{-1}_{\sigma }(Q^{\bar{\epsilon}_{n}})$$ for all σ. Since $$P^{\sigma } = j_{t} j_{t-1} \cdots j_{1} \rightarrow E^{\sigma }$$ for all σ, we know by the results of Sect. 3 that $$P^{\sigma }= \rho^{-1}_{\sigma }(P^{\bar{\epsilon}_{n}})$$ for all σ. Thus it follows that for any ℕ-valued n×n matrix A,

$$\varPsi_\sigma (A) = \bigl(P^\sigma ,Q^\sigma \bigr) = \bigl(\rho^{-1}_\sigma \bigl(P^{\bar{\epsilon}_n}\bigr), \rho^{-1}_\sigma \bigl(Q^{\bar{\epsilon}_n}\bigr)\bigr).$$

Since $$\varPsi_{\bar{\epsilon}_{n}}$$ and $$\rho^{-1}_{\sigma }$$ are bijections, it follows that Ψ σ is also a bijection between ℕ-valued n×n matrices A and pairs (P,Q) of $$\operatorname {PBF}$$s with basement σ.

We note that another way to define the inverse of Ψ σ is given by choosing the first occurrence (in reading order) of the smallest value in Q σ, removing it from Q σ, and labeling this entry i 1. Then choose the rightmost entry in this row of P σ which sits at the top of its column and apply the inverse of the insertion procedure to remove this cell from P σ. The resulting entry is then j 1. Repeat this procedure to obtain the array w A . □

Note that our proof of Theorem 17 allows us to prove the following corollary which says that for any σS n , the map Ψ σ can be factored through our twisted version of the RSK correspondence.

### Corollary 18

For any ℕ-valued n×n matrix A,

$$\varPsi_\sigma (A) = \bigl(P^\sigma ,Q^\sigma \bigr) = \bigl(\rho^{-1}_\sigma \bigl(P^{\bar{\epsilon}_n}\bigr), \rho^{-1}_\sigma \bigl(Q^{\bar{\epsilon}_n}\bigr)\bigr)$$
(11)

where the map

$$\varPsi_{\bar{\epsilon}_n}(A) = \bigl(P^{\bar{\epsilon}_n},Q^{\bar{\epsilon}_n}\bigr)$$
(12)

is a twisted version of the usual RSK correspondence.

Corollary 18 allows us to prove that our permuted basement version of the RSK correspondence Ψ σ satisfies many of the properties that are satisfied by the RSK correspondence. For example, we have the following theorem.

### Theorem 19

Suppose that A is an ℕ-valued n×n matrix and A T is its transpose. Then for all σ∈S n ,

$$\varPsi_\sigma (A) = \bigl(P^\sigma ,Q^\sigma \bigr) \ \iff\ \varPsi_\sigma \bigl(A^T\bigr) = \bigl(Q^\sigma ,P^\sigma \bigr).$$
(13)

### Proof

By the usual properties of the RSK correspondence, we know that

$$\varPsi_{\bar{\epsilon}_n}(A) = \bigl(P^{\bar{\epsilon}_n},Q^{\bar{\epsilon }_n} \bigr) \iff\varPsi_{\bar{\epsilon}_n}\bigl(A^T\bigr) = \bigl(Q^{\bar{\epsilon}_n},P^{\bar {\epsilon}_n}\bigr).$$
(14)

Then (13) follows immediately from (14) and (11). □
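Since (13) is inherited from the classical correspondence via Corollary 18, the symmetry being invoked in (14) can be checked directly with a standard implementation of classical RSK row insertion. The sketch below uses the usual untwisted conventions (semi-standard P and Q with weakly increasing rows), not the paper's twisted ones; `rsk` and `transpose` are illustrative names.

```python
from bisect import bisect_right

def rsk(A):
    """Classical RSK: sort the biletters of the matrix A lexicographically,
    row-insert the bottom letters into P, and record the top letters in Q."""
    pairs = []
    for i, row in enumerate(A, start=1):
        for j, mult in enumerate(row, start=1):
            pairs.extend([(i, j)] * mult)
    pairs.sort()
    P, Q = [], []
    for i, j in pairs:
        r = 0
        while r < len(P):
            pos = bisect_right(P[r], j)   # leftmost entry strictly larger than j
            if pos == len(P[r]):
                break                     # j sits at the end of this row
            j, P[r][pos] = P[r][pos], j   # bump that entry into the next row
            r += 1
        if r == len(P):
            P.append([])
            Q.append([])
        P[r].append(j)
        Q[r].append(i)                    # record where the new cell appeared
    return P, Q

def transpose(A):
    return [list(col) for col in zip(*A)]
```

For instance, with A = [[0,2],[1,0]] one finds rsk(A) = (P,Q) and rsk(transpose(A)) = (Q,P), the symmetry asserted in (14).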

### 6.1 Standardization

Let w=w 1⋯w n be a word with letters in {1,…,n} and let P σ(w)=w 1⋯w n →E σ where σ=σ 1⋯σ n ∈S n . One can standardize w in the usual manner. That is, if w has i j occurrences of j for j=1,…,n, then the standardization of w, st(w), is the permutation that results by replacing the 1s in w by 1,…,i 1, reading from right to left, then replacing the 2s in w by i 1+1,…,i 1+i 2, reading from right to left, etc. If st(w 1⋯w n )=s 1⋯s n , then we define the standardization of P σ(w) by letting st(P σ(w))=s 1⋯s n →E σ.
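The standardization st(w) just described is easy to compute. A small sketch (the function name `standardize` is ours); the key point is that ties within a value are broken from right to left, exactly as in the definition above.

```python
def standardize(w):
    """st(w): for each value v = 1, 2, ... in turn, relabel the occurrences
    of v in w by the next unused labels 1, 2, ..., len(w), scanning the
    occurrences of v from RIGHT to left (so the rightmost occurrence of a
    value gets the smallest of its labels)."""
    # Sort positions by (value, position from the right); the rank in this
    # order is exactly the label the definition assigns.
    order = sorted(range(len(w)), key=lambda p: (w[p], -p))
    st = [0] * len(w)
    for label, p in enumerate(order, start=1):
        st[p] = label
    return st
```

For example, for the word w = 4 3 1 3 2 3 4 1, the two 1s (read right to left) become 1 and 2, the single 2 becomes 3, the three 3s become 4, 5, 6, and the two 4s become 7, 8, giving st(w) = 8 6 2 5 3 4 7 1.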

In the special case where $$\sigma = \bar{\epsilon}_{n}$$, there are two different ways to find st(P σ(w)). That is, we can compute $$st(P^{\bar{\epsilon}_{n}}) = st(w) \rightarrow E^{\bar{\epsilon}_{n}}$$ directly, or we can compute $$P^{\bar{\epsilon}_{n}} = w\rightarrow E^{\bar{\epsilon}_{n}}$$ and then standardize the reverse row strict tableau $$P^{\bar{\epsilon}_{n}}$$. Here, for any reverse row strict tableau T, st(T) is the standard reverse row strict tableau obtained by replacing the 1s in T by 1,…,i 1, reading from top to bottom, then replacing the 2s in T by i 1+1,…,i 1+i 2, reading from top to bottom, etc. This follows from the fact that the usual standardization operation for words and column strict tableaux commutes with RSK row insertion; see [11]. Thus our standardization operation for words and reverse row strict tableaux commutes with our twisted version of RSK row insertion. That is, suppose w=w 1⋯w n is a word with letters in {1,…,n} and st(w)=s 1⋯s n . Then $$w \rightarrow E^{\bar{\epsilon}_{n}} = T$$ if and only if $$s_{1} \ldots s_{n} \rightarrow E^{\bar{\epsilon}_{n}} = st(T)$$. Because our insertion algorithm where the basement permutation is $${\bar{\epsilon}}_{n}$$ can be factored through our twisted version of RSK row insertion, the same thing happens when the basement is σ. That is,

We can summarize the above discussion in the following two propositions.

### Proposition 20

Let σ=σ 1⋯σ n ∈S n , let w=w 1⋯w n be a word with letters in {1,…,n}, and let st(w)=s 1⋯s n . If P σ(w)=w 1⋯w n →E σ has shape γ where γ is a weak composition of n, then st(P σ(w))=s 1⋯s n →E σ is a $$\operatorname {PBF}$$ whose shape is a rearrangement of γ.

### Proof

We have proved above that $$st(P^{\sigma }(w)) = \rho^{-1}_{\sigma }(st(P^{\bar{\epsilon}_{n}}(w)))$$. Thus since the shape of $$st(P^{\bar{\epsilon}_{n}}(w))$$ is λ(γ), we know that the shape of $$\rho^{-1}_{\sigma }(st(P^{\bar{\epsilon}_{n}}(w)))$$ is a rearrangement of γ. □

### Proposition 21

The standardization of words and $$\operatorname {PBF}$$s commutes with our insertion algorithm relative to the basement σ=σ 1⋯σ n ∈S n in the sense that for any word w=w 1⋯w n with letters in {1,…,n}, we have the following commutative diagram.

A specific example of this process for w=4 3 1 3 2 3 4 1 is pictured in Fig. 22.

By the same reasoning, we can show that the RSK algorithm for $$\operatorname {PBF}$$s with basement σ also commutes with standardization. That is, suppose that we are given an ℕ-valued n×n matrix A such that the sum of the entries of A is less than or equal to n. Then if $$w_{A} = \begin{array}{c} \scriptstyle u_{A} \\[-3pt] \scriptstyle v_{A} \end{array}$$ and

it will be the case that

## 7 Evacuation

The evacuation procedure on reverse semi-standard Young tableaux associates to each reverse SSYT T a new reverse SSYT evac(T) through a deletion process coupled with jeu de taquin. Specifically, let T be a reverse SSYT with n cells whose largest entry is m and let a be the entry in cell (1,1). Remove the entry a from T and apply jeu de taquin to create a new reverse SSYT, T′, with n−1 cells. The skew shape sh(T)/sh(T′) therefore consists of one cell which is then filled with the complement, m+1−a, of a relative to m. Repeat this procedure with T′ (but without changing the value of m) and continue until all of the cells from T have been evacuated and their complements relative to m have been placed into the appropriate locations in the diagram consisting of the union of all the one-celled skew shapes. This resulting diagram is a reverse semi-standard Young tableau called evac(T).
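The delete-then-slide loop just described can be sketched directly in code. This is a minimal sketch under two assumptions: the tableau follows the reverse row-strict convention of Sect. 6 (rows strictly decreasing left to right, columns weakly decreasing bottom to top), and T is stored as a list of rows with the bottom row first. The tie-breaking in the slide (prefer the east neighbour when it is greater than or equal to the north one) follows the rules spelled out later in the proof of Theorem 23; the function name `evacuate` is ours.

```python
import copy

def evacuate(T, m=None):
    """Evacuation by repeatedly deleting the (1,1) entry and sliding the
    hole out via jeu de taquin.

    T is a list of rows of partition shape, bottom row first; rows strictly
    decrease left to right and columns weakly decrease bottom to top (an
    assumed convention).  Each deleted entry a is complemented to m + 1 - a
    and written into the cell that the slide vacates."""
    T = copy.deepcopy(T)
    if m is None:
        m = max(max(row) for row in T)   # largest entry of T
    placed = {}                          # cell -> complemented entry
    while T:
        a = T[0][0]
        r = c = 0                        # the hole starts at cell (1,1)
        while True:
            east = T[r][c + 1] if c + 1 < len(T[r]) else None
            north = (T[r + 1][c]
                     if r + 1 < len(T) and c < len(T[r + 1]) else None)
            if east is None and north is None:
                break                    # the hole reached an outer corner
            # slide in the east neighbour when it is >= the north one
            if north is None or (east is not None and east >= north):
                T[r][c], c = east, c + 1
            else:
                T[r][c], r = north, r + 1
        del T[r][c]
        if not T[r]:
            del T[r]
        placed[(r, c)] = m + 1 - a       # complement relative to m
    # assemble evac(T) from the recorded one-cell skew shapes
    rows = 1 + max(r for r, _ in placed)
    return [[placed[(r, c)] for c in range(sum(p[0] == r for p in placed))]
            for r in range(rows)]
```

On the two-row example T = [[2,1],[1]] (bottom row 2 1, top row 1) with m = 2, this produces [[2,1],[2]], and applying it again returns the original T, illustrating that evacuation is an involution.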

We define an evacuation procedure on standard $$\operatorname {PBF}$$s with basement σ as follows. Given a standard $$\operatorname {PBF}$$ F σ with basement σ, we define $$\mathit{evac}(F^{\sigma }) = \rho^{-1}_{\sigma }(\mathit{evac}(\rho_{\sigma }(F^{\sigma })))$$. That is, we first use the ρ σ map to send F σ to a reverse standard tableau ρ σ (F σ). Then we apply the usual evacuation procedure to produce a reverse standard tableau evac(ρ σ (F σ)) and next apply $$\rho^{-1}_{\sigma }$$ to map evac(ρ σ (F σ)) back to a standard $$\operatorname {PBF}$$ with basement σ. We claim that in the special case where σ=ϵ n is the identity, we can define the evacuation procedure directly on the standard $$\operatorname {PBF}$$, which allows us to compute evacuation without using jeu de taquin.

### Procedure 22

Let $$F^{\epsilon_{n}}$$ be an arbitrary $$\operatorname {PBF}$$ of size n whose largest entry is m, and let R i be the collection of entries appearing in the ith row of $$F^{\epsilon_{n}}$$, reading from bottom to top. Let e 1 be the largest entry in the first row of $$F^{\epsilon_{n}}$$, C 1 be the column containing e 1, and let h 1 be the height of C 1 in $$F^{\epsilon _{n}}$$. Assign m+1−e 1 to row $$R_{h_{1}}$$ in $$\mathit{evac}(F^{\epsilon_{n}})$$. Remove e 1 and shift the remaining entries in column C 1 down by one position so that there are no gaps in the column. Next rearrange the entries in the rows of the resulting figure according to the same procedure that we used in defining the $$\rho^{-1}_{\epsilon _{n}}$$ map to produce a $$\operatorname {PBF}$$ $$F_{1}^{\epsilon_{n}}$$. Repeat the procedure on the new diagram $$F_{1}^{\epsilon_{n}}$$. That is, let e 2 be the largest entry in the first row of $$F_{1}^{\epsilon_{n}}$$, C 2 be the column that contains e 2, and h 2 be the height of column C 2 in $$F_{1}^{\epsilon_{n}}$$. Assign m+1−e 2 to row $$R_{h_{2}}$$ in $$\mathit{evac}(F^{\epsilon_{n}})$$. Remove e 2 and shift the remaining entries in column C 2 down by one position so that there are no gaps in the column. Next rearrange the entries in the rows of the resulting figure according to the same procedure that we used in defining the $$\rho^{-1}_{\epsilon_{n}}$$ map to produce a $$\operatorname {PBF}$$ $$F_{2}^{\epsilon_{n}}$$. Continue in this manner until all of the entries have been removed. The $$\operatorname {PBF}$$ $$\mathit{evac}(F^{\epsilon_{n}})$$ is produced by letting row i contain the complements relative to m of the entries associated with columns of height i and applying the map $$\rho^{-1}_{\epsilon_{n}}$$ to send the resulting entries in the given rows to their appropriate places.

See Fig. 23 for an example of this procedure.

### Theorem 23

If $$F^{\epsilon_{n}}$$ is a $$\operatorname {PBF}$$, then one can construct $$\mathit{evac}(F^{\epsilon_{n}})= \rho^{-1}_{\epsilon_{n}}(\mathit{evac}(\rho_{\epsilon_{n}}(F^{\epsilon_{n}})))$$ by Procedure 22.

### Proof

Let F be a $$\operatorname {PBF}$$ with basement $$\bar{\epsilon}_{n}$$ and let $$G^{\epsilon_{n}} = \rho^{-1}_{\epsilon_{n}}(F)$$. Let e 1=F(1,1) so that e 1 is the largest entry in the first row of F and hence it will be the largest entry in the first row of $$G^{\epsilon_{n}}$$. Now consider the jeu de taquin path of the empty space created by the removal of e 1 from F. That is, in jeu de taquin, we move the empty space to cell (2,1) and put F(2,1) in cell (1,1) if F(2,1) is defined and either F(2,1)≥F(1,2) or F(1,2) is not defined. Otherwise, we put F(1,2) in cell (1,1) and move the empty space to cell (1,2). In general, if the empty space is in cell (i,j), then we move the empty space to cell (i+1,j) and put F(i+1,j) into cell (i,j) if F(i+1,j) is defined and either F(i+1,j)≥F(i,j+1) or F(i,j+1) is not defined. Otherwise, we put F(i,j+1) in cell (i,j) and move the empty space to cell (i,j+1). The jeu de taquin path ends at cell (i,j) when both F(i,j+1) and F(i+1,j) are undefined.

Now suppose that in the evacuation of e 1=F(1,1), the path of the empty space ends in row s and that c i is the right-most column involved in the jeu de taquin path in row i for i=1,…,s. Thus the jeu de taquin path involves cells (1,1),…,(c 1,1) in row 1 of F, cells (c 1,2),…,(c 2,2) in row 2 of F, cells (c 2,3),…,(c 3,3) in row 3 of F, etc. Now if F 1 is the $$\operatorname {PBF}$$ with basement $$\bar{\epsilon}_{n}$$ that results from evacuating e 1, then it follows that in F 1, each of the entries F(c i ,i+1) will end up in row i of F 1 and all the other entries will be in the same row in F 1 as they were in F. We claim that in $$G^{\epsilon_{n}} = \rho^{-1}_{\epsilon_{n}}(F)$$, the column containing e 1 consists of e 1,F(c 1,2),F(c 2,3),…,F(c s−1,s), reading from bottom to top. Once we prove the claim, it will follow that in our direct evacuation of e 1 in $$G^{\epsilon_{n}}$$ to produce $$G_{1}^{\epsilon_{n}}$$, the entries in row i of F 1 and $$G_{1}^{\epsilon_{n}}$$ are the same. But then $$\rho_{\epsilon_{n}}(G_{1}^{\epsilon_{n}}) =F_{1}$$, so that $$\rho^{-1}_{\epsilon_{n}}(F_{1}) = G^{\epsilon_{n}}_{1}$$ since the row sets of F 1 completely determine $$\rho^{-1}_{\epsilon_{n}}(F_{1})$$. The theorem then easily follows by induction.

To prove the claim, note that the entries in the first row of F must all be distinct, so that in constructing $$\rho^{-1}_{\epsilon_{n}}(F)$$, each entry i in row 1 of F will be placed in column i. Now the fact that (2,1),…,(c 1,1) are in the jeu de taquin path means that F(2,1)≥F(1,2),F(3,1)≥F(2,2),…,F(c 1,1)≥F(c 1−1,2). The fact that F(c 1,2) is in the jeu de taquin path means that F(c 1,2)>F(c 1+1,1) or F(c 1+1,1) is not defined. It then follows that in constructing $$\rho^{-1}_{\epsilon_{n}}(F)$$, the entries F(1,2),…,F(c 1−1,2) can be placed on the columns occupied by F(2,1),…,F(c 1,1) but not on top of any of the columns occupied by F(c 1+1,1),F(c 1+2,1),…. Thus F(1,2),…,F(c 1−1,2) will be placed somewhere in the columns occupied by F(2,1),…,F(c 1,1). Thus when we go to place F(c 1,2) in the left-most available column, it must go on top of e 1 since it cannot go on top of any of the columns occupied by F(c 1+1,1),F(c 1+2,1),…. Finally, any entries strictly right of (c 1,2) in row 2 must be placed on top of columns occupied by entries strictly to the left of the column containing e 1 in row 1 of F. Now consider the construction of the third row of $$\rho^{-1}_{\epsilon_{n}}(F)$$. The entries F(1,3),…,F(c 1−1,3) can go on top of the entries F(1,2),…,F(c 1−1,2) since F(i,3)≤F(i,2) for all i for which both F(i,3) and F(i,2) are defined. Next, the fact that (c 1+1,2),…,(c 2,2) are in the jeu de taquin path means that F(c 1+1,2)≥F(c 1,3),…,F(c 2,2)≥F(c 2−1,3). Thus F(c 1,3),F(c 1+1,3),…,F(c 2−1,3) can go on top of the entries F(c 1+1,2),…,F(c 2,2) in row two of $$\rho^{-1}_{\epsilon_{n}}(F)$$. The fact that (c 2,3) is in the jeu de taquin path of e 1 in F means that F(c 2,3)>F(c 2+1,2), so that none of F(c 1+1,3),…,F(c 2,3) can go on top of the entries F(c 2+1,2),F(c 2+2,2),… in row two of $$\rho^{-1}_{\epsilon_{n}}(F)$$. 
Hence F(1,3),…,F(c 2−1,3) will be able to go on top of entries F(1,2),…,F(c 1−1,2),F(c 1+1,2),…,F(c 2,2) in row 2 of $$\rho^{-1}_{\epsilon_{n}}(F)$$ but they cannot go on top of the entries F(c 2+1,2),F(c 2+2,2),… in row 2 of $$\rho^{-1}_{\epsilon_{n}}(F)$$. Hence it must be the case that F(1,3),…,F(c 2−1,3) end up on top of entries F(1,2),…,F(c 1−1,2),F(c 1+1,2),…,F(c 2,2) in row 2 of $$\rho^{-1}_{\epsilon_{n}}(F)$$. Since F(c 2,3) cannot go on top of the entries F(c 2+1,2),F(c 2+2,2),… in row 2 of $$\rho^{-1}_{\epsilon_{n}}(F)$$, the only place left to place F(c 2,3) is on top of the column that contains e 1. Continuing on in this way establishes the claim. □