Combinatorial Markov chains on linear extensions
Abstract
We consider generalizations of Schützenberger’s promotion operator on the set \(\mathcal{L}\) of linear extensions of a finite poset of size n. This gives rise to a strongly connected graph on \(\mathcal{L}\). By assigning weights to the edges of the graph in two different ways, we study two Markov chains, both of which are irreducible. The stationary state of one gives rise to the uniform distribution, whereas the weights of the stationary state of the other have a nice product formula. This generalizes results by Hendricks on the Tsetlin library, which corresponds to the case when the poset is the antichain and hence \(\mathcal{L}=S_{n}\) is the full symmetric group. We also provide explicit eigenvalues of the transition matrix in general when the poset is a rooted forest. This is shown by proving that the associated monoid is \(\mathcal {R}\)-trivial and then using Steinberg’s extension of Brown’s theory for Markov chains on left regular bands to \(\mathcal {R}\)-trivial monoids.
Keywords: Linear extensions · Posets · Promotion operator · Markov chains

1 Introduction
Schützenberger [30] introduced the notion of evacuation and promotion on the set of linear extensions of a finite poset P of size n. This generalizes promotion on standard Young tableaux defined in terms of jeu de taquin moves. Haiman [19] as well as Malvenuto and Reutenauer [26] simplified Schützenberger’s approach by expressing the promotion operator ∂ in terms of more fundamental operators τ _{ i } (1≤i<n), which either act as the identity or as a simple transposition. A beautiful survey on this subject was written by Stanley [33].
In this paper, we consider a slight generalization of the promotion operator defined as ∂ _{ i }=τ _{ i } τ _{ i+1}⋯τ _{ n−1} for 1≤i≤n with ∂ _{1}=∂ being the original promotion operator. Since the operators ∂ _{ i } act on the set of all linear extensions of P, denoted \(\mathcal{L}(P)\), this gives rise to a graph whose vertices are the linear extensions and edges are labeled by the action of ∂ _{ i }. We show that this graph is strongly connected (see Proposition 4.1). As a result we obtain two irreducible Markov chains on \(\mathcal{L}(P)\) by assigning weights to the edges in two different ways. In one case, the stationary state is uniform, that is, every linear extension is equally likely to occur (see Theorem 4.3). In the other case, we obtain a nice product formula for the weights of the stationary distribution (see Theorem 4.5). We also consider analogous Markov chains for the adjacent transposition operators τ _{ i }, and give a combinatorial formula for their stationary distributions (see Theorems 4.4 and 4.7).
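To make the action of these operators concrete, here is a minimal Python sketch (the helper names `tau` and `promotion` and the comparability predicate `less` are ours, not from the paper): τ _{ i } swaps the entries in positions i and i+1 when their labels are incomparable in P, and ∂ _{ j } applies τ _{ j } τ _{ j+1}⋯τ _{ n−1}.

```python
def tau(pi, i, less):
    """tau_i (1-indexed): swap the entries in positions i and i+1
    if their labels are incomparable in the poset, else do nothing."""
    a, b = pi[i - 1], pi[i]
    if less(a, b) or less(b, a):
        return pi[:]                 # comparable labels: tau_i is the identity
    out = pi[:]
    out[i - 1], out[i] = b, a
    return out

def promotion(pi, j, less):
    """Extended promotion del_j = tau_j tau_{j+1} ... tau_{n-1}."""
    for i in range(j, len(pi)):
        pi = tau(pi, i, less)
    return pi

# On the antichain (no comparabilities) del_1 moves the first entry to the end.
antichain = lambda a, b: False
print(promotion([1, 2, 3, 4], 1, antichain))   # [2, 3, 4, 1]
```

On a chain, by contrast, every τ _{ i } acts as the identity, so every ∂ _{ j } fixes the unique linear extension.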
Our results can be viewed as a natural generalization of the results of Hendricks [20, 21] on the Tsetlin library [36], which is a model for the way an arrangement of books in a library shelf evolves over time. It is a Markov chain on permutations, where the entry in the ith position is moved to the front (or back depending on the conventions) with probability p _{ i }. Hendricks’ results from our viewpoint correspond to the case when P is an antichain and hence \(\mathcal{L}(P)=S_{n}\) is the full symmetric group. Many variants of the Tsetlin library have been studied and there is a wealth of literature on the subject. We refer the interested reader to the monographs by Letac [24] and by Dies [14], as well as the comprehensive bibliographies in [17] and [6].
One of the most interesting properties of the Tsetlin library Markov chain is that the eigenvalues of the transition matrix can be computed exactly. The exact form of the eigenvalues was independently investigated by several groups. Notably Donnelly [15], Kapoor and Reingold [23], and Phatarfod [27] studied the approach to stationarity in great detail. There has been some interest in finding exact formulas for the eigenvalues for generalizations of the Tsetlin library. The first major achievement in this direction was to interpret these results in the context of hyperplane arrangements [4, 6, 7]. This was further generalized to a class of monoids called left regular bands [10] and subsequently to all bands [11] by Brown. This theory has been used effectively by Björner [8, 9] to extend eigenvalue formulas on the Tsetlin library from a single shelf to hierarchies of libraries.
In this paper, we give explicit combinatorial formulas for the eigenvalues and multiplicities for the transition matrix of the promotion Markov chain when the underlying poset is a rooted forest (see Theorem 5.2). This is achieved by proving that the associated monoid is \(\mathcal {R}\)-trivial and then using a generalization of Brown’s theory [10] of Markov chains for left regular bands to the \(\mathcal {R}\)-trivial case using results by Steinberg [34, 35].
Computing the number of linear extensions is an important problem for real-world applications [22]. For example, it relates to sorting algorithms in computer science, rankings in the social sciences, and efficiently counting standard Young tableaux in combinatorics. A recursive formula was given in [16]. Brightwell and Winkler [12] showed that counting the number of linear extensions is #P-complete. Bubley and Dyer [5] provided an algorithm to (almost) uniformly sample the set of linear extensions of a finite poset quickly. We propose new Markov chains for sampling linear extensions uniformly at random. Further details are discussed in Sect. 7.
The paper is outlined as follows. In Sect. 2, we define the extended promotion operator and investigate some of its properties. The extended promotion and transposition operators are used in Sect. 3 to define various Markov chains, whose properties are studied in Sect. 4. We also prove formulas for the stationary distributions and explain the connection with the Tsetlin library there. In Sect. 5, we derive the partition function for the promotion Markov chains for rooted forests as well as all eigenvalues together with their multiplicities of the transition matrix. The statements about eigenvalues and multiplicities are proven in Sect. 6 using the theory of \(\mathcal {R}\)-trivial monoids. We end with possible directions for future research in Sect. 7. In the Appendix, we provide details about implementations of linear extensions, Markov chains, and their properties in Sage [28, 29] and Maple.
2 Extended promotion on linear extensions
2.1 Definition of extended promotion
(i) The process starts with a seed, the label 1. First remove it and replace it by the minimum of all the labels covering it, say i.
(ii) Now look for the minimum of all labels covering i in the original poset, and replace it, and continue in this way.
(iii) This process ends when a label is a “local maximum.” Place the label n+1 at that point.
(iv) Decrease all the labels by 1.
Example 2.1
We now generalize this to extended promotion, whose seed is any of the numbers 1,2,…,n. The algorithm is similar to the original one, and we describe it for seed j. Start with the subposet P _{ j } and perform steps (i)–(iii) in a completely analogous fashion. Now decrease all the labels strictly larger than j by 1 in P (not only P _{ j }). Clearly, this gives a new linear extension, which we denote π∂ _{ j }. Note that ∂ _{ n } is always the identity.
2.2 Properties of τ_{i} and extended promotion
The operators τ _{ i } are involutions (\(\tau_{i}^{2} = 1\)) and partially commute (τ _{ i } τ _{ j }=τ _{ j } τ _{ i } when |i−j|>1). Unlike the generators for the symmetric group, the τ _{ i } do not always satisfy the braid relation τ _{ i } τ _{ i+1} τ _{ i }=τ _{ i+1} τ _{ i } τ _{ i+1}. They do, however, satisfy (τ _{ i } τ _{ i+1})^{6}=1 [33].
Proposition 2.2
The proof is an easy case-by-case check. Since we do not use this result, we omit the proof.
It will also be useful to express the operators τ _{ i } in terms of the generalized promotion operator.
Lemma 2.3
For all 1≤j≤n−1, each operator τ _{ j } can be expressed as a product of promotion operators.
Proof
3 Various Markov chains
We now consider various discrete-time Markov chains related to the extended promotion operator. For completeness, we briefly review the part of the theory relevant to us.
Fix a finite poset P of size n. The operators {τ _{ i }∣1≤i<n} (resp., {∂ _{ i }∣1≤i≤n}) define a directed graph on the set of linear extensions \({\mathcal{L}}(P)\). The vertices of the graph are the elements of \({\mathcal{L}}(P)\) and there is an edge from π to π′ if π′=πτ _{ i } (resp., π′=π∂ _{ i }). We can now consider random walks on this graph with probabilities given formally by x _{1},…,x _{ n }, which sum to 1. In each case, we give two ways to assign the edge weights; see Sects. 3.1–3.4. An edge with weight x _{ i } is traversed with probability x _{ i }. A priori, the x _{ i }’s must be positive real numbers for this to make sense according to the standard techniques of Markov chains. However, the ideas work in much greater generality, and one can think of this as an “analytic continuation.”
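The construction above can be carried out explicitly for small posets. The following self-contained Python sketch builds the column-stochastic transition matrix of the uniform promotion chain; all helper names (`linear_extensions`, `uniform_promotion_matrix`, the predicate `less`) are ours, and linear extensions are found by brute force, which is only sensible for small n.

```python
from itertools import permutations
from fractions import Fraction

def linear_extensions(n, less):
    """Brute force: keep permutations in which comparable labels appear in order."""
    return [list(pi) for pi in permutations(range(1, n + 1))
            if all(not less(pi[j], pi[i]) for i in range(n) for j in range(i + 1, n))]

def tau(pi, i, less):
    a, b = pi[i - 1], pi[i]
    if less(a, b) or less(b, a):
        return pi[:]
    out = pi[:]
    out[i - 1], out[i] = b, a
    return out

def promotion(pi, j, less):
    for i in range(j, len(pi)):
        pi = tau(pi, i, less)
    return pi

def uniform_promotion_matrix(n, less, x):
    """Column-stochastic transition matrix: the edge pi -> pi del_j has weight x_j."""
    exts = linear_extensions(n, less)
    idx = {tuple(p): c for c, p in enumerate(exts)}
    m = len(exts)
    M = [[Fraction(0)] * m for _ in range(m)]
    for c, pi in enumerate(exts):
        for j in range(1, n + 1):
            M[idx[tuple(promotion(pi, j, less))]][c] += x[j - 1]
    return M, exts

# Toy poset with relations 1 < 3 and 2 < 3, uniform rates x_j = 1/3:
less = lambda a, b: (a, b) in {(1, 3), (2, 3)}
M, exts = uniform_promotion_matrix(3, less, [Fraction(1, 3)] * 3)
```

Columns sum to 1 by construction; that the rows also sum to 1 on this example is an instance of the uniform-stationarity statement of Theorem 4.3.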
We set up a running example that will be used for each case. The Appendix shows how to define and work with this poset in Sage.
Example 3.1
3.1 Uniform transposition graph
The vertices of the uniform transposition graph are the elements of \({\mathcal{L}}(P)\) and there is an edge between π and π′ if and only if π′=πτ _{ j } for some j∈[n], where we define τ _{ n } to be the identity map. This edge is assigned the symbolic weight x _{ j }. The name “uniform” is motivated by the fact that the stationary distribution of this Markov chain turns out to be uniform. Note that this chain is more general than the chains considered in [22] in that we assign arbitrary weights x _{ j } on the edges.
Example 3.2
3.2 Transposition graph
The transposition graph is defined in the same way as the uniform transposition graph, except that the edges are given the symbolic weight \(x_{\pi_{j}}\) whenever τ _{ j } takes π→π′.
Example 3.3
3.3 Uniform promotion graph
The vertices of the uniform promotion graph are labeled by elements of \({\mathcal{L}}(P)\) and there is an edge between π and π′ if and only if π′=π∂ _{ j } for some j∈[n]. In this case, the edge is given the symbolic weight x _{ j }.
Example 3.4
As in the uniform transposition graph, x _{4} occurs only on the diagonal in the above transition matrix. This is because the action of ∂ _{4} (or in general ∂ _{ n }) maps every linear extension to itself resulting in a loop.
3.4 Promotion graph
The promotion graph is defined in the same fashion as the uniform promotion graph with the exception that the edge between π and π′ when π′=π∂ _{ j } is given the weight \(x_{\pi_{j}}\).
Example 3.5
In the Appendix, implementations of these Markov chains in Sage and Maple are discussed.
4 Properties of the various Markov chains
In Sect. 4.1, we prove that the Markov chains defined in Sect. 3 are all irreducible. This is used in Sect. 4.2 to conclude that their stationary state is unique and either uniform or given by an explicit product formula in their weights.
Throughout this section, we fix a poset P of size n and let \({\mathcal{L}}:={\mathcal{L}}(P)\) be the set of its linear extensions.
4.1 Irreducibility
We now show that the four graphs of Sect. 3 are all strongly connected.
Proposition 4.1
Consider the digraph G whose vertices are labeled by elements of \({\mathcal{L}}\) and whose edges are given as follows: for \(\pi, \pi' \in{ \mathcal{L}}\), there is an edge from π to π′ in G if and only if π′=π∂ _{ j } (resp., π′=πτ _{ j }) for some j∈[n] (resp., j∈[n−1]). Then G is strongly connected.
Proof
We begin by showing the statement for the generalized promotion operators ∂ _{ j }. From an easy generalization of [33], we see that extended promotion, given by ∂ _{ j }, is a bijection for any j. Therefore, every element of \({\mathcal{L}}\) has exactly one such edge pointing in and one such edge pointing out. Moreover, ∂ _{ j } has finite order, so that \(\pi\partial_{j}^{k} = \pi\) for some k. In other words, the action of ∂ _{ j } splits \({\mathcal{L}}\) into disjoint cycles. In particular, π∂ _{ n }=π for all π so that it decomposes \({\mathcal{L}}\) into cycles of size 1.
It suffices to show that there is a directed path from any π to the identity e. We prove this by induction on n. The case of the poset with a single element is vacuous. Suppose the statement is true for every poset of size n−1. We have two cases. First, suppose \(\pi^{-1}_{1}=1\). In this case, ∂ _{2},…,∂ _{ n } act on \({\mathcal{L}}\) in exactly the same way as ∂ _{1},…,∂ _{ n−1} on \({\mathcal{L}}'\), the set of linear extensions of P′, the poset obtained from P by removing 1. Then the directed path exists by the induction assumption.
Instead, suppose \(\pi^{-1}_{1}=j\) and \(\pi^{-1}_{k}=1\), for j,k>1. In other words, the label j is at position 1 and label 1 is at position k of P. Since j is at the position of a minimal element in P, it does not belong to the upper set of 1 (that is j⋡1 in the relabeled poset). Thus, the only effect on j of applying ∂ _{1} is to reduce it by 1, i.e., if π′=π∂ _{1}, then \(\pi'^{-1}_{1}=j-1\). Continuing this way, we can get to the previous case by the action of \(\partial_{1}^{j-1}\) on π.
The statement for the τ _{ j } now follows from Lemma 2.3. □
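Proposition 4.1 is easy to confirm on small posets by brute force. The sketch below (all helper names ours) builds the promotion digraph and checks mutual reachability:

```python
from itertools import permutations

def linear_extensions(n, less):
    return [list(pi) for pi in permutations(range(1, n + 1))
            if all(not less(pi[j], pi[i]) for i in range(n) for j in range(i + 1, n))]

def tau(pi, i, less):
    a, b = pi[i - 1], pi[i]
    if less(a, b) or less(b, a):
        return pi[:]
    out = pi[:]
    out[i - 1], out[i] = b, a
    return out

def promotion(pi, j, less):
    for i in range(j, len(pi)):
        pi = tau(pi, i, less)
    return pi

def strongly_connected(n, less):
    """Check that every linear extension reaches every other one along
    the directed promotion edges pi -> pi del_j (j = 1..n)."""
    exts = [tuple(p) for p in linear_extensions(n, less)]
    nbrs = {p: {tuple(promotion(list(p), j, less)) for j in range(1, n + 1)}
            for p in exts}
    def reachable(s):
        seen, stack = {s}, [s]
        while stack:
            for w in nbrs[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    return all(len(reachable(p)) == len(exts) for p in exts)

# The poset with relations 1 < 3 and 2 < 3 gives a strongly connected graph:
print(strongly_connected(3, lambda a, b: (a, b) in {(1, 3), (2, 3)}))   # True
```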
Corollary 4.2
Assuming that the edge weights are strictly positive, all Markov chains of Sect. 3 are irreducible and their stationary distribution is unique.
Proof
Since the underlying graph of all four Markov chains of Sect. 3 is strongly connected, they are irreducible. The existence of a single loop at any vertex of the graph guarantees aperiodicity. The uniqueness of the stationary distribution then follows by standard theory of Markov chains [25, Chap. 1]. □
4.2 Stationary states
In this section, we prove properties of the stationary state of the various discrete-time Markov chains defined in Sect. 3, assuming that all x _{ i }’s are strictly positive.
Theorem 4.3
The discrete-time Markov chain according to the uniform promotion graph has the uniform stationary distribution, that is, each linear extension is equally likely to occur.
Proof
Stanley showed [33] that the promotion operator has finite order, that is, ∂ ^{ k }=id for some k. The same arguments go through for the extended promotion operators ∂ _{ j }. Therefore, at each vertex \(\pi\in{ \mathcal{L}}(P)\), there is an incoming and outgoing edge corresponding to ∂ _{ j } for each j∈[n]. For the uniform promotion graph, an edge for ∂ _{ j } is assigned weight x _{ j }, and hence the row sum of the transition matrix is one, which proves the result. Equivalently, the all ones vector is the required eigenvector. □
Theorem 4.4
The discrete-time Markov chain according to the uniform transposition graph has the uniform stationary distribution.
Proof
Since each τ _{ j } is an involution, every incoming edge with weight x _{ j } has an outgoing edge with the same weight. Another way of saying the same thing is that the transition matrix is symmetric. By definition, the transition matrix is constructed so that column sums are one. Therefore, row sums are also one. □
We now turn to the promotion and transposition graphs of Sect. 3. In this case, we find nice product formulas for the stationary weights.
Theorem 4.5
Remark 4.6
The entries of w do not, in general, sum to one. Therefore, this is not a true probability distribution, but this is easily remedied by a multiplicative constant Z _{ P } depending only on the poset P.
Proof of Theorem 4.5
When P is the n-antichain, then \({\mathcal{L}}= S_{n}\). In this case, the probability distribution of Theorem 4.5 has been studied in a completely different context by Hendricks [20, 21] and is known in the literature as the Tsetlin library [36], which we now describe. Suppose that a library consists of n books b _{1},…,b _{ n } on a single shelf. Assume that only one book is picked at a time and is returned before the next book is picked up. The book b _{ i } is picked with probability x _{ i } and placed at the end of the shelf.
We now explain why promotion on the n-antichain is the Tsetlin library. A given ordering of the books can be identified with a permutation π. The action of ∂ _{ k } on π gives πτ _{ k }⋯τ _{ n−1} by (2.3), where now all the τ _{ i }’s satisfy the braid relation since none of the π _{ j }’s are comparable. Thus the kth element in π is moved all the way to the end. The probability with which this happens is \(x_{\pi_{k}}\), which makes this process identical to the action of the Tsetlin library.
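This identification can be checked directly on small cases: on the antichain, ∂ _{ k } computed via the τ _{ i } agrees with the "move the kth book to the back" map. A sketch (helper names ours):

```python
from itertools import permutations

def tau(pi, i, less):
    a, b = pi[i - 1], pi[i]
    if less(a, b) or less(b, a):
        return pi[:]
    out = pi[:]
    out[i - 1], out[i] = b, a
    return out

def promotion(pi, k, less):
    for i in range(k, len(pi)):
        pi = tau(pi, i, less)
    return pi

def move_to_back(pi, k):
    """Tsetlin move: pick the k-th book and place it at the end of the shelf."""
    return pi[:k - 1] + pi[k:] + [pi[k - 1]]

antichain = lambda a, b: False   # on the antichain no two books are comparable
assert all(promotion(list(pi), k, antichain) == move_to_back(list(pi), k)
           for pi in permutations(range(1, 5)) for k in range(1, 5))
```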
Theorem 4.7
Proof
5 Partition functions and eigenvalues for rooted forests
For a certain class of posets, we are able to give an explicit formula for the probability distribution for the promotion graph. Note that this involves computing the partition function Z _{ P } (see Remark 4.6). We can also specify all eigenvalues and their multiplicities of the transition matrix explicitly.
5.1 Main results
Before we can state the main theorems of this section, we need to make a couple of definitions. A rooted tree is a connected poset, where each node has at most one successor. Note that a rooted tree has a unique largest element. A rooted forest is a union of rooted trees. A lower set (resp., upper set) S in a poset is a subset of the nodes such that if x∈S and y⪯x (resp., y⪰x), then also y∈S. We first give the formula for the partition function.
Theorem 5.1
Proof
A permutation is a derangement if it does not have any fixed points. A linear extension π is called a poset derangement if it is a derangement when considered as a permutation. Let \(\mathfrak{d}_{P}\) be the number of poset derangements of the poset P.
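Poset derangements are straightforward to enumerate for small posets. A brute-force sketch (helper names ours), using the poset [2]+[2] labeled so that 1<2 and 3<4, which has exactly the two derangements 3142 and 3412 mentioned in Remark 5.7:

```python
from itertools import permutations

def linear_extensions(n, less):
    """Brute force: keep permutations in which comparable labels appear in order."""
    return [list(pi) for pi in permutations(range(1, n + 1))
            if all(not less(pi[j], pi[i]) for i in range(n) for j in range(i + 1, n))]

def poset_derangements(n, less):
    """Linear extensions pi with no fixed point pi_i = i."""
    return [pi for pi in linear_extensions(n, less)
            if all(pi[i] != i + 1 for i in range(n))]

# [2] + [2] labeled so that 1 < 2 and 3 < 4:
less = lambda a, b: (a, b) in {(1, 2), (3, 4)}
print(poset_derangements(4, less))   # [[3, 1, 4, 2], [3, 4, 1, 2]]
```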
A lattice L is a poset in which any two elements have a unique supremum (also called join) and a unique infimum (also called meet). For x,y∈L the join is denoted by x∨y, whereas the meet is x∧y. For an upper semilattice we only require the existence of a unique supremum of any two elements.
Theorem 5.2
The proof of Theorem 5.2 will be given in Sect. 6. As we will see in Lemma 6.5, the action of the operators in the promotion graph of Sect. 3.4 for rooted forests has a Tsetlin library type interpretation of moving books to the end of a stack (up to reordering).
When P is a union of chains, which is a special case of rooted forests, we can express the eigenvalue multiplicities directly in terms of the number of poset derangements.
Theorem 5.3
The proof of Theorem 5.3 is given in Sect. 5.2.
Corollary 5.4
Note that the antichain is a special case of a rooted forest and in particular a union of chains. In this case, the Markov chain is the Tsetlin library and all subsets of [n] are upper (and lower) sets. Hence Theorem 5.2 specializes to the results of Donnelly [15], Kapoor and Reingold [23], and Phatarfod [27] for the Tsetlin library.
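Since Theorem 5.2 indexes the eigenvalues by upper sets of the rooted forest, it is useful to enumerate them. A brute-force sketch (the helper name `upper_sets` is ours; a forest is encoded by its successor map, with `None` at the roots):

```python
from itertools import combinations

def upper_sets(n, succ):
    """All upper sets of a rooted forest on {1,...,n}: a subset S is an
    upper set iff it is closed under taking the unique successor."""
    out = []
    for r in range(n + 1):
        for c in combinations(range(1, n + 1), r):
            s = set(c)
            if all(succ[x] is None or succ[x] in s for x in s):
                out.append(s)
    return out

# A chain 1 < 2 < 3 has the 4 upper sets {}, {3}, {2,3}, {1,2,3};
# the 3-antichain has all 2^3 = 8 subsets as upper sets.
chain = {1: 2, 2: 3, 3: None}
antichain = {1: None, 2: None, 3: None}
print(len(upper_sets(3, chain)), len(upper_sets(3, antichain)))   # 4 8
```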
The case of unions of chains, which are consecutively labeled, can be interpreted as looking at a parabolic subgroup of S _{ n }. If there are k chains of lengths n _{ i } for 1≤i≤k, then the parabolic subgroup is \(S_{n_{1}} \times\cdots\times S_{n_{k}}\). In the realm of the Tsetlin library, there are n _{ i } books of the same color. The Markov chain consists of taking a book at random and placing it at the end of the stack.
5.2 Proof of Theorem 5.3
We deduce Theorem 5.3 from Theorem 5.2 by which the matrix M has eigenvalues indexed by upper sets S with multiplicity d _{ S }. We need to show that \(\mathfrak{d}_{P \setminus S} = d_{S}\).
Let P be a union of chains and L the lattice of upper sets of P. The Möbius function of P is the product of the Möbius functions of each chain. This implies that the only upper sets of P on which the Möbius function is nonzero are the unions of top elements of the chains.
Lemma 5.5
Let P=[n _{1}]+[n _{2}]+⋯+[n _{ k }]. Fix I⊆[k] and let S⊆P be the upper set containing the top element of the ith chain of P for all i∈I. Then \(f([S,\hat{1}])\) is equal to the number of linear extensions of P that fix at least one element of the ith chain of P for all i∈I.
Proof
Let n=n _{1}+n _{2}+⋯+n _{ k } denote the number of elements in P. Let N _{1}=0 and define N _{ i }=n _{1}+⋯+n _{ i−1} for all 2≤i≤k. We label the elements of P consecutively so that N _{ i }+1,N _{ i }+2,…,N _{ i+1} label the elements of the ith chain of P for all 1≤i≤k.
The linear extensions of P are in bijection with words w of length n in the alphabet \(\mathcal{E}:= \{e_{1}, e_{2}, \ldots, e_{k}\}\) with n _{ i } instances of each letter e _{ i }. Indeed, given a linear extension π of P, we associate such a word w to π by setting w _{ j }=e _{ i } if π _{ j }∈{N _{ i }+1,…,N _{ i+1}}; i.e., if j lies in the ith column of P under the extension π. For the remainder of the proof, we will identify a linear extension π (and properties of π) with its corresponding word w. We also view e _{ i } as standard basis vectors in \(\mathbb{Z}^{k}\).
We say that w fixes N _{ i }+j if: (a) \(w_{N_{i}+j} = e_{i}\) (i.e., w sends N _{ i }+j to the ith column of P) and (b) the restriction of w to its first N _{ i }+j letters, which we denote \(w_{[1,\ldots,N_{i}+j]}\), contains exactly j instances of the letter e _{ i } (i.e., N _{ i }+j is the jth element of the ith column of P under the extension w).
Moreover, it is clear that the set of all j∈{1,…,n _{ i }} such that w fixes N _{ i }+j is an interval of the form [a _{ i },b _{ i }].
Having established this notation, we are ready to prove the main statement of the lemma. Let \(\mathcal{W}\) denote the collection of all words in the alphabet \(\mathcal{E}\) of length n with n _{ j } instances of each letter e _{ j } that fix an element of the ith chain of P for all i∈I. Let \(\mathcal{W}'\) denote the collection of all words of length n−|I| in the alphabet \(\mathcal{E}\) with \(n'_{j}\) instances of each letter e _{ j }, where \(n'_{j} = n_{j}-1\) for j∈I and \(n'_{j} = n_{j}\) otherwise.
We define a bijection \(\varphi: \mathcal{W} \rightarrow\mathcal {W}'\) as follows. For each i∈I, suppose \(w \in\mathcal{W}\) fixes the elements N _{ i }+a _{ i },…,N _{ i }+b _{ i } from the ith chain of P. We define φ(w) to be the word obtained from w by removing the letter e _{ i } in position N _{ i }+b _{ i } for each i∈I. Clearly, φ(w) has length n−|I| and \(n'_{j}\) instances of each letter e _{ j }.
Conversely, given \(w' \in\mathcal{W}'\), let J _{ i } be the set of indices \(N'_{i}+j\) with \(0 \leq j \leq n'_{i}\) such that \(w'_{[1,\ldots,N'_{i}+j]}\) contains exactly j instances of the letter e _{ i }. Here we allow j=0 since it is possible that there are no instances of the letter e _{ i } among the first \(N'_{i}\) letters of w′. Again, it is clear that each J _{ i } is an interval of the form \([N'_{i} + c_{i}, \ldots, N'_{i}+d_{i}]\) and \(w'_{N'_{i}+j} = e_{i}\) for all j∈[c _{ i }+1,…,d _{ i }], though it is possible that \(w'_{N'_{i}+c_{i}} \neq e_{i}\). Thus we define φ ^{−1}(w′) to be the word obtained from w′ by inserting the letter e _{ i } after \(w'_{N'_{i}+d_{i}}\) for all i∈I. □
We illustrate the proof of Lemma 5.5 in the following example.
Example 5.6
Remark 5.7
The initial labeling of P in the proof of Lemma 5.5 is essential to the proof. For example, let P be the poset [2]+[2] with two chains, each of length two. Labeling the elements of P so that 1<2 and 3<4 admits two derangements: 3142 and 3412. On the other hand, labeling the elements of P so that 1<4 and 2<3 only admits one derangement: 2143. In either case, the eigenvalue 0 of M has multiplicity 2.
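The word encoding used in the proof of Lemma 5.5 is easy to experiment with. A sketch (helper names ours) for P=[n _{1}]+⋯+[n _{ k }] with the consecutive labeling, using the [2]+[2] poset above:

```python
def ext_to_word(pi, sizes):
    """Send a linear extension of [n_1]+...+[n_k] (chain i labeled by
    N_i+1, ..., N_{i+1}) to its word in the letters 1, ..., k."""
    starts = [0]
    for s in sizes:
        starts.append(starts[-1] + s)
    def chain_of(p):
        return next(i + 1 for i in range(len(sizes)) if starts[i] < p <= starts[i + 1])
    return [chain_of(p) for p in pi]

def word_to_ext(w, sizes):
    """Inverse map: the j-th occurrence of letter i becomes N_i + j."""
    starts = [0]
    for s in sizes:
        starts.append(starts[-1] + s)
    seen = [0] * len(sizes)
    out = []
    for letter in w:
        seen[letter - 1] += 1
        out.append(starts[letter - 1] + seen[letter - 1])
    return out

# The derangement 3142 of [2]+[2] corresponds to the word e_2 e_1 e_2 e_1:
print(ext_to_word([3, 1, 4, 2], [2, 2]))   # [2, 1, 2, 1]
print(word_to_ext([2, 1, 2, 1], [2, 2]))   # [3, 1, 4, 2]
```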
6 \(\mathcal {R}\)-Trivial monoids
In this section, we provide the proof of Theorem 5.2. We first note that in the case of rooted forests the monoid generated by the relabeled promotion operators of the promotion graph is \(\mathcal {R}\)-trivial (see Sects. 6.1 and 6.2). Then we use a generalization of Brown’s theory [10] for Markov chains associated to left regular bands (see also [6, 7]) to \(\mathcal {R}\)-trivial monoids. This is, in fact, a special case of Steinberg’s results [34, Theorems 6.3 and 6.4] for monoids in the pseudovariety DA as stated in Sect. 6.3. The proof of Theorem 5.2 is given in Sect. 6.4.
6.1 \(\mathcal {R}\)-Trivial monoids
Remark 6.1
A monoid \(\mathcal {M}\) is a left regular band if x ^{2}=x and xyx=xy for all \(x,y\in \mathcal {M}\). It is not hard to check (see also [3, Example 2.4]) that left regular bands are \(\mathcal {R}\)-trivial.
Schocker [31] introduced the notion of weakly ordered monoids, which is equivalent to the notion of \(\mathcal {R}\)-triviality [3, Theorem 2.18] (the proof of which is based on ideas by Steinberg and Thiéry).
Definition 6.2
(i) \(\operatorname {supp}\) is a surjective monoid morphism, that is, \(\operatorname {supp}(xy) = \operatorname {supp}(x) \vee \operatorname {supp}(y)\) for all \(x,y \in \mathcal {M}\) and \(\operatorname {supp}(\mathcal {M})=L^{\mathcal {M}}\).
(ii) If \(x,y \in \mathcal {M}\) are such that \(xy\le_{\mathcal {R}} x\), then \(\operatorname {supp}(y) \preceq \operatorname {des}(x)\).
(iii) If \(x,y \in \mathcal {M}\) are such that \(\operatorname {supp}(y) \preceq \operatorname {des}(x)\), then xy=x.
Theorem 6.3
[3, Theorem 2.18] Let \(\mathcal {M}\) be a finite monoid. Then \(\mathcal {M}\) is weakly ordered if and only if \(\mathcal {M}\) is \(\mathcal {R}\)-trivial.
If \(\mathcal {M}\) is \(\mathcal {R}\)-trivial, then for each \(x\in \mathcal {M}\) there exists an exponent ω such that x ^{ ω } x=x ^{ ω }. In particular, x ^{ ω } is idempotent, that is, (x ^{ ω })^{2}=x ^{ ω }.
Given an \(\mathcal {R}\)-trivial monoid \(\mathcal {M}\), one might be interested in finding the underlying semilattice \(L^{\mathcal {M}}\) and maps \(\operatorname {supp},\operatorname {des}\).
Remark 6.4
(i) \(L^{\mathcal {M}}\) is the set of left ideals \(\mathcal {M}e\) generated by the idempotents \(e\in \mathcal {M}\), ordered by reverse inclusion.
(ii) \(\operatorname {supp}:\mathcal {M}\to L^{\mathcal {M}}\) is defined as \(\operatorname {supp}(x) = \mathcal {M}x^{\omega}\).
(iii) \(\operatorname {des}: \mathcal {M}\to L^{\mathcal {M}}\) is defined as \(\operatorname {des}(x)=\operatorname {supp}(e)\), where e is some maximal element in the set \(\{y\in \mathcal {M}\mid xy=x\}\) with respect to the preorder \(\le_{\mathcal {R}}\).
The idea of associating a lattice (or semilattice) to certain monoids has been used for a long time in the semigroup community [13].
6.2 \(\mathcal {R}\)-Triviality of the promotion monoid
Now let P be a rooted forest of size n and \(\hat{\partial}_{i}\) for 1≤i≤n the operators on \({\mathcal{L}}(P)\) defined by the promotion graph of Sect. 3.4. That is, for \(\pi,\pi' \in{ \mathcal{L}}(P)\), the operator \(\hat{\partial}_{i}\) maps π to π′ if \(\pi' = \pi\partial_{\pi^{-1}_{i}}\). We are interested in the monoid \(\mathcal {M}^{\hat{\partial}}\) generated by \(\{ \hat{\partial}_{i} \mid 1\le i \le n\}\).
Lemma 6.5
Let P and \(\hat{\partial}_{i}\) be as above, and \(\pi\in{ \mathcal{L}}(P)\). Then \(\pi \hat{\partial}_{i}\) is the linear extension in \({\mathcal{L}}(P)\) obtained from π by moving the letter i to position n and reordering all letters j⪰i.
Proof
Suppose \(\pi^{-1}_{i} = k\). Then the letter i is in position k in π. Furthermore, by definition \(\pi \hat{\partial}_{i} = \pi \partial_{\pi^{-1}_{i}} = \pi \partial_{k} = \pi\tau_{k} \tau_{k+1} \cdots\tau_{n-1}\). Since π is a linear extension of P, all comparable letters are ordered within π. Hence τ _{ k } either tries to switch i with a letter j⪰i or an incomparable letter j. In the case j⪰i, τ _{ k } acts as the identity. In the other case, τ _{ k } switches the elements. In the first (resp., second) case, we repeat the argument with i replaced by its unique successor j (resp., i) and τ _{ k } replaced by τ _{ k+1}, etc. It is not hard to see that this results in the claim of the lemma. □
Example 6.6
Let P be the union of a chain of length 3 and a chain of length 2, where the first chain is labeled by the elements {1,2,3} and the second chain by {4,5}. Then \(41235 \hat{\partial}_{1} = 41253\), which is obtained by moving the letter 1 to the end of the word and then reordering the letters {1,2,3}, so that the result is again a linear extension of P.
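Lemma 6.5 gives a direct, promotion-free way to compute \(\hat{\partial}_{i}\). A sketch (helper names ours; `up(i)` returns the upper set {j∣j⪰i}) reproducing Example 6.6:

```python
def hat_partial(pi, i, up):
    """Compute pi hat-del_i as in Lemma 6.5: move the letter i to the last
    position, then re-sort the letters j >= i increasingly within the
    positions they occupy."""
    out = [p for p in pi if p != i] + [i]
    ups = up(i)
    pos = [k for k, p in enumerate(out) if p in ups]
    for k, letter in zip(pos, sorted(out[k] for k in pos)):
        out[k] = letter
    return out

# P = chain {1,2,3} union chain {4,5}:
up = lambda i: set(range(i, 4)) if i <= 3 else set(range(i, 6))
print(hat_partial([4, 1, 2, 3, 5], 1, up))   # [4, 1, 2, 5, 3], as in Example 6.6
```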
Let \(x\in \mathcal {M}^{\hat{\partial}}\). The image of x is \(\operatorname {im}(x) = \{\pi x \mid\pi\in{ \mathcal{L}}(P)\}\). Furthermore, for each \(\pi\in \operatorname {im}(x)\), let \(\operatorname {fiber}(\pi,x) = \{ \pi ' \in{ \mathcal{L}}(P) \mid\pi= \pi' x\}\). Let \(\operatorname{rfactor}(x)\) be the maximal common right factor of all elements in \(\operatorname {im}(x)\), that is, all elements \(\pi\in \operatorname {im}(x)\) can be written as \(\pi= \pi_{1} \cdots\pi_{m} \operatorname{rfactor}(x)\) and there is no bigger right factor for which this is true. Let us also define the set of entries in the right factor \(\operatorname {Rfactor}(x) = \{ i \mid i\in\operatorname{rfactor}(x) \}\). Note that since all elements in the image set of x are linear extensions of P, \(\operatorname {Rfactor}(x)\) is an upper set of P.
By Lemma 6.5, linear extensions in \(\operatorname {im}(\hat{\partial}_{i})\) have as their last letter max_{ P }{j∣j⪰i}; this maximum is unique since P is a rooted forest. Hence it is clear that \(\operatorname {im}(\hat{\partial}_{i} x) \subseteq \operatorname {im}(x)\) for any \(x\in \mathcal {M}^{\hat{\partial}}\) and 1≤i≤n. In particular, if \(x\le_{\mathcal {L}}y\), that is y=ux for some \(u\in \mathcal {M}^{\hat{\partial}}\), then \(\operatorname {im}(y) \subseteq \operatorname {im}(x)\). Hence x,y can only be in the same \(\mathcal {L}\)class if \(\operatorname {im}(x)=\operatorname {im}(y)\).
Fix \(x \in \mathcal {M}^{\hat{\partial}}\) and let the set I _{ x }={i _{1},…,i _{ k }} be maximal such that \(\hat{\partial}_{i_{j}} x = x\) for 1≤j≤k. The following holds.
Lemma 6.7
If x is an idempotent, then \(\operatorname {Rfactor}(x) = I_{x}\).
Proof
Recall that the operators \(\hat{\partial}_{i}\) generate \(\mathcal {M}^{\hat{\partial}}\). Hence we can write \(x = \hat{\partial}_{\alpha_{1}} \cdots \hat{\partial}_{\alpha_{m}}\) for some α _{ j }∈[n].
The condition \(\hat{\partial}_{i} x = x\) is equivalent to the condition that for every \(\pi\in \operatorname {im}(\hat{\partial}_{i})\) there is a \(\pi'\in \operatorname {im}(x)\) such that \(\operatorname {fiber}(\pi, \hat{\partial}_{i}) \subseteq \operatorname {fiber}(\pi',x)\) and π′=πx. Since x is idempotent we also have π′=π′x. The first condition \(\operatorname {fiber}(\pi, \hat{\partial}_{i}) \subseteq \operatorname {fiber}(\pi ',x)\) makes sure that the fibers of x are coarser than the fibers of \(\hat{\partial}_{i}\); this is a necessary condition for \(\hat{\partial}_{i} x = x\) to hold (recall that we are acting on the right) since the fibers of \(\hat{\partial}_{i} x\) are coarser than the fibers of \(\hat{\partial}_{i}\). The second condition π′=πx ensures that \(\operatorname {im}(\hat{\partial}_{i} x) = \operatorname {im}(x)\). Conversely, if the two conditions hold, then certainly \(\hat{\partial}_{i} x = x\). Since x ^{2}=x is an idempotent, we hence must have \(\hat{\partial}_{\alpha_{j}} x = x\) for all 1≤j≤m.
Now let us consider \(x\hat{\partial}_{\alpha_{j}}\). If \(\alpha_{j} \notin \operatorname {Rfactor}(x)\), then, by Lemma 6.5, we have \(\operatorname {Rfactor}(x) \subsetneq \operatorname {Rfactor}(x\hat{\partial}_{\alpha_{j}})\) and hence \(|\operatorname {im}(x\hat{\partial}_{\alpha_{j}})| < |\operatorname {im}(x)|\), which contradicts the fact that x ^{2}=x. Therefore, \(\alpha_{j}\in \operatorname {Rfactor}(x)\).
Now suppose \(\hat{\partial}_{i}x=x\). Then \(x=\hat{\partial}_{i} \hat{\partial}_{\alpha_{1}} \cdots \hat{\partial}_{\alpha_{m}}\) and by the same arguments as above \(i\in \operatorname {Rfactor}(x)\). Hence \(I_{x} \subseteq \operatorname {Rfactor}(x)\). Conversely, suppose \(i\in \operatorname {Rfactor}(x)\). Then \(x\hat{\partial}_{i}\) has the same fibers as x (but possibly a different image set since \(\operatorname{rfactor}(x\hat{\partial}_{i}) = \operatorname{rfactor}(x) \hat{\partial}_{i}\) which can be different from \(\operatorname{rfactor}(x)\)). This implies \(x\hat{\partial}_{i} x =x\). Hence considering the expression in terms of generators \(x= \hat{\partial}_{\alpha_{1}} \cdots \hat{\partial}_{\alpha_{m}} \hat{\partial}_{i} \hat{\partial}_{\alpha_{1}} \cdots \hat{\partial}_{\alpha_{m}}\), the above arguments imply that \(\hat{\partial}_{i} x = x\). This shows that \(\operatorname {Rfactor}(x) \subseteq I_{x}\) and hence \(I_{x} = \operatorname {Rfactor}(x)\). This proves the claim. □
Lemma 6.8
I _{ x } is an upper set of P for any \(x\in \mathcal {M}^{\hat{\partial}}\). More precisely, \(I_{x} = \operatorname {Rfactor}(e)\) for some idempotent \(e\in \mathcal {M}^{\hat{\partial}}\).
Proof
For any \(x\in \mathcal {M}^{\hat{\partial}}\), we have \(\operatorname{rfactor}(x) \subseteq \operatorname{rfactor}(x^{\ell})\) for any integer ℓ>0. Also, the fibers of x ^{ ℓ } are coarser than or equal to the fibers of x. Since right factors can have length at most n (the size of P) and \(\mathcal {M}^{\hat{\partial}}\) is finite, for ℓ sufficiently large we have (x ^{ ℓ })^{2}=x ^{ ℓ }, so that x ^{ ℓ } is an idempotent. Now take an idempotent e, maximal in the \(\ge_{\mathcal {R}}\) preorder, such that ex=x; such an e exists by the previous arguments (when I _{ x }=∅, e is the identity element). Then I _{ e }=I _{ x }, which, by Lemma 6.7, is also \(\operatorname {Rfactor}(e)\). This proves the claim. □
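The idempotent power invoked in this proof can be computed explicitly. A minimal sketch (our own illustration; `idempotent_power` is a hypothetical helper, and transformations of {0,…,n−1} are encoded as tuples):

```python
def compose(f, g):
    """Right action: apply f first, then g."""
    return tuple(g[f[v]] for v in range(len(f)))

def idempotent_power(x):
    """Return the idempotent power x^l, which exists because the powers of x
    lie in a finite monoid, exactly as in the proof above."""
    p = x
    while compose(p, p) != p:
        p = compose(p, x)
    return p

# a 3-cycle: its idempotent power is the identity
assert idempotent_power((1, 2, 0)) == (0, 1, 2)
# a non-bijective map stabilizes with a strictly coarser fiber partition
assert idempotent_power((1, 2, 2)) == (2, 2, 2)
```

The loop terminates because the cyclic subsemigroup generated by a single element of a finite monoid contains exactly one idempotent, which the sequence of powers eventually reaches.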
Let M be the transition matrix of the promotion graph of Sect. 3.4. Define \(\mathcal {M}\) to be the monoid generated by {G _{ i }∣1≤i≤n}, where G _{ i } is the matrix M evaluated at x _{ i }=1 and all other x _{ j }=0. We are now ready to state the main result of this section.
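To make the generators concrete, here is a small sketch in the spirit of the authors' Sage experiments (this is our own illustrative code, not theirs; it assumes the standard description of τ _{ i } as swapping the entries in positions i and i+1 of a linear extension when they are incomparable in P, and it uses a toy poset on {1,2,3} whose only relation is 1 < 2):

```python
from itertools import permutations

n = 3
less = {(1, 2)}              # the only order relation of the toy poset

def is_linear_extension(pi):
    # every relation of P must be respected by the word pi
    return all((pi[j], pi[i]) not in less
               for i in range(n) for j in range(i + 1, n))

lin_ext = [pi for pi in permutations(range(1, n + 1)) if is_linear_extension(pi)]
index = {pi: k for k, pi in enumerate(lin_ext)}

def tau(pi, i):
    """tau_i: swap positions i, i+1 (1-based) if the entries are incomparable."""
    a, b = pi[i - 1], pi[i]
    if (a, b) in less or (b, a) in less:
        return pi
    return pi[:i - 1] + (b, a) + pi[i + 1:]

def promotion(pi, i):
    """Generalized promotion: partial_i = tau_i tau_{i+1} ... tau_{n-1}."""
    for j in range(i, n):
        pi = tau(pi, j)
    return pi

def G(i):
    """0/1 matrix of partial_i acting on the linear extensions."""
    mat = [[0] * len(lin_ext) for _ in lin_ext]
    for pi in lin_ext:
        mat[index[pi]][index[promotion(pi, i)]] = 1
    return mat

# each G_i is the matrix of a function: exactly one 1 per row,
# so any convex combination sum_i x_i G_i is row-stochastic
assert all(sum(row) == 1 for i in range(1, n + 1) for row in G(i))
```

On this poset the three linear extensions are 123, 132, 312, and ∂ _{1} cycles through them, illustrating the strong connectivity of the promotion graph.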
Theorem 6.9
\(\mathcal {M}\) is \(\mathcal {R}\)-trivial.
Remark 6.10
Considering the matrix monoid \(\mathcal {M}\) is equivalent to considering the abstract monoid \(\mathcal {M}^{\hat{\partial}}\) generated by \(\{\hat{\partial}_{i} \mid1\le i \le n\}\). Since the operators \(\hat{\partial}_{i}\) act on the right on linear extensions, the monoid \(\mathcal {M}^{\hat{\partial}}\) is \(\mathcal {L}\)-trivial instead of \(\mathcal {R}\)-trivial.
Example 6.11
Example 6.12
Proof of Theorem 6.9
By Theorem 6.3, a monoid is \(\mathcal {R}\)-trivial if and only if it is weakly ordered. We prove the theorem by explicitly constructing the semilattice \(L^{\mathcal {M}}\) and maps \(\operatorname {supp}, \operatorname {des}:\mathcal {M}^{\hat{\partial}} \to L^{\mathcal {M}}\) of Definition 6.2. In fact, since we work with \(\mathcal {M}^{\hat{\partial}}\), we will establish the left version of Definition 6.2 by Remark 6.10.
Recall that for \(x \in \mathcal {M}^{\hat{\partial}}\), we defined the set I _{ x }={i _{1},…,i _{ k }} to be maximal such that \(\hat{\partial}_{i_{j}} x = x\) for 1≤j≤k.
Define \(\operatorname {des}(x)=I_{x}\) and \(\operatorname {supp}(x) = \operatorname {des}(x^{\omega})\). By Lemma 6.7, for idempotents x we have \(\operatorname {supp}(x) = \operatorname {des}(x) = I_{x} = \operatorname {Rfactor}(x)\). Let \(L^{\mathcal {M}}= \{ \operatorname {Rfactor}(x) \mid x\in \mathcal {M}^{\hat{\partial}}, x^{2}=x \}\) which has a natural semilattice structure \((L^{\mathcal {M}},\preceq)\) by inclusion of sets. The join operation is union of sets.
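Since Lemma 6.8 identifies the sets I _{ x } with upper sets of P, the semilattice structure with join given by union can be sanity-checked directly. A small sketch (our own illustration; the toy poset on {1,2,3} with the single relation 1 < 2 is an arbitrary choice):

```python
from itertools import combinations

elements = (1, 2, 3)
less = {(1, 2)}          # toy poset: the single relation 1 < 2

def is_upper_set(S):
    """S is an upper set if a in S and a < b force b in S."""
    return all(b in S for (a, b) in less if a in S)

upper_sets = {frozenset(S) for r in range(len(elements) + 1)
              for S in combinations(elements, r) if is_upper_set(set(S))}

# upper sets form a lattice: closed under union (join) and intersection (meet)
assert all(A | B in upper_sets and A & B in upper_sets
           for A in upper_sets for B in upper_sets)
assert len(upper_sets) == 6   # {}, {2}, {3}, {2,3}, {1,2}, {1,2,3}
```

Note that {1} is not an upper set here since 1 < 2 forces 2 to be present; this is the kind of "forcing" of extra elements that appears in the argument below.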
By Lemma 6.7 and the definition of \(L^{\mathcal {M}}\), the map \(\operatorname {supp}\) is certainly surjective. We want to show that, in addition, \(\operatorname {supp}(xy) = \operatorname {supp}(x) \vee \operatorname {supp}(y)\), where ∨ is the join in \(L^{\mathcal {M}}\). Recall that \(\operatorname {supp}(x) = \operatorname {des}(x^{\omega}) = \operatorname {Rfactor}(x^{\omega})\). If \(x = \hat{\partial}_{j_{1}} \cdots \hat{\partial}_{j_{m}}\) in terms of the generators and J _{ x }:={j _{1},…,j _{ m }}, then, by Lemma 6.5, \(\operatorname {Rfactor}(x^{\omega})\) contains the upper set of J _{ x } in P, plus possibly some further elements that are forced if the upper set of J _{ x } has only one successor in the semilattice of upper sets in P. A similar argument holds for y with J _{ y }. Again by Lemma 6.5, \(\operatorname {supp}(xy) = \operatorname {Rfactor}((xy)^{\omega})\) contains the elements in the upper set of J _{ x }∪J _{ y }, plus possibly more elements forced for the same reason as before. Hence \(\operatorname {supp}(xy) = \operatorname {supp}(x) \vee \operatorname {supp}(y)\). This shows that Definition 6.2(i) holds.
Suppose \(x,y\in \mathcal {M}^{\hat{\partial}}\) with \(yx \le_{\mathcal {L}} x\). Then there exists a \(z\in \mathcal {M}^{\hat{\partial}}\) such that zyx=x. Hence \(\operatorname {supp}(y) \preceq \operatorname {supp}(zy) \preceq I_{x} =\operatorname {des}(x)\) by Lemmas 6.7 and 6.8. Conversely, if \(x,y\in \mathcal {M}^{\hat{\partial}}\) are such that \(\operatorname {supp}(y) \preceq \operatorname {des}(x)\), then by the definition of \(\operatorname {des}(x)\) we have \(\operatorname {supp}(y) \preceq I_{x}\), which is the set of indices of the left stabilizers of x. By the definition of \(\operatorname {supp}(y)\) and the proof of Lemma 6.7, y ^{ ω } can be written as a product of the \(\hat{\partial}_{i}\) with \(i\in \operatorname {supp}(y)\). The same must be true for y. Hence yx=x, which shows that the left versions of (ii) and (iii) of Definition 6.2 hold.
In summary, we have shown that \(\mathcal {M}^{\hat{\partial}}\) is weakly ordered in \(\mathcal {L}\)-preorder and hence \(\mathcal {L}\)-trivial. This implies that \(\mathcal {M}\) is \(\mathcal {R}\)-trivial. □
Remark 6.13
In the proof of Theorem 6.9 we explicitly constructed the semilattice \(L^{\mathcal {M}}=\{ \operatorname {Rfactor}(x) \mid x\in \mathcal {M}^{\hat{\partial}}, x^{2}=x\}\) and the maps \(\operatorname {supp},\operatorname {des}: \mathcal {M}^{\hat{\partial}} \to L^{\mathcal {M}}\) of Definition 6.2. Here \(\operatorname {des}(x)= I_{x}\) is the set of indices I _{ x }={i _{1},…,i _{ m }} such that \(\hat{\partial}_{i_{j}} x =x\) for all 1≤j≤m and \(\operatorname {supp}(x) = \operatorname {des}(x^{\omega}) = I_{x^{\omega}} = \operatorname {Rfactor}(x^{\omega})\).
Example 6.14
6.3 Eigenvalues and multiplicities for \(\mathcal {R}\)-trivial monoids
Recall that by Remark 6.4 we can associate a semilattice \(L^{\mathcal {M}}\) and functions \(\operatorname {supp},\operatorname {des}:\mathcal {M}\to L^{\mathcal {M}}\) to an \(\mathcal {R}\)-trivial monoid \(\mathcal {M}\). For \(X\in L^{\mathcal {M}}\), define c _{ X } to be the number of chambers in \(\mathcal {M}_{\ge X}\), that is, the number of \(c\in\mathcal{C}\) such that \(c\ge_{\mathcal {R}} x\), where \(x\in \mathcal {M}\) is any fixed element with \(\operatorname {supp}(x)=X\).
Theorem 6.15
Brown [10, Theorem 4, p. 900] proved Theorem 6.15 in the case when \(\mathcal {M}\) is a left regular band. Theorem 6.15 is a generalization to the \(\mathcal {R}\)-trivial case. It is, in fact, a special case of a result of Steinberg [34, Theorems 6.3 and 6.4] for monoids in the pseudovariety DA. This was further generalized in [35].
6.4 Proof of Theorem 5.2
By Theorem 6.9, the promotion monoid \(\mathcal {M}\) is \(\mathcal {R}\)-trivial, hence Theorem 6.15 applies.
Let L be the lattice of upper sets of P and \(L^{\mathcal {M}}\) the semilattice of Definition 6.2 associated to \(\mathcal {R}\)-trivial monoids that is used in Theorem 6.15. Recall that for the promotion monoid \(L^{\mathcal {M}} = \{ \operatorname {Rfactor}(x) \mid x\in \mathcal {M}^{\hat{\partial}}, x^{2}=x \}\) by Remark 6.13. Now pick S∈L and let r=r _{1}…r _{ m } be any linear extension of P_{ S } (denoting P restricted to S). By repeated application of Lemma 6.5, it is not hard to see that \(x = \hat{\partial}_{r_{1}} \cdots \hat{\partial}_{r_{m}}\) is an idempotent, since \(r_{1} \dots r_{m} \subseteq \operatorname{rfactor}(x)\) and x acts only on this right factor and fixes it. The right factor \(\operatorname{rfactor}(x)\) is strictly bigger than r _{1}…r _{ m } if some further letters beyond r _{1}…r _{ m } are forced in the right factors of the elements in the image set. This can only happen if S has only one successor S′ in the lattice L. In this case, the element in S′∖S is forced as the letter to the left of r _{1}…r _{ m } and is hence part of \(\operatorname{rfactor}(x)\).
Now suppose \(S\in L^{\mathcal {M}}\) is an element of the smaller semilattice. Recall that c _{ S } of Theorem 6.15 is the number of maximal elements \(x\in \mathcal {M}^{\hat{\partial}}\) with \(x\ge_{\mathcal {R}}s\) for some s with \(\operatorname {supp}(s)=S\). In \(\mathcal {M}\) the maximal elements in \(\mathcal {R}\)-order (or, equivalently, in \(\mathcal {M}^{\hat{\partial}}\) in \(\mathcal {L}\)-order) form the chamber \(\mathcal{C}\) (resp., \(\mathcal{C}^{\hat{\partial}}\)) and are naturally indexed by the linear extensions in \({\mathcal{L}}(P)\). Namely, given \(\pi= \pi_{1} \ldots\pi_{n} \in{ \mathcal{L}}(P)\), the element \(x = \hat{\partial}_{\pi_{1}} \cdots \hat{\partial}_{\pi_{n}}\) is idempotent, maximal in \(\mathcal {L}\)-order, and has image set {π}. Conversely, a maximal element x in \(\mathcal {L}\)-order must have \(\operatorname{rfactor}(x)\in{ \mathcal{L}}(P)\). Given \(s\in \mathcal {M}^{\hat{\partial}}\) with \(\operatorname {supp}(s)=S\), only those maximal elements \(x\in \mathcal {M}^{\hat{\partial}}\) associated to \(\pi\in \operatorname {im}(s)\) are bigger than s. Hence for \(S\in L^{\mathcal {M}}\) we have \(c_{S} = f([S,\hat{1}])\).
The above arguments show that instead of \(L^{\mathcal {M}}\) one can also work with the lattice L of upper sets since any S∈L but \(S\notin L^{\mathcal {M}}\) comes with multiplicity d _{ S }=0 and otherwise the multiplicities agree.
The promotion Markov chain assigns a weight x _{ i } for a transition from π to π′ for \(\pi,\pi'\in{ \mathcal{L}}(P)\) if \(\pi'=\pi \hat{\partial}_{i}\). Recall that elements in the chamber \(\mathcal {C}^{\hat{\partial}}\) are naturally associated with linear extensions. Let \(x,x'\in\mathcal{C}^{\hat{\partial}}\) be associated to π,π′, respectively. That is, π=τx and π′=τx′ for all \(\tau\in{ \mathcal{L}}(P)\). Then \(x'= x \hat{\partial}_{i}\) since \(\tau(x \hat{\partial}_{i}) = (\tau x) \hat{\partial}_{i} = \pi \hat{\partial}_{i} = \pi'\) for all \(\tau\in{ \mathcal{L}}(P)\). Equivalently in the monoid \(\mathcal {M}\) we would have X′=G _{ i } X for \(X,X'\in\mathcal{C}\). Hence comparing with (6.2), setting the probability variables to \(w_{G_{i}}= x_{i}\) and w _{ X }=0 for all other \(X\in \mathcal {M}\), Theorem 6.15 implies Theorem 5.2.
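As a numerical sanity check of this weighting, one can assemble M = ∑ _{ i } x _{ i } G _{ i } and iterate it to its stationary state. The sketch below is our own illustration, not the authors' code: it uses a toy poset on {1,2,3} with the single relation 1 < 2, and it substitutes the unhatted operators ∂ _{ i } for the \(\hat{\partial}_{i}\), an assumption made purely so the example is self-contained.

```python
from itertools import permutations

n = 3
less = {(1, 2)}

def is_linear_extension(pi):
    return all((pi[j], pi[i]) not in less
               for i in range(n) for j in range(i + 1, n))

lin_ext = [pi for pi in permutations(range(1, n + 1)) if is_linear_extension(pi)]
index = {pi: k for k, pi in enumerate(lin_ext)}
N = len(lin_ext)

def tau(pi, i):
    a, b = pi[i - 1], pi[i]
    if (a, b) in less or (b, a) in less:
        return pi
    return pi[:i - 1] + (b, a) + pi[i + 1:]

def promotion(pi, i):
    for j in range(i, n):
        pi = tau(pi, j)
    return pi

x = [0.5, 0.3, 0.2]                 # arbitrary probabilities summing to 1
M = [[0.0] * N for _ in range(N)]
for pi in lin_ext:
    for i in range(1, n + 1):
        M[index[pi]][index[promotion(pi, i)]] += x[i - 1]

# M is row-stochastic, and power iteration converges to the unique
# stationary distribution (the chain is irreducible and has self-loops)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in M)
v = [1.0 / N] * N
for _ in range(2000):
    v = [sum(v[a] * M[a][b] for a in range(N)) for b in range(N)]
assert all(abs(sum(v[a] * M[a][b] for a in range(N)) - v[b]) < 1e-9
           for b in range(N))
```

The self-loops come from ∂ _{ n }, which acts as the identity, so the chain is aperiodic and the iteration converges.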
7 Outlook
Two of our Markov chains, the uniform promotion graph and the uniform transposition graph, are irreducible and have the uniform distribution as their stationary distribution. Moreover, the former is irreversible and has the advantage of tunable parameters x _{1},…,x _{ n } whose only constraint is that they sum to 1. Because of this irreversibility, it is plausible that the mixing time for this Markov chain is smaller than for the chains considered by Bubley and Dyer [5]. Hence the uniform promotion graph could have applications to uniformly sampling linear extensions of a large poset. This certainly deserves further study.
It would also be interesting to extend the results of Brown and Diaconis [4] (see also [1]) on rates of convergence to the Markov chains in this paper. For the Markov chains corresponding to \(\mathcal {R}\)-trivial monoids of Sect. 5, one can find exponential bounds with polynomial prefactors for the rates of convergence after ℓ steps, of the form c ℓ ^{ k } λ ^{ ℓ−k }, where c is the number of chambers, λ=max_{ i }(1−x _{ i }), and k is a parameter associated to the poset. More details on rates of convergence and mixing times can be found in [2].
Acknowledgements
We would like to thank Richard Stanley for valuable input during his visit to UC Davis in January 2012, and Jesús De Loera, Persi Diaconis, Franco Saliola, Benjamin Steinberg, and Peter Winkler for helpful discussions. Special thanks go to Nicolas M. Thiéry for his help getting our code related to this project into Sage [28, 29], for his discussions on the representation theory of monoids, and for pointing out that Theorem 5.2 holds not only for unions of chains but for rooted forests. John Stembridge's posets package proved very useful for computer experimentation.
A.A. would like to acknowledge support from MSRI, where part of this work was done. S.K. was supported by NSF VIGRE grant DMS-0636297. A.S. was supported by NSF grant DMS-1001256.
References
1. Athanasiadis, C.A., Diaconis, P.: Functions of random walks on hyperplane arrangements. Adv. Appl. Math. 45(3), 410–437 (2010)
2. Ayyer, A., Klee, S., Schilling, A.: Markov chains for promotion operators. Fields Commun. Ser. (2013, to appear). arXiv:1307.7499
3. Berg, C., Bergeron, N., Bhargava, S., Saliola, F.: Primitive orthogonal idempotents for R-trivial monoids. J. Algebra 348, 446–461 (2011)
4. Brown, K.S., Diaconis, P.: Random walks and hyperplane arrangements. Ann. Probab. 26(4), 1813–1854 (1998)
5. Bubley, R., Dyer, M.: Faster random generation of linear extensions. Discrete Math. 201(1–3), 81–88 (1999)
6. Bidigare, P., Hanlon, P., Rockmore, D.: A combinatorial description of the spectrum for the Tsetlin library and its generalization to hyperplane arrangements. Duke Math. J. 99(1), 135–174 (1999)
7. Bidigare, T.P.: Hyperplane arrangement face algebras and their associated Markov chains. Thesis (Ph.D.), University of Michigan. ProQuest LLC, Ann Arbor, MI (1997)
8. Björner, A.: Random walks, arrangements, cell complexes, greedoids, and self-organizing libraries. In: Building Bridges. Bolyai Soc. Math. Stud., vol. 19, pp. 165–203. Springer, Berlin (2008)
9. Björner, A.: Note: Random-to-front shuffles on trees. Electron. Commun. Probab. 14, 36–41 (2009)
10. Brown, K.S.: Semigroups, rings, and Markov chains. J. Theor. Probab. 13(3), 871–938 (2000)
11. Brown, K.S.: Semigroup and ring theoretical methods in probability. In: Representations of Finite Dimensional Algebras and Related Topics in Lie Theory and Geometry. Fields Inst. Commun., vol. 40, pp. 3–26. Amer. Math. Soc., Providence (2004)
12. Brightwell, G., Winkler, P.: Counting linear extensions. Order 8(3), 225–242 (1991)
13. Clifford, A.H., Preston, G.B.: The Algebraic Theory of Semigroups, Vol. I. Mathematical Surveys, vol. 7. Amer. Math. Soc., Providence (1961)
14. Dies, J.-É.: Chaînes de Markov sur les permutations. Lecture Notes in Mathematics, vol. 1010. Springer, Berlin (1983)
15. Donnelly, P.: The heaps process, libraries, and size-biased permutations. J. Appl. Probab. 28(2), 321–335 (1991)
16. Edelman, P., Hibi, T., Stanley, R.P.: A recurrence for linear extensions. Order 6(1), 15–18 (1989)
17. Fill, J.A.: An exact formula for the move-to-front rule for self-organizing lists. J. Theor. Probab. 9(1), 113–160 (1996)
18. Green, J.A.: On the structure of semigroups. Ann. Math. 54(1), 163–172 (1951)
19. Haiman, M.D.: Dual equivalence with applications, including a conjecture of Proctor. Discrete Math. 99(1–3), 79–113 (1992)
20. Hendricks, W.J.: The stationary distribution of an interesting Markov chain. J. Appl. Probab. 9, 231–233 (1972)
21. Hendricks, W.J.: An extension of a theorem concerning an interesting Markov chain. J. Appl. Probab. 10, 886–890 (1973)
22. Karzanov, A., Khachiyan, L.: On the conductance of order Markov chains. Order 8(1), 7–15 (1991)
23. Kapoor, S., Reingold, E.M.: Stochastic rearrangement rules for self-organizing data structures. Algorithmica 6(2), 278–291 (1991)
24. Letac, G.: Chaînes de Markov sur les permutations. Séminaire de Mathématiques Supérieures, vol. 63. Presses de l'Université de Montréal, Montreal (1978)
25. Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence (2009). With a chapter by James G. Propp and David B. Wilson
26. Malvenuto, C., Reutenauer, C.: Evacuation of labelled graphs. Discrete Math. 132(1–3), 137–143 (1994)
27. Phatarfod, R.M.: On the matrix occurring in a linear search problem. J. Appl. Probab. 28(2), 336–346 (1991)
28. Stein, W.A., et al.: Sage Mathematics Software (Version 5.0). The Sage Development Team (2012). http://www.sagemath.org
29. The Sage-Combinat Community: Sage-Combinat: enhancing Sage as a toolbox for computer exploration in algebraic combinatorics (2008). http://combinat.sagemath.org
30. Schützenberger, M.P.: Promotion des morphismes d'ensembles ordonnés. Discrete Math. 2, 73–94 (1972)
31. Schocker, M.: Radical of weakly ordered semigroup algebras. J. Algebr. Comb. 28(1), 231–234 (2008). With a foreword by Nantel Bergeron
32. Stanley, R.P.: Enumerative Combinatorics, Vol. 1. Cambridge Studies in Advanced Mathematics, vol. 49. Cambridge University Press, Cambridge (1997). With a foreword by Gian-Carlo Rota; corrected reprint of the 1986 original
33. Stanley, R.P.: Promotion and evacuation. Electron. J. Comb. 16(2), Research Paper 9, 24 pp. (2009)
34. Steinberg, B.: Möbius functions and semigroup representation theory. J. Comb. Theory, Ser. A 113(5), 866–881 (2006)
35. Steinberg, B.: Möbius functions and semigroup representation theory. II. Character formulas and multiplicities. Adv. Math. 217(4), 1521–1557 (2008)
36. Tsetlin, M.L.: Finite automata and models of simple forms of behaviour. Russ. Math. Surv. 18(4), 1 (1963)