Gröbner basis and the automaton property of Hecke–Kiselman algebras

It is shown that the Hecke–Kiselman algebra associated to a finite directed graph is an automaton algebra in the sense of Ufnarovskii. Consequently, its Gelfand–Kirillov dimension is an integer if it is finite. Moreover, it is proved that the Hecke–Kiselman algebra associated to an oriented cycle admits a finite Gröbner basis.


Introduction
In the paper [7] of Ganyushkin and Mazorchuk, a finitely generated monoid HK_Θ was defined for an arbitrary finite simple digraph Θ with n vertices {1, . . . , n} by specifying generators and relations.
(i) HK_Θ is generated by idempotents x_1, . . . , x_n, so that x_i^2 = x_i, where 1 ≤ i ≤ n;
(ii) if the vertices i, j are not connected in Θ, then x_i x_j = x_j x_i;
(iii) if i, j are connected by an arrow i → j in Θ, then x_i x_j x_i = x_j x_i x_j = x_i x_j;
(iv) if i, j are connected by an (unoriented) edge in Θ, then x_i x_j x_i = x_j x_i x_j.
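As an illustration (ours, not part of the paper), the relations above can be read as a string-rewriting system. The following Python sketch naively rewrites words in the Hecke–Kiselman monoid of the one-arrow graph 1 → 2; the rule set and all names are our own choices.

```python
# A naive rewriting sketch for the Hecke-Kiselman monoid of the
# one-arrow graph 1 -> 2: relations (i) and (iii) above become the
# length-non-increasing rules below.  Words are tuples of labels.

RULES = [
    ((1, 1), (1,)),        # x1^2 = x1
    ((2, 2), (2,)),        # x2^2 = x2
    ((1, 2, 1), (1, 2)),   # x1 x2 x1 = x1 x2  (arrow 1 -> 2)
    ((2, 1, 2), (1, 2)),   # x2 x1 x2 = x1 x2
]

def reduce_word(word):
    """Apply the rules anywhere in the word until none matches."""
    word = tuple(word)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            for i in range(len(word) - len(lhs) + 1):
                if word[i:i + len(lhs)] == lhs:
                    word = word[:i] + rhs + word[i + len(lhs):]
                    changed = True
                    break
            if changed:
                break
    return word

print(reduce_word((2, 1, 2, 1)))  # -> (1, 2)
```

For this toy graph every word collapses quickly; in general the order in which rules are applied matters, which is exactly what Gröbner basis techniques control.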
If the graph Θ is unoriented (has no arrows), the monoid HK_Θ is isomorphic to the so-called 0-Hecke monoid H_0(W), where W is the Coxeter group of the graph Θ, see [6]. The latter monoid plays an important role in representation theory. In the case when Θ is oriented (all edges are arrows) and acyclic, the monoid HK_Θ is finite and it is a homomorphic image of the so-called Kiselman monoid K_n, see [7,9]. It is worth mentioning that a characterization of the general finite digraphs Θ for which the monoid HK_Θ is finite remains an open problem, see [1].
The aim of this paper is to continue the study of the semigroup algebra A = k[HK_Θ] over a field k, in the case when Θ is an oriented graph, that was started in [10], where it was shown that either the growth of A is polynomial or the monoid HK_Θ contains a noncommutative free submonoid. The main result of the present paper states that the algebra A is automaton in the sense of Ufnarovskii [16], which means that the set of normal words of A forms a regular language. In other words, the set of normal words of A is determined by a finite automaton.

Theorem 1.1 Assume that Θ is a finite simple oriented graph. Then A = k[HK_Θ] is an automaton algebra with respect to any deg-lex order on the underlying free monoid of rank n. Consequently, the Gelfand–Kirillov dimension GKdim(A) of A is an integer if it is finite.
In the case when the digraph Θ is unoriented, the corresponding monoid algebra is known to be automaton. Indeed, as mentioned above, in this case HK_Θ = H_0(W), where W is the Coxeter group of the graph Θ. In fact, one can prove that the reduced words for W and H_0(W) are the same, and two words represent the same element of the Coxeter group if and only if they represent the same element of the Coxeter monoid, see [14]. Moreover, the set of normal forms of elements of a Coxeter group is known to be regular, see [4].
We note that it was proved in [10] that the following conditions are equivalent: 1) k[HK_Θ] is a PI-algebra, 2) HK_Θ does not contain a noncommutative free submonoid, 3) GKdim(k[HK_Θ]) is finite, 4) Θ does not contain two different oriented cycles connected by an oriented path. Theorem 1.1 answers a question raised in [10].
The key method used to obtain this result is the description of a Gröbner basis of Hecke–Kiselman algebras. It is known that if the leading terms of the elements of this basis form a regular subset of the corresponding free monoid, then the algebra is automaton, see [16], Theorem 2 on p. 97. Consequently, our methods involve the monoid HK_Θ only, rather than ring-theoretic aspects of the algebra k[HK_Θ]. The obtained Gröbner basis is crucial for the approach to the structure of such algebras, which will be pursued in a forthcoming paper.
The class of automaton algebras was introduced by Ufnarovskii in [15]. The main motivation was to study a class of finitely generated algebras that generalizes the class of algebras admitting a finite Gröbner basis with respect to some choice of generators and an ordering on monomials. The difficulty here lies in the fact that there are infinitely many generating sets as well as infinitely many admissible orderings on monomials to deal with. There are examples of algebras with a finite Gröbner basis with respect to one ordering and an infinite basis with respect to another. Until recently it was not known whether, for any of the known examples of automaton algebras with infinite Gröbner bases with respect to certain orderings, one could find a better ordering that would yield a finite Gröbner basis. The first counterexamples were found by Iyudu and Shkarin in [8].
There are many results indicating that the class of automaton algebras not only has better computational properties but also several structural properties that are better than those in the class of arbitrary finitely generated algebras. For example, in this context one can refer to results on the Gelfand–Kirillov dimension, results on the radical in the case of monomial automaton algebras [15], results on prime algebras of this type [2], and also structural results concerned with the special case of finitely presented monomial algebras [11]. In particular, finitely generated algebras of the following types are automaton: commutative algebras, algebras defined by at most two quadratic relations, and algebras for which all the defining relations are of the form x_i x_j = 0, for some pairs of generators, see [16]. Moreover, algebras that are finite modules over commutative finitely generated subalgebras are also of this type [5]. Several aspects of automaton algebras have also been studied recently in [8,12,13].
In Sect. 2 we introduce the necessary definitions and auxiliary results. Next, in Sect. 3, we determine a Gröbner basis of k[HK ], from which the main result follows. Finally, in Sect. 4, we prove that in the case when the graph is a cycle, k[HK ] has a finite Gröbner basis. An example is given to show that this is not true for arbitrary Hecke-Kiselman algebras of oriented graphs, even in the case when the algebra satisfies a polynomial identity.

Definitions and the necessary background
Let F denote the free monoid on the set X of n ≥ 3 free generators x_1, . . . , x_n. Let k be a field and let k[F] = k⟨x_1, . . . , x_n⟩ denote the corresponding free algebra over k. Assume that a well order < is fixed on X and consider the induced degree-lexicographical order on F (also denoted by <). Let A be a finitely generated algebra over k with a set of generators r_1, . . . , r_n and let π : k[F] → A be the natural homomorphism of k-algebras with π(x_i) = r_i. We will assume that ker(π) is spanned by elements of the form w − v, where w, v ∈ F (in other words, A is a semigroup algebra). Let I be the ideal of F consisting of all leading monomials of elements of ker(π). The set of normal words corresponding to the chosen presentation of A and to the chosen order on F is defined by N(A) = F \ I. One says that A is an automaton algebra if N(A) is a regular language. This means that the set is obtained from a finite subset of F by applying a finite sequence of operations of union, multiplication and the operation * defined by T* = ⋃_{i≥1} T^i, for T ⊆ F. If T = {w} for some w ∈ F, then we write T* = w*.
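A toy illustration (ours, not from the paper) of the connection between normal words and regular languages: for the commutative algebra k[x, y] presented by the single relation yx − xy with y > x, the only leading monomial is yx, so the normal words are exactly the words with no factor "yx", i.e. the regular language x*y*.

```python
# Two equivalent descriptions of the normal words of k[x, y]:
# "no factor yx" versus membership in the regular language x*y*.
import re

normal = re.compile(r"^x*y*$")          # regular expression for x*y*

def is_normal(word):
    """Normal iff no factor equals the leading monomial 'yx'."""
    return "yx" not in word

for w in ["", "xxy", "xyx", "yyx", "xxyy"]:
    assert is_normal(w) == bool(normal.match(w))
print("the two descriptions of the normal words agree")
```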
For every x ∈ X and w ∈ F, by |w|_x we mean the number of occurrences of x in w. By |w| we denote the length of the word w. The support of the word w, denoted by supp(w), stands for the set of all x ∈ X such that |w|_x > 0. We say that w ∈ F is a subword of v ∈ F if v = v_1 w_1 v_2 w_2 · · · v_r w_r v_{r+1} for some v_s, w_s ∈ F with w = w_1 · · · w_r. If v_2, . . . , v_r are trivial words, then we say that w is a factor of v.
Describing the normal words of a finitely generated algebra A is related to finding a Gröbner basis of the ideal J = ker(π). Recall that a subset G of J is called a Gröbner basis of J (or of A) if 0 ∉ G, J is generated by G as an ideal, and for every nonzero f ∈ J there exists g ∈ G such that the leading monomial of g is a factor of the leading monomial of f. If G is a Gröbner basis of A, then a word w ∈ F is normal if and only if w has no factor that is the leading monomial of some g ∈ G.
The so-called diamond lemma is often used in this context. We will follow the approach and terminology of [3]. By a reduction in k[F] determined by a pair (w, w′) ∈ F × F, where w′ < w (in the deg-lex order on F), we mean any operation of replacing a factor w in a word f ∈ F by w′. For a set T ⊆ F × F of such pairs (these pairs will be called reductions as well) we say that a word f ∈ F is T-reduced if no factor of f is the leading term w of a reduction (w, w′) from the set T. The deg-lex order on F satisfies the descending chain condition, which means that there is no infinite decreasing chain of elements in F. Hence a T-reduced form of a word w ∈ F can always be obtained in a finite number of steps. The linear space spanned by the T-reduced monomials in k[F] is denoted by R(T).
The diamond lemma gives necessary and sufficient conditions for the set N(A) of normal words to coincide with the set of T-reduced words in F. The key tool is the notion of ambiguity. Let σ = (w_σ, v_σ), τ = (w_τ, v_τ) be reductions in T. By an overlap ambiguity we mean a quintuple (σ, τ, l, w, r), where 1 ≠ l, w, r ∈ F are such that w_σ = wr and w_τ = lw. A quintuple (σ, τ, l, w, r) is called an inclusive ambiguity if w_σ = w and w_τ = lwr. For brevity we will denote these ambiguities by l(wr) = (lw)r and l(w)r = (lwr), respectively. We will also say that they are of type σ-τ. We say that the overlap (inclusive, respectively) ambiguity is resolvable if v_τ r and lv_σ (v_τ and lv_σ r, respectively) have equal T-reduced forms. We use the following simplified version of Bergman's diamond lemma.
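In computational terms, resolvability of an overlap ambiguity can be checked mechanically. Here is a sketch (ours, not from the paper) on the toy one-rule system aa → a, the idempotent relation for a single generator.

```python
# Checking resolvability of overlap ambiguities for a toy rule set.

RULES = [("aa", "a")]

def normal_form(word):
    """Repeatedly replace the leftmost occurrence of a rule's LHS."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
    return word

def overlaps(lhs1, lhs2):
    """Yield (l, w, r) with lhs2 = l + w and lhs1 = w + r, w nonempty."""
    for k in range(1, min(len(lhs1), len(lhs2))):
        if lhs2[-k:] == lhs1[:k]:
            yield lhs2[:-k], lhs1[:k], lhs1[k:]

for (w1, v1) in RULES:              # sigma = (w1, v1)
    for (w2, v2) in RULES:          # tau  = (w2, v2)
        for l, w, r in overlaps(w1, w2):
            # the ambiguous word l + w + r can be reduced in two ways;
            # the ambiguity resolves iff both normal forms coincide
            assert normal_form(v2 + r) == normal_form(l + v1)
print("all overlap ambiguities of the toy system resolve")
```

For aa → a the only overlap is the word aaa, and both reductions lead to aa and then to a, so the ambiguity resolves.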

Gröbner basis in the oriented graphs case
In this section we will prove that for any oriented graph Θ = (V(Θ), E(Θ)), the language of normal words of the Hecke–Kiselman algebra k[HK_Θ] is regular, and thus the algebra is always automaton.
For t ∈ V(Θ) and w ∈ F = ⟨V(Θ)⟩ we write w ⊣ t if |w|_t = 0 and there is no x ∈ supp(w) such that x → t in Θ. Similarly, we write t ⊢ w if, again, |w|_t = 0 and there is no arrow t → y with y ∈ supp(w). In the case when both t ⊢ w and w ⊣ t hold, we write t ⊣⊢ w.

Theorem 3.1 Fix any well order on V(Θ) and the corresponding deg-lex order on the free monoid F = ⟨V(Θ)⟩. Consider the following set T of reductions on the algebra k[F]:
(i) (twt, tw), for any t ∈ V(Θ) and w ∈ F such that w ⊣ t;
(ii) (twt, wt), for any t ∈ V(Θ) and w ∈ F such that t ⊢ w;
(iii) (t_1 w t_2, t_2 t_1 w), for any t_1, t_2 ∈ V(Θ) and w ∈ F such that t_1 > t_2 and t_2 ⊣⊢ t_1 w.

Then the set of elements w − v, where (w, v) ∈ T, forms a Gröbner basis of the algebra k[HK_Θ].
Proof Clearly, w > v for every pair (w, v) ∈ T. Moreover, it is easy to see that w and v represent the same element of HK_Θ. It remains to use the diamond lemma. We will prove that all overlap and inclusive ambiguities of the reduction system T are resolvable. We begin with a simple observation.

Observation 3.2
Assume that t ∈ V(Θ) and w ∈ F are such that t ⊣⊢ w. Then the words tw and wt have equal T-reduced forms.
Proof We argue by induction on the length |w| of w. If w = 1, the assertion is clear.
We proceed with the induction step. Assume that w = y_1 · · · y_k, where y_i ∈ V(Θ) for i = 1, . . . , k. If y_1 > t, then we apply (iii) to wt and we are done. If there exists i > 1 such that y_i > t, choose the smallest such i; applying (iii) we get wt = y_1 · · · y_k t → y_1 · · · y_{i−1} t y_i · · · y_k. Now we apply the induction hypothesis to the words t y_1 · · · y_{i−1} and y_1 · · · y_{i−1} t and T-reduce them to some w′ ∈ F. Thus we get that tw and wt can both be T-reduced to w′ y_i · · · y_k. Finally, if y_i < t for all i, then by using reduction (iii) k times we get tw = t y_1 · · · y_k → y_1 t y_2 · · · y_k → · · · → y_1 · · · y_k t = wt.

We will now list the overlap and inclusive ambiguities of all possible types (x)-(y) of pairs of reductions in T, where (x), (y) ∈ {(i), (ii), (iii)}. There are two overlap and one inclusive ambiguity of type (i)-(i). There are two overlap and three inclusive ambiguities of type (i)-(iii). There are two overlap and one inclusive ambiguity of type (ii)-(i). There are two overlap and one inclusive ambiguity of type (ii)-(ii). There are two overlap and three inclusive ambiguities of type (ii)-(iii). Since t_2 ⊣⊢ t_1 w_1, then t_1 w_1 w_2 t_2 and we have: Since t_2 ⊣⊢ t_1 w_1, we have t_2 ⊣⊢ t_1 w_1 w_2 and thus: Since t_2 ⊣⊢ w_1, we can use Observation 3.2 to reduce w_1 t_2 and t_2 w_1 to the same form.
Since t_2 ⊣⊢ t_1 w_2, we have w_1 t_2 t_1 and: Since t_2 ⊣⊢ w_2, then by Observation 3.2 we can reduce t_2 w_2 and w_2 t_2 to the same form.
Since t_2 ⊣⊢ w_2 t_1 w_3, then w_1 t_3 t_2 w_2 t_1 and: Since t_2 ⊣⊢ w_2, we have t_1 w_1 t_2, and thus: Since t_3 ⊣⊢ t_2 w_2 t_1 w_3, we have t_1 w_1 t_3 t_2 w_2 and: Here again we see that t_2 ⊣⊢ w_1, and thus t_2 w_1 and w_1 t_2 can be reduced to the same word, by Observation 3.2.
Since t_3 ⊣⊢ w_2, by Observation 3.2 the words t_3 w_2 and w_2 t_3 can be reduced to the same form.
Since t_2 ⊣⊢ t_3, we either have:

We have checked that all ambiguities of the reduction system T are resolvable. Thus the diamond lemma can be applied and the result follows.
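To make the reduction system concrete, here is a small Python sketch (ours, not from the paper) implementing reductions (i)-(iii) for the oriented path 1 → 2 → 3, with the order 1 < 2 < 3 on the generators. The helpers `left(t, w)` and `right(t, w)` model the two side conditions of the theorem: t absent from w with no arrow from supp(w) into t, and t absent from w with no arrow from t into supp(w), respectively.

```python
# The reduction system T for the oriented path 1 -> 2 -> 3.
# Words are tuples of vertex labels.

ARROWS = {(1, 2), (2, 3)}

def left(t, w):   # t absent from w and no arrow x -> t with x in w
    return t not in w and all((x, t) not in ARROWS for x in w)

def right(t, w):  # t absent from w and no arrow t -> y with y in w
    return t not in w and all((t, y) not in ARROWS for y in w)

def step(word):
    """Apply one reduction of type (i), (ii) or (iii), if possible."""
    for i in range(len(word)):
        for j in range(i + 1, len(word)):
            t1, w, t2 = word[i], word[i + 1:j], word[j]
            if t1 == t2 and left(t1, w):                       # (i)
                return word[:i] + (t1,) + w + word[j + 1:]
            if t1 == t2 and right(t1, w):                      # (ii)
                return word[:i] + w + (t1,) + word[j + 1:]
            if t1 > t2 and left(t2, (t1,) + w) and right(t2, (t1,) + w):
                return word[:i] + (t2, t1) + w + word[j + 1:]  # (iii)
    return None

def normal_form(word):
    while (nxt := step(word)) is not None:
        word = nxt
    return word

print(normal_form((3, 1, 3)))  # x3 x1 x3 = x1 x3, since 1 and 3 commute
```

Each step either shortens the word or decreases it in deg-lex order, so the procedure terminates; the theorem guarantees that the resulting normal form does not depend on the order in which reductions are applied.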
We are ready to prove our first main result.
Proof of Theorem 1.1 We have N = N_(i) ∪ N_(ii) ∪ N_(iii), where N stands for the set of leading terms of pairs from the set T considered in Theorem 3.1, and N_(i), N_(ii), N_(iii) are the sets of leading terms from the three families (i), (ii), (iii) of reductions in T, respectively. We only need to show that the sets N_(i), N_(ii), N_(iii) are regular. The set N_(i) is a finite union of sets of the form t Y_t* t, where Y_t is the subset of V(Θ) consisting of all generators z such that z ⊣ t. All these summands are clearly regular, and thus N_(i) is regular. A similar argument works for N_(ii). Finally, N_(iii) is a finite union of sets of the form t_1 X_{t_2}* t_2, where t_1 > t_2, t_1 ∈ X_{t_2}, and X_x ⊆ V(Θ) is the subset consisting of all generators y ∈ V(Θ) such that y ⊣⊢ x. Again, these summands are clearly regular. Therefore, the set N_(iii) is regular as a union of regular sets.
As a result, the entire set N is regular, and it is well known that this implies that the algebra k[HK_Θ] is automaton, see [16], p. 97. The fact that GKdim(k[HK_Θ]) is an integer if it is finite follows from [16], Theorem 3 on p. 97 and Theorem 1 on p. 90.
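The integrality of the Gelfand–Kirillov dimension can be made tangible on the toy example from Section 2 (our illustration, not from the paper): for k[x, y] the normal words are the words avoiding the factor "yx", there are n + 1 of them of length n, and the cumulative count grows quadratically, matching GKdim k[x, y] = 2.

```python
# Brute-force count of normal words by length for the forbidden
# factor set {"yx"} (normal words of k[x, y] with y > x).
from itertools import product

def count_normal(n, alphabet="xy", forbidden=("yx",)):
    """Count length-n words over the alphabet avoiding all forbidden factors."""
    return sum(
        1
        for w in product(alphabet, repeat=n)
        if not any(f in "".join(w) for f in forbidden)
    )

print([count_normal(n) for n in range(5)])  # [1, 2, 3, 4, 5]
```

For an automaton algebra this brute-force count can be replaced by a linear recurrence read off from the finite automaton, which is one source of the good computational properties mentioned in the introduction.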

Gröbner basis of a cycle monoid
Let C_n denote the Hecke–Kiselman monoid associated to the oriented cycle consisting of n ≥ 3 vertices. The aim of this section is to prove that in the case of k[C_n] one can find a finite subset of the Gröbner basis obtained in the previous section that itself forms a Gröbner basis of k[C_n]. Our interest in this special case comes from the fact that the structure of the algebra k[C_n] is crucial for the study of an arbitrary algebra k[HK_Θ].
Recall that the monoid C_n is defined by generators x_1, . . . , x_n subject to the relations x_i^2 = x_i and x_i x_{i+1} x_i = x_{i+1} x_i x_{i+1} = x_i x_{i+1} for all i = 1, . . . , n (with the convention that indices are taken modulo n), and x_i x_j = x_j x_i for all i, j = 1, . . . , n satisfying 1 < |i − j| < n − 1 (note that for n = 3 there are no relations of this type).
The natural order x_1 < x_2 < · · · < x_n is considered on the set of generators, together with the corresponding deg-lex order on the free monoid F. We also adopt the following notation in this section. When we write a word of the form x_i · · · x_j, we mean that the consecutive generators from x_i up to x_j (if i < j), or down to x_j (if i > j), appear in this word. For instance, x_2 · · · x_5 denotes x_2 x_3 x_4 x_5 and x_6 · · · x_3 stands for the word x_6 x_5 x_4 x_3. Consider two sets S and S′ of reductions on k[F]. The first one is a subset of the system T considered in the previous section that consists of all pairs of the following forms, for i ∈ {1, . . . , n} and 1 ≠ u ∈ F such that u ⊣ x_i. Here i − 1 = n for i = 1 (we say, for the sake of simplicity, that the word x_i u x_i is of type (4x_i)), and i + 1 = 1 for i = n (similarly, we say that the word x_i v x_i is of type (5x_i)).
We will say that the word x_i u x_i that appears in (ii) is of type (4x_i), the word x_i v x_i that appears in (iii) is of type (5x_i), and the word x_i z x_i that appears in (iv) is of type (5x_i). We will also say that a word x ∈ F is of type (1), (2), or (3), respectively, if x is the leading term of one of the reductions of the corresponding type.
One can recognize reductions of types (1) and (4) as subsets of the family (i) of reductions from Theorem 3.1. Similarly, reductions of types (2) and (3) are special cases of reductions of type (iii), and reductions of type (5) correspond to the subset (ii) of T. It is convenient to explicitly distinguish the five families of reductions of the system S, as they will be repeatedly used in the process of reducing the size of the Gröbner basis obtained in the previous section.
We will prove two facts concerning the reduction sets S and S′.

The first lemma is a simple observation that is an intermediate step towards the main result of this section.
Proof of Lemma 4.1 Assume, to the contrary, that some word w ∈ F is S-reduced but not T-reduced. Clearly, it is enough to consider the case where w has a factor of the form (iii) from the definition of T, namely v = x_k w x_i, where k > i and x_i ⊣⊢ x_k w. We will use an inductive argument to show that v is not S-reduced, which leads to a contradiction.
Of course, if |w| = 0 then x_k x_i → x_i x_k by a reduction of type (2), so x_k x_i is S-reducible. We proceed with the inductive step. Let |w| > 0 and let w = x_{i_1} · · · x_{i_r}, for some x_{i_s} ∈ {x_1, . . . , x_n} such that x_{i_s} ⊣⊢ x_i, for 1 ≤ s ≤ r. If for some s we have i_s > i, then the factor x_{i_s} · · · x_{i_r} x_i is of the form (iii) and thus it is not S-reduced, by the induction hypothesis. So we only need to consider the case where i_s ≤ i < k, for all s. In particular, i_1 ≤ i < k. We consider two cases.

Case 1. k = n. Here we must have i_1 = 1. Otherwise, an S-reducible factor x_n x_{i_1} appears in v and the induction step follows. If an S-reducible factor of the form (3) appears in v, then we are done, so we may only consider the case where i_2 = 2, i_3 = 3, . . . , i_r = r. However, it follows that r < i, since i_s < i for all s. Since x_i ⊣⊢ x_k w = x_n x_1 · · · x_r, we must have i > r + 1, which means that v is of the form (3) and is thus S-reducible. The induction step follows again.

Case 2. k < n. In this case we either have i_1 < k − 1, so that an S-reducible factor x_k x_{i_1} appears in v, which yields the induction step, or i_1 = k − 1. In the latter case we have v = x_k x_{k−1} x_{i_2} · · · x_{i_r} x_i. However, now we can repeat the argument for i_1 to obtain that the only relevant case is i_2 = k − 2. Indeed, we have i_2 ≠ k − 1, i_2 ≠ k and i_2 ≤ i < k. If we were to assume that i_2 < k − 2, then the S-reducible factor x_{k−1} x_{i_2} would appear in v, which would immediately yield the inductive step. After repeating this process we are left with the case v = x_k x_{k−1} x_{k−2} · · · x_m x_i. However, since k > i and x_i ⊣⊢ x_k w, we have i < m − 1, so we get an S-reducible factor x_m x_i. Thus, the induction step follows again.
We have shown that a word v of the form (iii) is always S-reducible, which yields a contradiction. The assertion follows.
Before proving Lemma 4.2, we will prove the following fact concerning a certain special family of words.
Proof We need some additional notation. We will say that a factor v of a word w ∈ F is a block if v is of the form x_i · · · x_j, for some 1 ≤ i, j < n, and there is no factor v′ of w such that v is a factor of v′, the latter is also of the form x_{i′} · · · x_{j′}, for some 1 ≤ i′, j′ < n, and v ≠ v′. The length of a block v is defined as the number |j − i| + 1.
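The block decomposition can be made concrete with a hypothetical helper (ours, not from the paper): given a word over {1, . . . , n − 1} as a tuple of indices, split it into maximal runs that move by exactly +1 or exactly −1 at every step, i.e. into increasing or decreasing blocks.

```python
# Decompose a word (tuple of indices) into maximal monotone runs
# with step +1 or -1 throughout: the "blocks" of the proof.

def blocks(word):
    out, start = [], 0
    for i in range(1, len(word) + 1):
        if (i == len(word)
                or abs(word[i] - word[i - 1]) != 1
                or (i - start > 1
                    and word[i] - word[i - 1] != word[i - 1] - word[i - 2])):
            out.append(word[start:i])
            start = i
    return out

print(blocks((1, 2, 3, 5, 4)))  # -> [(1, 2, 3), (5, 4)]
```

Words treated in the observation have no factors x_j x_{j+1} x_j or x_j x_{j−1} x_j, so for them this decomposition is exactly the unique product of blocks used in the proof.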
A block is called increasing if i ≤ j and decreasing if i ≥ j. Take p ≠ 1 such that |p|_{x_n} = 0. Since p cannot have factors of the form x_j x_{j+1} x_j or x_j x_{j−1} x_j (conditions (4x_j) and (5x_j), respectively), it follows that p is (in a unique way) a product of blocks and, by the definition of a block, the product of two consecutive blocks is not a block. If p is a product of exactly one block, then there is nothing to prove: p is of the form (4.1). Assume that p is a product of at least two blocks and take two consecutive blocks (x_{i_s} · · · x_{j_s})(x_{i_{s+1}} · · · x_{j_{s+1}}). Observe first that we cannot have i_{s+1} ≤ j_s + 1. Indeed, if i_{s+1} < j_s − 1, then a factor of type (2) would appear in p, a contradiction. If we had i_{s+1} = j_s ± 1, then either the product of the two blocks (x_{i_s} · · · x_{j_s})(x_{i_{s+1}} · · · x_{j_{s+1}}) is itself a block, or a factor of one of the forms x_{j_s} x_{j_s−1} x_{j_s}, x_{j_s} x_{j_s+1} x_{j_s} appears in p, again a contradiction. Of course, we cannot have i_{s+1} = j_s, as this yields a factor of type (1) in p.
We will prove that i_s < i_{s+1}. Note that we cannot have i_s = i_{s+1}, since this immediately gives a factor x_{i_s} · · · x_{j_s} x_{i_s} of type (4x_{i_s}) or (5x_{i_s}) in p, a contradiction. Assume, to the contrary, that i_{s+1} < i_s. We already know that we must have i_{s+1} > j_s + 1, so j_s + 1 < i_{s+1} < i_s; thus the first block is decreasing of length > 1 and the factor x_{i_{s+1}} · · · x_{j_s} x_{i_{s+1}} of type (5x_{i_{s+1}}) appears in p, a contradiction. So i_s < i_{s+1}. The inequality j_s < j_{s+1} is proved in a completely analogous way.
Proof of Lemma 4.2 Assume, to the contrary, that some word w ∈ F is S′-reduced but not S-reduced. We may choose w to be minimal with respect to the deg-lex order on F. It is clear that w may only be of the form (4x_i) or (5x_i).
We will first consider the case (4x_i); in other words, w = x_i u x_i for some u ≠ 1. First, observe that i ≠ n. Indeed, if i = n, then, as w is S′-reduced and |u|_{x_n} = 0, the word u is of the form (4.1): u = (x_{i_1} · · · x_{j_1}) · · · (x_{i_k} · · · x_{j_k}), for some k and i_1 < i_2 < · · · < i_k and j_1 < j_2 < · · · < j_k, if k > 1. As |w|_{x_{n−1}} = 0 and x_n x_{i_1} cannot be of the form (2), we have i_1 = 1 and the first block of u is increasing. If k > 1, however, then i_2 > j_1 + 1, since otherwise a factor x_{i_2} · · · x_{j_1} x_{i_2} of the form (4x_{i_2}) appears in w, which is impossible. But then, since i_2 ≤ n − 1, the word x_n x_{i_1} · · · x_{j_1} x_{i_2} is a factor of type (3) in w, a contradiction. Thus k = 1. In this case, however, w = x_n (x_1 · · · x_{j_1}) x_n is of the form (4x_n), again a contradiction. Therefore i ≠ n.
Let t = max{l : |u|_{x_l} ≠ 0}. Of course, t > 1, as otherwise w is S′-reducible. Moreover, t > i, since otherwise w has a prefix x_i x_m with m < i − 1, which is a word of the form (2), a contradiction. We consider two cases: 1 < t < n and t = n.
• Case 1. 1 < t < n. Since w is S′-reduced and |w|_{x_n} = 0, by Observation 4.3 w is of the form (4.1), for some k and i_1 < · · · < i_k, j_1 < · · · < j_k, if k > 1. But since w is of the form (4x_i), the first block of w must begin with x_i, and the last block must end with x_i. If the length of the first block x_i · · · x_{j_1} were greater than 1, then this block would have to be increasing, since |w|_{x_{i−1}} = 0. However, in this case i = i_1 < j_1 ≤ j_k = i, which is impossible. Thus the first block of w consists of x_i alone. If k = 1, then w = x_i, a contradiction. If k > 1, then j_1 < j_k, which is impossible, as j_1 = i = j_k. Again, a contradiction. • Case 2. t = n. Then i ≠ 1 because we are in the case (4x_i). Consider the last appearance of x_n in w, namely let w = x_i p x_n q x_i, where p, q ∈ F and |q|_{x_n} = 0. First, assume that q = 1. Then i must be equal to n − 1, since otherwise we would have a factor of type (2) in w. Hence w = x_{n−1} p x_n x_{n−1}. If p = 1, then w is of type (4x_{n−1}), which is impossible as w is S′-reduced. Thus p ≠ 1, and p = x_n p′, since otherwise w contains a factor x_{n−1} x_s of type (2). Then w has a proper factor x_n p′ x_n of type (4x_n), contradicting the minimality of the word w. Thus we may assume that q ≠ 1.
Since w is S′-reduced, q x_i is also S′-reduced, and since |q x_i|_{x_n} = 0 (as i < n), we can apply Observation 4.3 and assume that it is of the form (4.1), for some k and i_1 < · · · < i_k and j_1 < · · · < j_k, if k > 1. However, since x_n x_{i_1} is a factor of w, we must have i_1 = n − 1 or i_1 = 1, as otherwise w has a factor of type (2). We consider these subcases now. (a) If i_1 = n − 1, then there is only one block in the decomposition (4.1) of q x_i, as otherwise another block of q x_i would have to begin with x_{i_2}, where i_2 > i_1 and also i_2 < n. This is impossible. Therefore w = x_i p x_n x_{n−1} · · · x_i. If p = 1, then w is of the form (4x_i), a contradiction. Assume that p ≠ 1. Then x_i p x_n · · · x_{i+1} cannot contain two occurrences of x_{i+1}, as that would yield a factor of the form (4x_{i+1}) in w, which contradicts its minimality. Thus |x_i p x_n · · · x_{i+2}|_{x_{i+1}} = 0, and we can see that x_i p x_n · · · x_{i+2} cannot contain two occurrences of x_{i+2}. Continuing this way, we see that |p|_{x_l} = 0 for n ≥ l > i − 1. Thus p = x_m p′, for some p′ and some m < i − 1, and thus we have a factor x_i x_m of type (2) in w, a contradiction. (b) If i_1 = 1, then q x_i is of the form (x_1 · · · x_{j_1}) · · · (x_{i_k} · · · x_i). We cannot have k = 1, since in that case we would have a factor of the form x_1 · · · x_i in w; its length would be greater than 1, since i ≠ 1, and therefore w would contain x_{i−1}, a contradiction. If k > 1, then, as in the case of words of the form (4.2), we have i_2 = n − 1. This easily implies that k = 2 and w = x_i p x_n (x_1 · · · x_{j_1})(x_{n−1} · · · x_i). Next, if p = 1 then, since 1 ≤ j_1 < i − 1, the word w is of the form (4x_i), whence w is S′-reducible, a contradiction. Let p ≠ 1. As in the previous subcase, we can easily see that |p|_{x_l} = 0 for l = i + 1, . . . , n − 1. Again, if |p|_{x_n} ≠ 0, the minimality of w is violated, and thus |p|_{x_n} = 0. Therefore p = x_m p′, for some m < i − 1.
As in the previous case, a factor x_i x_m of type (2) appears in w, a contradiction.
This implies that u is a product of only one block x_{n−1} · · · x_{j_1}. Since j_1 > 1, we can see that w is of the form (5x_n), a contradiction. Thus i < n. Our approach will be similar to that from the first part of the proof. Again, consider t = max{l : |u|_{x_l} ≠ 0}. Clearly, t > 1, as otherwise w is S′-reducible. The proof breaks into two cases. • Case 1. t < n. Since w is S′-reduced and |w|_{x_n} = 0, it satisfies the conditions of Observation 4.3. Thus it must be of the form (4.1), namely w = (x_{i_1} · · · x_{j_1})(x_{i_2} · · · x_{j_2}) · · · (x_{i_k} · · · x_{j_k}), where i_1 < i_2 < · · · < i_k and j_1 < j_2 < · · · < j_k, if k > 1. Of course, w cannot consist of only one block x_{i_1} · · · x_{j_1}, since otherwise we would have i_1 = j_1 = i and thus w = x_i, a contradiction. Hence k > 1. We claim that j_l ≥ i for all l > 1. Indeed, if we had j_l < i for some 1 < l ≤ k, then the block x_{i_l} · · · x_{j_l} would have to be decreasing, as i_l > i_1 = i, and thus it would contain x_{i+1}, a contradiction with the fact that w is of the form (5x_i). Thus j_l ≥ i for all l > 1. Consider the second block x_{i_2} · · · x_{j_2} of w. Of course, i_2 > i_1 = i. If we had j_2 = i, then the entire second block of w would be decreasing and it would contain x_{i+1}, a contradiction. So j_2 > i. It follows that i = j_k ≥ j_2 > i, and we arrive at a contradiction again. • Case 2. t = n. Notice that i ≠ n − 1 in this case. We assume again that w = x_i p x_n q x_i, where |q|_{x_n} = 0. To avoid the appearance of a factor of type (2) in w, we must restrict ourselves to one of the following subcases: - Subcase (a). If q = 1, then w = x_i p x_n x_i. Therefore i = n − 1 or i = 1, since otherwise we have a factor of the form (2) in w. The first case was excluded at the beginning of Case 2, so w = x_1 p x_n x_1. Thus p ≠ 1, as otherwise w is of type (5x_1).
Also, observe that |p|_{x_n} = 0, since otherwise we would have a proper factor x_n p′ x_n of w with |p′|_{x_n} = |p′|_{x_1} = 0, and thus this factor would be of the form (5x_n). This violates the minimality of w as a minimal S′-reduced and S-reducible word with respect to the deg-lex order on F. This means that x_1 p satisfies the conditions of Observation 4.3 and is of the form (4.1), so that w is of the form (5x_1). This contradicts the fact that w is S′-reduced. - Subcase (b). q = x_1 q′. Again, q x_i is of the form (4.1) and, as i < n − 1, it follows, using the same arguments as in the case of words of the form (4.2), that q x_i must be a single block, and thus w = x_i p x_n x_1 · · · x_i. Now, by the argument used in subcase (b) of Case 2 in the first part of the proof, where we considered words w of type (4x_i), we may assume that |p|_{x_j} = 0 for j = i − 1, i − 2, . . . , 1, n. In particular, |p|_{x_n} = 0 allows us to apply Observation 4.3 to prove that p is of the form (4.1). This yields a contradiction, as w is again proved to be of the form (5x_i). - Subcase (c). q = x_{n−1} q′. Once again, q x_i is of the form (4.1). As the first block of q x_i begins with x_{n−1}, we see, as before, that this is in fact the only block of this word; otherwise another block of q x_i would have to begin with x_{i_2}, where i_2 > i_1 = n − 1 and also i_2 < n, which is impossible. Thus q x_i = x_{n−1} q′ x_i is a single decreasing block of length greater than 1, which is impossible, as |u|_{x_{i+1}} = 0.
The subcases (a)-(c) have all been shown to lead to a contradiction. Therefore, also in the case t = n, no word w can be S′-reduced but S-reducible.
So every S′-reduced word is S-reduced, and Lemma 4.2 is proved.
It now follows easily from Lemma 4.1 that the reduction system S satisfies the diamond lemma, because the reduction system T does. Similarly, Lemma 4.2 then implies that the reduction system S′ satisfies the diamond lemma. Consequently, we have proved the following theorem.

As mentioned before, the fact that in the particular case of a cycle graph even a finite Gröbner basis can be obtained strengthens the assertion of Theorem 1.1 in view of [8]. We conclude with an example showing that the above result cannot be extended to arbitrary Hecke–Kiselman algebras of oriented graphs, even in the case of PI-algebras.

Proof It is easy to see that the set N used in the proof of Theorem 1.1 is the union of the following subsets of F: a{c}*a ∪ b{a, d}*b ∪ c{b, d}*c ∪ d{a,