On the radical of a Hecke-Kiselman algebra

The Hecke-Kiselman algebra of a finite oriented graph $\Theta$ over a field $K$ is studied. If $\Theta$ is an oriented cycle, it is shown that the algebra is semiprime and its central localization is a finite direct product of matrix algebras over the field of rational functions $K(x)$. More generally, the radical is described in the case of PI-algebras, and it is shown that it comes from an explicitly described congruence on the underlying Hecke-Kiselman monoid. Moreover, the algebra modulo the radical is again a Hecke-Kiselman algebra and it is a finite module over its center.


Introduction
Let Θ be a finite simple oriented graph with n vertices {1, . . . , n}. The Hecke-Kiselman monoid HK_Θ associated to Θ, [7], is generated by elements x_1, . . . , x_n subject to the defining relations:
(i) x_i^2 = x_i for every i;
(ii) x_i x_j = x_j x_i if the vertices i, j are not connected by an arrow in Θ;
(iii) x_i x_j x_i = x_j x_i x_j = x_i x_j if there is an arrow i → j in Θ.
Thus, HK_Θ is a natural homomorphic image of the corresponding Coxeter monoid, in which relations (iii) are replaced by the braid relations x_i x_j x_i = x_j x_i x_j. Several combinatorial properties of the monoids HK_Θ and of their representations were studied in [1], [5], [7], [9], [12]. We continue the study started in [14], where the structure of HK_Θ, and of the associated algebra K[HK_Θ] over a field K, is investigated. The case where Θ is the oriented cycle x_1 → x_2 → · · · → x_n → x_1, with n ≥ 3, plays a crucial role. Our first main result shows that the associated Hecke-Kiselman algebra, denoted by K[C_n], is semiprime. It is also Noetherian, as shown in [14].
Consequently, since K[C_n] is an algebra of Gelfand-Kirillov dimension one [11], it follows from [17] that K[C_n] is a finite module over its center. Moreover, its classical quotient ring can be completely described.
In particular, this result answers a question asked in [14]. Next, we apply it to derive a description of the Jacobson radical J(K[HK_Θ]) of an arbitrary algebra K[HK_Θ], provided that it satisfies a polynomial identity. The latter condition is equivalent to a simple condition expressed in terms of the graph Θ, [11]; namely, it is equivalent to saying that Θ does not contain two cyclic subgraphs (i.e. subgraphs which are oriented cycles) connected by an oriented path. We prove that the radical is the ideal determined by an explicitly described congruence ρ on HK_Θ, so that K[HK_Θ]/J(K[HK_Θ]) ≅ K[HK_Θ/ρ] is again a Hecke-Kiselman algebra and it has a very transparent structure. For a congruence η on a semigroup S, we denote by I(η) the kernel of the natural homomorphism K[S] → K[S/η]. Namely, let ρ be the congruence on HK_Θ generated by all pairs (xy, yx) such that there is an arrow x → y that is not contained in any cyclic subgraph of Θ. (If there is no such pair, then we agree that ρ is the trivial congruence.) Let Θ′ be the subgraph of Θ obtained by deleting all arrows x → y that are not contained in any cyclic subgraph of Θ. Then HK_Θ′ ≅ HK_Θ/ρ. Moreover, because of the assumption that K[HK_Θ] is a PI-algebra, the connected components of Θ′ are either singletons or cyclic subgraphs. Recall from [14] that this implies that K[HK_Θ′] is a Noetherian algebra. Indeed, Noetherian algebras K[HK_Θ] are characterized by the condition: each of the connected components of the graph Θ either is acyclic or is a cyclic graph of length n for some n ≥ 3. Our second main result reads as follows.
Theorem 1.2. Assume that Θ is a finite oriented graph such that K[HK_Θ] is a PI-algebra. Let Θ′ be the subgraph of Θ obtained by deleting all arrows x → y that are not contained in any cyclic subgraph of Θ and let ρ be the congruence on HK_Θ defined above. Then J(K[HK_Θ]) = I(ρ) and K[HK_Θ]/J(K[HK_Θ]) ≅ K[HK_Θ′]; the latter algebra is the tensor product of the algebras K[HK_Θi] of the connected components Θ_1, . . . , Θ_m of Θ′, each being isomorphic to K ⊕ K or to the algebra K[C_j] for some j ≥ 3, and it is a finitely generated module over its center.
Recall that the Jacobson radical of a finitely generated PI-algebra R is nilpotent, see [16], Theorem 6.3.39. However, for R = K[HK_Θ] this can also be derived from our proof.

Some background
A Gröbner basis for C_n has been found in [12] by applying the diamond lemma, see [2]. Consequently, the elements of C_n can be treated as words in the free monoid F = ⟨x_1, . . . , x_n⟩ that are reduced with respect to a certain rewriting system in F. Let |w|_i denote the degree of a word w (treated as an element of C_n) in the generator x_i. If i, j ∈ {1, . . . , n}, then x_i · · · x_j denotes the product of all consecutive generators from x_i up to x_j if i < j, or down to x_j if i > j.
Theorem 2.1. Let Θ = C_n for some n ≥ 3. Let S be the system of reductions in F consisting of all pairs of the forms (1)-(5) described in [12]. An element w ∈ F is a reduced word if and only if w has no factors that are leading terms of the reductions (1)-(5). This reduction system is compatible with the degree-lexicographical ordering on the free monoid F defined by x_1 < x_2 < · · · < x_n. We will use this result from [12] several times without further comment.
Our approach heavily depends on the results of [14]. In particular, a very transparent description of the reduced forms of almost all elements of C_n has been found in [14], Theorem 2.1. Namely, for i = 0, 1, . . . , n − 2, the set M_i of reduced forms of the elements of C_n that have a factor of the form x_n q_i = x_n x_1 · · · x_i x_{n−1} · · · x_{i+1} can be described in terms of a semigroup of matrix type, where ⟨s_i⟩ denotes the monoid generated by an element s_i. Recall that, if S is a semigroup, A, B are nonempty sets and P = (p_ba) is a B × A matrix with entries in S^0 (the semigroup S with a zero adjoined), then the semigroup of matrix type M^0(S, A, B; P) over S is the set of all triples (s, a, b), where s ∈ S, a ∈ A, b ∈ B, together with a zero element θ, with the operation (s, a, b)(s′, a′, b′) = (s p_{ba′} s′, a, b′) if p_{ba′} ∈ S, and θ otherwise. In our situation, M^0(⟨s_i⟩, A_i, B_i; P_i) is an order in the completely 0-simple semigroup M^0(G, A_i, B_i; P_i) over an infinite cyclic group, in the sense of [6]. For basic results on semigroups and algebras of matrix type we refer to [13], Chapter 5. They play a fundamental role in the representation theory of semigroup algebras.
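The multiplication in a semigroup of matrix type can be illustrated by a short script. The following sketch is ours and purely illustrative (the names multiply, P, THETA are not from the paper): S = ⟨s⟩ is modeled as the free monogenic monoid, written via exponents k ≥ 0, and the entries of the sandwich matrix are exponents or None (the zero of S^0).

```python
# Sketch of the semigroup of matrix type M^0(S, A, B; P), with S = <s> the
# free monogenic monoid: an element s^k is represented by the exponent k,
# and multiplication in S adds exponents.
import itertools

THETA = "theta"  # the zero element theta

def multiply(t1, t2, P):
    """(s,a,b)(s',a',b') = (s p_{b a'} s', a, b') if p_{b a'} lies in S, else theta."""
    if t1 == THETA or t2 == THETA:
        return THETA
    s, a, b = t1
    s2, a2, b2 = t2
    p = P[b][a2]
    if p is None:            # p_{b a'} is the zero of S^0
        return THETA
    return (s + p + s2, a, b2)

# A sample 2 x 2 sandwich matrix over <s>: entries are exponents, one zero entry.
P = {0: {0: 0, 1: None}, 1: {0: 1, 1: 0}}

# The operation is associative (as it must be for any sandwich matrix):
elems = [(k, a, b) for k in range(3) for a in range(2) for b in range(2)]
for x, y, z in itertools.product(elems, repeat=3):
    assert multiply(multiply(x, y, P), z, P) == multiply(x, multiply(y, z, P), P)
```

The zero entry of P shows how products collapse to θ, which is exactly the mechanism that makes the sandwich matrix P_i control primeness and localization of the algebras K[M_i] below.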
It is shown in [14] that |A_i| = |B_i| and that P_i is not a zero divisor in the matrix ring M_{n_i}(K[⟨s_i⟩]). Therefore P_i is invertible as a matrix in M_{n_i}(K(s_i)), and hence the corresponding algebra of matrix type becomes, after a central localization, a matrix algebra over K(s_i).

Proof. The first assertion was proved in [14], Theorem 5.8. Suppose that J is a nonzero ideal. Then there exist v, w ∈ M_i such that vJw ≠ 0; hence the corresponding matrix is nonzero and the claim follows.

We start with calculating the size of the set A_i, for every i = 0, . . . , n − 2 and n ≥ 3. The case i = n − 2 is covered in [14], so next we assume that i ≤ n − 3. From the description of the set A_i given in Theorem 2.1 in [14] it is clear that every element w of A_i is of exactly one of two forms described there; in the first case (Case 1),

w = (x_{k_s} · · · x_s)(x_{k_{s+1}} · · · x_{s+1}) · · · (x_{k_{i+1}} · · · x_{i+1}),

where i + 1 ≥ s ≥ 1, s + 1 < k_{s+1} < · · · < k_{i+1} ≤ n − 1 and k_s ≤ s; for s = i + 1 we assume that w = x_{k_{i+1}} · · · x_{i+1} with k_{i+1} ≤ i + 1. Choose 1 ≤ s ≤ i + 1 and 0 ≤ i ≤ n − 3. Then the elements w from Case 1 are in a bijection with strictly increasing sequences (k_s, . . . , k_{i+1}) of natural numbers such that 1 ≤ k_s ≤ s < s + 2 ≤ k_{s+1} < · · · < k_{i+1} ≤ n − 1. It is easy to see that there exist exactly $s\binom{n-s-2}{i-s+1}$ sequences of the above form. Similarly, the elements w of the second form (Case 2) are in a bijection with strictly increasing sequences (k_s, . . . , k_{i+1}) of natural numbers such that s + 1 ≤ k_s < · · · < k_{i+1} ≤ n − 1; there are exactly $\binom{n-s-1}{i-s+2}$ such sequences. It follows that

$|A_i| = 1 + \sum_{s=1}^{i+1}\left(\binom{n-s-1}{i-s+2} + s\binom{n-s-2}{i-s+1}\right).$

Thus, it is enough to prove that

$1 + \sum_{s=1}^{i+1}\left(\binom{n-s-1}{i-s+2} + s\binom{n-s-2}{i-s+1}\right) = \binom{n}{i+1}$

for n ≥ 3 and 0 ≤ i ≤ n − 3.
Moreover, if i = n − 3, then the required equality follows by a direct calculation.
It is easy to check this by substituting k = i + 1 − s in the sum on the left-hand side. We proceed by induction on n to prove the identity. For i = 0 and arbitrary n ≥ 3 we have $1 + \binom{n-2}{1} + \binom{n-3}{0} = n = \binom{n}{1}$, and the assertion follows. If n = 3, then we have 0 ≤ i ≤ 0, so the proposition holds.
Assume now that the equality is true for some n and every i ≤ n − 3. Consider the sum on the left-hand side for n + 1. From the induction hypothesis it follows that the first sum is equal to $\binom{n}{i+1}$. Substituting m = k − 1 and j = i − 1 in the second sum and applying the induction hypothesis again, we get that it is equal to $\binom{n}{i}$. Now, using the Pascal identity $\binom{n}{i+1} + \binom{n}{i} = \binom{n+1}{i+1}$, the assertion follows.
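The counting argument can also be confirmed numerically. The following sketch (ours, standard library only) checks the identity $1 + \sum_{s=1}^{i+1}\left(\binom{n-s-1}{i-s+2} + s\binom{n-s-2}{i-s+1}\right) = \binom{n}{i+1}$ for a range of values of n and i.

```python
# Numerical check of the binomial identity used to compute |A_i|:
#   1 + sum_{s=1}^{i+1} [ C(n-s-1, i-s+2) + s*C(n-s-2, i-s+1) ] = C(n, i+1)
# for n >= 3 and 0 <= i <= n-3.  math.comb(n, k) returns 0 when k > n,
# which matches the usual convention for binomial coefficients.
from math import comb

def lhs(n, i):
    return 1 + sum(comb(n - s - 1, i - s + 2) + s * comb(n - s - 2, i - s + 1)
                   for s in range(1, i + 2))

for n in range(3, 30):
    for i in range(n - 2):          # i = 0, ..., n-3
        assert lhs(n, i) == comb(n, i + 1), (n, i)
```

For instance, n = 5 and i = 2 gives 1 + 2 + 3 + 4 = 10 = $\binom{5}{3}$, in agreement with the induction above.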

Main results
We will identify, without further comment, elements of the monoid C_n with words in the free monoid F that are reduced with respect to the system S described in Theorem 2.1. Our first main aim is to show that K[C_n] is semiprime. To prove this, we strengthen some of the results from [14].
Consider the automorphism σ of C_n given by σ(x_i) = x_{i+1} for i = 1, . . . , n, where we agree that x_{n+1} = x_1. Its natural extension to an automorphism of K[C_n] will also be denoted by σ. For basic properties of this automorphism we refer to Section 3 in [14].
We have an ideal chain in C_n determined by the ideals I_i = {w ∈ C_n : C_n w C_n ∩ ⟨x_n q_i⟩ = ∅} for i = 0, . . . , n − 2. In particular, using Corollary 3.17 in [14] we obtain that σ(I_k) = I_k for k = 0, . . . , n − 3. The following observation can be deduced from the key structural results and methods of [14].

Proof. Let a(x_n q_i)^k b ∈ M_i and take any generator x_r ∈ C_n. Assume that a(x_n q_i)^k b x_r ∉ M_i. We claim that then a(x_n q_i)^k b x_r ∈ I_i. Let b′ be the reduced form of b x_r. If b′ = x_j b̃ for some word b̃, where j ≤ i + 1, then using reduction (4) from Theorem 2.1 we get that a(x_n q_i)^k b′ can be reduced to a(x_n q_i)^k b̃. Therefore we can assume that a prefix of b′ is equal to x_j, for some j > i + 1. If j < n, then it can be calculated that a(x_n q_i)^k b x_r can be rewritten as a word with a factor of the form x_{j−1} · · · x_{i+2} x_n x_1 · · · x_{i+1} x_{n−1} · · · x_j, and this element is in I_i by Lemma 3.8 in [14]. Let us now consider the case when x_n is a prefix of b′. As we assume that a(x_n q_i)^k b′ ∉ M_i, this word can be rewritten in C_n as an element without the factor x_n q_i. From Theorem 2.1 it is easy to see that, to obtain a word without such a factor, one has to use a reduction of type (5). Therefore a(x_n q_i)^k b′ can be written as a word with a prefix of the form a(x_n q_i)^k x_n v x_j, where |x_n v|_j = |x_n v|_{j+1} = 0. Moreover, for j ≤ i or j = n − 1 the generator x_{j+1} occurs in x_n q_i x_n after x_j, so a reduction of type (5) applied to x_j is not possible in this case. Therefore n − 1 > j ≥ i + 1. It follows (see Lemma 2.3 in [14]) that such a prefix is of the form a(x_n q_i)^k x_n x_1 · · · x_j. Therefore this element has a factor x_n x_1 · · · x_i x_{n−1} · · · x_{i+1} x_n x_1 · · · x_j for some n − 1 > j ≥ i + 1.
It can be checked (using the reductions from Theorem 2.1) that the latter word can be rewritten as an element with the factor x_{n−1} · · · x_{j+1} x_n x_1 · · · x_j, which is in I_{j−1} ⊆ I_i by Lemma 3.8 in [14]. The assertion follows.
The following lemma provides a crucial step in the proof of Theorem 1.1. By P(K[C_n]) we denote the prime radical of K[C_n].

Proof. Suppose that J ≠ 0 is a finite dimensional ideal of K[C_n]. First, we claim that a nonzero element α ∈ J can be chosen so that for every i = 1, . . . , n we have either w x_i = w for all w ∈ supp(α), or α x_i = 0.
Let 0 ≠ α ∈ J be such that |supp(α)| is minimal possible. Let supp(α) = {v_1, . . . , v_k}. Since J is finite dimensional, the set Z consisting of all such k-tuples is finite. Recall that every K[M_i] is prime. So, we know that A ∩ K[M] = 0, and therefore A and P(K[C_n]) are finite dimensional, because C_n \ M is finite. Hence, the assertion follows.
We are now in a position to prove Theorem 1.1.
Proof. In view of Lemma 3.3, from Theorem 5.9 in [14] we know that K[C n ] is a Noetherian semiprime PI-algebra.
For any fixed i = 0, . . . , n − 2, let J_i be an ideal of K[C_n] maximal among all ideals intersecting K[⟨x_n q_i⟩] trivially. Then J_i is a prime ideal, and K[I_i] ⊆ J_i. By Corollary 10.16 in [8], GKdim(R) = clKdim(R) (the Gelfand-Kirillov and the classical Krull dimensions) for every finitely generated Noetherian PI-algebra R. Since GKdim(K[C_n]) = 1, it follows that J_i is a minimal prime ideal of K[C_n]. Moreover, M_i is a right ideal in C_n/I_i by Lemma 3.2, and thus it is a two-sided ideal, because C_n/I_i is endowed with a natural involution which preserves M_i, by Corollary 3.12 and Lemma 3.18 in [14]. Our second main result describes the radical of a Hecke-Kiselman algebra K[HK_Θ], as well as the algebra modulo the radical, in the case of PI-algebras. So, assume that Θ is a finite oriented graph such that K[HK_Θ] is a PI-algebra. This is equivalent to saying that Θ does not contain two cyclic subgraphs (i.e. subgraphs which are oriented cycles) connected by an oriented path, [11]. Let ρ be the congruence on HK_Θ generated by all pairs (xy, yx) such that there is an arrow x → y that is not contained in any cyclic subgraph of Θ. (If there is no such pair, then we assume that ρ is the trivial congruence.) Let Θ′ be the subgraph of Θ obtained by deleting all arrows x → y that are not contained in any cyclic subgraph of Θ. Then HK_Θ′ ≅ HK_Θ/ρ, and the connected components of Θ′ are either singletons or cyclic subgraphs. Now we are in a position to prove Theorem 1.2.
Proof. Suppose that a vertex x ∈ V(Θ) is a source vertex; in other words, there is an arrow x → y for some y ∈ V(Θ), but there are no arrows of the form z → x. For any w ∈ HK_Θ consider the element β = (xy − yx)w(xy − yx) ∈ K[HK_Θ]. Since x is a source vertex, we know that xvx = xv in HK_Θ for every v ∈ HK_Θ. Hence xwxy = xwy and xwyx = xwy. Similarly, xywxy = xywy and xywyx = xywy. Therefore β = 0. It follows that xy − yx ∈ P(K[HK_Θ]). If x is a sink, that is, there is an arrow z → x for some z ∈ V(Θ) but there are no arrows of the form x → y in the graph Θ, then a symmetric argument shows that xz − zx ∈ P(K[HK_Θ]) for all z such that z → x in Θ. Let ρ_1 be the congruence generated by all pairs (xy, yx) such that x or y is either a source or a sink and there is an arrow x → y that is not contained in any cyclic subgraph of Θ. Equivalently, we may consider the graph Γ_1 obtained by erasing in Θ all such arrows x → y and z → x as above. Then K[HK_Γ1] ≅ K[HK_Θ]/I(ρ_1). We have shown that I(ρ_1) ⊆ P(K[HK_Θ]). Repeating this argument finitely many times, we easily get that I(ρ) ⊆ P(K[HK_Θ]) (and our argument shows that I(ρ) is nilpotent, because Θ is finite).
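The computation in the proof above can be checked mechanically in the smallest case: a graph with a single arrow x → y (so x is a source and y a sink). The following sketch is ours; it models HK_Θ by word rewriting with the defining relations x² = x, y² = y, xyx = yxy = xy as length-decreasing rules, enumerates the (finite) monoid, and verifies that (xy − yx)w(xy − yx) = 0 in the monoid algebra for every element w.

```python
# For the single arrow x -> y, rewrite words over {'x','y'} using the
# defining relations of HK_Theta as reductions:
RULES = [("xx", "x"), ("yy", "y"), ("xyx", "xy"), ("yxy", "xy")]

def normal_form(w):
    changed = True
    while changed:
        changed = False
        for pat, rep in RULES:
            if pat in w:
                w = w.replace(pat, rep, 1)
                changed = True
    return w

# Enumerate the monoid as the closure of {1} under right multiplication.
elements = {""}
frontier = {""}
while frontier:
    frontier = {normal_form(w + g) for w in frontier for g in "xy"} - elements
    elements |= frontier

def conv(f, g):
    """Multiply elements of the monoid algebra, given as dicts word -> coefficient."""
    h = {}
    for u, cu in f.items():
        for v, cv in g.items():
            w = normal_form(u + v)
            h[w] = h.get(w, 0) + cu * cv
    return {w: c for w, c in h.items() if c != 0}

beta = {"xy": 1, "yx": -1}
for w in elements:
    # (xy - yx) w (xy - yx) = 0 for every w, so xy - yx generates a nilpotent ideal
    assert conv(conv(beta, {w: 1}), beta) == {}
```

Here the monoid has the five elements 1, x, y, xy, yx, and the assertions confirm that xy − yx spans a square-zero ideal, as the proof shows in general.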
Since we know that J(K[HK_Θ]) = P(K[HK_Θ]), to prove the first assertion of the theorem it is now enough to check that K[HK_Θ′] is semiprime. Here HK_Θ′ is the direct product of all HK_Θi, where Θ_i, i = 1, . . . , m, are the connected components of Θ′. From [11] we know that each HK_Θi is either a band with two elements (if Θ_i has only one vertex) or it is isomorphic to C_k for some k ≥ 3. In the former case K[HK_Θi] ≅ K ⊕ K; in the latter case, K[HK_Θi] is a semiprime PI-algebra (by Theorem 1.1) of Gelfand-Kirillov dimension one [11], and hence it is a finitely generated module over its center [17]. It follows easily that K[HK_Θ] is a finitely generated module over its center.
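For a single-vertex component the isomorphism K[HK_Θi] ≅ K ⊕ K is completely explicit: the two-element band {1, x} with x² = x yields the orthogonal idempotents x and 1 − x. The following sketch (ours) verifies this, representing a + bx by the pair (a, b) and the isomorphism by a + bx ↦ (a, a + b).

```python
# K[{1, x}] with x^2 = x: multiplication of a + b*x and c + d*x.
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c, a * d + b * c + b * d)

def iso(p):
    """The isomorphism K[{1,x}] -> K x K, a + b*x -> (a, a + b)."""
    a, b = p
    return (a, a + b)

e, f, one = (0, 1), (1, -1), (1, 0)   # e = x, f = 1 - x

# e, f are orthogonal idempotents summing to 1:
assert mul(e, e) == e and mul(f, f) == f
assert mul(e, f) == (0, 0) and mul(f, e) == (0, 0)
assert mul(one, e) == e

# iso is multiplicative (componentwise product on the right-hand side):
import itertools
vals = list(itertools.product(range(-2, 3), repeat=2))
for p, q in itertools.product(vals, repeat=2):
    (a1, b1), (a2, b2) = iso(p), iso(q)
    assert iso(mul(p, q)) == (a1 * a2, b1 * b2)
```

In particular the two factors of K ⊕ K are the images of x and 1 − x, matching the first alternative in the theorem.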
Let Q_i be the classical ring of quotients of K[HK_Θi]. If Θ_i = C_{m_i} for some m_i, then we know that Q_i is a central localization of the form described in Theorem 1.1. Clearly, HK_Θ′ is the direct product $\prod_{i=1}^{m}$ HK_Θi. Then, in the localization Q = Q_1 ⊗ · · · ⊗ Q_m of K[HK_Θ′] ≅ $\bigotimes_{i=1}^{m}$ K[HK_Θi], each of the factors is isomorphic to K ⊕ K or to $\bigoplus_{j=0}^{m_i-2} M_{r_j}(K(x))$, where $r_j = \binom{m_i}{j+1}$. Therefore, the tensor product is semiprime. Hence K[HK_Θ′] is semiprime, because Q is its central localization. It is now clear that K[HK_Θ′] ≅ K[HK_Θ]/P(K[HK_Θ]). The result follows.