1 Introduction

Let S be a semigroup with set \(E=E(S)\) of idempotents, and let \(\langle E\rangle \) denote the subsemigroup of S generated by E. We say that S is idempotent generated if \(S = \langle E\rangle .\) There are many natural examples of such semigroups. It is shown in Howie [24] that the subsemigroup of all singular transformations of a full transformation monoid \(\mathcal {T}_n\) on n elements, \(n\in \mathbb {N},\) is generated by its idempotents and, moreover, any (finite) semigroup can be embedded into a (finite) regular idempotent generated semigroup. Subsequently, Erdős [14] investigated the matrix monoid \(M_n(F)\) of all \(n\times n\) matrices over a field F, showing that the subsemigroup of all singular matrices of \(M_n(F)\) is generated by its idempotents. Note that \(M_n(F)\) is isomorphic to the endomorphism monoid \({\text {End}}\mathbf {V}\) of all linear maps of an n-dimensional vector space \(\mathbf {V}\) over F to itself. Alternative proofs of [14] were given by Araújo and Mitchell [1], Dawlings [8] and Djoković [9], and the result was generalized to finite dimensional vector spaces over division rings by Laffey [26]. Given the common properties shared by full transformation monoids and matrix monoids, Gould [18] and Fountain and Lewin [17] studied the endomorphism monoid \({\text {End}}\mathbf {A}\) of an independence algebra \(\mathbf {A}\). In Sect. 2 we review the relevant results.

The notion of a biordered set, and that of the free idempotent generated semigroup over a biordered set, were both introduced by Nambooripad [29] in his seminal work on the structure of regular semigroups. A biordered set is a partial algebra equipped with two quasi-orders determining the domain of a partial binary operation, satisfying certain axioms. It is shown in [29] that for any semigroup S, the set \(E=E(S)\) of idempotents is endowed with the structure of a biordered set, where the quasi-orders are the restriction of \(\le _{\mathcal {L}}\) and \(\le _{\mathcal {R}}\) to E and the partial binary operation is a restriction of the fundamental operation of S. The definition of \(\le _{\mathcal {L}}\) and \(\le _{\mathcal {R}}\) yields that a product between \(e,f\in E\) is basic if and only if \(\{ ef,fe\}\cap \{ e,f\}\ne \emptyset \). Note that in this case \(ef, fe\in E.\) Conversely, Easdown [13] showed that, for any biordered set E, there exists a semigroup S whose set E(S) of idempotents is isomorphic to E as a biordered set.

Given a biordered set \(E=E(S)\), there is a free object in the category of all idempotent generated semigroups whose biordered sets are isomorphic to E, called the free idempotent generated semigroup \({\text {IG}}(E)\) over E. It is given by the following presentation:

$$\begin{aligned} {\text {IG}}(E)=\langle \, \overline{E}: \bar{e}\bar{f}=\overline{ef},\, e,f\in E, \{ e,f\} \cap \{ ef,fe\}\ne \emptyset \,\rangle , \end{aligned}$$

where \(\overline{E}=\{ \bar{e}:e\in E\}\). Clearly, \({\text {IG}}(E)\) is idempotent generated, and there is a natural map \(\varvec{\phi }:{\text {IG}}(E)\rightarrow S\), given by \(\bar{e}\varvec{\phi }= e\), such that \({\text {im}}\varvec{\phi }=S'=\langle E\rangle \). Further, we have the following result, taken from [2, 13, 15, 21, 29], which exhibits the close relation between the structure of the regular \(\mathcal {D}\)-classes of \({\text {IG}}(E)\) and those of S.

Proposition 1.1

Let \(S,S',E=E(S), {\text {IG}}(E)\) and \(\varvec{\phi }\) be as above, and let \(e\in E\).

  1. (i)

    The restriction of \(\varvec{\phi }\) to the set of idempotents of \({\text {IG}}(E)\) is a bijection onto E (and an isomorphism of biordered sets).

  2. (ii)

    The morphism \(\varvec{\phi }\) induces a bijection between the set of all \(\mathcal {R}\)-classes (respectively \(\mathcal {L}\)-classes) in the \(\mathcal {D}\)-class of \(\bar{e}\) in \({\text {IG}}(E)\) and the corresponding set in \(\langle E\rangle \).

  3. (iii)

    The restriction of \(\varvec{\phi }\) to \(H_{\bar{e}}\) is a morphism onto \(H_e\).
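A tiny worked example of the presentation above (our illustration, not taken from the source): let E be a two-element right zero semigroup, so that every product of idempotents is basic and the presentation collapses completely.

```latex
% E=\{e,f\} with ef=f and fe=e (a right zero band): all products are basic, so
\operatorname{IG}(E)=\langle\, \bar e,\bar f \;:\;
  \bar e\bar e=\bar e,\ \bar f\bar f=\bar f,\
  \bar e\bar f=\bar f,\ \bar f\bar e=\bar e \,\rangle .
% Every word in \bar e,\bar f reduces to its last letter, whence
% \operatorname{IG}(E)=\{\bar e,\bar f\}\cong E and \phi is an isomorphism.
```

Here \(\varvec{\phi }\) is an isomorphism, in agreement with Proposition 1.1; the interesting phenomena discussed below arise precisely when non-basic products force \({\text {IG}}(E)\) to be larger than \(\langle E\rangle \).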

To understand the behaviour of idempotent generated semigroups, it is important to explore the structure of semigroups of the form \({\text {IG}}(E).\) One of the main directions in recent years is to study maximal subgroups of \({\text {IG}}(E).\) A longstanding conjecture (appearing formally in [27]) purported that all maximal subgroups of \({\text {IG}}(E)\) were free. Several papers [27], [30, 33] and [32] established various sufficient conditions guaranteeing that all maximal subgroups are free. However, in 2009, Brittenham, Margolis and Meakin [3] disproved this conjecture by showing that the free abelian group \(\mathbb {Z}\oplus \mathbb {Z}\) occurs as a maximal subgroup of some \({\text {IG}}(E)\). An unpublished counterexample of McElwee from the 2010s was announced by Easdown [12] in 2011. Motivated by the significant discovery in [3], Gray and Ruškuc [21] showed that any group occurs as a maximal subgroup of some \({\text {IG}}(E)\). Their approach is to use Ruškuc’s pre-existing machinery for constructing presentations of maximal subgroups of semigroups given by a presentation, to refine this to give presentations of the maximal subgroups of \({\text {IG}}(E)\), and then, given a group G, to choose a biordered set E carefully. Their techniques are significant and powerful, and have other consequences. However, to show that any group occurs as a maximal subgroup of some \({\text {IG}}(E)\), a simple approach suffices [19], by considering the biordered set E of non-identity idempotents of a wreath product \(G\wr \mathcal {T}_n\). We also note here that any group occurs as \({\text {IG}}(E)\) for some band, that is, a semigroup of idempotents [11].

With the above established, other natural questions arise: for a particular biordered set E, what are the maximal subgroups of \({\text {IG}}(E)\)? Gray and Ruškuc [22] investigated the maximal subgroups of \({\text {IG}}(E)\), where E is the biordered set of idempotents of a full transformation monoid \(\mathcal {T}_n\), showing that for any \(e\in E\) of rank r, where \(1\le r\le n-2,\) the maximal subgroup of \({\text {IG}}(E)\) containing \(\overline{e}\) is isomorphic to the maximal subgroup of \(\mathcal {T}_n\) containing e, and hence to the symmetric group \(\mathcal {S}_r\). Another strand of this popular theme is to consider the biordered set E of idempotents of the matrix monoid \(M_n(D)\) of all \(n\times n\) matrices over a division ring D. By using topological methods similar to those of [3], Brittenham, Margolis and Meakin [2] proved that if \(e\in E\) is a rank 1 idempotent, then the maximal subgroup of \({\text {IG}}(E)\) containing \(\overline{e}\) is isomorphic to that of \(M_n(D),\) that is, to the multiplicative group \(D^*\) of D. Dolinka and Gray [10] went on to generalise the result of [2] to \(e\in E\) of higher rank r, where \(r< n/3,\) showing that the maximal subgroup of \({\text {IG}}(E)\) containing \(\overline{e}\) is isomorphic to the maximal subgroup of \(M_n(D)\) containing e, and hence to the general linear group \(GL_r(D)\). So far, the structure of the maximal subgroups of \({\text {IG}}(E)\) containing \(\overline{e}\), where \(e\in E\) has rank \(r\ge n/3\), remains unknown. On the other hand, Dolinka, Gould and Yang [7] explored the biordered set E of idempotents of the endomorphism monoid \({\text {End}}F_n(G)\) of a free G-act \(F_n(G)=\bigcup _{i=1}^n Gx_{i}\) over a group G, with \(n\in \mathbb {N},n\ge 3\). It is known that \({\text {End}}F_n(G)\) is isomorphic to a wreath product \(G\wr \mathcal {T}_n\).
They showed that for any rank r idempotent \(e\in E\), with \(1\le r\le n-2\), \(H_{\overline{e}}\) is isomorphic to \(H_{e}\) and hence to \(G\wr \mathcal {S}_r\). Thus, the main result of [7] extends results of [19, 21] and [22]. We note in the cases above that if rank e is \(n-1\) then \(H_{\overline{e}}\) is free and if rank e is n then \(H_{\overline{e}}\) is trivial.

In this paper we are concerned with a kind of universal algebra called an independence algebra or \(v^*\)-algebra. Examples include sets, vector spaces and free G-acts over a group G. Results for the biordered sets of idempotents of the full transformation monoid \(\mathcal {T}_n\), the matrix monoid \(M_n(D)\) of all \(n\times n\) matrices over a division ring D and the endomorphism monoid \({\text {End}}F_n(G)\) of a free (left) G-act \(F_n(G)\) suggest that it may well be worth investigating maximal subgroups of \({\text {IG}}(E)\), where E is the biordered set of idempotents of the endomorphism monoid \({\text {End}}\mathbf {A}\) of an independence algebra \(\mathbf {A}\) of rank n, where \(n\in \mathbb {N}\) and \(n\ge 3.\) Given the diverse methods needed to deal with the biordered sets of idempotents of \(\mathcal {T}_n\), \(M_n(D)\) and \({\text {End}}F_n(G)\), we start with the presumption that it will be hard to find a unified approach applicable to the biordered set of idempotents of \({\text {End}}\mathbf {A}.\) However, the aim is to make a start here in the hope of facilitating the identification of a pattern. We show that in the case where \(\mathbf {A}\) has no constants, the maximal subgroup of \({\text {IG}}(E)\) containing a rank 1 idempotent \(\bar{\varepsilon }\) is isomorphic to that of \(\varepsilon \) in \({\text {End}}\mathbf {A}\), and the latter is the group of all unary term operations of \(\mathbf {A}.\) Standard arguments give that if rank \(\varepsilon \) is \(n-1\) then \(H_{\overline{\varepsilon }}\) is free, and if rank \(\varepsilon \) is n then \(H_{\overline{\varepsilon }}\) is trivial.

The structure of this paper is as follows. In Sect. 2 we give the required background on independence algebras, including the part of the classification of Urbanik [35] necessary for our purposes. We then discuss in Sect. 3 the rank 1 \(\mathcal {D}\)-class of \({\text {End}}\mathbf {A}\) in the case where \(\mathbf {A}\) is an independence algebra of finite rank with no constants, and show that the maximal subgroups are isomorphic to the group of unary term operations. The next section collects together some results for the maximal subgroup \(H_{\overline{e}}\) in \({\text {IG}}(E)\) in the case where \(D_e\) is completely simple with a particularly nice Rees matrix representation; the proofs are omitted since they follow those of [19]. Section 5 contains the main technicalities, where we examine a set of generators of \(H_{\overline{\varepsilon }}\) in the case where \(\varepsilon \) is a rank 1 idempotent of \({\text {End}}\mathbf {A}\), and \(\mathbf {A}\) is an independence algebra of finite rank \(n\ge 3\) having no constants. Finally in Sect. 6 we give our promised result: with \(\varepsilon \) as given, \(H_{\overline{\varepsilon }}\) is isomorphic to \(H_{{\varepsilon }}\).

For basic ideas of Semigroup Theory we refer the reader to [25], and of Universal Algebra to [28], [6] and [20].

2 Independence algebras and their endomorphism monoids

In [18] the second author answered the question ‘What then do vector spaces and sets have in common which forces \({\text {End}}\mathbf {V}\) and \(\mathcal {T}_n\) to support a similar pleasing structure?’. To do so, she investigated a class of universal algebras, \(v^*\)-algebras, which she called independence algebras: the class includes sets, vector spaces and free group acts.

Let \(\mathbf {A}\) be a (universal) algebra. For any \(a_1,\ldots ,a_m\in A\), a term built from these elements may be written as \(t(a_1,\ldots ,a_m)\) where \(t(x_1,\ldots ,x_m):A^m\rightarrow A\) is a term operation. For any subset \(X\subseteq A\), we use \(\langle X\rangle \) to denote the universe of the subalgebra generated by X, consisting of all \(t(a_1, \ldots , a_m),\) where \(m\in \mathbb {N}^0=\mathbb {N}\cup \{ 0\}\), \(a_1, \ldots , a_m\in X,\) and t is an m-ary term operation. A constant in \(\mathbf {A}\) is the image of a basic nullary operation; an algebraic constant is the image of a nullary term operation, i.e. an element of the form \(t(c_1,\ldots ,c_m)\) where \(c_1,\ldots ,c_m\) are constants. Notice that \(\langle \emptyset \rangle \) denotes the subalgebra generated by the constants of \(\mathbf {A}\) and consists of the algebraic constants. Of course, \(\langle \emptyset \rangle =\emptyset \) if and only if \(\mathbf {A}\) has no algebraic constants, if and only if \(\mathbf {A}\) has no constants.

We say that an algebra \(\mathbf {A}\) satisfies the exchange property (EP) if for every subset X of A and all elements \(x,y\in A\):

$$\begin{aligned} y\in \langle X\cup \{x\}\rangle \text{ and } y\not \in \langle X\rangle \text{ implies } x\in \langle X\cup \{y\}\rangle . \end{aligned}$$
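For orientation (a standard illustration of ours, not from the source): when \(\mathbf {A}\) is a vector space and \(\langle X\rangle \) is the linear span, (EP) is exactly the Steinitz exchange lemma.

```latex
% If y \in \langle X\cup\{x\}\rangle but y \notin \langle X\rangle, then
y=\sum_i k_iu_i+kx \qquad (u_i\in X,\ k\ne 0),
% so solving for x gives
x=k^{-1}\Big(y-\sum_i k_iu_i\Big)\in \langle X\cup \{y\}\rangle .
```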

A subset X of A is called independent if for each \(x\in X\) we have \(x\not \in \langle X\backslash \{x\}\rangle .\) We say that a subset X of A is a basis of \(\mathbf {A}\) if X generates A and is independent.

As explained in [18], any algebra satisfying the exchange property has a basis, and in such an algebra a subset X is a basis if and only if X is a minimal generating set, if and only if X is a maximal independent set. All bases of such an algebra \(\mathbf {A}\) have the same cardinality, called the rank of \(\mathbf {A}\). Further, any independent subset X can be extended to a basis of \(\mathbf {A}.\)

We say that a mapping \(\theta \) from A into itself is an endomorphism of \(\mathbf {A}\) if for any m-ary term operation \(t(x_1,\ldots ,x_m)\) and \(a_1,\ldots ,a_m\in A\) we have

$$\begin{aligned} t(a_1,\ldots ,a_m)\theta =t(a_1\theta , \ldots ,a_m\theta ); \end{aligned}$$

if \(\theta \) is bijective, then we call it an automorphism. Note that an endomorphism fixes the algebraic constants.

An algebra \(\mathbf {A}\) satisfying the exchange property is called an independence algebra if it satisfies the free basis property, by which we mean that any map from a basis of \(\mathbf {A}\) to \(\mathbf {A}\) can be extended to an endomorphism of \(\mathbf {A}.\) The term ‘independence algebra’ was introduced by the second author in [18], where she initiated the study of their endomorphism monoids; it is remarked in [18] that they are precisely the \(v^*\)-algebras of Narkiewicz [31].

Let \(\mathbf {A}\) be an independence algebra, \({\text {End}}\mathbf {A}\) the endomorphism monoid of \(\mathbf {A},\) and \({\text {Aut}}\mathbf {A}\) the automorphism group of \(\mathbf {A}\). We define the rank of an element \(\alpha \in {\text {End}}\mathbf {A}\) to be the rank of the subalgebra \({\text {im}}\alpha \) of \(\mathbf {A};\) that this is well defined follows from the easy observation that a subalgebra of \(\mathbf {A}\) is an independence algebra.

Lemma 2.1

[18] Let \(\mathbf {A}\) be an independence algebra. Then \({\text {End}}\mathbf {A}\) is a regular semigroup, and for any \(\alpha ,\beta \in {\text {End}}\mathbf {A},\) the following statements are true:

  1. (i)

    \(\alpha ~\mathcal {L}~\beta \) if and only if \({\text {im}}\alpha ={\text {im}}\beta ;\)

  2. (ii)

    \(\alpha ~\mathcal {R}~\beta \) if and only if \({\text {ker}}\alpha ={\text {ker}}\beta ;\)

  3. (iii)

    \(\alpha ~\mathcal {D}~\beta \) if and only if \({\text {rank}}\alpha ={\text {rank}}\beta ;\)

  4. (iv)

    \(\mathcal {D}=\mathcal {J}.\)

Let \(D_r\) be the \(\mathcal {D}\)-class of an arbitrary rank r element in \({\text {End}}\mathbf {A}.\) Then by Lemma 2.1, we have

$$\begin{aligned} D_r=\{\alpha \in {\text {End}}\mathbf {A}:~{\text {rank}}\,\, \alpha =r\}. \end{aligned}$$

Put \(D_r^0=D_r\cup \{0\}\) and define a multiplication on \(D^0_r\) by:

$$\begin{aligned} \alpha \cdot \beta =\left\{ \begin{array}{lll} \alpha \beta &{}\text{ if } \alpha ,\beta \in D_r \text{ and } {\text {rank}}\,\, \alpha \beta =r\\ 0&{} \text{ else }\end{array} \right. \end{aligned}$$
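To illustrate (our example, with \(\mathbf {A}\) a 3-element set, so that \({\text {End}}\mathbf {A}=\mathcal {T}_3\)): the product of two rank 2 maps may fall out of \(D_2\), in which case the new multiplication sends it to 0.

```latex
% Two rank 2 maps in \mathcal{T}_3 whose product drops rank:
\alpha=\begin{pmatrix}1&2&3\\1&2&2\end{pmatrix},\qquad
\beta =\begin{pmatrix}1&2&3\\1&1&3\end{pmatrix},\qquad
\alpha\beta=\begin{pmatrix}1&2&3\\1&1&1\end{pmatrix},
% so \operatorname{rank}\alpha\beta=1<2 and hence \alpha\cdot\beta=0 in D_2^0.
```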

Then according to [18], we have the following result.

Lemma 2.2

[18] Under the multiplication \(\cdot \) given above, \(D_r^0\) is a completely 0-simple semigroup.

It follows immediately from the Rees Theorem (see [25, Theorem 3.2.3]) that \(D_r^0\) is isomorphic to some Rees matrix semigroup \(\mathcal {M}^0(G; I, \Lambda ; P).\) We remark here that if \(\mathbf {A}\) has no constants, then

$$\begin{aligned} D_1=\{\alpha \in {\text {End}}\mathbf {A}:~{\text {rank}}\alpha =1\} \end{aligned}$$

forms a completely simple semigroup under the multiplication defined in \({\text {End}}\mathbf {A},\) so that \(D_1\) is isomorphic to some Rees matrix semigroup \(\mathcal {M}(G; I, \Lambda ; P).\)

Generalising results obtained in [14] and [24], we have the following.

Lemma 2.3

[17] Let \(\mathbf {A}\) be an independence algebra of finite rank n. Let E denote the set of non-identity idempotents of \({\text {End}}\mathbf {A}\). Then

$$\begin{aligned} \langle E\rangle =\langle E_1\rangle ={\text {End}}\mathbf {A}\backslash {\text {Aut}}\mathbf {A} \end{aligned}$$

where \(E_1\) is the set of idempotents of rank \(n-1\) in \({\text {End}}\mathbf {A}.\)

We now recall part of the classification of independence algebras given by Urbanik in [34]. Note that in [34], an algebraic constant of an algebra is defined as the image of a constant term operation of \(\mathbf {A}\), which is in general a broader notion than the one introduced at the beginning of this section. However, the following lemma shows that for non-trivial independence algebras these two notions coincide. The proof can be extracted from [16], which deals with a wider class of algebras called basis algebras; for convenience we give a quick argument.

Proposition 2.4

For any independence algebra \(\mathbf {A}\) with \(|A|>1\), we have \(\langle \emptyset \rangle =C\), where C is the collection of all elements \(a\in A\) such that there is a constant term operation \(t(x_1,\ldots ,x_n)\) of A whose image is a. Consequently, if \(\mathbf {A}\) has no constants, then it has no constant term operations.

Proof

First, clearly we have \(\langle \emptyset \rangle \subseteq C\).

Let \(a\in C\setminus \langle \emptyset \rangle \) so that by definition there exists \(n\ge 1\) and a constant term operation \(t(x_1, \ldots ,x_n)\) with image \(\{a\}\). Put \(s(x)=t(x,\ldots ,x)\) and note that

$$\begin{aligned} s(a)=t(a,\ldots ,a)=a. \end{aligned}$$

As \(\{a\}\) is independent, it can be extended to a basis X of \(\mathbf {A}\). Now we choose an arbitrary \(b\in A\), and define a mapping \(\theta : X\longrightarrow \mathbf {A}\) such that \(a\theta =b.\) By the free basis property, \(\theta \) can be extended to an endomorphism \(\overline{\theta }\) of \(\mathbf {A}\). Then

$$\begin{aligned} b=a\overline{\theta }=s(a)\overline{\theta }=s(a\overline{\theta })=a. \end{aligned}$$

As b was an arbitrary element of A, we deduce \(|A|=1\), contradicting our assumption that \(|A|>1\). Thus no such a exists and \(\langle \emptyset \rangle =C\) as required. \(\square \)

We are only concerned in this paper with independence algebras with no constants, so here we give the classification of independence algebras only in this case. For the complete result we refer the reader to [35]. The reason for our restriction to rank at least 3 will become clear from later sections.

Theorem 2.5

[35] Let \(\mathbf {A}\) be an independence algebra of rank at least 3, and having no constants. Then one of the following holds:

  1. (i)

    there exists a permutation group \(\mathcal {G}\) of the set A such that the class of all term operations of A is the class of all functions given by the following formula

    $$\begin{aligned} \begin{aligned} t(x_1,\ldots ,x_m)=g(x_j), ~(m\in \mathbb {N} ,1\le j\le m)\end{aligned}\end{aligned}$$

    where \(g\in \mathcal {G}.\)

  2. (ii)

    \(\mathbf {A}\) is an affine algebra, namely, there is a division ring F, a vector space \(\mathbf {V}\) over F and a linear subspace \(\mathbf {V_0}\) of \(\mathbf {V}\) such that \(\mathbf {A}\) and \(\mathbf {V}\) share the same underlying set and the class of all term operations of \(\mathbf {A}\) is the class of all functions defined as

    $$\begin{aligned} t(x_1, \ldots ,x_n)=k_1x_1+\cdots +k_nx_n+a \end{aligned}$$

    where \(k_1,\ldots ,k_n\in F\) with \(k_1+\cdots +k_n=1\in F\), \(a\in V_0\) and \(n\ge 1.\)
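As a quick sanity check (ours, not from the source), the class of functions in Case (ii) is indeed closed under substitution, since the defining condition on the coefficients is preserved: substituting terms \(s_j(\bar{y})=\sum _i l_{ji}y_i+b_j\), each with coefficients summing to 1 and \(b_j\in V_0\), into t gives

```latex
t\big(s_1(\bar y),\dots,s_n(\bar y)\big)
  =\sum_i\Big(\sum_j k_j l_{ji}\Big)y_i+\Big(\sum_j k_j b_j+a\Big),
% where the new constant lies in V_0 (as b_j,a\in V_0), and the new
% coefficients again sum to 1:
\sum_i\sum_j k_j l_{ji}=\sum_j k_j\Big(\sum_i l_{ji}\Big)=\sum_j k_j=1 .
```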

It is easy to see (and an explicit argument is given in the finite rank case in [18]), that the \(\mathcal {H}\)-class of a rank \(\kappa \) idempotent in \({\text {End}}\mathbf {A}\) for any independence algebra \(\mathbf {A}\) is isomorphic to the automorphism group of a rank \(\kappa \) subalgebra.

We finish with some comments concerning the automorphism groups of \(\mathbf {A}\), where \(\mathbf {A}\) is as in Theorem 2.5. The reader may also refer to Cameron and Szabó’s paper [4], in which it is observed that the automorphism group of any independence algebra is a geometric group. The latter fact is used to obtain characterisations of independence algebras (in particular in the finite case) from a different standpoint to that in [35].

Remark 2.6

Let \(\mathbf {A}\) be as in Theorem 2.5. In Case (i), an easy application of the free basis property gives that the stabiliser subgroups \(\mathcal {S}_a\) and \(\mathcal {S}_b\) of any \(a,b\in A\) are equal and hence \(\mathcal {S}=\mathcal {S}_a\) is normal in \( \mathcal {G}.\) Consequently, if \(\mathcal {G}'=\mathcal {G}/\mathcal {S}\), we have that \(\mathbf {A}\) is a free \(\mathcal {G}'\)-act. It is well known that in this case Aut \(\mathbf {A}\) is isomorphic to \(\mathcal {G}'\wr \mathcal {S}_X\), where X is a basis of the free \(\mathcal {G}'\)-act.

In the rank 1 case it is then clear that \({\text {Aut}}\mathbf {A}\) is isomorphic to \(\mathcal {G}'\).

In Case (ii), we have that \({\text {Aut}}\mathbf {A}\) is a subgroup of the group of affine transformations of the underlying vector space \(\mathbf {V}\). Specifically, we claim that \({\text {Aut}}\mathbf {A}\) is the set of all maps of the form \(\theta t_b\), where \(\theta \in {\text {Aut}}\mathbf {V}\) fixes \(V_0\) pointwise and \(t_b\), for \(b\in A\), denotes translation by b.

It is easy to see that with \(\theta \) and b as given, \(\theta t_b\in {\text {Aut}}\mathbf {A}\). Conversely, let \(\psi \in {\text {Aut}}\mathbf {A}\), put \(b=0\psi \) and let \(\theta =\psi t_{-b}\). By considering the term \(t(x)=x+a,a\in V_0\), it is easy to see that \(a\theta =a\). For any \(\lambda \in F\), put \(t_{\lambda }(x,y)=(1-\lambda )x+\lambda y\). Now \(t_{\lambda }(0,y)\psi =t_{\lambda }(b,y\psi )\) gives that \((\lambda y)\theta =\lambda (y\theta )\) for any \(y\in A\). Finally, by considering \(t(x,y)=\frac{1}{2}x+\frac{1}{2}y\), we obtain that \(\theta \) preserves \(+\). By construction, \(\psi =\theta t_b\).

Notice that if \(\mathbf {V}_0\) is trivial, then \({\text {Aut}}\mathbf {A}\) is the affine group of \(\mathbf {V}\). If dim \(\mathbf {V}=n\in \mathbb {N}\) then \({\text {Aut}}\mathbf {V}\) is isomorphic to the general linear group GL\(_n(F)\). It is well known that in this case the affine group is isomorphic to a semidirect product \(V \rtimes \) GL\(_n(F)\) under the natural action. For a general \(\mathbf {V}_0\) we therefore have that \({\text {Aut}}\mathbf {A}\) is isomorphic to \(V \rtimes H\), where H is the subgroup of GL\(_n(F)\) that fixes \(V_0\) pointwise.

From the comments above it is clear that in the rank 1 case \({\text {Aut}}\mathbf {A}\) is \(V \rtimes F^*\) if \(V_0=\{ 0\}\), and F (under addition) otherwise.

3 Unary term operations and rank-1 \(\mathcal {D}\)-classes

Throughout this section let \(\mathbf {A}\) be an independence algebra with no constants and of rank \(n\in \mathbb {N}\). We now explore the \(\mathcal {D}\)-class D of a rank 1 idempotent of \({\text {End}}\mathbf {A}\). It is known that D is a completely simple semigroup, and we give a specific decomposition of D as a Rees matrix semigroup.

We first recall the following fact observed by Gould [18], the proof of which follows from the free basis property of independence algebras.

Lemma 3.1

[18] Let \(Y=\{y_1,\ldots , y_m\}\) be an independent subset of A of cardinality m. Then for any m-ary term operations s and t, we have that \(s(y_1, \ldots , y_m)=t(y_1, \ldots ,y_m)\) implies

$$\begin{aligned} s(a_1,\ldots ,a_m)=t(a_1,\ldots ,a_m) \end{aligned}$$

for all \(a_1,\ldots ,a_m\in A,\) so that \(s=t.\)

Let G be the set of all unary term operations of \(\mathbf {A}\). It is clear that G is a submonoid of the monoid of all maps from A to A, whose identity we denote by \(1_{\mathbf {A}}\).

Lemma 3.2

The set G forms a group under composition of functions.

Proof

Let t be an arbitrary unary term operation of \(\mathbf {A}\). Then for any \(x\in A\), we have \(t(x)\in \langle x\rangle \) and \(t(x)\not \in \langle \emptyset \rangle =\emptyset \). By the exchange property of independence algebras, we have that \(x\in \langle t(x)\rangle \), and so \(x=s(t(x))\) for some unary term operation s. As \(\{x\}\) is independent, we have \(st\equiv 1_{\mathbf {A}}\) by Lemma 3.1. Hence we have \(t(x)=t(s(t(x)))\), and since \(\{t(x)\}\) is independent, it again follows from Lemma 3.1 that \(ts\equiv 1_{\mathbf {A}}\), so that G is a group. \(\square \)
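As a sanity check (our remark, not from the source), Theorem 2.5 identifies G explicitly in each of the two cases with no constants, using the notation of Remark 2.6.

```latex
% Case (i): the unary term operations are the maps x\mapsto g(x), g\in\mathcal{G};
% two such maps coincide precisely when the g's differ by the common
% stabiliser \mathcal{S}, so
G \cong \mathcal{G}/\mathcal{S}=\mathcal{G}' .
% Case (ii): a unary term t(x)=k_1x+a must have k_1=1, i.e. t(x)=x+a with
% a\in V_0, and composing such maps adds the translating vectors, so
G \cong (V_0,+).
```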

Let \(\varepsilon \) be a rank 1 idempotent of \({\text {End}}\mathbf {A}\). It follows immediately from Lemma 2.1 that the \(\mathcal {D}\)-class of \(\varepsilon \) is given by

$$\begin{aligned} D=D_{\varepsilon }=\{\alpha \in {\text {End}}\mathbf {A}:~{\text {rank}}\alpha =1\} \end{aligned}$$

which is a completely simple semigroup by Lemma 2.2, so that each \(\mathcal {H}\)-class of D is a group.

Let I index the \(\mathcal {R}\)-classes in D and let \(\Lambda \) index the \(\mathcal {L}\)-classes in D, so that \(H_{i\lambda }\) denotes the \(\mathcal {H}\)-class of D which is the intersection of \(R_i\) and \(L_\lambda \). By Lemma 2.1, I indexes the kernels of the rank 1 elements and \(\Lambda \) the images. Note that \(H_{i\lambda }\) is a group, and we use \(\varepsilon _{i\lambda }\) to denote the identity of \(H_{i\lambda }\), for all \(i\in I\) and all \(\lambda \in \Lambda \). Let \(X=\{ x_{1},\ldots ,x_{n}\}\) be a basis of \(\mathbf {A}.\) It is notationally standard to use the same symbol 1 to denote a selected element from both I and \(\Lambda \), and here we put

$$\begin{aligned} 1=\langle x_1\rangle \in \Lambda \text{ and } 1=\langle (x_{1},x_{i}):1\le i\le n\rangle \in I, \end{aligned}$$

the latter of which is the congruence generated by \(\{(x_{1},x_{i}):1\le i\le n\}\). Then the identity of the group \(\mathcal {H}\)-class \(H_{11}\) (using obvious notation) is

$$\begin{aligned} \varepsilon _{11}=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ x_{1} &{} \cdots &{} x_{1} \end{array}\right) . \end{aligned}$$

As we pointed out before, the group \(\mathcal {H}\)-classes of D are the maximal subgroups of \({\text {End}}\mathbf {A}\) containing a rank 1 idempotent. All group \(\mathcal {H}\)-classes in D are isomorphic (see [25, Chap. 2]), so we only need to show that \(H_{11}\) is isomorphic to G. For notational convenience, put \(H=H_{11}\) and \(\varepsilon =\varepsilon _{11}.\) In what follows, we denote an element \(\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ s(x_{1}) &{} \cdots &{} s(x_{1}) \end{array}\right) \in {\text {End}}\mathbf {A}\) by \(\alpha _s,\) where \(s\in G.\)

Lemma 3.3

The group H is isomorphic to G.

Proof

It follows from Lemma 2.1 that

$$\begin{aligned} \alpha \in H \Longleftrightarrow \alpha =\alpha _s=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ s(x_{1}) &{} \cdots &{} s(x_{1}) \end{array}\right) \end{aligned}$$

for some unary term operation \(s\in G\). Define a mapping

$$\begin{aligned} \phi : H\longrightarrow G, \left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ s(x_{1}) &{} \cdots &{} s(x_{1}) \end{array}\right) \mapsto s. \end{aligned}$$

Clearly, \(\phi \) is an isomorphism (note that composition in G is right to left), so that \(H\cong G\) as required. \(\square \)
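The computation behind ‘clearly’ (ours): since endomorphisms act on the right of their arguments, the unary terms compose in the opposite order, which is exactly why composition in G must be read right to left.

```latex
x_i(\alpha _s\alpha _t)=\big(s(x_1)\big)\alpha _t
  =s\big(x_1\alpha _t\big)=s\big(t(x_1)\big),
% so \alpha_s\alpha_t=\alpha_{st}, where (st)(x)=s(t(x)) is the
% right-to-left composition in G, and \phi is a homomorphism.
```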

Since D is a completely simple semigroup, we have that D is isomorphic to some Rees matrix semigroup \(\mathcal {M}(H; I, \Lambda ; P)\), where \(P=(\mathbf{{p}}_{\lambda i}).\) For each \(i\in I\) and each \(\lambda \in \Lambda \), we put \(\mathbf{p}_{\lambda i}=\mathbf{q}_{\lambda }\mathbf{r}_{i}\) with \(\mathbf{r}_{i}=\varepsilon _{i1}\in H_{i1}\) and \(\mathbf{q}_{\lambda }=\varepsilon _{1\lambda } \in H_{1 \lambda }\).

It is easy to see that an element \(\alpha \in {\text {End}}\mathbf {A}\) with \({\text {im}}\alpha =\langle x_1\rangle \) is an idempotent if and only if \(x_1\alpha =x_1\), so that for each \(i\in I\), we must have

$$\begin{aligned} \mathbf{r}_{i}=\varepsilon _{i1}=\left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \end{aligned}$$

where \(s_{2}, \ldots , s_{n}\in G\).

On the other hand, for each \(\lambda \in \Lambda \), suppose that \(\lambda =\langle y\rangle \), where \(y=t(x_{1},\ldots ,x_{n})\in A\) for some term operation t. We put \(t'(x)=t(x,\ldots ,x)\) and \(s_{t}=(t')^{-1}\). Then we claim that

$$\begin{aligned} \mathbf{q}_{\lambda }=\varepsilon _{1\lambda }=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ s_{t}(y) &{} \cdots &{} s_{t}(y) \end{array}\right) . \end{aligned}$$

Obviously, we have \({\text {ker}}\mathbf{q}_{\lambda }=\langle (x_{1},x_{2}),\ldots ,(x_{1},x_{n})\rangle \) and \({\text {im}}\mathbf{q}_{\lambda }=\lambda \), so \(\mathbf{q}_{\lambda }\in H_{1\lambda }\). It follows from

$$\begin{aligned} y\mathbf{q}_\lambda =t(x_1,\ldots ,x_n)\mathbf{q}_\lambda = t(s_t(y),\ldots ,s_t(y))=t'(s_t(y))=y \end{aligned}$$

that \(\mathbf{q}_{\lambda }\) is an idempotent of \(H_{1\lambda }\) and so \(\mathbf{q}_{\lambda }=\varepsilon _{1\lambda }.\) This also implies that \(\mathbf{q}_{\lambda }\) does not depend on our choice of the generator y,  as each group \(\mathcal {H}\)-class has a unique idempotent.

Note that we must have special elements \(\lambda _1, \ldots , \lambda _n\) of \(\Lambda \) such that \(\lambda _k=\langle x_k\rangle \), for \(k=1,\ldots ,n\). To simplify our notation, at times we put \(k=\lambda _k\), for all \(k=1,\ldots ,n\). Clearly, we have

$$\begin{aligned} \mathbf{q}_k=\varepsilon _{1k}=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ x_{k} &{} \cdots &{} x_{k} \end{array}\right) \end{aligned}$$

for all \(k=1,\ldots ,n\).

We now examine the structure of the sandwich matrix \(P=(\mathbf{p}_{\lambda i})\). Let \(\lambda ,y,t,\mathbf{r}_i\) and \(\mathbf{q}_{\lambda }\) be defined as above. Then we have

$$\begin{aligned} \begin{aligned} \mathbf{p}_{\lambda i}&=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ s_{t}(y) &{} \cdots &{} s_{t}(y) \end{array}\right) \left( \begin{array}{cccc} x_{1} &{} x_2 &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \\&=\left( \begin{array}{ccccc} x_{1} &{} \cdots &{} x_{n} \\ s_{t}t(x_{1},s_{2}(x_{1}), \ldots , s_{n}(x_{1})) &{} \cdots &{} s_{t}t(x_{1},s_{2}(x_{1}), \ldots , s_{n}(x_{1})) \end{array}\right) \end{aligned} \end{aligned}$$

Particularly, if \(\lambda =1\) then

$$\begin{aligned} \begin{aligned} \mathbf{p}_{1i}&=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ x_{1} &{} \cdots &{} x_{1} \end{array}\right) \left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \\&=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ x_{1} &{} \cdots &{} x_{1} \end{array}\right) = \alpha _{1_{\mathbf {A}}}=\varepsilon _{11} \end{aligned} \end{aligned}$$

and if \(\lambda =k\) with \(k\in \{2,\ldots ,n\}\), then

$$\begin{aligned} \begin{aligned} \mathbf{p}_{k i}&=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ x_{k} &{} \cdots &{} x_{k} \end{array}\right) \left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \\&=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ s_{k}(x_{1}) &{} \cdots &{} s_{k}(x_{1}) \end{array}\right) = \alpha _{s_{k}}.\end{aligned} \end{aligned}$$

For convenience, we refer to the row \((\mathbf{p}_{\lambda i})\) for fixed \(\lambda \in \Lambda \) and the column \((\mathbf{p}_{\lambda i})\) for fixed \(i\in I\) as the \(\lambda \)-th row and i-th column, respectively. Notice that from above \((\mathbf{p}_{1 i})=(\alpha _{1_\mathbf {A}})\) and

$$\begin{aligned} (\mathbf{p}_{\lambda 1})=( \mathbf{q}_{\lambda }{} \mathbf{r}_1 )=(\varepsilon _{1\lambda }\varepsilon _{11})=(\varepsilon _{11})=(\alpha _{1_\mathbf {A}}). \end{aligned}$$
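The sandwich entries computed above can be sanity-checked in the special case where \(\mathbf {A}\) is a vector space, so that \({\text {End}}\mathbf {A}\cong M_n(F)\) and the unary term operations are the nonzero scalar multiplications. The sketch below is only an illustration in that model (the scalars chosen are arbitrary): each map is the matrix whose rows list the images of the basis vectors \(x_1,\ldots ,x_n\), composition is the matrix product, and \(\mathbf{q}_k\mathbf{r}_i\) comes out as the constant map \(\alpha _{s_k}\).

```python
from fractions import Fraction as Fr

n = 3
c = [None, Fr(1), Fr(2), Fr(-1, 3)]   # s_k(x) = c[k]*x; c[1] = 1, so s_1 is the identity

def compose(f, g):
    # Maps act on the right, x(fg) = (xf)g; with rows holding the images
    # of the basis vectors this is the matrix product f*g.
    return [[sum(f[i][k] * g[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def q(k):
    # q_k = epsilon_{1k}: the rank-1 idempotent sending every x_j to x_k
    return [[Fr(int(j == k - 1)) for j in range(n)] for _ in range(n)]

# r_i: x_1 -> x_1 and x_k -> s_k(x_1) = c[k]*x_1 for k >= 2
r_i = [[c[k + 1] if j == 0 else Fr(0) for j in range(n)] for k in range(n)]

for k in range(1, n + 1):
    p_ki = compose(q(k), r_i)
    # p_{k i} = q_k r_i is the constant map alpha_{s_k}: every row is c[k]*x_1;
    # for k = 1 this is alpha_{1_A} = epsilon_{11}
    assert all(row == [c[k], Fr(0), Fr(0)] for row in p_ki)
```

In particular, the product depends only on row k of \(\mathbf{r}_i\), mirroring the computations of \(\mathbf{p}_{1i}\) and \(\mathbf{p}_{ki}\) above.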

Furthermore, P has the following nice property.

Lemma 3.4

For any \(\alpha _{s_2},\ldots ,\alpha _{s_n}\in H_{11}\) with \(s_2, \ldots , s_n\in G,\) there exists some \(k\in I\) such that the k-th column of the sandwich matrix P is \((\alpha _{1_\mathbf {A}}, \alpha _{s_2},\ldots , \alpha _{s_n}, \ldots )^T\).

Proof

To show this, we only need to take \(\mathbf{r}_k=\left( \begin{array}{cccc} x_{1} &{} x_2 &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) .\) Then the k-th column is

$$\begin{aligned} (\mathbf{p}_{1 k}, \mathbf{p}_{2 k}, \ldots , \mathbf{p}_{n k}, \ldots )^T=(\alpha _{1_\mathbf {A}}, \alpha _{s_2},\ldots , \alpha _{s_n}, \ldots )^T. \end{aligned}$$

\(\square \)

4 Completely simple subsemigroups

For future convenience, we draw together some facts concerning the case where \(E=E(S)\), \(S=\langle E \rangle \), and S has a completely simple subsemigroup whose sandwich matrix has particular properties. Effectively, we are elaborating on a remark made at the beginning of [19, Sect. 3].

We begin with a crucial concept. Let \(E=E(S)\) be (any) biordered set. An E-square is a sequence \((e,f,g,h,e)\) of elements of E with \(e\,\mathcal {R}\,f\,\mathcal {L}\,g\,\mathcal {R}\,h\,\mathcal {L}\,e\). We draw such an E-square as \(\begin{bmatrix} e&f \\ h&g \end{bmatrix}\). An E-square \((e,f,g,h,e)\) is singular if there exists \(k\in E\) such that either:

$$\begin{aligned} \left\{ \begin{array}{ll} ek=e,\, fk=f, \,ke=h,\, kf=g \text{ or }\\ ke=e,\, kh=h,\, ek=f,\, hk=g. \end{array}\right. \end{aligned}$$

We call a singular square for which the first condition holds an up-down singular square, and one for which the second condition holds a left-right singular square.

Let D be a completely simple semigroup, so that D is a single \(\mathcal {D}\)-class and a union of groups. As is standard, and as we did in Sect. 3, we use I and \(\Lambda \) to index the \(\mathcal {R}\)-classes and the \(\mathcal {L}\)-classes, respectively, of D. Let \(H_{i\lambda }\) denote the \(\mathcal {H}\)-class corresponding to the intersection of the \(\mathcal {R}\)-class indexed by i and the \(\mathcal {L}\)-class indexed by \(\lambda ,\) and denote the identity of \(H_{i\lambda }\) by \(e_{i\lambda },\) for any \(i\in I\), \(\lambda \in \Lambda .\) Without loss of generality we assume that \(1\in I \cap \Lambda \), so that \(H=H_{11}=H_{e_{11}}\) is a group \(\mathcal {H}\)-class with identity \(e=e_{11}\).

By Rees’ Theorem, we know that D is isomorphic to some Rees matrix semigroup

$$\begin{aligned} \mathcal {M}=\mathcal {M}(H; I, \Lambda ; P), \end{aligned}$$

where \(P=(p_{\lambda i})\) is a regular \(\Lambda \times I\) matrix over H.

Let \(E=E(S)\), where \(S=\langle E \rangle \), and let D, as above, be a completely simple subsemigroup of S. In view of Proposition 1.1, I and \(\Lambda \) also label the set of \(\mathcal {R}\)-classes and the set of \(\mathcal {L}\)-classes in the \(\mathcal {D}\)-class \(\overline{D}=D_{\overline{e}}\) of \(\overline{e}\) in \({\text {IG}}(E)\). Let \(\overline{H}_{i\lambda }\) denote the \(\mathcal {H}\)-class corresponding to the intersection of the \(\mathcal {R}\)-class indexed by i and the \(\mathcal {L}\)-class indexed by \(\lambda \) in \({\text {IG}}(E)\), so that \(\overline{H}_{i\lambda }=H_{\overline{e_{i\lambda }}}\) has an identity \(\overline{e_{i\lambda }},\) for any \(i\in I, \lambda \in \Lambda .\)

Exactly as in [19], we may use results of [15, 23] and [5] to locate a set of generators for \(\overline{H}=\overline{H}_{11}\). Note that the assumption that the set of generators in [5] is finite is not critical.

Lemma 4.1

The group \(\overline{H}\) is generated (as a group) by elements of the form \(\overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}}\). Moreover, the inverse of \(\overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}}\) is \(\overline{e_{1\lambda }}~\overline{e_{i1}}\).

It is known from [3] that every singular square in E is a rectangular band; however, the converse is not always true. We say that D is singularisable if an E-square of elements from D is singular if and only if it is a rectangular band. The next lemma is entirely standard; proofs in our notation are given in [19].

Lemma 4.2

Let D be isomorphic to \(\mathcal {M}(H; I, \Lambda ; P)\) and be singularisable.

(1) For any idempotents \(e,f,g\in D\), \(ef=g\) implies \(\overline{e}~\overline{f}=\overline{g}\).

(2) If \(e_{1\lambda }e_{i1}=e_{11}\), then \(\overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}}=\overline{e_{11}}.\)

(3) Suppose that the column \((p_{\lambda 1})\) and the row \((p_{1 i})\) of \(P=(p_{\lambda i})\) consist entirely of \(e_{11}.\)

    (i) If \(p_{\lambda i}=p_{\lambda l}\) in the sandwich matrix P, then

    $$\begin{aligned} \overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}}= \overline{e_{11}}~\overline{e_{l\lambda }}~\overline{e_{11}}. \end{aligned}$$

    (ii) If \(p_{\lambda i}=p_{\mu i}\) in the sandwich matrix P, then

    $$\begin{aligned} \overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}}=\overline{e_{11}}~ \overline{e_{i\mu }} ~\overline{e_{11}}. \end{aligned}$$

Definition 4.3

Let \(i,j\in I\) and \(\lambda ,\mu \in \Lambda \) be such that \(p_{\lambda i}=p_{\mu j}\). We say that \((i,\lambda ),(j,\mu )\) are connected if there exist

$$\begin{aligned} i=i_0,i_1,\ldots , i_m=j\in I \text{ and } \lambda =\lambda _0,\lambda _1,\ldots , \lambda _m=\mu \in \Lambda \end{aligned}$$

such that for \(0\le k< m\) we have \(p_{\lambda _k i_k}=p_{\lambda _{k+1}i_{k+1}}\), and either \(\lambda _k=\lambda _{k+1}\) or \(i_k=i_{k+1}\).
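Connectedness is a purely combinatorial condition on the positions of the matrix: two positions carrying the same entry are connected when a chain of equal-valued positions joins them, consecutive positions sharing a row or a column. A minimal sketch of this check (the sample matrices below are illustrative only, not taken from the text):

```python
from collections import deque

def connected(P, start, target):
    # Positions are (row, column) pairs, i.e. (lambda, i) in the notation
    # of the definition.  A step moves to another position carrying the
    # same entry, in the same row or the same column.
    val = P[start[0]][start[1]]
    if P[target[0]][target[1]] != val:
        return False
    rows, cols = len(P), len(P[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == target:
            return True
        for pos in [(r, c2) for c2 in range(cols)] + [(r2, c) for r2 in range(rows)]:
            if pos not in seen and P[pos[0]][pos[1]] == val:
                seen.add(pos)
                queue.append(pos)
    return False
```

For instance, in \(\begin{pmatrix} 0&0\\ 1&0 \end{pmatrix}\) the corner positions holding 0 are connected via the top-right entry, whereas in \(\begin{pmatrix} 0&1\\ 1&0 \end{pmatrix}\) the two 0 entries share no row or column and are not connected.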

We immediately have the next corollary.

Corollary 4.4

Let D be isomorphic to \(\mathcal {M}(H; I, \Lambda ; P)\) and be singularisable, and suppose that the column \((p_{\lambda 1})\) and the row \((p_{1 i})\) of \(P=(p_{\lambda i})\) consist entirely of \(e_{11}.\) Then for any \(i, j\in I\) and \(\lambda , \mu \in \Lambda \) such that \((i,\lambda ),(j,\mu )\) are connected, we have

$$\begin{aligned} \overline{e_{11}}~\overline{e_{i \lambda }}~\overline{e_{11}}=\overline{e_{11}}~ \overline{e_{j \mu }}~\overline{e_{11}}. \end{aligned}$$

The next lemma can also be extracted from [19]. Note that the final condition of the hypothesis always holds if, for any collection of elements \(g_{\lambda }\in H\), \(\lambda \in \Lambda \), with \(g_1=e\), there is some \(k\in I\) such that \(p_{\lambda k}=g_{\lambda }\) for all \(\lambda \in \Lambda \); this is the situation encountered in [19], and it also implies that P is connected (that is, any two pairs \((i,\lambda ),(j,\mu )\) with \(p_{\lambda i}=p_{\mu j}\) are connected).

Lemma 4.5

Let D be isomorphic to \(\mathcal {M}(H; I, \Lambda ; P)\) and be singularisable, such that the column \((p_{\lambda 1})\) and the row \((p_{1 i})\) consist entirely of \(e_{11}\) and P is connected. Then with

$$\begin{aligned} w_a= \overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}} \quad \text{ where } \quad a=p_{\lambda i}^{-1}\in H \end{aligned}$$

we have that \(w_a\) is well defined.

Suppose also that \(|\Lambda |\ge 3\) and that for any \(a,b\in H\) there exist \(\lambda ,\mu \in \Lambda \) with \(\lambda , \mu \) and 1 all distinct, and \(j,k\in I\), such that

$$\begin{aligned} p_{\lambda j}=a^{-1}, \; p_{\mu j}=b^{-1}a^{-1}, \; p_{\lambda k}=e_{11}\quad \text{ and } \quad p_{\mu k}=b^{-1}. \end{aligned}$$

Then for any \(a,b\in H\) we have

$$\begin{aligned} w_{a}w_{b}=w_{ab}\quad \text{ and } \quad w_{a}^{-1}=w_{a^{-1}}. \end{aligned}$$

In this case, any element of \(\overline{H}\) can be expressed as \(\overline{e_{11}}~\overline{e_{i\lambda }}~\overline{e_{11}}\) for some \(i\in I\) and \(\lambda \in \Lambda \).

5 A set of generators and relations of \(\overline{H}\)

Let E be the biordered set of idempotents of the endomorphism monoid \({\text {End}}\mathbf {A}\) of an independence algebra \(\mathbf {A}\) of finite rank n with no constants, where \(n\ge 3.\) Using the notation of Sect. 3, we now apply the results of Sect. 4 to investigate a set of generators for the maximal subgroup \(\overline{H}=H_{\overline{\varepsilon _{11}}}\) of \({\text {IG}}(E)\).

We immediately have:

Lemma 5.1

Every element in \(\overline{H}\) is a product of elements of the form \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}\) and \((\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}})^{-1}(=\overline{\varepsilon _{1\lambda }}~\overline{\varepsilon _{i1}})\), where \(i\in I\) and \(\lambda \in \Lambda \).

Next, we consider the singular squares of the rank 1 \(\mathcal {D}\)-class D of \({\text {End}}\mathbf {A}. \)

Lemma 5.2

The semigroup D is singularisable.

Proof

Consider an E-square \(\begin{bmatrix} \alpha&\beta \\ \delta&\gamma \end{bmatrix}\). If it is singular, then it follows from [3] that it is a rectangular band.

Conversely, suppose that \(\{\alpha , \beta , \gamma , \delta \}\) is a rectangular band in D. Let \(\mathbf {B}\) be the subalgebra of \(\mathbf {A}\) generated by \({\text {im}}\alpha \cup {\text {im}}\beta ,\) that is, \(B=\langle {\text {im}}\alpha \cup {\text {im}}\beta \rangle ,\) and let U be a basis of \(\mathbf {B}\). As \(\mathbf {A}\) is an independence algebra, any independent subset of A can be extended to a basis of \(\mathbf {A}\), so that we may extend U to a basis \(U\cup W\) of \(\mathbf {A}\).

Now we define an element \(\sigma \in {\text {End}}\mathbf {A}\) by

$$\begin{aligned} x\sigma = \left\{ \begin{array}{lll} x &{} \quad \text{ if } x\in U;\\ x\gamma &{} \quad \text{ if } x\in W. \end{array} \right. \end{aligned}$$

Notice that, for any \(x\in A\), \(x\gamma \in {\text {im}}\gamma ={\text {im}}\beta \subseteq B\), and so \({\text {im}}\sigma =B\). Since \(\sigma |_{B}=I_{B}\), we have \(\sigma ^{2}=\sigma \), so that \(\sigma \) is an idempotent of \({\text {End}}\mathbf {A}\) satisfying \(\alpha \sigma =\alpha \) and \(\beta \sigma =\beta .\)

Let \(x\in U\). Then as \(x\in B\) we have \(x=t(a_1,\ldots , a_r,b_1,\ldots ,b_s)\) for some term t, where \(a_i\in {\text {im}}\alpha \), \(1 \le i\le r\), and \(b_j\in {\text {im}}\beta \), \(1\le j\le s\). Since \(\{ \alpha , \beta ,\gamma ,\delta \}\) is a rectangular band we have \(\alpha =\beta \delta \), and as \({\text {im}}\alpha ={\text {im}}\delta \) we see

$$\begin{aligned} a_i\sigma \alpha =a_i\alpha =a_i=a_i\delta ,\quad 1\le i\le r \end{aligned}$$

and

$$\begin{aligned} b_j\sigma \alpha =b_j\alpha =b_j\beta \delta =b_j\delta ,\quad 1\le j\le s. \end{aligned}$$

It follows that \(x\sigma \alpha =x\delta \). For any \(w\in W\) we have \(w\sigma \alpha =w\gamma \alpha =w\delta \), so that we obtain \(\sigma \alpha =\delta \). Similarly, \(\sigma \beta =\gamma \). Hence \(\begin{bmatrix} \alpha&\beta \\ \delta&\gamma \end{bmatrix}\) is a singular square in D. \(\square \)

We have already noticed in Sect. 3 that the first row \((\mathbf{p}_{1 i})\) and the first column \((\mathbf{p}_{\lambda 1})\) of the sandwich matrix \(P=(\mathbf{p}_{\lambda i})\) consist entirely of \(\varepsilon _{11},\) so by Lemmas  4.2 and 5.2 we have the following lemma.

Lemma 5.3

(1) For any idempotents \(\alpha ,\beta ,\gamma \in D\), \(\alpha \beta =\gamma \) implies \(\overline{\alpha }~\overline{\beta }=\overline{\gamma }\).

(2) If \(\varepsilon _{1\lambda }\varepsilon _{i1}=\varepsilon _{11}\), or equivalently, if \({\varepsilon _{11}}{\varepsilon _{i\lambda }}{\varepsilon _{11}}={\varepsilon _{11}}\), then \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}=\overline{\varepsilon _{11}}.\)

(3) For any \(\lambda , \mu \in \Lambda \) and \(i,j\in I\), \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}\) implies \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}=\overline{\varepsilon _{11}}~\overline{\varepsilon _{j\lambda }}~\overline{\varepsilon _{11}}\); \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\mu i}\) implies \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}=\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\mu }}~\overline{\varepsilon _{11}}\).

Now we divide the sandwich matrix \(P=(\mathbf{p}_{\lambda i})\) into two blocks, which we call the good block and the bad block. The good block consists of all rows \((\mathbf{p}_{k i})\) with \(k\in \{1,\ldots , n\}\); the rest of P forms the bad block. Note that the bad block only occurs in the affine case.

For any \(i, j\in I\) and \(\lambda , \mu \in \{1, \ldots , n\}\) with \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\mu j}\), it follows from Lemma 3.4 that there exists \(l\in I\) such that \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda l}=\mathbf{p}_{\mu l}=\mathbf{p}_{\mu j}.\) From Corollary 4.4 and Lemma 5.2 we have the following result for the good block of P.

Lemma 5.4

For any \(i,j \in I\) and \(\lambda , \mu \in \{1, \ldots , n\}\), \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\mu j}\) implies \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}=\overline{\varepsilon _{11}}~\overline{\varepsilon _{j\mu }}~\overline{\varepsilon _{11}}\).

If the bad block is empty, then Corollary 5.11 holds with no further effort. Suppose now that the bad block is non-empty. Our task is then to deal with the generators corresponding to entries in the bad block. The main strategy is to find a ‘bridge’ connecting the bad block to the good block: for each \(\lambda \in \Lambda \) and \(i\in I\), we seek \(k\in \{1, \ldots , n\}\) and \(j\in I\) such that \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}=\mathbf{p}_{k j}\). For this purpose, we consider the following cases:

Lemma 5.5

Suppose that we have

$$\begin{aligned} \mathbf{r}_{i}=\left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \end{aligned}$$

for some \(i\in I\) and \(\lambda =\langle y\rangle \) with \(y=t(x_{l_1},\ldots , x_{l_k})\) such that

$$\begin{aligned} 1=l_1<\cdots<l_k\le n \text{ and } k<n. \end{aligned}$$

Then there exists some \(j\in I\) and \(m\in [1,n]\) such that \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}=\mathbf{p}_{m j}\).

Proof

By assumption, we have

$$\begin{aligned} \mathbf{p}_{\lambda i}=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ w(x_1) &{} \cdots &{} w(x_1)\end{array}\right) \end{aligned}$$

where \(w(x)=s_{t}t(x,s_{{l_2}}(x),\ldots ,s_{l_k}(x))\). Define \(\mathbf{r}_j\) by \(x_1\mathbf{r}_j=x_1\), \(x_{l_2}\mathbf{r}_j=s_{l_2}(x_1), \ldots , x_{l_k}\mathbf{r}_j=s_{l_k}(x_1)\), and \(x_m\mathbf{r}_j=w(x_1)\) for every \(m\in [1,n]\setminus \{l_1, l_2, \ldots , l_k\}\). Note that such an m must exist, as by assumption \(k<n.\) Since the restriction of \(\mathbf{r}_j\) to \(\{x_{l_1}, \ldots , x_{l_k}\}\) is the same as that of \(\mathbf{r}_i,\) we have \(\mathbf{p}_{\lambda j}=\mathbf{p}_{\lambda i}\). Further, as \(x_m\mathbf{r}_j=w(x_1)\), we deduce that \(\mathbf{p}_{\lambda i}=\mathbf{p}_{m j}\), so that \(\mathbf{p}_{\lambda j}=\mathbf{p}_{\lambda i}=\mathbf{p}_{m j}\) as required. \(\square \)

Lemma 5.6

Let

$$\begin{aligned} \mathbf{r}_{i}=\left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \end{aligned}$$

for some \(i\in I\), \(\lambda =\langle y\rangle \) with \(y=t(x_{l_1}, \ldots , x_{l_k})\) such that \(1\ne l_1<\cdots <l_k\le n.\) Then there exists some \(j\in I\) such that \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}=\mathbf{p}_{2 j}\).

Proof

It follows from our assumption that

$$\begin{aligned} \mathbf{p}_{\lambda i}=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ w(x_1) &{} \cdots &{} w(x_1) \end{array}\right) \end{aligned}$$

where \(w(x)=s_{t}t(s_{l_1}(x),\ldots , s_{l_k}(x))\). Then as \(s_t=(t')^{-1}\), we have

$$\begin{aligned} w(x)=s_{t}t'(w(x))=s_{t}t(w(x),\ldots , w(x)). \end{aligned}$$

Let \(\mathbf{r}_{j}=\left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} w(x_{1}) &{} \cdots &{} w(x_{1}) \end{array}\right) \). Then \(s_{t}t(w(x_{1}),\ldots , w(x_{1})) =w(x_1)\) and by similar arguments to those of Lemma 5.5 we have \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}=\mathbf{p}_{2 j}\) as required. \(\square \)
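The key identity \(w(x)=s_tt(w(x),\ldots ,w(x))\) uses only the fact that \(s_t\) is inverse to the unary term \(t'(x)=t(x,\ldots ,x)\). As a sanity check, the following sketch verifies it in the affine model of Theorem 2.5, with \(V_0\) taken one-dimensional over \(\mathbb {Q}\) and the coefficients chosen purely for illustration:

```python
from fractions import Fraction as Fr

k = [Fr(1, 2), Fr(1, 4), Fr(1, 4)]   # coefficients of t: all nonzero, summing to 1
a = Fr(3)                             # translation part of t, an element of V_0

def t(ys):
    # t(y_1, y_2, y_3) = k_1*y_1 + k_2*y_2 + k_3*y_3 + a
    return sum(ki * yi for ki, yi in zip(k, ys)) + a

def s_t(x):
    # t'(x) = t(x, ..., x) = x + a, so its inverse is s_t(x) = x - a
    return x - a

# w = s_t t(w, ..., w) holds for every value of w
for w in [Fr(0), Fr(7), Fr(-5, 2)]:
    assert s_t(t([w, w, w])) == w
```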

We are now left only with the case where \(y=t(x_{1},\ldots ,x_{n})\) is essentially n-ary, in the sense that there exists no proper subset \(X'\) of the basis \(X=\{x_1, \ldots , x_n \}\) such that \(y\in \langle X'\rangle \); here more effort is needed.

Let G be the group of all unary term operations on an independence algebra \(\mathbf {A}\) of finite rank \(n\ge 3\) with no constants, and let \(s_2, \ldots , s_{n-1}\) be arbitrarily chosen and fixed elements of G. With t essentially n-ary, define a mapping \(\theta \) as follows:

$$\begin{aligned} \theta : G\longrightarrow G, u(x)\longmapsto t(x,s_2(x),\ldots , s_{n-1}(x), u(x)). \end{aligned}$$

Lemma 5.7

The mapping \(\theta \) defined as above is one-one.

Proof

First, we claim that

$$\begin{aligned} \{x_{1},\ldots ,x_{n-1},t(x_{1},\ldots ,x_{n})\} \end{aligned}$$

is an independent subset of A. Since \(t(x_1,\ldots ,x_n)\) is essentially n-ary, we have that \(t(x_1, \ldots , x_n)\not \in \langle x_{1},\ldots , x_{n-1}\rangle \). Suppose that \(x_i\) is an element of the subalgebra \(\langle x_1,\ldots , x_{i-1},x_{i+1},\ldots , x_{n-1},t(x_{1},\ldots ,x_{n})\rangle .\) Then as \(x_{i}\not \in \langle x_1,\ldots , x_{i-1},x_{i+1},\ldots , x_{n-1}\rangle \), by the exchange property (EP), we must have that \(t(x_{1},\ldots ,x_{n})\in \langle x_{1},\ldots , x_{n-1}\rangle ,\) a contradiction. As any n-element independent set forms a basis of \(\mathbf {A}\), we have

$$\begin{aligned} \mathbf {A}=\langle x_{1},\ldots ,x_{n-1},t(x_{1},\ldots ,x_{n})\rangle \end{aligned}$$

and so \(x_{n}=w(x_{1},\ldots , x_{n-1},t(x_{1},\ldots ,x_{n}))\) for some n-ary term operation w. Let u and v be unary term operations such that \(u(x)\theta =v(x)\theta .\) Then by the definition of \(\theta ,\) we have

$$\begin{aligned} t(x,s_2(x),\ldots , s_{n-1}(x), u(x))=t(x,s_2(x),\ldots , s_{n-1}(x),v(x)). \end{aligned}$$

On the other hand, it follows from Lemma 3.1 that

$$\begin{aligned} u(x)=w(x,s_2(x),\ldots , s_{n-1}(x), t(x,s_2(x),\ldots , s_{n-1}(x),u(x))) \end{aligned}$$

and

$$\begin{aligned} v(x)=w(x,s_2(x),\ldots , s_{n-1}(x), t(x,s_2(x),\ldots , s_{n-1}(x),v(x))). \end{aligned}$$

Therefore, we have \(u(x)=v(x)\), so that \(\theta \) is one-one. \(\square \)

Corollary 5.8

If \(\mathbf {A}\) is a finite independence algebra, then the mapping \(\theta \) defined as above is onto.

If \(\mathbf {A}\) is infinite, we have not so far found a direct way to show that the mapping \(\theta \) defined as above is onto; in this case we need the classification described in Theorem 2.5. As we have assumed that the bad block of P is non-empty, \(\mathbf {A}\) is an affine algebra.

Lemma 5.9

If \(\mathbf {A}\) is an affine algebra, then the mapping \(\theta \) defined as above is onto.

Proof

Let \(\mathbf {V_0}\) be the subspace satisfying the condition stated in Theorem 2.5. Let \(t(y_1,\ldots ,y_n)\) be an essentially n-ary term operation with \(s_2, \ldots , s_{n-1}\in G\). Then we have

$$\begin{aligned} t(y_1,\ldots , y_n)=k_1 y_1+\cdots +k_n y_n+a \end{aligned}$$

and

$$\begin{aligned} s_2(x)=x+a_2, \ldots , s_{n-1}(x)=x+a_{n-1} \end{aligned}$$

where for all \(i\in [1,n],\) \(k_i\ne 0\), \(k_1+\cdots +k_n=1\) and \(a,a_2, \ldots , a_{n-1}\in V_0.\) For any unary term operation \(v(x)=x+c\in G\) with \(c\in V_0\), by putting \(s_n(x)=x+a_n\), where

$$\begin{aligned} a_n=k_n^{-1}(c-k_2a_2-\cdots -k_{n-1}a_{n-1}-a)\in V_0 \end{aligned}$$

we have \(t(x, s_2(x),\ldots ,s_{n-1}(x),s_n(x))=v(x)\), and hence \(\theta \) is onto. \(\square \)
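The choice of \(a_n\) in the proof can be verified by direct computation. The sketch below works in a one-dimensional affine model over \(\mathbb {Q}\), with illustrative values for the coefficients \(k_i\) and the translations (these particular values are assumptions for the example, not from the text):

```python
from fractions import Fraction as Fr

n = 3
k = [Fr(1, 2), Fr(1, 3), Fr(1, 6)]   # k_1 + k_2 + k_3 = 1, all k_i nonzero
a = Fr(2)                             # translation part of t
a2 = Fr(5)                            # s_2(x) = x + a_2
c = Fr(-7)                            # target translation: v(x) = x + c

def t(ys):
    # t(y_1, y_2, y_3) = k_1*y_1 + k_2*y_2 + k_3*y_3 + a
    return sum(ki * yi for ki, yi in zip(k, ys)) + a

# a_n = k_n^{-1}(c - k_2 a_2 - ... - k_{n-1} a_{n-1} - a), as in the proof
an = (c - k[1] * a2 - a) / k[2]

# then t(x, s_2(x), ..., s_n(x)) = v(x) = x + c for every x
for x in [Fr(0), Fr(1), Fr(10, 3)]:
    assert t([x, x + a2, x + an]) == x + c
```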

Lemma 5.10

Let

$$\begin{aligned} \mathbf{r}_{i}=\left( \begin{array}{cccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n} \\ x_{1} &{} s_{2}(x_{1}) &{} \cdots &{} s_{n}(x_{1}) \end{array}\right) \end{aligned}$$

for some \(i\in I\) and let \(\lambda =\langle y\rangle ,\) where \(y=t(x_1,\ldots , x_n)\) is an essentially n-ary term operation on \(\mathbf {A}\). Then there exists some \(j\in I\) such that \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}=\mathbf{p}_{2j}\).

Proof

Put \(w(x)=s_{t}t(x,s_{2}(x), \ldots , s_{n}(x))\). By assumption, we have

$$\begin{aligned} \mathbf{p}_{\lambda i}=\left( \begin{array}{ccc} x_{1} &{} \cdots &{} x_{n} \\ w(x_1)&{} \cdots &{} w(x_1)\end{array}\right) . \end{aligned}$$

It follows from Lemma 5.9 that the mapping

$$\begin{aligned} \theta : G\longrightarrow G, u(x)\longmapsto t(x,w(x),\ldots , w(x), u(x)) \end{aligned}$$

is onto, so that there exists some \(h(x)\in G\) such that

$$\begin{aligned} t(x,w(x),\ldots , w(x),h(x))=s_{t}^{-1}(w(x)) \end{aligned}$$

and so

$$\begin{aligned} w(x)=s_{t}t(x,w(x),\ldots , w(x),h(x)). \end{aligned}$$

Let

$$\begin{aligned} \mathbf{r}_{j}=\left( \begin{array}{cccccccc} x_{1} &{} x_{2} &{} \cdots &{} x_{n-1} &{} x_{n} \\ x_{1} &{} w(x_{1}) &{} \cdots &{} w(x_1) &{} h(x_{1}) \end{array}\right) . \end{aligned}$$

Again by similar arguments to those of Lemma 5.5 we have \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\lambda j}=\mathbf{p}_{2 j}\). \(\square \)

In view of Lemmas 5.4, 5.5, 5.6 and 5.10, we deduce:

Corollary 5.11

For any \(i,j \in I\) and \(\lambda , \mu \in \Lambda \), if \(\mathbf{p}_{\lambda i}=\mathbf{p}_{\mu j}\) in the sandwich matrix P,  then \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}=\overline{\varepsilon _{11}}~\overline{\varepsilon _{j\mu }}~\overline{\varepsilon _{11}}\).

In the light of Corollary 5.11, we may unambiguously denote the generator \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}\) with \(\mathbf{p}_{\lambda i}=\alpha ^{-1}\) by \(w_{\alpha }\), where \(\alpha \in H\). Furthermore, as \(n\ge 3\), for any \(\alpha , \beta \in H\) the sandwich matrix P has two columns of the following forms:

$$\begin{aligned} \left( \varepsilon _{11}, \alpha ^{-1}, \beta ^{-1}\alpha ^{-1},\ldots \right) ^{T} \text{ and } \left( \varepsilon _{11}, \varepsilon _{11}, \beta ^{-1},\ldots \right) ^{T} \end{aligned}$$

by Lemma 3.4. Therefore, by Lemma 5.2, Corollary 5.11 and Lemma 4.5 we have:

Lemma 5.12

For any \(\alpha , \beta \in H\), \(w_{\alpha }w_{\beta }=w_{\alpha \beta }\) and \(w_{\alpha ^{-1}}=w_{\alpha }^{-1}\).

6 The main theorem

We are now in a position to state our main result.

Theorem 6.1

Let \({\text {End}}\mathbf {A}\) be the endomorphism monoid of an independence algebra \(\mathbf {A}\) of finite rank \(n\ge 3\) with no constants, let E be the biordered set of idempotents of \({\text {End}}\mathbf {A}\), and let \({\text {IG}}(E)\) be the free idempotent generated semigroup over E. Then for any rank 1 idempotent \(\varepsilon \in E,\) the maximal subgroup \(\overline{H}\) of \({\text {IG}}(E)\) containing \(\overline{\varepsilon }\) is isomorphic to the maximal subgroup H of \({\text {End}}\mathbf {A}\) containing \(\varepsilon \), and hence to the group G of all unary term operations of \(\mathbf {A}\).

Proof

As all group \(\mathcal {H}\)-classes in the same \(\mathcal {D}\)-class are isomorphic, we only need to show that \(\overline{H}=H_{\overline{\varepsilon _{11}}}\) is isomorphic to G. It follows from Lemmas 5.1 and 5.12 that

$$\begin{aligned} \overline{H}=\{\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}:~i\in I, \lambda \in \Lambda \}. \end{aligned}$$

Let \(\overline{\varvec{\phi }}\) be the restriction to \(\overline{H}\) of the natural map \(\varvec{\phi }: {\text {IG}}(E)\longrightarrow \langle E\rangle \). Then by (IG4), we know that

$$\begin{aligned} \overline{\varvec{\phi }}: \overline{H}\longrightarrow H, ~\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}\mapsto \varepsilon _{11}\varepsilon _{i\lambda }\varepsilon _{11} \end{aligned}$$

is an onto morphism. Furthermore, \(\overline{\varvec{\phi }}\) is one-one, because if we have

$$\begin{aligned} (\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}})~\overline{\varvec{\phi }}=\varepsilon _{11} \end{aligned}$$

then \(\varepsilon _{11}\varepsilon _{i\lambda }\varepsilon _{11}=\varepsilon _{11}\) and by Lemma 5.3, \(\overline{\varepsilon _{11}}~\overline{\varepsilon _{i\lambda }}~\overline{\varepsilon _{11}}=\overline{\varepsilon _{11}}.\) We therefore have \(\overline{H}\cong H\cong G\). \(\square \)

Note that if \(\mathbf {A}\) has rank \(n\in \mathbb {N}\) and \(\varepsilon \) is the idempotent of rank n, that is, the identity map, then \(\overline{H}\) is the trivial group, since it is generated (in \({\text {IG}}(E)\)) by idempotents of the same rank. On the other hand, if \(\varepsilon \) has rank \(n-1\), then \(\overline{H}\) is a free group, as there are no non-trivial singular squares in the \(\mathcal {D}\)-class of \(\varepsilon \) in \({\text {End}}\mathbf {A}\) (see [3]).