1 Introduction

Since Tian introduced them as the subject of his Ph.D. thesis in 2004 [12], evolution algebras have proven to be a very useful tool in several disciplines, not only scientific ones such as Biology, Mathematics, Physics or Engineering, but also in Economics and Logistics. Let us recall that an evolution algebra is an algebra endowed with a basis (called a natural basis) for which the product of two different generators is always zero, that is, an algebra \(E \equiv (E,+,\cdot )\) over a field \(\mathbb {K}\), with a basis \({\mathcal {B}}=\{e_i,i\in \Lambda \}\) of E, where \(\Lambda \) is an index set, such that \(e_i\cdot e_j=0\) if \(i\ne j\). Note that this definition implies that \(e_j\cdot e_j=\sum _{i\in \Lambda }a_{ij}e_i\), with \(a_{ij}\in \mathbb {K}\), where only finitely many of the \(a_{ij}\) (which are called structure constants) are non-zero for each fixed \(j\in \Lambda \). In this way, the product on E is determined by the structure matrix \(A=(a_{ij})\). Note, therefore, that the generators of these algebras behave in exactly the opposite way to those of a Lie algebra, since in the latter the product (bracket) of every generator with itself is always zero.

Taking this definition into consideration, if in addition the square of every generator (that is, the product of each generator by itself) is different from zero, the evolution algebra is called non-degenerate; otherwise, it is called degenerate. A particular case of non-degenerate evolution algebras is the so-called non-zero trivial evolution algebra, in which \(e_i\cdot e_i=k_ie_i\), with \(k_i\) a nonzero scalar. On the other hand, as an example of a degenerate algebra, we have the zero trivial evolution algebra, in which the product of any pair of generators is zero.

Starting from the publication by Tian and Petr Vojtechovsky in 2006 [14], and later by Tian himself in 2008 [13], many researchers have dealt with these algebras, both for their own importance and for their applications to other disciplines. Among these works we can mention, for instance, those which relate evolution algebras to graphs and combinatorial structures [5, 9, 10], those which link them with Markov chains [2, 8] and those which use these algebras to study distinct biological populations [4, 6, 7, 11].

However, there are still quite a few open topics concerning them, such as the problem of their classification, the determination of new properties, or the study of their hierarchy, subalgebras and ideals. One of these aspects, on which the literature is very scarce to date, is the solvability and nilpotency of these algebras, which is precisely the goal of this paper. Note that these two algebraic concepts can be interpreted biologically as the fact that some of the original gametes (that is, the generators, in the biological interpretation of these algebras) become extinct after a certain number of generations (see [3, 7], for instance).

2 Preliminaries

In this section we briefly recall the main definitions and results on evolution algebras related to the topic of the paper.

As we have already indicated in the Introduction, an algebra \(E \equiv (E,+,\cdot )\) over a field \(\mathbb {K}\) is said to be an evolution algebra if it admits a basis \({\mathcal {B}}=\{e_i,i\in \Lambda \},\) where \(\Lambda \) is an index set, such that \(e_i\cdot e_j=0\), if \(i\ne j\) and \(e_j\cdot e_j=\sum _{i\in \Lambda }a_{ij}e_i\), with \(a_{ij}\in \mathbb {K}.\) The scalars \(a_{ij}\) are called structure constants. In this way, the product on E is determined by the so-called structure matrix \(A=(a_{ij})\).
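The definition lends itself to a small computational sketch. The fragment below, a minimal illustration of ours (names such as `evo_product` are not from the paper), represents an evolution algebra by its structure matrix, with column j holding the coordinates of \(e_j^2\) in the natural basis:

```python
# Sketch: an evolution algebra represented by its structure matrix A,
# where A[i][j] is the coefficient of e_i in e_j^2 (column j = e_j^2).
def evo_product(A, x, y):
    """Product of x = sum x_k e_k and y = sum y_k e_k: only e_k * e_k
    survives, each contributing x_k * y_k times column k of A."""
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

A = [[0, 1], [1, 0]]           # e_1^2 = e_2, e_2^2 = e_1
e1, e2 = [1, 0], [0, 1]
print(evo_product(A, e1, e2))  # distinct generators multiply to zero: [0, 0]
print(evo_product(A, e1, e1))  # e_1^2 = e_2: [0, 1]
```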

Tian defined in [13] the evolution operator associated with \({\mathcal {B}}\) as the endomorphism \(L:E\rightarrow E\) which maps each generator into its square, that is, \(L(e_j)=e_j^2=\sum _{i\in \Lambda }a_{ij}e_i\), for all \(j\in \Lambda \). Note that the matrix representation of the evolution operator with respect to the basis \({\mathcal {B}}\) is also the structure matrix \(A=(a_{ij})\) and note also that the evolution operator of an evolution algebra may or may not be an endomorphism of the algebra.

A vector subspace \(E'\subseteq E\) of an evolution algebra E is a subalgebra of E if it is closed under the product of E, and it is an evolution subalgebra of E if it is both a subalgebra and itself an evolution algebra, that is, if there exists a natural basis \({\mathcal {B}}'=\{e_i,i\in \Lambda '\}\) of \(E'\) such that \(e_i\cdot e_j=0\) if \(i\ne j\). In this case, we say that \(E'\) satisfies the extension property if a natural basis \({\mathcal {B}}'\) of \(E'\) can be extended to a natural basis \({\mathcal {B}}\) of E.

The subset \(E'\) is an ideal of E if \(E'\cdot E\subseteq E'\). It is easy to see that every ideal of an evolution algebra is a subalgebra of the algebra.

The previous concepts allow us to define those of evolution ideal and evolution ideal with the extension property. In fact, every subalgebra with the extension property is an ideal; therefore, if \(E'\) has the extension property, then the concepts of evolution subalgebra and evolution ideal are equivalent.

An element a of an evolution algebra E is called nil if there exists \(n(a) \in \mathbb {N}\) such that \(\underbrace{(\cdots ((a\cdot a)\cdot a) \cdots a)}_{n(a)}=0.\) An evolution algebra E is called nil if every element of the algebra is nil.

Let us consider the following sequences

$$\begin{aligned} \begin{array}{lll} E^{[0]} = E, &{} E^{[k+1]} = E^{[k]}E^{[k]}, &{} k \ge 0, \\ &{} &{} \\ E^{\langle 1 \rangle } = E, &{} E^{\langle k+1 \rangle } = E^{\langle k \rangle } E, &{} k \ge 1, \\ &{} &{} \\ E^1 = E, &{} E^{k} = \sum _{i = 1}^{k-1} E^i E^{k-i}, &{} k \ge 2. \\ \end{array} \end{aligned}$$

In general, any algebra E, whether it is an evolution algebra or not, is called

  1. Solvable if there exists \(n \in \mathbb {N}\) such that \(E^{[n]} =0.\) The minimal such number n is called the solvability index.

  2. Right nilpotent if there exists \(n \in \mathbb {N}\) such that \(E^{\langle n \rangle } =0.\) The minimal such number n is called the right nilpotency index.

  3. Nilpotent if there exists \(n \in \mathbb {N}\) such that \(E^n =0.\) The minimal such number n is called the nilpotency index.

A derivation of an algebra A is a linear operator \(d : A \rightarrow A\) such that \(d(u \cdot v)=d(u) \cdot v + u \cdot d(v)\), for all \(u, v \in A.\)

Regarding these concepts, the following results on evolution algebras have already been obtained in [1]:

- In any n-dimensional evolution algebra, the three following facts are equivalent: the algebra is right nilpotent, the algebra is nil, and the structure matrix A can be written in the form

$$\begin{aligned}A=\begin{pmatrix} 0 &{} a_{12} &{} a_{13} &{} \cdots &{} a_{1n}\\ 0 &{} 0 &{} a_{23} &{} \cdots &{} a_{2n}\\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 0 \\ \end{pmatrix}.\end{aligned}$$

- An n-dimensional evolution algebra is nilpotent if and only if its structure matrix can be transformed, by a permutation of the natural basis, into the form of the previous result. Moreover, its nilpotency index is not greater than \(2^{n-1} + 1.\)

- If the nilpotency index of an n-dimensional nilpotent evolution algebra is not equal to \(2^{n-1}+1,\) then it is not greater than \(2^{n-2} + 1.\)

The reader can check [1, 13] for further information.
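For a concrete finite-dimensional structure matrix, one convenient way to test whether some basis permutation brings it to the strictly upper triangular form above is graph-theoretic: such a permutation exists exactly when the digraph with an edge \(i\to j\) whenever \(a_{ij}\ne 0\) is acyclic, a topological order of the vertices giving the permutation. The sketch below (our illustration of this criterion via Kahn's algorithm, not code from [1]):

```python
# Sketch: a permutation making A strictly upper triangular exists iff the
# digraph with edge i -> j whenever A[i][j] != 0 is acyclic (Kahn's algorithm).
def permutable_to_strict_upper(A):
    n = len(A)
    # indegree of j = number of i with A[i][j] != 0 (self-loops block removal)
    indeg = [sum(1 for i in range(n) if A[i][j] != 0) for j in range(n)]
    queue = [j for j in range(n) if indeg[j] == 0]
    seen = 0
    while queue:
        i = queue.pop()
        seen += 1
        for j in range(n):
            if A[i][j] != 0:
                indeg[j] -= 1
                if indeg[j] == 0:
                    queue.append(j)
    return seen == n   # all vertices removed <=> no directed cycle

strict = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]  # already strictly upper triangular
cyclic = [[0, 1], [1, 0]]                   # e_1^2 = e_2, e_2^2 = e_1: 2-cycle
print(permutable_to_strict_upper(strict))   # True
print(permutable_to_strict_upper(cyclic))   # False
```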

3 Solvability

It is well known that evolution algebras are, in general, neither associative nor power-associative. Regarding the concept of the power of an element, it is usual to work with the so-called principal powers, defined recursively on an element \(a\in E\) as

$$\begin{aligned} a^1&=a,\\ a^n&=a^{n-1}\cdot a. \end{aligned}$$

However, according to the previous section, powers other than the principal one can also be considered, such as, for instance, the plenary powers, defined on an element \(a\in E\) as

$$\begin{aligned} a^{[0]}&=a,\\ a^{[n]}&=a^{[n-1]}\cdot a^{[n-1]}. \end{aligned}$$

It is already known that these powers satisfy the following property

$$\begin{aligned}\left( a^{[n]}\right) ^{[m]}=a^{[n+m]}.\end{aligned}$$
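Plenary powers are just iterated squaring, so the property above holds by construction. The following sketch (helper names are ours, not from the paper) computes them and checks the identity on a non-zero trivial evolution algebra:

```python
# Sketch: plenary powers a^{[n]} by repeated squaring in an evolution
# algebra given by its structure matrix (column j = coordinates of e_j^2).
def evo_product(A, x, y):
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

def plenary(A, a, n):
    """a^{[n]}: square the element n times (a^{[0]} = a)."""
    for _ in range(n):
        a = evo_product(A, a, a)
    return a

A = [[1, 0], [0, 1]]   # non-zero trivial evolution algebra: e_i^2 = e_i
a = [1, 2]
# (a^{[n]})^{[m]} = a^{[n+m]}: squaring n times, then m times, is n+m times
print(plenary(A, plenary(A, a, 2), 3) == plenary(A, a, 5))  # True
```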

Let us also define, recursively, the following sets

$$\begin{aligned} E^{[0]}&=E,\\ E^{[n]}&=E^{[n-1]}\cdot E^{[n-1]}=\{x\cdot y: x,y\in E^{[n-1]}\}. \end{aligned}$$

Although it is usual to define \(E^{[n]}\) as the subspace generated by the products of the form \(x\cdot y\), with \(x,y\in E^{[n-1]}\), we will prove in the following proposition that in our case this is unnecessary, since \(E^{[n]}\) already has a vector space structure with the definition we have considered. In addition, it is easy to check that

$$\begin{aligned}\left( E^{[n]}\right) ^{[m]}=E^{[n+m]}.\end{aligned}$$

From here on, and unless otherwise indicated, E will denote an evolution algebra with natural basis \({\mathcal {B}} = \{e_i: i \in \Lambda \}\) and structure matrix \(A = (a_{ij})\). We have the following result

Proposition 3.1

If \(L\in End(E)\), then \(E^{[n]}\) is an evolution subalgebra of E, for all \(n\ge 0\). A natural basis of \(E^{[n]}\) is \(\{e_i^{[n]}:i\in \Omega \}\), for a certain \(\Omega \subseteq \Lambda \). In addition, the equalities

$$\begin{aligned} e_i^{[n+1]}=L\left( e_i^{[n]}\right) =L^{n+1}\left( e_i\right) =\sum _{k\in \Lambda }a_{ki}e_k^{[n]} \end{aligned}$$

are verified.

Proof

We prove it by induction on n. For \(n=0\) the result is trivial. Suppose it is satisfied for \(n-1\) and let us see that it also holds for n.

In the first place, we will see that \(E^{[n]}=E^{[n-1]}\cdot E^{[n-1]}\) has a vector space structure. If \(x,y\in E^{[n]}\), then there exist \(x_1,x_2,y_1,y_2\in E^{[n-1]}\) such that \(x=x_1\cdot x_2\) and \(y=y_1\cdot y_2\). Since \(\{e_i^{[n-1]}:i\in \Omega \}\) is a basis of \(E^{[n-1]}\), then

$$\begin{aligned}x_i=\sum _{k\in \Omega } x_{ki}e_k^{[n-1]},\;\;\;y_i=\sum _{k\in \Omega } y_{ki}e_k^{[n-1]},\end{aligned}$$

for certain scalars \(x_{ki}\) and \(y_{ki}\). Since this basis is natural

$$\begin{aligned}x=x_1\cdot x_2=\sum _{k\in \Omega }x_{k1}x_{k2}e_k^{[n-1]}\cdot e_k^{[n-1]},\end{aligned}$$
$$\begin{aligned}y=y_1\cdot y_2=\sum _{k\in \Omega }y_{k1}y_{k2}e_k^{[n-1]}\cdot e_k^{[n-1]}.\end{aligned}$$

Thus, the sum is

$$\begin{aligned}x+y=\sum _{k\in \Omega }(x_{k1}x_{k2}+y_{k1}y_{k2})e_k^{[n-1]}\cdot e_k^{[n-1]},\end{aligned}$$

which, in turn, can be written as the product

$$\begin{aligned}\left( \sum _{k\in \Omega }(x_{k1}x_{k2}+y_{k1}y_{k2})e_k^{[n-1]}\right) \cdot \left( \sum _{k\in \Omega }e_k^{[n-1]}\right) .\end{aligned}$$

Each factor of this product is an element of \(E^{[n-1]}\), so \(x+y\in E^{[n]}\).

Furthermore, since \(E^{[n-1]}\) is a subalgebra, we have that \(E^{[n]}\subseteq E^{[n-1]}\). As a consequence, if \(x,y\in E^{[n]}\), then \(x,y\in E^{[n-1]}\) and hence \(x\cdot y\in E^{[n]}\). This proves that \(E^{[n]}\) is a subalgebra.

In addition, from the above reasoning it follows that \(\{e_i^{[n-1]}\cdot e_i^{[n-1]}:i\in \Omega \}=\{e_i^{[n]}:i\in \Omega \}\) is a generating set of \(E^{[n]}\), so there exists \(\Omega '\subseteq \Omega \subseteq \Lambda \) such that \(\{e_i^{[n]}:i\in \Omega '\}\) is a basis. Furthermore, this is a natural basis, since if \(i\ne j\)

$$\begin{aligned}e_i^{[n]}\cdot e_j^{[n]}=L\left( e_i^{[n-1]}\right) \cdot L\left( e_j^{[n-1]}\right) =L\left( e_i^{[n-1]}\cdot e_j^{[n-1]}\right) =L(0)=0.\end{aligned}$$

Finally

$$\begin{aligned}&e_i^{[n+1]}=e_i^{[n]}\cdot e_i^{[n]}=L\left( e_i^{[n-1]}\right) \cdot L\left( e_i^{[n-1]}\right) =L\left( e_i^{[n-1]}\cdot e_i^{[n-1]}\right) =L\left( e_i^{[n]}\right) ,\\&L\left( e_i^{[n]}\right) =L\left( L^{n}(e_i)\right) =L^{n+1}(e_i),\\&L^{n+1}(e_i)=L^{n}\left( L(e_i)\right) =L^{n}\left( \sum _{k\in \Lambda }a_{ki}e_k\right) =\sum _{k\in \Lambda }a_{ki}L^{n}(e_k)=\sum _{k\in \Lambda }a_{ki}e_k^{[n]}. \end{aligned}$$

\(\square \)
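The equalities of the proposition can be replayed numerically for a concrete matrix with \(L\in End(E)\). The check below uses the structure matrix of Example 3.2 with sample scalars of our choosing (a = 2, λ = 3); it is an illustration, not part of the proof:

```python
# Numerical check of e_i^{[n]} = L^n(e_i) (Proposition 3.1) for the matrix
# of Example 3.2, with sample values a = 2, lambda = 3 (our assumption).
def evo_product(A, x, y):
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

def plenary(A, a, n):
    for _ in range(n):
        a = evo_product(A, a, a)
    return a

A = [[2, -2, 3],
     [2, -2, 3],
     [0, 0, 1]]
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
for e in basis:
    Lne = e
    for n in range(1, 5):
        Lne = mat_vec(A, Lne)           # L^n(e_i): the matrix of L is A
        assert plenary(A, e, n) == Lne  # e_i^{[n]} = L^n(e_i)
print("checked")
```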

An important observation is that from the equality

$$\begin{aligned}\left( e_i^{[n]}\right) ^2=\sum _{k\in \Lambda }a_{ki}e_k^{[n]}\end{aligned}$$

we cannot deduce, in general, that the structure matrix associated to the evolution subalgebra \(E^{[n]}\), with respect to the natural basis \(\{e_i^{[n]}:i\in \Omega \}\), is A or a submatrix of A, since \(\Omega \) is contained in \(\Lambda \). In fact, this last inclusion is strict as long as L is not an automorphism. In the case where

$$\begin{aligned}\sum _{k\in \Lambda \setminus \Omega }a_{ki}e_k^{[n]}=0,\end{aligned}$$

we will have that the structure matrix is the submatrix of A formed by the rows and columns whose indices are in \(\Omega \). We show next an example of this assertion.

Example 3.2

Let us consider the evolution algebra E with basis \(\{e_1,e_2,e_3\}\) and structure matrix

$$\begin{aligned}A=\begin{pmatrix} a &{} -a &{} \lambda \\ a &{} -a &{} \lambda \\ 0 &{} 0 &{} 1 \end{pmatrix},\end{aligned}$$

where a and \(\lambda \) are non-zero scalars. It can be verified that \(L(e_i\cdot e_j)=L(e_i)\cdot L(e_j)\), so \(L\in End(E)\).

Since \(e_1^{[1]}=-e_2^{[1]}=a(e_1+e_2)\) and \(e_3^{[1]}=\lambda (e_1+e_2)+e_3\), then \(E^{[1]}=\langle e_1^{[1]},e_3^{[1]}\rangle \). Since

$$\begin{aligned} \left( e_1^{[1]}\right) ^2&=ae_1^{[1]}+ae_2^{[1]}=0,\\ \left( e_3^{[1]}\right) ^2&=\lambda e_1^{[1]}+\lambda e_2^{[1]}+e_3^{[1]}=e_3^{[1]}, \end{aligned}$$

then the structure matrix of the evolution subalgebra \(E^{[1]}\) is

$$\begin{aligned} \begin{pmatrix} 0 &{} 0\\ 0 &{} 1 \end{pmatrix}, \end{aligned}$$

which is not a submatrix of matrix A.
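With sample values of our choosing (a = 2 and λ = 3; any non-zero scalars behave the same way), the computation can be replayed numerically:

```python
# Replaying Example 3.2 with sample values a = 2, lambda = 3 (our choice).
def evo_product(A, x, y):
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

a, lam = 2, 3
A = [[a, -a, lam],
     [a, -a, lam],
     [0,  0,   1]]
f1 = [a, a, 0]        # e_1^{[1]} = a(e_1 + e_2)
f3 = [lam, lam, 1]    # e_3^{[1]} = lambda(e_1 + e_2) + e_3
print(evo_product(A, f1, f1))        # (e_1^{[1]})^2 = 0: [0, 0, 0]
print(evo_product(A, f3, f3) == f3)  # (e_3^{[1]})^2 = e_3^{[1]}: True
```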

Moreover, we have that \(E^{[n]}\) is an evolution ideal of \(E^{[n-1]}\): if \(x\in E^{[n]}\subseteq E^{[n-1]}\) and \(y\in E^{[n-1]}\), then \(x\cdot y\in E^{[n]}\). In particular, \(E^{[1]}\) is an evolution ideal of E. In the following example it can be seen that, in general, \(E^{[n]}\) is not an ideal, for \(n>1\). Moreover, in general, the evolution subalgebra \(E^{[n]}\) does not have the extension property.

Example 3.3

Let us consider the evolution algebra E with basis \(\{e_1,e_2,e_3\}\) and structure matrix

$$\begin{aligned}A=\begin{pmatrix} a &{} -a &{} \lambda \\ a &{} -a &{} \lambda \\ 0 &{} 0 &{} 1 \end{pmatrix},\end{aligned}$$

where a and \(\lambda \) are non-zero scalars. We have previously seen that in this case \(L\in End(E)\). Since

$$\begin{aligned}A^2=\begin{pmatrix} 0 &{} 0 &{} \lambda \\ 0 &{} 0 &{} \lambda \\ 0 &{} 0 &{} 1 \end{pmatrix}\end{aligned}$$

and \(e_i^{[2]}=L^2(e_i)\), then \(e_1^{[2]}=e_2^{[2]}=0\), \(e_3^{[2]}=\lambda (e_1+e_2)+e_3\) and \(E^{[2]}=\langle e_3^{[2]}\rangle \).

Let us see that \(E^{[2]}\) is not an ideal. To do this we consider

$$\begin{aligned} x&=e_1\in E,\\ y&=\lambda (e_1+e_2)+e_3\in E^{[2]}. \end{aligned}$$

Then

$$\begin{aligned}x\cdot y=\lambda e_1^2=\lambda a(e_1+e_2)\notin E^{[2]},\end{aligned}$$

from which it follows that \(E^{[2]}\) is not an ideal.
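The failure can be checked directly with illustrative values (again our choice, a = 2 and λ = 3): the product x·y has zero third coordinate, while every non-zero element of \(E^{[2]}=\langle \lambda (e_1+e_2)+e_3\rangle \) has non-zero third coordinate.

```python
# Checking that E^{[2]} is not an ideal in Example 3.3 (sample a=2, lam=3).
def evo_product(A, x, y):
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

a, lam = 2, 3
A = [[a, -a, lam],
     [a, -a, lam],
     [0,  0,   1]]
x = [1, 0, 0]         # x = e_1
y = [lam, lam, 1]     # y = lambda(e_1 + e_2) + e_3, generator of E^{[2]}
xy = evo_product(A, x, y)
print(xy)             # lambda * a * (e_1 + e_2) = [6, 6, 0]
# every element of E^{[2]} is a multiple of y, whose third coordinate is 1;
# xy is non-zero with third coordinate 0, so it cannot lie in E^{[2]}:
print(xy[2] == 0 and xy != [0, 0, 0])  # True
```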

Let us now consider \(E^{[1]}\). Since \(e_1^{[1]}=-e_2^{[1]}=a(e_1+e_2)\) and \(e_3^{[1]}=\lambda (e_1+e_2)+e_3\), then \(E^{[1]}=\langle e_1^{[1]},e_3^{[1]}\rangle \). Let us see that \(E^{[1]}\) does not have the extension property. By contradiction, suppose there exists \(x=\sum _{i=1}^3x_ie_i\), linearly independent of \(e_1^{[1]}\) and \(e_3^{[1]}\), such that \(x\cdot e_1^{[1]}=x\cdot e_3^{[1]}=0\). Since \(x\cdot e_1^{[1]}=0\), then

$$\begin{aligned}a(x_1e_1^2+x_2e_2^2)=0,\end{aligned}$$

from which

$$\begin{aligned}a^2(x_1-x_2)(e_1+e_2)=0,\end{aligned}$$

so \(x_1=x_2\). Since \(x\cdot e_3^{[1]}=0\), then

$$\begin{aligned}0=\lambda (x_1e_1^2+x_2e_2^2)+x_3e_3^2=x_3e_3^2=\lambda x_3(e_1+e_2)+x_3e_3,\end{aligned}$$

so \(x_3=0\). Accordingly, \(x=x_1(e_1+e_2)\), which is a scalar multiple of \(e_1^{[1]}=a(e_1+e_2)\), and so a contradiction is reached.

Corollary 3.4

Let \(L\in End(E)\) be the evolution operator of E. Let n be a natural number and \(\Omega \subseteq \Lambda \) a subset such that \(\{e_i^{[n]}:i\in \Omega \}\) is a natural basis of \(E^{[n]}\). Let \(L'\) be the evolution operator of \(E^{[n]}\) associated to this basis. Then, \(L'=L|_{E^{[n]}}\) and \(L'\in End(E^{[n]})\).

Proof

To prove that \(L'=L|_{E^{[n]}}\), it suffices to see that this equality is satisfied by the elements of the natural basis \(\{e_i^{[n]}:i\in \Omega \}\)

$$\begin{aligned}L'\left( e_i^{[n]}\right) =\left( e_i^{[n]}\right) ^2=e_i^{[n+1]}=L\left( e_i^{[n]}\right) .\end{aligned}$$

Furthermore, since

$$\begin{aligned}L'(x\cdot y)=L(x\cdot y)=L(x)\cdot L(y)=L'(x)\cdot L'(y),\end{aligned}$$

for all \(x,y\in E^{[n]}\), it follows that \(L'\in End(E^{[n]})\). \(\square \)

A question that arises naturally is whether the sequence of subalgebras

$$\begin{aligned}E=E^{[0]}\supseteq E^{[1]}\supseteq \cdots \supseteq E^{[n]}\supseteq \cdots \end{aligned}$$

stabilizes and, if so, when this happens. In this regard, if E is a finite dimensional evolution algebra, we obtain the following result

Proposition 3.5

Let E be an n-dimensional evolution algebra whose structure matrix has rank \(n-k\). Then

  1. If \(E^{[i]}=E^{[i+1]}\), then \(E^{[i+r]}=E^{[i]}\), for all \(r\ge 0\).

  2. \(k=0\) if and only if \(E^{[r]}=E\), for all \(r\ge 0\).

  3. If \(k>0\), then \(E^{[n-k+1+r]}=E^{[n-k+1]}\), for all \(r\ge 0\).

Proof

 

  1. We prove it by induction on r. For \(r=0\) and \(r=1\) the result is trivial. Suppose the result holds for r. Then

    $$\begin{aligned}E^{[i+r+1]}=E^{[i+r]}\cdot E^{[i+r]}=E^{[i]}\cdot E^{[i]}=E^{[i+1]}=E^{[i]}.\end{aligned}$$

    That is, the result holds for \(r+1\).

  2. \(E^{[1]}\) is generated by \(\{e_i^2:i=1,\dots ,n\}\) and these vectors are the columns of A, so \(k=0\) (that is, A has maximum rank) if and only if \(E^{[1]}=E=E^{[0]}\). By the previous case, this is equivalent to \(E^{[r]}=E\), for all \(r\ge 0\).

  3. Since \(E^{[1]}\) is generated by \(\{e_i^2:i=1,\dots ,n\}\), we have \(\dim (E^{[1]})=n-k\). Since the chain \(E^{[1]}\supseteq E^{[2]}\supseteq \cdots \) is decreasing, by dimension count the equality \(E^{[i]}=E^{[i+1]}\) must hold for some \(i\le n-k+1\). The result then follows from case 1.

\(\square \)

When E is a finite dimensional evolution algebra, the following result is easily obtained as an immediate consequence of the fact that \(E^{[n]}\) is generated by \(\{e_i^{[n]}:i=1,\dots ,n\}\) and that \(e_i^{[n]}=L^n(e_i)\)

Proposition 3.6

Let \(L\in End(E)\) be the evolution operator of E. Then, \(E^{[n]}=0\) if and only if \(A^n=0\). As a consequence, E is solvable if and only if A is a nilpotent matrix. Moreover, the solvability index of E and the nilpotency index of A are the same. \({\Box }\)
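When \(L\in End(E)\), this reduces solvability to a plain matrix computation: iterate powers of A and look for the zero matrix. A minimal sketch of ours (helper names are not from the paper):

```python
# Sketch of Proposition 3.6: when L is an endomorphism, E is solvable iff
# its structure matrix A is nilpotent, with equal indices.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def nilpotency_index(A):
    """Smallest n with A^n = 0, or None if A is not nilpotent
    (for an n x n nilpotent matrix the index is at most n)."""
    n = len(A)
    M = A
    for k in range(1, n + 1):
        if all(x == 0 for row in M for x in row):
            return k          # M = A^k = 0
        M = matmul(M, A)
    return None

A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(nilpotency_index(A))    # 3: such an E would have solvability index 3
```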

However, the following examples show that this result does not hold if \(L\notin End(E).\)

Example 3.7

Let us consider the evolution algebra E with basis \(\{e_1,e_2,e_3\}\) and structure matrix

$$\begin{aligned}A=\begin{pmatrix} 1 &{} -2 &{} 1\\ -1 &{} 2 &{} -1\\ 1 &{} -2 &{} 1 \end{pmatrix}.\end{aligned}$$

On the one hand, it can be verified that \(A^n=4^{n-1}A\), so A is not nilpotent. On the other hand

$$\begin{aligned} e_1^2\cdot e_1^2&=(e_1-e_2+e_3)\cdot (e_1-e_2+e_3)=e_1^2+e_2^2+e_3^2=(1-2+1)e_1^2=0,\\ e_2^2\cdot e_2^2&=(-2e_1^2)\cdot (-2e_1^2)=4(e_1^2\cdot e_1^2)=0,\\ e_3^2\cdot e_3^2&=e_1^2\cdot e_1^2=0,\\ e_1^2\cdot e_2^2&=e_1^2\cdot (-2e_1^2)=-2(e_1^2\cdot e_1^2)=0,\\ e_1^2\cdot e_3^2&=e_1^2\cdot e_1^2=0,\\ e_2^2\cdot e_3^2&=(-2e_1^2)\cdot e_1^2=-2(e_1^2\cdot e_1^2)=0. \end{aligned}$$

Since \(E^{[1]}\) is generated by \(\{e_1^2,e_2^2,e_3^2\}\), then \(E^{[2]}=0\), therefore E is solvable.
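Both claims of this example can be replayed numerically (an illustration of ours, using the same helpers as before):

```python
# Example 3.7 replayed: A is not nilpotent (A^2 = 4A), yet every product of
# squares vanishes, so E^{[2]} = 0 and E is solvable.
def evo_product(A, x, y):
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, -2, 1],
     [-1, 2, -1],
     [1, -2, 1]]
A2 = matmul(A, A)
print(A2 == [[4 * x for x in row] for row in A])   # A^2 = 4A: True
squares = [[A[i][j] for i in range(3)] for j in range(3)]   # e_j^2 = column j
products = [evo_product(A, s, t) for s in squares for t in squares]
print(all(p == [0, 0, 0] for p in products))        # E^{[2]} = 0: True
```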

Example 3.8

Let us consider the evolution algebra E with basis \(\{e_1,e_2,e_3\}\) and structure matrix

$$\begin{aligned}A=\begin{pmatrix} 5 &{} -3 &{} 2\\ 15 &{} -9 &{} 6\\ 10 &{} -6 &{} 4 \end{pmatrix}.\end{aligned}$$

On the one hand, it can be verified that \(A^2=0\), so A is nilpotent. On the other hand

$$\begin{aligned} e_2^2\cdot e_3^2&=(-3e_1-9e_2-6e_3)\cdot (2e_1+6e_2+4e_3)=-6e_1^2-54e_2^2-24e_3^2=\\&=-6(5e_1+15e_2+10e_3)-54(-3e_1-9e_2-6e_3)-24(2e_1+6e_2+4e_3)=\\&=84e_1+252e_2+168e_3=-28\,e_2^2=42\,e_3^2. \end{aligned}$$

Therefore

$$\begin{aligned}(e_2^2\cdot e_3^2)^{[1]}=(-28\,e_2^2)\cdot (42\, e_3^2)=-28\cdot 42\,(e_2^2\cdot e_3^2)=(-28)^2\cdot 42\,e_2^2=-28\cdot 42^2\,e_3^2.\end{aligned}$$

Let us see by induction that

$$\begin{aligned}(e_2^2\cdot e_3^2)^{[n]}=(-28)^{2^n}42^{2^n-1}\, e_2^2=(-28)^{2^n-1}42^{2^n}\,e_3^2,\end{aligned}$$

from which it follows that E is not solvable.

Indeed, suppose the result holds for n and let us see that it holds for \(n+1\)

$$\begin{aligned} (e_2^2\cdot e_3^2)^{[n+1]}&=(e_2^2\cdot e_3^2)^{[n]}\cdot (e_2^2\cdot e_3^2)^{[n]}=(-28)^{2^n}42^{2^n-1}\,e_2^2 \cdot (-28)^{2^n-1}42^{2^n}\,e_3^2=\\&=(-28)^{2^{n+1}-1}42^{2^{n+1}-1}\,e_2^2\cdot e_3^2=(-28)^{2^{n+1}}42^{2^{n+1}-1}\, e_2^2=\\&=(-28)^{2^{n+1}-1}42^{2^{n+1}}\,e_3^2. \end{aligned}$$
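The key computations of this example can also be checked numerically (an illustration of ours, with the same helpers as before):

```python
# Example 3.8 replayed: A^2 = 0, yet e_2^2 * e_3^2 != 0, so nilpotency of
# the structure matrix does not force solvability when L is not an
# endomorphism of the algebra.
def evo_product(A, x, y):
    n = len(A)
    return [sum(A[i][k] * x[k] * y[k] for k in range(n)) for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[5, -3, 2],
     [15, -9, 6],
     [10, -6, 4]]
print(all(x == 0 for row in matmul(A, A) for x in row))  # A^2 = 0: True
sq2 = [A[i][1] for i in range(3)]   # e_2^2 = second column of A
sq3 = [A[i][2] for i in range(3)]   # e_3^2 = third column of A
p = evo_product(A, sq2, sq3)
print(p)                            # [84, 252, 168] = -28 e_2^2 = 42 e_3^2
print(p == [-28 * x for x in sq2] == [42 * x for x in sq3])  # True
```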