MATRICES OVER NONDIVISION ALGEBRAS WITHOUT EIGENVALUES

We are concerned with matrices over nondivision algebras and show by an example from an R^4 algebra that such matrices do not necessarily have eigenvalues, even if they are invertible. The standard condition x ≠ 0 for eigenvectors is replaced by the condition that x contains at least one invertible component, which coincides with x ≠ 0 for division algebras. The topic is of principal interest and leads to the question of what qualifies a matrix over a nondivision algebra to have eigenvalues. Connected with this problem is the question of whether such matrices are diagonalizable or triangularizable and allow a Schur decomposition. In a last section the question whether a specific matrix A has eigenvalues is extended to all eight R^4 algebras by numerical means. As a curiosity we found that the considered matrix A over the algebra of tessarines, a commutative algebra introduced by Cockle (Phil Mag 35(3):434–437, 1849; http://www.oocities.org/cocklebio/), possesses eigenvalues.


1. Introduction.
By standard matrices we understand square matrices with entries from R, C, H, the field of real numbers, the field of complex numbers, and the skew field of quaternions, respectively. They have in common that they are division algebras. By algebras in general we will understand the vector space R^N equipped with an associative multiplication R^N × R^N → R^N with a one, often abbreviated as 1, where N ∈ N, the set of positive integers. For algebras in this general sense we will use the notation A. More details can be found in a book by Garling, [5]. The name geometric algebra is often used for these algebras, [6].
A division algebra is an algebra in which the zero element is the only noninvertible element. Square matrices which have n rows and n columns are called matrices of order n. This paper will deal with matrices of order n ∈ N whose entries are from some algebra in the sense mentioned. In this context we also speak of matrices over an algebra A, and we write A ∈ A^{n×n} to denote a square matrix of order n with entries from an algebra A. On some occasions we also have to deal with nonsquare matrices, denoted by A ∈ A^{m×n}. If m = 1, A is called a row vector; if n = 1, A is called a column vector.
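As an illustration (ours, not from the paper), an algebra in the above sense can be encoded concretely by the products of its basis units; every other product then follows by bilinearity. The sketch below uses the quaternion rules i^2 = j^2 = k^2 = −1, ij = k, jk = i, ki = j; all names are our own.

```python
# Sketch: an R^N algebra encoded by its unit-product table T, where
# T[p][q] is the product e_p * e_q written as an N-vector.
# General products follow by bilinearity. Example: quaternions H.

N = 4

def unit(p, sign=1):
    """The basis unit e_p (p = 0,1,2,3 for 1, i, j, k), optionally signed."""
    v = [0] * N
    v[p] = sign
    return v

# Quaternion rules: i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.
T = [
    [unit(0), unit(1),     unit(2),     unit(3)],      # 1 * {1,i,j,k}
    [unit(1), unit(0, -1), unit(3),     unit(2, -1)],  # i * {1,i,j,k}
    [unit(2), unit(3, -1), unit(0, -1), unit(1)],      # j * {1,i,j,k}
    [unit(3), unit(2),     unit(1, -1), unit(0, -1)],  # k * {1,i,j,k}
]

def mul(a, b):
    """Bilinear product of a, b given as length-N coefficient lists."""
    c = [0] * N
    for p in range(N):
        for q in range(N):
            if a[p] and b[q]:
                for r in range(N):
                    c[r] += a[p] * b[q] * T[p][q][r]
    return c

i, j, k = unit(1), unit(2), unit(3)
print(mul(i, j))                          # [0, 0, 0, 1], i.e. ij = k
print(mul([1, 1, 0, 0], [1, -1, 0, 0]))   # [2, 0, 0, 0], i.e. (1+i)(1-i) = 2
```

Swapping in a different unit-product table yields any of the other R^4 algebras with the same `mul` code.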
The question whether standard matrices have eigenvalues does not arise, since it is known that these matrices always have eigenvalues and that the number of eigenvalues never exceeds the order n. See Horn and Johnson, [7], for matrices over R, C, and Brenner, [1], and Zhang, [19], for matrices over H. For matrices over A the term number of eigenvalues will be made precise a little later.
However, for matrices over nondivision algebras the answer to the question whether there are eigenvalues at all is not so clear, and in the first place one should make clear what an eigenvalue is. Let A ∈ A^{n×n}, where A is a division algebra. The standard definition is

Ax = xλ, x ≠ 0, (1.1)

for an eigenvalue λ ∈ A with respect to an eigenvector x ∈ A^{n×1}. Even in the standard case it is reasonable to write xλ instead of λx, because λ may be regarded as a 1 × 1 matrix. The vector 0 is the zero column vector. In a division algebra, the condition x ≠ 0 is equivalent to saying that the vector x contains at least one invertible component. However, if we transfer this definition unchanged to nondivision algebras, then x ≠ 0 would allow all components of x to be noninvertible. We will see a little later that this definition does not lead to a reasonable theory of eigenvalues.

Definition 1.1. Let A be a nondivision algebra. We denote the set of noninvertible elements of A by N_A. An element λ ∈ A will be called an eigenvalue with respect to an eigenvector x of a matrix A ∈ A^{n×n} if

Ax = xλ and x contains at least one invertible component. (1.2)

The set of all eigenvalues of a matrix A will be denoted by σ(A). If (λ, x) only solves (1.1), then λ will be called a candidate for an eigenvalue with respect to x.

Definition 1.2. Let a ∈ A. The set

[a] := {h^{-1}ah : h ∈ A, h ∉ N_A}

is called the similarity class of a. See also [18] for an early paper in this context.

We will see in the next theorem that the existence of one eigenvalue λ implies that the whole similarity class [λ] consists of eigenvalues.

Theorem 1.3. Let λ be an eigenvalue of A with respect to the eigenvector x. Then h^{-1}λh is also an eigenvalue of A with respect to xh for all h ∉ N_A.

Proof. Multiply the defining equation (1.2) from the right by h. Then A(xh) = xλh = (xh)(h^{-1}λh), and since x contains an invertible component, the same is true for xh, since h is invertible.

In view of Theorem 1.3 the number of eigenvalues of a matrix A is defined as the number of similarity classes which contain eigenvalues.
We see in the next theorem that one of the essential requirements for eigenvalues is valid in the framework of this definition of eigenvalues.

Theorem 1.4. Let A be a matrix over an arbitrary algebra A and let there be two eigenvalues λ1 and λ2 of A with respect to the same eigenvector x. Then λ1 = λ2.

Proof. By definition, Ax = xλ_j, j = 1, 2. This implies 0 = x(λ1 − λ2). Since, also by definition, x contains an invertible component, we have λ1 − λ2 = 0.
We also see here already that the assumption x ≠ 0 alone would not allow the conclusion of the last theorem.
Let us turn to the question whether all matrices over nondivision algebras have eigenvalues. There are two strategies. If one believes that there are always eigenvalues, one has to prove it. Otherwise life is a little easier: one has to find a single example of a matrix which does not have eigenvalues. We will pursue this line.
2. Some elementary properties of R^4 algebras. In order to find an example we will restrict our search to the eight algebras of R^4. These algebras, with their multiplication rules, are summarized in Table 2.1; the four units are denoted by 1, i, j, k, and the full multiplication table of the eight listed algebras can be obtained by multiplying the last three columns of Table 2.1. Elements of all these algebras will be denoted by a = (a1, a2, a3, a4), which is the same as a = a1 + a2 i + a3 j + a4 k, where a1, a2, a3, a4 ∈ R. The first component a1 of a is called the real part of a and is denoted by ℜ(a). An algebra element of the form (a1, 0, 0, 0), a1 ∈ R, is called real, and the set of real elements is also identified with R. The set R plays a special role in all noncommutative R^4 algebras: all elements of R (and no others) commute with all algebra elements. Let a, b ∈ A, where A is one of the eight R^4 algebras. Then

ℜ(ab) = ℜ(ba). (2.1)

In particular, ℜ(h^{-1}ah) = ℜ(a) for all invertible h, so the real part is constant on every similarity class. For the four noncommutative algebras (No 1, 2, 5, 6) we define the conjugate of an element a = (a1, a2, a3, a4), written ā or conj(a), by ā = conj(a) := (a1, −a2, −a3, −a4), and we use the notation abs2(a) := a ā for the product a ā. The essential properties are given in the next theorem.
Theorem 2.2. Let A be one of the four noncommutative algebras of Table 2.1. Then abs2(a) = a ā = ā a is real, and a is invertible if and only if abs2(a) ≠ 0. For the inverse of a there is the formula a^{-1} = ā / abs2(a), if abs2(a) ≠ 0.
Let a, b ∈ A. Then there is the property that abs2 is multiplicative: abs2(ab) = abs2(ba) = abs2(a) abs2(b). For abs2(a) there is an explicit formula, (2.2), in terms of the components of a, in particular for a ∈ H_con.
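The statements of Theorem 2.2 can be checked numerically; the following sketch (ours) does this for the coquaternions H_coq under the assumption of the standard split-quaternion rules i^2 = −1, j^2 = k^2 = +1, ij = k = −ji. The paper's Table 2.1 fixes the actual conventions; the function names are our own.

```python
# Coquaternion arithmetic: elements are 4-tuples (a1, a2, a3, a4)
# meaning a1 + a2*i + a3*j + a4*k with i^2 = -1, j^2 = k^2 = +1, ij = k.

def cmul(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 + a3*b3 + a4*b4,
            a1*b2 + a2*b1 - a3*b4 + a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def conj(a):
    return (a[0], -a[1], -a[2], -a[3])

def abs2(a):
    # a * conj(a) is real; for H_coq it equals a1^2 + a2^2 - a3^2 - a4^2
    return cmul(a, conj(a))[0]

def inv(a):
    # a^{-1} = conj(a) / abs2(a), defined exactly when abs2(a) != 0
    s = abs2(a)
    if s == 0:
        raise ZeroDivisionError("noninvertible coquaternion")
    return tuple(c / s for c in conj(a))

a, b = (1, 2, 3, 4), (5, -1, 2, 0)
print(abs2(a), abs2(b), abs2(cmul(a, b)))  # -20 22 -440: multiplicative
print(cmul(a, inv(a)))                     # approximately (1, 0, 0, 0)
```

Note that (1, 0, 1, 0) has abs2 = 0, so it is a nonzero noninvertible element; this is exactly what distinguishes H_coq from a division algebra.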
Since eigenvalues always appear in similarity classes, we can reduce the eigenvalues to certain representatives of the similarity classes. Lemma 2.3. All eigenvalues of matrices over H can be given the form a + bi. All eigenvalues of matrices over H_coq and over H_nec can be given the form a + bi or a + bj. All eigenvalues of matrices over H_con can be given the form a + bi or a + bk, where in all cases b ≥ 0.
Proof. This follows from (2.2), depending on the sign of abs2, together with (2.1).
For this reason all eigenvalues of matrices over H can be represented as complex numbers, see Brenner, [1]; therefore the eigenvalue theory of matrices over H is very similar to the standard case of matrices over R or over C.
For later use we will provide two tables for simple multiplication rules.
Lemma 2.4. Let a ∈ A, where A is one of the eight R^4 algebras of Table 2.1. Then multiplication of a by i, j, k from the left and from the right results, apart from signs, in a permutation of the four components of a. Let a = (1, 2, 3, 4); then the corresponding permutations (including the signs) are given in the following two tables.
Proof. Apply the multiplication rules.
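For one of the eight algebras this is easy to observe directly. The sketch below (ours) does it for the coquaternions, assuming the split-quaternion rules i^2 = −1, j^2 = k^2 = +1, ij = k; the paper's Tables 2.5 and 2.6 record the actual permutations for all algebras.

```python
# Multiplying a = (1, 2, 3, 4) in H_coq by the units i, j, k from the
# left and from the right: each product is a signed permutation of the
# components of a.

def cmul(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 + a3*b3 + a4*b4,
            a1*b2 + a2*b1 - a3*b4 + a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

a = (1, 2, 3, 4)
units = {"i": (0, 1, 0, 0), "j": (0, 0, 1, 0), "k": (0, 0, 0, 1)}
for name, u in units.items():
    print(name, "left :", cmul(u, a), " right:", cmul(a, u))
# e.g. i from the left gives (-2, 1, -4, 3), i from the right (-2, 1, 4, -3)
```

Every printed 4-tuple contains the values 1, 2, 3, 4 up to signs and order, as the lemma states.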
3. One dimensional eigenvalue problems and matrix representations. All algebras (at least those considered here) are isomorphic to certain real matrix spaces. So it is reasonable, in an eigenvalue problem over an algebra A, to replace the algebra elements by isomorphic matrices, so that the eigenvalue problem over A appears as a problem in real matrices. Since these matrices can have only real or complex eigenvalues, it is of interest to study the connection between these eigenvalues and the eigenvalues of the underlying matrices over A. For this purpose we study the simplest, one dimensional case

ax = xλ, a, x, λ ∈ A. (3.1)

A (1 × 1) matrix a may be regarded simultaneously as a diagonal and a triangular matrix. Therefore, the results of Lemma 4.7, p. 14, are valid for the problem (3.1).

Lemma 3.1. Let A be an arbitrary algebra. The eigenvalue problem (3.1) is always solvable, and the solution set is the similarity class [a].
Proof. The invertible eigenvector x = 1 solves the problem. Multiplying equation (3.1) from the left by x^{-1} yields λ = x^{-1}ax = a. The remaining part follows from Theorem 1.3.
How do we find the matrix equivalents of an algebra element a ∈ A? The simplest idea is to regard the mapping x ↦ ax, x ∈ A, as a linear mapping of R^4 into itself, which is represented by a real 4 × 4 matrix M. Details for computing the matrix M are in [8, 9, 12]. Now we compare the (known) real or complex eigenvalues of M with the corresponding representatives of [a]. For all R^4 algebras the eight 4 × 4 matrix representations are given in [8]. Table 3.2 lists all eigenvalues λ_{1,2,3,4} of M in all eight R^4 algebras; the eigenvalues λ_{1,3} correspond to the + sign, and the eigenvalues λ_{2,4} to the − sign. We use the abbreviation A := abs2(a) − a1^2, with abs2(a) from (2.2).
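For the coquaternions this construction can be sketched as follows (our own illustration, assuming the split-quaternion rules i^2 = −1, j^2 = k^2 = +1, ij = k). For a = 1 + 2i + 3j + 4k one has A = abs2(a) − a1^2 = 4 − 9 − 16 = −21 < 0, and one can check that M then has the real eigenvalues a1 ± √(−A) = 1 ± √21, each twice:

```python
# Represent x -> a*x (a in H_coq) by a real 4x4 matrix M whose columns
# are the products a*e_q of a with the four basis units e_q.

import numpy as np

def cmul(a, b):
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 + a3*b3 + a4*b4,
            a1*b2 + a2*b1 - a3*b4 + a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def left_matrix(a):
    cols = []
    for q in range(4):
        e = [0.0] * 4
        e[q] = 1.0
        cols.append(cmul(a, tuple(e)))
    return np.array(cols).T     # column q is a * e_q

a = (1.0, 2.0, 3.0, 4.0)
M = left_matrix(a)
A_ = a[1]**2 - a[2]**2 - a[3]**2   # abs2(a) - a1^2 for H_coq: here -21

eig = np.linalg.eigvals(M)
print(np.sort(eig.real))   # 1 - sqrt(21) (twice), 1 + sqrt(21) (twice)
```

The double eigenvalues reflect the fact that M − a1 I squares to the real multiple (−A) I of the identity.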
In the following theorem we give a reconstruction scheme for representatives of eigenvalues in [a] from the real and complex eigenvalues λ_ℓ, ℓ = 1, 2, 3, 4, of M (see Table 3.2), where a is a member of one of the eight R^4 algebras.
Let the real or complex eigenvalues of M be λ_ℓ, ℓ = 1, 2, 3, 4, as listed in Table 3.2. Proof. For the noncommutative cases we use Lemma 2.3, keeping the similarity class [a] with the help of formula (2.1). For the commutative cases we reconstruct the four values a_j from the four eigenvalues λ_j, j = 1, 2, 3, 4.
The cases A < 0 in the first part of Table 3.4 always mean that the eigenvalues λ_j, j = 1, 2, are real. We will provide two simple examples, one for the noncommutative and one for the commutative case, which show that allowing eigenvectors with all components noninvertible may lead to eigenvalues that are not uniquely determined and to other strange situations.
Now we can apply these results also to larger matrices over noncommutative algebras by replacing the algebra elements in the matrix by real matrices according to formula (3.4). By distinguishing between real and complex, nonreal eigenvalues we can obtain information on the eigenvalues of the original matrix by applying the results of Table 3.4. However, there is a serious drawback. Since we use standard techniques to find the eigenvalues of real matrices, the standard condition x ≠ 0 is invoked. Thus, whether the eigenvalues computed by standard techniques are also eigenvalues of the underlying algebraic matrix is a priori unknown. We will use one example to demonstrate the details. Let A be the matrix in H_coq defined in (3.5); it will play a central role in this paper. If we replace the four entries of A by the corresponding real 4 × 4 matrices, we obtain a real 8 × 8 matrix which has the (double) eigenvalues ±√2 and 1 ± i. The second and third rows of Table 3.4 then tell us that the eigenvalues of A are (possibly)

λ1 = 1 + i, λ2 = √2 j. (3.6)

If we have a look at the standard eigenvectors, we see that they all have the form (a, b, b, a, c, d, d, c). If we interpret the first and second four elements as coquaternionic elements, then they are noninvertible. We will see in the next section that the true eigenvectors belonging to A are indeed all noninvertible.
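Since the matrix A itself is given in the paper's display (3.5), the mechanics of this replacement can be illustrated on a hypothetical stand-in B = diag(i, j) over H_coq (our own example, with the split-quaternion rules assumed): every entry is replaced by its real 4 × 4 representation, and the eigenvalues of the resulting real 8 × 8 matrix are computed by standard means.

```python
import numpy as np

def cmul(a, b):  # product in H_coq (i^2 = -1, j^2 = k^2 = +1, ij = k)
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 + a3*b3 + a4*b4,
            a1*b2 + a2*b1 - a3*b4 + a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def left_matrix(a):
    cols = [cmul(a, tuple(1.0 if q == p else 0.0 for q in range(4)))
            for p in range(4)]
    return np.array(cols).T

ZERO, I, J = (0, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
B = [[I, ZERO],
     [ZERO, J]]   # hypothetical 2x2 matrix over H_coq, NOT the paper's A

# Replace every entry by its 4x4 real representation -> real 8x8 matrix.
R = np.block([[left_matrix(B[r][s]) for s in range(2)] for r in range(2)])

eig = np.linalg.eigvals(R)
print(np.sort_complex(np.round(eig, 10)))   # +-1 and +-i, each twice
```

Exactly as in the text, the real or complex eigenvalues of the 8 × 8 matrix are only raw material: whether they correspond to eigenvalues of B over H_coq must be decided separately.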

4. A counterexample.
We will eventually show that the matrix A defined in (3.5) has no eigenvalues in H_coq. We note that A is invertible in H_coq, with inverse A^{-1} given in (4.1). We denote the eigenvectors of A by

x = (x1, x2)^T, x1 = u1 + u2 i + u3 j + u4 k, x2 = v1 + v2 i + v3 j + v4 k, (4.2)

and require for a λ ∈ H_coq to qualify as an eigenvalue of A with respect to an eigenvector x that

Ax = xλ, x contains at least one invertible component. (4.3)

See Definition 1.1. We will first show that the two candidates for eigenvalues λ_j, j = 1, 2, obtained in (3.6) are not eigenvalues of A. Then we will show that in general no eigenvalues exist.
Lemma 4.1. λ = 1 + i is not an eigenvalue of A, where A is defined in (3.5).
Proof. Let λ := 1 + i. Then we have to find the eigenvectors x := (x1, x2)^T (T means transposition) from the defining equations Ax = xλ. Using Tables 2.5 and 2.6, the first equation can be rewritten componentwise, and so can the second. Comparing all four components and using the results from the first equation, we eventually obtain a pair of equations; adding and subtracting them yields the eigenvector components. Altogether, with v1, v2 ∈ R, |v1| + |v2| > 0, both components of x turn out to be noninvertible. Thus, 1 + i is not an eigenvalue of A; however, it is a candidate for an eigenvalue.
Lemma 4.2. λ = √2 j is not an eigenvalue of A, where A is defined in (3.5). Proof. In this case x := (x1, x2)^T is wanted which solves the defining equations. The first and second equation can be rewritten componentwise (see Tables 2.5 and 2.6), and these equations imply, respectively,

Comparing with the second set of equations yields
Both components of x are noninvertible, and √2 j is not an eigenvalue of A, but it is a candidate for an eigenvalue. Now we come to the general case. In the first place we show that the eigenvalue problem can be expressed by two real, linear, homogeneous 4 × 4 systems.

Proof.
We apply the rules given in Table 2.5 for H_coq and obtain the componentwise form (4.11) of the eigenvalue equation; the multiplication rule in H_coq applies to (v1, v2, v3, v4)λ in the same way. Comparing the four components of the two parts of (4.11) yields (after some reordering) the equations (4.12) to (4.19). From these equations we deduce, by adding and subtracting, two sets of four linear equations each: the first set is (4.20) to (4.23), the second set is (4.24) to (4.27). The equations (4.12) to (4.19) form a homogeneous linear system in the eight variables u1, u2, ..., v4. If we choose λ at random, then we have to expect that this 8 × 8 system has full rank and thus that λ is not a candidate for an eigenvalue. By adding and subtracting we can recover the solution of the 8 × 8 system from the solutions of the two 4 × 4 systems. In short form, the two 4 × 4 systems defined by (4.20) to (4.23) and by (4.24) to (4.27) are the forms M and N given in the lemma.

Lemma 4.4. If at least one of the two matrices M, N has rank four, then λ is not an eigenvalue of A.

Proof. Let both matrices have rank four. Then m = n = 0 (for the definition see (4.7), (4.8)), which implies u1 = u2 = · · · = v4 = 0 (see (4.2)), and λ is not a candidate for an eigenvalue. If only one of the two matrices has rank four, then either m = 0 or n = 0, which implies that the two components x1, x2 of the eigenvector x are both noninvertible. Thus, λ is not an eigenvalue.
Therefore, only those λ can qualify as eigenvalues which render the ranks of both M and N less than four.
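This rank test is easy to mechanize. The sketch below (our own, for the hypothetical stand-in B = diag(i, j) over H_coq rather than the paper's A, with the split-quaternion rules assumed) assembles the real 8 × 8 matrix of the linear map x ↦ Bx − xλ; λ is a candidate precisely when this matrix is rank deficient, and an eigenvalue when some null vector has a component with abs2 ≠ 0. The paper reduces such an 8 × 8 system further to the two 4 × 4 systems M and N.

```python
import numpy as np

def cmul(a, b):  # product in H_coq (i^2 = -1, j^2 = k^2 = +1, ij = k)
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 + a3*b3 + a4*b4,
            a1*b2 + a2*b1 - a3*b4 + a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def basis(p):
    return tuple(1.0 if q == p else 0.0 for q in range(4))

def Lmat(a):  # 4x4 real matrix of x -> a*x
    return np.array([cmul(a, basis(p)) for p in range(4)]).T

def Rmat(a):  # 4x4 real matrix of x -> x*a
    return np.array([cmul(basis(p), a) for p in range(4)]).T

def system_matrix(B, lam):
    """Real matrix of x -> B*x - x*lam for x in H_coq^n, as 4n x 4n array."""
    n = len(B)
    S = np.zeros((4 * n, 4 * n))
    for r in range(n):
        for s in range(n):
            S[4*r:4*r+4, 4*s:4*s+4] += Lmat(B[r][s])
        S[4*r:4*r+4, 4*r:4*r+4] -= Rmat(lam)
    return S

def abs2(a):
    return a[0]**2 + a[1]**2 - a[2]**2 - a[3]**2

ZERO, I, J = (0, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
B = [[I, ZERO], [ZERO, J]]

S = system_matrix(B, I)               # test the candidate lambda = i
_, sv, Vt = np.linalg.svd(S)
null = [Vt[r] for r in range(8) if sv[r] < 1e-10]
print(len(null))                      # dimension of the null space
print([abs2(v[:4]) for v in null])    # first component is invertible here
```

For this stand-in B the candidate λ = i passes the test: the null space is nontrivial and its vectors have an invertible first component, so λ = i is an eigenvalue of B. For the paper's A the analogous test fails for every λ.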
For the definition of abs 2 see (2.2). Formula (4.30) has the following special cases.
The solution is a1 = −λp2,3, a2 = λp1,4; b1 = −λm1,4, b2 = −λm2,3. Let the four columns of M be c1, c2, c3, c4; then the resulting conditions can be written out in explicit terms. By definition, only the last two rows are needed, and they imply −λp2,3 λm2,3 + (λp1,4)^2 = −1. If we subtract the last from the first equation, we obtain a further condition. The middle two equations can be written as λp2,3(λm1,4 + λp1,4) = 0, λm2,3(λm1,4 + λp1,4) = 0. If we do not put λ1 = 1, it follows that λ2 = λ3 = λ4 = 0; however, this λ obeys neither (4.33) nor (4.36). So we have to choose λ1 = 1, and λ2, λ3, λ4 obeying (4.33), (4.36) results in λ2^2 − λ3^2 − λ4^2 = 1, which is (4.28). Now let us treat the case where rank(N) = 2. The technique to find the corresponding conditions is the same as for M. The equations corresponding to (4.33) to (4.36) yield (4.29); thus, (4.29) is shown. Let N have rank three and denote, as before, the four columns of N by c1, c2, c3, c4. In order to find the linear combination representing the last column by the first three columns, we have to find a1, a2, a3 from the first three rows and columns of a1 c1 + a2 c2 + a3 c3 = c4. If the coefficients a1, a2, a3 are known, then, by using the fourth row of N (see (4.6)), there is only one condition to satisfy: −a1 λp1,4 + a2 λm2,3 = −1. The equation (4.43) expresses a3 in terms of the components of λ. We note that the factor (λ1 − λ4)^2 − A at a3 in (4.43) cannot vanish, because the assumption that N has rank three implies that the three coefficients a1, a2, a3 are uniquely defined. Inserting a3 into the condition (4.46) yields (4.30), and the special cases follow.
We summarize our results. Theorem 4.6. The matrix A ∈ H_coq^{2×2} defined in (3.5) has no eigenvalues in H_coq, the algebra of coquaternions.
Proof. The previous Lemma 4.5 says that the λ defined by (4.28), (4.29), (4.30) are the only candidates for eigenvalues of A over H_coq. However, for all such λ ∈ H_coq at least one of the matrices M, N has rank four. Thus, Lemma 4.4 applies and gives the final result.
Lemma 4.7. Let D be a diagonal matrix of order n ∈ N over an algebra A with diagonal elements d 1 , d 2 , . . . , d n . Then all diagonal elements d j , j = 1, 2, . . . , n are eigenvalues of D. Let T be a triangular matrix (upper or lower) of order n ∈ N over an algebra A with diagonal elements t 1 , t 2 , . . . , t n . Then, at least the first diagonal element t 1 is an eigenvalue of T.
Proof. Let e_j ∈ A^{n×1} be the jth unit vector, where the jth component consists of the algebraic one and all other components consist of the algebraic zero. Then De_j = d_j e_j = e_j d_j. Therefore, d_j is a candidate for an eigenvalue with respect to e_j, and since e_j contains one invertible component, d_j is an eigenvalue for all j = 1, 2, ..., n. Now Te_1 = t_1 e_1 = e_1 t_1, and t_1 is an eigenvalue of T for the same reasons.
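The computation in this proof can be replayed directly for a diagonal matrix over H_coq (our own sketch, split-quaternion rules assumed):

```python
# Matrix-vector product over H_coq and the unit-vector eigenvectors of
# a diagonal matrix D: D e_j = e_j d_j holds entrywise.

from functools import reduce

def cmul(a, b):  # product in H_coq (i^2 = -1, j^2 = k^2 = +1, ij = k)
    a1, a2, a3, a4 = a
    b1, b2, b3, b4 = b
    return (a1*b1 - a2*b2 + a3*b3 + a4*b4,
            a1*b2 + a2*b1 - a3*b4 + a4*b3,
            a1*b3 - a2*b4 + a3*b1 + a4*b2,
            a1*b4 + a2*b3 - a3*b2 + a4*b1)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def matvec(M, x):  # (Mx)_r = sum_s M[r][s] * x_s
    n = len(M)
    return [reduce(add, (cmul(M[r][s], x[s]) for s in range(n)))
            for r in range(n)]

ONE, ZERO = (1, 0, 0, 0), (0, 0, 0, 0)
d1, d2 = (1, 2, 3, 4), (0, 1, 1, 0)   # d2 is noninvertible in H_coq
D = [[d1, ZERO], [ZERO, d2]]
e1, e2 = [ONE, ZERO], [ZERO, ONE]

print(matvec(D, e1) == [d1, ZERO])    # D e1 = e1 d1, prints True
print(matvec(D, e2) == [ZERO, d2])    # D e2 = e2 d2, prints True
```

Note that d2 was chosen noninvertible; it is nevertheless an eigenvalue of D, because the eigenvector e2 contains the invertible component 1, exactly as required by Definition 1.1.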
As a consequence, the matrix A defined in (3.5) is not similar to any diagonal or triangular matrix over H_coq. Proof. Since similar matrices have the same set of eigenvalues, and since, by Lemma 4.7, diagonal and triangular matrices have eigenvalues, they cannot be similar to a matrix without eigenvalues.
In general terms this means that we cannot expect a matrix of order n over a nondivision algebra to have a Schur decomposition.
In the formulas (4.28) to (4.30) we have listed all possible candidates for eigenvalues of A (for A see (3.5)) over H_coq, the algebra of coquaternions, which by definition obey the equation Ax = xλ, x ≠ 0. Even if we count only similarity classes, we see from formula (4.30) that the number of candidates is infinite, since distinct real parts λ1 define different similarity classes. In a paper by Erdoğdu and Özdemir, [4], only candidates for eigenvalues over H_coq are considered. More recent information on problems related to quaternions and coquaternions, with extensions to other algebras, can be found in papers by the current authors, [8] to [13].
5. Eigenvalues in the other R^4 algebras. In this section we give only a short overview of the problem whether the same matrix A, defined in (3.5), has eigenvalues in the eight R^4 algebras or not. If there are eigenvalues, we also present corresponding eigenvectors. Our results are mainly based on numerical computations which produce eigenvalues and eigenvectors. We would like to mention one principal difficulty for the noncommutative algebras: in this computation eigenvalues are computed with four components, but not in reduced form. What is reduced form? Theorem 1.3 says that λ ∈ σ(A) implies [λ] ⊂ σ(A), and because of the similarity identity relation (2.1) arbitrary eigenvalues in the noncommutative cases can be reduced to the short form: in H: λ = a + bi; in H_coq and in H_nec: λ = a + bi or λ = a + bj; in H_con: λ = a + bi or λ = a + bk, with b ≥ 0 in all cases.

5.1. Quaternions H. Let A = H. The matrix A is invertible.
There are two similarity classes of eigenvalues. Since H is a division algebra, both eigenvalues and all components of the eigenvectors are invertible.

5.2. Coquaternions H_coq. Let A = H_coq. The matrix A is invertible, see (4.1). However, there are no eigenvalues. This is the main result of this paper.

5.3. Tessarines H_tes. Let A = H_tes. This algebra is commutative. The matrix A is noninvertible. There are the following eigenvalues and eigenvectors. Both eigenvalues are noninvertible, but all components of the eigenvectors are invertible in the algebra H_tes.

5.4. Cotessarines H_cotes. Let A = H_cotes. This algebra is commutative. The matrix A is not invertible, and the search for eigenvalues ended without success.

5.5. Nectarines H_nec. Let A = H_nec. The matrix A is invertible.
The search for eigenvalues ended without success.
5.6. Conectarines H_con. Let A = H_con. The matrix A is invertible, and there are eigenvalues. None of the eigenvalues is invertible, whereas all components of the eigenvectors are invertible.

5.8. Cotangerines H_cotan. Let A = H_cotan. This algebra is commutative. The matrix A is not invertible. The search for eigenvalues ended without success.
5.9. Concluding remarks for this section. These results, apart from those for the coquaternions, are based on numerical computations. The statement the search for eigenvalues ended without success means that at least 100 trials with random initial values were carried out, in some cases even several hundred. In contrast, in the cases where eigenvalues exist, they were almost always found within the first ten trials. So there is some likelihood that not only A over H_coq has no eigenvalues, but also A over H_cotes, H_nec, H_cotan. And it may even be that over every nondivision algebra A there are matrices without eigenvalues.
In order to facilitate the computations, we have used the overloading technique offered by MATLAB, which allows one to use the standard matrix algebra in all eight R^4 algebras; the selection of the algebra is steered by just one global integer parameter with values from one to eight. In this way it is very easy to program Newton's method only once for finding eigenvalues and eigenvectors in all eight R^4 algebras. A technique described by Lauterbach and Opfer in [14] was applied.
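The same one-parameter dispatch can be sketched in Python with one unit-product table per algebra; only the quaternion and coquaternion tables are filled in here (the remaining six would be entered from the paper's Table 2.1), and the integer ALG selects the algebra exactly as described above. All names are our own.

```python
# One multiplication routine for all algebras: the unit-product tables
# differ, the bilinear code does not. T[p][q] gives e_p * e_q as a
# 4-vector; only H (No 1) and H_coq (No 2) are filled in here.

def unit(p, sign=1):
    v = [0] * 4
    v[p] = sign
    return v

TABLES = {
    1: [  # quaternions H: i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j
        [unit(0), unit(1),     unit(2),     unit(3)],
        [unit(1), unit(0, -1), unit(3),     unit(2, -1)],
        [unit(2), unit(3, -1), unit(0, -1), unit(1)],
        [unit(3), unit(2),     unit(1, -1), unit(0, -1)],
    ],
    2: [  # coquaternions H_coq: i^2 = -1, j^2 = k^2 = +1, ij = k
        [unit(0), unit(1),     unit(2),     unit(3)],
        [unit(1), unit(0, -1), unit(3),     unit(2, -1)],
        [unit(2), unit(3, -1), unit(0),     unit(1, -1)],
        [unit(3), unit(2),     unit(1),     unit(0)],
    ],
}

ALG = 1  # global selector, mirroring the MATLAB design described above

def mul(a, b):
    T = TABLES[ALG]
    c = [0] * 4
    for p in range(4):
        for q in range(4):
            if a[p] and b[q]:
                for r in range(4):
                    c[r] += a[p] * b[q] * T[p][q][r]
    return c

j, k = unit(2), unit(3)
print(mul(j, k))   # jk = i in H:       [0, 1, 0, 0]
ALG = 2
print(mul(j, k))   # jk = -i in H_coq:  [0, -1, 0, 0]
```

Any algorithm written against `mul` (a Newton iteration, a residual check) then runs unchanged in every algebra whose table is present, which is precisely the convenience the MATLAB overloading provides.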