1 Introduction

A problem that occurs frequently in a variety of mathematical contexts is to find the common invariant subspaces of a single matrix or of a set of matrices. The problem is, of course, even more challenging in infinite dimensions, where the invariant subspace problem for bounded operators on a separable Hilbert space remains open; see for example [3, 8]. In this article, we shall be concerned with finite dimensions only. In the case of a single endomorphism or matrix, it is relatively easy to find all the invariant subspaces by using the Jordan normal form. Some theoretical results are also available for the invariant subspaces of a pair of matrices. However, when there are more than two matrices, the problem becomes much harder: unexpected invariant subspaces may occur and no systematic method is known. In any case, the key point is that it is difficult to be certain that one has uncovered all such invariant subspaces, and that is where the method introduced in this article is of great utility. In the context of studying symmetries of differential equations, there is a compelling reason to consider invariant subspaces. The set of Lie symmetries naturally comprises a Lie algebra, and one would like to describe all its subalgebras and ideals, up to conjugacy. Indeed, the invariant subspaces of the adjoint representation correspond precisely to the ideals of the Lie algebra.

In this work, we provide a new algorithm for computing bases for the intersections of eigenspaces of a set of matrices. In order to check whether the intersection of eigenspaces of a set of matrices contains a nonzero decomposable vector, and to find its divisors, we provide a convenient formulation of the Plücker relations that can be used to check for the decomposability of a multivector \(\Lambda \in \bigwedge ^d V\). If the multivector \(\Lambda\) involves parameters, then the quadratic Plücker relations for decomposability provide initial constraints on these parameters in order to keep \(\Lambda\) totally decomposable in \(\bigwedge ^d V\). The Plücker relations are given in a simple form which is easily programmable using the symbolic manipulation program Maple. Moreover, we provide a procedure for computing the divisors of a totally decomposable vector \(\Lambda \in \bigwedge ^d V\).

The method proposed in this paper can be used to find, in every possible dimension, all the ideals of the symmetry Lie algebra of a given DE. Therefore, the results of this paper provide a new, systematic and comprehensive method for constructing the corresponding invariant solutions. Another application of the proposed method lies in finding representations of Lie algebras, by considering the invariant subspaces and the action of a semi-simple Lie group on the adjoint representation. We are currently working on a major application of this method: two algorithms for simultaneous block triangularization and block diagonalization of sets of matrices.

An outline of the paper is as follows. In Sect. 2, we give some details about invariant subspaces, in particular, explaining the connection between an invariant subspace and a totally decomposable multivector. In Sect. 3, we provide a low-dimensional example, where the details can be carried out by hand. In Sect. 4, we present the main result of the paper, which gives a convenient formulation of the Plücker relations that can be used to check for the decomposability of a multivector or to provide the initial constraints for a multivector involving parameters. A procedure for computing the divisors of a totally decomposable vector \(\Lambda \in \bigwedge ^d V\) is also provided. In Sect. 5, we outline two algorithms to find common invariant subspaces, first for one-dimensional subspaces and then for higher dimensions. Finally, in Sect. 6 we provide three examples for which the calculations are too complicated to do by hand and are performed using Maple, although we do not go into the details of the code here.

2 Invariant subspaces

Let \(\{v_1, v_2, \ldots , v_n\}\) be a basis of an n-dimensional vector space V. We shall denote the associated dual basis for the dual space \(V^*\) by \(\omega _1,\omega _2,\ldots ,\omega _n\), so that \(\omega _i(v_j)=\delta ^j_i\) or, equivalently, we shall write \(\langle v_j, \omega _i \rangle =\delta ^j_i\). For the moment, we shall assume that the underlying field of V is \(\mathbb {R}\) and that all eigenvalues of the matrices encountered are real. We shall address the issue of complex eigenvalues in a subsequent article. Now suppose that \(T_A:V\rightarrow V\) is an endomorphism of V. Since we have chosen a basis for V, we shall identify \(T_A\) with its \(n\times n\) matrix, denoted by A. Thus, we have

$$\begin{aligned} T_A(v_i)=\sum \limits _{j=1}^n a_i^jv_j. \end{aligned}$$
(2.1)

Definition 2.1

A subspace \(W\subset V\) is said to be invariant with respect to the transformation \(T_A\) if \(T_AW\subset W\).

An extensive study of invariant subspaces of transformations may be found in [4]. See also [1]. Let \(T_{A_{\alpha }}\) be a family of linear transformations indexed by \(\alpha\) and suppose that \(W_1, W_2\) are \(T_{A_{\alpha }}\)-invariant. Then,

$$\begin{aligned} T_{A_{\alpha }}(W_1\cap W_2)\subset T_{A_{\alpha }}W_1\cap T_{A_{\alpha }}W_2\subset W_1\cap W_2 \end{aligned}$$
(2.2)

and

$$\begin{aligned} T_{A_{\alpha }}(W_1+W_2)=T_{A_{\alpha }}W_1+T_{A_{\alpha }}W_2\subset W_1 + W_2. \end{aligned}$$
(2.3)

Thus the intersection \(W_1\cap W_2\) of two such subspaces is again invariant, as is their sum \(W_1+W_2\), that is, the subspace spanned by \(W_1\) and \(W_2\). Hence, the set of invariant subspaces for a family of transformations forms a lattice, with intersection as meet and sum as join.

It is clear that if \(T_A\) is a multiple of the identity transformation on V, then every subspace of V is invariant. More generally, if a family of transformations possesses a subspace W of V on which each of them is a multiple of the identity, then again any subspace of W is invariant. Accordingly, we shall sometimes assume in the sequel that no such invariant subspace exists.

Our goal is to find, if possible, the lattice of invariant subspaces of a single transformation and eventually for a family of transformations. We shall specify these subspaces by giving decomposable multivectors \(\Lambda\) in terms of the reference basis \(\{v_1, v_2, \ldots , v_n\}\) used for V. Such a \(\Lambda\) of degree d is contained in \(\bigwedge ^d V\), the \(d^{th}\) exterior power of V. A basis for \(\bigwedge ^d V\) consists of \(\{v_{i_1}\wedge v_{i_2}\wedge \cdots \wedge v_{i_d}\mid 1\le i_1< i_2< \cdots <i_d \le n\}\) where \(1\le d \le n\).

However, with reference to a single transformation \(T_A\), we shall frequently assume that the basis \(\{v_1, v_2, \ldots , v_n\}\) is adapted to a d-dimensional invariant subspace W, by which we mean that \(\{v_1,v_2, \ldots , v_d\}\) is a basis for W. Then, \(v_1\wedge v_2\wedge \cdots \wedge v_d\) is an eigenvector of the induced map \(\bigwedge ^d T_A\) on \(\bigwedge ^d V\), the dth exterior power of V. Indeed, in the adapted basis

$$\begin{aligned} \bigwedge ^d T_A(v_1\wedge v_2\wedge \cdots \wedge v_d)=\det ({\tilde{A}})(v_1\wedge v_2\wedge \cdots \wedge v_d). \end{aligned}$$
(2.4)

In eq.(2.4), \({\tilde{A}}\) denotes the submatrix of A that results when \(T_A\) is restricted to W. At this point, it is convenient to introduce the following definition.

Definition 2.2

The vector \(\Lambda \in \bigwedge ^d V\) is said to be totally decomposable if there are d linearly independent vectors \(v_1, v_2, \ldots ,v_d \in V\) such that \(\Lambda =v_1 \wedge v_2 \wedge \cdots \wedge v_d\).

Again, it may be assumed in Definition 2.2 that we are working with a basis of V that is adapted to W. If we drop the independence condition in Definition 2.2, then we would say simply that \(\Lambda\) is decomposable. Thus, an invariant subspace engenders a totally decomposable multivector. Conversely, suppose that \(\Lambda =v_1\wedge v_2\wedge \cdots \wedge v_d\) is a totally decomposable eigenvector of \(\bigwedge ^d A\); then

$$\begin{aligned} \bigwedge ^d A(v_1\wedge v_2\wedge \cdots \wedge v_d)=\lambda (v_1\wedge v_2\wedge \cdots \wedge v_d) \end{aligned}$$
(2.5)

for some \(\lambda\). Since \(\bigwedge ^d A(v_1\wedge v_2\wedge \cdots \wedge v_d)= Av_1\wedge Av_2\wedge \cdots \wedge Av_d\), we have \(Av_1\wedge Av_2\wedge \cdots \wedge Av_d=\lambda (v_1\wedge v_2\wedge \cdots \wedge v_d)\). Now \(Av_m\wedge Av_1\wedge Av_2\wedge \cdots \wedge Av_d=0\) trivially for each \(1 \le m \le d\), so if we assume that \(\lambda \ne 0\), then \(Av_m\wedge (v_1\wedge v_2\wedge \cdots \wedge v_d)=0\) for each \(1 \le m \le d\) and hence the subspace spanned by \(v_1, v_2, \ldots ,v_d\) is A-invariant, that is, \(T_A\)-invariant. Thus, we have the key observation that underlies this paper.

Proposition 2.3

Given an \(n\times n\) non-singular matrix A, there is a one-one correspondence between invariant subspaces of A and projective equivalence classes of totally decomposable eigenvectors of \(\bigwedge ^d A\) where \(1\le d \le n\).

It is necessary to assume that the matrix A is non-singular. For example, if A is nilpotent then some exterior power \(\bigwedge ^k A=0\) and we may not obtain information about invariant subspaces of A. On the other hand, we can always add a suitable multiple of the identity to A in order to obtain \({\overline{A}}\), which is non-singular: A and \({\overline{A}}\) have the same invariant subspaces. More generally:

Lemma 2.4

Given a set of \(n\times n\) matrices \(\{A_1,A_2, \ldots ,A_k\}\), it is possible to add a suitable multiple of the identity to each \(A_i\) in order to obtain \(\overline{A_i}=A_i+\mu _iI\), each of which is non-singular and then the sets \(\{A_1,A_2, \ldots ,A_k\}\) and \(\{{\overline{A}}_1,{\overline{A}}_2,\ldots ,{\overline{A}}_k\}\) have the same invariant subspaces.

Proof

Since we are only considering a finite number of matrices, there are only a finite number of eigenvalues altogether for the matrices \(A_1,A_2,\ldots ,A_k\). We simply choose each \(\mu _i\) so that each \(A_i+\mu _iI\) is non-singular. Adding multiples of the identity does not change the common invariant subspaces of \(A_1,A_2,\ldots ,A_k\). \(\square\)
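As a computational aside, the shift of Lemma 2.4 is easy to automate. The following sketch (Python/NumPy; the function name `make_nonsingular` is ours, not part of the paper's Maple code) tries successive nonnegative integer multiples of the identity until each shifted matrix is invertible:

```python
import numpy as np

def make_nonsingular(mats, max_shift=100):
    # For each matrix A_i, add mu*I with mu a nonnegative integer chosen
    # so that A_i + mu*I is invertible (Lemma 2.4).  Adding multiples of
    # the identity leaves the common invariant subspaces unchanged.
    shifted = []
    for A in mats:
        n = A.shape[0]
        for mu in range(max_shift):
            if abs(np.linalg.det(A + mu * np.eye(n))) > 1e-9:
                shifted.append(A + mu * np.eye(n))
                break
        else:
            raise ValueError("no suitable shift found")
    return shifted
```

Since only finitely many eigenvalues occur, the loop always terminates for a large enough `max_shift`.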

3 Illustrative example

In this section, we study an example where the calculations are simple enough to carry out by hand. We consider the problem of finding the common invariant subspaces of the following three matrices:

$$\begin{aligned} A_1= \left( \begin{array}{cccc} 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{array} \right) , A_2= \left( \begin{array}{cccc} 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0 \end{array} \right) , A_3= \left( \begin{array}{cccc} 0&1&0&0\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&0 \end{array} \right) . \end{aligned}$$
(3.1)

In the first place, we note that \(\bigwedge ^2 A_1=\bigwedge ^2 A_2=0\). However, the common invariant subspaces of \(A_1,A_2,A_3\) are not determined by \(A_3\) alone. Accordingly, we add the identity matrix to each of \(A_1, A_2, A_3\) in order to obtain \({\overline{A}}_1, {\overline{A}}_2, {\overline{A}}_3\), giving

$$\begin{aligned} {\overline{A}}_1= \left( \begin{array}{cccc} 1&0&0&1\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array} \right) , {\overline{A}}_2= \left( \begin{array}{cccc} 1&0&0&0\\ 0&1&0&1\\ 0&0&1&0\\ 0&0&0&1 \end{array} \right) , {\overline{A}}_3= \left( \begin{array}{cccc} 1&1&0&0\\ 0&1&1&0\\ 0&0&1&0\\ 0&0&0&1 \end{array} \right) . \end{aligned}$$
(3.2)

Taking the basis in the order \(e_1\wedge e_2, e_1\wedge e_3, e_1\wedge e_4, e_2\wedge e_3, e_2\wedge e_4, e_3\wedge e_4\), we calculate the matrix \(\bigwedge ^2{\overline{A}}_1\) as follows:

$$\begin{aligned} \bigwedge ^2{\overline{A}}_1(e_1\wedge e_2)=({\overline{A}}_1 e_1\wedge {\overline{A}}_1 e_2)=e_1\wedge e_2 \end{aligned}$$

and so the first column of the matrix \(\bigwedge ^2{\overline{A}}_1\) is \((1,0,0,0,0,0)^T\). Similarly, we obtain the remaining columns of the following matrices:

$$\begin{aligned} \bigwedge ^2{\overline{A}}_1= \left( {\begin{matrix} 1&0&0&0&-1&0\\ 0&1&0&0&0&-1\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1 \end{matrix}} \right) , \bigwedge ^2{\overline{A}}_2= \left( {\begin{matrix} 1&0&1&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&-1\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1 \end{matrix}} \right) , \bigwedge ^2{\overline{A}}_3= \left( {\begin{matrix} 1&1&0&1&0&0\\ 0&1&0&1&0&0\\ 0&0&1&0&1&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&0&1 \end{matrix}}\right) . \end{aligned}$$
(3.3)
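These columns can be checked mechanically: the matrix of \(\bigwedge ^d T_A\) is the d-th compound matrix of A, whose entries are the \(d\times d\) minors of A indexed by lexicographically ordered row and column index sets. A quick verification in Python/NumPy (the helper `compound` is ours, not part of the paper's Maple code):

```python
import numpy as np
from itertools import combinations

def compound(A, d):
    # d-th compound matrix of A: entry (I, J) is the minor det A[I, J],
    # with the d-element index sets in lexicographic order, matching the
    # basis order e_{i1} ^ ... ^ e_{id}, i1 < ... < id, used in the text.
    n = A.shape[0]
    idx = list(combinations(range(n), d))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx]
                     for I in idx])

A1bar = np.eye(4)
A1bar[0, 3] = 1.0          # the matrix \bar A_1 of eq. (3.2)
C2 = compound(A1bar, 2)    # reproduces \wedge^2 \bar A_1 of eq. (3.3)
C3 = compound(A1bar, 3)    # reproduces \wedge^3 \bar A_1 of eq. (3.4)
```

The same helper applied to \({\overline{A}}_2\) and \({\overline{A}}_3\) reproduces the remaining matrices displayed above and below.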

Using the basis \(e_1\wedge e_2 \wedge e_3, e_1\wedge e_2 \wedge e_4, e_1\wedge e_3 \wedge e_4, e_2\wedge e_3 \wedge e_4\) for \(\bigwedge ^3 V\), we obtain

$$\begin{aligned} \bigwedge ^3 {\overline{A}}_1= \left( \begin{array}{cccc} 1&0&0&1\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array} \right) , \bigwedge ^3 {\overline{A}}_2= \left( \begin{array}{cccc} 1&0&-1&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{array} \right) , \bigwedge ^3 {\overline{A}}_3=\left( \begin{array}{cccc} 1&0&0&0\\ 0&1&1&1\\ 0&0&1&1\\ 0&0&0&1 \end{array} \right) . \end{aligned}$$
(3.4)

Now we see in the matrices in eq. (3.2) that the first column gives the only common eigenvector \(e_1\). Accordingly, the only common invariant subspace of dimension one for the matrices in eq. (3.1) is spanned by \(e_1\); we will use \(\langle e_1 \rangle\) for the span of \(e_1\). Similarly, we see in the matrices in eq. (3.3) that the first column gives the only common eigenvector \(e_1 \wedge e_2\). Accordingly, the only common invariant subspace of dimension two for the matrices in eq. (3.1) is \(\langle e_1, e_2 \rangle\). Finally, we see in the matrices in eq. (3.4) that there are two linearly independent common eigenvectors, \(e_1 \wedge e_2 \wedge e_3\) and \(e_1 \wedge e_2 \wedge e_4\), corresponding to the first and second columns, respectively. Accordingly, the common invariant subspaces of dimension three for the matrices in eq. (3.1) are \(\langle e_1, e_2, ae_3+b e_4 \rangle\) with \(a^2+b^2\ne 0\). We obtain these by finding the divisors of the linear combinations of the two eigenvectors \(e_1 \wedge e_2 \wedge e_3\) and \(e_1 \wedge e_2 \wedge e_4\) as follows: \(a(e_1 \wedge e_2 \wedge e_3)+b(e_1 \wedge e_2 \wedge e_4)= e_1 \wedge e_2 \wedge (a e_3 +b e_4)\). One can verify that the components of the latter vector satisfy the Plücker relations for arbitrary parameters a and b as long as \(a^2+b^2\ne 0\). In particular, we see that there are infinitely many invariant subspaces of dimension three.

The reader will observe that the original example considered in this Section arises from the adjoint representation of the unique four-dimensional nilpotent Lie algebra. However, one needs to exercise caution when adding multiples of the identity to the generators, because they may no longer span a subalgebra.

4 Plücker embedding and decomposability

As a result of the considerations of Sect. 2, it will be important to decide whether a given multivector \(\Lambda\) is totally decomposable. A bivector \(\Lambda\) is decomposable if and only if \(\Lambda \wedge \Lambda =0\); note that if \(\Lambda\) is of odd degree, then \(\Lambda \wedge \Lambda =0\) holds identically. Furthermore, multivectors of degree one, \(n-1\) and n are always decomposable. However, to handle multivectors of arbitrary degree, we shall have to introduce some more machinery.
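For bivectors, this criterion is immediate to program. The following sketch (Python; the function name `self_wedge` is ours) stores a bivector as a dictionary of components \(x_{ij}\), \(i<j\), and returns the components of \(\Lambda \wedge \Lambda\) in \(\bigwedge ^4 V\), all of which vanish exactly when \(\Lambda\) is decomposable:

```python
from itertools import combinations

def self_wedge(x, n):
    # Components of Lambda ^ Lambda for the bivector
    # Lambda = sum_{i<j} x[(i,j)] v_i ^ v_j.
    out = {}
    for (i, j, k, l) in combinations(range(1, n + 1), 4):
        # coefficient of v_i ^ v_j ^ v_k ^ v_l, coming from the three
        # splittings of {i,j,k,l} into two pairs (each counted twice)
        out[(i, j, k, l)] = 2 * (x.get((i, j), 0) * x.get((k, l), 0)
                                 - x.get((i, k), 0) * x.get((j, l), 0)
                                 + x.get((i, l), 0) * x.get((j, k), 0))
    return out
```

For example, \(e_1\wedge e_2\) passes the test, while the classic non-decomposable bivector \(e_1\wedge e_2 + e_3\wedge e_4\) fails it.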

We continue with our real vector space V of dimension n. We let \(G(d,V)\) denote the Grassmann manifold of all d-planes in V. Then \(G(d,V)\) is a smooth compact manifold of dimension \(d(n-d)\), and in the case \(d=1\) we are looking at the real projective space P(V) of V. In the case where \(V=\mathbb {R}^n\), by reducing to orthonormal frames, the space \(G(d,\mathbb {R}^n)\) may be identified with the homogeneous space \(O(n)/\left( O(d)\times O(n-d)\right)\).

It is awkward to work directly with \(G(d,V)\), so instead one usually makes use of the so-called Plücker embedding. The idea behind the Plücker embedding consists of mapping the Grassmannian \(G(d,V)\) to the projective space \(P(\bigwedge ^d V)\); to do so, let \(W\in G(d,V)\) and let \(v_1,v_2,\ldots ,v_d\) be a basis for W. Then map W to \([v_{1} \wedge \cdots \wedge v_{d}]\), where the brackets denote the equivalence class in \(P(\bigwedge ^d V)\). The mapping is well defined: if we take a different basis and wedge its vectors together, the wedge product differs from the old one by a nonzero factor, namely the determinant of the change of basis, and so yields the same element of \(P(\bigwedge ^d V)\). Moreover, this mapping is injective: if \(\omega\) is the image of W, then W consists of the vectors w such that \(w\wedge \omega =0\). We introduced the notion of a totally decomposable multivector in Sect. 2, and we may say now that every totally decomposable element of \(P(\bigwedge ^d V)\) arises from a unique element of \(G(d,V)\).

In the course of the construction, Grassmann coordinates arise in a natural way. The usual procedure is to imagine a d-dimensional subspace W of \(\mathbb {R}^n\) as giving rise to a \(d\times n\) matrix A, where the rows of A consist of a basis of vectors in \(\mathbb {R}^n\) for W. The Grassmann coordinates then consist of the minors arising from the \(\left( {\begin{array}{c}n\\ d\end{array}}\right)\) possible \(d\times d\) submatrices. They provide the homogeneous coordinates on \(P(\bigwedge ^d V)\) and are well defined up to multiplying A by a non-singular matrix on the left. Denoting such a system of coordinates by \((p_{i_1i_2 \ldots i_d})\), the condition for the decomposability of a multi-vector is, in terms of indices,

$$\begin{aligned} p_{i_1i_2 \ldots i_{d-2}hj}p_{i_1i_2 \ldots i_{d-2}km}+p_{i_1i_2 \ldots i_{d-2}hm}p_{i_1i_2 \ldots i_{d-2}jk} +p_{i_1i_2 \ldots i_{d-2}hk}p_{i_1i_2 \ldots i_{d-2}mj}=0, \end{aligned}$$
(4.1)

the so-called “quadratic p-relations.” Incidentally, it is our understanding that Plücker carried out the construction for the case of lines in three-dimensional projective space and it was Grassmann who extended the construction to arbitrary dimensions.

Conditions (4.1) may not be the most useful way to describe decomposability of a multivector, particularly with regard to applications, not least since the large number of quadratic relations may cause problems. There are, however, equivalent ways to characterize total decomposability; see, for example, [5,6,7]. We shall next obtain a convenient formulation of these conditions. To begin, note that whenever a volume element of V is given, that is, a nonzero element of \(\bigwedge ^n V\), there is an isomorphism between \(\bigwedge ^d V\) and \(\bigwedge ^{n-d} V^{*}\). As such, we shall use the volume element \(v_N=v_{1} \wedge v_{2} \wedge \cdots \wedge v_{n}\) coming from our reference basis, and we shall denote by \({\bar{\Lambda }}\) the element of \(\bigwedge ^{n-d} V^{*}\) associated to \(\Lambda \in \bigwedge ^d V\). Now, we can assert that \(\Lambda\) is totally decomposable if and only if its components satisfy the following Plücker relations

$$\begin{aligned} \begin{array}{cc} E_{KL}=\sum \limits _{s=1}^n {\omega }_{K}(v_s \wedge \Lambda ) ({\omega }_s \wedge {\bar{\Lambda }})v_{L}=0,\\ \end{array} \end{aligned}$$
(4.2)

where K and L are strictly increasing subsequences of \(N=(1,\ldots ,n)\) such that \(|K|=d+1\) and \(|L|=n-d+1\).

4.1 A convenient formulation of the Plücker relations

Now we are in a position to obtain a version of the Plücker relations that is suitable for use in our algorithm.

Theorem 4.1

Let V be an n-dimensional vector space. The vector

\(\Lambda =\sum \limits _{|I|=d} x_{I} v_{I} \in \bigwedge ^d V\) is totally decomposable if and only if its components \(x_{I}\) satisfy the following Plücker relations:

$$\begin{aligned} \begin{array}{c} \{E_{KL}=0: K~\text {and}~L~\text {are strictly increasing subsequences of}~N=(1, 2, \ldots ,n)\\ \text {such that}~\mid K\mid =d+1, \mid L\mid =n-d+1\}, \end{array} \end{aligned}$$
(4.3)

where \(E_{KL}\) is defined for every K and L as

$$\begin{aligned} \begin{array}{c} E_{KL}=\sum \limits _{s\in K \cap L} \text {sgn}\left( \sigma _1\right) \cdot \text {sgn}\left( \sigma _2\right) \cdot \text {sgn}\left( \sigma _3\right) \cdot x_{K \setminus \{s\}}\cdot x_{(L \setminus \{s\})'},\\ \end{array} \end{aligned}$$
(4.4)

and \(\text {sgn}(\sigma _i)\), for \(i=1, 2, 3,\) are the signatures of the permutations \(\sigma _1=\left( s,K \setminus \{s\}\right)\), \(\sigma _2=\left( s,L \setminus \{s\}\right)\), \(\sigma _3=\left( (L \setminus \{s\})',L \setminus \{s\}\right)\), where the sequence \((L \setminus \{s\})'\) denotes the increasing sequence complementary to \(L \setminus \{s\}\).

Proof

Let I and J be strictly increasing subsequences of \(N=(1, 2, \ldots ,n)\) such that \(|I|=d\) and \(|J|=n-d\). For the subsequences \(I:i_1<i_2< \cdots <i_d\) and \(J:j_1<j_2< \cdots <j_{n-d}\), we write \(v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_d}=v_I\) and \(\omega _{j_1} \wedge \omega _{j_2} \wedge \cdots \wedge \omega _{j_{n-d}}=\omega _J\).

In fact it is useful to evaluate \({\bar{\Lambda }}\) directly in terms of \(\Lambda\) as we shall now do. If

$$\begin{aligned} \Lambda =\sum \limits _{|I|=d} x_{I} v_{I} \in \bigwedge ^dV, \end{aligned}$$
(4.5)

then \({\bar{\Lambda }}=\sum \limits _{|J|=n-d} y_{J}{\omega }_{J} \in \bigwedge ^{n-d} V^{*}\) with \(\Lambda \wedge v_{J}=y_{J} v_N\). Therefore, we have \(\sum \limits _{|I|=d} x_{I} v_{I} \wedge v_{J}=y_{J} v_N.\) Now \(v_I \wedge v_{J} \ne 0 \Longleftrightarrow I=J'\), where \(J'\) is the complement of J. Moreover, \(v_{J'} \wedge v_{J} =\text {sgn}\left( J',J\right) v_N\), where \(\text {sgn}(J',J)\) is the signature of the permutation \((J',J)\). It follows that \(y_{J}=\text {sgn}\left( J',J\right) x_{J'}\). So

$$\begin{aligned} {\bar{\Lambda }}=\sum \limits _{|J|=n-d}\text {sgn}\left( J',J\right) x_{J'} {\omega }_{J}. \end{aligned}$$
(4.6)

Then, substituting the formulas for \(\Lambda\) from Eq. (4.5) and for \({\bar{\Lambda }}\) from Eq. (4.6) into Eq. (4.2) gives

$$\begin{aligned} \begin{array}{cc} E_{KL}=\sum \limits _{s=1}^n \sum \limits _{|I|=d} x_{I}{\omega }_{K}(v_s \wedge v_{I}) \sum \limits _{|J|=n-d} \text {sgn}\left( J',J\right) x_{J'}({\omega }_s \wedge {\omega }_{J})v_{L}=0.\\ \end{array} \end{aligned}$$
(4.7)

Now \({\omega }_{K}(v_s \wedge v_{I}) \ne 0 \Longleftrightarrow I=K\setminus \{s\}\) and \(s\in K\). Similarly, \(({\omega }_s \wedge {\omega }_{J})v_{L} \ne 0 \Longleftrightarrow J=L\setminus \{s\}\) and \(s\in L\). Thus, Eq. (4.2) can be rewritten as

$$\begin{aligned}&\sum \limits _{s\in K \cap L} \text {sgn}\left( (L\setminus \{s\})',L\setminus \{s\}\right) x_{(L\setminus \{s\})'} x_{K\setminus \{s\}}{\omega }_{K}(v_s \wedge v_{K\setminus \{s\}}) ({\omega }_s \wedge {\omega }_{L\setminus \{s\}})v_{L}\nonumber \\&= E_{KL}=0. \end{aligned}$$
(4.8)

Finally, substituting \(v_s \wedge v_{K\setminus \{s\}}=\text {sgn}\left( s,K \setminus \{s\}\right) v_K\) and \({\omega }_s \wedge {\omega }_{L\setminus \{s\}}=\text {sgn}\left( s,L \setminus \{s\}\right) {\omega }_L\) in Eq. (4.8) completes the proof. \(\square\)
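Theorem 4.1 translates directly into code. The sketch below (Python; the function names are ours, and a multivector is stored as a dictionary from increasing index tuples I to components \(x_I\)) evaluates every \(E_{KL}\) of eq. (4.4) and reports whether \(\Lambda\) is totally decomposable:

```python
from itertools import combinations

def shuffle_sign(a, b):
    # Signature of the permutation taking the concatenation (a, b) of two
    # increasing sequences to its increasing rearrangement: (-1)^inversions.
    seq = list(a) + list(b)
    inv = sum(1 for p in range(len(seq)) for q in range(p + 1, len(seq))
              if seq[p] > seq[q])
    return (-1) ** inv

def is_totally_decomposable(x, n, d, tol=1e-9):
    # Check the relations E_KL = 0 of Theorem 4.1 for
    # Lambda = sum_{|I|=d} x[I] v_I, with x keyed by increasing d-tuples.
    N = tuple(range(1, n + 1))
    for K in combinations(N, d + 1):
        for L in combinations(N, n - d + 1):
            E = 0.0
            for s in set(K) & set(L):
                Ks = tuple(i for i in K if i != s)        # K \ {s}
                Ls = tuple(i for i in L if i != s)        # L \ {s}
                Lsc = tuple(i for i in N if i not in Ls)  # (L \ {s})'
                E += (shuffle_sign((s,), Ks)              # sgn(sigma_1)
                      * shuffle_sign((s,), Ls)            # sgn(sigma_2)
                      * shuffle_sign(Lsc, Ls)             # sgn(sigma_3)
                      * x.get(Ks, 0.0) * x.get(Lsc, 0.0))
            if abs(E) > tol:
                return False
    return True
```

For instance, \(e_1\wedge e_2\) and \((e_1+e_3)\wedge (e_2+e_4)\) pass the test, while \(e_1\wedge e_2+e_3\wedge e_4\) fails it.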

4.2 Procedure for computing the divisors of a totally decomposable vector \(\Lambda \in \bigwedge ^d V\)

Given a totally decomposable \(\Lambda \in \bigwedge ^2 V\), there exist two linearly independent vectors \(u_1, u_2 \in V\) such that \(\Lambda =u_1 \wedge u_2\). Moreover, the divisors \(u_1, u_2\) satisfy the conditions \(u_1 \wedge \Lambda =0\) and \(u_2 \wedge \Lambda =0\). Therefore, for a given totally decomposable vector \(\Lambda =\sum \limits _{1\le i<j \le n} x_{ij} v_i\wedge v_j\), if we assume that \(u=\sum \limits _{k=1}^n a_k v _k\) is a divisor of \(\Lambda\), then one can find u by solving the linear system obtained by comparing the coefficients of the basis of \(\bigwedge ^3 V\) in the following equation:

$$\begin{aligned} u \wedge \Lambda = \sum \limits _{1\le i<j \le n} \sum \limits _{k=1}^n x_{ij} a_k ~v_k \wedge ~v_i\wedge v_j=0. \end{aligned}$$

Solving this linear system will give the two divisors \(u_1,u_2 \in V\) such that \(\Lambda =u_1 \wedge u_2\).
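This linear system is straightforward to assemble and solve numerically. A sketch in Python/NumPy (the function name is ours; the null space is extracted via the SVD rather than symbolically, which is adequate for numeric coefficients):

```python
import numpy as np
from itertools import combinations

def bivector_divisors(x, n, tol=1e-9):
    # Basis of the divisor space {u in V : u ^ Lambda = 0} for
    # Lambda = sum_{i<j} x[(i,j)] v_i ^ v_j.  For a totally decomposable
    # Lambda this space is 2-dimensional and Lambda = u1 ^ u2.
    triples = {T: r for r, T in enumerate(combinations(range(1, n + 1), 3))}
    M = np.zeros((len(triples), n))   # rows: basis of wedge^3 V, cols: a_k
    for (i, j), c in x.items():
        for k in range(1, n + 1):
            if k in (i, j):
                continue
            # v_k ^ v_i ^ v_j = sign * v_T, with T the sorted triple
            sign = (-1) ** ((k > i) + (k > j))
            M[triples[tuple(sorted((k, i, j)))], k - 1] += sign * c
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:]   # rows form a basis of the divisor space
```

For \(\Lambda =e_1\wedge e_2\) in \(\mathbb {R}^4\), the returned basis spans \(\langle e_1, e_2\rangle\), as expected.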

Now, if the vector \(\Lambda \in \bigwedge ^2 V\) involves some parameters, applying the Plücker relations (4.3) provides initial constraints on these parameters in order to keep \(\Lambda\) totally decomposable in \(\bigwedge ^2 V\). Moreover, the system of divisors of \(\Lambda\) will inherit these parameters. Therefore, we need to solve the system of divisors together with the initial constraints for all possible cases of the parameters. This can be achieved using the command PreComprehensiveTriangularize(sys, d, R) in Maple 13, which is described in [2]. This command returns a pre-comprehensive triangular decomposition of the system sys with respect to the last d variables of the polynomial ring R.

Similarly, the above idea can be extended to find the divisors of a totally decomposable vector \(\Lambda \in \bigwedge ^d V\) for \(2<d<n\).

5 Algorithms

5.1 Algorithm A: Determining the common one-dimensional invariant subspaces for a set of \(n \times n\) matrices \(\{A_i \}_{ i=1}^N\).

  1. Input: \(\{A_i \}_{ i=1}^N\).

  2. Find the set of eigenvalues \(\sigma (A_i)\) for each matrix \(A_i\).

  3. Find \(\Omega =\{(\lambda _1, \lambda _2, \ldots , \lambda _N): \lambda _i \in \sigma (A_i) \}.\)

  4. Construct the following matrix for each \((\lambda _1, \lambda _2, \ldots , \lambda _N) \in \Omega\):

     $$\begin{aligned} B(\lambda _1, \lambda _2, \ldots , \lambda _N)= \left( \begin{array}{c} A_1-\lambda _1 I\\ A_2-\lambda _2 I\\ \vdots \\ A_N-\lambda _N I \end{array} \right) . \end{aligned}$$

  5. Compute the null space \(\Lambda\) of each matrix \(B(\lambda _1, \lambda _2, \ldots , \lambda _N)\).

  6. Output: The set of all possible pairs \(((\lambda _1, \lambda _2, \ldots , \lambda _N), \Lambda )\).
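A direct NumPy transcription of Algorithm A might look as follows (a sketch under the paper's standing assumption of real eigenvalues; the function name and the numerical tolerances are ours):

```python
import numpy as np
from itertools import product

def algorithm_A(mats, tol=1e-9):
    # Steps 2-6: enumerate tuples of eigenvalues, stack the shifted
    # matrices A_i - lambda_i I, and keep the tuples whose stacked matrix
    # has a nontrivial null space (a common eigenspace).
    n = mats[0].shape[0]
    spectra = [sorted(set(np.round(np.linalg.eigvals(A).real, 9)))
               for A in mats]
    results = []
    for lams in product(*spectra):
        B = np.vstack([A - lam * np.eye(n) for A, lam in zip(mats, lams)])
        _, s, vt = np.linalg.svd(B)
        null = vt[int((s > tol).sum()):]   # rows span the null space of B
        if null.shape[0] > 0:
            results.append((lams, null))
    return results
```

Applied to the matrices of eq. (3.1), this recovers the single common eigenvector \(e_1\) found in Sect. 3.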

5.2 Algorithm B: Determining the common invariant subspaces of dimension \(1<d<n\) for a set of \(n \times n\) matrices \(\{A_i \}_{ i=1}^N\).

  1. Input: \(\{A_i \}_{ i=1}^N\) and d.

  2. Find s such that the matrices \(\{A_i + sI\}_{ i=1}^N\) are all invertible. Let \({{\bar{A}}}_i=A_i + sI\) for \(i=1,2, \ldots ,N\).

  3. Compute the \(m \times m\) matrices \(\{\bigwedge ^d {{\bar{A}}}_i \}_{ i=1}^N\), where \(m=\dim \bigwedge ^d V=\left( {\begin{array}{c}n\\ d\end{array}}\right)\).

  4. Determine the set of all pairs \((\lambda , \Lambda )\) of common eigenvalue sequences and common eigenvectors for the set of \(m \times m\) matrices \(\{\bigwedge ^d {{\bar{A}}}_i \}_{ i=1}^N\) using Algorithm A.

  5. Construct the Plücker relations in \(\bigwedge ^d V\) using the convenient formulation given in Sect. 4.1.

  6. For each sequence of eigenvalues \(\lambda\), check whether the coefficients of the corresponding eigenvector \(\Lambda\) satisfy the Plücker relations. If the vector \(\Lambda \in \bigwedge ^d V\) involves parameters, apply the Plücker relations to find the initial constraints on these parameters.

  7. Find the divisors \(v_1, v_2, \ldots , v_d\) of each totally decomposable \(\Lambda =v_1 \wedge v_2 \wedge \cdots \wedge v_d\) using the procedure given in Sect. 4.2.

  8. Output: The set of all possible \((\lambda , \langle v_1, v_2, \ldots , v_d \rangle )\).
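Putting the pieces together, the following sketch traces steps 3 and 4 of Algorithm B for the shifted matrices of Sect. 3 with \(d=2\) (Python/NumPy; the helper names are ours, and the Plücker check of step 6 and the divisor computation of step 7 would then be applied to the resulting eigenvectors):

```python
import numpy as np
from itertools import combinations

def compound(A, d):
    # d-th compound matrix: the d x d minors of A, indices in lex order
    n = A.shape[0]
    idx = list(combinations(range(n), d))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx]
                     for I in idx])

def common_eigenspace(mats, lams, tol=1e-9):
    # Null space of the stacked matrices M_i - lambda_i I (Algorithm A)
    n = mats[0].shape[0]
    B = np.vstack([M - lam * np.eye(n) for M, lam in zip(mats, lams)])
    _, s, vt = np.linalg.svd(B)
    return vt[int((s > tol).sum()):]

# the shifted matrices \bar A_1, \bar A_2, \bar A_3 of eq. (3.2)
A1 = np.eye(4); A1[0, 3] = 1
A2 = np.eye(4); A2[1, 3] = 1
A3 = np.eye(4); A3[0, 1] = 1; A3[1, 2] = 1
comps = [compound(A, 2) for A in (A1, A2, A3)]
# every eigenvalue of these compounds equals 1; the common eigenspace
# is spanned by the coordinate vector of e1 ^ e2
V = common_eigenspace(comps, (1.0, 1.0, 1.0))
```

The single null vector corresponds to \(e_1\wedge e_2\), recovering the unique two-dimensional common invariant subspace \(\langle e_1, e_2\rangle\) of Sect. 3.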

6 Examples

Example 6.1

$$\begin{aligned} A_1= \left( \begin{array}{ccccccc} 3&0&0&0&0&0&0\\ 0&2&0&0&0&0&0\\ 0&0&2&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&3 \end{array} \right) , A_2=\left( \begin{array}{ccccccc} 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0\\ 1&0&0&0&0&0&0 \end{array} \right) . \end{aligned}$$
(6.1)

Using algorithms A and B with \(s=1\), the complete list of the common invariant subspaces for the set of matrices \(\{A_1,A_2\}\) is as follows:

  • Zero-dimensional subspaces: \(\{\varvec{0}\}\)

  • One-dimensional subspaces:

    Sequence of eigenvalues

    One-dimensional

    \(\lambda =(\lambda _1,\lambda _2)\) for \(\{ {{\bar{A}}}_i \}_{ i=1}^2\)

    Invariant subspace

    (2,1)

    \(\langle e_5\rangle ,\langle e_6+\alpha e_5\rangle\)

    (3,1)

    \(\langle e_3\rangle\)

    (4,1)

    \(\langle e_7\rangle\)

  • Two-dimensional subspaces:

    Sequence of eigenvalues

    Two-dimensional

    \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^2 {{\bar{A}}}_i \}_{ i=1}^2\)

    Invariant subspace

    (4,1)

    \(\langle e_4 ,e_6\rangle ,\langle e_5+\alpha e_4 ,e_6\rangle\)

    (6,1)

    \(\langle e_3, e_5\rangle ,\langle e_3,e_6+\alpha e_5\rangle\)

    (8,1)

    \(\langle e_5,e_7\rangle ,\langle e_6+\alpha e_5,e_7\rangle\)

    (9,1)

    \(\langle e_2,e_3\rangle\)

    (12,1)

    \(\langle e_3,e_7\rangle\)

    (16,1)

    \(\langle e_1,e_7\rangle\)

  • Three-dimensional subspaces:

    Sequence of eigenvalues

    Three-dimensional

    \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^3 {{\bar{A}}}_i \}_{ i=1}^2\)

    Invariant subspace

    (8,1)

    \(\langle e_4, e_5,e_6\rangle\)

    (12,1)

    \(\langle e_3,e_4,e_6\rangle ,\langle e_3, e_5+\alpha e_4,e_6\rangle\)

    (16,1)

    \(\langle e_4, e_6,e_7\rangle ,\langle e_5+\alpha e_4, e_6,e_7\rangle\)

    (18,1)

    \(\langle e_2, e_3, e_5\rangle , \langle e_2, e_3, e_6+\alpha e_5\rangle\)

    (24,1)

    \(\langle e_3, e_5 ,e_7\rangle , \langle e_3, e_6+\alpha e_5,e_7\rangle\)

    (32,1)

    \(\langle e_1,e_5,e_7\rangle , \langle e_1, e_6+\alpha e_5,e_7\rangle\)

    (36,1)

    \(\langle e_2, e_3,e_7\rangle\)

    (48,1)

    \(\langle e_1, e_3,e_7\rangle\)

  • Four-dimensional subspaces:

    Sequence of eigenvalues

    Four-dimensional

    \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^4 {{\bar{A}}}_i \}_{ i=1}^2\)

    Invariant subspace

    (24,1)

    \(\langle e_3, e_4,e_5,e_6\rangle\)

    (32,1)

    \(\langle e_4, e_5,e_6,e_7\rangle\)

    (36,1)

    \(\langle e_2, e_3,e_4,e_6\rangle ,\langle e_2, e_3,e_5+\alpha e_4,e_6\rangle\)

    (48,1)

    \(\langle e_3, e_4,e_6,e_7\rangle ,\langle e_3, e_5+\alpha e_4,e_6,e_7\rangle\)

    (64,1)

    \(\langle e_1, e_4,e_6,e_7\rangle ,\langle e_1,e_5+\alpha e_4,e_6,e_7\rangle\)

    (72,1)

    \(\langle e_2, e_3,e_5,e_7\rangle ,\langle e_2, e_3,e_6+\alpha e_5,e_7\rangle\)

    (96,1)

    \(\langle e_1, e_3, e_5,e_7\rangle ,\langle e_1, e_3, e_6+\alpha e_5,e_7\rangle\)

    (144,1)

    \(\langle e_1, e_2,e_3,e_7\rangle\)

  • Five-dimensional subspaces:

    Sequence of eigenvalues

    Five-dimensional

    \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^5 {{\bar{A}}}_i \}_{ i=1}^2\)

    Invariant subspace

    (72,1)

    \(\langle e_2, e_3,e_4,e_5,e_6\rangle\)

    (96,1)

    \(\langle e_3, e_4,e_5,e_6,e_7\rangle\)

    (128,1)

    \(\langle e_1, e_4,e_5,e_6,e_7\rangle\)

    (144,1)

    \(\langle e_2, e_3,e_4,e_6,e_7\rangle ,\langle e_2, e_3,e_5+\alpha e_4,e_6,e_7\rangle\)

    (192,1)

    \(\langle e_1, e_3, e_4,e_6,e_7\rangle ,\langle e_1, e_3,e_5+\alpha e_4,e_6,e_7\rangle\)

    (288,1)

    \(\langle e_1, e_2,e_3,e_5,e_7\rangle , \langle e_1, e_2,e_3, e_6+\alpha e_5,e_7\rangle\)

  • Six-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^6 {{\bar{A}}}_i \}_{ i=1}^2\):

    (288,1): \(\langle e_2, e_3,e_4, e_5,e_6, e_7\rangle\)
    (384,1): \(\langle e_1,e_3, e_4,e_5,e_6, e_7\rangle\)
    (576,1): \(\langle e_1, e_2,e_3, e_4,e_6, e_7\rangle ,\langle e_1, e_2,e_3, e_5+\alpha e_4,e_6, e_7\rangle\)

  • Seven-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^7 {{\bar{A}}}_i \}_{ i=1}^2\):

    (1152, 1): \(\langle e_1, e_2,e_3,e_4,e_5,e_6,e_7\rangle\)

Example 6.2

$$\begin{aligned} A_1= \begin{pmatrix} 3&0&0&0&0&0&0\\ 0&2&0&0&0&0&0\\ 0&0&2&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&3 \end{pmatrix}, \qquad A_2= \begin{pmatrix} 0&0&0&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&1&0&0\\ 1&0&0&0&0&0&0 \end{pmatrix}. \end{aligned}$$
(6.2)

Using Algorithms A and B with \(s=1\), the complete list of common invariant subspaces of the set of matrices \(\{A_1,A_2\}\) is as follows:

  • Zero-dimensional subspaces: \(\{\varvec{0}\}\)

  • One-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{ {{\bar{A}}}_i \}_{ i=1}^2\):

    (2,1): \(\langle e_6\rangle\)
    (2,2): \(\langle e_5+e_6\rangle\)
    (3,1): \(\langle e_3\rangle\)
    (4,1): \(\langle e_7\rangle\)

  • Two-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^2 {{\bar{A}}}_i \}_{ i=1}^2\):

    (4,2): \(\langle e_5,e_6\rangle\)
    (6,1): \(\langle e_3,e_6\rangle\)
    (6,2): \(\langle e_3,e_5+e_6\rangle\)
    (8,1): \(\langle e_6,e_7\rangle\)
    (8,2): \(\langle e_5+e_6,e_7\rangle\)
    (9,1): \(\langle e_2,e_3\rangle\)
    (12,1): \(\langle e_3,e_7\rangle\)
    (16,1): \(\langle e_1,e_7\rangle\)

  • Three-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^3 {{\bar{A}}}_i \}_{ i=1}^2\):

    (12,2): \(\langle e_3, e_5,e_6\rangle\)
    (16,2): \(\langle e_5, e_6,e_7\rangle\)
    (18,1): \(\langle e_2, e_3,e_4\rangle , \langle e_2, e_3,e_6+\alpha e_4\rangle\)
    (18,2): \(\langle e_2, e_3,e_5+e_6\rangle\)
    (24,1): \(\langle e_3, e_6,e_7\rangle\)
    (24,2): \(\langle e_3, e_5+e_6,e_7\rangle\)
    (32,1): \(\langle e_1, e_6,e_7\rangle\)
    (32,2): \(\langle e_1, e_5+e_6,e_7\rangle\)
    (36,1): \(\langle e_2, e_3,e_7\rangle\)
    (48,1): \(\langle e_1, e_3,e_7\rangle\)

  • Four-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^4 {{\bar{A}}}_i \}_{ i=1}^2\):

    (36,1): \(\langle e_2, e_3,e_4,e_6\rangle\)
    (36,2): \(\langle e_2, e_3,e_4,e_5+e_6\rangle ,\langle e_2, e_3, e_5+\alpha e_4,e_6-\alpha e_4\rangle\)
    (48,2): \(\langle e_3, e_5,e_6,e_7\rangle\)
    (64,2): \(\langle e_1, e_5,e_6,e_7\rangle\)
    (72,1): \(\langle e_2, e_3,e_4,e_7\rangle , \langle e_2, e_3, e_6+\alpha e_4,e_7\rangle\)
    (72,2): \(\langle e_2, e_3,e_5+e_6,e_7\rangle\)
    (96,1): \(\langle e_1, e_3,e_6,e_7\rangle\)
    (96,2): \(\langle e_1, e_3,e_5+e_6,e_7\rangle\)
    (144,1): \(\langle e_1, e_2,e_3,e_7\rangle\)

  • Five-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^5 {{\bar{A}}}_i \}_{ i=1}^2\):

    (72,2): \(\langle e_2, e_3,e_4,e_5,e_6\rangle\)
    (144,1): \(\langle e_2, e_3,e_4,e_6,e_7\rangle\)
    (144,2): \(\langle e_2, e_3, e_4,e_5+e_6,e_7\rangle ,\langle e_2, e_3,e_5+\alpha e_4,e_6-\alpha e_4,e_7\rangle\)
    (192,2): \(\langle e_1, e_3,e_5,e_6,e_7\rangle\)
    (288,1): \(\langle e_1, e_2,e_3, e_4,e_7\rangle ,\langle e_1, e_2,e_3,e_6+\alpha e_4,e_7\rangle\)
    (288,2): \(\langle e_1, e_2,e_3,e_5+e_6,e_7\rangle\)

  • Six-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^6 {{\bar{A}}}_i \}_{ i=1}^2\):

    (288,2): \(\langle e_2,e_3, e_4,e_5,e_6, e_7\rangle\)
    (576,1): \(\langle e_1, e_2,e_3, e_4,e_6, e_7\rangle\)
    (576,2): \(\langle e_1, e_2,e_3, e_4,e_5+e_6, e_7\rangle ,\langle e_1, e_2,e_3, e_5+\alpha e_4,e_6-\alpha e_4,e_7\rangle\)

  • Seven-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2)\) for \(\{\bigwedge ^7 {{\bar{A}}}_i \}_{ i=1}^2\):

    (1152, 2): \(\langle e_1, e_2,e_3,e_4,e_5,e_6,e_7\rangle\)
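The tables above can be spot-checked numerically. Assuming \({{\bar{A}}}_i\) denotes the shifted matrix \(A_i+sI\) (an assumption on the notation, but one that reproduces every tabulated eigenvalue here), the seven-dimensional pair \((1152,2)\) is simply \(\det (A_i+I)\), the eigenvalue of the top exterior power, and invariance of a listed subspace such as \(\langle e_2,e_3\rangle\) can be verified directly. A minimal sketch in Python/NumPy:

```python
import numpy as np

# Matrices of Example (6.2); s = 1 shifts A_i to Abar_i = A_i + s*I
# (assumed interpretation of the bar, consistent with the tables).
A1 = np.diag([3, 2, 2, 1, 1, 1, 3]).astype(float)
A2 = np.zeros((7, 7))
A2[1, 3] = A2[2, 1] = A2[4, 4] = A2[5, 4] = A2[6, 0] = 1.0

s = 1.0
# Seven-dimensional eigenvalue pair: det(A_i + s*I) is the single
# eigenvalue of the top exterior power of Abar_i.
dets = [round(np.linalg.det(A + s * np.eye(7))) for A in (A1, A2)]
print(dets)  # [1152, 2]

# Invariance check for the tabulated subspace <e2, e3> (1-based indices):
# A V must lie in span(V) for both matrices.
V = np.eye(7)[:, [1, 2]]  # columns e2, e3
for A in (A1, A2):
    # project A V back onto span(V); zero residual means invariance
    proj = V @ np.linalg.lstsq(V, A @ V, rcond=None)[0]
    assert np.allclose(A @ V, proj)
print("invariant under both matrices")
```

The same check applies to any row of the tables: restrict each shifted matrix to the candidate subspace and compare the resulting eigenvalue products with the tabulated pair.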

Example 6.3

$$\begin{aligned} A_1&= \begin{pmatrix} 0&0&0&0&0&0&0&0&0\\ 0&2&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0&0\\ 0&0&0&-2&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&-1&0&0&0\\ 0&0&0&0&0&0&-1&0&0\\ 0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0&0 \end{pmatrix}, \qquad A_2= \begin{pmatrix} 0&0&0&1&0&0&0&0&0\\ -1&0&0&0&1&0&0&0&0\\ 0&0&0&0&0&1&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&-1&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&-1&0&0\\ 0&0&0&0&0&0&0&0&0 \end{pmatrix},\\ A_3&= \begin{pmatrix} 0&-1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 1&0&0&0&-1&0&0&0&0\\ 0&1&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&-1&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0 \end{pmatrix}. \end{aligned}$$
(6.3)

Using Algorithms A and B with \(s=3\), the complete list of common invariant subspaces of the set of matrices \(\{A_1,A_2,A_3\}\) is as follows:

  • Zero-dimensional subspaces: \(\{\varvec{0}\}\)

  • One-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{ {{\bar{A}}}_i \}_{ i=1}^3\):

    (3, 3, 3): \(\langle e_1+e_5\rangle , \langle e_9+\alpha (e_1+e_5) \rangle\)

  • Two-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^2 {{\bar{A}}}_i \}_{ i=1}^3\):

    (8, 9, 9): \(\langle e_3, e_6\rangle , \langle e_7+\alpha e_6, e_8-\alpha e_3\rangle\)
    (9, 9, 9): \(\langle e_1+e_5,e_9\rangle\)

  • Three-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^3 {{\bar{A}}}_i \}_{ i=1}^3\):

    (24, 27, 27): \(\langle e_3,e_6,e_9+\alpha (e_1+e_5)\rangle ,\) \(\langle e_3,e_1+e_5,e_6\rangle ,\) \(\langle e_7, e_8, e_9+\alpha (e_1+e_5)\rangle ,\) \(\langle e_1+e_5, e_7+\alpha e_6, e_8-\alpha e_3\rangle ,\) \(\langle e_7+\alpha e_6, e_8-\alpha e_3,e_9\rangle ,\) \(\langle e_7+\alpha e_6, e_8-\alpha e_3,e_9+\alpha (e_1+e_5)\rangle\)
    (15, 27, 27): \(\langle e_2,e_4,e_5- e_1\rangle\)

  • Four-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^4 {{\bar{A}}}_i \}_{ i=1}^3\):

    (64, 81, 81): \(\langle e_3,e_6,e_7,e_8\rangle\)
    (72, 81, 81): \(\langle e_3, e_1+e_5, e_6,e_9\rangle ,\) \(\langle e_1+e_5, e_7+\alpha e_6, e_8-\alpha e_3,e_9\rangle\)
    (45, 81, 81): \(\langle e_1,e_2,e_4,e_5\rangle ,\) \(\langle e_2,e_4,e_5- e_1,e_9+\alpha e_1\rangle\)

  • Five-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^5 {{\bar{A}}}_i \}_{ i=1}^3\):

    (192, 243, 243): \(\langle e_3,e_1+e_5,e_6,e_7,e_8\rangle ,\) \(\langle e_3,e_6,e_7,e_8,e_9+\alpha ( e_1+e_5)\rangle\)
    (120, 243, 243): \(\langle e_2,e_3,e_4,e_5- e_1,e_6\rangle ,\) \(\langle e_2,e_4,e_5- e_1,e_7+\alpha e_6 ,e_8-\alpha e_3\rangle\)
    (135, 243, 243): \(\langle e_1,e_2,e_4,e_5,e_9\rangle\)

  • Six-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^6 {{\bar{A}}}_i \}_{ i=1}^3\):

    (576, 729, 729): \(\langle e_3,e_1+e_5, e_6,e_7,e_8,e_9\rangle\)
    (360, 729, 729): \(\langle e_1,e_2,e_3,e_4, e_5,e_6\rangle ,\) \(\langle e_1, e_2, e_4, e_5, e_7+\alpha e_6, e_8-\alpha e_3\rangle ,\) \(\langle e_2,e_3,e_4,e_5- e_1,e_6,e_9+\alpha e_1\rangle ,\) \(\langle e_2, e_4, e_5+e_1, e_7+\alpha e_6, e_8-\alpha e_3, e_9+\beta e_1\rangle\)

  • Seven-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^7 {{\bar{A}}}_i \}_{ i=1}^3\):

    (960, 2187, 2187): \(\langle e_2,e_3,e_4,e_5- e_1,e_6,e_7,e_8\rangle\)
    (1080, 2187, 2187): \(\langle e_1,e_2,e_3,e_4, e_5,e_6,e_9\rangle ,\) \(\langle e_1, e_2, e_4, e_5, e_7+\alpha e_6, e_8 -\alpha e_3, e_9\rangle\)

  • Eight-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^8 {{\bar{A}}}_i \}_{ i=1}^3\):

    (2880, 6561, 6561): \(\langle e_1,e_2,e_3,e_4, e_5,e_6,e_7,e_8\rangle ,\) \(\langle e_2,e_3,e_4, e_5-e_1,e_6,e_7,e_8,e_9+\alpha e_1\rangle\)

  • Nine-dimensional subspaces, listed by the sequence of eigenvalues \(\lambda =(\lambda _1,\lambda _2,\lambda _3)\) for \(\{\bigwedge ^9 {{\bar{A}}}_i \}_{ i=1}^3\):

    (8640, 19683, 19683): \(\langle e_1,e_2,e_3,e_4, e_5,e_6,e_7,e_8,e_9\rangle\)
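As in the previous example, the entries can be spot-checked numerically. Assuming again that \({{\bar{A}}}_i = A_i+sI\) with \(s=3\) (an assumption that reproduces the tabulated values), the nine-dimensional triple \((8640, 19683, 19683)\) is \(\det (A_i+3I)\), and the one-dimensional subspace \(\langle e_1+e_5\rangle\) with eigenvalue sequence \((3,3,3)\) is a common eigenvector of the shifted matrices. A minimal sketch:

```python
import numpy as np

# Matrices of Example (6.3), entered sparsely from their nonzero entries
# (0-based indices); s = 3 shifts A_i to Abar_i = A_i + s*I (assumed notation).
A1 = np.diag([0, 2, 1, -2, 0, -1, -1, 1, 0]).astype(float)

A2 = np.zeros((9, 9))
for i, j, v in [(0, 3, 1), (1, 0, -1), (1, 4, 1), (2, 5, 1),
                (4, 3, -1), (7, 6, -1)]:
    A2[i, j] = v

A3 = np.zeros((9, 9))
for i, j, v in [(0, 1, -1), (3, 0, 1), (3, 4, -1), (4, 1, 1),
                (5, 2, 1), (6, 7, -1)]:
    A3[i, j] = v

s = 3.0
# Nine-dimensional eigenvalue triple: det(A_i + s*I), the single
# eigenvalue of the top exterior power of each Abar_i.
dets = [round(np.linalg.det(A + s * np.eye(9))) for A in (A1, A2, A3)]
print(dets)  # [8640, 19683, 19683]

# The tabulated one-dimensional subspace <e1 + e5>: each shifted matrix
# scales it by 3, matching the eigenvalue sequence (3, 3, 3).
v = np.eye(9)[:, 0] + np.eye(9)[:, 4]
for A in (A1, A2, A3):
    assert np.allclose((A + s * np.eye(9)) @ v, 3 * v)
print("common eigenvector of all three shifted matrices")
```

Note that \(A_2\) and \(A_3\) are nilpotent, so \(\det (A_i+3I)=3^9=19683\) for \(i=2,3\), in agreement with the table.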