1 Introduction

The Einstein Field Equations (EFE) are among the most interesting field equations in physics, and from a mathematical point of view the search for methods to obtain solutions has led to a large number of mathematical results. The first exact solution was obtained by Karl Schwarzschild in 1916, but the meaning of this solution was debated for a long time. We now know that the Schwarzschild solution represents a static black hole. Its generalization to a stationary solution had to wait more than 40 years, until it was found by Roy Kerr. These solutions, which represent static and stationary black holes, have been a cornerstone of the theory of general relativity and its interpretation. After the discovery of Kerr’s solution, the area of exact solutions of the EFE has been very active, see for example [10]. Several mathematical methods have been developed with great success to find exact solutions of the EFE. One of the most successful has been the method of subspaces and subgroups, which is capable of generating exact solutions on demand: the physical content of the solution can be decided from the beginning. That is why we adopt this solution method in this work.

On the other hand, interest in higher dimensional theories began in 1919 with Theodor Kaluza’s proposal of a five-dimensional spacetime that unified gravitation with electromagnetism. Kaluza proposed that the metric of a five-dimensional spacetime can be decomposed as \(g^5_{\mu \nu }=g^4_{\mu \nu }+I^2A_\mu A_\nu \) for \(\mu ,\nu =1,\cdots ,4\), together with \(g^5_{5\mu }=IA_\mu \) and \(g^5_{55}=I^2\), where \(A_\mu \) is the electromagnetic four-potential and I is related to a scalar field called the dilaton, see for example [8]. This idea has evolved toward the unification of all interactions: the electromagnetic, strong and weak interactions with gravity. However, although such a theory includes the quantum interactions, it is neither renormalizable nor quantizable. This motivated string and superstring theory as a quantizable, renormalizable higher dimensional theory, see for example [2]. The price to pay is that the extra dimensions must be singular. In this paper we propose that the extra dimensions form an (\(n-2\))-dimensional space with \(n-2\) Killing vectors, which may be singular and is interesting enough to be studied.

In this work we aim to find exact solutions of the EFE from the mathematical point of view, using the method of subspaces and subgroups, which has proved very successful in generating a large number of exact solutions.

We start with an (\(n+2\))-dimensional space and are interested in 4-dimensional spacetimes that are stationary and axially symmetric. This means that the 4-dimensional spacetime contains two commuting Killing vectors, while the extra-dimensional space is (\(n-2\))-dimensional with \(n-2\) commuting Killing vectors, so that the whole space contains n commuting Killing vectors. Thus, in this case we can work in a coordinate system where the metric depends only on the two variables \(x^1\) and \(x^2\), and the metric tensor has the form

$$\begin{aligned} \hat{g} = f \ ( {\textrm{d}}x ^1\otimes {\textrm{d}}x ^1 + {\textrm{d}}x ^2 \otimes {\textrm{d}}x ^2 ) + g_{\mu \nu } {\textrm{d}}x ^\mu \otimes {\textrm{d}}x ^\nu \end{aligned}$$
(1)

where the components of \(\hat{g}\), namely f and \(g_{\mu \nu }\) for \(\mu , \nu = 3, \ldots , n+2\), depend only on the two variables \(x^1\) and \(x^2\). In the following, uppercase indices run as \(A,B=1,\ldots , n+2\) and Greek indices as \(\mu , \nu = 3,\ldots , n+2\).

Throughout this paper, the set of matrices of size \(m\times n\) with entries in \(\mathbb {R}\) is denoted by \(\textbf{M}_{m \times n} \); we write \(\textbf{M}_{m}\) if \(m = n\). The identity matrix and the zero matrix are denoted by \(I_n\) and \(0_n\), respectively, and \(\textbf{Sym}_{n}\) is the subset of symmetric matrices in \(\textbf{M}_{n}\).

This work is organized as follows. In Section 2 we follow [6] to write the main field equations, obtaining the Ricci tensor for an (\(n+2\))-dimensional space with n commuting Killing vectors. Section 3 introduces the one-dimensional subspaces generated by a harmonic parameter, and Section 4 presents the algebraic basis of the matrices used in this work. In Section 5 we express these matrices in their Jordan normal form in order to solve the final algebraic equation. Sections 6 and 7 compute the corresponding one-dimensional subspaces and classify the equivalence classes of the matrix A. In Section 8, using the Jordan form of the matrices, we obtain the solutions of the algebraic equations for the group \(SL(5,\mathbb {R})\). Finally, Section 9 contains some conclusions.

2 Field Equations

In this section we derive the main field equations for an (\(n+2\))-dimensional Riemannian space with n commuting Killing vectors. The metric components depend only on the coordinates \(x^1\) and \(x^2\), that is, \(\hat{g}_{A B}=\hat{g}_{A B}(x^1,x^2)\). In this case, the Christoffel symbols are given by

$$\begin{aligned} \Gamma ^C_{A B} \equiv \frac{1}{2} \hat{g}^{C D} \ (\hat{g}_{D A , B} + \hat{g}_{D B , A} - \hat{g}_{A B , D} )\ . \end{aligned}$$
(2)

For the metric (1) we have

$$\begin{aligned} \begin{array}{llll} \Gamma ^1_{1 1} = \frac{1}{2} ( \ln f )_{, 1}\ , &\quad \Gamma ^1_{1 2} = \frac{1}{2} ( \ln f )_{, 2}\ , &\quad \Gamma ^1_{2 2} = - \frac{1}{2} ( \ln f )_{, 1}\ , &\quad \Gamma ^{\mu } _{\nu i} = \frac{1}{2} g^{\mu \omega } g_{\omega \nu , i}\ ,\\ \Gamma ^2_{2 2} = \frac{1}{2} ( \ln f )_{, 2}\ , &\quad \Gamma ^2_{1 2} = \frac{1}{2} ( \ln f )_{, 1}\ , &\quad \Gamma ^2_{1 1} = - \frac{1}{2} ( \ln f )_{, 2}\ , &\quad \Gamma ^i _{\mu \nu } = - \frac{1}{2} g_{\mu \nu }{} ^{, i}\ , \end{array} \end{aligned}$$
(3)

the remaining components are zero.

In order to compute the Ricci tensor with our metric,

$$\begin{aligned} R_{A B} = \Gamma ^C_{A B , C} - \Gamma ^C_{A C , B} + \Gamma ^C_{D C} \Gamma ^D_{A B} - \Gamma ^C_{D B} \Gamma ^D_{A C}\ , \end{aligned}$$
(4)

it is convenient to use the variable \(z = x ^1 + i x ^2 \) and its complex conjugate \({\bar{z}}\). Hence, the non-zero components of the Ricci tensor are as follows:

$$\begin{aligned} R_{1 1} ={} & -2 (\ln f \alpha )_{, z {\bar{z}}} - \frac{1}{2} g^{\mu \alpha } g_{\alpha \nu , z} g^{\nu \beta } g_{\beta \mu , {\bar{z}}} - (\ln \alpha )_{, z z} + (\ln \alpha )_{, z} (\ln f)_{, z} \end{aligned}$$
(5)
$$\begin{aligned} & - \frac{1}{4} g^{\mu \alpha } g_{\alpha \nu , z } g^{\nu \beta } g_{\beta \mu , z} - (\ln \alpha )_{, {\bar{z}} {\bar{z}}} + (\ln \alpha )_{, {\bar{z}}} (\ln f )_{, {\bar{z}}} - \frac{1}{4} g^{\mu \alpha } g_{\alpha \nu , {\bar{z}}} g^{\nu \beta } g_{\beta \mu , {\bar{z}}} \end{aligned}$$
(6)
$$\begin{aligned} R_{2 2} ={} & -2 (\ln f \alpha )_{, z {\bar{z}}} - \frac{1}{2} g^{\mu \alpha } g_{\alpha \nu , z} g^{\nu \beta } g_{\beta \mu , {\bar{z}}} + (\ln \alpha )_{, z z} - (\ln \alpha )_{, z} (\ln f)_{, z} \end{aligned}$$
(7)
$$\begin{aligned} & + \frac{1}{4} g^{\mu \alpha } g_{\alpha \nu , z} g^{\nu \beta } g_{\beta \mu , z} + (\ln \alpha )_{, {\bar{z}} {\bar{z}}} - (\ln \alpha )_{, {\bar{z}}} (\ln f )_{, {\bar{z}}} + \frac{1}{4} g^{\mu \alpha } g_{\alpha \nu , {\bar{z}} } g^{\nu \beta } g_{\beta \mu , {\bar{z}}} \end{aligned}$$
(8)
$$\begin{aligned} R_{1 2} ={} & i \, \Big [ (\ln \alpha )_{, {\bar{z}} {\bar{z}}} - (\ln \alpha )_{, {\bar{z}}} (\ln f)_{, {\bar{z}}} + \frac{1}{4} g_{\alpha \beta , {\bar{z}}} g^{\beta \gamma } g_{\gamma \delta , {\bar{z}}} g^{\delta \alpha } - (\ln \alpha )_{, z z} + (\ln \alpha )_{, z} (\ln f )_{, z} \end{aligned}$$
(9)
$$\begin{aligned} & - \frac{1}{4} g_{\alpha \beta , z} g^{\beta \gamma } g_{\gamma \delta , z} g^{\delta \alpha } \Big ] \end{aligned}$$
(10)
$$\begin{aligned} R_{\mu }{} ^{\nu } ={} & - \frac{1}{f\alpha } \, \big [ (\alpha g_{\mu \omega , {\bar{z}}} g^{\omega \nu } )_{, z} + ( \alpha g_{\mu \omega , z} g^{\omega \nu } )_{, {\bar{z}}} \big ] \end{aligned}$$
(11)

where \(\det g_{\mu \nu } = - \alpha ^2\).

We will use matrix notation: let us define the matrix g from the components of the metric tensor \(g_{\mu \nu }\) as follows:

$$\begin{aligned} (g)_{\mu \nu } = g_{\mu \nu }\ . \end{aligned}$$
(12)

Note that the matrix g is real and symmetric and has determinant \(-\alpha ^2\); that is, denoting by T the transpose of a matrix,

$$\begin{aligned} \det g&= -\alpha ^2 \end{aligned}$$
(13)
$$\begin{aligned} {\bar{g}}&= g \end{aligned}$$
(14)
$$\begin{aligned} g^T&= g \end{aligned}$$
(15)

The vacuum Einstein equations are given by

$$\begin{aligned} R_{A B} = 0 \ . \end{aligned}$$
(16)

From \(R_{\mu } ^{\nu } = 0 \) we obtain the chiral equations

$$\begin{aligned} \ ( \alpha g_{, {\bar{z}}} g^{-1} ) _{, z} + \ ( \alpha g_{, z} g^{-1} )_{, {\bar{z}}} = 0 \ . \end{aligned}$$
(17)

Its trace gives a differential equation for \(\alpha \):

$$\begin{aligned} \alpha _{, z {\bar{z}}} = 0 \ . \end{aligned}$$
(18)

From now on, the index Z will take the values z and \(\bar{z}\). Using \(R_{1 1} - R_{2 2} \pm 2i R_{1 2} = 0 \) we find

$$\begin{aligned} (\ln f \alpha )_{, Z} = \frac{\alpha _{, Z Z}}{\alpha _{, Z}} + \frac{{\textrm{tr}}( g_{, Z} g^{-1} )^2}{4(\ln \alpha )_{, Z}}\ . \end{aligned}$$
(19)

Both equations (for \(Z = z\) and \(Z = {\bar{z}}\)) satisfy

$$\begin{aligned} (\ln f \alpha )_{, z {\bar{z}}} = - \frac{1}{4} {\textrm{tr}}( g_{, z} g^{-1} g_{, {\bar{z}}} g^{-1} ) \ . \end{aligned}$$
(20)

Using the transformation

$$\begin{aligned} g \rightarrow -\alpha ^{-2/n} g \end{aligned}$$
(21)

we normalize g, i.e., \(\det g = ( -1 ) ^{n+1}\). Therefore, g is a symmetric matrix in \(SL(n,\mathbb {R})\).
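
As a quick consistency check, the effect of (21) on the determinant can be verified with a computer algebra system. The following is a minimal sympy sketch for the illustrative case \(n = 3\), with a diagonal test matrix of our own choosing whose determinant is \(-\alpha^2\):

```python
# Sketch: check that the normalization (21) yields det g = (-1)^{n+1}
# for n = 3 and a diagonal test matrix with det g = -alpha^2.
import sympy as sp

alpha = sp.symbols('alpha', positive=True)
n = 3
g = sp.diag(alpha**2, 1, -1)                 # det g = -alpha^2
g_norm = -alpha**sp.Rational(-2, n) * g      # transformation (21)
assert sp.simplify(g_norm.det() - (-1)**(n + 1)) == 0
print(g_norm.det())                          # 1, i.e. (-1)^{3+1}
```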

The chiral equation (17) does not change under the transformation (21), whereas (19) takes the form

$$\begin{aligned} ( \ln f \alpha ^{1-1/n} )_{, Z} = \frac{ \alpha _{, Z Z} }{\alpha _{, Z} } + \frac{{\textrm{tr}}( g_{, Z} g^{-1} )^2}{4(\ln \alpha )_{, Z} } \ . \end{aligned}$$
(22)

The chiral equation (17) is invariant under transformations

$$\begin{aligned} g \rightarrow C g C ^T \end{aligned}$$
(23)

where \( C \in SL(n,\mathbb {R}) \) is a constant matrix. The general solution of the differential equation (18) for \(\alpha \) is given by

$$\begin{aligned} \alpha ( z , \bar{z} ) = \alpha _z ( z ) + \alpha _{\bar{z}} ( \bar{z} ) \end{aligned}$$
(24)

where \( \alpha _z \) and \( \alpha _{\bar{z}} \) are arbitrary functions. Choosing Weyl coordinates, i.e.,

$$\begin{aligned} \alpha = \frac{ z + \bar{z} }{2} \ , \end{aligned}$$
(25)

equations (22) reduce to

$$\begin{aligned} ( \ln f \alpha ^{1-1/n} )_{, Z} = \frac{1}{2} \alpha \,{\textrm{tr}}( g_{, Z} g^{-1} )^2 \ . \end{aligned}$$
(26)
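
The reduction from (22) to (26) is a short computation that can also be checked symbolically. In the sketch below (sympy, for \(Z = z\)), the trace term \({\textrm{tr}}( g_{, z} g^{-1} )^2\) is represented by an unspecified placeholder function \(T(z,\bar z)\) of our own:

```python
# Sketch: with alpha = (z + zbar)/2, the right-hand side of (22)
# reduces to (alpha/2) * tr(g_{,z} g^{-1})^2, i.e. to (26).
import sympy as sp

z, zb = sp.symbols('z zbar')
T = sp.Function('T')(z, zb)          # placeholder for tr(g_{,z} g^{-1})^2
alpha = (z + zb) / 2

assert sp.diff(alpha, z, zb) == 0    # alpha solves (18)
rhs_22 = sp.diff(alpha, z, 2)/sp.diff(alpha, z) \
         + T/(4*sp.diff(sp.log(alpha), z))
assert sp.simplify(rhs_22 - alpha*T/2) == 0
print("reduction (22) -> (26) verified for Z = z")
```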

The next sections introduce the quantities needed to transform the differential equation (17).

3 One-Dimensional Subspaces

Suppose that g depends on a parameter \(\xi \) which is an arbitrary function of the variables z and \(\bar{z}\). Then the chiral equation (17) becomes

$$\begin{aligned} 2\alpha \ (g_{, \xi } g^{-1})_{, \xi } \xi _{, z} \xi _{, {\bar{z}}} + g_{, \xi } g^{-1} \ ((\alpha \xi _{, z})_{, {\bar{z}}} + (\alpha \xi _{, {\bar{z}}} )_{, z} ) = 0 \ . \end{aligned}$$
(27)

Now we assume that the parameter \(\xi \) satisfies the Laplace equation

$$\begin{aligned} ( \alpha \xi _{, z} )_{, {\bar{z}}} + ( \alpha \xi _{, {\bar{z}}} )_{, z} = 0 \ , \end{aligned}$$
(28)

then (27) reduces to \((g_{, \xi } g^{-1})_{, \xi } = 0\), so that \(g_{, \xi } g^{-1} = A\) is a constant matrix. Note that each new solution of the Laplace equation gives another solution for g. From the properties of the matrix g we obtain

$$\begin{aligned} {\bar{A}}&= A \end{aligned}$$
(29)
$$\begin{aligned} {\textrm{tr}}A&= 0 \end{aligned}$$
(30)
$$\begin{aligned} A g&= g A^T \end{aligned}$$
(31)

Equations (29) and (30) imply that A belongs to the Lie algebra \(\mathfrak {sl} (n,\mathbb {R} )\), the Lie algebra corresponding to the group \(SL(n,\mathbb {R})\). The matrix A transforms as

$$\begin{aligned} A \rightarrow C A C ^{-1} \end{aligned}$$
(32)

under the transformation (23). The relation (32) separates the set of matrices A into equivalence classes. We will work with a representative matrix of each class.

4 The Subspace \( \mathcal {I} ( A ) \)

It is possible to find the general form of g given A if we consider the property (15) together with the intertwining relation (31) satisfied by the matrix A. To do so, let us define the following set.

Definition 1

For any non-zero matrix \( A \in \textbf{M}_{n}\), define the set \(\mathcal {I}( A )\) as

$$\begin{aligned} \mathcal {I}( A ) = \ \{ g \in \textbf{Sym}_n : A g = g A ^T \}\ . \end{aligned}$$
(33)

Observe that \( g \in \mathcal {I}(A) \). Thus, the problem of finding the form of g has been transformed into a linear algebra problem. First, let us derive the following useful properties.

Theorem 1

For any non-zero matrix \( A \in \textbf{M}_{n}\), \(\mathcal {I}( A )\) is a subspace of the vector space \(\textbf{M}_{n}\).

Proof

Let \(\alpha \in \mathbb {R} \) and let \(X, Y \in \mathcal {I}(A)\). We have \( ( \alpha X ) ^T = \alpha X ^T = \alpha X\) and \( ( X + Y ) ^T = X ^T + Y ^T = X + Y \). Then, \(A ( \alpha X ) = \alpha ( A X ) = \alpha ( X A ^T ) = ( \alpha X ) A ^T \) and \( A ( X + Y ) = A X + A Y = X A ^T + Y A ^T = ( X + Y ) A ^T \), so that \(\alpha X \in \mathcal {I}(A)\) and \(X + Y \in \mathcal {I}(A)\).\(\square \)

Definition 2

For any non-zero matrix \( A \in \textbf{M}_{n}\) and \( \xi \in \mathbb {R} \), define

$$\begin{aligned} e^{ \xi A } = \sum _{ k = 0 } ^\infty \frac{ \xi ^k }{ k ! } A ^k \ . \end{aligned}$$
(34)

For more information on the matrix exponential see, for example, [3, 11]. The following lemma relates the matrix exponential to the relation (31).

Lemma 1

Let \( A, g \in \textbf{M}_{n}\) be non-zero matrices and \( \xi \in \mathbb {R} \). Then \( e ^{ \xi A } g = g e ^{ \xi A ^T } \) if and only if \( A g = g A ^T \).

Proof

We define the matrix function \( F ( \xi ) = e ^{ \xi A } g e ^{ -\xi A ^T } \). Its derivative is \( F' ( \xi ) = e ^{ \xi A } \ ( A g - g A ^T ) e ^{ -\xi A ^T } \). If \( g \in \mathcal {I} ( A ) \), then \( F' ( \xi ) = 0 \), so that \( F ( \xi ) = F ( 0 ) \). Therefore, \( e ^{ \xi A } g = g e ^{ \xi A ^T } \). Now, if \( e ^{ \xi A } g = g e ^{ \xi A ^T } \), then \( F ( \xi ) = F ( 0 ) \). Its derivative at \( \xi = 0 \) gives \( A g = g A ^T \). \(\square \)
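
Lemma 1 is also easy to test numerically. The following sketch (numpy/scipy, with arbitrary test values of our own choosing) checks the exponentiated relation for the rotation-scaling block \(\Lambda \) and the matrix \(g = \textrm{diag}(1,-1) \in \mathcal {I}(\Lambda )\), anticipating Lemma 4 below:

```python
# Numerical sanity check of Lemma 1: if A g = g A^T
# then e^{xi A} g = g e^{xi A^T}.
import numpy as np
from scipy.linalg import expm

a, b, xi = 0.7, 1.3, 0.42                  # arbitrary test values
A = np.array([[a, -b], [b, a]])
g = np.array([[1.0, 0.0], [0.0, -1.0]])    # g in I(A), cf. Lemma 4

assert np.allclose(A @ g, g @ A.T)         # A g = g A^T
assert np.allclose(expm(xi*A) @ g, g @ expm(xi*A.T))
print("Lemma 1 check passed")
```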

It is convenient to reduce the matrices we work with to simple matrices using the equivalence relation (23). In particular, to facilitate the computation of the matrix exponentials, we will use the Jordan matrices introduced in the next section.

5 Jordan Matrices

The invariance (23) allows us to use normal forms for the matrix A, which is then used to determine the matrix g. In this work we choose the real Jordan form of a matrix because of its simplicity, and because in this representation the matrix A is always real even if it has complex conjugate eigenvalues. For an example of using the natural normal form of matrices instead of the Jordan form, see [5], where the group \(SL(3,\mathbb {R} )\) was discussed. Here we focus on the group \(SL(5,\mathbb {R} )\) in its Jordan representation. For more information on the Jordan form, see, for example, [1, 4, 9].

It is well known that a real square matrix may have real and complex eigenvalues, where for each complex eigenvalue \(\alpha +\beta i\) its complex conjugate \(\alpha -\beta i\) is also an eigenvalue. To avoid including the complex values explicitly in the Jordan matrix, each such pair \(\alpha \pm \beta i\) can be represented by the real \(2\times 2\) matrix

$$\begin{aligned} \Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array}\right] \ . \end{aligned}$$
(35)

Therefore, we will consider Jordan blocks of two kinds: one for the real eigenvalues and another for the pairs of complex conjugate eigenvalues. Furthermore, it is convenient for our work to represent the Jordan matrices as decomposed into blocks which make the type of eigenvalues visible. Consequently, we introduce several types of Jordan blocks and matrices, more general than the standard notions from the literature, as follows.

Definition 3

For \( \lambda \in \mathbb {R} \), a Jordan cell \( J _n (\lambda ) \in \textbf{M}_{n}\) is an upper triangular matrix of the form

$$\begin{aligned} J _n (\lambda ) = \left[ \begin{array}{ccccc} \lambda &{} 1 &{} 0 &{} \cdots &{} 0 \\ &{} \lambda &{} 1 &{} \cdots &{} 0 \\ &{} &{} \ddots &{} \ddots &{} \vdots \\ &{} &{} &{} \lambda &{} 1 \\ &{} &{} &{} &{} \lambda \end{array}\right] \end{aligned}$$
(36)

Definition 4

Suppose

$$\begin{aligned} \Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array}\right] \in \textbf{M}_{2}\ ,\ \hbox {with}\ \beta > 0 \ . \end{aligned}$$
(37)

A Jordan \(\Lambda \)-block of the first kind \( J _n ( \Lambda ) \in \textbf{M}_{2n}\) is a block upper triangular matrix of the form

$$\begin{aligned} J _n ( \Lambda ) = \left[ \begin{array}{ccccc} \Lambda &{} I _2 &{} 0 _2 &{} \cdots &{} 0 _2 \\ &{} \Lambda &{} I _2 &{} \cdots &{} 0 _2 \\ &{} &{} \ddots &{} \ddots &{} \vdots \\ &{} &{} &{} \Lambda &{} I _2 \\ &{} &{} &{} &{} \Lambda \end{array}\right] \ . \end{aligned}$$
(38)

In the remainder of the article, \(\Lambda \), if not specified otherwise, is supposed to have the form given in (37).

Definition 5

Let \( \lambda \in \mathbb {R} \) and \( n_1, \ldots , n_m \) be positive integers such that \( n = n_1 + \ldots + n_m \). A Jordan matrix \( J _{ n _1, \ldots , n _m } ( \lambda ) \in \textbf{M}_{n}\) is a block diagonal matrix

$$\begin{aligned} J _{ n _1, \ldots , n _m } ( \lambda ) = \textrm{diag}\left[ J_{n_1} ( \lambda ), \ldots , J_{n_m} ( \lambda ) \right] \end{aligned}$$
(39)

where \(J_{n_i} ( \lambda )\) are Jordan cells for all \(i = 1, \ldots , m \).

Definition 6

Let \( n _1, \ldots , n _m \) be positive integers such that \( n = n _1 + \ldots + n _m \). A Jordan \(\Lambda \)-block of the second kind \( J _{ n _1, \ldots , n _m } ( \Lambda ) \in \textbf{M}_{2n}\) is a block diagonal matrix

$$\begin{aligned} J_ {n _1, \ldots , n _m } ( \Lambda ) = \textrm{diag}\left[ J _{n_1} ( \Lambda ), \ldots , J _{n _m} ( \Lambda ) \right] \end{aligned}$$
(40)

where \(J_{n _i} ( \Lambda ) \) are Jordan \(\Lambda \)-blocks of the first kind for all \( i = 1, \ldots , m \).

Definition 7

Let \( \lambda _i \in \mathbb {R} \), \(i \in \{ 1,2,\cdots ,p \}\) and

$$\begin{aligned} \Lambda _k = \left[ \begin{array}{cc} \alpha _k &{} -\beta _k \\ \beta _k &{} \alpha _k \end{array}\right] \in \textbf{M}_{2} ,\ \ k \in \{ 1,2,\cdots ,q \} \end{aligned}$$
(41)

with \( \beta _k > 0 \), such that all scalars and matrices are distinct. Let \( m ^i _1, \ldots , m ^i _{r _i} \) and \( n ^k _1, \ldots , n ^k _{s _k} \) be positive integers such that \( m ^i = m ^i _1 + \ldots + m ^i _{r _i} \), \( n ^k = n ^k _1 + \ldots + n ^k _{s _k} \), \( m = m ^1 + \ldots + m ^p \) and \( n = n ^1 + \ldots + n ^q \). A generalized Jordan matrix \( J \in \textbf{M}_{m + 2n}\) is defined as a block diagonal matrix of the form

$$\begin{aligned} J = \textrm{diag}\left[ J _{m ^1 _1, \ldots , m ^1 _{r _1}} ( \lambda _1 ), \ldots , J _{m ^p _1, \ldots , m ^p _{r _p}} ( \lambda _p ), J _{n ^1 _1, \ldots , n ^1 _{s _1}} ( \Lambda _1 ), \ldots , J _{n ^q _1, \ldots , n ^q _{s _q}} ( \Lambda _q ) \right] \end{aligned}$$
(42)

where \( J _{m ^i _1, \ldots , m ^i _{r _i}} ( \lambda _i ) \) are Jordan matrices for all \( \lambda _i \), and \( J _{n ^k _1, \ldots , n ^k _{s _k}} ( \Lambda _k ) \) are Jordan \(\Lambda \)-blocks of the second kind for all \( i \in \{ 1, \ldots , p \} \) and \( k \in \{ 1, \ldots , q \} \).

Theorem 2

(from [4]) Each \( A \in \textbf{M}_{n}\) is similar, via a real similarity transformation matrix, to a generalized Jordan matrix of the form given in Definition 7, in which the scalars \( \lambda _1, \ldots , \lambda _p \) are the real eigenvalues of A and its complex conjugate eigenvalues \( \alpha _k \pm i \beta _k \) are represented by the matrices \( \Lambda _k \) for all \( k \in \{ 1, \ldots , q \} \).

Theorem 3

Let \( \lambda \in \mathbb {R} \) and \(J _n ( \lambda )\) be a Jordan cell. Then \(\mathcal {I}(J_n (\lambda ))\) coincides with the set of all real square matrices of order n which are of the form

$$\begin{aligned} \left[ \begin{array}{cccc} x _1 &{} x _2 &{} \cdots &{} x _n \\ x _2 &{} x _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ x _n &{} 0 &{} \cdots &{} 0 \end{array}\right] \ . \end{aligned}$$
(43)

Proof

Let

$$\begin{aligned} X = \left[ \begin{array}{ccc} x _{11} &{} \cdots &{} x _{1n} \\ \vdots &{} \ddots &{} \vdots \\ x _{1n} &{} \cdots &{} x _{nn} \end{array}\right] \in \textbf{M}_{n} \end{aligned}$$
(44)

The intertwining relation \(J _n ( \lambda ) X = X J ^T _n ( \lambda )\) implies the following:

$$\begin{aligned} x _{i+1,j} = x _{i,j+1}\ \hbox {for}\ i, j \in \{ 1, \ldots , n - 1 \}, \ \ x _{k n} = 0\ \hbox {for}\ k \in \{ 2, \ldots , n \} . \end{aligned}$$
(45)

Equations (45) mean that all entries on any antidiagonal of X are equal, and that all antidiagonals below the main antidiagonal vanish.\(\square \)
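
Since \(A g = g A^T\) is a linear system in the entries of g, the subspace \(\mathcal {I}(A)\) can be computed mechanically. The following sympy sketch does this for the illustrative Jordan cell \(J_3(\lambda )\), with symbol names of our own, and reproduces the form (43):

```python
# Sketch: solve A g = g A^T for symmetric g with A = J_3(lambda);
# the solution is the Hankel-type form (43) of Theorem 3.
import sympy as sp

lam = sp.symbols('lambda_')
n = 3
A = sp.Matrix.jordan_block(n, lam)

xs = sp.symbols(f'x0:{n*(n+1)//2}')
g = sp.zeros(n, n)
k = 0
for i in range(n):
    for j in range(i, n):
        g[i, j] = g[j, i] = xs[k]
        k += 1

sol = sp.solve(list(A*g - g*A.T), list(xs), dict=True)[0]
print(g.subs(sol))   # [[x0, x1, x2], [x1, x2, 0], [x2, 0, 0]]
```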

Lemma 2

Let m and n be two positive integers such that \( m < n \) and let \( X \in \textbf{M}_{m \times n}\). Then the Jordan cells \(J_m(\lambda ), J_n(\lambda )\) satisfy

$$\begin{aligned} J _m ( \lambda ) X = X J ^T _n ( \lambda ) \iff X = \left[ \begin{array}{cc} Y&0 \end{array}\right] , \ Y \in \mathcal {I} ( J _m ( \lambda ) ) \ . \end{aligned}$$
(46)

Proof

Let p be a positive integer such that \( m \le p \le n -1 \). If \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \), then \( J _m ( 0 ) X = X J ^T _n ( 0 ) \), so that \( J ^p _m ( 0 ) X = X ( J ^p _n ( 0 ) ) ^T \). Since \( J ^p _m ( 0 ) = 0 \) for \( p \ge m \), it follows that \( X ( J ^p _n ( 0 ) ) ^T = 0 \). If \( p = n - 1 \), then \( x _{i n} = 0 \) for each \( i \in \{ 1, \ldots , m \} \). Proceeding analogously in decreasing order to \( p = m \) we obtain

$$\begin{aligned} X = \left[ \begin{array}{cccccc} x_{11} &{} \cdots &{} x _{1m} &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \ddots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ x_{m1} &{} \cdots &{} x _{mm} &{} 0 &{} \cdots &{} 0 \end{array}\right] \end{aligned}$$
(47)

Hence, we can write \(X = \left[ \begin{array}{cc} Y&0 \end{array}\right] \), where \(Y \in \textbf{M}_{m}\). If we partition \( J _n ( \lambda ) \) as

$$\begin{aligned} J _n ( \lambda ) = \left[ \begin{array}{cc} J _m ( \lambda ) &{} E _{m,1} \\ 0 &{} J _{n-m} ( \lambda ) \end{array}\right] \end{aligned}$$
(48)

where

$$\begin{aligned} E _{m,1} = \left[ \begin{array}{cccc} 0 &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} 0 \\ 1 &{} 0 &{} \cdots &{} 0 \\ \end{array}\right] \in \textbf{M}_{m \times (n - m)} \end{aligned}$$
(49)

then the intertwining relation \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \) implies \( J _m ( \lambda ) Y = Y J ^T _m ( \lambda ) \), so that \( Y \in \mathcal {I} ( J _m ( \lambda ) ) \).

Now, let \(X = \left[ \begin{array}{cc} Y&0 \end{array}\right] \in \textbf{M}_{m \times n}\) and let \( Y \in \textbf{M}_{m}\). If \( Y \in \mathcal {I} ( J _m ( \lambda ) ) \), then \( J _m ( \lambda ) Y = Y J ^T _m ( \lambda ) \), which implies that \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \). \(\square \)

Lemma 3

Let m and n be two positive integers such that \( m > n \) and let \( X \in \textbf{M}_{m \times n}\). Then the Jordan cells \(J_m(\lambda ), J_n(\lambda )\) satisfy

$$\begin{aligned} J _m ( \lambda ) X = X J ^T _n ( \lambda ) \iff X = \left[ \begin{array}{c} Y \\ 0 \end{array}\right] , \ Y \in \mathcal {I} ( J _n ( \lambda ) ) \ . \end{aligned}$$
(50)

Proof

We rewrite \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \) as \( J _n ( \lambda ) X ^T = X ^T J ^T _m ( \lambda ) \). By Lemma 2 we have \(X^T = \left[ \begin{array}{cc} Y&0 \end{array}\right] \) with \(Y \in \mathcal {I} ( J _n ( \lambda ) ) \), hence \( X = \left[ \begin{array}{c} Y \\ 0 \end{array}\right] \). \(\square \)

Theorem 4

Let \( n _1, \ldots , n _m \) be positive integers such that \( n = n _1 + \ldots + n _m \), \( \lambda \in \mathbb {R} \), and let \( J _{ n _1, \ldots , n _m } ( \lambda ) \in \textbf{M}_{n}\) be a Jordan matrix. Then every matrix \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \lambda ) )\) is a block matrix of the form

$$\begin{aligned} X = \left[ \begin{array}{ccc} X _{1 1} &{} \cdots &{} X _{1 m} \\ \vdots &{} \ddots &{} \vdots \\ X _{m 1} &{} \cdots &{} X _{m m} \end{array}\right] \end{aligned}$$
(51)

where for each \(i, j \in \{ 1, \ldots , m\} \), \( X _{i j} \in \textbf{M}_{n _i\times n _j}\) satisfies \(X ^T _{i j} = X _{j i} \) and is of the following form:

  1. (i)

    If \(n_i = n_j\) then \(X_{ij} \in \mathcal {I}( J_{n_i}(\lambda ))\).

  2. (ii)

    If \(n_i < n_j\) then \(X_{ij} = \left[ \begin{array}{cc} Y _{ij}&0 \end{array}\right] \) with \(Y_{ij}\in \mathcal {I}(J_{n_i}(\lambda ))\).

  3. (iii)

    If \(n_i > n_j\) then \(X_{ij} = \left[ \begin{array}{c} Y _{ij} \\ 0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_j}(\lambda ))\).

Proof

Let \( i, j \in \{ 1, \ldots , m\} \). If \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \lambda ) )\) then \( J _{ n _1, \ldots , n _m } ( \lambda ) X = X J ^T _{ n _1, \ldots , n _m } ( \lambda ) \) and \( X ^T = X \), so that \( J _{n _i} (\lambda ) X _{i j} = X _{i j} J _{n _j} ^T (\lambda ) \) and \( X _{j i} = X ^T _{i j} \). If \(n _i = n _j \) then \(X _{i j}\) is symmetric, so that \(X _{i j} \in \mathcal {I}( J _{n _i} (\lambda ) )\). By Lemma 2 we have that \(X_{ij} = \left[ \begin{array}{cc} Y_{ij}&0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_i}(\lambda ))\) for \(n_i < n_j\). For \(n _i > n _j\), by Lemma 3 we find \( X_{ij} = \left[ \begin{array}{c} Y _{ij} \\ 0 \end{array}\right] \) with \(Y_{ij}\in \mathcal {I}(J_{n_j}(\lambda ))\).\(\square \)

Lemma 4

For any \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \in \textbf{M}_{2}\) with \(\beta >0\), \(\mathcal {I}(\Lambda )\) coincides with the set of all real symmetric \(2\times 2\) matrices of the form

$$\begin{aligned} \left[ \begin{array}{cc} a &{} b \\ b &{} -a \end{array} \right] \end{aligned}$$
(52)

Proof

For any

$$\begin{aligned} X = \left[ \begin{array}{cc} x _1 &{} x _2 \\ x _3 &{} x _4 \end{array}\right] \in \textbf{M}_{2}\ , \end{aligned}$$
(53)

from the intertwining relation \(\Lambda X = X \Lambda ^T\) together with \( \beta > 0 \) we get \( x _3 = x _2 \) and \( x _4 = - x _1 \).\(\square \)

Lemma 5

Let \( X \in \textbf{M}_{2}\) and let \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \in \textbf{M}_{2}\) with \(\beta > 0\). Then any \( Y \in \mathcal {I} ( \Lambda ) \) satisfies

$$\begin{aligned} \Lambda X = X \Lambda ^T + Y \iff X \in \mathcal {I} ( \Lambda ), Y = 0 \ . \end{aligned}$$
(54)

Proof

Let \(X = \left[ \begin{array}{cc} x &{} y \\ z &{} t \end{array}\right] \in \textbf{M}_{2}\). If \(Y \in \mathcal {I} ( \Lambda ) \), then \( Y = \left[ \begin{array}{cc} a &{} b \\ b &{} -a \end{array}\right] \in \textbf{M}_{2}\). The relation \( \Lambda X = X \Lambda ^T + Y \) together with \( \beta > 0 \) implies \( a = b = 0 \), \( z = y \) and \( t = -x \). On the other hand, if \( X \in \mathcal {I} ( \Lambda ) \) then \( \Lambda X = X \Lambda ^T \), hence \( Y = 0 \). \(\square \)

Theorem 5

Let \(J _n ( \Lambda )\) be a Jordan \(\Lambda \)-block of the first kind with complex conjugate eigenvalues represented by \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \). Then

$$\begin{aligned} \mathcal {I}( J _n ( \Lambda ) ) = \ \left\{ \left[ \begin{array}{cccc} X _1 &{} X _2 &{} \cdots &{} X _n \\ X _2 &{} X _3 &{} \cdots &{} 0 _2 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _n &{} 0 _2 &{} \cdots &{} 0 _2 \end{array}\right] \in \textbf{M}_{2n} :\, X _i \in \mathcal {I}(\Lambda )\ \hbox {for}\ i \in \{1, \ldots , n \} \right\} \end{aligned}$$
(55)

Proof

For \( n = 1 \) see Lemma 4. For \( n = 2 \), let

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{cc} X &{} Y \\ Z &{} T \end{array}\right] \in \textbf{M}_{4} \end{aligned}$$
(56)

where \( X, Y, Z, T \in \textbf{M}_{2}\). The intertwining relation \( J _2 ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _2 ( \Lambda ) \) implies

$$\begin{aligned} \begin{aligned} \Lambda T\,\,&= T \Lambda ^T \\ \Lambda Z&= Z \Lambda ^T + T \\ \Lambda Y + T&= Y \Lambda ^T \\ \Lambda X + Z&= X \Lambda ^T + Y \end{aligned} \end{aligned}$$
(57)

The first equation of (57) implies that \( T \in \mathcal {I} ( \Lambda ) \). Applying Lemma 5 to the second and third equations of (57), we obtain \(Y, Z \in \mathcal {I} ( \Lambda ) \) and \( T = 0 \). Using Lemma 5 in the last equation of (57) we find that \(X \in \mathcal {I} ( \Lambda ) \) and \( Z = Y \). Thus

$$\begin{aligned} \mathcal {I}( J _2 ( \Lambda ) ) = \ \left\{ \left[ \begin{array}{cc} X &{} Y \\ Y &{} 0 \end{array}\right] : X, Y \in \mathcal {I} ( \Lambda ) \right\} \ . \end{aligned}$$
(58)

Now, assume that the property is true for n and let us prove that it is satisfied for \(n + 1\).

\( J _{n + 1} ( \Lambda ) \) can be partitioned as follows:

$$\begin{aligned} J _{n + 1} ( \Lambda ) = \left[ \begin{array}{cc} \Lambda &{} E_1 \\ 0 &{} J _n ( \Lambda ) \end{array}\right] \end{aligned}$$
(59)

where \(E _1 = \left[ \begin{array}{cccc} I_2&0&\cdots&0 \end{array}\right] \in \textbf{M}_{2\times 2n}\). Let

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{cc} T &{} Y \\ Z &{} X \end{array}\right] \in \textbf{M}_{2(n+1)} \end{aligned}$$
(60)

where \( X \in \textbf{M}_{2n}\), \(Y = \left[ \begin{array}{ccc} Y_1&\cdots&Y_n \end{array}\right] , Z ^T = \left[ \begin{array}{ccc} Z_1^T&\cdots&Z_n^T \end{array}\right] \), and all matrices \(T, Y_1, \ldots , Y_n, \) \( Z_1, \ldots , Z_n\) belong to \(\textbf{M}_{2}\).

If \( J _{n + 1} ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _{n + 1} ( \Lambda ) \) then

$$\begin{aligned} \begin{aligned} J _n ( \Lambda ) X&= X J ^T _n ( \Lambda ) \\ J _n ( \Lambda ) Z&= Z \Lambda ^T + X E _1 ^T \\ \Lambda Y + E _1 X&= Y J ^T _n ( \Lambda ) \\ \Lambda T + E _1 Z&= T \Lambda ^T + Y E _1 ^T \end{aligned} \end{aligned}$$
(61)

From the first equation of (61) we obtain \( X \in \mathcal {I} ( J _n ( \Lambda ) ) \). By the induction hypothesis we can write

$$\begin{aligned} X = \left[ \begin{array}{cccc} X_1 &{} X_2 &{} \cdots &{} X_n \\ X_2 &{} X_3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ X_n &{} 0 &{} \cdots &{} 0 \end{array}\right] \end{aligned}$$
(62)

The second and third equations of (61) imply that

$$\begin{aligned} \begin{aligned} \Lambda Z_n&= Z_n \Lambda ^T + X_n\, , \ \Lambda Y_n + X_n = Y_n \Lambda ^T \\ \Lambda Z_{n-1} + Z_n&= Z_{n-1} \Lambda ^T + X_{n-1}\, , \ \Lambda Y_{n-1} + X_{n-1} = Y_{n-1} \Lambda ^T + Y_n \\&\vdots \\ \Lambda Z_2 + Z_3&= Z_2 \Lambda ^T + X_2\, , \ \Lambda Y_2 + X_2 = Y_2 \Lambda ^T + Y_3 \\ \Lambda Z_1 + Z_2&= Z_1 \Lambda ^T + X_1\, , \ \Lambda Y_1 + X_1 = Y_1 \Lambda ^T + Y_2\ \ . \end{aligned} \end{aligned}$$
(63)

Using Lemma 5 in (63), one obtains \( X_n = 0, Z_n = Y_n = X_{n-1}, \ldots , Z_3 = Y_3 = X_2, Z_2 = Y_2 = X_1 \) and \( Z_1, Y_1 \in \mathcal {I} ( \Lambda ) \). Then, applying Lemma 5 to the last equation of (61) gives \( Z _1 = Y _1 \) and \( T \in \mathcal {I} ( \Lambda ) \). Therefore,

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{ccccc} T &{} Y_1 &{} X_1 &{} \cdots &{} X_{n-1} \\ Y_1 &{} X_1 &{} X_2 &{} \cdots &{} 0 \\ X_1 &{} X_2 &{} X_3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ X_{n-1} &{} 0 &{} 0 &{} \cdots &{} 0 \end{array}\right] \ . \end{aligned}$$
(64)

Finally, it is obvious that \(\mathfrak {X} ^T = \mathfrak {X} \). \(\square \)
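
The block structure of \(\mathcal {I}( J _2 ( \Lambda ) )\) in (58) can be verified in the same mechanical way. A minimal sympy sketch, with the illustrative choice \(\alpha = 0\), \(\beta = 1\) (so that \(\beta > 0\) as required):

```python
# Sketch: solve J_2(Lambda) G = G J_2(Lambda)^T for symmetric G;
# the lower-right 2x2 block vanishes and X, Y lie in I(Lambda), cf. (58).
import sympy as sp

Lam = sp.Matrix([[0, -1], [1, 0]])               # alpha = 0, beta = 1
J2 = sp.Matrix(sp.BlockMatrix([[Lam, sp.eye(2)],
                               [sp.zeros(2, 2), Lam]]))

xs = sp.symbols('x0:10')
G = sp.zeros(4, 4)
k = 0
for i in range(4):
    for j in range(i, 4):
        G[i, j] = G[j, i] = xs[k]
        k += 1

sol = sp.solve(list(J2*G - G*J2.T), list(xs), dict=True)[0]
print(G.subs(sol))   # [[X, Y], [Y, 0]] with X, Y in I(Lambda)
```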

Lemma 6

Let m and n be two positive integers such that \( m < n \), and let \( X \in \textbf{M}_{2m\times 2n}\). Then

$$\begin{aligned} J _m ( \Lambda ) X = X J ^T _n ( \Lambda ) \ \hbox {if and only if}\ X = \left[ \begin{array}{cc} Y&0 \end{array}\right] , Y \in \mathcal {I} ( J _m ( \Lambda ) ) \end{aligned}$$
(65)

Proof

Let

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{ccc} X _{1 1} &{} \cdots &{} X _{1 n} \\ \vdots &{} \ddots &{} \vdots \\ X _{m 1} &{} \cdots &{} X _{m n} \end{array}\right] \in \textbf{M}_{2m \times 2n} \end{aligned}$$
(66)

where \( X _{i j} \in \textbf{M}_{2}\) for all \( i \in \{ 1, \ldots , m \} \) and \( j \in \{ 1, \ldots , n \} \). The intertwining relation \( J _m ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _n ( \Lambda ) \) implies the equations

$$\begin{aligned} \begin{aligned} \Lambda X _{m n}&= X _{m n} \Lambda ^T \\ \Lambda X _{i n} + X _{i + 1, n}&= X _{i n} \Lambda ^T \\ \Lambda X _{m j}&= X _{m j} \Lambda ^T + X _{m, j + 1 } \\ \Lambda X _{i j} + X _{i + 1, j}&= X _{i j} \Lambda ^T + X _{i, j + 1 } \\ \end{aligned} \end{aligned}$$
(67)

for each \( i \in \{ 1, \ldots , m - 1 \}\) and \( j \in \{ 1, \ldots , n - 1 \} \). The first equation of (67) implies \( X _{m n} \in \mathcal {I} ( \Lambda ) \). Moreover, applying Lemma 5 to the second and third equations of (67), one obtains \(X _{1 n} \in \mathcal {I} ( \Lambda ) \) with \( X _{2 n} = \cdots = X _{m n} = 0 \), and \( X _{m 1} \in \mathcal {I} ( \Lambda ) \) with \( X _{m 2} = \cdots = X _{m n} = 0 \), respectively. From now on we only use the last equation of (67). Let us write \( X _n \) instead of \( X _{1 n} \). For \( j = n - 1 \), taking into account Lemma 5, we get \( X _{1, n - 1} \in \mathcal {I} ( \Lambda ) \), \( X _{2, n - 1} = X _n \) and \( X _{3, n - 1} = \cdots = X _{m, n - 1} = 0 \). Let us also write \( X _{n - 1 } \) instead of \( X _{1, n - 1} \). Proceeding analogously, it turns out that

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{cccccc} X _1 &{} \cdots &{} X _{n - m + 1} &{} X _{n - m + 2} &{} \cdots &{} X _n \\ \vdots &{} \ddots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _m &{} \cdots &{} X _n &{} 0 &{} \cdots &{} 0 \end{array}\right] \end{aligned}$$
(68)

where \( X _{i + j - 1} = X _{i j} \) for \( i + j \le n + 1 \). However, we found above that in the last row only \( X _m \) can be non-zero. Then \( X _{m + 1} = \cdots = X _n = 0 \), so that

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{cccccc} X _1 &{} \cdots &{} X _m &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \ddots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _m &{} \cdots &{} 0 &{} 0 &{} \cdots &{} 0 \end{array}\right] \ . \end{aligned}$$
(69)

On the other hand, we may partition \( J _n ( \Lambda ) \) as

$$\begin{aligned} J _n ( \Lambda ) = \left[ \begin{array}{cc} J _m ( \Lambda ) &{} E _{m,1} \\ 0 &{} J _{n - m} ( \Lambda ) \end{array}\right] \end{aligned}$$
(70)

where

$$\begin{aligned} E _{m,1} = \left[ \begin{array}{cccc} 0 &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} 0 \\ I_2 &{} 0 &{} \cdots &{} 0 \\ \end{array}\right] \in \textbf{M}_{2m \times 2(n-m)} . \end{aligned}$$
(71)

Let \( X \in \textbf{M}_{2m}\) and \( \mathfrak {X} = \left[ \begin{array}{cc} X&0 \end{array}\right] \in \textbf{M}_{2m \times 2n}\). If \( X \in \mathcal {I} ( J _m ( \Lambda ) ) \) then \( J _m ( \Lambda ) X = X J ^T _m ( \Lambda ) \), hence \( J _m ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _n ( \Lambda ) \). \(\square \)

Lemma 7

Let m and n be two positive integer numbers such that \( m > n \), and let \( X \in \textbf{M}_{2m \times 2n}\). Then

$$\begin{aligned} J _m ( \Lambda ) X = X J ^T _n ( \Lambda ) \iff X = \left[ \begin{array}{c} Y \\ 0 \end{array}\right] , \ Y\in \mathcal {I} ( J _n ( \Lambda ) ) \ . \end{aligned}$$
(72)

Proof

This can be proved in a similar way to the proof of Lemma 6. \(\square \)

Theorem 6

Let \( n _1, \ldots , n _m \) be positive integers such that \( n = n _1 + \ldots + n _m \), and let \( J _{ n _1, \ldots , n _m } ( \Lambda ) \in \textbf{M}_{2n}\) be a Jordan matrix with complex conjugate eigenvalues represented by \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \). Then every matrix \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \Lambda ) )\) is a block matrix of the form

$$\begin{aligned} X = \left[ \begin{array}{ccc} X _{1 1} &{} \cdots &{} X _{1 m} \\ \vdots &{} \ddots &{} \vdots \\ X _{m 1} &{} \cdots &{} X _{m m} \end{array}\right] \end{aligned}$$
(73)

where for each \(i, j \in \{ 1, \ldots , m\} \), \(X _{i j} \in \textbf{M}_{2n_i \times 2n_j}\) and \(X _{j i} = X ^T _{i j} \); the blocks are of the following form:

  1. (i)

    If \(n_i = n_j\), then \(X_{ij} \in \mathcal {I}( J _{n _i} ( \Lambda ) )\).

  2. (ii)

    If \(n_i < n_j\), then \(X_{ij} = \left[ \begin{array}{cc} Y_{ij}&0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_i} (\Lambda ) )\).

  3. (iii)

    If \(n_i > n_j\), then \(X_{ij} = \left[ \begin{array}{c} Y_{ij} \\ 0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_j} (\Lambda ) )\).

Proof

Let \( i, j \in \{ 1, \ldots , m \}\). If \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \Lambda ) )\) then \( J _{ n _1, \ldots , n _m } ( \Lambda ) X = X J ^T _{ n _1, \ldots , n _m } ( \Lambda ) \) and \( X ^T = X \), hence \( J _{n _i} ( \Lambda ) X _{ij} = X _{ij} J ^T _{n _j} ( \Lambda ) \) and \( X _{ji} = X ^T _{ij} \). It is obvious that \( X _{ij} \in \mathcal {I}( J _{n _i} ( \Lambda ) )\) for \(n _i = n _j\). If \(n _i < n _j\), by Lemma 6 we have \(X_{ij} = \left[ \begin{array}{cc} Y _{ij}&0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}( J_{n_i} ( \Lambda ) )\). Using Lemma 7 we find that \( X _{ij} = \left[ \begin{array}{c} Y _{ij} \\ 0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}( J_{n_j} ( \Lambda ) )\) for \(n_i > n_j\). \(\square \)

Theorem 7

Let J be a generalized Jordan matrix as in Definition 7. Then \(\mathcal {I}( J )\) is the set of all matrices \( \textrm{diag}\left[ X_1, \ldots , X_p, Y _1, \ldots , Y_q \right] \) such that \(X_i \in \mathcal {I}(J_{m ^i _1, \ldots , m ^i _{r _i}}(\lambda _i )) \) for all \(i \in \{ 1, \ldots , p \}\), and \(Y_j \in \mathcal {I}(J_{n ^j _1, \ldots , n ^j _{s _j}}(\Lambda _j )) \) for all \(j \in \{ 1, \ldots , q \}\).

Proof

Let \(i,j \in \{ 1, \ldots , p \} \) and \(k,l \in \{ 1, \ldots , q \}\). Let

$$\begin{aligned} \mathfrak {X} = \left[ \begin{array}{cccccc} X _{1 1} &{} \cdots &{} X _{1 p} &{} Z _{1 1} &{} \cdots &{} Z _{1 q} \\ \vdots &{} \ddots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _{p 1} &{} \cdots &{} X _{p p} &{} Z _{p 1} &{} \cdots &{} Z _{p q} \\ T _{1 1} &{} \cdots &{} T _{1 p} &{} Y _{1 1} &{} \cdots &{} Y _{1 q} \\ \vdots &{} \ddots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ T _{q 1} &{} \cdots &{} T _{q p} &{} Y _{q 1} &{} \cdots &{} Y _{q q} \end{array}\right] \in \textbf{M}_{m+2n} \end{aligned}$$
(74)

where \(X _{i j} \in \textbf{M}_{m ^i \times m ^j}\), \(Y _{k l} \in \textbf{M}_{2n^k \times 2n^l}\), \(Z _{i l} \in \textbf{M}_{m^i \times 2n^l}\) and \( T _{k j} \in \textbf{M}_{2n^k \times m^j}\). If \(J \mathfrak {X} = \mathfrak {X} J ^T\), then

$$\begin{aligned} J _{m ^i _1, \ldots , m ^i _{r _i}} ( \lambda _i ) X _{i j}&= X _{i j} J ^T _{m ^j _1, \ldots , m ^j _{r _j}} ( \lambda _j ) \end{aligned}$$
(75)
$$\begin{aligned} J _{n ^k _1, \ldots , n ^k _{s _k}} ( \Lambda _k ) Y _{k l}&= Y _{k l} J ^T _{n ^l _1, \ldots , n ^l _{s _l}} ( \Lambda _l ) \end{aligned}$$
(76)
$$\begin{aligned} J _{m ^i _1, \ldots , m ^i _{r _i}} ( \lambda _i ) Z _{i k}&= Z _{i k} J ^T _{n ^k _1, \ldots , n ^k _{s _k}} ( \Lambda _k ) \end{aligned}$$
(77)

Since the Jordan matrices involved do not have common eigenvalues, by Sylvester’s theorem on linear matrix equations [1, 4] we have \( X_{i j} = 0 \) for \(i \ne j\), \(Y _{k l} = 0\) for \(k \ne l\), \(Z _{i l} = 0\) and, by the analogous relation for the blocks \(T _{k j}\), \( T _{k j} = 0 \). Furthermore, if \( \mathfrak {X} \) is symmetric, \( X _{ii} \) and \( Y _{k k} \) are also symmetric, so that \(X _{i i} \in \mathcal {I}( J _{m ^i _1, \ldots , m ^i _{r _i}} ( \lambda _i ) )\) and \(Y _{k k} \in \mathcal {I}( J _{n ^k _1, \ldots , n ^k _{s _k}} ( \Lambda _k ) )\). \(\square \)

Theorem 8

For any \(x _i \in \mathbb {R}\), \(i \in \{ 1, \ldots , n \}\), the following determinant formula holds:

$$\begin{aligned} \left| \begin{array}{cccc} x _1 &{} x _2 &{} \cdots &{} x _n \\ x _2 &{} x _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ x _n &{} 0 &{} \cdots &{} 0 \end{array}\right| = (-1) ^{n(n-1)/2} \, x^n_n \end{aligned}$$
(78)

Proof

Let

$$\begin{aligned} K _n = \left[ \begin{array}{ccc} &{} &{} 1 \\ {} &{} \cdots &{} \\ 1 &{} &{} \end{array}\right] \in \textbf{M}_{n} \end{aligned}$$
(79)

be the exchange matrix. Using that \( \det K _n = (-1) ^{n (n-1)/2} \) we find

$$\begin{aligned} \left| \begin{array}{cccc} x _1 &{} x _2 &{} \cdots &{} x _n \\ x _2 &{} x _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ x _n &{} 0 &{} \cdots &{} 0 \end{array}\right| = \left| \begin{array}{cccc} x _n &{} x _{n-1} &{} \cdots &{} x _1 \\ 0 &{} x _n &{} \cdots &{} x _2 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} x _n \end{array}\right| \cdot \det K _n = (-1) ^{n(n-1)/2} \, x^n_n \end{aligned}$$
(80)

\(\square \)
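
Theorem 8 can be confirmed symbolically for small n; a sympy sketch for the illustrative case \(n = 4\), for which the sign \((-1)^{n(n-1)/2}\) is \(+1\):

```python
# Sketch: symbolic check of the determinant formula (78) for n = 4.
import sympy as sp

n = 4
x = sp.symbols(f'x1:{n+1}')           # x1, ..., x4
M = sp.Matrix(n, n, lambda i, j: x[i + j] if i + j < n else 0)
assert sp.expand(M.det() - (-1)**(n*(n - 1)//2) * x[n - 1]**n) == 0
print(M.det())                        # x4**4
```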

Theorem 9

Let \( X _i \in \mathcal {I}(\Lambda )\) for \(i \in \{ 1, \ldots , n \}\), with \(\Lambda \) as in (37). Then the following determinant formula holds:

$$\begin{aligned} \left| \begin{array}{cccc} X _1 &{} X _2 &{} \cdots &{} X _n \\ X _2 &{} X _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _n &{} 0 &{} \cdots &{} 0 \end{array}\right| = |X _n| ^n \end{aligned}$$
(81)

Proof

By the properties of determinants we have

$$\begin{aligned} \left| \begin{array}{ccc} &{} &{} I_2 \\ &{} \cdots &{} \\ I_2 &{} &{} \end{array}\right| = ( -1 )^n \left| \begin{array}{ccc} &{} &{} K_2 \\ &{} \cdots &{} \\ K_2 &{} &{} \end{array}\right| = ( -1 )^n \det K_{2 n} = 1 \end{aligned}$$
(82)

Then,

$$\begin{aligned} \left| \begin{array}{cccc} X _1 &{} X _2 &{} \cdots &{} X _n \\ X _2 &{} X _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _n &{} 0 &{} \cdots &{} 0 \end{array}\right| = \left| \begin{array}{cccc} X _n &{} X _{n-1} &{} \cdots &{} X _1 \\ 0 &{} X _n &{} \cdots &{} X _2 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} \cdots &{} X _n \end{array}\right| \cdot \left| \begin{array}{cccc} &{} &{} &{} I_2 \\ &{} &{} I_2 &{} \\ &{} \cdots &{} &{} \\ I_2 &{} &{} &{} \end{array}\right| = \ | X _n | ^n \end{aligned}$$
(83)

\(\square \)

6 Computing One-Dimensional Subspaces

Now we apply the properties of Jordan matrices deduced in the last section to determine the matrix g.

Theorem 10

Let \( \lambda \in \mathbb {R} \), and let \( J _n ( \lambda ) \) be a Jordan cell. Suppose that \(g \in \textbf{Sym}_n\) is a matrix function of \(\xi \) such that \(g_{, \xi } = J _n ( \lambda ) g\). Then

$$\begin{aligned} g _n ( \lambda ) = \left[ \begin{array}{cccc} X _1 &{} X _2 &{} \cdots &{} X _n \\ X _2 &{} X _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ X _n &{} 0 &{} \cdots &{} 0 \\ \end{array}\right] \end{aligned}$$
(84)

where

$$\begin{aligned} X _i ( \xi ) = e ^{\lambda \xi } \sum _{j = 0} ^{n -i} \frac{\xi ^j}{j !} C _{i + j} \end{aligned}$$
(85)

and \( C _i \) is constant for each \( i = 1, \ldots , n \).

Proof

Applying \( g = g^T \) to \(g_{, \xi } = J _n ( \lambda ) g\) we get \( J _n ( \lambda ) g = g J ^T _n ( \lambda ) \), so that \(g\in \mathcal {I} ( J _n ( \lambda ) ) \). By Theorem 3, g has the form given in (84). From \(g_{, \xi } = J _n ( \lambda ) g\) we obtain

$$\begin{aligned} \begin{aligned} X_{n, \xi }&= \lambda X_n \\ X_{n-1, \xi }&= \lambda X_{n-1} + X_n \\&\vdots \\ X_{1, \xi }&= \lambda X_1 + X_2 \end{aligned} \end{aligned}$$
(86)

Integrating successively we get (85).\(\square \)
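
The closed form (85) can be verified directly against the defining equation \(g_{,\xi } = J _n ( \lambda ) g\); a sympy sketch for the illustrative case \(n = 3\), with symbol names of our own:

```python
# Sketch: verify that (84)-(85) solve g_{,xi} = J_3(lambda) g.
import sympy as sp

xi, lam = sp.symbols('xi lambda_')
n = 3
C = sp.symbols(f'C1:{n+1}')           # integration constants C1, C2, C3

def X(i):                             # X_i(xi) from (85), 1-indexed
    return sp.exp(lam*xi) * sum(xi**j/sp.factorial(j)*C[i - 1 + j]
                                for j in range(n - i + 1))

g = sp.Matrix(n, n, lambda a, b: X(a + b + 1) if a + b + 1 <= n else 0)
J = sp.Matrix.jordan_block(n, lam)
assert sp.expand(g.diff(xi) - J*g) == sp.zeros(n, n)
print("g' = J_3(lambda) g verified")
```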

Theorem 11

Let \(n_1, \ldots , n_m \) be positive integers such that \( n = n_1 + \ldots + n_m \). Let \( \lambda \in \mathbb {R} \), and let \( J _{ n _1, \ldots , n _m } ( \lambda ) \in \textbf{M}_{n}\) be a Jordan matrix. If \(g\in \textbf{Sym}_n\) is a matrix function such that \(g_{, \xi } = J _{ n _1, \ldots , n _m } ( \lambda ) g\), then

$$\begin{aligned} g_{n_1 , \ldots , n_m} ( \lambda ) = \left[ \begin{array}{ccc} X_{1 1} &{} \cdots &{} X_{1 m} \\ \vdots &{} \ddots &{} \vdots \\ X_{m 1} &{} \cdots &{} X_{m m} \end{array}\right] \end{aligned}$$
(87)

where for each \(i, j \in \{ 1, \ldots , m\} \), the matrix \(X_{i j}\) satisfies \(X_{i j}^T = X_{j i}\) and is defined as follows:

  1. (i)

    if \(n_i = n_j\) then \(X_{i j} = g_{n_i} ( \lambda ) \),

  2. (ii)

    if \(n_i < n_j\) then \(X_{i j} = \left[ \begin{array}{cc} g_{n_i} ( \lambda )&0 \end{array}\right] \),

  3. (iii)

    if \(n_i > n_j\) then \(X_{i j} = \left[ \begin{array}{c} g_{n_j} ( \lambda ) \\ 0 \end{array}\right] \),

where \(g_{n_i} ( \lambda ) \) is defined as in Theorem 10.

Proof

Let \( i,j \in \{ 1, \ldots , m \} \). Applying \(g = g^T \) to \(g_{, \xi } = J_{n_1, \ldots , n_m} ( \lambda ) g\) we get \(J_{n_1, \ldots , n_m} (\lambda ) g = g J^T _{n_1, \ldots , n_m } (\lambda ) \), so that \(g\in \mathcal {I} (J_{ n_1, \ldots , n_m } (\lambda ) ) \). By Theorem 4,

$$\begin{aligned} g ( \xi ) = \left[ \begin{array}{ccc} X _{1 1} ( \xi ) &{} \cdots &{} X _{1 m} ( \xi ) \\ \vdots &{} \ddots &{} \vdots \\ X _{m 1} ( \xi ) &{} \cdots &{} X _{m m} ( \xi ) \end{array}\right] \end{aligned}$$
(88)

where \(X_{i j} \in \textbf{M}_{n_i \times n_j}\) satisfies \( X_{j i} = X_{i j}^T \) and is of the following form:

  1. (i)

    If \(n _i = n _j\) then \(X_{ij} \in \mathcal {I}( J_{n_i} (\lambda ) )\).

  2. (ii)

    If \(n _i < n _j\) then \(X_{ij} = \left[ \begin{array}{cc} Y_{n_i}&0 \end{array}\right] \) with \(Y_{n_i} \in \mathcal {I}( J_{n_i} (\lambda ) )\).

  3. (iii)

    If \(n _i > n _j\) then \(X_{ij} = \left[ \begin{array}{c} Y_{n_j} \\ 0 \end{array}\right] \) with \(Y_{n_j} \in \mathcal {I}( J_{n_j} (\lambda ) )\).

From \(g_{, \xi } = J_{ n_1, \ldots , n_m } (\lambda ) g\) we have \(X_{i j , \xi } = J_{n_i} (\lambda ) X_{i j}\). Observe that \( X_{j i, \xi } = ( J_{n_i} (\lambda ) X_{i j} )^T = X_{j i} J^T_{n_i} (\lambda ) = J_{n_j} (\lambda ) X_{j i} \). By Theorem 10 we obtain

  1. (i)

    If \(n_i = n_j\), then \(X_{i j} = g_{n_i} ( \lambda )\).

  2. (ii)

    If \(n_i < n_j\), then \(X_{i j , \xi } = J_{n_i} (\lambda ) X_{i j} \) implies \(Y_{n_i , \xi } = J_{n_i} (\lambda ) Y_{n_i }\), so that \(Y_{n_i } = g_{n_i} ( \lambda )\). Therefore \(X_{i j} = \left[ \begin{array}{cc} g_{n_i} (\lambda )&0 \end{array}\right] \).

  3. (iii)

    If \(n_i > n_j\), then \(X_{i j , \xi } = J_{n_i} (\lambda ) X_{i j} = X_{i j} J_{n_j}^T (\lambda ) \) implies \(Y_{n_j , \xi } = Y_{n_j } J_{n_j}^T (\lambda ) = J_{n_j} (\lambda ) Y_{n_j } \), so that \(Y_{n_j } = g_{n_j} ( \lambda )\). Hence \(X_{i j} = \left[ \begin{array}{c} g_{n_j}( \lambda ) \\ 0 \end{array}\right] \).

\(\square \)

Theorem 12

For any Jordan \(\Lambda \)-block of the first kind \(J_n (\Lambda )\), if \(g\in \textbf{Sym}_{2n}\) is a matrix function such that \(g_{, \xi } = J_n (\Lambda ) g\), then

$$\begin{aligned} g_n ( \Lambda ) = \left[ \begin{array}{cccc} Z _1 &{} Z _2 &{} \cdots &{} Z _n \\ Z _2 &{} Z _3 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ Z _n &{} 0 &{} \cdots &{} 0 \\ \end{array}\right] \end{aligned}$$
(89)

where

$$\begin{aligned} \begin{aligned} Z_l&= \left[ \begin{array}{cc} X _l &{} Y _l \\ Y _l &{} -X _l \end{array}\right] \\ X_l (\xi )&= e ^{\alpha \xi } \cos \beta \xi \sum _{k = 0} ^{n - l} \frac{\xi ^k}{k !} C _{k + l} - e ^{\alpha \xi } \sin \beta \xi \sum _{k = 0} ^{n - l} \frac{\xi ^k}{k !} D _{k + l} \\ Y_l (\xi )&= e ^{\alpha \xi } \cos \beta \xi \sum _{k = 0} ^{n - l} \frac{\xi ^k}{k !} D _{k + l} + e ^{\alpha \xi } \sin \beta \xi \sum _{k = 0} ^{n - l} \frac{\xi ^k}{k !} C _{k + l} \end{aligned} \end{aligned}$$
(90)

and \(C_l\), \(D_l\) are constant for \(l = 1, \ldots , n\).

Proof

Applying \(g = g^T\) to \(g_{, \xi } = J_n (\Lambda )g\) we get \(J_n (\Lambda ) g = g J^T_n (\Lambda )\), so that \(g \in \mathcal {I} (J_n (\Lambda ))\). By Theorem 5 we can express

$$\begin{aligned} g (\xi ) = \left[ \begin{array}{cccc} Z _1 ( \xi ) &{} Z _2 ( \xi ) &{} \cdots &{} Z _n ( \xi ) \\ Z _2 ( \xi ) &{} Z _3 ( \xi ) &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ Z _n ( \xi ) &{} 0 &{} \cdots &{} 0 \\ \end{array}\right] \end{aligned}$$
(91)

where

$$\begin{aligned} Z _i ( \xi ) = \left[ \begin{array}{cc} X _i (\xi ) &{} Y _i (\xi ) \\ Y _i (\xi ) &{} -X _i (\xi ) \end{array}\right] \ . \end{aligned}$$
(92)

From \(g_{, \xi } = J_n ( \Lambda ) g\) we have

$$\begin{aligned} \begin{aligned} Z_{n, \xi }&= \Lambda Z_n \\ Z_{n-1, \xi }&= \Lambda Z_{n-1} + Z_n \\&\vdots \\ Z_{1, \xi }&= \Lambda Z_1 + Z_2 \end{aligned} \end{aligned}$$
(93)

Integrating successively we get

$$\begin{aligned} Z_i (\xi ) = e ^{\xi \Lambda } \sum _{j = 0} ^{n -i} \frac{\xi ^j}{j !} \mathfrak {C} _{i + j} \ , \ \ \mathfrak {C} _i = \left[ \begin{array}{cc} C_i &{} D_i \\ D_i &{} -C_i \end{array}\right] \ . \end{aligned}$$
(94)

Using

$$\begin{aligned} e ^{\xi \Lambda } = e ^{\alpha \xi } \left[ \begin{array}{cc} \cos \beta \xi &{} -\sin \beta \xi \\ \sin \beta \xi &{} \cos \beta \xi \end{array}\right] \end{aligned}$$
(95)

we obtain (90). \(\square \)
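
The exponential formula (95) is characterized by the initial value problem \(F' = \Lambda F\), \(F(0) = I_2\); the following sympy sketch checks that the closed form satisfies it:

```python
# Sketch: the closed form (95) solves F' = Lambda F, F(0) = I,
# which uniquely characterizes e^{xi Lambda}.
import sympy as sp

xi, a, b = sp.symbols('xi alpha beta', real=True)
Lam = sp.Matrix([[a, -b], [b, a]])
F = sp.exp(a*xi) * sp.Matrix([[sp.cos(b*xi), -sp.sin(b*xi)],
                              [sp.sin(b*xi),  sp.cos(b*xi)]])

assert sp.expand(F.diff(xi) - Lam*F) == sp.zeros(2, 2)
assert F.subs(xi, 0) == sp.eye(2)
print("closed form (95) verified")
```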

Theorem 13

Let \(n_1, \ldots , n_m \) be positive integers such that \( n = n_1 + \ldots + n_m \), and let \( J_{ n_1, \ldots , n_m } ( \Lambda ) \in \textbf{M}_{2 n}\) be a Jordan \(\Lambda \)-block of the second kind. If \(g\in \textbf{Sym}_{2n}\) is a matrix function such that \(g_{, \xi } = J_{ n_1, \ldots , n_m } (\Lambda ) g\), then

$$\begin{aligned} g_{n_1 , \ldots , n_m} ( \Lambda ) = \left[ \begin{array}{ccc} X _{1 1} &{} \cdots &{} X _{1 m} \\ \vdots &{} \ddots &{} \vdots \\ X _{m 1} &{} \cdots &{} X _{m m} \end{array}\right] \ . \end{aligned}$$
(96)

The matrices \(X_{i j}\) satisfy \(X_{i j}^T = X_{j i}\) and are defined as follows:

  1. (i)

    if \(n_i = n_j\), then \(X_{i j} = g_{n_i} ( \Lambda )\),

  2. (ii)

    if \(n_i < n_j\), then \(X_{i j} = \left[ \begin{array}{cc} g_{n_i} ( \Lambda )&0 \end{array}\right] \),

  3. (iii)

    if \(n_i > n_j\), then \(X_{i j} = \left[ \begin{array}{c} g_{n_j} ( \Lambda ) \\ 0 \end{array}\right] \),

where for each \(i, j \in \{ 1, \ldots , m\} \), \(g_{n_i} ( \Lambda ) \) is defined as in Theorem 12.

Proof

The proof is similar to that of Theorem 11. \(\square \)

Theorem 14

Let J be a generalized Jordan matrix as in Definition 7 and let \(g\in \textbf{Sym}_{m+2n}\) be a matrix function such that \(g_{, \xi } = J g\). Then

$$\begin{aligned} g = \textrm{diag}\left[ g_{m^1_1 , \ldots , m^1_{r_1}} ( \lambda _1 ), \ldots , g_{m^p_1 , \ldots , m^p_{r_p}} ( \lambda _p ), g_{n^1_1 , \ldots , n^1_{s_1}} ( \Lambda _1 ), \ldots , g_{n^q_1 , \ldots , n^q_{s_q}} ( \Lambda _q ) \right] \end{aligned}$$
(97)

where \(g_{m^i_1 , \ldots , m^i_{r_i}} ( \lambda _i )\) and \(g_{n^k_1 , \ldots , n^k_{s_k}} ( \Lambda _k ) \) are the functions defined as in Theorems 11 and 13, respectively, for each \(i \in \{ 1, \ldots , p\} \) and \(k \in \{ 1, \ldots , q \} \).

Proof

Applying \(g = g^T \) to \(g_{, \xi } = J g\) we get \(J g = g J^T \), so that \(g\in \mathcal {I} (J)\). By Theorem 7 we have \( g = \textrm{diag}\left[ X _1 ( \xi ), \ldots , X _p ( \xi ), Y _1 ( \xi ), \ldots , Y _q ( \xi ) \right] \), where \( X _i ( \xi ) \in \textbf{M}_{m ^i} \) and \( Y _k ( \xi ) \in \textbf{M}_{2n ^k} \) are matrix functions for \( i \in \{ 1, \ldots , p \} \) and \( k \in \{ 1, \ldots , q \} \). The linear differential equation \(g_{, \xi } = J g\) implies \(X_{i ,\xi } = J_{m^i_1, \ldots , m^i_{r_i}} (\lambda _i ) X_i\) and \(Y_{k ,\xi } = J_{n^k_1, \ldots , n^k_{s_k}} (\Lambda _k ) Y_k \). By Theorems 11 and 13 we get \( X_i = g_{m^i_1 , \ldots , m^i_{r_i}} ( \lambda _i ) \) and \(Y_k = g_{n^k_1 , \ldots , n^k_{s_k}} ( \Lambda _k ) \), respectively, for each \(i \in \{ 1, \ldots , p\} \) and \(k \in \{ 1, \ldots , q\} \). \(\square \)

7 Equivalence Classes for the Matrix A

In this section we review some facts from linear algebra which permit us to describe the similarity equivalence classes of the matrix \(A\in \mathfrak {sl}(n,\mathbb {R})\) from Section 3; recall that A is a real traceless matrix which satisfies \(Ag=gA^T\).

Definition 8

A real square matrix is non-derogatory if its minimal polynomial and characteristic polynomial are equal.

Definition 9

Let

$$\begin{aligned} p(\lambda ) = \lambda ^n + a_{n-1} \lambda ^{n-1} + \ldots + a_1 \lambda + a_0 \end{aligned}$$
(98)

be a polynomial with \(a_i \in \mathbb {R}\) for \(i \in \{ 0, 1, \ldots , n-1 \} \). The matrix

$$\begin{aligned} \left[ \begin{array}{ccccc} 0 &{} 1 &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} 1 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 1 \\ -a_0 &{} -a_1 &{} -a_2 &{} \cdots &{} -a_{n-1} \end{array}\right] \end{aligned}$$
(99)

is the companion matrix of the polynomial \( p(\lambda ) \). The matrices of the form (99) are called natural normal cells.
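The defining property of (99), namely that its characteristic polynomial is \( p(\lambda ) \), can be checked with sympy; a sketch for a cubic with symbolic coefficients (the names are ours):

```python
# Sketch: the companion matrix of p = l^3 + a2*l^2 + a1*l + a0
# has exactly p as its characteristic polynomial.
import sympy as sp

lam = sp.symbols('lambda_')
a0, a1, a2 = sp.symbols('a0 a1 a2')
Cmp = sp.Matrix([[0, 1, 0],
                 [0, 0, 1],
                 [-a0, -a1, -a2]])
p = Cmp.charpoly(lam).as_expr()
assert sp.expand(p - (lam**3 + a2*lam**2 + a1*lam + a0)) == 0
print(p)
```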

Theorem 15

(from [4]) Let A be a real square matrix with characteristic polynomial \( p ( \lambda ) \). If A is non-derogatory, then A is similar to the companion matrix of \( p ( \lambda ) \).

Definition 10

Let \(n_1, \ldots , n_m \) be positive integers such that \(n = n_1 + \ldots + n_m\). A matrix of the form

$$\begin{aligned} A = \textrm{diag}\left[ A_1, \ldots , A_m \right] \in \textbf{M}_n \end{aligned}$$
(100)

is called a natural normal form if

  1. (i)

    \(A_i \in \textbf{M}_{n_i}\) are natural normal cells with characteristic polynomials \(p_i(\lambda ) \) for \(i \in \{ 1, \ldots , m \} \),

  2. (ii)

    for every \(j \in \{ 1, \ldots , m-1\} \), the polynomial \(p_j(\lambda ) \) is a divisor of \(p_{j+1}(\lambda ) \).

Theorem 16

(from [1]) Every real square matrix is similar to a unique natural normal form.

Definition 11

Let

$$\begin{aligned} P = \left[ \begin{array}{ccc} p_{11} ( \lambda ) &{} \cdots &{} p_{1n} ( \lambda ) \\ \vdots &{} \ddots &{} \vdots \\ p_{n1} ( \lambda ) &{} \cdots &{} p_{nn} ( \lambda ) \end{array}\right] \in \textbf{M}_{n} \end{aligned}$$
(101)

be a polynomial matrix and \( D _k ( \lambda ) \) the greatest common divisor of all minors of order k in P for \( k \in \{ 1, \ldots , n \} \). The invariant factors of P are defined as follows:

$$\begin{aligned} d_1(\lambda ) = D_1(\lambda ), d_2(\lambda ) = \frac{D_2(\lambda )}{D_1(\lambda ) }, \cdots , d_r(\lambda ) = \frac{D_r(\lambda )}{D_{r-1}(\lambda )}, d_{r+1}(\lambda ) = 0, \cdots , d_n (\lambda ) = 0 . \end{aligned}$$
(102)

If all minors of order k are equal to zero, then \( D _k (\lambda ) = 0 \); here r is the largest integer for which \( D _r ( \lambda ) \ne 0 \).
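
Definition 11 translates directly into a small algorithm: compute each \( D _k ( \lambda ) \) as the gcd of all \(k \times k\) minors and divide successively. The sympy sketch below applies it to the characteristic matrix \(\lambda I - A\) of the nilpotent Jordan cell \(J_3(0)\), a test matrix of our own choosing, and recovers the invariant factors \(1, 1, \lambda ^3\), in line with Lemma 8:

```python
# Sketch: invariant factors of P = lambda*I - A via Definition 11.
import sympy as sp
from itertools import combinations
from functools import reduce

lam = sp.symbols('lambda_')
A = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])   # J_3(0)
P = lam*sp.eye(3) - A
n = P.rows

D = [sp.Integer(1)]                     # D_0 = 1 by convention
for k in range(1, n + 1):
    minors = [P.extract(list(r), list(c)).det()
              for r in combinations(range(n), k)
              for c in combinations(range(n), k)]
    D.append(reduce(sp.gcd, minors))
d = [sp.cancel(D[k]/D[k - 1]) for k in range(1, n + 1)]
print(d)                                # [1, 1, lambda_**3]
```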

Lemma 8

(from [5]) Let \( A \in \textbf{M}_{n} \) be the companion matrix of the polynomial \( p ( \lambda ) \). The invariant factors of the matrix A are equal to \( 1, \ldots , 1, p ( \lambda ) \), where the number of the 1’s equals \( (n - 1) \).

Lemma 9

(from [5]) Let A be the matrix of Definition 10. The invariant factors of the matrix A, are equal to \( 1, \ldots , 1, p_1 ( \lambda ), \ldots , p_m ( \lambda ) \), where the number of the 1’s is given by \( (n - m) \).

Theorem 17

(from [1]) Two real square matrices are similar if and only if they have the same invariant factors.

Definition 12

Let n and m be positive integers such that \( 1 < m \le n \), and define

$$\begin{aligned} N _{m, n} = \{ ( n_1, \ldots , n_m ) \in \mathbb {Z} ^m: 0 < n_1 \le \cdots \le n _m, n = n_1 + \ldots + n _m \} \end{aligned}$$
(103)

Theorem 18

Let n and m be positive integers such that \(1< m < n\). The equivalence classes of the matrix \( A \in \mathfrak {sl}(n,\mathbb {R} )\) are as follows:

$$\begin{aligned}{}[A] _1 = \left\{ \left[ \begin{array}{cccccc} 0 &{} 1 &{} 0 &{} \cdots &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} \cdots &{} 0 &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} 0 &{} 1 \\ -a_0 &{} -a_1 &{} -a_2 &{} \cdots &{} -a_{n-2} &{} 0 \end{array}\right] \in \textbf{M}_{n} \right\} \end{aligned}$$

and \([A] _{(n_1, \ldots , n_m)}\), which is the set of matrices \(\textrm{diag}\left[ A_1, \ldots , A_m \right] \), where the matrices \(A_1, \ldots , A_m\) satisfy the following:

  • \( A_i \in \textbf{M}_{n_i} \) are natural normal cells for \( i \in \{ 1, \ldots , m \} \),

  • \( (n_1, \ldots , n_m) \in N_{m, n} \),

  • \( p_{A _j} ( \lambda ) \) is a divisor of \(p_{A_{j+1}}(\lambda )\) for \(j \in \{ 1, \ldots , m-1 \} \),

  • \( {\textrm{tr}}A_1 + \ldots + {\textrm{tr}}A_m = 0 \).

Proof

Let \( X \in \mathfrak {sl} ( n, \mathbb {R} ) \). By Theorem 15, if X is non-derogatory then X is similar to a natural normal cell; otherwise, by Theorem 16, X is similar to a natural normal form. Suppose that X is similar to A.

First case: A has the form (99). Since \({\textrm{tr}}X = 0 \), also \( {\textrm{tr}}A = 0 \), so that \( a _{n - 1} = 0 \).

Second case: there exists an integer \( m \in \{ 2, \ldots , n \} \) such that A has the form \( \textrm{diag}\left[ A_1, \ldots , A_m \right] \), where the \( A_i \) are natural normal cells with characteristic polynomials \( p _{A _i} ( \lambda ) \) of degree \( n_i \) for \(i \in \{ 1, \ldots , m \} \) and \( n = n_1 + \ldots + n _m \). Since \( p _{A _j} ( \lambda ) \) is a divisor of \( p _{A _{j + 1} } ( \lambda ) \), we have \( n _j \le n _{j + 1} \) for each \( j \in \{ 1, \ldots , m-1\} \), so that \( ( n_1, \ldots , n_m ) \in N _{m, n} \). Using the properties of the trace of a matrix we get \( {\textrm{tr}}A_1 + \ldots + {\textrm{tr}}A_m = 0 \); in particular, for \( m = n \) we have \( A = 0 _n \).

By Theorem 17, X has the same invariant factors as A. This means that the equivalence classes are determined by the invariant factors of A. Therefore, A is a representative of the equivalence class to which X belongs.\(\square \)

Table 1 Equivalence classes for the matrix \(A \in \mathfrak {sl} (5,\mathbb {R} )\)

8 Example: One-Dimensional \(SL(5,\mathbb {R})\)-Subspaces

As an example to illustrate our results, we will find the solutions for g considering A as a member of the Lie algebra \(\mathfrak {sl} ( 5, \mathbb {R} ) \). For this, the following steps must be performed:

  (i) compute the sets \( N_{m, n} \),

  (ii) find the equivalence classes for A,

  (iii) obtain the real Jordan forms for every equivalence class,

  (iv) determine g for each real Jordan form.

The method can be used for \( n \ge 2 \). It is easy to find the sets

$$\begin{aligned} N_{2, 5}= & {} \{ ( 1, 4 ), ( 2, 3 ) \} \end{aligned}$$
(104)
$$\begin{aligned} N_{3, 5}= & {} \{ ( 1, 1, 3 ), ( 1, 2, 2 ) \} \end{aligned}$$
(105)
$$\begin{aligned} N_{4, 5}= & {} \{ ( 1, 1, 1, 2 ) \} \end{aligned}$$
(106)
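These sets can also be generated mechanically. The following Python helper (our own, with hypothetical name N) enumerates the nondecreasing tuples of Definition 12 and reproduces (104)-(106):

```python
# A small enumeration helper (ours, illustrative): the sets N_{m,n} of
# Definition 12 are the nondecreasing m-tuples of positive integers summing to n.
def N(m, n, smallest=1):
    """Yield every (n_1, ..., n_m) with smallest <= n_1 <= ... <= n_m and sum n."""
    if m == 1:
        if n >= smallest:
            yield (n,)
        return
    for n1 in range(smallest, n // m + 1):
        for rest in N(m - 1, n - n1, smallest=n1):
            yield (n1,) + rest

for m in (2, 3, 4):
    print(m, list(N(m, 5)))
# 2 [(1, 4), (2, 3)]
# 3 [(1, 1, 3), (1, 2, 2)]
# 4 [(1, 1, 1, 2)]
```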

Hence, we have six equivalence classes: \( \mathfrak {A} = [A] _1 \), \( \mathfrak {B} = [A] _{( 2, 3 )} \), \( \mathfrak {C} = [A] _{( 1, 4 )} \), \( \mathfrak {D} = [A] _{( 1, 2, 2 )} \), \( \mathfrak {E} = [A] _{( 1, 1, 3 )} \) and \( \mathfrak {F} = [A] _{( 1, 1, 1, 2 )} \).

In what follows, we will explain in detail how to determine \( \mathfrak {B} \). The other five equivalence classes can be obtained in a similar way; all classes are shown in Table 1. Let \( A \in \mathfrak {B} \); then A has the form \( \textrm{diag}\left[ A_1, A_2 \right] \), where \( A_1 \in \textbf{M}_{2} \) and \( A_2 \in \textbf{M}_{3} \) are natural normal cells. By Lemma 9 the invariant factors of the matrix A are \( 1, 1, 1, p_{A_1} ( \lambda ), p_{A_2} ( \lambda ) \), where \( p_{A_1} ( \lambda ) \) and \( p_{A_2} ( \lambda ) \) are the characteristic polynomials of \( A_1 \) and \( A_2 \), of degrees 2 and 3, respectively. Now, assume that \( p_{A_1} ( \lambda ) = \lambda ^2 - b \lambda - a \), where \( a, b \in \mathbb {R} \). Since \( p_{A_1} ( \lambda ) \) divides \( p_{A_2} ( \lambda ) \), we can write, without loss of generality, \( p_{A_2} ( \lambda ) = ( \lambda - e ) p_{A_1} ( \lambda ) \) for some \( e \in \mathbb {R} \). From the characteristic polynomial of A, \( p_{A} ( \lambda ) = p_{A_1} ( \lambda ) p_{A_2} ( \lambda ) \), we find \( {\textrm{tr}}A = 2 b + e = 0 \), so \( e = - 2 b \) and \( p_{A_2} ( \lambda ) = ( \lambda + 2 b ) ( \lambda ^2 - b \lambda - a ) = \lambda ^3 + b \lambda ^2 - c \lambda - 2 a b \). The matrices \( A_1 \) and \( A_2 \) are then the companion matrices of \( p_{A_1} ( \lambda ) \) and \( p_{A_2} ( \lambda ) \), respectively, hence

$$\begin{aligned} A_1 = \left[ \begin{array}{c c} 0 &{} 1 \\ a &{} b \end{array} \right] , A_2 = \left[ \begin{array}{c c c} 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ 2 a b &{} c &{} -b \end{array} \right] \end{aligned}$$
(107)

where \( c = a + 2 b ^2 \).
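As a consistency check (our own, not part of the paper), one can verify with sympy that the matrices in (107) have the stated characteristic polynomials, that \( p_{A_1} \) divides \( p_{A_2} \) with quotient \( \lambda + 2 b \), and that the trace condition of Theorem 18 is satisfied:

```python
# Sketch (ours): verifying Eq. (107) symbolically with sympy.
import sympy as sp

lam, a, b = sp.symbols('lamda a b')
c = a + 2*b**2

A1 = sp.Matrix([[0, 1],
                [a, b]])
A2 = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [2*a*b, c, -b]])

p1 = A1.charpoly(lam).as_expr()           # lambda**2 - b*lambda - a
p2 = A2.charpoly(lam).as_expr()           # lambda**3 + b*lambda**2 - c*lambda - 2ab
q, rem = sp.div(p2, p1, lam)              # polynomial division in lambda
assert sp.expand(rem) == 0                # p_{A1} divides p_{A2}
assert sp.expand(q - (lam + 2*b)) == 0    # the quotient is lambda + 2b
assert A1.trace() + A2.trace() == 0       # tr A = 0, i.e. A in sl(5, R)
```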

Table 2 Solutions for g considering \(A\in \mathfrak {A}\)
Table 3 Solutions for g considering \(A\in \mathfrak {B}\)
Table 4 Solutions for g considering \(A\in \mathfrak {C}\)
Table 5 Solutions for g considering \(A\in \mathfrak {D}\)
Table 6 Solutions for g considering \(A\in \mathfrak {E}\)
Table 7 Solutions for g considering \(A\in \mathfrak {F}\)

In order to obtain the real Jordan forms of \( \mathfrak {B} \), we use the fact that a quadratic polynomial with real coefficients has either two distinct real roots, a repeated real root, or a pair of complex conjugate roots. Hence we can rewrite

  (i) \( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) ^2 \), where \( r _1 \ne r _2 \),

  (ii) \( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) ( \lambda - r _3 ) \), where \( r _1, r _2, r _3 \) are pairwise distinct,

  (iii) \( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ^2 \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ^3 \),

  (iv) \( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ^2 \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ^2 ( \lambda - r _2 ) \), where \( r _1 \ne r _2 \),

  (v) \( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ^2 + \theta ^2 \), \( p_{A_2} ( \lambda ) = ( ( \lambda - r _1 ) ^2 + \theta ^2 ) ( \lambda - r _2 ) \), where \( \theta > 0 \),

so that \( A_1 \) and \( A_2 \) are similar to

  (i) \( \textrm{diag}\left[ J _1 ( r_1 ), J _1 ( r_2 ) \right] \) and \( \textrm{diag}\left[ J _1 ( r_1 ), J _2 ( r_2 ) \right] \),

  (ii) \( \textrm{diag}\left[ J _1 ( r_1 ), J _1 ( r_2 ) \right] \) and \( \textrm{diag}\left[ J _1 ( r_1 ), J _1 ( r_2 ), J _1 ( r_3 ) \right] \),

  (iii) \( J _2 ( r_1 ) \) and \( J _3 ( r_1 ) \),

  (iv) \( J _2 ( r_1 ) \) and \( \textrm{diag}\left[ J _2 ( r_1 ), J _1 ( r_2 ) \right] \),

  (v) \( J _1 \left[ \begin{array}{cc} r_1 &{} -\theta \\ \theta &{} r_1 \end{array} \right] \) and \( \textrm{diag}\left[ J _1 \left[ \begin{array}{cc} r_1 &{} -\theta \\ \theta &{} r_1 \end{array} \right] , J _1 ( r_2 ) \right] \),

respectively. Therefore, A is similar to

  (i) \( \textrm{diag}\left[ J _{1,1} ( r_1 ), J _{1,2} ( r_2 ) \right] \),

  (ii) \( \textrm{diag}\left[ J _{1,1} ( r_1 ), J _{1,1} ( r_2 ), J _1 ( r_3 ) \right] \),

  (iii) \( J _{2,3} ( r_1 ) \),

  (iv) \( \textrm{diag}\left[ J _{2,2} ( r_1 ), J _1 ( r_2 ) \right] \),

  (v) \( \textrm{diag}\left[ J _{1,1} \left[ \begin{array}{cc} r_1 &{} -\theta \\ \theta &{} r_1 \end{array} \right] , J _1 ( r_2 ) \right] \).

Applying the condition \( {\textrm{tr}}A_1 + {\textrm{tr}}A_2 = 0 \) we get

  (i) \( r_1 = -3 q / 2 \), \( r_2 = q \), \( q \ne 0 \),

  (ii) \( 2 r_1 + 2 r_2 + r_3 = 0 \),

  (iii) \( r_1 = 0 \),

  (iv) \( r_1 = q \), \( r_2 = -4 q \), \( q \ne 0 \),

  (v) \( r_1 = q \), \( r_2 = -4 q \), \( q \ne 0 \).

The real Jordan forms for every equivalence class of A are presented in Tables 2, 3, 4, 5, 6 and 7. Note that q is a real constant, and that r and \(\theta \), with or without indices, denote real numbers.
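As an illustration (a check of our own), consider case (iii): the trace condition forces \( r_1 = 0 \), hence \( a = b = 0 \) in (107), and \( A = \textrm{diag}\left[ A_1, A_2 \right] \) is already the real Jordan form \( J_{2,3}(0) \). A quick sympy verification:

```python
# Sketch of a check with sympy (ours): for a = b = 0, Eq. (107) gives two
# nilpotent companion blocks whose Jordan form is J_{2,3}(0).
import sympy as sp

A1 = sp.Matrix([[0, 1],
                [0, 0]])                 # J_2(0)
A2 = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])              # J_3(0)
A = sp.diag(A1, A2)

P, J = A.jordan_form()                   # A = P*J*P**(-1)
print(J)                                 # Jordan blocks of sizes 2 and 3, eigenvalue 0
assert P*J*P.inv() == A                  # (block ordering inside J may vary)
```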

Finally, we determine g for \( J _{2,3 } ( 0 ) \). By Theorem 11 we obtain \( g = \left[ \begin{array}{cc} g _{11} &{} g _{12} \\ g ^T _{12} &{} g _{22} \end{array} \right] \), where \( g_{12} = \left[ \begin{array}{cc} h&0 \end{array} \right] \in \textbf{M}_{2 \times 3} \); \( g _{11}, h \in \textbf{M}_{2} \) and \( g _{22} \in \textbf{M}_{3} \) are matrix functions given by Theorem 10. Thus

$$\begin{aligned} g _{11}= & {} \left[ \begin{array}{cc} A_1 + A_2 \xi &{} A_2 \\ A_2 &{} 0 \end{array} \right] ,\end{aligned}$$
(108)
$$\begin{aligned} g _{22}= & {} \left[ \begin{array}{c c c} B_1 + B_2 \xi + B_3 \xi ^2 /2 &{} B_2 + B_3 \xi &{} B_3 \\ B_2 + B_3 \xi &{} B_3 &{} 0 \\ B_3 &{} 0 &{} 0 \end{array} \right] ,\end{aligned}$$
(109)
$$\begin{aligned} g _{12}= & {} \left[ \begin{array}{c c c} C_1 + C_2 \xi &{} C_2 &{} 0 \\ C_2 &{} 0 &{} 0 \end{array} \right] \end{aligned}$$
(110)

Tables 2, 3, 4, 5, 6 and 7 show the other solutions. Note that the letters A, B, C, D and E, with or without indices, denote real constants.

In general relativity, Boyer-Lindquist coordinates play an important role. They are defined by \(\rho = \sqrt{ r^2 - 2 m r + \sigma ^2 } \sin \theta \) and \(\zeta = ( r - m ) \cos \theta \), where m and \(\sigma \) are constant parameters. In these coordinates, the Laplace equation (28) is transformed into

$$\begin{aligned} ( ( r^2 - 2 m r + \sigma ^2 ) \xi _{, r} ) _{, r} + \frac{1}{\sin \theta } ( \xi _{, \theta } \sin \theta ) _{, \theta } = 0 \ . \end{aligned}$$
(111)

Some solutions of (111) can be found in [7]. As an example, we consider the case where the parameter \(\xi \) depends only on r and \(\sigma = 0\); then

$$\begin{aligned} \xi = \frac{ \gamma }{ 2 m } \ln \left( 1 - \frac{2 m}{r} \right) + \delta \end{aligned}$$
(112)

where \(\gamma \) and \(\delta \) are real constants. For \(n = 2\), we choose \(A = \textrm{diag}\left[ \lambda , -\lambda \right] \); its corresponding matrix g is then \(\textrm{diag}\left[ \epsilon e^{\lambda \xi } , - e^{-\lambda \xi }/\epsilon \right] \), where \(\lambda \) and \(\epsilon \) are real constants. Thus, \(g = \textrm{diag}\left[ -C \left( 1 - \frac{2 m}{r} \right) ^{-p} , \left( 1 - \frac{2 m}{r} \right) ^p /C \right] \), where \(p = - \frac{\lambda \gamma }{2m}\) and C is a real constant. Moreover, the differential equations (26) for the function f are transformed into

$$\begin{aligned} \left( \ln ( f \sqrt{\rho } ) \right) _{, r}= & {} \frac{ 2 m^2 p^2 \sin ^2 \theta }{ r^2 - 2 m r + m^2 \sin ^2 \theta } \frac{ r - m }{ r^2 - 2 m r } \end{aligned}$$
(113)
$$\begin{aligned} \left( \ln ( f \sqrt{\rho } ) \right) _{, \theta }= & {} - \frac{ 2 m^2 p^2 \sin \theta \cos \theta }{ r^2 - 2 m r + m^2 \sin ^2 \theta } \end{aligned}$$
(114)

Solving them, we get

$$\begin{aligned} f = \frac{ D \Delta ^{-p^2} }{\sqrt{\rho }} \end{aligned}$$
(115)

where D is a constant and

$$\begin{aligned} \Delta = 1 + \frac{m^2 \sin ^2 \theta }{r^2 -2 m r} \end{aligned}$$
(116)

Therefore, an exact solution of the EFE is

$$\begin{aligned} \begin{aligned} \hat{g}&= \frac{ D \Delta ^{1 -p^2} }{\sqrt{\rho }} \left( dr \otimes dr + ( r^2 - 2 m r ) d\theta \otimes d\theta \right) \\&- \frac{\rho }{C} \left( 1 - \frac{2 m}{r} \right) ^p d t \otimes d t + C \rho \left( 1 - \frac{2 m}{r} \right) ^{-p} d x^4 \otimes d x^4 \end{aligned} \end{aligned}$$
(117)

9 Conclusions

The EFE are among the most interesting and complicated equations to solve in physics. Techniques for solving them in four dimensions have been developed in the past; one of the most successful relies on subspaces and subgroups. This method generates solutions of the 4-dimensional EFE on demand, in the sense that the Laplace equation supplies the solutions for monopoles, dipoles, etc. In this work we used this technique to solve the (\(n+2\))-dimensional EFE in vacuum, reducing the final matrix equation to its real Jordan form, which makes the equations considerably easier to solve. We obtained a large number of solutions of the EFE in terms of the Laplace parameter, so that each solution of the Laplace equation may yield a different solution of the EFE. One can combine the different solutions to obtain even more solutions.