Abstract
In this work we find solutions of the (\(n+2\))-dimensional Einstein Field Equations (EFE) with n commuting Killing vectors in vacuum. In the presence of n Killing vectors, the EFE can be separated into blocks of equations. The main part can be summarized in the chiral equation \((\alpha g_{, \bar{z}} g^{-1})_{, z} + (\alpha g_{, z} g^{-1})_{, \bar{z}} = 0\) with \( g\in SL(n,\mathbb {R})\). The other block reduces to the differential equation \((\ln f \alpha ^{1-1/n})_{, z} = \tfrac{1}{2} \, \alpha \,{\textrm{tr}}( g_{, z} g^{-1})^2\) and its complex conjugate. We use the ansatz \(g = g(\xi ) \), where \(\xi \) satisfies a generalized Laplace equation, so the chiral equation reduces to a matrix equation that can be solved using algebraic methods, turning the problem of obtaining exact solutions of these complicated differential equations into an algebraic problem. The different EFE solutions can then be chosen with the desired physical properties in a simple way.
1 Introduction
The Einstein Field Equations (EFE) are among the most interesting field equations in physics, and from a mathematical point of view the search for methods to obtain solutions has led to a large number of mathematical results. The first exact solution was obtained by Karl Schwarzschild in 1916, but the meaning of the solution was the subject of a long debate. We now know that the Schwarzschild solution represents a static black hole. Its generalization to a stationary solution had to wait more than 40 years, until it was found by Roy Kerr. These solutions have been the cornerstone of the theory of general relativity and its interpretation, and they represent static and stationary black holes, respectively. Since the discovery of Kerr's solution, the area of exact solutions of the EFE has been very active, see for example [10]. Several mathematical methods have been developed with great success to find exact solutions of the EFE. One of the most successful has been the method of subspaces and subgroups, which is capable of generating exact solutions on demand: it is possible to decide the physical content of the solution from the beginning. That is why in this work we adopt this solution method.
On the other hand, interest in higher dimensional theories began in 1919 with Theodor Kaluza's proposal of a five-dimensional space-time that unified gravitation with electromagnetism. Kaluza proposed that the metric of a five-dimensional spacetime can be separated as \(g^5_{\mu \nu }=g^4_{\mu \nu }+I^2A_\mu A_\nu \), for \(\mu ,\nu =1,\cdots ,4\), \(g^5_{5\mu }=IA_\mu \) and \(g^5_{55}=I^2\), where \(A_\mu \) is the electromagnetic four-potential and I is related to a scalar field, called the dilaton, see for example [8]. This theory has evolved toward the unification of all interactions: the electromagnetic, strong and weak interactions with gravity. However, when quantum interactions are included, the theory is neither renormalizable nor quantizable. This motivated string and superstring theory as quantizable, renormalizable higher dimensional theories, see for example [2]. The price to pay is that the extra dimensions must be singular. In this paper we propose that the extra dimensions form an (\(n-2\))-dimensional space with \(n-2\) Killing vectors, which may be singular and is interesting enough to be studied.
In this work we aim to find exact solutions of the EFE from the mathematical point of view, using the method of subspaces and subgroups, which has proved very successful in producing a great number of exact solutions.
We start with an (\(n+2\))-dimensional space and are interested in 4-dimensional spacetimes that are stationary and axially symmetric. This means that the 4-dimensional spacetime contains two commuting Killing vectors, while the extra-dimensional space is (\(n-2\))-dimensional with \(n-2\) commuting Killing vectors, so that the n-dimensional space contains n commuting Killing vectors. Thus, in this case we can work in a coordinate system where the metric depends only on the two variables \(x^1\) and \(x^2\), so that the metric tensor has the form
where the components of \(\hat{g}\), f and g, for \(\mu , \nu = 3, \ldots , n+2\), depend on two variables \(x^1\) and \(x^2\). In the following, we will denote the uppercase indices as \(A,B=1,\ldots , n+2\) and the Greek indices as \(\mu , \nu = 3,\ldots , n+2\).
Throughout this paper, the set of matrices of size \(m\times n\) with entries in \(\mathbb {R}\) is denoted by \(\textbf{M}_{m \times n} \); we write \(\textbf{M}_{m}\) if \(m = n\). The identity matrix and the zero matrix are denoted by \(I_n\) and \(0_n\), respectively, and \(\textbf{Sym}_{n}\) is the subset of symmetric matrices in \(\textbf{M}_{n}\).
This work is organized as follows. In Section 2 we follow [6] to write the main field equations, obtaining the Ricci tensor for an (\(n+2\))-dimensional space with n commuting Killing vectors. Section 4 presents the algebraic basis of the matrices used in this work. In Section 5 we express these matrices in their Jordan normal form in order to solve the final algebraic equation. In Section 8, using the Jordan form of the matrices, we obtain the solutions of the algebraic equations. Finally, Section 9 contains some conclusions.
2 Field Equations
In this section we derive the main field equations for an (\(n+2\))-dimensional Riemannian space with n commuting Killing vectors. The metric components depend only on the coordinates \(x^1\) and \(x^2\), that is, \(\hat{g}_{A B}=\hat{g}_{A B}(x^1,x^2)\). In this case, the Christoffel symbols are given as
For the metric (1) we have
the remaining components are zero.
In order to compute the Ricci tensor with our metric,
it is convenient to use the variable \(z = x ^1 + i x ^2 \) and its complex conjugate \({\bar{z}}\). Hence, the non-zero components of the Ricci tensor are as follows:
where \(\det g_{\mu \nu } = - \alpha ^2\).
We will use matrix notation, let us define the matrix g from the components of the metric tensor \(g_{\mu \nu }\) as follows:
Note that the matrix g is real and symmetric, that is, denoting by T the transpose of a matrix,
The vacuum Einstein equations are given by
From \(R_{\mu } ^{\nu } = 0 \) we obtain the chiral equations
Its trace gives a differential equation for \(\alpha \):
From now on, the index Z will take the values z and \(\bar{z}\). Using \(R_{1 1} - R_{2 2} \pm 2i R_{1 2} = 0 \) we find
Both Equations (for z and \({\bar{z}}\)) satisfy
Using the transformation
we normalize g, i.e., \(\det g = ( -1 ) ^{n+1}\). Therefore, g is a symmetric matrix in \(SL(n,\mathbb {R})\).
The chiral equation (17) does not change under the transformation (21), whereas (19) takes the form
The chiral equation (17) is invariant under transformations
where \( C \in SL(n,\mathbb {R}) \) is a constant matrix. The general solution of the differential equation (18) for \(\alpha \) is given as
where \( \alpha _z \) and \( \alpha _{\bar{z}} \) are arbitrary functions. Choosing Weyl coordinates, i.e.,
Equations (22) are reduced to
The next sections will introduce important quantities to transform the differential equations (17).
3 One-Dimensional Subspaces
Suppose that g depends on a parameter \(\xi \), which is an arbitrary function of the variables z and \(\bar{z}\). Then, the chiral equation (17) changes to
Now we assume that the parameter \(\xi \) satisfies the Laplace equation
then \(g_{, \xi } g^{-1} = A\) is a constant matrix. Note that each new solution of the Laplace equation gives another solution for g. From the properties of the matrix g we obtain
Equations (29) and (30) imply that A belongs to the Lie algebra \(\mathfrak {sl} (n,\mathbb {R} )\), the Lie algebra corresponding to the group \(SL(n,\mathbb {R})\). The matrix A varies as
under the transformation (23). The relation (32) separates the set of matrices A into equivalence classes. We will work with a representative matrix of each class.
4 The Subspace \( \mathcal {I} ( A ) \)
It is possible to find the general form of g given A if we consider the property (15), together with the intertwining relation (31) satisfied by the matrix A. To do so, let us define the following set.
Definition 1
For any non-zero matrix \( A \in \textbf{M}_{n}\), define the set \(\mathcal {I}( A )\) as
Observe that \( g \in \mathcal {I}(A) \). Thus, the problem of finding the form of g has been transformed into a linear algebra problem. First, let us derive the following useful properties.
Theorem 1
For any non-zero matrix \( A \in \textbf{M}_{n}\), \(\mathcal {I}( A )\) is a subspace of the vector space \(\textbf{M}_{n}\).
Proof
Let \(\alpha \in \mathbb {R} \) and let \(X, Y \in \mathcal {I}(A)\). We have \( ( \alpha X ) ^T = \alpha X ^T = \alpha X\) and \( ( X + Y ) ^T = X ^T + Y ^T = X + Y \). Then, \(A ( \alpha X ) = \alpha ( A X ) = \alpha ( X A ^T ) = ( \alpha X ) A ^T \) and \( A ( X + Y ) = A X + A Y = X A ^T + Y A ^T = ( X + Y ) A ^T \), so that \(\alpha X \in \mathcal {I}(A)\) and \(X + Y \in \mathcal {I}(A)\).\(\square \)
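As a quick numerical illustration (a minimal sketch assuming NumPy; the matrices A, X, Y below are example choices, not taken from the paper), one can verify the closure properties of \(\mathcal {I}(A)\) established in Theorem 1:

```python
import numpy as np

def in_I(A, X, tol=1e-12):
    """Membership test for I(A): X symmetric and A X = X A^T."""
    return bool(np.allclose(X, X.T, atol=tol)
                and np.allclose(A @ X, X @ A.T, atol=tol))

lam = 0.7
A = np.array([[lam, 1.0], [0.0, lam]])   # example: a 2x2 Jordan cell

# two example elements of I(A) (equal antidiagonals, zero bottom-right entry)
X = np.array([[2.0, 3.0], [3.0, 0.0]])
Y = np.array([[-1.0, 5.0], [5.0, 0.0]])

assert in_I(A, X) and in_I(A, Y)
assert in_I(A, 4.2 * X)          # closed under scalar multiplication
assert in_I(A, X + Y)            # closed under addition
```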
Definition 2
For any non-zero matrix \( A \in \textbf{M}_{n}\) and \( \xi \in \mathbb {R} \), define
For more information on the matrix exponential, see, for example, [3, 11]. The following lemmas are corollaries of the above.
Lemma 1
Let \( A, g \in \textbf{M}_{n}\) be non-zero matrices and \( \xi \in \mathbb {R} \). Then \( e ^{ \xi A } g = g e ^{ \xi A ^T } \) if and only if \( A g = g A ^T \).
Proof
We define the matrix function \( F ( \xi ) = e ^{ \xi A } g e ^{ -\xi A ^T } \). Its derivative is \( F' ( \xi ) = e ^{ \xi A } \ ( A g - g A ^T ) e ^{ -\xi A ^T } \). If \( g \in \mathcal {I} ( A ) \), then \( F' ( \xi ) = 0 \), so that \( F ( \xi ) = F ( 0 ) \). Therefore, \( e ^{ \xi A } g = g e ^{ \xi A ^T } \). Now, if \( e ^{ \xi A } g = g e ^{ \xi A ^T } \), then \( F ( \xi ) = F ( 0 ) \). Its derivative at \( \xi = 0 \) gives \( A g = g A ^T \). \(\square \)
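Lemma 1 can also be illustrated numerically. The sketch below (assuming NumPy; the truncated power-series `expm` is an illustrative helper, adequate for these small matrices) checks both directions of the equivalence on examples:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via its power series (sufficient for small norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

lam, xi = 0.3, 1.7
A = np.array([[lam, 1.0], [0.0, lam]])   # Jordan cell J_2(lam)
g = np.array([[1.0, 2.0], [2.0, 0.0]])   # g in I(A): A g = g A^T holds

assert np.allclose(A @ g, g @ A.T)
assert np.allclose(expm(xi * A) @ g, g @ expm(xi * A.T))

# a symmetric matrix outside I(A) breaks the exponential relation
h = np.diag([1.0, 2.0])
assert not np.allclose(expm(xi * A) @ h, h @ expm(xi * A.T))
```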
It is convenient to reduce the matrices we work with to simple matrices using the equivalence relation (23). In particular, to facilitate the computation of the matrix exponentials, we will use the Jordan matrices introduced in the next section.
5 Jordan Matrices
The invariance (23) allows us to use normal forms for the matrix A, which is then used to determine the matrix g. In this work we choose the real Jordan form of a matrix because of its simplicity, and because in this representation the matrix A is always real, even if it has complex conjugate eigenvalues. For an example of using the natural normal form of matrices instead of the Jordan form, see [5], where the group \(SL(3,\mathbb {R} )\) was discussed. Here we are going to focus on the group \(SL(5,\mathbb {R} )\) in its Jordan representations. For more information on the Jordan form, see, for example, [1, 4, 9].
It is well known that a real square matrix may have real and complex eigenvalues, where for each complex eigenvalue \(\alpha +\beta i\) its complex conjugate \(\alpha -\beta i\) is also an eigenvalue. To avoid including the complex values explicitly in the Jordan matrix, it is possible to represent each such pair \(\alpha \pm \beta i\) by a real \(2\times 2\) matrix
Therefore, we will consider Jordan blocks of two kinds, one for the real eigenvalues and another for the pairs of complex conjugate eigenvalues. Furthermore, it is convenient for our work to represent the Jordan matrices as decomposed into blocks which make the type of the eigenvalues visible. In consequence, we introduce several types of Jordan blocks and matrices, more general than the standard notions from the common literature, as follows.
Definition 3
For \( \lambda \in \mathbb {R} \), a Jordan cell \( J _n (\lambda ) \in \textbf{M}_{n}\) is an upper triangular matrix of the form
Definition 4
Suppose
A Jordan \(\Lambda \)-block of the first kind \( J _n ( \Lambda ) \in \textbf{M}_{2n}\) is a block upper triangular matrix of the form
In the remainder of the article, \(\Lambda \), if not specified otherwise, is supposed to have the form in (37).
Definition 5
Let \( \lambda \in \mathbb {R} \) and \( n_1, \ldots , n_m \) be positive integers such that \( n = n_1 + \ldots + n_m \). A Jordan matrix \( J _{ n _1, \ldots , n _m } ( \lambda ) \in \textbf{M}_{n}\) is a block diagonal matrix
where \(J_{n_i} ( \lambda )\) are Jordan cells for all \(i = 1, \ldots , m \).
Definition 6
Let \( n _1, \ldots , n _m \) be positive integers such that \( n = n _1 + \ldots + n _m \). A Jordan \(\Lambda \)-block of the second kind \( J _{ n _1, \ldots , n _m } ( \Lambda ) \in \textbf{M}_{2n}\) is a block diagonal matrix
where \(J_{n _i} ( \Lambda ) \) are Jordan \(\Lambda \)-blocks of the first kind for all \( i = 1, \ldots , m \).
Definition 7
Let \( \lambda _i \in \mathbb {R} \), \(i \in \{ 1,2,\cdots ,p \}\) and
with \( \beta _k > 0 \), such that all scalars and matrices are distinct. Let \( m ^i _1, \ldots , m ^i _{r _i} \) and \( n ^k _1, \ldots , n ^k _{s _k} \) be positive integers such that \( m ^i = m ^i _1 + \ldots + m ^i _{r _i} \), \( n ^k = n ^k _1 + \ldots + n ^k _{s _k} \), \( m = m ^1 + \ldots + m ^p \) and \( n = n ^1 + \ldots + n ^q \). A generalized Jordan matrix \( J \in \textbf{M}_{m + 2n}\) is defined as a block diagonal matrix of the form
where \( J _{m ^i _1, \ldots , m ^i _{r _i}} ( \lambda _i ) \) are Jordan matrices for all \( i \in \{ 1, \ldots , p \} \), and \( J _{n ^k _1, \ldots , n ^k _{s _k}} ( \Lambda _k ) \) are Jordan \(\Lambda \)-blocks of the second kind for all \( k \in \{ 1, \ldots , q \} \).
Theorem 2
(from [4]) Each \( A \in \textbf{M}_{n}\) is similar, via a real similarity transformation, to a generalized Jordan matrix of the form given in Definition 7, in which the scalars \( \lambda _1, \ldots , \lambda _p \) are the real eigenvalues of A and its complex conjugate eigenvalues \( \alpha _k \pm i \beta _k \) are represented by the matrices \( \Lambda _k \) for all \( k \in \{ 1, \ldots , q \} \).
Theorem 3
Let \( \lambda \in \mathbb {R} \) and \(J _n ( \lambda )\) be a Jordan cell. Then \(\mathcal {I}(J_n (\lambda ))\) coincides with the set of all real square matrices of order n which are of the form
Proof
Let
The intertwining relation \(J _n ( \lambda ) X = X J ^T _n ( \lambda )\) implies the following:
Equations (45) mean that all entries of any antidiagonal of X are equal and that all antidiagonals below the main antidiagonal are zero.\(\square \)
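The antidiagonal structure in Theorem 3 can be verified directly. In the following sketch (assuming NumPy; `jordan_cell` and `antidiagonal_matrix` are illustrative helpers), a symmetric matrix with equal antidiagonals and zeros below the main antidiagonal satisfies the intertwining relation:

```python
import numpy as np

def jordan_cell(lam, n):
    """Jordan cell J_n(lam): lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(n) + np.diag(np.ones(n - 1), 1)

def antidiagonal_matrix(x):
    """The form of Theorem 3: entry (i, j) depends only on i + j, and the
    antidiagonals below the main one are zero."""
    n = len(x)
    X = np.zeros((n, n))
    for i in range(n):
        for j in range(n - i):
            X[i, j] = x[n - 1 - i - j]
    return X

J = jordan_cell(-0.6, 4)
X = antidiagonal_matrix([1.0, 2.0, 3.0, 4.0])
assert np.allclose(X, X.T)            # X is symmetric
assert np.allclose(J @ X, X @ J.T)    # X belongs to I(J_4(lambda))
```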
Lemma 2
Let m and n be two positive integers such that \( m < n \) and let \( X \in \textbf{M}_{m \times n}\). Then any Jordan cells \(J_m, J_n\) satisfy that
Proof
Let p be a positive integer such that \( m \le p \le n -1 \). If \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \), then \( J _m ( 0 ) X = X J ^T _n ( 0 ) \), so that \( J ^p _m ( 0 ) X = X ( J ^p _n ( 0 ) ) ^T \). Since \( J _m ( 0 ) \) is nilpotent of index m and \( p \ge m \), the left-hand side vanishes; therefore \( X ( J ^p _n ( 0 ) ) ^T = 0 \). If \( p = n - 1 \), then \( x _{i n} = 0 \) for each \( i \in \{ 1, \ldots , m \} \). Proceeding analogously in decreasing order to \( p = m \) we obtain
Hence, we can write \(X = \left[ \begin{array}{cc} Y&0 \end{array}\right] \), where \(Y \in \textbf{M}_{m}\). If we partition \( J _n ( \lambda ) \) as
where
then the intertwining relation \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \) implies \( J _m ( \lambda ) Y = Y J ^T _m ( \lambda ) \), so that \( Y \in \mathcal {I} ( J _m ( \lambda ) ) \).
Now, let \(X = \left[ \begin{array}{cc} Y&0 \end{array}\right] \in \textbf{M}_{m \times n}\) and let \( Y \in \textbf{M}_{m}\). If \( Y \in \mathcal {I} ( J _m ( \lambda ) ) \), then \( J _m ( \lambda ) Y = Y J ^T _m ( \lambda ) \), which implies that \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \). \(\square \)
Lemma 3
Let m and n be two positive integers such that \( m > n \) and let \( X \in \textbf{M}_{m \times n}\). Then any Jordan cells \(J_m, J_n\) satisfy that
Proof
We rewrite \( J _m ( \lambda ) X = X J ^T _n ( \lambda ) \) as \( J _n ( \lambda ) X ^T = X ^T J ^T _m ( \lambda ) \). By Lemma 2 we have \(X^T = \left[ \begin{array}{cc} Y&0 \end{array}\right] \) with \(Y \in \mathcal {I} ( J _n ( \lambda ) ) \), hence \( X = \left[ \begin{array}{c} Y \\ 0 \end{array}\right] \). \(\square \)
Theorem 4
Let \( n _1, \ldots , n _m \) be positive integers such that \( n = n _1 + \ldots + n _m \), \( \lambda \in \mathbb {R} \), and let \( J _{ n _1, \ldots , n _m } ( \lambda ) \in \textbf{M}_{n}\) be a Jordan matrix. Then every matrix \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \lambda ) )\) is a block matrix of the form
where for each \(i, j \in \{ 1, \ldots , m\} \), \( X _{i j} \in \textbf{M}_{n _i\times n _j}\) satisfies \(X ^T _{i j} = X _{j i} \) and is of the following form:
-
(i)
If \(n_i = n_j\) then \(X_{ij} \in \mathcal {I}( J_{n_i}(\lambda ))\).
-
(ii)
If \(n_i < n_j\) then \(X_{ij} = \left[ \begin{array}{cc} Y _{ij}&0 \end{array}\right] \) with \(Y_{ij}\in \mathcal {I}(J_{n_i}(\lambda ))\).
-
(iii)
If \(n_i > n_j\) then \(X_{ij} = \left[ \begin{array}{c} Y _{ij} \\ 0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_j}(\lambda ))\).
Proof
Let \( i, j \in \{ 1, \ldots , m\} \). If \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \lambda ) )\) then \( J _{ n _1, \ldots , n _m } ( \lambda ) X = X J ^T _{ n _1, \ldots , n _m } ( \lambda ) \) and \( X ^T = X \), so that \( J _{n _i} (\lambda ) X _{i j} = X _{i j} J _{n _j} ^T (\lambda ) \) and \( X _{j i} = X ^T _{i j} \). If \(n _i = n _j \) then \(X _{i j}\) is symmetric, so that \(X _{i j} \in \mathcal {I}( J _{n _i} (\lambda ) )\). By Lemma 2 we have that \(X_{ij} = \left[ \begin{array}{cc} Y_{ij}&0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_i}(\lambda ))\) for \(n_i < n_j\). For \(n _i > n _j\), by Lemma 3 we find \( X_{ij} = \left[ \begin{array}{c} Y _{ij} \\ 0 \end{array}\right] \) with \(Y_{ij}\in \mathcal {I}(J_{n_j}(\lambda ))\).\(\square \)
Lemma 4
For any \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \in \textbf{M}_{2}\) with \(\beta >0\), \(\mathcal {I}(\Lambda )\) coincides with the set of all real symmetric \(2\times 2\) matrices of the form
Proof
For any
from the intertwining relation \(\Lambda X = X \Lambda ^T\) together with \( \beta > 0 \) we get \( x _3 = x _2 \) and \( x _4 = - x _1 \).\(\square \)
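A direct numerical check of Lemma 4 (a sketch assuming NumPy; the values of \(\alpha\), \(\beta\), a, b are arbitrary examples):

```python
import numpy as np

alpha, beta = 0.5, 2.0
Lam = np.array([[alpha, -beta], [beta, alpha]])

a, b = 1.3, -0.4
X = np.array([[a, b], [b, -a]])        # the symmetric form of Lemma 4
assert np.allclose(Lam @ X, X @ Lam.T)

Y = np.eye(2)                          # symmetric, but not of that form
assert not np.allclose(Lam @ Y, Y @ Lam.T)
```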
Lemma 5
Let \( X \in \textbf{M}_{2}\), \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \in \textbf{M}_{2}\) with \(\beta > 0\). Any \( Y \in \mathcal {I} ( \Lambda ) \) satisfies that
Proof
Let \(X = \left[ \begin{array}{cc} x &{} y \\ z &{} t \end{array}\right] \in \textbf{M}_{2}\). If \(Y \in \mathcal {I} ( \Lambda ) \), then \( Y = \left[ \begin{array}{cc} a &{} b \\ b &{} -a \end{array}\right] \in \textbf{M}_{2}\). The relation \( \Lambda X = X \Lambda ^T + Y \) together with \( \beta > 0 \) implies \( a = b = 0 \), \( z = y \) and \( t = -x \). On the other hand, if \( X \in \mathcal {I} ( \Lambda ) \) then \( \Lambda X = X \Lambda ^T \), hence \( Y = 0 \). \(\square \)
Theorem 5
Let \(J _n ( \Lambda )\) be a Jordan \(\Lambda \)-block of the first kind with complex conjugate eigenvalues represented by \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \). Then
Proof
For \( n = 1 \) see Lemma 4. For \( n = 2 \), let
where \( X, Y, Z, T \in \textbf{M}_{2}\). The intertwining relation \( J _2 ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _2 ( \Lambda ) \) implies
The first equation of (57) implies that \( T \in \mathcal {I} ( \Lambda ) \). Applying Lemma 5 to the second and third equations of (57), we obtain \(Y, Z \in \mathcal {I} ( \Lambda ) \) and \( T = 0 \). Using Lemma 5 in the last equation of (57) we find that \(X \in \mathcal {I} ( \Lambda ) \) and \( Z = Y \). Thus
Now, assume that the property is true for n and let us prove that it is satisfied for \(n + 1\).
\( J _{n + 1} ( \Lambda ) \) can be partitioned as follows:
where \(E _1 = \left[ \begin{array}{cccc} I_2&0&\cdots&0 \end{array}\right] \in \textbf{M}_{2\times 2n}\). Let
where \( X \in \textbf{M}_{2n}\), \(Y = \left[ \begin{array}{ccc} Y_1&\cdots&Y_n \end{array}\right] , Z ^T = \left[ \begin{array}{ccc} Z_1^T&\cdots&Z_n^T \end{array}\right] \), and all matrices \(T, Y_1, \ldots , Y_n, \) \( Z_1, \ldots , Z_n\) belong to \(\textbf{M}_{2}\).
If \( J _{n + 1} ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _{n + 1} ( \Lambda ) \) then
From the first equation of (61) we obtain \( X \in \mathcal {I} ( J _n ( \Lambda ) ) \). By the induction hypothesis we can write
The second and third equations of (61) imply that
Using Lemma 5 for (63), one obtains \( X_n = 0, Z_n = Y_n = X_{n-1}, \ldots , Z_3 = Y_3 = X_2, Z_2 = Y_2 = X_1 \) and \( Z_1, Y_1 \in \mathcal {I} ( \Lambda ) \). Then, applying Lemma 5 to the last equation of (61) gives \( Z _1 = Y _1 \). Therefore,
Finally, it is obvious that \(\mathfrak {X} ^T = \mathfrak {X} \). \(\square \)
Lemma 6
Let m and n be two positive integers such that \( m < n \), and let \( X \in \textbf{M}_{2m\times 2n}\). Then
Proof
Let
where \( X _{i j} \in \textbf{M}_{2}\) for all \( i \in \{ 1, \ldots , m \} \) and \( j \in \{ 1, \ldots , n \} \). The intertwining relation \( J _m ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _n ( \Lambda ) \) implies the equations
for each \( i \in \{ 1, \ldots , m - 1 \}\) and \( j \in \{ 1, \ldots , n - 1 \} \). The first equation of (67) implies \( X _{m n} \in \mathcal {I} ( \Lambda ) \). Moreover, applying Lemma 5 to the second and third equations of (67), one obtains that \(X _{1 n} \in \mathcal {I} ( \Lambda ), X _{2 n} = \cdots = X _{m n} = 0 \) and \( X _{m 1} \in \mathcal {I} ( \Lambda ), X _{m 2} = \cdots = X _{m n} = 0 \), respectively. From now on we will only use the last equation of (67). Let us write \( X _n \) instead of \( X _{1 n} \). For \( j = n - 1 \), taking into account Lemma 5, we get \( X _{1, n - 1} \in \mathcal {I} ( \Lambda ), X _{2, n - 1} = X _n, X _{3, n - 1} = \cdots = X _{m, n - 1} = 0 \). Also, let us write \( X _{n - 1 } \) instead of \( X _{1, n - 1} \). Proceeding analogously, it turns out that
where \( X _{i + j - 1} = X _{i j} \) with \( i + j \le n + 1 \). However, we found earlier that only \( X _m \) is non-zero in the last row. Then, \( X _{m + 1} = \cdots = X _n = 0 \), so that
On the other hand, we may partition \( J _n ( \Lambda ) \) as
where
Let \( X \in \textbf{M}_{2m}\) and \( \mathfrak {X} = \left[ \begin{array}{cc} X&0 \end{array}\right] \in \textbf{M}_{2m \times 2n}\). If \( X \in \mathcal {I} ( J _m ( \Lambda ) ) \) then \( J _m ( \Lambda ) X = X J ^T _m ( \Lambda ) \), hence \( J _m ( \Lambda ) \mathfrak {X} = \mathfrak {X} J ^T _n ( \Lambda ) \). \(\square \)
Lemma 7
Let m and n be two positive integers such that \( m > n \), and let \( X \in \textbf{M}_{2m \times 2n}\). Then
Proof
This can be proved in a similar way to the proof of Lemma 6. \(\square \)
Theorem 6
Let \( n _1, \ldots , n _m \) be positive integers such that \( n = n _1 + \ldots + n _m \), and let \( J _{ n _1, \ldots , n _m } ( \Lambda ) \in \textbf{M}_{2n}\) be a Jordan \(\Lambda \)-block of the second kind with complex conjugate eigenvalues represented by \(\Lambda = \left[ \begin{array}{cc} \alpha &{} -\beta \\ \beta &{} \alpha \end{array} \right] \). Then every matrix \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \Lambda ) )\) is a block matrix of the form
where for each \(i, j \in \{ 1, \ldots , m\} \), \(X _{i j} \in \textbf{M}_{2n_i \times 2n_j}\) satisfies \(X _{j i} = X ^T _{i j} \) and is of the following form:
-
(i)
If \(n_i = n_j\), then \(X_{ij} \in \mathcal {I}( J _{n _i} ( \Lambda ) )\).
-
(ii)
If \(n_i < n_j\), then \(X_{ij} = \left[ \begin{array}{cc} Y_{ij}&0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_i} (\Lambda ) )\).
-
(iii)
If \(n_i > n_j\), then \(X_{ij} = \left[ \begin{array}{c} Y_{ij} \\ 0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}(J_{n_j} (\Lambda ) )\).
Proof
Let \( i, j \in \{ 1, \ldots , m \}\). If \(X \in \mathcal {I}( J _{ n _1, \ldots , n _m } ( \Lambda ) )\) then \( J _{ n _1, \ldots , n _m } ( \Lambda ) X = X J ^T _{ n _1, \ldots , n _m } ( \Lambda ) \) and \( X ^T = X \), hence \( J _{n _i} ( \Lambda ) X _{ij} = X _{ij} J ^T _{n _j} ( \Lambda ) \) and \( X _{ji} = X ^T _{ij} \). It is obvious that \( X _{ij} \in \mathcal {I}( J _{n _i} ( \Lambda ) )\) for \(n _i = n _j\). If \(n _i < n _j\), by Lemma 6 we have \(X_{ij} = \left[ \begin{array}{cc} Y _{ij}&0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}( J_{n_i} ( \Lambda ) )\). Using Lemma 7 we find that \( X _{ij} = \left[ \begin{array}{c} Y _{ij} \\ 0 \end{array}\right] \) with \(Y_{ij} \in \mathcal {I}( J_{n_j} ( \Lambda ) )\) for \(n_i > n_j\). \(\square \)
Theorem 7
Let J be a generalized Jordan matrix as in Definition 7. Then \(\mathcal {I}( J )\) is the set of all matrices \( \textrm{diag}\left[ X_1, \ldots , X_p, Y _1, \ldots , Y_q \right] \) such that \(X_i \in \mathcal {I}(J_{m ^i _1, \ldots , m ^i _{r _i}}(\lambda _i )) \) for all \(i \in \{ 1, \ldots , p \}\), and \(Y_j \in \mathcal {I}(J_{n ^j _1, \ldots , n ^j _{s _j}}(\Lambda _j )) \) for all \(j \in \{ 1, \ldots , q \}\).
Proof
Let \(i,j \in \{ 1, \ldots , p \} \) and \(k,l \in \{ 1, \ldots , q \}\). Let
where \(X _{i j} \in \textbf{M}_{m ^i \times m ^j}\), \(Y _{k l} \in \textbf{M}_{2n^k \times 2n^l}\), \(Z _{i l} \in \textbf{M}_{m^i \times 2n^l}\) and \( T _{k j} \in \textbf{M}_{2n^k \times m^j}\). If \(J \mathfrak {X} = \mathfrak {X} J ^T\), then
Since the Jordan matrices do not have common eigenvalues, by Sylvester's theorem on linear matrix equations [1, 4] we have \( X_{i j} = 0 \) for \(i \ne j\), \(Y _{k l} = 0\) for \(k \ne l\), \(Z _{i l} = 0\) and \( T _{k j} = 0 \). Furthermore, if \( \mathfrak {X} \) is symmetric, \( X _{ii} \) and \( Y _{k k} \) are also symmetric, so that \(X _{i i} \in \mathcal {I}( J _{m ^i _1, \ldots , m ^i _{r _i}} ( \lambda _i ) )\) and \(Y _{k k} \in \mathcal {I}( J _{n ^k _1, \ldots , n ^k _{s _k}} ( \Lambda _k ) )\). \(\square \)
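The step invoking Sylvester's theorem can be illustrated numerically: when two matrices share no eigenvalue, the only solution of \(A X = X B\) is \(X = 0\). A sketch assuming NumPy (with row-major vectorization of X; the matrices A, B are arbitrary examples with disjoint spectra):

```python
import numpy as np

# A has eigenvalue 1 (twice), B has eigenvalues 2 and 3: no common eigenvalue.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[2.0, 0.0], [0.0, 3.0]])

# A X = X B  <=>  (A (x) I - I (x) B^T) vec(X) = 0, with row-major vec(X)
M = np.kron(A, np.eye(2)) - np.kron(np.eye(2), B.T)
assert np.linalg.matrix_rank(M) == 4   # trivial kernel: the only solution is X = 0
```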
Theorem 8
For any \(x _i \in \mathbb {R}\), \(i \in \{ 1, \ldots , n \}\), the following determinant formula holds:
Proof
Let
be the exchange matrix. Using that \( \det K _n = (-1) ^{n (n-1)/2} \) we find
\(\square \)
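The determinant of the exchange matrix used in the proof can be confirmed numerically (a sketch assuming NumPy):

```python
import numpy as np

def exchange(n):
    """Exchange matrix K_n: ones on the antidiagonal, zeros elsewhere."""
    return np.fliplr(np.eye(n))

# det K_n = (-1)^{n(n-1)/2}, the sign of the order-reversing permutation
for n in range(1, 9):
    det = round(np.linalg.det(exchange(n)))
    assert det == (-1) ** (n * (n - 1) // 2)
```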
Theorem 9
Let \( X _i \in \mathcal {I}(Z)\) for \(i \in \{ 1, \ldots , n \}\). The following determinant formula holds:
Proof
By means of the properties of the determinants we have
Then,
\(\square \)
6 Computing One-Dimensional Subspaces
Now we will apply the properties of Jordan matrices deduced in the last section to determine the form of the matrix g.
Theorem 10
Let \( \lambda \in \mathbb {R} \), and let \( J _n ( \lambda ) \) be a Jordan cell. Suppose \(g \in \textbf{Sym}_n\) is a matrix function such that \(g_{, \xi } = J _n ( \lambda ) g\). Then
where
and \( C _i \) is constant for each \( i = 1, \ldots , n \).
Proof
Applying \( g = g^T \) to \(g_{, \xi } = J _n ( \lambda ) g\) we get \( J _n ( \lambda ) g= g J ^T _n ( \lambda ) \), then \(g\in \mathcal {I} ( J _n ( \lambda ) ) \). By Theorem 3, g has the form given in (84). From \(g_{, \xi } = J _n ( \lambda ) g\) we obtain
Integrating successively we get (85).\(\square \)
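Theorem 10 can be checked by integrating \(g_{,\xi} = J_n(\lambda) g\) in closed form, \(g(\xi) = e^{\xi J_n(\lambda)} g(0)\) with \(g(0) \in \mathcal {I}(J_n(\lambda))\). The sketch below (assuming NumPy; `expm_cell` is an illustrative helper exploiting the nilpotent part of the Jordan cell) verifies that the solution stays symmetric and in \(\mathcal {I}(J_n(\lambda))\):

```python
import numpy as np
from math import factorial

def jordan_cell(lam, n):
    return lam * np.eye(n) + np.diag(np.ones(n - 1), 1)

def expm_cell(lam, n, xi):
    """Exact exponential of xi*J_n(lam): e^{lam xi} sum_{k<n} (xi N)^k / k!,
    where N is the nilpotent superdiagonal part."""
    N = np.diag(np.ones(n - 1), 1)
    S = sum((xi ** k) * np.linalg.matrix_power(N, k) / factorial(k)
            for k in range(n))
    return np.exp(lam * xi) * S

n, lam = 3, 0.25
J = jordan_cell(lam, n)
C = np.array([[3.0, 2.0, 1.0],    # example constant matrix in I(J_3(lam)):
              [2.0, 1.0, 0.0],    # equal antidiagonals, zeros below the
              [1.0, 0.0, 0.0]])   # main antidiagonal

for xi in (0.0, 0.5, 2.0):
    g = expm_cell(lam, n, xi) @ C     # solves g_{,xi} = J g, g(0) = C
    assert np.allclose(g, g.T)                # stays symmetric
    assert np.allclose(J @ g, g @ J.T)        # stays in I(J)
```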
Theorem 11
Let \(n_1, \ldots , n_m \) be positive integers such that \( n = n_1 + \ldots + n_m \). Let \( \lambda \in \mathbb {R} \), and let \( J _{ n _1, \ldots , n _m } ( \lambda ) \in \textbf{M}_{n}\) be a Jordan matrix. If \(g\in \textbf{Sym}_n\) is a matrix function such that \(g_{, \xi } = J _{ n _1, \ldots , n _m } ( \lambda ) g\), then
where for each \(i, j \in \{ 1, \ldots , m\} \), the matrix \(X_{i j}\) satisfies \(X_{i j}^T = X_{j i}\) and is defined as follows:
-
(i)
if \(n_i = n_j\) then \(X_{i j} = g_{n_i} ( \lambda ) \),
-
(ii)
if \(n_i < n_j\) then \(X_{i j} = \left[ \begin{array}{cc} g_{n_i} ( \lambda )&0 \end{array}\right] \),
-
(iii)
if \(n_i > n_j\) then \(X_{i j} = \left[ \begin{array}{c} g_{n_j} ( \lambda ) \\ 0 \end{array}\right] \),
where \(g_{n_i} ( \lambda ) \) is defined as in Theorem 10.
Proof
Let \( i,j \in \{ 1, \ldots , m \} \). Applying \(g = g^T \) to \(g_{, \xi } = J_{n_1, \ldots , n_m} ( \lambda ) g\) we get \(J_{n_1, \ldots , n_m} (\lambda ) g = g J^T _{n_1, \ldots , n_m } (\lambda ) \), then \(g\in \mathcal {I} (J_{ n_1, \ldots , n_m } (\lambda ) ) \). By Theorem 4,
where \(X_{i j} \in \textbf{M}_{n_i \times n_j}\) satisfies \( X_{j i} = X_{i j}^T \) and is of the following form:
-
(i)
If \(n _i = n _j\) then \(X_{ij} \in \mathcal {I}( J_{n_i} (\lambda ) )\).
-
(ii)
If \(n _i < n _j\) then \(X_{ij} = \left[ \begin{array}{cc} Y_{n_i}&0 \end{array}\right] \) with \(Y_{n_i} \in \mathcal {I}( J_{n_i} (\lambda ) )\).
-
(iii)
If \(n _i > n _j\) then \(X_{ij} = \left[ \begin{array}{c} Y_{n_j} \\ 0 \end{array}\right] \) with \(Y_{n_j} \in \mathcal {I}( J_{n_j} (\lambda ) )\).
From \(g_{, \xi } = J_{ n_1, \ldots , n_m } (\lambda ) g\) we have \(X_{i j , \xi } = J_{n_i} (\lambda ) X_{i j}\). Observe that \( X_{j i, \xi } = ( J_{n_i} (\lambda ) X_{i j} )^T = X_{j i} J^T_{n_i} (\lambda ) = J_{n_j} (\lambda ) X_{j i} \). By Theorem 10 we obtain
-
(i)
If \(n_i = n_j\), then \(X_{i j} = g_{n_i} ( \lambda )\).
-
(ii)
If \(n_i < n_j\), then \(X_{i j , \xi } = J_{n_i} (\lambda ) X_{i j} \) implies \(Y_{n_i , \xi } = J_{n_i} (\lambda ) Y_{n_i }\), so that \(Y_{n_i } = g_{n_i} ( \lambda )\). Therefore \(X_{i j} = \left[ \begin{array}{cc} g_{n_i} (\lambda )&0 \end{array}\right] \).
-
(iii)
If \(n_i > n_j\), then \(X_{i j , \xi } = J_{n_i} (\lambda ) X_{i j} = X_{i j} J_{n_j}^T (\lambda ) \) implies \(Y_{n_j , \xi } = Y_{n_j } J_{n_j}^T (\lambda ) = J_{n_j} (\lambda ) Y_{n_j } \), so that \(Y_{n_j } = g_{n_j} ( \lambda )\). Hence \(X_{i j} = \left[ \begin{array}{c} g_{n_j}( \lambda ) \\ 0 \end{array}\right] \).
\(\square \)
Theorem 12
For any Jordan \(\Lambda \)-block of the first kind \(J_n (\Lambda )\), if \(g\in \textbf{Sym}_{2n}\) is a matrix function such that \(g_{, \xi } = J_n (\Lambda ) g\), then
where
and \(C_l\), \(D_l\) are constant for \(l = 1, \ldots , n\).
Proof
Let \( i,j \in \{ 1, \ldots , n \} \). Applying \(g = g^T\) to \(g_{, \xi } = J_n (\Lambda )g\) we get \(J_n (\Lambda ) g = g J^T_n (\Lambda )\), then \(g \in \mathcal {I} (J_n (\Lambda ))\). By Theorem 5 we can express
where
From \(g_{, \xi } = J_n ( \Lambda ) g\) we have
Integrating successively we get
Using
we obtain (90). \(\square \)
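For a single \(\Lambda\)-block (\(n = 1\)) the integration can be done in closed form, since \(e^{\xi\Lambda} = e^{\alpha\xi}\) times a rotation by \(\beta\xi\); this is the origin of the \(e^{\alpha\xi}(C\cos\beta\xi + D\sin\beta\xi)\) entries. A numerical sketch (assuming NumPy; the parameters and initial data a, b are arbitrary examples):

```python
import numpy as np

alpha, beta = 0.4, 1.5
Lam = np.array([[alpha, -beta], [beta, alpha]])
a, b = 2.0, -1.0
g0 = np.array([[a, b], [b, -a]])             # g(0) in I(Lambda), cf. Lemma 4

for xi in (0.3, 1.0, 2.2):
    # exp(xi*Lam) = e^{alpha xi} * (rotation by beta*xi), in closed form
    c, s = np.cos(beta * xi), np.sin(beta * xi)
    E = np.exp(alpha * xi) * np.array([[c, -s], [s, c]])
    g = E @ g0                               # solves g_{,xi} = Lam g
    assert np.allclose(g, g.T)               # the solution stays symmetric
    # each entry is an e^{alpha xi}(C cos + D sin) combination:
    assert np.allclose(g[0, 0], np.exp(alpha * xi) * (a * c - b * s))
```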
Theorem 13
Let \(n_1, \ldots , n_m \) be positive integers such that \( n = n_1 + \ldots + n_m \), and let \( J_{ n_1, \ldots , n_m } ( \Lambda ) \in \textbf{M}_{2 n}\) be a Jordan \(\Lambda \)-block of the second kind. If \(g\in \textbf{Sym}_{2n}\) is a matrix function such that \(g_{, \xi } = J_{ n_1, \ldots , n_m } (\Lambda ) g\), then
The matrices \(X_{i j}\) satisfy \(X_{i j}^T = X_{j i}\) and are defined as follows:
-
(i)
if \(n_i = n_j\), then \(X_{i j} = g_{n_i} ( \Lambda )\),
-
(ii)
if \(n_i < n_j\), then \(X_{i j} = \left[ \begin{array}{cc} g_{n_i} ( \Lambda )&0 \end{array}\right] \),
-
(iii)
if \(n_i > n_j\), then \(X_{i j} = \left[ \begin{array}{c} g_{n_j} ( \Lambda ) \\ 0 \end{array}\right] \),
where for each \(i, j \in \{ 1, \ldots , m\} \), \(g_{n_i} ( \Lambda ) \) is defined as in Theorem 12.
Proof
The proof is similar to that of Theorem 11. \(\square \)
Theorem 14
Let J be a generalized Jordan matrix as in Definition 7 and \(g\in \textbf{Sym}_{m+2n}\) a matrix function such that \(g_{, \xi } = J g\). Then
where \(g_{{m^i_1} , \ldots , {m^i_{r_i}}} ( \lambda _i )\) and \(g_{{n^k_1} , \ldots , {n^k_{s_k}}} ( \Lambda _k ) \) are the functions defined as in Theorems 11 and 13, respectively, for each \(i \in \{ 1, \ldots , p\} \) and \(k \in \{ 1, \ldots , q \} \).
Proof
Applying \(g = g^T \) to \(g_{, \xi } = J g\) we get \(J g = g J^T \), then \(g\in \mathcal {I} (J)\). By Theorem 7 we have \( g = \textrm{diag}\left[ X _1 ( \xi ), \ldots , X _p ( \xi ), Y _1 ( \xi ), \ldots , Y _q ( \xi ) \right] \), where \( X _i ( \xi ) \in \textbf{M}_{m ^i} \) and \( Y _k ( \xi ) \in \textbf{M}_{2 n ^k} \) are matrix functions for \( i \in \{ 1, \ldots , p \} \) and \( k \in \{ 1, \ldots , q \} \). The linear differential equation \(g_{, \xi } = J g\) implies \(X_{i ,\xi } = J_{m^i_1, \ldots , m^i_{r_i}} (\lambda _i ) X_i\) and \(Y_{k ,\xi } = J_{n^k_1, \ldots , n^k_{s_k}} (\Lambda _k ) Y_k \). By Theorems 11 and 13 we get \( X_i = g_{{m^i_1} , \ldots , {m^i_{r_i}}} ( \lambda _i ) \) and \(Y_k = g_{{n^k_1} , \ldots , {n^k_{s_k}}} ( \Lambda _k ) \), respectively, for each \(i \in \{ 1, \ldots , p\} \) and \(k \in \{ 1, \ldots , q\} \). \(\square \)
7 Equivalence Classes for the Matrix A
In this section we review some facts from linear algebra which allow us to describe the similarity equivalence classes of the matrix \(A\in \mathfrak {sl}(n,\mathbb {R})\) from Section 2; recall that A is a real traceless matrix which satisfies \(Ag=gA^T\).
Definition 8
A real square matrix is non-derogatory if its minimal polynomial and characteristic polynomial are equal.
Definition 9
Let
be a polynomial with \(a_i \in \mathbb {R}\) for \(i \in \{ 1, \ldots , n \} \). The matrix
is the companion matrix of the polynomial \( p(\lambda ) \). The matrices of the form (99) are called natural normal cells.
Theorem 15
(from [4]) Let A be a real square matrix with characteristic polynomial \( p ( \lambda ) \). If A is non-derogatory, then A is similar to the companion matrix of \( p ( \lambda ) \).
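For a concrete (hypothetical) cubic this can be verified with SymPy; the last-column companion convention used below may differ in ordering from form (99):

```python
import sympy as sp

lam = sp.symbols('lambda')

# A hypothetical cubic with three distinct real roots: (x-1)(x+2)(x-3)
p = lam**3 - 2*lam**2 - 5*lam + 6

# Companion matrix in the last-column convention (form (99) may
# order the coefficients differently)
C = sp.Matrix([[0, 0, -6],
               [1, 0,  5],
               [0, 1,  2]])

# The characteristic polynomial of C recovers p, and the three
# distinct eigenvalues force minimal polynomial = characteristic
# polynomial, so C is non-derogatory and Theorem 15 applies
assert sp.expand(C.charpoly(lam).as_expr() - p) == 0
assert set(C.eigenvals()) == {1, -2, 3}
```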
Definition 10
Let \(n_1, \ldots , n_m \) be positive integers such that \(n = n_1 + \ldots + n_m\). A matrix of the form
is called natural normal form if
-
(i)
\(A_i \in \textbf{M}_{n_i}\) is a natural normal cell with characteristic polynomial \(p_i(\lambda ) \) for each \(i \in \{ 1, \ldots , m \} \),
-
(ii)
for every \(j \in \{ 1, \ldots , m-1\} \), the polynomial \(p_j(\lambda ) \) is a divisor of \(p_{j+1}(\lambda ) \).
Theorem 16
(from [1]) Every real square matrix is similar to a unique natural normal form.
Definition 11
Let
be a polynomial matrix and \( D _k ( \lambda ) \) the greatest common divisor of all minors of order k in P for \( k \in \{ 1, \ldots , n \} \). The invariant factors of P are defined as follows:
If all minors of order k are equal to zero, then \( D _k (\lambda ) = 0 \).
Lemma 8
(from [5]) Let \( A \in \textbf{M}_{n} \) be the companion matrix of the polynomial \( p ( \lambda ) \). The invariant factors of the matrix A are equal to \( 1, \ldots , 1, p ( \lambda ) \), where the number of 1’s equals \( n - 1 \).
Lemma 9
(from [5]) Let A be the matrix of Definition 10. The invariant factors of the matrix A are equal to \( 1, \ldots , 1, p_1 ( \lambda ), \ldots , p_m ( \lambda ) \), where the number of 1’s is \( n - m \).
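Definition 11 and Lemma 8 can be checked computationally: the gcd \(D_k(\lambda)\) of the \(k\times k\) minors of \(\lambda I - A\) yields the invariant factors as successive quotients. A sketch for a hypothetical quadratic \(p(\lambda ) = \lambda ^2 - \lambda - 2\):

```python
from functools import reduce
from itertools import combinations

import sympy as sp

lam = sp.symbols('lambda')

def invariant_factors(A):
    """Invariant factors of lam*I - A via gcds of k-by-k minors."""
    n = A.shape[0]
    P = lam*sp.eye(n) - A
    D = [sp.Integer(1)]                      # D_0 = 1 by convention
    for k in range(1, n + 1):
        minors = [P[list(rows), list(cols)].det()
                  for rows in combinations(range(n), k)
                  for cols in combinations(range(n), k)]
        D.append(reduce(sp.gcd, minors))
    # the k-th invariant factor is D_k / D_{k-1}
    return [sp.cancel(D[k] / D[k - 1]) for k in range(1, n + 1)]

# Companion matrix of the hypothetical p(lambda) = lambda**2 - lambda - 2
A = sp.Matrix([[0, 2],
               [1, 1]])

# By Lemma 8 the invariant factors are 1, ..., 1, p(lambda)
assert invariant_factors(A) == [1, lam**2 - lam - 2]
```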
Theorem 17
(from [1]) Two real square matrices are similar if and only if they have the same invariant factors.
Definition 12
Let n and m be positive integers such that \( 1 < m \le n \).
Theorem 18
Let n and m be positive integers such that \(1< m < n\). The equivalence classes of the matrix \( A \in \mathfrak {sl}(n,\mathbb {R} )\) are as follows:
and \([A] _{(n_1, \ldots , n_m)}\), which is the set of matrices \(\textrm{diag}\left[ A_1, \ldots , A_m \right] \), where the matrices \(A_1, \ldots , A_m\) satisfy the following:
-
\( A_i \in \textbf{M}_{n_i} \) are natural normal cells for \( i \in \{ 1, \ldots , m \} \),
-
\( (n_1, \ldots , n_m) \in N_{n, m} \),
-
\( p_{A _j} ( \lambda ) \) is a divisor of \(p_{A_{j+1}}(\lambda )\) for \(j \in \{ 1, \ldots , m-1 \} \),
-
\( {\textrm{tr}}A_1 + \ldots + {\textrm{tr}}A_m = 0 \) .
Proof
Let \( X \in \mathfrak {sl} ( n, \mathbb {R} ) \). By Theorems 15 and 16 we have that if X is non-derogatory, then X is similar to a natural normal cell; otherwise, X is similar to a natural normal form. Suppose that X is similar to A.
First case: A has the form (99). Since \({\textrm{tr}}X = 0 \), we have \( {\textrm{tr}}A = 0 \), so that \( a _{n - 1} = 0 \).
Second case: there exists an integer \( m \in \{ 2, \ldots , n \} \) such that A has the form \( \textrm{diag}\left[ A_1, \ldots , A_m \right] \), where the \( A_i \) are natural normal cells with characteristic polynomials \( p _{A _i} ( \lambda ) \) of degree \( n_i \) for \(i \in \{ 1, \ldots , m \} \) and \( n = n_1 + \ldots + n _m \). Since \( p _{A _j} ( \lambda ) \) is a divisor of \( p _{A _{j + 1} } ( \lambda ) \), we have \( n _j \le n _{j + 1} \) for each \( j \in \{ 1, \ldots , m-1\} \), so that \( ( n_1, \ldots , n_m ) \in N _{n, m} \). Using the linearity of the trace we get \( {\textrm{tr}}A_1 + \ldots + {\textrm{tr}}A_m = 0 \); in particular, for \( m = n \) we have \( A = 0 _n \).
By Theorem 17 we find that X has the same invariant factors as A. This means that the equivalence classes are determined by the invariant factors of A. Therefore, A is a representative of the equivalence class to which X belongs.\(\square \)
8 Example: One-Dimensional \(SL(5,\mathbb {R})\)-Subspaces
As an example to illustrate our results we will find the solutions for g considering A as a member of the Lie algebra \(\mathfrak {sl} ( 5, \mathbb {R} ) \). For this, the following steps must be performed:
-
(i)
compute the sets \( N_{n, m} \),
-
(ii)
find the equivalence classes for A,
-
(iii)
obtain the real Jordan form for every equivalence class,
-
(iv)
determine g for each real Jordan form.
The method can be used for \( n \ge 2 \). It is easy to find the sets
Hence, we have six equivalence classes: \( \mathfrak {A} = [A] _1 \), \( \mathfrak {B} = [A] _{( 2, 3 )} \), \( \mathfrak {C} = [A] _{( 1, 4 )} \), \( \mathfrak {D} = [A] _{( 1, 2, 2 )} \), \( \mathfrak {E} = [A] _{( 1, 1, 3 )} \) and \( \mathfrak {F} = [A] _{( 1, 1, 1, 2 )} \).
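The tuples labelling these classes are just the nondecreasing partitions of \(n = 5\); a small sketch, assuming \(N_{n,m}\) denotes the set of m-tuples \((n_1 \le \ldots \le n_m)\) of positive integers summing to n:

```python
def partitions(n, m, lo=1):
    """All nondecreasing m-tuples of positive integers summing to n."""
    if m == 1:
        return [(n,)] if n >= lo else []
    return [(k,) + rest
            for k in range(lo, n // m + 1)
            for rest in partitions(n - k, m - 1, k)]

# For n = 5, the tuples with 1 < m < 5 label the non-trivial classes
# (m = 1 gives the class [A]_1 and m = 5 forces A = 0)
assert partitions(5, 2) == [(1, 4), (2, 3)]
assert partitions(5, 3) == [(1, 1, 3), (1, 2, 2)]
assert partitions(5, 4) == [(1, 1, 1, 2)]
```

Together with \([A]_1\), these five tuples reproduce the six classes listed above.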
In what follows, we explain in detail how to determine \( \mathfrak {B} \); the other five equivalence classes can be obtained in a similar way, and all classes are shown in Table 1. Let \( A \in \mathfrak {B} \), then A has the form \( \textrm{diag}\left[ A_1, A_2 \right] \), where \( A_1 \in \textbf{M}_{2} \) and \( A_2 \in \textbf{M}_{3} \) are natural normal cells. By Lemma 9 the invariant factors of the matrix A are given as \( 1, 1, 1, p_{A_1} ( \lambda ), p_{A_2} ( \lambda ) \), where \( p_{A_1} ( \lambda ) \) and \( p_{A_2} ( \lambda ) \) are the characteristic polynomials of \( A_1 \) and \( A_2 \), of degrees 2 and 3, respectively. Now, assume that \( p_{A_1} ( \lambda ) = \lambda ^2 - b \lambda - a \), where \( a, b \in \mathbb {R} \). Since \( p_{A_1} ( \lambda ) \) is a divisor of \( p_{A_2} ( \lambda ) \), we can suppose, without loss of generality, that \( p_{A_2} ( \lambda ) = ( \lambda - c ) p_{A_1} ( \lambda ) \). From the characteristic polynomial of A, \( p_{A} ( \lambda ) = p_{A_1} ( \lambda ) p_{A_2} ( \lambda ) \), we find \( {\textrm{tr}}A = 2 b + c = 0 \), so that \( p_{A_2} ( \lambda ) = ( \lambda + 2 b ) ( \lambda ^2 - b \lambda - a ) \). The matrices \( A_1 \) and \( A_2 \) are also the companion matrices of \( p_{A_1} ( \lambda ) \) and \( p_{A_2} ( \lambda ) = \lambda ^3 + b \lambda ^2 - d \lambda - 2 a b \), respectively, hence
where \( d = a + 2 b ^2 \).
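The trace and divisibility constraints on these two companion blocks can be checked symbolically; the last-column companion convention below is an assumption, since form (99) is not reproduced here:

```python
import sympy as sp

lam, a, b = sp.symbols('lambda a b')

p1 = lam**2 - b*lam - a              # p_{A_1}
p2 = sp.expand((lam + 2*b)*p1)       # p_{A_2} = (lambda + 2b) p_{A_1}

# Expanding gives p2 = lam^3 + b lam^2 - (a + 2b^2) lam - 2ab
assert sp.expand(p2 - (lam**3 + b*lam**2 - (a + 2*b**2)*lam - 2*a*b)) == 0

# Companion matrices (last-column convention, an assumption here)
A1 = sp.Matrix([[0, a],
                [1, b]])
A2 = sp.Matrix([[0, 0, 2*a*b],
                [1, 0, a + 2*b**2],
                [0, 1, -b]])
A = sp.diag(A1, A2)

assert sp.expand(A1.charpoly(lam).as_expr() - p1) == 0
assert sp.expand(A2.charpoly(lam).as_expr() - p2) == 0
assert A.trace() == 0                 # the sl(5, R) constraint
assert sp.rem(p2, p1, lam) == 0       # p_{A_1} divides p_{A_2}
```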
In order to obtain the real Jordan forms of \( \mathfrak {B} \), we use the fact that a quadratic polynomial with real coefficients has either two distinct real roots, one repeated real root, or a pair of complex conjugate roots. Hence we can rewrite
-
(i)
\( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) ^2 \), where \( r _1 \ne r _2 \)
-
(ii)
\( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ( \lambda - r _2 ) ( \lambda - r _3 ) \), where \( r _1, r _2, r _3 \) are pairwise distinct,
-
(iii)
\( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ^2 \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ^3 \).
-
(iv)
\( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ^2 \), \( p_{A_2} ( \lambda ) = ( \lambda - r _1 ) ^2 ( \lambda - r _2 ) \), where \( r _1 \ne r _2 \).
-
(v)
\( p_{A_1} ( \lambda ) = ( \lambda - r _1 ) ^2 + \theta ^2 \), \( p_{A_2} ( \lambda ) = ( ( \lambda - r _1 ) ^2 + \theta ^2 ) ( \lambda - r _2 ) \), where \( \theta > 0 \)
so that \( A_1 \) and \( A_2 \) are similar to
-
(i)
\( \textrm{diag}\left[ J _1 ( r_1 ), J _1 ( r_2 ) \right] \) and \( \textrm{diag}\left[ J _1 ( r_1 ), J _2 ( r_2 ) \right] \)
-
(ii)
\( \textrm{diag}\left[ J _1 ( r_1 ), J _1 ( r_2 ) \right] \) and \( \textrm{diag}\left[ J _1 ( r_1 ), J _1 ( r_2 ), J _1 ( r_3 ) \right] \)
-
(iii)
\( J _2 ( r_1 ) \) and \( J _3 ( r_1 ) \)
-
(iv)
\( J _2 ( r_1 ) \) and \( \textrm{diag}\left[ J _2 ( r_1 ), J _1 ( r_2 ) \right] \)
-
(v)
\( J _1 \left[ \begin{array}{cc} r_1 &{} -\theta \\ \theta &{} r_1 \end{array} \right] \) and \( \textrm{diag}\left[ J _1 \left[ \begin{array}{cc} r_1 &{} -\theta \\ \theta &{} r_1 \end{array} \right] , J _1 ( r_2 ) \right] \)
respectively. Therefore, A is similar to
-
(i)
\( \textrm{diag}\left[ J _{1,1} ( r_1 ), J _{1,2} ( r_2 ) \right] \)
-
(ii)
\( \textrm{diag}\left[ J _{1,1} ( r_1 ), J _{1,1} ( r_2 ), J _1 ( r_3 ) \right] \)
-
(iii)
\( J _{2,3} ( r_1 ) \)
-
(iv)
\( \textrm{diag}\left[ J _{2,2} ( r_1 ), J _1 ( r_2 ) \right] \)
-
(v)
\( \textrm{diag}\left[ J _{1,1} \left[ \begin{array}{cc} r_1 &{} -\theta \\ \theta &{} r_1 \end{array} \right] , J _1 ( r_2 ) \right] \)
Applying the condition \( {\textrm{tr}}A_1 + {\textrm{tr}}A_2 = 0 \) we get
-
(i)
\( r_1 = -3 q / 2, r_2 = q, q \ne 0 \)
-
(ii)
\( 2 r_1 + 2 r_2 + r_3 = 0 \)
-
(iii)
\( r_1 = 0 \)
-
(iv)
\( r_1 = q, r_2 = -4 q, q \ne 0 \)
-
(v)
\( r_1 = q, r_2 = -4 q, q \ne 0 \)
The real Jordan forms for every equivalence class of A are presented in Tables 2, 3, 4, 5, 6 and 7. Note that q is a real constant, and that r and \(\theta \), with or without indices, are real numbers.
Finally, we determine g for \( J _{2,3 } ( 0 ) \). By Theorem 11 we obtain \( g = \left[ \begin{array}{cc} g _{11} &{} g _{12} \\ g ^T _{12} &{} g _{22} \end{array} \right] \), where \( g_{12} = \left[ \begin{array}{cc} h&0 \end{array} \right] \in \textbf{M}_{2 \times 3} \); \( g _{11}, h \in \textbf{M}_{2} \) and \( g _{22} \in \textbf{M}_{3} \) are matrix functions given by Theorem 10. Thus
Tables 2, 3, 4, 5, 6 and 7 show the other solutions. Note that the letters A, B, C, D, E, with and without indices, denote real constants.
In general relativity, the Boyer-Lindquist coordinates are very important. They are defined as \(\rho = \sqrt{ r^2 - 2 m r + \sigma ^2 } \sin \theta \) and \(\zeta = ( r - m ) \cos \theta \), where m and \(\sigma \) are constant parameters. In these coordinates, the Laplace equation (28) transforms into
Some solutions of (111) can be found in [7]. As an example, we consider the case where the parameter \(\xi \) depends only on r and \(\sigma = 0\); then
where \(\gamma \) and \(\delta \) are real constants. For \(n = 2\), we choose \(A = \textrm{diag}\left[ \lambda , -\lambda \right] \); its corresponding matrix g is \(\textrm{diag}\left[ \epsilon e^{\lambda \xi } , - e^{-\lambda \xi }/\epsilon \right] \), where \(\lambda \) and \(\epsilon \) are real constants. Thus, \(g = \textrm{diag}\left[ -C \left( 1 - \frac{2 m}{r} \right) ^{-p} , \left( 1 - \frac{2 m}{r} \right) ^p /C \right] \), where \(p = - \frac{\lambda \gamma }{2m}\) and C is a real constant. Also, the differential equations (26) for the function f transform into
Solving them, we get
where D is a constant and
Therefore, an exact solution of the EFE is
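As a consistency check of this \(n = 2\) example, one can verify symbolically that the diagonal g solves \(g_{,\xi } = A g\) and that a logarithmic \(\xi (r)\) is harmonic in the radial sense; this is a sketch under our assumed normalization of \(\gamma \) (equations (26), (28) and (111) are not reproduced here):

```python
import sympy as sp

xi, lam, eps = sp.symbols('xi lambda epsilon', real=True)
r, m, gamma = sp.symbols('r m gamma', positive=True)

# g = diag[eps e^{lam xi}, -e^{-lam xi}/eps] solves g_{,xi} = A g
A = sp.diag(lam, -lam)
g = sp.diag(eps*sp.exp(lam*xi), -sp.exp(-lam*xi)/eps)
assert sp.simplify(g.diff(xi) - A*g) == sp.zeros(2)

# With sigma = 0 and xi = xi(r), the axisymmetric Laplace equation
# reduces to the radial ODE ((r^2 - 2 m r) xi')' = 0; a logarithmic
# xi solves it (the normalization of gamma here is our assumption)
xi_r = gamma*sp.log(1 - 2*m/r)
assert sp.simplify(sp.diff((r**2 - 2*m*r)*sp.diff(xi_r, r), r)) == 0
```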
9 Conclusions
The EFE are among the most interesting and complicated equations to solve in physics. Techniques to solve them in four dimensions have been developed in the past; one of the most successful relies on subspaces and subgroups. This method generates solutions of the 4-dimensional EFE on demand, in the sense that the Laplace equation provides the solutions for monopoles, dipoles, etc. In this work we used this technique to solve the (\(n+2\))-dimensional EFE in vacuum, reducing the final matrix equation to its Jordan normal form, which makes the equations much easier to solve. We obtained a large number of solutions of the EFE in terms of the Laplace parameter, such that each solution of the Laplace equation may yield a different solution of the EFE. One can combine the different solutions to obtain even more.
Data availability statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
References
Gantmacher, F.R.: The Theory of Matrices, vol. I. Chelsea Publishing Comp, New York (1959)
Green, M.B., Schwarz, J.H., Witten, E.: Superstring Theory. Cambridge University Press, Cambridge (1987)
Hall, B.: Lie Groups, Lie Algebras, and Representations: An Elementary Introduction. Graduate Texts in Mathematics. Springer International Publishing, (2015)
Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, (1985)
Matos, T., Rodriguez, G., Becerril, R.: Exact solutions of SL(N, R) invariant chiral equations one-dimensional and two-dimensional subspaces. J. Math. Phys. 33, 3521–3535 (1992)
Matos, T.: Ecuaciones de quiral en teorías de gravitación. Revista Mexicana de Física 35, 208–221, 02 (1989)
Matos, T.: Class of Einstein-Maxwell phantom fields: rotating and magnetized wormholes. Gen. Relativ. Gravit. 42(8), 1969–1990 (2010)
Matos, T., Nieto, J.A.: Topics on Kaluza-Klein theory. Rev. Mex. Fis. 39, S81–S131 (1993)
Matos, T., Wiederhold, P.: Principios Matemáticos para Ciencias Exáctas. México D.F, Colofón (2017)
Stephani, H., Kramer, D., MacCallum, M., Hoenselaers, C., Herlt, E.: Exact Solutions of Einstein's Field Equations. Cambridge University Press, Cambridge (2003)
Tu, L.W.: An Introduction to Manifolds. Springer, New York, NY (2010)
Acknowledgements
This work was partially supported by CONACyT México under grants A1-S-8742, 304001, 376127, 240512, FORDECYT-PRONACES grant No. 490769 and I0101/131/07 C-234/07 of the Instituto Avanzado de Cosmología (IAC) collaboration (http://www.iac.edu.mx/).
Author information
Contributions
All authors wrote and reviewed the manuscript.
Ethics declarations
I confirm that this work is original and has not been published elsewhere, nor is it currently under consideration for publication elsewhere. On behalf of all authors, Dr. Ignacio Abraham Sarmiento Alvarado states that there is no conflict of interest.
Competing interests
The authors declare no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Sarmiento-Alvarado, I.A., Wiederhold, P. & Matos, T. One-Dimensional Subspaces of the \(SL(n,\mathbb {R})\) Chiral Equations. Int J Theor Phys 62, 270 (2023). https://doi.org/10.1007/s10773-023-05520-8