Computing low-rank rightmost eigenpairs of a class of matrix-valued linear operators

In this article, a new method is proposed to approximate the rightmost eigenpair of certain matrix-valued linear operators in a low-rank setting. First, we introduce a suitable ordinary differential equation whose solution allows us to approximate the rightmost eigenpair of the linear operator. After analyzing the behaviour of its solution on the whole space, we project the ODE onto a low-rank manifold of prescribed rank and correspondingly analyze the behaviour of its solutions. For a general linear operator we prove that, under generic assumptions, the solution of the ODE converges globally to its leading eigenmatrix. The analysis of the projected operator is more subtle due to its nonlinearity; when A is self-adjoint, we are able to prove that the associated low-rank ODE converges (at least locally) to its rightmost eigenmatrix in the low-rank manifold, a property which appears to hold also in the more general case. Two explicit numerical methods are proposed, the second being an adaptation of the projector-splitting integrator proposed recently by Lubich and Oseledets. The numerical experiments show that the method is effective and competitive.


Introduction
In several applications, one is interested in computing eigensolutions of a matrix-valued linear operator A : R^{n×n} → R^{n×n}, X ↦ A(X). Examples of such operators are the following: (i) A(X) = AX + XB (Sylvester operator [5,14]); (ii) A(X) = Σ_i B_i X C_i; (iii) A(X) = A ∘ X (componentwise multiplication); (iv) combinations of the previous cases.
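As a concrete illustration, the three operator classes above can be sketched in a few lines; the matrices A, B and the coefficient pairs below are illustrative placeholders, not data from the paper.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def sylvester_op(X):
    """(i) A(X) = AX + XB."""
    return A @ X + X @ B

def sum_op(X, pairs):
    """(ii) A(X) = sum_i B_i X C_i for a list of coefficient pairs (B_i, C_i)."""
    return sum(Bi @ X @ Ci for Bi, Ci in pairs)

def hadamard_op(X):
    """(iii) A(X) = A ∘ X (componentwise multiplication)."""
    return A * X

# All three maps are linear in X and hence admit an n^2 x n^2 matrix representation.
X = rng.standard_normal((n, n))
assert np.allclose(sylvester_op(2 * X), 2 * sylvester_op(X))
assert np.allclose(sum_op(X, [(A, B)]), A @ X @ B)
```

Each map is linear, so all the spectral notions below (eigenvalues, matrix representation) apply to any of them.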
There has been considerable work on computing low-rank approximations of eigensolutions for symmetric problems. Most of these methods, however, are based on optimization and cannot be extended to nonsymmetric problems.
In the case where A is a vector-valued symmetric operator, a continuous-time approach similar to the one we consider here is discussed, e.g. by Absil in [1]. Recent papers addressing efficient methods to approximate eigenvectors (reshaped into low-rank eigenmatrices) of a symmetric matrix, or eigenmatrices of a symmetric operator, are discussed, e.g. in [9,12,13] and [15]; we also refer the reader to the extensive bibliography in these articles.
Our goal in this article is to approximate the leading eigenmatrix (that is, the eigenmatrix associated with the rightmost eigenvalue of the linear operator) in a suitable low-rank manifold, a restriction which is justified in several applications.
The paper is organized as follows. First, we state the framework; then in Section 2 we introduce a differential equation having the capability of approximating the leading eigenpair(s) of a linear matrix-valued operator. In Section 3, we analyze its equilibria and, in Section 4, we study the asymptotic behaviour of its solution and in particular convergence to equilibria. In Section 5, we project the ODE onto a low-rank manifold (with fixed rank) and analyze its properties. In Section 6, we propose two explicit schemes for its numerical integration. Finally, in Section 7, we show the effectiveness of the proposed method on a few examples. In particular, for the case of self-adjoint operators, we compare our algorithm to the well-known ALS method.

Framework
Let A be a real matrix-valued linear operator, A : R^{n×n} → R^{n×n}, that is, X ∈ R^{n×n} ⟹ A(X) ∈ R^{n×n}. We are interested in computing its eigenpairs, that is, solutions of A(X) = λX, X ≠ 0, (1) where X and λ are allowed to be complex-valued. We call a solution X of (1) an eigenmatrix of the operator A. Note that, as usual, the eigenvalues of A coincide with the eigenvalues of its matrix representation A ∈ R^{n²×n²}. We focus on computing either the rightmost eigenvalue λ₁ when this is real (and simple) or the rightmost complex-conjugate pair (which is generically unique), and make, a priori, the following hypothesis (as this is a common property of several applications) [7]: (H) X has quickly decaying singular values.
This motivates constraining the search for approximate eigensolutions to a low-rank manifold. In this article, we approach this problem by integrating a suitable system of ODEs in a low-rank manifold. This can be pursued efficiently by, e.g. a variant of the splitting integrator by Lubich and Oseledets [10].
We remark that we are not assuming that A is self-adjoint and are indeed mostly interested in the case when A is not self-adjoint.

A suitable ODE
For a pair of matrices A, B ∈ R^{n×n}, we let ⟨A, B⟩ = trace(Aᵀ B) denote the Frobenius inner product and ‖A‖ = ⟨A, A⟩^{1/2} the associated Frobenius norm.
We consider the following system of ODEs for general real initial data X(0) = X₀ with ‖X₀‖ = 1: Ẋ = A(X) − ⟨A(X), X⟩ X. (2) In the sequel we omit, when not necessary, the dependence on t.
Theorem 1 (Norm conservation) The solution of (2) has the property ‖X(t)‖ = 1 for all t ≥ 0. Proof Let ‖X(t)‖ = 1 at some instant t. A direct computation shows that (d/dt) ‖X(t)‖² = 2⟨X, Ẋ⟩ = 2(⟨A(X), X⟩ − ⟨A(X), X⟩ ‖X‖²) = 0. Then, by the initial condition, the property holds for all t.
Theorem 1 implies that the solution of (2) satisfies ‖X(t)‖ = 1 for all t. Hence, the solution of (2) remains on the unit sphere B = {X ∈ R^{n×n} : ‖X‖ = 1} and in particular remains bounded. In the following, we study its equilibria. We let α(X) = ⟨A(X), X⟩ denote the Rayleigh quotient.
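As a quick numerical sanity check, one can integrate the system with explicit Euler; the sketch below assumes (2) reads Ẋ = A(X) − ⟨A(X), X⟩X and uses, as an illustrative test case, the self-adjoint operator A(X) = AX + XA with A symmetric, whose rightmost eigenvalue is 2λ_max(A). Each Euler step is renormalized to remove the O(h) norm drift of the scheme (the exact flow conserves the norm).

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                    # symmetric, so the operator is self-adjoint

def op(X):
    return A @ X + X @ A             # A(X) = AX + XA

def alpha(X):
    return np.sum(op(X) * X)         # Rayleigh quotient alpha(X) = <A(X), X>

X = rng.standard_normal((n, n))
X /= np.linalg.norm(X)
h = 1e-3
for _ in range(100000):              # explicit Euler on (2), renormalized
    X = X + h * (op(X) - alpha(X) * X)
    X /= np.linalg.norm(X)

lam_max = 2 * np.linalg.eigvalsh(A).max()   # rightmost eigenvalue of the operator
print(alpha(X), lam_max)             # alpha(X(t)) approaches lam_max
```

For this self-adjoint test case, α(X(t)) increases toward the rightmost eigenvalue, consistent with the gradient-system interpretation discussed later.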

Related literature
For a vector-valued linear operator, that is, for a matrix A ∈ R^{n×n}, the ODE ẋ = Ax − ⟨Ax, x⟩x has been previously studied in the literature along with several variants (see, e.g. the works [2] and [3]). The symmetric case has received particular attention (see, e.g. [1]). In particular, [11] has shown that its solution is given by x(t) = e^{tA} x(0) / ‖e^{tA} x(0)‖, which allows for a direct investigation and also suggests a direct relation to the power method.

Numerical observation
Under the generic assumption that the rightmost eigenvalues of A are simple, we observe in our numerical experiments that when the rightmost eigenvalue λ₁ of A is real, then X(t) converges, as t → ∞, to an eigenmatrix associated with λ₁ (scaled to unit norm). On the other hand, when the rightmost eigenvalues are complex conjugate, the solution of (2) does not converge. Nevertheless, X(t) asymptotically approaches the two-dimensional real subspace spanned by the real and imaginary parts of V, where V, V̄ are the eigenmatrices associated with the rightmost eigenvalues λ₁, λ̄₁. This property can be exploited to compute λ₁ in a simple and inexpensive way. The aim of the next two sections is to explain the observed behaviour.

Equilibria
We characterize the real stationary points of (2); this is a variation of existing results for the vector-valued case.

Theorem 2 X ∈ B is an equilibrium of (2) if and only if X is an eigenmatrix of A associated with a real eigenvalue.

Proof If X is an eigenmatrix of A associated with a real eigenvalue, then the Rayleigh quotient α(X) ∈ R is the associated eigenvalue and the right-hand side of (2) vanishes. Vice versa, setting the right-hand side of (2) equal to zero, we get A(X) = α(X)X, which implies that X is an eigenmatrix of A and α(X) is the associated real eigenvalue. Since A is real and the eigenvalue is real, X necessarily has to be real.

Stability of equilibria
Let V ∈ B be an eigenmatrix of A. In order to analyze its stability, we set X = V + Δ with ‖Δ‖ small. Neglecting the second-order term and exploiting the property ‖X‖ = 1, we obtain the orthogonality relationship ⟨Δ, V⟩ = 0, (3) which means that the variation should belong to the tangent space of B at V.

Real rightmost eigenvalue
To study the stability of equilibria, we first consider the case where the rightmost eigenvalue λ₁ is real and simple.

Theorem 3
Assume that A has a unique rightmost eigenvalue λ₁, which is real. Let V ∈ B be an eigenmatrix associated with a simple eigenvalue λ ∈ R of A. Then, V is a stable equilibrium of (2) if and only if λ = λ₁.
Proof Consider the linearized system Δ̇ = J(V)[Δ], where J is the Jacobian of the right-hand side of (2) at V.
It is convenient to pass to the matrix representation A ∈ R^{n²×n²} of A. In this equivalent setting, we let v = col(V) ∈ R^{n²} and δ = col(Δ) ∈ R^{n²} denote the vectorizations of V and Δ, respectively. The orthogonality relationship (3) transfers directly to v and δ, ⟨δ, v⟩ = 0, so that we get the equivalent system δ̇ = J(v)δ, where J is the matrix representation of the Jacobian J. Calculating the Jacobian yields J(v) = A − α(v)I − v ∇α(v)ᵀ, (4) with I the n² × n² identity matrix, ∇α the gradient of α and α(x) = ⟨Ax, x⟩ (when referred to vectors x ∈ R^{n²}). Let Q be an orthonormal basis of S = span(v)^⊥, the subspace orthogonal to v. In order to describe the dynamics of δ, we have to restrict J(v) to S, that is, J|_S = Qᵀ J(v) Q = Qᵀ A Q − λI, where we have used Qᵀv = 0, which cancels the last term in (4). As a consequence, the spectrum of J|_S is given by {λ_j − λ : λ_j ≠ λ}, where λ₁, …, λ_{n²} denote the eigenvalues of A.
If λ = λ₁, then J|_S is Hurwitz-stable, that is, all its eigenvalues lie in the open left complex half-plane, which implies that δ(t) → 0 as t → ∞. On the contrary, if λ ≠ λ₁, then J|_S has at least one eigenvalue lying in the right complex half-plane and δ diverges for generic initial data.

The self-adjoint case
In the case when A is self-adjoint, a continuous-time approach has been considered in the literature (see, e.g. [1]) and the convergence to the largest eigenvalue can be obtained more simply by the fact that, in this case, the system of ODEs is a gradient system for the functional given by the Rayleigh quotient. We have, in fact, the following result.

Theorem 4 If A is self-adjoint, the Rayleigh functional α(X) increases monotonically along the solution of (2).

Proof By self-adjointness, (d/dt) α(X) = 2⟨A(X), Ẋ⟩. Replacing Ẋ by the vector field of the ODE gives (d/dt) α(X) = 2(‖A(X)‖² − ⟨A(X), X⟩²) ≥ 0, by the Cauchy–Schwarz inequality and the property that ‖X(t)‖ ≡ 1 along the flow of (2).
As a consequence, we have the following result.

Theorem 5 Let A be self-adjoint, assume that its largest eigenvalue λ₁ is simple, and let V₁, ‖V₁‖ = 1, denote an eigenmatrix associated with λ₁. If ⟨X₀, V₁⟩ ≠ 0, then the solution of (2) satisfies lim_{t→∞} X(t) = ±V₁.

Proof The result is a direct consequence of the fact that, by Theorem 4, (2) is a gradient system for the functional α(X); see also the discussion in the next section.

Diagonalization
Assume, generically, that the operator is diagonalizable (note that this assumption is not strictly necessary; it only simplifies the analysis). Then, expanding the solution in the basis of eigenmatrices V₁, …, V_{n²} of A, X(t) = Σ_{j=1}^{n²} u_j(t) V_j, the ODE (2) transforms into the equivalent system u̇_j = (λ_j − α(X)) u_j, j = 1, …, n². (5) Several of the proofs below are immediate when passing to the equivalent form (5).

Periodic orbits
In this subsection, we analyze the possibility of having periodic solutions to (2).

Lemma 2

The solution of (5) has the property |u_j(t)| = |u_j(0)| exp(∫₀ᵗ (Re λ_j − α(X(s))) ds), j = 1, …, n². Proof It is sufficient to observe that d|u_j|²/dt = 2 Re(ū_j u̇_j) and replace u̇_j by the right-hand side of the ODE (5).

Theorem 6
Assume that all eigenvalues of A are real and simple. Then, the solution of (2) (or, equivalently, of (5)) cannot be periodic.
Proof A periodic solution implies that u_j is periodic of period T for every j. Then, by Lemma 2, it must hold that ∫₀ᵀ (λ_j − α(X(s))) ds = 0 for every j with u_j ≢ 0. Because all eigenvalues of A are simple, this can hold for only one value j = ĵ. If j ≠ ĵ, one would have that u_j(t) either tends to 0 or tends to ∞ as t → ∞, which implies that, in order to have a periodic orbit, one should require that u_j = 0 for all j ≠ ĵ. This reduces the analysis to an initial value associated with a single eigenvector, for which no periodic orbit can occur.
Theorem 7 Assume that A has a unique simple complex-conjugate pair of rightmost eigenvalues with eigenmatrices V, V̄, and that the initial datum has a nonzero component along them. Then, X(t) tends to a periodic solution.

Proof We can consider the equivalent system (5), which is 2-dimensional in this case (restricting to the components along V, V̄). We know that there cannot exist equilibria, since this would mean that there is a real eigenmatrix in span(V, V̄), which is not possible. Then, by the Poincaré–Bendixson theorem, since the solution is bounded, there has to be a limit cycle which the solution approaches as t → ∞, which gives the result.

Global convergence
We now prove the asymptotic behaviour of the solution to (2), in the two interesting cases of real and complex conjugate rightmost eigenvalues.
Theorem 8 (Real case) Assume that A has a unique simple real rightmost eigenvalue λ₁ with associated eigenmatrix V₁, ‖V₁‖ = 1, and that ⟨X₀, V₁⟩ ≠ 0. Then, lim_{t→∞} X(t) = ±V₁.

Proof We first consider the case when A is diagonalizable and focus our attention on the equivalent system (5). By the assumption, u₁(0) ≠ 0. Moreover, by Lemma 2, we have that, for j > 1, |u_j(t)|/|u₁(t)| → 0 as t → ∞. By the constraint ‖X(t)‖ = 1, this yields the result. An adaptation to the case where A is not diagonalizable is straightforward (we would simply make use of the Jordan canonical form and generalized eigenvectors in this non-generic case).
Theorem 9 (Complex conjugate case) Assume A has a unique simple complex-conjugate pair of rightmost eigenvalues λ₁, λ̄₁ with associated eigenmatrices V₁ and V₂ = V̄₁. Then, X(t) asymptotically approaches the two-dimensional real subspace spanned by Re V₁ and Im V₁. Proof Similar to the proof of Theorem 8, we consider the equivalent system (5) to establish that lim_{t→∞} u_j(t) = 0, j = 3, …, n².
By reality of the solution, we have that u₂(t) = ū₁(t), which implies the claim. In particular, using Theorem 7, X(t) tends to a periodic solution lying in this subspace.

Summary
We have fully analyzed the generic cases of a unique simple rightmost real eigenvalue or a unique pair of rightmost complex conjugate eigenvalues, respectively.

Rightmost real eigenvalue
We have proved that the solution of (2) converges to the equilibrium given by the eigenmatrix associated with the rightmost eigenvalue, scaled to unit norm.

An illustrative example
Consider the matrix-valued operator (6), for given matrices A, B ∈ R^{n×n} as in (7). The rightmost eigenvalue is λ₁ = −1.378076094437169 . . ..

Complex conjugate rightmost eigenvalues
In this case, we have that X(t) ∈ B lies asymptotically in the 2-dimensional real subspace S₂ spanned by the real and imaginary parts of the eigenmatrix associated with λ₁. Choosing two matrices X(t), X(t − τ) for large t, with τ suitably chosen, would generically provide a pair of linearly independent elements of S₂. Orthonormalizing this real basis of S₂, we would get a pair Q₁, Q₂; forming the 2 × 2 matrix M with entries M_{ij} = ⟨Q_i, A(Q_j)⟩, we would easily approximate the pair λ₁, λ̄₁ by computing the eigenvalues of M.
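A hedged sketch of this procedure follows; the operator, the matrices and the parameters are chosen purely for illustration. We build a Sylvester operator whose rightmost eigenvalues are the simple pair 1.1 ± 2i, integrate (2) with renormalized Euler steps, and recover the pair from two late snapshots via the projected 2 × 2 matrix M with entries M_ij = ⟨Q_i, A(Q_j)⟩.

```python
import numpy as np

n = 3
A = np.array([[1.0, -2.0,  0.0],
              [2.0,  1.0,  0.0],
              [0.0,  0.0, -1.0]])      # eigenvalues 1 ± 2i, -1
B = np.diag([0.1, -0.5, -1.0])

def op(X):                              # A(X) = AX + XB; rightmost pair 1.1 ± 2i
    return A @ X + X @ B

rng = np.random.default_rng(5)
X = rng.standard_normal((n, n))
X /= np.linalg.norm(X)
h, m = 1e-3, 500                        # tau = m*h = 0.5
hist = []
for _ in range(80000):                  # renormalized Euler on (2)
    X = X + h * (op(X) - np.sum(op(X) * X) * X)
    X /= np.linalg.norm(X)
    hist.append(X)

# Orthonormalize the snapshots X(t), X(t - tau) and project the operator.
Q, _ = np.linalg.qr(np.column_stack([hist[-1].ravel(), hist[-1 - m].ravel()]))
Qm = [Q[:, i].reshape(n, n) for i in range(2)]
M = np.array([[np.sum(Qm[i] * op(Qm[j])) for j in range(2)] for i in range(2)])
ev = np.linalg.eigvals(M)
print(ev)                               # approximately 1.1 ± 2i
```

The two snapshots span the invariant subspace S₂ once the transient has died out, so the eigenvalues of the 2 × 2 compression M coincide with the rightmost pair up to the discretization error.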

An illustrative example
Consider the following matrix-valued operator, for given matrices A, B ∈ R^{n×n}. The rightmost eigenvalues are λ_{1,2} = 1.902781997845534 . . . ± 1.052820195655316 . . . i. After a numerical integration of (2), we observe a periodic solution (see Fig. 2). Performing the computations described in the previous section with t = 100 and τ = 0.3745, we obtain accurate approximations of λ₁ and λ̄₁.

Low-rank integration
We would like to find an approximate solution to the differential equation, working only with its low-rank approximation. A natural criterion is Ẋ(t) = arg min_{Ż ∈ T_{X(t)}M_r} ‖Ż − F(X(t))‖, (8) where F denotes the right-hand side of (2), the minimization is over all matrices that are tangent to X(t) on the manifold M_r of matrices of rank r, and the norm is the Frobenius norm.
In this section, we describe an efficient method to integrate (2) when X(t) is constrained to belong to the manifold of rank-r matrices, M_r = {Z ∈ R^{n×n} : rank(Z) = r}.

Rank-r matrices and their tangent matrices
Following [8], we first collect some basic properties. Matrices of rank r form a manifold, denoted by M_r (note that, instead, the set of matrices of rank at most r is an algebraic variety). Every real rank-r matrix X of dimension n × n can be written in the form X = USVᵀ, (9) where U ∈ R^{n×r} and V ∈ R^{n×r} have orthonormal columns, i.e. UᵀU = I_r, VᵀV = I_r (with the identity matrix I_r of dimension r), and S ∈ R^{r×r} is nonsingular. The singular value decomposition yields S diagonal, but here we will not assume a special form of S. The representation (9) is not unique: replacing U by Ũ = UP and V by Ṽ = VQ with orthogonal matrices P, Q ∈ R^{r×r}, and correspondingly S by S̃ = PᵀSQ, yields the same matrix X = USVᵀ = ŨS̃Ṽᵀ. To compensate for the non-uniqueness in the decomposition (9), we will use a unique decomposition in the tangent space. Let V_{n,r} denote the Stiefel manifold of real n × r matrices with orthonormal columns. As is shown in [8], every tangent matrix Ẋ ∈ T_X M_r is of the form Ẋ = U̇SVᵀ + UṠVᵀ + USV̇ᵀ, where Ṡ ∈ R^{r×r} and the tangent factors U̇, V̇ are uniquely determined by the gauge conditions UᵀU̇ = 0 and VᵀV̇ = 0. We note the following lemma from [8].

Lemma 3
The orthogonal projection onto the tangent space T_X M_r at X = USVᵀ ∈ M_r is given by P_X(Z) = UUᵀZ − UUᵀZVVᵀ + ZVVᵀ, (10) for Z ∈ R^{n×n}.
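The projection of Lemma 3 is straightforward to implement and test; the sketch below (with illustrative variable names) assumes the formula P_X(Z) = UUᵀZ + ZVVᵀ − UUᵀZVVᵀ and checks that P_X is idempotent and leaves X itself fixed.

```python
import numpy as np

def tangent_projection(U, V, Z):
    """Orthogonal projection onto T_X M_r for X = U S V^T (Lemma 3)."""
    PU, PV = U @ U.T, V @ V.T
    return PU @ Z + Z @ PV - PU @ Z @ PV

n, r = 8, 3
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag(rng.uniform(1.0, 2.0, r))
X = U @ S @ V.T
Z = rng.standard_normal((n, n))

P = tangent_projection(U, V, Z)
assert np.allclose(tangent_projection(U, V, P), P)   # idempotent
assert np.allclose(tangent_projection(U, V, X), X)   # P_X(X) = X
```

The second assertion is exactly the property P_X(X) = X used below to rewrite the projected differential equation.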

A differential equation for rank-r matrices
In the differential equation (2), we replace the right-hand side by its orthogonal projection onto T_X M_r, so that solutions starting with rank r will retain rank r for all times: Ẋ = P_X(A(X) − ⟨A(X), X⟩X). (11) Since X ∈ T_X M_r, we have P_X(X) = X and ⟨X, Z⟩ = ⟨X, P_X(Z)⟩, and hence the differential equation can be rewritten as Ẋ = P_X(A(X)) − ⟨P_X(A(X)), X⟩X, (12) which differs from (2) only in that A(X) is replaced by its orthogonal projection onto T_X M_r.
Theorem 10 (Norm conservation) Assume that ‖X(0)‖ = 1. Then, the solution of (11) has the property ‖X(t)‖ = 1 for all t ≥ 0. Proof It is direct to see that ⟨X, Ẋ⟩ = 0, so that the unit norm of X is conserved along solutions of (11).
To obtain the differential equation in a form that uses the factors in X = USV T rather than the full n × n matrix X, we use the following result.

Lemma 4 [8, Prop. 2.1]

For X = USVᵀ ∈ M_r with nonsingular S ∈ R^{r×r} and with U ∈ R^{n×r} and V ∈ R^{n×r} having orthonormal columns, the equation Ẋ = P_X(Z) is equivalent to Ẋ = U̇SVᵀ + UṠVᵀ + USV̇ᵀ, where U̇ = (I − UUᵀ)ZVS⁻¹, Ṡ = UᵀZV, V̇ = (I − VVᵀ)ZᵀUS⁻ᵀ. (13) Replacing Z by F(X) in (13), we obtain the projected system of ODEs (12), written in terms of the factors U, S, and V of X.
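A short numerical consistency check of the factored form is easy to write; the sketch below assumes (13) reads U̇ = (I − UUᵀ)ZVS⁻¹, Ṡ = UᵀZV, V̇ = (I − VVᵀ)ZᵀUS⁻ᵀ, and verifies that the reassembled Ẋ coincides with the tangent projection of Lemma 3.

```python
import numpy as np

def factor_rhs(U, S, V, Z):
    """Right-hand sides of the factor ODEs (13), as in [8, Prop. 2.1]."""
    n = U.shape[0]
    Si = np.linalg.inv(S)
    Udot = (np.eye(n) - U @ U.T) @ Z @ V @ Si
    Sdot = U.T @ Z @ V
    Vdot = (np.eye(n) - V @ V.T) @ Z.T @ U @ Si.T
    return Udot, Sdot, Vdot

n, r = 8, 3
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag(rng.uniform(1.0, 2.0, r))
Z = rng.standard_normal((n, n))

Udot, Sdot, Vdot = factor_rhs(U, S, V, Z)
Xdot = Udot @ S @ V.T + U @ Sdot @ V.T + U @ S @ Vdot.T
PU, PV = U @ U.T, V @ V.T
PZ = PU @ Z + Z @ PV - PU @ Z @ PV       # tangent projection of Lemma 3
assert np.allclose(Xdot, PZ)
```

Note that the gauge conditions UᵀU̇ = 0 and VᵀV̇ = 0 hold by construction, since U̇ and V̇ carry the factors (I − UUᵀ) and (I − VVᵀ).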

Equilibria
The following result characterizes possible equilibria of (12).

Theorem 11
Let X = USVᵀ, with U ∈ R^{n×r} and V ∈ R^{n×r} having orthonormal columns and S ∈ R^{r×r} nonsingular. X is an equilibrium of (12) if and only if (I − UUᵀ)FV = 0, UᵀFV = 0, (I − VVᵀ)FᵀU = 0, with F = F(USVᵀ), (14) under the constraints UᵀU = I_r, VᵀV = I_r, ‖S‖ = 1.
Proof Replacing Z by F = F(USVᵀ) in (13) and imposing U̇ = 0, Ṡ = 0, V̇ = 0 yields (14). System (14), for n, r > 1, consists of 2nr + r(r + 1) + 1 equations in 2nr + r² variables. Note that the solution is not unique: multiplying U by a diagonal matrix with elements ±1, and doing the same for V, would not change the solution.

Remark 1
The existence and the number of equilibria for (12) (equivalently of (13)) is non trivial, given the nonlinearity of the problem. The asymptotic behaviour of the solution of (12) is also more complicated to determine.

Self-adjoint case
In this case, the monotonicity result stated by Theorem 4 remains valid.

Theorem 12 If A is self-adjoint, the Rayleigh functional α(X) increases monotonically along the solution of (11).

Proof Proceeding as in the proof of Theorem 4, and using ⟨X, Ẋ⟩ = 0, we get (d/dt) α(X) = 2(‖P_X(A(X))‖² − ⟨X, P_X(A(X))⟩²) ≥ 0 by the Cauchy–Schwarz inequality.
The following is an important consequence of the previous result (an illustrative example is given later in Section 5.7).
Remark 2 Theorem 12 shows that the projected system (12) is still a gradient system, so that the limit of α(X(t)) exists and, for generic initial data, maximizes (at least locally) the Rayleigh functional over M_r.

Corollary 2 Assume A is self-adjoint and X₁ ∈ R^{n×n} is the eigenmatrix associated with the largest eigenvalue λ₁. Let X_r ∈ M_r be its best rank-r approximation. Then, the solution X(t) of (12) (in M_r) with initial datum X(0) = X_r is such that lim_{t→∞} X(t) = X̃ ∈ M_r with P_{X̃}(A(X̃)) = λ̃X̃ and λ̃ ≥ α(X_r).

Special cases
Consider the case where A(X) = AX + XB with X ∈ R n×n and given A, B ∈ R n×n , which includes for example continuous Lyapunov equations.
It is immediately seen that, for X₀ ∈ M_r, we have X(t) ∈ M_r for all t. In this case, we can obtain an approximation of the factors U, S and V of the solution by numerically integrating the system of ODEs (13). Here it is convenient to choose r = 1, since the real eigenmatrices of A are rank-1 (outer products xyᵀ of eigenvectors of A and Bᵀ).
Next, consider the case where A(X) = AXBᵀ with X ∈ R^{n×n} and given A, B ∈ R^{n×n}. It is easy to observe that, when one restricts X to M₁, then A(X) ∈ M₁, which directly shows that the eigenmatrices of A are rank-1. Integrating the projected ODE is convenient in this case to compute the leading eigenpair of A.
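The rank-1 invariance follows from the identity A(uvᵀ) = (Au)(Bv)ᵀ; a quick check (the matrices below are random placeholders):

```python
import numpy as np

n = 5
rng = np.random.default_rng(4)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
u, v = rng.standard_normal(n), rng.standard_normal(n)

X = np.outer(u, v)          # a rank-1 matrix X = u v^T
Y = A @ X @ B.T             # A(X) = A X B^T
# A(u v^T) = (A u)(B v)^T, so the image is again rank-1:
assert np.allclose(Y, np.outer(A @ u, B @ v))
assert np.linalg.matrix_rank(Y) == 1
```

In particular, eigenmatrices associated with simple eigenvalues of this operator are outer products of eigenvectors of A and B.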

The general case: an illustrative example
Consider, again, the operator given by (6), with A and B as in (7).
The rightmost eigenvalue is λ₁ = −1.378076094437169 . . . and the associated eigenmatrix X₁ has rapidly decaying singular values. Hence, X₁ may be well approximated by a rank-2 matrix.
Integrating numerically for rank r = 2, we obtain the behaviour of α(X(t)) illustrated in Fig. 3, which also shows (in red) the behaviour of α when integrating the full ODE (2) in the whole space. In particular, α(X(t)) converges as t → ∞, which seems to indicate that the projected operator has a real rightmost eigenvalue. Repeating the same integration over the rank-1 manifold (i.e. with r = 1), we obtain the behaviour of the solution of (12) illustrated in Fig. 4. This indicates that the solution is periodic, and we may conjecture that the projected operator has a pair of complex-conjugate rightmost eigenvalues.

A case study: rank-1 projection
We consider here the case where r = 1, i.e. projection onto M 1 .
We let X = uvᵀ with u, v ∈ Rⁿ of unit norm. In general, we can rewrite A(X) in terms of suitable matrices A_{ij} ∈ R^{n×n}, and the projected ODEs (13) then read as a coupled system (15) for the vectors u, v and the scalar factor.

The equations for equilibria, obtained by projecting F(uvᵀ) onto the tangent space and setting the result to zero, yield the projected eigenvalue problem (16), a system of cubic equations with no quadratic terms.

The case X ∈ R 2×2
Consider the illustrative case of a 2 × 2 matrix X, so that the matrix representation A of A is 4 × 4. In the generic case, we are able to prove that the projected eigenvalue problem (16) has 8 (possibly complex-conjugate) eigenvalues, that is, twice as many as the corresponding problem in R^{2×2}. In all our experiments we have observed that, in the presence of a rightmost eigenvalue of the projected eigenvalue problem, all the eigenvalues of the Jacobian of the right-hand side of (15) have negative real part, implying stability of the corresponding equilibrium; on the contrary, the remaining eigenvalues correspond to unstable equilibria. In the self-adjoint case, this property is rigorous, as it is a consequence of Theorem 12. By a direct computation, it is possible to show that, in order for A to be self-adjoint, the matrix A must have a symmetric structure, which we parametrize by coefficients a, b, . . . , l.
As an example, we let a = c = h = l = 0, b = f = −1 and all the other coefficients equal to 1. We compute 4 double eigenvalues, λ_{1,2} = √2, λ_{3,4} = 1, λ_{5,6} = −1, λ_{7,8} = −√2, which are more eigenvalues than those of the eigenvalue problem A(X) = λX in R^{2×2} (which are ±1 and ±√5). As we expect, both rightmost equilibria are stable, i.e. they correspond to maxima for α, since the eigenvalues of the Jacobian of the right-hand side of (15) have negative real part; all 6 other equilibria are unstable. This is consistent with the fact (see Theorem 12) that the projected ODE in the self-adjoint case is still a gradient system for α, which means that an integration of (12) allows one to compute local maxima of α over the low-rank manifold.
As we expect, if we slightly perturb the problem, replacing e = 1 by e = 0.99, we obtain two local maxima corresponding to the eigenvalues λ₁ ≈ 1.41775 and λ₂ ≈ 1.41068, so that the solution of the projected ODE may not converge to the global maximum, contrary to the behaviour of the solution of (2) in R^{n×n}. Figure 5 shows the behaviour of α for this problem for e = 1.

Numerical integration
We focus our attention on non-stiff operators and consider two explicit schemes.

Euler-based integration
The algorithm starts from the factorized rank-r matrix X₀ = U₀S₀V₀ᵀ of unit norm at time t₀ = 0 and computes the factors of the approximation X₁ = U₁S₁V₁ᵀ, again of unit Frobenius norm, at the next time t₁ = t₀ + h:
1. Evaluate F₀ = A(X₀) − α(X₀)X₀.
2. Perform one Euler step on the factor ODEs (13): Û₁ = U₀ + h(I − U₀U₀ᵀ)F₀V₀S₀⁻¹, Ŝ₁ = S₀ + hU₀ᵀF₀V₀, V̂₁ = V₀ + h(I − V₀V₀ᵀ)F₀ᵀU₀S₀⁻ᵀ.
3. Project to the respective manifolds, U₁ = orth(Û₁), V₁ = orth(V̂₁), S₁ = Ŝ₁/‖Ŝ₁‖, where orth(·) is obtained via a QR-decomposition.
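A minimal sketch of one such Euler step follows, assuming F(X) = A(X) − α(X)X as the right-hand side of (2); here the projection back to the manifold re-factorizes the Euler update exactly via two QR decompositions and then normalizes S, which is one possible realization of the orth(·)/normalization step.

```python
import numpy as np

def euler_step(op, U, S, V, h):
    """One Euler step on the factor ODEs (13), then projection back to the
    rank-r manifold and renormalization to unit Frobenius norm."""
    n = U.shape[0]
    X = U @ S @ V.T
    F = op(X) - np.sum(op(X) * X) * X          # F(X) = A(X) - alpha(X) X
    Si = np.linalg.inv(S)
    Uh = U + h * (np.eye(n) - U @ U.T) @ F @ V @ Si
    Sh = S + h * (U.T @ F @ V)
    Vh = V + h * (np.eye(n) - V @ V.T) @ F.T @ U @ Si.T
    U1, RU = np.linalg.qr(Uh)                  # orth(.) via QR
    V1, RV = np.linalg.qr(Vh)
    S1 = RU @ Sh @ RV.T                        # exact re-factorization of Uh Sh Vh^T
    return U1, S1 / np.linalg.norm(S1), V1

# Demo on the self-adjoint operator A(X) = AX + XA (illustrative data).
n, r = 8, 2
rng = np.random.default_rng(7)
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
op = lambda X: A @ X + X @ A
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag([0.8, 0.6])                        # unit Frobenius norm
alpha0 = np.sum(op(U @ S @ V.T) * (U @ S @ V.T))
for _ in range(500):
    U, S, V = euler_step(op, U, S, V, 1e-3)
X = U @ S @ V.T
print(alpha0, np.sum(op(X) * X))               # alpha increases along the flow
```

In the self-adjoint demo the Rayleigh functional increases along the iterates, in line with Theorem 12.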

Remark 3
The presence of S⁻¹ in the right-hand side of the differential equations (13) might be challenging when S is nearly singular, meaning that the solution X is close to a matrix of rank r − 1. The following numerical integrator performs very well in this case.

A projector splitting integrator
We shortly describe a variant of the integration method of Lubich and Oseledets [10] to integrate (2) on the intersection of the rank-r manifold M_r and the set B of matrices of unit norm. Our goal is to integrate the n × n matrix differential equation, which, as we have seen in (13), can be written equivalently as a system of differential equations for the factors U, S, V of X ∈ M_r (as shown in [8]). The right-hand sides of the differential equations for U and V contain, however, the inverse of S, which leads to difficulties with standard numerical integrators when S is nearly singular, that is, when X is close to the manifold of matrices of rank smaller than r.
We, therefore, follow the alternative approach of [10], which uses an integration method that is based on splitting the tangent space projection P X , which by (10) is an alternating sum of three subprojections. A time step of the numerical integrator, based on the Lie-Trotter splitting corresponding to these three terms, can then be implemented in the following way.
Here, we consider a slight variant of the projector-splitting integrator of [10], such that, the unit Frobenius norm is preserved (see also [4]).
The algorithm starts from the factorized rank-r matrix X₀ = U₀S₀V₀ᵀ of unit norm at time t₀ = 0 and computes the factors of the approximation X₁ = U₁S₁V₁ᵀ, again of unit Frobenius norm, at the next time t₁ = t₀ + h:
1. Set K₁ = U₀S₀ + hF₀V₀ and, via a QR decomposition, compute the factorization U₁Ŝ₁σ̂₁ = K₁, with U₁ having orthonormal columns, an r × r matrix Ŝ₁ of unit Frobenius norm, and a positive scalar σ̂₁.
2. Set S̃₀σ̃₀ = Ŝ₁σ̂₁ − hU₁ᵀF₀V₀, where S̃₀ is of unit Frobenius norm and σ̃₀ > 0.
3. Set L₁ = V₀(σ̃₀S̃₀)ᵀ + hF₀ᵀU₁ and, via a QR decomposition, compute the factorization V₁S₁ᵀσ₁ = L₁, with V₁ having orthonormal columns, an r × r matrix S₁ of unit Frobenius norm, and a positive scalar σ₁.
The algorithm computes a factorization X₁ = U₁S₁V₁ᵀ of a rank-r matrix of unit Frobenius norm, which is taken as an approximation to X(t₁). As is shown in [6], this is a first-order method with an error bound that is independent of possibly small singular values of X₀ or X₁.
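A sketch of one step of this norm-preserving variant follows; this is our reading of the steps above, with F₀ = A(X₀) − α(X₀)X₀, and the names are illustrative. Note that no inverse of S appears, which is the point of the splitting approach.

```python
import numpy as np

def splitting_step(op, U0, S0, V0, h):
    """One Lie-Trotter projector-splitting (KSL) step, followed by
    normalization of S to unit Frobenius norm; no inverse of S appears."""
    X0 = U0 @ S0 @ V0.T
    F0 = op(X0) - np.sum(op(X0) * X0) * X0   # F0 = A(X0) - alpha(X0) X0
    K1 = U0 @ S0 + h * F0 @ V0               # K-step
    U1, S1hat = np.linalg.qr(K1)
    S0t = S1hat - h * U1.T @ F0 @ V0         # S-step (runs backwards)
    L1 = V0 @ S0t.T + h * F0.T @ U1          # L-step
    V1, S1T = np.linalg.qr(L1)
    S1 = S1T.T
    return U1, S1 / np.linalg.norm(S1), V1   # renormalize to unit norm

# Demo on the self-adjoint operator A(X) = AX + XA (illustrative data).
n, r = 8, 2
rng = np.random.default_rng(8)
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
op = lambda X: A @ X + X @ A
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
S = np.diag([0.9, 0.1]); S = S / np.linalg.norm(S)   # small second singular value
alpha0 = np.sum(op(U @ S @ V.T) * (U @ S @ V.T))
for _ in range(2000):
    U, S, V = splitting_step(op, U, S, V, 1e-3)
print(alpha0, np.sum(op(U @ S @ V.T) * (U @ S @ V.T)))
```

The step remains well defined even when S is nearly singular, in contrast to the Euler scheme on (13).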

Implementation issues
We discuss here a few implementation aspects.

Guess of the rank r
In order to guess the correct rank r of the rightmost eigenmatrix of A, one may apply a few Euler steps to integrate (2) and choose r after a singular value decomposition of the computed numerical approximation.
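A sketch of this heuristic follows; the tolerance and step count are illustrative choices, and the test operator is a Sylvester operator with diagonal coefficients, whose leading eigenmatrix is rank-1.

```python
import numpy as np

def guess_rank(op, X0, h=1e-2, steps=600, tol=1e-3):
    """Run a few renormalized Euler steps of (2) in full format, then count
    the singular values above tol relative to the largest one."""
    X = X0 / np.linalg.norm(X0)
    for _ in range(steps):
        X = X + h * (op(X) - np.sum(op(X) * X) * X)
        X /= np.linalg.norm(X)
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# For A(X) = AX + XB with diagonal A, B the leading eigenmatrix is rank-1,
# and the heuristic recovers r = 1 (illustrative data).
n = 4
A = np.diag([2.0, -1.0, -2.0, -3.0])
B = np.diag([1.0, -1.0, -3.0, -4.0])
op = lambda X: A @ X + X @ B
print(guess_rank(op, np.ones((n, n))))   # 1
```

The larger the spectral gap of the operator, the fewer Euler steps are needed before the singular values of the iterate reveal the rank.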

Stepsize control
In order to control the stepsize, a few tools are available. The main point is that we are interested in the steady-state solution of (2), so that standard error-control techniques do not appear suitable for our task. We need to distinguish between the case where X(t) converges, as t → ∞, to the eigenmatrix associated with the rightmost eigenvalue, and the case where X(t) does not converge but approaches the two-dimensional subspace span(V, V̄) spanned by the complex-conjugate eigenmatrices associated with λ₁ and λ̄₁.
In the first case, one possible functional to drive the stepsize selection is the residual ‖A(X) − α(X)X‖ (in the Frobenius norm), which suggests increasing the stepsize when it decreases monotonically.
In the second case, we may require that the third singular value σ₃ of the matrix whose columns are the vectorizations of X_n ≈ X(t_n), X_{n−m} ≈ X(t_{n−m}) and X_{n−2m} ≈ X(t_{n−2m}) (with m suitably chosen) decreases.

Self-adjoint case
When A is self-adjoint, we have seen in Theorems 4 and 12 that the Rayleigh functional α(X) is monotonic along solutions. This can be naturally exploited to drive the stepsize selection: in order to accept the approximation X_{k+1} ≈ X(t_{k+1}), we require that α(X_{k+1}) > α(X_k).

Example 1
We start with a simple example by considering the operator A(X) = AX + XAᵀ + BXCᵀ for given matrices A, B, C, X ∈ R^{n×n}. We consider A diagonal and B, C of moderate norm, so that intuition suggests that the eigenmatrices of A are reasonably close to those of X ↦ AX + XAᵀ, which have rank 1.
We choose n = 50, and B and C full random matrices of Frobenius norm σn with σ ∈ [0.1, 1]. Table 1 illustrates the results: λ₁ indicates the rightmost eigenvalue of A, λ̃₁ denotes its approximation obtained by integrating the projected ODE (12) on the rank-r manifold, and ‖X₁ − USVᵀ‖ indicates the norm of the difference between the leading eigenmatrix and its rank-r approximation computed by integrating (12).
Solving the eigenvalue problem and computing the eigenmatrix U₁ associated with the rightmost eigenvalue λ₁ would determine the main mode in the dynamics of the evolution PDE, as well as the decay rate of its approximate solution.

Self-adjoint case: comparison to the ALS method
Given a self-adjoint linear operator A, according to the Courant–Fischer–Weyl min-max principle, the largest eigenvalue satisfies λ₁ = max_{X≠0} ⟨A(X), X⟩/⟨X, X⟩. A typical approach for low-rank optimization is the alternating least squares (ALS) procedure. For a prescribed rank r, we impose the factorization X = UVᵀ with U ∈ R^{n×r} and V ∈ R^{n×r}, and maximize the Rayleigh quotient ⟨A(UVᵀ), UVᵀ⟩/⟨UVᵀ, UVᵀ⟩ successively over U and V. Fixing U with orthonormal columns, we have col(UVᵀ) = (I ⊗ U)v, where v is the vectorization of Vᵀ, so that the optimization for V coincides with the eigenvalue problem for the matrix A_U = (I ⊗ U)ᵀA(I ⊗ U). We compare our approach, based on the modified projector-splitting integrator (MPS), with the ALS method on a symmetric operator with A diagonal and B symmetric, of dimensions 50 × 50. The exact value of the maximum eigenvalue is λ = −1.7391. The initial data are randomly chosen for both codes, which are run for r = 1, 3, 5, 7. The computational results are shown in Table 2: d_X is the distance between the rank-r eigenmatrix computed by a method and the exact eigenmatrix X, while d_{X_r} is the distance between the computed eigenmatrix and X_r, the best rank-r approximation of X. The results on this example show that our method is quite competitive with ALS.
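A sketch of one ALS sweep in this setting follows, using the n² × n² matrix representation with column-stacking vectorization; the operator and sizes are illustrative placeholders, not the 50 × 50 problem of Table 2. Each half-sweep solves the reduced symmetric eigenvalue problem exactly, so the Rayleigh quotient is non-decreasing across sweeps.

```python
import numpy as np

def rayleigh(Amat, x):
    return (x @ Amat @ x) / (x @ x)

def als_sweep(Amat, U, V):
    """One ALS sweep for max <A(X),X>/<X,X> over X = U V^T."""
    n, r = U.shape
    V = np.linalg.qr(V)[0]                        # orthonormalize (same row span)
    # Fix V: vec(U V^T) = (V kron I_n) vec(U), column-stacking vec.
    P = np.kron(V, np.eye(n))
    w = np.linalg.eigh(P.T @ Amat @ P)[1][:, -1]  # leading eigenvector
    U = np.linalg.qr(w.reshape((n, r), order="F"))[0]
    # Fix U: vec(U V^T) = (I_n kron U) vec(V^T), i.e. A_U = (I kron U)^T A (I kron U).
    P = np.kron(np.eye(n), U)
    w = np.linalg.eigh(P.T @ Amat @ P)[1][:, -1]
    V = w.reshape((r, n), order="F").T
    return U, V

# Illustrative self-adjoint operator A(X) = AX + XA with A symmetric.
n, r = 6, 2
rng = np.random.default_rng(9)
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
Amat = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))   # column-stacking rep.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
rhos = [rayleigh(Amat, (U @ V.T).ravel(order="F"))]
for _ in range(3):
    U, V = als_sweep(Amat, U, V)
    rhos.append(rayleigh(Amat, (U @ V.T).ravel(order="F")))
print(rhos)   # non-decreasing, bounded by 2*lambda_max(A)
```

Each half-sweep restricts the search to matrices with a fixed column (or row) span containing the current iterate, which is why the quotient cannot decrease.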

Conclusions
In this article, we have described an efficient method to compute low-rank eigensolutions of matrix-valued linear operators associated with their rightmost eigenvalue. In practice, the integration of the projected ODE (12) on a low-rank manifold of prescribed rank works very well and provides accurate approximations of exact eigensolutions when they are close to the manifold M_r. A complete analysis of the asymptotic behaviour of the solutions of (12), and of the closeness between the equilibria of (12) and those of (2), remains an open and challenging problem.