Residual-based iterations for the generalized Lyapunov equation

This paper treats iterative solution methods for the generalized Lyapunov equation. Specifically, it extends the existing theoretical justification for the alternating linear scheme (ALS) from the stable Lyapunov equation to the stable generalized Lyapunov equation. Moreover, connections between the energy-norm minimization in ALS and the theory of H2-optimality of an associated bilinear control system are established. It is also shown that a certain ALS-based iteration can be seen as iteratively constructing rank-1 model reduction subspaces for bilinear control systems associated with the residual. Similarly to the ALS-based iteration, the fixed-point iteration can be seen as a residual-based method minimizing an upper bound of the associated energy norm. Lastly, a residual-based generalized rational-Krylov-type subspace is proposed for the generalized Lyapunov equation.


Introduction
This paper concerns iterative ways to compute approximate solutions to what has become known as the generalized Lyapunov equation

L(X) + Π(X) + BB^T = 0, (1.1)

where X ∈ R^{n×n} is unknown, B ∈ R^{n×r} is given, and the operators L, Π : R^{n×n} → R^{n×n} are defined as

L(X) := AX + XA^T, (1.2)
Π(X) := Σ_{i=1}^m N_i X N_i^T, (1.3)

with A, N_i ∈ R^{n×n} for i = 1, …, m given. The operator L is commonly known as the Lyapunov operator, and Π is sometimes called a correction. We further assume that A is stable, i.e., A has all its eigenvalues in the open left half-plane, which implies that L is invertible [22, Theorem 4.4.6]. Moreover, we assume that ρ(L^{-1}Π) < 1, where ρ denotes the (operator) spectral radius. The assumption on the spectral radius implies that (1.1) has a unique solution, see, e.g., [23, Theorem 2.1]. Furthermore, the definition of Π in (1.3) implies that it is non-negative, in the sense that Π(X) is positive semidefinite when X is positive semidefinite. Thus one can assert that the unique solution X is indeed positive definite, see, e.g., [9, Theorem 3.9] or [12, Theorem 4.1].
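For small problems, (1.1) can be assembled and solved directly in vectorized (Kronecker) form, which also makes the standing assumptions easy to check numerically. The sketch below uses randomly generated test matrices; the data and scalings are assumptions of this example, not taken from the paper:

```python
import numpy as np

# Assemble L and Pi in Kronecker form: vec(AX + XA^T) = (I (x) A + A (x) I) vec(X),
# vec(N_i X N_i^T) = (N_i (x) N_i) vec(X).  Feasible only for small n, since the
# vectorized system is n^2 x n^2.  Random test data (illustrative assumption).
rng = np.random.default_rng(0)
n, m, r = 6, 2, 1
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # stable: field of values in LHP
N = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]
B = rng.standard_normal((n, r))

I = np.eye(n)
L_mat = np.kron(I, A) + np.kron(A, I)
Pi_mat = sum(np.kron(Ni, Ni) for Ni in N)

# Unique solvability: spectral radius of L^{-1} Pi below one
rho = max(abs(np.linalg.eigvals(np.linalg.solve(L_mat, Pi_mat))))

# Solve (L + Pi) vec(X) = -vec(B B^T)
X = np.linalg.solve(L_mat + Pi_mat, -(B @ B.T).ravel()).reshape(n, n)
res_norm = np.linalg.norm(
    A @ X + X @ A.T + sum(Ni @ X @ Ni.T for Ni in N) + B @ B.T)
```

With a symmetric right-hand side the computed solution is symmetric, consistent with the discussion above.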

Related work
The standard Lyapunov equation, AX + XA^T + BB^T = 0, has been well studied for a long time, and considerable research effort has been, and is still, put into finding efficient algorithms for computing the solution. Some examples of methods are the classical Bartels-Stewart algorithm [5] for small and dense problems, based on factorizing the matrix A and backward substitution, the Smith method presented in [31], and the Riemannian optimization method [32], which computes a low-rank approximation by minimizing an associated cost function over the manifold of rank-k matrices, where k ≪ n. The Lyapunov equation has a close connection to control theoretic methods, such as the iterative rational Krylov algorithm (IRKA) [18,20], which computes locally H2-optimal reduced order systems. Related research is presented in a series of papers [13][14][15], where Druskin and co-authors develop a strategy to choose shifts for the rational Krylov subspace for efficient subspace reduction when solving PDEs [13,14], as well as for model reduction of linear single-input-single-output (SISO) systems and solutions to Lyapunov equations [15]. Instead of computing full spaces iteratively with a method such as IRKA, the idea is to construct an infinite sequence of shifts with asymptotically optimal convergence speed [13]. The subspace can then be dynamically extended as needed, until the required precision is achieved, rather than starting the process by deciding the dimension of the space, an a priori parameter choice with (complex) implications for the final approximation accuracy. The idea is further developed by using tangential directions, proving especially useful for situations where the right-hand side is not of particularly low rank [16], e.g., multiple-input-multiple-output (MIMO) systems. For a more complete overview of results and techniques for Lyapunov equations, see, e.g., the review article [30]. The generalized Lyapunov equation (1.1) has received increased attention over the last decade.
Results on low-rank approximability have emerged [7, Theorem 1], [23, Theorem 2]: similarly to the standard Lyapunov equation, one can in certain cases when the right-hand side B is of low rank, r ≪ n, expect the singular values of the solution to decay rapidly even for the generalized Lyapunov equation. Such low-rank approximability is useful for large-scale problems, since algorithms can be adapted to exploit the low-rank format, reducing computational effort and storage requirements. Examples of algorithms exploiting low-rank structure are a bilinear ADI method [7], specializations of Krylov methods for matrix equations [23], greedy low-rank methods [24], and exploitations of the fixed-point iteration [29]. Through the connection with bilinear control systems there is an extension of IRKA, known as the bilinear iterative rational Krylov algorithm (BIRKA) [6,19]. There are also methods based on Lyapunov- and ADI-preconditioned GMRES and BiCGStab [12], and in general for problems with tensor product structure [25]. We also mention that in the case when the correction Π has low operator rank, typically if all the matrices N_i are of low rank, there is a specialization of the Sherman-Morrison-Woodbury formula to the linear matrix equation, see, e.g., [12, Section 3].

Outline and summary of contributions
The paper is outlined as follows: In Section 2 we present some existing theory and preliminary results which set the context of the paper. This is followed up in Section 3, where we prove that the alternating linear scheme (ALS) presented by Kressner and Sirković in [24] computes search directions which at each step fulfill a first-order necessary condition for H2-optimality, in a certain sense. Moreover, we show equivalence between ALS and BIRKA. In Section 4 we make an analogue to the fixed-point iteration, showing that it minimizes an upper bound of the energy norm, before we in Section 5 present a residual-based generalized rational-Krylov-type subspace adapted for solving the generalized Lyapunov equation. While the main focus of this paper is on solution methods for the generalized Lyapunov equation, in Section 5.2 we draw parallels with rational Krylov subspaces for the standard Lyapunov equation. We end the paper with numerical experiments in Section 6, and conclusions and outlooks in Section 7.

Generalized matrix equations and approximations
In this paper we are concerned with iterative methods for computing approximate solutions to the generalized Lyapunov equation (1.1). In this section we recall some basic definitions and results that will be used later in the paper. Regarding the generalized Lyapunov equation there is one special class of problems where more can be said, namely the symmetric case. In general we will think of X̂_k ∈ R^{n×n} as an approximation of the solution to (1.1), where k is an iteration count. The goal is to find an X̂_k such that X − X̂_k is small in some norm, where X is the exact solution to (1.1). However, in many situations there is no precise definition, and no need for one, of what is meant by an approximation. Nevertheless, in order to discuss projection methods and make the results precise, we make the following (standard) definition of a Galerkin approximation.
Definition 2.2 (The Galerkin approximation) Let K_k ⊆ R^n be an n_k ≤ n dimensional subspace for k = 0, 1, …, and let V_k ∈ R^{n×n_k} be a matrix containing an orthonormal basis of K_k. We call X̂_k := V_k Y_k V_k^T the Galerkin approximation to (1.1) in K_k if Y_k ∈ R^{n_k×n_k} solves V_k^T ( L(X̂_k) + Π(X̂_k) + BB^T ) V_k = 0, known as the projected problem.
It is possible that the projected problem does not have a solution, or that the solution is not unique, and in such a case the definition of (the) Galerkin approximation in Definition 2.2 is not well posed. Unique solvability is, however, preserved by orthogonal projections in the symmetric case, cf. Proposition 3.1. We will not delve deeper into this topic, and further on simply assume that the projected problem has a unique solution. Related to the (Galerkin) approximation we also have the (Galerkin) residual.

Definition 2.3 (The Galerkin residual)
Let K_k and V_k be as in Definition 2.2 and, assuming that it exists, let X̂_k be the Galerkin approximation in K_k to (1.1). We define the Galerkin residual as R_k := L(X̂_k) + Π(X̂_k) + BB^T. With this definition the projected problem is commonly expressed as V_k^T R_k V_k = 0, which is also known as the Galerkin orthogonality condition and is a characteristic feature of the Galerkin residual.
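Definitions 2.2 and 2.3 can be made concrete in a few lines: for a fixed subspace, the projected problem is solved by vectorization and the Galerkin orthogonality V_k^T R_k V_k = 0 can be verified. The subspace span{B, AB} and all matrices below are assumptions of this illustrative sketch:

```python
import numpy as np

# Galerkin approximation of the generalized Lyapunov equation on a fixed
# subspace K = span{B, AB} (an assumed example choice); the projected
# problem is solved in Kronecker form.  Random test data.
rng = np.random.default_rng(1)
n, r = 20, 2
A = -np.eye(n) - np.diag(rng.random(n)) + 0.05 * rng.standard_normal((n, n))
N = [0.05 * rng.standard_normal((n, n))]
B = rng.standard_normal((n, r))

V = np.linalg.qr(np.hstack([B, A @ B]))[0]           # orthonormal basis, n_k = 4
Ak, Bk = V.T @ A @ V, V.T @ B
Nk = [V.T @ Ni @ V for Ni in N]

k = V.shape[1]
M = (np.kron(np.eye(k), Ak) + np.kron(Ak, np.eye(k))
     + sum(np.kron(Nki, Nki) for Nki in Nk))
Y = np.linalg.solve(M, -(Bk @ Bk.T).ravel()).reshape(k, k)
X_hat = V @ Y @ V.T                                   # Galerkin approximation

R = A @ X_hat + X_hat @ A.T + sum(Ni @ X_hat @ Ni.T for Ni in N) + B @ B.T
galerkin_orth = np.linalg.norm(V.T @ R @ V)           # Galerkin orthogonality
```

Note that the projected operators coincide with the projections of L and Π applied to V Y V^T, so solving the small system is exactly the projected problem of Definition 2.2.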
Remark 2.1 (Generic residual) It is possible to define a generic residual, which we with a slight abuse of notation also call R_k. We define it as: for any matrix X̂_k ∈ R^{n×n}, R_k := L(X̂_k) + Π(X̂_k) + BB^T. Some of the results and arguments presented are valid for a (generic) residual and others, more specialized, only for the Galerkin residual, or another residual specific to some approximation scheme. However, we will always connect the residual to a specific approximation, and the Galerkin residual will always be referenced as such.
We end this section with a specialization of a classical result from linear algebra, sometimes called the residual equation.

Proposition 2.1 (A matrix equation for the error)
Let L and Π be as in (1.2)-(1.3), and let X ∈ R^{n×n} be the solution to (1.1). Moreover, let X̂_k ∈ R^{n×n} be a given matrix, let R_k be the (generic) residual, and define the error X^e_k := X − X̂_k. Then L(X^e_k) + Π(X^e_k) + R_k = 0. The result in Proposition 2.1 is useful since it may serve as a starting point for deriving iterative algorithms for computing approximations to (1.1). Hence, one strategy for computing updates to the current iterate is to compute, or approximate, X^e_k, which Proposition 2.1 allows us to do based on the known, or computable, quantities L, Π and R_k. The result for the Lyapunov equation was presented already by Smith in [31]; for generalized matrix equations cf. [12, Section 4.2] and [24, Algorithm 2].
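Proposition 2.1 can be checked directly on a small instance: solving the residual equation for the error and adding the error back to the approximation recovers the exact solution. A vectorized sketch with random test data (all matrices are assumptions of the example):

```python
import numpy as np

# Sketch of Proposition 2.1: the error X - X_hat solves the generalized
# Lyapunov equation with right-hand side R_k.  Random small test data.
rng = np.random.default_rng(2)
n = 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
N = [0.1 * rng.standard_normal((n, n))]
B = rng.standard_normal((n, 1))

I = np.eye(n)
op = np.kron(I, A) + np.kron(A, I) + sum(np.kron(Ni, Ni) for Ni in N)
X = np.linalg.solve(op, -(B @ B.T).ravel()).reshape(n, n)   # exact solution

X_hat = np.outer(B, B)                                      # arbitrary approximation
R = A @ X_hat + X_hat @ A.T + sum(Ni @ X_hat @ Ni.T for Ni in N) + B @ B.T
E = np.linalg.solve(op, -R.ravel()).reshape(n, n)           # L(E) + Pi(E) + R = 0
err = np.linalg.norm(X_hat + E - X)
```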

Bilinear systems
We recall some control theoretic concepts for bilinear control systems of the form

Σ : ẋ(t) = A x(t) + Σ_{i=1}^m N_i x(t) w_i(t) + B u(t), y(t) = C x(t), (2.1)

with A, N_i ∈ R^{n×n}, B ∈ R^{n×r} and C ∈ R^{r×n}, and control inputs u(t) ∈ R^r and w(t) ∈ R^m.

Remark 2.2
We highlight that the bilinear system Σ in (2.1) differs from the usual notation used in the literature, see, e.g., [2,3,6,9,12,19,33]. The formulation (2.1) is convenient since it allows for m ≠ r. However, the system Σ can be put on the usual form by considering the input vector [w(t)^T u(t)^T]^T, adding m zero-columns to the beginning of B, i.e., considering [0, B], and considering the matrices N_i = 0 for i = m + 1, m + 2, …, m + r. Thus it is reasonable to consider the same Gramians. The system Σ can also be compared to systems from applications, e.g., [26, Equation (2)].
As in [3], for a MIMO bilinear system (2.1), we define the controllability and observability Gramian as follows.
Definition 2.4 (Bilinear Gramians) Let A ∈ R^{n×n} be a stable matrix, N_1, …, N_m ∈ R^{n×n}, B ∈ R^{n×r}, and C ∈ R^{r×n}, and consider the bilinear system (2.1). Moreover, let P_1(t_1) := e^{At_1}B, P_j(t_1, …, t_j) := e^{At_j}[N_1 P_{j−1}, …, N_m P_{j−1}] for j = 2, 3, …, Q_1(t_1) := Ce^{At_1}, and Q_j(t_1, …, t_j) := [N_1^T Q_{j−1}^T, …, N_m^T Q_{j−1}^T]^T e^{At_j} for j = 2, 3, … . We define the controllability and observability Gramian, respectively, as

P := Σ_{j=1}^∞ ∫_0^∞ ⋯ ∫_0^∞ P_j P_j^T dt_1 ⋯ dt_j, Q := Σ_{j=1}^∞ ∫_0^∞ ⋯ ∫_0^∞ Q_j^T Q_j dt_1 ⋯ dt_j.

It is possible that the generalized Gramians from Definition 2.4 do not exist; sufficient conditions are given in, e.g., [33, Theorem 2]. However, if the Gramians exist we also know that they satisfy the following matrix equations:

AP + PA^T + Σ_{i=1}^m N_i P N_i^T + BB^T = 0, A^T Q + QA + Σ_{i=1}^m N_i^T Q N_i + C^T C = 0.

In relation to the generalized controllability and observability Gramians, one might also define a generalized cross Gramian similar to the SISO case discussed in [28]. Consider the generalized Lyapunov equation (1.1), and an approximation X̂_k with related error X^e_k and residual R_k. One can easily verify that for the auxiliary system obtained from (2.1) by replacing B and C with B_{R_k} := US^{1/2} and C_{R_k} := S^{1/2}V^T, where R_k = USV^T is a singular value decomposition of R_k, the associated cross Gramian coincides with the error X^e_k. In the special case of a symmetric system, we additionally have the following result. For what follows, let us recall the definition of the H2-norm for bilinear systems that was introduced by Zhang and Lam in [33].

Definition 2.5
We define the H2-norm of the bilinear system (2.1) as

‖Σ‖_{H2} := ( Σ_{j=1}^∞ ∫_0^∞ ⋯ ∫_0^∞ ‖C P_j(t_1, …, t_j)‖_F^2 dt_1 ⋯ dt_j )^{1/2},

with P_j as in Definition 2.4.
It has been shown in [33, Theorem 6] that if the Gramians P and Q from Definition 2.4 exist, then it holds that ‖Σ‖²_{H2} = trace(CPC^T) = trace(B^TQB).
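The trace identity trace(CPC^T) = trace(B^TQB), cf. [33, Theorem 6], is easy to verify numerically for a small system. The sketch below uses random test data and computes the Gramians with the fixed-point iteration anticipated in Section 4 (assuming ρ(L^{-1}Π) < 1 for the chosen scalings):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Compute the bilinear Gramians P and Q by fixed-point iteration on their
# generalized Lyapunov equations, then check trace(C P C^T) == trace(B^T Q B).
rng = np.random.default_rng(1)
n, m, r = 5, 1, 2
A = -2.0 * np.eye(n) + 0.2 * rng.standard_normal((n, n))
N = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]
B = rng.standard_normal((n, r))
C = rng.standard_normal((r, n))

def gramian(A, Ns, RHS, iters=200):
    # X_{k+1} = -L^{-1}(Pi(X_k) + RHS); converges since rho(L^{-1} Pi) < 1 here
    X = np.zeros_like(A)
    for _ in range(iters):
        X = solve_continuous_lyapunov(A, -(sum(Ni @ X @ Ni.T for Ni in Ns) + RHS))
    return X

P = gramian(A, N, B @ B.T)                     # A P + P A^T + sum N_i P N_i^T + B B^T = 0
Q = gramian(A.T, [Ni.T for Ni in N], C.T @ C)  # A^T Q + Q A + sum N_i^T Q N_i + C^T C = 0
h2_sq_P = np.trace(C @ P @ C.T)
h2_sq_Q = np.trace(B.T @ Q @ B)
```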

ALS and H 2 -optimal model reduction for bilinear systems
In this section, we discuss a low-rank approximation method proposed by Kressner and Sirković in [24]. We show that several results can be generalized from the case of standard Lyapunov equations to the more general form (1.1). Moreover, we show that in the symmetric case the method allows for an interpretation in terms of H2-optimal model reduction for bilinear control systems. With this in mind, we assume that A = A^T ≺ 0 and N_i = N_i^T for i = 1, …, m. If additionally we have that ρ(L^{-1}Π) < 1, the operator M(X) := −L(X) − Π(X) allows us to define a weighted inner product via ⟨X, Y⟩_M := trace(X^T M(Y)), with a corresponding induced M-norm, also known as energy norm, ‖X‖_M := ⟨X, X⟩_M^{1/2}.

ALS for the generalized Lyapunov equation
In [24], it is suggested to construct iterative approximations X̂_k by rank-1 updates that are locally optimal with respect to the M-norm. To be more precise, assume that X is the solution to the symmetric generalized Lyapunov equation (1.1), i.e., M(X) = BB^T. Given an approximation X̂_k, we consider the minimization problem

min_{v,w ∈ R^n} ‖X − (X̂_k + vw^T)‖²_M. (3.1)

Since the minimization involves the constant term ‖X − X̂_k‖²_M, it suffices to focus on

min_{v,w ∈ R^n} J(v, w), where J(v, w) := (1/2)⟨vw^T, M(vw^T)⟩_F − v^T R_k w, (3.2)

and R_k is the current residual, i.e., R_k := L(X̂_k) + Π(X̂_k) + BB^T. Locally optimal vectors v_k and w_k are then (approximately) determined via an alternating linear scheme (ALS). The main step is to fix one of the two vectors, e.g., v, and then minimize the resulting strictly convex objective function in order to obtain an update for w. A pseudocode of the algorithm presented in [24] is given in Algorithm 1. In terms of Proposition 2.1, the ALS-based approach for computing new subspace extensions can also be seen as searching for an approximation of the error X^e_k of rank-1 form.

Algorithm 1: ALS [24]
input : System: A, N_1, …, N_m ∈ R^{n×n}, R_k ∈ R^{n×n}, tol. Initial guess: v, w ∈ R^n
output: New approximation vectors: v_a, w_a ∈ R^n
while the change in (v^T A v/‖v‖² + w^T A^T w/‖w‖²)/2 is larger than tol do
    fix v and update w as the minimizer of J(v, ·); then fix w and update v as the minimizer of J(·, w)

Lemma 3.1 (cf. [24, Lemma 2.3]) Assume that R_k = R_k^T ⪰ 0 and let (v_*, w_*) be a local minimizer of (3.2) with v_* w_*^T ≠ 0. Then v_* w_*^T is symmetric positive semidefinite.

Proof The proof, naturally, follows along the lines of [24, Lemma 2.3]; hence, without loss of generality, we assume that v_* ≠ 0, w_* ≠ 0, and ‖v_*‖ = ‖w_*‖. Thus v_* w_*^T is positive semidefinite if and only if v_* = w_*. The proof is by contradiction, and we assume that v_* ≠ w_*. From the strict convexity of J in each argument we then obtain a strict inequality between the objective value at the symmetrized pair and the average of the objective values at (v_*, w_*) and (w_*, v_*). Simplifying the left- and right-hand sides, the inequality reduces to an inequality between the corresponding Π-terms. We can without loss of generality consider m = 1, and hence only one N-matrix, since the following argument can be applied to all terms in the sum independently. Observing that the remaining term inequality cannot hold strictly yields the desired contradiction, and thus concludes the proof.
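One ALS solve can be realized compactly in the symmetric case. The closed-form matrix H(w) below is derived here from the quadratic structure of J in (3.2) for illustration; it is an assumption of this sketch and not code from [24]:

```python
import numpy as np

# Sketch of ALS (Algorithm 1) for the symmetric case A = A^T < 0, N_i = N_i^T,
# R = R^T >= 0.  For fixed w, J(., w) is a strictly convex quadratic whose
# minimizer solves H(w) v = R w, with (derived for this sketch)
#   H(w) = -((w^T w) A + (w^T A w) I + sum_i (w^T N_i w) N_i).
def als(A, Ns, R, w, tol=1e-12, maxit=500):
    n = A.shape[0]
    def H(u):
        return -((u @ u) * A + (u @ A @ u) * np.eye(n)
                 + sum((u @ Ni @ u) * Ni for Ni in Ns))
    v = w
    obj_old = np.inf
    for _ in range(maxit):
        v = np.linalg.solve(H(w), R @ w)      # minimize J(., w)
        w = np.linalg.solve(H(v), R @ v)      # minimize J(v, .)
        obj = ((v @ A @ v) / (v @ v) + (w @ A @ w) / (w @ w)) / 2
        if abs(obj - obj_old) <= tol * abs(obj):
            break
        obj_old = obj
    return v, w

# Deterministic symmetric test data (assumed example)
n = 5
A = -np.diag(np.arange(1.0, n + 1))
G = np.arange(n * n, dtype=float).reshape(n, n) / (n * n)
Ns = [0.1 * (G + G.T)]
b = np.ones(n)
v, w = als(A, Ns, np.outer(b, b), np.arange(1.0, n + 1))
cosine = abs(v @ w) / (np.linalg.norm(v) * np.linalg.norm(w))
```

In line with Lemma 3.1, the computed pair should end up (numerically) parallel, so that the update vw^T is symmetric positive semidefinite.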
Algorithm 1 and the argument in Lemma 3.1 are based on a residual. However, if one considers X̂_k = 0, then R_k = BB^T, and hence the result is applicable directly to any symmetric generalized Lyapunov equation. The focus on the residual in the previous results is natural, since it leads to the following extension of [24, Theorem 2.4] to the case of symmetric generalized Lyapunov equations.
Theorem 3.1 Consider the symmetric generalized Lyapunov equation (1.1) and the iteration X̂_0 := 0,

X̂_{k+1} := X̂_k + v_{k+1} v_{k+1}^T, (3.3)

where v_{k+1} is a locally optimal vector computed with ALS (Algorithm 1). Then R_k = R_k^T ⪰ 0 for all k ≥ 0.
Proof We show the assertion by induction. It trivially holds that R_0 = R_0^T ⪰ 0. Now assume that this is the case for some R_k. From Lemma 3.1 the local minimizers of (3.2) are symmetric, and hence X̂_{k+1} in (3.3) is well defined. Moreover, since X̂_{k+1} and the operators in (1.1) are symmetric, we know that R_{k+1} is symmetric. Thus what is left to show is that R_{k+1} ⪰ 0.
We have that R_{k+1} ⪰ 0 if and only if y^T R_{k+1} y ≥ 0 for all y ∈ R^n. Hence take an arbitrary y ∈ R^n and consider y^T R_{k+1} y. We derive properties similar to [24, equations (12)-(14)]. Let w = v_{k+1}, and note that the gradient ∇_v J_w of J_w := J(·, w) with respect to v can be written out explicitly. Due to the optimality of v_{k+1} with respect to J_{v_{k+1}}, first-order optimality conditions then imply that ∇_v J_{v_{k+1}}(v_{k+1}) = 0; striking this equality with v_{k+1}^T from the left yields a scalar identity. Based on these two relations we can write the residual as R_{k+1} = R_k + L(v_{k+1}v_{k+1}^T) + Π(v_{k+1}v_{k+1}^T) and expand y^T R_{k+1} y, exploiting the symmetry of the involved matrices. Rearranging, identifying the term −2(v_{k+1}^T A v_{k+1}) v_{k+1}v_{k+1}^T, and inserting the scalar identity shows that y^T R_{k+1} y ≥ 0. This asserts the inductive step and hence concludes the proof.
3.2 H2-optimal model reduction for symmetric state-space systems

For the standard Lyapunov equation it has been shown in [8] that minimization of the energy norm induced by the Lyapunov operator (see [32]) is related to H2-optimal model reduction for linear control systems. As it turns out, a similar conclusion can be drawn for the minimization of the cost functional (3.2) and H2-optimal model reduction for symmetric bilinear control systems. In this regard, let us briefly summarize the most important concepts from bilinear model reduction. Given a bilinear system Σ as in (2.1) with dim(Σ) = n, the goal of model reduction is to construct a surrogate model Σ̂ of the form (2.1) with Â, N̂_i ∈ R^{k×k}, B̂ ∈ R^{k×r}, Ĉ ∈ R^{r×k}, and control inputs u(t) ∈ R^r and w(t) ∈ R^m. In particular, the reduced system should satisfy k ≪ n and ŷ(t) ≈ y(t) in some norm. For the bilinear H2-norm defined in Definition 2.5, the authors of [6,19] have suggested an algorithm, BIRKA, that iteratively tries to compute a reduced model satisfying first-order necessary conditions for H2-optimality. A corresponding pseudocode is given in Algorithm 2.
Algorithm 2: BIRKA [6, Algorithm 2] and [19, Algorithm 5]
input : System: A, N_1, …, N_m ∈ R^{n×n}, B ∈ R^{n×r}, C ∈ R^{r×n}, tol. Initial guess: Ṽ, W̃ ∈ R^{n×k}
output: Optimal H2-approximation spaces: Ṽ_opt, W̃_opt ∈ R^{n×k}
while the change in the eigenvalues of Ã is larger than tol do

To establish the connection we introduce the following generalizations of the operator M:

M̂(X) := −(AX + XÂ^T) − Σ_{i=1}^m N_i X N̂_i^T, (3.6)
M̌(X̂) := −(ÂX̂ + X̂Â^T) − Σ_{i=1}^m N̂_i X̂ N̂_i^T, (3.7)

where Â = V^T A V and N̂_i = V^T N_i V for i = 1, …, m, and V ∈ R^{n×k} is orthogonal. Our first result is concerned with the invertibility of the operators M̂ and M̌, respectively.
Proof Note that σ(M̂) is determined by the eigenvalues of the matrix −(A ⊗ I + I ⊗ Â) − Σ_{i=1}^m N_i ⊗ N̂_i. Similarly, we obtain σ(M) by computing the eigenvalues of the matrix M := −(A ⊗ I + I ⊗ A) − Σ_{i=1}^m N_i ⊗ N_i. Since A and the N_i are assumed to be symmetric, we conclude that M = M^T ≻ 0. Let us then define the orthogonal matrix V := I ⊗ V. It follows that M̂ = V^T M V and, consequently, M̂ = M̂^T ≻ 0. A similar argument with V := V ⊗ V can be applied to show the second assertion.
Given a reduced bilinear system, we naturally obtain an approximate solution to the generalized Lyapunov equation. Moreover, the error with respect to the M-inner product is given by the H2-norms of the original and reduced systems, respectively.
Proposition 3.2 Let X be the solution to (1.1), and let X̂ be the solution to the reduced equation ÂX̂ + X̂Â^T + Σ_{i=1}^m N̂_i X̂ N̂_i^T + B̂B̂^T = 0.

Proof By assumption it holds that the controllability Gramian X exists and the operator M is invertible. Moreover, for the reduced system we obtain the corresponding Gramian X̂, and the assertion follows. Extending the results from [8], we obtain a lower bound for the previous terms by means of the H2-norm of the error system Σ − Σ̂.
Proof The proof follows by arguments similar to those used in [8, Lemma 3.1]. By the definition of the H2-norm for bilinear systems, we have an expression for ‖Σ − Σ̂‖²_{H2} in terms of the controllability Gramian X^e of the error system. Analyzing the block structure of X^e, we thus find an equivalent expression in terms of X, X̂ and the cross term Y. We claim that trace([B^T, −B̂^T][Y; X̂]B̂) ≥ 0, which then shows the first assertion. In fact, Y and X̂ are the solutions of AY + YÂ^T + Σ_{i=1}^m N_i Y N̂_i^T + BB̂^T = 0 and ÂX̂ + X̂Â^T + Σ_{i=1}^m N̂_i X̂ N̂_i^T + B̂B̂^T = 0, respectively. With the operators introduced in (3.6) and (3.7), we obtain Y = M̂^{-1}(BB̂^T) and X̂ = M̌^{-1}(B̂B̂^T). As in [8, Lemma 3.1], it follows that the previous expression contains the Schur complement of a matrix which can be shown to be positive semidefinite. We omit the details here and instead refer to [8].
Assume now that Σ̂ is locally H2-optimal. From [33], we have first-order necessary optimality conditions involving Y and X̂ as before, and Z, Ẑ satisfying the corresponding dual equations. From the symmetry of A, Â, N_i and N̂_i, as well as the fact that B = C^T and B̂ = Ĉ^T, we conclude that Ẑ = X̂ and Z = −Y. Hence, from the optimality conditions we obtain relations which in particular imply that trace([B^T, −B̂^T][Y; X̂]B̂) = 0. This shows the second assertion.
As a consequence from Proposition 3.2 and Proposition 3.3, we obtain the following result.
Let A = A^T ≺ 0, N_i = N_i^T for i = 1, …, m, and B = C^T be given. Assume that ρ(L^{-1}Π) < 1. Given an orthogonal V ∈ R^{n×k}, k < n, define a reduced bilinear system Σ̂ via Â = V^T A V, N̂_i = V^T N_i V, B̂ = V^T B, Ĉ = CV. If Σ̂ is locally H2-optimal, then VX̂V^T is locally optimal with respect to the M-norm.

Equivalence of ALS and rank-1 BIRKA
So far we have shown that having a subspace producing a locally H2-optimal reduced model implies having a subspace for which the Galerkin approximation is locally optimal in the M-norm. In this part we, algorithmically, establish an equivalence between BIRKA and ALS. More precisely, for the symmetric case the equivalence is between BIRKA applied with a target model reduction subspace of dimension 1 for (2.1), and ALS applied to (1.1). The proof is based on the following lemmas. When speaking about redundant steps and operations we mean that the entities assigned in that step are exactly equal to other, existing, entities. In such a situation the algorithm can be rewritten, by simply changing the notation, in a way that skips the redundant step and still produces the same result.

Proof The proof is by induction, and it suffices to show that if Ṽ = W̃ at the beginning of a loop, the same holds at the end of the loop. Thus assume Ṽ = W̃.
Then the projected quantities coincide; in particular, C̃ = CṼ = B^T W̃ = B̃^T. By Lemma 3.2 we do not need to consider Steps 2-3. We can now conclude that Step 4 and Step 5 are equal, and thus at the end of the iteration we still have Ṽ = W̃.

Lemma 3.4 Consider a symmetric generalized Lyapunov equation and let v, w ∈ R^n be two given vectors. Let v_ALS, w_ALS ∈ R^n be the approximations obtained by applying the ALS algorithm (Algorithm 1) to (1.1) with initial guesses v and w. If v = w, then v_ALS = w_ALS.
Proof Similar to the proof of Lemma 3.3, it is enough to show that if v = w at the beginning of a loop, then it also holds at the end of the loop. Hence we assume that v = w. Then Â_1 = Â_2 follows by direct calculations. Moreover, by assumption R_k = R_k^T. Thus Step 3 and Step 6 are equal, and hence at the end of the iteration we still have that v = w.

Theorem 3.3 Consider a symmetric generalized Lyapunov equation and let v ∈ R^n be a given vector. Let v_BIRKA ∈ R^n be the approximation obtained by applying BIRKA (Algorithm 2) to (1.1) with C = B^T and initial guess v. Moreover, let v_ALS ∈ R^n be the approximation obtained by applying the ALS algorithm (Algorithm 1) to (1.1) with initial guess v. Then v_BIRKA = v_ALS.
Proof Firstly, Lemma 3.3 and Lemma 3.4 make it reasonable to assess the algorithms with only a single initial guess as well as a single output. Moreover, Step 5 in BIRKA as well as Steps 2-4 in ALS are redundant. Furthermore, we know from Lemma 3.2 that in this situation Steps 2, 3, and 6 of BIRKA are also redundant. Hence we need to compare the procedure consisting of Steps 1 and 4 from BIRKA with the procedure consisting of Steps 1, 5, and 6 from ALS. It can be observed that the computations are equivalent, and thus the asserted equality holds if the algorithms stop after an equal number of iterations. We hence consider the stopping criteria and note that they are the same.

Corollary 3.2 Theorem 3.1 is applicable with ALS changed to BIRKA, using subspaces of dimension 1.

Fixed-point iteration and approximative M -norm minimization
In this section we present a result similar to that of the previous section, but for the fixed-point iteration. Instead of (locally) minimizing the error in the M-norm with rank-1 updates, the fixed-point iteration minimizes an upper bound of the M-norm, with no rank constraint on the minimizer. As in the previous section, we assume that A = A^T ≺ 0 and N_i = N_i^T for i = 1, …, m, as well as ρ(L^{-1}Π) < 1. We remind ourselves that with these assumptions the symmetric generalized Lyapunov equation (1.1) has a unique solution X, which is symmetric and positive definite, see, e.g., [12, Theorem 4.1].
Recall the fixed-point iteration for the generalized Lyapunov equation (1.1),

X̂_{k+1} = −L^{-1}( Π(X̂_k) + BB^T ), (4.1)

with X̂_0 = 0. The iteration has been presented as a convergent splitting in, e.g., [12, Equation (12)], [33, Equation (12)], and [29, Equation (4)]. We want to relate iteration (4.1) to the M-norm minimization problem. Hence consider the problem

min_{∆ = ∆^T ⪰ 0} ‖X − (X̂_k + ∆)‖²_M.

A reason to restrict the minimization to symmetric positive semidefinite matrices is that we know that the solution satisfies X = X^T ≻ 0. Hence an iteration started with X̂_0 = 0 creates a sequence of symmetric positive semidefinite approximations, ordered as X̂_{k+1} ⪰ X̂_k, and converging to the symmetric positive definite solution, cf. Section 3.1 and [24, Theorem 2.4]. Naturally, Proposition 2.1 gives us the solution in just one step. However, that computation is as difficult as the original problem, and hence we consider minimizing an upper bound. As before we disregard the constant term ‖X − X̂_k‖²_M in the minimization and consider

min_{∆ = ∆^T ⪰ 0} (1/2)⟨∆, M(∆)⟩_F − ⟨∆, R_k⟩_F, where (1/2)⟨∆, M(∆)⟩_F − ⟨∆, R_k⟩_F ≤ (1/2)⟨∆, −L(∆)⟩_F − ⟨∆, R_k⟩_F,

and where the inequality comes from the fact that ∆ and Π(∆) are positive semidefinite matrices, and hence trace(∆Π(∆)) is non-negative [27]. We want to say that the upper bound is minimized by ∆ = −L^{-1}(R_k), which is true if the residual is symmetric and positive semidefinite. We also want to show that this iteration is equivalent to the fixed-point iteration. The argument is closed by the following result.

Theorem 4.1 Consider the iteration X̂_0 := 0,

X̂_{k+1} := X̂_k − L^{-1}(R_k), (4.2)

where R_k is the residual associated with X̂_k. Then X̂_k = X̂_k^T ⪰ 0 and R_k = R_k^T ⪰ 0 for all k ≥ 0. Furthermore, the iteration (4.2) is equivalent to (4.1).
Proof The first part of the proof is by induction and similar to that of Theorem 3.1. It trivially holds that X̂_0 = X̂_0^T ⪰ 0 and R_0 = R_0^T ⪰ 0. Now assume that this is the case for some k. Then ∆ := −L^{-1}(R_k) is symmetric and positive semidefinite, and hence X̂_{k+1} is symmetric and positive semidefinite. Moreover, since X̂_{k+1} is symmetric and because of the structure of the operators in (1.1), we know that R_{k+1} is symmetric. Thus what is left to show for the first part is R_{k+1} ⪰ 0. We have that R_{k+1} ⪰ 0 if and only if y^T R_{k+1} y ≥ 0 for all y ∈ R^n. Hence take an arbitrary y ∈ R^n and consider y^T R_{k+1} y = y^T R_k y + y^T (L(∆) + Π(∆)) y = y^T Π(∆) y ≥ 0, where we in the first equality use the linearity of the operators and in the second the definition of ∆. The last inequality holds since ∆ is symmetric positive semidefinite and Π preserves positive semidefiniteness.
For the second part of the proof, what is left to show is that (4.2) and (4.1) are equivalent. This follows directly from adding and subtracting L(X̂_k) on the right-hand side of (4.1) and using the linearity of L. Remark 4.1 One could consider creating a subspace iteration from (4.2), by computing a few singular vectors of L^{-1}(R_k) and adding them to the basis. The method seems to have nice convergence properties per iteration in the symmetric case, but not in the non-symmetric case. However, the (naïve) computations are prohibitively expensive. See [29] for a computationally more efficient way of exploiting the fixed-point iteration.
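The equivalence of (4.1) and (4.2), as well as the positive semidefiniteness of the iterates, can be observed numerically. The sketch below (random symmetric test data, a dense Lyapunov solver playing the role of L^{-1}; all data are assumptions of this example) runs both forms side by side:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Fixed-point iteration (4.1) and its residual form (4.2) produce the same
# sequence of iterates; test data are random and symmetric (assumption).
rng = np.random.default_rng(3)
n = 6
S = rng.standard_normal((n, n))
T = rng.standard_normal((n, n))
A = -np.diag(np.arange(1.0, n + 1)) + 0.05 * (S + S.T)   # A = A^T < 0
N = [0.05 * (T + T.T)]                                    # N_1 = N_1^T
B = rng.standard_normal((n, 2))

def inv_L(Q):                # returns -L^{-1}(Q), i.e. X with A X + X A^T + Q = 0
    return solve_continuous_lyapunov(A, -Q)

X1 = np.zeros((n, n))
X2 = np.zeros((n, n))
for _ in range(50):
    X1 = inv_L(sum(Ni @ X1 @ Ni.T for Ni in N) + B @ B.T)             # (4.1)
    R2 = A @ X2 + X2 @ A.T + sum(Ni @ X2 @ Ni.T for Ni in N) + B @ B.T
    X2 = X2 + inv_L(R2)                                               # (4.2)

gap = np.linalg.norm(X1 - X2)
min_eig = np.linalg.eigvalsh((X1 + X1.T) / 2).min()      # iterates should be psd
final_res = np.linalg.norm(A @ X1 + X1 @ A.T
                           + sum(Ni @ X1 @ Ni.T for Ni in N) + B @ B.T)
```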

A residual-based rational Krylov generalization
Given Proposition 2.1 and the discussion in Sections 3 and 4, it seems that constructing an iteration based on the current residual can be a useful technique when solving the generalized Lyapunov equation. Moreover, in [24, Section 4] it is suggested that so-called preconditioned residuals can be used for expanding the search space. It is further suggested that, for the Lyapunov equation, one such preconditioner could be a one-step-ADI preconditioner P_ADI^{-1} = (A − σI)^{-1} ⊗ (A − σI)^{-1}, for a suitable choice of the shift. This strategy can be, somewhat, motivated by Theorem 4.1.
In this section we present a method that can be seen as a generalization of the rational Krylov subspace method. We further motivate why it can be natural to see it as a generalization of the rational Krylov subspace method, by considering the standard Lyapunov equation.

Suggested search space
For the generalized Lyapunov equation (1.1), we suggest the following search space:

K_k := Span{ B, (A − σ_1 I)^{-1} r_0, …, (A − σ_k I)^{-1} r_{k−1} }, (5.1)

where r_{k−1} is the most dominant left singular vector of the Galerkin residual R_{k−1} of K_{k−1}, and {σ_i}_{i=1}^k is a sequence of shifts that needs to be chosen. In analogy with the discussion in [15], we suggest that the shift is chosen iteratively, where the approximation error along the current direction is the largest. More precisely,

σ_k = arg max_{σ ∈ [σ_min, σ_max]} ‖ (A − σI)^{-1} r_{k−1} − V_{k−1}(A_{k−1} − σI)^{-1} V_{k−1}^T r_{k−1} ‖, (5.2)

where V_{k−1} is a matrix with orthogonal columns containing a basis of K_{k−1}, A_{k−1} = V_{k−1}^T A V_{k−1}, and [σ_min, σ_max] is a search interval. Typically, for a stable matrix A, σ_min is the negative real part of the eigenvalue of A with largest real part, and correspondingly σ_max is the negative real part of the eigenvalue of A with smallest real part. Equations (5.1) and (5.2) can be straightforwardly incorporated in a Galerkin method for the generalized Lyapunov equation; the pseudocode is presented in Algorithm 3.
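A compact sketch of the resulting method is given below. A crude grid search stands in for the shift maximization, the diagonal test matrices are assumptions of the example, and the projected problems are solved by vectorization:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

# Illustrative sketch of the search space (5.1) with a grid search in place
# of the exact shift optimization (5.2).  Random diagonal test data (assumed).
rng = np.random.default_rng(4)
n = 40
A = -np.diag(np.linspace(1.0, 50.0, n))
N = [0.05 * np.diag(rng.random(n))]
B = rng.standard_normal((n, 1))

def residual(X):
    return A @ X + X @ A.T + sum(Ni @ X @ Ni.T for Ni in N) + B @ B.T

def galerkin(V):
    Ak, Bk = V.T @ A @ V, V.T @ B
    Nk = [V.T @ Ni @ V for Ni in N]
    k = V.shape[1]
    M = (np.kron(np.eye(k), Ak) + np.kron(Ak, np.eye(k))
         + sum(np.kron(Nki, Nki) for Nki in Nk))
    return V @ np.linalg.solve(M, -(Bk @ Bk.T).ravel()).reshape(k, k) @ V.T

V = np.linalg.qr(B)[0]
grid = np.linspace(1.0, 50.0, 25)            # [sigma_min, sigma_max], mirrored spectrum
res_norms = []
for _ in range(8):
    R = residual(galerkin(V))
    res_norms.append(np.linalg.norm(R))
    r = svd(R)[0][:, 0]                      # dominant left singular vector
    Ak = V.T @ A @ V
    def gap(s):                              # approximation error along r, cf. (5.2)
        full = np.linalg.solve(A - s * np.eye(n), r)
        red = V @ np.linalg.solve(Ak - s * np.eye(V.shape[1]), V.T @ r)
        return np.linalg.norm(full - red)
    sigma = max(grid, key=gap)
    v = np.linalg.solve(A - sigma * np.eye(n), r)
    v = v - V @ (V.T @ v)
    v = v - V @ (V.T @ v)                    # re-orthogonalize
    V = np.hstack([V, (v / np.linalg.norm(v))[:, None]])
```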

Algorithm 3:
Residual-based rational-Krylov-type solver
input : A, N_1, …, N_m ∈ R^{n×n}, B ∈ R^{n×r}, tol
output: X̂
repeat: compute the projected matrices; solve the projected problem; construct the (Galerkin) approximation; compute the residual; compute the next shift and extend the basis.

Such computations may, however, introduce inexactness and present a difficulty in a convergence analysis.

Remark 5.2
The dynamic shift search in Step 9 can, heuristically, be exchanged for an analogue of [15, (2.4) and (2.2)]. In our case we suggest choosing the shift as the maximizer over ∂S in (5.3), where S approximates the mirrored spectrum of A, ∂S is the boundary of S, and the rational function being maximized has the Ritz values of A_{k−1} as roots. Typically S is approximated at each step using the convex hull of the Ritz values of A_{k−1}. This has been observed to be efficient in experiments, since the maximization of (5.3) is computationally faster than that of (5.2). For a practical comparison of convergence properties, see Section 6.

Remark 5.3
Steps 8-9 in Algorithm 3 can be exchanged for a tangential-direction approach according to [16]. One practical, although heuristic, way is to do the shift search according to either (5.2) or (5.3), and then compute the principal direction(s) through a singular value decomposition, as described in [16, Section 3]. It has been observed in experiments that such an approach tends to speed up the convergence, in terms of computation time, since the computation of the residual is costly.
Remark 5.4 It is (sometimes) desirable to allow for complex conjugate shifts σ_k and σ̄_k although, for reasons of computation and model interpretation, one wants to keep the basis real. This is achievable also in the proposed setting by observing the following standard relation:

Span{ (A − σ_k I)^{-1} r_{k−1}, (A − σ̄_k I)^{-1} r_{k−1} } = Span{ Re((A − σ_k I)^{-1} r_{k−1}), Im((A − σ_k I)^{-1} r_{k−1}) }.

Note, however, that this requires the two conjugate shifts to be used together with the single vector r_{k−1}.
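The relation can be checked directly; in the sketch below the matrix, vector, and shift are arbitrary (assumed) test data:

```python
import numpy as np

# Sketch of Remark 5.4: for real A and r, the solves with a conjugate pair of
# shifts span the same space as the real and imaginary parts of one solve.
rng = np.random.default_rng(5)
n = 8
A = rng.standard_normal((n, n))
r = rng.standard_normal(n)
sigma = 1.5 + 2.0j                                   # arbitrary complex shift

x = np.linalg.solve(A - sigma * np.eye(n), r)
x_conj = np.linalg.solve(A - np.conj(sigma) * np.eye(n), r)

conj_ok = np.allclose(x_conj, np.conj(x))            # conjugate shift gives conj(x)
# all four vectors lie in one 2-dimensional space
rank = np.linalg.matrix_rank(np.column_stack([x, x_conj, x.real, x.imag]))
```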

Analogies to the linear case
To further motivate the suggested subspace in (5.1), we draw parallels with the (standard) rational Krylov subspace for the standard Lyapunov equation. The reasoning in this section can be compared to [4, Section 2]. We consider the equation

L(X) + bb^T = 0, (5.4)

where L is defined by (1.2) and b ∈ R^n. Note that Definitions 2.2 and 2.3 carry over to the (standard) Lyapunov equation (5.4), with Π = 0. To prove the main result of this section we need the following lemma.
Lemma 5.1 Let A ∈ R^{n×n} and let s_a ∈ R be any scalar such that (A − s_a I) is nonsingular. Moreover, let V ∈ R^{n×k}, k ≤ n, be orthogonal, i.e., V^T V = I, and let R ∈ R^{n×n} be such that Range((A − s_a I)^{-1} R) ⊆ Range(V). Then (A − s_a I)^{-1} R = V V^T (A − s_a I)^{-1} R.

Theorem 5.1 Let A ∈ R^{n×n}, b ∈ R^n, and let {s_i}_{i=1}^{k+1} be a sequence of shifts such that A − s_i I is nonsingular, and define K_k := Span{b, (A − s_1 I)^{-1} b, …, (A − s_k I)^{-1} b}, and K_{k+1} analogously. Let V_k be a basis of K_k, V_{k+1} a basis of K_{k+1}, and let v_{k+1} ∈ R^n be such that V_{k+1} = [V_k, v_{k+1}]. Moreover, let R_k ∈ R^{n×n} be the Galerkin residual with respect to (5.4). Then each column of (A − s_{k+1} I)^{-1} R_k is in Span(V_{k+1}), i.e., Range((A − s_{k+1} I)^{-1} R_k) ⊆ K_{k+1}. Moreover, if Range((A − s_{k+1} I)^{-1} R_k) ⊆ K_k, then R_k = 0.

Proof We start by proving the first claim. From [15, Proposition 4.2] we have that R_k = UJU^T, where U = [u_1, u_2] with u_1 := V_k Y_k H_k^{-T} e_k h_{k+1,k} and u_2 := v_{k+1} s_{k+1} − (I − V_k V_k^T) A v_{k+1}. From the definition of u_1 one can see that u_1 = V_k α for some vector α, hence u_1 ∈ K_k, and thus (A − s_{k+1} I)^{-1} u_1 ∈ K_{k+1}. Similarly, from the definition of u_2 one can see that (A − s_{k+1} I)^{-1} u_2 ∈ K_{k+1}. To prove the second claim we assume that Range((A − s_{k+1} I)^{-1} R_k) ⊆ K_k = Span(V_k). Under this assumption we can use Lemma 5.1 and the fact that R_k = R_k^T to conclude that R_k = 0, since R_k is the Galerkin residual and thus V_k^T R_k V_k = 0.
The main observation and conclusion from Theorem 5.1 can be phrased as follows: Consider the two spaces K_k := Span{b, (A − s_1 I)^{-1} b, . . . , (A − s_k I)^{-1} b} and K̃_k := Span{R_{−1}, (A − s_1 I)^{-1} R_0, . . . , (A − s_k I)^{-1} R_{k−1}}, where R_{−1} = b and R_i is the Galerkin residual in the space K_i, i = 0, 1, . . . , k − 1. Then, in all relevant cases, i.e., when R_i ≠ 0 for i = −1, 0, . . . , k − 1, we have that K_k = K̃_k. Hence the suggested subspace in (5.1) can be seen as a natural generalization of a rational Krylov subspace for linear matrix equations.
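The quantities appearing in Theorem 5.1 can be illustrated numerically. The following NumPy/SciPy sketch (matrix, shifts, and dimensions are arbitrary assumptions; SciPy's Lyapunov solver stands in for a direct solve of the projected equation) builds the space K_k for (5.4), forms the Galerkin approximation, and verifies the defining orthogonality V_k^T R_k V_k = 0 of the Galerkin residual:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, k = 100, 5
A = -np.diag(np.linspace(1.0, 10.0, n)) + 0.05 * rng.standard_normal((n, n))  # assumed stable
b = rng.standard_normal((n, 1))
shifts = np.linspace(1.0, 5.0, k)  # hypothetical real shifts s_1, ..., s_k

# K_k = span{b, (A - s_1 I)^{-1} b, ..., (A - s_k I)^{-1} b}
cols = [b] + [np.linalg.solve(A - s * np.eye(n), b) for s in shifts]
V, _ = np.linalg.qr(np.hstack(cols))  # orthonormal basis V_k

# Galerkin approximation: solve the projected Lyapunov equation H Y + Y H^T + c c^T = 0
H = V.T @ A @ V
c = V.T @ b
Y = solve_continuous_lyapunov(H, -(c @ c.T))
Xk = V @ Y @ V.T

# Galerkin residual with respect to (5.4), orthogonal to the subspace by construction
R = A @ Xk + Xk @ A.T + b @ b.T
assert np.linalg.norm(V.T @ R @ V) < 1e-8
```

The orthogonality V_k^T R_k V_k = 0 holds by construction of the Galerkin projection; the residual itself is of low rank, which is what the residual-based space K̃_k exploits.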
Remark 5.5 The result in Theorem 5.1 generalizes naturally to the case B ∈ R^{n×r}, where the arguments have to be carried out with block matrices. As noted in [15, just before Section 4.1], the changes when going to blocks are mostly technical. Results needed to generalize [15, Proposition 4.2] are found in, e.g., [1], and the block case is implemented in the code available at Simoncini's webpage.1

Numerical examples
In this section we numerically compare the different methods discussed in the paper. All algorithms are treated in a subspace fashion, and we compare the practically achieved approximation quality as a function of subspace dimension. For small and moderately sized problems there are algorithms for computing the full solution, albeit costly ones, see [23, Algorithm 2], cf. [33, equation (12)]. This allows for inspection of the actual relative error, i.e., ‖X − X̃_k‖/‖X‖, where X̃_k is the approximation and X is the exact solution. Moreover, it also allows comparison with the optimal (in the Frobenius norm) low-rank approximation based on the SVD.
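The SVD baseline referred to above is the Eckart-Young optimal low-rank approximation. A small sketch (on a synthetic symmetric matrix standing in for the exact solution X) of how the optimal relative Frobenius error for a given rank is obtained:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
# synthetic symmetric test matrix with decaying singular values (assumption,
# stands in for the exact solution X of the matrix equation)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = U @ np.diag(0.5 ** np.arange(n)) @ U.T

# best rank-k approximation in the Frobenius norm (Eckart-Young theorem)
k = 10
u, s, vt = np.linalg.svd(X)
Xk = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

rel_err = np.linalg.norm(X - Xk) / np.linalg.norm(X)  # Frobenius norms
# Eckart-Young: the optimal error equals the norm of the discarded singular values
bound = np.sqrt(np.sum(s[k:] ** 2)) / np.linalg.norm(X)
assert abs(rel_err - bound) < 1e-10
```

No subspace method can achieve a smaller relative Frobenius error at rank k than this baseline, which is why it serves as the reference curve in the plots.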
We summarize some of the implementation details. Specifically, BIRKA is implemented as described in Algorithm 2, with a maximum allowed number of iterations equal to 100. The convergence tolerance is implemented as the relative norm difference of the vector of sorted eigenvalues and was set to 10^{−3}. Each subspace is computed separately, since the algorithm does not have a natural way to extend the subspace dimension based on a smaller subspace. The complete method based on ALS is a subspace method along the lines of [24, Algorithm 3], rather than an iteratively updated method as described in Theorem 3.1. Moreover, because of the structure of the generalized Lyapunov equation, the solution is symmetric even if the coefficient matrices are not. Hence we use a symmetric version of ALS even for the non-symmetric examples. The convergence tolerance is implemented analogously to BIRKA, although here the compared quantity is a scalar in each iteration. The maximum allowed number of iterations in ALS was set to 20, and the tolerance was set to 10^{−2}. Regarding rational-Krylov-type methods there is a plethora of variants to choose from. We compare the following methods, for which we give short labels used in the legends further down:
- A: K_k as in (5.1), according to Algorithm 3
- B: Algorithm 3, but with tangential directions according to Remark 5.3, though with shifts according to (5.2)
- C: Algorithm 3, but with shifts according to (5.3)
- D: Algorithm 3, but with tangential directions according to Remark 5.3 and shifts according to (5.3)
- E: Standard rational Krylov. More precisely, K_k similar to (5.1), but instead of using the singular vector of the residual, r_{k−1}, we use the right-hand side B in both (5.1) and (5.2)
- F: K_k as in (5.1), but with shifts prescribed beforehand, given by recycling the mirrored eigenvalues from a size-10 BIRKA (convergence tolerance set to 10^{−3}). The mirrored eigenvalues are potentially complex, have positive real part, and are taken in ascending order according to their real parts.
For C, D, and F the shifts may be complex-valued, and complex arithmetic is avoided by creating the space in accordance with Remark 5.4. For the shift-computing versions of the rational-Krylov-type algorithms, the maximum and minimum eigenvalues were computed, and the mirrored shift-search boundaries were perturbed by the factors 1.01 and 0.99 for the upper and lower boundaries, respectively, so as to slightly enlarge the region. Orthogonalization of the basis is implemented very similarly to how SVD-based orthogonalization is done in MATLAB. More precisely, it uses the MATLAB built-in QR factorization and keeps a vector only if the corresponding diagonal element of R is large enough. An implementation of the methods A-F is available online.2 The simulations were done in MATLAB R2018a (9.4.0.813654) on a computer with four 1.6 GHz processors and 16 GB of RAM. We test the algorithms on three different problems. In all problems we compute an approximation to the controllability Gramian of an associated bilinear control system, as in (2.2). The examples all have stable Lyapunov operators. The first example is symmetric, the second is non-symmetric but symmetrizable, and the third is non-symmetric.
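The rank-deciding orthogonalization described above can be sketched as follows in Python using a pivoted QR factorization (the tolerance and the test matrix are illustrative assumptions; the paper's implementation uses MATLAB's built-in QR):

```python
import numpy as np
from scipy.linalg import qr


def orth_truncate(W, tol=1e-12):
    """Orthonormal basis for range(W): keep a direction only if the
    corresponding diagonal element of R in a pivoted QR is large enough,
    mimicking the rank decision described in the text."""
    Q, R, _ = qr(W, mode='economic', pivoting=True)
    # with pivoting, |diag(R)| is non-increasing; threshold relative to the largest
    r = int(np.sum(np.abs(np.diag(R)) > tol * max(1.0, abs(R[0, 0]))))
    return Q[:, :r]


rng = np.random.default_rng(3)
W = rng.standard_normal((50, 3))
W = np.hstack([W, W[:, :1] + W[:, 1:2]])  # 4 columns, but only rank 3

V = orth_truncate(W)
print(V.shape)  # expect (50, 3): the dependent column is discarded
assert V.shape[1] == 3
assert np.allclose(V.T @ V, np.eye(3))
```

Such a truncation step keeps the basis well-conditioned when newly added directions are (numerically) contained in the current subspace, which is exactly the situation method E runs into when it hits an invariant subspace.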

Heat equation
The first example is motivated by an optimal control problem for the selective cooling of steel profiles, see [17]. In this example, the state variable y models the evolution of a temperature and is described by a two-dimensional heat equation, where a control u(t) enters bilinearly from the left boundary through a Robin boundary condition, and the other spatial boundaries satisfy homogeneous Dirichlet conditions. The control can be interpreted as the spraying intensity of a cooling fluid. The system is discretized in space using centered finite differences, which yields a bilinear system with A ∈ R^{5041×5041}, B ∈ R^{5041}, m = 1, and N_1 = N ∈ R^{5041×5041}. It can further be noted that A = A^T ≺ 0 and N = N^T, and hence the theory of H_2-optimality and the definition of the M-norm, (3.1), from above are applicable. We compare the different methods discussed in the paper, both in terms of the relative residual norm and the relative error. For readability the plots have been split: in Figure 6.1 we compare across the different methods, and in Figure 6.2 we compare between the different flavours of the rational-Krylov-type methods. It can be observed, see Figure 6.1, that for this problem BIRKA performs extremely well, even outperforming the SVD in relative residual norm. Nevertheless, the larger BIRKA subspaces are rather costly to compute. In comparison, ALS performs well relative to the rational-Krylov-type subspaces, and is rather cheap to compute. When comparing the different rational-Krylov-type methods, see Figure 6.2, we see that standard rational Krylov (E) has the problem that it reaches an invariant subspace and is unable to extend beyond dimension 13; however, stagnation is observed already before that. The methods A, C, and F are similar, while B and D are only slightly worse in the error per subspace dimension comparison and in practice sometimes even faster to compute.
Since the M-norm is defined for this problem, we also compare the relative error in this norm, see Figure 6.3. The trend is similar to that in the Frobenius norm, although it can be observed that the errors are in general smaller and that BIRKA shows the best performance, even compared to the SVD (note, however, that this is the relative error in a different norm).

1D Fokker-Planck
The second example is from quantum physics, where a one-dimensional Fokker-Planck equation describes the evolution of a probability density function, ρ, of a particle affected by a potential. Parts of the potential can be manipulated by a so-called optical tweezer, which constitutes our control. For more details see [21]. More precisely, we consider a Fokker-Planck equation with the potential V(x,t) = W(x) + α(x)u(t), where the ground (fixed) potential is W(x) = ((0.5x^2 − 15)x^2 + 199x^2 + 28x + 50)/200, and α(x) is an approximately linear control shape function, see [11] for further details. In a weighted inner product, the dynamics can be described by self-adjoint operators. However, here we employ an upwind-type finite difference scheme with 5000 grid points, leading to a non-symmetric system. As has been pointed out in [21], the system matrix A is not asymptotically stable due to a simple zero eigenvalue associated with the stationary probability distribution. Using a projection-based decoupling, it is however possible to work with an asymptotically stable system of dimension n = 4999. Similar to the first example, the control variable is a scalar and, consequently, we only obtain a single bilinear coupling matrix N_1 = N. Since the system is non-symmetric, the operator M is in general indefinite and hence we make no comparisons in the M-norm. The plots in Figures 6.4 and 6.5 are analogous to the plots in Figures 6.1 and 6.2, respectively. However, for this example the direct solver stagnated at a relative residual of about 10^{−8}, which can be seen in the stagnation of the SVD approximation in the left plot of Figure 6.4. Nevertheless, we believe the comparisons to be fair more or less up to the point of stagnation, which is further justified by the relative residual plots showing similar behaviour. The relative residuals indicate stagnation around 10^{−8} also for the other methods, although not quite as clearly as for the SVD.
From Figure 6.4 we see that BIRKA performs well for this problem. However, the subspaces of dimension 28 and 29 did not converge within 100 iterations, and hence these, as well as dimension 30, are left out of the plots. This illustrates a drawback of the method. The performance difference between ALS and the rational-Krylov-type methods is slightly smaller compared to the previous example. Among the rational-Krylov-type methods, A, B, and F seem to have similar performance, whereas C is clearly worse. Method E has the same problem as in the previous example, and method D also ends up with an insufficient subspace.

Burgers equation
In the third example we consider an approximation to the one-dimensional viscous Burgers equation

∂w/∂t (x,t) + w(x,t) ∂w/∂x (x,t) = ν ∂²w/∂x² (x,t),   (x,t) ∈ (0, 1) × (0, T),
w(x, 0) = w_0(x),   x ∈ (0, 1),

where ν = 0.1 is constant. The solution w(x,t) can be interpreted as a velocity, and the equation occurs in, e.g., models of gas or traffic flow. A control input u(t) is applied at the left boundary. The problem is discretized in space using centered finite differences with 71 uniformly distributed grid points. Using a second-order Carleman bilinearization, we obtain a bilinear control system approximation with A, N ∈ R^{5112×5112} and B ∈ R^{5112}, see [10] for further details. Note that in this case A is an asymptotically stable but non-symmetric matrix. In order to ensure positive semidefiniteness of the Gramian, we scale the control matrices N and B by a factor α = 0.25. We emphasize that the control law is scaled proportionally with 1/α such that the dynamics remain unchanged; for further discussion see [9, Section 3.4].
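The scaling argument can be sanity-checked numerically: replacing N and B by αN and αB while scaling the control by 1/α leaves the state trajectory of the bilinear system unchanged. A small explicit-Euler sketch with arbitrary test matrices (all sizes and coefficients are assumptions for illustration, not the Burgers data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # assumed stable test matrix
N = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
alpha = 0.25


def simulate(A, N, B, u, x0, dt=1e-3, steps=500):
    # explicit Euler for the bilinear system x' = A x + u(t) N x + B u(t)
    x = x0.copy()
    for j in range(steps):
        uj = u(j * dt)
        x = x + dt * (A @ x + uj * (N @ x) + (B * uj).ravel())
    return x


x0 = rng.standard_normal(n)
u = lambda t: np.sin(t)
x_orig = simulate(A, N, B, u, x0)
# scaled coefficients alpha*N, alpha*B with control u/alpha: same trajectory
x_scaled = simulate(A, alpha * N, alpha * B, lambda t: u(t) / alpha, x0)
assert np.allclose(x_orig, x_scaled)
```

The scaling thus only redistributes magnitude between the coefficient matrices and the control signal, while it does change the Gramian, which is why it can be used to enforce its positive semidefiniteness.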
The comparison is similar to the previous examples, and Figures 6.6 and 6.7 are analogous to Figures 6.1 and 6.2, respectively. The problem is difficult in the sense that the singular values of the solution decay slowly. Moreover, the direct method stagnates at a relative residual norm of 5 · 10^{−6}. This is, however, less visible compared to the previous example, since the convergence is in general slower.
For this example the performance of BIRKA is not significantly better than that of the other methods, which is not surprising since the theoretical justifications for the method are not valid. ALS shows faster convergence in relative residual norm but slower convergence in relative error, as well as indications of stagnation. However, the theoretical justifications for ALS are also not valid for this problem, and the result is in line with the results in [24]. Regarding the rational-Krylov-type methods, it seems as if methods B and D have good performance, whereas method E does not work for this problem either.

Conclusions and outlooks
In this paper we have studied iterative methods for computing approximations to the solution of the generalized Lyapunov equation. The methods have been studied from an energy-norm perspective as well as a model-reduction perspective, with connections made in between. Common to all methods studied is that they use the current residual in the iterations. Computing the residual can in itself be costly for a truly large-scale problem, although approximate dominant directions can be computed in an iterative fashion, resulting in an inner-outer-type iteration. However, more research is needed to understand the consequences of such inexact subspaces. Moreover, we have proposed a rational-Krylov-type subspace for solving the generalized Lyapunov equation. Simulations indicate competitive performance, at least in the non-symmetric case where the optimality statements for the other methods are no longer valid. Simulations show that methods A and F perform well, or decently well, in all three examples. The ALS iteration, as well as results from the literature, cf. [2], seems to indicate that subspaces of the type (A − σI − µN)^{-1}B could be useful, although we have not been able to exploit this efficiently. More research is needed to understand the theoretical aspects of the suggested, and related, spaces.