Residual-based iterations for the generalized Lyapunov equation

This paper treats iterative solution methods for the generalized Lyapunov equation. Specifically, a residual-based generalized rational-Krylov-type subspace is proposed. Furthermore, the existing theoretical justification for the alternating linear scheme (ALS) is extended from the stable Lyapunov equation to the stable generalized Lyapunov equation. Further insights are gained by connecting the energy-norm minimization in ALS to the theory of H2-optimality of an associated bilinear control system. Moreover, it is shown that the ALS-based iteration can be understood as iteratively constructing rank-1 model reduction subspaces for bilinear control systems associated with the residual. Similarly to the ALS-based iteration, the fixed-point iteration can also be seen as a residual-based method minimizing an upper bound of the associated energy norm.


Introduction
This paper concerns iterative ways to compute approximate solutions to what has become known as the generalized Lyapunov equation,

L(X) + Π(X) + BB^T = 0,   (1)

where X ∈ R^{n×n} is unknown, B ∈ R^{n×r} is given, and the operators L, Π : R^{n×n} → R^{n×n} are defined as

L(X) := AX + XA^T,   (2)
Π(X) := Σ_{i=1}^m N_i X N_i^T,   (3)

with A, N_i ∈ R^{n×n} for i = 1, ..., m given. The operator L is commonly known as the Lyapunov operator, and Π is sometimes called a correction. We further assume that A is stable, i.e., A has all its eigenvalues in the open left half-plane, which implies that L is invertible [23, Theorem 4.4.6]. Moreover, we assume that ρ(L^{-1}Π) < 1, where ρ denotes the (operator) spectral radius. The assumption on the spectral radius implies that (1) has a unique solution [24, Theorem 2.1]. Furthermore, the definition of Π in (3) implies that it is non-negative, in the sense that Π(X) is positive semidefinite when X is positive semidefinite. Thus one can assert that, for all positive definite right-hand sides, the unique solution X is indeed positive definite [9, Theorem 3.9], [12, Theorem 4.1]. Under these assumptions we prove that the alternating linear scheme (ALS) presented by Kressner and Sirković in [25] computes search directions which at each step fulfill a first-order necessary condition for H2-optimality. Moreover, we show an equivalence between the bilinear iterative rational Krylov algorithm (BIRKA) [5,19] and the ALS iteration for the generalized Lyapunov equation. The established equivalence implies that the ALS iteration for the generalized Lyapunov equation can be understood as iteratively computing model reduction spaces of dimension 1 for a sequence of bilinear control systems associated with the residual of the generalized Lyapunov equation (Sect. 3). We also present a residual-based generalized rational-Krylov-type subspace adapted for solving the generalized Lyapunov equation (Sect. 5). A further result regards the fixed-point iteration, a residual-based iteration which we show minimizes an upper bound of the energy norm (Sect. 4).
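To make the setting concrete, the operators L and Π can be represented as n² × n² matrices via the Kronecker identity vec(AXB) = (B^T ⊗ A)vec(X). The following sketch (all matrices are made-up toy data, not an example from the paper) checks the two standing assumptions, stability of A and ρ(L^{-1}Π) < 1, and solves (1) by a single dense linear solve:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2
# Toy data: A symmetric negative definite, N small enough that rho < 1
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = (A + A.T) / 2
N = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, r))

I = np.eye(n)
# vec(A X + X A^T) = (I (x) A + A (x) I) vec(X); vec(N X N^T) = (N (x) N) vec(X)
L_mat = np.kron(I, A) + np.kron(A, I)
Pi_mat = np.kron(N, N)

# standing assumptions: A stable and rho(L^{-1} Pi) < 1
assert np.max(np.linalg.eigvalsh(A)) < 0
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(L_mat, Pi_mat))))
assert rho < 1

# solve L(X) + Pi(X) + B B^T = 0 by one dense n^2 x n^2 solve
x = np.linalg.solve(L_mat + Pi_mat, -(B @ B.T).flatten(order="F"))
X = x.reshape((n, n), order="F")
assert np.allclose(X, X.T)                     # solution is symmetric
assert np.min(np.linalg.eigvalsh(X)) > -1e-10  # and positive semidefinite
```

This dense Kronecker representation is of course only feasible for tiny n; the methods discussed in the paper avoid it, but it is convenient for verifying the assumptions and statements on small examples.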
The standard Lyapunov equation, AX + XA^T + BB^T = 0, has been well studied for a long time, and considerable research effort has been, and still is, put into finding efficient algorithms for computing the solution and approximations thereof. For large and sparse problems it is typical to look for low-rank approximations, since algorithms can be adapted to exploit the low-rank format, reducing computational effort and storage requirements. One such algorithm is the Riemannian optimization method from [40], which computes a low-rank approximation by minimizing an associated cost function over the manifold of rank-k matrices, where k ≪ n. The Lyapunov equation has a close connection to control theory. Hence methods such as the iterative rational Krylov algorithm (IRKA) [18,21], which computes subspaces for locally H2-optimal reduced order linear systems, provide good approximation spaces for low-rank approximations. Related research is presented in a series of papers [13-15], where Druskin and co-authors develop a strategy to choose shifts for the rational Krylov subspace for efficient subspace reduction when solving PDEs [13,14], as well as for model reduction of linear single-input-single-output (SISO) systems and solutions to Lyapunov equations [15]. Instead of computing full spaces iteratively with a method such as IRKA, the idea is to construct an infinite sequence of shifts with asymptotically optimal convergence speed [13]. Then the subspace can be dynamically extended as needed, until the required precision is achieved. The idea has also been further developed by using tangential directions, proving especially useful for situations where the right-hand side is not of particularly low rank [16], e.g., multiple-input-multiple-output (MIMO) systems. For a more complete overview of results and techniques for Lyapunov equations see the review article [38].
The generalized Lyapunov equation has received increased attention over the last decade. Results on low-rank approximability have emerged [6,24]. More precisely, similarly to the standard Lyapunov equation, one can in certain cases when the right-hand side B is of low rank, r ≪ n, expect the singular values of the solution to decay rapidly even for the generalized Lyapunov equation. The result [6, Theorem 1] is applicable when the matrices N_i for i = 1, ..., m have low rank, and the result [24, Theorem 2] when ρ(L^{-1}Π) < 1. Examples of algorithms exploiting low-rank structure are a bilinear ADI method [6], specializations of Krylov methods for matrix equations [24], greedy low-rank methods [25], and exploitations of the fixed-point iteration [37]. Through the connection with bilinear control systems there is an extension of IRKA, known as the bilinear iterative rational Krylov algorithm (BIRKA) [5,19]. There are also methods based on Lyapunov- and ADI-preconditioned GMRES and BiCGStab [12], and, more generally, for problems with tensor product structure [26]. In the context of stochastic steady-state diffusion equations, rational Krylov subspace methods for generalized Sylvester equations have also been analyzed in [32]. The suggested search space is based on a union of rational Krylov subspaces, as well as combinations of rational functions, generated by the coefficient matrices defining the generalized Sylvester operator. We also mention that for the case when the correction Π has low operator-rank, there is a specialization of the Sherman-Morrison-Woodbury formula to this linear matrix equation; see [33] or [12, Section 3]. The result has been exploited in works such as [6,28,34]. Recently, the generalized Lyapunov equation has also been considered on an infinite-dimensional Hilbert space, see [4].
In particular, the authors show ([4, Proposition 1.1]) that the Gramians solving the generalized operator Lyapunov equations can be approximated by truncated Gramians associated with a sequence of standard operator Lyapunov equations.

Generalized matrix equations and approximations
We recall some basic definitions and results that will be used later in the paper. In general we will think of X̂_k ∈ R^{n×n} as an approximation of the solution to (1), where k is typically an iteration count. Connected with an approximation X̂_k is the corresponding error

X^e_k := X − X̂_k,   (4)

where X is the exact solution to (1), and the residual,

R_k := L(X̂_k) + Π(X̂_k) + BB^T.   (5)

The goal is to find an X̂_k such that ‖X^e_k‖ is small for some norm. Since X^e_k is usually not available in practice, one instead aims at a small residual norm ‖R_k‖. To discuss projection methods and make the results precise, we make the following (standard) definition.
Definition 1 (The Galerkin approximation) Let K_k ⊆ R^n be an n_k ≤ n dimensional subspace for k = 0, 1, ..., and let V_k ∈ R^{n×n_k} be a matrix containing an orthogonal basis of K_k. We call X̂_k = V_k Y_k V_k^T the Galerkin approximation to (1) if the corresponding residual satisfies

V_k^T R_k V_k = 0.   (6)

For the generalized Lyapunov equation there are certain sufficient conditions for the Galerkin approximation to exist and be unique, e.g., the criteria in [9, Theorem 3.9], [12, Theorem 4.1] or [24, Proposition 3.2]. Related to the Galerkin approximation there is also the (standard) definition of the Galerkin residual.

Definition 2 (The Galerkin residual)
We call R k from (5) the Galerkin residual ifX k is the Galerkin approximation.
The condition (6) is known as both the projected problem and the Galerkin condition, and it states that V_k^T R_k V_k = 0 for the Galerkin residual. Some of the results and arguments presented below are valid for a (generic) residual, and others, more specialized, hold only for the Galerkin residual. In all cases it will be clear from the context, and the Galerkin residual will always be referenced as such.
The following fundamental result from linear algebra will be important for us. The specialization to the Lyapunov equation was presented already by Smith in [39]. For generalized matrix equations cf. [12, Section 4.2] and [25, Algorithm 2]; an analogue for the algebraic Riccati equation can be found in [29].

Proposition 3 (Residual equation) Consider Eq. (1). Let X̂_k be an approximation of the solution, R_k be the residual (5), and X^e_k be the error (4). Then

L(X^e_k) + Π(X^e_k) + R_k = 0.

One strategy for computing updates to the current approximation is to compute approximations of the error. Proposition 3 allows such iterations by connecting the error with the known, or computable, quantities L, Π and R_k. The idea is well established in the literature and is, e.g., analogous to the defect correction method [29] and the RADI method [8] for the algebraic Riccati equation, as well as the iterative improvement [20, Section 3.5.3] for a general linear system. For future reference we also need the following basic definition.

Definition 4 (Symmetric generalized Lyapunov equation) The generalized Lyapunov equation

AX + XA + Σ_{i=1}^m N_i X N_i + BB^T = 0   (7)

is called symmetric if A = A^T and N_i = N_i^T for i = 1, ..., m.
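The residual equation in Proposition 3 is easy to verify numerically: the error solves the same generalized Lyapunov equation with BB^T replaced by the residual. A sketch on made-up toy data (m = 1, dense Kronecker representation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
N = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))

K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n)) + np.kron(N, N)  # L + Pi
vec = lambda M: M.flatten(order="F")
unvec = lambda v: v.reshape((n, n), order="F")

X = unvec(np.linalg.solve(K, -vec(B @ B.T)))  # exact solution of (1)
Xk = rng.standard_normal((n, n))
Xk = Xk @ Xk.T                                 # some (arbitrary) approximation
Rk = unvec(K @ vec(Xk) + vec(B @ B.T))         # residual (5)

# residual equation: the error solves L(X) + Pi(X) + R_k = 0
Xe = unvec(np.linalg.solve(K, -vec(Rk)))
assert np.allclose(Xe, X - Xk)
```

This is the mechanism behind the residual-based updates discussed later: one step of correcting X̂_k with the (approximate) solution of the residual equation reproduces the error.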

Bilinear systems
We recall some control theoretic concepts for bilinear control systems of the form

Σ : ẋ(t) = Ax(t) + Σ_{i=1}^m N_i x(t)w_i(t) + Bu(t),   y(t) = Cx(t),   (8)

with A, N_i ∈ R^{n×n}, B ∈ R^{n×r} and C ∈ R^{p×n}, and control inputs u(t) ∈ R^r and w(t) ∈ R^m.

Remark 5
Note that the bilinear system (8) differs from the notation frequently used in the literature, e.g., [1,2,5,9,12,19,41]. The formulation (8) is convenient since it allows for m ≠ r. However, the system Σ can be put into the usual form by considering the input vector [w(t)^T, u(t)^T]^T, adding m zero-columns at the beginning of B, i.e., [0, B], and appending the matrices N_{m+1} = 0, ..., N_{m+r} = 0. The system Σ can also be compared to systems from applications, e.g., [30, Equation (2)].
As in [2], for a MIMO bilinear system (8), we define the controllability and observability Gramians as follows.
Definition 6 ([2], Bilinear Gramians) Consider the bilinear system (8) and let A be stable. Moreover, let P_1(t_1) := e^{At_1}B and P_j(t_1, ..., t_j) := e^{At_j}[N_1 P_{j−1}, ..., N_m P_{j−1}] for j = 2, 3, ..., and let Q_j be defined analogously with A, N_i and B replaced by A^T, N_i^T and C^T. We define the controllability and observability Gramian, respectively, as

P := Σ_{j=1}^∞ ∫_0^∞ ··· ∫_0^∞ P_j P_j^T dt_1 ··· dt_j,   Q := Σ_{j=1}^∞ ∫_0^∞ ··· ∫_0^∞ Q_j Q_j^T dt_1 ··· dt_j.

It is possible that the generalized Gramians from Definition 6 do not exist; sufficient conditions are given in, e.g., [41, Theorem 2]. However, if the Gramians exist we also know that they satisfy the following matrix equations:

AP + PA^T + Σ_{i=1}^m N_i P N_i^T + BB^T = 0,   A^T Q + QA + Σ_{i=1}^m N_i^T Q N_i + C^T C = 0.

In relation to the generalized controllability and observability Gramians, one might also define a generalized cross Gramian similar to the SISO case discussed in [36]. Consider the symmetric generalized Lyapunov equation (7), an approximation X̂_k with related error X^e_k, and residual R_k. One can easily verify that for the auxiliary system built from a singular value decomposition of R_k, the associated cross Gramian coincides with the error X^e_k. In the special case where R_k = R_k^T ⪰ 0, it is easy to show the following result. For what follows, we recall the definition of the H2-norm for bilinear systems that was introduced by Zhang and Lam in [41].

Definition 8 ([41], Bilinear H2-norm) Consider the bilinear system Σ from (8). We define the H2-norm of Σ as

‖Σ‖_{H2} := ( Σ_{j=1}^∞ ∫_0^∞ ··· ∫_0^∞ ‖g_j(s_1, ..., s_j)‖² ds_1 ··· ds_j )^{1/2},

where g_j(s_1, ..., s_j) := Ce^{As_j} N_1 e^{As_{j−1}} N_2 ··· e^{As_1} b_j.
It has been shown [41, Theorem 6] that if the Gramians from Definition 6 exist, then

‖Σ‖²_{H2} = trace(CPC^T) = trace(B^T QB).
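Assuming the Gramians exist and solve the two matrix equations stated above, the trace formula can be checked on a small toy system (m = 1; all matrices below are made-up data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, p = 5, 2, 2
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
N = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, r))
C = rng.standard_normal((p, n))

I = np.eye(n)
vec = lambda M: M.flatten(order="F")
unvec = lambda v: v.reshape((n, n), order="F")

# controllability Gramian: A P + P A^T + N P N^T + B B^T = 0
Kc = np.kron(I, A) + np.kron(A, I) + np.kron(N, N)
P = unvec(np.linalg.solve(Kc, -vec(B @ B.T)))

# observability Gramian: A^T Q + Q A + N^T Q N + C^T C = 0
Ko = np.kron(I, A.T) + np.kron(A.T, I) + np.kron(N.T, N.T)
Q = unvec(np.linalg.solve(Ko, -vec(C.T @ C)))

# squared H2-norm via either Gramian
h2_c = np.trace(C @ P @ C.T)
h2_o = np.trace(B.T @ Q @ B)
assert np.isclose(h2_c, h2_o)
```

The agreement of the two traces reflects that the Kronecker matrix of the observability equation is the transpose of that of the controllability equation.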

ALS and H2-optimal model reduction for bilinear systems
In this section, we discuss a low-rank approximation method proposed by Kressner and Sirković in [25]. We show that several results can be generalized from the case of the standard Lyapunov equation to the more general form (1). Moreover, we show that in the symmetric case the method allows for an interpretation in terms of H2-optimal model reduction for bilinear control systems. With this in mind, we assume that we have a symmetric generalized Lyapunov equation (7). If additionally A ≺ 0 and ρ(L^{-1}Π) < 1, then the operator M(X) := −L(X) − Π(X) is positive definite and allows us to define a weighted inner product via

⟨X, Y⟩_M := trace(X^T M(Y)),

with a corresponding induced M-norm, also known as energy norm, ‖X‖_M := ⟨X, X⟩_M^{1/2}.

ALS for the generalized Lyapunov equation
In [25] it is suggested to construct iterative approximations X̂_k by rank-1 updates that are locally optimal with respect to the M-norm. To be more precise, assume that X is a solution to the symmetric generalized Lyapunov equation (7), i.e., AX + XA + Σ_{i=1}^m N_i X N_i + BB^T = 0. Given an approximation X̂_k, we consider the minimization problem

min_{v,w ∈ R^n} ‖X − (X̂_k + vw^T)‖²_M.   (9)

Since the minimization involves the constant term ‖X − X̂_k‖²_M, it suffices to focus on

J(v, w) := ‖vw^T‖²_M − 2v^T R_k w,   (10)

where R_k is the current residual, i.e., (5). Locally optimal vectors v_k and w_k are then (approximately) determined via an alternating linear scheme (ALS). The main step is to fix one of the two vectors, e.g., v, and then minimize the strictly convex objective function to obtain an update for w. A pseudocode is given in Algorithm 1. In view of Proposition 3 the ALS-based approach for computing new subspace extensions can be seen as searching for an approximation to X^e_k of the form X^e_k ≈ v_k w_k^T. This is to say that the error is approximated by a rank-1 matrix, and at convergence this would result in the new residual, R_{k+1}, being left-orthogonal to v_k and right-orthogonal to w_k. In the symmetric case, local minimizers of (10) are necessarily symmetric positive semidefinite. This yields the following extension of [25, Lemma 2.3].

Lemma 9
Consider the symmetric generalized Lyapunov equation (7) and assume that A ≺ 0, ρ(L^{-1}Π) < 1, and R_k = R_k^T ⪰ 0. Let J be as in (10). Then every local minimizer v_* w_*^T of J is symmetric positive semidefinite.

Proof The proof naturally follows along the lines of [25, Lemma 2.3], and hence without loss of generality we assume that v_* ≠ 0, w_* ≠ 0, and ‖v_*‖ = ‖w_*‖. Thus v_* w_*^T is positive semidefinite if and only if v_* = w_*. The proof is by contradiction, and we assume that v_* ≠ w_*. Simplifying the left-hand side and the right-hand side of the resulting inequality, and collecting the terms involving the L-operator on both sides, the inequality reduces, after collecting the terms involving the Π-operator to the left and the residual to the right, to an inequality between these remaining terms. The argument is now concluded by showing that this inequality cannot hold: we can without loss of generality consider m = 1, i.e., only one N-matrix, since the argument can be applied to all terms in the sum independently, and a direct estimate then shows the desired inequality and thus concludes the proof.
Algorithm 1 and the argument in Lemma 9 are based on a residual. However, if X̂_k = 0, then R_k = BB^T, and hence the result is directly applicable to any symmetric generalized Lyapunov equation. The focus on the residual in the previous results is natural since it leads to the following extension of [25, Theorem 2.4] to the case of the symmetric generalized Lyapunov equation.

Theorem 10
Consider the symmetric generalized Lyapunov equation (7) with the additional assumptions that A ≺ 0 and ρ(L^{-1}Π) < 1. Moreover, consider the sequence of approximations constructed as

X̂_0 = 0,   X̂_{k+1} = X̂_k + v_{k+1} v_{k+1}^T,   (11)

where v_{k+1} is a locally optimal vector computed with ALS (Algorithm 1). Then R_k = R_k^T ⪰ 0 for all k = 0, 1, 2, ....
Proof We show the assertion by induction. It clearly holds that R_0 = R_0^T ⪰ 0. Now assume that this is the case for some k. From Lemma 9 the local minimizers of (10) are symmetric, and hence X̂_{k+1} is well defined by (11). Moreover, since X̂_{k+1} and the operators in (1) are symmetric it follows that R_{k+1} is symmetric. Thus what is left to show is that R_{k+1} ⪰ 0, which is true if and only if y^T R_{k+1} y ≥ 0 for all y ∈ R^n. Hence take an arbitrary y ∈ R^n and consider y^T R_{k+1} y. We derive properties similar to [25, equations (12)-(14)]. Let J_w denote the objective (10) with the second argument fixed to w = v_{k+1}. Note that the gradient ∇_v J_w of J_w with respect to v is given by

∇_v J_w = −2((w^T w)Av + (w^T Aw)v + Σ_{i=1}^m (w^T N_i w)N_i v + R_k w).

Due to the optimality of v_{k+1} with respect to J_{v_{k+1}}, first-order optimality conditions then imply that

(v_{k+1}^T v_{k+1})Av_{k+1} + (v_{k+1}^T Av_{k+1})v_{k+1} + Σ_{i=1}^m (v_{k+1}^T N_i v_{k+1})N_i v_{k+1} + R_k v_{k+1} = 0.   (12)

Multiplying this equality by v_{k+1}^T from the left implies that

2(v_{k+1}^T v_{k+1})(v_{k+1}^T Av_{k+1}) + Σ_{i=1}^m (v_{k+1}^T N_i v_{k+1})² + v_{k+1}^T R_k v_{k+1} = 0.

Based on (12) and its transpose, and by exploiting the symmetry of the involved matrices, we can write the residual R_{k+1} in a form from which y^T R_{k+1} y ≥ 0 follows.
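For fixed v, the objective J(v, ·) is a strictly convex quadratic in w, so the inner ALS step reduces to one shifted linear solve. The sketch below is our own minimal reimplementation on toy data (m = 1, X̂_0 = 0 so the residual is BB^T; the coefficient matrix is derived from the M-inner product and is not copied verbatim from [25]). At a fixed point of the alternation the updated residual is left-orthogonal to the computed direction, as described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n)); A = (A + A.T) / 2
N = 0.1 * rng.standard_normal((n, n)); N = (N + N.T) / 2
B = rng.standard_normal((n, 1))
R = B @ B.T                          # residual for the initial guess X_0 = 0

def als_step(v, R):
    # Fixing v, grad_w J(v, w) = 0 gives M(v) w = R v, where M(v) collects
    # the M-norm terms for m = 1 with A and N symmetric (our derivation)
    Mv = -(v @ A @ v) * np.eye(n) - (v @ v) * A - (v @ N @ v) * N
    return np.linalg.solve(Mv, R @ v)

v = B.ravel() / np.linalg.norm(B)    # deterministic initial guess
for _ in range(50):                  # alternate; v = w by symmetry (Lemma 9)
    v_new = als_step(v, R)
    v_new /= np.linalg.norm(v_new)
    if np.linalg.norm(v_new - v) < 1e-13:
        break
    v = v_new

# with w computed from the final v, the updated residual
# R + L(v w^T) + Pi(v w^T) is left-orthogonal to v
w = als_step(v, R)
vwT = np.outer(v, w)
R_new = R + A @ vwT + vwT @ A + N @ vwT @ N
assert np.linalg.norm(v @ R_new) <= 1e-8 * np.linalg.norm(R_new)
```

The orthogonality holds exactly whenever w is the minimizer for the current v, which is the rank-1 analogue of the Galerkin condition (6).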

H2-optimal model reduction for symmetric state-space systems
For the standard Lyapunov equation it has been shown, in [7], that minimization of the energy norm induced by the Lyapunov operator (see [40]) is related to H2-optimal model reduction for linear control systems. We show that a similar conclusion can be drawn for the minimization of the cost functional (10) and H2-optimal model reduction for symmetric bilinear control systems. In this regard, let us briefly summarize the most important concepts from bilinear model reduction. Given a bilinear system Σ as in (8) with dim(Σ) = n, the goal of model reduction is to construct a surrogate model Σ̂ of the form

Σ̂ : d/dt x̂(t) = Âx̂(t) + Σ_{i=1}^m N̂_i x̂(t)w_i(t) + B̂u(t),   ŷ(t) = Ĉx̂(t),   (14)

with Â, N̂_i ∈ R^{k×k}, B̂ ∈ R^{k×r}, Ĉ ∈ R^{r×k} and control inputs u(t) ∈ R^r and w(t) ∈ R^m. In particular, the reduced system should satisfy k ≪ n and ŷ(t) ≈ y(t) in some norm. In [5,19] the authors have suggested an algorithm, BIRKA, that iteratively tries to compute a reduced model satisfying first-order necessary conditions for H2-optimality, for the bilinear H2-norm as defined in Definition 8. A corresponding pseudocode is given in Algorithm 2.
Algorithm 2: BIRKA [5, Algorithm 2] and [19, Algorithm 5]
input: System A, N_1, ..., N_m ∈ R^{n×n}, B ∈ R^{n×r}, C ∈ R^{r×n}, tol; initial guess Ṽ, W̃ ∈ R^{n×k}
output: Approximation spaces Ṽ_opt, W̃_opt ∈ R^{n×k} satisfying necessary conditions for H2-optimality
while change in eigenvalues of Â larger than tol do ...

To establish the connection we introduce the following generalizations of the operator M:

M̄(X) := −(AX + XÂ) − Σ_{i=1}^m N_i X N̂_i,   M̂(X) := −(ÂX + XÂ) − Σ_{i=1}^m N̂_i X N̂_i,

where Â = V^T AV, N̂_i = V^T N_i V for i = 1, ..., m, and V ∈ R^{n×k} is orthogonal. Our first result is concerned with the invertibility of the operators M̄ and M̂, respectively.
Proof Note that σ(M̄) and σ(M̂) are determined by the eigenvalues of the corresponding Kronecker-structured matrices, and similarly we obtain σ(M) by computing the eigenvalues of the matrix

M = −(I ⊗ A + A ⊗ I) − Σ_{i=1}^m N_i ⊗ N_i.

Since A and the N_i are assumed to be symmetric, we conclude that M = M^T ≻ 0. Let us then define the orthogonal matrix V := V ⊗ I. It follows that the matrix representation of M̄ equals V^T M V and, consequently, it is symmetric positive definite. A similar argument with V := V ⊗ V can be applied to show the second assertion.
Given a reduced bilinear system, we naturally obtain an approximate solution to the generalized Lyapunov equation. Moreover, the error with respect to the M -inner product is given by the H 2 -norms of the original and reduced system, respectively.

Proposition 13
Let Σ denote a bilinear system (8) and let A = A^T ≺ 0, N_i = N_i^T for i = 1, ..., m, and B = C^T. Assume that ρ(L^{-1}Π) < 1. Given an orthogonal V ∈ R^{n×k}, k < n, define Σ̂ as the reduced bilinear system (14) with Â = V^T AV, N̂_i = V^T N_i V, B̂ = V^T B and Ĉ = B̂^T, and let X and X̂ denote the controllability Gramians of Σ and Σ̂. Then

‖X − V X̂ V^T‖²_M = ‖Σ‖²_{H2} − ‖Σ̂‖²_{H2}.

Proof By assumption it holds that M and M̂ are invertible and the controllability Gramians X and X̂ exist. We observe that

‖Σ‖²_{H2} = trace(B^T XB) = vec(BB^T)^T M^{−1} vec(BB^T).   (15)

Moreover, for the reduced system we obtain

‖Σ̂‖²_{H2} = trace(B̂^T X̂ B̂) = vec(B̂B̂^T)^T M̂^{−1} vec(B̂B̂^T).   (16)

The assertion follows by expanding ‖X − V X̂ V^T‖²_M and inserting (15) and (16).

Extending the results from [7], we obtain a lower bound for the previous terms by the H2-norm of the error system Σ − Σ̂.

Proposition 14
Let Σ denote a bilinear system (8) and let Σ̂ be a reduced system as in Proposition 13. Then

‖Σ‖²_{H2} − ‖Σ̂‖²_{H2} ≥ ‖Σ − Σ̂‖²_{H2}.

Proof The proof follows by arguments similar to those used in [7, Lemma 3.1]. By definition of the H2-norm for bilinear systems, ‖Σ − Σ̂‖²_{H2} can be evaluated through the controllability Gramian X^e of the error system. Analyzing the block structure of X^e, adding and subtracting ‖Σ̂‖²_{H2} = trace(B̂^T X̂ B̂), we find an equivalent expression which, combined with (15) and (16), yields a quadratic form in b = vec(BB^T) and b̂ = vec(B̂B̂^T). As in [7, Lemma 3.1], it follows that the previous expression contains the Schur complement of M^{−1} in a 2 × 2 block matrix S built from V^T M V, V^T, V and M^{−1}, where V := V ⊗ V, which can be shown to be positive semidefinite. We omit the details and refer to [7].
Assume now that Σ̂ is locally H2-optimal. From [41], we have the following first-order necessary optimality conditions, where Y, X̂ are as before and Z, Ẑ satisfy the corresponding (dual) generalized Lyapunov equations. This shows the second assertion.
As a consequence of Propositions 13 and 14, we obtain the following result.

Theorem 15
Let Σ denote a bilinear system (8) and let A = A^T ≺ 0, N_i = N_i^T for i = 1, ..., m, and B = C^T. Assume that ρ(L^{-1}Π) < 1. Given an orthogonal V ∈ R^{n×k}, k < n, define Σ̂ as the reduced bilinear system (14). If Σ̂ is locally H2-optimal, then V X̂ V^T is locally optimal with respect to the M-norm.

Equivalence of ALS and rank-1 BIRKA
So far we have shown that a subspace producing a locally H2-optimal model reduction is also a subspace for which the Galerkin approximation is locally optimal in the M-norm. In this part we establish, algorithmically, an equivalence between BIRKA and ALS. More precisely, for the symmetric case the equivalence is between BIRKA applied with a target model reduction subspace of dimension 1 for (8), and ALS applied to (1). The proof is based on the following lemmas. When speaking about redundant steps and operations we mean that the entities assigned in that step are exactly equal to another, existing, entity. In such a situation the algorithm can be rewritten, by simply changing the notation, in a way that skips the redundant step and still produces the same result.

Lemma 17 Consider a symmetric generalized Lyapunov equation (7) and let v, w ∈ R^n be two given vectors. Let v_birka, w_birka ∈ R^n be the approximations obtained by applying BIRKA (Algorithm 2) to (1) with C = B^T and initial guesses v and w. If v = w, then v_birka = w_birka.

Proof The proof is by induction, and it suffices to show that if Ṽ = W̃ at the beginning of a loop, the same holds at the end of the loop. Thus assume Ṽ = W̃. Then Â = W̃^T AṼ, N̂_i = W̃^T N_i Ṽ for i = 1, ..., m, and Ĉ = CṼ = B^T W̃ = B̂^T. By Lemma 16 we do not need to consider Steps 2-3. We can now conclude that Step 4 and Step 5 are equal, and thus at the end of the iteration we still have Ṽ = W̃.

Lemma 18 Consider a symmetric generalized Lyapunov equation (7) and let v, w ∈ R^n be two given vectors. Let v_als, w_als ∈ R^n be the approximations obtained by applying the ALS algorithm (Algorithm 1) to (1) with initial guesses v and w. If v = w, then v_als = w_als.

Proof Similar to the proof of Lemma 17, it is enough to show that if v = w at the beginning of a loop then it also holds at the end of the loop. Hence we assume that v = w. Then Â_1 = Â_2 follows by direct calculations. Moreover, by assumption R_k = R_k^T. Thus Step 3 and Step 6 are equal, and hence at the end of the iteration we still have that v = w.

Theorem 19 Consider a symmetric generalized Lyapunov equation (7) and let v ∈ R^n be a given vector. Let v_birka ∈ R^n be the approximation obtained by applying BIRKA (Algorithm 2) to (1) with C = B^T and initial guess v. Moreover, let v_als ∈ R^n be the approximation obtained by applying the ALS algorithm (Algorithm 1) to (1) with initial guess v. Then v_birka = v_als.

Proof First, Lemma 17 and Lemma 18 make it reasonable to assess the algorithms with only a single initial guess as well as a single output. Moreover, Step 5 in BIRKA as well as Steps 2-4 in ALS are redundant. Furthermore, it follows from Lemma 16 that in this situation Steps 2, 3, and 6 of BIRKA are also redundant. Hence we need to compare the procedure consisting of Steps 1 and 4 from BIRKA with the procedure consisting of Steps 1, 5, and 6 from ALS. It can be observed that the computations are equivalent, and thus the asserted equality holds if the two methods stop after an equal number of iterations. We hence consider the stopping criteria and note that they are the same.

Corollary 20 Theorem 10 is applicable with ALS changed to BIRKA, using subspaces of dimension 1.

Remark 21
Note that ALS can be generalized such that the optimization computes rank-ℓ corrections, see [25, Remark 2.2]. With similar arguments as above, one can show that for symmetric systems this can equivalently be achieved by BIRKA. From a theoretical point of view, this will yield more accurate approximations. However, the computational complexity increases quickly, since each ALS or BIRKA step then requires solving a generalized Sylvester equation of dimension n × ℓ.

Fixed-point iteration and approximate M-norm minimization
In the previous section we showed that the ALS-based iteration (11) locally minimizes the error in the M-norm with rank-1 updates. In contrast, we here show that the fixed-point iteration minimizes an upper bound of the M-norm error, but with no rank constraint on the minimizer.
Recall the fixed-point iteration for the generalized Lyapunov equation (1),

X̂_{k+1} = −L^{−1}(Π(X̂_k) + BB^T),   (17)

with X̂_0 = 0. Under the assumption that ρ(L^{−1}Π) < 1 the iteration is a convergent splitting and has been presented in, e.g., [12, Equation (12)], [41, Equation (12)], and [37, Equation (4)]. The fixed-point iteration is a residual-based iteration since (17) is known to be equivalent to

X̂_{k+1} = X̂_k − L^{−1}(R_k),   (18)

with X̂_0 = 0. To relate the fixed-point iteration to the M-norm minimization problem we consider the problem

min_{Δ = Δ^T ⪰ 0} ‖X − (X̂_k + Δ)‖²_M.

The minimization is restricted to symmetric positive semidefinite matrices since we know that the solution satisfies X = X^T ⪰ 0. Hence it is desired that if X̂_k = X̂_k^T ⪰ 0, then the new iterate X̂_{k+1} = X̂_k + Δ also fulfills X̂_{k+1} = X̂_{k+1}^T ⪰ 0; specifically then X̂_{k+1} ⪰ X̂_k. Proposition 3 gives us the solution in just one step. However, the computation is as difficult as the original problem, and hence the goal is to minimize an upper bound of the error. As before we disregard the constant term ‖X − X̂_k‖²_M and consider

‖Δ‖²_M − 2 trace(ΔR_k) = −trace(ΔL(Δ)) − trace(ΔΠ(Δ)) − 2 trace(ΔR_k) ≤ −trace(ΔL(Δ)) − 2 trace(ΔR_k),

where the inequality is a consequence of the linearity of the trace and the positive semidefiniteness of Δ^T and Π(Δ); hence the trace of their product is non-negative [31]. The last expression is minimized by Δ = −L^{−1}(R_k), if R_k is symmetric and positive semidefinite. The latter is asserted in the following theorem.

Theorem 22 Consider the symmetric generalized Lyapunov equation (7) with the additional assumptions that A ≺ 0 and ρ(L^{−1}Π) < 1. Moreover, consider the sequence of approximations constructed by (18), where R_k is the residual associated with X̂_k. Then X̂_k = X̂_k^T ⪰ 0 and R_k = R_k^T ⪰ 0 for all k.

Proof The proof is by induction and similar to that of Theorem 10. It holds that X̂_0 = X̂_0^T ⪰ 0 and R_0 = R_0^T ⪰ 0. Now assume that this is the case for some k. Then Δ = −L^{−1}(R_k) is symmetric and positive semidefinite, and hence X̂_{k+1} is symmetric and positive semidefinite. Moreover, since X̂_{k+1} and the operators in (1) are symmetric it follows that R_{k+1} is symmetric. Thus what is left to show is R_{k+1} ⪰ 0, which is true if and only if y^T R_{k+1} y ≥ 0 for all y ∈ R^n. Hence take an arbitrary y ∈ R^n and consider

y^T R_{k+1} y = y^T (R_k + L(Δ) + Π(Δ)) y = y^T Π(Δ) y ≥ 0,

where we used that L(Δ) = −R_k. The last inequality holds since Δ is symmetric and positive semidefinite and Π is a symmetric operator.
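Each step of (18) only requires one standard Lyapunov solve, for which an off-the-shelf routine such as `scipy.linalg.solve_continuous_lyapunov` (which solves AX + XA^H = Q) can be used. A minimal sketch on made-up data (m = 1):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n = 6
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n)); A = (A + A.T) / 2
N = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 2))

def residual(X):
    return A @ X + X @ A.T + N @ X @ N.T + B @ B.T

X = np.zeros((n, n))
r0 = np.linalg.norm(residual(X))
for _ in range(25):
    R = residual(X)
    # one step of (18): solve the standard Lyapunov equation L(D) = R_k,
    # i.e. A D + D A^T = R_k, and update X_{k+1} = X_k - D
    D = solve_continuous_lyapunov(A, R)
    X = X - D

assert np.linalg.norm(residual(X)) < 1e-8 * r0
assert np.min(np.linalg.eigvalsh((X + X.T) / 2)) > -1e-10  # iterates stay PSD
```

Since the new residual equals Π applied to the update, the convergence factor is roughly ρ(L^{−1}Π), which is small here because N is small.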

Remark 24
One could consider creating a subspace iteration from (18), by computing a few singular vectors of L −1 (R k ) and adding these to the basis. The method seems to have nice convergence properties per iteration in the symmetric case, but not in the nonsymmetric case. However, the (naïve) computations are prohibitively expensive. See [37] for a computationally more efficient way of exploiting the fixed-point iteration.

A residual-based rational Krylov generalization
A viable technique for designing iterative methods for the generalized Lyapunov equation seems to be working with the residual; see the discussion in connection with Proposition 3, and in Sects. 3 and 4. In [25, Section 4] it is suggested that so-called preconditioned residuals can be used to expand the search space. It is further suggested that one such preconditioner could be a one-step ADI preconditioner for a suitable choice of the shift. We present a method along those lines, and show that it can be seen as a generalization of the rational Krylov subspace method.

Suggested search space
For the generalized Lyapunov equation (1), we suggest the following search space:

K_k := Span{B, (A − σ_1 I)^{−1}u_0, ..., (A − σ_k I)^{−1}u_{k−1}},   (19)

where u_{k−1} is the most dominant left singular vector of the Galerkin residual R_{k−1} of K_{k−1}, and {σ_ℓ}_{ℓ=1}^k is a sequence of shifts that needs to be chosen. In analogy with the discussion in [15], we suggest that the shifts are chosen according to the largest approximation error along the current direction. More precisely,

σ_k := argmax_{σ ∈ [σ_min, σ_max]} ‖(A − σI)^{−1}u_{k−1} − V_{k−1}(Â_{k−1} − σI)^{−1}V_{k−1}^T u_{k−1}‖,   (20)

where V_{k−1} is a matrix with orthogonal columns containing a basis of K_{k−1}, the matrix Â_{k−1} = V_{k−1}^T AV_{k−1}, and [σ_min, σ_max] is a search interval. Typically, for a stable matrix A, we let σ_min be the negative real part of the eigenvalue of A with largest real part (closest to 0). Correspondingly, we let σ_max be the negative real part of the eigenvalue of A with smallest real part. Equations (19) and (20) can be straightforwardly incorporated in a Galerkin method for the generalized Lyapunov equation; the pseudocode is presented in Algorithm 3.
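A minimal Galerkin loop in the spirit of Algorithm 3 (which is not reproduced here) can be sketched as follows. All matrices are toy data (m = 1), and the shift selection is a coarse grid search over the mirrored spectral interval, an assumed stand-in for the exact maximization in (20):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
# toy symmetric problem: spread-out stable spectrum, weak bilinear coupling
A = -np.diag(np.linspace(0.5, 50.0, n))
N = 0.05 * rng.standard_normal((n, n)); N = (N + N.T) / 2
B = rng.standard_normal((n, 1))
I = np.eye(n)

def galerkin(V):
    # Galerkin approximation (Definition 1) via the projected small problem
    Ah, Nh, Bh = V.T @ A @ V, V.T @ N @ V, V.T @ B
    k = V.shape[1]
    K = np.kron(np.eye(k), Ah) + np.kron(Ah, np.eye(k)) + np.kron(Nh, Nh)
    y = np.linalg.solve(K, -(Bh @ Bh.T).flatten(order="F"))
    return V @ y.reshape((k, k), order="F") @ V.T

def residual(X):
    return A @ X + X @ A.T + N @ X @ N.T + B @ B.T

# search interval from the mirrored spectrum of A
evals = np.linalg.eigvalsh(A)
sigmas = np.geomspace(-evals.max(), -evals.min(), 25)

V = B / np.linalg.norm(B)
res_norms = [np.linalg.norm(residual(galerkin(V)))]
for _ in range(8):
    R = residual(galerkin(V))
    u = np.linalg.svd(R)[0][:, 0]   # dominant left singular vector of R_k
    def err(s):
        # discrepancy between the resolvent action and its Galerkin
        # approximation along u (assumed shift criterion)
        full = np.linalg.solve(A - s * I, u)
        proj = V @ np.linalg.solve(V.T @ A @ V - s * np.eye(V.shape[1]), V.T @ u)
        return np.linalg.norm(full - proj), full
    best = max((err(s) for s in sigmas), key=lambda t: t[0])[1]
    w = best - V @ (V.T @ best)     # orthogonalize against the current basis
    V = np.hstack([V, (w / np.linalg.norm(w)).reshape(-1, 1)])
    res_norms.append(np.linalg.norm(residual(galerkin(V))))

assert res_norms[-1] < 1e-2 * res_norms[0]
```

The dense resolvent solves and the explicit residual are for illustration only; in a large-scale implementation they would be replaced by sparse solves and the low-rank techniques of Remark 25.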

Remark 25
In practice the computation of the left singular vector can typically be done approximately in an iterative fashion. This would also remove the need to compute the approximate solution X̂_k in Step 5 and the residual in Step 6 explicitly, since the matrix-vector product can be implemented as

R_k v = A(X̂_k v) + X̂_k(A^T v) + Σ_{i=1}^m N_i(X̂_k(N_i^T v)) + B(B^T v).

However, such computations may introduce inexactness, which can present a difficulty in a subspace method.
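With a low-rank factorization X̂_k = ZZ^T, this product can be formed without ever assembling R_k; a sketch (Z and v are toy data, m = 1):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, r = 200, 5, 2
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
N = 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, r))
Z = rng.standard_normal((n, k))   # X_k = Z Z^T, low rank
v = rng.standard_normal(n)

def residual_matvec(v):
    # R_k v = A Z (Z^T v) + Z (Z^T (A^T v)) + N Z (Z^T (N^T v)) + B (B^T v):
    # only matrix-vector products and tall-skinny multiplications
    return (A @ (Z @ (Z.T @ v)) + Z @ (Z.T @ (A.T @ v))
            + N @ (Z @ (Z.T @ (N.T @ v))) + B @ (B.T @ v))

# check against the explicitly assembled residual
X = Z @ Z.T
R = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
assert np.allclose(residual_matvec(v), R @ v)
```

The cost per product is O(nk) plus the cost of the sparse products with A and N_i, which is what makes an iterative singular vector computation feasible.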

Remark 26
Heuristically, the dynamic shift search in Step 9 can be changed to an analogue of [15, (2.4) and (2.2)]. We suggest choosing the shift via the corresponding maximization over ∂S, (21), where S approximates the mirrored spectrum of A, ∂S is the boundary of S, and λ_j are the Ritz values of Â_{k−1}. Typically S is approximated at each step using the convex hull of the Ritz values of Â_{k−1}. This choice has been observed to be efficient in experiments, since the maximization in (21) is computationally faster compared to (20). See Sect. 6 for a practical comparison of convergence properties.

Remark 27
Steps 8-9 in Algorithm 3 can be changed to a tangential-direction approach according to [16]. One practical way, although a heuristic, is to do the shift search according to either (20) or (21), and then compute the principal direction(s) through a singular value decomposition, according to [16, Section 3]. It has been observed in experiments that such an approach tends to speed up the convergence in terms of computation time, since the computation of the residual is costly.

Remark 28
It is (sometimes) desirable to allow for complex conjugate shifts σ_k and σ̄_k, although, for reasons of computation and model interpretation, one wants to keep the basis real. This goal is achievable using the same idea as in [35]. More precisely, one can utilize the relation

Span{(A − σ_k I)^{−1}u_{k−1}, (A − σ̄_k I)^{−1}u_{k−1}} = Span{Re((A − σ_k I)^{−1}u_{k−1}), Im((A − σ_k I)^{−1}u_{k−1})},

although it requires the two shifts to be used together with the vector u_{k−1}.

Analogies to the linear case
To give further insight into the suggested subspace in (19), we draw parallels with the (standard) rational Krylov subspace for the (standard) Lyapunov equation

L(X) + BB^T = 0,   (22)

where L is defined by (2) and B ∈ R^{n×r}. The idea is to show that the suggested space (19) reduces to something well known in this case. As a technical note we observe that Definitions 1 and 2 are analogous for the (standard) Lyapunov equation (22), but with Π = 0. The reasoning in this section can be compared to that of [3, Section 2]. To prove the main result of this section we need the following lemma.

Lemma 29
Let A ∈ R^{n×n} and let σ_a ∈ R be any scalar such that (A − σ_a I) is nonsingular. Moreover, let V ∈ R^{n×k}, k ≤ n, be orthogonal, i.e., V^T V = I, and let R ∈ R^{n×n} be such that Range((A − σ_a I)^{−1}R) ⊆ Span(V). Then

(A − σ_a I)^{−1}R = V(V^T AV − σ_a I)^{−1}V^T R.

Proof We introduce the notation S := (A − σ_a I) and Ŝ := (V^T AV − σ_a I). To prove the statement we consider the right-hand side of the asserted equality,

V Ŝ^{−1} V^T R = V Ŝ^{−1} V^T S S^{−1} R = V Ŝ^{−1} V^T S V V^T S^{−1} R,

where the second equality follows from the assumption Range(S^{−1}R) ⊆ Span(V). By observing that Ŝ^{−1}V^T SV = I the expression can be further simplified as

V Ŝ^{−1} V^T S V V^T S^{−1} R = V V^T S^{−1} R = S^{−1} R,

where, again, the second equality follows from Range(S^{−1}R) ⊆ Span(V).
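The projection identity behind Lemma 29, namely (A − σ_a I)^{−1}R = V(V^T AV − σ_a I)^{−1}V^T R whenever Range((A − σ_a I)^{−1}R) ⊆ Span(V), can be checked numerically on random data by constructing V as a basis of exactly that range:

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 8, 3
A = rng.standard_normal((n, n))
sigma = 0.3
S = A - sigma * np.eye(n)
# R is rank-k, so Range(S^{-1} R) lies in the k-dimensional range of S^{-1} G1;
# V is chosen as an orthogonal basis of precisely that space
G1, G2 = rng.standard_normal((n, k)), rng.standard_normal((k, n))
R = G1 @ G2
V, _ = np.linalg.qr(np.linalg.solve(S, G1))
Sh = V.T @ A @ V - sigma * np.eye(k)
assert np.allclose(np.linalg.solve(S, R), V @ np.linalg.solve(Sh, V.T @ R))
```

Note that the identity fails for a generic V; the range inclusion is essential, which is why it appears as an explicit assumption in the second claim of Theorem 30.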

Theorem 30
Let A ∈ R n×n , B ∈ R n×r , and let {σ } k+1 =1 be a sequence of shifts such that A − σ I is nonsingular for = 1, . . . , k + 1. Define the space , and K k+1 analogously. Let V k be an orthogonal basis of K k , V k+1 an orthogonal basis of K k+1 , and let v k+1 ∈ R n×r be such that V k+1 = V k , v k+1 . 1 Moreover, let R k ∈ R n×n be the Galerkin residual with respect to (22). Then each column of ( Proof We introduce the notation S k+1 := (A−σ k+1 I ) andŜ k+1 := (V T AV −σ k+1 I ).
From existing results on rational Krylov subspaces, see, e.g., [27, Proposition 2.2], there exists an α ∈ R^{r×n} such that the residual can be expressed in the basis, and hence (A − σ_{k+1} I)^{-1} R_k = V_{k+1} β for a suitable β ∈ R^{(k+1)r×n}. This shows the first claim.
To prove the second claim we assume that Range((A − σ_{k+1} I)^{-1} R_k) ⊆ Span(V_{k+1}). Under this assumption we can use Lemma 29 together with the fact that R_k = R_k^T, and the claim follows since R_k is the Galerkin residual and thus V_k^T R_k V_k = 0.
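The defining property of the Galerkin residual used above, V_k^T R_k V_k = 0, is easy to check numerically. The following sketch (assuming SciPy; the subspace here is an arbitrary toy Krylov space, not the one from (19)) forms the projected standard Lyapunov equation on a given subspace and verifies the orthogonality of the resulting residual:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def galerkin_residual(A, b, V):
    """Galerkin approximation of A X + X A^T + b b^T = 0 on Range(V)
    (V orthonormal): solve the small projected Lyapunov equation, lift
    back, and return the residual R, which satisfies V^T R V = 0."""
    Ah, bh = V.T @ A @ V, V.T @ b
    Y = solve_continuous_lyapunov(Ah, -np.outer(bh, bh))  # Ah Y + Y Ah^T = -bh bh^T
    X = V @ Y @ V.T
    return A @ X + X @ A.T + np.outer(b, b)

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) - 8.0 * np.eye(8)   # shifted to make A stable (with high probability)
b = rng.standard_normal(8)
# Orthonormal basis of a small toy Krylov space {b, A b, A^2 b}
V, _ = np.linalg.qr(np.column_stack([b, A @ b, A @ (A @ b)]))
R = galerkin_residual(A, b, V)
assert np.linalg.norm(V.T @ R @ V) < 1e-8 * np.linalg.norm(R)
```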

Remark 31
The interpretation of Theorem 30 is easiest in the case when B = b ∈ R^n. Consider the two spaces K_k := Span{b, (A − σ_1 I)^{-1} b, . . . , (A − σ_k I)^{-1} b} and K̃_k := Span{b, (A − σ_1 I)^{-1} R_0, . . . , (A − σ_k I)^{-1} R_{k−1}}, where R_j is the Galerkin residual in the space K̃_j, with j = 0, 1, . . . , k − 1. Then, for all relevant cases, i.e., R_j ≠ 0 for j = −1, 0, . . . , k − 1, we have that K_k = K̃_k. In this sense the suggested subspace in (19) can be seen as a natural generalization of a rational Krylov subspace for linear matrix equations.

Numerical examples
We now numerically compare the different methods discussed in the paper. All algorithms are treated in a subspace fashion, and we compare the practically achieved approximation quality as a function of the subspace dimension. Since the paper focuses on the symmetric problem we use Galerkin projection in the tested methods, except for BIRKA. However, to (numerically) investigate the domain of application, we test the methods on problems with varying degrees of symmetry.
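The Galerkin projection step shared by the tested subspace methods can be sketched as follows: project the coefficient matrices onto the current subspace, solve the small projected generalized Lyapunov equation (here naively, by Kronecker vectorization), and lift the solution back. This is our own minimal illustration, not the paper's implementation:

```python
import numpy as np

def galerkin_approximation(A, N, B, V):
    """Galerkin approximation on Range(V) (V orthonormal) of
    A X + X A^T + N X N^T + B B^T = 0 (single bilinear term for brevity).
    The small projected equation is solved by vectorization, which is
    feasible since dim(V) is small."""
    k = V.shape[1]
    Ah, Nh, Bh = V.T @ A @ V, V.T @ N @ V, V.T @ B
    Ik = np.eye(k)
    op = np.kron(Ik, Ah) + np.kron(Ah, Ik) + np.kron(Nh, Nh)
    y = np.linalg.solve(op, -(Bh @ Bh.T).reshape(-1, order="F"))
    return V @ y.reshape(k, k, order="F") @ V.T

rng = np.random.default_rng(3)
n = 20
A = rng.standard_normal((n, n)) - n * np.eye(n)   # strongly stable toy matrix
N = 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
V, _ = np.linalg.qr(np.column_stack([B, A @ B, A @ (A @ B)]))  # toy subspace
X = galerkin_approximation(A, N, B, V)
R = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
assert np.linalg.norm(V.T @ R @ V) < 1e-8 * np.linalg.norm(R)  # Galerkin orthogonality
```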
For small and moderate-sized problems there are algorithms for computing the full solution, see [24, Algorithm 2], cf. [41, equation (12)]. Although costly, this nevertheless allows for inspection of the relative error, i.e., ‖X̃ − X‖_F / ‖X‖_F, where X̃ denotes the computed approximation. Moreover, it also allows comparison with the (in the Frobenius norm) optimal low-rank approximation based on the SVD.
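For small n, a dense reference solver of this kind can be sketched via Kronecker vectorization of (1), using vec(A X B) = (B^T ⊗ A) vec(X); a single bilinear term N is used for brevity, and the SVD-based optimal low-rank truncation is shown for comparison:

```python
import numpy as np

def solve_generalized_lyapunov_dense(A, N, B):
    """Solve A X + X A^T + N X N^T + B B^T = 0 by vectorization:
    (I (x) A + A (x) I + N (x) N) vec(X) = -vec(B B^T).
    O(n^6) work, so for small n only. (Illustrative sketch.)"""
    n = A.shape[0]
    I = np.eye(n)
    op = np.kron(I, A) + np.kron(A, I) + np.kron(N, N)
    x = np.linalg.solve(op, -(B @ B.T).reshape(-1, order="F"))
    return x.reshape(n, n, order="F")

rng = np.random.default_rng(2)
n = 7
A = rng.standard_normal((n, n)) - n * np.eye(n)   # strongly stable toy matrix
N = 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
X = solve_generalized_lyapunov_dense(A, N, B)
res = A @ X + X @ A.T + N @ X @ N.T + B @ B.T
assert np.linalg.norm(res) < 1e-8 * np.linalg.norm(B @ B.T)

# Best rank-k approximation in the Frobenius norm, via the SVD:
U, s, Vt = np.linalg.svd(X)
k = 3
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(X - X_k) / np.linalg.norm(X)
```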
We summarize some of the implementation details. Specifically, BIRKA is implemented as described in Algorithm 2, with a maximum allowed number of iterations equal to 100. The convergence tolerance is implemented as the relative norm difference of the vectors of sorted eigenvalues and was set to 10^{−3}. Each subspace is computed independently from a random initial guess. We emphasize that the method based on ALS is a subspace method, and not an iteratively updated method as described in (11). Because the structure of the generalized Lyapunov equation implies that the solution is symmetric even if the coefficient matrices are not, we use a symmetric version of ALS even for the non-symmetric examples. More precisely, a symmetrized version of Algorithm 1, cf. Lemma 18, was used as an inner iteration, the resulting vector was used in an outer iteration to expand the search space, and the approximation was found using Galerkin projection. The maximum allowed number of iterations in ALS (the inner iteration) was set to 20, and the tolerance to 10^{−2}. With reference to [25] we note that preconditioned residuals were not used, although they may accelerate convergence. Regarding the rational-Krylov-type methods we compare the following, with short labels for the legends further down:
- A: K_k as in (19), according to Algorithm 3
- B: Algorithm 3, but with tangential directions according to Remark 27, though with shifts according to (20)
- C: Algorithm 3, but with shifts according to (21)
- D: Algorithm 3, but with tangential directions according to Remark 27 and shifts according to (21)
- E: Standard rational Krylov. More precisely, similar to Algorithm 3, but instead of using u_{k−1} we use the right-hand side B in both (19) and (20)
- F: K_k as in (19), but with shifts prescribed beforehand, obtained by recycling the mirrored eigenvalues from a BIRKA of size 10 (convergence tolerance set to 10^{−3}).
Mirrored eigenvalues are potentially complex, with positive real part, and taken in ascending order according to the real parts.
For methods C, D, and F the shifts may be complex-valued, and complex arithmetic is avoided by creating the space in accordance with Remark 28. For methods A-E, the shift-search boundaries were set to σ_max = −1.01 · min_{λ∈σ(A)} λ and σ_min = −0.99 · max_{λ∈σ(A)} λ, so as to slightly enlarge the region. For methods A, B, and E, the shifts are taken as approximations to (20). The approximation is computed by discretizing the interval [σ_min, σ_max] into 30 equidistant points and comparing the values of the target function. Orthogonalization of the basis is implemented using the Matlab built-in QR factorization, keeping vectors only if the corresponding diagonal element of R is large enough. Implementations of the methods A-F are available online. The simulations were done in Matlab R2018a (9.4.0.813654) on a computer with four 1.6 GHz processors and 16 GB of RAM. We test the algorithms on three different problems. All examples are bilinear control systems, and we approximate the associated controllability Gramian, as in (9). All examples have stable Lyapunov operators. The first example is symmetric, the second is non-symmetric but symmetrizable, and the third is non-symmetric.
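The shift selection for methods A, B, and E thus reduces to brute force on a grid. A sketch, where the target function is a placeholder standing in for the criterion of (20), which we do not reproduce here:

```python
import numpy as np

def select_shift_on_grid(target, sigma_min, sigma_max, n_points=30):
    """Evaluate `target` on an equidistant grid over [sigma_min, sigma_max]
    and return the minimizing grid point. (Illustrative sketch; `target`
    is method-specific.)"""
    grid = np.linspace(sigma_min, sigma_max, n_points)
    return grid[np.argmin([target(s) for s in grid])]

# Toy use with a placeholder target function (not the paper's criterion):
best = select_shift_on_grid(lambda s: (s - 3.2) ** 2, 1.0, 5.0)
assert abs(best - 3.2) <= (5.0 - 1.0) / (30 - 1)  # within one grid spacing
```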

Heat equation
The first example is motivated by an optimal control problem for selective cooling of steel profiles, see [17]. In this example the state variable w models the evolution of a temperature and is described by a two-dimensional heat equation, where a control u(t) enters bilinearly from the left boundary through a Robin condition. The control can be interpreted as the spraying intensity of a cooling fluid. The other spatial boundaries satisfy homogeneous Dirichlet conditions, and at t = 0 an initial temperature profile is specified. The equation is discretized in space using centered finite differences. We compare the different methods discussed in the paper, in both relative residual norm and relative error. For readability the plots have been split over several figures: in Fig. 1 we compare across different classes of methods, and in Fig. 2 we compare between different flavors of the rational-Krylov-type methods. It can be observed, see Fig. 1, that for this example BIRKA has extremely good performance, even outperforming the SVD in relative residual norm. Nevertheless, the larger BIRKA subspaces can be rather costly to compute. In comparison, ALS shows good performance relative to the rational-Krylov-type subspaces, and is rather cheap to compute. The different rational-Krylov-type methods are compared in Fig. 2. Since the M-norm is defined for this example, we compare the relative error also in this norm, see Fig. 3. The trend is similar to that in the Frobenius norm, although it can be observed that in general the error is smaller, and BIRKA has the best performance, even compared to the SVD.

1D Fokker-Planck
The second example is from quantum physics, where a one-dimensional Fokker-Planck equation is used to describe the evolution of a probability density function ρ of a particle affected by a potential. Parts of the potential can be manipulated by a so-called optical tweezer, which constitutes the control. For further details of the problem see [22]. More precisely, we consider the case where the potential is V(x, t) = W(x) + α(x)u(t), with the ground (fixed) potential W(x) = (((0.5x^2 − 15)x^2 + 199)x^2 + 28x + 50)/200, and α(x) an approximately linear control shape function; for more details see [11]. In a weighted inner product, the dynamics can be described by self-adjoint operators. However, here we employ an upwinding-type finite difference scheme with 5000 grid points, leading to a non-symmetric system. As has been pointed out in [22], the system matrix A is not asymptotically stable due to a simple zero eigenvalue associated with the stationary probability distribution. Using a projection-based decoupling, it is however possible to work with an asymptotically stable system of dimension n = 4999. As in the first example, the control variable is a scalar and, consequently, we obtain only a single bilinear coupling matrix N_1 = N. Since the system is non-symmetric, the operator M is in general indefinite, and hence we make no comparisons in the M-norm.
The plots in Figs. 4 and 5 are analogous to those in Figs. 1 and 2, respectively. However, for this example the direct solver stagnated at a relative residual of about 10^{−8}, which can be seen in the stagnation of the SVD approximation in the left of Fig. 4. As a result, the comparisons of relative-error performance, in the right of Figs. 4 and 5, show an artificial stagnation: at a certain level the measured error stagnates, since it reflects the discrepancy between the method approximations and the inexact reference solution rather than the true error of the method approximations. Nevertheless, we believe the comparisons to be fair more or less up to the point of stagnation, which is justified by the relative residual plots showing similar behavior. However, the relative residual indicates stagnation around 10^{−8} for the other methods as well, although not quite as clearly as for the SVD.
From Fig. 4 we see that BIRKA performs well for this example. However, the subspaces of dimension 28 and 29 did not converge within 100 iterations, and hence for clarity these are left out of the plots; this illustrates a drawback of the method. The performance difference between ALS and the rational-Krylov-type methods is slightly smaller than in the previous example. Among the rational-Krylov-type methods, A, B, and F seem to have similar performance, whereas C is clearly worse. Method E is competitive for about 10 iterations, after which the convergence is significantly slower. Method D, however, ends up with an insufficient subspace.

Burgers' equation
In the third example we consider an approximation to the one-dimensional viscous Burgers' equation, with constant viscosity ν = 0.1. The spatial boundary conditions are Dirichlet conditions; more specifically, w(1, t) = 0 and w(0, t) = u(t), where u(t) is an applied control input. The solution w(x, t) can be interpreted as a velocity, and the equation occurs in, e.g., the modeling of gas or traffic flow. The problem is discretized in space using centered finite differences with 71 uniformly distributed grid points. Using a second-order Carleman bilinearization, we obtain a bilinear control system approximation with A, N ∈ R^{5112×5112} and B ∈ R^{5112}; see [10] for further details. Note that in this case A is an asymptotically stable but non-symmetric matrix. To ensure positive semidefiniteness of the Gramian, we scale the control matrices N and B with a factor α = 0.25. We emphasize that the control law is scaled proportionally with 1/α so that the dynamics remain unchanged; for further discussion see [9, Section 3.4].
The comparison is similar to the previous examples, and Figs. 6 and 7 are analogous to Figs. 1 and 2, respectively. The problem is difficult in the sense that the singular values of the solution decay slowly. Moreover, the direct method stagnates at a relative residual norm of 5 × 10^{−6}. This is, however, less visible than in the previous example since in general the convergence is slower.

For this example the performance of BIRKA is not significantly better than that of the other methods, which is not surprising since the theoretical justifications for the method are not valid. ALS shows faster convergence in relative residual norm but slower convergence in relative error, as well as indications of stagnation. However, the theoretical justifications for ALS are also not valid for this example, and the result is in line with the results in [25]. Regarding the rational-Krylov-type methods, it seems as if methods B and D have the best performance, whereas method E does not provide a useful subspace for this example.

Execution time experiment
We conclude the numerical examples with a small experiment comparing the execution times of the different methods considered. The problems are the same as above, i.e., the heat equation, the 1D Fokker-Planck equation, and Burgers' equation. For all of these we generate a BIRKA subspace of dimension 30, an ALS subspace of dimension 60, and a subspace of type A of dimension 60. The approximation properties of these spaces are similar for the heat equation, see Fig. 1. The cumulative CPU time as a function of the iteration count, in the respective method, is plotted in Fig. 8. Note that for ALS and method A the iteration count corresponds to increasing the dimension of the subspace by one, since the right-hand side has rank one. For BIRKA, however, the dimension of the subspace is fixed beforehand, and hence the number of iterations is irregular, corresponding to the convergence of the fixed-point problem rather than the size of the subspace. It was, for example, mentioned above that the BIRKA iterations for subspaces of dimension 28 and 29, for the Fokker-Planck equation, did not converge to the specified tolerance within the allowed 100 iterations.

In this situation, and for the chosen parameters, BIRKA is faster for the heat equation and slower for the Fokker-Planck equation. In the case of Burgers' equation it seems as if BIRKA is faster. However, if we take the approximation properties into account we find, by looking at Fig. 6, that a fairer comparison with method A is to consider the latter only up to iteration 30. Moreover, fixing the subspace dimension, rather than the tolerance, is likely advantageous for BIRKA.

Conclusions and outlooks
We have proposed a rational-Krylov-type subspace for solving the generalized Lyapunov equation. Simulations indicate competitive performance, at least in the non-symmetric case, where the optimality statements for the other methods are no longer valid. Simulations show that methods A and F perform well for all three examples. The ALS iteration, as well as results from the literature, cf. [1], seems to indicate that subspaces of the type (A − σ I − μN_i)^{−1}B could be useful, although we have not been able to exploit this efficiently. Another generalization of the rational Krylov subspace, for general linear matrix equations, is presented in [32], where it is suggested to use subspaces of the type (A − σ I)^{−1}v and (N_i − σ I)^{−1}v, with v a vector from the previous space. More research is needed to understand the theoretical aspects of the suggested, and related, spaces.
Common to all methods studied is that they use the current residual in the iterations. Computing the residual can in itself be costly for a truly large-scale problem, although approximate dominant directions can be computed in an iterative fashion, resulting in an inner-outer-type iteration. However, more research is needed to understand the consequences of such inexact subspaces.